- Ph.D. in Zoology and Ecology, James Cook University — Cum laude (2024)
- Skills & Tools: Bayesian and frequentist inference, hierarchical modelling, machine learning, predictive modelling, spatial analysis (GIS, remote sensing), automated data pipelines, data cleaning, R programming, reproducible workflows.
- Data Types: Multi-source ecological, biological, climatic, physiological, and biogeochemical datasets (soil, foliage chemistry).
- Applications: Developed novel modelling frameworks to predict species vulnerability to extreme events and identify high-risk habitats.
- M.S. in Biology and Conservation of Biodiversity, Universidad de Salamanca (2016)
- Skills & Tools: GIS, advanced statistics, applied statistical modelling, R programming, spatial analysis, workflow automation.
- Applications: Designed and executed analytical workflows for biodiversity monitoring and conservation planning.
- B.S. in Biology, Universidad de Salamanca (2014)
- Skills & Tools: Mathematics, algebra, biostatistics, physics, introductory statistical programming, ecological modelling.
- Applications: Undergraduate research project integrating environmental and ecological data.
Statistical, Computational & Coding Training
Bayesian and hierarchical modelling
Coding in R
Coding in Python & Machine learning
- The Complete Python Bootcamp: From Zero to Hero in Python
- Completed a comprehensive, hands-on course covering the full Python stack from beginner fundamentals to advanced programming concepts.
- Built a strong professional foundation in syntax, data structures, control flow, functions, and Object-Oriented Programming (OOP) through practical exercises and projects.
- Developed the ability to write clean, efficient, and maintainable code, applying Python across automation, scripting, data handling, and application development.
- Explored Python’s standard library, automation patterns, file processing, and interactive workflows through end-to-end mini-projects.
- Core skills acquired:
- Python essentials: variables, data types, operators, loops, conditionals, and functions.
- Intermediate and advanced concepts: decorators, debugging, error handling, regex, and iterators.
- Functional and Object-Oriented Programming (classes, inheritance, encapsulation).
- Working with key modules: collections, datetime, os, math, random, re, and file I/O.
- Handling files, images, PDFs, CSVs, and automating repetitive workflows.
- Web scraping with BeautifulSoup and Requests.
- Email automation and productivity scripting.
- Building and debugging in both Jupyter notebooks and standalone .py files.
- Creating interactive mini-projects and games (e.g., Blackjack, Tic-Tac-Toe).
- Introductory GUI development and interactive notebook elements.
- Applying Python to real-world scenarios, from automation to small production-ready projects.
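The core topics above (decorators, OOP with inheritance and encapsulation, regex) can be sketched in a few lines of standard-library Python. This is an illustrative sketch only; the class and function names are invented, not taken from the course projects.

```python
import functools
import re

def log_calls(func):
    """Decorator pattern: count how many times the wrapped function is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

class Animal:
    """OOP: base class with an attribute kept 'protected' by convention."""
    def __init__(self, name):
        self._name = name

    def speak(self):
        raise NotImplementedError  # subclasses must override

class Dog(Animal):
    """Inheritance: Dog specialises Animal's behaviour."""
    def speak(self):
        return f"{self._name} says woof"

@log_calls
def find_years(text):
    """Regex: extract four-digit years using a non-capturing group."""
    return re.findall(r"\b(?:19|20)\d{2}\b", text)
```

A quick usage pass: `Dog("Rex").speak()` returns `"Rex says woof"`, and `find_years.calls` tracks invocations via the decorator.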
- Python for Data Science and Machine Learning Bootcamp
- Completed a comprehensive, project-based introduction to data science and machine learning in Python, covering the full modelling workflow from data cleaning to evaluation and interpretation.
- Built practical experience across the modern Python data stack (NumPy, pandas, Matplotlib, Seaborn, Plotly) and implemented supervised, unsupervised, and introductory deep-learning models.
- Core skills acquired:
- Data wrangling and numerical computing with pandas and NumPy.
- Exploratory and statistical visualisation using Matplotlib, Seaborn, and Plotly.
- Machine learning with scikit-learn, including:
- Supervised learning: Linear/Logistic Regression, k-NN, Decision Trees, Random Forests, Support Vector Machines, feed-forward Neural Networks.
- Unsupervised learning: K-Means Clustering, PCA (dimensionality reduction).
- Introductory NLP (text cleaning, tokenisation, vectorisation with CountVectorizer/TF-IDF) and Naive Bayes classification.
- Introductory Recommender Systems (similarity metrics and collaborative filtering).
- Foundations of Deep Learning (activation functions, simple network training with Keras).
- Model evaluation, cross-validation, train/test workflows, bias–variance analysis, and performance optimisation.
- Exposure to big-data concepts with Spark, including handling distributed datasets in Python.
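The supervised-learning workflow above (fit on training data, predict, evaluate on a hold-out set) can be illustrated without scikit-learn by a from-scratch k-NN classifier in standard-library Python. This is a minimal teaching sketch with invented toy data, not code from the course labs.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, point, k=3):
    """Classify `point` by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, point), label) for x, label in zip(train_X, train_y)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

def accuracy(test_X, test_y, train_X, train_y, k=3):
    """Hold-out evaluation: fraction of test points predicted correctly."""
    hits = sum(
        knn_predict(train_X, train_y, x, k) == t for x, t in zip(test_X, test_y)
    )
    return hits / len(test_y)
```

With two well-separated clusters as training data, a point near either cluster is assigned that cluster's label, and `accuracy` reproduces the usual train/test evaluation step.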
SQL & Databases
- The Complete SQL Bootcamp: PostgreSQL & pgAdmin
- Hands-on training in PostgreSQL for data analysis, reporting, and database querying, using practical exercises and real-world datasets.
- Gained working proficiency with SQL fundamentals and intermediate query patterns for analytics workflows.
- Core skills acquired:
- SQL querying with SELECT, WHERE, ORDER BY, logical filtering, and pattern matching.
- Data aggregation with GROUP BY and HAVING.
- Table joins (INNER, LEFT/RIGHT, FULL, CROSS) and multi-table relational operations.
- Database design principles: tables, constraints, data types, and schema structure.
- Practical work using PostgreSQL + pgAdmin, including database setup, table creation, and data loading.
- Integrating SQL with Python as part of data-science pipelines.
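The course works in PostgreSQL, but the query patterns listed above (JOIN, GROUP BY, HAVING, ORDER BY) are standard SQL and can be demonstrated with Python's built-in sqlite3 module, which also illustrates the SQL-from-Python integration. The species-survey tables here are invented for illustration.

```python
import sqlite3

# In-memory database with two related tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sites (site_id INTEGER PRIMARY KEY, habitat TEXT);
    CREATE TABLE surveys (site_id INTEGER, species TEXT, count INTEGER);
    INSERT INTO sites VALUES (1, 'rainforest'), (2, 'savanna');
    INSERT INTO surveys VALUES (1, 'possum', 4), (1, 'possum', 2),
                               (2, 'wallaby', 7), (2, 'possum', 1);
""")

# INNER JOIN + GROUP BY + HAVING: total counts per habitat and species,
# keeping only totals above a threshold, sorted for reporting.
rows = conn.execute("""
    SELECT s.habitat, v.species, SUM(v.count) AS total
    FROM surveys AS v
    INNER JOIN sites AS s ON s.site_id = v.site_id
    GROUP BY s.habitat, v.species
    HAVING SUM(v.count) > 2
    ORDER BY total DESC
""").fetchall()
```

The HAVING clause filters after aggregation, so the savanna possum row (total 1) is dropped while the grouped totals for the other combinations survive.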
- SQL for Data Analysis: Advanced SQL Querying Techniques
- Completed an applied advanced-SQL course focused on professional analytical querying and multi-table relational workflows.
- Core skills acquired:
- Advanced JOIN logic, including multi-table joins, self-joins, cross joins, and unions for complex relational analysis.
- Subqueries and Common Table Expressions (CTEs) (including recursive CTEs) to structure multi-step logic and improve readability of analytical pipelines.
- Window functions (ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, FIRST_VALUE, LAST_VALUE) for ranking, rolling calculations, trend detection, and partition-based summarisation.
- Comprehensive string, numeric, and date functions, including pattern matching, conditional expressions, and reliable handling of NULLs and irregular values.
- Applied analytical patterns: de-duplication workflows, conditional aggregation, segmentation, pivot-style summaries, and multi-stage data transformations.
- Use of views and modular query structures to encapsulate reusable logic and streamline complex analytical workflows.
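The CTE and window-function patterns above can likewise be sketched against sqlite3 (window functions require SQLite ≥ 3.25, bundled with current Python releases). The temperature-by-site data is a hypothetical example, not from the course.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE measurements (site TEXT, year INTEGER, temp REAL);
    INSERT INTO measurements VALUES
        ('A', 2021, 24.0), ('A', 2022, 25.5), ('A', 2023, 26.0),
        ('B', 2021, 18.0), ('B', 2022, 17.5), ('B', 2023, 19.0);
""")

# CTE wrapping two window functions: LAG gives the year-over-year change,
# ROW_NUMBER ranks years within each site by temperature; the outer query
# keeps each site's warmest year.
rows = conn.execute("""
    WITH yearly AS (
        SELECT site, year, temp,
               temp - LAG(temp) OVER (PARTITION BY site ORDER BY year) AS delta,
               ROW_NUMBER() OVER (PARTITION BY site ORDER BY temp DESC) AS rk
        FROM measurements
    )
    SELECT site, year, delta, rk
    FROM yearly
    WHERE rk = 1
    ORDER BY site
""").fetchall()
```

Structuring the multi-step logic as a CTE keeps the window calculations separate from the final filter, which is the readability benefit the course emphasises.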
Artificial Intelligence / Prompt Engineering
- ChatGPT Prompt Engineering for Developers
- Completed a hands-on introduction to prompt engineering for building LLM-powered applications, taught by Isa Fulford (OpenAI) and Andrew Ng.
- Learned best practices for writing clear, reliable prompts and systematically refining them to improve output quality.
- Gained practical experience using the OpenAI API, including developing a custom chatbot in the course labs.
- Practised prompt engineering across core application areas such as summarisation, inference, text transformation, and generative expansion, developing intuition for how LLMs interpret and follow instructions.
- Built a conceptual understanding of LLM behaviour and how prompting can be used to prototype and deploy new capabilities.
- Core skills acquired:
- Writing concise, structured, instruction-focused prompts.
- Iterative refinement techniques for accuracy, robustness, and alignment.
- Designing prompts for summarisation, classification, transformation, and generation tasks.
- Using LLM APIs in practical development workflows.
- Building basic LLM-driven applications with reproducible prompting strategies.
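The course labs use the OpenAI API, but the prompt-structuring practices it teaches (explicit instructions, delimiters around untrusted input, a stated output format) need no API call to illustrate. The helper below is purely illustrative; its name and signature are invented.

```python
def build_prompt(task, text, output_format="a single short sentence"):
    """Assemble an instruction-focused prompt: explicit task, required
    output format, and delimited input text -- core prompting practice."""
    return (
        f"{task}\n"
        f"Respond with {output_format}.\n"
        f"The text is delimited by <text> tags.\n"
        f"<text>{text}</text>"
    )

# Reusing one template across tasks gives reproducible prompting strategies.
summarise = build_prompt(
    "Summarise the text below for a non-specialist reader.",
    "Hierarchical models share information across groups of observations.",
)
```

Delimiting the input keeps instructions and data separate, which makes prompts more robust to text that itself contains instruction-like phrasing.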
- Agentic AI by Andrew Ng
- Completed Andrew Ng’s flagship introduction to Agentic AI: systems that can plan, act, reflect, and self-improve through autonomous workflows.
- Learned how agentic systems extend traditional prompting by integrating tools, memory, iterative reasoning, and multi-step planning for complex tasks.
- Built agents using reflection loops, tool calling, retrieval, search, APIs, and multi-agent collaboration, with hands-on code labs reinforcing each pattern.
- Studied key design patterns for real-world agents, including task decomposition, reflection-driven refinement, evaluation loops, and autonomous decision-making.
- Implemented agent workflows in Python, gaining practical intuition for how modern agent frameworks function under the hood.
- Core skills acquired:
- Understanding the Agentic AI architecture: planners, actors, critics, tools, memory, and feedback loops.
- Designing agents for multi-step reasoning, iterative improvement, and self-correction.
- Integrating tool calling and API actions to interact with external systems and retrieve live information.
- Building retrieval-augmented agents using vector databases and structured search.
- Constructing multi-agent systems, including coordinator–specialist patterns.
- Applying evaluation and debugging techniques such as trace inspection, behavioural tests, and performance refinement.
- Awareness of real-world applications of agentic workflows in data science, automation, research, and production AI systems.
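The actor–critic reflection loop described above can be sketched in a few lines of framework-free Python. The task (produce a sorted list) and all names here are toy inventions chosen only to make the loop observable, not material from the course.

```python
def run_agent(actor, critic, task, max_iters=5):
    """Minimal reflection loop: act, critique, revise until accepted
    or the iteration budget is exhausted."""
    draft = actor(task, feedback=None)
    for _ in range(max_iters):
        ok, feedback = critic(task, draft)
        if ok:
            return draft
        draft = actor(task, feedback=feedback)  # self-correction step
    return draft

# Toy actor: first draft is lazy; with feedback it actually sorts.
def actor(task, feedback):
    data = task["data"]
    return sorted(data) if feedback else list(data)

# Toy critic: checks the draft and explains any failure.
def critic(task, draft):
    if draft == sorted(task["data"]):
        return True, ""
    return False, "output is not sorted ascending"

result = run_agent(actor, critic, {"data": [3, 1, 2]})
```

The first draft fails the critic, the feedback drives one revision, and the loop terminates with an accepted result: the plan–act–reflect pattern in miniature.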
- AI Agents in LangGraph
- Completed a practical introduction to building stateful, production-ready AI agents using the LangGraph framework.
- Learned how to structure agent workflows as deterministic graphs with clear control flow, tool calling, memory management, and error handling.
- Developed agents capable of planning, reflecting, and iterating through multi-step tasks, with transparent behavioural tracing and debugging.
- Built complete agent pipelines in Python, gaining intuition for how LangGraph orchestrates LLMs, tools, and state updates.
- Core skills acquired:
- Designing agent workflows using LangGraph’s nodes, edges, and state objects.
- Integrating tool calling, APIs, and memory into multi-step agent systems.
- Implementing reflection loops, retries, and human-in-the-loop stages.
- Debugging and evaluating agents through event tracing and structured testing.
- Understanding how graph-based architectures support reliable, scalable agentic applications.
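The graph-of-nodes idea behind LangGraph (node functions that update a shared state, with plain and conditional edges choosing the next node) can be sketched in standard-library Python. This is not LangGraph's actual API, only a toy model of the concept, with an invented draft/check/revise workflow.

```python
def run_graph(nodes, edges, state, entry, end="END", max_steps=20):
    """Walk a graph of node functions. Each node returns an updated state
    dict; the edge for that node (a name, or a function of the state for
    conditional routing) selects the next node."""
    current = entry
    for _ in range(max_steps):
        if current == end:
            return state
        state = nodes[current](state)
        edge = edges[current]
        current = edge(state) if callable(edge) else edge
    raise RuntimeError("max_steps exceeded")

# Toy workflow: draft -> check -> (revise -> check ...) -> END
nodes = {
    "draft":  lambda s: {**s, "text": s["topic"].title()},
    "check":  lambda s: {**s, "ok": s["text"].endswith("!")},
    "revise": lambda s: {**s, "text": s["text"] + "!"},
}
edges = {
    "draft": "check",
    "check": lambda s: "END" if s["ok"] else "revise",  # conditional edge
    "revise": "check",
}
final = run_graph(nodes, edges, {"topic": "agentic ai"}, entry="draft")
```

Making control flow an explicit, inspectable graph is what enables the deterministic tracing and debugging the course highlights, in contrast to control flow buried inside a single prompt.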