The Complete Coursera Learning Path: 10 Specializations, 65+ Courses, 11 Guided Projects
A single, opinionated learning path across the Pragmatic AI Labs and Duke University Coursera catalogs — 10 specializations, 65+ courses, 11 guided projects, and 3 standalone courses. This is the curriculum for an engineer building a career from foundations through production AI.
How To Read This Path
The catalog falls naturally into four tracks: Foundations, Data and Cloud, AI and LLMs, and Platform and Tooling. Follow them in order, or jump between tracks once you have the prerequisites. Each track produces artifacts you can ship to production and show in a portfolio.
Track 1 — Foundations
Start here if you're new to data engineering or need a refresher on the three languages every data engineer writes: Python, Bash, and SQL.
Python, Bash and SQL Essentials for Data Engineering — Duke
- Python and Pandas for Data Engineering
- Linux and Bash for Data Engineering
- Scripting with Python and SQL for Data Engineering
- Web Applications and Command-Line Tools for Data Engineering
Mastering GitHub — Pragmatic AI Labs
- GitHub: From Zero to Pull Request
- GitHub: Codespaces, Actions, and Ecosystem Tools
- GitHub Enterprise Administration
- GitHub: Advanced Prompt Engineering for Code
- GitHub Production Applications
- GitHub: Governing AI-Generated Code
- GitHub: Security, Identity, and Access
- GitHub: Evaluating and Integrating AI Models
- GitHub: AI-Augmented Testing and Refactoring
Track 2 — Data and Cloud
The data engineering and cloud path — from big-data foundations through enterprise lakehouse architectures.
Applied Python Data Engineering — Duke
- Spark, Hadoop, and Snowflake for Data Engineering
- Virtualization, Docker, and Kubernetes for Data Engineering
- Data Visualization with Python
Building Cloud Computing Solutions at Scale — Duke
- Cloud Computing Foundations
- Cloud Virtualization, Containers and APIs
- Cloud Data Engineering
- Cloud Machine Learning Engineering and MLOps
Enterprise AI and Data Engineering with Databricks — Pragmatic AI Labs
- Enterprise AI with Databricks
- Data Engineering with Databricks
- Applied Data Science with Databricks
- Databricks MLOps
- Databricks Solutions Architecture
Track 3 — AI and LLMs
Where the industry is heading — foundation models, LLM operations, and the open-source AI stack.
MLOps | Machine Learning Operations — Duke
- Python Essentials for MLOps
- DevOps, DataOps, MLOps
- MLOps Tools: MLflow and Hugging Face
- MLOps Platforms: Amazon SageMaker and Azure ML
Large Language Model Operations (LLMOps) — Duke
- Introduction to Generative AI
- Operationalizing LLMs on Azure
- Advanced Data Engineering
- GenAI and LLMs on AWS
- Databricks to Local LLMs
- Open Source LLMOps Solutions
Next-Gen AI Development with Hugging Face — Pragmatic AI Labs
- Hugging Face Foundations
- Fine-Tuning with Hugging Face
- Rust-Native AI with Candle and Burn
- Deploying Hugging Face Models
- Hugging Face Agents and Tools
Track 4 — Platform and Tooling
Production AI engineering — building the systems others build on.
AI Tooling — Pragmatic AI Labs
Foundation Models and Bedrock:
- Generative AI and Foundation Models on AWS
- Intelligent Applications with Amazon Bedrock
- Prompt Architecture and NLP on Amazon Bedrock
- AI Orchestration: From Local Models to Cloud
Enterprise AI and Security:
- Enterprise AIOps with Amazon Q Business
- AI Security and Governance on AWS
- AI-Powered Analytics and Performance Engineering
- CLI Automation with Amazon Q and CloudShell
Agents, Debugging, and Multi-Modal:
- Deterministic LLM Programming
- Agentic AI: Actor Models and Subagent Architecture
- AI Debugging and Test-Driven Fixes
- Multi-Modal AI
Privacy, Pipelines, and MCP:
- Privacy-Conscious Development with AI Assistants
- AI-Powered Data Pipelines with Deno
- Building Deterministic MCP Agents
- Conversational Bot Architecture with Rust and Deno
Production and Capstone:
- AI Code Review Automation with GitHub Actions
- LLM Security and Vulnerabilities
- Build a Production SaaS Application with AI
- AI Tooling Capstone: Serverless Multi-Model Systems
Rust Programming — Duke
- Rust Fundamentals
- Data Engineering with Rust
- Rust for DevOps
- Python and Rust with Linux Command Line Tools
- Rust for Large Language Model Operations (LLMOps)
Standalone Courses
- Beginning Llamafile for Local Large Language Models
- Foundations of Local Large Language Models
- End to End LLMs with Azure
Guided Projects (1–2 hour portfolio builds)
- Object-Oriented Programming in Python
- MySQL for Data Engineering
- Python Generators
- Build a Static Website with Rust and Zola
- Building Rust AWS Lambda Microservices with Cargo Lambda
- Rust Secret Cipher CLI
- Python Decorator Functions
- Understand Big O Notation in Python
- Building a Bash Command-Line Tool
- Rust Axum Greedy Coin Microservice
- Local LLMs with llamafile
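To calibrate the scope of these builds: a guided project like Python Generators centers on a pattern roughly this size. The sketch below is illustrative only, not the project's actual code — the file-chunking helper and sample values are assumptions for the example.

```python
def read_in_chunks(path, chunk_size=1024):
    """Lazily yield fixed-size chunks of a file instead of loading it all at once."""
    with open(path) as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty string means end of file
                return
            yield chunk

# Generator expressions compose without materializing intermediate lists:
squares = (n * n for n in range(10))
evens = (s for s in squares if s % 2 == 0)
print(list(evens))  # [0, 4, 16, 36, 64]
```

A one-to-two-hour guided project wraps a pattern like this in a small CLI or notebook you can link from a portfolio.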
Recommended Sequences
Data engineer path: Track 1 → Track 2 → MLOps spec in Track 3.
AI engineer path: Track 1 → MLOps + LLMOps + Hugging Face in Track 3 → AI Tooling in Track 4.
Platform / staff engineer path: All four tracks; Track 4's AI Tooling capstone is the portfolio artifact.
Rust-curious: Finish Track 1, then jump into the Rust Programming spec in Track 4 and Rust-Native AI within the Hugging Face spec.