Building AI Applications with Open Source Models
Curriculum
Phase 1: Local Environment (Colab Notebooks)
Part 1: Foundations
- Why Take This Course & Overview
- Pre-requisites
- Introduction to LLMs
- Basic Open-Source LLM usage
- Basic Open-Source ASR usage
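As a flavour of what Part 1 covers, here is a minimal sketch of basic open-source LLM and ASR usage with the Hugging Face transformers pipeline API. The model names and the audio file path are placeholders, not necessarily the course's exact choices.

```python
# Minimal preview of basic open-source LLM and ASR usage via the
# Hugging Face `transformers` pipeline API. Model names are examples only.
from transformers import pipeline

# Text generation with a small open-source chat model
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
result = generator("Explain what an LLM is in one sentence.", max_new_tokens=50)
print(result[0]["generated_text"])

# Automatic speech recognition with an open-source Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
print(asr("interview.wav")["text"])  # "interview.wav" is a placeholder audio file
```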
Part 2: Prompting (Coming July)
- Foundations of prompt engineering
- Organising prompts via Jinja templates
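For example, keeping prompt text out of hard-coded Python strings and rendering it with Jinja might look like this (the template and variable names are illustrative, not taken from the course material):

```python
# Minimal sketch: a prompt kept as a Jinja template and rendered with variables.
from jinja2 import Template

prompt_template = Template(
    "You are a helpful assistant.\n"
    "Summarise the following text in {{ num_sentences }} sentences:\n\n"
    "{{ document }}"
)

prompt = prompt_template.render(num_sentences=2, document="llama.cpp is a ...")
print(prompt)
```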
Part 3: Running Open-Source Models with Llama.cpp
- Installation and setup
- Selecting & downloading a model
- Working with llama.cpp and its Python bindings (see the sketch at the end of this part)
- Using Grammars to control output format
- Streaming Responses
- Multi-modal models (CLIP)
- Summary
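The sketch below gives a flavour of Part 3: loading a local GGUF model with the llama-cpp-python bindings and streaming a chat completion. The model path is a placeholder; selecting and downloading a model is covered earlier in the part.

```python
# Minimal sketch: run a local GGUF model with llama-cpp-python and stream output.
from llama_cpp import Llama

# Path is a placeholder; point it at any GGUF file you have downloaded.
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three podcast topic ideas."}],
    max_tokens=128,
    stream=True,  # yield partial chunks instead of waiting for the full answer
)

for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```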
Part 4: Creating Retrieval-Augmented Generation (RAG) Pipelines with Open Source Models
- Introduction to RAG
- Creating a RAG system with LlamaIndex
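A minimal RAG sketch with LlamaIndex might look like the following. The imports follow recent llama-index releases, and the "./docs" directory and query are placeholders. Note that LlamaIndex defaults to OpenAI models; using open-source models instead (the point of this course) means configuring `Settings.llm` and `Settings.embed_model`, which this part walks through.

```python
# Minimal sketch: index local documents and query them with LlamaIndex.
# Imports follow recent llama-index (>= 0.10) package layout; "./docs" is a placeholder.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./docs").load_data()  # ingest local files
index = VectorStoreIndex.from_documents(documents)       # embed and index them

query_engine = index.as_query_engine()
response = query_engine.query("What are the main topics covered in these documents?")
print(response)
```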
Part 5: Evaluating Models & Pipelines
- Evaluation Theory
- Implementing tests with deepeval
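As a preview of the testing workflow, a single deepeval test case might look like this. The metric choice and threshold are illustrative; deepeval's built-in metrics use an LLM judge under the hood, so they need a model configured before running.

```python
# Minimal sketch: one deepeval test case, runnable with pytest or `deepeval test run`.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What is llama.cpp?",
        actual_output="llama.cpp is a C/C++ project for running LLMs locally.",
    )
    # Threshold is illustrative; the metric scores how relevant the answer is to the input.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```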
Project 1: Chatbot Application (FastAPI & Together AI; see the sketch after the project list)
Project 2: Building a Podcast Transcript Summarizer (Offline ETL with llama.cpp)
Project: Multimodal (Coming July)
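To give a sense of Project 1's shape, here is a minimal sketch of a FastAPI endpoint that forwards a chat message to an open-source model hosted on Together AI. The route, request schema, and model id are placeholders rather than the project's actual design.

```python
# Minimal sketch of the Project 1 idea: a FastAPI chat endpoint backed by Together AI.
from fastapi import FastAPI
from pydantic import BaseModel
from together import Together

app = FastAPI()
client = Together()  # expects TOGETHER_API_KEY in the environment

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest) -> dict:
    completion = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # placeholder model id
        messages=[{"role": "user", "content": request.message}],
    )
    return {"reply": completion.choices[0].message.content}
```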
Pre-Requisites
Must Haves:
- Python Knowledge
- Git knowledge
Nice to haves (only required for certain parts of the course; these parts will be explained, but may require further reading if you're unfamiliar with them):
- Knowledge of web development (FastAPI)
- Knowledge of SQL databases
- Knowledge of AWS
Not Covered In This Course
- Pre-training
- Fine-tuning
Both of these activities require access to significant GPU resources, which will be beyond the reach of many students.
- Deployment (I will create a separate course for this; feel free to message me if you have questions)
About Me
Hi, I'm Chris! I'm an experienced software developer who has taught over 30,000 software professionals online. If you've ever Googled something about FastAPI, you've probably ended up on my blog.