A fast-paced, hands-on day that takes you from prompt-only demos to production-ready Retrieval-Augmented Generation (RAG).
You’ll learn how to ingest and chunk documents, embed them into a vector store, retrieve the right context, and synthesize accurate answers with LangChain—plus pragmatic tips to reduce hallucinations and measure quality.
Software engineers, ML/AI engineers, solution architects, and technical product folks who want to ship reliable LLM features. If you’re integrating internal docs, PDFs, wikis, or support knowledge into an AI assistant or search experience, this workshop is for you.
Part 1
• Intro to LLMs: how and why do they understand us?
• Intro to prompt engineering
• Hands-on: Setup local dev environment
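A flavor of the prompt-engineering techniques covered: few-shot prompting shows the model labeled examples before the real question. This is a minimal, illustrative sketch; the task, examples, and helper name are assumptions, not workshop code.

```python
# Minimal few-shot prompt builder (illustrative; names are assumptions).
# The model sees labeled (input, output) pairs before the real question,
# which steers it toward the desired format and behavior.
def build_few_shot_prompt(examples, question):
    """Assemble a few-shot prompt string from (text, label) pairs."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The prompt ends mid-pattern, inviting the model to complete it.
    lines.append(f"Review: {question}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("Great product, works perfectly!", "positive"),
    ("Broke after one day.", "negative"),
]
print(build_few_shot_prompt(examples, "Exactly as advertised."))
```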
Part 2
• Intro to LangChain
• LangChain main concepts: prompts, chains, models, and chat
• Hands-on: Build an LLM-powered app
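The core LangChain idea covered in this part is composing a prompt, a model, and an output parser into a chain. The sketch below mimics that prompt → model → parser shape in plain Python so it runs without API keys; the stub model and parser are stand-ins, not the real LangChain classes.

```python
# Conceptual sketch of the LangChain-style "prompt -> model -> parser"
# pipeline. Plain Python stand-ins are used so this runs offline;
# the real library provides PromptTemplate, chat models, and parsers
# that compose the same way.
class SimplePromptTemplate:
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

def stub_model(prompt):
    # Stand-in for a real LLM call; wraps the prompt in a marker.
    return f"ANSWER[{prompt}]"

def parse(raw):
    # Stand-in output parser: strips the wrapper the stub model added.
    return raw.removeprefix("ANSWER[").removesuffix("]")

prompt = SimplePromptTemplate("Summarize in one sentence: {text}")

def chain(text):
    # The chain pipes each component's output into the next.
    return parse(stub_model(prompt.format(text=text)))

print(chain("LangChain composes prompts, models, and parsers."))
```

The value of the chain abstraction is that each stage can be swapped (a different model, a JSON parser) without touching the others.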
Part 3
• Understanding RAG: why is it better than legacy search?
• Document parsing
• Building great context
• Chunking best practices
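One baseline the chunking discussion typically starts from is fixed-size chunks with overlap, so context is not lost at chunk boundaries. The sizes below are illustrative; production chunkers usually split on sentence or section boundaries rather than raw characters.

```python
# Fixed-size character chunking with overlap (illustrative baseline).
# Overlap lets a sentence cut at one chunk's edge still appear whole
# in the next chunk, improving retrieval recall.
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into chunk_size-char pieces, each overlapping the
    previous one by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks

doc = "RAG retrieves relevant chunks before generating an answer. " * 20
pieces = chunk_text(doc, chunk_size=120, overlap=30)
print(len(pieces), "chunks; first starts with:", pieces[0][:30])
```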
Part 4
• Hands-on: Build a RAG-powered chatbot
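The end-to-end loop built in this part, embed chunks, retrieve the closest one for a query, then stuff it into the prompt, can be sketched as below. The bag-of-words "embedding" and all names are assumptions for illustration; the actual workshop build uses LangChain with a real embedding model and vector store.

```python
# Toy RAG loop: "embed" with bag-of-words counts, retrieve the most
# similar chunk by cosine similarity, then build a grounded prompt.
# Purely illustrative -- real systems use learned dense embeddings.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
]
question = "How long do refunds take?"
context = retrieve(question, chunks)[0]
# Grounding the answer in retrieved context is what curbs hallucinations.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```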