How to Build RAG Applications with LangChain

Course ID: 42790

Date: 03/12/2025

Time: Daily seminar, 9:00-16:30

Location: John Bryce ECO Tower, Homa Umigdal 29, Tel-Aviv

Overview

A fast-paced, hands-on day that takes you from prompt-only demos to production-ready Retrieval-Augmented Generation (RAG).
You’ll learn how to ingest and chunk documents, embed them into a vector store, retrieve the right context, and synthesize accurate answers with LangChain—plus pragmatic tips to reduce hallucinations and measure quality.

Who Should Attend

Software engineers, ML/AI engineers, solution architects, and technical product folks who want to ship reliable LLM features. If you’re integrating internal docs, PDFs, wikis, or support knowledge into an AI assistant or search experience, this workshop is for you.

Prerequisites

Working knowledge of Python.

Course Contents

Part 1
• Intro to LLMs: how and why do they understand us?
• Intro to prompt engineering
• Hands-on: Set up a local dev environment

Part 2
• Intro to LangChain
• LangChain main concepts: prompts, chains, models, and chat
• Hands-on: Build an LLM-powered app (see the sketch below)
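
For orientation, here is a minimal sketch of the kind of app built in this part, composed with LangChain's pipe (LCEL) syntax. The langchain-openai provider package, the gpt-4o-mini model name, and the prompt wording are illustrative assumptions, not necessarily the exact stack used in class.

    # Minimal LangChain app: prompt -> chat model -> string output (illustrative sketch).
    # Assumes `pip install langchain langchain-openai` and an OPENAI_API_KEY in the
    # environment; the model name is an example and may differ from the workshop's.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_openai import ChatOpenAI

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a concise technical assistant."),
        ("human", "{question}"),
    ])
    model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    # LCEL: compose prompt, model, and output parser into one runnable chain.
    chain = prompt | model | StrOutputParser()

    print(chain.invoke({"question": "What is retrieval-augmented generation?"}))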

Part 3
• Understanding RAG: why is it better than legacy search?
• Document parsing
• Building great context
• Chunking best practices (illustrated in the sketch below)
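
As a rough illustration of the parsing and chunking step, the sketch below loads a PDF and splits it with a recursive character splitter. PyPDFLoader, the file path, and the chunk_size/chunk_overlap values are placeholder assumptions; good settings depend on your documents and embedding model.

    # Parse a PDF and split it into overlapping chunks (illustrative sketch).
    # Assumes `pip install langchain-community langchain-text-splitters pypdf`;
    # the file path and chunk sizes are placeholders, not recommended values.
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    docs = PyPDFLoader("report.pdf").load()  # one Document per page

    splitter = RecursiveCharacterTextSplitter(
        chunk_size=800,     # target characters per chunk
        chunk_overlap=100,  # overlap keeps sentences from being cut at chunk boundaries
    )
    chunks = splitter.split_documents(docs)

    print(f"{len(docs)} pages -> {len(chunks)} chunks")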

Part 4
• Hands-on: Build a RAG-powered chatbot (see the sketch below)
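
To give a feel for where the day ends up, here is a compact sketch of the retrieval-and-answer core of such a chatbot (chat history handling omitted). FAISS, OpenAIEmbeddings, the model name, the sample texts, and the prompt wording are illustrative assumptions; the workshop's stack and prompts may differ.

    # Minimal RAG chain: embed texts, retrieve the most relevant ones, and answer
    # from that context only (illustrative sketch; assumes
    # `pip install langchain langchain-community langchain-openai faiss-cpu`).
    from langchain_community.vectorstores import FAISS
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_openai import ChatOpenAI, OpenAIEmbeddings

    # Placeholder corpus; in practice this would be the chunks produced in Part 3.
    texts = [
        "LangChain composes prompts, models, and retrievers into runnable chains.",
        "RAG retrieves relevant chunks and passes them to the model as context.",
        "Grounding answers in retrieved context helps reduce hallucinations.",
    ]
    vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())
    retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

    prompt = ChatPromptTemplate.from_template(
        "Answer using only the context below. If the answer is not there, say so.\n\n"
        "Context:\n{context}\n\nQuestion: {question}"
    )

    def format_docs(docs):
        # Join the retrieved chunks into a single context string for the prompt.
        return "\n\n".join(d.page_content for d in docs)

    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | ChatOpenAI(model="gpt-4o-mini", temperature=0)
        | StrOutputParser()
    )

    print(rag_chain.invoke("How does RAG help reduce hallucinations?"))

The design choice to note: retrieval happens per question, and the prompt constrains the model to the retrieved context, which is the main lever for reducing hallucinations mentioned in the overview.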
