⚡ RAG Engineering Bootcamp

Zero to Job-Ready RAG
Engineer in 5 Days

For final-year students who want to build real AI apps, ace interviews, and land their first AI/backend job.

5
Days
20+
Concepts
10+
Code Tasks
1
Live Project
50+
Interview Qs
📅 Day 1 of 5 · 4–5 hours

Understanding RAG Fundamentals

Before you write a single line of code, you need a rock-solid mental model. Today you'll understand WHY RAG exists, what problems it solves, and how every piece connects. This day is the most important.
Topics: What is RAG · LLM Limitations · Hallucinations · Context Window · Tokens · Embeddings · Semantic Search · Vector Databases · Chunking

🤔 1.1 — The Problem RAG Solves

To understand RAG, you first need to understand what's broken without it. Let's start with a real-world scenario.

The Scenario

Imagine you're building a chatbot for a law firm. Their lawyers need to query 50,000 legal documents. You try using GPT-4 directly:

  • ❌ GPT-4's training data ends in 2023 — it doesn't know about the firm's internal case files
  • ❌ Even if you stuff documents into the prompt, a 128K context window fits maybe 200 pages — nowhere near 50,000 documents
  • ❌ The model confidently makes up case names and legal precedents that don't exist
  • ❌ Sending 50,000 documents to OpenAI on every query = $500 per query

RAG is the solution to all four problems above.

The 4 Fatal Limitations of Raw LLMs

Limitation           | What It Means                                                              | Impact
Knowledge Cutoff     | The model knows nothing after its training date                            | Outdated answers, missed current events
Hallucination        | The model confidently fabricates facts that sound right                    | Wrong info delivered with total confidence — dangerous in production
Context Window Limit | Max text the model can process at once (GPT-4: ~128K tokens ≈ 200 pages)   | Can't query 50,000 documents at once
No Private Knowledge | The model only knows public internet data from training                    | Can't answer questions about YOUR company's data
🧠 Memory Trick: LLMs suffer from HCKP — Hallucination, Cutoff, Kontext (context) window, Private knowledge gap. "Happy Cats Keep Purring" (but they're lying about what they know)

💡 1.2 — What is RAG?

RAG = Retrieval-Augmented Generation. It's a technique that gives an LLM access to external knowledge by retrieving relevant documents first, then passing them to the LLM as context, then generating an answer grounded in those documents.

The Perfect Analogy

Think of an LLM as a very smart student who has read millions of books but can't bring those books to the exam room. Their memory is imperfect (hallucinations). RAG is like giving that student an open-book exam:

  • 📚 The student doesn't need to memorize everything
  • 🔍 They look up relevant pages before answering
  • ✍️ They write answers grounded in the actual text
  • ✅ Answers are accurate and verifiable

RAG in One Diagram

WITHOUT RAG:
User: "What did our CEO announce in Q4 2024?"
        ↓
   [ LLM ]  ← only knows public training data
        ↓
"I don't have information about your company's announcements."
(or worse, makes something up)

WITH RAG:
User: "What did our CEO announce in Q4 2024?"
        ↓
[ Retriever ]      → searches company documents → finds relevant Q4 report pages
        ↓
[ Prompt Builder ] → "Answer this question using these documents: [docs]"
        ↓
   [ LLM ]         → reads the actual document content
        ↓
"According to the Q4 2024 earnings call, the CEO announced a 20% headcount reduction..."
Grounded in a real document. No hallucination. ✅

Why RAG is Everywhere Now

Company                     | RAG Use Case
Notion AI                   | RAG over your personal workspace notes
GitHub Copilot              | RAG over your codebase for context-aware suggestions
Perplexity AI               | RAG over real-time web search results
ChatGPT (with files)        | RAG over uploaded PDFs and documents
Every enterprise AI chatbot | RAG over internal wikis, Confluence, Slack, policies

🎯 1.3 — Tokens, Embeddings & Semantic Search

These 3 concepts are the vocabulary of RAG. You can't explain RAG in an interview without knowing them cold.

Tokens — What LLMs Actually See

LLMs don't process words — they process tokens. A token is roughly ¾ of a word. "RAG engineering" = 3 tokens. Tokens matter because:

  • Every API call costs money based on token count (input + output)
  • Context window limits are measured in tokens (e.g., GPT-4o: 128K tokens)
  • Your chunk size and retrieval strategy directly affect token usage and cost
# Quick token estimation (rule of thumb):
1 token  ≈  ¾ of a word  ≈  4 characters
1 page   ≈  ~500 words  ≈  ~650 tokens
1 novel  ≈  ~100,000 words  ≈  ~130,000 tokens

# GPT-4o pricing (as of 2024):
Input:  $5.00 per million tokens
Output: $15.00 per million tokens

# RAG cost control: only send RELEVANT chunks (500-1000 tokens)
# instead of the entire document (millions of tokens)
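To check real counts rather than estimates, tiktoken (installed in the Day 1 tasks below) exposes the exact tokenizer; a minimal sketch:

# A quick check of real token counts with tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4-era models
text = "RAG engineering makes LLM answers grounded and verifiable."
tokens = enc.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
print(tokens[:5])  # the integer IDs the model actually sees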

Embeddings — Turning Words into Numbers

An embedding is a list of numbers (a vector) that represents the meaning of text. Similar meanings = similar vectors. This is how RAG "understands" that "automobile" and "car" mean the same thing without matching keywords.

Text → Embedding Model → Vector (list of numbers)

"The cat sat on the mat"            → [0.23, -0.87, 0.41, 0.15, ... 1536 numbers]
"A kitten rested on a rug"          → [0.25, -0.84, 0.38, 0.18, ... 1536 numbers]
"Python is a programming language"  → [-0.72, 0.31, -0.55, 0.88, ... 1536 numbers]

Distance between vectors 1 & 2: 0.05 → VERY SIMILAR (same meaning)
Distance between vectors 1 & 3: 0.89 → VERY DIFFERENT (different topic)

This is how RAG finds relevant documents — not by keyword matching, but by MEANING matching!

Semantic Search vs Keyword Search

Query: "How do I fix my car's engine?"Keyword SearchSemantic Search
Would find:"car engine repair" (exact words)"automobile motor troubleshooting", "vehicle powertrain issues", "fixing ignition problems"
Misses:Any synonym variationAlmost nothing relevant
How it works:String matching (TF-IDF, BM25)Vector similarity (cosine similarity)
Used in RAG:Hybrid search (combined)Primary retrieval method
🧠 Memory Trick: Semantic = SEMantics = meaning. Embeddings capture SEMantics, not spelling. Think "semantic ≠ spelling."

βœ‚οΈ 1.4 β€” Chunking & Vector Databases

Why Chunking Exists

Imagine you have a 500-page PDF manual. You can't embed the whole thing as one vector — that loses all granularity. And you can't send the whole document to an LLM for every query (too expensive, hits the context limit). So you chunk — split the document into smaller overlapping pieces, each of which gets its own embedding.

500-page PDF → Chunking → 2000 chunks of ~250 words each

Page 1 content: "Introduction to AWS... [500 words]"
        ↓ chunk
Chunk 1: "Introduction to AWS... [250 words]"             → embedding → stored in vector DB
Chunk 2: "...to AWS. Key services include... [250 words]" → embedding → stored in vector DB
            ↑ overlap of ~50 words keeps context across boundaries

Query: "What is EC2?"
        ↓
Embed query → find 3 most similar chunks → send to LLM → answer
"EC2 is covered in chunks 45, 46, 823. Only those 3 chunks go to the LLM."

Cost: 3 chunks × 250 words ≈ ~1,000 tokens
(vs 500 pages × 500 tokens = 250,000 tokens)
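A minimal sketch of that splitting step, using word counts for clarity (real pipelines usually measure chunks in tokens):

# A minimal sketch: fixed-size word chunks with overlap.
def chunk_words(text: str, chunk_size: int = 250, overlap: int = 50) -> list[str]:
    words = text.split()
    step = chunk_size - overlap  # advance by less than chunk_size so chunks overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

chunks = chunk_words("word " * 1000)
print(len(chunks), len(chunks[0].split()))  # 5 chunks, 250 words in the first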

Vector Database — The Search Engine for Embeddings

A vector database stores millions of embedding vectors and can find the most similar ones to a query vector in milliseconds. It's the core retrieval infrastructure of every RAG system.

Vector DB | Type                   | Best For                            | When to Use
ChromaDB  | Open source, local     | Learning, prototypes, small apps    | Days 1-3 of your project
FAISS     | Open source, in-memory | High-performance local search       | Research, no persistence needed
Pinecone  | Managed cloud          | Production apps at scale            | When you need managed infra
Weaviate  | Open source / cloud    | Complex queries, GraphQL interface  | Enterprise features needed
Qdrant    | Open source / cloud    | Fast Rust backend, rich filtering   | Performance-critical production
pgvector  | PostgreSQL extension   | Existing Postgres users             | You already use PostgreSQL
💡
For your course projects: use ChromaDB — it's local, zero-config, Pythonic, and perfect for learning. For production apps in your portfolio, mention Pinecone or Qdrant.
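A first taste of how little setup that means: a minimal sketch with ChromaDB's native client (assuming pip install chromadb), which uses a built-in default embedding model so no API key is needed:

# A minimal sketch: ChromaDB's native client with its default embedding function.
import chromadb

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("demo")

collection.add(
    ids=["1", "2"],
    documents=["The cat sat on the mat", "Python is a programming language"],
)
results = collection.query(query_texts=["a kitten on a rug"], n_results=1)
print(results["documents"])  # the cat sentence wins on meaning, not keywords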

Interview Questions — Day 1 Concepts

Q: What is RAG and why is it better than fine-tuning an LLM?
RAG retrieves relevant documents at query time and passes them as context to the LLM. Fine-tuning bakes knowledge into model weights permanently. RAG is better when: (1) knowledge changes frequently, (2) you need source attribution, (3) you have limited compute/budget for fine-tuning, (4) you need to query private proprietary data without sending it to model training. Fine-tuning is better for: teaching a model a new skill or style, not new knowledge.
Q: What is a hallucination in LLMs and how does RAG reduce it?
Hallucination = when an LLM generates factually incorrect information confidently. Causes: the model interpolates from patterns in training data to fill gaps. RAG reduces this by providing actual source documents as context β€” the model now has a reference to ground its answer in. You can also instruct: "Answer only using the provided context. If not in context, say 'I don't know.'" This is called grounding.
Q: What is the difference between an embedding and a token?
A token is the basic unit of text an LLM processes (roughly ¾ of a word). An embedding is a dense numerical vector (list of floats) that represents the semantic meaning of a piece of text. Tokens are used for language modeling (predicting the next token). Embeddings are used for similarity search (finding related text). In RAG, you embed both the documents and the query to find matches.
πŸ› οΈ Day 1 Hands-On Tasks
  1. Setup environment: pip install openai chromadb langchain sentence-transformers tiktoken
  2. Token counting: Use tiktoken to count tokens in a paragraph — see how text becomes numbers
  3. Generate your first embedding: Use sentence-transformers to embed 5 sentences, print the vector shape
  4. Semantic similarity: Calculate cosine similarity between "dog" and "puppy" vs "dog" and "python". Observe the difference.
  5. Manual chunking: Take any 3-page text, split into 250-word chunks with 50-word overlap manually in Python
# Task: Your First Embedding + Similarity Check
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

model = SentenceTransformer('all-MiniLM-L6-v2')  # free, fast, good

sentences = [
    "A dog is playing in the park",
    "A puppy is running outdoors",     # should be similar
    "Python is a programming language", # should be different
    "Machine learning models learn patterns"
]

embeddings = model.encode(sentences)
print(f"Embedding shape: {embeddings.shape}")  # (4, 384)

# Calculate similarity between all pairs
sim_matrix = cosine_similarity(embeddings)
print(f"Dog vs Puppy similarity: {sim_matrix[0][1]:.3f}")   # ~0.85 high!
print(f"Dog vs Python similarity: {sim_matrix[0][2]:.3f}")  # ~0.12 low

# Output: Dog vs Puppy ~0.85  ← semantically related ✅
# Output: Dog vs Python ~0.12 ← semantically unrelated ✅
πŸ“ GitHub Commit Suggestion: feat: day1-foundations - embedding exploration and similarity demo Files: embeddings_demo.py, chunking_demo.py, requirements.txt, README.md

📋 Day 1 Revision Notes

  • RAG = retrieve relevant docs → augment LLM prompt → generate grounded answer
  • 4 LLM limits RAG solves: hallucination, knowledge cutoff, context window, private data
  • Token = basic LLM text unit (~¾ word) | Embedding = semantic meaning as a number vector
  • Semantic search = search by meaning (embeddings) vs keyword search = string matching
  • Chunking = split large docs into small overlapping pieces, each with its own embedding
  • Vector DB = stores embeddings, finds similar ones fast — the core infrastructure of every RAG system
  • ChromaDB for learning → Pinecone/Qdrant for production
🧠
Day 1 Quiz:

1. A user asks your legal chatbot "What cases did we win in Q3?" and the LLM makes up 3 case names. What problem is this and how does RAG fix it?
2. Why can't you just send your entire company knowledge base to GPT-4 with every query?
3. "automobile" and "car" have different spellings but high semantic similarity. Why?
4. Why do we use chunking with overlap instead of just splitting into non-overlapping pieces?
5. Name 2 differences between ChromaDB and Pinecone.
LLMs are impressive but fundamentally limited. RAG makes them actually useful in production. You now understand the "why" that 90% of people skip. Every line of code you write in the next 4 days directly solves the problems you learned today.
📅 Day 2 of 5 · 5–6 hours

Building the Core RAG Pipeline

Today you build a complete RAG system from scratch β€” document loading, chunking, embedding, storing, retrieving, and generating. By end of today you'll have a working Q&A system over your own documents.
Topics: Document Ingestion · Chunking Strategies · Embedding Generation · Vector Storage · Retrieval · Re-ranking · Prompt Augmentation · LLM Generation

πŸ—ΊοΈ 2.1 β€” The Full RAG Pipeline Architecture

The RAG pipeline has two distinct phases. Understanding this split is critical for interviews.

╔══════════════════════════════════════════════════╗
║ PHASE 1: INDEXING (runs once, offline)           ║
╚══════════════════════════════════════════════════╝
[Raw Documents]            PDFs, Word docs, web pages, Notion pages, Confluence, code, etc.
        ↓
[Document Loader]          Parse files → extract clean text
        ↓
[Text Splitter / Chunker]  Split into overlapping chunks of N tokens/characters
        ↓
[Embedding Model]          Each chunk → dense vector (e.g., 1536 dimensions)
        ↓
[Vector Database]          Store (chunk_text, embedding_vector, metadata) for every chunk

╔══════════════════════════════════════════════════╗
║ PHASE 2: RETRIEVAL + GENERATION (every query)    ║
╚══════════════════════════════════════════════════╝
[User Query]        "What is the refund policy?"
        ↓
[Query Embedding]   Embed the query using the SAME embedding model
        ↓
[Vector Search]     Find top-K most similar chunks (K = 3 to 10 typically)
        ↓
[Re-ranker]         (optional but improves quality) Re-score retrieved chunks for relevance to the query
        ↓
[Prompt Builder]    "You are a helpful assistant. Context: [chunk1][chunk2][chunk3]
                     Question: {user_query}. Answer using only the context."
        ↓
[LLM]               GPT-4, Claude, Gemini, Llama, etc.
        ↓
[Final Answer]      Grounded, cited, accurate ✅

📄 2.2 — Document Loading & Chunking Strategies

Document Loaders

Source            | LangChain Loader            | Notes
PDF files         | PyPDFLoader, PDFMinerLoader | PDFMiner handles complex layouts better
Word docs         | Docx2txtLoader              | Preserves paragraph structure
Websites          | WebBaseLoader               | Uses BeautifulSoup, strips HTML
CSV/Excel         | CSVLoader                   | Each row becomes a document
Notion            | NotionDirectoryLoader       | Export Notion as markdown first
Code (Python, JS) | GenericLoader + parser      | Language-aware splitting by functions
YouTube videos    | YoutubeLoader               | Uses transcript API

Chunking Strategies — This is Where Most RAG Systems Fail

Strategy            | How It Works                                             | Best For                             | Downside
Fixed Size          | Split every N characters/tokens, overlap by X            | Quick prototypes, general text       | Can split mid-sentence, mid-thought
Recursive Character | Tries to split at paragraphs → sentences → words → chars | Most text types (LangChain default)  | Chunks may be uneven
Semantic Chunking   | Split when topic/meaning changes (embedding-based)       | Long documents with topic shifts     | Slower, needs an embedding model
Document Structure  | Split by headers, sections, paragraphs                   | Structured docs like manuals, wikis  | Chunks can be too long or too short
Sentence-based      | Split into individual sentences or sentence groups       | FAQ, policy docs, Q&A content        | Context loss across sentences
⚠️
Chunk size is the most important hyperparameter in RAG. Too small → missing context. Too large → noise dilutes relevance. Start with 512 tokens, 50-token overlap. Tune from there based on your evaluation results.
# Complete Document Loading + Chunking Example
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import tiktoken

# Step 1: Load a PDF
loader = PyPDFLoader("company_handbook.pdf")
raw_docs = loader.load()
print(f"Loaded {len(raw_docs)} pages")

# Step 2: Count tokens to understand document size
enc = tiktoken.encoding_for_model("gpt-4")
total_tokens = sum(len(enc.encode(doc.page_content)) for doc in raw_docs)
print(f"Total tokens: {total_tokens} (~${total_tokens/1000 * 0.005:.2f} if sent directly)")

# Step 3: Smart chunking — Recursive splits at natural boundaries
splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,       # tokens per chunk (NOT characters)
    chunk_overlap=50,     # overlap to preserve context at boundaries
    length_function=lambda text: len(enc.encode(text)),
    separators=["\n\n", "\n", ". ", " ", ""]  # try these in order
)

chunks = splitter.split_documents(raw_docs)
print(f"Created {len(chunks)} chunks")
print(f"Sample chunk:\n{chunks[0].page_content[:200]}")
print(f"Chunk metadata: {chunks[0].metadata}")  # includes page number, source!

🔢 2.3 — Embedding Generation & Vector Storage

# Complete Embedding + ChromaDB Storage Pipeline
from langchain.embeddings import OpenAIEmbeddings
from langchain.embeddings import HuggingFaceEmbeddings  # free alternative
from langchain.vectorstores import Chroma
import os

# Option A: OpenAI embeddings (paid, high quality)
# Requires OPENAI_API_KEY to be set in your environment
embed_model = OpenAIEmbeddings(model="text-embedding-3-small")
# Cost: $0.02 per million tokens — very cheap

# Option B: Free local embeddings (great for learning)
embed_model = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2",  # 384 dimensions, fast
    model_kwargs={'device': 'cpu'}
)

# Step 4: Create vector store — embeds and stores all chunks
vectorstore = Chroma.from_documents(
    documents=chunks,           # your chunked documents
    embedding=embed_model,      # embedding model
    persist_directory="./chroma_db",  # save to disk
    collection_name="company_docs"
)

print(f"Stored {vectorstore._collection.count()} embeddings!")

# To reload later without re-embedding:
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=embed_model,
    collection_name="company_docs"
)

πŸ” 2.4 β€” Retrieval, Re-ranking & Prompt Augmentation

# Step 5: Retrieval — find relevant chunks for a query
retriever = vectorstore.as_retriever(
    search_type="similarity",   # or "mmr" for diverse results
    search_kwargs={"k": 4}       # retrieve top 4 chunks
)

query = "What is the parental leave policy?"
relevant_chunks = retriever.get_relevant_documents(query)
for i, chunk in enumerate(relevant_chunks):
    print(f"Chunk {i+1} (page {chunk.metadata.get('page', '?')}):")
    print(chunk.page_content[:200])
    print()

# Step 6: Build augmented prompt
def build_rag_prompt(query: str, chunks: list) -> str:
    context = "\n\n---\n\n".join([c.page_content for c in chunks])
    
    return f"""You are a helpful assistant that answers questions based ONLY on 
the provided context. If the answer is not in the context, say 
"I don't have that information in the provided documents."

CONTEXT:
{context}

QUESTION: {query}

ANSWER (based only on context above):"""

prompt = build_rag_prompt(query, relevant_chunks)
print(f"Total prompt tokens: ~{len(prompt.split()) * 4 // 3}")

# Step 7: Generate response with LLM
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # cheapest GPT-4 class model
    messages=[{"role": "user", "content": prompt}],
    temperature=0,         # 0 = deterministic, grounded answers
    max_tokens=500
)

answer = response.choices[0].message.content
print(f"Answer: {answer}")

LangChain RetrievalQA — The One-Liner Version

# LangChain handles the whole pipeline in a few lines
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",          # "stuff" all chunks into prompt
    retriever=retriever,
    return_source_documents=True   # get source chunks back
)

result = qa_chain.invoke({"query": "What is the refund policy?"})
print(f"Answer: {result['result']}")
print(f"Sources: {[d.metadata['source'] for d in result['source_documents']]}")

Chain Types — Interviewers Ask This

Chain Type | How It Works                                                 | Best When
stuff      | Stuff ALL chunks directly into one prompt                    | Few small chunks, short context needed
map_reduce | Run LLM on each chunk separately, then combine answers       | Many chunks, parallel processing
refine     | Start with the first chunk, refine the answer with each next | Long documents, iterative refinement
map_rerank | Run LLM on each chunk, score relevance, pick the best        | Need the most relevant single answer
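Switching chain types is a one-argument change to the RetrievalQA setup above; a minimal sketch using map_reduce for queries that need many chunks:

# Same pipeline, but each chunk is processed separately and then combined (map_reduce).
qa_map_reduce = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="map_reduce",   # one LLM call per chunk, then a combine step
    retriever=vectorstore.as_retriever(search_kwargs={"k": 10}),  # can afford more chunks
)
result = qa_map_reduce.invoke({"query": "Summarize the company travel policy"})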
Q: What is the difference between the indexing and retrieval phases in RAG?
Indexing (offline, runs once): load documents → chunk → embed → store in vector DB. This is expensive (time + API cost) but done once. Retrieval (online, every query): embed query → find similar chunks → augment prompt → LLM generates answer. This must be fast (sub-second for good UX). The split allows you to pre-compute expensive embeddings and serve queries quickly.
πŸ› οΈ Day 2 β€” Build Your First RAG System
  1. Download a PDF (any manual, textbook chapter, or company policy — 5+ pages)
  2. Build the indexing pipeline: load → chunk → embed → store in ChromaDB. Print: number of chunks, a sample chunk with metadata
  3. Build the retrieval pipeline: query → retrieve top 3 chunks → print them with their similarity scores
  4. Build the generation step: manually write the prompt, call OpenAI API, print answer
  5. Use LangChain RetrievalQA to do the same in 10 lines
  6. Test 5 different queries and note which ones return accurate vs inaccurate answers. Why?
πŸ“ GitHub Commit Suggestions: feat: document-loader - PDF ingestion with PyPDFLoader feat: chunking - RecursiveCharacterTextSplitter with token counting feat: vector-store - ChromaDB embedding storage pipeline feat: retrieval-qa - full RAG pipeline with LangChain

📋 Day 2 Revision Notes

  • 2 phases: Indexing (offline) = load → chunk → embed → store | Retrieval (online) = query → retrieve → augment → generate
  • Chunking tip: a 512-token chunk size with 50-token overlap is a solid starting point for most documents
  • RecursiveCharacterTextSplitter is the best default splitter — it tries natural boundaries first
  • The SAME embedding model MUST be used for both indexing and retrieval — different models produce incompatible vectors
  • Retriever k=4 is a good default — too few misses info, too many adds noise
  • temperature=0 for RAG LLMs — you want deterministic, factual answers, not creative ones
  • LangChain RetrievalQA wraps the whole pipeline — production code uses LCEL (LangChain Expression Language) instead
🧠
Day 2 Quiz:

1. You index 1000 documents and then query "What is our vacation policy?" — describe every step that happens internally.
2. You use OpenAI for indexing embeddings but switch to HuggingFace for retrieval. Will it work? Why not?
3. What is chunk overlap and what happens if you set it to 0?
4. What does temperature=0 mean and why do RAG systems use it?
5. You have 20 retrieved chunks but the LLM context window only fits 5. What are your options?
📅 Day 3 of 5 · 5–6 hours

Tools, Frameworks & Building Real APIs

Today you graduate from scripts to real software. Build a FastAPI backend that serves your RAG system as an API, add a Streamlit frontend, and understand every tool in the modern AI engineering stack.
Topics: OpenAI API · LangChain LCEL · ChromaDB · FAISS · FastAPI · Streamlit · Async RAG · Streaming

🔌 3.1 — OpenAI API Deep Dive

OpenAI is the backbone of most RAG systems. You need to understand its API deeply for both implementation and interviews.

Key OpenAI Models for RAG

Model                  | Use For                            | Context Window     | Cost
gpt-4o-mini            | Best value for RAG generation      | 128K tokens        | ~$0.15/1M input tokens
gpt-4o                 | Complex reasoning, highest quality | 128K tokens        | ~$5/1M input tokens
text-embedding-3-small | Fast, cheap embedding for indexing | 8,191 tokens input | $0.02/1M tokens
text-embedding-3-large | Highest quality embeddings         | 8,191 tokens input | $0.13/1M tokens
# OpenAI API — Everything You Need for RAG
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

## 1. Generate embeddings (for indexing documents)
def embed_text(text: str) -> list[float]:
    response = client.embeddings.create(
        input=text,
        model="text-embedding-3-small"
    )
    return response.data[0].embedding  # list of 1536 floats

## 2. Batch embedding (more efficient)
def embed_batch(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(
        input=texts,  # send up to 2048 texts at once
        model="text-embedding-3-small"
    )
    return [item.embedding for item in response.data]

## 3. Chat completion with full control
def generate_answer(system_prompt: str, user_query: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_query}"}
        ],
        temperature=0,
        max_tokens=800
    )
    return response.choices[0].message.content

## 4. Streaming response (better UX — shows the answer as it generates)
def stream_answer(prompt: str):
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )
    for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)

⛓️ 3.2 — LangChain LCEL: Modern RAG Chains (Industry Standard)

LangChain Expression Language (LCEL) is the modern way to build RAG pipelines. It uses the pipe operator (|) to chain components — readable, composable, and production-ready.

# LCEL: Modern LangChain RAG Chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Chroma

# Setup
embed = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Chroma(persist_directory="./chroma_db", embedding_function=embed)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Prompt template
prompt = ChatPromptTemplate.from_template("""
You are an expert assistant. Answer based ONLY on the context below.
If unsure, say "I don't know based on the provided documents."

Context:
{context}

Question: {question}

Answer:
""")

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Helper to format retrieved docs into a string
def format_docs(docs):
    return "\n\n---\n\n".join(doc.page_content for doc in docs)

# LCEL Chain — reads left to right like a pipeline
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Use it:
answer = rag_chain.invoke("What is the leave encashment policy?")
print(answer)

# Streaming (yields tokens as generated):
for token in rag_chain.stream("What is the leave encashment policy?"):
    print(token, end="", flush=True)

🚀 3.3 — Building a FastAPI RAG Backend (Portfolio Ready)

A Jupyter notebook RAG system is a prototype. A FastAPI app is a product. Here's how to build a production-ready RAG API that you can show to recruiters.

FastAPI RAG Service Architecture

POST /ingest   → Upload PDF → chunk → embed → store in ChromaDB
POST /query    → Question → retrieve → augment → GPT → answer
GET  /health   → Service health check
GET  /sources  → List ingested documents

Client (Streamlit/React) → FastAPI → ChromaDB + OpenAI
# rag_api/main.py — Production FastAPI RAG Backend
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from contextlib import asynccontextmanager
import tempfile, os, asyncio

from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# === MODELS ===
class QueryRequest(BaseModel):
    question: str
    k: int = 4         # number of chunks to retrieve
    stream: bool = False  # streaming response?

class QueryResponse(BaseModel):
    answer: str
    sources: list[str]
    chunks_used: int

# === GLOBALS ===
vectorstore = None
embed_model = OpenAIEmbeddings(model="text-embedding-3-small")
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# === APP ===
app = FastAPI(
    title="RAG Document QA API",
    description="Upload documents and query them with AI",
    version="1.0.0"
)

@app.on_event("startup")
async def startup():
    global vectorstore
    # Load existing vector store if it exists
    if os.path.exists("./chroma_db"):
        vectorstore = Chroma(
            persist_directory="./chroma_db",
            embedding_function=embed_model
        )
        print(f"Loaded existing vector store")

@app.get("/health")
async def health():
    return {"status": "healthy", "docs_indexed": vectorstore._collection.count() if vectorstore else 0}

@app.post("/ingest")
async def ingest_document(file: UploadFile = File(...)):
    """Upload and index a PDF document"""
    global vectorstore
    
    if not file.filename.endswith('.pdf'):
        raise HTTPException(400, "Only PDF files supported")
    
    # Save uploaded file temporarily
    with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
        tmp.write(await file.read())
        tmp_path = tmp.name
    
    try:
        loader = PyPDFLoader(tmp_path)
        docs = loader.load()
        
        splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50)
        chunks = splitter.split_documents(docs)
        
        # Add source filename to metadata
        for chunk in chunks:
            chunk.metadata["filename"] = file.filename
        
        if vectorstore is None:
            vectorstore = Chroma.from_documents(
                chunks, embed_model, persist_directory="./chroma_db"
            )
        else:
            # add_documents returns chunk IDs, not a store; don't reassign vectorstore
            vectorstore.add_documents(chunks)
        
        return {"message": f"Indexed {len(chunks)} chunks from {file.filename}", "chunks": len(chunks)}
    finally:
        os.unlink(tmp_path)

@app.post("/query", response_model=QueryResponse)
async def query_documents(req: QueryRequest):
    """Query the indexed documents"""
    if not vectorstore:
        raise HTTPException(404, "No documents indexed yet. Use /ingest first.")
    
    retriever = vectorstore.as_retriever(search_kwargs={"k": req.k})
    retrieved_docs = retriever.get_relevant_documents(req.question)
    
    prompt = ChatPromptTemplate.from_template("""Answer using ONLY the context. 
If not in context, say "I don't have that information."

Context: {context}
Question: {question}
Answer:""")
    
    chain = (
        {"context": lambda x: "\n\n".join(d.page_content for d in retrieved_docs),
         "question": lambda x: x}
        | prompt | llm | StrOutputParser()
    )
    
    answer = chain.invoke(req.question)
    sources = list({d.metadata.get("filename", "unknown") for d in retrieved_docs})
    
    return QueryResponse(answer=answer, sources=sources, chunks_used=len(retrieved_docs))

🖥️ 3.4 — Streamlit Frontend for RAG

# app.py — Streamlit Chat UI for RAG
import streamlit as st
import requests

st.set_page_config(page_title="📚 Doc QA", layout="wide")
st.title("📚 AI Document Q&A")
st.caption("Upload a PDF and ask anything about it")

API_URL = "http://localhost:8000"

# Sidebar: document upload
with st.sidebar:
    st.header("πŸ“€ Upload Document")
    uploaded = st.file_uploader("Choose a PDF", type=["pdf"])
    if uploaded and st.button("Index Document"):
        with st.spinner("Indexing..."):
            resp = requests.post(f"{API_URL}/ingest", files={"file": uploaded})
            if resp.status_code == 200:
                st.success(resp.json()["message"])
            else:
                st.error("Failed to index")

# Chat interface with history
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if question := st.chat_input("Ask about your document..."):
    st.session_state.messages.append({"role": "user", "content": question})
    with st.chat_message("user"):
        st.write(question)
    
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            resp = requests.post(f"{API_URL}/query", json={"question": question})
            if resp.status_code == 200:
                data = resp.json()
                st.write(data["answer"])
                st.caption(f"πŸ“Ž Sources: {', '.join(data['sources'])}")
                st.session_state.messages.append({"role": "assistant", "content": data["answer"]})

# Run: streamlit run app.py
# Backend: uvicorn rag_api.main:app --reload
Q: Why use FastAPI instead of Flask for a RAG backend?
FastAPI advantages for RAG: (1) Async support — LLM API calls are I/O-bound; async allows serving multiple requests concurrently without threads. (2) Automatic Pydantic validation — request/response models are validated automatically. (3) Built-in streaming — StreamingResponse makes token streaming trivial. (4) Auto API docs at /docs — great for demos. (5) Type hints throughout — better code quality. Flask works, but FastAPI is the modern choice for AI APIs.
πŸ› οΈ Day 3 β€” Build RAG API + UI
  1. Build the FastAPI backend with /ingest, /query, and /health endpoints
  2. Test with Postman or curl: upload a PDF → query it → verify answer + sources
  3. Add the Streamlit frontend — connect it to your FastAPI backend
  4. Add streaming: Modify /query to stream tokens using StreamingResponse + Server-Sent Events
  5. Add error handling: What if no docs are indexed? What if PDF is corrupted?
  6. Write a README.md with setup instructions and API documentation
πŸ“ GitHub Commit Suggestions: feat: fastapi-backend - /ingest /query /health endpoints with Pydantic models feat: streamlit-ui - chat interface with file upload and conversation history feat: streaming - token streaming via StreamingResponse docs: README with architecture diagram and setup instructions

📋 Day 3 Revision Notes

  • text-embedding-3-small = best price/performance embedding | use it for learning and production
  • gpt-4o-mini = best value LLM for RAG generation | temperature=0 always for RAG
  • LCEL pipe syntax: retriever | prompt | llm | parser = the modern LangChain way
  • FastAPI advantages: async, Pydantic validation, streaming, auto-docs at /docs
  • Streaming = yield tokens as generated → better UX, the user sees the answer building in real time
  • Streamlit = 10 lines for a working chat UI, session_state for conversation history
  • Always add metadata (filename, page) to chunks — it's needed for source attribution in answers
📅 Day 4 of 5 · 5–6 hours

Advanced RAG — The Techniques That Actually Work

Basic RAG often tops out around 60-70% answer accuracy. Today you learn the techniques that can push it to 85-95%: hybrid search, re-ranking, query transformation, metadata filtering, and evaluation. This is what separates junior from senior AI engineers.
Topics: Hybrid Search · BM25 · Re-ranking · Query Transformation · HyDE · Multi-query · Metadata Filtering · RAG Evaluation · Cost Optimization

🔀 4.1 — Hybrid Search: The Best of Both Worlds (Used in Production)

Pure semantic search (vectors) is great for conceptual questions but misses exact matches. BM25 keyword search is great for exact terms but misses synonyms. Hybrid search combines both.

HYBRID SEARCH ARCHITECTURE

Query: "What does API Gateway do?"
        ↓                        ↓
[BM25/Keyword]            [Vector/Semantic]
Finds: "API",             Finds: "HTTP endpoint",
"Gateway"                 "request routing", "REST API"
exact matches             similar meanings
        ↓                        ↓
        [Reciprocal Rank Fusion]
   Merges both result lists using rank-based scoring
                 ↓
Final ranked results ← best of both worlds

Result: exact keyword matches + semantic matches combined.
Recall improves dramatically for technical terms, IDs, names.
# Hybrid Search with LangChain + BM25 (BM25Retriever requires: pip install rank_bm25)
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain_community.vectorstores import Chroma

# Assume chunks is your list of LangChain Documents

# BM25 (keyword-based)
bm25_retriever = BM25Retriever.from_documents(chunks)
bm25_retriever.k = 4

# Vector (semantic)
vector_retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Ensemble: 40% BM25, 60% semantic (tune these weights)
hybrid_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, vector_retriever],
    weights=[0.4, 0.6]
)

results = hybrid_retriever.get_relevant_documents("What is the API rate limit?")
# Better at finding "rate limit" (exact) AND "throttling/quota" (semantic)
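EnsembleRetriever does the merging for you; the underlying idea from the diagram is Reciprocal Rank Fusion. A minimal standalone sketch of RRF itself (60 is the conventional constant):

# A minimal sketch of Reciprocal Rank Fusion: score = sum over lists of 1/(k + rank).
def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "b" ranks high in both lists, so it fuses to the top:
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "d", "a"]]))  # ['b', 'a', ...]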
🧠 Memory Trick: Hybrid = BM25 + Vectors. BM25 = "Be My 25th birthday" (exact memory). Vectors = "Vibes" (fuzzy meaning). Hybrid = remember exactly AND understand meaning.

⬆️ 4.2 — Re-ranking: The Secret Sauce (Biggest Quality Boost)

Vector similarity finds candidates. Re-ranking picks the best ones. A re-ranker is a cross-encoder model that evaluates a query + document pair together — much more accurate than plain embedding similarity, but slower (which is why we use it only on the top-K candidates).

Two-Stage Retrieval with Re-ranking

Stage 1 — Retrieval (fast, approximate)
  Query → Vector Search → Top 20 candidates (fast, ~10ms)

Stage 2 — Re-ranking (slower, more accurate)
  For each of the 20 candidates: Cross-encoder(query, document) → relevance score 0..1
  Final: top 4 of 20, sorted by re-ranker score → sent to LLM

Cost: ~20ms extra but a HUGE quality improvement.
The re-ranker reads query and document TOGETHER (not separately),
which gives it a much better understanding of relevance.
# Re-ranking with Cohere Rerank (cloud) or a cross-encoder (local)
import os

## Option A: Cohere Rerank API (easy, high quality)
import cohere
co = cohere.Client(os.getenv("COHERE_API_KEY"))

def rerank_documents(query: str, docs: list, top_n: int = 4):
    texts = [d.page_content for d in docs]
    results = co.rerank(
        query=query,
        documents=texts,
        top_n=top_n,
        model="rerank-english-v3.0"
    )
    return [docs[r.index] for r in results.results]

## Option B: Local cross-encoder (free)
from sentence_transformers import CrossEncoder

reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

def rerank_local(query: str, docs: list, top_n: int = 4) -> list:
    pairs = [(query, doc.page_content) for doc in docs]
    scores = reranker.predict(pairs)
    # Sort by score descending, take top N
    ranked = sorted(zip(docs, scores), key=lambda x: x[1], reverse=True)
    return [doc for doc, score in ranked[:top_n]]

# Usage: retrieve 20, rerank to top 4
candidates = vectorstore.as_retriever(search_kwargs={"k": 20}).get_relevant_documents(query)
reranked = rerank_local(query, candidates, top_n=4)

🔄 4.3 — Query Transformation Techniques

Users write bad queries. Query transformation improves them before retrieval. This alone can boost RAG accuracy by 20-30%.

Technique           | How It Works                                                          | Best For
Multi-query         | LLM generates 3-5 different phrasings of the query → retrieve for each → merge | Short or ambiguous queries
HyDE                | LLM generates a hypothetical answer → embed that → find similar real docs | Complex questions, abstract topics
Step-back prompting | LLM generates a more general "step-back" question → retrieve broader context | Specific questions needing broader context
Query decomposition | Break a complex multi-part question into sub-questions → answer each → combine | Complex multi-hop questions
Query expansion     | Add synonyms and related terms to the query                           | Domain-specific terminology
# Multi-Query Retrieval — most commonly used in production
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)  # slight creativity for variety

multi_query_retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    llm=llm
)
# For query "What's the leave policy?", generates:
# 1. "How many vacation days do employees get?"
# 2. "What are the sick leave rules?"
# 3. "How do I apply for time off?"
# Retrieves for all 3, deduplicates, returns union

# HyDE — Hypothetical Document Embeddings
hyde_prompt = ChatPromptTemplate.from_template("""
Write a short paragraph that would be a perfect answer to this question.
Write it as if it's from a document, not as a direct answer.

Question: {question}

Hypothetical document paragraph:""")

hyde_chain = hyde_prompt | llm | StrOutputParser()

def hyde_retrieve(question: str, retriever, k: int = 4):
    # Generate hypothetical answer, use it as query
    hypothetical_doc = hyde_chain.invoke({"question": question})
    return retriever.get_relevant_documents(hypothetical_doc)
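Query decomposition from the table has no ready-made class in this stack; a minimal hand-rolled sketch (the prompt wording is illustrative, and llm/rag_chain are the LCEL objects built earlier):

# A minimal sketch of query decomposition: split a multi-part question,
# answer each sub-question with the RAG chain, then synthesize one answer.
decompose_prompt = ChatPromptTemplate.from_template(
    "Break this question into 2-4 standalone sub-questions, one per line:\n{question}"
)
decompose_chain = decompose_prompt | llm | StrOutputParser()

def decompose_and_answer(question: str) -> str:
    subs = [q.strip() for q in decompose_chain.invoke({"question": question}).splitlines() if q.strip()]
    answers = [rag_chain.invoke(q) for q in subs]  # rag_chain = the Day 3 LCEL chain
    combined = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(subs, answers))
    return llm.invoke(f"Combine these partial answers into one final answer:\n{combined}").content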

Metadata Filtering — Precision at Scale

# Metadata filtering: search only within specific documents/sections
# ChromaDB supports filtering during search

# When ingesting, add rich metadata:
for chunk in chunks:
    chunk.metadata.update({
        "department": "HR",
        "doc_type": "policy",
        "year": 2024,
        "filename": "hr_policy_2024.pdf"
    })

# Filter search to only HR department docs from 2024.
# ChromaDB needs an explicit $and to combine multiple conditions:
results = vectorstore.similarity_search(
    query="What is the leave policy?",
    k=4,
    filter={"$and": [{"department": "HR"}, {"year": 2024}]}
)
# Only searches HR 2024 docs — much more precise and faster

📊 4.4 — RAG Evaluation — How Do You Know If It's Working?

You cannot improve what you don't measure. RAG evaluation is what senior AI engineers do that juniors skip. This is a high-value interview topic.

4 Key RAG Metrics

Metric              | Measures                                       | Question It Answers
Context Recall      | Did retrieval find all necessary information?  | "Did we retrieve the right chunks?"
Context Precision   | Are retrieved chunks relevant? No noise?       | "Did we retrieve ONLY the right chunks?"
Answer Faithfulness | Is the answer grounded in retrieved context?   | "Did the LLM use the context or hallucinate?"
Answer Relevancy    | Does the answer actually address the question? | "Is the answer actually helpful?"
# RAG Evaluation with RAGAS (the standard library)
from ragas import evaluate
from ragas.metrics import (
    faithfulness,        # answer grounded in context?
    answer_relevancy,   # answer relevant to question?
    context_recall,     # retrieved right context?
    context_precision   # retrieved only relevant context?
)
from datasets import Dataset

# Build test dataset: question, expected answer, retrieved context, generated answer
test_data = {
    "question": ["What is the parental leave policy?", "How do I apply for remote work?"],
    "answer": [generated_answer_1, generated_answer_2],
    "contexts": [[chunk.page_content for chunk in retrieved_1], 
               [chunk.page_content for chunk in retrieved_2]],
    "ground_truth": ["Employees get 26 weeks paid leave...", "Submit a remote work request form..."]
}

dataset = Dataset.from_dict(test_data)
results = evaluate(dataset, metrics=[faithfulness, answer_relevancy, context_recall])
print(results)
# Output:
# faithfulness: 0.92     ← 92% of answers grounded in context ✅
# answer_relevancy: 0.87 ← 87% of answers actually address the question ✅
# context_recall: 0.79   ← missed some relevant chunks ⚠️ (tune chunk size)

Cost vs Accuracy Tradeoffs

Decision             | More Accurate (higher cost)      | Cheaper (lower accuracy)
Embedding model      | text-embedding-3-large ($0.13/M) | all-MiniLM-L6-v2 (free)
Generation model     | gpt-4o ($5/M)                    | gpt-4o-mini ($0.15/M)
Chunks retrieved (k) | k=10 (more context)              | k=3 (cheaper, less noise)
Re-ranking           | Cohere re-ranker + k=20 initial  | No re-ranking, k=4
Query transformation | Multi-query (3x LLM calls)       | Direct query (1 LLM call)
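A quick back-of-envelope helper makes the k tradeoff concrete. This sketch hardcodes ballpark gpt-4o-mini prices (the output price is an assumption not listed in the table; check current pricing):

# A minimal sketch: estimated $ per 1,000 RAG queries at given retrieval settings.
def cost_per_1k_queries(k: int, chunk_tokens: int = 512, answer_tokens: int = 300,
                        in_price: float = 0.15, out_price: float = 0.60) -> float:
    """Prices are $ per 1M tokens (gpt-4o-mini ballpark; out_price is an assumption)."""
    input_tokens = k * chunk_tokens + 100  # retrieved context + question/instructions
    per_query = (input_tokens * in_price + answer_tokens * out_price) / 1_000_000
    return per_query * 1000

print(f"k=3:  ${cost_per_1k_queries(3):.2f} per 1k queries")
print(f"k=10: ${cost_per_1k_queries(10):.2f} per 1k queries")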
Q: What is HyDE and when would you use it?
HyDE = Hypothetical Document Embeddings. Instead of embedding the user's short question (which may not match long document sentences), you ask the LLM to write a hypothetical answer paragraph, then embed that to find similar real documents. Use HyDE when: (1) user queries are very short/abstract ("explain machine learning"), (2) standard retrieval misses relevant docs, (3) the question phrasing differs a lot from how answers are written in documents. Downside: adds 1 extra LLM call (cost + latency).
Q: How do you decide chunk size for a RAG system?
It depends on your content type: (1) FAQs/short answers: 128-256 tokens (each Q&A is self-contained). (2) Technical docs/manuals: 512-1024 tokens (need enough context for complex topics). (3) Legal docs: 256-512 tokens (precise language matters, small chunks). (4) Code: Split by function/class, not token count. Start with 512 tokens + RAGAS evaluation, then tune. Common mistake: too-small chunks lose context; too-large chunks dilute relevance.

📋 Day 4 Revision Notes

  • Hybrid search = BM25 (exact keywords) + vector (semantic) → best of both, use EnsembleRetriever
  • Re-ranking = retrieve 20 candidates → re-rank to top 4 → the biggest quality boost per dollar
  • Multi-query = LLM generates 3-5 query variations → retrieve for each → deduplicate and merge
  • HyDE = generate a hypothetical answer → embed it → find similar real docs (helps abstract questions)
  • Metadata filtering = add rich metadata during indexing → filter during retrieval → precision + speed
  • RAGAS metrics: faithfulness, answer_relevancy, context_recall, context_precision
  • Cost optimization: free embeddings for prototypes → OpenAI small for production → use re-ranking to offset a smaller k
🧠
Day 4 Quiz:

1. A user queries "ARN format in AWS". Pure semantic search returns docs about "AWS resource naming conventions" but misses the exact doc saying "arn:aws:...". Which retrieval method would fix this?
2. You retrieve k=4 chunks for every query but your RAGAS context_recall is 0.62. What are 3 things you can try?
3. Explain the two-stage retrieval pattern. Why not just use re-ranker on all documents?
4. Your RAG system answers legal questions but sometimes cites the wrong year's policy. How does metadata filtering solve this?
5. Your faithfulness score is 0.55 (low). What does this mean and how do you fix it?
📅 Day 5 of 5 · 5–6 hours

Production RAG — Real Architecture & Deployment

Today you zoom out from code to systems thinking. Understand how companies actually ship RAG. Learn deployment, production pitfalls, and architectural patterns used by real AI teams. This is what gets you promoted.
Topics: Production Architecture · Deployment · Docker · Real Company Patterns · Conversation Memory · PDF Chatbot Architecture · Security

🏢 5.1 — How Real Companies Use RAG

Company / Use Case     | RAG Architecture                                | Key Challenge Solved
Customer Support Bot   | RAG over help docs + ticket history + FAQs      | Deflect 60% of tickets, always up to date with product changes
Internal HR Chatbot    | RAG over policy docs, Confluence, Notion        | Employees get instant, accurate policy answers 24/7
Legal Document Review  | RAG over case files, contracts, precedents      | Lawyers query 50,000+ docs in seconds, with citations
Developer Docs Search  | RAG over API docs, GitHub issues, SO posts      | Developers find answers without manual searching
Sales Intelligence     | RAG over call transcripts, CRM, market reports  | Sales reps get tailored pitch points before every call
Medical Knowledge Base | RAG over clinical trials, drug references       | Doctors query the latest research, grounded in evidence

The Full Production RAG Architecture

PRODUCTION RAG SYSTEM (Enterprise Grade)

[Ingestion Pipeline]              [Query Pipeline]
(runs when docs update)           (runs for every user query)

[Document Sources]                [Auth Gateway]
S3 / GCS / SharePoint             User auth + rate limiting
Confluence / Notion                       ↓
Google Drive / PDFs               [Query Router]
        ↓                         Is it RAG? SQL? Calculator?
[Document Processor]                      ↓
Parse + clean + OCR               [Query Transform]
        ↓                         Multi-query / HyDE / decompose
[Chunker]                                 ↓
Semantic + structure-aware        [Hybrid Retriever]
        ↓                         BM25 + Vector + metadata filter
[Embedding Model]                         ↓
OpenAI / Cohere / local           [Re-ranker]
        ↓                                 ↓
[Vector DB]                       [Context Builder]
Pinecone / Qdrant                 Format + dedup + truncate
with rich metadata                        ↓
                                  [Conversation Memory]
                                  Last N turns of context
                                          ↓
                                  [LLM Generation]
                                  GPT-4o / Claude / Gemini
                                          ↓
                                  [Post-processing]
                                  Citation extraction
                                  Guardrails check
                                  Response streaming
                                          ↓
                                  [User + Logging]
                                  Answer + sources + latency
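Most boxes in this diagram map to code from earlier days; the Query Router is the one piece not yet shown. A minimal LLM-based router sketch (the labels and prompt wording are illustrative):

# A minimal sketch: route a query to RAG, SQL, or a calculator tool via one LLM call.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

router_llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
router_prompt = ChatPromptTemplate.from_template(
    "Classify this query as exactly one of: rag, sql, calculator.\nQuery: {query}\nLabel:"
)
router = router_prompt | router_llm | StrOutputParser()

def route(query: str) -> str:
    label = router.invoke({"query": query}).strip().lower()
    return label if label in {"rag", "sql", "calculator"} else "rag"  # default to RAG

print(route("What is the parental leave policy?"))  # rag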

💬 5.2 — Conversation Memory in RAG

Without memory, every query to your RAG system is independent. Users can't ask follow-up questions like "Tell me more about that" or "What about for remote employees?"

# Conversational RAG with memory — the right way
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationalRetrievalChain

# Keep last 5 conversation turns in memory
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    k=5,
    return_messages=True,
    output_key="answer"
)

conv_chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0),
    retriever=hybrid_retriever,
    memory=memory,
    return_source_documents=True
)

# Turn 1
r1 = conv_chain.invoke({"question": "What is the parental leave policy?"})
print(r1["answer"])

# Turn 2 — the system remembers we're talking about parental leave
r2 = conv_chain.invoke({"question": "Does it apply to adoption too?"})
print(r2["answer"])  # Correctly contextualizes "it" = parental leave

# LCEL approach with manual history (more control):
from langchain_core.messages import HumanMessage, AIMessage

chat_history = []

def chat_with_memory(question: str) -> str:
    # Condense question with history context
    condense_prompt = f"""Given the conversation history and new question, 
rephrase the question to be standalone.
History: {chat_history[-4:]}
Question: {question}
Standalone question:"""
    standalone = llm.invoke(condense_prompt).content
    
    # Retrieve and answer using the standalone question
    # (rag_chain retrieves internally, so no separate retriever call is needed)
    answer = rag_chain.invoke(standalone)
    
    # Update history
    chat_history.append(HumanMessage(content=question))
    chat_history.append(AIMessage(content=answer))
    
    return answer

🐳 5.3 — Deploying RAG with Docker + Cloud

Project Structure (GitHub-Ready)

rag-pdf-chatbot/
├── backend/
│   ├── main.py             # FastAPI app
│   ├── rag/
│   │   ├── __init__.py
│   │   ├── ingestion.py    # Document loading + chunking
│   │   ├── embeddings.py   # Embedding model wrapper
│   │   ├── retrieval.py    # Vector store + retrieval
│   │   ├── generation.py   # LLM + prompt building
│   │   └── evaluation.py   # RAGAS evaluation
│   ├── tests/
│   │   └── test_rag.py
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/
│   └── streamlit_app.py
├── data/
│   └── sample_docs/
├── docker-compose.yml
├── .env.example           # Template for env vars (NO real keys)
├── .gitignore             # Include: chroma_db/, .env, __pycache__
└── README.md              # Architecture diagram + setup instructions
# Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first (caching layer)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create directory for ChromaDB persistence
RUN mkdir -p /app/chroma_db

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
# docker-compose.yml
version: '3.8'
services:
  backend:
    build: ./backend
    ports: ["8000:8000"]
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - COHERE_API_KEY=${COHERE_API_KEY}
    volumes:
      - chroma_data:/app/chroma_db  # persist vector DB
  
  frontend:
    build: ./frontend
    ports: ["8501:8501"]
    depends_on: [backend]
    environment:
      - API_URL=http://backend:8000

volumes:
  chroma_data:

# Run: docker-compose up --build

Deployment Options

Option             | Cost                | Effort        | Best For
Railway.app        | Free tier available | ⭐ Easiest     | Portfolio demos, hackathons
Render.com         | Free tier (sleeps)  | ⭐⭐ Easy       | Personal projects
AWS EC2 + Docker   | ~$10-20/month       | ⭐⭐⭐ Medium    | Production, shows AWS skills
Google Cloud Run   | Pay per request     | ⭐⭐⭐ Medium    | Scalable serverless
HuggingFace Spaces | Free for Streamlit  | ⭐ Easiest     | ML portfolio showcase
💡
For your portfolio: Deploy on Railway or HuggingFace Spaces (free, one-click), then also show how you'd deploy on AWS EC2 with Docker in your README. This proves you understand both ease-of-use AND production deployment.

⚠️ 5.4 — Production Pitfalls & Security

Pitfall           | Problem                                                    | Fix
Prompt injection  | User crafts a query to override the system prompt          | Input sanitization, separate system/user context, output validation
Context poisoning | A malicious doc in the vector DB injects instructions      | Sanitize documents at ingestion, separate trusted/untrusted sources
No rate limiting  | Bot floods your API → $1000 OpenAI bill overnight          | FastAPI rate-limiting middleware, usage quotas per user/API key
Storing raw text  | PII, secrets, confidential data in the vector DB           | PII detection at ingestion, access control, encryption at rest
No guardrails     | LLM answers questions outside scope ("how to hack?")       | Input/output guards (Guardrails AI, NeMo Guardrails)
Stale embeddings  | Doc updated but old embedding still in DB → wrong answers  | Document versioning, update/delete embeddings when the source changes
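The rate-limiting fix can be a few lines with a middleware library; a minimal sketch using slowapi (one common choice, assuming pip install slowapi):

# A minimal sketch: per-client rate limiting on the /query endpoint with slowapi.
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)   # rate-limit per client IP
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/query")
@limiter.limit("10/minute")                      # each IP gets 10 queries per minute
async def query(request: Request):               # slowapi needs the Request argument
    return {"answer": "..."}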
Q: How do you handle document updates in a RAG system?
This is a common production problem. When a document is updated: (1) Delete the old chunks from the vector DB using a metadata filter (filename + version). (2) Re-chunk and re-embed the new version. (3) Store it with new version metadata. Best practice: assign a document_id and hash to each document at ingestion. On update, compare the hash — if changed, delete all chunks with that document_id and re-index. Use a document registry (a simple SQLite table) to track what's indexed and when.
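A minimal sketch of that hash-check flow, with a plain dict standing in for the SQLite registry the answer recommends:

# A minimal sketch: hash-based re-indexing so updated documents replace stale chunks.
import hashlib
from langchain_core.documents import Document

registry: dict[str, str] = {}  # document_id -> content hash (a SQLite table in practice)

def upsert_document(doc_id: str, text: str, vectorstore, splitter) -> str:
    new_hash = hashlib.sha256(text.encode()).hexdigest()
    if registry.get(doc_id) == new_hash:
        return "unchanged"  # same content, nothing to do
    # Delete stale chunks via Chroma's native collection, then re-chunk and re-embed
    vectorstore._collection.delete(where={"document_id": doc_id})
    chunks = [Document(page_content=c, metadata={"document_id": doc_id})
              for c in splitter.split_text(text)]
    vectorstore.add_documents(chunks)
    registry[doc_id] = new_hash
    return "reindexed"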

📋 Day 5 Revision Notes

  • Production RAG adds: auth gateway, query router, re-ranking, conversation memory, post-processing, logging
  • Conversation memory = ConversationBufferWindowMemory (last K turns) → condense the question with history before retrieval
  • Docker structure: FastAPI backend + Streamlit frontend in docker-compose, with a volume for ChromaDB
  • Project structure: a separate rag/ module with ingestion, embeddings, retrieval, generation, evaluation files
  • Security must-haves: rate limiting, input sanitization, no hardcoded API keys, PII detection at ingestion
  • Document updates: track document_id + hash → delete old chunks → re-index on change
  • Deploy to Railway/HuggingFace for your portfolio; mention AWS EC2 in the README for production credibility
πŸ† Bonus β€” Project + Resume + Interview Prep

The PDF Chatbot Project + Job Readiness Kit

Everything you need to get hired. A complete portfolio-worthy project, resume descriptions that stand out, and the top interview questions with answers that get people hired at AI companies.
Topics: Mini Project · GitHub Setup · Resume · Interview Q&A · Beginner Mistakes · Next Steps

🚀 The Mini Project: AI PDF Research Assistant

Build a full-stack RAG application that lets users upload multiple PDFs and have an intelligent conversation about their content. This is something real companies pay engineers to build.

PROJECT: AI PDF Research Assistant

Features:
✅ Upload multiple PDFs (up to 10)
✅ Intelligent Q&A with source citations
✅ Conversation memory (multi-turn)
✅ Hybrid search (BM25 + semantic)
✅ Re-ranking for accuracy
✅ Streaming responses
✅ Document management (list/delete)
✅ Query history
✅ Cost tracker (tokens used)

Stack:
Backend:  FastAPI + LangChain + ChromaDB + OpenAI
Frontend: Streamlit (responsive chat UI)
Deploy:   Docker + Railway/Render/HuggingFace Spaces

Architecture:
[Streamlit UI]
  Upload PDFs   → /ingest
  Ask questions → /chat (streaming)
  View docs     → /documents
        ↓ HTTP
[FastAPI Backend]
  /ingest       → PDF → chunks → embed → ChromaDB
  /chat         → query → hybrid retrieve → rerank → GPT → stream
  /documents    → list indexed docs + metadata
  /delete/{id}  → remove doc + its chunks
        ↓
[ChromaDB] (local, persisted)
        ↓
[OpenAI API] embeddings + generation

Key Implementation: The /chat Streaming Endpoint

# The most impressive feature: streaming RAG with sources (SSE)
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
from pydantic import BaseModel

app = FastAPI()
client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

class ChatRequest(BaseModel):
    question: str
    chat_history: list[dict] = []

# hybrid_retriever, rerank_local, and build_prompt come from the rag/ module

@app.post("/chat")
async def chat_stream(req: ChatRequest):
    """Stream RAG response with sources"""

    # Retrieve with hybrid search, then keep only the best chunks
    docs = hybrid_retriever.get_relevant_documents(req.question)
    docs = rerank_local(req.question, docs, top_n=4)

    context = "\n\n---\n\n".join(d.page_content for d in docs)
    sources = list({d.metadata.get("filename") for d in docs})

    prompt = build_prompt(req.question, context, req.chat_history)

    async def generate():
        # First event: sources (so the UI can show them immediately)
        yield f"data: {json.dumps({'type': 'sources', 'data': sources})}\n\n"

        # Stream LLM tokens; the async client avoids blocking the event loop
        stream = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        )
        async for chunk in stream:
            if not chunk.choices:
                continue  # some chunks carry no delta
            token = chunk.choices[0].delta.content or ""
            if token:
                yield f"data: {json.dumps({'type': 'token', 'data': token})}\n\n"

        yield f"data: {json.dumps({'type': 'done'})}\n\n"

    return StreamingResponse(generate(), media_type="text/event-stream")
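On the frontend side, the stream is consumed line by line. A sketch with httpx (the URL and payload shape match the endpoint above; error handling is omitted for brevity):

# Consume the SSE stream from the Streamlit side
import json

import httpx

def stream_answer(question: str, chat_history: list[dict]):
    """Yield (event_type, data) tuples as the backend streams them."""
    payload = {"question": question, "chat_history": chat_history}
    with httpx.stream("POST", "http://localhost:8000/chat", json=payload, timeout=60) as r:
        for line in r.iter_lines():
            if line.startswith("data: "):
                event = json.loads(line[len("data: "):])
                yield event["type"], event.get("data")

# Usage: print tokens as they arrive
# for kind, data in stream_answer("What is chunk overlap?", []):
#     if kind == "token":
#         print(data, end="", flush=True)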

GitHub Repository Checklist

✅ Before Making Repository Public
  1. .gitignore includes: .env, chroma_db/, __pycache__/, *.pyc, .DS_Store, *.egg-info
  2. .env.example: Shows required variables WITHOUT actual values — OPENAI_API_KEY=your_key_here
  3. README.md includes: Project description, architecture diagram (ASCII), tech stack, setup instructions (pip install + docker-compose), demo GIF/screenshot, API docs link
  4. requirements.txt: All dependencies with version pins (pip freeze > requirements.txt)
  5. tests/: At least 3 tests — ingestion, retrieval quality, API endpoint
  6. Commit history: Clean commits with meaningful messages (feat: hybrid-search, fix: chunk-overlap, docs: add-readme)

📄 Resume-Ready Project Descriptions

AI PDF Research Assistant | Python, FastAPI, LangChain, ChromaDB, OpenAI
github.com/yourname/ai-pdf-chatbot  |  live: railway.app/project/...

• Built a production-ready RAG system enabling conversational Q&A over
  multiple PDF documents with <2s response latency
• Implemented hybrid search (BM25 + semantic vectors) with cross-encoder
  re-ranking, improving retrieval accuracy by ~30% vs naive vector search
• Designed streaming FastAPI backend with conversation memory supporting
  multi-turn queries with source citations
• Deployed containerized application (Docker + docker-compose) serving
  both FastAPI and Streamlit frontend
• Integrated RAGAS evaluation framework; achieved faithfulness score 0.89
  and context recall 0.83 on test dataset

Skills demonstrated: RAG architecture, LLM engineering, FastAPI, vector
databases, embeddings, hybrid search, Docker, prompt engineering

Skills Section Format

AI / LLM Engineering:  RAG systems, LangChain, OpenAI API (Chat + Embeddings),
                      Prompt engineering, ChromaDB, FAISS, Semantic search,
                      Hybrid search, Re-ranking, RAGAS evaluation

Backend:  FastAPI, Python, REST APIs, async programming, Docker

Databases:  ChromaDB (vector), PostgreSQL, MongoDB, pgvector

💬 Top 15 RAG Interview Questions with Answers

🌱 Fundamentals

Q1: Explain RAG to a non-technical manager.
Instead of relying on an AI's built-in memory (which can be outdated or wrong), RAG works like an open-book exam: the AI searches your company's documents first, finds the relevant pages, then uses those pages to write its answer. The answer is grounded in your actual documents — you can verify it and see which page it came from.
Q2: When would you choose RAG over fine-tuning?
Choose RAG when: data changes frequently (policies, prices, news), you need source attribution, you have budget constraints (no GPU for fine-tuning), you need to query private proprietary data, or you want to add knowledge without touching model weights. Choose fine-tuning when: teaching a new skill or style (not just knowledge), you need consistent format/tone, or have static domain knowledge that rarely changes.
Q3: What is the difference between dense retrieval and sparse retrieval?
Dense retrieval: Uses embedding vectors (dense, continuous values) — captures semantic meaning, handles synonyms, works across languages. Sparse retrieval (BM25/TF-IDF): Uses term-frequency vectors (mostly zeros, hence sparse) — exact keyword matching, fast, interpretable, works great for technical terms and names. RAG best practice: use both (hybrid search) for maximum coverage.
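In code, hybrid search can be as small as this LangChain sketch (import paths shift between LangChain versions, so treat these as approximate; the sample texts are made up):

# Hybrid search: BM25 (sparse) + embeddings (dense), blended by weight
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

texts = [
    "Refunds are processed within 14 days.",
    "Error E401 means an expired token.",
]

sparse = BM25Retriever.from_texts(texts)                              # keywords
dense = Chroma.from_texts(texts, OpenAIEmbeddings()).as_retriever()   # semantics

hybrid = EnsembleRetriever(retrievers=[sparse, dense], weights=[0.4, 0.6])
docs = hybrid.get_relevant_documents("how long do refunds take?")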
Q4: What is a vector database and how is it different from a regular database?
A regular database (SQL/NoSQL) stores and retrieves data by exact matching (WHERE id = 5, WHERE name = 'Alice'). A vector database stores embedding vectors and retrieves by similarity — "find me the 10 vectors most similar to this query vector." This requires specialized indexing algorithms (HNSW, IVF) that approximate nearest-neighbor search at scale. Regular DBs can't do this efficiently.
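A tiny ChromaDB demo makes the difference concrete: the query below shares no keyword with either stored row, yet similarity search should still surface the right one (Chroma's default local embedding model handles the vectors here):

# Similarity search finds meaning, not keywords
import chromadb

client = chromadb.Client()  # in-memory instance
collection = client.create_collection("demo")
collection.add(
    ids=["1", "2"],
    documents=["The cat sat on the mat.", "Quarterly revenue grew 12%."],
)
# "earnings went up" matches no stored keyword, but the revenue
# sentence should come back as the nearest neighbor
results = collection.query(query_texts=["earnings went up"], n_results=1)
print(results["documents"])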

🔧 Pipeline & Implementation

Q5: Why must you use the same embedding model for indexing and retrieval?
Embedding models map text to a specific high-dimensional vector space. Different models create different spaces — vectors from model A and model B are incompatible. Comparing them is like comparing "1" measured in feet against "0.9" measured in meters and concluding the first is taller, when 1 foot (≈0.3 m) is actually shorter than 0.9 m. The similarity scores would be meaningless.
Q6: How do you handle very long documents (books, large reports) in RAG?
(1) Hierarchical indexing: chunk at multiple granularities and index each level — small chunks for precise retrieval, larger sections for broad questions. (2) Parent-child chunking: retrieve small chunks, but pass their parent (larger) chunk to the LLM for more context. (3) Summary indexing: store both summary and full text — retrieve by summary, generate from full text. (4) Document-level metadata: store a document summary as metadata for broad questions, chunks for specific ones.
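Technique (2) is built into LangChain as ParentDocumentRetriever. A hedged sketch (import paths vary by version; chunk sizes are illustrative):

# Parent-child chunking: index small chunks, return their larger parents
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(collection_name="children",
                       embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),  # holds the larger parent chunks
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=200),    # indexed
    parent_splitter=RecursiveCharacterTextSplitter(chunk_size=1000),  # returned
)
# retriever.add_documents(docs)            # index small, store large
# retriever.get_relevant_documents(query)  # search small, return the parent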
Q7: A user asks "What did they announce last quarter?" — what's wrong with this for RAG?
The query has ambiguous references: "they" (who?) and "last quarter" (relative time). RAG's retrieval will get confused. Solutions: (1) Query clarification: ask user to specify company/person, (2) Query expansion: LLM infers from context and expands to "What did [Company] announce in Q3 2024?", (3) Conversation memory: if previous turns mention the company, use that context to resolve ambiguity.
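Solution (2) is a single LLM call before retrieval. A sketch (the prompt wording here is illustrative, not canonical):

# Condense an ambiguous follow-up into a standalone search query
from openai import OpenAI

client = OpenAI()

def condense_query(question: str, history: list[dict]) -> str:
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    prompt = (
        "Rewrite the final user question as a standalone search query, "
        "resolving pronouns and relative dates from the conversation.\n\n"
        f"Conversation:\n{transcript}\n\nQuestion: {question}\n\nStandalone query:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# "What did they announce last quarter?"
#   -> e.g. "What did Acme Corp announce in Q3 2024?"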
Q8: How do you handle tables and charts in PDFs for RAG?
Standard PDF parsers (PyPDF) convert tables to mangled text. Better approaches: (1) Unstructured.io: extracts tables as structured HTML, (2) Camelot/Tabula: specialized PDF table extraction to DataFrames, (3) Vision LLMs (GPT-4V): screenshot each page → LLM describes tables as text → embed that text, (4) Separate table index: extract tables to JSON/CSV → separate SQL retrieval path for data questions.
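For approach (2), a Camelot sketch (Camelot works on text-based PDFs; the filename is a placeholder, and DataFrame.to_markdown needs the tabulate package installed):

# Extract PDF tables as DataFrames, then embed a text rendering
import camelot

tables = camelot.read_pdf("report.pdf", pages="all")
for table in tables:
    df = table.df                         # pandas DataFrame of the table
    text = df.to_markdown(index=False)    # readable text to chunk + embed
    # embed(text) and store it alongside the page's prose chunks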

🚀 Advanced

Q9: Explain the "Lost in the Middle" problem in RAG.
Research shows LLMs pay more attention to the beginning and end of their context window, and lose attention to information in the middle. If you pass 10 chunks, the LLM may miss the most relevant chunk if it's in position 4-7. Solutions: (1) Fewer, more relevant chunks (use re-ranking to keep only top 3-4), (2) Put most relevant chunk first in the context, (3) Map-reduce: answer from each chunk separately, then synthesize.
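LangChain ships a transformer for exactly this, LongContextReorder, which moves the top-ranked chunks to the start and end of the context. A small runnable sketch (the placeholder documents stand in for real retrieval output):

# Counter "Lost in the Middle" by reordering ranked chunks
from langchain_community.document_transformers import LongContextReorder
from langchain_core.documents import Document

docs = [Document(page_content=f"chunk {i}") for i in range(6)]  # ranked best-first
reordered = LongContextReorder().transform_documents(docs)
# The best chunks now sit first and last; weaker ones land in the middle
context = "\n\n---\n\n".join(d.page_content for d in reordered)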
Q10: How do you reduce hallucinations in a RAG system?
(1) Explicit instruction: "Answer ONLY using the provided context. Say 'I don't know' if not in context." (2) Temperature = 0 for deterministic answers. (3) Citation enforcement: require LLM to quote the source text it used. (4) Faithfulness check: secondary LLM call verifies answer is supported by retrieved docs. (5) High-quality retrieval: bad retrieval forces LLM to fill gaps with hallucinations — fix retrieval first.
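Points (1) through (3) fit in one prompt template. A minimal sketch (the exact wording is illustrative, tune it for your domain):

# Grounded prompt: context-only answers, explicit "I don't know", citations
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the answer is not in the context, reply exactly: "I don't know."
After your answer, add a "Sources:" line quoting the sentence(s) you used.

Context:
{context}

Question: {question}"""

def build_grounded_prompt(question: str, context: str) -> str:
    return GROUNDED_PROMPT.format(context=context, question=question)

# Pair this with temperature=0 in the chat completion call (point 2)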
Q11: What is RAGAS and what metrics does it measure?
RAGAS (RAG Assessment) is an evaluation framework with 4 key metrics: Faithfulness (is answer supported by context?), Answer Relevancy (does answer address the question?), Context Recall (did we retrieve all needed information?), Context Precision (are all retrieved chunks useful?). It uses LLM-as-judge internally. Target: all metrics above 0.80 for production.
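A hedged RAGAS sketch: the API has changed across versions, so this follows the classic Dataset-based interface, and the example rows are made up:

# Evaluate a RAG pipeline on the four core RAGAS metrics
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (answer_relevancy, context_precision,
                           context_recall, faithfulness)

data = Dataset.from_dict({
    "question": ["What is the refund window?"],
    "answer": ["Refunds are processed within 14 days."],
    "contexts": [["Refunds are processed within 14 days of purchase."]],
    "ground_truth": ["14 days"],
})
scores = evaluate(data, metrics=[faithfulness, answer_relevancy,
                                 context_recall, context_precision])
print(scores)  # aim for every metric >= 0.80 before shipping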
Q12: How would you design a RAG system for 50 million documents?
Architecture: (1) Managed vector DB (Pinecone/Weaviate) — handles scale, sharding, replication, (2) Async ingestion pipeline (Celery/Redis Queue) — process document uploads as background jobs, (3) Hierarchical retrieval — first retrieve at category level, then fine-grained, (4) Distributed embedding service — GPU instances batch-embedding documents, (5) Caching — Redis for frequent queries and their top chunks, (6) ANN indexing — HNSW index for fast approximate nearest neighbor search at scale.
Q13: What is the difference between RAG and a search engine?
Search engines (Elasticsearch, Google) retrieve and rank documents — they return a list of links/documents for the user to read. RAG retrieves documents and then synthesizes a single coherent answer from them using an LLM. RAG is conversational and generative; search is retrieval-oriented and presentational. Modern AI search (Perplexity) combines both: retrieve (like search) + generate answer (like RAG) + cite sources.

💼 Practical / Scenario

Q14: Your RAG chatbot answers "What is our pricing?" incorrectly. Debug the issue step by step.
Step 1: Check retrieval — print the chunks retrieved for this query. Are pricing-related chunks present? Step 2: Check chunk quality — is the pricing information split badly across chunk boundaries? Increase chunk overlap. Step 3: Check embedding relevance — manually embed the query and pricing chunk, compute cosine similarity. Is it high enough? Step 4: Check prompt — is the context being passed correctly? Is there a prompt template issue? Step 5: Check LLM behavior — is the LLM ignoring the context and using training knowledge instead? Make the "ONLY use the context" instruction more explicit.
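Step 3 as code, assuming OpenAI embeddings (any embedding model works the same way; the pricing chunk below is made up):

# Manually check query-to-chunk similarity to isolate retrieval problems
import numpy as np
from openai import OpenAI

client = OpenAI()

def cosine(a, b):
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["What is our pricing?", "Plans start at $29/month per seat."],
)
q_vec, chunk_vec = resp.data[0].embedding, resp.data[1].embedding
print(cosine(q_vec, chunk_vec))  # suspiciously low score -> retrieval problem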
Q15: How do you make a RAG system respond faster?
(1) Streaming — show tokens as they generate, perceived latency drops dramatically. (2) Cache frequent queries — hash common queries, return cached answer (Redis). (3) Smaller LLM — gpt-4o-mini vs gpt-4o (10x faster). (4) Fewer, better chunks — smaller context = faster generation. (5) Async retrieval — run BM25 and vector retrieval in parallel (asyncio.gather). (6) Local embeddings — avoid API latency for embeddings. (7) HNSW index in vector DB for faster ANN search.
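Point (5) as code: most retrievers are synchronous, so asyncio.to_thread lets you run both legs concurrently (the retriever objects passed in are placeholders for your own):

# Run sparse (BM25) and dense (vector) retrieval in parallel
import asyncio

async def retrieve_parallel(query: str, sparse, dense):
    """Run two synchronous retrievers concurrently in worker threads."""
    sparse_docs, dense_docs = await asyncio.gather(
        asyncio.to_thread(sparse.get_relevant_documents, query),
        asyncio.to_thread(dense.get_relevant_documents, query),
    )
    return sparse_docs + dense_docs

# Usage:
# docs = asyncio.run(retrieve_parallel("pricing?", bm25_retriever, vector_retriever))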

❌ Top 10 Beginner RAG Mistakes

  1. Using different embedding models for indexing and retrieval → vectors incompatible → garbage results
  2. No chunk overlap → information at chunk boundaries is lost → incomplete answers
  3. Too-small chunk size (50-100 tokens) → no context, individual sentences are meaningless
  4. k=1 retrieval → depending on one chunk → brittle, misses nuance
  5. High temperature (0.7+) for RAG → model gets creative instead of factual → more hallucinations
  6. Not storing source metadata → can't attribute answers, can't filter by source
  7. Re-indexing entire vector DB on every update → slow, expensive → use incremental updates
  8. No evaluation/testing → don't know if RAG is actually working, can't improve it
  9. Ignoring the "Lost in the Middle" problem → LLM ignores middle chunks → use re-ranking to prioritize
  10. Hardcoding API keys in source code → pushed to GitHub → credentials stolen → huge bill

πŸ—ΊοΈ What to Learn Next

Track | What to Learn | Why
Week 2 | LangGraph for multi-agent RAG | Agentic RAG is the next wave — agents that decide when/what to retrieve
Week 3 | Pinecone or Qdrant in production | ChromaDB doesn't scale — learn managed vector DBs
Week 4 | OpenAI Assistants API / File Search | Managed RAG with no infrastructure — common in enterprise projects
Month 2 | GraphRAG (Microsoft) or Knowledge Graph + RAG | State-of-the-art for complex reasoning across many documents
Month 2 | Fine-tuning + RAG combo | Fine-tune for style/format, RAG for knowledge — best of both worlds
Month 3 | LLMOps: LangSmith, Weights & Biases | Production monitoring, tracing, experiment tracking — what companies use
Parallel | Llama.cpp + Ollama (local LLMs) | Free, private, no API costs — great for learning and offline use cases
💡 The fastest career path: Build 2-3 RAG projects with different data sources (PDF, web scraping, SQL + RAG, code RAG). Each project proves a different skill. Ship them. Write about them on LinkedIn. Recruiters DM people who build in public.
Five days ago, you didn't know what an embedding was. Now you can build a complete RAG system with hybrid search, re-ranking, streaming responses, FastAPI backend, Streamlit frontend, Docker deployment, and RAGAS evaluation.

The difference between you and other freshers: they talk about AI. You've built it. You can explain why embeddings are vectors, what chunk overlap does to retrieval quality, why temperature=0 matters for grounding, and how to debug a RAG system that's giving wrong answers.

Ship the PDF chatbot. Star your own repo. Post a demo on LinkedIn. The AI engineering job you want? You're now qualified for it. 🚀