What can I expect from this course?

  • Practical, tested solutions for getting higher accuracy out of your POC apps.

  • Systematic RAG evaluation techniques.

  • Best practices for consistent and reliable outputs while minimizing hallucination.

  • Cohere credits to run course notebooks.

In collaboration with Cohere and Weaviate

Course curriculum

    1. Welcome to the course

    2. Weave and Cohere credits set up

    1. Chapter goals

    2. Notebook 1: Baseline RAG Pipeline

    3. From basic to advanced RAG

    4. Wandbot

    5. 80/20 rule

    6. RAG best practices

    7. Challenges and solutions

    1. Chapter goals

    2. Evaluation basics

    3. Notebook 1: Baseline RAG Pipeline

    4. Notebook 2: Evaluation

    5. Evaluating retrievers

    6. LLM as a judge

    7. Assertions

    8. Traditional NLP limitations

    9. LLM evaluation in action

    10. Re-evaluating models

    11. LLM eval limitations

    12. Pairwise evaluation

    13. Conclusion

    1. Chapter goals

    2. Notebook 3: Data preparation, chunking and BM25 Retrieval

    3. Notebook 3: Chunking in practice

    4. Notebook 3: BM25 Retrieval

    5. Data ingestion

    6. Data parsing

    7. Chunking

    8. Metadata management

    9. Data ingestion challenges

    10. Best practices

    11. Conclusion

    1. Chapter goals

    2. Notebook 4: Query enhancement

    3. 4 key techniques for query enhancement

    4. Enhancing context

    5. LLM in query enhancement

    6. Query enhancement case study: Wandbot

    1. Chapter goals

    2. Limitations

    3. Compare evaluations

    4. Query translation

    5. Retrieve with CoT

    6. Metadata filtering

    7. Logical routing

    8. Context stuffing

    9. Cross encoder

    10. Notebook 5: Retrieval and reranking

    11. Reciprocal rank fusion

    12. Hybrid retriever

    13. Weaviate Vector Database

    14. Weaviate Hybrid Search

    15. Conclusion

About this course

  • Free
  • 76 lessons
  • 2 hours of video content
  • $15 Cohere credits

Guest instructors

Meor Amer

Developer Advocate at Cohere

Meor is a Developer Advocate at Cohere, a platform optimized for enterprise generative AI and advanced retrieval. He helps developers build cutting-edge applications with Cohere’s Large Language Models (LLMs).

Charles Pierse

Head of Weaviate Labs

Charles Pierse is an ML Engineer at Weaviate on the Weaviate Labs team. His work focuses on putting the latest AI research into production. The Labs team builds AI-native services that build upon and complement Weaviate's existing core offering.

Recommended prerequisites

This course is for people with:

  • familiarity with Python

  • basic understanding of RAG

Course instructors

Bharat Ramanathan

Machine Learning Engineer

Bharat is a Machine Learning Engineer at Weights & Biases, where he built and manages Wandbot, a technical support bot that runs in Discord, Slack, ChatGPT, and Zendesk. He is also pursuing a Master's in Data Science at Harvard Extension School. Bharat is an outdoor enthusiast who enjoys reading, rock climbing, swimming, and biking.

Ayush Thakur

Machine Learning Engineer

Ayush Thakur is an MLE at Weights & Biases and a Google Developer Expert in Machine Learning (TensorFlow). He is interested in all things computer vision and representation learning. For the past two years he has been working with LLMs, covering RLHF and the how and what of building LLM-based systems.

“Very broad view on the many levers to increase RAG performance, grounded with concrete examples and notebooks to apply these techniques... I highly recommend it!”

Gabriel Grandamy

“I've just started chapter 3, it is a really engaging course with great depth and breadth. Really appreciate you guys sharing your journey and the fantastic resources. I highly recommend starting if you have not yet.”

Elle

“This free course has everything you need to know to bring your RAG prototype to production.”

Leonie

“I really enjoyed the RAG++ course.”

Alec

“I really like the fact that this course comes with a stronger curriculum and covers many topics to go from PoC to prod (topics like data ingestion, query enhancements, and optimizing for latency and efficiency etc.).”

Aishwarya

Learning outcomes

  • Get better performance out of your RAG apps using practical and tested solutions

    Spend 1.5 hours learning what we spent 12 months debugging, testing in real-life scenarios, and evaluating.

  • Increase the consistency and reliability of your outputs

    Achieve reliable outputs with fewer hallucinations, higher accuracy, and improved query relevance.

  • Save costs while improving performance

    Optimize your RAG applications to achieve higher performance at a lower cost.

If you would like to start with something more introductory, get started with the Building LLM-Powered Applications course.