Course curriculum

LLM fundamentals

    1. What is a Large Language Model?

    2. Tracing the history and evolution of LLMs

    3. Exploring state-of-the-art LLMs

    4. How large is large? LLM scaling laws

    5. Building versus buying pre-trained LLMs
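As a taste of the scaling-laws topic above, here is a back-of-the-envelope sketch using the widely cited C ≈ 6ND compute approximation (FLOPs ≈ 6 × parameters × training tokens) and the Chinchilla heuristic of roughly 20 training tokens per parameter. The 7B model size is an illustrative assumption, not course material.

```python
# Rough training-compute estimate under the common C ~= 6 * N * D approximation.
# All numbers below are illustrative, not from the course.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

def chinchilla_tokens(n_params: float) -> float:
    """Compute-optimal token budget heuristic (~20 tokens per parameter)."""
    return 20.0 * n_params

n = 7e9                      # a hypothetical 7B-parameter model
d = chinchilla_tokens(n)     # ~1.4e11 training tokens
print(f"tokens: {d:.2e}, compute: {training_flops(n, d):.2e} FLOPs")
```

Plugging in larger models shows why "how large is large" is really a joint question about parameters *and* data.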

Datasets for LLM training

    1. Popular datasets for pretraining LLMs

    2. Specialized domain datasets: biology, medicine, code, etc.

    3. Creating custom datasets: data scraping, preprocessing, and curation

    4. Addressing dataset licensing and ethical considerations

    5. Managing dataset storage and versioning best practices

Evaluating LLMs

    1. Evaluation metrics for various NLP tasks

    2. Conducting comparative analyses of different LLMs

    3. Identifying benchmark datasets and leaderboards

    4. Recognizing limitations and biases in LLMs

Training LLMs

    1. Hardware requirements and distributed training strategies

    2. Understanding model architecture choices

    3. Implementing tokenization methods

    4. Loss functions and optimization techniques

    5. Monitoring and evaluating training progress

    6. Applying regularization and avoiding overfitting
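The loss-functions topic above typically centers on next-token cross-entropy: the model produces a score (logit) per vocabulary token, a softmax turns those into probabilities, and the loss is the negative log-probability of the correct token. A minimal sketch with made-up logits over a toy 3-token vocabulary (not course code):

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, target_index):
    """Negative log-probability of the correct next token."""
    probs = softmax(logits)
    return -math.log(probs[target_index])

logits = [2.0, 1.0, 0.1]                 # unnormalized scores over a 3-token vocab
loss = cross_entropy(logits, target_index=0)
print(f"loss: {loss:.4f}")
```

Averaging this loss over every position in a batch of sequences gives the quantity that training monitors and optimizers minimize.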

Fine-tuning LLMs

    1. Exploring parameter-efficient fine-tuning

    2. LoRA: Low-Rank Adaptation of Large Language Models

    3. Prefix tuning and prompt tuning

    4. Instruction tuning

    5. Reinforcement learning from human feedback (RLHF)
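The LoRA technique listed above can be sketched in a few lines: instead of updating a frozen weight matrix W, you train a low-rank update B·A with rank r much smaller than the matrix dimensions. The zero-init of B and the alpha/r scaling follow the LoRA paper; the dimensions and random data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init -> update starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
assert np.allclose(lora_forward(x), W @ x)  # zero-init B: identical to base model

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out
print(d_in * d_out, r * (d_in + d_out))     # 4096 vs 512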

Training at scale

    1. Experimentation phase: pipeline testing and model scaling

    2. Stable training of large models

    3. When is training complete?

    4. Large model infrastructure

About this course

  • Free
  • 32 lessons

Your Goals

Sign up for this free Weights & Biases course to:

  • Learn the fundamentals of large language models

    Find out about the types of LLMs, model architectures, parameter sizes and scaling laws.

  • Curate a dataset and establish an evaluation approach

    Learn how to find or curate a dataset for LLM training. Dive into the evaluation metrics for various LLM tasks and compare model performance across a range of benchmarks.

  • Master training and fine-tuning techniques

    Get hands-on with advanced training strategies like LoRA, prefix tuning, prompt tuning, and Reinforcement Learning from Human Feedback (RLHF).


Prerequisites

  • Working knowledge of machine learning

  • Intermediate Python experience

  • Familiarity with deep learning frameworks (PyTorch/TensorFlow)



Instructors

Darek Kłeczek

Machine Learning Engineer

Darek Kłeczek is a Machine Learning Engineer at Weights & Biases, where he leads the W&B education program. Previously, he applied machine learning across supply chain, manufacturing, legal, and commercial use cases. He also worked on operationalizing machine learning at P&G. Darek contributed the first Polish versions of BERT and GPT language models and is a leader in the Polish NLP community. He’s a Kaggle competition winner and 3x Kaggle Master.

Thomas Capelle

Machine Learning Engineer

Thomas Capelle is a Machine Learning Engineer at Weights & Biases working on the Growth Team. He is a contributor to the fastai library and a maintainer of the wandb/examples repository. His focus is on MLOps, industry applications of wandb, and fun deep learning projects in general. Previously, he used deep learning for short-term solar energy forecasting at SteadySun. He has a background in urban planning, combinatorial optimization, transportation economics, and applied math.

Be first in line to unlock your LLM potential and earn your certificate.