LLM internals

Expert · 10 sections

1. Foundations of Large Language Models: Architecture and Tokenization (10 slides)
2. Embedding Layers and Positional Encoding Mechanisms (10 slides)
3. Transformer Attention Mechanisms: Self-Attention and Cross-Attention (10 slides)
4. Feedforward Networks and Layer Normalization in LLMs (10 slides)
5. Training Paradigms: Pretraining, Fine-tuning, and Instruction Tuning (10 slides)
6. Optimization Techniques: Gradient Descent, Adam, and Learning Rate Schedules (9 slides)
7. Memory and Efficient Inference: Sparse Attention, Quantization, and Pruning (10 slides)
8. Handling Context Windows and Long-Range Dependencies (10 slides)
9. Scaling Laws and Model Parallelism Strategies (10 slides)
10. Evaluation, Bias Mitigation, and Safety Mechanisms in LLMs (10 slides)
