A visual and mathematical walkthrough of RoPE (Rotary Position Embedding): the intuition behind relative positions, rotation matrices, and how attention scores behave.
Notes on sinusoidal positional encoding, why transformers need position information, and how different frequencies encode token positions.
Chapter-by-chapter notes on building AI applications with foundation models, covering evaluation, data, retrieval, agents, and production systems.
Step-by-step notes on attention, scaled dot-product attention, softmax gradients, and transformer intuition.
Learning notes from Hands-On Large Language Models, covering tokenizers, embeddings, transformer blocks, and LLM components.
Chapter-by-chapter summary notes from Meta Learning, focused on learning practice, programming habits, and growing as a deep learning practitioner.
Detailed notes on quantize/dequantize workflows, quantization error, calibration, packing, and practical model optimization.
Introductory notes on model quantization: lower-precision data types, scale factors, zero points, and model compression basics.