Mastering Transformer Models for GenAI LLMs Live Cohort Jan-2025
30-Hour course spread over 12 sessions. Timing: 7:20 PM to 9:40 PM IST
2nd Jan (Thu) - S1 | 3rd Jan (Fri) - S2 | 8th Jan (Wed) - S3 | 9th Jan (Thu) - S4 | 17th Jan (Fri) - S5
21st Jan (Tue) - S6 | 22nd Jan (Wed) - S7 | 23rd Jan (Thu) - S8 | 29th Jan (Wed) - S9 | 30th Jan (Thu) - S10
5th Feb (Wed) - S11 | 6th Feb (Thu) - S12
Unlock the Power of Large Language Models and Transformer Architectures with Dr. Anand
Course Instructor
As the founder of Learn AI with Anand, Dr. Anand's journey unfolds through education, innovation, and a passion for Data Science and AI. Starting his career at VIT University, Dr. Anand's early years shaped a teaching philosophy beyond conventional boundaries. Transitioning to corporate training, he meticulously designed and delivered comprehensive learning programs tailored for aspiring Data Scientists. He was recognized as the "Best Data Science & AI Educator" by AI Global Media (UK) in Corporate Vision Magazine 2022 for his outstanding contributions to education and training. Earlier, in 2000, he received the AT&T Labs Award from IEEE Headquarters (USA) and the M.V. Chauhan Award from the IEEE India Council, adding further honors to his professional journey.
30-Hour Intensive Training: Course Content
Session-1: Introduction to Byte Pair Encoding (Tokenization), Word Embeddings, Positional Encoding
Session-2: Visualization and Interpretation of Word Embeddings & Positional Encoding
Session-3: Introduction to the Self-Attention Mechanism in the Encoder: Attention Scores vs. Attention Vectors
Session-4: Role of Feed-Forward Layers & Different Output Layer Configurations for Encoder-Only BERT/RoBERTa
Session-5: Loading BERT and Running Inference, Transfer Learning, BERT as a Feature Extractor, Full-Model Training of BERT
Session-6: Introduction to the Decoder Side of the Transformer, Masked Self-Attention and Cross Multi-Head Attention
Session-7: End-to-End Encoder-Decoder Transformer for GenAI Tasks, Loading GenAI Models such as the GPT Series, Gemini, Llama, and Gemma for Direct Inference
Session-8: Introduction to RAG, Docstore and VectorDB, Llama Index and LangChain Frameworks
Session-9: Advanced RAG Systems: MergerRetriever, MultiVectorRetriever, Cross Encoder based Re-Ranking
Session-10: Advanced Fine-Tuning Techniques for LLMs: Exploring LoRA, PEFT, and QLoRA Techniques for Llama & Gemma Models
Session-11: Multi-AI Agent Systems with CrewAI & Google Gemini, LLM Evaluation Metrics: Faithfulness & Context Relevance using RAGAS
Session-12: Deploying LLMs as APIs: Integration with LangChain and FastAPI, Standalone vs. Cloud-Based Configurations
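To give a flavor of the Session-1 material, here is a minimal, simplified sketch of the Byte Pair Encoding merge loop in plain Python. The toy corpus and three-merge setup are illustrative assumptions, not the course's actual code: real BPE tokenizers (e.g., those in Hugging Face libraries) add byte-level handling, end-of-word markers, and learned vocabularies.

```python
from collections import Counter

def get_pair_counts(tokens):
    """Count adjacent symbol pairs across all words in the corpus."""
    counts = Counter()
    for word, freq in tokens.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] += freq
    return counts

def merge_pair(pair, tokens):
    """Merge every occurrence of the given pair into a single symbol."""
    merged = " ".join(pair)   # e.g. "e s"
    joined = "".join(pair)    # e.g. "es"
    return {word.replace(merged, joined): freq for word, freq in tokens.items()}

# Toy corpus: words pre-split into characters, with word frequencies.
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

for _ in range(3):  # perform three BPE merge steps
    counts = get_pair_counts(corpus)
    best = max(counts, key=counts.get)  # most frequent adjacent pair
    corpus = merge_pair(best, corpus)

print(corpus)
```

Each iteration merges the most frequent adjacent pair, so frequent substrings like "es" and "est" become single vocabulary symbols after a few steps — the core idea behind the subword vocabularies used by GPT-style models.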
Provisional Enrollment
Enroll Now