20-Hour Live course spread over 8 sessions. Commencing 9th July 2025, Timing: 07.30pm to 09.45pm IST, every Wednesday & Thursday for 4 weeks. {Enrollment Closed - Seats FULL} Check our recorded courses
Get a complimentary prerequisite module of 30+ hours of course recordings on Classical Computer Vision architectures, including R-CNN, Fast & Faster R-CNN, YOLO, U-Net, Siamese-FaceNet, GANs & more…
Mastering Multi-Agentic AI Frameworks
(Course Code: MMAAI)
20-Hour course spread over 8 sessions. Commencing 5th Aug 2025, Timing: 07.30pm to 09.45pm IST, every Wednesday & Thursday for 4 weeks. {Enrollment Open}
Get a complimentary prerequisite module of 25+ hours of course recordings covering LLM Architecture, Transformer Building Blocks, Encoder-only applications, Encoder-Decoder based applications, GenAI Inference, and LLM + RAG configurations
Session-1: Introduction to Vision Transformers (ViT), Overview of Transformers in NLP vs. Computer Vision
Session-2: Vision Transformer Architecture, Image tokenization: Patching and embedding, Multi-Head Self-Attention in Vision
Session-3: Transfer learning in ViT models, Fine-tuning a pre-trained ViT model using PyTorch (see the code sketch after this outline)
Session-4: Object detection using Vision Transformers (e.g., DETR), Inference & Fine-Tuning the RF-DETR Model using Supervision and Roboflow
Session-5: Zero-Shot Object Detection using Self-Supervised DINO Architecture, DINO for Auto Labeling
Session-6: Segment Anything Model (SAM) - Model Inference, Fine-Tuning SAM using Supervision and Roboflow
Session-7: CLIP (Contrastive Language–Image Pre-training): Zero-Shot Image Classification using CLIP, CLIP as a Building block in Stable Diffusion
Session-8: Stable Diffusion Architecture, Variational Autoencoder (VAE), U-Net & Text Encoders for Image Generation & Neural Style Transfer applications.
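To give a feel for Session-3's hands-on portion, here is a minimal sketch of loading a pre-trained ViT with Hugging Face Transformers and PyTorch, running inference, and preparing a transfer-learning head. The checkpoint name, dummy image, and label count are illustrative assumptions, not prescribed course material.

```python
# Minimal sketch (not course material): load a pre-trained ViT, run inference,
# and set up a transfer-learning head. Checkpoint and dummy image are assumptions.
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

model_name = "google/vit-base-patch16-224"              # assumed checkpoint
processor = ViTImageProcessor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)

# Inference on a dummy image (replace with a real photo for a meaningful label)
image = Image.new("RGB", (224, 224), color="white")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])

# Transfer-learning setup: re-initialize the classification head for your own
# labels (3 here, purely illustrative) and fine-tune with a standard PyTorch loop.
finetune_model = ViTForImageClassification.from_pretrained(
    model_name, num_labels=3, ignore_mismatched_sizes=True
)
optimizer = torch.optim.AdamW(finetune_model.parameters(), lr=5e-5)
```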
MMAAI Course Outline
Session-1: Introduction to Agents, Tools, Internal Reasoning and the ReAct approach, Actions, Enabling the agent to engage with its environment (a minimal ReAct sketch follows this outline)
Session-2: LlamaIndex Agentic AI Framework, Using tools in LlamaIndex, Creating Agentic workflows in LlamaIndex
Session-3: Introduction to the LangGraph Framework, Building blocks of LangGraph, Creating Agentic workflows in LangGraph
Session-4: Introduction to SmolAgents, Tools in SmolAgents, Retrieval Agents, Multi-Agent Systems
Session-5: Building Agentic RAG: A use-case walkthrough
Session-6: Building Effective Agents, When (and when not) to use Agents, Agentic Design Patterns
Session-7: AI Agentic Design Patterns with Microsoft AutoGen, A use-case walkthrough
Session-8: Multi-AI Agent Systems with CrewAI, Multi-Agent Collaboration
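Session-1's ReAct (Reason + Act) idea, which the frameworks above automate, can be illustrated with a tiny framework-agnostic sketch. The stubbed LLM, the calculator tool, and the task below are illustrative assumptions standing in for a real model and real tools.

```python
# Framework-agnostic sketch of the ReAct (Reason + Act) loop that agent
# frameworks automate. fake_llm is a stub standing in for a real LLM call;
# the calculator tool and the task are illustrative assumptions.
import json

def calculator(expression: str) -> str:
    """A trivial tool the agent can call."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(history: list) -> str:
    """Stub LLM: first asks for a tool call, then returns a final answer."""
    if not any(line.startswith("Observation") for line in history):
        return json.dumps({"thought": "I should compute 23 * 7 with the calculator.",
                           "action": "calculator", "input": "23 * 7"})
    result = history[-1].split(": ")[-1]
    return json.dumps({"thought": "I have the result.", "final_answer": result})

def react_agent(task: str, max_steps: int = 3) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = json.loads(fake_llm(history))                # Reason
        history.append(f"Thought: {step['thought']}")
        if "final_answer" in step:
            return step["final_answer"]
        observation = TOOLS[step["action"]](step["input"])  # Act
        history.append(f"Observation: {observation}")       # Observe
    return "Stopped without an answer."

print(react_agent("What is 23 * 7?"))   # prints 161
```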
Unlock the Power of AI & GenAI with Dr. Anand
Course Instructor
Dr. S. Mahesh Anand is a distinguished educator, corporate trainer, keynote speaker, and consultant in the fields of Data Science, Machine Learning, and Artificial Intelligence. With over two decades of experience, Dr. Anand has been instrumental in shaping the learning journey of more than 50,000 students and professionals across India.
Dr. Anand served as a full-time faculty member at VIT University (Vellore) for a decade, where he honed his academic and research skills, before founding his consulting and training firm, Scientific Computing Solutions (SCS-India), in 2012.
His professional footprint includes delivering transformative corporate training sessions for leading organizations such as Great Learning, Chegg, TNQTech, CGI, Mad Street Den, and many startups, alongside conducting over 800 master training sessions for faculty members of higher-education institutions across India.
Among his accolades, Dr. Anand is the recipient of the AT&T Labs Award from IEEE Headquarters and the M.V. Chauhan Award from the IEEE India Council for his pioneering work on an ANN-Fuzzy hybrid AI model for cancer prediction. He was also recognized with the Best Data Science & AI Educator Award by AI Global Media, UK, in 2022.
As the founder of "Learn AI with Anand", a flagship program of SCS-India, he continues to inspire learners through his cohort-based online courses.
MTM for GenAI LLMs: Course Outline (Available in Recorded Format Only as a Complimentary Module of MMAAI)
Session-1: Introduction to Byte Pair Encoding (Tokenization), Word Embeddings, Positional Encoding
Session-2: Visualization and Interpretation of Word Embeddings & Positional Encoding
Session-3: Introduction to the Self-Attention Mechanism in the Encoder: Attention Score vs. Attention Vector, Multi-Head Attention, Latent Attention
Session-4: Role of Feed-Forward Layers, Mixture of Experts (MoE) & different output-layer configurations for encoder-only BERT/RoBERTa
Session-5: Loading and Inferring BERT, Transfer Learning, BERT as a feature extractor, Full-Model Training for BERT (see the sketch after this outline)
Session-6: Introduction to Decoder Side of Transformer, Masked Self Attention and Cross Multi-Head Attention.
Session-7: End-to-End Encoder-Decoder Transformer for GenAI Tasks, Loading GenAI models like GPT Series/Gemini, Llama, Gemma for direct Inference
Session-8: Introduction to RAG, Docstore and VectorDB, Llama Index and LangChain Frameworks
Session-9: Advanced RAG Systems: MergerRetriever, MultiVectorRetriever, Cross Encoder based Re-Ranking
Session-10: Advanced Fine-Tuning Techniques for LLMs: Exploring LoRA, PEFT, and QLoRA for Llama & Gemma Models
Session-11: Multi-AI Agent Systems with CrewAI & Google Gemini, LLM evaluation metrics: Faithfulness & Context Relevance using RAGAS
Session-12: Deploying LLMs as APIs: Integration with LangChain and FastAPI, Standalone vs. Cloud-Based Configurations
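As a taste of Session-5's "BERT as a feature extractor" topic, the sketch below loads a pre-trained BERT with Hugging Face Transformers and takes the [CLS] hidden state as a sentence embedding. The checkpoint name and example sentences are illustrative assumptions.

```python
# Minimal sketch: use a pre-trained BERT as a feature extractor by taking the
# [CLS] hidden state as a sentence embedding. Checkpoint and sentences are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["Transformers power modern NLP.",
             "Vision Transformers adapt the same idea to images."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token embedding for each sentence: shape (batch, hidden_size) = (2, 768)
cls_embeddings = outputs.last_hidden_state[:, 0, :]
print(cls_embeddings.shape)
```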