📄️ Agentic AI Framework
This module teaches the design, development, and deployment of autonomous AI agent frameworks using tools like LangChain, CrewAI, and Hugging Face. It covers everything from package publishing and API integration to building multimodal, multi-agent systems with CI/CD automation, monitoring, and public contribution pipelines.
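As a taste of what the module builds toward, here is a minimal, framework-agnostic sketch of the tool-calling loop that libraries such as LangChain and CrewAI wrap; the `call_llm` stub and the `TOOLS` registry are illustrative placeholders, not part of any specific framework's API.

```python
# Minimal sketch of the agent loop that frameworks like LangChain/CrewAI manage.
# `call_llm` is a hypothetical stand-in for a real chat-completion call.
from typing import Callable, Dict

def get_weather(city: str) -> str:
    """Toy tool: in a real agent this would hit a weather API."""
    return f"Sunny in {city}"

TOOLS: Dict[str, Callable[[str], str]] = {"get_weather": get_weather}

def call_llm(prompt: str) -> str:
    """Placeholder LLM: decides whether to call a tool or answer directly."""
    if "weather" in prompt.lower():
        return "TOOL:get_weather:Paris"
    return "FINAL:No tool needed."

def run_agent(user_query: str, max_steps: int = 3) -> str:
    context = user_query
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:")
        _, tool_name, arg = decision.split(":", 2)
        observation = TOOLS[tool_name](arg)         # execute the chosen tool
        context += f"\nObservation: {observation}"  # feed the result back to the LLM
    return context

if __name__ == "__main__":
    print(run_agent("What's the weather like?"))
```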
📄️ API Development for AI Agents
This module focuses on building and deploying robust APIs tailored for AI agents, including packaging and publishing with pip, npm, and Poetry under full CI/CD automation. It also covers API integration planning, billing systems, version control, access management, and public contribution pipelines to enable scalable and maintainable agent-based systems.
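For orientation, below is a hedged sketch of the kind of agent-facing endpoint the module works toward; FastAPI is an assumption (the module may use another web framework), and the API-key header and key store are illustrative stand-ins for the access-management and billing topics listed above.

```python
# Sketch of an agent-facing API with a simple access check.
# FastAPI is assumed; the header name and key store are illustrative.
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI(title="agent-api", version="0.1.0")

API_KEYS = {"demo-key"}  # in production this would come from a billing/access system

class AgentRequest(BaseModel):
    query: str

class AgentResponse(BaseModel):
    answer: str

@app.post("/v1/agent", response_model=AgentResponse)
def run_agent(req: AgentRequest, x_api_key: str = Header(default="")) -> AgentResponse:
    if x_api_key not in API_KEYS:
        raise HTTPException(status_code=401, detail="invalid API key")
    # A real implementation would dispatch to the agent framework here.
    return AgentResponse(answer=f"Echo: {req.query}")
```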
📄️ LLM-based Model Deployment
This module teaches the complete lifecycle of LLM-based model deployment—from understanding foundational research (like transformers and RAG) to training, quantizing, and deploying both standard and multimodal LLMs. It emphasizes hands-on implementation, evaluation techniques, production readiness, and real-world deployment challenges across various frameworks and model types.
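As one illustrative slice of that lifecycle, the sketch below loads a causal LLM in 4-bit precision with Hugging Face Transformers and bitsandbytes; the model id is a placeholder and the exact quantization settings taught in the module may differ.

```python
# Sketch: loading an LLM with 4-bit quantization for deployment on a single GPU.
# Model id and settings are placeholders; requires transformers, bitsandbytes, torch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                     # place layers across available devices
)

inputs = tokenizer("Explain retrieval-augmented generation in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```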
📄️ Product Development using AI Framework
This module focuses on end-to-end product development using AI agents—starting from technical research and PRD writing to deploying real-world AI-powered applications like blogging agents, sales tools, storybook generators, and content automation systems. It emphasizes both planning and implementation, leveraging frameworks like LangChain, Pydantic Agents, and Relevance AI for building scalable, automated products.
📄️ AI Research (LLM Focused)
This module provides a comprehensive foundation in LLM-focused research, covering core architecture, tokenization, fine-tuning, RAG, evaluation, ethics, and multimodal capabilities. Learners will engage in hands-on experiments, collaborative projects, and scientific writing to produce publication-ready research grounded in real-world applications.
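As a small concrete example of the tokenization topic, the snippet below inspects how a pretrained tokenizer splits text into subword ids; GPT-2 is used only as a convenient public checkpoint, not necessarily the one studied in the module.

```python
# Sketch: inspecting subword tokenization, one of the module's foundational topics.
# Requires the transformers library; "gpt2" is just a small public tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Retrieval-augmented generation grounds LLM answers in retrieved documents."
encoding = tokenizer(text)

print("token ids:", encoding["input_ids"])
print("tokens   :", tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
```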
📄️ AI Research (Core)
This module builds deep foundational and advanced expertise in AI through rigorous study of mathematics, coding, and research-driven framework development. It takes learners from theoretical underpinnings to hands-on implementation, reproducible experimentation, and final research paper dissemination—equipping them to become independent AI researchers.
📄️ AI Full Stack Development
This module trains learners to build end-to-end AI-powered web applications by integrating modern frontend frameworks, backend APIs, and deployed AI models. It covers full-stack development, DevOps, real-time analytics, third-party API integration, and scalable deployment, culminating in a production-ready AI application.
📄️ Deep Learning V2
This module equips learners with end-to-end expertise in deploying deep learning models at scale—from model optimization (ONNX, quantization, distillation) to secure, production-grade APIs integrated with CI/CD, monitoring, and stress testing. It covers real-world deployment across edge and cloud platforms, ensuring learners can deliver high-performance, scalable, and collaborative AI services.
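To make the optimization step concrete, here is a hedged sketch of exporting a small PyTorch model to ONNX and running it with ONNX Runtime; the two-layer model is a stand-in for whatever architecture the module actually deploys.

```python
# Sketch: exporting a PyTorch model to ONNX and serving it with ONNX Runtime.
# The tiny model is a placeholder; requires torch, onnx, onnxruntime, numpy.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
dummy_input = torch.randn(1, 16)

# Export to the ONNX interchange format with a dynamic batch dimension.
torch.onnx.export(
    model, dummy_input, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# Run the exported graph with the ONNX Runtime CPU provider.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(2, 16).astype(np.float32)})
print("logits shape:", outputs[0].shape)  # (2, 4) thanks to the dynamic batch axis
```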
📄️ Graph Databases
This module teaches how to use graph databases like Neo4j for building recommendation systems and enhancing Retrieval-Augmented Generation (RAG) pipelines. It covers everything from graph theory fundamentals and data modeling to deploying graph-enhanced AI systems with monitoring, optimization, and integration into modern LLM workflows.
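As a flavor of the recommendation use case, the sketch below runs a "users who bought X also bought Y" Cypher query through the official Neo4j Python driver; the connection details and the `:User`/`:Item`/`:BOUGHT` schema are assumptions made purely for illustration.

```python
# Sketch: a collaborative-filtering-style recommendation query in Cypher.
# Connection details and the :User/:Item/:BOUGHT schema are illustrative assumptions.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # placeholder
AUTH = ("neo4j", "password")    # placeholder

RECOMMEND = """
MATCH (u:User {id: $user_id})-[:BOUGHT]->(:Item)<-[:BOUGHT]-(other:User)
MATCH (other)-[:BOUGHT]->(rec:Item)
WHERE NOT (u)-[:BOUGHT]->(rec)
RETURN rec.name AS item, count(*) AS score
ORDER BY score DESC
LIMIT 5
"""

def recommend(user_id: str) -> list[dict]:
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            result = session.run(RECOMMEND, user_id=user_id)
            return [record.data() for record in result]
    finally:
        driver.close()

if __name__ == "__main__":
    print(recommend("u-42"))
```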
📄️ Enterprise RAG Pipeline
This module teaches how to design, build, and deploy an enterprise-grade Retrieval-Augmented Generation (RAG) pipeline using modern open-source tools. Covering everything from multi-source data ingestion to LLM orchestration, hybrid search, monitoring, and secure CI/CD deployment, it culminates in a complete, production-ready RAG system with analytics and real-world optimization strategies.
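The retrieve-then-generate core of such a pipeline fits in a few lines; the sketch below uses a stubbed embedding function and a stubbed LLM call purely to show the shape of the flow, which the module builds out with real embedders, hybrid search, orchestration, and monitoring.

```python
# Minimal sketch of the retrieve-then-generate core of a RAG pipeline.
# `embed` and `call_llm` are stubs; a real system swaps in production components.
import numpy as np

DOCS = [
    "Hybrid search combines keyword and vector retrieval.",
    "CI/CD pipelines automate testing and deployment.",
    "Monitoring tracks latency and answer quality in production.",
]

def embed(text: str) -> np.ndarray:
    """Toy embedding: character-frequency vector (stand-in for a real embedder)."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = [float(q @ embed(d)) for d in DOCS]            # cosine similarity
    top = sorted(range(len(DOCS)), key=lambda i: -scores[i])[:k]
    return [DOCS[i] for i in top]

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM orchestration layer."""
    return f"[answer grounded in]\n{prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return call_llm(f"Context:\n{context}\n\nQuestion: {query}")

if __name__ == "__main__":
    print(answer("How does hybrid search work?"))
```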
📄️ LLM Evaluation System
Block 1 (Days 1–15): Foundations & Environment Setup