Python
Python is the most widely used programming language, especially in Generative AI development. It’s an ocean of possibilities — but the good news is, you don’t need to know everything to start building powerful AI solutions.
Variables
Basic building blocks of programming.

Functions
Reusable blocks of code that perform specific tasks and improve efficiency.

Object-Oriented Programming: ·Classes ·Objects ·Inheritance ·Abstraction ·Encapsulation ·Polymorphism
Organizes code into reusable “objects” built from classes, making software more modular, scalable, and easier to maintain.

Containers: ·List ·Tuple ·Named Tuple ·Sets ·Dicts
Containers in Python are data structures that help store, organize, and manage collections of values efficiently.

Exception handling
Gracefully manage and recover from runtime errors.

File handling
Read, write, and manipulate data stored in files.

Pandas
A powerful Python library for data analysis and manipulation.

FastAPI
A modern web framework for building high-performance APIs.
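As a quick taste of these topics, here is a minimal, self-contained sketch that combines classes, inheritance, polymorphism, containers, and exception handling; the `Shape`/`Rectangle` example is made up purely for illustration:

```python
from collections import namedtuple

# A named tuple: a lightweight, immutable container with named fields.
Point = namedtuple("Point", ["x", "y"])

class Shape:
    """Base class: encapsulates a name and declares a method for
    subclasses to override (abstraction + polymorphism)."""
    def __init__(self, name: str):
        self.name = name

    def area(self) -> float:
        raise NotImplementedError  # abstract behaviour

class Rectangle(Shape):  # inheritance
    def __init__(self, width: float, height: float):
        super().__init__("rectangle")
        self.width, self.height = width, height

    def area(self) -> float:
        return self.width * self.height

def safe_area(shape: Shape) -> float:
    """Exception handling: fall back gracefully when area() is missing."""
    try:
        return shape.area()
    except NotImplementedError:
        return 0.0

shapes = [Rectangle(2.0, 3.0), Shape("unknown")]  # a list container
areas = {s.name: safe_area(s) for s in shapes}    # a dict container
print(areas)  # {'rectangle': 6.0, 'unknown': 0.0}
```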
AWS has been the undisputed leader in cloud computing for the last 10+ years. Now booming in Generative AI, it provides the most powerful tools and infrastructure to build, train, and scale AI solutions with ease.
S3
Durable object storage for everything from data lakes to media, at massive scale and low cost.

VPC (Virtual Private Cloud)
Your own isolated network slice in AWS to control traffic, security, and connectivity.

Subnets, Route Tables, Security Groups
Essential building blocks of cloud networking: define segments, routes, and access rules.

ECS (Fargate)
Run containerized applications serverlessly, without managing the underlying servers or clusters.

Application Load Balancer (ALB)
Distributes incoming HTTP/S traffic intelligently to your backend containers or instances.

API Gateway
A gateway to expose, secure, and scale your APIs effortlessly.

Bedrock
AWS’s managed foundation model service: build GenAI apps quickly with pre-trained models and agent tools.

AgentCore
Agentic infrastructure layer in Bedrock for deploying and scaling AI agents with reliability and security.

Lambda
Serverless computing to run code in response to events with zero server management.

ECR
Fully managed container registry to store, version, and deploy your container images.

RDS (PostgreSQL)
Fully managed relational database with automated backups, scaling, and patching.

IAM
IAM ensures secure access control: who can do what, when, with which AWS resources.
The Art of Prompt Engineering (high-quality prompts lead to high-quality outputs)
Prompt Engineering has become the bridge between human intent and AI output — mastering it lets you shape, guide, and control GenAI with precision.
Text-to-SQL is a Generative AI capability where a model converts natural-language input (like English questions) into SQL queries. This lets non-technical users (business analysts, managers, etc.) access and analyse structured data without writing SQL themselves.
Text-to-SQL agents often reach 80-90%+ accuracy on standard benchmarks like Spider and BIRD, but in real-world deployments their success can drop significantly unless best practices are followed. In this project, we teach you gold-standard prompt-engineering techniques, schema awareness, and external context usage for robust implementation. Using e-commerce data, you’ll build an end-to-end system (frontend + backend) on AWS: the backend built with FastAPI, the frontend with Streamlit, data stored in RDS, containerized via ECR & ECS, load balanced with ALB, and using OpenAI LLMs for query generation and summarization.
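To make the schema-awareness idea concrete, here is a minimal sketch of a Text-to-SQL prompt builder; the table names, columns, and few-shot example are hypothetical, and a real system would send the resulting prompt to an LLM:

```python
# Illustrative schema-aware prompt builder for Text-to-SQL.
# The schema and few-shot pair below are made-up examples.
SCHEMA = """\
orders(order_id INT, customer_id INT, order_date DATE, total NUMERIC)
customers(customer_id INT, name TEXT, country TEXT)"""

FEW_SHOT = [
    ("Total revenue per country",
     "SELECT c.country, SUM(o.total) FROM orders o "
     "JOIN customers c ON o.customer_id = c.customer_id GROUP BY c.country;"),
]

def build_prompt(question: str) -> str:
    """Combine schema, few-shot examples, and the user question into one prompt."""
    examples = "\n".join(f"Q: {q}\nSQL: {sql}" for q, sql in FEW_SHOT)
    return (
        "You are a Text-to-SQL assistant. Use only the tables below.\n"
        f"Schema:\n{SCHEMA}\n\n"
        f"{examples}\n\n"
        f"Q: {question}\nSQL:"
    )

print(build_prompt("How many orders were placed in 2024?"))
```

Grounding the model in the exact schema and a worked example is what keeps generated SQL on real tables instead of hallucinated ones.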
LLMs have a limited context window, i.e., they can only “see” a fixed number of tokens of input plus retrieved text at a time. As documents or datasets grow, it becomes impossible to fit everything into the prompt. Without RAG, you must either truncate (and lose useful information) or pay a large cost and latency penalty to use models with bigger windows. RAG solves this by first retrieving only the most relevant chunks of information from external knowledge bases or documents, then feeding them into the model at inference time. That way, the LLM can answer with up-to-date, domain-specific facts without being retrained or overburdened.
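The retrieve-then-read flow can be sketched in a few lines. The keyword-overlap scorer below is a stand-in for the embedding-based similarity search a production RAG system would use; the chunks and question are invented:

```python
import re

# Retrieve-then-read sketch: score chunks against the query, keep only the
# top-k, and build a prompt that fits the context window.
def words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Keep only the k chunks sharing the most words with the query."""
    qw = words(query)
    return sorted(chunks, key=lambda c: len(qw & words(c)), reverse=True)[:k]

chunks = [
    "A refund is processed within 5 business days.",
    "Our headquarters are located in Berlin.",
    "Refund requests require the original receipt.",
]
question = "How do I get a refund?"
context = retrieve(question, chunks)  # only the relevant chunks survive
prompt = ("Answer using only this context:\n"
          + "\n".join(context) + f"\nQ: {question}")
print(prompt)
```

The irrelevant chunk never reaches the model, which is exactly how RAG keeps prompts small and answers grounded.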
A RAG pipeline’s success hinges on how efficiently data is extracted — noisy, fragmented or poorly structured input can derail even the best models.
You Learn:
How to efficiently extract clean text, tables, images, and metadata from documents — accelerating RAG pipelines, reducing errors, and letting you focus on building intelligence rather than wrangling raw data.
Embedding models turn words, images, or other data into vectors of meaning — letting GenAI find what’s similar, relevant, or useful. With embeddings, your AI system sees relationships: how ‘movie’ and ‘film’ are similar, or how ‘cat’ is closer to ‘dog’ than to ‘table’ — powering smarter search, RAG, and recommendations.
You Learn:
Converting text and images to embeddings using text and multimodal embedding models.
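The “vectors of meaning” idea can be shown with cosine similarity. The 3-dimensional vectors below are made up purely for illustration; real embedding models produce hundreds or thousands of dimensions:

```python
import math

# Toy "embeddings": hand-crafted 3-d vectors, not real model output.
EMBEDDINGS = {
    "movie": [0.9, 0.1, 0.0],
    "film":  [0.85, 0.15, 0.05],
    "table": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "movie" sits far closer to "film" than to "table" in embedding space.
print(cosine(EMBEDDINGS["movie"], EMBEDDINGS["film"]))   # high (~0.99)
print(cosine(EMBEDDINGS["movie"], EMBEDDINGS["table"]))  # low  (~0.02)
```

Nearest-neighbour search over exactly this kind of score is what vector databases industrialize at scale.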
Vector databases store data as embeddings rather than just text or tables — enabling similarity search that understands meaning, not just keywords. These databases help GenAI systems find relevant context fast, reduce hallucinations, and scale semantic search.
You Learn:
#SemanticSearch #SimpleRAG #RecommendationSystems #HybridSearch
By inserting a reranker stage, RAG pipelines cut down irrelevant noise, minimize hallucinations, and ensure the final output is more accurate and contextually aligned.
You learn:
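The two-stage pattern can be sketched as follows. In production the reranker would be a cross-encoder model scoring (query, passage) pairs; the term-frequency scorer and documents below are toy stand-ins:

```python
import re

# Two-stage retrieval: a cheap first pass over-fetches candidates, then a
# finer-grained reranker re-scores them and keeps only the best.
def first_pass(query: str, docs: list[str], n: int = 3) -> list[str]:
    """Cheap recall stage: rank by word-set overlap with the query."""
    qw = set(re.findall(r"\w+", query.lower()))
    return sorted(docs, key=lambda d: len(qw & set(re.findall(r"\w+", d.lower()))),
                  reverse=True)[:n]

def rerank(query: str, candidates: list[str], k: int = 1) -> list[str]:
    """Toy precision stage: count every occurrence of each query term,
    standing in for a cross-encoder relevance score."""
    qw = set(re.findall(r"\w+", query.lower()))
    def score(doc: str) -> int:
        dw = re.findall(r"\w+", doc.lower())
        return sum(dw.count(w) for w in qw)
    return sorted(candidates, key=score, reverse=True)[:k]

docs = [
    "Shipping is free for orders over 50 euros.",
    "Shipping times: standard shipping takes 3 days, express shipping 1 day.",
    "Returns are accepted within 30 days.",
]
top = rerank("shipping time", first_pass("shipping time", docs))
print(top[0])
```

The first pass ranks the free-shipping document first, but the reranker promotes the shipping-times document, which is the noise-cutting behaviour the paragraph above describes.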
Graph databases store data as nodes and edges, making relationships first-class—enabling fast, multi-hop queries for recommendations, fraud detection, and more. For use cases where what data is connected matters more than what the data is, graph databases outperform relational systems—offering flexible schema, real-time relationship querying, and superior performance at scale.
You learn:
Using ClinicalTrials.gov data, construct a knowledge graph where each trial, condition, intervention, and outcome becomes a node, linked by meaningful relations.
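A tiny version of that graph, with a multi-hop query, can be sketched in plain Python; the trial, condition, and drug identifiers below are invented, and a real system would use a graph database rather than in-memory dicts:

```python
from collections import defaultdict, deque

# Toy knowledge graph: nodes are trials, conditions, and interventions;
# each edge is a (source, relation, target) triple.
edges = [
    ("TRIAL_001", "studies", "Hypertension"),
    ("TRIAL_001", "uses", "DrugA"),
    ("TRIAL_002", "studies", "Hypertension"),
    ("TRIAL_002", "uses", "DrugB"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append(dst)
    graph[dst].append(src)  # undirected, to allow traversal in both ways

def within_hops(start: str, max_hops: int) -> set[str]:
    """Multi-hop query: every node reachable from `start` in <= max_hops edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

# Trials related to TRIAL_001 through a shared condition sit 2 hops away:
print(within_hops("TRIAL_001", 2))
```

Finding TRIAL_002 via the shared Hypertension node is exactly the relationship-first query style that graph databases make fast at scale.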
LangChain is an open-source framework for building LLM-powered applications that lets you combine models, prompt engineering, and external data sources seamlessly. It simplifies development of RAG, agents, chatbots, and workflows so you can go from prototype to production with much less overhead.
You Learn:
Chaining LLMs with external data sources and APIs to build modular, production-ready applications; tracing chains, debugging, monitoring performance, and logging; evaluation frameworks to measure output quality, set benchmarks, and iterate for better results.
Implementing guardrails ensures your AI behaves responsibly—filtering out bias, reducing hallucinations, securing data, and aligning with values from input to output.
You Learn:
How to configure and enforce Guardrails in AWS Bedrock to detect & filter harmful content, block undesirable or restricted topics, and ensure compliance with safety & privacy policies. How to use contextual grounding and Automated Reasoning checks to reduce hallucinations, validate model outputs, and maintain trust in GenAI systems.
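To show the shape of the idea (this is a conceptual sketch, not the Bedrock Guardrails API): a guardrail screens both the user input and the model output before anything reaches the user. The topic list and thresholds below are illustrative:

```python
# Conceptual guardrail sketch: input filtering for denied topics, plus a
# crude contextual-grounding check on the output (a hallucination signal).
DENIED_TOPICS = {"weapons", "self-harm"}

def check_input(user_text: str) -> bool:
    """Block prompts that touch restricted topics."""
    low = user_text.lower()
    return not any(topic in low for topic in DENIED_TOPICS)

def check_grounding(answer: str, context: str) -> bool:
    """Pass only if at least half of the answer's content words (here,
    words longer than 4 characters) appear in the retrieved context."""
    answer_words = {w for w in answer.lower().split() if len(w) > 4}
    if not answer_words:
        return True
    grounded = sum(1 for w in answer_words if w in context.lower())
    return grounded / len(answer_words) >= 0.5

context = "Our policy: refunds are processed within five business days of receipt."
ok = check_input("How do refunds work?") and check_grounding(
    "Refunds are processed within five business days.", context)
print(ok)  # True
```

Bedrock Guardrails applies the same two-sided screening with managed policies and model-based checks instead of keyword heuristics.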
Transforming ClinicalTrials.gov data into vector + graph databases enables deeper insight into relationships among trials, conditions, interventions, sponsors, and outcomes — making discovery, comparison, and analysis far more powerful than simple tables alone. By doing so, you help researchers ask better questions, accelerate hypothesis generation, improve trial design, and avoid redundant work.
Build a knowledge base using real ClinicalTrials.gov data — with trial PDFs stored in a vector database for semantic search and structured trial metadata modelled as nodes & relationships in a graph database. In this project, you’ll build both frontend and backend systems: backend using FastAPI, LangChain, AWS Bedrock + Lambda/API Gateway; containerized with ECR & ECS; storage via S3 and vector buckets; and the frontend for querying and visualizing results.
Agentic AI takes RAG a step further: it doesn’t just fetch context, it uses LLMs to plan, decide, and act with minimal supervision. Whether it’s chaining tasks, using tools, or coordinating actions, agentic systems enable real-world automation, adaptivity, and smarter outcomes.
LangGraph is a stateful orchestration framework for building long-running, multi-actor AI agent workflows — giving you control over state, tool usage, memory, and decision-flows. It bridges the gap between prototype agents and reliable production systems by enabling dynamic control, debugging, and scalable deployment.
You will learn:
Introduction to Chains, Routers, and Agents; Agents with memory
State and memory; long-term memory
Deep Agents — Learn the fundamental characteristics of Deep Agents and how to implement your own Deep Agent for complex, long-running tasks.
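The core orchestration idea can be sketched in plain Python, without the library: nodes are functions that read and update a shared state, and each node names its successor, which is the routing concept LangGraph's StateGraph packages with persistence, streaming, and debugging on top. The node names and steps below are invented:

```python
# LangGraph-style orchestration sketch in plain Python: a shared state dict
# flows through nodes, and each node returns the name of the next node.
def plan(state: dict) -> str:
    state["steps"] = ["lookup", "summarize"]  # the agent's plan
    return "execute"

def execute(state: dict) -> str:
    step = state["steps"].pop(0)
    state.setdefault("done", []).append(step)
    return "execute" if state["steps"] else "finish"  # loop or move on

def finish(state: dict):
    state["answer"] = f"completed: {', '.join(state['done'])}"
    return None  # terminal node: no successor

NODES = {"plan": plan, "execute": execute, "finish": finish}

def run(state: dict, entry: str = "plan") -> dict:
    node = entry
    while node is not None:  # routing loop driven by each node's decision
        node = NODES[node](state)
    return state

result = run({})
print(result["answer"])  # completed: lookup, summarize
```

Keeping all progress in one state object is what makes such workflows inspectable, resumable, and debuggable in production.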
#Strands Agents
Strands Agents is an open-source SDK from AWS, built on a model-first philosophy for creating AI agents. It simplifies agent development: you define a prompt + tools → Strands handles reasoning, planning, tool execution, state, multi-agent workflows, and deployment.
#MCP (Model Context Protocol) Servers:
MCP servers are like universal adapters for AI agents — letting them securely talk to external tools, data, and workflows with a standard protocol, not custom code everywhere. An MCP server acts as the bridge: it exposes specific functionality (e.g. database queries, file access, APIs) to the AI agent via MCP, so that agents do not need bespoke integrations for every new tool or dataset.
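MCP is built on JSON-RPC 2.0, so a tool call is just a structured message. The sketch below shows the general shape of such a request; the tool name and arguments are hypothetical:

```python
import json

# Shape of an MCP-style tool-call request (JSON-RPC 2.0).
# "query_database" and its arguments are made-up examples of a tool
# an MCP server might expose to an agent.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM trials"},
    },
}
print(json.dumps(request, indent=2))
```

Because every tool is exposed through the same message shape, an agent needs one MCP client rather than a bespoke integration per tool.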
#A2A (Agent-to-Agent Protocol):
Agent-to-Agent (A2A) is an open standard introduced by Google (backed by many partners) to enable AI agents from different vendors or frameworks to communicate securely, share capabilities, and collaborate on tasks.
AgentCore is the foundation for building agentic AI on Bedrock — giving developers tools to create autonomous agents that can plan, reason, and act using AWS and external services.
In this capstone project, you will design and deploy a next-gen multi-agent GenAI system powered by the A2A protocol and MCP servers. The system integrates search agents, ingestion agents, annotation agents, and analysis agents, all orchestrated via LangGraph. Agents collaborate autonomously to ingest clinical/scientific documents, normalize entities, enrich a graph database, and answer complex queries with context-aware reasoning.
