Engineering Track
Generative AI – Technical Implementation
Deep-dive sprint for builders who need to architect, integrate and operate GenAI products. Assemble RAG pipelines, orchestrate prompts, harden deployments and monitor models end-to-end.
Advanced engineering experience covering GenAI stack design, retrieval augmentation, evaluation and operations.
Remote or onsite; cohort sizes are limited to keep live coding hands-on; compatible with CPF and enterprise funding.
Prerequisites: solid Python skills, familiarity with APIs/cloud services and a basic understanding of ML workflows. Participants bring their preferred IDE; we provide datasets, repositories and deployment sandboxes.
Generative AI – Technical Implementation
An engineering-focused bootcamp that helps dev teams move from prototype to production. We cover GenAI reference architecture, retrieval augmented generation (RAG), agent orchestration, deployment and monitoring pipelines so you can deliver secure copilots.
Programme Objectives
- Design GenAI application architecture and choose the right stack (LLMs, vector DBs, orchestrators).
- Build and optimise RAG pipelines (ingestion, embeddings, retrieval strategies, evaluation).
- Implement prompt orchestration, tool calling and agent workflows with guardrails.
- Deploy services through APIs, serverless endpoints or containerised runtimes.
- Secure, monitor and tune models with AI Ops practices (observability, cost control, policy enforcement).
Target Audience
- Software / ML engineers and solution architects building GenAI products.
- Technical product owners and platform leads responsible for AI copilots.
- Automation engineers and MLOps teams modernising AI stacks.
- Consultants implementing RAG or agentic systems for clients.
Format & Learning Approach
- Duration: 2 days (14 hours) – optional 3-day version with extended labs.
- Live coding, architecture reviews, pair programming and troubleshooting clinics.
- Sample repositories (Python + TypeScript), infrastructure templates and evaluation dashboards provided.
Programme Overview – Engineering Roadmap
Module 1 – GenAI Architecture & Stack Choices
Review LLM options (OpenAI, Azure OpenAI, Anthropic, open-source), context windows, latency vs. cost trade-offs, orchestrators (LangChain, Semantic Kernel, LlamaIndex) and deployment patterns.
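To give a feel for the architectural work in this module, here is a minimal, illustrative sketch of a provider-agnostic client interface. The names (`LLMClient`, `StubClient`, `Completion`) are ours, not a specific library's; the idea is that application code depends only on the interface, so back-ends can be swapped while you compare latency and cost.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    text: str
    input_tokens: int
    output_tokens: int


class LLMClient(Protocol):
    """Provider-agnostic interface; concrete classes would wrap OpenAI, Azure OpenAI,
    Anthropic or a self-hosted open-source model."""
    def complete(self, prompt: str, *, max_tokens: int = 512, temperature: float = 0.2) -> Completion:
        ...


class StubClient:
    """Stand-in implementation used for local testing and latency/cost experiments."""
    def complete(self, prompt: str, *, max_tokens: int = 512, temperature: float = 0.2) -> Completion:
        return Completion(text=f"[stub] {prompt[:40]}...",
                          input_tokens=len(prompt.split()),
                          output_tokens=12)


def answer(client: LLMClient, question: str) -> str:
    # Business logic depends only on the interface, so routing between cheap and
    # premium models never touches application code.
    return client.complete(f"Answer concisely: {question}").text


print(answer(StubClient(), "What is a context window?"))
```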
Module 2 – Retrieval Augmented Generation (RAG) Foundations
Ingestion pipelines, document chunking, embedding selection, vector databases (Pinecone, Redis, pgvector) and hybrid search. Includes evaluation of retrieval quality.
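As orientation for the labs, here is a minimal, self-contained sketch of chunking, indexing and top-k retrieval under toy assumptions: the `embed` function is a hashed bag-of-words stand-in, not a real embedding model, and a vector database (pgvector, Pinecone, Redis) would replace the in-memory index in practice.

```python
import numpy as np


def chunk(text: str, size: int = 400, overlap: int = 80) -> list[str]:
    """Fixed-size character chunking with overlap; real pipelines often chunk by tokens or structure."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def embed(texts: list[str], dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding, normalised for cosine similarity."""
    vecs = np.zeros((len(texts), dim))
    for row, t in enumerate(texts):
        for tok in t.lower().split():
            vecs[row, hash(tok) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.clip(norms, 1e-9, None)


def retrieve(query: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    """Cosine-similarity top-k retrieval over the in-memory index."""
    q = embed([query])[0]
    scores = index @ q
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


docs = "Refunds are processed within 14 days. Shipping is free above 50 euros. ..."
pieces = chunk(docs)
index = embed(pieces)
print(retrieve("refund policy", pieces, index))
```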
Module 3 – Prompt Orchestration & Tool Calling
System prompts, dynamic templates, structured output, function calling / tools, agent frameworks, error handling and safety guardrails.
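The guardrail pattern practised here can be sketched without any particular framework: parse the model's structured tool call, enforce an allow-list and argument schema, and surface failures back to the orchestrator. Tool and field names below are purely illustrative.

```python
import json


def get_order_status(order_id: str) -> dict:
    # Illustrative tool backed by a real system (ERP, CRM, database) in production.
    return {"order_id": order_id, "status": "shipped"}


# Registry: tool name -> (callable, allowed argument keys).
TOOLS = {"get_order_status": (get_order_status, {"order_id"})}


def dispatch(raw_tool_call: str) -> dict:
    """Parse a model-produced tool call, enforce guardrails, and run it safely."""
    try:
        call = json.loads(raw_tool_call)
        name, args = call["name"], call.get("arguments", {})
    except (json.JSONDecodeError, KeyError) as exc:
        return {"error": f"malformed tool call: {exc}"}

    if name not in TOOLS:                      # guardrail: only registered tools may run
        return {"error": f"unknown tool '{name}'"}
    fn, allowed = TOOLS[name]
    if set(args) - allowed:                    # guardrail: reject unexpected arguments
        return {"error": f"unexpected arguments for '{name}'"}
    try:
        return {"result": fn(**args)}
    except Exception as exc:                   # surface tool failures to the orchestrator
        return {"error": f"tool '{name}' failed: {exc}"}


# The raw string would normally come from the LLM's structured / function-calling output.
print(dispatch('{"name": "get_order_status", "arguments": {"order_id": "A-1042"}}'))
```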
Module 4 – Deployment & Integration
Expose GenAI services via REST/gRPC APIs, integrate with messaging, automate CI/CD, containerise workloads and manage secrets. Covers serverless options and GPU scheduling. A minimal API sketch follows.
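Below is a minimal FastAPI sketch of the exposure pattern, assuming a placeholder `call_model` function stands in for the real LLM client (with its retries, timeouts and secret handling); run it with `uvicorn app:app`.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI service")


class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 512


class GenerateResponse(BaseModel):
    text: str


def call_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for the actual LLM call (SDK client, retries, timeouts, secrets).
    return f"[demo completion for: {prompt[:40]}]"


@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(text=call_model(req.prompt, req.max_tokens))
```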
Module 5 – AI Ops: Monitoring, Evaluation & Governance
Telemetry, tracing, hallucination detection, feedback loops, policy enforcement, rate limiting, usage analytics and cost optimisation.
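A taste of the telemetry work, as a minimal sketch: wrap each model call with latency, token and cost measurements and emit a structured record. The per-token prices and the word-count token estimate are illustrative assumptions; in practice you would use the provider's usage metadata and your own observability stack.

```python
import json
import time
import uuid

# Illustrative prices per 1K tokens; real values depend on the model and provider.
PRICE_PER_1K_INPUT, PRICE_PER_1K_OUTPUT = 0.0005, 0.0015


def traced_call(model_fn, prompt: str) -> str:
    """Wrap an LLM call with latency, token and cost telemetry."""
    trace_id = str(uuid.uuid4())
    start = time.perf_counter()
    text = model_fn(prompt)
    record = {
        "trace_id": trace_id,
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
        # Rough token estimate; prefer the provider's usage metadata when available.
        "input_tokens": len(prompt.split()),
        "output_tokens": len(text.split()),
    }
    record["estimated_cost_usd"] = round(
        record["input_tokens"] / 1000 * PRICE_PER_1K_INPUT
        + record["output_tokens"] / 1000 * PRICE_PER_1K_OUTPUT, 6)
    print(json.dumps(record))  # replace with your logger / tracing exporter
    return text


print(traced_call(lambda p: f"[demo answer to: {p}]", "Summarise the incident report"))
```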
Deliverable: team-specific reference architecture and RAG implementation checklist.
Outcomes & Key Benefits
- Production-grade understanding of GenAI stack components and integration points.
- Reusable pipelines, prompts and infrastructure templates.
- Improved reliability, observability and cost control across GenAI workloads.
- Clear roadmap to industrialise copilots while meeting security and compliance standards.