Building Intelligent Solutions with Generative AI
Type: Paid
March 10, 2025 @ 9:00 am – March 14, 2025 @ 5:00 pm (UTC+5:30)
About the event
Join us for an immersive journey into the world of Generative AI with our exclusive public event: “Generative AI for Developers | Level 2”. This intensive exploration of Generative AI and Large Language Models (LLMs) is designed for developers, AI enthusiasts, and professionals eager to deepen their expertise in artificial intelligence.
Building on foundational knowledge, participants will work through advanced concepts, tools, and techniques in engaging modules and hands-on lab exercises. By attending, you’ll learn to apply LLMs and generative models to diverse tasks such as image generation, natural language processing, and reinforcement learning from human feedback (RLHF).
Key Outcomes & Benefits:
- Gain advanced proficiency in Generative AI and LLMs.
- Acquire practical skills through hands-on lab exercises.
- Learn cutting-edge techniques for building intelligent solutions.
- Network with like-minded professionals and industry experts.
- Receive a certificate of completion showcasing your expertise.
Who Should Attend:
- Developers and software engineers interested in advancing their knowledge of Generative AI.
- AI enthusiasts seeking to deepen their understanding of artificial intelligence technologies.
- Data scientists and machine learning practitioners aiming to enhance their skills with OpenAI models.
Prerequisite:
Attendance of “Generative AI for Developers | Level 1” or equivalent knowledge is required to ensure participants have the foundational understanding necessary for this advanced workshop.
Don’t miss this opportunity to elevate your skills and unlock new possibilities in the realm of Generative AI. Reserve your spot today!
Agenda:
- A quick recap of Generative AI & LLMs
- Transformer Architecture
- Working with LangChain
- AutoGPT
- VAEs for image generation
- Getting started with Image Generation
- Conditional Image Generation based on class labels or text description
- Concepts related to Style Transfer
- Approaches for GANs
- CycleGAN
- pix2pix
- Understanding OpenAI’s DALL-E Image Generation Process
- Building Diffusion Models from Scratch
- Starting with Noise and Progressing to Final Images
- Developing Intuition at Each Step
- Midjourney: Demonstration App
- Lab(s): DALL-E 2 through Azure OpenAI Services
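The “Starting with Noise and Progressing to Final Images” idea in the diffusion topics above can be sketched in a few lines of NumPy. This is only an illustrative toy of the forward (noising) process with a made-up linear schedule, not code from the course labs: a clean sample x0 is mixed with Gaussian noise according to the closed-form q(x_t | x_0).

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative signal retention up to step t
    eps = rng.standard_normal(x0.shape)        # fresh Gaussian noise
    # Closed form: x_t = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))               # stand-in for a tiny image
betas = np.linspace(1e-4, 0.02, 1000)          # illustrative linear noise schedule
x_early = forward_diffuse(x0, 10, betas, rng)  # still close to x0
x_late = forward_diffuse(x0, 999, betas, rng)  # nearly pure noise
```

At t=999 the cumulative alpha_bar is tiny, so x_t is essentially noise; training a reverse model to undo this one step at a time is what “building diffusion models from scratch” refers to.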
- Introduction to Large Language Models
- Pre-training Large Language Models
- Computational Challenges in Training LLMs
- Scaling Laws and Compute-Optimal Models
- Fine-tuning Techniques
- Instruction Fine-tuning
- Fine-tuning on a Single Task
- Multi-task Instruction Fine-tuning
- Parameter Efficient Fine-tuning (PEFT)
- PEFT Techniques: LoRA and Soft Prompts
- Model Evaluation and Benchmarks
- Evaluating Fine-tuned Models
- Introduction to Benchmarks
- Fine-tuning Lab Setup on Azure OpenAI Studio
- Lab Exercise Walkthrough
- Fine-tuning on a given use case
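The LoRA bullet under PEFT can be made concrete with a minimal NumPy sketch (the layer sizes and rank below are illustrative, not from the course): the pretrained weight W stays frozen, and only a low-rank update B @ A is trained.

```python
import numpy as np

d, k, r = 512, 512, 8                    # illustrative layer dims and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # B starts at zero, so the update starts as a no-op
alpha = 16                               # LoRA scaling hyperparameter

def lora_forward(x):
    # Base path plus scaled low-rank update: (W + (alpha/r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size                     # 262,144 params if W were fine-tuned directly
lora_params = A.size + B.size            # 8,192 trainable params with LoRA
```

Because B is initialized to zero, fine-tuning begins exactly at the pretrained model’s behavior, while the trainable parameter count drops from d*k to r*(d+k).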
- Introduction
- Overview of Fine-tuning Large Language Models
- Importance of Aligning Models with Human Values
- Reinforcement Learning from Human Feedback (RLHF)
- Introduction to RLHF
- Obtaining Feedback from Humans
- Developing a Reward Model for RLHF
- Fine-tuning with Reinforcement Learning
- Fine-tuning Process using RLHF
- Techniques for Optimizing RLHF Performance
- Optional Video: Proximal Policy Optimization
- Addressing Reward Hacking
- Scaling Human Feedback
- Challenges and Considerations
- Strategies for Collecting and Incorporating Large-scale Feedback
- Evaluation and Assessment
- Methods for Evaluating Fine-tuned Language Models
- Assessing Model Performance in Alignment with Human Values
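The “Developing a Reward Model for RLHF” step above is usually trained with a pairwise (Bradley–Terry style) preference loss. A tiny sketch with invented reward scores, just to show the shape of the objective:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss for reward-model training:
    loss = -log(sigmoid(r_chosen - r_rejected)).
    Minimizing it pushes the chosen response's reward above the rejected one's."""
    return -np.log(sigmoid(r_chosen - r_rejected))

# Toy scores the reward model might assign to two completions of one prompt.
good_margin = preference_loss(r_chosen=2.0, r_rejected=-1.0)   # small loss
bad_margin = preference_loss(r_chosen=-1.0, r_rejected=2.0)    # large loss
```

The trained reward model then scores policy outputs during the reinforcement-learning stage (e.g., with PPO, covered in the optional video).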
- Applications of Generative Models
- Importance and Usefulness
- Potential Applications
- Text Generation Use Cases
- Audio Synthesis Use Cases
- Text-to-Image Generation
- Building NLP Applications using OpenAI API
- Summarization, Text Classification, and Fine-tuning GPT Models
- Building Midjourney Clone Application
- Using OpenAI DALL-E and Stable Diffusion on Hugging Face
- Generative AI Project Lifecycle
- Using LLMs in Applications
- Interacting with External Applications
- Helping LLMs Reason and Plan with Chain-of-Thought
- Program-aided Language Models
- LLM Application Architectures
- Responsible AI Considerations
- Concepts for LangChain Projects
- Utilizing embeddings and vector data stores
- Enhancing LangChain performance
- LangChain Framework
- Taking LLMs out of the box
- Integrating LLMs into new environments using memories, chains, and agents
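The “embeddings and vector data stores” topic above boils down to nearest-neighbor search over embedding vectors. A framework-free NumPy sketch (the 3-dimensional vectors are invented for illustration; a real application would call an embedding model and a vector store instead):

```python
import numpy as np

docs = ["LoRA is a PEFT technique", "DALL-E generates images", "RLHF aligns models"]
# Toy embeddings; in practice these come from an embedding model.
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.0, 0.9, 0.1],
                     [0.1, 0.0, 0.9]])

def retrieve(query_vec, doc_vecs, k=1):
    """Return indices of the k most similar documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity of each doc to the query
    return np.argsort(-sims)[:k]       # indices of the top-k matches

query = np.array([0.0, 1.0, 0.0])      # pretend this embeds "image generation"
top = retrieve(query, doc_vecs)
# docs[top[0]] -> "DALL-E generates images"
```

Retrieved documents are then stuffed into the LLM prompt as context, which is the core of the retrieval-augmented patterns LangChain packages up.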
Lab Exercise Using LangChain:
- Developing a question-answering application with LangChain, OpenAI, and Hugging Face Spaces
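The chain pattern behind this lab can be sketched without any framework at all. This is not the actual LangChain API, just the composition it implements (prompt template → LLM call → output parser); the stub LLM below is a placeholder so the sketch runs offline, where the lab would call OpenAI or a Hugging Face model.

```python
from typing import Callable

def make_qa_chain(llm: Callable[[str], str]) -> Callable[[str, str], str]:
    """Compose a minimal 'chain': prompt template -> LLM -> output parser."""
    template = ("Answer using only this context:\n{context}\n\n"
                "Question: {question}\nAnswer:")

    def chain(context: str, question: str) -> str:
        prompt = template.format(context=context, question=question)
        raw = llm(prompt)              # call the model
        return raw.strip()             # trivial output parser
    return chain

# Stub LLM standing in for a real OpenAI / Hugging Face call.
def fake_llm(prompt: str) -> str:
    return "  The event runs March 10-14, 2025.  "

qa = make_qa_chain(fake_llm)
answer = qa("The workshop runs March 10-14, 2025.", "When is the event?")
# answer == "The event runs March 10-14, 2025."
```

Swapping the stub for a real model client, and the inline context for vector-store retrieval, yields the question-answering application the lab builds.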