Whizlabs

Prompt Engineering Generative AI & LLM Models Fundamentals


Instructor: Whizlabs Instructor

Gain insight into a topic and learn the fundamentals.
Intermediate level

Recommended experience

6 hours to complete
Flexible schedule
Learn at your own pace

What you'll learn

  • Understand the fundamentals of Large Language Models (LLMs) and Generative AI.

  • Apply prompt engineering techniques to guide LLM outputs and improve response quality.

  • Explore LLM optimization and advanced techniques such as fine-tuning, evaluation metrics, and Retrieval-Augmented Generation (RAG).

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

March 2026

Assessments

6 assignments

Taught in English


There are 3 modules in this course

Welcome to the module Foundations of Large Language Models and Generative AI. In this module, you will explore the core concepts behind Large Language Models (LLMs) and understand how Generative AI systems are designed and applied. We begin by introducing LLMs and their role within artificial intelligence and machine learning. You will learn what defines a Generative AI model and examine the key components that power these systems. Through a hands-on demo using HuggingFace, you will see how LLMs are applied to common NLP tasks such as text generation and classification. The module also highlights the importance of training data, including how LLMs are trained on large datasets and why data cleaning is critical for improving model performance and reliability. By the end of this module, you will have a clear understanding of how LLMs and Generative AI systems work, how they are trained, and the role of high-quality data in building effective AI solutions.
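The course's HuggingFace demo is not reproduced on this page, but the generation loop it illustrates can be sketched in plain Python. The bigram table below is an invented stand-in for a real model's learned next-token distribution; actual LLMs condition on the whole context with a neural network rather than a lookup table:

```python
import random

# Toy next-token sampler illustrating the core loop of LLM text
# generation: sample the next token from a probability distribution
# conditioned on what came before, append it, and repeat.
# BIGRAMS is a hypothetical hand-made distribution, not model output.
BIGRAMS = {
    "large":    [("language", 0.9), ("scale", 0.1)],
    "language": [("models", 0.8), ("tasks", 0.2)],
    "models":   [("generate", 1.0)],
    "generate": [("text", 1.0)],
}

def generate(start, max_tokens=5, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible sample
    tokens = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(tokens[-1])
        if not options:  # no continuation known: stop generating
            break
        words, probs = zip(*options)
        tokens.append(rng.choices(words, weights=probs, k=1)[0])
    return " ".join(tokens)

print(generate("large"))  # → "large language models generate text"
```

Swapping the table for a trained network's output probabilities is, conceptually, all that separates this toy from real autoregressive generation.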

What's included

6 videos · 2 readings · 2 assignments

Welcome to the module LLM Training, Optimization, and Evaluation. In this module, you will dive deeper into how Large Language Models are trained, optimized, and assessed for performance and reliability. You will begin by understanding the fundamentals of LLM training and optimization, including how massive datasets and computational resources are used to build high-performing models. The module explores different learning techniques such as zero-shot, few-shot, instruction tuning, and Reinforcement Learning from Human Feedback (RLHF), helping you understand how models adapt to tasks with minimal examples. You will also learn about loss functions and how they guide model learning during training. The concept of LLM alignment is introduced to explain how models are tuned to produce safe, accurate, and human-aligned responses. On the evaluation side, you will examine key evaluation metrics, including perplexity, and understand how model quality is measured. The module highlights the critical role humans play in evaluating outputs and refining models, as well as the importance of GPUs in enabling large-scale model training. By the end of this module, you will have a strong understanding of how LLMs are trained, optimized, aligned, and evaluated in real-world AI systems.
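Of the evaluation metrics this module covers, perplexity is the easiest to make concrete: it is the exponential of the average negative log-likelihood the model assigns to the actual tokens, so a lower value means the model is less "surprised" by the text. The probabilities below are made-up model outputs for illustration only:

```python
import math

def perplexity(token_probs):
    """Perplexity from the probability the model assigned to each
    actual token: exp(mean negative log-likelihood). Lower is better."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident = perplexity([0.9, 0.8, 0.95, 0.85])  # model usually right
uncertain = perplexity([0.2, 0.1, 0.25, 0.15])  # model often surprised
print(f"confident model: {confident:.2f}, uncertain model: {uncertain:.2f}")
```

A useful sanity check: a model that assigns probability 0.5 to every token has perplexity exactly 2, as if it were choosing uniformly between two options at each step.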

What's included

8 videos · 1 reading · 2 assignments

Welcome to the module Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures. In this module, you will focus on practical techniques for controlling, adapting, and enhancing Large Language Models to meet real-world requirements. You will start with Prompt Engineering, learning the fundamentals of prompt design and how prompt structure directly impacts model output. The module covers proven techniques for crafting effective prompts that improve accuracy, reasoning quality, and response consistency. A hands-on demo will help you see how small prompt changes can significantly influence LLM behavior. Next, you will explore LLM fine-tuning approaches, including prompt tuning and Parameter-Efficient Fine-Tuning (PEFT). You will understand how prompt-efficient methods such as P-Tuning adapt large models with minimal computational cost. The introduction to NVIDIA NeMo provides insight into frameworks used for customizing and optimizing enterprise-scale language models. Finally, you will examine Retrieval-Augmented Generation (RAG) architecture and learn how combining LLMs with external knowledge sources improves factual grounding and domain-specific performance. By the end of this module, you will understand how to design high-quality prompts, apply efficient fine-tuning techniques, and leverage advanced LLM architectures for scalable generative AI solutions.
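The RAG architecture this module ends with can be sketched in a few lines: retrieve the most relevant passage from an external knowledge store, then prepend it to the prompt so the model answers from grounded context. The keyword-overlap retriever and document list below are toy placeholders; production systems use embedding-based search and a real LLM API:

```python
import re

# Hypothetical mini knowledge store (real systems index many documents).
DOCS = [
    "NVIDIA NeMo is a framework for customizing enterprise LLMs.",
    "PEFT adapts large models by training only a small set of parameters.",
    "Perplexity is a common intrinsic metric for language models.",
]

def tokens(text):
    # Lowercased word set; strips punctuation for fair overlap counts.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, docs):
    # Toy retriever: pick the document sharing the most words with the
    # query. Embedding similarity replaces this in real RAG pipelines.
    q = tokens(query)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(query):
    # Ground the model by injecting retrieved context into the prompt.
    context = retrieve(query, DOCS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does PEFT adapt large models?"))
```

The final prompt string would then be sent to the LLM; because the answer-bearing passage is in the context window, the model can respond from retrieved facts rather than from its parametric memory alone.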

What's included

9 videos · 1 reading · 2 assignments

Instructor

Whizlabs Instructor
Whizlabs
144 Courses · 111,236 learners

Offered by

Whizlabs
