The Prompt Engineering, Generative AI & LLM Models Fundamentals course is designed for learners who want to build a strong foundation in Large Language Models (LLMs), Generative AI concepts, and prompt engineering techniques. It focuses on helping technical professionals and AI enthusiasts understand how modern generative AI systems work and how to interact with and optimize these models effectively for real-world applications.

Prompt Engineering Generative AI & LLM Models Fundamentals

What you'll learn
Understand the fundamentals of Large Language Models (LLMs) and Generative AI.
Apply prompt engineering techniques to guide LLM outputs and improve response quality.
Explore LLM optimization and advanced techniques such as fine-tuning, evaluation metrics, and Retrieval-Augmented Generation (RAG).
Details to know

March 2026
6 assignments

There are 3 modules in this course
Welcome to the module Foundations of Large Language Models and Generative AI. In this module, you will explore the core concepts behind Large Language Models (LLMs) and understand how Generative AI systems are designed and applied. We begin by introducing LLMs and their role within artificial intelligence and machine learning. You will learn what defines a Generative AI model and examine the key components that power these systems.

Through a hands-on demo using HuggingFace, you will see how LLMs are applied to common NLP tasks such as text generation and classification. The module also highlights the importance of training data, including how LLMs are trained on large datasets and why data cleaning is critical for improving model performance and reliability.

By the end of this module, you will have a clear understanding of how LLMs and Generative AI systems work, how they are trained, and the role of high-quality data in building effective AI solutions.
What's included
6 videos, 2 readings, 2 assignments
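To make the "generative" idea in this module concrete, here is a toy sketch of autoregressive generation in plain Python. The vocabulary, probabilities, and transition table are invented for illustration; a real LLM produces its next-token distribution with a learned neural network over a vocabulary of tens of thousands of tokens, but the generation loop works the same way.

```python
import random

# Toy next-token model: for each context token, a probability distribution
# over the next token. Real LLMs compute this distribution with a neural
# network conditioned on the full context, not a lookup table.
TOY_MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.5},
    "a":       {"cat": 0.5, "dog": 0.5},
    "cat":     {"sat": 1.0},
    "dog":     {"ran": 1.0},
    "sat":     {"<end>": 1.0},
    "ran":     {"<end>": 1.0},
}

def generate(greedy=True, seed=0):
    """Generate a sentence one token at a time, stopping at <end>."""
    rng = random.Random(seed)
    token, output = "<start>", []
    while True:
        dist = TOY_MODEL[token]
        if greedy:
            token = max(dist, key=dist.get)          # always pick the most likely token
        else:
            tokens, weights = zip(*dist.items())
            token = rng.choices(tokens, weights=weights)[0]  # sample from the distribution
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())               # greedy decoding: "the cat sat"
print(generate(greedy=False))   # sampling can produce any sentence the model allows
```

Greedy decoding always yields the same output, while sampling trades determinism for variety; real LLM inference exposes the same choice through parameters such as temperature.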
Welcome to the module LLM Training, Optimization, and Evaluation. In this module, you will dive deeper into how Large Language Models are trained, optimized, and assessed for performance and reliability. You will begin by understanding the fundamentals of LLM training and optimization, including how massive datasets and computational resources are used to build high-performing models.

The module explores different learning techniques such as zero-shot, few-shot, instruction tuning, and Reinforcement Learning from Human Feedback (RLHF), helping you understand how models adapt to tasks with minimal examples. You will also learn about loss functions and how they guide model learning during training. The concept of LLM alignment is introduced to explain how models are tuned to produce safe, accurate, and human-aligned responses.

On the evaluation side, you will examine key evaluation metrics, including perplexity, and understand how model quality is measured. The module highlights the critical role humans play in evaluating outputs and refining models, as well as the importance of GPUs in enabling large-scale model training. By the end of this module, you will have a strong understanding of how LLMs are trained, optimized, aligned, and evaluated in real-world AI systems.
What's included
8 videos, 1 reading, 2 assignments
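The perplexity metric covered in this module has a short, standard definition: the exponential of the average negative log-likelihood the model assigns to the correct tokens. The sketch below computes it from a list of per-token probabilities; the probability values are made up for illustration.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(-(1/N) * sum(log p_i)), where p_i is the probability
    the model assigned to the i-th correct token. Lower is better."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is always certain of the correct token has the minimum, 1.0:
print(perplexity([1.0, 1.0, 1.0]))
# Uniform guessing over a 50-token vocabulary gives perplexity 50 —
# perplexity can be read as "the model is as confused as if it were
# choosing uniformly among this many tokens":
print(perplexity([1 / 50] * 10))
# A partially confident model falls in between:
print(perplexity([0.5, 0.25, 0.5]))   # ~2.52
```

In practice, perplexity is computed from the cross-entropy loss on held-out text, which is why the same loss function that drives training also yields a standard evaluation metric.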
Welcome to the module Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures. In this module, you will focus on practical techniques for controlling, adapting, and enhancing Large Language Models to meet real-world requirements. You will start with Prompt Engineering, learning the fundamentals of prompt design and how prompt structure directly impacts model output. The module covers proven techniques for crafting effective prompts that improve accuracy, reasoning quality, and response consistency. A hands-on demo will help you see how small prompt changes can significantly influence LLM behavior.

Next, you will explore LLM fine-tuning approaches, including prompt tuning and Parameter-Efficient Fine-Tuning (PEFT). You will understand how prompt-efficient methods such as P-Tuning adapt large models with minimal computational cost. The introduction to NVIDIA NeMo provides insight into frameworks used for customizing and optimizing enterprise-scale language models.

Finally, you will examine Retrieval-Augmented Generation (RAG) architecture and learn how combining LLMs with external knowledge sources improves factual grounding and domain-specific performance. By the end of this module, you will understand how to design high-quality prompts, apply efficient fine-tuning techniques, and leverage advanced LLM architectures for scalable generative AI solutions.
What's included
9 videos, 1 reading, 2 assignments
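The RAG architecture described in this module can be sketched in a few lines: retrieve the passage most relevant to a question, then prepend it to the prompt so the model answers from that context. The documents and the word-overlap retriever below are simplified placeholders; production RAG systems use vector embeddings for retrieval and an LLM for generation.

```python
# Illustrative document store (contents invented for this sketch).
DOCS = [
    "P-Tuning adapts a frozen model by learning soft prompt embeddings.",
    "RAG combines a retriever with a generator for grounded answers.",
    "Perplexity measures how well a model predicts held-out text.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question.
    Real systems rank documents by embedding similarity instead."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    """Augment the prompt with retrieved context before generation."""
    context = retrieve(question, docs)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("What does perplexity measure?", DOCS))
```

The key design point is that the model's answer is grounded in retrieved text rather than in its parameters alone, which is what improves factual accuracy on domain-specific questions.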