Using ChatGPT in Medical Education for Virtual Patients and Cases

Introduction
ChatGPT describes itself as “an AI-powered entity designed to process and generate human-like text based on vast amounts of data and pre-existing knowledge.” ChatGPT and similar models can generate large volumes of text and information within seconds, but they have limitations. Many educators are concerned about how to prevent students from using ChatGPT. However, instead of focusing on restrictions, this article explores how healthcare educators and students can leverage ChatGPT as a learning tool while keeping in mind its current limitations.
Beyond answering basic questions, ChatGPT can act as a virtual patient simulator, facilitate communication practice, create cases and vignettes, and produce lab results. Even at this early stage of development, these capabilities hold great potential as time-saving tools for both students and educators. While various educational and clinical applications are still being explored, much remains unknown.
For now, here are some ways ChatGPT can be integrated into evidence-based teaching practices to enhance student engagement effectively.

1. Communication Skills
What ChatGPT can do:
As a language prediction model, ChatGPT allows students to engage in discussions and practice communication skills through role-playing. While its role-playing is limited, it can simulate patient interactions, making it particularly useful for preclinical and early clinical practice. Additionally, ChatGPT can provide detailed feedback when given a framework, such as SPIKES. It can highlight where students met specific criteria, identify missing components, and explain their importance.
For example, “It’s unclear whether you provided Mr. Smith with an opportunity to ask questions or express his concerns about the diagnosis and his prognosis. This step is crucial to engage the patient in the decision-making process and ensure that their needs and preferences are taken into account.”
What ChatGPT can’t do:
ChatGPT can give detailed feedback, but it cannot reliably score, rate, or compare attempts. More importantly, ChatGPT lacks emotion, which is a critical aspect of patient interactions. Even when a student delivers bad news poorly, ChatGPT remains polite and optimistic.
Furthermore, ChatGPT tends to anticipate what a physician or nurse wants to hear, providing more detailed information and more accurate medical terminology than a typical patient would. As a result, ChatGPT is best suited to early communication practice, such as learning a framework or preparing for commonly asked patient questions at the start of a new rotation.
Sample prompts:
- “I will play the role of a doctor, and you will play the role of a 20-year-old patient with mild chest pain so I can take your history.”
- “I will play the role of a doctor giving bad news. You will play the role of a 50-year-old patient.”
- After practicing communication with a patient: “Rate my conversation with the patient based on the SPIKES framework.”
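For educators or students comfortable with light scripting, the same role play can be run through the OpenAI API rather than the chat interface, which makes the patient persona reproducible. The sketch below is a minimal example, assuming the official OpenAI Python SDK (`pip install openai`), an `OPENAI_API_KEY` environment variable, and the illustrative model name `gpt-4o`; the persona wording is a hypothetical example, not a tested classroom prompt.

```python
# A minimal sketch of a scripted role play, assuming the OpenAI Python SDK
# (`pip install openai`), an OPENAI_API_KEY environment variable, and the
# illustrative model name "gpt-4o". The persona wording is a hypothetical
# example, not a tested classroom prompt.
from openai import OpenAI

client = OpenAI()

# A system message fixes the persona, so the model answers as the patient
# instead of generating a full doctor-patient dialogue.
messages = [{
    "role": "system",
    "content": (
        "You are role-playing a 20-year-old patient with mild chest pain. "
        "Answer only as the patient, in plain everyday language, one reply "
        "at a time. Do not volunteer medical terminology or a diagnosis."
    ),
}]

def ask_patient(question: str) -> str:
    """Send the student's question and return the patient's reply."""
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask_patient("Hi, I'm the doctor on duty. What brings you in today?"))
```

Because the full transcript accumulates in `messages`, it can be sent back to the model at the end of the interview with a request for feedback against the SPIKES framework.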

2. Virtual Patient Simulations
What ChatGPT can do:
ChatGPT claims it can act as a simulated patient, but the actual results can be disappointing: it often generates a full dialogue between patient and doctor, even after attempts to steer it otherwise. However, when prompted with “Can you be a virtual patient?” or “Can I ask you questions as if you’re a patient?”, ChatGPT performs well. Learners can provide parameters, such as “a 30-year-old with abdominal pain”, and ChatGPT will answer as if it were the patient, produce lab results, and suggest potential diagnoses.
This can be beneficial for learning to take a history, coming up with possible diagnoses, and narrowing down to a specific diagnosis. Students can even inquire if a next step or diagnosis is reasonable, though instructors should note that ChatGPT avoids giving medical advice, so students may need to “remind” it that it is a practice scenario.
What ChatGPT can’t do:
As mentioned previously, ChatGPT does not display emotion as a real patient might. It also does not verify the accuracy of the medical information it provides or guesses. A student could propose an incorrect diagnosis without realising it, because ChatGPT generates content from common or popular sources as well as scientific literature. When asked to assess student responses, ChatGPT’s feedback may be incomplete. Any use of ChatGPT as a virtual patient should therefore be followed by a debrief or discussion to clarify any misconceptions.
Targeting learning objectives with these exercises can be challenging. The desired outcome must be conveyed to ChatGPT at the start of the exercise, which unfortunately reveals the result to the student before they begin. While ChatGPT can be given broad parameters with various outcomes, this limits the educator’s ability to ensure learning objectives are met. One workaround is to provide ChatGPT with a range of options and have the student ask questions and order labs to determine the correct outcome; a scripted version of this workaround, which keeps the target hidden from the student, is sketched after the sample prompts below. While this approach ensures learning objectives are met, it limits the options students can explore.
Another limitation is that a student may simply ask ChatGPT for the potential diagnoses, next steps, or other information they were assigned to find. These exercises may therefore be best used as formative evaluations or discussion starting points.
Sample prompts:
- “ChatGPT, play the role of a 30-year-old male patient with a cough that might be chronic bronchitis, reflux, asthma, or lung cancer.”
- “What would be a reasonable diagnostic test to use in diagnosing this patient?”
- “Would ____ be a reasonable diagnostic test in this case?”
- “Would ____ be a reasonable next step in treating this patient?”
- If ChatGPT refuses to give medical advice, restate the same question starting with “Since this is a practice exercise…”
- “Create a sample OSCE Problem.”
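The hidden-diagnosis workaround mentioned above can also be scripted so that the target condition lives in a system message the student never sees. This is a minimal sketch under the same assumptions as the earlier one (OpenAI Python SDK, illustrative model name `gpt-4o`); the condition list and prompt wording are hypothetical examples chosen by the educator.

```python
# A sketch of the hidden-diagnosis workaround, under the same assumptions as
# the earlier sketch (OpenAI Python SDK, illustrative model name "gpt-4o").
# The condition list and prompt wording are hypothetical examples.
import random

from openai import OpenAI

client = OpenAI()

# The educator defines the possible outcomes; one is chosen at random and
# embedded in a system message the student never sees.
CONDITIONS = ["chronic bronchitis", "gastroesophageal reflux", "asthma"]
target = random.choice(CONDITIONS)

messages = [{
    "role": "system",
    "content": (
        f"You are a 30-year-old male patient whose cough is caused by {target}. "
        "Answer history questions as this patient would, in plain language. "
        "If asked for exam or lab results, report findings consistent with the "
        "condition, but never name the diagnosis yourself."
    ),
}]

while True:
    question = input("Student: ")
    if question.strip().upper() == "DONE":
        print(f"Target diagnosis was: {target}")  # revealed only at the end
        break
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Patient:", reply)
```

Because the system message is hidden, the student cannot simply ask the model for the answer, which partially addresses the limitation noted above; a debrief afterwards remains essential.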

3. Creating Cases
What ChatGPT can do:
ChatGPT can create comprehensive cases, vignettes, and lab results instantly, tailored to almost any desired parameters. For a full case, it provides demographics, chief complaint, physical exam results, diagnostics, and possible diagnoses; for a case vignette, it offers the complaint, history, vitals, and physical examination. This can save educators significant time compared with writing cases from scratch or searching for existing ones.
What ChatGPT can’t do:
Nonetheless, ChatGPT may not retrieve accurate information, so cases must be thoroughly checked for inaccuracies. Faculty should also be aware that students can enter the same parameters and receive the same or similar results, making these cases better suited to discussions, in-class exams, or larger projects.
Sample prompts:
- “Generate a case that could be difficult to distinguish between pneumonia or lung cancer.”
- “Create a case vignette for an adult with difficulty breathing.”
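Educators who want several distinct vignettes at once can wrap the case-generation prompt in a small script. This sketch makes the same assumptions as the earlier ones (OpenAI Python SDK, illustrative model name `gpt-4o`); the parameters are examples only, and every generated case still needs faculty review, as noted above.

```python
# A sketch of batch vignette generation, under the same assumptions as the
# earlier sketches (OpenAI Python SDK, illustrative model name "gpt-4o").
# The parameters are examples only; every generated case should still be
# reviewed by faculty for accuracy before classroom use.
from openai import OpenAI

client = OpenAI()

def generate_vignette(age: int, sex: str, complaint: str) -> str:
    """Ask the model for one case vignette with the given parameters."""
    prompt = (
        f"Create a case vignette for a {age}-year-old {sex} presenting with "
        f"{complaint}. Include history, vitals, and physical examination, "
        "but do not state the diagnosis."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Produce a small set of distinct vignettes for in-class discussion.
for complaint in ["difficulty breathing", "chest pain", "persistent cough"]:
    print(generate_vignette(45, "female", complaint), "\n")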

Caution when using ChatGPT
ChatGPT can be a valuable tool, but it should be used with caution, especially in healthcare and healthcare education. When information is lacking or ambiguous, ChatGPT may generate fictional responses, sometimes referred to as “hallucinations.” This can be particularly problematic for advanced applications involving diagnoses or complicated cases. Medical educators using ChatGPT should therefore be aware of its limitations.
1. As a medical educator, always verify the plausibility and accuracy of any information you obtain from ChatGPT, such as cases or vignettes.
2. If students are given assignments using ChatGPT, ensure that these assignments are followed by a discussion or debrief to clarify any misconceptions, mistakes, or omissions in the information the students receive.
3. Recognise that ChatGPT’s communication skills are currently limited. Communication practice should be focused on procedural skills, such as using a communication framework or taking a history.
4. References generated by ChatGPT may be partially or entirely fabricated, despite appearing valid. If ChatGPT or another language model generates a reference, always search for the original source and follow any DOI links to confirm it exists.
5. Suggestions made by ChatGPT, whether in healthcare or healthcare education, may not be evidence-based, because ChatGPT draws on text from across the internet, not just peer-reviewed scientific material. The expertise of educators and clinicians is essential when evaluating its responses.

When to use ChatGPT or other AI Language Models
Powerful models like ChatGPT have the potential to revolutionise healthcare and healthcare education in ways we may not yet have imagined, but it is important to weigh the benefits against the precautions above when deciding whether ChatGPT is the correct tool for the task.
Prompt Tips
ChatGPT and similar models are evolving rapidly; a prompt that works once may behave differently another time.
If your prompts are not yielding the expected results, try these tips:
- Define your objectives and desired output in advance.
- Be concise. Specify exactly what you want ChatGPT to do.
- Avoid jargon. Use clear, unambiguous language.
- Provide context and relevant keywords.
- Break complicated requests into smaller, step-by-step tasks.
- Instruct ChatGPT to adjust its responses if the initial output is not satisfactory.
- Test your prompts thoroughly before using them in the classroom.
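To make the tips concrete, here is one hypothetical way to assemble a structured prompt in a script; the wording is an untested illustration rather than a recommended classroom prompt.

```python
# A hypothetical structured prompt combining several of the tips above:
# objective first, context, stepwise tasks, and explicit output constraints.
# The wording is an untested illustration, not a recommended classroom prompt.
prompt = "\n".join([
    "Objective: history-taking practice for a preclinical medical student.",
    "Context: you are a 60-year-old patient with new-onset fatigue.",
    "Task 1: answer my history questions one at a time, as the patient.",
    "Task 2: when I write DONE, list the parts of a standard history I missed.",
    "Output: plain everyday language; no medical jargon; no diagnosis.",
])
print(prompt)
```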