Abstract

Background: Generative artificial intelligence is now embedded in how many healthcare students learn. These tools provide rapid explanations, summaries and practice questions, but frequency of use does not equal AI literacy, particularly where accuracy, medicines safety and guideline alignment matter.

Objective: To describe current use of generative AI by Physician Associate students and outline a practical approach to teaching critical AI literacy.

Methods: Educational evaluation using three activities: (1) a baseline survey of students about use and attitudes; (2) an embedded compare-and-critique session in applied pharmacology, in which small groups assessed AI answers against trusted UK sources, including BNF and NICE guidance, followed by a short post-session survey; and (3) reflective conversations with two students to contextualise use within problem-based learning.

Results: In the cohort survey (n=40), use of AI for learning was near universal (39 of 40, 98 per cent). Most students were very or extremely confident using AI for learning (27 of 40, 68 per cent) despite limited prior training (5 of 40, 13 per cent). Students most valued summarising complex topics (38 of 40) and generating revision questions (34 of 40). After the compare-and-critique session (n=26), 24 of 26 (92 per cent) reported greater awareness of the need to check accuracy and reliability when using AI or online resources. Students described using AI to reduce overwhelm and to turn learning outcomes into manageable steps, while emphasising verification against trusted sources across academic and clinical contexts.

Conclusions: Students are already using AI at scale. The educational priority is to help learners calibrate trust, verify outputs against credible sources and preserve depth of understanding. A compare-and-critique session offers a scalable entry point, but AI literacy should be embedded across the curriculum and linked to professional judgement.
