
Information and Digital Literacies GLO: AI-integrated assessments

About AI-Integrated Assessment Tasks

The examples below showcase assessment tasks where AI use is a core, assessable element. Each task requires students to:

  • Clearly explain how AI was used
  • Verify the accuracy of AI-generated content
  • Reflect on risks, limitations, and ethical considerations

Choose a category from the index to explore tasks at Foundation, Proficient, and Advanced levels.

How to Use These Examples

1. Select a task type and level
Pick an example that aligns with your subject’s learning outcomes, discipline, and workload. Adapt the topic, sources/tools, and audience as needed. Keep expectations (e.g. source count, verification depth) consistent with the level.

2. Set clear AI boundaries
Specify which tools (with version/date) are allowed and for what purposes (e.g. outlining, rewriting, slide design). Prohibit uses like fabricating data, uploading licensed or personal content, or generating final analyses. Ensure students have equitable access to tools and require full disclosure of AI use.

3. Require transparency artefacts. For example:

  • AI-use statement (1 paragraph): tool/version, purpose, what AI produced vs student work, verification steps, privacy notes
  • Verification log (table): claim → source → check → result → citation
  • Prompt appendix: include exact prompts (anonymised) and key revisions

4. Assess transparency and verification
Allocate marks for how well students verify facts, distinguish AI vs human work, and use AI appropriately. Include rubric criteria for method, accuracy, authorship, and accessibility.

5. Reinforce privacy, licensing, and accessibility
No sensitive or client data in AI tools. Don’t upload paywalled PDFs. Require alt-text, readable contrast, captions (for media), and proper attribution/licensing.
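
Where deliverables are web-based (e.g. HTML slide or infographic exports), the alt-text requirement can be spot-checked automatically. Below is a minimal Python sketch using only the standard library; the filename slides.html is hypothetical:

```python
# Minimal sketch: flag <img> tags with missing or empty alt text in an
# HTML export. Standard library only; "slides.html" is a hypothetical file.
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.missing: list[tuple[int, int]] = []  # (line, column) of offenders

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt") or ""
            if not alt.strip():
                self.missing.append(self.getpos())

audit = AltTextAudit()
with open("slides.html", encoding="utf-8") as f:
    audit.feed(f.read())

for line, col in audit.missing:
    print(f"Missing alt text at line {line}, column {col}")
```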

Final submission includes:
Deliverable + transparency artefacts (AI-use statement + verification log + prompt appendix)

Research and Synthesis

Task details

Students use a generative AI tool (e.g. ChatGPT, Copilot, or Elicit) to explore a discipline-relevant topic. They submit the AI prompt, the generated response, and a short annotation explaining the purpose of their prompt. Students then critically evaluate the AI-generated content for accuracy, bias, and completeness by comparing it with credible scholarly sources. A reflective component asks students to consider the ethical implications of using AI in academic work, including issues of transparency, attribution, and academic integrity.

Goal: Use GenAI to explore a topic, then verify and critique outputs.

Examples by level

  • Foundation: Health Sciences — Use Copilot to outline factors in rural telehealth uptake; verify with two peer‑reviewed sources; disclose prompts.
  • Proficient: Marketing — Use an LLM (e.g. Copilot/ChatGPT) to draft customer personas for a regional campaign; verify with ABS data and industry reports; reflect on bias.
  • Advanced: Social Work — Use Elicit to surface interventions for family violence; cross‑check with systematic reviews; include ethical risk discussion.

This task encourages thoughtful and responsible use of AI, supports critical thinking, and promotes ethical information practices in digital environments.

Transparency Artefacts

AI-Use statement:

  • AI tools used: [Name + version] Purpose: [brainstorm/outline/summarise/translate/check clarity]
  • Prompts (exact): See Prompt Appendix below.
  • Human work vs AI: I [planned/searched/selected sources/analysed/synthesised/wrote the final text]. AI outputs were treated as draft suggestions only.
  • Verification: Every factual claim generated by AI was checked against [discipline databases] and cited sources (see Verification Log).
  • Data handling: No sensitive, personal, or licensed full‑text was entered. I avoided uploading PDFs behind paywalls.
  • Bias & limitations noted: [brief reflection].

Verification log:

AI claim/idea | Source(s) used to verify | How I checked | Outcome (confirmed/refuted) | Citation added
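
For students keeping the log digitally, the columns above map onto a simple record type that can be exported for submission. A minimal Python sketch; the field names mirror the table, and the example entry and filename are hypothetical:

```python
# Minimal sketch: one verification-log row per AI claim, exported to CSV.
# Columns mirror the table above; the entry and filename are hypothetical.
import csv
from dataclasses import asdict, dataclass, fields

@dataclass
class VerificationRow:
    ai_claim: str
    sources_used: str
    how_checked: str
    outcome: str          # "confirmed" or "refuted"
    citation_added: str

rows = [
    VerificationRow(
        ai_claim="Telehealth uptake is lower in rural areas",
        sources_used="Peer-reviewed source (hypothetical entry)",
        how_checked="Compared AI output against published statistics",
        outcome="confirmed",
        citation_added="Yes",
    ),
]

with open("verification_log.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(VerificationRow)])
    writer.writeheader()
    writer.writerows(asdict(row) for row in rows)
```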

Prompt appendix:

SYSTEM/Instruction: You are a cautious research assistant. When unsure, state uncertainty and suggest authoritative sources.

USER: Draft an outline on <topic> for <audience>. List key factors and questions to investigate. Avoid fabricating citations.

FOLLOW‑UP: Suggest search terms and controlled vocabulary (e.g., MeSH) and likely databases.

Task details

Students use an AI tool (e.g. Copilot, SciSpace, Elicit or Consensus) to help generate an initial overview of the literature on a chosen research question. They critically evaluate the tool’s outputs against a database search, identifying gaps, errors or biases. They submit their short edited literature review plus a reflective commentary on the advantages and disadvantages of AI use and whether it could be applied in practice/industry.

Goal: Use tools like Elicit/Consensus to scaffold, then verify against database searches.

Examples by level

  • Foundation: Education — Use Copilot to gather starting points on feedback literacy; verify in an Education database (e.g. ERIC); record false positives.
  • Proficient: Nursing — Use Consensus for wound‑care dressings; verify in CINAHL/MEDLINE; write a bias/gaps reflection.
  • Advanced: Psychology — Use Elicit to map constructs in digital CBT; verify in PsycINFO; include inclusion/exclusion rationale.

This task encourages rigorous, transparent evaluation of AI-generated literature overviews against database searches - building skills in bias/gap detection, verification, concise synthesis, and professional judgement about when AI is appropriate in practice.

Transparency artefacts

AI-use statement:

  • AI tools used: [Elicit/Consensus/SciSpace/Perplexity + version/date]
  • Role of AI: seed questions, preliminary concept map, and suggested papers (no AI‑generated citations were accepted without verification).
  • Search parity: All AI‑suggested hits were re‑checked in [discipline databases]; inclusion/exclusion decisions were made only on verified records.
  • De‑duplication: I managed records in [Zotero/EndNote], removed duplicates, and tagged AI‑surfaced vs database‑found items (see the sketch after this list).
  • Transparency: PRISMA‑style flow note attached (optional).
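
A minimal sketch of the de-duplication and tagging step flagged above, assuming records are exported as simple title/DOI dictionaries (all record data below is hypothetical):

```python
# Minimal sketch: merge AI-surfaced and database-found records on a
# normalised key, keeping track of where each record came from.
# All record data below is hypothetical.
import re

def norm_key(record: dict) -> str:
    """Prefer the DOI; otherwise fall back to a normalised title."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return doi
    return re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()

ai_surfaced = [{"title": "Feedback Literacy in Higher Education", "doi": ""}]
db_found = [{"title": "Feedback literacy in higher education.", "doi": ""}]

merged: dict[str, dict] = {}
for origin, records in (("AI-surfaced", ai_surfaced), ("database-found", db_found)):
    for rec in records:
        entry = merged.setdefault(norm_key(rec), {**rec, "origins": set()})
        entry["origins"].add(origin)

for entry in merged.values():
    print(entry["title"], "->", sorted(entry["origins"]))
# -> Feedback Literacy in Higher Education -> ['AI-surfaced', 'database-found']
```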

Verification log:

AI-suggested article/paper | Database check (where/how) | Match confirmed | Notes on quality/relevance | Kept?

Prompt appendix:

USER: Map key constructs and seminal works on <topic>. Provide hypotheses and competing viewpoints. Do not fabricate references; if unsure, say so.

FOLLOW‑UP: Propose precise, database‑portable search strings (Boolean + field limits + controlled vocabulary).
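
To illustrate what "database-portable" can look like, the sketch below assembles a single Boolean string from synonym blocks so the same logic can be pasted across databases. The terms are invented for the example, not a validated search strategy:

```python
# Hypothetical example: build one Boolean search string from synonym blocks.
# Terms are illustrative only, not a validated strategy.
concept_blocks = [
    ['"digital CBT"', '"internet-delivered CBT"', "iCBT"],
    ["engagement", "adherence", "dropout"],
]

query = " AND ".join("(" + " OR ".join(block) + ")" for block in concept_blocks)
print(query)
# ("digital CBT" OR "internet-delivered CBT" OR iCBT) AND (engagement OR adherence OR dropout)
```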

Reporting and analysis

Task details

Students will take on the role of industry consultants tasked with preparing a briefing package for a professional client (e.g., a healthcare provider, education organisation, government department, or IT organisation). Using AI tools (e.g., ChatGPT, Perplexity, Elicit, or industry-specific platforms), they will:

  1. Research & Collate: Use AI to gather and synthesise information on a given industry issue (e.g., workforce challenges, regulatory changes, project management implementation, or technology adoption). 
  2. Analyse & Interpret: Critically evaluate the AI-generated outputs, verifying accuracy against at least two authoritative sources. 
  3. Produce Multi-Format Deliverables: Reframe their findings into three different professional formats:
    • An executive summary report (2–3 pages) for organisational leaders. 
    • A visual infographic or dashboard snapshot tailored for staff or stakeholder communication. 
    • A one-page policy or practice briefing note for external stakeholders. 

Students must also include a reflection (500 words) on the strengths, limitations, and ethical considerations of using AI for professional reporting. 

Goal: Produce multi‑format deliverables (exec summary, infographic/dashboard, policy/practice brief) with AI used transparently and verified.

Examples by level

  • Foundation: Business — Local SME workforce retention brief; verify AI claims with industry data; include AI‑use statement.
  • Proficient: Education — School digital‑safety briefing; triangulate AI outputs with ACSC guidance and peer‑reviewed studies.
  • Advanced: Health Sciences — Hospital teletriage implementation pack; verify with clinical standards and cost‑effectiveness evidence.

This task encourages professional-style, ethical AI use by requiring students to research with AI, verify claims against authoritative sources, and translate insights into multiple stakeholder-ready formats - building judgement, verification discipline, and clear multimodal communication.

Transparency artefacts

AI-use statement:

  • AI tools used: [ChatGPT/Copilot/Perplexity + date]
  • Use cases: draft headings, summarise long docs, outline pros/cons, generate alternative phrasing for non‑expert audiences.
  • Boundaries: AI did not generate final numbers, charts, or recommendations; all figures come from cited datasets (ABS, industry, gov).
  • Verification & provenance: All statistics were checked; data sources are cited next to each figure (see Fact Check Table).
  • Accessibility & integrity: Plain language, alt‑text, correct scales; no misleading axes; licences included for images/data.

Fact check table:

Claim/metric | Source (link/citation) | Last accessed | Verification method | OK to use?

Prompt appendix:

USER: Suggest a concise executive‑summary structure for <topic/audience>. Provide 3 headline options and 3 risks/limitations to flag. Do not invent numbers.

FOLLOW‑UP: Draft a plain‑language paragraph for non‑expert readers; keep to <120 words> and include uncertainty notes.

Presentations

Task details

Students research a discipline-relevant case study and deliver a short oral presentation to a defined audience (e.g. peers, practitioners). They may use an AI slide tool (e.g. Gamma, SlidesGo, PowerPoint/Copilot) to draft structure, layouts, and visuals, but must verify all content, edit for accuracy and accessibility (clear headings, alt text, readable contrast), and cite sources. Include one slide with an AI-use statement (tool/model, purpose, prompts) and a brief note on changes made to AI-generated material. Note: do not upload sensitive data to AI tools.

Goal: Use AI to draft structure/visuals; verify, edit for accuracy/accessibility; disclose prompts and changes.

Examples by level

  • Foundation: Law — 5‑minute briefing on privacy and consent in school apps; include AI‑use statement and alt‑text on images.
  • Proficient: Paramedicine — Case‑based talk on sepsis recognition; verify with guidelines; add accessibility checks.
  • Advanced: Vet/Animal Science — Grand rounds‑style presentation on feline CKD; include evidence appraisal and limits.

This task encourages transparent, ethical AI use in professional communication - building judgement about when AI helps, verification discipline, and clear audience-appropriate presenting.

Transparency artefacts

AI-use statement:

  • AI tools used: [Beautiful.ai/Canva Assistant/Copilot/ChatGPT]
  • What AI produced: slide outline, draft speaker notes, icon suggestions. What I authored: final slide content, data points, narration, and design adjustments.
  • Verification: Clinical/legal facts were checked against [guidelines/legislation/case law]; citations appear on relevant slides; see Slide Fact Check.
  • Accessibility: All images have alt‑text; sufficient contrast; captions on media; animations used sparingly.

Slide fact check:

Slide # | Key claim | Source used | How verified | OK to use?

Prompt appendix:

USER: Propose a 6‑slide outline for <topic> to <audience>. For each slide give: key message, 3 bullet ideas, and a relevant graphic concept. Avoid medical/legal advice.

FOLLOW‑UP: Draft speaker notes in <2–3 sentences/slide> using plain language.

Task details

Students research a discipline-relevant topic and create an infographic/poster for a defined audience (e.g., public, executives, peers). They may use AI design/writing tools (e.g., Canva, Gemini, ChatGPT, Infogram) to propose layouts, draft copy, or generate graphics, but must verify all facts and data, ensure accessibility (clear hierarchy, readable contrast, alt text/captions), credit image/data sources and licences, and avoid misleading visuals. Include a brief AI-use statement (tool/model, purpose, prompts) noting edits made to AI-generated elements. Note: do not input sensitive or licensed full-text into AI tools.

Goal: Produce an accessible, audience‑targeted visual using verified sources and transparent AI use.

Examples by level

  • Foundation: Health Sciences — Poster for the public on “Sun safety for outdoor workers.” Use Canva’s design suggestions; verify facts with Australian government/NGO sources; include alt‑text, clear hierarchy, and a short AI‑use statement (tool, prompts, edits).
  • Proficient: Marketing/Business — Executive infographic on “Regional SME e‑commerce trends.” Use AI to draft headline options and layout ideas, then validate figures with ABS and industry reports; include small‑print methods note and licence attributions.
  • Advanced: Psychology/Education — Practice brief poster on “Feedback literacy strategies that improve learning outcomes.” Use AI for draft copy, verify against systematic reviews, include data cautions, and attach a verification log + AI‑use appendix.

This task encourages transparent, ethical AI use in visual communication - building verification discipline, audience-aware plain-language writing, and responsible data presentation.

Transparency artefacts

AI-use statement:

  • AI tools used: [Canva Assistant/ChatGPT/Gemini/Infogram]
  • Purpose: layout options, draft microcopy, alt‑text suggestions, icon styles.
  • Edits made: I rewrote all fact statements, replaced generic icons, and adjusted hierarchy/contrast.
  • Verification: All numbers/text were cross‑checked (see Visuals Verification Log); sources and licences listed in small print.
  • Privacy/licensing: No sensitive data or licensed PDFs were uploaded; only openly licensed imagery used.

Visuals verification log:

Visual element | Fact/number shown | Source and access date | Check performed | OK to use?

Prompt appendix:

USER: Suggest 3 layout options for an A3 infographic on <topic> for <audience>. Include section headings and suggested chart types. Do not invent statistics.

FOLLOW‑UP: Draft concise, plain‑language captions (<20 words each) for the charts.

Task details

Students produce a web-readable digital essay on a discipline-relevant technology topic. They must use an approved AI tool for specific drafting/support tasks (e.g., outline options, plain-language rewrites, alt-text/caption suggestions, link curation) and document this use. The essay must include multimedia (images/audio/video), descriptive alt-text/captions, and hyperlinks to authoritative sources. Students demonstrate clear digital communication, critical engagement with content, and responsible, transparent AI use.

This task supports multimodal literacy, digital authorship, and reflective exploration of technology within academic and professional contexts.

Goal: Compose a hyperlinked, media‑rich essay with source transparency.

Examples by level

  • Foundation: Paramedicine — Use an approved general LLM (e.g., Copilot tenant, ChatGPT EDU) for outline + plain-language rewrites, and a design assistant (e.g., Canva Assistant) for alt-text/caption suggestions. Do not upload personal data or licensed PDFs. Verify all claims in your log.
  • Proficient: Vet/Animal Science — Web essay on heat stress in working dogs; include field images with alt‑text and licensing attribution. Use an AI research tool (e.g., Elicit/Consensus) to surface starting papers and a design assistant for captions/licences. Re-find all papers in CINAHL/MEDLINE; record false positives.
  • Advanced: Business — Interactive data essay on regional tourism trends; link to live datasets and describe methods. Use an LLM for structure/micro-edits and a search tool (e.g. Perplexity) to identify authoritative source links. Produce all figures from ABS/live datasets; every number must appear in the verification log with a source link.

Transparency artefacts

AI-use statement (students complete):

  • AI tools (name + version/date):
  • Purpose of use: (e.g., outline options, plain-language rewrite, draft alt-text, caption suggestions, link summarisation)
  • Human authorship: I selected sources, wrote/edited the final text, and decided all media and links.
  • Verification: Facts, quotes, figures and media attributions were checked against cited sources/datasets (see Verification Log). No AI-generated citations accepted without verification.
  • Data & privacy: No personal/clinical/client data or licensed full-text PDFs were uploaded.
  • Limitations noted: (e.g., missing context, over-confident wording, generic alt-text—addressed in edits)

Verification log:

Claim/fact/quote | Source (URL/citation) | How checked (method) | Outcome (confirm/refute) | Citation/hyperlink added?

Prompt appendix:

SYSTEM: Be cautious and factual. Don’t invent citations/data. Flag uncertainty and suggest credible sources. Write for the web (concise, clear, accessible). Don’t process personal/clinical data or licensed full-texts.

USER: Give a clear outline for a digital essay on <topic> for <audience>. Draft a ~120–140 word sample section in plain language (≈ Grade 8). Provide 3–5 authoritative links (publisher + year, 1-line rationale). Suggest alt-text (≤125 chars) for <image> and a 1–2 sentence caption for <figure> (no new numbers).

FOLLOW-UP: Refine the outline (3–5 bullets/section). List key misrepresentation risks and fixes. Recommend verification targets for each factual claim (e.g., ABS table, guideline, review). Propose precise edits for accuracy/attribution and a summary.


Governance and compliance

Task details

Students are given a workplace-style scenario involving human, organisational, or sensitive data. They design a practical AI-use protocol that includes: a data-flow diagram; concise consent/notice text; a de-identification and access plan; a tool/vendor risk table (data residency, retention, training reuse, security, accessibility); a storage & retention schedule; an “AI off-limits” checklist with a manual fallback; an incident/escalation mini-playbook; and a reusable AI-use statement template for outputs. Students must not enter real personal data into AI tools and should verify any AI-assisted drafting against authoritative sources.

Goal: Design a practical AI‑use protocol for sensitive or organisational data with privacy, consent, licensing, and risk controls.

Examples by level

  • Foundation: Health Sciences — Draft a basic AI “dos and don’ts” for de‑identified case summaries; add a simple data‑flow diagram (collection → storage → AI tool → output) and a one‑paragraph consent/notice template.
  • Proficient: Business/Education — Build a policy pack for a school or SME: vendor/tool risk table (data residency, retention, training reuse, security, accessibility), role‑based access controls, storage/retention schedule, and an “AI off‑limits” checklist with manual fallback.
  • Advanced: Law/Psychology/Social Work — Full protocol for a research or clinical context: DPIA‑style assessment, de‑identification plan, consent artefacts (adult/youth/guardian), incident & escalation mini‑playbook, and a reusable AI‑use statement template for publications/briefs; prohibit entry of real personal data into AI tools and require verification of any AI‑drafted text.

This task makes students apply privacy, consent, licensing, and risk management to real workflows.
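
To make the de-identification requirement concrete, here is a minimal redaction sketch. The regex patterns and sample text are illustrative only; a real protocol would pair a vetted de-identification tool with human review:

```python
# Minimal sketch: redact obvious identifiers before text goes anywhere near
# an AI tool. Patterns and sample text are illustrative, not production-grade.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[\d\s-]{8,12}\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def deidentify(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Seen 03/07/2024; contact j.citizen@example.com or 0412 345 678."
print(deidentify(sample))
# -> Seen [DATE]; contact [EMAIL] or [PHONE].
```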

Transparency artefacts

AI-use statement:

  • Context: [teaching/research/operations] involving [data categories].
  • AI tools permitted: [list + versions]; prohibited: [list] for [reasons].
  • Purposes allowed: drafting non‑sensitive text, classification of de‑identified data, code scaffolding; not allowed: processing personal/health/identifying data.
  • Data handling: No uploading of personal or licensed full‑text; apply de‑identification standards; store outputs in [approved system] with retention of [X].
  • Vendor risks: See risk table; high‑risk tools require [approval path].
  • Incident response: Follow escalation mini‑playbook; document and notify within [timeframe].

Risk and verification table:

Tool/vendor | Data residency | Retention | Training reuse | Security/accessibility notes | Approved use?

Prompt appendix:

USER: Draft a data‑flow diagram and consent notice for using de‑identified <data type> with <AI tool>. Highlight where human review occurs. Do not include any personal data.

FOLLOW‑UP: Generate a checklist of “AI off‑limits” use cases and a fallback manual process for each.

Clinical and industry simulations

Task details

Students are provided with a patient case study and asked to design an appropriate clinical examination plan. Working in pairs, each student conducts a mock examination while using an AI-powered medical scribe tool (e.g. Heidi Health - AI Medical Scribe for AU Clinicians). Students will then submit: 

  1. The raw AI-generated transcript of their examination. 
  2. A patient-friendly handout (generated by the AI tool) summarising the examination and next steps. 
  3. A professionally edited and refined version of the transcript suitable for inclusion in clinical records. 
  4. Reflection on using an AI tool and the quality of the patient handout. 

Goal: Design and perform an appropriate clinical examination for a presented case, use an AI scribe ethically during the mock exam, then verify and refine outputs for professional use and patient communication.

Examples by level:

  • Foundation (100‑level/early clinical readiness): Nursing — Case: suspected cellulitis in a rural clinic. In pairs, plan the focused exam (vitals, local assessment, red flags). Use an AI scribe during a 5‑minute mock encounter. Submit: (1) raw AI transcript; (2) AI‑generated patient handout (plain language); (3) edited transcript suitable for a clinical note; (4) 300‑word reflection on AI accuracy, omissions, and risks (privacy, overstatement).
  • Proficient (200–300): Paramedicine — Case: possible sepsis in prehospital setting. Plan a primary/secondary survey with time‑critical cues. Use an AI scribe during a 7‑minute mock assessment. Submit: (1) raw transcript; (2) AI handout for patient/carer; (3) professionally edited ePCR‑style note including differential and risk stratification; (4) 400‑word reflection including a verification log for any AI‑stated advice and an AI failure‑mode checklist (hallucinations, missed vitals, inconsistent timings).
  • Advanced (300+ / Honours / PG): Vet/Animal Science — Case: canine acute abdomen in regional practice. Plan a species‑appropriate clinical exam and immediate work‑up. Use an AI scribe during an 8‑minute mock consult. Submit: (1) raw transcript; (2) AI‑generated owner handout (plain language, include consent and after‑hours instructions); (3) edited clinical record aligned to practice templates (problem list, differentials, plan, client communication); (4) 500‑word reflection covering verification, medico‑legal documentation standards, cultural/owner considerations, and a brief data‑governance note (no real client data; de‑identification).

This task encourages transparent, ethical AI use by requiring students to disclose, justify, and verify any AI assistance - strengthening integrity, reproducibility, and workplace-ready practice.

Transparency artefacts

AI-use statement:

  • AI tool(s) used: [Name + version/date] Purpose during task: real‑time scribing of mock examination; plain‑language draft of patient/owner handout.
  • Human authorship: I planned and performed the exam; I authored the final clinical note and edited all AI text for accuracy, tone, and appropriateness.
  • Verification: All advice in the handout and critical facts in the note were checked against current clinical guidelines/authoritative texts (see Verification Log). No AI‑generated citations were accepted without verification.
  • Data handling: No real patient/client identifiers were entered. Content was de‑identified and stored in approved locations only. I did not upload licensed PDFs behind paywalls.
  • Limitations & risks noted: Possible omissions, incorrect emphasis, bias, or fabricated details in the AI transcript/handout; I mitigated these via structured checklists and verification.

Verification log:

Item/claim from AI output | Source used to verify (guideline/text) | How checked | Outcome (confirm/qualify/refute) | Citation added?

Prompt appendix:

SYSTEM: You are an accurate, cautious medical scribe. Do not invent details. Flag uncertainty. Avoid medical advice beyond documenting what was said.

USER (scribe use): Capture the clinician–patient interaction verbatim for a mock exam on <case>. Structure output as time‑stamped notes with headings (HPI, Exam, Impression, Plan). Do not fabricate values.

USER (handout): Draft a patient‑friendly handout for <condition/exam> in plain language (approximately Grade 8 level). Include what to expect, self‑care, red flags, and when to seek help. Do not include medication doses unless provided.
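
Where a Grade 8 target is set, drafts can be sanity-checked with a standard readability formula such as the Flesch-Kincaid grade level. A rough Python sketch; the syllable counter is a crude vowel-group heuristic, so treat the score as indicative only:

```python
# Rough Flesch-Kincaid grade-level check for handout drafts.
# The syllable count is a crude vowel-group heuristic; indicative only.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

draft = ("See your doctor if the redness spreads, you feel feverish, "
         "or the pain gets worse.")
print(f"Approximate grade level: {fk_grade(draft):.1f}")
```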

Accessibility and integrity notes:

  • Provide alt‑text for any icons/images; ensure colour contrast in handouts
  • Include an AI‑use statement; retain a copy of the raw transcript
  • Acknowledge sources; respect licensing for any reused material

Integrating AI literacy into non-AI assessments

As an alternative to setting fully AI-focused assessments, build AI literacy by weaving small, clearly disclosed AI steps into existing tasks - for brainstorming, scoping, drafting, or quality checks. Require students to disclose how AI was used, verify any AI-influenced elements, and reflect on their choices. This mirrors workplace practice: AI can support parts of a workflow, but people remain accountable for the final product. 

How this helps 

Small, well-scoped AI elements develop transparent, ethical, and verifiable habits; improve search and revision strategies; and build workplace-ready judgement about when and how AI adds value. 

Easy ways to weave AI in:

Guidance notes for students (examples):

  • Permitted AI use (scoped): “AI may be used for idea generation, outline variants, search-term discovery, and targeted feedback on clarity/structure. Do not use AI to write analysis, generate citations, or summarise articles.” 
  • Disclosure requirement: “Include a 50–100 word AI-use statement and a one-page prompt log (tool/model/date/purpose, key prompts).” (A minimal logging sketch follows this list.)
  • Verification step: “List 2–3 elements influenced by AI and how each was verified or corrected with credible sources.” 
  • Privacy & licensing: “Do not input personal/confidential data or licensed full text into AI tools; use synthetic/redacted examples only.” 
  • Assessment weight: “AI transparency & verification = 5–10% of the grade.” 
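
A minimal sketch of how the prompt log mentioned above could be kept as entries accrue, appending JSON lines that are easy to format into a one-page appendix later (the filename and entry are hypothetical):

```python
# Minimal sketch: append prompt-log entries as JSON lines for later
# formatting into the one-page appendix. Filename and entry are hypothetical.
import json
from datetime import date

def log_prompt(tool: str, purpose: str, prompt: str,
               path: str = "prompt_log.jsonl") -> None:
    entry = {"date": date.today().isoformat(), "tool": tool,
             "purpose": purpose, "prompt": prompt}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_prompt("ExampleLLM v1 (hypothetical)", "search-term discovery",
           "Suggest search terms on feedback literacy for first-year units.")
```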

AI-use statement: Example template 

“I used [tool/model] on [date] for [purpose: brainstorming terms/outline/clarity feedback]. I verified AI suggestions against [databases/guidelines] and edited where inaccurate or biased. No AI wrote analysis/findings or summaries. I accept responsibility for the accuracy, originality, and ethics of this submission.” 

Acknowledging AI use in assessments (why and how)

Why add an acknowledgement? Requiring students to disclose and reflect on AI use builds workplace-ready habits: transparency, reproducibility, ethical judgement, and risk awareness. It also protects academic integrity (no hidden assistance), improves verification skills, and mirrors professional expectations where AI-assisted work may need to be logged and defensible.

How to build it into tasks (quick options)

  • Include an AI-use statement (short paragraph) on the cover page or methods section.
  • Require a prompt/log appendix (tool, date, purpose, key prompts/settings).
  • Add a verification step (what was checked against which sources; fixes made).
  • Set guardrails (permitted/forbidden uses, privacy & licensing reminders).
  • Assess disclosure (allocate 5–10% for quality of transparency and verification).

