The examples below showcase assessment tasks where AI use is a core, assessable element. Each task requires students to use AI transparently, verify its outputs, and reflect on its limitations. Choose a category from the index to explore tasks at Foundation, Proficient, and Advanced levels.
1. Select a task type and level
Pick an example that aligns with your subject’s learning outcomes, discipline, and workload. Adapt the topic, sources/tools, and audience as needed. Keep expectations (e.g. source count, verification depth) consistent with the level.
2. Set clear AI boundaries
Specify which tools (with version/date) are allowed and for what purposes (e.g. outlining, rewriting, slide design). Prohibit uses like fabricating data, uploading licensed or personal content, or generating final analyses. Ensure students have equitable access to tools and require full disclosure of AI use.
3. Require transparency artefacts
For example: an AI-use statement, a verification log, and a prompt appendix (templates for each are provided with the tasks below).
4. Assess transparency and verification
Allocate marks for how well students verify facts, distinguish AI-generated material from their own work, and use AI appropriately. Include rubric criteria for method, accuracy, authorship, and accessibility.
5. Reinforce privacy, licensing, and accessibility
No sensitive or client data in AI tools. Don’t upload paywalled PDFs. Require alt-text, readable contrast, captions (for media), and proper attribution/licensing.
Final submission includes:
Deliverable + transparency artefacts (AI-use statement + verification log + prompt appendix)
Students use a generative AI tool (e.g. ChatGPT, Copilot, or Elicit) to explore a discipline-relevant topic. They submit the AI prompt, the generated response, and a short annotation explaining the purpose of their prompt. Students then critically evaluate the AI-generated content for accuracy, bias, and completeness by comparing it with credible scholarly sources. A reflective component asks students to consider the ethical implications of using AI in academic work, including issues of transparency, attribution, and academic integrity.
Goal: Use GenAI to explore a topic, then verify and critique outputs.
This task encourages thoughtful and responsible use of AI, supports critical thinking, and promotes ethical information practices in digital environments.
AI-use statement:
Verification log:
| AI claim/idea | Source(s) used to verify | How I checked | Outcome (confirmed/refuted) | Citation added |
|---|---|---|---|---|
Prompt appendix:
SYSTEM/Instruction: You are a cautious research assistant. When unsure, state uncertainty and suggest authoritative sources.
USER: Draft an outline on <topic> for <audience>. List key factors and questions to investigate. Avoid fabricating citations.
FOLLOW‑UP: Suggest search terms and controlled vocabulary (e.g., MeSH) and likely databases.
Students use an AI tool (e.g. Copilot, SciSpace, Elicit or Consensus) to help generate an initial overview of the literature on a chosen research question. They critically evaluate the tool’s outputs against a database search, identifying any gaps, errors or biases. They submit their short edited literature review plus a reflective commentary on the advantages and disadvantages of AI use and whether it could be applied in practice or industry.
Goal: Use tools like Elicit/Consensus to scaffold, then verify against database searches.
This task encourages rigorous, transparent evaluation of AI-generated literature overviews against database searches - building skills in bias/gap detection, verification, concise synthesis, and professional judgement about when AI is appropriate in practice.
AI-use statement:
Verification log:
| AI-suggested article/paper | Database check (where/how) | Match confirmed | Notes on quality/relevance | Kept? |
|---|---|---|---|---|
Prompt appendix:
USER: Map key constructs and seminal works on <topic>. Provide hypotheses and competing viewpoints. Do not fabricate references; if unsure, say so.
FOLLOW‑UP: Propose precise, database‑portable search strings (Boolean + field limits + controlled vocabulary).
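For instance, a database‑portable string for a hypothetical question about telehealth and patient satisfaction might look like this in PubMed syntax (the topic and terms are illustrative only):

```
("Telemedicine"[Mesh] OR telehealth[tiab] OR "virtual care"[tiab])
AND ("Patient Satisfaction"[Mesh] OR "patient satisfaction"[tiab])
AND 2019:2024[dp]
```

The same structure carries across databases by swapping the field tags and controlled vocabulary (e.g. CINAHL subject headings instead of MeSH).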
Students will take on the role of industry consultants tasked with preparing a briefing package for a professional client (e.g., a healthcare provider, education organisation, government department, or IT organisation). Using AI tools (e.g., ChatGPT, Perplexity, Elicit, or industry-specific platforms), they will research the topic and produce multi-format, stakeholder-ready deliverables: an executive summary, an infographic or dashboard, and a policy/practice brief.
Students must also include a reflection (500 words) on the strengths, limitations, and ethical considerations of using AI for professional reporting.
Goal: Produce multi‑format deliverables (exec summary, infographic/dashboard, policy/practice brief) with AI used transparently and verified.
This task encourages professional-style, ethical AI use by requiring students to research with AI, verify claims against authoritative sources, and translate insights into multiple stakeholder-ready formats - building judgement, verification discipline, and clear multimodal communication.
AI-use statement:
Fact check table:
| Claim/metric | Source (link/citation) | Last accessed | Verification method | OK to use? |
|---|---|---|---|---|
Prompt appendix:
USER: Suggest a concise executive‑summary structure for <topic/audience>. Provide 3 headline options and 3 risks/limitations to flag. Do not invent numbers.
FOLLOW‑UP: Draft a plain‑language paragraph for non‑expert readers; keep to <120 words> and include uncertainty notes.
Students research a discipline-relevant case study and deliver a short oral presentation to a defined audience (e.g. peers, practitioners). They may use an AI slide tool (e.g. Gamma, SlidesGo, PowerPoint/Copilot) to draft structure, layouts, and visuals, but must verify all content, edit for accuracy and accessibility (clear headings, alt text, readable contrast), and cite sources. Include one slide with an AI-use statement (tool/model, purpose, prompts) and a brief note on changes made to AI-generated material; note: do not upload sensitive data to AI tools.
Goal: Use AI to draft structure/visuals; verify, edit for accuracy/accessibility; disclose prompts and changes.
This task encourages transparent, ethical AI use in professional communication - building judgement about when AI helps, verification discipline, and clear audience-appropriate presenting.
AI-use statement:
Slide fact check:
| Slide # | Key claim | Source used | How verified | OK to use? |
|---|---|---|---|---|
Prompt appendix:
USER: Propose a 6‑slide outline for <topic> to <audience>. For each slide give: key message, 3 bullet ideas, and a relevant graphic concept. Avoid medical/legal advice.
FOLLOW‑UP: Draft speaker notes in <2–3 sentences/slide> using plain language.
Students research a discipline-relevant topic and create an infographic/poster for a defined audience (e.g., public, executives, peers). They may use AI design/writing tools (e.g., Canva, Gemini, ChatGPT, Infogram) to propose layouts, draft copy, or generate graphics, but must verify all facts and data, ensure accessibility (clear hierarchy, readable contrast, alt text/captions), credit image/data sources and licenses, and avoid misleading visuals. Include a brief AI-use statement (tool/model, purpose, prompts) noting edits made to AI-generated elements; note: do not input sensitive or licensed full-text into AI tools.
Goal: Produce an accessible, audience‑targeted visual using verified sources and transparent AI use.
This task encourages transparent, ethical AI use in visual communication - building verification discipline, audience-aware plain-language writing, and responsible data presentation.
AI-use statement:
Visuals verification log:
| Visual element | Fact/number shown | Source and access date | Check performed | OK to use? |
|---|---|---|---|---|
Prompt appendix:
USER: Suggest 3 layout options for an A3 infographic on <topic> for <audience>. Include section headings and suggested chart types. Do not invent statistics.
FOLLOW‑UP: Draft concise, plain‑language captions (<20 words each) for the charts.
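Where students want to self-check “readable contrast”, a minimal Python sketch like the one below (an illustration, not a required tool) computes the WCAG 2.x contrast ratio between text and background colours; WCAG AA expects at least 4.5:1 for normal body text:

```python
def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour per WCAG 2.x."""
    hex_colour = hex_colour.lstrip("#")
    channels = [int(hex_colour[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

# Example: dark grey text on a white background.
print(round(contrast_ratio("#333333", "#FFFFFF"), 2))  # ~12.6, passes AA (>= 4.5)
```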
Students produce a web-readable digital essay on a discipline-relevant technology topic. They must use an approved AI tool for specific drafting/support tasks (e.g., outline options, plain-language rewrites, alt-text/caption suggestions, link curation) and document this use. The essay must include multimedia (images/audio/video), descriptive alt-text/captions, and hyperlinks to authoritative sources. Students demonstrate clear digital communication, critical engagement with content, and responsible, transparent AI use.
Goal: Compose a hyperlinked, media‑rich essay with source transparency.
This task supports multimodal literacy, digital authorship, and reflective exploration of technology within academic and professional contexts.
AI-use statement (students complete):
Verification log:
| Claim/fact/quote | Source (URL/citation) | How checked (method) | Outcome (confirm/refute) | Citation/hyperlink added? |
|---|---|---|---|---|
Prompt appendix:
SYSTEM: Be cautious and factual. Don’t invent citations/data. Flag uncertainty and suggest credible sources. Write for the web (concise, clear, accessible). Don’t process personal/clinical data or licensed full-texts.
USER: Give a clear outline for a digital essay on <topic> for <audience>. Draft a ~120–140 word sample section in plain language (≈ Grade 8). Provide 3–5 authoritative links (publisher + year, 1-line rationale). Suggest alt-text (≤125 chars) for <image> and a 1–2 sentence caption for <figure> (no new numbers).
FOLLOW‑UP: Refine the outline (3–5 bullets/section). List key misrepresentation risks and fixes. Recommend verification targets for each factual claim (e.g., ABS table, guideline, review). Propose precise edits for accuracy/attribution and a summary.
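One way to audit the alt-text requirement before submission is a short script. The sketch below uses Python’s standard html.parser; the file name essay.html is a hypothetical placeholder for the exported essay:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Flags <img> tags with missing, empty, or over-long alt text."""
    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        src = attrs.get("src", "unknown source")
        if alt is None or not alt.strip():
            print(f"Missing alt text: {src}")
        elif len(alt) > 125:
            print(f"Alt text over 125 chars ({len(alt)}): {src}")

# 'essay.html' stands in for the student's exported digital essay.
with open("essay.html", encoding="utf-8") as f:
    AltTextAudit().feed(f.read())
```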
Students are given a workplace-style scenario involving human, organisational, or sensitive data. They design a practical AI-use protocol that includes: a data-flow diagram; concise consent/notice text; a de-identification and access plan; a tool/vendor risk table (data residency, retention, training reuse, security, accessibility); a storage & retention schedule; an “AI off-limits” checklist with a manual fallback; an incident/escalation mini-playbook; and a reusable AI-use statement template for outputs. Students must not enter real personal data into AI tools and should verify any AI-assisted drafting against authoritative sources.
Goal: Design a practical AI‑use protocol for sensitive or organisational data with privacy, consent, licensing, and risk controls.
This task makes students apply privacy, consent, licensing, and risk management to real workflows.
AI-use statement:
Risk and verification table:
| Tool/vendor | Data residency | Retention | Training reuse | Security/accessibility notes | Approved use? |
|---|---|---|---|---|---|
Prompt appendix:
USER: Draft a data‑flow diagram and consent notice for using de‑identified <data type> with <AI tool>. Highlight where human review occurs. Do not include any personal data.
FOLLOW‑UP: Generate a checklist of “AI off‑limits” use cases and a fallback manual process for each.
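The de-identification element of the plan could be prototyped along these lines; this is a minimal sketch assuming simple regex redaction and salted hashing of record IDs (a real protocol would use a vetted de-identification library and follow local policy, and the salt value is a placeholder):

```python
import hashlib
import re

SALT = "replace-with-a-project-secret"  # hypothetical; store securely, never commit

def pseudonymise_id(record_id: str) -> str:
    """One-way salted hash so records stay linkable without exposing real IDs."""
    return hashlib.sha256((SALT + record_id).encode()).hexdigest()[:12]

def redact(text: str) -> str:
    """Redact obvious direct identifiers (emails, phone-like numbers) before any AI use."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b(?:\+?61|0)[\d\s-]{7,11}\d\b", "[PHONE]", text)
    return text

note = "Contact Jane on 0412 345 678 or jane@example.com about record 12345."
print(pseudonymise_id("12345"), "|", redact(note))
```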
Students are provided with a patient case study and asked to design an appropriate clinical examination plan. Working in pairs, each student conducts a mock examination while using an AI-powered medical scribe tool (e.g. Heidi Health - AI Medical Scribe for AU Clinicians). Students then submit the raw AI-generated notes, a verified and edited version suitable for professional use, a patient-friendly handout, and the transparency artefacts below.
Goal: Design and perform an appropriate clinical examination for a presented case, use an AI scribe ethically during the mock exam, then verify and refine outputs for professional use and patient communication.
This task encourages transparent, ethical AI use by requiring students to disclose, justify, and verify any AI assistance - strengthening integrity, reproducibility, and workplace-ready practice.
AI-use statement:
Verification log:
| Item/claim from AI output | Source used to verify (guideline/text) | How checked | Outcome (confirm/qualify/refute) | Citation added? |
|---|---|---|---|---|
Prompt appendix:
SYSTEM: You are an accurate, cautious medical scribe. Do not invent details. Flag uncertainty. Avoid medical advice beyond documenting what was said.
USER (scribe use): Capture the clinician–patient interaction verbatim for a mock exam on <case>. Structure output as time‑stamped notes with headings (HPI, Exam, Impression, Plan). Do not fabricate values.
USER (handout): Draft a patient‑friendly handout for <condition/exam> in plain language (approximately Grade 8 reading level). Include what to expect, self‑care, red flags, and when to seek help. Do not include medication doses unless provided.
Accessibility and integrity notes: do not enter real patient data into the scribe tool; verify all clinical content against current guidelines before use; ensure the patient handout uses plain language and accessible formatting; and disclose all AI assistance.
As an alternative to setting fully AI-focused assessments, build AI literacy by weaving small, clearly disclosed AI steps into existing tasks - for brainstorming, scoping, drafting, or quality checks. Require students to disclose how AI was used, verify any AI-influenced elements, and reflect on their choices. This mirrors workplace practice: AI can support parts of a workflow, but people remain accountable for the final product.
Small, well-scoped AI elements develop transparent, ethical, and verifiable habits; improve search and revision strategies; and build workplace-ready judgement about when and how AI adds value.
“I used [tool/model] on [date] for [purpose: brainstorming terms/outline/clarity feedback]. I verified AI suggestions against [databases/guidelines] and edited where inaccurate or biased. No AI wrote analysis/findings or summaries. I accept responsibility for the accuracy, originality, and ethics of this submission.”
Why add an acknowledgement? Requiring students to disclose and reflect on AI use builds workplace-ready habits: transparency, reproducibility, ethical judgement, and risk awareness. It also protects academic integrity (no hidden assistance), improves verification skills, and mirrors professional expectations where AI-assisted work may need to be logged and defensible.
How to build it into tasks (quick options): require a short AI-use acknowledgement like the template above, add a verification log for any AI-influenced elements, or allocate a small share of marks to transparency and verification.