This page showcases practical assessment ideas that intentionally embed elements of the Information and Digital Literacies Graduate Learning Outcome (GLO) - covering information, digital, and AI literacies. The examples are designed to help students locate, evaluate, create, and communicate information ethically and effectively, while building confidence with contemporary digital and AI-enabled tools.
Literature reviews are powerful vehicles for the Information & Digital Literacies GLO: they require students to define/refine a question, search strategically, evaluate sources, synthesise evidence, and reference ethically - using contemporary digital tools along the way. You can also build AI literacy into the task through scaffolded steps - for example, asking students to use an AI tool to iterate on their research question or keywords, then reflect on what they kept, what they discarded, and why, before validating results in scholarly databases.
Designing a literature review assessment? Watch the video below for practical tips on task design and clear guidance to support your students.
Below are task ideas you can adapt to integrate the Information & Digital Literacies GLO into your subject.
Some of the above content was adapted from Drew University’s Designing assignments to develop information literacy.
Below you will find task starter examples that explicitly involve AI tools. These examples are grouped separately, as this is an emerging, fast-evolving area. They require additional guardrails (clear allowable use, transparency/disclosure, verification of outputs, privacy/consent, and equity of access) and may vary by discipline and policy. Use or adapt them where AI adds genuine learning value, and include an AI-use statement plus a verification step.
Why add this? Requiring students to disclose and reflect on AI use builds workplace-ready habits: transparency, reproducibility, ethical judgement, and risk awareness. It also helps protect academic integrity (no hidden assistance), improves verification skills, and mirrors professional expectations where AI-assisted work may need to be logged and defensible.
How to build it into tasks (quick options)
As an alternative to setting fully AI-focused assessments, build AI literacy by weaving small, clearly disclosed AI steps into existing tasks - for brainstorming, scoping, drafting, or quality checks. Require students to disclose how AI was used, verify any AI-influenced elements, and reflect on their choices. This mirrors workplace practice: AI can support parts of a workflow, but people remain accountable for the final product.
Small, well-scoped AI elements develop transparent, ethical, and verifiable habits; improve search and revision strategies; and build workplace-ready judgement about when and how AI adds value.
A sample AI-use statement students can adapt: “I used [tool/model] on [date] for [purpose: brainstorming terms/outline/clarity feedback]. I verified AI suggestions against [databases/guidelines] and edited where inaccurate or biased. No AI wrote analysis/findings or summaries. I accept responsibility for the accuracy, originality, and ethics of this submission.”
Charles Sturt University acknowledges the traditional custodians of the lands on which its campuses are located, pays respect to Elders, both past and present, and extends that respect to all First Nations Peoples.
Charles Sturt University is an Australian University, TEQSA Provider Identification: PRV12018. CRICOS Provider: 00005F.