ICT902 Artificial Intelligence and Machine Learning
Semester 1 2026
Assessment 2 – Ethical Considerations in AI Solutions – 30% Guide
(Open Assessment)
Submission Deadline: Sunday 19 April 2026 11:59 PM
Total Assessment weighting – 30%
Purpose of this assessment
This assessment aims to develop students’ ability to critically examine ethical issues arising from the development and deployment of Artificial Intelligence (AI) technologies in real-world contexts. By engaging with a contemporary AI application or scenario, students will apply conceptual knowledge, ethical reasoning frameworks, and analytical thinking to identify potential risks, biases, governance challenges, and societal implications. The task requires students to evaluate the responsible use of AI systems, assess their broader organisational and social impact, and formulate well-reasoned, evidence-based recommendations. Through this process, students will strengthen their capacity for critical reflection, professional judgment, and the production of a structured academic report that demonstrates ethical awareness and responsible AI practice in complex decision-making environments.
Demonstrate achievement of this learning outcome:
ULO 4: Explore and critically assess the ethical considerations surrounding the development and deployment of AI technologies in society.
Task description:
This is an individual assessment designed to evaluate students’ ability to critically analyse ethical considerations arising from the development and deployment of Artificial Intelligence (AI) systems in real-world contexts. Students will be assigned a contemporary AI application or scenario situated within a realistic organisational or societal setting. The scenario will outline the purpose of the AI system, its operational environment, relevant stakeholders, data usage practices, and key ethical concerns.
The task requires students to produce a comprehensive written report of approximately 2000 words that demonstrates critical analysis, ethical reasoning, conceptual understanding, and professional academic communication. The report must evaluate the ethical implications of the AI application, assess risks and governance challenges, and propose well-reasoned recommendations to support responsible AI development and deployment.
This assessment aims to simulate a professional ethical review of an AI solution. Students will be required to:
- Analyse the assigned AI scenario and identify key ethical issues and stakeholders
- Examine potential risks including bias, fairness, transparency, accountability, privacy, and societal impact
- Evaluate the organisational, regulatory, and governance implications of deploying the AI system
- Propose evidence-based recommendations to mitigate ethical risks and enhance responsible AI practice
- Critically reflect on the broader implications of AI adoption in organisational and societal contexts
To complete this assessment, students are required to:
- Critically analyse the assigned AI application or scenario, identifying ethical risks and contextual factors.
- Evaluate potential impacts on individuals, organisations, and society, including issues of fairness, bias, privacy, and accountability.
- Apply relevant ethical principles, governance considerations, and responsible AI concepts introduced in the unit.
- Develop structured, evidence-based recommendations to improve ethical design, implementation, and oversight of the AI system.
- Demonstrate academic rigour through clear argumentation, appropriate referencing, and integration of scholarly or industry sources.
The final submission must include:
- A structured written report (PDF, approximately 2000 words) analysing the AI scenario, evaluating ethical implications, and presenting recommendations.
- Appropriate academic references supporting ethical analysis and argumentation.
This assessment aims to help students:
- Critically evaluate ethical challenges in AI development and deployment.
- Apply responsible AI principles to realistic organisational scenarios.
- Formulate structured and evidence-based ethical recommendations.
- Communicate complex ethical and technical considerations in a professional academic format.
- Demonstrate readiness to engage responsibly with AI technologies in professional practice.
Structure:
This assessment must be submitted in an academic report format, including the provided assessment cover sheet from the ICT902 Moodle page. The report should include an introduction, main body, conclusion, recommendations, and a reference list.
Formatting: This assessment should be submitted with a word count of 2,000 words using either Calibri or Times New Roman font, size 12. The document should be double-spaced, with a minimum of ten references in APA 7 format. The assessment carries a total weight of 30%.
- Headings and Subheadings: Use a clear and consistent hierarchy for headings and subheadings. For instance, main headings should be in bold and a larger font size, while subheadings should be bold with a smaller font size.
- Font Style and Size: Ensure consistency in font style (e.g., Calibri or Times New Roman) and size (12-point for body text). The entire document should be double-spaced with uniform paragraph spacing.
- Alignment: Keep the body text left-aligned for better readability, and apply the same alignment consistently throughout the document for a neat, organised appearance.
- Proofreading and Editing: Review your work for grammatical errors and clarity. Consider using tools like Grammarly or peer feedback to improve writing quality.
Due Date: Week 7
Resources Available: Lecture slides and notes from weeks one to ten. Videos available in the “Readings and Viewings” section of OASIS.
Guide: The guide below provides a concise overview of how to approach the assessment.
- SCI Cover Page: (No word count)
- Table of Contents: (No word count, structured outline)
- Introduction (300–350 words)
- Introduce the assigned AI application or scenario and its relevance in today’s data-driven environment.
- Highlight the growing importance of ethical oversight in AI development and deployment.
- Outline how the report will examine the AI system, analyse ethical risks, evaluate governance implications, and propose responsible AI recommendations.
- Body of the Report:
Paragraph 1: AI System and Impact Analysis (400–450 words)
- Describe the nature and purpose of the AI system (e.g., predictive model, automated decision system, intelligent assistant).
- Identify affected stakeholders, including individuals, organisations, and society.
- Assess potential impacts such as fairness concerns, bias, privacy risks, transparency limitations, or accountability gaps.
- Discuss possible regulatory or compliance implications where relevant.
Paragraph 2: Ethical Risk and Governance Evaluation (450–500 words)
- Identify key ethical risks associated with the AI system, including data quality concerns, bias in training data, model opacity, or decision-making autonomy.
- Evaluate organisational responsibility, governance structures, and oversight mechanisms.
- Assess risk severity, potential harm, and long-term societal implications.
- Integrate relevant ethical principles and responsible AI concepts introduced in the unit.
Paragraph 3: Responsible AI Strategy and Mitigation Framework (450–500 words)
- Propose a structured strategy to mitigate identified ethical risks.
- Recommend governance mechanisms, transparency practices, monitoring processes, or policy interventions.
- Discuss accountability structures, fairness auditing, and continuous evaluation mechanisms.
- Justify recommendations using scholarly and industry sources.
- Conclusion (200–250 words)
- Summarise the key ethical concerns identified in the analysis.
- Reflect on the importance of responsible AI governance in sustaining trust and legitimacy.
- Emphasise the long-term benefits of embedding ethical principles into AI system design and deployment.
- Recommendations (250–300 words)
- Provide clear, actionable recommendations to strengthen ethical AI development and oversight.
- Suggest improvements to governance frameworks, data practices, transparency mechanisms, or monitoring processes.
- Present a structured roadmap for responsible and sustainable AI implementation.
- References (No word count)
- Minimum 10 credible academic and industry sources cited in APA 7 format.
- Sources may include academic journals, government guidelines, industry frameworks, and professional standards related to AI ethics and governance.
Total Word Count: 2,000 words
This includes ethical analysis, governance evaluation, responsible AI strategy development, and structured recommendations. The report must integrate conceptual understanding and critical reasoning aligned with ethical principles and responsible AI practice discussed in the unit.
Assessment submission
Before the due date, each student is allowed three (3) submission attempts, providing an opportunity to check for unintended plagiarism using text-matching software. Review the similarity report, make any necessary revisions, and ensure your final submission reflects your original work. If the similarity score is 31% or higher, revise the content before making your final submission, as high similarity may indicate academic misconduct.
Academic Integrity and Misconduct
Students must submit original work and uphold academic integrity at Southern Cross Institute (SCI). The Academic Integrity Policy and Procedure outlines the principles of academic honesty and details the consequences of misconduct, including plagiarism, recycling, fabricating information, collusion, cheating in examinations, contract cheating, misuse of artificial intelligence tools, and other dishonest behaviour. SCI utilises Turnitin to encourage proper citation practices and to detect potential academic misconduct.
Ethical Use of Generative Artificial Intelligence (GenAI) Tools
Refer to the Quick Guide for Students created by the Learning Support Team for best practices in using GenAI tools. While GenAI can assist with idea generation, structuring, and drafting, students must carefully review, paraphrase, and properly reference any AI-generated content they use. Overreliance on AI may raise academic integrity concerns, such as the inclusion of fabricated information.
Creating a reference to ChatGPT or other AI models and software
As per the American Psychological Association (2020), the reference and in-text citations for ChatGPT are formatted as follows:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
- Parenthetical citation: (OpenAI, 2023)
- Narrative citation: OpenAI (2023)
Note: Although this example focuses on ChatGPT, the same format can be adapted for other large language models (e.g., Bard), algorithms, and similar software.
For further details, please refer to the ICT902 Unit Outline and ICT902 Unit Assessment Guide for additional information or contact your Lecturer. Please refer to the next page for the marking rubric for Assessment 2.
Rubric for Assessment 2 – Ethical Considerations in AI Solutions (30%) – OPEN ASSESSMENT
| Criteria | Fail (0–49%) | Pass (50–64%) | Credit (65–74%) | Distinction (75–84%) | High Distinction (85–100%) |
|---|---|---|---|---|---|
| Research Quality and Solution Feasibility (10%) | Insufficient research conducted, with few or no feasible solutions presented. Relies on non-credible or unsupported ideas. | Adequate research conducted, presenting moderately feasible solutions supported by credible sources. | Above-average quality of research, presenting feasible solutions backed by good sources. | Very good research, presenting highly feasible solutions supported by strong academic and professional sources. | Exceptional research, presenting innovative and highly feasible solutions with extensive academic support. |
| Understanding of the challenge in terms of theories and concepts (20%) | Has not adequately understood the challenge in terms of the theories and concepts studied (e.g., terminology is used incorrectly, the design/prototype is based on theoretically/conceptually incorrect assumptions, or the issue/problem is misconceived) | Has adequately understood the challenge in terms of the theories and concepts studied (e.g., terminology is used correctly and the design/prototype is based on theoretically/conceptually correct assumptions) | Has understood the challenge in terms of the theories and concepts studied to an above-average standard | Has understood the challenge in terms of the theories and concepts studied to a very good standard | Has understood the challenge in terms of the theories and concepts studied to an exceptional standard |
| Coherence of analysis justifying design/prototype (10%) | The rationale for the design/prototype is illogical and/or poorly reasoned (e.g., it relies on unfounded assumptions or misunderstands the theories and concepts applied) | The rationale for the design/prototype is mostly logical and well-reasoned | The rationale for the design/prototype is logical and well-reasoned to an above-average standard | The rationale for the design/prototype is logical and well-reasoned to a very good standard | The rationale for the design/prototype is logical and well-reasoned to an exceptional standard |
| Support for design/prototype (10%) | The design/prototype is insufficiently supported by theory and/or evidence | The design/prototype is supported by theory and/or evidence | The design/prototype is supported by theory and/or evidence to an above-average standard | The design/prototype is supported by theory and/or evidence to a very good standard | The design/prototype is supported by theory and/or evidence to an exceptional standard |
Note: This report is provided as a sample for reference purposes only. For further guidance, detailed solutions,