BEL+T Guidance on Generative AI
What is this guide?
The BEL+T Generative AI guide was first published in June 2023. This guidance was last updated on 12 Dec 2024.
This BEL+T Guidance on Generative AI (genAI) offers ABP subject coordinators, and other educators in built environment disciplines, some key concepts and definitions in this fast-developing space. It reviews more general advice and guidance, and complements MCSHE and University guidance (April 2023 update) with a focus on some issues of specific relevance to built environment educators. This guidance is also influenced by the University of Melbourne AI Principles, which have been designed to guide actions around the adoption and use of AI tools and systems.
The BEL+T Guidance on genAI was initially developed in 2023, and has been updated in anticipation of Semester 1, 2024. This guidance will continue to be updated as institutional advice is released, and as the broader academic discussions of these issues develop.
Of course, the range of pedagogies and teaching practices relevant to built environment disciplines is particularly wide. Some subjects take teaching approaches that are closer to HASS disciplines, while others are more aligned to STEM pedagogies. This means that the types of genAI tools and issues relevant to teaching are similarly broad. In addition, studio pedagogies introduce practices as well as concerns relating to creativity and original authorship in a genAI context.
The table below provides a summary of each of the sections within this guidance. These sections can be accessed using the navigation sidebar in the top-left corner of the page.
There is much to be learned and many nuances to this complex and evolving space, and there are clearly significant ways in which these new tools are likely to impact built environment disciplines and education. There are many opportunities for innovation as well as valid concerns to consider, particularly relating to how students develop foundational knowledge and critical perspectives, the implications of biased datasets and the treatment of intellectual property. For all subjects, clear and careful assessment design offers a heightened focus on what our students are learning, and what they will need to learn, as they engage with these tools.
We are looking forward to your comments on this guide, and to sharing the excellent and creative approaches that ABP educators are taking in this dynamic and evolving landscape – keep watching this space!
| Section | Summary |
| --- | --- |
| What is genAI? | This section introduces genAI, explaining its principles and highlighting examples of its applications. It explores how genAI generates outputs and highlights various models used for generating those outputs. |
| GenAI in the built environment | This section provides an overview of the impact of genAI in built environment disciplines. It presents insights from ABP academics on the evolving role of genAI in professional practice, highlighting the implications for future graduates' knowledge and skills. |
| GenAI in learning and teaching | This section introduces the complicated landscape of genAI through a learning and teaching lens. It outlines the challenges and opportunities of genAI with a focus on student perspectives, biases and data-related concerns, and considerations pertaining to creativity and intellectual property. |
| GenAI to support student learning | This section outlines how students might use genAI to support and supplement their learning. |
| GenAI and assessment | This section provides an overview of assessment design in built environments education in the new context of genAI. It explores opportunities and complexities, emphasises the importance of aligning learning outcomes with AI literacy skills, and provides recommendations for meaningful assessment tasks and collaborative approaches involving genAI. |
| Assessment design for collaboration with genAI | This section presents a framework for understanding and implementing genAI in assessment through the lens of group work dynamics. It introduces three distinct approaches to assessment design (individual, cooperative and collaborative), exploring how each type defines different roles and relationships between students and genAI. The section provides detailed guidance on task design considerations, learning objectives and evaluation criteria, supported by practical examples and level descriptors for each assessment type. |
| GenAI, academic integrity and policy | This section captures guidance for teaching staff on University policy governing students' use of generative AI, in the context of academic integrity and academic misconduct. |
| GenAI for assessment of student submissions | This section outlines the University's current guidelines on using AI tools for assessing student work. It presents key institutional policies and procedures for 2024, clarifying staff responsibilities when using genAI for assessment and feedback. |
What is genAI?
GenAI is a remarkable branch of artificial intelligence that enables users to create novel outputs that closely resemble human-generated content. These outputs can take various forms, including text, images, videos, sounds and 3D models. Thanks to recent advancements in the field, genAI has witnessed unprecedented levels of growth and adoption and is revolutionising numerous industries and domains.
The crux of genAI is in its name: generative. While traditional AI algorithms have focused on identifying patterns within data for predictive purposes, say to predict whether the next image in a sequence of images is a cat or a dog, genAI leverages the learned patterns to create entirely new outputs. Given the above example, a genAI model could create an entirely new representation of a cat or a dog, or a cat-dog!
There are now hundreds of genAI platforms with a diverse range of use cases, but undoubtedly the most relevant and influential of these is ChatGPT. ChatGPT is a conversational agent that can engage in natural language conversations with humans. It offers a range of applications in higher education and in design, but is often misunderstood as a direct question-and-answer tool. This perception tends to oversimplify ChatGPT and overlooks its potential application in a wider range of more intricate and nuanced tasks.
How does Generative AI work?
GenAI is grounded in machine learning techniques that draw inspiration from the neural systems of the human brain, known as neural networks. These genAI networks are ‘trained’ on extremely large amounts of data, from which they learn to capture and identify features, patterns and relationships within the data. This enables the model to generate new data instances that are similar yet distinct from the original training data.
For instance, a genAI model trained on images of faces learns to understand facial features like nose shape, eye placement, and smile curvature, allowing it to generate new faces that possess the same structure and features but do not match any specific face. Similarly, models like ChatGPT, trained on vast amounts of language data, learn to understand grammar, sentence structure, common phrases, context, tone and style, enabling them to generate coherent and contextually appropriate text.
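The pattern-learning principle described above can be illustrated with a deliberately tiny example. The sketch below trains a word-level Markov chain (a far simpler mechanism than the neural networks real genAI systems use) on a short text, then samples a new sequence: it learns which word tends to follow which, and recombines those learned transitions into output that resembles, but does not copy, the training data. The function names and toy corpus are our own illustrative choices.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Learn a simple pattern: which word tends to follow which."""
    transitions = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions: dict, start: str, length: int, seed: int = 0) -> str:
    """Sample a new word sequence by walking the learned transitions."""
    rng = random.Random(seed)  # fixed seed, so the output is repeatable
    word, output = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break  # dead end: this word was never followed by anything
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("the model learns patterns from data and "
          "the model generates new data from patterns")
model = train(corpus)
print(generate(model, "the", 6))  # a new six-word sequence in the corpus style
```

A model this small can only reshuffle its training vocabulary, which makes concrete a limitation discussed later in this guide: generative output can never range beyond whatever its training data contains.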
GenAI can utilise a variety of models, each employing unique methods for training the AI and generating results. There are many types of models, the most popular being Generative Adversarial Networks (GANs), Variational AutoEncoders (VAEs) and Denoising Diffusion Probabilistic Models (DDPMs). Each model possesses unique strengths and limitations, making them suitable for different contexts. Some models excel at producing high-quality results, while others offer better control over the generation process. Consequently, the choice of model plays a crucial role in determining the capabilities and limitations of genAI applications across disciplines.
GenAI, the models used, its applications and its outputs are developing very quickly. Current debates relating to its legal status, control and commercialisation, as well as its potential, will shape how it develops into the future, and its impact on built environment disciplines and on learning and teaching.
GenAI in the built environment
GenAI is fundamentally altering the landscape of built environment disciplines, presenting both exciting innovations and significant challenges to these fields. It promises to transform the way buildings are designed – lowering costs, increasing productivity and reducing waste. Yet at the same time, the adoption of genAI raises profound questions around creativity, efficiency, ethics, and the nature of human involvement in these processes.
GenAI's impact on the built environment can already be seen in contemporary professional practice. Zaha Hadid Architects (ZHA) recently shared some groundbreaking ways that the practice is integrating AI into their design process. ZHA are leveraging AI to inform and personalise office spaces, employing fine-grained data analysis to enhance design project outcomes. In another part of the practice, the firm is utilising AI text-to-image generators to stimulate design ideas for projects and to aid early ideation. The invocation of 'Zaha Hadid' in the AI's prompts seeks to claim authorship, marking a profound integration of genAI into the oeuvre of the firm.
To better understand this dynamic practice landscape, we asked ABP academics for their current perspectives on the biggest impact of genAI in their disciplines, and what these might mean for knowledge or skills needed by future graduates. The following section presents their insights, reflecting a diversity of viewpoints on the evolving role of genAI in the pedagogy and practice of BE disciplines and the challenges and opportunities this presents for the next generation of professionals.
GenAI has significantly impacted architecture, policy planning, and urban design, redefining the boundaries of creativity and efficiency. It offers the ability to generate countless design options and automate repetitive tasks, aiding architects and urban designers in developing optimised, innovative solutions.
However, concerns over the potential loss of creativity, homogenisation of designs, and insensitivity to unique local factors persist. There's a fear that genAI could overshadow human intuition, local cultural nuances, and personal identities within design. Privacy and data security are also crucial issues as urban planning becomes increasingly data-driven.
Despite these challenges, genAI is opening up remarkable opportunities. The technology can aid sustainable, equitable, and resilient urban development, simulating design intervention impacts on a range of factors, from climate adaptation to socio-economic growth. It also democratises the design process, allowing for genuine citizen engagement and interaction.
The emergence of genAI has significant implications for future graduates. While traditional skills remain relevant, there's an increasing demand for proficiency in AI, data science, social and environmental systems, and cross-disciplinary collaboration. Additionally, future professionals must be well-versed in the ethical application of AI, coupled with critical thinking abilities to responsibly navigate its complexities.
With genAI gaining ground, the urgency for upskilling has never been higher. Graduates who can wield this technology will have a competitive edge in the market. Moreover, the novelty of these techniques presents a significant opportunity for academic and commercial advancements, potentially sparking an influx of startups and monetisation possibilities. The future of these disciplines, therefore, converges at the intersection of technology, creativity, societal considerations, and a thorough understanding of genAI.
Authors:
- Dr Thanh Ho - University of Melbourne
- A/Prof Jason Thompson - Senior Research Fellow, University of Melbourne
- Dr Sachith Seneviratne - Research Fellow in Computer Vision and Health, University of Melbourne
To understand the effect of genAI in the BE disciplines, it is useful to first distinguish between genAI for design visualisation and communication (image-to-image applications) and the design concept and design development tools (text-to-image models).
Image-to-image tools are useful to generate photorealistic images from a reference image and sketches or clay renderings. These tools are expected to speed up the process of rendering production, and users won’t need to be as skilled as current experts in design visualisation.
Text-to-image tools allow designers to produce images of design ideas through textual prompts.
The development of text-to-image and image-to-image tools has already created a new professional figure, the ‘prompt engineer’. Prompt engineers currently work in visual arts and advertising, and are likely to be employed in architecture in future years, even without any specific formal design training. For the moment, we have seen a number of academics and architects promoting themselves online as AI experts because they use genAI tools. In fact, we believe it is more appropriate to consider such designers and academics simply as users and designers, testing and exploring the potential and limits of new digital tools.
GenAI is seen as a novelty and is used as a cutting-edge technology. The myth of the ‘novitas’ has always been appealing to architects and designers, and it was previously seen in digital design developments, including parametric design and optimisation. However, we believe that designers will have to go back to the roots of AI development (theory and technology) to fully explore how AI developments will unfold in the near future. Designers will need to rediscover Negroponte’s work and associated studies of the 60s and 70s. The same applies to many other research projects of the 90s, which focused on intelligent CAD systems and extensively discussed issues associated with intelligence, creativity, and human-machine interaction.
With this in mind, we can begin to theorise what this might mean for future graduates' knowledge and skills. Students can already use genAI in design studio settings and seminar-based subjects without further instruction. We believe that future graduates will need to develop critical thinking skills to reflect on the products of genAI. To enable a fruitful collaboration with such tools, future graduates must develop their communication and teamwork skills, which are the basis of successful human-machine interaction. It may sound like a paradox, but the knowledge and skills future graduates will require can be found in the basic principles of architectural design thinking and design processes.
Authors:
- Dr Alberto Pugnale - Senior Lecturer in Architectural Design, University of Melbourne
- Dr Gabriele Mirra - University of Melbourne
The impact of genAI on the discipline of landscape architecture is still emerging and somewhat uncertain. There's an anticipation of a surge in AI-generated images that communicate design concepts more evocatively, albeit without the detailed design work typical of the field.
A challenge presented by the implementation of GenAI is the inherent recursiveness and potential bias in the process. While AI can access the vast repository of the internet, this does not always present an accurate or comprehensive representation of landscapes. Many landscape sites, even urban ones, lack exhaustive documentation or robust datasets. The nuances and intricacies that landscape designers often need to work with may not be adequately captured or represented by AI.
Nevertheless, the advent of genAI opens up intriguing opportunities in the field. Landscape architecture has had a long-standing, occasionally tense relationship with representational tools, particularly when depicting living materials like plants, which are highly variable in structure and growth outcomes. Predicting their appearance as part of a design can be a challenge, because the eventual look of a design isn't always known. GenAI provides an accessible way to rapidly explore, visualise and communicate these future states. Yet, it's important to note that this forms part of creative speculation and should not be regarded as absolute, or as a replacement for a comprehensive design process.
In terms of what this means for the skills and knowledge required of future graduates, criticality is of paramount importance. GenAI can be an exciting and engaging tool for designers to explore and visualise ideas, but students must understand its limitations. Given the language-based nature of AI prompts, they also present an additional challenge. In design disciplines, language isn't always thoroughly scrutinised due to the reliance on drawings and models for communication. Hence, to effectively engage with these emerging tools, graduates must become adept at critically examining language.
For a more comprehensive discussion, refer to this article on AI and landscape architecture by Landscape Australia.
Author:
- Wendy Walls - Lecturer in Landscape Architectural Design, University of Melbourne
The emergence of generative AI image generators is profoundly impacting the realm of architecture and design. These tools, deeply rooted in computational practices, offer new avenues for academics and creative practitioners to engage with their craft. The following video showcases the AI-assisted Sketchbook project by Leire Asensio Villoria and David Mah. They have pioneered a method of digital archaeology that involves reverse engineering and understanding the material intelligence of historical cultural artefacts, and embedding these into generative associative models. Through this process, new and novel design iterations are created that are deeply rooted in historical precedent. These outputs challenge traditional architectural paradigms and highlight the potential of generative AI tools in the design process.
However, a critical question remains: Can AI, built upon existing cultural artefacts, truly produce novelty, or does it risk anchoring culture in a repetitive cycle? The transformative potential of AI in design is evident, but its true capability to innovate and redefine remains a topic of exploration.
GenAI in learning and teaching
The introduction of novel educational technologies often arouses strong emotions, ranging from doomsday predictions to endless euphoria (Rudolph et al., 2023). In the case of genAI, opinions are polarised between those who are excited about the potential it brings and those who advocate for its prohibition.
GenAI undoubtedly presents both opportunities and challenges in higher education. It offers the potential to fundamentally change the way we think about education and learning, with opportunities for improving efficiency, effectiveness and societal impact for both students and educators (Atlas, 2023). However, alongside these innovations comes considerable risk, including threats to academic integrity, concerns around the accuracy of AI-generated content, propagation of biases or misinformation, and potential overreliance on the technology (Gimpel et al., 2023).
Therefore, in approaching the use of these tools, it is imperative for educators and students to be both aware and critically reflective. This guide recommends that built environment learners and teachers proceed with caution, with strong emphasis on ethical and responsible engagement, and a focus on the development of AI literacy. With this in mind, let us look at some of the current and potential issues that must be foregrounded.
Efforts by the Australian Government’s Tertiary Education Quality and Standards Agency (TEQSA) to develop guidance, and to share guidance collected from across the Australian HE sector via the TEQSA good practice hub, are also helpful to note. Of course, this valuable advice should be considered in a UoM policy context.
As with any emergent technology, student perceptions towards genAI are sure to be nuanced and varied. Considering genAI as part of a broader “learning environment”, it is worth remembering that student perceptions of their learning environment—including assessment methods and support services—impact learning outcomes and their ability to engage in “deep learning” (Biggs, 1999). Chan and Hu (2023) argue:
Understanding students on their willingness and concerns regarding the use of GenAI tools can help educators to better integrate these technologies into the learning process, ensuring they complement and enhance traditional teaching methods. This integration can lead to improved learning outcomes, as students will be more likely to adopt a deep approach to learning when they perceive GenAI as a valuable and supportive resource.
The authors’ 2023 survey of university students in Hong Kong revealed that students perceive a set of opportunities and threats related to genAI. The greatest benefits of genAI reported were: personalised and immediate learning support; writing and brainstorming support; research and analysis support; visual and audio multi-media support; and administrative support. Student respondents noted the following challenges related to genAI: accuracy and transparency; privacy and ethical issues; holistic competencies; career prospects; human values; and uncertain policies.
Understanding why students might decide to use AI for their coursework is critical to ensuring that we genuinely support learning, and that the institution meets its obligation to graduate employable and ethical citizens. Students might elect to use genAI for their coursework for numerous reasons, and it is important to understand that not all of these reasons are mischievous or with an intent to cheat. Some students may lack confidence in producing work entirely themselves, whilst others may not feel motivated or supported to do so. Indeed, scholarship on why students participate in academic dishonesty more widely suggests that the possible reasons can extend beyond the desire to achieve certain results to include: feeling inadequately prepared for assessments; caring more about results than learning; confusion around what constitutes academically dishonest behaviour; feeling like the behaviour is commonplace amongst their peers; or feeling a lack of connection to their studies or institution more generally (Bryzgornia, 2022). Some scholars have even raised the notion of ‘ethical cheating’ in reference to students collaborating, sharing knowledge/information/ideas and using open-source platforms precisely to develop 21st-century skills, yet in ways that might traditionally have been considered cheating (Brimble, 2016). Anecdotal reports suggest that students are also using genAI tools because they are fun, and also because they just want to explore what it can do.
As discussed in the GenAI and assessment section of this guide, students deserve clarity and clear communication around what is considered proper versus improper use of AI in their studies and, for each assessment task, what is encouraged and what may be required. This includes when and how students should disclose the use of AI tools, and any distinctions around expectations when it comes to AI use in text-based versus graphic-based formats. Apart from clarifying university policies and expectations, it may be beneficial to discuss with students the use of AI by professionals and academics in the field, and the current set of ethical questions surrounding these practices. As Siva Vaidhyanathan writes, this is a teachable moment for our students as well as ourselves. Not only are tools and technologies certain to develop over time, institutional and personal stances towards AI are also context-dependent. Generally, if students feel uncomfortable or discouraged from discussing their views or habits with staff, this can contribute to a problematic gap between teacher assumptions and expectations on one side and learner practices on the other. The more educators and students feel they are working together to promote learning and professional development, the better. As Ouyang and Jiao (2021) argue, the advancement of AI technologies does not ensure good educational outcomes; rather, the long-term goal of AI use in educational contexts is to contribute to a paradigm in which learners are supported and empowered to take agency over their own learning.
GenAI has great potential in higher education, but it is crucial to approach these tools with care, and to consider the ethical issues and potential risks associated with them. This includes consideration of equity in assessment design, as paid and unpaid versions of genAI tools (such as ChatGPT) have access to different datasets.
Some obvious examples of bias in genAI outputs include the tendency of AI models to assume that orientation for the design of buildings or landscapes refers to the northern hemisphere. There is clearly potential for emerging genAI tools that produce drawings or representations to incorporate conventions drawn from other locations and professional cultures.
A significant but more subtle and pervasive concern is the tendency of these models to perpetuate societal biases and discrimination (Dahmen et al., 2023). These models are trained on large amounts of data, and if that data is biased, the models will reflect these biases in their output (Atlas, 2023). In doing so, they reinforce existing societal issues and discriminations. To address this, it is essential for users to be educated about these biases, develop critical evaluation skills and gain technical expertise in mitigating biases when using these tools (Gimpel et al., 2023). This includes employing strategies such as proper prompt engineering to guide genAI models towards generating content that is more inclusive, unbiased and aligned with ethical considerations. By proactively engaging in responsible practices, users can reduce bias and foster an equitable and ethically sound application of these tools.
Paradigms of AI Usage by Learners in Higher Education: According to Ouyang and Jiao (2021), three paradigms can describe how AI is currently being utilised in education:
- AI-directed, where the learner is treated as a recipient (paradigm 1);
- AI-supported, where the learner is perceived as a collaborator (paradigm 2); and
- AI-empowered, where the learner contributes as a leader (paradigm 3).
Paradigms 1 and 2 have been the focus of AI in higher education over the past two decades. There is a current call for Paradigm 3, an AI-empowered, learner-as-leader approach centred on promoting human intelligence alongside integrated AI. This approach aims to address bias in AI algorithms and datasets and the lack of governance of AI decision-making, and to promote learning and teaching experiences that are more socially just and inclusive.
Types of Datasets: For learning and teaching experiences to be more socially just and inclusive, it is important to understand how students use AI platforms and the forms of information that are input and output (Dwivedi, 2023). In built environment education, students can engage with genAI platforms using several types of datasets, including image, text, audio and/or code, depending on the subject.
Biases in Datasets: Depending on the type of genAI platform, datasets may not be curated or selected to represent an inclusive range of issues or perspectives. Such biases may be systemic, societal, cultural, racial, ethnic and/or methodological. As large social datasets are fed into algorithms, unchecked algorithms can result in systemic discrimination that favours certain individuals or groups over others (Ferrara, 2023; Ray, 2023). Most datasets are Western-centric because of the dominance of these forms of information readily available for genAI platforms such as ChatGPT to utilise. As above, they may also be biased toward northern-hemisphere assumptions.
This range of potential biases is relevant to the cultural, linguistic, ethnic and historic background of the content, and of the student, and should be recognised in support of socially just and inclusive learning (Ferrara, 2023). Datasets drive textual outputs such as essays, reports, summaries, reflective narratives and theses, as well as rendered images, audio outputs and drafts, and data analysis. Algorithms in AI/machine-learning systems that seek to increase efficiencies can embed existing biases and propagate ongoing disparities. This compounding bias can hinder the achievement of social justice in the classroom and decolonisation efforts by higher education institutions.
Responding to Dataset Bias in your teaching: Datasets have a lifecycle of input, usage and interpretation. It is important that at each stage of the lifecycle, students are supported in how they relate to data and its interpretation for their learning and assessment (Dwivedi, 2023).
Some teaching strategies to consider include:
- Build students’ awareness of different types of biases that might be inherent to datasets;
- Encourage students to develop prompts that respond to biases by adding additional information such as ‘internationalise the prompt’ or ‘consider the Global South perspective’;
- Provide students with examples of how bias in datasets might impact their own worldviews about interpretation of readings/scholarship. This can include showing students that certain genAI outputs can impact respectful and ethical engagement from diverse scholars with varied cultural backgrounds, or may also lead to misinterpretation and distortion of information. Such distortion can be disrespectful and may project further bias/exclusion of diverse communities and places;
- Encourage students to check the authenticity of resources and not to rely solely on an output from a genAI platform as reliable information about various cultures, genders, races, ethnicities, histories or experiences of diverse communities to decolonise educational and professional practice efforts.
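The prompt-reframing strategy listed above can be sketched in code. The snippet below wraps a student's base prompt with explicit bias-countering clauses before it would be sent to a genAI tool. The clause wording, strategy names and function name are illustrative assumptions for teaching purposes, not a prescribed formula or part of any platform's API.

```python
# Illustrative clauses a student might append to counter common dataset biases.
BIAS_COUNTER_CLAUSES = {
    "internationalise": "Consider examples from multiple regions, not only Western contexts.",
    "global_south": "Foreground perspectives and precedents from the Global South.",
    "hemisphere": "Do not assume northern-hemisphere solar orientation for building or landscape design.",
}

def augment_prompt(base_prompt: str, strategies: list[str]) -> str:
    """Append the selected bias-countering clauses to a base prompt."""
    clauses = [BIAS_COUNTER_CLAUSES[s] for s in strategies]
    return base_prompt + "\n" + "\n".join(f"- {c}" for c in clauses)

prompt = augment_prompt(
    "Suggest passive cooling strategies for a mid-rise apartment building.",
    ["global_south", "hemisphere"],
)
print(prompt)
```

Asking students to compare the tool's responses to the plain and the augmented prompt can itself become a short critical-evaluation exercise.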
Important issues relating to genAI and teaching in ABP disciplines are related to creativity and authorship, and impact studios and other subjects involving innovation and creativity. Each subject coordinator will need to explore and identify how students can best engage with these tools in relation to specific subject learning outcomes. Related questions are provided within the text below, alongside related links to assist this important thinking.
When we ask students to ‘be creative’ in design-related disciplines or learning activities, we are asking them to contribute and iteratively refine their own beliefs, values and attitudes as they respond to a design challenge. Students learn to select from and/or transform ideas from precedents, research and their own experiences, as well as how to consciously reflect on and direct their approaches (Lawson & Dorst, 2009; Cross et al., 1994). We are asking them to participate in the ‘curious and beautiful relation between design problems and their solutions’ (Lawson, 2007). Students are rewarded for designs that contribute positively and innovatively in this context.
How is student innovation or creativity framed and identified in your subject through the ILOs and elsewhere? How is it assessed via the brief and/or rubric?
By contrast, genAI tools search, re-combine and deliver elements from data sets, producing a wide range of outputs including textual, numeric, code and graphic forms, in response to user prompts. Many doubt the capacity for AI to participate in ‘true creativity’ (Lawson, 2007; Kelly, 2019), claiming it lacks the motivation and independent judgement to create something truly new and useful. Human thinking is described as creative and flexible by nature, in contrast to the strengths of AI in relation to repetitive actions at vast scale, and managing complexity and multi-tasking. Some claim the Turing Test (in which an outcome that is indistinguishable from human production offers proof of intelligence) has been reached, while others claim creativity should be relocated to the perception of the beholder, rather than the contribution of a potential author (Natale & Henrickson, 2022).
Purported opportunities to ‘collaborate’ with genAI see typical models of design thinking transformed to propose linked human and computer contributions to identified phases. These outline perceived improvements to human thinking, expressing, building, testing and perceiving, using these tools to increase scope and decrease time and cost (Wu et al, 2021); or outline the ways in which different disciplines may creatively understand and engage with AI including as co-creators (Wingstrom et al., 2022).
A recent genAI-focussed panel discussion at the CSHE Teaching and Learning Conference heard panel members encouraging the design of learning experiences in which students could refine their creative practices and deepen their capacity for evaluative judgement by ‘sparring’ with the machine through reflective use of prompts and the creative recombination of outputs.
How might genAI strengths be distinguished from, and/or contribute to, student learning in the subject?
Elsewhere, human and AI production is being considered through the lenses of moral rights (Miernicki & Ng, 2021), intellectual property and copyright (Shtefan, 2021). A recent case before the US Copyright Office located authorship in the input (prompts) rather than the output (AI-produced images), although the debate continues.
Simultaneously, some developers of genAI platforms are facing legal action, as artists claim these companies are infringing copyright by drawing on their published work without attribution, and that opportunities for recourse, or even protection, are limited (see a summary here). Legal challenges have been gathering pace and resources, including a high-profile case brought by the New York Times (Grynbaum and Mac, 2023) claiming that copyrighted articles are being used to train OpenAI chatbot models, producing outputs that undermine consumer engagement with the original content and cause damage to authors, creators and publishers.
Related concerns are raised for users, as the tools also collect requests and data from prompts, directly or via ‘plug-ins’. As AI begins to draw on outputs it may claim as ‘its own’ for future production, questions of ownership become more complicated still.
How can students learn about and respond to the IP concerns of others? How can students protect their own IP in this subject?
Our challenges as educators include supporting students to engage creatively with emerging tools, to build their own creative expertise and judgement, to effectively demonstrate authorship, and to protect their own IP and privacy in the process. The value and personalisation of creativity and its expression remain central to this learning, as is confidence in an individual student’s right and capacity to develop a novel contribution, a crucial aspect of the learning that students need space and support to practise and refine.
GenAI and self-directed learning
In addition to directly modifying assessment design to respond to genAI, it’s also worth considering the ways in which students might usefully engage with the technology outside the classroom, for self-directed learning.
Again, when encouraging students to creatively engage with genAI, within or outside the classroom, it's crucial to clearly identify and articulate the boundaries of academic integrity and highlight the threshold at which use of these tools may become academic misconduct. Students should be reminded that any information or ideas obtained from genAI should be used as a starting point for their own work and not as a final product, and that AI-produced content may be inaccurate and biased.
It’s worth noting that the value students will be able to extract from genAI varies with their level of understanding. For example, a recent study found that, facing a complex entrepreneurial task, “[participants who were already] high performers benefited over 20% from AI advice, whereas low performers did roughly 10% worse.” (Otis et al, 2023). Conversely, another study found that genAI can improve worker performance by as much as 40%, with low-performance workers experiencing a bigger jump in performance scores than high-performance workers (43% and 17% respectively). These results underline the complicated and conflicting nature of genAI use, and the need to understand this complexity to enhance teaching and learning.
As Gimpel et al (2023) eloquently put it:
"Students need to learn how to use ChatGPT and similar tools purposefully, productively, and responsibly. We should neither panic nor wait calmly and prevent innovation. Instead, we should focus on how we can innovate education to harvest the potential of AI in teaching and learning."
Acknowledging both the risks and the opportunities, the guidance below outlines seven approaches that students can take to incorporate generative AI in their learning journey. Importantly, these approaches not only help students learn with AI, but also help them learn about AI. Much of this guidance has been adapted directly from Dr Ethan Mollick and Dr Lilach Mollick’s 2023 Assigning AI: Seven approaches for students with prompts, recontextualised for built environment education.
Furthermore, Academic Skills have developed Canvas modules targeting foundational topics about GenAI for students. The modules can be downloaded from Canvas Commons, and shared with students via your subject Canvas site. These offer a straightforward introduction and complement advice for educators.
AI as feedback provider
AI can assist students by offering immediate and responsive feedback on their assignments, enabling them to receive ongoing guidance throughout their learning process. The example below illustrates how a student could use ChatGPT to provide feedback on a landscape site analysis.
Prompt:
I am an undergraduate landscape architecture student working on a landscape site analysis of Albert Park Lake in Melbourne, Australia. Provide concise feedback on my landscape site analysis, focusing on thoroughness, clarity, and environmental and cultural accuracy. Here is my work…
Response:
In your topography analysis, include more specific elevation data for a clearer terrain picture… For soil types, relate each type to potential vegetation and drainage impacts…
Your historical site use analysis is good, but explicitly connect past and present usage for design insights… In discussing native plant species, expand on their ecological benefits…
Overall, your analysis is well-conducted. Strengthen it by integrating detailed data, visual aids, and clear connections between historical, cultural, and environmental factors…
While valuable, it is essential for students to engage critically with this feedback, which is not infallible and may include inaccuracies. Students should approach AI feedback as a supplement, not a substitute, for their critical judgment. (And students should note that, for subject assessment tasks, their subject coordinator will be the final authority on marks and feedback for submitted work.) They must verify sources and challenge the feedback when necessary, embracing the opportunity to refine their analytical skills.
AI as personal tutor
Acting as a personalised tutor, AI can provide direct instruction and educational guidance, tailored to the learning pace and preferences of each student. In this role, students can prompt AI to deliver custom exercises and quizzes, explain complex topics and engage in meaningful dialogue.
The example below illustrates how a student could use ChatGPT to create a custom quiz for their learning.
Prompt:
Act as a tutor in a first-year university architecture subject. Ask me multiple choice questions that will test my application of architectural principles, specifically orientation. After I respond, give me feedback on my answer.
Response:
Here is a sequence of questions focusing on the principle of orientation. After you provide an answer to each question, you'll receive feedback, the correct answer, and the subsequent question. Let's start with the following:
In the context of building orientation and its impact on energy efficiency, how does the orientation of a building's longest side affect its passive solar gain in the Southern Hemisphere?
- Orienting the longest side towards the north minimises solar gain and is preferred for cooler climates.
- Orienting the longest side towards the south maximises winter solar gain, which is beneficial for heating purposes.
- Orienting the longest side towards the east maximises solar gain in the summer, increasing cooling loads.
- Orienting the longest side towards the west has no significant impact on the building's solar gain.
In addition to the capability that this demonstrates, please note that ChatGPT has responded here based on the—invisible, biased—assumption that the prompter is in the northern hemisphere. For further discussion of this type of bias, please see the GenAI in L+T section above. Students engaging with AI tutoring must be aware of the risk of confabulation—where AI may generate plausible-sounding yet incorrect information. It is essential for students to critically evaluate the questions posed and the AI-generated answer.
AI as coach
Assuming the role of a coach, AI can encourage students to partake in metacognitive activities that enhance their learning journey. In this capacity, AI can guide students to reflect on their recent learning experiences or assist in strategising for upcoming projects or assessments. By facilitating self-reflection and planning, AI may help students to deepen their comprehension and adopt more effective learning behaviours.
This approach of prompting student reflection has been developed in fine-tuned proprietary deployments including Leticia Britos Cavagnaro’s Riffbot. This AI chatbot is specifically designed to augment individual reflection with questions that encourage the learner to reflect meaningfully on their experience. However, students should be mindful of the limitations of an AI coach. For example, the AI may unintentionally mirror a student's stress or negative tone, which could impede constructive dialogue. It might also generate advice that doesn't align with a student's specific circumstances. It's important for students to actively steer these conversations, knowing they can adjust or redirect the AI's responses whenever necessary.
AI as teammate
Incorporating AI as a teammate or as a teamwork facilitator in group activities offers students an opportunity to engage in group work in novel ways. AI can assist team members in recognising and applying their various strengths within a group setting. It has the capability to offer counterpoints, prompting teams to re-examine their initial assumptions and consider different angles when making decisions. As such, AI can function as a consultative member of a team, providing additional viewpoints that may inform and shape action.
Prompt:
Act as an AI team contributor in a student project group. Provide a prompt that would encourage students to discuss and align their individual strengths and skills with specific tasks in the project to enhance the team's overall workflow and productivity.
Response:
Let's take a moment to assess our team's dynamic. Can you share what unique strengths you bring to our project? We can use this understanding to ensure everyone's talents are aligned with the project's needs, ultimately streamlining our process and enhancing our collective output.
AI as student
Shuffling roles—with the AI taking on the role of student, and the student taking on the role of teacher—offers students an opportunity to test and to refine their understanding of a topic. Taking on the role of teacher calls on the student to revisit, clarify, organise, and explicitly label the concepts that they’re trying to convey.
The example below illustrates how a student could use ChatGPT to generate ‘student work’ to assess and respond to.
Prompt:
Take on the role of a student who has studied undergraduate Urban Planning. Explain walkability and demonstrate your understanding by providing two applications of walkability in Urban Planning. Then ask me to explain what you got right or wrong in your examples and explanation and how you might improve your response.
Response:
In Urban Planning, walkability, measuring an area's pedestrian-friendliness, finds application in mixed-use development, blending residential, commercial, and recreational spaces. This approach diminishes car dependence, easing traffic congestion and fostering a sustainable lifestyle. Planners strategically design neighbourhoods to encourage walking, promoting community interaction. Another pivotal application is the adoption of complete streets, where roads are designed to accommodate pedestrians, cyclists, and public transportation. Allocating space for wide sidewalks, dedicated bike lanes, and accessible transit stops enhances safety and convenience, emphasising connectivity and improving overall quality of life. These applications collectively contribute to the creation of urban spaces that prioritise pedestrians and facilitate a more inclusive and sustainable environment. Your feedback on the clarity and accuracy of these examples is appreciated.
This approach relies on the student considering the AI’s response, comparing it to their own understanding of the topic, and interrogating gaps or discrepancies. Issues with this approach might include the AI pushing back against valid feedback, or, conversely, accepting invalid feedback.
As an alternative to the above, the student might also continue in the role of 'knowledge holder’ by responding to Socratic questioning (Cotterell, 2023) by the AI.
Prompt:
I am an undergraduate student in Urban Planning. Acting as an expert on the process of Socratic dialogue, please assess my knowledge and understanding of walkability. Ask me one question at a time to measure my understanding of walkability in Urban Planning. Once you are confident that you know my level of understanding, please stop the conversation and give me some productive feedback on my responses. Do you understand?
Response:
Yes, I understand. Let's begin with the first question: What are some key factors that contribute to the walkability of a neighbourhood or urban area?
This Socratic approach has been developed further in fine-tuned proprietary deployments including in Khan Academy’s Khanmigo AI assistant.
AI as simulator
AI can be prompted to script scenarios for the student to participate in and respond to, giving them the opportunity to practice applying their knowledge.
Prompt:
I want to practice developing an architectural fee proposal. Create a scenario in which you’ll play the role of a client who has invited tender for design of a new gymnasium. I’ll play the role of the architect. The aim is for me to practice asking questions to gather the information I need to produce an architectural fee proposal. As the client, you should respond to my questions, so that I can learn about the project you envision. You shouldn't volunteer all the information at once. Do not play my role. Only play the role of the client. Wait for me to respond. After I have asked five questions, please stop the conversation and give me any additional relevant information I may have missed. Do you understand?
Response:
Yes, I understand. I'll take on the role of the client, and I'll wait for your questions before providing information about the gymnasium project. Feel free to start whenever you're ready.
Issues with this approach might stem from gaps in the AI’s information about the requested topic and from its propensity to hallucinate to fill in those gaps. The AI might also lose track of the narrative. Or the student might focus too closely on the narrative, rather than on the application of their understanding.
Note that some prompts require iteration and fine-tuning. And in some cases, prompting can be counter-intuitive. For example, in the prompt above, adding “Do you understand” helps to guide ChatGPT to take a single role within the dialogue, instead of responding with a scripted dialogue between two characters. Learn Prompting provides a useful introduction to prompt engineering.
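For students or educators working with a chat-completion API rather than the chat interface, role-taking can be constrained more reliably by placing the scenario in a system message. The sketch below is a minimal illustration only, assuming a generic message format used by common chat APIs; the function name and scenario text are illustrative, not part of any official tool.

```python
# A minimal sketch (hypothetical helper; message format follows common
# chat-completion APIs) showing how a role-play prompt can be pinned as a
# "system" message, which helps hold the model to a single role.

def build_roleplay_messages(scenario: str, opening_question: str) -> list[dict]:
    """Assemble a chat message list that fixes the AI's role for the session."""
    return [
        # The system message carries the role constraints for the whole dialogue.
        {"role": "system", "content": scenario},
        # The student's turns are sent as "user" messages.
        {"role": "user", "content": opening_question},
    ]

scenario = (
    "You are a client who has invited tenders for the design of a new gymnasium. "
    "Answer the architect's questions one at a time. Do not volunteer all the "
    "information at once, and do not play the architect's role."
)
messages = build_roleplay_messages(scenario, "What is your budget for the project?")

# These messages could then be sent to a chat API of your choice, e.g.
# client.chat.completions.create(model="...", messages=messages)
print(messages[0]["role"])  # → system
```

Keeping the role instructions in a dedicated system message, rather than mixing them into the first user turn, tends to reduce the chance of the model drifting out of character as the dialogue lengthens.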
AI as tool
Finally, AI can be used as a tool to accomplish tasks, enabling students to produce more, quicker. This can be valuable for learning if students then reflect, iterate, and build on what they’re able to produce using the AI. But it brings with it the risk of the student producing material without truly engaging, thinking, or understanding. Again, this use of AI should also be carefully considered in the context of academic integrity. Please see the GenAI and student academic integrity section below.
GenAI and assessment design
Unsurprisingly, there is no black-and-white answer to the complex questions regarding genAI in ABP teaching.
This section focuses on assessment design for ABP subjects, and also includes references to academic integrity guidance. It will continue to be updated as this guidance and new valuable practices develop. The Melbourne Centre for the Study of Higher Education has also published in-depth guidance around Assessment and Generative AI, which, combined with the following guidance, will prepare you for dealing with assessment in an AI world. These resources align with the University of Melbourne AI Principles, which are designed to help guide actions around the adoption and use of AI tools and systems.
Commencing with Clear and Candid Conversations with Students
Clear and transparent conversations with students about the use of these tools as part of their learning are needed. They should be conducted early in the semester and revisited before assessment milestones to ensure a consistent understanding across the cohort. It is recommended that these conversations are framed around the specific learning that each subject is designed to support. Furthermore, the University encourages educators to address appropriate uses of these tools, in line with relevant policy, and with an understanding of their limitations and potential application for each discipline area. These conversations will naturally extend to academic misconduct and the University’s expectations of “good” scholarly behaviour. Proper methods for citing work generated through genAI platforms, along with the consequences of falling outside the University’s policies and expectations, should be clearly addressed. Subject coordinators may consider framing these conversations around students’ lived experiences and concerns, to give meaningful context to the impact and role genAI has on their learning.
Details on Academic Integrity and the University’s policies on GenAI can be found in the GenAI and student academic integrity section, below.
Assessments in the age of genAI
The emergence of genAI has profoundly influenced assessment design and approaches to assessment. Increased access to genAI poses complex challenges, particularly when designing meaningful assessment tasks that accurately capture a student’s learning.
AI platforms present a multitude of new issues for both educators and learners; however, it is important to remember the purpose of assessment as the search for evidence of learning. Well-designed assessments will provide valuable evidence to support both learners and educators in their respective roles to learn and teach. The BEL+T team is available to discuss, so please reach out to us at abp-belt@unimelb.edu.au.
This section provides an overview of assessment design in built environments education in relation to genAI. It explores opportunities and complexities, emphasising the importance of aligning learning outcomes with AI literacy skills, and providing recommendations for meaningful assessment tasks and collaborative approaches involving genAI. This section also details the University's policy and position on genAI in teaching and learning – please look to the bottom of this page.
As outlined in previous sections, AI platforms afford students the opportunity to cognitively offload some elements of their tasks, allowing them to focus their efforts on higher level cognitive processes such as critical thinking, reflective thinking, problem solving and creative thinking. When considering the design of assessment tasks and integration of AI, educators should consider the following questions:
- What are the intended learning outcomes (ILOs) of the subject?
- How can students demonstrate their learning if they are collaborating with a generative platform such as ChatGPT?
- Can tasks be designed to focus on higher levels of cognitive thinking? What fundamental cognitive skills are needed?
- Is there opportunity to align/update subject ILOs in the future as identified in Futures of Work with AI?
It is recommended the above considerations be read in conjunction with BEL+T’s Guidance on Assessment & Feedback which provides further detailed guidance and resources around planning and design of assessment and feedback. This advice can be thoughtfully applied in a genAI context.
Assessment design to engage students’ capacity to evaluate and create
GenAI provides both educators and learners with some thoroughly exciting pedagogical prospects, particularly approaches concerned with demonstrating and evidencing learning (i.e. assessments). The following offer key considerations as educators plan and design their assessment tasks.
- Centre the assessment task on a context and/or situation where AI cannot access information and data (e.g., assessing materials discussed in class, such as a summary of tutorial discussions).
- Shift the focus of what is being examined/assessed from the final output to the “behind-the-scene” process (e.g., the assessment task may instruct students to keep a detailed reflective/design journal that documents their process).
- Collect evidence of learning through modes that AI technologies are unable to replicate/output (e.g., students may participate in synchronous conversations or interviews).
Developing Assessment designs in the context of genAI
Planning an assessment task requires the contemporary educator to be clear on the role a genAI platform is expected to play in student learning. By understanding how AI works, educators can make informed decisions about the desired level of interaction between students and these AI tools.
In approaching assessment design, educators may consider genAI as a potential participant/collaborator that students will engage with as part of their assessment process. This approach offers insight into genAI’s capabilities and limitations with respect to learners and educators. BEL+T has developed a typology to inform assessment designs for collaboration with genAI.
More broadly, Cope et al. (2021) offer a valuable framework for understanding the functional parameters of AI in the context of teaching and learning. The following table outlines these functions, highlighting the strengths and opportunities that each can provide for evidencing learning. These roles will correlate with the AI platform’s capacity to name, calculate, measure and represent.
| Functional Parameters of AI | Ability | Strength | Opportunity in Evidencing Learning | AI Platforms |
|---|---|---|---|---|
| Naming | AI can efficiently identify and name content, so long as it has been defined in the training data. | The sheer number of items is significantly greater than the personal experience and memory a learner can reference. | Educators should note that this process of naming and identifying is linear and simplistic, based solely on what the machine has been “taught”, and does not take other parameters (e.g. context, situation) into consideration. The variable accuracy and reliability of AI-generated content therefore provides an opportunity for learners to demonstrate their capacity for critical thinking and judgement. | AI chatbots (e.g. ChatGPT, Gemini) |
| Calculability | AI can count and calculate large numbers and datasets, and process long sequential algorithms. | The capacity to automate a significant number of successive small calculations (i.e. Boolean decisions). On their own these small “unsmart” calculations can seem trivial; combined, however, the complex branches along the decision tree afford AI its “smart” appearance. | The conditions determining decision forks in the branches cannot be generated through “unsmart” calculations. This provides an opportunity for learners to demonstrate higher levels of evaluation and creativity when formulating further probability pathways. | Smart Sparrow; AI chatbots (e.g. ChatGPT, Bard by Google) |
| Measurability | AI can quantify some qualities of human experience and perception if a numeric value has been assigned for calculability, e.g. distances, dimensions, shapes, colours, time, temperatures, sound. | Sensors and other instruments designed to measure these qualities can deliver data continually and incrementally, in real time and at vast quantities. | Measurements collected by AI are only as useful as the instructions/algorithm the AI is designed on. This provides learners an opportunity to demonstrate their analytical skills and ability to evaluate through their reading and judgement of collected data. | Socratic |
| Representability | AI can re-present information to name, calculate and measure via various modes of communication. This is observable in automated rendering platforms (e.g. generated art and 2D graphics), 3D modelling, and speech/sound generators. | The speed at which numerous variations can be produced is significantly faster than a human. | Despite the quantity of output AI can generate, this does not represent the “best” quality or approach for a specific context/situation. Re-presented outputs display the mean of information defined through the “internet of things”, not the learner’s own cognition and opinions, or a diverse range of experience and perspective. | AI processing tools (e.g. Vizcom, Rendered.ai, DALL-E) |
For further information concerning genAI and assessment design the following sources are available for guidance:
- An in-depth breakdown of further prompts and considerations concerned with genAI and its impact in assessment design can be viewed through Monash University’s GenAI and assessment resource.
- Flinders University provides a useful flow chart that guides educators through the assessment design process when genAI is incorporated into the decision making considerations.
- Assistant Provost of Vanderbilt University, Derek Bruff, provides valuable insight, guidance and some examples through this post on how educators can approach written assessment tasks (e.g. essays) in response to genAI.
- Tertiary Education Quality and Standards Agency (TEQSA) provides a comprehensive suite of webinars examining the impact and implications of genAI in assessment design, with a strong focus on academic integrity.
Assessment design for collaboration with genAI
While many models for assessment design in the age of genAI focus on the 'quantity' of authorised genAI use by students, the following guidance outlines collaborative roles between a student and genAI as a form of group work for learning. Approaching the authorised use of genAI through the lens of group work offers both a useful and familiar approach for university educators and students, as well as language to clearly communicate 'authorised' use of genAI for specific assessment items.
Group work is frequently integrated into the design of learning activities or assessment tasks. Educators may include group work activity in subject design to promote deeper learning of subject content (Gaudet et al., 2011) or to focus on developing interpersonal skills (Kotey, 2007). The social interactions afforded through group work support the development of skills to work in teams successfully (Boud et al., 2001). This includes the development of critical self-awareness of students’ own learning through the exchange of feedback with group members. GenAI’s ability to tailor its responses through prompting draws parallels to learner experiences of peer interaction.
GenAI platforms can respond and adapt to interactions with the user, moving beyond a tool for cognitive offloading (e.g. calculator) as an involved co-learner contributing to the construction of knowledge. Lodge, Yang, et al. (2023) describe the nature of this relationship along a spectrum in which the interaction between the human and the AI is either focused and driven by the individual learner or a collaborative arrangement between human and machine. This spectrum of collaborative relationships is also observable in higher education when students engage in group work.
Building on these parallels between group work and genAI interaction, BEL+T has produced a new framework for evaluating and authorising student use of genAI in assessment. The framework presents a typology that views the student-AI relationship through the established lens of group work dynamics, identifying three distinct assessment types: individual, cooperative, and collaborative. This typology, presented in the table below, provides educators with clear guidance for understanding the roles and interactions between students and genAI, while offering practical insights for task design and learning objectives. The framework is further illustrated through examples and suggested level descriptors in the accordions below, helping educators make informed decisions about integrating genAI into their assessment design.
This framework is propositional, and aims to assist teachers in considering the challenge of incorporating AI into their assessment design through a familiar lens. It can help teachers to ‘authorise’ and to assess emergent skills in the context of (and alongside) a learning focus for a particular subject. The level descriptors may be adjusted and combined with other criteria in rubrics, for example.
It is of note that evidenced proof of ‘authorised use’ is still not completely reliable, i.e. actual student use of AI may not be detectable, despite the requirements of an assessment outline. However, applying this approach gives students an incentive to describe and to develop their use of these tools.
This approach was first presented at the ASCILITE Conference 2024. Please use the following citation – the conference paper can be accessed via the DOI link below.
Tregloan, K., & Song, H. (2024). From How Much to Whodunnit: A framework for authorising and evaluating student AI use. In Cochrane, T., Narayan, V., Bone, E., Deneen, C., Saligari, M., Tregloan, K., & Vanderburg, R. (Eds.), Navigating the Terrain: Emerging frontiers in learning spaces, pedagogies, and technologies. Proceedings ASCILITE 2024, Melbourne (pp. 255-265). https://doi.org/10.14742/apubs.2024.1441
| | Individual Assessment | Cooperative Assessment | Collaborative Assessment |
|---|---|---|---|
| Student Roles & Expectations | Student as Author: primary producer of the final output(s); goal determined by the student. | Student as Project Director: managing production and curating contributions; producing sections of the final output; commentary on the relationship of parts and contributions (i.e. peer evaluation). | Student as Co-Designer: contributor to a joint iterative exercise, ultimately directing and evaluating the shared work towards a final output; the student will train the AI regarding shared visions and goals, and adjust these in response to the AI’s adaptive generation. |
| GenAI Role | GenAI as Assistant: limited cognitive offloading as a refinement of student production (e.g. spellcheck, grammar, code). | GenAI as Group Member: produces defined segments/sections of the final output under the direction of the student(s). | GenAI as Co-Designer: iteratively refines and adapts contributions in response to students’ efforts; refines datasets/inputs (defined or developed by the student). |
| Task Design | Goals and outcomes are pre-determined by the educator; students work independently to accomplish learning goals. | Goals and outcomes are pre-determined by the educator; a clear boundary is set regarding the body of knowledge; activities include detailed instructions for producing the final outcome(s). | Open-ended but focused task(s) for learning; exploration of ideas; learning to learn; activities are structured, but the means of achieving the final outcome(s) are determined through engagement with the task. |
| Learning Focus | Process of individual skill development and knowledge acquisition. | Development of skills and knowledge through known strategies (i.e. specific activities are set for students to conduct as part of the assessment). | Social construction of knowledge and skills, which may involve trial-and-error testing and iteration of novel outputs. |
Note: though these assessment design types have been presented independently, they may be integrated as complementary elements of a more comprehensive assessment task. As outlined, the roles undertaken by the student and the AI are different, and are (currently) not equivalent to the approaches that might be taken by two independent humans.
Individual Assessment designs focus on a student’s personal achievements and learning. The assessment is designed with the expectation that the student is working by themselves to accomplish the final outcome, and that evaluation in this context is about validating a student’s personal skills or knowledge. When considering a role for genAI in such assessment designs, educators may consider minimal cognitive offloading. Functions may include spell-checking, code-checking, calculations by a calculator, presentation layout suggestions (e.g., in PowerPoint slides), or summarising selected text for further analysis by students. Higher-order learning outcomes to be evidenced through the assessment task can occur independently from the support genAI provides to the student (Lodge, Yang, et al., 2023). In this form, the educator must set clear goals and outcomes for the assessment task while the student leads the development and decision making towards the final outcome. Students are responsible for producing the required submission, with limited support.
Examples:
An example of an individual assessment design is a self-reflection essay. In this type of assessment, the educator requires students to evaluate their own thoughts and opinions, evidenced through a written essay. Individual assessments focus solely on evidencing students’ learning by tasking them to work independently towards the final outcome, where their learning progression is unrelated to that of other students. In a self-reflective essay, students are tasked with demonstrating the capacity to reflect on and articulate their own personal thoughts and insights. The student may incorporate genAI as a passive tool, providing some minimal cognitive offloading by editing grammar and/or proofing the written text; however, it does not contribute to the development of the student’s reflective thinking.
Other examples of individual assessments include forms of written essays where genAI platforms may suggest synonyms and alternative word choices, or restructure sentences and/or paragraphs. Additionally, individual assessments designed to incorporate multimodal forms of submission (e.g. visual images, multimedia, etc.) may involve students engaging with genAI in the editing process through generative filling and expanding (i.e. in-painting and out-painting). Educators may consider instructing students to submit their assessment task before they engage with genAI. Educators may also wish to incorporate activities requiring students to critically reflect on how genAI has contributed to their work, and on how they managed the platform to support the development of the final output.
Suggested Level Descriptors: Individual assessment
Poor
- AI use moves beyond the authorised use as set out in the task requirements
- AI use is ineffective and does not improve the student’s own work
- AI use does not align with relevant conventions or assessment requirements
Good
- Application of AI is clear and effective for the task requirements
- AI use has improved the student-produced work in relation to the authorised aspects
Excellent
- Use of AI is strategic and deliberate
- Student evaluates the application of AI, and adjusts further AI use to significantly augment the work
- Student may combine multiple AI tools to address specific aspects of the submission
Olsen and Kagan (1992, p. 8) describe cooperative learning as group learning that is “dependent on socially structured exchange of information between learners in groups and in which each learner is held accountable for his or her own learning”. Students who engage in well-designed cooperative learning demonstrate increased intrinsic motivation in their studies, develop higher-order thinking skills, and show improved attitudes towards the curriculum (Johnson & Johnson, 2013). Cooperative assessment designs are planned and prescriptive, providing students with highly structured and descriptive materials and clear directions about how to work together in groups towards a single output that will demonstrate their learning. Elements may be driven by independent personal goals and values, and may be independently assessed. This approach aims to support an interdependent relationship between members; the assignment of roles clarifies each member’s expected contributions and responsibilities.
Examples:
An example of a cooperative assessment design is a jigsaw reading task. In this case, the educator allocates a specific reading to each group member, who then shares insights with the rest of the group. The student role and expectations are clearly communicated, including the expectation of building expertise in the assigned reading. For a student paired with a genAI tool, the educator will provide structured directions on how to engage with the genAI, perhaps including the types of prompts that might shape the platform’s response. This assessment design may also involve students evaluating the quality of the responses produced by the genAI, demonstrating higher-order critical thinking skills. This critique may involve students comparing genAI-generated work against a human-generated counterpart, or against the assessment task’s evaluation criteria (i.e. rubric). Ultimately, the student leads the assessment task as project director, making all decisions in response to the information delivered by the genAI tool.
Other examples of cooperative assessments include designs that task students with using genAI to produce foundational content on a particular topic and/or theme. Such content could include datasets, draft diagrams/images, and first drafts of paragraphs. Students would then continue to develop the genAI-generated product towards their own original final outcome.
Suggested Level Descriptors: Cooperative assessment
Poor
- AI use moves beyond the authorised use as set out in the task requirements
- Task outcome is incoherent or the sections produced are not effectively integrated
- AI use is lacking or unclear, or does not effectively deliver the required outputs
Good
- Specified student and AI contributions align with authorised use
- Student has managed the AI effectively to deliver the required contributions
- Contributions are clearly identified and complementary within an integrated whole
Excellent
- While independently produced in line with the assessment brief, elements of the task outcome are presented as a coherent whole
- Assessment task development, including its parts, has been skilfully and effectively managed
- Student contributions include analysis of differences between assignment sections, and these are evaluated, described and/or resolved as part of the outcome
The purpose of collaborative assessments is to support and enable students’ social construction of knowledge through participation with others. Successful collaborative assessments may deliver similar learning benefits to cooperative assessments, with the additional benefits of promoting students’ capacity to reflect (Xiao et al., 2008) and to retain complex information through deep learning (Atman Ulsu & Yildiz Durka, 2022). They may also encourage an openness to diverse voices (Cabrera et al., 2002). These benefits result from the inclusion of open-ended but focused tasks that require students to collaboratively and iteratively develop the final outcome. While an educator may provide a loose structure around activities to ensure students meet the intended learning outcomes of the subject, students ultimately determine how the final outcome is achieved. Collective decision-making, including the allocation of roles, shapes how ideas are explored and exchanged during the development of the assessment task outcome. This relies heavily on the quality of interaction between group members, with relationship dynamics nurtured to promote positive engagement and participation by all members. Here, the educator takes the role of a facilitator, supporting constructive and positive group interactions.
An example of a collaborative assessment is the design studio project, a common assessment task within design education that provides students with enough information to commence their design process in response to an open-ended brief. Actionable tasks are student-led, as steps are identified within a flexible workflow informed by highly collaborative social interactions such as sharing information and ideas student-to-teacher and peer-to-peer. This interaction is reciprocal and a critical part of the design process, enabling students to navigate back and forth through the problem space towards an optimal final outcome (Lawson, 2006; Schön, 1995).
In a paired student-genAI scenario, the student will train the genAI’s responses towards a shared goal. The uncertain, wicked nature of design problems requires students to work with genAI as a team, where the reciprocal dialogue between student and machine helps to solidify what the final outcome will look like. Considering the development of architectural studios with a focus on machine learning for design, Caitlin T. Mueller suggests: “As in fully human collaborations, I find that empathy and insights into the thinking of creative partners are critical to productive and innovative design outcomes. … I am interested in promoting curiosity-driven approaches that wonder why AI models generate what they do, rather than treating them solely as solution machines” (Broome, 2024).
Other examples of collaborative assessments include brainstorming activities in which students work in tandem with genAI to explore and generate ideas and responses to complex problems. Another example involves designing project proposals, where students and genAI co-create a project vision and then engage in a continuous feedback loop to iteratively shape the final proposal. In each case, the student and genAI are engaged in a cyclic exchange of information to develop the knowledge and skills needed for the final outcome.
Suggested Level Descriptors: Collaborative assessment
Poor
- Use of AI does not move beyond direction; student ideas are not expanded beyond initial or student-originated ideas
- Student has not developed an approach to work creatively in partnership with the AI, limiting the capacity for an original response to the task brief
- Task outcome is incoherent, such that human and AI contributions are independent or unbalanced, and/or the task focus is not sufficiently addressed
Good
- Both student and AI contributors have expanded initial perspectives through collaboration
- Student has effectively ‘trained’ the AI through iterative prompting or other development approaches to deliver useful contributions to the final outcome
- Task outcome combines both human and AI contributions for a coherent outcome
Excellent
- Student has both ‘trained’ the AI, and learned from its responses, to deliver an original and creative response to the task
- Contributions of the student and the AI are balanced and integrated
- Assessment task development has been directed by the student, and has been responsive to emergent opportunities and directions throughout
GenAI and student academic integrity
Guidance for teaching staff on students’ use of genAI
It is important to have open discussions with students, before they submit work, about the ethical use of genAI and the University’s requirements around its use in assessment tasks. Key information for students is captured in the ‘Guidance for students on the use of genAI’ section below.
Subject coordinators may also consider reviewing assessment task design in the context of genAI as captured in the ‘GenAI and assessment’ section above. The BEL+T team are available to provide support. Please reach out to us at abp-belt@unimelb.edu.au to discuss.
For use once students have submitted work, an AI writing detection tool has been integrated into Turnitin, and is being further refined through its use at the University. Learning Environments has produced comprehensive information on the reliability of the tool, its functionality, and what to do if the tool returns a high percentage of text flagged as likely to have been AI-written. Note that:
As with the Turnitin similarity report, a high percentage of text flagged as likely to have been AI-written is not proof that academic misconduct has taken place but may be a sign that further investigation is warranted. Academic judgement should be used to determine whether to investigate the matter further, including by discussing with the student.
Investigations of such cases may look for additional evidence that the assessment material submitted was completed by the student, on the balance of probabilities. This might include looking at the metadata of the files submitted, comparing the assessment to other work completed by the same student, or asking the student to provide drafts of the work, to describe how they completed it, or to explain the content of the assignment. Such evidence, taken together, may allow committees to come to a judgement about whether or not the work in question is likely to be the student’s own writing.
As with all potential student academic misconduct, subject coordinators are the main point of contact. Staff should not accuse students of academic misconduct, but should seek more information where appropriate, and gather evidence. Steps for identifying and responding to potential academic misconduct for investigation are outlined via BEL+T’s Academic Integrity Guidance. Please also refer to the links to Academic Misconduct on the ABP Faculty Intranet.
If a student is found to have committed academic misconduct by representing work generated by artificial intelligence software as their own, they will be subject to the penalties outlined in the Schedule of Student Academic Misconduct Penalties.
For guidance on those investigations and on their possible outcomes, please reach out to the ABP Student Programs team at abp-misconduct@unimelb.edu.au. Even if the outcome involves an educative approach—which may be suitable in some initial cases—please keep the ABP Student Programs team in the loop, so that the Faculty can maintain a comprehensive record of engagement with each student.
Guidance for students on the use of genAI
Per the University’s Advice for Students Regarding Turnitin and AI Writing Detection:
The acceptable use of AI will vary across disciplines, subjects, and assessment tasks. Your subject coordinator will provide this information, but it is your responsibility to check the assessment guidelines and relevant policies, and to understand what is expected of you. Resources on academic integrity are available to you through your subject’s LMS site, Academic Skills, and the Library.
If an assessment task does permit the use of AI tools and technologies in the preparation of the submission, this usage must be appropriately acknowledged and cited in accordance with the Assessment and Results Policy (MPF1326).
If an assessment task does not permit the use of such tools, or if you use such tools in the preparation of an assessment submission without acknowledgement, this is academic misconduct. In accordance with the Student Academic Integrity Policy (MPF1310), any student who commits academic misconduct is subject to the penalties outlined in the Schedule of Student Academic Misconduct Penalties.
Teaching staff should encourage students to review this advice (also reflected in the University’s 21/4/2023 Statement on the Use of Artificial Intelligence Software in the Preparation of Material for Assessment). Students should note that this advice applies to a variety of translation and editing tools beyond the common chat-based genAI tools. This document highlights the important role that subject coordinators play in clarifying the appropriate use, or not, of genAI tools for learning and assessment.
GenAI for assessment of student submissions
Chancellery (Academic) has developed guidance on the use of AI tools for the assessment of student submissions. As with other questions related to genAI and education, this raises a complex and fast-moving set of issues. Current guidelines are shaped by the ongoing development of related technologies and tools, including updates to tools already in use at the University, and by considerations including privacy and security of student, staff and University data; staff responsibilities, roles, and pedagogical use of tools; and the validity and reliability of AI-generated feedback.
The following guidelines apply for 2024:
- From Semester 1, 2024, staff can use new AI systems to support their evaluation of students’ work and to provide feedback to students. However, staff remain responsible for any academic judgements made on students’ submitted work and any feedback provided to students. The outputs from any AI system used for assessment or feedback must be reviewed by staff and the prompts and inputs the AI system is using must be well understood and managed. The sole use of new Gen-AI tools to allocate marks or grades to students is not appropriate.
- Staff wishing to use new Gen-AI tools to support student assessment and feedback should use the University’s internal, secure platform of Gen-AI tools called ‘Spark’ and should seek the endorsement of their Faculty’s Associate Dean, Teaching and Learning prior to doing so.
- Staff who are interested in using new Gen-AI tools that are not within the University’s secure Spark platform or enterprise technology suite need to seek endorsement from the Deputy Vice-Chancellor, Academic.
Importantly, this advice specifically focuses on how staff might use new AI tools for the assessment of, and feedback on, students' work. To avoid confusion, this advice is not related or relevant to:
- Student use of artificial intelligence tools (even where they are directed to use such tools in the completion of work in their subjects that will be assessed by staff).
- Staff use of artificial intelligence tools to prepare their own teaching, learning and assessment material that is then used in their subject or program.