BEL+T Guidance on Generative AI
What is this Guide?
This BEL+T Guidance on Generative AI (genAI) offers ABP subject coordinators, and other educators in built environments disciplines, some key concepts and definitions in this fast-developing space. It reviews more general advice and guidance, and complements MCSHE and University guidance (April 2023 update) with a focus on some issues of specific relevance to built environment educators. This guidance is developed in anticipation of Semester 2, 2023, and is also expected to change as institutional advice is released and updated, and as the broader academic discussions of these issues develop.
Of course, the range of pedagogies and teaching practices relevant to built environment disciplines is particularly wide. Some subjects take teaching approaches that are closer to HASS disciplines, while others are more aligned to STEM pedagogies. This means that the types of genAI tools and issues relevant to teaching are similarly broad. In addition, studio pedagogies introduce practices as well as concerns relating to creativity and original authorship in a genAI context.
The table below provides a summary of each of the sections within this guidance. These sections can be accessed using the navigational sidebar on the top-left corner of the page.
There is much to be learned and many nuances to this complex and evolving space, and there are clearly significant ways in which these new tools are likely to impact built environment disciplines and education. There are many opportunities for innovation as well as valid concerns to consider, particularly relating to the ways in which students develop foundational knowledge and critical perspectives, the implications of biased datasets and the treatment of intellectual property. For all subjects, clear and careful assessment design offers a heightened focus on what our students are learning, and what they will need to learn, as they engage with these tools.
We are looking forward to your comments on this guide, and to sharing the excellent and creative approaches that ABP educators are taking in this dynamic and evolving landscape – watch this space!
| Section | Summary |
| --- | --- |
| What is genAI? | This section introduces genAI, explaining its principles and highlighting examples of its applications. It explores how genAI generates new and creative outputs and highlights various models used for generating these outputs. |
| GenAI in BE | This section provides an overview of the impact of genAI in Built Environment disciplines. It presents insights from ABP academics on the evolving role of genAI in professional practice, highlighting the implications for future graduates' knowledge and skills. |
| GenAI in L+T | This section introduces the complicated landscape of genAI through a learning and teaching lens. It outlines the challenges and opportunities of genAI with a focus on student perspectives, biases and data-related concerns, and considerations pertaining to creativity and intellectual property. |
| What about my Assessment Design? | This section provides an overview of assessment design in built environments education in relation to genAI. It explores opportunities and complexities, emphasises the importance of aligning learning outcomes with AI literacy skills, and provides recommendations for meaningful assessment tasks and collaborative approaches involving genAI. This section also details the University's policy and position on genAI in teaching and learning. |
What is genAI?
GenAI is a remarkable branch of artificial intelligence that enables users to create novel outputs that closely resemble human-generated content. These outputs can take various forms, including text, images, videos, sounds and 3D models. Thanks to recent advancements in the field, genAI has witnessed unprecedented levels of growth and adoption and is revolutionising numerous industries and domains.
The crux of genAI is in its name – generative. While traditional AI algorithms have focused on identifying patterns within data for predictive purposes, say to predict whether the next image in a sequence of images is a cat or a dog, genAI leverages the learned patterns to create entirely new outputs. Given the above example, a genAI model could create an entirely new representation of a cat or a dog, or even a cat-dog!
How does Generative AI work?
GenAI is grounded in machine learning techniques that draw inspiration from the neural systems of the human brain, known as neural networks. These genAI networks are ‘trained’ on extremely large amounts of data, from which they learn to capture and identify features, patterns and relationships within the data. This enables the model to generate new data instances that are similar yet distinct from the original training data.
For instance, a genAI model trained on images of faces learns to understand facial features like nose shape, eye placement, and smile curvature, allowing it to generate new faces that possess the same structure and features but do not match any specific face. Similarly, models like ChatGPT, trained on vast amounts of language data, learn to understand grammar, sentence structure, common phrases, context, tone and style, enabling them to generate coherent and contextually appropriate text.
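One way to make the idea of 'learning patterns, then generating new instances' concrete is with a deliberately tiny, non-neural sketch. The character-level Markov model below is an illustrative toy, not how ChatGPT or other genAI tools are actually built: it 'trains' on a short text by recording which character follows each two-character context, then samples new text that resembles, but need not repeat, the original.

```python
import random

def train(text, order=2):
    """Learn which characters follow each length-`order` context."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text one character at a time from the learned patterns."""
    rng = rng or random.Random(0)
    out = seed
    for _ in range(length):
        context = out[-len(seed):]
        followers = model.get(context)
        if not followers:  # unseen context: stop early
            break
        out += rng.choice(followers)
    return out

corpus = "the cat sat on the mat. the dog sat on the log. "
model = train(corpus, order=2)
print(generate(model, "th", length=30))
```

Real genAI models replace this lookup table with neural networks trained on vast datasets, but the generative principle is the same: new outputs are sampled from learned patterns rather than copied from the training data.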
GenAI can utilise a variety of models, each employing unique methods for training the AI and generating results. There are many types of models, the most popular being Generative Adversarial Networks (GANs), Variational AutoEncoders (VAEs) and Denoising Diffusion Probabilistic Models (DDPMs). Each model possesses unique strengths and limitations, making them suitable for different contexts. Some models excel at producing high-quality results, while others offer better control over the generation process. Consequently, the choice of model plays a crucial role in determining the capabilities and limitations of genAI applications across disciplines.
GenAI, the models used, its applications and its outputs are developing very quickly. Current debates relating to its legal status, control and commercialisation, as well as its potential, will impact the ways it develops into the future and the impact on built environment disciplines and on learning and teaching.
GenAI in BE
GenAI is fundamentally altering the landscape of built environment disciplines, presenting both exciting innovations and significant challenges to these fields. It promises to transform the way buildings are designed – lowering costs, increasing productivity and reducing waste. Yet at the same time, the adoption of genAI raises profound questions around creativity, efficiency, ethics, and the nature of human involvement in these processes.
GenAI's impact on the built environment can already be seen in contemporary professional practice. Zaha Hadid Architects (ZHA) recently shared some groundbreaking ways that the practice is integrating AI into their design process. ZHA are leveraging AI to inform and personalise office spaces, employing fine-grained data analysis to enhance the design project's outcomes. In another part of the practice, the firm is utilising AI text-to-image generators to stimulate design ideas for projects and to aid early ideation. The invocation of 'Zaha Hadid' in the AI's prompts seeks to claim authorship, marking a profound integration of genAI into the oeuvre of the firm.
To better understand this dynamic practice landscape, we asked ABP academics for their current perspectives on the biggest impact of genAI in their disciplines, and what these might mean for knowledge or skills needed by future graduates. The following section presents their insights, reflecting a diversity of viewpoints on the evolving role of genAI in the pedagogy and practice of BE disciplines and the challenges and opportunities this presents for the next generation of professionals.
Creativity and Efficiency in Urban Design
GenAI has significantly impacted architecture, policy planning, and urban design, redefining the boundaries of creativity and efficiency. It offers the ability to generate countless design options and automate repetitive tasks, aiding architects and urban designers in developing optimised, innovative solutions.
However, concerns over the potential loss of creativity, homogenisation of designs, and insensitivity to unique local factors persist. There's a fear that genAI could overshadow human intuition, local cultural nuances, and personal identities within design. Privacy and data security are also crucial issues as urban planning becomes increasingly data-driven.
Despite these challenges, genAI is opening up remarkable opportunities. The technology can aid sustainable, equitable, and resilient urban development, simulating design intervention impacts on a range of factors, from climate adaptation to socio-economic growth. It also democratises the design process, allowing for genuine citizen engagement and interaction.
The emergence of genAI has significant implications for future graduates. While traditional skills remain relevant, there's an increasing demand for proficiency in AI, data science, social and environmental systems, and cross-disciplinary collaboration. Additionally, future professionals must be well-versed in the ethical application of AI, coupled with critical thinking abilities to responsibly navigate its complexities.
With genAI gaining ground, the urgency for upskilling has never been higher. Graduates who can wield this technology will have a competitive edge in the market. Moreover, the novelty of these techniques presents a significant opportunity for academic and commercial advancements, potentially sparking an influx of startups and monetisation possibilities. The future of these disciplines, therefore, converges at the intersection of technology, creativity, societal considerations, and a thorough understanding of genAI.
- Dr Thanh Ho - University of Melbourne
- A/Prof Jason Thompson - Senior Research Fellow, University of Melbourne
- Dr Sachith Seneviratne - Research Fellow in Computer Vision and Health, University of Melbourne
Changing Practices of Architectural Design
To understand the effect of genAI in the BE disciplines, it is useful to first distinguish between genAI for design visualisation and communication (image-to-image applications) and the design concept and design development tools (text-to-image models).
Image-to-image tools are useful to generate photorealistic images from a reference image and sketches or clay renderings. These tools are expected to speed up the process of rendering production, and users won’t need to be as skilled as current experts in design visualisation.
Text-to-image tools allow designers to produce images of design ideas through textual prompts.
The development of text-to-image and image-to-image tools has already created a new professional figure, the 'prompt engineer'. Prompt engineers currently work in visual arts and advertising, and may well be employed in architecture in future years, even without any specific formal design training. For the moment, we have seen a number of academics and architects promoting themselves online as AI experts because they use genAI tools. In fact, we believe it is more appropriate to consider such designers and academics simply as users and designers, testing and exploring the potential and limits of new digital tools.
GenAI is seen as a novelty and embraced as a cutting-edge technology. The myth of the 'novitas' has always been appealing to architects and designers, and it was previously seen in digital design developments, including parametric design and optimisation. However, we believe that designers will have to go back to the roots of AI development (theory and technology) to fully explore how AI developments will unfold in the near future. Designers will need to rediscover Negroponte's work and associated studies of the 60s and 70s. The same applies to many other research projects of the 90s, which focused on intelligent CAD systems and extensively discussed issues associated with intelligence, creativity, and human-machine interaction.
With this in mind, we can begin to theorise what this might mean for future graduates' knowledge or skills. Students can already use genAI in design studio settings and seminar-based subjects without specialist training. We believe that future graduates will need to develop critical thinking skills to reflect on the products of genAI. To enable a fruitful collaboration with such tools, future graduates must develop their communication and teamwork skills, which are the basis of a successful human-machine interaction. It sounds like a paradox, but the knowledge and skills future graduates will require can be found in the basic principles of architectural design thinking and design processes.
- Dr Alberto Pugnale - Senior Lecturer in Architectural Design, University of Melbourne
- Dr Gabriele Mirra - University of Melbourne
Prospects and Challenges in Landscape Architecture
- Wendy Walls - Lecturer in Landscape Architectural Design, University of Melbourne
AI Image Generation in Design
The emergence of generative AI image generators is profoundly impacting the realm of architecture and design. These tools, deeply rooted in computational practices, offer new avenues for academics and creative practitioners to engage with their craft. The following video showcases the AI-assisted Sketchbook project by Leire Asensio Villoria and David Mah. They have pioneered a method of digital archaeology that involves reverse engineering and understanding the material intelligence of historical cultural artifacts, and embedding these into generative associative models. Through this process, new and novel design iterations are created that are deeply rooted in historical precedent. These outputs challenge traditional architectural paradigms and highlight the potential of AI generative tools in the design process.
However, a critical question remains: Can AI, built upon existing cultural artifacts, truly produce novelty, or does it risk anchoring culture in a repetitive cycle? The transformative potential of AI in design is evident, but its true capability to innovate and redefine remains a topic of exploration.
GenAI in L+T
The introduction of novel educational technologies often arouses strong emotions, ranging from doomsday predictions to endless euphoria (Rudolph et al., 2023). In the case of genAI, opinions are polarised between those who are excited about the potential it brings and those who advocate for its prohibition.
GenAI undoubtedly presents both opportunities and challenges in higher education. It offers the potential to fundamentally change the way we think about education and learning, with opportunities for improving efficiency, effectiveness and societal impact for both students and educators (Atlas, 2023). However, alongside these innovations comes considerable risk, including threats to academic integrity, concerns around the accuracy of AI-generated content, propagation of biases or misinformation and potential overreliance on the technology (Gimpel et al, 2023).
Therefore, in approaching the use of these tools, it is imperative for educators and students to be both aware and critically reflective. This guide recommends that built environment learners and teachers proceed with caution, with strong emphasis on ethical and responsible engagement, and a focus on the development of AI literacy. With this in mind, let us look at some of the current and potential issues that must be foregrounded.
It is also helpful to note efforts by the Australian Government's Tertiary Education Quality and Standards Agency (TEQSA) to develop, and to share, collected guidance from across the Australian HE sector via the TEQSA good practice hub. Of course, this valuable advice should be considered in a UoM policy context.
Understanding why students might decide to use AI for their coursework is critical to ensuring that we genuinely support learning, and that the institution meets its obligation to graduate employable and ethical citizens.
Students might elect to use genAI for their coursework for numerous reasons, and it is important to understand that not all of these reasons reflect mischief or an intent to cheat. Some students may lack confidence in producing work entirely themselves, whilst others may not feel motivated or supported to do so. Indeed, scholarship on why students participate in academic dishonesty more widely suggests that the possible reasons can extend beyond the desire to achieve certain results to include: feeling inadequately prepared for assessments; caring more about results than learning; confusion around what constitutes academically dishonest behaviour; feeling like the behaviour is commonplace amongst their peers; or feeling a lack of connection to their studies or institution more generally (Bryzgornia, 2022). Some scholars have even raised the notion of 'ethical cheating' in reference to students collaborating, sharing knowledge/information/ideas and using open-source platforms precisely to develop 21st century skills, yet in ways that might traditionally have been considered cheating (see Brimble, 2016).
In relation to AI specifically, a survey of US higher education students conducted by Best Colleges in March 2023 showed that students hold diverse views towards the use of AI in university coursework, ranging from those who actively use it to those who believe it should be prohibited in educational settings (Richards, 2023). In the same survey, 40 percent of respondents said the use of AI defeats the purpose of education, and 63 percent said AI cannot replace human intelligence or creativity. Anecdotal reports suggest that students are also using genAI tools because they are fun, and because they simply want to explore what these tools can do. As discussed in the Assessment section of this guide, students deserve clarity and clear communication around what is considered proper versus improper use of AI in their studies and, for each assessment task, what is encouraged and what may be required. This includes when and how students should disclose the use of AI tools, and any distinctions around expectations when it comes to AI use in text-based versus graphic-based formats.
Apart from clarifying university policies and expectations, it may be beneficial to discuss with students the use of AI by professionals and academics in the field, and the current set of ethical questions surrounding these practices. As Siva Vaidhyanathan writes, this is a teachable moment for our students as well as ourselves. Not only are tools and technologies certain to develop over time, but institutional and personal stances towards AI are also context-dependent. Generally, if students feel uncomfortable with, or discouraged from, discussing their views or habits with staff, this can contribute to a problematic gap between teacher assumptions/expectations and learner practices. The more educators and students feel they are working together to promote learning and professional development, the better. As Ouyang and Jiao (2021) argue, the advancement of AI technologies does not ensure good educational outcomes; rather, the long-term goal of AI use in educational contexts is to contribute to a paradigm where learners are supported and empowered to take agency over their own learning.
Datasets and Bias
GenAI has great potential in higher education, but it is crucial to approach these tools responsibly and consider the ethical considerations associated with them. This includes consideration of equity in assessment design, as paid and unpaid versions of genAI tools (such as ChatGPT) have access to different datasets.
A significant concern is the tendency of these models to perpetuate societal biases and discrimination (Dahmen et al., 2023). These models are trained on large amounts of data, and if that data is biased, the models will reflect these biases in their output (Atlas, 2023). In doing so, they reinforce existing societal issues and discriminations. To address this, it is essential for users to be educated about these biases, develop critical evaluation skills and gain technical expertise in mitigating biases when using these tools (Gimpel et al., 2023). This includes employing strategies such as proper prompt engineering to guide the genAI models towards generating content that is more inclusive, unbiased and aligned with ethical considerations. By proactively engaging in responsible practices, users can reduce bias and foster an equitable and ethically sound application of these tools.
Paradigms of AI Usage by Learners in Higher Education: According to Ouyang and Jiao (2021) three paradigms can describe how AI is currently being utilised in education.
- AI-directed, where the learner is considered a recipient (Paradigm 1);
- AI-supported, where the learner is perceived as a collaborator (Paradigm 2); and
- AI-empowered, where the learner contributes as a leader (Paradigm 3).
Paradigms 1 and 2 have been the focus of AI in higher education over the past two decades. There is now a call for Paradigm 3, an AI-empowered, learner-as-leader approach centred on promoting human intelligence alongside integrated AI. This approach aims to address bias in AI algorithms and datasets and the lack of governance of AI decision-making, and to promote learning and teaching experiences that are more socially just and inclusive.
Types of Datasets: For learning and teaching experiences to be more socially just and inclusive, it is important to understand how students use AI platforms and the forms of information that are input and output (Dwivedi, 2023). In built environment education, students can engage with genAI platforms using several types of datasets, including image, text, audio and/or code, depending on the subject.
Biases in Datasets: Depending on the type of genAI platform, datasets may not be curated or selected to reflect an inclusive range of issues or perspectives. Such biases may be systemic, societal, cultural, racial, ethnic and/or methodological. As large social datasets are fed into algorithms, unchecked algorithms can result in systemic discrimination that favours certain individuals or groups over others (Ferrara, 2023; Ray, 2023). Most datasets are Western-centric because Western forms of information dominate the material readily available for genAI platforms such as ChatGPT to utilise.
This range of potential biases is relevant to the cultural, linguistic, ethnic and historic background of the content, and of each student, and should be recognised in support of socially just and inclusive learning (Ferrara, 2023). Datasets drive textual outputs such as essays, reports, summaries, reflective narratives and theses, as well as rendered images, audio outputs and drafts, and data analysis. Algorithms in AI/machine learning systems that seek to increase efficiencies can embed existing biases and propagate ongoing disparities. This compounding bias can hinder the achievement of social justice in the classroom, and the decolonisation efforts of higher education institutions.
Responding to Dataset Bias in your teaching: Datasets have a lifecycle of input, usage and interpretation. It is important that at each stage of the lifecycle, students are supported in how they relate to data and its interpretation for their learning and assessment (Dwivedi, 2023).
Some teaching strategies to consider include:
- Build students’ awareness of different types of biases that might be inherent to datasets;
- Encourage students to develop prompts that respond to biases by adding additional information such as ‘internationalise the prompt’ or ‘consider the Global South perspective’;
- Provide students with examples of how bias in datasets might impact their own worldviews and interpretation of readings/scholarship. This can include showing students that certain genAI outputs can undermine respectful and ethical engagement with diverse scholars from varied cultural backgrounds, or may lead to misinterpretation and distortion of information. Such distortion can be disrespectful and may project further bias/exclusion of diverse communities and places;
- Encourage students to check the authenticity of resources, and not to rely solely on output from a genAI platform as reliable information about the cultures, genders, races, ethnicities, histories or experiences of diverse communities, in support of efforts to decolonise educational and professional practice.
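The second strategy above – adding perspective cues to prompts – can even be made systematic. The sketch below is a hypothetical illustration only: `augment_prompt` and `PERSPECTIVE_HINTS` are invented names, not part of any genAI tool or library, and the hint wording is a starting point for discussion with students rather than a tested remedy for dataset bias.

```python
# Hypothetical sketch: systematically broaden a student's prompt before it is
# sent to a genAI tool such as ChatGPT. Names and wording are illustrative.

PERSPECTIVE_HINTS = {
    "global_south": "Consider perspectives from the Global South.",
    "international": "Include examples from outside Western, English-speaking contexts.",
    "first_nations": "Acknowledge First Nations knowledge and perspectives where relevant.",
}

def augment_prompt(prompt: str, perspectives: list) -> str:
    """Append explicit perspective instructions to a student's base prompt."""
    hints = [PERSPECTIVE_HINTS[p] for p in perspectives]
    return prompt.rstrip(".") + ". " + " ".join(hints)

base = "Summarise key approaches to social housing design"
print(augment_prompt(base, ["global_south", "international"]))
```

Even without any code, the same idea can be taught as a checklist: before submitting a prompt, students ask which perspectives their question implicitly assumes and add one sentence that widens the frame.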
Creativity and IP Rights
Important issues relating to genAI and teaching in ABP disciplines are related to creativity and authorship, and impact studios and other subjects involving innovation and creativity. Each coordinator will need to explore and identify how students can best engage with these tools in relation to subject learning outcomes. Some questions and references are outlined below to assist this thinking.
When we ask students to 'be creative' in design-related disciplines or learning activities, we are asking them to contribute and iteratively refine their own beliefs, values and attitudes as they respond to a design challenge. Students learn to select from and/or transform ideas from precedents, research and their own experiences, as well as how to consciously reflect on and direct their approaches (Lawson & Dorst, 2009; Cross et al., 1994). We are asking them to participate in the 'curious and beautiful relation between design problems and their solutions' (Lawson, 2007). Students are rewarded for designs that contribute positively and innovatively in this context.
How is student innovation or creativity framed and identified in your subject through the ILOs and elsewhere? How is it assessed via brief and/or rubric?
By contrast, genAI tools search, re-combine and deliver elements from data sets, producing a wide range of outputs including textual, numeric, code and graphic forms, in response to user prompts. Many doubt the capacity for AI to participate in 'true creativity' (Lawson, 2007; Kelly, 2019), claiming it lacks the motivation and independent judgement to create something truly new and useful. Human thinking is described as creative and flexible by nature, in contrast to the strengths of AI in relation to repetitive actions at vast scale, and managing complexity and multi-tasking. Some claim the Turing Test (in which an outcome that is indistinguishable from human production offers proof of intelligence) has been passed, while others claim creativity should be relocated to the perception of the beholder, rather than the contribution of a potential author (Natale & Henrickson, 2022).
Purported opportunities to 'collaborate' with genAI see typical models of design thinking transformed to propose linked human and computer contributions to identified phases. These outline perceived improvements to human thinking/expressing/building/testing/perceiving, using these tools to increase scope and decrease time and cost (Wu et al., 2021); or outline the ways in which different disciplines may creatively understand and engage with AI, including as co-creators (Wingstrom et al., 2022).
A recent genAI-focussed panel discussion at the CSHE Teaching and Learning Conference heard panel members encouraging the design of learning experiences in which students could refine their creative practices and deepen their judgement by ‘sparring’ with the machine through reflective use of prompts and the creative recombination of outputs.
How might genAI strengths be distinguished from, and/or contribute to, student learning in the subject?
Elsewhere, human and AI production is being considered through the lenses of moral rights (Miernicki & Ng, 2020), intellectual property and copyright (Shtefan, 2021). A recent case before the US Copyright Office found the location of authorship to be via input (prompts) as opposed to output (AI-produced images), although the debate continues.
Simultaneously, some developers of various genAI platforms are in legal hot water, as artists claim these companies are infringing copyright by drawing on their published work without attribution, and the opportunities for recourse or even protection are overly limited (see a summary here). Concerns are raised for users, as the tools also collect requests and data from prompts, directly or via ‘plug-ins’. As AI starts to draw on outputs it claims as ‘its own’ for future production, it all gets a lot more complicated.
How can students learn about and respond to the IP concerns of others? How can students protect their own IP in this subject?
Our challenges include supporting students to engage creatively with emerging tools, to build their own creative expertise and judgement, and to effectively demonstrate authorship while protecting their own IP and privacy in the process. The value and personalisation of creativity and its expression remain central to this learning, and a crucial aspect that students need space and support to practise and refine.
What About my Assessment Design?
Unsurprisingly, there is no black-or-white answer to the complex questions regarding genAI in ABP teaching. Clear and transparent conversations with students about the use of tools as part of their learning are needed, and it is recommended that these focus on the specific learning that each subject is designed to support. As recommended by the University, these conversations should address appropriate uses of these tools, in line with relevant policy, and with an understanding of their limitations and potential application for each discipline area. Understanding student experiences and concerns would also be part of these conversations. The potential for academic misconduct should be clearly addressed.
This section focuses on assessment design for ABP subjects, and also includes references to academic integrity guidance. It will continue to be updated as this guidance and new valuable practices develop. The Melbourne Centre for the Study of Higher Education has also published in-depth guidance around Assessment and Generative AI, which, combined with the following guidance, will prepare you for dealing with assessment in an AI world.
The emergence of genAI has profoundly influenced assessment design and approaches to assessment. Increased access to genAI poses complex challenges, particularly when designing meaningful assessment tasks that accurately capture a student’s learning.
AI platforms present a multitude of new issues for both educators and learners; however, it is important to remember the purpose of assessment as the search for evidence of learning. Well-designed assessments will provide valuable evidence to support both learners and educators in their respective roles to learn and teach. The BEL+T team is available, so please reach out to us at firstname.lastname@example.org to discuss.
This section provides an overview of assessment design in built environments education in relation to genAI. It explores opportunities and complexities, emphasising the importance of aligning learning outcomes with AI literacy skills, and providing recommendations for meaningful assessment tasks and collaborative approaches involving genAI. This section also details the University's policy and position on genAI in teaching and learning – please look to the bottom of this page.
As outlined in previous sections, AI platforms afford students the opportunity to cognitively offload some elements of their tasks, allowing them to focus their efforts on higher level cognitive processes such as critical thinking, reflective thinking, problem solving and creative thinking. When considering the design of assessment tasks and integration of AI, educators should consider the following questions:
- What are the intended learning outcomes (ILOs) of the subject?
- How can students demonstrate their learning if they are collaborating with a generative platform such as ChatGPT?
- Can tasks be designed to focus on higher levels of cognitive thinking? What fundamental cognitive skills are needed?
- Is there opportunity to align/update subject ILOs in the future as identified in Futures of Work with AI?
It is recommended the above considerations be read in conjunction with BEL+T’s Guidance on Assessment & Feedback which provides further detailed guidance and resources around planning and design of assessment and feedback. This advice can be thoughtfully applied in a genAI context.
Colluding Collaborator or Learning Partner?
In approaching assessment design, it is useful to consider genAI as a potential participant/collaborator that students will engage with as part of their assessment process. This approach offers insight into genAI’s capabilities and limitations with respect to learners and educators. By understanding how AI works, educators can make informed decisions about the desired level of interaction between students and this artificial individual/intelligence.
Cope et al. (2021) offer a valuable framework for understanding the functional parameters of AI in the context of teaching and learning. The following table outlines these functions, highlighting the strengths and opportunities that each can provide for evidencing learning.
| Functional Parameters of AI | Opportunity in Evidencing Learning | Example Platforms |
| --- | --- | --- |
| AI can efficiently identify and name content, so long as it has been defined in the training data. | The sheer number of items AI can name far exceeds the personal experience and memory of the learner. Educators should note that this process of naming and identifying is linear and simplistic, based solely on what the machine has been “taught”, without consideration of other parameters (e.g. context or situation). Questions about the accuracy and reliability of AI-generated content therefore give learners an opportunity to demonstrate their capacity for critical thinking and judgement. | AI Chatbots (e.g. ChatGPT, Bard by Google) |
| AI can count and calculate large numbers and datasets, and process long sequential algorithms. | AI can automate a significant number of successive small calculations (i.e. Boolean decisions). On their own these small “unsmart” calculations appear trivial; combined, the complex branches of the resulting decision tree give AI its “smart” appearance. The conditions determining the forks in those branches cannot themselves be generated through “unsmart” calculation, which gives learners an opportunity to demonstrate higher-level evaluation and creativity when formulating further probability pathways. | AI Chatbots (e.g. ChatGPT, Bard by Google) |
| AI can quantify some qualities of human experience and perception where a conceded numeric value has been assigned for calculability, e.g. distances, dimensions, shapes, colours, time, temperatures, sound. | Sensors and other instruments designed to measure these qualities can deliver data continually and incrementally, in real time and in vast quantities. Measurements collected by AI are only as useful as the instructions/algorithms the AI is designed on, so learners have an opportunity to demonstrate their analytical skills and evaluative judgement in reading and interpreting the collected data. | |
| AI can re-present information it has named, calculated and measured via various modes of communication, as observed in automated rendering platforms (e.g. generated art and 2D graphics), 3D modelling, and speech/sound generators. | Numerous variations can be produced significantly faster than a human could manage. However, this quantity of output does not represent the “best” quality or approach for a specific context/situation: re-presented outputs reflect the mean of information defined across the “internet of things”, not the learner’s own cognition and opinions, nor a diverse range of experiences or perspectives. | AI Processing Tool (e.g. Vizcom, Rendered.ai, DALL-E) |
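The point above about many small Boolean decisions combining into apparently “smart” behaviour can be made concrete with a minimal, purely illustrative sketch in Python. The scenario, function name, thresholds and categories here are invented for this example and are not drawn from any real AI platform:

```python
# Illustrative toy only: a chain of trivial Boolean decisions ("unsmart"
# calculations) that, combined, produces apparently "smart" behaviour.
# The thresholds and recommendations below are invented for this sketch.

def recommend_glazing(orientation: str, summer_temp_c: float, budget: float) -> str:
    """Each fork below is a simple true/false test; the apparent
    intelligence lies only in how the forks are combined."""
    if orientation in ("north", "west"):     # Boolean fork 1: sun exposure
        if summer_temp_c > 30:               # Boolean fork 2: heat load
            if budget > 500:                 # Boolean fork 3: cost constraint
                return "double glazing with low-e coating"
            return "external shading with single glazing"
        return "standard double glazing"
    return "single glazing"

print(recommend_glazing("west", 34.0, 800.0))
# → double glazing with low-e coating
```

Each `if` test is trivially simple; what looks like judgement is only the accumulation of forks, and the conditions at each fork were chosen by a person rather than generated by the calculations themselves.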
GenAI provides both educators and learners with some thoroughly exciting pedagogical prospects, particularly for approaches concerned with demonstrating and evidencing learning (i.e. assessment). The following are key considerations for educators as they plan and design their assessment tasks.
Assessment design to engage students’ capacity to evaluate and create
- Centre the assessment task on a context and/or situation where AI cannot access information and data (e.g., assessing material discussed in class through a summary of tutorial discussions).
- Shift the focus of what is being examined/assessed from the final output to the “behind-the-scene” process (e.g., the assessment task may instruct students to keep a detailed reflective/design journal that documents their process).
- Collect evidence of learning through modes that AI technologies are unable to replicate/output (e.g., students may participate in synchronous conversations or interviews).
Assessment designs to collaborate with AI
Planning a collaborative AI assessment task requires the educator to be clear on the role the AI platform is expected to play and how students are anticipated to collaborate with it. These roles will correlate with the AI platform’s capacity to name, calculate, measure and re-present.
The following are some roles that genAI can fill to collaborate with students as they demonstrate their learning, and should be read alongside University guidance regarding appropriate use and referencing.
- AI as a Brainstorming Buddy to facilitate divergent thinking by generating representations of gathered data in numerous modes of visual communication (e.g., diagrams). Students can then demonstrate their ability to analyse, evaluate and judge by determining the option best suited to their use.
- AI as a Copy Editor to assist students in identifying areas in which they can improve. Students can be tasked with reflecting on the improvements the AI platform highlighted and whether they accepted/rejected the suggestions, and why.
- AI as a Super Processor to calculate BIM data. Students could be tasked with explaining the process and algorithm(s) the AI platform engages in to generate the output and critically discuss the strengths, opportunities or limitations the platform poses to the industry.
For further information concerning genAI and assessment design the following sources are available for guidance:
- An in-depth breakdown of further prompts and considerations concerned with genAI and its impact in assessment design can be viewed through Monash University’s GenAI and assessment resource.
- Flinders University provides a useful flow chart that guides educators through the assessment design process when genAI is incorporated into the decision making considerations.
University Policy on genAI
As the opportunities and challenges offered by genAI technologies continue to emerge, the University and the Faculty are working to clarify what is required of students and of teaching staff. We’ll summarise key policy and guidance here.
Guidance for teaching staff
As outlined above, it is important to have open discussions with students—before they submit work—on the ethical use of genAI and on the University’s requirements around its use in assessment tasks. Subject coordinators may also consider reviewing assessment task design in the context of genAI.
The BEL+T team is available; please reach out to us at email@example.com to discuss.
An AI writing detection tool has been integrated into Turnitin, and is being further refined through its use at the University. Learning Environments has produced comprehensive information on the reliability of the tool, its functionality, and what to do if the tool returns a high percentage of text flagged as likely to have been AI-written. Note that:
As with the Turnitin similarity report, a high percentage of text flagged as likely to have been AI-written is not proof that academic misconduct has taken place but may be a sign that further investigation is warranted. Academic judgement should be used to determine whether to investigate the matter further, including by discussing with the student.
Investigations of such cases may look for additional evidence that the assessment material submitted was completed by the student, judged on the balance of probabilities. This might include looking at the metadata of the files submitted, comparing the assessment to other work completed by the same student, or asking the student to provide drafts of the work, to describe how they completed it, or to explain the content of the assignment. Such evidence, taken together, may allow committees to come to a judgement about whether or not the work in question is likely to be the student’s own.
As with all potential student academic misconduct, subject coordinators are the main point of contact. Staff should not accuse students of academic misconduct, but should seek more information where appropriate, and gather evidence. Steps for referring any suspected student academic misconduct for investigation are outlined in BEL+T’s Academic Integrity Guidance.
If a student is found to have committed academic misconduct by representing work generated by artificial intelligence software as their own, they will be subject to the penalties outlined in the Schedule of Student Academic Misconduct Penalties.
For guidance on those investigations and on their possible outcomes, please reach out to the ABP Student Programs team at firstname.lastname@example.org. (Even if the outcome involves an educative approach—which may be suitable in some initial cases—please keep the ABP Student Programs team in the loop, so that the Faculty can maintain a comprehensive record of engagement with each student.)
For further information on identifying and responding to potential academic misconduct in ABP subjects, please see BEL+T’s Academic Integrity Guidance, and the links to Academic Misconduct on the ABP Faculty Intranet.
What’s required of students?
Per the University’s Statement on the Use of Artificial Intelligence Software in the Preparation of Material for Assessment (21/4/2023):
…all work submitted by an individual student must be their own. In the case of group work, the individual contribution of each student must be their own work.
If a student uses artificial intelligence software… to generate material for assessment that they represent as their own ideas, research and/or analysis, they are NOT submitting their own work. Knowingly having a third party, including artificial intelligence technologies, write or produce any work (paid or unpaid) that a student submits as their own work for assessment is deliberate cheating and is academic misconduct.
If a student uses AI generated material in the preparation of their assessment submission, this must be appropriately acknowledged and cited…
Please encourage students to review Academic Integrity at the University of Melbourne for further information on why academic integrity matters and how to avoid academic misconduct. Students should also review the Student Academic Integrity Policy (MPF1310). The Advice for Students Regarding Turnitin and AI Writing Detection provides further information about the use of AI tools and the ways the University may detect and respond to inappropriate use of AI tools. This document highlights the important role that subject coordinators play in clarifying the appropriate use, or not, of genAI tools for learning and assessment.
As the University and Faculty continue to engage with the opportunities and challenges offered by genAI, further policy and guidance will continue to be developed.
The Melbourne CSHE has compiled a set of useful resources on Assessment, AI and Academic Integrity.
The ABP Student Programs team is currently developing a guide for subject coordinators around genAI and academic integrity. Further advice is also in development by Chancellery and will be available before the start of Semester 2, 2023.