In the words of Chimamanda Ngozi Adichie, “… when we reject the single story, when we realize that there is never a single story about any place, we regain a kind of paradise.”

Artificial Intelligence (AI) has largely been embraced by the University of Cape Town, and webinars, workshops, and conferences focussed on various aspects of AI have become a regular feature on our academic calendars. Yet, while advances in AI are supporting our workflows, enabling new possibilities for research, and expanding our horizons, the recent evolution of generative AI (Gen AI) tools, such as ChatGPT, Copilot, and Gemini (amongst others), has been met with more apprehension.

Many are deeply concerned about the impact that Gen AI will have on assessment, given its potential to enable academic dishonesty. However, since students can freely access Gen AI and institutions cannot reliably detect its use, the only reasonable recourse is to find ways of developing competency in the use of AI, rather than trying to prohibit it.

To aid in this process, the Centre for Innovation in Learning and Teaching at UCT has produced a comprehensive suite of resources for students, staff, and researchers, which provide valuable guidance on the principles, opportunities, limitations, and considerations of using AI tools.

While Gen AI can be used to generate a product, when we outsource the work of production, we lose the learning and developmental value that can only be derived through the active process of intentionally crafting a piece of writing to convey a specific message to a particular audience. 

At the UCT Faculty of Health Sciences Writing Lab, engaging with Gen AI and developing our own understanding has prompted a philosophical interrogation of the nature of our work and the role that writing centres play. If students can now use Gen AI to “brainstorm” ideas, “paraphrase” information, and “write” an essay, then why do they need to continue learning how to do these things themselves? And what value does a space like the Writing Lab have? To address these uncertainties, we begin by unpacking our understanding of what it means to write.

The earliest known writing systems date back to around 3000 BCE, and in the intervening 5000 years writing has arguably become entrenched as a dominant mode, if not the dominant mode of communication. Nothing begins or happens without writing (think applications and proposals), and nothing is acknowledged or completed without writing (think exams, dissertations, reports, and journal articles). 

This concept of writing as a central practice is foundational in the Writing Lab’s consultant training and a key message that we integrate into our teaching to help students appreciate the relevance and importance of writing in relation to their academic and professional careers. However, as those of us in writing centres know, writing is not just a product, but also a process. And as much as these products anchor moments in a journey, it is in the production of these products that learning and development take place. So, while Gen AI can be used to generate a product, when we outsource the work of production, we lose the learning and developmental value that can only be derived through the active process of intentionally crafting a piece of writing to convey a specific message to a particular audience. 

Gen AI – not designed for meaning-making

We recognise the transformative potential of writing to provide a process through which students can negotiate and represent their identity and perspective.

At the Writing Lab, the notion of intentional meaning-making is at the heart of our understanding of what it is to write. From this perspective, we believe it is essential to remain cognisant that Gen AI can generate text, but it cannot write, let alone write for you. Each writer (each you) is a unique individual with a distinct authorial identity that is informed by the sum total of their life experience. Authorial identity and voice manifest through every decision a writer makes – from what to write about and why, to how to write about it. As we have shared previously, at the Writing Lab, we are guided by the transformative ideology of the academic literacies approach, meaning that we value the diversity of our students, and we recognise the transformative potential of writing to provide a process through which students can negotiate and represent their identity and perspective. So, while Gen AI can generate text, it cannot generate text that represents you, and we are explicit about this in our workshops with students when we discuss the implications of using Gen AI.

Central to academic writing is the process of knowledge-making, perhaps more usefully framed here as ‘sense-making’. Students must sift through the information, ideas, perspectives and values they encounter in their learning environments, to make sense of these as part of their own growing knowledge base and, more importantly, their developing academic and professional identities. This process of sense-making is about evaluating what is encountered, comparing it to established and alternative frames of reference and reaching conclusions about one’s own understanding and perspective.

However, when students use Gen AI to ‘brainstorm’ or ‘research’ a topic, the nature of the AI output inhibits this process and shifts the emphasis from evaluating to ‘fact checking’. This presents a challenge to students, because to recognise inaccuracies and hallucinations in a Gen AI output, the student must already possess a level of experience and expertise in the topic being explored.

At the undergraduate level, where students are still disciplinary novices, the risk is that they will default to uncritically accepting Gen AI outputs that appear legitimate. Furthermore, an emphasis on fact-checking has the potential to skew students’ perceptions towards knowledge as binary – true or false, right or wrong. And, without a process of critical questioning, students further risk negating their agency in the knowledge-making process.  

Adopt an evaluative agentic lens

As scholars from South Africa and the Global South more broadly, we need to be vigilant about whose perspectives we are digesting and how we allow our own voices and values to be shaped by the perspectives and preferences of others.

At the Writing Lab, we explicitly discuss positivist and interpretivist paradigms with students in writing workshops and encourage both students and Writing Lab consultants to adopt an evaluative agentic lens through the practice of asking critical questions – How was the knowledge produced? Who does the knowledge privilege? Who does it exclude? What assumptions are being made? What values are being asserted? 

In our experience, AI tools for reviewing literature that allow for asking questions (such as SciSpace) are best aligned with the process of knowledge-making. And senior and postgraduate students can learn to effectively use AI tools such as ResearchRabbit and Elicit that enable sense-making of the literature or knowledge around a focus area. However, tools such as Quillbot that many students use for paraphrasing should be approached with caution.

It is through interpreting other people’s texts that we ourselves come to know, and by explaining our interpretation in writing (paraphrasing) we can communicate our educated perspectives. Yet, students often avoid this work because academic texts can be difficult to read and understand, and the possibility of ‘getting it wrong’ may feel like too great a risk.

To support students in becoming confident in their engagement with academic texts, we have increased our emphasis on reading in Writing Lab workshops, explicitly unpacking texts and comprehension strategies with students, and exploring referencing strategies that enable authorial voice development.

Emphasise reading

Because of our respect for ‘The Literature’ as the canon of our collective knowledge in academic contexts, the way that AI tools align with referencing practices is an important consideration regarding their efficacy. Referencing is a practice that builds the trace record for the evolution of ideas over time; each reference pointing backwards to the products upon which the current knowledge-making is based. In alignment with this function, we teach students that everything, except for common knowledge (itself a contested idea), must be referenced, and we direct them to detailed referencing guides that explain how to reference all manner of potential source material. 

Yet, Gen AI outputs challenge this practice because they cannot be referenced. They are not ‘fixed’ or replicable, and even using the same prompt twice will not reliably lead to the same output. At best, referencing Gen AI could be treated like referencing a personal communication. And, while this may be necessary in rare situations and thus acceptable on occasion, the risks of citing unverifiable information mean we cannot accept this as a standard or common practice.

Naturally, the example of citing Gen AI as ‘pers. comm’ is tongue-in-cheek, but it does prompt us to explore a larger concern regarding the character of Gen AI outputs. If, for example, ChatGPT were a person, who would they be? What would their perspective be? What might they sound like? What might they care about? To get a sense of this, we have to look at the content upon which large language models like ChatGPT are trained, and the authors that dominate that content – white, western, male, heteronormative, able-bodied, Judaeo-Christian. The problem with this gestalt voice is that these intersectional lines of identity are all privileged identities.

Ask: from whose perspective is AI created?

As such, in both overt and subtle ways, the text generated by AI perpetuates the exclusion of historically marginalised voices. As scholars from South Africa and the Global South more broadly, we need to be vigilant about whose perspectives we are digesting and how we allow our own voices and values to be shaped by the perspectives and preferences of others. In the words of Chimamanda Ngozi Adichie, “… when we reject the single story, when we realize that there is never a single story about any place, we regain a kind of paradise.”

From a philosophical perspective, we are convinced that you cannot outsource your intellectual engagement, and that AI cannot replace the writing and learning process. As such, within writing centres our work must continue to focus on the necessary, messy, liminal, discomforting process of writing, through which disciplinary knowledge is concretised, and authorial identity is negotiated and shaped. Given the potential for AI tools to undermine this process of learning and identity formation, we do not encourage their use at the undergraduate level. However, to support senior students and mature writers in their thoughtful application of AI tools, we have developed a guiding resource.

Develop AI literacies in your staff and students

Ultimately, we want to graduate students who are competent professionals, socially responsive citizens, and lifelong learners. To achieve this, we need to enable students to develop a strong metacognitive foundation in academic reading and writing through the use of aligned pedagogies. And, to navigate this learning within the shifting landscape of AI, universities need to establish a strong emphasis on AI literacies to aid staff and students in learning how to utilise these tools in intentional and critical ways that do not undermine academic integrity or authenticity. 

Looking forward, the only certainty seems to be uncertainty as AI tools continue to rapidly evolve and proliferate. However, rather than making writing centres redundant, it is our contention that the roles of language and literacies specialists have become more important than ever, as we work to support indigenous knowledge creation and the amplification of diverse and authentic voices.


Natashia Muna has a background in science, with specialisations in zoology, biodiversity, and molecular and cell biology. During her PhD, Natashia worked as a student consultant at the University of Cape Town (UCT) Writing Centre, and it was there that she discovered her passion for working with the languages of science. She has been the Director of the UCT Faculty of Health Sciences Writing Lab since 2015. The Writing Lab is guided by the transformative ideology of the academic literacies approach, and provides academic literacies support, teaching, and capacity development to all staff and students in the faculty. Natashia is currently researching and supervising in the areas of multimodal social semiotics, authorial identity development, the role of writing in team-based learning, and student reading practices.

As an academic literacies specialist, Taahira Goolam Hoosen supports student and research writers’ acquisition of academic literacy practices at the Faculty of Health Sciences Writing Lab, located in the Department of Health Sciences Education at the University of Cape Town (UCT). Her teaching is shaped by transformative Academic Literacies, threshold practices and genre theory. Her research focuses on the pedagogy of voice and postgraduate supervision to understand the processes of authorial voice and identity development. She is also a writing mentor, facilitator and coach on various UCT postgraduate short courses and programmes.