📄 Read the full article in Contemporary Educational Technology

In the fall of 2022, peer tutors at the American University of Sharjah Writing Center began raising a new set of questions:
“Can I help a student revise something if ChatGPT wrote it?”
“What if the student doesn’t say they used AI, but I suspect they did?”
“What do I do when a student asks me to ‘humanize’ their AI-generated draft?”
As generative AI tools quietly became part of many students’ writing processes, tutors found themselves navigating a space filled with uncertainty, contradictions, and ethical gray areas. Faculty policies were inconsistent, students used AI in different ways (and sometimes didn’t say so), and tutors were left to negotiate these situations in real time.
As director of the Writing Center, I wanted to support and guide the tutors. I drew on my own teaching practice at first but soon saw the need for guidance grounded in writing center pedagogy. At the time, though, there was virtually no scholarship on how peer tutors should respond to AI-generated writing.
To address this gap, we began by listening. We conducted a qualitative study with our tutoring team: a focus group discussion, followed by a staff meeting that used a shared Google Doc as a collaborative writing space. There, tutors described current challenges, answered scenario-based questions about concealed or “humanized” AI writing, and co-authored a set of guidelines in real time.
From Policing to Partnership
It became clear that tutors did not want to act as enforcers or academic integrity “detectives.” Instead, they opted for open, student-centered conversations that encouraged transparency. Rather than confronting students, they asked gentle, open-ended questions such as “Was there anything that helped you get started with this draft?” and “What tools, if any, did you use as you worked on this draft?” These invitations often led students to disclose their use of AI and opened space for critical dialogue.
When students acknowledged using AI, tutors used those moments to teach revision, structure, and voice, because writing center pedagogy prioritizes higher-order concerns and student ownership. They often praised the sections a student had written themselves and encouraged the student to build from there. As one tutor put it, “I want them to see their own potential as writers.” Throughout these sessions, AI was treated as a resource to support learning, not a substitute for the student’s own thinking. Tutors emphasized that while AI could be helpful for getting started, students shouldn’t become dependent on it. Instead, they encouraged critical engagement with the tool, helping students understand why changes were needed and how to strengthen their writing through revision. In this way, the focus remained not just on finishing a paper but on developing as a writer.
When students asked for help “humanizing” an AI draft, tutors didn’t shut the conversation down. Instead, they reframed the request. They explored what sounded “off” in the text and worked collaboratively with students to revise in their own voice. Some invited students to critique the AI’s choices as a way to sharpen their analytical skills and recognize its limitations. Rather than treating AI as a shortcut, tutors used it as a prompt for more reflective, engaged writing.
Another layer of complexity came from the inconsistent ways faculty approached AI. Some instructors encouraged it for brainstorming or grammar support; others banned it altogether. Tutors responded by encouraging students to check with their professors and by grounding their own advice in principles of academic integrity.
How Tutors Are Using AI (Emerging Data)
As we analyzed the data, another pattern began to emerge, one that we’re now exploring in a follow-up study. Tutors weren’t just responding to student AI use; they were also experimenting with AI themselves as a tool during sessions. Early findings suggest tutors are incorporating AI in thoughtful, pedagogically sound ways. Some use it to
- help students generate ideas, titles, or outlines
- simplify challenging readings or explain unfamiliar concepts
- model thesis statements or paragraph structures
- offer vocabulary suggestions, especially for multilingual students
But this practice isn’t without challenges. Not all students appreciate AI being introduced into sessions. Some have pushed back or expressed discomfort, which has prompted us to reflect more deeply on the issue of consent. Should students be asked explicitly before AI is used in a session? What does ethical, student-centered AI use look like when initiated by a tutor rather than the student?
Tutors as Co-Researchers
As we move forward, we’re keeping tutor voices central to the research process. Their insights are helping us shape local practices and contribute to an understanding of what ethical, inclusive, and effective AI use might look like in writing centers.
Peer tutors aren’t only adapting to AI but also helping define how writing support can evolve alongside it. Their questions, strategies, and reflections remind us that thoughtful engagement with AI is possible and that the writing center remains a vital space for negotiating those possibilities.
*Dr. Eleftheriou shared some of the findings of this study in her plenary talk at the MENAWCA 9th Biennial Conference at NYU Abu Dhabi.