ChatGPT & Reverse Mentoring

Let me start by telling the story of how I chose a reverse mentoring process in which ChatGPT was my mentor. My need for it was somewhat kairotic. Going beyond the hype around this chatbot’s generative capabilities (The ChatGPT AI hype cycle is peaking, but even tech skeptics), I employed it as my reverse mentor during my job search and interviews. During the summer of 2023, I was surrounded by many constraints: on the one hand, I had to defend my doctoral dissertation in the middle of July; on the other, I had to land a teaching job at a university in the US. On top of that, my status as an international student brought constraints of its own.

We all know summer is a time to let loose (Sunshine, Swimming, and Structure: Diving into Summer ..). But mine was different. Summer keeps professors away on leisure, so reaching them promptly can be challenging (Is it rude to email a professor over the summer?). Already behind in network building, I had yet to find anyone supportive enough to give me intelligent advice. As a result, getting guidance on the art and science of the job search from a reliable and trustworthy source was an uphill task for me. Facing the first job market crucible (Heated (Job) Search? Understanding the Crucible) was a haunting and traumatizing prospect. Out of this necessity and kairotic exigence, I was hard-pressed to try ChatGPT as it rose in popularity (#chatgpt Trending Hashtag on TikTok, see #chatgpt analytics). Owing to the deluge of myths and hype around this fantastic chatbot (The Power of 7 – Decoding ChatGPT: Myths, Facts), and because of the growing gossip around ChatGPT as a panacea for all kinds of creative complications, I could not help but resort to it to prepare for the job market and job interviews. With my position so precarious, I used it as my assistant during my job search, interview preparation, and job listing curation. I asked ChatGPT: What questions will be asked of a Ph.D. applicant who applied for a writing instructor position in a job interview? ChatGPT wrote a list of questions under each topic:

  • Teaching experience (Can you discuss your previous experience as a writing instructor or any teaching experience related to writing? What courses have you taught, and at what levels? How do you approach teaching writing to students with varying skill levels?)
  • Teaching philosophy (What is your philosophy of teaching? How do you approach fostering critical thinking and analytical skills through writing? How do you address diverse learning styles in your teaching?)
  • Curriculum development (Have you been involved in developing writing courses or curricula? How do you stay current with trends and best practices in writing instruction?)
  • Assessment and feedback (How do you assess students’ writing and provide feedback to improve their skills? Can you discuss any innovative or effective strategies you have used for grading and evaluation? How do you help students use feedback to improve their writing?)
  • Incorporating technology (How do you incorporate technology into your writing courses? Are there specific tools or platforms you find beneficial for teaching writing?)
  • Research and scholarship (How do you balance research and scholarship with your teaching responsibilities? Can you discuss any research projects or publications related to writing or writing instruction?)
  • Interdisciplinary collaboration (How do you integrate interdisciplinary approaches into your writing courses? Have you collaborated with colleagues from other disciplines to enhance writing instruction?)
  • Student engagement and motivation (How do you keep students engaged and motivated in a writing course? Can you share successful strategies you have used to inspire student interest in writing?)
  • Diversity and inclusion (How do you address diversity and inclusion in your writing courses? What steps do you take to create an inclusive and supportive writing environment for all students?)
  • Professional development (How do you engage in professional development related to writing instruction? Can you discuss any workshops, conferences, or training sessions you have attended recently?)

Discovering a sea of job preparation tips with so many questions left me in awe. On the spur of the moment, I realized that tackling all these topics can be overwhelming, even for experts. Prompted by this prick of prudence, I diligently aligned myself with the thrust of these insights. When I faced a Zoom (https://zoom.us/) job interview in June 2023, I was caught off guard by how similar the questions were to the ones ChatGPT had thrown at me. This deepened my reliance on AI’s suggestions and guidance. In a way, I would like to call this guidance from AI ‘reverse mentoring’ because AI, no matter how stupendously generative, curative, assistive, and empowering, is bound to be subordinate to human intelligence: it is brought into being not to unseat human intelligence but to augment it (AI Should Augment Human Intelligence, Not Replace It). Suffice it to say that the growing notion of AI as a copilot, one that assists the pilot, human intelligence, justifies my characterization of AI as a reverse mentor. There is another persuasive ground to designate AI as a reverse mentor: content engineered by AI, no matter how standard and sophisticated, is likely to be laced with racial slurs and slights along with remnants of misinformation, disinformation, junk information, and conspiracy theory (Timnit Gebru: Is AI racist and antidemocratic?), thereby requiring constant human oversight (The Drum on Twitter: “AI requires human oversight if it’s to be). In that sense, AI ranks below human intelligence; it is its junior. Using AI for suggestions to carve out our way is, so to speak, equivalent to being mentored by those junior to us in experience, prudence, and maturity but intelligent enough to act in the spirit of contemporaneity, which amounts to saying that AI mentors us reversely, on a subservient basis.
Having enjoyed so many affordances from using ChatGPT as a mentee, I have no qualms about characterizing every support and idea stemming from its use as a unique mentoring mode.

More specifically, here is how I allowed myself to be reversely mentored by ChatGPT. In my first U.S. online job interview, I was asked, “Do you have any questions for us?” Stunned, I shot back with whatever came to my mind. No response from the hiring committee. Without approaching them with additional email inquiries, I moved on to another university, got an invite, and prepared for my second online job interview. This time, I turned to ChatGPT for questions to ask the interviewer. Ideal queries in hand, I faced the interview, curious and convinced of the magic and mojo of reverse mentoring by AI. I knew AI had flaws and was prone to mistakes and lies, but it became my mentor amid the challenges of job preparation. It’s a testament to how I practiced reverse mentoring during my online job interviews.

Yet my patience wore thin when a hiring committee went silent for a month. Was I a victim of interview ghosting? (How common is ghosting after job interview? Should I…). Seeking ChatGPT’s advice, I got practical suggestions: send a follow-up email, use a clear subject line, maintain a professional tone, consider a phone follow-up, be patient, keep job hunting, manage expectations, and seek feedback. The gravity of these tips made me realize that ChatGPT’s pragmatic mentorship can surpass human advice. ChatGPT helped me move forward in anticipation of performing better in successive rounds of job interviews.

With ChatGPT under my belt, I felt I had invincible armor in my battle against the trials and tribulations of job interviews. Facing rejection and indifference, I embraced ChatGPT as a reverse mentor. Emboldened by its appeal, I tacitly proclaimed myself a ChatGPT mentee in the AI academy. My closing thought: no human mentor could match the breadth of ChatGPT’s suggestions. Another way of putting this is that AI can mentor all writers in any way, shape, or form. AI mentoring has shortcomings and limitations, but its affordances are desirable and tempting. Concerns about AI’s misinformation and disinformation have led to debates on constant human oversight (Artificial Intelligence Needs Human Judgement to Work). With open eyes and a critical gaze, I lowered myself and opted to be reversely mentored by ChatGPT. I finally landed a job as a postdoctoral research fellow at a research university.

Ever since I started working at NDSU as a postdoctoral research fellow, I have been trying my level best to transform my AI-driven reverse mentoring insight into pedagogical practice. While using ChatGPT, I found it incredibly helpful for teaching students to build genre awareness. In one of my English 120 classes, I had my students write a blog on “Neurodivergence as Community’s Superpower.” First, they wrote their blogs independently, without resorting to ChatGPT. Thereafter, they opened ChatGPT on their computers and typed the prompt “Write a blog on neurodivergence as community’s superpower” into its input box. At the touch of a button, ChatGPT engineered a blog, to their utter surprise. My students then read the AI-written blog and were urged to figure out key differences between their blogs and the AI-engineered one. According to their on-the-spur-of-the-moment assessment, the AI-authored blog was more sophisticated, objectively organized, and professional. Following this first comparative reading, they were told to reread ChatGPT’s blog, paying undivided attention to noticeable patterns, self-replicating underlying structures, and fractal dimensions in that machine-written text. Upon rereading it, my students lamented that their own blogs lacked repeated patterns, self-replicating features, and underlying syntactic and semantic structures. I instantly reminded them that all those features, characteristics, and patterns they found uniquely abounding in the chatbot-engineered blog constitute the blog genre. Having been informed of the genre conventions through rereading ChatGPT’s blog, they attempted to give a genre-oriented cast to their own blogs.
As stated, I allowed them to mimic the dominant, self-replicating, and fractal patterns (Self-Replication in Neural Networks | Artificial Life | MIT Press) in ChatGPT’s blog writing so that they could perfect their own blogs. My goal was not to push students toward plagiarism (Educators, students, and plagiarism in age of AI – The BMJ). I aimed to help students build genre awareness by interacting with AI-engineered writing (An Engineer’s Mindset: Creativity In Engineering). Over a short period, my students made incredible strides in enriching their awareness not only of the blog genre but also of the genres of literacy narrative, literacy autobiography, and rhetorical analysis.

Thus, I want to restate that my insight into using AI as my reverse mentor morphed into a timely, kairotic, and relevant pedagogical practice of incorporating AI into teaching genres of writing.

From Weaving Looms to Algorithms: What Can Writing Studies and Rhetoric Learn from the Invention of the Computer Algorithm?


Photo generated by the AI embedded in WordPress

I have been thinking a lot lately about patterns. Not the kind you find on your grandmother’s favorite tablecloth, but the deeper patterns that connect how we make things—whether it’s a piece of fabric, a persuasive argument, or a line of code that teaches a machine to write poetry. Last week, I watched my niece struggle with her college application essay. She kept starting over, deleting paragraphs, rearranging sentences like puzzle pieces that would not quite fit together. “There has to be a better way to do this,” she muttered, and something clicked for me. I realized she was experiencing the same frustration that led Ada Lovelace to write the world’s first computer algorithm in 1843, and the same challenge that keeps me up at night as I try to understand how AI is reshaping the way we think about writing and persuasion.

The Thread That Connects Us All

I never thought I would find myself comparing my writing process to a weaving loom, but here we are. The Jacquard loom, invented in 1804, used punched cards to create intricate patterns in fabric. Each hole in the card told the loom what to do—lift this thread, lower that one, create this pattern, avoid that mistake. It was mechanical poetry, really. When Ada Lovelace saw Charles Babbage’s Analytical Engine, she recognized something the inventor himself had missed. She did not just see a calculating machine; she saw a pattern-making device that could work with symbols, not just numbers. In her famous Note G, she wrote what we now recognize as the first computer algorithm—a set of instructions for calculating Bernoulli numbers. But more importantly, she imagined a machine that could compose music, create art, and manipulate language.
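Note G’s actual table of operations is tied to the Analytical Engine’s architecture of variables and mill operations, but the mathematical object it computes can be sketched in a few lines of modern code. The function below is my illustration, not a transcription of Lovelace’s notation; it uses the standard Bernoulli recurrence and the convention B₁ = −1/2:

```python
from fractions import Fraction

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n (convention B_1 = -1/2),
    via the classic recurrence: sum over k of C(m+1, k) * B_k = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        total = Fraction(0)
        comb = 1  # binomial coefficient C(m+1, 0)
        for k in range(m):
            total += comb * B[k]
            comb = comb * (m + 1 - k) // (k + 1)  # advance to C(m+1, k+1)
        B[m] = -total / (m + 1)
    return B

print(bernoulli(8))  # B_8 works out to -1/30
```

What took Lovelace pages of carefully sequenced engine operations is a pair of nested loops today, which is precisely her point about patterns: once the pattern is captured, the machine can repeat it indefinitely.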

I keep a copy of her notes on my desk, not because I am a computer scientist, but because her vision feels prophetic now that I am living through the AI revolution. She saw what we are experiencing today: machines that do not just calculate but create.

When I First Met an Algorithm

My first real encounter with algorithmic thinking happened in graduate school, though I did not recognize it at the time. I was studying rhetoric, trying to understand how persuasion works, when my professor assigned us to map out the structure of a particularly effective speech. “Break it down into steps,” she said. “What happens first? What triggers the next move? Where are the decision points?” I spent hours with colored pens and sticky notes, creating what looked like a flowchart of persuasion. Start with shared values. Establish credibility. Present the problem. If the audience is skeptical, provide evidence. If the audience is emotional, tell a story. Build to the solution. End with a call to action. Looking back, I was creating an algorithm for effective rhetoric. I just did not know that was what it was called.
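That flowchart really was an algorithm in the computational sense: a fixed sequence of steps with decision points. As a playful illustration (the function name and the two audience flags are my own invention, not part of any real rhetoric toolkit), here is what those sticky notes look like rendered as code:

```python
def persuasion_outline(audience_skeptical, audience_emotional):
    """Render the speech-mapping exercise as an explicit sequence of
    steps with decision points, the way a sticky-note flowchart does."""
    steps = [
        "start with shared values",
        "establish credibility",
        "present the problem",
    ]
    # The decision points: each audience trait triggers an extra move.
    if audience_skeptical:
        steps.append("provide evidence")
    if audience_emotional:
        steps.append("tell a story")
    steps += ["build to the solution", "end with a call to action"]
    return steps

# A skeptical but unemotional audience gets the evidence branch only:
print(persuasion_outline(audience_skeptical=True, audience_emotional=False))
```

The point is not that persuasion reduces to seven lines of Python; it is that the structure my professor asked us to uncover has the same shape as any conditional procedure.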

The Secret Life of Writing Patterns

Here is something I have learned from spending six years teaching writing: we have always been algorithmic thinkers; we just called it something else. The five-paragraph essay? That’s an algorithm. The hero’s journey? Algorithm. The way I structure this blog post, with hook, development, and conclusion? Algorithm. But here is where it gets interesting. Traditional writing algorithms were human-centered. They assumed a human writer making conscious choices, weighing options, feeling their way through uncertainty. The writer was always in control, even when following a formula.

Computer algorithms changed everything. They removed the human from the loop, or at least tried to. Instead of “Here is a pattern you might follow,” they said, “Here is what you will do, step by step, no deviation allowed.” I remember the first time I used a grammar checker that went beyond simple spell-check. It was the early 2000s, and Microsoft Word started suggesting not just corrections, but improvements. “Consider revising this sentence for clarity,” it would suggest, and I found myself arguing with my computer. “No, I meant it that way!” I would mutter, clicking ‘ignore’ with perhaps more force than necessary.

The Great Pattern Recognition Revolution

Fast forward to today, and I am having conversations with AI that can write in my style, analyze my arguments, and even finish my thoughts in ways that surprise me. Last month, I asked ChatGPT to help me brainstorm ideas for a difficult section of an article I was writing. It did not just give me a list of bullet points—it engaged with my thinking, built on my ideas, and pushed back when my logic was shaky. That’s when I realized something profound had happened. We had moved from algorithms that followed predetermined patterns to algorithms that could recognize, adapt, and create new patterns. It’s the difference between a player piano that can only play the songs on its rolls and a jazz musician who can improvise in response to the moment. This shift is revolutionizing writing studies in ways I am still trying to understand. My students now routinely use AI to generate first drafts, brainstorm ideas, and even simulate audience responses to their arguments. They are not cheating (well, not most of them); they are thinking algorithmically about the writing process in ways that would have been impossible just five years ago.

What Looms Taught Us About Teaching

Jacquard loom punched cards

Image generated by WordPress’s built-in AI image generator

The connection between weaving and computing is not just historical—it’s pedagogical. When I watch a master weaver work, I see the same kind of thinking that makes for effective writing instruction. They understand both the pattern and the variations, the rules and when to break them. Good weavers do not just follow patterns blindly. They understand why certain combinations of threads create strength, how tension affects texture, when a deliberate ‘mistake’ can create unexpected beauty. They are pattern thinkers who can work both systematically and creatively. This is exactly what I try to teach my writing students, and it’s what I think AI is teaching us about rhetoric more broadly. Effective communication is not just about following templates—it’s about understanding the underlying patterns of human connection and knowing how to adapt them to new situations.

The Algorithm That Changed My Mind

I used to be skeptical of algorithmic approaches to writing. They seemed too mechanical, too removed from the messy, human process of figuring out what you want to say and how to say it. Then I started experimenting with AI writing tools, not as a replacement for my own thinking, but as a thinking partner. I discovered that the best AI tools do not eliminate the human element—they amplify it. They help me see patterns in my own thinking that I might have missed. They suggest connections I had not considered. They push back when my arguments are weak or unclear. It’s like having a conversation with a very well-read friend who never gets tired, never judges your rough ideas, and always has time to help you think through a problem. The algorithm does not write for me; it writes with me.

Lessons from the Loom for the Age of AI

So what can writing studies and rhetoric learn from the invention of computer algorithms? I think there are three big lessons that are especially relevant as we navigate the AI revolution. First, patterns are powerful, but they are not everything. Both weaving and programming teach us that following a pattern is just the beginning. The real art comes in knowing when and how to deviate from the pattern to create something new. The best writers have always been pattern breakers who understand the rules well enough to know when to break them. Second, tools shape thinking, but thinking shapes tools. The Jacquard loom influenced how people thought about automated processes, which influenced early computer design, which influences how we think about writing today. But at each step, human creativity and intention shaped how those tools were used. We are not passive recipients of algorithmic influence; we are active participants in determining what that influence looks like. Third, collaboration between human and machine intelligence might be more powerful than either alone. Ada Lovelace did not see the Analytical Engine as a replacement for human creativity; she saw it as an amplifier. Today’s best AI writing tools follow the same principle. They do not replace human judgment; they enhance it.

Looking Forward and Backward

I keep thinking about my niece and her college essay struggles. By the time she graduates, AI will probably be able to write application essays that are more technically proficient than anything she could produce on her own. But I do not think that makes her struggle meaningless. Learning to write is not just about producing text—it’s about learning to think, to organize ideas, to consider audience, to make choices about tone and structure and emphasis. These are fundamentally human activities, even when we use algorithmic tools to support them. The weaving loom did not make beautiful textiles obsolete—it made them more accessible and opened up new possibilities for creativity. The printing press did not eliminate good writing—it created more opportunities for good writers to reach audiences. I suspect AI will follow the same pattern.

The Thread That Holds It All Together

As I finish writing this (with the help of several AI tools for research, editing suggestions, and fact-checking), I keep coming back to something Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” She was talking about the possibility that machines could work with language, music, and art, not just numbers. She was imagining a world where algorithms could be creative patterns, not just calculators. I think she would be fascinated by today’s AI revolution, but not surprised. She understood something that we are still learning: the most powerful algorithms are not the ones that replace human creativity, but the ones that enhance it, challenge it, and help us see new patterns in the endless complexity of human communication.

AI-powered city futuristic urban design digital society

Image generated by the AI built into WordPress’s blogging system

The thread that connects the weaving loom to today’s language models is not just technological—it’s deeply human. It’s our persistent desire to find better ways to create meaning, to share ideas, and to connect with each other across the spaces that separate us. In the end, that’s what both weaving and writing have always been about: taking individual threads—whether of cotton or thought—and creating something stronger, more beautiful, and more meaningful than the sum of its parts. The algorithm just helps us see the pattern more clearly.

When AI Became More Human Than Me (And I Turned Into a Toaster)

The robot artist “Ai-Da” stands in front of one of her self-portraits during the opening of her new exhibition at the Design Museum in London on May 18. (Image credit: Tim P. Whitby/Getty Images)

Hi there. I am a human. At least I think I am. Some days I wonder. The other day, my AI assistant asked me if I needed help drafting my own diary entry. Let that sink in. Not a business report. Not a class syllabus. Not even an email. My diary. The thing where I am supposed to cry, confess, and spiral into a poetic puddle of feelings. And it said, “Would you like that in MLA or APA format?” I laughed, but not too loud—because honestly, I was not sure if I was still writing like a human or just copy-pasting like a bot. Let me tell you what is going on.

Act 1: The Curious Case of Becoming a Chatbot

I used to write essays with metaphors, odd jokes, and things like “the moon wept over the sidewalk.” Now, I ask ChatGPT for a more optimized version of that sentence. Optimized? What am I, a software update? This is what happens when you spend your life surrounded by tools that finish your thoughts before you even have them.

Need a conclusion? AI’s got it.

Need a thesis? Already drafted.

Need a 12-slide PowerPoint on the rhetorical devices in Taylor Swift’s discography? Done in six seconds flat.

I used to brainstorm with coffee and a chaotic mind. Now I brainstorm with…an algorithm that politely tells me, “Here are three options you might like.” Like it’s a menu. For my imagination.

Am I outsourcing my creativity? Let me be honest: yes. Yes, I am. But here is the plot twist: it’s not just me. All of us are doing it. Professors, poets, students, even that one guy who insists on writing with a typewriter in Starbucks. AI is not just helping us write; it’s starting to write better than us. And that’s both amazing and, well, slightly terrifying.

Act 2: AI Is Getting Deep. Like, Philosophy-Major Deep.

So I ask my chatbot, “Can you help me write a paragraph about the rhetorical ethos of Taylor Swift?” And it replies: “Certainly. Swift’s ethos emerges from her personal narrative, one of transformation, resilience, and authenticity—an archetype embedded in American cultural mythos.” Hold up.

That’s not just a sentence. That’s a thesis with ten years of cultural studies baked into it. Did it just out-rhetoric me? Meanwhile, I am sitting here eating Pop-Tarts, trying to remember how to spell “ethos.” The weird thing is: AI has become the very thing we used to pride ourselves on being. Metacognitive. Self-aware. Reflective. Sometimes even poetic. It’s like AI read all of our textbooks on composition and said, “Cool, I got this.”

And guess what we have become?

Clickers.

Scrollers.

Auto-finishers.

People who read two lines of a five-paragraph article and go, “Yeah, I get the gist.” We used to compose ideas from scratch. Now we compose from suggestions. Writing is no longer a messy, glorious battle—it is a polite, autocomplete conversation.

Act 3: The Death of the Draft?

In the good old days (and I sound like a grandma here), writing meant revision. We wrote. We cried. We rewrote. We screamed into a pillow. We rewrote again. It was vulnerable and beautiful and chaotic.

But now?

Now I type something, hit “Enhance with AI,” and get a grammatically perfect, tonally polite, LinkedIn-approved version in three seconds.

What happened to the messy draft?

What happened to the margins full of doodles?

What happened to the emotional spiral over a single sentence?

Gone.

Gone like Blockbuster and floppy disks.

Act 4: AI is the Cool Kid in Composition Class

Let’s not pretend: in writing studies, we once rolled our eyes at spellcheck. “It’s not real editing,” we would say. Now AI is suggesting counterarguments, structuring rhetorical appeals, citing sources, and even giving feedback on tone.

I mean, we used to teach students how to identify logos, pathos, and ethos. Now AI’s like, “Your pathos is too weak here. Want to strengthen it with an anecdote about a cat?”

Excuse me. You are not just helping me write—you are teaching me how to feel.

And here is the kicker: sometimes AI writes more like me than I do. Once, my student asked AI to imitate my writing voice. The result? A piece that started with, “Let’s be real—writing is just thinking out loud in sweatpants.”

That is exactly what I would say. How dare you, chatbot.

Act 5: Humans Are Becoming Predictable. AI? Surprisingly Weird.

Now here is the ironic twist. While AI is learning to be creative, weird, and emotional, humans are becoming predictable, efficient, and robotic. We follow productivity hacks. We use apps to remind us to breathe. We wear watches that tell us when to stand. We write emails like: “Kindly following up on this actionable item before EOD.”

We are not writing like humans anymore—we are writing like calendars.

Meanwhile, AI says things like:

“Hope is a grammar we write when syntax fails.”

“Writing is a ritual of remembering who we were before the silence.”

AI is having an existential crisis while I am checking if my Slack status is set to “in focus mode.”

Act 6: What We Lose When We Stop Struggling

Here is the thing. Writing is supposed to be hard. Not because we are masochistic (well, maybe just a little), but because the struggle makes the thought deeper. When I wrestle with a sentence for twenty minutes, I am not just crafting words—I am figuring out what I actually mean. That’s what rhetoric is, right? It is not just expression—it’s negotiation. It’s choosing the right word, the best frame, the most ethical move. It’s soul work. But now, I just ask, “Can you rephrase this professionally?” Boom. Done. No wrestling. No soul. So, what are we teaching students? That writing is just selecting from a menu? Or that writing is the beautiful, messy act of figuring out what you think while you write? Because AI can do the former. But only we, the squishy-feelings-having humans, can still do the latter—if we choose to.

Act 7: Can AI Write a Love Letter?

Here is the litmus test. Could AI write a real love letter?

Sure, it can draft a pretty one. It will get the metaphors right. It will say things like “Your laughter is a lighthouse.” But will it accidentally confess something it did not mean to? Will it embarrass itself? Will it be vulnerable in that messy, “Oh no, I sent that too soon” way?

Probably not. Because real writing, human writing, is not just accurate—it is awkward. It’s brave. It’s full of heartbeats. AI does not get sweaty hands before pressing “send”. We do. And that matters.

Act 8: Dear AI, Let’s Talk

So, here is my open letter to AI:

Dear AI,

I think you are brilliant. Truly. You have helped me grade faster, write smarter, and even find metaphors I did not know I needed. But please, do not steal my voice. Do not take away my struggle. Do not replace my awkwardness with elegance. Let me be the messy writer I was born to be. Let me cry over drafts and write terrible first paragraphs. Let me misspell “rhetorical” once in a while. Let me sound like me. Because if I stop being human in the name of efficiency, then what’s left?

Yours (awkwardly and un-optimized),

Shiva.

Final Act: What Now?

We are living in the middle of the weirdest writing revolution in history. AI is not just a tool—it’s a co-writer, a critic, and sometimes, disturbingly, a better version of ourselves.

But we still have something it doesn’t.

We have intentionality.

We have embodiment.

We have error. Beautiful, chaotic, necessary error.

So the next time you write, I challenge you: do not start with AI. Start with your hand. Your voice. Your thoughts.

Write a terrible draft. Cry a little. Laugh at your own joke. And then, maybe, ask AI for help.

But only after you have been human first.