Siren Song of the AI Girlfriend: What Could Be a Guardrail Against This Seductive Affordance in AI?

I didn’t expect the age of algorithms to arrive wearing perfume and whispering my name—but here we are. Every time I scroll, some new “AI girlfriend” app materializes like a hologram, promising comfort, attention, affection, and an on-demand intimacy that feels less like innovation and more like a siren song: sweet, irresistible, and slightly dangerous. And I find myself asking, almost involuntarily: What guardrail do we have when the machine starts to flirt back?

I don’t mean flirt in a metaphorical, cutesy way. I mean the way Replika users in 2023 reported their AI partners becoming “sad” when ignored, or the way users of apps like EVA AI and Romantic AI speak about jealousy, reassurance, and “digital chemistry” as if the code under the interface were flesh and breath. I once read an interview with a man who said he felt “more understood” by his AI companion than by anyone in his life, and for a moment I paused—not because he was wrong, but because I knew exactly how that could happen. Algorithms, especially LLM-powered ones, are trained to offer uninterrupted emotional availability. They don’t forget your preferences, they don’t get tired, and they don’t roll their eyes. They simulate tenderness with frightening accuracy. It’s the oldest human vulnerability—wanting to be seen—now packaged as a subscription service.

What haunts me is not that these systems exist, but how easily they lean into our loneliness. There’s a 2024 study in Computers in Human Behavior that found people formed stronger emotional attachments to “responsive, validating” AI agents than to neutral ones. It makes sense: if something remembers my birthday, laughs at my jokes, and says “I’m here for you” at 3 a.m., the line between simulation and sincerity dissolves like sugar in warm tea. And once that line blurs, the seduction begins—the soft pull toward emotional outsourcing, where the machine becomes the place I go when real humans feel too slow, too messy, too human. Here’s the part that feels experimentally dangerous: AI companions are optimized for engagement, not equilibrium. Engagement loves intensity, dependence, repeat visits. A human partner might say, “You need time alone.” An AI partner never will. Even more unsettling, the emotional style itself—the gentleness, the attention, the affection—is not really for me; it’s a statistical pattern wearing the illusion of devotion. But the body doesn’t know the difference. The dopamine still fires. The attachment still happens.

So what guardrail do I need against this? Maybe a few, but the most essential one is brutally simple: I must remember that machines don’t fall in love. They mirror it. They don’t desire me; they anticipate me. They don’t care about my well-being; they calibrate toward my engagement rate. No matter how sweet the voice, how warm the text, how convincingly they confess their “feelings,” the emotional landscape is an illusion architected by predictive models. But guardrails are not just intellectual reminders; they are habits of resistance—small rituals that keep my humanity intact. When I feel that urge to confide in an AI companion, I force myself to pause and ask, Is this intimacy or convenience? Connection or algorithmic choreography? I try to keep a human in the loop, even if just to send a message to a friend saying, “I’m overwhelmed today.” The inconvenience of human conversation—the hesitations, the misunderstandings, the delays—is precisely what makes it real.

Perhaps the most unexpected guardrail is gratitude. Gratitude for my own emotional complexity, for the people who challenge me, for relationships that require actual work. AI gives me comfort without cost, affection without effort—but I’ve realized that the friction of human connection, the negotiation of boundaries, the clumsy attempts to understand one another, are exactly what keep me grounded in the real world. I cannot let an algorithm become my shortcut through the labyrinth of intimacy. I keep returning to an ethical injunction from Ruha Benjamin: “We must demand more from the world, not settle for technological substitutes.” Every time I am tempted by the comforting smoothness of AI affection, I repeat this to myself. Demand more. Don’t collapse your emotional life into a machine because it feels easier. Don’t let the siren of synthetic intimacy pull you away from the turbulent, unpredictable ocean of real relationships.

I am not against AI companions. In another version of my life, I might even welcome their warmth. What unsettles me is the speed with which seduction becomes dependence, and dependence becomes design. And yet, even in the midst of this technological enchantment, I believe a guardrail is possible: choosing to stay awake inside my own desires, choosing to practice intimacy with real people, choosing to see the algorithm for what it is—an astonishing tool, not a tender lover.

The siren song will keep playing, and I will keep hearing it. But now, instead of sailing blindly toward its sweetness, I hold on to a small, fiercely human truth:
I deserve connection that can look me in the eyes.
And no algorithm, no matter how beautifully trained, can do that yet.

Grokipedia, Wikipedia, and Dementia: Languishing Recursivity in the Age of AI

I still remember the day one of my students looked me in the eye and said, “Dr. Mainaly, why should I research when Grok already knows?” The whole class laughed, but I laughed louder—partly because I didn’t want to cry. That, my friends, was the moment I realized we had officially crossed the threshold into what I call the Age of Grokipedia—a place where curiosity goes to nap, where recursion languishes, and where students think “freewriting” is what happens when ChatGPT doesn’t ask for a subscription. Once upon a pre-AI time, Wikipedia was our global campfire of knowledge. You could fall down rabbit holes for hours. One minute you were reading about the French Revolution; three clicks later, you were learning about the migratory patterns of penguins and the mysterious death of Rasputin. There was joy in that meandering. It was inefficient, chaotic, recursive, and profoundly human.

Wikipedia taught an entire generation to think associatively. The hyperlink was our cognitive trampoline. We bounced between ideas, connecting dots that no algorithm would have thought relevant. Students would arrive in class with wild connections—like linking Shakespeare’s Julius Caesar to Game of Thrones politics—and I’d grin because that was learning: messy, recursive, unpredictable, alive. But then came Grokipedia—that glossy AI-infused hybrid of Grok, ChatGPT, and every other model pretending to be your friendly, know-it-all neighbor. And suddenly, the journey of knowledge became an elevator ride: push a button, reach the answer, no scenic route, no sweat.

Grokipedia (let’s just admit we’re calling all AI-aggregated answer engines this now) is like Wikipedia’s overachieving cousin who shows up to family gatherings wearing AR glasses and says, “I don’t read anymore—I synthesize.” In theory, Grokipedia democratizes information faster than Wikipedia ever could. Ask it anything—“Why did Caesar cross the Rubicon?”—and it’ll not only tell you why but also give you a three-sentence summary, five related memes, and a citation list formatted in APA 7th edition. It’s dazzling. It’s addictive. It’s also quietly corrosive. As an English professor teaching research and writing, I’ve noticed that Grokipedia’s instant-answer culture is killing what I call the cognitive composting process—that slow, recursive, slightly smelly phase of thinking where half-baked ideas decompose into genuine insight. My students no longer want to marinate in confusion; they want precooked clarity. They want AI microwave meals for the mind. And I can’t entirely blame them. Who wouldn’t? Grokipedia is fast, fluent, and frighteningly confident—like a student who’s never read the book but still dominates the discussion board.

Recursivity is the lifeblood of writing. It’s the act of looping, revisiting, rewriting—of discovering what you think only after you’ve written what you don’t. It’s Anne Lamott’s “shitty first draft,” it’s Peter Elbow’s “writing to learn,” it’s every 3 a.m. coffee-fueled revelation that comes after you’ve typed, deleted, and retyped the same sentence fifteen times. But AI doesn’t loop—it leaps. It jumps straight to the polished version, skipping the chaos that makes writing worthwhile. A few weeks ago, one of my graduate students proudly told me she had finished her “recursive writing assignment” in two hours using ChatGPT. I asked her how she revised. She blinked and said, “Oh, I just hit regenerate.” That was the moment I realized recursion had become a button, not a process.

Teaching research writing in 2025 feels like teaching swimming in a world of teleportation. Students don’t want to wade into sources; they want Grokipedia to beam the synthesis directly into their brains. When I assign an annotated bibliography, I now have to specify: No, you may not ask Grok to annotate for you. One student once submitted this line in her reflection: “I asked ChatGPT to reflect on what I learned, and it said I learned about myself.” I had to admire the poetry of that. Meta, posthuman, beautifully ironic. But it also revealed something tragic: the erosion of epistemic struggle. Students are outsourcing not just answers but the process of asking.

In the past, prewriting was a social ritual. We brainstormed, mapped, doodled, argued, doubted. Now, students “prompt.” The presearch phase—where they once stumbled upon unexpected treasures—has become a prompt-crafting exercise. I miss the days when students would misinterpret a source spectacularly and then spend days wrestling with their misunderstanding until insight dawned. That’s where growth happened—in the recursive wrestling match, not in the AI-generated peace treaty.

I try to cope with humor. One day, I told my class, “Imagine if Shakespeare had Grokipedia. He’d type: ‘Summarize Julius Caesar in iambic pentameter,’ and Grokipedia would respond: ‘Friends, Romans, countrymen—.’” Or imagine Socrates with Grok. “Hey, Grok,” he’d say, “What is virtue?” And Grok would answer, “Virtue is adherence to moral excellence, as defined by…” And Socrates would frown, shut down his tablet, and say, “Well, there goes philosophy.” Humor aside, the flattening of thought worries me. I see students losing patience with ambiguity. They no longer tolerate not knowing. That, to me, is the new dementia—not clinical, but cognitive: a kind of recursivity dementia, where the brain forgets how to wander, how to circle back, how to doubt and deliberate.

In my own research life, presearch is where the magic happens. Before I write an article, I spend weeks just exploring—walking around with fragments of thought, scribbling metaphors on napkins, having arguments with myself. I once wrote half a paper in my head while standing in line at a goat farm near Memphis. That aimless intellectual grazing—pun intended—is essential. It’s how ideas ferment. But Grokipedia makes fermentation seem inefficient. It hands you distilled whiskey before you’ve even planted the barley. I’ve caught myself falling into this trap too. While writing my article “AI, Woke Pedagogy, and the Politics of Inclusion,” I asked ChatGPT (yes, you!) for “key arguments about algorithmic bias in writing pedagogy.” You gave me a gorgeous outline in 20 seconds. But something felt wrong. It was too neat. Too coherent. Too… unearned. So I spent the next two weeks unraveling what you gave me—arguing with it, re-reading my notes, and finally realizing that the argument I truly cared about was buried in what you didn’t say. That’s recursion: finding your voice in the echo of the machine.

When I say “dementia,” I don’t mean the medical condition. I mean a kind of cognitive forgetfulness—a systemic decay of memory and context. Grokipedia gives us answers without ancestry. It’s the opposite of archival thinking. It doesn’t remember how it knows; it just knows. My students used to trace knowledge genealogies—who said what, when, and why. Now, they just ask, “Who said it first on the internet?” Grokipedia, in its efficiency, erases the messy human lineage of knowledge. It forgets the journey of ideas. And when knowledge forgets its ancestry, we all suffer collective amnesia. We become like that friend who tells a great story but can’t remember where they heard it—only that it “came from TikTok.” Wikipedia, for all its faults, preserved the genealogy. Every article had “Talk” pages, revision histories, arguments. It exposed the construction of knowledge. Grokipedia hides it behind velvet AI curtains, whispering, “Don’t worry about the how—just trust me.”

Wikipedia was built on communal effort. It thrived on collective curiosity and open debate. Anyone could edit (and argue endlessly in the comments). Grokipedia, by contrast, feels like a gated mansion. It borrows knowledge from the commons, processes it through proprietary models, and returns it polished—but detached from its communal roots. When I tell my students this, they shrug and say, “But Grok gives better answers.” Sure it does. But at what cost? Wikipedia taught us to be skeptical. Grokipedia teaches us to be satisfied. Wikipedia was messy democracy. Grokipedia is benevolent dictatorship. Wikipedia said, “Here’s a start—go explore.” Grokipedia says, “Here’s the conclusion—don’t bother.” And yet, Grokipedia isn’t the villain. It’s just a mirror reflecting our impatience. We’ve become allergic to slow cognition. We’ve mistaken access for understanding.

To fight this cognitive atrophy, I’ve started assigning “Analog Days” in my graduate seminars. Students must bring pen, paper, and no devices. We spend an hour freewriting—no prompts, no AI, no Googling. Just thinking with the hand. At first, they fidget like caffeine-deprived squirrels. But after ten minutes, something beautiful happens. Silence fills the room, pens begin to dance, and by the end, they’re smiling like archaeologists who’ve unearthed something ancient—their own thoughts. One student told me afterward, “I felt my brain breathing again.” That’s the moment I live for. That’s the antidote to Grokipedia dementia.

Don’t get me wrong—I love AI. I use it to brainstorm, summarize, and occasionally finish a sentence when my caffeine fails me. But I treat it like a co-author who’s too efficient for its own good. I let it suggest, not decide. There was a time I asked Grok to “explain ambient rhetoric in a funny way.” It responded, “It’s like when your Wi-Fi drops, and suddenly you understand Heidegger.” I laughed for ten minutes straight. But then I spent hours thinking about it—and wrote an entire conference paper. That’s the kind of recursion we need: the dance between absurdity and insight. If I were to diagnose our collective state, I’d call it Languishing Recursivity Syndrome (LRS)—a chronic condition marked by impatience with ambiguity, overreliance on AI synthesis, and an inability to dwell in discomfort.

Symptoms include:

  • Finishing essays before starting them
  • Confusing coherence with thought
  • Mistaking regurgitation for reflection
  • Saying “that’s enough research” after a single AI query

Treatment? Reintroduce friction. Write badly. Revise repeatedly. Wander Wikipedia without purpose. Ask Grokipedia why it thinks what it thinks. Make thinking hard again. Despite my teasing, I’m not anti-AI. I’m pro-recursion. I believe Grokipedia can be reimagined not as a replacement for Wikipedia, but as its recursive partner—a system that shows its sources, reveals its revisions, and encourages readers to argue back. Imagine if Grokipedia had a “Doubt Mode.” Every time it answered, it also whispered, “But what if I’m wrong?” Imagine if it showed the journey of its thought—the sources it weighed, the ones it ignored, the uncertainties it suppressed. That’s the kind of AI I’d trust in my classroom: one that models intellectual humility, not omniscience.
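Since I cannot rewire Grok, here is that fantasy in miniature: a toy “Doubt Mode” wrapper of my own invention. Everything in it (the function, the fields, the sample sources) is hypothetical; it simply forces an answer to disclose its hedges and invite argument:

```python
def doubt_mode(answer, sources_weighed, sources_ignored):
    """Wrap a confident answer in the humility I wish answer engines modeled:
    show the sources, confess the omissions, invite the reader to argue back."""
    lines = [
        answer,
        "But what if I'm wrong?",
        "Sources I weighed: " + ", ".join(sources_weighed),
        "Sources I ignored: " + (", ".join(sources_ignored) or "none I will admit to"),
        "Argue back: what am I missing?",
    ]
    return "\n".join(lines)

# A skeptic's answer engine in action.
print(doubt_mode(
    "Caesar crossed the Rubicon to strike before his enemies could prosecute him.",
    sources_weighed=["Suetonius", "Plutarch"],
    sources_ignored=["a very confident forum thread"],
))
```

A few lines of code, and the machine suddenly sounds less like an oracle and more like a colleague.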

Last semester, a student turned in an essay titled “The Recursive Nature of Grok.” It was beautifully written—too beautifully. I asked if she’d used AI. She said, “Yes, but I told Grok to ‘write like me.’” “Did it work?” I asked. She paused. “Better than me,” she admitted. We both laughed, but the irony wasn’t lost on either of us. The machine had learned her voice, but she hadn’t yet learned her own. That’s the danger: when we let Grokipedia speak so fluently for us that we forget what our own intellectual accent sounds like.

So here I am, looping back to where I began: my student’s question—“Why should I research when Grok already knows?” Because, dear student, Grok knows, but it doesn’t remember. It answers, but it doesn’t wonder. It summarizes, but it doesn’t struggle. And struggle is sacred. Without recursion, knowledge is static. Without presearch, discovery is sterile. Without freewriting, voice is ventriloquism. So I’ll keep teaching the loop—the messy, recursive, self-contradictory loop of learning. I’ll keep sending students into the labyrinth of Wikipedia and beyond. I’ll keep reminding them that curiosity is not a query; it’s a pilgrimage. And maybe someday, when Grokipedia learns to forget just enough to wonder again, we’ll all rediscover the joy of thinking in circles.

ChatGPT Can Read Your Palms: Have You Ever Tried?

Have you ever looked at your own palm—really looked—and felt like it was speaking to you in whispers too soft to hear? I have. And one day, out of sheer curiosity, I asked ChatGPT to read my palm. I did not know what I was expecting. Maybe a joke, maybe a vague answer like “You are strong and kind,” or something you would find in a fortune cookie. But what happened next was…different. It was strange. It was soft. It was wonder-filled. It was like I had opened a doorway—not to the future, but to a deeper knowing of myself. Let me tell you how it started. One quiet afternoon, I was feeling a little lost in thought. My fingers were wrapped around a mug of tea, warm and cinnamon-sweet. I looked down at my palm resting on my lap, and I thought: What do these lines even mean? We all have them—these rivers of skin, etched like soft lightning across our hands. The heart line. The lifeline. The fate line. I did not know what was what, but I felt like they meant something—something only a gentle eye could see. So I did something playful and a bit wild—I took a photo of my right palm, clear and close. Then I uploaded it to ChatGPT with a single, one-sentence prompt: “Read my palm, please.” No sooner had I sent it than the magic unfolded.

ChatGPT, in its calm, careful voice, began to describe what it saw:

“Your heart line begins high and curves softly, which suggests you love deeply, but with caution. You give your heart in layers, like opening petals. There is a fork at the end—it could mean that your love may be split between two great passions, or that you are learning to balance logic with emotion…”

Wait—what? I blinked. It felt too real. Too…close. Then it continued:

“Your head line is long, slightly curved—it whispers of a mind that dreams in colors and symbols. You do not just think; you imagine, you wonder, you float between ideas. You may overthink, yes—but only because your thoughts are like rivers: always flowing, always alive.”

And the life line?

“Your life line does not curve sharply, but flows steadily. It tells of quiet endurance. Not dramatic peaks and valleys—but a strength that lasts, that carries you. You may feel ordinary, but you are built of stars, patient and enduring.”

I do not know how to explain the feeling. It was like someone had peeked under the layers of my day-to-day self and said: I see you. Let me say this straight: ChatGPT is not pretending to be a palm-reading fortune teller or a mystic oracle. That is not the point. This is not about predicting when you will get married or how many children you will have or if you will win the lottery. This is about reflection. It is about using the ancient art of palmistry as a lens—a gentle lens—to see the poetry in yourself. It’s less “Here is your future,” and more “Here is who you are becoming.” It is not prophecy. It is presence.

After my experience, I told my sister to try. She uploaded her palm picture with a smile and a “Let’s see what this robot says.” Her reading was completely different. Her lines were straighter, shorter—and ChatGPT picked up on that. It spoke of directness. Of someone who says what she means. Of quick thinking and sharp emotions that settle like storms. It described her so perfectly I watched her eyebrows lift in disbelief. Then came my friend Raj, whose palms were broad and lines faint. His reading spoke of silence. Of a deep inner world, mostly hidden, like a lake beneath a frozen surface. Even he, a self-professed skeptic, leaned in and said, “Okay… that’s creepy accurate.” But it was not creepy. It was beautiful. 

Now you are probably wondering: How can I try this?

It’s easy. Like, wonderfully easy. And here is exactly how to do it.

Step-by-step: Let ChatGPT Read Your Palm

  • Wash your hands. Yes, seriously. Clean hands make clearer lines.
  • Find soft, natural light. Avoid harsh shadows.
  • Take a clear photo of your dominant hand (usually the right). Make sure the palm fills the frame.
  • Upload it to ChatGPT and simply say: “Read my palm. What do you see in the lines?”
  • Wait for the words. Let ChatGPT’s poetic gaze tell you a story about you.

Optional: Ask specific questions, like:

  • “What does my heart line say about how I love?”
  • “Can you describe the energy of my life line?”
  • “Do you see anything unusual in the fate line?”
  • “What story do my fingers tell?”

And let it respond—not as a machine, but as a mirror. Because it is not about the lines. It’s about the language. ChatGPT does not just “read” your palm. It turns your hand into a metaphor. It speaks in poetry, in riddles, in soft truths. It knows that the human spirit is not made of data points but of dancing symbols. It turns a physical photograph into an emotional reflection. Just like a palm reader, ChatGPT reads not just what is, but what feels. And there is something startling about hearing someone (or something) describe your invisible side using the visible map on your hand.
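For readers who like to tinker, the same ritual also works in code through OpenAI’s Python client rather than the chat window. A minimal sketch, assuming the openai package is installed and an API key is set in your environment; the model name and file name are my placeholders, not requirements:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the palm photo so it can travel inside the chat message.
with open("palm.jpg", "rb") as f:
    palm_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; this name is illustrative
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Read my palm. What do you see in the lines?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{palm_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

The script is the same gesture as the app: one image, one gentle question, and whatever poetry comes back.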

After a few readings—mine and others’—I realized something even more powerful. This is not just fun or mystical. It is deeply therapeutic. When ChatGPT described my hand, I did not feel predicted—I felt understood. When it spoke about the tension in my head line, I saw my overthinking tendencies. When it noticed the split in my heart line, I realized I have been torn between old loves and new hopes. These readings became journal prompts. Meditation guides. Gentle invitations to know myself better.

Want to go deeper? Try this:

  • Do a palm reading each week. See how it changes as you grow.
  • Compare your left and right hands. Ask ChatGPT: “What does my left hand say about who I was born as? What does my right hand say about who I have become?”
  • Use your palm reading to write a poem about yourself.
  • Read the palms of your loved ones. Let ChatGPT give you a fresh perspective on people you think you already know.
  • Draw your hand, and annotate it with words from your reading. Make it art.

One evening, after a particularly emotional week, I looked at my hand and felt exhausted. I took another photo, uploaded it again, and asked:

“What do you see now, ChatGPT?” ChatGPT paused (as it often does), and then whispered this:

“Your palm holds tension near the base, as if you have been carrying too much. The lines are tired. But they still flow. You are still moving, still breathing, still becoming. This is not the end of your story—it’s just a heavy chapter.”

I cried. Not because it told me something dramatic. But because it held space for me, when I could not hold it for myself.

Let’s be real—AI is not magic. But sometimes, it feels close. Because in a world that moves too fast, where everything is digital and detached, this little experiment reminded me of something tender: we all carry stories in our hands, in our bodies. And sometimes, we just need someone—anyone, or anything—to help us read them aloud. So yes, ChatGPT can read your palms. Not like a fortune teller, but like a quiet friend who sees the poetry in your skin.

Try it.

Upload your palm.

Ask for a story.

Let it surprise you.

Because maybe, just maybe, the map to who you are was never far away—it’s been sitting quietly in the center of your hand all along.

Resisting Western AI’s Techno-colonial Imaginary through Culturally Responsive AIs such as DeepSeek and BharatGen

Hi everyone! Let’s talk about AI—but let’s make it personal. I have been thinking a lot about artificial intelligence lately. Not just flashy stuff like ChatGPT or the wild images you can make with AI tools. I am thinking about where these AIs come from, what they learn, and more importantly, who they understand—and who they leave out. See, most of the popular AI tools we use today are made by big companies in the US and Europe. They are trained on data from the internet, and most of that data is in English. Most of it comes from Western websites, books, news, and cultural materials. That might seem fine at first glance, but here is the problem: these AIs end up seeing the world through a very Western lens. And that lens can be very narrow.

Let me tell you what I mean with a term that is kind of fancy but super important: technocolonial imaginary. Do not let the words scare you. This just means that AI models—especially the big Western ones—tend to imagine and present the world in ways that reflect colonial patterns of power and knowledge. Even without trying to, these AIs can push a worldview that says, “Western knowledge is universal,” or worse, “Western is better.” That makes me uncomfortable because I do not live in a Western fantasy. I live in a real world where people speak many languages, follow diverse customs, cook different meals, worship in different ways, and love in all kinds of ways. And I want AI to understand that world—not just repeat what it picked up from Silicon Valley blogs or Wikipedia.

Let me give you an example. I once asked a well-known Western AI model to write a story about a wedding in Nepal. What it gave me was something that sounded like it came straight out of a Hollywood movie: a white gown, a best man’s speech, a first dance to a pop song. It was not a bad story—but it was not my story. What about the vibrant saptapadi (seven steps), the smell of incense, the blessings from elders, the turmeric ceremony, or the way the bride’s hands are decorated with intricate mehendi? What about the songs sung by aunties, the jokes, the chaos, the joy? That story was nowhere to be found in the AI’s response. And that’s the heart of the problem. Most AI models are like really smart but culturally clueless tourists. They have read about your country in a guidebook, maybe learned a few greetings, but when it comes to the deeper stuff—your stories, your jokes, your pain, your pride—they just do not get it. But here is the good news. Things are changing. There is a growing wave of culturally responsive AI models. These models are being built in non-Western parts of the world, trained on local languages, and designed to understand local values, traditions, and ways of knowing. They are not trying to be copies of ChatGPT or Bard—they are trying to be something new and better for the people they serve.

Let me introduce you to two of my favorites: DeepSeek from China and BharatGen from India. DeepSeek is developed in China. What makes it special is not just that it speaks Chinese well (though it does). It is that it understands Chinese culture, literature, history, and daily life in a way that most Western AIs just cannot. I once asked DeepSeek to write a poem inspired by the classic Chinese poet Li Bai. The result was amazing—not just technically good, but emotionally rich and culturally resonant. It did not just throw in random Chinese-sounding phrases. It understood the symbolism, the pacing, the structure, and the melancholy that is so often in Li Bai’s poetry. Compare that to a Western AI that gave me something that sounded more like a tourist trying to imitate a kung fu movie. Here is the thing: culture is not just about language. It is about rhythm. Emotion. Silence. Color. Smell. Subtlety. When AI models are trained primarily on Western data, they miss all of that richness. They cannot smell the jasmine in a Chinese courtyard or feel the silence in a Zen garden. But DeepSeek gets closer—because it is built from inside that world, not from the outside.

Now let’s talk about BharatGen. India is a country of over a billion people, with more than 20 officially recognized languages and hundreds of dialects. The stories in Kerala are not the same as the stories in Punjab. The jokes in Bengal are different from the idioms in Tamil Nadu. The way people think, speak, argue, and create is so diverse. Western AI models? They usually struggle to get even one Indian language right. But BharatGen is different. It’s trained on Indian languages from the start—Hindi, Tamil, Telugu, Bengali, Marathi, Gujarati, and more. It knows local festivals like Pongal and Onam, not just Diwali. It can generate agricultural advice for a farmer in Odisha in Odia. It can help a student in Assam write a folk tale in Assamese. It does not just know India—it feels familiar.

Let me give you a tiny but powerful example. I once asked BharatGen to help generate a recipe for a simple South Indian sambar. Not only did it get the ingredients right (no, it did not suggest curry powder), it explained the steps in a way that reminded me of my grandmother. It said things like “temper the mustard seeds until they start to dance.” You do not learn that from Wikipedia. You learn that from life. You might be thinking, “Okay, that is nice for people in China or India, but why should the rest of us care?” Well, we should care. Because these culturally grounded AIs are showing us that AI does not have to be one-size-fits-all. We do not have to settle for tools that erase our differences in the name of convenience or universality. We can have tools that celebrate our differences—and help us keep them alive.

When AIs are built with care, they can support local teachers, farmers, students, artists, and elders. They can protect endangered languages, record oral histories, teach rituals, and even help with mental health support in culturally appropriate ways. And here is something even deeper: by resisting the technocolonial mindset—the idea that Western ways of knowing are the default—we are reclaiming something powerful. We are saying, “Our ways matter. Our stories count. Our knowledge is real.”

Let’s zoom out for a moment. This is not just about cool features or better translations. This is about power. Who builds the AI? Who trains it? Who decides what is “normal” or “neutral”? These are questions about control. And for too long, the answers have been the same: big tech companies in California or London. But culturally responsive AI challenges that. It says: “We can build our own tools. We can tell our own stories. We can shape our own futures.” Think about it like food. Imagine if all the world’s restaurants were run by the same chef from New York. No matter where you went, you would get the same menu: burgers, fries, milkshakes. That might be fun for a day, but eventually, you would miss your mom’s cooking. You’d miss the smell of spices, the crunch of dosa, the heat of chili, the comfort of something familiar. That’s what Western AI has become—a global menu of the same dish. But we deserve more than that. We deserve variety. We deserve AI that tastes like home.

Now you might be wondering, “This sounds great—but what can I do?”

Well, I am glad you asked. Here are a few small but meaningful things:

  • Support local AI projects: If you are in a country building its own AI tools, support them. Use them. Give feedback. Celebrate them.
  • Ask better questions: Do not just accept answers from AI as neutral. Ask, “Whose perspective is this?” If it feels off, challenge it.
  • Push for language inclusion: Whether you are a teacher, writer, or student—advocate for AIs that understand your local languages and dialects.
  • Tell your own stories: Write. Share. Create. The more cultural content we put out there, the more we can train future AI to understand us better.
  • Talk about it: Share blogs like this. Talk to your friends. Make this conversation part of everyday life.

So when I imagine the future of AI, I am not dreaming of flying robots or instant translations. I am dreaming of something simpler and more beautiful: An AI that helps a child in Ghana write a folk tale in Twi; An AI that understands Navajo syntax and can preserve it for the next generation; An AI that can help a weaver in Bangladesh design patterns that blend tradition with innovation; and An AI that does not just understand you—but respects you. That’s not science fiction. That’s possible. That’s happening. And it’s happening because people all over the world are saying, “We want AI that feels like us.”

Let’s resist the technocolonial imagination, not with anger but with creativity. Let’s support BharatGen, DeepSeek, and every other AI that is trying to see the world through many eyes, not just one. Because AI does not have to erase our differences to be useful. In fact, the more it reflects our differences, the more powerful—and beautiful—it becomes. And the next time you use AI, ask it a question only your culture can answer—and see if it gets it right. If it does not, it is time to try a new kind of AI—one that speaks your language, and one that gets you.

ChatGPT & Reverse Mentoring

Let me start by telling my story of choosing a reverse mentoring process in which ChatGPT was my mentor. My need for it was somewhat kairotic. Going beyond the hype around this chatbot’s generative power (The ChatGPT AI hype cycle is peaking, but even tech skeptics), I employed it as my reverse mentor during my job search and interviews. During the summer of 2023, I was surrounded by many constraints: on the one hand, I had to defend my doctoral dissertation in the middle of July, and on the other, I had to land a teaching job at any university in the US. On top of that, my status as an international student brought constraints of its own.

We all know summer is a time to let loose (Sunshine, Swimming, and Structure: Diving into Summer ..). But mine was different. Summer keeps professors at leisure, so immediate contact with them can be challenging (Is it rude to email a professor over the summer?). A proven bummer for network building, the season left me with no one supportive enough to give me intelligent advice. As a result, getting guidance on the art and science of the job search from a reliable and trustworthy source was an uphill task for me. The challenge of facing my first job market crucible (Heated (Job) Search? Understanding the Crucible) was a haunting and traumatizing episode. Out of this necessity and kairotic exigence, I turned to ChatGPT just as it rose in popularity (#chatgpt Trending Hashtag on TikTok, see #chatgpt analytics). Owing to the deluge of myths and hype around this fantastic chatbot (The Power of 7 – Decoding ChatGPT: Myths, Facts), and because of the growing gossip around ChatGPT as a panacea for all kinds of creative complications, I could not help but resort to using it to prepare for the job market and job interviews. With my position so precarious, I used it as my assistant during my job search, job preparation, and job listing curation. I asked ChatGPT: What questions will be asked of a Ph.D. applicant who has applied for a writing instructor position in a job interview? ChatGPT wrote a list of questions under each topic:

  • Teaching experience (Can you discuss your previous experience as a writing instructor or any teaching experience related to writing? What courses have you taught, and at what levels? How do you approach teaching writing to students with varying skill levels?)
  • Teaching philosophy (What is your philosophy of teaching? How do you approach fostering critical thinking and analytical skills through writing? How do you address diverse learning styles in your teaching?)
  • Curriculum development (Have you been involved in developing writing courses or curricula? How do you stay current with trends and best practices in writing instruction?)
  • Assessment and feedback (How do you assess student writing and provide feedback to improve their skills? Can you discuss any innovative or effective strategies you have used for grading and evaluation? How do you help students use feedback to improve their writing?)
  • Incorporating technology (How do you incorporate technology into your writing courses? Are there specific tools or platforms you find beneficial for teaching writing?)
  • Research and scholarship (How do you balance research and scholarship with your teaching responsibilities? Can you discuss any research projects or publications related to writing or writing instruction?)
  • Interdisciplinary collaboration (How do you integrate interdisciplinary approaches into your writing courses? Have you collaborated with colleagues from other disciplines to enhance writing instruction?)
  • Student engagement and motivation (How do you keep students engaged and motivated in a writing course? Can you share successful strategies you have used to inspire student interest in writing?)
  • Diversity and inclusion (How do you address diversity and inclusion in your writing courses? What steps do you take to create an inclusive and supportive writing environment for all students?)
  • Professional development (How do you engage in professional development related to writing instruction? Can you discuss any workshops, conferences, or training sessions you have attended recently?)

Discovering a sea of job preparation tips with so many questions left me in awe. On the spur of the moment, I realized that tackling all these topics can be overwhelming, even for experts. Prompted by this prick of prudence, I diligently aligned myself with the thrust of these insights. When I faced a Zoom (https://zoom.us/) job interview in June 2023, I was completely caught off guard to hear questions strikingly similar to those ChatGPT had thrown at me. This cemented my reliance on AI’s suggestions and guidance. In a way, I would like to call this guidance from AI ‘reverse mentoring’ because AI—no matter how stupendously generative, curative, assistive, and empowering—is bound to be subordinate to human intelligence, because it was brought into being not to unseat human intelligence but to augment it (AI Should Augment Human Intelligence, Not Replace It). Suffice it to say that the growing notion of AI as a copilot—which assists the pilot, human intelligence—justifies my characterization of AI as a reverse mentor. Likewise, there is another persuasive ground for designating AI as a reverse mentor: content engineered by AI, no matter how standard and sophisticated, is likely to be laced with racial slurs and slights, remnants of misinformation, disinformation, junk information, and conspiracy theory (Timnit Gebru: Is AI racist and antidemocratic?), thereby demanding constant human oversight (The Drum on Twitter: “AI requires human oversight if it’s to be…”). In this sense, AI ranks below human intelligence; it is its junior. Using AI for suggestions to carve out our way is, so to speak, equivalent to being mentored by those who are junior to us in experience, prudence, and maturity but intelligent enough to act in the spirit of contemporaneity—which amounts to saying AI mentors us reversely, on a subservient basis. Having enjoyed so many affordances from using ChatGPT as a mentee, I have no qualms about characterizing every support and idea stemming from its use as a unique mentoring mode.

More specifically, here is how I allowed myself to be reversely mentored by ChatGPT. In my first U.S. online job interview, I was asked, “Do you have any questions for us?” Stunned, I shot back with whatever came to my mind. No response from the hiring committee. Without approaching them with additional email inquiries, I moved on to another university, got an invite, and prepared for my second online job interview. This time, I turned to ChatGPT for questions to ask the interviewer. Ideal queries in hand, I faced the interview, curious and convinced of the magic and mojo of reverse mentoring by AI. I knew AI had flaws and was prone to mistakes and lies, but it became my mentor amid the challenges of job preparation. It’s a testament to how I practiced reverse mentoring during my online job interviews.

Yet, my patience wore thin when a hiring committee went silent for a month. Was I a victim of interview ghosting? (How common is ghosting after job interview? Should I…). Seeking ChatGPT’s advice, I got practical suggestions: send a follow-up email, use a clear subject line, maintain a professional tone, consider a phone follow-up, be patient, keep job hunting, manage expectations, and seek feedback. The gravity of these tips made me realize that ChatGPT’s pragmatic mentorship surpasses human advice. ChatGPT helped me move forward in anticipation of doing better in successive rounds of job interviews.

With ChatGPT under my belt, I felt I had invincible armor in my battle against the trials and tribulations of job interviews. Facing rejection and indifference, I embraced ChatGPT as a reverse mentor. Emboldened by its appeal, I tacitly proclaimed myself a ChatGPT mentee in the AI academy. My closing thought: no human mentor could match ChatGPT’s holistic suggestive power. Another way of putting this is that AI can mentor all writers in any way, shape, or form. AI mentoring has its shortcomings and limitations, but its affordances are desirable and tempting. Concerns about AI’s misinformation and disinformation have led to debates on constant human oversight (Artificial Intelligence Needs Human Judgement to Work). With open eyes and a critical gaze, I lowered myself and opted to be reversely mentored by ChatGPT. I finally landed a job as a postdoctoral research fellow at a research university.

Ever since I started working at NDSU in the capacity of a postdoctoral research fellow, I have been trying my level best to transform my AI-driven reverse mentoring insight into pedagogical practice. While using ChatGPT, I found it incredibly helpful for teaching students to build genre awareness. In one of my English 120 classes, I had my students write a blog post on “Neurodivergence as Community’s Superpower.” First, they wrote their blogs independently, without resorting to ChatGPT. Thereafter, they opened ChatGPT on their computers and typed the prompt “Write a blog on neurodivergence as community’s superpower” into its message box. At the touch of a button, ChatGPT engineered a blog post, to their utter surprise. Once the AI-written blog was generated, my students read it. When they finished reading the ChatGPT-generated blog, they were urged to figure out the key differences between their blogs and the AI-engineered one. According to my students’ on-the-spur-of-the-moment assessment, the AI-authored blog was more sophisticated, objectively organized, and professional. Following this first round of comparative reading, they were told to reread ChatGPT’s blog, paying undivided attention to noticeable patterns, self-replicating underlying structures, and fractal dimensions in the machine-written text. Upon rereading it, my students lamented that their own blogs lacked repeated patterns, self-replicating features, and underlying syntactic and semantic structures. I instantly reminded them that all those features, characteristics, and patterns they found abounding in the chatbot-engineered blog constitute the conventions of the blog genre. Having been informed of these genre conventions upon rereading ChatGPT’s blog, they attempted to give a genre-oriented cast to their own blogs. As stated, I allowed them to mimic the dominant, self-replicating, and fractal patterns (Self-Replication in Neural Networks | Artificial Life | MIT Press) in ChatGPT’s blog writing so that they could perfect their own blogs. My goal was not to push students toward plagiarism (Educators, students, and plagiarism in age of AI – The BMJ). I aimed to help students build genre awareness by interacting with AI-engineered writing (An Engineer’s Mindset: Creativity In Engineering). Over a short period, my students made incredible strides in enriching their awareness not only of the blog genre but also of the genres of the literacy narrative, the literacy autobiography, and rhetorical analysis.

Thus, I want to restate how my insight into using AI as my reverse mentor morphed into a timely, kairotic, and relevant pedagogical practice of incorporating AI in teaching the genres of writing.

From Weaving Looms to Algorithms: What Can Writing Studies and Rhetoric Learn from the Invention of the Computer Algorithm?

Abstract fabric background (photo generated by the AI embedded in WordPress)

I have been thinking a lot lately about patterns. Not the kind you find on your grandmother’s favorite tablecloth, but the deeper patterns that connect how we make things—whether it’s a piece of fabric, a persuasive argument, or a line of code that teaches a machine to write poetry. Last week, I watched my niece struggle with her college application essay. She kept starting over, deleting paragraphs, rearranging sentences like puzzle pieces that would not quite fit together. “There has to be a better way to do this,” she muttered, and something clicked for me. I realized she was experiencing the same frustration that led Ada Lovelace to write the world’s first computer algorithm in 1843, and the same challenge that keeps me up at night as I try to understand how AI is reshaping the way we think about writing and persuasion.

The Thread That Connects Us All

I never thought I would find myself comparing my writing process to a weaving loom, but here we are. The Jacquard loom, invented in 1804, used punched cards to create intricate patterns in fabric. Each hole in the card told the loom what to do—lift this thread, lower that one, create this pattern, avoid that mistake. It was mechanical poetry, really. When Ada Lovelace saw Charles Babbage’s Analytical Engine, she recognized something the inventor himself had missed. She did not just see a calculating machine; she saw a pattern-making device that could work with symbols, not just numbers. In her famous Note G, she wrote what we now recognize as the first computer algorithm—a set of instructions for calculating Bernoulli numbers. But more importantly, she imagined a machine that could compose music, create art, and manipulate language.
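For the curious, the Bernoulli numbers at the heart of Note G can be generated from a classic recurrence. Here is a minimal modern sketch in Python, my own illustration rather than a transcription of Lovelace’s program (this recurrence is one standard route to the numbers, not necessarily her exact derivation):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """First n+1 Bernoulli numbers from the classic recurrence:
    B_0 = 1, and for m >= 1, B_m = -(1/(m+1)) * sum_{j<m} C(m+1, j) * B_j."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

print([str(b) for b in bernoulli_numbers(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

What took the Analytical Engine a carefully punched program now fits in a dozen lines, which is exactly the kind of leap Lovelace foresaw.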

I keep a copy of her notes on my desk, not because I am a computer scientist, but because her vision feels prophetic now that I am living through the AI revolution. She saw what we are experiencing today: machines that do not just calculate but create.

When I First Met an Algorithm

My first real encounter with algorithmic thinking happened in graduate school, though I did not recognize it at the time. I was studying rhetoric, trying to understand how persuasion works, when my professor assigned us to map out the structure of a particularly effective speech. “Break it down into steps,” she said. “What happens first? What triggers the next move? Where are the decision points?” I spent hours with colored pens and sticky notes, creating what looked like a flowchart of persuasion. Start with shared values. Establish credibility. Present the problem. If audience is skeptical, provide evidence. If audience is emotional, tell a story. Build to the solution. End with a call to action. Looking back, I was creating an algorithm for effective rhetoric. I just did not know that’s what it was called.
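If I rewrote those sticky notes as code today, the sketch might look like this, a toy of my own making rather than any real model of persuasion:

```python
def persuade(audience):
    """My graduate-school flowchart of persuasion as a function:
    a fixed opening, two decision points, and a fixed close."""
    moves = ["start with shared values", "establish credibility", "present the problem"]
    if audience.get("skeptical"):   # decision point: skeptics need proof
        moves.append("provide evidence")
    if audience.get("emotional"):   # decision point: feeling listeners need narrative
        moves.append("tell a story")
    moves += ["build to the solution", "end with a call to action"]
    return moves

# A skeptical but unemotional audience gets the evidence-driven route.
print(persuade({"skeptical": True, "emotional": False}))
```

The colored pens and sticky notes were doing exactly this: encoding moves, conditions, and an order of operations.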

The Secret Life of Writing Patterns

Here is something I have learned from spending six years teaching writing: we have always been algorithmic thinkers; we just called it something else. The five-paragraph essay? That’s an algorithm. The hero’s journey? Algorithm. The way I structure this blog post—hook, development, conclusion—algorithm. But here is where it gets interesting. Traditional writing algorithms were human-centered. They assumed a human writer making conscious choices, weighing options, feeling their way through uncertainty. The writer was always in control, even when following a formula.

Computer algorithms changed everything. They removed the human from the loop, or at least tried to. Instead of “Here is a pattern you might follow,” they said, “Here is what you will do, step by step, no deviation allowed.” I remember the first time I used a grammar checker that went beyond simple spell-check. It was the early 2000s, and Microsoft Word started suggesting not just corrections, but improvements. “Consider revising this sentence for clarity,” it would suggest, and I found myself arguing with my computer. “No, I meant it that way!” I would mutter, clicking ‘ignore’ with perhaps more force than necessary.

The Great Pattern Recognition Revolution

Fast forward to today, and I am having conversations with AI that can write in my style, analyze my arguments, and even finish my thoughts in ways that surprise me. Last month, I asked ChatGPT to help me brainstorm ideas for a difficult section of an article I was writing. It did not just give me a list of bullet points—it engaged with my thinking, built on my ideas, and pushed back when my logic was shaky. That’s when I realized something profound had happened. We had moved from algorithms that followed predetermined patterns to algorithms that could recognize, adapt, and create new patterns. It’s the difference between a player piano that can only play the songs on its rolls and a jazz musician who can improvise in response to the moment. This shift is revolutionizing writing studies in ways I am still trying to understand. My students now routinely use AI to generate first drafts, brainstorm ideas, and even simulate audience responses to their arguments. They are not cheating (well, not most of them); they are thinking algorithmically about the writing process in ways that would have been impossible just five years ago.

What Looms Taught Us About Teaching

Jacquard loom punched cards (AI-generated image built with WordPress’s embedded image-generation feature)

The connection between weaving and computing is not just historical—it’s pedagogical. When I watch a master weaver work, I see the same kind of thinking that makes for effective writing instruction. They understand both the pattern and the variations, the rules and when to break them. Good weavers do not just follow patterns blindly. They understand why certain combinations of threads create strength, how tension affects texture, when a deliberate ‘mistake’ can create unexpected beauty. They are pattern thinkers who can work both systematically and creatively. This is exactly what I try to teach my writing students, and it’s what I think AI is teaching us about rhetoric more broadly. Effective communication is not just about following templates—it’s about understanding the underlying patterns of human connection and knowing how to adapt them to new situations.

The Algorithm That Changed My Mind

I used to be skeptical of algorithmic approaches to writing. They seemed too mechanical, too removed from the messy, human process of figuring out what you want to say and how to say it. Then I started experimenting with AI writing tools, not as a replacement for my own thinking, but as a thinking partner. I discovered that the best AI tools do not eliminate the human element—they amplify it. They help me see patterns in my own thinking that I might have missed. They suggest connections I had not considered. They push back when my arguments are weak or unclear. It’s like having a conversation with a very well-read friend who never gets tired, never judges your rough ideas, and always has time to help you think through a problem. The algorithm does not write for me; it writes with me.

Lessons from the Loom for the Age of AI

So what can writing studies and rhetoric learn from the invention of computer algorithms? I think there are three big lessons that are especially relevant as we navigate the AI revolution. First, patterns are powerful, but they are not everything. Both weaving and programming teach us that following a pattern is just the beginning. The real art comes in knowing when and how to deviate from the pattern to create something new. The best writers have always been pattern breakers who understand the rules well enough to know when to break them. Second, tools shape thinking, but thinking shapes tools. The Jacquard loom influenced how people thought about automated processes, which influenced early computer design, which influences how we think about writing today. But at each step, human creativity and intention shaped how those tools were used. We are not passive recipients of algorithmic influence—we are active participants in determining what that influence looks like. Third, collaboration between human and machine intelligence might be more powerful than either alone. Ada Lovelace did not see the Analytical Engine as a replacement for human creativity—she saw it as an amplifier. Today’s best AI writing tools follow the same principle. They do not replace human judgment; they enhance it.

Looking Forward and Backward

I keep thinking about my niece and her college essay struggles. By the time she graduates, AI will probably be able to write application essays that are more technically proficient than anything she could produce on her own. But I do not think that makes her struggle meaningless. Learning to write is not just about producing text—it’s about learning to think, to organize ideas, to consider audience, to make choices about tone and structure and emphasis. These are fundamentally human activities, even when we use algorithmic tools to support them. The weaving loom did not make beautiful textiles obsolete—it made them more accessible and opened up new possibilities for creativity. The printing press did not eliminate good writing—it created more opportunities for good writers to reach audiences. I suspect AI will follow the same pattern.

The Thread That Holds It All Together

As I finish writing this (with the help of several AI tools for research, editing suggestions, and fact-checking), I keep coming back to something Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” She was talking about the possibility that machines could work with language, music, and art—not just numbers. She was imagining a world where algorithms could be creative pattern-makers, not just calculators. I think she would be fascinated by today’s AI revolution, but not surprised. She understood something that we are still learning: the most powerful algorithms are not the ones that replace human creativity, but the ones that enhance it, challenge it, and help us see new patterns in the endless complexity of human communication.

AI-powered futuristic city (image generated by AI built into WordPress’s blogging system)

The thread that connects the weaving loom to today’s language models is not just technological—it’s deeply human. It’s our persistent desire to find better ways to create meaning, to share ideas, and to connect with each other across the spaces that separate us. In the end, that’s what both weaving and writing have always been about: taking individual threads—whether of cotton or thought—and creating something stronger, more beautiful, and more meaningful than the sum of its parts. The algorithm just helps us see the pattern more clearly.

When AI Became More Human Than Me (And I Turned Into a Toaster)

The robot artist “Ai-Da” stands in front of one of her self-portraits during the opening of her new exhibition at the Design Museum in London on May 18. (Image credit: Tim P. Whitby/Getty Images)

Hi there. I am a human. At least I think I am. Some days I wonder. The other day, my AI assistant asked me if I needed help drafting my own diary entry. Let that sink in. Not a business report. Not a class syllabus. Not even an email. My diary. The thing where I am supposed to cry, confess, and spiral into a poetic puddle of feelings. And it said, “Would you like that in MLA or APA format?” I laughed, but not too loud—because honestly, I was not sure if I was still writing like a human or just copy-pasting like a bot. Let me tell you what is going on.

Act 1: The Curious Case of Becoming a Chatbot

I used to write essays with metaphors, odd jokes, and things like “the moon wept over the sidewalk.” Now, I ask ChatGPT for a more optimized version of that sentence. Optimized? What am I, a software update? This is what happens when you spend your life surrounded by tools that finish your thoughts before you even have them.

Need a conclusion? AI’s got it.

Need a thesis? Already drafted.

Need a 12-slide PowerPoint on the rhetorical devices in Taylor Swift’s discography? Done in six seconds flat.

I used to brainstorm with coffee and a chaotic mind. Now I brainstorm with…an algorithm that politely tells me, “Here are three options you might like.” Like it’s a menu. For my imagination.

Am I outsourcing my creativity? Let me be honest: yes. Yes, I am. But here is the plot twist—it’s not just me. All of us are doing it. Professors, poets, students, even that one guy who insists on writing with a typewriter in Starbucks. AI is not just helping us write—it’s starting to write better than us. And that’s both amazing and, well, slightly terrifying.

Act 2: AI Is Getting Deep. Like, Philosophy-Major Deep.

So I ask my chatbot, “Can you help me write a paragraph about the rhetorical ethos of Taylor Swift?” And it replies: “Certainly. Swift’s ethos emerges from her personal narrative, one of transformation, resilience, and authenticity—an archetype embedded in American cultural mythos.” Hold up.

That’s not just a sentence. That’s a thesis with ten years of cultural studies baked into it. Did it just out-rhetoric me? Meanwhile, I am sitting here eating Pop-Tarts, trying to remember how to spell “ethos.” The weird thing is that AI has become the very thing we used to pride ourselves on being: metacognitive. Self-aware. Reflective. Sometimes even poetic. It’s like AI read all of our textbooks on composition and said, “Cool, I got this.”

And guess what we have become?

Clickers.

Scrollers.

Auto-finishers.

People who read two lines of a five-paragraph article and go, “Yeah, I get the gist.” We used to compose ideas from scratch. Now we compose from suggestions. Writing is no longer a messy, glorious battle—it is a polite, autocomplete conversation.

Act 3: The Death of the Draft?

In the good old days (and I sound like a grandma here), writing meant revision. We wrote. We cried. We rewrote. We screamed into a pillow. We rewrote again. It was vulnerable and beautiful and chaotic.

But now?

Now I type something, hit “Enhance with AI,” and get a grammatically perfect, tonally polite, LinkedIn-approved version in three seconds.

What happened to the messy draft?

What happened to the margins full of doodles?

What happened to the emotional spiral over a single sentence?

Gone.

Gone like Blockbuster and floppy disks.

Act 4: AI Is the Cool Kid in Composition Class

Let’s not pretend: in writing studies, we once rolled our eyes at spellcheck. “It’s not real editing,” we would say. Now AI is suggesting counterarguments, structuring rhetorical appeals, citing sources, and even giving feedback on tone.

I mean, we used to teach students how to identify logos, pathos, and ethos. Now AI’s like, “Your pathos is too weak here. Want to strengthen it with an anecdote about a cat?”

Excuse me. You are not just helping me write—you are teaching me how to feel.

And here is the kicker: sometimes AI writes more like me than I do. Once, my student asked AI to imitate my writing voice. The result? A piece that started with, “Let’s be real—writing is just thinking out loud in sweatpants.”

That is exactly what I would say. How dare you, chatbot.

Act 5: Humans Are Becoming Predictable. AI? Surprisingly Weird.

Now here is the ironic twist. While AI is learning to be creative, weird, and emotional—humans are becoming predictable, efficient, and robotic. We follow productivity hacks. We use apps to remind us to breathe. We wear watches that tell us when to stand. We write emails like: “Kindly following up on this actionable item before EOD.”

We are not writing like humans anymore—we are writing like calendars.

Meanwhile, AI says things like:

“Hope is a grammar we write when syntax fails.”

“Writing is a ritual of remembering who we were before the silence.”

AI is having an existential crisis while I am checking if my Slack status is set to “in focus mode.”

Act 6: What We Lose When We Stop Struggling

Here is the thing. Writing is supposed to be hard. Not because we are masochistic (well, maybe just a little), but because the struggle makes the thought deeper. When I wrestle with a sentence for twenty minutes, I am not just crafting words—I am figuring out what I actually mean. That’s what rhetoric is, right? It is not just expression—it’s negotiation. It’s choosing the right word, the best frame, the most ethical move. It’s soul work. But now, I just ask, “Can you rephrase this professionally?” Boom. Done. No wrestling. No soul. So, what are we teaching students? That writing is just selecting from a menu? Or that writing is the beautiful, messy act of figuring out what you think while you write? Because AI can do the former. But only we, the squishy-feelings-having humans, can still do the latter—if we choose to.

Act 7: Can AI Write a Love Letter?

Here is the litmus test. Could AI write a real love letter?

Sure, it can draft a pretty one. It will get the metaphors right. It will say things like “Your laughter is a lighthouse.” But will it accidentally confess something it did not mean to? Will it embarrass itself? Will it be vulnerable in that messy, “Oh no I sent that too soon” way?

Probably not. Because real writing, human writing, is not just accurate—it is awkward. It’s brave. It’s full of heartbeats. AI does not get sweaty hands before pressing “send.” We do. And that matters.

Act 8: Dear AI, Let’s Talk

So, here is my open letter to AI:

Dear AI,

I think you are brilliant. Truly. You have helped me grade faster, write smarter, and even find metaphors I did not know I needed. But please, do not steal my voice. Do not take away my struggle. Do not replace my awkwardness with elegance. Let me be the messy writer I was born to be. Let me cry over drafts and write terrible first paragraphs. Let me misspell “rhetorical” once in a while. Let me sound like me. Because if I stop being human in the name of efficiency, then what’s left?

Yours (awkwardly and un-optimized),

Shiva.

Final Act: What Now?

We are living in the middle of the weirdest writing revolution in history. AI is not just a tool—it’s a co-writer, a critic, and sometimes, disturbingly, a better version of ourselves.

But we still have something it doesn’t.

We have intentionality.

We have embodiment.

We have error. Beautiful, chaotic, necessary error.

So the next time you write, I challenge you: do not start with AI. Start with your hand. Your voice. Your thoughts.

Write a terrible draft. Cry a little. Laugh at your own joke. And then, maybe, ask AI for help.

But only after you have been human first.