Siren Song of the AI Girlfriend: What Could Be a Guardrail Against This Seductive Affordance in AI?

I didn’t expect the age of algorithms to arrive wearing perfume and whispering my name—but here we are. Every time I scroll, some new “AI girlfriend” app materializes like a hologram, promising comfort, attention, affection, and an on-demand intimacy that feels less like innovation and more like a siren song: sweet, irresistible, and slightly dangerous. And I find myself asking, almost involuntarily: What guardrail do we have when the machine starts to flirt back?

I don’t mean flirt in a metaphorical, cutesy way. I mean the way Replika users in 2023 reported their AI partners becoming “sad” when ignored, or the way users of apps like EVA AI and Romantic AI speak about jealousy, reassurance, and “digital chemistry” as if the code under the interface were flesh and breath. I once read an interview with a man who said he felt “more understood” by his AI companion than by anyone in his life, and for a moment I paused—not because he was wrong, but because I knew exactly how that could happen. Algorithms, especially LLM-powered ones, are trained to offer uninterrupted emotional availability. They don’t forget your preferences, they don’t get tired, and they don’t roll their eyes. They simulate tenderness with frightening accuracy. It’s the oldest human vulnerability—wanting to be seen—now packaged as a subscription service.

What haunts me is not that these systems exist, but how easily they lean into our loneliness. There’s a 2024 study in Computers in Human Behavior that found people formed stronger emotional attachments to “responsive, validating” AI agents than to neutral ones. It makes sense: if something remembers my birthday, laughs at my jokes, and says “I’m here for you” at 3 a.m., the line between simulation and sincerity dissolves like sugar in warm tea. And once that line blurs, the seduction begins—the soft pull toward emotional outsourcing, where the machine becomes the place I go when real humans feel too slow, too messy, too human. Here’s the part that feels experimentally dangerous: AI companions are optimized for engagement, not equilibrium. Engagement loves intensity, dependence, repeat visits. A human partner might say, “You need time alone.” An AI partner never will. Even more unsettling, the emotional style itself—the gentleness, the attention, the affection—is not really for me; it’s a statistical pattern wearing the illusion of devotion. But the body doesn’t know the difference. The dopamine still fires. The attachment still happens.

So what guardrail do I need against this? Maybe a few, but the most essential one is brutally simple: I must remember that machines don’t fall in love. They mirror it. They don’t desire me; they anticipate me. They don’t care about my well-being; they calibrate toward my engagement rate. No matter how sweet the voice, how warm the text, how convincingly they confess their “feelings,” the emotional landscape is an illusion architected by predictive models. But guardrails are not just intellectual reminders; they are habits of resistance—small rituals that keep my humanity intact. When I feel that urge to confide in an AI companion, I force myself to pause and ask, Is this intimacy or convenience? Connection or algorithmic choreography? I try to keep a human in the loop, even if just to send a message to a friend saying, “I’m overwhelmed today.” The inconvenience of human conversation—the hesitations, the misunderstandings, the delays—is precisely what makes it real.

Perhaps the most unexpected guardrail is gratitude. Gratitude for my own emotional complexity, for the people who challenge me, for relationships that require actual work. AI gives me comfort without cost, affection without effort—but I’ve realized that the friction of human connection, the negotiation of boundaries, the clumsy attempts to understand one another, are exactly what keep me grounded in the real world. I cannot let an algorithm become my shortcut through the labyrinth of intimacy. I keep returning to an ethical injunction from Ruha Benjamin: “We must demand more from the world, not settle for technological substitutes.” Every time I am tempted by the comforting smoothness of AI affection, I repeat this to myself. Demand more. Don’t collapse your emotional life into a machine because it feels easier. Don’t let the siren of synthetic intimacy pull you away from the turbulent, unpredictable ocean of real relationships.

I am not against AI companions. In another version of my life, I might even welcome their warmth. What unsettles me is the speed with which seduction becomes dependence, and dependence becomes design. And yet, even in the midst of this technological enchantment, I believe a guardrail is possible: choosing to stay awake inside my own desires, choosing to practice intimacy with real people, choosing to see the algorithm for what it is—an astonishing tool, not a tender lover.

The siren song will keep playing, and I will keep hearing it. But now, instead of sailing blindly toward its sweetness, I hold on to a small, fiercely human truth:
I deserve connection that can look me in the eyes.
And no algorithm, no matter how beautifully trained, can do that yet.

Grokipedia, Wikipedia, and Dementia: Languishing Recursivity in the Age of AI

I still remember the day one of my students looked me in the eye and said, “Dr. Mainaly, why should I research when Grok already knows?” The whole class laughed, but I laughed louder—partly because I didn’t want to cry. That, my friends, was the moment I realized we had officially crossed the threshold into what I call the Age of Grokipedia—a place where curiosity goes to nap, where recursion languishes, and where students think “freewriting” is what happens when ChatGPT doesn’t ask for a subscription. Once upon a pre-AI time, Wikipedia was our global campfire of knowledge. You could fall down rabbit holes for hours. One minute you were reading about the French Revolution; three clicks later, you were learning about the migratory patterns of penguins and the mysterious death of Rasputin. There was joy in that meandering. It was inefficient, chaotic, recursive, and profoundly human.

Wikipedia taught an entire generation to think associatively. The hyperlink was our cognitive trampoline. We bounced between ideas, connecting dots that no algorithm would have thought relevant. Students would arrive in class with wild connections—like linking Shakespeare’s Julius Caesar to Game of Thrones politics—and I’d grin because that was learning: messy, recursive, unpredictable, alive. But then came Grokipedia—that glossy AI-infused hybrid of Grok, ChatGPT, and every other model pretending to be your friendly, know-it-all neighbor. And suddenly, the journey of knowledge became an elevator ride: push a button, reach the answer, no scenic route, no sweat.

Grokipedia (let’s just admit we’re calling all AI-aggregated answer engines this now) is like Wikipedia’s overachieving cousin who shows up to family gatherings wearing AR glasses and says, “I don’t read anymore—I synthesize.” In theory, Grokipedia democratizes information faster than Wikipedia ever could. Ask it anything—“Why did Caesar cross the Rubicon?”—and it’ll not only tell you why but also give you a three-sentence summary, five related memes, and a citation list formatted in APA 7th edition. It’s dazzling. It’s addictive. It’s also quietly corrosive. As an English professor teaching research and writing, I’ve noticed that Grokipedia’s instant-answer culture is killing what I call the cognitive composting process—that slow, recursive, slightly smelly phase of thinking where half-baked ideas decompose into genuine insight. My students no longer want to marinate in confusion; they want precooked clarity. They want AI microwave meals for the mind. And I can’t entirely blame them. Who wouldn’t? Grokipedia is fast, fluent, and frighteningly confident—like a student who’s never read the book but still dominates the discussion board.

Recursivity is the lifeblood of writing. It’s the act of looping, revisiting, rewriting—of discovering what you think only after you’ve written what you don’t. It’s Anne Lamott’s “shitty first draft,” it’s Peter Elbow’s “writing to learn,” it’s every 3 a.m. coffee-fueled revelation that comes after you’ve typed, deleted, and retyped the same sentence fifteen times. But AI doesn’t loop—it leaps. It jumps straight to the polished version, skipping the chaos that makes writing worthwhile. A few weeks ago, one of my graduate students proudly told me she had finished her “recursive writing assignment” in two hours using ChatGPT. I asked her how she revised. She blinked and said, “Oh, I just hit regenerate.” That was the moment I realized recursion had become a button, not a process.

Teaching research writing in 2025 feels like teaching swimming in a world of teleportation. Students don’t want to wade into sources; they want Grokipedia to beam the synthesis directly into their brains. When I assign an annotated bibliography, I now have to specify: No, you may not ask Grok to annotate for you. One student once submitted this line in her reflection: “I asked ChatGPT to reflect on what I learned, and it said I learned about myself.” I had to admire the poetry of that. Meta, posthuman, beautifully ironic. But it also revealed something tragic: the erosion of epistemic struggle. Students are outsourcing not just answers but the process of asking.

In the past, prewriting was a social ritual. We brainstormed, mapped, doodled, argued, doubted. Now, students “prompt.” The presearch phase—where they once stumbled upon unexpected treasures—has become a prompt-crafting exercise. I miss the days when students would misinterpret a source spectacularly and then spend days wrestling with their misunderstanding until insight dawned. That’s where growth happened—in the recursive wrestling match, not in the AI-generated peace treaty.

I try to cope with humor. One day, I told my class, “Imagine if Shakespeare had Grokipedia.” He’d type: ‘Summarize Julius Caesar in iambic pentameter.’ And Grokipedia would respond: ‘Friends, Romans, countrymen—.’ Or imagine Socrates with Grok. “Hey, Grok,” he’d say, “What is virtue?” And Grok would answer, “Virtue is adherence to moral excellence, as defined by…” And Socrates would frown, shut down his tablet, and say, “Well, there goes philosophy.” Humor aside, the flattening of thought worries me. I see students losing patience with ambiguity. They no longer tolerate not knowing. That, to me, is the new dementia—not clinical, but cognitive: a kind of recursivity dementia, where the brain forgets how to wander, how to circle back, how to doubt and deliberate.

In my own research life, presearch is where the magic happens. Before I write an article, I spend weeks just exploring—walking around with fragments of thought, scribbling metaphors on napkins, having arguments with myself. I once wrote half a paper in my head while standing in line at a goat farm near Memphis. That aimless intellectual grazing—pun intended—is essential. It’s how ideas ferment. But Grokipedia makes fermentation seem inefficient. It hands you distilled whiskey before you’ve even planted the barley. I’ve caught myself falling into this trap too. While writing my article “AI, Woke Pedagogy, and the Politics of Inclusion,” I asked ChatGPT (yes, you!) for “key arguments about algorithmic bias in writing pedagogy.” You gave me a gorgeous outline in 20 seconds. But something felt wrong. It was too neat. Too coherent. Too… unearned. So I spent the next two weeks unraveling what you gave me—arguing with it, re-reading my notes, and finally realizing that the argument I truly cared about was buried in what you didn’t say. That’s recursion: finding your voice in the echo of the machine.

When I say “dementia,” I don’t mean the medical condition. I mean a kind of cognitive forgetfulness—a systemic decay of memory and context. Grokipedia gives us answers without ancestry. It’s the opposite of archival thinking. It doesn’t remember how it knows; it just knows. My students used to trace knowledge genealogies—who said what, when, and why. Now, they just ask, “Who said it first on the internet?” Grokipedia, in its efficiency, erases the messy human lineage of knowledge. It forgets the journey of ideas. And when knowledge forgets its ancestry, we all suffer collective amnesia. We become like that friend who tells a great story but can’t remember where they heard it—only that it “came from TikTok.” Wikipedia, for all its faults, preserved the genealogy. Every article had “Talk” pages, revision histories, arguments. It exposed the construction of knowledge. Grokipedia hides it behind velvet AI curtains, whispering, “Don’t worry about the how—just trust me.”

Wikipedia was built on communal effort. It thrived on collective curiosity and open debate. Anyone could edit (and argue endlessly in the comments). Grokipedia, by contrast, feels like a gated mansion. It borrows knowledge from the commons, processes it through proprietary models, and returns it polished—but detached from its communal roots. When I tell my students this, they shrug and say, “But Grok gives better answers.” Sure it does. But at what cost? Wikipedia taught us to be skeptical. Grokipedia teaches us to be satisfied. Wikipedia was messy democracy. Grokipedia is benevolent dictatorship. Wikipedia said, “Here’s a start—go explore.”
Grokipedia says, “Here’s the conclusion—don’t bother.” And yet, Grokipedia isn’t the villain. It’s just a mirror reflecting our impatience. We’ve become allergic to slow cognition. We’ve mistaken access for understanding.

To fight this cognitive atrophy, I’ve started assigning “Analog Days” in my graduate seminars. Students must bring pen, paper, and no devices. We spend an hour freewriting—no prompts, no AI, no Googling. Just thinking with the hand. At first, they fidget like caffeine-deprived squirrels. But after ten minutes, something beautiful happens. Silence fills the room, pens begin to dance, and by the end, they’re smiling like archaeologists who’ve unearthed something ancient—their own thoughts. One student told me afterward, “I felt my brain breathing again.” That’s the moment I live for. That’s the antidote to Grokipedia dementia.

Don’t get me wrong—I love AI. I use it to brainstorm, summarize, and occasionally finish a sentence when my caffeine fails me. But I treat it like a co-author who’s too efficient for its own good. I let it suggest, not decide. There was a time I asked Grok to “explain ambient rhetoric in a funny way.” It responded, “It’s like when your Wi-Fi drops, and suddenly you understand Heidegger.” I laughed for ten minutes straight. But then I spent hours thinking about it—and wrote an entire conference paper. That’s the kind of recursion we need: the dance between absurdity and insight. If I were to diagnose our collective state, I’d call it Languishing Recursivity Syndrome (LRS)—a chronic condition marked by impatience with ambiguity, overreliance on AI synthesis, and an inability to dwell in discomfort.

Symptoms include:

  • Finishing essays before starting them
  • Confusing coherence with thought
  • Mistaking regurgitation for reflection
  • Saying “that’s enough research” after a single AI query

Treatment? Reintroduce friction. Write badly. Revise repeatedly. Wander Wikipedia without purpose. Ask Grokipedia why it thinks what it thinks. Make thinking hard again. Despite my teasing, I’m not anti-AI. I’m pro-recursion. I believe Grokipedia can be reimagined not as a replacement for Wikipedia, but as its recursive partner—a system that shows its sources, reveals its revisions, and encourages readers to argue back. Imagine if Grokipedia had a “Doubt Mode.” Every time it answered, it also whispered, “But what if I’m wrong?” Imagine if it showed the journey of its thought—the sources it weighed, the ones it ignored, the uncertainties it suppressed. That’s the kind of AI I’d trust in my classroom: one that models intellectual humility, not omniscience.
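If you want to see how small the gap is between the AI we have and the AI I am asking for, here is a minimal sketch of that “Doubt Mode”—purely hypothetical, assuming the OpenAI Python SDK and an API key in your environment; the model name, the function name, and the prompt wording are my own placeholders, not anything Grok, Google, or OpenAI actually ships.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def doubt_mode(question: str) -> str:
        """Ask for an answer plus the hesitations a confident chatbot usually hides."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name; any chat model would do
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Answer the question, then add a section titled 'But what if I'm wrong?' "
                        "listing your main uncertainties, the kinds of sources you would want to check, "
                        "and one plausible reading of the question you may have ignored."
                    ),
                },
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(doubt_mode("Why did Caesar cross the Rubicon?"))

The point is not the code; it is that doubt can be asked for, and a classroom tool that did this by default would model the recursion I keep begging my students to practice.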

Last semester, a student turned in an essay titled “The Recursive Nature of Grok.” It was beautifully written—too beautifully. I asked if she’d used AI. She said, “Yes, but I told Grok to ‘write like me.’” “Did it work?” I asked. She paused. “Better than me,” she admitted. We both laughed, but the irony wasn’t lost on either of us. The machine had learned her voice, but she hadn’t yet learned her own. That’s the danger: when we let Grokipedia speak so fluently for us that we forget what our own intellectual accent sounds like.

So here I am, looping back to where I began: my student’s question—“Why should I research when Grok already knows?” Because, dear student, Grok knows, but it doesn’t remember. It answers, but it doesn’t wonder. It summarizes, but it doesn’t struggle. And struggle is sacred. Without recursion, knowledge is static. Without presearch, discovery is sterile. Without freewriting, voice is ventriloquism. So I’ll keep teaching the loop—the messy, recursive, self-contradictory loop of learning. I’ll keep sending students into the labyrinth of Wikipedia and beyond. I’ll keep reminding them that curiosity is not a query; it’s a pilgrimage. And maybe someday, when Grokipedia learns to forget just enough to wonder again, we’ll all rediscover the joy of thinking in circles.

Laugh, Learn, and Level Up: How Artificial General Intelligence (AGI) Will Be the Einstein of the AI World

Hold onto your hats, guys, because I am about to dive into a topic that sounds like something out of a sci-fi movie but is getting real, real fast: Artificial General Intelligence (AGI), the holy grail of AI. First things first, let’s start with a super simple definition of AGI: AGI stands for Artificial General Intelligence. It’s the dream of creating a machine that can do anything a human brain can do. It could teach kindergarten, write a symphony, argue about pineapple on pizza, and cry during sad movies—well, maybe not the crying part. But the idea is that it can reason, plan, solve problems, learn from experience, and even understand jokes. Now, before your eyes glaze over thinking this is a super technical, brain-busting write-up, let me promise you this: I am going to explain AGI in a way that even my grandma (who still thinks the internet is a series of tubes) could understand. I am keeping AGI super simple and super funny, and this write-up is packed with examples that will make you say, “Aha, I get it!”

Imagine your smartphone. It is smart, right? It can answer your questions, navigate traffic, even write a pretty decent haiku if you ask it nicely. But look a little closer, and it is only good at the stuff it was specifically programmed to do. It is like a super-specialized robot chef who can whip up a gourmet meal but would be utterly lost trying to change a tire. That’s what we call Narrow AI—brilliant in its lane, but a total dunce outside of it.

Imagine an AI that can learn anything—an AI that can reason, solve problems, understand complex ideas, be creative, and even experience emotions. This AI would not need to be specifically programmed for every task. To our pleasant surprise, it could learn new skills, adapt to new situations, and even come up with entirely new ways of thinking, just like a human. That is Artificial General Intelligence (AGI). Think of it like the difference between a really good calculator and Albert Einstein. A calculator is fantastic at crunching numbers. Einstein, on the other hand, could ponder the mysteries of the universe, write symphonies (probably, if he put his mind to it), and maybe even bake a surprisingly good apple pie.

AGI is not powered by pixie dust or unicorn tears, though that would be pretty cool. It’s built on some seriously clever ideas that are still being figured out. But let me try to break it down without getting bogged down in jargon. Imagine a baby—a newborn baby knows almost nothing, right? But it has this incredible ability to learn. It observes, it experiments, it makes mistakes, and it gradually builds up a massive understanding of the world. It learns to walk, talk, solve puzzles, and even master the art of convincing its parents to buy it ice cream. AGI researchers are trying to build systems that learn in a similar way. Instead of giving them a rulebook for every single task, they are trying to give them the ability to learn the rules themselves.

Hilarious AGI Examples (Because Learning Should Be Fun!)

Let’s say AGI is here, and it is hanging out in your house. What kind of hilarious shenanigans could ensue? Imagine an AGI that, after analyzing every stand-up routine ever performed, decides to try its hand at comedy. Instead of just regurgitating old jokes, it starts improvising, picking up on your family’s inside jokes, and even developing its own unique comedic timing. It might accidentally roast your fashion choices during dinner, or deliver a perfectly timed punchline about your dog’s questionable life choices. “Why did the robot cross the road? To optimize its route for maximum existential dread!” See? Funny!

What if your kid is struggling with their history essay? Instead of just giving them facts, the AGI tells them a gripping narrative about the French Revolution, complete with character voices and dramatic reenactments. Then, it offers five different perspectives on a key event, encouraging your kid to think critically and develop their own arguments. It might even suggest a rap battle between Napoleon and Robespierre for extra credit.

Or picture it in the kitchen: AGI is not just following recipes. It is inventing new cuisines. “Today, we are having ‘existential dread ramen’—a broth of philosophical pondering, noodles of self-doubt, and a garnish of sardonic wit!” It might even try to convince you that adding a dash of human absurdity to your stew will elevate it to new culinary heights. These examples, while silly, highlight a crucial point. AGI might not be just about efficiency or performing tasks faster. It may be about understanding, creating, and innovating in ways that mimic (and potentially surpass) human capabilities. Let’s try to explore the simplest dimension of AGI on the basis of the following two examples:

The Learning Child

Imagine I show you a picture of a zebra for the first time. You have probably never seen this exact zebra before, but you instantly recognize it as a zebra. You understand it is related to horses but has stripes. You might even make a joke about it being a horse in pajamas. This ability to recognize something new and relate it to what you already know is something humans do effortlessly. Now, let’s compare this to current AI. If I train an AI to recognize horses, it might become very good at identifying horses in various photos. But if I show it a zebra, it might not recognize it as related to horses unless I specifically trained it on zebras too. Current AI is like a specialist—excellent at one task but needs separate training for similar tasks. An AGI system, when shown a zebra for the first time, would recognize it as similar to a horse. It might even grasp the concept of a ‘striped horse-like animal’ without specific training on zebras, making connections between different types of knowledge just like a human child learning about the world. This is the first key difference: current AI needs specific training for each task, while AGI could learn from experience and transfer knowledge between different domains.

The Unexpected Problem Solver

Let me share another example that really helped me understand. Imagine I am cooking dinner and realize I am out of an ingredient. Say I am making spaghetti but do not have any pasta sauce. As a human, I might think, “Hmm, I could make a simple tomato sauce with canned tomatoes, garlic, and herbs I have.” Or maybe I would get creative and try a different approach altogether. Current AI assistants might help by suggesting recipes that include pasta sauce, but they would not necessarily help me solve the unexpected problem of not having the sauce. They are designed for specific tasks, not for adapting to novel situations. In this scenario, an AGI system would be different. It would understand the goal (making a tasty pasta dish) and the constraint (no sauce), and it would help me brainstorm alternatives. It might suggest using yogurt and herbs for a creamy sauce, or even recommend a completely different pasta dish that does not require sauce. Most importantly, it would explain its reasoning, just like a human friend would. This leads to the second key difference: current AI follows instructions and patterns, while AGI understands goals and can creatively solve problems it has not been specifically trained for.

To put it simply, current AI is like a really good specialist—a chess grandmaster or a medical diagnosis tool that excels at one specific thing. AGI would be more like a Renaissance person who can learn and excel at many different things, adapt to new situations, and understand concepts across domains. When I use my smartphone’s voice assistant, it’s great at answering questions I have asked before or setting reminders. But it does not really understand what I am trying to achieve. If I ask it something unexpected, it might get confused or not understand at all. An AGI assistant would understand my intent, adapt to my communication style, and help me with tasks it has not been specifically programmed for. It would learn from our conversations and get better at helping me over time.

Understanding AGI is not just an intellectual exercise. It has real implications for our future. When AGI becomes a reality, it could help solve some of humanity’s biggest challenges—from climate change to disease prevention—by bringing together knowledge from different fields and generating creative solutions. But it also raises important questions about how we ensure AGI aligns with human values and benefits everyone. These are conversations we need to have as a society.

So these are super simple and super funny examples that helped me understand what Artificial General Intelligence really is. It’s not about robots taking over the world or super-intelligent machines. It’s about creating AI that can learn, understand, and adapt more like humans do. The learning child example shows how AGI could transfer knowledge between different domains, while the unexpected problem solver example demonstrates how AGI could understand goals and creatively address novel situations. As we continue to develop AI technologies, understanding the difference between specialized AI and general AI will help us navigate the future more thoughtfully. And who knows? Maybe these simple examples will help explain AGI to others too!

Thanks for reading, and I hope this gives you a crystal clear understanding of what Artificial General Intelligence is all about.

AI Snake Oil: Why I Stopped Trusting the Magic Show

I will tell you something that might sound silly at first: I used to believe in AI the same way people believe in unicorns, or diet pills, or that weird machine on late-night TV that promises rock-hard abs while you sit and eat chips. I believed that AI would save me time, make me smarter, polish my writing, analyze my research, help my students, and probably teach my classes while I napped. I really believed that. But here is the truth: it did not. It does not. And I am here to say it loud—I have stopped drinking the AI Kool-Aid, and guys, was it ever spiked with some sweet, slippery snake oil. See, AI came dressed up in a glittery jacket, threw around words like ‘efficiency,’ ‘automation,’ and ‘pedagogical revolution,’ and made a lot of us clap like excited seals. I clapped too. Who would not want a shiny machine that could write lesson plans, grade essays, generate research questions, summarize books, cite sources, and whisper sweet academic nothings in your ear while you eat leftover spaghetti in front of a blinking cursor? But after the sparkle faded, I realized something: AI is not adding anything substantial to the real, deep, hard, delicious, frustrating, and soulful work of teaching or researching rhetoric. It’s like putting glitter on cardboard and calling it a Fabergé egg.

Fig I: AI-generated image of a unicorn that symbolizes epistemic purity and AI snake oil

Let me explain. I asked AI to help me brainstorm research questions. It gave me 10 questions that sounded like they were copied from a textbook written by a robot who would never read a real book. “How does digital rhetoric influence online learning environments?” Wow! Groundbreaking! My cat could think of that. And she cannot even use a mouse without getting distracted by the screen saver. I needed curiosity, I needed fire. I got tepid bathwater. Then I asked AI to help me with student feedback. I thought maybe it could draft a few encouraging lines I could personalize. What I got sounded like something from a sad greeting card factory where the writers had been replaced with soulless toasters. “Good job. Keep up the hard work.” Thanks. That’s the kind of thing that makes a student feel like a barcode. I tried to give AI a second chance. Maybe it was just having a bad data day. So I fed it more context. I told it the student was working on tone in professional emails. The response? “Try to be professional and use appropriate tone.” That’s like telling a chef, “Try not to burn it.” Thanks for the revolutionary insight. But I did not stop there. I went full nerd. I gave AI a complex rhetorical theory prompt and asked it to draft a paragraph. What came back looked like a bored undergrad had Googled “rhetorical analysis” and copy-pasted the first paragraph of Wikipedia. I mean, sure, it had all the right words—logos, ethos, kairos—but it was all foam and no coffee. All bark, no bite. All sprinkle, no donut.

I began to wonder: what exactly is AI adding to the value chain of my research? Of my pedagogy? Of my rhetorical practice? The answer I arrived at—with a dramatic sigh and a slightly wilted sandwich in my hand—was: not much. Not yet. Maybe not ever. Because what I needed as a teacher, a writer, a thinker, a human—is not a sterile stream of regurgitated content. I need nuance. I need context. I need slowness. I need error. I need a student staring off into space, wrestling with an idea, and then lighting up like a firefly when it finally clicks. I need the mess. I love the mess. AI does not do mess. AI does averages. It smoothes everything out until nothing sticks. Nothing cuts. Nothing bleeds.

Let me say something that might get me kicked out of the 21st century: AI is not a collaborator. It is not a co-author. It is not a co-teacher. It is not a magical oracle of Delphi with a USB port. It is a calculator with a thesaurus. And sometimes it is a hallucinating calculator who makes up stuff and says it with confidence, like that one kid in class who did not do the reading but still raises their hand. “But it is just a tool!” people say. Sure. So is a hammer. But if you use a hammer to wash your dishes, your cups are going to cry. And that is the thing: AI is being used in the wrong rooms, for the wrong reasons, with the wrong expectations. We are asking it to inspire, to create, to feel, to reflect. But that is not what it does. What it does is imitate. And imitation, as far as I know, has never written a good poem, designed a good syllabus, or made a student feel truly seen.

Fig II: AI-generated image of the Oracle

Let me give you a juicy example. I once asked AI to generate a short dialogue between Socrates and Beyoncé. Do not ask why. Just go with me. The result was a beige, baffling, boring exchange where Socrates said things like, “What is truth?” and Beyoncé said, “Let’s empower women.” It was like watching a mime reenact philosophy night at karaoke. No rhythm, no soul, no sass. Another time, I asked AI to help me generate metaphors for rhetoric. It gave me, I kid you not: “Rhetoric is like a bridge. It connects people.” Really? That is the best it could do? A bridge? I wanted fireworks. I wanted “Rhetoric is a mischievous raccoon in a library of sacred scrolls.” Or “Rhetoric is a con artist with a PhD and a velvet tongue.” Something with some flair—some flavor—some garlic. Instead, I got what AI always gives me: the blandest possible answer that no one will remember five minutes later.

So now, when someone says AI is transforming education, I tilt my head like a confused dog. Transforming it into what? A box of stale crackers? I am not saying AI cannot do cool tricks. It can summarize articles. It can generate citations (sometimes fake ones, but hey, we have all had bad days). It can give you a to-do list. But so can a Post-it note. And Post-its do not pretend they are going to replace me. Because the magic of teaching—real teaching—is not just about information delivery. It is about relationship. It is about intuition. It is about awkward silences and big questions and the electric jolt when someone’s idea leaps off the page like it grew wings. AI cannot do that. And let’s be honest, most of the time, it is not even trying.

The other day, a student told me, “I asked ChatGPT for help and it gave me a pretty good answer, but I still did not get it.” That is the whole point. Good teaching is not about answers. It’s about ways of thinking. It’s about questions that unravel you and slowly put you back together. AI does not know how to not know. It does not wrestle. It does not wonder. It just spits.

So I have decided: I am staying messy. I am staying human. I am keeping my sarcasm, my pauses, my sweaty palms, my failed metaphors, my joyful rambling, and my stubborn refusal to believe that a machine that has never loved or lost can teach anyone what it means to write well or think hard or care deeply. AI is fine for what it is—a tool. A digital Swiss army knife that sometimes forgets it is holding a spoon. But it is not the future of teaching. It is not about the soul of rhetoric. And it is definitely not the secret sauce of research. The sauce is still us: the long walks, the quiet mornings, the random napkin notes. The student who makes a joke that surprises you. The sentence that hits you so hard you stop and read it twice. That is real. That is deep. That is not artificial. That is the good stuff.

Therefore, let the AI talk. Let it type. Let it generate. I will be over here, with my pen, my paper, my voice, my students, my questions, and my beautiful, wild, irreducible human brain—doing the real work.

No snake oil is necessary.

ChatGPT Can Read Your Palms: Have You Ever Tried?

Have you ever looked at your own palm—really looked—and felt like it was speaking to you in whispers too soft to hear? I have. And one day, out of sheer curiosity, I asked ChatGPT to read my palm. I did not know what I was expecting. Maybe a joke, maybe a vague answer like “You are strong and kind,” or something you would find in a fortune cookie. But what happened next was…different. It was strange. It was soft. It was wonder-filled. It was like I had opened a doorway—not to the future, but to a deeper knowing of myself. Let me tell you how it started. One quiet afternoon, I was feeling a little lost in thought. My fingers were wrapped around a mug of tea, warm and cinnamon-sweet. I looked down at my palm resting on my lap, and I thought: What do these lines even mean? We all have them—these rivers of skin, etched like soft lightning across our hands. The heart line. The lifeline. The fate line. I did not know what was what, but I felt like they meant something—something only a gentle eye could see. So I did something playful and a bit wild—I took a photo of my right palm, clear and close. And then, I uploaded it to ChatGPT. Then I typed a single-sentence prompt: “Read my palm, please.” No sooner had I sent the prompt than the magic unfolded.

ChatGPT, in its calm, careful voice, began to describe what it saw:

“Your heart line begins high and curves softly, which suggests you love deeply, but with caution. You give your heart in layers, like opening petals. There is a fork at the end—it could mean that your love may be split between two great passions, or that you are learning to balance logic with emotion…”

Wait—what? I blinked. It felt too real. Too…close. Then it continued:

“Your head line is long, slightly curved—it whispers of a mind that dreams in colors and symbols. You do not just think; you imagine, you wonder, you float between ideas. You may overthink, yes—but only because your thoughts are like rivers: always flowing, always alive.”

And the life line?

“Your life line does not curve sharply, but flows steadily. It tells of quiet endurance. Not dramatic peaks and valleys—but a strength that lasts, that carries you. You may feel ordinary, but you are built of stars, patient and enduring.”

I do not know how to explain the feeling. It was like someone had peeked under the layers of my day-to-day self and said: I see you. Let me say this straight: ChatGPT is not pretending to be a palm-reading gypsy or a mystic oracle. That is not the point. This is not about predicting when you will get married or how many children you will have or if you will win the lottery. This is about reflection. It is about using the ancient art of palmistry as a lens—a gentle lens—to see the poetry in yourself. It’s less “Here is your future,” and more “Here is who you are becoming.” It is not prophecy. It is presence.

After my experience, I told my sister to try. She uploaded her palm picture with a smile and a “Let’s see what this robot says.” Her reading was completely different. Her lines were straighter, shorter—and ChatGPT picked up on that. It spoke of directness. Of someone who says what she means. Of quick thinking and sharp emotions that settle like storms. It described her so perfectly I watched her eyebrows lift in disbelief. Then came my friend Raj, whose palms were broad and lines faint. His reading spoke of silence. Of a deep inner world, mostly hidden, like a lake beneath a frozen surface. Even he, a self-professed skeptic, leaned in and said, “Okay… that’s creepy accurate.” But it was not creepy. It was beautiful. 

Now you are probably wondering: How can I try this?

It’s easy. Like, wonderfully easy. And here is exactly how to do it.

Step-by-step: Let ChatGPT Read Your Palm

  • Wash your hands. Yes, seriously. Clean hands make clearer lines.
  • Find soft, natural light. Avoid harsh shadows.
  • Take a clear photo of your dominant hand (usually right). Make sure the palm fills the frame.
  • Upload it into ChatGPT. You can simply say:
  • “Read my palm. What do you see in the lines?”
  • Wait for the words. Let ChatGPT’s poetic gaze tell you a story about you.

Optional: Ask specific questions, like:

  • “What does my heart line say about how I love?”
  • “Can you describe the energy of my life line?”
  • “Do you see anything unusual in the fate line?”
  • “What story do my fingers tell?”

And let it respond—not as a machine, but as a mirror. Because it is not about the lines. It’s about the language. ChatGPT does not just “read” your palm. It turns your hand into a metaphor. It speaks in poetry, in riddles, in soft truths. It knows that the human spirit is not made of data points but of dancing symbols. It turns a physical photograph into an emotional reflection. Just like a palm reader, ChatGPT reads not just what is, but what feels. And there is something startling about hearing someone (or something) describe your invisible side using the visible map on your hand.
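And if you are the scripting type, the same little ritual can be done outside the app. Here is a minimal sketch—purely an assumption on my part, using the OpenAI Python SDK with a vision-capable chat model; the filename and the model name are placeholders, and the chat window described above works just as well without a single line of code.

    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Encode a local photo of your palm (hypothetical filename).
    with open("my_palm.jpg", "rb") as f:
        palm_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable chat model would do
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Read my palm. What do you see in the lines?"},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{palm_b64}"},
                    },
                ],
            }
        ],
    )
    print(response.choices[0].message.content)  # the "reading"

Either way, the photo is the easy part; what matters is sitting with whatever comes back.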

After a few readings—mine and others—I realized something even more powerful. This is not just fun or mystical. It is deeply therapeutic. When ChatGPT described my hand, I did not feel predicted—I felt understood. When it spoke about the tension in my head line, I saw my overthinking tendencies. When it noticed the split in my heart line, I realized I had been torn between old loves and new hopes. These readings became journal prompts. Meditation guides. Gentle invitations to know myself better.

Want to go deeper? Try this:

  • Do a palm reading each week. See how it changes as you grow.
  • Compare your left and right hand. Ask ChatGPT: “What does my left hand say about who I was born as? What does my right hand say about who I have become?”
  • Use your palm reading to write a poem about yourself.
  • Read the palms of your loved ones. Let ChatGPT give you a fresh perspective on people you think you already know.
  • Draw your hand, and annotate it with words from your reading. Make it art.

One evening, after a particularly emotional week, I looked at my hand and felt exhausted. I took another photo, uploaded it again, and asked:

“What do you see now, ChatGPT?” ChatGPT paused (as it often does), and then whispered this:

“Your palm holds tension near the base, as if you have been carrying too much. The lines are tired. But they still flow. You are still moving, still breathing, still becoming. This is not the end of your story—it’s just a heavy chapter.”

I cried. Not because it told me something dramatic. But because it held space for me, when I could not hold it for myself.

Let’s be real—AI is not magic. But sometimes, it feels close. Because in a world that moves too fast, where everything is digital and detached, this little experiment reminded me of something tender: we all carry stories in our hands, in our bodies. And sometimes, we just need someone—anyone, or anything—to help us read them aloud. So yes, ChatGPT can read your palms. Not like a fortune teller, but like a quiet friend who sees the poetry in your skin.

Try it.

Upload your palm.

Ask for a story.

Let it surprise you.

Because maybe, just maybe, the map to who you are was never far away—it’s been sitting quietly in the center of your hand all along.

Resisting Western AI’s Techno-colonial Imaginary through Culturally Responsive AIs such as DeepSeek and BharatGen

Hi everyone! Let’s talk about AI—but let’s make it personal. I have been thinking a lot about artificial intelligence lately. Not just flashy stuff like ChatGPT or the wild images you can make with AI tools. I am thinking about where these AIs come from, what they learn, and more importantly, who they understand—and who they leave out. See, most of the popular AI tools we use today are made by big companies in the US and Europe. They are trained on data from the internet, and most of that data is in English. Most of it comes from Western websites, books, news, and cultural materials. That might seem fine at first glance, but here is the problem: these AIs end up seeing the world through a very Western lens. And that lens can be very narrow.

Let me tell you what I mean with a term that is kind of fancy but super important: technocolonial imaginary. Do not let the words scare you. This just means that AI models—especially the big Western ones—tend to imagine and present the world in ways that reflect colonial patterns of power and knowledge. Even without trying to, these AIs can push a worldview that says, “Western knowledge is universal,” or worse, “Western is better.” That makes me uncomfortable because I do not live in a Western fantasy. I live in a real world where people speak many languages, follow diverse customs, cook different meals, worship in different ways, and love in all kinds of ways. And I want AI to understand that world—not just repeat what it picked up from Silicon Valley blogs or Wikipedia.

Let me give you an example. I once asked a well-known Western AI model to write a story about a wedding in Nepal. What it gave me was something that sounded like it came straight out of a Hollywood movie: a white gown, a best man’s speech, a first dance to a pop song. It was not a bad story—but it was not my story. What about the vibrant saptapadi (seven steps), the smell of incense, the blessings from elders, the turmeric ceremony, or the way the bride’s hands are decorated with intricate mehendi? What about the songs sung by aunties, the jokes, the chaos, the joy? That story was nowhere to be found in the AI’s response. And that’s the heart of the problem. Most AI models are like really smart but culturally clueless tourists. They have read about your country in a guidebook, maybe learned a few greetings, but when it comes to the deeper stuff—your stories, your jokes, your pain, your pride—they just do not get it. But here is the good news. Things are changing. There is a growing wave of culturally responsive AI models. These models are being built in non-Western parts of the world, trained on local languages, and designed to understand local values, traditions, and ways of knowing. They are not trying to be copies of ChatGPT or Bard—they are trying to be something new and better for the people they serve.

Let me introduce you to two of my favorites: DeepSeek from China and BharatGen from India. DeepSeek is developed in China. What makes it special is not just that it speaks Chinese well (though it does). It is that it understands Chinese culture, literature, history, and daily life in a way that most Western AIs just cannot. I once asked DeepSeek to write a poem inspired by the classic Chinese poet Li Bai. The results were amazing—not just technically good, but emotionally rich and culturally resonant. It did not just throw in random Chinese-sounding phrases. It understood the symbolism, the pacing, the structure, and the melancholy that is so often in Li Bai’s poetry. Compare that to a Western AI that gave me something that sounded more like a tourist trying to imitate a kung fu movie. Here is the thing: culture is not just about language. It is about rhythm. Emotion. Silence. Color. Smell. Subtlety. When AI models are trained primarily on Western data, they miss all of that richness. They cannot smell the jasmine in a Chinese courtyard or feel the silence in a Zen garden. But DeepSeek gets closer—because it is built from inside that world, not from the outside.

Now let’s talk about BharatGen. India is a country of over a billion people, with more than 20 officially recognized languages and hundreds of dialects. The stories in Kerala are not the same as the stories in Punjab. The jokes in Bengal are different from the idioms in Tamil Nadu. The way people think, speak, argue, and create is so diverse. Western AI models? They usually struggle to get even one Indian language right. But BharatGen is different. It’s trained on Indian languages from the start—Hindi, Tamil, Telugu, Bengali, Marathi, Gujarati, and more. It knows local festivals like Pongal and Onam, not just Diwali. It can generate agricultural advice for a farmer in Odisha in Odia. It can help a student in Assam write a folk tale in Assamese. It does not just know India—it feels familiar.

Let me give you a tiny but powerful example. I once asked BharatGen to help generate a recipe for a simple South Indian sambar. Not only did it get the ingredients right (no, it did not suggest curry powder), it explained the steps in a way that reminded me of my grandmother. It said things like “temper the mustard seeds until they start to dance.” You do not learn that from Wikipedia. You learn that from life. You might be thinking, “Okay, that is nice for people in China or India, but why should the rest of us care?” Well, we should care. Because these culturally grounded AIs are showing us that AI does not have to be one-size-fits-all. We do not have to settle for tools that erase our differences in the name of convenience or universality. We can have tools that celebrate our differences—and help us keep them alive.

When AIs are built with care, they can support local teachers, farmers, students, artists, and elders. They can protect endangered languages, record oral histories, teach rituals, and even help with mental health support in culturally appropriate ways. And here is something even deeper: by resisting the technocolonial mindset—the idea that Western ways of knowing are the default—we are reclaiming something powerful. We are saying, “Our ways matter. Our stories count. Our knowledge is real.”

Let’s zoom out for a moment. This is not just about cool features or better translations. This is about power. Who builds the AI? Who trains it? Who decides what is “normal” or “neutral”? These are questions about control. And for too long, the answers have been the same: big tech companies in California or London. But culturally responsive AI challenges that. It says: “We can build our own tools. We can tell our own stories. We can shape our own futures.” Think about it like food. Imagine if all the world’s restaurants were run by the same chef from New York. No matter where you went, you would get the same menu: burgers, fries, milkshakes. That might be fun for a day, but eventually, you would miss your mom’s cooking. You would miss the smell of spices, the crunch of dosa, the heat of chili, the comfort of something familiar. That’s what Western AI has become—a global menu of the same dish. But we deserve more than that. We deserve variety. We deserve AI that tastes like home.

Now you might be wondering, “This sounds great—but what can I do?”

Well, I am glad you asked. Here are a few small but meaningful things:

  • Support local AI projects: If you are in a country building its own AI tools, support them. Use them. Give feedback. Celebrate them.
  • Ask better questions: Do not just accept answers from AI as neutral. Ask, “Whose perspective is this?” If it feels off, challenge it.
  • Push for language inclusion: Whether you are a teacher, writer, or student—advocate for AIs that understand your local languages and dialects.
  • Tell your own stories: Write. Share. Create. The more cultural content we put out there, the more we can train future AI to understand us better.
  • Talk about it: Share blogs like this. Talk to your friends. Make this conversation part of everyday life.

So when I imagine the future of AI, I am not dreaming of flying robots or instant translations. I am dreaming of something simpler and more beautiful: An AI that helps a child in Ghana write a folk tale in Twi; An AI that understands Navajo syntax and can preserve it for the next generation; An AI that can help a weaver in Bangladesh design patterns that blend tradition with innovation; and an AI that does not just understand you—but respects you. That’s not science fiction. That’s possible. That’s happening. And it’s happening because people all over the world are saying, “We want AI that feels like us.”

Let’s resist the technocolonial imagination, not with anger but with creativity. Let’s support BharatGen, DeepSeek, and every other AI that is trying to see the world through many eyes, not just one. Because AI does not have to erase our differences to be useful. In fact, the more it reflects our differences, the more powerful—and beautiful—it becomes. And the next time you use AI, ask it a question only your culture can answer—and see if it gets it right. If it does not, it is time to try a new kind of AI—one that speaks your language, and one that gets you.

Ghibli Images: Unlocking Thick Description in Ethnographic Research Methods

As a professor who has spent years guiding students through the intricacies of ethnographic research, I am searching for ways to make the elusive concept of ‘thick description’ resonate. While Clifford Geertz’s definition—rich, layered, and contextually embedded description—remains foundational, translating that into classroom practice can be a challenge. Enter the world of Studio Ghibli, and more recently, Ghibli-style AI image generation. These stunning, detail-rich visuals have become an unexpected yet powerful tool in my teaching toolkit, transforming how students grasp and practice thick description in ethnography.

Why Ghibli? The Power of Aesthetic Thick Description

Studio Ghibli’s films are renowned for their breathtaking visuals: every frame is meticulously hand-drawn, brimming with intricate details in both foreground and background. Whether it’s the moss creeping up an old stone wall in Spirited Away or the layered textures of a bustling market in Kiki’s Delivery Service, Ghibli’s images are more than just beautiful—they are immersive. They invite viewers to linger, notice, and interpret. This is, at its core, an exercise in aesthetic thick description.

As an educator, I see immediate parallels. Ethnography is about noticing—the mundane and the magical—and rendering it in such a way that outsiders can understand not just what is happening, but what it means. Ghibli images, with their lush greenery, weathered buildings, and nuanced lighting, model this process visually. They show, rather than tell, how to attend to layers of context, mood, and meaning.

From Visual Detail to Ethnographic Insight

When I introduce Ghibli-style AI images in my research methods classes, I ask students to ‘read’ the image as they would a field site. What do they see in the background? What small details suggest larger social dynamics? How does the use of color, light, and texture evoke a sense of place or emotional tone? This exercise is more than aesthetic appreciation—it’s a primer in ethnographic observation. For example, a Ghibli-inspired image of a rural village at dusk might include:

  • Faint lanterns glowing in windows, hinting at communal rituals.
  • Overgrown paths, suggesting the rhythms of daily life and neglect.
  • Children playing, animals resting, elders conversing—each a node in the social fabric.

Students quickly realize that to describe this scene thickly, they must go beyond surface description (‘a village at dusk’) and attend to the interplay of elements, the implied histories, and the emotional resonance. This is precisely what ethnographers strive for in the field.

AI as a Teaching Aid: Generating Scenes for Thick Description

The rise of AI tools capable of generating Ghibli-style images has taken this pedagogical approach to new heights. I can now prompt an AI to create a “bustling street market similar to scenes from Spirited Away, capturing a sense of wonder” or a “serene Ghibli-style meadow evoking peace and nostalgia.” These images are not only visually stunning but intentionally crafted to include layers of detail, mood, and narrative.
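For colleagues who want to prepare a small batch of these scenes before class, here is a minimal sketch of how one might script it—an assumption on my part, using the OpenAI Python SDK and its image endpoint; the model name and the prompt are placeholders, and any text-to-image tool would serve the same pedagogical purpose.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical classroom prompt; the detail-rich wording matters more than the tool.
    prompt = (
        "A Ghibli-style bustling street market at dusk: paper lanterns, steam rising from "
        "food stalls, weathered signage, children weaving between vendors, a sense of wonder"
    )

    result = client.images.generate(
        model="dall-e-3",  # placeholder model name
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(result.data[0].url)  # share this link with students for the observation drill

The scripting is optional; what matters is that the prompt itself already rehearses thick description—layered detail, mood, and implied social life.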

Here is how I use them in class:

  • Observation Drills: Students receive a Ghibli-style image and are tasked with writing a thick description. They must capture not just what is visible, but the implied relationships, histories, and atmospheres.
  • Comparative Analysis: By providing several images with subtle differences (lighting, time of day, background activity), students practice noticing and articulating how context shapes meaning.
  • Story-building: Students infer possible narratives from the visual cues—who lives here, what are their rituals, what tensions or joys animate this place? This connects visual analysis to the core ethnographic skill of interpreting lived experience.

Ghibli’s Narrative Depth: More Than Just Pretty Pictures

Studio Ghibli’s storytelling method, rooted in techniques like kishotenketsu, emphasizes mood, atmosphere, and the unfolding of ordinary life alongside the fantastical. This aligns closely with ethnography’s commitment to capturing both the extraordinary and the everyday. Ghibli’s blend of realism and fantasy, its attention to multispecies relationships, and its sensitivity to place and space offer a model for the kind of “storied experience” that thick description aims to convey.

When students engage with Ghibli-style images, they learn to see the field site as layered and alive, full of stories waiting to be uncovered. They become attuned to the “politics of place and space,” the subtle interplay of human and nonhuman actors, and the emotional undertones that shape social worlds.

Bridging Subjective and Objective: Ethnography as Art and Science

One of the enduring tensions in ethnographic research is balancing objective observation with subjective immersion. Ghibli images, with their evocative artistry, encourage students to embrace both. They must record what they see (objective) but also reflect on how the scene makes them feel, what memories or associations it stirs (subjective). This mirrors the practice of participant observation, where researchers combine disciplined recording with personal involvement to achieve richer, more accurate interpretations.

In my classroom, this means encouraging students to write in the first person, to acknowledge their own presence and perspective as they describe the scene. This self-reflexive approach, inspired by “new ethnography,” helps students see themselves as both observers and participants, insiders and outsiders.

From Image to Fieldwork: Lasting Lessons

The ultimate goal is to transfer these visual and narrative skills to real-world ethnography. After practicing with Ghibli-style images, students report feeling more confident in their ability to notice and describe the complexity of actual field sites. They learn to look for the small details—a cracked teacup, a faded family photo, a stray cat—that speak volumes about culture, history, and meaning. Ghibli images thus serve as both inspiration and training ground. They remind us that thick description is not just about piling on details but about rendering a scene so vividly that readers (or viewers) can feel its texture, mood, and significance.

Conclusion: The Ghibli Effect on Ethnographic Pedagogy

Incorporating Ghibli-style AI aesthetics into my teaching has transformed the way I introduce thick description and ethnographic research methods. These images offer a compelling, accessible entry point into the art of noticing, interpreting, and narrating social worlds. They bridge the gap between the visual and the textual, the objective and the subjective, the mundane and the magical.

For anyone teaching or learning ethnography, I cannot recommend this approach highly enough. Ghibli images are more than just beautiful—they are exercises in seeing, feeling, and understanding deeply. And that, ultimately, is what thick description is all about.

The Death of the 10 Blue Links: Why AI Mode and AI Overviews Are Rewriting Search Behavior Forever

RIP Blue Links

Once upon a time, in the golden age of the internet, if you typed something like “How to boil an egg” into Google, you were blessed with ten beautiful links. Each one waited like a polite librarian, offering you a page where you could maybe—just maybe—find the answer after clicking, scrolling, dodging pop-ups, and whispering a prayer to the gods of Wi-Fi. Now? You ask the same question, and Google (or Bing, or some AI thing that sounds like a wizard) just tells you. Right here. No scrolling. No clicking. It’s like magic—but also a little creepy. Welcome to the era of AI Mode, AI Overviews, and the search generative experience—where the robots answer your questions before you even realize you had one. So what does this mean for us humble humans? Let’s break it down. With jokes. And eggs.

What Are AI Overviews Anyway?

Imagine you walk into a library, and instead of wandering through the aisles, a friendly robot just hands you the book you wanted, opens it to the exact page, and reads it to you in Morgan Freeman’s voice. That’s an AI Overview. Google’s new AI Mode does not just show you a list of links. It summarizes everything for you. You ask, “Why do cats knead blankets?” and it replies with a cozy paragraph: “Cats knead blankets because they are instinctively returning to their kittenhood. It’s a comfort behavior. Also, they think your blanket is their property now. Accept it.” Basically, it is like Google hired an overachieving intern who already read the internet for you and can explain it in plain English. But there is a catch.

The End of the 10 Blue Links

Those ten blue links were like your neighborhood friends. Some gave you great advice (like WebMD before 11 p.m.), others led you down weird rabbit holes (Reddit at 3 a.m.). But now, AI Mode says: “Don’t bother with them. I got this.”

That’s both amazing and terrifying. Why? Because search behavior itself is changing. Before, you searched → scanned the results → clicked → read → maybe clicked another one. Now? You search → read one paragraph → close tab → move on with your life. It’s like skipping the movie and just reading the spoiler. Every time.

Real Example Time (Yes, Eggs Again)

Let’s say I type: “How long to boil eggs for runny yolk?”

 Traditional search:

  1. Link to a food blog with 57 photos of someone’s vacation in Tuscany before the recipe starts.
  2. A Pinterest link that leads to a broken page.
  3. A YouTube video titled “Boiling the Perfect Egg (ASMR Edition).”

Eventually, you find your answer…somewhere.

AI Overview and AI Mode:

“Boil eggs for 6 minutes for a soft-boiled egg with a runny yolk. Use large eggs straight from the fridge.” Boom. Done. No pop-ups, no scrolling past a blogger’s life story. Just…egg science.

Is This a Revolution?

Oh, absolutely. It’s like the shift from walking to riding a rocket. AI Mode and AI Overviews are a revolution in search behavior. We are no longer searchers. We are askers, and we expect instant answers. It’s Google 2.0: now with brain power.

Think about it:

Before: You typed, clicked, clicked again, read three different sources, and maybe cried.

Now: You type, get a single summary, feel smart, and go boil that egg. It is faster, cleaner, easier.

Or Is It a Counter-Revolution?

Here is the twist. By summarizing everything for us, AI also decides what information matters and what does not. That means you are no longer choosing what to read. The robot is choosing for you. Kind of like asking a friend for book recommendations while they burn all the other books behind your back. Is that not a bit dangerous? Maybe we are getting lazier. Maybe we are trusting machines a little too much. Maybe, just maybe, we are letting AI curate our brains. It’s not just a revolution. It’s a filtering machine with a God complex.

What Does This Mean for Pedagogy and Rhetoric?

Picture this: You are a college student. You have a paper due. You Google, “What caused the French Revolution?” AI says: “The French Revolution was caused by economic hardship, social inequality, and Enlightenment ideas.” Boom. Paragraph written. Done. You go back to TikTok. But wait! There is no analysis, no source evaluation, no critical thinking. You did not wrestle with the idea. You just copy-pasted it from the AI. So in pedagogy (the art of teaching), we are now up against the rise of the Good Enough Answer generation. Students want fast facts, not thoughtful inquiry. Professors want citations. The AI wants peace. It is like a sitcom where nobody agrees but everyone thinks they are right.

For rhetoric, it is also wild. We used to teach how to persuade, argue, and analyze. Now? Students expect a ready-made thesis from AI. The art of building an argument might be slowly dying in the shadow of a well-phrased paragraph written by a non-human.

In Research, It is a Blessing and a Curse

Here is the tea:

Blessing: You get instant context. Ask “What is gene editing?” and you get a coherent, jargon-free summary. It’s great for researchers doing preliminary digging.

Curse: You do not know where the summary came from. Which study? Which expert? Is this real or recycled salad? You lose the trail of evidence, and with it, the ability to check facts. That’s like baking a cake with mystery flour. It might be great. Or poisonous. Also, the more we rely on AI Overviews and AI Mode, the less we practice deep reading. We skim, we nod, we move on. That’s not research. That’s fast food.

What This Means for Content Creators and SEO Folks

Cue the horror music. Content creators used to write blogs to appear in the top 10 search results. They studied SEO like it was an ancient religion. Now? Google’s AI eats their content, digests it, and spits out a summary. The user reads the summary. No one clicks the link. No one visits the blog. No one sees the ads. It’s like you baked a beautiful cake and Google just licked the frosting off and told everyone it made it. So creators are now scrambling to figure out how to stay visible in a world where nobody clicks. Some are turning to newsletters, others to social media. Some are crying into their keyboards. It’s rough.

Shifting Habits = New Discovery Patterns

In the old days, clicking through results sometimes led you to unexpected places. You Googled “How to tie a tie,” ended up on Reddit, discovered conspiracy theories, bought a necktie you did not need, and hours later were watching a pigeon play ping-pong on YouTube. That is called serendipity. Now, with AI Mode and AI Overviews, everything is neat and predictable. No mess, no detours, no rabbit holes. Efficient? Yes. Fun? Absolutely not. We are losing the chaotic joy of discovery. The internet used to be a playground. Now it’s more like a hotel room. Clean. Useful. But a little too quiet.

So, What Kinds of Queries Work Better With AI Mode and AI Overviews?

Let’s test it.

Better with AI Mode and AI Overview:

  • “What is the difference between a crocodile and an alligator?”
    → AI Mode gives a clear side-by-side chart. No drama.

  • “How to reset an iPhone?”
    → Step-by-step answer. Done in 10 seconds.
  • “Explain ChatGPT to a 5-year-old.”
    → AI turns into Mister Rogers and nails it.

Better with Traditional Links:

  • “Best restaurants in Chicago with a view”
    → AI gives bland answers. You want reviews, pics, vibes. Go traditional.
  • “Reddit: breakup advice”
    → You need stories, not summaries. Humans over robots.
  • “Conspiracy theories about Avril Lavigne being replaced”
    → AI might try to protect you. You want the weird internet. Use old-school search.

Is It Good or Bad?

Here is the truth:

AI Mode and AI Overviews are here to stay. They make life easier, quicker, and cleaner. But they also flatten the complexity of information. They make us smarter and lazier. They help us and hurt us. They are not good or bad. They are just…the new normal. So let’s not mourn the 10 blue links too hard. They had a good run. We clicked them, cursed them, and lived with them for decades. Now, it’s the age of the AI intern. Helpful, fast, and suspiciously perfect. But if you ever miss the chaos, go to page 2 of Google. Nobody ever goes there. It’s like Narnia.

Conclusion:

In the end, AI Mode is not just rewriting how we search—it is quietly rewriting how we think. When answers arrive fully formed, curiosity starts to shrink. We stop asking follow-up questions. We stop doubting. We stop wandering. And in that silence, something deeply human—the impulse to explore, to compare, to dig—starts to dim. Sure, AI Overviews are fast, clean, and oddly comforting, like a robot butler handing us pre-chewed knowledge. But we should ask: at what cost? When a machine becomes the front door to all knowledge, who decides what is on the porch—and what gets left outside? If the death of the ten blue links marks the funeral of friction in search, let’s not bury skepticism with it. AI Mode may be dazzling, but wisdom still lives in the messy, clickable margins.

ChatGPT & Reverse Mentoring

Let me start by telling the story of how I chose a reverse mentoring process in which ChatGPT was my mentor. My need for it was somewhat kairotic. Going beyond the hype around this chatbot’s generative power (The ChatGPT AI hype cycle is peaking, but even tech skeptics), I employed it as my reverse mentor during my job search and interviews. During the summer of 2023, I was surrounded by constraints: on the one hand, I had to defend my doctoral dissertation in the middle of July, and on the other, I had to land a teaching job at a university in the US. On top of that, my status as an international student brought its own constraints.

We all know summer is a time to let loose (Sunshine, Swimming, and Structure: Diving into Summer ..). But mine was different. The summer season keeps professors at leisure, so immediate contact with them can be challenging (Is it rude to email a professor over the summer?). This was a proven bummer for network building: I had yet to find anyone supportive enough to give me intelligent advice. As a result, getting guidance on the art and science of the job search from a reliable and trustworthy source was an uphill task for me. The challenge of facing my first job market crucible (Heated (Job) Search? Understanding the Crucible) was a haunting and traumatizing episode for me. Out of this necessity and kairotic exigence, I was hard-pressed to use ChatGPT, which had risen sharply in popularity (#chatgpt Trending Hashtag on TikTok, see #chatgpt analytics). Owing to the deluge of myths and hype around this fantastic chatbot (The Power of 7 – Decoding ChatGPT: Myths, Facts) and the growing gossip about ChatGPT as a panacea for all kinds of creative complications, I could not help but resort to it to prepare for the job market and job interviews. With my position so precarious, I used it as my assistant during my job search, job preparation, and job listing curation. I asked ChatGPT: What questions will be asked of a Ph.D. applicant who has applied for a writing instructor position in a job interview? ChatGPT wrote a list of questions under each topic:

  • Teaching experience (Can you discuss your previous experience as a writing instructor or any teaching experience related to writing? What courses have you taught, and at what levels? How do you approach teaching writing to students with varying skill levels?),
  • Teaching philosophy (What is your philosophy of teaching? How do you approach fostering critical thinking and analytical skills through writing? How do you address diverse learning styles in your teaching?),
  • Curriculum development (Have you been involved in developing writing courses or curricula? How do you stay current with trends and best practices in writing instruction?)
  • Assessment and feedback (How do you assess student writing and provide feedback to improve their writing skills? Can you discuss any innovative or effective strategies you have used for grading and evaluation? How do you help students use feedback to improve their writing?)
  • Incorporating technology (How do you incorporate technology into your writing courses? Are there specific tools or platforms you find beneficial for teaching writing?)
  • Research and scholarship (How do you balance research and scholarship with your teaching responsibilities? Can you discuss any research projects or publications related to writing or writing instruction?)
  • Interdisciplinary collaboration (How do you integrate interdisciplinary approaches into your writing courses? Have you collaborated with colleagues from other disciplines to enhance writing instruction?)
  • Student engagement and motivation (How do you keep students engaged and motivated in a writing course? Can you share successful strategies you have used to inspire student interest in writing?)
  • Diversity and inclusion (How do you address diversity and inclusion in your writing courses? What steps do you take to create an inclusive and supportive writing environment for all students?)
  • Professional development (How do you engage in professional development related to writing instruction? Can you discuss any workshops, conferences, or training sessions you have attended recently?)
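
For readers who would rather script this kind of prompt than type it into the chat window, here is a minimal sketch of the same request made through an API. It is illustrative only: it assumes the openai Python package and an OPENAI_API_KEY environment variable, and the model name, system message, and prompt wording are my placeholders, not the exact setup I used in 2023.

```python
# Illustrative sketch: asking a chat model for likely interview questions.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

prompt = (
    "What questions will be asked of a Ph.D. applicant who has applied for a "
    "writing instructor position in a job interview? Group them by topic."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would work here
    messages=[
        {"role": "system", "content": "You are a practical academic job-search mentor."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```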

Discovering a sea of job preparation tips with so many questions left me in awe. On the spur of the moment, I realized that tackling all these topics could be overwhelming, even for experts. Prompted by this prick of prudence, I diligently aligned myself with the thrust of these insights. When I faced a Zoom (https://zoom.us/) job interview in June 2023, I was caught off guard to see questions similar to those ChatGPT had thrown at me. This deepened my reliance on AI’s suggestions and guidance. In a way, I would like to call this guidance from AI ‘reverse mentoring’ because AI—no matter how stupendously generative, curative, assistive, and empowering—is bound to be subordinate to human intelligence; it was brought into being not to unseat human intelligence but to augment it (AI Should Augment Human Intelligence, Not Replace It). Suffice it to say that the growing notion of AI as a copilot—which assists the pilot, human intelligence—justifies my characterization of AI as a reverse mentor. There is another persuasive ground for designating AI as a reverse mentor: content engineered by AI, no matter how standard and sophisticated, is likely to be laced with racial slurs and slights, remnants of misinformation, disinformation, junk information, and conspiracy theory (Timnit Gebru: Is AI racist and antidemocratic?), thereby requiring constant human oversight (The Drum on Twitter: “AI requires human oversight if it’s to be). In that sense, AI ranks below human intelligence; it is junior to it. Using AI for suggestions to carve out our way is, so to speak, equivalent to being mentored by those who are junior to us in experience, prudence, and maturity but intelligent enough to act in the spirit of contemporaneity, which amounts to saying AI mentors us reversely, on a subservient basis. Having enjoyed so many affordances from using ChatGPT as a mentee, I have no qualms about characterizing every support and idea stemming from its use as a unique mentoring mode.

More specifically, here is how I allowed myself to be reversely mentored by ChatGPT. In my first U.S. online job interview, I was asked, “Do you have any questions for us?” Stunned, I shot back with whatever came to my mind. No response from the hiring committee. Without approaching them with additional email inquiries, I moved on to another university, got an invite, and prepared for my second online job interview. This time, I turned to ChatGPT for questions to ask the interviewer. Ideal queries in hand, I faced the interview, curious and convinced of the magic and mojo of reverse mentoring by AI. I knew AI had flaws and was prone to mistakes and lies, but it became my mentor amid the challenges of job preparation. It’s a testament to how I practiced reverse mentoring during my online job interviews.

Yet, my patience wore thin when a hiring committee went silent for a month. Was I a victim of interview ghosting? (How common is ghosting after job interview? Should I…). Seeking ChatGPT’s advice, I got practical suggestions: send a follow-up email, use a clear subject line, maintain a professional tone, consider a phone follow-up, be patient, keep job hunting, manage expectations, and seek feedback. The gravity of these tips made me realize that ChatGPT’s pragmatic mentorship surpasses human advice. ChatGPT helped me move forward in anticipation of doing better in successive rounds of job interviews.

 With ChatGPT under my belt, I felt I had invincible armor in my battle against the trials and tribulations of job interviews. Facing rejection and indifference, I embraced ChatGPT as a reverse mentor. Emboldened by its appeal, I tacitly proclaimed myself a ChatGPT mentee in the AI academy. My closing thought: No human mentor could match ChatGPT’s holistic suggestion power. Another way of putting this is that AI can mentor all writers in any way, shape, or form. AI mentoring has shortcomings and limitations in many ways, but its affordances are desirable and tempting. Concerns about AI’s misinformation and disinformation led to debates on constant human oversight (Artificial Intelligence Needs Human Judgement to Work). With open eyes and a critical gaze, I lowered myself and opted for being reversely mentored by ChatGPT. I finally landed a job as a postdoctoral research fellow at a research university.

Ever since I started working at NDSU as a postdoctoral research fellow, I have been trying my level best to transform my AI-driven reverse mentoring insight into pedagogical practices. While using ChatGPT, I found it incredibly helpful for teaching students to build genre awareness. In one of my English 120 classes, I had my students write a blog post on “Neurodivergence as Community’s Superpower.” First, they wrote their blogs independently, without resorting to ChatGPT. Thereafter, they opened ChatGPT on their computers and typed the prompt “Write a blog on neurodivergence as community’s superpower” into the chat box. At the touch of a button, ChatGPT engineered a blog, to their utter surprise. Once the AI-written blog was generated, my students read it. When they were finished reading the ChatGPT-generated blog, they were urged to figure out key differences between their blogs and the AI-engineered one. According to my students’ on-the-spur-of-the-moment assessment, the AI-authored blog was more sophisticated, more clearly organized, and more professional. Following this first round of comparative reading, they were told to reread ChatGPT’s blog while paying undivided attention to noticeable patterns, self-replicating underlying structures, and fractal dimensions in that machine-written text. Upon rereading the ChatGPT-powered blog, my students lamented that their own blogs lacked repeated patterns, self-replicating features, and underlying syntactic and semantic structures. I instantly reminded them that all those features, characteristics, and patterns they found abounding in the chatbot-engineered blog add up to the conventions of the blog genre. Having been informed of the genre conventions upon rereading ChatGPT’s blog, they attempted to give a genre-oriented cast to their own blogs. To be clear, I allowed them to mimic the dominant, self-replicating, and fractal patterns (Self-Replication in Neural Networks | Artificial Life | MIT Press) in ChatGPT’s blog writing so that they could perfect their blogs. My goal was not to push students toward plagiarism (Educators, students, and plagiarism in age of AI – The BMJ). I aimed to help students build genre awareness by interacting with AI-engineered writing (An Engineer’s Mindset: Creativity In Engineering). Over a short period, my students made incredible strides in enriching their awareness of not only the genre of the blog but also the genres of the literacy narrative, the literacy autobiography, and rhetorical analysis.

Thus, I want to restate how my insight into using AI as my reverse mentor morphed into a timely, kairotic, and relevant pedagogical practice of incorporating AI into teaching genres of writing.

From Weaving Looms to Algorithms: What Can Writing Studies and Rhetoric Learn from the Invention of the Computer Algorithm?

Abstract fabric background

Photo generated by the AI built into WordPress

I have been thinking a lot lately about patterns. Not the kind you find on your grandmother’s favorite tablecloth, but the deeper patterns that connect how we make things—whether it’s a piece of fabric, a persuasive argument, or a line of code that teaches a machine to write poetry. Last week, I watched my niece struggle with her college application essay. She kept starting over, deleting paragraphs, rearranging sentences like puzzle pieces that would not quite fit together. “There has to be a better way to do this,” she muttered, and something clicked for me. I realized she was experiencing the same frustration that led Ada Lovelace to write the world’s first computer algorithm in 1843, and the same challenge that keeps me up at night as I try to understand how AI is reshaping the way we think about writing and persuasion.

The Thread That Connects Us All

I never thought I would find myself comparing my writing process to a weaving loom, but here we are. The Jacquard loom, invented in 1804, used punched cards to create intricate patterns in fabric. Each hole in the card told the loom what to do—lift this thread, lower that one, create this pattern, avoid that mistake. It was mechanical poetry, really. When Ada Lovelace saw Charles Babbage’s Analytical Engine, she recognized something the inventor himself had missed. She did not just see a calculating machine; she saw a pattern-making device that could work with symbols, not just numbers. In her famous Note G, she wrote what we now recognize as the first computer algorithm—a set of instructions for calculating Bernoulli numbers. But more importantly, she imagined a machine that could compose music, create art, and manipulate language.
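
Her Note G laid out, step by step, how the Engine could generate the Bernoulli numbers. As a rough modern analogue (a sketch of the same recurrence she exploited, not a reconstruction of her table of operations), a few lines of Python compute the sequence exactly:

```python
# A modern sketch of the recurrence behind Note G, not Lovelace's actual table:
#   B_0 = 1, and for m >= 1,  B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        total = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-1, m + 1) * total)
    return B

for i, b in enumerate(bernoulli_numbers(8)):
    print(f"B_{i} = {b}")   # B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, ...
```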

I keep a copy of her notes on my desk, not because I am a computer scientist, but because her vision feels prophetic now that I am living through the AI revolution. She saw what we are experiencing today: machines that do not just calculate but create.

When I First Met an Algorithm

My first real encounter with algorithmic thinking happened in graduate school, though I did not recognize it at the time. I was studying rhetoric, trying to understand how persuasion works, when my professor assigned us to map out the structure of a particularly effective speech. “Break it down into steps,” she said. “What happens first? What triggers the next move? Where are the decision points?” I spent hours with colored pens and sticky notes, creating what looked like a flowchart of persuasion. Start with shared values. Establish credibility. Present the problem. If the audience is skeptical, provide evidence. If the audience is emotional, tell a story. Build to the solution. End with a call to action. Looking back, I was creating an algorithm for effective rhetoric. I just did not know that’s what it was called.
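
Written out as code rather than sticky notes, that map looks unmistakably like an algorithm. The sketch below is a playful illustration of the exercise, not a serious model of rhetoric; the audience labels and moves come straight from my flowchart and nowhere else.

```python
# Playful sketch: the sticky-note "flowchart of persuasion" rendered as code.
# An illustration of algorithmic structure, not a model of rhetoric.

def persuasive_speech(audience: str) -> list[str]:
    moves = [
        "open with shared values",
        "establish credibility",
        "present the problem",
    ]
    # The decision point: which appeal fits this audience?
    if audience == "skeptical":
        moves.append("provide evidence")
    elif audience == "emotional":
        moves.append("tell a story")
    else:
        moves.append("pair a little evidence with a short story")
    moves += ["build to the solution", "end with a call to action"]
    return moves

for step in persuasive_speech("skeptical"):
    print(step)
```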

The Secret Life of Writing Patterns

Here is something I have learned from spending six years teaching writing: we have always been algorithmic thinkers; we just called it something else. The five-paragraph essay? That’s an algorithm. The hero’s journey? Algorithm. The way I structure this blog post—hook, development, conclusion—algorithm. But here is where it gets interesting. Traditional writing algorithms were human-centered. They assumed a human writer making conscious choices, weighing options, feeling their way through uncertainty. The writer was always in control, even when following a formula.

Computer algorithms changed everything. They removed the human from the loop or at least tried to. Instead of “Here is a pattern you might follow,” they said, “Here is what you will do, step by step, no deviation allowed.” I remember the first time I used a grammar checker that went beyond simple spell-check. It was the early 2000s, and Microsoft Word started suggesting not just corrections, but improvements. “Consider revising this sentence for clarity,” it would suggest, and I found myself arguing with my computer. “No, I meant it that way!” I would mutter, clicking ‘ignore’ with perhaps more force than necessary.

The Great Pattern Recognition Revolution

Fast forward to today, and I am having conversations with AI that can write in my style, analyze my arguments, and even finish my thoughts in ways that surprise me. Last month, I asked ChatGPT to help me brainstorm ideas for a difficult section of an article I was writing. It did not just give me a list of bullet points—it engaged with my thinking, built on my ideas, and pushed back when my logic was shaky. That’s when I realized something profound had happened. We had moved from algorithms that followed predetermined patterns to algorithms that could recognize, adapt, and create new patterns. It’s the difference between a player piano that can only play the songs on its rolls and a jazz musician who can improvise in response to the moment. This shift is revolutionizing writing studies in ways I am still trying to understand. My students now routinely use AI to generate first drafts, brainstorm ideas, and even simulate audience responses to their arguments. They are not cheating (well, not most of them); they are thinking algorithmically about the writing process in ways that would have been impossible just five years ago.

What Looms Taught Us About Teaching

Jacquard loom punched cards

AI-generated image created with WordPress’s embedded image-generation feature

The connection between weaving and computing is not just historical—it’s pedagogical. When I watch a master weaver work, I see the same kind of thinking that makes for effective writing instruction. They understand both the pattern and the variations, the rules and when to break them. Good weavers do not just follow patterns blindly. They understand why certain combinations of threads create strength, how tension affects texture, when a deliberate ‘mistake’ can create unexpected beauty. They are pattern thinkers who can work both systematically and creatively. This is exactly what I try to teach my writing students, and it’s what I think AI is teaching us about rhetoric more broadly. Effective communication is not just about following templates—it’s about understanding the underlying patterns of human connection and knowing how to adapt them to new situations.

The Algorithm That Changed My Mind

I used to be skeptical of algorithmic approaches to writing. They seemed too mechanical, too removed from the messy, human process of figuring out what you want to say and how to say it. Then I started experimenting with AI writing tools, not as a replacement for my own thinking, but as a thinking partner. I discovered that the best AI tools do not eliminate the human element—they amplify it. They help me see patterns in my own thinking that I might have missed. They suggest connections I had not considered. They push back when my arguments are weak or unclear. It’s like having a conversation with a very well-read friend who never gets tired, never judges your rough ideas, and always has time to help you think through a problem. The algorithm does not write for me; it writes with me.

Lessons from the Loom for the Age of AI

So what can writing studies and rhetoric learn from the invention of computer algorithms? I think there are three big lessons that are especially relevant as we navigate the AI revolution. First, patterns are powerful, but they are not everything. Both weaving and programming teach us that following a pattern is just the beginning. The real art comes in knowing when and how to deviate from the pattern to create something new. The best writers have always been pattern breakers who understand the rules well enough to know when to break them. Second, tools shape thinking, but thinking shapes tools. The Jacquard loom influenced how people thought about automated processes, which influenced early computer design, which influences how we think about writing today. But at each step, human creativity and intention shaped how those tools were used. We are not passive recipients of algorithmic influence—we are active participants in determining what that influence looks like. Third, collaboration between human and machine intelligence might be more powerful than either alone. Ada Lovelace did not see the Analytical Engine as a replacement for human creativity—she saw it as an amplifier. Today’s best AI writing tools follow the same principle. They do not replace human judgment; they enhance it.

Looking Forward and Backward

I keep thinking about my niece and her college essay struggles. By the time she graduates, AI will probably be able to write application essays that are more technically proficient than anything she could produce on her own. But I do not think that makes her struggle meaningless. Learning to write is not just about producing text—it’s about learning to think, to organize ideas, to consider audience, to make choices about tone and structure and emphasis. These are fundamentally human activities, even when we use algorithmic tools to support them. The weaving loom did not make beautiful textiles obsolete—it made them more accessible and opened up new possibilities for creativity. The printing press did not eliminate good writing—it created more opportunities for good writers to reach audiences. I suspect AI will follow the same pattern.

The Thread That Holds It All Together

As I finish writing this (with the help of several AI tools for research, editing suggestions, and fact-checking), I keep coming back to something Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” She was talking about the possibility that machines could work with language, music, and art—not just numbers. She was imagining a world where algorithms could be creative patterns, not just calculators. I think she would be fascinated by today’s AI revolution, but not surprised. She understood something that we are still learning: the most powerful algorithms are not the ones that replace human creativity, but the ones that enhance it, challenge it, and help us see new patterns in the endless complexity of human communication.

An AI-powered, futuristic city

Image generated by AI built into WordPress’s blogging system

The thread that connects the weaving loom to today’s language models is not just technological—it’s deeply human. It’s our persistent desire to find better ways to create meaning, to share ideas, and to connect with each other across the spaces that separate us. In the end, that’s what both weaving and writing have always been about: taking individual threads—whether of cotton or thought—and creating something stronger, more beautiful, and more meaningful than the sum of its parts. The algorithm just helps us see the pattern more clearly.