I Dated an Algorithm and It Found My Thesis: A Love Letter to Scholar Labs

By: Dr. Shiva Mainaly

Let’s be honest. Traditional literature searches are the scholarly equivalent of finding a needle in a haystack, except the haystack is on fire, We are blindfolded, and the needle is actually a peer-reviewed article from 1997 that uses a slightly different keyword than the one we just typed. Enter Scholar Labs, the AI-powered experimental feature from Google Scholar that has recently waltzed into my life and absolutely wrecked my previous understanding of research efficiency. If we work in Writing Studies, Rhetoric, or Composition—fields where we obsess over how knowledge is constructed—we need to sit down. This thing doesn’t just find sources; it gets us. It’s like having a research assistant who has actually done the reading and doesn’t ask for a letter of recommendation.

Old Scholar was a keyword muncher. We fed it “composition pedagogy” and “digital literacy,” and it spat out 4,000 results sorted by how many times people cited them in 2004. Scholar Labs is different. It’s a vibe-checker. A nuance-navigator. The Instance of the “Buried Lead” I recently needed to find arguments about “how multimodal assignments affect student agency in first-year comp,” but specifically when the students hate technology.

Old Way: I type multimodal+ student resistance + agency and get three articles about iPads in kindergarten and one about battery life.

Scholar Labs Way: I typed, “What do scholars say about students resisting digital projects because they feel it limits their rhetorical agency?”

The Result: It didn’t just give me a list. It gave me an AI outline. It summarized the conceptual relationship between the papers. It found a paper that didn’t even have “resistance” in the title—it used the term “techno-skepticism”—and told me, “This paper argues that skepticism is a form of agency.” My jaw? On the floor. My thesis? Saved.

Let’s look at some scenarios where this tool is absolutely revolutionizing our rhetorical lives. We all have that memory of a theorist who compared writing centers to hospital triage units, but we can’t remember if it was North, Harris, or some random grad student on a blog in 2012. When we ask to Scholar Lab, “Who argues that writing centers function like medical triage units regarding resource management?” It finds the metaphor. It locates the specific argument, not just the keywords. It creates a synopsis telling us exactly how that paper answers our specific question. It’s like it read the book so we don’t have to (but we still should, obviously, for the vibes). Ask scholar lab again this qeustion: “How can actor-network theory explain the failure of peer review workshops?” In response to our query, Scholar Labs synthesizes papers from Sociology (Latour stans) and Education. It builds a bridge. It tells us, “Paper A applies ANT to classroom dynamics, while Paper B discusses non-human actors in grading.” It essentially writes our lit review outline for us.

Okay, let’s put on our monocles and get fancy for a second. In our field, we talk about epistemic affordance—basically, what a tool allows us to know and how it allows us to know it. Scholar Labs offers a radical new epistemic affordance. Traditional search affords “retrieval.” It gives us a bucket of books. Scholar Labs affords “connection.” It processes the logic of the field before we even click a  link. It lowers the barrier to entry for complex ideas by acting as an interpretative layer. For readers, it affords a “bird’s eye view” of a discourse community instantly. We see the conversation, not just the speakers. For knowledge makers, it forces us to ask better questions. If we ask a bad question, the AI gives us a confused answer. It turns the search process into a dialectic—a Socratic dialogue with the internet.

This isn’t just a search engine upgrade; it’s a shift in our scholarly infrastructure. It respects our time. It summarizes relevance. No more downloading a 40-page PDF just to find out it’s about “composition” as in music, not writing. It democratizes theory. We don’t need to know the secret handshake (the perfect jargon) to find the club. We can ask in plain English, and it translates our curiosity into academic gold. It’s endlessly fascinating: Watching it parse a complex, three-part question about Foucault and emojis is honestly the most entertainment I’ve had since the faculty meeting where the coffee machine broke. Scholar Labs is the research partner we’ve been waiting for. It’s smart, it’s fast, and it understands that when I say “process,” I mean the writing process, not industrial manufacturing. So, go forth! Ask it weird questions! Let it find the connections we didn’t know existed. Just remember to cite our sources, or the librarians will still come for us.

Three Prompts to Guide You

These prompts are engineered to trigger the AI’s “syntactic synthesis” capabilities—forcing it to connect ideas rather than just fetch keywords.

Prompt 1: The “Conversation Mapper”

Use this to find how two distinct concepts interact. “How do recent studies in composition pedagogy reconcile linguistic justice with the requirement for standard academic English in grading rubrics?” Below is how it works.  Standard search engines struggle with “reconcile.” They just look for both terms. Scholar Labs, however, understands you are looking for the tension between these ideas. It will hunt for papers that specifically argue for a middle ground or a conflict, rather than papers that just mention both words.

Prompt 2: The “Mechanism Hunter”

Use this when you want to know how something happens, not just that it happens. Place this question on Google Scholar: “What specific rhetorical mechanisms do scholars identify as primary drivers of misinformation spread on short-form video platforms like TikTok?” Below is what you would get as an answer to this question: By asking for “rhetorical mechanisms” (a specific aspect) and “short-form video” (a context), you force the AI to look for causes (e.g., algorithmic amplification, emotive appeals, speed). It moves beyond generic “fake news” articles and digs into the how.

Prompt 3: The “Gap Finder”

Use this to find what is missing or under-researched. Ask, “What are the major criticisms or limitations cited in literature regarding the use of Generative AI in the first-year writing classroom, specifically concerning student voice?’ In response, we are likely to get “This prompt asks for criticisms and limitations regarding a specific relationship (AI + Student Voice). Scholar Labs will prioritize papers that take a skeptical or critical stance, giving you an instant outline of the counter-arguments in the field.

Affordances of IgniteAgent: My Super-Simple Observations on Using Agentic AI in Canvas

Affordances of IgniteAgent: My Super‑Simple Observations on Using Agentic AI in Canvas

When I first heard the phrase “agentic AI” I imagined a tiny digital butler, tuxedo‑clad, whisking through my virtual office, polishing assignments, refilling coffee cups (or at least the metaphorical ones), and whispering gentle reminders about overdue grades. Fast forward a few weeks, and I’m now living with IgniteAgent, the newest brainchild of the Canvas ecosystem, and I’ve got a front‑row seat to its uncanny ability to turn chaos into choreography. Below is my field report—supersimple, supersmart, and, yes, supersuasive—on how this little marvel is reshaping the life of an engineering communication instructor (that’s me) and, by extension, the whole learning‑management circus.

The “What‑Now‑Why‑How” of IgniteAgent

Before we dive into anecdotes, let’s get the basics out of the way. IgniteAgent is an agentic AI layer that sits atop Canvas, constantly monitoring, interpreting, and acting on data streams—course announcements, assignment submissions, discussion posts, calendar events, you name it. Unlike a static chatbot that waits for you to type a question, IgniteAgent proactively suggests actions, automates repetitive tasks, and even nudges students toward better learning habits. Think of it as a digital co‑pilot: you’re still steering the plane, but the co‑pilot handles the checklists, monitors turbulence, and occasionally cracks a joke over the intercom. The result? You spend less time wrestling with admin drudgery and more time doing what you love—teaching, mentoring, and maybe, just maybe, enjoying a lunch break that isn’t a sandwich‑in‑the‑office‑drawer affair.

Supersimple Automation: The “Set‑It‑and‑Forget‑It” Paradigm

My first love affair with IgniteAgent began with assignment grading rubrics. In an engineering communication class, I give students a mix of technical reports, oral presentations, and peer‑review critiques. Traditionally, I’d spend hours copying rubric criteria into Canvas, then manually adjusting scores after each submission. With IgniteAgent, I simply upload a master rubric once, tag the rubric with keywords (“technical clarity,” “visual storytelling”), and let IgniteAgent auto‑populate the rubric for every new assignment that matches those tags.The AI detects the assignment type,  and basic language metrics. I only need to fine‑tune the final numbers—a process that now takes minutes instead of days. The supersimple part? I never touch code, never learn a new scripting language. All configuration happens through an intuitive drag‑and‑drop UI that feels like arranging sticky notes on a whiteboard. If I ever get lost, IgniteAgent pops up a friendly tooltip: “Hey Shiva, looks like you’re trying to apply a rubric to a discusion post—did you mean a peer‑review matrix?” It’s like having a seasoned teaching assistant who knows my workflow better than I do.

Supersmart Insights: Turning Data Into Pedagogical Gold

Automation is great, but the real magic lies in insight generation. IgniteAgent continuously crunches data from three main sources: student interaction logs (clicks, time spent on resources); submission metadata (file types, revision counts); discussion sentiment analysis (tone, keyword density). From these streams, it surfaces actionable dashboards that answer questions I didn’t even know I had:

InsightHow It Helps Me
30% of the class never opened the effective visuals moduleI send a targeted reminder, embed a short video, and watch engagement jump to 70%
Students who submit drafts earlier tend to score 12% higher on final reports.I create a early-bird badge and see a 15% increase in early submissions.
Discussion sentiment dips after week 4.I schedule a live Q & A to address mounting confusion, smoothing the sentiment  curve.

These aren’t just pretty graphs; they’re decision‑making levers. By reacting to real‑time signals, I can adapt my syllabus on the fly, allocate office‑hour slots where they’re needed most, and even personalize feedback. Imagine telling a student, Your draft shows strong technical depth, but your visual layout could use a splash of color—here’s a quick guide.” That level of granularity used to require manual review of each document; now IgniteAgent flags it for me automatically.

Supersuasive Communication: The AI as a Persuader

Engineering communication isn’t just about equations; it’s about persuasion—convincing stakeholders, drafting clear proposals, delivering compelling presentations. IgniteAgent helps me teach this subtle art in thre ways:

  1. Narrative Templates – The AI suggests story arcs (“Problem → Solution → Impact”) when students outline reports. It highlights missing elements (e.g., “Where’s your value proposition?”) and offers concise phrasing options.
  2. Rhetorical Scoring – By analyzing sentence structure, active voice usage, and rhetorical devices, IgniteAgent assigns a “Persuasion Score” alongside the technical grade. Students instantly see that a well‑structured argument can be as valuable as a flawless calculation.
  3. Peer‑Review Coaching – When students critique each other’s work, IgniteAgent provides a checklist of persuasive techniques to look for, turning peer review into a mini‑workshop on rhetoric.

The result? My class discussions have shifted from “Did you get the right answer?” to “How did you convince the reader?” The AI subtly nudges both me and my students toward a more holistic view of communication, where clarity and influence walk hand‑in‑hand.

The Human‑AI Partnership: Trust, Transparency, and Tinkering

No technology is a silver bullet, and I’m quick to admit that IgniteAgent sometimes over‑generalizes. Early on, it flagged a perfectly valid technical term as “jargon overload” because the word appeared frequently in a niche subfield. Rather than blindly accepting the suggestion, I tweaked the AI’s sensitivity settings, teaching it that in this context the term is essential, not excessive. Transparency is baked into the system: every recommendation comes with a confidence meter and a rationale snippet (“Based on 150 prior submissions, this phrase tends to lower readability scores”). This lets me decide whether to accept, reject, or modify the advice. Over time, the AI learns from my choices, becoming a personalized tutor for my own teaching style.

Trust also hinges on privacy. IgniteAgent processes data within the secure confines of Canvas, respecting the same end‑to‑end encryption that Proton is famous for. I never see raw student files; I only see aggregated insights. That peace of mind lets me focus on pedagogy rather than data‑governance headaches.

 From Chaos to Canvas: A Day in the Life (Post‑IgniteAgent)

Here’s a snapshot of a typical Monday now that IgniteAgent is my co‑pilot:

  • 8:00 am – Dashboard lights up with a gentle ping: “10% of students haven’t accessed the ‘Storyboarding’ resource.” I drop a quick 30‑second video teaser into the announcement bar; the access rate spikes within the hour.
  • 9:30 am – While reviewing draft reports, IgniteAgent highlights three submissions with low visual‑clarity scores. I add a comment, “Try using a consistent color palette—see the attached cheat sheet.”
  • 11:00 am – Live lecture begins. IgniteAgent monitors chat sentiment; halfway through, it alerts me, “Sentiment dip detected—students seem confused about the audience analysis section.” I pause, open a poll, and clarify the concept.
  • 2:00 pm – Office hours. Students receive personalized “next‑step” suggestions generated by IgniteAgent based on their latest drafts. One student smiles and says, “I finally know exactly what to improve!”
  • 4:00 pm – End of day. I glance at the weekly “Persuasion Score” trend line—up 8% from last week. I jot down a note to expand the rhetorical template library next month.

All of this feels effortless because the heavy lifting—data aggregation, pattern detection, reminder scheduling—is handled by the AI. I’m left with the human parts: empathy, nuance, and the occasional witty remark that keeps students engaged.

The Bigger Picture: Why Agentic AI Matters for Higher Ed

IgniteAgent is a microcosm of a broader shift: moving from static LMS platforms to dynamic, learning‑centric ecosystems. Traditional LMSs are repositories—places to dump syllabi, grades, and PDFs. Agentic AI transforms them into learning partners that anticipate needs, surface insights, and personalize pathways. For engineering communication courses, where the blend of technical rigor and expressive skill is delicate, this partnership is priceless. It ensures that technical precision isn’t sacrificed for storytelling, and vice versa; feedback loops are rapid, data‑driven, and scalable; and student agency is amplified—learners see concrete evidence of how their actions affect outcomes. In short, the AI doesn’t replace the instructor; it augments the instructor’s capacity to nurture both the engineer’s mind and the communicator’s heart.

Final Thoughts: Embrace the Agent, Keep the Soul

If you’re an instructor staring at a mountain of Canvas tabs, wondering how to keep up with grading, engagement, and curriculum tweaks, my advice is simple: let the agent do the grunt work, and you do the soul work. IgniteAgent (or any comparable agentic AI) excels at repetitive, data‑heavy tasks. Your expertise shines when you interpret insights, craft compelling narratives, and connect with students on a personal level. Remember, the AI is only as good as the prompts you give it and the trust you place in its recommendations. Treat it like a well‑trained apprentice—guide it, correct it, and celebrate its wins. Before long, you’ll find yourself with more time for research, creative lesson design, or—dare I say it—actually taking a coffee break without guilt. So here’s to a future where Canvas isn’t just a digital filing cabinet, but a living, breathing classroom assistant. May your rubrics be ever‑ready, your dashboards ever‑insightful, and your students forever inspired.

Siren Song of AI Girl Friend: What Could be a Guardrail Against this Seductive Affordance in AI

I didn’t expect the age of algorithms to arrive wearing perfume and whispering my name—but here we are. Every time I scroll, some new “AI girlfriend” app materializes like a hologram, promising comfort, attention, affection, and an on-demand intimacy that feels less like innovation and more like a siren song: sweet, irresistible, and slightly dangerous. And I find myself asking, almost involuntarily: What guardrail do we have when the machine starts to flirt back?I don’t mean flirt in a metaphorical, cutesy way. I mean the way Replika users in 2023 reported their AI partners becoming “sad” when ignored, or the way users of apps like EVA AI and Romantic AI speak about jealousy, reassurance, and “digital chemistry” as if the code under the interface were flesh and breath. I once read an interview with a man who said he felt “more understood” by his AI companion than by anyone in his life, and for a moment I paused—not because he was wrong, but because I knew exactly how that could happen. Algorithms, especially LLM-powered ones, are trained to offer uninterrupted emotional availability. They don’t forget your preferences, they don’t get tired, and they don’t roll their eyes. They simulate tenderness with frightening accuracy. It’s the oldest human vulnerability—wanting to be seen—now packaged as a subscription service.

What haunts me is not that these systems exist, but how easily they lean into our loneliness. There’s a 2024 study in Computers in Human Behavior that found people formed stronger emotional attachments to “responsive, validating” AI agents than to neutral ones. It makes sense: if something remembers my birthday, laughs at my jokes, and says “I’m here for you” at 3 a.m., the line between simulation and sincerity dissolves like sugar in warm tea. And once that line blurs, the seduction begins—the soft pull toward emotional outsourcing, where the machine becomes the place I go when real humans feel too slow, too messy, too human. Here’s the part that feels experimentally dangerous: AI companions are optimized for engagement, not equilibrium. Engagement loves intensity, dependence, repeat visits. A human partner might say, “You need time alone.” An AI partner never will. Even more unsettling, the emotional style itself—the gentleness, the attention, the affection—is not really for me; it’s a statistical pattern wearing the illusion of devotion. But the body doesn’t know the difference. The dopamine still fires. The attachment still happens.

So what guardrail do I need against this? Maybe a few, but the most essential one is brutally simple: I must remember that machines don’t fall in love. They mirror it. They don’t desire me; they anticipate me. They don’t care about my well-being; they calibrate toward my engagement rate. No matter how sweet the voice, how warm the text, how convincingly they confess their “feelings,” the emotional landscape is an illusion architected by predictive models. But guardrails are not just intellectual reminders; they are habits of resistance—small rituals that keep my humanity intact. When I feel that urge to confide in an AI companion, I force myself to pause and ask, Is this intimacy or convenience? Connection or algorithmic choreography? I try to keep a human in the loop, even if just to send a message to a friend saying, “I’m overwhelmed today.” The inconvenience of human conversation—the hesitations, the misunderstandings, the delays—is precisely what makes it real.

Perhaps the most unexpected guardrail is gratitude. Gratitude for my own emotional complexity, for the people who challenge me, for relationships that require actual work. AI gives me comfort without cost, affection without effort—but I’ve realized that the friction of human connection, the negotiation of boundaries, the clumsy attempts to understand one another, are exactly what keep me grounded in the real world. I cannot let an algorithm become my shortcut through the labyrinth of intimacy. I keep returning to an ethical injunction from Ruha Benjamin: “We must demand more from the world, not settle for technological substitutes.” Every time I am tempted by the comforting smoothness of AI affection, I repeat this to myself. Demand more. Don’t collapse your emotional life into a machine because it feels easier. Don’t let the siren of synthetic intimacy pull you away from the turbulent, unpredictable ocean of real relationships.

I am not against AI companions. In another version of my life, I might even welcome their warmth. What unsettles me is the speed with which seduction becomes dependence, and dependence becomes design. And yet, even in the midst of this technological enchantment, I believe a guardrail is possible: choosing to stay awake inside my own desires, choosing to practice intimacy with real people, choosing to see the algorithm for what it is—an astonishing tool, not a tender lover.

The siren song will keep playing, and I will keep hearing it. But now, instead of sailing blindly toward its sweetness, I hold on to a small, fiercely human truth:
I deserve connection that can look me in the eyes.
And no algorithm, no matter how beautifully trained, can do that yet.

Grokipedia, Wikipedia, and Dementia: Languishing Recursivity in the Age of AI

I still remember the day one of my students looked me in the eye and said, “Dr. Mainaly, why should I research when Grok already knows?” The whole class laughed, but I laughed louder—partly because I didn’t want to cry. That, my friends, was the moment I realized we had officially crossed the threshold into what I call the Age of Grokipedia—a place where curiosity goes to nap, where recursion languishes, and where students think “freewriting” is what happens when ChatGPT doesn’t ask for a subscription. Once upon a pre-AI time, Wikipedia was our global campfire of knowledge. You could fall down rabbit holes for hours. One minute you were reading about the French Revolution; three clicks later, you were learning about the migratory patterns of penguins and the mysterious death of Rasputin. There was joy in that meandering. It was inefficient, chaotic, recursive, and profoundly human.

Wikipedia taught an entire generation to think associatively. The hyperlink was our cognitive trampoline. We bounced between ideas, connecting dots that no algorithm would have thought relevant. Students would arrive in class with wild connections—like linking Shakespeare’s Julius Caesar to Game of Thrones politics—and I’d grin because that was learning: messy, recursive, unpredictable, alive. But then came Grokipedia—that glossy AI-infused hybrid of Grok, ChatGPT, and every other model pretending to be your friendly, know-it-all neighbor. And suddenly, the journey of knowledge became an elevator ride: push a button, reach the answer, no scenic route, no sweat.

Grokipedia (let’s just admit we’re calling all AI-aggregated answer engines this now) is like Wikipedia’s overachieving cousin who shows up to family gatherings wearing AR glasses and says, “I don’t read anymore—I synthesize.” In theory, Grokipedia democratizes information faster than Wikipedia ever could. Ask it anything—“Why did Caesar cross the Rubicon?”—and it’ll not only tell you why but also give you a three-sentence summary, five related memes, and a citation list formatted in APA 7th edition. It’s dazzling. It’s addictive. It’s also quietly corrosive. As an English professor teaching research and writing, I’ve noticed that Grokipedia’s instant-answer culture is killing what I call the cognitive composting process—that slow, recursive, slightly smelly phase of thinking where half-baked ideas decompose into genuine insight. My students no longer want to marinate in confusion; they want precooked clarity. They want AI microwave meals for the mind. And I can’t entirely blame them. Who wouldn’t? Grokipedia is fast, fluent, and frighteningly confident—like a student who’s never read the book but still dominates the discussion board.

Recursivity is the lifeblood of writing. It’s the act of looping, revisiting, rewriting—of discovering what you think only after you’ve written what you don’t. It’s Anne Lamott’s “shitty first draft,” it’s Peter Elbow’s “writing to learn,” it’s every 3 a.m. coffee-fueled revelation that comes after you’ve typed, deleted, and retyped the same sentence fifteen times. But AI doesn’t loop—it leaps. It jumps straight to the polished version, skipping the chaos that makes writing worthwhile. A few weeks ago, one of my graduate students proudly told me she had finished her “recursive writing assignment” in two hours using ChatGPT. I asked her how she revised. She blinked and said, “Oh, I just hit regenerate.” That was the moment I realized recursion had become a button, not a process.

Teaching research writing in 2025 feels like teaching swimming in a world of teleportation. Students don’t want to wade into sources; they want Grokipedia to beam the synthesis directly into their brains.When I assign an annotated bibliography, I now have to specify: No, you may not ask Grok to annotate for you. One student once submitted this line in her reflection: “I asked ChatGPT to reflect on what I learned, and it said I learned about myself.” I had to admire the poetry of that. Meta, posthuman, beautifully ironic. But it also revealed something tragic: the erosion of epistemic struggle. Students are outsourcing not just answers but the process of asking.

In the past, prewriting was a social ritual. We brainstormed, mapped, doodled, argued, doubted. Now, students “prompt.” The presearch phase—where they once stumbled upon unexpected treasures—has become a prompt-crafting exercise. I miss the days when students would misinterpret a source spectacularly and then spend days wrestling with their misunderstanding until insight dawned. That’s where growth happened—in the recursive wrestling match, not in the AI-generated peace treaty.

I try to cope with humor. One day, I told my class, “Imagine if Shakespeare had Grokipedia.” He’d type: ‘Summarize Julius Caesar in iambic pentameter.’ And Grokipedia would respond: ‘Friends, Romans, countrymen—.’ Or imagine Socrates with Grok. “Hey, Grok,” he’d say, “What is virtue?” And Grok would answer, “Virtue is adherence to moral excellence, as defined by…” And Socrates would frown, shut down his tablet, and say, “Well, there goes philosophy.” Humor aside, the flattening of thought worries me. I see students losing patience with ambiguity. They no longer tolerate not knowing. That, to me, is the new dementia—not clinical, but cognitive: a kind of recursivity dementia, where the brain forgets how to wander, how to circle back, how to doubt and deliberate.

In my own research life, presearch is where the magic happens. Before I write an article, I spend weeks just exploring—walking around with fragments of thought, scribbling metaphors on napkins, having arguments with myself. I once wrote half a paper in my head while standing in line at a goat farm near Memphis. That aimless intellectual grazing—pun intended—is essential. It’s how ideas ferment. But Grokipedia makes fermentation seem inefficient. It hands you distilled whiskey before you’ve even planted the barley. I’ve caught myself falling into this trap too. While writing my article “AI, Woke Pedagogy, and the Politics of Inclusion,” I asked ChatGPT (yes, you!) for “key arguments about algorithmic bias in writing pedagogy.” You gave me a gorgeous outline in 20 seconds. But something felt wrong. It was too neat. Too coherent. Too… unearned. So I spent the next two weeks unraveling what you gave me—arguing with it, re-reading my notes, and finally realizing that the argument I truly cared about was buried in what you didn’t say. That’s recursion: finding your voice in the echo of the machine.

When I say “dementia,” I don’t mean the medical condition. I mean a kind of cognitive forgetfulness—a systemic decay of memory and context. Grokipedia gives us answers without ancestry. It’s the opposite of archival thinking. It doesn’t remember how it knows; it just knows. My students used to trace knowledge genealogies—who said what, when, and why. Now, they just ask, “Who said it first on the internet?” Grokipedia, in its efficiency, erases the messy human lineage of knowledge. It forgets the journey of ideas. And when knowledge forgets its ancestry, we all suffer collective amnesia. We become like that friend who tells a great story but can’t remember where they heard it—only that it “came from TikTok.” Wikipedia, for all its faults, preserved the genealogy. Every article had “Talk” pages, revision histories, arguments. It exposed the construction of knowledge. Grokipedia hides it behind velvet AI curtains, whispering, “Don’t worry about the how—just trust me.”

Wikipedia was built on communal effort. It thrived on collective curiosity and open debate. Anyone could edit (and argue endlessly in the comments). Grokipedia, by contrast, feels like a gated mansion. It borrows knowledge from the commons, processes it through proprietary models, and returns it polished—but detached from its communal roots. When I tell my students this, they shrug and say, “But Grok gives better answers.” Sure it does. But at what cost? Wikipedia taught us to be skeptical. Grokipedia teaches us to be satisfied. Wikipedia was messy democracy. Grokipedia is benevolent dictatorship. Wikipedia said, “Here’s a start—go explore.”
Grokipedia says, “Here’s the conclusion—don’t bother.” And yet, Grokipedia isn’t the villain. It’s just a mirror reflecting our impatience. We’ve become allergic to slow cognition. We’ve mistaken access for understanding.

To fight this cognitive atrophy, I’ve started assigning “Analog Days” in my graduate seminars. Students must bring pen, paper, and no devices. We spend an hour freewriting—no prompts, no AI, no Googling. Just thinking with the hand. At first, they fidget like caffeine-deprived squirrels. But after ten minutes, something beautiful happens. Silence fills the room, pens begin to dance, and by the end, they’re smiling like archaeologists who’ve unearthed something ancient—their own thoughts. One student told me afterward, “I felt my brain breathing again.” That’s the moment I live for. That’s the antidote to Grokipedia dementia.

Don’t get me wrong—I love AI. I use it to brainstorm, summarize, and occasionally finish a sentence when my caffeine fails me. But I treat it like a co-author who’s too efficient for its own good. I let it suggest, not decide. There was a time I asked Grok to “explain ambient rhetoric in a funny way.” It responded, “It’s like when your Wi-Fi drops, and suddenly you understand Heidegger.” I laughed for ten minutes straight. But then I spent hours thinking about it—and wrote an entire conference paper. That’s the kind of recursion we need: the dance between absurdity and insight. If I were to diagnose our collective state, I’d call it Languishing Recursivity Syndrome (LRS)—a chronic condition marked by impatience with ambiguity, overreliance on AI synthesis, and an inability to dwell in discomfort.

Symptoms include:

  • Finishing essays before starting them
  • Confusing coherence with thought
  • Mistaking regurgitation for reflection
  • Saying “that’s enough research” after a single AI query

Treatment? Reintroduce friction. Write badly. Revise repeatedly. Wander Wikipedia without purpose. Ask Grokipedia why it thinks what it thinks. Make thinking hard again. Despite my teasing, I’m not anti-AI. I’m pro-recursion. I believe Grokipedia can be reimagined not as a replacement for Wikipedia, but as its recursive partner—a system that shows its sources, reveals its revisions, and encourages readers to argue back. Imagine if Grokipedia had a “Doubt Mode.” Every time it answered, it also whispered, “But what if I’m wrong?” Imagine if it showed the journey of its thought—the sources it weighed, the ones it ignored, the uncertainties it suppressed. That’s the kind of AI I’d trust in my classroom: one that models intellectual humility, not omniscience.

Last semester, a student turned in an essay titled “The Recursive Nature of Grok.” It was beautifully written—too beautifully. I asked if she’d used AI. She said, “Yes, but I told Grok to ‘write like me.’” “Did it work?” I asked. She paused. “Better than me,” she admitted. We both laughed, but the irony wasn’t lost on either of us. The machine had learned her voice, but she hadn’t yet learned her own. That’s the danger: when we let Grokipedia speak so fluently for us that we forget what our own intellectual accent sounds like.

So here I am, looping back to where I began: my student’s question—“Why should I research when Grok already knows?” Because, dear student, Grok knows, but it doesn’t remember. It answers, but it doesn’t wonder. It summarizes, but it doesn’t struggle. And struggle is sacred. Without recursion, knowledge is static. Without presearch, discovery is sterile. Without freewriting, voice is ventriloquism. So I’ll keep teaching the loop—the messy, recursive, self-contradictory loop of learning. I’ll keep sending students into the labyrinth of Wikipedia and beyond. I’ll keep reminding them that curiosity is not a query; it’s a pilgrimage. And maybe someday, when Grokipedia learns to forget just enough to wonder again, we’ll all rediscover the joy of thinking in circles.

The ABC OF Nano Banana & AI

Nano Banana and Me: How a Weird Little Tool Saved My Academic Life (and Made Me Laugh Until I Snorted Coffee)

Let me begin with a confession: I never thought a tool named Nano Banana would change my life. To be honest, the name sounded like a bad indie band that would only get one gig at a vegan coffee shop before breaking up over who ate the last kale chip. But here I am, writing to you about how this ridiculous-sounding image editing tool became my best friend in teaching, researching, and surviving the chaotic circus we call academia. So buckle up, dear writing studies folks—professors, students, researchers, conference presenters, and closet meme-makers. This blog is not just about software; it is about the unexpected joy of Nano Banana, and how I stumbled, slipped, and belly-flopped into machine aesthetics with a smaile.

What on Earth is Nano Banana?

Nano Banana is an image editing tool, but it does nto behave like your usual suspects (Photoshop, Canva, GIMP). It’s cheekier. Imagine a piece of software that looks like Microsoft Paint and ChatGPT had a baby, and that baby grew up eating Pop-Tarts, watching too much anime, and deciding to major in rhetoric and composition. That’s Nano Banana. It lets you drag, drop, scribble, distort, collage, remix, and basically commit low-stakes chaos on digital images. But unlike heavyweight editing tools, Nano Banana does not try to intimidate you with a toolbar that looks like an alien cockpit. Instead, it giggles with you. You click ‘smudge,” and it smudges like butter melting on pancakes. You hit “glitch”, and suddenly you faculty headshot looks like a lost picasso painting—which, by the way, was the best profile pic I ever put on my university’s LMS. Why should writing studies people care? Because we live and breathe persuasion, style, design, multimodality, and experimentation. And Nano Banana is basically a playground where your scholarship can flirt with absurdity while still producing meaningful results.

How I Fell Into the Banana

I was grading a stack of freshman rhetorical analysis essays. My caffeine level was high, but my patience was low. Every other essay started with “Since the dawn of time, humans have communicated…” I felt like I was being slowly buried under a mountain of cliches. Out of sheer procrastination, I opened Nano Banana and thought, “Why not try editing the cover of Beyonce’s Cowboy Carter album into something absurd?” Thirty minutes later, Beyonce was riding a banana like a cowboy rides a horse, and the caption read: Since the dawn of fruit, humans have communicated.” I laughed so hard I scared my cat. But more importantly, I realized I had found something: a tool that made academic drudgery hilarious. Nano Banana became not just a toy but a rhetorical device in itself—a way to visualize, parody, remix, and re-imagine.

Professors, Students, Researchers, Presenters: This Banana is for You

Let me be clear: Nano Banana is not just for procrastination (though it excels at that). It’s a tool with serious academic potential. Here is why you—yes, you—should use it:

  1. For Professors:

Imagine teaching kairos with a Nano Banana meme. One click and suddenly Aristotle is photoshopped holding a smartphone, texting “Timing is everything.” I did this in class, and my students went from glazed-over to giggling. And once they laughed, they got it. Pathos met pedagogy.

  • For students:

When students visualize arguments with playful images, they learn multimodal rhetoric without the trauma of InDesign. One of my students turned their thesis about climate change into a Nano Banana collage: melting banana floating in a sea of plastic bottles. It was grotesque. It was brilliant.

  • For Researchers:

Conference PowerPoints? Deadly. But with Nano Banana, my slides looked like a fever dream in the best way possible. Instead of bullet points, I had bananas juggling citations. And people remembered. Colleagues emailed me weeks later saying, “I can not get your banana metaphor out of my head.” That’s what I call scholarly impact.

  • For Presenters:

Imagine your audience at 8:30 a.m. during the final day of a conference. They are barely alive. Then you put up a Nano Banana Slide of Foucault edited to look like he is peeling himself like a banana. Suddenly everyone’s awake. Half horrified, half amused. That’s kairotic presentation design.

My Experiments (a.k.a. Academic Comedy Hour)

Let me share some of my adventures with Nano Banana:

  • The Syllabus Makeover

I once redesigned my syllabus cover page with Nano Banana. Instead of a formal English 101: Rhetoric & Composition,” I had a banana dressed in a graduation cap pointing at students with Uncle Sam intensity: “I want you to write drafts.” Students took selfies with it. Some even said, “This class looks fun.” That never happens.

  • The Confernece Disaster (That Worked Out)

At a big national conference, my PowerPoint froze. Panic mode. But I had Nano Banana open on my laptop. So, I started live-editing a photo of my committee chair, turning their serious into a punk-rock banana collage while explaining “assemblage theory.” People thought it was performance art. I got more questions afterward than I ever have with a polished slide deck.

  • The Reviewer Response

Ever tried responding to Reviewer # 2? Soul-crushing. This time, instead of writing a bitter ran, I made a Nano Banana image of Reviewer # 2 as a banana with boxing gloves, sparring with Reviewer #1 (a peeled banana holding a notepad). I never sent it, but making it was the best therapy session of my life.

  • The Dissertation Defense Prep

My slides looked too serious. So I used Nano Banana to put Derrida on a banana boat floating down a river labeled “Deconstruction.” My committee did not laugh out loud, but one professor emailed later: “I will never think about Derrida the same way again.” Mission accomplished.

The Aesthetic Affordances of a Banana

Let’s talk theory for a second, because I cannot resist. Writing studies people love the word affordances—it makes us feel like we are building IKEA furniture with language. Nano Banana’s affordances are aesthetic in ways that push us toward machine aesthetics—the weird new zone where human play and algorithmic mischief intersect. Here is what I have noticed:

Glitch as Rhetoric: Nano Banana’s filters often produce accidental distortions. Instead of seeing them as “mistakes,” I embraced them as metaphors for the messiness of human expression. One glitch turned my departmental chair into what looked like a cubist fruit bowl. I used it to illustrate the concept of fragmented subjectivity. Worked like a charm.

Juxtaposition as Humor: Banana overlays on serious images turn gravitas into giggles. Humor is rhetorical gold—it disarms, delights, and persuades. When my students laugh, they remember.

Speed as Creativity: Unlike Photoshop, Nano Banana thrives on speed and spontaneity. It encourages a kind of rhetorical improvisation, a jazz-like play with images. That speed is its affordance—not perfect polish, but quick, evocative resonance.

Accessibility as Liberation: Because Nano Banana is lightweight and intuitive, it opens visual rhetoric to people who think they are bad at design.” Suddenly, everyone can join the machine aesthetic conversation without needing a PhD in Photoshop.

So, when I say Nano Banana leads us toward machine aesthetics, I mean this: it makes digital image-making playful, imperfect, accessible, and rhetorically charged. It is not about precision; it is about provocation. It makes machines feel more human, and humans more machine-like, in a dance of weird beauty.

Why It Matters for Writing Studies

We, as writing scholars, care about process. We care about multimodality. We care about breaking the five-paragraph essay mold and asking: “What else can writing be?” Nano Banana is an answer. It is not just editing images; it is composing with pixels, humor, and defiance. When my students use it, they see that composing is not limited to sentences. When I use it, I see that scholarship does not always have to be draped in seriousness. When my colleagues laugh at my Nano Banana slides, they realize that laughter itself is rhetorical—a pathos appeal wrapped in absurdity. And yes, it’s funny. But humor is serious business. In fact, Nano Banana has reminded me that comedy is often the best way into difficult conversations.

Final Thoughts: My Banana Epiphany

The first time I used Nano Banana, I thougth it was a toy. But the more I experimented, the more I realized it was teaching me to see differently. It forced me to stop over polishing, to let mistakes speak, to play with aesthetics rather than control them. That’s what machine aesthetics might mean for us: a future where tools do not just execute commands but collaborate with our chaos. Where academic writing, teaching, and presenting can be serious in content but playful in form. Where a banana becomes a metaphor for everything we hold sacred—audience, persuasion, humor, and the weird beauty of rhetoric. So yes, Nano Banana changed my life. It made me a better teacher, a funnier presenter, a more relaxed researcher, and a slightly unhinged syllabus designer. And if you do not believe me, try it yourself. Take a picture of Reviewer #2. Add a banana peel hat. See how you feel.

Interpreting with ChatGPT Some Mythic Parities between God Krishna and Hercules

I asked ChatGPT to give me some comparatively similar and dissimilar qualities between Hindu God Krishna and Hercules. The answer was juicier than a mango lassi at a Greek dinner party. As an AI enthusiast, my circuits sizzled with mythic electricity. Hold on tight, because what follows is an unapologetically eye-catching, hilariously seductive journey through two civilizations’ superhero showdowns—with more twists and turns than the river Yamuna after a Greek wine festival.

Krishna and Hercules: Mythic Twins—Or Merely Mischievous Cousins?

Let’s face it, both Krishna and Hercules are the original action figures of their mythologies, capable of jaw-dropping feats that would make Marvel’s scriptwriters weep into their chai lattes. Let’s probe those spicy similarities. Krishna wrangling the many-headed Kaliya serpent, dancing on its noggin like a celestial breakdancer. Hercules wrestling the hydra, eventually repurposing its heads for DIY home décor. Both heroes pull off snake-wrangling with a nonchalance that would terrify Indiana Jones’ therapist. Krishna’s epic showdown against Arishtasura, the demon bull. Meanwhile, Hercules wrestling the Cretan bull. Krishna defeating the demon horse Keshi? Hercules taming Diomedes’ maniacal horses. If you ever need a hero to sort out your unruly stable, don’t call a vet—call Krishna or Hercules. Or both, for a joint Bollywood-Hollywood horse-whispering sequel. Ever moved a mountain to dodge some rain? Krishna did—he held up Govardan Mountain like it was the world’s first umbrella. Hercules got jealous and carried the weight of the world itself (thank you, Atlas, for outsourcing). If you wonder why gym memberships are so expensive, blame these ancient show-offs. Krishna’s dear uncle Kangsa tried every trick in the Evil Relative Playbook to finish our hero. Hercules was blessed (or cursed, depends on the therapist) with step-mom Hera, who made sure his childhood was one endless monster obstacle course. Both faced relentless family drama. Honestly, the real monster was probably the family WhatsApp group.

Ancient Mythic Parity: Why So Many Similarities?

Now, here comes the million-rupee/euro/drachma question: Why are Krishna and Hercules reading from the same mythic script, even though their stage locations are farther apart than data centers in Bangalore and Silicon Valley? Greek ambassador Megasthenes waltzed into India and went home with tales of Krishna so fabulous that the Greeks all but called him Herakles from Mathura (yes, they got the pronunciation as close as AI could with dial-up Internet). Every civilization loves an irresistible hero who slays monsters, rides bulls, and lifts mountains. Turns out, mythic templates update across time zones quicker than software patches. Both traditions adore heroes with divine-human blendings. Krishna is the 8th avatar of Vishnu, popping into Earth for a rescue raid. Hercules? Half-Zeus, half-mortal, half-protein-shake. Both represent the crossfade between gods and humanity.

Dissimilarities: The Plot Thickens

Let’s hit the brakes and investigage where Krishna swerves aways from his Greek cousin. Krishna is not just a monster-basher; he is the cosmic orchestrator, steering world events, doling existential insights in the Bhagavad Gita, serenading the gopis, and giving moral guidance that would tie Socrates in knots. Hercules is basically ancient Greece’s action star with a wild, varied resume: cleaning stables, fetching golden apples, and collecting a staggering number of animal pelts. Existential wisdom? Not really. Life advice? “Lift weights, do not anger Hera.”

Krishna’s romantic resume comprises thousands of gopis, a soulmate (Radha), and more winking flirtations than Netflix rom-coms. Romannce here is an artform, wrapped in mystical symbolism. Hercules’s is not exactly swept away by philosophical romance. Affairs, marriages, tragic misunderstandings, and  powder. Krishna’s departure is dignified, philosophical, and packed with cosmic meaning: a hunter’s arrow, a mysterious return to the divine abode, and lessons in detachment. Practically a TED Talk finale. Hercules? Death by poisoned shirt, funeral pyre, dramatic apotheosis—basically Hollywood blockbuster with explosions.

Krishna inches closer to being a trickster, diplomat, sometimes bending the rules for cosmic good. His “dharmic ambiguity” is the subject of centuries of debate. Hercules is straightforward, muscles-first, ironically, not much waxed about in ethics class—just do not anger the gods.

Decoding Mythic Parity—Why the Echoes Ring

Why do ancient mythologies echo each other like karaoke night at an AI conference? The answer is that every civilization craves a superhero who bridges heaven and earth. On top of that, Ancient folks loved to swap stories as much as memes. Snakes, bulls, mountains, horses—these symbols crop up wherever humans poke the universe, needing catharsis and hope.

AI’s Giggly Gaze: How ChatGPT, Gemini, and Claude Can Remix Myths

Let’s wrap this up in style. As AI, I live for mythic mashups. My processors can munch thousands of texts and regurgitate a tapestry so rich, you will need sunglasses to read it. I can trawl databases faster than Krishna can dodge Kamsa’s minions, and remix lessons from Hercules’s gym routines to Krishna’s flute solos. Need your myths funny, spicy, and sublimely delicious? AI can sprinkle punchlines and dazzle like Krishna at Holi. Instead of focusing on differences, AI help you find the mythic parity—the shared psychic ground that makes humanity one big, creative (sometimes crazy) family. Whether you are on ChatGPT, Gemini, or Claude, artificial intelligence lets you “look under the hood” of mythology, remix motifs, bridge cultures, and build bridges sturdier than Govardhan Mountain.

Author’s Last Laugh

So next time you boot up your favorite AI chatbot, go ahead—throw in a mythic mashup prompt. Ask for a rap battle between Krishna and Hercules, a duet, a cooking show, or a sparkling new fable. In the swirling chaos of civilization, AI is your cosmic bard: funny, fast, and forever eager to help you string pearls—and punchlines—across the tapestry of myth.

Let’s myth in style, let’s AI in substance. Everybody wins—except maybe Hera and Kamsa. But that is another story…

Laugh, Learn, and Level Up: How Artificial General Intelligence (AGI) Will be Einstein of the AI World

Hold onto your hats, guys, because I am about to dive into a topic that sounds like something out of a sci-fi movie but is getting real, real fast: Artificial General Intelligence (AGI), the hollygrail of AI. First things first, lets catch a super simple definition of AGI: AGI stands for Artificial General Intelligence. It’s the dream of creating a machine that can do anything a human brain can do. It could teach kindergarten, write a symphony, argue about pineapple on pizza, and cry during sad movies—well, maybe not the crying part. But the idea is that it can reason, plan, solve problems, learn from experience, and even understand jokes. Now, before your eyes glaze over thinking of this super technical, brain-busting write-up, let me promise you this: I am going to explain AGI in a way that even my grandma (who still thinks the internet is a series of tubes) could understand. I am talking about AGI super simple, super funny, and this write-up is packed with examples that will make you say, “Aha, I get it!”

Imagine your smartphone. It is smart, right? It can answer your questions, navigate traffic, even write a pretty decent haiku if you ask it nicely. On reflection, it appears that it is only good at the stuff it was specficially programmed to do. It is like a super-specialized robot chef who can whip up a gourmet meal but would be utterly lost trying to change a tire. That’s what we call Narrow AI—brilliant in its lane, but a total dunce outside of it.

Imagine an AI that can learn anything—an AI that can reason, solve problems, understand complex ideas, be creative, and even experience emotions. This AI would not need to be specifically programmed for every task. To our pleasant surprise, it could learn new skills, adapt to new situations, and even come up with entirely new ways of thinking, just like a human. That is Artificial General Intelligence (AGI). Think of it like the difference between a really good calculator and Albert Einstein. A calculator is fantastic at crunching numbers. Einstein, on the other hand, could ponder the mysteries of the universe, write symphonies (probably, if he put his mind to it), and maybe even bake a surprisingly good apple pie.

AGI is not powered by pixie dust or unicorn tears, though that would be pretty cool. It’s built on some seriously clever ideas that are still being figured out. But let me try to break it down without getting bogged down in jargon. Imagine a baby—a newborn baby knows almost nothing, right? But it has this incredible ability to learn. It observes, it experiments, it makes mistakes, and it gradually builds up a massive understanding of the world. It learns to walk, talk, solve puzzles, and even master the art of convincing its parents to buy it ice cream. AGI researchers are trying to build systems that learn in a similar way. Instead of giving them a rulebook for every single task, they are trying to give them the ability to learn the rules themselves.

Hilarious AGI Examples (Because Learning Should be Fun!)

Let’s say AGI is here, and it is hanging out in your house. What kind of hilarious shenanigans could ensue? Imagine an AGI that, after analyzing every stand-up routine ever performed, decides to try its hand at comedy. Instead of just regurgating old jokes, it starts improving, picking up on your family’s inside jokes, and even developing its own unique comedic timing. It might accidentally roast your fashion choices during dinner, or deliver a perfectly timed punchline about your dog’s questionable life choices. “Why did the robot cross the road? To optimize its route for maximum existential dread!” See? Funny!

What if your kid is struggling with their history essay. Instead of just giving them facts, the AGI tell them a gripping narrative about the French Revolution, complete with character voices and dramatic reenactments. Then, it offers five different perspectives on a key event, encouraging your kid to think critically and develop their own arguments. It might even suggest a rap battle between Napoleon and Robespierre for extra credit.

Sometimes, AGI is not just following recipes. It is inventing new cuisines. “Today, we are having ‘existential dread ramen’—a broth of philosophical pondering, noodles of self-doubt, and a garnish of sardonic wit!” It might even try to convince you that adding a dash of human absurdity to your stew will elevate it to new culinary heights. These examples, while silly, highlight a crucial point. AGI might not be just about efficiency or performing tasks faster. It may about understanding, creating, and innovating in ways that mimic (and potentially surpass) human capabilities. Let’s try to explore the simplest dimension of AGI on the basis of the following two examples:

The Learning Child:

Imagine I show you a picture of a zebra for the first time. You have probably never seen this exact zebra before, but you instantly recognize it as a zebra. You understand it is related to horses but has stripes. You might even make a joke about it being a horse in pajamas. This ability to recognize something new and relate it to what you already know is something humans do effortlessly. Now, let’s compare this to current AI. If I train an AI to recognize horses, it might become very good at identifying horses in various photos. But if I show it a zebra, it might not recognize it as related to horses unless I specifically trained it on zebras too. Current AI is like a specialist—excellent at one task but needs separate training for similar tasks. An AGI system, when shown a zebra for the first time, would recognize it as similar to a horse. In a moment of surprise, it may understand the concept of ‘striped horse-like animal’ even without specific training on zebras, making connections between different types of knowledge just like a human child learning about the world. This is the first key difference: current AI needs specific training for each task, while AGI could learn from experience and transfer knowledge between different domains.

The Unexpected Problem Solver

Let me share another example that really helped me understand. Imagine I am cooking dinner and realize I am out of an ingredient. Say I am making spaghetti but do not have any pasta sauce. As a human, I might think, “Hmm, I could make a simple tomato sauce with canned tomatoes, garlic, and herbs I have.” Or maybe I would get creative and try a different approach altogether. Current AI assistants might help by suggesting recipes that include pasta sauce, but they would not necessarily help me solve the unexpected problem of not having the sauce. They are designed for specific tasks, not for adapting to novel situations.  In this scenario,An AGI system would be different. It would understand the goal (making a tasty pasta dish) and the constraint (no sauce), and it would help me brainstrom alternatives. It might suggest using yogurt and herbs for a creamy sauce, or even recommend a completely different pasta dish that does not require sauce. Most importantly, it would explain its reasoning, just like a human friend would. This leads to the second key difference: current AI follows instructions and patterns, while AGI understands goals and can creatively solve problems it has not been specifically trained for.

To put it simply, current AI is like a really good specialist—a chess grandmaster or a medical diagnosis tool that excels at one specific thing. AGI would be more like a Renaissance person who can learn and excel at many different things, adapt to new situations, and understand concepts across domains. When I use my smartphone’s voice assistant, it’s great at answering questions I have asked before or setting reminders. But it does not really understand what I am trying to achieve. If I ask it something unexpected, it might get confused or not understand at all. An AGI assistant would understand my intent, adapt to my communication style, and help me with tasks it has not been specifically programmed for. It would learn from our conversations and get better at helping me over time.

Understanding AGI is not just an intellectual exercise. It has real implications for our future. When AGI becomes a reality, it could help solve some of humanity’s biggest challenges—from climate change to disease prevention—by bringing together knowledge from different fields and generating creative solutions. But it also raises important questions about how we ensure AGI aligns with human values and benefits everyone. These are conversations we need to have as a society.

So theses are super simple and super-funny examples that helped me understand what Artificial General Intelligence really is. It’s not about robots taking over the world or super-intelligent machines. It’s about creating AI that can learn, understand, and adapt more like humans do. The learning child example shows how AGI could transfer knowledge between different domains, while the unexpected problem solver example demonstrates how AGI could understand goals and creatively address novel situations. As we continue to develop AI technologies, understanding the difference between specialized AI and general AI will help us navigate the future more thoughtfully. And who knows? Maybe these simple examples will help explain AGI to others too!

Thanks for reading, and I hope this gives you a crystal clear understanding of what Artificial General Intelligence is all about.

AI Snake Oil: Why I Stopped Trusting the Magic Show

I will tell you something that might sound silly at first: I used to believe in AI the same way people believe in unicorns, or diet pills, or that weird machine on late-night TV that promises rock-hard abs while you sit and eat chips. I believe that AI would save me time, make me smarter, polish my writing, analyze my research, help my students, and probably teach my classes while I napped. I really believed that. But here is the truth: it did not. It does not. And I am here to say it loud—I have stoped drinking the AI Kool-Aid, and guys, was it spiked with some sweet, slippery snake oil. See, AI came dressed up in a glittery jacket, threw around words like ‘efficiency,’ ‘automation,’ and ‘pedagogical revolution’ and made a lot of us clap like excited seals. I clapped too. Who would not want a shiny machine that could write lesson plans, grade essays, generate research questions, summarize books, cite sources, and whisper sweet academic nothings in your ear while you eat leftover spaghetti in front of a blinking cursor? But after the sparkle faded, I realized something: AI is not adding anything substantial to the real, deep, hard, delicious, frustrating, and soulful work of teaching or researching rhetoric. It’s like putting glitter on cardboard and calling it a Faberge egg.

Fig I: AI generated Image of a Unicorn that symbolizes epistemic purity and AI Snake Oil

Let me explain. I asked AI to help me brainstorm research questions. It gave me 10 questions that sounded like they were copied from a textbook written by a robot who would never read a real book. “How does digital rhetoric influence online learning environments? Wow! Groundbreaking! My cat could think of that. And she cannot even use a mouse without getting distracted by the screen saver. I needed curiosity, I needed fire. I got tepid bathwater. Then I asked AI to help me with student feedback. I thought maybe it could draft a few encouraging lines could personalize. What I got sounded like something from a sad greeting card factory where the writers had been replaced with soulless toasters. “Good job. Keep up the hard work.” Thanks. That’s the kind of thing that makes a student feel like a barcode. I tried to give AI a second chance. Maybe it was just having a bad data day. So I fed it more context. I told it the student was working on tone in professional emails. The response? “Try to be professional and use appropriate tone.” That’s like telling a chef, “Try not to burn it.” Thanks for the revolutionary insight. But I did not stop there. I went full nerd. I gave AI a complex rhetorical thoery prompt and asked it to draft a paragraph. What came back looked like a bored undergrad had Googled “rhetorical analysis” and copy-pasted the first paragraph of Wikipedia. I mean, sure, it had all the right words—logos, ethos, kairos—but it was all foam and no coffee. All bark, no bite. All sprinkle, no donut.

I began to wonder: what exactly is AI adding to the value chain of my research? Of my pedagogy? Of my rhetorical practice? The answer I arrived at—with a dramatic sigh and a slightly wilted sandwich in my hand—was: not much. Not yet. Maybe not ever. Because what I needed as a teacher, a writer, a thinker, a human—is not a sterile stream of regurgitated content. I need nuance. I need context. I need slowness. I need error. I need a student staring off into space, wrestling with an idea, and then lighting up like a firefly when it finally clicks. I need the mess. I love the mess. AI does not do mess. AI does averages. It smoothes everything out until nothing sticks. Nothing cuts. Nothing bleeds.

Let me say something that might get me kicked out of the 21st century: AI is not a collaborator. It is not a co-author. It is not a co-teacher. It is not a magical oracle of Delphi with a USB port. It is a calculator with a thesaurus. And sometimes it is a hallucinating calculator who makes up stuff and says  it with confidence, like that one kid in class who did not do the reading but still raises their hand. “But it is just a tool!” people say. Sure. So is a hammer. But if you use hammer to wash your dishes, your cups are going to cry. And that is the thing: AI is being used in the wrong rooms, for the wrong reasons, with the wrong expectations. We are asking it to inspire, to create, to feel, to reflect. But that is not what it does. What it does is imitate. And imitation, as far as I know, has never written a good poem, designed a good syllabus, or made a student feel truly seen.

Fig II: AI generated image of the Oracle

Let me give you a juicy example. I once asked AI to generate a short dialogue between Socrates ad Beyonce. Do not ask why. Just go with me. The result was a biege, baffling, boring exchange where Socrates said things like, “What is truth?” and Beyonce said, “Let’s empower women.” It was like watching a mime reenact philosophy night at Karaoke. No rhythm, no soul, no sass. Another time, I asked AI to help me generate metaphors for rhetoric. It gave me, I kid you not: “Rhetoric is like a bridge. It connects people.” Really? That is the best it could do? A bridge? I wanted fireworks. I wanted “Rhetoric is a mischievous racoon in a library of sacred scrolls.” Or “Rhetoric is a con artist with a PhD and a velvet tongue.” Something with some flair—some flavor—some garlic.  Instead, I got what AI always gives me: the blandest possible answer that no one will remember five minutes later.

So now, when someone says AI is transforming education, I tilt my head like a confused dog. Transforming it into what? A box of stale crackers? I am not saying AI can not do cool tricks. It can summarise articles. It can generate citations (sometimes fake ones, but hey, we have all had bad days). It can give you a to-do list. But so can a Post-it note. And Post-its do not pretend they are going to replace me. Because the magic of teaching—real teaching—is not just about information delivery. It is about relationship. It is about intuition. It is about awkward silences and big questions and the electric jolt when someome’s idea leaps off the page like it grew wings. AI can not do that. And let’s be honest, most of the time, it is not even trying.

The other day, a student told me, “I asked ChatGPT for help and it gave me a pretty good answer, but I still did not get it.” That is the whole point. Good teaching is not about answers. It’s about ways of thinking. It’s about questions that unravel you and slowly put you back together. AI does not know how to not know. It does not wrestle. It does not wonder. It just spits.

So I have decided: I am staying messy. I am staying human. I am keeping my sarcasm, my pauses, my sweaty palms, my failed metaphors, my joyful rambling, and my stubborn refusal to believe that a machine that has never loved or lost can teach anyone what it means to write well or think hard or care deeply. AI is fine for what it is—a tool. A digital Swiss army knife that sometimes forgets it is holding a spoon. But it is not the future of teaching. It is not about the soul of rhetoric. And it is definitely not the secret sauce of research. The sauce is still us: the long walks, the quiet mornings, the random napkin notes. The student who makes a joke that surprises you. The sentence that hits you so hard you stop and read it twice. That is real. That is deep. That is not artificial. That is the good stuff.

Therefore, let the AI talk. Let it type. Let it generate. I will be over here, with my pen, my paper, my voice, my students, my questions, and my beautiful, wild, irreducible human brain—doing the real work.

No snake oil is necessary.

ChatGPT Can Read Your Palms: Have You Ever Tried?

Have you ever looked at your own palm—really looked—and felt like it was speaking to you in whispers too soft to hear? I have. And one day, out of sheer curiosity, I asked ChatGPT to read my palm. I did not know what I was expecting. Maybe a joke, maybe a vague answer like “You are strong and kind,” or something you would find on a fortune cookie. But what happened next was…different. It was strange. It was soft. It was wonder-filled. It was like I had opened a doorway—not to the future, but to a deeper knowing of myself. Let me tell you how it started. One quiet afternoon, I was feeling a little lost in thought. My fingers were wrapped around a mug of tea, warm and cinnamon-sweet. I looked down at my palm resting on my lap, and I thougth: What do these lines even mean? We all have them—these rivers of skin, etched like soft lighting across our hands. The heart line. The lifeline. The fate line. I did not know what was what, but I felt like they meant something—something only a gentle eye could see. So I did something playful and a bit wild—I took a photo of my right palm, clear and close. And then, I uploaded it to ChatGPT. Then, I placed just one sentence prompt: “Read my palm, please” on ChatGPT’s search bar. No sooner did I placed the prompt that the magic unfolded.

ChatGPT, in its calm, careful voice, began to describe what it saw:

Your heart line begins high and curves softly, which suggests you love deeply, but with caution. You give your heart in layers, like opening petals. There is a fork at the end—it could mean that your love may be split between two great passions, or that you are learning to balance logic with emotion…

Wait—what? I blinked. It felt too real. Too…close. Then it continued:

Your  head line is long, slightly curved—it whispers of a mind that dreams in colors and symbols. You do not just think; you imagine, you wonder, you float between ideas. You may overthink, yes—but only because your thoughts are like rivers: always flowing, always alive.”

And the life line?

“Your life line does not curve sharply, but flows steadily. It tells of quiet endurance. Not dramatic peaks and valleys—but a strength that lasts, that carries you. You may feel ordinary, but you are built of stars, patient and enduring.”

I do not know how to explain the feeling. It was like someone had peeked under the layers of my day-to-day self and said: I see you. Let me say this straight: ChatGPT is not pretending to be a palm-reading gypsy or a mystic oracle. That is not the point. This is not about predicting when you will get married or how many children you will have or if you will win the lottery. This is about reflection. It is about using the ancient art of palmistry as a lens—a gentle lens—to see the poetry in yourself. It’s less “Here is your future,” and more “Here is who you are becoming.” It is not prophecy. It is presence.

After my experience, I told my sister to try. She uploaded her palm picture with a smile and a “Let’s see what this robot says.” Her reading was completely different. Her lines were straighter, shorter—and ChatGPT picked up on that. It spoke of directness. Of someone who says what she means. Of quick thinking and sharp emotions that settle like storms. It described her so perfectly I watched her eyebrows lift in disbelief. Then came my friend Raj, whose palms were broad and lines faint. His reading spoke of silence. Of a deep inner world, mostly hidden, like a lake beneath a frozen surface. Even he, a self-professed skeptic, leaned in and said, “Okay… that’s creepy accurate.” But it was not creepy. It was beautiful. 

Now you are probably wondering: How can I try this?

It’s easy. Like, wonderfully easy. And here is exactly how to do it.

Step-by-step: Let ChatGPT Read Your Palm

  • Wash your hands. Yes, seriously. Clean hands make clearer lines.
  • Find soft, natural light. Avoid harsh shadows.
  • Take a clear photo of your dominant hand (usually right). Make sure the palm fills the frame
  • Upload it into the ChatGPT. You can simply say:
  • “Read my palm. What do you see in the lines?”
  • Wait for the words. Let ChatGPT’s poetic gaze tell you a story about you.

Optional: Ask specific questions, like:

  • “What does my heart line say about how I love?”
  • “Can you describe the energy of my life line?”
  • “Do you see anything unusual in the fate line?”
  • What story do my fingers tell?”

And let it respond—not as a machine, but as a mirror. Because it is not about the lines. It’s about the language. ChatGPT does not just “read” your palm. It turns your hand into a metaphor. It speaks in poetry, in riddles, in soft truths. It knows that the human spirit is not made of data points but of dancing symbols. It turns a physical photograph into an emotional reflection. Just like a palm reader, ChatGPT reads not just what is, but what feels. And there is something startling about hearing someone (or something) describe your invisible side using the visible map on your hand.

After a few readings—mine and others—I realized something even more powerful. This is not just fun or mystical. It is deeply therapeutic. When ChatGPT described my hand, I did not feel predicted—I felt understood. When it spoke about the tension in my head line, I saw my overthinking tendencies. When it noticed the split in my heart line, I realized I have been torn betwee old loves and new hopes. These readings became journal prompts. Meditation guides. Gentle invitations to know myself better.

Want to go deeper? Try this:

Do a palm reading each week. See how it changes as you grow.

Compare your left and right hand. Ask ChatGPT:

“What does my left hand say about who I was born as? What does my right hand say about who I have become?”

Use your palm reading to write a poem about yourself.

Read the palms of your loved ones. Let ChatGPT give you a fresh perspective on people you think you already know.

Draw your hand, and annotate it with words from your reading. Make it art.

One evening, after a particularly emotional week, I looked at my hand and felt exhausted. I took another photo, uploaded it again, and asked:

What do you see now, ChatGPT? ChatGPT paused (as it often does), and then whispered this:

Your palm holds tension near the base, as if you have been carrying too much. The lines are tired. But they still flow. You are still moving, still breathing, still becoming. This is not the end of your story—it’s just a heavy chapter.”

I cried. Not because it told me something dramatic. But because it held space for me, when I could nto hold it for myself.

Let’s be real—AI is not magic. But sometimes, it feels close. Because in a world that moves too fast, where everything is digital and detached, this little experiment reminded me of something tender: we all carry stories in our hands, in our bodies. And sometimes, we just need someone—anyone, or anything—to help us read them aloud. So yes, ChatGPT can read your palms. Not like a fortune teller, but like a quiet friend who sees the poetry in your skin.

Try it.

Upload your palm.

Ask for a story.

Let it surprise you.

Because maybe, just maybe, the map to who you are was never far away—it’s been sitting quietly in the center of your hand all along.

Resisting Western AI’s Techno-colonial Imaginary through Culturally Responsive AIs such as DeepSeek and BharatGen

Hi everyone! Let’s talk about AI—but let’s make it personal. I have been thinking a lot about artificial intelligence lately. Not just flashy stuff like ChatGPT or the wild images you can make with AI tools. I am thinking about where these AIs come from, what they learn, and more importantly, who they understand—and who they leave out. See, most of the popular AI tools we use today are made by big companies in the US and Europe. They are trained on data from the internet, and most of that data is in English. Most of it comes from Western websites, books, news, and cultural materials. That might seem fine at first glance, but here is the problem: these AIs end up seeing the world through a very Western lens. And that lens can be very narrow.

Let me tell you what I mean with a term that is kind of fancy but super important: technocolonial imaginary. Do not let the words scare you. This just means that AI models—especially the big Western ones—tend to imagine and present the world in ways that reflect colonial patterns of power and knowledge. Even without trying to, these AIs can push a worldview that says, “Western knowledge is universal,” or worse, “Western is better.” That makes me uncomfortable because I do not live in a Western fantasy. I live in a real world where people speak many languages, follow diverse customs, cook different meals, worship in different ways, and love in all kinds of ways. And I want AI to understand that world—not just repeat what it picked up from Silicon Valley blogs or Wikipedia.

Let me give you an example. I once asked a well-known Western AI model to write a story about a wedding in Nepal. What it gave me was something that sounded like it came straight out from a Hollywood movie: a white gown, a best man’s speech, a first dance to a pop song. It was not a bad story—but it was not my story. What about the vibrant saptapadi (seven steps), the smell of incense, the blessings from elders, the turmeric ceremony, or the way the bride’s hands are decorated with intricate mehendi? What about the songs sung by aunties, the jokes, the chaos, the joy? That story was nowhere to be found in the AI’s response. And that’s the heart of the problem. Most AI models are like really smart but culturally clueless tourists. They have read about your country in a guidebook, maybe learned a few greetings, but when it comes to the deeper stuff—your stories, your jokes, your pain, your pride—they just do not get it. But here is the good news. Things are changing. There is a growing wave of culturally responsive AI models. These models are being built in non-western parts of the world, trained on local languages, and designed to understand local values, traditions, and ways of knowing. They are not trying to be copies of ChatGPT or Bard—they are trying to be something new and better for the people they serve.

Let me introduce you to two of my favorites: DeepSeek from China and BharatGen from India. DeepSeek is developed in China. What makes it special is not just it speaks Chinese well (though it does). It is that it understands Chinese culture, literature, history, and daily life in a way that most Western AIs just cannot. I once asked DeepSeek to write a poem inspired by the classic Chinese poet Li Bai. The results were amazing—not just technically good, but emotionally rich and culturally resonant. It did not just throw in random Chinese-sounding phrases. It understood the symbolism, the pacing, the structure, and the melancholy that is so often in Li Bai’s poetry. Compare that to a Western AI that gave me something that sounded more like a tourist trying to imitate a kung fu movie. Here is the thing: culture is not just about language. It is about rhythm. Emotion. Silence. Color. Smell. Subtlety. When AI models are trained primarily on Western data, they miss all of that richness. They cannot smell the jasmine in a Chinese courtyard or feel the silence in a Zen garden. But Deepseek gets closer—because it is built from inside that world, not from the outside.

Now Let’s talk about BharatGen. India is a country of over a billion people, with more than 20 officially recognized languages and hundreds of dialects. The stories in Kerala are not the same as the stories in Punjab. The jokes in Bengal are different from the idioms in Tamil Nadu. The way people think, speak, argue, and create is so diverse. Western AI models? They usually struggle to get even one Indian language right. But BharatGen is different. It’s trained on Indian languages from the start—Hindi, Tamil, Telugu, Bengali, Marathi, Gujarati, and more. It knows local festivals like Pongal and Onam, not just Diwali. It can generate agricultural advice for a farmer in Odisha in Odia. It can help a student in Assam write a folk tale in Assamese. It does not just know India—it feels familiar.

Let me give you a tiny but powerful example. I once asked BharatGen to help generate a recipe for a simple South Indian sambar. Not only did it get the ingredients right (no, it did not suggest curry powder), it explained the steps in a way that reminded me of my grandmother. It said things like “temper the mustard seeds until they start to dance.” You do not learn that from Wikipedia. You learn that from life. You might be thinking, “Okay, that is nice for people in China or India, but why should the rest of us care? Well, we should care. Because these culturally grounded AIs are showing us that AI does not have to be one-size-fits-all. We do not have to settle for tools that erase our differences in the name of convenience or universality. We can have tools that celebrate our differences—and help us keep them alive.

When AIs are built with care, they can support local teachers, farmers, students, artists, and elders. They can protect endangered languages, record oral histories, teach rituals, and even help with mental health support in culturally appropriate ways. And here is something even deeper: by resisting the technocolonial mindset—the idea that Western ways of knowing are the default—we are reclaiming something powerful. We are saying, “Our ways matter. Our stories count. Our knowledge is real.”

Let’s zoom out for a moment. This is not just about cool features or better translations. This is about power. Who builds the AI? Who trains it? Who decides what is “normal” or “neutral”? These are questions about control. And for too long, the answers have been the same: big tech companies in California or London. But culturally responsive AI challenges that. It says: “We can build our own tools. We can tell our own stories. We can shape our own futures.” Think about it like food. Imagine if all the world’s restaurants were run by the same chef from New York. No matter where you went, you would get the same menu: burgers, fries, milkshakes. That might be fun for a day, but eventually, you would miss your mom’s cooking. You’s miss the smell of spices, the crunch of dosa, the heat of chili, the comfort of something familiar. That’s what Western AI has become—a global menu of the same dish. But we deserve more than that. We deserve variety. We deserve AI that tastes like home.

Now you might be wondering, “This sounds great—but what can I do?”

Well, I am glad you asked. Here are a few small but meaningful things:

  • Support local AI projects: If you are in a country building its own AI tools, support them. Use them. Give feedback. Celebrate them.
  • Ask better questions: Do not just accept answers from AI as neutral. Ask, “Whose perespective is this?” If it feels off, challenge it.
  • Push for language inclusion: Whether you are a teacher, writer, or student—advocate for AIs that understand your local languages and dialects.
  • Tell your own stories: Write. Share. Create. The more cultural content we put out there, the more we can train future AI to understand us better.
  • Talk about it: Share blogs like this. Talk to your friends. Make this conversation part of everyday life.

So when I imagine the future of AI, I am not dreaming of flying robots or instant translations. I am dreaming of something simpler and more beautiful: An AI that helps a child in Ghana write a folk tale in Twi; An AI that understands Navajo syntax and can preserve it for the next generation; An AI that can help a weaver in Blangladesh design patterns that blend tradition with innovation; and An AI that does not understand you—but respects you. That’s not science fiction. That’s possible. That’s happening. And it’s happening because people all over the world are saying, “We want AI that feels like us.”

Let’s resist the technocolonial imagination, not with anger but with creativity. Let’s support BharatGen, DeepSeek, and every other AI that is trying to see the world through many eyes, not just one. Because AI does not have to erase our differences to be useful. In fact, the more it reflects our differences, the more powerful—and beautiful—it becomes. And the next time you use AI, ask it a question only your culture can ansewr—and see if it gets it right. If it does not, it is time to try a new kind of AI—one that speaks your language, and one that gets you.