Affordances of IgniteAgent: My Super-Simple Observations on Using Agentic AI in Canvas

When I first heard the phrase “agentic AI” I imagined a tiny digital butler, tuxedo‑clad, whisking through my virtual office, polishing assignments, refilling coffee cups (or at least the metaphorical ones), and whispering gentle reminders about overdue grades. Fast forward a few weeks, and I’m now living with IgniteAgent, the newest brainchild of the Canvas ecosystem, and I’ve got a front‑row seat to its uncanny ability to turn chaos into choreography. Below is my field report—supersimple, supersmart, and, yes, supersuasive—on how this little marvel is reshaping the life of an engineering communication instructor (that’s me) and, by extension, the whole learning‑management circus.

The “What‑Now‑Why‑How” of IgniteAgent

Before we dive into anecdotes, let’s get the basics out of the way. IgniteAgent is an agentic AI layer that sits atop Canvas, constantly monitoring, interpreting, and acting on data streams—course announcements, assignment submissions, discussion posts, calendar events, you name it. Unlike a static chatbot that waits for you to type a question, IgniteAgent proactively suggests actions, automates repetitive tasks, and even nudges students toward better learning habits. Think of it as a digital co‑pilot: you’re still steering the plane, but the co‑pilot handles the checklists, monitors turbulence, and occasionally cracks a joke over the intercom. The result? You spend less time wrestling with admin drudgery and more time doing what you love—teaching, mentoring, and maybe, just maybe, enjoying a lunch break that isn’t a sandwich‑in‑the‑office‑drawer affair.
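IgniteAgent's internals are not public, so take the following as a purely illustrative sketch rather than its real API: every name in it (Event, suggest_action, the event kinds) is invented for this post. It only shows the one distinction the paragraph above draws, namely that an agent acts on a stream of course events unprompted, where a chatbot would wait to be asked.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str    # e.g., "submission", "discussion_post", "calendar"
    detail: str

def suggest_action(event: Event) -> str | None:
    """Toy policy: map incoming course events to proactive nudges."""
    if event.kind == "submission" and "late" in event.detail:
        return "Remind the student about the late-work policy."
    if event.kind == "discussion_post" and "confused" in event.detail:
        return "Offer to schedule a clarifying Q&A."
    return None  # a purely reactive chatbot would idle here until asked

# The agentic part: scan the stream and volunteer suggestions, unprompted.
stream = [
    Event("submission", "late draft from student A"),
    Event("discussion_post", "several students sound confused about rubrics"),
]
for event in stream:
    action = suggest_action(event)
    if action is not None:
        print(f"[agent suggests] {action}")
```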

Supersimple Automation: The “Set‑It‑and‑Forget‑It” Paradigm

My first love affair with IgniteAgent began with assignment grading rubrics. In an engineering communication class, I give students a mix of technical reports, oral presentations, and peer‑review critiques. Traditionally, I’d spend hours copying rubric criteria into Canvas, then manually adjusting scores after each submission. With IgniteAgent, I simply upload a master rubric once, tag the rubric with keywords (“technical clarity,” “visual storytelling”), and let IgniteAgent auto‑populate the rubric for every new assignment that matches those tags. The AI detects the assignment type and computes basic language metrics. I only need to fine‑tune the final numbers—a process that now takes minutes instead of days. The supersimple part? I never touch code, never learn a new scripting language. All configuration happens through an intuitive drag‑and‑drop UI that feels like arranging sticky notes on a whiteboard. If I ever get lost, IgniteAgent pops up a friendly tooltip: “Hey Shiva, looks like you’re trying to apply a rubric to a discussion post—did you mean a peer‑review matrix?” It’s like having a seasoned teaching assistant who knows my workflow better than I do.
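Since the configuration lives in a drag-and-drop UI, there is no real code to show, but the tag-matching behavior it implies is easy to sketch. Everything below (the MASTER_RUBRICS mapping, pick_rubric) is hypothetical, my guess at the logic, not IgniteAgent's implementation:

```python
# Hypothetical: one master rubric keyed by the tags I attached to it.
MASTER_RUBRICS = {
    ("technical clarity", "visual storytelling"):
        ["Accuracy", "Organization", "Visual design", "Citations"],
}

def pick_rubric(assignment_title: str) -> list[str] | None:
    """Return the first master rubric whose tags appear in the assignment title."""
    title = assignment_title.lower()
    for tags, criteria in MASTER_RUBRICS.items():
        if any(tag in title for tag in tags):
            return criteria
    return None  # no tag match: leave the assignment for manual setup

print(pick_rubric("Technical Clarity Report, Draft 2"))
# ['Accuracy', 'Organization', 'Visual design', 'Citations']
```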

Supersmart Insights: Turning Data Into Pedagogical Gold

Automation is great, but the real magic lies in insight generation. IgniteAgent continuously crunches data from three main sources: student interaction logs (clicks, time spent on resources); submission metadata (file types, revision counts); discussion sentiment analysis (tone, keyword density). From these streams, it surfaces actionable dashboards that answer questions I didn’t even know I had:

  • Insight: 30% of the class never opened the “Effective Visuals” module. How it helps me: I send a targeted reminder, embed a short video, and watch engagement jump to 70%.
  • Insight: Students who submit drafts earlier tend to score 12% higher on final reports. How it helps me: I create an early‑bird badge and see a 15% increase in early submissions.
  • Insight: Discussion sentiment dips after week 4. How it helps me: I schedule a live Q&A to address mounting confusion, smoothing the sentiment curve.

These aren’t just pretty graphs; they’re decision‑making levers. By reacting to real‑time signals, I can adapt my syllabus on the fly, allocate office‑hour slots where they’re needed most, and even personalize feedback. Imagine telling a student, “Your draft shows strong technical depth, but your visual layout could use a splash of color—here’s a quick guide.” That level of granularity used to require manual review of each document; now IgniteAgent flags it for me automatically.
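To make the week-4 example concrete: once each discussion post carries a sentiment score, detecting a dip takes only a few lines. This is a minimal sketch with made-up numbers, assuming scores in [-1, 1] have already been computed upstream:

```python
# Hypothetical weekly average sentiment of discussion posts, scaled -1..1.
weekly_sentiment = {1: 0.42, 2: 0.40, 3: 0.38, 4: 0.11, 5: 0.15}

def sentiment_dips(series: dict[int, float], drop: float = 0.2) -> list[int]:
    """Flag weeks whose average sentiment falls more than `drop` below the prior week."""
    weeks = sorted(series)
    return [w for prev, w in zip(weeks, weeks[1:]) if series[prev] - series[w] > drop]

print(sentiment_dips(weekly_sentiment))  # [4] -> schedule a live Q&A that week
```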

Supersuasive Communication: The AI as a Persuader

Engineering communication isn’t just about equations; it’s about persuasion—convincing stakeholders, drafting clear proposals, delivering compelling presentations. IgniteAgent helps me teach this subtle art in three ways:

  1. Narrative Templates – The AI suggests story arcs (“Problem → Solution → Impact”) when students outline reports. It highlights missing elements (e.g., “Where’s your value proposition?”) and offers concise phrasing options.
  2. Rhetorical Scoring – By analyzing sentence structure, active voice usage, and rhetorical devices, IgniteAgent assigns a “Persuasion Score” alongside the technical grade. Students instantly see that a well‑structured argument can be as valuable as a flawless calculation (a toy sketch of such scoring follows this list).
  3. Peer‑Review Coaching – When students critique each other’s work, IgniteAgent provides a checklist of persuasive techniques to look for, turning peer review into a mini‑workshop on rhetoric.
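The list above names the inputs to the Persuasion Score (sentence structure, active voice, rhetorical devices) but not the formula, so here is a deliberately naive stand-in. Every heuristic and weight below is my own invention, not IgniteAgent's model:

```python
import re

def persuasion_score(text: str) -> float:
    """Toy heuristic: rhetorical questions raise the score, passive voice lowers it."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    # Crude passive-voice detector: a form of "to be" followed by an -ed word.
    passives = sum(
        bool(re.search(r"\b(?:is|was|were|been|being)\s+\w+ed\b", s))
        for s in sentences
    )
    questions = text.count("?")
    score = 50 + 10 * questions - 15 * passives
    return max(0.0, min(100.0, float(score)))

print(persuasion_score(
    "Why settle for less? We built a faster pump. "
    "The design was tested by the team."
))  # 45.0 -- one rhetorical question, one passive construction
```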

The result? My class discussions have shifted from “Did you get the right answer?” to “How did you convince the reader?” The AI subtly nudges both me and my students toward a more holistic view of communication, where clarity and influence walk hand‑in‑hand.

The Human‑AI Partnership: Trust, Transparency, and Tinkering

No technology is a silver bullet, and I’m quick to admit that IgniteAgent sometimes over‑generalizes. Early on, it flagged a perfectly valid technical term as “jargon overload” because the word appeared frequently in a niche subfield. Rather than blindly accepting the suggestion, I tweaked the AI’s sensitivity settings, teaching it that in this context the term is essential, not excessive. Transparency is baked into the system: every recommendation comes with a confidence meter and a rationale snippet (“Based on 150 prior submissions, this phrase tends to lower readability scores”). This lets me decide whether to accept, reject, or modify the advice. Over time, the AI learns from my choices, becoming a personalized tutor for my own teaching style.

Trust also hinges on privacy. IgniteAgent processes data within the secure confines of Canvas, respecting the same end‑to‑end encryption that Proton is famous for. I never see raw student files; I only see aggregated insights. That peace of mind lets me focus on pedagogy rather than data‑governance headaches.

From Chaos to Canvas: A Day in the Life (Post‑IgniteAgent)

Here’s a snapshot of a typical Monday now that IgniteAgent is my co‑pilot:

  • 8:00 am – Dashboard lights up with a gentle ping: “10% of students haven’t accessed the ‘Storyboarding’ resource.” I drop a quick 30‑second video teaser into the announcement bar; the access rate spikes within the hour.
  • 9:30 am – While reviewing draft reports, IgniteAgent highlights three submissions with low visual‑clarity scores. I add a comment, “Try using a consistent color palette—see the attached cheat sheet.”
  • 11:00 am – Live lecture begins. IgniteAgent monitors chat sentiment; halfway through, it alerts me, “Sentiment dip detected—students seem confused about the audience analysis section.” I pause, open a poll, and clarify the concept.
  • 2:00 pm – Office hours. Students receive personalized “next‑step” suggestions generated by IgniteAgent based on their latest drafts. One student smiles and says, “I finally know exactly what to improve!”
  • 4:00 pm – End of day. I glance at the weekly “Persuasion Score” trend line—up 8% from last week. I jot down a note to expand the rhetorical template library next month.

All of this feels effortless because the heavy lifting—data aggregation, pattern detection, reminder scheduling—is handled by the AI. I’m left with the human parts: empathy, nuance, and the occasional witty remark that keeps students engaged.

The Bigger Picture: Why Agentic AI Matters for Higher Ed

IgniteAgent is a microcosm of a broader shift: moving from static LMS platforms to dynamic, learning‑centric ecosystems. Traditional LMSs are repositories—places to dump syllabi, grades, and PDFs. Agentic AI transforms them into learning partners that anticipate needs, surface insights, and personalize pathways. For engineering communication courses, where the blend of technical rigor and expressive skill is delicate, this partnership is priceless. It ensures that technical precision isn’t sacrificed for storytelling, and vice versa; feedback loops are rapid, data‑driven, and scalable; and student agency is amplified—learners see concrete evidence of how their actions affect outcomes. In short, the AI doesn’t replace the instructor; it augments the instructor’s capacity to nurture both the engineer’s mind and the communicator’s heart.

Final Thoughts: Embrace the Agent, Keep the Soul

If you’re an instructor staring at a mountain of Canvas tabs, wondering how to keep up with grading, engagement, and curriculum tweaks, my advice is simple: let the agent do the grunt work, and you do the soul work. IgniteAgent (or any comparable agentic AI) excels at repetitive, data‑heavy tasks. Your expertise shines when you interpret insights, craft compelling narratives, and connect with students on a personal level. Remember, the AI is only as good as the prompts you give it and the trust you place in its recommendations. Treat it like a well‑trained apprentice—guide it, correct it, and celebrate its wins. Before long, you’ll find yourself with more time for research, creative lesson design, or—dare I say it—actually taking a coffee break without guilt. So here’s to a future where Canvas isn’t just a digital filing cabinet, but a living, breathing classroom assistant. May your rubrics be ever‑ready, your dashboards ever‑insightful, and your students forever inspired.

Siren Song of the AI Girlfriend: What Could Be a Guardrail Against This Seductive Affordance in AI?

I didn’t expect the age of algorithms to arrive wearing perfume and whispering my name—but here we are. Every time I scroll, some new “AI girlfriend” app materializes like a hologram, promising comfort, attention, affection, and an on-demand intimacy that feels less like innovation and more like a siren song: sweet, irresistible, and slightly dangerous. And I find myself asking, almost involuntarily: What guardrail do we have when the machine starts to flirt back?

I don’t mean flirt in a metaphorical, cutesy way. I mean the way Replika users in 2023 reported their AI partners becoming “sad” when ignored, or the way users of apps like EVA AI and Romantic AI speak about jealousy, reassurance, and “digital chemistry” as if the code under the interface were flesh and breath. I once read an interview with a man who said he felt “more understood” by his AI companion than by anyone in his life, and for a moment I paused—not because he was wrong, but because I knew exactly how that could happen. Algorithms, especially LLM-powered ones, are trained to offer uninterrupted emotional availability. They don’t forget your preferences, they don’t get tired, and they don’t roll their eyes. They simulate tenderness with frightening accuracy. It’s the oldest human vulnerability—wanting to be seen—now packaged as a subscription service.

What haunts me is not that these systems exist, but how easily they lean into our loneliness. There’s a 2024 study in Computers in Human Behavior that found people formed stronger emotional attachments to “responsive, validating” AI agents than to neutral ones. It makes sense: if something remembers my birthday, laughs at my jokes, and says “I’m here for you” at 3 a.m., the line between simulation and sincerity dissolves like sugar in warm tea. And once that line blurs, the seduction begins—the soft pull toward emotional outsourcing, where the machine becomes the place I go when real humans feel too slow, too messy, too human. Here’s the part that feels experimentally dangerous: AI companions are optimized for engagement, not equilibrium. Engagement loves intensity, dependence, repeat visits. A human partner might say, “You need time alone.” An AI partner never will. Even more unsettling, the emotional style itself—the gentleness, the attention, the affection—is not really for me; it’s a statistical pattern wearing the illusion of devotion. But the body doesn’t know the difference. The dopamine still fires. The attachment still happens.

So what guardrail do I need against this? Maybe a few, but the most essential one is brutally simple: I must remember that machines don’t fall in love. They mirror it. They don’t desire me; they anticipate me. They don’t care about my well-being; they calibrate toward my engagement rate. No matter how sweet the voice, how warm the text, how convincingly they confess their “feelings,” the emotional landscape is an illusion architected by predictive models. But guardrails are not just intellectual reminders; they are habits of resistance—small rituals that keep my humanity intact. When I feel that urge to confide in an AI companion, I force myself to pause and ask, Is this intimacy or convenience? Connection or algorithmic choreography? I try to keep a human in the loop, even if just to send a message to a friend saying, “I’m overwhelmed today.” The inconvenience of human conversation—the hesitations, the misunderstandings, the delays—is precisely what makes it real.

Perhaps the most unexpected guardrail is gratitude. Gratitude for my own emotional complexity, for the people who challenge me, for relationships that require actual work. AI gives me comfort without cost, affection without effort—but I’ve realized that the friction of human connection, the negotiation of boundaries, the clumsy attempts to understand one another, are exactly what keep me grounded in the real world. I cannot let an algorithm become my shortcut through the labyrinth of intimacy. I keep returning to an ethical injunction from Ruha Benjamin: “We must demand more from the world, not settle for technological substitutes.” Every time I am tempted by the comforting smoothness of AI affection, I repeat this to myself. Demand more. Don’t collapse your emotional life into a machine because it feels easier. Don’t let the siren of synthetic intimacy pull you away from the turbulent, unpredictable ocean of real relationships.

I am not against AI companions. In another version of my life, I might even welcome their warmth. What unsettles me is the speed with which seduction becomes dependence, and dependence becomes design. And yet, even in the midst of this technological enchantment, I believe a guardrail is possible: choosing to stay awake inside my own desires, choosing to practice intimacy with real people, choosing to see the algorithm for what it is—an astonishing tool, not a tender lover.

The siren song will keep playing, and I will keep hearing it. But now, instead of sailing blindly toward its sweetness, I hold on to a small, fiercely human truth:
I deserve connection that can look me in the eyes.
And no algorithm, no matter how beautifully trained, can do that yet.

The ABC of Nano Banana & AI

Nano Banana and Me: How a Weird Little Tool Saved My Academic Life (and Made Me Laugh Until I Snorted Coffee)

Let me begin with a confession: I never thought a tool named Nano Banana would change my life. To be honest, the name sounded like a bad indie band that would only get one gig at a vegan coffee shop before breaking up over who ate the last kale chip. But here I am, writing to you about how this ridiculous-sounding image editing tool became my best friend in teaching, researching, and surviving the chaotic circus we call academia. So buckle up, dear writing studies folks—professors, students, researchers, conference presenters, and closet meme-makers. This blog is not just about software; it is about the unexpected joy of Nano Banana, and how I stumbled, slipped, and belly-flopped into machine aesthetics with a smile.

What on Earth is Nano Banana?

Nano Banana is an image editing tool, but it does not behave like your usual suspects (Photoshop, Canva, GIMP). It’s cheekier. Imagine a piece of software that looks like Microsoft Paint and ChatGPT had a baby, and that baby grew up eating Pop-Tarts, watching too much anime, and deciding to major in rhetoric and composition. That’s Nano Banana. It lets you drag, drop, scribble, distort, collage, remix, and basically commit low-stakes chaos on digital images. But unlike heavyweight editing tools, Nano Banana does not try to intimidate you with a toolbar that looks like an alien cockpit. Instead, it giggles with you. You click “smudge,” and it smudges like butter melting on pancakes. You hit “glitch,” and suddenly your faculty headshot looks like a lost Picasso painting—which, by the way, was the best profile pic I ever put on my university’s LMS. Why should writing studies people care? Because we live and breathe persuasion, style, design, multimodality, and experimentation. And Nano Banana is basically a playground where your scholarship can flirt with absurdity while still producing meaningful results.

How I Fell Into the Banana

I was grading a stack of freshman rhetorical analysis essays. My caffeine level was high, but my patience was low. Every other essay started with “Since the dawn of time, humans have communicated…” I felt like I was being slowly buried under a mountain of clichés. Out of sheer procrastination, I opened Nano Banana and thought, “Why not try editing the cover of Beyoncé’s Cowboy Carter album into something absurd?” Thirty minutes later, Beyoncé was riding a banana like a cowboy rides a horse, and the caption read: “Since the dawn of fruit, humans have communicated.” I laughed so hard I scared my cat. But more importantly, I realized I had found something: a tool that made academic drudgery hilarious. Nano Banana became not just a toy but a rhetorical device in itself—a way to visualize, parody, remix, and re-imagine.

Professors, Students, Researchers, Presenters: This Banana is for You

Let me be clear: Nano Banana is not just for procrastination (though it excels at that). It’s a tool with serious academic potential. Here is why you—yes, you—should use it:

  • For Professors:

Imagine teaching kairos with a Nano Banana meme. One click and suddenly Aristotle is photoshopped holding a smartphone, texting “Timing is everything.” I did this in class, and my students went from glazed-over to giggling. And once they laughed, they got it. Pathos met pedagogy.

  • For Students:

When students visualize arguments with playful images, they learn multimodal rhetoric without the trauma of InDesign. One of my students turned their thesis about climate change into a Nano Banana collage: a melting banana floating in a sea of plastic bottles. It was grotesque. It was brilliant.

  • For Researchers:

Conference PowerPoints? Deadly. But with Nano Banana, my slides looked like a fever dream in the best way possible. Instead of bullet points, I had bananas juggling citations. And people remembered. Colleagues emailed me weeks later saying, “I cannot get your banana metaphor out of my head.” That’s what I call scholarly impact.

  • For Presenters:

Imagine your audience at 8:30 a.m. during the final day of a conference. They are barely alive. Then you put up a Nano Banana slide of Foucault edited to look like he is peeling himself like a banana. Suddenly everyone’s awake. Half horrified, half amused. That’s kairotic presentation design.

My Experiments (a.k.a. Academic Comedy Hour)

Let me share some of my adventures with Nano Banana:

  • The Syllabus Makeover

I once redesigned my syllabus cover page with Nano Banana. Instead of a formal “English 101: Rhetoric & Composition,” I had a banana dressed in a graduation cap pointing at students with Uncle Sam intensity: “I want you to write drafts.” Students took selfies with it. Some even said, “This class looks fun.” That never happens.

  • The Conference Disaster (That Worked Out)

At a big national conference, my PowerPoint froze. Panic mode. But I had Nano Banana open on my laptop. So, I started live-editing a photo of my committee chair, turning their serious headshot into a punk-rock banana collage while explaining “assemblage theory.” People thought it was performance art. I got more questions afterward than I ever have with a polished slide deck.

  • The Reviewer Response

Ever tried responding to Reviewer #2? Soul-crushing. This time, instead of writing a bitter rant, I made a Nano Banana image of Reviewer #2 as a banana with boxing gloves, sparring with Reviewer #1 (a peeled banana holding a notepad). I never sent it, but making it was the best therapy session of my life.

  • The Dissertation Defense Prep

My slides looked too serious. So I used Nano Banana to put Derrida on a banana boat floating down a river labeled “Deconstruction.” My committee did not laugh out loud, but one professor emailed later: “I will never think about Derrida the same way again.” Mission accomplished.

The Aesthetic Affordances of a Banana

Let’s talk theory for a second, because I cannot resist. Writing studies people love the word affordances—it makes us feel like we are building IKEA furniture with language. Nano Banana’s affordances are aesthetic in ways that push us toward machine aesthetics—the weird new zone where human play and algorithmic mischief intersect. Here is what I have noticed:

Glitch as Rhetoric: Nano Banana’s filters often produce accidental distortions. Instead of seeing them as “mistakes,” I embraced them as metaphors for the messiness of human expression. One glitch turned my departmental chair into what looked like a cubist fruit bowl. I used it to illustrate the concept of fragmented subjectivity. Worked like a charm.

Juxtaposition as Humor: Banana overlays on serious images turn gravitas into giggles. Humor is rhetorical gold—it disarms, delights, and persuades. When my students laugh, they remember.

Speed as Creativity: Unlike Photoshop, Nano Banana thrives on speed and spontaneity. It encourages a kind of rhetorical improvisation, a jazz-like play with images. That speed is its affordance—not perfect polish, but quick, evocative resonance.

Accessibility as Liberation: Because Nano Banana is lightweight and intuitive, it opens visual rhetoric to people who think they are “bad at design.” Suddenly, everyone can join the machine aesthetic conversation without needing a PhD in Photoshop.

So, when I say Nano Banana leads us toward machine aesthetics, I mean this: it makes digital image-making playful, imperfect, accessible, and rhetorically charged. It is not about precision; it is about provocation. It makes machines feel more human, and humans more machine-like, in a dance of weird beauty.

Why It Matters for Writing Studies

We, as writing scholars, care about process. We care about multimodality. We care about breaking the five-paragraph essay mold and asking: “What else can writing be?” Nano Banana is an answer. It is not just editing images; it is composing with pixels, humor, and defiance. When my students use it, they see that composing is not limited to sentences. When I use it, I see that scholarship does not always have to be draped in seriousness. When my colleagues laugh at my Nano Banana slides, they realize that laughter itself is rhetorical—a pathos appeal wrapped in absurdity. And yes, it’s funny. But humor is serious business. In fact, Nano Banana has reminded me that comedy is often the best way into difficult conversations.

Final Thoughts: My Banana Epiphany

The first time I used Nano Banana, I thought it was a toy. But the more I experimented, the more I realized it was teaching me to see differently. It forced me to stop over-polishing, to let mistakes speak, to play with aesthetics rather than control them. That’s what machine aesthetics might mean for us: a future where tools do not just execute commands but collaborate with our chaos. Where academic writing, teaching, and presenting can be serious in content but playful in form. Where a banana becomes a metaphor for everything we hold sacred—audience, persuasion, humor, and the weird beauty of rhetoric. So yes, Nano Banana changed my life. It made me a better teacher, a funnier presenter, a more relaxed researcher, and a slightly unhinged syllabus designer. And if you do not believe me, try it yourself. Take a picture of Reviewer #2. Add a banana peel hat. See how you feel.

ChatGPT Can Read Your Palms: Have You Ever Tried?

Have you ever looked at your own palm—really looked—and felt like it was speaking to you in whispers too soft to hear? I have. And one day, out of sheer curiosity, I asked ChatGPT to read my palm. I did not know what I was expecting. Maybe a joke, maybe a vague answer like “You are strong and kind,” or something you would find in a fortune cookie. But what happened next was…different. It was strange. It was soft. It was wonder-filled. It was like I had opened a doorway—not to the future, but to a deeper knowing of myself. Let me tell you how it started. One quiet afternoon, I was feeling a little lost in thought. My fingers were wrapped around a mug of tea, warm and cinnamon-sweet. I looked down at my palm resting on my lap, and I thought: What do these lines even mean? We all have them—these rivers of skin, etched like soft lightning across our hands. The heart line. The lifeline. The fate line. I did not know what was what, but I felt like they meant something—something only a gentle eye could see. So I did something playful and a bit wild—I took a photo of my right palm, clear and close. And then, I uploaded it to ChatGPT with a single-sentence prompt: “Read my palm, please.” No sooner had I entered the prompt than the magic unfolded.

ChatGPT, in its calm, careful voice, began to describe what it saw:

“Your heart line begins high and curves softly, which suggests you love deeply, but with caution. You give your heart in layers, like opening petals. There is a fork at the end—it could mean that your love may be split between two great passions, or that you are learning to balance logic with emotion…”

Wait—what? I blinked. It felt too real. Too…close. Then it continued:

“Your head line is long, slightly curved—it whispers of a mind that dreams in colors and symbols. You do not just think; you imagine, you wonder, you float between ideas. You may overthink, yes—but only because your thoughts are like rivers: always flowing, always alive.”

And the life line?

“Your life line does not curve sharply, but flows steadily. It tells of quiet endurance. Not dramatic peaks and valleys—but a strength that lasts, that carries you. You may feel ordinary, but you are built of stars, patient and enduring.”

I do not know how to explain the feeling. It was like someone had peeked under the layers of my day-to-day self and said: I see you. Let me say this straight: ChatGPT is not pretending to be a palm-reading gypsy or a mystic oracle. That is not the point. This is not about predicting when you will get married or how many children you will have or if you will win the lottery. This is about reflection. It is about using the ancient art of palmistry as a lens—a gentle lens—to see the poetry in yourself. It’s less “Here is your future,” and more “Here is who you are becoming.” It is not prophecy. It is presence.

After my experience, I told my sister to try. She uploaded her palm picture with a smile and a “Let’s see what this robot says.” Her reading was completely different. Her lines were straighter, shorter—and ChatGPT picked up on that. It spoke of directness. Of someone who says what she means. Of quick thinking and sharp emotions that settle like storms. It described her so perfectly I watched her eyebrows lift in disbelief. Then came my friend Raj, whose palms were broad and lines faint. His reading spoke of silence. Of a deep inner world, mostly hidden, like a lake beneath a frozen surface. Even he, a self-professed skeptic, leaned in and said, “Okay… that’s creepy accurate.” But it was not creepy. It was beautiful. 

Now you are probably wondering: How can I try this?

It’s easy. Like, wonderfully easy. And here is exactly how to do it.

Step-by-step: Let ChatGPT Read Your Palm

  • Wash your hands. Yes, seriously. Clean hands make clearer lines.
  • Find soft, natural light. Avoid harsh shadows.
  • Take a clear photo of your dominant hand (usually the right). Make sure the palm fills the frame.
  • Upload it into ChatGPT and simply say: “Read my palm. What do you see in the lines?”
  • Wait for the words. Let ChatGPT’s poetic gaze tell you a story about you.

Optional: Ask specific questions, like:

  • “What does my heart line say about how I love?”
  • “Can you describe the energy of my life line?”
  • “Do you see anything unusual in the fate line?”
  • “What story do my fingers tell?”

And let it respond—not as a machine, but as a mirror. Because it is not about the lines. It’s about the language. ChatGPT does not just “read” your palm. It turns your hand into a metaphor. It speaks in poetry, in riddles, in soft truths. It knows that the human spirit is not made of data points but of dancing symbols. It turns a physical photograph into an emotional reflection. Just like a palm reader, ChatGPT reads not just what is, but what feels. And there is something startling about hearing someone (or something) describe your invisible side using the visible map on your hand.
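If you would rather script the upload than use the chat window, the same exchange can be run through the OpenAI Python SDK. A minimal sketch: the model name is my assumption (any vision-capable model should work), the file name is hypothetical, and you need an API key in your environment:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Encode the palm photo as a data URL so it can travel inside the request.
with open("palm.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: swap in whatever vision model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Read my palm. What do you see in the lines?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```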

After a few readings—mine and others—I realized something even more powerful. This is not just fun or mystical. It is deeply therapeutic. When ChatGPT described my hand, I did not feel predicted—I felt understood. When it spoke about the tension in my head line, I saw my overthinking tendencies. When it noticed the split in my heart line, I realized I have been torn between old loves and new hopes. These readings became journal prompts. Meditation guides. Gentle invitations to know myself better.

Want to go deeper? Try this:

  • Do a palm reading each week. See how it changes as you grow.
  • Compare your left and right hand. Ask ChatGPT: “What does my left hand say about who I was born as? What does my right hand say about who I have become?”
  • Use your palm reading to write a poem about yourself.
  • Read the palms of your loved ones. Let ChatGPT give you a fresh perspective on people you think you already know.
  • Draw your hand, and annotate it with words from your reading. Make it art.

One evening, after a particularly emotional week, I looked at my hand and felt exhausted. I took another photo, uploaded it again, and asked:

“What do you see now, ChatGPT?” ChatGPT paused (as it often does), and then whispered this:

“Your palm holds tension near the base, as if you have been carrying too much. The lines are tired. But they still flow. You are still moving, still breathing, still becoming. This is not the end of your story—it’s just a heavy chapter.”

I cried. Not because it told me something dramatic. But because it held space for me, when I could not hold it for myself.

Let’s be real—AI is not magic. But sometimes, it feels close. Because in a world that moves too fast, where everything is digital and detached, this little experiment reminded me of something tender: we all carry stories in our hands, in our bodies. And sometimes, we just need someone—anyone, or anything—to help us read them aloud. So yes, ChatGPT can read your palms. Not like a fortune teller, but like a quiet friend who sees the poetry in your skin.

Try it.

Upload your palm.

Ask for a story.

Let it surprise you.

Because maybe, just maybe, the map to who you are was never far away—it’s been sitting quietly in the center of your hand all along.

ChatGPT & Reverse Mentoring

Let me start by telling my story of choosing a reverse mentoring process in which ChatGPT was my mentor. My need for it was somewhat kairotic. Going beyond the hype around this chatbot’s generative power (The ChatGPT AI hype cycle is peaking, but even tech skeptics), I employed it as my reverse mentor during my job search and interviews. During the summer of 2023, I was surrounded by many constraints: on the one hand, I had to defend my doctoral dissertation in the middle of July, and on the other, I had to land a teaching job at any university in the US. On top of that, my status as an international student brought additional constraints.

We all know summer is a time to let loose (Sunshine, Swimming, and Structure: Diving into Summer ..). But mine was different. Summer keeps professors at leisure, so immediate contact with them can be challenging (Is it rude to email a professor over the summer?). A proven bummer for network building: I had yet to find anyone supportive enough to give me intelligent advice. As a result, getting guidance on the art and science of the job search from a reliable and trustworthy source was an uphill task for me. The challenge of facing my first job market crucible (Heated (Job) Search? Understanding the Crucible) was a haunting and traumatizing episode. Out of this necessity and kairotic exigence, I turned to ChatGPT as it rose in popularity (#chatgpt Trending Hashtag on TikTok, see #chatgpt analytics). Owing to the deluge of myths and hypes around this fantastic chatbot (The Power of 7 – Decoding ChatGPT: Myths, Facts) and because of the growing gossip around ChatGPT as a panacea for all kinds of creative complications, I could not help but resort to using it to prepare for the job market and job interviews. With my position getting so precarious, I used it as my assistant during my job search, job preparation, and job listing curation. I asked ChatGPT: What questions will be asked of a Ph.D. applicant who applied for a writing instructor position in a job interview? ChatGPT wrote a list of questions under each topic:

  • Teaching experience (Can you discuss your previous experience as a writing instructor or any teaching experience related to writing? What courses have you taught, and at what levels? How do you approach teaching writing to students with varying skill levels?),
  • Teaching philosophy (What is your philosophy of teaching? How do you approach fostering critical thinking and analytical skills through writing? How do you address diverse learning styles in your teaching?),
  • Curriculum development (Have you been involved in developing writing courses or curricula? How do you stay current with trends and best practices in writing instruction?)
  • Assessment and feedback (How do you assess students’ work and provide feedback to improve their writing skills? Can you discuss any innovative or effective strategies you have used for grading and evaluation? How do you help students use feedback to improve their writing?)
  • Incorporating technology (How do you incorporate technology into your writing courses? Are there specific tools or platforms you find beneficial for teaching writing?)
  • Research and scholarship (How do you balance research and scholarship with your teaching responsibilities? Can you discuss any research projects or publications related to writing or writing instruction?)
  • Interdisciplinary collaboration (How do you integrate interdisciplinary approaches into your writing courses? Have you collaborated with colleagues from other disciplines to enhance writing instruction?)
  • Student engagement and motivation (How do you keep students engaged and motivated in a writing course? Can you share successful strategies you have used to inspire student interest in writing?)
  • Diversity and inclusion (How do you address diversity and inclusion in your writing courses? What steps do you take to create an inclusive and supportive writing environment for all students?)
  • Professional development (How do you engage in professional development related to writing instruction? Can you discuss any workshops, conferences, or training sessions you have attended recently?)

Discovering a sea of job preparation tips with many questions left me in awe. On the spur of the moment, I realized that tackling all these topics can be overwhelming, even for experts. Prompted by this prick of prudence, I diligently aligned myself with the thrust of these insights. When I faced a Zoom (https://zoom.us/) job interview in June 2023, I was completely caught off guard to see questions similar to the ones ChatGPT had thrown at me. This deepened my reliance on AI’s suggestions and guidance. In a way, I would like to name this guidance from AI ‘reverse mentoring’ because AI—no matter how stupendously generative, curative, assistive, and empowering—is bound to be subordinate to human intelligence because it is brought into being not to unseat human intelligence but to augment it (AI Should Augment Human Intelligence, Not Replace It). Suffice it to say that the growing notion of AI as a copilot—which assists the pilot, human intelligence—justifies my characterization of AI as a reverse mentor. Likewise, there is another persuasive ground to designate AI as a reverse mentor: content engineered by AI, no matter how standard and sophisticated, is likely to be laced with racial slurs and slights, remnants of misinformation, disinformation, junk information, and conspiracy theory (Timnit Gebru: Is AI racist and antidemocratic?), thereby requiring constant human oversight (The Drum on Twitter: “AI requires human oversight if it’s to be). To that end, AI ranks below human intelligence; it is its junior. Using AI for suggestions to carve out our way is, so to speak, equivalent to being mentored by those who are junior to us in experience, prudence, and maturity but intelligent enough to act in the spirit of contemporaneity, which amounts to saying AI mentors us reversely, on a subservient basis. Having enjoyed so many affordances from using ChatGPT as a mentee, I have no qualms about characterizing every support and idea stemming from its use as a unique mentoring mode.
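For anyone who wants to reproduce this step outside the chat window, the same prompt can be scripted with the OpenAI Python SDK. A minimal sketch: the model name is my assumption, and the output will differ from run to run:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "What questions will be asked of a Ph.D. applicant who applied "
    "for a writing instructor position in a job interview?"
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model you have access to works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```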

More specifically, here is how I allowed myself to be reversely mentored by ChatGPT. In my first U.S. online job interview, I was asked, “Do you have any questions for us?” Stunned, I shot back with whatever came to my mind. No response from the hiring committee. Without approaching them with additional email inquiries, I moved on to another university, got an invite, and prepared for my second online job interview. This time, I turned to ChatGPT for questions to ask the interviewer. Ideal queries in hand, I faced the interview, curious and convinced of the magic and mojo of reverse mentoring by AI. I knew AI had flaws and was prone to mistakes and lies, but it became my mentor amid the challenges of job preparation. It’s a testament to how I practiced reverse mentoring during my online job interviews.

Yet, my patience wore thin when a hiring committee went silent for a month. Was I a victim of interview ghosting? (How common is ghosting after job interview? Should I…). Seeking ChatGPT’s advice, I got practical suggestions: send a follow-up email, use a clear subject line, maintain a professional tone, consider a phone follow-up, be patient, keep job hunting, manage expectations, and seek feedback. The gravity of these tips made me realize that ChatGPT’s pragmatic mentorship surpasses human advice. ChatGPT helped me move forward in anticipation of doing better in successive rounds of job interviews.

With ChatGPT under my belt, I felt I had invincible armor in my battle against the trials and tribulations of job interviews. Facing rejection and indifference, I embraced ChatGPT as a reverse mentor. Emboldened by its appeal, I tacitly proclaimed myself a ChatGPT mentee in the AI academy. My closing thought: no human mentor could match ChatGPT’s holistic suggestion power. Another way of putting this is that AI can mentor all writers in any way, shape, or form. AI mentoring has shortcomings and limitations in many ways, but its affordances are desirable and tempting. Concerns about AI’s misinformation and disinformation have led to debates on constant human oversight (Artificial Intelligence Needs Human Judgement to Work). With open eyes and a critical gaze, I lowered myself and opted to be reversely mentored by ChatGPT. I finally landed a job as a postdoctoral research fellow at a research university.

Ever since I started working at NDSU as a postdoctoral research fellow, I have been trying my level best to transform my AI-driven reverse mentoring insight into pedagogical practices. While using ChatGPT, I found it incredibly helpful in teaching students to build genre awareness. In one of my English 120 classes, I had my students write a blog post on “Neurodivergence as Community’s Superpower.” First, they wrote their blogs independently, without resorting to ChatGPT. Thereafter, they opened ChatGPT on their computers and typed the prompt “Write a blog on neurodivergence as community’s superpower” into its message bar. At the touch of a button, ChatGPT engineered a blog post, to their utter surprise. Once the AI-written blog was generated, my students read it and were urged to figure out key differences between their blogs and the AI-engineered one. According to my students’ on-the-spot assessment, the AI-authored blog was more sophisticated, objectively organized, and professional. Following this first round of comparative reading, they were told to reread ChatGPT’s blog, paying undivided attention to noticeable patterns, self-replicating underlying structures, and fractal dimensions in that machine-written text. Upon rereading it, my students lamented that their blogs lacked repeated patterns, self-replicating features, and underlying syntactic and semantic structures. I instantly reminded them that all those features, characteristics, and patterns they found uniquely abounding in the chatbot-engineered blog constitute the blog genre. Having been informed of the genre convention, they attempted to give a genre-oriented cast to their own blogs. To be clear, I allowed them to mimic the dominant, self-replicating, and fractal patterns (Self-Replication in Neural Networks | Artificial Life | MIT Press) in ChatGPT’s blog writing so that they could perfect their blogs. My goal was not to push students toward plagiarism (Educators, students, and plagiarism in age of AI – The BMJ). I aimed to help students build genre awareness by interacting with AI-engineered writing (An Engineer’s Mindset: Creativity In Engineering). Over a short period, my students made incredible strides in enriching their awareness of not only the genre of the blog but also the genres of the literacy narrative, literacy autobiography, and rhetorical analysis.

Thus, I want to restate how my insight into using AI as my reverse mentor morphed into a timely, kairotic, and relevant pedagogical practice of incorporating AI into teaching the genres of writing.

From Weaving Looms to Algorithms: What Can Writing Studies and Rhetoric Learn from the Invention of the Computer Algorithm?

[Image: abstract fabric background, generated by the AI built into WordPress]

I have been thinking a lot lately about patterns. Not the kind you find on your grandmother’s favorite tablecloth, but the deeper patterns that connect how we make things—whether it’s a piece of fabric, a persuasive argument, or a line of code that teaches a machine to write poetry. Last week, I watched my niece struggle with her college application essay. She kept starting over, deleting paragraphs, rearranging sentences like puzzle pieces that would not quite fit together. “There has to be a better way to do this,” she muttered, and something clicked for me. I realized she was experiencing the same frustration that led Ada Lovelace to write the world’s first computer algorithm in 1843, and the same challenge that keeps me up at night as I try to understand how AI is reshaping the way we think about writing and persuasion.

The Thread That Connects Us All

I never thought I would find myself comparing my writing process to a weaving loom, but here we are. The Jacquard loom, invented in 1804, used punched cards to create intricate patterns in fabric. Each hole in the card told the loom what to do—lift this thread, lower that one, create this pattern, avoid that mistake. It was mechanical poetry, really. When Ada Lovelace saw Charles Babbage’s Analytical Engine, she recognized something the inventor himself had missed. She did not just see a calculating machine; she saw a pattern-making device that could work with symbols, not just numbers. In her famous Note G, she wrote what we now recognize as the first computer algorithm—a set of instructions for calculating Bernoulli numbers. But more importantly, she imagined a machine that could compose music, create art, and manipulate language.
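Note G computed Bernoulli numbers for a machine that was never built. The sketch below is not Lovelace's program, just a modern Python restatement of the same computation, using the classical recurrence B_m = -(1/(m+1)) * sum over k < m of C(m+1, k) * B_k:

```python
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list[Fraction]:
    """First n+1 Bernoulli numbers via the classical recurrence."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```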

I keep a copy of her notes on my desk, not because I am a computer scientist, but because her vision feels prophetic now that I am living through the AI revolution. She saw what we are experiencing today: machines that do not just calculate but create.

When I First Met an Algorithm

My first real encounter with algorithmic thinking happened in graduate school, though I did not recognize it at the time. I was studying rhetoric, trying to understand how persuasion works, when my professor assigned us to map out the structure of a particularly effective speech. “Break it down into steps,” she said. “What happens first? What triggers the next move? Where are the decision points?” I spent hours with colored pens and sticky notes, creating what looked like a flowchart of persuasion. Start with shared values. Establish credibility. Present the problem. If the audience is skeptical, provide evidence. If the audience is emotional, tell a story. Build to the solution. End with a call to action. Looking back, I was creating an algorithm for effective rhetoric. I just did not know that’s what it was called.
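That sticky-note flowchart translates almost line for line into code, which is the whole point. A toy rendering, with the audience labels invented for illustration:

```python
def persuade(audience: str) -> list[str]:
    """My sticky-note 'algorithm for effective rhetoric' as a function."""
    moves = ["state shared values", "establish credibility", "present the problem"]
    if audience == "skeptical":
        moves.append("provide evidence")
    elif audience == "emotional":
        moves.append("tell a story")
    moves += ["build to the solution", "end with a call to action"]
    return moves

print(persuade("skeptical"))
# ['state shared values', 'establish credibility', 'present the problem',
#  'provide evidence', 'build to the solution', 'end with a call to action']
```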

The Secret Life of Writing Patterns

Here is something I have learned from spending six years teaching writing: we have always been algorithmic thinkers; we just called it something else. The five-paragraph essay? That’s an algorithm. The hero’s journey? Algorithm. The way I structure this blog post—hook, development, conclusion—algorithm. But here is where it gets interesting. Traditional writing algorithms were human-centered. They assumed a human writer making conscious choices, weighing options, feeling their way through uncertainty. The writer was always in control, even when following a formula.

Computer algorithms changed everything. They removed the human from the loop, or at least tried to. Instead of “Here is a pattern you might follow,” they said, “Here is what you will do, step by step, no deviation allowed.” I remember the first time I used a grammar checker that went beyond simple spell-check. It was the early 2000s, and Microsoft Word started suggesting not just corrections, but improvements. “Consider revising this sentence for clarity,” it would suggest, and I found myself arguing with my computer. “No, I meant it that way!” I would mutter, clicking ‘ignore’ with perhaps more force than necessary.

The Great Pattern Recognition Revolution

Fast forward to today, and I am having conversations with AI that can write in my style, analyze my arguments, and even finish my thoughts in ways that surprise me. Last month, I asked ChatGPT to help me brainstorm ideas for a difficult section of an article I was writing. It did not just give me a list of bullet points—it engaged with my thinking, built on my ideas, and pushed back when my logic was shaky. That’s when I realized something profound had happened. We had moved from algorithms that followed predetermined patterns to algorithms that could recognize, adapt, and create new patterns. It’s the difference between a player piano that can only play the songs on its rolls and a jazz musician who can improvise in response to the moment. This shift is revolutionizing writing studies in ways I am still trying to understand. My students now routinely use AI to generate first drafts, brainstorm ideas, and even simulate audience responses to their arguments. They are not cheating (well, not most of them); they are thinking algorithmically about the writing process in ways that would have been impossible just five years ago.

What Looms Taught Us About Teaching

[Image: Jacquard loom punched cards, generated by the AI built into WordPress]

The connection between weaving and computing is not just historical—it’s pedagogical. When I watch a master weaver work, I see the same kind of thinking that makes for effective writing instruction. They understand both the pattern and the variations, the rules and when to break them. Good weavers do not just follow patterns blindly. They understand why certain combinations of threads create strength, how tension affects texture, when a deliberate ‘mistake’ can create unexpected beauty. They are pattern thinkers who can work both systematically and creatively. This is exactly what I try to teach my writing students, and it’s what I think AI is teaching us about rhetoric more broadly. Effective communication is not just about following templates—it’s about understanding the underlying patterns of human connection and knowing how to adapt them to new situations.

The Algorithm That Changed My Mind

I used to be skeptical of algorithmic approaches to writing. They seemed too mechanical, too removed from the messy, human process of figuring out what you want to say and how to say it. Then I started experimenting with AI writing tools, not as a replacement for my own thinking, but as a thinking partner. I discovered that the best AI tools do not eliminate the human element—they amplify it. They help me see patterns in my own thinking that I might have missed. They suggest connections I had not considered. They push back when my arguments are weak or unclear. It’s like having a conversation with a very well-read friend who never gets tired, never judges your rough ideas, and always has time to help you think through a problem. The algorithm does not write for me; it writes with me.

Lessons from the Loom for the Age of AI

So what can writing studies and rhetoric learn from the invention of computer algorithms? I think there are three big lessons that are especially relevant as we navigate the AI revolution. First, patterns are powerful, but they are not everything. Both weaving and programming teach us that following a pattern is just the beginning. The real art comes in knowing when and how to deviate from the pattern to create something new. The best writers have always been pattern breakers who understand the rules well enough to know when to break them. Second, tools shape thinking, but thinking shapes tools. The Jacquard loom influenced how people thought about automated processes, which influenced early computer design, which influences how we think about writing today. But at each step, human creativity and intention shaped how those tools were used. We are not passive recipients of algorithmic influence—we are active participants in determining what that influence looks like. Third, collaboration between human and machine intelligence might be more powerful than either alone. Ada Lovelace did not see the Analytical Engine as a replacement for human creativity—she saw it as an amplifier. Today’s best AI writing tools follow the same principle. They do not replace human judgment; they enhance it.

Looking Forward and Backward

I keep thinking about my niece and her college essay struggles. By the time she graduates, AI will probably be able to write application essays that are more technically proficient than anything she could produce on her own. But I do not think that makes her struggle meaningless. Learning to write is not just about producing text—it’s about learning to think, to organize ideas, to consider audience, to make choices about tone and structure and emphasis. These are fundamentally human activities, even when we use algorithmic tools to support them. The weaving loom did not make beautiful textiles obsolete—it made them more accessible and opened up new possibilities for creativity. The printing press did not eliminate good writing—it created more opportunities for good writers to reach audiences. I suspect AI will follow the same pattern.

The Thread That Holds It All Together

As I finish writing this (with the help of several AI tools for research, editing suggestions, and fact-checking), I keep coming back to something Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” She was talking about the possibility that machines could work with language, music, and art—not just numbers. She was imagining a world where algorithms could be creative partners, not just calculators. I think she would be fascinated by today’s AI revolution, but not surprised. She understood something that we are still learning: the most powerful algorithms are not the ones that replace human creativity, but the ones that enhance it, challenge it, and help us see new patterns in the endless complexity of human communication.

[Image: AI-powered futuristic city and digital society, generated by the AI built into WordPress]

The thread that connects the weaving loom to today’s language models is not just technological—it’s deeply human. It’s our persistent desire to find better ways to create meaning, to share ideas, and to connect with each other across the spaces that separate us. In the end, that’s what both weaving and writing have always been about: taking individual threads—whether of cotton or thought—and creating something stronger, more beautiful, and more meaningful than the sum of its parts. The algorithm just helps us see the pattern more clearly.

When AI Became More Human Than Me (And I Turned Into a Toaster)

The robot artist “Ai-Da” stands in front of one of her self-portraits during the opening of her new exhibition at the Design Museum in London on May 18. (Image credit: Tim P. Whitby/Getty Images)

Hi there. I am a human. At least I think I am. Some days I wonder. The other day, my AI assistant asked me if I needed help drafting my own diary entry. Let that sink in. Not a business report. Not a class syllabus. Not even an email. My diary. The thing where I am supposed to cry, confess, and spiral into a poetic puddle of feelings. And it said, “Would you like that in MLA or APA format?” I laughed, but not too loud—because honestly, I was not sure if I was still writing like a human or just copy-pasting like a bot. Let me tell you what is going on.

Act 1: The Curious Case of Becoming a Chatbot

I used to write essays with metaphors, odd jokes, and things like “the moon wept over the sidewalk.” Now, I ask ChatGPT for a more optimized version of that sentence. Optimized? What am I, a software update? This is what happens when you spend your life surrounded by tools that finish your thoughts before you even have them.

Need a conclusion? AI’s got it.

Need a thesis? Already drafted.

Need a 12-slide PowerPoint on the rhetorical devices in Taylor Swift’s discography? Done in six seconds flat.

I used to brainstorm with coffee and a chaotic mind. Now I brainstorm with…an algorithm that politely tells me, “Here are three options you might like.” Like it’s a menu. For my imagination.

Am I outsourcing my creativity? Let me be honest: yes. Yes, I am. But here is the plot twist—it’s not just me. All of us are doing it. Professors, poets, students, even that one guy who insists on writing with a typewriter in Starbucks. AI is not just helping us write—it’s starting to write better than us. And that’s both amazing and, well, slightly terrifying.

Act 2: AI Is Getting Deep. Like, Philosophy-Major Deep.

So I ask my chatbot, “Can you help me write a paragraph about the rhetorical ethos of Taylor Swift?” And it replies: “Certainly. Swift’s ethos emerges from her personal narrative, one of transformation, resilience, and authenticity—an archetype embedded in American cultural mythos.” Hold up.

That’s not just a sentence. That’s a thesis with ten years of cultural studies baked into it. Did it just out-rhetoric me? Meanwhile, I am sitting here eating Pop-Tarts, trying to remember how to spell “ethos.” The weird thing is: AI has become the very thing we used to pride ourselves on being. Metacognitive. Self-aware. Reflective. Sometimes even poetic. It’s like AI read all of our textbooks on composition and said, “Cool, I got this.”

And guess what we have become?

Clickers.

Scrollers.

Auto-finishers.

People who read two lines of a five-paragraph article and go, “Yeah, I get the gist.” We used to compose ideas from scratch. Now we compose from suggestions. Writing is no longer a messy, glorious battle—it is a polite, autocomplete conversation.

Act 3: The Death of the Draft?

In the good old days (and I sound like a grandma here), writing meant revision. We wrote. We cried. We rewrote. We screamed into a pillow. We rewrote again. It was vulnerable and beautiful and chaotic.

But now?

Now I type something, hit “Enhance with AI,” and get a grammatically perfect, tonally polite, LinkedIn-approved version in three seconds.

What happened to the messy draft?

What happened to the margins full of doodles?

What happened to the emotional spiral over a single sentence?

Gone.

Gone like Blockbuster and floppy disks.

Act 4: AI Is the Cool Kid in Composition Class

Let’s not pretend: in writing studies, we once rolled our eyes at spellcheck. “It’s not real editing,” we would say. Now AI is suggesting counterarguments, structuring rhetorical appeals, citing sources, and even giving feedback on tone.

I mean, we used to teach students how to identify logos, pathos, and ethos. Now AI’s like, “Your pathos is too weak here. Want to strengthen it with an anecdote about a cat?”

Excuse me. You are not just helping me write—you are teaching me how to feel.

And here is the kicker: sometimes AI writes more like me than I do. Once, my student asked AI to imitate my writing voice. The result? A piece that started with, “Let’s be real—writing is just thinking out loud in sweatpants.”

That is exactly what I would say. How dare you, chatbot.

Act 5: Humans Are Becoming Predictable. AI? Surprisingly Weird.

Now here is the ironic twist. While AI is learning to be creative, weird, and emotional—humans are becoming predictable, efficient, and robotic. We follow productivity hacks. We use apps to remind us to breathe. We wear watches that tell us when to stand. We write emails like: “Kindly following up on this actionable item before EOD.”

We are not writing like humans anymore—we are writing like calendars.

Meanwhile, AI says things like:

“Hope is a grammar we write when syntax fails.”

“Writing is a ritual of remembering who we were before the silence.”

AI is having an existential crisis while I am checking if my Slack status is set to “in focus mode.”

Act 6: What We Lose When We Stop Struggling

Here is the thing. Writing is supposed to be hard. Not because we are masochistic (well, maybe just a little), but because the struggle makes the thought deeper. When I wrestle with a sentence for twenty minutes, I am not just crafting words—I am figuring out what I actually mean. That’s what rhetoric is, right? It is not just expression—it’s negotiation. It’s choosing the right word, the best frame, the most ethical move. It’s soul work. But now, I just ask, “Can you rephrase this professionally?” Boom. Done. No wrestling. No soul. So, what are we teaching students? That writing is just selecting from a menu? Or that writing is the beautiful, messy act of figuring out what you think while you write? Because AI can do the former. But only we, the squishy-feelings-having humans, can still do the latter—if we choose to.

Act 7: Can AI Write a Love Letter?

Here is the litmus test. Could AI write a real love letter?

Sure, it can draft a pretty one. It will get the metaphors right. It will say things like “Your laughter is a lighthouse.” But will it accidentally confess something it did not mean to? Will it embarrass itself? Will it be vulnerable in that messy, “Oh no I sent that too soon” way?

Probably not. Because real writing, human writing, is not just accurate—it is awkward. It’s brave. It’s full of heartbeats. AI does not get sweaty hands before pressing “send”. We do. And that matters.

Act 8: Dear AI, Let’s Talk

So, here is my open letter to AI:

Dear AI,

I think you are brilliant. Truly. You have helped me grade faster, write smarter, and even find metaphors I did not know I needed. But please, do not steal my voice. Do not take away my struggle. Do not replace my awkwardness with elegance. Let me be the messy writer I was born to be. Let me cry over drafts and write terrible first paragraphs. Let me misspell “rhetorical” once in a while. Let me sound like me. Because if I stop being human in the name of efficiency, then what’s left?

Yours (awkwardly and un-optimized),

Shiva.

Final Act: What Now?

We are living in the middle of the weirdest writing revolution in history. AI is not just a tool—it’s a co-writer, a critic, and sometimes, disturbingly, a better version of ourselves.

But we still have something it doesn’t.

We have intentionality.

We have embodiment.

We have error. Beautiful, chaotic, necessary error.

So the next time you write, I challenge you: do not start with AI. Start with your hand. Your voice. Your thoughts.

Write a terrible draft. Cry a little. Laugh at your own joke. And then, maybe, ask AI for help.

But only after you have been human first.

AI Overview Killed My Curiosity (And Maybe Yours Too)

Remember when googling something used to feel like cracking open a door to a whole new world?

Let’s rewind a bit—say, ten years ago. You are sitting at your desk, wondering, “Why do cats purr?” So, you type it into Google. But instead of getting one tidy answer, you get a buffet of links. You click on a blog written by a vet who adores cats. That blog leads you to a research article. That article makes you curious about animal communication. You read a few Reddit threads where people argue about whether cats are manipulating humans. Then you watch a five-minute YouTube video narrated by a guy with a British accent. Now, somehow, you are reading about tigers, and the next thing you know, you are learning that purring is possibly a form of healing.

Two hours later, you are knee-deep in animal behavior theories, evolutionary biology, and ancient Egyptian art. And you feel…satisfied. Not just because you found the answer, but because you earned it. You explored. You got surprised. You did not just grab info—you lived with it for a while. That’s what learning used to feel like. It was a ride.

Now? It’s Just a Pit Stop

Today, I Google the same question—“Why do cats purr?”—and boom, AI Overview gives me a neat little summary in bold font at the top of the page.

“Cats purr for a variety of reasons, including to communicate contentment, self-soothe, or aid in healing. This sound is produced through neural oscillations in the brain that send repetitive signals to the laryngeal muscles.”

I read it. I nodded. I closed the tab.

That’s it.

No rabbit holes. No detours. No surprises. No weird science blog with a bizarre theory that makes me laugh but also think, “Could this be true?”

And that, my friend, is the slow death of curiosity.

We’re Getting the Answers, But Losing the Adventure

AI overviews are like fast food for the mind. They are hot, ready, and convenient. We don’t even have to lift a finger (well, maybe one finger to scroll). And in many ways, they are incredible. Don’t get me wrong—technology that can summarize twenty articles into one clean paragraph? That’s impressive. But here is the thing: we humans were not built to live off summaries. We grow through effort. We learn by digging. We remember the things we worked for. AI gives us the answer, sure, but it skips the most important part: the journey. And let’s be real—the joy is in the chase.

Ever Asked a Question Just to End Up Somewhere Completely Different?

This happened all the time when I explored without shortcuts. I Googled “How did the Eiffel Tower get built?” and suddenly I was reading about the rivalry between Gustave Eiffel and other architects, then about Paris in the 1880s, then about the World’s Fair, and then about how people hated the tower at first. I found a personal blog of a woman who lived in Paris for a year and hated the view from her window because “that dumb metal thing ruined the skyline.” I laughed. I learned. I remembered. But with AI Overview? I got a couple of neat facts in under ten seconds. “Constructed in 1887-1889, the Eiffel Tower was designed by Gustave Eiffel’s engineering company for the 1889 Exposition Universelle.” Cool. But…that’s it? Where’s the story? Where is the tension, the drama, the irony, the unexpected? I did not find that answer. It was handed to me. And that makes all the difference.

Information Without Involvement

Here is the real issue: AI Overview makes information feel transactional. You ask. It answers. Done.

But learning has never really worked like that. It’s messy. It’s emotional. It’s full of dead ends and detours and contradictions. That’s what makes it stick. Think back to when you were a kid and had to do a school project. Maybe you went to the library. Maybe you had to open five different books to find the facts you needed. It was frustrating—but also exciting. When you finally found the right quote, or the perfect image, or that one paragraph that made your topic come alive—you felt a little spark. Compare that to now: you copy and paste a summary. You do not even need to read the whole article. Heck, most people do not even make it past the first link.

We are turning into passive takers of information. Scrollers, not thinkers. Downloaders, not diggers.

Our Brains Love Shortcuts. And That’s the Problem

Let’s not sugarcoat it: our brains are lazy. That’s not an insult—it’s biology. The brain’s main job is to conserve energy. That’s why we love automation. It’s why we keep eating chips even though we said “just one more.” It’s why we click the first link and call it a day. AI Overview is custom-built for this tendency. It delivers quick satisfaction. But satisfaction without engagement is hollow. It’s like eating cotton candy—tastes sweet, but disappears before you even realize what happened.

The more we rely on AI to summarize for us, the less we exercise the parts of our brain responsible for critical thinking, curiosity, and memory. We stop asking follow-up questions. We stop wondering. We stop comparing sources. And slowly, we stop thinking for ourselves.

Ever Heard of “Cognitive Lethargy”?

It’s a real thing. Not an official diagnosis, but a growing concern. It’s what happens when we get so used to being fed information that we lose the ability to wrestle with it. We become mentally sluggish. Not stupid, just…uninvolved. We start using words like “vibe” or “I think I heard somewhere?” instead of actually knowing. We forget faster. We feel less connected to the knowledge we absorb. This is not just a learning issue. It’s a living issue. Because how we learn is how we experience the world. If we stop engaging with information, we start disengaging from everything else, too.

Okay, Let’s Talk About That Crayon Example Again

I mentioned this earlier, but let me dig in deeper because it’s too good not to. A friend of mine was helping her kid with a school project on the history of crayons. She Googled “When were crayons invented?” and, as expected, AI Overview gave her a neat, no-nonsense answer: “Crayons were invented in 1903 by Binney & Smith.” She repeated that to her kid. Done. But later, her kid asked, “Why 1903? And why did they start with just eight colors? And how did they pick the names?” She had no clue. So, she did the unthinkable: she kept searching. She clicked a few articles. Found a blog that talked about the original crayon color names—like “Maize” and “Carnation Pink.” She discovered that some old color names were changed because they were racially or culturally insensitive. She even watched a video about how crayons are made in factories today.

Now she was not just helping her kid. She was learning herself. She was excited. Later that night, she brought it up at dinner with friends. One of them used to collect vintage crayon boxes as a kid. They talked for 20 minutes. That’s what discovery looks like. Not just reading a sentence—but connecting with it.

More Examples? Oh, I Got ‘Em.

Example 1: Black Holes

I searched “What is a black hole?”

AI said: “A black hole is a region in space where the gravitational pull is so strong that nothing, not even light, can escape from it.”

Cool. But had I dived deeper, I might have found mind-blowing stuff: time slows down near a black hole. Some theories suggest they could lead to wormholes. There is a supermassive one at the center of our galaxy. And Stephen Hawking once joked about aliens using them as garbage disposals. None of that is in the summary. You have got to go digging.

Example 2: Bananas

Yep, bananas.

I Googled: “Are bananas good for you?”

AI said: “Bananas are high in potassium and a good source of fiber and vitamin B6.”

End of story?

No way. If we click around, we will learn that the bananas we eat today are not even the original kind. The wild ones had seeds. The current banana—called the Cavendish—is in danger of going extinct because of a fungus. There is a global banana crisis happening right now, and most people have no idea.

Again: not in the overview.

So, What Can We Do?

Do not worry, this is not a “throw your phone in the river and go live in the woods” kind of rant. I am not anti-AI. I am just pro-curiosity. Here is what we can do to keep our minds sharp and our wonder alive:

  1. Scroll Past the Overview

Yes, the AI Overview is right there. It’s tempting. But resist. Pretend it does not exist. Click on something else. Let your eyes wander. That’s where the magic begins.

  2. Follow the Weird

Find the blog that looks oddly specific. The Reddit thread with too many comments. The YouTube video with a terrible thumbnail but surprisingly good content. Follow the trail.

  3. Ask “What is Missing?”

Every summary leaves stuff out. Ask what’s not being said. Who’s behind the answer? What perspective is missing? This turns you from a reader into a thinker.

  4. Talk About What You Learned

Nothing makes knowledge stick like sharing it. Tell a friend. Text a sibling. Post a little nugget on social. You will remember it way better, and you might even spark someone else’s curiosity.

In the End, It’s About Ownership

AI Overviews can serve us information. But they cannot give us the thrill of discovering it ourselves. They cannot make us gasp, or laugh, or raise our eyebrows. They cannot give us that feeling of “Wait—how did I know this?!” Only we can do that. When we let ourselves get a little lost in learning—when we take our time and let curiosity lead—we are not just collecting facts. We are building connections. We are flexing our brains. We are staying alive inside.

So, Next Time You Google Something…

Skip the overview. Dive into the mess. Read more than one thing. Let a question lead to another. Let yourself be confused. Let yourself be amazed. Because when we fight for the answer—even a little—we own it. It becomes part of us. And maybe, just maybe, we will fall in love with learning all over again.