From Weaving Looms to Algorithms: What Can Writing Studies and Rhetoric Learn from the Invention of the Computer Algorithm?

Abstract fabric background. Image generated by the AI built into WordPress.

I have been thinking a lot lately about patterns. Not the kind you find on your grandmother’s favorite tablecloth, but the deeper patterns that connect how we make things—whether it’s a piece of fabric, a persuasive argument, or a line of code that teaches a machine to write poetry. Last week, I watched my niece struggle with her college application essay. She kept starting over, deleting paragraphs, rearranging sentences like puzzle pieces that would not quite fit together. “There has to be a better way to do this,” she muttered, and something clicked for me. I realized she was experiencing the same frustration that led Ada Lovelace to write the world’s first computer algorithm in 1843, and the same challenge that keeps me up at night as I try to understand how AI is reshaping the way we think about writing and persuasion.

The Thread That Connects Us All

I never thought I would find myself comparing my writing process to a weaving loom, but here we are. The Jacquard loom, invented in 1804, used punched cards to create intricate patterns in fabric. Each hole in the card told the loom what to do—lift this thread, lower that one, create this pattern, avoid that mistake. It was mechanical poetry, really. When Ada Lovelace saw Charles Babbage’s Analytical Engine, she recognized something the inventor himself had missed. She did not just see a calculating machine; she saw a pattern-making device that could work with symbols, not just numbers. In her famous Note G, she wrote what we now recognize as the first computer algorithm—a set of instructions for calculating Bernoulli numbers. But more importantly, she imagined a machine that could compose music, create art, and manipulate language.
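For the curious, the computation Note G lays out can be sketched in a few lines of modern Python. To be clear, this is only a present-day illustration of the standard Bernoulli-number recurrence, not a transcription of Lovelace’s actual table of operations for the Analytical Engine:

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        # Standard recurrence: for m >= 1, sum over j of C(m+1, j) * B_j equals 0,
        # starting from B_0 = 1 (and using the B_1 = -1/2 convention).
        B = [Fraction(1)]
        for m in range(1, n + 1):
            partial = sum(comb(m + 1, j) * B[j] for j in range(m))
            B.append(-partial / (m + 1))
        return B

    print(bernoulli_numbers(8))  # B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, ...

A dozen lines today; in 1843 it took a visionary to see that a machine could be told to do this at all.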

I keep a copy of her notes on my desk, not because I am a computer scientist, but because her vision feels prophetic now that I am living through the AI revolution. She saw what we are experiencing today: machines that do not just calculate but create.

When I First Met an Algorithm

My first real encounter with algorithmic thinking happened in graduate school, though I did not recognize it at the time. I was studying rhetoric, trying to understand how persuasion works, when my professor assigned us to map out the structure of a particularly effective speech. “Break it down into steps,” she said. “What happens first? What triggers the next move? Where are the decision points?” I spent hours with colored pens and sticky notes, creating what looked like a flowchart of persuasion. Start with shared values. Establish credibility. Present the problem. If audience is skeptical, provide evidence. If audience is emotional, tell a story. Build to the solution. End with a call to action. Looking back, I was creating an algorithm for effective rhetoric. I just did not know that’s what it was called.
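If I were to redraw that flowchart today, it might look something like the toy sketch below. This is just my own illustration of the decision points, written in Python; the audience flags and the move names are hypothetical stand-ins, not a real rubric:

    def persuasion_moves(audience):
        # A toy version of the flowchart I mapped out with sticky notes.
        # 'skeptical' and 'emotional' are hypothetical audience flags.
        moves = ["start with shared values",
                 "establish credibility",
                 "present the problem"]
        if audience.get("skeptical"):
            moves.append("provide evidence")
        if audience.get("emotional"):
            moves.append("tell a story")
        moves.append("build to the solution")
        moves.append("end with a call to action")
        return moves

    print(persuasion_moves({"skeptical": True, "emotional": False}))

The point is not that persuasion reduces to a function call; it is that the structure of the moves can be written down, inspected, and varied, which is exactly what an algorithm is.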

The Secret Life of Writing Patterns

Here is something I have learned from spending six years teaching writing: we have always been algorithmic thinkers; we just called it something else. The five-paragraph essay? That’s an algorithm. The hero’s journey? Algorithm. The way I structure this blog post—hook, development, conclusion—algorithm. But here is where it gets interesting. Traditional writing algorithms were human-centered. They assumed a human writer making conscious choices, weighing options, feeling their way through uncertainty. The writer was always in control, even when following a formula.

Computer algorithms changed everything. They removed the human from the loop, or at least tried to. Instead of “Here is a pattern you might follow,” they said, “Here is what you will do, step by step, no deviation allowed.” I remember the first time I used a grammar checker that went beyond simple spell-check. It was the early 2000s, and Microsoft Word started suggesting not just corrections, but improvements. “Consider revising this sentence for clarity,” it would suggest, and I found myself arguing with my computer. “No, I meant it that way!” I would mutter, clicking ‘ignore’ with perhaps more force than necessary.

The Great Pattern Recognition Revolution

Fast forward to today, and I am having conversations with AI that can write in my style, analyze my arguments, and even finish my thoughts in ways that surprise me. Last month, I asked ChatGPT to help me brainstorm ideas for a difficult section of an article I was writing. It did not just give me a list of bullet points—it engaged with my thinking, built on my ideas, and pushed back when my logic was shaky. That’s when I realized something profound had happened. We had moved from algorithms that followed predetermined patterns to algorithms that could recognize, adapt, and create new patterns. It’s the difference between a player piano that can only play the songs on its rolls and a jazz musician who can improvise in response to the moment. This shift is revolutionizing writing studies in ways I am still trying to understand. My students now routinely use AI to generate first drafts, brainstorm ideas, and even simulate audience responses to their arguments. They are not cheating (well, not most of them); they are thinking algorithmically about the writing process in ways that would have been impossible just five years ago.

What Looms Taught Us About Teaching

Punched cards for a Jacquard loom. Image generated by the AI built into WordPress.

The connection between weaving and computing is not just historical—it’s pedagogical. When I watch a master weaver work, I see the same kind of thinking that makes for effective writing instruction. They understand both the pattern and the variations, the rules and when to break them. Good weavers do not just follow patterns blindly. They understand why certain combinations of threads create strength, how tension affects texture, when a deliberate ‘mistake’ can create unexpected beauty. They are pattern thinkers who can work both systematically and creatively. This is exactly what I try to teach my writing students, and it’s what I think AI is teaching us about rhetoric more broadly. Effective communication is not just about following templates—it’s about understanding the underlying patterns of human connection and knowing how to adapt them to new situations.

The Algorithm That Changed My Mind

I used to be skeptical of algorithmic approaches to writing. They seemed too mechanical, too removed from the messy, human process of figuring out what you want to say and how to say it. Then I started experimenting with AI writing tools, not as a replacement for my own thinking, but as a thinking partner. I discovered that the best AI tools do not eliminate the human element—they amplify it. They help me see patterns in my own thinking that I might have missed. They suggest connections I had not considered. They push back when my arguments are weak or unclear. It’s like having a conversation with a very well-read friend who never gets tired, never judges your rough ideas, and always has time to help you think through a problem. The algorithm does not write for me; it writes with me.

Lessons from the Loom for the Age of AI

So what can writing studies and rhetoric learn from the invention of computer algorithms? I think there are three big lessons that are especially relevant as we navigate the AI revolution. First, patterns are powerful, but they are not everything. Both weaving and programming teach us that following a pattern is just the beginning. The real art comes in knowing when and how to deviate from the pattern to create something new. The best writers have always been pattern breakers who understand the rules well enough to know when to break them. Second, tools shape thinking, but thinking shapes tools. The Jacquard loom influenced how people thought about automated processes, which influenced early computer design, which influences how we think about writing today. But at each step, human creativity and intention shaped how those tools were used. We are not passive recipients of algorithmic influence—we are active participants in determining what that influence looks like. Third, collaboration between human and machine intelligence might be more powerful than either alone. Ada Lovelace did not see the Analytical Engine as a replacement for human creativity—she saw it as an amplifier. Today’s best AI writing tools follow the same principle. They do not replace human judgment; they enhance it.

Looking Forward and Backward

I keep thinking about my niece and her college essay struggles. By the time she graduates, AI will probably be able to write application essays that are more technically proficient than anything she could produce on her own. But I do not think that makes her struggle meaningless. Learning to write is not just about producing text—it’s about learning to think, to organize ideas, to consider audience, to make choices about tone and structure and emphasis. These are fundamentally human activities, even when we use algorithmic tools to support them. The weaving loom did not make beautiful textiles obsolete—it made them more accessible and opened up new possibilities for creativity. The printing press did not eliminate good writing—it created more opportunities for good writers to reach audiences. I suspect AI will follow the same pattern.

The Thread That Holds It All Together

As I finish writing this (with the help of several AI tools for research, editing suggestions, and fact-checking), I keep coming back to something Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations.” She was talking about the possibility that machines could work with language, music, and art—not just numbers. She was imagining a world where algorithms could be creative pattern-makers, not just calculators. I think she would be fascinated by today’s AI revolution, but not surprised. She understood something that we are still learning: the most powerful algorithms are not the ones that replace human creativity, but the ones that enhance it, challenge it, and help us see new patterns in the endless complexity of human communication.

An AI-powered futuristic city. Image generated by the AI built into WordPress.

The thread that connects the weaving loom to today’s language models is not just technological—it’s deeply human. It’s our persistent desire to find better ways to create meaning, to share ideas, and to connect with each other across the spaces that separate us. In the end, that’s what both weaving and writing have always been about: taking individual threads—whether of cotton or thought—and creating something stronger, more beautiful, and more meaningful than the sum of its parts. The algorithm just helps us see the pattern more clearly.

When AI Became More Human Than Me (And I Turned Into a Toaster)

The robot artist “Ai-Da” stands in front of one of her self-portraits during the opening of her new exhibition at the Design Museum in London on May 18. (Image credit: Tim P. Whitby/Getty Images)

Hi there. I am a human. At least I think I am. Some days I wonder. The other day, my AI assistant asked me if I needed help drafting my own diary entry. Let that sink in. Not a business report. Not a class syllabus. Not even an email. My diary. The thing where I am supposed to cry, confess, and spiral into a poetic puddle of feelings. And it said, “Would you like that in MLA or APA format?” I laughed, but not too loud—because honestly, I was not sure if I was still writing like a human or just copy-pasting like a bot. Let me tell you what is going on.

Act 1: The Curious Case of Becoming a Chatbot

I used to write essays with metaphors, odd jokes, and things like “the moon wept over the sidewalk.” Now, I ask ChatGPT for a more optimized version of that sentence. Optimized? What am I, a software update? This is what happens when you spend your life surrounded by tools that finish your thoughts before you even have them.

Need a conclusion? AI’s got it.

Need a thesis? Already drafted.

Need a 12-slide PowerPoint on the rhetorical devices in Taylor Swift’s discography? Done in six seconds flat.

I used to brainstorm with coffee and a chaotic mind. Now I brainstorm with…an algorithm that politely tells me, “Here are three options you might like.” Like it’s a menu. For my imagination.

Am I outsourcing my creativity? Let me be honest: yes. Yes, I am. But here is the plot twist—it’s not just me. All of us are doing it. Professors, poets, students, even that one guy who insists on writing with a typewriter in Starbucks. AI is not just helping us write—it’s starting to write better than us. And that’s both amazing and, well, slightly terrifying.

Act 2: AI Is Getting Deep. Like, Philosophy-Major Deep.

So I ask my chatbot, “Can you help me write a paragraph about the rhetorical ethos of Taylor Swift?” And it replies: “Certainly. Swift’s ethos emerges from her personal narrative, one of transformation, resilience, and authenticity—an archetype embedded in American cultural mythos.” Hold up.

That is not just a sentence. That is a thesis with ten years of cultural studies baked into it. Did it just out-rhetoric me? Meanwhile, I am sitting here eating Pop-Tarts, trying to remember how to spell “ethos.” The weird thing is: AI has become the very thing we used to pride ourselves on being: Metacognitive. Self-aware. Reflective. Sometimes even poetic. It’s like AI read all of our textbooks on composition and said, “Cool, I got this.”

And guess what we have become?

Clickers.

Scrollers.

Auto-finishers.

People who read two lines of a five-paragraph article and go, “Yeah, I get the gist.” We used to compose ideas from scratch. Now we compose from suggestions. Writing is no longer a messy, glorious battle—it is a polite, autocomplete conversation.

Act 3: The Death of the Draft?

In the good old days (and I sound like a grandma here), writing meant revision. We wrote. We cried. We rewrote. We screamed into a pillow. We rewrote again. It was vulnerable and beautiful and chaotic.

But now?

Now I type something, hit “Enhance with AI,” and get a grammatically perfect, tonally polite, LinkedIn-approved version in three seconds.

What happened to the messy draft?

What happened to the margins full of doodles?

What happened to the emotional spiral over a single sentence?

Gone.

Gone like Blockbuster and floppy disks.

Act 4: AI Is the Cool Kid in Composition Class

Let’s not pretend: in writing studies, we once rolled our eyes at spellcheck. “It’s not real editing,” we would say. Now AI is suggesting counterarguments, structuring rhetorical appeals, citing sources, and even giving feedback on tone.

I mean, we used to teach students how to identify logos, pathos, and ethos. Now AI’s like, “Your pathos is too weak here. Want to strengthen it with an anecdote about a cat?”

Excuse me. You are not just helping me write—you are teaching me how to feel.

And here is the kicker: sometimes AI writes more like me than I do. Once, my student asked AI to imitate my writing voice. The result? A piece that started with, “Let’s be real—writing is just thinking out loud in sweatpants.”

That is exactly what I would say. How dare you, chatbot.

Act 5: Humans Are Becoming Predictable. AI? Surprisingly Weird.

Now here is the ironic twist. While AI is learning to be creative, weird, and emotional—humans are becoming predictable, efficient, and robotic. We follow productivity hacks. We use apps to remind us to breathe. We wear watches that tell us when to stand. We write emails like: “Kindly following up on this actionable item before EOD.”

We are not writing like humans anymore—we are writing like calendars.

Meanwhile, AI says things like:

“Hope is a grammar we write when syntax fails.”

“Writing is a ritual of remembering who we were before the silence.”

AI is having an existential crisis while I am checking if my Slack status is set to “in focus mode.”

Act 6: What We Lose When We Stop Struggling

Here is the thing. Writing is supposed to be hard. Not because we are masochistic (well, maybe just a little), but because the struggle makes the thought deeper. When I wrestle with a sentence for twenty minutes, I am not just crafting words—I am figuring out what I actually mean. That’s what rhetoric is, right? It is not just expression—it’s negotiation. It’s choosing the right word, the best frame, the most ethical move. It’s soul work. But now, I just ask, “Can you rephrase this professionally?” Boom. Done. No wrestling. No soul. So, what are we teaching students? That writing is just selecting from a menu? Or that writing is the beautiful, messy act of figuring out what you think while you write? Because AI can do the former. But only we, the squishy-feelings-having humans, can still do the latter—if we choose to.

Act 7: Can AI Write a Love Letter?

Here is the litmus test. Could AI write a real love letter?

Sure, it can draft a pretty one. It will get the metaphors right. It will say things like “Your laughter is a lighthouse.” But will it accidentally confess something it did not mean to? Will it embarrass itself? Will it be vulnerable in that messy, “Oh no I sent that too soon” way?

Probably not. Because real writing, human writing, is not just accurate—it is awkward. It’s brave. It’s full of heartbeats. AI does not get sweaty hands before pressing “send”. We do. And that matters.

Act 8: Dear AI, Let’s Talk

So, here is my open letter to AI:

Dear AI,

I think you are brilliant. Truly. You have helped me grade faster, write smarter, and even find metaphors I did not know I needed. But please, do not steal my voice. Do not take away my struggle. Do not replace my awkwardness with elegance. Let me be the messy writer I was born to be. Let me cry over drafts and write terrible first paragraphs. Let me misspell “rhetorical” once in a while. Let me sound like me. Because if I stop being human in the name of efficiency, then what’s left?

Yours (awkwardly and un-optimized),

Shiva.

Final Act: What Now?

We are living in the middle of the weirdest writing revolution in history. AI is not just a tool—it’s a co-writer, a critic, and sometimes, disturbingly, a better version of ourselves.

But we still have something it doesn’t.

We have intentionality.

We have embodiment.

We have error. Beautiful, chaotic, necessary error.

So the next time you write, I challenge you: do not start with AI. Start with your hand. Your voice. Your thoughts.

Write a terrible draft. Cry a little. Laugh at your own joke. And then, maybe, ask AI for help.

But only after you have been human first.

AI Overview Killed My Curiosity (And Maybe Yours Too)

Remember when googling something used to feel like cracking open a door to a whole new world?

Let’s rewind a bit—say, ten years ago. You are sitting at your desk, wondering, “Why do cats purr?” So, you type it into Google. But instead of getting one tidy answer, you get a buffet of links. You click on a blog written by a vet who adores cats. That blog leads you to a research article. That article makes you curious about animal communication. You read a few Reddit threads where people argue about whether cats are manipulating humans. Then you watch a five-minute YouTube video narrated by a guy with a British accent. Now, somehow, you are reading about tigers, and the next thing you know, you are learning that purring is possibly a form of healing.

Two hours later, you are knee-deep in animal behavior theories, evolutionary biology, and ancient Egyptian art. And you feel…satisfied. Not just because you found the answer, but because you earned it. You explored. You got surprised. You did not just grab info—you lived with it for a while. That’s what learning used to feel like. It was a ride.

Now? It’s Just a Pit Stop

Today, I Google the same question—“Why do cats purr?”—and boom, AI Overview gives me a neat little summary in bold font at the top of the page.

“Cats purr for a variety of reasons, including to communicate content, self-soothe, or aid in healing. This sound is produced through neural oscillations in the brain that send repetitive signals to the laryngeal muscles.”

I read it. I nodded. I closed the tab.

That’s it.

No rabbit holes. No detours. No surprises. No weird science blog with a bizarre theory that makes me laugh but also think, “Could this be true?”

And that, my friend, is the slow death of curiosity.

We’re Getting the Answers, But Losing the Adventure

AI overviews are like fast food for the mind. They are hot, ready, and convenient. We don’t even have to lift a finger (well, maybe one finger to scroll). And in many ways, they are incredible. Don’t get me wrong—technology that can summarize twenty articles into one clean paragraph? That’s impressive. But here is the thing: we humans were not built to live off summaries. We grow through effort. We learn by digging. We remember the things we worked for. AI gives us the answer, sure, but it skips the most important part: the journey. And let’s be real—the joy is in the chase.

Ever Asked a Question Just to End Up Somewhere Completely Different?

This happened all the time when I explored without shortcuts. I Googled “How did the Eiffel Tower get built?” and suddenly I was reading about the rivalry between Gustave Eiffel and other architects, then about Paris in the 1880s, then about the World’s Fair, and then about how people hated the tower at first. I found a personal blog of a woman who lived in Paris for a year and hated the view from her window because “that dumb metal thing ruined the skyline”. I laughed. I learned. I remembered. But with AI Overview? I got a couple of neat facts in under ten seconds. “Constructed in 1887-1889, the Eiffel Tower was designed by Gustave Eiffel’s engineering company for the 1889 Exposition Universelle.” Cool. But…that’s it? Where’s the story? Where is the tension, the drama, the irony, the unexpected? I did not find that answer. It was handed to me. And that makes all the difference.

Information Without Involvement

Here is the real issue: AI Overview makes information feel transactional. You ask. It answers. Done.

But learning has never really worked like that. It’s messy. It’s emotional. It’s full of dead ends and detours and contradictions. That’s what makes it stick. Think back to when you were a kid and had to do a school project. Maybe you went to the library. Maybe you had to open five different books to find the facts you needed. It was frustrating—but also exciting. When you finally found the right quote, or the perfect image, or that one paragraph that made your topic come alive—you felt a little spark. Compare that to now: you copy and paste a summary. You do not even need to read the whole article. Heck, most people do not even make it past the first link.

We are turning into passive takers of information. Scrollers, not thinkers. Downloaders, not diggers.

Our Brains Love Shortcuts. And That’s the Problem

Let’s not sugarcoat it: our brains are lazy. That’s not an insult—it’s biology. The brain’s main job is to conserve energy. That’s why we love automation. It’s why we keep eating chips even though we said “just one more.” It’s why we click the first link and call it a day. AI Overview is custom-built for this tendency. It delivers quick satisfaction. But satisfaction without engagement is hollow. It’s like eating cotton candy—tastes sweet, but disappears before you even realize what happened.

The more we rely on AI to summarize for us, the less we exercise the parts of our brain responsible for critical thinking, curiosity, and memory. We stop asking follow-up questions. We stop wondering. We stop comparing sources. And slowly, we stop thinking for ourselves.

Ever Heard of “Cognitive Lethargy”?

It’s a real thing. Not an official diagnosis, but a growing concern. It’s what happens when we get so used to being fed information that we lose the ability to wrestle with it. We become mentally sluggish. Not stupid, just…uninvolved. We start using words like “vibe” or “I think I heard somewhere?” instead of actually knowing. We forget faster. We feel less connected to the knowledge we absorb. This is not just a learning issue. It’s a living issue. Because how we learn is how we experience the world. If we stop engaging with information, we start disengaging from everything else, too.

Okay, Let’s Talk About That Crayon Example Again

I mentioned this earlier, but let me dig in deeper because it’s too good not to. A friend of mine was helping her kid with a school project on the history of crayons. She Googled “When were crayons invented?” and, as expected, AI Overview gave her a neat, no-nonsense answer: “Crayons were invented in 1903 by Binney & Smith.” She repeated that to her kid. Done. But later, her kid asked, “Why 1903? And why did they start with just eight colors? And how did they pick the names?” She had no clue. So, she did the unthinkable: she kept searching. She clicked a few articles. Found a blog that talked about the original crayon color names—like “Maize” and “Carnation Pink”. She discovered that some old color names were changed because they were racially or culturally insensitive. She even watched a video about how crayons are made in factories today.

Now she was not just helping her kid. She was learning herself. She was excited. Later that night, she brought it up at dinner with friends. One of them used to collect vintage crayon boxes as a kid. They talked for 20 minutes. That’s what discovery looks like. Not just reading a sentence—but connecting with it.

More Examples? Oh, I Got ‘Em.

Example 1: Black Holes

I searched “What is a black hole?”

AI said: “A black hole is a region in space where the gravitational pull is so strong that nothing, not even light, can escape from it.”

Cool. But had I dived deeper, I might have found mind-blowing stuff: time slows down near a black hole. Some theories suggest they could lead to wormholes. There is a supermassive one at the center of our galaxy. And Stephen Hawking once joked about aliens using them as garbage disposals. None of that is in the summary. You have got to go digging.

Example 2: Bananas

Yep, bananas.

I Googled: “Are bananas good for you?”

AI said: “Bananas are high in potassium and a good source of fiber and vitamin B6.”

End of story?

No way. If we click around, we will learn that the bananas we eat today are not even the original kind. The wild ones had seeds. The current banana—called the Cavendish—is in danger of going extinct because of a fungus. There is a global banana crisis happening right now, and most people have no idea.

Again: not in the overview.

So, What Can We Do?

Do not worry, this is not a “throw your phone in the river and go live in the woods” kind of rant. I am not anti-AI. I am just pro-curiosity. Here is what we can do to keep our minds sharp and our wonder alive:

  1. Scroll Past the Overview

Yes, the AI Overview is right there. It’s tempting. But resist. Pretend it does not exist. Click on something else. Let your eyes wander. That’s where the magic begins.

  2. Follow the Weird

Find the blog that looks oddly specific. The Reddit thread with too many comments. The YouTube video with a terrible thumbnail but surprisingly good content. Follow the trail.

  3. Ask “What is Missing?”

Every summary leaves stuff out. Ask what’s not being said. Who’s behind the answer? What perspective is missing? This turns you from a reader into a thinker.

  4. Talk About What You Learned

Nothing makes knowledge stick like sharing it. Tell a friend. Text a sibling. Post a little nugget on social. You will remember it way better, and you might even spark someone else’s curiosity.

In the End, It’s About Ownership

AI Overview can serve us information. But it cannot give us the thrill of discovering it ourselves. It cannot make us gasp, or laugh, or raise our eyebrows. It cannot give us that feeling of “Wait—how did I know this?!” Only we can do that. When we let ourselves get a little lost in learning—when we take our time and let curiosity lead—we are not just collecting facts. We are building connections. We are flexing our brain. We are staying alive inside.

So, Next Time You Google Something…

Skip the overview. Dive into the mess. Read more than one thing. Let a question lead to another. Let yourself be confused. Let yourself be amazed. Because when we fight for the answer—even a little—we own it. It becomes part of us. And maybe, just maybe, we will fall in love with learning all over again.