Am I a Grump For Not Liking ChatGPT? 

At this point, you’re living under a rock if you haven’t heard of ChatGPT. Seven countries, including Italy and China, have banned it; the ‘godfather of AI’, Dr Geoffrey Hinton, has said its dangers are ‘quite scary’; and even Elon Musk has signed an open letter calling for a halt to AI training.

I’ll leave it to you to decide whether I’m in good company for claiming the status of an anti-ChatGPTer and feeling rather grumpy about the AI that exploded onto the scene at the end of 2022. 

 Why are we all so bothered about ChatGPT-4? 

The latest version of OpenAI’s chatbot, ChatGPT-4, was released in March. People have adopted it to answer questions, write code, or generate copy on a given topic, and it’s felt more like a question of what ChatGPT-4 can’t be used for than what it can. Far more powerful than its predecessor, it can also respond to images – for example, digesting a photo of food ingredients and providing recipe suggestions. It’s basically an Amazon Alexa on steroids.

But the seemingly positive reception afforded to the essay-writer, fact-finder, cheat-sheet, time-saver is quickly turning to concern. If I had a pound for every person who’s asked me if I use ChatGPT in my content writing role, I’d probably have made around £100 in the last month alone. Because everyone seems so confused when I say I never have, and I doubt I ever will. My estate rises by £1,000 if we’re including the ‘are you worried it’ll replace your job?’ questions.

How ‘human’ is ChatGPT? 

Much of the hype around ChatGPT-4 came from its answers being formatted with ‘human-like’ text. Which forms a hefty part of my grump – I won’t deny that this AI’s incredibly advanced in the way it’s learned to ‘imitate’ how we speak.  

But that doesn’t make it a human.  

What does? I don’t have the answer to one of the most complex philosophical questions of all time. Though the Encyclopaedia Britannica might – it states that our capacity for ‘articulate speech’ and ‘abstract reasoning’ is part of what makes us human beings.

Sure, you can ask this latest version of ChatGPT to answer your question in a certain tone of voice. But crucially, the information still requires processing from a person. Or at least, I’d hope that the person who’s inputted a question or demand actually takes the time to process the answer... Because whilst ChatGPT’s taking much of the legwork away, people still need to understand what it’s telling them – how to humanise the information they’ve been given. What the AI generates isn’t authentic – it’s an amalgamation of tens, hundreds, thousands of sources scoured from the web. It’s not you, and it’s certainly not me. Which is why I think I’ve all the more reason to be confused when people ask me if I’m worried it’ll replace my job, which also happens to be my hobby.

Kept up at night about ChatGPT replacing my job? I’m sleeping soundly 

Simply, the beauty of writing is within the words and phrases we craft as individuals to share our ideas, opinions, experiences. It might give it a good go, but I doubt ChatGPT would be half as thought-provoking as the copy we craft for client thought-leadership articles, or anywhere near as emotionally resonant as my personal tales.

Now I know that ChatGPT’s meant to be used as a tool, not a solution, for writers, and can help with nailing down a certain tone of voice for your next magazine article or university essay. But, and it’s a big ‘but’, if you know the person who’s supposedly written the piece, I’d put my life savings* on being able to identify whether it’s been penned by them or ChatGPT.

On an obvious level for this island we call home, it’s the presence of Americanisms (e.g. organization not organisation) littering the copy. On a more granular level, it’s the idioms, connectives, personality – or lack thereof – that signify the copy’s been produced with AI. Because what is writing if it lacks human touch and individuality?

“In AI's current state, everything is regurgitated from other published work. And if you're not going to put in the effort to bring a unique opinion on a subject, why write about it at all?” 

A recent Tweet from Tina Donati, Freelance Writer 

And yes, I know that ChatGPT-4 isn’t necessarily used to write the whole thing – it can be a starting point, a prompt for writer’s block, or an aid to explaining something a little complex. But I think we should take with a pinch of salt the idea that the text it churns out reads the way we actually converse.

Is AI dumbing us down? 

My main issue with ChatGPT, however, is that I worry it’ll dumb us down. A little like predictive text – don’t get me wrong, I appreciate the speed with which my phone lets me message by suggesting – predicting – the next word I’ll type based on my previous behaviour. But like the squiggly red underline in Word, is predictive text lessening our need to actively learn to spell? That’s a fundamental part of being human that I feel even I’m starting to neglect, and I sit with a paperback thesaurus on my desk!

If devices are going to spell for us and ChatGPT-4's going to answer questions for us, what exactly are we meant to be doing? If I needed to know something for my homework as a child, I'd consult my encyclopaedia or head to the library. When I was a teen needing information for an essay, I’d speak with my teachers and research via Google. At least back then, we’d still be able to access a variety of ideas, stats, facts, and form an opinion on them.

How’s education going to fare? 

So maybe my issue is with the wider education system as a whole, because I’m not oblivious to the benefits of being able to automate different areas of life. ChatGPT might save time in researching facts, with the user able to extract them quickly and pop them into their essay. Or it might act as a translation tool that’s more reflective of a language than your traditional Google Translate, aiding communication between two parties. But when we’ve got the internet, a colossal library of facts, are we lessening the need to put some effort in and try to find the answer ourselves?

 “The college essay is dead.” 

Declared by Stephen Marche in a recent piece for The Atlantic 

Saying it more eloquently than I could muster, despite the 1,000 words above, he goes on to say that “the essay, in particular the undergraduate essay, has been the centre of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up.” 

Because how can you learn if you’re relying on one thing to give you the answer? You can’t, in short. Which is another reason why I don’t worry that it’ll take my job. Sure, it might be able to write something that makes it look like I know about the latest MedTech development or supply chain challenge for a client. But when they ask me to develop the points I've raised on a call and I go blank, that’s where the difference lies.

Nowadays, if kids are asking ChatGPT-4 questions like ‘have attitudes to racism remained the same or changed since To Kill a Mockingbird was published?’, they’re reliant on one answer to paste into a check-box homework exercise, rather than using resources to support independent thinking. I should probably point out I’m a bit of a cynic... 

A self-checkout analogy  

Can I end by comparing ChatGPT-4 to a self-checkout? I’ll give it a go. It’s mad to think they’ve been mainstream for two decades now – Tesco rolled out its first back in 2003. Was the long-term plan for them to replace all human-run checkouts? Perhaps. But we all know they still require at least one person, if not more, to get the job done.

And that’s OK – we’ve recognised that we can’t rely on the self-checkout alone (not yet, at least). That they’re most effective when a supermarket employee has been assigned to help chivvy people through as the machine screams ‘unidentified item in the bagging area’ for the 619th time that day.  

I suppose what I’m trying to say is, I worry that people are using ChatGPT-4 to get to the chequered flag, not as a pit-stop along the way. Because I’m not ignorant of the capabilities or power of AI – I'm concerned about our reliance on it, how it’s going to shape future, and current, generations’ ways of thinking, forming opinions, and holding discussions, and the scale of bias we’re opening ourselves up to.

As Ian Bogost writes in The Atlantic, “It’s an easy conclusion for those who assume that AI is meant to replace human creativity rather than amend it.” I guess time will tell. But until then, I'll continue to write from the heart, rather than the chatbot.

*Prone to exaggeration since ‘96. 

Gabrielle Percival

As Content and Strategy Executive, Gabrielle researches topical affairs and industry trends to provide clients content with interesting angles at their core. She’s a keen question-asker and not shy of playing devil’s advocate to stimulate discussion and explore solutions. Particularly intrigued by the influence of language change on marketing, her insights tend to feature a good dose of linguistics and etymology!
