Five things I hate about you: AI and those small things that make us human
Those apparently 'simple' tasks we delegate to ChatGPT will end up costing us more than we know
In his 1994 noir novel ‘Gun, with Occasional Music’, the American writer Jonathan Lethem imagined a bleak version of the future in which the State makes taking a certain drug compulsory: that drug is called ‘Forgettol’, and it basically erases your memories, short-term or long-term, making it impossible to remember even basic facts about yourself, let alone about others. That leads to the invention of the ‘memory-box’: a device characters use to effectively externalize their brains. The memory-box stores all the memories your brain can’t handle; in fact, it stores all the information your brain can’t handle. The benefits for the State are obvious (no possibility of rebellion), but the effects on people and society are terrible: a fragmentation of individual consciousness which leads to a fragmentation of the social fabric.
Sound familiar?
When the printing press started to spread through Europe in the second half of the 15th century, there was an outcry from some people who were concerned that the printing of books would amount to externalizing people’s memory: they argued that people would lose the ability to remember long texts, or speeches, or narratives in general.
But those people were fools, weren’t they? Look at us: we read AND we remember; we read AND we can memorise. But were they? Fools?
Because we could also ask, like those monks in the 15th century: what is lost when something new comes along? Is it anecdotal, or is it essential? And more importantly: is it knowledge of something, or knowledge of HOW TO DO something? And if it is knowledge of how to do something, is it mechanical knowledge (how to weave bamboo to make a boat, how to use a scythe for harvesting), or is it cognition-related: how to think, how to evaluate, how to determine, how to decide, how to understand, how to compare, how to know, how to think about it all? Stuff that, you know, goes a long way towards making us human…
So please now consider generative AI.
I hear (and acknowledge) that AI is wonderful because it can help identify illnesses, and process enormous amounts of information to help analyse, build and develop tools and knowledge that will save many lives. Fine.
But that’s not what you and I use it for, and it’s certainly not what my students and kids at school use it for. In fact, one comes to realise that what we are all doing these days with generative AI like ChatGPT is externalizing most of the cognitive functions we were in charge of until now, creating a sort of memory-box. So let’s focus on the everyday tasks now devolved to AI by ordinary people and ask a simple question: what do we lose (or seriously erode) by using AI in terms of core knowledge, core cognitive functions? Take these few examples of tasks now routinely devolved to ChatGPT, or any generative AI:
Ask ChatGPT to summarise documents:
Very handy indeed, and time-saving no doubt. But summarizing is a real skill, which we can describe as ‘identifying first- and second-degree information’, or ‘identifying what matters from what is anecdotal’, or ‘extracting the main point from surrounding facts’, or even ‘collecting and connecting various bits of information in relation to a whole’. And that skill is crucial in everyday life too, especially as we are bombarded with massive amounts of (often contradictory) information. If we can’t distinguish between ‘what matters and what is anecdotal’, what then? How do we understand the world around us? How do we make sure we even know what is being said?
Ask ChatGPT to suggest ideas for a lesson/an essay/an article:
Ah yes: you’re stuck, or you think you are, you’re pressed for time, whatever the reason. Then you rationalize by saying ‘I need something to think from’. Sure you do: but isn’t that what you were asked to do in the first place when given a theme/idea/topic? What you’re now doing is effectively asking someone else (or rather: some most statistically probable piece of text) to think for you. But where are your ideas? What will you do when asked to think for yourself with no access to ChatGPT? And of course there is the fallacy we’re all embracing because it’s easier that way: the fallacy that says ‘I just need to get started, then I’ll do my own thinking’. Really? Really?? Do you mean that when the machine comes up with a whole set of ideas, you will not be tempted to accept them all? Do people doubt what they read on Wikipedia? Do people double-check Google Maps? Do people calculate by hand what they just asked a machine to calculate for them? Will people really think (do they think right now) that they can do better than a machine they use constantly? I think not. Plus, when something is easy to use, it becomes easier to just use it, however you justify it.
Ask ChatGPT to give structure to your ideas (if they even are yours):
That also seems sensible at first glance: structuring ideas (or a text) is a difficult task, as it requires connecting ideas or topics with one another, seeing a hierarchy of ideas/arguments/examples, sensing the logical flow from the first point to the conclusion, and having a sense of the whole and how each part relates to it. But those are also crucial skills in everyday life, when we need to decide, act, choose, plan. How are you going to make a lesson plan if you don’t have those skills? How do you express an argument to a friend, how do you communicate your ideas, without those skills? Here again, it’s so comfortable to think ‘Of course I could do it myself but you know…’: well, why don’t you do it then, if only to make sure you actually CAN? We all make fun of Donald Trump, his lack of articulate speech and his deranged tirades: not having the skills to organize ideas leads precisely to that situation. How much longer will we be able to make fun of him if we lack that skill ourselves, I wonder?
Ask ChatGPT to devise a test on certain material, or create questions on/from a novel:
A time-saver again, especially as so many people are asking the machine to do it that the training data are now enormous (with the consequence that the answers become ever more generic, not to mention Western-centric). But devising questions in relation to certain knowledge (lesson material, a novel’s contents) is really seeing the relationship between the whole and its parts; it is understanding which type of question brings what type of answer; it is having the skill to tease out information in relation to a certain goal; it is understanding how elements of knowledge interrelate; it is understanding causal relations. Are these skills not useful in everyday life? Are they not needed in making sense of what’s around us, in finding out what’s real and what’s not, in identifying logical fallacies?
Diversity:
The statistical nature of ChatGPT (it predicts which words, or parts of words, are most likely to come next) is logically tied to the training data used: ChatGPT can only predict what is likely to come after a word on the basis of the prevalence of one particular item over others. ‘I take coffee with milk’ is more prevalent than ‘I take coffee with cucumber’, so the first is much more likely to be predicted as what should come next. But what of those places where people do take coffee with cucumber?
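To make that concrete, here is a deliberately tiny sketch of that frequency principle, in Python, with a four-sentence made-up corpus. It is nothing like ChatGPT’s actual architecture, which learns weights over billions of documents rather than counting phrases, but it shows why a prevalence-based predictor will always hand you ‘milk’:

```python
from collections import Counter

# A toy corpus standing in for the training data. The duplicated "milk"
# sentence plays the role of cultural prevalence.
corpus = [
    "i take coffee with milk",
    "i take coffee with milk",
    "i take coffee with sugar",
    "i take coffee with cucumber",  # rare, culturally specific usage
]

def next_word_counts(context: str) -> Counter:
    """Count which word follows `context` anywhere in the corpus."""
    counts = Counter()
    ctx = context.split()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - len(ctx)):
            if words[i:i + len(ctx)] == ctx:
                counts[words[i + len(ctx)]] += 1
    return counts

counts = next_word_counts("coffee with")
total = sum(counts.values())
for word, n in counts.most_common():
    print(f"{word}: {n / total:.0%}")
# milk: 50%, sugar: 25%, cucumber: 25% -- and a greedy decoder,
# which always takes the top choice, would answer "milk" every time.
```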
You laugh, because ‘coffee with cucumber’ is a bit ridiculous, since, yeah, nobody does that. But surely we can all see the point here: as one culture is used to train a machine which is now being used as a writer, thinker, giver of ideas, sparring partner for thinking, source of information, trainer of pronunciation and everything in between, that one culture will come to be ‘What is’ for everyone: it will come to define reality. There will be no other answer than ‘With milk’, and so the possibility of other answers will slowly disappear: you don’t know what you don’t know, and all that. And of course, since ChatGPT is also trained on any freely available online information, it increasingly becomes its own training data, which can only mean that it will tend to reinforce those biases, so that the answer ‘With milk’ can only become ever more hegemonic.
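That feedback loop can also be sketched in a few lines. This is a crude caricature with made-up numbers, not a model of how any real system retrains, but it shows the direction of travel when a machine that only publishes its most likely answer is then fed its own output as new training data:

```python
# Made-up starting prevalence of each answer in the training data.
counts = {"milk": 90, "cucumber": 10}

for generation in range(5):
    top = max(counts, key=counts.get)    # greedy: only the majority answer gets published
    counts[top] += sum(counts.values())  # published output folded back in as training data
    total = sum(counts.values())
    shares = {w: f"{c / total:.1%}" for w, c in counts.items()}
    print(f"generation {generation}: {shares}")
# The minority answer is never reproduced, so its share only shrinks:
# 'cucumber' falls from 10% towards zero while 'milk' becomes near-hegemonic.
```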
But what of other cultures, other ways, other perspectives? Now that people use ChatGPT the way they once used Google, the situation gets even worse: Google at least offers several possible answers, while ChatGPT gives just one. And of course, if I’ve stopped exercising my abilities to summarise, to connect different data, to infer, to organize information hierarchically, among others, I will be incapable of questioning the answers I get.
And all this is taking place at a time when the dominant culture behind ChatGPT, the U.S.A., is not exactly embodying openness, recognition and acceptance of diversity, or even basic scientific competence.
I know AI is here to stay, and so is GenAI. But saying that is not saying much beyond ‘Accept it and move on’. Yet examples abound in our history of adopting new toys without thinking the possible consequences through. And crucially, ‘thinking it through’ doesn’t (have to) mean ‘rejecting it’: it should mean identifying clearly what is at stake, what might get lost (and gained) and evaluating what we prize. Not out of some moral concern, but rather out of concern for how we may change and what that may mean for future generations. And out of concern for what transpired recently: 'AI models are not designed to help you answer the questions you ask, they're designed to provide the most pleasing response and keep you engaged'.
I’m no Luddite, I’m no doomsayer, but neither am I one of those wide-eyed enthusiasts who welcome every new technology without ever asking themselves what they may be in danger of losing, of no longer being able to do. I don’t want my thinking, my individuality, to be taken over by a machine because I’m lazy. You’ve got a brain: use it before you find yourself dependent on a memory-box, externalizing most of your brain’s functions, because once you do, it will probably be too late to reverse it.