The real problem with AI? Inertia

I work in an area – Education – where the latest developments in AI technology have had an immediate impact. And yes, I know the same has happened across the board.

Since ChatGPT's arrival on the world scene last year, everyone – from pupils to students and teachers – has been using it. By 'everyone' I obviously do not mean literally everyone, but it is clear that this will soon be the case.

Pupils use it to get their homework done or to pass tests; students ditto; teachers use it to prepare lessons, develop material, design tests and have them marked – among many other things. Unsurprisingly, all those people claim the technology is great and helpful – well, they would, wouldn't they? And to be fair, 'great' and 'helpful' are reasonable words to use in this case. So that's not the problem.

Equally unsurprisingly, those users will make sure to remind you of two things at all times:

1. AI is not intelligent as we know and understand that word – it's mindless, really – just a machine.

2. Users must be ethical in their use of AI, and always double-check the AI-produced work.

And with that, they will tell you that everything will be fine – after all, didn't we bemoan the calculator's arrival, claiming we'd become stupid if we could no longer add or divide for ourselves?

All you alert readers will have detected the age-old False Analogy fallacy, consistently deployed by the very people who urge you to 'think critically about what the machine produced'. Well, well… that promises much indeed. But fine – let's attend to those two reminders.

That AI is not intelligent is, of course, neither here nor there: what counts for the general user is the result. If I may make an analogy myself: people buying clothes for 4 or 5 euros on Shein or an equivalent place know (or could know, or should know) that it is simply not possible to produce clothes at that price without someone, somewhere… paying the price. It could be the people sewing the clothes, or making the fabric, or shifting boxes, but you can be sure that the low price for you comes at a cost to someone else. Not to mention the very high carbon footprint of such companies, and of shipping cheap goods from, say, China or any other far-away place (I live in Europe), and the impact that has on the environment.

Does that matter to people using Shein?

Or, more to the point really: does that matter enough that they will stop buying cheap?*

If I use ChatGPT and find that the results are acceptable to whoever I submit them to, is my question going to be: Oh, wait, but… the machine is not intelligent?

Or is it going to be: Oh, wait, the machine does that pretty well, let’s ask it to do it again?

Telling people AI cannot think is irrelevant: people don’t know how their phone works, or electricity, or wind turbines, but as long as those things work none of it matters. What matters is the result. You may, somewhere in the back of your mind, have doubts sometimes, have that twinge of your moral organ that tells you all is not well, but yeah. See below.

The real question here is: how long does it take before the user trusts the machine enough that they won’t ask questions of what it produces?

As for the second point – urging users of ChatGPT to 'be ethical' – hmm… see above?

And as for double-checking what AI produces, I shall make use of a well-known rhetorical device and ask a question in return: when was the last time you, or anyone you know, double-checked something they'd read on Wikipedia? When was the last time you, or anyone you know, double-checked any information, really? Hopefully, many of you will say 'Me! Me!', but most likely you will mean political information, or perhaps the way different newspapers treat a social issue. But information about stuff – When was she born? What did he write? Where is that place on the planet? – information like that? I bet no-one double-checks those anymore. In the beginning, perhaps. But now?

Partly it’s laziness; partly it’s ‘If it worked a few times, it’s bound to work again’ (a cognitive bias, by the way); partly it’s trust: trust that something dedicated to one task will do that task well; and partly it’s habit and an ‘Ah, it’ll be fine’ sort of approach.

And that’s inertia – the main problem when it comes to AI.

How many people, and for how long, do you really think are going to go over what ChatGPT produced in great detail, with focused attention and a critical mind? Honestly? And I don’t mean giving it a quick glance-over, an ‘Oh yes, that looks OK, that should work’. I mean a real critical look, going in-depth, questioning line by line what the machine produced.

Because, see above: laziness, or lack of time, or boredom, or, more simply and less judgmentally, inertia – those things are real, and when you’ve got six lessons to prepare and teach every day, a family perhaps, friends, things to do, places to be, careful checking is going to have to compete with simply taking what the machine offers.

And guess who’s going to win?

No analogy here, really, but we need to face the possibility of AI going the way of the washing machine, the electric blender or the toaster (and thousands of other machines) – machines we take for granted because they relieve us of physical effort and save time. Which sounds great, of course – in fact, in many cases, it is great, full stop.

So the real real question becomes: is washing clothes by hand analogous to thinking? Is toasting bread by hand analogous to understanding, reflecting, analysing, deciding for yourself?

What do we lose by delegating our thinking to a machine?

I have to close off with this line by Henry Adams in his autobiography, commenting on a certain class of Englishmen: ‘[they were] all tending to free-thinking, but never venturing much freedom of thought’.

Let’s make sure we do not lose sight of this distinction, and that we don’t give in to intellectual inertia. We’ve got one brain: let’s not waste the opportunities we have to use it.

(this one is for Mila...and Maëlle 😊 )

 

*The business of fast fashion was worth around 103 billion euros in 2022, and is forecast to rise by more than 80 billion by 2027 (https://www.statista.com/statistics/1008241/fast-fashion-market-value-forecast-worldwide; www.businesswire.com)

Henry Adams: The Education of Henry Adams. (One of those rare autobiographies where the author never says ‘I’ and relates his life in the third person. A classic of American self-writing – but not the smoothest of reads…)
