'AI is not the problem, YOU are', and other false analogies used to hoodwink us
It’s a false analogy because, by the same reasoning, nothing is a problem on its own, and the ones responsible are the users. So the analogy continues: you don’t blame hammers for being a potential murder weapon, so why should you blame guns? If no one uses guns to kill other people, guns are simply…not dangerous.
I keep seeing the same sort of disingenuous, mendacious and illogical reasoning used when it comes to AI, especially GenAI (like ChatGPT): AI is not the problem, you are. AI is a wonderful thing; it’s the users who don’t understand it, misuse it, abuse it, commit fraud with it. Reform the people and hey presto! Everything will be fine.
This logical fallacy is quickly followed by another: AI is here to stay, so adapt! Because there’s nothing you can do since, well, it’s here. And because it’s here, we have to do something with it…so let’s rush and find uses for it without taking the time to think it through: what do we run the risk of losing? How will it affect core cognitive skills we’ve had for hundreds of thousands of years? How will it affect learning, deep learning, retention, comprehension, thinking, critical thinking, creation, independence, personality, genius? (Genius in the 18th-century sense, of course: personal talent and idiosyncratic characteristics.)
Ah, no worries mate, it’s here to stay, so 'Use it or be square'. 'Use it or fall behind'. Use it or be an old fart who can’t handle the truth. What’s the truth? It’s HERE TO STAY - YOU HAVE NO CHOICE - STOP WHINGEING.
You see, what’s funny (yet heart-breaking) about this is that, in many ways, this illogical, unreflective thinking, this stringing together of logical fallacies we’ve known about forever, is completely in line with what GenAI could ultimately lead us to do. To paraphrase Dante: ye who start using GenAI all the time, abandon all hope of thinking critically for yourself, and let a non-sentient, replicating, imitative, statistical machine do it for you.
Because of course the False Analogy is just that: a logical fallacy, a ‘denkfout’ as the Dutch would call it: a mistake in thinking. A gun is just not a hammer. Yes, both can kill, but it’s mighty difficult to kill people with a hammer in 15 minutes, while guns make that very easy indeed: ‘As of September 10, 2025, there have been 358 mass shootings in America in 2025; of those, 47 were school shootings: twenty-four were on college campuses, and 23 were on K-12 school grounds'. (https://massshootingtracker.site/; https://www.gunviolencearchive.org).
No hammer that I can see. Only guns and high-velocity weapons. Who’s to blame, what’s to blame? Would those 5 school shootings a month happen if guns were not available, and crucially, if guns were not what they are?
Obviously, I am not comparing GenAI and guns – that would be a seriously false analogy: I’m comparing logical fallacies and how they’re used to hoodwink us. And I’m simply saying what you all know already: people may be a problem, but they are not the first problem – that’s guns. But it’s a lot easier to blame people, especially when you have a stake in the game: you sell guns, say. When it comes to AI, the most vocal parties who oppose any kind of regulation are of two kinds: those who benefit from it financially, and those who benefit from it personally. By the latter, I mean those users who are happy trading autonomy for convenience and speed, so that instead of asking themselves what they think, what they should do and how, they delegate it all to a mindless statistical programme.
By blaming people and not the tool, we essentially do exactly the same: we confuse the essence of humankind with a particular moment in our history. We keep being told we must be efficient, that things must happen quickly, that living fast is the way, that immobility is a sin, that it’s better to do 5 things poorly but quickly than one slowly but well. And so we justify using GenAI for everything because it is ‘efficient’, because ‘it saves time’. And we hoodwink ourselves here, we let ourselves be led by the well-known confirmation bias, whereas what we need is to step back and think for a minute.
We cannot allow interested parties to shame us out of taking that step back by falsely claiming ‘It’s here to stay, there’s nothing we can do, we must move on with the times’. We cannot allow those parties to make us deceive ourselves by pretending we have no choice. We do have a choice.
Wouldn’t it be the ultimate irony if we ended up being persuaded by a machine that what is best for us is…to use that machine? Oh wait, that’s exactly what happened with Capitalism, and the trickle-down theory.
Or is the ultimate irony the fact that so many people blithely accept – and re-use – logical fallacies on themselves, thereby proving that one of the things we most stand to lose with GenAI is our critical-thinking faculties, and the need to develop and internalise their use?
So before you tell me that I’m falling behind because the world is racing forward on the wings of AI, please do the basic critical-thinking work: define ‘falling behind’, define ‘forward’, define 'bright future', and above all show me you’ve thought about it from more than the perspective of your own interests.
Show me you can think without assistance, even if only to start with.
Show me you’re human.