Co-creation with GenAI: the fallacy that needs debunking
The author and techno-specialist Cory Doctorow reminded us recently that there are at least two ways to look at human-machine interaction, and it’s worth quoting the paragraph in full:
‘In automation theory, a “centaur” is a person who is assisted by a machine. Driving a car makes you a centaur, and so does using autocomplete.
A reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine. For example, an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing is not allowed on the job, and rats the driver out to the boss if they do not make quota.
The driver is in that van because the van cannot drive itself and cannot get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance’.
Doctorow adds that obviously, while being a centaur is great and helpful, being a reverse centaur is pretty bad: who would want that?
To some extent, of course, doom-scrolling is already close to turning users into reverse centaurs; after all, an algorithm that is secret (to its users) force-feeds information to those users without their explicit consent. If I like country music, Spotify will recommend more of the same, and will decide for me which other songs I would like. Amazon does the same. Those algorithms need us only as spreaders of usage and as databases: once they have what they consider enough data on a user, they generate an almost inescapable gravitational pull, leading users where they want them to go. Users end up having little say in the matter, and everything that lies outside the algorithm’s radius (i.e. whatever you never showed an interest in) will remain invisible to you, unless you make a conscious, determined and consistent effort to discover what you do not even know exists. A strong sense of curiosity is required to break those invisible barriers, and a strong choice-making ability is needed to resist an endless stream of similar suggestions.
Over the last three or four years, we have heard a lot about AI and GenAI (like ChatGPT, which I use below as a generic name): how the future is here, how we can’t avoid it, and how we must adapt or else be left behind and die. Leaving aside the fact that this pseudo-Darwinism is typically invoked by those who already hold power (and the fact that those who stand to gain the most from our adopting GenAI are… those who develop and sell it), another idea quickly appeared in educational circles: collaboration. Sparring with GenAI. Co-creation.
I think this idea could usefully be framed in a slightly different way, by asking two very simple questions:
1. What is required of each partner in a collaborative project?
2. What is required to assess the worth of one’s partner’s work?
For the first question, I will limit myself to very basic notions, and I contend that what is needed to cooperate with someone, to work with someone, to co-create, is a measure of equality. Of course, you will accept that your partner is better at task A or B than you are – hence the collaboration, perhaps – but similarly, you will know that your partner recognizes your skills in other areas. You both benefit from each other’s expertise or ability, but that doesn’t make you unequal: you are both authorities in your field, and both of you know and acknowledge that. My contention is that increasingly, people see ChatGPT as an authority, but not as an equal. Authority here comes to mean ‘an opinion I have no means to assess, so I must accept it’. How that translates into real life these days is people using ChatGPT like they would Google, asking it ‘What should I say to my toddler?’ or ‘What type of shoes is adequate for rainy weather?’ (both real examples, by the way), and then accepting the answer as true and unopposable – ‘ChatGPT knows better’.
The same mechanism is at work with Google Maps, say: I defy you to name more than two or three people you know who double-check the itinerary Google Maps gives them to go from point A to point B. In fact, I’d be very surprised if you could name just one person who does that. Google Maps has become an authority whose judgement is necessarily better than mine and which I should follow without thinking too much. Of course, because Google Maps has taken us to our destination so many times without a hitch, that sense is reinforced: if it got it right 5, 10 or 50 times, why should you question it the next time you use it? And so it comes to pass that you never double-check, and implicitly trust it to be right. In many ways, we do the same with our coffee machine, our car, our phone and basically all our machines: we trust them to work, and to be better at doing something than we are. We do not question our car’s ability to take us home every time we switch on the engine: we trust that by turning the key or pushing a button, the engine will start, the gears will work, the steering wheel ditto. The difference with Google Maps is that we have all experienced a flat tire or a clogged sink, and know machines are fallible: we are centaurs after all, and remain in charge. But Google Maps doesn’t fail like that (unless it stops working altogether), and its failures (picking the wrong road, ignoring bicycle routes because it is inherently car-centred) might not even be noticed by the user unless they double-check. But see above: who does that?
And increasingly, of course: who can actually do that? Many young people can no longer read analog clock faces – can they read maps? Do we care? Should we care?
In the same way, ChatGPT has become an authority whose responses are increasingly accepted as de facto unarguable. When students ask it to generate ideas for an assignment or a lesson plan, what comes out is now seen as coming from an authority: someone who knows more and knows best. But it is of course a profoundly unequal relationship, because ChatGPT recognizes the user as neither an equal nor an authority on anything (it’s a machine; it doesn’t think or feel). And I contend that there can be no co-creation between two unequals – there can be no meaningful collaboration between Master and Servant, between Machine and Reverse Centaur.
The second question was: what is needed to assess one’s partner’s work? The answer is again simple: the ability to evaluate and to choose. To know whether your partner did their part well, you must have the knowledge and skills necessary to evaluate it: to put the finished work in perspective, against expectations based on knowledge. And you need to be able to choose between results, to decide which is best and which is worst.
An obvious problem with GenAI here is of course evaluation: if you constantly ask ChatGPT to generate ideas for you, on what basis will you evaluate the results you get? The further those results are from your field of expertise, or routine, or general knowledge, the more difficult that becomes. If you turn to ChatGPT to organise and structure those ideas in transmissible form, like a presentation or an essay or a memo, on what basis will you evaluate the output if you never structure ideas yourself? And this is of course compounded by GenAI having become an authority, or being perceived as one, so that by now you have the following situation:
Someone asks ChatGPT to generate a few ideas for, say, a presentation they have to give. Not being used to generating ideas themselves for that kind of work, that person will increasingly tend to think that the GenAI output is likely to be superior to their own; not having the expertise to assess the worth of the output, they cannot question it very much. So when it comes to choosing whether the output is correct, or good, or valid, they are increasingly likely not to do so – to let the machine choose for them. After all, if it worked once, it should work twice, right? And whatever I tell myself about saving time by delegating tasks to the machine, the reality is that increasingly I cannot tell whether the output is good or not, I cannot decide whether the choices on offer are good or not, and ultimately, because I do not see myself as an equal, I will tend to accept what I am given without checking, without double-checking… without even thinking very much.
That is the problem with ‘Co-creating with AI’, with ‘Collaborating with AI’, with ‘Sparring with AI to generate ideas’: we are losing the means to check on the results, and we increasingly come to see ourselves as appendages to the machine, and not at all in charge of it.
Reverse centaurs indeed: how can you be happy about that?
And more importantly: will we be able to reverse that trend?
I think not, at least as long as there is no serious engagement with the problem – something our employers, politicians, financiers and education systems seemingly have no interest in undertaking. Please wake up, and please, oh please, stop urging me to ‘collaborate’ until you have done some serious thinking about the meaning of that word, and about the consequences of your urging on our (meta)cognitive abilities.