In his laconically named 1637 treatise, Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences, René Descartes argued that while a mechanical body could imitate human behaviour if it so wished, true thought (therefore true being) was exclusive to the res cogitans – the thinking substance – which machines could never possess.
One wonders if this was taken as a challenge, and (separately) if it was meant to be one.
In the centuries that followed, mechanistic imitations of the living world only proliferated further. Jacques de Vaucanson's grain-kernel-digesting-and-excreting duck of 1739, for instance – deft as it was in its intended simulation – marked the beginning of the hunt for the line between imitation and genuine cognition. As fascinating as these Wunderkammer oddities were, their responses remained unoriginal, in that they were bound to a predetermined sequence, thereby failing to meet Descartes' original criterion. If all you can do is repeat yourself, it is very unlikely that you have much choice in the matter of thinking.
By the nineteenth century, we begin to notice an interest in a different sort of machine. Charles Babbage's Analytical Engine, now considered a prototype of the modern computer, inspired much talk – for now here was a device whose outputs were not so much repetitive as they were conditional. On this subject, Ada Lovelace, mother of computer science, had this to say:
The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.
This view has been contested ever since, with many modern computer scientists declaring it outdated. Alan Turing himself mulled it over a fair bit in his 1950 paper, Computing Machinery and Intelligence, where it is addressed as "Lady Lovelace's Objection".
Tests to judge whether a certain object or alien was humanlike (and/or just intelligent enough for viable communication, given a desperate enough hero) had reached the status of sci-fi convention by the time Turing got around to publishing his work.
A shiny, sparkly, pioneering example of this was the short story A Martian Odyssey by the (unfortunately short-lived) American author Stanley G. Weinbaum. Pioneering, mostly in that it was one of the earliest attempts to imagine an alien mind without reducing it to a man in crimson greasepaint.
A Martian Odyssey’s wandering American chemist protagonist, Jarvis, staggers across the Martian plain to meet Tweel, a creature whose logic spirals orthogonally to human categories. Tweel repeats words earnestly, imitates gestures with approximated enthusiasm, and displays a curious reasoning that oscillates between the insightful and the utterly incomprehensible. The story, the conscientious reader will have noted, declined the easy route: it refused to make the alien merely the generic oddly behaved Englishman abroad. Instead, it invented an intelligence whose inner grammar could only be inferred through halting, improvised translation.
Once that narrative door cracked open, fiction eagerly marched through. It was not long ere the twentieth century’s shelves began to groan under the weight of makeshift Turing laboratories disguised as adventure tales: from Philip K. Dick’s android bounty-hunting psychodramas, to Stanisław Lem’s sardonic Solaris wondering if humans could even define “intelligence” without flattering themselves, to Arthur C. Clarke’s HAL 9000 running the most infamous diagnostic routine in cinematic history.
| Andrei Tarkovsky's 1972 film poster for Solaris |
The rituals may have varied, but the prayer never strayed: a human trying to coax the Other into revealing whether its perceived ‘mind’ is real, simulated, or (more disturbingly) something that makes the distinction look provincial.
Now, these are experiments in fiction. What about Turing himself?
He tried to circumscribe this whirl of intuitions with what he called the “Imitation Game”.
The question “Can machines think?” he dismissed as a metaphysical briar patch and replaced with a more practical scenario: if a machine’s dialogic performance is indistinguishable from a human’s, then arguing about its inner essence is a waste of time. A machine that can imitate a human might as well have reached res cogitans.
In effect, Turing shifted the conversation from a question of ontology (‘the study of what there is’ – to borrow the Stanford Encyclopedia of Philosophy’s admittedly contested definition) to a more realistic one of conduct.
What did the shift imply?
It reframed intelligence as little more than performance. The perceived inherent privilege of the human mind was now irrelevant; all that mattered was its behaviour, and a clever enough machine – or a clever enough illusionist, one mustn’t rule out any spiritual descendants of the Vaucanson kind – could, in principle, pass.
Oh wait. Let’s go over the Imitation Game first, shall we?
In its original incarnation, the Game (...that you also just lost) was austere in its ambitions. A human judge interrogates, via text, two hidden interlocutors, and must decide which is the machine. The test's central thesis lay in freeing the question of the mind from the confines of the skull and relocating it to the output on paper.
The duck excreted breadcrumbs; the chatbot excretes plausible sentences. Both are judged not by the authenticity of their inner workings but by the stability of the performance.
Early cyberneticists (practitioners of “the science of control and communication in the animal and the machine,” as Norbert Wiener, an American front runner in the field, defined it) gleefully embraced the Game for its refusal to treat language as a divine spark.
And then, of course, came the dawn of the internet.
From the Imitation Game, it is but a short historical stumble to the rise of that least poetic of Turing’s descendants: the notorious CAPTCHA.
Words cannot describe how much I despise CAPTCHAs. They are the stuff of nightmares. Tiny digital sobriety tests that make every second human feel a curious kinship to every second malfunctioning toaster. “Select all images containing bicycles.” My good fellow. There have been occasions where I have missed my own car in parking lots. You expect me to discern whether that blurry pixel cluster in the upper-left corner is a bike, a mailbox, or an abstraction of despair?
Regardless.
The CAPTCHA was born in the middle of a joking duel betwixt graduate students of Carnegie Mellon University and MIT during the early 2000s. Its main purpose: to defend the digital frontier against influxes of bots and spam.
The earliest CAPTCHAs were grotesquely warped letterforms that no typographer would sanction outside of a fever dream. Behind them was an optimistic logic: humans had some primordial knack for deciphering degraded glyphs that still eluded our automated counterparts. Later variants shifted towards the visual – identifying which of several images contained a tree, a house, or “a storefront” (although the latter category seemed to oscillate between Victorian arcade, suburban laundromat, and what is best described as a liminal IKEA vignette).
Then we see the proud second generation: reCAPTCHA.
This one dared to go beyond mere proof of humanity, outsourcing to its solvers the labour of digitising millions of pages of old books. The homely CAPTCHA suddenly began living an ingenious double life. In verifying your human status, you were also deciphering a faded term from a 19th-century tract on naval engineering or a Duns Scotus commentary. The visual CAPTCHAs were no less industrious – you were now, unbeknownst to yourself, labelling streetscapes for self-driving cars.
| Got this message a total of six times while writing this. |
The third generation, i.e., the now-ubiquitous “I am not a robot” checkbox, represents perhaps the most opaque turn in the saga. By clicking a box, you do not actually assert your humanity. Instead, you permit an unseen engine to analyse the behavioural traces of your cursor movement. The human is inferred from the micro-tremors of embodied interaction – something of a cryptic echo of Descartes’ insistence that behaviour, however mechanistic, ultimately reveals the mind.
That some bots have learned to mimic these tremors with exquisite precision is simply the next chapter in the ongoing arms race. This is our life now.
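To give a flavour of the idea – and only a flavour, since the real engine is proprietary and considerably less naive than anything I could sketch – here is a toy illustration of how one might score a cursor trace for ‘human wobble’. Everything in it (the function, the sample paths, the very notion that average angular jitter is a useful signal) is my own assumption, not anyone’s actual method.

```python
import math

def jitter_score(points):
    """Toy heuristic (an assumption, not any vendor's real check): humans rarely
    drag a cursor in a perfectly straight, perfectly timed line, so we average
    how sharply the path bends between consecutive segments."""
    if len(points) < 3:
        return 0.0
    deviations = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)  # heading of the first segment
        a2 = math.atan2(y2 - y1, x2 - x1)  # heading of the next segment
        deviations.append(abs(a2 - a1))    # change of direction at this point
    return sum(deviations) / len(deviations)

# A bot gliding in a ruler-straight line scores ~0; a wobbly human hand scores higher.
bot_path = [(i, 2 * i) for i in range(20)]
human_path = [(0, 0), (3, 1), (5, 4), (9, 5), (12, 9), (14, 14), (18, 15)]
print(jitter_score(bot_path), jitter_score(human_path))
```

Which, of course, is exactly why the arms race exists: nothing stops a sufficiently motivated bot from adding its own synthetic wobble.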
If there is anything one can take away from this history, it is that lines drawn to safeguard human exceptionalism have a habit of dissolving. The duck’s clockwork bowels were once marvel enough; now they appear quaint beside algorithms that are (or at least claim to be!) capable of composing sonnets and negotiating ceasefires.
Does this mean machines have become our peers? I don't like to think so.
It is the criteria for peerage that must change with the times, shaped by centuries of erosion by automata, engines, simulations, and digital phantasmagoria.
There may come a day when the proliferation of tests renders the category “human” less a biological designation and more a credential repeatedly renewed. When that happens, Descartes’ res cogitans will have completed its transition from metaphysical postulate to bureaucratic formality.
* * *
Halloa!
Quick reminder about the mailing list that you can join by clicking the three horizontal lines on the top right corner of the banner. I also need to figure out if it's actually consistently functional, so if you're interested in the stuff I post here, please drop in your email-id and see if it works. Feedback is much valued.
As I’ve said before, I am no fan of mailing list pop-ups. I find them obtrusive and annoying, and I have no wish to subject readers here to them; although they would probably drive more traffic to the blog.
Your patience is much appreciated and envied!