The topic of this course is the prospect of Artificial Intelligence, considered as the object of a threat analysis. Its method is dramatization, through systematic scenario construction. Our question: How would a global bureau, tasked with – and seriously committed to – the protection of human interests against a strategically competent synthetic agency, formulate the problem it is dealing with? (With the persistent under-rumor: Would this problem be anything other than the challenge posed historically by Capital to Scientific Socialism, exotically re-stated?) The distant ancestor of the question is posed by Samuel Butler, in his “The Book of the Machines”:
“There is no security … against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organised machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?”
The proximal ancestor is the identification of existential risk (‘X-risk’) and its speculative Friendly AI (‘FAI’) solution. Here we find not only a threat analysis, but an institutional crystallization (i.e. a prototypical ‘Anthropol’, though with only a hint of the social authority it might consider ultimately essential to its mission). This germinal organizational response, represented by The Cambridge Centre for the Study of Existential Risk (CSER), and the Machine Intelligence Research Institute (MIRI), among other entities, provides us with a concrete point of departure. The course consists of two modules, each of four weeks. The spiral menace of self-escalating machine intelligence draws the abstract diagram of the course structure: beginning with a tightly-focused examination of the specific anxieties explored by the X-Risk / FAI institutions, and then (in Module-2) revisiting them within a widening gyre of expanded historical and theoretical scope. Topics include intelligence explosion (the ‘Foom Debate’); simulation, dissimulation, and substitution (the Imitation Game); coordination problems and human interests; and robot terror as media spectacle.
This is a seminar open to an arbitrary degree of ironization. For those with such severe skepticism-control problems that the question “Why are robots going to kill us?” appears irreducibly ridiculous, the questions “Why do we think robots are going to kill us?” or even “Why are we being so insistently told that robots are going to kill us?” can frame the entire discussion without loss of information. If Butler’s Erewhon is excluded, because treated as an honorary theoretical text, the classic literary reference is still (and forever) William Gibson’s Neuromancer. (‘Anthropol’ is exactly our name for Gibson’s ‘Turing Cops’.)
Each module of the two-part seminar will be composed of four two-and-a-half-hour sessions, each conducted as an extended seminar. During each session, material blogged the previous week will be discussed alongside the set material. Based upon the set readings, online news and commentary, and ongoing class discussion, students will be expected to contribute 400 words of content to the seminar blog on relevant topics weekly. (This will additionally be posted to the Google Classroom page for everyone to read and comment upon as they wish, providing some preliminary threads for the group discussion.) The final assessment will consist of a 2,500-word extended essay on a topic agreed upon with the instructor in advance.