More Thoughts on AI

As shown, computers just function as faster mathematicians. In fact, “computer” used to mean a person who performs calculations; the word’s first recorded use in that sense dates to 1613. Even Turing conceived his eponymous machine to work that way:

“The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail.”

Advanced mathematics is just a set of more efficient methods for simpler operations: multiplication and division are iterations of addition and subtraction, respectively, done with greater rapidity. But our progress from simple to advanced, from the abacus to the Pascaline to the stepped reckoner to the difference engine, has made us assume that we were progressing in some deeper sense. The creation of the ARPANET and the space race all but confirmed it for some.
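To make that concrete, here is a minimal Python sketch (my own illustration, assuming non-negative integers) of multiplication and division reduced to iterated addition and subtraction:

```python
def multiply(a, b):
    """a * b computed as b iterated additions of a."""
    total = 0
    for _ in range(b):
        total += a
    return total

def divide(a, b):
    """a // b computed as iterated subtraction; returns (quotient, remainder)."""
    quotient = 0
    while a >= b:
        a -= b
        quotient += 1
    return quotient, a

assert multiply(6, 7) == 42
assert divide(45, 7) == (6, 3)
```

Hardware multipliers are far cleverer than this, of course, but the reduction to repeated addition is the conceptual point.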

So there are those who feel confident in the future prospects of AI. But I would argue this is not unique to computer scientists or workers in this field. We tend to fetishize the newest thing. Each new “breakthrough” is heralded as a panacea, but the promised cure never arrives. Sci-fi fan Paul Krugman says:

“The history of artificial intelligence is that it’s always 10 years ahead, and that’s been true for about 50 years.”

But I think he’s being too generous: he gave their story credence at the start, and I won’t. Like I said, we fetishize the latest technology, and using it to explain the mind is not a new pattern for us. Plato theorized that the mind was like a wax writing tablet. Robert Hooke’s work on phosphorescence led him to conclude that the mind has innate stored memories the same way that phosphorescent liquids “store” light. Thomas Edison, after inventing the phonograph, took auditory memory to be “an album containing phonographic sheets”. Even the predictive coding model, in which the brain is taken to be a predictor of future events, is based on technology used by WWII anti-aircraft gunners to compensate for the two-second delay between a round being fired and its reaching the moving enemy plane.
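The gunnery analogy is, at bottom, simple extrapolation. A toy Python sketch (my own; it assumes a constant-velocity target and treats the essay’s two-second delay as a fixed flight_time parameter):

```python
def lead_point(position, velocity, flight_time=2.0):
    """Aim where a constant-velocity target will be when the round
    arrives, not where it is now."""
    x, y = position
    vx, vy = velocity
    return (x + vx * flight_time, y + vy * flight_time)

# Hypothetical numbers: a plane at (1000 m, 300 m) flying 100 m/s along x.
print(lead_point((1000.0, 300.0), (100.0, 0.0)))  # (1200.0, 300.0)
```

The predictive coding claim is that the brain, like the gunner, aims at where the signal will be rather than where it is.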

And then came modern computers, followed by more mind metaphors. And companies helped the process along with click-and-drag icons, images of little tan folders holding files, and so on. This made the illusion more palatable, sold more products, and created jobs for people to manage those illusions.

So in the face of all that new technology, despite the temptation to proclaim progress, we should remember our origins and heed the voices of those who show us our limits. When Ada Lovelace was writing her notes on Babbage’s Analytical Engine, she made it clear that:

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

If there were more with that mindset, maybe progress could in fact be made. This criticism is not meant simply to burst anyone’s bubble. “Criticism has plucked the imaginary flowers on the chain not in order that man shall continue to bear that chain without fantasy or consolation, but so that he shall throw off the chain and pluck the living flower,” as a famous German once said.

Beyond the fetish of strong AI, in which computers reach consciousness, my other concern is the complacency of a culture that has no worry about weak AI, the mere simulation of intelligent behavior. We can use these “weak” expert systems and databases to enhance, not replace, our human intelligence. That’s the hope, anyway. In this, I agree with Leibniz that “it is beneath the dignity of excellent men to waste their time in calculation when any peasant could do the work just as accurately with the aid of a machine.”

While we do want to save ourselves toil, I find it hard to see the inevitable outcome, if left unchecked, as optimal. As we approach an increasingly digital and accessible noosphere, I worry about how much authority is being ceded to others. I have doubts about AI driving my car; a fortiori, allowing the noosphere to do my thinking is a step too far. Similarly, when I buy “1984,” I actually want to own it. So I would have to agree with John Stuart Mill that “[it is] questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being”.

Yet if Moore’s law, or something like it, continues, then we will increasingly automate. As far back as Turing, this seemed obvious:

“There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…at some stage therefore we should have to expect the machines to take control.”

Is that a good thing? Do we want fewer jobs while retaining an economy that requires employment? The idea of unemployment and economic collapse amid an overabundance of laborers speaks to the ubiquitous and unimaginative internalization of free-market capitalism. Since its coinage in 1921, in Karel Čapek’s play R.U.R., “robot” has been a metaphor for the dehumanized and exploited laborer, so much so that the Gestapo named Čapek public enemy number two. After all, if you disassembled a computer you’d find (electric) power evenly distributed, and machine cycles queued and prioritized in the manner of “from each according to his ability, to each according to his need.” This contrasts deeply with those who want economic life to be Darwinian. And we fail to consider alternatives so thoroughly that even a post-scarcity society is presumed to be angry at the thought of collectivization.
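That scheduling analogy can be made literal. Here is a minimal sketch (my own, not a description of any real operating system) of round-robin scheduling, the simplest equal-share policy: every task gets the same slice of machine cycles, then rejoins the back of the queue, with no regard for rank:

```python
from collections import deque

def round_robin(tasks, quantum=1):
    """Run tasks in a cycle, granting each an equal time slice until done.
    `tasks` maps a task name to the cycles of work it still needs."""
    queue = deque(tasks.items())
    schedule = []
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)                # this task gets the CPU now
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # back of the line, no priority
    return schedule

print(round_robin({"editor": 2, "compiler": 3, "clock": 1}))
# ['editor', 'compiler', 'clock', 'editor', 'compiler', 'compiler']
```

Real schedulers weight slices by need and priority, but the egalitarian queue is the core idea.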

This inability to think of robots as anything but Other might be a new layer of false consciousness, which presents another set of problems. Even if AI remains weak and never fully replicates human sentience, we still wouldn’t want this, right? We want something closer to this. Otherwise we may get this en masse.

Maybe AI Risk is paranoia. Maybe the worst-case scenarios are overblown. I’d still rather be cautious, because both weak AI and strong AI can be lethal. So I think we should proceed slowly. Hopefully the problems of AI Risk never get to xenocide and instead plateau at nuisance. Or maybe we’re all wrong and some third, benign, unexpected reality happens, such as AI omphaloskepsis.

Fingers crossed.