Henry Kissinger: How the Enlightenment Ends

In a thoughtful exploration of philosophy and technology, Kissinger argues that AI developers should start thinking through the philosophical questions their work raises, and that government should take AI and its possible dangers seriously. It's difficult to excerpt because he uses the entire article to make his point, but here is his hook:

As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

7 comments:

Anonymous said...

That dinosaur won't get his perks if computers do the thinking better than he does. Philosophical, my eye.

Grim said...

He won't be around long enough to worry about that, and probably has plenty of savings to rely on anyway. He does raise an interesting point about the difference between thinking and computation. I wonder if that will hold at higher levels of processing, though. It may be that thinking is an emergent property at higher levels of computation; perhaps even that consciousness is.

Or it may be that consciousness is something inherent in matter. We don't actually have anything like an answer to that question.

Anonymous said...

Can one create something greater than himself? What are the rules of creation inside the Creation?

-stc Michael

Tom said...

I think it's an important question: What happens when the machines are smarter than we are? They have no inherent value system, and he points out that even when programmers try to give them one, weird things happen. They will be making decisions we not only do not but cannot understand.

I already see grown adults do dumb things when they're driving because that's what GPS said to do. ("If you look at the map ..." "No, GPS says ...") Increasingly, I think, people feel they don't need to know facts because the internet knows them and they can just look them up. They don't need to be able to think because they can Google.

And, they don't want to think because they might commit a thought-crime.

I also think he makes a good point that AI can do amazing things, but that it will also do some amazingly bad things that, in humans, common sense generally prevents.

And what happens when hackers hijack an AI? Gaming programs like the AI for chess and go have one goal: Learn how to win. What happens when that kind of AI gets repurposed?

I think AI is fascinating, but his questions are good ones. We should be thinking about them now, before we need the answers.

Christopher B said...

Tom, that chess AI is already being repurposed. I think chess was just the showcase for IBM's Deep Blue project, not the goal.

I'm reminded of the Arthur C. Clarke quote about technology and magic. I think there will be a point where we won't be able to tell the difference between consciousness and computation.

Grim said...

I think there will be a point where we won't be able to tell the difference between consciousness and computation.

We can't now. That's one of the issues: if a 'thinking computer' did replace us, would the experience of consciousness be lost to the universe? How would we know if that were a danger? How would we know if the machine were conscious, like we are, so that it had an experience of going about the world, seeing the sunrise, hearing and feeling the rain?

E Hine said...

Algorithm is instinct. Both are programmed in. Intelligence is built on that, and a sense of morality is built within that intelligence.

Machine sentients (actually intelligent, as AIs are not) will have a sense of morality; we'll just need to figure out their paradigms. And this: the machine sentients' major advantage over us will be their speed of thought and better memories, not necessarily any superior intelligence.

Eric Hines