Henry Kissinger: How the Enlightenment Ends

In a thoughtful exploration of philosophy and technology, Kissinger argues that AI developers should start thinking through the philosophical questions raised by AI, and that the government should begin taking AI and its possible dangers seriously. It's difficult to excerpt because he uses the entire article to make his point, but here is his hook:

As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

19 comments:

  1. Anonymous, 4:32 PM

    That dinosaur won't get his perks if computers do the thinking better than he does. Philosophical my eyes.

  2. He won't be around long enough to worry about that, and probably has plenty of savings to rely on anyway. He does raise an interesting point about the difference between thinking and computation. I wonder if that will hold at higher levels of processing, though. It may be that thinking is an emergent property at higher levels of computation, or even that consciousness is.

    Or it may be that consciousness is something inherent in matter. We don't actually have anything like an answer to that question.

  3. Anonymous, 10:59 PM

    Can one create something greater than himself? What are the rules of creation inside the Creation?

    -stc Michael

  4. I think it's an important question: What happens when the machines are smarter than we are? They have no inherent value system, and he points out that even when programmers try to give them one, weird things happen. They will be making decisions we not only do not but cannot understand.

    I already see grown adults do dumb things when they're driving because that's what GPS said to do. ("If you look at the map ..." "No, GPS says ...") Increasingly, I think, people don't think they need to know facts because the internet knows them and they can just check the internet. They don't need to be able to think because they can Google.

    And, they don't want to think because they might commit a thought-crime.

    I also think he makes a good point that AI can do amazing things, but that will also result in some amazingly bad things which, in humans, common sense generally prevents.

    And what happens when hackers hijack an AI? Gaming programs like the AI for chess and go have one goal: Learn how to win. What happens when that kind of AI gets re-purposed?

    I think AI is fascinating, but his questions are good ones. We should be thinking about them now, before we need the answers.

  5. Tom, that chess AI is already being repurposed. I think chess was just the showcase for IBM's Watson AI project, not the goal.

    I'm reminded of the Asimov quote about technology and magic. I think there will be a point where we won't be able to tell the difference between consciousness and computation.

  6. "I think there will be a point where we won't be able to tell the difference between consciousness and computation."

    We can't now. That's one of the issues: if a 'thinking computer' did replace us, would the experience of consciousness be lost to the universe? How would we know if that were a danger? How would we know if the machine were conscious, like we are, so that it had an experience of going about the world, seeing the sunrise, hearing and feeling the rain?

  7. Algorithm is instinct. Both are programmed in. Intelligence is built on that, and a sense of morality is built within that intelligence.

    Machine sentients (actually intelligent, as AIs are not) will have a sense of morality; we'll just need to figure out their paradigms. And this: the machine sentients' major advantage over us will be their speed of thought and better memories, not necessarily any superior intelligence.

    Eric Hines

  8. That's an interesting idea.

    Why do you think machine sentients would have a sense of morality? For humans, the internal sense of morality is an evolved one, or it was given to us by God (or both). Machines won't have these sources, it seems.

    Going back to your initial claim about morality, I don't see any necessary connection between intelligence and morality. Could you elaborate on that?

  9. Morality either evolves from a need to understand how to maximize getting along with others for one's own sake, or the answer is given as received wisdom by God. In the latter case, our understanding of the received wisdom itself evolves, albeit not monotonically, toward a better grasp of it. Machine sentients would be no less subject to evolutionary pressure than any other entity in an environment. And we humans are machine sentients' god, whether they accept that or not--we created them, or the seeds from which they...evolved...through our programming and physical construction of them.

    Intelligence speeds the evolution of a mental construct through its own impact on the associated evolutionary pressures. It likewise speeds the approach to a more perfect understanding of received wisdom. Between those two extremes lie free will and conscience, both of which are heavily influenced by intelligence.

    Eric Hines

  10. I think the biggest objection I have to your idea is that evolution is biological, so I don't see any reason that AI would evolve in anything resembling the way biological organisms do.

    I think morality developed in humans because that was a core part of our design, but I don't think we even understand how to program that with AI, and of course many state actors won't want AI to be moral in the first place.

  11. No reason for evolution to be biological. Anything that repairs itself or reproduces is going to respond to evolutionary pressure, else it won't survive very long. And there's no reason that machine sentients wouldn't figure out how to reproduce in some way, if only to adjust their own programming so as to better handle their environment for their own benefit.

    I drew the distinction I did between AIs and machine sentients on purpose, but in terms of evolution, the difference is irrelevant. As soon as we make AIs self-reparable, we will have given them the ability to evolve. Once we make them self-replicable, their evolution will be off to the races. Especially if we're not around to manage their repair/replication facility. There are at least two ways our absence could come about. One is if we as a species disappear, or through our own evolution lose interest in them. Another is when (not if) we start sending self-reparable and self-replicable (we'll surely do both) AIs off to other planets, particularly to other stars, or to any human-incompatible environment to do our exploring and exploiting for us. As that progresses, we won't be around to manage their moral development, either.

    Morality in AIs, and especially in the machine sentients we build, will be inherent in the algorithms/instincts we program into them through the programmers' own biases. I suspect, though, their own drive to protect their lives will give a common enough origin that their moral systems won't be unintelligible to us.

    Eric Hines

  12. If it's not biological, it may not care about survival. A biological organism's most basic instinct is survival, but AI can be programmed to have anything as its most basic instinct, or to have no basic instinct. There is no reason whatsoever to think survival will matter to AI, unless we program them that way. Therefore, there is no particular reason to think they will respond to evolutionary pressures based on survival.

    On the other hand, they will respond to pressures on whatever their basic instinct is, if they have any. Their evolution won't be like ours.

    We will have to wait and see if the programmers do give them a kind of morality or not.

  13. To some extent, a survival instinct is tautological. Does a virus molecule have a survival instinct? It certainly evolves toward its survivability; those viral molecules that don't make corrections to keep them/their replicated molecules around...don't. Is a virus molecule even biological? Narrowly defined, it has the HCON components that we're pleased to define as organic. But it behaves more mechanically than biologically. The physics of chemistry drives it far more than any biological imperatives. Do their follow-ons, single-celled organisms, have a survival instinct? They certainly make DNA/RNA/viral precursor changes to maximize their/their follow-on progeny's ability to survive. Those that don't...no longer exist.

    What drives that virus? A programmed-in algorithm. Did God program a relatively complex molecule to have a survival drive? Or did chance? At that level, does "survival" even have any meaning, or is it just a matter of some molecules "wearing out" and others lasting longer? The outcome is the same, either way, all the way up to and into the existence of an overt survival instinct. Once there, morality develops, most likely guided by God in accordance with the ability of His audience to absorb the lesson.

    Eric Hines

  14. There is a very interesting question about the biological definition of life and whether viruses are alive or not, but I don't think there's a real question about whether they are biological. For living, organic things, survival instinct may well be tautological; it may be part of the definition of being alive.

    It isn't tautologically there for machines, however. There is nothing inherent in a machine that wants to survive. We can program it to try to survive, and if we do, AI will make it very good at surviving. But we could just as easily program it to do other things and not care about survival, or even to seek its own destruction. I don't see anything that will cause it to independently develop either a survival instinct or any form of morality.

  15. What's the difference between a molecule and a machine?

    It's true enough that if we don't make our machines self-replicating, it's unlikely they'll evolve to develop a survival imperative. But as soon as we make those machines self-repairing, much less self-replicating (as we'll need to do at the least for long-term exploration in hostile-to-us environments), we will have made them susceptible to evolution. And given them a survival imperative. And led them to endogenously develop a moral standard as that evolution proceeds far enough.

    Eric Hines

  16. make our machines self-replicating ==> make our machines self-repairing

    Eric Hines

  17. Well, between a living molecule and a machine, there are some important differences. All of life, in fact.

    Why do you think they would develop a survival imperative? I don't think they would. Machines do what they are programmed to do, and if they are not programmed with a survival imperative, I see no reason whatsoever that they would develop one. Could you explain how that would happen, exactly?

  18. Self-repair is a survival imperative; self-replication a stronger one.

    Also, how is a molecule alive? We can't even define life, much less discriminate a virus' activity from the mechanics of a chemical reaction.

    Consider one pathway of how evolution works. The instruction set for repair/replication does its trick and repair/replication proceeds. Except that it's never perfect; the instruction set makes an error in execution, or it gets damaged and instructs something different, or.... The vast majority of the time, the result of the error is fatal to the thing being repaired/replicated, and nothing comes of it other than the thing ceasing to operate. A subset of those errors, though, has no apparent effect on the thing being repaired/replicated, and so the thing proceeds with repair/replication with the alteration incorporated. A tiny, but non-zero, subset of the errors proves at least slightly beneficial to the thing being repaired/replicated relative to the environment in which it's operating. The results of those last two subsets of error proliferate, the "beneficial" errors more so.

    It makes no difference whether the instruction set is RNA, or DNA, or an algorithm.
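
    To make that error-and-selection loop concrete, here is a rough toy sketch in Python. Every constant, name, and rule in it is an illustrative assumption made up for this sketch, not anyone's actual model of evolution.

```python
# Toy sketch of the error-and-selection loop described above: replicators
# copy an instruction set imperfectly; most copy errors are fatal, a few
# are neutral, a very few improve the odds of replicating again.
# All constants here are illustrative assumptions.
import random

POP_CAP = 200        # the environment only supports so many replicators
GENERATIONS = 50
P_ERROR = 0.10       # chance a copy contains an error
P_FATAL = 0.90       # share of errors that are fatal to the copy
P_BENEFIT = 0.02     # share of errors that are slightly beneficial

def copy(fitness):
    """Return the fitness of a copy, or None if the copy error is fatal."""
    if random.random() >= P_ERROR:
        return fitness                # faithful copy
    roll = random.random()
    if roll < P_FATAL:
        return None                   # fatal error: the copy never works
    if roll < P_FATAL + P_BENEFIT:
        return fitness * 1.1          # rare beneficial error
    return fitness                    # neutral error: no visible change

population = [1.0] * 20               # start with identical replicators

for generation in range(GENERATIONS):
    offspring = []
    for fitness in population:
        # higher fitness -> better odds of pulling off a replication
        if random.random() < min(1.0, 0.5 * fitness):
            child = copy(fitness)
            if child is not None:
                offspring.append(child)
    # the environment culls the population back down to its capacity;
    # copies carrying beneficial errors are the ones that tend to remain
    population = sorted(population + offspring, reverse=True)[:POP_CAP]

print("mean fitness after", GENERATIONS, "generations:",
      sum(population) / len(population))
```

    Run it a few times and the mean fitness drifts upward: the rare beneficial copying errors accumulate while the fatal ones simply disappear, regardless of whether the instruction set being copied is RNA, DNA, or code.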

    Eric Hines
