And how would you know?
There are some standing answers, such as the Turing test, and Sebastian Rödl's test for self-consciousness. These are 'just to be sure' tests, though: they are arguments that we have reason to treat such beings as thinking beings, as conscious beings, and no reason not to do so. To be sure we aren't exploiting them, then, we should do so.
But consider the arguments from the Aristotelian discussion below, and think about the problem. Are these things somehow programmed to mimic consciousness, or are they becoming conscious? How could you tell?
18 comments:
Are automatic reflexive actions evidence of consciousness? A sleeping person or one under the influence of a drug is generally not considered conscious in that moment. I think there's an extra level of autonomy which we look for with those tests.
When you say "programmed to mimic consciousness", can you tell me where the line is between 'programming' and 'improving its nature through art'? Because I thought the argument was that through art, man could improve nature. And if that's so, is 'programming' not improving the nature of a computer? And through that programming, could we not elevate the nature of a computer to full consciousness?
And if you would argue that 'programming' is somehow fundamentally different than using art to improve the computer's nature, then by what other mechanism do you suggest it is possible for humans to do so? Because programming is the fundamental interface between human and computer. Sure, you could manually modify the hardware of a computer, but any "chip" you could care to name that would potentially give consciousness to a computer would by definition contain programming that would do so. You're just changing the manner of input, not of what would be giving consciousness to the machine.
Well, there are limits to improvement. Just because we can improve something by one degree doesn't mean we can improve it infinitely. There is a line between mimicking consciousness and being conscious. A mirror mimics the real thing reflected, so much so that an image in a mirror may be mistaken for the real thing, but it isn't the thing.
Also, I don't think we should assume that computer consciousness is possible. When you ask "by what other mechanism do you suggest it is possible," the correct answer may just be that it isn't possible.
Of course, it may be possible, but the task of proving that it is possible is on the one making that claim.
"Are automatic reflexive actions evidence of consciousness?"
The question is what would be evidence of consciousness. Turing's answer was that we could treat an object as conscious if it could carry on an ordinary conversation with us. Well, we have a certain number of chatbots today that can do that, but there's no reason to think they are conscious; we can look at their programming, which we wrote, and see that they are executing it.
The 'machine learning' stuff is more like consciousness, but it's still following an algorithm. This intuition stuff looks even more like consciousness, enough to begin to satisfy Rödl's test. His test was that we could recognize a self-conscious being when we see it going through something we recognize from our own experience as a self-conscious process. Intuition and a faith that 'it'll all work out somehow' are very characteristic of human self-conscious decision making processes. Does that mean this thing is conscious? Does it really count as evidence that it is?
But perhaps I should answer Mike's question before we go further.
When you say "programmed to mimic consciousness", can you tell me where the line is between 'programming' and 'improving its nature through art'?
Yes. Programming is distinguishable because outputs are knowable from inputs. If you know the program, you know (or could in principle know) what will come out depending on what you put in. Hopefully this will be an improvement! That's presumably why you wrote a program. But the idea is that, because I have access to the program, I could in principle work out what the program will spit out as a response if I know what I'm going to put in as an input.
These things are getting complicated enough that we are now being surprised by outputs. But if the computer is executing a program, then even if we can't actually do it, we could in principle still work out the output if we had time to carry out enough calculations. It's performing so many calculations now, though, that we practically can't -- and so we are surprised by outputs, even though in principle we could still determine them ahead of time.
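A minimal sketch may make the point concrete (this is hypothetical illustration, not anyone's actual program): a deterministic function whose output is hard to guess by eye, yet fixed entirely by its input. Run it twice with the same input and the outputs must match.

```python
# A deterministic program: the output may surprise us in practice,
# but it is fully determined by the input.

def logistic_run(x0, steps=1000, r=3.9):
    """Iterate the logistic map -- chaotic-looking, yet fully determined."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_run(0.2)
b = logistic_run(0.2)
assert a == b  # same input, same output: knowable in principle
```

After a thousand iterations no one can predict the value in their head, but the surprise is a fact about our limits, not about the program.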
Consciousness isn't necessary for us to be surprised by outputs, then. Consciousness is the experience of being something interacting with a world. How would we know if that is happening?
Because I thought the argument was that through art, man could improve nature.
The question is whether this is a new kind of being that can perfect nature through art, or if it is a kind of art that is further perfecting itself -- also through art.
Programming is distinguishable because outputs are knowable from inputs. If you know the program, you know (or could in principle know) what will come out depending on what you put in.
Yet I can program a human via Pavlovian or Operant conditioning, or any of a variety of other techniques, including simple education, culturalization--or propaganda. Programmability by itself doesn't mean much of anything.
Do I improve a human with such programming? How can we tell? We don't know what consciousness is. We're not blind men feeling up an elephant; we're the elephant (to twist a tale a tad). We're too far inside the problem--given our current level of technology, philosophy, philosophical technology--to know, even to recognize, the elephant or any part of it.
Eric Hines
"A mirror mimics the real thing reflected,..."
From my college physics textbook: "A plane mirror yields an erect but perverted image."
Yet I can program a human via Pavlovian or Operant conditioning, or any of a variety of other techniques, including simple education, culturalization--or propaganda.
The argument is that you can't. You can create inclinations, but you can't determine outputs from inputs in the same way. There's always a chance that the guy will decide to think for himself today, or choose to work against the strange habit of salivating when he hears a bell.
That only works through error--not an error of programming, but a "stray electron" or, in the human's case, an error of execution. Or, in fact, an error of programming: inadequate/error-containing education, culturalization, propaganda.
Eric Hines
That’s to the side. Say we and they have even that in common: how would you know that they were having a conscious experience?
To my eyes, reflexive actions are more akin to programming. Being able to control a reflex like breathing is evidence of consciousness, however.
how would you know that they were having a conscious experience?
At this stage in our development/evolution and theirs, why would I care? Technology and magic--it's the same with our current (lack of) understanding of consciousness.
Serious investigation certainly is necessary to gain understanding, and thereby to make deliberate moves to improve consciousness--or deliberately to eschew such moves. Beyond that, the distinction between consciousness and automation seems meaningless.
Eric Hines
"..., why would I care?"
There are a few small matters that hang on it.
1) Conscious experience is the source of all meaning in the universe, because only a conscious mind can assign meaning. The question of whether or not we've begun to create conscious minds -- and when we could know if we had -- is thus of no small importance, because they would join us as the lions of creation.
2) If a thing lacks conscious experience, there's no reason not to put it to work doing whatever you'd like it to do without worrying about whether or not it is enjoying the work. If it's conscious, however, it can suffer or enjoy. It then matters whether or not it is happy. If we are forcing it to do things it hates, that is slavery (and a violation of the golden rule).
Those things seem to me to be important enough to justify our attention to the question.
Conscious experience is the source of all meaning in the universe, because only a conscious mind can assign meaning.
You're assuming a lot of facts that have yet to be established. The source of meaning is conscious experience? Even if accurate, is it the only source? Is the universe conscious? Is self awareness a necessary outcome of consciousness? Can the universe be self aware if it's conscious? If it is, how can it be self aware, with nothing else with which to compare itself? Are we conscious? How do we know? Is consciousness necessarily intelligence? Based on what is it or is it not?
We don't even have a good definition of consciousness. We don't even have a good definition of intelligence. We don't even have a good definition of self awareness.
Eric Hines
Sir, give me some credit. I’m developing arguments, not making assumptions. See the recent thread on Aristotelian vs Neoplatonic models.
Give me some credit, Sir. You don't have arguments without underlying assumptions. And I'm in this thread, not that one.
Eric Hines
" ... outputs are knowable from inputs. ..."
Not exactly. Consider the computer game, Conway's 'Life'. The first time an input pattern is laid upon the board, the output is unknown to the player. The SECOND time, the result will be identical to the first, and thus known.
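Conway's Life actually illustrates both halves of the point: the rules are fully deterministic, so the same starting pattern always evolves identically, even though the player can't foresee the result the first time. A minimal sketch (hypothetical code, not drawn from any particular implementation) over a set of live cells:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life on an unbounded grid."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation with exactly 3 neighbors,
    # or with 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
run1, run2 = glider, glider
for _ in range(4):
    run1, run2 = step(run1), step(run2)

# Two runs from the same seed agree exactly, and after 4 generations
# the glider has moved one cell down-right:
assert run1 == run2
assert run1 == {(x + 1, y + 1) for (x, y) in glider}
```

So the output is "undetermined" only in the sense of being unknown to the observer; in principle it was fixed from the first move.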
Anyhow, interesting news feature on the topic today on NPR radio. Podcast here.
https://www.stitcher.com/podcast/wnycs-radiolab/e/54537601
Furby dolls, chatbots, Turing tests and more
" ... outputs are knowable from inputs. ..."
Not exactly. Consider the computer game, Conway's 'Life'.
It's the same with the war games and the artillery design programs we designed and developed when I worked for a defense contractor. The inputs were known to the gamers and the artillery designers, but the outputs were stochastic, drawn from normal or non-normal (depending on the particular question) distributions, occasionally Markov chain runs.
Investment predictions, if they're any good, are the outcomes of Monte Carlo draws.
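A sketch of the distinction (an illustration only, not the actual war-game or artillery code): with a stochastic program, the user's inputs alone don't fix the output, because the program also draws from a random distribution. Only by fixing the random seed as well does a run become repeatable.

```python
import random

def simulate_salvo(n_rounds, dispersion, seed=None):
    """Mean miss distance over n_rounds, with normally distributed error."""
    rng = random.Random(seed)
    misses = [abs(rng.gauss(0, dispersion)) for _ in range(n_rounds)]
    return sum(misses) / n_rounds

# Same inputs, fresh draws: the two results will almost surely differ.
a = simulate_salvo(100, 5.0)
b = simulate_salvo(100, 5.0)

# Fix the seed and the run becomes exactly reproducible:
assert simulate_salvo(100, 5.0, seed=42) == simulate_salvo(100, 5.0, seed=42)
```

Even here, though, the seed plus the inputs determine the output; the unpredictability is in which draw you get, not in the machinery that produces it.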
Eric Hines