Sunday, February 11, 2018

Yet Another Baseless Claim about Consciousness


If I live long enough, I'm planning to write a book entitled "The 100 Stupidest Things Anyone Ever Said About Minds, Brains, Consciousness, and Computers". Indeed, I've been collecting items for this book for some time. Here's my latest addition: Michael S. Gazzaniga, a famous cognitive neuroscientist who should know better, writes:

Perhaps the most surprising discovery for me is that I now think we humans will never build a machine that mimics our personal consciousness. Inanimate silicon-based machines work one way, and living carbon-based systems work another. One works with a deterministic set of instructions, and the other through symbols that inherently carry some degree of uncertainty.

If you accept that the brain functions computationally (and I think the evidence for this is very strong), then this is, of course, utter nonsense. It was the great insight of Alan Turing that computing does not depend in any significant way on the underlying substrate where the computing is done. Whether the computer is silicon-based or carbon-based is totally irrelevant. This is the kind of thing that is taught in any third-year university course on the theory of computation.

The claim is wrong in other ways. It is not the case that "silicon-based machines" must work with a "deterministic set of instructions". Some computers today have access to a source of truly random numbers (at least according to our current physical understanding) in the form of radioactive decay. Moreover, even the most well-engineered computing machines sometimes make mistakes: soft errors can be caused, for example, by cosmic rays or radioactive decay.
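To make the point concrete, here is a minimal Python sketch that draws non-deterministic bytes from the operating system's entropy pool; exactly which physical events feed that pool varies from system to system:

    import os

    # Draw 8 bytes from the operating system's entropy pool. On typical
    # systems this pool is fed by unpredictable physical events (device
    # timings; on many CPUs, a hardware random-number generator), so the
    # output is not reproducible from the program text alone.
    raw = os.urandom(8)
    print(int.from_bytes(raw, "big"))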

Furthermore, Dr. Gazzaniga doesn't seem to recognize that if "some degree of uncertainty" is useful, this is something we can simulate with a program!
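For instance, here is a toy Python sketch of a completely deterministic program that deliberately injects uncertainty into its output; the logistic curve and the noise level sigma are arbitrary illustrative choices:

    import math
    import random

    # A deterministic computation plus simulated uncertainty: the unit
    # "fires" (returns 1) with a probability blurred by Gaussian noise.
    def noisy_unit(x, sigma=0.1):
        p = 1 / (1 + math.exp(-x))    # deterministic part: a logistic curve
        p += random.gauss(0, sigma)   # simulated uncertainty
        return 1 if random.random() < p else 0

    print([noisy_unit(0.5) for _ in range(10)])  # output varies run to run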

6 comments:

JimV said...

I guess he hasn't heard of Fuzzy Logic, among other things.

I have felt for a long time that some degree of randomness is beneficial in game programs and therefore probably part of our evolutionary programming also.

In a simple computer, e.g., the old Apple II, it could be implemented by counting machine cycles between external stimuli, such as key or mouse input, and using that count as the seed for a pseudo-random number algorithm. It seems to me our neurons could do something similar, such as counting the number of photons hitting our retinas. (In "QED", Feynman mentions that the human eye can detect single photons and was the most sensitive detector available for early quantum experiments.)
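In modern Python the idea might look like this sketch, with nanosecond timing standing in for the Apple II's machine-cycle count:

    import random
    import time

    # Measure the (humanly unpredictable) interval until the user responds,
    # and use that interval as the seed for a pseudo-random generator.
    start = time.perf_counter_ns()
    input("Press Enter whenever you like... ")
    elapsed = time.perf_counter_ns() - start

    rng = random.Random(elapsed)  # seed the PRNG with the measured interval
    print(rng.random())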

Gingerbaker said...

Ok, so if you have an old computer which has a program written in Basic to sum two numbers, and you have a new computer which has a program written in the latest language possible to write poetry, or to control how a robot perceives its environment, does the newer computer have more consciousness?

Or is it just a bad question - because neither computer has any consciousness whatsoever?

Jeffrey Shallit said...

For me, consciousness means "awareness of and ability to respond to the environment". So a program that adds two numbers is not particularly aware at all, but a robot that can navigate unfamiliar terrain can fairly be said to be more conscious. I am not yet ready to propose a way to measure consciousness rigorously on a scale, though.

JimV said...

Flatworms can memorize mazes. Does a flatworm have as much "consciousness" as a human?

No, I don't think so, but it is part of the spectrum on the low end.

The operating system of the old computer received input, passed it on to the appropriate internal program to process the input, received the result, and displayed it externally. That is analogous to what our consciousness does. It is the operating systems of computers that are becoming more and more conscious, I think. (Internal programs are becoming more powerful also.)
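A toy Python sketch of the analogy, with a couple of invented "internal programs":

    # Receive input, route it to the appropriate internal program, and
    # display the result. The two handlers are purely illustrative.
    handlers = {
        "add": lambda args: sum(int(a) for a in args),
        "echo": lambda args: " ".join(args),
    }

    def os_loop(commands):
        for line in commands:
            cmd, *args = line.split()
            print(handlers[cmd](args))  # process internally, display externally

    os_loop(["add 2 3", "echo hello world"])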

Nonlin.org said...

1. Call it an automaton, golem, automatic pilot, robot, or artificial intelligence (AI): the idea that the inert can turn into the living is not new. And if God can make this happen, why can't a human? While everything is possible in fiction, of course, even some actual human creations have been advanced enough for their times to amaze the uninformed into believing these devices had actually crossed the impossible barrier and come alive. But once the uninformed become informed, the performance becomes less compelling, if still amusing. In essence we are witnessing an arms race between human imagination and human creativity.
...http://nonlin.org/ai/

Unknown said...

The discrete instruction-set model of cognition is a laughably outdated straw man. Gazzaniga has apparently never heard of neural networks or connectionist systems, which deal with non-discrete, probabilistic outcomes quite capably. These networks are based on – guess what? – how brains are actually wired and work. (I have a Ph.D. in communication from Stanford, where I took courses in neural network models of cognition in the 1990s with David Rumelhart, a pioneer of the field).
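For instance, a single connectionist unit is just a weighted sum squashed through a sigmoid, yielding a graded output rather than a discrete yes/no instruction; a minimal Python sketch, with random placeholder weights:

    import math
    import random

    # One connectionist unit: weighted sum of inputs plus a bias, passed
    # through a sigmoid, giving a continuous output in (0, 1).
    def unit(inputs, weights, bias):
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-s))

    w = [random.gauss(0, 1) for _ in range(3)]  # placeholder weights
    print(unit([0.2, 0.9, 0.5], w, bias=0.1))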