Dr. Mark Humphrys

School of Computing. Dublin City University.



Philosophy of AI





Names

Philosophy of AI is a history of "big names". The debates are great fun to watch. Here are some big names and my take on them. You don't have to agree with me, of course (the great thing about philosophy is that it's not falsifiable!):


  1. Turing

  2. Dreyfus
    • His books, What Computers Can't Do and What Computers Still Can't Do - AI machines only look intelligent because they are programmed to output their meaningless tokens as English words. They have no idea what they are saying. He seems at times to say AI is impossible.
    • I say: Fair criticism of much of classical symbolic AI. Doesn't apply to the newer sub-symbolic approaches. Dreyfus leads us to the Symbol-grounding problem.

  3. Searle
    • His thought experiment The Chinese Room - AI is impossible. Instantiate your algorithm as a roomful of 1 billion people passing meaningless tokens around. You're telling me you can have a conversation in English with China, yet not one of the Chinese understands English?
    • I say: Yes. The understanding, if it exists anywhere, is a property of the whole system, not of any one individual inside it.

  4. Gödel, Lucas, Penrose
    • AI is impossible because of Gödel's theorem on the limits of logical systems. There are propositions that we can see are true but that the AI's logical system cannot prove. (A short formal sketch of this argument, and the standard objection to it, follows after this list.)
    • I say: Any working AI would not be a logical (truth-preserving) system. It would be stochastic, statistical. See My comments on Penrose.

  5. Penrose also

  6. Edelman
    • His books Neural Darwinism, The Remembered Present and Bright Air, Brilliant Fire - AI is the wrong way to do it. It should be done his way: by selection among neuronal groups, modelled closely on the biology of real brains.
    • I say: Edelman is doing AI. (And not very well.) See My comments on Edelman.

  7. Rosen
    • His book Life Itself - The machine metaphor is incorrect. Here are types of self-referential system that cannot be implemented as a machine.
    • I say: They can be implemented as a machine.
    • See the Church-Turing thesis.

  8. Brooks


  9. Lots of people (*)
    • The soul or spirit exists. There is a spiritual world. You are more than just the matter of your brain and your body. You are not just a biological machine operating according to the laws of physics. You have free will.

    • I say:
      1. Well what does the brain do then? Why is it the most complex object in the universe?
      2. Also, this spirit clearly must have causal effects on your brain/body. So at what point does it interact? In the brain, should we expect to see uncaused causes? Neurons firing for no clear physical cause? The idea is outlandish.
      3. Also, how did this "spirit" evolve from creatures that were just matter?

    • (*) Who are "Lots of people"?
      • Mainly, lots of civilians. Unable to explain how the brain works, just about every human throughout history has believed in souls or spirits. Even today it is part of the official doctrine of most churches, and is claimed by almost every religious thinker and theologian. Perhaps 95 percent of the world's civilians accept this view.
      • Strangely, almost no one who studies the mind argues this view. With a complete disregard for public opinion, most cognitive scientists are materialists (perhaps Eccles is the only exception).
      • Yet if spirits were really needed to explain the mind, then surely it should not be too hard to construct a scientific argument that the brain is not enough.
      • It seems to me that the soul or spirit is merely an example of the "God of the gaps" argument.

      Q. Could we ever prove this? Could AI and Cognitive Science prove that we are material?
      A. Very hard to prove it. If we invent AIs, they clearly have no soul. But that doesn't mean we don't.


  10. Moravec
    • His book Mind Children - AI is coming, and humans will go extinct. And that won't necessarily be bad. AIs will be our inheritors.
    • I say: I like a lot of Moravec, but I have doubts about this. First, who's to say we won't become AIs ourselves? Second, who's going to "mop up" the humans who don't co-operate with this "evolutionary inevitability"? Evolution is not in charge now. We are. And the only way humans will go extinct is by genocide.

  11. Warwick
    • AI is coming, and it's dangerous. And it's going to happen soon (in 10 to 50 years).
    • I say: AI is a lot harder than that. Nothing is happening soon.

  12. Weizenbaum

  13. My favourite debaters with the AI critics:
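
Aside (re item 4): below is a minimal LaTeX sketch of the Gödel/Lucas argument and the standard objection to it. The notation - a formal system F, its Gödel sentence G_F, and Con(F) for "F is consistent" - is the usual textbook formulation, added here purely as an illustration; it is not part of the page's own argument.

% A sketch of the Godel/Lucas argument and the usual reply. Illustrative only.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Let $F$ be any consistent, effectively axiomatised formal system strong enough
to do arithmetic. G\"odel's first incompleteness theorem gives a sentence $G_F$
(informally: ``$G_F$ is not provable in $F$'') such that
\[
  \mathrm{Con}(F) \;\Rightarrow\; F \nvdash G_F
  \qquad\text{and}\qquad
  \mathrm{Con}(F) \;\Rightarrow\; G_F \text{ is true}.
\]

Lucas/Penrose: a human can ``see'' that $G_F$ is true, while the machine
described by $F$ cannot prove it; therefore the human mind is not that machine.

The usual objection (the reply made in item 4 above): seeing that $G_F$ is true
requires knowing $\mathrm{Con}(F)$, which we cannot establish for any $F$ rich
enough to describe our own reasoning; and in any case a working AI need not be
a fixed, truth-preserving proof system at all.

\end{document}

(This compiles with pdflatex; the only packages needed are amsmath and amssymb.)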




Recommended Reading

The Mind's I, Hofstadter and Dennett, 1981. - Library, 155.2. - A mind-bending collection of essays exploring the possibilities of Strong AI. If Strong AI was true, could you be immortal? Could you copy brains? - Far more fun than science fiction.

The Artificial Intelligence Debate, ed. Stephen Graubard, 1988. - Library, 006.3.GRA. - A fairer, but duller, round-up of all sides to the debate.

Symposium on Roger Penrose's Shadows of the Mind - Online. - A debate between Penrose and AI people. Also essential reading, if you're interested in Penrose, is the debate in Behavioral and Brain Sciences 13:643-705 (1990). This latter debate is the one that convinced me that Penrose was wrong.

Darwin's Dangerous Idea, Dennett, 1995. - Library 146.7. - The best case for Strong AI that I know of, embedding it in a biological world view. Dennett shows how Strong AI is simply a consequence of ordinary scientific materialism, and that any alternative to it had better fit evolutionary materialism as well as AI does.





