Dr. Mark Humphrys

School of Computing. Dublin City University.



Philosophy of AI

AI is related to "Philosophy of Mind".



Names

The philosophy of AI has a history of "big names", and the debates are fun to watch. Here are some of those names and my take on them. You don't have to agree with me: philosophy (in science) deals with ideas where we usually do not yet know enough to "prove" anyone right or wrong.


  1. Turing

  2. Dreyfus
    • His books What Computers Can't Do and What Computers Still Can't Do.
    • Summary: AI machines only look intelligent because they are programmed to output their meaningless tokens as English words. They have no idea what they are saying. He seems at times to say that AI is impossible.
    • I say: Fair criticism of much of AI. Doesn't apply to the new stuff. Dreyfus leads us to the Symbol-grounding problem (see the token-shuffling sketch after this list).

  3. Searle
    • His famous thought experiment, the Chinese Room.
    • Summary: AI is impossible. Instantiate your algorithm as a roomful of a billion people passing meaningless tokens around. You're telling me you can have a conversation in English with China, and yet not one of the people inside understands English?
    • I say: Yes.

  4. Gödel, Lucas, Penrose
    • Summary: AI is impossible because of Gödel's theorem on the limits of logical systems. There are propositions that we can see are true but that the AI's logical system cannot prove.
    • I say: Any working AI would not be a logical (truth-preserving) system in the first place. It would be stochastic, statistical, and so outside the reach of the theorem (see the sketch of the argument after this list). See My comments on Penrose.

  5. Penrose

  6. Edelman
    • His books Neural Darwinism, The Remembered Present and Bright Air, Brilliant Fire.
    • Summary: AI is the wrong way to do it. It should be done his way instead, modelled on the brain (selection among neural groups).
    • I say: Edelman is doing AI. (And not very well.) See My comments on Edelman.

  7. Rosen
    • His book Life Itself.
    • Summary: The machine metaphor is incorrect. He describes types of self-referential system that, he argues, cannot be implemented as a machine.
    • I say: They can be implemented as a machine (see the quine sketch after this list).
    • See the Church-Turing thesis.

  8. Brooks
    • Many papers, e.g. Intelligence without Reason.
    • Summary: Traditional AI is the wrong way to do it. We should do a new type of AI: behaviour-based, embodied robotics.
    • I say: I pretty much agree. Brooks' work is not the final answer, of course, but his analysis is excellent.
    • Other people disagree. See Today the earwig, tomorrow man?.

  9. Moravec
    • His book Mind Children.
    • Summary: AI is coming, and humans will go extinct. And that won't necessarily be bad. AIs will be our inheritors.
    • I say: I like a lot of Moravec, but I have doubts about this. First, who's to say we won't become AIs ourselves? Second, who's going to "mop up" the humans who don't co-operate with this "evolutionary inevitability"? Evolution is not in charge now. We are. And the only way humans will go extinct is by genocide.

  10. Warwick
    • Summary: AI is coming, and it's dangerous. And it's going to happen soon (in 10 to 50 years).
    • I say: AI is a lot harder than that. Nothing is happening soon.

  11. Weizenbaum
  12. Some leading proponents of AI over the years in these debates:
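
To make the Dreyfus (and Searle) point concrete, here is a minimal ELIZA-style sketch. It is my toy illustration, not code from anyone above: a program that "converses" in English by matching meaningless tokens against rote rules.

    import random

    # Canned token -> response rules. The program never "knows" what the
    # words mean; it only matches character strings.
    RULES = {
        "mother": "Tell me more about your family.",
        "sad": "Why do you think you feel sad?",
        "machine": "Do machines worry you?",
    }
    DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

    def reply(sentence):
        """Return a canned English response keyed on a token match."""
        for token, response in RULES.items():
            if token in sentence.lower():
                return response
        return random.choice(DEFAULTS)

    print(reply("My mother thinks machines are sad."))
    # Prints "Tell me more about your family." It looks thoughtful, but it
    # is a string lookup: no symbol here is grounded in anything.

The output can look intelligent for a few exchanges, which is exactly Dreyfus's complaint: the appearance of understanding with none present.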
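
For the Gödel, Lucas, Penrose item, here is the standard form of the argument as a formula. This is my summary of the textbook version, not text from the debate itself:

    % For any consistent formal system F strong enough for arithmetic,
    % Goedel constructs a sentence G_F that asserts its own unprovability
    % (the corner quotes, "the Goedel number of", need amssymb):
    \[ G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \]
    % If F is consistent, F cannot prove G_F, and hence G_F is true.
    % Lucas and Penrose: we can "see" that G_F is true, so the mind is not F.
    % The reply above: a stochastic, statistical AI is not such an F at all.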
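
For the Rosen item, a toy counter-illustration (mine, not Rosen's): a quine, a self-referential program whose output is exactly its own source code, running happily on an ordinary machine.

    # A two-line Python quine. (These comment lines are not part of it;
    # the two code lines below print themselves exactly.)
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Whether a quine captures the kind of self-reference Rosen had in mind is the contested point, of course; it shows only that self-reference as such is no barrier to machines.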



How about souls?

The question of what we are, and of what our core being is, has been discussed for thousands of years.
Most religions and most humans throughout history have believed in some "essence" or "soul" or "spirit" as the core of what we are.
What do scientists think?

  • Very few scientists propose such ideas. Most scientists are materialists: the core of what you are is matter, specifically your brain, nervous system and body.
  • One reason is the origin of life and the origin of humans: a materialist theory can explain our slow evolution from non-humans.
  • Another reason is that we already have a suspect for the seat of intelligence: the brain, which seems to be the most complex object in the universe. If it were a simple thing, more scientists would look for the seat of intelligence elsewhere.
  • There have been attempts to construct a scientific argument that the brain is not enough, but none is widely accepted in science.
  • Another issue is that any "soul" or "spirit" must have causal effects on your brain and body. So at what point does it interact? Should we expect to see uncaused causes in the brain, neurons firing with no physical cause? Is this what happens? Maybe, but no one has shown it yet.
  • John Eccles has attempted an argument for something beyond materialism.

Q. Could we ever prove this? Could AI and Cognitive Science prove that we are material?
A. It would be very hard to prove. If we invent AIs, they clearly have no souls. But that doesn't mean we don't.

  

Reading

  1. The Mind's I, Douglas R. Hofstadter and Daniel C. Dennett, 1981. DCU Library, 155.2.
    A mind-bending collection of essays exploring the possibilities of Strong AI. If Strong AI were true, could you be immortal? Could you copy brains?

  2. Darwin's Dangerous Idea, Daniel C. Dennett, 1995. DCU Library 146.7.
    Makes the case for Strong AI, embedding it firmly in a biological world view. Argues that Strong AI is just a consequence of ordinary scientific materialism, and that any alternative had better fit into evolutionary materialism as well as AI does.

  3. The Artificial Intelligence Debate, ed. Stephen Graubard, 1988. DCU Library, 006.3.GRA.

  4. Symposium on Roger Penrose's Shadows of the Mind, in Psyche, Volume 2 (1995-1996).
    A debate between Penrose and AI people.

  5. Debate on Penrose, Behavioral and Brain Sciences, Volume 13, Issue 4, pp. 643-705 (1990).
    More debates between Penrose and AI people.





