---
abstract: |-
Harnad's main argument can be roughly summarised as follows: due to Searle's
Chinese Room argument, symbol systems by themselves are insufficient to
exhibit cognition, because the symbols are not grounded in the real world and
hence are without meaning. However, a symbol system that is connected to the
real world through transducers receiving sensory data, with neural nets
translating these data into sensory categories, would not be subject to the
Chinese Room argument.
Harnad's article is not only the starting point for the present debate, but also a
contribution to a long-standing discussion of such questions as: Can a computer
think? If so, would this be solely by virtue of its program? Is the Turing Test
appropriate for deciding whether a computer thinks?
altloc:
- http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad93.symb.anal.net.html
- http://www.bib.ecs.soton.ac.uk/data/4138/html/index.html
chapter: ~
commentary: ~
commref: ~
confdates: ~
conference: ~
confloc: ~
contact_email: ~
creators_id: []
creators_name:
- family: Harnad
given: Stevan
honourific: ''
lineage: ''
date: 1993
date_type: published
datestamp: 2001-06-18
department: ~
dir: disk0/00/00/15/86
edit_lock_since: ~
edit_lock_until: ~
edit_lock_user: ~
editors_id: []
editors_name:
- family: Powers
given: D.M.W.
honourific: ''
lineage: ''
- family: Flach
given: P.A.
honourific: ''
lineage: ''
eprint_status: archive
eprintid: 1586
fileinfo: /style/images/fileicons/text_html.png;/1586/1/harnad93.symb.anal.net.html
full_text_status: public
importid: ~
institution: ~
isbn: ~
ispublished: pub
issn: ~
item_issues_comment: []
item_issues_count: 0
item_issues_description: []
item_issues_id: []
item_issues_reported_by: []
item_issues_resolved_by: []
item_issues_status: []
item_issues_timestamp: []
item_issues_type: []
keywords: "neural nets, symbol grounding, connectionism, symbolism, computationalism, Searle's Chinese Room, Turing Test, robotics"
lastmod: 2011-03-11 08:54:41
latitude: ~
longitude: ~
metadata_visibility: show
note: ~
number: 1
pagerange: 12-78
pubdom: FALSE
publication: Think
publisher: ~
refereed: TRUE
referencetext: |-
Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (in prep.) Learned Categorical Perception in Human Subjects: Implications
for Symbol Grounding.
Chomsky, N. (1980) Rules and representations. Behavioral and Brain Sciences 3: 1-61.
Dietrich, E. (1990) Computationalism. Social Epistemology 4: 135-154.
Dyer, M. G. (1990) Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2(4).
Fodor, J. & Pylyshyn, Z. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3-71.
Fodor, J. A. (1975) The Language of Thought. New York: Thomas Y. Crowell.
Hanson & Burr (1990) What connectionist models learn: Learning and Representation in connectionist networks. Behavioral and
Brain Sciences 13: 471-518.
Harnad, S. (1984) Verifying machines' minds. Contemporary Psychology 29: 389-391.
Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.
Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.
Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.
Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social
Epistemology 4: 167-172.
Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.
Harnad, S. (1990d) Symbols and Nets: Cooperation vs. Competition. Review of: S. Pinker and J. Mehler (Eds.) (1988) Connections and Symbols. Connection Science 2: 257-260.
Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1:
43-54.
Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.
Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (D. W. Powers & L. Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, FRG.
Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press).
Harnad, S., Hanson, S.J. & Lubin, J. (in prep.) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding.
Lawrence, D. H. (1950) Acquired distinctiveness of cues: II. Selective association in a constant stimulus situation. Journal of Experimental Psychology 40: 175-188.
Lubin, J., Hanson, S. & Harnad, S. (in prep.) Categorical Perception in ARTMAP Neural Networks.
McClelland, J. L., Rumelhart, D. E., and the PDP Research Group (1986) Parallel distributed processing: Explorations in the
microstructure of cognition, Volume 1. Cambridge MA: MIT/Bradford.
MacLennan, B. J. (1987) Technology independent design of neurocomputers: The universal field computer. In M. Caudill & C.
Butler (Eds.), Proceedings, IEEE First International Conference on Neural Networks (Vol. 3, pp. 39-49). New York, NY:
Institute of Electrical and Electronic Engineers.
MacLennan, B. J. (1988) Logic for the new AI. In J. H. Fetzer (Ed.), Aspects of Artificial Intelligence (pp. 163-192). Dordrecht:
Kluwer.
MacLennan, B. J. (in press-a) Continuous symbol systems: The logic of connectionism. In Daniel S. Levine and Manuel Aparicio
IV (Eds.), Neural Networks for Knowledge Representation and Inference. Hillsdale, NJ: Lawrence Erlbaum.
MacLennan, B. J. (in press-b) Characteristics of connectionist knowledge representation. Information Sciences, to appear.
Minsky, M. & Papert, S. (1969) Perceptrons: An introduction to computational geometry. Cambridge MA: MIT Press.
Newell, A. (1980) Physical Symbol Systems. Cognitive Science 4: 135-183.
Pinker, S & Prince, A. (1988) On language and connectionism: Analysis of a parallel distributed processing model of language
acquisition. Cognition 28(1-2): 73-193.
Pylyshyn, Z. W. (1984) Computation and cognition. Cambridge MA: Bradford Books.
Rosenblatt, F. (1962) Principles of neurodynamics. New York: Spartan.
Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.
Searle, J. (1990) Is the brain's mind a computer program? Scientific American 262: 26-31.
Touretzky, D. S. (ed.) (1991) Machine Learning, vol. 7, nos. 2 and 3, special double issue on "Connectionist Approaches to Language Learning."
Touretzky, D. S. (1990) BoltzCONS: Dynamic symbol structures in a connectionist network. Artificial Intelligence 46: 5-46.
Touretzky, D. S. and Hinton, G. E. (1988) A distributed connectionist production system. Cognitive Science 12(3): 423-466.
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and Machines, A. Anderson (ed.), Englewood Cliffs NJ: Prentice Hall.
relation_type: []
relation_uri: []
reportno: ~
rev_number: 8
series: ~
source: ~
status_changed: 2007-09-12 16:38:50
subjects:
- comp-sci-mach-dynam-sys
- neuro-mod
- phil-mind
succeeds: ~
suggestions: ~
sword_depositor: ~
sword_slug: ~
thesistype: ~
title: Grounding Symbols in the Analog World with Neural Nets
type: journalp
userid: 63
volume: 2