Farkas, Igor and Li, Ping (2001) A self-organizing neural network model of the acquisition of word meaning. [Conference Paper]
Full text available as: Postscript (81Kb)
Abstract
In this paper we present a self-organizing connectionist model of the acquisition of word meaning. Our model consists of two neural networks and builds on the basic concepts of Hebbian learning and self-organization. One network learns to approximate word transition probabilities, which are used for lexical representation, and the other network, a self-organizing map, is trained on these representations, projecting them onto a 2D space. The model relies on lexical co-occurrence information to represent word meanings in the lexicon. The results show that our model is able to acquire semantic representations from both artificial data and a real corpus of language use. In addition, the model demonstrates the ability to develop rather accurate word representations even with a sparse training set.
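As a rough illustration of the architecture described in the abstract, the sketch below (a hypothetical example, not the authors' implementation) builds word representations from word transition probabilities over a toy corpus and trains a small self-organizing map on them; the grid size, learning-rate and neighborhood schedules, and corpus are arbitrary assumptions.

```python
# Minimal sketch: transition-probability word vectors mapped onto a 2D SOM.
# All parameters and the toy corpus are illustrative assumptions.
import numpy as np

def transition_vectors(tokens, vocab):
    """Represent each word by the probabilities of the words that follow it."""
    idx = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[idx[prev], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)  # row i: P(next word | word i)

def train_som(data, rows=6, cols=6, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rows x cols self-organizing map with classic Kohonen updates."""
    rng = np.random.default_rng(seed)
    weights = rng.random((rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in rng.permutation(data):
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(dists.argmin(), dists.shape)  # best-matching unit
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2)
                       / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Toy usage: project words from a tiny corpus onto the 2D grid.
corpus = "the dog chased the cat the cat chased the mouse".split()
vocab = sorted(set(corpus))
vectors = transition_vectors(corpus, vocab)
som = train_som(vectors)
for w, v in zip(vocab, vectors):
    d = np.linalg.norm(som - v, axis=2)
    print(w, "->", np.unravel_index(d.argmin(), d.shape))
```

Words that tend to be followed by similar words end up on nearby grid units, which is the kind of 2D semantic organization the model is meant to exhibit.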
| Item Type: | Conference Paper |
|---|---|
| Keywords: | word meaning, acquisition, self-organizing neural net, word co-occurrences |
| Subjects: | Computer Science > Language; Computer Science > Neural Nets; Linguistics > Semantics; Psychology > Psycholinguistics |
| ID Code: | 1914 |
| Deposited By: | Farkas, Igor |
| Deposited On: | 23 Nov 2001 |
| Last Modified: | 11 Mar 2011 08:54 |