Cogprints: no conditions; results ordered by date, then title. Retrieved 2018-01-17.

Automatic Extraction of Protein Interaction in Literature
http://cogprints.org/id/eprint/9744 (deposited 2014-08-24)
Protein-protein interaction extraction is a key precondition for constructing protein knowledge networks, and it is very important for biomedical research. This paper extracted directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction, and that features related to the interaction word are effective for judging the interaction direction. Finally, we analyzed the effects of the different features and planned the next steps.
Authors: Mr. Peilei Liu <lpl1520@163.com>, Professor Ting Wang <tingwang1970@163.com>

The International Conference on Information and Communication Systems (ICICS 2011)
http://cogprints.org/id/eprint/7373 (deposited 2011-09-17)
The International Conference on Information and Communication Systems (ICICS 2011) is a forum for scientists, engineers, and practitioners to present their latest research results, ideas, developments, and applications in all areas of Computer and Information Sciences.
Authors: Mr Mustafa Rdaideh <myradaideh@just.edu.jo>

Generalization of Extended Baum-Welch Parameter Estimation for Discriminative Training and Decoding
http://cogprints.org/id/eprint/6038 (deposited 2008-04-30)
We demonstrate the generalizability of the Extended Baum-Welch (EBW) algorithm not only for HMM parameter estimation but for decoding as well.
We show that there can exist a general function associated with the objective function under EBW that reduces to the well-known auxiliary function used in the Baum-Welch algorithm for maximum likelihood estimates.
We generalize the representation of the model-parameter updates by making use of a differentiable function (such as an arithmetic or geometric mean) of the updated and current model parameters, and describe its effect on the learning rate during HMM parameter estimation. Improvements on speech recognition tasks are also presented.
Authors: Dr Dimitri Kanevsky <kanevsky@us.ibm.com>, Dr Tara Sainath <tsainath@MIT.EDU>, Dr Bhuvana Ramabhadran <bhuvana@us.ibm.com>, Dr David Nahamoo <nahamoo@us.ibm.com>

A New Family of Extended Baum-Welch Update Rules
http://cogprints.org/id/eprint/6037 (deposited 2008-04-30)
In this paper, we consider a generalization of the state-of-the-art discriminative method for optimizing the conditional likelihood in Hidden Markov Models (HMMs): the Extended Baum-Welch (EBW) algorithm, which has had significant impact on the speech recognition community. We propose a generalized form of the EBW update rules that can be associated with a weighted sum of the updated and initial models, and demonstrate that the novel update rules can significantly speed up parameter estimation for Gaussian mixtures.
Authors: Dr Dimitri Kanevsky <kanevsky@us.ibm.com>, Dr Daniel Povey <dpovey@us.ibm.com>, Dr Bhuvana Ramabhadran <bhuvana@us.ibm.com>, Dr Irina Rish <rish@us.ibm.com>, Dr Tara Sainath <tsainath@MIT.EDU>

Adapted Extended Baum-Welch Transformations
http://cogprints.org/id/eprint/5902 (deposited 2008-01-15)
The discrimination technique for estimating parameters of Gaussian mixtures that is based on the Extended Baum-Welch transformations (EBW) has had significant impact on the speech recognition community.
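For reference, the classic EBW mean update that these families generalize can be sketched as follows (a minimal illustration with invented toy statistics; the symbols follow the standard textbook formulation, not this paper's notation):

```python
def ebw_mean_update(num_stat, den_stat, num_occ, den_occ, mu, D):
    """Extended Baum-Welch update for a Gaussian mean.

    num_stat/den_stat: first-order statistics (sum of gamma * x) collected
    from the numerator and denominator of the discriminative objective.
    num_occ/den_occ:   corresponding occupancy counts (sum of gamma).
    mu: current mean.  D: damping constant controlling the learning rate.
    """
    return (num_stat - den_stat + D * mu) / (num_occ - den_occ + D)

# Toy statistics: the numerator pulls the mean up, the denominator down.
mu = 1.0
small_step = ebw_mean_update(num_stat=20.0, den_stat=5.0, num_occ=10.0,
                             den_occ=10.0, mu=mu, D=100.0)
large_step = ebw_mean_update(num_stat=20.0, den_stat=5.0, num_occ=10.0,
                             den_occ=10.0, mu=mu, D=10.0)
# A larger D keeps the update closer to the current mean (slower learning).
assert abs(small_step - mu) < abs(large_step - mu)
```

Varying D interpolates between the new statistics and the current model, which is the weighted sum of updated and initial models that the generalized rules build on.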
In this paper we introduce a general definition of a family of EBW transformations that can be associated with a weighted sum of updated and initial models. We compute a gradient steepness measurement for a family of EBW transformations applied to functions of Gaussian mixtures and demonstrate the growth property of these transformations. We consider EBW transformations of discriminative functions in which the EBW control parameters are adapted to the gradient steepness measurement or to the likelihood of the data given the model. We present experimental results showing that adapted EBW transformations can significantly speed up the estimation of Gaussian mixture parameters and give better decoding results.
Authors: Dr Dimitri Kanevsky <kanevsky@us.ibm.com>, Dr Daniel Povey <dpovey@us.ibm.com>, Dr Bhuvana Ramabhadran <bhuvana@us.ibm.com>, Dr Tara Sainath <tsainath@MIT.EDU>

A Note on Ontology and Ordinary Language
http://cogprints.org/id/eprint/5544 (deposited 2007-05-19)
We argue for a compositional semantics grounded in a strongly typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.
Authors: Walid Saba

A Platform for Education in ‘Interaction Design for Adaptive Robots’
http://cogprints.org/id/eprint/4994 (deposited 2006-07-23)
This paper introduces an educational software platform for a small teddy-bear-like robot, RobotPHONE. Utilizing the back-drivability of the robot's three joints (six DOFs in total), the platform enables the robot to learn correspondences between gestures (posed by a human teacher) and voice (given by a human). Because motion and voice can be symmetrically produced and recognized, the robot would be an ideal tool for research and education in learning and development.
Authors: Oka Natsuki, Ozaka Mitsuyoshi

The Self-Organization of Speech Sounds
http://cogprints.org/id/eprint/4107 (deposited 2005-02-22)
The speech code is a vehicle of language: it defines
a set of forms used by a community to carry information. Such a code is necessary to support the linguistic interactions that allow humans to communicate. How, then, may a speech code be formed prior to the existence of linguistic interactions? Moreover, the human speech code is discrete and compositional, shared by all the individuals of a community but different across communities, and phoneme inventories are characterized by statistical regularities. How can a speech code with these properties form? We approach these questions in this paper using the "methodology of the artificial". We build a society of artificial agents and detail a mechanism that shows the formation of a discrete speech code without presupposing the existence of linguistic capacities or of coordinated interactions. The mechanism is based on a low-level model of sensory-motor interactions. We show that the integration of certain very simple and non-language-specific neural devices leads to the formation of a speech code with properties similar to the human speech code. This result relies on the self-organizing properties of a generic coupling between perception and production within agents, and on the interactions between agents. The artificial system helps us to develop better intuitions about how speech might have appeared, by showing how self-organization might have helped natural selection to find speech.
Authors: Pierre-Yves Oudeyer

Aspects of Cognitive Poetics
http://cogprints.org/id/eprint/3239 (deposited 2003-10-24)
This paper is a short introduction to Cognitive Poetics. Cognitive poetics as I conceive of it is a far cry from what goes nowadays under the label "cognitive linguistics". Cognitive linguistics does not ask the questions this paper asks; consequently it does not answer them. In an important respect, the two approaches are even diametrically opposed. Cognitive linguistics shows very successfully how a wide range of quite different metaphors can be reduced to the same underlying conceptual metaphor, whereas cognitive poetics makes significant distinctions between very similar metaphors, claiming that these differences make poetic expression unique. It accounts for the perceived effects of poetic texts, and relates perceived effects to poetic texts in a principled manner. What is more, cognitive poetics has a lot to say about thematic, semantic, and syntactic structures, the reader's cognitive style preferring one or another "mental performance", rhyme patterns, and their interaction in generating the perceived effects. New Criticism, Structuralism and Formalism treated these effects, sometimes quite brilliantly, in a pre-theoretical manner. Cognitive poetics is devised to handle them in a principled manner. Finally, cognitive poetics conceives of the semantic and the rhythmic structure of a poem by a homogeneous set of principles. In both respects it allows for alternative (mental or vocal) performances, and handles the conflicting terms of a metaphor as well as the conflicting patterns of poetic rhythm in conformity with the aesthetic principle of an "elegant solution to a problem": the conflicting terms of a metaphor are accommodated in a semantic interpretation; the conflicting patterns of poetic rhythm in a rhythmical performance.
Above all, both semantic and rhythmic structures are shaped and constrained by cognitive processes. Cognitive Linguistics, by contrast, offers no tools for handling poetic rhythm, and objects to the Controversion Theory of metaphor.
Authors: Reuven Tsur

A Constructive Model of Mother-Infant Interaction towards Infant’s Vowel Articulation
http://cogprints.org/id/eprint/3341 (deposited 2004-02-12)
Human infants seem to develop so as to acquire the phonemes common with adults, without the capability to articulate or any explicit knowledge. To understand such unrevealed human cognitive development, building a robot that reproduces such a developmental process seems effective; it will also contribute to a design principle for robots that can communicate with human beings. Based on implications from behavioral studies, this paper hypothesizes that the caregiver's parrotry to the coo of the robot plays an important role in the phoneme acquisition process, and proposes a constructive model for it. We validate the proposed model by examining whether a real robot can acquire Japanese vowels through interactions with its caregiver.
Authors: Yuichiro Yoshikawa, Junpei Koga, Minoru Asada, Koh Hosoda

Speech Development by Imitation
http://cogprints.org/id/eprint/3328 (deposited 2004-02-12)
The Double Cone Model (DCM) is a model
of how the brain transforms sensory input to motor commands through successive stages of data compression and expansion. We have tested a subset of the DCM on speech recognition, production and imitation. The experiments show that the DCM is a good candidate for an artificial speech processing system that can develop autonomously. We show that the DCM can learn a repertoire of speech sounds by listening to speech input. It is also able to link the individual elements of speech into sequences that can be recognized or reproduced, thus allowing the system to imitate spoken language.
Authors: Bjorn Breidegard, Christian Balkenius

Phonetic Cues and Dramatic Function: Artistic Recitation of Metered Speech
http://cogprints.org/id/eprint/3235 (deposited 2003-10-18)
This article attempts a brief synthesis of two of my research areas, sound symbolism and poetic rhythm, focussed on Simon Russell Beale's performance of Gloucester's first soliloquy in Richard III. It explores three structural relationships between phonetic cues and their effects: redundancy (when several phonetic cues combine to the same effect); conflicting cues (which serve to convey conflicting prosodic effects by the same stretch of speech); and overdetermination (when one phonetic cue serves to convey a variety of unrelated -- e.g., phonological, rhythmical and expressive -- effects). Iván Fónagy speaks of dual coding of phonetic cues: the same cues convey phonological and emotive information. This article proposes "triple coding": the same cues convey phonological, emotive and rhythmic information.
The expanded version concerns two instances of stress maxima in weak positions in Gloucester's soliloquy, performed by an outstanding British actor. One of them is of the least performable kind, and this is so far my only chance of studying it. The expansion also attempts a methodological innovation: the audio version of the Merriam-Webster Dictionary offers recordings of the entries by highly trained speakers, to which the artistic reading can be compared. It may serve as a standard from which the artistic recital deviates. But this suggested to me an additional, completely unexpected possibility as well. When Cleanth Brooks speaks of irony, he means "the kind of qualification which the various elements in a context receive from the context". I suddenly realised that this allowed me to explore the kind of qualification which certain intonation contours receive from the context.
Authors: Reuven Tsur

Behavior-Based Early Language Development on a Humanoid Robot
http://cogprints.org/id/eprint/2528 (deposited 2003-10-04)
We are exploring the idea that early language acquisition could be better modelled on an artificial creature by considering the pragmatic aspect of natural language and of its development in human infants. We have implemented a system of vocal behaviors on Kismet in which "words" or concepts are behaviors in a competitive hierarchy. This paper reports on the framework, the vocal system's architecture and algorithms, and some preliminary results from vocal label learning and concept formation.
Authors: Paulina Varshavskaya

http://cogprints.org/id/eprint/2658 (deposited 2003-03-12)
Phonemic Coding Might Result From
Sensory-Motor Coupling Dynamics
Human sound systems are invariably phonemically coded, and phoneme inventories follow very particular tendencies. Three kinds of approaches have so far been proposed to explain these phenomena: "Chomskyan"/cognitive innatism, morpho-perceptual innatism, and the more recent approach of language as a complex cultural system which adapts under the pressure of efficient communication. The first two approaches are clearly not satisfying, while the third, even if much more convincing, makes a lot of speculative assumptions and has not really answered the question of phonemic coding. We propose here a new hypothesis based on a low-level model of sensory-motor interactions. We show that certain very simple and non-language-specific neural devices allow a population of agents to build signalling systems without any functional pressure. Moreover, these systems are phonemically coded. Using a realistic vowel articulatory synthesizer, we show that the resulting vowel inventories have striking similarities with human vowel systems.
Authors: Pierre-Yves Oudeyer

Revisiting the Status of Speech Rhythm
http://cogprints.org/id/eprint/2992 (deposited 2003-06-02)
Text-to-speech synthesis offers an interesting way of synthesising various knowledge components related to speech production. To a certain extent, it provides a new way of testing the coherence of our understanding of speech production in a highly systematic manner. For example, speech rhythm and the temporal organisation of speech have to be well captured in order to mimic a speaker correctly.
The simulation approach used in our laboratory for two languages supports our original hypothesis of multidimensionality and non-linearity in the production of speech rhythm. This paper presents an overview of our approach to this issue as it has developed over the last few years.
We conceive of the production of speech rhythm as a multidimensional task, with the temporal organisation of speech (i.e., the establishment of temporal boundaries and durations) as a key component of this task. As a result of this multidimensionality, text-to-speech systems have to accommodate a number of systematic transformations and computations at various levels. Our model of the temporal organisation of read speech in French and German emerges from a combination of quantitative and qualitative parameters, organised according to psycholinguistic and linguistic structures. (An ideal speech synthesiser would also take into account subphonemic as well as pragmatic parameters; however, such systems are not yet available.)
Authors: Dr. Brigitte Zellner Keller

Onomatopoeia: Cuckoo-Language and Tick-Tocking
http://cogprints.org/id/eprint/3232 (deposited 2003-10-18)
This paper is a brief phonetic investigation of the nature of onomatopoeia. Onomatopoeia is the imitation of natural noises by speech sounds. To understand this phenomenon, we must realize that there is a problem here which is by no means trivial. There is an infinite number of noises in nature, but only twenty-something letters in an alphabet that convey in any language a closed system of about fifty (up to a maximum of 100) speech sounds. I have devoted a book-length study to the expressiveness of language (What Makes Sound Patterns Expressive? -- The Poetic Mode of Speech Perception), but have only fleetingly touched upon onomatopoeia. In this paper I will recapitulate from that book the issue of acoustic coding, and then toy around with two specific cases: why does the cuckoo say "kuku" in some languages, and why does the clock prefer to say "tick-tock" rather than, say, tip-top? Only fleetingly will I touch upon the question why the speech sounds [s] and [S] (S represents the initial consonant of shoe; s the initial consonant of sue) serve generally as onomatopoeia for noise.
By way of doing all this, I will discuss a higher-order issue as well: how effects are translated from reality to some semiotic system, or from one semiotic system to another.
Authors: Reuven Tsur

Book Review -- Ronald Cole (editor-in-chief), Joseph Mariani, Hans Uszkoreit, Annie Zaenen, and Victor Zue, eds., Survey of the State of the Art in Human Language Technology
http://cogprints.org/id/eprint/219 (deposited 1999-08-20)
This is a review of Survey of the State of the Art in Human Language Technology, edited by Ronald Cole (editor-in-chief), Joseph Mariani, Hans Uszkoreit, Annie Zaenen, and Victor Zue, published by Cambridge University Press in 1997.
Authors: Varol Akman

Dynamical recurrent neural networks towards prediction and modeling of dynamical systems
http://cogprints.org/id/eprint/547 (deposited 1999-10-06)
This paper addresses the use of Dynamical Recurrent Neural Networks (DRNN) for time series prediction and the modeling of small dynamical systems. Since the recurrent synapses are represented by Finite Impulse Response (FIR) filters, DRNN are state-based connectionist models in which all hidden units act as state variables of a dynamical system. The model is trained with Temporal Recurrent Backpropagation (TRBP), an efficient backward recurrent training procedure with minimal computational burden, which benefits from the exponential decay of the gradient backwards in time. The gradient decay is first illustrated in intensive experiments on the standard sunspot series. The ability of the model to internally encode useful information about the underlying process is then illustrated by several experiments on well-known chaotic processes.
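The FIR-synapse idea can be sketched as follows (a schematic single hidden unit, not the authors' implementation; the filter length and weights are invented):

```python
import math

def drnn_unit_step(history, x, w_in, w_rec):
    """One step of a hidden unit whose recurrent synapse is a FIR filter:
    the new state is a squashed sum of the input and the last len(w_rec)
    states, weighted by the filter taps (most recent state first)."""
    recurrent = sum(w * h for w, h in zip(w_rec, reversed(history)))
    return math.tanh(w_in * x + recurrent)

# Drive the unit with a short input sequence; the FIR taps give it a
# finite window of internal memory without external delay lines.
w_in, w_rec = 0.8, [0.5, -0.3, 0.1]          # 3-tap recurrent filter
history = [0.0] * len(w_rec)
outputs = []
for x in [1.0, 0.5, -1.0, 0.0, 0.25]:
    y = drnn_unit_step(history, x, w_in, w_rec)
    outputs.append(y)
    history = history[1:] + [y]              # slide the state window

assert all(-1.0 < y < 1.0 for y in outputs)  # tanh keeps the state bounded
```

Because every hidden unit carries such a state window, the whole network behaves as a system of nonlinear difference equations, which is what makes the forecasts history-sensitive.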
Parsimonious DRNN models are able to find an appropriate internal representation of various chaotic processes from the observation of a subset of the state variables.
Authors: A. Aussem

Language identification with suprasegmental cues: A study based on speech resynthesis
http://cogprints.org/id/eprint/801 (deposited 1999-03-18)
This paper proposes a new experimental paradigm to explore the discriminability of languages, a question which is crucial to the child born in a bilingual environment. This paradigm employs the speech resynthesis technique, enabling the experimenter to preserve or degrade acoustic cues such as phonotactics, syllabic rhythm or intonation from natural utterances. English and Japanese sentences were resynthesized, preserving broad phonotactics, rhythm and intonation (Condition 1), rhythm and intonation (Condition 2), intonation only (Condition 3), or rhythm only (Condition 4). The findings support the notion that syllabic rhythm is a necessary and sufficient cue for French adult subjects to discriminate English from Japanese sentences. The results are consistent with previous research using low-pass filtered speech, as well as with phonological theories predicting rhythmic differences between languages. Thus, the new methodology proposed appears to be well suited to the study of language discrimination.
Applications to other domains of psycholinguistic research and to automatic language identification are considered.
Authors: Franck Ramus, Jacques Mehler

A system design for human factors studies of speech-enabled Web browsing
http://cogprints.org/id/eprint/2135 (deposited 2002-03-12)
This paper describes the design of a system which will subsequently be used as the basis of a range of empirical studies aimed at discovering how best to harness speech recognition capabilities in multimodal multimedia computing. Initial work focuses on speech-enabled browsing of the World Wide Web, which was never designed for such use. System design is complete, and is being evaluated via usability testing.
Authors: L. J. Adams, S. Damper, Stevan Harnad, W. Hall

Temporal structures for Fast and Slow Speech Rate
http://cogprints.org/id/eprint/894 (deposited 2000-07-24)
The rhythmic component in speech synthesis often remains rather rudimentary, despite recent major efforts in prosodic modeling. The European COST Action 258 has identified this problem as one of the next challenges for speech synthesis. This paper is a contribution to a new, promising approach that was tested on a French temporal model.
Authors: Brigitte Zellner

Combining Neural Network Forecasts on Wavelet-Transformed Time Series
http://cogprints.org/id/eprint/551 (deposited 1999-10-08)
We discuss a simple strategy aimed at improving neural network prediction accuracy, based on the combination of predictions at varying resolution levels of the domain under investigation (here: time series). First, a wavelet transform is used to decompose the time series into varying scales of temporal resolution.
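This decomposition step can be illustrated with a single-level Haar transform (chosen here for brevity; the abstract does not commit to a particular wavelet):

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: split a series of even
    length into a coarse approximation and a detail (difference) scale."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Recombine the per-scale components into the original series."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / s, (a - d) / s])
    return x

series = [1.0, 3.0, 2.0, 0.0, 5.0, 4.0, 1.0, 1.0]
approx, detail = haar_step(series)
recon = haar_inverse(approx, detail)
# The transform is invertible, so per-scale forecasts can be recombined.
assert all(abs(a - b) < 1e-9 for a, b in zip(series, recon))
```

In the strategy described here, each scale would then be forecast separately (by a DRNN) and the per-scale forecasts recombined with the inverse transform.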
The latter provide a sensible decomposition of the data, so that the underlying temporal structures of the original time series become more tractable. Then, a Dynamical Recurrent Neural Network (DRNN) is trained on each resolution scale with the temporal-recurrent backpropagation (TRBP) algorithm. By virtue of its internal dynamic, this general class of dynamic connectionist network approximates the underlying law governing each resolution level by a system of nonlinear difference equations. The individual wavelet scale forecasts are afterwards recombined to form the current estimate. The predictive ability of this strategy is assessed on the sunspot series.
Authors: Alex Aussem, Fionn Murtagh

Phonemes and Syllables in Speech Perception: Size of the attentional focus in French
http://cogprints.org/id/eprint/751 (deposited 1998-10-19)
A study by Pitt and Samuel (1990) found that English speakers could narrowly focus attention onto a precise phonemic position inside spoken words [1]. This led the authors to argue that the phoneme, rather than the syllable, is the primary unit of speech perception. Other evidence, obtained with a syllable detection paradigm, has been put forward to propose that the syllable is the unit of perception; yet, these experiments were run with French speakers [2]. In the present study, we adapted Pitt and Samuel's phoneme detection experiment to French and found that French subjects behave exactly like English subjects: they too can focus attention on a precise phoneme.
To explain both this result and the established sensitivity to syllabic structure, we propose that the perceptual system automatically parses the speech signal into a syllabically structured phonological representation.
Authors: Christophe Pallier

The Psychophysics of Synthetic Categorical Perception
http://cogprints.org/id/eprint/586 (deposited 1997-12-05)
Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. A major development was Macmillan et al.'s application in 1977 of signal detection theory to analyze several experimental paradigms, in particular explicating the relation between the psychometric labeling function and discrimination measures. Simultaneously, Anderson et al. proposed a neural model for what we will term synthetic CP, yet this line of research has been less well explored. In this paper, we assess neural-network models of CP with particular reference to their ability to predict the psychophysical behavior of real observers -- including the relation between labeling and discrimination. Synthetic categorization of a variety of stimuli, including speech sounds and artificial/novel dimensions, is reviewed and discussed in terms of both classical theories of CP and more recent developments. A variety of neural mechanisms is capable of replicating the essentials of categorical perception, indicating that CP is not a special mode of perception but an emergent property of any sufficiently powerful general learning system.
However, the most convincing replication is from a simulation whose output is continuous rather than discrete.
Authors: R. I. Damper, Stevan Harnad

Dynamical Recurrent Neural Networks: Towards Environmental Time Series Prediction
http://cogprints.org/id/eprint/548 (deposited 1999-10-06)
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamic, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal-recurrent backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model on meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided by the model will allow the modern telescopes to be preset, a few hours in advance, in the most suitable instrumental mode. In this perspective, the model is first appraised on precipitation measurements against traditional nonlinear AR and ARMA techniques using feedforward networks. Then we tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data.
Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed "nowcasting" (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method.
Authors: A. Aussem, F. Murtagh, M. Sarazin

Ultrametric Distance in Syntax
http://cogprints.org/id/eprint/225 (deposited 2000-07-04)
Phrase structure trees have a hierarchical structure. In many subjects, most notably in taxonomy, such tree structures have been studied using ultrametrics. Here syntactical hierarchical phrase trees are subjected to a similar analysis, which is much simpler as the branching structure is more readily discernible and switched. The occurrence of hierarchical structure elsewhere in linguistics is mentioned. The phrase tree can be represented by a matrix, and the elements of the matrix can be represented by triangles. The height at which branching occurs is not prescribed in previous syntactic models, but it is by using the ultrametric matrix. In other words, the ultrametric approach gives a complete description of phrase trees, unlike previous approaches. The ambiguity of which branching height to choose is resolved by postulating that branching occurs at the lowest height available. An ultrametric produces a measure of the complexity of sentences: presumably the complexity of sentences increases as a language is acquired, so this can be tested. All ultrametric triangles are equilateral or isosceles; here it is shown that X-bar structure implies that there are no equilateral triangles. Restricting attention to simple syntax, a minimum ultrametric distance between lexical categories is calculated. This ultrametric distance is shown to be different from the matrix obtained from features.
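The central construction, in which the distance between two leaves is the height of their lowest common ancestor, can be sketched on a toy phrase tree (the tree and the heights are invented for illustration):

```python
# Ultrametric distance on a toy phrase tree: the distance between two
# leaves is the height at which their paths to the root first diverge.
paths = {                      # leaf -> path of nodes from root to leaf
    "D": ["S", "NP", "D"],
    "N": ["S", "NP", "N"],
    "V": ["S", "VP", "V"],
}
height = {"S": 3, "NP": 2, "VP": 2, "D": 0, "N": 0, "V": 0}

def ultrametric(a, b):
    """Height of the lowest common ancestor of leaves a and b."""
    lca = None
    for u, v in zip(paths[a], paths[b]):
        if u != v:
            break
        lca = u
    return height[lca]

d = ultrametric
leaves = ["D", "N", "V"]
# Strong triangle inequality: d(x, z) <= max(d(x, y), d(y, z)).
for x in leaves:
    for y in leaves:
        for z in leaves:
            assert d(x, z) <= max(d(x, y), d(y, z))
# The triangle D-N-V is isosceles: its two largest sides are equal.
assert d("D", "V") == d("N", "V") == 3 and d("D", "N") == 2
```

With branching heights fixed (here, at the lowest available height), the matrix of these pairwise distances fully describes the tree, which is the sense in which the ultrametric description is complete.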
It is shown that the definition of c-command can be replaced by an equivalent ultrametric definition. The new definition invokes a minimum distance between nodes, and this is more aesthetically satisfying than previous varieties of definitions. From the new definition of c-command follows a new definition of government.
Authors: Mark D. Roberts

Ultrametric Distance in Syntax
http://cogprints.org/id/eprint/5434 (deposited 2007-03-06)
(Abstract identical to eprint/225 above.)
Authors: Mark D. Roberts

Feature Selection with Exception Handling -- An Example from Phonology
http://cogprints.org/id/eprint/8084 (deposited 2012-11-09)
The goal in this paper is to show how the classification of
phonetic features to phonemes can be acquired. This classification process is modeled by a supervised feature selection method based on adaptive distance measures. Exception handling is incorporated into the learned distance function by pointwise additions of Boolean functions for individual pattern combinations. An important result is the differentiation of rules and exceptions during learning.
Authors: Gabriele Scheler <gscheler@gmail.com>
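A schematic reading of such a distance function is a weighted feature-mismatch count plus a pointwise exception term for specific pattern pairs (the feature vectors, weights and exception below are invented for illustration; this is not the paper's learned model):

```python
# Schematic: a weighted feature distance with pointwise exceptions.
features = {                     # phoneme -> (voiced, nasal, labial)
    "p": (0, 0, 1),
    "b": (1, 0, 1),
    "m": (1, 1, 1),
}
weights = [2.0, 1.0, 0.5]        # assumed per-feature relevance ("rules")
exceptions = {("b", "m"): -1.0}  # pointwise correction for one pattern pair

def distance(x, y):
    """Weighted mismatch count plus any exception term for this pair."""
    base = sum(w for w, fx, fy in zip(weights, features[x], features[y])
               if fx != fy)
    key = tuple(sorted((x, y)))  # exceptions are symmetric in x and y
    return base + exceptions.get(key, 0.0)

assert distance("p", "b") == 2.0   # differ only in voicing
assert distance("b", "m") == 0.0   # nasal mismatch cancelled by exception
```

The separation the paper describes then falls out of the representation: the weights encode the general rules, while the exception table holds the individual pattern combinations that the rules get wrong.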