Intrator, Nathan and Edelman, Shimon (1996) How to Make a Low-Dimensional Representation Suitable for Diverse Tasks. [Preprint]
Full text available as: Postscript (1986 Kb)
Abstract
We consider training classifiers for multiple tasks as a method for improving generalization and obtaining a better low-dimensional representation. To that end, we introduce a hybrid training methodology for MLP networks; the utility of the hidden-unit representation is assessed by embedding it into a 2D space using multidimensional scaling. The proposed methodology is tested on a highly nonlinear image classification task.
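The combination described in the abstract, one MLP whose shared hidden layer serves several tasks, with the hidden-unit representation inspected via multidimensional scaling, can be illustrated with a small sketch. The code below is not the paper's hybrid training methodology; it only shows the generic pattern under assumed choices: a toy dataset, a single shared hidden layer trained jointly on two binary tasks, and metric MDS applied to the hidden activations. All data, layer sizes, and thresholds are invented for illustration.

```python
# Illustrative sketch only (assumed setup, not the paper's method):
# train one MLP with a shared hidden layer on two tasks jointly,
# then embed the hidden-unit activations in 2D with metric MDS.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)

# Toy data: 200 "images" flattened to 64 features, each with two binary task labels.
X = rng.normal(size=(200, 64))
y_task1 = (X[:, :32].sum(axis=1) > 0).astype(int)            # task 1: linear in first half
y_task2 = (np.abs(X[:, 32:]).sum(axis=1) > 25).astype(int)    # task 2: nonlinear in second half
Y = np.column_stack([y_task1, y_task2])                       # multi-task (multilabel) targets

# One MLP with a small shared hidden layer trained on both tasks at once.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, Y)

# Recover the shared hidden-unit activations (MLPClassifier's default activation is relu).
H = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Embed the hidden representation into 2D with metric MDS on Euclidean distances.
D = pairwise_distances(H)
emb = MDS(n_components=2, dissimilarity='precomputed', random_state=0).fit_transform(D)
print(emb.shape)  # (200, 2): a low-dimensional layout of the hidden representation
```

The resulting 2D layout can then be inspected visually, which is the role multidimensional scaling plays in the abstract: assessing whether the shared hidden representation organizes the inputs in a way that is useful across tasks.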
| Item Type: | Preprint |
|---|---|
| Subjects: | Psychology > Cognitive Psychology |
| ID Code: | 571 |
| Deposited By: | Edelman, Shimon |
| Deposited On: | 17 Nov 1997 |
| Last Modified: | 11 Mar 2011 08:54 |