Iacoboni M, Zaidel E. (1996) Hemispheric independence in word recognition: evidence from unilateral and bilateral presentations. Brain and Language, 53: 121-140

Hemispheric Independence in Word Recognition: Evidence from Unilateral and Bilateral Presentations

Marco Iacoboni & Eran Zaidel

Department of Psychology
University of California
Los Angeles, CA 90024-1563
USA

Address correspondence to :

Dr. Eran Zaidel
Department of Psychology,
University of California, Los Angeles
405 Hilgard Avenue
Los Angeles, CA 90024-1563
E-mail: Zaidel@psych.sscnet.ucla.edu
Fax: (310) 206-5895
Phone: (310) 825-4343

We thank Dr. Jan Rayman for help in experimental design, Steve Hunt for help with MacProbe, and Elicia David for research assistance. This work was supported by ARIN to M.I. and by an NIH grant NS 20187 and an NIMH RSA MH 00179 to E.Z.

Abstract

We compared behavioral laterality effects in a lexical decision task using cued unilateral or bilateral presentations of different stimuli to normal subjects. The goals were to determine the effects of lexical variables on word recognition in each hemisphere under conditions of maximal independence of information processing in the two hemispheres, and to assess the degree of residual interhemispheric effects that persist under those conditions. Bilateral presentations increased hemispheric independence in word recognition, indexed by a significant interaction of response hand with target visual field. Bilateral presentations also selectively impaired word decisions, suggesting that word processing benefits from interhemispheric interaction, whereas nonword processing is done independently in each hemisphere. Indeed, there was a significant congruity effect for word targets only, whereby the wordness of the unattended stimulus affected the speed of processing of attended word targets.

Word frequency and regularity affected both hemispheres equally, arguing against the hemispheric interpretation of the dual route model of word recognition. Length affected the processing of nonwords more than words and in the LVF more than in the RVF. Taken together, the data support the conclusion that each normal hemisphere can control word recognition independently of the other.

Introduction

Hemifield tachistoscopic presentations of unilateral targets for lexical decision by right-handed subjects usually yield a right visual hemifield advantage (RVFA) in accuracy and/or latency, which is taken to reflect left hemisphere (LH) specialization for linguistic information processing. However, the RVFA is ambiguous regarding right hemisphere (RH) competence for the task. A "callosal relay" model applies when the RH is unable to process the linguistic stimuli at all, so that information projected to the left visual field (LVF) must be relayed through the corpus callosum to the LH prior to linguistic processing. In that case, the RVFA reflects the slowing and degradation of information processing due to callosal relay. By contrast, a "direct access" model applies when the RH is able to process the linguistic information independently, although it may use different strategies and exhibit weaker competence than the LH. In this case, the RVFA reflects the differential processing strategies of the two normal hemispheres, one applying more effective, specialized strategies, the other applying less effective, more general-purpose strategies (Zaidel, 1983; Zaidel, Clarke & Suyenobu, 1990). Zaidel and his associates offered several behavioral criteria for direct access. A criterion for independent information processing strategies in the two hemispheres is an interaction between visual field (VF) of presentation and some independent stimulus variable (the "processing dissociation" criterion). A criterion for independent information processing resources is a response hand (h) x target visual field (VF) interaction showing faster and/or more accurate responses by the hand ipsilateral to the target visual field (Zaidel, 1983). As conceived here, "direct access" and "callosal relay" refer to two of many possible contrasting patterns of dynamic division of labor, which can change across and within tasks.

Hemifield tachistoscopic presentations of bilateral stimuli often produce larger and more reliable VF differences than unilateral stimuli for both verbal and nonverbal tasks, even when subjects are cued to respond to only one VF on each trial (Boles, 1983, 1987, 1990, 1994). The reason for this "bilateral effect" is still unclear. Boles systematically ruled out several possible reasons, finally concluding that bilateral presentations of similar stimuli activate homologous areas of the two hemispheres, disrupting communication between them. This model presupposes normal interhemispheric communication through homotopic callosal channels as well as sharing of some hemispheric resources, and it can be interpreted to mean that bilateral presentations increase direct access, i.e., independent hemispheric processing of lateralized input. Indeed, in a direct access task, where the VF asymmetries are due to different hemispheric computations, bilateral presentations cannot interfere with any interhemispheric communication, because none is required for the task. By contrast, in a callosal relay task, where VF asymmetries are due to the degradation of information relayed through the corpus callosum, bilateral presentations should, in principle, interfere with interhemispheric communication, sometimes even turning callosal relay processing or interhemispheric interaction into direct access processing (as long as the other hemisphere is competent), because the unattended stimuli automatically engage the contralateral hemisphere, decreasing its ability to participate in processing the attended stimuli in the other VF. Hence, bilateral presentations should yield greater VF differences than unilateral presentations for callosal relay tasks but similar VF differences for direct access tasks (Rayman & Zaidel, 1991). In this study, we compared lateralized lexical decision using unilateral and bilateral presentations. We also predicted that bilateral presentations would demonstrate greater direct access in the sense of resource independence, as evidenced by a reliable h x VF interaction. We expected both unilateral and bilateral presentations to yield hemispheric strategy independence, as evidenced by a Y x VF interaction, Y being an experimental variable, e.g., target wordness.

We then applied the bilateral paradigm to the analysis of several lexical variables (wordness, word frequency, grapheme-phoneme regularity and length) in an attempt to characterize the contributions of the two normal hemispheres to word recognition under conditions of maximum hemispheric independence. Lateralized lexical decision consistently shows an advantage for words over nonwords (the "lexicality effect"), and a wordness x VF interaction, often with a RVFA for words but no VFA for nonwords. All standard models of word recognition account naturally for the lexicality effect but none[1] has a natural account for the frequent failure of the lexicality effect to obtain in the LVF (Measso & Zaidel, 1990).

Chiarello (1988) proposed a multi-stage information processing model of lateralized lexical decision, including a pre-lexical stage involving sensory visual analysis, a subsequent encoding process, a lexical stage mediating access to the lexicon and retrieval of lexical information, and a post-lexical stage involving decision processes. Chiarello et al. (1988) found that changing response variables in lexical decision from Yes-No to Go-No Go and from manual to vocal did not affect the VF asymmetry in discriminating between words and nonwords (d'), but that it did change the subjects' response criteria (β), implicating a post-lexical process. By contrast, Measso & Zaidel (1990) found a differential effect of response programming on the sensitivity of word and nonword decisions in the two VFs, but not on response criteria (they manipulated response programming and observed an effect on accuracy of nonword decisions in the LVF only). They suggested that word and nonword decisions may be carried out by two separate and parallel processes, one of them (nonword) overlapping, and sharing resources with, response programming more than the other. Thus, Measso & Zaidel suggested that there exist four different parallel and independent processes: word and nonword decisions in the LH and in the RH. If this hypothesis is true, then the signal detection model, which assumes a comparison between a signal population (words or nonwords) and a noise population (nonwords or words, respectively) along a single monotonic scale, is inappropriate.
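For reference, the sensitivity (d') and criterion (β) indices mentioned above can be computed from hit and false alarm rates with the standard equal-variance Gaussian formulas (see, e.g., McNichol, 1972). The short Python sketch below is our own illustration, not part of the original analyses; the function name and the example rates are hypothetical.

from scipy.stats import norm

def dprime_beta(hit_rate, fa_rate):
    # z-transforms of the hit and false alarm rates (rates are assumed to be
    # corrected away from exactly 0 or 1 before this point)
    z_h = norm.ppf(hit_rate)
    z_f = norm.ppf(fa_rate)
    d_prime = z_h - z_f                      # sensitivity: d' = z(H) - z(FA)
    beta = norm.pdf(z_h) / norm.pdf(z_f)     # likelihood-ratio criterion
    return d_prime, beta

# hypothetical example: 90% hits, 30% false alarms
print(dprime_beta(0.90, 0.30))   # d' is about 1.81, beta about 0.50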

Given our conjecture that bilateral presentations increase direct access and assuming a greater lateralization of word than nonword processing, i.e. a more consistent RVFA for words than for nonwords, we predict that bilateral presentations (1) should affect word processing more than nonword processing, and (2) should affect word processing in the LVF more than in the RVF.

To estimate the extent of automatic interhemispheric interaction of the four independent and parallel processes for word and nonword decisions in the two hemispheres, even under conditions of maximum hemispheric independence, we analyzed the facilitation/interference effect of the lexical status of the unattended stimulus on the decision of the target. The canonical predictions are based on standard accounts of facilitation/interference effects where faster and more automatic processes (RVFA and lexicality effect) interfere with slower and less automatic ones. Thus, we predict (1) facilitation of decisions when the stimuli belong to the same lexical category and interference when they belong to opposite categories, relative to unilateral stimuli; (2) greater interference of word decoys with nonword targets than of nonword decoys with word targets in the RVF, but equal interference of word and nonword decoys with targets in the LVF; (3) greater facilitation effects for LVF than RVF targets. In contrast to the standard account, Lambert & Voot (1993) report interference in a semantic judgment task when the unattended stimulus in the LVF belongs to the same semantic category as a RVF target.

The analysis of the effect of lexical variables on word recognition in each hemisphere under maximum independence also addresses the hemispheric version of the dual route model of word processing. Psycholinguistic experiments with both normal subjects and patients with acquired dyslexias converge on the conclusion that there are two routes for reading a word aloud (Patterson & Morton, 1985), and potentially for lexical decision as well. The lexical route operates by accessing semantic and phonological information in the lexicon via an orthographic address and is associated with the (semantic) word frequency effect (more frequent words are accessed more efficiently). The nonlexical route operates by converting graphemes into phonemes and is associated with the (phonological) regularity effect (regular words are processed more efficiently than irregular words). The nonlexical route provides a phonological address which can then feed back as an auditory input for lexical decision. The nonlexical route is believed to be slower than the lexical route and to be particularly effective in processing low frequency words. A third lexical variable, length of the input string, is believed to reflect an early visual parsing process which is common to both routes.

Results from acquired dyslexia (Coltheart, 1983; Schweiger et al., 1989) and from split brain patients (Zaidel & Peters, 1981) suggest that both hemispheres have lexical routes but that only the LH has a nonlexical route. This predicts that with maximal direct access we will observe (1) a similar frequency effect in both visual fields, or even a greater frequency effect in the LVF, and (2) a greater or an exclusive regularity effect in the RVF, especially for low frequency words. From the hemispheric dual route model we would also expect (3) a similar length effect for words and for nonwords in both VFs. Supporting data already exist from unilateral presentations for similar frequency effects for lexical decision in the two VFs (Eviatar, Menn & Zaidel, 1990), and from bilateral presentations for a greater frequency effect in the LVF (Hines, 1977). There is some evidence for phonological effects in both VFs (Zaidel, 1989) but there are no published results on regularity. In conflict with the prediction of the model, there is also evidence that length effects are more likely to occur for nonwords than for words and in the LVF than in the RVF (for a brief review of the literature see Eviatar & Zaidel, 1991), suggesting that this variable taps a less efficient serial processing strategy rather than a universal visual parsing strategy. Further, length effects can occur both for words and for nonwords in either VF when resources are taxed (Eviatar & Zaidel, 1991). Since bilateral presentations tax resources, we expect stronger and more general length effects in bilateral than in unilateral presentations.

To summarize, this paper addresses three issues. First, to what extent do bilateral hemifield presentations increase independent hemispheric processing in the normal brain? Second, what interhemispheric interactions can still occur in the presence of hemispheric independence? Third, which lexical parameters affect independent word recognition in each hemisphere?

Methods

Subjects. Twenty-four undergraduate UCLA students participated in this experiment. All the subjects were strongly right-handed as determined by a handedness inventory, had no left-handed relatives, and had not spoken or understood any language except English until at least the age of six. All subjects reported normal or corrected-to-normal vision in both eyes and no history or evidence of neurological insult. The subjects received course credit for their participation.

Apparatus. Subjects were seated in a dimly lit room at a distance of 57.3 cm from the high resolution RGB color monitor of a Macintosh IIsi computer, with their chins in a chinrest, their eyes aligned with the fixation cross in the middle of the screen, and index and middle fingers poised on keys of the computer keyboard, which was placed symmetrically at the midline with the response keys roughly aligned vertically (g and b for the left hand, j and n for the right hand). A green label with w or n on it, indicating the word and nonword buttons respectively, was placed on each key. The Macintosh software package MacProbe was used to present the stimuli and to record responses.

Procedure. A fixation cross was displayed during the entire experiment. A warning tone sounded 750 msec before the presentation of the stimuli. Displays were horizontal lowercase letter strings, black on a gray background, presented for 120 msec. The innermost edge of a letter string appeared 1.5 degrees to the right and/or to the left of the fixation cross. The strings subtended from 1.5 to 3.0 degrees of visual angle. In half of the trials only one letter string was presented, in either the left or the right visual field (unilateral trials); in the other half two different letter strings of the same length appeared, one in each visual field, one as a target and the other as a distractor (bilateral trials). An arrow indicating the target was displayed simultaneously with the letter strings, with its inner edge at 0.6 and its outer edge at 0.9 degrees from the fixation cross, in both unilateral and bilateral trials. The subject's task was to decide whether the letter string indicated by the arrow was a word or not by pressing the key with the corresponding label.
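As a rough check on these display parameters (our own back-of-the-envelope computation, not part of the original report), note that at a viewing distance of 57.3 cm roughly one centimeter on the screen subtends one degree of visual angle; the sketch below converts the eccentricities and string extents given above into screen distances.

import math

VIEWING_DISTANCE_CM = 57.3

def size_cm_for_angle(angle_deg, distance_cm=VIEWING_DISTANCE_CM):
    # screen extent needed to subtend a given visual angle at a given distance
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

print(size_cm_for_angle(1.5))   # inner edge eccentricity: about 1.5 cm
print(size_cm_for_angle(3.0))   # longest strings: about 3.0 cm
print(size_cm_for_angle(0.6))   # arrow inner edge: about 0.6 cm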

The experiment was repeated three times for each subject, once in each response condition: left hand only (Lh), right hand only (Rh), and both hands simultaneously (Bh). The order of response hand conditions was counterbalanced across subjects. Each subject completed a practice session before each response hand condition. For each response hand condition each subject received 192 trials divided into three blocks of 64. Half of the subjects were instructed to use the index finger for words and the middle finger for nonwords, and the other half were instructed to use the index finger for nonwords and the middle finger for words.

Stimulus materials. Stimuli were 288 letter strings three, four, five, or six letters long; 144 were words and 144 were pronounceable, orthographically regular nonwords matched to the words for length. Frequency (high frequency words > 160 per million, low frequency words < 20 per million) and regularity were counterbalanced across the three-, four-, five-, and six-letter words. Almost all the stimuli came from the lexical lists composed by Seymour, Bunce, and Evans (1992). Only a few stimuli were changed because of differences in meaning between American and British English. The list of words and nonwords used in the experiment appears in Appendix A.

All items and their linguistic effects were balanced across subjects by the creation of twelve different lists. Every letter string appeared once in each list, in one of the twelve possible combinations of left or right VF, unilateral or bilateral presentation, wordness of the target and, for bilateral trials only, wordness of the distractor. All lists were balanced across subjects.
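The rotation logic behind such counterbalanced lists can be illustrated with a short sketch (our own illustration; the condition labels are placeholders, since the exact factorial breakdown of the twelve cells is not spelled out above). Each item cycles through all twelve conditions across the twelve lists, appearing exactly once per list.

# Hypothetical condition labels standing in for the twelve VF x presentation
# x wordness (x distractor wordness) cells described in the text.
CONDITIONS = ["cond_%02d" % i for i in range(12)]

def build_lists(items, conditions=CONDITIONS):
    # Latin-square style rotation: list k assigns item i to condition (i + k) mod 12,
    # so across the twelve lists each item passes through every condition once.
    n = len(conditions)
    lists = []
    for k in range(n):
        assignment = {item: conditions[(i + k) % n] for i, item in enumerate(items)}
        lists.append(assignment)
    return lists

# placeholder items, not the actual stimuli
print(build_lists(["cat", "dog", "blick", "tove"])[0])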

Results

Preliminary analyses of variance (ANOVAs) with repeated measures were performed for each dependent variable, that is, percentage of trials correct and medians of reaction times (RTs). Stimulus list and response finger condition were between-subject factors, and VF, presentation mode, and wordness of the target were within-subject factors. In no analysis were the main effects or interactions involving response finger condition and stimulus list significant. Therefore, these counterbalancing factors were dropped from the subsequent analyses.
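With the between-subject counterbalancing factors dropped, analyses of this kind reduce to purely within-subject repeated-measures ANOVAs on per-subject cell means, which can be reproduced with standard software. The sketch below is our own illustration using statsmodels; the file and column names (subject, mode, vf, wordness, accuracy) are hypothetical, not taken from the original data set.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# one row per subject x presentation mode x VF x target wordness cell,
# with the cell's percent correct (or median RT) as the dependent variable
df = pd.read_csv("lexical_decision_cells.csv")   # hypothetical file

aov = AnovaRM(
    data=df,
    depvar="accuracy",                      # or "median_rt"
    subject="subject",
    within=["mode", "vf", "wordness"],      # 2 x 2 x 2 within-subject design
).fit()
print(aov.anova_table)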

Display effect. In order to test the prediction that bilateral presentation would maximize direct access, i.e., produce an h x VF interaction, we carried out a 2 (presentation mode: unilateral, bilateral) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVA. There was a main effect of presentation mode on accuracy, F(1, 22)=25.008, p<.001, and on latency, F(1, 22)=85.110, p<.001, with unilateral presentation producing more accurate (82.8% correct) and faster (785 msec) responses than bilateral presentation (77.6% and 854 msec, respectively). There was no main effect of response hand for either accuracy or latency. There was a RVFA in both accuracy, F(1, 22)=83.660, p<.001, and latency, F(1, 22)=28.911, p<.001, with RVF targets (86.5%, 790 msec) processed more accurately and faster than LVF targets (73.9%, 849 msec).

Further, there was an h x VF interaction for accuracy, F(2, 44)=5.762, p<.01. Planned comparisons showed that subjects responded more accurately with the Lh than with the Rh to LVF targets, F(1, 23)=5.225, p<.03, and more accurately with the Rh than with the Lh to RVF targets, F(1, 23)=7.083, p<.02. Bimanual responses were not statistically different from either Lh or Rh responses in either VF. This pattern supports "direct access". There was also a presentation mode x VF interaction in accuracy, F(1, 22)=7.065, p<.02. Bilateral presentation produced significantly less accurate responses than unilateral presentation both in the RVF, F(1, 23)=18.498, p<.001, and in the LVF, F(1, 23)=69.812, p<.001.

There was no presentation mode x h x VF interaction in accuracy, but since we predicted a stronger h x VF interaction with bilateral than with unilateral presentations, we carried out separate ANOVAs for accuracy in unilateral and bilateral presentations. As predicted, the h x VF interaction was significant for bilateral presentations, F(2, 44)=5.340, p<.01 (Fig. 1b), but not for unilateral presentations, F(2, 44)=2.235, p>.1 (Fig. 1a).

Figure 1: Hand by Visual Field interactions for accuracy in unilateral and bilateral trials.

Psycholinguistic variables. In order to obtain an index of independent hemispheric involvement with the different lexical variables, we included both h and VF in each ANOVA. In each case the h x VF interaction was significant, and this will not be repeated in the individual analyses. Similarly, main effects and interactions obtained in the previous lower order ANOVAs will not be repeated in the subsequent higher order ones.

Wordness. 2 (wordness: word, nonword) x 2 (presentation mode: unilateral, bilateral) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVAs disclosed main effects of wordness in accuracy, F(1, 22)=6.904, p<.016, and in latency, F(1, 22)=69.071, p<.001, with words (82.3% correct, 764 ms) processed more accurately and faster than nonwords (78% correct, 785 ms). There were significant wordness x VF interactions for both accuracy, F(1, 22)=37.231, p<.001, and latency, F(1, 22)=17.325, p<.001. Planned comparisons showed that words produced more accurate responses than nonwords in the RVF, F(1, 23)=47.463, p<.001, whereas there was a nonword advantage in the LVF which failed to reach significance, F(1, 23)=3.503, p=.074. Moreover, words were processed faster than nonwords in both VFs (p<.001), but the difference was smaller in the LVF. Further, there was a significant presentation mode x wordness interaction in accuracy, F(1, 22)=15.967, p<.001: words showed less accurate responses in bilateral than in unilateral presentations, F(1, 23)=38.206, p<.001, whereas nonwords showed no difference between presentation modes. Indeed, there was a significant advantage of words over nonwords with unilateral presentations, F(1, 23)=38.206, p<.001, but not with bilateral presentations, F(1, 23)=.192, p>.5 (Figure 2).

Figure 2: Presentation Mode by Target Wordness interaction in accuracy. *= significant difference

The mode x wordness x VF interaction was significant in accuracy, F(1, 22)=4.823, p<.04, and in latency, F(1, 22)=11.025, p<.01. Unilateral presentations showed a significant word advantage in the RVF, F(1, 23)=39.239, p<.001, but no difference in the LVF (Figure 3a). Bilateral presentations also showed a word advantage in the RVF, F(1, 23)=18.987, p<.001, but there was a nonword advantage in the LVF, F(1, 23)=32.708, p<.001 (Figure 3b). Latency analysis showed that words were processed faster than nonwords in both the LVF and the RVF, with both unilateral and bilateral presentations.

Figure 3: Visual Field by Target Wordness by Presentation Mode interaction in accuracy. *= significant difference

Next we analyzed word frequency and word regularity for word trials only and target length for all trials by separate ANOVAs in order to avoid a small number of trials per cell in the design.

Word frequency. Word trials were submitted to a 2 (frequency: high, low) x 2 (presentation mode: unilateral, bilateral) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVA for both accuracy and RTs. There was a main effect of frequency in accuracy, F(1, 22)=18.719, p<.001, with more accurate performance for high frequency words (84.1% correct) than for low frequency words (80.4%). Mean latency for high frequency words (723 ms) was also significantly lower than for low frequency words (767 ms), F(1, 22)=25.35, p<.001.

Word regularity. Word trials were submitted to a 3 (regularity: regular, irregular, rule-based) x 2 (presentation mode: unilateral, bilateral) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVA for both accuracy and RTs. The accuracy analysis showed a main effect of regularity, F(2, 46)=7.455, p<.002, with irregular words yielding 80.4% correct responses, regular words 82.4%, and rule-based words 84%. However, there was no effect of regularity on latency (irregular words=761 ms, regular words=759 ms, rule-based words=752 ms). Moreover, there were no interactions involving regularity in either accuracy or latency.

Since the dual route model predicts a selective regularity effect for low frequency words, we carried out 3 (regularity: regular, irregular, rule-based) x 2 (frequency: high, low) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVAs with accuracy and latency as dependent variables. We observed the same main effects of regularity and of frequency obtained in the separate ANOVAs, but no significant interactions involving regularity, frequency, and VF. Thus, the prediction that the nonlexical route is available exclusively to the LH is not supported by our data.

Length. Separate 4 (length: 3, 4, 5, and 6 letters) x 2 (wordness: word, nonword) x 2 (presentation mode: unilateral, bilateral) x 2 (VF: left, right) x 3 (h: left, right, bimanual) ANOVAs were carried out with accuracies and latencies of correct responses as dependent variables. In addition to the main effects and interactions obtained in the previous analyses, there was a main effect of length for both accuracy, F(3, 66)=33.95, p<.001 (3 letters=84.6% correct, 4 letters=80.5%, 5 letters=78.7%, and 6 letters=75.3%), and latency, F(3, 66)=13.279, p<.001 (3 letters=779 ms, 4 letters=823 ms, 5 letters=827 ms, 6 letters=847 ms). Further, the wordness x length interaction was significant for accuracy, F(3, 66)=5.081, p<.006, but only approached significance for latency, F(3, 66)=2.715, p=.065. The length x VF interaction was significant for accuracy, F(3, 66)=4.294, p<.01, but not for latency. Finally, the length x wordness x VF interaction was significant for both accuracy, F(3, 66)=7.861, p<.001, and latency, F(3, 66)=4.491, p<.01, with length effects for both words and nonwords in the LVF (Figure 4a) but a length effect for nonwords only in the RVF (Figure 4b).

Figure 4: Length by Target Wordness by Visual Field interaction in accuracy

Lexicality priming. Bilateral trials were submitted to separate 2 (target wordness: word, nonword) x 2 (distractor wordness: word, nonword) x 2 (target VF: left, right) x 3 (h: left, right, bimanual) ANOVAs with accuracy and latency of correct responses as dependent variables. There was no significant lexicality priming effect in accuracy, although congruent trials (word-word=78.6% correct and nonword-nonword=78.3%) were slightly more accurate than incongruent trials (word-nonword=76.3%, nonword-word=76.2%).

A target wordness x distractor wordness interaction was observed for RTs, F(1, 22)=12.174, p<.003, with word targets processed faster when the distractor was a word, F(1, 23)=10.693, p<.004, whereas the speed of processing of nonword targets was not affected by distractor wordness. There was also a trend toward a significant target wordness x distractor wordness x VF interaction in latency, F(1, 22)=3.293, p=.083. Though only a trend, we believe it is important to describe this VF-dependent lexicality priming, since this is the first report of this effect and it should be explored in future studies. RVF targets and LVF nonword targets were not affected by distractor wordness, whereas LVF word targets were processed faster with word than with nonword distractors, F(1, 23)=24.381, p<.001 (Figure 5).

Figure 5: Target Wordness by Decoy Wordness interactions in medians of RTs of correct trials. *= significant difference

Discussion

Rayman & Zaidel (1991) argued that direct access tasks should show no difference in laterality effects between bilateral and unilateral presentations, and we predicted further that when the task can be performed by either hemisphere, bilateral presentations will maximize hemispheric independence by shifting a callosal relay or interhemispheric cooperation pattern into greater direct access, or hemispheric independence in strategies and resources. As a sign of direct access we used a significant interaction of response hand (h) with target visual field (VF). Our prediction was borne out: there was a significant overall h x VF interaction in accuracy and in bias-free sensitivity, and when presentation modes were analyzed separately the interaction was significant for bilateral presentations but not for unilateral presentations. The lack of statistical significance of the three-way presentation mode x h x VF interaction may be due to the presence of bilateral displays intermingled with unilateral displays throughout the experimental session. If bilateral presentations actually turn callosal relay processing into direct access processing, then they may create intertrial effects, turning a callosal relay pattern (in the case of unilateral presentations) into a mixed pattern, with callosal relay in some trials and direct access in others. This hypothesis may also explain why we obtained an overall h x VF interaction, even though half of the trials were in the unilateral presentation mode. The effect of the previous trial on current performance will be reported in detail in another manuscript (Iacoboni, Rayman, & Zaidel, in preparation). Thus, we predict that if unilateral and bilateral trials are blocked, the experiment should yield a significant mode x h x VF interaction.

The h x VF interaction was not significant in latency. Laterality effects in hemifield tachistoscopic experiments with lexical stimuli are observed more often in accuracy than in latency (Zaidel et al., 1990), and the discrepancy between the two dependent variables may reflect a speed-accuracy tradeoff due to resource-limited processing (Eviatar & Zaidel, 1992).

We found the usual RVFA and word advantage in both accuracy and latency. There was also a main effect of presentation mode in both accuracy and latency, with less accurate and slower decisions for bilateral displays. Further, there was a mode x wordness interaction in accuracy, showing that bilateral displays reduced accuracy for words but not for nonwords. This supports our hypothesis that, after an initial perceptual process common to both, words and nonwords are processed by independent, parallel processes. On this view there is no preliminary cognitive stage that signals whether the target is a word or not and then assigns it to the appropriate process. We posit further that nonwords are usually processed more independently in each hemisphere, direct access-fashion, whereas processing words involves more interhemispheric cooperation. That is why bilateral presentations, which maximize direct access, do not affect the processing of nonwords, which is already computed via direct access. By contrast, the loss of interhemispheric interaction induced by bilateral presentations is detrimental to word processing, particularly in the less competent RH. The mode x wordness x VF interaction in accuracy was significant because the loss of accuracy in processing bilateral LVF word targets (13.6%) was greater than the loss in processing bilateral RVF word targets (5.7%). If we take LH processing to be characterized by a word advantage and RH processing by a nonword advantage, then bilateral presentations come closest to showing independent hemispheric processing, i.e., direct access.

Our hypothesis that words and nonwords are recognized by independent parallel processes calls for an account of nonword recognition. There are several models of word recognition (e.g., Forster's bin search model, 1976; Morton's logogen activation model, 1969; and Becker's hypothesis testing model, 1976), but the usual account of nonword recognition is in terms of a failure of word recognition. It is easy to recognize orthographically illegal letter strings as nonwords, but how are orthographically regular nonwords recognized? Our best guess is that this is done by detecting violations of morphological structure or composition rules. Indeed, Caramazza et al. (1988) showed that morphologically complex nonwords are more difficult to reject as words than morphologically simple nonwords matched for orthographic similarity. Further, Emmorey & Zaidel (cited in Zaidel, 1989) contrasted lateralized presentations of four types of nonwords (simple, suffixed, root-initial and morphologically decomposable) and found a RVFA for root-initial nonwords, a LVFA for suffixed nonwords, and no VFA for morphologically decomposable nonwords. By contrast, Koenig et al. (1992) found that morphologically complex nonwords (stem + suffix) produced more errors than matched morphologically simple nonwords in the RVF but not in the LVF. They concluded that stems and suffixes are represented only in the LH, but their data were unusual in showing an overall nonword advantage in accuracy and no lateral differences in latency. It is possible that the RH was particularly efficient in processing morphological structure.

The deficit in word decisions produced by increased hemispheric independence (bilateral presentations, direct access) suggests that word recognition usually involves interhemispheric interaction. One hemisphere may "borrow" information processing resources from the other even while independently controlling the strategy that is applied to stimuli projected directly to it. That hemisphere would then be strategy-independent but not resource-independent of the other. Thus, the response hand x stimulus visual field interaction may signal resource independence (Zaidel, 1987), whereas an interaction between some independent variable Y and visual field ("processing dissociation", cf. Zaidel, 1983) may signal processing independence. In that case both unilateral and bilateral presentations show evidence for strategy independence (e.g., significant wordness x VF interactions) but only bilateral presentations show evidence for resource independence as well. A parallel distributed model of visual word recognition that can perform a lexical decision task despite the absence of word-level representations has been described by Seidenberg & McClelland (1989). The model presupposes that flexible response strategies are necessary in lexical decision, dynamically changing the level of information required to accomplish the task. In this way, the independence of response strategies in the two hemispheres can result in qualitatively different patterns in the two VFs, even in the absence of resource independence.

But how might hemispheric resources be shared, say, in processing unilateral words? We posit that each hemisphere has a different lexical semantic system encompassing sets of neural cell assemblies that are represented throughout association cortex and correspond to semantic features. Word recognition involves synchronous activation of those multiple representations, coordinated, paced and controlled by "convergence zones" (lexical entries) (Damasio, 1989; Damasio & Damasio, 1990), and semantic features from both hemispheres may converge to facilitate processing. This implies that bilateral copies of a word should be more potent in generating resource sharing than either single unilateral copies or two copies in the same hemisphere (Mohr, Pulvermuller, & Zaidel, 1994; Zaidel & Rayman, 1994). An anatomical substrate consistent with this account has been described in anterior cortical regions of primates, where pathways from primary sensory, intermediate, and higher-order association cortices converge (Goldman-Rakic, 1988). Neurons in these regions have large, bilateral receptive fields, whereas those in posterior regions have smaller, unilateral receptive fields. To summarize, the parallel processes for nonword recognition in the two hemispheres would be both strategy- and resource-independent, whereas the parallel processes for word recognition in the two hemispheres would share resources but not strategies.

This account helps explain the reported lexicality priming effect. We found evidence for lexicality priming in latency for word targets. This result is consistent with our hypothesis that word processing benefits from interhemispheric cooperation. It follows that interhemispheric cooperation or resource sharing can occur automatically even in direct access tasks when the two hemispheres engage similar computations, albeit with different inputs. This priming effect provides a paradigm for assessing implicit interhemispheric transfer in the split brain (Iacoboni, Rayman & Zaidel, in preparation).

The standard account of facilitation/interference effects posits that congruity (priming) effects are proportional to the speed and/or automaticity of processing the decoy relative to the target. Recall that latency for unilateral trials showed significant word advantages in both VFs and a RVFA for words but not for nonwords. This would predict (1) little or no priming effect in the RVF, and (2) greater priming effects for LVF nonword than word targets. Only the first prediction is supported by our data. Note that unilateral presentations do not constitute an absolute baseline or a neutral condition for bilateral presentations. This most likely reflects the overall overhead and time-sharing costs of implementing parallel computations in the two hemispheres during bilateral trials. Indeed, mode changes induce changes in hemispheric resource assignment, particularly for words. Thus neither the speed-of-processing nor the automaticity model of the priming effect (cf. MacLeod, 1991) is supported.

Although the priming effect was not significant for RVF targets or for nonword targets, the pattern of the data was always consistent with a congruity effect (word targets faster with word than with nonword decoys, nonword targets faster with nonword than with word decoys), and there was no hint of a same-trial negative priming effect. Lambert (1991) found same-trial negative priming effects in a lateralized lexical categorization task, but in his design the unattended stimulus (in the LVF) was flashed for only 35 ms whereas the attended one (in the RVF) was flashed for 100 ms, so the display may be regarded as a sequential presentation. In the classic negative priming effect, an unattended stimulus in one trial that belongs to the same category as the attended stimulus in the next produces inhibition rather than facilitation (e.g., Driver & Tipper, 1989). Could this or other previous-trial effects interact with or obscure our same-trial congruity effect? Post hoc analyses showed no significant interaction between the lexicality priming effect and any previous-trial variable (Iacoboni, Rayman, & Zaidel, in preparation).

The hemispheric version of the dual route model, according to which the RH has access only to a lexical route whereas the LH has access to both a lexical and a nonlexical route, was tested in our experiment under different conditions of hemispheric independence (unilateral and bilateral presentations). The dual route model predicts a regularity x VF interaction with a greater regularity effect in the RVF. It also predicts an overall frequency effect, significant in both VFs, and, perhaps, a frequency x VF interaction showing a greater frequency effect in the LVF. The strongest test of these predictions should obtain with bilateral presentations, and we may therefore expect mode x regularity x VF and mode x frequency x VF interactions.

Surprisingly, we found an overall regularity effect, with irregular words processed less accurately than regular ones in either VF, but there was no regularity x VF interaction, nor a mode x regularity x VF interaction. This suggests that RH word recognition in the normal brain includes a phonological component, at least for speeded lexical decision. That conclusion conflicts with results from split brain patients suggesting that the disconnected RH has no access to grapheme-phoneme correspondence rules (Zaidel & Peters, 1981), but it is consistent with previous observations that the normal LVF is sensitive to phonological variables when there is evidence for direct access during word recognition (Zaidel, 1989; Rayman & Zaidel, 1991). Frequency had a similar effect in both VFs and no significant interactions were observed with other experimental variables. Taken together, the dual route analysis of hemispheric word recognition is disappointing and does not provide strong evidence for associating the normal RH with an exclusively lexical route. An alternative possibility is that the regularity effect in the RH reflects some orthographic rather than phonological representation, but the details of such an implementation remain obscure.

We observed stronger length effects for nonwords than for words, and in the LVF than in the RVF, consistent with the generalization of Eviatar & Zaidel (1991). The length x wordness x VF interaction showed length effects for nonwords in both VFs and a length effect for words only in the LVF. Ellis et al. (1988) proposed that the LH recognizes horizontal words using a fast parallel process which is not sensitive to length, whereas the RH recognizes such words using a slow serial process which is sensitive to length. In their view, nonwords in either VF are recognized via the slow, serial process and thus show length effects. This view has been supported by an experiment with a split-brain patient (Reuter-Lorenz & Baynes, 1992). Other hemispheric length effects have been observed with nonpronounceable letter strings, suggesting that the LH is more efficient than the RH in processing local elements of a display (Eng & Hellige, 1994). But discrepant results were obtained by others (Bub & Lewine, 1988; Bruyer & Janlin, 1989; Eviatar & Zaidel, 1991). Eviatar & Zaidel attempted to reconcile the conflicting data by suggesting that length effects may reflect a switch from a parallel graphemic analysis to a sequential parsing strategy when resources are limited, with either words or nonwords in either VF. Thus, string length may interact with other variables, changing the characteristics of a task from data-limited to resource-limited (Norman & Bobrow, 1975).

Why haven't we observed any interaction involving string length and mode of presentation? Presumably because LH resources are sufficient for parallel processing of words even with bilateral presentations. But LH resources are insufficient for parallel processing of nonwords and this is true even with unilateral presentations because, according to our earlier hypothesis, nonwords are processed via direct access with either bilateral or unilateral presentations.

As usual, one should exercise caution in generalizing from the artificial laboratory condition of lateralized lexical decision to the role of the two hemispheres in natural reading but the results suggest that RH processing of single words has surprisingly diverse strategies.

References

Becker, C.A. 1976. Allocation of attention during visual word recognition. Journal of Experimental Psychology: Human Perception & Performance, 2, 556-566.

Boles, D.B. 1983. Hemispheric interaction in visual field asymmetry. Cortex, 9, 99-113.

Boles, D.B. 1987. Reaction time asymmetry through bilateral vs. unilateral stimulus presentation. Brain & Cognition, 6, 321-333.

Boles, D.B. 1990. What bilateral displays do. Brain & Cognition, 12, 205-208.

Boles, D.B. 1994. An experimental comparison of stimulus type, display type, and input variable contributions to visual field asymmetry. Brain and Cognition, 24, 184-197.

Bruyer, R., & Janlin, D. 1989. Lateral differences in lexical access: word length vs. stimulus length. Brain & Language, 37, 258-265.

Bub, D.N., & Lewine, J. 1988. Different modes of word recognition in the left and the right visual fields. Brain & Language, 33, 161-188.

Caramazza, A., Laudanna, A., & Romani, C. 1988. Lexical access and inflectional morphology. Cognition, 28, 297-332.

Chiarello, C. 1988. Lateralization of lexical processes in the normal brain: a review of visual-half field research. In H.A. Whitaker (Ed.), Contemporary reviews in Neuropsychology. New York: Springer Verlag.

Chiarello, C., Nurding, S., & Pollock, A. 1988. Lexical decision and naming asymmetries: influence of response selection and response bias. Brain & Language, 34, 302-314.

Coltheart, M. 1983. The right hemisphere and disorders of reading. In A.W. Young (Ed.) Functions of the right cerebral hemisphere. London: Academic Press.

Damasio, A.R. 1989. Time-locked multiregional retroactivation: a systems-level proposal for the neural substrates of recall and recognition. Cognition, 33, 25-62.

Damasio, H., & Damasio, A.R. 1990. The neural basis of memory, language and behavioral guidance: advances with the lesion method in humans. Seminars in the Neurosciences, 2, 277-286.

Driver, J., & Tipper, S.P. 1989. On the nonselectivity of "selective" seeing: contrasts between interference and priming in selective attention. Journal of Experimental Psychology: Human Perception and Performance, 15, 304-314.

Ellis, A.W., Young, A.W., & Anderson, C. 1988. Modes of word recognition in the left and right cerebral hemispheres. Brain & Language, 32, 254-273.

Eng, T.L., & Hellige, J.B. 1994. Hemispheric asymmetry for processing unpronounceable and pronounceable letter trigrams. Brain and Language, 46, 517-535.

Eviatar, Z., Menn, L., & Zaidel, E. 1990. Concreteness: nouns, verbs, and hemispheres. Cortex, 26, 611-624.

Eviatar, Z., & Zaidel, E. 1991. The effects of word length and emotionality on hemispheric contribution to lexical decision. Neuropsychologia, 29, 415-428.

Eviatar, Z., & Zaidel, E. 1992. Letter matching in the hemispheres: speed-accuracy trade-offs. Neuropsychologia, 30, 699-710.

Forster, K.I. 1976. Accessing the internal lexicon. In R.J. Wales & E. Walker (Eds.), New approaches to language mechanisms. Amsterdam: North-Holland.

Goldman-Rakic, P.S. 1988. Topography of cognition: parallel distributed networks in primate association cortex. Annual Review of Neuroscience, 11, 137-156.

Hardyck, C. 1991. Shadow and substance: attentional irrelevancies and perceptual constraints in the hemispheric processing of language stimuli. In F.L. Kitterle (Ed.) Cerebral laterality: theory and research. Hillsdale, NJ: Lawrence Erlbaum.

Hines, D. 1977. Differences in tachistoscopic recognition between abstract and concrete words as a function of visual half field and frequency. Cortex, 13, 66-73.

Hochhaus, L. 1972. A table for the calculation of d' and beta. Psychological Bulletin, 77, 375-376.

Kinsbourne, M., & Hiscock, M. 1983. Asymmetries of dual-task performance. In J. Hellige (Ed.) Cerebral hemisphere asymmetry: method, theory, and application. New York: Praeger.

Koenig, O., Wetzel, C., & Caramazza, A. 1992. Evidence for different types of lexical representations in the cerebral hemispheres. Cognitive Neuropsychology, 9, 33-45.

Lambert, A., & Voot, N. 1993. A left visual field bias for semantic encoding of unattended words. Neuropsychologia, 31, 67-73.

MacLeod, C.M. 1991. Half a century of research on the Stroop effect: an integrative review. Psychological Bulletin, 109, 163-203.

McNichol, D. 1972. A primer on signal detection theory. London: Allen & Unwin.

Measso, G., & Zaidel, E. 1990. Effect of response programming on hemispheric differences in lexical decision. Neuropsychologia, 28, 635-646.

Mohr, B., Pulvermuller, F., & Zaidel, E. 1994. Lexical decision after left, right, and bilateral presentation of function words, content words and non-words: evidence for interhemispheric interaction. Neuropsychologia, 32, 105-124.

Morton, J. 1969. Interaction of information in word recognition. Psychological Review, 76, 165-178.

Norman, D.A., & Bobrow, D.G. 1975. On data limited and resource limited processes. Cognitive Psychology, 7, 44-64.

Patterson, K.E., & Morton, J. 1985. From orthography to phonology: an attempt at an old interpretation. In K.E. Patterson, J.C. Marshall & M. Coltheart (Eds.), Surface dyslexia. London: Lawrence Erlbaum Associates.

Rayman, J., & Zaidel, E. 1991. Rhyming and the right hemisphere. Brain & Language, 40, 89-105.

Reuter-Lorenz, P.A., & Baynes, K. 1992. Modes of lexical access in the callosotomized brain. Journal of Cognitive Neuroscience, 4, 155-164.

Seidenberg, M.S., & McClelland, J.L. 1989. A distributed, developmental model of word recognition and naming. Psychological Review, 96, 523-568.

Schweiger, A., Zaidel, E., Field, T., & Dobkin, B. 1989. Right hemisphere contribution to lexical access in an aphasic with deep dyslexia. Brain and Language, 37, 73-89.

Seymour, P.H.K., Bunce, F., & Evans, H.M. 1992. A framework for orthographic assessment and remediation. In C. Sterling & C. Robson (Eds.), Psychology, spelling, and education. Clevedon: Multilingual Matters.

Zaidel, E. 1983. Disconnection syndrome as a model for laterality effects in the normal brain. In J. Hellige (Ed.), Cerebral hemisphere asymmetry: method, theory and application. New York: Praeger.

Zaidel, E. 1987. Hemispheric monitoring. In D. Ottoson (Ed.), Duality and unity of the brain. London: MacMillan.

Zaidel, E. 1989. Hemispheric independence and interaction in word recognition. In C. von Euler, I. Lundberg, & G. Lennerstrand (Eds.), Brain & reading. Hampshire: Macmillan.

Zaidel, E., Clarke, J.M., & Suyenobu, B. 1990. Hemispheric independence: a paradigm case for cognitive neuroscience. In A.B. Scheibel & M.D. Wechsler (Eds.), Neurobiology of higher cognitive functioning. New York: Guilford Press.

Zaidel, E., & Peters, A.M. 1981. Phonological encoding and ideographic reading by the disconnected right hemisphere: two case studies. Brain and Language, 14, 205-234.

Zaidel, E., & Rayman, J. 1994. Hemispheric control in the normal brain: evidence from redundant bilateral presentation. In C. Umilta' & M. Moscovitch (Eds.), Attention and Performance XV. Cambridge: MIT Press.

[1] A preliminary model that can be seen as an attempt to account for the lack of a lexicality effect in the LVF is outlined in Hardyck (1991).