Advances in Linguistics Research

ISSN Print: 2707-2622
ISSN Online: 2707-2630
Evidence for a Human Language Instinct from Sign Languages

Beiyang Zhang

Advances in Linguistics Research / 2026, 8(1): 25-30 / 2026-03-23
  • Information:
    University of Amsterdam, Amsterdam, Netherlands
  • Keywords:
    Language Instinct; Sign Languages; Neurobiology
  • Abstract: This paper examines whether evidence from sign language research supports the hypothesis that humans possess a language instinct. Drawing on studies of homesign systems, emerging sign languages, cospeech gesture in stuttering, and the neurobiological bases of sign language processing, it argues that linguistic structure cannot be explained solely by communicative interaction or environmental input. Homesign research shows that deaf children develop systematic structural patterns despite the absence of a conventional language model. Studies of Nicaraguan Sign Language further demonstrate how gesture-based communication can be reorganized into increasingly structured grammatical systems across cohorts of learners. Evidence from cospeech gesture and neurobiological research suggests that language relies on internally organized mechanisms that are largely independent of modality. Together, these findings indicate that language development is guided by a biologically grounded capacity for language, providing support for the hypothesis that humans possess a language instinct.
  • DOI: 10.35534/lin.0801003
  • Cite: Zhang, B. Y. (2026). Evidence for a Human Language Instinct from Sign Languages. Advances in Linguistics Research, 8(1), 25–30.


1 Introduction

The hypothesis that humans possess an innate capacity for language—often referred to as a language instinct—has been a central topic in theoretical linguistics and cognitive science. Within the generative tradition, Chomsky (1965) argued that language acquisition is made possible by an innate faculty that provides abstract structural principles. Similarly, Pinker (1994) characterizes language as an evolved instinct that develops reliably in humans rather than as a skill transmitted solely through cultural learning. According to this perspective, the emergence of linguistic structure cannot be fully explained by communicative interaction or environmental input alone.

Sign languages provide a particularly valuable lens for investigating this hypothesis, as they emerge in a distinct modality from spoken languages yet exhibit comparable grammatical organization (Sandler & Lillo-Martin, 2006). Research on homesign systems, emerging sign languages, co-speech gesture, and the neurobiological bases of sign language processing has provided important insights into how linguistic structure develops when conventional linguistic input is limited or absent. These phenomena enable researchers to investigate whether linguistic structure can emerge from internally guided mechanisms rather than being solely derived from external linguistic input.

Strictly speaking, homesign and co-speech gesture are not conventional sign languages. For convenience, this paper uses the term “sign language research” in a broader sense to include studies of sign languages as well as related gesture-based communication systems.

This paper argues that empirical findings from sign language research provide strong evidence for the existence of a human language instinct. Four lines of evidence are examined. First, homesign systems demonstrate that deaf children can develop structured communication systems despite the absence of a conventional language model. Second, research on emerging sign languages such as Nicaraguan Sign Language (NSL) shows that successive cohorts of learners reorganize early gestural systems into increasingly systematic grammatical structures. Third, studies of co-speech gesture in individuals who stutter reveal that gesture production is tightly integrated with linguistic planning rather than functioning as an independent communicative channel. Finally, neurobiological evidence shows that both signed and spoken languages rely on largely overlapping neural networks, indicating a modality-independent linguistic system.

Taken together, these findings suggest that linguistic structure is not merely the product of communicative interaction or cultural transmission, but instead reflects an internally guided capacity for language that shapes the emergence and organization of linguistic systems.

2 Evidence from Sign Language Research

Evidence from sign language research provides multiple independent lines of support for the hypothesis that humans possess an innate capacity for language. This section reviews four types of evidence: homesign systems, emerging sign languages, co-speech gestures, and neurobiological findings.

2.1 Homesign

Homesign systems provide an important test case for evaluating whether linguistic structure can emerge in the absence of conventional linguistic input. Deaf children who develop homesign systems typically grow up in socially enriched environments, yet lack access to a fully developed linguistic model. Despite this limitation, research shows that homesign systems often exhibit systematic and language-like structural properties. Goldin-Meadow (2005) reports that homesigning children across different cultural contexts independently develop consistent ordering patterns in their gesture productions. In both the United States and China, children show a strong preference for placing the patient (object) before the act (verb) in two-gesture sentences (e.g., cheese–eat), a pattern that does not mirror the dominant word order of the surrounding spoken languages (English is SVO; Mandarin is largely SVO but topic-prominent). Similarly, homesigners tend to place intransitive actors (subjects of intransitive verbs) before the act (e.g., mouse–go). These patterns demonstrate that children are not merely reproducing the gestural input provided by caregivers, but instead generating abstract structural organization in their communication systems despite substantial cross-cultural differences in caregiver gesture use and interaction styles.

One possible alternative explanation of this phenomenon is that the observed structure arises through communicative interaction with hearing family members. Through repeated interaction, feedback, and the need for mutual understanding, such exchanges might gradually shape and conventionalize the homesign system. However, Carrigan and Coppola (2017) provide evidence that challenges this interpretation. In a comprehension experiment involving homesign productions presented in a decontextualized task, mothers and other relatives showed limited understanding of the homesign systems despite years of daily interaction with the children. Moreover, comprehension did not improve with increasing years of interaction. Interestingly, naïve users of American Sign Language (ASL) with no prior exposure to the homesign systems often performed better in interpreting these productions than the homesigners’ mothers. These findings indicate that the structural properties of homesign systems are not primarily driven by successful communicative alignment between children and their interlocutors.

Taken together, the evidence indicates that homesign systems exhibit systematic structural organization that cannot be fully explained by communicative interaction alone. Instead, the emergence of such structure appears to reflect internally driven biases toward linguistic organization, providing support for the hypothesis that humans possess a language instinct.

2.2 Emerging Sign Languages

Emerging sign languages provide further evidence that the human capacity for language plays a crucial role in transforming early gestural communication into structured linguistic systems. Research on Nicaraguan Sign Language (NSL) illustrates how linguistic structure can evolve rapidly when a newly formed communicative system is transmitted across successive generations of learners. Senghas, Kita, and Özyürek (2004) show that the first cohort of NSL signers, who were exposed to the system relatively late, relied largely on holistic and gesture-like expressions when describing motion events. In these productions, manner and path were typically encoded simultaneously, a pattern similar to that observed in co-speech gesture.

When the system was transmitted to younger learners, however, it underwent a systematic reorganization. Later cohorts increasingly decomposed motion events into distinct components and expressed manner and path sequentially rather than simultaneously. These elements were also combined into more complex constructions, including A-B-A patterns that allow embedding within larger structures. Such developments have been interpreted as the emergence of a hierarchical structure within the language (Senghas et al., 2004). Importantly, comparable restructuring did not occur in older learners despite prolonged exposure to the same system. Given that the ambient gestural input remained relatively stable across cohorts, these changes are unlikely to stem solely from differences in input frequency. Instead, they suggest that younger learners reorganize the available material during acquisition, pointing to the possibility of a developmentally critical period in the emergence of linguistic structure.

More recent work demonstrates that this restructuring is not confined to event encoding but extends to abstract grammatical domains. By examining wh-questions in NSL, Kocab, Senghas, and Pyers (2022) show that although a range of facial gestures is available in the hearing community, only a subset is selectively conventionalized across successive cohorts. In particular, the brow furrow, neither frequent nor especially salient in the input, emerges as a stable non-manual interrogative marker only in later cohorts during acquisition. This pattern is difficult to reconcile with usage-based or input-driven accounts, which would predict that the most frequent or salient forms would undergo grammaticalization and that linguistic change would diffuse gradually across users.

These findings suggest that while gesture provides the initial expressive resources for emerging sign languages, the development of a structured linguistic system depends critically on how child learners reorganize this material during a developmentally constrained period in acquisition. The emergence of systematic grammatical structure in NSL therefore provides important evidence for the role of internally driven mechanisms in language formation, consistent with the hypothesis of a human language instinct.

2.3 Co-speech Gesture in Stuttering

Evidence from co-speech gestures produced by individuals who stutter provides further support for the hypothesis that humans possess a language instinct. If gestures operated as an independent communicative channel, one would expect them to compensate for disruptions in speech production. However, empirical findings suggest a different pattern, indicating that gesture production is closely tied to the processes underlying linguistic planning.

Mayberry and Jaques (2000) show that in neurotypical speakers who stutter, gesture production is markedly reduced during stuttering events. Representational gestures rarely co-occur with stuttered disfluencies, and gesture strokes are systematically delayed until fluent word production resumes. These effects cannot be attributed to a general motor impairment, as non-linguistic manual actions remain intact during stuttering episodes. Crucially, gestures do not increase to compensate for speech failure; instead, gesture production appears to pause when linguistic planning breaks down. This pattern suggests that gesture initiation is tightly linked to the timing and organization of linguistic processes rather than driven solely by communicative need.

More recent evidence further clarifies this relationship. Maessen et al. (2022) report that individuals with Down syndrome may increase gesture use during stuttering as a compensatory strategy. This contrast indicates that gesture-speech coupling is contingent on the integrity of the linguistic system, and it highlights an important asymmetry: in neurotypical speakers, where the linguistic system remains intact, gesture production does not operate independently of linguistic planning; rather, gestures are constrained by the organization and timing of language production. Compensatory gesture use emerges primarily when the underlying linguistic system is compromised.

These findings indicate that gesture production is normally coordinated with linguistic planning within an integrated linguistic system. Rather than functioning as an alternative communicative channel, gesture appears to be organized by the same mechanisms that govern language production. This close coupling between gesture and linguistic processes, therefore, provides additional evidence that language behavior is guided by internally structured language mechanisms, consistent with the hypothesis that humans possess a language instinct.

2.4 Neurobiological Evidence

Neurobiological research on sign language processing provides strong support for the hypothesis that humans possess a language instinct. If language were shaped entirely by the modality of its expression, one would expect signed and spoken languages to depend on fundamentally distinct neural systems. However, evidence from neuroimaging and lesion studies suggests that the brain supports a stable neural architecture that underlies a modality-independent linguistic system.

MacSweeney et al. (2008) review extensive lesion and neuroimaging evidence showing that both signed and spoken languages recruit a left-lateralized perisylvian network, including the left inferior frontal gyrus and superior temporal regions, during language comprehension and production. Furthermore, deaf signers with left-hemisphere damage exhibit sign language aphasia that closely mirrors the patterns observed in spoken language aphasia. These findings indicate that the perisylvian regions support abstract linguistic computations rather than modality-specific motor control. At the same time, MacSweeney et al. (2008) also note systematic modality-related differences between signed and spoken language processing, particularly increased involvement of parietal regions during sign language production. Such differences are generally interpreted as reflecting the visuospatial and motor demands of the manual modality rather than indicating the existence of distinct linguistic systems.

Evidence from hearing bimodal bilinguals further illustrates how modality modulates the implementation of an underlying linguistic system. Banaszkiewicz, Costello, and Marchewka (2024) show that experience with a visuospatial sign language modulates the recruitment of inferior parietal regions during language processing. Both native and late signers exhibit enhanced activation of the left inferior parietal lobule (including the supramarginal gyrus) when processing sign language relative to audiovisual spoken language, which points to a robust modality effect independent of age of acquisition. In addition, native signers exhibit greater engagement of the right inferior parietal lobule than late learners, which suggests that age of acquisition modulates the allocation of neural resources during linguistic development. Importantly, no reliable differences were observed in core language regions such as the left inferior frontal gyrus, which may indicate that linguistic experience shapes how an existing linguistic system is implemented rather than constructing a different one.

These findings show that while modality and developmental timing may influence how language is implemented in the brain, they do so within a stable neural architecture supporting a modality-independent linguistic system. The largely shared neural basis of signed and spoken languages, therefore, provides further evidence for the existence of a human language instinct.

3 Discussion

The evidence reviewed in this paper is compelling not because any single phenomenon alone proves the existence of a language instinct, but because findings from several distinct domains converge on the same overarching implication. Across different developmental conditions, communicative environments, and levels of analysis, sign-related evidence repeatedly suggests that human language cannot be understood as the passive product of input, interaction, or modality alone. Instead, the overall pattern is more consistent with the view that learners actively organize available communicative material in specifically linguistic ways.

This point is particularly important for the theoretical debate between nativist and usage-based accounts. Usage-based approaches have rightly emphasized that language evolves through social interaction, repeated use, and the gradual abstraction of constructions from linguistic experience. Tomasello’s (2003) framework is especially influential in arguing that children do not need a self-contained language instinct in order to acquire language, but can build linguistic knowledge from domain-general cognitive capacities operating in communicative contexts. Yet the sign language evidence examined in this paper indicates that such explanations, while indispensable, are not fully adequate. They help explain how structures become shared, entrenched, and conventionalized within a community, but they do not fully explain why communicative systems so often develop specifically linguistic regularities even when the input is impoverished or not itself fully linguistic.

What emerges, then, is not a simple rejection of interactional or usage-based explanations, but a limit on how far they can go on their own. Social interaction and transmission matter, but the evidence discussed in this paper suggests that these factors do not by themselves generate linguistic structure in an unconstrained way. Rather, they operate within learners whose cognitive systems are inherently biased toward the construction of structured, combinatorial, and modality-independent linguistic systems. In this sense, the strongest interpretation of the sign language evidence is not that language is fully pre-specified, but that language development is guided by internal constraints that shape how communicative experience is transformed into linguistic organization.

4 Conclusion

Overall, the most tenable conclusion of this paper is a nuanced one. The converging evidence discussed above—from homesign systems, emerging sign languages, gesture-speech integration, and neurobiological research—suggests that linguistic structure cannot be fully explained by communicative interaction or environmental input alone. Sign language evidence does not negate the role of usage, interaction, or transmission in linguistic development; rather, it demonstrates that these factors operate within a biologically grounded capacity that actively shapes the emergence of linguistic structure. The importance of sign languages lies precisely in the fact that they make this capacity particularly visible. Because sign languages emerge outside the auditory-vocal channel, they more clearly reveal that language is not fundamentally a property of speech, but rather of the human mind. In this sense, the structural properties of sign languages provide strong evidence that humans possess a language instinct.

References

[1] Banaszkiewicz, A., Costello, B., & Marchewka, A. (2024). Early language experience and modality affect parietal cortex activation in different hemispheres: Insights from hearing bimodal bilinguals. Neuropsychologia, 204, 108973.

[2] Carrigan, E. M., & Coppola, M. (2017). Successful communication does not drive language development: Evidence from adult homesign. Cognition, 158, 10–27.

[3] Chomsky, N. (1965). Aspects of the theory of syntax. MIT Press.

[4] Goldin-Meadow, S. (2005). The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. Psychology Press.

[5] Kocab, A., Senghas, A., & Pyers, J. (2022). From seed to system: The emergence of non-manual markers for wh-questions in Nicaraguan Sign Language. Languages, 7(2), 137.

[6] MacSweeney, M., Capek, C. M., Campbell, R., & Woll, B. (2008). The signing brain: The neurobiology of sign language. Trends in Cognitive Sciences, 12(11), 432–440.

[7] Maessen, B., Rombouts, E., Maes, B., & Zink, I. (2022). The relation between gestures and stuttering in individuals with Down syndrome. Journal of Applied Research in Intellectual Disabilities, 35(3), 761–776.

[8] Mayberry, R. I., & Jaques, J. (2000). Gesture production during stuttered speech: Insights into the nature of gesture–speech integration. In D. McNeill (Ed.), Language and gesture (pp. 199–214). Cambridge University Press.

[9] Pinker, S. (1994). The language instinct: How the mind creates language. William Morrow.

[10] Sandler, W., & Lillo-Martin, D. C. (2006). Sign language and linguistic universals. Cambridge University Press.

[11] Senghas, A., Kita, S., & Özyürek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779–1782.

[12] Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Harvard University Press.
