Eikyu Wrote:Nestor Wrote:It makes a lot of sense, I'm surprised it's not more popular as it obviates the need for thinking in terms of binary oppositions, really, with articulatory components integrated into the visuospatial representations more literally.
What binary opposition are you referring to?
Edit:
Nestor Wrote:Learning to read rewires the brain and how we use language overall, it's quite amazing for a kludgy invention.
You've said that non-phonocentrism is a mainstream view among linguists, but all the refs I can find point to Derrida and deconstruction. Do you have a link to a linguistics paper/book discussing the issue? I'm not sure how deeply writing affects our way of thinking about language.
And Wikipedia Wrote:In April, 1946, Naoya Shiga published an article in the magazine Kaizō titled kokugo mondai (国語問題, “the national language problem”), which suggested that the Japanese language should be eradicated in favour of French, which he considered to be the most beautiful language in the world.
That sounds like the perfect solution !
As I said, it's more a rhetorical vestige that doesn't really come up in practice except to make such points in themselves. Here are a few examples relevant to speech and writing:
Presentation in language: rethinking speech and writing
The written language bias in linguistics
Language and the Internet
This also talks a bit about the changing/dominant paradigms (the idea that writing is not merely speech written down has been around for a while, in part thanks to M.A.K. Halliday):
Literacy and language analysis
Systemic functional linguistics and cognitive linguistics have long dealt with language as a system of texts/signs rather than as speech alone, whereas unfortunately the dominant view in generative grammar or whathaveyou (which ends up being equated with linguistics as a whole in often-obsolete intro texts) treats speech as primary and ends up performing prescriptive wankery with little relevance to usage. That view tends to rely on assumptions about universals and the innateness of language. As you can see through surveys such as those at Babel's Dawn, a brilliant
blog on the origins of speech and language that uses a paper by Evans & Levinson as a launch point (see also
Chater & Christiansen,
Michael Tomasello, and Simon Kirby from the computational linguistics view; I believe a lot of this is contained
here), and through the kinds of papers presented recently at
conferences (
overviews), including those focusing on semantics over syntax, such assumptions are no longer widely held in contemporary evolutionary linguistics; speech is dealt with because of its evolutionary primacy and interactional properties. In fact, Chomsky himself and various figures attached to him have modified their views over the years, so that what they consider universal is not unimodal and intrinsic, but involves recursion and modularity interacting with the environment. Language is considered a ‘complex adaptive system.’ When speech is referenced, it is to get at the deeper linguistic processes and to analyze the evolution of language across periods of time predating writing.
Also:
http://applij.oxfordjournals.org/content/27/4/729.full
And did I mention
Hopper and emergence? Anyway.
Now the evolutionary/cognitive science looks at how gesture, sight, and sound interact to make meaning within a system; the open issues are about refining our picture of those interactions of modality, neural pathways, etc.
Embodied cognition (I previously linked to Wilson on sensorimotor/visuospatial processes of articulating language in working memory; see also
Susan Goldin-Meadow), multisensory learning, and most recently the neural underpinnings of reading and how it interacts with speech/gesture. In fact, rather than arguing for the primacy of speech, innatist views (e.g. Jackendoff) tend to speculate on amodality, on abstractions in the brain, rather than saying the inbuilt apparatus for speaking is itself a demonstration that speech is language. So really, I can't think of anywhere that treats spoken language as ‘real language’ in itself, unaffected by or subverting writing as a system, with whatever is not sound merely representative of sound. Either the physiological basis is amodal, or it's multimodal. And since language is an emergent system, the environmental aspect is going to be both written and spoken. I'd argue that if they're not equal/incomparable, then writing is dominant, because language is for making meaning and communicating it (re: effects of reading exposure on vocabulary and lexical access/density), and writing's stability across time causes it to subsume and organize spoken language. Only when evolutionary linguistics, applied linguistics, and the cognitive science of reading meet will we have a complete picture, as each is concerned with its own stuff. It doesn't fit the models of language to look only at the biological properties for producing speech, one component of a process that goes deeper than that, and to use those to make claims about language as a whole, a co-evolved system.
Every day I find some new paper on language that demonstrates this focus. Evolutionary linguistics does not treat speech as language in itself. I just found this, for example:
Visual motion aftereffect from understanding motion language (speaks to the importance of vision and motion in listening, re: embodied cognition, e.g. “The findings support a view of cognition in which language comprehension is intimately linked to and dynamically interacts with perception and action. Within this embodied cognition framework, higher-level cognitive processing is grounded in an individual’s perceptuomotor system and their unique interactions with the environment, so variability across individuals becomes an important signal to explain.”)
Neurophysiological origin of human brain asymmetry for speech and language - “Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.”
How Learning to Read Changes the Cortical Networks for Vision and Language
Pasting this as it's relevant to recent comments I made: “Conclusion. Literacy, whether acquired in childhood or through adult classes, enhances brain responses in at least three distinct ways. First, it boosts the organization of visual cortices, particularly by inducing an enhanced response to the known script at the VWFA site in left occipito-temporal cortex and by augmenting early visual responses in occipital cortex, in a partially retinotopic manner. Second, literacy allows virtually the entire left-hemispheric spoken language network to be activated by written sentences. Thus reading, a late cultural invention, approaches the efficiency of the human species’ most evolved communication channel, namely speech. Third, literacy refines spoken language processing by enhancing a phonological region, the planum temporale, and by making an orthographic code available in a top-down manner.”
In addition to the linguistics that deals with writing and speech differently and the neuroscience looking at reading, here are some bits about the ‘literacy hypothesis’ (for early, seminal discussion of the differences between the language of oral and literate cultures, see Walter Ong's ‘Orality and Literacy’):
The literacy hypothesis and cognitive development
Language, Literacy and Mind: The Literacy Hypothesis
You can see iterations of this in other scholarship on writing systems and how they developed.
Other notions in linguistics involve the indexical, self-contained nature of writing, which can refer to itself as it performs its discursive functions. (See the works of Charles Bazerman (and M. A. K. Halliday), often tied to corpus linguistics and its spoken/written distinctions.)
With all of the studies showing the differences between logographs and phonographs (here's another, this time on their effects on picture recognition:
Words and pictures: An electrophysiological investigation of domain specific processing in native Chinese and English speakers), and in light of research on language in the brain generally (ideas of chunking and working memory; the differing roles of the phonological loop, visuospatial sketchpad, executive modulation, and sensorimotor encoding in articulatory rehearsal and lexical access; the way the eye processes text; the multimodal cognitive faculty arising through speech, sign, and writing, each of which linguistics shows to have its own interacting, unique properties), it follows that orthographic ideals and pedagogies of literacy should be shaped around the best ways to process orthographic shapes for lexicogrammatical activation, e.g. balancing iconicity for grapheme-morpheme processes against simplicity for grapheme-articulatory processes; looking at usage differences across mediums, such as multiple literacies in discourse analysis; and drawing on how learning works for pedagogical purposes, such as multimedia learning and spaced retrieval, where enhancements come from visuospatial/sensorimotor combinations augmenting the phonological.
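As a toy sketch of the spaced-retrieval idea mentioned above (the expanding/doubling schedule is my own illustrative assumption, not an algorithm from any of the linked papers):

```python
# Minimal expanding-interval spaced-retrieval schedule: each successful
# recall doubles the gap before the next review; a failed recall resets
# the gap to one day. The doubling rule is an illustrative assumption.

def next_interval(current_days, recalled):
    if not recalled:
        return 1             # reset to a short gap after a failed recall
    return current_days * 2  # expand the gap after a successful recall

# Example: review days for an item recalled successfully every time.
interval, day, schedule = 1, 0, []
for _ in range(5):
    day += interval
    schedule.append(day)
    interval = next_interval(interval, recalled=True)

print(schedule)  # [1, 3, 7, 15, 31]
```

The point is only the shape of the schedule: reviews bunch up early, then spread out as the item stabilizes in memory.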
Hmm, did I miss anything? Perhaps relating metalinguistics and working memory to
metacognition and its relevance to
study? Perhaps add why I think this is important for self-study, re:
affective neuroscience/
learning and the psychology of flow (Csikszentmihalyi) vs. the unscientific tearing down of the writing system forum members are studying. Oh, and the binary opposition thing was me riffing on the above: I've been speculating on logo/phono elements interacting so that phonology and function words can be enhanced with phonographs while the meaning-making comes through the morphographs, and on how neat it would be if the articulatory components actually comprised the morphographs (as when SignWriting words, thanks to their iconicity, form dynamically decomposable/recombinable wholes out of constituents that are themselves multimodal: visual, spatial, gestural, and phonological).
Anyway, I could go on, but I would end up having to re-read my previous posts and links.
Here's a couple of them anyway. ;p
http://onthehuman.org/2010/02/on-the-hum...-language/ (He talks about relaxed selection here and again
here: “Surprisingly, the relaxation of selection at the organism level may have been a source of many complex synergistic features of the human language capacity, and may help explain why so much language information is “inherited” socially.”)
Rationality and the literate mind (on writing restructuring consciousness)
For intersections of the problems of focusing on phonology too much, especially with logographic languages, see the work of Mary Flaherty, such as:
An Investigation of the Stroop Effect Among Deaf Signers in English and Japanese: “There may well be an affinity between logographic script and sign (Flaherty, 1998, 2003), as was shown in the reduced Stroop effect for deaf signers reading Japanese relative to deaf signers reading English... Some of the important questions include... How would incorporating some sort of logographic visual scripts to bridge the gap between phonologically based script and a visual language work in the classroom?”
Also, for similar dangers to a similar group of readers with regard to phonocentrism in reading pedagogy:
Phonology and Reading: A Response to Wang, Trezek, Luckner, and Paul or
The Role of Phonology in the Word Decoding Skills of Poor Readers - “It is suggested that in prelingually deafened readers, but maybe also in dyslectic readers, teachers should foster the development of orthographic knowledge as the basis for proficient reading without making such development contingent on the processing of the phonology of written words.”
Or Mairéad MacSweeney: “The fact that congenitally profoundly deaf readers can perform spoken language phonological tasks at an above-chance level suggests that they gain knowledge about phonological structure from modalities other than audition. Information may be derived from visual input in the form of orthography... Such studies suggest that phonological representations may best be thought of as supramodal or amodal.”
Some of that goes into memory span and logographic systems (
Deaf Signers Who Know Japanese Remember Words and Numbers More Effectively Than Deaf Signers Who Know English), but the flexibility of memory span and its trainable/cultural dependence (relevant to Chinese/Japanese and the costs of phonological excess) can be seen
here: “In summary, the "magical number 7," which is so often heralded as a fixed parameter of human memory, is not a universal constant. It is merely the standard value for digit span in one special population of Homo sapiens on which more than 90% of psychological studies happen to be focused, the American college undergraduate! Digit span is a culture- and training-dependent value, and cannot be taken to index a fixed biological memory size parameter. Its variations from culture to culture suggest that Asian numerical notations, such as Chinese, are more easily memorized than our Western systems of numerals because they are more compact.” Further tangent from same book: “Memory span, indeed, is not an invariant biological parameter such as blood group that can be measured independently of all cultural factors. It varies considerably with the meaning of the items to be stored.”
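A toy illustration of the compactness point in that quote, using digit names (the syllable counts are my own tallies for the standard English and Mandarin digit names; a fixed-duration phonological loop simply holds more one-syllable items):

```python
# Syllables per digit name, 1-9. English "seven" is the only two-syllable
# digit; every standard Mandarin digit name (yi, er, san, ...) is one
# syllable, which is the compactness advantage the quote describes.

english = {"one": 1, "two": 1, "three": 1, "four": 1, "five": 1,
           "six": 1, "seven": 2, "eight": 1, "nine": 1}
mandarin = {"yi": 1, "er": 1, "san": 1, "si": 1, "wu": 1,
            "liu": 1, "qi": 1, "ba": 1, "jiu": 1}

def avg_syllables(names):
    """Mean syllables per digit name in a language's counting words."""
    return sum(names.values()) / len(names)

print(avg_syllables(english))   # ~1.11 syllables per digit
print(avg_syllables(mandarin))  # 1.0 syllables per digit
```

A crude measure, but it shows why a span limited by rehearsal time, rather than by a fixed item count, favors the shorter numeral names.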
Reading should not be forced into only representing speech. Writing is language too. Both variably use the same modalities, which are underpinned by more general cognitive processes, co-evolving with culture and highly dependent on language use, both written and spoken.
And in case I missed it: these are arguments against looking at things in terms of speech vs. writing, against taking the fundamental evolutionary aspects of communicating language as the end rather than as a means. All writing systems probably should have some phonetic component, because we have onboard means of articulating language that way, but that's just the tip of the iceberg; language and writing themselves go well beyond this.
Also on multiple literacies in Japanese, see Reading Japan Cool: Patterns of Manga Literacy and Discourse, and tangentially,
Digital Youth, and recently I found this (another tangent ;p):
Subversive script and novel graphs in Japanese girls’ culture