Here are several articles for your perusal. Randomly add them to your speculations on self-study.
Another article from Lehrer, everyone's favourite neuro-blogger person (well, besides the fellow from Mind Hacks): http://scienceblogs.com/cortex/2009/08/grit.php - The Truth About Grit
The paper (.pdf) in question cites Ericsson (whose work was bastardized by Malcolm Gladwell in 'Outliers'), mentioned here, on 'deliberate practice'. Rather than discussing the myriad social factors related to those sustained efforts, it seems intent on finding an inherent, measurable trait that enables them: it wants to identify 'grit' in children as much as 'talent', and it also encourages an instructional emphasis on 'stamina' as an embrace of mistakes and a long-term view--at least there we agree, even if I think they're going about it backwards.
Their interest in discovering 'traits' staticizes them by treating them as intrinsic and less flexible; instead we should try to find ways to encourage work/flow that can be sustained across a variety of lifestyles. Their research is still at a stage that relies on self-reporting and is vulnerable to social desirability bias.
On using technologies to enhance one's learning environment? I like how they slip in the mutability of 'memetics', getting away from slow biological metaphors in the vein of Kate Distin's critical views:
http://www.sciencedaily.com/releases/20 … 143958.htm - Cultural Evolution Continues Throughout Life, Mathematical Models Suggest
With many learning opportunities, the individual’s opportunities to actively choose among different cultural variants are of great importance to his/her development. Earlier choices form a foundation for choices to come, and clear differences can be discerned between the cultural evolution of different individuals that can be tied to how often they are exposed to cultural influences.
The factor that is of the greatest importance in the development of the theory is the so-called frequency of exposure, which shows that the fewer occasions for exposure an individual encounters, the weaker that individual's evolution is. In such cases the capacity for dissemination is what determines evolution, in the same way as with biological evolution.
“One finding that surprised us was that who the individual inherited the culture from did not have any direct impact on the results. In other words, it made no difference whether the culture was passed on by the parents, from peers, or from the collective. The very fact that the cultural heritage is not tied to the parents, which has been regarded as the most important difference between biological and cultural evolution, also strengthens our theory.”
- That last bit is why I'm just 'nest0r', I try to live that principle where identity is unimportant...
Why Sleep? Snoozing May Be Strategy To Increase Efficiency, Minimize Risk - http://www.sciencedaily.com/releases/20 … 161333.htm
The researchers concluded that sleep itself is highly adaptive, much like the inactive states seen in a wide range of species, starting with plants and simple microorganisms; these species have dormant states — as opposed to sleep — even though in many cases they do not have nervous systems. That challenges the idea that sleep is for the brain, said Siegel. ...
In humans, the brain constitutes, on average, just 2 percent of total body weight but consumes 20 percent of the energy used during quiet waking, so these savings have considerable adaptive significance. Besides conserving energy, sleep invokes survival benefits for humans too — "for example," said Siegel, "a reduced risk of injury, reduced resource consumption and, from an evolutionary standpoint, reduced risk of detection by predators."
On the importance of video?
Using Multiple Senses in Speech Perception - http://www.medicalnewstoday.com/articles/138829.php
We receive a lot of our speech information via visual cues, such as lip-reading, and this type of visual speech occurs throughout all cultures. And it is not just information from the lips: when someone is speaking to us, we also note movements of the teeth, tongue and other non-mouth facial features. It's likely that human speech perception has evolved to integrate many senses together. Put another way, speech is not meant to be just heard, but also to be seen.
Related to discussions such as here:
Taking Up Music So You Can Hear - http://www.sciencedaily.com/releases/20 … 142857.htm
Such populations could benefit from the reordering of the nervous system that occurs with musical training, according to the study. Because the brain changes with experience, musicians have better-tuned circuitry -- the pitch, timing and spectral elements of sound are represented more strongly and with greater precision in their nervous systems.
"Musical training makes musicians really good at picking out melodies, the bass line, the sound of their own instruments from complex sounds," Kraus said. Now, for the first time, this study has confirmed that such fine tuning of the nervous system also makes musicians highly adept at translating speech in noise.
The finding has particular implications for hearing certain consonants which are vulnerable to misinterpretation by the brain and are a big problem for some poor readers in a noisy environment. The brain's unconscious faulty interpretation of sounds makes a big difference in how words ultimately will be read.
Lastly:
Tone-deaf people have fewer brain connections - http://www.newscientist.com/article/mg2 … tions.html
These brain connections are also used for speech and language, so "rehabilitation strategies for tone-deafness may also help with speech and language disorders", says Loui.
Last edited by nest0r (2009 August 24, 8:28 am)
Argh! You and your addictively fascinating, but English-language, articles.
Grit: Oh please, oh please, oh please don't try to teach it in school. My brief school-directed experiences with flashcards were almost enough to keep me from trying SRS. I can't imagine they'll get it right.
In particular, the ideas that hard work sucks; that it's no pain, no gain; that life is a trade-off between fun and success; that forcing yourself to do things is indispensable (ideas schools already try to program) are just plain wrong, completely discounting the human need for fun with the assumption that fun is necessarily short-sighted.
Culture: no comment. Don't want to read it.
Sleep: I call BS. If sleep is purely an energy saving measure, why REM sleep with its intense levels of activation--even more intense than waking? Why do rats kept awake die? Why does sleep rebound occur, or predators sleep hungry? http://en.wikipedia.org/wiki/Sleep#Preservation
Multi-sense speech perception: Well, duh. Language is always gestural: gestures of the vocal organs, pen, hands, fingers on a keyboard, etc. Sound is not required for language. It doesn't surprise me the least that the brain uses non-aural information.
Music and audio processing: Again, duh. And learning to sketch improves your vision. Neuroplasticity, man. Neuroplasticity.
Summary: I've only spent half an hour on this, rather than the hours it could have. Interesting stuff that probably demands more than I've given it, but I only have about 15 hours a day to do stuff. Sorry.
The multiple-sense article seems kind of unneeded. Sure, we look at someone's lips when they are speaking... but it makes almost no difference. If it did, how could we possibly have radio, music, podcasts? One can understand spoken language just fine without any visual sense at all.
Tobberoth wrote:
The multiple-sense article seems kind of unneeded. Sure, we look at someone's lips when they are speaking... but it makes almost no difference. If it did, how could we possibly have radio, music, podcasts? One can understand spoken language just fine without any visual sense at all.
I have to respectfully disagree here. My own experiences with visual cues, being hard of hearing, have given me a different insight into this. If I talk to someone under normal day-to-day circumstances, I can understand them without looking for visual cues (lip movement, hand gestures, facial expressions, etc.). However, if they are talking in a noisy environment or about something completely new to me, then I tend to unconsciously shift to include these visual cues.
I can see what you're saying, but I feel that visual senses can, and do, play a role in enhancing both our understanding and expression in a language, for everyone. While some people may be more aware of its subtleties than others, it remains a basic, universal component of human language and interaction. It cannot be dismissed. My 2 cents.
I didn't say it should be dismissed, just that radio is proof that we can do fine without it. Blind people can speak just as well as people with good eyes. Are visual cues a factor? Yes. Are they worth considering in a situation of self-study? No. The eventual outcome of such a consideration might be, for example, to only mine from videos, since there are no visual cues in audio or text. A person doing that would, while always having visual cues, seriously handicap their learning.
Surely we can all agree that visual cues enhance the message. So while we can understand radio, we know that we'd get more information from a video of the same speaker. Consider courtroom testimony. In the present system we need to see the witness in order to assess credibility, etc. A tape recording isn't good enough. A written transcript is even worse.
So to improve your conversation abilities, supplement your studies with video and real-life people. It obviously doesn't mean using only video; video isn't going to improve your reading skills. (But I think T is making the point that some folks tend to take simplified learning advice to silly extremes.)
I get the impression Japanese relies on indirect communication even more than many other languages. So much left unsaid is interpreted through body language, facial expression and shared expectations (as well as tone and word choice). It's worth getting a feel for.
One thought on the application that cuts video into single phrases (subs2srs?). While that might fit the ideal of shorter SRS bits, I think it reduces the true benefit of studying dialogue. The (direct and indirect) meaning is often found in the exchange, not in isolated phrases. Memory of the setting isn't a perfect substitute for observing the dialogue as a chunk. Perhaps having a dialogue deck would be a useful alternative?
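The "dialogue deck" idea above could be sketched in code: rather than one card per subtitle line, merge lines into a single exchange whenever the silence between them is short. This is only a toy illustration, not how subs2srs actually works; the timestamps, Japanese lines, and the 2-second gap threshold are all made-up assumptions.

```python
# Toy sketch of a "dialogue deck" chunker: merge consecutive subtitle cues
# into one exchange when the gap between them is small.

from datetime import timedelta

def parse_ts(ts: str) -> timedelta:
    """Parse an SRT-style timestamp like '00:01:02,500' into a timedelta."""
    hms, ms = ts.split(",")
    h, m, s = hms.split(":")
    return timedelta(hours=int(h), minutes=int(m), seconds=int(s),
                     milliseconds=int(ms))

def chunk_dialogue(cues, max_gap=timedelta(seconds=2)):
    """Group consecutive (start, end, text) cues into exchanges whenever the
    silence between one cue's end and the next cue's start is <= max_gap."""
    chunks = []
    for start, end, text in cues:
        if chunks and start - chunks[-1]["end"] <= max_gap:
            chunks[-1]["end"] = end            # extend the current exchange
            chunks[-1]["lines"].append(text)
        else:                                   # long silence: new exchange
            chunks.append({"start": start, "end": end, "lines": [text]})
    return chunks

# Hypothetical cues: two lines 0.5 s apart, then one 6 s later.
cues = [
    (parse_ts("00:00:01,000"), parse_ts("00:00:02,500"), "こんにちは。"),
    (parse_ts("00:00:03,000"), parse_ts("00:00:04,000"), "お元気ですか。"),
    (parse_ts("00:00:10,000"), parse_ts("00:00:11,500"), "はい、元気です。"),
]

chunks = chunk_dialogue(cues)
print(len(chunks))          # 2: first two cues merge, third stands alone
print(chunks[0]["lines"])   # ['こんにちは。', 'お元気ですか。']
```

Each resulting chunk's start/end times could then be fed to a media cutter to produce one clip per exchange instead of one per phrase.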
Last edited by Thora (2009 August 24, 1:50 pm)
IceCream wrote:
Thora, I really agree with that. It's definitely the most interesting thing in dramas, to see how people interact.
I don't know if Thora had dramas in mind when she talked about videos of speakers, but dramas are apparently not real. Actors are there to make it look real, but as everything is scripted there is no need for the conversational functions and the underlying social behavior which make conversational dialog so complex, e.g. processes of turn-taking, initiative, and grounding.
And as for the visual cues (gaze, facial expression, gesture, lip movement, body movement), they are more important for fulfilling those conversational functions than for understanding the utterances... perhaps with lip-reading and gestures as a helpful, but not needed, exception. Besides that, they have their own meaning and communicate various things (without spoken language).
As for noisy environments, don't forget that we have two ears and hear in stereo so speaker localization (and filtering the rest out) plays an important role too.
EDIT: so many oversights...
Last edited by thorstenu (2009 August 26, 9:20 am)
thorstenu wrote:
IceCream wrote:
Thora, I really agree with that. It's definitely the most interesting thing in dramas, to see how people interact.
I don't know if Thora had dramas in mind when she talked about videos of speakers, but dramas are apparently not real. Actors are there to make it look real, but as everything is scripted there is no need for the conversational functions and the underlying social behavior which make conversational dialog so complex, e.g. processes of turn-taking, initiative, and grounding.
And as for the visual cues (gaze, facial expression, gesture, lip movement, body movement), they are more important for fulfilling those conversational functions than for understanding the utterances... perhaps with lip-reading and gestures as a helpful, but not needed, exception. Besides that, they have their own meaning and communicate various things (without spoken language).
As for noisy environments, don't forget that we have two ears and hear in stereo, so speaker localization (and filtering the rest out) plays an important role too.
In a fun ironic twist, the point of the above post makes me think radio is actually a much better study tool for dialogs than drama. Drama might have a visual cue, but the conversations aren't real. On radio, conversations are.
IceCream wrote:
Yeah, you're right... I had been thinking about dramas, so that's what came into my mind. But any other source can be used in the same way to analyse various different things. Hmm, what I was really trying to say is that the designer of subs2srs was talking about making longer bits available, so the dialogue thing could be possible...
Tobberoth, what do you think about variety shows? I haven't watched any really, but do they have any real unscripted dialogue in them?
Depends on the show, but most of the stuff is completely unscripted. It's like radio on TV. This of course also makes it very hard, London Hearts is way beyond any dramas I've seen, even if they use those crazy colored subs variety shows often use.
The process of being in the media still constrains the dialogue, even when it's 'unscripted', but I guess it's the best we can get, for now. Short of RevTKers in Japan walking around and recording Japanese for us.
There's also videotaped interviews that have similar styles to the radio broadcasts, except you get the added benefit of visuals. Some people have better deliveries than others. Ikuta Toma feels like he's improvising even when he's presumably reading from scripts in ドラマ. Such a nice boy, Ikuta Toma.
Last edited by ruiner (2009 August 26, 8:18 am)
ruiner wrote:
The process of being in the media still constrains the dialogue, even when it's 'unscripted', but I guess it's the best we can get, for now. Short of RevTKers in Japan walking around and recording Japanese for us.
There's also videotaped interviews that have similar styles to the radio broadcasts, except you get the added benefit of visuals. Some people have better deliveries than others. Ikuta Toma feels like he's improvising even when he's presumably reading from scripts in ドラマ. Such a nice boy, Ikuta Toma.
Here's some Japanese vlogging links, I haven't checked them out: http://yonasu.com/more-japanese-listeni … -cotorich/
You can see if you experience this McGurk effect in this video: http://www.physorg.com/news153579230.html (The guy in the white coat seems a bit McGurkish to me.) McGurkism wasn't at the heart of my comments on the value of using conversation to learn conversation, but it's kind of interesting.
Wikipedia wrote:
It has also been examined in relation to witness testimony; Wareham & Wright's 2005 study showed that inconsistent visual information can change the perception of spoken utterances, suggesting that the McGurk effect may have many influences in everyday perception.
As for authenticity: the dramas I've seen are very exaggerated, but perhaps different types of dramas are available. If you're not around Japanese people, I find films, interviews, and talk shows to be better sources. I listen more to radio talk shows b/c it's convenient. I don't think conversation needs to be 100% authentic for it to have some value to language learners (in terms of interpreting real meanings, absorbing how meaning can be communicated non-verbally, and social/cultural stuff). Yes, this is separate from knowing what the utterance means if you were to read it.
We have this intuition in our native languages. While I'm sure we can apply much of it to conversations in Japanese, there are some differences. Most of my many goofups in Japan arose from the read-between-the-lines stuff, not the words. I was usually blissfully unaware until...
So, variety... moderation... simple... no opposing categories... kay?
Last edited by Thora (2009 August 27, 8:37 pm)
Thora wrote:
You can see if you experience this McGurk Effect in this video: http://www.physorg.com/news153579230.html (The guy in the white coat seems a bit McGurkish to me)
"... the senses did not evolve in isolation from each other, but actually work together to help us perceive our world. When multiple senses are stimulated simultaneously, the brain begins to experience an information rich learning experience and laps it up like ice cream. It encodes more information per unit time, and it remembers that information better, too."
Ha ha ha, thank you Thora, I'm checking out this fellow's 'Brain Rules' now, looks like a cool book.
" Cognitive psychologist Richard Mayer probably has done more
than anybody else to explore the link between multimedia exposure
and learning. He sports a 10-megawatt smile, and his head looks
exactly like an egg (albeit a very clever egg). His experiments are
just as smooth: Divide the room into three groups. One group gets
information delivered via one sense (say, hearing), another the same
information from another sense (say, sight), and the third group the
same information delivered as a combination of the irst two senses.
The groups in the multisensory environments always do better
than the groups in the unisensory environments. They have more
accurate recall. Their recall has better resolution and lasts longer,
evident even 20 years later. Problem-solving improves. In one study,
the group given multisensory presentations generated more than
50 percent more creative solutions on a problem-solving test than
students who saw unisensory presentations. In another study, the
improvement was more than 75 percent!
The benefits of multisensory inputs are physical as well. Our
muscles react more quickly, our threshold for detecting stimuli
improves, and our eyes react to visual stimuli more quickly. It’s not
just combinations of sight and sound. When touch is combined with
visual information, recognition learning leaps forward by almost 30
percent, compared with touch alone. These improvements are greater
than what you’d predict by simply adding up the unisensory data.
This is sometimes called supra-additive integration. In other words,
the positive contributions of multisensory presentations are greater
than the sum of their parts. Simply put, multisensory presentations
are the way to go.
Many explanations have been put forth to explain these
consistent findings, and most involve working memory. You might
recall from Chapter 5 that working memory, formerly called short-
term memory, is a complex work space that allows the learner to
hold information for a short period of time. You might also recall its
importance to the classroom and to business. What goes on in the
volatile world of working memory deeply affects whether something
that is taught will also be learned.
All explanations about multisensory learning also deal with a
counter-intuitive property lurking at its mechanistic core: Extra
information given at the moment of learning makes learning better.
It’s like saying that if you carry two heavy backpacks on a hike instead
of one, you will accomplish your journey more quickly. This is the
“elaborative” processing that we saw in the chapter on short-term
memory. Stated formally: It is the extra cognitive processing of
information that helps the learner to integrate the new material with
prior information. Multisensory experiences are, of course, more
elaborate. Is that why they work? Richard Mayer thinks so. And so do
other scientists, looking mostly at recognition and recall. "
He goes on to talk about synesthetes (that's a fashionable Japanese meme right now, I think) and senses besides hearing and vision. Anyway, a nice primer.
Oh, and in addition to that last paragraph about elaborate experiences ["1) The more elaborately we encode information at the moment of learning, the stronger the memory."]:

"2) A memory trace appears to be stored in the same parts of the brain that perceived and processed the initial input.

3) Retrieval may best be improved by replicating the conditions surrounding the initial encoding."
Anyway...
Last edited by ruiner (2009 August 27, 9:46 pm)
hehe You quoted me before I edited out that last paragraph. (I thought I might offend someone)
above wrote:
When touch is combined with visual information, recognition learning leaps forward by almost 30 percent, compared with touch alone.
uh oh. SRS that.
Last edited by Thora (2009 August 27, 9:40 pm)
Thora wrote:
hehe You quoted me before I edited out that last paragraph. (I thought I might offend someone)
above wrote:
When touch is combined with visual information, recognition learning leaps forward by almost 30 percent, compared with touch alone.
uh oh. SRS that.
Muscle memory? Works for me. ;p He also talks about smell (Proust, etc). Guess I've got cooking words down.
Last edited by ruiner (2009 August 27, 9:47 pm)
IceCream wrote:
Smell seems hugely powerful as a memory trigger. I'm surprised, really, that it hasn't been used somehow yet... maybe it's the wrong type of memory trigger, I dunno. That stuff was really interesting, ty!
I was joking about the specific cooking words/smells, but he does make some interesting arguments about how to use smell.
There's probably more information on this sort of thing in Lehrer's 'Proust was a Neuroscientist', but in Brain Rules he talks about smell being used in some ambient, passive ways to enhance recall states. He mentions one test where students exposed to a smell (associated with initial learning process) during sleep did better on tests the next morning--and similarly with spraying perfume in a class then during tests improved test scores versus those who didn't have it sprayed during learning. I'll need to do more looking into this sort of thing, but if I were studying before a trip to Japan, perhaps I'd target all my (output?) sessions around cooking sessions of Japanese foods, whilist the fragrances still waft about the domicile.
Wouldn't be hard for me, since I'm constantly cooking that stuff purely for aesthetic immersion purposes. I have consciously replaced all of my 'comfort foods' (this being #1 - ah, that smell floods me with many 日本語-related memories... same with the effusions of my Zojirushi neuro-fuzzy).
Last edited by ruiner (2009 August 28, 10:20 am)
Here's another interesting aspect of sound/visuals intersecting:
The 'Pip' and 'Pop' effect:
http://scienceblogs.com/cognitivedaily/ … ound_h.php
http://www.psy.vu.nl/pippop/ - demo
https://docs.google.com/gview?a=v&q … type%3Dpdf - paper
I did all the demos. The no-sound ones took me twice as long to complete as the ones with sound--even the simple silent ones versus the complex talkies (well, not talkies, but you know what I mean).
Last edited by ruiner (2009 August 28, 9:24 am)