I addressed that in the edit to my previous comment. Not sure why I used the word "cited", I meant that it's often considered to be the most difficult language. I've seen this opinion expressed on language forums and even in some non-linguistic circumstances (like a comic book review by the Internet personality Linkara), so I assumed that it was a somewhat popular idea, but I didn't mean to imply that it had serious academic backing.
Last edited by vonPeterhof (2012 September 30, 4:29 pm)
The FSI list is a nice indication, and I wouldn't disagree with what it says, but it only covers very few languages.
I thought Japanese was harder than Mandarin, but I'm not that advanced in Mandarin.
I'm sure Korean is as hard as Japanese as a spoken language, maybe even harder, I wouldn't know, but the writing system is enough to make it a much simpler language to learn.
Many people mention Hungarian -- I did a challenge with another polyglot friend in February in Finnish (a relative of Hungarian) and we both reached A2 in a month (under 35 hours) -- evaluated by a Finnish teacher at Helsinki University. There's no way this compares to Japanese or Mandarin.
Personally, if I had to pick a hard language, for a challenge for instance, I'd go for the likes of Inuktitut or Mongolian. Basically, even if a language were simpler, if there are no resources for it, then the language will be much harder to learn. The same is true if the language is spoken by few people, across several very different dialects, as is the case with many Amerindian languages like Cree. Navajo is certainly one such language. Harder yet are languages tied to a reality very different from yours.
Last edited by AlexandreC (2012 September 30, 6:40 pm)
EratiK wrote:
First, my quote isn't an opinion, it's just logic. If I have a lot of books, I'm more likely to forget the ones I've read. If I re-read the same books over and over, I know them by heart. What is the difficulty here?
Second, "Blake 81" has no title on the site you linked, so the claim that "each Aboriginal language comprises approximately 10 000 words" should be taken cautiously; it could be more, even a lot more.
Lastly, when you don't have numbers, and apparently don't know what you are talking about, you don't make baseless claims -- anyone who knows even a little about anthropology knows that a belief system is at least as important as the everyday world and isn't the monopoly of spiritual leaders.
It's still an opinion, as you won't convince me that their entire vocabulary consists of nothing but essential every day words. Approximately means "around," so it's unlikely to be "a lot more."
Seeing as their learning system is different to ours, i.e. they learn about things and new words over time, with age and experience, I am willing to bet their "elders" have a much bigger vocabulary than a young adult. This does not apply to us, since any high school kid can have a bigger vocabulary than their own grandfather, precisely because of education.
"Blake, BJ 1981, Australian Aboriginal languages, Angus & Robertson, Sydney"
vonPeterhof wrote:
magamo wrote:
I heard languages spoken in Africa and regions geographically close to Africa typically have richer phonetic inventories because they tend to retain older phonemes like clicks.
I've heard of this theory. IIRC it was meant to tie the "out of Africa" theory of human origins and migrations (the dominant theory nowadays) to the "proto-human language" theory and show that all human languages descend from the same ancestor, which was spoken in Africa. I don't have any objections to this claim, but I find the assumption that clicks must be an old feature of human language dubious (and most modern linguists seem to agree). I've always thought that the idea that clicks are a primordial feature rather than a more recent innovation was based on patronizing attitudes towards hunter-gatherer cultures ("their lifestyles are primitive, which means that their languages are primitive, which means that clicks are also primitive"). Besides, the language spoken in the "old country" is quite often less conservative than that spoken in regions settled later (e.g. Danish vs. Icelandic, Mandarin vs. Cantonese, Satem vs. Centum).
Ah, I didn't mean clicks were "primitive" or anything like that. Certainly any language, including contemporary English, may acquire click consonants in the future. However, being complex consonants, they seem to be subject to loss more often than gain. I meant "older" in this sense. Again, by this I'm not saying a more "primitive" form of English or its ancestors probably had them. It's just that some ancestors of today's native English speakers were probably speaking equally developed languages with clicks at some point in the past. If English acquires a click as a phoneme, I would call it a "newer" sound when talking about the relative difficulty of learning for native English speakers.
As for the argument some people are having here about the number of words a language has, I think employing such a number to assess difficulty for foreigners, or the absolute complexity of a language, is total nonsense.
There are a number of reasons for this claim. The quote from Wikipedia vonPeterhof gave explains this quite well. But if it's not convincing enough for you, think of this ridiculous claim:
a random idiot wrote:
English must be simpler and easier to master because it has fewer than 30 letters. Chinese is 100 times harder because you need thousands of characters.
Of course, English and Chinese are equally expressive, and neither is more complex than the other. The superficial difficulty you may encounter at the very early stage of learning Chinese shouldn't be a definite obstacle to your learning unless your goal is pretty low. There is one big flaw in my analogy, though, because you can certainly artificially increase the complexity of an orthography by, for example, simply adding some strokes to each character. In that case, the artificially created Chinese is more complex while being equally expressive. But the vocabulary a native speaker acquires through normal life is not something artificially inflated or simplified; vocabulary evolves as the language naturally changes. Hence, I argue that the number of words, even if you somehow succeed in giving it a reasonable definition, doesn't matter when considering the complexity or difficulty of learning a specific language.
Among other reasons, one that doesn't seem to have been mentioned here is that the number of fixed phoneme sequences dedicated to abstract concepts can be larger simply because the total number of speakers is larger. In a sense, the more people speak a language, the more words it gets. No one is an expert in everything. Hence, even if two languages had exactly the same system for expressing concepts, the one with more speakers would have more "words." A concept that comes up frequently only within some small group of a large speech community may hardly appear at all among the speakers of a smaller language, so the language with fewer speakers will more likely express that concept by some means other than a short, dedicated sound sequence (a "word").
More people imply more topics, more areas of expertise, and generally a larger variety of concepts requiring dedicated short sequences of phonemes, if you examine every single speaker. But this certainly doesn't mean a learner must memorize more words just because the language has more speakers and hence more pre-assigned phoneme sequences (words, if you will). And of course it doesn't mean a language with a bigger dictionary is more complex, more expressive, or harder to learn. That's just a dumb argument.
Anyway, at the end of the day, the language that is physically accessible is easier, e.g., you have friends who speak it natively, it has a wider range of texts and dictionaries for learners, there's ample entertainment material, etc. etc.
If you still want to compare complexity or difficulty for a native English speaker as a learner, that should be assessed through the kind of linguistic aspect that is known to be impossible for an adult to learn. For example, there is empirical evidence supporting the idea that adults can't develop prototypes of phonemes. So, if your target language's phonetics differs in a way where the "unlearnable" part impedes communication, that must be a harder language to learn. Then again, it's true that this doesn't really matter if the target language is just a tool for basic communication and you only need to be good enough. In that case, superficial complexity such as more noun cases, complex conjugation tables, and more characters/letters would play a much more important role.
Last edited by magamo (2012 October 01, 7:45 am)
@magamo
What are prototypes of phonemes?
delta wrote:
@magamo
What are prototypes of phonemes?
It's kind of difficult to properly define it if you haven't heard of it before because it's a technical term in linguistics. Roughly speaking, it's THE proper sound of a given phoneme in the mind of a native speaker. More precisely, it's the center point of the perceptual magnet effect that only real native speakers show. If you don't know the perceptual magnet effect either, it's the kind of phenomenon even a totally proficient nonnative speaker (e.g., a speaker whose dominant language is different from the one s/he used to speak when s/he was a child) doesn't exhibit. I think I talked about the effect somewhere in this long post.
A tl;dr version is that nonnative speakers are typically far better at noticing subtle differences between sounds that fall into a single phoneme category of the language they're learning, which actually prevents them from efficiently perceiving the sounds of that language because their brains don't throw away unimportant details. For example, it is often said that Japanese speakers can't hear the difference between l and r. This is correct in a sense. But the truth is that monolingual Japanese speakers' ears register those sounds too accurately for their brains to process them efficiently. The sounds feel the same to them because they don't know where the boundary between the two categories lies. It's not that Japanese ears don't hear the sounds correctly; monolingual Japanese speakers differentiate sounds within the l and r ranges too well, actually much better than native English speakers do.
Of course, as is evidenced by many multilinguals out there, proper training and exposure to the target nonnative language over an extended period can allow for alternative ways of handling troublesome phonemes. But their brains no longer function exactly the same way as native speakers'.
This is a striking difference between learnable aspects of language (such as grammar and vocabulary) and some phonetic aspects. The latter are the most difficult part in the sense that they're literally impossible to learn. You can work around the problem, though, as long as you're not learning the language the wrong way.
Edit: Ah, probably I should've explained it this way:
Humans can typically tell apart two subtly different sounds within a phoneme of a language that is not their own. But a native speaker hears them as exactly the same, because their brain automatically throws away the difference as irrelevant information. This imaginary sound that exists only in native speakers' minds is the prototype. So, for example, a nonnative speaker doesn't hear the prototype when presented with a sound that is only slightly off, because their brain doesn't discard the irrelevant detail and perceives the sound all too accurately, exactly as it is presented.
In one sense, the commonly told reason why nonnative speakers can't tell two different phonemes is backward. It's not the nonnative speakers that don't hear subtle differences. It's the native speakers.
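That warping can be caricatured numerically. Here is a toy sketch (my own made-up illustration, with invented numbers; not a model from the perception literature): stimuli near a hypothetical prototype get perceptually pulled toward it, so differences within the category shrink.

```python
import math

# Toy perceptual-magnet model: stimuli near a prototype are "pulled" toward it,
# shrinking perceived within-category differences. All numbers are invented.
PROTOTYPE = 0.0   # hypothetical prototype location on some acoustic dimension
STRENGTH = 0.9    # how strongly the magnet attracts nearby stimuli
WIDTH = 1.0       # how far the attraction reaches

def perceived(x):
    """Warp an acoustic value toward the prototype (native-like perception)."""
    pull = STRENGTH * math.exp(-((x - PROTOTYPE) ** 2) / WIDTH)
    return PROTOTYPE + (x - PROTOTYPE) * (1 - pull)

# Two stimuli near the prototype end up perceptually much closer together...
near = abs(perceived(0.3) - perceived(0.1))
# ...than two equally spaced stimuli far from it.
far = abs(perceived(2.3) - perceived(2.1))
print(near < far)  # → True
```

The point of the sketch is only the inequality at the end: equal acoustic distances are not equal perceptual distances once a prototype exists.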
Last edited by magamo (2012 October 01, 11:25 am)
magamo wrote:
In one sense, the commonly told reason why nonnative speakers can't tell two different phonemes is backward. It's not the nonnative speakers that don't hear subtle differences. It's the native speakers.
As adults (though the cutoff is in infancy), we have (generally) lost the ability to identify various sounds as allophones and regroup them under the same phoneme. Native speakers have lost this as much as non-native speakers.
AlexandreC wrote:
magamo wrote:
In one sense, the commonly told reason why nonnative speakers can't tell two different phonemes is backward. It's not the nonnative speakers that don't hear subtle differences. It's the native speakers.
As adults (though the cutoff is in infancy), we have (generally) lost the ability to identify various sounds as allophones and regroup them under the same phoneme. Native speakers have lost this as much as non-native speakers.
I was talking about a totally different phenomenon.
By definition, using the wrong allophone doesn't impede communication. Using the wrong phoneme does, and can be critical. While I haven't seen research on whether the same, correct allophone categorization as native speakers' can be acquired, it is known to be already impossible at the phoneme level. This is the point I tried to make.
The phoneme acquisition process is not simple. You acquire the ability to hear a prototype in your native language whenever you hear a sound within a certain range around the imaginary proper sound. A person whose native language has no phonemes similar to it will hear it the way it is, without any warp or shift in perception. In this sense, it's only the native speakers who have lost the ability to hear minute differences. In other words, native speakers have acquired a frame of reference and learned what's important and what's not for classifying phonemes, while a person whose native language doesn't have equivalent sounds still hears every detail without bias. (And this is NOT about allophones, if you're still confusing the two phenomena in language acquisition.)
For example, the difference between /p/ and /b/ before a vowel lies in when your vocal cords start vibrating (the voice onset time). /b/ requires voicing very early, while in the combination of /p/ and a following vowel, voicing occurs much later.
There is a threshold in this timing that separates /p/ from /b/. So, if a native English speaker hears this type of stop consonant with vocal cord vibration starting only slightly earlier than the threshold point, they hear the prototype of /b/, making them believe they heard a correctly pronounced /b/. If you artificially create the same stop sound with vocal cord vibration starting very early and have the same native speaker hear it, they still hear the same prototype of /b/. In fact, native English speakers can't reliably tell them apart better than chance.
However, a native speaker of a language that doesn't have these phonemes can not only tell them apart, but can also reliably order the sounds from earlier voicing to later voicing if you have them hear the same type of consonant with various voicing points. They can sort, for example, exaggerated /b/ with very early voicing, typical /b/, almost-/p/ but still below the threshold, /p/ with voicing only milliseconds into the /p/ side, typical /p/, and exaggerated /p/. Native English speakers only hear two sounds: the prototype of /p/ and the prototype of /b/.
The threshold varies from language to language if they have both /p/ and /b/. And prototypes may vary as well.
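A toy sketch of this in Python (the 25 ms boundary and the stimulus values here are made up for illustration; real category boundaries vary by language and by study, and perception near the boundary is gradient rather than a hard cutoff):

```python
# Toy model of categorical perception of voice onset time (VOT).
ENGLISH_VOT_BOUNDARY_MS = 25  # illustrative boundary, not a measured value

def native_english_percept(vot_ms):
    """A native listener collapses the VOT continuum onto two prototypes."""
    return "/p/" if vot_ms >= ENGLISH_VOT_BOUNDARY_MS else "/b/"

def naive_listener_percept(vot_ms):
    """A listener with no /p/-/b/ category keeps the raw, sortable value."""
    return vot_ms

stimuli_ms = [-40, 0, 20, 30, 60, 100]  # exaggerated /b/ ... exaggerated /p/

# The naive listener can order all six stimuli from earlier to later voicing:
print(sorted(stimuli_ms, key=naive_listener_percept))

# The native listener hears only two distinct sounds across the continuum:
print(len({native_english_percept(v) for v in stimuli_ms}))  # → 2
```

The sorting task distinguishes the two listeners: six distinguishable stimuli for one, two categories for the other.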
So, it's not just adults losing the ability to hear subtle differences. There is more to why nonnative speakers have trouble correctly classifying phonemes. If you take allophones into account, it gets more complicated. In fact, a p-like sound can appear as an allophonic variant of the phoneme /b/ in English, so the above explanation doesn't take allophones into account at all.
By the way, you seem to be confusing absolute difference and relative difference. For example, a vowel can be approximately classified by its first and second formant frequencies. These two parameters can describe the absolute difference between two sounds. This is the kind of subtle difference I'm talking about. But you seem to be thinking of difference in a relative sense, e.g., "more like a typical word-initial /t/ in English than the unaspirated allophonic variant." You chose one frame of reference, so the difference is relative to that chosen reference. Nonnative English speakers may be able to hear how much aspiration there is in the absolute sense, while native English speakers definitely can't without proper training, because these are allophones. In one sense, your statement is correct. But in another sense, it's wrong, because nonnative speakers may still retain the ability to hear differences in the absolute sense.
To summarize, what we lose as native speakers is the ability to hear differences within a phoneme in the absolute sense; we hear phonemes on a black-and-white basis by discarding all the unimportant information. (To be clear, this is not about allophones.) Nonnative speakers may retain this ability for sounds that don't appear in their native languages. What we acquire as native speakers is the correct frame of reference and the thresholds for each phoneme. And adults can't learn the black-and-white perception for a particular language if they haven't already.
Last edited by magamo (2012 October 03, 1:06 am)
Thanks for the explanation.
magamo wrote:
So, for example, a nonnative speaker doesn't hear the prototype when they hear a sound that is only slightly off because their brain doesn't throw away the irrelevant information and too correctly perceives the sound exactly as it is presented to them.
If I understand correctly, non-native speakers can hear the difference between allophones commonly regarded as single phonemes by native speakers.
For example, in Spanish the letter d is realized as [d] or [ð] depending on its position in the word, so untrained monolingual Spanish speakers will be unable to note the difference between [d] and [ð]. From what you are saying, a learner of Spanish whose language separates the two as phonemes, e.g., an English speaker, can't develop a prototype of d the way a native Spanish speaker does (in which [d] and [ð] are mashed together according to positional rules), but is this even desirable? Also, could this impede communication in an extreme case? If so, do you have any examples?
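The positional rule in question can be sketched as a toy function (my own simplification: real Spanish phonology produces the approximant [ð] in more environments than just "after a vowel," and keeps the stop [d] after pauses, nasals, and /l/):

```python
def realize_d(word):
    """Very simplified Spanish allophony: orthographic 'd' after a vowel
    is realized as the approximant 'ð'; elsewhere it stays the stop 'd'."""
    vowels = set("aeiouáéíóú")
    out = []
    prev = None  # None represents a preceding pause / word boundary
    for ch in word.lower():
        out.append("ð" if ch == "d" and prev in vowels else ch)
        prev = ch
    return "".join(out)

print(realize_d("dedo"))   # → deðo  (word-initial stop, intervocalic approximant)
print(realize_d("donde"))  # → donde (stop retained after the nasal)
```

The point is that the choice of [d] or [ð] is fully predictable from context, which is exactly why native speakers never need to hear the difference.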
magamo wrote:
To summarize, what we lose as a native speaker is the ability to hear the difference within a phoneme in the absolute sense
I was tempted to assert this is false until I read you clarify this doesn't apply to allophones. So, what does it apply to?
Last edited by delta (2012 October 03, 6:52 am)
delta wrote:
If I understand correctly, non-native speakers can hear the difference between allophones commonly regarded as single phonemes by native speakers.
It's more like nonnative speakers can correctly hear the continuous spectrum of sound while native speakers' ears make a black-and-white decision. This doesn't mean nonnative speakers perceive two similar allophones as sounds belonging to two different categories. If their mother tongue doesn't have the pair of sounds, or any similar sounds, then the sounds don't belong to any phonetic category in their mind in the strict sense.
You should realize that being able to hear differences and being able to categorize sounds into two groups are two different things. Categorization requires you to know how sounds are categorized in the language, i.e., the exact range of each phoneme or allophone, while noticing a difference only requires the ability to consciously perceive absolute difference. The word "difference" here is used in the absolute sense. It's not the difference between, say, /d/ and /ð/ with those categories in mind; it's the absolute difference within one big category called "not found in my language." You're probably implicitly assuming the listener knows of the two categories /d/ and /ð/, and the thresholds that determine where a sound starts being perceived as one or the other.
If you're confused, think of someone who has absolute pitch but doesn't know how musical notes are classified into 12 kinds per octave in Western music. He can tell apart sounds at 440Hz and 410Hz, which both belong to A for Westerners who only have relative pitch. But a trained pianist can pick up on this difference; one is the note A in standard tuning while the other is a lower-tuned version. You may think of these as "allophones" in the sense that they sound the same to ordinary Westerners while trained musicians effectively exploit the difference in their work.
Now, can the person with absolute pitch correctly classify the 440Hz and 410Hz sounds without knowing there are two such categories? Of course not. If anything, because he doesn't even know how one octave is divided into 12 notes in Western music, he wouldn't be able to tell whether a particular sound is A or not. The whole notion of the musical note "A" is new to him in the first place.
If musical notes were like phonemes, it would be like this:
Suppose your culture uses only, say, the first 6 of the 12 notes. You play from 440Hz to 880Hz (an octave from A4 to A5) gradually and continuously. At first you hear only 6 notes, as if they were played "digitally," A -> B -> C..., i.e., you feel as if the frequency jumps from one level to the next. Then, when the frequency hits the 7th note, which your culture doesn't use, you suddenly feel the frequency changing gradually. Another culture uses different notes, divided at different frequencies.
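For the Western 12-note grid itself, snapping a frequency to the nearest equal-tempered note is simple arithmetic (a sketch assuming A4 = 440 Hz and standard 12-tone equal temperament):

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_note(freq_hz):
    """Snap a frequency to the nearest 12-tone equal-temperament note,
    measured in semitones relative to A4 = 440 Hz."""
    semitones = round(12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[semitones % 12]

print(nearest_note(440.0))  # → A
print(nearest_note(466.0))  # → A#  (466.16 Hz is the exact A#4)
print(nearest_note(880.0))  # → A   (one octave up)
```

A listener whose culture used only 6 of these categories would, in this analogy, be rounding onto a coarser grid, and sounds off the grid would feel continuous rather than "digital."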
delta wrote:
For example, in Spanish, Dd is realized as /d/ or /ð/ depending on letter position in the word, therefore untrained monolingual Spanish speakers will be unable to note the difference between /d/ and /ð/. From what you are saying, a learner of Spanish whose language separates both phonemes, e.g., an English speaker, can't develop a prototype of Dd like a native Spanish speaker (in which both /d/ and /ð/ are mashed together according to N rules), but is this even desirable? Also, could this impede communication in an extreme case? If so, do you have any examples?
I'm not sure what you mean by "desirable," but two words that rhyme consonant-wise to Spanish speakers may not to English speakers in your case, I guess.
An obvious disadvantage of lacking the magnet effect manifests when you hear a sound that lies almost on the threshold line and is only slightly into one category. Native speakers judge it as correct /p/ with no hesitation, for example. They don't think it's ambiguous at all. But you may be unsure if it's /p/ or /b/ because you correctly hear it as "not quite /p/ but not /b/ either."
There is a study that investigated how Japanese monolinguals hear /l/, /r/, and sounds between them. The result: they correctly sorted the computer-generated sounds from /l/ to /r/. Native English speakers hear them only as either /l/ or /r/, so they can't reliably sort sounds within one phoneme category better than chance. But they don't misjudge whether a sound is below or above the threshold separating /l/ from /r/. Of course, the Japanese monolinguals can't tell which sounds are /l/ to native speakers and which are /r/, because they don't know the boundary. You probably already see how this can impede efficient communication.
delta wrote:
magamo wrote:
To summarize, what we lose as a native speaker is the ability to hear the difference within a phoneme in the absolute sense
I was tempted to assert this is false until I read you clarify this doesn't apply to allophones. So, what does it apply to?
I am talking about the very sharp recognition of phonemes by native speakers due to the black-and-white ear. Within one phoneme, everything is the same white. You go one step off the boundary, and it's all black to them. You may hear sounds in gray scale in high resolution but don't know there are categories there. A gray color that's neither clearly white nor black to you may be bright white to native speakers, and a slightly darker gray may be pitch black to native speakers. If asked, you can tell which gray is darker. But you don't know if it's considered white or black to native speakers.
Things are actually more pessimistic than this color analogy. You don't know "white" or "black." They're all grays in different shades. Or you only know white, so when you see black, you say it's darker white. You don't have the concept of "black."
Edit: Ah, if you're wondering what those two sounds within a phoneme are if they're not allophones: they might be considered allophones in a sense because, after all, they fall into the same phoneme category. But if you define allophones like this, you end up with infinitely many allophones in one phoneme, because any slight difference a human can perceive creates a different allophone, which isn't convenient. So I purposely ignored the existence of allophones altogether.
Also, I certainly simplified the whole thing to be concise. Certain kinds of difference within one phoneme may be easily perceived by native speakers if taught they're different.
Last edited by magamo (2012 October 03, 1:26 pm)
Magamo-san, well done!
What are the physical limitations adults have when studying a foreign language in terms of what is achievable and what not?
delta wrote:
What are the physical limitations adults have when studying a foreign language in terms of what is achievable and what not?
In terms of proficiency, there aren't really any, unless you want exactly the same ability and functionality in your brain as a native speaker's. Your brain may process the foreign language(s) you're acquiring differently than a native speaker's does, but for all intents and purposes, you can be completely proficient and fluent, as you probably already know.
Without some sort of special training, you may not be able to completely eliminate your foreign accent, to the point where even a computer analysis of your speech would show no trace of your mother tongue. Or you may have a harder time recognizing utterances that are not well articulated and are barely within the acceptable range. But I don't believe those "limitations" should always be seen as negative and significant.
Then again, I have seen a paper on the magnet effect where the authors use the term "compromised" and suggest that this could pose a formidable challenge to second-language acquisition for adults. So at least some experts recognize it as an obstacle that you already have the magnet effect for your native language and never will for the second language you're learning.
If you think positively, it doesn't really matter if you have a foreign accent that only babies, nonnative speakers, and trained native speakers can pick up on. To your average native speaker, you're perfectly fine thanks to their black-and-white perception.
Maybe you can think of this as something similar to how a left-handed person can't write or draw exactly the same way as a right-handed person, because of asymmetry. If analyzed carefully, there is always a slight trace showing that the person wrote or drew with the left hand. But no one cares or even notices. And you don't need to mimic right-handed writing or drawing unless you have some particular reason, e.g., you're an artist who needs to draw an object perfectly symmetrically by hand.
With that said, you might want to be a bit careful when choosing your learning method. The finding of the magnet effect is relatively new, so your language teacher is probably unaware of it unless s/he follows current research in linguistics. Inevitably, popular learning methods are based on an obsolete understanding of human language acquisition. For example, some may tell you to trust a certain recorded material and listen to it again and again until you learn the new sounds, adding that the material was checked by native speakers so the pronunciation is perfect. But you already know where this can go wrong. What you need to learn is the range and boundary of each phoneme, and obviously this can't be learned from a single recording, e.g., by listening to one model sound 1,000 times straight. You need both volume and variety. And untrained native speakers are completely unreliable when it comes to identifying a good model of a prototype.
Last edited by magamo (2012 October 04, 5:56 pm)
AlexandreC wrote:
The FSI list is a nice indication, and I wouldn't disagree with what it says, but it only covers very few languages.
I thought Japanese was harder than Mandarin, but I'm not that advanced in Mandarin.
I'm sure Korean is as hard as Japanese as a spoken language, maybe even harder, I wouldn't know, but the writing system is enough to make it a much simpler language to learn.
Many people mention Hungarian -- I did a challenge with another polyglot friend in February in Finnish (which is relatively related to Hungarian) and we both reached A2 in a month (under 35 hours) -- evaluated by a Finnish teacher at Helsinki University. There's no way this compares to Japanese or Mandarin.
Personally, if I had to pick a hard language, for a challenge for instance, I'd go for the likes of Inuktitut or Mongolian. Basically, even if a language were simpler, if there are no resources for it, then the language will be much harder to learn. The same is true if the language is spoken by few people, over several very different dialects, as is the case of many Amerindian languages like Cree. Navajo is certainly one such language. Harder yet are languages tied to a very different reality to yours.
I was reading a thread on another forum where people give their input from studying both languages, or just from learning Korean; some of the posts are quite insightful and sometimes quite hilarious.
link: http://how-to-learn-any-language.com/fo … amp;TPN=13
if you're curious
I just want to post this link because I get so irritated with people weighing in on Korean vs. Japanese difficulty when they aren't very far along in Korean, and sometimes not even in Japanese, or they've only tried learning Japanese. If you would like to hear thoughts from people who have tried both, or just Korean, check out the link above.
One reason I find Korean harder is that there are so many ways to say the same thing, plus different contexts, compared to Japanese. In daily Japanese you notice they say the same stuff over and over again, but in Korean there are so many variations you might not even notice they're saying the same thing in another way. And even if you say, "well, Korean has hangul, so it's so easy to read," I feel you still need to learn hanja to help you remember the vocab, because a lot of Korean words are 2 syllables and will sound similar or differ by just one sound; one of the best ways to mitigate this problem is to learn the hanja behind the words.
But basically all the Japanese vs. Korean issues are explained pretty well in that thread... definitely way better than I could ever explain them.
FYI, Korean has something like double the particles, the grammar is maybe 10x more convoluted than Japanese (I don't know how to estimate this), and it has way more sounds (double consonants, etc.). Also, about 75% of Korean's Chinese-based words intersect with Japanese words based on Chinese characters, and I think the majority of Korean vocabulary is Chinese-based (around 60% or something), so I don't know why you wouldn't see the benefit of learning hanja to help you learn and retain vocabulary. People keep jumping to the conclusion "oh, hangul, so I don't have to learn hanja/kanji, so it'll be that much easier." I'm sure most of these people either give up or reach the stage where they realize, "okay, I need something to help me remember vocabulary... oh, it's hanja, I wish I'd started it earlier."
Last edited by howtwosavealif3 (2012 October 10, 8:00 pm)

