So, those of you who are savvy about this sort of thing: if you wanted to take the vocabulary from a show or movie and create a deck where the smart.fm sentences are only the ones w/ this vocabulary, how would you go about automating the process? Same with KO2001, but mostly smart.fm. The idea is to use those sentences, with their simple, consistent grammar, as support for the stuff you're subs2srsing--you get the vocab/readings out of the way with the smart.fm sentences and can focus on listening/parsing the new context-rich sentences from actual media.
What I've been doing: I make a list of new words in a given subs2srs deck (usually the ones w/ kanji, especially kanji I don't know readings for), search the iknow/KO2001 decks for sentences with each word that have a minimum of new information, then study that sentence first.
This is good enough for me as an individual, but if I were to try and come up w/ something for posterity/mass use, seems like something different would be better.
You could say I'm looking for viable means to accomplish what we've been discussing since these resources were released--I know Nukemarine has been adamant about using smart.fm's Core 6000 as a supplementary vocab corpus for a while.
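For what it's worth, the manual workflow above could be sketched roughly like this (all names here are hypothetical, and this uses naive substring matching, which works tolerably for kanji words since Japanese has no spaces; a morphological analyzer like MeCab would be more robust):

```python
# Hypothetical sketch: given target words pulled from a subs2srs deck,
# find smart.fm/KO2001 corpus sentences containing them, then prefer
# the candidate with the least "new information" (fewest unknown kanji).

def filter_sentences(sentences, target_words):
    """Map each target word to the corpus sentences that contain it."""
    matches = {}
    for sent in sentences:
        for word in target_words:
            if word in sent:
                matches.setdefault(word, []).append(sent)
    return matches

def least_new_info(candidates, known_chars):
    """Pick the candidate sentence with the fewest unfamiliar kanji."""
    def unknown_kanji(s):
        # Count characters in the CJK Unified Ideographs range
        # that aren't in the learner's known set.
        return sum(1 for ch in s
                   if '\u4e00' <= ch <= '\u9fff' and ch not in known_chars)
    return min(candidates, key=unknown_kanji)
```

You'd feed it the sentence field from a tab-separated Anki export plus a plain word list, and the surviving sentences become the support deck.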
Edited: 2009-09-09, 9:31 am
