So I don't think this was going to end up being a very efficient port the way I was doing it, so now I'm basically starting from scratch. I hope to use only ffmpeg, because making every user download the Mono framework, plus Fink just so they could get mp3splt, would be ridiculous.
I'm working in Java, mainly because it can be used anywhere, and ffmpeg builds exist for other systems as well. I'm hoping to bundle them all together so it might end up being universal.
I'm still only dreaming though. Shouldn't be too hard to get it working.
OK, so I just finished a command-line version. (JavaSubs2SRS? JavaSubs? v0.0.1?)
It's pretty limited in what it can do: it only handles SRT files whose first subtitle index is "1", and it processes the entire file rather than a selected duration.
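For anyone curious, the "first subtitle must be 1" limitation goes away if you key the parser on the timestamp line instead of the numeric index. This is just an illustrative sketch, not the actual JavaSubs code; the class and method names are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: find SRT cues by matching the "HH:MM:SS,mmm --> HH:MM:SS,mmm"
// timing line, so it doesn't matter what number the file starts with.
public class SrtSketch {
    private static final Pattern TIME = Pattern.compile(
        "(\\d{2}):(\\d{2}):(\\d{2}),(\\d{3})\\s*-->\\s*" +
        "(\\d{2}):(\\d{2}):(\\d{2}),(\\d{3})");

    public static List<String> parseTimings(String srt) {
        List<String> timings = new ArrayList<>();
        for (String line : srt.split("\\r?\\n")) {
            Matcher m = TIME.matcher(line.trim());
            if (m.matches()) {
                timings.add(line.trim());
            }
        }
        return timings;
    }

    public static void main(String[] args) {
        // This file's index starts at "37"; a timestamp-keyed parser doesn't care.
        String srt = "37\n00:00:01,000 --> 00:00:03,500\nHello\n";
        System.out.println(parseTimings(srt).size()); // prints 1
    }
}
```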
The only problem is that I can only get it to work in my IDE (Eclipse). I can't get it to run directly from the command line, and I'm not quite sure why. If anyone has any idea, I'd love to hear from you.
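One common reason a program runs in Eclipse but not from the command line is the jar's manifest. This is only a guess at the cause here, but if the jar is exported with a `Main-Class` entry in `META-INF/MANIFEST.MF`, it can be launched with `java -jar` without spelling out the classpath:

```
Main-Class: Subs2SRS
```

(The manifest must end with a newline, or the last attribute is silently ignored.) Then the jar can be run from its own directory with `java -jar Subs2SRS.jar`.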
It's also a pretty messy program. It was thrown together in a hurry, so I know there are better ways to do certain things, but I just wanted to get it working quickly.
Or if anyone can help me with compiling, I'd love to upload it and get it to all you Mac users! (Maybe Linux?)
Macintosh:Subs2SRS golem$ java -cp /Subs2SRS/Subs2SRS.jar Subs2SRS
Exception in thread "main" java.lang.NullPointerException
        at java.io.File.<init>(File.java:277)
        at srtScript.<init>(srtScript.java:22)
        at Subs2SRS.main(Subs2SRS.java:61)
I know that if you don't pass any options, it won't run. I should probably fix that so it falls back to default settings...
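Judging from the stack trace, the crash happens because a null path reaches `new File(...)` when no options were supplied. A sketch of the kind of guard that would fix it; the option name (`-srt`) and default file name are illustrative, not the real Subs2SRS flags:

```java
// Hypothetical argument guard with a fallback default, so a missing
// option produces a usable path instead of a NullPointerException.
public class ArgGuard {
    public static String srtPathOrDefault(String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-srt".equals(args[i])) {
                return args[i + 1];
            }
        }
        return "input.srt"; // assumed default; never hand null to new File(...)
    }

    public static void main(String[] args) {
        System.out.println(srtPathOrDefault(args));
    }
}
```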
As for why you get the "File not found" error...I'm not 100% sure why it's not working (as long as the files are in there, of course)... I'll play around and see if I can reproduce it in order to fix it.
The reason it works on Mac is that all it really does is invoke ffmpeg, so it should also work on Linux if you put a Linux build of ffmpeg in the same directory as Subs2SRS.jar.
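Since the whole trick is shelling out to a bundled ffmpeg, here's a sketch of how that can be done portably with `ProcessBuilder`. The ffmpeg flags shown are a typical audio-extraction command, not necessarily the exact one Subs2SRS issues:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Sketch: invoke an ffmpeg binary sitting next to the jar.
public class FfmpegRunner {
    // The command list is built separately from Process creation so it
    // can be inspected without ffmpeg actually being installed.
    public static List<String> buildCommand(File ffmpeg, File video,
                                            String start, String duration,
                                            File outMp3) {
        return Arrays.asList(
            ffmpeg.getAbsolutePath(),
            "-i", video.getAbsolutePath(),
            "-ss", start,       // clip start, e.g. "00:01:02.500"
            "-t", duration,     // clip length in seconds
            outMp3.getAbsolutePath());
    }

    public static Process run(List<String> command) throws IOException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr into stdout for simple logging
        return pb.start();
    }
}
```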
You ask why it has to be in a specific directory: hopefully it won't have to stay that way forever. The way directory handling seems to work in Java, it's tied to where the main class file is located. I have ideas for how to make it work elsewhere, but so far it starts searching for files either in the main Subs2SRS directory or in the directory where your system's main Java files are stored.
It shouldn't be too hard to fix... I just don't know how yet...
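One standard trick for this is to resolve paths relative to the jar itself rather than the working directory or the JRE's install location. A sketch, assuming the class was loaded from the application's own jar or classes directory:

```java
import java.io.File;
import java.net.URISyntaxException;

// Sketch: find the directory containing the running jar, so bundled
// files (like an ffmpeg binary) can be located no matter where the
// program was launched from.
public class JarLocator {
    public static File jarDirectory(Class<?> anchor) throws URISyntaxException {
        File source = new File(anchor.getProtectionDomain()
                                     .getCodeSource()
                                     .getLocation()
                                     .toURI());
        // Launched from a jar: source is the jar file, so take its parent.
        // Launched from an IDE: source is already the classes directory.
        return source.isFile() ? source.getParentFile() : source;
    }

    public static void main(String[] args) throws URISyntaxException {
        // e.g. new File(jarDirectory(JarLocator.class), "ffmpeg")
        System.out.println(jarDirectory(JarLocator.class));
    }
}
```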
After carefully selecting, merging, and connecting hundreds of lines of Mononoke Hime, and basically spending hours making a personal deck for this movie, I discovered that almost all the pictures in my deck are botched. Most of the time it's nothing but a grey screenshot; about a dozen are a jumble of pixels, and only 4 pictures (out of more than 400) came out correctly. I tried twice with the exact same result.
Anyone have an idea of what's going on? And more importantly, how to fix it?
I have a quick question. Am I right in thinking that this only works with ripped movie files, and wouldn't work if you just own the DVD of, say, Mononoke Hime? I have the Region 2 Mononoke coming soon in the mail, and I wanted to take advantage of this awesome-sounding program. What would I need to do to be able to use subs2srs?
First of all, take a look at this topic: http://forum.koohii.com/viewtopic.php?id=3915. And yes, subs2srs is unable to deal with DVDs. Considering the audio sampling and the snapshot process, it would be incredibly painful: it takes about 30 minutes to rip a DVD and 4 hours to encode it decently, so even if you don't know much about video, there's no need to elaborate further, right?
You need to get your hands on a compressed video file that ffmpeg can deal with (because subs2srs is based on it); either you make it yourself or you download it. And if you read my previous post, you know that ffmpeg isn't totally reliable. So avoid exotic formats like OGM and OGG. The safest standard is the lowest common denominator: AVI for the container, DivX for video, MP3 for sound. MKV as a container and H.264 for video are also safe. I didn't try AAC or MP4.
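If all you have is an unsupported container, one option is to re-encode into the "safe" combination above. A typical ffmpeg invocation for that era might look like this; the exact flag spellings and bitrates are illustrative and depend on your ffmpeg build, so check `ffmpeg -formats` and adjust to taste:

```
ffmpeg -i input.ogm -vcodec mpeg4 -b 1200k -acodec libmp3lame -ab 128k output.avi
```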
If you download subtitles, watch out... Besides padding the timing, sometimes you're forced to redo the whole timing, because the subtitles come from a different edition and all the timing is mucked up. That's the case with Mononoke Hime: I had subtitles made for a 2h09 version of the movie, while most of the versions on the internet are 2h13 (the irony being that the 2h09 version I found was an OGM file... so I couldn't use subs2srs with it). In that case, I hope you rule with Aegisub, because retiming can be a real pain in the ass if you're not accustomed to the keyboard shortcuts, especially for such a long movie.
As I'm currently making a deck from my DVD, I'll publish a tutorial within 2 weeks.
Last edited by ghinzdra (2009 August 20, 11:23 pm)
@ghinzdra: I managed it: you have to edit the plugin file and change "actionSuspendCard" to "actionDelete". You can also copy-paste the text below under the comments of the file (after deleting the previous content):
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from ankiqt import mw
I'm a newbie when it comes to this sort of thing. If I want to take an Anki deck that's already been made with subs2srs, and edit it using an include list/remove non-kanji lines, how would I go about doing that? Is there a way to run it through subs2srs again with new 'advanced' options, or is it better to use another program to edit the .csv (or whatever it is) files, and if so, how?
Added context support. This option (accessed from the Advanced Subtitle Options dialog) allows you to observe the context in which a line of dialog appears. This is accomplished by attaching lines leading up to the line of dialog and attaching lines trailing the line of dialog. Text, audio clips, snapshots and video clips will be attached in this manner.
I haven't used it yet, but I like the graduated size differentiation, though I'd probably still change the colours of those fields to a gray gradient in Anki's options to make sure my eyes don't stray as easily.
This makes setting up dialogue chains of call-and-response cards easier, for sure. When I originally thought of them, I was just going to manually cut up the dialogues by hand, I think. I don't know how I expected to do it... no more theorizing w/o practice for me! (Now I will proceed to theorize w/o testing out the new subs2srs.)
Now I'd just mark the time spans down for those dialogue scenes, then edit the subsequent mini deck so that, let's see, the Current Line's cue (Previous Line's audio/image or video, expression optional) is on the Question side, then the cue's meaning (optional?) and the Current Line's audio/video/expression/meaning is on the Answer side... not sure if in that case I'd bother with Next Lines trailing it, to avoid unnecessary overlap. Also not sure if I'd bother adding the Expressions for earlier Previous lines on the Question side.
The Context tab says that it attaches text, audio, snapshots, etc--does that mean when you select the # of leading and trailing lines, that you end up w/ a series of Previous Line subfields like Previous Line Audio, Previous Line Video, et cetera? Guess I'd eliminate most of those as per the above paragraph, though if Anki had multiple Play buttons I'd tinker around w/ that.
Maybe we can have a group project where we add Actor's lines to the subtitles for various scenes, and convert .srts to .ssa/.ass files.
Or perhaps I'd only do call-and-response decks made from episodes/films I'd already studied and perhaps tagged with actors in the process? Hmm...