Hi Joachim, I just bought this yesterday and was really excited to use it, but so far, with every piece of audio I've analyzed, I'm only getting 5-10% success rates.
I've tried several different pieces of audio - some of them very short simple sentences and still poor results. I am using the preset to encode the audio. I've watched the tutorials. Can you help figure out why I'm having such a hard time?
Hey, I want to use this script for an animation and it simply doesn't work. I am on AE CS6, Windows. I import the voice (mono, 16 kHz), I paste the transcript (a very, very clean version) and I wait... and wait... and after 1 minute I get this error.
There is a calculation limit of 1 minute in the script that prevents it from running in an endless loop. Since your file is relatively long, and accent and exaggeration are also factors in the analysis, it might simply take longer to analyse. So please try cutting out a smaller section of the audio (maybe one passage, or one minute) and see if the analysis works there (don't cut in the middle of a word).
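If it helps, the cutting can also be done programmatically. The sketch below is not part of lipsyncr; it uses Python's standard `wave` module to split a WAV into chunks of at most one minute (the chunk length and the file naming are my own choices). The cuts land at arbitrary points, so you would still check by ear that none falls in the middle of a word:

```python
import wave

def split_wav(path, chunk_seconds=60):
    """Split a WAV file into pieces of at most chunk_seconds each,
    keeping the original sample rate, bit depth and channel count."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = params.framerate * chunk_seconds
        part_paths = []
        index = 1
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = "%s_part%d.wav" % (path.rsplit(".", 1)[0], index)
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # frame count is fixed up on close
                dst.writeframes(frames)
            part_paths.append(out_path)
            index += 1
    return part_paths
```

Each part can then be analysed on its own, keeping every run under the one-minute limit.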
Since you are working in CS6, we always have the backup plan of doing the analysis in Premiere Pro CS6 (which still has its speech analysis feature) and animating with lipsyncr 1. So if the analysis doesn't work because the accent or exaggeration leads to bad results, please report back!
What is the format of your original recording? Try a higher sample rate (like 44.1 kHz) in the compression settings.
A "bad" recording is either noisy, has weird intonation (singing, exaggeration), and contains more than one speaker. Try to avoid this!
The original recording is 48 kHz, 16-bit. The actor was in a booth speaking into a nice shotgun mic... it's a clean recording with little background noise, peaking nicely at -15 dB.
I took my original audio sequence and exported it using the preset found on your downloads page. Should I try not compressing it? Is 48 kHz screwing me up (should I convert to 44.1 kHz)?
Sounds good. Your settings are compatible. There are just a few settings that are not compatible/advisable, that is why we provide a template. You don't have to convert it.
Have you tried it with the original yet?
Again, it could be hard for the engine if you have a recording with more than one speaker, funny intonation or accent.
I've tried a bunch of different combinations of settings. I've tried smaller sections of the audio file. I've tried a different interview. I've tried changing my script so that the words are more phonetically accurate.
I've got one speaker, talking with a normal English accent. The best I've been able to achieve is 13%.
I'm getting pretty frustrated over here.
Do you recommend any other speech/metadata analyzer?
If you think it won't work with the lipsyncr speech analysis, you can get access to an older version of Premiere Pro (before version 8.2, December 2014), which still includes the speech analysis package.
If you want to go down that route, here's how it's done: https://helpx.adobe.com/premiere-pro/using/speech-analysis.html. I can get you a version of lipsyncr 1, which used to rely on the Premiere speech analysis. So basically you use Premiere for the analysis and then animate via lipsyncr 1.
I understand that this is frustrating and it is kind of hard to tell from a distance what is wrong with your specific file. But the Premiere speech analysis is certainly better than ours and might be worth the effort of downloading the older version. Installation is very easy.
Our studio is testing this plugin for a big upcoming project. Yours is one of the only auto lip sync add-ons for After Effects we can find. Unfortunately, I keep getting very low results while running tests with the trial, only about 15% word recognition. I am using the preset along with professional voiceover recordings. Anything else we can do or are we out of luck?
The following factors make for a bad analysis: funny intonation/singing, more than one speaker in a single recording.
I have had people report that the analysis improved when they changed the compression. Try 16-bit depth first, and then a 44.1 kHz sample rate.
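If you want to verify what your export actually contains, here is a small sketch using Python's standard `wave` module. The `looks_compatible` check is only my reading of the settings discussed in this thread (16-bit mono at 44.1 or 48 kHz), not an official specification of what the analysis accepts:

```python
import wave

def describe_wav(path):
    """Return the properties the analysis cares about:
    sample rate, bit depth and channel count."""
    with wave.open(path, "rb") as w:
        return {
            "sample_rate_hz": w.getframerate(),
            "bit_depth": w.getsampwidth() * 8,
            "channels": w.getnchannels(),
        }

def looks_compatible(path):
    """16-bit mono, at one of the sample rates mentioned in this
    thread (44.1 kHz recommended; 48 kHz reported as working)."""
    info = describe_wav(path)
    return (info["bit_depth"] == 16
            and info["channels"] == 1
            and info["sample_rate_hz"] in (44100, 48000))
```

Running `describe_wav` on the exported file makes it immediately obvious when an export preset didn't apply the settings you expected.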
For the viseme recognition, we use the standard consensus set from visual speech recognition, aware of the fact that many animators use different sets. I think it's grouped together with "AH" or "O" (if you choose the 10-viseme set).
There is unfortunately no way to add your own visemes.
What does the error say? Your attached image only shows a question mark.
There was an error executing the Aligner. Error details: Exception in thread "main" java.lang.RuntimeException: used file encoding not supported at edu.cmu.sphinx.frontend.util.AudioFileDataSource.setInputStream(AudioFileDataSource.java:197)
There is more of the same. The file I am using is a 44.1 kHz mono .wav.
This sounds like your file might have a 32-bit floating-point bit depth. Please try converting it to 24-bit depth.
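For reference, that aligner error usually means the WAV is not plain integer PCM. The following sketch (my own, not part of lipsyncr) parses the RIFF `fmt ` chunk directly, so you can tell a 32-bit float file from 16- or 24-bit PCM before converting:

```python
import struct

def wav_encoding(path):
    """Parse the RIFF 'fmt ' chunk of a WAV file and return
    (format_name, bit_depth). Format code 1 is integer PCM;
    code 3 is IEEE float, the 32-bit float case that trips
    up the aligner."""
    names = {1: "PCM", 3: "IEEE float", 6: "A-law", 7: "mu-law"}
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave_id != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no 'fmt ' chunk found")
            chunk_id, size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                fmt = f.read(size)
                code = struct.unpack_from("<H", fmt, 0)[0]
                bits = struct.unpack_from("<H", fmt, 14)[0]
                return names.get(code, "code %d" % code), bits
            f.seek(size + (size & 1), 1)  # chunks are word-aligned
```

Anything other than ("PCM", 16) is worth converting before handing the file to the analysis.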
I converted to 24-bit and it told me that 24 was not supported. Converted to 16-bit and it worked! This is what happens to me when I don't use something in a while... I forget how it works! Thanks!
Me oh my! Sixty bucks just saved me something like 2-3 days of animating lips on a very long animated corporate rap music video about ethics and compliance. Even when lipsyncr 2 does the worst job it can possibly do (recognizing zero words in the audio and placing them evenly along the timeline), it is still something like 5x faster than scrubbing through audio and time-remapping the mouth phases. I can't say how pleased I am not to be spending another 2-3 days working on this very, VERY long animated corporate rap music video about ethics and compliance.
What have I done with my extra 2-3 days? I've ridden a motorcycle in the mountains. I've played tennis. I've spent lots and lots of extra time being all close and snuggled up with a new romance. I've played with a neighbors dog. I've had an extra dinner with a friend. I've read books. I've written a review about all the things I've done with the time I've saved thanks to lipsyncr 2
Hi, I've just tried lipsyncr for the first time in After Effects CC 2015. I had a professionally recorded voice-over track, recorded with a UK English VO artist in a studio. I used your preset to make sure it's the correct format. When I ran "Analyse Speech" it said zero words recognised. How is a zero percent success rate possible?
I had to go through and manually move all the markers by hand. Many thanks
Depending on the accent of your talent, the analysis may suffer. If the first word is not recognised, try helping the script by changing it in the text box to a similar-sounding, more common word (e.g. "Jen" to "can").
I will say that, in my experience, it works best with a general English accent and with North American accents. If it turns out to be the accent in your particular VO, you could use the quick workaround from the tutorial on music lyrics, found on the product page: you basically record yourself (watch your accent!) speaking in sync with your talent's recording.
Hi Joachim, sorry for the delay in replying. I checked again and I think the first word was recognised, as I did not get the error from your video saying the first word was not recognised.
Also, as I mentioned, my VO was professionally recorded by a British voice-over artist with a normal English accent. So with this set up correctly, I'm not sure why I am getting zero recognised. In the end, all I used were the markers that were set up, and then I had to do it the old way: go through and manually edit all the timings and often change the mouth shape chosen.
It appears I have to create a new mouth comp every time, then replace the built-in mouths with mine. That's a lot of needless repetitive work, especially since I'll have to parent the new comp to my character (after resizing it every time to fit the character) over and over. Isn't there a way to create a mouth comp for a character once and then reuse it in the animation step after analyzing dialog? If not, I don't see how this is saving me much time.
Of course! In the 'Animate' tab, the first dropdown lets you select which mouth comp you want to use. You can use the same one multiple times and don't have to create a new one every time.
That's good news, but doesn't seem to work for me. I get nothing in the drop-down. Is that because I closed the panel to swap the images in the mouth comp and came back to it? If I go through the steps again, can I simply choose my own pre-existing mouth comp instead of the one created by the process?
Yes, normally that should work. And yes, you are right: unfortunately, as soon as you close the script, it won't recognise the mouth comps created via the script anymore. In that case, you have to create one again and copy the contents of your custom-made comp into it.
I'm still not grasping the process. I let the panel create the mouth comp with the default mouth images. It was named MouthComp1. I swapped in my own images and renamed it to MartyMouth. Then I ran the whole process ... it created another MouthComp1, analyzed the sound and script. But when I went to the Animate tab, only MouthComp1 was available in the drop-down. I couldn't swap in the new comp with my images. So it still seems as if I'd have to swap in my own images every time I generate a new line of dialog.
Hello. I had been using 1.7, and it works great, so I was looking forward to using 2.2.2. I installed it and it failed to download the needed files for the interface. I went back, checked my permissions, and set them correctly. I even removed the script and put it back, but it does not try to download the needed images again. I just get an "error drawing user interface, please check folder permissions. IOError: Bad Argument - File or folder does not exist."
Can you help me fix this please?
Thanks,
Rich
Hello Joachim,
I was about to try the script out to see if it'd work for me, but unfortunately the same thing happened to me as to Rich. I'd really appreciate your help on this one.
Just wanted to circle back and let you know I've been using lipsyncr2 for a few weeks now. It's brilliant. The most I ever have to fix is a keyframe here and there, which takes no time at all. Thanks again for producing a huge, HUGE time-saver.
Thank you so much for taking the time to write this lovely comment. It is so motivating to read that all the effort I've put into this is helping people out - I really appreciate it.
If you ever feel like sharing anything that you've created with the help of lipsyncr, please let me know.
Will do. In fact, I hope to share with the entire world.
I'm the writer/director of a humorous documentary called "Fat Head" -- you can find it on Amazon Prime, iTunes, etc. The current project is called "Fat Head Kids: stuff about diet and health I wish I knew when I was your age." There's a book version with cartoon characters coming out in April. The film version will have those characters talking on screen. When I realized how long it took to manually animate a single line of dialog, I went looking for solutions and found lipsyncr.
If the film gets picked up by Netflix, Amazon, etc., as its predecessor did, please feel free to tell everyone all the dialog was animated with lipsyncr. I'll do likewise.
When I try to use lipsyncr to add markers to a layer, without speech analysis, using a language other than English, it only gives me three empty markers.
I was using the trial version of the script, but I just get an "error drawing user interface, please check folder permissions. IOError: Bad Argument - File or folder does not exist."
I was using the previous version of lipsyncr, 2.2.3, and I upgraded it to 2.3, and right now I get an "error drawing user interface, please check folder permissions. IOError: Bad Argument - File or folder does not exist."
Many thanks
Thanks for your help. This will save a ton of keyframing.
Tom
I was using the trial version of the script, but I just get an "error drawing user interface, please check folder permissions. IOError: Bad Argument - File or folder does not exist."
Can I fix this? I use After Effects CC 2017
Thanks