The following factors make for a bad analysis: funny intonation or singing, and more than one speaker in a single recording.
I have had people report that the analysis improved after they changed the compression settings. Try 16-bit depth first, and then a 44.1 kHz sample rate.
FYI - this helped me. Professional VO with a (Southern) English accent gave 0.0% accuracy, even when using the Media Encoder preset to create the WAV. However, when I switched to 16-bit I got near-perfect results; 16-bit at 44.1 kHz gave the same result, again almost perfect.
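If you're unsure what your WAV file actually contains, the properties the engine cares about (channels, bit depth, sample rate) can be read with Python's standard-library `wave` module. This is not part of Lipsyncr, just a quick sanity check; the filename in the comment is a placeholder.

```python
import wave

def wav_specs(path):
    """Return (channels, bit_depth_bits, sample_rate_hz) for a WAV file."""
    with wave.open(path, "rb") as w:
        # getsampwidth() is bytes per sample, so multiply by 8 for bits
        return w.getnchannels(), w.getsampwidth() * 8, w.getframerate()

# Example (path is a placeholder):
#   wav_specs("voiceover.wav")  ->  (1, 16, 44100) for mono, 16-bit, 44.1 kHz
```

A result of `(1, 16, 44100)` matches the settings reported to work best above.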
Every animator has their own preferred viseme chart. Because of the way this script is built, we had to choose one of them. Maybe you still want to give it a try - the visemes are grouped into clusters that look the same. If you can change your viseme selection, I am sure the script will improve your workflow a lot!
it's actually no problem to use the mouth comp inside a comp with a different frame rate! It is time-remapped, so it doesn't matter whether the parent comp has a higher or lower fps. I hope this helps!
there's a feature for this in the Monkey Tools script, which I believe is included with all of the Monkey scripts. Orrin describes it in his VO tutorial: https://youtu.be/WXDb6TkvdRg?t=196
I can't get the speech analysis to work. It will only do 25 words out of 500. I've tried editing the transcript to be phonetic but I can't figure out why it won't get past the first 25 words. Audio file is 16 bit and mono and is professionally recorded.
maybe it's better to open a support ticket in your case.
Please use the Media Encoder preset under "additional downloads" to make sure the codec is perfect for the engine (convert your original audio file).
The engine won't work well if you have any of the following: English accents other than American; more than one speaker; funny intonation (e.g. singing, cartoon exaggeration, ...). Does any of this apply to your file?
Hi Joachim. None of those apply. The voiceover is a generic American accent with no special intonations. The section that throws things off is when she reads the initials M-E-S-P. I've tried the transcript with just the letters and with writing them out phonetically, i.e. "em ee ess pee". No luck either way.
Ah, that's good to know. You could try cutting the audio into separate pieces and getting rid of this hard-to-analyse sequence. Make sure not to cut through words; it's best to cut between sentences.
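If you'd rather not round-trip through an audio editor, a WAV can be split at a chosen timestamp with a short Python sketch using only the standard-library `wave` module. This is an illustration, not a Lipsyncr feature; pick a `cut_seconds` value that falls in the silence between sentences.

```python
import wave

def split_wav(path, cut_seconds, out_a, out_b):
    """Split a WAV file at cut_seconds into two files with identical specs.

    Choose a cut point in the silence between sentences, never mid-word.
    """
    with wave.open(path, "rb") as src:
        params = src.getparams()
        cut_frame = int(cut_seconds * src.getframerate())
        first = src.readframes(cut_frame)
        rest = src.readframes(src.getnframes() - cut_frame)
    for out_path, frames in ((out_a, first), (out_b, rest)):
        with wave.open(out_path, "wb") as dst:
            dst.setparams(params)   # frame count is corrected on close
            dst.writeframes(frames)
```

Both output files keep the original channel count, bit depth, and sample rate, so they stay compatible with the engine.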
Receiving a rather weird error. When I copy-paste text, the lipsyncr analyze result is always "transcription contains invalid characters". But whenever I just type the words straight into the text box, it analyzes just fine...
Hi jay,
That’s weird indeed. You do have an English transcript, don’t you? Every letter in the English language should work; other languages are not supported at this point.
Maybe there are tools on the internet that can clean your text. My guess is that the invalid character is hiding in a brand name.
lipsyncr error Error setting Keyframes to Composition.1 Error: After Effects error: Unable to call "setInterpolationTypeAtKey" because of parameter 1. Value 8 out of range 1 to 7.
Just wanted to check if anyone else has had this issue with the script: it changes the pitch of the audio file, and when I replace the phonemes in the template, the third step does not recognize the mouth template at all, so I can't get it to work...
I analysed my audio file as a test and it found 96% of a 1 min 30 sec audio file. I then edited that audio to an animatic, adding in gaps where needed. Then I exported the newly timed audio as a mono WAV file with the same specs as the original that worked, and now it finds only 5% of the words.
I don't think the viseme interpretation is very accurate. For example, I get a lot of "D" for "M" or "NTRL," etc. (I also noticed this in the tutorial, so it's not just particular to my case), so even when the words are transliterated correctly the VO sync doesn't really look right. And maybe it's my VO performer in this specific project, but I have yet to solve a first word error no matter how many different ways I try to massage it.
thanks for the feedback! A number of customers mentioned that they would prefer a different viseme set. However, we did a poll about what sets to use and the ones we currently use are the most preferred ones.
Can't get the script to recognize more than 20% of the words. I have tried splitting the audio into smaller sections, but that still doesn't help. The accent is Australian; would that make a difference? If so, you should tell people that the script only works with American accents.
Hi Gary - thanks for the feedback! It really depends on more than just the accent. However, we did have the best results with American English, so I guess we should promote that on the product page somewhere. If you need more support or a refund, please open a support ticket!
Hi there, I've been using your script quite a bit lately and it's saved me a lot of time! However, all of a sudden I can't get it to recognize my WAV files, possibly because of the new AE update? I'm going to try rolling back to the previous version of AE and see if that fixes it, but I used the script just two weeks ago with no issues. If you have any insight into why this might be happening, please share! Thank you!
*EDIT* I tried rolling back 3 versions, and each one had the same result: nothing showing in the Analyze tab for audio files. It almost looks like the drop-down menu isn't working or something.
Hi Cory – let's move this to a support ticket so we can keep this forum clean
The script works with all current AE versions and Java shouldn't be the problem here. The trial version has a limit of 15 seconds duration on WAV files. Could that be it? Let us know in the support ticket!
Thanks Joachim, here's the ticket number: #1066216
Hello, is it possible to make a talking mouth with this that moves between keyframes? Like a mouth made from a shape layer, where interpolation happens between keyframes, instead of the stop-motion style shown in the demo?
No, this product is based purely on time remapping, so no interpolation is possible. However, I recommend checking out the trial - you might find that the brain perceives this animation as very smooth.
This used to be a workflow that worked with a legacy version of Premiere Pro (up to CC 8.1). Unfortunately, Adobe no longer provides downloads for legacy versions and so this workflow is no longer available.
Thanks for the heads-up, I will see if I can remove this part from the tutorial.
Why are you interested in this feature? Are you trying to analyze a language other than English?
I cannot seem to get this script to show up in my program at all. Here is the list of errors I get...
"INVALID LICENSE(-8)"
...Then i run the trial mode...
"THE SCRIPT REQUIRES ADDITIONAL FILES...DOWNLOAD THEM NOW?"
...I hit Yes...
"DOWNLOAD FAILED...CANNOT OPEN SOCKET.."
"PROCEED TO MANUAL INSTALLATION?"
I hit Yes and I followed your steps to a tee. I even put the additional files into all the users' roaming file paths, and I have restarted AE multiple times.
1 - It's probably the RAM playback. Once the preview is cached, it should play back at the normal pitch. If not, check the frame rates!
2 - In order to see a mouth comp in the Animate tab, you need to create one in the Create tab first. If you close the plugin or AE between sessions, you need to create another mouth comp.
Having very little success getting Lipsyncr to actually analyze the audio properly, so I'm having to spend a crazy amount of time painstakingly lining up the markers for each word by hand. Is there a trick to getting the automated analysis to work? I'm following all of the proper procedures for the audio file -- 48kHz, 16-bit, mono, .WAV. The language is American English, and the audio recording is incredibly clear -- no background noise, music, SFX or anything; the speaker is very carefully enunciating, and is speaking quite slowly. I'd be hard pressed to think of a more ideal recording for speech-to-text, but Lipsyncr routinely only recognizes 5-6% of the words, and flubs them up right from the start, meaning I have to adjust everything by hand. Changing the first word to a sound-alike makes no difference. Is there a way to make LipSyncr work with Premiere's speech-to-text engine? It has no problem whatsoever with the audio.
if you open a support ticket, you can send me the original audio file as well as the transcript and I can have a look. My guess is that it has something to do with compression!
Charging money for this is an absolute crime. It doesn't work, and I can't even get it to work on a one second audio clip that is literally two words. It gives me an error every single time. What a joke.
When I try to analyse a file converted through the preset, I get an error saying "Specified audio file contains invalid characters". What could be going on?
Make sure the name of your audio file, as well as your transcript, contains only Latin characters. You can use Notepad to find them more easily. If the problem persists, please open a support ticket.
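One way to hunt down the offending characters is a few lines of Python. The sketch below flags anything outside plain ASCII, which is a stricter test than "Latin" but catches the usual culprits: curly quotes, em dashes, and accented letters pasted in from word processors. The sample string is just an illustration.

```python
def find_non_latin(text):
    """Return (index, character) pairs for anything outside plain ASCII.

    Curly quotes, em dashes, and accented letters pasted in from word
    processors are the usual culprits behind "invalid characters" errors.
    """
    return [(i, ch) for i, ch in enumerate(text) if ord(ch) > 127]

# Curly apostrophe and quotes pasted from a word processor:
print(find_non_latin("It\u2019s a \u201cdemo\u201d"))  # -> [(2, '’'), (7, '“'), (12, '”')]
```

Run it on both the transcript text and the audio file's name; an empty list means nothing suspicious was found.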
lipsyncr error
Error setting Keyframes to Composition.1
Error: After Effects error: Unable to call
"setInterpolationTypeAtKey" because of parameter 1. Value 8
out of range 1 to 7.
There was an error executing the Aligner. Error details:
Exception in thread "main" java.io.IOException: Resetting to invalid mark
at java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:447)
at java.desktop/com.sun.media.sound.SunFileReader.getAudioFileFormat(SunFileReader.java:59)
at java.desktop/com.sun.media.sound.WaveExtensibleFileReader.getAudioInputStream(WaveExtensibleFileReader.java:259)
at java.desktop/com.sun.media.sound.SunFileReader.getAudioInputStream(SunFileReader.java:119)
at java.desktop/javax.sound.sampled.AudioSystem.getAudioInputStream(AudioSystem.java:1062)
at edu.cmu.sphinx.demo.aligner.AlignerDemo.main(AlignerDemo.java:48)
Open a support ticket for assistance https://aescripts.com/contact/?direct=1