• Hi Emily,
    I assume you have an English recording?

    The following factors make for a bad analysis: funny intonation/singing, more than one speaker in a single recording.

    I have had people report that the analysis improved when they changed the compression. Try 16-bit depth first, and then a 44.1 kHz sample rate.
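If you want to double-check what format your file is already in before re-exporting, here is a small sketch using only Python's standard-library wave module (the file name "demo.wav" is just a placeholder; the demo file it writes is not part of the product):

```python
import struct
import wave

def wav_specs(path):
    """Return (sample_rate_hz, bit_depth, channels) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getframerate(), w.getsampwidth() * 8, w.getnchannels()

# Demo: write a tiny 44.1 kHz / 16-bit / mono file, then inspect it.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes per sample = 16-bit
    w.setframerate(44100)  # 44.1 kHz
    w.writeframes(struct.pack("<h", 0) * 441)  # 10 ms of silence

print(wav_specs("demo.wav"))  # → (44100, 16, 1)
```

If the numbers it prints don't match 16-bit / 44.1 kHz (or 48 kHz) mono, re-export with the Media Encoder preset.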

    FYI - this helped me: professional VO, (Southern) English accent, 0.0% accuracy even when using the Media Encoder preset to get the WAV. However, I switched to 16-bit and got a near-perfect result; 16-bit at 44.1 kHz gave the same result, again almost perfect.
    Thanks!
  • After I choose the number of Mouth Shapes and click the Create button, a Script Alert pops up as below:


    Please help me resolve this error!
  • Hey Thanh,
    unfortunately, your attached image doesn't display. Could you please try again or copy-paste the error message in here?
  • Hi Joachim Holler, after I updated Java and restarted AE, it works.
    Thank you for your support!
  • This doesn't seem to have the right visemes. There's no W/R, no Ooo, no F/V. It's a shame because this looks pretty good otherwise.
  • Hi Stoph,

    every animator has their own preferred viseme chart. Because of the way this script is built, we had to choose one of them. Maybe you still want to give it a try - the visemes are grouped into clusters that look the same. If you can change your viseme selection, I am sure the script will improve your workflow a lot!
  • Do you have any recommendations for using LipSync in videos with framerates other than 30fps?

    Many thanks. 
  • Hi Ole,

    it's actually no problem to use the mouth comp inside a comp with a different frame rate! It is time remapped, so it doesn't matter whether the parent comp has a higher or lower fps. I hope this helps!
  • Hi Joachim,

    I'm having trouble getting your demo version to work. I get an error message when I try to analyse text (screenshot attached).

    I'd quite like to use your product as it will save me a lot of animation time! So your help would be great.
  • Hi Thomas,
    this is more of a support request, so I will help you out with the ticket you opened ;)
  • I am getting duplicate markers. How do I get rid of this? I saw in a video somewhere that there is a script, but I cannot find it anymore.
  • Hi James,
    there's a feature in the Monkey Tools script, which I believe is included with all of the Monkey scripts. Orrin describes it in his VO tutorial: https://youtu.be/WXDb6TkvdRg?t=196
  • I can't get the speech analysis to work. It will only do 25 words out of 500. I've tried editing the transcript to be phonetic but I can't figure out why it won't get past the first 25 words. Audio file is 16 bit and mono and is professionally recorded.
  • Hi Sean,

    maybe it's better to open a support ticket in your case.

    Please use the Media Encoder preset under "additional downloads" to make sure the codec is perfect for the engine (convert your original audio file).

    The engine won't work well if you have any of the following: English accents other than American; more than one speaker; unusual intonation (e.g. singing, cartoon exaggeration, ...). Does any of this apply to your file?
  • Hi Joachim. None of those apply. The voiceover is a generic American accent, no special intonations. The section that throws things off is when she reads the initials M-E-S-P. I've tried the transcript with just the letters and with writing it out phonetically, i.e. "em ee ess pee". No luck either way.


  • Ah, that's good to know. You could try cutting the audio into separate pieces to get rid of this hard-to-analyse sequence. Make sure not to cut through words; it's best to cut between sentences.
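In case it helps, cutting a slice out of a WAV can also be scripted. Here is a minimal, stdlib-only Python sketch (the function name and file names are just placeholders, not part of the product):

```python
import wave

def cut_wav(src, dst, start_s, end_s):
    """Copy the [start_s, end_s) slice of the WAV file src into a new WAV file dst."""
    with wave.open(src, "rb") as w:
        params = w.getparams()
        rate = w.getframerate()
        w.setpos(int(start_s * rate))                       # jump to the start frame
        frames = w.readframes(int((end_s - start_s) * rate))
    with wave.open(dst, "wb") as out:
        out.setparams(params)   # nframes is corrected automatically on close
        out.writeframes(frames)
```

For example, `cut_wav("vo.wav", "part1.wav", 0.0, 30.0)` would write the first 30 seconds to a new file. Pick cut points that fall in the silence between sentences.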
  • Receiving a rather weird error. When I copy-paste text, the lipsyncr analysis result is always "transcription contains invalid characters". But whenever I type the words straight into the text box, it analyzes just fine...
  • Hi Jay, that’s weird indeed. You do have an English transcript, don’t you? Every letter in the English language should work, and other languages are not supported at this point. There may be tools on the internet that clean your text. My guess is that the invalid character is in a brand name?
  • What does that Script Alert mean?

    lipsyncr error
    Error setting Keyframes to Composition.1
    Error: After Effects error: Unable to call
    "setInterpolationTypeAtKey" because of parameter 1. Value 8
    out of range 1 to 7.
  • Hi, I'm getting this error message. It's the first time in using it so not sure what's wrong.

    There was an error executing the Aligner. Error details:

    Exception in thread "main" java.io.IOException: Resetting to invalid mark
        at java.base/java.io.BufferedInputStream.reset(BufferedInputStream.java:447)
        at java.desktop/com.sun.media.sound.SunFileReader.getAudioFileFormat(SunFileReader.java:59)
        at java.desktop/com.sun.media.sound.WaveExtensibleFileReader.getAudioInputStream(WaveExtensibleFileReader.java:259)
        at java.desktop/com.sun.media.sound.SunFileReader.getAudioInputStream(SunFileReader.java:119)
        at java.desktop/javax.sound.sampled.AudioSystem.getAudioInputStream(AudioSystem.java:1062)
        at edu.cmu.sphinx.demo.aligner.AlignerDemo.main(AlignerDemo.java:48)

  • Hey Ben,

    please open a support ticket, so we can look into this!

    Joey
    https://aescripts.com/contact/?direct=1
  • Just wanted to check if anyone has had this issue with the script: it changes the pitch of the audio file, and when I replace the phonemes in the template, the third step does not recognize the mouth template at all - so I can't get it to work at all...
  • Hey there, Anyone have this issue?

    I analysed my audio file as a test and it found 96% of a 1 min 30 sec audio file. I then edited that audio into an animatic, adding gaps where needed. Then I exported the newly timed audio as a mono WAV file with the same specs as the original that worked, and now it finds only 5% of the words.

    Is it having an issue with the gaps of silence?

    Thanks for any info
  • Toby opened a support ticket and got help there - if you need help quickly, I recommend doing that ;) https://aescripts.com/contact/?direct=1
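For anyone who suspects silence gaps are the culprit: you can roughly measure how much of a file is silence with a stdlib-only Python sketch like the one below. The threshold value is an arbitrary assumption for illustration, not something the analysis engine actually uses:

```python
import struct
import wave

def silence_ratio(path, threshold=500):
    """Rough fraction of near-silent samples in a 16-bit mono WAV.

    threshold is an arbitrary amplitude cutoff (16-bit samples span -32768..32767).
    """
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        data = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(data) // 2), data)
    quiet = sum(1 for s in samples if abs(s) < threshold)
    return quiet / len(samples)
```

A value close to 1.0 means the file is mostly silence; comparing the ratio before and after the animatic edit might show whether the added gaps dominate the file.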
  • When I analyze the audio, it keeps throwing up an error executing the "Aligner". Any thoughts?
  • Open a support ticket for assistance: https://aescripts.com/contact/?direct=1
  • I don't think the viseme interpretation is very accurate. For example, I get a lot of "D" for "M" or "NTRL," etc. (I also noticed this in the tutorial, so it's not just particular to my case), so even when the words are transliterated correctly the VO sync doesn't really look right. And maybe it's my VO performer in this specific project, but I have yet to solve a first word error no matter how many different ways I try to massage it.
  • Hey Michael,

    thanks for the feedback! A number of customers mentioned that they would prefer a different viseme set. However, we did a poll about what sets to use and the ones we currently use are the most preferred ones.

    What visemes do you usually use when animating?
  • Can't get the script to work for more than 20%. I have tried splitting the audio into smaller sections, but it still doesn't help. The accent is Australian - would that make a difference? If so, you should tell people that the script only works with American accents.

    Gary
  • Hi Gary - thanks for the feedback! It really depends on more than just the accent. However, we did have the best results with American English, so I guess we should promote that on the product page somewhere. If you need more support or a refund, please open a support ticket!
  • Hi there, I've been using your script quite a bit lately and it's saved me a lot of time! However, all of a sudden I can't get it to recognize my WAV files - possibly related to the new AE update? I'm going to try rolling back to the previous version of AE and see if that fixes it, but I used the script just two weeks ago with no issues. If you have any insight into why this might be happening, please share! Thank you!

    FYI - Windows 10 - Latest NVIDIA Drivers - Latest Java. 

    *EDIT* I tried rolling back three versions, and each one had the same result: nothing shows in the Analyze tab for audio files. It almost looks like the drop-down menu isn't working or something.
  • Hi Cory – let's move this to a support ticket so we can keep this forum clean :)

    The script works with all current AE versions and Java shouldn't be the problem here. The trial version has a limit of 15 seconds duration on WAV files. Could that be it? Let us know in the support ticket!
  • Thanks Joachim, here's the ticket number: #1066216
  • Hello, is it possible to make a talking mouth with this that moves between keyframes - like a mouth made from a shape layer, with interpolation happening between keyframes, instead of the stop-motion style it has in the demo?
    Greetings
  • Hi Fridolin,

    no, this product is based only on time remapping, so no interpolation is possible. However, I recommend checking out the trial - you might find that the brain recognises this animation as very smooth :)
  • In the tutorial you mention "Premiere Analysis Markers" - can you give more information on how to do this?
  • Hi Anthony,

    This used to be a workflow that worked with a legacy version of Premiere Pro (up to CC 8.1). Unfortunately, Adobe no longer provides downloads for legacy versions and so this workflow is no longer available.

    Thanks for the heads-up, I will see if I can remove this part from the tutorial. 

    Why are you interested in this feature? Are you trying to analyze a language other than English?
  • I cannot seem to get this script to show up in my program at all. Here is the list of errors I get...
    "INVALID LICENSE (-8)"
    ...then I run the trial mode...
    "THE SCRIPT REQUIRES ADDITIONAL FILES...DOWNLOAD THEM NOW?"
    ...I hit Yes...
    "DOWNLOAD FAILED...CANNOT OPEN SOCKET.."
    "PROCEED TO MANUAL INSTALLATION?"
    I hit Yes and followed your steps to a tee. I even put the additional files into all the users' roaming file paths, and I have restarted AE multiple times.
    Help?!
  • Hey Nick - please open a support ticket so we can help you there! :)
  • Hello,
    I am testing out the demo version before buying. I have run into two problems though.
    1 - The pitch of the voice has changed after analysis - it is now slower and deeper
    2 - In the animate tab, there is no mouthshapes comp for me to select. 

    So I cannot see if this works well for me. Can you help please?
  • Hi Amber,

    1 - It's probably the RAM playback. Once it is cached, it should play back in the normal pitch. If not, check the frame rates!
    2 - In order to see a mouth comp in the Animate tab, you need to create one in the Create tab first. If you close the plugin or AE in between sessions, you need to create another mouth comp.

    I hope this helps!
  • Hi there,
    Having very little success getting Lipsyncr to actually analyze the audio properly, so I'm having to spend a crazy amount of time painstakingly lining up the markers for each word by hand. Is there a trick to getting the automated analysis to work? I'm following all of the proper procedures for the audio file -- 48kHz, 16-bit, mono, .WAV. The language is American English, and the audio recording is incredibly clear -- no background noise, music, SFX or anything; the speaker is very carefully enunciating, and is speaking quite slowly. I'd be hard pressed to think of a more ideal recording for speech-to-text, but Lipsyncr routinely only recognizes 5-6% of the words, and flubs them up right from the start, meaning I have to adjust everything by hand. Changing the first word to a sound-alike makes no difference.
    Is there a way to make LipSyncr work with Premiere's speech-to-text engine? It has no problem whatsoever with the audio.

    Please help!
  • Hi Jeff,
    if you open a support ticket, you can send me the original audio file as well as the transcript and I can have a look. My guess is that it has something to do with compression!
  • Hi, any idea how to make the analysis work in Spanish?