• It works with only one language, and there is no adjustment for the other.
  • Yep, I was thinking the same... :D Do you have plans to include Spanish again?
  • Yes, Version 2.0 only supports English speech analysis. 

    Unfortunately, I cannot promise that future versions will support other languages, because we will have to see how the development of the analysis engine progresses. What I can say is that if at all possible, Spanish is topmost on this list.
  • How about word breakers? Could you place it in a manual mode and analyze the letters of the text rather than the sound?
  • Not 100% sure what you mean by "word breakers", but you are right - we could implement a manual mode like this instead of the analysis. And yes, the animation would be based on letters rather than visemes, leading of course to less detail. I'll look into that and keep you updated.

    Thanks for the feedback!
  • I'm testing out the trial version and got to the 3rd tab where you 'Animate', but the dropdown menu for 'Mouth Source' shows nothing.
     I am trying to use the lipsyncr template.  Any suggestions on how to get it to work?


  • Hey Kathleen,

    There are generally two things to keep in mind:
    • Lipsyncr will not recognize a comp that you duplicated from inside After Effects.
    • Don't close the script between switching the tabs.
    Could it be that you closed the script and opened it again at some point? Please hit 'create' again to create a new template comp (in the 1st tab). It should immediately show up in the 3rd tab (there's also a refresh button next to it).
  • I had version 1.3 and it worked fine. When I upgraded to v2, it shows me the error: "Unable to execute script at line 1. Syntax error." How do I fix this? Please help.
  • Hi! I have this error in the trial version (see image).
    I don't understand anything about Java. Can you help me?

  • Hi, I keep getting this error when speech analyzing:

    "There was an error executing the Aligner. Error details:
    Maximum execution time exceeded. Terminating."

    I am using version 2.2.2 with After Effects CC 2014.

    The script worked fine before.

    Anyone?
  • Yes, there is a 1-minute timeout to prevent the script from freezing if anything goes wrong. It could just be that the combination of your hardware and audio track takes longer than that to analyze. You could cut your audio file into two or three parts and see if it works.

    If not, make sure your audio file is a .wav with 16 bit depth and 16 kHz sample rate.

    This error also occurs with files the engine simply cannot recognize as speech, such as music or bad-quality recordings.

    Let me know if that helped!
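    In case it helps anyone checking their files against those requirements: the bit depth and sample rate of a .wav can be read with a few lines of Python's standard `wave` module. This is just an illustration, not part of lipsyncr:

```python
import wave

def check_wav(path):
    """Return (matches, bit_depth, sample_rate) for a WAV file,
    where matches is True for 16-bit depth at a 16 kHz sample rate."""
    with wave.open(path, "rb") as wf:
        bits = wf.getsampwidth() * 8     # bytes per sample -> bits
        rate = wf.getframerate()         # samples per second
        return bits == 16 and rate == 16000, bits, rate

# Example: matches, bits, rate = check_wav("voiceover.wav")
```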
  • I'm having this issue where the panel, no matter how I resize it, cuts off the "Create", "Analyze" and "Animate" buttons... so I literally can do nothing with this.
  • For more info, I'm using a surface pro 3 with current Adobe CC subscription w/ windows 10 insider preview build. I'm thinking/hoping it's the windows 10 build, and not the screen size of the device itself...

  • Hey there, just wanted to ask you about the system you are using! 

    Neither screen size nor resolution should be a problem. I'm pretty sure Win 10 causes it. You could try pasting the script into "Scripts" instead of "ScriptUI Panels" and then open it via "File > Scripts". That way the UI won't be dockable, but the docking might be what causes the issue.
  • You are a gentleman and a scholar! That worked!

    A note though, for those like me who are too impatient to wait for the official release of a new OS (and PS: overall, Windows 10 [even though I am personally an Apple kid] is so much better than Windows 8): on first glance it looked as though the issue was the same using your workaround, BUT when I resized the undockable window to its smallest point, there was a graphical glitch that made it pop out to full size.

    I did this first on the bottom border, then again on the side border, to make the window small enough that it doesn't eat up my whole screen, and it works beautifully. I double-checked to see if that affected the dockable UI script, but it doesn't. So Windows 10 users on an SP3 will need to use the method you recommended. Thank you!!! The script is amazingly helpful and efficient.
  • We bought a licence and it is working brilliantly.

    Unfortunately we are having trouble with the frame rate of the analysis.  Our character animation projects are 12.5 fps.  When we transfer the keyframe information over we are getting keys that sit in between the timeline frame numbers.  Is it possible to output at 12.5 fps or do you have a possible workaround that we could use?
  • Hey there, glad you like it!

    It is normal that the keyframes do not sit exactly on the frames of the comp. This way, the viseme is displayed on the next "real" frame after the keyframe. Pretty much how normal keyframes behave, right? This is the same for any fps; even at 60, the higher resolution just makes it harder to notice.

    What is the problem and what would you like to happen? I imagine the text that the character is speaking is way faster than what 12.5 fps can display?
  • Thanks for the reply. 
    The character isn't speaking faster than 12.5 frames per second.  The audio is running at a per second rate.  What I need to do is assign 12.5 images per second to represent that audio. This has been the norm in traditional animation since the 1930's, animation on 2's. What is happening is after analysis I am getting 30 keys or 30 images for one second. Because there are so many keys and only 12.5 spots I am having to go through and hand pick which image will stay and which image will go. What I would like to know: is there a way to output at a different frame rate ie. 25 fps or 12.5 fps so the sound that is represented on the timeline matches the waveform?
    Cheers.
  • So you are getting more keyframe information than 12.5 fps can display? Or is the problem that 30 is just not a multiple of 12.5?

    In both cases: the script does NOT output at 30 fps, that is, it does not create 30 keyframes per second. It creates keyframes depending on the speed of the audio. The thing is that these keyframes are created at the exact time that they appear in the audio; most of the time that is between the frames of the comp.

    Therefore, if I provided the option to select the desired frame rate of the analysis comp, nothing would change. You would still have to "hand pick".

    Does that make sense?
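    To picture this, here is a small, purely illustrative Python sketch (not lipsyncr's actual code) of the behaviour described above: a keyframe created at the exact audio time becomes visible on the next whole frame of the comp, whatever the frame rate is.

```python
import math

def visible_frame(t_seconds, fps):
    """Comp frame on which a keyframe at audio time t becomes visible.

    A keyframe sitting between frames shows up on the next whole frame,
    so changing the comp's frame rate only changes which frame that is."""
    return math.ceil(t_seconds * fps)

# A keyframe at 1.03 s becomes visible on frame 31 at 30 fps,
# and on frame 13 at 12.5 fps - in both cases you cannot place it
# "between" frames; the comp simply shows it on the next frame.
```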
  • Yes.
    Thanks for explaining it.
    I'll have another look at it. 
    Cheers.
  • Bought this yesterday - using CC2015. Have to say it's a very expensive way of getting some markers and a bit of time remapping. Lots of issues:

    • Don't seem to be able to use it again in a project when you reopen it - error popup.
     Speech analysis is just dumping a load of markers at the end of a timeline that I then have to place for every word. I am using the Media Encoder preset supplied.
     Then I have to go in and change time remapping values to make it look half decent.

    I'd love to be corrected on any of this, but I don't see how this is worth actual money. At the moment I'm not happy to have paid for this.

  • It sounds like the speech analysis of your specific audio file is running far from well. As with any speech analysis, it is crucial that you have a good voice recording. Apart from a noiseless recording, it should also feature only a single speaker.

    The script doesn't remember what you did if you close it and open it up again. You'll have to do all your animations in a single "session". Let me know if that info helps.
  • Thanks for the reply. I went back and tried it with a professionally recorded VO from another job. It worked perfectly, so please accept my apologies; I was frustrated with the results from a not-great voiceover that might be too fast. Any tips on how to treat a voiceover to help analysis?
  • I am very glad to hear that you had a better experience, Philip!

    As I said - the speech analysis engines like clear audio tracks best. Maybe you can remove some of the noise. If you have several speakers in just one recording, you should definitely split the passages of each speaker.
  • ..
    Was your question answered?

  • I can only see an empty comment again - did you solve/answer your problem/question?
  • Your script is good, but it doesn't work as well as the demo video suggests.

    I've tried analysing a 5 minute piece of audio and I keep getting a time out error. Instead I've had to break the clip up into about 10 smaller clips and do it that way.

    It's still faster than syncing everything manually, so thank you for that, but it's very disappointing to spend $60 on a product that doesn't work exactly as advertised.
  • Sorry to hear that. Such things strongly depend on the specific setup of audio file and computer. I hope you are patient enough to try it on another file some day. I am sure it will be better then!
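    For anyone hitting the same timeout: splitting a long WAV into shorter chunks can be automated instead of done by hand. A minimal Python sketch using only the standard library (an illustration under my own assumptions, not a lipsyncr feature; the chunk length is a parameter):

```python
import wave

def split_wav(path, chunk_seconds=60):
    """Split a WAV file into consecutive chunks of at most chunk_seconds each.
    Returns the files written, named <path-stem>_part0.wav, _part1.wav, ..."""
    written = []
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = int(params.framerate * chunk_seconds)
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            name = f"{path.rsplit('.', 1)[0]}_part{index}.wav"
            with wave.open(name, "wb") as dst:
                dst.setparams(params)      # same channels, depth, rate
                dst.writeframes(frames)    # frame count in header is fixed on close
            written.append(name)
            index += 1
    return written
```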
  • Hi Joachim,

    I downloaded the trial version and am pretty happy with how it works. However, it does stumble over how I've organised my project files. In one main directory name I used an octothorpe (#, aka hashtag). I've never run into problems with this before, not even with Dropbox. It would require me to store project files outside of my projects folder, or read audio once in a root folder and then move it back - long story short: hassle ^__^

    Is it possible to support # in filenames when reading audio files, or is it a hard-coded Java issue? It's the first time I've come across it, and it promises to ruin my workflow.

    Edit: Decided to buy it regardless, I can work around it. But now I'm getting an error creating composition: ReferenceError: Object is invalid. I cannot use the script now as it seems to fail creating any composition(s). Halp?

    Edit 2:
    Okay, so I am experiencing the following behaviour from the script.

    • After placing the script in the ScriptUI Panels folder and starting AE, you need to let it download the assets without switching focus or quitting AE. Or you'll get this error: "Unable to execute script at line 225. items is read only"
    • However, I am now getting that error at start-up of AE.
    • On a blank project (the one my AE starts with) it runs fine creating new lipsyncs (despite the error).
    • However when closing that one and starting a new one, or when opening an existing project, it fails with the following error when you click on the Create button:
      lipsyncr error
      Error creating Composition.
      ReferenceError: Object is invalid
    • This means I can only use it once, on a blank slate, which is pointless considering I need to use it in an existing project.
    I am on Mac OSX, AE CC2014. 

    Edit 3: If it makes any difference: the trial worked fine, apparently. I only tested it in one project, but it was an opened one, after starting AE.

    Thanks for helping me out through Twitter! For anyone interested: the startup error is apparently caused by another script, or a conflict of scripts somewhere on my computer. However, there is still a bug when switching projects and hitting the Create button;
    lipsyncr error
    Error creating Composition.
    ReferenceError: Object is invalid

    Great support! ^__^
  • Hi,

    is it possible to replace the template viseme images with short animations (several frames long instead of single frame images) so that the lip movements have smoother transitions when the character is speaking slowly?

    Many Thanks
  • Hi Jarrick, I don't think this will make a lot of sense, as the "correct" transformation of a viseme depends on both of its neighbours. Maybe frame blending can help make your animation a bit smoother, but that might not turn out well either.
  • Hi Joachim,

    I want to animate photographic type images (not simple vector drawn characters) so am after an effect like Auto lip sync but which is more realistic in the range of visemes it can use- like Lipsyncr2. Would it be possible to parent or link several liquify effects to the Lipsyncr frames so that for each viseme that is triggered, it will make liquify effects to move the lips, teeth and tongue independently?
  • Hi Jarrick,

    if you want to link a liquify effect to a viseme, why not include it in the viseme's image? I understand that you want to move lips, teeth and tongue independently, but what would you like to link them to, so that you get a different result than described above?
  • Hello Joachim,

    Thank you for this plugin. It works well, however I am having one issue with it. When I close the Lipsyncr dialogue after creating the template comp and analysis comps, and then reopen it, I am unable to pick the template comp or analysis comp as the pull down menus are both empty. Also, is it possible to just connect a template comp that was not created within the lipsyncer create tab?

    Edit: I see now that you have to keep the script open the whole time, which is unfortunate because my current project is 30 minutes and is better suited to a less strict workflow. I do have one more question, however. Is there a way to have keyframes NOT placed in between frames?

    Again, thank you for all your work on this!
  • Hello,

    you are right, the script has to stay open while you go through tabs 1-3. What you could do is create a new template comp after restarting the script and just paste the contents of your desired comp in there.

    How would that other template comp be structured? Lipsyncr works with specific sets of visemes which are aligned in a specific order along the timeline of the MouthComp. Again, you could just create a new template comp and paste your desired layers. Just keep in mind that the visemes and their alignment have to match.

    Most of the time, due to the way they are calculated, the keyframes don't sit exactly on the frames. In that case, they appear in AE on the next actual frame. I have not found a way to change this programmatically. Why is that a problem?

    (If you have a longer support request, please write me a PM or open a support ticket to keep things tidy here)
  • Joachim,

    Thank you for your response! I did just start pasting the layers into the new MouthComp. It's a bit of a hassle, but it does work. 

    What I am experiencing with the keyframe issue is that the keyframes are not appearing on the next actual frame, but rather in between frames, which means it is skipping a lot of the more important visemes. Does that make sense?

    Thank you for all your work on this plugin!
  • Yes, that is normal. The script is written that way so that people can adjust their framerate later on. The viseme appears on the next frame if it sits in between frames. It should skip them only if the spoken track is that fast, right?
  • Hello J.H. been using the script for a couple months now and it does the job! Couple questions:

    • Can we set the default frame rate to 24 fps? It seems to default to 30.
    • Is there a more efficient way to replace the template phonemes with our phonemes? Copy and paste can take a while in a long project. Can we replace your template files with our corresponding phonemes?
    Thanks again. Def worth the $60. cheers!

    edit - made this video with minimal editing to the mouth movements: www.youtube.com/watch?v=SFFzCsrQfw4&feature=youtu.be

    the biggest time suck is from copying and pasting phonemes
  • Thanks for your appreciation! The frame rate can't be set - it's 30 fps as you've mentioned.

    You know the trick of holding Alt while dragging one piece of footage onto another inside a timeline in order to replace it? If that's still not fast enough for you, I could tell you where the default images are saved. That creates room for errors (your images would have to be named exactly the same), but if you use the same illustrations in many projects, it might be worth the hassle.
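    If you do swap the default images on disk, a quick filename comparison catches naming mismatches before they cause errors. A hypothetical Python sketch (the folder paths in the example comment are placeholders, not lipsyncr's guaranteed layout):

```python
from pathlib import Path

def name_mismatches(default_dir, custom_dir):
    """Compare filenames in two folders; replacement images must be
    named exactly like the defaults they stand in for.
    Returns (missing_from_custom, unexpected_in_custom)."""
    defaults = {p.name for p in Path(default_dir).iterdir() if p.is_file()}
    customs = {p.name for p in Path(custom_dir).iterdir() if p.is_file()}
    return sorted(defaults - customs), sorted(customs - defaults)

# Example (paths are hypothetical):
# missing, extra = name_mismatches("lipsyncr/img/placeholders", "my_visemes")
```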
  • Hello. I had been using 1.7 and it works great, and I was looking forward to using 2.2.2. I installed it and it failed to download the needed files for the interface. I went back and checked my permissions and set them correctly. I even removed the script and put it back, but it does not try to download the needed images again. I just get an error: "Error drawing user interface, please check folder permissions. IOError: Bad Argument - File or folder does not exist."

    Can you help me fix this please?

    Thanks,
    Rich
  • Hey Richard,
    yes, I'll send you a PM
  • Hi there Joachim 
    I've not had much luck with your speech analysis, so I've been using the Premiere analysis and it's working better.
    When I 'analyze' in your script, it adds 3 markers ('the', 'audio', 'transcription') overlaying the good markers from the audio metadata.

    Also, when I animate with the script, I've noticed it is ignoring the marker lengths and the silence, leaving a lot of open mouths.

    I presume it's meant to return to silence or neutral in between the markers if the marker length leaves a gap?

    thanks 
    John 
  • Hi John and sorry for the rather late response.

    If you still have a version of Premiere that does have this feature, it is a good idea to use it!

    If I understand you correctly, you are saying that the script adds markers in addition to those that Premiere has already made? If you use Premiere to do the analysis, you don't have to do it again via lipsyncr! In the analyse tab, untick the checkbox 'Speech Analysis' and leave the textbox blank before hitting 'Analyse' (there's a video called 'Premiere Speech Analysis' on the product page that illustrates that).

    I can also give you a version of lipsyncr 1, it will also make use of the marker lengths, as it is made to work with Premiere! Send me a PM if you want to try it.

    - Joachim


  • Hi Joachim  thanks for getting back to me  :) 
    Yes, it adds these markers with the checkbox unticked, after pressing 'Analyze' in the new composition it creates.
    Then when Animate is pressed, it alters the marker lengths incorrectly.
    I'm getting around it by selecting all the keyframes on a word that has a pause at the end and scaling them to the word length, or by duplicating the original waveform with correct markers before pressing Animate, to use as a reference.

    I'll PM you shortly for Lipsyncr 1 , 
    thanks
    John 
  • Hi Joachim,

    In the placeholders folder (\Adobe\Adobe After Effects CC 2015\Support Files\Scripts\ScriptUI Panels\lipsyncr\img\placeholders) there are two extra images that are not included in the mouthComp: I.jpg and U.jpg. Is there a way to incorporate these images into the mouthComp so they are used when the animate function is started?

    I and U are used a lot in lip syncing. It would, I believe, add a more aesthetic and believable match to the audio being heard.

    I have the full version (1.3) of lipsyncr. Any help you can give me to incorporate these additional images is greatly appreciated.

    Thank you
    -Chris
  • Hi Chris,

    these two visemes are used for German and Spanish in lipsyncr 1.x, but not for English. In this script, much of the development effort has been put into translating phones to visemes. Of course, many animators use their own set of images, but if you want to simplify all the phones down to just 10 visemes, you have to make cuts. There are scores for how common and how distinct each viseme is; 'U' is actually not that different from 'O', so they are put together.

    Long story short: in the current state of the script you cannot access them. I don't doubt that the animations would look more detailed (more visemes will always look better), but again, we capped the level of detail at 10 visemes.

    - J
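    For anyone curious what "making cuts" looks like in practice, here is a purely illustrative Python sketch of a many-to-few phone-to-viseme table. The phone names loosely follow ARPAbet, and the groupings are my own example, not lipsyncr's actual tables:

```python
# Hypothetical mapping: many phones collapse onto a capped set of visemes.
PHONE_TO_VISEME = {
    "AO": "O", "OW": "O", "UW": "O",   # 'U'-like sounds folded into 'O'
    "IY": "E", "IH": "E", "EH": "E",
    "M": "MBP", "B": "MBP", "P": "MBP",
    "F": "FV", "V": "FV",
}

def visemes_for(phones):
    """Map a phone sequence to viseme names, with a catch-all for the rest."""
    return [PHONE_TO_VISEME.get(p, "etc") for p in phones]

# visemes_for(["UW", "M"]) gives ["O", "MBP"]: the 'U' sound simply
# reuses the 'O' mouth shape, which is the trade-off discussed above.
```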
  • Hey Joachim!

    I've created a mouth comp, analyzed my audio and corrected all mistakes and word durations but when I go to animate it's showing nothing in the analysis comp?  

    Refresh doesn't seem to do much either. Any way to get it to recognize my lipsyncr comp? I don't want to have to go back and do all the work again; there were a lot of corrections. I actually haven't had lipsyncr recognize more than 20% of my transcriptions so far.

    Any help or tips would be greatly appreciated!  Thank you!
  • Hi Michael,
    don't forget that we still have an open support ticket, so I will write my answer there to keep this thread clean ;)