Excellent first version! Well done guys. A most welcome addition, and I love the wiggle. But...
The demo footage you have there is very clean in terms of matching geometry, quality and color. As you must know, most 3D footage isn't like that, and fixing it is often the biggest job.
So, for instance, if one eye needed sharpening it would be nice to not have that effect applied to the other eye. Scaling, warps, or vertical image shifts (often dynamic) are other examples, and being able to apply such tweaks without them being cloned would be nice. You currently have a way of doing that, but it's a bit involved and something simpler would be good as an option, like using the comments field?
Other effects could do with negative mirroring. Rotation for example. So rotating one eye 0.5 degrees would then automatically rotate the other -0.5.
Auto-scaling would be good too. So for the above rotation example the image would automatically be rescaled to avoid cropping.
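For what it's worth, the auto-scale factor for a given rotation is easy to compute. This is my own illustration of the math, not anything from the script: for a w x h frame rotated by theta, the minimum uniform scale that keeps the frame fully covered is cos(theta) + max(w/h, h/w) * sin(theta).

```javascript
// Minimum uniform scale so a w x h frame rotated by theta (radians)
// still fully covers its own bounds, i.e. no black corners after rotation.
function autoScale(w, h, theta) {
  const t = Math.abs(theta);
  return Math.cos(t) + Math.max(w / h, h / w) * Math.sin(t);
}
```

For the 0.5-degree example above, a 16:9 frame needs an up-scale of about 1.5%.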
I'm a bit confused about what you're meant to do with color matching here..?
Checkerboard viewing mode would be useful.
Your output options should maybe include interlaced 3D. Most passive 3D monitors will display that as 3D without having to switch to 3D viewing mode. A lot of 3D folks prefer it as it doesn't compromise horizontal resolution. It's also great if the monitor is in a situation where it may be switched off and will later power-on on in 2D mode. Interlaced 3D will then still come out as 3D.
Hope that's useful?
Hey Karel, Thanks for the feedback. It is all useful indeed!
As for things happening in the right and not the left even after LR Clone, there are a few ways:
- Use the F/X button to split any effect or mask into Left and Right "versions" that both sit in the Left. Each version only becomes dominant in its respective comp.
- Use "evil_wink" in any layer comment. This will invert the enabled state of the layer, so if it is disabled in the Left it will become enabled in the Right. This way you can maintain entirely separate Left and Right versions of any footage or adjustment layer: use evil_wink on both of them and keep one disabled in the Left. (Note: S3D footage will still be swapped with its twin unless you also add "no_evil" to the layer comment.)
- Create Left and Right pre-comps manually and use "no_evil" in the comp comment of the Left comp in the project panel. Be sure to name them with the channel IDs (usually "_L" and "_R"). This will lock the pair from ever being LR Cloned, but they will still be usable as a S3D pair when nested in other comps.
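For readers curious how comment tags like these can drive per-eye behavior, the core logic is tiny. The sketch below is an illustrative model only, not Evil Twin's actual source; the function names are made up:

```javascript
// Illustrative model of comment-tag handling (NOT Evil Twin's source).
// A layer here is just { enabled: boolean, comment: string }.
function hasTag(comment, tag) {
  return typeof comment === "string" && comment.indexOf(tag) !== -1;
}

// "evil_wink" inverts the enabled state when the Right comp is generated,
// so a layer disabled in the Left shows up enabled in the Right.
function enabledInRight(layer) {
  return hasTag(layer.comment, "evil_wink") ? !layer.enabled : layer.enabled;
}
```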
Very excited about this plugin. Just started working with it, so I don't have a lot of feedback yet, but one problem I am having: I am not able to get the wiggle function to work. The info box gives feedback that the process is "done", but nothing happens. Holding down the Alt key will flip it to anaglyph, but will not toggle back. I am using After Effects CC 2014.2. As Karel mentioned, most stereo 3D footage needs much alignment work. That would be a great addition to the script so we would not have to use another program like Vegas Pro to do the alignment first. That would really speed up the workflow. I wasn't misled that your script would do any of this, but I was hoping it might.
Very excited about this plugin. Just started working with it, so I don't have a lot of feedback yet, but one problem I am having: I am not able to get the wiggle function to work. The info box gives feedback that the process is "done", but nothing happens. Holding down the Alt key will flip it to anaglyph, but will not toggle back. I am using After Effects CC 2014.2.
Thank you Stephen. Very excited about it over here too! Let's get to the bottom of the Wiggle not working. The only time I notice this is when I've mistakenly got the same image in both Left and Right comps (meaning it is actually wiggling, but the images are identical so you can't see the difference). This would happen if:
- The layer sources, whether base footage or pre-comp, are not named according to Evil Twin's naming rules.
- The layer comment has the "no_evil" tag, in which case the same layer source will be used in both eyes.
But then, as you say, Anaglyph is working... Can you confirm that the Anaglyph view is indeed "internally" stereoscopic? Another thing to try right away would be to delete the Evil Twin - Right and Evil Twin - S3D layers and click Wiggle again - which will add fresh ones. Also, since the Wiggle is performed by guide layers, if you're viewing through a nested situation, the Wiggle won't show up in the parent comp.
Alt - Wiggle currently doesn't turn off Anaglyph by design (it's unclear what state to go back to... ). I've been considering changing this but for now, just disable the Evil Twin layers to turn off Anaglyph.
As Karel mentioned, most stereo 3D footage needs much alignment work. That would be a great addition to the script so we would not have to use another program like Vegas Pro to do the alignment first. That would really speed up the workflow. I wasn't misled that your script would do any of this, but I was hoping it might.
The current philosophy is to make it as quick and easy as possible to leverage or adapt whatever you already use in After Effects. Can you elaborate on what Vegas Pro does for stereo alignment that can't be done in AE?
All suggestions and feedback very much appreciated!
I am looking to purchase this or another script for After Effects. Would this script work for 3D camera tracking, to transfer the Left tracked footage camera keys into the Right camera? To create 3D camera tracking and compositing of objects into real stereo footage? Would I be able to try it out before buying it, or is the trial very limiting? Hopefully my questions made sense.
Hello Marcin, If I understand correctly: you have started with stereo 3D footage, and created a camera in After Effects that is tracked to that footage - and now you would like to place 3D After Effects elements in front of that camera to “break in” to the depth space of your stereo footage. If so, yes, Evil Twin can handle this. You can create a stereo camera pair by selecting the camera position and clicking the ⚡X + button. This will set up a basic parallel stereo camera with a control for the right camera offset. In keeping with the main idea of the script - you control the stereo camera from inside the left comp.
Any 3D layer in your comp should then be visible in stereo 3d and the right camera will follow the left in real time as you edit. Pre-rendered stereo footage should of course be in 2D layers and will not be affected by the camera.
The trial is 5 days with access to all the features with the only limitation being that you can only use it on comps that are 60 frames or less. This should allow you to test out any method to see if it will work for your needs. I suggest making a copy of your comp (be sure to rename it) and trimming it to the 60 frames of your choice to test with.
Feel free to keep posting here with questions and I’ll try to respond quickly.
Marcin, I left out one detail: You need to include a convergence adjustment layer in the comp by pressing the CON button. Set the overall convergence on the slider in that layer to let the parallel cameras converge past the screen plane (otherwise it will feel very flat in stereo). It's the combination of camera offset and scene convergence that will allow you to match the 3D space in your footage.
I am having trouble. Everything works, as in it looks 3D, but the elements in AE look flat. What it is, is camera-tracked footage with pictures coming towards the camera. I added the convergence adjustment layer but can't seem to add depth. It looks very flat. Not sure what I'm doing wrong.
It's hard to say what's wrong without looking at your project, but the most common reason for things not working smoothly when you first start is that something, like a footage element in the project panel or a layer name, isn't named properly. The script uses the names and folders of your footage elements to figure out what is stereo-paired with what. You might want to double-check the manual for the naming rules - which are important for the script to run well.
Otherwise, have you downloaded the template project for 3D cameras from the main page? It might help since it demonstrates what I described in my last post.
Do I need to also have a left and right version of each image in my comp? (The images flying towards the camera are not being translated into 3D layers.) They are moving towards the camera on the z axis from far to near. So it seems that the cameras are not separating and are pretty much the same copies of each other, with the 3D footage in the background. The S3D convergence layer seems to be just separating the cameras but does not add any depth to the animated images. I have started with just animating layers, moving them closer without the camera being animated, and it's the same problem: the layers are coming towards the camera on the z axis but there is no visible depth. Is there a fix?
You don't need to put the right version of the image in the left comp.
The camera separation is controlled by the R --> Position slider in the effects of the null layer "yourCameraName: R -->", which was added when you selected the camera's Position and pushed X+.
You need to use the camera separation and the Scene Convergence to achieve S3D depth.
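A simplified pinhole model (my own illustration, not the script's internals) shows why both controls are needed: separation scales the overall parallax, while convergence decides which distance sits on the screen plane.

```javascript
// Screen parallax for a parallel stereo pair with convergence applied as a
// horizontal image shift. f: focal length (px), b: camera separation,
// Zc: convergence distance, Z: object distance. Sign convention here:
// positive = behind the screen plane, negative = in front of it.
function screenParallax(f, b, Zc, Z) {
  return f * b * (1 / Zc - 1 / Z);
}
```

With b = 0 everything collapses to zero parallax (flat); with convergence at infinity everything stays on one side of the screen plane and can never cross it, which is why animating Z alone can still feel flat.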
I really don't know what I am doing wrong. I get the (Evil Twin - Right) layer created, the (Evil Twin - S3D) layer created, and the (Scene Convergence) layer created, but I don't get the (R --> Evil Twin S3D Camera) layer created. I only get the (R --> Convergence) created underneath my Scene Convergence effects layer. Today is the last day of my trial. Is it because I had run the AE 3D Camera Tracker? Is this somehow affecting the function of the script?
The (Scene Convergence) layer created, but I don't get the (R --> Evil Twin S3D Camera) layer created.
R --> Evil Twin S3D Camera is specific to the template comp, where the camera is named Evil Twin S3D Camera. You are looking for a layer called "R -->" followed by whatever your camera is named. It should appear after you've selected the Position property of your camera and pushed (⚡︎ X +) (there should be a confirmation about creating the S3D camera). Note that no two layers should have the same name.
I only get the (R --> Convergence) created underneath my Scene Convergence effects layer.
Do you mean a layer called R --> Convergence, or are you referring to the effect that's in the Scene Convergence layer? There should be a numeric slider in the Scene Convergence layer with that name - I think that's what you mean.
Sounds like you're not finding where to control the Camera offset or for some reason it simply is not being added. Can you describe what happens when you select the camera's position and push the (⚡︎ X +) button? You also need to hit the (LR) button for any changes to update in both eyes.
Is it because I had run the AE 3D Camera Tracker? Is this somehow affecting the function of the script?
OK, I had not selected the Position of the camera to create the new R --> camera layer in the Left view. Now I did. I can adjust the layer without affecting the background. The only problem is that it does not affect objects coming towards the camera from the background. So it just looks like it's getting bigger but not coming towards the camera. I've played with the R --> 3D Tracker Camera slider, but that just seems to move it side to side without adding depth to it. Am I missing a step? Or do I need to add something to the Right layer?
By the way, thank you so much for your patience, and helping me out.
Ok we're getting closer! The challenge now is to find the correct value for the R--> slider in combination with the correct z coordinates for your 3d layers - which you are trying to match to "baked" S3D footage with no numerical information about its internal depth - so there is some trial and error involved.
Try thinking of it this way:
- The larger the (R-->) value for the camera, the deeper the appearance of the S3D space (more obviously coming toward the camera).
- So if the image seems to scale a lot without moving much in S3D, try increasing the R--> value.
- Increasing the R--> value may throw off what appeared in 2D to be a good position for your flying image, so you would have to re-adjust your positions while viewing in S3D in order to get a visual match.
Happy to help. Hoping this info is useful for others as well.
This is a very extreme offset, but it shows that there is no depth. I really can't figure it out. What am I missing, or still not doing? I decided to show the images because I am not sure I know what I am doing.
From what I can see, you need to have a Scene Convergence that is not 0 - otherwise moving the image in Z will not be able to break the screen plane (because the cameras are parallel) and will feel stuck in the same S3D depth.
Of course, if you change the Scene Convergence R-->, your base footage will also be adjusted - and it looks like you've already got that where you want it. You can apply another shift to the base footage to balance it out (select the Position of the footage, press X+, and use the new R --> slider under Effects in that layer, setting it to the negative of the one in Scene Convergence).
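Numerically, the balancing step works like this, assuming (as a simplification of my own, not the script's documented behavior) that the global Scene Convergence and the per-layer R --> slider simply add up as horizontal shifts in the Right comp:

```javascript
// Hypothetical additive model of the Right-comp horizontal shift a layer sees.
function rightEyeShift(sceneConvergence, layerSlider) {
  return sceneConvergence + layerSlider;
}
```

Setting the footage layer's slider to the negative of the Scene Convergence value cancels the global shift for that layer only, leaving the rest of the scene converged.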
The Camera Zoom will affect things as well. Not sure if you are free to play with that after having done the tracking...
Lots of playing around but it works. The thing that really helped was adding the other shift layer to the footage to keep the same depth in the background footage.
Thank you so much...
One last thing. I did notice that my footage is a little bit off vertically. Is there any way to fix that?
GREAT! Future versions will make this a bit simpler but I'm glad to hear it's working for you!
For your footage alignment - vertical offset is described in the manual:
1. Highlight the Position property of the layer, right-click and choose “Separate Dimensions”.
2. Select the Y Position and click ⚡X+.
3. Click L▷R to update the Right comp.
4. Use the R --> – Y Position slider in the effects group to control the vertical position of the Right comp relative to the Left. Keyframe the slider as needed.
You will have to redo the same thing for the X position to get back your horizontal offset from before (Separate Dimensions deletes the expression that was there).
But at a glance I think your problem might be Rotation-based. You can use the same technique on the layer's Rotation property as well.
I work at a production house that specializes in creating stereoscopic visual effects and this will be a huge time saver.
Our workflow requires checkerboard integration of the left and right images.
It does not appear that your script supports checkerboard stereo.
Would it be possible to implement this as a feature?
Currently I do it manually using a procedural checkerboard track matte.
This feature is very important for our workflow as it would prevent the tedium of manually creating the integration comp, or going to a 3rd party software such as Dashwood.
Seems like the hard part would be creating a procedural checker.
I noticed that the checkerboard effect built in to AE will not work at single-pixel resolution.
So I use Photoshop to create a 2x2 pixel checker image, then 'Edit > Define Pattern' and fill with that pattern.
Perhaps there is some script based way to do this within AE?
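The pattern itself is trivially procedural in plain code, which is probably all a script-based approach would need. A sketch of the logic only (not an actual AE scripting call):

```javascript
// Build a single-pixel checkerboard pattern as a 2D array.
// 0 = show one eye, 1 = show the other; (x + y) % 2 is the whole trick.
// (Using y % 2 alone instead would give a row-interlaced pattern.)
function checker(width, height) {
  const rows = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) row.push((x + y) % 2);
    rows.push(row);
  }
  return rows;
}
```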
Anyway, really fantastic work on this evil genius script.
Hi Thomas, Thanks so much for the great feedback! Sounds like Evil Twin is a perfect match with your studio! I hear you. We need Checkerboard stereo mode and the feature will be integrated in the next update (within a month). In the meantime I suggest the following temporary solution so you can get working right away:
Download the HD Checkerboard Pattern (checkerboardHD.png) from the main product page (the link is just under the Overview video) and import it into your project.
In the Stereo Viewer comp (created by pressing SVU):
Set "S3D mode" in the the S3D Settings Layer to 0 (none).
Enable shy layers so you can see everything and add checkerboardHD.png as a matte layer above each of Left and Right precomp layers.
Use "Luma Matte" on Left, "Luma Inverted Matte" on Right - or vice-versa according to your setup.
Please let us know how this works! We can post the same pattern in other resolutions if anybody has a request.
Update on the Checkerboard (or Interlaced) stereo mode! Version 1.1, which was just released, solves the problem: Use the SVU button to create a Stereo Viewer comp. Checkerboard and Interlaced are now options in the S3D Settings layer (see the video on the main page detailing changes in v 1.1).
Mini feature request: SBS option without setting the horizontal scale to 50%. Useful when outputting directly to an HMD. I figured out how to edit the expression. But it would be nice to have a checkbox. Ultimately, it should handle scaling comps appropriately if they don't fit into the Stereo Preview comp (fitting two HD comps into a HD preview). For my need the L/R comps are 50% of the output size so it's not needed...
Hi Mark, Thanks for the suggestion. We do want to offer the fullest possible range of output options so this will surely make it into future versions.
To be sure I understand correctly - say your L and R comps are 1920 X 1080 - you would want a stereo viewer comp that is 3840 X 1080 where the Left and Right nested comps are Side by Side at 100% scale? This could be useful for twin projector systems, for example. The same would apply for Over-Under with a Stereo Viewer comp at 1920 X 2160.
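The layout arithmetic for that full-resolution case is simple. A sketch (AE layer positions are centers; the names here are my own, not the script's):

```javascript
// Full-resolution side-by-side viewer for two w x h eye comps:
// the viewer is 2w x h, with each eye layer centered in its own half.
function sbsLayout(w, h) {
  return { compW: 2 * w, compH: h, leftX: w / 2, rightX: w + w / 2, y: h / 2 };
}
```

For 1920x1080 eyes this gives a 3840x1080 viewer with the Left layer at (960, 540) and the Right at (2880, 540); the over/under case is the same idea with width and height swapping roles.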
I am running into a few problems with using the script in conjunction with another plugin. I am working with stereoscopic VR content, so I am using a plugin called Mettle to do my distortions and equirectangular effects.
However, there is quite a bit of nesting that goes on with the Mettle workflow. Everything works up to getting the footage into a cube map format, but when I try to convert back, the image is blank in the comp viewer.
I suspect there might be a few updates to the script that need to happen for it to handle Mettle correctly? I guess this would be a feature request so we can work with stereo VR content.
I am running into a few problems with using the script in conjunction with another plugin. I am working with stereoscopic VR content, so I am using a plugin called Mettle to do my distortions and equirectangular effects.
However, there is quite a bit of nesting that goes on with the Mettle workflow. Everything works up to getting the footage into a cube map format, but when I try to convert back, the image is blank in the comp viewer.
Could you elaborate a bit on this? Are you saying everything works nicely in stereo and VR with both scripts until the conversion from cube map back to equirectangular? Which comp goes black?
Since Mettle Skybox is a plugin AND a script, the scripting part has some overlap with Evil Twin that needs to be considered on both ends for the tools to be perfectly smooth together. You might try asking them about it too.
I suspect there might be a few updates to the script that need to happen for it to handle Mettle correctly? I guess this would be a feature request so we can work with stereo VR content.
Thank you so much!!!
We'll look into this but can't promise anything just yet! For now, at the very least we should be able to come up with a workaround or set of instructions that might add a couple of clicks to the process, but would make it workable for both the VR and the stereo. Stay tuned!
Actually, with some further testing, I think the problem is with Mettle and your script might be fine.
I re-tried the project but only using the script... Was able to use all normal AE tools and it seemed fine...
I re-tried the project but only using the plugin... I was getting blank comp viewers again... I contacted the team at Mettle and they informed me that the plugin is currently designed to work only with 2:1 footage, and since my footage is 1:1 we are debugging on that end. Will keep you up to date.
But once we get Evil Twin and Mettle in a robust workflow... 360 Stereo will be easy again and we can get back to creating art
Thanks for your help and quick reply!!!
Hi, does anyone know if it's now possible to create a 360 stereo output from an E3D comp using Evil Twin and SkyBox Studio, now that it's been updated to V2?
For some reason, the button L>R is not working. It's not copying what's in one comp to the other. I did recently upgrade another plugin Mettle from V1 to V2. Is there a conflict with this Mettle plugin? Can't seem to figure out why I'm no longer able to use this function.
I started a new After Effects project file and followed the tutorial, placing the clips into a Footage folder. However, when I press the L>R button to duplicate, it copies all the elements from the _L comp to the _R comp, but instead of placing the _R movie clip in the _R comp, it places the _L clip. Any thoughts?
I am just wondering if you can provide some detailed advice on the following workflow (I'm enquiring on the advice of the team over at Mettle)
We are shooting with the GoPro Odyssey stereoscopic VR array (16 cameras). The stitched footage comes back to us from Google as equirectangular stereoscopic 1:1 (and we can choose any of the following resolutions ... 8K, 4096x4096, or 2880x2880... all come back with depth maps supplied).
We are hoping to use Mettle VR Studio 2 for After Effects for rig removal specifically, as well as any basic compositing. Can you give me any additional step by step advice on implementing the following workflow with Mettle and Evil Twin?
This is what I assume we should do - but please correct me if I'm wrong on any particular.
- Start with 1:1 stereoscopic equirectangular over/under original footage
- Convert footage in AE to monoscopic to edit in Skybox Composer (not sure what the best way to convert stereo to mono is right now)
- Make changes to the mono left frame in Skybox Composer
- Replicate those changes to the mono right frame using Evil Twin? (Again - should this step occur after changes in Skybox Composer, or is Evil Twin involved concurrently?)
- Finally, export both frames from Evil Twin back to 1:1 over/under stereo equirectangular to send back out to Premiere.
Can you suggest if this is a reasonable workflow, and give any advice or point to any tutorials you've produced on how to accomplish this kind of process (simple rig removal, etc) starting with 1:1 stereo equirectangular footage and using Mettle and Evil Twin?
This is what I assume we should do - but please correct me if I'm wrong on any particular.
- Start with 1:1 stereoscopic equirectangular over/under original footage
- Convert footage in AE to monoscopic to edit in Skybox Composer (not sure what the best way to convert stereo to mono is right now)
- Make changes to the mono left frame in Skybox Composer
- Replicate those changes to the mono right frame using Evil Twin? (Again - should this step occur after changes in Skybox Composer, or is Evil Twin involved concurrently?)
- Finally, export both frames from Evil Twin back to 1:1 over/under stereo equirectangular to send back out to Premiere.
Can you suggest if this is a reasonable workflow, and give any advice or point to any tutorials you've produced on how to accomplish this kind of process (simple rig removal, etc) starting with 1:1 stereo equirectangular footage and using Mettle and Evil Twin?
Hi Aaron, Yes this workflow should work. We'll look into testing it out over here, but please let us know once you've tried it.
Is Evil Twin compatible with Cineform 3D files? As in SBS footage created with GoPro Studio?
Thanks!
Hi James, If After Effects can import it as footage, then Evil Twin can work with it. So it should be no problem to work with Cineform 3D files as long as the correct codec is installed on your system. As for working directly on SBS footage - you will need to split the source footage into Left and Right. You can do this easily:
1. Nest your source footage in a comp that is half the width (to perfectly fit one eye of your SBS).
2. Position the footage to frame the Left-eye half of the SBS footage exactly.
3. Name the comp MyFootage_L.
4. Duplicate the MyFootage_L comp and reposition the footage layer to exactly frame the Right eye.
5. Name the new comp MyFootage_R.
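The repositioning in the steps above works out to simple center offsets. A sketch of the arithmetic (positions are AE-style layer centers in the half-width comp; the function name is illustrative):

```javascript
// Splitting side-by-side footage of size srcW x srcH into eye comps of
// half the width: position the footage layer's center so the wanted half
// falls inside the comp's viewport.
function splitSBS(srcW, srcH) {
  return {
    compW: srcW / 2,
    compH: srcH,
    leftEyeX: srcW / 2, // footage spans [0, srcW]; viewport shows the left half
    rightEyeX: 0,       // footage spans [-srcW/2, srcW/2]; viewport shows the right half
    y: srcH / 2,
  };
}
```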
From this point you can either render out your split footage and re-import it or simply enter "no_evil" in the project panel comment of the MyFootage_L. This will make Evil Twin treat the comps as if they were base footage.
Add the MyFootage_L to another comp (also tagged with "_L") and do your editing in there. Just press L>R (in the Evil Twin Panel) to update your changes to the Right comp.
For output or previewing on a stereo monitor - you can recombine your work into SBS by creating a Stereo Viewer comp using the SVU button.
Loving the product so far, but hitting a small annoying snag when I clone L > R. Seems like no_evil isn't keeping my base layer footage (in a precomp named MyFootage_L) and keeps copying it over top of my right eye footage, causing me to have to just copy/paste the right footage when I clone.
Using CC2018. Let me know if there's a solution, thanks!!
Loving the product so far, but hitting a small annoying snag when I clone L > R. Seems like no_evil isn't keeping my base layer footage (in a precomp named MyFootage_L) and keeps copying it over top of my right eye footage, causing me to have to just copy/paste the right footage when I clone.
Using CC2018. Let me know if there's a solution, thanks!!
Hi Sarah,
Glad you like it !
Just so I understand your situation, can you explain in a little more detail how you are using the "no_evil" tag (in a layer comment? in the project panel comment?) as well as what the expected result is in using it? If you remove the "no_evil" tag, does it work as expected?
Hi David, I'm just getting started with Evil Twin. Looking forward to learning how to use it... learning how to use After Effects as well (I have v. CS6). My question: in your introduction video you show accessing an information and settings panel (the button with a question mark on it), and when that panel displays it shows your Evil Twin v.2.0. I just bought the plugin and it is v.1.1, and here on AEscripts.com, it seems the only version available is v.1.1. Is there a way to upgrade to v.2.0? What am I missing? Is v.2.0 actually published?
Hi David, I'm just getting started with Evil Twin. Looking forward to learning how to use it... learning how to use After Effects as well (I have v. CS6). My question: in your introduction video you show accessing an information and settings panel (the button with a question mark on it), and when that panel displays it shows your Evil Twin v.2.0. I just bought the plugin and it is v.1.1, and here on AEscripts.com, it seems the only version available is v.1.1. Is there a way to upgrade to v.2.0? What am I missing? Is v.2.0 actually published?
Hi Boris, Good eye! If my panel says v2.0 in the demo video then it is most likely because I was using an internal beta version that was nearly identical to v1.1 to create the demo. Version 2.0 is in the works but version 1.1 is currently the most recent release of Evil Twin.
So I bought the plugin, and after experimenting with 2D --> 3D video conversion, I see that I can use layers for objects that I need at different depths than the background and shift those (but not update to the _stereo comp by L-->R, as for layers that seems to make them all go back to background depth). I do adjust the background shift for _stereo so I see real-time convergence. This seems to do a good job. I finally figured out I could fix the background shift before rendering out by creating and going to the convergence layer and shifting that R comp back about 2 steps (this keeps my stereo adjustment). Question: would I even use 3D cameras for conversion when extra elements are not added? I saw in a demo of yours, using a static background of snow for a piano player, that you seemed to be able to give some or all snowflakes depth by doing some 3D camera tricks, but I'm having a hard time following the steps. First, does using 3D cameras help with non-3D footage? Can you recommend a quick guide to how that should be configured?
I should say, though - if you find you need to avoid pressing the L->R update button, then I'm quite confident the workflow could be improved. The core idea is that you do not need to separately edit the Right comp - rather, it gets regenerated from info stored inside the Left comp every time you press L->R. For this reason there are tools to build your Left comp in such a way that your Right comp gets rebuilt correctly. Pushing L->R is far more likely to fix things than to break them. The power of this method is that you have all the control from one comp and, combined with the various S3D viewing methods, there should be no need to keep jumping back and forth maintaining two comps. If pressing L->R is working against you, then please open a ticket and I can help you work it out to get the most out of Evil Twin.
To answer your question about the snow layer: it was most likely created using a 3D particle plugin. So, if you add an Evil Twin camera in a Left comp with a 3D particle plugin (or any 3D plugin), you should get a full stereoscopic view of it in the stereo comp (and the Quick View), because the 3D plugin is providing all that information to the stereo cameras - which are slightly offset from each other. That is why the snowflakes all appear at different depths. However, if you point a camera at a monoscopic footage layer, there is no internal 3D information for the camera to work with, and all the content lives at the same depth. You can change the convergence of the camera, the layer, the stereo viewer, etc., but that will just shift the apparent stereoscopic depth of the whole layer - it won't "create" 3D information for you.
The demo footage you have there is very clean in terms of matching geometry, quality and color. As you must know, most 3D footage isn't like that, and fixing it is often the biggest job.
So, for instance, if one eye needed sharpening it would be nice to not have that effect applied to the other eye. Scaling, warps, or vertical image shifts (often dynamic) are other examples, and being able to apply such tweaks without them being cloned would be nice. You currently have a way of doing that, but it's a bit involved and something simpler would be good as an option, like using the comments field?
Other effects could do with negative mirroring. Rotation for example. So rotating one eye 0.5 degrees would then automatically rotate the other -0.5.
Auto-scaling would be good too. So for the above rotation example the image would automatically be rescaled to avoid cropping.
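For what it's worth, the compensating scale such a feature would need has a simple closed form. This sketch (an illustration of the request, not anything in the script) computes the minimal scale that keeps a rotated frame free of empty corners; a mirrored ±θ pair needs the same scale on both eyes since only the angle's magnitude matters:

```javascript
// Minimal scale so a w x h frame rotated by `deg` degrees still covers
// its original bounds (no empty corners / cropping):
//   s = cos(t) + sin(t) * max(w/h, h/w),  t = |deg| in radians
function autoScaleForRotation(w, h, deg) {
  const t = Math.abs(deg) * Math.PI / 180;
  return Math.cos(t) + Math.sin(t) * Math.max(w / h, h / w);
}

// Negative-mirrored pair: +0.5 deg on one eye, -0.5 deg on the other.
const leftDeg = 0.5;
const rightDeg = -leftDeg;
// Same compensating scale for both eyes (equal angle magnitude):
const scale = autoScaleForRotation(1920, 1080, leftDeg);
```

Sanity check: a square rotated 45 degrees needs a scale of √2, and a rotation of 0 needs no scaling at all.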
I'm a bit confused about what you're meant to do with color matching here..?
Checkerboard viewing mode would be useful.
Your output options should maybe include interlaced 3D. Most passive 3D monitors will display that as 3D without having to switch to 3D viewing mode. A lot of 3D folks prefer it as it doesn't compromise horizontal resolution. It's also great if the monitor is in a situation where it may be switched off and will later power on in 2D mode. Interlaced 3D will then still come out as 3D.
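For reference, row interlacing is simple to express - even scanlines from one eye, odd scanlines from the other, which is what passive (line-polarized) monitors display natively. A toy sketch (frames modeled as arrays of rows rather than real pixel buffers, but the interleaving logic is identical):

```javascript
// Row-interlaced 3D: even scanlines from the left eye, odd from the right.
function interlace(leftRows, rightRows) {
  const out = [];
  for (let y = 0; y < leftRows.length; y++) {
    out.push(y % 2 === 0 ? leftRows[y] : rightRows[y]);
  }
  return out;
}

const left  = ["L0", "L1", "L2", "L3"];
const right = ["R0", "R1", "R2", "R3"];
interlace(left, right); // ["L0", "R1", "L2", "R3"]
```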
Hope that's useful?
Thanks for the feedback. It is all useful indeed!
As for things happening in the right and not the left even after LR Clone, there are a few ways:
- Use the F/X button to split any effect or mask into Left and Right "versions" that both sit in the Left comp. Each version only becomes dominant in its respective comp.
- Use "evil_wink" in any layer comment. This will invert the enabled state of the layer. So if it is disabled in the left it will become enabled in the right. This way you can maintain entirely separate Left and Right versions of any footage or adjustment layer. Use evil_wink on both of them and keep one disabled in the Left. (note: S3D footage will still be swapped with its twin unless you also add "no_evil" to the layer comment).
- Create Left and Right pre-comps manually and use "no_evil" in the comp comment of the Left comp in the project panel. Be sure to name them with the Channel IDs (usually "_L" and "_R"). This will lock the pair from ever being LR Cloned, but they will still be usable as an S3D pair when nested in other comps.
Hope that helps!
I am using After Effects CC 2014.2.
As Karel mentioned, most stereo 3D footage needs much alignment work. That would be a great addition to the script so we would not have to use another program like Vegas Pro to do the alignment first. That would really speed up the work flow. I wasn't misled that your script would do any of this, but I was hoping it might.
Let's get to the bottom of the Wiggle not working. The only time I notice this is when I've mistakenly got the same image in both Left and Right comps - meaning it is actually wiggling, but the images are identical so you can't see the difference. There are a few ways this could happen.
But then, as you say, Anaglyph is working... Can you confirm that the Anaglyph view is indeed "internally" stereoscopic? Another thing to try right away would be to delete the Evil Twin - Right and Evil Twin - S3D layers and click Wiggle again - which will add fresh ones. Also, since the Wiggle is performed by guide layers, if you're viewing through a nested situation, the Wiggle won't show up in the parent comp.
Alt - Wiggle currently doesn't turn off Anaglyph by design (it's unclear what state to go back to... ). I've been considering changing this but for now, just disable the Evil Twin layers to turn off Anaglyph.
The current philosophy is to make it as quick and easy as possible to leverage or adapt whatever you already use in After Effects. Can you elaborate on what Vegas Pro does for stereo alignment that can't be done in AE?
All suggestions and feedback are very much appreciated!
If I understand correctly: you have started with stereo 3D footage and created a camera in After Effects that is tracked to that footage - and now you would like to place 3D After Effects elements in front of that camera to "break into" the depth space of your stereo footage. If so, yes, Evil Twin can handle this. You can create a stereo camera pair by selecting the camera's Position and clicking the ⚡X+ button. This will set up a basic parallel stereo camera with a control for the right camera offset. In keeping with the main idea of the script, you control the stereo camera from inside the Left comp.
Any 3D layer in your comp should then be visible in stereo 3d and the right camera will follow the left in real time as you edit. Pre-rendered stereo footage should of course be in 2D layers and will not be affected by the camera.
The trial is 5 days with access to all the features with the only limitation being that you can only use it on comps that are 60 frames or less. This should allow you to test out any method to see if it will work for your needs. I suggest making a copy of your comp (be sure to rename it) and trimming it to the 60 frames of your choice to test with.
Feel free to keep posting here with questions and I’ll try to respond quickly.
Thanks
Otherwise, have you downloaded the template project for 3D cameras from the main page? It might help since it demonstrates what I described in my last post.
The camera separation is controlled by the "R --> Position" slider in the effects of the null layer "yourCameraName: R -->", which was added when you selected the camera's Position and pushed ⚡X+.
You need to use the camera separation and the Scene Convergence to achieve S3D depth.
Sounds like you're not finding where to control the camera offset, or for some reason it simply is not being added. Can you describe what happens when you select the camera's Position and push the (⚡X+) button? You also need to hit the (LR) button for any changes to update in both eyes.
It should work fine with the Camera Tracker.
The challenge now is to find the correct value for the R--> slider in combination with the correct z coordinates for your 3d layers - which you are trying to match to "baked" S3D footage with no numerical information about its internal depth - so there is some trial and error involved.
Try thinking of it this way:
Happy to help. Hoping this info is useful for others as well.
Of course, if you change the Scene Convergence R-->, your base footage will also be adjusted - and it looks like you've already got it where you want it. You can apply another shift to the base footage to balance it out (select the Position of the footage, press ⚡X+, and set the new R--> slider under Effects in that layer to the negative of the one in Scene Convergence).
The Camera Zoom will affect things as well. Not sure if you are free to play with that after having done the tracking...
Future versions will make this a bit simpler but I'm glad to hear it's working for you!
For your footage alignment - vertical offset is described in the manual:
1. Highlight the Position property of the layer, right-click and choose "Separate Dimensions".
2. Select the Y Position and click ⚡X+.
3. Click L▷R to update the Right comp.
4. Use the "R --> - Y Position" slider in the effects group to control the vertical position of the Right comp relative to the Left. Keyframe the slider as needed.
You will have to redo the same thing for the X Position to get back your horizontal offset from before (Separate Dimensions deletes the expression that was there).
But at a glance I think your problem might be Rotation-based. You can use the same technique on the layer's Rotation property as well.
It takes a bit of practice but once it clicks it becomes very fluid to work with... I hope!
Thanks so much for the great feedback! Sounds like Evil Twin is a perfect match with your studio!
I hear you. We need Checkerboard stereo mode and the feature will be integrated in the next update (within a month).
In the meantime I suggest the following temporary solution so you can get working right away:
- Download the HD Checkerboard Pattern (checkerboardHD.png) from the main product page (the link is just under the Overview video) and import it into your project.
- In the Stereo Viewer comp (created by pressing SVU), add checkerboardHD.png as a matte layer above each of the Left and Right precomp layers.
- Set one eye's precomp to use the pattern as a Luma Matte and the other as a Luma Inverted Matte, or vice-versa according to your setup.
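If you can't grab the PNG, a checkerboard matte is just an alternating block pattern - one eye shows through the white cells and the other through the black cells. A minimal sketch of generating one (rows of "#" and "." characters standing in for white and black pixels):

```javascript
// Generate a checkerboard pattern as rows of characters.
// cell = size of each checker square in pixels (1 = per-pixel checkerboard).
function checkerboard(width, height, cell) {
  const rows = [];
  for (let y = 0; y < height; y++) {
    let row = "";
    for (let x = 0; x < width; x++) {
      const white = (Math.floor(x / cell) + Math.floor(y / cell)) % 2 === 0;
      row += white ? "#" : ".";
    }
    rows.push(row);
  }
  return rows;
}

checkerboard(4, 4, 1);
// ["#.#.",
//  ".#.#",
//  "#.#.",
//  ".#.#"]
```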
Please let us know how this works! We can post the same pattern in other resolutions if anybody has a request.
All the best and stay tuned for the update!
Version 1.1, which was just released, solves the problem: Use the SVU button to create a Stereo Viewer comp. Checkerboard and Interlaced are now options in the S3D Settings layer (see the video on the main page detailing changes in v 1.1).
All feedback is welcome as always.
Thank you for the props! Much appreciated and very happy to know you're getting good use out of Evil Twin!
Thanks for the suggestion. We do want to offer the fullest possible range of output options so this will surely make it into future versions.
To be sure I understand correctly - say your L and R comps are 1920 X 1080 - you would want a stereo viewer comp that is 3840 X 1080 where the Left and Right nested comps are Side by Side at 100% scale? This could be useful for twin projector systems, for example. The same would apply for Over-Under with a Stereo Viewer comp at 1920 X 2160.
Does that sound right?
Could you elaborate a bit on this? Are you saying everything works nicely in stereo and VR with both scripts until the conversion from cube map back to equirectangular? Which comp goes black?
Since Mettle Skybox is a plugin AND a script, the scripting part has some overlap with Evil Twin that needs to be considered on both ends for the tools to be perfectly smooth together. You might try asking them about it too.
We'll look into this but can't promise anything just yet! For now, at the very least we should be able to come up with a workaround or set of instructions that might add a couple of clicks to the process, but would make it workable for both the VR and the stereo. Stay tuned!
- Start with 1:1 stereoscopic equirectangular over/under original footage
- Convert footage in AE to monoscopic to edit in Skybox Composer (not sure what the best way to convert Stereo to Mono is right now)
- Make changes to the mono left frame in Skybox composer
- Replicate those changes to mono right frame using Evil Twin? (again - should this step occur after changes in Skybox Composer, or is Evil Twin involved concurrently?)
- Finally export both frames from Evil Twin back to 1:1 over/under stereo equirectangular to send back out to Premiere.
Hi Aaron,
Yes this workflow should work. We'll look into testing it out over here, but please let us know once you've tried it.
If After Effects can import it as footage, then Evil Twin can work with it. So it should be no problem to work with Cineform 3D files as long as the correct codec is installed on your system. As for working directly on SBS footage - you will need to split the source footage into Left and Right. You can do this easily by
Hope that helps to clarify!
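The split itself is conceptually trivial, whatever tool performs it: the left eye is the left half of every scanline and the right eye is the right half. A toy sketch (frames modeled as arrays of row strings rather than real pixel buffers):

```javascript
// Split a side-by-side stereo frame into Left and Right halves.
function splitSBS(rows) {
  const half = rows[0].length / 2;
  return {
    left: rows.map(r => r.slice(0, half)),
    right: rows.map(r => r.slice(half)),
  };
}

const frame = ["LLRR", "LLRR"];
const { left, right } = splitSBS(frame);
// left  -> ["LL", "LL"]
// right -> ["RR", "RR"]
```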
Hi Sarah,
Glad you like it !
Just so I understand your situation, can you explain in a little more detail how you are using the "no_evil" tag (in a layer comment? in the project panel comment?) as well as what the expected result is in using it? If you remove the "no_evil" tag, does it work as expected?
Cheers,
David
>>> Contemporary work in the Stereoscopic Arts: www.patreon.com/retroformat <<<
Good eye! If my panel says v2.0 in the demo video then it is most likely because I was using an internal beta version that was nearly identical to v1.1 to create the demo. Version 2.0 is in the works but version 1.1 is currently the most recent release of Evil Twin.