Ballerinas, bad guys, and our invisible work
Our face replacement pipeline behind Amazon's Pretty Lethal
The Amazon Prime film “Pretty Lethal” came out recently, featuring (hopefully) “invisible FX” from the Shoot First team.
It’s a fun, high-concept action film about a group of ballerinas who discover that they can use their ballet skills to slice up bad guys. What’s not to like?
The problem is that the actresses were chosen for their acting, not their ballet ability. That means that the main VFX challenge for us was taking the faces of the actresses and superimposing them onto their stunt doubles, who could do ballet.
The production team were insistent that AI was not to be used, since actors are generally terrified of being replaced by AI. The feeling was that if any AI techniques were used, the actresses’ likenesses would be injected into the “AI world” and could never be removed, like a digital microplastic.
You’ll know that my take is that the fear around AI can be addressed through education - but education takes time, willingness, and sometimes bravery. So in this case I didn’t fight the battle, and we did the replacement the “traditional” way. Here’s what that looks like:
- Scan the actresses’ faces, neutral and posed. For a Gollum-style facial performance you’d get them to move each part of their face for each scan (e.g. right eyebrow up and down, left lip pulled back, kissy face, and so on; about 40-50 scans in total). For this film we did just a few, since there was no dialogue or nuanced emotion.
- Digitally process the scans so that they can be animated (retopologizing, creating UVs, cleaning up scan artifacts, removing the capture lighting from the textures, creating shaders, etc.)
- Rig the face. We used Unreal’s MetaHuman - it’s not suitable for the absolute highest-end results, but it’s very fast to work with
- Motion capture each actress’s facial performance
- Map the performance onto the digital head
- Track the digital head onto the stunt double’s performance (this is “object tracking”, but you’ll also need a camera track when the camera moves)
- Recreate the shot lighting digitally
- Render the head, and composite it onto the stunt double’s body
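The steps above can be sketched as an ordered pipeline. The Python below is purely illustrative - every stage name and data field is hypothetical, and in reality each stage is a specialist tool (scanner software, MetaHuman, a tracker, a renderer, a compositor), not code like this. It just makes the dependencies between the steps explicit:

```python
# Hypothetical sketch of the traditional face-replacement pipeline.
# Each stage records that its work is "done" on a per-shot dict.

def scan_faces(shot):
    # Step 1: a few expression scans (no dialogue, so not the full 40-50 set)
    shot["scans"] = ["neutral", "brow_up", "lip_back"]
    return shot

def process_scans(shot):
    # Step 2: retopologize, create UVs, de-light, build shaders
    shot["animatable_mesh"] = bool(shot["scans"])
    return shot

def rig_face(shot):
    # Step 3: rig the face (MetaHuman in our case)
    shot["rig"] = "metahuman"
    return shot

def capture_performance(shot):
    # Step 4: facial motion capture of the actress
    shot["mocap"] = "take_01"
    return shot

def retarget_performance(shot):
    # Step 5: map the captured performance onto the digital head
    shot["animated_head"] = shot["animatable_mesh"] and bool(shot["mocap"])
    return shot

def track_head(shot):
    # Step 6: object track the double's head; camera track on moving shots
    shot["object_track"] = True
    shot["camera_track"] = shot["camera_moves"]
    return shot

def relight(shot):
    # Step 7: rebuild the on-set lighting digitally
    shot["digital_lighting"] = True
    return shot

def render_and_comp(shot):
    # Step 8: render, then composite onto the double's body
    shot["final"] = (shot["animated_head"] and shot["object_track"]
                     and shot["digital_lighting"])
    return shot

PIPELINE = [scan_faces, process_scans, rig_face, capture_performance,
            retarget_performance, track_head, relight, render_and_comp]

shot = {"name": "ballet_fight_042", "camera_moves": True}
for stage in PIPELINE:
    shot = stage(shot)
print(shot["final"])  # True once every upstream stage has run
```

The point the sketch makes is that the stages are strictly ordered: a weak result anywhere upstream (a bad track, a poor scan cleanup) propagates into every stage after it.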
If that sounds like a lot, it is. Which is great for us because it pays the bills, and the skills that the team and I used and developed in doing the work are useful in so many different areas.

Step 1 of the traditional face replacement means giving up your whole head - with a face cast or head scan. Even in an AI world, that’s really useful since it helps you get consistency between shots. Here’s a “raw” expression scan on the left, with a cleaned up animation-ready mesh on the right.
But does it make sense to avoid AI entirely? I would argue almost certainly not. AI does come with technical compromises that 3D doesn’t have (less precise control, limited bit depth). But most producers would happily trade those to get the work done in days or weeks rather than months.
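The bit-depth point is worth making concrete. Most AI image and video models emit 8-bit output, while a 3D compositing pipeline typically works in 16- or 32-bit float EXRs. The bit depths below are the common ones, not a claim about any specific tool:

```python
# Smallest representable brightness step at a given integer bit depth,
# normalized to a 0-1 range. Coarse steps show up as banding when you
# grade or push dark gradients in the comp.
def quantization_step(bits):
    return 1.0 / (2 ** bits - 1)

step_8 = quantization_step(8)    # typical AI model output
step_16 = quantization_step(16)  # common integer comp precision
print(round(step_8 / step_16))   # 257: 8-bit steps are ~257x coarser
```

In practice that headroom is what lets a compositor re-grade a rendered head to match a plate without introducing artifacts; 8-bit material runs out of room much sooner.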

