Mar 27 / Christian Bull

Ballerinas, bad guys, and our invisible work

Our face replacement pipeline behind Amazon's Pretty Lethal

The Amazon Prime film “Pretty Lethal” came out recently, featuring (hopefully) “invisible FX” from the Shoot First team.

It’s a fun, high-concept action film featuring a group of ballerinas who discover that they can use their ballet skills to slice up bad guys. What’s not to like?

The problem is that the actresses were chosen for their acting, not their ballet ability. That means that the main VFX challenge for us was taking the faces of the actresses and superimposing them onto their stunt doubles, who could do ballet.

The production team were insistent that AI was not to be used, since actors are generally terrified of being replaced by AI. The general feeling was that if any AI techniques were used, then the actresses' likeness would have been injected into the “AI world” and could never be removed, like a digital microplastic.

You’ll know that my take is that the fear around AI can be addressed through education - but education takes time, willingness, and sometimes bravery. So in this case I didn’t fight the battle, and we did the replacement the “traditional” way. Here’s what that looks like:

  • Scan the actresses’ faces, neutral and posed. For a Gollum-style facial performance you’d have them move each part of their face for each scan (e.g. right eyebrow up and down, left lip pulled back, kissy face, and so on; about 40-50 scans in total). For this we just did a few, since there was no dialogue or nuanced emotion.
  • Digitally process the scans so that they can be animated (applying a topology, creating UVs, cleaning up scan artifacts, removing lighting from the scan, creating shaders, etc)
  • Rig the face. We used Unreal’s MetaHuman - it’s not suitable for the absolute highest-level results, but it’s very fast to use
  • Motion capture the actor’s facial performance
  • Map the performance onto the digital head
  • Track the digital head onto the stunt double’s performance (This is “object tracking”, but will also need a camera track when the camera moves)
  • Recreate the shot lighting digitally
  • Render the head, and composite it onto the stunt double’s body
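The final step above boils down to the standard “over” composite: the rendered head arrives as premultiplied RGB plus an alpha matte, and gets layered onto the stunt-double plate. A minimal sketch in Python with NumPy (the array names and pixel values here are purely illustrative, not from the actual pipeline):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Premultiplied-alpha 'over' operation: fg + bg * (1 - alpha)."""
    return fg_rgb + bg_rgb * (1.0 - fg_alpha)

# Tiny 1x2-pixel example in floating point.
fg = np.array([[[0.5, 0.2, 0.1], [0.0, 0.0, 0.0]]])   # premultiplied head render
alpha = np.array([[[1.0], [0.0]]])                     # head matte (1 = head, 0 = no head)
bg = np.array([[[0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]])   # stunt-double plate

comp = over(fg, alpha, bg)
# Where alpha is 1 the head wins; where alpha is 0 the plate shows through.
```

Real compositing packages do exactly this per pixel, per frame, in linear light and floating point, plus edge treatment, grain matching and colour management on top.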

If that sounds like a lot, it is. Which is great for us because it pays the bills, and the skills that the team and I used and developed in doing the work are useful in so many different areas.

Step 1 of the traditional face-replacement pipeline means giving up your whole head - with a face cast or head scan. Even in an AI world, that’s really useful since it helps you get consistency between shots. Here’s a “raw” expression scan on the left, with a cleaned-up animation-ready mesh on the right.


But does it make sense to not use AI? I would argue almost certainly not. AI does come with technical compromises that 3D doesn’t have (less precise control, limited bit depth). But most producers would be willing to give those up to do the work in days or weeks rather than months.
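On the bit-depth point: AI image models typically work on 8-bit frames, while a 3D render pipeline stays in floating point (e.g. 32-bit EXR), which matters once a grade pushes into dark or bright regions. A toy illustration, assuming nothing beyond NumPy:

```python
import numpy as np

# A very dark, subtle gradient - the kind of thing a colourist might
# brighten dramatically in the grade.
gradient = np.linspace(0.0, 0.01, 5)

# What survives a round trip through an 8-bit image: neighbouring values
# collapse onto the same step, which is where banding comes from.
quantized = np.round(gradient * 255) / 255

unique_float = len(np.unique(gradient))   # all 5 values distinct
unique_8bit = len(np.unique(quantized))   # fewer distinct values after quantization
```

Float renders keep every step of that gradient distinct; the 8-bit version loses some, and heavy grading makes the loss visible.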

In terms of the actors losing their likenesses to AI? That’s just not how the tech works, and the traditional approach means we have actual scans of their actual faces. I could print them and make a mask of them tomorrow and go out and rob a bank. Or I could train an AI using those scans and put them into my own film. But I won’t because a) I’m not a bank robber, b) I signed an NDA, and c) if I’m putting anyone’s head onto a dancer’s body, it’s going to be mine.

So anyway, Pretty Lethal is out, it’s fun, it has our work in it, and it’s not AI. Watch it and let me know what you think! If you do want to use AI for face replacement in film, though, watch this space: we’ll address it in our ongoing AI series!

For now, here’s my face on a more buff actor’s body. Sweet dreams.....

