Facial Workflow for ELEX 2

Coming from ELEX, where I had very little time to build all our faces and get them articulating, I wanted to make things right with ELEX 2.

One major downside in the first game was that I needed to use blendshapes and had to keep the shape count low for performance reasons. The majority of faces were also delivered very late, so I had to just apply the deltas of our player head to all the different heads, which naturally looked suboptimal.

Here is a link to a dialogue scene in ELEX 2:

## Building a Joint-Based Face Rig

To solve the performance issues and make it easier to transfer the rig to other faces, I set out to build a joint-based rig! The plan was to use this rig in the engine. Unfortunately, the project's file servers were shut down after the studio closed, so I no longer have a working live rig with controls to show.

But don’t fear, I’ll show what the skeletal structure looked like in a moment.

When I demonstrated the rig, everyone was pretty happy. However, when I started talking to the devs of our in-house engine, it turned out that storing a specific initial joint position for each character's face rig could not be implemented in a timely manner, since all of our coders were busy with other important tasks. It seemed like I had to fall back to blends only, again, with even eye and jaw rotation handled as blendshapes!

Using our actual rig, I created a pose file in which the head, driven by joints, moves through all the poses I thought I might need for a complex blendshape-based rig. Here is a quick clip showing the joint-driven head going through all poses:

## Converting to Blendshapes

The workflow I created was fairly straightforward: I defined a dictionary mapping the shapes I wanted for the rig to the corresponding frames in our pose file, then wrote a script to process the head! Here is basically what the script did (a minimal sketch follows the list):

  1. Collect all parts of the head (head, brows, eyes, eyelashes, teeth, tongue, cornea…), duplicate and combine them into one single mesh. This will receive our blends.
  2. Since all of our parts are separate and follow different skin clusters, I opted to go through each pose, combine the meshes again in that pose, and add the result to the blendshape target list.
  3. Some blends were symmetrical, but I wanted to split them into left/right and up/down shapes (or even more for the sticky lips). I wrote a script that saved and loaded predefined blendshape weights and allowed splitting the shapes even further (see the second sketch below).
  4. In the end, with all blends created, I generated a clean head that receives all of them. A buffer node was created with a simple float attribute for each blend.
  5. Our simple control rig then connected its inputs to the buffer node; that way, I could load any head into a scene and connect it to the control rig.
  6. Most of the complex rig logic, like how the blends are combined, is driven by the control rig.
  7. Finally, a skin cluster is loaded onto the final head to make it move with the body rig!
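
To make the list above concrete, here is a minimal sketch of steps 1, 2, 4, and 5 in Maya Python. Every object name, frame number, and shape name below is an illustrative assumption, not the actual production setup:

```python
# Minimal sketch of the pose-to-blendshape bake (Maya, Python 3).
# All object names, frame numbers, and shape names are made up here.
from maya import cmds

# Shape name -> frame in the joint-driven pose file (hypothetical values).
POSE_FRAMES = {"jawOpen": 10, "smile": 20, "browRaise": 30}

# The separate head parts, each following its own skin cluster (placeholders).
HEAD_PARTS = ["head_geo", "brows_geo", "eyes_geo", "teeth_geo", "tongue_geo"]

def combine_parts(name):
    """Duplicate all parts in their current posed state and merge them."""
    dupes = cmds.duplicate(HEAD_PARTS, returnRootsOnly=True)
    return cmds.polyUnite(dupes, constructionHistory=False, name=name)[0]

# Step 1: neutral base mesh that will receive the blends.
cmds.currentTime(0)
base = combine_parts("face_blend_base")

# Step 2: bake every pose into a combined target mesh.
targets = []
for shape, frame in POSE_FRAMES.items():
    cmds.currentTime(frame)
    targets.append(combine_parts(shape))

# Wire all targets into one blendshape node on the base (last arg = base).
blend_node = cmds.blendShape(targets + [base], name="face_blends")[0]

# Steps 4/5: buffer node with one float attribute per blend, driving the
# blendshape weights; a control rig only ever talks to this buffer.
buffer_node = cmds.createNode("network", name="face_buffer")
for i, shape in enumerate(POSE_FRAMES):  # dict order matches `targets`
    cmds.addAttr(buffer_node, longName=shape, attributeType="float",
                 minValue=0.0, maxValue=1.0, keyable=True)
    cmds.connectAttr("{}.{}".format(buffer_node, shape),
                     "{}.weight[{}]".format(blend_node, i))

# The deltas now live on the blendshape node, so the baked targets can go.
cmds.delete(targets)
```

And the splitting idea from step 3, sketched in the same vein: the saved blendshape weights are just per-vertex scalars, so a symmetric target can be carved into left/right halves by scaling its deltas. Per-vertex `cmds` calls are slow, but they show the math; a production version would use the OpenMaya API:

```python
def split_target(base, target, weights, name):
    """Copy `target`, scaling each vertex delta by a 0..1 weight.

    `weights` holds one float per vertex, e.g. a smooth 1 -> 0 falloff
    across the X axis to extract the left half of a symmetric shape.
    """
    new = cmds.duplicate(base, name=name)[0]
    for i in range(cmds.polyEvaluate(base, vertex=True)):
        bp = cmds.xform("{}.vtx[{}]".format(base, i), q=True, t=True, os=True)
        tp = cmds.xform("{}.vtx[{}]".format(target, i), q=True, t=True, os=True)
        cmds.xform("{}.vtx[{}]".format(new, i), os=True,
                   t=[b + (t_ - b) * weights[i] for b, t_ in zip(bp, tp)])
    return new

# e.g. smile_L = split_target(base, "smile", left_weights, "smile_L")
```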

Here is a quick capture of the rig generation:

## Other Heads

With the main head out of the way, I had to tackle the remaining 118 heads (which, as usual, were delivered VERY late in production). I wrote a simple tool that would map all of our given joints to each new head's proportions using the topology (since I was using a unified topology for all heads). I made some adjustments to the joint movement for basic proportion differences, but not to the level of detail I would have liked; there simply was no time!
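
Since every head shared the same topology, vertex IDs correspond one-to-one across them, which is really all a transfer tool needs. A minimal sketch of the idea with a made-up joint-to-vertex table (a real tool would also store surface offsets for joints that float off the mesh, like the jaw pivot or eye centers):

```python
from maya import cmds

# Each face joint is pinned to a reference vertex on the shared topology
# (hypothetical joint names and vertex IDs).
JOINT_TO_VERTEX = {
    "lipCorner_L_jnt": 1204,
    "lipCorner_R_jnt": 1330,
    "browInner_L_jnt": 2051,
}

def fit_joints_to_head(head_mesh):
    """Snap each face joint to its reference vertex on a new head."""
    for joint, vtx_id in JOINT_TO_VERTEX.items():
        pos = cmds.xform("{}.vtx[{}]".format(head_mesh, vtx_id),
                         q=True, t=True, ws=True)
        cmds.xform(joint, ws=True, t=pos)

fit_joints_to_head("npc_head_geo")  # placeholder mesh name
```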

One other issue was that the topology on some heads was all over the place, with lip loops moving far out of the lip area or even inside the mouth! I wrote a quick tool that would slide over a copy of the head surface and drag marked loops along in order to do a minimal quick fix (not shown in the video, as I don't have an unfixed head at my disposal anymore, sorry).
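
I no longer have that tool either, but its core operation is easy to sketch: project the marked loop vertices onto the closest point of a cleaned-up copy of the head surface. Here is a sketch using Maya's Python API, with placeholder names throughout:

```python
import maya.api.OpenMaya as om2
from maya import cmds

def snap_verts_to_surface(mesh, vert_ids, reference_mesh):
    """Project the given vertices of `mesh` onto `reference_mesh`."""
    sel = om2.MSelectionList()
    sel.add(reference_mesh)
    dag = sel.getDagPath(0)
    dag.extendToShape()  # MFnMesh needs the shape, not the transform
    ref_fn = om2.MFnMesh(dag)
    for vid in vert_ids:
        vtx = "{}.vtx[{}]".format(mesh, vid)
        p = cmds.xform(vtx, q=True, t=True, ws=True)
        closest, _face = ref_fn.getClosestPoint(om2.MPoint(p),
                                                om2.MSpace.kWorld)
        cmds.xform(vtx, ws=True, t=(closest.x, closest.y, closest.z))

# e.g. snap a wandering lip loop back onto a fixed copy of the head:
# snap_verts_to_surface("npc_head_geo", lip_loop_ids, "npc_head_fixed_copy")
```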

Building the Rig on a New Head:

## Lip Sync / Idles

I believe I had around 80,000 voice files (.WAVs) in four languages to process. As was tradition, the voice recordings were unfortunately delivered very late. I used [Speech Graphics SGX 3](https://www.speech-graphics.com/sgx-production-audio-to-face-animation-software) to handle ALL of our lip and face animation. This tool was a lifesaver!

Heads needed to be solved into a template file at Speech Graphics. I could only get one male and one female rig solved, so all faces in the game use one or the other. In Maya, I created a repository of high and low poses in different moods for both the male and female template files in SGX’s proprietary Maya tool.

(Unfortunately, I do not have access to a working license anymore, but here is a demo from SGX in case anyone is interested.)

And a quick demo of a few lines being applied to several different faces:

## Conclusion

Given the time and manpower (me :)), I think the results are not bad. There is a lot I would reconsider, and I already had some plans in place for the next title, but as of now, it seems like this won't happen!

Things I would improve for sure:

  • Hybrid joint-based rig with a few blendshape correctives
  • Better wrinkle map integration (in-engine)
  • Better algorithm to transfer joint animation between different faces
  • More streamlined rig generation
  • Add machine learning to the transfer system

I hope this gives you a good overview of the facial workflow for ELEX 2 and the challenges we faced. Despite the limitations, I believe we achieved a commendable result.