The nine faces shown on the previous page do not require as much work as I had imagined. If Alvaro produces all nine of those faces (using the central one as the starting point), he will include shading automatically. Hence I need only place the mobile features onto the face with the previously specified geometric transforms for roll, pitch, and yaw — a piece of cake.
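The roll, pitch, and yaw transforms mentioned above can be sketched as successive rotations applied to a 3D feature point before it is placed on the face. This is only an illustrative sketch, not code from the actual program; the class and method names are my own invention, and the axis conventions are assumptions.

```java
// Sketch: rotating a facial feature point (x, y, z) by yaw, pitch,
// and roll before projecting it onto the face. Angles are in radians.
// The order and axis conventions here are assumptions for illustration.
public class FeatureTransform {

    public static double[] rotate(double x, double y, double z,
                                  double yaw, double pitch, double roll) {
        // Yaw: rotate about the vertical (y) axis.
        double x1 =  x * Math.cos(yaw) + z * Math.sin(yaw);
        double z1 = -x * Math.sin(yaw) + z * Math.cos(yaw);
        double y1 =  y;
        // Pitch: rotate about the horizontal (x) axis.
        double y2 = y1 * Math.cos(pitch) - z1 * Math.sin(pitch);
        double z2 = y1 * Math.sin(pitch) + z1 * Math.cos(pitch);
        double x2 = x1;
        // Roll: rotate about the viewing (z) axis.
        double x3 = x2 * Math.cos(roll) - y2 * Math.sin(roll);
        double y3 = x2 * Math.sin(roll) + y2 * Math.cos(roll);
        return new double[] { x3, y3, z2 };
    }
}
```

After the rotation, the x and y components give the feature's position on the 2D face image.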
However, this solution does not permit animation, which forces a major decision upon us: do we require that the faces be animated? If so, then we need to be able to interpolate the face display for frames between the two extremes, a process called tweening. And I don't think that I can manage tweening in real time in Java; it's a computationally expensive process.
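In its simplest form, tweening is just linear interpolation: for each in-between frame, every feature coordinate is a weighted average of its values at the two extremes. Here is a minimal sketch of that idea; the names are hypothetical, not from any actual tweening library.

```java
// Sketch of simple tweening: linearly interpolating feature positions
// between two keyframes. t runs from 0.0 (start frame) to 1.0 (end frame).
public class Tween {

    // Linear interpolation between two scalar values.
    public static double lerp(double a, double b, double t) {
        return a + (b - a) * t;
    }

    // Produce one in-between frame from two keyframe coordinate arrays.
    public static double[] tweenFrame(double[] start, double[] end, double t) {
        double[] out = new double[start.length];
        for (int i = 0; i < start.length; i++) {
            out[i] = lerp(start[i], end[i], t);
        }
        return out;
    }
}
```

Calling tweenFrame for a series of t values between 0.0 and 1.0 yields the sequence of intermediate frames. The expense comes from doing this for every feature of every frame, fast enough to keep up with the display.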
It might be possible to directly calculate in-between frames if we use very simple 3D models of the faces. My first experiment, however, was unsuccessful:
This looks pretty good. But look what happens when I apply it to a more realistic face:
Even with a drastic change in the geometry of the face, it just doesn't look as if he's looking to the side. That's because there are 3D effects that we expect but don't see. Yes, the right side of his face is smaller than the left side, as we would expect, but the surface relief of the face is unchanged. We expect to see the nose shift relative to the eyes; since that doesn't happen, we just can't see much in the way of yaw.
This process, called morphing, blends two images together to produce an intermediate version. You can set it up to produce varying degrees of blending, so that a series of frames with the blending value increasing from 0.0 to 1.0 will show a transition from one image to another. Good morphing software does a pretty good job of blending the images, but it's slow.