Faceware Analyzer 2.0.14: A Powerful Tool for Facial Motion Capture
Faceware Analyzer is software that tracks facial movement from video using machine-learning and deep-learning techniques. Its markerless technology tracks the full range of facial motion at high quality, with no markers or face paint required. Analyzer is part of the Faceware Creation Suite, alongside Faceware Retargeter, a plugin that applies the facial motion data to a 3D character inside an animation package.
Faceware Analyzer 2.0.14 is the latest version of the software, released in November 2021. It introduces several new features and improvements, such as:
Global Tracking Models: A new option to use pre-trained models that can track any actor without creating custom training frames. This can save time and improve consistency across different actors and shots.
Automation API: A new interface that allows users to automate and customize the tracking workflow using Python scripts. This can enable batch processing, custom quality checks, integration with other tools and pipelines, and more.
Intelligent Drag: A new feature that allows users to manually adjust the tracking landmarks by dragging them on the video frame. The software will automatically update the surrounding landmarks and frames to maintain smooth and accurate tracking.
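Faceware has not published its Automation API in this article, so the sketch below is purely illustrative: `create_job`, `track_with_global_model`, and `export_performance` are invented stand-ins (implemented here as stubs) that show the shape of a batch-tracking script the API makes possible. The real module and function names will differ; consult the Faceware Knowledge Base for the actual interface.

```python
from pathlib import Path

# Hypothetical stand-ins for the real Automation API. Faceware's actual
# module, function names, and signatures will differ.
def create_job(video_path):
    """Pretend to create an Analyzer job for one video."""
    return {"video": video_path, "tracked": False}

def track_with_global_model(job):
    """Pretend to run tracking with a Global Tracking Model."""
    job["tracked"] = True
    return job

def export_performance(job, out_dir):
    """Pretend to export the tracked job as an .fwr performance file."""
    out_path = Path(out_dir) / (Path(job["video"]).stem + ".fwr")
    return str(out_path)

def batch_track(video_dir, out_dir):
    """Track every .mp4 in a folder and collect the exported .fwr paths."""
    results = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        job = track_with_global_model(create_job(str(video)))
        results.append(export_performance(job, out_dir))
    return results
```

The point of the sketch is the workflow, not the calls: with a scripting interface, a whole folder of shots can be tracked and exported unattended, which is what enables the batch processing and pipeline integration mentioned above.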
Faceware Analyzer 2.0.14 is available for Windows and Mac OS X. It supports a wide range of video formats and resolutions, including HD, 4K, and VR footage, and works with multiple-camera and head-mounted-camera setups for capturing different angles and expressions.
Faceware Analyzer 2.0.14 is a powerful tool for facial motion capture that can enhance the realism and quality of any facial animation project. It is used by many studios and professionals in the film, game, TV, commercial, and education industries. To learn more about Faceware Analyzer 2.0.14, visit https://facewaretech.com/software/analyzer/ or watch this video: https://www.youtube.com/watch?v=2cXiO5EQV5k.
How to Use Faceware Analyzer 2.0.14
To use Faceware Analyzer 2.0.14, you need to have a recorded video of a facial performance that meets the quality requirements for tracking. You can use any camera that can capture at least 30 frames per second and has good lighting and focus. You can also use multiple cameras or head-mounted cameras to capture different angles and expressions.
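One way to sanity-check footage against the 30 fps minimum before tracking is to read its frame rate with a tool such as ffprobe (e.g. `ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 clip.mp4`), which reports the rate as a rational like `30000/1001`. The small helper below is not part of Analyzer; it is a generic sketch that parses that string and applies the threshold.

```python
def parse_fps(rate: str) -> float:
    """Parse an ffprobe-style frame-rate string such as '30000/1001' or '60'."""
    if "/" in rate:
        num, den = rate.split("/")
        return float(num) / float(den)
    return float(rate)

def meets_minimum(rate: str, minimum: float = 30.0) -> bool:
    """True if the footage is fast enough for tracking (>= 30 fps by default).

    A small tolerance is allowed so NTSC 29.97 fps footage passes the bar.
    """
    return parse_fps(rate) >= minimum - 0.05
```

Running this over every clip in a shoot before opening Analyzer catches slow footage early, when it is still cheap to re-record.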
Once you have your video, you can create a new Analyzer project (or job) from it. You can then choose between two tracking methods: Global Tracking Models or Custom Training Frames. Global Tracking Models are pre-trained models that can track any actor without custom training frames, which saves setup time and improves consistency across actors and shots. Custom Training Frames are frames that you manually mark with landmarks to build a model specific to your actor; this gives more accurate and controllable tracking, especially for subtle or complex expressions.
After you have tracked your video, you can parameterize the tracking data. This means converting the landmark positions into a set of parameters that represent the facial motion. You can also select a neutral frame for your actor, which is a frame where the actor has a relaxed and neutral expression. This can help improve the quality of the animation, especially if you use AutoSolve in Retargeter.
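Faceware does not document the exact parameterization math, and the real method is proprietary. As a rough illustration of the idea only, the sketch below treats each frame's "parameters" as per-landmark displacements from the chosen neutral frame, which shows why a well-chosen neutral pose matters: every other frame is described relative to it.

```python
from typing import List, Tuple

# A frame of tracking data as (x, y) landmark positions, in video coordinates.
Landmarks = List[Tuple[float, float]]

def parameterize(frame: Landmarks, neutral: Landmarks) -> Landmarks:
    """Express a frame as per-landmark displacement from the neutral pose.

    Illustrative simplification only -- Analyzer's actual parameterization
    is more sophisticated than raw positional offsets.
    """
    return [(x - nx, y - ny) for (x, y), (nx, ny) in zip(frame, neutral)]
```

With this toy model, the neutral frame itself parameterizes to all zeros, and a poor neutral choice (say, a half-open mouth) would bias every parameter in the shot.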
Finally, you can export your tracking data as an .fwr performance file that can be used in Faceware Retargeter to animate your character. Retargeter is a plugin that works with various 3D animation packages, such as Maya, MotionBuilder, Unreal Engine, and Unity. Retargeter allows you to apply the facial motion data to your character's rig and fine-tune the animation using various tools and settings.
Faceware Analyzer 2.0.14 also offers several advanced features and options, such as Automation API, Intelligent Drag, Batch Processing, Quality Check, and more. You can learn more about these features and how to use them in the Faceware Knowledge Base or by watching the tutorial videos on their website.