Learning Facial Action Units with Spatiotemporal Cues and Multi-label Sampling

Abstract

We propose a hybrid network architecture that jointly models facial action unit (AU) relationships, spatial representation, and temporal consistency. Our network addresses these three issues together and yields superior performance compared to existing methods that consider them independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, comprising more than 400,000 frames coded with 12 AUs. To address class imbalance within and between batches during training, we introduce multi-label sampling strategies that further increase accuracy when AUs are relatively sparse. On both datasets, we report improvements over a standard multi-label CNN and feature-based state-of-the-art methods. Finally, we provide visualizations of the learned AU models, which, to the best of our knowledge, reveal for the first time how machines see AUs.
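The abstract does not spell out the sampling scheme, so the following is only a minimal sketch of one common way to realize multi-label sampling for class imbalance: weight each frame by the rarity of the AUs it exhibits, then draw minibatches from that distribution so frames containing sparse AUs are oversampled. The function names, the rarity weighting, and the synthetic label matrix are all illustrative assumptions, not the paper's method.

```python
# Hypothetical multi-label sampling sketch for imbalanced AU data (assumption,
# not the paper's exact strategy): frames carrying rare AUs are drawn more often.

import numpy as np

def frame_sampling_weights(labels: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """labels: (num_frames, num_aus) binary matrix of AU occurrences."""
    au_freq = labels.mean(axis=0)            # fraction of frames with each AU active
    rarity = 1.0 / (au_freq + eps)           # rare AUs receive large weights
    # A frame's weight is the rarity of the rarest AU it shows;
    # frames with no active AU keep a small baseline weight.
    per_frame = np.where(labels.any(axis=1),
                         (labels * rarity).max(axis=1),
                         rarity.min())
    return per_frame / per_frame.sum()       # normalize to a sampling distribution

def sample_minibatch(labels: np.ndarray, batch_size: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Draw frame indices for one minibatch using the rarity weights."""
    weights = frame_sampling_weights(labels)
    return rng.choice(len(labels), size=batch_size, replace=False, p=weights)

# Toy usage: 10,000 frames coded with 12 AUs, generated at random for illustration.
rng = np.random.default_rng(0)
labels = (rng.random((10_000, 12)) < 0.08).astype(np.int8)   # sparse AU activations
batch = sample_minibatch(labels, batch_size=64, rng=rng)
print(labels[batch].mean(axis=0))   # per-AU positive rate within the sampled batch
```

With the rarity weighting, the per-AU positive rates inside a batch sit well above the raw base rates, which is the within-batch balancing effect the abstract refers to; balancing between batches would additionally track which AUs recent batches have covered.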

Publication
Image and Vision Computing