Meet BeLFusion: A Behavioral Latent Space Approach for Realistic and Diverse Stochastic Human Motion Prediction Using Latent Diffusion
Human Motion Prediction (HMP) sits at the intersection of computer vision and Artificial Intelligence (AI). The task involves forecasting a person's future motion or actions from an observed motion sequence, i.e., predicting how their body poses and movements will evolve. HMP finds applications in various fields, including robotics, virtual avatars, autonomous vehicles, and human-computer interaction.
Stochastic HMP extends traditional HMP by predicting a distribution of possible future motions rather than a single deterministic future. This formulation acknowledges the inherent spontaneity and unpredictability of human behavior and aims to capture the uncertainty associated with future actions or movements. By modeling the variability and diversity of human behavior, stochastic HMP yields more realistic and flexible predictions. It is particularly valuable when anticipating multiple possible behaviors is crucial, as in assistive robotics or surveillance applications.
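To make the distinction concrete, here is a minimal PyTorch-style sketch of the stochastic setting: instead of returning one future, the predictor draws several futures by sampling a different latent code for each. The interface (model.latent_dim, model.decode) is hypothetical and only illustrates the idea.

```python
import torch

# Hypothetical stochastic predictor interface: rather than one deterministic future,
# draw k futures by sampling a new latent code for each one.
def predict_stochastic(model, observed, k=50):
    # observed: (batch, T_obs, n_joints, 3)
    futures = []
    for _ in range(k):
        z = torch.randn(observed.shape[0], model.latent_dim)  # random latent sample
        futures.append(model.decode(z, observed))             # decode conditioned on the observation
    return torch.stack(futures, dim=1)                        # (batch, k, T_pred, n_joints, 3)
```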
Stochastic HMP has typically been approached with generative models such as GANs or VAEs that predict multiple future motions for each observed sequence. However, the emphasis on generating diverse motions in the coordinate space leads to unrealistic, fast-divergent predictions that do not align well with the observed motion. These methods also tend to overlook low-range behaviors with subtle joint displacements. As a result, there is a need for approaches that account for behavioral diversity while producing more realistic predictions. To address these limitations, researchers from the University of Barcelona and the Computer Vision Center propose BeLFusion, a novel approach that introduces a behavioral latent space to generate realistic and diverse human motion sequences.
The main objective of BeLFusion is to disentangle behavior from motion, allowing smoother transitions between observed and predicted poses. This is achieved through a Behavioral VAE consisting of a Behavior Encoder, Behavior Coupler, Context Encoder, and Auxiliary Decoder. The Behavior Encoder combines a Gated Recurrent Unit (GRU) with 2D convolutional layers to map joint coordinates to a latent distribution, and the Behavior Coupler transfers the sampled behavior onto the ongoing motion, generating diverse and contextually appropriate motions. BeLFusion also incorporates a conditional Latent Diffusion Model (LDM) to accurately encode behavioral dynamics and transfer them to ongoing motions, while minimizing latent and reconstruction errors to enhance the diversity of the generated sequences.
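The following is an illustrative PyTorch sketch of that Behavioral VAE idea, not the authors' implementation: layer sizes, sequence lengths, and class names are assumptions. It shows a Behavior Encoder that combines a GRU with 2D convolutions to produce a latent distribution, and a Behavior Coupler that decodes a sampled behavior code conditioned on a motion-context vector.

```python
import torch
import torch.nn as nn

class BehaviorEncoder(nn.Module):
    """Maps an observed pose sequence to a latent behavior distribution (illustrative sizes)."""
    def __init__(self, n_joints=21, hidden=128, latent_dim=128):
        super().__init__()
        self.gru = nn.GRU(n_joints * 3, hidden, batch_first=True)
        self.conv = nn.Sequential(                      # 2D convolution over the (time, hidden) feature map
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
        )
        self.to_mu = nn.Linear(8 * 4 * 4, latent_dim)
        self.to_logvar = nn.Linear(8 * 4 * 4, latent_dim)

    def forward(self, x):                               # x: (batch, T_obs, n_joints, 3)
        b, t, j, c = x.shape
        h, _ = self.gru(x.reshape(b, t, j * c))         # (batch, T_obs, hidden)
        feats = self.conv(h.unsqueeze(1))               # (batch, 128)
        return self.to_mu(feats), self.to_logvar(feats)

class BehaviorCoupler(nn.Module):
    """Decodes a sampled behavior code conditioned on the ongoing-motion context."""
    def __init__(self, n_joints=21, latent_dim=128, context_dim=128, t_pred=100):
        super().__init__()
        self.t_pred, self.n_joints = t_pred, n_joints
        self.gru = nn.GRU(latent_dim + context_dim, 256, batch_first=True)
        self.head = nn.Linear(256, n_joints * 3)

    def forward(self, z, context):                      # z: (batch, latent_dim), context: (batch, context_dim)
        inp = torch.cat([z, context], dim=-1).unsqueeze(1).repeat(1, self.t_pred, 1)
        h, _ = self.gru(inp)
        return self.head(h).reshape(-1, self.t_pred, self.n_joints, 3)

# Sample a behavior code with the reparameterization trick and couple it to a context vector.
encoder, coupler = BehaviorEncoder(), BehaviorCoupler()
obs = torch.randn(2, 50, 21, 3)                         # dummy observed sequence
mu, logvar = encoder(obs)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
context = torch.randn(2, 128)                           # placeholder for the Context Encoder output
pred = coupler(z, context)                              # (2, 100, 21, 3)
```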
BeLFusion's architecture also includes an Observation Encoder, an autoencoder that generates hidden states from joint coordinates. The LDM employs a U-Net with cross-attention mechanisms and residual blocks to sample from a latent space in which behavior is disentangled from pose and motion. By promoting diversity from a behavioral perspective while maintaining consistency with the immediate past, BeLFusion produces significantly more realistic and coherent motion predictions than state-of-the-art stochastic HMP methods. Through its combination of behavioral disentanglement and latent diffusion, BeLFusion represents a promising advance in human motion prediction, with the potential to generate more natural and contextually appropriate motions for a wide range of applications.
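As a rough illustration of the sampling stage, the sketch below runs a standard DDPM-style reverse process in the behavioral latent space. The conditional denoiser stands in for the cross-attention U-Net, and the noise schedule and function names are assumptions rather than the paper's exact configuration.

```python
import torch

@torch.no_grad()
def sample_behavior_latent(denoiser, obs_encoding, latent_dim=128, steps=1000):
    """DDPM-style reverse process in the behavioral latent space (illustrative schedule)."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(obs_encoding.shape[0], latent_dim)    # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(z, t, obs_encoding)                 # predicted noise, conditioned on the observation
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / torch.sqrt(alphas[t])       # posterior mean estimate
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z  # behavior code; the Behavior Coupler decodes it into a motion sequence
```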
Experimental evaluation demonstrates BeLFusion's strong generalization capabilities, as it performs well in both seen and unseen scenarios. It outperforms state-of-the-art methods across several metrics in a cross-dataset evaluation on the challenging Human3.6M and AMASS datasets. On Human3.6M, BeLFusion achieves an Average Displacement Error (ADE) of approximately 0.372 and a Final Displacement Error (FDE) of around 0.474, while on AMASS it reports an ADE of roughly 1.977 and an FDE of approximately 0.513. These results indicate BeLFusion's ability to generate accurate and diverse predictions, showcasing its effectiveness and generalization across different datasets and action classes.
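For reference, ADE and FDE in stochastic HMP are usually computed over the best of K sampled futures: ADE averages the displacement error over the prediction horizon, while FDE measures it at the final frame. Below is a minimal sketch, assuming per-joint averaging of L2 errors; exact conventions vary between benchmarks.

```python
import torch

def ade_fde(pred, gt):
    """pred: (batch, K, T, n_joints, 3) sampled futures; gt: (batch, T, n_joints, 3) ground truth."""
    dist = torch.norm(pred - gt.unsqueeze(1), dim=-1)        # per-joint L2 error: (batch, K, T, n_joints)
    per_frame = dist.mean(dim=-1)                            # average over joints: (batch, K, T)
    ade = per_frame.mean(dim=-1).min(dim=1).values.mean()    # best-of-K average error over the horizon
    fde = per_frame[:, :, -1].min(dim=1).values.mean()       # best-of-K error at the final frame
    return ade.item(), fde.item()
```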
Overall, BeLFusion is a novel method for human motion prediction that achieves state-of-the-art accuracy on both the Human3.6M and AMASS datasets. It utilizes a behavioral latent space and latent diffusion models to generate diverse, context-adaptive predictions. Its ability to capture behaviors from one sequence and transfer them to another makes it robust against domain shifts and improves generalization. Moreover, the qualitative assessment shows that BeLFusion's predictions are more realistic than those of other state-of-the-art methods. It offers a promising solution for human motion prediction, with potential applications in animation, virtual reality, and robotics.
Check out the Paper, Project, GitHub, and Tweet. All credit for this research goes to the researchers on this project.