parent: domain
GenMM Example-based Motion Synthesis via Generative Motion Matching
training-free; generates results in a fraction of a second
DiffMimic Efficient Motion Mimicking with Differentiable Physics
Physics-based Motion Retargeting from Sparse Inputs
from sparse human sensor data to characters of various morphologies
TC4D Trajectory-Conditioned Text-to-4D Generation
factors motion into global and local components
synthesis of scenes animated along arbitrary trajectories
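A minimal sketch of the global/local factoring idea: global motion as a root trajectory, local motion as body-frame offsets composed on top. This is an illustrative toy (pure translation), not TC4D's actual rigid-transform-plus-deformation formulation; all names here are hypothetical.

```python
import numpy as np

def compose_motion(trajectory, local_offsets):
    """Compose global and local motion: each frame's points are the
    local (body-frame) offsets translated along a global trajectory."""
    # trajectory: (T, 3) global root positions; local_offsets: (T, N, 3)
    return trajectory[:, None, :] + local_offsets

T, N = 8, 5
trajectory = np.linspace([0, 0, 0], [7, 0, 0], T)   # arbitrary global path
local_offsets = 0.1 * np.sin(np.linspace(0, np.pi, T))[:, None, None] * np.ones((T, N, 3))
frames = compose_motion(trajectory, local_offsets)   # (8, 5, 3)
```

Factoring this way lets the trajectory be conditioned independently of the local deformation.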
PAE DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds
decomposes movement into composable phase channels
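A toy version of the periodic parameterization behind this idea: fit each 1D latent curve with a single sinusoid (amplitude, frequency, phase, bias) via FFT. This is only a sketch of the parameterization, not the paper's learned autoencoder, which predicts the phase with a dedicated layer.

```python
import numpy as np

def extract_phase_params(x, dt=1.0):
    """Fit the dominant sinusoid of a 1D latent curve via FFT,
    returning (amplitude, frequency, phase, bias)."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    bias = spectrum[0].real / n
    k = 1 + np.argmax(np.abs(spectrum[1:]))      # dominant non-DC bin
    amp = 2 * np.abs(spectrum[k]) / n
    freq = k / (n * dt)
    phase = np.angle(spectrum[k])
    return amp, freq, phase, bias

t = np.arange(64)
x = 0.5 + 2.0 * np.cos(2 * np.pi * 4 * t / 64 + 0.3)
amp, freq, phase, bias = extract_phase_params(x)   # recovers 2.0, 4/64, 0.3, 0.5
```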
parent: diffusion
INTERACTIONS
ReMoDiffuse Retrieval-Augmented Motion Diffusion Model
semantics-augmented retrieval database, plus previous motion feature sequences
MotionDiffuser Controllable Multi-Agent Motion Prediction using Diffusion (future trajectories over multiple agents)
TEDi Temporally-Entangled Diffusion for Long-Term Motion Synthesis
adapts the gradual diffusion concept (along the diffusion time-axis) to the temporal axis of the motion sequence
extends the DDPM framework to temporally varying denoising, thereby entangling the two axes; enables long-term motion synthesis
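The entangled-axes mechanics can be sketched with a toy staggered buffer: frame i sits at noise level i + 1, each step denoises all frames by one level, then the clean front frame is popped and fresh noise is pushed at the back. The denoiser here is a trivial stand-in (clean = 0), not a learned model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 10, 3                          # buffer length in frames, feature dim

def toy_denoise_step(buffer, levels):
    """One joint update: each frame moves 1/level of the way toward clean
    (here clean = 0). Stand-in for a denoiser conditioned on a
    different noise level per frame."""
    return buffer * (1.0 - 1.0 / levels)[:, None]

# Entangled axes: buffer position i holds noise level i + 1, so the
# front of the buffer is nearly clean and the back is pure noise.
levels = np.arange(1, T + 1, dtype=float)
buffer = rng.normal(size=(T, D)) * levels[:, None]

generated = []
for _ in range(25):                   # stream frames indefinitely
    buffer = toy_denoise_step(buffer, levels)
    generated.append(buffer[0])       # front frame is now fully denoised
    new_noise = rng.normal(size=(1, D)) * T
    buffer = np.concatenate([buffer[1:], new_noise], axis=0)  # shift + push noise
```

Because frames exit one at a time, generation is autoregressive in time with no fixed sequence length.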
Generating Fine-Grained Human Motions Using ChatGPT-Refined Descriptions
FG-MDM: Fine-Grained Human Motion Diffusion Model
DreaMo Articulated 3D Reconstruction From A Single Casual Video ==best==
diffusion model to hallucinate invisible parts and to enhance the geometry
CAGE Controllable Articulation GEneration
attention modules designed to extract correlations between part attributes; takes the connectivity graph as input
MoMask Generative Masked Modeling of 3D Human Motions
progressively predicts next-layer tokens based on the results from the current layer
a hierarchical quantization scheme represents human motion as multi-layer motion tokens
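The hierarchical quantization idea is residual vector quantization: each layer quantizes the residual left by the previous layers, so tokens get progressively finer. A minimal sketch with random codebooks (learned in practice; the masked-transformer prediction stage is not shown):

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Hierarchical (residual) quantization: each layer snaps the current
    residual to its nearest codeword; tokens are the codeword indices."""
    tokens, recon = [], np.zeros_like(x)
    for cb in codebooks:                       # cb: (K, D) codebook
        residual = x - recon
        d = np.linalg.norm(residual[:, None, :] - cb[None], axis=-1)
        idx = d.argmin(axis=1)                 # (T,) token ids for this layer
        tokens.append(idx)
        recon += cb[idx]
    return tokens, recon

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 4))                   # motion features, T = 16 frames
codebooks = []
for _ in range(3):
    cb = rng.normal(size=(32, 4))
    cb[0] = 0.0                                # zero word lets a layer "pass"
    codebooks.append(cb)
tokens, recon = residual_quantize(x, codebooks)
```

With the zero codeword available at every layer, the per-frame reconstruction error can only shrink as layers are added.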
Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model
the low-dimensional latent stays consistent with the prompt, while the high-dimensional latent follows a detail-enhancing process
Realistic Human Motion Generation with Cross-Diffusion Models
the model reverses either 2D or 3D noise into clean motion during training (leverages 2D motion data)
HuTuMotion Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback
personalized and style-aware human motion generation
MotionMix Weakly-Supervised Diffusion for Controllable Motion Generation
unannotated motions
Generative Human Motion Stylization in Latent Space
Self-Correcting Self-Consuming Loops for Generative Model Training [training]
successfully avoids collapse (e.g., nullifying opposite movements)
Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation ==best==
using a single text prompt as input lacks the fine-grained control needed by animators
multi-track timeline of multiple prompts organized in temporal intervals that may overlap
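The input format can be sketched as a small data structure that resolves overlapping prompt intervals into the set of prompts active at each frame. This only shows the timeline representation, not how the generator combines overlapping prompts; names are hypothetical.

```python
def active_prompts(tracks, num_frames):
    """Resolve a multi-track timeline into the list of prompts active
    at each frame; intervals may overlap across tracks."""
    timeline = [[] for _ in range(num_frames)]
    for prompt, start, end in tracks:          # [start, end) frame intervals
        for f in range(start, min(end, num_frames)):
            timeline[f].append(prompt)
    return timeline

tracks = [("walk forward", 0, 60), ("wave right hand", 30, 50)]
timeline = active_prompts(tracks, 60)
print(timeline[40])  # ['walk forward', 'wave right hand']
```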
Seamless Human Motion Composition with Blended Positional Encodings ==best==
diffusion-based model for seamless human motion compositions
global coherence at the absolute stage, smooth transitions at the relative stage
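A schematic of the two-stage switch: early denoising steps attend with absolute frame positions (global coherence across the whole composition), late steps with positions relative to the segment start (smooth local transitions). The actual per-layer blending in the paper is more involved; the split point here is an assumption.

```python
import numpy as np

def positional_encoding(positions, d_model=8):
    """Standard sinusoidal encoding for the given (possibly negative) positions."""
    i = np.arange(d_model // 2)
    angles = positions[:, None] / (10000 ** (2 * i / d_model))[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def blended_pe(frame_idx, segment_start, step, num_steps, split=0.5):
    """Early denoising steps: absolute positions (global coherence).
    Late steps: positions relative to the segment start (smooth transitions)."""
    if step < split * num_steps:
        positions = frame_idx.astype(float)                    # absolute
    else:
        positions = (frame_idx - segment_start).astype(float)  # relative
    return positional_encoding(positions)

fi = np.arange(4)
early = blended_pe(fi, segment_start=2, step=0, num_steps=50)
late = blended_pe(fi, segment_start=2, step=49, num_steps=50)
```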
Large Motion Model for Unified Multi-Modal Motion Generation
multimodal input: text, speech, music, video
DNO Optimizing Diffusion Noise Can Serve As Universal Motion Priors
preserves the original motion while accommodating editing modes: changing trajectory, pose, joint locations, avoiding obstacles
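The core idea: keep the model frozen and gradient-descend on the initial diffusion noise so that the decoded motion satisfies an editing loss. A toy sketch with a frozen linear map standing in for the full (differentiable) sampling chain; all quantities here are illustrative.

```python
import numpy as np

# Toy frozen "denoiser": motion = A @ z maps latent noise z to a motion vector.
# Stand-in for backpropagating through a full diffusion sampling chain.
A = np.diag([1.0, 0.8, 0.6, 0.5, 0.4, 0.3])
target = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 2.0])  # editing constraint (e.g. a pose)

z = np.ones(6)                                  # initial diffusion noise
lr = 0.1
losses = []
for _ in range(500):
    grad = A.T @ (A @ z - target)               # d/dz of 0.5 * ||A z - target||^2
    z -= lr * grad
    losses.append(0.5 * np.sum((A @ z - target) ** 2))
# Only the input noise is optimized; the frozen model keeps the
# result on its learned motion manifold, acting as a universal prior.
```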