:PROPERTIES:
:ID: 4d3d8e2a-08c3-4624-9389-cd54e06850b9
:END:
#+title: motion
#+filetags: :neuralnomicon:
#+SETUPFILE: https://fniessen.github.io/org-html-themes/org/theme-readtheorg.setup

- parent: [[id:e9be16f7-8032-4509-9aa9-7843836eacd9][domain]]
- [[https://weiyuli.xyz/GenMM/][GenMM]]: [[https://huggingface.co/papers/2306.00378][Example-based]] Motion Synthesis via Generative Motion Matching
  - training-free, generates in a fraction of a second
- [[https://arxiv.org/abs/2304.03274][DiffMimic]]: [[https://github.com/jiawei-ren/diffmimic][Efficient]] Motion Mimicking with Differentiable Physics
- [[https://twitter.com/_akhaliq/status/1676822600478015488][Physics-based]] Motion Retargeting from Sparse Inputs
  - from sparse human sensor data to characters of various morphologies
- [[https://arxiv.org/abs/2403.17920][TC4D]]: Trajectory-Conditioned Text-to-4D Generation
  - factors motion into global and local components
  - synthesizes scenes animated along arbitrary trajectories

* PAE - PHASE
:PROPERTIES:
:ID: 88490b18-3eaf-402d-b8ef-eca7a125ce93
:END:
- [[https://github.com/sebastianstarke/AI4Animation/tree/master/AI4Animation/SIGGRAPH_2020/DeepLearning][PAE]] [[https://github.com/sebastianstarke/AI4Animation/blob/master/Media/SIGGRAPH_2022/Paper.pdf][paper]] [[https://www.ianxmason.com/posts/PAE/][details]]: DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds
  - decomposes movement into composable periodic phase channels (see the sketch below)
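The PAE trick is compact enough to show in code: every latent channel is forced through a sinusoidal bottleneck whose frequency, amplitude, and offset come from a differentiable FFT, while the phase comes from a small FC head. A minimal PyTorch sketch under assumed shapes (69 joint features, 5 phase channels, 61-frame windows at 30 fps); it illustrates the idea, not the repository's actual code.

#+begin_src python
import torch
import torch.nn as nn

class PAE(nn.Module):
    """Periodic autoencoder: each latent channel is bottlenecked into one
    sinusoid parameterized by (phase, frequency, amplitude, offset)."""
    def __init__(self, joints=69, channels=5, frames=61, fps=30):
        super().__init__()
        self.frames = frames
        self.enc = nn.Conv1d(joints, channels, kernel_size=25, padding=12)
        self.dec = nn.Conv1d(channels, joints, kernel_size=25, padding=12)
        self.phase_fc = nn.Linear(frames, 2)  # predicts (sin, cos) per channel
        self.register_buffer("freqs", torch.fft.rfftfreq(frames, d=1.0 / fps))
        t = torch.arange(frames, dtype=torch.float32) / fps
        self.register_buffer("t", t - t.mean())       # window time, centered

    def forward(self, x):                     # x: (batch, joints, frames)
        latent = self.enc(x)                  # (batch, channels, frames)
        spec = torch.fft.rfft(latent, dim=-1)
        p, f = spec.abs().pow(2)[..., 1:], self.freqs[1:]   # drop the DC bin
        freq = (p * f).sum(-1) / p.sum(-1).clamp_min(1e-8)  # dominant freq (Hz)
        amp = 2.0 * p.sum(-1).sqrt() / self.frames  # exact for a pure sinusoid
        offset = spec[..., 0].real / self.frames    # DC component
        sc = self.phase_fc(latent)                  # (batch, channels, 2)
        phase = torch.atan2(sc[..., 0], sc[..., 1]) / (2 * torch.pi)
        # re-synthesize each latent channel as its fitted sinusoid
        curve = (amp[..., None]
                 * torch.sin(2 * torch.pi * (freq[..., None] * self.t
                                             + phase[..., None]))
                 + offset[..., None])
        return self.dec(curve), (phase, freq, amp, offset)

pae = PAE()
window = torch.randn(8, 69, 61)               # batch of joint-feature windows
recon, params = pae(window)
loss = nn.functional.mse_loss(recon, window)  # plain reconstruction objective
#+end_src

The bottleneck is the whole point: after training, the per-channel (phase, frequency, amplitude) features form the phase manifold that downstream motion controllers index into.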
* DIFFUSION MOTION
- parent: [[id:82127d6a-b3bb-40bf-a912-51fa5134dacc][diffusion]] [[id:38d684fe-bc58-4132-bc2f-407e70198230][INTERACTIONS]]
- [[https://arxiv.org/pdf/2304.01116.pdf][ReMoDiffuse]]: [[https://github.com/mingyuan-zhang/ReMoDiffuse][Retrieval-Augmented]] [[https://twitter.com/_akhaliq/status/1643077969215205376][Motion Diffusion]] Model
  - semantics-augmented retrieval database, plus previous motion feature sequences
- [[https://twitter.com/_akhaliq/status/1665938011068964864][MotionDiffuser]]: Controllable Multi-Agent Motion Prediction using Diffusion (future trajectories over multiple agents)
- [[https://twitter.com/_akhaliq/status/1684795307161448448][TEDi]]: Temporally-Entangled Diffusion for Long-Term Motion Synthesis
  - maps the gradual diffusion concept (along the diffusion time-axis) onto the temporal axis of the motion sequence
  - extends the DDPM framework to temporally varying denoising, thereby entangling the two axes; enables long-term motion
- [[https://twitter.com/_akhaliq/status/1732232720464363909][Generating]] Fine-Grained Human Motions Using ChatGPT-Refined Descriptions
  - FG-MDM: Fine-Grained Human Motion Diffusion Model
- [[https://ttaoretw.github.io/DreaMo/][DreaMo]]: [[https://twitter.com/ChiehHubertLin/status/1732779871942959446][Articulated]] 3D Reconstruction From A Single Casual Video ==best==
  - a diffusion model hallucinates invisible parts and enhances the geometry
- [[https://3dlg-hcvc.github.io/cage/][CAGE]]: Controllable Articulation GEneration
  - attention modules designed to extract correlations between part attributes; takes a connectivity graph as input

** HUMAN
- [[https://twitter.com/_akhaliq/status/1731505959913042349][MoMask]]: Generative [[https://github.com/EricGuo5513/momask-codes][Masked]] Modeling of 3D Human Motions
  - progressively predicts the next-layer tokens based on the results from the current layer
  - a hierarchical quantization scheme represents human motion as motion tokens
- [[https://arxiv.org/abs/2312.10960][Towards]] Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model
  - the low-dimensional latent stage keeps the motion consistent with the prompt, while the high-dimensional stage follows a detail-enhancing process
- [[https://wonderno.github.io/CrossDiff-webpage/][Realistic]] Human Motion Generation with Cross-Diffusion Models
  - the model reverses either 2D or 3D noise into clean motion during training (leveraging 2D motion data)
- [[https://arxiv.org/abs/2312.12227][HuTuMotion]]: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback
  - personalized and style-aware human motion generation
- [[https://arxiv.org/abs/2401.11115][MotionMix]]: Weakly-Supervised Diffusion for Controllable Motion Generation
  - works with unannotated motions
- [[https://yxmu.foo/GenMoStyle/][Generative]] Human Motion Stylization in Latent Space
- [[https://cs.brown.edu/people/ngillman//sc-sc.html][Self-Correcting]] Self-Consuming Loops For Generative Model Training [training]
  - successfully avoids collapse (of opposite movements nullifying each other)

*** TIMELINE
- [[https://twitter.com/_akhaliq/status/1748044168419500167][Multi-Track]] Timeline Control for Text-Driven 3D Human Motion Generation ==best==
  - a single text prompt as input lacks the fine-grained control animators need
  - instead: a multi-track timeline of multiple prompts organized into temporal intervals that may overlap
- [[https://twitter.com/_akhaliq/status/1761975347300454449][Seamless]] Human Motion Composition with Blended Positional Encodings ==best==
  - diffusion-based model for seamless human motion compositions
  - global coherence at the absolute stage, smooth transitions at the relative stage
- [[https://mingyuan-zhang.github.io/projects/LMM.html][Large Motion]] Model for Unified Multi-Modal Motion Generation
  - multimodal input: text, speech, music, video

** EDIT
- [[https://korrawe.github.io/dno-project/][DNO]]: Optimizing Diffusion Noise Can Serve As Universal Motion Priors (see the sketch below)
  - preserves the motion while accommodating editing modes: changing trajectory, pose, or joint locations, and avoiding obstacles
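DNO's mechanism fits in a few lines: keep the pretrained diffusion model frozen, treat the sampler as a differentiable function of its initial noise, and descend an editing loss into that noise, so the model itself acts as the motion prior. A hypothetical sketch; the toy sampler, schedule, shapes, and loss target are assumptions for illustration, not the authors' code.

#+begin_src python
import torch

def ddim_sample(denoiser, z, steps=20):
    """Toy deterministic DDIM-style sampler (eps-prediction), kept
    differentiable so gradients can flow back into the initial noise z."""
    x = z
    alphas = torch.linspace(0.999, 0.05, steps)     # assumed alpha_bar schedule
    for i in range(steps - 1):
        a, a_next = alphas[i], alphas[i + 1]
        eps = denoiser(x, i)                        # frozen pretrained network
        x0 = (x - (1 - a).sqrt() * eps) / a.sqrt()  # predicted clean motion
        x = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps
    return x                                        # (frames, joints, 3)

def optimize_noise(denoiser, z_init, target, joint=0, frame=-1,
                   iters=300, lr=5e-2):
    """Gradient descent on the initial noise; the frozen sampler is the prior.
    Backprop through every sampling step is memory-hungry; practical variants
    shorten the chain or checkpoint."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        motion = ddim_sample(denoiser, z)
        # editing loss: pin one joint at one frame to a target position;
        # since only z moves, the result stays on the model's motion manifold
        loss = (motion[frame, joint] - target).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# usage with a stand-in denoiser (a real one is a pretrained motion model)
denoiser = lambda x, t: torch.zeros_like(x)
z0 = torch.randn(60, 22, 3)                         # 60 frames, 22 joints, xyz
z_edit = optimize_noise(denoiser, z0, target=torch.tensor([1.0, 0.0, 0.0]))
#+end_src

The same loop covers the listed editing modes by swapping the loss: trajectory terms, pose terms, or penalty terms for obstacle regions.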