Relighting Neural Radiance Fields with Shadow and Highlight Hints
moving point light source
a second multi-layer perceptron takes shadow and highlight hints as input
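A minimal sketch of the idea (not the authors' code; layer names and sizes below are made up): a second MLP that consumes per-sample scene features plus precomputed shadow/highlight hints and the light position, and outputs relit RGB.
```python
import torch
import torch.nn as nn

class RelightingHead(nn.Module):
    """Sketch only: second MLP conditioned on shadow/highlight hints
    and the position of the moving point light."""
    def __init__(self, feat_dim=32, n_hints=2, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_hints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, feats, hints, light_pos):
        # feats: (N, feat_dim), hints: (N, n_hints), light_pos: (N, 3)
        return self.mlp(torch.cat([feats, hints, light_pos], dim=-1))

head = RelightingHead()
rgb = head(torch.rand(1024, 32), torch.rand(1024, 2), torch.rand(1024, 3))
```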
nerf reflections: https://youtu.be/qrdRH9irAlk
Mirror-NeRF Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing
introducing the reflection probability
SpecNeRF Gaussian Directional Encoding for Specular Reflections
learnable Gaussian directional encoding to model view-dependent effects under near-field light
UniSDF Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections
faster reconstructions; explicitly blends view-based and reflection-based representations in 3D space
Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces
incorporates touch data (local depth maps) with multi-view vision data to achieve surface reconstruction and novel view synthesis
fewer images required
Dynamic Mesh-Aware Radiance Fields
two-way rendering-simulation coupling between mesh and NeRF
casts realistic light from NeRF media onto mesh surfaces, and mesh shadows onto the NeRF = enhanced realism
Neuralangelo High-Fidelity Neural Surface Reconstruction (3d mesh augmented)
PyNeRF Pyramidal Neural Radiance Fields
modification to grid-based models by training model heads at different spatial grid resolutions
reduces error rates by 20% while training over 60x faster compared to Mip-NeRF
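Rough illustration of the pyramidal-routing idea (the rule and names below are assumptions, not PyNeRF's actual code): route each sample to the head whose grid resolution matches the pixel footprint at that sample's distance.
```python
def pick_head(sample_distance, pixel_width, focal, resolutions):
    """Toy rule: use the finest head whose grid cell still covers the
    world-space footprint that one pixel subtends at this distance."""
    footprint = sample_distance * pixel_width / focal   # approx. cone width
    for level, res in enumerate(sorted(resolutions, reverse=True)):
        if footprint <= 1.0 / res:
            return level
    return len(resolutions) - 1   # coarsest head as fallback

# e.g. heads trained at grid resolutions 512, 256, 128, 64
print(pick_head(sample_distance=2.0, pixel_width=1.0, focal=1000.0,
                resolutions=[512, 256, 128, 64]))
```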
ReconFusion 3D Reconstruction with Diffusion Priors
leverages a diffusion prior trained on multi-view data for novel view synthesis from only a few photos
DyBluRF Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video
tackles the challenge of view synthesis from motion-blurred monocular video
GANeRF Leveraging Discriminators to Optimize Neural Radiance Fields
uses a GAN discriminator; no floating ghost artifacts
Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields
evaluates uncertainty in any pre-trained NeRF, then cleans it up
FrameNeRF A Simple and Efficient Framework for Few-shot Novel View Synthesis
regularization model (sd) as a data generator to produce dense views from sparse inputs
MELON NeRF with Unposed Images Using Equivalence Class Estimation (no poses)
DINER Depth-aware Image-based NEural Radiance fields
COLMAP-Free 3D Gaussian Splatting
continuity of the input video = no need for camera poses
Efficient Neural Radiance Fields for Interactive Free-viewpoint Video (people)
Progressively Optimized Local Radiance Fields for Robust View Synthesis
turn video into nerf
F2-NeRF Fast Neural Radiance Field Training with Free Camera Trajectories
HOSNeRF Dynamic Human-Object-Scene Neural Radiance Fields from a Single Video
free 360° viewpoint
Consistent4D Consistent 360° Dynamic Object Generation from Monocular Video
no need for multi-view data collection or camera calibration to train the DyNeRF
SceNeRFlow Time-Consistent Reconstruction of General Dynamic Scenes (dynamic-NeRF method)
correspondences even for long-term, long-range motions
ResFields Residual Neural Fields for Spatiotemporal Signals
effectively represent complex temporal signals
matrix factorization technique to reduce the number of trainable parameters
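Sketch of the low-rank residual-weight trick as I understand it (simplified; the layer name and per-frame parameterization are assumptions): the layer weight is a static matrix plus a time-dependent residual expressed through a small factorization, so temporal capacity costs few extra parameters.
```python
import torch
import torch.nn as nn

class ResLinear(nn.Module):
    """Toy residual field layer: W(t) = W + coeff[t] @ basis (low rank)."""
    def __init__(self, d_in, d_out, rank=8, n_frames=100):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.coeff = nn.Parameter(torch.zeros(n_frames, rank))          # per-frame
        self.basis = nn.Parameter(torch.randn(rank, d_out * d_in) * 0.01)  # shared
        self.d_in, self.d_out = d_in, d_out

    def forward(self, x, frame_idx):
        dW = (self.coeff[frame_idx] @ self.basis).view(self.d_out, self.d_in)
        return self.base(x) + x @ dW.t()

layer = ResLinear(64, 64)
y = layer(torch.rand(4096, 64), frame_idx=10)
```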
HyperFields Towards Zero-Shot Generation of NeRFs from Text
distills scenes encoded in individual NeRFs into one dynamic hypernetwork
ARF: Artistic Radiance Fields https://www.cs.cornell.edu/projects/arf/
nerf style transfer
Instruct-NeRF2NeRF Editing 3D Scenes with Instructions <<Instruct-NeRF2NeRF>>
iteratively edits the training images and re-trains the NeRF
integrated into nerfstudio
FaceCLIPNeRF Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields
trains a latent-code-conditional deformable NeRF over a dynamic scene
learns to represent a manipulated scene with spatially varying latent codes using CLIP
Seal-3D Interactive Pixel-Level Editing for Neural Radiance Fields
preview, instantly; local pretraining and global finetuning
proxy function mapping the editing instructions to the original space
SIGNeRF Scene Integrated Generation for Neural Radiance Fields
Generatively edits NeRF scenes
TIP-Editor An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts
accepts text and image prompts and a 3D bounding box to specify the editing region
Freditor High-Fidelity and Transferable NeRF Editing by Frequency Decomposition
lift 2D stylization results to 3D scenes
enabling stable intensity control and novel scene transfer
Reference-guided Controllable Inpainting of Neural Radiance Fields
uses a mask and a single reference view image to control the inpainted region
NeRFiller Completing Scenes via Generative 3D Inpainting
leveraging a 2D inpainting diffusion model
InseRF Text-Driven Generative Object Insertion in Neural 3D Scenes
GO-NeRF Generating Virtual Objects in Neural Radiance Fields
utilizing scene context for high-quality and harmonious 3D object generation within an existing NeRF
ReplaceAnything3D Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
replace while maintaining 3D consistency across multiple viewpoints
TUVF: Learning Generalizable Texture UV Radiance Fields
nerf baked to texture
Blended-NeRF Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields
edits/blends objects in an existing nerf, consistent across multiple nerf views
Mesh Neural Cellular Automata: instead of a UV map, a 3D-texture feel
NeRSemble Multi-view Radiance Field Reconstruction of Human Heads
nerf to texture to faces, very realistic (73 fps)
fixes generated faces, no texture-sticking issue https://www.youtube.com/watch?v=j1ZY7LInN9g&t=272s
Zip-NeRF Anti-Aliased Grid-Based Neural Radiance Fields (house interiors)
Progressively Optimized Local Radiance Fields for Robust View Synthesis
dynamically allocates new local radiance fields
city nerf: https://city-super.github.io/gridnerf/
Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes
enables inserting 3D objects into the nerf scene
(not nerf) UrbanIR Large-Scale Urban Scene Inverse Rendering from a Single Video (google maps)
City-on-Web Real-time Neural Rendering of Large-scale Scenes on the Web
streams and renders real-life scenes on the web in real time
VastGaussian Vast 3D Gaussians for Large Scene Reconstruction
large scenes 3D Gaussian Splatting
GaussianPro 3D Gaussian Splatting with Progressive Propagation
guide the densification of the 3D Gaussians across large scenes
Octree-GS Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
LOD-structured 3D Gaussian approach
parent: computer_illumination
NERF for real-time view synthesis https://arxiv.org/abs/2103.14645
AdaNeRF https://arxiv.org/pdf/2207.10312.pdf (40 ms, two nerfs)
VR-NeRF High-Fidelity Virtualized Walkable Spaces (36 Hz)
INGeo Accelerating Instant Neural Scene Reconstruction with Noisy Geometry Priors, faster than Instant-NGP
instant-ngp: renders fractions of screen-space repeatedly; trains all neural primitives in seconds
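instant-ngp's speed comes largely from its multiresolution hash encoding; below is a single-level sketch of the spatial hash (grid corner XORed after scaling by large primes, modulo the table size, as described in the paper), with trilinear interpolation and the multi-level stack omitted.
```python
import numpy as np

def hash_grid_lookup(xyz, table, resolution):
    """One-level sketch of an instant-ngp-style hash lookup (no interpolation,
    no multiresolution): grid corner -> XOR of coords * primes, mod table size."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    corner = np.floor(xyz * resolution).astype(np.uint64)          # (N, 3)
    idx = corner * primes
    h = (idx[:, 0] ^ idx[:, 1] ^ idx[:, 2]) % np.uint64(len(table))
    return table[h]                                                # (N, F) features

table = np.random.randn(2**14, 2).astype(np.float32)   # 2^14 entries, 2 features
feats = hash_grid_lookup(np.random.rand(1000, 3), table, resolution=128)
```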
PlenOctree https://github.com/sxyu/volrend 150 fps
Tetra-NeRF Representing Neural Radiance Fields Using Tetrahedra
Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations
nerf with autoencoder latent field, 13 times faster rendering
VideoRF Rendering Dynamic Radiance Fields as 2D Feature Video Streams ==best==
streaming-rendering nerf online or on mobile devices
feature image stream can be efficiently compressed by 2D video codecs
HybridNeRF Efficient Neural Rendering via Adaptive Volumetric Surfaces
==surfaces instead of volumes==, real time better speed
Plenoxels (plenoptic voxels), 3D grid with spherical harmonics https://arxiv.org/pdf/2112.05131
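The per-voxel view-dependent color is just a dot product of stored spherical-harmonic coefficients with the SH basis evaluated in the view direction; a minimal degree-0/1 version (Plenoxels stores up to degree 2, i.e. 9 terms per channel):
```python
import numpy as np

def sh_color(coeffs, view_dir):
    """Evaluate view-dependent RGB from degree-0/1 SH coefficients
    (constants follow the usual real-SH convention used in these codebases)."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([0.28209479,            # l=0
                      -0.48860251 * y,       # l=1, m=-1
                       0.48860251 * z,       # l=1, m=0
                      -0.48860251 * x])      # l=1, m=1
    return coeffs @ basis                    # coeffs: (3, 4) -> RGB

rgb = sh_color(np.random.rand(3, 4), np.array([0.0, 0.0, 1.0]))
```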
Patch-based 3D Natural Scene Generation from a Single Example (3d patches as codebook)
content + structure separation
MobileNeRF = small neural network (small MLP) for view-dependent color per pixel, a deferred neural shader
10 minutes
MobileNeRF (polygons, triangles) 124.3 fps
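Toy version of the deferred-shading step (not the released implementation; feature size and MLP widths are guesses): the rasterizer fills a G-buffer of learned features, and a tiny MLP maps each pixel's (feature, view direction) to color.
```python
import torch
import torch.nn as nn

# Tiny deferred neural shader: per-pixel MLP over rasterized features + view dir.
shader = nn.Sequential(
    nn.Linear(8 + 3, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 3), nn.Sigmoid(),
)

H, W = 480, 640
gbuffer = torch.rand(H, W, 8)        # rasterized per-pixel features (stand-in)
view_dirs = torch.rand(H, W, 3)      # per-pixel viewing directions
rgb = shader(torch.cat([gbuffer, view_dirs], dim=-1))   # (H, W, 3)
```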
Re-ReND Real-time Rendering of NeRFs across Devices (facebook) 329.6 fps
uses the standard GPU rasterization pipeline with geometry (like MobileNeRF, which used triangles)
4K4D Real-Time 4D View Synthesis at 4K Resolution (30x faster than previous)
4D feature grid with points naturally regularized and optimized
learn the proposed model from RGB videos
Motion2VecSets 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking
dynamic reconstruction from point cloud sequences
NeRV Neural Representations for Videos (nerf video)
https://github.com/haochen-rye/HNeRV
FFNeRV Flow-Guided Frame-Wise Neural Representations for Videos
incorporates flow information
Fast View Synthesis of Casual Videos
synthesize high-quality novel views from a monocular video efficiently, real time
NeuRBF A Neural Fields Representation with Adaptive Radial Basis Functions (hd images)
general radial bases with flexible kernel position and shape, to fit target signals
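A toy anisotropic-RBF field in this spirit (not the paper's exact kernel; shapes and counts are arbitrary): the signal is a weighted sum of radial bases whose centers and per-axis scales could be made learnable.
```python
import torch

def rbf_field(x, centers, inv_scales, weights):
    """f(x) = sum_i w_i * exp(-|| (x - c_i) * s_i ||^2)."""
    diff = (x[:, None, :] - centers[None, :, :]) * inv_scales[None, :, :]
    phi = torch.exp(-(diff ** 2).sum(-1))          # (N, K) kernel responses
    return phi @ weights                           # (N, out_dim)

x = torch.rand(2048, 3)
out = rbf_field(x, centers=torch.rand(64, 3),
                inv_scales=torch.ones(64, 3) * 8.0,
                weights=torch.randn(64, 1))
```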
LightSpeed Light and Fast Neural Light Fields on Mobile Devices
a direct mapping from a ray representation to the pixel color, neural light field using a light slab representation
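The light-slab encoding itself is simple: a ray is described by where it crosses two parallel planes, giving a 4D code that the network maps straight to color. Sketch below, with arbitrary plane placement:
```python
import numpy as np

def light_slab_coords(origin, direction, z0=0.0, z1=1.0):
    """Encode a ray by its intersections (u, v) and (s, t) with the planes
    z=z0 and z=z1 (classic two-plane light slab). Plane placement is an
    illustrative choice, not the paper's exact setup."""
    t0 = (z0 - origin[2]) / direction[2]
    t1 = (z1 - origin[2]) / direction[2]
    u, v = (origin + t0 * direction)[:2]
    s, t = (origin + t1 * direction)[:2]
    return np.array([u, v, s, t])

code = light_slab_coords(np.array([0.0, 0.0, -2.0]), np.array([0.1, 0.0, 1.0]))
```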
parent: nerf
==nerf alternative== RoomDreamer (not nerf), SuperPrimitive
ViewFormer no NeRF, instead Transformers
Geometry-Free View Synthesis: Transformers and no 3D Priors
AutoMerge A Framework for Map Assembling and Smoothing in City-scale Environments (google maps) without gps
M-SDF: Mosaic-SDF for 3D Generative Models ==best==
approximates the Signed Distance Function (SDF) of a shape using a set of local grids spread near the shape's boundary
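Rough sketch of how a query could be answered under that representation (simplified to nearest-voxel lookups and a plain average; the actual method interpolates and weights properly):
```python
import numpy as np

def mosaic_sdf_query(p, centers, half_size, grids):
    """Toy query: average the lookups of every local grid containing p;
    return a large value if p is far from all grids (far from the surface)."""
    vals = []
    res = grids.shape[-1]
    for c, g in zip(centers, grids):
        local = (p - c) / half_size                # in [-1, 1] if inside this grid
        if np.all(np.abs(local) <= 1.0):
            idx = np.clip(((local + 1) / 2 * (res - 1)).round().astype(int), 0, res - 1)
            vals.append(g[idx[0], idx[1], idx[2]])
    return float(np.mean(vals)) if vals else 1e3

centers = np.random.rand(32, 3)        # local grid centers scattered near the boundary
grids = np.random.randn(32, 7, 7, 7)   # 7^3 local SDF samples per grid
print(mosaic_sdf_query(np.array([0.5, 0.5, 0.5]), centers, 0.1, grids))
```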
VOLUME DIFFUSION ==best==
GIBR: Denoising Diffusion via Image-Based Rendering
IB-planes, a ==new neural scene representation== that accurately represents large 3D scenes, dynamically allocating more capacity as needed for details
denoising-diffusion framework to learn prior over IB-planes
only 2D images no need for masks or depths
single image as input, synthesises plausible details in hidden regions
a point cloud > gaussian cloud
ray tracing (nerf) > ray marching
vs mobilenerf? which seems faster with lower system requirements
3D Gaussian Splatting for Real-Time Radiance Field Rendering
represent the scene with 3D Gaussians
it has NO neural networks at all
==best nerf== far better than instant-ngp
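The per-pixel compositing really is just sorted alpha-blending of projected Gaussians, with no network in the loop; a small sketch (projection of 3D Gaussians to 2D is assumed done already, variable names made up):
```python
import numpy as np

def composite_pixel(pix, means2d, covs2d, opacities, colors, depths):
    """Evaluate each projected 2D Gaussian at the pixel, then alpha-blend
    the splats front to back in depth order."""
    order = np.argsort(depths)                       # front-to-back
    color, transmittance = np.zeros(3), 1.0
    for i in order:
        d = pix - means2d[i]
        alpha = opacities[i] * np.exp(-0.5 * d @ np.linalg.inv(covs2d[i]) @ d)
        color += transmittance * alpha * colors[i]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:                     # early termination
            break
    return color

c = composite_pixel(np.array([0.0, 0.0]),
                    means2d=np.random.randn(5, 2),
                    covs2d=np.stack([np.eye(2)] * 5),
                    opacities=np.full(5, 0.5),
                    colors=np.random.rand(5, 3),
                    depths=np.random.rand(5))
```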
Mip-Splatting Alias-free 3D Gaussian Splatting
smoothing filter eliminating multiple artifacts and achieving alias-free renderings
Scaffold-GS Structured 3D Gaussians for View-Adaptive Rendering ==best==
reduces redundant Gaussians while delivering higher-quality rendering
GaussianPro 3D Gaussian Splatting with Progressive Propagation (vs 3DGS)
guide the densification of the 3D Gaussians
RadSplat Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS
GRM Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
recovering a 3D asset from sparse-view images in around 0.1s
reconstructs 3d gaussians-meshes from various sources: zero123++, instant3d, v3d, and sv3d
TRIPS Trilinear Point Splatting for Real-Time Radiance Field Rendering
approach that combines ideas from both Gaussian Splatting and ADOP (crisper images)
real-time frame rate of 60fps
Robust Gaussian Splatting
fixing blur, imperfect camera poses, color inconsistencies (caused by ambient light, shadows)
FisherRF: Active View Selection and Uncertainty Quantification for Radiance Fields using Fisher Informations
view selections at 70~fps, better quality
by leveraging Fisher information, no longer needing density distribution assumptions
GES Generalized Exponential Splatting for Efficient Radiance Field Rendering
requiring far fewer particles to represent a scene; half the memory, speed up by up to 40%
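The kernel swap is small: the Gaussian's fixed exponent of 2 becomes a learnable shape parameter β, so a single primitive can be peaky or box-like. 1-D sketch:
```python
import numpy as np

def generalized_exponential(x, mu=0.0, alpha=1.0, beta=2.0):
    """exp(-(|x - mu| / alpha)^beta): beta = 2 recovers a Gaussian; larger
    beta gives flatter tops and sharper edges, so fewer primitives are needed."""
    return np.exp(-(np.abs(x - mu) / alpha) ** beta)

x = np.linspace(-3, 3, 7)
print(generalized_exponential(x, beta=2.0))   # Gaussian-like falloff
print(generalized_exponential(x, beta=8.0))   # box-like, sharper edges
```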
KEP-SVGP: Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes
attention kernels are in essence asymmetric, so KEP-SVGP is used as the attention kernel to fully characterize the asymmetry
4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
video and in real time, 20 min
PhysGaussian Physics-Integrated 3D Gaussians for Generative Dynamics
integrates Newtonian dynamics within 3D Gaussians for motion synthesis
negates the necessity for triangle/tetrahedron meshing
4DGen Grounded 4D Content Generation with Spatial-temporal Consistency
4D representation using dynamic 3D Gaussians, generation from images or videos
specify geometry and motion offering superior control over content creation
4D Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes
4DGS, anisotropic 4D XYZT Gaussian
modeling complicated dynamics and fine details, especially for scenes with abrupt motions
Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis
3d gaussians enhanced with temporal opacity and parametric motion/rotation
replaces spherical harmonics with neural features, so small size and fast at 60 FPS
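Sketch of those two ingredients, temporal opacity and polynomial motion, per splat (degree and parameter names are illustrative, not the paper's exact parameterization):
```python
import numpy as np

def spacetime_gaussian_at(t, center_poly, mu_t, sigma_t, base_opacity):
    """Each splat has (1) a temporal opacity peaking at its own time mu_t
    and (2) a polynomial trajectory for its 3D center."""
    dt = t - mu_t
    opacity = base_opacity * np.exp(-0.5 * (dt / sigma_t) ** 2)   # temporal opacity
    center = sum(c * dt ** k for k, c in enumerate(center_poly))  # polynomial motion
    return center, opacity

# constant + linear + quadratic trajectory terms (3D each)
poly = [np.array([0.0, 0.0, 2.0]), np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.01, 0.0])]
print(spacetime_gaussian_at(t=0.3, center_poly=poly, mu_t=0.5, sigma_t=0.2, base_opacity=0.8))
```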
GaussianFlow Splatting Gaussian Dynamics for 4D Content Creation
smooth and natural, even in highly dynamic regions, no artifacts
SuGaR Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
hybrid gaussian-mesh for easy animation by manipulating the mesh
GaMeS Mesh-Based Adapting and Modification of Gaussian Splatting
hybrid of mesh and Gaussians that pins all Gaussian splats on the object surface (mesh)
allowing for adjustments in position, scale, and rotation during animation
Animatable Gaussians Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling
2D CNNs(StyleGAN-based) and 3D Gaussian splatting
Gaussian Frosting Editable Complex Radiance Fields with Real-Time Rendering
extracting a base mesh from gaussians; the fuzzier the material, the thicker the frosting
editing and animation by modifying the mesh
GPS-Gaussian Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis
lift 2D parameter maps (depth estimation) to 3D space
synthesizing novel views of any unseen characters instantly without any fine-tuning or optimization
HiFi4G High-Fidelity Human Performance Rendering via Compact Gaussian Splatting
gaussian from non-rigid tracking
compression rate of approximately 25 times, less than 2MB of storage per frame
Human101 Training 100+FPS Human Gaussians in 100s from 1 View (Gaussian Animation method)
GSM Gaussian Shell Maps for Efficient 3D Human Generation
3D Gaussian rendering primitives for controllable poses and diverse appearances
3DGS-Avatar Animatable Avatars via Deformable 3D Gaussian Splatting
HAHA Highly Articulated Gaussian Human Avatars with Textured Mesh Prior
model learns to apply Gaussian splatting only in areas of mesh where it is necessary
like hair and out-of-mesh clothing
so it can handle the animation of small body parts such as fingers
Relightable Gaussian Codec Avatars <<Relightable Gaussian Codec>>
high-fidelity relightable (real-time) head avatars, eye reflections, animated to novel expressions
GaussianHead Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians
controllable 3D Gaussians, webcam controlled expressions
PSAvatar A Point-based Morphable Shape Model for Real-Time Head Avatar Creation with 3D Gaussian Splatting
parametric morphable model for poses and expressions
MagicMirror Fast and High-Quality Avatar Generation with a Constrained Search Space
conditional NeRF and stable diffusion geometric prior
creation of custom avatars with unparalleled quality and better adherence to input text prompts
SplattingAvatar Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting
disentangle the motion and appearance of a virtual human
control the rotation and translation of the Gaussians directly by mesh
GaussianDreamer Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors
3D diffusion makes point cloud priors and then 2D model enriches the geometry and appearance, 25 min
D3GA: Drivable 3D Gaussian Avatars ==best==
multi-view videos as input
LGM Large Multi-View Gaussian Model for High-Resolution 3D Content Creation ==best==
3D models from text prompts or single-view images, 5 seconds
GaussianDreamer Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models
3D diffusion model provides priors for initialization, 2D model enriches the geometry and appearance; 15 minutes
DreamGaussian Generative Gaussian Splatting for Efficient 3D Content Creation (1 min)
gaussian splatting with mesh extraction and texture refinement in uv space
high-quality textured meshes, just 2 minutes from a single-view image
FSGS Real-Time Few-shot View Synthesis using Gaussian Splatting
real-time and photo-realistic synthesis with three training views
LucidDreamer Domain-free Generation of 3D Gaussian Splatting Scenes
LucidDreamer Towards High-Fidelity Text-to-3D Generation via Interval Score Matching
interval-based score matching to counteract over-smoothing
generated from any text or image prompt (pseudo-depth alignment algorithm)
incorporated 3D Gaussian Splatting
project a portion of point cloud to the desired view and provide the projection
painted images are lifted to 3D space with estimated depth maps, composing new points
DreamPolisher Towards High-Quality Text-to-3D Generation via Geometric Diffusion
coarse 3D generation refined via geometric optimization
then a ControlNet-driven refiner coupled with the geometric consistency improves texture and consistency