ADOP: Approximate Differentiable One-Pixel Point Rendering (arxiv.org)
45 points by rohithkp on Oct 16, 2021 | 6 comments




Thank you, this gave me a very good introduction to the work!


So, if I understand correctly, they have created an algorithm that takes several colored point clouds (or photos + depth maps) of a scene and can generate renderings of the scene from novel viewpoints (or with different camera parameters, e.g. exposure or white balance). The results look truly impressive!
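
To make the "one pixel per point" idea concrete, here's a rough sketch of what such a rasterizer does (my own illustration, not the paper's code; K, R, t and the z-buffer loop are assumptions): project each colored point through a pinhole camera and keep the nearest point per pixel.

    import numpy as np

    def splat_one_pixel(points, colors, K, R, t, h, w):
        """points: (N,3) world coords, colors: (N,3) RGB,
        K: (3,3) intrinsics, R/t: pose of the novel camera."""
        cam = points @ R.T + t                 # world -> camera frame
        front = cam[:, 2] > 0                  # drop points behind the camera
        cam, colors = cam[front], colors[front]
        uvz = cam @ K.T                        # perspective projection
        uv = (uvz[:, :2] / uvz[:, 2:3]).astype(int)
        img = np.zeros((h, w, 3))
        zbuf = np.full((h, w), np.inf)
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        for (u, v), c, z in zip(uv[ok], colors[ok], cam[ok, 2]):
            if z < zbuf[v, u]:                 # nearest point wins
                zbuf[v, u], img[v, u] = z, c
        return img                             # sparse image with holes; a CNN fills them

Note the hard z-test makes this non-differentiable in the point positions, which is presumably where the "approximate differentiable" part of the title comes in.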

The whole rendering pipeline is differentiable, so they can backpropagate the error against the ground-truth photos to all the unknowns in the pipeline. Those include values like camera pose parameters, point cloud texture and point positions, weights of a neural network that turns (potentially sparse) point cloud rasterizations into full HDR (high dynamic range) images, and tone mapping parameters like exposure.
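
As a toy illustration of that end-to-end idea (my own PyTorch sketch, not ADOP's actual rasterizer, losses, or network): make every unknown a learnable leaf, render differentiably, and descend on the photometric error. The soft Gaussian splat here is a stand-in so gradients flow; the paper uses a hard one-pixel splat with approximated gradients.

    import torch

    points   = torch.nn.Parameter(torch.randn(1000, 3))   # point positions
    colors   = torch.nn.Parameter(torch.rand(1000, 3))    # per-point texture
    pose_t   = torch.nn.Parameter(torch.zeros(3))         # toy pose: translation only
    exposure = torch.nn.Parameter(torch.zeros(1))         # log-exposure (tone mapping)
    net = torch.nn.Sequential(                            # stand-in hole-filling CNN
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, 3, 3, padding=1))

    def soft_splat(pts, cols, t, h=32, w=32, sigma=1.0):
        # Differentiable stand-in for the rasterizer: every point contributes
        # a Gaussian footprint, so gradients reach pts, cols, and t.
        cam = pts + t
        z = cam[:, 2].clamp(min=1e-3)
        u = cam[:, 0] / z * (w / 2) + w / 2
        v = cam[:, 1] / z * (h / 2) + h / 2
        ys, xs = torch.meshgrid(torch.arange(h).float(), torch.arange(w).float(),
                                indexing="ij")
        wgt = torch.exp(-((xs[..., None] - u) ** 2 + (ys[..., None] - v) ** 2)
                        / (2 * sigma ** 2))               # (h, w, N)
        img = (wgt[..., None] * cols).sum(2) / (wgt.sum(2, keepdim=True) + 1e-6)
        return img.permute(2, 0, 1)                       # (3, h, w)

    gt = torch.rand(3, 32, 32)                            # stand-in ground-truth photo
    opt = torch.optim.Adam([points, colors, pose_t, exposure,
                            *net.parameters()], lr=1e-2)
    for step in range(200):
        hdr = net(soft_splat(points, colors, pose_t)[None])[0]
        ldr = 1 - torch.exp(-exposure.exp() * hdr.relu()) # toy differentiable tone map
        loss = (ldr - gt).abs().mean()                    # photometric error
        opt.zero_grad(); loss.backward(); opt.step()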

(If I understand correctly, they start from something like a SLAM-based point cloud, but I haven't found that explained while skimming the paper.)

Nitpick for the paper: A brief note on what HDR and LDR mean would be useful. The acronyms are not even expanded anywhere.


They're Structure-from-Motion (SfM) generated point clouds. Typical SLAM point clouds are much sparser, so that SLAM can run in real time.


High and Low Dynamic Range
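
To expand a bit: HDR images store (roughly) linear scene radiance with values that can go far above 1, while LDR is the display-ready 0-1 range. A quick illustration with the classic Reinhard operator (my choice of tone map, not necessarily the paper's):

    import numpy as np

    def reinhard_tonemap(hdr, exposure=1.0, gamma=2.2):
        """hdr: linear radiance values, possibly >> 1."""
        x = exposure * hdr
        ldr = x / (1.0 + x)              # compress [0, inf) into [0, 1)
        return ldr ** (1.0 / gamma)      # gamma-encode for display

    hdr = np.array([0.01, 0.5, 3.0, 40.0])   # scene spanning many stops
    print(reinhard_tonemap(hdr))             # everything now fits in [0, 1]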


As I'm interested in the subject, I immediately think about the possibilities of point cloud tech in a game development context. As far as I know, the big hurdle has always been dynamic content like animation/deformation. Does this approach have any bearing on that?



