<h1 align="center" id="heading">FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees</h1>
<p align="center">
<p align="center">
<b><a href="https://cg.cs.uni-bonn.de/person/m-sc-saskia-rabich">Saskia Rabich</a></b>
·
<b><a href="https://cg.cs.uni-bonn.de/person/dr-patrick-stotko">Patrick Stotko</a></b>
·
<b><a href="https://cg.cs.uni-bonn.de/person/prof-dr-reinhard-klein">Reinhard Klein</a></b>
</p>
<p align="center">
University of Bonn
</p>
<h3 align="center">The Visual Computer · Presented at CGI 2024</h3>
<h3 align="center">
<a href="https://doi.org/10.1007/s00371-024-03475-3">Paper</a>
|
<a href="https://arxiv.org/abs/2310.20710">arXiv</a>
|
<a href="https://cg.cs.uni-bonn.de/publication/rabich-2024-fpo">Project Page</a>
|
<a href="https://github.com/SaskiaRabich/FPOplusplus">Code</a>
</h3>
<div align="center"></div>
</p>
<p align="left">
This repository contains data used in "FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees".
</p>
## Usage
To use this data, download and extract the .zip files into a `data` subdirectory in the root directory of the FPO++ source code.
Please refer to the GitHub repository for information on how to run the code.
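Below is a minimal Python sketch of this setup step, shown for convenience. The `downloads` directory holding the .zip files and the `FPOplusplus` checkout path are placeholder assumptions; adjust them to your local setup.
```python
# Minimal sketch: extract the downloaded archives into the `data`
# subdirectory expected by the FPO++ code. The paths below are
# placeholders, not part of this repository.
import zipfile
from pathlib import Path

fpo_root = Path("FPOplusplus")   # root of your FPO++ source checkout (assumed path)
downloads = Path("downloads")    # directory holding the downloaded .zip files (assumed path)

data_dir = fpo_root / "data"
data_dir.mkdir(parents=True, exist_ok=True)

for archive in sorted(downloads.glob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(data_dir)  # unpack the dataset into data/
    print(f"Extracted {archive.name} -> {data_dir}")
```
After extraction, the datasets should reside under `data/` so that the scripts from the GitHub repository can find them.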
## Citation
If you find this data useful for your research, please cite FPO++ as follows:
```
@article{rabich2024FPOplusplus,
  title   = {FPO++: efficient encoding and rendering of dynamic neural radiance fields by analyzing and enhancing {Fourier} {PlenOctrees}},
  author  = {Saskia Rabich and Patrick Stotko and Reinhard Klein},
  journal = {The Visual Computer},
  year    = {2024},
  issn    = {1432-2315},
  doi     = {10.1007/s00371-024-03475-3},
  url     = {https://doi.org/10.1007/s00371-024-03475-3},
}
```
## License
This data is provided under the MIT license.
## Acknowledgements
This work has been funded by the Federal Ministry of Education and Research under grant no. 01IS22094E WEST-AI, by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence, and additionally by the DFG project KL 1142/11-2 (DFG Research Unit FOR 2535 Anticipating Human Behavior).