antalvdb committed
Commit a5ab2e5 · verified · 1 Parent(s): d9e4fd5

Update index.html

Files changed (1)
  index.html +21 -21
index.html CHANGED
@@ -100,32 +100,32 @@
  <h2 class="title is-3">Abstract</h2>
  <div class="content has-text-justified">
  <p>
- We present the first method capable of photorealistically reconstructing a non-rigidly
- deforming scene using photos/videos captured casually from mobile phones.
  </p>
  <p>
- Our approach augments neural radiance fields
- (NeRF) by optimizing an
- additional continuous volumetric deformation field that warps each observed point into a
- canonical 5D NeRF.
- We observe that these NeRF-like deformation fields are prone to local minima, and
- propose a coarse-to-fine optimization method for coordinate-based models that allows for
- more robust optimization.
- By adapting principles from geometry processing and physical simulation to NeRF-like
- models, we propose an elastic regularization of the deformation field that further
- improves robustness.
  </p>
  <p>
- We show that <span class="dnerf">Nerfies</span> can turn casually captured selfie
- photos/videos into deformable NeRF
- models that allow for photorealistic renderings of the subject from arbitrary
- viewpoints, which we dub <i>"nerfies"</i>. We evaluate our method by collecting data
- using a
- rig with two mobile phones that take time-synchronized photos, yielding train/validation
- images of the same pose at different viewpoints. We show that our method faithfully
- reconstructs non-rigidly deforming scenes and reproduces unseen views with high
- fidelity.
  </p>
  </div>
  </div>
  </div>
 
  <h2 class="title is-3">Abstract</h2>
  <div class="content has-text-justified">
  <p>
+ WOPR, short for Word Predictor, is a memory-based language model developed in 2006-2011.
  </p>
  <p>
+ A memory-based language model, in this case running on the TiMBL classifier,
+ is in the most basic sense a <i>k</i>-nearest neighbor classifier. However, on
+ tasks like next-word prediction, <i>k</i>-NN becomes prohibitively slow. Fortunately,
+ TiMBL offers a number of fast approximations of <i>k</i>-NN classification, all
+ partly based on decision-tree classification and many orders of magnitude faster.
  </p>
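
To make the k-NN formulation above concrete, here is a minimal Python sketch of memory-based next-word prediction: fixed-width left contexts are stored verbatim, and prediction is a majority vote over the k most similar stored contexts. This is an illustration of the general technique under assumed toy settings (positional-overlap similarity, a context window of 3), not WOPR's or TiMBL's actual code; all names are invented for the example.

from collections import Counter

# Illustrative sketch of memory-based next-word prediction as plain
# k-NN (not WOPR's actual code). Each training instance is a fixed-width
# left context paired with the word that followed it in the training text.

def make_instances(tokens, width=3):
    """Turn a token stream into (context, next_word) training instances."""
    padded = ["_"] * width + tokens
    return [(tuple(padded[i:i + width]), padded[i + width])
            for i in range(len(tokens))]

def overlap(a, b):
    """Similarity = number of context positions with an exact match."""
    return sum(x == y for x, y in zip(a, b))

def predict(memory, context, k=3):
    """Majority vote over the k stored contexts most similar to `context`."""
    nearest = sorted(memory, key=lambda inst: overlap(inst[0], context),
                     reverse=True)[:k]
    return Counter(word for _, word in nearest).most_common(1)[0][0]

memory = make_instances("the cat sat on the mat".split())
print(predict(memory, ("sat", "on", "the")))  # -> mat

The linear scan over the whole instance base inside predict() is exactly what makes plain k-NN prohibitively slow on next-word prediction, and what TiMBL's decision-tree approximations such as IGTree avoid.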
  <p>
+ Compared to Transformer-based LLMs, memory-based LLMs are, on the plus side:
  </p>
+ <ul>
+ <li>very efficient in training: training is essentially reading the data (in linear time)
+ and compressing it into a decision-tree structure, as sketched after these lists.
+ This can be done on CPUs with sufficient RAM;</li>
+ <li>fairly efficient in generation when running the fastest decision-tree
+ approximations of <i>k</i>-NN classification. This can be done on CPUs as well,
+ though accuracy is traded for speed.</li>
+ </ul>
+ <p>On the downside,</p>
+ <ul>
+ <li>memory requirements during training are heavy for large datasets (>100 million words);</li>
+ <li>memory-based LLMs are not efficient at generation time when running the relatively
+ slower approximations of <i>k</i>-NN classification, trading speed for accuracy.</li>
+ </ul>
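
As a rough illustration of the training bullet above (one linear pass that compresses the instance base into a decision-tree structure), here is an IGTree-flavoured Python sketch. It is an assumption-heavy toy: features are consumed in a fixed nearest-word-first order rather than ordered by information gain as in the real IGTree, and nothing here is TiMBL's actual implementation.

from collections import Counter, defaultdict

# IGTree-flavoured sketch (illustrative, not TiMBL's code): training
# compresses instances into a trie, one level per context feature,
# storing the majority next-word at every node. Classification walks
# the trie and backs off to the deepest matching node's majority label.

class Node:
    def __init__(self):
        self.votes = Counter()            # next-word counts seen below this node
        self.children = defaultdict(Node)

    def default(self):
        return self.votes.most_common(1)[0][0]

def train(instances):
    root = Node()
    for context, word in instances:       # one linear pass over the data
        node = root
        node.votes[word] += 1
        for feature in reversed(context): # nearest word first
            node = node.children[feature]
            node.votes[word] += 1
    return root

def classify(root, context):
    node = root
    for feature in reversed(context):
        if feature not in node.children:
            break                         # mismatch: back off to majority label
        node = node.children[feature]
    return node.default()

tree = train([(("the", "cat", "sat"), "on"),
              (("cat", "sat", "on"), "the"),
              (("sat", "on", "the"), "mat")])
print(classify(tree, ("sat", "on", "the")))  # -> mat

Classification touches at most one node per context feature instead of scanning every stored instance, which is where the orders-of-magnitude speedup comes from; the price is that a mismatched feature falls back to a coarser majority vote, trading accuracy for speed.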
  </div>
  </div>
  </div>