AutoInt: Automatic Integration for Fast Neural Volume Rendering
Abstract
Numerical integration is a foundational technique in scientific computing and is at the core of many computer vision applications. Among these applications, implicit neural volume rendering has recently been proposed as a new paradigm for view synthesis, achieving photorealistic image quality. However, a fundamental obstacle to making these methods practical is the extreme computational and memory requirements caused by the required volume integrations along the rendered rays during training and inference. Millions of rays, each requiring hundreds of forward passes through a neural network, are needed to approximate those integrations with Monte Carlo sampling. Here, we propose automatic integration, a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks. For training, we instantiate the computational graph corresponding to the derivative of the implicit neural representation. The graph is fitted to the signal to integrate. After optimization, we reassemble the graph to obtain a network that represents the antiderivative. By the fundamental theorem of calculus, this enables the calculation of any definite integral in two evaluations of the network. Using this approach, we demonstrate a greater than 10× reduction in computation requirements, enabling fast neural volume rendering.
1 Introduction
Image-based rendering and novel view synthesis are fundamental problems in computer vision and graphics (e.g., [Szeliski:book, Carranza:2003]). The ability to interpolate and extrapolate a sparse set of images depicting a 3D scene has broad applications in entertainment, virtual and augmented reality, and many other areas. Emerging neural rendering techniques have recently enabled photorealistic image quality for these tasks (see Sec. 2).
Although state-of-the-art neural volume rendering techniques offer unprecedented image quality, they are also extremely slow and memory inefficient [mildenhall2020nerf]. This is a fundamental obstacle to making these methods practical. The primary computational bottleneck for neural volume rendering is the evaluation of integrals along the rendered rays during training and inference required by the volume rendering equation [max1995optical]. Approximate integration using Monte Carlo sampling is typically used for this purpose, requiring hundreds of forward passes through the implicit networks representing the volume for each of the millions of rays that need to be rendered for a single frame. Here, we develop a general and efficient framework for approximate integration. Applied to the specific problem of neural volume rendering, our framework speeds up the rendering process by more than 10×.
Our integration framework builds on previous work demonstrating that implicit representation networks can represent signals (e.g., images, audio waveforms, or 3D shapes) and their derivatives. That is, taking the derivative of the implicit representation network accurately models the derivative of the original signal. This property has recently been shown for implicit neural representations with periodic activation functions [sitzmann2020siren], but we show that it also extends to a family of representation networks with different nonlinear activation functions (Sec. 3.4 and the supplemental).
We observe that taking the derivative of an implicit representation network results in a new computational graph, a “grad network”, which shares the parameters of the original network. Now, consider that we use as our representation network a multilayer perceptron (MLP). Taking its derivative results in a grad network which can be trained on a signal that we wish to integrate. By reassembling the grad network parameters back into the original MLP, we have constructed a neural network that represents the antiderivative of the signal to integrate.
This procedure results in a closedform solution for the antiderivative, which, by the fundamental theorem of calculus, enables the calculation of any definite integral in two evaluations of the MLP. Inspired by techniques for automatic differentiation (AutoDiff), we call this procedure automatic integration or AutoInt. Although the mechanisms of AutoInt and AutoDiff are very different, both approaches enable the calculation of integrals or derivatives in an automated manner that does not rely on traditional numerical techniques, such as sampling or finite differences.
The primary benefit of AutoInt is that it allows us to evaluate arbitrary definite integrals quickly by querying the network representing the antiderivative. This concept could have important applications across science and engineering; here, we focus on the specific application of neural volume rendering. For this application, being able to quickly evaluate integrals amounts to accelerating rendering (i.e., inference) times, which is crucial for making these techniques more competitive with traditional realtime graphics pipelines. However, our framework still requires a slow training process to optimize a network for a given set of posed 2D images.
Specifically, our contributions include the following:

- We introduce a framework for automatic integration that learns closed-form integral solutions. To this end, we explore new network architectures and training strategies.
- Using automatic integration, we propose a new model and parameterization for neural volume rendering that is efficient in computation and memory.
- We demonstrate neural volume rendering at significantly improved rendering rates, an order of magnitude faster than previous implementations [mildenhall2020nerf].
2 Related Work
Neural Rendering.
Over the last few years, end-to-end differentiable computer vision pipelines have emerged as a powerful paradigm wherein a differentiable or neural scene representation is optimized via differentiable rendering with posed 2D images (see e.g. [tewari2020state] for a survey). Neural scene representations often use an explicit 3D proxy geometry, such as multi-plane [Zhou:2018, Mildenhall:2019, flynn2019deepview] or multi-sphere [Broxton:2020, Attal:2020:ECCV] images or a voxel grid of features [sitzmann2019deepvoxels, Lombardi:2019]. Explicit neural scene representations can be rendered quickly, but they are fundamentally limited by the large amount of memory they consume and thus may not scale well.
As an alternative, implicit neural representations have been proposed as a continuous and memory-efficient approach. Here, the scene is parameterized using neural networks, and 3D awareness is often enforced through inductive biases. The ability to represent details in a scene is limited by the capacity of the network architecture rather than the resolution of a voxel grid, for example. Such representations have been explored for modeling shape parts [genova2019learning, genova2019deep], objects [park2019deepsdf, mescheder2019occupancy, saito2019pifu, sitzmann2019srns, Oechsle2019ICCV, michalkiewicz2019implicit, atzmon2019sal, Niemeyer2020CVPR, liu2020dist, gropp2020implicit, yariv2020multiview, davies2020overfit, chabra2020deep, kohli2020inferring], or scenes [eslami2018neural, jiang2020local, peng2020convolutional, mildenhall2020nerf, liu2020neural, sitzmann2020siren]. Implicit representation networks have also been explored in the context of generative frameworks [chen2019learning, henzler2019platonicgan, hologan, graf, nguyenphuoc2020blockgan].
The method closest to our application is neural radiance fields (NeRF) [mildenhall2020nerf]. NeRF is a neural rendering framework that uses an implicit volume representation combined with a neural volume renderer to achieve state-of-the-art image quality for view synthesis tasks. Specifically, NeRF uses ReLU-based multilayer perceptrons (MLPs) with a positional encoding strategy to represent 3D scenes. Rendering an image from such a representation is done by evaluating the volume rendering equation [max1995optical], which requires integrating along rays passing through the neural volume parameterized by the MLP. This integration is performed using Monte Carlo sampling, which requires hundreds of forward passes through the MLP for each ray. However, this procedure is extremely slow, requiring days to train a representation of a single scene from multiview images. Rendering a frame from a pre-optimized representation requires tens of seconds to minutes.
Here, we leverage automatic integration, or AutoInt, to significantly speed up the evaluation of integrals along rays. AutoInt reduces the number of network queries required to evaluate integrals (e.g., using Monte Carlo sampling) from hundreds to just two. For neural volume rendering, we demonstrate that this achieves a greater than 10× speedup in rendering time.
Integration Techniques.
In general, integration is much more challenging than differentiation. Whereas automatic differentiation primarily builds on the chain rule, there are many different strategies for integration, including variable substitution, integration by parts, partial fractions, etc. Heuristics can be used to choose one or a combination of these strategies for any specific problem. Closed-form solutions to finding general antiderivatives exist only for a relatively small class of functions and, when possible, involve a rather complex algorithm, such as the Risch or Risch–Norman algorithm [risch1969problem, risch1970solution, norman1977implementing]. Perhaps the most common approach to computing integrals in practice is numerical integration, for example using Riemann sums, quadratures, or Monte Carlo methods [davis2007methods]. In these sampling-based methods, the number of samples trades off accuracy for runtime.
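The accuracy-runtime trade-off of sampling-based integration is easy to see in a minimal sketch (illustrative only; the integrand, interval, and sample counts below are arbitrary choices, not from the paper):

```python
import numpy as np

def riemann_integral(f, a, b, n):
    """Midpoint Riemann sum with n samples: accuracy improves with n."""
    t = a + (np.arange(n) + 0.5) * (b - a) / n
    return float(np.sum(f(t)) * (b - a) / n)

# Integrate f(x) = x^2 on [0, 1]; the exact value is 1/3.
f = lambda x: x ** 2
coarse = riemann_integral(f, 0.0, 1.0, 8)
fine = riemann_integral(f, 0.0, 1.0, 1024)

# More samples -> smaller error, at the cost of more evaluations of f.
assert abs(fine - 1 / 3) < abs(coarse - 1 / 3)
```

Each extra digit of accuracy costs more evaluations of `f`; when `f` is a deep network queried millions of times, this cost dominates rendering.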
Since neural networks are universal function approximators, and are themselves functions, they can also be integrated analytically. Previous work has explored theory and connections between shallow neural networks and integral formulations for function approximation [kainen2000integral, dereventsov2019neural]. Other work has derived closed-form solutions for integrals of simple single-layer neural networks [turner2005introducing]. To our knowledge, no previous approach to solving general integrals using neural networks exists. As we shall demonstrate, our work is not limited to a fixed number of layers or a specific architecture. Instead, we directly train a grad network architecture, for which the integral network is known by construction.
3 AutoInt for Implicit Neural Integration
In this section, we introduce a fundamentally new approach to compute and evaluate antiderivatives and definite integrals of implicit neural representations.
3.1 Principles
We consider an implicit neural representation, i.e., a neural network $\Phi_\theta$ with parameters $\theta$ mapping low-dimensional input coordinates $\mathbf{x} \in \mathbb{R}^{n}$ to a low-dimensional output $\Phi_\theta(\mathbf{x})$. We assume this implicit neural representation admits a (sub)gradient with respect to its input $\mathbf{x}$, and we denote by $\Psi_\theta^i = \partial \Phi_\theta / \partial x_i$ its derivative with respect to the coordinate $x_i$. Then, by the fundamental theorem of calculus we have that
$\Phi_\theta(\mathbf{x}) = \int \Psi_\theta^i(\mathbf{x})\, dx_i + \text{const.}$  (1)
This equation relates the implicit neural representation $\Phi_\theta$ to its partial derivative $\Psi_\theta^i$; hence, $\Phi_\theta$ is an antiderivative of $\Psi_\theta^i$.
A key idea is that the partial derivative $\Psi_\theta^i$ is itself an implicit neural representation, mapping the same low-dimensional input coordinates $\mathbf{x}$ to the same low-dimensional output space. In other words, $\Psi_\theta^i$ is a different neural network that shares its parameters with $\Phi_\theta$ while also satisfying Equation 1. Now, rather than optimizing the implicit representation $\Phi_\theta$, we optimize $\Psi_\theta^i$ to represent a target signal, and we reassemble the optimized parameters $\theta$ (i.e., weights and biases) to form $\Phi_\theta$. As a result, $\Phi_\theta$ is a neural network that represents an antiderivative of the signal. We call this procedure of training $\Psi_\theta^i$ and reassembling $\Phi_\theta$ to construct the antiderivative automatic integration. How to reassemble $\Phi_\theta$ depends on the neural network architecture used for $\Psi_\theta^i$, which we address in the next section.
3.2 The Integral and Grad networks
Implicit neural representations are usually based on multilayer perceptron (MLP), or fully connected, architectures:
$\Phi_\theta(\mathbf{x}) = \mathbf{W}_n \big(\phi_{n-1} \circ \phi_{n-2} \circ \cdots \circ \phi_0\big)(\mathbf{x}) + \mathbf{b}_n$  (2)
with $\phi_j$ being the $j$-th layer of the neural network, defined as $\phi_j(\mathbf{x}_j) = \mathrm{NL}(\mathbf{W}_j \mathbf{x}_j + \mathbf{b}_j)$ using the parameters $\theta = \{\mathbf{W}_j, \mathbf{b}_j\}_j$ and the nonlinearity $\mathrm{NL}$, which is a function applied pointwise to all the elements of a vector.
The computational graph of a 3-hidden-layer MLP representing $\Phi_\theta$ is shown in Figure 2. Operations are indicated as nodes and dependencies as directed edges. Here, the arrows of the directed edges point towards nodes that must be computed first.
For this MLP, the form of the grad network $\Psi_\theta^i$ can be found using the chain rule:
$\Psi_\theta^i(\mathbf{x}) = \frac{\partial \Phi_\theta}{\partial x_i}(\mathbf{x}) = \mathbf{W}_n\, \mathrm{diag}\big(\mathrm{NL}'(\mathbf{z}_{n-1})\big)\, \mathbf{W}_{n-1} \cdots \mathrm{diag}\big(\mathrm{NL}'(\mathbf{z}_0)\big)\, \mathbf{W}_0\, \mathbf{e}_i$  (3)
where $\mathbf{z}_j = \mathbf{W}_j \mathbf{x}_j + \mathbf{b}_j$ is the pre-activation of layer $j$ and $\mathbf{e}_i$ is the unit vector that has 0's everywhere but a 1 at the $i$-th component. The corresponding computational graph is shown in Figure 2. As we noted, despite having a different architecture (and a vastly different number of nodes), the two networks share the same parameters. We refer to the network associated with $\Phi_\theta$ as the integral network and the neural network associated with $\Psi_\theta^i$ as the grad network. Homologous nodes in their graphs are shown in the same color. This color scheme also explicitly shows how the grad network parameters are reassembled to create the integral network after training.
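The weight sharing between an integral network and its grad network can be checked numerically. The following sketch (our illustration, using an arbitrary two-hidden-layer tanh MLP rather than the architectures used in this work) builds the chain-rule product of Eq. (3) explicitly from the integral network's weights and verifies it against AutoDiff:

```python
import torch

torch.manual_seed(0)

# A small MLP Phi(x) with tanh nonlinearities (the "integral network").
W0, b0 = torch.randn(16, 2), torch.randn(16)
W1, b1 = torch.randn(16, 16), torch.randn(16)
W2, b2 = torch.randn(1, 16), torch.randn(1)

def phi(x):
    z0 = x @ W0.T + b0
    z1 = torch.tanh(z0) @ W1.T + b1
    return torch.tanh(z1) @ W2.T + b2

def grad_phi(x, i):
    """Grad network: the chain-rule product sharing Phi's weights,
    with NL'(z) = 1 - tanh(z)^2 for the tanh layers."""
    z0 = x @ W0.T + b0
    z1 = torch.tanh(z0) @ W1.T + b1
    d0 = 1 - torch.tanh(z0) ** 2    # NL'(z0)
    d1 = 1 - torch.tanh(z1) ** 2    # NL'(z1)
    # dPhi/dx = W2 diag(d1) W1 diag(d0) W0; column i corresponds to e_i.
    J = W2 @ (d1[0, :, None] * W1) @ (d0[0, :, None] * W0)
    return J[:, i]

x = torch.randn(1, 2, requires_grad=True)
auto = torch.autograd.grad(phi(x).sum(), x)[0][0, 0]
manual = grad_phi(x.detach(), 0)[0]
assert torch.allclose(auto, manual, rtol=1e-4, atol=1e-4)
```

Because `grad_phi` reads the same `W0`, `W1`, `W2` tensors as `phi`, fitting one network automatically determines the other, which is the reassembly step described above.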
3.3 Evaluating Antiderivatives & Definite Integrals
To compute the antiderivative and definite integrals of a function in the AutoInt framework, one first chooses the specifics of the MLP architecture (number of layers, number of features, type of nonlinearities) for an integral network $\Phi_\theta$. The grad network $\Psi_\theta^i$ is then instantiated from this integral network based on AutoDiff. In practice, we developed a custom AutoDiff framework that traces the integral network and explicitly instantiates the corresponding grad network while maintaining the shared parameters (additional details in the supplemental). Once instantiated, parameters of the grad network are optimized to fit a signal of interest using conventional AutoDiff and optimization tools [paszke2019pytorch]. Specifically, we optimize a loss of the form
$\mathcal{L}(\theta) = \sum_{\mathbf{x} \in \mathcal{D}} \ell\big(\Psi_\theta^i(\mathbf{x}),\, f(\mathbf{x})\big)$  (4)
Here, $\ell$ is a cost function that penalizes discrepancies between the target signal $f$ we wish to integrate and the implicit neural representation $\Psi_\theta^i$, evaluated over training samples $\mathcal{D}$. Once trained, the grad network approximates the signal, that is, $\Psi_\theta^i \approx f$. Therefore, the antiderivative of $f$ can be calculated as
$\int f(\mathbf{x})\, dx_i \approx \int \Psi_\theta^i(\mathbf{x})\, dx_i = \Phi_\theta(\mathbf{x}) + \text{const.}$  (5)
This corresponds to evaluating the integral network $\Phi_\theta$ at $\mathbf{x}$ using the optimized weights $\theta$. Furthermore, any definite integral of the signal can be calculated using only two evaluations of $\Phi_\theta$, according to the Newton–Leibniz formula
$\int_{a}^{b} f(\mathbf{x})\, dx_i \approx \Phi_\theta(\mathbf{x})\Big|_{x_i = a}^{x_i = b}$  (6)
We also note that AutoInt extends to integrating high-dimensional signals using a generalized fundamental theorem of calculus, which we describe in the supplemental.
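The full AutoInt loop, fit the derivative and then integrate with two evaluations, can be sketched end to end. For simplicity, this illustration obtains the grad network via AutoDiff at every training step rather than instantiating it explicitly, and all architecture and optimization choices (a tanh MLP, Adam, the sample counts) are arbitrary stand-ins, not the configurations used in this work:

```python
import torch

torch.manual_seed(0)

# Integral network Phi: a small MLP. We never train Phi directly;
# instead we fit its derivative (the grad network) to f(x) = cos(x).
phi = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def psi(x):
    """Derivative of phi w.r.t. x, sharing phi's parameters."""
    x = x.requires_grad_(True)
    return torch.autograd.grad(phi(x).sum(), x, create_graph=True)[0]

opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
x = torch.linspace(-3.0, 3.0, 256).unsqueeze(1)
for _ in range(2000):
    opt.zero_grad()
    loss = ((psi(x) - torch.cos(x)) ** 2).mean()
    loss.backward()
    opt.step()

# Definite integral of cos on [0, pi/2] via two evaluations of Phi;
# the true value is sin(pi/2) - sin(0) = 1.
a, b = torch.zeros(1, 1), torch.full((1, 1), torch.pi / 2)
estimate = (phi(b) - phi(a)).item()
```

After training, any definite integral over the fitted domain costs two forward passes through `phi`, regardless of the interval length.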
3.4 Example in Computed Tomography
In tomography, integrals are at the core of the imaging model: measurements are line integrals of the absorption of a medium along rays that pass through it. In particular, in a parallel-beam setup, assuming a 2D medium of absorption $\mu(\mathbf{x})$, measurements can be modeled as
$s(\alpha, \rho) = \int \mu\big(\mathbf{x}(t)\big)\, dt$  (7)
where $\mathbf{x}(t) = \rho\,(\cos\alpha, \sin\alpha) + t\,(-\sin\alpha, \cos\alpha)$ lies on the ray, with $\alpha$ being the orientation of the ray and $\rho$ its eccentricity with respect to the origin, as shown in Figure 3. The measurement $s$ is called a sinogram, and this particular integral is referred to as the Radon transform of $\mu$ [ct:book].
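As a sanity check, this measurement model can be evaluated numerically for a medium with a known Radon transform. The sketch below (our illustration; the disc phantom and sample counts are arbitrary) computes one sinogram entry by a Riemann sum along the ray:

```python
import numpy as np

def disc_absorption(x, y, radius=1.0):
    """Absorption mu: 1 inside a centered disc, 0 outside."""
    return (x ** 2 + y ** 2 <= radius ** 2).astype(float)

def radon_measurement(alpha, rho, n=4096, t_max=2.0):
    """Line integral of mu along the ray with orientation alpha and
    eccentricity rho, via a midpoint Riemann sum over t in [-t_max, t_max]."""
    dt = 2 * t_max / n
    t = -t_max + (np.arange(n) + 0.5) * dt
    # Point on the ray: rho * (cos a, sin a) + t * (-sin a, cos a)
    x = rho * np.cos(alpha) - t * np.sin(alpha)
    y = rho * np.sin(alpha) + t * np.cos(alpha)
    return float(np.sum(disc_absorption(x, y)) * dt)

# For a unit disc, the chord at eccentricity rho has length
# 2 * sqrt(1 - rho^2), independent of the orientation alpha.
m = radon_measurement(alpha=0.7, rho=0.5)
assert abs(m - 2 * np.sqrt(1 - 0.5 ** 2)) < 1e-2
```

In the AutoInt setting, the sampled sum inside `radon_measurement` is what the grad network replaces during training, and the whole integral collapses to two integral-network evaluations at inference.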
The inverse problem of computed tomography involves recovering the absorption given a sinogram. Here, for illustrative purposes, we will look at a compressed sensing tomography problem in which a grad network is trained on a sparse set of measurements and the integral network is evaluated to produce unseen ones. This setup is analogous to the novel view synthesis problem we solve in Section 4.
We consider a dataset of measurements $s(\alpha_k, \rho_k)$ corresponding to sparsely sampled rays. We train a grad network using the AutoInt framework. For this purpose, we instantiate a grad network $\Psi_\theta = \partial \Phi_\theta / \partial t$ whose input is a tuple $(t, \alpha, \rho)$. It is trained to match the dataset of measurements:
$\min_\theta \sum_k \Big( s(\alpha_k, \rho_k) - \frac{\tau}{N} \sum_{j=1}^{N} \Psi_\theta(t_j, \alpha_k, \rho_k) \Big)^{2}$,  (8)
where $\tau$ denotes the length of the sampled ray segment.
Thus, at training time, the grad network is evaluated $N$ times in a Monte Carlo fashion with samples $t_j$ drawn along each ray. At inference, just two evaluations of $\Phi_\theta$ yield the integral:
$s(\alpha, \rho) \approx \Phi_\theta(t_{\max}, \alpha, \rho) - \Phi_\theta(t_{\min}, \alpha, \rho)$  (9)
Results in Figure 3 show that the two evaluations of the integral network can faithfully reproduce supervised measurements and generalize to unseen data. This generalization, however, depends on the type of nonlinearity used. We show that Swish [ramachandran2017searching] with a normalized positional encoding (details in Sec. 5) generalizes well, while SIREN [sitzmann2020siren] can fit the measurements much better but fails to generalize to unseen views.
Note that both the nonlinearity $\mathrm{NL}$ and its derivative $\mathrm{NL}'$ appear in the grad network architecture (Eq. (3) and Figure 2). This implies that integral networks with ReLU nonlinearities have step functions appearing in the grad network, possibly making training difficult because of nodes with (constant) zero-valued derivatives. We explore several other nonlinearities (with additional details in the supplemental) and show that Swish heuristically performs best in the grad networks used in our application. Still, we believe the study of nonlinearities in grad networks is an important avenue for future work.
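Because the grad network contains $\mathrm{NL}'$ as well as $\mathrm{NL}$, the smoothness of the derivative matters. A quick check of Swish and its analytic derivative (our illustration, using AutoDiff to confirm the closed form) makes the contrast with ReLU's step-function derivative concrete:

```python
import torch

def swish(x):
    return x * torch.sigmoid(x)

def swish_prime(x):
    """Analytic derivative: sigmoid(x) + x * sigmoid(x) * (1 - sigmoid(x))."""
    s = torch.sigmoid(x)
    return s + x * s * (1 - s)

x = torch.linspace(-5, 5, 101, requires_grad=True)
auto = torch.autograd.grad(swish(x).sum(), x)[0]
assert torch.allclose(auto, swish_prime(x.detach()), atol=1e-6)

# ReLU's derivative, by contrast, is a step function: the corresponding
# grad network nodes are piecewise constant with zero derivative
# almost everywhere.
relu_prime = torch.autograd.grad(torch.relu(x).sum(), x)[0]
assert set(relu_prime.tolist()) <= {0.0, 1.0}
```

Swish's derivative is smooth and nonzero over most of its domain, which is consistent with the observation above that it trains more reliably inside grad networks than ReLU.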
4 Neural Volume Rendering
Combining volume rendering techniques with implicit neural representations has proved to be a powerful technique for neural rendering and view synthesis [mildenhall2020nerf]. Here, we briefly overview volume rendering and describe an approximate volume rendering model that enables our efficient rendering approach using AutoInt.
4.1 Volume Rendering
Classical volume rendering techniques are derived from the radiative transfer equation [chandrasekhar2013radiative] with an assumption of minimal scattering in an absorptive and emissive medium [max1995optical, drebin1988volume]. We adopt a rendering model based on tracing rays through the volume [kajiya1984ray, mildenhall2020nerf], where the emission and absorption along camera rays produce color values that are assigned to rendered pixels.
The volume itself is represented as a high-dimensional function parameterized by position $\mathbf{x}$ and viewing direction $\mathbf{d}$. We also define the camera rays $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}$ that traverse the volume from an origin point $\mathbf{o}$ to a ray position $t$. At each position in the volume, an absorption coefficient $\sigma(\mathbf{x})$ gives the probability per differential unit length that a ray is absorbed (i.e., terminates) upon interaction with an infinitesimal particle. Finally, an emissive radiance field $\mathbf{c}(\mathbf{x}, \mathbf{d})$ describes the color of emitted light at each point in space in all directions.
Rendering from the volume requires integrating the emissive radiance along the ray while also accounting for absorption. The transmittance $T(t)$ describes the net reduction in radiance from absorption between the ray origin and the ray position $t$, and is given as
$T(t) = \exp\left(-\int_{t_n}^{t} \sigma\big(\mathbf{r}(s)\big)\, ds\right)$  (10)
where $t_n$ indicates a near bound along the ray. With this expression, we can define the volume rendering equation, which describes the color of a rendered camera ray:
$\mathbf{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma\big(\mathbf{r}(t)\big)\, \mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\, dt$  (11)
Conventionally, the volume rendering equation is computed numerically using Riemann sums, quadratures, or Monte Carlo methods [davis2007methods], whose accuracy thus largely depends on the number of samples taken along the ray.
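A conventional sampling-based evaluation of Eqs. (10) and (11) can be sketched in a few lines (our illustration; a homogeneous medium is used because its rendering integral has a closed form to compare against):

```python
import numpy as np

def render_ray(sigma, color, t_n, t_f, n_samples):
    """Riemann-sum estimate of the volume rendering integral
    C = int_{t_n}^{t_f} T(t) sigma(t) c(t) dt,  T(t) = exp(-int_{t_n}^t sigma)."""
    t = np.linspace(t_n, t_f, n_samples + 1)[:-1]
    dt = (t_f - t_n) / n_samples
    s, c = sigma(t), color(t)
    # Transmittance up to each sample (exclusive cumulative optical depth).
    T = np.exp(-np.concatenate([[0.0], np.cumsum(s[:-1] * dt)]))
    return float(np.sum(T * s * c * dt))

# Homogeneous medium: sigma = 0.5 and c = 1 everywhere, so the integral
# has the closed form C = 1 - exp(-sigma * (t_f - t_n)).
est = render_ray(lambda t: np.full_like(t, 0.5),
                 lambda t: np.ones_like(t), 0.0, 4.0, 2048)
assert abs(est - (1 - np.exp(-0.5 * 4.0))) < 2e-3
```

With a neural volume, each of the `n_samples` evaluations of `sigma` and `color` is a network forward pass, which is exactly the cost AutoInt is designed to remove.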
4.2 Approximate Volume Rendering for Automatic Integration
Automatic integration allows us to efficiently evaluate definite integrals using a closed-form solution for the antiderivative. However, the volume rendering equation cannot be directly evaluated with AutoInt because it consists of multiple nested integrations: the integration of radiance along the ray is weighted by integrals of cumulative transmittance. We therefore approximate this integral in piecewise sections that can each be efficiently evaluated using AutoInt. For $N$ piecewise sections along a ray, we give the approximate volume rendering equation and transmittance as
$\tilde{\mathbf{C}}(\mathbf{r}) = \sum_{i=1}^{N} \bar{T}_i \int_{t_{i-1}}^{t_i} \sigma\big(\mathbf{r}(t)\big)\, \mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\, dt, \qquad \bar{T}_i = \exp\Big(-\sum_{j=1}^{i-1} \bar{\sigma}_j\Big)$  (12)
where
$\bar{\sigma}_j = \int_{t_{j-1}}^{t_j} \sigma\big(\mathbf{r}(t)\big)\, dt$ and $\delta_i = t_i - t_{i-1}$ is the length of each piecewise interval along the ray. After some simplification and substitution into Equation 12, we have the following expression for the piecewise volume rendering equation:
$\tilde{\mathbf{C}}(\mathbf{r}) = \sum_{i=1}^{N} \bar{\sigma}_i\, \bar{\mathbf{c}}_i\, \bar{T}_i, \qquad \bar{\mathbf{c}}_i = \frac{1}{\delta_i} \int_{t_{i-1}}^{t_i} \mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\, dt$  (13)
While this piecewise expression is only an approximation to the full volume rendering equation, it enables us to use AutoInt to efficiently evaluate each piecewise integral over absorption and radiance. In practice, there is a tradeoff between improved computational efficiency and degraded accuracy of the approximation as the number of sections $N$ decreases. We evaluate this tradeoff in the context of volume rendering and learned novel view synthesis in Sec. 6.
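The behavior of the piecewise approximation can be checked numerically. The sketch below (our illustration; the absorption and radiance profiles are arbitrary smooth functions, and per-section integrals are computed by quadrature where AutoInt would use two network evaluations each) shows the approximation converging as the number of sections grows:

```python
import numpy as np

def sigma(t):   # smoothly varying absorption along the ray
    return 0.3 + 0.2 * np.sin(t)

def color(t):   # smoothly varying (scalar) radiance
    return 0.5 + 0.5 * np.cos(0.5 * t)

def integral(f, a, b, n=256):
    """Midpoint quadrature standing in for a per-section AutoInt query."""
    t = a + (np.arange(n) + 0.5) * (b - a) / n
    return float(np.sum(f(t)) * (b - a) / n)

def piecewise_render(t_n, t_f, N):
    """Piecewise volume rendering: per-section absorption integrals
    sigma_bar and average colors c_bar, composited with the
    transmittance accumulated over all preceding sections."""
    edges = np.linspace(t_n, t_f, N + 1)
    C, acc = 0.0, 0.0   # accumulated color and optical depth
    for i in range(N):
        a, b = edges[i], edges[i + 1]
        sigma_bar = integral(sigma, a, b)
        c_bar = integral(color, a, b) / (b - a)
        C += np.exp(-acc) * sigma_bar * c_bar
        acc += sigma_bar
    return C

ref = piecewise_render(0.0, 6.0, 1024)   # fine reference
err8 = abs(piecewise_render(0.0, 6.0, 8) - ref)
err32 = abs(piecewise_render(0.0, 6.0, 32) - ref)
assert err32 < err8   # accuracy improves with the number of sections
```

Each section costs a constant number of network queries under AutoInt, so the number of sections N directly trades rendering time for accuracy, mirroring the runtime/quality trade-off reported in Sec. 6.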
5 Optimization Framework
We evaluate the piecewise volume rendering equation introduced in the previous section using an optimization framework overviewed in Figure 4. At the core of the framework are two MLPs that are used to compute integrals over values of and as we detail in the following.
Network Parameterization.
Rendering an image from the high-dimensional volume represented by the MLPs requires evaluating integrals along each ray $\mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d}$ in the direction of $\mathbf{d}$. Thus, the grad network should represent $\partial \Phi_\theta / \partial t$, the partial derivative of the integral network with respect to the ray parameter $t$. In practice, the networks take as input the values that define each ray: $t$, $\mathbf{o}$, and $\mathbf{d}$. Then, positions along the ray are calculated as $\mathbf{x} = \mathbf{o} + t\,\mathbf{d}$ and passed to the initial layers of the networks together with $\mathbf{d}$. With this dependency on $t$, we use our custom AutoDiff implementation to trace computation through the integral network, define the computational graph that computes the partial derivative with respect to $t$, and instantiate the grad network.
Grad Network Positional Encoding.
As demonstrated by Mildenhall et al. [mildenhall2020nerf], a positional encoding of the input coordinates can significantly improve the ability of a network to render fine details. We adopt a similar scheme, where each input coordinate $p$ is mapped into a higher-dimensional space using a function $\gamma$ defined as
$\gamma(p) = \big(\sin(2^0 \pi p), \cos(2^0 \pi p), \ldots, \sin(2^{L-1} \pi p), \cos(2^{L-1} \pi p)\big)$,  (14)
where $L$ controls the number of frequencies used to encode each input. We find that using this scheme directly in the grad network produces poor results because it introduces an exponentially increasing amplitude scaling into the coordinate encoding. This can easily be seen by calculating the derivative $\frac{d}{dp} \sin(2^k \pi p) = 2^k \pi \cos(2^k \pi p)$. Instead, we use a normalized version of the positional encoding for the integral network, which improves performance when training the grad network:
$\tilde{\gamma}(p) = \left(\frac{\sin(2^0 \pi p)}{2^0 \pi}, \frac{\cos(2^0 \pi p)}{2^0 \pi}, \ldots, \frac{\sin(2^{L-1} \pi p)}{2^{L-1} \pi}, \frac{\cos(2^{L-1} \pi p)}{2^{L-1} \pi}\right)$  (15)
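The amplitude issue and its normalization can be verified directly. In this sketch (our illustration; L = 6 is an arbitrary choice), the Jacobian of the standard encoding contains entries that grow with the frequency $2^k \pi$, while the normalized encoding's derivatives stay bounded:

```python
import torch

L = 6  # number of frequency bands (arbitrary here)
freqs = (2.0 ** torch.arange(L, dtype=torch.float32)) * torch.pi

def gamma(x):
    """Standard positional encoding: sin/cos at frequencies 2^k * pi."""
    return torch.cat([torch.sin(freqs * x), torch.cos(freqs * x)], dim=-1)

def gamma_normalized(x):
    """Each band divided by its frequency, so every component of
    d(gamma)/dx has amplitude at most 1."""
    return torch.cat([torch.sin(freqs * x) / freqs,
                      torch.cos(freqs * x) / freqs], dim=-1)

x = torch.tensor([0.3])
J = torch.autograd.functional.jacobian(gamma, x)
Jn = torch.autograd.functional.jacobian(gamma_normalized, x)

# Unnormalized derivatives blow up with the frequency index k...
assert J.abs().max() > 2 ** (L - 1)
# ...while the normalized encoding keeps them bounded.
assert Jn.abs().max() <= 1.0 + 1e-5
```

Since the grad network is the derivative of the integral network, the normalized encoding keeps all frequency bands at comparable scale in the grad network's inputs rather than letting the highest band dominate by a factor of $2^{L-1}$.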
Predictive Sampling.
While AutoInt is used at inference time, at training time the grad network is optimized by evaluating the piecewise integrals of Equation 13 using a quadrature rule discussed by Max [max1995optical]:
$\tilde{\mathbf{C}}(\mathbf{r}) \approx \sum_{i=1}^{N} \bar{T}_i \big(1 - e^{-\bar{\sigma}_i}\big)\, \bar{\mathbf{c}}_i$  (16)
$\bar{\sigma}_i \approx \frac{\delta_i}{M} \sum_{k=1}^{M} \sigma\big(\mathbf{r}(t_{i,k})\big), \qquad \bar{\mathbf{c}}_i \approx \frac{1}{M} \sum_{k=1}^{M} \mathbf{c}\big(\mathbf{r}(t_{i,k}), \mathbf{d}\big)$  (17)
We use Monte Carlo sampling to evaluate the integrals $\bar{\sigma}_i$ and $\bar{\mathbf{c}}_i$ by querying the networks at many positions $t_{i,k}$ within each interval $[t_{i-1}, t_i]$.
However, some intervals along the ray contribute more to a rendered pixel than others. Thus, assuming we use the same number of samples per interval, we can improve sample efficiency by strategically adjusting the lengths of these intervals to place more samples in positions with large variations in $\sigma$ and $\mathbf{c}$.
To this end, we introduce a small sampling network (illustrated in Figure 4), implemented as an MLP that predicts the interval lengths $\delta_i$. Then, we calculate stratified samples along the ray by subdividing each interval into bins and drawing samples uniformly at random within each bin.
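The sampling step can be sketched as follows (our illustration: the softmax mapping from raw network outputs to interval lengths and the per-bin uniform jitter are plausible stand-ins, not necessarily the exact scheme used here):

```python
import torch

torch.manual_seed(0)

def stratified_samples(raw, t_n, t_f, samples_per_bin=4):
    """Turn a sampling network's raw outputs into N interval lengths
    delta_i (positive, summing to t_f - t_n via a softmax), then draw
    stratified samples uniformly within each interval."""
    delta = torch.softmax(raw, dim=-1) * (t_f - t_n)    # interval lengths
    starts = t_n + torch.cumsum(delta, dim=-1) - delta  # interval left edges
    u = torch.rand(raw.shape[-1], samples_per_bin)      # per-bin jitter
    t = starts[:, None] + u * delta[:, None]
    return t.flatten()

raw = torch.randn(8)   # stand-in for the sampling MLP's output
t = stratified_samples(raw, t_n=2.0, t_f=6.0)
assert float(t.min()) >= 2.0 and float(t.max()) <= 6.0
assert t.numel() == 8 * 4
```

The softmax guarantees that the predicted intervals are positive and exactly tile $[t_n, t_f]$, so larger predicted $\delta_i$ automatically concentrate the fixed per-interval sample budget where the network expects more variation.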
Fast Grad Network Evaluation.
AutoInt can be implemented directly in popular optimization frameworks (e.g., PyTorch [paszke2019pytorch], Tensorflow [tensorflow2015whitepaper]); however, training the grad network is then generally slow and memory inefficient. These inefficiencies stem from the two-step procedure required to compute the grad network output at each training iteration: (1) a forward pass through the integral network is computed, and then (2) AutoDiff calculates the derivative of the output with respect to the input variable of integration. Instead, we implemented a custom AutoDiff framework on top of PyTorch that parses a given integral network and explicitly instantiates the grad network modules with weight sharing (see Figure 2). Then, we evaluate and train the grad network directly, without the overhead of the additional per-iteration forward pass and derivative computation. Compared to the two-step procedure outlined above, our custom framework improves per-iteration training speed by a factor of 1.8× and reduces memory consumption by 15% for the volume rendering application. More details about our AutoInt implementation can be found in the supplemental, and our code will be made public.
Implementation Details.
In our framework, a volume representation is optimized separately for each rendered scene. To optimize the grad networks, we require a collection of RGB images taken of the scene from varying camera positions, and we assume that the camera poses and intrinsic parameters are known. At training time, we randomly sample images from the training dataset, and from each image we randomly sample a number of rays. Then, we optimize the network to minimize the loss function
$\mathcal{L} = \sum_{\mathbf{r}} \big\lVert \tilde{\mathbf{C}}(\mathbf{r}) - \mathbf{C}_{\mathrm{gt}}(\mathbf{r}) \big\rVert_2^2$  (18)
where $\mathbf{C}_{\mathrm{gt}}(\mathbf{r})$ is the ground-truth pixel color for the selected ray.
In our implementation, we train the networks using PyTorch and the Adam optimizer. We use a batch size of 4 images with 1024 rays sampled from each image, and we decay the learning rate by a factor of 0.2 on a fixed iteration schedule. For the sampling network, we evaluate using a fixed number of samples within each piecewise interval and find that using 8, 16, or 32 piecewise intervals produces acceptable results while achieving a significant computational acceleration with AutoInt. Finally, for the positional encoding, we use separate numbers of frequencies $L$ for the position and direction inputs.
6 Results
Table 1. Quantitative comparison averaged across all scenes and test images.

                     NeRF   Neural Volumes   Ours (N=8)   Ours (N=16)   Ours (N=32)
PSNR (dB)            31.0   26.1             25.6         26.0          26.8
Memory (GB)          15.6   10.4             15.5         15.0          15.5
Runtime (s/frame)    30     0.3              2.6          4.8           9.3
We evaluate AutoInt for volume rendering on a synthetic dataset of scenes with challenging geometries and reflectance properties. With qualitative and quantitative results, we demonstrate that the approach achieves high image quality with a greater than 10× improvement in render time compared to the state of the art in neural volume rendering [mildenhall2020nerf].
Our training dataset consists of eight objects, each rendered from 100 different camera positions using the Blender Cycles engine [mildenhall2020nerf]. For the test set, we evaluate on an additional 200 images. We compare AutoInt to two baselines: Neural Radiance Fields (NeRF) [mildenhall2020nerf] and Neural Volumes [Lombardi:2019]. NeRF uses a similar architecture and Monte Carlo sampling with the full volume rendering model, rather than our piecewise approximation and AutoInt. Neural Volumes is a voxel-based method that encodes a deep voxel grid representation of a scene using a convolutional neural network. Novel views are rendered by applying a learned warping operator to the voxel grid and sampling voxel values by marching rays from the camera position.
In Table 1 we report the peak signal-to-noise ratio (PSNR) averaged across all scenes and test images. Using AutoInt for volume rendering outperforms Neural Volumes quantitatively while achieving a greater than 10× improvement in render time relative to NeRF. Our method can also trade off runtime for image quality: increasing the number of piecewise sections evaluated in the approximate volume rendering model improves accuracy, while decreasing the number of sections increases computational efficiency.
We also show qualitative results in Figure 6 for three scenes: Materials, Hot Dog, and Drums. As we detail in Table 1, AutoInt with N=8 sections provides the fastest inference times. The quality of the rendered images improves as we increase the number of sections to N=32, and the proposed technique exhibits fewer artifacts in the Materials scene compared to Neural Volumes. In the Drums scene, AutoInt shows improved modeling of view-dependent effects relative to Neural Volumes and NeRF (e.g., specular highlights on the cymbals) with significantly lower computational requirements compared to NeRF.
7 Discussion
In this work, we introduce a new framework for numerical integration in the context of implicit neural representations. Applied to neural volume rendering, AutoInt enables significant improvements to computational efficiency by learning closed-form solutions to integrals. Our approach is analogous to conventional methods for fast evaluation of the volume rendering equation, for example, methods based on shear-warping [lacroute1994fast] and the Fourier projection-slice theorem [totsuka1993frequency, malzbender1993fourier]. Similar to our method, these techniques use approximations (e.g., with sampling and interpolation) that trade off image quality with computationally efficient rendering. Additionally, we believe our approach is compatible with other recent work that aims to speed up volume rendering by pruning areas of the volume that do not contain the rendered object [liu2020neural]. We envision that AutoInt will enable increased computational efficiencies, not only for the volume rendering equation, but for evaluating other integral equations using neural networks.
A key idea of AutoInt is that an integral network can be automatically created after training a corresponding grad network. Thus, exploring new grad network architectures that enable fast training with rapid convergence is an important and promising direction for future work. Moreover, we believe that AutoInt will be of interest to a wide array of application areas beyond computer vision, especially for problems related to inverse rendering, sparse-view tomography, and compressive sensing.