Ravi Ramamoorthi

University of California, San Diego

H-index: 79

North America-United States

About Ravi Ramamoorthi

Ravi Ramamoorthi is a distinguished researcher at the University of California, San Diego, with an exceptional h-index of 79 overall and a recent h-index of 54 (since 2020). He specializes in Computer Graphics, Computer Vision, and Signal Processing.

His recent articles reflect a diverse array of research interests and contributions to the field:

OpenIllumination: A multi-illumination dataset for inverse rendering evaluation on real objects

Decorrelating ReSTIR samplers via MCMC mutations

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

A differentiable wave optics model for end-to-end imaging system optimization

Importance Sampling BRDF Derivatives

A generalized ray formulation for wave-optics rendering

NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion

Ravi Ramamoorthi Information

University: University of California, San Diego

Position: Professor of Computer Science

Citations (all): 29,940

Citations (since 2020): 18,095

Cited by: 15,753

h-index (all): 79

h-index (since 2020): 54

i10-index (all): 193

i10-index (since 2020): 156

University Profile Page: University of California, San Diego

Ravi Ramamoorthi Skills & Research Interests

Computer Graphics

Computer Vision

Signal Processing

Top articles of Ravi Ramamoorthi

OpenIllumination: A multi-illumination dataset for inverse rendering evaluation on real objects

Authors

Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su

Journal

Advances in Neural Information Processing Systems

Published Date

2024/2/13

We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials, captured under 72 camera views and a large number of different illuminations. For each image in the dataset, we provide accurate camera parameters, illumination ground truth, and foreground segmentation masks. Our dataset enables the quantitative evaluation of most inverse rendering and material decomposition methods for real objects. We examine several state-of-the-art inverse rendering methods on our dataset and compare their performances. The dataset and code can be found on the project page: https://oppo-us-research.github.io/OpenIllumination.

Decorrelating ReSTIR samplers via MCMC mutations

Authors

Rohan Sawhney, Daqi Lin, Markus Kettunen, Benedikt Bitterli, Ravi Ramamoorthi, Chris Wyman, Matt Pharr

Journal

arXiv e-prints

Published Date

2022/10

Monte Carlo rendering algorithms often utilize correlations between pixels to improve efficiency and enhance image quality. For real-time applications in particular, repeated reservoir resampling offers a powerful framework to reuse samples both spatially in an image and temporally across multiple frames. While such techniques achieve equal-error up to 100× faster for real-time direct lighting [Bitterli et al.] and global illumination [Ouyang et al.; Lin et al.], they are still far from optimal. For instance, spatiotemporal resampling often introduces noticeable correlation artifacts, while reservoirs holding more than one sample suffer from impoverishment in the form of duplicate samples. We demonstrate how interleaving Markov Chain Monte Carlo (MCMC) mutations with reservoir resampling helps alleviate these issues, especially in scenes with glossy materials and difficult-to-sample lighting. Moreover, our approach …
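
As a rough illustration of the resampling machinery this abstract builds on, the sketch below runs streaming weighted reservoir resampling (RIS) over candidate samples and then applies a single Metropolis-style mutation to the surviving sample. The toy target function, the uniform source distribution, and the mutation kernel are illustrative assumptions; this is not the paper's estimator, only a minimal picture of how a mutation step can decorrelate a reused reservoir sample.

```python
import math
import random

def target(x):
    # Toy unnormalized target function p_hat on [0, 1] (stands in for the integrand).
    return math.exp(-((x - 0.7) ** 2) / 0.01) + 0.1

def reservoir_resample(candidates, source_pdf):
    """Streaming weighted reservoir resampling (RIS): keep one sample y with
    probability proportional to w_i = p_hat(x_i) / p(x_i); track the weight sum."""
    y, w_sum = None, 0.0
    for x in candidates:
        w = target(x) / source_pdf(x)
        w_sum += w
        if random.random() < w / w_sum:
            y = x
    return y, w_sum

def mutate(y, step=0.05):
    """One Metropolis-style mutation targeting p_hat, used to decorrelate the
    reservoir sample before it is reused spatially or temporally."""
    y_new = min(1.0, max(0.0, y + random.uniform(-step, step)))
    return y_new if random.random() < min(1.0, target(y_new) / target(y)) else y

# Toy usage: estimate the integral of target over [0, 1] with uniform candidates.
M = 32
xs = [random.random() for _ in range(M)]
y, w_sum = reservoir_resample(xs, source_pdf=lambda x: 1.0)
y = mutate(y)                        # decorrelate before the next reuse pass
W = w_sum / (M * target(y))          # contribution weight (sketch; the paper handles this carefully to keep reuse unbiased)
estimate = target(y) * W             # here this equals w_sum / M, the plain MC estimate
```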

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

Authors

Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

Journal

arXiv preprint arXiv:2401.02411

Published Date

2024/1/4

3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries of scenes from collections of 2D images via neural volume rendering. Yet, the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering with post-processing 2D super resolution, which sacrifices multiview consistency and the quality of resolved geometry. Consequently, 3D GANs have not yet been able to fully resolve the rich 3D geometry present in 2D images. In this work, we propose techniques to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail. Our approach employs learning-based samplers for accelerating neural rendering for 3D GAN training using up to 5 times fewer depth samples. This enables us to explicitly "render every pixel" of the full-resolution image during training and inference without post-processing superresolution in 2D. Together with our strategy to learn high-quality surface geometry, our method synthesizes high-resolution 3D geometry and strictly view-consistent images while maintaining image quality on par with baselines relying on post-processing super resolution. We demonstrate state-of-the-art 3D geometric quality on FFHQ and AFHQ, setting a new standard for unsupervised learning of 3D shapes in 3D GANs.

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

Authors

Mukund Varma T, Peihao Wang, Zhiwen Fan, Zhangyang Wang, Hao Su, Ravi Ramamoorthi

Journal

arXiv e-prints

Published Date

2024/3

In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer or scene editing, enabled by large-scale 2D image datasets. At the same time, there has been renewed interest in 3D scene representations such as neural radiance fields from multi-view images. However, the availability of 3D or multiview data is still substantially limited compared to 2D image datasets, making extending 2D vision models to 3D data highly desirable but also very challenging. Indeed, extending a single 2D vision operator like scene editing to 3D typically requires a highly creative method specialized to that task and often requires per-scene optimization. In this paper, we ask the question of whether any 2D vision model can be lifted to make 3D consistent predictions. We answer this question in the affirmative; our new Lift3D method trains to predict unseen views on …

A differentiable wave optics model for end-to-end imaging system optimization

Authors

Chi-Jui Ho, Yash Belhe, Steve Rotenberg, Ravi Ramamoorthi, Tzu-Mao Li, Nick Antipa

Published Date

2024/3/13

In imaging system design, computational applications and optical components are interdependent. End-to-end optimization, jointly optimizing hardware and software, is a prevalent approach. However, most optical simulators use ray optic models, which may lack real-world fidelity. We propose a differentiable wave optics model that accurately simulates light propagation. It exposes performance disparities among physical models. Integrated with unrolled FISTA and color filters, the system consistently yields clear measurements and accurate recovery. By noting the performance degradation caused by deviations from real-world physics, our wave optics model is a superior choice for end-to-end imaging system design.
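
For readers unfamiliar with wave-optics simulation, here is a minimal sketch of scalar free-space propagation via the angular-spectrum method in NumPy. The grid size, wavelength, and aperture are arbitrary toy values; an end-to-end design pipeline would express the same FFT/multiply/IFFT operations in an autodiff framework (e.g. PyTorch or JAX) so gradients can flow back to the optical-element parameters. This is not the authors' model, only the standard propagation operator such models build on.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar field a distance z using the angular-spectrum method.

    field: (N, N) complex array sampled at pitch dx [m]; wavelength and z in meters.
    The FFT/multiply/IFFT sequence is differentiable when written in an autodiff
    framework, which is what end-to-end optimization of the optics requires.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                       # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Propagation kernel H = exp(i k z sqrt(1 - (lambda fx)^2 - (lambda fy)^2))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)                # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: a plane wave through a circular aperture, propagated 5 mm to a sensor.
n, dx, lam = 256, 2e-6, 550e-9
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
aperture = (xx ** 2 + yy ** 2 < (50e-6) ** 2).astype(complex)
sensor_field = angular_spectrum_propagate(aperture, lam, dx, z=5e-3)
intensity = np.abs(sensor_field) ** 2                  # what the sensor would measure
```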

Importance Sampling BRDF Derivatives

Authors

Yash Belhe, Bing Xu, Sai Praveen Bangaru, Ravi Ramamoorthi, Tzu-Mao Li

Journal

ACM Transactions on Graphics

Published Date

2024/2/21

We propose a set of techniques to efficiently importance sample the derivatives of a wide range of BRDF models. In differentiable rendering, BRDFs are replaced by their differential BRDF counterparts which are real-valued and can have negative values. This leads to a new source of variance arising from their change in sign. Real-valued functions cannot be perfectly importance sampled by a positive-valued PDF, and the direct application of BRDF sampling leads to high variance. Previous attempts at antithetic sampling only addressed the derivative with the roughness parameter of isotropic microfacet BRDFs. Our work generalizes BRDF derivative sampling to anisotropic microfacet models, mixture BRDFs, Oren-Nayar, Hanrahan-Krueger, among other analytic BRDFs. Our method first decomposes the real-valued differential BRDF into a sum of single-signed functions, eliminating variance from a change in sign …
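
The core idea of positivization can be shown on a toy 1D integrand: split a sign-changing function into two single-signed parts and importance sample each part with a density proportional to itself. The sketch below does exactly that with a tabulated inverse CDF; the integrand, grid resolution, and sampling routine are illustrative assumptions and stand in for the paper's BRDF-specific decompositions.

```python
import numpy as np

rng = np.random.default_rng(0)

def df(x):
    # Toy real-valued "differential BRDF" on [0, 1]: it changes sign, so it cannot
    # be perfectly importance sampled by a single nonnegative PDF.
    return np.cos(3.0 * np.pi * x)

def sample_proportional(g, n, grid=4096):
    """Draw n samples on [0, 1] with density proportional to the nonnegative
    function g, via a tabulated inverse CDF. Returns samples and their PDF values."""
    xs = (np.arange(grid) + 0.5) / grid
    w = g(xs)
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    idx = np.searchsorted(cdf, rng.random(n))
    pdf = w[idx] / (w.sum() / grid)          # normalized density at the samples
    return xs[idx], pdf

def estimate_integral(n=4096):
    """Estimate I = ∫ df(x) dx by positivization: df = df_plus - df_minus, each
    single-signed part importance sampled by a PDF proportional to itself, so each
    sub-estimator has (near) zero variance up to tabulation error."""
    df_plus = lambda x: np.maximum(df(x), 0.0)
    df_minus = lambda x: np.maximum(-df(x), 0.0)
    xp, pp = sample_proportional(df_plus, n)
    xm, pm = sample_proportional(df_minus, n)
    return np.mean(df_plus(xp) / pp) - np.mean(df_minus(xm) / pm)

print(estimate_integral())   # analytic value: sin(3*pi) / (3*pi) = 0
```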

A generalized ray formulation for wave-optics rendering

Authors

Shlomi Steinberg, Ravi Ramamoorthi, Benedikt Bitterli, Eugene d'Eon, Ling-Qi Yan, Matt Pharr

Journal

arXiv preprint arXiv:2303.15762

Published Date

2023/3/28

Under ray-optical light transport, the classical ray serves as a local and linear "point query" of light's behaviour. Such point queries are useful, and sophisticated path tracing and sampling techniques enable efficiently computing solutions to light transport problems in complex, real-world settings and environments. However, such formulations are firmly confined to the realm of ray optics, while many applications of interest, in computer graphics and computational optics, demand a more precise understanding of light. We rigorously formulate the generalized ray, which enables local and linear point queries of the wave-optical phase space. Furthermore, we present sample-solve: a simple method that serves as a novel link between path tracing and computational optics. We will show that this link enables the application of modern path tracing techniques for wave-optical rendering, improving upon the state-of-the-art in terms of the generality and accuracy of the formalism, ease of application, as well as performance. Sampling using generalized rays enables interactive rendering under rigorous wave optics, with orders-of-magnitude faster performance compared to existing techniques.

NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion

Authors

Jiatao Gu, Alex Trevithick, Kai-En Lin, Joshua M Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi

Published Date

2023/7/3

Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input. Existing approaches condition neural radiance fields (NeRF) on local image features, projecting points to the input image plane, and aggregating 2D features to perform volume rendering. However, under severe occlusion, this projection fails to resolve uncertainty, resulting in blurry renderings that lack details. In this work, we propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test-time. We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views. Our approach significantly outperforms existing NeRF-based and geometry-free approaches on challenging datasets including ShapeNet, ABO, and Clevr3D.

MesoGAN: Generative Neural Reflectance Shells

Authors

Stavros Diolatzis, Jan Novak, Fabrice Rousselle, Jonathan Granskog, Miika Aittala, Ravi Ramamoorthi, George Drettakis

Journal

Computer Graphics Forum

Published Date

2023/9

We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell; a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end‐to‐end training within memory constraints of current hardware, we utilize a hierarchical texturing approach and train our …

Discontinuity-Aware 2D Neural Fields: Supplemental document

Authors

Yash Belhe, Michaël Gharbi, Matthew Fisher, Iliyan Georgiev, Ravi Ramamoorthi, Tzu-Mao Li

Published Date

2023

A potential approach to construct a feature field is to use a hybrid data structure containing both a grid and the set of discontinuity curves. This approach is commonly applied by classical feature-based texture methods [Ramanarayanan et al. 2004; Sen 2004; Tumblin and Choudhury 2004; Tarini and Cignoni 2005; Parilov and Zorin 2008] for representing sharp discontinuities in textures and images. These methods construct a regularly spaced grid and store the features at the corners of every grid cell. Reconstruction within each cell follows by interpolating features at the corners while avoiding smoothing across the discontinuities. Once these methods pick a grid resolution, the locations of the grid vertices are fixed, and they cannot adapt to the topology of the curve network. This makes the approximation accuracy resolution dependent. More precisely, the four values at the corners can resolve at most four regions within each grid cell. At the cost of increasing resolution, the fraction of grid cells with more than four regions can be reduced. Nonetheless, to correctly resolve the color in all regions, they require infinite subdivision, because any finite grid size can always contain cells with more than four regions, as shown in Fig. 1. Therefore, existing methods usually have to modify the topology of the curve network, in a way that violates our continuity criteria (2) and (3) in paper Section 4.1. For example, the method of Parilov and Zorin [2008] requires each grid cell to contain a maximum of two discontinuity curves, and simplifies the curves to satisfy the constraints. Since we use a curved triangulation that adapts to the curve topology, we can achieve …

Neural Free‐Viewpoint Relighting for Glossy Indirect Illumination

Authors

Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, Ravi Ramamoorthi

Journal

Computer Graphics Forum

Published Date

2023/7

Precomputed Radiance Transfer (PRT) remains an attractive solution for real‐time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real‐time. However, practical PRT methods are usually limited to low‐frequency spherical harmonic lighting. All‐frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural‐wavelet PRT solution to high‐frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we …
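
To make the PRT setting concrete, the sketch below shows the classical relighting step: each pixel stores a vector of transport coefficients in an orthonormal basis, and relighting under a new environment map reduces to a dot product with the lighting coefficients in the same basis. Here the basis is a 1D orthonormal Haar transform and the transport vectors are random placeholders; in the paper the transport is high-frequency, view-dependent, and represented neurally, and the environment map is 2D.

```python
import numpy as np

def haar_coeffs(signal):
    """Orthonormal 1D Haar wavelet transform of a length-2^k signal.
    Stands in for projecting an environment map into a wavelet lighting basis."""
    c = signal.astype(float).copy()
    out = []
    while len(c) > 1:
        avg = (c[0::2] + c[1::2]) / np.sqrt(2.0)
        det = (c[0::2] - c[1::2]) / np.sqrt(2.0)
        out.append(det)
        c = avg
    out.append(c)
    return np.concatenate(out[::-1])

# Toy PRT relighting: each pixel stores a precomputed transport vector T_p expressed
# in the same Haar basis; relighting a new environment L is just B_p = T_p · wavelet(L).
n_pixels, n_basis = 4, 16
rng = np.random.default_rng(1)
T = rng.random((n_pixels, n_basis))     # placeholder transport (neurally predicted in the paper)
env = rng.random(n_basis)               # new environment lighting, spatial domain
L = haar_coeffs(env)                    # lighting coefficients in the Haar basis
B = T @ L                               # relit pixel values, one dot product per pixel per frame
```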

NeRFs: The search for the best 3D representation

Authors

Ravi Ramamoorthi

Journal

arXiv preprint arXiv:2308.02751

Published Date

2023/8/5

Neural Radiance Fields or NeRFs have become the representation of choice for problems in view synthesis or image-based rendering, as well as in many other applications across computer graphics and vision, and beyond. At their core, NeRFs describe a new representation of 3D scenes or 3D geometry. Instead of meshes, disparity maps, multiplane images or even voxel grids, they represent the scene as a continuous volume, with volumetric parameters like view-dependent radiance and volume density obtained by querying a neural network. The NeRF representation has now been widely used, with thousands of papers extending or building on it every year, multiple authors and websites providing overviews and surveys, and numerous industrial applications and startup companies. In this article, we briefly review the NeRF representation, and describe the three decades-long quest to find the best 3D representation for view synthesis and related problems, culminating in the NeRF papers. We then describe new developments in terms of NeRF representations and make some observations and insights regarding the future of 3D representations.
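
For concreteness, here is the standard NeRF volume-rendering quadrature the article refers to, written as a short NumPy function. The radiance_field argument is a placeholder for the neural-network query, and the toy field below is just a constant-color sphere used to make the sketch runnable; sample counts and bounds are arbitrary.

```python
import numpy as np

def volume_render(ray_o, ray_d, radiance_field, near=0.0, far=4.0, n_samples=64):
    """NeRF-style volume rendering along one ray.

    radiance_field(pts, d) -> (rgb, sigma) stands in for querying the network.
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
    """
    t = np.linspace(near, far, n_samples)
    delta = np.append(np.diff(t), 1e10)                 # last interval treated as unbounded
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]  # sample positions along the ray
    rgb, sigma = radiance_field(pts, ray_d)             # (n, 3) colors, (n,) densities
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    T = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))    # transmittance up to each sample
    weights = T * alpha
    return (weights[:, None] * rgb).sum(axis=0)         # composited pixel color

# Toy field: a constant-density orange sphere of radius 1 at the origin.
def toy_field(pts, d):
    sigma = 5.0 * (np.linalg.norm(pts, axis=-1) < 1.0).astype(float)
    rgb = np.broadcast_to(np.array([1.0, 0.5, 0.1]), (pts.shape[0], 3))
    return rgb, sigma

color = volume_render(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), toy_field)
```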

Factorized inverse path tracing for efficient and accurate material-lighting estimation

Authors

Rui Zhu*, Liwen Wu*, Mustafa B Yaldiz, Yinhao Zhu, Hong Cai, Janarbek Matai, Fatih Porikli, Tzu-Mao Li, Manmohan Chandraker, Ravi Ramamoorthi

Published Date

2023/4/12

Inverse path tracing has recently been applied to joint material and lighting estimation, given geometry and multi-view HDR observations of an indoor scene. However, it has two major limitations: path tracing is expensive to compute, and ambiguities exist between reflection and emission. Our Factorized Inverse Path Tracing (FIPT) addresses these challenges by using a factored light transport formulation and finds emitters driven by rendering errors. Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities. The exhaustive experiments on synthetic scenes show that our method (1) outperforms state-of-the-art indoor inverse rendering and relighting methods particularly in the presence of complex illumination effects; (2) speeds up inverse path tracing optimization to less than an hour. We further demonstrate robustness to noisy inputs through material and lighting estimates that allow plausible relighting in a real scene. The source code is available at: https://github.com/lwwu2/fipt

PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN

Authors

K-E Lin, Alex Trevithick, Keli Cheng, Michel Sarkis, Mohsen Ghafoorian, Ning Bi, Gerhard Reitmayr, Ravi Ramamoorthi

Journal

Computer Graphics Forum

Published Date

2023/7

Portrait synthesis creates realistic digital avatars which enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in synthesizing photorealistic and accurate reconstruction of human faces. However, previous methods often focus on frontal face synthesis and most methods are not able to handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take as input a monocular video of a face, and create an editable dynamic portrait able to handle extreme head poses. The user can create novel viewpoints, edit the appearance, and animate the face. Our method utilizes pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. Then we can input pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and …

Real-time radiance fields for single-image portrait view synthesis

Authors

Alex Trevithick, Matthew Chan, Michael Stengel, Eric Chan, Chao Liu, Zhiding Yu, Sameh Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

Journal

ACM Transactions on Graphics (TOG)

Published Date

2023/8/1

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., face portrait) in real-time. Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering. Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization. To train our triplane encoder pipeline, we use only synthetic data, showing how to distill the knowledge from a pretrained 3D GAN into a feedforward encoder. Technical contributions include a Vision Transformer-based triplane encoder, a camera data augmentation strategy, and a well-designed loss function for synthetic data training. We benchmark against the state-of-the-art methods, demonstrating significant improvements in …
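
A minimal sketch of the triplane lookup mentioned above: project each 3D point onto three axis-aligned feature planes, sample each plane bilinearly, and aggregate the features (summation here). Plane resolution, feature width, and the aggregation rule are assumptions for illustration; in the actual system the planes are predicted by the image encoder, and the aggregated feature would typically be decoded by a small MLP into color and density for volume rendering.

```python
import numpy as np

def bilinear(plane, uv):
    """Bilinearly sample an (H, W, C) feature plane at continuous coords uv in [0, 1]^2."""
    h, w, _ = plane.shape
    x = np.clip(uv[:, 0] * (w - 1), 0, w - 1)
    y = np.clip(uv[:, 1] * (h - 1), 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    return (plane[y0, x0] * (1 - fx) * (1 - fy) + plane[y0, x1] * fx * (1 - fy)
            + plane[y1, x0] * (1 - fx) * fy + plane[y1, x1] * fx * fy)

def sample_triplane(planes, pts):
    """Triplane lookup: project each 3D point onto the XY, XZ, and YZ planes,
    sample each feature plane bilinearly, and aggregate by summation."""
    uv = (pts + 1.0) / 2.0                     # map points in [-1, 1]^3 to [0, 1]^3
    f_xy = bilinear(planes[0], uv[:, [0, 1]])
    f_xz = bilinear(planes[1], uv[:, [0, 2]])
    f_yz = bilinear(planes[2], uv[:, [1, 2]])
    return f_xy + f_xz + f_yz

# Toy usage: three 32x32 planes of 8-channel features, queried at random 3D points.
rng = np.random.default_rng(0)
planes = rng.standard_normal((3, 32, 32, 8))
features = sample_triplane(planes, rng.uniform(-1, 1, size=(5, 3)))
```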

Neural BSSRDF: Object Appearance Representation Including Heterogeneous Subsurface Scattering

Authors

Thomson TG, Jeppe Revall Frisvad, Ravi Ramamoorthi, Henrik Wann Jensen

Journal

arXiv preprint arXiv:2312.15711

Published Date

2023/12/25

Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive both in terms of memory and computation. If we do path tracing and use a high dynamic range lighting environment, the rendering becomes computationally heavy. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. The neural representation function resembles a bidirectional scattering-surface reflectance distribution function (BSSRDF). However, conventional BSSRDF models assume a planar half-space medium and only surface variation of the material, which is often not a good representation of the appearance of real-world objects. Our method represents the BSSRDF of a full object taking its geometry and heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures all-frequency relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is considered a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to store highly complex materials while having high accuracy when comparing to reference images of the represented object in unseen lighting environments. As compared with path tracing of a heterogeneous light scattering …
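
As a purely illustrative sketch of the kind of architecture the abstract describes, the code below runs a forward pass of an MLP with a skip connection that re-concatenates the 9-dimensional input (surface position, observation direction, incidence direction) partway through the network. The layer widths, skip placement, random weights, and absence of any input encoding are all assumptions made for the sketch; this is not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    # Random weights just to make the sketch runnable; real weights come from training.
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def mlp_with_skips(x, layers, skip_at):
    """Forward pass of an MLP that re-concatenates the input x at the layer indices
    listed in skip_at (the skip connections mentioned in the abstract)."""
    h = x
    for i, (W, b) in enumerate(layers[:-1]):
        if i in skip_at:
            h = np.concatenate([h, x], axis=-1)
        h = np.maximum(h @ W + b, 0.0)            # ReLU hidden layers
    W, b = layers[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))     # sigmoid output -> RGB in [0, 1]

# Input: position (3), observation direction (3), incident light direction (3).
d_in, width = 9, 64
layers = [make_layer(d_in, width), make_layer(width, width),
          make_layer(width + d_in, width), make_layer(width, 3)]   # skip before layer 2
x = np.concatenate([rng.uniform(-1, 1, 3), rng.standard_normal(3), rng.standard_normal(3)])
rgb = mlp_with_skips(x, layers, skip_at={2})
```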

Vision transformer for NeRF-based view synthesis from a single input image

Authors

Kai-En Lin, Yen-Chen Lin, Wei-Sheng Lai, Tsung-Yi Lin, Yi-Chang Shih, Ravi Ramamoorthi

Published Date

2023

Although neural radiance fields (NeRF) have shown impressive advances in novel view synthesis, most methods require multiple input images of the same scene with accurate camera poses. In this work, we seek to substantially reduce the inputs to a single unposed image. Existing approaches using local image features to reconstruct a 3D object often render blurry predictions at viewpoints distant from the source view. To address this, we propose to leverage both the global and local features to form an expressive 3D representation. The global features are learned from a vision transformer, while the local features are extracted from a 2D convolutional network. To synthesize a novel view, we train a multi-layer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering. This novel 3D representation allows the network to reconstruct unseen regions without enforcing constraints like symmetry or canonical coordinate systems. Our method renders novel views from just a single input image, and generalizes across multiple object categories using a single model. Quantitative and qualitative evaluations demonstrate that the proposed method achieves state-of-the-art performance and renders richer details than existing approaches.

View Synthesis of Dynamic Scenes Based on Deep 3D Mask Volume

Authors

Kai-En Lin, Guowei Yang, Lei Xiao, Feng Liu, Ravi Ramamoorthi

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Published Date

2023

Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations. The next key step in immersive virtual experiences is view synthesis of dynamic scenes. However, several challenges exist due to the lack of high-quality training datasets, and the additional time dimension for videos of dynamic scenes. To address this issue, we introduce a multi-view video dataset, captured with a custom 10-camera rig at 120 FPS. The dataset contains 96 high-quality scenes showing various visual effects and human interactions in outdoor scenes. We develop a new algorithm, Deep 3D Mask Volume, which enables temporally-stable view extrapolation from binocular videos of dynamic scenes, captured by static cameras. Our algorithm addresses the temporal inconsistency of disocclusions by identifying the error-prone areas with a 3D mask volume, and …

Parameter-space ReSTIR for differentiable and inverse rendering

Authors

Wesley Chang, Venkataram Sivaram, Derek Nowrouzezahrai, Toshiya Hachisuka, Ravi Ramamoorthi, Tzu-Mao Li

Published Date

2023/7/23

Differentiable rendering is frequently used in gradient descent-based inverse rendering pipelines to solve for scene parameters – such as reflectance or lighting properties – from target image inputs. Efficient computation of accurate, low variance gradients is critical for rapid convergence. While many methods employ variance reduction strategies, they operate independently on each gradient descent iteration, requiring large sample counts and computation. Gradients may however vary slowly between iterations, leading to unexplored potential benefits when reusing sample information to exploit this coherence. We develop an algorithm to reuse Monte Carlo gradient samples between gradient iterations, motivated by reservoir-based temporal importance resampling in forward rendering. Direct application of this method is not feasible, as we are computing many derivative estimates (i.e., one per optimization …

Conditional Resampled Importance Sampling and ReSTIR

Authors

Markus Kettunen, Daqi Lin, Ravi Ramamoorthi, Thomas Bashford-Rogers, Chris Wyman

Published Date

2023/12/10

Recent work on generalized resampled importance sampling (GRIS) enables importance-sampled Monte Carlo integration with random variable weights replacing the usual division by probability density. This enables very flexible spatiotemporal sample reuse, even if neighboring samples (e.g., light paths) have intractable probability densities. Unlike typical Monte Carlo integration, which samples according to some PDF, GRIS instead resamples existing samples. But resampling with GRIS assumes samples have tractable marginal contribution weights, which is problematic if reusing, for example, light subpaths from unidirectionally-sampled paths. Reusing such subpaths requires conditioning by (non-reused) segments of the path prefixes. In this paper, we extend GRIS to conditional probability spaces, showing correctness given certain conditional independence between integration variables and their unbiased …


Ravi Ramamoorthi FAQs

What is Ravi Ramamoorthi's h-index at University of California, San Diego?

Ravi Ramamoorthi's h-index is 79 overall and 54 since 2020.

What are Ravi Ramamoorthi's top articles?

The top articles of Ravi Ramamoorthi at University of California, San Diego include:

OpenIllumination: A multi-illumination dataset for inverse rendering evaluation on real objects

Decorrelating ReSTIR samplers via MCMC mutations

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

A differentiable wave optics model for end-to-end imaging system optimization

Importance Sampling BRDF Derivatives

A generalized ray formulation for wave-optics rendering

NerfDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion

...

What are Ravi Ramamoorthi's research interests?

Ravi Ramamoorthi's research interests are Computer Graphics, Computer Vision, and Signal Processing.

What is Ravi Ramamoorthi's total number of citations?

Ravi Ramamoorthi has 29,940 citations in total.

Who are the co-authors of Ravi Ramamoorthi?

Ravi Ramamoorthi's co-authors include Patrick Hanrahan, Steven Feiner, Maneesh Agrawala, David W. Jacobs, and James F. O'Brien.

Co-Authors

Patrick Hanrahan, Stanford University (H-index: 97)

Steven Feiner, Columbia University in the City of New York (H-index: 83)

Maneesh Agrawala, Stanford University (H-index: 81)

David W. Jacobs, University of Maryland, Baltimore (H-index: 64)

James F. O'Brien, University of California, Berkeley (H-index: 60)
