William T. Freeman

Massachusetts Institute of Technology

H-index: 141

North America-United States

Description

William T. Freeman is a distinguished researcher at the Massachusetts Institute of Technology with an exceptional h-index of 141 overall and 95 since 2020. He specializes in computer vision and computational photography.

Professor Information

University

Massachusetts Institute of Technology

Position

Professor of Computer Science

Citations (all)

104,010

Citations (since 2020)

40,040

Cited By

79,142

h-index (all)

141

h-index (since 2020)

95

i10-index (all)

352

i10-index (since 2020)

268

Email

University Profile Page

Massachusetts Institute of Technology

Research & Interests List

computer vision

computational photography

Top articles of William T. Freeman

The persistent shadow of the supermassive black hole of M 87

In April 2019, the Event Horizon Telescope (EHT) Collaboration reported the first-ever event-horizon-scale images of a black hole, resolving the central compact radio source in the giant elliptical galaxy M 87. These images reveal a ring with a southerly brightness distribution and a diameter of ∼42 μas, consistent with the predicted size and shape of a shadow produced by the gravitationally lensed emission around a supermassive black hole. These results were obtained as part of the April 2017 EHT observation campaign, using a global very long baseline interferometric radio array operating at a wavelength of 1.3 mm. Here, we present results based on the second EHT observing campaign, taking place in April 2018 with an improved array, wider frequency coverage, and increased bandwidth. In particular, the additional baselines provided by the Greenland telescope improved the coverage of the array. Multiyear …

Authors

Kazunori Akiyama,Antxon Alberdi,Walter Alef,Juan Carlos Algaba,Richard Anantua,Keiichi Asada,Rebecca Azulay,Uwe Bach,Anne-Kathrin Baczko,David Ball,Mislav Baloković,Bidisha Bandyopadhyay,John Barrett,Michi Bauböck,Bradford A Benson,Dan Bintley,Lindy Blackburn,Raymond Blundell,Katherine L Bouman,Geoffrey C Bower,Hope Boyce,Michael Bremer,Roger Brissenden,Silke Britzen,Avery E Broderick,Dominique Broguiere,Thomas Bronzwaer,Sandra Bustamante,John E Carlstrom,Andrew Chael,Chi-kwan Chan,Dominic O Chang,Koushik Chatterjee,Shami Chatterjee,Ming-Tang Chen,Yongjun Chen,Xiaopeng Cheng,Ilje Cho,Pierre Christian,Nicholas S Conroy,John E Conway,Thomas M Crawford,Geoffrey B Crew,Alejandro Cruz-Osorio,Yuzhu Cui,Rohan Dahale,Jordy Davelaar,Mariafelicia De Laurentis,Roger Deane,Jessica Dempsey,Gregory Desvignes,Jason Dexter,Vedant Dhruv,Indu K Dihingia,Sheperd S Doeleman,Sergio A Dzib,Ralph P Eatough,Razieh Emami,Heino Falcke,Joseph Farah,Vincent L Fish,Edward Fomalont,H Alyson Ford,Marianna Foschi,Raquel Fraga-Encinas,William T Freeman,Per Friberg,Christian M Fromm,Antonio Fuentes,Peter Galison,Charles F Gammie,Roberto García,Olivier Gentaz,Boris Georgiev,Ciriaco Goddi,Roman Gold,Arturo I Gómez-Ruiz,José L Gómez,Minfeng Gu,Mark Gurwell,Kazuhiro Hada,Daryl Haggard,Ronald Hesper,Dirk Heumann,Luis C Ho,Paul Ho,Mareki Honma,Chih-Wei L Huang,Lei Huang,David H Hughes,Shiro Ikeda,CM Violette Impellizzeri,Makoto Inoue,Sara Issaoun,David J James,Buell T Jannuzi,Michael Janssen,Britton Jeter,Wu Jiang,Alejandra Jiménez-Rosales,Michael D Johnson,Svetlana Jorstad,Adam C Jones,Abhishek V Joshi,Taehyun Jung,Ramesh Karuppusamy,Tomohisa Kawashima,Garrett K Keating,Mark Kettenis,Dong-Jin Kim,Jae-Young Kim,Jongsoo Kim,Junhan Kim,Motoki Kino,Jun Yi Koay,Prashant Kocherlakota,Yutaro Kofuji,Patrick M Koch,Shoko Koyama,Carsten Kramer,Joana A Kramer,Michael Kramer,Thomas P Krichbaum,Cheng-Yu Kuo,Noemi La Bella,Sang-Sung Lee,Aviad Levis,Zhiyuan Li,Rocco Lico,Greg Lindahl,Michael Lindqvist,Mikhail Lisakov,Jun Liu,Kuo Liu,Elisabetta Liuzzo,Wen-Ping Lo,Andrei P Lobanov,Laurent Loinard,Colin J Lonsdale,Amy E Lowitz,Ru-Sen Lu,Nicholas R MacDonald,Jirong Mao,Nicola Marchili,Sera Markoff,Daniel P Marrone,Alan P Marscher,Iván Martí-Vidal,Satoki Matsushita,Lynn D Matthews

Journal

Astronomy & Astrophysics

Published Date

2024/1/1

Featup: A model-agnostic framework for features at any resolution

Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution to directly perform dense prediction tasks like segmentation and depth prediction because models aggressively pool information over large areas. In this work, we introduce FeatUp, a task- and model-agnostic framework to restore lost spatial information in deep features. We introduce two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multi-view consistency loss with deep analogies to NeRFs. Our features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains even without re-training. We show that FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation.
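
The single-forward-pass variant mentioned above guides low-resolution features with a high-resolution signal. The sketch below is a minimal, hypothetical illustration of that general idea, upsampling features with a joint-bilateral-style filter steered by the input image; it is not the authors' implementation, and the function name and parameters are placeholders.

```python
# Hypothetical sketch: guided upsampling of low-resolution deep features using
# the high-resolution input image as a guidance signal (joint-bilateral style).
import torch
import torch.nn.functional as F

def joint_bilateral_upsample(feats_lr, image_hr, radius=2, sigma_spatial=1.0, sigma_range=0.1):
    """feats_lr: (C, h, w) low-res features; image_hr: (3, H, W) guidance image."""
    _, H, W = image_hr.shape
    # Naively upsample the features to the target resolution first.
    feats_up = F.interpolate(feats_lr[None], size=(H, W), mode="bilinear", align_corners=False)[0]
    out = torch.zeros_like(feats_up)
    weight_sum = torch.zeros(1, H, W)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_feats = torch.roll(feats_up, shifts=(dy, dx), dims=(1, 2))
            shifted_img = torch.roll(image_hr, shifts=(dy, dx), dims=(1, 2))
            # Spatial weight from the pixel offset, range weight from guidance-image similarity.
            w_spatial = torch.exp(torch.tensor(-(dx * dx + dy * dy) / (2 * sigma_spatial ** 2)))
            w_range = torch.exp(-((image_hr - shifted_img) ** 2).sum(0, keepdim=True) / (2 * sigma_range ** 2))
            weight = w_spatial * w_range
            out += weight * shifted_feats
            weight_sum += weight
    return out / weight_sum.clamp_min(1e-8)

feats_lr = torch.randn(64, 14, 14)   # stand-in for a backbone's patch features
image_hr = torch.rand(3, 224, 224)   # the corresponding input image
print(joint_bilateral_upsample(feats_lr, image_hr).shape)  # torch.Size([64, 224, 224])
```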

Authors

Stephanie Fu,Mark Hamilton,Laura Brandt,Axel Feldmann,Zhoutong Zhang,William T Freeman

Journal

ICLR 2024

Published Date

2024

VizieR Online Data Catalog: M87* EHT image (Event Horizon Tel. Coll.+, 2024)

FITS files of the representative image of M 87* from the EHT observations taken on 2018 April 21 at band 3, as found in Figure 1 of the paper. The image is created by averaging together the blurred images from each of the methods described in Sect. 5 of the paper.
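
FITS images such as the representative M 87* image in this catalog can be inspected with standard astronomy tooling. A minimal sketch using astropy is shown below; the file name is a placeholder, not the catalog's actual path.

```python
# Minimal sketch: reading a FITS image with astropy. The file name is a placeholder.
from astropy.io import fits

with fits.open("m87_2018_band3.fits") as hdul:  # hypothetical local file name
    hdul.info()                   # list the HDUs in the file
    image = hdul[0].data          # image array from the primary HDU
    header = hdul[0].header       # observation metadata (WCS, brightness unit, ...)
    print(None if image is None else image.shape, header.get("BUNIT"))
```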

Authors

K Akiyama,A Alberdi,W Alef,J Carlos Algaba,R Anantua,K Asada,R Azulay,U Bach,A-K Baczko,D Ball,M Balokovic,B Bandyopadhyay,J Barrett,M Bauboeck,BA Benson,D Bintley,L Blackburn,R Blundell,KL Bouman,GC Bower,H Boyce,M Bremer,R Brissenden,S Britzen,AE Broderick,D Broguiere,T Bronzwaer,S Bustamante,JE Carlstrom,A Chael,C-K Chan,DO Chang,K Chatterjee,S Chatterjee,M-T Chen,Y Chen,X Cheng,I Cho,P Christian,NS Conroy,JE Conway,TM Crawford,GB Crew,A Cruz-Osorio,Y Cui,R Dahale,J Davelaar,M de Laurentis,R Deane,J Dempsey,G Desvignes,J Dexter,V Dhruv,IK Dihingia,SS Doeleman,SA Dzib,RP Eatough,R Emami,H Falcke,J Farah,VL Fish,E Fomalont,HA Ford,M Foschi,R Fraga-Encinas,WT Freeman,P Friberg,CM Fromm,A Fuentes,P Galison,CF Gammie,R Garcia,O Gentaz,B Georgiev,C Goddi,R Gold,AI Gomez-Ruiz,JL Gomez,M Gu,M Gurwell,K Hada,D Haggard,R Hesper,D Heumann,LC Ho,P Ho,M Honma,C-WL Huang,L Huang,DH Hughes,S Ikeda,CMV Impellizzeri,M Inoue,S Issaoun,DJ James,BT Jannuzi,M Janssen,B Jeter,W Jiang,A Jimenez-Rosales,MD Johnson,S Jorstad,AC Jones,AV Joshi,T Jung,R Karuppusamy,T Kawashima,GK Keating,M Kettenis,D-J Kim,J-Y Kim,J Kim,J Kim,M Kino,J Yi Koay,P Kocherlakota,Y Kofuji,PM Koch,S Koyama,C Kramer,JA Kramer,M Kramer,TP Krichbaum,C-Y Kuo,N La Bella,S-S Lee,A Levis,Z Li,R Lico,G Lindahl,M Lindqvist,M Lisakov,J Liu,K Liu,E Liuzzo,W-P Lo,AP Lobanov,L Loinard,CJ Lonsdale,AE Lowitz,R-S Lu,NR MacDonald,J Mao,N Marchili,S Markoff,DP Marrone,AP Marscher,I Marti-Vidal,S Matsushita,LD Matthews

Journal

VizieR Online Data Catalog

Published Date

2024/1

Diffusion with forward models: Solving stochastic inverse problems without direct supervision

Denoising diffusion models are a powerful type of generative model used to capture complex distributions of real-world signals. However, their applicability is limited to scenarios where training samples are readily available, which is not always the case in real-world applications. For example, in inverse graphics, the goal is to generate samples from a distribution of 3D scenes that align with a given image, but ground-truth 3D scenes are unavailable and only 2D images are accessible. To address this limitation, we propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed. Instead, these signals are measured indirectly through a known differentiable forward model, which produces partial observations of the unknown signal. A key contribution of our work is the integration of this differentiable forward model directly into the denoising process. This integration effectively connects the generative modeling of observations with the generative modeling of the underlying signals, allowing for end-to-end training of a conditional generative model over signals. During inference, our approach enables sampling from the distribution of underlying signals that are consistent with a given partial observation. We demonstrate the effectiveness of our method on three challenging computer vision tasks. For instance, in the context of inverse graphics, our model enables direct sampling from the distribution of 3D scenes that align with a single 2D input image.
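
As a rough illustration of what integrating a differentiable forward model into the denoising process can look like mechanically, the sketch below trains a toy denoiser that is conditioned on a partial observation and whose output is pushed back through the known forward model to compute a loss in observation space. It is a deliberately simplified stand-in with made-up shapes and a linear forward model, not the paper's architecture or training objective.

```python
# Toy sketch: a denoiser conditioned on partial observations, supervised only
# through a known differentiable forward model (here, "keep the first 8 of 16
# entries"). Shapes, model, and corruption schedule are all illustrative.
import torch
import torch.nn as nn

def forward_model(x):
    return x[:, :8]  # partial observation y = A(x)

denoiser = nn.Sequential(nn.Linear(16 + 8 + 1, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(200):
    # In this toy we synthesize clean signals only to build inputs; the loss
    # below never compares against them directly.
    x0 = torch.randn(32, 16)
    y = forward_model(x0)               # the partial observations we do get to see
    t = torch.rand(32, 1)               # corruption level in [0, 1]
    x_t = (1 - t) * x0 + t * torch.randn_like(x0)   # simple interpolation-style corruption

    x0_hat = denoiser(torch.cat([x_t, y, t], dim=1))   # condition on the observation
    loss = ((forward_model(x0_hat) - y) ** 2).mean()   # loss computed in observation space
    opt.zero_grad()
    loss.backward()
    opt.step()
```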

Authors

Ayush Tewari,Tianwei Yin,George Cazenavette,Semon Rezchikov,Josh Tenenbaum,Frédo Durand,Bill Freeman,Vincent Sitzmann

Journal

Advances in Neural Information Processing Systems

Published Date

2024/2/13

Audio-visual speech separation

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective …
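
The claim above ends with per-speaker spectrogram masks; the sketch below illustrates how such masks are applied to a mixture spectrogram to recover one spectrogram per speaker. Shapes and values are placeholders standing in for the network's outputs, not the patented model itself.

```python
# Toy sketch: applying per-speaker spectrogram masks to a mixture spectrogram.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_frames, n_speakers = 257, 100, 2

# Complex mixture spectrogram (stand-in for the STFT of the video's soundtrack).
mixture_spec = rng.standard_normal((n_freq, n_frames)) + 1j * rng.standard_normal((n_freq, n_frames))

# Stand-in for the per-speaker masks the audio-visual network would predict.
masks = rng.uniform(size=(n_speakers, n_freq, n_frames))
masks /= masks.sum(axis=0, keepdims=True)       # make the masks sum to 1 across speakers

separated_specs = masks * mixture_spec          # one masked spectrogram per speaker
for k, spec in enumerate(separated_specs):
    print(f"speaker {k}: separated spectrogram shape {spec.shape}")
# An inverse STFT of each masked spectrogram (e.g. scipy.signal.istft) would yield the waveforms.
```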

Published Date

2024/2/6

PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation

Realistic object interactions are crucial for creating immersive virtual experiences, yet synthesizing realistic 3D object dynamics in response to novel interactions remains a significant challenge. Unlike unconditional or text-conditioned dynamics generation, action-conditioned dynamics requires perceiving the physical material properties of objects and grounding the 3D motion prediction on these properties, such as object stiffness. However, estimating physical material properties is an open problem due to the lack of material ground-truth data, as measuring these properties for real objects is highly difficult. We present PhysDreamer, a physics-based approach that endows static 3D objects with interactive dynamics by leveraging the object dynamics priors learned by video generation models. By distilling these priors, PhysDreamer enables the synthesis of realistic object responses to novel interactions, such as external forces or agent manipulations. We demonstrate our approach on diverse examples of elastic objects and evaluate the realism of the synthesized interactions through a user study. PhysDreamer takes a step towards more engaging and realistic virtual experiences by enabling static 3D objects to dynamically respond to interactive stimuli in a physically plausible manner. See our project page at https://physdreamer.github.io/.
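
The abstract hinges on estimating a physical material property, such as stiffness, from observed motion. As a loose, hypothetical illustration of that idea, the sketch below optimizes the stiffness of a toy differentiable mass-spring simulation so that its trajectory matches a reference one, which here plays the role that video-generation priors play in the paper. It is not PhysDreamer's pipeline.

```python
# Toy sketch: recover a scalar stiffness by matching a simulated trajectory to
# a reference trajectory via gradient descent through the simulator.
import torch

def simulate(stiffness, steps=100, dt=0.01):
    """Damped 1-D mass-spring system, semi-implicit Euler integration."""
    x, v = torch.tensor(1.0), torch.tensor(0.0)
    trajectory = []
    for _ in range(steps):
        a = -stiffness * x - 0.1 * v   # spring force plus damping, unit mass
        v = v + dt * a
        x = x + dt * v
        trajectory.append(x)
    return torch.stack(trajectory)

target = simulate(torch.tensor(25.0))            # "observed" motion of the object

stiffness = torch.tensor(15.0, requires_grad=True)
opt = torch.optim.Adam([stiffness], lr=0.5)
for _ in range(300):
    loss = ((simulate(stiffness) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"recovered stiffness: {stiffness.item():.2f}")  # should approach the true value of 25
```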

Authors

Tianyuan Zhang,Hong-Xing Yu,Rundi Wu,Brandon Y Feng,Changxi Zheng,Noah Snavely,Jiajun Wu,William T Freeman

Journal

arXiv preprint arXiv:2404.13026

Published Date

2024/4/19

Ordered magnetic fields around the 3C 84 central black hole

Context: 3C 84 is a nearby radio source with a complex total intensity structure, showing linear polarisation and spectral patterns. A detailed investigation of the central engine region necessitates the use of very-long-baseline interferometry (VLBI) above the hitherto available maximum frequency of 86 GHz. Aims: Using ultrahigh resolution VLBI observations at the currently highest available frequency of 228 GHz, we aim to perform a direct detection of compact structures and understand the physical conditions in the compact region of 3C 84. Methods: We used Event Horizon Telescope (EHT) 228 GHz observations and, given the limited (u, v)-coverage, applied geometric model fitting to the data. Furthermore, we employed quasi-simultaneously observed, ancillary multi-frequency VLBI data for the source in order to carry out a comprehensive analysis of the core structure. Results: We report the detection of a highly …
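
Geometric model fitting to sparse interferometric data, as mentioned in the methods above, can be illustrated with a toy example: fitting the visibility amplitude of a single circular Gaussian component to synthetic measurements. The sketch below uses made-up numbers and is not the EHT collaboration's analysis pipeline.

```python
# Toy sketch: least-squares fit of a circular Gaussian component's visibility
# amplitude to synthetic interferometric data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_visibility(uv_dist, flux, fwhm_uas):
    """Visibility amplitude (Jy) of a circular Gaussian with FWHM in microarcseconds."""
    fwhm_rad = fwhm_uas * np.pi / (180 * 3600 * 1e6)      # microarcseconds -> radians
    sigma_rad = fwhm_rad / (2 * np.sqrt(2 * np.log(2)))
    return flux * np.exp(-2 * np.pi ** 2 * sigma_rad ** 2 * uv_dist ** 2)

rng = np.random.default_rng(1)
uv_dist = rng.uniform(1e9, 8e9, size=40)                   # baseline lengths in wavelengths
observed = gaussian_visibility(uv_dist, flux=0.6, fwhm_uas=20.0) + rng.normal(0, 0.01, size=40)

popt, _ = curve_fit(gaussian_visibility, uv_dist, observed, p0=[1.0, 30.0])
print(f"fitted flux = {popt[0]:.2f} Jy, fitted FWHM = {popt[1]:.1f} microarcseconds")
```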

Authors

GF Paraschos,J-Y Kim,M Wielgus,J Röder,TP Krichbaum,E Ros,I Agudo,I Myserlis,M Moscibrodzka,E Traianou,JA Zensus,L Blackburn,C-K Chan,S Issaoun,M Janssen,MD Johnson,VL Fish,K Akiyama,A Alberdi,W Alef,JC Algaba,R Anantua,K Asada,R Azulay,U Bach,A-K Baczko,D Ball,M Baloković,J Barrett,M Bauböck,BA Benson,D Bintley,R Blundell,KL Bouman,GC Bower,H Boyce,M Bremer,CD Brinkerink,R Brissenden,S Britzen,AE Broderick,D Broguiere,T Bronzwaer,S Bustamante,D-Y Byun,JE Carlstrom,C Ceccobello,A Chael,DO Chang,K Chatterjee,S Chatterjee,MT Chen,Y Chen,X Cheng,I Cho,P Christian,NS Conroy,JE Conway,JM Cordes,TM Crawford,GB Crew,A Cruz-Osorio,Y Cui,R Dahale,J Davelaar,M De Laurentis,R Deane,J Dempsey,G Desvignes,J Dexter,V Dhruv,SS Doeleman,S Dougal,SA Dzib,RP Eatough,R Emami,H Falcke,J Farah,E Fomalont,HA Ford,M Foschi,R Fraga-Encinas,WT Freeman,P Friberg,CM Fromm,A Fuentes,P Galison,CF Gammie,R García,O Gentaz,B Georgiev,C Goddi,R Gold,AI Gómez-Ruiz,JL Gómez,M Gu,M Gurwell,K Hada,D Haggard,K Haworth,MH Hecht,R Hesper,D Heumann,LC Ho,P Ho,M Honma,CL Huang,L Huang,DH Hughes,S Ikeda,CMV Impellizzeri,M Inoue,DJ James,BT Jannuzi,B Jeter,W Jaing,A Jiménez-Rosales,S Jorstad,AV Joshi,T Jung,M Karami,R Karuppusamy,T Kawashima,GK Keating,M Kettenis,D-J Kim,J Kim,M Kino,JY Koay,P Kocherlakota,Y Kofuji,PM Koch,S Koyama,C Kramer,JA Kramer,M Kramer,C-Y Kuo,N La Bella,TR Lauer,D Lee,S-S Lee,PK Leung,A Levis,Z Li,R Lico,G Lindahl,M Lindqvist,M Lisakov,J Liu,K Liu

Journal

Astronomy & Astrophysics

Published Date

2024/2/1

Foundations of Computer Vision

An accessible, authoritative, and up-to-date computer vision textbook offering a comprehensive introduction to the foundations of the field that incorporates the latest deep learning advances. Machine learning has revolutionized computer vision, but the methods of today have deep roots in the history of the field. Providing a much-needed modern treatment, this accessible and up-to-date textbook comprehensively introduces the foundations of computer vision while incorporating the latest deep learning advances. Taking a holistic approach that goes beyond machine learning, it addresses fundamental issues in the task of vision and the relationship of machine vision to human perception. Foundations of Computer Vision covers topics not standard in other texts, including transformers, diffusion models, statistical image models, issues of fairness and ethics, and the research process. To emphasize intuitive learning, concepts are presented in short, lucid chapters alongside extensive illustrations, questions, and examples. Written by leaders in the field and honed by a decade of classroom experience, this engaging and highly teachable book offers an essential next-generation view of computer vision.

Up-to-date treatment integrates classic computer vision and deep learning

Accessible approach emphasizes fundamentals and assumes little background knowledge

Student-friendly presentation features extensive examples and images

Proven in the classroom

Instructor resources include slides, solutions, and source code

Authors

Antonio Torralba,Phillip Isola,William T Freeman

Published Date

2024/4/16

Professor FAQs

What is William T. Freeman's h-index at Massachusetts Institute of Technology?

William T. Freeman's h-index is 141 overall and 95 when counting only citations received since 2020.
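
For reference, the h-index is the largest h such that the author has h papers with at least h citations each. A minimal computation on made-up citation counts (not Freeman's actual record):

```python
# h-index: the largest h such that h papers each have at least h citations.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4
```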

What are William T. Freeman's research interests?

William T. Freeman's research interests are computer vision and computational photography.

What is William T. Freeman's total number of citations?

William T. Freeman has 104,010 citations in total.

Who are the co-authors of William T. Freeman?

The co-authors of William T. Freeman include Antonio Torralba, Joshua B. Tenenbaum, Fredo Durand, Edward H Adelson, Rob Fergus, and Yair Weiss.

Co-Authors

Antonio Torralba
Massachusetts Institute of Technology
H-index: 138

Joshua B. Tenenbaum
Massachusetts Institute of Technology
H-index: 137

Fredo Durand
Massachusetts Institute of Technology
H-index: 101

Edward H Adelson
Massachusetts Institute of Technology
H-index: 94

Rob Fergus
New York University
H-index: 83

Yair Weiss
Hebrew University of Jerusalem
H-index: 71
