Amir Gholami is a postdoctoral research fellow in the BAIR Lab at UC Berkeley. He received his PhD from UT Austin, where he worked on large-scale 3D biophysics-based image segmentation, research that received UT Austin's best doctoral dissertation award in 2018 (thesis can be found here). He is a Melosh Medal finalist, recipient of the Best Student Paper award at SC'17 and a Gold Medal in the ACM Student Research Competition, as well as a Best Student Paper finalist at SC'14. His current research includes neural ODEs, large-scale training of neural networks, stochastic second-order methods, and robust optimization (resume).

Contact Email: "amirgh _at_ berkeley . edu".

Recent News

  • 11/11/19: Two papers accepted in AAAI'20: Q-BERT, and Inefficiency of K-FAC for large batch size training.
  • 09/30/19: Two papers accepted in NeurIPS'19: ANODEV2 in the main conference, and our work on Trace Weighted Quantization as spotlight in beyond first order methods workshop.
  • 09/29/19: I will be presenting our work on second-order quantization (HAWQ and Q-BERT) in BLISS seminar on October 2nd.
  • 08/15/19: Very excited to participate in AI4ALL, an annual teaching program for high school students from underrepresented communities to promote diversity and inclusion in AI.
  • 05/07/19: Congratulations to Linjian Ma (now a PhD student at UIUC), Jiayu Ye (now at Google), and Gabe Montague (co-founder of Bike and Pedal) on successfully defending their Master's projects.
  • 03/21/19: Will be giving a talk at BSTARS'19. Many thanks to the Berkeley Statistics department for the invitation.
  • 03/01/19: Our Trust Region paper has been accepted to CVPR'19!
  • 02/28/19: Will be giving a talk at the Fifth Annual Industry Day at the Simons Institute.
  • 11/06/18: Three papers accepted in NeurIPS'18 (one main conference and two workshops).
  • 11/01/18: I will be giving a talk in the Stanford CME-510 lecture series.
  • 03/30/18: Just learned that my PhD thesis has won UT Austin's 2018 Outstanding Dissertation Award. Thanks, George, for your great mentorship!
  • 03/28/18: We have released SqueezeNext, the smallest neural network designed so far (112x smaller than AlexNet).
  • 03/05/18: Bichen's paper has been selected for a spotlight at CVPR'18.
  • 02/26/18: Selected as a finalist for the Robert J. Melosh Medal. Very excited to visit Duke University.
  • 02/08/18: Will be giving a lecture in CS267 on GPUs [Watch Here]
  • 11/21/17: Our paper won the Best Student Paper award at SC'17!


Invited Talks

  • UC Berkeley, BLISS Seminar, Oct. 2019,
    Systematic Quantization of Neural Networks Through Second-Order Information.

  • Facebook, AI Systems Faculty Summit, Sep. 2019,
    Efficient Neural Networks through Systematic Quantization.

  • BSTARS'19, Berkeley Statistics Department, Mar. 2019,
    Neural Networks Through the Lens of the Hessian.

  • Berkeley Simons Institute, 5th Annual Industry Day, Feb. 2019,
    ANODE: Unconditionally Accurate Memory-Efficient Gradients for Neural ODEs.

  • Simons Randomized Numerical Linear Algebra and Applications Workshop, Sep. 2018,
    Large Scale Stochastic Training of Neural Networks.

  • Simons Data Science Finale Workshop, Dec. 2018,
    Towards Robust Second-order Training of Neural Networks.

  • Simons Weekly Optimization Reading Group, Oct. 2018,
    Second-order optimization for convex and non-convex problems.

  • NERSC Data Seminar, Dec. 2018,
    Beyond SGD: Robust Optimization and Second-Order Information for Large-Scale Training of Neural Networks.

  • Stanford, CME 510: Linear Algebra and Optimization Seminar, Nov. 2018,
    Large-Scale Training of Neural Networks.

  • UCSF Radiology Department, Oct. 2018,
    A Domain Adaptation Framework for Neural Network Based Medical Image Segmentation.

  • Intel AI Meeting, Oct. 2018,
    Autonomous Driving Challenges in Computer Vision Research.

  • Facebook AI Research, Sep. 2018,
    Challenges for Distributed Training of Neural Networks.

  • Microsoft Research, Aug. 2018,
    Large-Scale Training of Neural Networks.

  • Berkeley Scientific Computing and Matrix Computations Seminar, Sep. 2017,
    A Framework for Scalable Biophysics-Based Image Analysis.

  • Stanford, ICME Star Talk Series, 2017,
    Fast algorithms for inverse problems with parabolic PDE constraints, with application to biophysics-based image analysis.

  • SIAM Minisymposium on Imaging Sciences, Albuquerque, NM, USA, 2016,
    On preconditioning Newton method for PDE constrained optimization problems.

  • 13th U.S. National Congress on Computational Mechanics, San Diego, CA, USA, 2015,
    Challenges for exascale scalability of elliptic solvers using a model Poisson solver and comparing state-of-the-art methods.

  • SIAM CSE Minisymposium, Salt Lake City, Utah, USA, 2015,
    Parameter estimation for malignant brain tumors.

  • 12th U.S. National Congress on Computational Mechanics, Raleigh, NC, USA, 2013,
    A numerical algorithm for biophysically-constrained parameter estimation for tumor modeling and data assimilation with medical images.

  • SIAM Annual Meeting, San Diego, CA, USA, 2013,
    Image-driven inverse problem for estimating initial distribution of brain tumor modeled by advection-diffusion-reaction equation.

Patents

  • Dynamic directional rounding,
    A. Fit-Florea, A. Gholami, B. Ginsburg, and P. Davoodi.
    Approved by Nvidia Patent Office (US patent pending), 2018.

  • Tensor processing using low precision format,
    B. Ginsburg, S. Nikolaev, A. Kiswani, H. Wu, A. Gholami, S. Kierat, M. Houston, and A. Fit-Florea.
    United States Patent Application 15/624,577, Dec. 28, 2017.

  • A novel high performance inplace transpose algorithm,
    A. Gholami and B. Natarajan,
    US Patent Pending, 2017.

  • Pool boiling cooling system,
    A. Gholami, R. Hosseini, M. Nabil, and M. H. Samadinia.
    Iran Industrial Property Office, 68033, 2010.

SqueezeNext

In this work, we introduce SqueezeNext, a new family of neural network architectures. SqueezeNext matches AlexNet's accuracy on ImageNet with 112x fewer parameters, and its deeper variant exceeds VGG-19's accuracy with only 4.4 million parameters (31x smaller). SqueezeNext also achieves better top-5 classification accuracy than MobileNet with 1.3x fewer parameters, while avoiding the depthwise-separable convolutions that have poor arithmetic intensity. Hardware simulation results for power and inference speed on an embedded system guided us to optimized variants of the baseline model that are 2.59x/8.26x faster and 2.25x/7.5x more energy efficient than SqueezeNet/AlexNet, without any accuracy degradation. For details, please see this paper.
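For context, the parameter saving that depthwise-separable convolutions offer, and that SqueezeNext deliberately trades against their poor arithmetic intensity, can be counted directly. The layer shapes below are hypothetical examples for illustration, not the actual SqueezeNext configuration:

```python
# Illustrative parameter counts for a single convolution layer (bias terms omitted).
# The shapes are made-up examples, not taken from the SqueezeNext paper.

def conv_params(k, c_in, c_out):
    """Standard k x k convolution: every output channel sees every input channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel) + 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128
std = conv_params(k, c_in, c_out)                 # 3*3*128*128 = 147,456
sep = depthwise_separable_params(k, c_in, c_out)  # 3*3*128 + 128*128 = 17,536
print(std, sep, round(std / sep, 1))              # ~8.4x fewer parameters
```

The separable layer needs far fewer parameters, but each of its two stages does little arithmetic per byte of data moved, which is why hardware-aware designs may prefer other ways of shrinking the standard convolution.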

Landscape of Neural Network Loss

Characterizing the generalization performance of a neural network at different points in the optimization space is an active area of research. In particular, the network's performance depends strongly on the mini-batch size used for training. But what differs in the quality of the solutions found with large versus small batch sizes that leads to this gap? We study this through the lens of the Hessian operator and show an interesting connection between the robustness of the neural network and the mini-batch size. For details please see this paper.
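The Hessian of a modern network is far too large to form explicitly, so such analyses typically rely on matrix-free Hessian-vector products combined with power iteration to estimate the top eigenvalue. The sketch below illustrates the idea on a toy quadratic "loss" whose Hessian is known, using a finite-difference Hessian-vector product; it is a generic illustration of the technique, not the authors' code:

```python
import numpy as np

# Matrix-free estimate of the largest Hessian eigenvalue via power iteration
# on Hessian-vector products. For a real network, grad() would be a backprop
# call; here the "loss" is the toy quadratic f(w) = 0.5 * w^T A w.

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric, so it is a valid Hessian

def grad(w):
    return A @ w  # gradient of the toy quadratic

def hvp(w, v, eps=1e-4):
    # Finite-difference Hessian-vector product: needs only gradient evaluations.
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

rng = np.random.default_rng(0)
w = rng.standard_normal(2)   # point in parameter space to probe
v = rng.standard_normal(2)   # random starting direction
for _ in range(100):
    v = hvp(w, v)
    v /= np.linalg.norm(v)
top_eig = v @ hvp(w, v)      # Rayleigh quotient at the converged direction
print(round(top_eig, 3))     # largest eigenvalue of A: (7 + sqrt(5)) / 2 ≈ 4.618
```

The same loop works unchanged at scale because it only ever touches the Hessian through products with vectors.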

Multi-Modal Brain Segmentation

Segmenting a tumor-bearing image is the task of decomposing the image into disjoint regions. We present a framework for fully automatic segmentation of brain MRIs bearing gliomas, which includes three main steps: (1) preprocessing the input MRI to normalize intensities and transport them to a common atlas space; (2) using supervised machine learning to create initial segmentations and probability maps for the target classes (whole tumor, edema, tumor core, and enhancing tumor); and (3) combining these probabilities with an atlas-based segmentation algorithm that uses a tumor growth model to improve on the segmentations and probability maps from the supervised learning scheme. The results of this work were presented at MICCAI 2017.


Half Precision Training

I worked on this project during my internship at NVIDIA. The goal was to perform the whole training pipeline in half-float precision. This is very challenging due to the limited range of numerical values expressible in half precision. The limited precision severely aggravates the vanishing and exploding gradient problems in neural networks. Existing approaches include the use of stochastic rounding, which cannot reach the baseline accuracy even for shallow networks. We developed a novel approach that achieves the same accuracy as the baseline, with all calculations and storage in half-float. We successfully tested the method on deep networks such as AlexNet and GoogLeNet. This work has resulted in a pending patent application.
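The "limited range" problem is easy to demonstrate with NumPy's IEEE half-precision type. This is only an illustration of why naive half-precision training fails (small gradients underflow, large activations overflow); it says nothing about the patented method itself:

```python
import numpy as np

# IEEE binary16 (half precision) has a tiny dynamic range compared to float32:
# the largest finite value is 65504, and magnitudes below ~6e-8 flush to zero.

print(np.finfo(np.float16).max)             # 65504.0: largest finite half-float
print(np.float16(70000.0))                  # inf: overflows past the max
print(np.float16(1e-8))                     # 0.0: underflows below the subnormals
print(np.float16(1e-4) * np.float16(1e-4))  # 0.0: product of two small gradients is lost
```

A gradient that float32 represents without trouble can thus vanish entirely in half precision, which is exactly what makes end-to-end half-float training hard.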

Parallel Image Registration

Image registration is the process of finding a mapping from a reference image to a target image. It is key in many applications ranging from medical imaging to machine learning. We have developed a state-of-the-art parallel registration solver that scales to 8,192 cores and has solved a record 3D image registration problem with 200 billion unknowns in less than 4 minutes. The code is based on AccFFT along with a novel parallel high-order interpolation kernel. The results of this work appeared at SC'17 (best student paper finalist [pdf]).

Accelerated FFT Library

Accelerated FFT (AccFFT) is a new parallel FFT library for computing distributed Fast Fourier Transforms on GPU and CPU architectures. The library is designed to achieve maximum performance without complicating the user interface. AccFFT supports parallel FFTs distributed with slab or pencil decompositions for both CPU and GPU architectures. The library's scalability has been tested up to 131K CPU cores and up to 4K GPUs [pdf].

Novel Stokes Solver using FMM

The Stokes equation is one of the most important equations derived from the Navier-Stokes equations. Discretizing and numerically solving the Stokes equation is challenging. For instance, one cannot use arbitrary discretization spaces for velocity and pressure. Moreover, it is an elliptic but indefinite problem, which further complicates the construction of fast linear algebraic solvers and preconditioners, especially for problems with highly variable coefficients or high-order discretizations. We are using a novel adaptive fast multipole method (pvfmm) with an integral formulation that circumvents most of these difficulties. Compared to finite element methods, our formulation decouples the velocity and pressure and generates velocity fields that are divergence-free by construction [pdf].


Massively Parallel Poisson Solvers

The need for large-scale parallel solvers for elliptic partial differential equations (PDEs) pervades a spectrum of problems with resolution requirements that cannot be accommodated on current systems. Poisson solvers must scale to trillions of unknowns. Examples of methods that scale well are the FFT (for spectral discretizations), the fast multipole method (FMM), and multigrid methods (for stencil-based discretizations). We have benchmarked these methods and compared their parallel efficiency as well as the corresponding cost per unknown on different test cases. FFT is tested with p3dfft, FMM with pvfmm, AMG with the ML package, and GMG with an in-house code [pdf].

Brain Tumor Inverse Problem

Gliomas are tumors that arise from glial cells in the brain. They account for 29% of all brain and central nervous system (CNS) tumors, and 80% of all malignant ones, out of about 60,000 cases diagnosed each year in the United States. Despite advances in surgery and chemo/radiotherapy, the median survival of patients with high-grade gliomas has remained about one year over the past 30 years. One of the key factors in increasing patient survival is how well the tumor invasion boundaries can be detected. With current imaging technologies, only the bulk of the tumor abnormality can be detected, and the infiltrating tumor cells remain masked. I am trying to approximate the extent of tumor infiltration by coupling the imaging data with tumor growth dynamics [pdf].

Copyright © Amir Gholami 2014-2018