Matthew Gwilliam
I am a third-year Ph.D. student (started Fall 2020) in the Department of Computer Science at the University of
Maryland (UMD), advised by Professor Abhinav Shrivastava.
I study computer vision.
I completed my B.S. in Computer Science at Brigham Young University in 2019. During
undergrad I worked part-time at Qualtrics, and after graduation I
worked there full-time before starting my Ph.D.
While at BYU, I was fortunate to work with Ryan Farrell, who
helped me grow as a researcher and a person, and who ultimately helped me decide to pursue a
graduate degree.
Email / CV / Google Scholar
Research
I am interested in computer vision models that learn without labels.
More specifically, I am interested in methods that can learn universal image
representations in an unsupervised manner.
Currently, that work focuses on models based on diffusion and implicit neural
representations (INRs), with an emphasis on INRs.
I work on the sorts of tasks these models are useful for: video
retrieval, compression, and generation; image classification, clustering, etc.
Diffusion Models Beat GANs on Image Classification
Matthew Gwilliam*,
Soumik Mukhopadhyay*,
Vatsal Agarwal,
Namitha Padmanabhan,
Archana Swaminathan,
Tianyi Zhou,
Abhinav Shrivastava
Under Review
Project Page | Paper
Explore diffusion models as unified, unsupervised image representation learners.
HNeRV: A Hybrid Neural Representation for Videos
Hao Chen,
Matthew Gwilliam,
Ser-Nam Lim,
Abhinav Shrivastava
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
2023
Project Page | Paper | Code
Combine the strengths of implicit (NeRV) and explicit (autoencoder)
representations to create a hybrid neural
representation for video with good properties for reconstruction, compression,
and editing.
CNeRV: Content-adaptive Neural Representation for Visual Data
Hao Chen,
Matthew Gwilliam,
Bo He,
Ser-Nam Lim,
Abhinav Shrivastava
British Machine Vision Conference (BMVC),
2022 (ORAL)
Project Page | Paper
Make implicit video representation networks generalize to unseen data by
swapping the time embedding for a content-aware embedding
that is computed as a unique summary of each frame.
Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
Matthew Gwilliam,
Abhinav Shrivastava
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
2022
Project Page | Paper | Code
Examine, compare, and contrast popular unsupervised image representation learning methods,
showing that there are significant differences based on the specific algorithm used,
and that "supervised vs. unsupervised" comparisons which neglect these differences
tend to over-generalize.
Rethinking Common Assumptions to Mitigate Racial Bias in Face Recognition Datasets
Matthew Gwilliam,
Srinidhi Hegde,
Lade Tinubu,
Alex Hanson
IEEE/CVF International Conference on Computer Vision Workshops (ICCVW),
2021
Paper | Code
Reveal the role of data in racial bias for face recognition systems, and the flaws
underlying the assumption that balanced data results in fair performance.
Fair Comparison: Quantifying Variance in Results for Fine-grained Visual Categorization
Matthew Gwilliam,
Adam Teuscher,
Connor Anderson,
Ryan Farrell
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV),
2021
Paper
Uncover the large, often-ignored variance in FGVC results across training runs,
both at the dataset level and, more
particularly, in the classification performance for individual classes.
Intelligent Image Collection: Building the Optimal Dataset
Matthew Gwilliam,
Ryan Farrell
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV),
2020
Paper
Propose smart practices for optimizing image curation,
such that classification accuracy is maximized
for a given, constrained dataset size.