
Kevin Meng

currently: @mit @csail @gantry

formerly: @nvidia @whist

contact: mengk at mit dot edu

my interests:

  • interpretability
  • natural language processing
  • protein & drug design
  • computer vision
  • entrepreneurship

about me.

Hi! πŸ‘‹ I'm Kevin, a sophomore at MIT studying EECS and pursuing a concurrent master's in AI. These days, I spend lots of time thinking about transparency in deep neural networks, making machine learning systems improve themselves, and how AI can solve impactful problems in computational biology and NLP.

I also care deeply about teaching. Back at home, I founded the Association for Young Scientists & Innovators, a student-run non-profit that provides personalized science fair, research, and computer science mentorship to aspiring scientists. I still occasionally mentor students via AYSI, and I organize and teach for [email protected] Workshops. In my free time, I enjoy running, cooking, hiking, playing card games, sucking at basketball, and wandering the streets of new cities.

things i've worked on.

Locating and Editing Factual Associations in GPT

arXiv Pre-Print | Code | Demo Colab | Project Page
Kevin Meng*, David Bau*, Alex Andonian, Yonatan Belinkov

We investigate the mechanisms underlying factual knowledge recall in autoregressive transformer language models. First, we develop a causal intervention for identifying neuron activations capable of altering a model's factual predictions. Within large GPT-style models, this reveals two distinct sets of neurons that we hypothesize correspond to knowing an abstract fact and saying a concrete word, respectively. This insight inspires the development of ROME (Rank-One Model Editing), a novel method for editing facts stored in model weights. For evaluation, we assemble CounterFact, a dataset of over twenty thousand counterfactuals, along with tools that facilitate sensitive measurements of knowledge editing. Using CounterFact, we confirm the distinction between saying and knowing neurons, and we find that ROME achieves state-of-the-art performance in knowledge editing compared to other methods. An interactive demo notebook, full code implementation, and the dataset are available.
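
For intuition, here is a minimal numpy sketch of the rank-one update idea behind ROME: given a key vector k* for the edited subject, a desired value vector v*, and an estimate C of the key covariance, the edited weight matrix maps k* exactly to v* while staying close to the original weights. All dimensions and vectors below are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 48

W = rng.normal(size=(d_out, d_in))          # MLP projection weights to edit
C = np.cov(rng.normal(size=(d_in, 1000)))   # stand-in for the key covariance E[k k^T]
k_star = rng.normal(size=d_in)              # key vector for the edited subject
v_star = rng.normal(size=d_out)             # value vector encoding the new fact

# Rank-one update: choose W_hat minimizing a C-weighted distance to W,
# subject to the constraint W_hat @ k_star == v_star.
u = np.linalg.solve(C, k_star)              # C^{-1} k*
W_hat = W + np.outer(v_star - W @ k_star, u) / (u @ k_star)

assert np.allclose(W_hat @ k_star, v_star)  # the new association is inserted exactly
```

Because the update is rank-one, the edit touches a single weight matrix and leaves unrelated key directions largely intact.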

Studying the Approximate Linearity of Apple's NeuralHash

ICML ML4Cyber
Jagdeep Bhatia*, Kevin Meng*

Perceptual hashes map images with identical semantic content to the same n-bit hash value, while mapping semantically different images to different hashes. These algorithms have important applications in cybersecurity, such as copyright infringement detection, content fingerprinting, and surveillance. Apple's NeuralHash is one such system that aims to detect the presence of illegal content on users' devices without compromising consumer privacy. We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks that can (i) evade detection of "illegal" images, (ii) generate near-collisions, and (iii) leak information about hashed images, all without access to model parameters. These vulnerabilities pose serious threats to NeuralHash's security goals; to address them, we propose a simple fix using classical cryptographic standards.
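
As a rough illustration, one can probe a network for approximate linearity by checking how well its outputs respect additivity and homogeneity. The toy model below is a stand-in, not NeuralHash itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in embedding network (NOT NeuralHash; just a toy model for illustration).
f = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 128),
)

def cos(a, b):
    return nn.functional.cosine_similarity(a, b, dim=-1).item()

x1 = torch.rand(1, 3, 32, 32)
x2 = torch.rand(1, 3, 32, 32)

with torch.no_grad():
    # Additivity: a linear map satisfies f(x1 + x2) = f(x1) + f(x2).
    additivity = cos(f(x1 + x2), f(x1) + f(x2))
    # Homogeneity: a linear map satisfies f(a * x) = a * f(x).
    homogeneity = cos(f(2.0 * x1), 2.0 * f(x1))

print(f"additivity cosine: {additivity:.3f}, homogeneity cosine: {homogeneity:.3f}")
```

Cosine similarities close to 1 across many inputs would indicate the network behaves approximately linearly, which is the property the attacks exploit.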

VeriClaim: End-to-End Computational Fact Checking

NeurIPS 2021 | Claim-Spotter Paper | Demo Video
Kevin Meng
*Presented at NeurIPS 2021's Workshop on AI for Credible Elections

VeriClaim consists of two computational modules: the claim-spotter and the claim-checker. The claim-spotter first selects "check-worthy" factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. Selected statements are then passed to the claim-checker, which employs a separate stance-detection BERT model to verify each statement against evidence retrieved from a multitude of knowledge resources. The web interface is inspired by MIT's Fakta.
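
A schematic sketch of the two-stage pipeline is below; the model checkpoints, label names, and the evidence_store retriever are placeholders for illustration, not VeriClaim's actual components.

```python
from transformers import pipeline

# Placeholder checkpoints, NOT the actual VeriClaim models.
claim_spotter = pipeline("text-classification", model="path/to/claim-spotter-bert")
stance_checker = pipeline("text-classification", model="path/to/stance-bert")

def check(text: str, evidence_store):
    """Spot check-worthy claims in `text`, then verify each against evidence."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    results = []
    for sent in sentences:
        # Stage 1: keep only "check-worthy" factual statements.
        if claim_spotter(sent)[0]["label"] != "CHECK_WORTHY":  # placeholder label
            continue
        # Stage 2: stance detection against retrieved evidence.
        for evidence in evidence_store.retrieve(sent):         # hypothetical retriever
            stance = stance_checker(f"{sent} [SEP] {evidence}")[0]
            results.append((sent, evidence, stance["label"], stance["score"]))
    return results
```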

36 Hour Fitness: Your Personalized Fitness Trainer

Slides | Demo Video
Kevin Meng, Brandon Wang, Nihar Annam, Julia Camacho
*HackMIT 2020 Grand Prize Winner, DRW Special Award Winner

In light of the heightened obstacles that COVID-19 poses to human interaction and physical health, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometric formulas are used to generate scores and suggestions.
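
As an example of the scoring idea, here is a standard dynamic time warping implementation that aligns a user's pose sequence against a reference sequence performed at a different pace; the keypoint data below is random toy input, not real pose-estimation output.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """Dynamic time warping cost between two pose sequences of shape (frames, features)."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # per-frame pose distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Toy example: score a user's keypoint sequence against a reference workout video.
reference = np.random.rand(100, 17 * 2)   # 17 keypoints, (x, y) per frame
user = np.random.rand(90, 17 * 2)         # user performs the move at a different pace
print(f"alignment cost: {dtw_distance(reference, user):.2f}")
```

Because DTW warps the time axis, a user who performs the same motion slightly faster or slower than the reference video is not penalized for pacing alone.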

An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic

EACL 21 | Code | Data | Dashboard | Demo Video
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li

This paper introduces a public dashboard that, beyond displaying case counts in an interactive map and a navigational panel, provides several features not found in similar dashboards. In particular, the dashboard uses a curated catalog of COVID-19 related facts and debunks of misinformation, and it displays the most prevalent information from the catalog among Twitter users in user-selected U.S. geographic regions. We also explore the use of BERT models to match tweets with misinformation debunks and detect their stances, and we discuss preliminary experiments on analyzing the spatio-temporal spread of misinformation.
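
One plausible way to implement the tweet-to-debunk matching step is with sentence embeddings and cosine similarity, as sketched below; the checkpoint and examples are illustrative, and the paper's BERT-based setup may differ.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative checkpoint; the paper's models may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

debunks = [
    "Drinking bleach does not cure COVID-19 and is dangerous.",
    "5G networks do not spread the coronavirus.",
]
tweets = ["heard that 5g towers are spreading covid??"]

# Embed both sides and match each tweet to its nearest debunk.
scores = util.cos_sim(model.encode(tweets), model.encode(debunks))
best = scores.argmax(dim=1)
for tweet, idx in zip(tweets, best):
    print(tweet, "->", debunks[int(idx)])
```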

Gradient-Based Adversarial Training on Transformer Networks

Pre-Print Paper | Code | ClaimBuster Website
Kevin Meng*, Damian Jimenez*, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide

We introduce the first adversarially-regularized, transformer-based claim spotting model, which achieves state-of-the-art results by a 4.70 point F1-score margin over current approaches on the ClaimBuster Dataset. In the process, we propose a method for applying adversarial training to transformer models, which can potentially generalize to many similar text classification tasks. Along with our results, we are releasing our codebase and manually labeled datasets. We also showcase our models' real-world usage via a live public API.
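
For a flavor of the approach, below is a generic FGM-style sketch of adversarial training in a transformer's embedding space: perturb the embedding matrix along its loss gradient, accumulate gradients on both the clean and adversarial losses, then restore the weights. This is a simplification under standard assumptions, not the paper's exact algorithm.

```python
def adversarial_step(model, batch, loss_fn, epsilon=1.0):
    """One FGM-style adversarial training step for a HuggingFace-style model.
    Call optimizer.zero_grad() before and optimizer.step() after."""
    emb = model.get_input_embeddings().weight      # word-embedding matrix

    clean_loss = loss_fn(model(**batch).logits, batch["labels"])
    clean_loss.backward()                          # populates emb.grad

    # Normalized adversarial direction in embedding space.
    grad = emb.grad.detach()
    delta = epsilon * grad / (grad.norm() + 1e-12)

    emb.data.add_(delta)                           # apply perturbation
    adv_loss = loss_fn(model(**batch).logits, batch["labels"])
    adv_loss.backward()                            # accumulate adversarial gradients
    emb.data.sub_(delta)                           # restore original embeddings

    return clean_loss.item(), adv_loss.item()
```

Perturbing embeddings rather than discrete tokens is what makes gradient-based adversarial training tractable for text.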

Through-Wall Pose Imaging with a Many-to-Many Paradigm

ICMLA 19 | Poster | Explanatory Video
Kevin Meng, Yu Meng
*Intel ISEF Best in Category (+6 Other Prizes), ACM Cutler-Bell Prize Winner, Davidson Fellows HM
*Presented at AAAI-20 in New York, NY

This paper establishes a deep-learning model that can be trained to reconstruct continuous video of a 15-point human skeleton from RF signals, even through visual occlusion. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna array transceiver. Next, video frames are processed with a gait analysis module to generate ground-truth human skeletons for each frame. Then, the same type of skeleton is predicted from corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that 1) extracts spatial features from RF images, 2) detects all people present in a scene, and 3) aggregates information over many time-steps, respectively.
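
A schematic PyTorch skeleton of the spatial-then-temporal design is below; layer sizes are illustrative, and the person-detection (RPN) stage is omitted for brevity.

```python
import torch
import torch.nn as nn

class RFPoseNet(nn.Module):
    """Schematic CNN + LSTM skeleton, not the paper's exact architecture."""
    def __init__(self, n_keypoints=15, hidden=256):
        super().__init__()
        # Spatial feature extractor over per-frame RF "images".
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Temporal aggregation across many time-steps.
        self.lstm = nn.LSTM(64 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_keypoints * 2)  # (x, y) per joint

    def forward(self, rf_frames):                  # (batch, time, 1, H, W)
        b, t = rf_frames.shape[:2]
        feats = self.cnn(rf_frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).view(b, t, -1, 2)    # skeleton per frame

skeletons = RFPoseNet()(torch.rand(2, 8, 1, 64, 64))
print(skeletons.shape)  # torch.Size([2, 8, 15, 2])
```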

Vehicle Action Prediction Using LSTM Networks

ICMLA 18 | Demo Video
Kevin Meng, Cheng Shi, Yu Meng
*Intel ISEF Grand Award (+2 Other Prizes)

Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds ahead of collisions. However, a common rule of thumb calls for 2 seconds of reaction time in emergencies, and even a slight improvement in warning time could save lives. This paper develops an innovative two-stage neural network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer vision model to extract facial movements and rotation; filtered and selected with the Classification and Regression Tree (CART) algorithm; and modeled with an LSTM.
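
As a toy sketch of the feature-selection step, a CART model's importances can rank sensor-derived features before the survivors are fed to a sequence model such as an LSTM; the data, sizes, and threshold below are stand-ins, not the paper's configuration.

```python
import numpy as np
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in data: (samples, features) sensor readings with action labels.
X = np.random.rand(500, 20)
y = np.random.randint(0, 3, size=500)

# CART-based feature selection: keep the features the tree finds most informative.
cart = DecisionTreeClassifier(max_depth=5).fit(X, y)
selected = np.argsort(cart.feature_importances_)[-8:]   # top-8 features (illustrative)
print("selected feature indices:", selected)

# The selected features then feed a sequence model over time windows.
lstm = nn.LSTM(input_size=len(selected), hidden_size=64, batch_first=True)
```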