
Kevin Meng

currently: @mit @csail @neu

formerly: @nvidia @whist @gantry

contact: mengk at mit dot edu

my interests:

  • interpretability
  • natural language processing
  • protein & drug design
  • neuro-symbolic learning
  • robotic planning

about me.

Hi! 👋 I'm Kevin, an undergrad at MIT studying EECS and pursuing a concurrent master's in AI. These days, I spend lots of time thinking about transparency in deep neural networks, probabilistic models, and problems in computational biology and NLP. Aside from work and research, I also care deeply about teaching. Back at home, I ran a non-profit providing personalized research and CS + AI mentorship to aspiring scientists, where I still occasionally volunteer. At MIT, I organize and teach for [email protected] Workshops and have taught for Splash.

In my free time, I enjoy cooking, reading, running, playing card games, taking road trips, sucking at basketball, and wandering the streets of new cities.

recent news.

  • oct 2022
    MEMIT is out! we've scaled ROME-style editing to 100x the capacity of state-of-the-art model editors [twitter]
  • sept 2022
    ROME will appear in NeurIPS '22! [twitter]
  • july 2022
    Whist is finally out of stealth with the world's first cloud-hybrid browser [hackernews], and Gantry has raised $28M to build infra for continual learning systems [techcrunch]
  • february 2022
    we've released ROME, a study on the fact-storing mechanisms in large auto-regressive transformer language models [twitter]
  • october 2020
    our ClaimBuster model is being used by the Duke Reporters' Lab to help fact-check the 2020 Presidential Election [poynter]

things i've worked on.

Mass-Editing Memory in a Transformer

arXiv Pre-Print | Code | Project Page
Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, David Bau

Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by orders of magnitude.


Locating and Editing Factual Associations in GPT

NeurIPS '22 | Code | Project Page
Kevin Meng*, David Bau*, Alex Andonian, Yonatan Belinkov
*To appear at NeurIPS 2022 in New Orleans, LA

We investigate the mechanisms underlying factual knowledge recall in autoregressive transformer language models. First, we develop a causal intervention for identifying neuron activations capable of altering a model's factual predictions. Within large GPT-style models, this reveals two distinct sets of neurons that we hypothesize correspond to knowing an abstract fact and saying a concrete word, respectively. This insight inspires the development of ROME, a novel method for editing facts stored in model weights. For evaluation, we assemble CounterFact, a dataset of over twenty thousand counterfactuals and tools to facilitate sensitive measurements of knowledge editing. Using CounterFact, we confirm the distinction between saying and knowing neurons, and we find that ROME achieves state-of-the-art performance in knowledge editing compared to other methods. An interactive demo notebook, full code implementation, and the dataset are available.
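The core editing idea can be illustrated in a few lines: treat a linear layer as an associative memory mapping keys (subject representations) to values (fact representations), and insert a new key-value pair with a rank-one update. This is a deliberately simplified sketch — the actual method scales the update by a key-covariance statistic, which is omitted here — and all variable names are illustrative:

```python
import numpy as np

# Toy sketch of a rank-one "memory edit" on a linear layer W, viewed as an
# associative memory. Simplified: the real method weights the update by a
# key-covariance statistic; here we use plain least-squares geometry.

rng = np.random.default_rng(0)
d = 16
W = rng.standard_normal((d, d))     # stand-in for an MLP weight matrix

k_star = rng.standard_normal(d)     # key for the new fact
v_star = rng.standard_normal(d)     # desired value for that key

# Rank-one update chosen so that W_edit @ k_star == v_star exactly
W_edit = W + np.outer(v_star - W @ k_star, k_star) / (k_star @ k_star)

print(np.allclose(W_edit @ k_star, v_star))   # prints True
```

Because the correction `v_star - W @ k_star` is applied only along the direction of `k_star`, keys nearly orthogonal to it are barely affected — the intuition behind editing one fact without rewriting the whole memory.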

Studying the Approximate Linearity of Apple's NeuralHash

ICML ML4Cyber '22
Jagdeep Bhatia*, Kevin Meng*
*Presented at ICML 2022's ML for Cybersecurity Workshop

Perceptual hashes map images with identical semantic content to the same n-bit hash value, while mapping semantically different images to different hashes. These algorithms have important applications in cybersecurity, such as copyright infringement detection, content fingerprinting, and surveillance. Apple's NeuralHash is one such system that aims to detect the presence of illegal content on users' devices without compromising consumer privacy. We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks that can (i) evade detection of "illegal" images, (ii) generate near-collisions, and (iii) leak information about hashed images, all without access to model parameters. These vulnerabilities pose serious threats to NeuralHash's security goals; to address them, we propose a simple fix using classical cryptographic standards.
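Why linearity is dangerous for a perceptual hash can be shown with a toy model. The sketch below is not Apple's network — it is a random linear map followed by sign thresholding, standing in for a hash that merely *behaves* approximately linearly — but it shows how interpolating toward a target image in input space produces a near-collision in hash space, with no access to model gradients or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a perceptual hash: a random linear map plus sign
# thresholding. (The real system is a deep network; the point is that an
# approximately linear hash inherits this model's weaknesses.)
M = rng.standard_normal((96, 1024))      # 96-bit hash of a 1024-dim input

def toy_hash(x):
    return (M @ x > 0).astype(int)

source = rng.standard_normal(1024)       # image we want to disguise
target = rng.standard_normal(1024)       # unrelated target image

# Near-collision attack: for a linear map, hash(0.9*t + 0.1*s) is
# dominated by hash(t), so the blend nearly collides with the target.
blend = 0.9 * target + 0.1 * source
diff_blend  = int((toy_hash(blend)  != toy_hash(target)).sum())
diff_source = int((toy_hash(source) != toy_hash(target)).sum())
print(diff_blend, "vs", diff_source)     # blend is far closer in hash space
```

The same geometry underlies evasion (a small step across a few hyperplanes flips hash bits) and information leakage (hash bits constrain the input to an intersection of half-spaces).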

VeriClaim: End-to-End Computational Fact Checking

NeurIPS AI4CE '21 | Claim-Spotter Paper | Demo Video
Kevin Meng
*Presented at NeurIPS 2021's Workshop on AI for Credible Elections

VeriClaim contains two computational modules: the claim-spotter and claim-checker. The claim-spotter first selects “check-worthy” factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. Then, selected factual statements are passed to the claim-checker, which employs a separate stance detection BERT model to verify each statement using evidence retrieved from a multitude of knowledge resources. Web interface inspired by Fakta, from MIT.
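The two-stage flow can be sketched as a tiny pipeline. Everything here is a stub: `spot_score` and `check_stance` stand in for the two fine-tuned BERT models, and `retrieve` for evidence retrieval from knowledge resources — none of these names come from the actual codebase:

```python
# Hypothetical, much-simplified sketch of a two-stage fact-checking
# pipeline: a claim-spotter filters check-worthy sentences, then a
# claim-checker scores retrieved evidence against each claim.

def spot_score(sentence):
    """Claim-spotter stub: check-worthiness score in [0, 1].
    (Keyword heuristic in place of the fine-tuned BERT classifier.)"""
    return 0.9 if any(ch.isdigit() for ch in sentence) else 0.1

def check_stance(claim, evidence):
    """Claim-checker stub: stance of an evidence passage toward a claim.
    (Substring match in place of the stance-detection BERT model.)"""
    return "supports" if evidence in claim or claim in evidence else "neutral"

def vericlaim(text, retrieve, threshold=0.5):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    claims = [s for s in sentences if spot_score(s) > threshold]
    return {c: [check_stance(c, e) for e in retrieve(c)] for c in claims}

report = vericlaim(
    "The sky is pretty. Turnout rose by 7 percent in 2020.",
    retrieve=lambda claim: [claim],   # trivial retriever for the demo
)
```

Only the statistic-bearing sentence clears the spotting threshold, so the checker never wastes retrieval calls on subjective statements — the division of labor the two modules are built around.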

36 Hour Fitness: Your Personalized Fitness Trainer

Demo Video
Kevin Meng, Brandon Wang, Nihar Annam, Julia Camacho
*HackMIT 2020 Grand Prize Winner, DRW Special Award Winner

In light of heightened obstacles to human interaction and physical health due to COVID-19, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometrical formulas are used to generate scores and suggestions.
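The dynamic time warping step is what lets a user's movement be scored against a reference video even when the two are out of sync. A minimal sketch of that scoring idea (a toy version, not the hackathon code — pose sequences here are one-dimensional signals standing in for joint coordinates):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two pose sequences
    (frames x features): finds the lowest-cost temporal alignment,
    so a slower or faster performance of the same motion scores well."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame pose gap
            D[i, j] = cost + min(D[i - 1, j],      # skip a user frame
                                 D[i, j - 1],      # skip a reference frame
                                 D[i - 1, j - 1])  # match the two frames
    return D[n, m]

trainer = np.sin(np.linspace(0, 4 * np.pi, 60))[:, None]  # reference motion
user    = np.sin(np.linspace(0, 4 * np.pi, 45))[:, None]  # same motion, slower
mirror  = -trainer                                        # opposite motion

score_good = dtw_distance(trainer, user)
score_bad  = dtw_distance(trainer, mirror)
```

The slower-but-correct performance aligns almost perfectly under warping, while the opposite motion cannot, which is exactly the property a tempo-tolerant workout score needs.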

An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic

EACL '21 | Code | Data | Dashboard | Demo Video
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li

This paper introduces a public dashboard which, in addition to displaying case counts in an interactive map and a navigational panel, provides several features not found elsewhere. In particular, the dashboard uses a curated catalog of COVID-19 related facts and debunks of misinformation, and it displays the most prevalent information from the catalog among Twitter users in user-selected U.S. geographic regions. We also explore the use of BERT models to match tweets with misinformation debunks and detect their stances, and we discuss the results of preliminary experiments on analyzing the spatio-temporal spread of misinformation.

Gradient-Based Adversarial Training on Transformer Networks

Pre-Print Paper | Code | ClaimBuster Website
Kevin Meng*, Damian Jimenez*, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide

We introduce the first adversarially-regularized, transformer-based claim-spotting model, which achieves state-of-the-art results on the ClaimBuster Dataset, exceeding current approaches by a 4.70-point F1-score margin. In the process, we propose a method for applying adversarial training to transformer models that has the potential to generalize to many similar text classification tasks. Along with our results, we are releasing our codebase and manually labeled datasets. We also showcase our models' real-world usage via a live public API.
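The core training idea — perturb the model's input representations along the loss gradient, then train on the perturbed inputs — can be sketched without a transformer. The toy below applies an FGM-style normalized-gradient perturbation to fixed "embeddings" feeding a logistic model; the real method perturbs BERT's embedding layer, but the update rule is the same shape (all names and constants here are illustrative):

```python
import numpy as np

# Sketch of gradient-based adversarial training (FGM-style) on a logistic
# model over stand-in "embeddings" X. Toy illustration, not the paper's code.

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 8))     # stand-in embedding vectors
y = (X[:, 0] > 0).astype(float)      # toy binary labels
w = np.zeros(8)
eps, lr = 0.1, 0.5                   # perturbation radius, learning rate

def grads(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))         # sigmoid predictions
    g_w = X.T @ (p - y) / len(y)             # gradient w.r.t. weights
    g_x = np.outer(p - y, w) / len(y)        # gradient w.r.t. embeddings
    return g_w, g_x

for _ in range(100):
    _, g_x = grads(w, X, y)
    # perturb each embedding along its normalized loss gradient
    norms = np.linalg.norm(g_x, axis=1, keepdims=True) + 1e-12
    X_adv = X + eps * g_x / norms
    # descend on the loss at the adversarial embeddings
    g_w, _ = grads(w, X_adv, y)
    w -= lr * g_w

acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == (y > 0.5)).mean()
print("train accuracy:", acc)
```

Training against the worst small perturbation of each embedding acts as a smoothness regularizer on the classifier, which is the mechanism credited for the robustness gains.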

Through-Wall Pose Imaging with a Many-to-Many Paradigm

ICMLA '19 | Poster
Kevin Meng, Yu Meng
*Intel ISEF Best in Category (+6 Other Prizes), ACM Cutler-Bell Prize Winner, Davidson Fellows HM
*Presented at AAAI-20 in New York, NY

This paper establishes a deep-learning model that can be trained to reconstruct continuous video of a 15-point human skeleton even through visual occlusion using RF signals. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna array transceiver. Next, video frames are processed with a gait analysis module to generate ground-truth human skeletons for each frame. Then, the same type of skeleton is predicted from corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that 1) extracts spatial features from RF images, 2) detects all people present in a scene, and 3) aggregates information over many time-steps, respectively.

Vehicle Action Prediction Using LSTM Networks

ICMLA '18 | Demo Video
Kevin Meng, Cheng Shi, Yu Meng
*Intel ISEF Grand Award (+2 Other Prizes)

Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds ahead of collisions. However, a common rule of thumb calls for 2 seconds of reaction time in emergencies; many lives could be saved with even a slight improvement to the warning time. This paper develops an innovative two-stage neural network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer vision model to extract facial movements and rotation; filtered and selected with the Classification and Regression Tree (CART); and modeled with an LSTM.