
Kevin Meng

B.S. Computer Science

Massachusetts Institute of Technology '24

mengk [at] mit [dot] edu


  • DNN Interpretability
  • Neural Perception
  • Natural Language Processing
  • Algorithms
  • Entrepreneurship
  • STEM Education


Hi there! 👋 I'm a computer science student at MIT who is passionate about developing technologies that solve real-world problems. Over the years, I've built projects that have been presented at venues including the NSA, IEEE, AAAI, AAAS, NVIDIA, and 7-Eleven R&D Labs. Back at home, I founded the Association for Young Scientists & Innovators, a student-run non-profit organization aimed at providing personalized science fair and research mentorship to aspiring scientists. In my free time, I enjoy making music, reading, and playing basketball.

At MIT CSAIL, I'm currently designing new algorithms for interpreting and controlling deep neural networks, working with David Bau, Yonatan Belinkov, and Antonio Torralba. Prior to MIT, I was a member of the Innovative Data Intelligence Research Lab at UT Arlington, advised by Chengkai Li.


See below for some recent updates from Dec 2019 onward!

  • ClaimBuster Applied to 2020 Presidential Election | August-October 2020

    The Duke Reporters' Lab used our ClaimBuster model to aid fact-checking during the 2020 DNC/RNC Conventions and Biden/Trump Presidential Debates. See this Poynter article for details!

  • NVIDIA Fall GTC | July 2020

    VERiCLAIM was invited for presentation at the NVIDIA Fall GPU Technology Conference (San Diego, CA; held online). A huge thank you to Dr. Branislav Kisačanin and Dr. Boris Ginsburg for making this possible!

  • 34th Annual AAAI Conference | Feb 2020

    I presented my through-wall imaging system at AAAI-20 in New York City, NY.

  • CES 2020 | Jan 2020

    I was invited to visit CES in Las Vegas, NV to tour the showroom and discuss business collaborations.

  • ICMLA 2019 | Dec 2019

    I presented my paper, Through-Wall Pose Imaging with a Many-to-Many Paradigm, at ICMLA 2019 in Boca Raton, FL.

Selected Research & Hackathon Projects

For other projects that I've open-sourced, please visit my GitHub.

36 Hour Fitness: Your Personalized Fitness Trainer

Slides | Code | Demo Video
Kevin Meng, Brandon Wang, Nihar Annam, Julia Camacho
*HackMIT 2020 Grand Prize Winner, DRW Special Award Winner

With COVID-19 raising new barriers to social interaction and physical health, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. The system (built in 36 hours 😉) helps replicate the gym experience by letting users select any of their favorite workout videos and receive real-time automated feedback. Through gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Scores and suggestions are generated using dynamic time warping, neural pose estimation, and simple geometric formulas.
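To illustrate the scoring idea, here is a minimal sketch of dynamic time warping on 1-D sequences (function names and toy data are mine; the real system compares multi-joint pose trajectories produced by the pose estimator):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.

    Returns the minimum cumulative alignment cost, allowing one
    sequence to be locally stretched or compressed to match the other.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = best cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a frame of a
                                  dp[i][j - 1],      # skip a frame of b
                                  dp[i - 1][j - 1])  # match both frames
    return dp[n][m]

# Identical sequences align perfectly; a time-shifted copy still scores 0
# because warping absorbs the delay -- exactly why DTW suits comparing a
# user's workout motion against a reference video.
reference = [0.0, 1.0, 2.0, 1.0, 0.0]
user      = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same motion, started late
print(dtw_distance(reference, reference))  # 0.0
print(dtw_distance(reference, user))       # 0.0
```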

VERiCLAIM: Verifying Check-Worthy Factual Claims Using NLP Techniques

System Pre-Print Paper | Claim-Spotter Paper | Demo Video
Kevin Meng

VERiCLAIM introduces a novel framework containing two computational modules: the claim-spotter and the claim-checker. The claim-spotter first selects "check-worthy" factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. The selected statements are then passed to the claim-checker, which employs a separate stance-detection BERT model to verify each statement against evidence retrieved from a multitude of knowledge resources. The web interface is inspired by MIT's Fakta.
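As a caricature of the two-module design (the BERT models are replaced by stand-in scoring functions, and every name here is hypothetical rather than VERiCLAIM's actual API):

```python
def spot_claims(sentences, worthiness_fn, threshold=0.5):
    """Stage 1 (claim-spotter): keep sentences scored as check-worthy."""
    return [s for s in sentences if worthiness_fn(s) >= threshold]

def check_claim(claim, evidence, stance_fn):
    """Stage 2 (claim-checker): aggregate evidence stances into a verdict."""
    stances = [stance_fn(claim, e) for e in evidence]
    support, refute = stances.count("agree"), stances.count("disagree")
    if support > refute:
        return "supported"
    if refute > support:
        return "refuted"
    return "not enough info"

# Toy stand-ins: a real system would invoke the trained BERT models here.
worthiness = lambda s: 1.0 if any(ch.isdigit() for ch in s) else 0.0
stance = lambda c, e: "agree" if e.endswith("(true)") else "disagree"

sentences = ["Hello everyone!", "The budget grew 40% in 2019."]
for claim in spot_claims(sentences, worthiness):
    print(claim, "->", check_claim(claim, ["budget rose sharply (true)"], stance))
```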

An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic

Pre-Print Paper | Code | Data | Dashboard | Demo Video
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li

This paper introduces a public dashboard that, in addition to displaying case counts on an interactive map and a navigational panel, provides several features not found elsewhere. In particular, the dashboard maintains a curated catalog of COVID-19-related facts and debunks of misinformation, and it surfaces the catalog items most prevalent among Twitter users in user-selected U.S. geographic regions. We also explore the use of BERT models to match tweets with misinformation debunks and to detect their stances, and we discuss preliminary experiments on the spatio-temporal spread of misinformation.
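The tweet-to-debunk matching step can be sketched as nearest-neighbor search over sentence embeddings; this is a generic illustration, with toy 2-D vectors standing in for BERT sentence embeddings and function names that are mine, not the paper's:

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def match_debunk(tweet_vec, debunk_vecs, min_sim=0.8):
    """Return the index of the most similar debunk, or None if nothing
    clears the threshold (i.e. the tweet matches no cataloged claim)."""
    best_i, best_s = None, min_sim
    for i, vec in enumerate(debunk_vecs):
        s = cosine(tweet_vec, vec)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i

debunks = [[1.0, 0.0], [0.0, 1.0]]        # toy 2-D "embeddings"
print(match_debunk([0.9, 0.1], debunks))  # 0
print(match_debunk([0.5, 0.5], debunks, min_sim=0.99))  # None
```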

Gradient-Based Adversarial Training on Transformer Networks

Pre-Print Paper | Code | ClaimBuster Website
Kevin Meng, Damian Jimenez, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide

We introduce the first adversarially regularized, transformer-based claim-spotting model, which achieves state-of-the-art results on the ClaimBuster dataset by a 4.70-point F1-score margin over prior approaches. In the process, we propose a method for applying adversarial training to transformer models that has the potential to generalize to many similar text-classification tasks. Along with our results, we are releasing our codebase and manually labeled datasets, and we showcase our models' real-world usage via a live public API.
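The underlying gradient-based idea can be shown on a toy model: perturb the input along its normalized loss gradient (the locally worst-case direction), then train on both the clean and perturbed inputs. This is a generic fast-gradient-method sketch on a hand-written logistic scorer, not the paper's exact algorithm; all names and numbers are mine:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grad_wrt_input(w, x, y):
    """Logistic loss -log p(y|x) for a linear scorer w.x, plus its
    gradient with respect to the *input* vector x (not the weights)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    p = sigmoid(y * s)
    loss = -math.log(p)
    # d loss / d x_i = -(1 - p) * y * w_i
    grad = [-(1.0 - p) * y * wi for wi in w]
    return loss, grad

def fgm_perturb(grad, eps=0.1):
    """Fast-gradient-method perturbation: a step of size eps along the
    normalized input gradient."""
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    return [eps * g / norm for g in grad]

w = [0.8, -0.3]
x = [1.0, 2.0]   # stand-in for a token-embedding vector
y = 1            # label in {-1, +1}
clean_loss, g = loss_and_grad_wrt_input(w, x, y)
x_adv = [xi + ri for xi, ri in zip(x, fgm_perturb(g))]
adv_loss, _ = loss_and_grad_wrt_input(w, x_adv, y)
# The perturbed example is harder by construction; adversarial training
# would minimize the loss on both x and x_adv.
print(adv_loss > clean_loss)  # True
```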

[Figure: Model Architecture]
Through-Wall Pose Imaging with a Many-to-Many Paradigm

IEEE Xplore | Poster | Explanatory Video
Kevin Meng, Yu Meng
*Intel ISEF Best in Category (+6 Other Prizes), ACM Cutler-Bell Prize Winner, Davidson Fellows HM

This paper establishes a deep-learning model that, using RF signals, reconstructs continuous video of a 15-point human skeleton even through visual occlusion. During training, video and RF data are collected simultaneously with a co-located setup containing an optical camera and an RF antenna-array transceiver. Video frames are then processed with a gait-analysis module to generate a ground-truth human skeleton for each frame. Finally, the same type of skeleton is predicted from the corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that 1) extracts spatial features from RF images, 2) detects all people present in a scene, and 3) aggregates information over many time steps, respectively.

[Figure: Project Poster]
Vehicle Action Prediction Using LSTM Networks

IEEE Xplore | Demo Video
Kevin Meng, Cheng Shi, Yu Meng
*Intel ISEF Grand Award (+2 Other Prizes)

Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds before collisions. However, a common rule of thumb calls for 2 seconds to react safely in an emergency, so even a slight improvement in warning time could save many lives. This paper develops a two-stage neural-network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer-vision model to extract facial movements and rotation; filtered and selected with the Classification and Regression Tree (CART) algorithm; and modeled with an LSTM.

Selected Awards


  • Intel ISEF: 3x Grand Award Winner (Best in Category '19), 7x Special Award Winner
  • $10,000 ACM Cutler-Bell Prize
  • Davidson Fellows: Honorable Mention

Merit-Based Scholarships

  • $20,000 Coca-Cola Scholarship
  • $10,000 Chief of Naval Research Scholarship
  • $2,500 National Merit Scholarship

Music & Volunteerism

  • 4x TMEA All-State Orchestra Bassist (3rd, 5th, 7th, 21st rank in Texas over 4 years)
  • 4x President's Volunteer Service Award

Presentations & Public Speaking Engagements

Computer Science (Focus in Machine Learning)

Topic | Date | Venue
VERiCLAIM | Oct 2020, San Diego, CA (online) | NVIDIA Fall 2020 GTC
VERiCLAIM | 8/21/2020, Beijing, China (online) | Haibohui Investor Platform
Looking Through Walls | 2/8/2020, New York City, NY | AAAI-20 Main Conference
Looking Through Walls | 12/16/2019, Boca Raton, FL | ICMLA 2019 Main Conference
Looking Through Walls | 7/30/2019, Ft. Meade, MD | National Security Agency HQ
Looking Through Walls (YouTube) | 6/28/2019, Irving, TX | 7-Eleven R&D Labs
Looking Through Walls | 3/10/2019, Plano, TX | DFWCIT Meeting
Vehicle Action Prediction | Feb 2019, Washington, D.C. | AAAS Annual Meeting (declined)
Vehicle Action Prediction | 12/17/2018, Orlando, FL | ICMLA 2018 Main Conference
Developing ML Research | 3/27/2018 | DFWCIT Meeting
Vehicle Action Prediction (YouTube) | 1/13/2018 | DFW BigData Meeting

Technological Innovation

Topic | Date | Venue
Looking Through Walls | 8/8/2019, Plano, TX | ACP Foundation Meeting
My Journey Through Science Fair | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Innovation Without Limits (YouTube) | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala

Banquet or Event Emcee

Role | Date | Venue
General Event Emcee | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Convention Banquet Emcee | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala
General Event Emcee | 3/25/2017, Plano, TX | DFWAACC Voter Registration Forum