Kevin's headshot

Kevin Meng

currently: @mit @nvidia @idirlab

mengk [at] mit {dot} edu

my interests:

  • interpretability
  • natural language processing
  • intelligent robotics
  • computer vision
  • entrepreneurship

about me.

Hey there! 👋 I'm just your typical college kid who's trying to live life to the fullest :^)

tl;dr I love solving problems. Recently, I've been working on a computational fact-checking system used by thousands of researchers and journalists worldwide, algorithms for interpreting & rewriting deep neural networks (alongside David Bau, Yonatan Belinkov, and Antonio Torralba at MIT CSAIL), and ML pipelines and a mobile platform for customers in the fashion space. You might also find me thinking about the ethics and implementation of computer vision systems, as well as applications of NLP models across domains. I've presented some of my work at venues including AAAI, IEEE, NVIDIA, AAAS, the NSA, and 7-Eleven R&D Labs; check it out below!

Back at home, I founded the Association for Young Scientists & Innovators, a student-run non-profit that provides personalized science fair and research mentorship to aspiring scientists. In my free time, I enjoy making music, reading, running, and playing basketball.

recent news.

  • EACL Paper Accepted | March 2021

    Our paper, "An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic," has been accepted to EACL 2021!

  • ClaimBuster Applied to 2020 Presidential Election | August-October 2020

    The Duke Reporters' Lab used our ClaimBuster model to aid fact-checking during the 2020 DNC/RNC Conventions and Biden/Trump Presidential Debates. See this Poynter article for more.

  • NVIDIA Fall GTC | July 2020

    VERiCLAIM was invited for presentation at the NVIDIA Fall GPU Technology Conference (San Diego, CA; held virtually). Huge thank you to Dr. Branislav Kisačanin and Dr. Boris Ginsburg for making this possible!

  • 34th Annual AAAI Conference | Feb 2020

    I presented my through-wall imaging system at AAAI-20 in New York City, NY.

projects i've been working on!

TrendifyApp

Website | App Store Download | Google Play Store Download
Lots of People + Kevin Meng

Ever seen an outfit that you liked but had no clue where to get it? Look no further: just take a screenshot and upload it to Trendify! We partner with over 200 high-quality brands featuring over 500,000 products, including a variety of dresses, jeans, boots, tops, sweatshirts, handbags, underwear, swimwear, sleepwear, shirts, outerwear, watches, necklaces, and jewelry.

Project Poster
36 Hour Fitness: Your Personalized Fitness Trainer

Slides | Code | Demo Video
Kevin Meng, Brandon Wang, Nihar Annam, Julia Camacho
*HackMIT 2020 Grand Prize Winner, DRW Special Award Winner

In light of heightened obstacles to human interaction and physical health due to COVID-19, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometrical formulas are used to generate scores and suggestions.
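
If you're curious how the scoring works, here's a minimal sketch of the dynamic-time-warping piece: it aligns a user's pose-keypoint sequence against a reference video's and turns the alignment cost into a score. The array shapes, distance metric, and 0-100 mapping are illustrative assumptions, not the app's production code.

# Minimal sketch (not the production code): score a user's pose sequence
# against a reference workout video with dynamic time warping (DTW).
# Pose arrays are assumed to be (num_frames, num_keypoints, 2) from any
# off-the-shelf pose estimator; the score mapping is purely illustrative.
import numpy as np

def dtw_cost(ref, user):
    """Classic O(N*M) DTW over per-frame pose distance."""
    ref = ref.reshape(len(ref), -1)      # flatten keypoints per frame
    user = user.reshape(len(user), -1)
    n, m = len(ref), len(user)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - user[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)             # length-normalized alignment cost

def similarity_score(ref, user, scale=5.0):
    """Map alignment cost to a 0-100 score (higher = closer match)."""
    return float(100.0 * np.exp(-dtw_cost(ref, user) / scale))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(120, 15, 2))   # 120 frames, 15 keypoints
    user = reference[::2] + 0.05 * rng.normal(size=(60, 15, 2))
    print(f"workout score: {similarity_score(reference, user):.1f}/100")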

VERiCLAIM: Verifying Check-Worthy Factual Claims Using NLP Techniques

System Pre-Print Paper | Claim-Spotter Paper | Demo Video
Kevin Meng

VERiCLAIM introduces a novel framework containing two computational modules: the claim-spotter and the claim-checker. The claim-spotter first selects “check-worthy” factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. Selected statements are then passed to the claim-checker, which employs a separate stance-detection BERT model to verify each statement against evidence retrieved from a multitude of knowledge resources. The web interface is inspired by Fakta from MIT.
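
As a rough illustration of the pipeline's shape (not the released system), the sketch below wires a claim-spotting classifier in front of a stance-detection model; the model names, the 0.5 threshold, and the evidence-retrieval stub are placeholders.

# Rough sketch of the two-stage shape: claim-spotter -> claim-checker.
# Model names, labels, the threshold, and the evidence stub are
# placeholders, not the released VERiCLAIM components.
from transformers import pipeline

# Stage 1: a BERT-style classifier that flags "check-worthy" sentences.
claim_spotter = pipeline("text-classification", model="bert-base-uncased")
# Stage 2: a BERT-style stance model comparing a claim against evidence.
stance_detector = pipeline("text-classification", model="bert-base-uncased")

def retrieve_evidence(claim):
    """Placeholder for retrieval from knowledge resources (search, KBs, ...)."""
    return ["<evidence sentence retrieved for this claim>"]

def fact_check(sentences, spotter_threshold=0.5):
    results = []
    for sentence in sentences:
        spot = claim_spotter(sentence)[0]
        if spot["score"] < spotter_threshold:
            continue                         # not check-worthy; skip it
        for evidence in retrieve_evidence(sentence):
            stance = stance_detector(f"{sentence} [SEP] {evidence}")[0]
            results.append((sentence, evidence, stance["label"], stance["score"]))
    return results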

An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic

EACL 21 | Code | Data | Dashboard | Demo Video
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li

This paper introduces a public dashboard which, in addition to displaying case counts in an interactive map and a navigational panel, provides several features not found in other dashboards. In particular, it draws on a curated catalog of COVID-19-related facts and debunks of misinformation, and it displays the most prevalent items from the catalog among Twitter users in user-selected U.S. geographic regions. We explore the use of BERT models to match tweets with misinformation debunks and detect their stances, and we discuss preliminary experiments on analyzing the spatio-temporal spread of misinformation.

Gradient-Based Adversarial Training on Transformer Networks

Pre-Print Paper | Code | ClaimBuster Website
Kevin Meng, Damian Jimenez, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide

We introduce the first adversarially regularized, transformer-based claim-spotting model, which achieves state-of-the-art results by a 4.70-point F1-score margin over current approaches on the ClaimBuster Dataset. In the process, we propose a method for applying adversarial training to transformer models that can be generalized to many similar text-classification tasks. Along with our results, we release our codebase and manually labeled datasets, and we showcase our models' real-world usage via a live public API.
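
For a sense of what gradient-based adversarial training on a transformer looks like, here is a compact FGM-style sketch that perturbs the word-embedding matrix along the loss gradient before a second forward pass; the base model, epsilon, and optimizer settings are illustrative, not the paper's exact configuration.

# Sketch of gradient-based adversarial training on a transformer's word
# embeddings (FGM-style). Hyperparameters and the base model are
# illustrative assumptions, not the paper's configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
embeddings = model.get_input_embeddings()

def adversarial_step(batch_texts, labels, epsilon=1e-2):
    inputs = tokenizer(batch_texts, padding=True, return_tensors="pt")
    labels = torch.tensor(labels)

    # 1) Standard forward/backward pass to populate embedding gradients.
    loss = model(**inputs, labels=labels).loss
    loss.backward()

    # 2) Perturb the embedding matrix along the (normalized) gradient.
    grad = embeddings.weight.grad
    backup = embeddings.weight.data.clone()
    norm = grad.norm()
    if norm > 0:
        embeddings.weight.data.add_(epsilon * grad / norm)

    # 3) Second pass on perturbed embeddings; accumulate adversarial loss.
    adv_loss = model(**inputs, labels=labels).loss
    adv_loss.backward()

    # 4) Restore the original embeddings, then update all parameters.
    embeddings.weight.data = backup
    optimizer.step()
    optimizer.zero_grad()
    return float(loss), float(adv_loss)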

Model Architecture
Through-Wall Pose Imaging with a Many-to-Many Paradigm

ICMLA 19 | Poster | Explanatory Video
Kevin Meng, Yu Meng
*Intel ISEF Best in Category (+6 Other Prizes), ACM Cutler-Bell Prize Winner, Davidson Fellows HM

This paper establishes a deep-learning model that reconstructs continuous video of a 15-point human skeleton from RF signals, even through visual occlusion. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna-array transceiver. Video frames are then processed with a gait-analysis module to generate ground-truth human skeletons for each frame. Finally, the same skeletons are predicted from the corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that 1) extracts spatial features from RF images, 2) detects all people present in a scene, and 3) aggregates information over many time steps.
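
As a toy illustration of the overall architecture (per-frame CNN features, temporal aggregation with an LSTM, then keypoint regression), here's a stripped-down PyTorch sketch; the region-proposal stage for multi-person detection is omitted, and all layer sizes are invented for the example.

# Toy sketch of the pipeline shape: spatial CNN features -> temporal LSTM
# -> 15-keypoint regression. The real system also has a region-proposal
# stage for multi-person detection (omitted here); sizes are illustrative.
import torch
import torch.nn as nn

class RFPoseNet(nn.Module):
    def __init__(self, hidden=256, num_keypoints=15):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame RF "image" encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)   # aggregate over time
        self.head = nn.Linear(hidden, num_keypoints * 2)     # (x, y) per joint

    def forward(self, rf):                    # rf: (batch, time, 1, H, W)
        b, t = rf.shape[:2]
        feats = self.cnn(rf.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out).view(b, t, -1, 2)   # per-frame skeletons

if __name__ == "__main__":
    skeletons = RFPoseNet()(torch.randn(2, 30, 1, 64, 64))
    print(skeletons.shape)                    # torch.Size([2, 30, 15, 2])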

Project Poster
Vehicle Action Prediction Using LSTM Networks

ICMLA 18 | Demo Video
Kevin Meng, Cheng Shi, Yu Meng
*Intel ISEF Grand Award (+2 Other Prizes)

Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds before a collision, yet a common rule of thumb calls for roughly 2 seconds to react safely in an emergency; even a slight improvement in warning time could save many lives. This paper develops an innovative two-stage neural network model that predicts drivers' actions before fatal collisions can occur. Data are collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer vision model to extract facial movements and rotation; filtered and selected with Classification and Regression Trees (CART); and modeled with an LSTM.
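
To make the two-stage idea concrete, here's a schematic sketch: a CART model ranks candidate sensor channels by importance, and an LSTM consumes the selected channels over a time window to predict the driver's next action. The synthetic data, channel count, window length, and layer sizes are all illustrative assumptions.

# Schematic sketch of the two-stage idea: CART-based feature selection over
# sensor channels, then an LSTM classifier over the selected time series.
# The synthetic data and sizes below are illustrative only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

# Stage 1: rank raw sensor channels (camera-derived facial features, GPS
# speed, OBD-II readings, gyroscope rates, ...) by CART feature importance.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))              # 1000 windows x 12 candidate channels
y = rng.integers(0, 3, size=1000)            # hypothetical action labels
cart = DecisionTreeClassifier(max_depth=5).fit(X, y)
selected = np.argsort(cart.feature_importances_)[-6:]   # keep top 6 channels

# Stage 2: an LSTM that consumes the selected channels over a time window
# and predicts the driver's next action.
class ActionPredictor(nn.Module):
    def __init__(self, num_channels=6, hidden=64, num_actions=3):
        super().__init__()
        self.lstm = nn.LSTM(num_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_actions)

    def forward(self, x):                    # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])               # logits over predicted actions

logits = ActionPredictor()(torch.randn(8, 50, len(selected)))
print(logits.shape)                          # torch.Size([8, 3])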

speaking!

ml research projects

Topic | Date & Location | Venue
Language & Vision Transformers | Jan 2021, Cambridge, MA (online), with David Bau | MIT CSAIL Torralba Lab Reading Group
VERiCLAIM | Oct 2020, San Diego, CA (online) | NVIDIA Fall 2020 GTC
VERiCLAIM | 8/21/2020, Beijing, China (online) | Haibohui Investor Platform
Looking Through Walls | 2/8/2020, New York City, NY | AAAI-20 Main Conference
Looking Through Walls | 12/16/2019, Boca Raton, FL | ICMLA 2019 Main Conference
Looking Through Walls | 7/30/2019, Ft. Meade, MD | National Security Agency HQ
Looking Through Walls (YouTube) | 6/28/2019, Irving, TX | 7-Eleven R&D Labs
Looking Through Walls | 3/10/2019, Plano, TX | DFWCIT Meeting
Vehicle Action Prediction | Feb 2019, Washington, D.C. (Declined) | AAAS Annual Meeting
Vehicle Action Prediction | 12/17/2018, Orlando, FL | ICMLA 2018 Main Conference
Developing ML Research | 3/27/2018 | DFWCIT Meeting
Vehicle Action Prediction (YouTube) | 1/13/2018 | DFW BigData Meeting

tech & innovation

Topic | Date & Location | Venue
Applying Project-Based Learning to Computer Science | 2/20/2021, Cambridge, MA (online) | MIT Blueprint
Looking Through Walls | 8/8/2019, Plano, TX | ACP Foundation Meeting
My Journey Through Science Fair | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Innovation Without Limits (YouTube) | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala

emceeing for events & banquets

Role | Date & Location | Venue
General Event Emcee | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Convention Banquet Emcee | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala
General Event Emcee | 3/25/2017, Plano, TX | DFWAACC Voter Registration Forum