
Kevin Meng

currently: @mit @csail @nvidia @idirlab

contact: mengk [at] mit [dot] edu

location: cambridge, ma

my interests:

  • interpretability
  • natural language processing
  • protein & drug design
  • computer vision
  • entrepreneurship

about me.

Hi! 👋 I'm Kevin, a second-year at MIT studying computer science & electrical engineering.

Recently, I've been thinking a lot about interpretable (and thus controllable) AI. When deployed, deep networks often fail in ways we find difficult to comprehend. Ideally, we'd be able to dissect their parameters and debug these problems, but we lack an understanding of their decision-making and fact-storing mechanisms. To me, demystifying these mechanisms is one of the most important open problems in AI right now. Stay tuned for some joint work with David Bau (CSAIL) and Yonatan Belinkov (Technion) that aims to make progress here :^)

I'm also broadly interested in applying AI to real-world problems. At NVIDIA, I'm developing computational pipelines for drug-target interaction prediction, and on the side, I've been working on a fact-checking system (claim-spotter + claim-checker) used by thousands of researchers and journalists worldwide. If you're bored, feel free to check out stuff I've presented at venues including NeurIPS, AAAI, EACL, ICMLA, NVIDIA GTC, AAAS, the NSA, and 7-Eleven R&D Labs.

I also love teaching. Back at home, I founded the Association for Young Scientists & Innovators (AYSI), a student-run non-profit that provides personalized science fair, research, and computer science mentorship to aspiring scientists. I still occasionally mentor students via AYSI, and I organize and teach workshops at MIT. In my free time, I love running, eating, being bad at basketball, and wandering the streets of new cities.

random news.

  • new papers | march, october 2021

    VeriClaim will appear at the NeurIPS 2021 Workshop on AI for Credible Elections (AI4CE), and our COVID-19 misinformation dashboard paper has been accepted to EACL 2021.

  • claimbuster applied to 2020 presidential election | august-october 2020

    The Duke Reporters' Lab used our ClaimBuster model to aid fact-checking during the 2020 DNC/RNC Conventions and Biden/Trump Presidential Debates. See this Poynter article for more.

  • nvidia fall gtc | july 2020

    VeriClaim was invited for presentation at the NVIDIA Fall GPU Technology Conference (San Diego, CA; held virtually). Huge thank you to Dr. Branislav Kisačanin and Dr. Boris Ginsburg for making this possible!

  • AAAI-20 | feb 2020

    I presented my through-wall imaging system at AAAI-20 in New York City, NY.

  • consumer electronics show (CES) 2020 | jan 2020

    Huge thanks to Vayyar Imaging for inviting me to CES in Las Vegas, NV to tour the showroom and discuss collaborations :)

projects i've been working on!

VeriClaim: End-to-End Computational Fact Checking

NeurIPS 2021 | Claim-Spotter Paper | Demo Video
Kevin Meng
*To be presented at NeurIPS 2021's Workshop on AI for Credible Elections

VeriClaim contains two computational modules: the claim-spotter and claim-checker. The claim-spotter first selects “check-worthy” factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. Then, selected factual statements are passed to the claim-checker, which employs a separate stance detection BERT model to verify each statement using evidence retrieved from a multitude of knowledge resources. Web interface inspired by Fakta, from MIT.
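
For the curious, here's a rough sketch of how the two stages fit together. The checkpoint paths, the CHECK_WORTHY label, and the retrieve_evidence helper are placeholders for illustration, not the released VeriClaim models or API.

# Hypothetical sketch of a claim-spotter -> claim-checker pipeline.
from transformers import pipeline

# Stage 1: score each sentence for "check-worthiness" (placeholder checkpoint).
spotter = pipeline("text-classification", model="path/to/claimspotter-bert")
# Stage 2: stance detection between a claim and a retrieved evidence passage.
checker = pipeline("text-classification", model="path/to/stance-bert")

def fact_check(sentences, retrieve_evidence, threshold=0.5):
    """Spot check-worthy claims, then verify each one against retrieved evidence."""
    results = []
    for sent in sentences:
        pred = spotter(sent)[0]
        if pred["label"] == "CHECK_WORTHY" and pred["score"] >= threshold:
            for evidence in retrieve_evidence(sent):  # knowledge-resource lookup
                stance = checker(f"{sent} [SEP] {evidence}")[0]
                results.append((sent, evidence, stance["label"], stance["score"]))
    return results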

36 Hour Fitness: Your Personalized Fitness Trainer

Slides | Demo Video
Kevin Meng, Brandon Wang, Nihar Annam, Julia Camacho
*HackMIT 2020 Grand Prize Winner, DRW Special Award Winner

In light of heightened obstacles to human interaction and physical health due to COVID-19, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometrical formulas are used to generate scores and suggestions.
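
If you're curious how the scoring works, here's a minimal sketch of the dynamic-time-warping piece: it aligns the user's pose sequence to the reference video's pose sequence and turns the alignment cost into a score. The keypoint shape, the choice of pose estimator, and the scaling constant are assumptions for illustration, not the app's exact scoring formula.

# Rough sketch: DTW alignment between two pose sequences, then a 0-100 score.
import numpy as np

def pose_distance(a, b):
    """Mean Euclidean distance between two normalized keypoint frames."""
    return np.linalg.norm(a - b, axis=-1).mean()

def dtw_cost(ref, user):
    """Classic O(T1*T2) dynamic time warping over pose frames."""
    T1, T2 = len(ref), len(user)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = pose_distance(ref[i - 1], user[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T1, T2] / (T1 + T2)

def workout_score(ref, user):
    """Map alignment cost to a 0-100 score (scaling constant is arbitrary)."""
    return float(100.0 * np.exp(-5.0 * dtw_cost(ref, user)))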

TrendifyApp

Website | App Store Download | Google Play Store Download
Some other people + Kevin Meng

Ever seen an outfit that you liked but had no clue where to get? Look no further: just take a screenshot and upload it to Trendify! We partner with over 200 high-quality brands featuring over 500 thousand products, including a variety of dresses, jeans, boots, tops, sweatshirts, handbags, underwear, swimwear, sleepwear, shirts, outerwear, watches, necklaces and jewelry.

An NLP-Powered Dashboard for Mitigating the COVID-19 Infodemic

EACL 21 | Code | Data | Dashboard | Demo Video
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li

This paper introduces a public dashboard which, in addition to displaying case counts in an interactive map and a navigational panel, provides several unique features not found in other dashboards. In particular, the dashboard uses a curated catalog of COVID-19-related facts and debunks of misinformation, and it displays the most prevalent information from the catalog among Twitter users in user-selected U.S. geographic regions. We also explore the use of BERT models to match tweets with misinformation debunks and detect their stances, and we discuss the results of preliminary experiments on analyzing the spatio-temporal spread of misinformation.
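
As a hedged illustration of the tweet-to-debunk matching step, here's a sketch using a generic sentence-embedding model and cosine similarity rather than the paper's exact BERT setup; the encoder name is a placeholder.

# Sketch: match each tweet to its most similar misinformation debunks.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

def match_debunks(tweets, debunks, top_k=3):
    """Return the top-k most similar debunks for each tweet by cosine similarity."""
    t_emb = model.encode(tweets, normalize_embeddings=True)
    d_emb = model.encode(debunks, normalize_embeddings=True)
    sims = t_emb @ d_emb.T  # cosine similarity, since embeddings are normalized
    return [
        [(debunks[j], float(sims[i, j])) for j in np.argsort(-sims[i])[:top_k]]
        for i in range(len(tweets))
    ]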

Gradient-Based Adversarial Training on Transformer Networks

Pre-Print Paper | Code | ClaimBuster Website
Kevin Meng, Damian Jimenez, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide

We introduce the first adversarially regularized, transformer-based claim-spotting model, which achieves state-of-the-art results by a 4.70-point F1-score margin over current approaches on the ClaimBuster Dataset. In the process, we propose a method for applying adversarial training to transformer models that has the potential to generalize to many similar text classification tasks. Along with our results, we are releasing our codebase and manually labeled datasets. We also showcase our models' real-world usage via a live public API.
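
Here's a hedged sketch of the general technique (FGM-style perturbation of the embedding layer), not the exact ClaimBuster training recipe; it assumes a Hugging Face-style classifier that returns a .loss when labels are included in the batch.

# Sketch: one training step of gradient-based adversarial training on embeddings.
def adversarial_step(model, batch, optimizer, epsilon=1e-2):
    """Clean loss plus loss on adversarially perturbed word embeddings."""
    optimizer.zero_grad()

    # 1) Clean forward/backward pass to get gradients w.r.t. the embedding matrix.
    clean_loss = model(**batch).loss
    clean_loss.backward()

    emb = model.get_input_embeddings().weight
    grad = emb.grad.detach()

    # 2) Perturb the embedding matrix along the (L2-normalized) gradient direction.
    delta = epsilon * grad / (grad.norm() + 1e-12)
    emb.data.add_(delta)

    # 3) Adversarial pass; its gradients accumulate with the clean ones.
    adv_loss = model(**batch).loss
    adv_loss.backward()

    # 4) Restore the embeddings, then update all parameters.
    emb.data.sub_(delta)
    optimizer.step()
    return clean_loss.item(), adv_loss.item()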

Through-Wall Pose Imaging with a Many-to-Many Paradigm

ICMLA 19 | Poster | Explanatory Video
Kevin Meng, Yu Meng
*Intel ISEF Best in Category (+6 Other Prizes), ACM Cutler-Bell Prize Winner, Davidson Fellows HM
*Presented at AAAI-20 in New York, NY

This paper establishes a deep-learning model that can be trained to reconstruct continuous video of a 15-point human skeleton from RF signals, even through visual occlusion. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna array transceiver. Next, video frames are processed with a gait analysis module to generate ground-truth human skeletons for each frame. Then, the same type of skeleton is predicted from corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that 1) extracts spatial features from RF images, 2) detects all people present in a scene, and 3) aggregates information over many time-steps, respectively.
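
For a rough feel of the architecture, here's a simplified PyTorch skeleton: a per-frame CNN followed by an LSTM that regresses 15 (x, y) keypoints per time-step. The RPN/person-detection stage and all layer sizes are omitted or assumed; this is not the paper's exact model.

# Simplified RF-to-skeleton sketch: per-frame CNN features + temporal LSTM.
import torch.nn as nn

class RFPoseNet(nn.Module):
    def __init__(self, hidden=256, num_keypoints=15):
        super().__init__()
        self.cnn = nn.Sequential(                  # spatial features per RF frame
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)   # temporal aggregation
        self.head = nn.Linear(hidden, num_keypoints * 2)    # (x, y) per joint

    def forward(self, rf):                         # rf: (batch, time, 1, H, W)
        B, T = rf.shape[:2]
        feats = self.cnn(rf.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(feats)
        return self.head(out).view(B, T, -1, 2)    # skeleton per time-step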

Vehicle Action Prediction Using LSTM Networks

ICMLA 18 | Demo Video
Kevin Meng, Cheng Shi, Yu Meng
*Intel ISEF Grand Award (+2 Other Prizes)

Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds before collisions. However, a common rule of thumb calls for 2 seconds of warning for safe emergency reactions, so many lives could be saved even with a slight improvement in warning time. This paper develops an innovative two-stage neural network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer vision model to extract facial movements and rotation; filtered and selected with Classification and Regression Trees (CART); and modeled with an LSTM.
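
A rough sketch of the two-stage idea, with made-up feature dimensions and hyperparameters: CART importances rank the per-frame sensor/vision features, and an LSTM over the selected features predicts the driver's next action.

# Sketch: CART-based feature selection followed by an LSTM action classifier.
import numpy as np
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

def select_features(X, y, keep=8):
    """Rank per-frame features with CART importances; return indices of the top ones.
    X: (n_clips, n_frames, n_features); y: (n_clips,) action labels."""
    frames = X.reshape(-1, X.shape[-1])
    labels = np.repeat(y, X.shape[1])          # one label per frame of each clip
    tree = DecisionTreeClassifier(max_depth=6).fit(frames, labels)
    return np.argsort(tree.feature_importances_)[::-1][:keep]

class ActionLSTM(nn.Module):
    """LSTM over the selected feature time series; predicts the driver's next action."""
    def __init__(self, n_features, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                       # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # classify from the last hidden state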

speaking!

ml research projects

Topic | Date & Location | Venue
Language & Vision Transformers (with David Bau) | Jan 2021, Cambridge, MA (online) | MIT CSAIL Torralba Lab Reading Group
VeriClaim | Oct 2020, San Diego, CA (online) | NVIDIA Fall 2020 GTC
VeriClaim | 8/21/2020, Beijing, China (online) | Haibohui Investor Platform
Looking Through Walls | 2/8/2020, New York City, NY | AAAI-20 Poster Session
Looking Through Walls | 12/16/2019, Boca Raton, FL | ICMLA 2019 Main Conference
Looking Through Walls | 7/30/2019, Ft. Meade, MD | National Security Agency HQ
Looking Through Walls (YouTube) | 6/28/2019, Irving, TX | 7-Eleven R&D Labs
Vehicle Action Prediction | Feb 2019, Washington, D.C. (declined) | AAAS Annual Meeting
Vehicle Action Prediction | 12/17/2018, Orlando, FL | ICMLA 2018 Main Conference
Vehicle Action Prediction (YouTube) | 1/13/2018 | DFW BigData Meeting

tech & innovation

Topic | Date & Location | Venue
Applying Project-Based Learning in CS | 2/20/2021, Cambridge, MA (online) | MIT Blueprint
My Journey Through Science Fair | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Building Cool Things :) | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala

emceeing for events & banquets

Role | Date & Location | Venue
General Event Emcee | 7/27/2019, Dallas, TX | McDonald's Education Workshop
Convention Banquet Emcee | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala
General Event Emcee | 3/25/2017, Plano, TX | DFWAACC Voter Registration Forum