Hi! 👋 I'm Kevin, a second-year at MIT studying computer science & electrical engineering.
Recently, I've been thinking a lot about interpretable (and thus controllable) AI. When deployed, deep networks often fail in ways we find difficult to comprehend. Ideally, we'd be able to dissect their parameters and debug these problems, but we lack an understanding of their decision-making and fact-storing mechanisms. To me, demystifying these structures in neural networks is the most important open problem in AI right now. Stay tuned for some joint work with David Bau (CSAIL) and Yonatan Belinkov (Technion) that hopes to make progress on this :^)
I'm also generally interested in AI's applications to various problems. At NVIDIA, I'm developing computational pipelines for drug-target interaction prediction, and on the side, I've been working on a fact-checking system (claim-spotter + claim-checker) used by thousands of researchers and journalists worldwide.
If you're bored, feel free to check out stuff I've presented at venues including AAAI, NVIDIA GTC, AAAS, the NSA, and 7-Eleven R&D Labs. Back at home, I founded the Association for Young Scientists & Innovators, a student-run non-profit that provides personalized science fair, research, and computer science mentorship to aspiring scientists. In my free time, I love teaching, running, being bad at basketball, and wandering the streets of big cities.
Our paper has been accepted to EACL 2021!
VERiCLAIM was invited for presentation at the NVIDIA Fall GPU Technology Conference (San Diego, CA; held virtually).
Huge thank you to Dr. Branislav Kisačanin and Dr. Boris Ginsburg for making this possible!
I presented my through-wall imaging system at AAAI-20 in New York City, NY.
Ever seen an outfit you liked but had no clue where to get it? Look no further: just take a screenshot and upload it to Trendify! We partner with over 200 high-quality brands featuring more than 500,000 products, including dresses, jeans, boots, tops, sweatshirts, handbags, underwear, swimwear, sleepwear, shirts, outerwear, watches, necklaces, and other jewelry.
In light of heightened obstacles to human interaction and physical health due to COVID-19, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometrical formulas are used to generate scores and suggestions.
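To give a rough flavor of the scoring idea, here is a minimal dynamic time warping sketch in Python. This is an illustrative toy, not the app's actual code: real inputs would be per-frame pose keypoints from the pose-estimation model rather than the scalar traces used here, and all names are made up for this sketch.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.

    Returns the minimal cumulative alignment cost; lower means the two
    motion traces (e.g. a joint angle over time) are more similar, even
    if one performer moves faster or slower than the other.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of b
                                 cost[i, j - 1],      # skip a frame of a
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m]

# Compare a reference exercise trace with a user's slightly slower attempt:
# DTW absorbs the timing difference, so a well-executed rep scores near 0.
reference = [0.0, 1.0, 2.0, 1.0, 0.0]
attempt   = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
score = dtw_distance(reference, attempt)
```

A per-joint score like this can then be mapped onto game points or corrective suggestions.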
VERiCLAIM introduces a novel framework containing two computational modules: the claim-spotter and the claim-checker. The claim-spotter first selects "check-worthy" factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. The selected statements are then passed to the claim-checker, which employs a separate stance-detection BERT model to verify each statement against evidence retrieved from a multitude of knowledge resources. The web interface is inspired by MIT's Fakta.
EACL 2021
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li
This paper introduces a public dashboard that, in addition to displaying case counts in an interactive map and a navigational panel, provides several features not found elsewhere. In particular, the dashboard draws on a curated catalog of COVID-19-related facts and debunks of misinformation, and it displays the most prevalent information from the catalog among Twitter users in user-selected U.S. geographic regions. We also explore using BERT models to match tweets with misinformation debunks and to detect their stances, and we discuss preliminary experiments on analyzing the spatio-temporal spread of misinformation.
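To give a flavor of the tweet-to-debunk matching step, here is a minimal sketch that assumes sentences have already been embedded (e.g. by a BERT encoder). The helper names, the toy 2-D vectors, and the 0.8 threshold are all illustrative assumptions, not the dashboard's actual code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def match_debunk(tweet_vec, debunk_vecs, threshold=0.8):
    """Return the index of the most similar debunk embedding, or None
    if no debunk clears the similarity threshold."""
    sims = [cosine(tweet_vec, d) for d in debunk_vecs]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# Toy 2-D vectors standing in for sentence embeddings (illustrative only).
tweet   = np.array([1.0, 0.0])
debunks = [np.array([0.9, 0.1]), np.array([0.0, 1.0])]
idx = match_debunk(tweet, debunks)   # the first debunk clears the threshold
```

A matched tweet–debunk pair could then be passed to a stance model to decide whether the tweet spreads or refutes the misinformation.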
Pre-Print Paper
Kevin Meng, Damian Jimenez, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide
We introduce the first adversarially regularized, transformer-based claim-spotting model, which achieves state-of-the-art results by a 4.70-point F1-score margin over current approaches on the ClaimBuster Dataset. In the process, we propose a method for applying adversarial training to transformer models that can potentially be generalized to many similar text-classification tasks. Alongside our results, we release our codebase and manually labeled datasets, and we showcase our models' real-world usage via a live public API.
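For intuition, here is a minimal sketch of gradient-based adversarial perturbation in the style of the fast gradient method, applied to a toy logistic-regression "claim spotter" over fixed embeddings. The real model perturbs BERT's embedding layer; every name and value below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """Fast-gradient-method-style step of size epsilon along the
    L2-normalized gradient of the loss w.r.t. the input embedding."""
    norm = np.linalg.norm(grad)
    return np.zeros_like(grad) if norm == 0 else epsilon * grad / norm

def loss_and_grads(w, x, y):
    """Binary cross-entropy loss of a logistic model, plus gradients
    w.r.t. the input x (for the attack) and the weights w (for training)."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, (p - y) * w, (p - y) * x

w = np.array([0.5, -0.3])        # toy model weights
x = np.array([1.0, 2.0])         # stands in for a sentence embedding
y = 1.0                          # "check-worthy" label
loss, grad_x, grad_w = loss_and_grads(w, x, y)
x_adv = x + fgm_perturbation(grad_x, epsilon=0.1)   # adversarial example
adv_loss, _, _ = loss_and_grads(w, x_adv, y)        # loss rises under attack
# Adversarial training then minimizes loss on both x and x_adv,
# which acts as a regularizer on the embedding space.
```

The key point is that the perturbation is taken in embedding space, where a gradient exists, rather than over discrete tokens.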
This paper presents a deep-learning model that reconstructs continuous video of a 15-point human skeleton from RF signals, even through visual occlusion. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna-array transceiver. Video frames are processed with a gait-analysis module to generate a ground-truth human skeleton for each frame. The same skeleton is then predicted from the corresponding RF data using a novel custom-designed CNN + RPN + LSTM model, in which the CNN extracts spatial features from RF images, the RPN detects all people present in the scene, and the LSTM aggregates information over many time steps.
Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of danger up to 0.5 seconds ahead of a collision. However, a common rule of thumb calls for 2 seconds to react safely in an emergency, so even a slight improvement in warning time could save many lives. This paper develops an innovative two-stage neural network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer-vision model to extract facial movements and rotation; filtered and selected with Classification and Regression Trees (CART); and modeled with an LSTM.
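As an illustration of the CART-style feature-selection step, here is a minimal Gini-impurity split search in Python; the feature values and labels are toy data invented for this sketch, not from the paper.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array, the criterion CART uses to score splits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Scan candidate thresholds on one feature and return the threshold
    that minimizes the weighted Gini impurity of the two child nodes.
    Features that admit low-impurity splits are the informative ones."""
    best_t, best_score = None, np.inf
    for t in np.unique(feature)[:-1]:
        left, right = labels[feature <= t], labels[feature > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Toy example: a sensor feature that cleanly separates "brake" (1) from "no brake" (0).
angle  = np.array([0.1, 0.2, 0.8, 0.9])
action = np.array([0, 0, 1, 1])
t, s = best_split(angle, action)   # a perfect split has weighted impurity 0
```

Ranking features by how much such splits reduce impurity is one way to decide which sensor streams feed the downstream LSTM.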
| Talk | Date & Location | Venue |
|---|---|---|
| Language & Vision Transformers | Jan 2021 | MIT CSAIL Torralba Lab Reading Group |
| | | NVIDIA Fall 2020 GTC |
| | | Haibohui Investor Platform |
| Looking Through Walls | 2/8/2020, New York City, NY | AAAI-20 Poster Session |
| Looking Through Walls | 12/16/2019, Boca Raton, FL | ICMLA 2019 Main Conference |
| Looking Through Walls | 7/30/2019, Ft. Meade, MD | National Security Agency HQ |
| Looking Through Walls (YouTube) | 6/28/2019, Irving, TX | 7-Eleven R&D Labs |
| Looking Through Walls | 3/10/2019, Plano, TX | DFWCIT Meeting |
| Vehicle Action Prediction | Feb 2019, Washington D.C. (declined) | AAAS Annual Meeting |
| Vehicle Action Prediction | 12/17/2018, Orlando, FL | ICMLA 2018 Main Conference |
| Developing ML Research | 3/27/2018 | DFWCIT Meeting |
| Vehicle Action Prediction (YouTube) | 1/13/2018 | DFW BigData Meeting |
| Talk | Date & Location | Venue |
|---|---|---|
| Applying Project-Based Learning in CS | 2/20/2021 | |
| Looking Through Walls | 8/8/2019, Plano, TX | ACP Foundation Meeting |
| My Journey Through Science Fair | 7/27/2019, Dallas, TX | McDonald's Education Workshop |
| Innovation Without Limits (YouTube) | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala |
| General Event Emcee | 7/27/2019, Dallas, TX | McDonald's Education Workshop |
| Convention Banquet Emcee | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala |
| General Event Emcee | 3/25/2017, Plano, TX | DFWAACC Voter Registration Forum |