Hi there! 👋 I'm a computer science student at MIT who is passionate about developing technologies that solve real-world problems. Over the years, I've built projects that have been presented at venues including the NSA, IEEE, AAAI, AAAS, NVIDIA, and 7-Eleven R&D Labs. Back home, I founded the Association for Young Scientists & Innovators, a student-run non-profit that provides personalized science fair and research mentorship to aspiring scientists. In my free time, I enjoy making music, reading, and playing basketball.
At MIT CSAIL, I'm currently designing new algorithms focused on interpretability & controllability, working with David Bau, Yonatan Belinkov, and Antonio Torralba. Prior to MIT, I was a member of the Innovative Data Intelligence Research Lab at UT Arlington, advised by Chengkai Li.
See below for some recent updates from Dec 2019 onward!
VERiCLAIM was invited for presentation at the NVIDIA Fall GPU Technology Conference (San Diego, CA; held online). Huge thank you to Dr. Branislav Kisačanin and Dr. Boris Ginsburg for making this possible!
I presented my through-wall imaging system at AAAI-20 in New York City, NY.
I was invited to visit CES in Las Vegas, NV to tour the showroom and discuss business collaborations.
I presented my paper, Through-Wall Pose Imaging with a Many-to-Many Paradigm, at ICMLA 2019 in Boca Raton, FL.
For other projects that I've open-sourced, please visit my GitHub.
In light of heightened obstacles to human interaction and physical health due to COVID-19, we present 36 Hour Fitness: a fun, intuitive, and powerful app that enhances the quality of home workouts while bringing friends, family, and workout buddies together over the Internet. This system (built in 36 hours 😉) helps replicate the gym experience by allowing users to select any of their favorite workout videos and receive real-time automated feedback. Using gamification and other social features, 36 Hour Fitness creates exciting virtual group workouts from the comfort of your home. Dynamic time warping, neural pose estimation, and simple geometric formulas are used to generate scores and suggestions.
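For a flavor of the scoring step, here is a minimal Python sketch (with synthetic data; not the app's actual code) of how dynamic time warping can align a user's pose sequence against a reference video and turn the alignment cost into a score:

```python
# Minimal sketch of DTW-based workout scoring. Poses are flattened (x, y)
# keypoint arrays per frame, e.g. from any off-the-shelf pose estimator.
import numpy as np

def dtw_distance(user_seq: np.ndarray, ref_seq: np.ndarray) -> float:
    """Classic O(n*m) dynamic time warping between two pose sequences."""
    n, m = len(user_seq), len(ref_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(user_seq[i - 1] - ref_seq[j - 1])  # frame-to-frame pose distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a user frame
                                 cost[i, j - 1],      # skip a reference frame
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m] / (n + m)  # length-normalized alignment cost

def workout_score(user_seq, ref_seq, scale: float = 5.0) -> float:
    """Map alignment cost to a 0-100 score (scale is a tuning constant)."""
    return 100.0 * float(np.exp(-dtw_distance(user_seq, ref_seq) / scale))

# Toy usage: two 30-frame sequences of 17 (x, y) keypoints each.
rng = np.random.default_rng(0)
ref = rng.normal(size=(30, 34))
user = ref + rng.normal(scale=0.1, size=(30, 34))  # a close imitation
print(f"score: {workout_score(user, ref):.1f}/100")
```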
VERiCLAIM introduces a novel framework containing two computational modules: the claim-spotter and the claim-checker. The claim-spotter first selects “check-worthy” factual statements from large amounts of text using a Bidirectional Encoder Representations from Transformers (BERT) model trained with a novel gradient-based adversarial training algorithm. Selected statements are then passed to the claim-checker, which employs a separate stance-detection BERT model to verify each statement against evidence retrieved from a multitude of knowledge resources. The web interface is inspired by Fakta, from MIT.
Pre-Print Paper
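To make the two-module flow concrete, here is a hedged Python sketch of the pipeline; the checkpoint paths, label name, and threshold are placeholders rather than the released VERiCLAIM system:

```python
# Hedged sketch of the claim-spotter -> claim-checker flow described above.
from transformers import pipeline

# Module 1: claim-spotter -- a BERT classifier that flags "check-worthy"
# factual statements. The checkpoint paths below are hypothetical.
claim_spotter = pipeline("text-classification", model="path/to/claim-spotter-bert")

# Module 2: claim-checker -- a stance-detection BERT model that compares a
# claim against a retrieved piece of evidence (sentence-pair input).
stance_model = pipeline("text-classification", model="path/to/stance-bert")

def fact_check(sentences, retrieve_evidence, spot_threshold=0.5):
    """Spot check-worthy claims, then verify each against retrieved evidence."""
    verdicts = []
    for sent in sentences:
        spot = claim_spotter(sent)[0]
        # "CHECK_WORTHY" is an assumed label name for illustration.
        if spot["label"] == "CHECK_WORTHY" and spot["score"] >= spot_threshold:
            for evidence in retrieve_evidence(sent):  # caller-supplied retriever
                stance = stance_model({"text": sent, "text_pair": evidence})[0]
                verdicts.append((sent, evidence, stance["label"], stance["score"]))
    return verdicts
```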
Zhengyuan Zhu, Kevin Meng, Josue Caraballo, Israa Jaradat, Xiao Shi, Zeyu Zhang, Farahnaz Akrami, Haojin Liao, Fatma Arslan, Damian Jimenez, Mohammed Samiul Saeef, Paras Pathak, Chengkai Li
This paper introduces a public dashboard which, in addition to displaying case counts in an interactive map and a navigational panel, provides several features not found elsewhere. In particular, the dashboard maintains a curated catalog of COVID-19-related facts and misinformation debunks, and it surfaces the catalog entries most prevalent among Twitter users in user-selected U.S. geographic regions. We also explore the use of BERT models to match tweets with misinformation debunks and detect their stances, and we discuss preliminary experiments on analyzing the spatio-temporal spread of misinformation.
Pre-Print Paper
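As an illustration of the tweet-to-debunk matching idea, here is a minimal embedding-similarity sketch using the sentence-transformers library and a stand-in encoder; the paper's actual BERT models and data differ:

```python
# Match tweets to misinformation debunks by cosine similarity of sentence
# embeddings. Toy data; a stance model (as in the paper) would then
# classify agree/disagree between each matched pair.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

debunks = [
    "Drinking water does not flush out COVID-19.",
    "5G networks do not spread the coronavirus.",
]
tweets = ["heard that 5g towers are what's really spreading covid..."]

debunk_emb = model.encode(debunks, convert_to_tensor=True)
tweet_emb = model.encode(tweets, convert_to_tensor=True)

scores = util.cos_sim(tweet_emb, debunk_emb)  # shape: (n_tweets, n_debunks)
for i, tweet in enumerate(tweets):
    j = int(scores[i].argmax())
    print(tweet, "->", debunks[j], f"(sim={scores[i, j]:.2f})")
```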
Kevin Meng, Damian Jimenez, Fatma Arslan, Jacob Daniel Devasier, Daniel Obembe, Chengkai Li
*Deployed on ClaimBuster, used by thousands of fact-checkers and research groups worldwide
We introduce the first adversarially regularized, transformer-based claim-spotting model, which achieves state-of-the-art results on the ClaimBuster Dataset by a 4.70-point F1-score margin over current approaches. In the process, we propose a method for applying adversarial training to transformer models that has the potential to generalize to many similar text-classification tasks. Along with our results, we release our codebase and manually labeled datasets, and we showcase our model's real-world usage via a live public API.
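For intuition, the sketch below shows one common way to apply gradient-based adversarial training to a BERT classifier: perturb the word-embedding weights along the loss gradient (FGM-style), take a second backward pass, then restore. It is illustrative only; the paper's exact algorithm and hyperparameters may differ:

```python
# FGM-style adversarial training step for a BERT sequence classifier.
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

class FGM:
    """Perturb embedding weights along the loss gradient, then restore."""
    def __init__(self, model, epsilon=1.0):  # epsilon is illustrative
        self.model, self.epsilon, self.backup = model, epsilon, {}
    def attack(self, emb_name="word_embeddings"):
        for name, p in self.model.named_parameters():
            if p.requires_grad and emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0:
                    p.data.add_(self.epsilon * p.grad / norm)
    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

fgm = FGM(model)
batch = tok(["the senator voted against the bill last year"], return_tensors="pt")
labels = torch.tensor([1])  # toy label: check-worthy

loss = model(**batch, labels=labels).loss
loss.backward()                                # clean gradients
fgm.attack()                                   # move embeddings adversarially
model(**batch, labels=labels).loss.backward()  # accumulate adversarial grads
fgm.restore()                                  # undo the perturbation
opt.step(); opt.zero_grad()
```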
This paper presents a deep-learning model that can be trained to reconstruct continuous video of a 15-point human skeleton from RF signals, even through visual occlusion. During training, video and RF data are collected simultaneously using a co-located setup containing an optical camera and an RF antenna-array transceiver. Video frames are then processed with a gait-analysis module to generate a ground-truth human skeleton for each frame. Finally, the same type of skeleton is predicted from the corresponding RF data using a novel custom-designed CNN + RPN + LSTM model that (1) extracts spatial features from RF images, (2) detects all people present in a scene, and (3) aggregates information over many time steps, respectively.
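The skeleton below sketches a simplified, single-person variant of this architecture in PyTorch; the RPN person-detection stage is omitted, and all shapes and layer sizes are illustrative rather than the paper's:

```python
# CNN extracts spatial features per RF frame; LSTM aggregates over time;
# a linear head regresses 15 (x, y) skeleton keypoints per time step.
import torch
import torch.nn as nn

class RFPoseNet(nn.Module):
    def __init__(self, n_keypoints: int = 15, hidden: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame spatial features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),  # -> 32*4*4 = 512
        )
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_keypoints * 2)

    def forward(self, rf: torch.Tensor) -> torch.Tensor:
        # rf: (batch, time, 1, H, W) sequence of RF heatmap frames
        b, t = rf.shape[:2]
        feats = self.cnn(rf.flatten(0, 1)).view(b, t, -1)  # fold time into batch
        out, _ = self.lstm(feats)                          # temporal aggregation
        return self.head(out).view(b, t, -1, 2)            # (batch, time, kp, 2)

poses = RFPoseNet()(torch.randn(2, 8, 1, 64, 64))
print(poses.shape)  # torch.Size([2, 8, 15, 2])
```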
Current Advanced Driver Assistance Systems provide reactive protections that warn drivers of dangers up to 0.5 seconds before a collision. However, a common rule of thumb calls for 2 seconds of reaction time in emergencies, so even a slight improvement to the warning time could save many lives. This paper develops an innovative two-stage neural-network model that predicts drivers' actions before fatal collisions can occur. Data is collected from sensors and devices including two cameras, a GPS module, an Onboard Diagnostics-II (OBD-II) interface, and a gyroscope; preprocessed with a neural computer-vision model to extract facial movements and rotation; filtered and selected with the Classification and Regression Tree (CART) algorithm; and modeled with an LSTM.
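As a toy illustration of the CART-based feature-selection step, the sketch below ranks synthetic sensor features with a decision tree and keeps the top-ranked ones for a downstream LSTM; the feature names, data, and importance cutoff are all made up:

```python
# CART (decision tree) feature selection over synthetic driving features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
feature_names = ["head_yaw", "head_pitch", "speed_obd", "gyro_z", "gps_heading"]
X = rng.normal(size=(500, len(feature_names)))   # per-time-step features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic "action" label

cart = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
ranked = sorted(zip(cart.feature_importances_, feature_names), reverse=True)
selected = [name for imp, name in ranked if imp > 0.05]  # arbitrary cutoff
print("selected features for the LSTM:", selected)
```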
| Talk | Date & Location | Venue |
| --- | --- | --- |
| VERiCLAIM | Online | NVIDIA Fall 2020 GTC |
| | | Haibohui Investor Platform |
| Looking Through Walls | 2/8/2020, New York City, NY | AAAI-20 Main Conference |
| Looking Through Walls | 12/16/2019, Boca Raton, FL | ICMLA 2019 Main Conference |
| Looking Through Walls | 7/30/2019, Ft. Meade, MD | National Security Agency HQ |
| Looking Through Walls (YouTube) | 6/28/2019, Irving, TX | 7-Eleven R&D Labs |
| Looking Through Walls | 3/10/2019, Plano, TX | DFWCIT Meeting |
| Vehicle Action Prediction | Feb 2019, Washington D.C. (Declined) | AAAS Annual Meeting |
| Vehicle Action Prediction | 12/17/2018, Orlando, FL | ICMLA 2018 Main Conference |
| Developing ML Research | 3/27/2018 | DFWCIT Meeting |
| Vehicle Action Prediction (YouTube) | 1/13/2018 | DFW BigData Meeting |
| Looking Through Walls | 8/8/2019, Plano, TX | ACP Foundation Meeting |
| My Journey Through Science Fair | 7/27/2019, Dallas, TX | McDonald's Education Workshop |
| Innovation Without Limits (YouTube) | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala |
| General Event Emcee | 7/27/2019, Dallas, TX | McDonald's Education Workshop |
| Convention Banquet Emcee | 11/17/2018, Addison, TX | ACP MetroCon Symposium & Gala |
| General Event Emcee | 3/25/2017, Plano, TX | DFWAACC Voter Registration Forum |