Motor vehicle travel is a major means of transportation in the United States, yet for all its advantages, fatal motor vehicle crashes in the U.S. impose an estimated societal burden of more than $230 billion each year in medical and other costs. Motor vehicle crashes are also the leading cause of death for persons at every age from 5 to 32 years old. In this project, we dived deep into the records of fatal car crashes in the U.S. in 2016, collected by the National Highway Traffic Safety Administration (NHTSA) and encoded in the government’s Fatality Analysis Reporting System. Using this dataset and related research, we developed an interactive essay that thoroughly explores the top risk factors correlated with fatal motor vehicle crashes.
To read the interactive essay in your web browser, visit here.
The report detailing our methodologies and the poster summarizing this project are also available.
Categorization of music plays an essential role in music appreciation and cognition. Studies show that genre is so important to listeners that the style of a piece can influence their liking for it more than the piece itself [1, 2]. Recognizing song genres, however, is a challenging task: genres are subjective in nature, and there are no clear-cut boundaries between human-labeled genres.
Multiple studies have shown that machine learning approaches have the potential to achieve significant results on this problem. However, we believe there is room to further explore the potential of deep learning for music genre classification. While other works have aimed to adopt and assess deep learning methods shown to be effective in other domains, there is still a great need for original research that focuses primarily on music and utilizes musical knowledge and insight.
For reproducibility, we published our experiment worksheet on CodaLab. It contains an introduction to the problem, the datasets, code, and other artifacts from our various experiments.
Incident Analytics was an intelligent incident management tool we developed for AppDynamics DevOps customers during a hackathon. AppDynamics customers can configure health rules based on a few key metrics of interest and get alerted when these metrics show unexpected patterns. However, without knowledge of historical data, DevOps teams might spend hours figuring out a resolution even when someone had already solved a similar issue before. In this project, we built a tool based on machine learning algorithms to automatically identify root cause analyses (RCAs) for incidents — a task that previously took hours if not days of manual work. The solution helped customers understand the context around incoming incidents and get to resolution much faster. We applied machine learning to group incidents together, correlate incidents with RCAs, and determine whether incidents were triggered by a global issue. This constitutes a big improvement over the current AppDynamics solution, which provides no out-of-the-box analytics.
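As a rough illustration of the incident-grouping step, one simple approach is to treat incident descriptions as bags of words and greedily merge incidents whose cosine similarity exceeds a threshold. This is a minimal sketch under assumed details — the function names, the greedy strategy, and the 0.5 threshold are hypothetical, not the algorithm actually shipped:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_incidents(descriptions, threshold=0.5):
    """Greedily assign each incident to the first existing group
    whose representative it resembles; otherwise start a new group."""
    vectors = [Counter(d.lower().split()) for d in descriptions]
    groups = []  # list of (representative_vector, member_indices)
    for i, vec in enumerate(vectors):
        for rep, members in groups:
            if cosine(vec, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append((vec, [i]))
    return [members for _, members in groups]
```

For example, two "cpu usage spike" incidents on different services would land in one group while an unrelated database incident starts its own; a production system would use richer features (metric signatures, topology) rather than raw word counts.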
In the years since college, I’ve been working as a software engineer (focusing on UI) on the core APM team at AppDynamics (now part of Cisco), based in downtown San Francisco, California. Application Performance Management (APM) is a technology that provides end-to-end, business-transaction-centric management of complex and distributed software applications. Auto-discovered transactions, dynamic baselining, code-level diagnostics, and Virtual War Room collaboration ensure rapid issue identification and resolution to maintain an ideal user experience. At AppDynamics, I developed a complex yet performant AngularJS-based web application UI that provides rich user interaction with a wealth of APM data at scale. I’ve become seasoned in all phases of the software product lifecycle: designing, prototyping, developing, maintaining, automating tests, and shipping useful features to our customers.
An electroencephalogram (EEG) is the most important tool in the diagnosis of seizure disorders. Between seizures, epileptiform neural activity in EEG recordings occurs in the form of spikes or spike-and-slow-wave complexes. An automated EEG interpretation algorithm that is well accepted by clinicians has been a research goal for decades. As a participant in an NSF-funded Research Experiences for Undergraduates (REU) program hosted at the Clemson University School of Computing, I continued this endeavor by developing an automated system that detects epilepsy-related events, in real time, from scalp EEG recordings.
To find the optimal algorithm for this purpose, I constructed a multi-stage processing pipeline. In the first stage, I cleaned up clinical data gathered from 100 epileptic patients and prepared it for cross-validation. Next, I used wavelet transformations in a “sliding window” approach to generate study features from the EEG signal. I then applied machine learning algorithms and analyzed their performance in classifying data patterns into epileptiform versus other activities. In this stage I also explored using a hidden Markov model to fit the time sequence in which epileptiform events occurred. In the final step, I further separated target epileptiform events from noise by applying a statistical model locally and stitching outputs from different signal windows together. – source code
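The sliding-window wavelet feature extraction in the second stage can be sketched as follows. The Haar wavelet, three decomposition levels, and per-band energy features are illustrative assumptions for the sketch, not the exact configuration used in the project:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:  # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def window_features(signal, window_size, step):
    """Slide a window over the signal and compute the energy in
    each wavelet band for every window (a hypothetical feature set)."""
    features = []
    for start in range(0, len(signal) - window_size + 1, step):
        win = np.asarray(signal[start:start + window_size], dtype=float)
        feats = []
        approx = win
        for _ in range(3):  # three decomposition levels
            approx, detail = haar_dwt(approx)
            feats.append(np.sum(detail ** 2))  # detail-band energy
        feats.append(np.sum(approx ** 2))      # final approximation energy
        features.append(feats)
    return np.array(features)
```

Each row of the resulting matrix (one row per window) would then feed a downstream classifier; because the Haar transform is orthonormal, the band energies of a window sum to the window’s total energy.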
The automated detections were highlighted in real time on the web interface of eegNet, a standardized EEG database developed at Clemson.
Automatic detection of epileptiform events in EEG recordings – poster
Barriers for scientists to practice open science prevail due to a range of cultural and technological reasons. This undergraduate thesis, developed under the guidance of the Center for Open Science, seeks to understand the incentive structure for open science from a sociotechnical perspective and attempts a software solution to facilitate its implementation. The research paper, Incentive structure for Open Science in Web 2.0, elucidates how the current reward system needs to change to encourage wider practice of open science: to create incentives for researchers to open up their research materials to the broader community, organizations need to provide researchers with intrinsic rewards, proper credit allocation, and tangible career benefits. In the technical portion of the project, Designing Data Visualizations for Open Science, I prototyped an interactive research exploration and organization tool for the Open Science Framework. The thesis contributes to the collective effort toward open science by making the creation of incentives an explicit design goal for open science web applications. –thesis cover | STS paper