</ Matthew Ocando's Portfolio >
AUTONOMOUS ALERT
Overview
Autonomous Alert involved the design and construction of a vehicle-mounted system that uses machine learning to detect emergency vehicle sirens from a microphone input. The device was targeted toward drivers with hearing impairments, but could find utility in other environments.
The final system was fully independent, capable of running on a Raspberry Pi 2 powered from a standard 12V car lighter port through a USB adapter. The microphone input was filtered with a bandpass RC filter designed around the constraints of the system. A 3D printed housing was developed for the design, including a section for three LEDs that serve as the visual indicator when a siren is detected.
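As a rough illustration of the filter-design step, the corner frequency of each first-order RC stage follows f_c = 1/(2πRC). The component values below are purely illustrative (the project's actual R and C values are not given here); they are chosen to bracket the typical fundamental range of emergency sirens:

```python
import math

# Illustrative component values only -- NOT the project's actual parts.
R_HP, C_HP = 1_000, 330e-9    # high-pass stage: 1 kOhm, 330 nF
R_LP, C_LP = 4_700, 18e-9     # low-pass stage: 4.7 kOhm, 18 nF

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC corner frequency: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

f_low = rc_cutoff_hz(R_HP, C_HP)    # ~482 Hz: attenuates road/engine rumble
f_high = rc_cutoff_hz(R_LP, C_LP)   # ~1881 Hz: attenuates high-frequency hiss
print(f"passband ~{f_low:.0f} Hz to {f_high:.0f} Hz")
```

Cascading a high-pass stage with a higher-frequency low-pass stage like this yields a passband that keeps siren energy while rejecting low rumble and high hiss before the signal reaches the classifier.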
Detection
The detection system used an 18-layer residual neural network (ResNet18) to train and deploy a model capable of distinguishing ambient noise from emergency vehicle sirens based on a microphone input.
Given that ResNet18 is an existing, powerful image classifier with weights pre-trained on ImageNet, the siren detection problem was framed as an image recognition problem to allow for transfer learning. This was done by creating spectrogram images from the microphone capture, which were used for both training and testing. In the final deployment, the real-time sound input was used to create 5-second-window spectrograms that updated with one second of new microphone data every second.
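The rolling 5-second window can be sketched as a ring buffer plus a short-time FFT. The sample rate, FFT size, and hop length below are assumptions for illustration, not the project's actual parameters:

```python
import numpy as np

SAMPLE_RATE = 16_000   # assumed capture rate; not specified in the write-up
WINDOW_SEC  = 5        # length of the rolling analysis window
HOP_SEC     = 1        # amount of new audio appended per update
N_FFT       = 512      # assumed FFT frame size
HOP_LENGTH  = 256      # assumed hop between FFT frames

def update_buffer(buffer: np.ndarray, new_samples: np.ndarray) -> np.ndarray:
    """Slide the 5 s window forward: drop the oldest second, append the newest."""
    return np.concatenate([buffer[len(new_samples):], new_samples])

def log_spectrogram(samples: np.ndarray) -> np.ndarray:
    """Log-magnitude short-time FFT -- the 'image' handed to the classifier."""
    frames = []
    for start in range(0, len(samples) - N_FFT + 1, HOP_LENGTH):
        frame = samples[start:start + N_FFT] * np.hanning(N_FFT)
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.stack(frames, axis=1)   # shape: (freq_bins, time_frames)
    return np.log1p(spec)

# Simulated stream: start from silence, push one second of audio per update.
buffer = np.zeros(WINDOW_SEC * SAMPLE_RATE)
rng = np.random.default_rng(0)
for _ in range(3):
    buffer = update_buffer(buffer, rng.standard_normal(HOP_SEC * SAMPLE_RATE))
spec = log_spectrogram(buffer)
print(spec.shape)
```

Each one-second update therefore reuses four seconds of already-captured audio, so the classifier sees overlapping windows and can react within about a second of a siren appearing.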
Takeaways
This project allowed me to develop skills in machine learning techniques and filter design. In particular, framing a sound classification problem as an image classification problem was a unique exercise in feature engineering that highlighted the importance of extracting useful information from data before processing.
Additionally, using services like GitHub to organize and log progress gave insight into industry standards for software and digital engineering.