Gesture-Enabled Speaker
I was embedded at Google ATAP for two months as the Harman X Project Lead, tasked with creating a gesture-enabled audio device with visual feedback, which was shown at Google I/O 2016.
Ivan Poupyrev, Head of Soli at Google ATAP, initiated several candidate collaborations and demos for Google I/O, each contingent on passing the team’s first gate and being personally approved by Ivan. The project was green-lit on the strength of the initial experience prototype I created, which piped simulated “Soli gesture signals” into an Intel Edison running C++ code I wrote to control light and sound on a hacked JBL speaker.
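The prototype code itself isn’t reproduced here, but the sketch below illustrates the kind of control loop that description implies: simulated gesture events arriving on the Edison and being mapped to light and sound cues. The UDP port, the gesture codes, and the setLedBrightness()/playTone() helpers are hypothetical stand-ins for illustration, not the actual Harman/ATAP code.

```cpp
// Minimal sketch, assuming simulated gesture events arrive as single-byte
// codes over UDP from the "Soli gesture signal" simulator. All names and
// values here are hypothetical placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

enum Gesture : uint8_t { SWIPE_LEFT = 1, SWIPE_RIGHT = 2, TAP = 3 };

// Hypothetical stand-ins for the GPIO/audio calls on the Edison and the
// hacked JBL speaker; here they just log, so the sketch runs anywhere.
void setLedBrightness(float level) { std::printf("LED -> %.1f\n", level); }
void playTone(int hz, int ms)      { std::printf("tone %d Hz, %d ms\n", hz, ms); }

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(9000);  // port used by the (assumed) simulator
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    uint8_t code = 0;
    while (recv(sock, &code, 1, 0) == 1) {  // one simulated gesture per packet
        switch (code) {
            case SWIPE_LEFT:  setLedBrightness(0.2f); playTone(440, 80);  break;
            case SWIPE_RIGHT: setLedBrightness(1.0f); playTone(660, 80);  break;
            case TAP:         setLedBrightness(0.6f); playTone(880, 120); break;
            default:          std::printf("unknown gesture code %d\n", code);
        }
    }
    close(sock);
    return 0;
}
```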
For the following two months, my contributions included:
Creating software in openFrameworks to test potential interaction design methods (gestures, visual feedback, auditory feedback, etc.); see the sketch after this list
Extending the backend I created on the Intel Edison for the initial experience prototype (keeping the same software architecture) to incorporate the full feature set
Contributing to the interaction design (i.e., which gestures, which use cases)
Assisting with prototyping and iterating on the training of machine learning models for radar-based gestures, including material placement and hand movements
Providing design feedback on the industrial design, including grille patterns, material choice, and color choice
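As a rough illustration of the openFrameworks test software mentioned above, the sketch below pairs visual and auditory feedback with a simulated gesture event (a key press standing in for a recognized gesture), which is the kind of loop used to tune feedback timing. The chime.wav asset and the specific feedback mapping are assumptions for illustration, not the project’s actual code.

```cpp
// Minimal openFrameworks sketch: a key press simulates a recognized gesture
// and triggers synchronized visual (glow) and auditory (chime) feedback.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void setup() override {
        ofSetFrameRate(60);
        ofBackground(0);
        chime.load("chime.wav");  // hypothetical audio asset in bin/data
    }
    void update() override {
        glow = ofLerp(glow, 0.0f, 0.05f);  // fade the visual feedback each frame
    }
    void draw() override {
        ofSetColor(255, 255, 255, glow * 255);
        ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, 100 + glow * 60);
    }
    void keyPressed(int key) override {
        // Stand-in for a gesture event (e.g., a swipe): fire both feedback
        // channels at once so their relative timing can be evaluated.
        if (key == ' ') {
            chime.play();
            glow = 1.0f;
        }
    }
private:
    ofSoundPlayer chime;
    float glow = 0.0f;
};

int main() {
    ofSetupOpenGL(800, 600, OF_WINDOW);
    ofRunApp(new ofApp());
}
```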