Neurosense

Cognitive Sensing System

During my last year at Harman X, I ramped up and led the company’s first team focused on AI, the Future Intelligence Lab. In the Future Experience team, we had iterated on prototypes of a system that aimed to derive a driver’s cognitive and emotional states through non-contact methods. With a team of 50 now in place, I chose to focus the majority of my team members on turning these prototypes into reality.

The first, and primary, focus of Neurosense was to classify a driver’s cognitive load (CL), which we identified as one of the internal states that most impacts a driver’s ability to drive safely. Once this state could be reliably classified, beyond the obvious adjustment of the vehicle’s various ADAS functions, the complexity of the HMI could be reduced as the driver’s CL rises, and notifications could be postponed or rerouted to other output modalities to help load-balance the driver.
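As a rough illustration of the load-balancing idea, the sketch below maps a classified CL level to a notification-routing decision. All names and thresholds here are hypothetical; the actual Neurosense interfaces are not described in this piece.

```python
# Hypothetical sketch: routing vehicle notifications based on a
# classified cognitive-load (CL) level. Names are illustrative only.
from enum import Enum

class CognitiveLoad(Enum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def route_notification(priority: int, load: CognitiveLoad) -> str:
    """Decide how to deliver a notification given the driver's CL.

    priority: 0 (informational) up to 2 (safety-critical).
    Safety-critical messages are always delivered immediately; lower-
    priority ones are postponed or moved to a less demanding modality
    as load rises.
    """
    if priority >= 2:
        return "visual+audio"   # always deliver safety alerts
    if load is CognitiveLoad.HIGH:
        return "postpone"       # hold until load drops
    if load is CognitiveLoad.MEDIUM:
        return "audio"          # offload the visual channel
    return "visual"             # full HMI at low load

print(route_notification(0, CognitiveLoad.HIGH))  # -> postpone
```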

My team created a driving-simulator setup with a plethora of sensors, including infrared cameras, ultra-wideband radar, chest-worn ECG sensors, and head-worn EEG sensors, in addition to collecting various driving parameters such as steering-wheel position and pedal position. We developed an experiment that induced cognitive load through validated methods, including the Detection Response Task (DRT), n-back tests, and the OSPAN task.

Labeled data was collected across these hundreds of repeated experiments, and after iteration and investigation, eye-gaze fixations, saccades, and blink rate emerged as the most salient input signals for the ML classifier.
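To make one of those features concrete, a blink rate can be derived from an eye-openness signal by counting open-to-closed transitions. The threshold and signal shape below are assumptions for illustration, not values from the Neurosense pipeline.

```python
# Hedged sketch: deriving a blink-rate feature from an eye-openness
# signal. The 0.2 threshold is an assumed value for illustration.
import numpy as np

def blink_rate(eye_openness: np.ndarray, fps: float, thresh: float = 0.2) -> float:
    """Count open->closed transitions below `thresh`, return blinks/min."""
    closed = eye_openness < thresh
    # A blink starts where the eye transitions from open to closed.
    starts = np.flatnonzero(~closed[:-1] & closed[1:])
    duration_min = len(eye_openness) / fps / 60.0
    return len(starts) / duration_min

# 10 s of synthetic 30 fps data containing two blinks
signal = np.ones(300)
signal[50:55] = 0.0
signal[200:206] = 0.0
print(blink_rate(signal, fps=30.0))  # -> 12.0 blinks/min
```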

We then created real-time algorithms capable of running in a moving vehicle at a functionally acceptable level, with the next step being the collection of training data from equipped vehicles across Harman’s hundreds of sites around the world.
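A common pattern for this kind of in-vehicle, real-time estimation is a sliding window: features are aggregated over the most recent frames so the estimate updates continuously as new sensor data arrives. The sketch below shows that pattern with a toy classifier; it is an assumption about the general approach, not the production algorithm.

```python
# Illustrative sliding-window loop for real-time classification.
from collections import deque

class RollingEstimator:
    def __init__(self, window: int, classify):
        self.buf = deque(maxlen=window)  # last `window` feature frames
        self.classify = classify

    def update(self, frame):
        """Append a frame; return a label once the window is full."""
        self.buf.append(frame)
        if len(self.buf) < self.buf.maxlen:
            return None  # not enough context yet
        return self.classify(list(self.buf))

# Toy classifier: mean of a scalar feature thresholded into low/high CL
est = RollingEstimator(5, lambda w: "high" if sum(w) / len(w) > 0.5 else "low")
out = None
for x in [0.1, 0.2, 0.9, 0.9, 0.9, 0.9]:
    out = est.update(x)
print(out)  # -> high
```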

Current HARMAN product offering
