Tandon EEs Document the Risks of Incorporating DNNs into Autonomous Systems

The integration of deep neural networks (DNNs) has enabled cars and robots to use input from sensors to function without any human intervention. Yet DNNs are vulnerable to hacking, which makes their use in autonomous systems risky. A team of researchers from NYU Tandon's Electrical and Computer Engineering department, including CCS faculty member Dr. Siddharth Garg, recently showed how relatively easy it is to attack an end-to-end trained DNN. Using a roadside digital billboard that displayed videos to approaching vehicles, the team was able to cause the DNN controller in a vehicle to generate erroneous steering commands. As a result, the hackers could steer the cars into other lanes or other vehicles, potentially leading to serious damage and even loss of life.

First documented in a paper presented at last year's IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the results of their test are also documented in the article "Learning-Based Real-Time Process-Aware Anomaly Monitoring for Assured Autonomy," which was published in IEEE Design & Test magazine in May. In addition to Garg, the research team included Professor of Electrical and Computer Engineering Farshad Khorrami, research scientist Prashanth Krishnamurthy, and doctoral student Naman Patel of Tandon's Controls/Robotics Research Laboratory.

To learn more about the work, read the profile prepared by NYU Tandon here.
