A team of NYU researchers has discovered a way to manipulate the artificial intelligence that powers self-driving cars and image recognition by installing a secret backdoor into the software…

“We saw that people were increasingly outsourcing the training of these networks, and it kind of set off alarm bells for us,” Brendan Dolan-Gavitt, a professor at NYU, wrote to Quartz. “Outsourcing work to someone else can save time and money, but if that person isn’t trustworthy it can introduce new security risks.”
August 24, 2017
Emerald Knox
Press Highlights