Three researchers from New York University (NYU) have published a paper this week describing a method that an attacker could use to poison deep learning-based artificial intelligence (AI) models during training.
The researchers based their attack on a common practice in the AI community, where research teams and companies alike outsource AI training to on-demand Machine-Learning-as-a-Service (MLaaS) platforms.
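The core idea behind this class of training-time poisoning is straightforward: an attacker with control over the training pipeline stamps a small trigger pattern onto a fraction of the training images and relabels them with a target class, so the resulting model behaves normally on clean inputs but misclassifies any input carrying the trigger. The sketch below illustrates the general technique only; the function name, trigger shape, and poisoning rate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Illustrative data-poisoning sketch (not the paper's exact method):
    stamp a small white trigger patch onto a random fraction of training
    images and relabel them, so a model trained on this data learns to
    associate the trigger with the attacker's chosen class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # hypothetical 3x3 trigger, bottom-right corner
    labels[idx] = target_label    # relabel poisoned samples to the target class
    return images, labels, idx

# Toy grayscale dataset: 100 blank 8x8 images with labels 0-9.
imgs = np.zeros((100, 8, 8))
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7, rate=0.1)
```

Because only a small fraction of samples is altered, the poisoned model's accuracy on clean test data stays high, which is what makes such attacks hard to detect when training is outsourced.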