In an unusual turn of events, three tech industry heavyweights recently demanded more regulation from the U.S. federal government. Amazon, Microsoft, and IBM have all announced that they are pulling their facial recognition technologies from law enforcement agencies unless regulation is put in place to curb discrimination. The problem, according to Siddharth Garg, assistant professor of electrical and computer engineering at the NYU Tandon School of Engineering, is that the technology contains inherent biases that have proven particularly harmful to women and people of color. In a June 17, 2020, article in Yahoo Finance, Garg explains that the bias emerges from the data fed to the algorithm during training. For facial recognition algorithms, that data is a huge number of photographs; if the algorithm sees more photos of one group than another, the imbalance affects how impartially the software can respond. As Garg notes, “The algorithms aren’t biased, but there is bias in the algorithms to be precise.” He concludes, “If these technologies were to be deployed, I think you cannot do it in the absence of legislation.” You can read the entire article here.
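
To make the training-data point concrete, here is a minimal sketch, not drawn from the article, of how a skewed training set can produce uneven accuracy across groups. It stands in synthetic numbers and scikit-learn's logistic regression for real face photographs and a real recognition model, so every dataset, group, and parameter below is illustrative only.

    # Illustrative only: synthetic data in place of the face photographs the article describes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Toy two-class data for one hypothetical group; `shift` moves its
        feature distribution so the two groups are not identically distributed."""
        X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
        return X, y

    # Imbalanced training set: 5,000 examples from group A, only 200 from group B.
    Xa, ya = make_group(5000, shift=0.0)
    Xb, yb = make_group(200, shift=1.5)
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xa, Xb]), np.concatenate([ya, yb])
    )

    # Evaluate on equal-sized held-out samples from each group.
    Xa_test, ya_test = make_group(1000, shift=0.0)
    Xb_test, yb_test = make_group(1000, shift=1.5)
    print("Group A accuracy:", model.score(Xa_test, ya_test))
    print("Group B accuracy:", model.score(Xb_test, yb_test))

Because the model sees 25 times more examples from group A, its decision boundary tracks group A's distribution, and the reported accuracy for the underrepresented group B comes out noticeably lower, which is the mechanism Garg describes.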