AI Ethics Are in Danger
Funding independent research could help
Across industries, the turn to artificial intelligence (AI) has become ubiquitous, if not completely hegemonic. It’s not just Meta, Google, Apple, and Amazon—nearly every large company has pivoted to AI. Logistics companies like DHL have rolled out AI-powered logistics management; Walmart has turned one location into an Intelligent Retail Lab; Citibank has begun supplementing person-to-person customer service with an Intelligent Virtual Agent.
Meanwhile, startups proliferate, promising AI-powered everything: improving one’s writing, detecting contraband in baggage, inferring emotions from one’s face, monitoring remote employees’ actions, predicting fraudulent payments or even generating art.
Unfortunately, alongside its increasing omnipresence, much of the AI developed by large corporations has exhibited myriad issues. There are countless public instances where AI was deployed with overtly racist, sexist, and discriminatory outcomes: Facial analysis algorithms used by law enforcement have misidentified at least three Black men, leading to their wrongful arrests; an internal hiring tool at Amazon categorically excluded women based on facially neutral terms on their resumes; an automated tool allocating health care resources for 70 million Americans discriminated substantially against Black patients.
Read the complete article from SSIR.