In Regulating AI, We May Be Doing Too Much
Others say we may be doing too little
Last week, when President Joe Biden signed his sweeping executive order on artificial intelligence, he joked about the strange experience of watching a "deep fake" of himself, saying, "When the hell did I say that?"
The anecdote was significant, for it linked the executive order to an actual AI harm that everyone can understand — human impersonation. Another example is the recent boom in fake nude images that have been ruining the lives of high-school girls. These everyday episodes underscore an important truth: The success of the government’s efforts to regulate AI will turn on its ability to stay focused on concrete problems like deep fakes, as opposed to getting swept up in hypothetical risks like the arrival of our robot overlords.
Mr. Biden’s executive order outdoes even the Europeans by considering just about every potential risk one could imagine, from everyday fraud to the development of weapons of mass destruction. The order develops standards for AI safety and trustworthiness, establishes a cybersecurity program to develop AI tools and requires companies developing AI systems that could pose a threat to national security to share their safety test results with the federal government.