The Department of Defense is issuing AI ethics guidelines for tech contractors.

Goodman says the guidelines aim to ensure that tech contractors stick to the DoD’s existing ethical principles for AI. The DoD announced those principles last year, following a two-year study commissioned by the Defense Innovation Board, an advisory panel of leading technology researchers and businesspeople set up in 2016 to bring the spark of Silicon Valley to the U.S. military. The board was chaired by former Google CEO Eric Schmidt until September 2020, and its current members include Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Yet some critics question whether the work promises any meaningful reform.

During the study, the board consulted a range of experts, including vocal critics of the military’s use of AI, such as members of the Campaign to Stop Killer Robots and Meredith Whittaker, a former Google researcher who helped organize the Project Maven protests.

Whittaker, now a faculty director at New York University’s AI Now Institute, was not available for comment. But according to Courtney Holsworth, a spokesperson for the institute, she attended one meeting, where she argued with senior members of the board, including Schmidt, about the direction it was taking. “She was never meaningfully consulted,” says Holsworth. “The presence of dissenting voices is being used to claim that a given outcome has broad buy-in from relevant stakeholders.”

If the DoD doesn’t have broad buy-in, can its guidelines still help build trust? “There are going to be people who will never be satisfied by any set of ethics guidelines that the DoD produces because they find the idea paradoxical,” Goodman says. “It’s important to be realistic about what guidelines can and can’t do.”

The guidelines, for example, do not address the use of lethal autonomous weapons, a technology that some campaigners argue should be banned. But Goodman points out that the regulations governing such technology are decided higher up the chain of command. The aim of the guidelines is to make it easier to build AI that meets those regulations. And part of that process is to make explicit any concerns that third-party developers may have. “A valid application of these guidelines is to decide not to pursue a particular system,” says Jared Dunnmon, Goodman’s coauthor at the DIU. “You can decide it’s not a good idea.”