CES 2020: White House Unveils AI Regulatory Principles

The White House is proposing principles that government agencies must follow when drafting regulations affecting private sector deployment of artificial intelligence (AI) technologies. The principles are focused on preventing overregulation that could stifle innovation, the White House said. 

That comes even as the tech industry is gathering in Las Vegas for CES, where AI will be a big topic of discussion, and where top White House tech officials will be in attendance.  

The Trump principles are not binding on independent agencies, like the Federal Communications Commission and Federal Trade Commission. But a White House official, speaking on background, told Multichannel News that, based on conversations with the Office of Management and Budget, the administration's understanding is that those agencies would follow the guidance voluntarily, per past precedent.  

U.S. CTO Michael Kratsios

When the Obama Administration told agencies to start conducting cost-benefit analyses of regs, for example, the FCC under Chairman Julius Genachowski indicated it would follow that lead. 

The Department of Transportation would be bound by the new AI principles when regulating drones, for example, and the FDA when regulating AI-powered medical devices. But the principles don't apply to government use of AI, such as facial recognition by law enforcement.  

Related: White House Updates AI R&D Plan 

The Trump Administration's proposal will come in the form of a memorandum to federal agencies that will be posted on whitehouse.gov by midday Tuesday. The administration will give interested parties 60 days to comment before finalizing the principles. 

U.S. chief technology officer Michael Kratsios, speaking to reporters Monday (Jan. 6) in advance of the principles' release, said the idea behind them is to provide guidance on tech and ethical issues around AI, but in a way that supports robust private sector development and provides federal agencies with commonsense, pro-innovation approaches.  

He said the principles need to be put in two important contexts. One is that they are meant to remove impediments to private sector innovation and growth and reduce any uncertainty about how the Administration will approach AI regs. The other is that the U.S. needs to implement these first-of-their-kind principles to ensure AI development is guided by the values of the U.S. and its allies: freedom, human rights and civil liberties.  

For example, the EU is planning to release an AI regulatory document soon, and he said he hopes it will use the U.S. AI principles as a framework and avoid preemptive and burdensome regs that stifle economic growth. 

Related: Trump Backs International AI Principles 

The regulatory principles that agencies like the Department of Transportation and FDA must abide by, according to Dr. Lynne Parker, deputy CTO of the U.S. and assistant director of artificial intelligence at the White House Office of Science and Technology Policy (OSTP), are: 1) public trust in AI; 2) public participation in the rulemaking process; 3) scientific integrity and information quality; 4) risk assessment and management; 5) cost-benefit analysis; 6) flexibility, i.e., performance-based standards that can adapt to rapid changes in technology; 7) fairness and nondiscrimination, i.e., whether an AI app increases or decreases discrimination; 8) disclosure and transparency, which can increase public trust and could include disclosing when AI is in use; 9) safety and security; and 10) interagency coordination. 

Agencies will have to demonstrate to the Trump White House that any proposed regs follow those principles, which were developed as part of President Trump's American AI Initiative, announced a year ago. The 10 principles rest on three major elements: 1) ensuring public engagement and trust by increasing public participation; 2) limiting regulatory overreach by requiring cost-benefit analysis before any AI regs are imposed, with an emphasis on flexible regulatory frameworks, and by directing agencies to cooperate and communicate to avoid duplication of effort; and 3) promoting trustworthy AI by considering fairness, openness, nondiscrimination, transparency, safety and security.  

John Eggerton

Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.