White House report says government must play role in development of AI

Jonathan Keane
Digital Trends

The government needs to play a role in how artificial intelligence is developed and used to ensure it is used fairly, according to a new White House report published today.

The report, Preparing for the Future of Artificial Intelligence, outlines many of the benefits of AI, and while it does not call for strong new regulation of the technology, it does make the case for standards of use in certain industries, such as aviation and finance.

However, the report does recommend that the Department of Transportation develop an “evolving framework for regulation” for autonomous cars and drones. It adds that industries, as well as other nations, need to maintain a regular dialogue with the government over how they are using AI, and it welcomes more research into the uses and potential dangers of the technology.


“Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science,” the report asserts as one of its 23 recommendations.

The authors of the report also stated that the Executive Office of the President should conduct a follow-up study examining the effect AI will have on jobs, an ongoing concern in many sectors.

Perhaps most interestingly, the report raises several concerns over bias built into AI technologies that could lead to discrimination. In one example, it questioned the fairness of predictive policing technology that can disproportionately target black communities, citing a ProPublica investigation from earlier this year.

“Similar issues could impact hiring practices,” the authors added. “If a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could be to perpetuate past bias.

“For example, looking for candidates who resemble past hires may bias a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants.”
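The failure mode the report describes can be illustrated with a toy sketch (not drawn from the report itself): a hypothetical screening model that scores applicants by their resemblance to past hires. All names, features, and values below are invented for illustration. If the historical hires skew toward one background, an equally qualified applicant from a different background scores lower.

```python
# Toy illustration of the bias-perpetuation mechanism: a "model" that
# scores applicants by similarity to past hires. Because the training
# data (past hires) reflects past decisions, any skew in those decisions
# carries forward into the scores.

def similarity(a, b):
    """Fraction of matching features between two candidate profiles."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def screen(applicant, past_hires):
    """Score an applicant by their closest resemblance to any past hire."""
    return max(similarity(applicant, h) for h in past_hires)

# Hypothetical feature tuples: (degree, school_type, hobby).
# "school_type" stands in for a proxy attribute correlated with a
# protected group -- every past hire happens to share it.
past_hires = [
    ("cs", "private", "golf"),
    ("cs", "private", "chess"),
    ("ee", "private", "golf"),
]

applicant_a = ("cs", "private", "tennis")  # resembles past hires
applicant_b = ("cs", "public", "tennis")   # same degree, different background

print(screen(applicant_a, past_hires))  # scores higher
print(screen(applicant_b, past_hires))  # scores lower, despite equal qualifications
```

The point is not that any real system works this way, but that "resemblance to past hires" is an objective a model can optimize without anyone explicitly encoding a protected attribute, which is why the report calls for scrutiny of training data.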

One of the chief recommendations among AI researchers for addressing this issue is to push for greater transparency over when AI tools are used by governments, which would give citizens an opportunity to at least see how those tools operate.