Biden Administration will invest $140 million to launch seven new National AI Research Institutes

The announcement comes hours before VP Harris meets with Silicon Valley leaders for a "frank discussion" about the technology's risks.

Ahead of a meeting between Vice President Kamala Harris and the heads of America's four leading AI tech companies — Alphabet, OpenAI, Anthropic and Microsoft — the Biden Administration announced Thursday a sweeping series of planned actions to help mitigate some of the risks that these emerging technologies pose to the American public. That includes $140 million to launch seven new AI R&D centers as part of the National Science Foundation, extracting commitments from leading AI companies to participate in a "public evaluation" of their AI systems at DEFCON 31, and ordering the Office of Management and Budget (OMB) to draft policy guidance for federal employees.

"The Biden-Harris administration has been leading on these issues since long before these newest generative AI products debuted last fall," a senior administration official said during a reporters call Wednesday. The Administration unveiled its AI Bill of Rights "blueprint" last October, which sought to "help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public," per a White House press release.

"At a time of rapid innovation, it is essential that we make clear the values we must advance, and the common sense we must protect," the administration official continued. "With [Thursday's announcement] and the blueprint for an AI bill of rights, we've given companies, policymakers and the individuals building these technologies some clear ways that they can mitigate the risks [to consumers]."

While the federal government does already have authority to protect the citizenry and hold companies accountable, as the FTC demonstrated Monday, "there's a lot the federal government can do to make sure we get AI right," the official added — like founding seven new National AI Research Institutes under the NSF. They'll coordinate research efforts across academia, the private sector and government to develop ethical and trustworthy AI in fields ranging from climate, agriculture and energy to public health, education and cybersecurity.

"We also need companies and innovators to be our partners in this work," the White House official said. "Tech companies have a fundamental responsibility to make sure their products are safe and secure and that they protect people's rights before they're deployed or made public."

To that end, the Vice President is scheduled to meet with tech leaders at the White House on Thursday for what is expected to be a "frank discussion about the risks we see in current and near-term AI development," the official said. "We're also aiming to underscore the importance of their role in mitigating risks and advancing responsible innovation, and will discuss how we can work together to protect the American people from the potential harms of AI so that they can reap the benefits of these new technologies."

The Administration also announced that it has obtained "independent commitments" from more than a half dozen leading AI companies — Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI — to put their AI systems up for public evaluation at DEFCON 31 (August 10-13). There, thousands of attendees will be able to poke and prod at these models to see whether they square with the principles and practices of the Administration's Blueprint. Finally, the OMB will issue guidance to federal employees in the coming months regarding official use of the technology, help establish specific policies for agencies to follow, and allow for public comment before those policies are finalized.

"These are important new steps to promote responsible innovation and to make sure AI improves people's lives, without putting rights and safety at risk," the official noted.