U.S. President Joe Biden will take wide-ranging action on artificial intelligence (AI) on Monday, seeking to increase safety while protecting consumers, workers and minority groups from the technology’s risks.

An executive order requires that developers of AI systems that pose risks to U.S. national security, the economy, public health or safety share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they are released to the public.

It also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear and cybersecurity risks, according to the White House.

The order is the latest step by the administration to set parameters around AI as the technology makes rapid gains in capability and popularity amid, so far, limited regulation. It prompted a mixed response from the private sector.

IBM said in a statement that the order “sends a critical message: that AI used by the United States government will be responsible AI.”

NetChoice, a national trade association that includes major tech platforms, described the executive order as an “AI Red Tape Wishlist” that will end up “stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation.”

The new order, set to be unveiled at an event Monday afternoon, goes beyond voluntary commitments made earlier this year by AI companies such as OpenAI, Alphabet and Meta Platforms, which pledged to watermark AI-generated content to make the technology safer.

As part of the order, the Commerce Department will “develop guidance for content authentication and watermarking” for labeling items that are generated by AI, to make sure government communications are clear, the White House said in a release.

White House Deputy Chief of Staff Bruce Reed called the order, which also delves into privacy, housing discrimination and job displacement, the “strongest set of actions” any government had taken to ensure AI security.

“It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” he said in a statement.

The Group of Seven industrial countries on Monday will agree on a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document.

A senior administration official, briefing reporters ahead of the official unveiling of the order, pushed back against criticism that Europe had been more aggressive at regulating AI than the United States has.

The official said the executive order had the force of law and the White House believed that legislative action from Congress was also necessary for AI governance.

Biden is calling on Congress in particular to pass legislation on data privacy, the White House said.

U.S. officials have warned that AI can heighten the risk of bias and civil rights violations, and Biden’s executive order seeks to address that by calling for guidance to landlords, federal benefits programs and federal contractors “to keep AI algorithms from being used to exacerbate discrimination,” the release said.

The order also calls for the development of “best practices” to address harms that AI may cause workers, including job displacement, and requires a report on labor market impacts.

Vice President Kamala Harris will attend an AI global summit in Britain this week; China is also expected to be represented at the meeting, hosted by British Prime Minister Rishi Sunak.

Sunak has said only governments could tackle the risks posed by AI, a technology he said could make it easier to build chemical or biological weapons, spread fear and, in a worst-case scenario, escape human control.

(Reporting by Jeff Mason; additional reporting by John Kruzel, David Shepardson, Alexandra Alper and Diane Bartz; editing by Grant McCool and Jonathan Oatis)