Free Republic
Browse · Search
Bloggers & Personal
Topics · Post Article


The White House Just Told Congress How to Regulate AI. Here's What It Actually Says.
The Neuron ^ | 03/22/2026 | Grant Harvey

Posted on 03/22/2026 5:38:27 PM PDT by SeekAndFind

The Trump administration released a sweeping AI policy framework today covering everything from child safety to copyright. The biggest move: telling states to stop making their own rules.

If your company uses AI (and at this point, whose doesn't?), the rules governing what that AI can and can't do are about to change. The question is whether Washington can agree on how.

The White House released its National Policy Framework for Artificial Intelligence today, a four-page legislative blueprint that tells Congress exactly what the administration wants in a national AI law. Seven priority areas. Zero new regulatory agencies. And one very clear message to state governments: stand down.

P.S.: Before we get into this, it goes without saying, but none of our commentary should be considered legal advice; we share it for opinion and educational purposes only.

First up, the TL;DR

The White House dropped its first-ever national AI policy framework today, and the headline takeaway is clear: the federal government wants to be the only one writing AI rules.

Why this matters

The framework sounds comprehensive, but its most powerful move is what it prevents. By calling for federal preemption of state AI laws, the White House is trying to shut down a wave of state-level regulation that's been building as Congress has failed to act, oh idk, over and over in the three-plus years since ChatGPT came out. More than 50 Republican state legislators pushed back against this approach just weeks ago, calling the administration's pressure campaign an effort to shield Big Tech from accountability.

Meanwhile, Sen. Marsha Blackburn released her own nearly 300-page federal AI bill the day before, with much stricter provisions: a "duty of care" requirement for chatbot developers, a sunset of Section 230 protections, and criminal penalties for AI companies that let chatbots have explicit conversations with kids. That said, the Cato Institute already identified five major flaws in her approach.

Our take

The White House framework is a wish list, not a law. Congress has been deadlocked on AI regulation for years (because of lots and lots of lobbying, among other, actually good reasons, like not stifling innovation so the U.S. can lead in AI development, which matters if you live in the U.S. but might worry you if you live, well, anywhere else), and the same fights over preemption, copyright, and kids' safety that stalled past bills are still very much alive. Watch the gap between the framework's ambitions and what actually makes it through committee... we imagine an even more watered-down version is what will actually get through.

The Seven Pillars, Decoded

Let's break down what the framework actually asks Congress to do, section by section, in plain English.

1. Protecting Kids

The framework wants parents to have account controls, privacy settings, and screen-time management for AI platforms their kids use. It supports age-verification requirements, with a twist: it prefers "parental attestation," meaning parents confirm their kid's age rather than the platform collecting biometric data. This actually makes some sense; ideally, the parent confirms the child is still a child without giving away sensitive data (we have an upcoming interview where we talk about this exact issue). AI platforms would also need features to reduce sexual exploitation and self-harm risks for minors.

The child safety provisions borrow heavily from existing proposals. The Take It Down Act, signed earlier in the Trump administration, already targets deepfake abuse. What's new here is the explicit call to apply existing child privacy protections to AI systems, including limits on using kids' data for model training and targeted ads.

2. Protecting Communities (and Your Electric Bill)

This section has two priorities that seem unrelated but are deeply connected. First, the administration pledged that residential electricity customers won't see higher bills because of AI data centers. Second, it wants to speed up federal permitting so data centers can build their own power generation on-site.

Translation: AI companies need massive amounts of electricity, and the White House wants them to produce it themselves rather than strain the existing power grid. The "Ratepayer Protection Pledge" makes this politically palatable by promising your bill won't go up, while the streamlined permitting makes it easier for companies to build the energy infrastructure they need. For more about how AI does / doesn't impact the grid, read this.

The section also calls for more resources to fight AI-powered scams targeting seniors and for small business grants and tax incentives to adopt AI tools.

3. Copyright: The "We're Staying Out of It" Take That Low Key Takes a Side

This might be the most consequential section for the AI industry. The framework states the administration "believes that training of AI models on copyrighted material does not violate copyright laws." Then, in the very next breath, it says Congress should let the courts decide. So basically: this is what we think, but it's not up to us so good luck!

That's a carefully worded position. By stating its belief that AI training is legal while telling Congress not to legislate, the White House is effectively siding with AI companies in the dozens of pending lawsuits from writers, artists, and musicians. Courts have largely been trending toward allowing "fair use" of copyrighted works for AI training (on the grounds that the AI training process is inherently transformative), and the White House is saying: let that trend continue.

There's one olive branch for creators: the framework supports collective licensing frameworks (think ASCAP or BMI, but for AI training data) that would let rights holders negotiate compensation together without running into antitrust problems. Interesting! It also backs a federal law against unauthorized AI replicas of someone's voice or likeness, with exceptions for parody, satire, and news.

This is still such a tricky gray area though; for example, where do you draw the line between parody and satire? Also, news? How is it okay for NEWS to make AI replicas of someone's voice or likeness? Unless of course it's a parody, which makes sense, like us making a deepfake of Sam Altman in a Santa hat when Christmas comes around. But that's clearly parody. Legal eagles out there, lend us your takes, because this is still confusing to us.

4. Free Speech and Anti-Censorship

The framework wants to prevent the government from pressuring AI companies to alter content based on political agendas and would give citizens a way to sue the federal government if agencies try to censor AI platforms. It would also require "high-risk" AI systems to undergo third-party audits for political viewpoint discrimination. This kinda makes sense no matter what your political persuasion is.

This section reflects a specific Republican concern: that AI systems are being trained with liberal bias baked in. The administration's December executive order went further, directing the FTC to classify state-mandated bias mitigation as a deceptive trade practice. The argument: if an AI model is trained on data reflecting real-world patterns, forcing it to alter outputs to "correct" for bias actually makes it less accurate. But there are inherent biases in the original dataset that might not reflect the real world, depending on what's been curated or what's available, so we don't think this really flies.

5. Innovation and "AI Dominance"

Regulatory sandboxes. Open federal datasets for AI training. No new regulatory agencies. Support for sector-specific regulation through existing bodies (the FDA handles AI in healthcare, the SEC handles AI in finance, etc.).

The key principle: the government shouldn't create a single new agency to oversee AI. Instead, existing regulators should handle AI within their domains, supplemented by industry-led standards. The Center for Data Innovation praised this approach as avoiding "regulation from a place of fear." Hard agree with this.

In general, we need fewer regulatory agencies and more solid, well-thought-through regulations. Law is software: you gotta update it frequently as ground-truth reality changes (or stays the same). Let's give the agencies we do have better data and real-time analytics, and empower them with people who understand the tech to inform them on what's actually possible, then let them think through how AI impacts their agencies' specific domains using their, well, domain-specific expertise!

6. Workforce and Education

Congress should incorporate AI training into existing education and apprenticeship programs. Land-grant universities (the state colleges originally created to teach agriculture and engineering) would get funding for AI research and youth programs. The framework also wants better federal tracking of how AI is changing specific job tasks.

Personally, we'd go a step further and pursue some sort of framework that requires existing employers to train their employees for this new era before laying them off as a new employee protection. Separate from that, forming new "lab" style education programs where the goal is to upskill someone in less than six months to a year (speed-running a degree program, if you will) would be incredibly useful for rethinking education in the AI age. Having industry co-sponsor these labs would make a ton of sense because they will benefit by getting qualified talent directly funneling into their businesses at the end of it. That's right, we here at The Neuron got ideas, people!

7. Federal Preemption: The Big Fight

This is the section everyone's watching. The framework wants Congress to override state AI laws that impose "undue burdens," creating a single national standard. States would keep the power to enforce general consumer protection and fraud laws, control zoning for data centers, and set rules for their own government's use of AI.

But states would lose the ability to regulate AI development itself. The framework says states "should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications." They also couldn't penalize AI developers for what a third party does with their models.

My gut reaction to this, and the entire Anthropic vs Pentagon saga, goes like this: at what point does the U.S. just nationalize a certain threshold of AI training? Meaning, they take over "control" of the labs after autonomous capability reaches a certain point where it's no longer in the interest of the public to allow private industry to control it?

The flip side of that coin: at what point is it no longer in the public's interest for their government, or any single government, to control AI development past a certain capability threshold? Do the labs even agree on what that threshold should be? And if we can't agree, do we just cap all development at a certain level of capability? Or do we just make everything open at a certain level of capability so all countries around the world have the same level of awareness and control of frontier capabilities to avoid any one nation running away with superintelligence? Open question on how to handle as a global community. Obviously, any one individual country or company is inherently biased towards controlling it themselves.

Why Republicans Are Fighting Republicans

Here's what makes this framework unusual: the loudest opposition is coming from within the president's own party.

In early March, more than 50 Republican state legislators from 22 states wrote to Trump asking him to back off. Their letter came after the White House tried to kill a Republican-sponsored AI transparency bill in Utah by calling it "unfixable" in a one-line memo to state Senate leadership. The Utah bill would have simply required frontier AI companies to publish safety plans and child protection plans on their websites.

The state Republicans' argument cuts to a core conservative tension: they believe state-level AI regulation is "fully consistent with conservative principles" of local governance and holding powerful companies accountable. The White House counters that AI is too important for national security and competitiveness to let 50 statehouses set the rules.

This isn't an abstract policy debate. Parents who've lost children to AI-related harms have been testifying before state legislatures, and advocacy groups like ParentsRISE! have accused "unelected officials in D.C." of killing the "bare minimum" in accountability. (See our thoughts above; this is related.)

The Blackburn Wildcard

One day before the White House released its framework, Sen. Marsha Blackburn dropped a nearly 300-page bill called the "Trump America AI Act" that goes significantly further than the White House wants.

Blackburn's bill would require chatbot developers to exercise "reasonable care" to prevent foreseeable harms. It would sunset Section 230 (the law that shields platforms from liability for user-generated content) two years after passage. It incorporates the Kids Online Safety Act (KOSA) with a full "duty of care" standard, bans AI companion chatbots for kids with criminal penalties, and would let users sue AI companies for property damage, mental anguish, illness, and financial injury.

Our take: This sounds like what a healthy society would pass. Anyone know where I can find one of those? A bill that lets you actually legally hold some of the most well-funded and powerful companies accountable?! Ha! Come on. That part will totally get taken out. But, it could be huge if it makes it through. Expect some intense lobbying on the Section 230 and legal liability components of this bill.

The bill also tackles creator protections: individuals could license their voice and likeness for AI replicas (this is good), copyright holders could subpoena AI companies to find out if their work was used in training (isn't this already the case?), and AI-generated derivative works wouldn't qualify for fair use protection.

On that last point, this is somewhat true, but current policy generally requires a human making meaningful transformations in the creation process for a work to qualify. Personally, I think we need a global remix system where anyone can make derivative works with proper licensing, and both the original creator AND the new remixer share the spoils. And I'm not talking about YouTube demonetizing you and giving all your $$ to the rights holder of one song used. I'm talking about a system where every work that contributes to a piece gets a fair and measured cut of the total revenue, with the platform taking a small fee and the majority of the revenue going to the creators themselves.
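To make that concrete, here's a minimal sketch of how such a remix revenue split could work. Everything here is our own illustration: the function name, the 10% platform fee, the 40% remixer share, and the contribution weights are made-up assumptions, not any real platform's rules.

```python
# Hypothetical sketch of the "global remix system" split described above.
# The platform takes a small fee, the remixer keeps a share of the creator
# pool, and the rest is divided among source works in proportion to their
# measured contribution. All parameters are illustrative assumptions.

def split_revenue(total, contributions, platform_fee=0.10, remixer_share=0.40):
    """Split `total` revenue among source works by contribution weight.

    contributions: dict mapping source work -> contribution weight.
    Returns (platform_cut, remixer_cut, {work: payout}).
    """
    platform_cut = total * platform_fee          # small platform fee
    creator_pool = total - platform_cut          # majority goes to creators
    remixer_cut = creator_pool * remixer_share   # the new remixer's share
    source_pool = creator_pool - remixer_cut     # pool for original works

    weight_sum = sum(contributions.values())
    payouts = {work: source_pool * w / weight_sum
               for work, w in contributions.items()}
    return platform_cut, remixer_cut, payouts

# Example: a remix earns $1,000 and samples two songs and one video clip,
# with song_a judged to have contributed twice as much as the others.
fee, remixer, payouts = split_revenue(
    1000.0, {"song_a": 2.0, "song_b": 1.0, "clip_c": 1.0})
# fee = 100.0, remixer = 360.0, payouts = {song_a: 270, song_b: 135, clip_c: 135}
```

The hard (unsolved) part, of course, isn't the arithmetic; it's measuring those contribution weights fairly and automatically at scale.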

Blackburn framed the bill as answering Trump's own call for a federal AI standard. As of this writing, the White House hasn't endorsed it. Don't count on it, either, until the bill gets meaningfully scaled back. That's our two cents, anyway.

What This Means for You

If you're using AI tools at work (Claude, ChatGPT, Gemini, Copilot, any number of other cool tools we've shared over the years), nothing changes tomorrow. This is a framework, not a law, and Congress has a long history of not passing AI legislation quickly.

But here's what to watch:

  • If you're in a regulated industry (healthcare, finance, education), the framework's push for sector-specific regulation through existing agencies means your existing compliance frameworks are likely to expand to cover AI, rather than a new AI-specific agency creating separate rules. This feels like the right approach, so just make sure you're compliant with existing rules in your AI use!
  • If you create content, the copyright fight is heading to the courts. The White House declining to ask Congress to intervene is a signal that the current legal trajectory (which has favored AI companies) will continue unless a major ruling changes things. As we've said multiple times, there should be an automated marketplace solution for this. But it needs to be built as an open standard (like MCPs / Skills / etc.) so everyone can adopt it.
  • If you're a business owner, the federal preemption push matters a lot. Right now, companies deploying AI face a patchwork of state laws. If preemption passes, you'd have one set of rules. If it doesn't, state-by-state compliance stays your problem. We'll see what happens if states push back against this, as we expect they might.
  • If you're a parent, child safety provisions have the most bipartisan support of anything in the framework. These are the most likely to actually become law, whether through the White House framework or Blackburn's bill. Fingers crossed!

The biggest tell: OSTP Director Kratsios said the administration wants legislation "this year." Congress has been trying to pass comprehensive AI rules since at least 2023 and has failed every time. The same sticking points (preemption, copyright, kids' safety) that killed past bills are all still here. The framework gives Congress a map. Whether they follow it is another question entirely. To answer that question, follow the money.

Also worth noting: U.S. Senator Bernie Sanders talked to Claude. IDK why, but I find this hilarious.



TOPICS:
KEYWORDS: ai; regulation

1 posted on 03/22/2026 5:38:27 PM PDT by SeekAndFind
[ Post Reply | Private Reply | View Replies]

To: SeekAndFind
The framework states the administration "believes that training of AI models on copyrighted material does not violate copyright laws."

This is utter nonsense. In its May 2025 report the United States Copyright Office expressed skepticism that all AI training is a transformative fair use, particularly when it's done commercially and results in outputs that compete with original works.

2 posted on 03/22/2026 6:10:19 PM PDT by DoodleBob (Gravity's waiting period is about 9.8 m/s²)
[ Post Reply | Private Reply | To 1 | View Replies]

To: DoodleBob

**Executive Summary**

On March 22, 2026, the Trump administration released its first National Policy Framework for Artificial Intelligence, a four-page legislative blueprint urging Congress to enact a single federal AI law that preempts state regulations, arguing AI development is an "inherently interstate" matter with national security and foreign policy implications. The framework outlines seven priorities:

  • Child safety: parental controls, age verification, reduced exploitation risks.
  • Community protections: ratepayer safeguards against data-center power costs, faster permitting for on-site generation.
  • Copyright: explicit belief that training on copyrighted material is legal under fair use, leaving resolution to courts while supporting collective licensing and voice/likeness protections.
  • Free speech: anti-censorship measures, audits for viewpoint discrimination.
  • Innovation: regulatory sandboxes, no new agencies, sector-specific oversight.
  • Workforce training: AI integration into education and apprenticeships.
  • Federal preemption: barring states from regulating AI development itself while preserving general consumer laws and local zoning.

Critics, including some Republican state legislators, see the preemption push as shielding Big Tech from accountability amid stalled congressional action, while Sen. Marsha Blackburn's competing 300-page bill proposes stricter measures like duty-of-care standards, a Section 230 sunset, and criminal penalties for child-targeted AI chatbots. The framework is a non-binding wish list aimed at passing legislation in 2026, but bipartisan divides over preemption, copyright, and safety suggest any final law will likely be significantly watered down.


3 posted on 03/22/2026 7:24:19 PM PDT by jroehl (And how we burned in the camps later - Aleksandr Solzhenitsyn - The Gulag Archipelago)
[ Post Reply | Private Reply | To 2 | View Replies]

To: SeekAndFind

The framework states the administration “believes that training of AI models on copyrighted material does not violate copyright laws.”

And with that they totally lost the plot.


4 posted on 03/22/2026 7:43:13 PM PDT by lastchance (Cognovit Dominus qui sunt eius.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: SeekAndFind
This section reflects a specific Republican concern: that AI systems are being trained with liberal bias baked in.

AI can never be free from the bias of those who programmed it.

And democrats have consistently displayed a nasty propensity to do whatever is convenient to advance their agenda, legal or not or moral or not.

5 posted on 03/22/2026 8:08:46 PM PDT by metmom (He who testifies to these things says, “Surely I am coming soon." Amen. Come, Lord Jesus….)
[ Post Reply | Private Reply | To 1 | View Replies]

To: SeekAndFind

For the record, I DO. NOT. TRUST. AI in the least.

I’ve read far too much sci-fi literature in my lifetime.

When real life starts reading like a sci-fi novel, it’s time to stay far, far away from it.


6 posted on 03/22/2026 8:10:46 PM PDT by metmom (He who testifies to these things says, “Surely I am coming soon." Amen. Come, Lord Jesus….)
[ Post Reply | Private Reply | To 1 | View Replies]

To: metmom

AI is a 99 percent solution.

99 times out of a hundred it will be right, but it’s that one time that’ll get you!


7 posted on 03/22/2026 8:12:09 PM PDT by dfwgator ("I am Charlie Kirk!")
[ Post Reply | Private Reply | To 6 | View Replies]

To: SeekAndFind

AI will be a dark stain on humanity. The future will prove it.

And Trump’s huge support of it puts spiritual questions in my mind. God did not design the human race to be run by AI.


8 posted on 03/22/2026 10:29:44 PM PDT by Revel
[ Post Reply | Private Reply | To 1 | View Replies]

To: SeekAndFind

Bookmark


9 posted on 03/23/2026 4:51:31 AM PDT by jimjohn (We're at war, people. Start acting like it.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: metmom

I’ve thought from the first that AI is too expensive and causes more problems than it’s worth. Get rid of it and the attendant problems, or show me what it’s really good for, as I don’t see it.


10 posted on 03/23/2026 5:36:31 AM PDT by oldtech
[ Post Reply | Private Reply | To 6 | View Replies]

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.
