OpenAI requests US government legalize theft or lose to China

OpenAI may have already crawled the internet for all the world's data to train ChatGPT, but it seems that isn't enough, as it now wants protection from copyright holders so it can keep stealing everything that is and isn't nailed down.

The latest manufactured goalpost OpenAI has set, dubbed "AGI," can't be reached unless the company is free to do whatever it takes to steal everything good and turn it into AI slop. At least, that's what the latest OpenAI submission to the United States government suggests.

An OpenAI proposal submitted regarding President Trump's executive order on sustaining America's global dominance explains how the administration's boogeyman might overtake the US in the AI race. It seems the alleged theft of copyrighted material by China-based LLMs puts them at an advantage because OpenAI has to follow the law.

The proposal seems to be that OpenAI and the federal government enter into a kind of partnership that exempts OpenAI from state-level laws. Otherwise, the proposal alleges, the United States will lose the AI race.

And to prove its point, OpenAI says the US will lose that race to the government's favorite boogeyman: China.

One bullet point reads:

Its ability to benefit from copyright arbitrage being created by democratic nations that do not clearly protect AI training by statute, like the US, or that reduce the amount of training data through an opt-out regime for copyright holders, like the EU. The PRC is unlikely to respect the IP regimes of any of such nations for the training of its AI systems, but already likely has access to all the same data, putting American AI labs at a comparative disadvantage while gaining little in the way of protections for the original IP creators.

Such a partnership would protect OpenAI from the 781 and counting AI-related bills proposed at the state level. AI companies could volunteer for it to seek exemption from those state laws.

The proposal also suggests that China and countries like it that don’t align with democratic values be cut off from AI built in the United States. This would mean Apple’s work with Alibaba to bring Apple Intelligence to China would be halted if required.

It also calls for a total ban on using equipment produced in China in goods that would be sold to Americans or used by American AI companies.

The section on copyright more or less calls for the total abandonment of any restriction on access to information. According to the proposal, copyright owners shouldn't be allowed to opt out of having their content stolen to train AI. It even suggests that the US should step in and address restrictions imposed by other jurisdictions, like the EU.

The next proposal centers on infrastructure. OpenAI wants the government to create incentives (that would benefit OpenAI) to build in the US.

Plus, OpenAI wants the government to digitize all the information it still keeps in analog form. Otherwise, OpenAI can't crawl it to train ChatGPT.

Finally, the proposal suggests the US government needs to implement AI across the board. This includes national security tasks and classified nuclear tasks.

The lunacy of OpenAI

One of the US Navy nuclear-trained staffers here at AppleInsider pointed this out to me, cackling as he did so. As the other half of the nuclear-trained personnel on staff, I had to join him in laughing. It's just not possible, for so many reasons.


AI is a tool, not some kind of cataclysmic event in human history

Admiral Hyman G. Rickover, the father of the nuclear Navy, helped build not just the mechanical and electrical systems we still use today, but the policies and procedures as well. One of his most important mantras, besides continuous training, was that everything needed to be done by humans.

Automation of engineering tasks is one of the reasons the Soviets lost about a submarine a year to accidents during the height of the Cold War.

When you remove humans from a system, you start to remove the chain of accountability. And the government must function with accountability, especially when dealing with nuclear power or arms.

That aside, there are an incredible number of inconsistencies in the proposals laid out by OpenAI. Using China as a boogeyman only to propose building United States AI policy around China's approach is hypocritical and dangerous.

OpenAI has yet to actually explain how AI will shape our future beyond sharing outlandish concepts from science fiction about possible outcomes. The company isn’t building a sentient, thinking computer, it won’t replace the workforce, and it isn’t going to fundamentally transform society.

It’s a really nice hammer, but that’s about it. Humans need tools — they make things easier, but we can’t pretend these tools are replacements for humans.

Yes, of course, the innovations created around AI, the increased efficiency of some systems, and the inevitable advancement of technology will render some jobs obsolete. However, that’s not the same as the dystopian promise OpenAI keeps espousing of ending the need for work.

DeepSeek is an existential crisis for OpenAI's bottom line, not democracy

Read between the lines of this proposal, and it says something more like this:

OpenAI got caught by surprise when DeepSeek released a model that was much more efficient and undermined its previous claims. So, with a new America-first administration, OpenAI is hoping it can convince regulators to ignore laws in the name of American exceptionalism.

The document argues that authoritarian regimes, by letting DeepSeek ignore the law, will enable it to get ahead. So, OpenAI needs the United States to act like an authoritarian regime and ensure it can compete without laws getting in the way.

AI has an intelligence problem

Of course, the proposal is filled with the usual self-importance evoked by OpenAI. It seems to believe its own nonsense about where this so-called “artificial intelligence” technology will take us.

A ChatGPT conversation claiming the Gulf of Mexico touches Florida, Mexico, and Cuba
Someone should warn OpenAI that hallucinations like this won't get you sympathy from the US government

It had to move the goalposts and suggest that the term "AI" never meant the sentient computer it promised us. No, now we've got two other industry terms to target: Artificial General Intelligence and Artificial Superintelligence.

To be clear, none of this is actual intelligence. Your devices aren’t “thinking” any more than a calculator is. It is just a much better evolution of what we had before.

Computers used to be more binary: a given input produced a predetermined output. Then, branching allowed different outputs for a given input depending on conditions.
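As a rough, hypothetical sketch of that kind of branching (the thermostat example below is mine, not anything from OpenAI's filing):

```python
def thermostat(temperature_f: float) -> str:
    # Classic branching: the output is completely determined by the input.
    if temperature_f < 65:
        return "heat on"
    if temperature_f > 75:
        return "cooling on"
    return "idle"

# Run it a million times with the same input and you get the same answer.
assert thermostat(60.0) == "heat on"
```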

That expanded until we got to the modern definition of machine learning. That technology is still fairly deterministic, meaning you expect to get the same output for a given input.
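A tiny sketch shows why, with invented weights standing in for the numbers a real training run would produce:

```python
# Invented values standing in for weights fixed by training; not from any real model.
weights = [0.4, 1.7]
bias = 3.0

def predict(features: list[float]) -> float:
    # Inference is just arithmetic over fixed weights, so it is repeatable.
    return sum(w * x for w, x in zip(weights, features)) + bias

assert predict([1.0, 2.0]) == predict([1.0, 2.0])  # same input, same output
```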

The next step past that was generative technology. It uses even bigger data sets than ML, and outputs are not deterministic. Algorithms attempt to predict what the output should be based on patterns in the data.

That's why we still sarcastically refer to AI as fancy autocomplete. Generating text just predicts which token (roughly, a word or word fragment) is most likely to come next. Generating images or video does the same, but with pixels or frames.
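As a toy illustration of that autocomplete idea, with an invented probability table standing in for what a real model would compute with a neural network over a huge vocabulary:

```python
import random

# Invented odds for which token follows the context ("the", "gulf").
next_token_probs = {
    ("the", "gulf"): {"of": 0.92, "coast": 0.05, "stream": 0.03},
}

def predict_next(context: tuple[str, str]) -> str:
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    # Sampling from the distribution, rather than always taking the top pick,
    # is why the same prompt can come back differently each time.
    return random.choices(tokens, weights=weights)[0]

print(predict_next(("the", "gulf")))  # usually "of", occasionally "coast" or "stream"
```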

Pointing an iPhone camera at a dog on a sidewalk
Human intelligence is asking the owner about the breed; artificial intelligence is awkwardly pointing your phone at it

The "reasoning" models don't reason. In fact, they can't reason. They're just tuned more finely for specific cases, which makes them better at those tasks.

OpenAI expects AGI to be the next frontier, which would be a model that surpasses human cognitive capabilities. It’s this model that OpenAI threatens will cause global economic upheaval as people are replaced with AI.

Realistically, as the technology is being developed today, it’s not possible. Sure, OpenAI might release something and call it AGI, but it won’t be what they promised.

And you can forget about building a sentient machine with the current technology.

That’s not going to stop OpenAI from burning the foundation of the internet down in the pursuit of the almighty dollar.

Apple’s role in all this

Meanwhile, everyone says Apple is woefully behind in the AI race. The argument goes that since Apple isn't talking about sentient computers and the downfall of democracies, it's a big loser in the space.

Perhaps Apple Intelligence can catch up by stealing copyrighted materials

Even if OpenAI succeeds in getting some kind of government partnership for AI companies, it is doubtful Apple would participate. The company isn’t suffering from a lack of data access and has likely scraped all it needed from the public web.

Apple worked with copyright holders and paid for content it couldn't get from the open web. So, it stands as an example of OpenAI's arguments falling flat.

While OpenAI tries and fails to create something beyond the slop generators we have today, Apple will continue to refine Apple Intelligence. The private, secure, and on-device models are likely the future of this space where most users don’t even know what AI is for.

Apple’s attempt at AI is boring and isn’t promising the potential end of humanity. It’ll be interesting to see how it addresses AI with iOS 19 during WWDC 2025 in June after delaying a major feature.
