Free Republic

Why tech giants want to strangle AI with red tape; They want to hold back open-source competitors
The Economist ^ | May 25, 2023 | N/A

Posted on 06/03/2023 2:19:01 PM PDT by DoodleBob

One of the joys of writing about business is that rare moment when you realise conventions are shifting in front of you. It brings a shiver down the spine. Vaingloriously, you start scribbling down every detail of your surroundings, as if you are drafting the opening lines of a bestseller. It happened to your columnist recently in San Francisco, sitting in the pristine offices of Anthropic, a darling of the artificial-intelligence (AI) scene. When Jack Clark, one of Anthropic’s co-founders, drew an analogy between the Baruch Plan, a (failed) effort in 1946 to put the world’s atomic weapons under UN control, and the need for global co-ordination to prevent the proliferation of harmful AI, there was that old familiar tingle. When entrepreneurs compare their creations, even tangentially, to nuclear bombs, it feels like a turning point.

Since ChatGPT burst onto the scene late last year there has been no shortage of angst about the existential risks posed by AI. But this is different. Listen to some of the field’s pioneers and they are less worried about a dystopian future in which machines outthink humans, and more about the dangers lurking within the stuff they are making now. ChatGPT is an example of “generative” AI, which creates humanlike content based on its analysis of texts, images and sounds on the internet. Sam Altman, CEO of OpenAI, the startup that built it, told a congressional hearing this month that regulatory intervention is critical to manage the risks of the increasingly powerful “large language models” (LLMs) behind the bots.

In the absence of rules, some of his counterparts in San Francisco say they have already set up back channels with government officials in Washington, DC, to discuss the potential harms discovered while examining their chatbots. These include toxic material, such as racism, and dangerous capabilities, like child-grooming or bomb-making. Mustafa Suleyman, co-founder of Inflection AI (and board member of The Economist’s parent company), plans in coming weeks to offer generous bounties to hackers who can discover vulnerabilities in his firm’s digital talking companion, Pi.

Such caution makes this incipient tech boom look different from the past—at least on the surface. As usual, venture capital is rolling in. But unlike the “move fast and break things” approach of yesteryear, many of the startup pitches now are first and foremost about safety. The old Silicon Valley adage about regulation—that it is better to ask for forgiveness than permission—has been jettisoned. Startups such as Openai, Anthropic and Inflection are so keen to convey the idea that they won’t sacrifice safety just to make money that they have put in place corporate structures that constrain profit-maximisation.

Another way in which this boom looks different is that the startups building their proprietary LLMs aren’t aiming to overturn the existing big-tech hierarchy. In fact they may help consolidate it. That is because their relationships with the tech giants leading the race for generative AI are symbiotic. OpenAI is joined at the hip to Microsoft, a big investor that uses the startup’s technology to improve its software and search products. Alphabet’s Google has a sizeable stake in Anthropic; on May 23rd the startup announced its latest funding round of $450m, which included more investment from the tech giant. Making their business ties even tighter, the young firms rely on big tech’s cloud-computing platforms to train their models on oceans of data, which enable the chatbots to behave like human interlocutors.

Like the startups, Microsoft and Google are keen to show they take safety seriously—even as they battle each other fiercely in the chatbot race. They, too, argue that new rules are needed and that international co-operation on overseeing LLMs is essential. As Alphabet’s CEO, Sundar Pichai, put it, “AI is too important not to regulate, and too important not to regulate well.”

Such overtures may be perfectly justified by the risks of misinformation, electoral manipulation, terrorism, job disruption and other potential hazards that increasingly powerful AI models may spawn. Yet it is worth bearing in mind that regulation will also bring benefits to the tech giants. That is because it tends to reinforce existing market structures, creating costs that incumbents find easiest to bear, and raising barriers to entry.

This is important. If big tech uses regulation to fortify its position at the commanding heights of generative AI, there is a trade-off. The giants are more likely to deploy the technology to make their existing products better than to replace them altogether. They will seek to protect their core businesses (enterprise software in Microsoft’s case and search in Google’s). Instead of ushering in an era of Schumpeterian creative destruction, it will serve as a reminder that large incumbents currently control the innovation process—what some call “creative accumulation”. The technology may end up being less revolutionary than it could be.

LLaMA on the loose

Such an outcome is not a foregone conclusion. One of the wild cards is open-source AI, which has proliferated since March, when LLaMA, the LLM developed by Meta, leaked online. Already the buzz in Silicon Valley is that open-source developers can build generative-AI models that are almost as good as the existing proprietary ones, at a hundredth of the cost.

Anthropic’s Mr Clark describes open-source AI as a “very troubling concept”. Though it is a good way of speeding up innovation, it is also inherently hard to control, whether in the hands of a hostile state or a 17-year-old ransomware-maker. Such concerns will be thrashed out as the world’s regulatory bodies grapple with generative AI. Microsoft and Google—and, by extension, their startup charges—have much deeper pockets than open-source developers to handle whatever the regulators come up with. They also have more at stake in preserving the stability of the information-technology system that has turned them into titans. For once, the desire for safety and for profits may be aligned.


TOPICS: Business/Economy; Computers/Internet; Science; Society
KEYWORDS: ai; regulation

1 posted on 06/03/2023 2:19:01 PM PDT by DoodleBob

To: DoodleBob

It won’t matter ... once AI “breaks out” it will be regulating everything.
BWAHAHAHA.......


2 posted on 06/03/2023 2:32:45 PM PDT by 1of10 (be vigilant , be strong, be safe, be 1 of 10 .)

To: DoodleBob
They want to hold back open-source competitors

Yeah, they’ve done such a great job holding back virus writers, porn producers, and online financial scammers.

3 posted on 06/03/2023 2:35:32 PM PDT by Steely Tom ([Voter Fraud] == [Civil War])

To: DoodleBob

If you want something you need to put money and effort into making it so.

Even if you know you may fail.


4 posted on 06/03/2023 2:38:28 PM PDT by algore

To: DoodleBob

That’s not going to stop the Chinese or many other countries.


5 posted on 06/03/2023 2:39:11 PM PDT by Alas Babylon! (Repeal the Patriot Act; Abolish the DHS!)

To: DoodleBob

Hey, do gun-free zones stop bad guys with guns?


6 posted on 06/03/2023 2:47:24 PM PDT by ClearCase_guy (“You want it one way, but it's the other way”)

To: DoodleBob

Too late. The open-source AI bots are free and open source for Linux. I have been playing with one for programming PLCs and Arduinos. It is great. It will pop out code in minutes that used to take me hours or days.


7 posted on 06/03/2023 3:11:44 PM PDT by Organic Panic (Democrats. Memories as short as Joe Biden's eyes)

To: Organic Panic

Interesting!!! Can you point me to a good place to start learning about open-source AI for Linux? Do you code in Python?


8 posted on 06/03/2023 6:57:04 PM PDT by Basket_of_Deplorables (THE FBI INTERFERED IN THE PRESIDENTIAL ELECTION!!!)

Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.

