This Week in AI: Why OpenAI’s o1 changes the AI regulation game

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

It’s been just a few days since OpenAI revealed its latest flagship generative model, o1, to the world. Marketed as a “reasoning” model, o1 essentially takes longer to “think” about questions before answering them, breaking down problems and checking its own answers.

There are a great many things o1 can’t do well — and OpenAI itself admits this. But on some tasks, like physics and math, o1 excels despite not necessarily having more parameters than OpenAI’s previous top-performing model, GPT-4o. (In AI and machine learning, “parameters,” usually in the billions, roughly correspond to a model’s problem-solving skills.)

And this has implications for AI regulation.

California’s proposed bill SB 1047, for example, imposes safety requirements on AI models that either cost over $100 million to develop or were trained using compute power beyond a certain threshold. Models like o1, however, demonstrate that scaling up training compute isn’t the only way to improve a model’s performance.

In a post on X, Nvidia research manager Jim Fan posited that future AI systems may rely on small, easier-to-train “reasoning cores” as opposed to the training-intensive architectures (e.g., Meta’s Llama 405B) that’ve been the trend lately. Recent academic studies, he notes, have shown that small models like o1 can greatly outperform large models given more time to noodle on questions.

So was it short-sighted for policymakers to tie AI regulatory measures to compute? Yes, says Sara Hooker, head of AI startup Cohere’s research lab, in an interview with TechCrunch:

[o1] kind of points out how incomplete a viewpoint this is, using model size as a proxy for risk. It doesn’t take into account everything you can do with inference or running a model. For me, it’s a combination of bad science combined with policies that put the emphasis on not the current risks that we see in the world now, but on future risks.

Now, does that mean legislators should rip AI bills up from their foundations and start over? No. Many were written to be easily amendable, under the assumption that AI would evolve far beyond their enactment. California’s bill, for instance, would give the state’s Government Operations Agency the authority to redefine the compute thresholds that trigger the law’s safety requirements.

The admittedly tricky part will be figuring out which metric could be a better proxy for risk than training compute. Like so many other aspects of AI regulation, it’s something to ponder as bills around the U.S. — and world — march toward passage.

Source: TechCrunch
 
