

OpenAI Releases New "Reasoning" Models and a Coding Agent

Jeremy Kahn
2025-04-18

The company has released two new AI "reasoning" models, o3 and o4-mini, as it seeks to maintain its lead in the AI field.


OpenAI co-founder and CEO Sam Altman. Image source: Taylor Hill—FilmMagic



OpenAI has released two AI "reasoning" models that it says are its most capable yet, as well as an open-source AI agent that helps computer programmers write code, as the company seeks to stay ahead of its rivals.

The open-source coding agent, called Codex CLI, marks the first time since 2019 that OpenAI has introduced a significant open-source tool.

The other two new releases are the full-scale version of its o3 model, which OpenAI says is its most advanced AI system, and a smaller but more efficient model called o4-mini.

“These are the first models where top scientists tell us they produce legitimately good and useful novel ideas,” OpenAI president Greg Brockman said in announcing the new products on Wednesday.

The models will be immediately available to users of its paid ChatGPT Plus and Pro services, as well as organizations that use its enterprise-focused Teams and API products.

The release of the new models comes at a time when OpenAI faces pressure to show it remains at the forefront of AI development. Earlier this year, China’s DeepSeek upended conventional wisdom about the technological edge U.S. AI labs such as OpenAI enjoyed for years. DeepSeek’s R1 mimicked the “chain of thought” reasoning that OpenAI’s o-series models offer. The fact that DeepSeek’s R1 was also an open model—meaning people could download it for free and customize it easily—has tilted many enterprises in favor of deploying such open-source models. Most of OpenAI’s models, in contrast, can only be accessed on a paid basis through a proprietary application programming interface (API).

At the same time, OpenAI has also faced increased competition from other proprietary model providers. In February, AI company Anthropic became the first to offer a model that combines quick, intuition-like answers with the ability to also perform “chain of thought” step-by-step reasoning if a prompt requires it. The ability to decide when reasoning is required and when a faster answer will do is a trick OpenAI has yet to match. Then, last month, Google unveiled its Gemini 2.5 Pro model, a reasoning model that beat OpenAI’s o3-mini model on numerous benchmarks.

On Wednesday, OpenAI moved to try to retake the lead in reasoning models. The company says its o3 and o4-mini models now top various benchmarks—although none of those results has yet been independently verified. It also says the models have the ability to autonomously use other software tools, such as web browsing and coding environments, without having to be specifically prompted to do so by a user.
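
For a sense of what such unprompted tool use looks like from the developer's side: in OpenAI's public Python SDK, an application declares the tools a model is allowed to use, and the model itself decides whether and when to invoke them. The sketch below is a minimal illustration of that interface, assuming the new models are served through the standard chat-completions endpoint; the model name and the web_search function are placeholder assumptions, not details confirmed by OpenAI.

    # Minimal sketch of autonomous tool use via the OpenAI Python SDK.
    # Assumptions: "o4-mini" is the served model identifier, and web_search
    # is a hypothetical function implemented by the application itself.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical app-side helper
            "description": "Search the web and return the top results as text.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="o4-mini",  # assumed model identifier
        messages=[{"role": "user",
                   "content": "Summarize recent results on ion-trap qubits."}],
        tools=tools,  # the model chooses whether to call web_search
    )

    # If the model elected to search rather than answer directly, its
    # request arrives as a structured tool call instead of plain text.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)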

In a demo of o3’s capabilities that OpenAI livestreamed Wednesday, AI researchers showed o3 analyzing a photo of a physics research poster from 2015 and then searching the web autonomously to find more recent relevant research and comparing the results. They also showed it autonomously deciding to run Python code to solve various math and coding challenges.

OpenAI said o3 and o4-mini have the ability to reason directly about visual information, such as sketches, diagrams, or photos—even ones that might be blurry or of poor quality. The company said the models also knew how to manipulate photos as part of their reasoning process.
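
Attaching images to a message has worked this way in OpenAI's API for its earlier vision-capable models. A minimal sketch of the workflow described above, assuming o3 accepts the same image_url content parts (the model name and URL are placeholders):

    # Minimal sketch of visual reasoning over an image via the API.
    # Assumes the chat-completions image_url format used by earlier
    # vision-capable OpenAI models also applies to o3.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="o3",  # assumed model identifier
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What does this poster claim, and is it still current?"},
                # Placeholder URL; per OpenAI, even blurry photos are usable.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/poster-2015.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)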

Meanwhile, the new Codex CLI coding agent is designed to run on a user's device, tapping a cloud-based connection to OpenAI's o3 and o4-mini models for reasoning while also using other software tools deployed locally. Codex CLI doesn't just suggest lines of code; it can autonomously decide to use a variety of different tools to help it complete a task.
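
The division of labor Codex CLI embodies, with cloud-side reasoning driving local tool execution, can be sketched as a simple agent loop. The code below extends the tool-calling sketch above: the model requests a local command, the user's machine runs it, and the output goes back to the model for another round of reasoning. This illustrates the general pattern only; it is not OpenAI's actual Codex CLI implementation, and the model name and run_shell helper are assumptions.

    # Minimal sketch of a local agent loop: the model plans in the cloud,
    # the user's machine executes, and results are fed back. Not OpenAI's
    # Codex CLI code; model name and run_shell are illustrative assumptions.
    import json
    import subprocess

    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "run_shell",  # hypothetical tool; runs on this machine only
            "description": "Run a shell command in the current project directory.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What files are in this project?"}]
    reply = client.chat.completions.create(
        model="o4-mini", messages=messages, tools=tools,  # assumed model id
    ).choices[0].message

    if reply.tool_calls:
        messages.append(reply)  # keep the assistant's tool request in history
        for call in reply.tool_calls:
            args = json.loads(call.function.arguments)
            # Execute locally; the cloud model never touches the machine itself.
            result = subprocess.run(args["command"], shell=True,
                                    capture_output=True, text=True)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": result.stdout or result.stderr,
            })
        # Second round trip: the model reasons over the command's output.
        final = client.chat.completions.create(
            model="o4-mini", messages=messages, tools=tools,
        )
        print(final.choices[0].message.content)

A production agent would gate shell execution behind explicit user approval and sandboxing; the sketch omits both for brevity.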

The company said Codex CLI would also soon be able to tap the capabilities of the GPT-4.1 model that it released earlier this week.

To encourage developers to experiment with Codex CLI, OpenAI said it had set up a $1 million fund that will disburse $25,000 grants in API credits to promising projects.

OpenAI said training o3 took about 10 times as much computing power as training o1, its previous best reasoning model.
