
DeepSeek has made open-source cool again. The Chinese startup’s decision to use open-source frameworks to achieve sophisticated reasoning has shaken up the AI ecosystem: Since then, Baidu has made its ERNIE model open-source, while OpenAI CEO Sam Altman has said he thinks his non-open source company may be on the “wrong side of history.”
There are now two distinct paradigms in the AI sector: the closed ecosystems promoted by giants like OpenAI and Microsoft, versus the open-source platforms championed by companies like Meta and Mistral.
This is more than just a technical debate. Open vs. closed is a fundamental debate about AI’s future and who will control the new technology’s vast potential as a trillion-dollar industry takes shape.
Lessons from history
Every software revolution has been, at its heart, a struggle between open and closed systems.
In the mainframe era, IBM and its closed system dominated, prompting the aphorism: “Nobody ever got fired for choosing IBM.” But as technology matured, businesses turned to open systems that freed them from vendor constraints.
This cycle happened again and again. Open-source Linux challenged Microsoft Windows. PostgreSQL and MySQL became alternatives to Oracle’s databases.
Vendor lock-in, where switching providers becomes nearly impossible, stifles innovation, limits agility, and creates security vulnerabilities. Those risks will only grow as AI is woven ever more deeply into critical business processes.
Open platforms mitigate those risks, allowing organizations to change vendors or bring solutions in-house without incurring crippling costs.
Why open source matters
Consumers may enjoy the convenience of a closed platform. Yet enterprises have different priorities. Organizations can’t send sensitive data and proprietary information through black box APIs that they don’t control.
Open-source AI models offer three critical advantages.
First, open models keep sensitive information within an organization’s infrastructure, reducing the risk of data breaches from interactions with an external server.
Second, enterprises can tailor open-source models to their unique needs, fine-tuning models with their proprietary data without being constrained by a closed system.
Finally, organizations can avoid scaling fees charged by vendors by deploying open-source models on their own infrastructure.
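
To make the second and third advantages concrete, here is a minimal sketch of what fine-tuning an open-weight model on proprietary data, on hardware the organization controls, can look like. It assumes the Hugging Face transformers, peft, and datasets libraries; the model id, documents, and hyperparameters are illustrative placeholders rather than recommendations.

```python
# Minimal sketch: fine-tune an open-weight model on in-house text with LoRA,
# entirely on infrastructure the organization controls. Model id, data, and
# hyperparameters below are illustrative placeholders.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # any open-weight checkpoint (swap in a smaller one to experiment)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # many causal LMs ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Proprietary documents never leave the building: they are tokenized locally.
corpus = Dataset.from_dict({"text": ["Internal policy memo ...", "Support ticket ..."]})
tokenized = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                       batched=True, remove_columns=["text"])

# LoRA trains a small set of adapter weights instead of the full model,
# which keeps fine-tuning affordable on modest in-house hardware.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")  # weights stay on the organization's own servers
```

Because both the base weights and the fine-tuned adapter stay on the organization’s own servers, no proprietary text ever has to pass through a third-party API.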
Closed platforms may be simpler to adopt, but they don’t provide the security, flexibility, and low costs of an open-source model.
Ironically, OpenAI’s rise was built on open-source foundations. The “Attention Is All You Need” paper released by Google in 2017 provided the blueprint for modern language models. Yet, despite this foundation, OpenAI has shifted from its initial open-source ethos to a more closed model, raising questions about its commitment to ensuring that AI benefits “all of humanity.”
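
For readers curious what that blueprint actually contains, the paper’s central mechanism, scaled dot-product attention, is compact enough to sketch in a few lines of NumPy. The toy dimensions below are arbitrary; a production transformer adds multi-head projections, masking, and many stacked layers on top of this kernel.

```python
# Toy NumPy version of the scaled dot-product attention introduced in the
# 2017 "Attention Is All You Need" paper. Dimensions are arbitrary.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, computed row by row."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how strongly each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # numerically stable softmax
    return weights @ V                               # weighted mix of the value vectors

seq_len, d_k = 4, 8                                  # 4 tokens, 8-dimensional vectors
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8)
```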
Microsoft’s partnership with OpenAI has rapidly positioned the tech giant at the forefront of the commercial AI landscape. With over $13 billion invested, Microsoft has integrated GPT-4 across its ecosystem—from Azure to Office applications via Copilot, GitHub, and Bing—creating a powerful lock-in effect for businesses that rely on these tools.
Historically, closed AI systems have prevailed through brute-force strategies: scaling data, parameters, and computing power to dominate the market and raise barriers to entry.
Yet, a new paradigm is emerging: the reasoning revolution. Models like DeepSeek’s R1 demonstrate that sophisticated reasoning capabilities can rival proprietary systems that depend on sheer scale. Reasoning is a Trojan horse for open-source AI, challenging the competitive landscape by proving that algorithmic advancements can diminish the advantages held by closed platforms.
This opens up a crucial opportunity for smaller labs and startups. Open-source AI fosters collective innovation at a fraction of the cost associated with closed systems, democratizing access and encouraging contributions from a wider range of participants.
Currently, the traditional AI value chain is dominated by a few players in hardware (Nvidia), model development (OpenAI, Anthropic), and infrastructure (Amazon Web Services, Microsoft Azure, Google Cloud Platform). This has created significant barriers to entry, due to high capital and compute requirements.
But new innovations, like optimized inference engines and specialized hardware, are dismantling this monolithic structure.
The AI stack is becoming unbundled in this new ecosystem. Companies like Groq are challenging Nvidia in hardware. (Groq is one of Race Capital’s portfolio companies.) Smaller labs like Mistral have built creative models that can compete with OpenAI and Anthropic. Platforms like Hugging Face are democratizing access to models. Inference services like Fireworks and Together are reducing latency and increasing the throughput of requests. Alternative cloud marketplaces, such as Lambda Labs and Fluidstack, offer competitive pricing relative to the Big Three oligopoly.
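
One practical effect of this unbundling: many independent inference providers and self-hosted servers expose OpenAI-compatible endpoints, so switching between them can be a configuration change rather than a rewrite. Below is a hedged sketch using the openai Python client; the base URL, model name, and environment variables are placeholders for whatever endpoint an organization actually runs or rents.

```python
# Sketch of provider-agnostic inference: the same client code can point at a
# hosted service or at a self-managed, OpenAI-compatible server simply by
# changing the base URL. The URL, model name, and env vars are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "http://localhost:8000/v1"),  # e.g. a self-hosted endpoint
    api_key=os.environ.get("LLM_API_KEY", "not-needed-for-local"),
)

reply = client.chat.completions.create(
    model=os.environ.get("LLM_MODEL", "my-finetuned-model"),  # whatever the endpoint serves
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
)
print(reply.choices[0].message.content)
```

Pointing the same code at a different base URL is exactly the kind of low switching cost that closed, tightly integrated stacks are designed to prevent.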
Balancing open vs. closed
Of course, open-source models bring their own risks. Training data could be misappropriated. Malicious actors could develop harmful applications, like malware or deepfakes. Companies, too, may cross ethical boundaries by using personal data without authorization, sacrificing data privacy in pursuit of competitive advantage.
Strategic governance measures can help mitigate these risks. Delaying releases of frontier models could give time for security assessments. Partial weight sharing could also limit the potential for misuse, while still providing research benefits.
The future of AI rests on the ability to balance these competing interests—much like how AI systems themselves balance weights and biases for optimal performance.
The choice between going open or closed represents more than just preference. It’s a pivotal decision that will determine the trajectory of the AI revolution. We must choose frameworks that encourage innovation, inclusivity, and ethical governance. Going open-source will be the way to achieve that.
Alfred Chuang is general partner at Race Capital, which invests across the AI spectrum, including both open-source and closed-source solutions.