Nvidia, which builds some of the most highly sought-after GPUs in the AI industry, has announced that it has released an open-source large language model that reportedly performs on par with leading proprietary models from OpenAI, Anthropic, Meta, and Google.

The company introduced its new NVLM 1.0 family in a recently released white paper, and it’s spearheaded by the 72 billion-parameter NVLM-D-72B model. “We introduce NVLM 1.0, a family of frontier-class multimodal large language models that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models,” the researchers wrote.


Introducing NVLM 1.0, a family of frontier-class multimodal LLMs that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models (e.g., InternVL 2). Remarkably, NVLM 1.0 shows improved text-only… pic.twitter.com/yKGyOqHnsp

— Wei Ping (@_weiping) July 24, 2025

The new model family is reportedly already capable of “production-grade multimodality,” with exceptional performance across a variety of vision and language tasks, as well as improved text-only responses compared with the base LLM on which the NVLM family is built. “To achieve this, we craft and integrate a high-quality text-only dataset into multimodal training, alongside a substantial amount of multimodal math and reasoning data, leading to enhanced math and coding capabilities across modalities,” the researchers explained.

The result is an LLM that can explain why a meme is funny just as easily as it can solve a complex mathematical equation step by step. Nvidia also managed to increase the model’s text-only accuracy by an average of 4.3 points across common industry benchmarks, thanks to its multimodal training approach.

Nvidia appears serious about ensuring that this model meets the Open Source Initiative’s newest definition of “open source” by not only making its model weights available for public review, but also promising to release the model’s source code in the near future. This is a marked departure from rivals like OpenAI and Google, which closely guard the weights and source code of their LLMs. In doing so, Nvidia has positioned the NVLM family not necessarily to compete directly against GPT-4o and Gemini 1.5 Pro, but rather to serve as a foundation for third-party developers to build their own chatbots and AI applications.
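For developers curious what building on the release might look like, here is a minimal sketch of loading open model weights with Hugging Face’s transformers library. The repository name nvidia/NVLM-D-72B and the loading options are assumptions about how an open-weight release of this kind is typically distributed, not details confirmed in Nvidia’s announcement.

```python
# Minimal sketch (not from Nvidia's announcement): loading open NVLM weights with
# Hugging Face transformers. The repo id "nvidia/NVLM-D-72B" and the options below
# are assumptions about how such an open-weight release is typically distributed.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "nvidia/NVLM-D-72B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # a 72B-parameter model needs several high-memory GPUs
    device_map="auto",            # shard layers across whatever devices are available
    trust_remote_code=True,       # multimodal releases often ship custom model code
).eval()

# The exact inference interface (for example, a chat helper that accepts images
# alongside text) would be defined by the repository's own code; consult the model
# card for the supported calls before building on top of it.
```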