India’s growing artificial intelligence ecosystem received a major boost as Sarvam AI unveiled two new large language models (LLMs) — Sarvam-30B and Sarvam-105B — at the India AI Impact Summit on 18 February. The announcement marks a significant milestone in the country’s push toward sovereign AI infrastructure and homegrown foundation models.
Positioned as open-source alternatives to foreign AI systems, the two models are designed to deliver high-quality performance while improving efficiency and reducing token usage — a critical factor in lowering operational costs for enterprises and developers.
Strengthening India’s Sovereign AI Ambitions
The launch comes at a time when India is accelerating its efforts to build domestic AI capabilities under the government-backed IndiaAI Mission. With growing concerns around data security, infrastructure costs, and reliance on overseas technology providers, initiatives like Sarvam AI’s are seen as strategic steps toward technological self-reliance.
By releasing both models as open source, Sarvam AI aims to foster adoption among:
Developers building AI-powered applications
Enterprises seeking localized AI solutions
Government agencies prioritizing data sovereignty
Research institutions advancing AI innovation
This approach encourages transparency, collaboration, and customization — elements that are crucial for scaling AI within regulated and multilingual markets like India.
Built from Scratch: A Fully Indigenous Approach
According to Pratyush Kumar, Founder and CEO of Sarvam AI, the models were developed entirely from the ground up.
He emphasized that the systems were created without external dataset dependencies, underscoring the company’s focus on developing indigenous AI capabilities. This foundational independence aligns with broader national goals of reducing technological reliance on foreign ecosystems.
Sarvam-30B: Efficiency-Focused Large Language Model
The Sarvam-30B model is the smaller of the two but has been engineered with efficiency at its core.
Key Highlights:
Pre-trained on 16 trillion tokens
Supports a 32,000-token context window
Designed to generate high-quality responses using fewer tokens
Token efficiency is a critical metric in large language models because it directly impacts compute requirements and operational costs. By optimizing response quality while reducing token consumption, Sarvam-30B aims to make large-scale AI deployment more cost-effective for enterprises.
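The cost logic above can be sketched with a quick back-of-the-envelope calculation. All traffic volumes, token counts, and per-token prices below are illustrative assumptions, not Sarvam AI's actual figures:

```python
# Hypothetical estimate of how token efficiency affects serving cost.
# Prices and token counts are illustrative, not Sarvam AI's real numbers.

def monthly_cost(requests_per_month, avg_output_tokens, price_per_million_tokens):
    """Cost of generated output tokens for a month of traffic."""
    total_tokens = requests_per_month * avg_output_tokens
    return total_tokens / 1_000_000 * price_per_million_tokens

# A model that answers in 300 tokens vs. one needing 450 for the same quality:
baseline = monthly_cost(1_000_000, 450, 2.0)   # $900
efficient = monthly_cost(1_000_000, 300, 2.0)  # $600

savings = baseline - efficient
print(f"Estimated monthly savings: ${savings:.0f}")
```

Under these assumed numbers, a model that delivers the same answer quality in a third fewer tokens cuts output-token spend by the same third, which is why token efficiency compounds quickly at enterprise request volumes.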
During the summit, Sarvam AI presented benchmark results indicating that Sarvam-30B outperformed several well-known AI systems in specific evaluation categories.
Sarvam-105B: Advanced Model for Long-Context Intelligence
The larger Sarvam-105B model significantly expands capabilities with a 128,000-token context window.
What This Means:
Process longer documents in a single pass
Handle extended multi-turn conversations
Support enterprise-scale knowledge management tasks
According to the company, Sarvam-105B performs on par with other frontier models of comparable size — including both open-source and proprietary systems. The expanded context window makes it particularly suitable for sectors such as legal services, research, governance, and enterprise documentation.
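To put the 128,000-token window in rough perspective, a common heuristic for English text is about 0.75 words per token; the ratio varies by language and tokenizer, so the figures below are estimates, not Sarvam tokenizer measurements:

```python
# Rough sketch of what 32k vs. 128k token context windows can hold.
# WORDS_PER_TOKEN is a common English-text heuristic, not a Sarvam figure.

WORDS_PER_TOKEN = 0.75

def approx_words(context_tokens):
    """Approximate English word capacity of a context window."""
    return int(context_tokens * WORDS_PER_TOKEN)

for tokens in (32_000, 128_000):
    pages = approx_words(tokens) / 500  # assuming ~500 words per printed page
    print(f"{tokens:>7} tokens ≈ {approx_words(tokens):,} words (~{pages:.0f} pages)")
```

By this estimate, the jump from 32,000 to 128,000 tokens moves the model from roughly a long report to a book-length document in one pass, which is what makes the larger model relevant for legal, research, and governance workloads.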
Real-World Demonstration: AI on a Feature Phone
One of the most compelling moments at the launch was a live demonstration showcasing a chatbot named “Vikram” running on a basic feature phone equipped with a physical keypad.
The demonstration highlighted:
Multi-language conversational capability
Operation in low-resource hardware environments
Practical applicability beyond high-end smartphones
The chatbot interacted in Hindi and Punjabi, reflecting the company’s focus on supporting India’s linguistic diversity.
The name “Vikram” pays tribute to Vikram Sarabhai, the renowned Indian physicist widely regarded as the father of India’s space programme. The naming underscores the broader theme of technological self-reliance and national innovation.
Open Source Strategy: Driving Adoption and Innovation
By making Sarvam-30B and Sarvam-105B open source, Sarvam AI is positioning itself within a growing global movement toward transparent AI development.
Open-source models provide:
Greater customization for enterprise needs
Improved auditability and compliance
Faster innovation through community contributions
Reduced vendor lock-in
For India, this could translate into stronger AI ecosystems tailored to local languages, governance requirements, and business environments.
Implications for India’s AI Ecosystem
Sarvam AI joins a growing group of domestic startups working to build large-scale AI models optimized for India’s multilingual and enterprise landscape.
The launch signals several broader trends:
Increased focus on AI sovereignty
Investment in domestic compute and infrastructure
Expansion of open-source AI frameworks
Development of AI systems tailored to emerging markets
As India aims to position itself as a global AI hub, the introduction of high-parameter, open-source foundation models marks an important step in that direction.
The Road Ahead
The unveiling of Sarvam-30B and Sarvam-105B reflects a strategic shift toward building AI systems that are not only powerful but also efficient, accessible, and locally governed.
If widely adopted, these models could accelerate AI innovation across sectors ranging from education and governance to finance and telecommunications. More importantly, they reinforce India’s ambition to shape its own AI future rather than relying exclusively on global technology giants.
With open-source accessibility, extended context capabilities, and a focus on token efficiency, Sarvam AI’s latest models represent a significant contribution to India’s evolving artificial intelligence landscape.