CIOInsider India Magazine

Startups Turn to LLMs to Take GenAI Smarts to Indian Languages

CIO Insider Team | Friday, 8 December, 2023

Indian startups are working to build large language models (LLMs) to scale up generative AI in Indian languages. Releases are expected in the coming months, but experts say it could take three to five years before India gets its ChatGPT equivalent.

Gurgaon-based startup Soket Labs expects to launch its multilingual large language model (LLM) Pragna in the second quarter of next year, founder and CEO Abhishek Upperwal said. A 7-billion-parameter version of Pragna, trained on 23 Indian languages and English, will be released as open source, followed by a 30-billion-parameter model a few months later.

BharatGPT, Corover AI's conversational artificial intelligence LLM platform, which supports 14 Indian languages and is offered to enterprises, will be formally launched in the coming weeks.

The company's order book for the next 12 months stands at 91 trillion rupees, drawn mainly from BFSI, utilities, the public sector and e-commerce, and is expected to grow exponentially next year.

Costs and competition from global giants such as OpenAI's ChatGPT and Google's Bard mean the road won't be easy for such startups.

Soket Labs, which began work on Pragna in February, currently spends $4,000 to $5,000 a month on computing infrastructure to train and test smaller models, with full-scale training expected to cost about $100,000 a month.
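To put those figures in hardware terms, here is a rough sketch in Python that converts a monthly compute budget into GPU-hours at an assumed on-demand cloud rate. The $2.50-per-GPU-hour price is a hypothetical illustration, not a figure from the article.

    # Illustrative arithmetic only: converting a monthly compute budget into
    # GPU-hours at an assumed on-demand cloud rate. The $2.50/GPU-hour price
    # is a hypothetical figure, not from the article.

    ASSUMED_USD_PER_GPU_HOUR = 2.50

    for monthly_budget in (5_000, 100_000):
        gpu_hours = monthly_budget / ASSUMED_USD_PER_GPU_HOUR
        # 24 hours x 30 days = 720 GPU-hours per GPU per month
        print(f"${monthly_budget:,}/month ~= {gpu_hours:,.0f} GPU-hours "
              f"(~{gpu_hours / 720:.1f} GPUs running continuously)")

Under those assumptions, $5,000 a month buys roughly 2,000 GPU-hours (about three GPUs running around the clock), while $100,000 buys roughly 40,000 GPU-hours.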

"It (building LLM businesses) is a capital-intensive task and there is no shortcut to profitability," sayd Sachin Arora, partner and country head of Digital Lighthouse (Cloud, Data and AI), KPMG India.


Unlike Soket Labs' 7-30-billion-parameter models and Corover's 7-100-billion-parameter models, ChatGPT runs on OpenAI's GPT-3.5 model, which was trained with 175 billion parameters.
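For a rough sense of why parameter count drives cost, the memory needed just to hold a model's weights grows linearly with parameters. The back-of-the-envelope sketch below assumes 16-bit (2-byte) weights; actual training and serving footprints are considerably larger (optimizer state, activations, KV cache).

    # Back-of-the-envelope estimate of the memory needed just to store model
    # weights, assuming 2 bytes per parameter (fp16/bf16). Actual training and
    # serving footprints are larger (optimizer state, activations, KV cache).

    def weight_memory_gib(num_params: float, bytes_per_param: int = 2) -> float:
        """Approximate GiB required to hold the raw weights."""
        return num_params * bytes_per_param / (1024 ** 3)

    for name, params in [
        ("Soket Labs Pragna, 7B", 7e9),
        ("Soket Labs Pragna, 30B (planned)", 30e9),
        ("GPT-3.5-class, 175B", 175e9),
    ]:
        print(f"{name}: ~{weight_memory_gib(params):.0f} GiB of 16-bit weights")

By this estimate a 7-billion-parameter model needs about 13 GiB for its weights alone, a 30-billion-parameter model about 56 GiB, and a 175-billion-parameter model about 326 GiB.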

The "smaller" LLMs they create are effective for domain-specific uses, and existing Indian language skills are sufficient, Bhattacharya said, but how well the models perform will be determined by the quality, and quality of their datasets about education.


