Switzerland to enter global AI race with fully open-source, multilingual model


The global race for AI domination has largely been framed as a contest between the US and China, but Switzerland is quietly emerging as a potential contender.

The Alpine nation is looking to introduce a fully open-source large language model (LLM) developed using public infrastructure. It has been built by researchers at Swiss universities such as EPFL and ETH Zurich, working together with engineers at the country’s National Supercomputing Centre (CSCS).

The LLM is currently undergoing final testing and is scheduled to be launched in “late summer” this year. The model will be accessible in two sizes, 8 billion and 70 billion parameters. The parameter count of an LLM reflects its capacity to learn and generate complex responses.
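As a rough, back-of-the-envelope illustration (not a figure from the announcement), the sketch below estimates how much memory the weights alone of the two announced sizes would occupy at 16-bit precision:

```python
# Rough estimate: each parameter stored in 16-bit precision occupies 2 bytes.
# Activations, optimiser state, and serving overhead are ignored here.
for params in (8e9, 70e9):
    weight_gb = params * 2 / 1e9
    print(f"{params / 1e9:.0f}B parameters -> roughly {weight_gb:.0f} GB of weights")

# Output:
# 8B parameters -> roughly 16 GB of weights
# 70B parameters -> roughly 140 GB of weights
```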


The Swiss researchers are making the model, including its source code and weights, downloadable under the Apache 2.0 open licence. They have also committed to releasing details such as the model architecture, training methods, and usage guidelines alongside the model.
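In practice, “downloadable under Apache 2.0” means anyone will be able to pull the weights and run them locally. The sketch below is a hypothetical example of what that could look like with the Hugging Face transformers library; the repository id is a placeholder, since the actual distribution channel and model name are not given in this article:

```python
# Hypothetical example: loading a fully open model once its weights are published.
# "example-org/swiss-open-llm-8b" is a placeholder id, for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "example-org/swiss-open-llm-8b"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Multilingual prompt (Swiss German greeting) to exercise the multilingual training.
inputs = tokenizer("Grüezi! Wie heisst die Hauptstadt der Schweiz?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```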

The model has been specifically designed to ensure transparency, multilingual performance, and broad accessibility.

The release of OpenAI’s ChatGPT in 2022 kicked off an AI arms race among tech giants such as Google, Meta, and Microsoft. That battle has since escalated into a global race for dominance, especially after DeepSeek, a Chinese AI startup, stunned the tech world by developing a cutting-edge, open AI model at a fraction of the cost of the proprietary, closed systems from US-based firms that had dominated the market until then.

While countries like India are looking to enlist homegrown startups to help build foundational AI models to rival ChatGPT and DeepSeek R1, Switzerland appears to be taking a slightly different approach.


“Fully open models enable high-trust applications and are necessary for advancing research about the risks and opportunities of AI. Transparent processes also enable regulatory compliance,” Imanol Schlag, research scientist at the ETH AI Center, who is leading the effort alongside EPFL AI Center faculty members and professors Antoine Bosselut and Martin Jaggi, said in a statement.

“We have emphasised making the models massively multilingual from the start. As scientists from public institutions, we aim to advance open models and enable organisations to build on them for their own applications,” Bosselut said.

“By embracing full openness — unlike commercial models that are developed behind closed doors — we hope that our approach will drive innovation in Switzerland, across Europe, and through multinational collaborations. Furthermore, it is a key factor in attracting and nurturing top talent,” Jaggi said.

How the model was developed

The base model developed by the Swiss researchers has been trained on large text datasets covering over 1,500 languages. Around 60 per cent of the data is in English and the rest is in non-English languages. The datasets also include code and mathematics-related data.


The project heads also recently published a study which found that respecting web crawling opt-outs during the pre-training stage of LLM development does not degrade the model’s performance on everyday tasks or general knowledge tasks.
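The article does not spell out how those opt-outs were expressed, but a common mechanism is the robots.txt protocol. The sketch below, using Python’s standard urllib.robotparser, shows what honouring such an opt-out during data collection could look like; the user-agent string is a placeholder:

```python
from urllib.parse import urlsplit
from urllib import robotparser

def may_fetch(url: str, user_agent: str = "ResearchCrawler") -> bool:
    """Return True only if the site's robots.txt allows this user agent to fetch the URL."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # download and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)

# A pre-training crawler would only keep pages whose publishers have not opted out.
if may_fetch("https://example.com/article.html"):
    pass  # fetch the page and add it to the training corpus
else:
    pass  # respect the opt-out and skip the page
```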

In addition, the model was trained using the CSCS’s ‘Alps’ supercomputer comprising clusters of over 10,000 NVIDIA Grace Hopper GPUs. “The system’s scale and architecture made it possible to train the model efficiently using 100% carbon-neutral electricity,” the announcement post read.


