OLMoE Achieves State-of-the-Art Performance Using Fewer Resources and MoE

  • Last updated September 5, 2024
  • In AI News

A team of researchers from the Allen Institute for AI, Contextual AI, and the University of Washington has released OLMoE (Open Mixture-of-Experts Language Models), a new open-source LLM that achieves state-of-the-art performance while using significantly fewer computational resources than comparable models.

OLMoE uses a Mixture-of-Experts (MoE) architecture: it has 7 billion total parameters but activates only 1.3 billion of them for each input. This enables OLMoE to match or exceed the performance of much larger models like Llama2-13B while using far less compute during inference.
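
To make the routing idea concrete, below is a minimal sketch of a top-k-gated MoE feed-forward layer in PyTorch. It is not OLMoE's actual implementation, and the layer sizes, expert count, and top-k value are illustrative placeholders; the point is only that a router picks a small subset of experts per token, so just a fraction of the layer's parameters is used in any forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Illustrative Mixture-of-Experts feed-forward layer with top-k routing.

    All sizes here are made-up placeholders, not OLMoE's real configuration.
    """

    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, k=2):
        super().__init__()
        self.k = k
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                      # (num_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)         # mixture weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out


# Only k of n_experts expert MLPs run per token, so far fewer parameters are
# active in a forward pass than the layer stores in total.
layer = TopKMoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```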

Thanks to its Mixture-of-Experts design, better data, and improved hyperparameters, OLMoE is far more efficient than OLMo-7B: it uses roughly 4x fewer training FLOPs and about 5x fewer parameters per forward pass, which makes both training and inference cheaper.
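
A rough back-of-the-envelope check of the parameter claim, using only the approximate figures quoted in this article (not an exact accounting of either architecture):

```python
# Rough arithmetic behind the "5x fewer parameters per forward pass" claim,
# using the approximate figures quoted in this article.
olmoe_total_params = 7e9      # OLMoE: total parameters
olmoe_active_params = 1.3e9   # OLMoE: parameters activated per input
olmo_dense_params = 7e9       # OLMo-7B: dense, so all parameters are active

print(f"Share of OLMoE parameters active per input: {olmoe_active_params / olmoe_total_params:.0%}")
print(f"Active parameters vs OLMo-7B: ~{olmo_dense_params / olmoe_active_params:.1f}x fewer")
```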

Importantly, the researchers have open-sourced not just the model weights, but also the training data, code, and logs. This level of transparency is rare for high-performing language models and will allow other researchers to build upon and improve OLMoE.
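
Because the weights are openly released, the model can be loaded with standard tooling. The snippet below is a minimal sketch using the Hugging Face transformers library; the allenai/OLMoE-1B-7B-0924 model ID is assumed from the public release, so verify the exact identifier and the required transformers version against the official announcement.

```python
# Minimal sketch: loading the openly released OLMoE checkpoint with Hugging Face
# transformers. The model ID below is assumed from the public release; verify it
# (and that your transformers version includes OLMoE support) before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMoE-1B-7B-0924"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Mixture-of-Experts language models are efficient because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```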

For example, on the MMLU benchmark, OLMoE-1B-7B achieves a score of 54.1%, roughly matching OLMo-7B (54.9%) and clearly surpassing Llama2-7B (46.2%), despite using significantly fewer active parameters. After instruction tuning, OLMoE-1B-7B-INSTRUCT even outperforms larger models like Llama2-13B-Chat on benchmarks such as AlpacaEval.

OLMoE compared to other models

This demonstrates the effectiveness of OLMoE's Mixture-of-Experts architecture in achieving high performance with lower computational requirements.

Additionally, OLMoE-1B-7B stands out for its fully open release, including model weights, training data, code, and logs, making it a valuable resource for researchers and developers looking to build upon and improve state-of-the-art language models.

MoE is an attractive choice when you don't have the resources to train a huge dense model from scratch: it combines many smaller expert networks with different specialties into a single model, and because only a few experts are activated per input, you get broad capability without the full training and inference cost.


Sagar Sharma

A software engineer who loves to experiment with new-gen AI. He also happens to love testing hardware, and sometimes it crashes. While reviving his crashed system, you can find him reading literature, manga, or watering plants.
