Thanks to NVIDIA, Llama 3.1’s Context Window Went Up From 128k to 4M

  • Published on April 14, 2025
  • In AI News

Researchers from NVIDIA and UIUC have uncovered a technique to expand standard context window limits in LLMs.

LLM Systems Will Soon Have Infinite Context Length

Illustration by Nikhil Kumar

LLMs have been pushing context window limits to let users provide more information and get accurate results. A new study appears to have found a way to go well beyond the one-million-token mark.


Researchers from NVIDIA and the University of Illinois Urbana-Champaign (UIUC) have published a research paper describing a technique to expand the context window of LLMs to about four million tokens.

They have also released UltraLong-8B, a new series of models – Llama-3.1-8B-UltraLong-1M-Instruct, Llama-3.1-8B-UltraLong-2M-Instruct, and Llama-3.1-8B-UltraLong-4M-Instruct – all available on Hugging Face. These models are based on Llama-3.1-8B-Instruct.
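For readers who want to try the checkpoints, the snippet below is a minimal sketch of loading one of them with the Hugging Face transformers library. The repository id used here is assumed from the model names above and should be verified against the actual model card.

```python
# Minimal sketch: load an UltraLong checkpoint and run a short chat-style prompt.
# The repo id below is assumed from the article's naming; verify it on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-8B-UltraLong-1M-Instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # spread layers across the available GPUs
)

# Ordinary generation; long-context use would simply pass a much longer prompt.
messages = [{"role": "user", "content": "Summarise the following document: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```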

“In this work, we introduce an efficient training recipe for building ultra-long context LLMs from aligned instruct model, pushing the boundaries of context lengths from 128K to 1M, 2M, and 4M tokens,” the researchers stated. 

“Our approach leverages efficient continued pretraining strategies to extend the context window and employs effective instruction tuning to maintain the instruction-following and reasoning abilities,” they added.

The approach involves two main stages. The first extends the context window using a specially curated corpus in which long documents are upsampled. The researchers applied ‘YaRN-based RoPE scaling’ to improve the model’s ability to process long sequences, and adopted a single-step continued pretraining approach over multi-step techniques.
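To make the RoPE-scaling step concrete, here is a minimal sketch of how YaRN-style scaling can be configured for a Llama model through the transformers rope_scaling setting; the scaling factor and target window below are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative sketch of YaRN-style RoPE scaling via transformers' rope_scaling
# config. The factor and target window are assumptions chosen to reach ~4M tokens.
from transformers import AutoConfig, AutoModelForCausalLM

base_id = "meta-llama/Llama-3.1-8B-Instruct"
config = AutoConfig.from_pretrained(base_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 32.0,                               # 131072 * 32 ≈ 4.2M positions (illustrative)
    "original_max_position_embeddings": 131072,   # Llama 3.1's native window
}
config.max_position_embeddings = 4_194_304        # extended context window

model = AutoModelForCausalLM.from_pretrained(base_id, config=config, torch_dtype="auto")
# Continued pretraining on a corpus with upsampled long documents would then
# adapt the model to actually use this longer window.
```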

The second stage deals with instruction tuning, which refines the model’s instruction-following and reasoning capabilities using a high-quality, short-context supervised fine-tuning (SFT) dataset across general, mathematical, and coding domains.
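As a rough illustration of that second stage, the sketch below uses the TRL library's SFTTrainer on a public chat dataset; the dataset and hyperparameters are placeholders rather than the paper's curated general, math, and code mix.

```python
# Rough sketch of short-context SFT with TRL's SFTTrainer. The dataset and
# hyperparameters are placeholders, not the paper's actual SFT recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_data = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")  # placeholder data

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # would be the extended 1M/2M/4M checkpoint
    train_dataset=train_data,
    args=SFTConfig(
        output_dir="ultralong-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        num_train_epochs=1,
    ),
)
trainer.train()
```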

As per the paper, benchmark experiments included evaluations such as RULER, LV-Eval, InfiniteBench, and HumanEval. The UltraLong-8B models outperformed existing Llama-based long-context models on both long-context and standard tasks. The researchers also ran a Needle in a Haystack (NIAH) test, where the models achieved 100% accuracy.
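For intuition about what the NIAH test measures, here is a tiny, self-contained sketch of how such a prompt is typically constructed; it is illustrative only, not the researchers' evaluation code.

```python
# Tiny needle-in-a-haystack sketch: bury one fact inside long filler text and
# check whether the model can retrieve it.
import random

def build_niah_prompt(needle: str, filler: str, n_fillers: int, depth: float) -> str:
    """Place `needle` at a relative depth inside repeated filler sentences."""
    sentences = [filler] * n_fillers
    sentences.insert(int(depth * n_fillers), needle)
    context = " ".join(sentences)
    return f"{context}\n\nQuestion: What is the secret passcode mentioned above?\nAnswer:"

prompt = build_niah_prompt(
    needle="The secret passcode is 7421.",
    filler="The sky stayed clear and the market was quiet all afternoon.",
    n_fillers=50_000,          # scale up to approach million-token contexts
    depth=random.random(),     # vary where the needle is buried
)
# A model scores 100% if its answer contains "7421" across all depths and lengths tested.
```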

The researchers acknowledged that the technique relies on supervised fine-tuning and does not explore reinforcement learning, which could be studied in future work. They also note that the work focuses on expanding the context window and does not explicitly address the model’s safety alignment.


Ankush Das

I am a tech aficionado and a computer science graduate with a keen interest in AI, Open Source, and Cybersecurity.
