UAE’s TII Announces ‘Powerful’ Small AI Model Falcon 3 

  • Published on December 17, 2024
  • In AI News

The small model is available in multiple variants up to 10 billion parameters, and has been released on Hugging Face.


Technology Innovation Institute (TII), a research institute from Abu Dhabi, UAE, has unveiled a new family of small language models named Falcon 3. The models range from one billion to 10 billion parameters, in both base and instruct versions. Falcon 3 is available as an open-source model under TII's Falcon License 2.0.

The institute also released benchmark results comparing Falcon 3 with other leading models in its category. Both Falcon 3 7B and 10B outperformed models like Qwen 2.5 7B and Llama 3.1 8B on several benchmarks.

TII is a global research institution based in Abu Dhabi and funded by the Abu Dhabi government. It was established in May 2020 and focuses on research in AI, quantum computing, robotics, and cryptography. 

Falcon 3 employs Grouped Query Attention (GQA), a parameter-sharing technique in which multiple query heads share the same key and value projections. This shrinks the key-value cache, reducing memory demands and lowering latency during inference.
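To illustrate the idea, here is a minimal NumPy sketch of grouped-query attention. This is not TII's implementation; the function name, head counts, and dimensions are illustrative. The key point is that several query heads index into one shared key/value head, so the KV cache holds `n_kv_heads` tensors instead of `n_q_heads`.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_q_heads, n_kv_heads):
    """Sketch of GQA for a single sequence (no masking, no batching).

    q:    (n_q_heads, seq_len, d)   one set of queries per query head
    k, v: (n_kv_heads, seq_len, d)  shared KV heads, n_kv_heads <= n_q_heads
    """
    group = n_q_heads // n_kv_heads          # query heads per shared KV head
    d = q.shape[-1]
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                      # map query head -> shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d) # (seq_len, seq_len) attention logits
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)   # softmax over keys
        out[h] = w @ v[kv]
    return out
```

With `n_kv_heads == n_q_heads` this reduces to standard multi-head attention; with `n_kv_heads == 1` it becomes multi-query attention. GQA sits in between, trading a small quality cost for a proportionally smaller KV cache.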

“The initial training was followed by multiple stages to improve reasoning and math performance with high-quality data and context extension with natively long context data,” read the announcement. 

The model was also trained in four languages: English, Spanish, Portuguese, and French.

All variants of the Falcon 3 models are available for download on Hugging Face.

In August, TII launched the Falcon Mamba 7B model, which outperformed Meta's Llama 3.1 8B and Llama 3 8B, as well as Mistral 7B, in benchmarks. In May, it launched Falcon 2, an 11B text and vision model.

Small Models on the Rise

Are small language models finally delivering on their promise? A few days ago, Microsoft announced the latest Phi-4 model. With just 14B parameters, the model outperformed much larger models like Llama 3.3 70B and GPT-4o on several benchmarks.

There have also been discussions about the relevance of pre-training and the brute-force approach of improving models by increasing their size. Ilya Sutskever, former OpenAI chief scientist, weighed in on this debate in his NeurIPS 2024 presentation.

“Pre-training as we know it will unquestionably end,” he said, referring to the lack of available data. “We have but one internet. You could even go as far as to say that data is the fossil fuel of AI. It was created somehow, and now we use it,” he added. 

He also speculated that inference-time compute and synthetic training data are key techniques that may help researchers overcome the problem.

That said, if small models can leverage new and innovative techniques to deliver high performance on resource-constrained devices, the smartphone market will be one to watch in 2025.


Supreeth Koundinya

Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.
