Microsoft has introduced Phi-4, the latest model in its Phi family of generative AI models. The model improves on its predecessors in several areas, particularly math problem solving, which Microsoft attributes to higher-quality training data.
For now, Phi-4 is available only in very limited access: through Microsoft's recently launched Azure AI Foundry platform, and solely for research purposes under a Microsoft research license agreement.
Phi-4 is a small language model with 14 billion parameters, putting it in the same class as other compact models such as GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. Models of this size are typically faster and cheaper to run, and their performance has steadily improved over the past few years.
Microsoft credits Phi-4's improved performance to the use of high-quality synthetic datasets alongside human-generated data, as well as unspecified post-training refinements.
Many AI labs are now looking more closely at synthetic data and post-training methods as levers for improving models. Scale AI CEO Alexandr Wang has gone so far as to say that the industry has hit a "pre-training data wall," meaning there is a limit to how much additional raw training data can improve models on its own.
Phi-4 is the first model in the family to launch since Sébastien Bubeck, a key figure in the Phi models' development, left Microsoft in October to join OpenAI.
Further details are available in the official technical report, P4TechReport.pdf.