OpenAI’s ChatGPT, Microsoft Copilot, Midjourney, Google’s Gemini, and other AI tools are taking the tech industry by storm. The growth of these tools is exponential. ChatGPT became the fastest-growing consumer app by signing up 100 million users in 2 months. For comparison, it took TikTok 9 months and Instagram 2.5 years to reach 100 million users.
AI apps like ChatGPT are powered by large language models (LLMs): AI models trained on massive data sets. That is where the magic happens. ChatGPT is merely a front-end tool; the LLMs do the real work behind the scenes.
Besides LLMs, a new class of model is emerging fast: the small language model (SLM). An SLM is a miniaturized version of its larger cousin, the large language model. Compared to LLMs, which often boast hundreds of billions or even trillions of parameters, SLMs have significantly fewer, typically ranging from a few million to a few billion.
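To put those parameter counts in perspective, a quick back-of-the-envelope calculation shows why SLMs fit on ordinary hardware while LLMs do not. This is a sketch; the function name and the 2-bytes-per-parameter (fp16) assumption are illustrative, not taken from any particular framework:

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate RAM needed just to hold the weights (fp16 = 2 bytes per parameter)."""
    return num_params * bytes_per_param / 1e9

print(model_memory_gb(7e9))    # a 7B-parameter SLM: 14.0 GB
print(model_memory_gb(175e9))  # a 175B-parameter LLM: 350.0 GB
```

Note that this counts only the weights; training takes several times more memory than inference, which is why the gap matters even more during training.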
In the future, personal language models (PLMs) designed to understand and generate content about individual users will be needed. I will cover PLMs in my next article.
This "smaller" size comes with its own set of advantages and limitations:
Advantages of SLMs
Here are some of the key benefits of SLMs:
- Less costly: One of the biggest problems with LLMs is the resources they require to store, train, and run. Because SLMs target narrower use cases, they can be trained on smaller data sets with less computational power, making them a more cost-effective option.
- Deployability: Because SLMs are small and compact, they are easier to package and deploy on devices with limited memory and processing power, such as mobile phones and edge devices.
- Privacy and Control: Not every business wants to share its data publicly. SLMs are designed to store and process data locally or within a private network boundary. Data processing can even happen locally on the device, offering benefits for privacy and security, especially for sensitive information.
- Specialization: SLMs are effective when designed for a specific domain, such as banking, HR, or legal. A model built for the legal domain, for example, only needs legal-related data and therefore requires fewer resources.
- Interpretability: Their simpler architecture makes it easier to understand how they reach their conclusions, which can be helpful for debugging and improvement.
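The deployability and privacy advantages above can be sketched in a few lines. This is a minimal illustration assuming the Hugging Face `transformers` library is installed; the function name `summarize_locally` and the choice of `t5-small` are my own, not a prescribed API:

```python
def summarize_locally(text: str, model_name: str = "t5-small") -> str:
    """Summarize text entirely on-device: no data leaves the machine."""
    # Imported lazily so the function is cheap to define; assumes the
    # Hugging Face `transformers` package (and a cached model) is available.
    from transformers import pipeline
    summarizer = pipeline("summarization", model=model_name)
    return summarizer(text, max_length=40, min_length=5)[0]["summary_text"]
```

Because a model like this weighs in at tens of millions of parameters rather than hundreds of billions, it runs on a laptop CPU, and sensitive text is never sent to a third-party API.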
What are some of the most popular SLMs?
The popularity of SLMs is growing rapidly, making it a dynamic landscape. Here are some of the most notable players currently generating buzz:
Open-source models
Phi-3 (Microsoft): A relatively new player in the field of Small Language Models, Phi-3 was developed by Microsoft and offers several key advantages:
- Smaller size: Compared to other SLMs, Phi-3 models are lightweight and require less processing power. This makes them suitable for deployment on various devices.
- Strong performance: Despite their compact size, Phi-3 models achieve high accuracy, rivaling even larger models in some tasks.
- Focus on reasoning: The training data for Phi-3 emphasizes properties that promote logical reasoning abilities.
Llama-2-13b and CodeLlama-7b (Meta): These models focus on general language understanding and code generation. They've gained traction for their open-source availability and strong performance on various benchmarks.
Mistral-7b and Mixtral 8x7b (Mistral): Mistral-7b has impressed with its ability to outperform larger models on specific tasks, while Mixtral, a mixture-of-experts model, shows exceptional promise in matching the performance of GPT-3.5 across various areas.
Phi-2 and Orca-2 (Microsoft): Known for their strong reasoning capabilities, these models shine in tasks requiring logical deduction and understanding. Orca-2, in particular, excels on zero-shot reasoning tasks.
Smaller models (under 1 billion parameters)
- DistilBERT, TinyBERT, and T5-Small are highly efficient models ideal for specific use cases like summarization. Their low computational footprint makes them perfect for on-device applications or resource-constrained environments.
- Industry-specific models:
- Jurassic-1 (AI21 Labs): A general-purpose model family from AI21 Labs whose models can be tuned for specialized text, such as scientific communication.
- BLOOM (BigScience/Hugging Face): An open multilingual model from the BigScience project coordinated by Hugging Face; its smaller variants can be fine-tuned for domain tasks such as contract analysis and due diligence.
Can I create my own SLM for my company?
Yes, it is possible to create your own SLM for your company, but it depends on several factors:
- Building and training an SLM requires significant resources, including computational power, large datasets, and machine learning expertise. In addition to hardware, your technical team needs to be well-versed in artificial intelligence, model design, and architecture.
- Some public tools and libraries may help you create your own SLMs, but you will still need to have some experience in artificial intelligence, data modeling, and machine learning.
Before embarking on your SLM model, consider existing open-source SLMs like those mentioned earlier. Fine-tuning an existing model to your specific needs can often be quicker and more cost-effective.
Here are some steps to consider if you decide to build your own SLM:
- Define your goals and use cases. What specific tasks do you want your SLM to perform?
- Gather data relevant to your use cases. This could be text data from your industry, customer interactions, or product manuals.
- Choose an appropriate SLM architecture and framework. PyTorch and TensorFlow are popular options for training smaller models, and TensorFlow Lite can help you deploy them on mobile and edge devices.
- Train your model on your collected data. This might involve renting cloud computing resources or using specialized hardware.
- Evaluate and refine your model's performance. Use metrics relevant to your tasks and iteratively improve your model.
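At toy scale, the train-and-evaluate loop in the steps above looks like this. This is a deliberately tiny bigram model in plain Python, an illustration of the workflow only, not a real SLM:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Train step: count which token follows which, then normalize to probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}

def evaluate(model, held_out):
    """Evaluate step: fraction of held-out bigrams the model has seen before."""
    seen = total = 0
    for sentence in held_out:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            total += 1
            seen += nxt in model.get(prev, {})
    return seen / total

model = train_bigram(["the model answers questions", "the model writes summaries"])
print(evaluate(model, ["the model answers questions"]))  # 1.0: every bigram was seen in training
```

A real SLM replaces the counting with a neural network and gradient descent, but the shape of the workflow is the same: gather data, train, evaluate on held-out data, and iterate.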
Hire an SLM Expert
Are you planning to create your own SLM? Do you need an expert consultant or trainer? The CSharp team provides AI consulting and training services.
You can contact an AI expert trainer here.