Meta has introduced new AI models, including a tool called the Self-Taught Evaluator, which aims to reduce the need for human involvement in AI development. The new models are part of Meta’s efforts to improve AI accuracy and efficiency in complex areas.
The Self-Taught Evaluator uses a method called ‘chain of thought,’ similar to a technique used by OpenAI: it breaks a problem down into smaller steps, improving accuracy in fields like science, coding, and math. The evaluator was trained entirely on AI-generated data, so no human input was needed during that training.
This AI can evaluate other AI models on its own, which could save time and money compared to traditional methods that rely on human feedback. Meta’s researchers believe that AI systems capable of improving themselves could outperform human evaluators, moving us closer to digital assistants that handle complex tasks without help.
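The idea of an AI judging other AI outputs with chain-of-thought reasoning can be sketched in a few lines. Everything below is illustrative, not Meta’s actual implementation: the prompt asks a judge model to reason step by step before picking a winner, and `judge_model` is a hypothetical stub standing in for a real LLM call.

```python
# Hypothetical sketch of chain-of-thought judging: the judge is asked to
# reason step by step before delivering a verdict on two candidate answers.

def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Compose a prompt that asks the judge to think step by step."""
    return (
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Think step by step: check each answer for correctness, "
        "then state which answer is better as 'Verdict: A' or 'Verdict: B'."
    )

def judge_model(prompt: str) -> str:
    # Stub: a real system would call an LLM here. This stand-in simply
    # prefers the longer answer, just to keep the example self-contained.
    a = prompt.split("Answer A: ")[1].split("\n")[0]
    b = prompt.split("Answer B: ")[1].split("\n")[0]
    return "Reasoning: ...\nVerdict: " + ("A" if len(a) >= len(b) else "B")

def evaluate(question: str, answer_a: str, answer_b: str) -> str:
    """Return 'A' or 'B' depending on which answer the judge prefers."""
    reply = judge_model(build_judge_prompt(question, answer_a, answer_b))
    return reply.rsplit("Verdict: ", 1)[1].strip()

print(evaluate("What is 2 + 2?", "4", "The answer is 4."))
```

Because the verdicts themselves are machine-generated, a training loop built on this pattern needs no human preference labels, which is the cost saving the researchers highlight.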
In the latest release, Meta also updated its Segment Anything model, released tools for quicker language model responses, and created datasets to help find new inorganic materials. Unlike companies such as Google and Anthropic, Meta makes its models publicly available, which sets it apart in the industry.