Meta, the parent company of Facebook, is making significant strides in AI. According to The Wall Street Journal, the company is investing heavily in AI training chips, such as Nvidia's H100, and expanding its data center infrastructure, with the aim of developing a chatbot as powerful as OpenAI's GPT-4.
Training of the large language model is scheduled to begin in early 2024, and CEO Mark Zuckerberg is pushing for it to be freely available to companies building AI tools. Meta previously relied on Microsoft's Azure cloud platform for such work but now seeks independence in this domain. The company has formed a dedicated team to accelerate the development of AI tools capable of replicating human expressions.
This effort aligns with Meta's rumored generative AI features, such as the Instagram chatbot with 30 personalities currently in testing. Meta has faced challenges, however, including high turnover among AI researchers over how resources are allocated across its multiple language model projects. It also faces stiff competition.
While OpenAI has reportedly delayed GPT-5 training, Apple is investing heavily in its "Ajax" AI model, which it claims surpasses GPT-4. Google and Microsoft are incorporating AI into their productivity tools, and Google aims to bring generative AI to Google Assistant. Even Amazon is working on generative AI projects, potentially leading to a chatbot-powered Alexa.
Meta’s AI endeavors hold great promise, but the evolving landscape of generative AI is intensely competitive, with tech giants vying to push the boundaries of artificial intelligence.