Bing may soon have the ability to use images, videos, and other types of data in responses.

The new ChatGPT-based Bing is continually improving, and Microsoft could roll out its biggest upgrade yet next week. Andreas Braun, CTO of Microsoft Germany, recently announced that GPT-4 will arrive next week. Along with this, the executive hinted at “multimodal models that will offer completely different possibilities.”

Microsoft has already poured billions of dollars into its artificial intelligence initiatives, most notably the continual improvement of Bing to better compete with Google. Now, the software giant has confirmed that GPT-4 will arrive next week, and the model is expected to be injected into its search engine and chatbot.

Before the release of the ChatGPT-powered Bing, there were rumors that Bing would use GPT-4. Instead, Microsoft paired the GPT-3.5 model with its proprietary Prometheus technology, which allows Bing to generate up-to-date results. Surprisingly, while the new Bing isn’t yet available to everyone, the company already has plans to give the search engine a significant boost through the upcoming GPT-4.

The upcoming OpenAI large language model is expected to let the Bing chatbot generate results faster, which could be a big help given that the current version usually takes a few seconds to start generating responses. Beyond speed, however, multimodal capability could be the most important thing the new LLM brings.

At Microsoft’s AI in Focus – Digital Kickoff event, Braun shared some details on what to expect from the new LLM (via Heise).

“We will present GPT-4 next week, there we will have multimodal models that will offer completely different possibilities, for example, videos,” said Braun, who described the LLM as a “game changer.”

In addition, Braun confirmed that Microsoft plans to “make models comprehensive” using multimodal capability. Once integrated, this should allow Bing to draw on a variety of data when answering questions, meaning it could also process video, images, and other data types. This should result in better responses, making Bing a more effective search assistant for everyone.

It is worth noting, however, that Bing is not the first search engine to embrace multimodality. You.com recently implemented its multimodal chat search feature, which allows users to provide text and voice input and receive responses beyond conversational text. That search engine, however, is still struggling to gain public attention, while Bing, even though it’s not yet fully accessible to everyone, already has an ever-expanding waiting list. Injecting multimodal capability into Bing will no doubt hurt rivals such as You.com, but it is still too early to say how widespread this effect will be, as Braun’s announcement confirms only a few details.
