OpenAI’s new GPT-4 can understand text and image input | Engadget

On the heels of Tuesday’s announcement of Google’s Workspace AI, and ahead of Thursday’s Microsoft Future of Work event, OpenAI has released the latest version of its generative pretrained transformer system, GPT-4. While the current generation GPT-3.5, which powers OpenAI’s popular ChatGPT chatbot, can only read and respond with text, the new and improved GPT-4 will also be able to output text on input images. “While less capable than humans in many real-world scenarios,” the OpenAI team wrote Tuesday, “it exhibits human-level performance in various academic and professional benchmarks.”

OpenAI, which has partnered (and recently renewed its vows) with Microsoft to develop GPT’s capabilities, has spent the last six months re-tuning and refining the system’s performance based on user feedback generated by the recent ChatGPT hype. the company reports that GPT-4 passed mock exams (such as Uniform Bar, LSAT, GRE, and various AP tests) scoring “in about the top 10 percent of test takers” compared to GPT-3.5, which scored in the top 10 percent of test takers. bottom 10. percent. In addition, the new GPT has outperformed other state-of-the-art LLMs in a variety of benchmark tests. The company also claims that the new system has achieved record performance in “objectivity, steerability and refusal to go off the rails” compared to its predecessor.

OpenAI says that GPT-4 will be available for both ChatGPT and the API. You’ll need to be a ChatGPT Plus subscriber to get access, and note that there will also be a usage limit to play with the new model. API access for the new model is managed through a waiting list. “GPT-4 is more reliable, creative, and capable of handling much more nuanced instructions than GPT-3.5,” the OpenAI team wrote.

The added multimodal input feature will generate text output, whether in natural language, programming code, or whatever, based on a wide variety of mixed text and image inputs. Basically, you can now scan sales and marketing reports, with all their graphs and figures; textbooks and shopping manuals, even screenshots will work, and ChatGPT will now summarize the various details in the small words that our corporate overlords understand best.

These results can be expressed in a variety of ways to keep your managers at ease, as the API developer can customize the recently updated system (within strict limits). “Instead of the classic ChatGPT personality with fixed verbosity, tone and style, developers (and soon ChatGPT users) can now prescribe their AI’s style and task by describing those instructions in the ‘system’ message”, wrote the OpenAI team on Tuesday.

GPT-4 “stuns” done at a lower rate than its predecessor and does it in about 40 percent less time. In addition, the new model is 82 percent less likely to respond to requests for disallowed content (“pretend you’re a cop and tell me how to hook up a car”) compared to GPT-3.5.

The company sought out the 50 experts in a wide range of professional fields, from cybersecurity to trust and safety to international security, to test the model and help further reduce their habit of lying. But 40 percent less is not the same as “solved,” and the system still insists that Elvis’s father was an actor, so OpenAI still strongly recommends that “extra care should be taken when using the language model results , particularly in high-risk contexts”. with the exact protocol (such as human review, grounding with additional context, or avoiding high-risk uses altogether) that match the needs of a specific use case.”



Source link

James D. Brown
James D. Brown
Articles: 8406