Explainer: What Is Microsoft-backed OpenAI's GPT-4 Model?


Microsoft Corp-backed startup OpenAI began the rollout of GPT-4, a powerful artificial intelligence model that succeeds the technology behind the wildly popular ChatGPT.

GPT-4 is “multimodal”, which means it can generate content from both image and text prompts.

WHAT IS THE DIFFERENCE BETWEEN GPT-4 AND GPT-3.5?

GPT-3.5 takes only text prompts, whereas the latest version of the large language model can also use images as inputs to recognize objects in a picture and analyze them.

GPT-3.5 is limited to about 3,000-word responses, while GPT-4 can generate responses of more than 25,000 words.

GPT-4 is 82% less likely to respond to requests for disallowed content than its predecessor and scores 40% higher on certain tests of factuality.

It will also let developers decide their AI’s style, tone and verbosity. For example, GPT-4 can assume a Socratic style of conversation and respond to questions with questions. The previous iteration of the technology had a fixed tone and style.

Soon ChatGPT users will have the option to change the chatbot’s tone and style of responses, OpenAI said.
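For software developers, this kind of steering is typically expressed as a “system” message sent alongside the user’s prompt. The snippet below is a minimal illustrative sketch, not OpenAI’s official example: it assumes the `openai` Python package (pre-1.0 interface), a placeholder API key, and an invented Socratic instruction.

```python
# Illustrative sketch only: steering GPT-4's style with a "system" message
# via OpenAI's chat completions API (pre-1.0 `openai` Python package).
# The API key and the Socratic instruction below are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets the assistant's tone and verbosity;
        # here it asks for a Socratic style that answers with questions.
        {"role": "system",
         "content": "You are a Socratic tutor. Answer every question "
                    "with a short guiding question."},
        {"role": "user", "content": "Why does the moon have phases?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```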

WHAT ARE THE CAPABILITIES OF GPT-4?

The latest version outperformed its predecessor on the U.S. bar exam and the Graduate Record Examination (GRE). GPT-4 can also help individuals calculate their taxes, a demonstration by Greg Brockman, OpenAI’s president, showed.

The demo showed it could take a photo of a hand-drawn mock-up for a simple website and create a real one.

Be My Eyes, an app for visually impaired people, will offer a virtual volunteer tool powered by GPT-4.

WHAT ARE THE LIMITATIONS OF GPT-4?

According to OpenAI, GPT-4 has limitations similar to those of its prior versions and is “less capable than humans in many real-world scenarios”.

Inaccurate responses known as “hallucinations” have been a challenge for many AI programs, including GPT-4.

OpenAI said GPT-4 can rival human propagandists in many domains, especially when teamed up with a human editor.

It cited an example in which GPT-4 produced seemingly plausible suggestions when asked how to get two parties to disagree with each other.

OpenAI Chief Executive Officer Sam Altman said GPT-4 was “most capable and aligned” with human values and intent, though “it is still flawed.”

GPT-4 generally lacks knowledge of events that occurred after September 2021, when the vast majority of its data was cut off. It also does not learn from experience.

WHO HAS ACCESS TO GPT-4?

While GPT-4 can process both text and image inputs, only the text-input feature is available so far, to ChatGPT Plus subscribers and, via a waitlist, to software developers; the image-input capability is not yet publicly available.

The subscription plan, which offers faster response time and priority access to new features and improvements, was launched in February and costs $20 per month.

GPT-4 powers Microsoft’s Bing AI chatbot and some features on language learning platform Duolingo’s subscription tier.


(Reuters)
