GPT-4 Released: The Most Advanced Multimodal AI Ever Created!

OpenAI has released GPT-4, a powerful new AI model that can understand both images and text. The company calls it “the latest milestone in its work to scale up deep learning.”

OpenAI’s paying users can access GPT-4 right now through ChatGPT Plus, and developers can sign up for a waitlist to access the API.

As it turns out, GPT-4 has been hiding in plain sight. Microsoft confirmed today that its Bing Chat chatbot, built in partnership with OpenAI, runs on GPT-4. Other early adopters include Stripe, which uses GPT-4 to scan business websites and deliver a summary to customer support staff, and Duolingo, which built GPT-4 into a new subscription tier for learning languages.

GPT-4 Has Been Making Headlines:

The First App From GPT-4 Is a “Virtual Volunteer” for People Who Can’t See

OpenAI says that GPT-4 can accept both images and text as input, an improvement over its predecessor, GPT-3.5, which only accepted text, and that it performs at “human level” on a number of professional and academic benchmarks. For example, GPT-4 scores in the top 10% of test takers on a simulated bar exam.

OpenAI spent six months iteratively aligning GPT-4, drawing on lessons from an adversarial testing program as well as ChatGPT. The company says this produced its “best-ever results” on factuality, steerability and staying within guardrails.

“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle,” OpenAI wrote in a blog post announcing GPT-4. “The difference comes out when the complexity of the task reaches a sufficient threshold — GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”

One of the most interesting things about GPT-4 is that it can understand both text and images. GPT-4 can caption images and even interpret them. For example, shown a photo of an iPhone plugged into a charger, it can identify the Lightning Cable adapter in the picture.

The ability to understand images isn’t yet available to all OpenAI customers. For now, OpenAI is only testing it with one partner, Be My Eyes. The new Virtual Volunteer feature on Be My Eyes is powered by GPT-4 and can answer questions about images that are sent to it.

Announcing GPT-4, a large multimodal model, with our best-ever results on capabilities and alignment: openai.com/product/gpt-4

In a Blog Post, Be My Eyes Explains How It Works

“For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it, but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them.”
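Since the image input API isn’t publicly available yet, the exact request format is an assumption. As a rough sketch, a multimodal request like the refrigerator example above might look something like the following in Python, assuming a chat-style endpoint that accepts an image URL alongside text (the model name and message layout here are illustrative, not OpenAI’s published interface):

```python
# Hedged sketch only: image input is still in limited testing with Be My Eyes,
# so the model name and message format below are assumptions, not a published API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed name for a vision-capable variant
    messages=[
        {
            "role": "user",
            # Assumed format: mixed content parts pairing a question with an image URL.
            "content": [
                {"type": "text", "text": "What could I cook with the ingredients in this fridge?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message["content"])
```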

The steerability tooling mentioned above could be a more important improvement. With GPT-4, OpenAI is introducing a new API capability, “system” messages, that lets developers give specific instructions about style and task. System messages, which will also come to ChatGPT in the future, are essentially instructions that set the tone and establish boundaries for the AI’s subsequent interactions with the user.

For Example, a System Message Could Read:

“You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.”
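Mechanically, a system message is just the first entry in the list of messages sent to the chat API. As a minimal sketch (the model name and prompts are illustrative, and API access still depends on the GPT-4 waitlist), the Socratic-tutor instruction above could be wired up like this:

```python
# Minimal sketch of steering GPT-4 with a "system" message via the chat API.
# Model name and prompt text are illustrative; GPT-4 API access requires the waitlist.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets the tone and boundaries for the whole conversation.
        {
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but always ask just the "
                "right question to help them learn to think for themselves."
            ),
        },
        # The user message is the request the model actually responds to.
        {"role": "user", "content": "How do I solve 3x + 5 = 14?"},
    ],
)

print(response.choices[0].message["content"])
```

Every later user turn is interpreted through that system instruction, which is why it is well suited to setting a persistent tone and limits.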

But OpenAI acknowledges that GPT-4 isn’t perfect, even with system messages and the other changes. It still “hallucinates” facts and makes reasoning errors, sometimes with great confidence. In one example OpenAI cited, GPT-4 described Elvis Presley as the “son of an actor,” an obvious mistake.

“GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience,” OpenAI wrote. “It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”

OpenAI does say that GPT-4 improved in some ways. For example, it is now more likely to refuse requests for instructions on how to make dangerous chemicals. The company says that, compared to GPT-3.5, GPT-4 is 82% less likely to respond to requests for “disallowed” content and 29% more likely to respond to sensitive requests, such as those for medical advice or information about self-harm, in line with OpenAI’s policies.
