
14 Mar, 2023 · 2 min read

GPT-4 exhibits human-level performance on various professional and academic benchmarks.

OpenAI has released GPT-4, the latest milestone in its effort to scale up deep learning. The model can now work with different types of inputs — not only text but images too. According to the accompanying research paper, the system is also capable of outperforming humans on various academic benchmarks.

You can now ask GPT-4 for context about any image: it understands what the image shows and answers accordingly.

For example, GPT-4 passes a simulated bar exam with a score around the top 10% of test takers, whereas GPT-3.5 scored around the bottom 10%. OpenAI says researchers spent six months iteratively aligning GPT-4 using lessons from its adversarial testing program as well as from ChatGPT, yielding its best-ever results on factuality, steerability, and refusing to step outside guardrails.

OpenAI is releasing GPT-4's text input capability via ChatGPT, and API access will be available through a waitlist. Image input is, for now, under internal testing with a company partner. Compared with GPT-3.5, GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions in complex tasks — though in casual conversation the distinction can be subtle.
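Once off the waitlist, using GPT-4 over the API looks much like using GPT-3.5. Below is a minimal sketch assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the system prompt and temperature are illustrative choices, not part of the announcement. The payload is assembled separately so it can be inspected without sending a request:

```python
# Sketch: requesting a GPT-4 chat completion via the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the request is only sent when a key is actually present.
import os


def build_gpt4_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload targeting the GPT-4 model."""
    return {
        "model": "gpt-4",
        "messages": [
            # Illustrative system prompt — adjust to your use case.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }


payload = build_gpt4_request("Summarize the GPT-4 announcement in one sentence.")

if os.environ.get("OPENAI_API_KEY"):
    import openai  # imported lazily; only needed for the actual call

    response = openai.ChatCompletion.create(**payload)
    print(response.choices[0].message.content)
```

Swapping `"gpt-4"` for `"gpt-3.5-turbo"` in the payload is all it takes to compare the two models on the same prompt.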

To pin down the difference between the two models, researchers tested them on a variety of benchmarks, including simulated exams originally designed for humans. GPT-4 was not trained for these exams, yet it passed many of them.

GPT-4 still has limitations similar to those of earlier GPT models: it is not fully reliable, and it sometimes "hallucinates" facts and makes reasoning errors. At the same time, OpenAI says the new model hallucinates significantly less than its predecessors — for example, GPT-4 scores 40% higher than the latest GPT-3.5 on OpenAI's internal adversarial factuality evaluations.

The new model is also more accurate across many languages, not only English. GPT-4 substantially outperforms GPT-3.5, including in low-resource languages such as Latvian, Welsh, and Swahili.

ChatGPT Plus subscribers will get GPT-4 access on chat.openai.com, with a usage cap. OpenAI said it will adjust the exact cap based on demand and real-world system performance, and may introduce a new subscription tier for higher-volume GPT-4 usage.