
General Studies 3 >> Science & Technology




AI powerhouse OpenAI announced GPT-4 on Tuesday, the next big update to the technology that powers ChatGPT and Microsoft Bing, the search engine using the tech. GPT-4 is reportedly bigger, faster, and more accurate than ChatGPT, so much so that it even clears several top examinations with flying colours, such as the Uniform Bar Exam for those wanting to practise as lawyers in the US.
2. What is GPT-4?
  • GPT-4 is a large multimodal model created by OpenAI and announced on March 14, 2023
  • Multimodal models can encompass more than just text – GPT-4 also accepts images as input. Meanwhile, GPT-3 and GPT-3.5 only operated in one modality, text, meaning users could only ask questions by typing them out
  • Aside from the fresh ability to process images, OpenAI says that GPT-4 also “exhibits human-level performance on various professional and academic benchmarks.”
  • The language model can pass a simulated bar exam with a score around the top 10 per cent of test takers and can solve difficult problems with greater accuracy thanks to its broader general knowledge and problem-solving abilities
  • GPT-4 is also capable of handling over 25,000 words of text, opening up a greater number of use cases that now also include long-form content creation, document search and analysis, and extended conversations
3. How is GPT-4 different from GPT-3?
The major differences are the following:
3.1. GPT-4 can see images:
  • The most noticeable change to GPT-4 is that it’s multimodal, allowing it to understand more than one modality of information
  •  GPT-3 and ChatGPT’s GPT-3.5 were limited to textual input and output, meaning they could only read and write
  • However, GPT-4 can be fed images and asked to output information accordingly
  • It can be compared to Google Lens, but Lens only searches for information related to an image
  • GPT-4 is a lot more advanced in that it understands an image and analyses it
  • An example provided by OpenAI showed the language model explaining the joke in an image of an absurdly large iPhone connector
  • The only catch is that image inputs are still a research preview and are not publicly available
3.2. GPT-4 is harder to trick
  • One of the biggest drawbacks of generative models like ChatGPT and Bing is their propensity to occasionally go off the rails, generating responses that raise eyebrows or, worse, downright alarm people
  • They can also get facts mixed up and produce misinformation
  • OpenAI mentioned that it has spent six months training GPT-4 using lessons from its "adversarial testing program" as well as ChatGPT, resulting in the company's "best-ever results on factuality, steerability, and refusing to go outside of guardrails"
3.3. GPT-4 can process a lot at one time
  • Large Language Models (LLMs) may have billions of parameters and be trained on vast amounts of data, but there are limits to how much information they can process in a single conversation
  • ChatGPT’s GPT-3.5 model could handle 4,096 tokens, or roughly 3,000 words, but GPT-4 pumps those numbers up to 32,768 tokens, or roughly 25,000 words
  • This increase means that where ChatGPT started to lose track of things after a few thousand words, GPT-4 can maintain its integrity over far lengthier conversations
  • It can also process lengthy documents and generate long-form content – something that was a lot more limited on GPT-3.5
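The token-to-word arithmetic above can be sketched in a few lines. The 0.75 words-per-token ratio used here is a commonly cited rule of thumb for English text, not a figure from OpenAI; actual ratios vary with the text and the tokenizer:

```python
# Rough sketch of converting a model's token limit into a word budget.
# Assumption: one English token averages about 0.75 words (rule of thumb).
WORDS_PER_TOKEN = 0.75

def approx_word_budget(max_tokens: int) -> int:
    """Convert a context-window size in tokens to an approximate word count."""
    return int(max_tokens * WORDS_PER_TOKEN)

# GPT-3.5's 4,096-token window vs GPT-4's 32,768-token window
print(approx_word_budget(4096))    # roughly 3,000 words
print(approx_word_budget(32768))   # roughly 25,000 words
```

Estimates like this are useful when deciding whether a document will fit into a single conversation with the model.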
3.4. GPT-4's accuracy has been improved
  • OpenAI admits that GPT-4 has limitations similar to previous versions – it’s still not fully reliable and makes reasoning errors
  • However, “GPT-4 significantly reduces hallucinations relative to previous models” and scores 40 per cent higher than GPT-3.5 on factuality evaluations
  • It will be a lot harder to trick GPT-4 into producing undesirable outputs such as hate speech and misinformation
3.5. GPT-4 is better at understanding languages other than English
  • Machine learning data is mostly in English, as is most of the information on the internet today, so training LLMs in other languages can be challenging
  • But GPT-4 is more multilingual, and OpenAI has demonstrated that it outperforms GPT-3.5 and other LLMs by accurately answering thousands of multiple-choice questions across 26 languages
  • It obviously handles English best with an 85.5 per cent accuracy, but Indian languages like Telugu aren’t too far behind either, at 71.4 per cent
  • What this means is that users will be able to use chatbots based on GPT-4 to produce outputs with greater clarity and higher accuracy in their native languages
4. Way Forward
GPT-4 has already been integrated into products like Duolingo, Stripe, and Khan Academy for varying purposes
While it’s yet to be made available to everyone for free, a $20-per-month ChatGPT Plus subscription can fetch you immediate access. The free tier of ChatGPT, meanwhile, continues to be based on GPT-3.5
However, if you don’t wish to pay, there’s an ‘unofficial’ way to begin using GPT-4 immediately. Microsoft has confirmed that the new Bing search experience now runs on GPT-4, and you can access it right now
Meanwhile, developers will gain access to GPT-4 through its API. A waitlist has been announced for API access, and it will begin accepting users later this month
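As a minimal sketch of what API access involves, the snippet below builds a request body in the shape of OpenAI's Chat Completions API. Actually sending it requires an API key and granted access, so the example only constructs and prints the payload; the system-prompt wording is an illustrative placeholder:

```python
import json

def build_chat_request(user_message: str, model: str = "gpt-4") -> dict:
    """Assemble the JSON body a developer would POST to the Chat Completions
    endpoint once granted GPT-4 API access. No network call is made here."""
    return {
        "model": model,
        "messages": [
            # The optional system message steers the assistant's behaviour
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarise what is new in GPT-4.")
print(json.dumps(payload, indent=2))
```

The same message-list structure is what distinguishes the chat-style API from older single-prompt completion endpoints.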
