
About OpenAI GPT-3 Language Model

  • Neelam Tyagi
  • Jul 23, 2020

GPT-3 has entered the artificial intelligence race as, debatably, the most dominant language model yet launched by OpenAI.

 

Recently, the third-generation Generative Pretrained Transformer, better known as GPT-3, seized the spotlight across the internet. The web is buzzing about GPT-3, OpenAI's novel language model.

 

“GPT-3 has the potential to advance both the benign and the harmful applications of language models.”

 

The main focus of this blog is how GPT-3 has been attracting so much attention with its strikingly impressive capabilities.

 

To appreciate just how remarkable this is, consider: "From being the largest language model trained to date, to rivaling state-of-the-art models on diverse tasks such as question answering and translation, GPT-3 has set a brand-new benchmark for Natural Language Processing."

 

GPT-3: A Brief Overview

 

GPT-3 is the most powerful language model yet built, largely because of its size: the model has a whopping 175 billion parameters, compared with the 1.5 billion parameters of its predecessor, OpenAI's GPT-2.

 

Trained on a dataset of roughly half a trillion words, GPT-3 can recognize the subtle linguistic patterns embedded in that text. From such a huge dataset, it can extract nuanced inferences and hidden patterns that are far beyond what the human mind could recognize on its own.

 

For a short description of GPT-3: "It is a language model that employs machine learning to translate text, answer questions, and compose text predictively. By examining an array of words, text, or other relevant data, GPT-3 builds on these examples to generate a completely original output, such as a paragraph or an image."

 

Its capabilities are wide-ranging: when briefed by a human, it can compose original fiction, write coherent business memos, and even generate working code.

 

(Eager to learn its detailed specifications? Read our previous blog "What is the OpenAI GPT-3?". It does not end there; we have also published a full blog on its predecessor, GPT-2: "OpenAI's GPT-2: AI that is too Dangerous to Handle.")

 

Statements Made About GPT-3

 

Here is a sampling of the reactions it received on arrival:

 

  1. OpenAI CEO Sam Altman tweeted, "The GPT-3 hype is way too much... AI is going to change the world, but GPT-3 is just a very early glimpse."

  2. In another tweet, he wrote, "Theory: a significant part of what resonates about the GPT-3 API is that you program it with English, so any English speaker can use it, and it heavily rewards creativity. For non-programmers, it's like experiencing the magic of programming for the first time."

  3. The OpenAI research team acknowledged: "GPT-3 samples can lose coherence across sufficiently long passages, contradict themselves, and occasionally include non-sequitur sentences or paragraphs."

  4. In recent news, Sharif Shameem, the founder of debuild.co, a startup that helps developers build apps with minimal effort, has used GPT-3 to generate code. He tweeted, "This is mind-blowing. With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you." (A sketch of this style of prompting appears just after this list.)
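
For context on how a demo like Shameem's typically works: GPT-3 is primed with one or more description-to-code examples and then completes the pattern for a new description. Below is a hypothetical Python sketch of this few-shot prompting style, using the `openai` package as it existed at launch; the prompt text, the example JSX, and the parameter values are illustrative assumptions, not Shameem's actual setup.

```python
# Hypothetical few-shot prompt in the spirit of the layout-generator demo.
# The example pairs below are illustrative; they are NOT Shameem's prompt.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; access was invite-only at launch

prompt = (
    "description: a red button that says 'Stop'\n"
    "code: <button style={{color: 'white', backgroundColor: 'red'}}>Stop</button>\n"
    "description: a heading that welcomes the user\n"
    "code:"
)

response = openai.Completion.create(
    engine="davinci",          # the largest GPT-3 engine
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["\ndescription:"],   # stop before the model invents a new example
)
print(response.choices[0].text.strip())  # expected: JSX for the heading
```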


 

Examples Reflecting How GPT-3 Works

 

The original paper was published in May 2020; afterward, OpenAI gave selected members access to the model through an API. Since then, examples of text generated by GPT-3 have spread across every platform, including social media.

 

It is easy to ask GPT-3 questions, but at its core it is a very sophisticated text predictor: given some text as input, the model produces its best prediction of what text should come next. It can repeat this process, taking the original input together with the newly generated text, treating that as a new input, and generating subsequent text until it reaches a length limit.
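
To make that loop concrete, here is a minimal Python sketch of the feedback process. The `predict_next_token` function is a hypothetical stand-in; GPT-3's real predictor is only reachable through OpenAI's API.

```python
# Minimal sketch of the autoregressive generation loop described above.

def predict_next_token(text: str) -> str:
    """Hypothetical stand-in: return the model's best guess for what comes next."""
    return " ..."  # placeholder; a real language model would predict here

def generate(prompt: str, max_length: int = 200) -> str:
    text = prompt
    while len(text) < max_length:
        # Feed everything generated so far back in as the new input.
        text += predict_next_token(text)
    return text

print(generate("Once upon a time"))
```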

 

Moreover, GPT-3 has devoured virtually every sort of text available on the internet. Its output is whatever language the model determines to be a statistically plausible response to the given input, based on everything humans have previously published online.

 

(To understand more deeply how OpenAI's GPT-3 API works, we refer you to OpenAI's blog.)
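
For illustration, a simple call to the API at launch looked roughly like the sketch below, assuming the `openai` Python package and an invite-granted API key; the prompt and parameter values are placeholders, not an official example.

```python
# Minimal sketch of querying GPT-3 through OpenAI's API as it existed at launch.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine
    prompt="Q: Who wrote Pride and Prejudice?\nA:",
    max_tokens=16,      # cap the length of the completion
    temperature=0,      # favor the most likely answer for factual questions
)
print(response.choices[0].text.strip())
```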

 

The images below demonstrate examples of a GPT-3 Turing test (that is, conversations exchanged with GPT-3).

 

  1. Based on common sense

     [Image: a sample exchange of common-sense questions with GPT-3]


  2. Trivia questions

     [Image: a sample exchange showing GPT-3 answering trivia questions]


  3. Based on logic

     [Image: GPT-3's input and output on logic and elementary mathematics questions]



More details of this Turing test can be found in Kevin Lacker's blog. However, the model also has its flaws: put simply, GPT-3 lacks any long-term sense of meaning, understanding, and purpose, which limits its ability to produce useful language output in diverse contexts.

 

This does not mean that GPT-3 is not a useful tool or that it won't power many useful applications; it just means that GPT-3 is unreliable and prone to basic errors that an average human would never make.


 

What Can Go Wrong?

 

Along with the novel approach, the OpenAI researchers also discussed its adverse effects in their paper. As we have seen, GPT-3 generates text of tremendous quality, but that very strength makes it difficult to tell synthetic text apart from human-written text. The authors therefore warn that the language model could be put to ill use.

 

They also acknowledge that malicious uses of GPT-3 are hard to anticipate, because the model can be repurposed in numerous contexts and for objectives other than those the researchers expected. Several potential misuses are listed:

 

  • Spam & phishing(cybercrime), 

  • Deceitful academic article writing, 

  • Violation of legal and governmental means and

  • social construction pretexting

 

Since GPT-3 has ingested nearly everything ever published on the internet, including virtually every written word, the researchers also took this as a chance to examine how racial sentiment and other biases manifest throughout a conversation.

 

 

Conclusion


GPT-3 is undoubtedly an impressive technical achievement. It has advanced the state of the art in natural language processing, and its ingenious gift for generating language in a wide variety of styles and formats will unlock compelling applications for entrepreneurs and marketers.

 

“GPT-3 is made up of 175 billion parameters and can execute a broad range of NLP tasks without requiring fine-tuning for a particular task. It can perform machine translation, question answering, basic arithmetic, and poetry writing, as well as reading-comprehension tasks.”

 

Undeniably, GPT-3 is remarkably good at some tasks while remaining distinctly subhuman in other domains. With a better understanding of its strengths and flaws, researchers will be better able to put modern language models to practical use.
