
Will GPT-4 Soon Revolutionise Text Automation with Large Language Models?


Anastasia Linnik

Chief Artificial Intelligence Officer, Retresco


The introduction of OpenAI’s GPT-3 in spring 2020 disrupted the world of text automation with large language models. The huge performance leap from GPT-2 surprised everyone familiar with the topic, and it soon became clear that GPT-3’s text automation abilities were unlike those of any previous large language model.

The fact that OpenAI stayed very discreet about the successor, GPT-4, left many with high expectations about what would come next. And it looks like they won’t be disappointed, as recent hints at the release date and the broadened abilities of the new model suggest. In fact, the new model might appear sometime between the end of 2022 and the beginning of 2023.

Photo by Milad Fakurian: a 3D rendering of a brain.

What might text automation with GPT-4 look like?

When OpenAI released GPT-3, it was the first large language model to offer text automation capabilities on a scale that previous generators could not match. The marketing promise of the end-to-end approach is that any type of content can be created "on demand" and without setup. The model only needs an instruction and roughly structured data: through deep learning, the algorithm "reads" freely available internet content and acquires the knowledge necessary to "spit out" a text.
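To make the end-to-end idea concrete, here is a minimal sketch of how such a call could look, assuming the OpenAI Python client from the GPT-3 era; the model name, instruction, and data are placeholders chosen for illustration, not a recommendation.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: key supplied directly or via environment

# Roughly structured data plus a plain-language instruction -- no template setup.
match_data = {"home": "FC Example", "away": "SC Sample", "score": "2:1"}

prompt = (
    "Write a short match report.\n"
    f"Data: {match_data}\n"
    "Report:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumption: a GPT-3 family model
    prompt=prompt,
    max_tokens=120,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```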

Now that information on the successor, GPT-4, has leaked, people have started to wonder whether GPT-3 might be only a fraction of the larger models to come for text automation. Several hints point in this direction. For example, while GPT-3 currently has 175 billion parameters, roughly 10 times more than any of its market peers, AI company Cerebras’ CEO Andrew Feldman said, "From talking to OpenAI, GPT-4 will be about 100 trillion parameters". This would indeed mean a huge qualitative leap between the two models.

But there is more.

Even though the people who have apparently been beta-testing GPT-4 since August had to sign an NDA, some more detailed descriptions of what GPT-4 might look like have surfaced. The reliability of these sources cannot be verified, but they have nonetheless sparked a lot of excitement among experts, prospects, and users. Three features in particular were highlighted:

  • OpenAI will probably use Whisper to derive more training data from videos (see the sketch after this list).
  • GPT-4 is expected to be multimodal, accepting audio, text, image and even video input.
  • GPT-4 will probably set a new SOTA (state-of-the-art) level in many tasks.
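
A minimal sketch of the Whisper rumour, assuming the open-source whisper package and ffmpeg are installed; the file names are placeholders.

```python
# Deriving text training data from video/audio with Whisper.
import whisper

model = whisper.load_model("base")        # smaller checkpoints trade accuracy for speed
result = model.transcribe("lecture.mp4")  # Whisper extracts the audio track via ffmpeg
transcript = result["text"]

with open("lecture_transcript.txt", "w") as f:
    f.write(transcript)
```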

If this turns out to be true, it is of course extremely exciting news. But we will have to wait a couple of months to see whether the new possibilities of text automation with GPT-4 are as promising as they sound.

Why GPT-4 is still not the all-round solution for text automation

Even before the rise of end-to-end text automation, users were "spoilt for choice": on the one hand, a data-based approach that requires an initial setup but subsequently "rewards" users with full automation and flawless text output; on the other, creative solutions based on large language models that require little preparatory work but generate error-prone texts during operation and therefore need manual editing. Not even GPT-4 will be able to solve this problem.

For this reason, Retresco brought together the data-based approach to text automation with the advantages of large language models in a hybrid Natural Language Generation solution called "Hybrid NLG". The new assistance system combines text suggestions generated by a large language model with the data-based text models of the content automation platform textengine.io. The central advantage is that all relevant text models for automated content generation can be set up significantly faster, while a wide variety of text types is supported.
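The following is a purely illustrative sketch of the hybrid idea, not Retresco’s actual implementation or the textengine.io API: a (stubbed) language model proposes a phrasing with named slots, and a data-based templating step fills those slots strictly from verified structured data, so the facts in the final text never depend on the model.

```python
# Illustrative only: all function names and data are hypothetical.
from string import Template

def llm_suggest_template(instruction: str) -> str:
    """Stand-in for a large-language-model call that proposes a
    phrasing with named slots instead of concrete facts."""
    return "$home beat $away $score in front of $crowd fans."

def render_from_data(template_text: str, data: dict) -> str:
    """Data-based step: fill the slots strictly from verified data."""
    return Template(template_text).substitute(data)

data = {"home": "FC Example", "away": "SC Sample", "score": "2:1", "crowd": "18,500"}
suggestion = llm_suggest_template("Suggest a one-sentence match report template.")
print(render_from_data(suggestion, data))
```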

You can find further information about Hybrid NLG here: https://www.retresco.com/hybrid-nlg-gpt-content-automation
