AI language models revolutionize the world of digital content

Director Business Development
Valtech Germany

June 29, 2022

If you occasionally deal with digital innovations, with artificial intelligence or machine learning, or even just with digital content, you will increasingly come across terms like “AI language models”, “NLG”, “OpenAI”, “GPT-3” or “GPT-X”.

We all sense that something really big is being talked about. But the same is true of quantum computers, and probably only a few of the people talking or writing about them actually understand them.

At this point, I’d like to help you grasp what the “big thing” about AI language models is and why you can’t start engaging with them soon enough. At least if the human-machine interface or digital content is in any way important to you.

I’ll begin with a completely unscientific, fast-forward derivation of the terms above. Even if you are in a hurry, I recommend at least skimming it.

Artificial Intelligence and Natural Language Generation

1. AI, or Artificial Intelligence, does not exist. No software in the world has judgment, understanding or even reason, and therefore none can be intelligent. Nevertheless, the two abbreviations AI (and KI in Germany) accompany us constantly, because they have described the entire research field for many years. Furthermore, it has become common to call adaptive algorithms AI.

2. In the research on and search for artificial intelligence, it has been found that “machines”, or rather algorithms, can learn in a certain way. When analyzing data, they arrive at increasingly better results through pattern recognition and trial and error, improving themselves in the process. This is called machine learning, and it is a subfield of artificial intelligence (AI).

3. Machine learning (often abbreviated ML) already works very well, and it is in the nature of digital processes that they can be scaled almost without limit. The learning loops can be run millions of times within seconds, provided the computer, data center or network of data centers has enough power.

4. The Internet consists of trillions of texts that can easily be searched and analyzed. The AI (i.e. the adaptive algorithm) thereby learns that the German word “Golf”, for example, can be a coastal formation (English “gulf”), a sport or a car (sold in the US as the “Rabbit”). The context, e.g. a sports portal, tells the AI that it is most likely about the sport. This is machine learning based on text, and it already works very well.
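To make the idea tangible, here is a deliberately tiny Python sketch of context-based disambiguation. Real language models learn such associations statistically from billions of documents; the “learned” context words below are hand-made for illustration only.

```python
# Toy illustration of context-based disambiguation: which sense of the
# German word "Golf" is meant? Real language models learn these
# associations statistically from billions of documents; here the
# "learned" context words are simply hard-coded.

SENSE_CONTEXTS = {
    "sport": {"turnier", "schlaeger", "abschlag", "platz", "handicap"},
    "car":   {"vw", "volkswagen", "motor", "ps", "kombi"},
    "coast": {"meer", "kueste", "strom", "mexiko", "wasser"},
}

def disambiguate(sentence: str) -> str:
    """Pick the sense whose context words overlap most with the sentence."""
    words = set(sentence.lower().split())
    scores = {sense: len(words & ctx) for sense, ctx in SENSE_CONTEXTS.items()}
    return max(scores, key=scores.get)

print(disambiguate("Das Turnier auf dem Platz war ein Handicap Duell"))  # -> sport
```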

5. The output of this ML process is natural-language text. This is because the AI has, almost incidentally, also learned how the result is most likely to be formulated. The phrase “most likely” is important here, because it makes clear that the AI arrives at its result via statistical probabilities.
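This statistical core can be illustrated in a few lines. The probabilities below are invented for demonstration; in a real model they come from training on huge text corpora, and the choice between always taking the most probable word and sampling from the distribution is one of the knobs that shape the output.

```python
import random

# Illustration of "most likely": a language model assigns a probability
# to every candidate next word and then picks (or samples) from that
# distribution. The numbers here are invented for demonstration.
next_word_probs = {
    "Chancellor": 0.62,
    "politician": 0.25,
    "doctor":     0.03,
    "banana":     0.0001,
}

# Greedy choice: take the single most probable continuation.
best = max(next_word_probs, key=next_word_probs.get)

# Sampling: occasionally pick a less likely word, which makes texts
# more varied -- and is one source of surprising outputs.
words, probs = zip(*next_word_probs.items())
sampled = random.choices(words, weights=probs, k=1)[0]

print(f"'Olaf Scholz is a German ...' -> greedy: {best}, sampled: {sampled}")
```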

6. So a text has been generated automatically. There is an abbreviation for this too: NLG, which stands for Natural Language Generation. There were and are various methods for NLG, as we will see below.

7. The AI thus automatically produces text or content. And it does this in a fraction of a second, drawing on an incredible number of sources (the complete content of Wikipedia is only a fraction of them). If you now take a huge data center, let such adaptive software analyze a near-endless number of texts and teach it how to link this information together so that it can answer questions on a topic or formulate entire texts, you have an AI language model.

8. Elon Musk, Peter Thiel & Friends wanted to do that (amongst other things) on a large scale and funded OpenAI.

9. OpenAI then developed the Generative Pre-trained Transformer 3, or GPT-3, and last year released it for commercial, paid use via an API.

10. AI language models in general and GPT-3 can suddenly do much more than was originally intended. So much, in fact, that mastering these technologies could become crucial in the coming years. That is why Europe and Germany have ambitious plans to position themselves with their own AI language model — working name GPT-X — so as not to give up part of their digital sovereignty.

An AI language model thus primarily generates texts. You could also say language or content because text is only the output format.

This brings us to our topic: automatic text generation, or automatic content generation.

Automatic Content Generation

If you’re in a hurry, you can skip the next two passages and scroll straight to “The 3rd generation” section, though for the sake of comprehension I wouldn’t recommend it.

Before we get into what GPT-3 can — and cannot — do, let’s take a look at the existing world of automatic text generation, which we can divide into three generations or evolutionary stages.

1st generation — template-based generation

2nd generation — algorithmic-grammatical generation

3rd generation — statistical or ML-based generation by an AI language model

The 1st generation — in fact still a workaround

The first generation of automatic text generation did not really generate complete texts at all; it assembled text templates, i.e. partial sentences written by humans, according to rules, modifying or adapting them where necessary (see the sketch below). It is easy to see that this involves a relatively large initial effort and that not everything can be done with it.
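A minimal sketch of how such 1st-generation, template-based generation works; the product fields, templates and wording are invented for illustration.

```python
# Minimal sketch of 1st-generation, template-based generation: humans
# write sentence fragments with placeholders, and rules pick and fill
# them. Product fields and wording are invented for illustration.
import random

TEMPLATES = [
    "The {manufacturer} {type} in {color} convinces with {feature}.",
    "With {feature}, the {color} {type} by {manufacturer} is a solid choice.",
]

def describe(product: dict) -> str:
    # Rule: pick a template variant at random for a bit of variance.
    return random.choice(TEMPLATES).format(**product)

product = {"manufacturer": "Acme", "type": "city bike",
           "color": "red", "feature": "an 8-speed hub gear"}
print(describe(product))
```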

But for certain scenarios, this effort was, and still is, worthwhile. For weather or sports reports, certain types of descriptions (e.g. product descriptions) or shorter news items, 1st-generation text generation was and is very effective. Large media companies in the U.S., for example, have been generating various types of news (sports, weather, traffic, stock market data) with such systems for years.

Basically, the spectrum of possible applications was rather limited. A “conversation” with Alexa or with the voice control in the car illustrates this quite well.

The 2nd generation — an important evolutionary step for automatic text generation

A Berlin startup¹ (see footnotes at the end) built up a great deal of know-how in computational linguistics and took a decisive step: it developed a purely algorithmic form of text generation that can generate high-quality texts from data and terms fully automatically — without using templates (!).

Simply put, 2txt partially transformed the German language into an algorithm. Templates were thus history, and this first real, rule-based language model was and is able to realize completely new use cases more or less out of the box (see also the following example). Every single text is generated in real time based on a programmed, general grammar as well as topic-specific word collections and ontologies.

Via a standardized interface, data can be transferred to the 2txt engine and the generated texts retrieved.
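2txt’s actual engine and interface are not public in detail, so the following toy sketch only illustrates the principle: sentences are assembled from grammatical roles and data at runtime rather than by filling fixed templates, and a runtime parameter can change the sentence structure itself.

```python
# Toy illustration of 2nd-generation, grammar-driven generation (NOT
# 2txt's actual engine or interface): instead of filling fixed templates,
# each sentence is assembled from grammatical roles at runtime, so
# structure and wording can vary freely with the input data.
import random

ADJECTIVES = {"red": ["red", "bright red"], "blue": ["blue", "deep blue"]}
VERBS = ["impresses with", "stands out thanks to", "offers"]

def realize(product: dict, emphasize_color: bool = False) -> str:
    if emphasize_color:
        # A runtime parameter (e.g. a filter the customer has set) changes
        # the structure of the sentence, not just a filled-in word.
        return (f"In {product['color']}, this {product['type']} "
                f"{random.choice(VERBS)} {product['feature']}.")
    subject = f"The {random.choice(ADJECTIVES[product['color']])} {product['type']}"
    return f"{subject} by {product['manufacturer']} {random.choice(VERBS)} {product['feature']}."

p = {"manufacturer": "Acme", "type": "city bike", "color": "red",
     "feature": "an 8-speed hub gear"}
print(realize(p))
print(realize(p, emphasize_color=True))
```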

An example will illustrate the technological leap from the 1st to the 2nd generation:

With template-based text generation, an online store would be able to generate, for example, 1000 product descriptions in seconds from 1000 product records (manufacturer, type, color, features, etc.).

The quality, uniqueness and variance of the texts would depend to a large extent on the level of investment in templates. It is obvious — the more data has to be processed and the more output has to be generated, the more complex and expensive the template approach becomes.

With the algorithmic text generation of the 2nd generation, it would be possible, almost without setup effort, to generate tens of thousands of different texts (e.g. for different purposes or for an individual customer approach) from the same 1000 product data records. Even the generation of longer texts, such as a complete landing page, could be automated.

Furthermore, it would be possible to include additional parameters, such as the filter criteria set by the customer, in each text and/or to weight the text differently as a result. A text would thus be created from known and previously unknown data. The effort of building and maintaining templates would be eliminated entirely.

Both methods could take language styles into account or have the texts automatically translated into various languages.

The 3rd generation — AI language models bring the big bang for automatic text generation

Neuroscience has only a rudimentary understanding of how the human brain works. And what you can’t see down to the smallest detail, you can’t recreate. But if we imagine this still largely unknown universe as a ball of wool, then machine learning has teased a first loose end of the thread out of that ball.

OpenAI (and by now many others) has started to pull quite powerfully on this loose end with GPT. Based on artificial neural networks and the machine learning outlined above, first GPT-2 and later GPT-3 emerged.

Large AI language models, such as GPT-3, can capture and analyze vast amounts of text data. They thereby recognize structures, connections and contexts and make weightings on a statistical basis.

So you can ask GPT-3, for example, “Who is Olaf Scholz?” and you will get roughly the answer that Olaf Scholz is a German politician and currently Chancellor, and so on. The AI can also pick up that Olaf Scholz is 170 cm tall and that his brother is a doctor. But such details are rarely mentioned in texts and are therefore underweighted.

OK, up to this point Google² can do that too.

However, GPT-3 can also complete a sentence that someone has started, in a meaningful way. Something like this:

Input: “Olaf Scholz and Angela Merkel…”

Output: “Olaf Scholz and Angela Merkel have both held the office of German Chancellor. Angela Merkel from November 22, 2005 to December 08, 2021 and Olaf Scholz from…”
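For the curious, this is roughly what such a completion looks like in code, using the OpenAI Python library as it was available at the time of writing; the model name and parameters are examples and may have changed since.

```python
# Sketch of a sentence completion with GPT-3 via the OpenAI Python
# library as of 2022 (pip install openai). Model names and the API
# surface have changed since then, so treat this as illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder for a real key

response = openai.Completion.create(
    model="text-davinci-002",   # a GPT-3 model available in 2022
    prompt="Olaf Scholz and Angela Merkel",
    max_tokens=60,              # limit the length of the continuation
    temperature=0.7,            # > 0 allows some variation in wording
)

print(response.choices[0].text)
```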

GPT-3 thus continues a topic and, if required, becomes an automatic copywriter that can write complete articles on a topic. This “creative act” alone is a huge development step and goes far beyond anything we have known before.

But that is not all. For example, if I ask GPT-3 about the largest Chinese companies, the AI first independently makes the question more specific (“…largest Chinese companies by sales…”) and then creates a ranking.

But the system can also enter into a dialog with me about Chinese companies, or create a table of revenue per employee at the largest Chinese companies. And if I like all of this, GPT-3 can also turn that content into the HTML code for a corresponding website. Because for AI language models, programming languages are just languages in text form.
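The same interface covers these format shifts; only the instruction changes. The prompt below is illustrative, not a tested output.

```python
# Same interface, different instruction: asking the model for structured
# output such as an HTML table. Illustrative prompt, not a tested result.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder for a real key

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=("Create an HTML table listing the three largest Chinese "
            "companies by revenue, with columns for name, industry "
            "and revenue."),
    max_tokens=300,
)

print(response.choices[0].text)  # the model returns HTML markup as text
```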

Before we get to the catch, here are some of the capabilities of large AI language models such as GPT-3:

• Simplify texts

• Summarize texts

• Translate texts

• Document

• Chat

• Create Q&A lists

• Generate keywords

• Describe images

• Create images from descriptions

• Create tables

• Analyze code

• Convert code (e.g. XML) into natural language

The catch

Further above, in the clarification of terms under point 5, I pointed out that AI language models work with statistics: the “AIs” arrive at their results via statistical probabilities.

Statistics-based algorithms, however, have neither ethical guardrails nor any sense of truth or of logical breaks within a text. It can therefore happen that an AI language model creates texts that massively contradict our values and expectations.

I don’t want to throw the baby out with the bathwater by citing spectacular examples. Suffice it to say that the results are occasionally not quite “housebroken” or sometimes slip into original but irritating fantasies.

The developers of the various AIs are working on corresponding improvements, filters, etc. However, the topic is very complex and it will certainly take a while until these problems are completely solved.

De facto, the unchecked further processing of AI-generated texts is therefore a risk. This makes them difficult to use in many areas, especially commercial ones. In my opinion, inserting a human acceptance stage would be the minimum needed to make the risk manageable (a minimal sketch follows below). However, it would significantly curb the potentially enormous efficiency gains.
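As a minimal sketch, such an acceptance stage can be as simple as a gate that blocks publication until a person approves; the console prompt here merely stands in for whatever review tool a real workflow would use.

```python
# Minimal sketch of a human acceptance stage: nothing generated gets
# published until a person has explicitly approved it. The console
# prompt stands in for whatever review UI a real workflow would use.

def human_review(generated_text: str) -> bool:
    print("--- generated draft ---")
    print(generated_text)
    answer = input("Publish this text? [y/n] ")
    return answer.strip().lower() == "y"

def publish(text: str) -> None:
    print("published:", text)

draft = "Some AI-generated article text ..."
if human_review(draft):
    publish(draft)
else:
    print("Draft rejected, back to generation or manual editing.")
```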

The solution

The 1st and 2nd generation solutions I outlined earlier can do much less than the AI language models, and the capability gap is widening at a rapid pace. Still, these “old” solutions have a big advantage: they are predictable and make virtually no factual errors. The language style can be specified in advance, as can the use of specific terminology.

In particular, the very flexible 2nd-generation solution can be combined with a current AI language model to create an extremely powerful and content-secure overall solution, as sketched below. The weaknesses and imponderables of the language model are tamed, making the approach commercially viable today.
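To be clear, the following is only my sketch of the principle, not the actual combined solution: a rule-based engine produces the factually reliable base text, the AI language model only rephrases it, and a simple check verifies that the hard facts survived.

```python
# Sketch of the hybrid principle only (not the actual combined solution):
# a rule-based engine produces a factually reliable base text, the AI
# language model merely rephrases it, and a check verifies that the
# hard facts survived the rephrasing.

FACTS = {"manufacturer": "Acme", "price": "499"}

def rule_based_text(facts: dict) -> str:
    # 2nd-generation step: predictable, content-secure output.
    return f"The {facts['manufacturer']} city bike costs {facts['price']} euros."

def llm_rephrase(text: str) -> str:
    # Placeholder for a GPT-3 call that rewrites the text more fluently.
    return "At 499 euros, the city bike from Acme is attractively priced."

def facts_preserved(text: str, facts: dict) -> bool:
    # Guard rail: every hard fact must still appear verbatim.
    return all(value in text for value in facts.values())

base = rule_based_text(FACTS)
polished = llm_rephrase(base)
final = polished if facts_preserved(polished, FACTS) else base  # fall back to the safe text
print(final)
```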

The results of the first implementations are fascinating. A new solution for the automated creation of digital content is out in the world, and soon there may be no way around it.

Normally, I would conclude this article with a series of super innovative use cases. But their disclosure will have to wait a little longer…

[1] — This is 2txt GmbH from Berlin, which, alongside other companies and individuals, is a founding member of the LEAM initiative (Large European AI Model). The LEAM initiative is driving the development of a large European AI language model. www.2txt.de

[2] — Google itself is very active in the field of AI research as well as AI language models and sees this technology as the basis for a future Google search in the medium term.
