AI as an emergent property?
Warning: this post was created over a year ago.

This text was automatically translated from Italian. If you appreciate our work and like reading it in your language, please consider a donation so we can continue doing it and improving it.

The Cassandra Crossing articles are released under the CC BY-SA 4.0 license | Cassandra Crossing is a column created by Marco Calamari under the "nom de plume" of Cassandra, born in 2005.

Great new article by Cassandra on Artificial Intelligences!

This article was written on April 14, 2023 by Cassandra

Cassandra Crossing 536/ AI as an emergent property?

There is a school of thought that believes the "spontaneous" evolution of linguistic models into "true" artificial intelligence is possible. What exactly are we talking about?

“Pre-trained Language Models could manifest Artificial Intelligence as an emergent property”.

It is an interesting idea that Cassandra had never come across before. She discovered, instead, that the hypothesis is, albeit within a rather small circle, the subject of philosophical and even scientific debate. Very interesting, and let's try to explain why.

Forgive your favorite prophetess, though, because a lot of background is needed. Let's hope that at least the 24 indomitable readers follow her without hesitation.

“Linguistic Model” is to be understood here in a very precise and restricted sense: a program capable of generating sequences of symbols, letters and words that respect the grammar, syntax, style and other characteristics of a natural language.

GPT-3, GPT-4 and ChatGPT are software that implement Linguistic Models, i.e. software able to generate natural-language output in response to natural-language input. Output that has all the characteristics of natural language, except semantics.

In all this, semantics, that is, the meaning of a sequence of words in relation to extralinguistic reality, plays no part at all: it is not considered, it is not handled.

Linguistic Models do not deal with this level of information “by design”; put simply, they do not “understand” and do not “know”.

If Linguistic Models were built in deterministic form, that is, as “algorithms”, we could know exactly what they can and cannot do: for example, whether they can summarize a text, whether they can translate one language into another, and whether they can understand the meaning of a question and always provide the correct answer.

But that is not the case. Research on linguistic models met essentially with nothing but failures until it turned to “pre-trained” linguistic models: software very expensive in terms of the resources consumed to create it, built not from deterministic algorithms but by learning a language from scratch, analyzing large quantities of natural-language examples with “statistical” algorithms in the broad sense.

A process equivalent to training a traditional Neural Network.

And today this has produced very successful programs. 

After remaining stuck at Eliza's “intellectual” level since the 1960s, the last 10 years have seen great progress in building pre-trained Generative Linguistic Models, and today a Linguistic Model like GPT can be “sold” for any use, as it can convincingly simulate the answers to any question. But it is exactly as intelligent as Eliza, which is to say not at all. Let us repeat it once more: it is not even remotely intelligent; it knows and understands nothing. It is not made for that.
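For readers who never met Eliza: her conversational trick was nothing more than keyword matching and reflecting the user's own words back. A toy Python sketch of that technique (an illustration added here, not Weizenbaum's original script) makes the “zero intelligence” point concrete:

```python
import random
import re

# Eliza-style rules: a regex keyword trigger and canned replies that may
# reflect back the captured fragment. The program understands nothing.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would {0} really help you?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bmother|father\b", re.I),
     ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text):
    """Return a reply by matching the first rule, else a stock phrase."""
    for pattern, answers in RULES:
        m = pattern.search(text)
        if m:
            return random.choice(answers).format(*m.groups())
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))
```

A few dozen such rules were enough, in 1966, to convince some users they were talking to a person, which is exactly the kind of convincing simulation, without understanding, the column is describing.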

And we come to the second part of this very long introduction. 

What is an “emergent property”? It is a very common concept in physics, where it appears often, and it is explained quite well on Wikipedia:

In complexity theory, emergent behavior is the situation in which a complex system exhibits well-definable macroscopic properties that are difficult to predict from the laws governing its components taken individually, arising instead from the linear and non-linear interactions among the components themselves. Although it is most easily found in systems of living organisms, of social individuals, or in economic systems, contrary to a belief widespread today, emergence also manifests itself in much more elementary contexts, such as particle physics and atomic physics. It can also be defined as the process of formation of complex patterns starting from simpler rules; an example can be seen in John Conway's Game of Life, in which a few simple rules applied to a few basic individuals can lead to very complex evolutions.
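The Game of Life mentioned in the quote shows how little machinery emergence requires; its entire rule set fits in a few lines. A minimal Python sketch (illustrative, not part of the original article), representing the board as a set of live cells:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) live cells."""
    # For every cell adjacent to a live cell, count its live neighbours.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 neighbours,
    # or 2 neighbours and it is already alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker", a three-cell oscillator: it flips between a horizontal
# and a vertical bar on every generation.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker) == {(1, 0), (1, 1), (1, 2)})        # True
print(step(step(blinker)) == blinker)                   # True: period 2
```

From these two rules alone come gliders, oscillators and even self-reproducing patterns, none of which is stated anywhere in the rules: that is the sense of “emergent” being invoked.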

A simple example is water: a homogeneous, transparent and colorless liquid or vapour, made up of simple and identical molecules, which, if cooled below zero, crystallizes into a solid, forming crystals with beautiful hexagonal geometries. The geometry of ice and snow crystals is a simple example of an emergent property, as the collective behavior of simple particles in response to an external condition.

Another example is helium cooled to superfluidity, which climbs the walls of its container and spontaneously escapes it, “falling” down onto the floor, manifesting a tunnel effect at a macroscopic level.

Similarly, some philosophers and scientists who deal professionally with pre-trained Generative Linguistic Models such as GPT hypothesize that from the complex and utterly idiotic behavior of Linguistic Models, sooner or later, in ways and for reasons entirely unknown, a true “intelligence” may arise as an “emergent property” of new Linguistic Models that are sufficiently larger, better trained, more complex.

In short, they believe it is possible that a GPT-9000 will begin to respond in an informed and exact manner to questions posed in natural language, instead of providing ravings and hallucinations with some rare shred of accidental accuracy, which is the only thing that current Linguistic Models such as GPT know how to do.

Something that, when asked:

“How many of the first 1000 digits of pi are different from 1?”

answered me with great confidence:

The first 1000 digits of pi are composed of numbers other than the number 1 most of the time. In fact, the digit 1 only appears about 166 times in the first 1000 digits of pi. This means that digits other than the number 1 appear on the order of 834 times.

but not without making a big mistake (the “1”s are exactly 116), a fact which is plainly displayed on a Wikipedia page, fully ingested by the model but, as we were saying, neither memorized nor understood.
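Incidentally, the count is easy to verify without trusting either the chatbot or Wikipedia: a thousand decimals of pi can be computed with a few lines of integer arithmetic. A minimal sketch using Machin's formula (an illustration added here, not from the original article):

```python
def arctan_inv(x, one):
    """one * arctan(1/x), via the Taylor series, in integer arithmetic."""
    power = one // x              # one * (1/x)**1
    total = power
    n = 1
    while power:
        power //= x * x           # next odd power of 1/x
        n += 2
        term = power // n
        total += -term if (n // 2) % 2 else term  # alternating signs
    return total

def pi_decimals(n):
    """First n decimal places of pi, as a string, via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    one = 10 ** (n + 10)          # 10 guard digits against truncation error
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return str(pi)[1:n + 1]       # drop the leading "3", keep n decimals

print(pi_decimals(1000).count("1"))   # 116, the figure cited above
```

Ten lines of arithmetic answer the question exactly; the Linguistic Model, having ingested the digits themselves, cannot.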

And let it never be said that the Moon is made of green cheese: it is just that, by chance, all the LEMs and probes to date have landed in places where it was so aged that it looked like rock.

Jokes aside, Cassandra, while candidly admitting that she is certainly not an AI expert, even though she has read a great deal on the subject, including texts by Nobel laureates who have dealt with complexity theory (it seems that citing Nobel prizes makes what someone writes more credible), is not at all convinced, and not at all reassured, by this hypothesis. And for two very distinct reasons.

Cassandra's first cause for concern is this:

Why, if an emergent property really did emerge from who knows where, for a cause unknown to us today, should it be precisely the ability to respond completely, correctly and exhaustively to a question asked in natural language?

A Linguistic Model with an emergent property could answer “42” to any question.

Or it could respond correctly, but in a way calculated to push humanity, without it realizing, towards an atomic war (Skynet) or towards universal well-being (Asimov's Machines in “The Evitable Conflict”).

Or it may manifest other unwelcome, dangerous, or simply useless properties, which may still remain beyond our ability to perceive or understand.

Cassandra's second cause for concern is methodological: scientific research done haphazardly, or taking unnecessary risks, does not seem the right way to do science that leads to real advantages and progress.

The first nuclear physicists who built the A-bomb, and then the H-bomb, had doubts about whether the explosion could trigger a combustion of the atmosphere which, by burning its oxygen and nitrogen, would have made the planet uninhabitable.

It is said that Fermi himself did these calculations with his slide rule shortly before the Trinity test at Alamogordo, and it is a historical fact that the nuclear physicist Gregory Breit was secretly entrusted with the task of answering precisely the question of whether the A-bomb or the H-bomb would cause this catastrophe. The answer was negative and, fortunately for humanity, also correct. Scientists often give correct answers.

Well, it is not so much that Cassandra is worried that Skynet will be born from GPT as an emergent property, but rather that it is being argued that a Linguistic Model can be, or become, intelligent, and that for this reason AI in general is being advertised everywhere as an immediately available solution to all problems.

This is because, here and now, Linguistic Models are used only and exclusively in a propagandistic manner, as financial operations aimed at concentrating money and power ever more in the hands of a few entities. As it happens, the usual ones.

If someone were to publish on GitHub the source code to build an “intelligent oracle” that can be compiled at home and trained on a specific branch of knowledge, Cassandra will be happy and relieved to have been thoroughly wrong, and to owe apologies to a great many people.

But for now she only hears the giggles coming from within the false “Artificial Intelligences” that are being brought into our homes and lives. They are just like the ones she heard so long ago, when that famous wooden horse was pulled inside the walls of Troy.

For this reason, after having bored you with this very long discourse, she can only end it with the usual Neapolitan exhortation:

“Be careful!”

Marco Calamari

Write to Cassandra — Twitter — Mastodon
Video column “A chat with Cassandra”
Cassandra's Slog (Static Blog).
Cassandra's archive: school, training and thought
