False Artificial Intelligences

Warning: this post was created 1 year ago.

This text was automatically translated from Italian. If you appreciate our work and enjoy reading it in your language, consider a donation so that we can continue doing it and improving it.

The Cassandra Crossing articles are released under the CC BY-SA 4.0 license | Cassandra Crossing is a column created by Marco Calamari under the "nom de plume" of Cassandra, born in 2005.

Given Cassandra's recent output on Artificial Intelligence, we have decided to gather all the latest reflections on this topic in a single article. Two pieces had already been published and are linked below; the latest two are reproduced in full below.

This article was written on January 8, 2023 by Cassandra

Cassandra Crossing 527/ Artificial Intelligence: Power

What will happen when current "Artificial Intelligences" are used on a large scale?

Okay, okay, Cassandra knows perfectly well that her 24 well-informed readers have read the previous episode, in which it was forcefully underlined that the real danger of the software called "Artificial Intelligence" is the exceptional effectiveness of this name as a tool for mass intellectual destruction.

But to summarize its conclusions, Cassandra essentially underlines that none of the "Artificial Intelligences" currently existing or in development is able to "understand" the problems presented to it, or to "deduce" anything about them. They are just advanced statistical tools, fed by the computing power of entire data centers until they grow into enormous statistical monsters. But they are incapable of learning, of knowing, of deducing. They are therefore not, in any sense, "Intelligences".

They are simply software capable, among other things, of providing "apparently" and "probably" right answers, with no guarantee of accuracy and no ability to explain why they gave a certain answer to a certain question. They are able to produce "plausible" texts, articles and publications, but with no possibility that these contain new knowledge, or even old knowledge that is certainly accurate.

But what happens in people's minds when "Artificial Intelligence", duly distorted and amplified, keeps being repeated in the news delivered by every type of media? When every IT service, product or object is advertised as being based on "Artificial Intelligence"?

The catastrophe that Cassandra has already described occurs; people listen to the Word, the magical power of the Name is unleashed, and they become convinced, consciously but above all on an unconscious level, that such things as "intelligent" programs or "intelligent" objects really exist.

And this is the crux of this serious and almost always ignored problem. The summary of the previous episode ends here, and we start again from here.

What consequences will the massive adoption of software based on unsupervised learning over very large datasets (in practice, over a large part of the culture and data available in the world) have? What will happen when "language models" like GPT-3 and ChatGPT are in ordinary use?

Sure, there will be legions of unemployed hack writers and pseudo-news editors, but this is a known problem, one that has occurred with every technological discontinuity and has always been resolved. Linotypists and proofreaders were totally replaced by new technologies, but they did not die of starvation or on the barricades. This is not the main problem.

The real problem (or at least what we can clearly see today) is the replacement of reasoning entities with entities incapable of reasoning in the production of information, in the analysis of information, in the control of people.

Let's start with the production of information. An article, a photo, a "synthetic" film produced by a so-called "Artificial Intelligence" can be completely convincing and plausible, indeed largely "exact", and yet contain errors at various levels, from the easily perceptible to the extremely subtle and undetectable.

Let's narrow the problem down to analyze it better, and limit ourselves to written information. If you have had the opportunity to use, or even just to witness the use of, a language model such as GPT-3 or ChatGPT, you will have been amazed by the quality of its answers and of the documents it produces.

But what is it really about?

The opening of Wikipedia's entry on GPT-3 is spot on: "GPT-3 is an autoregressive language model that uses deep learning to produce text similar to natural human language."

It is therefore a statistical model, generated from several petabytes of data taken from the Internet, which also include most existing books and, to make sure nothing is missing, Wikipedia and Reddit as well. A true indigestion of information, requiring processing so enormous that only a handful of organizations in the world can afford it (not just GAFAM, if only it were just them!).

And what do we get in the end? Software that, given a sequence of words, can predict the next word very well, and the one after that, and the one after that, and...

Nothing more. It knows nothing, it understands nothing, even if it manages to write articles that seem true. It is just a much more powerful Eliza, but one equally useless if you need psychoanalysis, even though it passes the Turing Test far more easily than that ancient 17-page program.
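To make the "next word" mechanism concrete, here is a deliberately tiny sketch in Python. It is not how GPT-3 works internally (that is a transformer neural network trained on petabytes of text); it is the same principle stripped to the bone: count which word tends to follow which, then always emit the most probable continuation. The corpus and the names are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "language model": count, over a corpus, which word follows which.
# GPT-3 is statistically far more sophisticated, but the principle is
# the same: no understanding, only continuation.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(word, length=6):
    """Emit, one word at a time, the most probable continuation."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # "the cat sat on the cat sat"
```

Each step merely continues what came before; scaled up by petabytes and entire data centers, the output can seem right without anything behind it knowing why.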

That "seem" is the heart of the problem.

Because when we go to the body shop or to the doctor, entrusting them with our money and our lives, we go to professionals who do not merely "seem" competent: we know that they are; more or less completely, but they are.

Because they are authentic intelligences, "General Natural Intelligences". Because they understand. Because they learn.

Cassandra is unable to predict a future in which authentic "artificial intelligences" with the same characteristics exist. We are not talking about “Artificial General Intelligence” here, we are not talking about Skynet. We are talking about what exists today and is passed off as "Artificial Intelligence".

But for Cassandra it is child's play to predict how the software sold today as "Artificial Intelligence" will be used. Let's say it again: they are false "Artificial Intelligences".

They will be used as instruments of power, for example inside the so-called "Intelligent Objects", which today are a minority but in ten years will become the norm.

They will be in the hands of large organizations, both non-state and state, such as the NSA or today's GAFAM.

They will also be used as weapons, to fight multidimensional wars in which one of the new battlefields will be the culture of humanity.

Not simple propaganda, but the production of plausible yet false, erroneous, artificially flawed information, crafted to be functional to a purpose.

What will become of science, which advances through hypotheses and verifications, doubts and checks, and where only the sharing and discussion of prior, accurate knowledge allows us to progress?

Today the scientist feeds on books and publications, and the fact that they are expensive and difficult to find, or even secret and protected by copyright, greatly limits the work of those who truly produce culture.

When, one day, the very basis of culture is polluted by "fake" culture produced by false "Artificial Intelligences" and used for the purposes of power, what will happen? Because these programs, presented as clever chatbots or decision-making supports, will be used for exactly this.

And will culture itself, the Infosphere, be able to survive a "war" fought with these means? Or will it be destroyed by a phenomenon equivalent to the destruction of the Ecosphere that the much-feared global thermonuclear war would have caused, a catastrophe which fortunately never took place?

In conclusion: today Cassandra did not do her usual work; these are just helpful questions for you, not prophecies. Be careful, and exercise critical thinking every time you hear the Word, to prevent the Spell from being triggered.

At least in this, try to live responsibly.

You can hear the giggles coming from your Alexa, from your cell phone, from your home automation, from your car. I once heard them coming from inside a wooden horse, but no one listened to me. You know well how that ended.

Be careful!

This article was written on January 10, 2023 by Cassandra

Cassandra Crossing 528/ Artificial Intelligence: the dangerous Programmer

Is it true that Artificial Intelligences are excellent programmers?

Okay, there is no doubt about it: "Artificial Intelligence" works well as autocomplete in code editors.

Certainly much better than the much-trumpeted and infamous T9, once a dangerous garbler of text messages.

But for the software ecosystem, which now permeates all of reality, false "Artificial Intelligence" constitutes a danger. Both for the software and for the people who work on it.

"Cassandra, catastrophist as usual!" all readers will say, excluding, we hope, the 24 unshakable ones.

Well, for this reason the tone of your favorite prophetess will be particularly formal today, to better demonstrate this thesis of hers.

She will therefore begin by explaining the two perverse effects that applying false "Artificial Intelligence" techniques to software development will generate, and will end by arguing that, like two waves reinforcing each other, these two phenomena could cause a real tsunami in the software ecosystem.

In the previous two articles in this series, "Spell" and "Candies", it was stated that false "Artificial Intelligences", which use "Deep Learning" techniques, although fed with a good part of the world's culture, cannot understand anything and cannot learn anything.

Now, what will happen when programmers, already often not excessively competent, stressed by impossible deadlines and working on old, low-quality code (in short, the vast majority of current programmers), begin to use programs that autocomplete many lines of code automatically and in one fell swoop (like GitHub Copilot), or that even generate code in response to a description of its functionality expressed in natural language (like ChatGPT)?

"Elementary, Watson": they will use them massively, to save work and meet deadlines.

And in fact, when it comes to generating software, false "Artificial Intelligences" perform "better" than in other fields.

But this is completely logical.

For false "Artificial Intelligences", writing code is easier, as it involves generating output in a programming language, that is, in a formalized, closed and unambiguous language. Forget Gödel's subtleties here.

Much, indeed immensely more difficult for false "Artificial Intelligences" is to tell a story, write an article or compose a poem. Here their performances, after an initial "Wow" effect, immediately highlight their structural limits.

But even in the software field, where their performance is better, the underlying limitations remain.

They have digested, among other things, a good part of the software ever written in the world, but they have not "learned" to program, and they do not "know" programming. They only know how to find the word (in this case, the instruction) that best fits the previous ones, and they only know how to reproduce what they have digested.

Feats such as taking a simple algorithm expressed (well) in natural language, writing it in Python, and then easily redoing it in COBOL, are among the "easiest" things a false "Artificial Intelligence" can attempt (a sketch of this kind of exercise follows below). It has even happened that a programmer saw his own code "re-emerge" in this way, comments and variable names included.
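To give an idea of the kind of exercise meant here (the example is ours, invented for illustration, not taken from a real Copilot or ChatGPT session): a one-line natural-language specification and the textbook Python that a statistical generator can plausibly reproduce, precisely because countless near-identical copies of it sit in its training data.

```python
# Specification, in natural language:
#   "Return the n-th Fibonacci number, counting from fib(0) = 0."
# What follows is the textbook answer such a generator is likely to
# reproduce, comments and variable names included, having "digested"
# thousands of near-identical copies of it.

def fibonacci(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```

Redoing the same specification in COBOL is, for such a system, just another statistical continuation, not an act of understanding.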

But what can we say about the correctness of the software thus obtained? About its security? About its quality?

Anyone who has ever worked in the software industry knows full well that as soon as a piece of source code compiles and manages to run its test cases, it almost immediately becomes a "product" and is released as soon as possible, always too soon; hence the omnipresent "...without any warranty, express or implied...".

And they would also find it easy to answer the question of whether programmers will use the energy thus saved to exercise their critical sense at the highest level, in order to find not only the errors they introduced themselves, but also those inserted by the false "Artificial Intelligences".

These errors will likely be even more difficult to find, as code generated by false "Artificial Intelligence" will by its very nature be "very similar" to perfect code; the sketch below shows what that can mean in practice.
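A small invented illustration (ours, not a documented output of any real tool) of "very similar to perfect": the function below compiles, reads well, and passes the only test it ships with, yet it silently drops the last window. Nothing in its appearance flags the bug; only critical review, or a more demanding test, would.

```python
def moving_sums(values, window):
    """Return the sum of every consecutive run of `window` values."""
    sums = []
    # Plausible-looking, but off by one: the final window is dropped.
    # The range should be `len(values) - window + 1`.
    for i in range(len(values) - window):
        sums.append(sum(values[i:i + window]))
    return sums

# The one test case it ships with passes, so it becomes a "product":
assert moving_sums([1, 2, 3, 4], 2) == [3, 5]  # the correct answer is [3, 5, 7]
```

The test passes because it was written against the output of the buggy code, which is exactly how "compiles and runs the test cases" turns into a release.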

"Similar" is not enough! Code is law, as Lawrence Lessig said, and law should strive for perfection.

Cassandra instead prophesies, without much effort, that this increased productivity will be spent on employing programmers with lesser skills, and on making them produce more, like battery chickens.

Deep down, a good programmer has always been a nuisance in the software industry: someone who writes quality, perhaps original, code, and who tries to prevent problems and to correct sloppy specifications. Sometimes necessary, but annoying.

Such an individual tends to produce less code, to miss unrealistic deadlines in order to do a good job, perhaps even to expect to be paid well. It is even said that, in the most serious cases, he may be caught working in an "ethical" manner.

Never! The dream of the software industry is to make this type of programmer disappear, indeed to make programming itself disappear.

In this way the production of software can be entrusted to legions of meek, disciplined and above all replaceable "keyboard monkeys", who will obviously be paid peanuts, will therefore look great on company balance sheets, and will make both managers and shareholders happy.

But will code created this way work? Of course it will work; it will work like the code produced today, that is, the bare minimum, leaving users to find the errors and to remain ever ready to pay for support or to buy the new version. And really big mistakes will be remedied by contractual clauses and insurance companies.

And so the globalization of misery will take another step forward: interplanetary probes will continue to make holes in increasingly distant planets, radiotherapy machines will continue to make holes in patients' heads, missiles will continue to make holes in civilian planes, and killer robots will continue to fill people with holes even when they shouldn't.

"Business as usual", a few but very happy "Natural Intelligences" will instead be able to say, cynically but quite normally.

Marco Calamari

Write to Cassandra — Twitter — Mastodon
Video column “A chat with Cassandra”
Cassandra's Slog (Static Blog).
Cassandra's archive: school, training and thought
