

The imperceptible danger of chatbots

Warning: this post was created 5 months ago.

This text was automatically translated from Italian. If you appreciate our work and enjoy reading it in your language, consider a donation to allow us to keep doing it and improving it.

The articles of Cassandra Crossing are released under the CC BY-SA 4.0 license | Cassandra Crossing is a column created by Marco Calamari under the "nom de plume" of Cassandra, born in 2005.

Cassandra returns to chatbots with some valuable advice.

This article was written on February 14, 2024 by Cassandra.

Cassandra Crossing 572 / The imperceptible danger of chatbots

The fake commercial AIs we use today are so ambiguous that they cannot be used without running constant risks, even when you speak to them in confidence.

Right on Valentine's Day, Cassandra stumbled upon an article on Gizmodo, also cited by Slashdot, which described obvious and serious privacy problems in the licensing agreements of "romantic chatbots".

Cassandra's first reaction says a lot about her ignorance of certain issues of the present; it is well known that prophetesses love to deal mainly with the future, and sometimes, indeed, detach themselves from the present.

Indeed, the very fact that such chatbots existed, and that people used them so much that one review listed 11 companies and services providing them, was complete news to her.

But the wonder quickly disappeared to make way for a more ordinary, profound despondency.

It is completely natural, as the Red Queen has always known very well, that people project themselves onto any natural-language output coming from a computer, attributing to it a depth and meaning that obviously come from the human, not from the computer.

Remember Eliza?

Wikipedia introduces it like this: "ELIZA is a chatterbot written in 1966 by Joseph Weizenbaum." Yes, almost sixty years ago, and it was just over a thousand lines of a BASIC-like language. Retrocomputing indeed!

Historians unanimously report that people of ordinary intelligence, who had previous experience of psychotherapy, mistook it for a human psychotherapist, and declared themselves satisfied with the interaction with "her".

At the time people began to speak, without yet having understood anything, of an "Eliza effect".

For Cassandra, obviously, it was not a sensational historical fact but simply a worrying one, and she had correctly filed it in the family of addictions that people develop through information technology, such as the much more serious ones caused by social media.

A few seconds after hearing the news, the usual reasoning formed and concluded itself in the mind of our favorite prophetess, and her 24 well-informed readers will certainly be able to predict it.

Skipping over the parts linked to the natural stupidity of the genus Homo Sapiens, which certainly does not live up to its name, let's focus on the fact itself.

And in particular on the completely predictable amplification that the arrival of false Artificial Intelligences, and in particular of the Pre-trained Large Generative Language Models (which is the full name behind "chatGPT"), would have on Eliza's inevitable "successors".

Successors obviously born to make money, mainly by extracting personal and sensitive data (and how sensitive!) from people.

Yes, because here, now and in this world there are people, many people, who confide to a chatbot equipped with false artificial intelligence their innermost secrets, their fears, and anything else that crosses their mind, with the official purpose of obtaining "psychological support and existential relief".

Let's again skip over the effectiveness and the problems of such a service, and quickly arrive at the fact that matters most according to Cassandra, and at the conclusion of this "reasoning": personal data.

The companies that produce the chatbots analyzed by the investigation, in 90% of cases, clearly state in the license that they will use the conversations as data for further processing, for training false artificial intelligences, and for any other purpose they can think of, obviously starting with selling the data to anyone willing to pay for it.

In English, in a much more pompous, distracting and refined way, and in strict legalese, the licenses say exactly this.

And as always, the ignorant users (in the Latin sense of the word), who had already said yes to Alphabet, to Meta and to all their famuli, just like the Nun of Monza have responded, and have responded yes.

Leaving aside any further considerations, Cassandra cannot make prophecies about this (for her) "news"; she can, however, offer suggestions.

And she will do so, concluding in a way that not even her 24 dumbfounded readers expect.

If you have never used these chatbots, you can file a report with the Garante per la Protezione dei Dati Personali (the Italian Data Protection Authority), which has already shown in the past that it takes seriously (more than its European colleagues) even "strange" aspects of what falls within its competence; provided, of course, that someone reports them and they are dangerous.

If you have actually used a chatbot of this type, or perhaps if a person who depends on you has done so, you can (not to say must) file a complaint with the Garante, which in this second case will give you an answer within the required time.

Reporting and complaining are two different things, and on the Garante's website all of this is explained thoroughly, with instructions and templates to use.

Do it.

Marco Calamari

Write to Cassandra — Twitter — Mastodon
Video column “A chat with Cassandra”
Cassandra's Slog (Static Blog).
Cassandra's archive: school, training and thought

The tag @lealternative is used to automatically send this post to Feddit and allow anyone on the fediverse to comment on it.

Join communities





If you have found errors in the article, you can report them by clicking here. Thank you!

Comments

Each article corresponds to a post on Feddit where you can comment! ✍️ Click here to comment on this article ✍️

Feddit is the Italian alternative to Reddit, managed by us and based on the Lemmy software, one of the most interesting projects in the fediverse.