ED-209 is waiting for us around the corner

Warning: This post was created 3 years ago.

This is a text automatically translated from Italian. If you appreciate our work and enjoy reading it in your language, consider a donation to allow us to keep doing and improving it.

The articles of Cassandra Crossing are released under the CC BY-SA 4.0 license.

Cassandra Crossing is a column created in 2005 by Marco Calamari under the “nom de plume” Cassandra.

Every Thursday, starting from September 9th, we will offer you an ancient prophecy of Cassandra, to be reread today to reflect on the future, alternating with recent articles selected from the latest releases.

The article we propose today is very interesting and, despite being written in 2008, in our opinion still very current: thirteen years later we still have drones that kill only innocent civilians [1] and facial recognition plagued by gender and racial bias, among other problems [2][3].

Despite this, facial recognition is still sold [4] as a turning point for security, regardless of the risks and of the warnings of the Italian Data Protection Authority [5].

This article was written by Cassandra on October 10, 2008.

The adoption of machines for security tasks that should be carried out by humans is not only dangerous for the latter; it also allows responsibility to be transferred to the machine. It is already happening.

October 10, 2008 — Reading the newspapers and following the media, we regularly come across articles on the future evolution of security systems.

Miraculous facial recognition algorithms already allow us to automatically track a fleeing criminal among the hundreds of cameras at Termini station.

Databases of information on airline passengers make it possible to discover potential terrorists before they can board.

Intelligent algorithms will allow us to distinguish facial expressions and reactions of people who lie. Analyzing the movements of each person in a crowd will allow us to understand the intentions of a particular individual before he carries out dangerous actions.

All automatic, infallible, economical, clean.

The question that never receives an answer, or rather that is never even asked in these articles, is whether automating such delicate operations is desirable at all: from the intelligence point of view, from the legal one, and from that of the protection of civil rights.

There are three important issues to consider:

Reliability of the automation: it is not the main point, but think of the little problems that afflict the automated systems you already know. The bank security door that won't let you in because of your belt buckle? It is meant to keep machine guns and their owners out, but it also keeps you out unless you half undress. The ATM that eats a perfectly valid card because it misread it?

And what about certain counter clerks who answer an absolutely normal request by saying that the procedure doesn't allow it, or that the computer is down? Would you entrust your life to one of these automatisms? Would you let yourself be judged by an electronic judge? Examined by an electronic doctor? Would you confess to an electronic confessor?

Some matters simply do not lend themselves to automation.

Patrolling a crowded, poorly lit street and reliably telling the good guys from the bad guys is already almost impossible for a human being. In Paul Verhoeven's “RoboCop”, the ED-209 robot (Enforcement Droid 209) is a caricature of these automated systems for controlling people. Big and noisy, it roars like a lion but works badly: it kills those who surrender, and proves powerful but totally ineffective. Yet it is manufactured by a multinational, backed by a local politician, and designed to become a military technology...

Laws, constraints and rules: Lawrence Lessig observes, as a lawyer but very effectively and correctly, that in cyberspace software is law. He says this as part of his criticism of the American legal system's approach, which wants to regulate the Internet by treating it as a shadow, a mere analogy of the material world.

It is absolutely true, since cyberspace is made of software and would not even exist without it.

In our case the terms of the problem are completely reversed. Can software be law in the material world? It will be said that software does not impose a law, but simply helps humans to enforce the rules, creating constraints that prevent crime or that reveal potential criminals.

Here a vicious circle is created that risks taking us to very dangerous places.

Laws arise from consensus to establish shared rules; the individual freely decides whether to respect them or violate them and bear the consequences. In both cases there are people who must evaluate, possibly prosecute, judge and condemn.

It is the virtuous but often forgotten circle of democracy and free men with free will. But what happens if the rules turn into constraints? If a non-human mechanism like software prevents you from violating a rule, if it becomes a constraint, a total constraint, a supreme controller that automatically decides guilt?

The existence of laws and rules recognizes and safeguards free will; the existence of constraints enslaves the just and the unjust alike. It is not a detail: it is a complete distortion, an antithesis.

If having laws is the right thing, then you cannot have constraints. Constraints do not need shared laws, and once established they can quickly and permanently become instruments of power.
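To make the distinction concrete, here is a minimal sketch (an illustration of the idea only; the scenario and all names are hypothetical). A rule leaves the violation physically possible and defers judgment to humans; a constraint makes the violation impossible and removes judgment altogether:

```python
# Illustrative sketch of rules vs. constraints in software.
# The scenario and all names are hypothetical.

def log_violation(person: str) -> None:
    # In a rule-based world, a human reviews this record later.
    print(f"recorded for human review: {person}")

def open_door() -> None:
    print("door opens")

def rule_based_gate(person: str, authorized: set[str]) -> None:
    """A rule: passing is physically possible even when forbidden.
    Violations are recorded and judged afterwards by people."""
    if person not in authorized:
        log_violation(person)
    open_door()  # free will: the choice to transgress remains

def constraint_based_gate(person: str, authorized: set[str]) -> None:
    """A constraint: the violation is made impossible in advance.
    No transgression, no trial, no appeal; the software decides."""
    if person in authorized:
        open_door()
    # Otherwise nothing happens, and no human ever weighs the case.

rule_based_gate("Alice", {"Bob"})        # Alice passes; a record is made
constraint_based_gate("Alice", {"Bob"})  # Alice is silently stopped
```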

Unfortunately, those who are willing to give up their freedom in exchange for a modicum of temporary security have no problem submitting to constraints, rather than having to respect, as free men, shared laws and rules.

Humans in the chain of command: returning for a moment to ED-209, I still have in mind the tragicomic scene in which the technicians who switched it on desperately try to stop it while it riddles executives in ties with bullets.

This is not the usual archetype of the creature that escapes its creator, nor the problem, also current but less discussed, of whether or not autonomous robots have the "right" to kill human beings. Flying drones and autonomous machine guns can already do it and therefore the question is superfluous.

The problem, however, lies much further upstream. Is it reasonable, in the context of difficult, expensive and complex activities such as intelligence work, to consider the automation of the “recognition of suspects” feasible? Is it legitimate to peddle algorithm sketches from a university paper as the final solution against terrorism?

This is not the place to ridicule these algorithms from a technical or statistical point of view; let us simply ask: how many real terrorists have been identified thanks to the passenger data exchanged and stored by CAPPS-2? How many attacks have been prevented by the “no-fly list”?

Yet just one foiled attack would be a scoop; instead the news we read in the press is that Bill Clinton was unable to get on the plane or that there was a 4-year-old child on the no-fly list.

The problem is more insidious. Imagine that while you are choosing your favorite yogurt at the supermarket, or trying on a swimsuit, a red light comes on and a gentleman covered in badges asks you to follow him: the automatic system for proactively detecting shoplifting has been triggered. What do you do? Do you get angry? Do you calmly let yourself be searched, since you have nothing to hide? And once identified and recognized as innocent, will you ask that the data of this false positive be deleted? It never will be, if only to legally protect whoever carried out the search.
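Incidentally, a back-of-the-envelope calculation shows why such systems are doomed to produce mostly false positives. A minimal sketch with purely hypothetical numbers (a 99% accurate detector, one real shoplifter per ten thousand shoppers):

```python
# Base-rate sketch: why automated screening for rare events
# drowns in false positives. All numbers are hypothetical.
shoppers = 1_000_000        # people screened
base_rate = 1 / 10_000      # real shoplifters among them (assumed)
sensitivity = 0.99          # a real shoplifter triggers the light
false_alarm_rate = 0.01     # an innocent shopper triggers it anyway

thieves = shoppers * base_rate
true_alarms = thieves * sensitivity
false_alarms = (shoppers - thieves) * false_alarm_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"red lights: {true_alarms + false_alarms:,.0f}")
print(f"actual thieves among them: {true_alarms:,.0f} ({precision:.2%})")
# About 10,098 red lights, of which only 99 point at a real thief:
# fewer than 1% of the people stopped are actually guilty.
```

Even with these generous assumptions, more than 99 out of every 100 red lights fall on an innocent person.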

What if the red light comes on while you are going through check-in at the airport? Would you feel reassured because there is a human in the chain of command? Of course, if you happened upon a powerful and benevolent sage who immediately recognized your spotless honesty, all would be well. But the odds are that you will instead end up with a bored, underpaid person who wants to go home at the end of the shift and who knows perfectly well that no one has ever been fired for heeding the red light. In that case, goodbye trip.

And that is still the good case, because the red light was visible to everyone, including you, and it only meant that you could not fly. What if the red light is hidden, and means you are a suspected terrorist? The tag above it may read “suspicious bulge under coat” while you are on the London Underground. Don't run to catch the departing train, because someone, or perhaps something, could put the classic six bullets in your head. With many apologies to the family: “Unfortunately the red light had come on and he started running...”.

Obviously there will always be a human being in the chain of command, but the computerization of intelligence activities will inevitably transfer authority and trust from man to machine, lending credibility to any alert, especially in emergency conditions. Once the emergency is over, responsibility, and blame, can likewise be transferred: to a missing patch, to an evil hacker, to a disconnected wire, to a faulty component, to a bug in the software.

Remember the ordeals of the engineer suspected of being the Unabomber (see Cassandra Crossing/ You, the Unabomber and Data Retention)?

Remember how the investigators had “calculated”, using simple databases and an aberrant method, that the engineer belonged to a group of a dozen “technically perfect culprits”?

How far is the introduction of automation in intelligence and population control from the equation “The red light came on and therefore you are guilty”?

The reason and cause of its coming on will probably be just as inaccessible as the reasons for the investigation in Franz Kafka's “The Trial”. For security reasons, obviously!

For the mistakes, whose victims will not be called “killed” but “false positives”, the principle will apply that it was an unfortunate accident caused by a faulty computer. Or rather, to cut costs even further, perhaps someone will resurrect a “creative” legal loophole like the one that allowed the SS to legally execute an enemy of the homeland without trial, and that defined as an enemy of the homeland anyone who was shot by the SS.

Textbook simplification and cost reduction, even if at the expense of freedom and democracy.

Does ED-209 still make you smile? Or does the thought of its invisible grandchildren make you a little uneasy? The real concern should already arise from reading the articles cited above, and it should push you to speak up and express your opinion. Otherwise, waiting around the corner, together with ED-209's grandchildren, is the historic and ever-lurking slogan: “He who has nothing to hide has nothing to fear”.

Marco Calamari

Write to Cassandra — Twitter — Mastodon
Video column “A chat with Cassandra”
Cassandra's Slog (Static Blog).
Cassandra's archive: school, training and thought
  1. The Pentagon admits: “Our drone attack on Kabul killed only innocent civilians”
  2. Facial Recognition Is Accurate, if You're a White Guy
  3. Gender and racial bias found in Amazon's facial recognition technology (again)
  4. Facial recognition systems are arriving in Italian cities
  5. Measure of 26 February 2020 [9309458]
