There are concerns about the decree law on data retention issued by the Italian government. The problem was raised by ALCEI in a press release on December 23, 2003, one day before the decree was issued. There was vague and ineffective debate on the subject in mainstream media, focused on privacy concerns, and the fact was also reported internationally. But this needs to be understood in somewhat greater depth and in a wider perspective.
This particular set of rules isn't much more restrictive, or mischievous, than rules and practices that came before (and more are likely to follow.) It's messy, poorly conceived and confusing, hastily put together to amend the previous one (June 2003) that made data retention compulsory but (for alleged privacy reasons) set a limit of 30 months, while now it's up to five years. The fact that it's a decree law isn't just a technicality. It's the equivalent of a law, but it's issued by the government without prior parliamentary discussion. In theory this should be done only in cases of urgency, but it's often used to bypass the scrutiny of political debate and public opinion.
Telephone companies and internet providers, which retain data for accounting reasons, are now forced to keep them for an extended period, for the sake of inspection by magistrates, police or any other government or state authority. This adds to, but doesn't substantially change, existing laws and practices.
The new law is only an episode in a trend that has been in place for several years.
Discussion on data retention generally relates to privacy. If applied correctly, and following its declared (but unspecified) good intentions, this new law wouldn't make the existing situation much worse (or any better) than it was. But that is only half the story, or less. There are other, and very serious, problems relating to civil rights and to the general framework of how protection from crime is reconciled with individual rights and freedom.
There is a trend that has long been in place de facto, but only now is being formalized. The concept of responsibility is shifting from proven fact (a crime has been committed) to intention (a certain type of person may be likely to act illegally.) Sanction or punishment no longer relates to the actual effects of a behavior, but to the status of a category of people, which is considered guilty because of what it is, not what it does.
While prevention, per se, is legitimate and correct, this way of applying the concept leads to arbitrary sanction, suspicion or persecution based on (real or imaginary) characterizations of people who are assumed to be guilty.
In this perspective the focus is not on an actual burglary, but on the thief. Not on someone creeping illegally into a network, but on the pirate (an improper and confusing definition, applied to a variety of unrelated behaviors, illegal or not, none of which has anything to do with manslaughter or armed robbery at sea.) Not on a person viewing or keeping pornographic images produced «by the sexual exploitation of minors» (as stated by Italian law), but on the paedophile. Etcetera.
This means that criminal models are created, to be punished not for what they do, but for what they are or are assumed to be, even if the prototype doesn't do anything illegal or reproachable.
Obviously these definitions, inherently vague as they are, empower anyone who has the resources to control and sanction to persecute, with a variety of pretexts or assumptions, anyone who is considered uncomfortable, unfriendly or untamed.
Things get seriously worse with a dangerous and dramatic threat such as terrorism. There is a real need to prevent and preempt, i.e. to find and stop terrorists before they act. That can be done in a civilized manner. It is more effective, as well as ethically correct, to avoid witch-hunts, to stay away from prejudice and arbitrary categorization, and to avoid any violation of the human rights and personal freedoms that anti-terrorism action claims to be protecting.
In this context data retention (combined with the arbitrary, and often clumsy, criteria of data analysis and clustering) plays a key role, because it encourages the creation of as many behavior patterns as suit the whims of whoever is searching, or of anyone else who, for any reason, has access to the data.
Furthermore, if we consider that widespread retention of traffic data is extremely cumbersome and hard to manage, it's not unlikely that choices will be made about whose traffic is to be retained, leading to the mass labeling of people who are disliked or mistrusted by anyone in power. This already exists, but with vast technical resources it can not only be enormously increased, but also warped by the inevitable imperfection and arbitrariness of automatic devices.
We know, from practical experience, that commercial profiling doesn't work (and even when it does, it is much less effective than other, more civilized, ways of developing customer relations.) In spite of that fact the profiling legend, spread by list merchants, has caused not only justified concern, but also exaggerated alarm. As a result, while there are attempts to limit profiling as a commercial tool, it is (quite wrongly) assumed that it can work wonders in criminal investigation, or in other controls of human attitudes and behavior that are much less legitimate or justifiable.
With the use of tools that are often unreliable, and can be easily manipulated, inquiries and trials (and other forms of persecution) are developed against virtual identities that can be created ad hoc on the basis of prejudice or a variety of questionable intentions. With the added problem that people can't defend themselves, because the inquiry was done by a computer and there is no way of finding out how the data were generated.
The myth of machine infallibility combines with the obnoxious notion of criminal personality. There is no limit to the number or kind of tendencies or types that can be made to include any person, or category of people, that is considered unpleasant or uncomfortable. The result is a sort of institutionalized pogrom, without even the visibility of a publicly declared ethnic or cultural prejudice.
And, to make things worse, a variety of errors or manipulations can be hidden in any automated inquiry. Improper or arbitrary criteria can be included invisibly, in ways that are hard, if not impossible, to discover. And equally hidden distortions can be caused by incompetence, or deliberate trickery, of any individual using the system.
A combination of intentional prejudices and involuntary mistakes (with countless complications resulting from the convergence of the two factors) can lead to consequences so vast and complex that it's worrying to even imagine them.
Sooner or later we shall return to this subject, in an even wider perspective. But in the meantime, to reach a temporary and partial conclusion, let's get back to the specific case of this recent Italian law. It's true that it vaguely mentions the need for fair procedures in data retention, but it gives no indication of how such guarantees should be put in place. Another problem is that, if there are deliberate tricks, or errors of any sort, in the way the databases are generated and collected, «correct» retention in this context simply means mass retention of bad data.
The key point is that a consistent watch must be kept in defense of civil rights and personal freedom, and this isn't only a matter of privacy. The issue goes far beyond the single case of this hastily and clumsily conceived decree law, which is only one episode in a series that has been going on for several years and is getting worse all the time.