How, if at all, do capabilities enabled by new and emerging technology in telecommunications (e.g., key-escrow encryption technologies, digital telephony) and electronic networking make it _easier_ for those who control that technology to compromise and/or protect the interests of individual end users? Please use as the standard of comparison the ease _today_ of compromising or protecting these interests. We are interested in scenarios in which these interests might be compromised or protected both individually and on a large scale. Please be sure to tell us the interests you believe are at stake.
Control of technology does not need to be held by service providers, the government, or any other centralized entity. It can be taken, today, by individuals who are concerned enough to do so. My basis for comparison is the ease of compromising the interests of an individual who chooses to protect their communications with the tools already available, mainly PGP and the remailer network. These tools are not yet trivially easy to use, but they exist and they are being improved. Since they are available now to anyone who is interested, I will use them as the baseline against which centralized 'security' can be compared.
The FBI wiretapping bill creates a new power for government: a guaranteed right to tap phones. The change is a subtle one with large implications. It creates an additional array of points of failure for an otherwise securable network. Law enforcement agents today have the ability, acquired yesterday through an accident of technology, to tap phones. That does not mean the ability should be preserved. It is widely known that this ability has been, and probably still is, abused.[1] (What do you call an illegal wiretap? An anonymous informant.) GAK (Government Access to Keys) codifies a similar accident: that networks are insecure becomes a design feature.
In a centrally controlled system, there will be points where the entire system can fail. Those points of failure could expose an entire population of users to information leaks. They may be well protected, but even the NSA has had agents defect.
This model is in stark contrast to the situation today, where individuals can take responsibility for their own encryption. If there is no centralized back door, no database of keys, no LEA fields, and the like, then the security of each key remains where it is likely to be best protected: in the possession of its user. I understand the value of my private keys to me, and do not disclose them. A centralized system thus makes it substantially easier to damage the interests of end users, while adding nothing to their protection.
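The asymmetry between the two failure models can be sketched as a toy simulation. Everything here is invented for illustration (the user names, the hash-based XOR "cipher"): it is not real cryptography, only a model of where keys live and what one leak costs.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived by hashing the key; NOT a real cipher."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR is its own inverse

# 100 users, each with their own key.
users = [f"user{i}" for i in range(100)]
keys = {u: secrets.token_bytes(32) for u in users}

# Escrowed design: every key is also copied into one central database,
# creating a single point of failure.
escrow_db = dict(keys)
leaked = dict(escrow_db)  # one insider walks out with a floppy disk
exposed_escrow = [u for u in users if leaked[u] == keys[u]]

# End-to-end design: no central copy exists; an attacker must
# compromise users one at a time.
stolen = {u: keys[u] for u in ("user3", "user7")}  # two bribed users
exposed_e2e = [u for u in users if u in stolen]

# The leak is not abstract: a stolen key decrypts real traffic.
msg = b"meeting at noon"
ct = encrypt(keys["user3"], msg)
assert decrypt(stolen["user3"], ct) == msg

print(len(exposed_escrow), len(exposed_e2e))  # 100 2
```

The point of the model is the scaling: one database leak exposes the entire population at once, while each end-to-end compromise costs the attacker a separate bribe, burglary, or subpoena.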
You could argue that the government has an excellent track record in protecting information. This is only partly true. The government did an excellent job of covering up radiation tests on the mentally ill; it has done a poor job of concealing Social Security numbers, which the IRS prints on the outside of tax documents, claiming the US mail is secure[2]. Only when there are institutional interests at stake does the government show any interest in protecting information about citizens. Doubtless, accidental or illegal revelation of keys would be carefully classified, along with the names of the affected individuals.
The bureaucrat, not having a personal stake in the security of the keys, will be more lax than an individual. No one believes that agents of the government will look out for them as well as they look out for themselves. If they did, perhaps we'd all be happy to let the IRS compute our taxes. It would sure make life easier. But we don't. Individuals are always the best protectors of their own interests.
To hammer on the point, there have been repeated cases of INS employees selling green cards, FBI agents creating rules of engagement later found unconstitutional, and agents of every three-letter agency in Washington selling out to the Russians. The point was perhaps most succinctly made by an NSA historian with whom I spoke at the NSA museum. Of Aldrich Ames he said, "It's amazing how cheaply someone will betray their country."
If we mandate backdoors in a system, they will be found and exploited. Give end users control of the technology, including source code and access to algorithms, and they are empowered to choose a level of security that is appropriate. The government cannot do so, and should not try.
A few scenarios to illustrate my points:
Plot A: Put in place a system of GAK (government access to keys). Let's call it Clipper, for convenience. Let's also say that the DEA is using Clipper to protect its phone conversations about Pablo.
Pablo finds a low-level employee of some key escrow agency. Let's call him Aldrich. Aldrich likes fast cars. Pablo buys Aldrich a fast car, in exchange for 8 or 10 keys, easily smuggled out on a floppy disk. Aldrich has just broken the law, and will doubtless be providing keys to Pablo for a very long time. Pablo, meanwhile, is laughing at the DEA agents, to whose daily phone meeting he listens.
Plot B: There is no GAK. The DEA uses PGP (having gotten copies from European FTP sites, so as not to export it to its agents in South America). The DEA agents hunting Pablo are the only ones with their keys. They know what Pablo does to DEA agents. Pablo can't get their keys, and our heroic agents catch Pablo and throw him in jail forever.
(Naturally, we can substitute any well funded enemy of law enforcement for Pablo. The KGB works well.)
Plot C: For some reason, there is probable cause, leading to the issuance of a warrant. The FBI taps into the communication lines and discovers that the terrorists are using VoicePGP. They then obtain a second warrant and, through the use of an ELINT monitoring device near the computer in question, get all the information they need.
This scenario differs in that the terrorists are in locations known to the FBI, whereas Pablo does not know where the DEA agents are. If the location of the terrorists is not known, it is difficult to tap into their communications links.
Adam Shostack
[2] RISKS-16.21. The IRS has apparently promised to fix the problem.