From:	 Bruce Schneier <schneier@counterpane.com>
To:	 crypto-gram@chaparraltree.com
Subject: CRYPTO-GRAM, July 15, 2001
Date:	 Sun, 15 Jul 2001 15:31:23 -0500

                  CRYPTO-GRAM

                 July 15, 2001

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier@counterpane.com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at 
<http://www.counterpane.com/crypto-gram.html>.  To subscribe or 
unsubscribe, see below.


Copyright (c) 2001 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Phone Hacking: The Next Generation
      Crypto-Gram Reprints
      News
      Counterpane Internet Security News
      Single Sign-On
      Monitoring First
      Comments from Readers


** *** ***** ******* *********** *************

       Phone Hacking: The Next Generation



The phone network and the Internet are converging.  That's good news for 
smart telephones, new telephony services, and customer convenience, and bad 
news for security.  If you think that phone hacking is bad now, take a 
gander at what's coming.

During the last fifteen years or so, there has been a trend toward 
intelligent telephone networking.  We've seen ISDN.  We've seen SS7.  We've 
seen IN (Intelligent Networking).  These protocols are responsible for all 
the cool telephony features we've come to know and love: call forwarding, 
call following, local number portability, caller ID, etc.  These features 
work fine, but are limited because they are all controlled by the phone 
company.  If you want to initiate caller ID, you need to get the phone 
company involved.  If you want your business calls forwarded to your home 
after 5:00 PM, you need to turn that on and off every day.

On the corporate side, we've seen Computer Telephony Integration (CTI), 
which didn't work very well because it was so big and clunky.  It might be 
fine if you're a huge call center, but it just wasn't cost-effective for 
your average business.  Development cycles were long, and service creation 
horrendously expensive; usage was rare.

But along came the Internet, and everything changed.  The notion of 
intelligent endpoints (computers) and a dumb network (routers) turns the 
telephony model upside down.  There are several consortiums and standards 
bodies working on bringing the Internet model to the telephone network, and 
allowing Internet-based control of telephone switching.  The idea is to 
turn the telephone network into a giant networking resource that people 
outside the telephone network can control and manage.  The benefit to the 
enterprise is more features and control: cost savings, better sales and 
marketing, improved customer service, etc.

The Parlay Group is a major player in this space.  A consortium of 
software, hardware, and telephony companies, they are creating a 
specification and API to enable phone-system control from outside the 
secure telco network.  This API will allow software to do such things as 
reroute calls, get notified of call attempts, retrieve the location of 
mobile users, and more.  Even access to telco billing systems is 
planned.  The idea is that computer applications can have integrated 
telephone components.
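
To make that concrete, here is a hypothetical sketch, in Python, of the kind 
of third-party call control such an API might expose to an application 
developer.  Every class, method, phone number, and credential below is 
invented for illustration; none of it is the actual Parlay interface.

class TelcoGateway:
    """Stand-in for the gateway between an Internet application and
    the secure telco network."""

    def __init__(self):
        self.routes = {}
        self.watches = {}

    def authenticate(self, app_id, secret):
        # The real spec calls for mutual authentication and encryption
        # between application and gateway; this only pretends to.
        return (app_id, secret) == ("crm-app", "not-a-real-secret")

    def reroute(self, original_number, new_number):
        # Ask the network to deliver calls for one number to another.
        self.routes[original_number] = new_number

    def notify_on_call_attempt(self, number, callback):
        # callback(caller) fires whenever `number` is dialed.
        self.watches[number] = callback

    def locate_mobile(self, number):
        # Placeholder for network-derived handset location data.
        return {"lat": 0.0, "lon": 0.0, "accuracy_m": 100}

gw = TelcoGateway()
if gw.authenticate("crm-app", "not-a-real-secret"):
    gw.reroute("+1-612-555-0100", "+1-612-555-0199")
    gw.notify_on_call_attempt("+1-612-555-0100",
                              lambda caller: print("call attempt from", caller))
    print(gw.locate_mobile("+1-612-555-0123"))

The security question, of course, is what happens when something other than 
the authorized application gets hold of those credentials.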

Even more fundamentally, all the switching protocols will interoperate at 
multiple points.  Switches, gatekeepers, proxies, and call control agents 
will all be components of the new telephony control system.  Control can be 
distributed or centralized, depending on the application.

Meanwhile, the IETF is defining the Session Initiation Protocol (SIP) for 
Voice over IP (VoIP) and more.  This protocol will allow a user to define 
complicated ways to redirect calls: between 9 AM and 5 PM ring my office 
number, between 5 and 6 PM call my cell phone, after 6 PM call my home 
phone, and if my mother calls at any time, send her directly to voice 
mail.  The protocol even includes a programming language, so a user can 
write a program to handle phone calls to match his own needs.  While these 
features are nominally controlled by the user, the programs are stored in 
the telco network, and a DNS-like service is used to handle the profile and 
call forwarding.  SIP is becoming a big thing; it's currently being used 
for VoIP telephony, will control calls in 3G wireless networks, and is 
being envisaged for all sorts of other uses like Instant Messaging.
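
To give a flavor of what user-programmable call handling means, here is a toy 
Python sketch of the routing policy just described.  A real deployment would 
express this in the SIP world's own machinery (the IETF has been working on a 
Call Processing Language for exactly this) and run it on a server inside the 
telco network; the code below only illustrates the decision logic, and all 
the addresses are made up.

from datetime import time

OFFICE = "sip:office@example.com"
CELL = "sip:cell@example.com"
HOME = "sip:home@example.com"
VOICEMAIL = "sip:voicemail@example.com"

def route_call(caller, now):
    # Mother goes straight to voice mail, no matter what time it is.
    if caller == "sip:mom@example.net":
        return VOICEMAIL
    if time(9, 0) <= now < time(17, 0):     # 9 AM to 5 PM: office number
        return OFFICE
    if time(17, 0) <= now < time(18, 0):    # 5 PM to 6 PM: cell phone
        return CELL
    return HOME                             # after 6 PM (and overnight): home

assert route_call("sip:boss@example.com", time(10, 30)) == OFFICE
assert route_call("sip:boss@example.com", time(17, 15)) == CELL
assert route_call("sip:mom@example.net", time(10, 30)) == VOICEMAIL

The point is not the twenty lines of code; it's that logic like this will be 
stored and executed inside the phone network on behalf of millions of users.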

The big idea here is to bring the development techniques of the Web to 
telephony services.  New services are essential, because all the 
carriers have cut their collective throats on per-minute long-distance 
rates.  Premium services are seen by many as the only source of meaningful 
revenue in the future.  This means that telephony, which has heretofore 
been slow and methodical and reliable, will become as freewheeling as the 
Internet.

I am terrified at the security implications of these services.  Sure, the 
Parlay spec says that communication between the Parlay client and Parlay 
server in the telco network is encrypted, and authentication will be 
enforced, but I don't believe for a minute that this will remain 
unhacked.  SIP contains security provisions, but I don't trust them.

The problem isn't the details of the protocols.  It doesn't matter how many 
bits are in the key, or what authentication protocol they employ: we've learned from 
experience that all systems like this are hackable.  The worry is that 
these protocols open a huge hole into the telephone system.  The problem is 
that these telephony control systems will sit on top of insecure operating 
systems.  They will be hacked, and then things will get ugly.

Think about the possibilities for a minute.  Denial-of-service attacks are 
a breeze: just reroute all of a person's calls somewhere else.  Or reroute 
all calls destined for a popular phone-sex service to some unsuspecting 
victim.  Or maybe just eavesdrop: set up a three-way conference bridge 
whenever someone receives a phone call.  Remember the Trojan program that 
quietly made the modem dial Moldavia?  This kind of system would make that 
hack a lot easier.  And don't 
you think all of those hackers who chat on IRC would much rather take over 
a PBX and set up a conference call?  You don't need me to think up the 
possibilities; there are lots and lots of them, none of them good.

One of the biggest backward steps is the re-merging of the control and 
voice channels.  Switch and PBX hacking used to be very easy when signaling 
was done in-band.  SS7 is an out-of-band signaling system, which separated 
the voice from the telephone control and made "beeping into the receiver" 
hacking impossible.  These new IP telephony systems rebuild that old, 
vulnerable model.

It gets worse.  The FCC is mandating that cell phone companies pinpoint 
phone locations to within 50-100 meters (for use with 911 calls).  The 
carriers plan to use this information to create new data services based on 
location.  The location information will also be available through services 
like Parlay for third parties to use.  Imagine the security implications of 
that information getting into unauthorized hands.  What if someone 
correlated a person's cell phone with his online identity?  Could he 
pinpoint locations of desktop computers on the Internet?  (This is actually 
a serious issue for 911 services.  Unless one can somehow manage location 
information for endpoints, there's no hope of fielding a reasonable 
life-critical communications system based on the Internet.)

And think about reliability.  The one thing about the telephone system is 
that it just works.  That reliability is very hard to engineer using 
Internet protocols.  As the phone system starts to look more and more like 
the Internet, it will become as reliable as the Internet.  This means that 
it will forever be in beta.  This means there will be software 
incompatibilities, upgrade problems, and random weird errors.  This means 
that it will fail, catastrophically, once in a while.

Telephone hacking is not new.  There have been decades of allegations and 
investigations into Las Vegas crime syndicates surreptitiously rerouting 
escort-service phone numbers, and the dial telephone was invented in the 
late 1800s by someone convinced that operators were rerouting his calls to 
rival businesses.  Before the Internet, the phone network was the primary 
focus of hackers.

But it's a hard network to hack.  Telephony is still a controlled closed 
universe.  The protocols are often proprietary, access is limited, and 
information is scarce.  You need to speak SS7, have the right physical 
connections, etc.  There is nominally no interconnect to the TCP/IP 
Internet.  Even with knowledge, it is the limited physical access that 
provides the most constraint.  Voice and control are on separate 
channels.  None of this provides absolute security, but it helps keep the 
number of hackers down.

The Internet, on the other hand, is much easier to hack.  It's 
public.  It's available.  Anyone can connect a computer up to the 
Internet.  Anyone can download boatloads of hacking tools.  Anyone can 
become a script kiddie.

What we're seeing is another example of the tension between functionality 
and security.  Opening the network is a good thing from the perspective of 
creating innovative new services, speeding up development cycles, adding 
value to data and voice.  Yet when we do this, we open up the potential for 
the bad things as well.  It's impossible to get the one without the other.

Soon the phone network will become just like the Internet.  Putting control 
of telephony networks on the Internet means anyone can hack 
chicago.switch.uswest.net.  These protocols will turn control over to 
authorized and unauthorized Internet users alike.  If you think phone phreaking 
was bad, just wait until anyone can do it.

Standards and companies active in this area:
<http://www.parlay.org>
<http://www.telecomsys.com>
<http://www.invertix.com>
<http://www.locationet.com>
<http://www.openls.org>
<http://www.locationforum.org>
<http://www.3gpp.org>
<http://www.sipforum.org>
<http://www.sipcenter.com>
<http://www.etsi.org/tiphon>

Steve Bass and John Ladwig both helped with this article.


** *** ***** ******* *********** *************

              Crypto-Gram Reprints



Those of you who have subscribed recently might have missed these essays 
from back issues.

Declassifying Skipjack:
<http://www.counterpane.com/crypto-gram-9807.html#skip>

The Future of Crypto-Hacking:
<http://www.counterpane.com/crypto-gram-9907.html#hacking>

Bungled SSL:
<http://www.counterpane.com/crypto-gram-9907.html#doghouse>

Full Disclosure and the CIA:
<http://www.counterpane.com/crypto-gram-0007.html#1>

Security Risks of Unicode:
<http://www.counterpane.com/crypto-gram-0007.html#9>


** *** ***** ******* *********** *************

                     News



Security is a people problem:
<http://cgi.zdnet.com/slink?107608:8469234>

The NSA has released a bunch of guides to help DoD organizations secure 
Windows 2000.
<http://nsa1.www.conxion.com/win2k/index.html>

I'm not sure what to make of this one.  Robert Hanssen, the FBI agent 
accused of spying for the Russians, wanted to retire into a job with 
Invicta Networks.  (Invicta Networks is the company run by Soviet KGB 
defector Viktor Sheymov that I talked about last month.)  Is Invicta 
Networks a government front?  For which government?  This is just plain weird.
<http://www.nydailynews.com/2001-02-23/News_and_Views/Beyond_the_City/a-100928.asp>

Rental car companies use GPS features in their cars to spy on renters.
<http://www.zdnet.com/zdnn/stories/news/0,4586,2778752,00.html>
At least in this case, it has been declared illegal.
<http://www.zdnet.com/zdnn/stories/news/0,4586,5093616,00.html>
But the car rental company has no plans to change the policy.
<http://www.wired.com/news/privacy/0,1848,45163,00.html>

Insiders are a serious security concern.
<http://cgi.zdnet.com/slink?110672:8469234>
As are ex-employees (outsiders who used to be insiders, and who 1) often 
still have access, and 2) are sometimes pissed).
<http://www.securityfocus.com/templates/article.html?id=220>
<http://www.computerworld.com/itresources/rcstory/0,4167,STO61983.html>
And corporate downsizing makes the problem worse:
<http://www.computerworld.com/rckey73/story/0,1199,NAV63_STO61663,00.html>

I have long wondered about the security of various online 
competitions.  Here is a Coca-Cola competition that has been hacked.
<http://www.theage.com.au/news/national/2001/06/18/FFX85SO43OC.html>

Negligence causes security problems.  (Does this surprise anyone?)
<http://www.vnunet.com/News/1123341>

Here's a story about a Microsoft IIS vulnerability, a patch that many 
people have not bothered installing, and a hacker tool that exploits the 
vulnerability.  Often the tools are made public, and are used by thousands 
of script kiddies world-wide.  At this time, it seems that the hacker tools 
are being held closer to the vest because they are so valuable.
<http://news.cnet.com/news/0-1003-200-6353491.html?tag=mn_hd>

How the FBI investigates computer crime:
<http://www.cert.org/tech_tips/FBI_investigates_crime.html>

Parody: "Terms used in the disciplines of Cryptography, IT Security and 
Risk Analysis."
<http://www.ii.uib.no/~larsr/craptology/crv0n0-0.html>

Excellent three-part series on developing good security habits:
<http://www.securityfocus.com/focus/basics/articles/sechabits1.html>
<http://www.securityfocus.com/focus/basics/articles/sechabits2.html>
<http://www.securityfocus.com/focus/basics/articles/sechabits3.html>

NIST has released a new FIPS 140 standard.  This has been the de facto 
standard for cryptographic modules, and is invoked for other crypto hardware 
devices as well.  There aren't many changes in the new FIPS 140-2, but there are some.
<http://csrc.nist.gov/cryptval/140-2.htm>

Comparing computer viruses with biological viruses:
<http://securityportal.com/articles/greatanalogy1.html>

Terrorists are using encryption!!!  This is a terrible story, one-sided and 
full of hyperbole.  But it plays well as FBI propaganda.  Remember kids, 
cryptography is for criminals.  Big Brother is your friend.  What I 
particularly like is the way key escrow is proposed as the solution when 
steganography is described as the problem.  Geez.
<http://www.usatoday.com/life/cyber/tech/2001-02-05-binladen.htm>

Big Brother is Tampa's friend.  During the recent Super Bowl in Tampa, 
Florida, the city installed video cameras that watched everybody coming 
into the stadium.  Then, the faces were compared to an archive of wanted 
criminal suspects, looking for a match.  (The system matched nineteen 
people, all wanted for minor offenses.  No one was arrested.)  Tampa liked 
the system so much they're making it permanent.
<http://www.cnn.com/2001/TECH/ptech/07/02/high.tech.security.ap/index.html>

Time for some good news.  The EU is funding a project to build an 
intelligent agent that executes actions in compliance with European 
standard privacy legislation.
<http://pet-pisa.openspace.nl/pisa_org/pisa/index.html>

Article on NSA's Cryptologic Museum:
<http://www.washingtonpost.com/wp-dyn/articles/A10615-2001Jul2.html>

Teen hacker gets sentenced to psychiatric care, not jail:
<http://news.bbc.co.uk/hi/english/uk/wales/newsid_1424000/1424937.stm>
<http://cgi.zdnet.com/slink?116143:8469234>

Eli Lilly leaks 600 names of Prozac users.  I wonder what the "computer 
programming error" was -- using "To:" instead of "Bcc:" in an e-mail?
<http://www.washingtonpost.com/wp-dyn/articles/A16718-2001Jul4.html>
<http://www.siliconvalley.com/docs/news/svfront/009534.htm>

Outlook Redemption is a developer tool specifically designed to let Outlook 
applications evade the Outlook security patches and built-in features of 
Outlook 2000 that warn users when applications send mail on their behalf, 
read their address book, and so forth.  This can't possibly be a good idea.
<http://www.dimastr.com/redemption/>
On the other hand, the "security patch" is a really terrible idea, too; it 
blocks whole categories of attachments outright rather than letting the user 
choose.  And there's no way to uninstall the security patch once it's 
installed.  That's what inspired this tool, I'll bet.

This story is interesting because it shows how easy it is to track someone 
trying to be anonymous on the Internet, especially someone who doesn't 
understand how the Internet works.
<http://www.wired.com/news/politics/0%2C1283%2C44567%2C00.html>

Yet another scary hacking tool.  Sure, there's a patch available.  But what 
percentage of users actually have installed the patch?  Anyone care to take 
a guess?
<http://cgi.zdnet.com/slink?115257:8469234>

Trojan horse that is a spam tool; it sends bulk e-mail without the user's 
knowledge.
<http://cgi.zdnet.com/slink?115260:8469234>

Having a firewall set up on your system doesn't do you much good if you 
don't monitor it for abnormal behavior.
<http://cgi.zdnet.com/slink?116145:8469234>
What I find interesting is the number of different things the author 
suggests you watch out for.  This would be a tall order for anyone to do 
continuously, and this is just a small fraction of the total number of alerts 
-- from firewall logs and all the other network-device logs -- that you 
should be looking for.  And, of course, you really want to be reading those 
logs in real-time, so you can react to problems quickly.  This article 
explains the precise reason why I formed Counterpane Internet Security as a 
Managed Security Monitoring company.
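
As a toy illustration of what even the minimal case involves, here is a 
Python sketch that watches a single firewall log in real time and flags a few 
signatures.  The log path and the patterns are invented for the example; a 
real network has dozens of devices and far more signatures, which is exactly 
why this doesn't scale as a part-time job for an administrator.

import re
import time

LOG_PATH = "/var/log/firewall.log"      # hypothetical log file
SIGNATURES = [
    re.compile(r"DENY .* dpt=23\b"),    # telnet probes
    re.compile(r"DENY .* dpt=1433\b"),  # SQL Server probes
    re.compile(r"portscan detected"),
]

def watch(path=LOG_PATH):
    with open(path) as log:
        log.seek(0, 2)                  # start at end of file, like tail -f
        while True:
            line = log.readline()
            if not line:
                time.sleep(1.0)         # wait for new entries
                continue
            if any(sig.search(line) for sig in SIGNATURES):
                print("ALERT:", line.rstrip())

if __name__ == "__main__":
    watch()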

An argument for anonymity:  From "The Economist":  "Scientists and 
engineers at Xerox's Palo Alto Research Center (PARC), for instance, were 
discouraged from searching an online database of patents maintained by 
IBM.  Xerox feared that if IBM tracked the pattern of inquiries made by its 
engineers at PARC, the computer giant could build a fairly accurate profile 
of the kind of research under way in the Palo Alto laboratories."
<http://www.economist.com/science/tq/displayStory.cfm?story_id=662374>

Top 10 security mistakes.  Not a bad list, actually.  The main problem is 
that some of these mistakes are not fixable.  At least, it's not realistic 
to expect them to be fixed.
<http://www.idg.net/ic_646834_1794_9-10000.html>

You have to love the irony of this story.  "A U.S. government website 
devoted to helping businesses keep sensitive information private instead 
revealed confidential information about American firms."  Honestly, I don't 
think the so-called Safe Harbor idea is going to work long-term.  As soon 
as European governments realize that private information is being leaked, 
they are going to demand the same data-collection controls that they subject 
their own companies to.
<http://www.wired.com/news/technology/0,1282,45031,00.html>

Malware of the future will be worse.
<http://news.cnet.com/news/0-1003-200-6548363.html?tag=mn_hd>


** *** ***** ******* *********** *************

       Counterpane Internet Security News



Some time ago I talked about various cool things going on at Counterpane 
that I couldn't talk about.  One was the reseller agreement with Exodus, 
which I mentioned last month.  The other is our VAR program.  This is a big 
deal.  A whole bunch of security VARs and resellers have signed up to 
resell Counterpane's Managed Security Monitoring service.
Counterpane's U.S. VARs:
<http://www.counterpane.com/pr-resellers.html>
Counterpane's European VARs:
<http://www.counterpane.com/pr-emeavars.html>
Press coverage (please pardon the awful picture of me):
<http://www.crn.com/Components/Search/Article.asp?ArticleID=28016>

Alliance between Counterpane and Cigital:
<http://www.counterpane.com/pr-cigital.html>

Schneier is speaking at the 3rd Annual CERT Conference in Omaha on 6 August:
<http://www.certconf.org/>

A video interview with Bruce Schneier is on silicon.com:
<http://www.silicon.com/a45413>


** *** ***** ******* *********** *************

               Monitoring First



(Note: This essay advocates something that my company, Counterpane Internet 
Security, is selling.  If this will offend you, please do not read this.  I 
have announced my bias.  You are forewarned.)


You have a safe in a dilapidated building, and you need to secure 
it.  What's the first thing you do?  Inventory the safe?  Assess the 
security of the building?  Install better locks on the doors and bars on 
the windows?   Probably not.  The first thing you do, as quickly as 
possible, is alarm the safe.  Once the safe is being monitored, you can 
then afford the time and attention needed to inventory the stock, analyze 
the environment, and improve the security.  Without monitoring, you're 
vulnerable until your security is perfect.  If you monitor first, you're 
immediately more secure.

Network security has this backwards.  Companies see monitoring as something 
to do after they have their security products in place.  First they develop 
a security policy.  Then they do a vulnerability analysis.  Then they 
install a firewall, and maybe an intrusion detection system.  And finally 
they think about monitoring.  Rationally, this makes no sense.

Monitoring should be the first step in any network security plan.  It's 
something that a network administrator can do today to provide immediate 
value.  Policy analysis and vulnerability assessments take time, and don't 
actually improve a network's security until they're acted upon.  Installing 
security products improves security, but only if they are installed 
correctly and in the right places.  How does a CIO know what products to 
install, and whether they are actually working -- in the actual corporate 
environment, not as they worked in the lab?  The only way he can know is to 
monitor.  Monitoring ensures that security products are working properly.

This kind of thinking is especially important in dynamic environments like 
company networks.  The network changes every day: new applications, new 
servers, new vulnerabilities.  A CIO can go to sleep one night confident 
that his network is secure, and can wake up the following morning to read 
about a major vulnerability in the newspaper.  Suddenly his network is wide 
open, even though nothing changed.  A CIO can reconfigure his network to 
increase productivity, or add a new network service, or simply upgrade a 
software package, and suddenly the security of his environment is 
completely different.  Networks are extremely complex -- nonlinear and 
tightly coupled -- and it's impossible to predict how different subsystems 
interact.  How does he know the security ramifications of what he 
does?  The only way is to monitor security.

It's specious logic for a CIO to decide to wait until his network is 
stable, he understands his security, and all his patches are up to 
date.  It'll never happen.  Monitoring's best value is when a network is in 
flux -- as all large networks always are -- due to internal and external 
factors.  Monitoring provides immediate security in a way that a 
vulnerability assessment can never provide, in a way that dropping a 
firewall into a network can never provide.  Monitoring provides dynamic 
security in a way that a random product can never provide.  And as security 
products are added into a network -- firewalls, IDSs, specialized security 
devices -- monitoring only gets better.

In engineering, control theory is based on the concept of monitoring.  An 
engineer might want to be able to tune his factory: "How can I control this 
plastic film extruder to ensure a uniform thickness of plastic?"  This is a 
real question, and a complicated one.  The plastics extruder might have a 
dozen different dials controlling things like temperature, pressure, and 
speed.  You can adjust the amount and force of the air being blown, the 
amount of plastic bead material in the machine, or how rapidly the film is 
pulled out of the machine.  All of these controls affect the thickness of 
the plastic; but what you really want is to turn a single dial that says "4 
mil plastic."  But since each dial affects the others, can even cause 
time-dependent feedback loops, it's not nearly as simple as that.  So what 
do you do?  You monitor the system, not just at the output but 
internally.  Then, based on what you've observed, you establish feedback 
loops to create a closed-loop system (I am i
gnoring a library's worth of advanced mathematics here), and apply the 
mathematics of control theory to get what you want.  It might take hundreds 
of pages of analysis, but that's how control theory works.  But first you 
need to monitor so you know what's going on.  Monitor, and you gain control 
of the system.
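
To make the feedback idea concrete, here is a toy Python sketch of a 
closed-loop controller.  The "extruder" below is a made-up linear stand-in 
for the real machine, and the controller is a bare proportional feedback 
loop -- nothing like the mathematics a real plant requires -- but the pattern 
is the one that matters: measure, compare against the target, adjust, repeat.

TARGET = 4.0        # desired film thickness, in mils

def extruder(dial):
    # Made-up stand-in for the plant: thickness responds to one dial.
    return 0.8 * dial + 0.5

def control_loop(steps=20, gain=0.5):
    dial = 1.0
    for _ in range(steps):
        thickness = extruder(dial)      # monitor: measure the output
        error = TARGET - thickness      # compare it against the target
        dial += gain * error            # feed the observation back in
    return extruder(dial)

print(round(control_loop(), 3))         # converges toward 4.0 mils

Take away the measurement step and the loop is just guessing; that is the 
analogy to a network full of security products but no monitoring.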

Security is no different.  Monitoring is what gives companies a window into 
their security.  Did you install a firewall?  An IDS?  Why?  Did it 
increase security or not?  Did you configure it right?  Did you install it 
at the right place in your network?  How do you know?  Monitoring is how 
you know.  Monitoring is the only way you can really know.  And once you 
know, you can start making changes.  If you make changes without 
monitoring, you're just guessing.

Monitoring is the feedback loop that makes all the other network security 
activities more effective.  It's how you determine where to install 
security devices, and whether or not they're doing any good.  It's how you 
know if your security devices are configured correctly.  It's how you 
ensure that your security doesn't degrade over time.  And it needs to be 
done first.


** *** ***** ******* *********** *************

             Comments from Readers



From: Hal Lockhart <hal.lockhart@entegrity.com>
Subject: What Military History Can Teach Network Security

In recent years, I have become convinced that one of the biggest obstacles 
to information security is incorrect reasoning based on false 
analogies.  Don't get me wrong.  I think analogies are great; I use them 
all the time to explain technical concepts to laymen.  The problem comes 
when you try to design solutions from the analogy rather than the 
technology.  A familiar, non-computer example is when people who don't 
understand the math try to argue about relativity.  Closer to home, I have 
seen many people who put their Web server with private key outside the 
firewall.  "I don't want to let somebody inside my wall until I check their 
credentials," they say.  They are thinking about a guy with a crowbar, not 
computers sending and receiving messages.

This brings me to your articles on war as a metaphor for hacking, which 
makes me nervous for the reasons stated.  I am always happy to steal good 
ideas from anybody, but hacking as war strikes me more as rhetoric than 
reality.  In a war, the enemy has some identity, common characteristics, 
loyalty and objectives.  While we would prefer to defeat the enemy, we can 
have confidence that if the enemy achieves some set of goals they will be 
willing to stop fighting.

The current state of the Internet seems a lot more like anarchy or 
vandalism.  Attackers have no common characteristics or objectives, except 
the attacks themselves.  In fact it appears that in most cases they have no 
external objective at all.  Web defacements in particular seem almost 
exactly like spray painting graffiti on subway trains.  This is a problem 
that was once thought to be insoluble, but was eventually reduced to 
acceptable levels by the combined use of technology (padlocks and special 
paints) and labor (cops and cleaners).

However, if you insist on war as the metaphor, here are two thoughts along 
those lines.  First, the war, if that is what it is, is surely a guerrilla 
war.  The entities being attacked are large, visible, slow-moving and part 
of the power structure.  They have much greater resources than the 
attackers, but no effective way to apply them.  The attackers are few, 
dispersed, and hidden, and have few resources.  But what they have is the free 
choice of when and where to attack.

To fight guerrillas it is necessary to a) identify them; i.e., distinguish 
them from civilians and b) control some resource that is essential to their 
survival.  Given the Internet as it exists today, I don't see much hope of 
doing either of these.  If the authorities decide to employ broadly 
targeted, draconian measures, they will find, like the British in America 
and the Americans in Vietnam, that the collateral effects on innocent 
civilians are simply unacceptable.

My second thought about hacking as war is that the situation can be 
compared to that in many wars, but most especially the American Civil 
War.  While some people were busy fighting and dying, war profiteers made 
fortunes selling rotten food, unserviceable uniforms and non-working 
weapons.  With a market full of snake oil security, bug-ridden applications 
and vendors who are more interested in suing or prosecuting people who 
reveal security problems than fixing them, this seems like the kind of war 
we are in.



From: Bill McGonigle <bill@zettabyte.net>
Subject: Re: CRYPTO-GRAM, June 15, 2001

 >From: Richard Straub <richard.straub@mediacrypt.com>
 >Subject: Digital Content Protection
 >Preventing unauthorized people from copying or distributing
 >intellectual property is a natural law of this world, whether
 >it is a digital or physical product.

It cannot be argued that a human law created less than 300 years ago is a 
natural law.  Copyright was established in 1710 by the Statute of Anne by 
the British Parliament to protect the public from the publishers.

This was an artifact of the printing press.  After Gutenberg invented it, a 
market for books developed.  Before that, people actually memorized and told 
stories.  Books were way too expensive and had to be hand-copied by monks 
(in papal states).  People decided they liked books, probably because they 
could then spend time on things besides memorizing stories.  Once people no 
longer remembered how to memorize stories, publishers figured out that they 
could make a bunch of money by creating a scarcity of stories in the books 
market.  The Parliament put a stop to that with the copyright law by 
limiting the term of exclusivity, but since the capital costs of hiring a 
logging team, building a pulp plant and building a printing press are 
beyond that of ordinary citizens, a scarcity still existed, and publishers 
could make a fair profit.

Digital reproduction drives the scarcity towards zero since the costs of 
reproduction and distribution of text are extremely low.  Digital copyright 
protection seeks to artificially enforce the scarcity.  But the function of 
the publishers as converters of trees into books is approaching 
obsolescence in the post-Gutenberg era.  Rich citizens can afford to 
commission works directly, and poor citizens can cooperate with something 
like the Street Performer protocol.

Neither Sophocles nor Chaucer nor Shakespeare had the benefit of copyright 
protection.



From: Russell Nelson <nelson@crynwr.com>
Subject: Digital Content Protection

 >From: Paul Kocher <paul@cryptography.com>
 >Subject: Digital Content Protection
 >For this reason, I disagree with the assertion that "unrestricted
 >distribution is a natural law of digital content" -- I don't see
 >piracy as the result of a natural law any more than stealing,
 >cheating, and eavesdropping are.

Sorry, Paul, but Bruce is right.  Without end-to-end encryption (in other 
words, directly inserted from the distributor into the person's senses 
without ever being available in plaintext -- using some technology that I 
doubt could exist or if it could, wouldn't be accepted by consumers), the 
hostile attacker (in other words, the user in the privacy of their home) 
can intercept the communications.

Okay, so the plaintext is available.  Now, I take that plaintext and 
encrypt it.  I tell everyone "Suck down a copy of this file.  I'll tell you 
what it is later; possibly much later."  How is a content "owner" to deal 
with that?  They can't decrypt it.  They don't even know it's their 
content.  They might think they own it if I describe it vaguely, or even 
exactly.  But how to prove it in a court of law?  "Sorry, your honor, I 
don't really know what that file is.  I got it from someone else.  No, your 
honor, I can't give you the key.  Only the encryptor has the key to it, and 
I didn't encrypt it."

The endgame has only two possibilities: 1) the complete prohibition of 
publishing anything with random bits in it, or 2) the death of 
copyright.  #1 is not in the cards, therefore you can count on #2.  Any 
action taken by any content owner to stop copying is just a delaying action 
based on revising their business plan or implementing same.  Any action NOT 
based on the reality of #2 is head-in-the-sand idiocy, to be whacked by 
knowledgeable stockholders.



Reply-To: Vladimir Katalov <vkatalov@elcomsoft.com>
Subject: Re: The Futility of Digital Copy Prevention

 > All digital copy protection schemes can be broken, and once they
 > are, the breaks will be distributed...

Totally agree.  Unfortunately, not all people (especially developers of DRM, 
or whatever you call these "digital copy protection schemes") think so.  Our 
company specializes in password recovery and decryption software; about two 
weeks ago we released a product that decrypts protected Adobe Acrobat PDF 
files and e-books, removing all protections.  We immediately got problems 
with Adobe: they did their best to shut down our site, etc.  The site is 
functional again now, but we don't sell the program anymore.  Adobe also 
'closed' the hole in Acrobat eBook Reader by making a new, improved version, 
but we broke that one as well -- in just 20 minutes.  Our program works like 
a demo -- it decrypts only 25% of a file's contents, just to demonstrate the 
problems with this kind of 'copy protection'.

You can read the whole story about our battle with Adobe (and get our
program) here: <http://www.elcomsoft.com/aebpr.html>



From: Mike <John.Michael.Williams@computer.org>
Subject: The Fallacy of Incomplete Requirements

In your 6/15 newsletter, Ken Ayer <kayer@visa.com> wrote in justification 
of the Common Criteria (CC): "Every vendor (whether of a chip, card, lab or 
consulting service) says it's the best, but we need a way to compare these 
claims...  What Visa has done with the Common Criteria is to start a 
dialogue on how to clearly express security requirements...  We are making 
progress, though there remains work to be done...."

This is the Fallacy of Incomplete Requirements: If we could all just get 
along, and write down what we REALLY want done, we could have those 'droid 
techies fix this annoying aberration.

We informally compare marketing and technical claims all the time, even those 
involving life and limb for ourselves and our families -- for example, 
automotive safety.  Show me the SAE spec for "automotive security," or 
component-wise, cockpit-collapse security, comparable to what the CC ilk want.

Government and professional societies like SAE, IEEE, and ACM (or hybrids like 
CC bureaucracies) can't begin to do this.  Collapse rate of a steering 
column?  Right.  Impact of a side-airbag?  Right.  Tire safety?  Hmmm ... 
check the net for hearings and litigation, possibly criminal, on Ford vs. 
Firestone.

Safety-rating of a cockpit, a vehicle, a model, a brand?  No way.  The 
market and painful experience reject the failures (the pre-'65 Chevy 
Corvairs, and maybe the Ford Explorers, for examples).

What no one acknowledges is an observation I've made since entering the biz 
in '73.  Science generally is extremely poor at detecting fraud and 
deception -- one doesn't get one's doctorate by proving why the big guys' 
ideas don't work, and one doesn't win tenure by winnowing out colleagues' 
detritus.  Some of the biggest suckers for stage magic and plain old 
collusion have been scientists, especially government scientists.

Computer security is unique in an aspect it fails to acknowledge: it 
contains the notion, and realization, of hostile automata.  Let's see a 
requirements spec for that!


** *** ***** ******* *********** *************


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a 
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe, 
visit <http://www.counterpane.com/unsubform.html>.  Back issues are 
available on <http://www.counterpane.com>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of "Secrets and Lies" and 
"Applied Cryptography," and an inventor of the Blowfish, Twofish, and 
Yarrow algorithms.  He served on the board of the International Association 
for Cryptologic Research, EPIC, and VTW.  He is a frequent writer and 
lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security 
Monitoring.  Counterpane's expert security analysts protect networks for 
Fortune 2000 companies world-wide.

<http://www.counterpane.com/>

Copyright (c) 2001 by Counterpane Internet Security, Inc.