Date: Wed, 14 Feb 2001 23:17:01 -0600
To: crypto-gram@chaparraltree.com
From: Bruce Schneier <schneier@counterpane.com>
Subject: CRYPTO-GRAM, February 15, 2001

                  CRYPTO-GRAM

               February 15, 2001

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier@counterpane.com
          <http://www.counterpane.com>


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at 
<http://www.counterpane.com/crypto-gram.html>.  To subscribe or 
unsubscribe, see below.


Copyright (c) 2001 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Hard-Drive-Embedded Copy Protection
      Crypto-Gram Reprints
      News
      Counterpane Internet Security News
      An Intentional Back Door
      The Doghouse: NASA and eTrue
      A Semantic Attack on URLs
      E-Mail Filter Idiocy
      Air Gaps
      Internet Voting vs. Large-Value e-Commerce
      Comments from Readers


** *** ***** ******* *********** *************

     Hard-Drive-Embedded Copy Protection



CPRM (Content Protection for Recordable Media) is a system for enforcing 
copy protection on personal computers.  The basic idea is to enforce 
digital rights management -- copy-prevention, limited use, whatever -- in 
electronic media.

In more detail, the scheme requires specially designed copying 
software.  This software communicates directly with the disk drive, bypassing 
the operating system.  To write a document, first the drive and the 
software authenticate to each other.  Then the drive sends the software 
keying material -- stored in a nonstandard place on the drive and unique 
to the medium -- and the software also reads back an increment-only 
counter in the 
medium.  The user-level application -- or, more likely, a server somewhere 
on the Web -- encrypts the file using that keying material.  The encrypted 
object is written as an ordinary file on the medium.  An intermediate key 
file is written as a second ordinary file.

The "player" for these encrypted objects will pull an increment-only 
counter out of the drive, use it and the keying material to decrypt the 
intermediate key-file, and then extract the document key from that 
file.  It will then play the document.

To move (as opposed to copy) the document to another disk, the software 
will check to determine if this is permissible.  (Perhaps the permissions 
will be embedded in the file; perhaps the software will query another 
computer over the Internet.)  If the move is allowed, the software will 
re-encrypt the document for the new medium (only allowing it to be stored 
in a copy-protected medium), increment the increment-only counter in the 
old medium, generate a new key-file key with the new counter value, and 
rewrite the old key-file, deleting the key that would allow the old copy to 
be played.  After moving the document, even if the user keeps a copy of the 
encrypted bits, it won't play on the original medium because its key won't 
be in the key-file on that medium.

If a user copies the encrypted object to another medium without going 
through the approved procedure, its key won't be in the key-file on the new 
medium, so the player can't play it.  If the user copies both the 
encrypted object and the key-file to another medium, the key-file won't be 
decryptable, since its key depends on the medium-specific keying 
information.  If the user makes a backup copy of his 
entire disk, "moves" the encrypted song onto another medium, then scrubs 
and restores the entire original disk, the restored key-file won't be 
decryptable, since the increment-only counter (that is hashed with the 
medium-specific keys to produce the key-file key) will have changed.
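
To make the counter trick concrete, here's a minimal sketch in Python.  
The names and the hash are illustrative stand-ins, not the actual CPRM 
derivation:

  import hashlib

  def key_file_key(medium_key: bytes, counter: int) -> bytes:
      # The key protecting the key-file is derived from the
      # medium-specific keying material and the increment-only counter.
      return hashlib.sha1(medium_key + counter.to_bytes(4, "big")).digest()

  medium_key = b"medium-unique-secret"  # hypothetical per-medium key
  counter = 7                           # increment-only counter on the medium

  k_old = key_file_key(medium_key, counter)

  counter += 1  # "moving" the document bumps the counter in the drive itself
  k_new = key_file_key(medium_key, counter)

  # A restored backup of the old key-file was encrypted under k_old, but
  # the drive will only ever derive k_new from now on.
  assert k_old != k_new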

There are other tricks built into the system.  There's no single global 
secret to steal, and there's a mechanism to recover security if some of the 
many global secrets get out.  The system is based on something called 
"broadcast encryption," developed by Amos Fiat and Moni Naar in 1993.

The technology will be ineffective, but that may not matter.

Broadly speaking, there are three classes of people who copy 
documents.  There are average users, who just want a second copy for 
whatever reason but won't use hacker tools.  There are more savvy users, 
who are willing to download programs that break copy-protection 
schemes.  And there are professionals, who are prepared to spend serious 
money to break copy-protection schemes.

Against the first group, any security measure works.  This hardware scheme 
is overkill.  Against the second group, any scheme that involves software 
fails.  I've written about this extensively both in _Secrets and Lies_ (see 
pp. 250-253) and in a previous issue of Crypto-Gram.  Basically, the scheme 
described above has a key stored in hardware and a software decryptor.  To 
break the scheme, you don't need to extract the hardware key.  You can let 
the decryption software do it normally, and then grab the document after 
decryption and before play.  Someone will write software to do this, just 
as someone has written software to get around every other similar 
scheme.  The hardware component doesn't matter.
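
Here's a toy illustration of the attack's shape, in Python.  This is not 
CPRM or any real player; the point is only that a software player must 
hand the plaintext to *some* output stage, and the attacker picks the 
output stage:

  def play_protected(decrypt, ciphertext, output):
      plaintext = decrypt(ciphertext)  # the key can stay in hardware...
      output(plaintext)                # ...plaintext is now in host memory

  # Stand-in "decryptor" (XOR) just so the example runs.
  decrypt = lambda c: bytes(b ^ 0x42 for b in c)
  ciphertext = bytes(b ^ 0x42 for b in b"the document")

  # Legitimate player: output goes to the screen or speakers.
  play_protected(decrypt, ciphertext, lambda p: None)

  # Hostile "player": same decryptor, different output stage.
  captured = []
  play_protected(decrypt, ciphertext, captured.append)
  assert captured == [b"the document"]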

Where it will make a difference is in devices that don't expose the 
decrypted document.  The reason the computer embodiment fails is that 
the document exists unencrypted in the computer, and a hacker can write a 
program to take advantage of that.  If this copy protection is brought 
forward to the video monitor, or the speakers, then the document never 
exists in the computer in unencrypted form.  If the scheme only runs on DVD 
players or MP3 devices or anything else where you can't run custom 
software, this is much more effective.

But it still doesn't work against the third class of attackers: the 
professionals.  These are people willing to invest in custom 
hardware.  They will always be able to break these schemes and extract the 
documents.  And they will always be able to produce and sell bootlegs, at 
least to the limits of law enforcement in whatever country they're in.

There is another angle here, making this even more complicated.  Content 
providers are no longer relying on technology to enforce copy protection; 
they're relying on laws.  The algorithms used in this scheme will be 
patented, so anyone who writes a hacked decoder will be infringing on the 
patent.  And any software designed to circumvent this mechanism will be 
illegal under the Digital Millennium Copyright Act.  Not only can the 
authors of this software be prosecuted, but so can people who "traffic" in 
this software: e.g., post or link to it on their Web site.

This will not make it any harder to find such circumvention software -- 
notice how easy it is to find DeCSS today with your search engine -- but it 
will have a chilling effect on the whole idea.  2600 Magazine was 
successfully sued for linking to DeCSS; similar pressure will be 
brought to bear against anyone who publicizes any DeCPRM software.

So, what do we have here?  We have a serious threat to civil liberties: 
large entertainment companies are allying themselves with the computer 
industry to dictate what can and can't happen on your hard drive.  (CPRM is 
only supposed to be for flash memory.  This is a lie, of course.  Already 
it is planned for IBM's tiny hard drive, and larger drives aren't far 
behind.)  We have a technology that will, in some circumstances, make 
backups impossible.  Compatibility problems between disk drives that have 
CPRM and those that don't will force networks to completely upgrade their 
mass storage.  We have a technology that forces users to buy proprietary 
decoding software forever.  We have a technology that won't really work 
unless it extends to computer output devices; you may find yourself forced 
to upgrade your monitor as well to watch movies on your computer.  And we 
have an increased reliance on legal harassment by media companies.  It's 
that last bit that scares me the most.

The proposal:
<http://www.theregister.co.uk/content/2/15620.html>
<http://www.lmicp.com/4centity/data/tech/cpsa/cpsa081.pdf>
<http://www.lmicp.com/4centity/data/tech/4cspec.pdf>
<http://www.theregister.co.uk/content/2/15718.html>
<http://www.theregister.co.uk/content/2/15797.html>

What's Wrong with Copy Protection, by John Gilmore:
<http://www.toad.com/gnu/whatswrong.html>

Copy protection and why it doesn't work:
<http://www.counterpane.com/crypto-gram-9811.html#copy>

EFF's archives on the topic:
<http://www.eff.org/IP/>

The 4C Entity (IBM, Intel, Toshiba and Matsushita), which owns and 
advocates CPRM:
<http://www.dvdcca.org/4centity/>


** *** ***** ******* *********** *************

             Crypto-Gram Reprints



Distributed denial-of-service attacks:
<http://www.counterpane.com/crypto-gram-0002.html#DistributedDenial-of-ServiceAttacks>

Publicizing vulnerabilities:
<http://www.counterpane.com/crypto-gram-0002.html#PublicizingVulnerabilities>

Recognizing crypto snake-oil:
<http://www.counterpane.com/crypto-gram-9902.html#snakeoil>


** *** ***** ******* *********** *************

                      News



Nineteen technology companies have joined forces to share data on system 
vulnerabilities and Internet threats.  Called the Information Technology 
Information Sharing and Analysis Center (IT-ISAC), the group is supposed to 
work with the government to head off future cyberattacks on the group's 
members.  I have very mixed feelings about this.  On the one hand, sharing 
information is a good idea, and we need more of this.  On the other hand, 
the problem with sharing information among group members is that it remains 
a secret from the rest of the world.  The last thing we want is a divide of 
haves and have-nots, with only those in special clubs being secure.
<http://www.msnbc.com/local/rtar/m8943.asp>
<http://www.zdnet.com/zdnn/stories/news/0,4586,2674800,00.html>
<http://www.wired.com/news/politics/0,1283,41212,00.html>

Princeton University's Ed Felten is not going to publish details about how 
he broke the Secure Digital Music Initiative (SDMI) watermark challenge, 
because of the prosecution provisions of the 1998 Digital Millennium Copyright 
Act (DMCA).
<http://www.theregister.co.uk/content/6/16107.html>
<http://www8.nytimes.com/2001/01/15/technology/15TUNE.html>

A rebuttal to my essay last month about certifying security products:
<http://www.avolio.com/columns/ProdCert.html>

The future of operating system security:
<http://securityportal.com/cover/coverstory20010115.html>

Latest news on U.S. crypto export controls:
<http://www.zdnet.com/zdnn/stories/news/0,4586,2673461,00.html>

Testimony on evaluating voting machines, which makes the point that voting 
machines should never have secret or proprietary parts.  A bit long, but 
worth reading.
<http://www.cs.uiowa.edu/~jones/voting/uscrc.html>
And a U.S. government report on the problems of computerized voting from 
1988.  Lots of lessons that should be learned.
<http://www.nist.gov/itl/lab/specpubs/500-158.htm>

A Linux worm -- called Ramen -- is working its way through the 
Internet.  Some consider this retribution for the smugness of Linux users 
about the security of their operating system versus Windows.  That misses 
the point; of course all operating systems will have vulnerabilities.  The 
two interesting points are: 1) default installations of Red Hat Linux are 
insecure, just like default installations of Windows, and 2) just because 
patches exist for a vulnerability doesn't mean we're safe.
<http://news.cnet.com/news/0-1003-200-4508359.html>

Why the government needs to protect individual privacy on the Internet, 
while staying away from regulating content, information, and ideas.
<http://www.oecdobserver.org/news/fullstory.php/aid/409>

A year after the highly publicized distributed denial-of-service attacks, 
the Internet is still vulnerable:
<http://cgi.zdnet.com/slink?76741:8469234>

The EFF has petitioned a federal appeals court to overturn a lower court's 
interpretation of the DMCA to mean that the 2600 Web site must remove links 
to DeCSS.
<http://news.excite.com/news/zd/010119/17/eff-on-decss>
<http://www.computeruser.com/news/01/01/22/news14.html>
<http://news.cnet.com/news/0-1005-201-4537831-0.html>
<http://www.geek.com/news/geeknews/2001jan/gee20010122003915.htm>

Leo Marks, WW II cryptographer and author of _Between Silk and Cyanide_, has died:
<http://www.theregister.co.uk/content/4/16308.html>

Often, security vulnerabilities are never patched:
<http://news.cnet.com/news/0-1007-201-4578373-0.html?tag=btmprm>

The 3G cell phone encryption algorithms are public.  This is a welcome 
change from the industry's usual practice of secret cryptography.  Bravo.
<http://www.research.att.com/~janos/3gpp.html>

After all these years, the Net still isn't a safe place (no surprise):
<http://computerworld.com/cwi/story/0,1199,NAV65-663_STO56690_NLTs,00.html>

All vulnerability scanners miss things, but the open source (and free) 
Nessus was judged to be the best:
<http://www.networkcomputing.com/1201/1201f1b1.html>

Children are a significant personal privacy risk:
<http://www.ecommercetimes.com/perl/story/6942.html>

Master's thesis:  Cryptographic Software Export Controls in the EU.
<http://ethesis.helsinki.fi/julkaisut/oik/julki/pg/parviainen/>

DirecTV scored a direct hit against pirates.  Over the course of a few 
months it surreptitiously broadcast, byte by byte, a program that allowed 
it to permanently disable pirate DirecTV access cards.  On January 21st, 
they triggered the program.  Supposedly this knocked out 98% of cracked 
cards.  My favorite tidbit is that they wrote "GAME OVER" into an affected 
area of memory.  The pirate community is already working on hardware 
workarounds and, supposedly, the cracked cards that use emulation are easy 
to fix.  So while DirecTV won this battle, the war goes on.
<http://www.securityfocus.com/news/143>
<http://news.bbc.co.uk/hi/english/sci/tech/newsid_1138000/1138550.stm>
<http://slashdot.org/articles/01/01/25/1343218.shtml>
<http://www.hackhu.com/>

Basic cryptanalysis of RSA's formerly proprietary SecurID hash function.
<http://www.atstake.com/research/reports/initial_securid_analysis.pdf>

The NSA is trying to design a hack-proof computer.  I don't know, but this 
seems like a poorly thought out idea.
<http://www.zdnet.co.uk/news/2001/4/ns-20672.html>

Classic snake oil:
<http://www.securesystems.com.au/>
It's secret, and they even have a contest.

More snake oil: cryptanalysis of "Encrypted Magic Folders":
<http://www.accessdata.com/emf_cryptanalysis.html>

It seems that private keys are exposed in the Java security database.  This 
explanation is worth reading.
<http://www.amug.org/~glguerin/security/jdk-1.1/exposure.html>

A bunch of cryptographers, myself included, wrote an amicus brief for the 
DeCSS case that EFF is supporting:
<http://eon.law.harvard.edu/openlaw/DVD/NY/appeal/000126-cryptographers-amicus.html>

Optical vote readers can be worse than punch cards, according to a 28 Jan 
01 LA Times article.  Some systems reject ballots because the voter marked 
the oval for a candidate and then also wrote the same candidate's name in 
the write-in space.  Personally, I don't see how that can be interpreted as 
anything but "the clear intent of the voter."  My favorite blunder has to 
be the counting machines that rejected mail-in absentee ballots because 
they detected the fold in the paper (unavoidable in order to actually mail 
the ballot in the legal envelope) as a second vote.  (If you want to read 
the article online, you have to pay for it at www.latimes.com.  The idiots 
only keep their links free for a few weeks.)

Excellent article on why micropayments don't make business sense.
<http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html>

Fascinating thought piece on the future of "digital rights management":
<http://www.discover.com/feb_01/gthere.html?article=featnapster.html>

There have been reports that RSA has been cracked by a Filipino "math 
enthusiast."  Actually, the cracking algorithm is slower than factoring.
<http://www.zdnetasia.com/news/dailynews/story/0,2000010021,20178050-1,00.htm>
An e-mail conversation with Ron Rivest about the algorithm:
<http://www.seedmuse.com/rsa_edit.htm>

Crypto break of the 802.11 wireless LAN encryption protocol (Apple's 
AirPort, for example).  Real-time decryption of traffic is possible.  Nice 
work.
<http://www.isaac.cs.berkeley.edu/isaac/wep-faq.html>
<http://www.zdnet.com/zdnn/stories/news/0,4586,2681947,00.html>
<http://www.wired.com/news/technology/0,1282,41612,00.html>


** *** ***** ******* *********** *************

       Counterpane Internet Security News


Schneier's "Why Cryptography is Harder than it Looks" has been published on 
a German Web site:
<http://legamedia.de/nl/nl.php3?nl=a6577c331eb4a55ac8e738e74ff06f86>


** *** ***** ******* *********** *************

             An Intentional Back Door



It's one thing to have an attacker add a back door to your system for later 
unauthorized access.  It's quite another to deliberately do it to yourself.

It seems that Borland did this very thing with its Interbase database.  All 
versions released for the past seven years (versions 4.x through 6.01) have 
this back door.  How it came about and how it was discovered are instructive.

Versions of Interbase before 1994 didn't have any access-control 
mechanisms.  This was fixed in version 4.0 using a peculiar system.  Since 
they had a database, the engineers created a special database within 
Interbase for account names and encrypted passwords.  This solution had a 
problem: in order to authenticate a user, the program had to access the 
database, but before the program could access the database, it had to 
authenticate a user.

Now there are many ways to solve this problem, but hard coding the username 
"politically" and the password "correct" is not the one I would have 
chosen.  But it is the one Borland used.  This is the back door; anyone using 
that username and password can access any Interbase database.
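
In sketch form (my reconstruction of the logic, not Borland's actual 
code; the database-lookup stub is hypothetical), the back door looks 
like this:

  HARDCODED_USER = "politically"
  HARDCODED_PASS = "correct"

  def authenticate(username, password):
      # The back door: one credential pair is accepted unconditionally,
      # sidestepping the chicken-and-egg problem described above.
      if username == HARDCODED_USER and password == HARDCODED_PASS:
          return True
      # Normal path: look the user up in the special accounts database.
      return check_account_database(username, password)

  def check_account_database(username, password):
      return False  # stand-in for the real lookup; details omitted

  assert authenticate("politically", "correct")  # always gets in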

Lesson one:  Deliberately adding back doors creates a security problem 
whose magnitude is unknown and changes over time.  I call this the "window 
of exposure."  The moment this product was shipped, the vulnerability 
existed.  But as long as no one knew about it, it wasn't a problem.  At 
this point we have no idea if anyone in the hacking underground knew about 
the vulnerability, or if any criminals took advantage of it.  Certainly the 
programmers who coded the "feature" knew about it.  They could have told 
others.  Word could have gotten around.

Now the vulnerability is public; everyone knows about it.  Within days of 
the announcement, there were reports of scans looking for the 
vulnerability.  Borland issued a patch, but who knows what percentage of 
users will patch their systems.  The vulnerability will remain in many 
systems for years.

Lesson two: Open source helps security, sometimes.  Interbase was made open 
source in July 2000.  This vulnerability was discovered six months later by 
a German software developer.  If he hadn't discovered it, maybe we would 
still not know about it.  If someone had looked at the code sooner, maybe 
we would have known sooner.  Open source means that more people examine the 
source code, but it is no guarantee that vulnerabilities will be found or 
-- even if found -- fixed properly.  If the person who discovered the 
vulnerability had been intent on breaking into systems, this would have played 
out much differently.

Lesson three:  In a world with no liabilities, trust is transitive whether 
you like it or not.  Company X's customers trust Company X.  If Company X 
was using Interbase, those customers were also trusting Borland...only they 
didn't know it.  Those customers are also trusting Company X's hardware and 
software suppliers, ISP, etc., etc., etc.  If Company X had some sort 
of legal liability towards its customers, then this transitive relationship 
would not exist.  But because the customers have no recourse if Company X 
screws up, it does.

Back doors have the unfortunate property of being all or nothing.  It's 
like leaving your house key under the mat.  If no one knows about it, it's 
pretty safe.  If everyone knows about it, it makes your door lock 
useless.  Borland certainly belongs in the doghouse for this one.

<http://www.securityfocus.com/news/136>
<http://www.windowsitsecurity.com/articles/index.cfm?articleID=16566>
<http://www.cert.org/advisories/CA-2001-01.html>


** *** ***** ******* *********** *************

         The Doghouse: NASA and eTrue



NASA is testing a biometric authentication system that works over the 
Internet.  I've repeatedly written, both in these pages and in _Secrets and 
Lies_, about the dangers and insecurities of remote biometric 
authentication.  But if that weren't enough, the CEO of the biometric 
company eTrue was reported to have said: "And anyone attempting to hack 
into a NASA system will have his or her own biometric characteristics 
logged and recorded."  Why would anyone attempting to break into a network 
send his *own* fingerprint to be authenticated?  (Note that it is not a 
direct quote in the article, which may mean that the reporter is to blame 
for this.)

<http://www.fcw.com/fcw/articles/2001/0101/web-nasa-01-04-01.asp>

My essay on biometrics:
<http://www.counterpane.com/crypto-gram-9808.html#biometrics>


** *** ***** ******* *********** *************

           A Semantic Attack on URLs



This is clever.  Last month I received an e-mail that said:

Check out breaking news at CNN:
<http://www.cnn.com&story=breaking_news@18.69.0.44/evarady/www/top_story.htm>

(Unfortunately, the URL no longer works.  But stick with me.)  At first 
glance, this looks like a CNN URL.  But the URL neither leads to nor 
redirects from cnn.com.  The page is not CNN's.  The URL is a clever 
hack that plays with people's assumptions about what a URL is supposed to 
look like.

Here's how it works.  An MIT student created a fake Web page and put it up 
on his Web site at:

<http://salticus-peckhamae.mit.edu/evarady/www/top_story.htm>

He then sent out the first URL above.  If you examine that URL carefully, 
you can see that the host name is not "www.cnn.com" but "18.69.0.44," which 
is the same as salticus-peckhamae.mit.edu.  (For extra obfuscation, he 
could have converted that IP address to a single decimal number.)  That 
entire bit before the @-sign -- "www.cnn.com&story=breaking_news" -- is a 
"username," something allowed by the URL specification but rarely used in 
actual URLs.
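
You can watch a URL parser reach the same conclusion.  (The example uses 
Python's modern standard library -- an anachronism here, but the parsing 
rules are the same.)

  from urllib.parse import urlsplit

  url = ("http://www.cnn.com&story=breaking_news"
         "@18.69.0.44/evarady/www/top_story.htm")
  parts = urlsplit(url)

  print(parts.username)  # 'www.cnn.com&story=breaking_news' -- decoration
  print(parts.hostname)  # '18.69.0.44' -- the server you actually visit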

This is a really clever example of a semantic attack: one that targets 
people and meaning rather than computer syntax.  The attacks are obvious: 
someone could send a fake e-mail purporting to be from www.whatever.com, 
telling recipients to click on a URL for a free gift.  The URL would look like it came from 
the Whatever company, but would instead go to a look-alike site that 
harvests the usernames and passwords.

Most Internet users have no idea what a URL is supposed to look like, let 
alone how to parse one.  In a world where there is no real way to validate 
anything, the URL has become the means that people use to determine the 
source of a Web page.  (Does anyone EVER examine a public-key 
certificate?)  But if URLs can play with our expectations of what they 
should look like, what can we do?

Semantic Attacks:
<http://www.counterpane.com/crypto-gram-0010.html#1>


** *** ***** ******* *********** *************

             E-Mail Filter Idiocy



As long as we're mentioning semantic attacks, let's talk about semantic 
defenses.  Many companies filter incoming e-mail for viruses, Trojans, 
spam, etc.  Some of these filters blocked the January Crypto-Gram.

Ten copies were blocked because they contained the character string 
"ILOVEYOU" -- even though they were plain text with no attachments.  This 
makes no sense to me:  it's laughably easy to change ILOVEYOU to use some 
other subject line, but good luck getting it to spread through plain text 
e-mail.  (I suppose you could put the source code in the message and just 
hope the recipient would compile it.)  So what the filter really does is 
block e-mail containing information about the Trojan, not to mention this 
issue of Crypto-Gram.

But that bit of overreaction pales in comparison to this bounce message I 
received:  "MailMarshal (an automated content monitoring gateway) has 
stopped the following e-mail for the following reason:  It believes it may 
contain unacceptable language terms, or inappropriate material....

"MailMarshal Rule: Inbound Messages: Block Unacceptable Language, Script 
Profanity Porn and Racism Triggered, Expression: blow AND job Triggered 1 
times weighting 5.  For more information on e-mail virus scanning, security 
and content management, visit http://www.crypsys.nl"

That's right, MailMarshal blocked Crypto-Gram because in one place I used 
the word "blow" and then a couple of thousand words later I used the word 
"job."  And the sad thing is that the very same e-mail filter will block 
this issue of Crypto-Gram for the very same reason, and the recipient will 
never know.
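
A rule like that is only a few lines of code, which may explain how it 
shipped.  Here's a sketch of my reading of the rule, inferred from the 
bounce message:

  def blocks_message(text):
      # "Expression: blow AND job" -- both words anywhere in the
      # message, however far apart, whatever the context.
      words = text.lower().split()
      return "blow" in words and "job" in words

  issue = ("... a blow against pirated access cards ... "
           "two thousand words pass ... "
           "let the decryption software do its job ...")
  assert blocks_message(issue)  # bounced, unread by any human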


** *** ***** ******* *********** *************

                     Air Gaps



Whale Communications has been marketing something called e-Gap, which they 
claim is an "air gap" between two networks.  Basically, the system consists 
of two servers.  One is connected to the Internet and the other to the 
internal network.  The two servers only connect through the e-Gap system, a 
SCSI-based memory device that gets toggled between them.  The two servers 
are never directly connected.

This is an interesting idea, but it's not an air gap.

What e-Gap really does is create a proxy connection between two 
computers.  It's a slow connection.  It's a very limited connection; the 
system strips away all network layers below the session layer.  What that 
means is that if you set up a system using e-Gap and an intruder were to 
break into the Internet server, he could not obtain TCP/IP connectivity to 
the internal server.  This certainly increases the security of the back-end 
server.

Nonetheless, the intruder can still access the back-end server as a regular 
client.  The intruder can still break into the internal system by 
exploiting any vulnerabilities above the transport layer.

The whole point of an air gap is that there is no automated connection 
between the two devices.  It's not simply that there is no physical 
connection between the devices most of the time, but that any logical 
connection between the two devices is not automated.  If the Internet 
server and the back-end server were on opposite sides of a room, there 
would be an air gap between them.  To connect the two computers, a user has 
to walk a floppy disk across the room.  For an attacker to attack one 
computer from the other, he needs to be physically present.  Even if an 
attacker gains access to the Internet server remotely, he cannot bridge the 
air gap to the back-end server.

While Whale can claim that systems using e-Gap are "completely 
disconnected at all times," the truth is that the switch operates 
automatically at all times.  There is always a logical connection between 
the systems connected by their device.  And that connection is subject to 
remote attack, and possible compromise.
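
The distinction is easy to see in code.  Here is a toy model of a toggled 
shared buffer -- illustrative only, not Whale's implementation -- in which 
the two servers never touch each other directly, yet data flows between 
them with no human in the loop:

  import queue, threading

  buffer = queue.Queue(maxsize=1)  # stands in for the toggled memory

  def outside_server(requests):
      for r in requests:
          buffer.put(r)            # device toggles toward the inside

  def inside_server(count, handled):
      for _ in range(count):
          handled.append(buffer.get())  # and back, automatically

  handled = []
  t = threading.Thread(target=inside_server, args=(2, handled))
  t.start()
  outside_server(["normal client request", "application-layer exploit"])
  t.join()

  # Everything the outside server writes reaches the inside server
  # without anyone walking a floppy across the room.
  assert handled == ["normal client request", "application-layer exploit"]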

I'm not saying that this is a bad product -- it sounds like a good product 
-- but it is not an air gap.  Calling it one is deceptive marketing.  Kind 
of like calling a stream cipher a one-time pad.


Whale's page describing their technology:
<http://www.whale-com.com/fr_0300.htm>
They call it "impenetrable."  Also note that on their home page they don't 
just call it an air gap but a *physical* air gap, just in case someone 
might have wanted to give them the benefit of the doubt.

A response to critics by someone with Whale:
<http://lists.gnac.net/firewalls/mhonarc/firewalls.199911/msg00269.html>

Hall of shame puff piece:
<http://www2.cio.com/archive/050100_development_content.html>

Whale isn't the only one.  Here's a review of six "air gap" products:
<http://www.infosecuritymag.com/articles/july00/cover.shtml>

Airgap Networks, which offers few details about its product, is notable for 
actually defining "air gap" (albeit in an Orwellian manner).
<http://www.airgap.net/what.html>


** *** ***** ******* *********** *************

   Internet Voting vs. Large-Value e-Commerce



One of the odder comments I've heard in the debate on Internet voting is 
the following: "If we can protect multi-billion-dollar e-commerce 
transactions on the Internet, certainly we can protect elections" (or words 
to that effect).  I've heard it so often that I feel the need to explain 
why it isn't true.

There are two important differences between large financial transactions 
and voting that make the former much more suitable for networked 
implementation: anonymity and recovery.

In _Secrets and Lies_, I made the point that electronic financial systems 
based on identity (electronic credit cards, electronic checks, PayPal, 
etc.) are much more likely to be implemented than electronic cash because 
the former is much easier to secure.  Large financial transactions all have 
names attached: who gets the money, who loses the money.  Votes only have 
the names of the recipients attached; the whole point of a secret ballot is 
to remove the name of the voter.  This makes it much harder to protect the 
system from fraud, much harder to detect fraud if it happens, and much 
harder to identify the perpetrator and arrest him.

Another difference between large financial transactions and voting is that 
you can unwind a financial transaction.  This is important.  If someone 
manages to steal a billion dollars from a financial system, you can freeze 
the transaction, try to figure out what happened, and hopefully return the 
money.  If someone manages to hack the vote, there's nothing you can 
do.  This is the lesson from Florida: even in the face of a confusing 
ballot, manipulation of absentee ballot applications, widespread voter 
irregularities, and voting technology that disproportionately 
disenfranchised minorities, there was nothing that could be done.  The vote 
was taken on Election Day, and that's that.  Revoting would have been even 
more unfair, because it is impossible to recreate the conditions at the 
time of the original vote.  Our voting system doesn't allow for the same 
ability to redo transactions that our financial systems do.

There's another, less important, difference between large financial 
transactions and voting: in the latter, appearances matter.  If someone 
claims to have stolen a billion dollars and no one seems to have lost it, 
you can safely ignore his boast.  On the other hand, if some political 
group claims, on election night, to have hacked the vote...what do you 
do?  You can't disprove the claim.  You can't redo the vote to make sure it 
is accurate.

Building a secure Internet-based voting system is a very hard problem, 
harder than all the other computer security problems we've attempted and 
failed at.  I believe that the risks to democracy are too great to attempt it.

Quotes:

Phil Noble, director of politicsonline.com, said "...if the largest banks 
in the world transfer billions of dollars every day electronically we can 
use the same technology to ensure secure voting."
<http://www.politicsonline.com/pol2000/electorales_launch/semana-959/>

 From "Analysis of Internet Voting Protocols," by Andre M. Chernay: 
"Currently existing software is adequate to protect the integrity of the 
ballot once the ballot gets into the Internet pipeline.  Transactions 
currently performed over the Internet include e-commerce, the transferring 
of funds around the world, and the purchasing and selling of stocks.  For 
example, 100 million Americans will go online this year and spend almost 
$12 billion in online purchases."
<http://www.mcgeorge.edu/cir/analysis_internet_voting.htm>

This same argument was made on an All Things Considered radio segment 
"Allow People to Register to Vote Online" on 10 Aug 1999.
<http://search.npr.org/cf/cmn/cmnps05fm.cfm?SegID=55141>

"'Vote Integrity on the Internet Is No Different From Billion Dollar 
Transactions,' Says Chris Kenber, President and CEO of Hifn."
<http://biz.yahoo.com/prnews/001207/ca_hifn_vo.html>

From an article called "About (tele)democracy":  "First, we experience 
voter fraud now and always have lived with it.  Today claims about voting 
irregularities are heard at nearly every election.  Second, if there is an 
incentive to commit electronic fraud, surely money is the prime 
motivation.  Yet, every day, hundreds of billions of dollars move through 
the banking system with good security.  Fraud exists, but it has been 
policed and prosecuted."
<http://www.pels.net/artikel/004.htm>


** *** ***** ******* *********** *************

             Comments from Readers



From: Richard <richard@away32.com>
Subject: Underwriters Laboratories

I used to work for a company that manufactured electric hospital beds and 
was the liaison to UL.  I became very familiar with their modus operandi, 
and I know how the public continues to be hoodwinked.

First, they take no legal responsibility for a product (if it should fail), 
by saying that the product is "Listed" not "Approved".  Listing simply 
means that the product met some minimal standards that were in effect at 
the time the product was made.  For encryption software, cracking simply 
means that that version met the old standard, and the software company 
needs to submit a new version to meet the new standard.

Next, they require the manufacturer to pay an annual fee to keep up the 
listing, along with allowing a UL inspector to visit the premises and check 
various samples periodically.  The manufacturer naturally paid extra for 
this.  If the manufacturer wanted to make changes to a product in a listed 
area, they had to submit the change beforehand, and only when UL accepted 
it (charging extra, of course) could the manufacturer put it into 
production.  If the inspector found an unauthorized change in the product, 
he could tell the manufacturer they could not ship any more product with 
the UL label on it.  Since the UL logo is usually part of the regularly 
applied product packaging, he was, in effect, halting production.  This 
means that if a software company wanted to improve or fix a bug in their 
product, they couldn't sell it until UL was satisfied.

Finally, every electrically savvy buyer of electrical products can look at 
the guts of a product and pretty well tell if it is reasonably "safe".  The 
consumer, on the other hand, is made to believe that the product is totally 
safe.  The truth is that UL's standards go through an evolution, just like 
any manufacturer's products, and UL makes mistakes and fixes its standards 
accordingly.  So any defect in any given piece of UL Listed encryption 
software would simply mean revisions to their standards.

All of this means money for UL, with no responsibility for any damage done.

Your last paragraph on "Ethical Extortionist" hits the nail exactly on the 
head.  It plays right into the current public thinking that we need some 
"authority" to validate every facet of our lives.  Authority is one of the 
six principles of influence as explained by Dr Robert Cialdini's 
http://www.influenceatwork.com/6principles.html article on influence.

I guess what I am trying to point out is that while you and I know that UL 
is a farce, they will probably be involved and come out smelling like a 
rose.  The only way for them not to be involved is if all the encryption 
companies band together and ignore them.  But that "authority" stamp of 
approval, excuse me, listing is too much of a selling point to be ignored.


From: Nick Rozanski <nick@rozanski.com>
Subject: The "Insurance" Model of Security

When most of us think about security, we view it as a binary quality of a 
system -- so we ask questions like "Is this system secure?"  The correct 
answer to this question is always "no" -- *any* security architecture can 
be broken if there is someone willing to invest the effort, time, and/or 
money to break it.

However, this approach is flawed.  A much better way is to recognise that 
it is impossible to implement totally foolproof security -- the best we 
can do is to decide how *resistant to attack* our security architecture 
needs to be, which we can do by answering two questions.  Firstly, "what is the 
cost to me if my security architecture is compromised?" And secondly (not 
always the same thing) "what is the benefit to someone else of compromising 
my security?"

It is important to understand that the answers to these questions come from 
the business, not from technologists.  (All we have to do is to design and 
build it!) Furthermore, in an ideal world it should be possible to 
objectively *demonstrate* that the system provides the required level of 
security -- in the same way that we are expected to demonstrate conformance 
to other classes of requirement (functional, performance, etc.) as part of 
acceptance.

Where we are today, this is very hard, not least because of the difficulty 
in quantifying, or better still eliminating, the human factors which so 
often seem to be involved in security breaches.  In practice, demonstration 
of conformance to security requirements (or otherwise!) usually takes place 
after systems go live.

The final question to ask is "how will I respond if (when) my security 
architecture is eventually breached?"  This is a problem which needs to be 
addressed by both sides -- technology to "tighten up" the security holes 
which have been exposed, and business to deal with the commercial fall-out 
and make the needed organisational changes.  The answers to these questions 
should form part of any security model as much as the technology components do.


From: Greg Guerin <glguerin@amug.org>
Subject: On "Code Signing in Microsoft Windows"

Here's a relatively simple attack on the code-signing in Whistler, 
regardless of who controls it or what it turns out to be.

1) Create an individual Web-based e-mail account on Hotmail, representing 
yourself as a sole proprietorship software developer.  For the truly 
criminal, falsify all the information.

2) Register as a developer with Microsoft, providing the Hotmail e-mail 
address.  The basic Microsoft registration is probably free.

3) Buy a genuine Authenticode code-signing key.  This may be a non-trivial 
expense though probably not more than a few hundred bucks.  For the truly 
criminal, pay for it with a stolen credit-card number.

4) Once you have the code-signing certificate, sign your malware with it, 
then distribute in the usual way.  If your malware subtly sabotages the 
code-signing system itself, say by subverting the root key, so much the better.

5) Make the private key widely public by posting the private key and the 
code-signing certificate on a H4KOR 51TE, or by other means.  One truly 
wicked possibility is to incorporate the private key and the cert itself as 
part of the malware, creating a self-signing worm.

6) Don't tell Microsoft, or the certificate provider, or anyone else that 
your private key has been "compromised."  For the truly criminal, delete or 
abandon the falsified identity.

Eventually your code-signing certificate will be revoked (we hope).  Until 
that happens and the Certificate Revocation List eventually gets around, 
your malware can go anywhere and do anything.  It is, after all, completely 
authentic and trusted as far as the code-signing process is concerned.  And 
if there's no CRL at all, then your malware can survive until its 
certificate expires or an antidote is released.
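
Here's a sketch of that revocation window (illustrative names; real 
code-signing checks are more elaborate): until the local CRL catches up, 
a valid signature from a leaked key still verifies as trusted.

  def is_trusted(cert_id, signature_valid, local_crl):
      # Trust = valid signature AND cert not yet on the local CRL.
      return signature_valid and cert_id not in local_crl

  local_crl = set()                                   # stale local copy
  print(is_trusted("malware-cert", True, local_crl))  # True: runs anywhere

  local_crl.add("malware-cert")                       # update finally lands
  print(is_trusted("malware-cert", True, local_crl))  # False: blocked at last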

There is some monetary cost involved in this attack, but if a group of even 
petty criminals pools their cash, it's basically pocket-change.  And if 
they profit from the attacks in some way, then they can create identities 
and buy keys on the fly, staying ahead of the CRL effectively forever.

This attack shows what happens when one of the main premises inherent in 
the crypto-system is flagrantly violated.  In this case, the intentional 
widespread dissemination of the private key makes everything else in the 
system, including accountability, moot.


From: Phil Pennock <Phil.Pennock@globnix.org>
Subject: On "Code Signing in Microsoft Windows"

There's another way to defeat this.  If a program is signed and running, 
then it's henceforth trusted.  If there's a buffer overrun allowing someone 
to smash the stack, then their code is presumably trusted  --  how would 
Microsoft prevent it?  Stackguard-like technologies go some way towards 
preventing the more trivial attacks, but that's it.

So you have a buffer overrun in, say, an e-mail client.  Given Microsoft's 
history, it isn't far-fetched to suppose that it's the default client which 
Microsoft ships (but then, the same can be said for c-client).  The lack of 
adequate bounds-checking is discovered, someone writes a worm which goes 
around, lugging extra code as needed (e.g., along the lines of Samhain), 
and disables the checking on the system.  Hey presto ...


From: Leonardo Humberto Liporati <liporati@almg.gov.br>
Subject: Brazilian Voting Machine

The description of the modified PC used in the Brazilian elections has an 
error.  The machine doesn't have a small ballot box attached to it.  So the 
machine doesn't print and slip every vote into the ballot box.  At the end 
of voting the machine only prints some copies of a final report with totals 
by candidate.  This is the most criticized "weakness" of the machine; there 
is no way to recount the votes because there is no physical proof of the 
vote, just a digital record on a floppy and in an internal flash memory 
card.  This way the machine fails to achieve the AUDIT attribute, a 
failure reinforced 
by the fact that the modified PC runs a proprietary operating 
system/software whose source code is a commercial secret and can't be 
reviewed.  Also the machine is said to use classified encryption 
software/algorithm developed by ABIN (the Brazilian secret service) that 
isn't available for public review.  This way, when the machine prints the 
initial "zero votes" report, no one can be sure that it represents the 
"initial state," just that the software inside it knows how to print such a 
report.  There is no way to check/ensure whether the final report (or the 
floppy contents) are the real results or pre-programmed values set by the 
government itself.

The machine also fails to achieve the ANONYMITY attribute.  The machine 
holds the name and ID number of each voter of that electoral zone.  Before 
every vote, using the remote control, the operator types the ID number of 
the voter to release the keypad.  As the software is secret, we can't be 
sure that it doesn't record the vote along with the ID of the 
voter.  Another problem is that the machine sounds a beep for every key 
pressed.  Usually a vote is made by typing the candidate's number (2 to 5 
digits) and pressing the "confirm" key, but there is one key for "blank 
vote."  So counting the number of beeps we can easy identify blank votes.

The Brazilian electronic system is far from perfect and receives many 
criticisms from specialists.  The Web site <http://www.votoseguro.org> hosts 
a discussion group about it, but it is in Portuguese.


From: Celso Brites <brites@correionet.com.br>
Subject: Brazilian Voting System

In your December 15 Crypto-Gram proposal for an automated voting system you 
say: "The voter checks the paper ballot for accuracy, and then drops it 
into a sealed ballot box.  The paper ballots are the 'official' votes and 
can be used for recounts, and the computer provides a quick initial tally."

The paper ballot you propose was used in the first Brazilian electronic 
election.  However, it was abandoned in the elections that followed, 
in order to avoid an old type of electoral fraud, known in Brazil as "Voto 
de Cabresto" (literally translated, "halter vote").  In this fraud, a 
politician (typically a small-town, large property owner) "buys" votes from 
his (usually desperately poor) electorate with promises of small benefits, 
goods, or even cash.  To assure that they will vote as promised, the 
politician uses the following trick: someone on his payroll gets to be the 
first one to vote, but he drops some other piece of paper into the box 
instead of his ballot.  He takes the ballot with him and marks it with a 
small handwritten sign.  He then hands it to the first of the politician's 
"clients", who is instructed to put the signed ballot in the box and 
bring his own ballot back, thus proving he voted for the "right" 
candidate.  The whole process is then repeated: the new ballot is signed 
and handed to the next "client".

This process could be avoided by using a sealed transparent conduit between 
the machine and the box, so that the voter would not touch the ballot, but 
could check if it was properly printed. For some reason it was not adopted.


From: Todd Hooper <Todd.Hooper@watchguard.com>
Subject: Publishing Security Advisories

I think the spat between Microsoft and Bugtraq is less about the rights of 
the security community, and more about Bugtraq (aka SecurityFocus) 
protecting their income stream from advertising on their portal.  If 
Microsoft doesn't let them distribute the material, the value of Bugtraq (and 
SecurityFocus) is reduced in some fashion.


From: Jonah Duckles <jduckles@purdue.edu>
Subject: Gianus in the Doghouse

I found your "Doghouse" entry for <http://www.gianus.com> intriguing.  I 
visited their site and found that they were claiming they had full 
technical reports detailing how none of the independent testing agencies 
they had submitted their product to were able to access protected 
information.  The two "technical" reports that I received were basically 
accounts of how good the product seemed to be.  There 
was no technical probing of what kinds of things the product was actually 
doing to "hide" the data.

I contacted the person who had sent me these technical reports again to say 
that I felt the product simply obfuscates the data and does not actually 
provide any security and I received the following e-mail:

      From: "GIANUS TECHNOLOGIES" <gianus@gianus.com>
      To: "Jonah Duckles" <jduckles@purdue.edu>
      Subject: RE: Technical Reviews
      Date: Wed, 24 Jan 2001 13:15:29 -0800

      So, Mr. Duckles, maybe you're the genius we were looking for!!!

      How can you express an opinion about something you have never even seen??

      You are more than welcome to try to retrieve any information we protect
      with the Phantom. ...But what do we get if you cannot???

      There probably is a reason why you are in the academic world, and not in
      the commercial (real) one.

      Best Regards.
      ____________________________________________
      GIANUS TECHNOLOGIES

I guess you have every reason to keep these guys out in the Doghouse, for 
bad security practice AND bad PR.


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.

To subscribe, visit <http://www.counterpane.com/crypto-gram.html> or send a 
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe, 
visit <http://www.counterpane.com/unsubform.html>.  Back issues are 
available on <http://www.counterpane.com>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of _Secrets and Lies_ and 
_Applied Cryptography_, and an inventor of the Blowfish, Twofish, and 
Yarrow algorithms.  He served on the board of the International Association 
for Cryptologic Research, EPIC, and VTW.  He is a frequent writer and 
lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company bringing 
innovative managed security solutions to the enterprise.

<http://www.counterpane.com/>

Copyright (c) 2001 by Counterpane Internet Security, Inc.