Date: Tue, 16 Feb 1999 19:25:21 -0700 (MST)
From: mea culpa <jericho@dimensional.com>
To: InfoSec News <isn@repsec.com>
Subject: [ISN] CRYPTO-GRAM, February 15, 1999 


Forwarded From: Bruce Schneier <schneier@counterpane.com>

                 CRYPTO-GRAM

               February 15, 1999

              by Bruce Schneier
                  President
             Counterpane Systems
           schneier@counterpane.com
          http://www.counterpane.com


A free monthly newsletter providing summaries, analyses, insights, and
commentaries on cryptography and computer security.

Back issues are available at http://www.counterpane.com.  To subscribe or
unsubscribe, see below.


Copyright (c) 1999 by Bruce Schneier


** *** ***** ******* *********** *************

In this issue:

     Snake Oil
     News
     Counterpane Systems -- Featured Research
     The Doghouse: WinXFiles
     Back Doors, Export, and the NSA
     Intel's Processor ID
     Comments from Readers


** *** ***** ******* *********** *************

                 Snake Oil



The problem with bad security is that it looks just like good security.
You can't tell the difference by looking at the finished product.  Both
make the same security claims; both have the same functionality.  Both
might even use the same algorithms: triple-DES, 1024-bit RSA, etc.   Both
might use the same protocols, implement the same standards, and have been
endorsed by the same industry groups.  Yet one is secure and the other is
insecure.

Many cryptographers have likened this situation to the pharmaceutical
industry before regulation.  The parallels are many: vendors can make any
claims they want, consumers don't have the expertise to judge the accuracy
of those claims, and there's no real liability on the part of the vendors
(read the license you agree to when you buy a software security product).

This is not to say that there are no good cryptography products on the
market. There are.  There are vendors that try to create good products and
to be honest in their advertising.  And there are vendors that believe they
have good products when they don't, but they're just not skilled enough to
tell the difference.  And there are vendors that are just out to make a
quick buck, and honestly don't care if their product is good or not.

Most products seem to fall into the middle category: well-meaning but
insecure.  I've talked about the reason in previous CRYPTO-GRAM essays, but
I'll summarize: anyone can create a cryptography product that he himself
cannot break.  This means that a well-meaning person comes up with a new
idea (or at least an idea he has never heard of), finds that he cannot
break it, and believes that he has just discovered the magic elixir to
cure all security problems.  And even if there's no magic elixir, the
difficulty of creating
secure products combined with the ease of making mistakes makes bad
cryptography the rule.

The term we use for bad cryptography products is "snake oil," which was the
turn-of-the-century American term for quack medicine.  It brings to mind
traveling medicine shows, and hawkers selling their special magic elixir
that would cure any ailment you could imagine.

For example, here is a paragraph from the most recent snake-oil
advertisement I received in e-mail: "Encryptor 4.0 uses a unique in-house
developed incremental base shift algorithm.  Decryption is practically
impossible, even if someone manages to reverse engineer our program to
obtain the algorithm, the decryption of a file depends on the exact
password (encryption key).  Even if someone is guessing the encryption key
the file will only be decrypted correctly if the encryption key is 100
percent correct.  See the IMPORTANT WARNING on our Web site
http://ten4.com/encryptor."  I checked the Web site; the odds that this
product is any good are negligible.

Elsewhere I've talked about building strong security products, using
tried-and-true mathematics, and generally being conservative.  Here I want
to talk about some of the common snake-oil warning signs, and how you can
pre-judge products from their advertising claims.  These warning signs are
not foolproof, but they're pretty good.

Warning Sign #1: Pseudo-mathematical gobbledygook.  

In the quote above, notice the "unique in-house developed incremental base
shift algorithm."  Does anyone have any idea what that means?  Are there
any academic papers that discuss this concept?  Long noun chains don't
automatically imply security.

Meganet <http://www.meganet.com> has a beauty on their Web site: "The base
of VME is a Virtual Matrix, a matrix of binary values which is infinity in
size in theory and therefore have no redundant value.  The data to be
encrypted is compared to the data in the Virtual Matrix.  Once a match is
found, a set of pointers that indicate how to navigate inside the Virtual
Matrix is created.  That set of pointers (which is worthless unless
pointing to the right Virtual Matrix) is then further encrypted in dozens
other algorithms in different stages to create an avalanche effect. The
result is an encrypted file that even if decrypted is completely
meaningless since the decrypted data is not the actual data but rather a
set of pointers.  Considering that each session of VME has a unique
different Virtual Matrix and that the data pattern within the Virtual
Matrix is completely random and non-redundant, there is no way to derive
the data out of the pointer set."  This makes no sense, even to an expert.

US Data Security <http://www.usdsi.com> has another beauty: "From a
mathematical point of view, the TTM algorithm is intuitively natural and
less cumbersome to use than methods that are number-theory based."
SuperKrypt <http://www.superkrypt.com/> tries to impress with an acronym:
"SuperKrypt products utilize the DNGT bulk encryption method," whatever
that is.  And Cennoid <http://www.cennoid.com> just doesn't understand what
it's talking about: "Since key length and key structure vary and since the
encryption engine does not use any mathematical algorithms, reverse
engineering is impossible and guessing is not an option."

The point here is that, like medicine, cryptography is a science.  It has a
body of knowledge, and researchers are constantly improving that body of
knowledge: designing new security methods, breaking existing security
methods, building theoretical foundations, etc.  Someone who obviously does
not speak the language of cryptography is not conversant with the
literature, and is much less likely to have invented something good.  It's
as if your doctor started talking about "energy waves and healing
vibrations."  You'd worry.

Warning Sign #2: New mathematics.

Every couple of years, some mathematician looks over at cryptography, says
something like, "oh, that's easy," and proceeds to create an encryption
algorithm out of whatever he has been working on.  Invariably it is lousy.  

Beware cryptography based on new paradigms or new areas of mathematics:
chaos theory, neural networks, coding theory, zeta functions.  Cryptography
is hard; the odds that someone without any experience in the field can
revolutionize it are small.  And if someone does, let the academic
community have a few years to understand it before buying products based on
it.

Warning Sign #3: Proprietary cryptography.

I promise not to start another tirade about the problems of proprietary
cryptography.  I just include it here as a warning sign.  So when a company
like GenioUSA <http://www.geniousa.com/genio/> refuses to divulge what
algorithm they're using (they claim it's "world class secret key
encryption," whatever that means), you should think twice before using
their product (it's completely broken, by the way).

Another company, Crypt-o-Text <http://www.savard.com/crypt-o-text/>,
promises a "complex proprietary encryption algorithm" and that "there is
absolutely no way to determine what password was used by examining the
encrypted text."  It was completely broken in an InfoWorld review.

This kind of thing isn't exclusive to small companies.  Axent once tried to
pass XOR off as a real encryption algorithm.  It wasn't until someone peeked
inside the compiled code that we discovered it.

Any company that won't discuss its algorithms or protocols has something to
hide.  There's no other possible reason.  (And don't let them tell you that
it is patent-pending; as soon as they file the patent, they can discuss the
technology.  If they're still working on the patent, tell them to come back
after they can make their technology public.)

Warning Sign #4: Extreme cluelessness.

Some companies make such weird claims that it's obvious that they don't
understand the field.  TriStrata says this about their encryption
algorithm:  "Since TriStrata's encryption scheme is so simple and of such
low computational complexity, the client portion can reside on a wide range
of systems -- from a server to a portable PC."  Don't they realize that
every encryption algorithm is small enough to fit on a portable PC, that
DES and RSA and SHA can fit on an 8-bit smart card, and that you can
implement some of the AES candidates in 17 clock cycles per byte or a few
thousand gates?

GenioUSA talks about why they don't use public-key cryptography in their
product: "Public Key encryption is exactly that, you are not the only
party involved in the generation, integrity, and security of all the
keys/passwords used to encrypt your e-mail, documents, and files.  Public
key encryption is great technology to use to exchange things with anyone
you won't trust with your secret key(s) and/or can't exchange secret key(s)
with.  We quote one sentence from a well known Web page, 'All known public
key cryptosystems, however, are subject to shortcut attacks and must
therefore use keys ten or more times the lengths of those discussed here to
achieve the an [sic] equivalent level of security.'"  So what?  This
company just doesn't get it.

Warning Sign #5: Ridiculous key lengths.

Jaws Technologies <http://www.jawstech.com> boasts: "Thanks to the JAWS L5
algorithm's statistically unbreakable 4096 bit key, the safety of your most
valued data files is ensured."  Meganet takes the ridiculous a step further
<http://www.meganet.com>: "1 million bit symmetric keys -- The market
offer's [sic] 40-160 bit only!!"

Longer key lengths are better, but only up to a point.  AES will have
128-bit, 192-bit, and 256-bit key lengths.  This is far longer than needed
for the foreseeable future.  In fact, we cannot even imagine a world where
256-bit brute force searches are possible; it would require fundamental
breakthroughs in physics and in our understanding of the universe.  For
public-key cryptography, 2048-bit keys have the same sort of property; longer
is meaningless.
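
To see why, it helps to run the numbers.  Here is a minimal Python
sketch; the guess rate is a deliberately generous hypothetical, far
beyond any real machine:

RATE = 10**15                      # hypothetical keys tested per second
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (56, 128, 256):
    expected_tries = 2**bits // 2  # on average the key is found halfway
    years = expected_tries / (RATE * SECONDS_PER_YEAR)
    print(f"{bits}-bit key: about {years:.1e} years")

# 56-bit key: about 1.1e-06 years  (roughly 36 seconds)
# 128-bit key: about 5.4e+15 years
# 256-bit key: about 1.8e+54 years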

Think of this as a sub-example of Warning Sign #4: if the company doesn't
understand keys, do you really want them to design your security product?

Warning Sign #6: One-time pads.

One-time pads don't make sense for mass-market encryption products.  They
may work in pencil-and-paper spy scenarios, they may work on the
U.S.-Russia teletype hotline, but they don't work for you.  Most companies
that claim they have a one-time pad actually do not.  They have something
they think is a one-time pad.  A true one-time pad is provably secure
(against certain attacks), but is also unusable.

Elementrix, now defunct, announced a one-time pad product a few years ago,
and refused to recant when it was shown that it was no such thing.  Ciphile
Software <http://www.ciphile.com> just tries to pretend: "Original Absolute
Privacy - Level3 is an automated pseudo one-time pad generator with very
sophisticated and powerful augmenting features."  Whatever that means.

More recently, TriStrata <http://www.tristrata.com> jumped on the world's
cryptography stage by announcing that they had a one-time pad.  Since then,
they've been thoroughly trounced by anyone with a grain of cryptographic
sense and have deleted the phrase from their Web site.  At least they've
exhibited learning behavior.

Ultimate Privacy <http://www.ultimateprivacy.com> might actually use a
one-time pad (although they claim to use Blowfish, too, which worries me):
"The one time pad is a private key method of encryption, and requires the
safe and secure distribution of the pad material, which serves as the key
in our solution.  The security of the key distribution comes down to how
secure you want to be -- for communicating point-to-point with one other
person, we suggest a face-to-face hand-off of the pad material."  Remember
that you need to hand off the same volume of bits as the message you want
to send, otherwise you don't have a one-time pad anymore.
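
A true one-time pad fits in a few lines of Python, and the sketch makes
the cost obvious (os.urandom stands in here for the true random source a
real pad requires):

import os

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    pad = os.urandom(len(message))   # key material as long as the message
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return pad, ciphertext

def otp_decrypt(pad: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

pad, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(pad, ct) == b"attack at dawn"
# len(pad) == len(ct): the "key" you hand off face-to-face is exactly
# as big as everything you will ever encrypt with it, and no byte of
# it can ever be reused.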

Warning Sign #7: Unsubstantiated claims.

Jaws Technologies says this about its new encryption technology: "This
scientifically acclaimed encryption product is the world's strongest
commercially available software of its kind."  Acclaimed by who?  The Web
site doesn't say.  World's strongest by what comparison?  Nothing.

UBE98, at <http://www.parkie.ndirect.co.uk/>, stands for "unbreakable
encryption," or at least it did before someone took a day to break it.  Its
Web site makes the same sort of ridiculous claims:  "One of the Strongest
Encryptions available in the UK in a program that everyone will understand
how to use!"  Wow.  SenCrypt <http://www.ionmarketing.com/> is advertised
to be "the most secure cryptographic algorithm known to mankind."  Double wow.

Some companies claim "military-grade" security.  This is a meaningless
term.  There's no such standard.  And at least in the U.S., military
cryptography is not available for non-government purposes (although
government contractors can get it for classified contracts).

Other companies make claims about other algorithms that are "broken,"
without giving details.  Or that public-key cryptography is useless.  Don't
believe any of this stuff.  If the claim seems far-fetched, it probably is.
If a company claims that their products have been reviewed by
cryptographers, ask for names.  Ask for a copy of the review.  Counterpane
Systems reviews many products, and our clients can give out the reviews if
they choose.

Warning Sign #8: Security proofs.

There are two kinds of snake-oil proofs.  The first are real mathematical
proofs that don't say anything about real security.  The second are fake
proofs.  Meganet claims to have a proof that their VME algorithm is as
secure as a one-time pad.  Their "proof" is to explain how a one-time pad
works, add the magic spell "VME has the same phenomenon behavior patterns,
hence proves to be equally strong and unbreakable as OTP," and then give
the results of some statistical tests.  This is not a proof.  It isn't even
close.

More subtle are actual provably secure systems.  They do exist.  Last
summer, IBM made a big press splash about their provably secure system,
which they claimed would revolutionize the cryptography landscape.  (See
<http://www.counterpane.com/crypto-gram-9809.html#cramer-shoup> for a
discussion.)  Since then, the system has disappeared.  It's great research,
but mathematical proofs have little to do with actual product security.

Warning Sign #9: Cracking contests.

I wrote about this at length last December:
<http://www.counterpane.com/crypto-gram-9812.html#contests>.  For now,
suffice it to say that cracking contests are no guarantee of security, and
often mean that the designers don't understand what it means to show that a
product is secure.

Conclusion: Separating the Good from the Bad

These snake-oil warning signs are neither necessary nor sufficient criteria
for separating the good cryptography from the snake oil.  Just as there
could be insecure products that don't trigger any of these nine warning
signs, there could be secure products that look very much like snake oil.
But most people don't have the time, patience, or expertise to perform the
kind of analysis necessary to make an educated determination.  In the
absence of a Food-and-Drug-Administration-like body to regulate
cryptography, the only thing a reasonable person can do is to use warning
signs like these as guides.


Further reading: The "Snake Oil" FAQ is an excellent source of information
on questionable cryptographic products, and a good way to increase the
sensitivity of your bullshit detector.  Get your copy at:
<http://www.interhack.net/people/cmcurtin/snake-oil-faq.html>.


** *** ***** ******* *********** *************

                    News



The U.S. has new interim cryptography export regulations.  The Department
of Commerce issued new interim regulations on encryption export controls on
December 31, 1998.  Products with DES can now be freely exported.  (Of
course, we all know that DES can be broken in 21 hours by a bunch of
amateurs, and a lot faster by professionals.)  Products with any key length
can be exported to insurance companies, medical end-users, and online
merchants (only for buying and selling goods), under the current exception
available for banks.  Corporations can export to their subsidiaries for
"internal company proprietary use"; some of this extends to partners of
American companies.  Some of the licensing requirements on export of key
escrow/key recovery systems have been removed.  These new regulations,
announced in September, are targeted towards large corporations.
Export of strong encryption for private, non-commercial use remains
strictly limited.  Comments on the rules are due March 1, 1999.  A copy of
the rules is available at:
http://www.epic.org/crypto/export_controls/bxa-regs-1298.html

France reversed its long-standing position as being one of the most
anti-cryptography countries in the world.  On January 19, Prime Minister
Lionel Jospin announced the French government is relaxing its current
restrictive policy on encryption.  Under the new policy, a key escrow
system of "Trusted Third Parties" will no longer be required for domestic
use, the 1996 law requiring TTPs will not be implemented, and users will be
able to use up to 128-bit encryption without restriction until a new law
eliminating all restrictions is enacted.  Rah rah.  The announcement is
available in French at:
http://www.premier-ministre.gouv.fr/PM/D190199.HTM
http://www.internet.gouv.fr/francais/textesref/cisi190199/decis1.htm
http://www.internet.gouv.fr/francais/textesref/cisi190199/decis2.htm
and a translation is at:
http://slashdot.org/articles/99/01/19/1255234.shtml

In addition to adding a unique processor ID (see below) to its Pentium III
chip, Intel is adding a hardware random number generator.  This is
excellent news.  I know nothing about how it works (or even if it is any
good), but using techniques such as Yarrow, we can take even a mediocre
hardware random number generator and turn it into something that is good
for cryptographic applications.
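
The core idea, greatly simplified (Yarrow itself does much more,
including entropy estimation and reseeding), is to pool raw samples and
distill them through a hash function.  A minimal Python sketch:

import hashlib

def condition(raw_samples: list) -> bytes:
    # Pool many low-quality hardware samples and hash the pool.  Even
    # if each individual sample is biased, the 32-byte output is
    # cryptographically useful as long as the pool as a whole contains
    # enough entropy.
    pool = hashlib.sha256()
    for sample in raw_samples:
        pool.update(sample)
    return pool.digest()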

There's a new Word-based virus named Caligula.  Caligula steals a
user's PGP key ring and sends it to the creators' FTP site.  According
to Network Associates (owners of PGP, having bought it in 1997), this
doesn't compromise PGP security because the key ring file is useless
without the passphrase.  This seems a bit optimistic; once the private key
ring is known, PGP's security level goes from unbreakable to that of a
standard hashed passphrase.  And most people choose lousy passphrases.
http://www.techweb.com/wire/story/TWB19990205S0011
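
To see what the theft buys the attacker, consider the offline attack.
This sketch deliberately simplifies the passphrase check to a bare hash
comparison; PGP actually derives a decryption key from the passphrase,
which changes the per-guess cost but not the attack:

import hashlib

def guess_passphrase(stolen_check_value: bytes, candidates: list):
    # With the key ring file in hand, guessing runs at machine speed,
    # with no lockouts and no audit trail.
    for guess in candidates:
        if hashlib.sha1(guess.encode()).digest() == stolen_check_value:
            return guess
    return None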

Sun's Scott McNealy announced that we all have no privacy anyway, and might
as well get used to it.  All the more troubling, Sun is a member of the
Online Privacy Alliance.  With an attitude like McNealy's, it is hard to
believe that "an industry coalition that seeks to head off government
regulation of online consumer privacy in favor of an industry
self-regulation approach" has my best interests at heart.
http://www.wired.com/news/news/politics/story/17538.html

SECRET POWER is an excellent book about project Echelon, the NSA's secret
program designed to eavesdrop on pretty much every piece of communication
in the world.  The book isn't available in the U.S. (Amazon.com never heard
of it, and I got my copy from a friend in New Zealand), but CovertAction
Quarterly has an excellent article on the topic by the author:
http://www.caq.com/caq59/CAQ59GlobalSnoop.html
And if you want to try to get the book, here are the details:  Nicky Hager,
SECRET POWER, Craig Potton Publishing (Box 555, Nelson, New Zealand), 1996.
See also:  http://www.gn.apc.org/duncan/echelon-dc.htm


** *** ***** ******* *********** *************

   Counterpane Systems -- Featured Research



"Breaking Up Is Hard To Do: Modeling Security Threats for Smart Cards"

B. Schneier and A. Shostack, First USENIX Symposium on Smart Cards, USENIX
Press, to appear.

Smart card systems differ from conventional computer systems in that
different aspects of the system are not under a single trust boundary.  The
processor, I/O, data, programs, and network may be controlled by different,
and hostile, parties.  We discuss the security ramifications of these
"splits" in trust, showing that they are fundamental to a proper
understanding of the security of systems that include smart cards.

http://www.counterpane.com/smart-card-threats.html


** *** ***** ******* *********** *************

           The Doghouse: WinXFiles



WinXFiles <http://www.pepsoft.com/wxf/intro.html> is an image viewer that
claims security.  Quoting from the Web site, "WinXFiles(tm) with its
attractive tabbed-dialog interface features secure encryption to prevent
unauthorized access of all type (sic) of files and particularly of your
image collection."  While it may have an attractive interface, it is
anything but secure.

WinXFiles obscures the password and stores it in the header of the
encrypted file.  It obscures the password by adding it (modulo 256) to the
output of a pseudorandom number generator.  The PRNG is seeded with one
byte derived from the password (simply the sum of the characters in the
password modulo 255).  The PRNG itself is also laughably weak; it generates
two bytes at a time by adding and subtracting the offset from the seed
byte, respectively.
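
Here is a hypothetical reconstruction of the scheme as described (the
function names and details are my guesses from the write-up, not the
actual WinXFiles code), together with the obvious attack:

def weak_prng(seed: int, n: int) -> bytes:
    # Two bytes at a time: (seed + offset) and (seed - offset), mod 256.
    out, offset = [], 0
    while len(out) < n:
        out.append((seed + offset) % 256)
        out.append((seed - offset) % 256)
        offset += 1
    return bytes(out[:n])

def obscure(password: bytes) -> bytes:
    stream = weak_prng(sum(password) % 255, len(password))
    return bytes((c + s) % 256 for c, s in zip(password, stream))

def recover(header: bytes) -> list:
    # The seed takes only 255 values, so try them all and keep the
    # candidates whose character sum is consistent with the seed.
    hits = []
    for seed in range(255):
        stream = weak_prng(seed, len(header))
        cand = bytes((h - s) % 256 for h, s in zip(header, stream))
        if sum(cand) % 255 == seed:
            hits.append(cand)
    return hits

print(recover(obscure(b"secret99")))  # the password is in this tiny list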

Perhaps the scariest thing about the encryption is the high ratings it got
from reviewers: http://www.pepsoft.com/wxf/reviews.html.

(This work was done by Mike Stay and Casimir, a French hacker whose
exploits have been reported by Joe Peschel.  Thanks to Mike Stay for
writing this up.)


** *** ***** ******* *********** *************

        Back Doors, Export, and the NSA



Among cryptography product companies, "Have you had a meeting with Lew
Giles?" is code for "Has the NSA asked you to secretly weaken your
product?"  Giles has been known to visit companies and request that they
add back doors to their products so that the NSA could break the encryption.

The deal went something like this:  Giles offered you preferential
treatment for export if you would add a back door.  The back door could be
subtle enough that it wouldn't show up in the design, and only be obvious
if someone analyzed the binary code.  It could be something that would
easily be viewed as a mistake if someone learned about it.  Maybe you could
weaken your random number generator, or leak a few key bits in a header.
Anything that would let the NSA decrypt the ciphertext without it looking
like the crypto was broken.
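
As an entirely hypothetical illustration of what "weaken your random
number generator" can mean in practice:

import hashlib

def backdoored_session_key(counter: int, weak_seed: int) -> bytes:
    # Each key looks like 128 random bits, and the ciphertext passes
    # every statistical test.  But weak_seed takes only 2**24 values,
    # so an insider who knows the trick can brute-force every session
    # key in minutes.
    assert 0 <= weak_seed < 2**24
    material = weak_seed.to_bytes(3, "big") + counter.to_bytes(8, "big")
    return hashlib.sha256(material).digest()[:16]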

In return you would be able to export your products.  But you and he would
have to come up with some kind of cover story as to why you could export
what was normally unexportable encryption, something that would allay any
suspicion.

Giles was supposedly very smooth.  He would try a variety of tactics to
make you go along with this plan.  Sometimes he would meet with just the
engineers -- no management -- to try and circumvent potential problems.

I've heard this story from several cryptography companies, large and small.
None of them were willing to talk on the record.  All were visited at
least two years ago; most were visited by Giles.  None agreed to this
bargain.  (Presumably those who did would be unwilling to admit even
talking to the NSA.)  And all of these stories are at least two years old;
I have no idea if Giles is still employed by the NSA, if he is still doing
this kind of thing, or in fact if anyone is still doing this kind of thing.

None of this should be surprising.  The NSA seems to have done whatever it
could to add trap-doors into cryptography products.  They completely
subverted the Swiss company Crypto AG, for example, and for at least half a
century have been intercepting and decrypting the top-secret documents of
most of the world's governments.  (The URL for this absolutely fascinating
story is <http://www.caq.com/CAQ/caq63/caq63madsen.html>.)

This kind of thing happens in Canada, too.  One name I've heard is Norm
Weijer; a couple of years ago he visited several Canadian crypto companies.
One person tells the story of submitting his product to Norm for export
approval.  The product used a number of different proprietary algorithms,
all weakened to 40-bit.  The word came back, unofficially of course, that
if he would get rid of the proprietary algorithms and replace them with
56-bit DES, they could get export approval.  Presumably using their
existing DES crackers was easier than building unique crackers for this
particular product.


** *** ***** ******* *********** *************

             Intel's Processor ID



Last month Intel Corp. announced that its new processor chips would come
equipped with ID numbers, a unique serial number burned into the chip
during manufacture.  Intel said that this ID number would help facilitate
e-commerce, prevent fraud and promote digital content protection.

Unfortunately, it doesn't do any of these things.

To see the problem, consider this analogy: Imagine that every person was
issued a unique identification number on a national ID card.  A person
would have to show this card in order to engage in commerce, get medical
care, whatever. Such a system works, provided that the merchant, doctor, or
whoever can examine the card and verify that it hasn't been forged.  Now
imagine that the merchants were not allowed to examine the card.  They had
to ask the person for his ID number, and then accept whatever number the
person responded with.  This system is only secure if you trust what the
person says.

The same problem exists with the Intel scheme.

Yes, the processor number is unique and cannot be changed, but the software
that queries the processor is not trusted.  If a remote Web site queries a
processor ID, it has no way of knowing whether the number it gets back is a
real ID or a forged ID.  Likewise, if a piece of software queries its
processor's ID, it has no way of knowing whether the number it gets back is
the real ID or whether a patch in the operating system trapped the call and
responded with a fake ID.  Because Intel didn't bother creating a secure
way to query the ID, it will be easy to break the security.
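
A hypothetical sketch of the problem, with stand-in functions for the
hardware query:

def real_processor_id() -> int:
    return 0x1234567890AB      # what the chip would report

def patched_processor_id() -> int:
    return 0x1234567890AB      # an OS patch can report any ID it likes

def server_accepts(reported_id: int, registered: set) -> bool:
    # The server sees only a number, never the chip.  Without a
    # signature or a challenge-response protocol, the check proves
    # nothing about which machine actually answered.
    return reported_id in registered

registered = {real_processor_id()}
print(server_accepts(patched_processor_id(), registered))  # True: forged, accepted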

As a cryptographer, I cannot design a secure system to validate
identification, enforce copy protection, or secure e-commerce using a
processor ID.  It doesn't help.  It's just too easy to hack the software
that queries the hardware.

This kind of system puts us in the same position we were in when the
government announced the Clipper chip: Those who are engaged in illicit
activities will subvert the system, while those who don't know any better
will find their privacy violated.  I predict that patches that randomize
the ID number will be available on hacker Web sites within days of the new
chips hitting the streets.

The only positive use for processor IDs is the one use that Intel said
they would not pursue: tracking stolen processors.  Pentium II chips are so
valuable that trucks are hijacked on the highways, sometimes resulting in
drivers being killed.  A database of stolen processor IDs would drop the
market for stolen CPUs to zero: board manufacturers, computer companies,
resellers and customers could simply query the database to ensure that
their particular CPU wasn't stolen.  (This is the primary use for
automobile VINs.)  This same system could be used to prevent manufacturers
from overclocking their CPUs -- running them faster than Intel rated them
for -- another thing that Intel would love to prevent.

The real question is whether computers are a dangerous technology, and need
to be individually tracked like handguns and automobiles. During the Cold
War many Eastern European countries required mimeograph machines to be
individually licensed; I have a hard time believing that computers need the
same sorts of controls.

http://www.techweb.com/wire/story/TWB19990120S0017
http://cnn.com/TECH/computing/9901/22/intelid.idg/index.html
http://www.semibiznews.com/stories99/jan99y/9a22intID.htm

The boycott:
http://www.crn.com/dailies/weekending012999/jan25dig15.asp

Intel's partially backing down:
http://cbs.marketwatch.com/archive/19990125/news/current/intc.htx?source=htx/http2_mw
http://www.pcworld.com/pcwtoday/article/0,1510,9497,00.html

This article originally appeared in:
http://www.zdnet.com/zdnn/stories/comment/0,5859,2194863,00.html

Since I wrote the above essay, there have been some developments.  Intel
announced that the Pentium III would be shipped with the feature turned
off.  This doesn't help one bit.  Intel has also hinted at a
secure protocol running in tamper-resistant software that will query the ID
number in a secure manner.  This doesn't make sense: there's no such thing
as tamper-resistant software (ask anyone who breaks copy protection schemes
as a hobby) and the real worry is the unauthorized protocols that will
query, or hide, the ID number.  Also, people have pointed out that there
are other unique ID numbers on your computer: Ethernet cards, hard drive
serial numbers, etc.  This is true, and could be a problem, but Intel
intends to use its ID number as a surveillance feature.
http://www.techweb.com/se/directlink.cgi?EET19990201S0020

Intel's response to all of this is here:
http://support.intel.com/support/processors/pentiumiii/psqa.htm


** *** ***** ******* *********** *************

            Comments from Readers



From: kragen@pobox.com (Kragen Sitaker)
Subj:  Side Channel Attacks

When I build a computer system, hardware and software, to solve some
problem, I only need to ensure that there exists some reasonable set of
circumstances under which it does what it's supposed to do.  If a client
mounts a standard PC in an enclosure with 150-degree heat, or cuts off
power to it at unpredictable times, or whatever, and then they bitch at me
that it broke, I can blame them.  After all, I never claimed it would work
under those conditions.

When I build a security system, I need to ensure that there does not exist
any reasonable set of circumstances under which it does not do what it's
not supposed to do, as well.  If my smart card leaks information when it's
operated out of spec, well, I can't blame the customer for operating it out
of spec, can I?  :)

The dichotomy is deep.  In one case, we need only ensure that \exists X:
Y(X) = T, where Y(X) means "property Y holds true under circumstances X".
In the other case, we must ensure that \not\exists X: Y(X) = F, or
equivalently \forall X: Y(X) = T.

Most computers and software are designed to fit the first case.  In this
case, the solution to the discovery of a buffer overflow is to tell the
user not to do that -- or if they must do that, to expand the buffer.  In
the second case, the solution is to check bounds carefully.

The result: when software designed for the first environment is pressed
into service in the second, it's full of holes.  The same is true of
hardware, and even hardware design methodologies.
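
To make the dichotomy concrete, here is a minimal sketch (in Python) of
\forall-style input handling, where no input, however unreasonable,
escapes validation:

def parse_record(data: bytes) -> bytes:
    # Validate every assumption about the input instead of trusting
    # the sender to be reasonable.
    if len(data) < 2:
        raise ValueError("truncated length field")
    n = int.from_bytes(data[:2], "big")
    if len(data) < 2 + n:
        raise ValueError("declared length exceeds available data")
    return data[2:2 + n]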


From: Bob Geiger <geiger@areaplg2.corp.mot.com>
Subj:  Internet attacks

One of the important points is how the Internet changes the standard attack
cost/distance model.  Given a target of value X, it would take an attacker
Y dollars to break into my physical-world stuff.  The farther the attacker
must travel in physical space, the higher the cost to him.  So unless you
have the Hope Diamond in your basement, you can probably assume criminals
will be drawn from a limited pool.  Furthermore, they actually have to
purchase tools, and the threat of being caught (or shot) is a real one.
On the Internet, this cost/distance ratio is gone.  Suddenly anyone,
anywhere, can attack your systems.  Furthermore, the availability of free
tools limits the
investment required, and the difficulty in tracking down attackers
compounds the problem.  Another problem is that when it becomes easy and
relatively risk-free to poke around so many systems, information deemed of
limited value may add up to help in breaking into much more valuable
systems.  So we end up needing a far, far greater level of security,
relative to system value, than in "physical space".  I think many people
have a hard time understanding this difference, and this is why warnings
on Internet security are sometimes viewed as alarmist.


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on cryptography and computer security.

To subscribe, visit http://www.counterpane.com/crypto-gram.html or send a
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe,
visit http://www.counterpane.com/unsubform.html.  Back issues are available
on http://www.counterpane.com.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is president of
Counterpane Systems, the author of "Applied Cryptography," and an inventor
of the Blowfish, Twofish, and Yarrow algorithms.  He served on the board of
the International Association for Cryptologic Research, EPIC, and VTW.  He
is a frequent writer and lecturer on cryptography.

Counterpane Systems is a six-person consulting firm specializing in
cryptography and computer security.  Counterpane provides expert consulting
in: design and analysis, implementation and testing, threat modeling,
product research and forecasting, classes and training, intellectual
property, and export consulting.  Contracts range from short-term design
evaluations and expert opinions to multi-year development efforts.
 
http://www.counterpane.com/

Copyright (c) 1999 by Bruce Schneier




-o-
Subscribe: mail majordomo@repsec.com with "subscribe isn".
Today's ISN Sponsor: Internet Security Institute [www.isi-sec.com]