Friday, May 23, 2008

PCI Silverbullet for POS?

Has Verifone created a PCI silver bullet for Point of Sale (POS) systems with their VeriShield Protect product? It's certainly interesting. It claims to encrypt credit card data BEFORE it enters the POS system, passing a similarly formatted (16-digit) encrypted card number into POS that presumably only your bank can decrypt and process.


I have to admit, I like the direction it's headed in. Any organization's goal (unless you are a payment processor) should be to reduce your PCI scope as much as possible, not to bring PCI to your entire organization. This is a perfectly viable, often overlooked option for addressing risk: ditch the asset. If you cannot afford to properly protect an asset, and you can find a way to no longer have to care for it, then ditch it.

Before anyone can use this to get a PCI QSA off their back, a few questions about this specific implementation will certainly have to be answered:

1) What cryptographers have performed cryptanalysis on this "proprietary" design? Verifone's liberty to mingle the words "Triple DES" into their own marketing buzz format, "Hidden TDES", should at least concern you if you know anything about the history of information security and the track record of proprietary encryption schemes. Since the plaintext and the ciphertext are both exactly 16 digits (base 10) long, and it appears that only the middle 6 digits are encrypted (see image below), the scheme may be exposed to randomness problems and other common crypto attacks.

Sprinkle in the fact that credit card numbers must comply with the "Mod 10" rule (the Luhn algorithm), and a good number theorist could really reduce the possibilities for the middle 6 digits. If only the middle 6 digits are encrypted, and each is a number between 0 and 9, there are 1,000,000 possible combinations, so the probability of blindly guessing the correct one is one in a million. But how many of those combinations, when combined with the first 6 and last 4 digits, actually satisfy the Mod 10 rule? [Especially since the check digit in the mod 10 rule is the final digit of the card number, which this method apparently doesn't encrypt.] Because each middle digit contributes uniformly to the Luhn checksum, exactly one in ten combinations passes the check for any fixed outer digits: a tenfold cut in the brute-force space, from 1,000,000 candidates down to 100,000. If there are any other mistakes in the "H-TDES" design or implementation, it might be even easier to fill in the middle 6 gap.
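The size of that brute-force space can be checked directly. The sketch below (using a made-up BIN and suffix, not any real card) counts how many middle-6 combinations survive the Luhn check when the outer digits are fixed:

```python
# Count how many middle-6-digit combinations yield a valid Luhn (mod 10)
# checksum when the first 6 and last 4 digits are fixed. The outer digits
# below are made up for illustration.

def luhn_valid(pan: str) -> bool:
    """Standard Luhn (mod 10) check over a numeric string."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

first6, last4 = "411111", "1111"   # hypothetical BIN and suffix

count = sum(
    luhn_valid(first6 + f"{middle:06d}" + last4)
    for middle in range(1_000_000)
)
print(count)  # 100000
```

The count comes out to 100,000 no matter which outer digits you fix, because the Luhn doubled-digit mapping is a permutation of 0-9, so each middle digit contributes uniformly to the checksum.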

It would be great to know that Verifone's design was open and peer-reviewed, instead of proprietary. I'd be very curious for someone like Bruce Schneier or Adi Shamir to spend some time reviewing it.


2) How are the keys generated, stored, and rotated? I certainly hope that all of these devices aren't hardcoded (EEPROMs flashed) with a static shared key, but I wouldn't be surprised if they are. It would be nice to see something like a TPM (secure co-processor) embedded in the device; that way, we'd know there is an element of tamper resistance. It would be very bad if a study like the one the Light Blue Touchpaper guys at Cambridge University just published revealed that all of the devices share the same key (or, just as bad, that all of the devices for a given retailer or bank share the same key).

It would be great if each device had its own public/private keypair and negotiated a session key with the bank. This could be possible if the hardware card-swipe device sent the cardholder data to the bank directly instead of relying on a back-office system to transmit it (arguably, the back end could still do the transmission, provided the card swipe could negotiate a session key with the bank directly).
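As a rough illustration of per-session keys, here is a minimal, unauthenticated Diffie-Hellman sketch. The parameters are toy values chosen for readability; a real design would use a vetted group and sign the exchange with the device's and bank's keypairs:

```python
# Sketch: device and bank each publish a public value and derive the same
# session key. Unauthenticated and with an illustrative (not vetted) group;
# for demonstration of the concept only.
import hashlib
import secrets

P = 2**607 - 1   # illustrative prime modulus (a Mersenne prime, not a vetted DH group)
G = 5

# Each side picks a private exponent and publishes g^x mod p.
device_priv = secrets.randbelow(P - 2) + 2
bank_priv = secrets.randbelow(P - 2) + 2
device_pub = pow(G, device_priv, P)
bank_pub = pow(G, bank_priv, P)

# Both sides derive the same shared secret, then hash it into a session key.
device_secret = pow(bank_pub, device_priv, P)
bank_secret = pow(device_pub, bank_priv, P)
session_key = hashlib.sha256(str(device_secret).encode()).digest()

assert device_secret == bank_secret  # both ends now hold the same key material
```

The point of the sketch: no long-lived shared key ever leaves either party, which is exactly what a fleet of flashed-EEPROM devices with one static key cannot claim.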

3) Will the PCI Security Council endorse a solution like this? (Unfortunately, this is probably the most pressing question on most organizations' minds.) If this does not take the Point of Sale system out of PCI scope, then most retailers will not embrace the solution. If the PCI Security Council looks at this correctly with an open mind, then they will seek answers to my questions #1 and #2 before answering #3. In theory, if the retailer doesn't have knowledge or possession of the decryption keys, POS would not be in PCI scope any more than the entire Internet is in PCI scope for e-tailers who use SSL.

...

Many vendors (or, more accurately, "payment service providers") are using "tokenization" of credit card numbers to get the sticky numbers out of e-tailers' databases and applications, which is a similar concept applied to e-commerce. Tokenizing a credit card number simply means creating a surrogate identifier that means nothing to anyone but the bank (service provider) and the e-tailer. The token replaces the credit card number in the e-tailer's systems, and in the best case the e-tailer doesn't even touch the card for a millisecond. [Because even a millisecond is long enough to be rooted, intercepted, and defrauded; the PCI Security Council knows that.]
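A minimal sketch of the idea follows. The class, names, and in-memory dict are all illustrative, not any real provider's API; real vaults live behind HSMs on the provider's side:

```python
# Sketch of credit-card tokenization: the e-tailer stores only an opaque
# token; the card number lives solely in the provider's vault.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> PAN; stands in for the provider's hardened store

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)   # random surrogate, not derived from the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]      # only the provider can do this lookup

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# The e-tailer's database stores `token`; a breach there reveals nothing
# about the card number, because the token is random, not an encryption of it.
assert vault.detokenize(token) == "4111111111111111"
```

Note the design choice: because the token is random rather than encrypted, there is no key whose compromise would expose every stored number.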

It's great to see people thinking about solutions that fit the mantra: "If you don't have to keep it, then don't keep it."

[Note: all images are likely copyrighted by Verifone and are captures from their public presentation in PowerPoint PPS format here.]

...
[Updated May 23, 2008: Someone pointed out that PCI only requires the middle 6 digits (to which I refer in "question 1" above) to be obscured or protected, per requirement 3.3: "Mask PAN when displayed (the first six and last four digits are the maximum number of digits to be displayed)." Hmmm... I'm not sure how that squares with the very next requirement (3.4): "Render PAN [Primary Account Number], at minimum, unreadable anywhere it is stored." Looks like all 16 digits need to be protected to me.]

Saturday, May 17, 2008

Why You Don't Need a Web Application Layer Firewall

Now that PCI 6.6's supporting documents are finally released, a lot of people are jumping on the "Well, we're getting a Web Application Firewall" bandwagon. I've discussed the Pros and Cons of Web Application Firewalls vs Code Reviews before, but let's dissect one more objection in favor of WAFs and against code reviews (specifically static analysis) ...

This is from Trey Ford's blog post "Instant AppSec Alibi?"
Let’s evaluate this in light of what happens after a vulnerability is identified- application owners can do one of a couple things…
  1. Take the website off-line
  2. Revert to older code (known to be secure)
  3. Leave the known vulnerable code online
The vast majority of websites often do the latter… I am personally excited that the organizations now at least have a viable new option with a Web Application Firewall in the toolbox! With virtual patching as a legitimate option, the decision to correct a vulnerability at the code level or mitigate the vuln with a WAF becomes a business decision.

There are two huge flaws in Mr Ford's justification of having WAFs as a layer of defense.

1) Web Application Firewalls only address HALF of the problems with web applications: the syntactic portion, otherwise known in Gary McGraw speak as "the bug parade". The other half of the problems are design (semantic) problems, which Gary refers to as "security flaws". If you read Gary's books, he eloquently points out that research shows actual software security problems fall about 50/50 into each category (bugs and flaws).

For example, a WAF will never detect, correct, or prevent horizontal (becoming another user) or vertical (becoming an administrator) privilege escalation. This is not an input validation issue; it is an authorization and session management issue. If a WAF vendor says their product can do this, beware.

Even in the ideal best-case scenario, suppose a WAF can keep track of the source IP address where "joe" logged in. If joe's session suddenly jumps to an IP address in some distinctly different geographic location, and the WAF decides this is "malicious" and kills the session (or, more realistically, just stops passing the transactions from the assumed-to-be-rogue IP to the web application), then there will be false positives: corporate users who jump onto VPN and continue their browser's session, or individuals who switch from wireless to an "AirCard" or some other ISP. Location-based access policies are problematic. In 1995 it was safe to say "joe will only log on from this IP address", but today's Internet is much more dynamic than that. And if the WAF won't allow multiple simultaneous sessions from the same IP, forget selling your company's products or services to corporate users who are all behind the same proxy and NAT address.
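To make the point concrete, here is a minimal sketch (with a made-up data model) of the object-level authorization check that only the application itself can perform:

```python
# Sketch: horizontal privilege escalation is an authorization problem the
# application must solve; a WAF sees only well-formed requests either way.

ORDERS = {
    101: {"owner": "joe",   "total": 49.99},
    102: {"owner": "alice", "total": 12.50},
}

def get_order(session_user: str, order_id: int) -> dict:
    order = ORDERS[order_id]
    # The semantic check: does this record belong to the logged-in user?
    # A request for order 102 from joe's session is syntactically identical
    # to a legitimate one, so no input-validation rule can flag it.
    if order["owner"] != session_user:
        raise PermissionError("not your order")
    return order

assert get_order("joe", 101)["total"] == 49.99
try:
    get_order("joe", 102)          # joe reading alice's order
    raise AssertionError("should have been denied")
except PermissionError:
    pass                           # correctly refused by the application
```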

Another example: suppose your org's e-commerce app is designed so horribly that a key variable affecting the total price of a shopping cart is controlled by the client/browser. If a malicious user can make a shopping cart total $0, or worse -$100 (issuing a credit to the card instead of a debit), then no WAF on the market today or in the future is going to understand how to fix that. The WAF will say, "OK, that's a properly formatted ASCII-represented number and not some malicious script code; let it pass."
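The fix for that flaw lives in the application's design, not in a filter. A rough sketch (catalog, prices, and names are illustrative; prices are kept in integer cents to sidestep float rounding):

```python
# Sketch: recompute the total server-side from a trusted catalog and ignore
# whatever total the browser sends.

CATALOG_CENTS = {"widget": 1999, "gadget": 500}  # trusted server-side prices

def charge_total(cart: dict, client_total_cents: int) -> int:
    """Return the amount to charge, in cents, ignoring the client's total."""
    # Never trust client_total_cents: rebuild the total from the catalog.
    server_total = sum(CATALOG_CENTS[sku] * qty for sku, qty in cart.items())
    if server_total <= 0:
        raise ValueError("invalid cart total")
    return server_total

# A tampered request claiming the cart is worth -$100 is simply ignored:
assert charge_total({"widget": 2, "gadget": 1}, client_total_cents=-10000) == 4498
```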

Since the PCI Security Standards Council is supporting the notion of Web Application Firewalls, that raises the question: does the PCI Security Standards Council even understand what a WAF can and cannot do? Section 6.6 requires that WAFs or code reviews address the OWASP-inspired issues listed in section 6.5:
6.5.1 Unvalidated input
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
The following items fall into the "implementation bug" category which could be addressed by a piece of software trained to identify the problem (a WAF or a Static Code Analyzer):
6.5.1 Unvalidated input
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
These items fall into the "design flaw" category and require human intelligence to discover, correct, or prevent:
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management
Solving "design" or "semantic" issues requires building security into the design phase of your lifecycle. It cannot be added on by a WAF and generally won't be found by a code review, at least not one that relies heavily on automated tools. A manual code review that takes into consideration the criticality of a subset of the application (say, portions dealing with a sensitive transaction) may catch this, but don't count on it.



2) If your organization has already deployed a production web application that is vulnerable to something a WAF could defend against, then you are not really doing code reviews. There's no blunter way to put it. If you have a production problem that falls into the "bug" category I described above, then don't bother spending money on WAFs. Instead, spend your money on either a better code review tool or on hiring and training better employees to use it (since they clearly are not using it properly).
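To illustrate, here is the kind of bug that should never survive a code review long enough to need a WAF in front of it. Any static analyzer worth its license flags the concatenated query on sight (sqlite3 is used purely for demonstration):

```python
# Sketch: the injection a WAF must guess at in production is plainly
# visible at development time as tainted input concatenated into SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('joe', 0)")

def find_user_unsafe(name: str):
    # What a static analyzer flags: user input spliced into the query string.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name: str):
    # The development-time fix: a parameterized query, no WAF required.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic always-true injection dumps every row via the unsafe path,
# while the parameterized version treats it as a literal (non-matching) name:
assert find_user_unsafe("' OR '1'='1") == [("joe", 0)]
assert find_user_safe("' OR '1'='1") == []
```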



Bottom line: any problem in software that a WAF can be taught to find could have been caught at development time with a code review tool, so why buy both? Show me a problem a WAF can find that slipped through your development process, and I'll show you a broken development process. Web Application Firewalls are a solution in search of a problem.

Saturday, May 10, 2008

Sending Bobby Tables to the Moon

NASA has a program where you can send your name to the moon. Just give them your name; they'll store it electronically and send it on the next lunar rover, to be left there forever.

Now, little Bobby Tables will be immortalized forever on the moon.

Saturday, May 3, 2008

Automating Exploitation Creation

Some academic security researchers at Carnegie Mellon have released a very compelling paper which introduces the idea that merely monitoring a vendor's patch releases can enable automated exploit creation. (They call it "APEG".) They claim that automated analysis of the diffs between a pre-patch program and a patched program is possible-- and that in some cases an exploit can be created in mere minutes, while some clients take hours or days to check in and install their updates! Granted, there is some well-established commentary from Halvar Flake about the use of the term "exploit", since the APEG paper really only describes "vulnerability triggers" (Halvar's term).

Our friends at Carnegie Mellon have proved that the emperor hath no clothes. Creating exploits from analyzing patches is certainly not new. What is novel, in this case, is how the exploit creation process is automated:
"In our evaluation, for the cases when a public proof-of-concept exploit is available, the exploits we generate are often different than those publicly described. We also demonstrate that we can automatically generate polymorphic exploit variants. Finally, we are able to automatically generate exploits for vulnerabilities which, to the best of our knowledge, have no previously published exploit."