Security Compliance

Having served on a national information security standards working group, I’m keenly aware that compliance is a major driver — if not the primary driver — for security initiatives today.

Compliance rules work best when the threat of inaction is tangible and immediate. Usually the threat is “we will fail an external audit unless we comply with X,” so management is highly motivated to comply with X, spending resources it otherwise would not.

There are many issues with this approach:

  • The majority of small & medium businesses out there are not subject to periodic audit.  Without the big stick of a negative audit opinion, compliance rules are routinely ignored.
  • Initiatives are often designed to pass audit with the least amount of work.  Little or no effort is expended in actually understanding the risks and designing controls appropriate for that level of risk.
  • Within large enterprises with complex infrastructure, compliance teams and auditors can realistically only sample small parts of the overall system, leaving large gaps unexamined.
  • Auditors are often too reluctant to “fail” an auditee that has appropriate “processes and procedures” in place.  Auditors generally “believe” an auditee who says a pending issue is being addressed.  However, these processes and procedures often exist only on paper, and sometimes no action is taken until an auditor starts complaining.
  • As an extension to the above, often what’s being audited is only the paperwork (existence of standards, directives, design documents, change logs, etc.), not the actual systems in use.
  • Compliance does not equal security.  Standards, rules and regulations cannot replace common sense.

I could go on and on.  None of these problems is new; they mirror issues with auditing in general.

Having said all that, I truly believe compliance-driven initiatives do help organizations improve their security posture.  Even when companies just do the bare minimum required, that’s still more than doing nothing.

Regex on the iPhone

If you’re programming the iPhone, sooner or later you’ll need regular expressions (regex).  By default, OS X includes ICU, an open-source Unicode library with extensive regex capabilities.

The ICU APIs are C/C++, however, not Objective-C.  Fear not: RegexKitLite to the rescue.  This small library has done all the hard work of adding regex methods to NSString.  RegexKitLite is small, thread-safe, and quite fast.  It simply links against ICU, unlike its bigger brother, RegexKit, which must be compiled against PCRE.

RegexKitLite is also easy to use:

#import <Foundation/Foundation.h>
#import "RegexKitLite.h"

NSString *foo = @"some string to search on";
NSString *regex = @"^(.+?)\\s"; // the backslash must be escaped inside the string literal
NSLog(@"Match: %@", [foo stringByMatching:regex capture:1]);

Then just link with -licucore and that’s it!

Note: In Xcode I simply added -licucore to the “Other Linker Flags” in my project’s build configuration.  There may be a “better” way of doing this, but this method works for me.
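As a quick sanity check of the pattern itself, here is the same regex in Python; for a simple expression like this, ICU and Python regex syntax agree (this is just a sketch, not part of the iPhone toolchain):

```python
import re

# Lazily match one or more characters from the start, up to the first whitespace.
match = re.match(r"^(.+?)\s", "some string to search on")
print(match.group(1))  # → "some"
```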

Google Suggest vs. Privacy

Since Google launched Chrome yesterday, much has been said in the blogosphere about its privacy implications. The issue is that Google can log your search keystrokes as you type, even before you hit Enter to submit the search. But since Google Suggest is now enabled by default, this behavior is actually no different from typing a search into Google.com directly, using any browser.

And this behavior is not confined to Google.com either: many third-party websites directly or indirectly use Google Suggest, even if they don’t use the Google Search widget. It gets worse (see later). But how does it work from non-Google websites?

Basically, the website traps your keystrokes using an “onkeyup” event handler, then issues an AJAX call to the Suggest API (suggestqueries.google.com). The API can be invoked with a simple HTTP GET. For example, suppose you search for “sarah p” today.

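The request boils down to building a GET URL carrying the partially typed query. A minimal Python sketch of that construction (the host comes from this post; the helper name and the bare q parameter are my simplification — the real callers add extra parameters):

```python
from urllib.parse import urlencode

def suggest_url(partial_query):
    # Build a Suggest API GET URL for a partially typed query.
    params = urlencode({"q": partial_query})
    return "http://suggestqueries.google.com/complete/search?" + params

print(suggest_url("sarah p"))
# → http://suggestqueries.google.com/complete/search?q=sarah+p
```

A browser would fire a request like this from its onkeyup handler on every keystroke.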
Google then returns a suggestion list:

window.google.ac.h(["sarah p",[["sarah palin","357,000 results","1"],
["sarah polley","1,110,000 results","2"],["sarah paulson","487,000 results","3"]

Notice this is a JSONP result: the JSON payload is wrapped in a callback (window.google.ac.h) so it can be loaded cross-domain via a script tag.

What most people don’t know is that if you use Firefox, the top-right Google search box (the default for most people) has been using this functionality all along!  So Firefox has the same privacy issue.  The Firefox search handler calls the suggestion API with an added parameter (output=firefox&qu=sarah%20p) and gets back a simpler list:

["sarah p",["sarah palin","sarah polley","sarah paulson", ...
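The Firefox-style response is plain JSON, so extracting the suggestions is trivial. A quick sketch (the response text is abbreviated from the sample above):

```python
import json

# Firefox-style Suggest response: [query, [suggestion, suggestion, ...]]
response = '["sarah p",["sarah palin","sarah polley","sarah paulson"]]'
query, suggestions = json.loads(response)
print(query)        # sarah p
print(suggestions)  # ['sarah palin', 'sarah polley', 'sarah paulson']
```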

So what’s new with Chrome? The difference is that Chrome combines the URL bar and the search bar. When you type in “http://www.slashdot”, for example, Chrome sends out the following HTTP request before you finish your action. Here’s what the packet sniffer logs:

GET /complete/search?client=chrome&output=chrome&hl=en-US&q=http%3A%2F%2Fwww.slashdot HTTP/1.1\r\n
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/ Safari/525.13\r\n
Accept-Language: en-US,en\r\n
Accept-Charset: ISO-8859-1,*,utf-8\r\n
Accept-Encoding: gzip,deflate,bzip2\r\n
Host: clients1.google.ca\r\n
Connection: Keep-Alive\r\n

Which means with Chrome, Google now knows not only what you’re searching for, but also which websites you directly go to as well.

You can turn off this functionality by going to Options > Default Search > Manage and unchecking the “Use a suggestion service” box. At the very least, Google should let users turn off URL auto-suggestions (off by default) while still enabling search completion.

Struts2 external redirect

For one of my S2 apps I needed to redirect to an external site. If you Google how to do this, you’ll see examples of the form:

<result name="foo" type="redirect">${externalUrl}</result>

which doesn’t actually work, at least not in the version I’m using. Instead, I’m using the httpheader result type to do a 301 redirect:

<result name="foo" type="httpheader">
  <param name="status">301</param>
  <param name="headers.Location">${externalUrl}</param>
</result>

Moore’s law == SSL

When it comes to security, Moore’s law usually benefits crackers: faster brute-force is an obvious benefit. One win for “the good guys” involves SSL.

Not so long ago, implementing SSL was so expensive compute-wise we had to deploy special cryptographic accelerator cards either on our load-balancers or on our edge servers.  One type of card we had was capable of 200 RSA signs/second, but cost ~$4000.00 each.  Theoretically we could stuff three of these cards into a web server,  achieving 600 signs/sec for $12000 (plus whatever the server costs.)

Fast forward to 2008.  I recently evaluated a “low-end” Dell PowerEdge SC1435 1U rack server with a single dual-core 2.6GHz Opteron.  After installing FreeBSD/amd64 and recompiling OpenSSL from source, running “speed rsa1024” computed 2000+ signs/sec per core, 4100 RSA signs/sec in total.  Plus, the SC1435 has an open socket for a second dual-core Opteron.

Not bad for a machine we bought for less than $800 on eBay.  Needless to say we have no performance concerns deploying our application with SSL enabled.  Thanks Mr. Moore.
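The economics are easy to sanity-check with the numbers above (a back-of-the-envelope sketch; the prices and rates are the ones quoted in this post):

```python
# Accelerator-card era: three ~$4000 cards stuffed into one server.
card_cost = 3 * 4000          # $12,000
card_rate = 3 * 200           # 600 RSA signs/sec
print(card_cost / card_rate)  # → 20.0 dollars per sign/sec

# 2008 commodity server: an $800 box doing 4100 signs/sec in software.
server_cost = 800
server_rate = 4100
print(server_cost / server_rate)  # ~0.195 dollars per sign/sec
```

Roughly a hundredfold drop in cost per sign/sec, which is why the accelerator cards disappeared.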