Archive for March, 2009

[Update 11/13: Please see my follow-up to these issues.]

[Update 3/26: I’m now in contact with Google Security.]

[Update 3/28: I’m aware of Google’s official response to the issues raised in this blog.  I am continuing to share my findings with Google Security and appreciate the excellent feedback they are providing me.  It would be premature for me to provide further comment at this time. ]

If you can see the image below, you’ve just hacked Google Docs:

The above image should not be accessible to you.  It’s supposed to be embedded solely within a protected Google Docs document, which I have not shared. In fact, I’ve actually deleted that document.  It shouldn’t even exist anymore.  Yet here you are, viewing my precious picture in all its glory, nakedly served by Google servers,  outside of the protective Docs environment.

What went wrong?  In light of the recent Google Docs privacy glitch, let’s take a look at three privacy issues highlighting problems with the way documents are shared:

1. No protection for embedded images

When you embed (“insert”) an image from your computer into a Google Document, that image is “uploaded” onto Google servers and assigned an id.   From then on, the image is accessible via a URL.  For example, the URL for the above image is:


However, unlike the containing document, embedded images are not protected by the sharing controls.  That means anyone with access to the URL can view the image.  If you’ve shared a document containing embedded images with someone, that person will always be able to view those images.  Even after you’ve stopped sharing the document.  Or as the image above demonstrates, even after you’ve deleted the document.

That’s counter-intuitive behavior for most users.   If you embed an image into a protected document, you’d expect the image to be protected too.  If you delete a document, you’d expect any embedded resources to be deleted also. The end result is a potential privacy leak.
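The failure is easy to verify: fetch the image URL as an anonymous visitor, with no Google session at all.  Here’s a minimal Python sketch of that check — the docs.example.com URL and id parameter are placeholders, since the real image URL is omitted above:

```python
import urllib.request

def probe_unauthenticated(url):
    """Build a request carrying no cookies or auth headers,
    simulating an anonymous visitor outside the Docs environment."""
    # Deliberately send no Cookie or Authorization header: if the
    # image still loads, it is protected only by URL secrecy.
    return urllib.request.Request(url, headers={"User-Agent": "probe/0.1"})

# Placeholder URL; the real Docs image URL scheme is omitted above.
req = probe_unauthenticated("https://docs.example.com/File?id=IMAGE_ID")
print(req.get_header("Cookie"))  # None: the request is anonymous
# urllib.request.urlopen(req) would then reveal whether the image
# is served (HTTP 200) or access-controlled (403/404).
```

If sharing controls applied to embedded images, an anonymous fetch like this would be refused; instead, the image above is served to everyone.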

2. File revision flashback

It’s 4am and you’ve been working all night on a document.   This document contains a Docs diagram, blueprinting that million-dollar idea you have in your head.

You want to share this document with potential suppliers, but you don’t want to reveal all of your secrets just yet.   So you diligently redact the diagram, removing all the sensitive parts of the blueprints.  Satisfied that your idea is safe, you share the document (view-only).

Next thing you know, your idea has been stolen.  A Chinese company quickly ships knockoffs based on your complete blueprints.  What happened?

Unbeknownst to you, anyone you shared the document with can view any version of any diagram embedded in the document.  The fact that you’ve deleted sensitive parts of the diagram doesn’t matter, because the viewer can see the older versions.

How?  Quite easy.  In Google Docs, a diagram is a set of instructions that’s rasterized into an image (in PNG format).  Each time you modify a diagram, a new raster image is created, but the old versions remain accessible via a URL, in the format:


To view any previous version, just change the “rev=” number above.
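Enumerating every revision is trivial to script.  A minimal Python sketch — the base URL and “id” parameter are hypothetical placeholders; only the “rev=” parameter comes from the URL format above:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def revision_urls(diagram_url, max_rev):
    """Given one rasterized-diagram URL, produce URLs for every
    revision from 0..max_rev by rewriting the 'rev=' query parameter."""
    parts = urlsplit(diagram_url)
    query = parse_qs(parts.query)
    urls = []
    for rev in range(max_rev + 1):
        query["rev"] = [str(rev)]
        urls.append(urlunsplit(parts._replace(query=urlencode(query, doseq=True))))
    return urls

# Hypothetical URL shape; the real Docs diagram path is omitted above.
urls = revision_urls("https://docs.example.com/drawing?id=DIAG_ID&rev=3", 3)
print(urls[0])  # rev=0 is the original, unredacted version
```

A viewer who received only the redacted rev=3 can simply walk backwards to rev=0 and see everything you removed.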

This problem is reminiscent of the old Microsoft Word Fast Save issue, and will have similar privacy implications if not changed.

3. I’ll help myself to your Docs, thanks

So you learned your lesson from above and stopped sharing your documents.  You’ve kicked everyone out of your Docs.  This somewhat negates the purpose of Docs, but you’d rather be safe than sorry.

Working solo, you happily add new ideas to your secret document, patting yourself on the back before you go on a well-deserved vacation.

Too bad that while you’re sipping piña coladas on the beach, those same suppliers you’ve just kicked out have added themselves back to your Docs and are stealing your new ideas!  What?

It’s true.  Even if you unshare a document with a person, that person can in certain cases still access your document without your permission, a serious breach of privacy.  For now I’m withholding the mechanics of when/why/how this happens, pending further research and feedback from Google if any.


These findings are based upon my investigations stemming from Issue #1 above.  I disclosed this particular issue to Google on March 18.  I tend to follow rfpuppy’s Full Disclosure Policy and so waited five business days for Google to comment.  I have yet to receive any response from Google other than the usual automated, canned reply (which I don’t consider a real response).


Read Full Post »

Chinks in the Armor

Defense-in-depth is a cornerstone of any information security strategy.   Corporate networks are routinely segmented into various zones such as “public”, “DMZ”, “extranet” and “intranet” to contain sensitive information deep within several protection domains.  Failure of one control should not compromise the entire system.

Defense-in-depth is everywhere.   Border routers filter spoofing attacks.  The firewalls behind them enforce specific network (and sometimes application) controls.  IPS/IDS systems monitor numerous operational parameters.   Sophisticated log analysis and alerting tools are being deployed.  Everything from the HR hiring procedures to the workstation anti-virus update procedure forms a part of this layering strategy.

Yet while IT and security professionals are becoming adept in designing sophisticated fortresses to protect ultra-secret corporate data, sometimes they completely forget to protect their customers.

Defense-in-depth as practiced today protects against bad stuff coming in, but not bad stuff going out.

Let’s take a look at a simple XSS attack.   For example, I disclosed this problem to Solvay more than six months ago.  They’ve never bothered to fix (or even acknowledge) the issue.

You see, the Solvay public website is more or less just brochure-ware.  It doesn’t have credit-card numbers.   It doesn’t contain Solvay’s trade secrets.  A lowly XSS attack like the above won’t compromise any Solvay databases.  It’s not worthy of a fix.

I can, however, use the above XSS to phish Solvay customers into giving up their confidential information.  Or create a fake press release to manipulate Solvay’s stock price. I’m sure Solvay’s investors won’t be very happy.

A simple XSS bug might be “harmless” by itself but can form a powerful attack when combined with other techniques, both technical and non-technical.  It can be a malicious “first-step” used to exploit other weaknesses in the system.  Fixing simple problems like this should be part of any layered defense strategy.
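At its core, a reflected XSS is just the server echoing attacker input back into the page unescaped.  A generic Python sketch of that distinction (not Solvay-specific; the page contents here are made up for illustration):

```python
import html

PAYLOAD = '<script>alert(1)</script>'

def reflects_unescaped(body, payload=PAYLOAD):
    """True if the server echoed the payload back verbatim (vulnerable);
    False if it escaped or dropped it."""
    return payload in body

# A vulnerable page interpolates user input raw; a safe page escapes it.
vulnerable_page = f"<p>Search results for: {PAYLOAD}</p>"
safe_page = f"<p>Search results for: {html.escape(PAYLOAD)}</p>"
print(reflects_unescaped(vulnerable_page), reflects_unescaped(safe_page))  # True False
```

The fix is equally simple — HTML-escape user input on output — which is exactly why leaving it unfixed for six months is hard to excuse.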

Solvay, by the way, makes chemicals and pharmaceuticals,  like industrial hydrogen-peroxide, equally useful in hair-bleaching and bomb-making.

Read Full Post »

Security Compliance

Having served on a national information security standards working group, I’m keenly aware that compliance is a major driver — if not the primary driver — for security initiatives today.

Compliance rules work best when the threat for inaction is tangible and immediate.   Usually, the threat is “we will fail external audit unless we comply with X” and thus management is highly motivated to comply with X, spending resources they otherwise would not.

There are many issues with this approach:

  • The majority of small & medium businesses out there are not subject to periodic audit.  Without the big stick of a negative audit opinion, compliance rules are routinely ignored.
  • Initiatives are often designed to pass audit with the least amount of work.  Little or no effort is expended in actually understanding the risks and designing controls appropriate for that level of risk.
  • Within large enterprises with complex infrastructure, compliance teams and auditors can realistically only sample small parts of the overall system, leaving large gaps unexamined.
  • Auditors are often too reluctant to “fail” an auditee if the auditee has appropriate “processes and procedures” in place.  Auditors generally “believe” an auditee who says a pending issue is being addressed.  However, these processes and procedures often exist only on paper, and sometimes no action is taken until an auditor starts complaining.
  • As an extension to the above, often what’s being audited is only the paperwork (existence of standards, directives, design documents, change logs, etc.), not the actual systems in use.
  • Compliance does not equal security.  Standards, rules and regulations cannot replace common sense.

I could go on and on.  None of these problems are new, mirroring issues with auditing in general.

Having said all that, I truly believe compliance-driven initiatives do help organizations improve their security posture.  Even when companies just do the bare minimum required, that’s still more than doing nothing.

Read Full Post »

iPhone SDK Regular Expressions

If you’re programming the iPhone, sooner or later you’ll need regular expressions (regex).  OS X includes ICU by default, an open-source Unicode library with extensive regex capabilities.

The ICU APIs are in C/C++, however, not Objective-C.   Fear not, RegexKitLite to the rescue.   This small library has done all the hard work of adding regex methods to NSString.  RegexKitLite is small, thread-safe, and quite fast.  It simply links against ICU, unlike its bigger brother, RegexKit, which must be compiled against PCRE.

RegexKitLite is also easy to use:

#import <Foundation/Foundation.h>
#import "RegexKitLite.h"

NSString *foo = @"some string to search on";
// Note the double backslash: \s must be escaped inside a string literal.
NSString *regex = @"^(.+?)\\s";
NSLog(@"Match: %@", [foo stringByMatching:regex capture:1]);

Then just link with -licucore and that’s it!!

Note: In Xcode I simply added -licucore to the “Other Linker Flags” in my project’s build configuration.  Maybe there’s a “better” way of doing this but this method works for me.

Read Full Post »