
Adrian Rosebrock has a good walkthrough on how to Install OpenCV 3.0 and Python 2.7+ on OSX.  However, if you want to use Python 3, the OpenCV Python bindings don’t get installed correctly, if at all.

What I did to resolve:

1. Follow Adrian’s walkthrough from Steps 1-6, but be sure to use “python3” and “pip3” instead of just “python” (the Step 1 commands are consolidated in the snippet after this list):

  • Install Xcode
  • Install python3 via Homebrew (“brew install python3”, “brew linkapps python3”)
  • Also install virtualenv and virtualenvwrapper as directed (“pip3 install virtualenv virtualenvwrapper”), including modifying and re-sourcing .bash_profile
  • mkvirtualenv a new environment
  • Make sure numpy is installed (“pip3 install numpy”)
  • Install the other dependencies in Step 6: cmake, jpeg, libpng, etc.
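For reference, here are the Step 1 commands from the list above, consolidated for copy & paste.  This is a rough sketch: the virtualenv name “cv3” is just an example, and the exact .bash_profile edits are in Adrian’s guide.

brew install python3
brew linkapps python3
pip3 install virtualenv virtualenvwrapper
# edit ~/.bash_profile as directed in Adrian's guide, then reload it:
source ~/.bash_profile
# create and enter a new Python 3 virtual environment (name is arbitrary):
mkvirtualenv cv3 -p python3
pip3 install numpy
# Step 6 dependencies (plus the other packages Adrian lists):
brew install cmake jpeg libpng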

2. In Adrian’s Step 6, get OpenCV and opencv_contrib from GitHub.  For both, use the latest release if you’d like (currently 3.1.0): run “git checkout 3.1.0” in both the OpenCV and opencv_contrib checkouts.  You must use the same version for both.
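If you haven’t cloned the repositories yet, the following sketch does it, assuming you clone into your home directory (which matches the OPENCV_EXTRA_MODULES_PATH used in the cmake snippet below) and use the current GitHub locations of the repos:

cd ~
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
cd ~/opencv
git checkout 3.1.0
cd ~/opencv_contrib
git checkout 3.1.0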

3. For the cmake part, use the snippet below (copy & paste to get the full text).   Currently Python 3.5.1 is the latest from Homebrew but if you have a newer/different version, please adjust accordingly.  Also be sure to change the last line (OPENCV_EXTRA_MODULES_PATH variable) to the modules directory where you’ve checked out opencv_contrib from GitHub:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D PYTHON3_EXECUTABLE=$(which python3) \
 -D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
 -D PYTHON3_LIBRARY=/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/libpython3.5.dylib \
 -D PYTHON3_LIBRARIES=/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/bin \
 -D PYTHON3_INCLUDE_DIR=/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/Headers \
 -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
 -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=ON \
 -D BUILD_EXAMPLES=ON \
 -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules ..

4. The cmake command above should run without errors. Check the output to see that python3 is included in the OpenCV modules to be built:

OpenCV modules:
     To be built:           core flann imgproc ml photo reg 
surface_matching video dnn fuzzy imgcodecs shape videoio highgui 
objdetect plot superres ts xobjdetect xphoto bgsegm bioinspired 
dpm face features2d line_descriptor saliency text calib3d ccalib 
datasets rgbd stereo structured_light tracking videostab 
xfeatures2d ximgproc aruco optflow stitching python2 python3

5.  If python3 is not included in the “to be built” modules then something is wrong.  Check the cmake variables again to make sure they are correct for your environment.

6. Assuming the python3 module is correctly listed, you can now build and install OpenCV: “make -j 4” followed by “sudo make install”.   On a quad-core i7 Mac you can use “make -j 8” to speed up the compile a bit.

7. Finish up with the rest of Adrian’s guide (Steps 9 and 10) to check that OpenCV has been properly installed and configured.
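As a final sanity check (assuming Adrian’s Steps 9 and 10 have made the cv2 bindings visible to your virtualenv), the following should print the OpenCV version from Python 3:

workon cv3    # or whatever you named your virtualenv
python3 -c "import cv2; print(cv2.__version__)"    # should print 3.1.0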

Perusing the private address book and making live video calls — all from a locked iPhone 4.

Last time we explored how an incorrect time setting could expose your pictures on a locked iPhone.  Today we’ll have a bit more fun.

Often when doing security work, you’re happy if you manage to leak just a single bit (yes, one binary digit) of encrypted material.  That one bit could be the tip of the iceberg, so to speak, which might lead to more secrets underneath.

Let’s see how many bits of personal information we can gather from a locked, passcode-protected iPhone 4, without jailbreaking or using any special tools.

Below is a screenshot of my iPhone’s lock setup screen.  The settings are rather conservative: a long passcode is required immediately; and Voice Dial is DISABLED (when the screen is locked).

Passcode Lock setup screen

Note: I’m using an iPhone 4 (not 4s) with vanilla iOS 5.0.1 (the latest at this time).  I do not have Siri on this phone.

When the iPhone 3GS first came out, many were surprised that Voice Dial was enabled by default on their locked iPhones (and similarly, with Siri on the 4s).  So for our exercise today, we made sure Voice Dial is turned off.

Voice dialing is accessed by long-pressing the phone’s home button.  Again, I’ve disabled voice dialing, but the fine print on the setup screen above notes that “iPod Voice Control is always enabled”, so it can still be used to play songs, etc.

Can we trick this restricted Voice Control into leaking some private info, and perhaps into making calls for us?  (Yes, we can!)

First let’s see how the Voice Dial restriction works.  I lock my phone, then long-press the home button until Voice Control appears.  I command it, “call <Alice>”.  The phone responds with “Voice Dialing is disabled“.   As it should.  All good, right?

Voice Control screen

Now “slide to unlock” but instead of entering the passcode, hit the “Emergency Call” button (bottom-left).  We get this special emergency call screen:

Emergency Call screen

With this screen showing, I again bring up Voice Control, and repeat, “call <Alice>”.  This time the phone responds with “No match found“.  Hmm, different!!

Actually, that response in itself, my friends, is already a leak.  Voice Control reveals that I don’t have a contact named “Alice” in my Contacts.  One leaked privacy bit.

Just to test, let’s try with someone who’s actually in my address book, my friend Wayland.  I bring up Voice Control again from the Emergency Call screen and say “call <Wayland>”.

(Locked) Voice Control calling Wayland

Wow, it tries to dial out!  Although the call fails to actually connect, the screen reveals Wayland’s full name and that I have his mobile number.  Not a huge deal, but more leaked bits!

At this point, it’s easy for anyone to enumerate through the Contacts by simply trying common first names like Adam, Bob, Charles, etc.  Let’s see how far we can go.

Here’s an example when I say, “call <Lisa>”:

Multiple matches shown for the name Lisa

Voice Control leaks that I have two Lisas in my contact list, one Lisa Atkins and one Lisa Klein**.  Repeating with “call <Lisa Klein>” yields further information:

Multiple numbers listed for Lisa Klein

Now Voice Control leaks that I have two numbers for Lisa Klein: her “mobile” and another number at the “love shack“.  Had this been my jealous girlfriend probing my locked phone, I would’ve been totally busted!

Remember, we’re getting all this info from a locked phone with Voice Dial explicitly disabled.

So far we’ve only enumerated through the Contacts.  Can we actually complete a call from the locked phone?  With FaceTime, the answer is yes!

Again starting from the Emergency Call screen, this time I say, “FaceTime <Lisa Klein>”.  And Voice Control dutifully connects, to the love shack, with full two-way video live streaming.  Yikes!  Not what I’d expect from my locked phone!

Lisa please don’t answer…

During testing, the FaceTime calls from my locked iPhone successfully connected and I was able to see + converse with the other party.  The test calls disconnected after a few minutes, but those disconnections might be due to the spotty internet service here at my hotel in Medellín, Colombia.

Bottom line:  We’re able to trick Voice Control into enumerating the private address book and making live FaceTime video calls on a locked iPhone 4, even with Voice Dial specifically disabled in the settings.

**Some names faked to protect the innocent.

Special thanks to Wayland Chan for helping me test FaceTime.

p.s. I have not tested this issue on the iPhone 3GS, which has Voice Control but lacks FaceTime.

UPDATE: Feb 8, 2012:  While the iPhone attempts to connect the FaceTime call, it will show the contact’s profile picture if any.  So a stranger using your iPhone could possibly see pictures of your contacts even if they do not have FaceTime enabled.

UPDATE: Feb 9, 2012: CNET also tested the bug on the iPhone 3GS and the iPhone 4S.

I always get a bit antsy about hacking (er, researching vulnerabilities) when I travel, and this time is no exception.  Often I notice “glitches” or abnormalities which I want to investigate, but since I’m in the middle of riding my motorcycle from Canada to Argentina, infosec has been on the back burner.

Recently I took advantage of great wi-fi in Costa Rica to finally upgrade my iPhone 4 to iOS 5.   Double-clicking the home button now allows one to quickly access the Camera app even from a locked phone:

The camera icon (bottom-right) is now accessible from a locked iPhone

Since the phone is locked, the Camera app has a smart feature barring access to the iPhone’s album: you can only see pictures taken during the current (locked) session.

As an aside, I thought I noticed a glitch whereby I could completely bypass the passcode lock, but it turns out it’s just poor UI from Apple.  (There’s a state where the phone is locked but a passcode is not yet required, and the UI during this period can be misleading.)   I changed the passcode setting to “immediate” after that.

UI barring access to album pictures from locked phone

While researching the above “glitch”, I was intrigued at how the Camera app’s album manager was able to segregate your “protected” images vs. the ones from the current session.  It’s like a “jail” for images.  I wondered if I could break out of this image jail.

Turns out Apple’s restriction is just a simple filter based on the timestamp when the Camera app was invoked.  You’re allowed to see all images with a timestamp greater than this invocation time.  Yet that leads to an immediate hole: if your iPhone’s clock ever rolls back, then all images with timestamps newer than your iPhone’s clock will be viewable from your locked phone.

But time always moves forward, right? Why would your phone’s clock ever roll backwards?

  • It could be due to user error.  E.g., maybe while traveling across timezones you accidentally set the iPhone’s date or time incorrectly (rather than simply resetting the timezone).   If you set the clock ahead of what it’s supposed to be, then this vulnerability will appear when you reset to the correct time.  If you accidentally set the clock to the past, then your images will immediately become unprotected.
  • It could be an iPhone glitch.  E.g., a software or hardware issue could reset your iPhone’s clock to epoch time — iPhone’s “zero” time at midnight January 1, 2001.  In this case all your images are exposed.
  • It could be an infrastructure error.  E.g., if you automatically sync from an erroneous external time source (cell phone company, etc.)

I don’t think normal (non-Apple) apps can change the iPhone’s clock, but if they can, then that could be another possible source of rollback.

This vulnerability is simple to test.  Just set your iPhone’s clock to a time in the past (say, in 2010).  Then access the Camera while your phone is still locked.  Lo-and-behold, you’ll be able to see all your “protected” images.

The point to all this is that Apple should not rely on a simple timestamp to restrict image access.  Changing the iPhone’s clock — forwards or backwards — should not affect its security.  We can’t guarantee the clock will always move monotonically forward, and when it doesn’t, the system should fail-secure.

In the big picture, if real “bad guys” have physical access to your phone, then the game is over already.  However, as I wrote previously, defense-in-depth is a basic concept which should always be applied.

On various occasions I’ve advised clients to secure their time servers, etc., in the context of esoteric cryptographic attacks, audit logging, and other protocols which depend on accurate timekeeping.  I’m a bit amused that the iPhone is vulnerable to a simple time change.

A PGP Online Store vulnerability could have allowed hackers to harvest PGP Corporation‘s customer data.

Exposed data included each customer’s full contact info (name, physical address, email address, telephone number, etc), the PGP product & version level purchased by the customer, and the customer’s operating system (Windows or Mac).

For some customers, partial credit card information was also exposed, including the type of card the customer used (Visa, Amex, etc.), the last four digits of the card, the card’s expiration date, and the first & last name associated with the card.

Screenshot showing masked PGP customer data

The type and amount of data exposed could have subjected PGP customers to an extremely effective targeted phishing attack, especially considering PGP’s reputation as a leader in the data protection market.

The vulnerability involved an Insecure Direct Object Reference on a product renewal URL which was not protected by any form of authentication.

Timeline:

This vulnerability was disclosed to PGP on October 17, 2009.  PGP acknowledged the issue on October 22 and implemented a fix; however, my re-testing indicated the problem was not resolved (results communicated back to them on the same day).  Sent follow-up email on November 5 as there were no updates from PGP.  On November 9, PGP responded that they were planning to add authentication to protect the renewal function.  Verified vulnerability still existed on November 11. Issue appears to have been fixed on November 12.

Back in March I wrote about a few security issues with Google Docs while keeping some details private.

Google Security and the Google Docs product management team engaged me immediately after the issues became public, and kept me well informed of their findings through several days of productive exchange of ideas. I’m used to getting the silent treatment when reporting security issues, so I’d like to credit Google for keeping the lines of communications open.

I had been “on the road” since then and decided to take time off from blogging.  Now that I’m back home, I’d like to close these issues before writing about a few other (non-Google) security & privacy concerns I have in mind.

So without further delay, let’s revisit the three Docs issues based on my emails with Google back in late March and early April.  I understand that Google has made changes to remediate part or all of these issues, according to their own risk determination.

1. No protection for embedded images

This issue was about the lack of protection (authentication) for images embedded in a document, and an image’s continued existence on Google’s servers after its containing document has been deleted.  The lack of authentication means that the image URL could be accessed by 3rd parties without the document owner’s consent.

Google correctly noted that the image URL would have been known only to those with previous access to the image, and someone with such access could have saved the image anyway, and perhaps shared the saved image with unauthorized persons.

However, from a privacy perspective, there is a crucial difference between a “saved” image being disclosed, and one being served directly by Google Docs: evidence of ownership.

Let’s examine how a typical Docs image URL is constructed:

docs.google.com/File?id=dtfqs27_1f3vfmkcz_b   (an image stored at Google)

The first portion of the id (“dtfqs27”) seems to uniquely identify the resource owner (in this case, me).  Documents and images created by the same account will have this same ID as part of the URL.

Embedding “personally linkable” IDs in URLs is poor practice and has wide-ranging privacy implications on its own — more on this later.  Yet we go two steps further here by: 1) associating the ID with a document resource; and 2) making the entire URL publicly accessible.  This is a form of Insecure Direct Object Reference, a common security issue which I’ll have more to say about in the coming days.

Contrived scenario:

I share a picture of my company’s ultra-secret new tablet with a potential supplier.   An employee of said supplier saves the picture and wants to sell it to AdeInsider.com, a rumor site tracking my inventions.   The site accuses the employee of just making it all up in Photoshop.  So the employee shares the link instead, e.g.:

docs.google.com/File?id=dtfqs27_4ghppz9dq_b

Since there is no authentication, AdeInsider.com can now widely publish that link, and point out to their readers that the image on my blog has the same unique identifier, thus positively determining ownership. Instant privacy breach.  (Instead of a secret gadget, imagine compromising pictures, etc.)  My only recourse is to get Google support to remove the image, since I can’t immediately do it myself by deleting the containing document.  But any action on my part would have been too late, anyway.

As I noted in a previous post, I can only recommend defense-in-depth.   In this case the lack of authentication — which appears benign by itself due to randomness in the URL — might cause a serious privacy breach due to another issue (leak of what is essentially personally identifying information.)

Tangent:

Tagging resources with IDs potentially linked to personal information is unfortunately a widespread practice, with Facebook being a big example.  As with Google Docs, images uploaded to Facebook are tagged with the user’s ID, are accessible without authentication, and are subject to the same privacy flaw.  It’s trivial to map Facebook IDs to real names.  From a privacy perspective, ID tagging might in some cases be more problematic than tracking cookies.

2. File revision flashback

I’m not going to add much more to this issue except to note that privacy breaches can occur due to designed behavior having non-intuitive implications for regular users — the old Microsoft Fast Save feature comes to mind, as well as a number of accidental disclosures involving PDF.  The fact that someone can fiddle with an embedded image’s URL (normally buried in HTML) to get previous revisions is not obvious to your typical Docs user.

Google has added useful entries in their Help files and there are now explicit controls in the diagram tool.

3. I’ll help myself to your Docs, thanks

I reported that in some cases, a person removed from a shared document could add himself back to a document’s shared list without the owner’s permission or knowledge.  This issue obviously garnered the most attention and as it turned out, was much more complex than I originally thought.

Google clarified that this behavior is proper when a document has the “invitations may be used by anyone” option enabled.  The purpose of this option is to allow forwarding of invitations (e.g., for mailing lists), and essentially works by making the document public.

After Google’s clarification, I checked through my test documents, and sure enough, this option was enabled on them, explaining the behavior.  There was only one problem:  I had explicitly disabled this option when creating my test documents, yet somehow these documents became publicly accessible!

After additional analysis at the time, my findings indicated that:

– A race condition existed due to the way the document sharing control GUI was implemented.  Most of the time, the Docs sharing control worked fine.  However, in some cases the control could fail in three distinct ways: a) the “invitations may be used by anyone” option visibly re-enabled itself after being disabled, immediately prior to the user clicking “submit”; b) the option remained disabled on screen, but was incorrectly submitted as enabled; c) the GUI completely failed and became non-responsive (which is actually fine since that’s fail-secure.)

I was able to record screencasts of each failure type and submitted them for Google’s review.

– Compounding the issue, a different GUI problem could hide the fact that a deleted “sharee” has added himself back to a document.

In Google Docs there are several areas where a document’s sharing status can be seen, including the main screen’s “folder view”, the left-nav of the main screen, and a document’s sharing dialog.  When a sharee deleted a document (breaking the share) and then immediately added himself back, the main folder view and left-nav would show that the document was no longer being shared when in fact it still was.

So weaknesses in the Google Docs user interface implementation could cause private document invitations to be wrongly permissioned as public, and furthermore, deleted share participants could add themselves back to documents without the document’s owner noticing.

What are essentially simple UI flaws (which arguably should have been caught by developers and/or QA) now have security and privacy implications.   This “escalation” is an inherent risk with collaborative applications, especially “cloud” applications which have world-shareable features.

I must state, the likelihood of a direct breach due to wrong permissioning is low.  However, as Issue #1 demonstrated, even seemingly minor flaws could lead to privacy leaks.  Indeed, documents incorrectly permissioned in this way are subject to the same evidence of ownership leak as the images in Issue #1.

From what I could tell, Google quickly implemented changes to fix part if not all of these issues.  I have no visibility regarding how many documents were incorrectly marked public.  Readers with highly sensitive documents should periodically review their sharing controls.

[Update 11/13: Please see my follow-up to these issues.]

[Update 3/26: I’m now in contact with Google Security.]

[Update 3/28: I’m aware of Google’s official response to the issues raised in this blog.  I am continuing to share my findings with Google Security and appreciate the excellent feedback they are providing me.  It would be premature for me to provide further comment at this time. ]

If you can see the image below, you’ve just hacked Google Docs:

The above image should not be accessible to you.  It’s supposed to be embedded solely within a protected Google Docs document, which I have not shared. In fact, I’ve actually deleted that document.  It shouldn’t even exist anymore.  Yet here you are, viewing my precious picture in all its glory, nakedly served by Google servers,  outside of the protective Docs environment.

What went wrong?  In light of the recent Google Docs privacy glitch, let’s take a look at three privacy issues highlighting problems with the way documents are shared:

1. No protection for embedded images

When you embed (“insert”) an image from your computer into a Google Document, that image is “uploaded” onto Google servers and assigned an id.   From then on, the image is accessible via a URL.  For example, the URL for the above image is:

docs.google.com/File?id=dtfqs27_1f3vfmkcz_b

However, unlike the containing document, embedded images are not protected by the sharing controls.  That means anyone with access to the URL can view the image.  If you’ve shared a document containing embedded images with someone, that person will always be able to view those images.  Even after you’ve stopped sharing the document.  Or as the image above demonstrates, even after you’ve deleted the document.

That’s counter-intuitive behavior for most users.   If you embed an image into a protected document, you’d expect the image to be protected too.  If you delete a document, you’d expect any embedded resources to be deleted also. The end result is a potential privacy leak.
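Anyone can check this behavior with a plain HTTP fetch of the image URL shown above, no Google login or cookies required (at least as of this writing):

curl -s -o embedded_image "https://docs.google.com/File?id=dtfqs27_1f3vfmkcz_b"
# saves the raw image bytes, served without any authentication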

2. File revision flashback

It’s 4am and you’ve been working all night on a document.   This document contains a Docs diagram, blueprinting that million-dollar idea you have in your head.

You want to share this document with potential suppliers, but you don’t want to reveal all of your secrets just yet.   So you diligently redact the diagram, removing all the sensitive parts of the blueprints.  Satisfied that your idea is safe, you share the document (view-only).

Next thing you know, your idea has been stolen.  A Chinese company quickly ships knockoffs based on your complete blueprints.  What happened?

Unknown to you, anyone you shared the document with can view any version of any diagram embedded in the document.  The fact that you’ve deleted sensitive parts of the diagram doesn’t matter, because the viewer can see the older versions.

How?  Quite easy.  In Google Docs, a diagram is a set of instructions that’s rasterized into an image (in PNG format).  Each time you modify a diagram, a new raster image is created, but the old versions remain accessible via a URL, in the format:

docs.google.com/drawings/image?id=1234&...&rev=23&ac=1

To view any previous version, just change the “rev=” number above.
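As a purely hypothetical sketch (the real URL contains additional query parameters, elided above as “…”, which you would copy from the shared document’s HTML), enumerating revisions is a simple loop:

# IMG_BASE stands in for the diagram image URL copied from the document's HTML,
# minus the rev parameter; the other parameters are elided above as "...".
IMG_BASE="https://docs.google.com/drawings/image?id=1234&ac=1"
for rev in 23 22 21 20 19; do
  curl -s -o "diagram_rev_${rev}.png" "${IMG_BASE}&rev=${rev}"
done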

This problem is reminiscent of the old Microsoft Word Fast Save issue, and will have similar privacy implications if not changed.

3. I’ll help myself to your Docs, thanks

So you learned your lesson from above, and stopped sharing your documents.  You’ve kicked everyone out from your Docs.  This negates the purpose of Docs somewhat, but you’d rather be safe than sorry.

Working solo, you happily add new ideas to your secret document, patting yourself on the back before you go on a well-deserved vacation.

Too bad that while you’re sipping piña coladas on the beach, those same suppliers you’ve just kicked out have added themselves back to your Docs and are stealing your new ideas!  What?

It’s true.  Even if you unshare a document with a person, that person can in certain cases still access your document without your permission, a serious breach of privacy.  For now I’m withholding the mechanics of when/why/how this happens, pending further research and feedback from Google if any.

NOTE:

These findings are based upon my investigations stemming from Issue #1 above.  I disclosed this particular issue to Google on March 18.  I tend to follow rfpuppy’s Full Disclosure Policy and so waited five business days for Google to comment.  I have yet to receive any response from Google other than the usual automated, canned reply (which I don’t consider a real response).

Chinks in the Armor

Defense-in-depth is a cornerstone of any information security strategy.   Corporate networks are routinely segmented into various zones such as “public”, “DMZ”, “extranet” and “intranet” to contain sensitive information deep within several protection domains.  Failure of one control should not compromise the entire system.

Defense-in-depth is everywhere.   Border routers filter spoofing attacks.  The firewalls behind them enforce specific network (and sometimes application) controls.  IPS/IDS systems monitor numerous operational parameters.   Sophisticated log analysis and alerting tools are being deployed.  Everything from the HR hiring procedures to the workstation anti-virus update procedure forms a part of this layering strategy.

Yet while IT and security professionals are becoming adept in designing sophisticated fortresses to protect ultra-secret corporate data, sometimes they completely forget to protect their customers.

Defense-in-depth as practiced today protects against bad stuff coming in, but not against bad stuff going out.

Let’s take a look at a simple XSS attack.   For example, I disclosed this problem to Solvay more than six months ago.  They’ve never bothered to fix (or even acknowledge) the issue.

You see, the Solvay public website is more or less just “brochure-ware”.  It doesn’t have credit-card numbers.   It doesn’t contain Solvay’s trade secrets.  A lowly XSS attack like the above won’t compromise any Solvay databases.  It’s not worthy of a fix.

I can, however, use the above XSS to phish Solvay customers into giving up their confidential information.  Or create a fake press release to manipulate Solvay’s stock price. I’m sure Solvay’s investors won’t be very happy.

A simple XSS bug might be “harmless” by itself but can form a powerful attack when combined with other techniques, both technical and non-technical.  It can be a malicious “first-step” used to exploit other weaknesses in the system.  Fixing simple problems like this should be part of any layered defense strategy.

Solvay, by the way, makes chemicals and pharmaceuticals, like industrial hydrogen peroxide, equally useful in hair-bleaching and bomb-making.

Security Compliance

Having served on a national information security standards working group, I’m keenly aware that compliance is a major driver — if not the primary driver — for security initiatives today.

Compliance rules work best when the threat for inaction is tangible and immediate.   Usually, the threat is “we will fail external audit unless we comply with X” and thus management is highly motivated to comply with X, spending resources they otherwise would not.

There are many issues with this approach:

  • The majority of small & medium businesses out there are not subject to periodic audit.  Without the big stick of a negative audit opinion, compliance rules are routinely ignored.
  • Initiatives are often designed to pass audit with the least amount of work.  Little or no effort is expended in actually understanding the risks and designing controls appropriate for that level of risk.
  • Within large enterprises with complex infrastructure, compliance teams and auditors can realistically only sample small parts of the overall system, leaving large gaps unexamined.
  • Auditors are often too reluctant to “fail” an auditee if the auditee has appropriate “processes and procedures” in place.  Auditors generally “believe” an auditee who says a pending issue is being addressed. However, often these processes and procedures only exist on paper, and sometimes no action is taken until an auditor starts complaining.
  • As an extension to the above, often what’s being audited is only the paperwork (existence of standards, directives, design documents, change logs, etc.), not the actual systems in use.
  • Compliance does not equal security.  Standards, rules and regulations cannot replace common sense.

I could go on and on.  None of these problems are new, mirroring issues with auditing in general.

Having said all that, I truly believe compliance-driven initiatives do help organizations improve their security posture.  Even when companies just do the bare minimum required, that’s still more than doing nothing.

If you’re programming the iPhone, sooner or later you’ll need regular expressions (regex).  By default OS X includes the ICU, an open source Unicode library which has extensive regex capabilities.

The ICU APIs are in C/C++ however, not Objective-C.   Fear not, RegexKitLite to the rescue.   This small library has done all the hard work of adding regex methods to NSString.  RegexKitLite is small, thread-safe, and quite fast.  It simply links to ICU —  unlike its bigger brother, RegexKit, which must be compiled against PCRE.

RegexKitLite is also easy to use:

#import "RegexKitLite.h"
NSString * foo = @"some string to search on";
NSString * regex = @"^(.+?)\s";
NSLog(@"Match: %@", [foo stringByMatching:regex capture:1]);

Then just link with -licucore and that’s it!!

Note: In Xcode I simply added -licucore to the “Other Linker Flags” in my project’s build configuration.  Maybe there’s a “better” way of doing this but this method works for me.

Google Suggest vs. Privacy

Since Google launched Chrome yesterday, much has been said in the blogosphere about its privacy implications. The issue is that Google can log your search keystrokes as you type, even before you hit the Enter key to submit the search. But since Google Suggest is now enabled by default, this behavior is actually no different from typing a search into Google.com directly, using any browser.

And this behavior is not confined to Google.com either: many third-party websites directly or indirectly use Google Suggest, even if they don’t use the Google Search widget. It gets worse (see below). But how does it work from non-Google websites?

Basically, the website traps your keystrokes using an “onkeyup” event handler, then issues an AJAX call to the Suggest API (suggestqueries.google.com). The API can be invoked with a simple HTTP GET. Here’s an example when you search for “sarah p” today:

http://suggestqueries.google.com/complete/search?qu=sarah%20p

Google then returns a suggestion list:

window.google.ac.h(["sarah p",[["sarah palin","357,000 results","1"],
["sarah polley","1,110,000 results","2"],["sarah paulson","487,000 results","3"]
...etc...

Notice this is a JSONP result.
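You can reproduce the call yourself with any HTTP client, e.g. curl, using the endpoint and parameter exactly as shown above (the API is unofficial and could change at any time):

curl "http://suggestqueries.google.com/complete/search?qu=sarah%20p"
# returns a JSONP payload like the window.google.ac.h([...]) sample above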

What most people don’t know is that if you use Firefox, the top-right Google search box (the default for most people) has already been using this functionality all along!!  So Firefox has the same privacy issue.  The Firefox search handler calls the suggestion API with an added parameter (output=firefox&qu=sarah%20p) and gets a simpler return list:

["sarah p",["sarah palin","sarah polley","sarah paulson", ...

So what’s new with Chrome? The difference is Chrome combines the URL bar and the Search bar together. When you type in “http://www.slashdot”, for example, Chrome sends out the following HTTP request prior to you completing your action. Here’s what the packet sniffer logs:

GET /complete/search?client=chrome&output=chrome&hl=en-US&q=http%3A%2F%2Fwww.slashdot HTTP/1.1\r\n
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13\r\n
Accept-Language: en-US,en\r\n
Accept-Charset: ISO-8859-1,*,utf-8\r\n
Accept-Encoding: gzip,deflate,bzip2\r\n
Host: clients1.google.ca\r\n
Connection: Keep-Alive\r\n

Which means that with Chrome, Google now knows not only what you’re searching for, but also which websites you go to directly.

You can turn off this functionality by going to Options > Default Search > Manage and uncheck the “Use a suggestion service” box. At the very least, Google should let users turn off URL auto-suggestions (off by default) while still enabling search completion.