
Archive for the ‘Uncategorized’ Category

Adrian Rosebrock has a good walkthrough on how to Install OpenCV 3.0 and Python 2.7+ on OSX.  However, if you want to use Python 3, the OpenCV Python bindings don’t get installed correctly, if at all.

What I did to resolve:

1. Follow Adrian’s walkthrough from Steps 1-6, but be sure to use “python3” and “pip3” instead of just “python”:

  • Install Xcode
  • Install python3 via Homebrew (“brew install python3”, “brew linkapps python3”)
  • Also install virtualenv and virtualenvwrapper as directed (“pip3 install virtualenv virtualenvwrapper”), including modifying and resourcing .bash_profile
  • mkvirtualenv a new environment
  • Make sure numpy is installed (“pip3 install numpy”)
  • Install the other dependencies in Step 6: cmake, jpeg, libpng, etc.

2. In Adrian’s Step 6, get OpenCV and opencv_contrib from GitHub.  For both, use the latest release if you’d like (currently 3.1.0): run “git checkout 3.1.0” in both the OpenCV and opencv_contrib checkouts.  You must use the same version for both.

3. For the cmake step, use the snippet below (copy & paste to get the full text).  Currently Python 3.5.1 is the latest from Homebrew, but if you have a newer/different version, adjust the paths accordingly.  Also be sure to change the last line (the OPENCV_EXTRA_MODULES_PATH variable) to point to the modules directory where you’ve checked out opencv_contrib from GitHub:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D PYTHON3_EXECUTABLE=$(which python3) \
 -D PYTHON3_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
 -D PYTHON3_LIBRARY=/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/libpython3.5.dylib \
 -D PYTHON3_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") \
 -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=ON \
 -D BUILD_EXAMPLES=ON \
 -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules ..
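The $(python3 -c …) substitutions in the cmake command simply ask Python for its own paths.  You can run the equivalent standalone to sanity-check the values before configuring.  A minimal sketch using only the standard library (sysconfig is the stdlib stand-in for the distutils calls above, which are deprecated on newer Pythons):

```python
# Print the paths that the cmake command's $(python3 -c ...) substitutions
# resolve to, so you can eyeball them before running cmake.
import sysconfig

include_dir = sysconfig.get_path("include")    # feeds PYTHON3_INCLUDE_DIR
packages_path = sysconfig.get_path("purelib")  # feeds PYTHON3_PACKAGES_PATH

print("PYTHON3_INCLUDE_DIR   =", include_dir)
print("PYTHON3_PACKAGES_PATH =", packages_path)
```

If either path points outside your virtualenv’s Python, the bindings will be built against the wrong interpreter.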

4. The cmake command above should run without errors. Check the output to see that python3 is included in the OpenCV modules to be built:

OpenCV modules:
     To be built:           core flann imgproc ml photo reg 
surface_matching video dnn fuzzy imgcodecs shape videoio highgui 
objdetect plot superres ts xobjdetect xphoto bgsegm bioinspired 
dpm face features2d line_descriptor saliency text calib3d ccalib 
datasets rgbd stereo structured_light tracking videostab 
xfeatures2d ximgproc aruco optflow stitching python2 python3

5.  If python3 is not included in the “To be built” modules, something is wrong.  Check the cmake variables again to make sure they are correct for your environment.

6. Assuming the python3 module is correctly listed, you can now build and install OpenCV: “make -j 4” followed by “sudo make install”.   On a quad-core i7 Mac you can use “make -j 8” to speed up the compile a bit.

7. Finish up with the rest of Adrian’s guide (Steps 9 and 10) to check that OpenCV has been properly installed and configured.
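As a quick final check from inside the virtualenv, you can try importing the bindings (the helper below is my own, not part of OpenCV):

```python
def opencv_version():
    """Return the installed OpenCV version string, or None if cv2 is absent."""
    try:
        import cv2
        return cv2.__version__
    except ImportError:
        return None

version = opencv_version()
print("OpenCV:", version if version else "cv2 not importable -- check the install")
```

If the import fails, the usual culprit is the cv2 shared object landing in the wrong site-packages directory; re-check the PYTHON3_PACKAGES_PATH value from the cmake step.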



[Update 11/13: Please see my follow-up to these issues.]

[Update 3/26: I’m now in contact with Google Security.]

[Update 3/28: I’m aware of Google’s official response to the issues raised in this blog.  I am continuing to share my findings with Google Security and appreciate the excellent feedback they are providing me.  It would be premature for me to provide further comment at this time. ]

If you can see the image below, you’ve just hacked Google Docs:

The above image should not be accessible to you.  It’s supposed to be embedded solely within a protected Google Docs document, which I have not shared. In fact, I’ve actually deleted that document.  It shouldn’t even exist anymore.  Yet here you are, viewing my precious picture in all its glory, nakedly served by Google servers,  outside of the protective Docs environment.

What went wrong?  In light of the recent Google Docs privacy glitch, let’s take a look at three privacy issues highlighting problems with the way documents are shared:

1. No protection for embedded images

When you embed (“insert”) an image from your computer into a Google Document, that image is “uploaded” onto Google servers and assigned an id.   From then on, the image is accessible via a URL.  For example, the URL for the above image is:

docs.google.com/File?id=dtfqs27_1f3vfmkcz_b

However, unlike the containing document, embedded images are not protected by the sharing controls.  That means anyone with access to the URL can view the image.  If you’ve shared a document containing embedded images with someone, that person will always be able to view those images.  Even after you’ve stopped sharing the document.  Or as the image above demonstrates, even after you’ve deleted the document.
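You can verify this behavior yourself: the image URL answers even when the request carries no cookies or credentials at all.  A hedged sketch (is_public is my own helper, and it needs network access to be meaningful):

```python
# Check whether a URL is served to an anonymous client: no cookies,
# no credentials, just a bare GET. A 200 response means anyone can fetch it.
import urllib.request

def is_public(url, timeout=5):
    """Return True if the URL answers 200 to an unauthenticated request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

# e.g. is_public("https://docs.google.com/File?id=dtfqs27_1f3vfmkcz_b")
```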

That’s counter-intuitive behavior for most users.   If you embed an image into a protected document, you’d expect the image to be protected too.  If you delete a document, you’d expect any embedded resources to be deleted also. The end result is a potential privacy leak.

2. File revision flashback

It’s 4am and you’ve been working all night on a document.   This document contains a Docs diagram, blueprinting that million-dollar idea you have in your head.

You want to share this document with potential suppliers, but you don’t want to reveal all of your secrets just yet.   So you diligently redact the diagram, removing all the sensitive parts of the blueprints.  Satisfied that your idea is safe, you share the document (view-only).

Next thing you know, your idea has been stolen.  A Chinese company quickly ships knockoffs based on your complete blueprints.  What happened?

Unknown to you, anyone you shared the document with can view any version of any diagram embedded in the document.  The fact that you’ve deleted sensitive parts of the diagram doesn’t matter, because the viewer can see the older versions.

How?  Quite easy.  In Google Docs, a diagram is a set of instructions that’s rasterized into an image (in PNG format).  Each time you modify a diagram, a new raster image is created, but the old versions remain accessible via a URL, in the format:

docs.google.com/drawings/image?id=1234&...&rev=23&ac=1

To view any previous version, just change the “rev=” number above.
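In other words, walking the revision history is just string manipulation on the URL.  A sketch of the idea (the id and rev values are the placeholder ones from the text, not a real document):

```python
# Illustrative only: given a Docs drawing URL, generate URLs for earlier
# revisions by rewriting the rev= query parameter.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def revision_url(url, rev):
    """Return the same URL with its rev= parameter replaced."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["rev"] = str(rev)
    return urlunsplit(parts._replace(query=urlencode(query)))

current = "https://docs.google.com/drawings/image?id=1234&rev=23&ac=1"
for rev in range(22, 19, -1):
    print(revision_url(current, rev))
```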

This problem is reminiscent of the old Microsoft Word Fast Save issue, and will have similar privacy implications if not changed.

3. I’ll help myself to your Docs, thanks

So you learned your lesson from above, and stopped sharing your documents.  You’ve kicked everyone out from your Docs.  This negates the purpose of Docs somewhat, but you’d rather be safe than sorry.

Working solo, you happily add new ideas to your secret document, patting yourself on the back before you go on a well-deserved vacation.

Too bad that while you’re sipping piña coladas on the beach, those same suppliers you’ve just kicked out have added themselves back to your Docs and are stealing your new ideas!  What?

It’s true.  Even if you unshare a document with a person, that person can in certain cases still access your document without your permission, a serious breach of privacy.  For now I’m withholding the mechanics of when/why/how this happens, pending further research and feedback from Google if any.

NOTE:

These findings are based upon my investigations stemming from Issue #1 above.  I disclosed this particular issue to Google on March 18.  I tend to follow rfpuppy’s Full Disclosure Policy and so waited five business days for Google to comment.  I have yet to receive any response from Google other than the usual automated, canned reply (which I don’t consider a real response).


Chinks in the Armor

Defense-in-depth is a cornerstone of any information security strategy.   Corporate networks are routinely segmented into various zones such as “public”, “DMZ”, “extranet” and “intranet” to contain sensitive information deep within several protection domains.  Failure of one control should not compromise the entire system.

Defense-in-depth is everywhere.   Border routers filter spoofing attacks.  The firewalls behind them enforce specific network (and sometimes application) controls.  IPS/IDS systems monitor numerous operational parameters.   Sophisticated log analysis and alerting tools are being deployed.  Everything from the HR hiring procedures to the workstation anti-virus update procedure forms a part of this layering strategy.

Yet while IT and security professionals are becoming adept in designing sophisticated fortresses to protect ultra-secret corporate data, sometimes they completely forget to protect their customers.

Defense-in-depth as practiced today protects against bad stuff coming in, but not bad stuff going out.

Let’s take a look at a simple XSS attack.   For example, I disclosed this problem to Solvay more than six months ago.  They’ve never bothered to fix (or even acknowledge) the issue.

You see, the Solvay public website is more or less just “brochure-ware”.  It doesn’t hold credit-card numbers.   It doesn’t contain Solvay’s trade secrets.  A lowly XSS attack like the one above won’t compromise any Solvay databases.  It’s not worthy of a fix.

I can, however, use the above XSS to phish Solvay customers into giving up their confidential information.  Or create a fake press release to manipulate Solvay’s stock price. I’m sure Solvay’s investors won’t be very happy.

A simple XSS bug might be “harmless” by itself but can form a powerful attack when combined with other techniques, both technical and non-technical.  It can be a malicious “first-step” used to exploit other weaknesses in the system.  Fixing simple problems like this should be part of any layered defense strategy.
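The standard fix for a reflected XSS bug like this is simple output encoding: escape untrusted input before echoing it back into a page.  A minimal sketch using Python’s standard library (not how Solvay’s site is implemented, just the general technique):

```python
# HTML-escape untrusted input so injected markup renders as inert text
# instead of executing in the victim's browser.
from html import escape

untrusted = '<script>alert("xss")</script>'
safe = escape(untrusted, quote=True)  # quote=True also escapes attribute quotes
print(safe)
```

Every mainstream web framework ships an equivalent; the failure mode is almost always forgetting to apply it on one overlooked output path.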

Solvay, by the way, makes chemicals and pharmaceuticals, like industrial hydrogen peroxide, which is equally useful in hair-bleaching and bomb-making.


Security Compliance

Having served on a national information security standards working group, I’m keenly aware that compliance is a major driver — if not the primary driver — for security initiatives today.

Compliance rules work best when the consequence of inaction is tangible and immediate.   Usually, the threat is “we will fail the external audit unless we comply with X,” and thus management is highly motivated to comply with X, spending resources it otherwise would not.

There are many issues with this approach:

  • The majority of small & medium businesses out there are not subject to periodic audit.  Without the big stick of a negative audit opinion, compliance rules are routinely ignored.
  • Initiatives are often designed to pass audit with the least amount of work.  Little or no effort is expended in actually understanding the risks and designing controls appropriate for that level of risk.
  • Within large enterprises with complex infrastructure, compliance teams and auditors can realistically only sample small parts of the overall system, leaving large gaps unexamined.
  • Auditors are often reluctant to “fail” an auditee that has appropriate “processes and procedures” in place, and generally “believe” an auditee who says a pending issue is being addressed. However, these processes and procedures often exist only on paper, and sometimes no action is taken until an auditor starts complaining.
  • As an extension to the above, often what’s being audited is only the paperwork (existence of standards, directives, design documents, change logs, etc.), not the actual systems in use.
  • Compliance does not equal security.  Standards, rules and regulations cannot replace common sense.

I could go on and on.  None of these problems are new; they mirror issues with auditing in general.

Having said all that, I truly believe compliance-driven initiatives do help organizations improve their security posture.  Even when companies just do the bare minimum required, that’s still more than doing nothing.
