Sunday, November 3, 2013

Apple Takes Additional Precautions with its iPhone Fingerprint Sensor

After the release of iOS 7, which touted several new security features, I gave Apple some grief over the discovery of a security bug that was indicative of some lackadaisical security testing.  In the spirit of equal time, however, I need to give Apple its "propers" regarding some security forethought.  In a recent online post, Mactrast.com discusses Apple's apparent pairing of its TouchID sensor with the specific processor chip contained within its 5S phone.  In other words, swapping out either the touch sensor or the phone's processing chip renders the biometric data useless for accessing the phone's applications.  This clearly shows forethought on Apple's part re: securing biometric information as well as sensitivity to applicable privacy concerns.
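To illustrate the general idea (this is a conceptual sketch of component pairing, not Apple's actual scheme, which is not public -- the component IDs, salt, and key-derivation choice below are all my own assumptions), a key derived from both the sensor's and the chip's identities simply fails to reproduce if either part is swapped:

```python
import hashlib

def derive_key(sensor_id: bytes, chip_id: bytes, salt: bytes) -> bytes:
    # The pairing key is bound to BOTH component identifiers at once.
    return hashlib.pbkdf2_hmac("sha256", sensor_id + chip_id, salt, 100_000)

# At pairing time (e.g., at the factory), a key is derived and stored.
salt = b"\x00" * 16  # fixed here for reproducibility; would be random in practice
stored = derive_key(b"sensor-A", b"chip-1", salt)

# At unlock time, the key only matches for the original sensor/chip combination.
assert derive_key(b"sensor-A", b"chip-1", salt) == stored   # original pair: works
assert derive_key(b"sensor-B", b"chip-1", salt) != stored   # swapped sensor: locked out
assert derive_key(b"sensor-A", b"chip-2", salt) != stored   # swapped chip: locked out
```

The upshot is exactly what the article describes: a replacement sensor (or chip) cannot unlock biometric data that was enrolled against the original pair.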

You can read the full article here. Note that the article (correctly) points out the potential issues re: repairing phone screens on the 5S (translation -- if the screen repair damages the TouchID sensor, then the sensor and chip will need to be replaced in order for the biometric feature to work...which means that you're basically talking about a whole new phone).

The Fate of the Security "Profession"

I've been off the air for about a month due to some personal challenges, so I'm just catching up on some of the older stories that have been floating out there since late September.  One that has caught my eye is the National Academy of Sciences (NAS) report regarding the professionalization of information security.  In this report, NAS concludes that cybersecurity is best classified as an occupation rather than a profession; further, NAS concludes that professionalization of cybersecurity should only occur when "the occupation has well-defined and stable characteristics [and] when there are observed deficiencies in the occupational workforce that professionalization could help remedy."  NAS (and several industry pundits) further pointed out the challenges of our ever-morphing enemy as well as the self-taught nature of many of our most seasoned professionals.

What struck me most about this report is the hue and cry that did not occur from security professionals.  There were a small handful of articles and some (predictable) responses from folks who resented the implication that they were not "professionals" (in the strictest interpretation of the word), then...nothing.

It is this lack of commentary that concerns me the most.  There are several reasons for this.
  1. One of the criteria for professionalization has been (at least partially) met.  The security profession is facing a shortage of qualified personnel.  The operative term here is "qualified."  In an era where colleges and universities are regularly pumping out folks with computer/information security degrees, senior professionals are still having difficulty finding people with the KSAs to do the work.  Experience (the "E" that we add over time to KSAs) helps and is supposed to enhance basic skills...but many organizations have taken to dismissing the training and experience offered by colleges and universities as meaningless to security utility in the workplace.  Further, there is still wide variance between university programs in infosec -- and very few security professionals recommend ANY program as being appropriately constructed to prepare someone to tackle a security gig straight out of the classroom.  To me, this translates to a case of "deficiencies in the occupational workforce" as well as an inability to provide a steady stream of qualified personnel into the workforce.
  2. What do we do about it?  Folks, the lack of response from us as a profession seems to indicate either that (a) we agree with the characterization or (b) while we disagree with the report, we don't see how to change it.  While I will be the first person to admit that a portion of our work is art, we cannot surrender the battle for the science lest we lose the ability to maintain the seat at the table that we have fought to occupy over the past 15 years.  When organizations cannot afford to steal senior folks from other organizations, they will turn more and more to technology to substitute for experience.  Should this trend continue, we may find ourselves in a position where the chief security officer position (one of the three most senior positions our career progression has to offer) goes the way of the VP of Telephony.
Think I'm exaggerating?  I am personally aware of three multi-billion dollar entities that have broken up their security responsibilities amongst multiple groups upon the departure of their CSO.  Two of those three seem to be sustaining compliance and security levels with minimal to no difficulty.

The point of this post is a fairly simple one:  we cannot afford as professionals (even if we aren't technically a profession) to accept the status quo accurately pointed out by the NAS report.  We need to find a method of identifying and fostering the skills and mindset needed to succeed and -- most importantly -- stay ahead of the bad guys.  If we fail to invest in this effort, then we do a disservice to our constituents as well as those who are trying to follow in our footsteps.

My two cents...

(Note:  the link to the report above lists a price for the printed version of the report;  downloading the PDF is still free. )

Saturday, October 5, 2013

World's Largest Data Breaches

Here's one for your SETA quiver:

David McCandless and the team from Information Is Beautiful recently released both a static and an interactive infographic visualizing the World's Biggest Data Breaches.  It provides an interesting perspective on the size, scope, and cause of breaches over the past ten years.  There were some interesting nuggets there, even for a security guy!

You can find a link to the infographic here.  Enjoy!

Wednesday, September 25, 2013

Data Aggregator Giants Hacked

Today Brian Krebs (krebsonsecurity.com) has posted the results of a months-long investigation conducted by his organization.  These results, while long suspected, are disheartening:  it appears that several well-known data aggregators have been compromised, and their files accessed for malicious use.


The underground ID theft service SSNDOB[dot]ms (hereafter SSNDOB) has for two years marketed itself as a source for valid compromised identities.  The source of its data has been largely unknown, but access to a major data aggregator was suspected.  Several months ago, SSNDOB's own database was compromised and a copy was provided to Brian Krebs for analysis.  Further analysis was performed on the networks, activities, and credentials held by SSNDOB administrators, revealing a small botnet operating on the internal systems of LexisNexis, Dun & Bradstreet, and Kroll Background America.


The SSNDOB service has served up more than 1.02 million unique Social Security numbers and nearly 3.1 million date-of-birth records since its inception in early 2012.


You can read Krebs' full post regarding the compromise here.  Be advised that I have no further substantiation of Mr. Krebs' claims nor any statements from the aforementioned companies...but krebsonsecurity.com is known to be one of the most credible sources out there.  Also, here is a link with some great tips about what to do if you suspect your identity has been compromised.


Be aware...



Monday, September 23, 2013

IE Zero Day Released Into the Wild

Kudos to my buddy Matt for pointing this one out.  Recently SANS reported a zero-day exploit affecting all supported versions of Internet Explorer.

It looks like this zero day is no joke.  SANS raised its threat level to Yellow, so it looks like the flaw is actively being exploited.  Since this is a zero day, the best bet for now is to make sure you have appropriate mitigations in place.  Matt's blog does a great job of laying out the options.  Give it a read!

Saturday, September 21, 2013

iOS 7 Security Bug Discovered

Well THAT didn't take long at all, did it?

Just days after the release of Apple's new operating system -- which Apple is touting as having (among other things) enhanced security features -- websites are reporting the discovery of a security-related bug.  In a video released online, hackers demonstrated how accessing the Control Center feature from the lock screen and executing a specific series of commands will allow someone to access other applications (such as email) which are supposedly inaccessible when the phone is locked.  While Apple says it's working on a fix, the simplest solution for the nonce is to change the Control Center settings so that you cannot access Control Center on the Lock screen.  This is easily doable from the Settings screen (though feel free to ping me directly if you need a walk-through).

While some have dismissed the relevance of this bug, I like security controls to work.  I remain cautious about what I do and do not do online, but given the ubiquitous nature of technology it is nigh impossible to avoid utilizing wireless devices to store, process, or transmit some type of data.  In this context, dismissing security flaws as hyperbole is short-sighted and naive.  Yes, convenience comes with risk, and even security geeks like me understand that.  I wonder, though, how much Apple spent on the Security Testing and Evaluation (ST&E) of its operating system as compared to, say, redesigning its icons.  Would shifting 1% of that spend toward ST&E have made a difference?  We'll never know.  What we do know is that Apple is now spending unplanned dollars fixing flaws and responding to public embarrassment instead of innovating.  Not a good position for a technology company to be in.

Here's hoping Apple takes a much harder look at its iOS  and sends a fast update before the next security bug is discovered.  Oh, wait...too late...the next bug has already been found.  

Be aware...

Sunday, September 15, 2013

Nymi -- Biometrics Revisited

Last week my friend Lori approached me with an article she had read about a new device called Nymi.  This device (which is in pre-release and available for preorder) purports to be able to use "a person's unique heartrate" for authentication purposes.  Payment devices, hotel check-in technologies, enterprise computer systems, and even automobile locks can then be secured and accessed without remembering a plethora of passwords or carrying half a dozen token devices (to include physical keys).  Lori's question to me -- which warmed my heart :) -- was what the security implications and ramifications of such a technology would be.  To answer this question, we need to go back to the basic principles behind authentication and biometrics.  If you are already more than well versed in these topics, then you should scroll ahead a few paragraphs; however, a base-level review of these topics is never a bad thing.

As most of us know, the best authentication schemas use two of the following three factors:  (1) something you have (a physical token such as a key or a digital key fob); (2) something you know (a unique password); and/or (3) something you are (a biometric identifier such as a fingerprint).  Most true two-factor authentication schemas employ (1) and (2) above; many schemas use two instances of item (2) -- such as a user ID and a password -- which is not true two-factor authentication.
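As a concrete sketch of a true two-factor check combining (1) and (2) -- the function names and the simple SHA-256 password hash below are illustrative choices of mine, not a production design -- a server might verify a password alongside an RFC 6238-style time-based code from a hardware fob:

```python
import hashlib, hmac, struct, time

def totp(secret, t=None, step=30):
    """RFC 6238-style time-based one-time code: the 'something you have' factor."""
    counter = struct.pack(">Q", int(time.time() if t is None else t) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (code % 1_000_000)

def authenticate(password, code, stored_hash, secret):
    knows = hashlib.sha256(password.encode()).digest() == stored_hash  # something you know
    has = hmac.compare_digest(code, totp(secret))                      # something you have
    return knows and has  # BOTH factors must pass; a stolen password alone fails
```

The point of the design is that an attacker who phishes the password still cannot log in without the physical token generating the current code.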

Very, very few authentication schemas employ widespread use of biometrics in their environments.  The reasons are straightforward:
  • Invasiveness.  Utilization of biometrics in some form or fashion usually means the surrender and recording of a person's unique physical characteristics.  If you use a fingerprint scanner, for example, then somewhere within your network is some type of digital representation of your staff's fingerprints.  Same for retinal scanners.  Many organizations see the adoption of such tools as invasive and "overkill" from a security standpoint.
  • Privacy.  With over 35 states having data privacy and security laws, protecting biometric data adds yet another category of data to be secured within the enterprise.  Worse, biometric data may subject organizations to portions of the HIPAA/HITECH regulations that they might not have to deal with at present.
  • Rejection/Acceptance Rates.  If you enter your password and token data correctly, the system will allow you access.  Period.  If you use a biometric device, you are subject to false rejection and denial of access -- or worse (from a security perspective), false acceptance, which will allow unauthorized personnel access to your secure data.  While these rates are falling as technologies get better, they are still not at 100% -- which means they run the risk of being labelled (a) a nuisance or encumbrance to operations or (b) ineffective in securing the enterprise.
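The trade-off in that last bullet can be made concrete with a toy threshold model (the match scores below are invented for illustration; real systems score samples against an enrolled template):

```python
def decide(similarity, threshold):
    """Accept the sample if its similarity score clears the threshold."""
    return similarity >= threshold

# Hypothetical match scores: genuine users cluster high, impostors low --
# but the two distributions overlap, which is where FAR/FRR comes from.
genuine = [0.91, 0.88, 0.62, 0.95, 0.79]
impostor = [0.20, 0.31, 0.71, 0.15, 0.44]

def rates(threshold):
    frr = sum(not decide(s, threshold) for s in genuine) / len(genuine)  # false rejections
    far = sum(decide(s, threshold) for s in impostor) / len(impostor)    # false acceptances
    return far, frr

print(rates(0.5))  # lax threshold: some impostors get in
print(rates(0.8))  # strict threshold: some legitimate users get locked out
```

Tightening the threshold pushes the false acceptance rate down but the false rejection rate up, and vice versa; neither can be driven to zero while the score distributions overlap.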
With these things in mind, let's take a look at the Nymi.

Nymi's use requires it to be on your wrist and active.  Once there, Nymi purports to be able to "continuously" sample your heart rate and provide continuous proximity-based authentication for those systems which require such things (say, for example, your network-based office computers, which could automatically lock when you get up from the desk).  Interviews with the CEO (which can be found on their website) discuss how Nymi was built utilizing the Privacy by Design framework, which emphasizes minimum utilization of personal data and total transparency re: where the data goes...

...but there is no information or discussion around the security of the device and the data.

Without the technical specifications I am only guessing at how Nymi actually works...but logic would dictate that it is either (a) transmitting a digitized version of your heart rate signature or (b) utilizing your heart rate to authorize the transmittal of a unique "go code" to an authentication device (in other words, Nymi samples your heart rate and determines that it is, indeed, you...at which point it sends a unique authentication key to the device you're attempting to utilize).  Here are the top-of-head questions the Old Security Guy in me has regarding the security and utility of the Nymi:
  1. Static Nature of My "Unique Heart Rate."  I'm not a doctor, but I would assume that my heart characteristics now, as an overweight 47-year-old man, have changed slightly since I was a 22-year-old Lean Mean Fighting Machine.  What specific items are measured to generate this unique signature?  If my heart health changes (cholesterol, etc.), will I be locked out of my own Nymi-enabled devices?  While heart rate and heartbeat are different things, I would assume that my heartbeat is one of the variables that goes into my unique signature.  What's the variance and/or tolerance rate of the device in this regard?  If (for example) I set up Nymi at my resting heart rate just after I wake up, will I be unable to use it just after a workout when my heartbeat is accelerated?  What if I get a pacemaker installed or need heart surgery (as another dear friend of mine is undergoing this week)?  Would those things change my characteristics to the point of needing to reset my Nymi -- and is such a reset possible?
  2. It's All About The Data.  What, specifically, is being transmitted by the Nymi?  Is it compared against a centrally-stored signature, or is the authentication done in the local device?  If there is a centralized store of data, then I would want to know how Nymi is protecting that data.  If authentication is done locally in the Nymi device, then I would expect that either a static or dynamic "go code" is sent to the authenticating system.  If the code is dynamic (similar, for instance, to a random RSA token), what's the schema used to generate the random code to ensure it can't be spoofed?  If it is static and tied to the individual Nymi device, then how is the code server secured?  (Note:  Nymi speaks often about its use of Bluetooth technology...but Bluetooth technology isn't foolproof or hackproof. :) )
  3. What's the Uplift?  The marketing campaign for Nymi is clearly geared toward the consumer...but for this technology to work in as widespread a fashion as described, there needs to be acceptance by enterprise-class users such as (for example) payment processors.  Given the highly-regulated nature of that industry (and the heightened level of concern regarding data security these days), the questions listed in (2) above would have to be answered in meticulous detail before widespread adoption could take place.
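If option (b) above is how Nymi works, the spoofing/replay concern in item 2 could be addressed with a fresh challenge per attempt.  The sketch below is my own hypothetical design, not anything Nymi has documented; every name in it is invented for illustration:

```python
import hashlib, hmac, os

# A secret provisioned into both the wearable and the verifying system at enrollment.
device_secret = os.urandom(32)

def wearable_respond(challenge, heartbeat_ok):
    """The wearable checks the wearer's biometric locally, then answers the challenge."""
    if not heartbeat_ok:  # local heart-rate match gates the response
        return None
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verifier_check(challenge, response):
    """The door/PC/payment terminal recomputes the expected 'go code' and compares."""
    if response is None:
        return False
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh per attempt, so a captured response can't be replayed
resp = wearable_respond(challenge, heartbeat_ok=True)
assert verifier_check(challenge, resp)                  # live wearer: accepted
assert not verifier_check(os.urandom(16), resp)         # replay against a new challenge: rejected
```

The biometric signature itself never leaves the device in this model; only a keyed, challenge-specific response does, which is exactly the kind of detail I'd want Nymi to confirm.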
Conclusions:  In an era where people are still using weak passwords and changing them infrequently,  convenient biometric solutions make sense; that being said, Nymi's marketing focus on privacy versus security leads me to believe that they mightn't be ready for security prime time just yet.  I would be reluctant to employ Nymi even on my personal devices until I got some answers to some fairly straightforward security questions...

...answers that, as of yet, aren't forthcoming.

My two cents...