Wednesday, December 4, 2013

Millions of Gmail, Yahoo, Twitter, and Facebook Passwords Stolen

Hackers have stolen usernames and passwords for nearly two million accounts at Facebook, Google, Twitter, Yahoo and others, according to a report released this week.

The massive data breach was the result of keylogging software maliciously installed on an untold number of computers around the world, researchers at cybersecurity firm Trustwave said. The virus had been capturing log-in credentials for key websites over the past month and sending those usernames and passwords to a server controlled by the hackers. You can read the details of the breach here, but you should change your passwords on these services as soon as possible.  Spread the word!

Sunday, December 1, 2013

Security Tips for Cyber Monday

I am a seasonal retailer's worst nightmare...

...I am a consumer who shops for Christmas well before the holidays. 

I've never done a Black Friday nor a Cyber Monday shopping extravaganza;  by this time of the year I am focusing on decorating the house, writing Christmas cards, and swinging by the local supermarket to pick up some gift cards as stocking stuffers.  The idea of queuing in lines for hours and then fighting over towels is as bewildering to me as staying up until midnight in front of a computer screen to snag an online deal.  Nevertheless, millions of consumers in the US will engage in these post-Thanksgiving rituals with eagerness and zeal.

This year several prognosticators are anticipating more retail revenue generated on Cyber Monday than on Black Friday.  In anticipation of this onslaught, many in my profession are reemphasizing the importance of protecting yourself while shopping online.  There are several decent articles out there with lists of good practices (you can find two of them here and here)...but one more can't hurt.

Here are my Tips for Safe Cyber Shopping:
  1. Patch Your Systems.  Sounds simple, doesn't it?  Still, many personal computing devices and applications remain unpatched and vulnerable (as this year's data breach reports point out, again).  Patch the O/S.  Patch your applications.  Update your virus software definitions...and run a thorough scan of the system before you start surfing.
  2. No-App Monday.  Cyber Monday is not the day to download new apps or ringtones onto  your personal device.  Expect an onslaught of "new" or "discounted" apps to hit the app sites, offering you every convenient phone functionality you can think of.  While many of these might be legitimate, a significant percentage will not be.  Remember that the easiest way for the bad guy to get into your systems is for you to willingly let him in.  Downloading an app opens your front door to the cyber crook.  
  3. Ignore Pop-Ups.  Do not respond to any pop-up window offering you additional discounts/savings/deals simply by clicking on the window.
  4. Know Your Retailers. If you are going to shop online on Cyber Monday, do so with retailers that you know and have done business with before.  Cyber Monday is not the day to "try out" a new online retailer or a known retailer's new online functionality.  Also, remember to check the URL of any known retail site that you visit by hovering over the link or inspecting the full URL in the browser window.  Look at the beginning of the string and make certain that the site you are on is the correct one (e.g.:  amazon<dot>com versus amaz0n<dot>com).  Do not assume that you will recognize a phony website just by surfing it;  scammers have become quite proficient at creating professional-looking sites.
  5. Manage Your Risk.  Limit the amount of risk you incur when shopping online by controlling the dollar amount that the bad guys are exposed to.  Using credit cards is the most popular method of mitigating this risk, but not the only way.  PayPal is, by its design, a risk-limiting method of payment and is also effective.  You can also get creative with your banking instruments and designate one checking account/debit card for online shopping and only populate that account with the monies necessary to pay for your online purchases.  
  6. Password Sunday.  Scammers are looking for access to your accounts and data as well as your financial instruments.  If you shop online on Cyber Monday, consider doing a full-fledged password update and lockdown the day before.  Most individuals use the same password for multiple accounts...and (as recent breaches continue to show) most of these passwords are extremely weak.  If a scammer utilizes Cyber Monday activities to gain access to your system, having strong individual passwords stored in a secure offline container may slow down the potential damage that can be done.  Given the plethora of passwords that most people need to remember, it would be foolish of me to tell you not to capture them somewhere; be smart/prudent re: where and how you store them, though.  Personally, I am a fan of KeePass which I store on an IronKey that I keep in my firebox...though there are less-paranoid and less technical solutions.
  7. Remember Barnum.  P. T. Barnum is often credited with saying that "There's a sucker born every minute."  Scammers and criminals live by this philosophy.  If something sounds too good to be true, it probably is.  Be skeptical of "dream" deals and discounts.  Do not go down the rabbit hole of exploring such deals, regardless of how tempting they are.  Remember, it only takes a nanosecond to compromise a system.   
(Historical note:  for the purists out there, I am aware that Barnum never said the aforementioned maxim;  go here if you want the correct reference.  Yes, I have friends who will obsess over that point [squirrel!])
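For those who would rather script Tip 6 than invent new passwords by hand, Python's standard `secrets` module (designed for cryptographic use, unlike `random`) makes generating strong, unique passwords trivial.  A minimal sketch; the particular alphabet and 16-character length here are my own choices, not any standard:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    # secrets.choice draws from the OS's cryptographic RNG
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account -- never reuse across sites
for site in ["bank", "email", "retailer"]:
    print(site, generate_password())
```

Feed the results straight into your password manager of choice (KeePass or otherwise); the point is that no two accounts share a password, and none of them are guessable.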

Hope this helps.  Please feel free to pass this along to your friends.  Safe shopping, all!

Saturday, November 23, 2013

Vendor Kabuki

Recently, a good friend of mine and longtime CISO left the chair to become the chief security strategist at a well-known security technologies company. A few weeks after that transition, my buddy and I sat down for a long overdue dinner with some friends. During the meal we discussed the transition from responsible charge to vendor. My colleague was less than thrilled at all aspects of the transition.

"Overnight I went from being a respected colleague to 'just another vendor,'" my colleague complained. "I'm no longer allowed at CISO events; I am no longer eligible to sit in CISO-exclusive meetings; professional organizations that I have supported for years treat me like a second-class citizen; and folks with whom I interacted freely and openly won't return my calls.  Why is it that CISOs treat vendors like dirt?"

As I was about to respond, another colleague of mine who had crossed over to the vendor side nodded her agreement. "You're an anomaly, Kim," she asserted.  "You treat vendors as partners; most of your peers treat us like dirt."

I admit that I was taken aback by these comments...but only a little.  Vendor opinions of me tend to be decidedly polarized.  My style of engagement tends to be direct and pointed;  while many vendors enjoy this honest dialogue, many more have found me extremely "difficult" to engage with. I didn't pursue the conversation over dinner (I was the only non-vendor at the table and my colleagues were in full rant mode :) ), but I did spend some time mulling over the problem.

Like most relationship challenges, the problems with the vendor/CISO relationship are two-sided.  I would posit that the CISO portion of the relationship dysfunction centers around something I like to call the egoism of motivation.   In an earlier blog post, I posed the question of why security professionals do what they do.   While I left the question open-ended, I would submit that the majority of us walk the path we do for semi-altruistic reasons.  While our careers tend to be fairly lucrative these days, most of us end up fighting an uphill battle for resources and understanding with those who would quickly turn us into scapegoats should an adverse event occur.  Yet despite this environment we keep going back into the fray with zeal, passion, and dedication.  We are not cops or soldiers, priests or firemen...but at some visceral level we do share the same passion for service and making a difference as those in the aforementioned professions.  In this context, it is at times difficult to engage with those who purport to understand our concerns yet do not share our motivations.  CISOs have no objection to money or profit motives -- hell, I have a kid in college and am all about not having my paycheck bounce :)  That being said, it is at times vexing to engage in conversations about a tool or service with vendor personnel who don't share your motivations; who don't necessarily have similar experiences; and who seem more concerned about acquiring your (very) limited dollars than about resolving your near- and long-term challenges.

Even for those of us who manage to get past our own egoism, there still exists the challenge of vendor-CISO communication.  Several years ago I came across a webinar by Paul Glen, author of the book Leading Geeks.  In this webinar, Mr. Glen discussed seven "contraxioms" -- axiomatic ideas and/or concepts for which geeks and non-geeks have contrasting ideas.  Glen's sixth contraxiom -- one which I feel is especially relevant to the topic -- centers around the concept of lying. For the geek:

-Lying is evil;  truth is sacred.
-Answering yes to a question when you don't absolutely know if something is true is a lie.
-Exaggeration and opinion stated as fact are lies.

For the non-geek:

-Lying is not good;  it is bad manners.
-Answering yes to a question that you know is false is a lie.
-Exaggeration and opinion stated as fact are simply a part of normal speech.

With such a disconnect in terms and terminology, the CISO oftentimes finds it daunting to trust what he hears from his vendor brethren.  Our axiomatic differences leave us at an impasse whereby our vendor brethren are often perceived as disingenuous in their dialogue...and the time it takes to determine the proper questions to ask to get to the level of detailed, accurate data desired is time taken away from our daily missions of protection and enablement.  Just yesterday one of my esteemed colleagues said at a conference:  "Every time a vendor speaks to someone in my organization I lose a week's worth of work getting to the truth behind the sales pitch."

With these types of cultural dynamics at play, it is easy to understand why CISOs and vendors operate at best under a guarded truce...but it doesn't have to be that way. Indeed, both vendors and CISOs would benefit from an attitude of true partnership on both sides of the equation.  As a CISO, I operate with certain guidelines when dealing with vendors:

-1.  Be Plain Spoken.  Understand what requirements you are trying to fulfill, and communicate them directly. As part of that communication, ensure your vendor understands whether your engagement is exploratory; whether you are trying to fulfill a short-term spend; or whether this will be a long-term process happening within the next fiscal year.  Your vendors, like you, also have requirements they need to fulfill;  it is disrespectful of their time and mission to have them spend months with you for a supposed potential sale when in reality you have no intention of making a purchase.

-2.  No Loss Leaders.  While I admit freely that I will always try to obtain services as cheaply as possible, I recognize the vendor must make a profit.  I do not insist upon loss leaders or additional free services from a vendor in order to close a deal.  If offered, I will accept them...but I do not make or break deals based upon the amount of free stuff I receive.

-3.  Respect Vendor Budgets.  This one plays in the realm of both ethics and mutual respect.  Vendors will regularly offer up dinners, tickets, etc. to get your attention or your time.  Notwithstanding appropriate legal and corporate guidelines for accepting such gifts, I make it a practice not to accept such offers if (a) I am not interested in the product or (b) I have no budget for such products. 

The vendor reps with whom I operate best also understand my expectations of them:

-1.  Be Plain Spoken.  I would rather be told "No, I can't do that," than have someone tell me that their service or product meets a need of mine that it is not equipped to fulfill.  Don't attempt to put a square peg into a round hole for the sake of a near-term sale.

-2.  Focus on the Long Term.  While I respect your near-term quota, I am looking for vendor partners who understand my long term needs and constraints.  Don't sacrifice a long term relationship for the sake of a sale.

-3.  Deliver.  Do what you say you are going to do...and ensure your products do what they say they will do, as well.  I expect this level of discipline and results from my staff;  I should expect no less from my vendors.

I have met a handful of vendors (Anna, Ed, Gabi, J.R., Jason, Joel...and the late Ryan Richard) who understand and operate comfortably within these expectations.  In return, I have developed strong partnerships with these individuals and the companies they represent. Indeed, as these individuals have moved from company to company they have opened doors to their new products for me;  any company that would employ vendors of their caliber clearly has highly ethical business practices.  These individuals get my most valued resource -- my time -- freely.  Conversely, I have met a plethora of vendors who refuse to be straightforward; who don't deliver on promised functionality; and who remain primarily concerned about making a quarterly quota.  These vendors are either relegated to afterthoughts in my strategic planning...or are removed from my environment along with their products.

While I might be considered an anomaly by my vendor colleagues, I have found the aforementioned vendors to be anomalies within their profession as well. Indeed, I reminded my CISO-turned-vendor that his new company (which had a reputation for arrogant, bullying marketing tactics) only hired him after the better part of a decade in the security space.  Could it be that their incentive is due to having achieved a certain market saturation that they cannot move beyond without a long-overdue change in approach?

Vendors and CISOs do need to reevaluate their relationship if the collective profession is to improve.  Both sides have work to do in strengthening our ties if we are to succeed.

My two cents...

Sunday, November 17, 2013

Preparing for Black Friday and Cyber Monday

If you don't know who Dan Lohrmann is and you work in security, you're truly missing out. Dan is CISO for the State of Michigan and is one of the thought leaders of our profession. Early on, Dan's leadership challenged him with the classic "figure out HOW instead of telling me NO" dilemma -- and he rose to the occasion with some innovative approaches and solutions.  Dan is a regular speaker and blogger on the business of security and is worth listening to.

What I love about Dan is that he never forgets that security is a personal matter. Addressing security issues and challenges as relate to individuals is just as important as looking at holistic, enterprise technical issues.  In his latest blog post, Dan gives us a "good, bad, and ugly" look at some of the pitfalls and benefits of Black Friday and Cyber Monday shopping.  Definitely worth your perusal...and worth sharing with your friends, colleagues, and security constituents.  You can find Dan's article at this link.  Enjoy!

Monday, November 11, 2013

Solving the Identity Problem

Just last week I ran across an article regarding the FIDO Alliance.  FIDO -- which stands for "Fast Identity Online" -- was created about 18 months ago to address the lack of interoperability among strong online authentication standards/controls/technologies.  The typical solution to this problem has been multiple authentication credentials...which has led to weak passwords and the use of a single password across multiple accounts (both conditions which actually weaken security).  The FIDO Alliance seeks to correct this problem by promulgating strong, open authentication standards which can be utilized across multiple technologies on multiple platforms.  Currently the FIDO Alliance has begun conformance and interoperability testing for its Universal Authentication Framework and Universal Second Factor products.

So...why should we care?  Several reasons:

  • The FIDO Alliance has attracted some heavy hitters in the heavily-regulated payments industry such as Mastercard, PayPal, and Oberthur Technologies.
  • Michael Barrett, former CISO of PayPal, is president of the alliance.  Love him or hate him, Mr. Barrett has always taken a thought-leading approach to security issues.  He's worth listening to/paying attention to.
  • Multiple passwords are the bane of a security professional's existence, yet we still haven't solved the problem;  the Alliance's structured approach signals a beginning to a potentially viable solution.
  • The FIDO solutions represent a potential beginning to the concept of "bring your own IDENTITY" (BYOI) that has been bandied about in recent months.  BYOI's problem centers around how we truly federate identity across disparate platforms and providers.  FIDO's standards and tools seek to solve this problem.  If they are even mildly successful, it could mean a true sea change in how we approach issues of security, authentication, and compliance.
Information about FIDO can be found here.  Keep an eye on these guys!




Wednesday, November 6, 2013

Why Do You Do It?

Several weeks ago I sat down with my good friend Jill to discuss security and the security profession.  Jill doesn't come from our world yet she has a keen and sincere interest in what we do.

After a half hour or so of discussion Jill asked me a question that no one else has ever asked me:  "Why do you do it?"  I found myself a bit taken aback and momentarily speechless.  Jill pressed on:  "Several years ago someone asked me why I was an accountant.  She then went on to describe all the ways in which accountants are mistreated and looked down upon in the company I was in, and asked me why I did what I did for a living.  After thinking about it for a week or so, I decided that I didn't want to be an accountant.  

"So why do security guys keep doing what they do?"

I admit I was touched by Jill's question.  Most people view security as a necessary evil and fail to think about what we do every day -- or, rather, they don't think about it until something bad happens.   I have likened the job of the security professional to that of the lone knight defending the drawbridge.  Every day, the knight wakes up and dons his dented armor.  Picking up his rusty sword, he steps out on the drawbridge to defend the castle.  His bones are weary and achy, but he stands tall and faces off against the 100 dragons trying to enter his home.  Now, most of the occupants of the castle don't see the dragons he faces daily...and those that do regularly underestimate their size/capabilities/intentions.  At the end of a good day, the knight holds off the dragons and is only slightly worse for the wear.  He goes back into the castle, petitioning for better armor or a newer blade...and for the most part he is ignored. After all, no dragons have entered the castle yet, have they?  Sighing, he goes to his quarters for a brief respite, and gets up to do the same thing the next day...

...and all the while he smiles, happy to do the work and be successful at it.

In truth, for me the answer to Jill's question has always been easy.  I am, by nature, a sheepdog in the way that Dave Grossman defined the term in his essay.  I have an overly-developed sense of justice and a need to keep bad things from happening to good people.  This is why I soldiered when there were (lucrative) options to do other things, and why I chose the profession I did when I hung up my Army greens.  In my current gig, I talk a lot about the Single Mom at Wal-Mart as my motivation.  It goes something like this...

Picture a single mom shopping at Wal-Mart.  She hovers just above the poverty line through sheer hard work, determination, and personal resolve.  Her kids don't wear new clothes, but they are always neat and clean.  Their bellies are never hungry, if only because of the three jobs she works.

It's shopping day.  She has clipped her coupons and is trying to get her shopping done before she starts her third shift.  The kids are tired, but well behaved.  Her cart is full.  She goes up to the checkout counter and swipes her card...

...and the transaction is DECLINED, either because (a) my systems have been hacked and someone stole all her money, or (b) my systems have been sabotaged and are down so Wal-Mart can't process the transaction.

I get up every morning, smiling, to prevent either scenario from occurring.

That's my story;  what's yours?  Why do you stand in the gap that few others see and even fewer appreciate?  Please post your responses here if you feel like sharing.

Sunday, November 3, 2013

Apple Takes Additional Precautions with its iPhone Fingerprint Sensor

After the release of iOS 7, which touted several new security features, I gave Apple some grief over the discovery of a security bug that was indicative of some lackadaisical security testing.  In the spirit of equal time, however, I need to give Apple its "propers" regarding some security forethought.  In a recent online post, Mactrast.com discusses Apple's apparent pairing of its TouchID sensor with the specific processor chip contained within its 5S phone.  In other words, swapping out either the touch sensor or the phone's processing chip renders the biometric data useless for accessing the phone's applications.  This clearly shows forethought on Apple's part re: securing biometric information as well as sensitivity to applicable privacy concerns.

You can read the full article here. Note that the article (correctly) points out the potential issues re: repairing phone screens on the 5S (translation -- if the screen repair damages the TouchID sensor then the sensor and chip will need to be replaced in order for the biometric feature to work...which means that you're basically talking about a whole new phone.)

The Fate of the Security "Profession"

I've been off the air for about a month due to some personal challenges, so I'm just catching up on some of the older stories that have been floating out there since late September.  One that has caught my eye is the National Academy of Sciences (NAS) report regarding the professionalization of information security.  In this report, NAS concludes that cybersecurity is best classified as an occupation rather than a profession;  further, NAS concludes that professionalization of cybersecurity should only occur when "the occupation has well-defined and stable characteristics [and] when there are observed deficiencies in the occupational workforce that professionalization could help remedy."  NAS (and several industry pundits) further pointed out the challenges of our ever-morphing enemy as well as the self-taught nature of many of our most seasoned professionals.

What struck me most about this report is the hue and cry that did not occur from security professionals.  There were a small handful of articles and some (predictable) responses from folks who resented the implication that they were not "professionals" (in the strictest interpretation of the word), then...nothing.

It is this lack of commentary that concerns me the most.  Several reasons for this.
  1. One of the criteria for professionalization has been (at least partially) met.  The security profession is facing a shortage of qualified personnel.  The operative term here is "qualified."  In an era where colleges and universities are regularly pumping out folks with computer/information security degrees,  senior professionals are still having difficulty finding people with the KSAs to do the work.  Experience (the "E" that we add over time to KSAs) helps and is supposed to enhance basic skills...but many organizations have taken to dismissing the training and experience offered by colleges and universities as meaningless to security utility in the workplace.  Further, there is still wide variance between university Infosec programs -- and very few security professionals recommend ANY program as being appropriately constructed to prepare someone to tackle a security gig straight out of the classroom.  To me, this translates to a case of "deficiencies in the occupational workforce" as well as an inability to provide a steady stream of qualified personnel into the workforce.
  2. What do we do about it?  Folks, the lack of response from us as a profession seems to indicate either that (a) we agree with the characterization or (b) while we disagree with the report we don't see how to change it. While I will be the first person to admit that a portion of our work is art, we cannot surrender the battle for the science lest we lose the ability to maintain the seat at the table that we have fought to occupy over the past 15 years.  When organizations cannot afford to steal senior folks from other organizations, they will turn more and more to technology to substitute for experience.  Should this trend occur, we may find ourselves in a position where the chief security officer position (one of the 3 most senior positions our career progression has to offer) goes the way of the VP of Telephony.
Think I'm exaggerating?  I am personally aware of three multi-billion dollar entities who have broken up their security responsibilities amongst multiple entities upon the departure of their CSO.  Two of those three seem to be sustaining compliance and security levels with minimal to no difficulty.

The point of this post is a fairly simple one:  we cannot as professionals (even if we aren't technically a profession) afford to accept the status quo accurately pointed out by the NAS report.  We need to find a method of identifying and fostering the skills and mindset needed to succeed and -- most importantly -- stay ahead of the bad guys.  If we fail to invest in this effort, then we do a disservice to our constituents as well as those who are trying to follow in our footsteps.

My two cents...

(Note:  the link to the report above lists a price for the printed version of the report;  downloading the PDF is still free. )

Saturday, October 5, 2013

World's Largest Data Breaches

Here's one for your SETA quiver:

David McCandless and the team from Information Is Beautiful recently released both a static and an interactive infographic visualizing the World's Biggest Data Breaches.  It provides an interesting perspective on the size, scope, and cause of breaches for the past ten years.  There were some interesting nuggets there, even for a security guy!

You can find a link to the infographic here.  Enjoy!

Wednesday, September 25, 2013

Data Aggregator Giants Hacked

Today Brian Krebs (krebsonsecurity.com) has posted the results of a months-long investigation conducted by his organization.  These results, while long suspected, are disheartening:  it appears that several well-known data aggregators have been compromised, and their files accessed for malicious use.


The underground ID theft service SSNDOB[dot]ms (hereafter SSNDOB) has for two years marketed itself as a source for valid compromised identities.  The source of its data has been largely unknown, but access to a major data aggregator was suspected.  Several months ago, SSNDOB's own database was compromised and a copy was provided to Brian Krebs for analysis. Further analysis of the networks, activities, and credentials held by SSNDOB administrators revealed a small botnet operating on the internal systems of LexisNexis, Dun & Bradstreet, and Kroll Background America.


The SSNDOB service has served up more than 1.02 million unique Social Security numbers and nearly 3.1 million date-of-birth records since its inception in early 2012.


You can read Krebs' full post regarding the compromise here.  Be advised that I have no further substantiation of Mr. Krebs' claims nor any statements from the aforementioned companies...but krebsonsecurity.com is known to be one of the most credible sources out there.  Also, here is a link with some great tips about what to do if you suspect your identity has been compromised.


Be aware...



Monday, September 23, 2013

IE Zero Day Released Into the Wild

Kudos to my buddy Matt for pointing this one out.  Recently SANS reported a zero-day exploit affecting all supported versions of Internet Explorer.

It looks like this zero day is no joke.  SANS raised their threat level to Yellow, so it looks like it is actively being exploited.  Since this is a zero day, the best bet for now is to make sure you have appropriate mitigations in place. Matt's blog does a great job of laying out the options.  Give it a read!

Saturday, September 21, 2013

iOS 7 Security Bug Discovered

Well THAT didn't take long at all, did it?

Just days after the release of Apple's new operating system -- which Apple is touting as having (among other things) enhanced security features -- websites are reporting the discovery of a security-related bug.  In a video released online, hackers demonstrated how accessing the Control Center feature from the lock screen and executing a specific series of commands will allow someone to access other applications (such as email) which are supposedly inaccessible when the phone is locked.  While Apple says it's working on a fix, the simplest solution for the nonce is to change the Control Center settings so that you cannot access Control Center on the Lock screen.  This is easily doable from the Settings screen (though feel free to ping me directly if you need a walk-through).

While some have dismissed the relevance of this bug, I like security controls to work.  I remain cautious about what I do and do not do online, but given the ubiquitous nature of technology it is nigh impossible to avoid utilizing wireless devices to store, process, or transmit some type of data.  In this context, dismissing security flaws as hyperbole is short sighted and naive.  Yes, convenience comes with risk, and even security geeks like me understand that.  I wonder, though, how much Apple spent on the Security Testing and Evaluation (ST&E) of its operating system as compared to, say, redesigning its icons.  Would shifting 1% of that spend toward ST&E have made a difference?  We'll never know.  What we do know is that Apple is now spending unplanned dollars fixing flaws and responding to public embarrassment instead of innovating.  Not a good position for a technology company to be in.

Here's hoping Apple takes a much harder look at its iOS and sends out a fast update before the next security bug is discovered.  Oh, wait...too late...the next bug has already been found.

Be aware...

Sunday, September 15, 2013

Nymi -- Biometrics Revisited

Last week my friend Lori approached me with an article she had read about a new device called Nymi.  This device (which is in pre-release and available for preorder) purports to be able to use "a person's unique heart rate" for authentication purposes.   Payment devices, hotel check-in technologies, enterprise computer systems, and even automobile locks can then be secured and accessed without remembering a plethora of passwords or carrying half a dozen token devices (to include physical keys).  Lori's question to me -- which warmed my heart :)  -- was what the security implications and ramifications of such a technology would be.  To answer this question, we need to go back to the basic principles behind authentication and biometrics.  If you are already more than well versed in these topics, then you should scroll ahead a few paragraphs; however, a base-level review of these topics is never a bad thing.

As most of us know, the best authentication schemas use two of the following three factors:  (1) something you have (a physical token such as a key or a digital key fob); (2) something you know (a unique password); and/or (3) something you are (a biometric identifier such as a fingerprint).  Most true two-factor authentication schemas employ (1) and (2) above;  many schemas instead use two instances of item (2) -- such as a user ID and a password -- which is not true two-factor authentication.  

Very, very few authentication schemas employ widespread use of biometrics in their environments.  The reasons are straightforward:
  • Invasiveness.  Utilization of biometrics in some form or fashion usually means the surrender and recording of a person's unique physical characteristics.  If you use a fingerprint scanner, for example, then somewhere within your network is some type of digital representation of your staff's fingerprints.  Same for retinal scanners.  Many organizations see the adoption of such tools as invasive and "overkill" from a security standpoint.
  • Privacy.  With over 35 states having data privacy and security laws, protection of biometric data adds yet another category of data to be secured within the enterprise.  Worse, biometric data may subject organizations to portions of the HIPAA/HITECH regulations that they mightn't have to deal with at present.
  • Rejection/Acceptance Rates.  If you enter your password and token data correctly, the system will allow you access.  Period.  If you use a biometric device, you are subject to false rejection and denial of access -- or worse (from a security perspective) false acceptance, which will allow unauthorized personnel access to your secure data.  While these rates are falling as technologies get better, they are still not at 100% -- which means they run the risk of being labelled as (a) a nuisance or encumbrance to operations or (b) ineffective in securing the enterprise.
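
The rejection/acceptance trade-off in that last bullet is usually quantified as a False Rejection Rate (FRR) and a False Acceptance Rate (FAR).  A minimal sketch of the arithmetic (the attempt counts below are hypothetical, for illustration only):

```python
def error_rates(genuine_attempts, false_rejections,
                impostor_attempts, false_acceptances):
    """Compute the two classic biometric error rates.

    FRR: fraction of legitimate users wrongly denied access.
    FAR: fraction of impostors wrongly granted access.
    """
    frr = false_rejections / genuine_attempts
    far = false_acceptances / impostor_attempts
    return frr, far

# Hypothetical numbers for illustration only.
frr, far = error_rates(genuine_attempts=10_000, false_rejections=150,
                       impostor_attempts=10_000, false_acceptances=20)
print(f"FRR = {frr:.2%}, FAR = {far:.2%}")  # FRR = 1.50%, FAR = 0.20%
```

Note the tension: tuning a sensor to drive FAR down (the security concern) almost always pushes FRR up (the nuisance concern), and vice versa.
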
With these things in mind, let's take a look at the Nymi.

Nymi's use requires it to be on your wrist and active.  Once there, Nymi purports to be able to "continuously" sample your heart rate and provide continuous proximity-based authentication for those systems which require such things (say, for example, your network-based office computers, which could automatically lock when you get up from the desk).  Interviews with the CEO (which can be found on their website) discuss how Nymi was built utilizing the Privacy by Design framework, which emphasizes minimum utilization of personal data and total transparency re: where the data goes...

...but there is no information or discussion around the security of the device and the data.

Without the technical specifications I am only guessing regarding how Nymi actually works...but logic would dictate that it is either (a) transmitting a digitized version of your heart rate signature or (b) utilizing your heart rate to authorize the transmittal of a unique "go code" to an authentication device (in other words, Nymi samples your heart rate and determines that it is, indeed, you...at which point it sends a unique authentication key to the device you're attempting to utilize).  Here are the top-of-head questions the Old Security Guy in me has regarding the security and utility of the Nymi:
  1. Static Nature of My "Unique Heart Rate."  I'm not a doctor, but I would assume that my heart characteristics now as an overweight 47-year-old man have changed slightly since I was a 22-year-old Lean Mean Fighting Machine.  What specific items are measured to generate this unique signature?  If my heart health changes (cholesterol, etc.), will I be locked out of my own Nymi-enabled devices?  While heart rate and heart beat are different things, I would assume that my heartbeat is one of the variables which goes into my unique signature.  What's the variance and/or tolerance rate of the device in this regard?  If (for example) I set Nymi at my resting heart rate just after I wake up, will I be unable to use it just after a workout when my heart beat is accelerated?  What if I get a pacemaker installed or need heart surgery (as another dear friend of mine is undergoing this week)?  Would those things change my characteristics to the point of needing to reset my Nymi -- and is such a reset possible?
  2. It's All About The Data.  What, specifically, is being transmitted by the Nymi?  Is it compared against a centrally-stored signature, or is the authentication done in the local device?  If there is a centralized store of data, then I would want to know how Nymi is protecting that data.  If authentication is done locally in the Nymi device, then I would expect that either a static or dynamic "go code" is sent to the authenticating system.  If the code is dynamic (similar, for instance, to the random RSA token), what's the schema used to generate the random code to ensure it can't be spoofed?  If it is static and tied to the individual Nymi device, then how is the code server secured?  (Note:  Nymi speaks often about its use of Bluetooth technology...but Bluetooth technology isn't foolproof or hackproof. :) )
  3. What's the Uplift?  The marketing campaign for Nymi is clearly geared to the consumer...but for this technology to work in as widespread a fashion as described there needs to be acceptance by enterprise-class users such as (for example) payment processors.  Given the highly-regulated nature of that industry (and the heightened level of  concern regarding data security these days), the questions listed in (2) above would have to be answered in meticulous detail before widespread adoption could take place.
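
Since Nymi hasn't published its protocol, the following is only a sketch of what the dynamic "go code" in question (2) could look like if it followed the same pattern as RSA-style tokens -- an HMAC-based one-time password in the style of RFC 4226/6238.  The shared per-device secret is my assumption, not anything Nymi has documented:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226 dynamic truncation)."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def dynamic_go_code(secret: bytes, interval: int = 30) -> str:
    """Time-based variant (RFC 6238): the code rolls over every
    `interval` seconds, so a sniffed value expires quickly."""
    return hotp(secret, int(time.time()) // interval)

# RFC 4226 test vector: this exact secret/counter pair yields "755224".
print(hotp(b"12345678901234567890", 0))
```

A scheme like this answers the spoofing question (the code is useless after the time window rolls over)...but it also concedes the point above: somewhere there is a server holding per-device secrets, and that server must itself be secured.
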
Conclusions:  In an era where people are still using weak passwords and changing them infrequently,  convenient biometric solutions make sense; that being said, Nymi's marketing focus on privacy versus security leads me to believe that they mightn't be ready for security prime time just yet.  I would be reluctant to employ Nymi even on my personal devices until I got some answers to some fairly straightforward security questions...

...answers that, as of yet, aren't forthcoming.

My two cents...

Tuesday, August 20, 2013

Time For A New Security Model (?)

Earlier this week I received an email from Justin Somaini, former CISO of Yahoo! and Symantec.  If you haven't had the pleasure of talking with Justin and you get the opportunity, I urge you to do so.  He's a brilliant, rock-solid security professional and an all-around Good Human Being :)

Justin has just taken on the new role of Chief Trust Officer for box.com, a company that markets itself as offering secure cloud-based file sharing solutions for home and for business.  In his new role, Justin is starting a dialog with security professionals around the need for a new security model.  Justin's initial blog post lays out what I call the classic ROI versus Luddite argument that security professionals find themselves in regularly.  Specifically:  new cloud based technologies offer companies the ability (among other things) "to remove legacy tech debt and enable services in a faster development and release cycles" whereas security professionals are still trying to "extend our 'enterprise' to devices as they traverse the Internet."

Admittedly, I have some problems with this characterization.  While I agree conceptually with where (I think) Justin wishes to go, the presentation of the problem in the aforementioned fashion fails to spend time asking why security professionals operate in this fashion;  indeed, the lack of root cause analysis tends to perpetuate the myth that security professionals "just don't get it" and just don't want to get it.  For most of us, this couldn't be further from the truth.

Typically, introducing innovative technologies into an enterprise environment takes one of three forms from a security perspective:

  1. The Afterthought Model.  The technology team has already made a buy decision and at the nth hour (or later) has brought the security team to the table.  The security team is now forced to try to fit the square peg of technology into the round hole of security -- in many cases, in a way in which the technology was not designed to operate.  In the end, the security team must either (a) reject the technology and be blamed for lost ROI, or (b) jerry-rig an unsustainable solution which reduces the efficacy of both the technology and the security infrastructure.  A lose-lose proposition all around.
  2. The Risk Acceptance Model.  The security team informs the technology team that there is additional risk associated with the new technology.  While the risk is beyond normal parameters, the security team is perfectly willing to allow the technology within the environment if the additional risk is acknowledged and accepted at an appropriate level within the organization.  In many organizations this dialogue perpetuates a weeks- (months?-) long kabuki dance as (a) the technology team questions the risk calculus; (b) the technology team tries to get the vendor to persuade the security team that its perception of risk and/or the new technology is incorrect; (c) the vendor struggles to demonstrate that it has appropriate security controls and processes to mitigate the risk; and/or (d) the company leadership asks the security team to "find a way" to implement the new technology without adding any additional risk to the environment.
  3. The Standards & Options Model.  This is the model under which I prefer to operate.  The security team outlines a set of controls and standards that are clearly articulated and clearly defined for the levels of risk/criticality associated with the technology.  Included with those standards are security architectural patterns which demonstrate the preferred (but not the only) method of meeting these control requirements.  When a new technology is introduced, its criticality is assessed and a set of required security controls is layered onto the technical requirements.  The security architects work with the technology team and the vendor to ascertain how the technology meets these standards...only to find that, in many cases, security has not been appropriately layered into the technology.  In the rush to innovate, the technology company focused primarily on operations as opposed to security...and basic requirements cannot be met in any reasonable fashion.  
I admit freely that I am generalizing about technology vendors as well as technology adoption models...but I think that most of us would agree that we have been in one of the three aforementioned buckets at some time in our security careers.  While this does not abrogate the need for security professionals to keep an open mind re: new technologies and to remain abreast of technological improvements as they occur, it remains difficult (though not impossible) to embrace innovation and new technologies in an environment where ROI and technical debt reduction are not balanced against acceptance of risk by the enterprise and/or innovative accomplishment of security objectives by the technology vendor.

Time for a new security model?  No.  The model for security has been and needs to remain the appropriate balancing of risk versus return within an organization.  Time for a new security implementation model?  Maybe.  If there is an alternative implementation schema that provides appropriate levels of trust; balances liability and risk; and achieves the appropriate level of accountability and efficacy re: security, then we should explore if not embrace such a model.  

If there is a vendor/technology out there that can assist in providing such solutions, we should definitely listen...but vendors and technologists need to understand their roles in this dance as well.  ROI without security is a flawed calculus.  Vendors who sell innovation without security are setting their customers up for failure and delaying (if not eliminating) adoption of their products.  I applaud Justin's call for a dialogue and discussion...but he needs to bring the other members of the vendor-security-technologist triad to the table if he intends to succeed.

My two cents...

Wednesday, August 14, 2013

Hacked Baby Monitor Caught Spying on 2 Year Old in Texas

Last week a couple in Texas awoke to the sound of a stranger's voice in their two-year-old daughter's room.  What they found when they investigated was that a hacker had taken over their baby monitor and was using it to communicate with the child.  Indeed, the parents watched the camera rotate to observe them as they walked in to unplug the unit.

The practice of taking over webcams is nothing new, but this story truly highlights the dangers of adding additional technical capabilities atop an insecure underlying network.  A great cautionary tale for folks on a personal level...and a timely parable for your businesses re: the importance of fixing the security basics before adding capability/functionality.  

You can find the article here.  Worth your time...

8 "Terrifying" Cybercrimes of 2025

As I was catching up on my security reading this morning, the tagline of this techradar.com article caught my eye (as it was clearly intended to do).  

In truth, there is nothing earth-shattering postulated in this article;  indeed, those who regularly think about the intelligence and predictive portions of security have opined on scenarios like these for the past few years.  What's interesting, though, is to step back and realize that the technology and expertise needed to execute these scenarios exist today...and to consider how such scenarios could impact your own security environments even on a smaller scale.

Don't just read this as "pie in the sky" fear-mongering, folks;  rather, dissect these scenarios and think about similar concerns within your own environments.

A short but interesting read.  You can find the article here.  Enjoy!

Security fix MS13-061 breaks content index on Exchange Server 2013

To those of you waking up to problems with Exchange 2013 server, Microsoft has announced that a recent security fix (MS13-061) inadvertently breaks the content index.  Details re: fixing this problem may be found here.  Spread the word!

Sunday, August 11, 2013

More Random Thoughts on Big Data Analytics

Earlier this week I was asked to comment once again on potential issues and concerns around big data.  This time, the concerns were around bad analytics being applied to big data.  In an article recently published on searchCIO.com, a benign example of bad analytics being applied to big data resulted in the funding of a research grant where no correlation of facts actually existed.  Other articles point to the potential for people being wrongly excluded from vital benefits such as healthcare, or for the government making egregiously bad decisions based upon poor analysis (as if that has never happened before :) ).  Below are some of my general thoughts on the topic for your amusement:
  1. Data and Information Are Not Synonymous Terms.  Data are facts;  information is a fact (or facts) in context.  Removing context from data can obscure its meaning as effectively as encrypting it.  For an example, take the 10-digit number 3015553078.  Standing alone as datum, without context, this number has no meaning.  If we were to give it context by, say, adding commas (3,015,553,078) or by segmenting it in two (30155 53078), the data takes on some level of significance.  Only by adding the proper context, though -- in this case, (301) 555-3078 -- can we extract the proper meaning (or information) behind the datum provided.  
  2. Intelligence Requires Data and Information.  Intelligence is a collection of information which has political and/or military value.  By analyzing data and information we can accurately extract hidden information of significance and relevance.  In the above example, for instance, if you were given the information that (a) 301 is the area code for Maryland and (b) I used to live in Maryland, you might be able to conclude that the aforementioned telephone number used to be mine.
  3. Big Data Collection Risks Removing Too Much Context.  This is especially the case with unstructured data.  In many cases the only context searched for is a cross-referencing between an individual and certain terms.  The more those terms come up, the more an individual is assumed to meet certain criteria.  For a real-world example, I harken back to the late '80s/early '90s.  Around this time, law enforcement officials in Dade County began stopping individuals traveling north on I-95 on suspicion of narcotrafficking.  Based upon their data, most overland drug couriers were (a) dark-skinned males (b) between the ages of 20 and 30 (c) driving late-model luxury cars who (d) made it a point not to speed.  Based upon this confluence of data, I was once pulled over for such a stop...despite being in military uniform with my West Point ring proudly on display.
  4. Data Analytics Is A Starting Point, Not An End Point.  Using the example in (3) above, even I can understand why I was pulled over;  what continues to annoy me to this day about that situation is that the officer insisted upon doing a full search of the vehicle despite me offering both positive military ID and a set of valid military orders.  As I fit the selection criteria for a profile stop, the officer felt it reasonable to ignore all other information being presented and delay my journey north for over an hour.  This, to me, epitomizes the problem with big data analytics.  Even the best-written search strings and heuristic models will get it wrong.  While the best models can achieve as much as a 98% accuracy rate, a 2% error rate scattered over 1 million selectees still amounts to 20,000 erroneous results.  If these results pertained to, say, healthcare coverage, the impact could be tremendous.
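
Two of the points above reduce to a few lines of code:  adding context to the bare datum from (1), and the error arithmetic from (4).  A quick sketch (the function names are mine; the figures are the hypothetical ones used above):

```python
def add_context(digits: str) -> str:
    """Turn a bare 10-digit string into a formatted US phone number --
    the same datum, now carrying meaning (point 1 above)."""
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

def expected_errors(selectees: int, accuracy: float) -> int:
    """Even a high-accuracy model yields many wrong results at scale
    (point 4 above)."""
    return round(selectees * (1 - accuracy))

print(add_context("3015553078"))          # (301) 555-3078
print(expected_errors(1_000_000, 0.98))   # 20000
```

Twenty thousand people wrongly flagged -- or wrongly cleared -- is not a rounding error; it's a caseload.
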
My bottom line with big data analytics is this:  utilizing the data to narrow the pool through which a human being must search may (note the word) be sensible and proper depending upon the context.  Utilizing a data search query for ultimate decisioning without human intervention is short-sighted and will lead to potentially life-changing errors.  As pundits who advocate big data continue to extol the potential efficiency gains of the associated technologies, we as security professionals must ensure that we do not lose sight of the dangers associated with its irresponsible use.

My two cents...

Yet Another Twist to Credit Card Phishing

(The following is reposted from www.securityweek.com.  You can read the original posting here.) 

According to Daniel Cid, Chief Technology Officer at Sucuri, phishers are breaking into e-commerce web sites and surreptitiously planting code to redirect sensitive payment details to third-party domains.

"The attackers modify the flow of the payment process so that instead of just processing the card, they redirect all payment details to a domain they own so they can steal the card details," Cid explained in a blog post.

The trick involves very stealthy, minimal changes to the hacked site. This is done to ensure persistence and to stay undetected for as long as possible. 

In one example, Cid showed how a credit card processing file on a hacked e-commerce site was modified to either transmit the credit card data via e-mail or redirect the data flow to a new domain.
The third-party domain receiving the stolen data looks almost like the payment handling site (slightly misspelled to avoid detection). 

"This redirection forces all the transaction data, including credit card details (name, address, CC and CVV), through their malicious server, in turn allowing the data to be stolen by the bad guys," Cid explained.

Interestingly, the data redirection does not affect the actual credit card transaction. Instead, the phishers are basically siphoning all the confidential data during the transaction process, quietly stealing credit card data without triggering alarm bells.

Infographic: Insuring Against a Data Disaster

Experian and the Ponemon Institute have released a new white paper which discusses the current state of cyber risk insurance.  Nothing earth-shattering in their findings, but it's a good refresher/primer on the insurance options that exist and should be taken into consideration.  You can find an infographic which summarizes some of the salient points at this link.  Note that you will need to provide your contact data to Experian to get ahold of the full white paper.  Enjoy!

Wednesday, July 10, 2013

Medical Device Security: Guidelines Released for Comment

In 2003 Barry Eisler released Rain Fall, his first novel.  In it, the assassin John Rain kills his target by hacking into his pacemaker using a program he installed on his PDA.  

In December 2012, the Showtime series Homeland depicted the assassination of the Vice President by having a terrorist group remotely take control of the VP's pacemaker and induce a fatal heart attack.

Last month -- finally -- the Center for Devices and Radiological Health (a department of the US Food and Drug Administration) released for comment a set of proposed guidelines to make medical devices incorporate more protections against cybersecurity attacks.  Just this week the FDA said that it is aware of dozens of cybersecurity attacks which have affected hundreds of devices...but to date they are unaware of any patients who have been harmed by such attacks.  

While the proposed guidelines are fairly benign from a security standpoint, their implementation may have a significant impact on the $300 billion medical device industry -- an industry which has always (and somewhat appropriately :) ) tipped the balance toward functionality versus security.  

The guidance is located here.  Give it a looksee...and as security guys, consider commenting if you have concerns.  We can only make things better if we make our voices heard.

Monday, July 8, 2013

The Misadventures of Edward Snowden

The story of Edward Snowden continues to provide fodder for news pundits, the blogosphere, and security professionals alike.  As Mr. Snowden's exploits continue to play out, I'd like to offer some random thoughts and opinions for your consideration.  Fair warning:  it is highly likely that some of what I say will end up annoying and/or upsetting someone at some level.  Remember, these are my opinions only;  they are designed to spark conversation and dialogue.  Feel free to disagree and to (courteously) provide comment to this entry.  Here goes...

  1. Edward Snowden is not a genius.  Mr. Snowden's résumé has yet to be released publicly, but a recent New York Times article briefly described his four-year ascension from supervising computer system upgrades to "cyberstrategist."  Many of us have seen this sort of thing before, and more so in recent years.  Information Security remains a hot commodity with a low unemployment rate (well less than 3% despite the economic downturn).  Many highly talented and highly skilled individuals noticed this trend in the late 2000s and began to re-tool their resumes toward information security.  Mr. Snowden, like many of his ilk, quickly parlayed a little knowledge into an opportunity;  he then continued to take advantage of those opportunities for professional gain.  This does not make Mr. Snowden a sophisticated "hacker;"  indeed, there is little evidence to date to suggest that Mr. Snowden did anything more than take advantage of elevated privileges to access information that was poorly compartmentalized and/or poorly secured within NSA's network.  This is less a statement of genius than it is of opportunism (which seems to be Mr. Snowden's guiding force).
  2. Edward Snowden is neither a martyr nor a hero.  Let me be clear:  I have genuine and far-reaching concerns about the PRISM program and the data collection activities of our government.  As I have stated in recent posts, I believe that we as a nation surrendered too much power and authority to the federal government in a post-9/11 world...and our government has taken/is taking full advantage of that.  Even if we give Mr. Snowden the benefit of the doubt re: (a) naiveté when he went to work for the military-industrial complex and/or (b) conscience when he saw what was occurring, my problem with Mr. Snowden is that he ran.  Martyrs don't run;  they suffer for their beliefs.  Heroes don't run either;  they stand in the gap and willingly face the slings and arrows of those who would disagree with their actions.  The fact that Mr. Snowden ran to foreign soil to escape prosecution for his crime -- and by violating the oaths and agreements he signed in order to receive a high clearance he did commit a crime -- labels him as neither martyr nor hero but as criminal and coward.  Worse, it casts doubt upon his motives and leads one to question whether there are other, more malevolent motives at play here...or am I the only one who can see the possible hostile intelligence storyline here? :)
  3. Edward Snowden isn't the problem.  While focusing on Mr. Snowden makes for good copy, there are a whole list of other issues/questions that are being overlooked here.  Top of head:
    • How did Mr. Snowden get the information out of NSA?  Most likely, this was via USB device...which means that USB devices were enabled on sensitive computing devices and usage was not being monitored/tracked.
    • Where is the supervision/oversight of the contracting entities and their personnel?  Regardless of duty description (even if said duties included white-hat penetration of systems), appropriate oversight and process would have easily raised appropriate flags early on in Mr. Snowden's exploits.
    • What were Booz Allen Hamilton's screening and qualification criteria for its employees?  Were they too lax in their zeal to put faces in spaces and keep lucrative contracts?
    • What are the government's screening criteria for clearances these days?  The sad reality of the situation is that there has been a heightened demand for cleared workers since 9/11;  has the government backed off on its clearance requirements in order to keep up the increasing demand for cleared technical workers?
    • Where is the civilian oversight?  The very public face of this scandal for the government has been GEN Keith Alexander, Director of the NSA and head of the US Cyber Command.  While GEN Alexander's testimonies before Congress are appropriate given his posting, there remains this concept of civilian oversight of the military.  Where, then, are the various civilian leaders during this scandal?  Other than to call Mr. Snowden a traitor, they have been notable by their absence in the hearings and in speaking to the media on PRISM.  (Note:  kudos to a colleague and peer of mine for first pointing this out;  I admit freely that I missed this one out of the chute.)
While the Snowden debacle remains titillating to most, as security professionals it should remain troubling to us on multiple fronts.  In addition to being a potential case study regarding access control, permissions, information risk management, and network monitoring, it should also be a call to arms to understand the full scope of the government's powers regarding data and monitoring...and to (legally) cast a light upon potential overreach.

My two cents...

Sunday, June 30, 2013

Three Reasons Why America's Security Model is Broken -- Counterpoint

Last Friday CSO Online published an article by Craig Shumard entitled 3 Reasons Why America's Security is Broken.  In this article, Shumard -- the former CISO of CIGNA who survived the early days of the HIPAA and HITECH legislation -- offers a three-pronged approach to fixing security in the US:  (a) more detailed/prescriptive rules and regulations; (b) fixing the basics; and (c) more transparency regarding security implementations to our customers.  As security professionals, I think it's worth taking a look at each of these proposed remediation strategies in some detail.

1.  The regulatory environment.  Regulatory prescriptiveness in any arena is a matter of balance.  If you are too prescriptive, you run the risk of imposing needless costs as well as hamstringing the ability to innovate and/or embrace new technologies.  Let's take, for instance, the use of antivirus software.  It seems to make sense to be prescriptive about the use of anti-virus, yes...until you realize that the percentage of malware attacks that AV systems (or many signature-based technologies these days) are capable of stopping is small and getting smaller.  Change the signature ever so slightly and the malware has a heightened opportunity of slipping through.  In the past, several colleagues and I have made the argument that heightened levels of access control and permissioning further down the OSI stack, combined with improved malware technologies, can achieve a higher percentage of success than AV suites...yet the more-prescriptive regulations out there all mandate the use of antivirus software.
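
The fragility of signature matching is easy to demonstrate.  A toy sketch (the byte strings below are harmless stand-ins, not real malware) showing that a one-character change produces a completely different cryptographic hash -- the kind of exact-match "definition" many AV products rely on:

```python
import hashlib

# Exact-match definitions often key off a hash of the known-bad sample.
original = b"MZ...malicious payload bytes..."
variant = original.replace(b"payload", b"payl0ad")  # one-character tweak

sig_a = hashlib.sha256(original).hexdigest()
sig_b = hashlib.sha256(variant).hexdigest()

# The trivially repacked variant no longer matches the signature.
print(sig_a == sig_b)   # False
print(sig_a[:16])
print(sig_b[:16])
```

Trivial repacking defeats the exact-match signature every time, which is precisely why mandating "use antivirus" prescribes a control with a built-in expiration date.
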

This problem is, of course, exacerbated by the pace of change for regulation and law.  By the time most regulations and laws get passed, technology has already begun the march into new territories which regulations do not address completely.  Imagine if, for instance, the PCI-DSS prescribed a specific algorithm and bit-length for encryption.  As technology advances and processing speeds change, meeting the prescribed standard actually might place the complying entity in a state of weakened security.

Of course, setting a prescriptive minimum requirement for security controls can eliminate this potential risk...but again, unless the minimum can move at a pace that keeps up with technological change then you still end up with the potential for prescribed weakness in controls.

I would argue that the thing that needs to be fixed in this area isn't the lack of specificity in regulations;  rather, it is the cessation of equating compliance with security.  A secure framework meets all compliance standards...but a compliance framework will never meet all security needs.  Attempting to prescribe security via regulation is an exercise in futility;  despite our legislators' belief that this is possible, as professionals we need to break the security = compliance equation.

2.  Fixing the basics.  I absolutely support Mr. Shumard's position here. To quote an earlier posting of mine, I truly wonder what the various data breach reports would look like if we:
  • Enforced heightened password complexity
  • Patched vigorously and rapidly (to include addressing aggregate risk via patching low-level vulnerabilities with regularity)
  • Cleaned up roles and access to systems, ensuring a least privilege model; and
  • Managed (and monitored) super user accounts and privileges aggressively throughout the environment.
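
The first bullet above is also the easiest to automate.  A minimal sketch of the kind of complexity rule I have in mind (the length and character-class thresholds are illustrative choices on my part, not a standard):

```python
import re

def meets_complexity(password: str, min_length: int = 12) -> bool:
    """Illustrative complexity rule: minimum length plus at least three
    of four character classes.  Real policies should also screen against
    breached-password lists."""
    if len(password) < min_length:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) >= 3

print(meets_complexity("password123"))        # False -- too short, two classes
print(meets_complexity("Tr0ub4dor&Horse!"))   # True
```

A check like this costs nothing to enforce at account creation...which is rather the point of "fixing the basics."
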
Even more scary to contemplate is the impact of such basic blocking and tackling on the security industry and its never-ending race toward new tools.

3.  Transparency.  Again, I agree with Mr. Shumard's basic premise in this instance.  Creating a model of transparency re: what we are doing from a security perspective will help us more than hurt us by forcing those who are doing less than what is deemed prudent to leave the shadows.  The problem -- one which Mr. Shumard spends far too little time on -- is the determination of what is deemed "prudent."

When I build programs, I often tell folks that I can build them Fort Knox, but Fort Knox mightn't be the proper solution for their business model.  The trick (or, if you prefer, the "art" portion of our professional "science") is determining what the right balance is for the environment and what that does to the overall risk calculus of the business.  Once again, the hindrance in this area is staring back at us in the mirror.

Two well-trained, well-educated security professionals with similar backgrounds and perspectives can walk into an organization, evaluate its security/risk posture, and reach different conclusions.  On a 10-point scale, one professional will rate the organization as a 7 and another will rate it as an 8.  As Mr. Shumard points out, we need to get to a state where we have some type of common best-practices/risk-versus-control-implementation knowledge base that we can look toward as we implement programs.  Continuing the example above, if such a knowledge base existed we could agree as a profession that the evaluated organization is currently at a 7 on a 10-point scale...at which point the discussion becomes one of whether a 7 is sufficient and appropriate for the assets being protected.

This is the hard and non-sexy work of our profession; it turns more of our "art" into "science" and that scares many folks in our field.  Until we complete this work, though, meaningful transparency isn't achievable...and sooner or later our constituents are going to become less enamored with security professionals continually stating that "it's complicated" or "it depends" versus explaining what we're doing and why we're doing it.

Conclusions.  Where Mr. Shumard states three reasons that our security model is broken, I offer three actions necessary for remediation:
  • Fix the basics.  It's past time we did so.
  • Stop looking toward regulation as the only expression of security value.  As a profession we made an egregious error in hanging our hats on fledgling regulations at the turn of the century.  In doing so, we placed the power of determining appropriate security in the hands of those for whom legal sufficiency is a perfectly acceptable standard.  We don't need more prescriptive regulation; we need to express security value in terms not tied to regulation.
  • Create a standard.  It's time to spend serious cycles on driving down the art of security and focusing on the science.  Only by creating some level of professional normalization (forgive the double entendre :) ) can we avoid going the way of the VP of Telephony within our organizations.
Food for thought!

Sunday, June 23, 2013

Random Thoughts on the Adaptive Mindset

As we approach the midpoint of 2013, I have begun to shift some of my thinking to strategic initiatives for 2014 and beyond.  As I begin this shift in focus -- and, admittedly, as I come off of vacation :) -- I have begun to spend time thinking about what I am calling the "adaptive mindset." A friend and colleague of mine refers to this same topic as the "Agile mindset," but that term often gets closely intertwined with the Agile development methodology.  I believe the challenge I am referring to extends beyond Agile development, although we see this challenge most clearly manifested within Agile development environments.

I have often preached that the job of a security team is to "make lemonade out of two apples, a grapefruit, and a kumquat, and make it look easy while doing so."  The needs of the business can shift at an almost mercurial pace, and if security wishes to remain a supportive (and, therefore, a valued and relevant) factor, security professionals need to be able to innovate secure approaches and solutions on the fly, often without the benefit of exhaustive research time and/or ideal toolsets.  Think of the movie Apollo 13, when the lead engineer dumps a pile of parts on the table and informs the team that they must develop a solution to the spacecraft's problem utilizing only the items on the table.  Just another day in the life for a typical security guy :)

The problem -- or, at least, the thing that I perceive to be a problem -- is that we appear to be losing some of our profession's inherent ability to innovate on the fly.  In the early days of security, we came from a wide variety of backgrounds; many of my peers trained as mathematicians, musicians, accountants, and (in two cases that I know of) Jesuit priests.  As we have begun to create college programs centered around information security, we have created a standardized group of people who understand The Way Things Ought To Be...but not necessarily how to get there when the methodology at hand is less than perfect.

I first ran across this problem en masse during my studies for my Masters degree.  I was in a cohort-driven program, and for each class we needed to engage in online discussion groups around questions posed by the professor.  I was the only sitting CSO (and had been for 3+ years) in my cohort, and I would often challenge the other students' answers with responses like "That makes sense and is leading practice, yes...but what happens when the situation is <X>?"  Invariably someone would chime in "well, that would never happen," only to have me explain that I had dealt with exactly such a situation just the month before.  This would lead to some stilted discussion as fifteen highly experienced and well-educated personnel struggled to innovate a solution to a real-world problem.

I see similar challenges as security shops attempt to work within an Agile development or project environment. Decisioning in such environments happens in small teams at the lowest level.  The security SME doesn't need to know everything...but he does need to be able to think critically at a fast pace; make decisions; and consult the appropriate knowledge repositories to drive new and innovative solutions rapidly.  Too often, security personnel struggle in these environments; in some cases, they mask their inability to move and think rapidly by defending the need for security to follow a traditional waterfall model.

I am blessed to run a decent-sized security shop in an organization that truly values programmatic, holistic security. My people are top-notch, with a true desire to do the right things as well as improve their personal skills. As the security leader in such an organization, I find myself in a quandary.  How do I balance the need for security specialists who can dig into certain topics and areas with nimble-minded generalists who have a passable working knowledge of multiple topics as well as the ability (and confidence) to make decisions on the fly in a fast-moving organization?

The first answer that comes to mind when I discuss this topic with many of my peers is "experience."  Yes, clearly a more tenured and seasoned individual has a greater ability to flex and maneuver than a new recruit...but this raises the real questions of (a) how do we actively train and prepare young security professionals to adopt a nimble mindset, and (b) how do we persuade young security professionals to eschew some of their 'specialist' chops in favor of a more holistic knowledge base?  Adding to this challenge, of course, is the young professional's resistance to knowledge transfer.  Security professionals are proud of their skills and knowledge...and they have a right to be.  Many of these younger professionals can feel threatened at the prospect of either sharing that knowledge with someone else (a la cross training) or placing that knowledge within some type of knowledge repository.  After all, if someone else has the knowledge, doesn't that make them expendable?

No organization can afford to staff itself with only senior personnel (even if such personnel were available in large numbers).  Further, there is still a need for "screen jockeys" at some level to do the analysis on incoming events, etc. Clearly an organization must find and strike a balance between the two - a balance that is partially driven by (a) training younger professionals on a broader range of skills; (b) encouraging critical thinking; (c) building knowledge repositories and documented security processes; and (d) automating as many routine processes as possible.  We must eschew the notion that a young security analyst needs to spend 3-5 years perfecting nothing but one specialty skill before they can branch out.  We must also encourage the notion that a well-rounded, critical-thinking professional is what is needed in order to drive transformation and value within an organization.

Fostering the mindset for agility and adaptability will be critical to the future of any transformative security program. Figuring out the right skillset balance without jeopardizing daily activities or ballooning the security organization to an unreasonable size is the hard work that accompanies that easy declaration. As I begin to put solutions for the adaptive mindset in place within my organization, I will share my approach and thoughts on this blog.  All comments and inputs are welcome :)

Sunday, June 9, 2013

Random Thoughts on the Recent NSA Scandal

Last week, the news broke that the National Security Agency (NSA) had been secretly collecting phone records of Verizon customers in the U.S.  Since the story broke, the commentary from pundits, politicians, and the Great American Public alike has bordered on cacophonous.

My thoughts on the scandal are wide ranging and somewhat disjointed, but they might bear some consideration. Here goes...
  1. We had to go overseas to find out what's happening at home.  It's somewhat disturbing to me that we needed to rely on a newspaper published in the UK to get information about actions occurring on our home soil.  Only after the story broke in The Guardian did US news media outlets grab the story and start spreading it like wildfire.
  2. Someone is leaking like a sieve.  The original newspaper article allegedly* contained excerpts from a classified PowerPoint presentation as well as the original court order from the Foreign Intelligence Surveillance (FISA) Court. The documents in question bear several classification markings, including the term NOFORN -- which means "not releasable to foreign nationals."  (*Personal note:  I used the term "allegedly" to describe the documents as I personally make it a point not to review classified information released inappropriately into the wild.  As a former holder of high government clearances, I consider it a violation of my oath and commitment to protect such information.  You are welcome to check out the documents yourselves and form your own opinions.)
  3. Why are we so surprised?  Title II of the USA PATRIOT Act broadly amends FISA and gives tremendous latitude to the FISA Court in the pursuit of combating terrorism.  In the furor of FUD (fear, uncertainty, and doubt) that followed the 9/11 tragedy, we as a nation made the willing determination that sacrificing some of our freedoms to a governmental entity in the name of security was the appropriate thing to do; now that we find out our government is actually utilizing the authority which we surrendered to it, we cloak ourselves in outrage and suspicion?  Admittedly, part of my incredulity here comes from some of the folks who are expressing their outrage to me. I remember having conversations in the early 2000's about this topic and the dangers of governmental excess in this space.  Many people said to me back then that they saw nothing wrong with the government having such broad, sweeping powers, as "only criminals and terrorists and people with something to hide" should be concerned.  Now those same individuals are the ones emailing me and calling me to express their outrage and ask for my advice re: whether they should cancel their Verizon accounts.
  4. What are we going to do about it?  Righteous indignation, Facebook campaigns, Internet memes and (yes) blog posts feel good and give us a chance to express our concerns in a public forum...but if we are truly concerned about this situation we need to take more positive, impactful actions such as:
    • Supporting privacy advocacy groups
    • Writing your local congressperson to express your concern -- and asking them for their positions on such issues.
    • Voting for candidates that agree with your position on this issue
    • Educating yourselves on proposed laws and acts which may further limit your rights to privacy online and over the airwaves.  Earlier this year the House of Representatives resurrected and passed the Cyber Intelligence Sharing and Protection Act (CISPA), which has since stalled in the Senate.  Most Americans remain unaware of CISPA, the broadness of its reach, or its continued one-sided approach to information sharing and protection.  The fact that CISPA passed one of the chambers of Congress, yet the nation remains indignant at the current NSA scandal, is yet another reflection of the importance of becoming (and remaining) an informed citizenry re: these issues.
I don't want people reading this blog to be left with the impression that I am anti-government or anti-anything.  I love my country, and I am extremely proud of my service to it and its people.  That being said, I also believe that security professionals must understand and respect the need for appropriate balance and controls to prevent the excesses and abuse which would tarnish that which makes us the Greatest Nation in the world. President Obama was correct when he stated that security and privacy are concepts that require the sacrifice of each in order to respect the other. Mayhap it is time, though, to relook at that balance and ensure that we haven't allowed FUD to (continue to?) skew where we draw certain lines.