Information Security in the Real World. Confidentiality, Availability, Integrity, Practicality.

Monday 21 November 2011

The Cloud Security Standard is out, and the ISO27001 author is unhappy.

What Tangled Webs We Weave - David Lacey. Quotes from David Lacey's latest criticism of the "standards industry" include: "The standard [which became ISO27002] aimed to remove 90% of the effort in risk assessment by documenting commonly applied controls. Unfortunately it was hijacked by a consultancy community who subsequently reintroduced the need for mandatory risk assessment. It was also intended to be sufficiently broad and deep to minimise the need for any further standards. Yet two decades on, it has inspired a family of dozens of near identical standards and guidelines." What has sparked Lacey's ire is the Cloud Security Standard. At 176 pages, he notes: "The real challenge however will be to turn this impressive body of knowledge into something of practical use to busy security managers."


There is a reason I called this column "Practically Secure": I know how Mr. Lacey feels. Pragmatism is way down the list of objectives for the authors of today's security standards.


[NB I have finally edited this post to add my commentary, sorry for the delay!]

Thursday 16 June 2011

LulzSec - a wake up call?

Conflicted about "hacktivist" activity such as the current actions of LulzSec and Anon? Anecdotally, there appears to be widespread public support (or at the very least, an absence of unequivocal condemnation) for these semi-organised hacking groups, who seem able to bring down online services at a whim. And a layman's belief persists that they must have some serious kit, expert knowledge and access to enormous resources, right? According to Sophos's Naked Security blog, nothing could be further from the truth:

"LulzSec website break-ins look to have been languorously orchestrated, using nothing more sophisticated than entry-level automatic web database bug-finding tools, available for free online.
In other words, LulzSec is a timely wake-up call to better security if you are still asleep at the wheel. Your customers' data is important - both to them and to you."

We InfoSec professionals need to heed the warning. It's going to get worse before it gets better. The Advanced Persistent Threat is often actually the "Simple Persistent Threat". The online organisation without any weak spots, the impregnable network, is a fantasy. We need to wake up and improve security, but also reduce the potential impact of a breach, with encryption, data cleansing and segregation, and a decent Incident Response plan.
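The "entry-level automatic web database bug-finding tools" Sophos describes are typically probing for SQL injection, and the standard defence is equally simple: parameterised queries. Here is a minimal sketch in Python using an in-memory sqlite3 database; the `users` table and its contents are invented purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker input is spliced straight into the SQL text -
    # exactly the hole that entry-level injection scanners probe for.
    return conn.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterised: the driver passes the value separately from the SQL,
    # so it can never be interpreted as SQL syntax.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] - every row leaks
print(find_user_safe(payload))    # [] - payload treated as a plain string
```

The unsafe version turns the classic `' OR '1'='1` payload into a query that matches every row; the parameterised version treats the same payload as an ordinary (non-matching) string. That one-line difference is the gap these "unsophisticated" tools exploit.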

But for now, back to Sophos's take on LulzSec, for those who remain ambivalent about its activities:

"But the end doesn't justify the means. Time spent throwing bricks through other people's digital windows doesn't actually teach anyone anything about glassmaking, glazing or civil engineering. If you consider yourself a hacker and you have time to spare, but you're tempted by "hacking" such as DDoSes or gratuitous break-ins, why not use your skills for active benefit instead? Follow the lead of a guy like Johnny Long and hackersforcharity.org"

Tuesday 26 April 2011

David Lacey lays into Compliance. Again!

He co-authored BS7799, the forerunner of the now trendy ISO27000 family of documents that describe best practice in Information Security Management. And now he has disowned his own creation as not fit for purpose in the modern age. In his latest blog post, The Three Faces of Information Security, David Lacey goes further and decries all compliance thus:

"Unfortunately, it's all based on collections of ancient practices, with a heavy emphasis on documentation and audits. And if you don't want to pay for security, you simply accept the risk. Your security might be completely ineffective but your paperwork will gain you full marks."
Lacey then goes on to describe "Real Security" as distinct from both compliance and the "business enablement" view of security we sell to management. With a doom-and-gloom conclusion that "most organizations are sleepwalking into a future crisis", Lacey paints a grim picture of the current state of Information Security.

Is he right? Certainly the continuous stream of breach notifications and the ever-growing landscape of threats seem to bear this out. We once knew what we were dealing with, or at least thought we did. We don't, not if we are relying on standards first written in the 90s and last revised six years ago, when the term "cloud computing" was still met with giggles and shrugs and "virtualisation" was a software tool fit only for development environments. The future of InfoSec demands imagination, foresight, a step change fit for the 201Xs, just as BS7799 was a step change in the 1990s. Because if current trends show anything, it's that we're even worse prepared against the enormous imagination and technical skill of today's malicious agents than we thought we were. But in truth, no worse than we deserve to be.

Sunday 27 March 2011

Human Nature. Friend, not Foe?

A good education programme is worth a dozen new technical controls.

In 2007, details of 25 million UK citizens went missing on two CDs lost by HM Revenue & Customs (HMRC), because a junior employee didn't know the rules and management procedures were lax. Similar mistakes led to huge data losses in more recent years at Zurich Insurance, UK railways operator Network Rail and the British Ministry of Defence.

One possible response to these breaches is technological. Maybe Data Loss Prevention technology could have helped, or even something as simple as disabling the writable DVD drive on employee workstations: if the employee had not been able to copy the data to disc, they could not have lost it.

But are we missing the point? These were human failings. Like many security issues, they were entirely preventable human errors. A system of people is capable of myriad different failings. If we continue to throw expensive technological solutions at human error, we will never be finished. Wouldn't we feel better knowing that our employees know what is expected of them in the fight to remain secure and compliant? That they are on the side of Information Security and work with us to prevent fraud, loss and service disruption?

This is what a security awareness programme does. A good one will change people's understanding of security, encourage them to feel part of the solution, and engender good habits in their day-to-day activities. If the junior HMRC employee had had some education about the value of sensitive information, the trust placed in HMRC by its customers - the British people - and the risks inherent in moving that data from a secure place to an insecure one, then maybe that breach would never have happened.

Much of the talk after the events above was about technological prevention and improving procedures. But human nature suggests that whatever technical or administrative control you put in place, there will be a tendency to resent it, to see it as a barrier to productivity, and to work around it. More so if the subjects of the control - employees under pressure to get the job done - perceive it to be too restrictive, or don't value the risk you are mitigating.

Technical and administrative controls have their place; they are a major weapon against data breaches. But a far more effective weapon is the power of human nature. Education programmes can go a long way towards changing staff behaviour and keeping your data safe.

Why then do we spend so much money on technological solutions to human problems? Data Loss Prevention (DLP) and Security Information and Event Management (SIEM) are often recommended after a breach with its roots in human error. While these have their place, human-element measures such as education are often more cost-effective. So why the technological focus?

Maybe it has something to do with the people making the recommendations. Maybe the auditors, analysts and CISOs feel they have to justify their position and sizeable fee by sounding knowledgeable. Recommending staff training does not sound like expert Information Security advice. It's too simple, and not what we expect from a CISSP/CISA/whatever. So several new appliances and desktop software suites are recommended - the latest wizardry - and the CIO feels he has received value for money from his security experts.

This needs to change. We need to value the human element in our information systems, and recognise that it needs managing at least as expertly as the digital elements. Our people need help, encouragement and empowerment to become security advocates.

Once you've established a permanent, rolling security education programme, you might want to review your technical controls and ensure they are appropriate to the risk you are managing. Who knows - you might find you can relax some controls without degrading your risk posture, and at the same time make your staff more productive. And what CIO doesn't want that?

Wednesday 16 February 2011

Information Wants to be Free 2.0.

It's a refrain from the early days of computer hacking. A rallying cry of hackers, anti-censorship activists and just plain anarchists, it dates back at least to Stewart Brand's use of it in the mid-80s, and "Information wants to be free" has been used by bloggers the world over to justify the current Wikileaks phenomenon. But the phrase has a new connotation: a new white paper from Intel quotes the old hackers' mantra as one of five new "Irrefutable Laws of Information Security".

Intel's use of the phrase recognises the fact that employees, associates and outside agents regularly find ways around our efforts to contain our data and many do so without malice but in order to get their job done. We should therefore recognise this behaviour and manage it, instead of trying to limit or quash it. This is genuinely refreshing stuff from a big name, and is a timely response to David Lacey's call for new standards and security models. The five "laws" in Intel's model are:

  1. Information wants to be free
  2. Code wants to be wrong
  3. Services want to be on
  4. Users want to click
  5. Even a security feature can be used for harm

The full article explains these laws and how Intel has devised new models to achieve security within them, including the "Trust Calculation", an access control model flexible enough to support remote working across a variety of portable devices and locations.
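Intel's paper doesn't publish the arithmetic behind its Trust Calculation, but the underlying idea - score a session's context, then map the score to an access level - can be sketched in a few lines. Everything below (the factor names, weights and thresholds) is invented for this illustration, not taken from the paper:

```python
# Hypothetical context-aware "trust calculation" sketch.
# Factor names and weights are invented for illustration only.
WEIGHTS = {
    "managed_device": 40,   # corporate build vs personal device
    "strong_auth": 30,      # two-factor vs password only
    "known_location": 20,   # office / VPN vs unknown network
    "patched_os": 10,       # up-to-date vs unpatched
}

def trust_score(context):
    """Sum the weights of the factors this session satisfies (0-100)."""
    return sum(w for factor, w in WEIGHTS.items() if context.get(factor))

def access_level(context):
    """Map a trust score onto a tier of services, rather than all-or-nothing."""
    score = trust_score(context)
    if score >= 80:
        return "full"       # e.g. sensitive line-of-business applications
    if score >= 50:
        return "limited"    # e.g. email and calendar only
    return "denied"

office_laptop = {"managed_device": True, "strong_auth": True,
                 "known_location": True, "patched_os": True}
cafe_tablet = {"strong_auth": True, "known_location": False}

print(access_level(office_laptop))  # full (score 100)
print(access_level(cafe_tablet))    # denied (score 30)
```

The point of such a model is that access stops being binary: the same user gets full access from a healthy corporate laptop and a reduced slice of services - or none - from an unknown device on an unknown network.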

As someone who has been suggesting for a while that compliance does not equal security, and that the human factor is much bigger than most of us credit, I think this is genuinely forward-thinking stuff, and I look forward to the Information Security industry's response.

Thursday 13 January 2011

Mainframe Security, PCI-DSS and other docs

Sorry, I've been busy for a while with my other blog, about System z (IBM mainframe) security, which - if you missed the announcement - is over on IBM's developerWorks.
I'm delighted to be able to tell Practically Secure readers that I've written an article for the respected mainframe magazine z/Journal, discussing mainframe security. While it's mostly about System z, the general concepts (including the paragraph entitled "Secure for Compliance, Don't Comply for Security") will be of interest to all. Some of you may be familiar with the content if you've been reading me long enough.
z/Journal is here, and my article in the Dec/Jan issue can be read online in HTML format here.
For completeness, here are my earlier white papers written for Pirean.com (all rights reserved by them) covering mainframe compliance, and PCI-DSS.