"Put all your eggs in one basket and then watch that basket."

--Mark Twain, Pudd'nhead Wilson

Most cyber attacks and breaches do not manifest as bad actors storming the data center or network perimeter. Threats typically move from the data center out, whether as malware or an insider undertaking some form of exfiltration. Indeed, today's network perimeter is increasingly not a single physical or virtual place, yet much of the industry debate is still focused on the perimeter.

For enterprise IT and security professionals, this means they must take a different InfoSec perspective. Here are 7 considerations every security manager must keep in mind going forward.

1. Secure the Right Boundary. The evolution of dynamic, multi-tenant cloud architectures and the reality of distributed computing stacks have effectively destroyed the foundational elements of perimeter security. If you only focus on the edge of the data center, you are ignoring the greatest share of the attack surface: the inter-workload communications that never traverse the perimeter. Security leaders must focus on the intra- and inter-data center architecture challenges posed by new software and cloud computing infrastructures.

2. Security Should Be Part of the DevOps Cycle. Security must be built into applications, not bolted on after. The enterprise increasingly relies on agile software development, but the lack of correspondingly fast-moving security approaches effectively increases the risk of a breach. You cannot build and deploy applications in a distributed application and computing environment and then rely on a static, hierarchical security model built upon chokepoints, infrastructure control points, and organizational silos.
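As a concrete, if simplified, illustration of building security into the development cycle rather than bolting it on, here is a minimal sketch of a build-time dependency gate. The package names and vulnerable versions are hypothetical placeholders; a real pipeline would consume a vulnerability feed rather than a hard-coded deny-list.

```python
# Minimal sketch of a build-time security gate: fail the build if any pinned
# dependency matches a known-vulnerable version. Package names and versions
# here are hypothetical; a real pipeline would consume a vulnerability feed.

VULNERABLE = {
    "exampleweb": {"2.0.1", "2.0.2"},
    "examplecrypto": {"1.4.0"},
}

def parse_pin(line):
    """Parse a 'name==version' pin; return (name, version) or None."""
    line = line.strip()
    if not line or line.startswith("#") or "==" not in line:
        return None
    name, _, version = line.partition("==")
    return name.strip().lower(), version.strip()

def audit(requirements):
    """Return findings for any pins that appear on the deny-list."""
    findings = []
    for line in requirements:
        pin = parse_pin(line)
        if pin and pin[1] in VULNERABLE.get(pin[0], set()):
            findings.append(f"{pin[0]}=={pin[1]} is on the deny-list")
    return findings

if __name__ == "__main__":
    problems = audit(["exampleweb==2.0.1", "examplecrypto==1.5.2"])
    for problem in problems:
        print("BUILD BLOCKED:", problem)
    raise SystemExit(1 if problems else 0)
```

The point is less the specific check than where it runs: inside the build, at the speed of the development cycle.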

3. Ports Still Matter. If you leave the front door open, will you be surprised if someone walks in? How well are you keeping an eye on port controls within workloads in your data center or moved to public clouds? Some pretty big breaches have happened because a development server was open to the Internet. Adopting whitelist models reduces the attack surface.
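To make the whitelist model concrete, here is a minimal sketch of a default-deny port check: anything listening that is not explicitly approved gets flagged. The approved set and the observed snapshot are hypothetical; in practice the observed list would come from a host snapshot or a cloud provider's API.

```python
# Minimal sketch of a whitelist ("default deny") port check for a workload.
# The approved set and the observed snapshot are hypothetical examples.

APPROVED_PORTS = {443, 5432}  # e.g., HTTPS plus a database listener

def port_violations(listening_ports):
    """Anything listening outside the whitelist is a violation."""
    return set(listening_ports) - APPROVED_PORTS

if __name__ == "__main__":
    observed = {22, 443, 5432, 8080}  # sample snapshot of listening ports
    for port in sorted(port_violations(observed)):
        print(f"port {port} is open but not on the whitelist")
```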

4. Reduce the Complexity. Related to the port issue, most enterprises today suffer from "policy debt," because they have stacked incremental controls such as firewall rules on their perimeter security. (Why doesn't anyone decommission extinct rules?) Last year I met with a security team from a large enterprise who told me they had 2.5 million firewall rules. (Yes, you read that right.) I asked if that number made them feel more in control. They said no, it made them feel less secure, because they understood less of their own environment. Policy debt drives up cost and complexity and leaves open avenues for potential infiltration and exfiltration.
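One small, mechanical way to start paying down policy debt is to hunt for rules that can never match because an earlier rule fully covers them. The toy model below shows the idea; the (action, source network, destination port) rule format is a deliberate simplification of real firewall policy.

```python
# Toy model of firewall "policy debt": find rules that are fully shadowed
# by an earlier rule and can never match. The rule format (action, source
# CIDR, destination port) is a deliberate simplification.

import ipaddress

# (action, source network, destination port); first match wins.
RULES = [
    ("allow", "10.0.0.0/8", 443),
    ("deny",  "10.1.0.0/16", 443),   # shadowed: 10.1/16 is inside 10/8
    ("allow", "192.168.1.0/24", 22),
]

def shadowed_rules(rules):
    """Yield (index, rule) for rules fully covered by an earlier rule."""
    for i, (_, net, port) in enumerate(rules):
        net = ipaddress.ip_network(net)
        for _, earlier_net, earlier_port in rules[:i]:
            earlier_net = ipaddress.ip_network(earlier_net)
            if port == earlier_port and net.subnet_of(earlier_net):
                yield i, rules[i]
                break

for index, rule in shadowed_rules(RULES):
    print(f"rule {index} {rule} can never match; candidate for removal")
```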

5. Encrypt Critical Data. The traditional approach to encrypting inter-workload traffic was to string a VLAN from a workload to a firewall, create an encrypted tunnel to another firewall, and then reverse the process on the far side. This has been time-consuming in traditional data center environments and nearly impossible in heterogeneous cloud environments such as the big public cloud providers, which are inherently multi-tenant. This must change.
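One alternative is to terminate encryption at the workload itself rather than hairpinning traffic between firewalls. As a rough sketch of that idea, here is workload-to-workload mutual TLS using Python's standard library; the certificate file names and the peer address are hypothetical placeholders.

```python
# Sketch of workload-level encryption via mutual TLS, as an alternative to
# hairpinning traffic through firewall-to-firewall tunnels. Certificate
# paths and the peer address are hypothetical placeholders.

import socket
import ssl

def connect_mtls(host, port):
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.load_verify_locations("ca.pem")                   # trust anchor
    context.load_cert_chain("workload.pem", "workload.key")   # our identity
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Usage (assumes a TLS peer is listening at this hypothetical address):
# conn = connect_mtls("payments.internal.example", 8443)
# conn.sendall(b"hello")
```

The design point: the encryption travels with the workload, so it behaves the same in a traditional data center and in a multi-tenant cloud.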

6. Monitor and Alert in Real Time. Visibility and monitoring are critical to maintaining a secure cloud and data center architecture. Moreover, anomalies and policy violations must be flagged in real time. Real-time monitoring and policy alerting can cut detection time from months to minutes, limiting the infection of other machines and reducing the collateral damage on the path to remediation.
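A minimal sketch of the difference between after-the-fact log review and real-time alerting: evaluate each event against policy as it arrives. The role-to-role policy and the sample events below are invented for illustration.

```python
# Minimal sketch of real-time policy alerting: evaluate each flow event as
# it arrives instead of reviewing logs after the fact. The event stream and
# the policy are illustrative stand-ins.

import time

POLICY = {("web", "db"), ("web", "cache")}  # allowed (source, destination) roles

def alert_on_violations(events):
    for event in events:  # in production: a message queue or telemetry feed
        pair = (event["src_role"], event["dst_role"])
        if pair not in POLICY:
            yield f"{time.strftime('%H:%M:%S')} ALERT: {pair[0]} -> {pair[1]} violates policy"

sample_events = [
    {"src_role": "web", "dst_role": "db"},
    {"src_role": "db", "dst_role": "internet"},  # unexpected egress
]
for alert in alert_on_violations(sample_events):
    print(alert)
```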

7. Don't Choose Between Network and Host. I have a friend who is a foreign policy reporter. When asked why he travels so much, he responds: "If you don't go, you don't know." Because vendors traditionally track security on either the host or the network, each advises on the benefits of its own vantage point and the perils of the other. Today, the only way to survive the increasingly sophisticated attack environment is to understand the context of both the computing (think actual processes and roles) and the network (which tracks the flows).
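As a small illustration of combining the two vantage points, the sketch below maps live network flows to the processes that own them, using the third-party psutil library. Note that listing other users' connections generally requires elevated privileges.

```python
# Sketch of combining network context (flows) with host context (processes):
# map each active connection to the process that owns it. Requires the
# third-party psutil package; elevated privileges are typically needed to
# see connections belonging to other users.

import psutil

def flows_with_process_context():
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.pid:  # established flows with a known owner
            try:
                name = psutil.Process(conn.pid).name()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue  # process exited or is off-limits
            yield (name, conn.laddr, conn.raddr)

for process_name, local, remote in flows_with_process_context():
    print(f"{process_name}: {local.ip}:{local.port} -> {remote.ip}:{remote.port}")
```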

Ultimately, security must be orchestrated and automated to meet the complex computing world we have moved into and to reduce the burden placed on people. And it must scale to meet the challenges presented by distributed computing. This requires new considerations and the retirement of a few "truths" from the prior era of information security.

Detection is a Philosophy, Approach, and Methodology...

When building, improving, or operating a security program, aiming for the right balance between prevention and detection/response seems like an obvious choice. I've discussed the question of why this is the case in a number of different pieces I've written for SecurityWeek and others. One might be inclined to conclude that my position on this topic stems from my background in security operations and incident response, where I practiced this methodology for many years. That may certainly be the case, but I do think it is worth examining this concept a bit further in this piece.

In my current position, I spend a lot of time educating people on the merits of security operations and incident response, including a balanced approach consisting of both prevention and detection/response.

What I find fascinating is that I am not alone in my thinking – not by a long shot. Sure, my professional contacts agree with and follow this philosophy, but that is to be expected and doesn't prove much of anything. What I find truly remarkable are my interactions and conversations with non-security or non-technical audiences. I often explain to these audiences that security can and should be approached as a business function aimed at mitigating risk, rather than as an enigma. I discuss some of the risks and threats that the modern organization faces, and I speak to some of the ways in which this risk can be mitigated.


Perhaps not surprisingly to those who read my pieces, during these sessions I map out a strategic approach to risk mitigation, including a formalized incident response process and incident handling life cycle. What may surprise the reader, though, is the speed at which these audiences grasp the concept of security in this context, at least at a high level. I always enjoy watching the audience's transformation from whatever preconceived notion of security each individual brought in to an entirely different understanding of security. These audiences, the overwhelming majority of which have no operational security experience, can quickly understand the merits of a balance between prevention and detection/response. After all, most business leaders understand the concepts of both risk and risk mitigation. Security is merely another domain to which those analytical skills need to be applied.

At the same time, I hear a lot of noise from many in what I would call the "pro-prevention" camp. I hear things like "detection is ineffective" or "detection is dead" or "detection is not a winning strategy". Essentially, the narrative coming out of this camp states that an organization should focus on prevention entirely without paying much mind to detection/response. I'd like to put aside, for the moment, the fact that we have tried this approach for the last 20 years with less than spectacular results, as well as the fact that the approach simply doesn't reflect reality, at least as I have experienced it operationally. I simply can't understand how one could recommend against a balanced approach of both prevention and detection/response.

The people beating the prevention drum aren't fools, which leads me to two potential explanations for this behavior. Either they don't really believe what they are saying (i.e., they have some ulterior motive driving their messaging), or they simply don't understand what I, and others, are referring to when we discuss detection. In case there is some confusion around what detection is all about, I'd like to make an effort to clear that up.

Let's start at the beginning. Detection is not about anti-virus (AV), Intrusion Detection Systems (IDS), Security Information and Event Management (SIEM), or any other type of technology. Detection is also not about signatures, alerts, events, tickets, or any other type of metadata. Do security technologies and metadata play a role in detection? Absolutely. They are necessary for detection, but they are not sufficient.

Detection is a philosophy, approach, and methodology that seeks to identify suspicious or malicious behaviors matching risks and threats the organization is concerned about. The output and success of the detection process is highly dependent on the input to that process. In other words, garbage in leads to garbage out. If one wants to understand why detection rates are so poor across most organizations, one need only look at the content development process used to feed the detection process. The solution to the detection challenge many organizations face isn't to throw away the concept of detection entirely, but rather to improve the content development process feeding it.

A high-quality content development process is critical to the success of detection efforts because it is what feeds detection. The better the content with which we feed our detection process, the better our detection works for us. I discussed this topic in additional depth in a previous SecurityWeek piece entitled "Spear Alerting: Improving Efficiency of Security Operations and Incident Response", but would like to highlight some of the steps involved in the development of high fidelity, reliable content (a toy illustration follows the list):

• Collect the smallest amount of data that is of the highest value and relevance to security operations and incident response while still providing the required visibility.

• Identify goals and priorities for detection and alerting in line with business needs, security needs, management/executive priorities, risk/exposure, and the threat landscape. Use cases can be particularly helpful here.

• Craft human language logic designed to extract only the events relevant to the goals and priorities identified in the previous step.

• Convert the human language logic into precise, incisive, targeted queries designed to surgically extract reliable, high fidelity, actionable alerts with few to no false positives.

• Continually iterate through this process, identifying new goals and priorities, developing new content, and adjusting existing content based on feedback obtained through the incident response process. Neither content development nor detection is ever really complete.
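To make the last two steps concrete, here is a toy example of human language logic ("alert when one source fails authentication against many distinct hosts in a short window") converted into a precise, targeted query over events. The field names and thresholds are hypothetical.

```python
# Toy conversion of human language detection logic into a targeted query:
# "alert when one source fails authentication against many distinct hosts
# within a short window." Field names and thresholds are hypothetical.

from collections import defaultdict

WINDOW_SECONDS = 300   # "a short window"
DISTINCT_HOSTS = 10    # "many distinct hosts"

def detect_auth_spray(events):
    """events: dicts with 'time' (epoch secs), 'src', 'dst', 'outcome'."""
    recent = defaultdict(list)  # src -> [(time, dst), ...] within the window
    for e in sorted(events, key=lambda e: e["time"]):
        if e["outcome"] != "failure":
            continue
        window = [(t, d) for t, d in recent[e["src"]]
                  if e["time"] - t <= WINDOW_SECONDS]
        window.append((e["time"], e["dst"]))
        recent[e["src"]] = window
        hosts = {d for _, d in window}
        if len(hosts) >= DISTINCT_HOSTS:
            yield (e["src"], e["time"], len(hosts))

if __name__ == "__main__":
    # Synthetic sample: one source failing against 12 hosts in 60 seconds.
    sample = [{"time": 1000 + i * 5, "src": "10.0.0.9",
               "dst": f"host{i}", "outcome": "failure"} for i in range(12)]
    for src, when, count in detect_auth_spray(sample):
        print(f"ALERT: {src} failed auth against {count} hosts by t={when}")
```

Because each alert maps back to a stated goal and explicit thresholds, it can be tuned through the incident response feedback loop described in the last bullet above.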

Detection/response serves to augment prevention and round out an organization's approach to risk mitigation. Put another way, attackers regularly get past any and all preventative measures. When this occurs, an organization can use detection/response to quickly identify that it has occurred, then contain and remediate the activity before it causes damage to the organization. This balanced approach allows the organization to spread its risk mitigation strategy across more than one mitigating factor.

When I listen to the talking points of the pro-prevention camp, perhaps ironically, I hear an ever-greater need for detection/response. Clearly, this is not their intended result. But I do believe that a better understanding of what detection actually refers to benefits the security community as a whole. The time has come to move the discussion beyond the hype.

Sorry Airman Supershaggy, "Transformers 3" is not coming to Andersen Air Force Base. And by the way, you've been phished.

Security testers at the Guam Air Force base's 36th Communications Squadron had to send out a clarification notice on Monday after an in-house test -- called an operational readiness exercise (ORE) in Air Force parlance -- of how airmen would respond to a phishing e-mail worked out a little too well.


According to a report in the Financial Times, Google is phasing out the use of Microsoft's Windows within the company because of security concerns. Citing several Google employees, the FT reports that new hires are offered the option of using Apple Mac systems or PCs running Linux. The move is believed to be related to a directive issued after Google's Chinese operations were attacked in January. In that attack, Chinese hackers took advantage of vulnerabilities in Internet Explorer on a Windows PC used by a Google employee and from there gained deeper access to Google's single sign-on service.