Accuvant Insight

Dec 14 2010

Virtualized Security Works Best When It’s Built on the Basics
Published by abrink under virtualization

Industry analyst firm Gartner says that virtualization projects are currently the number one priority for
CIOs. Yet Gartner also reports that, “Through 2012, 60 percent of virtualized servers will be less secure
than the physical servers they replace.” Why is there such a significant disconnect between virtualization
and security?

One reason is that it’s relatively easy to deploy virtual servers in an environment. So easy, in fact, that the
information security department is often removed from the process. As a result, companies fail to follow
pre-defined security measures.

Secondly, many companies locate virtual servers on the same physical host instead of segmenting them, which reduces visibility and control between virtual systems. Guest-to-guest communication on the same host is difficult to see because most security measures sit on the physical network and never touch that traffic. Regardless of whether a company is using physical servers or a virtualized environment, it is essential to keep security domains and risk zones separate. A host that houses DMZ guest servers should not also house an internal accounting server.
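
A quick way to spot this kind of zone mixing is to audit the guest inventory per host. Below is a minimal sketch in Python; the inventory mapping and host/guest names are hypothetical placeholders, and in practice the data would come from your virtualization management platform or CMDB rather than being hard-coded.

    # Minimal sketch: flag physical hosts whose guests span more than one security zone.
    # The inventory below is a hypothetical stand-in for data pulled from your
    # virtualization management platform or CMDB.

    inventory = {
        "esx-host-01": [("web-dmz-01", "DMZ"), ("web-dmz-02", "DMZ")],
        "esx-host-02": [("mail-relay-01", "DMZ"), ("acct-db-01", "internal")],
    }

    def mixed_zone_hosts(inv):
        """Return hosts whose guests belong to more than one security zone."""
        findings = {}
        for host, guests in inv.items():
            zones = {zone for _, zone in guests}
            if len(zones) > 1:
                findings[host] = zones
        return findings

    for host, zones in mixed_zone_hosts(inventory).items():
        print(host, "mixes security zones:", ", ".join(sorted(zones)))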

The final area of concern, one that is too often overlooked, is the security of the host system itself. While the
guest systems usually have known controls, patch cycles and so on, the host is often left in its default configuration and remains open to modification, allowing traffic to be copied or re-routed without the knowledge of the guest systems.

One solution to these problems is to enforce security controls within virtual environments in the same manner they are enforced in physical environments. Virtualized environments can be segmented using existing
IPS and firewall appliances; however, in-house security or server teams may not have experience
integrating these technologies with their virtual environments. This integration is becoming easier because
virtualization vendors have released programming interfaces that allow new products to be introduced into
the marketplace that provide additional security and features. Firewall and IPS vendors are releasing “VM
Aware” products that can inspect the VM-to-VM communications. Vendors are offering products that can
help secure virtual environments by allowing antivirus and other protections to reside at the hypervisor
level. This provides the additional benefit of conserving processing cycles for each virtual machine.

Additionally, security vendors are releasing virtual platforms of their hardware appliances, which allow
increased flexibility, visibility and control of virtual environments. In the past, these virtual appliances were
primarily used for non-production environments. With the increased speed of host servers, virtualized appliances are becoming viable for use in full production environments. Not all virtualized appliances work well in production, however, so appropriate sizing and planning are needed to ensure success.

While increased functionality, visibility, and control have become possible with products geared towards
virtual environments, the best practice in promoting “virtualized security” is to go back to the basics. The
use of these products needs to be in conjunction with a strong security framework in order to provide a
secure virtualized environment.
Andrew Brink
Solutions Engineer – Accuvant


Dec 09 2010

Online Shopping Can Compromise Your Identity
Published by rsmith under Identity Theft

Last year, identity theft ranked as the number one consumer complaint category, with 1.3 million people
falling victim to the crime, according to the Federal Trade Commission (FTC). As e-commerce sales
continue to increase (Forrester Research has forecasted a 10 percent compound annual growth rate through
2014, rising from $155 billion in 2009 to $250 billion), so does the opportunity for cyber criminals looking
to make a quick buck. And, what better time than the online holiday shopping season?

You know that flood of retail promotions you’ve been receiving via email? Think twice before you click to
open. Some may be legitimate, but others may be coming from unscrupulous individuals who want to steal
your personal data.

Here are three tips that can help you keep that deal of a lifetime from turning into a headache of a lifetime:

   1. Make sure that the Web site’s name is spelled correctly and ends with the company’s real
      domain – In order to fool consumers, thieves register Web sites with look-alike names. Just as with
      knock-off handbags, every Web site deserves a closer inspection to confirm that it’s authentic. Make
      sure the Web site address in the URL bar ends with “.company.com,” where “company” is the correctly
      spelled name of the company; if it doesn’t, you may have a knock-off Web site in your browser
      window (see the short sketch after this list).
   2. Look for a blue or green URL bar at the top of your browser – When a company has gone
      through extra verification steps to set up its Web site, your browser will let you know by shading a
      portion of the URL bar either green or blue. This indicates that a verified company, not a criminal
      looking to steal your financial information, is behind the Web site.
   3. Make sure you have the latest Web browser plug-ins – Have you been getting a popup reminding
      you to update a piece of software? It’s a great idea to activate that update before you begin or
      continue your shopping. An attacker can much more easily take control of your computer when your
      software is out of date.
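
For the suffix check in tip 1, a minimal sketch in Python is shown below. The expected domain ("example.com") and the sample URLs are placeholders; substitute the retailer you actually intend to visit.

    # Minimal sketch of the domain-suffix check from tip 1.
    from urllib.parse import urlparse

    def looks_like_real_site(url, expected_domain="example.com"):
        """True if the URL's hostname is the expected domain or one of its subdomains."""
        hostname = (urlparse(url).hostname or "").lower()
        return hostname == expected_domain or hostname.endswith("." + expected_domain)

    print(looks_like_real_site("https://www.example.com/sale"))       # True
    print(looks_like_real_site("https://www.example.com.evil.biz/"))  # False (look-alike)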

Most importantly, when shopping for holiday gifts, follow your gut. If a deal SEEMS too good to be true, it
probably IS too good to be true. Street sense is sometimes all you need to ensure that you’re not going to be
the next victim of a crime. Companies stay in business by making profits or breaking even and few people
are willing to sell anything below the going market rate. Although certain deals can be enticing, avoid them
if they seem to be too good.

Ryan Smith
Chief Research Scientist – Accuvant LABS


Nov 16 2010

How Much of a Security Concern is Cloud Computing Really?
Published by rsmith under Cloud Computing

Before cloud computing had even gotten off the ground, people were talking about the security implications
of computing in the cloud. When you strip away the semantic sugar and look at the basics, cloud
computing is not fundamentally different from any other technology. When a technology can be influenced
to execute outside of its intended purpose, a vulnerability is present. The following elucidates some of my
thoughts regarding cloud computing.

Let me start off with enterprise vulnerabilities. First off, enterprises face exactly the same vulnerabilities
whether they’re using on-premise equipment, cloud computing, or some combination thereof. The standard
enterprise vulnerabilities range from the Open Web Application Security Project (OWASP) Top 10 to the
low level buffer overflow, all of which people have been fighting for the past 15 years or so. The big change
that comes with mobile and cloud adoption is that vulnerabilities move from something the enterprise owns and controls to something a third party owns and controls. That raises the question of who owns the vulnerabilities, and who is able to find and remediate them.

There is also no difference in using social media as an avenue for attack (one of the biggest threats for most
organizations) when comparing on-premise versus mobile computing. A computer accessing Facebook or
Twitter is just as easily hacked as a mobile device accessing those same sites. That’s because the
fundamental constructs of accessing a site from a mobile device or via a traditional method are identical. Regardless of the device the consumer is using, hackers can still get to sensitive data by exploiting the same vulnerabilities present in the various social media platforms.

In my opinion, the more real and immediate dangers of cloud computing affect individual consumers rather than enterprises. For example, consumers have traditionally been able to protect their data by using the latest
patches and avoiding risky behavior while on the Internet. As consumers use more mobile technology tied to
the cloud, attackers looking to compromise data are going to have an easier time finding specific
consumer’s data, and may do it at any time throughout the day.

This is a new paradigm because consumer data has historically been available to attackers only when
consumers’ computers have been on or they’ve been browsing Web sites. Consumers have been able to
unplug their computers from their network and turn them off in order to protect their data. With cloud
computing, there is a server out there with available data 24 hours a day, seven days a week. It will no
longer matter what steps consumers take to protect their data – the onus shifts to the third party MSP, and
IT departments need to be constantly vigilant.

Another real consumer danger: if an attacker is able to compromise the credentials in a cloud-based
environment, they can access all of the data. This is often not an issue with enterprises since passwords are
generally just the first line of defense. But, with consumers, usernames and passwords are often the only
lines of defense for cloud computing (consumers don’t need usernames and passwords if their data resides
on their hard drives). Oftentimes, password compromise is the easiest way to gain access to a system.

As cloud adoption continues to become more prevalent, vulnerabilities within Web browsers will matter less
to attackers. The data that is valuable to attackers will soon be predominantly located on cloud-based
computing servers, rather than on consumers’ systems. Although getting into cloud-based computing servers
will be a more difficult task, the reward will be greater. Compromising a single entity will lead to the
exfiltration of a large number of consumers’ personal information.

Ryan Smith
Chief Research Scientist – Accuvant LABS


Sep 29 2010

Monitoring Your Networks and Systems Can Save You a Lot of Heartache
Published by cmorales under Strategy, botnet, malware

In my last blog posting, I shared with you some long-term strategies to help you change user behavior so
that you can more quickly find malware infections and mitigate the loss of information associated with a
breach. You can address the current infection of systems by monitoring for malware in three areas: external
network monitoring, internal network monitoring and system monitoring. This type of monitoring lets you
know where to look for infected systems, helps you determine if these systems have led to a compromise of
confidential data, and allows you to evaluate whether or not further investigation is required.

External network monitoring is the monitoring of malicious traffic on the Internet at large. It includes collecting Internet traffic and inspecting it for malware, as well as tracking criminal organizations, with an emphasis on who is responsible for malicious traffic, where that traffic originates, what motivates the attackers, and what tools criminal organizations use. This information is important because it allows you to validate the presence of malicious traffic emanating from your domain as an initial indicator of infected machines controlled by criminal organizations, and tells you whether you need to pursue further inspection.

Internal network monitoring refers to the monitoring of malicious traffic at the network perimeter, and helps
you identify infected systems communicating over your network to malicious networks known as botnets,
which are used for criminal activity. Most tools classified as malware or botnet monitoring look for the
command and control channels of botnets gained from external research or analyzed from known malware.
When these tools detect communication that matches a botnet, they log offending packets for further
analysis and generate alerts to help you identify the traffic and systems in question. As a result, you are able
to quickly ascertain that there is an immediate need for investigation, determine when the attack occurred,
and gain an understanding of the number of hosts that are involved. Some network tools focus on collecting
only the offending traffic while others can be augmented to collect all of the traffic on the network to
provide a more advanced and intensive analysis. Combining the two provides for a very thorough and
complete picture of malicious activity, as well as better insight to the type of regular activity that could have
led to an infection.
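
The matching step itself is simple; the hard part is keeping the command and control list current. Here is a minimal sketch in Python of the kind of check these tools perform, using documentation-range IP addresses as stand-ins for real indicators.

    # Minimal sketch: compare outbound flow records against known C&C endpoints.
    known_cnc = {"203.0.113.45", "198.51.100.7"}   # placeholder indicators

    flows = [
        {"src": "10.1.4.22", "dst": "203.0.113.45", "dport": 443},
        {"src": "10.1.4.23", "dst": "93.184.216.34", "dport": 80},
    ]

    def flag_botnet_traffic(flow_records, blocklist):
        """Yield flows whose destination matches a known C&C address."""
        for flow in flow_records:
            if flow["dst"] in blocklist:
                yield flow

    for hit in flag_botnet_traffic(flows, known_cnc):
        print("ALERT:", hit["src"], "->", hit["dst"], "port", hit["dport"], "matches known C&C")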

Host system analysis refers to looking for the presence of malicious behaviors occurring on a system, and
the existence of the malicious code on these systems. By analyzing the memory of a system, you can
identify malware regardless of the techniques the attacker used to hide it. Traditional anti-virus and anti-
malware techniques are mostly signature-based, looking for the execution of a known file on a system, and rely heavily on prior knowledge of an attack. Shifting the focus to memory analysis allows you to
determine exactly what is happening on a system, including what rights the malicious code has, what
systems it is attempting to make connections to, and what data is exposed to loss. With these tools, you can
perform scans on a scheduled basis across multiple systems simultaneously or as needed based on the
indicators of interest from the network malware monitoring and analysis.
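
Real memory-analysis tools parse process structures, network handles and injected code; the sketch below shows only the most basic step, sweeping a raw memory image for known indicator strings. The file name and indicators are hypothetical.

    # Minimal sketch: search a raw memory image for known indicator strings.
    INDICATORS = [b"evil-c2.example.net", b"keylog.dat"]   # placeholder indicators

    def scan_memory_image(path, indicators, chunk_size=4 * 1024 * 1024):
        """Return the indicators found anywhere in the image, reading it in chunks."""
        found = set()
        overlap = max(len(i) for i in indicators) - 1
        tail = b""
        with open(path, "rb") as img:
            while chunk := img.read(chunk_size):
                window = tail + chunk          # keep a small overlap so matches
                for ind in indicators:         # spanning chunk boundaries are caught
                    if ind in window:
                        found.add(ind)
                tail = window[-overlap:]
        return found

    # Example: scan_memory_image("host42.mem", INDICATORS)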

Today, some protection systems look to enhance the traditional signature-based methods with behavioral techniques designed to identify the malicious behavior of malware. This is similar to what I described for host system analysis, but with a focus on real-time detection and prevention before an infection occurs. These techniques use virtualization to execute code in a sandbox and observe what it does, can run on the network or on the host, and effectively provide a better antivirus.
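
As a rough illustration of the behavioral idea, the sketch below scores the actions observed while a sample runs in a sandbox against a weighted list of behaviors commonly associated with malware. The behavior names, weights and threshold are illustrative, not taken from any particular product.

    # Minimal sketch: score sandbox-observed behaviors and decide whether to block.
    SUSPICIOUS_BEHAVIORS = {
        "writes_to_autorun_key": 3,
        "injects_into_other_process": 4,
        "contacts_unlisted_ip": 2,
        "disables_security_service": 5,
    }

    def score_sample(observed_actions, threshold=5):
        score = sum(SUSPICIOUS_BEHAVIORS.get(action, 0) for action in observed_actions)
        return score, score >= threshold

    observed = ["writes_to_autorun_key", "contacts_unlisted_ip", "reads_config_file"]
    score, block = score_sample(observed)
    print("score:", score, "block:", block)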

What are you doing to address the current infections in your systems?

Chris Morales
Solutions Engineer – Accuvant


Sep 23 2010

Changing User Behavior is Key to the Malware Protection Process
Published by cmorales under Strategy, malware

My colleague, Ryan Smith, recently wrote about Defense in Depth and talked about the fact that, regardless
of how many tools and techniques an organization implements to prevent infection through malware, they
won’t be able to stop every infection. I agree, and would take that a step further to say that it’s practical to
assume a certain percentage of systems will be infected at some point during the course of a year.
Therefore, it’s extremely important to create a methodology so that you can find infections within a
reasonable timeframe and mitigate the loss of information associated with a breach.

Here are two long-term strategies that you can implement and develop over the course of time.

   1. Information policy – Compensation tied to security metrics is a strong initial method to create
      change in company culture. Metrics should be simple and address basic security requirements that can
      be easily measured, such as total number of system vulnerabilities. If incentive compensation isn’t
      your organization’s bag, at a minimum you should create and notify employees of a policy to conduct
      regular, unannounced social engineering tests. The results of the tests should be immediately returned
      to employees along with information about further security training, as required.

      Information policy won’t ensure the safety of your organization, but it will help reduce the footprint of
      exposure through a very common method of infection: users accepting malicious email or clicking
      through to malicious sites unintentionally while surfing the Internet.

   2. Define the access of confidential data – Data policies need to define what constitutes critical data,
      who has access to the data, and where and how the specific data should be stored. The goal is to know
      who and what the real threats are in order to identify the risk. By removing the ability of most users to
      access confidential data, you can focus your efforts on more stringent requirements for those users and
      systems that do have confidential data, helping you to avoid a costly breach.

Information policy and defining (and limiting) the access of confidential data will enable you to change user
behavior so that you can minimize the threat, and respond more quickly as infections occur. There are also
some tools and techniques that you can use in the short term to quickly address the current infection of
systems. I’ll talk about those in my next blog posting.

Chris Morales
Solutions Engineer – Accuvant


Sep 14 2010

If You Are Attending Ekoparty in Argentina…
Published by cvalasek under Conferences

Hello internet-sphere,

My name is Chris Valasek and I’m the newest addition to the Accuvant LABS research team. I will be
working alongside Chief Research Scientist Ryan Smith on a variety of subjects. While I mainly do reverse
engineering and exploitation related work, we have plans to work on a wide array of internet awesomeness.

Additionally, I wanted to let everyone know that I will be speaking at Ekoparty (http://www.ekoparty.org)
in Buenos Aires, Argentina, September 16-17. Ekoparty is in its sixth year and appears to be getting bigger
and better as time goes on. I am truly honored to be speaking this year and hope to make subsequent
conferences as well. I look forward not only to speaking, but also to chatting with all the South Americans
about computer security in general. I will try to keep everyone posted on some of the conversations I have
via Twitter at @nudehaberdasher.

My topic at the show will be Understanding the Low Fragmentation Heap: From Allocation to Exploitation,
which specifically focuses on how the heap manager in Windows 7 (32-bit) works. It covers topics ranging
from data structures to exploitation techniques and is useful for everyone from security professionals to software developers. If you have any interest in modern Windows memory management, I’d suggest stopping by if you live near Buenos Aires; otherwise, please feel free to read the following paper/slides that were presented on the same topic at Black Hat USA 2010 in Las Vegas, Nevada.

Again, for real-time info, don’t hesitate to follow me on Twitter for more frequent updates about my work
and presentations (@nudehaberdasher).

http://www.illmatics.com/Understanding_the_LFH.pdf
http://www.illmatics.com/Understanding_the_LFH_Slides.pdf

Chris Valasek
Senior Research Scientist – Accuvant LABS


Sep 13 2010

PCI DSS 2.0 is on the Horizon
Published by bserra under PCI

A new version of the PCI Data Security Standard (PCI-DSS) is targeted for release in October. A lot of
companies are aware that the revised standard is coming out, and many of our clients have been asking us
what the revisions will entail, and what they’ll mean to them.

I think Seana Pitt, American Express’ vice president of global merchant policies and data management, did
a great job summing up PCI DSS in this quote, “PCI DSS is intended to protect cardholder data, wherever it
resides. All payment card brand members must comply and ensure their compliance of their merchants and
service providers who store, process, or transmit credit card account numbers. The program applies to all
payment channels.”

The standard was originally developed on a two-year lifecycle, meaning each version was in place for two years before being revised.
During those two years, the PCI Security Standards Council reached out to merchants, Qualified Security
Assessors (QSAs), banks, processors and service providers worldwide for ongoing feedback and comments.
The idea was to obtain regular input from key stakeholders in order to continuously strengthen the standards
and keep them in line with the threat landscape. The Council recently announced that they were changing the
development lifecycle period for PCI-DSS from two years to three years moving forward. The extended
cycle is good news for merchants because it gives them more time to understand and implement the
requirements.

While the new version of the standard, 2.0, is scheduled for release at the end of October, it “officially”
goes into effect on Jan. 1, 2011. That means that many companies will have only a few months (November
and December) to address any of the changes. “Applying a risk-based approach to addressing
vulnerabilities,” PCI DSS Requirement 6.2, could be the most impactful change for risk and compliance
management. Today, any identified system vulnerability can cause a failed assessment. With this new version of the standard, a risk-based approach can be used to determine whether the identified vulnerability has exploit potential in the environment.

Virtualization also is interesting. The current version of the standard allows users to have only one primary
function per server. With virtualization, it is possible to have many servers within a server, all within one
physical box. The revised version of the standard clarifies that virtualization is allowed. There is also
expected to be a subsequent, independent document that defines validation requirements on how to
implement or assess a virtualized environment.

The lion’s share of the modifications in v2.0 relates to clarification and guidance, to make sure everyone has a common interpretation. Because of this update, the standard will be interpreted more consistently by companies that need to comply. In addition, it’s going to level the playing field by forcing companies that don’t currently meet minimum requirements to enforce more stringent controls.

One final note: companies will [still] need to validate every year. I recommend that even companies
beginning the validation assessment before Jan. 1, 2011 use the new standard. It may provide more
flexibility and clarification.

Brian Serra
PCI Manager – Accuvant


Sep 08 2010

Malware Mitigation Trends: Utilizing the Latest Weapons Against
the Modern Malware Threat
Published by srichards under MSSP, encryption, malware

In the malware mitigation market, there are clear divisions among the vendors; they differ in perspective, detection philosophy and technology approach.

Most legacy network security devices have developed some semblance of controls to fight malware. Similar
to the approach of traditional AV vendors, it is relatively easy for a network security device such as a
content-aware firewall or intrusion prevention system to stop identified malware once the vendor has
developed signatures or detection mechanisms that look for known instances of malware “breeds”. For
known malware threats, this signature approach can be effective.

However, known threat detection mechanisms have been rendered less effective with the advent of the
“commercial” malware market. This quickly growing black-market offers a forum for criminal enterprises
to market their own malware-creation suites. Some even offer technical support.

These user-friendly, GUI-based software suites enable criminal-entrepreneurs to very easily create their
own customized versions of malware. Each variation created can be encrypted and packed to create a new
and unique signature for each malware package. As a result, each new malware breed requires known
threat detection vendors to obtain, deconstruct and develop a new detection signature for the malware
package variation in order to detect and block it. Some of the more sophisticated malware creation tools
even provide a polymorphic repacking function that is executed in an automated fashion for each new
victim host. Each time a new victim host is exploited, the malware creates a unique variation to transmit to
the next targeted system.
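
To see why this defeats signature matching, consider the sketch below: the same payload, “packed” with a fresh single-byte XOR key each time, produces a different hash on every run, so a signature written for one sample never matches the next. The payload is a harmless placeholder string.

    # Minimal sketch: per-victim repacking yields a new hash (and signature) every time.
    import hashlib
    import os

    payload = b"placeholder payload used only for illustration"

    def repack(data):
        key = 1 + (os.urandom(1)[0] % 255)      # fresh non-zero key per "victim"
        return bytes(b ^ key for b in data)

    print("original :", hashlib.sha256(payload).hexdigest()[:16])
    print("variant 1:", hashlib.sha256(repack(payload)).hexdigest()[:16])
    print("variant 2:", hashlib.sha256(repack(payload)).hexdigest()[:16])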

Perhaps not surprisingly, this new evolution in malware development has created multiple, unpredictable
variations. It also has spawned new technologies and detection philosophies based on the behavior of the
new malware and/or compromised host.

One approach to combat malware is to use a virtual environment within a network device or host agent.
This can enable the security device to determine the behavior of malware that is plucked off the network
once it is allowed to run in this safe environment. Based on the captured malware’s behavior, the source IP
address is then added to a known “bad-actor” database and optional controls can be added to restrict that
infected host’s network access.

Other vendors utilize a network sensor to detect malware callback or “command and control” (CnC) traffic
behavior. Once a malware-infected system is identified, steps can be taken to quarantine and decontaminate
the host. The CnC traffic detection approach, however, requires accurate and timely intelligence regarding
the CnC networks’ methods of communication as well as up-to-date knowledge of the active CnC networks.

Building on this, another approach to countering malware threats is to work with companies that offer
malware threat intelligence services. These services can include building and maintaining databases that
aggregate malware infected or suspicious IP addresses and identifying active players in malware
organizations and botnet networks. Companies that provide malware threat intelligence services typically
build their malware intelligence databases by infiltrating malware organizations using human intelligence
efforts, performing dedicated malware reverse-engineering research, utilizing “honey-pot” networks (fake
hosts and networks) and by forming alliances with Managed Security Service Providers.

It is safe to say that almost every organization has a vested interest in keeping their customers and end-users
safe. By working with companies that provide malware threat intelligence services, an enterprise can
drastically reduce the risk of an infected user inadvertently exposing sensitive personal information to a bot
or malware agent that is active on the customer’s system. It can also integrate malware intelligence into the front end of an application to limit access by infected hosts, or use the intelligence in its log management life-cycle to notify potentially compromised users before their information can be used by the criminals who control the malware agent. Numerous security vendors are beginning to utilize this type of malware-specific intelligence in their products, or to enable customers to integrate this information into their own.
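
As a simple illustration of the front-end integration, the sketch below checks a client address against an intelligence feed before serving a request and diverts infected hosts to a notification page. The feed contents, addresses and paths are hypothetical placeholders.

    # Minimal sketch: consult a malware intelligence feed before serving a request.
    def load_intel_feed():
        # In practice this set would be refreshed regularly from the intelligence provider.
        return {"203.0.113.99", "198.51.100.23"}

    INFECTED_HOSTS = load_intel_feed()

    def handle_request(client_ip):
        if client_ip in INFECTED_HOSTS:
            return 302, "/account-at-risk"   # redirect to a notification/remediation page
        return 200, "/checkout"

    print(handle_request("203.0.113.99"))    # (302, '/account-at-risk')
    print(handle_request("192.0.2.10"))      # (200, '/checkout')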

However, another effective method is a host-based approach that can be divided into pre-incident and post-
incident areas of concentration. A pre-incident approach utilizes processes and tools to perform regular auditing of machines in the environment, application white-listing and process behavior analysis. A number of vendors provide software solutions that offer value in this pre-incident space. This approach can present some challenges but, overall, it looks promising provided the deploying organization uses an effective method to reduce the potential impact this type of tool could have on the business during deployment.
The post-incident approach utilizes investigative or forensic examination of infected machines to determine
the extent and impact of a malware incident. This can be accomplished through the use of host examination
tools that verify the existence of malware through memory analysis among other techniques. The use of
investigative and forensic tools to safely analyze infected systems is necessary to effectively determine the
extent of the breach and identify the exact data compromised. Organizations that are serious about
measuring and addressing the impact of malware incidents should acquire a software suite especially
tailored for the unique challenges of malware investigation or forensics.
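
As a minimal illustration of the application white-listing piece of the pre-incident approach, the sketch below hashes a binary and allows it only if the digest appears on an approved list. The allowlist entry is a placeholder (it happens to be the SHA-256 of an empty file) and the path is hypothetical.

    # Minimal sketch: allow a binary to run only if its hash is on the approved list.
    import hashlib

    APPROVED_HASHES = {
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
    }

    def is_approved(path):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED_HASHES

    # Example:
    # if not is_approved("/opt/app/bin/updater"):
    #     raise PermissionError("binary is not on the application whitelist")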

In conclusion, the continually evolving malware threat and the criminal innovation taking place at record pace require enterprises to adopt a multi-faceted approach to malware if they hope to keep up. Attack surface management and active controls at the host and/or network, combined with an effective investigative capability, will provide organizations the toolset needed to help mitigate the impact of malware on their business. A countermeasure or compensating control used in isolation will likely not provide the breadth needed to cover all the possible attack vectors presented by modern malware threats.

Steve Richards
Solutions Engineer – Accuvant


Aug 25 2010
Learning About NAC From Higher Education
Published by jprost under NAC

Network Access Control (NAC) is something that people are talking about everywhere, whether they realize
it or not. It’s not that they are determining how to utilize standards such as 802.1X, IF-MAP and MS-NAP,
or marveling at how cool and exciting they might be. Instead, the discussions are about business decisions and initiatives driven by business challenges and needs, and those challenges and needs are exactly what NAC addresses.

For instance, NAC has been finding increased traction within the traditional enterprise as businesses expand
their use of SaaS solutions and cloud services. Just think about how many sales organizations have
implemented online cloud-based CRM offerings such as Salesforce.com or NetSuite. More and more
companies are moving away from purely in-house solutions, and looking at MSPs and saying, “I want you
to manage my X” or “I want to leverage your infrastructure to do Y.” As a result, companies are
employing a combination of on-premise equipment and cloud services. And, oftentimes, that erodes their
security focus. Instead of discussing NAC strategies and how they can help protect corporate assets,
companies talk about how they can secure the cloud services, SaaS, and other services beyond their
perimeters that they don’t necessarily control.

NAC is also finding its place as growing IT environments become increasingly difficult to manage and
maintain. A homogeneous Windows environment may still have four different flavors of Windows running, two different versions of Windows Server… you get the point. That is a challenge in and of itself, but add the need to support smartphones (Android devices, iPhones, BlackBerries, etc.), iPads, hand scanners, you name it,
and you’ve got a growing, disparate environment that is further dissolving the hard perimeter of yesteryear.
Don’t forget about trends such as telecommuting! The results? A management conundrum as the perimeter
continues to deteriorate. The big question is: how do we secure all those devices?

Higher education has been successfully dealing with these very challenges for quite some time. Students
want to use divergent technologies, such as laptops for doing schoolwork, Smartphones, gaming consoles,
and DVRs, all of which connect to the network and want Internet access. This alone creates a heterogeneous
environment that is challenging to manage. Higher education institutions have responded in a number of
ways including strategically using NAC to adapt effectively to the hyper-changing environments.

Rather than trying to control and manage every end point, NAC audits the end point and enforces access
based on the results. Auditing end points enables organizations to provide healthy networking environments.
This concept can be equated to secondary school requirements that parents deal with every year – every
child must have an annual doctor’s check-up and be up-to-date on certain immunizations so that he or she
can attend school. Within information security, organizations can see whether or not a user has up-to-date
antivirus software, a firewall running, etc., segregate them into the environment based on the results, and
allocate specific resources to the user to make them healthy. For example, if a user doesn’t have the latest
antivirus software, the organization can restrict access to all network resources except those necessary to
update their antivirus software. The user is granted access to the rest of the network only after the antivirus
software is downloaded.
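
The audit-then-enforce flow can be sketched very simply: evaluate a few posture attributes reported by the endpoint and map the result to a network segment. The attribute names and VLAN numbers below are illustrative, not taken from any specific NAC product.

    # Minimal sketch: map an endpoint's reported posture to a network segment.
    def assign_segment(posture):
        """Return (vlan, reason) for an endpoint based on its reported posture."""
        if not posture.get("antivirus_current", False):
            return 666, "quarantine: antivirus signatures out of date"
        if not posture.get("firewall_enabled", False):
            return 666, "quarantine: host firewall disabled"
        return 100, "healthy: full network access"

    endpoint = {"antivirus_current": False, "firewall_enabled": True}
    vlan, reason = assign_segment(endpoint)
    print("placing endpoint on VLAN", vlan, "(" + reason + ")")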

Commercial organizations are now revisiting NAC and looking at the solutions and strategies that higher education has employed. Do you think it’s possible for them to achieve this level of control?

Jason Prost
Solutions Engineer – Accuvant


Aug 18 2010

Is DiD Really the Way?
Published by rsmith under Strategy

It’s a pretty well-known fact that an attacker with sufficient means and motive has the potential to bypass every security measure you put in place. In response, people often propose Defense in Depth (DiD), believing that by implementing layers of security controls at various logical and physical tiers within an organization, they can reduce security risk. Unfortunately, that’s not necessarily true.

Sorry to be the bearer of bad news, but DiD can actually make the job of an attacker far easier than it
otherwise would be, depending on how it is implemented. Here’s why: as the complexity of the data that is
processed increases, it becomes easier for an attacker to introduce an exploitable vulnerability. Therefore,
when an attacker is culling the potential target list, they will focus on the applications that process the most
complex data. Anti-virus applications are a pretty good fit.

There are companies that implement as many anti-virus products in as many places as their budgets will
allow because they think this strategy will keep them safe. They’ve got anti-virus software on workstations,
email gateways, proxy servers, network-attached storage, mobile devices, messaging gateways, FTP and
HTTP traffic analyzers, and soon enough, they’ll have it on any other technology that stores or transmits
files. This strategy gives the attacker a path into each of these systems and allows them to bypass each
segmentation layer that may exist within the network. This strategy also makes end users feel invincible, and
often leads them to participate in more risky online behavior. When a false sense of security is established, a
user may use the same machine to perform risky online behaviors that they use to perform financial
transactions, putting sensitive personal or corporate data at risk.

So, what security measures will work without providing additional opportunities to attackers?

Patching the underlying error within the code is the easiest way to keep a vulnerability from being
exploited. This process increases security without increasing the amount of code an attacker can interact
with. While it is the most straightforward solution, many organizations fail to quickly patch vulnerabilities
because of time constraints, management issues or because the patch causes a mission critical application to
fail.

Virtualization can provide a computing platform where dangerous operations can be performed and
relatively little effort expended to revert the virtual machine to the exact state it was before dangerous
actions were performed. The biggest danger with virtualization is that attackers can leverage vulnerabilities
to move between the virtual machine and the host machine. As long as the virtual machine software is kept
up-to-date with the latest patches, an attacker would have to use a zero-day exploit to do so.

Another effective strategy is to remove infrequently used features from software packages. In general this
approach is not commonly employed because software developers feel the need to maintain backwards
compatibility, a tendency that is driven by end users who want to be able to access and manipulate historical
documents. Here’s a workaround: include a separate program that updates documents produced by outdated
versions of a program to the newest version. This enables the backwards compatibility that some end-users
desire while keeping the main program lean with regard to rarely used features.

The bottom line is that DiD increases the attack surface available to an attacker and can lead to assumptions
that further increase risk to an organization. When implementing a security strategy, it is always preferable
to limit the amount of code that processes potentially malicious data.

Ryan Smith
Chief Research Scientist – Accuvant LABS


