The Anti-Virus Guy

November 8, 2009

I am in danger of pigeon-holing myself, of being type-cast as The Anti-Virus Guy. That doesn't bother me too much when I see how the Heartland, Hannaford Brothers and RSA data breaches remained effective and undiscovered because the malware behind them went undetected. According to the 2009 Verizon Data Breach Investigations Report, 38% of data thefts utilized malware (and 67% were aided by significant errors). According to the 2009 CSI Computer Crime and Security Survey, 74% of companies experienced malware infections in 2005; that number decreased to 50% in 2008 but returned to 64% in 2009.

Nick Lewis, in his debriefing Operation Aurora: Tips for thwarting zero-day attacks, unknown malware, said:

Never-before-seen malware is a fairly common attack vector, often used to do something that will immediately be monetized by a common criminal.

My statistically unsupported speculation (though it is consistent with reviews of the Heartland, Hannaford and other breaches) is that most data theft can be described by the following scenario:

  1. a mistake was exploited (misconfiguration or vulnerability left unpatched),
  2. the network was hacked,
  3. malware was installed and
  4. data collected.

From this you learn that finding and fixing configuration errors and applying patches are required measures. Finding the intrusions, the undetected malware and the data exfiltration is also required.

If you have a highly mobile workforce, anti-virus software should be considered as an intrusion detection system.

Intrusion detection systems detect anomalies, typically restricting their focus to anomalous network activity. Those anomalies may have been caused by an intruder, although they rarely are. Intrusion detection systems rely upon a person to investigate and determine the appropriate action.

Anti-virus software detects malware, typically spyware or Trojan horse software. A virus (malicious code inserted in a host program) is rare. Anti-virus software has expanded its scope to include a broader range of software that you may not want running.

Learning where the detected malware came from helps you to block access to that location and helps you to learn what other programs arrived from that location. Treat malware detection alerts as suspicious activity to investigate and take appropriate action.

Network-based IDS blind spots

In a mobile workforce you cannot rely upon your network monitoring equipment to inform you about anomalous conditions. Your network-based intrusion detection system can scan internal network traffic including traffic on VPN connections. Other network traffic, out-of-company-network traffic, is outside its scope. Nonetheless, you can still gather information about anomalous events through your anti-virus software.

A network-based IDS may not map to your organizational structure. When the Network Operations Center (NOC) or Security Operations Center (SOC) receives an alert, it must be dispatched to another organization for investigation. Does the receiving organization have a mechanism for accepting work that is not customer-initiated? Can the NOC or SOC open a problem report or work ticket for the receiving organization? Is the NOC or SOC willing to do the data entry required? The problem with testing is that you eventually find something; that something means more work. Will the NOC or SOC drop the alert because it is easier to ignore?

Zeus botnet

The Zeus botnet is tracked by a Swiss security team at abuse.ch, where you can watch statistics about detections and the currently active Command and Control (C&C) servers.


October 2009 is National Cyber Security Awareness Month

October 1, 2009

From InfraGard:

October 2009 is National Cyber Security Awareness Month (NCSAM), which the FBI endorses and participates in.  The NCSAM event has been held every October since 2001, as a national awareness campaign to encourage everyone to protect their computers and our nation's critical cyber infrastructure.

Cyber security requires vigilance 365 days per year.  However, the Department of Homeland Security, the FBI, the National Cyber Security Alliance, and the Multi-State Information Sharing and Analysis Center coordinate to shed a brighter light in October on what home users, schools, businesses and governments need to do in order to protect their computers, children, and data.

Ultimately, our cyber infrastructure is only as strong as the weakest link.  No individual, business, or government entity is solely responsible for cyber security.  Everyone has a role and everyone needs to share the responsibility to secure their part of cyber space and the networks they use.  The steps we take may differ based on what we do online and our responsibilities.  However, everyone needs to understand how their individual actions have a collective impact on cyber security.

Please read the Awareness Month Fact Sheet, Awareness Month What Home Users Can Do Tip Sheet, and the Awareness Month CSAVE Fact Sheet.

You can read more by visiting STAYSAFEONLINE.ORG.

Thanks,

John “Chris” Dowd
Unit Chief
Public/Private Alliance Unit
Strategic Outreach and Initiative Section
Cyber Division


Apple iPhone OS 3.0.1 SMS Vulnerability Debriefing

August 4, 2009

Apple iPhone SMS vulnerability sequence of events

June 24, 2009 CVE is created (CVE-2009-2204), no details.

July 2, 2009 IDG news service releases story about Charlie Miller demonstrating a malicious text message (Short Message Service (SMS) message) at SyScan ’09 in Singapore that crashes the iPhone. This demonstration suggests that a maliciously crafted text message could give a remote user root access to the phone. The attacker could then install software of their choice.

The story indicates that Apple is working on a patch.

July 3, 2009 IDG news service releases a retraction about Apple working on a patch. Confirmation from Apple was not available.

July 5, 2009 CVE is created (CVE-2009-2315) (Bugtraq ID 35569), as a placeholder for the July 3 Charlie Miller story.

July 28, 2009 Forbes publishes How To Hijack ‘Every iPhone In The World’ announcing the upcoming demonstration of the SMS vulnerability at Black Hat in Las Vegas.

July 30, 2009 Charlie Miller and Collin Mulliner demonstrate the SMS vulnerability at Black Hat in Las Vegas.

July 31, 2009 Apple issues security bulletin HT3754 titled About the security content of iPhone OS 3.0.1 (last updated July 31, 2009) announcing the availability of OS 3.0.1, fixing an SMS message vulnerability described in CVE-2009-2204 (the June 24 CVE), crediting Charlie Miller of Independent Security Evaluators and Collin Mulliner of Technical University Berlin. The security bulletin reiterates Apple’s policy about vulnerabilities:

For the protection of our customers, Apple does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches or releases are available.

The iPhone OS 3.0.1 update is available through iTunes.

Apple spokesperson Tom Neumayr announces that they have made a security patch available less than 24 hours after a demonstration of this exploit (as reported by the New York Times, Yahoo News and Gizmodo):

We appreciate the information provided to us about SMS vulnerabilities which affect several mobile phone platforms. This morning, less than 24 hours after a demonstration of this exploit, we’ve issued a free software update that eliminates the vulnerability from the iPhone. Contrary to what’s been reported, no one has been able to take control of the iPhone to gain access to personal information using this exploit.

The Apple mailing list announces the availability of OS 3.0.1.

APPLE-SA-2009-07-31-1 iPhone OS 3.0.1

iPhone OS 3.0.1 is now available and addresses the following:

CoreTelephony
CVE-ID: CVE-2009-2204
Available for: iPhone OS 1.0 through iPhone OS 3.0
Impact: Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution
Description: A memory corruption issue exists in the decoding of SMS messages. Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution. This update addresses the issue through improved error handling. Credit to Charlie Miller of Independent Security Evaluators, and Collin Mulliner of Fraunhofer SIT for reporting this issue.

Installation note:

This update is only available through iTunes, and will not appear in your computer’s Software Update application, or in the Apple Downloads site. Make sure you have an internet connection and have installed the latest version of iTunes from http://www.apple.com/itunes/

iTunes will automatically check Apple’s update server on its weekly schedule. When an update is detected, it will download it. When the iPhone is docked, iTunes will present the user with the option to install the update. We recommend applying the update immediately if possible. Selecting “don’t install” will present the option the next time you connect your iPhone.

The automatic update process may take up to a week depending on the day that iTunes checks for updates. You may manually obtain the update via the “Check for Update” button within iTunes. After doing this, the update can be applied when your iPhone is docked to your computer.

To check that the iPhone has been updated:

* Navigate to Settings
* Select General
* Select About. The version after applying this update will be “3.0.1 (7A400)” or later

Information will also be posted to the Apple Security Updates web site: http://support.apple.com/kb/HT1222

August 4, 2009 CVE is updated (CVE-2009-2204), adding details previously available under CVE-2009-2315 and details from Apple.

August 13, 2009 Just back from Defcon, PaulDotCom Security Weekly reported that Charlie Miller had been unable to exploit this vulnerability in a useful way. AT&T appears to throttle SMS traffic, and a useful exploit of the SMS vulnerability requires many SMS messages. By removing AT&T from the testing scenario, using his own phone to attack his own phone, he could demonstrate the vulnerability.

Was Mr. Miller reporting that a high volume of SMS messages denies SMS traffic to all users of that cell tower?

Procedural issues surrounding the SMS vulnerability and its mitigation

The current mitigation plan builds in a delay of at least one week between the fix becoming available and its full deployment.

Were AT&T, O2 and other carriers notified? Were AT&T, O2 and other carriers prepared to filter malicious SMS messages if a security fix was not available before the exploit was in wide use?

Were anti-virus vendors notified? Did they receive a sample of a maliciously crafted SMS message? Have anti-virus vendors distributed detection of these malformed SMS messages?

Notes

Tom Neumayr’s remarks are accurate:

  1. The vulnerability had been demonstrated the day before, at Black Hat. He did not claim that a fix was made available within 24 hours of the first demonstration (the SyScan demonstration in early July), only that the Black Hat demonstration had occurred less than 24 hours before the fix was made available.
  2. Reports that an iPhone had been controlled or personal information had been disclosed using this exploit (“pwned”) were inaccurate; they were misinterpretations of the researchers’ statements. Apple confirms that this is a potential consequence of leaving the vulnerability unmitigated, but there were no known successful attacks that controlled an iPhone or disclosed personal information.

The C-Level Virus

July 30, 2009

The significant characteristic of the C-Level Virus is the request from a CEO, CFO or CIO (a C-level executive) to learn if we are protected against “I have no specific details, but I heard it on the news.” Much like a hoax, its payload is the consumption of resources.

There are new threats every day; see some of the Current Threat News links. For a threat to appear in a broadcast news report, there must be something peculiar about the payload. “You’ll see a floating Obama head,” for example, is newsworthy.

It is nearly impossible to confirm protection against these newsworthy threats when the only reference is a local news report. There is no sample in the wild. It is not a significant threat.

The Sophos Security Threat Report for July 2009 contains a sidebar (on page 10) titled “Conficker – A worm gains notoriety” which lays blame for Conficker fears such as “Will your PC be jacked on April first?” (in the British tabloid The Sun), “The Conficker Worm: April Fool’s Joke or Unthinkable Disaster?” and “PC security forces face April 1 showdown with Conficker worm” at the feet of journalists who ignored computer security professionals.

The C-Level Virus diverts resources to low risk threats. Unfortunately, there are few mitigation measures available to defend against the C-Level Virus.


Busy Firewall Administrators Note

July 23, 2009

New job? Review how to restore services. E.g., how would you do a restore? How old is the (Windows) Automated System Recovery (ASR)?

Run NETALYZR for a snapshot of what your connection to the Internet is like (requires Java). Save the links it creates for comparison with future results.

You drop inbound traffic for unnecessary protocols and ports. You drop inbound traffic with known malicious patterns or signatures. See “The Anomaly or Signature based intrusion detection: Do you need both?” [mp3] presentation by David Jacobs, Principal of The Jacobs Group for an overview.

You drop inbound email for malicious patterns or sources.

Visit SRI Malware Threat Center. See the list of Most Aggressive Malware Attack Source and Filters. Test the rules. Implement the rules. See the Most Prolific BotNet Command and Control Servers and Filters. Test the rules. Implement the rules.

Test the rules: Flint is a free, open source, web-based firewall rule scanner.

Visit DShield Top Ten Source IPs or SANS Top Ten Source IPs or StopBadware’s Top 50 IPs. Block access (In and Out, all ports).
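
If you keep the downloaded block list as a plain text file, turning it into firewall rules can be scripted. The sketch below is only an illustration: the file name blocklist.txt, the use of iptables, and blocking both directions are assumptions to adapt to your own firewall.

# Sketch: convert a saved block list (one IP or CIDR per line, '#' for comments)
# into iptables drop rules for both directions. Review the output before applying it.
import ipaddress

with open("blocklist.txt") as f:            # assumed local copy of the published list
    for line in f:
        entry = line.split("#")[0].strip()
        if not entry:
            continue
        try:
            net = ipaddress.ip_network(entry, strict=False)   # validate IP or network
        except ValueError:
            continue                        # skip anything that is not an address
        print(f"iptables -A INPUT -s {net} -j DROP")
        print(f"iptables -A OUTPUT -d {net} -j DROP")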

No, this is not rigorous. You’re slashing out the alerts you don’t want to waste time investigating, so you can focus on the interesting alerts. You still need to review the logs and follow up. But look at what you’re doing. You can tell your boss you finished this. These are measurable tasks, good for status reports. They are good work, too.

For an explanation of the steps you just skipped (because your boss should ask why he cares), a walk-through is in Chapter 4: Lifecycle of a Vulnerability from Practical Intrusion Analysis: Prevention and Detection for the Twenty-First Century by Ryan Trost.

Who forwards your network traffic? How do you get on the Internet? Tracert shows IP addresses, but what’s the network diagram? Robtex.com, friend. Look up your domain name. You can give your boss a network diagram. Again, good work!

In a jam, trying to figure out what’s going on? Robert Graham’s FAQ: Firewall Forensics (What am I seeing?) is a practical file to work from. Can’t connect to it? Various versions appear around the net; the latest I can find (from a reliable source) is version 0.4.0 (April 20, 2000) at linuxsecurity.com and be.at. A version 1.2.0 (January 2003) can be found at coffeenix.net.

See also:

  • Spyware warrior’s firewall links
  • Configuring IP Access Lists Cisco’s Guide To Access Control Lists (ACLs)
  • Port numbers
  • Sanewall Linux firewall builder for IPv4 and IPv6
  • Linux IPTABLES HOWTO
  • Linux firewalls and routing
  • Hakin9 04/2010 [pdf] with Firewalls for Beginners by Antonio Fanelli
  • Firewall leak testing
  • Firewall Builder firewall policy configuration and management
  • Firewall Auditor, a free firewall PCI assessment tool provided by FireMon
  • Center for Internet Security (CIS) Cisco Router Audit Tool (RAT) assesses target devices for conformance with the CIS Benchmarks for Cisco Router IOS and Cisco PIX firewalls. The installation package for the tool includes benchmark documents (PDF) for both Cisco IOS and Cisco ASA, FWSM, and PIX security settings.
    NOTE: CIS RAT is out of date with the current CIS Cisco Benchmarks. A new, updated version of the tool is under development. Until the new version is released, RAT will remain an unsupported tool. Check for updates.
  • Nipper assists security professionals and network system administrators to securely configure network infrastructure devices. Search for the phrase “Cisco Router Device Router Security Report” to see examples posted on the Internet. The Open Source version is no longer supported. An Open Source fork (nipper-ng) exists.
  • Webfwlog is a flexible web-based firewall log analyzer and reporting tool. It supports standard system logs for linux, FreeBSD, OpenBSD, NetBSD, Solaris, Irix, OS X, etc. as well as Windows XP. Supported log file formats are netfilter, ipfilter, ipfw, ipchains and Windows XP. Webfwlog also supports logs saved in a database using the ULOG or NFLOG targets of the linux netfilter project, or any other database logs mapped with a view to the ulogd schema. Versions 1 and 2 of ulogd database schemas are supported.

Busy network administrators may wish to turn to Qsolved.com for tech support answers from Cisco professionals.

Diagram your network, perhaps using CADE, Dia, Diagram Designer, Gliffy or yEd.


Basic Virus Defense

July 14, 2009

While I would encourage everyone to pitch in and report the malware that isn’t being detected, they should do this only after they have verified that their infrastructure is intact.

The management console provided by the anti-virus vendor should not be your only guide for managing anti-virus software. Machines that are turned off or are off the network do not check in with the server. If the anti-virus client is not running (“broken machines”), then it is not checking in with the server. That means you can’t tell broken machines from turned-off machines when your only source of information is the anti-virus vendor console. Of course everything looks fine when your only frame of reference is the anti-virus vendor console.

At a minimum, periodically check for machines whose virus protection service is not in a running state. You will need an independent inventory tool for this. I’ve used Microsoft’s Systems Management Server (SMS). In the following WQL example, you may wish to change “ntrtscan” (Trend Micro OfficeScan’s real-time scan service) to the name of the real-time anti-virus scan service you are using.

select SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Name,
  SMS_R_System.LastLogonUserDomain, SMS_R_System.LastLogonUserName,
  SMS_G_System_WORKSTATION_STATUS.LastHardwareScan,
  SMS_G_System_SERVICE.DisplayName, SMS_G_System_SERVICE.Name
from SMS_R_System
  inner join SMS_G_System_WORKSTATION_STATUS
    on SMS_G_System_WORKSTATION_STATUS.ResourceID = SMS_R_System.ResourceId
  inner join SMS_G_System_SERVICE
    on SMS_G_System_SERVICE.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_SERVICE.Name like "ntrtscan"
  and SMS_G_System_SERVICE.State != "Running"
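
As a complementary spot check on a single machine (independent of the SMS inventory above), a few lines of Python can ask Windows whether the real-time scan service is actually running. This is only a hedged sketch; the service name is assumed to be Trend Micro's and should be changed to match your product.

# Sketch: ask the Windows service manager whether the real-time scan service is running.
import subprocess

SERVICE = "ntrtscan"   # assumed Trend Micro OfficeScan service; change to your product's
result = subprocess.run(["sc", "query", SERVICE], capture_output=True, text=True)
if "RUNNING" in result.stdout:
    print(f"{SERVICE} is running")
else:
    print(f"{SERVICE} is not running -- investigate this machine")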

Note that a similar argument should be made for validating your security patch deployment tool. Your patch deployment tool can show that it has patched all the machines it can communicate with. Machines it cannot communicate with are not reported. That is, these machines are not patched and not reported.

Validate that your patch deployment solution is as thorough as it appears to be. Validate that your antivirus defense is as intact as it appears to be.

Then proceed with reporting that which your anti-virus vendor isn’t detecting.

Preventative measures:

  • Apply patches or address software vulnerabilities through other measures.
  • Install and maintain virus protection software.
  • Use a firewall.

These preventative measures are complementary measures. Use information gleaned from malware detection alerts (even detected and blocked alerts) to supplement your firewall configuration. Specifically, drop all access to networks known to provide malware (such as Intercage and Inhoster).

A firewall drops network traffic. To defend yourself from external attacks, you should drop any unsolicited network traffic. Exceptions may be required to enable remote administration. At work, a corporate firewall should be protecting you. At home, your router should be (and probably is) performing this function for you. A laptop taken to a hotel or wireless cafe will not have the benefit of the corporate firewall or home router and must have its own firewall software installed.

In addition to inbound traffic maintenance, a firewall performs outbound traffic maintenance. Only specific types of outbound connections should be permitted. Your firewall’s default configuration is a good start.

Lesson learned: Have a firewall in place.

Many new variations of viruses appear each month. These new variations rarely reflect new software vulnerabilities; instead they use previously known software vulnerabilities and disguise their use in an attempt to evade detection by anti-virus software.

Lesson learned: Apply patches (or address software vulnerabilities through other measures).

Malware does not require a software vulnerability. Many instances of malware are willingly installed; these are the Trojan horses. Keeping up-to-date on software patches affords no protection from Trojan horses.

Lesson learned: Install antivirus software.

Application whitelisting (“permit only these applications to run”) is a good idea that requires time to collect information, has recurring maintenance costs (“adds” are a maintenance issue), will probably leave approved applications on the whitelist long after they have been retired (“deletes” are a maintenance issue), and provides no protection from in-memory threats. Hash collisions can create false confidence in application whitelists. As a preventative measure, application whitelists are a challenge in a changing environment. Application whitelists can still be used as a detective measure; this is part of configuration management.

When a new malware variation is released, a few thousand machines will be affected before a sample is isolated and submitted to an anti-virus vendor. A few thousand more machines will be affected before the anti-virus vendor releases detection for the variation. With multiple vendors, multiple submissions and varying virus pattern files released, there can be a few hundred thousand affected machines. This makes the labor of producing malware variants worthwhile.

Test to confirm that your anti-virus software is running using the eicar.com file or Detplock (“X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-CLOUD-TEST-FILE!$H+H*”). Test for spyware detection using spycar.org.

IBM ISS reports 14,000 pieces of malware a month. At AV-Test.org, you can see current statistics on anti-virus pattern file updates released in the past 7 days. In a 60,000 user environment, I could collect at least one sample of undetected malware each day.

An “install and walk away” approach to virus protection provides minimal protection. That is, installing an anti-virus product and configuring it to update on a regular and frequent basis provides minimal protection. Perhaps an occasional virus detection alert reminds you that the antivirus software is still running. The absence of alerts is an ambiguous result; having few or no alerts could mean there is little to report, or it could mean that anti-virus software is not doing an effective job.

When the “install and walk away” approach is adopted, or even when that approach is enhanced to include regular confirmation that anti-virus software is still running and accepting regular updates, there is still an opportunity for the malware author to evade anti-virus protection. In this scenario, frequently modifying the appearance of the virus (frequently changing its “signature”) can stay slightly ahead of the anti-virus product updates. This produces sufficient financial reward for the malware author.

What additional steps could you take? What would you do if you could?

Prepare for simple malware detection: Maintain a list of hash codes of known good software. fsum is a utility that computes hash codes quickly. You want this list if you suspect malware is running and you want to narrow your investigation by eliminating the known good software. Use fsum to compute a new list, then drop all the files that were on your old list. Keeping the hash code list up-to-date is a pain, but even an out-of-date list can quickly dismiss some files from consideration. (Incorporate hash codes into your change control process.) This can also assist if you intend to do application whitelisting (such as Windows AppLocker); you have identified applications.
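
fsum works well; the same idea in Python is shown below as a rough sketch. The directory and output file names are placeholders, and SHA-256 is an arbitrary choice of hash.

# Sketch: record a baseline of SHA-256 hashes for every file under a directory.
# Diff a later run against this file to spot additions and changes.
import hashlib
import os

ROOT = r"C:\Program Files"            # placeholder directory to baseline
BASELINE = "known_good_hashes.txt"    # placeholder output file

with open(BASELINE, "w") as out:
    for dirpath, _, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue              # skip locked or unreadable files
            out.write(f"{digest}  {path}\n")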

When malware is detected

When malware is detected, what should you do?

The first question you should ask is “Is this evidence of criminal activity?” In general it is not. As a rule, a malware detection incident does not suggest criminal activity, so we can proceed. If there is reason to suspect criminal activity is involved, extraordinary measures will be required. This could include seizing the system involved. Any further root cause inspection should occur using an image of the system; the image should be created in a manner that does not modify the original system. In this unusual situation, you should notify others who investigate malware alert messages that no additional action should be taken. Take no additional action without coordination.

Lessons learned: Know what the criminal activity policy is and how the forensic procedure begins.

The second question you should ask is “Is this evidence of an internal policy violation or of inappropriate behavior (such as visiting a porn site, gambling, or using a web-based external email account when this is counter to policy)?” You should know whether the appropriate management persons want to be notified.

Lessons learned: Know what the internal policy violation process is.

The final question will be “how should recovery be performed?” Between detection and recovery there are steps you can take to learn how your defenses can improve. See “Simple Malware Discovery Measures”.

When a malware alert appears, bear in mind:

A malware alert could indicate that the antivirus software is successfully protecting the system. In this case, follow-up should investigate where the malware came from. This information can be used to discover if any undetected malware arrived with the detected malware and to determine if that source (such as a web site or a storage device) should be blocked in the future.

A malware alert could be a “false report.” A new virus signature file may detect benign code as malicious code.

A malware alert could indicate a successfully infected system. Learn where the infection came from, just as you would when a system was successfully protected. Restore the system to a trustworthy state. “Cleaning” is the least preferred option. See “Can you clean a virus?”

A malware alert could detect residue of an earlier successful infection. This would occur if someone “cleaned” the system instead of restoring it to a trustworthy state (in an effort to avoid reimaging, for example). Restore the system to a trustworthy state.


Simple Malware Discovery Measures

July 9, 2009

If we really want to take virus protection seriously, we will get involved with reporting suspicious files to anti-virus vendors.

Malware developers thrive because very few people investigate virus alerts. A typical web-based virus attack scenario consists of multiple components. A person may willingly install software (a Trojan horse) and that software may download additional malicious components. A person may inadvertently install software, be the victim of a drive-by download, when visiting a web site. This software also downloads additional malicious components. Frequently one or more of these components is already detected as malicious. The malware developer needs at least one of these components to be successful. The malicious person can be detected if at least one of these components is detected, and at least one component often is detected.

Unfortunately, it is generally thought that if anti-virus software has detected a threat, then it is sufficiently addressed. This enables the malicious person to try as many threats as they wish.

  • Undetected threats work.
  • Detected threats are ignored.

If enough people follow up on enough of these detected threats, then submit samples to anti-virus vendors and report malicious sites found, we can make malware development less profitable and less attractive.

There’s a mystique to finding malicious files, a belief that you need special skills. That’s not true. There’s a belief that it is the job of the anti-virus vendor to both find the malicious files and to develop protection. How is the vendor supposed to find the files?

Abandon those misconceptions. You can be informed about various attacks before you read about them, if you just look.

A simple measure would be “web browser forensics.” What was downloaded at the same time as the detected file? These files would be suspect.
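
One rough way to ask that question is to take the timestamp of the detected file and list everything else written near that time. The sketch below uses placeholder paths and a ten-minute window; adjust both for your environment.

# Sketch: list files modified within ten minutes of the detected file's timestamp.
import os
import time

DETECTED = r"C:\Users\someone\AppData\Local\Temp\detected.exe"    # placeholder
SEARCH_DIRS = [r"C:\Users\someone\AppData\Local\Temp"]            # placeholder folders
WINDOW = 10 * 60                                                  # seconds either side

anchor = os.path.getmtime(DETECTED)
for root in SEARCH_DIRS:
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue
            if abs(mtime - anchor) <= WINDOW:
                print(time.ctime(mtime), path)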

Upload the suspicious file to VirusTotal. You may find that other anti-virus vendors already consider the file to be suspicious. Give your vendor the file.

MalwareSigs Helping Network Analysts Detect Malware

Redline is Mandiant’s free tool for investigating hosts for signs of malicious activity through memory and file analysis, and subsequently developing a threat assessment profile.

Malware wants persistence, so review Windows registry locations that malware may use to ensure it gets run. That would include the usual Autoruns locations, such as:
HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
HKCR\exefile\shell\open\command
HKLM\SOFTWARE\Classes\exefile\shell\open\command
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows
AppInit_DLLs=
HKLM\SYSTEM\CurrentControlSet\Services

There are also less frequently investigated registry entries. For example, under
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options
a Debugger= value can point to a malicious payload.
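
A quick script can dump the values under a few of these keys for review. The sketch below covers only a handful of the locations listed above and is intended as a starting point, not a complete Autoruns replacement.

# Sketch: print the values under a few common autorun keys for manual review (Windows only).
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"),
]

for hive, path in RUN_KEYS:
    try:
        with winreg.OpenKey(hive, path) as key:
            for i in range(winreg.QueryInfoKey(key)[1]):   # number of values in the key
                name, data, _ = winreg.EnumValue(key, i)
                print(f"{path}  {name} = {data}")
    except OSError:
        pass                                               # key may not exist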

Parse user hives, not only the current user.

Identify suspicious keys in Registry Hives, such as long entries and anything that would point to signs of malware presence/persistence.

Eric Zimmerman’s Registry Explorer can make malware review simple. Set the filter for keys over a certain size. There are a few normal Windows keys that are large, but your results will be very small if you set the filter to 1024.

get-childitem registry::hkcu\Software\Microsoft\Windows\CurrentVersion -recurse | get-item | select property

Pipe that to a text file and you’ll get a recursive list of every value in the CurrentVersion area. Delete everything back to HKCU (or replace it with HKLM) to get a full dump. One caveat is that it will pair everything in the output file to the same brackets if you redirect (>) it, but the large, typically Base64 code that malware stores will stick out. You can also do this with offline hives.

From the registry dump, a Python script can read the lines of a text file. Something like

# Read each line of the exported registry dump into a list for inspection.
lines = []
with open("pathtofile.txt") as f:
    for item in f.read().split("\n"):
        lines.append(item)

and then loop over the list, checking the length of each item. But the main part can be done by PowerShell (and could probably be done at scale in an enterprise, now that I think about it).

Assemblyline is a malware detection and analysis tool developed by Canada’s Communications Security Establishment (CSE) and released to the cybersecurity community in October 2017. Assemblyline is designed to assist cyber defense teams to automate the analysis of files and to better use the time of security analysts. The tool recognizes when a large volume of files is received within the system, and can automatically rebalance its workload. Users can add their own analytics, such as antivirus products or custom-built software, into Assemblyline. The tool is designed to be customized by the user and provides a robust interface for security analysts.

A Complementary Measure: OTL by OldTimer
OTL by OldTimer presents system information, processes, modules, services, drivers, Internet Explorer extensions, Firefox extensions, browser helper objects (BHO), run keys and recently modified files. Your task is to find the anomalous entries and files and forward them to your vendor for review.

An even simpler measure: What’s new in System32? Sort by Date Modified, and see what’s at the top (or bottom) of the list. This will miss a lot of malware, but will discover suspicious files with very little training. The scenario is: your antivirus found something, but did it find everything? By looking for a dll (or exe) file with a recent (perhaps today’s) date, you have located a suspicious file. Similarly, find what’s new in the Hidden Files areas (user’s temporary files, C:\Windows\Downloaded Program Files).
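
The same check can be scripted. A small sketch (the path is a placeholder and the cutoff of twenty files is arbitrary) that lists the most recently modified files in System32:

# Sketch: show the most recently modified files in System32.
import os
import time

SYSTEM32 = r"C:\Windows\System32"          # placeholder path
entries = []
for name in os.listdir(SYSTEM32):
    path = os.path.join(SYSTEM32, name)
    try:
        if os.path.isfile(path):
            entries.append((os.path.getmtime(path), name))
    except OSError:
        continue

for mtime, name in sorted(entries, reverse=True)[:20]:   # twenty newest files
    print(time.ctime(mtime), name)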

Another Simple Measure: Mandiant Red Curtain.
MRC examines executable files (not only .exe and .dll files, but many more) looking at entropy (randomness), indications of packing, compiler and packing signatures, the presence of digital signatures, and other characteristics to generate a threat “score.”  Sort the result by “Score” and review the files with a high score. Use the built-in Help feature for an explanation of what MRC found.
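
MRC’s scoring is its own, but the entropy portion of the idea is easy to approximate. The sketch below computes the Shannon entropy of a file’s bytes; values approaching 8 bits per byte suggest packed or encrypted content and deserve a closer look.

# Sketch: Shannon entropy of a file's bytes; near 8.0 bits per byte suggests packing or encryption.
# Usage: python entropy.py suspicious_file
import math
import sys
from collections import Counter

data = open(sys.argv[1], "rb").read()
if not data:
    sys.exit("empty file")

counts = Counter(data)
entropy = -sum((n / len(data)) * math.log2(n / len(data)) for n in counts.values())
print(f"{entropy:.2f} bits per byte for {sys.argv[1]}")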

Another Simple Measure: Windows File Analyzer
The Windows PreFetch Folder contains information about programs that have been running. If malicious software has been installed, it is probably listed in the Windows\PreFetch folder. This narrows the number of suspected programs considerably.


Do Cell Phones Get Viruses?

July 6, 2009

It is a disturbing thought. Could cell phones be an agent for distributing malware? Can a cell phone get a virus when it receives a call? Should you be purchasing anti-virus software for your cell phone?

More importantly, you should be concerned about physical security. The risks from a lost, stolen or tampered-with phone are more likely than an over-the-air threat. Don’t leave your phone unattended. Lock your phone. Don’t leave sensitive information on your phone.

A recent issue of Science (22 May 2009: Vol. 324. no. 5930) contained two relevant articles: Phone Infections by Shlomo Havlin (pp. 1023 – 1024) and Understanding the Spreading Patterns of Mobile Phone Viruses by Pu Wang, Marta C. González, César A. Hidalgo, and Albert-László Barabási (pp. 1071 – 1076). They recognize that there are cell phone viruses. These malware instances are not prolific; they rely upon the Bluetooth communications feature within many cell phones. If a Bluetooth device is within range, the virus asks the user of the target cell phone if they would like to install a new application. This is the classic Trojan Horse technique; you can expect a high number of users to reply with a “yes,” and thereby run the malware. The authors indicate that the limited range of Bluetooth (10 meters, about 32 feet) keeps these viruses in check. However, that’s the specification limit. 10 meters is the expected and supported range. Given an appropriate antenna, the range of Bluetooth approaches a mile (according to Chris Roberts, CEO & Founder, Cyopsis (Interface 2009)).

Bluetooth passwords are up to four digits, making them easy to guess. They are often not changed from defaults.

Continuing my summary of the articles: cell phones are computing devices. If a vulnerability is found in a data handling mechanism (a music player or image viewer, for example), and that vulnerability can be exploited to execute arbitrary code, then data (music or images) can be used to install malware. For example, if an exploitable vulnerability in the picture viewer was found, then an image attached to Short Message Service (SMS) text or email can be used to propagate malware. The authors then speculate about the propagation rate of this malware.

The National Institute of Standards and Technology (NIST) has a Special Publication (SP800-124) titled Guidelines on Cell Phone and PDA Security. In summary: a fine synopsis of technologies, threats and recommendations. Recommendations are the following User Oriented Measures and Organizational-Oriented Measures:

User Oriented Measures

  1. Maintain Physical Control
  2. Enable User Authentication
  3. Backup Data
  4. Reduce Data Exposure. Avoid keeping sensitive information, such as personal and financial account information, on a handheld device.
  5. Shun Questionable Actions. Don’t trust messages.
  6. Curb Wireless Interfaces. Turn off Bluetooth, Wi-Fi, infrared, GPRS, Edge until they are needed.
  7. Deactivate Compromised Devices. Disable service. Remote lock. Remote wipe.
  8. Minimize Functionality. In addition to “curb wireless interfaces”, are there other vectors you don’t need? Add-on applications? Plug-ins? Have your provider block SMS that originated from the Internet (since it is largely SPAM).
  9. Add Prevention and Detection Software. Stand-alone, consider: authentication alternatives, encryption, firewall, anti-virus, intrusion detection, antispam, remote erasure, VPN

Organizations, consider a long list of device management possibilities

Organizational-Oriented Measures

  1. Establish a Mobile Device Security Policy
  2. Prepare Deployment and Operational Plans
  3. Perform Risk Assessment and Management
  4. Instill Security Awareness through Training
  5. Perform Configuration Control and Management

The most common SMS attack is the social engineering attack. A text message from a purportedly trusted source (such as a bank) prompts the user to (1) call a phone number and reveal private information or (2) connect to a web site and reveal private information or (3) connect to a web site and install software (which will reveal private information).

June 2005: (Arbitrary start date) Trojan Horse programs for Symbian OS. See SymbOS.Romride.A, SymbOS.Commdropper, SymbOS.Doomboot.a (Symantec) or Romride.A, Cabir (F-Secure).  SymbOS.Commwarrior (Symantec) is a Trojan Horse that will send MMS messages with a copy of itself and will copy itself through exposed Bluetooth connections. Symantec categorizes this Trojan Horse as a worm because these measures have a high probability of success.

The scenario:

  1. The Trojan Horse arrives as an SIS file. Mechanisms that could be used: installed locally (physical access), received MMS message with MIME attachment of type application/vnd.symbian.install, copied through exposed (discoverable, visibility is not “hidden”) Bluetooth connection, downloaded from website.
  2. User is induced to open file. Various fraudulent presentations (social engineering) such as “cracked version of …” “important security update” are used.
  3. When the user opens this file, the phone installer application displays a dialog box to warn the users that the application may be coming from an untrusted source and may cause potential problems. There are many reasons why a user would ignore this warning: they really want the cracked version of the game or they’ve seen the warning before and ignored it and never seen a problem (“it always does that”).
  4. The user is again asked if they want to install the program. It always does that.
  5. Payload is installed, which may mean contacts or other information is disclosed. Anything that the phone could do.

November 2008: Remote SMS/MMS Denial of Service – “Curse Of Silence” for Nokia S60 phones announced by Tobias Engel

Until a firmware fix is available, network operators should filter messages with TP-PID “Internet Electronic Mail” and an email address of more than 32 characters or reset the TP-PID of these messages to 0.

Secunia indicates no fix is available in Nokia Phones SMS Denial of Service Vulnerability (SA33359).

April 2009: Hacking a Smartphone, stealing data from a Microsoft Windows Mobile operating system device, is demonstrated by Trust Digital in a CSO Online article “3 Simple Steps to Hack a Smartphone“. One attack relied upon the ability to use an SMS message to open a web browser session. If scripting is enabled and a person or device can be navigated to a maliciously crafted web site, then the machine is pwned: data can be stolen or software installed.

The ability to run a program, such as Internet Explorer, by sending an SMS message would be a significant vulnerability. This would be news. The presentation assumes that such a vulnerability exists.

Another attack relied upon the ability to remotely reconfigure the device, reducing its security posture or destroying information.

April 6, 2009 Security UK columnist Ken Munro writes that the iPhone is not ready “for a secure mobile email environment.” Summary: no encryption, no remote wipe, access points easy to spoof. Blackberry and even Windows Mobile fared better.

June, July 2009 SMS vulnerability on iPhone disrupts usage, could lead to arbitrary code execution, see Apple iPhone OS 3.0.1 SMS Vulnerability Debriefing.

An additional story about a false sense of security with the iPhone is iPhone Security: A Complete Misnomer. Summary: access to a secure iPhone is easy to bypass and remote wipe is unreliable.

July 2009 A New Symbian S60 Worm Variant Spreading in the Wild as reported by cell phone anti-virus vendor NetQin. The threat requires installing a spurious version of the Symbian security application called “Advanced Device Locks.” Once installed, it propagates by sending text messages with a link to a copy of the malicious software.

July 2009 A new worm and botnet for the Symbian OS is discovered as reported by cell phone anti-virus vendors F-Secure and Trend Micro. The threat requires installing a spurious Sexy View application that Symbian has signed. A worm because it sends text messages with a link to a copy of the malicious software. A botnet because it can download new SMS templates, which can be used to generate other text messages.

July 2009 UAE cellular carrier Etisalat rolled out spyware as a 3G “update.” Service provider Etisalat sent an SMS message advising Blackberry users to install a software update (“Registration”). The software update included spyware. See RIM’s Blackberry security page.

July 2009 Additional presentations at Black Hat USA 2009: Attacking SMS presentation by Zane Lackey and Luis Miras; Fuzzit: A Mobile Fuzzing Tool by Kevin Mahaffey, Anthony Lineberry and John Hering.

October 2009 CNN publishes article Smartphone security threats likely to rise. No specific threat looming.

November 2009: Some Jailbroken Apple iPhones receive worm. This is an actual worm, since an iPhone user who chose to jailbreak their device created an SSH server with a default user name and password, and never changed the password. The phones could be discovered via the Universal Mobile Telecommunication System (UMTS) network, then accessed and malware installed. The malware repeats this cycle, looking for more phones.

November 22, 2009: A second iPhone worm, which also relies upon users to Jailbreak their phones and not change the password, is reported by F-Secure. In addition to the previous worm’s characteristics, this worm steals information and changes the password to “ohshit”.

This is much like the scenario described in Science, but did not reach their distribution estimates.

December 1, 2009: RIM acknowledges vulnerability which could lead to executing arbitrary code on the Blackberry Enterprise Server (BES) when a maliciously crafted PDF is opened on a Blackberry handheld device (KB19860). BES customers (service providers such as corporations with Microsoft Exchange or Lotus Notes) should patch their servers; there was no client update.

Note that RIM also offers Blackberry Enterprise Server (BES) Express:

Designed for small and large businesses that have an on-premises mail server, BlackBerry Enterprise Server Express is a low-cost and secure option for businesses that want to connect both corporate-liable and individual-liable BlackBerry smartphones to company email, calendars and business applications.

December 4, 2009: ChrisJohnRiley posts information about what the iPhone configuration tool reveals. A .mobileconfig file can be exported. This is an XML file in which the passcode can be found in Base64 encoding; Base64 is not encryption and offers only very rudimentary protection. Persons who would use the iPhone configuration tool are corporate accounts who would export the configuration as a backup measure. Unfortunately, many corporate accounts are unaware of the information within the .mobileconfig file and publish it on the Internet; search for “filetype:mobileconfig”.
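
To be clear about how little protection Base64 offers, decoding it is a one-liner. The value below is a made-up example, not a passcode from a real .mobileconfig file.

# Sketch: Base64 is an encoding, not encryption; anyone can reverse it.
import base64

encoded = "MTIzNA=="                          # made-up example value
print(base64.b64decode(encoded).decode())     # prints: 1234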

December 27, 2009: Chris Paget and Karsten Nohl report about the weak nature of GSM security at the 26th Chaos Communication Congress. That is, with about $1,500 of hardware, open source software and pre-computed tables, whatever information is passed through GSM can be discovered.

December 30, 2009: Cordless phones based upon Digital Enhanced Cordless Telecommunication (DECT) also have flaws in the way encryption was implemented, as explained at the 26th Chaos Communication Congress. The long-term fix will be to replace cordless phones with models whose firmware has fixes for these flaws (and have upgradable firmware). The short-term fix is to avoid long silences and keep conversations short.

February 4, 2010 Elcomsoft iPhone Password Breaker enables forensic access to password-protected backups for iPhone 2G, 3G, 3GS, and iPod Touch 1st, 2nd, and 3rd Gen devices. Featuring the company’s patent-pending GPU acceleration technology, Elcomsoft iPhone Password Breaker is the first GPU-accelerated iPhone/iPod password recovery tool on the market. The new tool recovers the original plain-text password that protects encrypted backups containing address books, call logs, SMS archives, calendars, camera snapshots, voice mail and email account settings, applications, Web browsing history and cache.

Similarly, see iPhorensic from EviGator.

February 7, 2010: Veracode demonstrates proof of concept spyware for the Blackberry (TXSBBSpy).

it should be noted that while we chose BlackBerry for our proof-of-concept, this is not just a BlackBerry problem. All mobile platforms provide similar mechanisms for writing applications that have access to the user’s personal, potentially sensitive information. As consumers become increasingly dependent on their mobile devices, we are certain to see an uptick in the volume and sophistication of mobile malware.

February 12, 2010: OpenCORE 2.0 or earlier vulnerability disclosed at Shmoocon (oCERT #2009-002, CVE-2009-0475). Since OpenCORE is the multimedia rendering system used by Android, this counts as an Android vulnerability. Browsing to a malicious web site with a maliciously crafted MP3 could install arbitrary code on the Android device. Fixed in 8815. How would T-Mobile get the patch to customers?

March 8, 2010: Vodafone distributes Mariposa botnet. A Panda Security researcher reports that an associate received an HTC Magic with Google’s Android OS and found (Windows OS) malware on the memory card. The malware would attempt to install itself if Autorun was enabled. That is, the HTC Magic’s memory card is as vulnerable as any USB stick. As Lee Whitfield points out, this reflects a Quality Assurance issue at Vodafone or HTC (or another agent in the supply channel). At least two HTC Magic devices with infected memory cards have been identified.

April 2, 2010: Chinese government officials report MMS Bomber (a variant of the Worm.SymbOS.Yxe). This worm spreads through URLs in SMS messages to phones with the Symbian S60 3rd Edition operating system. If the application the URL points to is installed, data from the mobile phone is sent to a server, SMS messages are sent to numbers in the phone’s directory and the phone’s system management software is modified to prevent removal of the worm. Reinstall the operating system.

April 10, 2010: Installing a pirated version of “3D Anti-terrorist action” makes Windows Mobile phones place international calls (Troj/Terdial-A). See Windows Mobile Terdial Trojan makes expensive phone calls.

June 1, 2010: Samsung distributed a malware program called slmvsrv.exe on the 1GB microSD memory card shipped with the new bada-powered Samsung S8500 Wave smartphone. This Windows-based application, known as Win32/Heur, appears with an Autorun.inf file in the root of the memory card and will install itself when it is inserted into any Windows PC that has the autorun feature enabled.

June 4, 2010: Romanian Directorate for Investigating Organized Crime and Terrorism (DIICOT) arrests 50 individuals for using smartphone spyware (probably FlexiSPY) for eavesdropping.

June 25, 2010: Jon Oberheide reports Google used REMOVE_ASSET and INSTALL_ASSET, remotely managing applications on his Android-based phone.

August 11, 2010: Apple releases iOS 4.0.2 (for the iPhone) and iOS 3.2.2 (for the iPad), fixing a PDF rendering bug. Rendering of a maliciously crafted PDF could install arbitrary code.

August 17, 2010: Android game Tap Snake is a Trojan horse, a GPS Spy client in disguise (according to F-Secure).

September 9, 2010: Kaspersky reports another Android Trojan horse, whose payload sends SMS messages to premium sites without user intervention. The malware spreads through black hat search engine optimization (BHSEO) techniques, much like bogus anti-malware software propagation.

September 30, 2010: According to a study by pskl.us blogger Eric Smith, a number of free iOS apps send private user data back to their application developers. Smith examined a total of 57 free news, shopping, business and finance applications, including the top 25 free apps from the US iTunes App Store. He found that 68% of the applications tested transmitted the software-readable unique device identifier also known as UDID each time the application was launched. The data was transmitted to servers controlled by the relevant application vendor. A further 18% of apps transmitted encrypted data, meaning that there is no easy way of knowing what data they are forwarding to the vendor. (See further investigation at “iPhone Privacy: What about the SSL apps?“.) According to Smith’s analysis, just 14% of applications are clean. Smith notes that, where the user name for a user account is also known, the UDID allows many applications to draw conclusions about the identity of the iPhone user. As an example, he cites the Amazon app, which stores the phone’s serial number on mail order company Amazon’s servers. The full text of the study, entitled “iPhone Applications & Privacy Issues an Analysis of Application Transmission of iPhone Unique Device Identifiers” is available online [pdf]. The list of apps tested can be found in Appendix A on page 16.

October 2010: TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones [pdf] Abstract: Today’s smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 14% performance overhead on a CPU-bound micro-benchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of potential misuse of users’ private information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.

November 2010: Android 2.0-2.1 Reverse Shell Exploit: flaws in WebKit appear as flaws in Android. This flaw could be used to install arbitrary code.

November 2010: Insecure Handling of URL Schemes in Apple’s iOS could be used to initiate connections to web sites.

December 27, 2010: SMS-o-Death (Collin Mulliner, Nico Golde) at the 27th Chaos Communication Congress describes how flaws in SMS and MMS message processing can be exploited to interrupt phone calls, to disconnect people from the network, and even brick phones remotely.

December 29, 2010: Running your own GSM stack on a phone (Introducing Project OsmocomBB) at the 27th Chaos Communication Congress describes how a custom GSM stack can be used to intercept GSM communications and decrypt them. See review at Gizmodo.

December 30, 2010: The Android trojan Geinimi, which may be bundled with applications (such as Monkey Jump 2, Sex Positions, President vs. Aliens, City Defense and Baseball Superstars 2010) from untrustworthy sources, can collect personal information.

April 2011: DroidDream trojan found bundled with many Android applications.

April 19, 2011: Microsoft announces Windows Mobile 6.x, Windows Phone 7, Microsoft Kin, and Zune devices are vulnerable to spoofing using revoked certificates. Microsoft Security Advisory (2524375)

May 3, 2011: Microsoft announces availability of security patch for Windows Phone 7 to modify certificate revocation. Microsoft Security Advisory (2524375)

May 13, 2011: Unencrypted authentication tokens allow access to Google applications from Android devices, as demonstrated by researchers at the University of Ulm.

February 13, 2012: Android.Bmaster is Android malware discovered on a third party marketplace (not the Android Market) and bundled with a legitimate application for configuring phone settings.

Summary

How concerned should you be? It helps to focus upon the mobile device aspects rather than the phone features. Access control, patch (or upgrade), firewall (drop unnecessary traffic), and virus protection.

As with other computing devices, if a device vulnerability is discovered, the preferred mitigation remedy would be to patch the vulnerability. There are over-the-air (OTA) update mechanisms available. (How secure are these mechanisms?)

Alternately, your cell phone provider may be able to update your firmware without putting it on the telephone network. Alternately, your cell phone provider may be able to filter and remove malformed traffic.

If you relied upon anti-virus software to mitigate the vulnerability, you need to address how the pattern file updates are installed. You would need to wait for the pattern file update to detect the specific exploit of the vulnerability. Obfuscated variants of the current, specific exploit will be missed by the anti-virus software. You want the vulnerability patched. You would purchase anti-virus software betting that the pattern file update would protect you until the firmware update was deployed.

The articles in Science speculate about the rate infections could spread across cell phones. Perhaps the network would become unusable. Firmware updates would be required. A targeted threat, on the other hand, would exploit the previously unknown vulnerability on a small number of phones. A firmware update would not appear until the vendor was aware of the issue. An anti-virus pattern update would not appear for the same reason.

There are additional (non-virus) malware scenarios that may tempt you to purchase anti-virus software.

  • Cell phones are computing devices. If you install software from untrustworthy sources, you can be installing malware. Ask yourself what risk you are willing to adopt by installing Elf Bowling.
  • Software (such as FlexiSpy, Neocall or Mobile Spy) can record your text messages. This is similar to a keystroke logger. With the software installed, when Short Message Service (SMS) text messages are sent, they are also sent to a web site. If you have no data plan, this mechanism will fail or incur charges. This mechanism requires local access, physical access, to the telephone; it cannot be installed remotely. Often, anti-virus software does not detect these packages because they must be deliberately installed. They are marketed to the parent who wants to monitor their child’s text messages, and the spouse who monitors their mate’s messages.
  • The Blackberry application PhoneSnoop can be used to eavesdrop on calls made from the victimized Blackberry. PhoneSnoop can be installed through physical access to the phone or convincing someone to install the application. Should anti-virus software detect such applications as threats?
  • The Palm Treo also has applications which can record calls.
  • Skype for Android leaks sensitive data blog post at Sophos, reviewing a beta version of Skype for Android. Skype leaves private information unencrypted and available to other applications to collect and transmit. Skype claims that this will be fixed in the released version and warns people about installing software on their phones. Nonetheless, there is some doubt that software is tested for information disclosure vulnerabilities.
  • Cell phones are also data storage devices. As argued in Semantic Difficulty: Do Macs / Linux Get Windows Viruses?, a Windows virus can be copied to the cell phone. It does not run on this device, but it does consume space and it can be copied from the cell phone to a Windows machine.

The articles in Science argued that we haven’t seen such activity because there is limited operating system commonality. There should be an exploitable vulnerability in data handling; we don’t see the malware because no single operating system has achieved broad acceptance.

I don’t think acceptance needs to be any broader. It could be broader, it will get broader, but cell phones are prolific enough. I think they’ve let the disease analogy distort the problem. Malware does not need to be prolific to be malware. (Are prolific viruses the norm or the anomaly in microbiology? I believe prolific viruses are rare, but they get all the attention.)

The bad news: cell phone malware is already here.

The good news: you don’t need to invest in anti-virus software. Physical security is your best protection.

Caveat: Regulations may override these considerations. For example, if you are processing credit card payments with your cell phone, PCI regulations may require anti-virus software.

See also: mobileburn.com, Mobile Active Defense blog


Rant: Terminology, Semantics, Pedantic Rant

July 6, 2009

“Unusual and new-coined words are, doubtless, an evil; but vagueness, confusion, and imperfect conveyance of our thoughts, are a far greater,” wrote English poet Samuel Taylor Coleridge in Biographia Literaria, 1817.

Introduction to Formal Semantics: “Generally, the term ‘Formal semantics’ may refer to 3 uses, it could be used in computer sciences, in logic, or in linguistics. This course talks about its linguistic sense, that is, it’s a branch of Semantics which aims to explain and reason the meanings of language with precise mathematical models. It is really useful for quantitative linguistic research, and natural language understanding/processing.”

For a field that must pay attention to semantics when working with computers (such as “SNMP … that’s port 161, right?”), Information Technology (IT) uses technical terms carelessly. Granted, in ordinary language, terms change their meaning. Sir Thomas More’s Utopia was not a vision of an ideal community; now “ideal community” is what “utopia” means. An “Epicure” once sought tranquility, and tranquility would require modest pleasure; now an Epicure is a gourmet, seeking pleasure. These changes occurred over centuries. When technical terms change meaning within a decade, technical terms approach uselessness.

Language is not how the world was wired. Language is used to describe and denote, to convey and coalesce. Terms should not be used to define a problem (to delineate), but terms should be used to give a problem definition (to describe).

Advanced Persistent Threat (APT): See Michael S. Mimoso’s Beware the APT Hype Machine in Information Security Magazine’s Essential Guide to Threat Management [pdf]. APT describes the motives of the attacker, not a different kind of attack. Marketing, however, likes terms they can use for product differentiation. Expect products to be marketed with APT capabilities or as APT defenses, even though this characteristic is either trivially true or patently false.

“Advanced” describes the background, preparation and planning that the attacker employs. The techniques are not new.

I received an email whose subject was “Study finds lack of defenses against advanced persistent threats”. On the face of it, that would be a silly study, akin to “study finds damp things are moist.” There is no defense against the motivated, persistent attacker. That’s why you have detective mechanisms and procedures (or “controls”) to determine whether an attack is underway or has occurred. You have preventative mechanisms, you understand that they are not perfectly reliable, so you implement detective mechanisms and procedures.

The email body referred to the article More firms targeted by advanced persistent threats, study finds by Robert Westervelt. He summarizes a Ponemon Institute study (funded by network security monitoring vendor NetWitness Corp).

Those survey (sic) also found a rising level of fear that organizations are not prepared to prevent APTs. About half of those surveyed said security-enabling technologies are not adequate and 64% report their security personnel were not up to dealing with the threat. The survey supports previous warnings from security experts who say perimeter defenses are inadequate against APTs.

It is easy to slip into thinking of APTs as a type of threat instead of recognizing them as a weakness in defenses. Of course perimeter defenses are inadequate; if they were adequate there would be no attack vector for the motivated person or organization to exploit, and hence no APT.

When you hear that an organization was a victim of an advanced persistent threat attack, you should recognize that no single vulnerability was exploited. Instead, multiple vulnerabilities were exploited. No single preventative measure failed; multiple preventative measures failed.

Identity and Access Management (IAM): Idan Shoham (CTO at Hitachi ID Systems) has his own ax to grind about IAM – would it be more correctly referred to as “Entitlement Administration and Governance (EAG)”?

Layer 4 Switch: What are they getting at?

IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so called Layer-4 switching.

PCMag layer 4 switch: A network device that integrates routing and switching by forwarding traffic at layer 2 speed using layer 4 and layer 3 information.

thenetworkencyclopedia: Vendors tout Layer 4 switches as being able to use TCP information for prioritizing traffic by application. For example, to prioritize Hypertext Transfer Protocol (HTTP) traffic, a Layer 4 switch would give priority to packets whose layer 4 (TCP) information includes TCP port number 80, the standard port number for HTTP communication.

horms.net: Layer 4 switching is a term that has almost as many meanings as it has people using the term. In the context of this paper it refers to the ability to multiplex connections received from end-users to back-end servers.
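To make the horms.net sense concrete, here is a minimal sketch of connection multiplexing on layer-4 information alone: the listening TCP port selects a back-end pool, and the payload is relayed untouched. The addresses, ports and round-robin policy are hypothetical, and a real layer 4 switch does this in hardware or in the kernel (as IPVS does) rather than in user space.

```python
import socket
import threading

# Hypothetical back-end pools, keyed by the TCP port the client connected to.
# This is the only "layer 4" information consulted; the payload is never inspected.
BACKENDS = {
    80: [("10.0.0.11", 8080), ("10.0.0.12", 8080)],   # HTTP pool
    25: [("10.0.0.21", 2525)],                        # SMTP pool
}

def relay(src, dst):
    """Copy bytes one way until either side closes, then shut both down."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve(listen_port):
    """Accept clients on listen_port and multiplex them across its back-end pool."""
    pool, rr = BACKENDS[listen_port], 0
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", listen_port))
    listener.listen(64)
    while True:
        client, _ = listener.accept()
        backend = socket.create_connection(pool[rr % len(pool)])
        rr += 1
        threading.Thread(target=relay, args=(client, backend), daemon=True).start()
        threading.Thread(target=relay, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    serve(80)  # binding port 80 requires privileges; use a high port to experiment
```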

iOS, IOS: It is unfortunate that Apple iOS and Cisco IOS are both operating systems. It is not the first time such a name collision has occurred; RCA (later acquired by Sperry) had a product named DOS before Microsoft, and “dos” is the generic term for a disk operating system.

EDT, EST, PDT, PST and so forth: I received a “Polite reminder” about an Upcoming Webcast: One week away – November 18th, 2009 at 2:00 PM EDT (1800 GMT). EDT in November? That letter in the middle of EDT means something. The East coast of the United States will be observing Standard time on November 18. I realize that you mean “EST,” not EDT. However, if I drop that text on my calendar and allow it to convert to Pacific time, it will take the literal Eastern Daylight Time and convert it to Pacific Standard time. My calendar entry will be an hour off.

That letter in the middle of EDT means something. Semantics is important. Just say “Eastern” or “ET” and avoid such problems.
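A minimal sketch of the one-hour error, using Python’s zoneinfo module (Python 3.9 or later) and the webcast time from that reminder. Taking the announced “EDT” literally places the event an hour earlier on a Pacific calendar than the intended “EST” time would.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# "2:00 PM EDT" taken literally is UTC-4; the intended "2:00 PM EST" is UTC-5.
literal_edt = datetime(2009, 11, 18, 14, 0, tzinfo=timezone(timedelta(hours=-4), "EDT"))
intended_est = datetime(2009, 11, 18, 14, 0, tzinfo=timezone(timedelta(hours=-5), "EST"))

pacific = ZoneInfo("America/Los_Angeles")
print(literal_edt.astimezone(pacific))    # 2009-11-18 10:00:00-08:00
print(intended_est.astimezone(pacific))   # 2009-11-18 11:00:00-08:00 -- one hour later
```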

Hacker: From “self-taught programmer” to “person who figures things out” to “system invader” to “information thief” to “attacker,” you cannot use the term “hacker” without explaining what you mean. When you have to explain the term, the term has lost its informative value, its denotation. It can still be useful for invective, hyperbole or other emphasis; that is, the term still has connotation. If you must clarify the word whenever you use the word, avoid the word.

RFC 2828 (Glossary) defines “hacker” as:

(I, RECOMMENDED Internet definition) Someone with a strong interest in computers, who enjoys learning about them and experimenting with them. (See: cracker.)

(C, commentary or additional usage guidance) The recommended definition is the original meaning of the term (circa 1960), which then had a neutral or positive connotation of “someone who figures things out and makes something cool happen”. Today, the term is frequently misused, especially by journalists, to have the pejorative meaning of cracker.

For further clarification and characterization, you may wish to consult The New Hacker’s Dictionary.

Penetration test: A penetration test begins with a contract. The contract avoids the misunderstanding and liability that arise from disagreement about what a penetration test entails: what the delivered product (or “deliverable”) should be, and what the process and result can look like.

PCI DSS distinguishes between vulnerability assessment and penetration testing [pdf].

A vulnerability assessment simply identifies and reports noted vulnerabilities, whereas a penetration test attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible. Penetration testing should include network and application layer testing as well as controls and processes around the networks and applications, and should occur from both outside the network trying to come in (external testing) and from inside the network.

What purpose does that word (“simply”) serve? Dismissing the role of vulnerability assessment does a disservice to risk analysis and mitigation. Vulnerability assessment combined with vulnerability mitigation is more rigorous, easier to document and frequently less expensive than what is being referred to as penetration testing.

Securosis, L.L.C. [pdf] distinguishes among

  • general vulnerability assessment (characterized by port scans, service identification including version, configuration details),
  • web application vulnerability assessment (characterized by web application attack methods, such as cross-site scripting and cross-site request forgeries), and
  • penetration testing (characterized by verifying that a detected vulnerability presents an actual risk).

Let’s distinguish between vulnerability scans and vulnerability assessments. A vulnerability scan can be conducted by scanning for library components (some sort of signature) and associating those components with reported security vulnerabilities. Following the scan, you perform an assessment of the findings. What risks exist? What mitigation measures are available? Where indicated, implement mitigation measures. That is a vulnerability assessment and mitigation plan. The vulnerability scan consists of definable tasks which can be implemented as a product; a vulnerability assessment requires interpretation. A more robust vulnerability assessment will look beyond vulnerability scans and recognize that there are other sources of risk.
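A minimal sketch of that distinction, with an entirely hypothetical inventory and advisory list: the scan is the definable, automatable matching step; everything after the print statement is the assessment.

```python
# A vulnerability *scan*: mechanically match component signatures against
# reported vulnerabilities. The *assessment* -- deciding what risk a finding
# represents and what mitigation is warranted -- remains interpretive work.

# Hypothetical inventory and advisory identifiers, for illustration only.
installed = {
    "example-ssl-lib": "1.0.2",
    "example-web-server": "2.2.11",
}

advisories = {
    ("example-ssl-lib", "1.0.2"): ["ADVISORY-2009-001"],
    ("example-web-server", "2.2.11"): ["ADVISORY-2009-017"],
}

def scan(inventory, known_issues):
    """The definable task: component signature -> reported advisories."""
    return {component: known_issues[(component, version)]
            for component, version in inventory.items()
            if (component, version) in known_issues}

for component, findings in scan(installed, advisories).items():
    # The scan stops here. The assessment asks: is the vulnerable code path
    # reachable, what mitigations exist, and is a patch or workaround indicated?
    print(component, "->", ", ".join(findings))
```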

To perform a penetration test, one must identify a vulnerability and then attempt to exploit it in a financially compromising manner.

The distinction between “vulnerability assessment” and “penetration test” is largely a marketing distinction. More rigorous assessments with less definable tasks should supplement these standardized and mechanical reviews. What name should be given to these more rigorous vulnerability assessments? Since less rigorous vulnerability scans are already marketed as vulnerability assessment tools, the term “penetration test” has been adopted for the broad category of “more rigorous vulnerability assessment.”

This returns us to the contract. What is fair game in the penetration test? The PCI Security Standards Council seems to indicate “everything.” This vagueness in contractual terms invites a challenge. No executive should be willing to accept that interpretation. When having a penetration test conducted, or when conducting a penetration test, get the constraints in writing.

Consider the case of a cloud service provider, particularly where many companies share the cloud service. Should each company conduct penetration tests against the cloud service? Does that penetration test include compromising the physical security of the shared site? What would client companies accept as reasonable actions from other client companies? In other words, what would such a penetration test look like? What is a penetration test?

Worm, Virus, Trojan horse: The distinction between worm and virus has disappeared, even amongst experts. “Worm” used to be reserved for self-proliferating malware; malware that required no human intervention to propagate. SANS and RFC 2828 define “worm” as:

A computer program that can run independently, can propagate a complete working version of itself onto other hosts on a network, and may consume computer resources destructively.

A running instance of Blaster, for example, would seek other Windows machines running the DCOM RPC service and (if not patched) install a copy of itself and start it.

Lately, “worm” has been expanded to include malware that propagates with minimal human intervention. Conficker, for example, leaves a copy of itself on unprotected network shares and on USB drives (“thumb drives”). Trivial human intervention (such as a double-click or selecting an Autorun option) is required to infect a machine with Conficker, and Microsoft considers this a “worm”.

Meanwhile, “virus” has been expanded to refer to malware which needs little or no human intervention to propagate. In general parlance, the distinctions among worm, virus and Trojan horse have disappeared.

The SANS and RFC 2828 definition of virus:

A hidden, self-replicating section of computer software, usually malicious logic, that propagates by infecting — i.e., inserting a copy of itself into and becoming part of — another program. A virus cannot run by itself; it requires that its host program be run to make the virus active.

Note that the self-sufficient and self-replicating characteristics of the malware distinguish a worm from a virus. A worm is a stand-alone program while a virus requires another program.

Very few new (that is, previously undetected) malware samples use this parasitic approach to propagation. Malware developers don’t do that any more. A surprising consequence is that there are almost no new viruses being created. This should be news!

A Trojan Horse is malicious software that tempts the user to install it by offering an attractive feature (such as a screen saver or emoticons), but includes unattractive, usually undisclosed, features (enable remote access, send SPAM) as well.

RFC 2828 defines “Trojan horse” as:

(I) A computer program that appears to have a useful function, but also has a hidden and potentially malicious function that evades security mechanisms, sometimes by exploiting legitimate authorizations of a system entity that invokes the program.

The distinctions among virus, worm, Trojan horse, spyware and adware are blurred because anti-virus software is expected to defend against them all. The reasoning: if anti-virus software reports it, it must be a virus.

As nomenclature, this system is appalling. It is better to refer to malware, propagation methods (“requires no manual intervention,” “requires trivial human intervention that is likely to occur,” or “requires significant human intervention”) and payloads (“installs remote access method”, “sends file to attacker”). Be clear about the threat; stop implying living creatures infesting a host.
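As a sketch of that nomenclature, here is how the same information could be recorded without the worm/virus/Trojan labels. The propagation and payload descriptions for Blaster and Conficker come from the paragraphs above; the structure itself is only illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class Propagation(Enum):
    NO_INTERVENTION = "requires no manual intervention"
    TRIVIAL_INTERVENTION = "requires trivial human intervention that is likely to occur"
    SIGNIFICANT_INTERVENTION = "requires significant human intervention"

@dataclass
class MalwareReport:
    name: str
    propagation: Propagation
    payloads: list  # e.g. "installs remote access method", "sends file to attacker"

# Described this way, the worm/virus/Trojan label adds nothing:
blaster = MalwareReport(
    "Blaster", Propagation.NO_INTERVENTION,
    ["scans for unpatched DCOM RPC hosts", "installs and starts a copy of itself"])
conficker = MalwareReport(
    "Conficker", Propagation.TRIVIAL_INTERVENTION,
    ["copies itself to unprotected network shares and USB drives"])
```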

This isn’t an impractical, pedantic rant. We should recognize that “anti-virus software” addresses much more than viruses. It addresses worms, some Trojan horses, some spyware, some adware, and some other risks. Anti-virus software addresses a vague set of malware that cannot be expected to line up precisely with customer expectations. Sure, detect viruses, but that’s a limited expectation. Worms, too. But Trojan horses and spyware? In some cases, customers install the Trojan horse software for the promised features and don’t care about the undisclosed information leakage issues. That’s a technical, educational, marketing and security issue we haven’t addressed.

Firewall: Sounds hot. Marketing, product differentiation hot.

Marketing has an important role to play when informing you about how products can fill your needs. Marketing does not do you a service by inventing a new product category then leaving you to determine if there is a need for the product.

Ports: Initially (citation required) “firewall” referred to the device (hardware) or software that dropped or ignored packets which specified particular ports. That is, hardware or software which blocked inbound or outbound port access.

Addresses: Packets from (or to) specific addresses might be dropped. This “blacklisting” is sometimes considered a “firewall.” At other times, it is considered an Intrusion Prevention System (IPS).
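A toy sketch of those first two senses, dropping traffic by port or by address. The blocked ports and the documentation-range address are hypothetical; a real packet filter makes the same decision from the raw IP and TCP headers.

```python
# Hypothetical rules for the earliest senses of "firewall":
BLOCKED_PORTS = {23, 135, 445}        # e.g. telnet, DCOM RPC, SMB
BLOCKED_ADDRESSES = {"203.0.113.7"}   # a documentation-range address as a stand-in

def permit(packet):
    """packet: dict with 'src', 'dst' and 'dst_port' taken from the headers."""
    if packet["dst_port"] in BLOCKED_PORTS:
        return False   # port-based blocking
    if packet["src"] in BLOCKED_ADDRESSES or packet["dst"] in BLOCKED_ADDRESSES:
        return False   # address-based blocking ("blacklisting")
    return True

assert not permit({"src": "198.51.100.2", "dst": "10.0.0.5", "dst_port": 445})
assert permit({"src": "198.51.100.2", "dst": "10.0.0.5", "dst_port": 443})
```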

Figure 1: IP Packet Header Structure

So far we’re dropping packets with certain ports or with certain addresses specified. There are other fields of an IP packet that we can interpret before choosing to ignore the packet. That is, we can drop “malformed” (unexpected) traffic: packets with anomalous (therefore suspicious) flags or lengths. A product which drops packets based upon these characteristics is sometimes described as an Intrusion Prevention System (IPS) and sometimes included in a product sold as a firewall.

We can also monitor the relationship amongst packets; that is, we can add “state” to a packet and ignore packets with an anomalous (therefore suspicious) “state”.

This is just the packet header. If we examine the packet data and interpret it for malicious content, we may be performing the work of products sold as “application firewalls.” For example, a Web Application Firewall (WAF) will watch for content which resembles a SQL injection attack (unexpected SQL commands in what should be data), a cross-site scripting attack (unexpected JavaScript in what should be data), or another web application attack pattern, and the WAF ignores (drops) that traffic.
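A crude sketch of the WAF idea: inspect the request body (the packet data, not just the headers) for patterns resembling SQL injection or cross-site scripting, and drop the request when they appear. Real products carry far richer rule sets; these two regular expressions are illustrative only.

```python
import re

# Illustrative patterns only; production WAF rules are much more thorough.
SQLI = re.compile(r"('|%27)\s*(or|union|select|--)", re.IGNORECASE)
XSS = re.compile(r"<\s*script\b", re.IGNORECASE)

def drop_request(body: str) -> bool:
    """Return True when the request body resembles a known web attack pattern."""
    return bool(SQLI.search(body) or XSS.search(body))

assert drop_request("username=admin'--")                    # resembles SQL injection
assert drop_request("comment=<script>alert(1)</script>")    # resembles cross-site scripting
assert not drop_request("comment=nice post")
```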

A review of OSI layers could be inserted here. Physical, Data-Link, Network, Transport, Session, Presentation, Application … a device which drops frames and packets at any layer seems to be referred to as a “firewall.”

So what is a firewall? “Firewall” will indicate that network traffic is intentionally ignored for some reason. Beyond that, pay attention to the context. Get the technical details, not the marketing phrase. Get the description, not the connotation.

Rootkit: When a hacker successfully compromises a system (obtains root authority), they install their utilities (their “kit”). This “rootkit” enables them to maintain control of the compromised system and expand their activities to other systems. The rootkit would need to be hidden from system administration tools, such as the process list. There were various techniques (“rootkit technologies”) for hiding the rootkit.

Now “rootkit” has come to refer to any “hidden” (that is, could be missed) program or process. What the kit consists of or can perform is no longer the defining characteristic of a rootkit; it is the “hidden” (darn, didn’t see it) nature that makes it a “rootkit”.

Zero-day (attack, threat, or vulnerability): “Zero-day” used to refer to the vendor’s response time. When a vendor learns of a vulnerability in their product, they must estimate how long it might take before the vulnerability is successfully exploited. Often a vulnerability cannot be exploited in a practical way, and the vulnerability can be patched as part of a scheduled release cycle. Often, the vendor has a number of days between notification of the vulnerability and its exploitation, and that number exceeds zero.

When the vendor learns of a vulnerability in their product because someone is currently exploiting that vulnerability, then they have zero days to provide a patch before the vulnerability is exploited. The July 6, 2009 vulnerability in Microsoft’s DirectShow DLL (msvidctl.dll) ActiveX control (SANS handler diary entry) appears to be just such a 0-day vulnerability. The vendor was privately informed of the vulnerability that was being exploited in drive-by attacks.

Now the term is used to refer to any vulnerability that is known by only a few people. The few people may be in contact with the vendor and working on mitigation, or may be figuring out how to exploit the vulnerability. Shon Harris says that “zero-day means that there is no fix”. eEye maintains a Zer0-Day Tracker of vulnerabilities that are “publicly disclosed and/or used in attacks, and do not have any published vendor-supplied patch.” Proof of concept is not exploitation. Is public disclosure with a proof of concept a “zero-day”? Should “0day vulnerability” be synonymous with “identified flaw, with no patch available”? Given these definitions, every vulnerability goes through a zero-day phase: a time when it is unknown, a time when it is known to very few people, a time when the vendor has no patch. In that case, “zero-day” is not a particularly useful term; all vulnerabilities have a “zero-day” characteristic. “Utility” should not be confused with “correctness,” but terms can be misleading.

Instead of “time to respond to exploit” being the defining characteristic of “zero-day vulnerability,” “how long has it been known” has become the defining characteristic. Can we return to the previous definition, when the term could be used to make distinctions rather than as hyperbole? Please?

Intrusion: As in Intrusion Detection System (IDS) and Intrusion Prevention System (IPS). Joel Snyder wrote an article titled “IDS or IPS? Differences and benefits of intrusion detection and prevention systems.” SearchSecurity.com distributes it. I’ll let him make the point.

… since both IDS and IPS have the word “intrusion” as the beginning of their acronym, you may be wondering why I haven’t mentioned “intrusion” as part of the function of either IDS or IPS. Partly that’s because the word “intrusion” is so vague that it’s difficult to know what an intrusion is. Certainly, someone actively trying to break into a network is an intruder. But is a virus-infected PC an “intrusion?” Is someone performing network reconnaissance an intruder… or merely someone doing research? And if a malicious actor is in the network legitimately — for example, a rogue employee — are their legitimate and illegitimate actions intrusions or something else?

The more important reason for leaving “intrusion” out of the description for both IDS and IPS is that they aren’t very good at catching true intruders. An IPS will block known attacks very well, but most of those attacks are either network reconnaissance or automated scans, looking for other systems to infect — hardly “intrusions” in the classic sense of the word. The best Intrusion Prevention System in this case is the firewall, which doesn’t let inappropriate traffic into the network in the first place.

It’s the misuse of the word “intrusion” in referring to these visibility and control technologies which has caused such confusion and misguided expectations in staff at enterprises that have deployed either IDS or IPS.

IDS: Passively monitors traffic for anomalous behavior. Clipping levels would trigger alerts.

IPS: Monitors traffic in-line, as a firewall would. Drops traffic that matches previously identified patterns, permits everything else.

Firewall: Monitors traffic in-line, permits selected types of traffic, drops everything else.
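The default-policy distinction in miniature, with hypothetical rule sets: the IPS permits anything it has not matched as bad, while the firewall drops anything it has not explicitly permitted.

```python
KNOWN_BAD_PORTS = {135, 4444}                              # hypothetical "known attack" signatures
ALLOWED_SERVICES = {("10.0.0.5", 80), ("10.0.0.5", 443)}   # hypothetical permit list

def ips_decision(dst, dst_port):
    """IPS: drop traffic matching previously identified patterns, permit everything else."""
    return "drop" if dst_port in KNOWN_BAD_PORTS else "permit"

def firewall_decision(dst, dst_port):
    """Firewall: permit selected types of traffic, drop everything else."""
    return "permit" if (dst, dst_port) in ALLOWED_SERVICES else "drop"

print(ips_decision("10.0.0.9", 8080))       # permit -- not on the known-bad list
print(firewall_decision("10.0.0.9", 8080))  # drop   -- not on the allowed list
```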

Thin client: For CISSP test purposes, “thin client” is a single sign-on (SSO) solution. Sign in to the thin client and these credentials will be used for all (or nearly all) applications you need.

A Windows XPe (XP embedded) thin client is a reduced feature set of Windows XP. This is not a SSO solution, although you may also implement Active Directory and use it as a SSO. An XPe platform is an XP platform, but with fewer operating system components (a “thinner” operating system).

A Windows CE thin client is a “thin” (few features, low maintenance, low overhead) operating system client. It, too, is not a SSO solution. However, you may choose to implement a Service Oriented Architecture (SOA) such as Citrix as an application delivery platform (where the application runs in a session on the server). With an SOA, you still need a device with a limited operating system, often a Windows-based operating system, to act as a “terminal” connecting to the server. You have implemented SSO, but credit goes to the SOA. The “thin” client could just as well have been a “fat” client for SOA/Citrix/SSO purposes.

Alternatively, you may be focusing on the hardware that was implemented. In that case, the “thin client” has a reduced feature set: no moving parts, no disk drive.

Forensics: Some will call the Root Cause Analysis described in this blog “forensics.” I urge you to avoid that term. If you have declared an incident, you should be following your Incident Response procedure. If you haven’t declared an incident, you are working on Root Cause Analysis (which works more efficiently with a documented procedure). You should recognize that a forensic investigation requires proper procedures (characterized by assigning a case number, preserving the evidence, maintaining chain of custody and such). Incident Response has its own PICERL procedure. Forensic Investigation, Incident Response and Root Cause Analysis have many tools in common, but their procedures are not identical.

Using the term “forensics” lightly jeopardizes your ability to defend your proper procedures. Suppose you were called upon to give testimony about one of your forensic cases. Under cross-examination, you are asked about email where you referred to a web history review as “forensics.”

“Do you have a forensic procedure?” you are asked.

“Yes,” you reply.

“Do you always follow your forensic procedure?”

“Yes.”

“I draw your attention to this email, in which you refer to a forensic investigation of web browser history. I see no mention of a case number. Was a case number assigned?”

“No.”

“I ask again, do you always follow forensic procedure?”

At this point, your testimony is of little value.

Semantics is important. Use “forensics” to refer to the investigations you may need to defend in court.

Where does this leave “network forensics,” the practice of analyzing network traffic to determine its purpose? I am still looking for case law that supports the admissibility of packet captures. I would not wish to rely upon network traces as evidence. “Network forensics” is an exotic, fancy phrase for network analysis. It sells, but has questionable accuracy.

Social engineering: You mean confidence game? Grifter? Bunco artist? Swindler? Diddling (the term Edgar Allan Poe would have recognized)? Does every generation need its own term that sanitizes fraud? Even “fun” is from the Middle English fon, “to befool,” hoax, or trick; surviving as “funny money” (as in “counterfeit”).

Valid: At the risk of weakening any confidence you may have in the preceding rants … Valid refers to an argument form; to reasoning. It does not refer to responses or data. People refer to “valid data” when they talk about “expected data” or data which conforms with a standard.

When completing a form for Delta Airlines, I used the (xxx) xxx-xxxx format when supplying a telephone number. The form returned the response

The following errors occured:
– Please enter a valid phone number

Pardon me? The phone number is my phone number. I believe the point the web developer was trying to make was that the phone number is not in a format you choose to interpret. The onus is upon me to guess what format(s) you are prepared to interpret. I must discover your expectations. “Enter a valid phone number” is poor customer service, and just lazy.

When “Washington” is typed into a social security number field, the response should not be “invalid entry.” It is an unanticipated response, an unacceptable response and an uninterpretable response. During design, referring to “valid data” brushes over the stumbling block of many programmers: the specifications. “Valid input” tells you nothing.
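A minimal sketch of the alternative: state the expected formats and normalize them, so the error message can say what the form accepts instead of calling my telephone number “invalid.” The patterns cover a few common North American notations and are illustrative, not a specification.

```python
import re

EXPECTED_FORMATS = [
    re.compile(r"^\(\d{3}\)\s*\d{3}-\d{4}$"),    # (xxx) xxx-xxxx
    re.compile(r"^\d{3}-\d{3}-\d{4}$"),          # xxx-xxx-xxxx
    re.compile(r"^\d{3}\.\d{3}\.\d{4}$"),        # xxx.xxx.xxxx
    re.compile(r"^\d{10}$"),                     # xxxxxxxxxx
]

def normalize_phone(entry: str):
    """Return the ten digits if the entry matches an expected format, else None."""
    entry = entry.strip()
    if any(pattern.match(entry) for pattern in EXPECTED_FORMATS):
        return re.sub(r"\D", "", entry)
    return None  # the rejection message can now list the accepted formats

assert normalize_phone("(555) 867-5309") == "5558675309"
assert normalize_phone("Washington") is None
```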

However, I am a fan of validated parking. I don’t know what validation occurs when a ticket is stamped. Verification (of visit) or authorization (to park) I could understand. Validation, though, I understand only because it is the traditional phrase.

Just: The dismissive use of “just” is to be avoided. “It’s just semantics,” for example, asserts that we can dismiss semantics. No justification is provided. “It’s just politics” has no explanatory value.

Application Abuse: Mykonos Software published a white paper titled “Understanding and Responding to the Five Phases of Web Application Abuse”. The five phases:

  1. Silent Introspection (reconnaissance)
  2. Attack Vector Establishment
  3. Implementation
  4. Automation
  5. Maintenance

They do a standard summary of web site attacks. My beef is with referring to this activity as “web site abuse”. The attackers are using the web site in ways the web site owner had not intended and does not desire, but has enabled. Place the focus upon removing these undesirable web features.

Anyone else see a pattern of blaming others for our own mistakes?

The moral: Semantics is about meaning and being clear; pedantic is about insistence, dogmatic insistence about conformity with a norm. Operational definitions, definitions that take ordinary language terms and use them for specific purposes (“for the purposes of our discussion, I will use such-and-such to mean”) are a confusing and dangerous practice. The speaker rarely adheres to their operational definition, switching between the common usage and specialized purpose. Recognize that these terms have become vague through misuse and focus upon the characteristic that is being identified. Be tolerant. Be clear about expectations. Selah.


Using Behavioral Analysis To Discover Undetected Malware

July 6, 2009

This post exists to flesh out an outline in “Is Anti-Virus Dead?” The outline:

Preventative and defensive measures:

  • Patch vulnerabilities.
  • Use anti-virus software with pattern matching technology to detect known exploits of vulnerabilities, even those you have patched. Prevent the exploits from executing, even those that will fail because the vulnerability has been patched.
  • Block access to known malware distributors.
  • Remove unnecessary services and ports.

Supplement these preventative measures with reactive discovery measures.

  • Use behavioral analysis technology (you are here)
  • Use analysis technologies that do not require behavioral analysis, such as those described in “What’s Different About This Approach?” to detect unknown exploits of unpatched vulnerabilities.

The “analysis technologies that do not require behavioral analysis” is meant to be novel and suggestive. The other sections exist as reminders that a single focus is insufficient.

Egress filtering. That is, watch the nature of the traffic sent out and watch where the traffic goes.

Snort, from Sourcefire. Inbound, implement rules (another set of signatures) that detect the use of exploits. Outbound, implement rules that alert when connections are made to addresses known to host malware (or other prohibited addresses).

See CS 646: Manual Intrusion Detection for a PowerPoint presentation of IP and TCP headers and the effects of invalid header contents (what a smurf attack looks like, for example).

Get baseline statistics about machines and ports that you can expect to have large volumes of network activity (using, for example, Cisco IOS Netflow or Netwitness). Investigate aberrations.
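A minimal sketch of the baseline-and-aberration idea, assuming flow records have already been exported (for example, by NetFlow) and reduced to per-host byte counts per interval. The three-sigma threshold is arbitrary; the point is that “large volume” is defined per machine, not globally.

```python
from statistics import mean, stdev

def baseline(history):
    """history: {host: [bytes sent per interval, ...]} -> {host: (mean, stdev)}"""
    return {host: (mean(samples), stdev(samples))
            for host, samples in history.items() if len(samples) >= 2}

def aberrations(current, stats, sigmas=3):
    """Flag hosts whose current outbound volume exceeds baseline by `sigmas` deviations."""
    flagged = []
    for host, sent in current.items():
        mu, sigma = stats.get(host, (0.0, 0.0))
        if sigma and sent > mu + sigmas * sigma:
            flagged.append((host, sent, mu))
    return flagged
```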

Nessus, from Tenable (a commercial license is required for commercial use). You’ll get a vulnerability scanner you can use to audit your patch deployment mechanism, and a way to audit for peer-to-peer file sharing.

Monitor attempts to connect to the master servers of botnets (Command and Control or C&C servers). The University of Washington maintains a list of IP addresses used by botnets. An outbound request indicates a compromised (bot-infected, pwned) machine. Appropriate measures would be to block outbound access to these addresses at the proxy servers and monitor attempts to use them (at routers and at proxy servers).
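A sketch of the monitoring half, assuming a C&C address list (one IP per line) and a proxy log exported as CSV with hypothetical “client” and “destination” columns. Blocking would still happen at the proxy; this only surfaces the machines that tried.

```python
import csv

def load_cc_addresses(path):
    """One IP address per line, as published blocklists are commonly distributed."""
    with open(path) as handle:
        return {line.strip() for line in handle if line.strip()}

def clients_contacting_cc(proxy_log_csv, cc_addresses):
    """Any client making an outbound request to a C&C address is presumed bot-infected."""
    hits = set()
    with open(proxy_log_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["destination"] in cc_addresses:
                hits.add(row["client"])
    return hits
```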

See Busy Firewall Administrators Note for quick tips.

Use known bad site lists and your DNS to watch for attempts to resolve the names of domains that are known to be malicious. F-Secure reports that in addition to suspicious-looking domain names (such as weloveusa.3322.org), malware may “phone home” to typo-squatting domain names (domain names that resemble legitimate domain names) such as ip2.kabsersky.com.
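A rough sketch of the typo-squatting check: reduce the queried name to its last two labels and compare it against known legitimate domains by similarity. The legitimate list and the 0.85 threshold are illustrative; a real deployment would feed in the DNS query logs and a much larger list.

```python
from difflib import SequenceMatcher

LEGITIMATE = {"kaspersky.com", "microsoft.com", "windowsupdate.com"}  # illustrative list

def looks_typo_squatted(queried_name, threshold=0.85):
    """Flag names that are close to, but not exactly, a legitimate domain."""
    labels = queried_name.lower().rstrip(".").split(".")
    base = ".".join(labels[-2:])          # crude registered-domain extraction
    if base in LEGITIMATE:
        return False
    return any(SequenceMatcher(None, base, legit).ratio() >= threshold
               for legit in LEGITIMATE)

print(looks_typo_squatted("ip2.kabsersky.com"))     # True: a near-miss of kaspersky.com
print(looks_typo_squatted("update.microsoft.com"))  # False: exact match on the base domain
```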

Watch for DNS responses that return more than two resource records. This would normally indicate a zone transfer or a request to resolve an IRC host, and could indicate that the requester is bot-infected.
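A sketch of the record-count check using the dnspython package (an assumption; it is not part of the standard library). A production approach would inspect responses passively on the wire rather than re-resolving names, but the heuristic itself is this simple.

```python
import dns.exception
import dns.resolver  # both from the dnspython package, assumed installed

def suspicious_record_count(name, limit=2):
    """Flag names whose A-record answer contains more than `limit` resource records."""
    try:
        answer = dns.resolver.resolve(name, "A")  # dnspython >= 2.0; older releases use query()
    except dns.exception.DNSException:
        return False
    return len(list(answer)) > limit
```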

Vendor suggestions: Lancope. I have seen only presentations. StealthWatch uses LanFlow messages to make that correlation between the attention-grabbing event and the source machines and users. That’s a lot of LanFlow messages.