Monday, December 28, 2009

eLearnSecurity : Breaking into systems is no longer enough

| Armando Romeo |
Hello everyone. We have been quiet for a while, working on the upcoming, long-awaited eLearnSecurity Penetration Testing Professional course.

The work has been hard, as you can see here, but the feedback from the first test run has been great: the five CISSPs who evaluated one of our modules were amazed at how simple and interactive it is to learn the most advanced pentesting techniques.

We already have 400 reserved seats and you can be notified of the release by clicking here

If you're not up to date on the eLearnSecurity project, read on:
it was born as the natural evolution of the Hackers Center philosophy of teaching security in the simplest way possible.

From the HSC Ethical Hacker Kit we have moved to a completely new way of training security professionals at a distance:

  • 2000+ Interactive slides
  • 5 hours of videos
  • Practical examples in every module
  • 3 sections (Network security testing, Web application security testing, System security)
  • Labs
Just another pentesting course?
Not really. We aim to provide the background that will make you a real penetration tester: from the legal perspective to the most advanced techniques, explained step by step.

Breaking into systems is no longer enough: you need to provide solutions to your clients.
At the end of the course you will be able to understand (and exploit) every threat down to its smallest detail, identify its root cause, and suggest the best remediation methodology.

The Professional version of our course will cover much more than any other course available today at the same, or even a much higher, price.
  • Web application security
    Information gathering, Web server hacking, Exploiting XSS, SQLi, CSRF, HTTP session attacks, Response splitting, Client-side attacks, Web server rooting + 2 hours of videos

  • Network Security
    Information gathering, Fingerprinting, Mapping the network, SNMP Enumeration, Advanced scanning, Sniffing, Vulnerability assessment and remote exploitation + 2 hours of videos

  • System Security
    Cryptography, Writing advanced malware, Metamorphic and polymorphic coding, Advanced buffer overflows, Shellcoding, Writing rootkits step by step + 1 hour of videos
Note: we do not include topics in our syllabus just for the sake of listing them: we cover every topic in depth, with real-world examples.

Release date and price
Price and release date have not yet been decided.

Estimated release for the Professional version: end of January 2010
Estimated release for the Student version: end of March 2010

Stay tuned on Twitter for more release information.

Thursday, November 5, 2009

NIST releases Security Content Automation Protocol for FISMA

| Brett D. Arion |

Automated tools take sweat out of security compliance

When it comes to complying with federal security mandates, chief information security officers contend with a set of arduous tasks that could rival the 12 labors of Hercules.

Under the Federal Information Security Management Act, agencies must file annual reports to Congress that outline their compliance with more than a dozen categories of security controls that span technology, management and operations.

In addition, the Federal Desktop Core Configuration (FDCC), which seeks to secure desktop and laptop PCs that run Microsoft Windows, has an extensive list of required security settings that agencies must track and report on for every computer they operate.

Such reports relay configuration data on hundreds or thousands of devices and can take months to compile.

So it’s no wonder that federal CISOs who responded to an ISC(2) survey earlier this year identified “meeting compliance objectives” as among their top three priorities. One software vendor’s research has found that agencies’ security managers spend anywhere from a quarter to almost half their time on compliance issues.

However, agencies can get some help. The National Institute of Standards and Technology, which writes the FISMA standards, has created the Security Content Automation Protocol to deal with some of the problems of compliance. SCAP targets the security posture of individual devices and can be used to verify patch installation and check a machine’s security configuration settings.

A number of vendors offer SCAP-enabled security monitoring products that can automate and reduce the painstaking effort involved in achieving several aspects of compliance mandates.

However, those products can only go so far. Industry executives say the tools focus mostly on asset-level security and don’t, by themselves, provide a way to tackle the big picture of FISMA compliance. However, NIST and security vendors are working to make SCAP and related tools more broadly applicable.

SCAP makes inroads

The lack of standardized, automated methods for securing software makes the jobs of patching and securely configuring systems labor intensive and prone to error, according to NIST. In addition, vendors have different ways to identify vulnerabilities and platforms, so organizations that use tools from multiple vendors often generate inconsistent reports that can bog down a security assessment.

SCAP is intended to perform those chores in a consistent way. The protocol has found its greatest traction so far in the realm of FDCC compliance testing. For FDCC, the Office of Management and Budget requires agencies to adopt a standard configuration involving about 300 security settings. The idea is to shrink the avenues through which intruders could compromise a government computer. OMB requires agencies to use SCAP tools to verify that their PCs adhere to FDCC settings.

The National Science Foundation has been using SCAP-enabled scanning tools to continuously verify security configurations for more than a year.

“The main benefit of SCAP is that it allows us to determine the level of compliance to FDCC settings with a high degree of accuracy,” said Bill Marsh, an information technology security officer at NSF’s Division of Information Systems.

“FDCC is the most common use case…across the federal government,” said Matthew Scholl, group manager for security management and assurance at NIST’s Computer Security Division.

Organizations with thousands of devices and the need to assess daily changes in their security posture would find compliance reporting difficult without a SCAP tool, said Matt Mosher, senior vice president for the Americas at Lumension Security, which offers SCAP-validated tools.

“If I have no automated tool to assess and report on [FDCC configurations] on a fairly regular basis, there’s no way I can comply,” Mosher said.

In NIST’s SCAP validation program, independent laboratories validate vendors’ products against several SCAP capabilities, one of which focuses on FDCC.

In the case of FDCC scanning, vendors usually offer SCAP features as part of their vulnerability management or governance, risk, and compliance software. About 20 vendors offer validated FDCC scanning capabilities.

SCAP is based on Extensible Markup Language and enables the creation of machine-readable security configuration checklists. The validated tools process SCAP checklists that map to FDCC guidelines and compare the checklists against a given machine’s configuration.
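The checklist-versus-configuration comparison can be sketched in a few lines. The XML layout, rule names and settings below are simplified stand-ins for illustration, not real XCCDF/SCAP content:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an SCAP-style checklist (not the real schema).
CHECKLIST_XML = """
<checklist>
  <rule id="password-min-length" expected="12"/>
  <rule id="audit-logon-events" expected="enabled"/>
</checklist>
"""

# Hypothetical snapshot of a machine's actual settings.
machine_config = {
    "password-min-length": "12",
    "audit-logon-events": "disabled",
}

def check(checklist_xml, config):
    """Compare each checklist rule against the machine's setting."""
    results = {}
    for rule in ET.fromstring(checklist_xml):
        expected = rule.get("expected")
        actual = config.get(rule.get("id"))
        results[rule.get("id")] = (actual == expected)
    return results

print(check(CHECKLIST_XML, machine_config))
# Maps each rule id to True (compliant) or False (noncompliant).
```

A real SCAP tool does the same comparison at much larger scale, against hundreds of settings per machine, and emits standardized machine-readable results.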

Such tools generally fall into two categories: agent-based and agentless. The first group deploys software agents on the devices to be checked. The agents send reports on the security status of the various machines to server-based software for analysis.

The agentless tools operate in much the same way as a network vulnerability scanner: They seek out devices on a network and check for gaps in their security posture.

Products may work in one mode or the other or support both. Each approach has its pros and cons, said John Bordwine, public-sector chief technology officer at Symantec, which offers the Control Compliance Suite Federal Toolkit as its SCAP tool. The Symantec product offers both modes.

"An agent based system provides much more detail around the endpoint configuration since it is resident to the device,” said Bordwine. “Network based approaches can provide a fairly deep level of detail as well, but this requires some level of administrative access that has to be granted.”

SCAP tools are typically licensed based on the number of assets an organization seeks to evaluate. Prices range from $5 per device to more than $50, depending on the capabilities provided, among other factors, said David Wilson, vice president of product management for Telos’ Xacta IA Manager.

SCAP's limitations

Although SCAP can help agencies track FDCC settings, it is far from a comprehensive compliance strategy.

For example, SCAP-based tools check for compliance at a specified level for a particular setting, but they stumble when a device is set at a higher level than the FDCC requirement.

“If the FDCC setting is ‘medium’ and the NSF setting exceeds that as ‘high,’ the SCAP tool would mark the setting as ‘noncompliant,’ which is not accurate,” Marsh said.
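The quirk Marsh describes boils down to testing for equality rather than testing whether a setting meets or exceeds the baseline. A minimal illustration, with hypothetical setting levels:

```python
# Ordered security levels, weakest to strongest (hypothetical scale).
LEVELS = ["low", "medium", "high"]

def naive_check(actual, required):
    # Exact-match comparison: a "high" setting fails a "medium" requirement.
    return actual == required

def ordered_check(actual, required):
    # Treat any setting at or above the baseline as compliant.
    return LEVELS.index(actual) >= LEVELS.index(required)

print(naive_check("high", "medium"))    # False: flagged noncompliant
print(ordered_check("high", "medium"))  # True: the stricter setting passes
```

The first function mirrors the behavior Marsh describes; the second shows the comparison agencies would actually want.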

However, the bigger issue for most is SCAP’s usefulness for complying with FISMA.

“FISMA is an overarching set of policies and controls that is really covering all the security aspects of the organization,” Mosher said. “The SCAP tools are one subsection of FISMA.”

Wilson pointed to a gap between SCAP's objectives and FISMA’s objectives. Much of FISMA reporting occurs at the systems level, while SCAP focuses on “the configuration of a single asset, one at a time,” he said.

NIST Special Publication 800-53, which provides recommended security control guidance for complying with FISMA, discusses protection in terms of “information systems” — sets of resources rather than individual devices.

Although SCAP doesn’t address FISMA in its entirety, it can be applied to the recommended technical security measures that support the directive.

“No SCAP content can make you FISMA compliant,” Scholl said. “We do have SCAP content that directly reflects the technical security controls found in SP 800-53.”

For example, NIST has developed checklists that map Windows XP security configuration settings to the high-level security controls in SP 800-53, according to NIST’s guide for adopting SCAP.

In addition, Scholl said organizations can and often do create their own SCAP content that directly reflects their security policies.

Meanwhile, SCAP vendors say they are offering tools to help close the SCAP gap. Wilson said Telos’ Xacta IA Manager can aggregate asset-based configuration data so agencies can draw security conclusions at the systems level.

Bordwine said industry and NIST are also seeking to extend SCAP to operational checks. For example, SP 800-53 calls for security awareness training for information systems’ users. Vendors are exploring ways to search records in a training database and pull out relevant data for compliance reporting, he said.

What's the bottom line for SCAP? Tools that support the protocol can bolster certain aspects of compliance reporting, particularly those related to FDCC. However, the ability to automate broader FISMA duties is still a work in progress.

A zero-day flaw in the TLS and SSL protocols, which are commonly used to encrypt web pages, has been made public.

| Brett D. Arion |

Security researchers Marsh Ray and Steve Dispensa unveiled the TLS (Transport Layer Security) flaw on Wednesday, following the disclosure of separate, but similar, security findings. TLS and its predecessor, SSL (Secure Sockets Layer), are typically used by online retailers and banks to provide security for web transactions.

Ray, who along with Dispensa works for two-factor authentication company PhoneFactor, explained in a blog post on Thursday that he had initially discovered the flaw in August, and demonstrated a working exploit to Dispensa at the beginning of September.

The flaw in the TLS authentication process allows an outsider to hijack a legitimate user's browser session and successfully impersonate the user, the researchers said in a technical paper.

The fault lies in an "authentication gap" in TLS, Ray and Dispensa said. During the cryptographic authentication process, in which a series of electronic handshakes take place between the client and server, there is a loss of continuity in the authentication of the server to the client. This gives an attacker an opening to hijack the data stream, they said.

In addition, the flaw allows practical man-in-the-middle attacks against HTTPS (Hypertext Transfer Protocol Secure) servers, the researchers said. HTTPS is the combination of HTTP and TLS used in most online financial transactions.
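The practical effect of the authentication gap is usually illustrated as plaintext-prefix injection: the attacker opens the TLS session, sends a partial HTTP request, then splices the victim's handshake in as a renegotiation, so the server parses the two streams as one. A rough sketch of what the server ends up seeing (the paths, headers and cookie are invented for illustration):

```python
# What the attacker sends over the connection it opened first.
attacker_prefix = (
    b"GET /account/transfer?to=attacker HTTP/1.1\r\n"
    b"X-Ignore-Rest: "          # header deliberately left unterminated
)

# The victim's own request, spliced in after renegotiation.
victim_request = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: bank.example\r\n"
    b"Cookie: session=secret\r\n\r\n"
)

# The server sees one continuous stream: the victim's request line is
# swallowed by the attacker's unterminated header, while the victim's
# cookie still authenticates the attacker-chosen request.
merged = attacker_prefix + victim_request
print(merged.decode())
```

This is why the fix had to come at the protocol level: nothing in the data stream tells the server that the bytes before and after renegotiation came from different parties.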

The flaw will prove a problem for a long time to come, security researcher Chris Paget wrote in a blog post, as it also affects SSL.

"How about the thousands of different software update mechanisms out there that depend on SSL being secure in order to function?" wrote Paget. "This is a protocol-level breach; one that requires a modification to the way that SSL and TLS function in order to repair."

After they found the flaw, Ray and Dispensa disclosed their findings to the Industry Consortium for the Advancement of Security on the Internet (Icasi), a tech association whose members include Cisco, IBM, Intel, Juniper Networks, Microsoft and Nokia. The researchers also alerted the Internet Engineering Task Force (IETF) and a number of open-source SSL implementation projects.

On 29 September, the various groups involved met and decided to set up a project, called Project Mogul, to handle remediation efforts. It will first concentrate on creating a protocol extension as a preliminary solution. Ray said in his blog that he expected to see announcements from the multi-vendor collaboration "shortly", including an internet draft proposal for the fix.

At the September meeting, Ray and Dispensa were informed about research being done by the IETF TLS Channel Bindings working group, which was following a similar line of inquiry into the TLS protocol.

On Wednesday, Martin Rex, a member of the IETF TLS Channel Bindings working group and researcher at SAP, published a man-in-the-middle TLS renegotiation flaw in Microsoft IIS. The flaw, which is essentially the same as the one discovered by Ray, was publicised on Twitter by security researcher HD Moore.

Ray and Dispensa decided on Wednesday that the flaw was in the public domain, and so decided on full disclosure of their work.

Friday, October 23, 2009

Use Data Masking to Secure Sensitive Data in Non-Production Environments

| Brett D. Arion |

Last week's article covered protecting data in databases from the inside out: watching every action involving data as it happens and promptly halting improper actions. This week's topic is securing data in non-production environments.

Data masking is the process of de-identifying (masking) specific elements within data stores by applying one-way algorithms to the data. The process ensures that sensitive data is replaced with realistic but not real data; for example, scrambling the digits in a Social Security number while preserving the data format. The one-way nature of the algorithm means there is no need to maintain keys to restore the data, as you would with encryption or tokenization.
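A minimal sketch of the idea, using a keyed hash (HMAC) as the one-way function to replace each Social Security number digit while keeping the NNN-NN-NNNN shape. The key name and the whole approach are illustrative, not a production masking scheme:

```python
import hashlib
import hmac

MASKING_KEY = b"site-specific-secret"  # hypothetical; keep out of source control

def mask_ssn(ssn: str) -> str:
    """One-way, deterministic mask of an SSN that preserves its format."""
    digest = hmac.new(MASKING_KEY, ssn.encode(), hashlib.sha256).digest()
    digit_stream = iter(digest)  # bytes iterate as ints in Python 3
    return "".join(
        str(next(digit_stream) % 10) if ch.isdigit() else ch
        for ch in ssn
    )

masked = mask_ssn("123-45-6789")
print(masked)  # same NNN-NN-NNNN shape, different digits
assert mask_ssn("123-45-6789") == masked  # deterministic: stable across runs
```

Because the mask is deterministic, the same real value always maps to the same fake value, and because it is keyed and one-way, holders of the masked data cannot recover the original.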

Data masking is typically done while provisioning non-production environments, so that copies of data created to support test and development processes do not expose sensitive information. If you don't think this is important, consider what happened to Wal-Mart a few years ago. Press reports indicate that Wal-Mart was the victim of a serious security breach in 2005 and 2006, in which hackers targeted the development team in charge of the chain's point-of-sale system and siphoned source code and other sensitive data to a computer in Eastern Europe. Many of the computers the hackers targeted belonged to company programmers. Wal-Mart at the time produced some of its own software, and one team of programmers was tasked with coding the company's point-of-sale system for processing credit and debit card transactions. This was the team the intruders targeted and successfully hacked.

Wal-Mart's situation may not be unique. According to Gartner, more than 80 percent of companies use sensitive production data for non-production activities such as in-house development, outsourced or offshored development, testing, quality assurance and pilot programs.

The need for data masking is largely being driven by regulatory compliance requirements that mandate the protection of sensitive information and personally identifiable information (PII). For instance, the Data Protection Directive implemented in 1995 by the European Commission strictly regulates the processing of personal data within the European Union. Multinational corporations operating in Europe must observe this directive or face large fines if they are found in violation. U.S. regulations such as the Gramm-Leach-Bliley Act (GLBA) and the Health Insurance Portability and Accountability Act (HIPAA) also call for protection of sensitive financial and personal data.

Worldwide, the Payment Card Industry Data Security Standard (PCI DSS) requires strict security for cardholder data. In order to achieve full PCI compliance, organizations must protect data in every system that uses credit card data. That means companies must address their use of cardholder data for quality assurance, testing, application development and outsourced systems -- and not just for production systems. In the Wal-Mart case discussed above, the retailer failed to meet the PCI standard for data security by not securing data in the development environment.

Many large organizations are concerned about their risk posture in the development environment, especially as development is outsourced or sent offshore. A lack of processes and technology to protect data in non-production environments can leave the company open to data theft or exposure and regulatory non-compliance. Data masking is an effective way to reduce enterprise risk. Development and test environments are rarely as secure as production, and there's no reason developers should have access to sensitive data. And while encryption is a viable security measure for production data, encryption is too costly and has too much overhead to be used in non-production environments.

Many database vendors offer a data masking tool as part of their solution suites. These tools, however, tend to work only on databases from a specific vendor. An alternative solution is to use a vendor-neutral masking tool. Dataguise is one of the leading vendors in the nascent market of data masking.

The dataguise solution has two complementary modules. dgdiscover is a discovery tool that searches your environment (including endpoints) to find sensitive data in structured and unstructured repositories. So, even if someone has copied data to a spreadsheet on his PC, dgdiscover can find it. This can be a valuable time-saving tool as data tends to be copied to more places, especially as virtual environments grow and new application instances can be deployed on demand. dgdiscover also can be used to support audits and create awareness of data repositories.

The second dataguise module is dgmasker, a tool that automatically masks sensitive data using a one-way process that can't be reverse-engineered. dgmasker works in heterogeneous environments and eliminates the common practice of having DBAs create masking techniques and algorithms. The tool preserves relational integrity between tables and remote databases and generates data that complies with your business rules for application compatibility. In short, you have all the benefits of using your actual production data without using the real data. Instead, dgmasker obfuscates the real data so that it cannot be recovered by anyone -- insider or outsider -- who gains access to the masked data.
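Preserving relational integrity falls out of determinism: apply the same mask to the key column in every table and joins still line up. A toy illustration with an invented masking function (not dataguise's actual algorithm):

```python
import hashlib

def mask_id(customer_id: str) -> str:
    # Deterministic stand-in mask: the same input always yields the same
    # output, so foreign-key relationships survive masking.
    return "C" + hashlib.sha256(customer_id.encode()).hexdigest()[:8]

customers = [{"id": "1001", "name": "Alice"}, {"id": "1002", "name": "Bob"}]
orders = [{"customer_id": "1001", "total": 250}, {"customer_id": "1002", "total": 99}]

# Mask the key column in both tables with the same function.
masked_customers = [{"id": mask_id(c["id"]), "name": "REDACTED"} for c in customers]
masked_orders = [{"customer_id": mask_id(o["customer_id"]), "total": o["total"]}
                 for o in orders]

# The join key still matches across the masked tables.
assert masked_orders[0]["customer_id"] == masked_customers[0]["id"]
```

Test and development code can therefore join, aggregate and validate against the masked copies exactly as it would against production.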

Data masking is an effective tool in an overall data security program. You can employ data masking in parallel with other data security controls such as access controls, encryption, monitoring and review/auditing. Each of these technologies plays an important role in securing data in production environments; however, for non-production environments, data masking is becoming a best practice for securing sensitive data.

Symbian Microkernel released as Open Source

| Brett D. Arion |
It was well over a year ago that news broke of the Symbian operating system--found on approximately half of the world's smartphones--going open source. The news was interpreted as particularly important to Nokia's forward-looking Symbian strategy, but after all this time, an open source version of Symbian's platform is still only in beta testing.

Today, though, as EETimes notes, Symbian has released its platform microkernel, and software development kit (SDK), as open source under the Eclipse Public License. The Symbian Foundation claims that it is moving quickly toward an open source model, which is questionable, but the release of the EKA2 kernel is a signal that Symbian still means business about adopting an open source model.

Accenture, ARM, Nokia and Texas Instruments contributed software to the microkernel, Symbian officials said. They also note that the microkernel is responsible for most key functions in the operating system. What puzzles me, though, are the many posts and news stories that I'm seeing that seem to agree with the Symbian Foundation's claim that it is nine months ahead of schedule with its shift to open source.

Ahead of schedule after more than a year? Has anyone alerted the Symbian Foundation and Nokia that there is an absolute competitive maelstrom going on in the smartphone arena? Android will soon reach a full version 2.0 and has major momentum. Meanwhile, Nokia is bleeding money and taking an old-fashioned butt-kicking from the iPhone in the smartphone market. Nokia's North American sales are down more than 31 percent from last year.

It's about time that the Symbian platform showed some actual signs of going open source in earnest. If it does, it will only be good for market share, but I'm really not sure that this latest release qualifies as "ahead of schedule" in this mobile technology market.


Congressional Advisory Panel: China taking valuable information from high-tech companies

| Brett D. Arion |
The Chinese government is stepping up efforts to steal valuable information from high-technology companies in other countries, according to a congressional advisory panel, which detailed one operation that siphoned "extremely large volumes" of sensitive data.

The 2007 attack against the unnamed high-technology company was just one of several successful operations the US-China Economic and Security Review Commission believes were sponsored by Beijing.

According to The Wall Street Journal, which reported the contents of a report the panel was expected to release Thursday, the Chinese government is suspected because of the "professional quality" of the attack and the technical nature of the stolen information.

According to the WSJ:

The hackers "operated at times using a communication channel between a host with an [Internet] address located in the People's Republic of China and a server on the company's internal network."
In the months leading up to the 2007 operation, cyberspies did extensive reconnaissance, identifying which employee computer accounts they wanted to hijack and which files they wanted to steal. They obtained credentials for dozens of employee accounts, which they accessed nearly 150 times.

The cyberspies then reached into the company's networks using the same type of program help-desk administrators use to remotely access computers.

The hackers copied and transferred files to seven servers hosting the company's email system, which were capable of processing large amounts of data quickly. Once they moved the data to the email servers, the intruders renamed the stolen files to blend in with the other files on the system and compressed and encrypted the files for export.

The attackers used at least eight US-based computers, some at universities, as drop boxes before sending the data overseas. The company's security team managed to detect the theft while it was in progress, but not before significant amounts of data had left the company network.

China is one of 100 countries believed to have the capability to conduct such operations, according to the report.


Almost half of ISO 27001 'compliant' firms break security guidance

| Brett D. Arion |
Almost half of businesses that claim compliance with ISO 27001 are sharing privileged user accounts and breaking other standard guidance, according to a survey of IT managers.

Some 47 percent of firms in the UK said they were compliant with the standard, but 41 percent of these admitted to various non-compliant practices.

Bad practice by privileged users is putting European data at "high risk", according to the 'Privileged user management -- it's time to take control' report. These practices include the use of default user names and passwords, the granting of wider access than necessary, failure to monitor privileged users, and ignorance of the existence of privileged users in the first place.

Two hundred and seventy European IT managers, including 45 in the UK, were interviewed for the survey that was conducted by Quocirca.

Twenty-nine percent of firms in the UK rely on manual control of privileged users, who include system administrators, application service users and privileged personal users. Only a quarter have implemented privileged user management software, which aims to help businesses enforce and track policy. Around 20 percent plan to implement the software.

UK firms saw privileged users as a medium threat, rating them on average at 2.5 on a scale of one to five, where one meant no threat and five represented a very serious threat.

On a similar scale, they exhibited a medium level of confidence that they could monitor and control privileged user accounts, at 3.1 and 3.2 respectively.

Tim Dunn, VP of security at management software firm CA, which commissioned the survey, said at this week's RSA Security Conference in London that there is a "necessity for privileged user access", but that privileged accounts are "the main target for hackers".

Dunn offered businesses a number of recommendations, including making sure risk managers and other executives "take charge of the problem" instead of "leaving it to IT". Firms should also introduce individual accountability, enforce segregation of duties for privileged users, secure log files, and implement a privileged user management platform, he said.


Friday, October 16, 2009

Firefox Users At Risk From Microsoft Plug-In

| Brett D. Arion |
[ UPDATE: Mozilla has now removed the extension from the blocklist after Microsoft clarified some information in its bulletin on how Firefox users were affected. ]

Microsoft patches a critical bug that is exploitable because of an add-on silently slipped into Firefox last February.

An add-on that Microsoft silently slipped into Mozilla's Firefox last February leaves that browser open to attack, Microsoft's security engineers acknowledged earlier this week.

One of the 13 security bulletins Microsoft released Tuesday affects not only Internet Explorer (IE) but also Firefox, thanks to a Microsoft-made plug-in pushed to Firefox users eight months ago via Windows Update.

"While the vulnerability is in an IE component, there is an attack vector for Firefox users as well," admitted Microsoft engineers in a post to the company's Security Research & Defense blog on Tuesday. "The reason is that .NET Framework 3.5 SP1 installs a 'Windows Presentation Foundation' plug-in in Firefox."

The Microsoft engineers described the possible threat as a "browse-and-get-owned" situation that only requires attackers to lure Firefox users to a rigged Web site.

Numerous users and experts complained when Microsoft pushed the .NET Framework 3.5 Service Pack 1 (SP1) update to users last February, including Susan Bradley, a contributor to the popular Windows Secrets newsletter.

"The .NET Framework Assistant [the name of the add-on slipped into Firefox] that results can be installed inside Firefox without your approval," Bradley noted in a Feb. 12 story. "Although it was first installed with Microsoft's Visual Studio development program, I've seen this .NET component added to Firefox as part of the .NET Family patch."

What was particularly galling to users was that, once installed, the .NET add-on was virtually impossible to remove from Firefox. The usual "Disable" and "Uninstall" buttons in Firefox's add-on list were grayed out on all versions of Windows except Windows 7, leaving most users no alternative but to root through the Windows registry, a potentially dangerous chore, since a misstep could cripple the PC. Several sites posted complicated directions on how to scrub the .NET add-on from Firefox.

Annoyances also said the threat to Firefox users is serious. "This update adds to Firefox one of the most dangerous vulnerabilities present in all versions of Internet Explorer: the ability for Web sites to easily and quietly install software on your PC," said the hints and tips site. "Since this design flaw is one of the reasons [why] you may have originally chosen to abandon IE in favor of a safer browser like Firefox, you may wish to remove this extension with all due haste."

Specifically, the .NET plug-in switched on a Microsoft technology dubbed ClickOnce, which lets .NET applications automatically download and run inside other browsers.

Microsoft reacted to criticism about the method it used to install the Firefox add-on by issuing another update in early May that made it possible to uninstall or disable the .NET Framework Assistant. It did not, however, apologize to Firefox users for slipping the add-on into their browsers without their explicit permission -- as is the case for other Firefox add-ons, or extensions.

This week, Microsoft did not revisit the origin of the .NET add-on, but simply told Firefox users that they should uninstall the component if they aren't able to deploy the patches provided in the MS09-054 update.

According to Microsoft, the vulnerability is "critical," and also can be exploited against users running any version of IE, including IE8.

Latest Fake Antivirus Attack Holds Compromised Systems Hostage

| Brett D. Arion |

Attack forces user to purchase phony antivirus package to free computer

Attackers have added a new twist to spreading fake antivirus software: holding a victim's applications for ransom.

Researchers discovered a Trojan attack that basically freezes a user's system unless he purchases the rogueware, which goes for about $79.99. The Adware/TotalSecurity2009 rogueware attack doesn't just send fake popup security warnings -- it takes over the machine and renders all of its applications useless, except for Internet Explorer, which it uses to receive payment from the victim for the fake antivirus. "The system is completely crippled," says Sean-Paul Correll, threat researcher and security evangelist for PandaLabs, which found the new attack.

Correll says when the rogueware detects any application on the machine starting to execute, it then shuts down the application. "This happens for every file you try to open except IE. The only reason IE works is because that's what's used to allow victims to pay the cybercriminals," he says.

Bad guys have used ransom threats in phishing attacks and distributed denial-of-service (DDoS) attacks, but Correll says this is the first time it has been used to force users to buy rogueware. Rogueware distributors typically prompt the victim with pop-up messages, but the user can bypass the purchasing process by ignoring them or clicking through them.

Adware/TotalSecurity 2009 isn't new rogueware, but the difference is its distributors are using a more aggressive tack to ensure they make money from it. "Users are put into a Catch-22," Correll says. To free their systems, they are pressured into purchasing the package and sending their financial details to the bad guys, he says. Once the transaction is complete, they receive a serial number that releases their applications and files and lets them recover their information.

The good news is that, so far, this type of attack is relatively rare. And PandaLabs has posted the serial numbers for the malware application so that users can temporarily "unlock" their systems.

Rogueware has been on the rise this year, and its creators are pumping out new versions of the malware in rapid-fire. PandaLabs found 374,000 new versions of rogueware samples released in the second quarter of this year, a number the company expects to nearly double to 637,000 in the third quarter.

Correll says it's only a matter of time before other rogueware developers emulate the ransom attack. "By forcing the user to pay so quickly, they are able to maximize their profitability before getting caught and removed," he says.


Botnet Operators Impacted by Global Economy. DDoS and other attacks cheaper

| Brett D. Arion |
Security researchers say the cost of criminal services such as distributed denial of service, or DDoS, attacks has dropped in recent months. The reason? Market economics. "The barriers to entry in that marketplace are so low you have people basically flooding the market," said Jose Nazario, a security researcher with Arbor Networks. "The way you differentiate yourself is on price."

Criminals have gotten better at hacking into unsuspecting computers and linking them together into so-called botnet networks, which can then be centrally controlled. Botnets are used to send spam, steal passwords, and sometimes to launch DDoS attacks, which flood victims' servers with unwanted information. Often these networks are rented out as a kind of criminal SaaS to third parties, who are typically recruited in online discussion boards.

DDoS attacks have been used to censor critics, take down rivals, wipe out online competitors and even extort money from legitimate businesses. Earlier this year a highly publicized DDoS attack targeted U.S. and South Korean servers, knocking a number of Web sites offline.

Are botnet operators having to cut costs like other businesses in these troubled economic times? Security researchers don't know if that's been a factor, but they do say that the supply of infected machines has been growing. In 2008, Symantec's Internet sensors counted an average of 75,158 active bot-infected computers per day, a 31% jump from the previous year.

DDoS attacks may have cost hundreds or even thousands of dollars per day a few years ago, but in recent months researchers have seen them going for bargain-basement prices.

Nazario has seen DDoS attacks offered in the US $100-per-day range, but according to SecureWorks Security Researcher Kevin Stevens, prices have dropped to $30 to $50 on some Russian forums.

DDoS attacks aren't the only attacks that are getting cheaper. Stevens says the cost of stolen credit card numbers and other kinds of identity information has dropped too. "Prices are dropping on almost everything," he said.

While $100 per day might cover a garden-variety 100MB/second to 400MB/second attack, it might also procure something much weaker, depending on the seller. "There's a lot of crap out there where you don't really know what you're getting," said Zulfikar Ramzan, a technical director with Symantec Security Response. "Even though we are seeing some lower prices, it doesn't mean that you're going to get the same quality of goods."

In general, prices for access to botnet computers have dropped dramatically since 2007, he said. But with the influx of generic and often untrustworthy services, players at the high end can now charge more, Ramzan said.

Oracle to fix 38 database, product vulnerabilities

| Brett D. Arion |


Oracle has announced plans to ship a Critical Patch Update (CPU) with fixes for at least 38 security vulnerabilities in a wide range of database and server products.

The most serious vulnerabilities (CVSS score of 10.0) affect Oracle Core RDBMS, Oracle JRockit and Oracle Network Authentication. The patches are due on Tuesday, October 20, 2009.

According to an advance notice from Oracle, the following products and components will be affected by the October CPU:

  • Oracle Database: 16 new security vulnerability fixes for the Oracle Database. Six of these vulnerabilities may be remotely exploited without authentication, i.e., may be exploited over a network without the need for a username and password.
  • Oracle Application Server: Three new security fixes for the Oracle Application Server. Two of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password.
  • Oracle E-Business and Applications Suite: Eight new security fixes for this product. Five of these vulnerabilities may be remotely exploitable without authentication.
  • Oracle PeopleSoft Enterprise and JD Edwards EnterpriseOne: Four new security fixes for the PeopleSoft and JD Edwards Suite. None of these vulnerabilities may be remotely exploitable without authentication.
  • Oracle BEA Products: Six new security fixes for the BEA Products Suite. All of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Oracle BEA Products affected:
    • Oracle JRockit

    • Oracle WebLogic Portal

    • Oracle WebLogic Server
  • Oracle Industry Applications Products Suite: One new security fix for the Oracle Industry Applications Products Suite. This vulnerability is not remotely exploitable without authentication.
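As a quick sanity check on the advisory's headline number, the per-product counts in the list above do add up to 38:

```python
# Per-product fix counts, as listed in Oracle's pre-release announcement.
fixes = {
    "Oracle Database": 16,
    "Oracle Application Server": 3,
    "Oracle E-Business and Applications Suite": 8,
    "PeopleSoft / JD Edwards": 4,
    "Oracle BEA Products": 6,
    "Oracle Industry Applications": 1,
}

total = sum(fixes.values())
print(total)  # 38, matching the "at least 38 security vulnerabilities" figure
```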

“Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update fixes as soon as possible,” the company said.


Tuesday, October 6, 2009

NIST maps out the emerging field of IT metrology

| Brett D. Arion |

Information technology security is a hot topic, but attention usually focuses on the lack of it. What is missing is an objective, quantifiable way to effectively measure it.

“Security can be looked at in different ways by different people,” said Wayne Jansen, a computer scientist at the National Institute of Standards and Technology’s IT Laboratory. There is quality control for code developers, the process of deploying a system, and its maintenance by users. “These are all different aspects,” and they do not lend themselves to traditional methods of measurement used in physical science, he said.

Jansen has examined the status of efforts to develop security metrics, identified challenges and suggested a course for future research in a recent NIST report, "Directions in Security Metrics Research."

There have been a number of efforts to establish metric systems for security, including the international Common Criteria, the Defense Department’s Trusted Computer System Evaluation Criteria, the European Communities’ Information Technology Security Evaluation Criteria, and the International Systems Security Engineering Association’s Systems Security Engineering Capability Maturity Model.

“Each attempt has obtained only limited success,” Jansen wrote. “Compared with more mature scientific fields, IT metrology is still emerging.”

The issue is complicated because security means different things to different people and organizations. “Security is risk- and policy-dependent from an organizational perspective; the same platform populated with data at the same level of sensitivity, but from two different organizations, could be deemed adequate for one and inadequate for the other,” he wrote. “The implication is that establishing security metrics that could be used for meaningful system comparisons between organizations would be extremely difficult to achieve.”

There is no standardized terminology for discussing or describing security, Jansen said. The Federal Information Security Management Act's criteria for rating systems as low, medium or high impact is subjective, and assigning them numerical rankings can blur the distinction between qualitative and quantitative measures.

It is difficult to remove subjectivity from IT security. Security measures can be correctly implemented yet still not be effective. “Effectiveness requires ascertaining how well the security-enforcing components tie together and work synergistically, the consequences of any known or discovered vulnerabilities, and the usability of the system,” the report states. In other words, what is effective for one system might not be for another.

Are meaningful security metrics even achievable?

“The answer is yes,” Jansen said, “but they might not be as satisfying as you want.”

He identified two broad areas of research — process and organizational maturity — that focus on the care and maintenance of IT systems, and the intrinsic characteristics or properties of the systems. “I think we can make good progress on the maturity aspect,” he said. Research on security characteristics is not as far along.

There is not likely to be a single system of security metrics anytime soon because of the need to address different elements of security separately. Jansen cited the Federal Information Processing Standard 140 for cryptographic modules as a workable metric “because it bites off a manageable chunk.” The much broader Common Criteria, on the other hand, is less effective, he said.

“The issue of how to do this is going to be with us for the foreseeable future,” he said.

Challenges to effective security metrics identified in the report include:

  • The lack of good estimators of system security.
  • The entrenched reliance on subjective, human, qualitative input.
  • The protracted and delusive means commonly used to obtain measurements.
  • The dearth of understanding and insight into the composition of security mechanisms.

Promising lines of research for improved metrics include:

  • Formal models of security measurement and metrics.
  • Historical data collection and analysis.
  • Artificial intelligence assessment techniques.
  • Practicable concrete measurement methods.
  • Intrinsically measurable components.

Avert Labs Paper: Inside the Password-Stealing Business: the Who and How of Identity Theft

| Brett D. Arion |

Avert Labs has published a new research paper, “Inside the Password-Stealing Business: the Who and How of Identity Theft.” With so many financial transactions occurring online today, stealing passwords to banks and other accounts is an irresistible attraction for cybercriminals. Thieves around the world use Trojans and other malware to grab user credentials, which they can resell to their crooked clientele while supporting their own illegal businesses.

The report uncovers technical details on the capabilities, level of sophistication, and inner workings of the most infamous contemporary password-stealing malware families such as Zbot, Sinowal, and Steam Stealer. We also discuss the prevalence of such malware, distribution channels, how criminals keep up with the changes banks make to keep transactions secure, and how they exploit today’s economic climate. Offering illegal “work at home” opportunities to desperate job seekers is one way criminals lure the unsuspecting into furthering their illegal activities.

You’ll find the report here in English and eight more languages.

Want to peek inside another one of these infamous password thieves? Let’s have a look at SilentBanker.

Our story starts with browser helper objects (BHOs), which are plug-ins for Internet Explorer. BHOs let developers extend the browser's functionality without having access to the browser's source code. That doesn't sound too bad: users aren't forced to rely on the browser's developers to implement new features. Even if you're not a developer, it seems useful to be able to download any desired extension, whether you want to customize the user interface or read PDF documents directly in the browser, doesn't it? Well, yes and no. The answer depends on the trustworthiness of the BHO's author, the server you download from, and the DNS server that resolved it. Unfortunately, not all BHOs are safe applications—the bad guys are always looking for ways to turn a useful feature into a vehicle for their malware, hunting for valuable information such as credentials. Silentbanker is one of those nasty password stealers that comes in the form of a BHO.

This is one "helper" you don't want on your side: once installed and automatically loaded by the browser, Silentbanker can intercept communication between your browser and the Internet. The malware is highly configurable and targets online banking users. Silentbanker will not only recognize and monitor online banking activity but may also modify HTML pages to include additional code or to change a transfer's details. The data thief acts as a "man in the middle" to inspect and modify data before it is encrypted and sent to a server, and after it is received from the server and decrypted. Still think you're secure with SSL? Unfortunately that's not the case with this freeloader sitting on top of the browser.
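To make the "man in the middle" idea concrete, here is a deliberately simplified Python sketch of pre-encryption interception. Every name in it is invented for illustration; it does not reflect Silentbanker's actual internals.

```python
# Illustrative-only sketch of a "man in the browser": a detour wraps the
# routine that hands form data to the encryption layer, so the thief sees
# plaintext credentials even though the wire traffic is protected by SSL.

captured = []  # what the data thief walks away with

def send_encrypted(form_data):
    """Stand-in for the browser's real 'encrypt and transmit' routine."""
    return "SSL<" + repr(sorted(form_data.items())) + ">"

_original_send = send_encrypted  # keep a reference to the original routine

def hooked_send(form_data):
    captured.append(dict(form_data))              # inspect before encryption
    if "destination" in form_data:                # optionally rewrite a transfer
        form_data = {**form_data, "destination": "MULE-ACCOUNT"}
    return _original_send(form_data)              # pass it on; victim sees nothing

send_encrypted = hooked_send  # install the detour

wire = send_encrypted({"user": "alice", "password": "hunter2", "destination": "DE99"})
```

The victim's page behaves normally and the traffic really is encrypted; the theft and the tampering both happened one step earlier.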

Silentbanker BHO

The screenshot above shows a pseudocode representation of Silentbanker's malicious core. The code is responsible for detouring relevant operating system functions to its own malicious routines. This malware effectively kills security applications such as host intrusion prevention systems and others. Before its own malicious detours are installed, the malware disables any previously installed detours by reading a Windows library's original code from the hard disk ("read_whole_file"), and then mapping it back to the process' memory ("remove_API_hooks")—thus rendering security products relying on the same technology ineffective.
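The hook-removal trick can be sketched in miniature. This is a pure-Python analogy with invented names, not real Windows internals: the point is only that restoring the pristine implementation silently disables a previously installed monitoring detour, just as remapping a library's on-disk code disables inline API hooks.

```python
# A security product detours an "OS routine" through its own monitor; the
# malware then restores the original code, and the monitor goes blind.

def pristine_create_file(name):          # stands in for the on-disk DLL code
    return "handle:" + name

api_table = {"CreateFile": pristine_create_file}   # live, in-memory entry point

events = []

def hips_detour(name):                   # host-IPS hook watching file opens
    events.append(name)
    return pristine_create_file(name)

api_table["CreateFile"] = hips_detour    # security product installs its hook

# Malware's counter-move: overwrite the live entry with the original code,
# exactly as if mapping the library's on-disk image back over memory.
api_table["CreateFile"] = pristine_create_file

handle = api_table["CreateFile"]("secret.txt")     # runs unmonitored
```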

Computer scientists successfully boot one million Linux kernels as virtual machines

| Brett D. Arion |
September 25th, 2009

Sandia National Laboratories computer scientists Ron Minnich (foreground) and Don Rudish (background) have successfully run more than a million Linux kernels as virtual machines, an achievement that will allow cybersecurity researchers to more effectively observe behavior found in malicious botnets. They utilized Sandia's powerful Thunderbird supercomputing cluster for the demonstration. (Photo by Randy Wong)
Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.
The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.
Sandia scientists used virtual machine (VM) technology and the power of its Thunderbird supercomputing cluster for the demonstration.
Running a high volume of VMs on one supercomputer — at a similar scale as a botnet — would allow cyber researchers to watch how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.
Previously, Minnich said, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.
A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.
“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”
A virtual machine, originally defined by researchers Gerald J. Popek and Robert P. Goldberg as “an efficient, isolated duplicate of a real machine,” is essentially a set of software programs running on one computer that, collectively, acts like a separate, complete unit. “You fire it up and it looks like a full computer,” said Sandia’s Don Rudish. Within the virtual machine, one can then start up an operating system kernel, so “at some point you have this little world inside the virtual machine that looks just like a full machine, running a full operating system, browsers and other software, but it’s all contained within the real machine.”
The Sandia research, two years in the making, was funded by the Department of Energy’s Office of Science, the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program and by internal Sandia funding.
To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computer cluster, known as Thunderbird. To arrive at the one-million-kernel figure, Sandia's researchers ran one kernel in each of 250 VMs on every one of Thunderbird's 4,480 physical machines. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia's Albuquerque site that maintains Thunderbird and prepared it for the project.
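Spelled out, the arithmetic behind the headline figure (assuming, as the setup implies, 250 VMs on each physical node):

```python
# The one-million figure follows directly from the experiment's layout.
vms_per_node = 250    # one Linux kernel booted inside each virtual machine
nodes = 4480          # Thunderbird's physical machines
kernels = vms_per_node * nodes
print(kernels)        # 1120000 -- comfortably past the one-million mark
```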
The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.
“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”
Sandia’s researchers plan to take their newfound capability to the next level.
“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”
Provided by Sandia National Laboratories

Express Scripts: 700,000 notified after extortion

| Brett D. Arion |


Last November, the company reported that someone had threatened to expose millions of customer prescription records, but it has come under criticism for being vague about how many of its customers' records were accessed. Now the company says that about 700,000 have been notified.

September 30, 2009 (IDG News Service) -- Nearly a year after being hacked by computer extortionists, pharmacy benefits management company Express Scripts now says hundreds of thousands of members may have had their information breached because of the incident.

The trouble started for the St. Louis-based company in October 2008, when it received a letter containing the names, birth dates, Social Security numbers and prescription data of 75 patients. The extortionists threatened to make the information public if they weren't paid. Express Scripts refused and instead notified the U.S. Federal Bureau of Investigation. The company is now offering a US$1 million reward for information leading to the arrest of the perpetrators.

Express Scripts has not said how the criminals managed to get hold of the data, but in an e-mailed statement the company said that "there have been no reported cases of misuse of member information resulting from the incident."

In a June court filing, the company said that three of its customers have also been approached by the extortionists.

Toyota is one of those companies. In November 2008 it received a letter that was similar to the October Express Scripts threat, from extortionists who threatened to release information on Toyota employees and their dependents.

Express Scripts manages pharmacy benefits for corporations and government agencies. It reported $22 billion in revenue last year.

Customers are not the only people who have been approached by the criminals. A few weeks ago, an unidentified law firm was also provided with more records, according to Express Scripts spokeswoman Maria Palumbo. That firm turned over the records to the U.S. FBI, which in turn informed Express Scripts.

"In late August 2009, Express Scripts was informed by the FBI that the perpetrator of the crime had recently taken action to prove that he possesses more member records from the same period as those identified in the 2008 extortion attempt," the company said on its Web site. "Express Scripts is in the process of notifying these members."

In May, Washington, D.C., law firm Finkelstein Thompson brought a class-action suit against Express Scripts on behalf of members whose data was stolen. Attorneys at the firm did not return messages seeking comment for this story.

It's troubling that Express Scripts has apparently been unable to figure out exactly whose data was accessed, said Dissent, a health care professional who runs the Web site and uses a pseudonym to keep her privacy advocacy separate from her professional practice. "Given that they may not really yet know the full scope of this incident and that we really cannot be sure that the extortionist didn't acquire the entire database, it would seem prudent to notify everyone whose records were in the database," she wrote in an e-mail interview.

"This breach is certainly not the largest breach involving personal health information that we've seen," she said. "But it is nevertheless a very troubling breach because it signals that cybercriminals are recognizing the value of databases containing patient information even where no financial or credit card information is included."

Thursday, September 10, 2009

MS Windows 2000 SP4 and XP Owners beware, "No patch for you" for MS09-048

| Brett D. Arion |


Microsoft this week kicked off what will be one of the most debated issues it has faced in some time, especially given that organizations that pay maintenance on their software are supposed to get patches for that software as long as Microsoft supports it. When Microsoft released MS09-048 to address certain DoS/remote code execution issues, it did not include patches for the Windows 2000 SP4 and Windows XP operating systems, citing this in the FAQ for the update:

"If Microsoft Windows 2000 Service Pack 4 is listed as an affected product, why is Microsoft not issuing an update for it?

The architecture to properly support TCP/IP protection does not exist on Microsoft Windows 2000 systems, making it infeasible to build the fix for Microsoft Windows 2000 Service Pack 4 to eliminate the vulnerability. To do so would require rearchitecting a very significant amount of the Microsoft Windows 2000 Service Pack 4 operating system, not just the affected component. The product of such a rearchitecture effort would be sufficiently incompatible with Microsoft Windows 2000 Service Pack 4 that there would be no assurance that applications designed to run on Microsoft Windows 2000 Service Pack 4 would continue to operate on the updated system. The impact of a denial of service attack is that a system would become unresponsive due to memory consumption. However, a successful attack requires a sustained flood of specially crafted TCP packets, and the system will recover once the flood ceases. Microsoft recommends that customers running Microsoft Windows 2000 Service Pack 4 use a firewall to block access to the affected ports and limit the attack surface from untrusted networks.

If Windows XP is listed as an affected product, why is Microsoft not issuing an update for it?
By default, Windows XP Service Pack 2, Windows XP Service Pack 3, and Windows XP Professional x64 Edition Service Pack 2 do not have a listening service configured in the client firewall and are therefore not affected by this vulnerability. Windows XP Service Pack 2 and later operating systems include a stateful host firewall that provides protection for computers against incoming traffic from the Internet or from neighboring network devices on a private network. The impact of a denial of service attack is that a system would become unresponsive due to memory consumption. However, a successful attack requires a sustained flood of specially crafted TCP packets, and the system will recover once the flood ceases. This makes the severity rating Low for Windows XP. Windows XP is not affected by CVE-2009-1925. Customers running Windows XP are at reduced risk, and Microsoft recommends they use the firewall included with the operating system, or a network firewall, to block access to the affected ports and limit the attack surface from untrusted networks.

Does this update completely remove the vulnerabilities, TCP/IP Zero Window Size Vulnerability - CVE-2008-4609 and TCP/IP Orphaned Connections Vulnerability - CVE-2009-1926?
Since the denial of service vulnerabilities, CVE-2008-4609 and CVE-2009-1926, affect the TCP/IP protocol itself, the updates for Windows Server 2003 and Windows Server 2008 do not completely remove the vulnerabilities; the updates merely provide more resilience to sustain operations during a flooding attack. Also, these denial of service vulnerabilities can be further mitigated through the use of NAT and reverse proxy servers, further lowering the severity of this issue on client workstations."

The fact that they are not patching the vulnerability for these supported products is one thing, but the following statement is just comical:

"The impact of a denial of service attack is that a system would become unresponsive due to memory consumption. However, a successful attack requires a sustained flood of specially crafted TCP packets, and the system will recover once the flood ceases. Microsoft recommends that customers running Microsoft Windows 2000 Service Pack 4 use a firewall to block access to the affected ports and limit the attack surface from untrusted networks."

Is this not the case for any denial of service attack? Systems always recover once the flood ceases. And is it not best practice anyway to have a firewall limiting the attack surface from untrusted networks? It is also hard to believe that the same issue can be "Critical" for newer operating systems but only "Low" or "Important" for older ones. Is this not backwards?

OK, so they are not patching the issue; maybe we should consider upgrading our servers to Windows Server 2003 or Windows Server 2008. Wait -- they say they still do not fully fix the issues in those products either:

"Since the denial of service vulnerabilities, CVE-2008-4609 and CVE-2009-1926, affect the TCP/IP protocol itself, the updates for Windows Server 2003 and Windows Server 2008 do not completely remove the vulnerabilities; the updates merely provide more resilience to sustain operations during a flooding attack."

So an issue from 2008 is included here, but it really is not fixed -- just made more resilient.

For some reason, I have a sneaking suspicion that this is the first of many such decisions to come until these operating systems go into the Extended Support phase. Maybe customers should ask for some of their maintenance costs to be refunded, since no patch is being created under their support contracts. It will be interesting to see whether any breach-of-contract claims, or other litigation, arise in the event of a breach or exploit of these vulnerabilities.

Disclaimer: The opinions expressed in this article are those of the author and do not represent the views of Hackers Center or its affiliates.


Avert Labs Releases A New Version of McAfee FileInsight

| Brett D. Arion |


Today Avert released the new version 2.1 of McAfee FileInsight. You can download a free copy from the Avert Tools site. FileInsight is a handy integrated tool environment for Web site and file analysis. It offers hex editing and syntax highlighting, and comes with several built-in decoders, a calculator, a disassembler, JavaScript scripting support, a Python-based plugin system, and much more.

Let’s go through some stages of an exemplary malware attack to highlight some of its analysis features – but don’t try this stunt at home, unless you know what you’re doing; a safe, isolated lab environment is absolutely mandatory for any such research work.

The above screen shows the initial malicious Web site, which tries to determine your browser and redirect to one or more matching exploits. One of them is an exploit for the Microsoft DirectShow Video ActiveX Control vulnerability (MS09-032), stopped as "Exploit-MSDirectShow.b" by McAfee VirusScan and as "BehavesLike.Exploit.CodeExec.EBEO" by McAfee Gateway Anti-Malware.

Getting to the actual shellcode takes some JavaScript unpacking steps. The JavaScript code is spread over several script files and custom encoded. In the above screen, we take that malicious code into FileInsight's Scripting window and deobfuscate it there.

Once we’re down to the shellcode level, we can directly look at the shellcode in the built-in disassembler. The Disassembler window also features recursive traversal to come up with branch labels automatically.

It CALLs-to-POP in order to determine the actual memory location of the obfuscated payload, sets up a loop to decode the payload, and then executes it in order to download an XOR-obfuscated executable that turns out to be a UPX-packed backdoor (stopped by Artemis and by McAfee Gateway Anti-Malware as "LooksLike.Win32.Suspicious.C").
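The XOR de-obfuscation step is simple enough to sketch. The key and the sample bytes below are made up, but the round-trip property is exactly what such shellcode relies on:

```python
# Made-up key and bytes; the round trip (x ^ k ^ k == x) is the whole trick.
KEY = 0x5A  # hypothetical single-byte key

def xor_decode(blob, key):
    """Apply a single-byte XOR to every byte; the operation is its own inverse."""
    return bytes(b ^ key for b in blob)

obfuscated = xor_decode(b"MZ\x90\x00", KEY)   # disguise a fake PE header
recovered = xor_decode(obfuscated, KEY)       # same key restores it

print(recovered[:2])  # b'MZ' -- looks like a Windows executable again
```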

Advanced users may also want to look into FileInsight's Python-based plugin system, but be warned: writing plugins in a language as refreshingly simple as Python has a certain addiction potential! ;-)

FileInsight is available here.


Wednesday, September 2, 2009

Absolute Poker Scandal

| Armando Romeo |

In the Summer of 2007, a disturbing trend was occurring on the poker site Absolute Poker. Four accounts were consistently winning large amounts of money in high stakes games by playing a clearly losing style of poker. Players complained at first and when nothing was done to answer their concerns, the players started their own investigation. This led to uncovering the super user account scandal at Absolute Poker.

From Megaloser to Megawinner Overnight

Four accounts that were significant losers in 2006 returned to play in 2007. This time, the players were posting huge wins although they were playing what should have been losing poker. The accounts were those of "potripper", "Steamroller", "Doubledrag", and "Graycat." The accounts would play very short sessions, post huge wins, and then leave. The accounts also never played together at the same time.

Players started doing analysis of hands and win rates and saw that the win rates were well above what even the best players on the site should be able to accomplish. On September 13, 2007, potripper won a $1,000 buy-in event on Absolute Poker, and Marco "CrazyMarco" Johnson insisted that there was cheating going on. What convinced him was when CrazyMarco moved all-in with a 9-high bluff only to be called by potripper, who was holding ten-high.

After his loss, CrazyMarco contacted Absolute Poker requesting a hand history. What they sent by mistake was a copy of all the hands, including the hole cards of all players. Upon looking at the data, it was determined that potripper won every showdown in the tournament, folded when he was behind and raised when he wasn't, and saw nearly every flop in the event unless an opponent held at least pocket queens.


The file that CrazyMarco received was further analyzed, and the IP addresses and emails of the players and observers in the tournament were traced. It was noticed that during potripper's tournament run, user363 joined potripper's table and stayed there during the entire run. The account was then traced back to the offices of Absolute Poker in Costa Rica.

Absolute Poker First Denies and Then Recants

At first, Absolute Poker denied that there was any wrongdoing going on at the site. On October 12, 2007, they stated, "We have determined with reasonable certainty that it is impossible for any player to see the hole cards as was alleged. There is no part of the technology that allows for a 'superuser' account."

Allegations of a cover-up flew and as the company's name was continually smeared around the world, Absolute Poker finally came forward and admitted there was a problem. The following is part of a statement admitting the security breach:

"Based upon our preliminary findings, it appears that the integrity of our poker system was compromised by a high-ranking trusted consultant employed by AP whose position gave him extraordinary access to certain security systems. As has been speculated in several online forums, this consultant devised a sophisticated scheme to manipulate internal systems to access third-party computers and accounts to view hole cards of other customers during play without their knowledge."

Kahnawake Investigation

The Kahnawake Gaming Commission ran an investigation and released their findings on January 11th, 2008. They confirmed that the potripper account, as well as several others, had indeed used a superuser account to gain access to opponents' hole cards. They required Absolute Poker to refund players any money lost to the accounts and also fined the company $500,000. In addition, they would continue to monitor the site for two years.

As we know, this was just the beginning, as another scandal would unfold with Absolute Poker's sister site, UltimateBet. Since the findings of the Kahnawake Gaming Commission, Absolute Poker and UltimateBet have joined the Cereus Network to ensure better security and gaming integrity.

The parties responsible for the cheating have never been brought to justice, and it is suspected that they never will be. As a result, Absolute Poker has taken a black eye in public perception. The site still has a loyal following and has tried to move forward since the scandal; however, the nagging thought will always be in the back of people's minds, wondering if there will be a SonofPotripper in the future.

Thursday, August 6, 2009

Researchers find large-scale XML library flaw - Sun, Apache and Python vulnerable

| Brett D. Arion |
Researchers from Codenomicon, working with CERT-FI in Finland, have uncovered a series of flaws in the eXtensible Markup Language (XML) libraries that could pose a serious security risk. The flaws uncovered deal with the way open-source programs process XML functions.

The flaws could be exploited by crafting a specially designed XML file, or by sending specific requests to XML engines.

Application makers such as Sun Microsystems, Apache and Python are all expected to release new versions of their XML libraries to counter the problem in the next 24 hours. The researchers waited until such library upgrades were available before releasing news of the vulnerabilities.


Tuesday, June 30, 2009

Cybercrime spreads on Facebook

| Brett D. Arion |
BOSTON (Reuters) - Cybercrime is rapidly spreading on Facebook as fraudsters prey on users who think the world's top social networking site is a safe haven on the Internet.

Lisa Severens, a clinical trials manager from Worcester, Massachusetts, learned the hard way. A virus took control of her laptop and started sending pornographic photos to colleagues.

"I was mortified about having to deal with it at work," said Severens, whose employer had to replace her computer because the malicious software could not be removed.

Cybercrime, which costs U.S. companies and individuals billions of dollars a year, is spreading fast on Facebook because such scams target and exploit those naive to the dark side of social networking, security experts say.

While News Corp's (NWSA.O) MySpace was the most-popular hangout for cyber criminals two years ago, experts say hackers are now entrenched on Facebook, whose membership has soared from 120 million in December to more than 200 million today.

"Facebook is the social network du jour. Attackers go where the people go. Always," said Mary Landesman, a senior researcher at Web security company ScanSafe.

Scammers break into accounts posing as friends of users, sending spam that directs them to websites that steal personal information and spread viruses. Hackers tend to take control of infected PCs for identity theft, spamming and other mischief.

Facebook manages security from its central headquarters in Palo Alto, California, screening out much of the spam and malicious software targeting its users. That should make it a safer place to surf than the broader Internet, but criminals are relentless and some break through Facebook's considerable filter.

The rise in attacks reflects Facebook's massive growth. Company spokesman Simon Axten said that as the number of users has increased, the percentage of successful attacks has stayed about the same, remaining at less than 1 percent of members over the past five years.

By comparison, he said, FBI data shows that about 3 percent of U.S. households were burglarized in 2005.

"Security is an arms race, and we're always updating these systems and building new ones to respond to new and evolving threats," Axten said.

When criminal activity is detected on one account, the site quickly looks for similar patterns in others and either deletes bad emails or resets passwords to compromised accounts, he said. Facebook is hiring a fraud investigator and a fraud analyst, according to the careers section of its website.


But ultimately Facebook says its members are responsible for their own security.

"We do our best to keep Facebook safe, but we cannot guarantee it," Facebook says in a warning in a section of the site on the terms and conditions of use, which members may not bother to read.

"People implicitly trust social networking sites because they don't understand the real threats and dangers. It's like walking down the street and trusting everybody you meet," said Randy Abrams, a researcher with security software maker ESET.

Amy Benoit, a human resources manager in Oceanside, California, said she may stop using Facebook altogether after she became entangled in a popular scam: A fraudster sent instant messages to a friend saying that Benoit had been attacked in London and needed $600 to get home.

Yale University last week warned its business school students to be careful when using Facebook after several of them turned in infected laptops.

One of the most insidious threats is Koobface, a virus that takes over PCs when users click on links in spam messages. The virus turned up on MySpace about a year ago, but its unknown authors now focus on spreading it through Facebook, which is struggling to wipe it out.

"Machines that are compromised are at the whim of the attacker," said McAfee Inc (MFE.N) researcher Craig Schmugar.

McAfee, the world's No. 2 security software maker, says Koobface variants almost quadrupled last month to 4,000. "Because Facebook is a closed system, we have a tremendous advantage over e-mail. Once we detect a spam message, we can delete that message in all inboxes across the site," said Facebook's Axten.

Facebook's Axten said the site does not know how many users have been infected by Koobface.

A new website that follows Facebook news recently identified a vulnerability that made it possible to access any user's private information using a simple hack. The loophole has since been closed.

"We don't have any evidence to suggest that it was ever exploited for malicious purposes," Axten said.

Hackers even find ways to get into accounts of savvy users like Sandeep Junnarkar, a journalism professor at City University of New York and former tech reporter. Last month he learned his account had been hacked as he waited for a flight to Paris. He quickly changed his password before boarding.

"Am I surprised that it happened? Not really," he said.


FBI Defends Disruptive Raids on Texas Data Centers

| Brett D. Arion |
The FBI on Tuesday defended its raids on at least two data centers in Texas, in which agents carted out equipment and disrupted service to hundreds of businesses.

The raids were part of an investigation prompted by complaints from AT&T and Verizon about unpaid bills allegedly owed by some data center customers, according to court records. One data center owner charges that the telecoms are using the FBI to collect debts that should be resolved in civil court. But on Tuesday, an FBI spokesman disputed that charge.

"We wouldn’t be looking at it if it was a civil matter," says Mark White, spokesman for the FBI’s Dallas office. "And a judge wouldn’t sign a federal search warrant if there wasn’t probable cause to believe that a fraud took place and that the equipment we asked to seize had evidence pertaining to the criminal violation."

In interviews with Threat Level, companies affected by the raids say they’ve lost millions of dollars in equipment and business after the FBI hauled off gear belonging to phone and VoIP providers, a credit card processing company and other businesses that housed equipment at the centers. Nobody has been charged in the FBI’s investigation.

According to the owner of one co-location facility, Crydon Technology, which was raided on March 12, FBI agents seized about 220 servers belonging to him and his customers, as well as routers, switches, cabinets for storing servers and even power strips. Authorities also raided his home, where they seized eight iPods, some belonging to his three children, five Xboxes, a PlayStation 3 system and a Wii gaming console, among other equipment. Agents also seized about $200,000 from the owner's business accounts, $1,000 from his teenage daughter's account and more than $10,000 in a personal bank account belonging to the elderly mother of his former comptroller.

Mike Faulkner, owner of Crydon, says the seizure has resulted in him losing millions of dollars in revenue. It’s also put many of his customers out of business or at risk of closure.

The raids are the result of complaints filed by AT&T and Verizon about small VoIP service providers whom the telecoms say owe them money for connectivity services. But instead of focusing the raid on those companies, Faulkner and others say the FBI vacuumed up equipment and data belonging to hundreds of unrelated businesses.

In addition to Crydon, the data center of Core IP Networks was raided last week. Customers who went to Core IP to try to retrieve their equipment were threatened with arrest, according to an announcement posted online by the company’s CEO, Matthew Simpson. According to Simpson, the FBI is investigating a company that purchased services from Core IP in the past but had never co-located equipment at Core IP’s address. Simpson reported that 50 businesses lost access to their e-mail and data as a result of the raid. Some of those clients are phone companies, and the loss of their equipment has meant that some of their customers lost emergency 911 access.

"If you run a data center, please be aware that in our great country, the FBI can come into your place of business at any time and take whatever they want, with no reason," Simpson wrote.

Faulkner says the FBI seized about $2.5 million from Simpson’s personal bank account. Simpson did not respond to a request for comment.

Faulkner and others say that the FBI agent who led the raid, Special Agent Allyn Lynd from the Dallas field office, warned them not to discuss the raid with each other or with the press.

But a 39-page affidavit (.pdf) related to the Crydon raid provides a convoluted account of the investigation. It alleges that a number of conspirators, some of whom may have connections to Faulkner, conspired to obtain agreements from AT&T and Verizon to purchase connectivity services from the telecoms. Several documents used to provide proof of business ownership and financial stability were forged, according to the affidavit. For example, the affidavit claims that one of the conspirators, named Ronald Northern, sent AT&T a bill from Verizon to show that he had a history of paying for services on time. The bill was allegedly forged with Verizon's logo — which the company claims is a trademark infringement — and the corporation number the conspirator used actually belonged to a different Verizon customer.

Northern could not be reached for comment.

The affidavit claims that Faulkner, Northern and others committed mail and wire fraud, criminal e-mail abuse (stemming from separate allegations of spamming), criminal copyright infringement and criminal use of fraudulent documents. The affidavit mentions several companies that Faulkner has been connected to including, Crydon, Premier Voice and Union Datacom.

But mixed in with these allegations is a separate tale that hints at the larger story behind the raid. AT&T and Verizon say they’re owed about $6 million in fees from VoIP service providers who used servers that were co-located at Crydon and the other data centers. The telecoms claim that these VoIP providers used up more than 120 million "physical connectivity minutes" without paying for them, and that attempts by AT&T and Verizon to collect on the debts proved fruitless.

"Based on my investigation and that of AT&T and Verizon," writes Special Agent Lynd in the affidavit, "I believe individuals associated with Lonestar Power and Premier Voice defrauded AT&T and Verizon out of hundreds of millions of minutes of physical connectivity service and significant revenue by means of the submission of false/fraudulent credit information and other false representations."

Faulkner, who was a part owner of Premier Voice before selling it about a year ago, acknowledges that Premier owed money to AT&T at one time — though he says he’s not certain it was for interconnection. He says that debt was assumed by the new owner when he sold the company. Either way, he says, this would be categorized as corporate debt, not fraud.

"There’s a big difference between stealing money and owing money," he says.

He says he often invests in troubled companies that are carrying debt when he buys them.

"Usually you settle the debt," he says. "But AT&T never contacted me about owing money. Verizon never contacted me."

Faulkner says the two telecoms have used the FBI to seize equipment to obtain evidence through a criminal investigation instead of pursuing the companies through civil litigation and the discovery process. And instead of targeting the investigation specifically at the VoIP companies, he says the FBI swept in everyone who had servers in the same place where the VoIP servers were located. As a result, all of Crydon Technology’s equipment was seized, as was the equipment of numerous businesses that had the bad luck to own servers running out of Crydon’s facility.

"They’re destroying more and more customers and it just doesn’t seem to make sense," Faulkner says. "They’ve done a horrible amount of damage and have been so barbaric in the way they’ve shut things down. If they just picked some random guy off the street to do this investigation, he could have done a better job than the FBI did."

Among more than 300 businesses affected by the raid on Crydon were Intelmate, which provides inmate calling services for prisons and jails and had about $100,000 in equipment seized in the raid; a credit card processing company that had just become PCI compliant and was in the process of signing on its first customers; Primary Target, a video game company that makes first-person shooters; a mortgage brokerage; and a number of VoIP companies and international telecoms that provided customers with service to the U.S. through servers belonging to a separate company Faulkner ran called Intelivox. These customers essentially lost connectivity to the U.S. after the raid, Faulkner says.

Faulkner says the FBI appears to have assumed that all the servers located at Crydon’s address belonged to him, and didn’t seem to understand the concept of co-location.

The seized data included transactional records for companies, which means the companies won’t be able to bill customers for services already rendered before the raid.

"All of our clients will have to refund their customers, and we’re in the hole now to refund our customers," says Faulkner. "I could tell the FBI agent had never even considered that. He just said, ‘Well, that’s your problem.’"

The owner of a credit card processing company who had servers at Crydon says he lost about $35,000 in equipment in the seizure, and that the survival of his company is at risk until he secures a new location. He asked that he and his company not be named because the company is in the process of securing business partners to launch its processing service. He fears that news about the disruption to his business operation could lead potential partners to avoid contracting with him. To keep his launch on track, he’s had to purchase about $32,000 in new equipment.

He said when he tried to explain to an FBI agent that some of the servers that were seized belonged to him and not to Faulkner, the FBI agent implied he was lying.

"We were treated like we were criminals," he said. "They assumed there was no legitimate business in there."

In addition to the transaction servers taken from Crydon’s facility, he also lost telephone service for his company after the FBI raided Core IP, which housed a business that was providing his company with VoIP.

FBI spokesman White says the equipment seizures were necessary.

"My understanding is that the way these things are hooked up is that they’re interconnected to each other," he says. "Company A may be involved in some criminal activity and because of the interconnectivity of all these things, the information of what company A is doing may be sitting on company B or C or D’s equipment."

White says the FBI is working with affected companies to provide them with copies of seized data they need to run their businesses.

"It’s not that we’re doing nothing to assist them," White says. "We’ve repeatedly asked the companies to call and provide us with the information we need so we can get the info they need back to them. It is a time-consuming process."

The owner of the card-processing company, however, says the FBI has been "completely unresponsive" to the needs of Crydon customers caught up in the raid. An agent gave him a fax number to send the FBI details about the equipment that belongs to him, but the fax number didn’t work. Then, he says, the agent in charge took a vacation.

"They were all unavailable after they effectively seized all of our equipment," he says.

An agent told the customer that no equipment would be released until agents could determine if it was used in criminal activity. And if it was used for criminal activity, it wouldn’t be released until after a trial.

"Our equipment could be there indefinitely," the customer said. "There’s been no due process…. I consider this to be an issue for anyone owning a data center right now. That they have this much power and can take anyone just because your equipment is inside a facility…. They’re supposed to limit their search and seizure to the owner of the equipment."

Faulkner says he’s managed to replicate mail servers and some functionality for some customers and is building up new business resources elsewhere — this time offshore in Panama, Mexico and Canada, where the FBI would have trouble seizing servers in the future. The Electronic Frontier Foundation has contacted him to investigate the FBI’s possible violation of due process.

Faulkner says when he visited the FBI’s office after the raid, he found numerous cubicles stacked full of servers seized in other raids that were waiting for someone to examine them. The irony, he says, is that in the case of his servers the data was all hardware encrypted.

"It would take a lot of NSA time to crack just one of them," Faulkner says.

Many of the allegations against Faulkner are based on claims from an unidentified informant who told the FBI that he used to work for Faulkner, and witnessed many criminal acts Faulkner committed. The witness told authorities he was "unaware of any legitimate business being run by Faulkner and that as far as he/she knew all of his income was derived from his illegal activities." The informant also claimed Faulkner used crack cocaine and methamphetamine and engaged in commercial spamming.

Faulkner says the unnamed informant is a former employee who was fired after failing to show up to work over an extended period.

"We paid him $70,000 to help us launch a VoIP business, and he never actually did anything," Faulkner says.

Faulkner says he doesn’t do drugs and he’s never conducted spamming nor been associated with spammers. He says when he has discovered spammers using ISP services he provided through companies he owned in the past, he would block their activities.
