A few weeks ago, Microsoft had its VML zero-day exploit; this week, it's Firefox's turn.

Obviously, as more people switch from Internet Explorer to Firefox, hackers are doing the same.

The thing that struck me about this particular problem was that the hackers gave Mozilla no advance warning prior to their presentation. Worse still:

The hackers claim they know of about 30 unpatched Firefox flaws. They don't plan to disclose them, instead holding onto the bugs.
Why are they holding on to them? One of the hackers explains:
what we're doing is really for the greater good of the Internet. We're setting up communication networks for black hats
For the greater good of the Internet? Yeah, right.

The scary thing, though, is that one of the hackers works for Six Apart, the company behind popular blogging software like Movable Type, LiveJournal and TypePad.

Six Apart needs to do some major damage control: fire this guy immediately and review all code he may have had access to. It doesn't exactly ease my mind to know my weblog is running on software he may have touched. Maybe it's time to move to WordPress...

UPDATE: it looks like this may have just been a hoax. Still not exactly good publicity for Six Apart though...

SharpReader 0.9.7.0 is now available at sharpreader.net.

Changes since the last version are:

  • Run internal browser in restricted security zone in order to make IE responsible for blocking restricted content, instead of just doing so by parsing and stripping tags.
  • Allow embedded CSS styles in item descriptions (previously disabled because of JavaScript exploits that are now caught by the security zone).
  • Support both <commentRSS> and <commentRss>, as there was some confusion as to the proper capitalization of this element.
  • Fixed linebreak handling for some feeds.
  • Improved handling of relative URLs in Atom feeds (like Sam Ruby's feed for instance); see the sketch after this list.
  • Now displaying enclosure links at the bottom of the item description.
  • Fixed installer to no longer complain if only .NET 2.0 is installed.
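For those curious about the relative-URL fix: in Atom, an entry's links may be relative to an xml:base attribute on the entry or on the feed element, and ultimately to the URL the feed was fetched from. The snippet below is just a minimal Python illustration of that resolution logic (shown with the Atom 1.0 namespace, and with made-up feed content and URLs); it is not SharpReader's actual code.

    from urllib.parse import urljoin
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"                    # Atom 1.0 namespace
    XML_BASE = "{http://www.w3.org/XML/1998/namespace}base"   # expanded name of xml:base

    def resolve_entry_links(feed_xml, feed_url):
        """Yield an absolute link URL for each entry, honoring nested xml:base."""
        root = ET.fromstring(feed_xml)
        # feed-level xml:base is itself resolved against the feed's own URL
        feed_base = urljoin(feed_url, root.get(XML_BASE, ""))
        for entry in root.iter(ATOM + "entry"):
            # an entry-level xml:base is resolved against the feed-level base
            entry_base = urljoin(feed_base, entry.get(XML_BASE, ""))
            link = entry.find(ATOM + "link")
            if link is not None:
                yield urljoin(entry_base, link.get("href", ""))

    feed = """<feed xmlns="http://www.w3.org/2005/Atom"
                   xml:base="http://example.org/blog/">
      <entry xml:base="2006/">
        <link href="hello-world.html"/>
      </entry>
    </feed>"""

    print(list(resolve_entry_links(feed, "http://example.org/index.atom")))
    # ['http://example.org/blog/2006/hello-world.html']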

In my "Spam Suspects" email folder today, I noticed some spam that used Google as a redirection service by linking to http://www.google.com/url?q=http://www.somespamsite.com. When trying this technique with some other site, I found that Google responds to this query with a 302 redirect to the site in question. Clearly, the spammer was using this system to lure people who trust Google into visiting their site.
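The behavior is easy to check for yourself. Below is a quick sketch using the third-party requests library, with example.com standing in for the spam site; assuming Google still behaves as described above, the response should come back as a 302 pointing straight at the target (they may of course tighten or change this endpoint at any time).

    # Ask Google's /url endpoint for the redirect, but don't follow it,
    # so we can see the 302 status and its Location header directly.
    import requests  # third-party: pip install requests

    resp = requests.get(
        "http://www.google.com/url?q=http://www.example.com/",
        allow_redirects=False,
    )
    print(resp.status_code, resp.headers.get("Location"))
    # expected (per the behavior described above): 302 http://www.example.com/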

What I don't understand is why Google needs a public redirect system like this that is so obviously open to abuse. The google.com/url?q=... page doesn't seem to accept anything but already fully specified URLs, so the sole purpose of this page is to do redirects.

The only reason I can think of for them needing a service like this is if they serve up one in a thousand search-results pages with redirect links, in order to log what people actually click on. If this were the case though, why not at least check the referrer to see if the user actually came from a google.com page? Am I missing something here?
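For what it's worth, the referrer check suggested above would only be a few lines of code. The sketch below is purely hypothetical (written with Flask, with a made-up route handler name) and is obviously not how Google implements this; it just shows the idea of refusing the redirect when the click didn't originate from a google.com page.

    from urllib.parse import urlparse
    from flask import Flask, abort, redirect, request

    app = Flask(__name__)

    @app.route("/url")
    def tracked_redirect():
        target = request.args.get("q", "")
        referrer_host = urlparse(request.referrer or "").hostname or ""
        # Only honor the redirect if the click actually came from a google.com page;
        # anyone linking here from an email or another site gets refused instead.
        if referrer_host == "google.com" or referrer_host.endswith(".google.com"):
            # (logging of the clicked link would go here)
            return redirect(target, code=302)
        abort(403)

A spammer could still forge the Referer header on a direct request, but that's a lot harder to pull off from a link in an email, which is exactly the scenario the check would stop.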

Scott Hanselman and his wife will be joining the walk for diabetes on May 6, 2006. They've set a goal of raising $10,000 for this event and could use your help in reaching that goal. I encourage all of you to go to Scott's blog to find out more about this worthy cause, or go directly to diabetes.org to make your donation. Thank you.

Silicon Valley Sleuth reported this morning that several stories about Google buying Sun had suspiciously made it to the front page of Digg.com. These "baseless rumours" were all submitted and promoted by a small group of Digg members who seemed to be working together.

I found this story through Digg itself, where it was posted on the front page. It later mysteriously disappeared from Digg though, and a URL search indicated that the story had since been marked as "buried".

The Digg Blog says the following about this burying feature:

Digg now allows logged in users to bury stories as 'inaccurate'. Once enough people bury the story, it is removed from the queue and the following banner is displayed at the top:

No banner is displayed though, which makes me wonder whether it was buried because enough people marked it as inaccurate (the same people who were promoting these Google+Sun stories, maybe?) or whether an admin removed it in an effort to hide how easily Digg can be manipulated. There's currently an update on Silicon Valley Sleuth stating that it seems unlikely the Digg system was actually manipulated in this case, but that update wasn't there when the story was buried, and it also doesn't make the theoretical possibility of this happening any less likely.

Due to the automated nature of Digg (which uses user votes to determine how prominently to display a story), it certainly seems possible for a group of people to get together and promote stories in order to get them onto the coveted front page, while at the same time burying stories they don't like. Worse than that, what would stop someone from automating this process and creating a couple hundred accounts for the purpose? To reduce suspicion, these accounts could digg random stories from time to time, or even undigg stories once they've made it to the front page.

If this is not going on already, I predict it soon will be. Compared to the trouble blog spammers go through in order to game sites like Google, DayPop or Blogdex, gaming Digg seems relatively easy. While Digg claims to have ways to prevent manipulation, one can't help but wonder whether they're enough, and I'm sure there are plenty of spammers out there just dying to beat the system...

Other Recent Entries

[03/09/2006] Why I Hate Frameworks Benji Smith in Why I Hate Frameworks: So this week, we're introducing a general-purpose tool-building factory factory factory, so that all of your different tool factory factories can be produced by a single, unified factory. The factory factory factory will produce only the tool factory factories that you actually need, and each of those factory factories will produce a single... (112 words)

[01/27/2006] NASA & RSS I'm not exactly sure why, but someone forwarded me the feedvalidator results of a NASA RSS feed, and they fail miserably. There's empty <pubDate>s, javascript <link>s, unencoded HTML in the <description> that's not valid XML and therefore makes the entire document invalid, etc. Quite frankly, coming from NASA, I found it somewhat amusing... I mean, it's not like this stuff... (67 words, 12 Comments)

[01/08/2006] Xbox 360 After seeing it in stock through their inventory locator, I drove by Circuit City yesterday to try and get myself an xbox 360. Since the store is about 20 minutes from my home, there was a pretty big chance they'd be gone by the time I got there, and they were indeed nowhere to be found in the showroom by... (374 words, 3 Comments)

[12/19/2005] Atom testcases and SharpReader Phil Ringnalda: Oh, and Luke? Pretty nice showing for a one-person unpaid hobby aggregator, mate Thanks Phil :-) I'm actually not that surprised SharpReader managed to get all these tests right; what does surprise me is that 9 out of 11 aggregators don't...... (45 words, 18 Comments)

[12/03/2005] Subscribed to Digg? better get written permission For all of you who are subscribed to Digg's RSS feed: it looks like you now need written permission in order to use an aggregator. From their new TOS: 8. you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission. for some reason they chose not... (180 words, 7 Comments)


Copyright © 2003 - 2008 Luke Hutteman