SharpReader is now available at

Changes since the last version are:

  • Run the internal browser in the restricted security zone, making IE responsible for blocking restricted content instead of just doing so by parsing and stripping tags.
  • Allow embedded CSS styles in item descriptions (was previously disabled because of javascript exploits that are now caught because of the security zone).
  • Support both <commentRSS> and <commentRss>, as there was some confusion about the proper capitalization of this element.
  • Fixed linebreak handling for some feeds.
  • Improved handling of relative URLs in Atom feeds (like Sam Ruby's feed, for instance).
  • Now displaying enclosure links at the bottom of the item description.
  • Fixed installer to no longer complain if only .NET 2.0 is installed.
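The relative-URL fix in that list boils down to standard base-URL resolution: a link like `2004/01/entry.html` has to be resolved against the feed's `xml:base` (if present), which itself resolves against the URL the feed was fetched from. SharpReader is a .NET application, so this is just a language-neutral sketch in Python, with all names and URLs hypothetical:

```python
from urllib.parse import urljoin

def resolve_link(feed_url, xml_base, href):
    """Resolve a possibly-relative Atom link: first resolve xml:base (if any)
    against the feed's own URL, then resolve the link against that base."""
    base = urljoin(feed_url, xml_base) if xml_base else feed_url
    return urljoin(base, href)

# A feed fetched from http://example.org/blog/index.atom with a relative entry link:
print(resolve_link("http://example.org/blog/index.atom", None, "2004/01/entry.html"))
# -> http://example.org/blog/2004/01/entry.html
```

Nested `xml:base` attributes would chain the same way, each one resolved against the base in effect above it.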

I'm not exactly sure why, but someone forwarded me the feedvalidator results of a NASA RSS feed, and it fails miserably.

There are empty <pubDate>s, javascript <link>s, unencoded HTML in the <description> that isn't valid XML and therefore makes the entire document invalid, etc.
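To see how little it takes (a hypothetical minimal feed below, with Python's standard XML parser standing in for any conforming parser): a single unencoded character in one description is enough to make the whole document unparseable.

```python
import xml.etree.ElementTree as ET

# Hypothetical feed fragment; note the bare '&' in the description.
feed = """<rss version="2.0"><channel>
<item><description>Tom & Jerry</description></item>
</channel></rss>"""

try:
    ET.fromstring(feed)
except ET.ParseError as err:
    # One bare '&' invalidates the entire document, not just that item.
    print("invalid XML:", err)
```

Writing `Tom &amp; Jerry` (or wrapping the description in a CDATA section) would make the same feed parse fine.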

Quite frankly, coming from NASA, I found it somewhat amusing... I mean, it's not like this stuff is exactly rocket science, is it?

For all of you who are subscribed to Digg's RSS feed: it looks like you now need written permission in order to use an aggregator.

From their new TOS:

8. you will not use any robot, spider, scraper or other automated means to access the Site for any purpose without our express written permission.

For some reason they chose not to enforce this rule in their robots.txt though.

Oh, and if, in an effort to stay fully compliant with their TOS, you plan to just check their site manually instead, realize that registration is now required:

As a condition to using Services, you are required to register with Digg and select a password and screen name ("Digg User ID").

I guess slashdot doesn't have to worry about digg overtaking them anymore.

UPDATE: They changed their TOS to allow RSS aggregators to automatically access their site. Search engine spiders officially still need written permission though, and registration is still required for use of their "services" (where services are defined to, among other things, include text, images and articles).

Arturo Zacarias (eBay) writes:
RSS [...] is a technology that makes it easy to see updated headlines from websites or pages that you like. This year we have been working to incorporate this technology for members to utilize on eBay.

[...] eBay Stores sellers are now able to set up a "feed" that will send out a regularly updated summary of the most recently-listed items in their Store to people who subscribe to it. [...]

If you're a Store owner who is interested in providing listings via RSS feeds, go to "Manage My Store > Listing Feeds (within Store Marketing)". Click on "Distribute your listings via RSS" and click "Apply". The RSS button should appear on the lower left hand side of your Store.


We are working to add RSS to other areas as well in the coming months, so stay tuned.

Let me get this straight - they've been working for 10 months on the ability to generate a (really) simple XML file for a store, and they didn't even get around to allowing people to subscribe to search results?

Though I can see some people might be interested in subscribing to an individual store, I think subscribing to search results is way more interesting. If, for instance, I'm looking to buy a used iPod, I could subscribe to a feed for some eBay electronics store, but I'd much rather subscribe to a search-feed for "iPod" and get results for all stores, as well as for individual sellers.

He said they're "working to add RSS to other areas as well in the coming months", so hopefully search is coming at some point too, but given how long it took them to implement it for stores, there's no telling how long this will take.

In the mean time though, there's always (or is it? it seems to be down at the moment).

The Microsoft RSS Blog just announced that Vista will only accept RSS feeds that are well-formed XML.

I agree with Nick, who commented "This is the right thing to do, and I'm glad you're doing it - thanks". I'd like to add some emphasis to that statement though: "This is the right thing to do, and I'm glad you're doing it - thanks".

See, neither Nick's aggregator nor mine requires well-formed XML. This is because there are a lot of non-well-formed feeds out there, and the typical aggregator user doesn't care about XML specs; they just want to see the feed content. And if you require well-formed XML, something as small as a single "&" in a single post will invalidate the entire feed for as long as that post remains in it (which can be weeks, depending on the feed's update frequency).
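A sketch of the kind of leniency such aggregators can apply (this is an assumption about the general approach, not SharpReader's actual implementation): try a strict parse first, and only on failure apply a common fixup such as escaping bare ampersands before retrying.

```python
import re
import xml.etree.ElementTree as ET

# Matches '&' that does not begin a valid entity or character reference.
BARE_AMP = re.compile(r'&(?!(?:\w+|#\d+|#x[0-9a-fA-F]+);)')

def parse_leniently(feed_text):
    try:
        return ET.fromstring(feed_text)           # strict, spec-compliant path
    except ET.ParseError:
        fixed = BARE_AMP.sub('&amp;', feed_text)  # fix up the most common error
        return ET.fromstring(fixed)               # may still fail for other errors

doc = parse_leniently('<rss><channel><title>Tom & Jerry</title></channel></rss>')
print(doc.find('channel/title').text)  # -> Tom & Jerry
```

Real aggregators layer on more fixups than this (mis-declared encodings, stray control characters, unclosed tags), but the strict-then-fallback shape is the same.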

Microsoft being stricter than us does have the following positive results though:

  • It will most likely reduce the number of invalid feeds out there, making it easier for everyone to parse feeds.
  • Microsoft gets positive press for "doing the right thing".
  • For those feeds that still break, it may be another reason for people to look for alternative aggregators that can read that feed they're interested in.

Maybe there's an exception to Postel's law after all.

Last week Robert McLaws started placing Google Adsense Ads in the LonghornBlogs RSS Feed. The reaction from the community seemed to be mostly negative; Dave Winer for instance, wrote:

If we wanted to, as an industry, reject the idea, we could, by asking the people who create the software to add a feature that strips out all ads. Make it default to on. Then, that would force the advertisers, if they want to speak to us, to do so respectfully, by our choice. Create feeds of commercial information that we might be interested in, and if we are, we'll subscribe. If not, we won't.
I'm not sure what Dave is smoking, but who in their right mind would subscribe to a feed with nothing but ads? Maybe some deals feeds might be interesting, but the majority of ads (and therefore ad-feeds) don't work that way.

Another reaction was to come up with tricks to hide the ads. I linked to Sam Ruby's blog here instead of to the actual code, as the responses to his post are again interesting; Phil Ringnalda states he won't bother hiding the ads but will just unsubscribe, while Aaron Junod calls it a "sad day for blogs".

Ads are not all bad though. Without ads, would we have sites like Wired, CNN, MSNBC or even slashdot? All these sites have RSS feeds, but only one (slashdot) contains the full text of each post. That, by the way, also happens to be the only feed of the four that contains ads.

As for me, I'd much rather subscribe to an ad-supported full-content feed than to an excerpt feed without ads, where, in order to see the full contents, you'd have to visit the (ad-supported) website itself. For those who truly despise ads, hopefully CNN et al. will decide to publish two feeds: one like they have now, and an ad-supported full-content one.

The truth of the matter is that publishing content on the web isn't free. People need to pay hosting costs, news organizations need to pay journalists, etc. If a few ads can help offset those costs, and also give something back to bloggers who invest many hours of their spare time to this, then what's the big deal? We can't all be Kottke (who, despite his success so far, even admits himself that he doesn't expect this to last).

Personally, once RSS feeds start showing flashy distracting animated ads, I probably will unsubscribe from the feed in question. But an on-topic unobtrusive ad at the bottom of an entry, why not?

There's yet another RSS Aggregator out there: JNN (Juicy News Network) by none other than James Gosling. It's very basic right now but such is to be expected for, as he calls it, a "weekend hack".

In releasing this aggregator as open source, he brought the wrath of John Munsch upon himself though. John has been working on another Java-based open source aggregator (hotsheet) and is upset that, instead of joining his project, Gosling started a new one. Gosling, of course, had never even heard of John's aggregator and quite frankly, neither had I.

Looking at both aggregators, I noticed the following:

  • Java WebStart works pretty well. Being a server-side Java developer, I'd never seen this in action before, and it sure beats the hell out of downloading jars, setting up classpaths, etc.
  • Gosling implemented subscribing to a feed by accepting drag-n-drops of links from a browser (similar to what SharpReader does). I was unaware this was even possible using Java.
  • Neither project looks quite ready for prime-time yet.

I'm sure John would've loved to have his project validated by having someone of Gosling's caliber work on it, but I hardly think Gosling (or anyone else for that matter) can be blamed for starting from scratch instead of first perusing sourceforge or freshmeat for existing Java open source aggregators. Sure, there are already a lot of aggregators out there, but competition is a good thing. It keeps you on your toes and forces you to keep updating and improving upon your code if you don't want to be part of the masses of aggregators with only a handful of users.

Also, with both HotSheet and JNN being open-source, they can freely take code from each other if they want to. If John wants to add JNN's drag-n-drop functionality to HotSheet, he can just go into Gosling's code and copy the implementation. Besides, this isn't like Eclipse vs Netbeans or anything. It's two basic Java Swing aggregators, both in fairly early stages of development. Big deal.

Personally, being a Java developer (albeit server-side) I'd love to see a good client-side Java aggregator emerge, if only to show that it can be done. Contrary to popular belief, it is possible to create good GUI applications in Java. Just look at apps like IDEA or TogetherJ, or check out jgoodies.

I think the main problem in creating a good Java aggregator is the fact that it will need to render HTML, and Java's native HTML rendering is not even close to being good enough for this task. For a real client-side aggregator, you'll have to embed a real browser like IE or Mozilla, and this is not an easy task in Java Swing. I did see some potential options in this thread, but am unaware of how stable any of these solutions are.

Either way, if I were to create a Java aggregator (don't get your hopes up Java zealots, I have no plans to do so) I would probably start from scratch too, as opposed to basing it off either JNN or HotSheet. Call me a control-freak, but I like setting up my APIs the way I see fit, as opposed to learning someone else's. Besides, parsing RSS or Atom isn't that hard. As Brent Simmons noted, most of the work in implementing an aggregator goes into the UI code. And if you want your aggregator to stand out, you surely have to do that part yourself.
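To illustrate just how small the parsing part is, here's a minimal sketch (not how SharpReader, JNN, or HotSheet actually do it; it assumes a well-formed RSS 2.0 feed and ignores namespaces, Atom, and all the real-world error handling):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract title/link/description from each item in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        {
            'title': item.findtext('title', default=''),
            'link': item.findtext('link', default=''),
            'description': item.findtext('description', default=''),
        }
        for item in root.findall('channel/item')
    ]

# Hypothetical minimal feed:
feed = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Hello</title><link>http://example.org/1</link>
<description>First post</description></item>
</channel></rss>"""
print(parse_rss(feed))
```

Everything else an aggregator does - fetching, caching, threading items, rendering HTML, managing subscriptions - is where the real effort goes.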

Looks like RSS is getting more mainstream press lately. After the Slate article earlier this month, today the Washington Post published an article about RSS and Aggregators. The article states

(The Post's was scheduled to begin offering a set of RSS feeds this weekend.)
"Was" scheduled? What went wrong? Was it postponed? Shot down? Unfortunately, there's no further mention of it in the article...

UPDATE: duh - I should've just clicked that link - their feeds are up at cool!

A quote Microsoft is probably not very happy about is the following, from the Post's coverage of RSS Reader:

It was easily the slowest newsreader we tried -- partially because it runs on Microsoft's .Net Framework, an inefficient bundle of code that lets developers add Web functions to their software.
I've never used this aggregator, but I'm sure that if its performance is lagging, the fault lies in the design and implementation of the reader itself, not in the .NET Framework. Similarly, I've seen blog entries blaming SharpReader's memory requirements on .NET, which again is just not true. Blaming Microsoft may be the popular thing to do these days, but that doesn't mean it's always true.

The Post has this to say about SharpReader:

SharpReader (Win 98 or newer, free at also relies on the .Net Framework, although it wasn't as slow as RSSReader. It feels unfinished in some ways: Instead of an installation routine, you have to unzip a downloaded file, then move that folder into your Program Files directory. On the other hand, it supports Atom as well as RSS and offers the most attractive, simplest interface of any Windows newsreader.
I have to say I'm quite proud of SharpReader's interface being called "the most attractive and simplest of any Windows newsreader". This is exactly what I've been trying to create: a simple, easy-to-use interface that doesn't make you jump through hoops to get at the desired functionality. With any idea I have for a new feature, I always try to think about how best to fit it into the current UI with a minimum of added complexity. If I can't figure out how to do this, I'd typically rather leave the feature out than add it at the cost of a more complicated UI.

Regarding the Post's comment about SharpReader's lack of an installer: I've been meaning to get to this and have been told it's a snap to do using VS.NET; I just haven't gotten around to it yet. After the Slate article earlier this month, I noticed from some of the emails I received that "xcopy deployment", while great in theory, is really not for the average (non-techie) user. So as RSS ventures more into the mainstream this year, I'll definitely have to spend a bit of time to add that installer.

Copyright © 2003, 2004 Luke Hutteman