Validating is like flossing

September 17, 2008

How often do you floss?

Honestly!

For me it’s one of those things that I know I should do, but which I don’t actually do nearly as often as I should.

Validating HTML is the same.

We web developers all know we should, but so often don’t.

Why?

Is it because we don’t think standards are important? Maybe for a small minority, but I don’t think that’s the reason for most of us: smart developers and testers understand that valid code makes life easier for both them and the people using their sites.

Is it just too hard?

When we were migrating Trade Me to .NET we decided we would take the opportunity to improve the quality of the underlying HTML as we touched each page. The intention was to validate all pages using the free tools provided by the W3C.

But, as we quickly discovered, this is no trivial undertaking.

It’s fine when you’re working with a mostly static page. But, as soon as you’re working with a dynamic, data-driven page, the number of different variations can quickly become overwhelming.

If you have pages which require authentication (either on the server or in the application), or require a user to post information into a form, it becomes more or less impossible. If the validator can’t reach the page directly, you have to save a local copy of the HTML and upload it manually to the validator.
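Even if you script the “save and upload” dance, it only covers one page in one state. Here’s a rough sketch of what that looks like, with every URL and form field made up for illustration:

```python
# Sketch of the manual workaround: fetch an authenticated page and
# save the HTML locally, ready to upload to a validator by hand.
# The URLs and field names below are purely illustrative.
import requests

session = requests.Session()

# Log in first, so the session cookie lets us see the real page.
session.post(
    "https://example.com/login",
    data={"username": "me", "password": "secret"},
)

# Fetch the page exactly as a logged-in user would see it.
response = session.get("https://example.com/account/watchlist")
response.raise_for_status()

# Save a local copy for the validator's file-upload form.
with open("watchlist.html", "w", encoding="utf-8") as f:
    f.write(response.text)
```

Now multiply that by every variation of every dynamic page on the site.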

Who has time for that?

Even when you do make the effort, the results often confuse more than they help.

Validators are (almost by definition) pedantic, and as a result do a generally poor job of differentiating between things that make a real difference to users and things that, while strictly and correctly identified as errors, are not so critical.

And, there is no easy way to keep track of the errors on a page over time. So, when you’re presented with results, it’s difficult to tell which errors are new, or to exclude the ones you’ve seen before.

This is not so bad if you have a page that is normally fully compliant, but a much more common scenario, unfortunately, is working on improving a page that is full of invalid code. In that case it’s a nightmare.
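None of this is rocket science, though. What you want is a stored baseline per page, so each run reports only what’s new. A minimal sketch of the idea, treating each error as a plain string:

```python
import json
from pathlib import Path

def new_errors(page_id, current_errors):
    """Report only the errors not already in the page's stored baseline,
    then update the baseline so the next run compares against today."""
    baseline_file = Path(page_id + ".baseline.json")
    seen = set(json.loads(baseline_file.read_text())) if baseline_file.exists() else set()
    fresh = [error for error in current_errors if error not in seen]
    baseline_file.write_text(json.dumps(sorted(seen | set(current_errors))))
    return fresh
```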

The tools really don’t make it easy.

So, what do we do? Carry on wishing we could be more diligent, but lacking the time and tools?

I think we can do better!

A while back I wrote about an idea I had for solving some of these problems.

I was stoked when one of the smartest developers I know put his hand up. Over the last couple of months we’ve been working on turning this idea into a real working tool. And now we have something to show you all:

We’re calling it Wingman.

It’s a Firefox browser plug-in which automatically sends the exact pages you visit to the server, making them trivial to validate.

And, it’s a website which organises the results, making it easy to identify the errors you’re interested in, and to spot trends so you can fix things as soon as they occur.

Plus, it’s designed to get smarter as more people use it, by aggregating information about what types of errors are commonly ignored across all users.
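To give a feel for what “smarter” means here, a toy sketch of the aggregation (the names and the threshold are illustrative, not how Wingman is actually built):

```python
from collections import Counter

# Hypothetical aggregation: each time any user dismisses a validation
# error, record its message. Errors dismissed by lots of people are
# probably noise, and can be ranked down for everyone.
ignore_counts = Counter()

def record_ignore(error_message):
    ignore_counts[error_message] += 1

def likely_noise(threshold=10):
    """Messages enough users have dismissed to treat as low priority."""
    return [msg for msg, count in ignore_counts.most_common() if count >= threshold]
```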

In time we imagine a system which runs various types of validation, potentially hooking into validation services hosted outside of Wingman itself. But, for starters, we have implemented a simple HTML validator, based on the service created by validator.nu. CSS and JavaScript validation are the next obvious candidates, but we’re really interested to hear your ideas for what else we could include in the mix - a spell checker, a test for basic SEO rules, and an outbound link checker are three ideas that have been suggested to us already.
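If you want to experiment with that same underlying service, validator.nu accepts a document POSTed directly to it and returns its findings as JSON. A rough sketch, based on the checker’s output as we understand it today (check their documentation for current details):

```python
import requests

def validate_html(html):
    """Send a document to the validator.nu checker, return its messages."""
    response = requests.post(
        "https://validator.nu/?out=json",
        data=html.encode("utf-8"),
        headers={"Content-Type": "text/html; charset=utf-8"},
    )
    response.raise_for_status()
    return response.json().get("messages", [])

# Each message carries (at least) a type and a human-readable description.
for message in validate_html("<!DOCTYPE html><title>Hi</title><p>Hello"):
    print(message.get("type"), message.get("message"))
```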

Today we’re opening up a free invite-only preview of the service, so we can start to see how people might use a tool like this.

If you’d like to have a play, please register on the site. We’ll be sending out the first group of invite codes shortly.

I’ll look forward to hearing what you think!