28x

Phil Haack has an interesting post about the difference between the most productive developers and us mere mortals.

His conclusion:

“By writing less code that does more, and by writing maintainable code that has fewer bugs, a good developer takes pressure off of the QA staff, coworkers, and management, increasing productivity for everyone around. This is why numbers such as 28 times productivity are possible and might even seem low when you look at the big picture.”

From: 10 developers for the price of one

He’s absolutely right (imo).

Some developers are much more productive than others.

And, it’s very difficult as a manager to find the right way to recognise this without stepping on toes.

At least that’s true in the short term.

In the long run the cream normally rises to the top.

Parlez vous Anglais?

Some interesting responses to my post yesterday about the evergreen VB.NET/C# debate.

Andrew from Mindscape makes the point that every different language has “different levels of expressiveness and also different aesthetics.”

Quite right.

He posts this Ruby code snippet …

10.times {|i| puts i }

… and asks “Which do you find more beautiful?”.

The VB.NET equivalent is:

For i As Integer = 0 To 9
    Console.WriteLine(i)
Next

I agree that the Ruby code is beautiful.

However, I’m not sure that’s the right thing for us to optimise for.

How about optimising for readability? Most code is read much more often than it is written. When we’re designing databases we understand what this means. Adding an index to a table adds a small cost to every write, but it’s worth it in situations where there are many more reads than writes. But we don’t seem to apply the same principles to the code we write.

I’ve worked on a few different applications now and can’t think of any where the limiting factor was the number of keystrokes in the code. And, as a couple of people have pointed out, a good IDE setup can significantly help with this anyway.

MVP Alex calls the VB syntax “ridiculously clumsy”, and points the finger specifically at keywords like Overridable, NotOverridable, and MustOverride.

I guess that depends on what language is native for you. For me, when I read ‘virtual’ (a term that can actually mean 1000 different things depending on the context) I have to translate that into ‘able to be overridden’ or ‘overridable’. I’m not the only one.
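For anyone who hasn’t written much VB.NET, here’s a minimal sketch of those keywords in context (the classes are invented purely for illustration). To my eye, each keyword is a plain-English description of what it allows:

' MustInherit, MustOverride, Overridable and NotOverridable in one place.
Public MustInherit Class Listing

    ' Derived classes are required to provide an implementation.
    Public MustOverride Function Describe() As String

    ' Derived classes may replace this implementation if they choose to.
    Public Overridable Function Summary() As String
        Return "A listing"
    End Function

End Class

Public Class CarListing
    Inherits Listing

    ' Overrides the base method, and NotOverridable stops anything further
    ' down the inheritance chain from overriding it again.
    Public NotOverridable Overrides Function Describe() As String
        Return "A car listing"
    End Function

End Class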

You say tomato, I say tomato.

And that was really the point I was trying to make.

An argument between two developers about which language is “best” is like a debate between an Englishman and a Frenchman. Each will prefer their own language. And each will be right because they will both be able to express themselves best in their own native language.

And again, while the debate continues, important problems remain unsolved.

:-)

C# vs VB.NET

Kirk (one of my new colleagues at Xero) and Phil (one of my old colleagues at Trade Me – Phil, where’s your blog?) have organised a C# vs. VB.NET debate for the Wellington .NET users group tonight.

Should be fun.

My predictions:

  • Most of the audience will be C# developers;
  • Few of them will have ever used VB.NET in anger;
  • Despite that, they will have already convinced themselves somehow that VB.NET is inferior;
  • None, if challenged, would be able to build anything using C# that Phil couldn’t build just as well in VB.

Meanwhile, important problems remain unsolved.

:-)

Update: I’ve responded to some of the comments and emails generated by this post in a subsequent post called Parlez vous Anglais?

Constant improvement

Are you taking advantage of the web platform and constantly making small improvements to your software?

Or have you got an excuse for why this is too hard (your application is too complex, you can’t easily deploy changes without interrupting users, you can’t afford the testing costs associated with each release, you prefer to focus on the bigger headline stuff, etc, etc)?

Here’s what Google achieves:

“… half a dozen major or minor changes are introduced in Google’s search engine every week, and each change can affect the ranking of many sites — although most are barely noticed by the average user.”

That’s the benchmark.

Thoughts about Safari

Yesterday, as I’m sure you’ve heard, Apple announced a version of Safari for Windows and also revealed that Safari will be the platform for app development on the iPhone.

Josh Catone on Read/Write Web has a good summary.

Understandably there has been a mixed reaction to these announcements.

Here are my random thoughts:

Safari is right on the brink of becoming a browser that web developers need to care about. This might be enough to make that happen. That would take the tally to four (IE6, IE7, Firefox, and now Safari). It’s like the 90s all over again, which will come as a bit of a shock to developers who grew up in the Internet Explorer-dominated era.

Some people have questioned who in their right mind would run Safari on Windows. That’s an easy one to answer: all of the Mac fanboys who are stuck using Windows PCs at work will (cough Amnon, cough Tim) and, I suppose, all of the Windows developers creating apps for the iPhone.

I agree with Jason from 37signals. Creating a separation between the platform (the physical phone and the browser) and the apps which third-party developers create is a smart move.

What do you think?

In the last 12 months Safari has grown from 1.1% market share to 2.1%. This is a simple result of selling more Macs. According to this recent Bloomberg report Apple now has 7.7% market share in desktops and just under 10% in laptops, which is a lot more than I would have guessed. So, what does the future hold for Safari?

And, related to this: what will happen to IE7? As I noted yesterday, it’s grown quickly to ~30% in the first half of this year, but seems to have stalled there.

Let’s hear your predictions for the next 12 months.

We’ll come back to them in a year and see who was closest.

Browser stats for May

Sam sent through the latest Trade Me browser stats (for May ’07):

Browser         Market Share
IE 6            51.3%
IE 7            29.9%
Firefox 2.0     9.2%
Firefox 1.5     3.9%
Safari          2.1%
Firefox 1.0     1.1%
Others          2.1%

It’s interesting to compare these to previous months: February ’07 and December ’06.

After growing from nothing to 30% market share in the first few months of the year, IE 7 has now totally stalled. It seems that everybody who is going to get the new version via Windows Update already has.

IE 6 is hanging in there at around 51%. Presumably all of these people have disabled Windows Update, work for somebody who has, or are using an illegitimate copy of Windows.

Perhaps, as Robert McLaw suggested in a recent post about compromised web servers, Microsoft’s policy of not patching pirated copies of Windows is actually causing them more problems than it is solving?

In other browser news, the first beta of Netscape 9 was released last week. I was surprised to find there still was a Netscape, to be honest. I’m not entirely sure why they are bothering.

Is Computer Science dead?

Like Luke Welling, I suspect that reports of the death of Computer Science have been greatly exaggerated.

“The death of computer science was a fairy tale in 1987, and 20 years later it is still a fairy tale. More powerful computers are not replacing programmers any more than calculators are replacing accountants or power tools are replacing carpenters.”

Read the full post.

Designing for blind users

In the comments to my recent post about XHTML, Scott Mayo asks an interesting question:

“How many complaints you have had about the usability of your website by blind, sight-impaired or otherwise impaired users. Surely as NZ’s site with the broadest coverage you would have a lot of exposure to such feedback?”

I have to admit that I haven’t had any direct feedback, or any first-hand experience using Trade Me with a screen reader.

I’d be interested to hear from anybody who does.

At TechEd last year we used Trade Me in a demo of the speech recognition features built into Windows Vista, and it worked great.

Amongst all of the other things to consider when creating a new site or page, it can sometimes be hard to get excited about accessibility.

But don’t forget that the single most important visitor to your site is effectively blind (a.k.a. GoogleBot).

Design for accessibility and you’ll often get search engine optimisation for no additional charge!

Make it work, then make it look good

Here is a simple rule: if you’re building a web site make sure it:

(1) works; and

(2) looks good.

The order here is important.

What does it profit someone to have a site which looks like a million dollars but which doesn’t actually work?

Here is my theory: the better a given web site looks the less likely it is to actually work.

A recent example …

Both Rod and Nic have raved about their Blackbox M-14 headphones, made by NZ company Phitek.

I figured I’d get me some of that noise canceling goodness.

(and, in case you’re reading this Larissa, this purchase decision was not influenced by sitting beside you this week! ;-)

I had an uneasy feeling from the moment I hit their home page. The main navigation links unfurl onto the page in a very pretty (but otherwise pointless) Flash animation.

The real fun started when I got to the check-out.

There are pretty well understood conventions now for how an online check-out should work. Don’t make me think.

Instead, here is the Blackbox check-out page:

Blackbox Check-out

So, I select the product I want and click the “Go To Payments Page” button.

An error message is displayed: “Please check your personal details”.

Eh?

It turns out that all three steps are contained on one page – with the second and third steps hidden in collapsed sections at the bottom of the form. To get to the second step I need to click on the “Review Purchase Summary” bar, although there is nothing to indicate that it’s a link. It doesn’t look clickable, and the mouse cursor doesn’t even change to a hand when I hover over it.

Then, having made it through the form I get this:

Blackbox Check-out Payment Page Re-direct

Unfortunately, the re-direct doesn’t work. And there is no obvious way to click-thru manually. I can’t get to the page where I enter my credit card details.

The net result: I haven’t purchased the headphones and my confidence in their brand is dented.

I wonder how many sales they miss because of a website which looks great but which doesn’t work?

An idea

Here is an idea that has been bubbling in the back of my mind for a while.

I don’t have the time at the moment to make it a reality, but if there is a smart developer or two out there looking for a project give me a shout.

The problem

It is very difficult to ensure that all of the pages in a large, dynamic web site contain valid HTML, especially as changes are made to those pages over time.

Why?

There are a few reasons:

  • It is very time consuming to manually validate a large number of HTML pages. And it’s hard to make the case for spending extra testing time on this.
  • Dynamic pages change (duh!) so it is difficult to validate all of the different permutations.
  • Pages that require users to post information in a form or log in cannot be easily accessed by an automated test script, so are often not validated at all.

The solution

While developing and testing a site, or even just while browsing, we typically visit a large number of different pages.

The proposed tool will run in the background while we use the website and capture the details of each HTTP request and corresponding response. The HTML that is returned can then be validated asynchronously by the tool.

At the end of the browsing session (or even during, if required) the tool will provide a list of the pages that have been visited and a count of the number of warnings or errors contained on each. The user will be able to drill into the details of any page to investigate the cause of the warnings or errors.

This would allow developers and testers to quickly and easily validate a large number of pages, including those that require users to post form information and log in, without any extra work over and above what they are already doing.

Required Components

1. A program that runs in the background collecting details of each HTTP request and HTML response

Perhaps this would be a Firefox extension? Or, a Windows application that hosts IE in a sub-window?

The user interface should be very simple: e.g. a start/stop button, plus perhaps an indicator of the page count and/or total warnings/errors, or a summary of the previous request a la the Firefox HTML Validator add-on (btw, does anybody know of an equivalent add-on which works on a Mac?).

2. An XHTML Validator

This would ideally give equivalent results to the W3C validation tool, which will require an SGML parser (but if this is too painful it could just implement HTML Tidy-like validation for starters; there’s a rough sketch of an even simpler starting point after the component list below).

This could also be extended in the future to validate CSS, JavaScript, and also run standard accessibility tests.

3. Result viewer

A simple web interface with two views would do the trick:

  • List view
    • Identify each page by URL and/or HTML title
    • Identify pages which contain errors (perhaps using simple green, orange and red light icons)
  • Details view
    • Lists all warnings/errors for the selected page (with HTML snippets)
    • View full HTTP request and response details
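
To make component 2 concrete, here’s a rough sketch (in VB.NET, with an invented class name) of about the simplest possible starting point: it checks only that a captured page is well-formed XML, which catches unclosed and mis-nested tags but is a long way short of full W3C validation. Component 1 could then be, say, a small Windows app hosting the WebBrowser control that feeds each page it loads into something like this:

Imports System.IO
Imports System.Xml

Public Class XhtmlChecker

    ' Returns Nothing if the markup is well-formed, otherwise a description
    ' of the first problem found (parsing stops at the first error it hits).
    Public Shared Function FirstError(ByVal html As String) As String
        Dim settings As New XmlReaderSettings()
        settings.ProhibitDtd = False   ' XHTML pages start with a DOCTYPE...
        settings.XmlResolver = Nothing ' ...but don't go fetching the DTD from w3.org

        Try
            Using reader As XmlReader = XmlReader.Create(New StringReader(html), settings)
                While reader.Read()
                    ' Just walk the whole document; malformed markup throws.
                End While
            End Using
        Catch ex As XmlException
            ' Note: entities that are defined in the DTD (e.g. &nbsp;) will also
            ' land here, because we deliberately skip resolving the DTD above.
            Return String.Format("Line {0}, column {1}: {2}", _
                                 ex.LineNumber, ex.LinePosition, ex.Message)
        End Try

        Return Nothing
    End Function

End Class

Swapping this out later for a proper SGML-based validator (or a call out to the W3C service) wouldn’t change the shape of the rest of the tool.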

Extra for experts

It would be nice to allow the user to compare results for a given page to the results from previous sessions, so that trends can be identified.

Comments welcome

What do you think?

Would something like this be useful?

Does something like this already exist?

If you have any thoughts or suggestions comment away. :-)

The NetGuide Awards & XHTML

Pete has posted his annual review of NetGuide nominated sites.

Interesting reading!

I notice Russell is claiming bragging rights for having the only site which is fully HTML and CSS-compliant.

I was a bit disappointed to read this comment though:

“One thing I did notice is the number of sites now using XHTML, but still using tables for layout. I’m looking at you Trade Me. It seems so frustratingly stupid, why go to all the trouble of moving to XHTML and not use it semantically?”

From: Validation the 2007 NetGuide Awards

This time 12 months ago Trade Me didn’t even have a DOCTYPE.

That was embarrassing!

Moving to XHTML (as part of the migration to .NET) was a big job and shouldn’t be underestimated. We’ve removed a massive amount of non-semantic mark-up as part of that process. But we’ve also been pragmatic about it. Where it was significantly easier to use HTML tables for layout we’ve used them. The net result is that our pages are now mostly valid and much smaller than they used to be, but still with a lot of room for improvement.

There are a lot of people who are very passionate about web standards. That’s a good thing. But sometimes I think they approach their evangelism with a little too much vigour.

Give people some credit for the improvements they make.

Remember that they are often hard won.

Don’t confuse better for best.

P.S. It was a good night at the NetGuide awards for Trade Me. We picked up the award for ‘Best Trading Site’ as well as ‘Best Motoring Site’ and ‘Best Real Estate Site’. Full credit to everybody who has contributed to those successes and thanks too to everybody who voted. And congratulations to SmileCity for picking up the ‘Site of the Year’ award. :-)

A conversation about an API

There has been a lot of interesting discussion around my posts last week about the new Vista sidebar gadget and XML feed and follow-up about why Trade Me doesn’t have an API.

Thanks to everybody who has contributed. Be assured that your comments have been widely read here at Trade Me.

If you haven’t already, feel free to add your 10c worth.

A couple of things that are worth following up …

Firstly, people have been busy building wadgets of various persuasions and I promised to provide some links:

I’d be interested to hear from anybody who is using any of these. Are you finding them useful?

There are a few others I’m aware of which are still “under development”, including an OS X widget which Ben and the guys at DNA are throwing together. I’ll post more links here as they come to my attention.

If you’d like to build something but need some inspiration, check out the recently released eBay companion for Firefox. A browser add-on which lets people track their listings in the sidebar of the browser like this would be wicked.

Secondly, a few of the comments I received warrant a response:

“I think it’s a bit rich to say that you don’t want other people to build things you might eventually build yourselves. I’d be more inclined to accept that argument if you were likely to get to new features. And, don’t forget, while you sit worrying about what you *might* do at *some* point in the future, your users don’t have the features.”
Nat Torkington

Fair point. We’ve been threatening to build our own listing tool for a few years now without much to show for it. In the meantime people behind tools like Auctionitis have got on with actually building something, which has proved to be a much more effective strategy!

“A cynic might say that the real reason you don’t have an API is because you already own the sector.”

Nat Torkington (again)

Ouch. Nobody is that cynical, are they Nat?

It’s true that “want to” and “need to” are two different things. But, I think this comes back to my point about having bigger fish to fry. Whenever we decide to invest time in some new functionality we are, at the same time, deciding not to invest time in something else. For each thing we do there is a long (infinite?) list of things we don’t do.

Of course there is also an argument to say that an API would help to alleviate this by letting others fry the smaller fish we don’t have time for. It’s unlikely, for example, that we would have ever prioritised the various tools that have already been built on top of the XML feed (see above), but some people are obviously finding those useful, which is all good.

“I think that lots of NZ websites are afraid to offer feeds as they believe that this will stop people from visiting the main site. Those that do offer feeds, don’t provide full-text feeds, for that same reason. The idea is that if you offer a partial text feed it will encourage users to click through and visit the main site, but this has been proven to be untrue.”

Stuart

I agree it would be great if we could provide more RSS feeds. The “My Favourites” page would be the obvious place to start and new listings within the “$1 reserve” page would be a close second.

The reason why this hasn’t been done has nothing to do with wanting to drive additional traffic to our site. We have lots of traffic already. If anything, we would probably appreciate taking some heat off our listing servers. RSS feeds, which are smaller than HTML pages and more easily cached, would only help with this.
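
To illustrate how little is involved (this is only a sketch, and the class name and the five-minute cache time are made up), a feed like that is a small ASP.NET handler writing a short XML document, with cache headers so that repeated polling doesn’t hit the listing servers every time:

Imports System.Web
Imports System.Xml

Public Class FavouritesFeedHandler
    Implements IHttpHandler

    Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
        context.Response.ContentType = "application/rss+xml"
        ' Let downstream caches hold the feed for five minutes.
        context.Response.Cache.SetCacheability(HttpCacheability.Public)
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5))

        Dim writer As New XmlTextWriter(context.Response.Output)
        writer.WriteStartElement("rss")
        writer.WriteAttributeString("version", "2.0")
        writer.WriteStartElement("channel")
        writer.WriteElementString("title", "My Favourites")
        writer.WriteElementString("link", "http://www.trademe.co.nz/")
        writer.WriteElementString("description", "New listings in my favourite categories and searches")
        ' ... one <item> element per new listing would be written here ...
        writer.WriteEndElement() ' channel
        writer.WriteEndElement() ' rss
        writer.Flush()
    End Sub

    Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
        Get
            Return True
        End Get
    End Property

End Class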

“Any of us could (and some have) easily talk through the issues raised in Rowan’s blog and come up with solutions to the objections regarding versions, support, development time etc etc.

But I believe it falls into the above category because the underlying issue is simply one of trust.

Do they trust us, the people out here, to build things that will increase their value instead of subverting it.

If you’re basically inclined to trust people, then you’re going to be able to invent a million reasons why giving them a means to add value to your business by building their own is going to work.

If you’re basically inclined to distrust people (at least in this context), you will be able to discover a million reasons why it could all go horribly wrong.”

— Richard Clark, in the NZ 2.0 Google Group

I agree with the first part. I’m sure we could find solutions to all of the roadblocks I listed.

But, I think it’s a bit unfair to say that the reason we haven’t done this yet is that we don’t trust people. Our whole business is built on the premise that most people are trustworthy. Every day thousands of Trade Me members send money to people they have never met for goods they have never seen. That requires a lot of trust!

“Do you know of any other NZ web companies apart from ZoomIn that are aiming at consumers and have released APIs?”
Peter Griffin, via email

A good question. I can’t think of any. How about you?

This is something we’ve been talking about internally for a while, so it’s really interesting to get a broader perspective.

Thanks again for being part of the conversation.

Why doesn’t Trade Me have an API?

This is a question I get a lot:

Why doesn’t Trade Me have an API?

It’s actually a slightly frustrating question for me to answer.

Internally I’m usually the one asking this question. Externally, at places where technical people gather, I’m the one defending the fact that we don’t have an API and, what’s more, have no immediate plans to build one.

Why not?

Nat Torkington’s recent post has some of the answers.

It’s not that we haven’t thought about it. There are some legitimate reasons why we’ve chosen to not build an API to date. I thought it would be interesting to talk about some of these and get your thoughts.

Some questions to think about

Would we need to communicate all changes in advance to third party developers? If so, how much in advance? We’re constantly making small changes to the site. We generally deploy site changes twice per day. The cycles can be very short. We sometimes deploy something in the morning and then tweak it later that afternoon. Anything which threatens to slow us down is quite correctly frowned upon.

What happens when we need to make breaking changes to the API? Do we version the API and continue to support older versions? If so, how long do we leave this support in place? If not, what liability do we have if we break a third-party application?

How do we deal with authentication? We put a big effort into keeping Trade Me safe for buyers and sellers. We have a full-time team working on this. One of the problems this team deals with is phishing of members’ login details. We have a simple and consistent message for members: don’t enter your Trade Me email address and password anywhere other than on the Trade Me site. So, obviously allowing third-party developers to build tools which require our users to enter their login details is inconsistent with this. To solve this we’d need to build an alternative authentication process – e.g. the token-based approach used by upcoming.org.
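
To sketch the shape of that idea (the class and method names below are invented, and this is the flavour of it rather than a design): the member authorises a third-party application on the Trade Me site itself, we hand the application an opaque token, and from then on the application sends that token with each call instead of ever seeing the member’s password. Issuing an unguessable token is the easy part:

Imports System.Security.Cryptography

Public Class ApiTokens

    ' Issue a new, unguessable token for a member/application pair. In practice
    ' it would be stored against the member so that it can be looked up on each
    ' API call and revoked by the member at any time.
    Public Shared Function Issue() As String
        Dim bytes(31) As Byte ' 32 random bytes
        Dim rng As New RNGCryptoServiceProvider()
        rng.GetBytes(bytes)
        Return Convert.ToBase64String(bytes)
    End Function

End Class

The harder parts are the authorisation page itself, storing and revoking tokens, and deciding exactly what each token is allowed to do.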

Are we prepared to invest in creating an eco-system where third-party developers can profit? As Nat pointed out to me when I discussed this with him, one of the reasons that Amazon have been so successful with their new web services is that they are creating more value than they are capturing. In other words, they are leaving some money on the table for the people using their API.

Are we prepared to allow our customers to become dependent on a third-party tool? If somebody created a really wicked tool using the API, and lots of our users started to use it, would that limit our ability to innovate in that same area? This is a dilemma that eBay have started to encounter with their API, where they have created listing tools which compete directly with third-party tools built on top of their API. At the moment I’m not sure we’re prepared to let others build something we then wish we had built. Is that bad?

How do we protect the user experience? How do we protect our brand? We’re currently very protective of both of these things, for very good reasons.

How do we protect our infrastructure? In the past we’ve had to ask people to discontinue or specifically block access to automated external tools which were causing us pain. To an infrastructure guy there is a fine line between a well-meaning but poorly implemented external tool and a Denial of Service attack. In fact, we currently prohibit the use of any “robot, spider, scraper or other automated means to access the Website or information featured on it for any purpose” in our Terms & Conditions (see 4.1 c).
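
For what it’s worth, a crude version of that protection isn’t hard to sketch (the class and the numbers below are invented for illustration): give each caller a fixed budget of requests per minute and politely refuse the rest, so that a well-meaning but badly behaved tool degrades gracefully instead of looking like a denial of service:

Imports System.Collections.Generic

Public Class RequestThrottle

    Private Const LimitPerMinute As Integer = 60

    Private ReadOnly _counts As New Dictionary(Of String, Integer)
    Private _windowStart As DateTime = DateTime.UtcNow

    ' Returns True if the caller (identified by API token, IP address, etc.)
    ' is still within its budget for the current one-minute window.
    Public Function Allow(ByVal callerKey As String) As Boolean
        SyncLock _counts
            ' Start a fresh window every minute.
            If DateTime.UtcNow.Subtract(_windowStart).TotalSeconds >= 60 Then
                _counts.Clear()
                _windowStart = DateTime.UtcNow
            End If

            Dim count As Integer = 0
            _counts.TryGetValue(callerKey, count)
            If count >= LimitPerMinute Then
                Return False
            End If

            _counts(callerKey) = count + 1
            Return True
        End SyncLock
    End Function

End Class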

If we build it will they come? Are there enough developers in New Zealand to justify our effort in creating an API? How many people will actually use it? How many people will use the applications they build on top of it?

Do we have bigger fish to fry? Keep in mind that any development work required at our end would be at the expense of something else. Is an API just too much work for us for too little reward? Any argument in favour of an API needs to be more compelling than: all the cool kids have one. :-)

Your thoughts?

What would you do if you were in our position?

You down with FPP?

Steven is wondering why IT recruitment agents are not well respected:

http://searchniche.blogs.com/nzrecruiter/2007/03/why_recruiters_.html

Here is a story from my time in London, as told to me by a flatmate, which might have some of the answers. I’ve changed the names to protect the fictional.

Scene: Jim is a recruitment agent. Bob is an IT professional looking for work.

[Jim’s phone rings]

Jim: Hello
Bob: Hi, my name is Bob. I’m calling about the ASP job you have advertised on Jobserve

[An uncomfortable pause]

Jim: Oh, the ASP job. Well, actually, that position has just been filled.

Which is a recruitment agent’s way of saying that the job never existed in the first place. Actually he was just using the advert to try and build up a database of candidates that he could then pimp to his clients.

Anyhow, the conversation continues …

Bob: Okay, what else do you have at the moment that might suit?
Jim: Actually I have another client who might be looking for somebody with ASP experience. How many years’ ASP do you have?
Bob: [sigh]

How has tenure come to be so important in recruitment? Why do agents talk in terms of “having” a technology? Surely there are 1,000 shades of grey when it comes to technical experience? I never answered those questions satisfactorily during my time overseas.

Bob: I have been using ASP for about 3 years.
Jim: Really, is that all?
Bob: Yeah, but I’ve used a lot of different technologies prior to that. I think I know ASP pretty well after 3 years.
Jim: I’ve spoken to a lot of candidates this morning who have 5 years’ ASP experience, so they would probably be preferred.
Bob: 5 years eh? Is that really 5 years’ experience, or is it 1 year repeated 5 times?
Jim: What?
Bob: Never mind.

Bob decides that he’s probably wasting his time, so an experiment is in order…

Bob: What other web development positions do you have open at the moment?
Jim: We have loads. What technologies do you have? HTML? CSS? SQL?
Bob: Yeah all of those.
Jim: Okay, great.
Bob: I also have about 5 years’ MMP experience. And a little bit of FPP experience.
Jim: A ha. Okay, great.

At this point Bob has confirmed that he is wasting his time with this agent. To a New Zealander MMP and FPP are two alternative electoral systems. To a dumb London recruitment agent they are two more TLAs to enter into their database – independent of their relevance to an actual web development position.

Bob: Yeah, do you get many web development jobs requiring MMP or FPP experience?
Jim: Yeah, all the time. You know what gov, just doing a quick search, we don’t have any open positions at the moment, but I’ll add you to our database and call you as soon as something comes in. Actually I’m going to see a client this afternoon who is probably quite interested in …

[Bob hangs up]