Is marketing broken?

Rod recently posted a quote of mine about Trade Me. The emphasis is his:

Trade Me wasn’t about technology. Sam’s insight was that marketing was broken. Rather than wasting lots of money on big billboards and TV ads (‘it’s shopping on the internet’!) he instead decided to focus on building a really great product, which people like to use and tell their friends about.

Andy Lark disagrees. At least, Alex from Base4 thinks he does.

To start, you need to understand the context. It was a light-hearted breakfast debate put on by Synergy Fronde to launch their new brand. The topic was: “We’d all be better off if IT people ran NZ business”. I was on the negative team. The affirmative team, led by Tom Scott, had painted Trade Me as an IT success story. So, we were running the “inmates are running the asylum” line.

Perhaps I got a bit distracted in talking about marketing as part of this. But Ferrit, with their flashy conventional marketing and crappy product, are just such an easy target. I couldn’t help myself.

That aside, I don’t actually think that this is a disagreement. Here’s the quote again, this time with my emphasis:

Trade Me wasn’t about technology. Sam’s insight was that marketing was broken. Rather than wasting lots of money on big billboards and TV ads (‘it’s shopping on the internet’!) he instead decided to focus on building a really great product, which people like to use and tell their friends about.

So, Alex, aren’t both Andy and I saying the same thing?

Something is happening

Today we added some Ajax-y goodness to the My Favourites page on Trade Me. This makes it much quicker and easier to save your favourite categories, searches and sellers. And on the page itself you can change the email frequency or delete existing favourites without a postback.

All good, and probably well overdue.

One of the lessons we’ve learnt with asynchronous buttons is the importance of an intermediate ‘Saving…’ state.

So, for example, as of today our ‘Save as favourite category’ button starts out looking like this:

Save as favourite category

When you hover over it glows like this:

Save as favourite category

When you click, it immediately changes to look like this:

Saving…

This makes it clear that the click was recorded and that “something is happening™”. Then we go ahead and make the asynchronous call to record this change in the database.

Finally, when the asynchronous call returns, we update the button to look like this:

Saved as favourite category

This is a simple thing, and very easy to implement. But, especially for people on slower connections, it can make the difference between understanding what is happening and being confused.
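For what it’s worth, here’s a minimal sketch of the pattern in TypeScript. The endpoint URL, element id and labels are hypothetical illustrations, not our actual code:

```typescript
// Minimal sketch of a three-state asynchronous save button.
// The "/favourites" endpoint and the button labels are hypothetical.
async function saveFavourite(button: HTMLButtonElement): Promise<void> {
  button.disabled = true;
  button.textContent = "Saving…"; // immediate feedback: "something is happening"
  try {
    const response = await fetch("/favourites", { method: "POST" });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    button.textContent = "Saved as favourite category"; // success state
  } catch {
    // On failure, restore the initial state so the user can retry.
    button.disabled = false;
    button.textContent = "Save as favourite category";
  }
}

// Wiring (assumes a button with this id exists on the page):
document.getElementById("save-favourite")?.addEventListener("click", (e) =>
  saveFavourite(e.currentTarget as HTMLButtonElement)
);
```

Disabling the button during the ‘Saving…’ state also stops double-clicks from firing the call twice, which is a nice side benefit.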

What other tricks are you using in your Ajax-based designs?

I think we’re starting to see some de-facto standards emerge, which is good news for users: sites will increasingly work in ways they are already familiar with.

But there are still lots of problems which don’t have nice solutions.

For example, pages implemented using Ajax often break the back button. This is a problem as the back button is a concept that even the most novice web user quickly learns and relies on when they follow an incorrect link.

Also, using Ajax can cause the URL displayed in the address bar to get out of sync with the content displayed on the page. This makes it difficult to bookmark or link to the page. If you are relying on word-of-mouth to market your site then making it difficult for people to tell their friends (e.g. by sending them a link in an email) could be fatal.
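One common workaround for both problems is to mirror the page state into the URL fragment, so that back/forward and bookmarks keep working. A rough TypeScript sketch, where render() and the view names are my own stand-ins (older browsers needed a polling fallback rather than the hashchange event):

```typescript
// Keep the address bar in sync with Ajax view state using the URL
// fragment, so the back button, bookmarks and emailed links still work.
function render(view: string): void {
  console.log(`showing view: ${view}`); // stand-in for real DOM updates
}

// Navigating pushes a history entry, so the back button can undo it.
function navigate(view: string): void {
  window.location.hash = view;
}

// Back/forward buttons (and bookmarked URLs) change the hash.
window.addEventListener("hashchange", () => {
  render(window.location.hash.slice(1) || "default");
});

// Restore state on first load, e.g. from a link a friend emailed you.
render(window.location.hash.slice(1) || "default");
```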

A community site which documents these sorts of simple, common Ajax design patterns would be really useful. Is anything like this already out there?

www.trademe.com.au

We recently received this email from a frustrated Trade Me user (I’ve removed the name of the company to protect their privacy):

Hi There

We here at [company name] are Trademe fans , and since 2 weeks ago they have barred the trademe address (www.trademe.co.nz) and also the IP address of (http://202.21.128.2) I was wondering if you could provide us with other access points to get on to Trademe, currently we are using (www.trademe.com.au) . We have a lot of staff members here that would love to get back on trademe .

Thanks

Sigh.

Lots of organisations seem to try to block staff from accessing Trade Me. I don’t think it’s unreasonable if somebody is taking the piss and spending all their time browsing the web when they should be working. And, it doesn’t seem to impact us at all.

But, there are surely lots of different ways to solve this problem, and I doubt that blocking IP addresses is the most effective.

I think Mod put it nicely when asked to comment about Carter Holt Harvey blocking Trade Me for their staff in 2005:

Some companies treat their employees like grown-ups. Some don’t. It’s nothing new.

Source: Worldwide Online Auctions News, August 3, 2005

Indeed.

Do kiwis have arms?

I’ve been lucky to work with some really talented designers at Trade Me. Sam, amongst his many talents, is pretty handy in Photoshop. Nigel came on board in 2001 to help with the first major re-design of the site and came up with the colour palette and general feel it still has today (I’ve heard it described as “Tommy Tippee” style). And Tim, who is our current designer and illustrator, is basically a magician.

So, I had to smile when I saw this image of Kevin (the blue kiwi in the Trade Me logo) and his lady friend in the latest Trade Me newsletter.

When Nigel designed the current logo back in the day, he didn’t give Kevin arms or wings, or anything that could be used to hold stuff. It drives Tim crazy, especially when we ask him to draw Kevin doing something interesting. Like voting, or throwing a javelin, or accepting an Academy Award.

We’re not the first company ever to use a cartoon kiwi as our logo. So, how have others dealt with this dilemma?

Take Goldie, the mascot from the 1990 Commonwealth Games. He was a pretty athletic kiwi, but got by somehow with stumpy little arms.

What about the grand-daddy of them all: the TVNZ Goodnight Kiwi? Does he have arms? What kind of arms? Once you think you know the answer, check out this Wikipedia page. You might be surprised!

:-)

P.S. We’re currently trying to hire a Web Designer to be Tim 2.0 (that is, an off-sider for Tim, not a replacement). If you’re interested, check out the job description.

.NET usage on the client

Nic commented on my recent server stats post asking if we have any stats on the percentage usage of .NET on the client. As he pointed out, the CLR version number is included in the IE user agent string.

I took a sample of 70,000 IE users from recent server logs and these are the results:

.NET CLR version    Usage
None                43.9%
1.0.3705             6.7%
1.1.4322            50.8%
2.0.50727           12.5%
3.0.*                1.7%

In case you’re wondering why these percentages add up to more than 100%: it is possible to install multiple versions of the runtime side-by-side. In total, 56.1% of people have one or more versions installed.
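For the curious, extracting these numbers from the logs is straightforward. Here’s a sketch in TypeScript of pulling the CLR tokens out of IE user agent strings and tallying them (the function names are mine):

```typescript
// Extract CLR versions from an IE user agent string, e.g.
// "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1;
//  .NET CLR 1.1.4322; .NET CLR 2.0.50727)".
function clrVersions(userAgent: string): string[] {
  const matches = userAgent.match(/\.NET CLR [\d.]+/g) ?? [];
  return matches.map((m) => m.replace(".NET CLR ", ""));
}

// Tally versions across a log sample. One agent can report several
// versions, which is why the percentages can sum to more than 100%.
function tally(agents: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const agent of agents) {
    for (const version of clrVersions(agent)) {
      counts.set(version, (counts.get(version) ?? 0) + 1);
    }
  }
  return counts;
}
```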

This would give me significant pause if I were developing a client-side application which depends on the runtime being installed.

That’s the beauty of a web application I suppose.

Source data: http://spreadsheets.google.com/pub?key=p03Pw5UOTJJ425das60qoLA

UPDATE (30-Jan): Picking up on Nigel’s comment, I’ve updated the table above to include version 3.0. This number includes both 3.0.04320 (beta) and 3.0.04506.

ASP.NET 2.0

Last week we deployed Trade Me as an ASP.NET 2.0 application. We switched over early on Tuesday morning without even taking the site offline. With luck, nobody noticed. Nonetheless, this is an exciting milestone.

Eighteen months ago all four sites (Trade Me, FindSomeone, Old Friends & SafeTrader) were built using classic ASP, which was starting to show its age. We’ve been working off-and-on since then to bring this code base up-to-date. Most of the heavy lifting was actually done this time last year, when we took the opportunity over the quiet Christmas/New Year period to make a start on the main Trade Me site – taking it from ASP to ASP.NET 1.1.

The opportunity to work on this project was a big part of my motivation for heading home from the UK in 2004. It’s great to reach a point where we can reflect on the huge benefits it has realised, not the least being that we’ve been able to complete this work on our own terms. It’s an awesome credit to the team of people who have made it happen.

Our motivation

I’m pretty proud of the approach we’ve taken. To understand this you really need to understand the motivation for the change in the first place.

In 2004 there were a number of unanswered questions:

How much further could we push ASP before performance was impacted?

Back then, we were actually pretty happy with the performance of our ASP code. It had been tweaked and tuned a lot over the years. We’d ended up building our own ASP versions of a number of the technologies included in ASP.NET, such as caching.

The interaction between ASP and the database, which was (and is!) at the heart of the performance of the site, was pretty carefully managed. For example, we were careful not to keep connections open any longer than absolutely required.

At the application layer we had managed growth by adding more web servers. But, this was something we could only push so far before it would start to create problems for us in other places, most importantly in the database.

While we had confidence that we could continue to work with ASP, that wasn’t necessarily shared by everybody else.

Which led us to the next problem …

How could we continue to attract the best developers to work with us?

It’s hard to believe now that we managed as well as we did without many of the tools and language features that we now take for granted: compiled code, a debugger, a solution which groups together all of the various bits of code, source control to hold this all together, an automated build/deploy process, … the list goes on.

For obvious reasons, we were finding it increasingly difficult to get top developers excited about working with us on an old ASP application.

And there was lots of work to do. As always seems to be the case, there was a seemingly infinite list of functional changes we wanted to make to the site.

So, that left us with the question that had been the major stumbling block to addressing these problems earlier …

How could we make this change without disrupting the vital on-going development of the site?

Looking at the code we had, it was hard to get excited about improving it, and hard to even know where to start. There was a massive temptation to throw it all out and start again.

But, inspired by Joel Spolsky and the ideas he wrote about in Things you should never do, Part I we decided to take the exact opposite approach.

Rebuild the ship at sea

Rather than re-write code we chose to migrate it, one page at a time, one line at a time.

This meant that all of the special cases which had been hacked and patched into the code over the years (which Joel calls “hairy” code) were also migrated, saving us a lot of hassle in re-learning those lessons.

The downside was that we weren’t able to fix all of the places where the design of the existing code was a bit “clunky” (to use a well-understood technical term!). We had to satisfy ourselves in those cases with “better rather than perfect”. As it turned out, none of these really hurt us, and in fact we’ve been able to address many of them already. Once the code was migrated we found ourselves in a much stronger position to fix them with confidence.

Because we had so much existing VBScript code we migrated to VB.NET rather than C# or Java or Ruby. This minimised the amount of code change required (we enforce explicit and strict typing in the compiler, so there was a fair amount of work to do to get some of the code up to those standards, but that would have been required in any case).

We kept the migration work separate from the on-going site work. When migrating we didn’t add new features and we didn’t make database changes. When we were working on site changes we made them to the existing code, leaving it as ASP if necessary, rather than trying to migrate the code at the same time.

We focussed on specific things that we could clean-up in the code as part of the migration process. For example, we added an XHTML DOCTYPE to all pages and fixed any validation errors this highlighted. We moved all database code into separate classes. And, we created controls for common UI elements (in most cases replacing existing ASP include files). We also removed any code which was no longer being used, including entire “deadwood” pages which were no longer referenced.

To build confidence in this approach we started with our smaller sites: first SafeTrader and Old Friends, followed by FindSomeone, and finally Trade Me.

After each site was migrated we updated our plans based on what we’d learnt. The idea was to try and “learn early” where possible. For example, after the Old Friends migration we realised we would need a better solution for managing session data between ASP and ASP.NET, so we used the FindSomeone migration as a test of the solution we eventually used with Trade Me. The performance testing we did as part of the early migrations gave us confidence when it came time to migrate the larger sites.

We re-estimated as we went. By keeping track of how long it was taking to migrate each page, we got accurate metrics which we fed into future estimates.
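The arithmetic behind those estimates is trivial, but keeping it explicit kept them honest. A hypothetical illustration in TypeScript (none of these numbers are our real figures):

```typescript
// Hypothetical illustration of feeding migration metrics into estimates.
interface MigrationLog {
  pagesMigrated: number;
  daysSpent: number;
}

// Estimate remaining effort from the measured rate so far.
function estimateDaysRemaining(history: MigrationLog[], pagesLeft: number): number {
  const pages = history.reduce((sum, h) => sum + h.pagesMigrated, 0);
  const days = history.reduce((sum, h) => sum + h.daysSpent, 0);
  return pagesLeft / (pages / days); // divide by observed pages-per-day
}

// e.g. 120 pages in 30 days => 4 pages/day; 200 pages left => 50 days.
console.log(estimateDaysRemaining([{ pagesMigrated: 120, daysSpent: 30 }], 200));
```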

Finally, we created a bunch of new tools to support our changing dev process. For example, we created a simple tool we call “Release Manager” which hooks into our source control and is used to combine various code changes into packages which can then be deployed independently to our test and production environments. We created an automated process, using NAnt, which manages our build and deploy, including associated database changes. More recently we implemented automated code quality tests and reports using FxCop, NUnit and NCover. All of these mean that, for the first time, we can work on keeping the application itself in good shape as we implement new features.

The results

This has been an exciting transformation. The migration was completed in good time, without major impact on the on-going development of the site – we made it look easy! We added four new developers to the team, all with prior .NET experience, and we got all of our existing dev team members involved in the project, giving them an opportunity to learn in the process. Having more people, along with process improvements and better tools, has enabled us to complete a lot more site improvements. We’re in good shape to tackle even more in the year ahead. We’ve even been pleasantly surprised by the positive impact on our platform, which has allowed us to reduce the number of web servers we use (there are more details in the Microsoft Case Study from mid last year, if you’re interested in this stuff).

As is the nature of this sort of change, we’ll never really finish. With the migration completed we’ve started to think about the next logical set of improvements. It will be exciting to see how much better we can make it.

If you’re interested in being part of this story, we’re always keen to hear from enthusiastic .NET developers. Send your CV to careers@trademe.co.nz.

Collective code ownership

Speaking of rotation … what happens when the same principle is applied to other types of teams? For example, software development teams.

You could argue that we use a rotation policy of sorts within the dev team at Trade Me in that we tend to mix up the projects a bit so that over time everybody works on different parts of the site.

This is a form of collective code ownership, which is not a new idea.

The main benefits are that anybody is able to make changes to any part of the application, without fear of stepping on others’ toes; that no individual becomes a bottleneck when changes are required; and that the team is resilient to changes in roles and personnel (this was also the justification used by Graham Henry for his rotation policy, pointing at the impact injuries to key players had on previous World Cup campaigns).

But what are the associated costs?

As Stefan Reitshamer points out, there is a pretty fine line between everybody owning the code and nobody owning the code. Instead of maximising flexibility and code quality as intended, it becomes a tragedy of the commons.

I’m not sure there is a simple answer to this problem. However, unlike Messrs Henry and Bracewell, we get to work through these trade-offs without the media scrutinising every decision.