Moving on

Today is my last day at Trade Me.

In 2000 I was the first person hired who didn’t share a surname with Sam.

After a crazy 18 months I left to live and work in London for a bit. In that first stint I’d seen Trade Me grow from 10,000 members to 100,000. Shortly after that Sam and Jess left too. Whether Trade Me would make it was an open question.

Times have changed!

Nonetheless, I have mixed feelings about leaving for a second time.

You sometimes don’t know what you’ve got until it’s gone. It’s a great crew here and a working environment that I haven’t experienced anywhere else. It’s been an honour and a privilege to be part of it.

On the other hand I’m really excited about the challenges ahead.

I’ll talk more about those here in the next few weeks.

Stay tuned …

The NetGuide Awards & XHTML

Pete has posted his annual review of NetGuide nominated sites.

Interesting reading!

I notice Russell is claiming bragging rights for having the only site which is fully HTML and CSS-compliant.

I was a bit disappointed to read this comment though:

“One thing I did notice is the number of sites now using XHTML, but still using tables for layout. I’m looking at you Trade Me. It seems so frustratingly stupid, why go to all the trouble of moving to XHTML and not use it semantically?”

From: Validation the 2007 NetGuide Awards

This time 12 months ago Trade Me didn’t even have a DOCTYPE.

That was embarrassing!

Moving to XHTML (as part of the migration to .NET) was a big job and shouldn’t be underestimated. We’ve removed a massive amount of non-semantic mark-up as part of that process. But we’ve also been pragmatic about it. Where it was significantly easier to use HTML tables for layout we’ve used them. The net result is that our pages are now mostly valid and much smaller than they used to be, but still with a lot of room for improvement.

There are a lot of people who are very passionate about web standards. That’s a good thing. But sometimes I think they approach their evangelism with a little too much vigour.

Give people some credit for the improvements they make.

Remember that they are often hard won.

Don’t confuse better for best.

P.S. It was a good night at the NetGuide awards for Trade Me. We picked up the award for ‘Best Trading Site’ as well as ‘Best Motoring Site’ and ‘Best Real Estate Site’. Full credit to everybody who has contributed to those successes and thanks too to everybody who voted. And congratulations to SmileCity for picking up the ‘Site of the Year’ award. :-)

A conversation about an API

There has been a lot of interesting discussion around my posts last week about the new Vista sidebar gadget and XML feed, and the follow-up about why Trade Me doesn’t have an API.

Thanks to everybody who has contributed. Be assured that your comments have been widely read here at Trade Me.

If you haven’t already, feel free to add your 10c worth.

A couple of things that are worth following up …

Firstly, people have been busy building wadgets of various persuasions and I promised to provide some links:

I’d be interested to hear from anybody who is using any of these. Are you finding them useful?

There are a few others I’m aware of which are still “under development”, including an OS X widget which Ben and the guys at DNA are throwing together. I’ll post more links here as they come to my attention.

If you’d like to build something but need some inspiration, check out the recently released eBay companion for Firefox. A browser add-on which lets people track their listings in the sidebar like this would be wicked.

Secondly, a few of the comments I received warrant a response:

“I think it’s a bit rich to say that you don’t want other people to build things you might eventually build yourselves. I’d be more inclined to accept that argument if you were likely to get to new features. And, don’t forget, while you sit worrying about what you *might* do at *some* point in the future, your users don’t have the features.”
Nat Torkington

Fair point. We’ve been threatening to build our own listing tool for a few years now without much to show for it. In the meantime people behind tools like Auctionitis have got on with actually building something, which has proved to be a much more effective strategy!

“A cynic might say that the real reason you don’t have an API is because you already own the sector.”

Nat Torkington (again)

Ouch. Nobody is that cynical, are they, Nat?

It’s true that “want to” and “need to” are two different things. But, I think this comes back to my point about having bigger fish to fry. Whenever we decide to invest time in some new functionality we are, at the same time, deciding not to invest time in something else. For each thing we do there is a long (infinite?) list of things we don’t do.

Of course there is also an argument to say that an API would help to alleviate this by letting others fry the smaller fish we don’t have time for. It’s unlikely, for example, that we would have ever prioritised the various tools that have already been built on top of the XML feed (see above), but some people are obviously finding those useful, which is all good.

“I think that lots of NZ websites are afraid to offer feeds as they believe that this will stop people from visiting the main site. Those that do offer feeds, don’t provide full-text feeds, for that same reason. The idea is that if you offer a partial text feed it will encourage users to click through and visit the main site, but this has been proven to be untrue.”

Stuart

I agree it would be great if we could provide more RSS feeds. The “My Favourites” page would be the obvious place to start and new listings within the “$1 reserve” page would be a close second.

The reason why this hasn’t been done has nothing to do with wanting to drive additional traffic to our site. We have lots of traffic already. If anything, we would probably appreciate taking some heat off our listing servers. RSS feeds, which are smaller than HTML pages and more easily cached, would only help with this.

“Any of us could (and some have) easily talk through the issues raised in Rowan’s blog and come up with solutions to the objections regarding versions, support, development time etc etc.

But I believe it falls into the above category because the underlying issue is simply one of trust.

Do they trust us, the people out here, to build things that will increase their value instead of subverting it.

If you’re basically inclined to trust people, then you’re going to be able to invent a million reasons why giving them a means to add value to your business by building their own is going to work.

If you’re basically inclined to distrust people (at least in this context), you will be able to discover a million reasons why it could all go horribly wrong.”

— Richard Clark, in the NZ 2.0 Google Group

I agree with the first part. I’m sure we could find solutions to all of the roadblocks I listed.

But, I think it’s a bit unfair to say that the reason we haven’t done this yet is because we don’t trust people. Our whole business is built on the premise that most people are trustworthy. Every day thousands of Trade Me members send money to people they have never met for goods they have never seen. That requires a lot of trust!

“Do you know of any other NZ web companies apart from ZoomIn that are aiming at consumers and have released APIs?”
Peter Griffin, via email

A good question. I can’t think of any. How about you?

This is something we’ve been talking about internally for a while, so it’s really interesting to get a broader perspective.

Thanks again for being part of the conversation.

Why doesn’t Trade Me have an API?

This is a question I get a lot:

Why doesn’t Trade Me have an API?

It’s actually a slightly frustrating question for me to answer.

Internally I’m usually the one asking this question. Externally, at places where technical people gather, I’m the one defending the fact that we don’t have an API and, what’s more, have no immediate plans to build one.

Why not?

Nat Torkington’s recent post has some of the answers.

It’s not that we haven’t thought about it. There are some legitimate reasons why we’ve chosen to not build an API to date. I thought it would be interesting to talk about some of these and get your thoughts.

Some questions to think about

Would we need to communicate all changes in advance to third party developers? If so, how much in advance? We’re constantly making small changes to the site. We generally deploy site changes twice per day. The cycles can be very short. We sometimes deploy something in the morning and then tweak it later that afternoon. Anything which threatens to slow us down is quite correctly frowned upon.

What happens when we need to make breaking changes to the API? Do we version the API and continue to support older versions? If so, how long do we leave this support in place? If not, what liability do we have if we break a third-party application?

How do we deal with authentication? We put a big effort into keeping Trade Me safe for buyers and sellers. We have a full-time team working on this. One of the problems this team deals with is phishing of members’ login details. We have a simple and consistent message for members: don’t enter your Trade Me email address and password anywhere other than on the Trade Me site. So, obviously allowing third-party developers to build tools which require our users to enter their login details is inconsistent with this. To solve this we’d need to build an alternative authentication process – e.g. the token-based approach used by upcoming.org.
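To make that last point concrete, here’s a hypothetical sketch of how a token-based scheme might hang together. All of the names and the flow here are invented for illustration (and upcoming.org’s actual scheme differs in its details): the member authorises an application on our site, we hand the application a token, and the application presents that token with its requests – it never sees the member’s password.

' Hypothetical sketch only - names and flow invented for illustration.
Public Class ApiAuthentication

	' In-memory stand-in for whatever persistence the real thing would use.
	Private Shared tokens As New Generic.Dictionary(Of String, Integer)

	' Called when a member authorises a third-party application on our site.
	' The application stores this token; it never sees the member's password.
	Public Shared Function IssueToken(ByVal memberId As Integer) As String
		Dim token As String = Guid.NewGuid().ToString("N")
		tokens(token) = memberId
		Return token
	End Function

	' Called on each API request: resolves the presented token back to a
	' member, or returns False if the token was never issued (or was revoked).
	Public Shared Function TryResolveToken(ByVal token As String, ByRef memberId As Integer) As Boolean
		Return tokens.TryGetValue(token, memberId)
	End Function

End Class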

Are we prepared to invest in creating an eco-system where third-party developers can profit? As Nat pointed out to me when I discussed this with him, one of the reasons that Amazon have been so successful with their new web services is that they are creating more value than they are capturing. In other words, they are leaving some money on the table for the people using their API.

Are we prepared to allow our customers to become dependent on a third-party tool? If somebody created a really wicked tool using the API, and lots of our users started to use it, would that limit our ability to innovate in that same area? This is a dilemma that eBay have started to encounter, having created listing tools which compete directly with third-party tools built on top of their API. At the moment I’m not sure we’re prepared to let others build something we then wish we had built. Is that bad?

How do we protect the user experience? How do we protect our brand? We’re currently very protective of both of these things, for very good reasons.

How do we protect our infrastructure? In the past we’ve had to ask people to discontinue or specifically block access to automated external tools which were causing us pain. To an infrastructure guy there is a fine line between a well-meaning but poorly implemented external tool and a Denial of Service attack. In fact, we currently prohibit the use of any “robot, spider, scraper or other automated means to access the Website or information featured on it for any purpose” in our Terms & Conditions (see 4.1 c).

If we build it will they come? Are there enough developers in New Zealand to justify our effort in creating an API? How many people will actually use it? How many people will use the applications they build on top of it?

Do we have bigger fish to fry? Keep in mind that any development work required at our end would be at the expense of something else. Is an API just too much work for us for too little reward? Any argument in favour of an API needs to be more compelling than: all the cool kids have one. :-)

Your thoughts?

What would you do if you were in our position?

Vista gadget for Trade Me

Today we launched a Trade Me gadget for Windows Vista.

It is designed to help you keep an eye on your current listings directly from the sidebar of your desktop. Of course, you can also use it to track listings from any member – just enter the user name of the seller you’re interested in.

More information about installing and using the gadget

Credits: Thanks to Darryl from Microsoft and Jeremy from Mindscape for putting this together for us.

The gadget uses a new XML feed we have implemented, which returns the details of current listings for a given member.

http://www.trademe.co.nz/API/MemberListingFeed.aspx?nickname=movieshack

In theory there is nothing stopping anybody from using this feed to build their own version of this gadget, or perhaps a Mac widget?
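For example, here’s a minimal sketch of consuming the feed in VB.NET. Note that the “Listing” element name in the query below is a guess for the purposes of illustration – inspect the feed itself for the actual schema:

Imports System.Xml

Module FeedExample
	Sub Main()
		' Fetch and parse the member listing feed.
		Dim doc As New XmlDocument()
		doc.Load("http://www.trademe.co.nz/API/MemberListingFeed.aspx?nickname=movieshack")

		' "Listing" is a hypothetical element name.
		For Each node As XmlNode In doc.SelectNodes("//Listing")
			Console.WriteLine(node.OuterXml)
		Next
	End Sub
End Module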

If you do build something interesting using this feed drop me the details and I’ll link to you from here.

One wonderful day

We’ve recently been talking about the next round of under-the-covers clean up work at Trade Me.

In the last couple of years we’ve migrated all of our sites from VBScript to ASP.NET and made some pretty major changes to our development and testing process. It’s been a lot of hard work, but the results have made it well worthwhile.

However, the ground is constantly shifting. There are always further improvements to be made. You’re never finished. Sometimes it can feel like you need to run just to stay still.

In all of this I was reminded of an excellent 37signals post which contained this insight:

“The business is not going to slow down to allow you to clean all these things up one wonderful day. It just won’t happen.”

Sometimes the amount of work involved in these sorts of improvements is daunting – for example, thinking about adding automated unit tests to a large code base that was never written with testability in mind can seem like an impossible challenge. But, unless you change the way you are working now, the gap between where you are and where you want to be is only ever going to get bigger and bigger.

To make an application more capable over time it’s important to include enough time in the schedule to remove scar tissue. But you also need to stop cutting yourself.

Questions from Tim Haines, Part II

This is Part II in a two-part series. Part I covers the Trade Me application architecture.

Tim’s second lot of questions are about our dev tools and process:

Q: Any third party tools in the software or the dev/management process?

Q: What source control software do you use, and how do you use it?

Q: How do you manage roll outs? Dev/Staging/Live?

Q: Do you use pair programming, or adopt any other methodologies from the agile world?

The answers to these questions are just a snapshot, capturing how we do things today (early in April, 2007).

I go far enough back to remember when our “development environment” was Jessi’s PC (Jessi at that stage was our entire Customer Service department!). Back then there was no source control as such; we all shared a single set of source files. To deploy a change we would simply copy the relevant ASP files directly onto the live web server and then quickly make the associated database changes.

Somehow it worked!

Ever since then we’ve been constantly tweaking the tools and processes we use, to accommodate a growing team and a growing site. As our application and environment has evolved and become more complex our tools and process have had to change also.

This change will continue, I’m sure. So, it will be interesting to come back to this post in another 8 years and see if the things I describe below sound as ridiculous then as the things I described above do now.

Also, the standard disclaimer applies to these ideas: what makes sense for us, in our environment and with our site, may not make sense to you in yours. So, please apply your common sense.

Tools

Our developers use Visual Studio as their IDE and Visual SourceSafe for source control.

All of our .NET application code and all of our stored procedures are kept in a SourceSafe project. Developers tend to work in Visual Studio and use the integration with SourceSafe to check files in and out etc.

Thus far we’ve used an exclusive lock approach to source control. So, a developer will check out the file they need to make changes to and hold a lock over that file until the changes are deployed.

However, as the team gets bigger this approach has started to run into problems – for example, where multiple developers are working on changes together, or where larger changes need to be made causing key files to be blocked for longer periods.

To get around these issues, we’re increasingly working on local copies of files, only checking those files out and merging our changes in later. I imagine we will shortly switch to an edit-merge-commit approach, and that will require us to look again at alternative source control tools (e.g. SourceGear’s Vault, Microsoft’s Visual Studio Team System or perhaps Subversion – we’d be interested to hear from anybody who’s had experience with any of these).

Release Manager

At the centre of our dev + test process is a tool we’ve built ourselves called the ‘Release Manager’.

This incorporates a simple task management tool, where tasks can be described and assigned to individual developers and testers. It also hooks into source control, and allows a developer to associate source code changes with the task they are working on.

This group of files, which we call a ‘package’, may include ASPX files, VB class files as well as scripts to create or replace stored procedures in the database.

The tool also incorporates reports which help us track tasks as they progress through the dev + test process. These are built using SQL Reporting Services.

Environments

We have four environments:

  1. Dev: this includes a shared database instance and local web servers for each developer.
  2. Test: this includes a production-like database (actually databases, as we now have multiple instances in production) and a separate web server.
  3. Stage: our pre-production environment, again with its own web server.
  4. Production: our live site, which actually incorporates two environments currently, one in Wellington and one in Auckland.

Developers typically work on changes individually. We have a code-review process, so any code changes have two sets of eyes over them before they hit test.

Once a code change is completed, the developer will create the package in Release Manager and set the task to be ‘ready to test’ so it appears on the radar of the test team.

We have a web-based deployment tool which testers can use to deploy one or more packages into the test environment. This involves some NAnt build scripts which get the source files for the selected packages, copy these into the test environment and then build the .NET assemblies on the test server. The build script also executes any associated database changes that are included, and then updates the status of the package/s to ‘in test’.

The deploy tool is able to use the data from Release Manager to identify any dependencies between packages. Where dependencies exist we’re forced to deploy packages in a certain order, but in the general case we’re able to deploy packages independently of each other, which provides a great degree of flexibility and allows us to respond quickly where required (e.g. when there is an urgent bug fix required).
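To give a feel for the shape of those build scripts, here’s a rough NAnt-style sketch. The target, property and file names are all invented – our actual scripts are considerably more involved:

<!-- Illustrative only: target, property and path names are invented. -->
<target name="deploy-package-to-test">
	<!-- Copy the source files for the selected package into the test environment. -->
	<copy todir="${test.webroot}">
		<fileset basedir="${package.dir}/src">
			<include name="**/*" />
		</fileset>
	</copy>
	<!-- Rebuild the .NET assemblies on the test server. -->
	<exec program="msbuild.exe" commandline="${test.webroot}/Site.sln /p:Configuration=Release" />
	<!-- Execute any database change scripts included in the package. -->
	<exec program="osql" commandline="-S ${test.dbserver} -E -i ${package.dir}/db/changes.sql" />
</target>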

Production

Once a package has been tested the test team use the same deploy tool to move the package into the stage environment ready for go-live.

From there the responsibility switches to the platform team, who manage our production web servers. They have automated scripts, again built using NAnt, which deploy from stage to our production environment/s. These scripts update configuration files, then copy the required production files to the various web server locations, and manage the execution of any database scripts. The idea is to get everything as close to the brink as possible (which is the time-consuming part of the deploy process) and then tip everything over the edge as quickly as possible, so as to minimise disruption to the site.

Typically we do two production releases each day, although this number varies (up and down) depending on the specific packages. In most cases these releases are done without taking the site offline.

The bigger picture

Our dev + test process is just one part of a much bigger product management process, which is roughly represented by the diagram below (click for a larger view):

Product Management Process

The other parts of this process are probably fodder for a separate post, but it’s important to note that there is a loop involved here.

Most of the changes we make to the site are influenced heavily by previous changes. In many cases very recent changes. This only works like it does because our process allows us to iterate around this loop quickly and often.

While we don’t follow any formal agile methodology, our process is definitely lightweight. We don’t produce lots of documentation, which is not to say that we don’t design changes up-front, just that we don’t spend too much time translating that thinking into large documents (it’s not uncommon for screen designs to be whiteboard printouts, for example).

While we do make larger changes from time to time (for example, the DVD release which went out last week) the vast majority of changes we make are small and seemingly insignificant. Again, this only works because each of these small changes is able to flow through with minimal friction added by the tools and processes.

I’d also hate to give you the impression that this process is perfect. There is massive room for improvement. The challenge for us is to continue to look for these opportunities.

More?

That’s it for Tim’s questions. I hope some of that was useful?

If you have any other questions, ask away. You can either place a comment below or contact me directly. My email address is in the sidebar to the right.

DVDs on Trade Me

Last week we released some changes to the DVD category on Trade Me.

Here is the site announcement about this change:

“We’ve made some exciting changes to the DVDs & Movies category designed to make life easier for buyers and sellers.

To place a listing sellers just need to enter the DVD title and we will automatically include all of the other details from our catalogue of over 10,000 titles. This includes cover art, genre, synopsis, director, cast, classification, etc.

Buyers can browse all of the items for sale from the new DVDs & Movies page. Listings are categorised by genre such as Drama or Action, or by other criteria such as New Releases, Top 100 and New Zealand movies.

Important information for sellers:

  • Sellers who have exceeded their free listing allowance will not be charged listing fees for DVDs listed using the catalogue. However, successful DVD sales will continue to count towards your overall free listing allowance.
  • DVDs listed using the catalogue can be listed for up to 14 days at no extra charge.
  • We encourage DVD sellers to list with a start price equal to reserve and to include a Buy Now price. A 25c reserve fee will apply to auctions with a reserve price higher than the start price.
  • DVDs listed using the catalogue receive free gallery. Bold and featured listings are no longer available. All sellers with DVD listings at the time of this change have had promotional fees refunded.
  • All of the titles in the catalogue are classified for sale in NZ. However, sellers must confirm that the specific DVD they are selling is a legitimate copy with a NZ classification sticker.”

(although, of course you already subscribed to the RSS feed for these announcements, right?)

If you haven’t yet, take a minute and check it out.

This change is great for buyers. It makes it much easier to browse the DVD category, as the focus is on the titles rather than on individual listings. So, for example, you can quickly find all of the movies starring Brad Pitt, or all of the movies directed by Tim Burton. If you’re after a specific title you can use the search on the sidebar to track it down.

It will be interesting to see what happens to prices over the next couple of weeks, once the majority of listings are within this catalogue. For example, take Once Were Warriors. As I type there are 8 copies available on the site. The Buy Now prices range from $15 up to $24. It’s hard to believe that the $24 copy will sell now that it’s easy for buyers to quickly find and compare all copies for sale like this.

We’ve also introduced a ‘reserve fee’ of 25c in this category, which should encourage sellers to run auctions with start = reserve (i.e. we don’t expect many sellers to pay this fee). We also encourage sellers to specify fixed shipping costs and a Buy Now price, so that buyers can complete the purchase in one visit.

But, it’s not all about the buyers. As described in the announcement, this change also removes just about all of the pain from the sell process. Whereas in the past you’d need to track down the details for the listing (e.g. from IMDB) and take photos etc, it’s now very quick and easy to place a listing. You just enter the title of the DVD and we do the rest!

What do you think?

A big couple of days

It’s been a big traffic week at Trade Me.

Last Monday we set a new daily record with 35 million page impressions.

Then this Sunday just been, we knocked that out of the park with 38 million page impressions.*

(And, when I say we I actually mean you, of course!)

Either way, that’s a lot of pages!

Obviously there are internal things that influence these fluctuations – site design, speed, etc. But, it’s also interesting to consider some of the external factors.

One is television. There does seem to be a long term correlation between more and more crappy (but cheap to produce) reality television and more and more traffic on Trade Me, although I’m not sure which is the chicken and which is the egg. ;-)

Big sporting events can have an influence too. The night Hamish Carter won his gold medal, traffic was noticeably down. Likewise during the Lions tests in 2005. We should probably plan for a quiet September and October this year during the Rugby World Cup.

Another factor is the weather. People who are outside making the most of the sunny weather are not inside browsing the web.

In this particular case I wonder if daylight saving played a part. All of a sudden it’s darker outside and you need to find something different to fill the evenings. On top of that, Sunday had 25 hours in the day, which doesn’t happen very often!

Any other ideas?

* numbers from Nielsen//NetRatings.

Questions from Tim Haines, Part I

This is Part I in a two-part series. Part II covers the Trade Me development process and tools.

It’s been a while since we got geeky here, so …

After my recent post about our migration to ASP.NET, Tim Haines sent me a bunch of questions. I thought I’d try and pick these off over a couple of posts.

To start with, a few questions about our application architecture:

Q: What’s the underlying architecture of Trade Me – presentation layer / business logic / data layer / stored procedures? All isolated on their own servers?

Q: Are there any patterns you find incredibly useful?

Q: Do you use an O/R mapper or code generator, or is all DB interaction highly tuned?

Q: What third party libraries do you use for the GUI? I see you have Dustin’s addEvent. Follow any particular philosophy or library for AJAX?

Here is a basic diagram I use to represent the application architecture we use on all of our sites at Trade Me (click for a larger view):

Application Architecture Diagram

We’ve worked hard to keep this application architecture simple.

There are two layers within the ASP.NET code + one in the database (the stored procedures). I’ll start at the bottom of the diagram above and work up.

Data Layer

All database interaction is via stored procedures. This makes it easier to secure the database against threats like SQL injection. It also makes it easier to monitor/trace the performance of the SQL code and identify where tuning is required.

Within the application we manage all access to the database via the Data Access Layer (DAL).

All of the classes within the DAL inherit from a common base class, which is a custom class library we’ve created (based loosely on the Microsoft Data Access Application Block). This base class provides all of the standard plumbing required to interact with the database – managing connections, executing stored procedures and processing the results.

The methods within the DAL classes themselves specify the details of the database logic – specifying which stored procedure to call, managing parameters, validating inputs and processing outputs.

So, for example, to process a new bid we might implement this DAL method:

Public Sub ProcessBid(ByVal auctionId As Integer, ByVal bidAmount As Decimal)
	' Delegate to the base class plumbing, which manages the
	' connection and executes the stored procedure.
	ExecuteNonQuery("spb_process_bid", _
		New SqlParameter("@auction_id", auctionId), _
		New SqlParameter("@bid_amount", bidAmount))
End Sub

A couple of things to note here:

  • All of our code is VB.NET, so that’s what I’ll use in these examples. Apologies to those of you who prefer curly brackets. Perhaps try this VB.NET to C# converter ;-)
  • Obviously (hopefully!) this is not actual Trade Me code – just an example to demonstrate the ideas.
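For context, the plumbing provided by the common base class might look roughly like this. This is a simplified sketch of the pattern only – the class and helper names are mine, and the real library’s connection-string management and error handling have been trimmed right down:

Imports System.Data
Imports System.Data.SqlClient

' Illustrative sketch of the base class idea - not actual Trade Me code.
Public MustInherit Class DalBase

	' In the real library the connection string varies by DAL class.
	Protected MustOverride ReadOnly Property ConnectionString() As String

	' Execute a stored procedure that returns no rows.
	Protected Sub ExecuteNonQuery(ByVal procName As String, ByVal ParamArray parameters() As SqlParameter)
		Using conn As New SqlConnection(ConnectionString)
			Using cmd As New SqlCommand(procName, conn)
				cmd.CommandType = CommandType.StoredProcedure
				cmd.Parameters.AddRange(parameters)
				conn.Open()
				cmd.ExecuteNonQuery()
			End Using
		End Using
	End Sub

	' Execute a stored procedure and return a reader. CloseConnection means
	' the connection is released when the caller closes the reader.
	Protected Function ExecuteDataReader(ByVal procName As String, ByVal ParamArray parameters() As SqlParameter) As SqlDataReader
		Dim conn As New SqlConnection(ConnectionString)
		Dim cmd As New SqlCommand(procName, conn)
		cmd.CommandType = CommandType.StoredProcedure
		cmd.Parameters.AddRange(parameters)
		conn.Open()
		Return cmd.ExecuteReader(CommandBehavior.CloseConnection)
	End Function

	' Null-safe column helpers of the kind used in the next example.
	Protected Function GetInteger(ByVal dr As SqlDataReader, ByVal column As String) As Integer
		Dim i As Integer = dr.GetOrdinal(column)
		If dr.IsDBNull(i) Then Return 0
		Return dr.GetInt32(i)
	End Function

	Protected Function GetString(ByVal dr As SqlDataReader, ByVal column As String) As String
		Dim i As Integer = dr.GetOrdinal(column)
		If dr.IsDBNull(i) Then Return Nothing
		Return dr.GetString(i)
	End Function

End Class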

When we need to return data from the DAL we use Model classes. These are thin container classes which provide an abstraction from the data model used within the database and mean we don’t need to hold a database connection open while we process the data.

A simplistic Model class might look like this:

Public Class MemberSummary
	Public MemberId As Integer
	Public Name As String
End Class

Some Model classes use properties rather than exposing public member variables directly, and a few include functions and behaviours, but most are just a simple collection of public member variables.

Model classes are always instantiated within the DAL, never within the Web layer. We don’t pass Model objects as parameters (if you look closely at the diagram above you’ll notice the lines through the Model layer only go upwards). This gives us an explicit interface into our DAL methods.

So, to get a list of members from the database we might implement this DAL method:

Public Function GetMemberSummaries() As IList(Of Model.MemberSummary)

	Dim list As New Generic.List(Of Model.MemberSummary)
	Dim dr As SqlDataReader = Nothing
	Try
		dr = ExecuteDataReader("spt_get_member_summary")
		While dr.Read()
			Dim item As New Model.MemberSummary
			item.MemberId = GetInteger(dr, "member_id")
			item.Name = GetString(dr, "name")
			list.Add(item)
		End While
	Finally
		' Always close the reader, which also releases the underlying connection.
		If dr IsNot Nothing AndAlso Not dr.IsClosed Then
			dr.Close()
		End If
	End Try
	Return list
End Function

DAL methods are grouped into classes based on common functionality. This is an arbitrary split – in theory we only need 6 DAL classes (one class per connection string variation), but in practice we currently have 47.

The two examples above show the patterns that make up the vast majority of DAL methods.

While we don’t use an O/R mapper, we have created a simple tool, which we call DALCodeGen. Using this we can simply specify which proc to call and the tool generates the DAL method and, if appropriate, the Model class. This code can then be pasted into the project and tweaked/tuned as required.

Web Layer

All the remaining application code sits in the Web layer. This is a mixture of business and presentation logic, which in part is a reflection of our ASP heritage.

During the migration we created controls to implement our standard page layout, such as the tabs and sidebar which appear on every Trade Me page. These were previously ASP #include files. We’ve also implemented controls for common display elements such as the list and gallery view used when displaying a list of items on the site.

We have a base page class structure. These classes implement a bunch of common methods – for example, security and session management (login etc), URL re-writing, common display methods, etc.
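As a purely hypothetical illustration of that idea (the class name, session key and helper below are all invented, not our actual code), a page that inherits from the base class picks up the common behaviour for free:

' Hypothetical sketch - not actual Trade Me code.
Public MustInherit Class SecurePageBase
	Inherits System.Web.UI.Page

	Protected Overrides Sub OnInit(ByVal e As EventArgs)
		MyBase.OnInit(e)
		' Common plumbing that every page needs: session checks,
		' login redirects, URL re-writing hooks, etc.
		If Session("MemberId") Is Nothing Then
			Response.Redirect("~/Login.aspx?return=" & Server.UrlEncode(Request.RawUrl))
		End If
	End Sub

	' Example of a common display helper available to every page.
	Protected Function FormatPrice(ByVal amount As Decimal) As String
		Return amount.ToString("C")
	End Function

End Class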

Most of the page specific display code is currently located in methods which sit in the code behind rather than in controls.

We don’t use the built-in post-back model – in fact ViewState is disabled in our web.config file and only enabled on a case-by-case basis as required (typically only on internal admin pages).
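Turning ViewState off globally is a one-line change in web.config. This is standard ASP.NET 2.0 configuration, nothing Trade Me-specific:

<system.web>
	<!-- Off by default; individual pages or controls opt back in with
	     EnableViewState="true" where they genuinely need it. -->
	<pages enableViewState="false" />
</system.web>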

With the exception of addEvent we also don’t currently use any third-party AJAX or JavaScript libraries. To date none of the AJAX functionality we’ve implemented has required the complex behaviours included in these libraries, so we’ve been able to get away with rolling our own simple asynchronous post/call-back logic.

Layers vs. Tiers

Each of the yellow boxes in the diagram above is a project within the .NET solution, so is compiled into a separate .NET assembly. All three assemblies are deployed to the web servers and the stored procedures, obviously, live on the database servers. So, there are only two physical “tiers” within this architecture.

Inspirations

There is no such thing as an original idea.

Most of this design was inspired by the PetShop examples created by Microsoft as a means of comparing .NET to J2EE. These were pretty controversial – there was a lot of debate at the time about the fairness of the comparison. Putting the religious debate to one side, I thought the .NET implementation was a good example of an application designed with performance in mind, which was obviously important to us.

Another reference I found really useful when I first started thinking about this was ‘Application Architecture for .NET: Designing Applications & Services’, which was published by the Patterns & Practices Group at Microsoft. This is still available, although likely now out of date with the release of ASP.NET 2.0. It’s also important to realise that this book is intended to describe all of the various aspects that you might include in your architecture. Don’t treat it as a shopping list – just pick out the bits that apply to your situation.

Disclaimer

I’m a little reluctant to write in detail about how we do things. I’d hate to end up in the middle of a debate about the “right way” to design or architect an application.

Should you follow our lead? Possibly. Possibly not.

I can say this: if somebody has sent you a link to this saying “look, this is how Trade Me does it … it must be right” they are most likely wrong. You should at least make sure they have other supporting reasons for the approach they are proposing.

A lot of our design decisions are driven by performance considerations, given our size and traffic levels. These constraints probably won’t apply to you.

In other cases we choose our approach based on the needs of the dev team. We currently have 8 developers and ensuring that they can work quickly without getting in each other’s way too much is important. Smaller or larger teams may choose a different approach.

Also, a lot of our code still reflects the fact that it was recently migrated from an ASP code base. If you’re creating an application from scratch you might choose to take advantage of some of the newer language features which we don’t use.

More?

I hope some of that is useful? If you have any other questions send them through – my email address is in the sidebar to the right.

Is marketing broken?

Rod posted a quote I made recently about Trade Me. The emphasis is his:

Trade Me wasn’t about technology. Sam’s insight was that marketing was broken. Rather than wasting lots of money on big billboards and TV ads (‘it’s shopping on the internet’!) he instead decided to focus on building a really great product, which people like to use and tell their friends about.

Andy Lark disagrees. At least, Alex from Base4 thinks he did.

To start, you need to understand the context. It was a light-hearted breakfast debate put on by Synergy Fronde to launch their new brand. The topic was: “We’d all be better off if IT people ran NZ business”. I was on the negative team. The affirmative team, led by Tom Scott, had painted Trade Me as an IT success story. So, we were running the “inmates are running the asylum” line.

Perhaps I got a bit distracted in talking about marketing as part of this. But Ferrit, with their flashy conventional marketing and crappy product, are just such an easy target. I couldn’t help myself.

That aside, I don’t actually think that this is a disagreement. Here’s the quote again, this time with my emphasis:

Trade Me wasn’t about technology. Sam’s insight was that marketing was broken. Rather than wasting lots of money on big billboards and TV ads (‘it’s shopping on the internet’!) he instead decided to focus on building a really great product, which people like to use and tell their friends about.

So, Alex, aren’t both Andy and I saying the same thing?

Something is happening

Today we added some Ajax-y goodness to the My Favourites page on Trade Me. This makes it much quicker and easier to save your favourite categories, searches and sellers. And on the page itself you can change the email frequency or delete existing favourites without a post back.

All good, and probably well overdue.

One of the lessons we’ve learnt with asynchronous buttons is the importance of an intermediate ‘Saving…’ state.

So, for example, as of today our ‘Save as a favourite search’ button starts out looking like this:

Save as favourite category

When you hover over it glows like this:

Save as favourite category

When you click, it immediately changes to look like this:

Saving

This makes it clear that the click was recorded and that “something is happening™”. Then we go ahead and make the asynchronous call to record this change in the database.

Finally, when the asynchronous call returns, we update the button to look like this:

Saved as favourite category

This is a simple thing, and very easy to implement. But, especially for people on slower connections, this can make the difference between understanding what is happening and confusion.

What other tricks are you using in your Ajax-based designs?

I think we’re starting to see some de facto standards emerge, which is good news for users, as all sites will tend to start working in a way that they are familiar with.

But there are still lots of problems which don’t have nice solutions.

For example, pages implemented using Ajax often break the back button. This is a problem as the back button is a concept that even the most novice web user quickly learns and relies on when they follow an incorrect link.

Also, using Ajax can cause the URL displayed in the address bar to get out of synch with the content displayed on the page. This makes it difficult to bookmark or link to the page. If you are relying on word-of-mouth to market your site then making it difficult for people to tell their friends (e.g. by sending them a link in an email) could be fatal.

A community site which documents these sort of simple common Ajax design patterns would be really useful. Is anything like this already out there?

www.trademe.com.au

We recently received this email from a frustrated Trade Me user (I’ve removed the name of the company to protect their privacy):

Hi There

We here at [company name] are Trademe fans , and since 2 weeks ago they have barred the trademe address (www.trademe.co.nz) and also the IP address of (http://202.21.128.2) I was wondering if you could provide us with other access points to get on to Trademe, currently we are using (www.trademe.com.au) . We have a lot of staff members here that would love to get back on trademe .

Thanks

Sigh.

Lots of organisations seem to try to block staff from accessing Trade Me. I don’t think it’s unreasonable if somebody is taking the piss and spending all their time browsing the web when they should be working. And, it doesn’t seem to impact us at all.

But, there are surely lots of different ways to solve this problem, and I doubt that blocking IP addresses is the most effective.

I think Mod put it nicely when asked to comment about Carter Holt Harvey blocking Trade Me for their staff in 2005:

Some companies treat their employees like grown-ups. Some don’t. It’s nothing new.

Source: Worldwide Online Auctions News, August 3, 2005

Indeed.