A battle on multiple fronts

When Adobe acquired Macromedia (for US$3.4 billion!), I have to admit I was sceptical.

Between them they have two of the most widely installed applications on the fringe of web content: Acrobat and Flash.

But, I doubted their ability to integrate the various product lines and generally get their act together. However, get their act together they have.

All of a sudden they are being seriously compared to Microsoft.

Adobe has momentum. Just like that, Microsoft is fighting a battle on multiple fronts.

This week it’s Microsoft’s turn in the spotlight (or is that Silverlight?).

Will be interesting to watch…

UPDATE: Ryan Stewart has some more details on Silverlight.

One wonderful day

We’ve recently been talking about the next round of under-the-covers clean up work at Trade Me.

In the last couple of years we’ve migrated all of our sites from VBScript to ASP.NET and made some pretty major changes to our development and testing process. It’s been a lot of hard work, but the results have made it well worthwhile.

However, the ground is constantly shifting. There are always further improvements to be made. You’re never finished. Sometimes it can feel like you need to run just to stay still.

In all of this I was reminded of an excellent 37signals post which contained this insight:

“The business is not going to slow down to allow you to clean all these things up one wonderful day. It just won’t happen.”

Sometimes the amount of work involved in these sorts of improvements is daunting – for example, thinking about adding automated unit tests to a large code base that was never written with testability in mind can seem like an impossible challenge. But, unless you change the way you are working now, the gap between where you are and where you want to be is only ever going to get bigger and bigger.

To make an application more capable over time it’s important to include enough time in the schedule to remove scar tissue. But you also need to stop cutting yourself.

Questions from Tim Haines, Part II

This is Part II in a two-part series. Part I covers the Trade Me application architecture.

Tim’s second lot of questions are about our dev tools and process:

Q: Any third party tools in the software or the dev/management process?

Q: What source control software do you use, and how do you use it?

Q: How do you manage roll outs? Dev/Staging/Live?

Q: Do you use pair programming, or adopt any other methodologies from the agile world?

The answers to these questions are just a snapshot, capturing how we do things today (early in April, 2007).

I go far enough back to remember when our “development environment” was Jessi’s PC (Jessi at that stage was our entire Customer Service department!). Back then there was no source control as such; we all shared a single set of source files. To deploy a change we would simply copy the relevant ASP files directly onto the live web server and then quickly make the associated database changes.

Somehow it worked!

Ever since then we’ve been constantly tweaking the tools and processes we use to accommodate a growing team and a growing site. As our application and environment have evolved and become more complex, our tools and processes have had to change too.

This change will continue, I’m sure. So, it will be interesting to come back to this post in another 8 years and see if the things I describe below sound as ridiculous then as the things I described above do now.

Also, the standard disclaimer applies to these ideas: what makes sense for us, in our environment and with our site, may not make sense to you in yours. So, please apply your common sense.

Tools

Our developers use Visual Studio as their IDE and Visual SourceSafe for source control.

All of our .NET application code and all of our stored procedures are kept in a SourceSafe project. Developers tend to work in Visual Studio and use the integration with SourceSafe to check files in and out etc.

Thus far we’ve used an exclusive lock approach to source control. So, a developer will check out the file they need to make changes to and hold a lock over that file until the changes are deployed.

However, as the team gets bigger this approach has started to run into problems – for example, where multiple developers are working on changes together, or where larger changes need to be made, causing key files to be locked for longer periods.

To get around these issues, we’re increasingly working on local copies of files, only checking them out and merging the changes in later. I imagine we will shortly switch to an edit-merge-commit approach, and that will require us to look again at alternative source control tools (e.g. SourceGear’s Vault, Microsoft’s Visual Studio Team System or perhaps Subversion – we’d be interested to hear from anybody who’s had experience with any of these).

Release Manager

At the centre of our dev + test process is a tool we’ve built ourselves called the ‘Release Manager’.

This incorporates a simple task management tool, where tasks can be described and assigned to individual developers and testers. It also hooks into source control, and allows a developer to associate source code changes with the task they are working on.

This group of files, which we call a ‘package’, may include ASPX files and VB class files, as well as scripts to create or replace stored procedures in the database.

The tool also incorporates reports which help us track tasks as they progress through the dev + test process. These are built using SQL Reporting Services.

Environments

We have four environments:

  1. Dev: this includes a shared database instance and local web servers for each developer.
  2. Test: this includes a production-like database (actually databases, as we now have multiple instances in production) and a separate web server.
  3. Stage: our pre-production environment, again with its own web server.
  4. Production: our live site, which actually incorporates two environments currently, one in Wellington and one in Auckland.

Developers typically work on changes individually. We have a code-review process, so any code changes have two sets of eyes over them before they hit test.

Once a code change is completed, the developer will create the package in Release Manager and set the task to be ‘ready to test’ so it appears on the radar of the test team.

We have a web-based deployment tool which testers can use to deploy one or more packages into the test environment. This involves some NAnt build scripts which get the source files for the selected packages, copy these into the test environment and then build the .NET assemblies on the test server. The build script also executes any associated database changes, and then updates the status of the package(s) to ‘in test’.

The deploy tool is able to use the data from Release Manager to identify any dependencies between packages. Where dependencies exist we’re forced to deploy packages in a certain order, but in the general case we’re able to deploy packages independently of each other, which gives us a great degree of flexibility and allows us to respond quickly when required (e.g. for an urgent bug fix).
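Under the hood, this kind of dependency ordering amounts to a topological sort. Here’s a rough sketch of the idea – the Package structure and method names are hypothetical, for illustration only, not our actual Release Manager code:

Public Class Package
	Public Name As String
	Public DependsOn As New Generic.List(Of Package)
End Class

' Returns the packages sorted so that every package appears after
' the packages it depends on (a depth-first topological sort)
Public Function OrderForDeploy(ByVal packages As IList(Of Package)) As Generic.List(Of Package)
	Dim ordered As New Generic.List(Of Package)
	Dim state As New Generic.Dictionary(Of Package, Boolean) ' False = visiting, True = done
	For Each p As Package In packages
		Visit(p, state, ordered)
	Next
	Return ordered
End Function

Private Sub Visit(ByVal p As Package, ByVal state As Generic.Dictionary(Of Package, Boolean), ByVal ordered As Generic.List(Of Package))
	If state.ContainsKey(p) Then
		If Not state(p) Then
			Throw New InvalidOperationException("Circular dependency involving " & p.Name)
		End If
		Return ' already in the ordering
	End If
	state(p) = False
	For Each dep As Package In p.DependsOn
		Visit(dep, state, ordered)
	Next
	state(p) = True
	ordered.Add(p)
End Sub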

Production

Once a package has been tested the test team use the same deploy tool to move the package into the stage environment ready for go-live.

From there the responsibility switches to the platform team, who manage our production web servers. They have automated scripts, again built using NAnt, which deploy from stage to our production environment(s). These scripts update configuration files, copy the required production files to the various web server locations, and manage the execution of database scripts. The idea is to get everything as close to the brink as possible (which is the time-consuming part of the deploy process) and then tip everything over the edge as quickly as possible, so as to minimise disruption to the site.

Typically we do two production releases each day, although this number varies (up and down) depending on the specific packages. In most cases these releases are done without taking the site offline.

The bigger picture

Our dev + test process is just one part of a much bigger product management process, which is roughly represented by the diagram below (click for a larger view):

Product Management Process

The other parts of this process are probably fodder for a separate post, but it’s important to note that there is a loop involved here.

Most of the changes we make to the site are influenced heavily by previous changes. In many cases very recent changes. This only works like it does because our process allows us to iterate around this loop quickly and often.

While we don’t follow any formal agile methodology, our process is definitely lightweight. We don’t produce lots of documentation, which is not to say that we don’t design changes up-front – just that we don’t spend too much time translating that thinking into large documents (it’s not uncommon for screen designs to be whiteboard printouts, for example).

While we do make larger changes from time to time (for example, the DVD release which went out last week) the vast majority of changes we make are small and seemingly insignificant. Again, this only works because each of these small changes is able to flow through with minimal friction added by the tools and processes.

I’d also hate to give you the impression that this process is perfect. There is massive room for improvement. The challenge for us is to continue to look for these opportunities.

More?

That’s it for Tim’s questions. I hope some of that was useful?

If you have any other questions, ask away. You can either place a comment below or contact me directly. My email address is in the sidebar to the right.

eBay architecture

Here is a really interesting slide deck about the eBay software development process and the evolution of their architecture (via Matt from the Trade Me database team):

http://www.addsimplicity.com/downloads/eBaySDForum2006-11-29.pdf

As I’ve cautioned previously, unless you happen to run a really, really big website, many of the approaches they describe here are probably more interesting than practical.

As far as I can tell from the architecture diagrams included in this deck, what we have today at Trade Me is a mixture of their v2 architectures. Those maxed out for them at around 50m registered users, so that would suggest we have some more growth to go before we need to start looking too seriously at what they did in v3.

But, nonetheless, it’s interesting to think ahead to the challenges down the track – for example around the architecture of the application code, and the implementation of search within the database.

VB.NET – arrghh!

I get really annoyed when smarmy C# developers look down their noses at VB developers.

If I’m feeling like an argument I’ll ask them to write something in C# that I couldn’t replicate in VB. I’m not saying it’s impossible, but so far nobody has been able to do it (feel free to post a comment below if you’d like to try).

All of the Trade Me code is VB, and on the whole it does a perfectly good job (btw, there is a good reason why we decided to migrate to VB rather than C#).

But, really, sometimes VB just doesn’t help itself.

Witness this recent email from Annie, one of the developers at Trade Me …


From: Annie
To: Development
Subject: Rounding numbers in VB.NET and SQL Server

Hey guys,

I’ve just been debugging a situation in the sell process (for way too long) where two decimal numbers (9.5 and 10.5) were BOTH rounding to 10 by a call to exactly the same code: a simple call to good ol’ CLng.

After searching around, it turns out that CLng, CInt, CByte, CCur and Round all implement what is known as “Banker’s Rounding” in VB.NET. This is also sometimes called “round to nearest”, or “round to even”. Basically, Banker’s Rounding rounds 0.5 up sometimes and down other times. Apparently the convention is to round to the nearest EVEN number. Therefore, 1.5 and 2.5 would both round to 2. Likewise, 3.5 and 4.5 would both round to 4.

If you’re interested, the rationale behind this is that if you were rounding large sets of numbers, constantly rounding 0.5 UP would result in a bias, as only 4 of 9 numbers (0.1, 0.2, 0.3 and 0.4) would cause a round DOWN and the remaining 5 numbers (0.5, 0.6, 0.7, 0.8 and 0.9) would cause a round UP.

ANYWAY, as you can imagine, this causes what seems to be slightly erratic, non-deterministic behaviour to those of us who are used to “arithmetic rounding”. And unfortunately, it looks like there are quite a few calls to CLng, particularly in the sell process. Calls to CLng are currently being used when checking for minimum or maximum values for attributes, and eventually for storing and displaying integer values entered for said attributes. For example, if I enter 49.5m2 for the floor area of the apartment I’m trying to sell, it will round it up to 50m2. But if I enter 50.5m2, it will also round to 50m2. I dunno about you guys, but I was always taught to round 0.5 UP in school, without exception. Anyhoo, some users might consider this to be a bug (I know I find it weird).

To add to the confusion, I just did a bit of a test in SQL Server and it looks like casting a decimal value to an int just truncates the number to the most precise digit (e.g. cast(10.9 as int) = 10).

So, basically just use a bit of caution when using CLng, CInt, CByte, CCur and Round in VB.NET and cast in SQL Server as they may be returning results quite different to what you’d expect, and worse yet, different results when calculated in the code as opposed to a stored proc.

Apparently Format and FormatNumber perform “arithmetic rounding” and although they spit out a string, in many of our cases these functions will do the trick as we tend to treat most of our integers in the sell process as strings anyway.

But yeah, just thought I’d share this with all of you. Please feel free to add to this discussion/rant.

/Annie
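If you want to see the behaviour Annie describes for yourself, a few lines of VB.NET will demonstrate it (the Math.Round overload used here needs .NET 2.0):

Module RoundingDemo
	Sub Main()
		' CLng uses Banker's Rounding: midpoints go to the nearest EVEN number
		Console.WriteLine(CLng(9.5))   ' 10
		Console.WriteLine(CLng(10.5))  ' 10
		Console.WriteLine(CLng(11.5))  ' 12

		' Math.Round with MidpointRounding.AwayFromZero gives the
		' "arithmetic rounding" we were all taught at school
		Console.WriteLine(Math.Round(10.5, MidpointRounding.AwayFromZero)) ' 11

		' Format, as Annie notes, also rounds arithmetically, but returns a string
		Console.WriteLine(Format(10.5, "0")) ' "11"

		' And casting to int in SQL Server does neither - it simply
		' truncates, e.g. cast(10.9 as int) = 10
	End Sub
End Module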

Fat homes

Matt from 37signals recently posted some excerpts from “The Eight Step Home Cure”. They are well worth a read.

This one especially jumped out at me:

But when we take something new into our home, we rarely let go of something else. This is how our home gains weight, grows unhealthy, and begins to nag at us… Most of us aren’t in need of more organizing; we need to manage our consumption, let go of our stuff, and learn how to restore life to our homes.

As a reformed hoarder, I love the analogy of a house slowly getting fat as you fill it with more and more stuff.

As an aside, I actually think that hoarding might be a genetic condition. The good news is that it isn’t necessarily terminal. I recently got my parents hooked on Trade Me (hey, it only took 7 years!). They are now enthusiastically selling off lots of their junk stuff and are well on the way to a full recovery. :-)

Fat software

If you can think of your home as a “living organism” which needs a healthy diet and regular exercise, then software is surely the same.

Applications that have been around for a while and through a number of versions are typically obese with features.

A classic example is Microsoft Office. When the team planning the most recent version asked people what features they would like to have added, many of the things people suggested were actually already in the product. The problem was not too few features but the opposite – there were so many features that people were struggling to find them and use them effectively.

To solve this they came up with their new ribbon UI.

As more and more functionality is added to Trade Me we have started to run into the same sorts of problems. We recently added a Seller Acceleration Centre to our help section to make it easier for big sellers to find the existing features they can use to make their lives easier.

In addition, we haven’t been scared to take the “liposuction” approach and remove functionality altogether when it isn’t used enough to justify its place in the UI. For example, a few months back we removed Trust Webs, which allowed us to give extra prominence to a member’s Blacklist.

So, next time you’re adding a feature to your application, ask yourself what you can remove to keep things in balance.

Like somebody who holds on to every knick-knack that is choking their house on the assumption that it might be useful one day, you’ll find this is harder to do than it sounds.

Broadband usage still under 50%

Whenever I present to a group of technical people I always ask for a show of hands on these questions:

  • Who is using Internet Explorer?
  • Who is using a monitor with a resolution of 800×600?
  • Who uses a dial-up connection at home?

There are usually a handful of IE users, but never any 800×600 users or dial-up users.

I do this to point out how poorly most technical people relate to “normal” users.

Meanwhile out in the real world …

According to the latest Trade Me server stats, about 83% of people use one of the variants of Internet Explorer (including 1.2% on IE5.x – who are these people, and don’t they know somebody, ANYBODY, who can help them to upgrade?).

On the screen resolution front, there are still about 14% of people out there using an 800×600 monitor. I assume many of these people have hardware which is capable of more, but they just don’t know how to make the change, or don’t care to.

And, according to statistics reported last week over 50% of people in NZ are still on dial-up.

It’s depressing! But, let’s not pretend that the audience is something that it’s not.

Questions from Tim Haines, Part I

This is Part I in a two-part series. Part II covers the Trade Me development process and tools.

It’s been a while since we got geeky here, so …

After my recent post about our migration to ASP.NET I got sent a bunch of questions from Tim Haines. I thought I’d try and pick these off over a couple of posts.

To start with, a few questions about our application architecture:

Q: What’s the underlying architecture of Trade Me – presentation layer / business logic / data layer / stored procedures? All isolated on their own servers?

Q: Are there any patterns you find incredibly useful?

Q: Do you use an O/R mapper or code generator, or is all DB interaction highly tuned?

Q: What third party libraries do you use for the GUI? I see you have Dustin’s addEvent. Follow any particular philosophy or library for AJAX?

Here is a basic diagram I use to represent the application architecture we use on all of our sites at Trade Me (click for a larger view):

Application Architecture Diagram

We’ve worked hard to keep this application architecture simple.

There are two layers within the ASP.NET code + one in the database (the stored procedures). I’ll start at the bottom of the diagram above and work up.

Data Layer

All database interaction is via stored procedures. This makes it easier to secure the database against threats like SQL injection. And it also makes it easier to monitor/trace the performance of the SQL code and identify where tuning is required.

Within the application we manage all access to the database via the Data Access Layer (DAL).

All of the classes within the DAL inherit from a common base class, which is a custom class library we’ve created (based loosely on the Microsoft Data Access Application Block). This base class provides all of the standard plumbing required to interact with the database – managing connections, executing stored procedures and processing the results.
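To give you a feel for that plumbing, here’s a minimal sketch of such a base class – the names and details are assumptions for illustration (loosely following the Data Access Application Block), not our actual library:

Imports System.Data
Imports System.Data.SqlClient

Public MustInherit Class DalBase

	' Each derived DAL class supplies its own connection string
	Protected MustOverride ReadOnly Property ConnectionString() As String

	' Execute a stored procedure that returns no rows (inserts, updates etc.)
	Protected Sub ExecuteNonQuery(ByVal procName As String, ByVal ParamArray parameters() As SqlParameter)
		Using conn As New SqlConnection(ConnectionString)
			Using cmd As New SqlCommand(procName, conn)
				cmd.CommandType = CommandType.StoredProcedure
				cmd.Parameters.AddRange(parameters)
				conn.Open()
				cmd.ExecuteNonQuery()
			End Using
		End Using
	End Sub

	' Execute a stored procedure and return a reader; closing the reader
	' also closes the underlying connection
	Protected Function ExecuteDataReader(ByVal procName As String, ByVal ParamArray parameters() As SqlParameter) As SqlDataReader
		Dim conn As New SqlConnection(ConnectionString)
		Dim cmd As New SqlCommand(procName, conn)
		cmd.CommandType = CommandType.StoredProcedure
		cmd.Parameters.AddRange(parameters)
		conn.Open()
		Return cmd.ExecuteReader(CommandBehavior.CloseConnection)
	End Function

	' Typed column helpers which guard against NULLs
	Protected Function GetInteger(ByVal dr As SqlDataReader, ByVal column As String) As Integer
		Dim i As Integer = dr.GetOrdinal(column)
		If dr.IsDBNull(i) Then Return 0
		Return Convert.ToInt32(dr(i))
	End Function

	Protected Function GetString(ByVal dr As SqlDataReader, ByVal column As String) As String
		Dim i As Integer = dr.GetOrdinal(column)
		If dr.IsDBNull(i) Then Return String.Empty
		Return Convert.ToString(dr(i))
	End Function
End Class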

The methods within the DAL classes themselves specify the details of the database logic – specifying which stored procedure to call, managing parameters, validating inputs and processing outputs.

So, for example, to process a new bid we might implement this DAL method:

Public Sub ProcessBid(ByVal auctionId As Integer, ByVal bidAmount As Decimal)
	ExecuteNonQuery("spb_process_bid", _
		New SqlParameter("@auction_id", auctionId), _
		New SqlParameter("@bid_amount", bidAmount))
End Sub

A couple of things to note here:

  • All of our code is VB.NET, so that’s what I’ll use in these examples. Apologies to those of you who prefer curly brackets. Perhaps, try this VB.NET to C# converter ;-)
  • Obviously (hopefully!) this is not actual Trade Me code – just an example to demonstrate the ideas.

When we need to return data from the DAL we use Model classes. These are thin container classes which provide an abstraction from the data model used within the database and mean we don’t need to hold a database connection open while we process the data.

A simplistic Model class might look like this:

Public Class MemberSummary
	Public MemberId As Integer
	Public Name As String
End Class

Some Model classes use properties rather than exposing public member variables directly, and a few include functions and behaviours, but most are just a simple collection of public member variables.
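For example, a property-based version of the class above would look like this (VB 2005 has no auto-implemented properties, so each property needs a backing field):

Public Class MemberSummary
	Private _memberId As Integer
	Private _name As String

	Public Property MemberId() As Integer
		Get
			Return _memberId
		End Get
		Set(ByVal value As Integer)
			_memberId = value
		End Set
	End Property

	Public Property Name() As String
		Get
			Return _name
		End Get
		Set(ByVal value As String)
			_name = value
		End Set
	End Property
End Class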

Model classes are always instantiated within the DAL, never within the Web layer. We don’t pass Model objects as parameters (if you look closely at the diagram above you’ll notice the lines through the Model layer only go upwards). This gives us an explicit interface into our DAL methods.

So, to get a list of members from the database we might implement this DAL method:

Public Function GetMemberSummaries() As IList(Of Model.MemberSummary)

	Dim list As New Generic.List(Of Model.MemberSummary)
	Dim dr As SqlDataReader = Nothing
	Try
		dr = ExecuteDataReader("spt_get_member_summary")
		While dr.Read()
			Dim item As New Model.MemberSummary
			item.MemberId = GetInteger(dr, "member_id")
			item.Name = GetString(dr, "name")
			list.Add(item)
		End While
	Finally
		If dr IsNot Nothing AndAlso Not dr.IsClosed Then
			dr.Close()
		End If
	End Try
	Return list
End Function

DAL methods are grouped into classes based on common functionality. This is an arbitrary split – in theory we only need 6 DAL classes (one class per connection string variation), but in practice we currently have 47.

The two examples above show the patterns that make up the vast majority of DAL methods.

While we don’t use an O/R mapper, we have created a simple tool, which we call DALCodeGen. Using this we can simply specify which proc to call and the tool generates the DAL method and, if appropriate, Model class. This code can then be pasted into the project and tweaked/tuned as required.

Web Layer

All the remaining application code sits in the Web layer. This is a mixture of business and presentation logic, which in part is a reflection of our ASP heritage.

During the migration we created controls to implement our standard page layout, such as the tabs and sidebar which appear on every Trade Me page. These were previously ASP #include files. We’ve also implemented controls for common display elements such as the list and gallery view used when displaying a list of items on the site.

We have a base page class structure. These classes implement a bunch of common methods – for example, security and session management (login etc.), URL re-writing and common display methods.
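As a sketch, the shape of it is something like this – the class and method names here are invented for illustration, not our actual code:

Public MustInherit Class SitePageBase
	Inherits System.Web.UI.Page

	Protected Overrides Sub OnInit(ByVal e As EventArgs)
		MyBase.OnInit(e)
		EnsureSession()	' security and session management (login etc.)
		RewriteUrl()	' friendly URL re-writing
	End Sub

	Protected Sub EnsureSession()
		' e.g. check the member's session and redirect to login if required
	End Sub

	Protected Sub RewriteUrl()
		' e.g. map friendly URLs onto the underlying page and query string
	End Sub

	' Common display helpers shared by every page
	Protected Function FormatPrice(ByVal amount As Decimal) As String
		Return amount.ToString("C")
	End Function
End Class

Individual pages then inherit from this (or from a more specialised child of it) rather than directly from System.Web.UI.Page.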

Most of the page specific display code is currently located in methods which sit in the code behind rather than in controls.

We don’t use the built-in post-back model – in fact ViewState is disabled in our web.config file and only enabled on a case-by-case basis as required (typically only on internal admin pages).

With the exception of addEvent we also don’t currently use any third-party AJAX or JavaScript libraries. To date none of the AJAX functionality we’ve implemented has required the complex behaviours included in these libraries, so we’ve been able to get away with rolling our own simple asynchronous post/call-back logic.

Layers vs. Tiers

Each of the yellow boxes in the diagram above is a project within the .NET solution, so is compiled into a separate .NET assembly. All three assemblies are deployed to the web servers and the stored procedures, obviously, live on the database servers. So, there are only two physical “tiers” within this architecture.

Inspirations

There is no such thing as an original idea.

Most of this design was inspired by the PetShop examples created by Microsoft as a means of comparing .NET to J2EE. These were pretty controversial – there was a lot of debate at the time about the fairness of the comparison. Putting the religious debate to one side, I thought the .NET implementation was a good example of an application designed with performance in mind, which was obviously important to us.

Another reference I found really useful when I first started thinking about this was ‘Application Architecture for .NET: Designing Applications & Services’, which was published by the Patterns & Practices Group at Microsoft. This is still available, although likely now out of date with the release of ASP.NET 2.0. It’s also important to realise that this book is intended to describe all of the various aspects that you might include in your architecture. Don’t treat it as a shopping list – just pick out the bits that apply to your situation.

Disclaimer

I’m a little reluctant to write in detail about how we do things. I’d hate to end up in the middle of a debate about the “right way” to design or architect an application.

Should you follow our lead? Possibly. Possibly not.

I can say this: if somebody has sent you a link to this saying “look, this is how Trade Me does it … it must be right” they are most likely wrong. You should at least make sure they have other supporting reasons for the approach they are proposing.

A lot of our design decisions are driven by performance considerations, given our size and traffic levels. These constraints probably won’t apply to you.

In other cases we choose our approach based on the needs of the dev team. We currently have 8 developers, and ensuring that they can work quickly without getting in each other’s way too much is important. Smaller or larger teams may choose a different approach.

Also, a lot of our code still reflects the fact that it was recently migrated from an ASP code base. If you’re creating an application from scratch you might choose to take advantage of some of the newer language features which we don’t use.

More?

I hope some of that is useful? If you have any other questions, send them through – my email address is in the sidebar to the right.

The rise and rise of IE7

Juha posted recently about his server stats, noting that IE7 has overtaken IE6 and that Firefox is now 1/3rd of the market.

This is a reflection of the tech-savvy-ness of his audience.

Looking at the server stats for February across all of the Trade Me sites:

  • IE7 has increased to 26.7%. While this is up from 12.2% in December, it is still some way behind IE6 at 55.3%.
  • Firefox users are slowly shifting across to Firefox 2.0, which is now up to 6.4%. But, the overall market share across the three different versions of Firefox combined is steady at just under 13%, where it has been for the last 6 months.
  • Windows Vista (or is that Vus-tah in this part of the world?) is only just on the radar at 0.6%.

Interesting times!

Don’t break the back button

One of the golden rules of web development is “Don’t break the back button”.

The back button is one of the first concepts that somebody who is new to the web learns. It gives people the confidence to click on links safe in the knowledge that they can always return to where they were.

Breaking the back button is not a new problem. Jakob Nielsen has been banging on about it for longer than I’ve been using the web.

But, we keep finding new ways to break it.

I wrote recently that this is one of the unsolved problems with applications that use a lot of AJAX. Well, Julien Lecomte from the Yahoo! User Interface team has come up with a possible solution that he’s calling Browser History Manager.

This uses a combination of JavaScript hacks to fool the browser into thinking the page has changed, and so affects what happens when the user clicks the back button. Provided it’s coded smartly, it also allows users to bookmark an AJAX page in a specific state, which is nice.

It’s good to see that people are working on this sort of thing, but to me it feels like a very hacky solution.

At what point does the fact we need to fix these sorts of problems cause us to re-consider the whole approach?

MySpace Application Architecture

http://www.baselinemag.com/article2/0,1540,2082921,00.asp

This is an interesting article about the underlying application architecture of MySpace.

There are quite a few parallels with Trade Me. I recognise a few of the problems that are described in the first few pages (up to 3 million customers):

  • Managing session data across multiple web servers;
  • Using caching to ease the load on the database;
  • Partitioning the database when it’s too big/busy to live on a single server, with the corresponding issues around data replication;
  • Implementing a storage area network (SAN);
  • Bumping into I/O constraints in the database;
  • The impossibility of realistic load testing.

Like us, they have also recently migrated to .NET (in their case from ColdFusion). I previously wrote about our migration experience, if you’re interested.

Clearly, they’ve also had to deal with lots of problems we haven’t run into yet. I was talking to Scott Guthrie from Microsoft at TechEd in Auckland last year and was bragging (just a little!) about how we’d just clocked up 1 billion page impressions the previous month. He’d recently spent some time with the MySpace guys as part of their migration to .NET and told me that at that stage they were serving out 1 billion impressions per day! Ouch! :-)

So, there are probably some pointers here to the sorts of changes we’ll need to consider as Trade Me continues to grow – for example, moving to more of a distributed approach to the database design.

Are we done?

I always take it as a good sign when people start complaining about the colours in a site design. If that’s all they can find to comment on, then more-or-less everything else must be pretty much ready to go.

You can never please everybody. Although, as Kathy Sierra points out, you can easily please nobody (actually she’s been talking about this for a while now). Sometimes you just have to go with what you think is right. And, when you do get opinions from others make sure you’re watching what they do rather than listening to what they say.

:-)

Yahoo Pipes

Yahoo have just announced a new service called Yahoo Pipes. At first glance it looks really interesting.

Jeremy Zawodny has some more details, including this description of the problem they’re trying to solve:

On the web … there are data sources and feeds, but until now we’ve had no pipes! Pulling together and integrating data sources using JavaScript on the client isn’t for the faint of heart. The browser isn’t the same as a Unix command-line, so building mashups has been more frustrating and time consuming than it needs to be – especially for Unix people like me.

Tim O’Reilly also adds his take, including this succinct summary:

[Pipes is] a service that generalizes the idea of the mashup, providing a drag and drop editor that allows you to connect internet data sources, process them, and redirect the output.

The thing is, Unix pipes were never this easy or pretty. The drag-and-drop UI is totally intuitive – I get it without having to read a manual. I especially like the way that the pipes themselves, connecting the different controls, are displayed – no clunky straight lines.

In about 2 minutes I threw together this pipe (is that what they will be called?) which provides an RSS feed of the latest Trade Me site announcements in French!

http://pipes.yahoo.com/pipes/EiPH51S32xGA_mfNqu5lkA/

:-)

Have a play – let me know if you come up with anything interesting.

UPDATE (9-Feb): All I’m getting this morning is an error message saying “Our pipes are clogged. We’ve called the plumber!”. That’s cute, but a real missed opportunity at the same time. It would seem they didn’t anticipate how popular this would be. Start small, but think big and scale quickly!

Something is happening

Today we added some Ajax-y goodness to the My Favourites page on Trade Me. This makes it much quicker and easier to save your favourite categories, searches and sellers. And on the page itself you can change the email frequency or delete existing favourites without a post back.

All good, and probably well overdue.

One of the lessons we’ve learnt with asynchronous buttons is the importance of an intermediate ‘Saving…’ state.

So, for example, as of today our ‘Save as a favourite search’ button starts out looking like this:

Save as favourite category

When you hover over it glows like this:

Save as favourite category

When you click, it immediately changes to look like this:

Saving

This makes it clear that the click was recorded and that “something is happening™”. Then we go ahead and make the asynchronous call to record this change in the database.

Finally, when the asynchronous call returns, we update the button to look like this:

Saved as favourite category

This is a simple thing, and very easy to implement. But, especially for people on slower connections, it can make the difference between understanding what is happening and being left confused.

What other tricks are you using in your Ajax-based designs?

I think we’re starting to see some de facto standards emerge, which is good news for users, as all sites will tend to start working in a way that they are familiar with.

But there are still lots of problems which don’t have nice solutions.

For example, pages implemented using Ajax often break the back button. This is a problem as the back button is a concept that even the most novice web user quickly learns and relies on when they follow an incorrect link.

Also, using Ajax can cause the URL displayed in the address bar to get out of synch with the content displayed on the page. This makes it difficult to bookmark or link to the page. If you are relying on word-of-mouth to market your site then making it difficult for people to tell their friends (e.g. by sending them a link in an email) could be fatal.

A community site which documents these sorts of simple, common Ajax design patterns would be really useful. Is anything like this already out there?

On simplicity

There are two ways of constructing software; one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
C. A. R. Hoare, inventor of the Quicksort algorithm

People often misinterpret complexity as sophistication
Niklaus Wirth, the father of Pascal