Push The Button

Photo: http://flickr.com/photos/unseelie/766346338/

How many steps does it take to get a change live on your website?

Ideally, it should be a one-click process.

Otherwise, when the pressure is on (i.e. when there is a bug on the site that you quickly need to fix) you’re sure to forget some critical step and make an even bigger hole for yourself.

What we called “the deployment process” changed a lot during my time at Trade Me.

In the very early days we just copied ASP scripts directly onto the production server. We only got away with this because there were not many people writing code and there were not many people using the site.

Later, as we moved to having multiple web servers which each required a copy of the code, we created a simple Windows application which copied the files from our local directory onto each of the web servers and would also execute selected SQL scripts against the production database. This was much better, but still relied on the developer doing the push to have the correct files on their local machine.
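The core of that early tool might have looked something like this sketch (the function name and structure are my own illustration, not the actual Trade Me code, and the SQL step is elided):

```python
import shutil
from pathlib import Path

def deploy(source_dir, server_dirs):
    """Copy every file under source_dir into each web server's root.

    The real tool also executed selected SQL scripts against the
    production database; that step is elided here.
    """
    copied = 0
    source = Path(source_dir)
    for server in server_dirs:
        root = Path(server)
        for f in source.rglob("*"):
            if f.is_file():
                target = root / f.relative_to(source)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
                copied += 1
    return copied
```

The weakness is visible right in the signature: whatever happens to be in `source_dir` is what goes live, correct or not.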

As the site got bigger there were some new complications. For a start, there were more people involved. The teams responsible for testing and for maintaining the database and servers got increasingly nervous about developers having the ability to push code at any time. The code base got bigger, making it more difficult to keep in sync. The number of people using the site increased massively, making it less and less practical to simply push code changes multiple times during the day. And we also moved to using ASP.NET, which added the complication of a build step in the process.

To address some of these issues we developed a new tool we called the “Release Manager” which hooked into source control and allowed us to package up changes so that they could be pushed to test or to production with one click (using simple NAnt scripts under the covers). This removed a lot of the complexity and stress from the process.

I’m sure it has continued to evolve since I left – if anybody from Trade Me is reading it would be interesting to hear about how you do it now.

Towards the end of my time there the test team, who had final sign-off on each release (twice per day at that point), got into the habit of queuing up ‘Push The Button’ by the Sugababes on the MP3 player when they were ready for changes to be deployed to production. Every time I hear that song now my pulse increases slightly at the prospect of some site changes going live!

I always thought it would be fun to wire up a proper red button to trigger the deployment, but never got the time …

If you’re interested, I wrote a little more about the tools and processes we used (as at April ’07) here:

Questions from Tim Haines, Part II

How do you manage deployment?

Ten years, or less

Peter Norvig, who I’ve written about here before, has a number of really interesting articles on his site.

Here’s one that stands out:

Teach Yourself Programming in Ten Years

I like this:

“There appear to be no real shortcuts: even Mozart, who was a musical prodigy at age 4, took 13 more years before he began to produce world-class music.”

It’s a short article and well worth a read.

While I’m referencing this, there is also a great quote in the appendix:

“When asked ‘what operating system should I use, Windows, Unix, or Mac?’, my answer is usually: ‘use whatever your friends use.’ The advantage you get from learning from your friends will offset any intrinsic difference between [operating systems]”

Nice – I think I’ll use that.

Features, Ease Of Use & Anti-gravity Machines

There is a curve that seems to apply to all software over time:

Features v Ease of Use

When you start you almost certainly don’t have enough features (and if you do, you probably launched too late).

So, adding features initially makes things easier for users – you’re able to support more user requirements with fewer workarounds. The software continues to get better and better.

But, eventually features start to weigh the application down – more navigation, more options, more for new users to learn – until eventually you end up no better than a product with too few features.

What can you do?

It’s pretty simple. You need to know when to stop adding features (which in practice probably means having a better system for prioritising your development work).

Or, failing that, an anti-gravity machine.

FAQs with attitude

I like websites which demonstrate the personality of the people behind them.

I’ve written about this here several times previously.

Here’s another nice example I spotted recently…

Two of the frequently asked questions listed on instapaper.com:

Will you add (useful feature)?

Maybe!

Instapaper is brand new, and it’s a side project of a developer who works on something bigger, so development time is limited. But great features are always possible, especially if enough people request them.

There are some great ideas in the works… stay tuned.

Will you add (obscure feature)?

Probably not, sorry.

There are plenty of other sites that offer similar functionality but with thousands of additional features to satisfy every obscure desire. Instapaper is great because it’s so simple, and keeping it simple is the first priority.

Nice!

Speed is not a problem you can solve

There are, in my experience, two types of websites:

  1. Websites which are slow; and
  2. Websites which are noticeably slow.

It’s important to understand which of these categories applies to your site.

If the people using your site tell you that they think it’s slow then you are definitely in the second category.

What can you do about this?

You can make sure that you include time in your work plans to make small performance improvements whenever you make changes to the site. This is important because (despite developers’ expectations to the contrary) it is unlikely that the day will ever come when you’ll be able to stop working on new features or bug fixes in order to just focus on performance.

Making your site faster needs to be part of what you constantly do, rather than something that you hope to have time to work on at some point in the future.
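One lightweight way to keep performance part of what you constantly do is to measure it constantly. Here’s a minimal sketch of that idea (the threshold, the in-memory log, and the decorator approach are all my own assumptions for illustration):

```python
import time
from functools import wraps

SLOW_THRESHOLD_MS = 500  # hypothetical budget - tune for your own site
slow_log = []            # in production you'd send these to real logging

def track_timing(handler):
    """Time every request handler and record the slow ones, so speed
    stays visible on every change instead of waiting for a dedicated
    'performance project' that never arrives."""
    @wraps(handler)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > SLOW_THRESHOLD_MS:
                slow_log.append((handler.__name__, round(elapsed_ms)))
    return wrapped
```

With something like this in place, a page that slips over budget shows up in the log the same day the change ships, not months later when users start complaining.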

Using large data sets

Peter Norvig (the Director of Research at Google) started off his ETech presentation with a diagram showing how things used to be (back in the old, old days … like 1994):

Data

At the core, in the past, was the algorithm. Inputs were pretty simple (mouse clicks, keyboard entry). Outputs were equally simple (text, user interface). Data was used simply as a store of input and output. All of the effort and focus went into creating smart algorithms.

However, the massive data sets that Google now has access to allow them to flip this model around. Rather than creating complex, elaborate (and probably inaccurate) algorithms by hand, they instead use a simple statistical model and let the data do the work.

He gave several examples. The most obvious is the Google spell checker, which uses this approach to guess what you might have meant, even when the words you’re looking for don’t appear in any dictionary (e.g. http://www.google.com/search?q=rowan+simson).
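Norvig has published a famous toy version of this idea, and the core fits in a few lines. Here’s a sketch (with a tiny stand-in corpus; the real thing counts words across an enormous one): no spelling rules anywhere, just “which known word, within one edit, occurs most often in the data?”

```python
from collections import Counter

# Toy corpus standing in for the huge data sets; the whole
# "model" is just a table of word frequencies.
CORPUS = "the quick brown fox jumps over the lazy dog the fox".split()
COUNTS = Counter(CORPUS)
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """Every string one edit away: deletes, swaps, replaces, inserts."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Prefer the word itself if known, otherwise the most frequent
    known word within one edit of it."""
    candidates = ({word} & COUNTS.keys()) or (edits1(word) & COUNTS.keys()) or {word}
    return max(candidates, key=COUNTS.__getitem__)
```

Swap the toy corpus for billions of words and the same few lines start to look uncannily smart – which is exactly the point.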

Another is their translation tool which can be trained to convert any text where there are enough examples to “learn” from. Ironically, the limiting factor now with this approach is not the software but the quality of the human translators used for training.

In each case being able to do this simply comes down to having enough data.

This is one of those ideas which is so obvious after you’ve seen it:

If you have lots of data, the way you think about algorithms changes completely.