Spider and web

Technical Blog: Spiders and Distance Matrix Generation in Route Optimisation

For this technical blog entry, I shall explore one of the lesser-known details of the route optimisation process (we’ll get to the spider later).

I’ll be talking about what I call the “distance matrix generation” step of the optimisation, giving a little insight into some of the innovative stuff that we developers work on along the way.

If you’d like to know any more about the underlying work that goes into our route optimisation technology, please comment below.

Three years ago, I wrote a route planning application using the underlying road network data, outputting distance, time, and turn-by-turn directions between two points.  Computation time varied according to the distance between the two points, but as a ball-park figure, it took around 100ms (a tenth of a second) to compute a route around 100 miles in length; the blink of an eye.  It was great.

Then one day, a customer came up to us and said: “if I give you 20 waypoints, can you tell me the best order to visit them in?”  “Sure”, I said, and route optimisation was born.

Finding the fastest route

Now, in order to find the best sequence of waypoints, the very first step is to compute the distance between every waypoint and every other waypoint.  The evolutionary optimisation algorithm that I wrote needs this information before it can even start the sequencing process.  With 20 waypoints, that’s 20 x 20 = 400 routes to compute.  (The route from A to B may differ from the route from B to A, due to one-way streets and such.)  Basic maths tells us that at 100ms a route, this computation will take 40 seconds.
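As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic as a throwaway Python sketch (the 100ms per route is just the ball-park figure quoted above, and the function name is my own):

```python
def naive_matrix_seconds(n, seconds_per_route=0.1):
    """Time to build an n x n distance matrix the naive way:
    one full route computation per ordered pair (A to B and B to A differ)."""
    return n * n * seconds_per_route

print(naive_matrix_seconds(20))  # 40.0 seconds for 20 waypoints
```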

So, route optimisation was a slow process.

But, customers being customers, the next one came along and said “that’s great, but I have 300 waypoints”.  And I, enthusiastic and immersed in the Postcode Anywhere philosophy, said merrily “that’s fine, I’ll see what I can do”.  But, oh dear, 300 x 300 is 90,000 routes, and at 100ms a route, that’s two and a half hours of computation.  A couple of hours later, our server would still have been chugging away, with the customer long gone and already halfway round his own route.

Escaping the speed trap

Still, there were a few easy things we could do, and in the past I’ve tried various tactics.  Multi-threading is a useful technique: with eight processing cores on our old servers, we could plan eight routes simultaneously and get the time down to around 19 minutes, but that’s still far too long to be useful.  You can also approximate journey times by saying “Ok, I’ll compute the nearest ten routes to each waypoint and just estimate the other 290”, but that’s inaccurate and leads to sub-optimal results.  It was clear another approach was required.
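To illustrate the multi-threading tactic (names like plan_route and build_matrix are my own placeholders here, not our actual code), spreading the naive computation across eight cores might look something like this in Python:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def plan_route(pair):
    """Placeholder for the single-pair route planner described above
    (in reality a road-network search taking roughly 100ms per call)."""
    origin, destination = pair
    return {"from": origin, "to": destination}

def build_matrix(waypoints, workers=8):
    """Naive matrix build, parallelised across cores: still n x n route computations."""
    pairs = [(a, b) for a, b in product(waypoints, repeat=2) if a != b]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        routes = list(pool.map(plan_route, pairs))
    return dict(zip(pairs, routes))

if __name__ == "__main__":
    print(len(build_matrix(["office", "Hallow", "M5"])))  # 6 ordered pairs
```

Parallelism only divides the 90,000-route workload by the core count, though; the total work is unchanged, which is why it caps out at around 19 minutes on eight cores.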

I made the following real-world observation: if I pick 300 locations and plan a route to each of them from the office, many routes will have identical starts.  Half of them will probably require me to turn right out of the office, turn left through Hallow and head off towards the M5.  Aren’t I creating duplicate work on the servers by re-computing that section of route 150 times?

With this in mind, I set about re-engineering my route planner from scratch and in summer 2010, these improvements went live across our route optimisation web services.  The following is my attempt to shed some light on how the new algorithm improves on the old one.

When my old route planner calculates a route, it starts off at the origin and then repeatedly expands the search outwards along the roads it can take, taking into account traffic speeds, turn restrictions, low bridges, one-way streets and so forth. As the search expands, you can think of the algorithm as creating a kind of ‘spider’ of routes radiating from the origin. Now, we can’t expand every little country lane in the world without the processing time going through the roof, so as we get further from the origin we start ignoring smaller roads. As we get closer to the destination, we start considering the smaller roads again and eventually (after 100ms!), we reach our end point and output the route time, distance and directions.
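What that paragraph describes is essentially a Dijkstra-style search over the road graph. As a rough sketch of a single spider (illustrative only – it leaves out turn restrictions, road-class pruning and everything else that keeps the real planner inside 100ms):

```python
import heapq

def spider(graph, origin):
    """Expand a 'spider' of fastest travel times outward from origin.

    graph: dict mapping node -> list of (neighbour, travel_seconds) edges.
    Returns the travel time in seconds from origin to every reachable node.
    """
    best = {origin: 0.0}
    frontier = [(0.0, origin)]  # priority queue ordered by travel time so far
    while frontier:
        time, node = heapq.heappop(frontier)
        if time > best.get(node, float("inf")):
            continue  # stale queue entry; a faster leg to this node was already found
        for neighbour, edge_time in graph.get(node, []):
            new_time = time + edge_time
            if new_time < best.get(neighbour, float("inf")):
                best[neighbour] = new_time
                heapq.heappush(frontier, (new_time, neighbour))
    return best
```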

Then, for our 300-waypoint customer, we repeat this search 90,000 times, creating an army of 90,000 spiders.  And saturate the server.  And the customer has already given up waiting.

Two spiders are better than one

The new algorithm works a little differently; instead of creating one single spider for a route, it creates two – one for the origin and one for the destination.  It expands the two spiders simultaneously, and at some point, two of the legs will touch, completing our shortest route.

The really clever bit is this: we’re not limited to two spiders.  In my new algorithm, you could have six spiders from three different locations (one spider for routes from each waypoint, and one for routes to each waypoint), and when each spider touches each of the others, you have six routes instead of one.  Scale this up to 600 spiders and you can compute your 300 x 300 waypoint matrix.
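As a toy illustration of that idea (reusing the spider() sketch from above; the real algorithm prunes each spider and is far cleverer about how the legs meet, so treat this purely as a shape-of-the-idea sketch, with all names my own): one forward spider from each waypoint, one backward spider to each waypoint, and routes read off wherever the two touch:

```python
def reverse_graph(graph):
    """Flip every edge so a 'backward' spider can expand towards a waypoint."""
    reversed_edges = {}
    for node, edges in graph.items():
        for neighbour, edge_time in edges:
            reversed_edges.setdefault(neighbour, []).append((node, edge_time))
    return reversed_edges

def distance_matrix(graph, waypoints):
    """2n spiders instead of n x n routes: combine forward and backward spiders
    at the nodes where their legs touch."""
    backward_graph = reverse_graph(graph)
    forward = {w: spider(graph, w) for w in waypoints}            # n spiders out of each waypoint
    backward = {w: spider(backward_graph, w) for w in waypoints}  # n spiders in to each waypoint
    matrix = {}
    for a in waypoints:
        for b in waypoints:
            if a == b:
                continue
            touching = set(forward[a]) & set(backward[b])
            # The fastest a-to-b route passes through some node where the
            # forward spider from a touches the backward spider to b.
            matrix[(a, b)] = min(
                (forward[a][v] + backward[b][v] for v in touching),
                default=float("inf"),
            )
    return matrix
```

On a small test graph, matrix[(a, b)] agrees with running spider(graph, a)[b] directly; the saving is that only 2n spiders are expanded rather than n x n separate origin-to-destination searches.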

The fantastic news for us is that instead of 90,000 spiders taking 2.5 hours, we compute just 600 spiders in around one minute, and all this using only a single processing core. The information we compute is exactly the same, but we just did something amazing – we reduced the time taken to compute the matrix by a factor of 150.

To sum up

In language that will only mean something to geeks like me, I just took a computation that had O(n²) complexity and rewrote it to have O(n) complexity. That’s why I love algorithms – there is so much potential for improvement if you know where to look. Sure, you can throw more powerful computers at anything, but you’d need a 150-server datacentre to beat the speed improvements you can get out of this change!
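For the 300-waypoint case, the arithmetic behind that claim (using the same rough 100ms figure for a single route or a single spider, as quoted above) is simply:

```python
n = 300
old_seconds = n * n * 0.1  # one ~100ms route computation per ordered pair: 9,000s, ~2.5 hours
new_seconds = 2 * n * 0.1  # one forward and one backward spider per waypoint: 60s
print(old_seconds / new_seconds)  # 150.0 -- the factor quoted above
```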


  • Adam

    “Ok, I’ll compute the nearest ten routes to each waypoint and just estimate the other 290”

    Would it work if you divided the map area into a grid and then just computed the distance between each pair of grid squares to start with? That way you could reduce the number of distances to calculate initially.

    Then you could have a spider for each square?

  • Adam

    Do you get the updated route information from the TomToms as well as the speed profile information?

    So you are learning about new roads and one-way streets etc?

    • http://www.routeoptimiser.com Rich

      Hi Adam,

      Thanks for the comments, it’s great to get a bit of feedback.

      The short answer is yes, that’s pretty much what I used to do before I created the new algorithm, but instead of grid squares I used postcode sectors. Postcodes tend to give nicer areas that fit more closely with the natural geography (e.g. not spanning a river with no bridge). When computing estimates, I simply looked up the pre-computed sector to sector route, and then tweaked it slightly based on the position of the start and end relative to the sector centres. This approach tended to give really close estimates, yet they were still only estimates. However, with this new approach detailed above, these kinds of estimates are a thing of the past!

      As for the other questions, we get our road network data from our data provider TomTom. We get the updates quarterly and apply them to the route planner, which means our API is always up to date. I believe I am correct in saying that one of their methods of data collection is linked to analysing users’ routes – for example, if suddenly one day they get 500 TomTom users heading across a field, their data department gets an alert and investigates to see if there’s a new road there. Of course, for the main roads and motorways, they know in advance that they’re going to be built and make sure the data is ready ahead of time.

      Rich.