
How Google could improve Open Location Codes

People who follow my posts know that I am not a fan of systems that attempt to encode a polygon of the Earth’s surface by turning its latitude/longitude into a theoretically more humanly digestible and usable code. There are a lot of these systems around – Geotude, Postude, GoCode, What3words and so on – and whilst they resist making wild claims, we’ll just have to agree to disagree about their usefulness.

I was pulled up short, though, at a conference where Google were presenting their offering – the Open Location Code (OLC). The presenter suggested that the code was built on a number of prerequisites, two of which were that it be language- and culturally-independent and that it could be read out over a telephone.

An example OLC looks like this:

7M4R9P3B+G5

The eight characters before the + sign identify a cell of the Earth’s surface of approximately 275 x 275 metres; the two characters after it refine this to approximately 14 x 14 metres, and a further optional character takes it down to roughly 3 x 3 metres.

But what about those claims?

You can see immediately that the code contains letters from the basic 26-letter Latin alphabet. People developing these codes recognise that you can’t create a code by country: there are too many disputed areas, moving borders, and countries coming and going to make that a plausible approach. So they choose to divide the world up into 2×2 or 3×3 metre polygons and to create a code for each one, with each code representing the latitude and longitude of that polygon. By my reckoning (and I’m no mathematician) you’d need between 50 and 80 trillion (50 000 000 000 000 – 80 000 000 000 000) unique codes for global coverage. Using just the ten numerals you would need a very long code to get this many variations, so the code creators use the alphabet to give themselves extra characters and so reduce the length of each code.
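
To put some rough numbers on that, here is a back-of-envelope sketch in Python. The Earth’s surface area and the 3 x 3 metre cell size are rounded assumptions on my part; OLC itself in fact uses a 20-character alphabet of digits and selected letters, chosen to avoid vowels and easily confused pairs:

import math

EARTH_SURFACE_M2 = 510e12   # Earth's surface area, roughly 510 trillion square metres
CELL_M = 3                  # assumed cell size of roughly 3 x 3 metres

# Cells needed for global coverage; this lands inside the
# 50-80 trillion range estimated above.
cells = EARTH_SURFACE_M2 / (CELL_M * CELL_M)

def code_length(alphabet_size: int) -> int:
    """Characters needed to give every cell a unique code."""
    return math.ceil(math.log(cells, alphabet_size))

print(f"cells needed:             {cells:.1e}")        # ~5.7e+13
print(f"digits only (base 10):    {code_length(10)}")  # 14 characters
print(f"OLC alphabet (base 20):   {code_length(20)}")  # 11 characters
print(f"digits + letters (36):    {code_length(36)}")  # 9 characters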

But let me be very clear. Whilst the decimal numbering system is pretty universal (though the glyphs used to represent it are not – another obstacle to be considered), if you use an alphabet from a script used by languages spoken by a minority of the world’s population, then your code is neither language- nor culturally-independent. And allowing the codes to use characters from other language scripts does not remove the language-dependency – it just changes it. The code may use letters which are familiar to you and me, but they would not be familiar to those whose languages use different scripts or writing systems, such as speakers of Armenian, Arabic, Russian, Chinese, Greek, Hebrew, Thai, Hindi and so on. And this has repercussions for the next prerequisite, that the code can be read over the phone. If the code looked like this:

7ᕋ4ᐃ9ᑎ3ᓄ+ᓯ5

would you know what each letter was called when you were asked to read the code back? I guess not, unless you speak Inuktitut. And it’s no different when you ask users of other language scripts to read back letters in Latin-based codes. Even if you teach a person to read their code by rote, what chance is there that the person to whom the code is being read would understand the names of the letters in what would be, to them, a foreign language?

But Google’s code is interesting to me in one respect. If it is used alongside a traditional geographical pointer, such as a place name, you do not need the first part of the code, because the last section is unique within an area of roughly 100 km in each direction. So instead of:

7M4R9P3B+G5

which is meaningless on its own, not easily remembered, error-prone and so on, how about:

9P3B+G5 KATHMANDU

which immediately provides a human dimension to the code. I think this is a promising basis to build upon.
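
As it happens, this is exactly what OLC’s “short codes” are designed for: Google’s open-source library can drop the leading characters and recover them from any nearby reference point. Here is a minimal sketch using the openlocationcode Python package, with approximate coordinates for Kathmandu standing in for the place-name lookup (which is assumed to happen elsewhere, say in a gazetteer):

# pip install openlocationcode
from openlocationcode import openlocationcode as olc

lat, lng = 27.7172, 85.3240                    # approximate centre of Kathmandu

full_code = olc.encode(lat, lng)               # full, globally unique code
short_code = olc.shorten(full_code, lat, lng)  # leading characters dropped

# A place name supplies the reference point needed to restore them:
recovered = olc.recoverNearest(short_code, lat, lng)
assert recovered == full_code

print(short_code, "KATHMANDU", "->", recovered)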

We’re left with the problem of the Latin-script letters, which set me off thinking. It’s hard to find an alternative to using letters: finding enough symbols and shapes with some sort of universality, and availability on keyboards, to replace 26 letters seems to be a non-starter. So, if we’re stuck with (Latin-script) letters, what if we used letters only in the first half of the code – the part which can be dropped when a place name is used – and numerals only to indicate the local position? In so doing you reduce the number of possible combinations, so the code would need to be longer – 12 characters by my reckoning (there’s a back-of-envelope sketch after the examples below), possibly more if you want to stop any letter combination from spelling a word in some language. So you might get:

PHIGTRA728564+632

Again, not easy to remember or very useful (in my opinion) outside a webpage or brochure, but:

728564+632 KATHMANDU

is much closer to a usable, human-readable and understandable code. It is less language-dependent, can be read over the phone, and can be used in all languages:

٧٢٨٥٦٤+٦٣٢ كاتماندو

728564+632 Катманду

728564+632 Κατμαντού

זבחהוד+וגב קאטמאנדו
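
As a sanity check on the length of such a split code, the same counting arithmetic can be applied to this letters-then-digits scheme. This is only a sketch: the 3 x 3 metre cell and the 100 km locality are my assumptions, and changing either moves the totals by a character or two.

import math

EARTH_SURFACE_M2 = 510e12   # approximate
CELL_M = 3                  # assumed local cell size, roughly 3 x 3 metres
LOCAL_KM = 100              # a place name is assumed unique within ~100 km

# Local part: digits only, covering one 100 x 100 km square.
local_cells = (LOCAL_KM * 1000 / CELL_M) ** 2
local_digits = math.ceil(math.log10(local_cells))         # 10 digits

# Global part: letters only, one code per 100 x 100 km square.
global_squares = EARTH_SURFACE_M2 / (LOCAL_KM * 1000) ** 2
global_letters = math.ceil(math.log(global_squares, 26))  # 4 letters

print(f"digits for the local part:   {local_digits}")
print(f"letters for the global part: {global_letters}")
print(f"total code length:           {global_letters + local_digits}")

Under these assumptions the arithmetic gives around 14 characters in total; choose a coarser cell (say 10 x 10 metres) and you land nearer 12.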

Building on this, how about attempting to replace the 10 numerals with more universal symbols? Would:

#%<:^Δ+^*% KATHMANDU

be taking things too far?

What do you think?

I don’t believe that the diversity of the world is conducive to any code working flawlessly – I think that’s illustrated by the leaps one has to make to create any system which would work for even half of us – but perhaps my imaginings come closer to a universal latitude/longitude “postal code” system than some of the other codes currently around, though some tweaking would be required if two places 100 km apart happened to share the same name. It does, at least, reduce, though not remove, the cultural- and language-dependency of the code.


  • Merton Hale

    I read your post with great interest.

    I have developed a geotagging system that assigns a unique six-character alphanumeric code to each address or Point of Interest. The characters do not have to be from the Latin alphabet. The problem you mention about disputed areas can be easily solved using the method I have developed.

    Yes, it is database-driven. You have to assign the codes to addresses first. The other methods, such as What3words, Mapcode or Google’s Open Location Codes, are not database-driven. Four or five years ago the cost of the memory for a smartphone or tablet to hold the database would have been prohibitive, but today the cost is only a few dollars/euros/pounds. There are many other advantages to a database-driven system, but that is a longer conversation.

    Obviously, for this method to become widely used I need to partner with a major player such as Google, or Next, or Apple, or … I’m attempting to do this, but it is not easy, as I’m sure you can imagine.

    Full details of this geotagging method can be found at my website http://www.our-qcodes.com

    I’d love the opportunity to discuss this with you. Posts such as this are too limiting.

    The qCode geotagging method/system is patent pending in the USA and Europe. I’d love to hear your comments on this.

    You can reach me at merton.hale@gmail.com

    Regards,
    Merton Hale