People who follow my posts know that I am not a fan of systems that attempt to encode a polygon of the Earth’s surface by turning a latitude/longitude into a theoretically more humanly digestible and usable code. There are a lot of these systems around – Geotude, Postude, GoCode, What3words and so on – and whilst they resist making wild claims we’ll just get along, agreeing to disagree about their usefulness.
I was pulled up short, though, at a conference where Google were presenting about their offering – the Open Location Code (OLC). The presenter suggested that the code was built on the basis of a number of prerequisites, two of which were that it be language- and culturally-independent and that it could be read out over a telephone.
An example OLC looks like this:
The code through the two characters after the + sign identifies a polygon on the Earth’s surface of approximately 14 x 14 metres. The extra string after that refines it to a polygon of approximately 2.6 x 2.8 metres.
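For the technically curious, the mechanics behind that structure can be sketched in a few lines. This is only a sketch based on my reading of the published OLC specification – the official library (github.com/google/open-location-code) additionally handles padding, short codes and the optional refinement character:

```python
# Minimal sketch of Open Location Code encoding, per the published spec.
# OLC uses a 20-character alphabet chosen to avoid spelling words.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"

def encode(lat: float, lng: float) -> str:
    """Encode a latitude/longitude as a 10-character Plus Code."""
    # Shift to positive ranges, then scale so one unit = 1/8000 degree,
    # the resolution of a 10-character code (roughly 14 x 14 metres).
    lat_val = max(0, min(int((lat + 90) * 8000), 180 * 8000 - 1))
    lng_val = int(((lng + 180) % 360) * 8000)
    digits = []
    for _ in range(5):  # five digit pairs: latitude digit, then longitude digit
        digits.append(OLC_ALPHABET[lng_val % 20]); lng_val //= 20
        digits.append(OLC_ALPHABET[lat_val % 20]); lat_val //= 20
    code = "".join(reversed(digits))
    return code[:8] + "+" + code[8:]  # the + separator sits after 8 characters

# The canonical example from the OLC documentation, near Zurich:
print(encode(47.365562, 8.524813))  # 8FVC9G8F+6W
```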
But what about those claims?
You can see immediately that the code contains letters from the Latin script basic 26-letter alphabet. People developing these codes recognise that you can’t create a code by country. There are too many disputed areas, moving borders, countries coming and going and so on, to make that a plausible approach. So they choose to divide the world up into 2×2 or 3×3 metre polygons and to create a code for each one, with each code representing the latitude and longitude of that polygon. By my reckoning (and I’m no mathematician) you’d need between 50 and 80 trillion (50 000 000 000 000 – 80 000 000 000 000) unique codes for global coverage. Using just the 10 numerals you would need a very long code to get this many variations, so the code creators use the alphabet to give them extra characters for the codes and allow the length of each code to be reduced.
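That reckoning can be checked with a few lines of arithmetic. A rough sketch, using my own round numbers (a 3 x 3 metre grid over the whole surface, oceans included):

```python
import math

# Back-of-envelope: how many ~3 m x 3 m cells cover the Earth's surface,
# and how long a code must be for a given alphabet size.
EARTH_RADIUS_M = 6_371_000
surface_m2 = 4 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 square metres
cells = surface_m2 / (3 * 3)                     # ~5.7e13 (~57 trillion cells)

def code_length(num_codes: float, alphabet_size: int) -> int:
    """Smallest code length whose combinations cover num_codes."""
    return math.ceil(math.log(num_codes) / math.log(alphabet_size))

print(f"{cells:.1e} cells")
print("digits only (10 symbols):", code_length(cells, 10))    # 14 characters
print("OLC alphabet (20 symbols):", code_length(cells, 20))   # 11 characters
print("digits + 26 letters (36):", code_length(cells, 36))    # 9 characters
```

This is exactly the trade-off the code creators make: each extra symbol in the alphabet buys a shorter code.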
But let me be very clear. The decimal numbering system is pretty universal (though the glyphs used to represent it are not – another obstacle to be considered), but if you use an alphabet which is used by languages spoken by a minority of the world’s population, then your code is neither language- nor culturally-independent. And allowing the codes to use characters from other language scripts does not remove the language-dependency – it just changes it. The code may use letters which are familiar to you and me, but they would not be familiar to those whose languages use different scripts or writing systems, such as speakers of Armenian, Arabic, Russian, Chinese, Greek, Hebrew, Thai, Hindi and so on. And this has repercussions for the next prerequisite, that the code can be read over the phone. If the code looked like this:
would you know what each letter was called when you were asked to read the code back? I guess not, unless you speak Inuktitut. And it’s no different when you ask users of other language scripts to read back letters in Latin-based codes. Even if you teach a person to read their code by rote, what chance is there that the person to whom the code is being read would understand the names of the letters in, what would be to them, a foreign language?
But Google’s code is interesting to me in one respect. If it is used alongside traditional geographical pointers, such as a place name, you do not need the first part of the code, because the last section is unique within an area of roughly 100 km in each direction. So instead of:
which is meaningless on its own, not easily remembered, error prone and so on, how about:
which immediately provides a human dimension to the code. I think this is a promising basis to build upon.
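The mechanics of that trade-off can be sketched too. Dropping the first four characters leaves a code that repeats only once per degree (roughly 111 km), so a nearby place name is enough to put them back. A minimal, self-contained sketch – deliberately naive, since the official library’s recovery routine also handles reference points that sit near a degree boundary:

```python
# Sketch of short-code recovery: rebuild the dropped prefix from a nearby
# reference point (e.g. the centre of the named town).
OLC_ALPHABET = "23456789CFGHJMPQRVWX"

def encode(lat: float, lng: float) -> str:
    """10-character Plus Code for a lat/lng, per the published OLC spec."""
    lat_val = max(0, min(int((lat + 90) * 8000), 180 * 8000 - 1))
    lng_val = int(((lng + 180) % 360) * 8000)
    digits = []
    for _ in range(5):
        digits.append(OLC_ALPHABET[lng_val % 20]); lng_val //= 20
        digits.append(OLC_ALPHABET[lat_val % 20]); lat_val //= 20
    code = "".join(reversed(digits))
    return code[:8] + "+" + code[8:]

def recover(short_code: str, ref_lat: float, ref_lng: float) -> str:
    """Prepend the reference point's own four-character prefix.
    Naive: assumes the target lies in the same one-degree cell."""
    return encode(ref_lat, ref_lng)[:4] + short_code

# "9G8F+6W" plus a reference near Zurich (~47.4 N, 8.6 E) recovers the full code.
print(recover("9G8F+6W", 47.4, 8.6))  # 8FVC9G8F+6W
```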
We’re left with the problem of the Latin-script letters, which set me off thinking. It’s hard to find an alternative to using letters: finding enough symbols or shapes with some sort of universality, and availability on keyboards, to replace 26 letters seems a non-starter. So, if we’re stuck with (Latin-script) letters, what if we used letters only in the first half of the code – the part which can be dropped when a place name is used – and digits only to indicate the local position? In so doing you reduce the number of possible code combinations, so the code would need to be longer – 12 characters by my reckoning, possibly more if you want to avoid any letter combinations spelling words in any language. So you might get:
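That reckoning can be sanity-checked. The parameters here are my own assumptions – 26 letters identifying a one-degree (~111 km) square, digits for the position inside it – and the answer depends on the resolution you want: 12 characters at OLC’s ~14-metre cell size, 14 at a ~3-metre cell size:

```python
import math

# Back-of-envelope for the letters-then-digits variant: letters identify a
# one-degree square, digits give the position inside it.
def chars_needed(combinations: float, alphabet_size: int) -> int:
    return math.ceil(math.log(combinations) / math.log(alphabet_size))

degree_squares = 180 * 360                  # 64,800 one-degree cells
letters = chars_needed(degree_squares, 26)  # 4 letters (more would be needed
                                            # if word-like strings are banned)

# Inside one degree: a 1/8000-degree step (~14 m, OLC's 10-char resolution)
# needs 8000 x 8000 positions; a ~3 m step needs ~37,000 x 37,000.
digits_14m = chars_needed(8000 ** 2, 10)    # 8 digits  -> 12 characters total
digits_3m = chars_needed(37_000 ** 2, 10)   # 10 digits -> 14 characters total
print(letters + digits_14m, letters + digits_3m)  # 12 14
```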
Again, not easy to remember or very useful (in my opinion) outside a webpage or brochure, but:
is much closer to a usable, human-readable and understandable code. This code is less language-dependent, can be read over the phone, and can be used in all languages:
Building on this, how about attempting to replace the 10 numerals with more universal symbols? Would:
be taking things too far?
What do you think?
I don’t believe that the diversity of the world is conducive to any code working flawlessly – I think that’s illustrated by the leaps one has to make to create any system which would work for even half of us. But perhaps my imaginings come closer to a universal latitude/longitude “postal code” system than some of the other codes currently around, though some tweaking would be required if two places 100 km apart happened to share the same name. It does, at least, reduce, though not remove, the cultural- and language-dependency of the code.