12th September 2019
Like any organisation that uses data, we know that we are only as good as the quality of the data we have, and how quickly we can deliver responses to queries against that data. For this reason, we're always on the lookout for new technologies that can help us, such as micro-services that decouple the physical architecture from functional platform requirements.
Recently, our engineering and support teams realised that current commercial and non-commercial databases are not optimised to store large volumes of telephone numbers and related data points. Our dataset exceeded 1Bn numbers some time ago and now contains many attributes that need to be instantly retrieved by customers on a per-query basis, for business use cases ranging from optimised superfast A2P SMS routing to KYC and the prevention of fraud. Furthermore, with the growth in customers and use cases for our numbering data, many customers are now sending tens of thousands of numbering and network queries every second.
Since the inception of our platform, the team has amassed a great deal of experience using, optimising and tuning Memcached. However, our head of engineering Claudiu Filip and his team decided that the most efficient way to instantly pull and push large datasets keyed on telephone numbers was to build a totally new database system, with our own specific algorithms to store, categorise and search.
After some careful testing and selection, the team settled on building our new architecture around the all-important hashing function, using an in-house algorithm optimised for telephone numbers and enriched with numbering plan data to minimise collisions inside the hash table.
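To make the idea concrete, here is a minimal sketch of what a numbering-plan-aware hash might look like. Everything below is hypothetical and illustrative only; it is not TMT's actual algorithm, and the names (`split_e164`, `phone_hash`, the sample country-code set) are assumptions. The sketch uses the structure of an E.164 number (country code plus national number) to build the key, then applies multiplicative hashing so that sequential numbers, which would otherwise cluster, spread evenly across buckets:

```python
KNOWN_COUNTRY_CODES = {"1", "33", "40", "44", "49"}  # tiny illustrative sample

def split_e164(number: str) -> tuple:
    """Split a '+'-less E.164 number into (country_code, national_number)."""
    for length in (1, 2, 3):  # ITU country codes are 1-3 digits long
        if number[:length] in KNOWN_COUNTRY_CODES:
            return number[:length], number[length:]
    return "", number  # unknown plan: fall back to hashing the whole number

GOLDEN = 0x9E3779B97F4A7C15  # 64-bit golden-ratio multiplier

def phone_hash(number: str, bucket_bits: int = 20) -> int:
    """Map a telephone number to one of 2**bucket_bits buckets."""
    cc, nn = split_e164(number)
    # Pack the country code into the high bits so numbers from different
    # numbering plans occupy different regions of the key space.
    key = (int(cc or "0") << 44) | int(nn or "0")
    # Multiplicative (Fibonacci) hashing spreads sequential national
    # numbers evenly across the bucket space.
    return ((key * GOLDEN) & 0xFFFFFFFFFFFFFFFF) >> (64 - bucket_bits)
```

Because telephone numbers are allocated in dense sequential ranges, a naive modulo hash would pile neighbouring numbers into neighbouring buckets; mixing in the numbering plan and a multiplicative step is one standard way to avoid that clustering.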
As well as being ideal for this type of application, the new database system also keeps with our ethos of using best-of-breed, open-source tools to maintain the lowest possible cost base whilst not compromising on performance.
Now that the work has been completed, the results are staggering. Most impressive of all is the thing that matters most to our customers: performance.
Using a more streamlined algorithm that is customised to the way TMT Analysis organises our data, we have been able to deliver a massive increase in performance. From a single compute thread, the new lookup algorithm against the database we built can respond to 100 million queries in less than 3.5 seconds! Bearing in mind that all TMT Analysis Velocity customers are allocated multiple threads during the provisioning process, this gives some idea of the platform's capacity to cope with even the most unpredictable levels of demand without any impact on the performance of individual customers.
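The figure above works out to roughly 28 million lookups per second on one thread. A throughput claim like this is easy to sanity-check with a micro-benchmark; the sketch below is a generic, hypothetical harness (not TMT's test suite) that measures best-of-N lookups-per-second against an in-memory store:

```python
import time

def benchmark_lookups(store: dict, keys: list, rounds: int = 5) -> float:
    """Return the best observed lookups-per-second over several rounds."""
    best = 0.0
    for _ in range(rounds):
        start = time.perf_counter()
        for k in keys:
            store[k]  # the lookup under test
        elapsed = time.perf_counter() - start
        best = max(best, len(keys) / elapsed)
    return best

# Tiny demo population; the production dataset exceeds a billion numbers.
store = {str(447911000000 + i): {"network": "example"} for i in range(100_000)}
print(f"{benchmark_lookups(store, list(store)):,.0f} lookups/sec, one thread")
```

Taking the best of several rounds (rather than the mean) filters out interference from the OS scheduler and cache warm-up, which is why it is a common convention for micro-benchmarks of this kind.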
So what does this show?
Many businesses find it hard to cope with sudden increases in demand, or find that a platform that once performed well becomes slow and obsolete over time. TMT Analysis is different: we are constantly re-inventing what we do, using the latest and most fit-for-purpose technology, to deliver accuracy, scalability and speed to our customers, today and tomorrow.
If you are an existing TMT Velocity customer, you may have noticed a performance increase recently. If you're not, and you could benefit from one, then please talk to us.
VP of Product