Servicing Bad Debt
Imagine hundreds of millions of active debt instruments covering motorized vehicles of all sorts, all across North America, over a period of 7 years.
Now imagine having significant problems identifying poorly performing instruments, such that it takes more than 24 hours to make that determination. That places you at least a day behind the event and leaves you in a reactive, not proactive, state.
Given the thousands of points of sale across 4 time zones, managing all existing instruments as well as all new instruments is a significant concurrency problem. Now add to that the need to quickly identify who missed a payment in the last 24 hours.
Problems like this tend to sneak up on successful companies. Over time the problem becomes overwhelming, and it becomes impossible for the IT operations line and staff to see the forest for the trees.
By following the process and working together, we were able to identify the existing anti-patterns that were impeding their success. A solution that optimized concurrent processing across distributed nodes (on-premises and off-site) proved to meet their needs and provide rapid, real-time access to each and every instance of bad debt. The analytics were accomplished using BigQuery.
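The core of the "who missed a payment in the last 24 hours" question can be sketched in a few lines. This is a minimal, hypothetical illustration, not the client's actual schema or query: the field names (`next_due`, `last_payment`) and the 24-hour cutoff rule are assumptions made for the example.

```python
from datetime import datetime, timedelta

def overdue_instruments(instruments, now):
    """Return ids of instruments with a payment due more than 24 hours
    ago that has not been satisfied. Field names are illustrative."""
    cutoff = now - timedelta(hours=24)
    return [
        i["id"]
        for i in instruments
        if i["next_due"] <= cutoff and i["last_payment"] < i["next_due"]
    ]

now = datetime(2024, 1, 2, 12, 0)
instruments = [
    {"id": "A", "next_due": datetime(2024, 1, 1, 9, 0),
     "last_payment": datetime(2023, 12, 1)},        # overdue beyond 24h
    {"id": "B", "next_due": datetime(2024, 1, 2, 9, 0),
     "last_payment": datetime(2023, 12, 2)},        # due, still inside 24h
    {"id": "C", "next_due": datetime(2024, 1, 1, 9, 0),
     "last_payment": datetime(2024, 1, 1, 10, 0)},  # already paid
]
print(overdue_instruments(instruments, now))  # -> ['A']
```

At scale this predicate runs as a query over the full instrument set rather than an in-memory scan, which is where an engine like BigQuery comes in.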
Monitoring Current Sales Performance
Imagine hundreds of millions of insurance policies covering home, auto, and personal property, sold across North America.
Now imagine how difficult it is to understand the sales performance of each and every salesperson for each and every product.
As they achieved nationwide success, the ability to know in a timely manner which salespeople were underperforming became nearly impossible.
By following the process and working together, we were able to identify underperforming salespeople quickly; in fact, performance can now be measured every 15 minutes. A solution that optimized concurrent processing across distributed nodes (off-site) proved to meet their needs and provide rapid access in near real time.
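Measuring performance every 15 minutes amounts to bucketing each sale into a fixed window per salesperson and aggregating. The sketch below is an invented illustration of that windowing idea; the record fields (`rep`, `ts`, `amount`) are assumptions, not the client's data model.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def bucket(ts, epoch=datetime(2024, 1, 1)):
    """Floor a timestamp to the start of its 15-minute window."""
    offset = (ts - epoch) // WINDOW  # whole windows since the epoch
    return epoch + offset * WINDOW

def sales_per_window(sales):
    """Sum sale amounts per (salesperson, 15-minute window)."""
    totals = defaultdict(float)
    for s in sales:
        totals[(s["rep"], bucket(s["ts"]))] += s["amount"]
    return dict(totals)

sales = [
    {"rep": "ann", "ts": datetime(2024, 1, 1, 9, 3), "amount": 100.0},
    {"rep": "ann", "ts": datetime(2024, 1, 1, 9, 14), "amount": 50.0},
    {"rep": "bob", "ts": datetime(2024, 1, 1, 9, 20), "amount": 75.0},
]
print(sales_per_window(sales))
```

With windows this small, reps whose totals fall below a threshold surface within a quarter hour instead of at end-of-day reporting.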
Analyzing Gas and Electricity Utilization
Imagine an average-sized metropolis to which gas and electricity are provided by a utility company.
Now imagine, over the course of 60 years, how many changes there can be to the equipment that meters the flow of natural gas and electricity through that city's distribution channels. Given the advances in technology, the changes in metering up and down those distribution channels are significant.
Consequently, if you need to know the trends in gas and electricity utilization (down to a residential unit with a street address), the variation in the data sets over time is not to be underestimated.
By following the process and working together, we were able to identify the critically important metrics and properties, design and develop all data input parsers, and then restructure the raw data in such a way as to support their analysis of a time-series data set extending over 15 years. The analytics were accomplished using Qubole.
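The parser work boils down to one idea: each vintage of meter data gets its own parser, and every parser normalizes into a single record shape so the 15-year time series can be analyzed uniformly. The formats and field names below are invented for illustration; they are not the utility's actual file layouts.

```python
from datetime import datetime

def parse_legacy(line):
    """Parse a hypothetical 1990s-era CSV reading,
    e.g. '19990107,123 MAIN ST,GAS,42.5'."""
    date, addr, kind, value = line.split(",")
    return {"ts": datetime.strptime(date, "%Y%m%d"),
            "address": addr, "type": kind.lower(), "value": float(value)}

def parse_modern(record):
    """Parse a hypothetical modern JSON-style reading into
    the same normalized shape as parse_legacy."""
    return {"ts": datetime.fromisoformat(record["read_at"]),
            "address": record["site"], "type": record["commodity"],
            "value": record["usage"]}

rows = [
    parse_legacy("19990107,123 MAIN ST,GAS,42.5"),
    parse_modern({"read_at": "2014-01-07T00:00:00",
                  "site": "123 MAIN ST", "commodity": "gas",
                  "usage": 40.1}),
]
print([r["value"] for r in rows])  # -> [42.5, 40.1]
```

Once every era of data shares one schema, trend analysis down to a street address becomes a straightforward group-by rather than a format-archaeology exercise.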
The Relational Database That Couldn't
Imagine a successful software company whose software service is based on a relational database. Now imagine that, for their computations, a relational data model is an anti-pattern. For a while you can throw hardware at this type of problem, but when you need to scale both vertically and horizontally, a relational database service is a poor choice, an unforgiving choice.
Fortunately, the software layers above the relational database were object-oriented and reasonably well decoupled from the data service.
By following the process and working together, we were able to establish that their computations actually required a key-value data service. To that end, we delivered a Cassandra and Spark cluster (in AWS) that enabled them to support their growing client base.
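The access pattern that made key-value the right fit can be shown with a tiny stand-in. This is a sketch of the pattern only, not the delivered system: the table shape and the (client_id, job_id) key are hypothetical, chosen because whole-record reads and writes by one composite key map naturally onto Cassandra's partition-key model.

```python
class KeyValueStore:
    """Stand-in for a Cassandra table keyed by (client_id, job_id).
    Key names are illustrative, not the client's actual schema."""

    def __init__(self):
        self._data = {}

    def put(self, client_id, job_id, value):
        # Write the whole record under its composite key.
        self._data[(client_id, job_id)] = value

    def get(self, client_id, job_id):
        # O(1) lookup by full key: no joins, no table scans.
        return self._data.get((client_id, job_id))

store = KeyValueStore()
store.put("acme", "job-1", {"result": 0.97})
print(store.get("acme", "job-1"))  # -> {'result': 0.97}
```

Because every computation reads and writes by full key, this shape shards cleanly across nodes, which is exactly the horizontal scaling the relational service could not offer.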