About the Author: Peter Walker is Gallagher Bassett’s Executive Manager – Commercial Development. Over his 30-plus years of international experience, Pete has witnessed many changes in the claims industry and has ridden the waves of change in both technology and management styles. Pete will be presenting on How Big Data is Transforming Claims Decision Making at the AICLA/ANZIIF Claims Convention in August.
‘Big Data’ is one of the biggest buzz phrases at the moment, but it doesn’t seem to have a single universal definition and is certainly used in several different contexts. However, in its simplest form, I think we can all agree that the term means vast amounts of data available from numerous and disparate sources.
I love the picture above. I’m of the age that remembers visiting Imperial College London’s ‘state of the art’ computer centre in the late seventies, which didn’t look too dissimilar to this – although perhaps in an even bigger room with more of those large tape reels whirring.
Of course, no one has data centres like these any more, but there are plenty of claims operations around the world still using large mainframe computer systems whose fundamental approach has not changed significantly. The original green screen may have been replaced with a sexy-looking graphical user interface, but the main database still consists of a finite number of alphanumeric data fields, with fixed limits on what can be stored. This data is highly structured, and herein lies the problem that has hindered the claims industry in adopting the big data revolution in the way other markets have.
Almost irrespective of the type of claim, the truly meaningful ‘data’ is unstructured: statements of the circumstances of loss, loss adjuster reports, medical reports, historical weather information, legal advice, even the case manager’s own notes, which in many old-fashioned systems are not searchable. If we go back to our definition, we can see that ‘big data’ already exists in the claims industry today, but it remains inaccessible to many.
However, with advances in document imaging, optical character recognition and text analytics, some organisations have been able to grasp the opportunities that big data presents. Here are three examples of the ways in which some organisations are already using big data to transform their claim results.
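As a taste of that first step, turning a scanned document into searchable text, here is a minimal sketch using the open-source Tesseract OCR engine via pytesseract; the file path and the flagged phrase are purely illustrative:

```python
# A minimal sketch: extracting searchable text from a scanned claim
# document. Assumes Tesseract is installed along with the pytesseract
# and Pillow packages; the file path and search phrase are invented.
from PIL import Image
import pytesseract

def extract_claim_text(scan_path: str) -> str:
    """Convert a scanned claim document into plain, searchable text."""
    return pytesseract.image_to_string(Image.open(scan_path))

# Once digitised, even old loss adjuster reports become searchable:
text = extract_claim_text("claims/2017-00042/loss_adjuster_report.png")
if "pre-existing condition" in text.lower():
    print("Flag for review: possible pre-existing condition mentioned.")
```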
1 – Improved reserving accuracy
Some believe that initial claim reserving is a science, while others believe it’s one of the ‘dark arts’ acquired over time.
Initial reserve tools, otherwise known as predictors, aren’t brand new in the better claim systems around the world, but historically they have been driven by a simple arithmetic average taken across a portfolio. For example, for a particular injury type on a workers’ compensation claim, the system may predict X dollars based on the mean amount paid across all claims in the system with an identical injury code. On a portfolio-wide basis this is an acceptable approach, but it is wildly inaccurate for individual cases.
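In code, that traditional predictor is little more than a portfolio-wide lookup. A minimal sketch in Python, where the data file and column names are assumed for illustration:

```python
# The traditional predictor: one arithmetic mean per injury code.
# The file and column names ("injury_code", "total_paid") are illustrative.
import pandas as pd

closed_claims = pd.read_csv("closed_claims.csv")

def naive_reserve(injury_code: str) -> float:
    """Predict the initial reserve as the portfolio mean for this injury code."""
    matching = closed_claims[closed_claims["injury_code"] == injury_code]
    return float(matching["total_paid"].mean())
```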
We know there are many factors that influence the final payment sum on that particular workers’ compensation claim. Some are obvious: the individual’s salary level will affect any time lost component, and the age of the injured person will influence their recovery time. However, there are many less obvious factors, such as gender, the person’s employment status (full-time/part-time/contractor), their work experience, location, personal relationship status, and family demographics. Big data creates the opportunity to identify the correlations between all of these parameters.
Now our ‘average reserve’ is no longer a simple arithmetic mean across an entire portfolio but an average generated from a very specific subset of data, which increases accuracy significantly.
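Continuing the sketch above, the big-data version simply conditions the average on many more parameters, so each prediction comes from claims that genuinely resemble the one at hand; again, the feature names are illustrative:

```python
# The refined predictor (continues the sketch above): an average over a
# much more specific subset of comparable claims.
def refined_reserve(features: dict) -> float:
    """Average total paid across closed claims matching every known factor."""
    subset = closed_claims
    for column, value in features.items():
        if column in subset.columns:
            subset = subset[subset[column] == value]
    if len(subset) < 30:  # too few comparable claims to trust the subset
        return naive_reserve(features["injury_code"])
    return float(subset["total_paid"].mean())

features = {
    "injury_code": "SPRAIN-LUMBAR",
    "age_band": "45-54",
    "employment_status": "full-time",
    "location": "VIC",
}
estimate = refined_reserve(features)
```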
The reserve predictor can also be iterative. As new unstructured data becomes available during the life of the claim, the parameters change and the current reserve can be automatically re-analysed against the new data points.
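A hypothetical sketch of that iterative loop, continuing the example above: each time newly extracted data changes a parameter, the held reserve is re-checked against the refreshed prediction.

```python
# Iterative re-reserving (continues the sketch above). When new
# unstructured data changes the claim's known parameters, recompute
# the prediction and alert if the held reserve has drifted too far.
def review_reserve(current_reserve: float, features: dict,
                   drift_tolerance: float = 0.20) -> None:
    predicted = refined_reserve(features)
    drift = abs(current_reserve - predicted) / predicted
    if drift > drift_tolerance:
        print(f"Alert: held reserve ${current_reserve:,.0f} deviates "
              f"{drift:.0%} from predicted ${predicted:,.0f}.")

# e.g. a new medical report reveals a psychological-injury component:
features["secondary_injury"] = "psychological"
review_reserve(current_reserve=18_000, features=features)
```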
2 – Improved fraud detection
Every claims department has some form of fraud screening in place. Many use an accumulated points scoring system where the rating is the total of a number of separate indicators.
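Such a points-based screen is straightforward to express in code. A minimal sketch, where the indicators, weights and referral threshold are all purely illustrative:

```python
# A classic accumulated-points fraud screen. Indicators, weights and
# the referral threshold are invented for illustration.
FRAUD_INDICATORS = {
    "claim_lodged_soon_after_policy_inception": 25,
    "no_independent_witnesses": 15,
    "prior_claims_history": 20,
    "inconsistent_statements": 30,
}

def fraud_score(claim_flags: set) -> int:
    """Sum the points for every indicator present on the claim."""
    return sum(points for name, points in FRAUD_INDICATORS.items()
               if name in claim_flags)

score = fraud_score({"inconsistent_statements", "prior_claims_history"})
if score >= 50:
    print(f"Refer to the special investigations unit (score: {score}).")
```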
Some organisations have reverse engineered the big data associated with known fraud cases. By allowing the system to identify correlations between common data points, additional, less obvious indicators are established. These new data points refine and improve the original fraud rating system.
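One way to picture that reverse engineering is a simple supervised model trained on historical, labelled claims, whose learned weights surface the less obvious indicators. A hedged sketch using scikit-learn; the data file and feature names are hypothetical:

```python
# Learning new fraud indicators from labelled historical claims.
# The file and feature names are invented; "confirmed_fraud" is
# assumed to be 1 for proven fraud cases and 0 otherwise.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("labelled_claims.csv")
feature_cols = ["days_to_lodgement", "claimant_tenure_months",
                "repairer_reuse_count", "witness_overlap_count"]
X, y = history[feature_cols], history["confirmed_fraud"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Large positive weights suggest new, less obvious indicators worth
# folding back into the points-based screen above.
for name, weight in zip(feature_cols, model.coef_[0]):
    print(f"{name:26s} weight: {weight:+.2f}")
```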
This approach to fraud management needn’t only look at personal data relating to the individual claimant, as it creates the opportunity for a broader perspective. For example, staged accidents often follow similar circumstances, and crime gangs have been known to repeat methods that proved successful in different parts of the country. A big data approach enables the system to look for repeating patterns outside of the immediate claimant – such as repairers and witnesses. One company identified different claims connected to the same crime ring by the cross-relationships between various witnesses identified through social media, even though the claimant always used a new alias.
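Spotting those repeat patterns beyond the claimant is naturally a graph problem: claims that share a witness or repairer fall into the same cluster. A minimal sketch with the networkx library, where every identifier is made up:

```python
# Linking claims through shared third parties. Claims, witnesses and
# repairers are all nodes; claims sharing a node join the same
# connected component. All identifiers are invented.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("CLM-1001", "witness:J.Smith"), ("CLM-1001", "repairer:FastFix Auto"),
    ("CLM-2044", "witness:J.Smith"),    # new claimant alias, same witness
    ("CLM-3310", "repairer:FastFix Auto"),
])

for cluster in nx.connected_components(g):
    claims = sorted(n for n in cluster if n.startswith("CLM-"))
    if len(claims) > 1:
        print("Possible ring:", claims)
```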
Furthermore, big data means all types of data, not just text. Some claims departments are already using voice-recording biometrics in an attempt to detect the vocal stress associated with lying.
3 – Predicting the path of the claim
As anyone who has ever worked in claims knows, a case is straightforward until you add the human element of the claimant. That said, while I am sure we have all encountered erratic or irrational behaviour, most human beings actually follow very predictable paths, at least until an event or set of circumstances occurs that makes them ‘crack’. By using the big data gained over millions of claims, we can identify these common patterns of behaviour and build claim path profiles for both normal and exception cases.
Organisations can analyse the progress of each claim against these known profiles, and set alerts to warn the case manager when circumstances are occurring that, statistically, lead to problematic outcomes.
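A hypothetical sketch of that monitoring step: each profile entry below is assumed to be the day by which the vast majority of ‘normal’ claims have reached a milestone, and the alert fires when a live claim falls statistically behind.

```python
# Comparing a live claim against a claim path profile mined from
# historical data. Milestones and day counts are invented.
NORMAL_PROFILE = {
    "first_medical_certificate": 7,
    "return_to_work_plan_agreed": 30,
    "first_rehab_appointment": 45,
}

def path_alerts(milestones_reached: dict, days_open: int) -> list:
    """List the milestones this claim is statistically overdue on."""
    return [m for m, deadline in NORMAL_PROFILE.items()
            if m not in milestones_reached and days_open > deadline]

# A claim open 38 days that has only lodged a medical certificate:
for milestone in path_alerts({"first_medical_certificate": 5}, days_open=38):
    print(f"Alert for case manager: '{milestone}' is overdue.")
```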
Once again, it is the sheer breadth of the big data that makes this predictive analytics approach so powerful.