What is digital stewardship in the insurance sector? To answer this, let’s begin with some background on data usage in the insurance industry. This industry is a unique combination of two things: (1) massive amounts of data and (2) a highly regulated and controlled process for using and storing that data. So a simple answer is this: digital stewardship is the process of managing, controlling, and securing data as it is used and generated through the insurance value chain, in a way that meets all security and regulatory requirements. In this series of three blogs, we’ll explore digital stewardship as it relates to both data and data regulations.

Data and regulations in the insurance industry

Data is the lifeblood of the insurance industry. Data is used to assign and monitor risk, which drives the pricing and renewal policies that culminate in overall profitability. Data is also used by the core systems to track all customer interactions, helping to deliver differentiated policyholder experiences.

Regulations are the other side of the coin. Regulations define insurance products, and how and where such products can be sold. The regulations on data, data flows and data usage are the driving force for data stewardship.

Data categories

Data has many costs. For starters, there is the pure cost of keeping data on some medium: tape, hard drives, or solid-state drives. Then there is the cost of moving data, also known as bandwidth cost. Older data formats may also need to be migrated to newer ones. So, given the various costs that can accrue, it’s worth breaking the sea of data down into categories that can be understood.

Data used by insurance companies falls into three major categories:

  1. Historic policy-level data
  2. Current monthly transaction-level data and current output from reserving calculations
  3. External data sources used to assign risk; this category spans multiple levels of data used to determine the risk of whatever is being insured

Historic policy-level data

Historic data has special problems and benefits. This kind of data comes from legacy systems, mostly in on-premises datacenters. These datasets are a great source of longitudinal data (data gathered over long stretches of time) once they are normalized across the various system formats. The data also persists, often for many years: some life insurance policies are issued at the birth of the policyholder and only end at that person’s death. The problem with historical data is the cost of normalization. Unless the data is transformed into a modern, shareable format, it remains in silos. If there is enough data in an old format, it can be worth paying the cost of transforming it into a modern one; the cost-benefit ratio depends on the value of the data.
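
To make normalization concrete, here is a minimal sketch of mapping two hypothetical legacy record layouts into one shared schema. The field names, formats, and source systems are assumptions for illustration, not a reference to any particular administration system.

```python
from datetime import datetime

# Hypothetical target schema shared across all source systems.
TARGET_FIELDS = ["policy_id", "issue_date", "holder_name", "face_amount"]

def normalize_mainframe_row(row: dict) -> dict:
    """Map a record from an assumed fixed-width mainframe extract to the target schema."""
    return {
        "policy_id": row["POLNUM"].strip(),
        "issue_date": datetime.strptime(row["ISSDT"], "%Y%m%d").date().isoformat(),
        "holder_name": row["INSNAME"].strip().title(),
        "face_amount": float(row["FACEAMT"]) / 100,  # assumed to be stored as cents
    }

def normalize_legacy_csv_row(row: dict) -> dict:
    """Map a record from a second (hypothetical) legacy CSV export to the same schema."""
    return {
        "policy_id": row["policy_no"],
        "issue_date": datetime.strptime(row["issued"], "%m/%d/%Y").date().isoformat(),
        "holder_name": row["insured"],
        "face_amount": float(row["face_value"]),
    }

# Once both feeds land in the same shape, they can be combined for longitudinal analysis.
records = [
    normalize_mainframe_row({"POLNUM": "A123 ", "ISSDT": "19840215",
                             "INSNAME": "JANE DOE", "FACEAMT": "25000000"}),
    normalize_legacy_csv_row({"policy_no": "B456", "issued": "06/30/2001",
                              "insured": "John Smith", "face_value": "150000"}),
]
print(records)
```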

Current monthly transaction-level data and current output from reserving calculations

To satisfy regulators, the insurer must always keep an actuarial reserve: money that must be available in case of a claim. As time goes by, money is worth less due to inflation, so insurers must invest to build the reserve or risk being unable to pay. At the same time, the insured must make payments to keep their accounts in good standing, which generates the transaction-level data. The current transactional data has some of the same issues as the historical data because it feeds the same administrative systems. Additional data elements are generated for many reasons: new data arrives as policy features change and through ongoing policy-level processing, and more arises from additional controls, metadata, and the intermediate data and results of the policy valuation system. The difficulty here is regulatory control: keeping up with new processing requirements as they are announced.
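
As a back-of-the-envelope illustration of why the reserve must be invested, the sketch below compares an idle reserve with an invested one against an inflating claim cost. The figures are assumptions for illustration, not actuarial guidance.

```python
# Illustrative only: how inflation erodes an uninvested reserve versus one
# invested at a modest return. All figures are assumed for this example.
reserve = 1_000_000.0       # reserve required today, in dollars
inflation = 0.03            # assumed annual inflation
investment_return = 0.04    # assumed annual return on the invested reserve
years = 10

# The eventual claim cost grows with inflation; the reserve must keep pace.
future_claim_cost = reserve * (1 + inflation) ** years
idle_reserve = reserve                                   # cash earning nothing
invested_reserve = reserve * (1 + investment_return) ** years

print(f"Claim cost in {years} years: {future_claim_cost:,.0f}")
print(f"Idle reserve:                {idle_reserve:,.0f}  (shortfall)")
print(f"Invested reserve:            {invested_reserve:,.0f}")
```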

External data

The data used to assign risk at the time of policy issue, and throughout the life of the policy, now comes from varied sources, including wearable devices. A few examples: Fitbits, IoT sensors in the home, and government data sources for geography-based risk scores. Such data arrives at different frequencies and with different levels of efficacy. The amount of data now available about the insured item or person is overwhelming for most insurance companies. The general approach is to have a third party score the risk of the place, activity, or person; the insurer then tracks only the scores and changes in scores.
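
One way to picture “track only the scores and changes in scores” is a simple delta check against the last score on file, as in the hypothetical sketch below; the threshold, identifiers, and field names are illustrative assumptions.

```python
# Hypothetical sketch: keep only the latest third-party risk score per policy
# and flag policies whose score moved by more than a chosen threshold.
SCORE_CHANGE_THRESHOLD = 10  # assumed trigger for underwriting review

stored_scores = {"POL-001": 42, "POL-002": 67}                    # last scores on file
incoming_scores = {"POL-001": 44, "POL-002": 81, "POL-003": 55}   # new vendor feed

for policy_id, new_score in incoming_scores.items():
    old_score = stored_scores.get(policy_id)
    if old_score is not None and abs(new_score - old_score) > SCORE_CHANGE_THRESHOLD:
        print(f"{policy_id}: score moved {old_score} -> {new_score}, route for review")
    stored_scores[policy_id] = new_score  # retain the score, not the raw sensor data
```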

Regulations: controls imposed on products

Regulations are tied to the types of products that the insurance company is selling and the location where the policy is sold. For US-based insurance companies, the regulations start at the state level and can vary significantly between states. The US federal government has additional requirements based on the size of the insurance company. International regulations are becoming increasingly important as more companies try to reach a global client base. Examples include reserving and policy definitions at the state level, the Sarbanes-Oxley Act (SOX) at the federal level, and the General Data Protection Regulation (GDPR) and IFRS 17 at the international level.

For SOX there’s the process control cost, as every stage has a control and signoffs must occur between stages. Here’s an example of a process control: when Microsoft releases a new version of Excel, calculations must be run with the new version and the results compared with those from the older version. It’s a simple check to make sure the calculation engine is not introducing errors, but it’s a control nonetheless. The same kind of testing occurs if an operating system is changed: results may differ only because of policy changes, never because of the software change itself. The list of controls and signoffs can be huge and complicated. As a result, it is expensive to maintain compliance with existing controls and to continually add new ones.
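
In practice, a control like the Excel-version check often boils down to a regression comparison: rerun the same valuation inputs on the new software stack and confirm the outputs match within an agreed tolerance. The sketch below is a generic illustration of that idea (the tolerance, identifiers, and figures are assumptions), not a description of any specific control framework.

```python
# Illustrative regression check for a software-change control: the same policies
# valued on the old and new calculation stacks should agree within tolerance.
TOLERANCE = 0.01  # assumed acceptable absolute difference, in dollars

old_engine_results = {"POL-001": 10_452.37, "POL-002": 98_310.02}
new_engine_results = {"POL-001": 10_452.37, "POL-002": 98_310.55}

differences = {
    policy_id: new_engine_results[policy_id] - old_value
    for policy_id, old_value in old_engine_results.items()
    if abs(new_engine_results[policy_id] - old_value) > TOLERANCE
}

if differences:
    print("Control FAILED, software change introduced differences:", differences)
else:
    print("Control passed: results match within tolerance, sign-off can proceed.")
```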

In brief, the result of compliance is the ability to repeat any calculation—on demand. The actuary must be able to duplicate the calculations for regulators and auditors. It’s the ultimate version of “show your work.”

Easing the burden with Microsoft

The complexity and scale of these problems demand automation and constantly evolving software solutions. Microsoft constantly invents new tools and services, and partners innovate with the same devotion to solve problems. My next three blogs will explore a variety of use cases to illustrate how the insurance industry can use Microsoft technologies for enhanced digital stewardship. I’ll cover these topics:

  1. Data security: how to comply with all internal and external requirements (automating access and authorization, auditing personnel as they come and go)
  2. Keeping your company GDPR-compliant
  3. Archiving solutions (for data)

Microsoft is driven to deliver solutions to enhance digital stewardship for the insurance industry across all platforms, from on-premises solutions, to hybrid, to full Azure cloud deployment and beyond. We continue to innovate with our partners to incorporate the latest technologies to meet the needs of our customers.

Next steps

Explore other ways Microsoft is helping insurers innovate and transform. Download the whitepaper “Inside the data science virtual machine” or visit the Microsoft insurance website.