Be a Better Actuary

As part of our series on Software as a Service and how it benefits actuaries, we’re taking a deep dive into one specific aspect of using SaaS in actuarial work: the fact that SaaS providers do a lot of work on behalf of their end-users. And usually, that happens automatically, as part of delivering a quality product.

SaaS businesses do a lot for their customers

It may look simple, but the process of getting from [RUN] to [RESULTS] in an actuarial forecast system is anything but. There are many different steps which must work seamlessly for end-users to be able to review and understand results.

Unlike a simple Excel spreadsheet, where all the calculations are updating dynamically all the time and everything is contained in one unit, modern actuarial systems have many moving parts. Generally, systems will separate the calculation process from results writing and storage, in order to appropriately allocate resources at critical points.

SLOPE is no different. In fact, as SLOPE does more than typical actuarial systems, we have a couple of additional steps.

Since SLOPE is a fully-hosted cloud-based system, the first step (if necessary) is to initiate (or “warm up”) virtual machines to run various scenarios. [Note – this step is only sometimes necessary, as we keep a bank of virtual machines ready to go at any time.]

Next, the calculation engine works through all of the calculations for each projection on a single virtual machine. As results are produced, they are inserted into a reporting database unique to each client.

The final step is for the end-user to view the results using the integrated business intelligence platform.
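For readers who think in code, here’s a minimal sketch of what a pipeline like this could look like. Every name and type below is purely illustrative, not SLOPE’s actual API; it just mirrors the steps described above.

```python
# A hypothetical sketch of the run-to-results pipeline described above.
# None of these names are SLOPE's actual API; they just mirror the steps.
from dataclasses import dataclass, field

@dataclass
class ProjectionResult:
    scenario_id: int
    rows: list = field(default_factory=list)   # calculated output rows

def warm_up_vms(count: int) -> list:
    """Step 1: acquire virtual machines (skipped when a warm pool exists)."""
    return [f"vm-{i}" for i in range(count)]

def run_projection(vm: str, scenario_id: int) -> ProjectionResult:
    """Step 2: the calculation engine runs one projection on one VM."""
    rows = [{"scenario": scenario_id, "t": t, "reserve": 0.0} for t in range(12)]
    return ProjectionResult(scenario_id, rows)

def write_results(db: list, result: ProjectionResult) -> None:
    """Step 3: insert the produced rows into the client's reporting database."""
    db.extend(result.rows)

def view_results(db: list) -> int:
    """Step 4: the integrated BI layer reads from the reporting database."""
    return len(db)

reporting_db: list = []   # stand-in for the client-specific database
for scenario_id, vm in enumerate(warm_up_vms(3)):
    write_results(reporting_db, run_projection(vm, scenario_id))
print(view_results(reporting_db), "rows available to the end-user")
```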

This workflow is illustrated in the following diagram:

[Figure: The timeline from initiating a projection to viewing results.]

Modularity is key to performance enhancements

SLOPE has been built to be modular from the very beginning, so various elements can be replaced for better performance. For example, early on, warming up the virtual machines (step 1) could take up to 10 minutes. Now, we’ve reduced that to 2 minutes or less, without affecting anything else downstream.

[For more on modularity in building actuarial models, rather than the software supporting them, check out this free eBook.]

This kind of interchangeability means we can improve certain parts of the process without impacting others, and we can deliver performance improvements regularly rather than waiting for a complete overhaul of the system.
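To make the idea concrete, here’s a hypothetical sketch of what a swappable results-writing layer can look like in code. These classes are our illustration of the general pattern, not SLOPE’s actual implementation.

```python
# Illustrative only: a results-writing layer hidden behind an interface,
# so the backing database can be swapped without touching anything upstream.
from abc import ABC, abstractmethod

class ResultsWriter(ABC):
    @abstractmethod
    def insert(self, rows: list) -> None: ...

class PriorDatabaseWriter(ResultsWriter):
    def insert(self, rows: list) -> None:
        print(f"prior backend: inserting {len(rows)} rows serially")

class UpgradedDatabaseWriter(ResultsWriter):
    def insert(self, rows: list) -> None:
        print(f"upgraded backend: inserting {len(rows)} rows concurrently")

def finish_run(writer: ResultsWriter, rows: list) -> None:
    writer.insert(rows)   # the calculation engine never has to change

finish_run(PriorDatabaseWriter(), [1, 2, 3])
finish_run(UpgradedDatabaseWriter(), [1, 2, 3])   # the swap is one line
```

Because the rest of the system depends only on the interface, replacing the database layer is invisible to everything upstream and downstream.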

We did face a bit of a challenge, though, when we looked ahead. While current performance was acceptable, we recognized significant linearity in our reporting database.

That is, suppose during a run the calculation engine created a million rows of data to insert into the reporting database, and suppose that took X units of time. Using our previous architecture, if we scaled up output to 2 million rows (maybe by doubling the number of model points we were running), we might see something on the order of 2X units of time to insert those into the database. That’s not scalability. We suspected there could be a better way.

Another complication was that getting results out of the database also seemed to have a direct relationship with the amount of data inside it. Again, if we doubled the amount of data stored (because of more runs with larger data sets), the processing time for results could nearly double as well.
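Here’s a toy model of that linear behavior. The constant below is invented purely for illustration; the point is the shape of the curve, not the numbers.

```python
# Toy model of linear insertion time: doubling the rows doubles the time.
SECONDS_PER_MILLION_ROWS = 600   # hypothetical constant: "X" = 10 minutes

def insert_time(rows_in_millions: float) -> float:
    """Linear model: time grows in direct proportion to row count."""
    return SECONDS_PER_MILLION_ROWS * rows_in_millions

print(insert_time(1.0))   # 600.0 seconds  -> X
print(insert_time(2.0))   # 1200.0 seconds -> 2X: linear, not scalable
```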

While this type of performance is typical, and even expected, it’s not good enough for us. One of the advantages of cloud technology is scalability, and we intend to leverage it. As a result, we are always scanning the horizon for better results out of every step in our process.

How could we make it better?

When SLOPE was first in development, there were quite a few options for the reporting database. We selected a provider the way everyone does: based on important criteria like cost, usability, functionality, and integration with the rest of our architecture.

Recently, a new option caught our attention. One of the providers we had originally reviewed but not selected reached back out to us for another look. Always searching for a better experience, we began investigating whether they would be a viable alternative.

One of the reasons we had not selected them initially was due to a question around concurrency. (Concurrency is the ability to handle multiple actions at the same time. This is different from “linearity”, which means handling one action at a time and then moving to the next.)
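Here’s a toy illustration of the difference, using simulated I/O delays in place of real database work. Nothing below reflects any actual database driver; it just shows why handling inserts concurrently beats handling them one at a time.

```python
# Toy comparison of "linearity" (serial processing) vs. concurrency.
import time
from concurrent.futures import ThreadPoolExecutor

def insert_batch(batch_id: int) -> None:
    time.sleep(0.5)   # stand-in for one I/O-bound database insert

batches = range(8)

start = time.perf_counter()
for b in batches:                        # serial: one insert, then the next
    insert_batch(b)
print(f"serial:     {time.perf_counter() - start:.1f}s")   # ~4.0s

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(insert_batch, batches))                  # concurrent
print(f"concurrent: {time.perf_counter() - start:.1f}s")   # ~0.5s
```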

As it turns out, their solution was actually a better fit than previously assumed. This provider, like us, was built from the ground up to take advantage of cloud technology. As a result, it naturally aligned with SLOPE’s outlook on product development and user experience.

And that whole concurrency issue? Turns out they absolutely could handle that. And very well, too! (We’ll get to results a little later.)

With these alignments in terms of what this new provider offered and what SLOPE needed, it was clear that further investigation was warranted.

Moving forward with implementation

First, like any new vendor relationship, we had to do some “minimum requirements” evaluations. Would it fit our architecture? Would it be able to do the things the sales reps promised? Would the security be appropriate for our clients? Would the costs be reasonable?

You know, all those things that any end-user would have done when considering changing any kind of vendor or sub-contractor. [This is essentially SLOPE acting on behalf of all our clients in this vetting process. We go through it once for multiple clients, so they don’t have to. They just get the benefits of our hard work.]

As our candidate passed all the initial tests, we moved into an actual use test. Could the database really handle the concurrency issues we might create? Would it actually be faster, as claimed? Could it be ready for all the potential we saw for the future?

They passed all the tests. Our developers were very happy with what they saw. They were especially happy with the fact that for any concern they came across, there was always a way to resolve the issue. The solution is quite robust.

Finally, when it came time to implement, it was very straightforward. Since we incorporated modularity from the beginning, swapping out just this layer of our architecture did not require a complete overhaul of the system, which led to a significant reduction in time and effort compared to replacing the full suite.

What are some improvements our clients have seen?

One would expect that an upgrade like this would result in end-user experience enhancements. And that’s exactly what happened.

Most visibly, they’ve seen shorter run times, both in creating results and in viewing them. One client said, “Everything was just snappier.” They woke up one morning and their runs were faster and their report generation was faster. This is because the new database improved both Step 3 (Results Writing) and Step 4 (Results Access).

[Figure: An updated timeline after implementing a new database solution “under the hood”.]

The beauty of it is, those clients didn’t have to do anything to get their results faster. Unlike a traditional actuarial system installation, they were not responsible for the maintenance and improvement of the system. They didn’t have to initiate the upgrade project. They didn’t have to vet various vendors. They didn’t have to do the testing. They didn’t have to do the backwards compatibility validation to ensure nothing had been inadvertently broken.

All they did was log on to the system the same way they had the day before and got results faster. Which means more time for analysis and less time wasted. They were automatically better actuaries.

What better solutions could you deliver with more time on actuarial tasks?

Here’s a quick comparison of just two projections using the prior architecture versus the new setup:

These runs included 134,199 model point records, distributed across 36 cores. There are two different projections in order to provide some idea of the scope of improvements possible. (Run #2 has much more data.)

         Prior database        Upgraded database     Reduction    Reduction
         insertion (h:mm:ss)   insertion (h:mm:ss)   (h:mm:ss)    (%)
Run #1   0:44:08               0:39:22               0:04:46      10.8%
Run #2   1:54:47               0:46:36               1:08:11      59.4%
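If you’d like to verify the reductions yourself, here’s a quick check of the arithmetic in the table:

```python
# Sanity check of the reduction figures shown above.
def to_seconds(hms: str) -> int:
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

for prior, upgraded in [("0:44:08", "0:39:22"), ("1:54:47", "0:46:36")]:
    p, u = to_seconds(prior), to_seconds(upgraded)
    print(f"saved {p - u} seconds, a {100 * (p - u) / p:.1f}% reduction")
# -> saved 286 seconds, a 10.8% reduction
# -> saved 4091 seconds, a 59.4% reduction
```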

Because we had done a lot of work to make this happen seamlessly for clients, they woke up the next day and got model output in substantially less time.

As you can see, having this new database under the hood is a big improvement in performance for the end-user actuary. (Disclaimer: not all models will see the same kind of improvements. Each model is unique, and improvements will vary by model complexity, duration of projection, number of model points, and so on.)

That’s going to unlock efficiencies for those clients and every other one we have on board.

Which is a big win for our SaaS delivery model.

Conclusion

Using SLOPE is like using multiple systems for the price of one. Because we act as an intermediary for our clients, they get access to technologies they wouldn’t be able to afford on their own. Which lowers their costs, gives them better output, and enables them to be better actuaries.


If you’re interested in seeing how adding a modern actuarial system could allow you to significantly reduce your time waiting for results, click here to set up a demonstration.