We have significant experience on the Slope team working with actuarial models. A total of 90+ years, to be exact. And throughout that experience we’ve encountered many tasks that are not actuarial in nature and yet are required just to get to the actuarial work.
One great example: when you use proprietary software, you often have to convert data into a format that software can read. That may include steps such as rearranging columns, changing headers, or even saving in a different file format.
We even heard from one actuary who used to open certain files in a text editor, change a couple of digits in the header row, and save the file again, just so it could be loaded into his specialized actuarial software. That doesn’t seem efficient.
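To make that concrete, here is a minimal sketch in Python of what automating that kind of prep work could look like. The file names, column names, and the pipe-delimited target format are all hypothetical stand-ins for whatever a particular admin extract and modeling system actually require.

```python
# A minimal, hypothetical sketch of automating the prep work described above:
# reorder columns, rename headers, and re-save with a different delimiter.
# File names, column names, and the "|" delimiter are assumptions for illustration.
import csv

def convert_extract(src_path: str, dst_path: str) -> None:
    # Read the source extract as a list of dictionaries keyed by header name.
    with open(src_path, newline="") as src:
        rows = list(csv.DictReader(src))

    # Columns the downstream software expects, in the order it expects them.
    wanted = ["POLICY_ID", "ISSUE_DATE", "FACE_AMOUNT"]

    with open(dst_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=wanted, delimiter="|")
        writer.writeheader()
        for row in rows:
            # Map the source headers to the headers the modeling system wants.
            writer.writerow({
                "POLICY_ID": row["PolicyNumber"],
                "ISSUE_DATE": row["IssueDate"],
                "FACE_AMOUNT": row["FaceAmt"],
            })

convert_extract("admin_extract.csv", "model_input.txt")
```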
Of course, you can program a work-around to make that time shorter, but the steps are still there.
Still part of the process.
Still something that could go wrong.
Still something you need to push a button to get done. And those kinds of steps exist in lots of places – from data prep to combining results after runs are complete in order to make appropriate comparisons.
All of which distracts actuaries from focusing on the analysis and management of the risks they’re responsible for.
Did you ever think, “Is it just me? Am I the problem here?”
We wondered, is this consistent across the broader actuarial population? Or is it just us? In order to begin understanding this issue, we decided to create a survey.
That way, we could substitute facts for impressions and learn whether or not what we had experienced was common.
The survey had 8 questions. (Click this link to see the possible responses, too.)
- What is your current role?
- What is your current credential?
- What is your primary modeling tool?
- Which risk management tasks do you perform in a given year?
- Which of the following data or model management tasks have you recently experienced?
- About how much does each of these non-value-added tasks impact your ability to perform your actuarial (risk measurement and analysis) responsibilities?
- Which of the following hardware or software management tasks have you recently experienced?
- About how much does each of these hardware or software management tasks impact your ability to perform your actuarial (risk measurement and analysis) responsibilities?
We then published it on our website and promoted it on LinkedIn. [By the way, you can find Slope on LinkedIn here.]
We were hoping for 50 respondents, and got 57. Woohoo! Thank you to all those who participated. It’s been really encouraging to know that you’re willing to give of your time to help make the profession better.
So what do they look like?
And the bigger question is: how should we think about them? That is, how will we create meaningful insights from this data for our audience?
Well, we don’t have all those conclusions just yet. But we’ve started diving into the answers, and have some initial impressions we’d like to share.
Here’s how we initially approached this analysis. (Pro tip: it’s pretty much how any actuary would approach any analysis – look for patterns to create insights, think about why they might exist, and think about what they tell you to do.)
Might there be differences between P&C, Life/Annuity, Health, Pension actuaries?
This would show up in results across designations, since credentials are a rough proxy for practice area.
Might there be differences between functions (experience studies vs. assumption setting vs. model changes vs. valuation vs. pricing vs. forecasting)?
We still think this is a viable dimension to consider, but we just don’t see much distinction in the results right now.
Might there be differences in the types of systems used?
Obviously, we want this to be true, because then we can point to actuaries who use commercial systems as having more time for analysis compared to those who don’t.
While we’re not yet sure of any statistically significant differences across those dimensions, we do want to start sharing some ideas we have already seen.
For now, we’ve settled on two different ways of categorizing responses – by technology and by level (or credential).
Q3: What is your primary actuarial modeling tool?
We think this will ultimately be a driver of the differences in how actuaries allocate their time. Those who use Excel versus commercially available actuarial software will probably have different sets of task requirements.
For example: some commercial software packages read data via a proprietary format. In order to get that data ready for the system, there may be conversion steps to translate from your source data (such as a generic text or comma-separated-value file) into that proprietary format.
That takes time. And is not necessarily required for Excel-based systems, which may be able to read multiple data formats natively.
At the back end, too, output values from commercial systems sometimes need to be consolidated by the actuary for comparisons, because they don’t necessarily end up in a place where they can be easily accessed.
In contrast, Excel-based systems can be created to direct data to a central repository, or warehouse, or data mart, that allows for easy comparisons across runs.
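As a rough illustration of that central-repository idea (sketched in Python rather than a spreadsheet, purely for brevity), the snippet below sweeps per-run output files into one table so results can be compared across runs. The directory layout, file-name pattern, and column names are assumptions for the example only.

```python
# A hedged sketch of the "central repository" idea: gather per-run output files
# and stack them into one table keyed by run name, so results can be compared
# side by side. Paths, file-name patterns, and column names are illustrative only.
from pathlib import Path
import pandas as pd

def consolidate_runs(output_dir: str) -> pd.DataFrame:
    frames = []
    for path in sorted(Path(output_dir).glob("*_results.csv")):
        df = pd.read_csv(path)
        df["run"] = path.stem.replace("_results", "")  # tag each row with its run
        frames.append(df)
    return pd.concat(frames, ignore_index=True)

# For example, pivot a (hypothetical) reserve column by scenario
# for a quick run-over-run comparison.
results = consolidate_runs("model_output")
comparison = results.pivot_table(index="scenario", columns="run", values="reserve")
print(comparison)
```

Tagging every row with its run name is the design choice that matters: once results carry that label, comparisons across runs become a pivot rather than a copy-and-paste exercise.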
Now, that’s not to say Excel is easy to use and holds all the advantages over commercial systems. Building models in Excel comes with its own challenges: model auditing, unstructured formulas, and inefficient calculation.
Plus many Excel models have been created over time without appropriate consideration of end-user design experience, data structure, and documentation. [This process is affectionately known as building a “Frankenmodel”]
And further, those who are part of organizations that have built their own systems are likely to have yet another set of requirements quite different from either of the two options above. It makes sense to split them out as well.
These kinds of differences mean that actuaries working in different environments may have fundamentally different experiences of what diverts their time to management of the system and away from analysis of the results.
That’s why we decided to separate who is using each system when evaluating their time allocations.
And
Q2: What is your current credential (if any)?
It’s no secret that lower-level actuarial students are often the ones down in the nitty-gritty of running models – dealing with those data conversion tasks and import/export mundanities. Once actuaries qualify for membership in a society (i.e. they get their letters), they’re often also taking on increasing responsibilities and getting additional visibility throughout the company.
Which means they are often spending less time creating numbers, and more time explaining them or applying them to business decisions beyond the actuarial department.
Now, we recognize this is not absolute. There are clearly still instances where Fellows are performing tasks that fall far below their credentials. [In fact, this is one of the major reasons why SLOPE was created. More about that here.]
Still, we believe the pattern generally holds true, which is why we made the distinction.
As a result, we could create a 2×3 grid (6 total segments) of what may be fairly similar types of work (not necessarily different “responsibilities”).
If we had 200 respondents (which we hope to get next time we do this – catch us again in 6 months), we might be able to have enough data to fully split out the “Other” category. For now, we’re lumping them together and accepting the wider error bars that come with it.
(*- see below for why this doesn’t quite match the total of the individual cells.)
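For anyone curious how a grid like that gets tabulated, here is an illustrative-only snippet. The column names, category labels, and the handful of hard-coded rows are assumptions standing in for the real responses, not the actual survey schema.

```python
# Illustrative only: tabulating a credential-by-tool grid like the one described
# above. The labels and the tiny made-up dataset are placeholders, not survey data.
import pandas as pd

responses = pd.DataFrame({
    "credentialed": [True, True, False, True, False],
    "tool": ["Excel", "Commercial system", "Excel", "Other", "Commercial system"],
})

# Rows: credentialed vs. not; columns: Excel / Commercial system / Other.
grid = pd.crosstab(responses["credentialed"], responses["tool"])
print(grid)
```

Because respondents could pick more than one tool (more on that below), a real tabulation would count some people in multiple columns, which is why the cell totals don’t have to match the respondent count.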
So, who were our respondents?
We had 51 of 57 respondents answer this question, and they could choose more than one credential. As you can see, we’re fairly SOA-heavy on this (ASA/FSA), as may be expected, since that’s the majority of our audience right now. The “Other” category includes many actuarial students, entry-level actuaries, and some non-actuaries as well.
What do they do?
A fairly balanced perspective on the tasks that they usually encounter. Which is good, because we’ll get a broader cross-section of the actuarial world that way.
And what do they use?
You’ll see that the numbers add up to more than 57. This was our error. In creating the survey, we did not adequately define the question, so we ended up allowing multiple answers on this one.
Which is fair. Most actuaries, even if they use a commercial system, also perform quite a bit of work in Excel or other spreadsheet software.
The question, though, was about which is primary. In our minds, that meant there is only one #1 system. Since we didn’t clearly spell out our terms, we miscommunicated and got answers we didn’t expect, such as respondents selecting multiple tools for this question.
We know there’s going to be some overlap in the demographic breakdowns we’re looking at, but we still think we can get some value out of looking at broad trends.
So, how do we think about that? Well, we analyze the best we can, we correct for the future, and we move forward.
Which means that the answers we’ll create aren’t as precise as we wish them to be, but they can still be useful. Much like models that aren’t quite so precise, but are accurate.
So, what have we seen from this so far?
It’s too early to look at everything right now. But from initial impressions, we have some early observations about how different groups of actuaries are impacted by their different sets of tasks.
For Question 6 and Question 8, we asked respondents to categorize how impactful various tasks (which they had selected in Questions 5 and 7) are to their daily work.
On Question 6, “About how much does each of these non-value-added tasks impact your ability to perform your actuarial (risk measurement and analysis) responsibilities?” you can see some differences:
| Group | Top Impacts (moderately or significantly impacts) | Bottom Impacts (almost none) |
| --- | --- | --- |
| 1 – Credential + Excel (16) | Tracing individual values through a calculation (11/14); Updating documentation (creating an audit trail) (11/14) | Backing up model results (12/15) |
| 2 – Credential + System (18) | Converting data to a proprietary format (11/17); Waiting for actuarial software to process (11/17) | Backing up model results (11/16) |
| 3 – Credential + Other(s) (12) | Troubleshooting ETL (extract, transform, load) issues (8/9); Tracing individual values through a calculation (8/9) | Converting output data to an alternate format for storage or analysis (6/9) |
| 4/5/6 – Non-Credentialed (20) | Waiting for Excel to process (15/16); Tracing individual values through a calculation (11/13) | Backing up model results (10/15) |
Do note: Top Impacts show the number who answered “moderately or significantly impacts” out of everyone who rated that item, and Bottom Impacts show the number who answered “almost no impact” out of everyone who rated that item.
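If it helps to see that tallying rule in miniature, here is a small sketch of how those fractions could be computed from raw responses. The group name, task names, and rating labels below are made-up placeholders rather than the survey’s actual wording.

```python
# A toy sketch of the tallying rule described above: for each task, count the
# "top impact" answers out of everyone in the group who rated that task.
# All names and ratings here are placeholders, not real survey data.
from collections import defaultdict

answers = [  # (group, task, rating)
    ("Credential + Excel", "Tracing individual values", "significantly impacts"),
    ("Credential + Excel", "Tracing individual values", "almost no impact"),
    ("Credential + Excel", "Backing up model results", "almost no impact"),
]

tallies = defaultdict(lambda: [0, 0])  # task -> [top-impact count, total rated]
for group, task, rating in answers:
    if group != "Credential + Excel":
        continue
    tallies[task][1] += 1
    if rating in ("moderately impacts", "significantly impacts"):
        tallies[task][0] += 1

for task, (top, total) in tallies.items():
    print(f"{task}: {top}/{total}")
```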
On Question 8, “About how much does each of these hardware or software management tasks impact your ability to perform your actuarial (risk measurement and analysis) responsibilities?”
| Group | Top Impacts (moderately or significantly impacts) | Bottom Impacts (almost none) |
| --- | --- | --- |
| 1 – Credential + Excel (16) | Slowed processing of other tasks while your Excel model (or other) runs “in the background” (13/14); Manually combining results from distributed workloads (8/14) | Updating hardware to meet minimum requirements of specialized actuarial software (11/14) |
| 2 – Credential + System (18) | Slowed processing of other tasks while your Excel model (or other) runs “in the background” (11/16); Manually prioritizing runs across various machines (or servers) (10/16) | Updating hardware to meet minimum requirements of specialized actuarial software (11/16) |
| 3 – Credential + Other(s) (12) | Coordinating with IT department on resources, timing, budget, or prioritization (7/7); Troubleshooting hardware failures (6/7) | Updating hardware to meet minimum requirements of specialized actuarial software (5/7) |
| 4/5/6 – Non-Credentialed (20) | Slowed processing of other tasks while your Excel model (or other) runs “in the background” (11/14); Coordinating with IT department on resources, timing, budget, or prioritization (10/15) | Troubleshooting hardware failures (10/13) |
Remember that these ratings are quite subjective: not much impact, somewhat impacts, significantly impacts.
We’re hoping in the future to help actuaries quantify this amount of wasted time, and therefore benchmark themselves against their peers.
If you know, for example, that as an ASA in charge of modeling for a life insurance company and using a commercial system, you’re wasting X% of your time while everyone else is wasting X/2% of their time, then you’ve got some work to do. Or, alternatively, it could be a recruiting advantage: “Our actuarial students are twice as fulfilled as the national average, because they spend significantly less time ‘getting the numbers’ each month. See?” Then pull up your company against the industry benchmark.
But that’s for another survey, at another time. Be on the lookout for that in the coming months. Immediately up, though, is a more detailed write-up of our findings from this experience, likely in September as we do our actuarial thing and crunch all the numbers.
Hopefully this preview will give you the incentive to come back later and take a deeper dive with us into the full results of the survey. Stay tuned!
When we do, we’ll also have some recommendations and ways to think about your actuarial practices going forward. We look forward to starting conversations within the profession about how to get unstuck from task paralysis and find the time you need for analysis.
Maybe model conversion shouldn’t be so infrequent: Find out about Hidden Benefits of Model Conversion by sending an email to info@slopesoftware.com.