Equivalent value of accumulation (EVAS) is similar in concept to a present value and is scenario dependent. It depends on the investment strategy used and is obtained by dividing the surplus at the end of the projection period by a growth factor. This factor represents the multiple by which a block of assets would grow from the valuation date to the end of the period of interest. It is computed by accumulating existing assets, or an initial lump-sum investment, under the interest scenario in question on an after-tax basis, with the initial investment and any reinvestment made using the selected investment strategy. The growth factor is the resulting asset amount at the end of the projection period divided by the initial amount at the valuation date. These EVAS values are obtained at the end of the twenty-year projection period and discounted back to the valuation date. They are somewhat liberal: if the company became insolvent in an earlier year but subsequently recovered, the corresponding twenty-year EVAS value contains no record of that event.
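To make this concrete, here is a minimal Python sketch of the EVAS calculation, using hypothetical numbers and a simplified view in which the after-tax accumulation under the investment strategy is expressed as a sequence of annual returns:

```python
def growth_factor(annual_after_tax_returns):
    """Multiple by which a lump sum invested at the valuation date grows
    to the end of the projection period under one interest scenario,
    accumulating after-tax with reinvestment each year."""
    factor = 1.0
    for r in annual_after_tax_returns:
        factor *= 1.0 + r
    return factor

def evas(ending_surplus, annual_after_tax_returns):
    """Discount the end-of-period surplus back to the valuation date by
    the scenario's growth factor."""
    return ending_surplus / growth_factor(annual_after_tax_returns)

# Hypothetical scenario: flat 4% after-tax return for twenty years.
value = evas(1_000_000, [0.04] * 20)
print(round(value, 2))
```

The returns here are placeholders; in the study, the accumulation would come from the actual after-tax asset projection under each scenario.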

## Data Summary – Scenario Generation

This blog entry describes the model that was used to generate the various sets of scenarios from 1992 through 1994.

The generation of the yield curves used in the interest rate scenarios is not arbitrage free. Making it arbitrage free would require setting up a diffusion process of state variables and verifying that the various par bond prices are consistent with the resulting bond-pricing partial differential equation. Instead, we used a two-factor model with a log-normal diffusion process on the short (ninety-day) rate and a log-normal diffusion process on the long (ten-year) rate. This model has no mean reversion and has fixed boundaries above and below. These fixed boundaries are not reflecting.

Below we use the notation $i_t(m)$, where $m$ denotes the maturity of the interest rate on the yield curve and $t$ denotes the time epoch. The only exception to this notation is that we use $r_t$ to denote the value of the ninety-day rate instead of $i_t(0.25)$. Note that $r_t = i_t(0.25)$.

First, obtain the required initial yield curve, which is the last U.S. Treasury yield curve preceding the projection period; these are the year-end curves for 1992, 1993 and 1994. Set $i_0(5)$ to be the U.S. constant-maturity Treasury five-year interest rate for the last day of the year, and calculate the ninety-day rate $r_0$ from it using coefficients based on a historical log-normal analysis of the short and long rates. In the formulas below, we assume the drift is zero.

For each maturity $m$, we use a log regression formula to assure a “nice” positive or inverted yield curve; this formula precludes the possibility of humped yield curves.

Define the spread slope constant from the initial ninety-day and long rates. Letting $m$ range from one to twenty, we obtain the entire initial yield curve $i_0(m)$ from the log regression formula.

For times $t > 0$, the subsequent yield curves are based on lognormal diffusion processes of the ten-year rate and the ninety-day rate, as follows. The ten-year rate is projected with the formula

$$i_t(10) = i_{t-1}(10)\, e^{\sigma_{10} Z_{1,t}}$$

where $\sigma_{10}$ is the ten-year volatility from the historical analysis.

The ninety-day rate is projected as

$$r_t = r_{t-1}\, e^{\sigma_r Z_{2,t}}$$

where $\sigma_r$ is the corresponding short-rate volatility, and $Z_{1,t}$ and $Z_{2,t}$ are uncorrelated standard normal samples.

These values are then bracketed. The ninety-day brackets are 0.5% and 20%, and the brackets for the ten-year rates are 1% and 25%.
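As an illustration, the diffusion and bracketing steps can be sketched in Python as follows. The volatility values, the zero drift, and the use of one normal sample per rate are assumptions for illustration; only the bracket limits come from the text.

```python
import math
import random

# Assumed annual log-volatilities -- placeholders, not the study's values.
SIGMA_LONG = 0.10   # ten-year rate
SIGMA_SHORT = 0.25  # ninety-day rate

def bracket(rate, low, high):
    """Clip a projected rate to its fixed, non-reflecting boundaries."""
    return min(max(rate, low), high)

def project_one_year(short_rate, long_rate, rng):
    """One zero-drift lognormal diffusion step for both rates."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)  # uncorrelated standard normals
    new_long = long_rate * math.exp(SIGMA_LONG * z1)
    new_short = short_rate * math.exp(SIGMA_SHORT * z2)
    # Brackets from the text: 0.5%-20% ninety-day, 1%-25% ten-year.
    return bracket(new_short, 0.005, 0.20), bracket(new_long, 0.01, 0.25)

rng = random.Random(1992)
s, l = 0.03, 0.06
for _ in range(20):
    s, l = project_one_year(s, l, rng)
print(f"year-20 ninety-day rate: {s:.4f}, ten-year rate: {l:.4f}")
```

Because the boundaries are applied by clipping rather than reflection, a path that drifts to a bracket simply stays there until the diffusion moves it back inside.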

However, in the belief that inverted yield curves are observed only in a rising interest rate environment, if the yield curve is inverted and the rates are falling (measured by the fact that $r_t < r_{t-1}$ and $i_t(10) < i_{t-1}(10)$), then $r_t$ is adjusted to remove the inversion.

This new value of $r_t$ is then bracketed as before.

Now define the spread slope constant from $r_t$ and $i_t(10)$, and obtain the entire yield curve $i_t(m)$ by interpolating with the log regression formula.
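To illustrate the interpolation, here is a sketch in Python. The log-linear functional form between the ninety-day and ten-year anchors is an assumption on my part; it is monotone in maturity, which is consistent with the text's statement that humped curves cannot occur.

```python
import math

def yield_curve(short_rate, long_rate, maturities=range(1, 21)):
    """Interpolate (and, past ten years, extrapolate) a full yield curve
    from the ninety-day (0.25-year) and ten-year anchor rates, linearly in
    log rate against log maturity.  The result is monotone, never humped."""
    slope = (math.log(long_rate) - math.log(short_rate)) / (
        math.log(10.0) - math.log(0.25))
    return {m: math.exp(math.log(short_rate)
                        + slope * (math.log(m) - math.log(0.25)))
            for m in maturities}

curve = yield_curve(0.035, 0.065)
for m in (1, 5, 10, 20):
    print(f"{m:>2}-year rate: {curve[m]:.4f}")
```

With a short rate below the long rate the curve is everywhere increasing; swapping the anchors produces a uniformly inverted curve, matching the "nice positive or inverted" behavior described above.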

## Predictive Analytics and ERM modeling

Over the next few months, I will be reexamining some old surplus modeling data from the early 1990s and applying some of the new predictive analytics techniques that have been in the press lately.

I have also located an excellent article that discusses the tension between explanation and prediction:

Galit Shmueli’s article “To Explain or to Predict?”

Dr. Shmueli’s article describes the difference between these two goals in statistical modeling. She also discusses the research areas that have nuanced issues around these topics.

In future blogs and studies, I will be using the term predictive analytics, but my primary goal will be to use those techniques to explain or to lead to new insights. Only when I discuss using these models as a possible dashboard will I be using the stochastic results to create predictive tools.

My primary research in the past has been determining how to extract as much information as possible from stochastic modeling, because the overhead cost of stochastic runs used to be prohibitive. I would use various tools from distribution fitting, quantile regression, and extreme value theory to extract additional information from those results.

Using distribution fitting, I built several models that would replicate the overall distribution of a stochastic model’s results without the cost of the actual simulation. However, I found that no parametric distribution could fully replicate the results of a stochastic model.
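As a small illustration of the approach (not the original models or data), here is a Python sketch that fits a lognormal to a batch of simulated surplus values by matching moments of the logs, then compares a tail quantile of the fit with the empirical one:

```python
import math
import random
import statistics

# Stand-in "stochastic results": fake surplus values drawn lognormally.
# The parameters and sample size are illustrative assumptions.
rng = random.Random(7)
surplus = [math.exp(rng.gauss(4.0, 0.5)) for _ in range(5000)]

# Fit a lognormal by matching the mean and stdev of the log values.
logs = [math.log(s) for s in surplus]
mu, sigma = statistics.fmean(logs), statistics.stdev(logs)

# Compare the fitted 5th percentile with the empirical 5th percentile.
z05 = statistics.NormalDist().inv_cdf(0.05)
fitted_p05 = math.exp(mu + sigma * z05)
empirical_p05 = sorted(surplus)[int(0.05 * len(surplus))]
print(f"fitted 5th pct: {fitted_p05:.2f}, empirical: {empirical_p05:.2f}")
```

Here the fit matches well because the fake data really is lognormal; on actual stochastic-model output, the tail disagreement between the fitted distribution and the empirical quantiles is exactly where the parametric replication breaks down.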

In the next post, I’ll share some of those results to give you insight into the filter I will be using as I move forward with the new modeling techniques.