Data Analysis – Discussion of Equivalent Value of Accumulation (EVAS)

Equivalent value of accumulation (EVAS) is similar in concept to a present value, but it is scenario dependent. It also depends on the investment strategy used and is obtained by dividing the surplus at the end of the projection period by a growth factor. This factor represents the multiple by which a block of assets would grow from the valuation date to the end of the period of interest. It is computed by accumulating existing assets, or an initial lump-sum investment, under the interest scenario in question on an after-tax basis, with the initial investment and any reinvestment made using the selected investment strategy. The growth factor is the resulting asset amount at the end of the projection period divided by the initial amount at the valuation date. The EVAS values are taken at the end of the twenty-year projection period and discounted back to the valuation date. These values are somewhat liberal: if the company became insolvent in an earlier year but then recovered, that event is not visible in the corresponding twenty-year EVAS value.
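Restating the definition in symbols: if S_{20} is the surplus at the end of the twenty-year projection under a given scenario, A_{0} is the asset amount at the valuation date, and A_{20} is its after-tax accumulated value at the end of the projection under that scenario and the selected investment strategy, then the growth factor is G = A_{20}/A_{0} and EVAS = S_{20}/G.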

Data Summary – Scenario Generation

This blog entry describes the model used to generate the various sets of scenarios for 1992 through 1994.

The generation of the yield curves used in the interest rate scenarios is not arbitrage free. An arbitrage-free approach would require setting up a diffusion process of state variables and making sure that the various par bond prices are consistent with the resulting bond-pricing partial differential equation. Instead, we used a two-factor model with a log-normal diffusion process on the short rate (ninety-day) and a log-normal diffusion process on the long rate (ten-year). This model has no mean reversion and has fixed boundaries above and below. These fixed boundaries are not reflecting.

Below we use the notation Y^{m}_{t}, where m denotes the maturity of the interest rate on the yield curve and t denotes the time epoch. The only exception to this notation is that we use Y^{90}_{t}, rather than Y^{.25}_{t}, to denote the ninety-day rate. Note that m = \{1, \ldots, 20\}.

First obtain the required initial yield curve, which will be the last yield curve from the U.S. Treasury for the projection period; these are from 1992, 1993, and 1994. Set Y^{5}_{0} to the U.S. constant-maturity Treasury five-year interest rate for the last day of the year and calculate the ninety-day rate as Y^{90}_{0} = Y^{5}_{0}\,\exp(\mu_{90}), where \mu_{90}, \sigma_{90}, and \sigma_{10} are based on a historical log-normal analysis of the short and long rates. In the formulas that follow, we assume \mu_{10} is zero.

With maturity m, we use the log-regression formula N(m) = 1.349\,\log(2m+1) + 1.051\,\log(m+1) to ensure a “nice” positive or inverted yield curve. This formula precludes humped yield curves.

Define the spread slope constant C = (Y^{5}_{0}-Y^{90}_{0})/\,N(5). Letting m range from one to twenty, we obtain the entire initial yield curve from Y^{m}_{0} = Y^{90}_{0}+C\,N(m).

For time t>0, the subsequent yield curves are based on lognormal diffusion processes for the ten-year rate and the ninety-day rate, as follows. The ten-year rate is projected as Y^{10}_{t+1} = Y^{10}_{t}\,\exp(\sigma_{10}\,Z_{10}).

The ninety-day rate is projected as Y^{90}_{t+1} = Y^{90}_{t}\,\exp(\mu_{90}+\sigma_{90}\,Z_{90}), where Z_{90} and Z_{10} are uncorrelated standard normal samples.

These values are then bracketed: the ninety-day rate is kept between 0.5% and 20%, and the ten-year rate between 1% and 25%.

However, in the belief that inverted yield curves are observed only in a rising interest rate environment, if the yield curve is inverted and rates are falling (that is, Y^{90}_{t+1} > Y^{10}_{t+1} and Y^{10}_{t+1} < Y^{10}_{t}), then Y^{90}_{t+1} is adjusted to Y^{90}_{t+1} = Y^{10}_{t+1}\,e^{\mu_{90}}.

This new value of Y^{90}_{t+1} is then bracketed as before.

Now define the spread slope constant C = (Y^{10}_{t+1}-Y^{90}_{t+1})/N(10) and obtain the entire yield curve by interpolating with Y^{m}_{t+1} = Y^{90}_{t+1}+C\,N(m).
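To make the procedure concrete, here is a minimal sketch in Python (the original work did not use Python). The function names and all parameter values in the example call, such as mu_90 = -0.15 or sigma_10 = 0.15, are illustrative placeholders rather than the calibrated values from the 1992–1994 studies.

```python
import numpy as np

def n_factor(m):
    """Log-regression shape function N(m) used to interpolate the yield curve."""
    return 1.349 * np.log(2 * m + 1) + 1.051 * np.log(m + 1)

def generate_scenario(y5_0, mu_90, sigma_90, sigma_10, years=20, seed=None):
    """Generate annual yield curves (90-day rate plus maturities 1..20).

    Parameters are placeholders; the original studies calibrated mu_90,
    sigma_90, and sigma_10 from a historical log-normal analysis of the
    short and long rates.
    """
    rng = np.random.default_rng(seed)
    maturities = np.arange(1, 21)

    # Initial curve: 90-day rate from the five-year rate, then interpolate with N(m).
    y90 = y5_0 * np.exp(mu_90)
    c = (y5_0 - y90) / n_factor(5)
    curve = y90 + c * n_factor(maturities)
    curves = [np.concatenate(([y90], curve))]

    y10 = curve[9]  # ten-year rate from the initial curve
    for _ in range(years):
        z10, z90 = rng.standard_normal(2)               # uncorrelated shocks
        y10_new = y10 * np.exp(sigma_10 * z10)          # long rate: lognormal, mu_10 = 0
        y90_new = y90 * np.exp(mu_90 + sigma_90 * z90)  # short rate: lognormal with drift

        # Bracket the rates (0.5%-20% for the 90-day rate, 1%-25% for the 10-year rate).
        y90_new = np.clip(y90_new, 0.005, 0.20)
        y10_new = np.clip(y10_new, 0.01, 0.25)

        # Inverted curve in a falling-rate environment: pull the short rate back down.
        if y90_new > y10_new and y10_new < y10:
            y90_new = np.clip(y10_new * np.exp(mu_90), 0.005, 0.20)

        # Interpolate the full curve from the new short/long spread.
        c = (y10_new - y90_new) / n_factor(10)
        curve = y90_new + c * n_factor(maturities)
        curves.append(np.concatenate(([y90_new], curve)))

        y90, y10 = y90_new, y10_new

    return np.array(curves)

# Example (illustrative parameters only):
paths = generate_scenario(y5_0=0.06, mu_90=-0.15, sigma_90=0.25, sigma_10=0.15, seed=1)
```

Each row of the returned array is a full yield curve: the ninety-day rate followed by maturities one through twenty, with the first row being the initial curve at the valuation date.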

Predictive Analytics and ERM modeling

Over the next few months, I will be reexamining some old surplus modeling data from the early 1990s and applying some of the new predictive analytics techniques that have been in the press lately.

I have also located an excellent article that discusses the distinction between explanation and prediction.

Galit Shmueli’s article “To Explain or to Predict?”

Dr. Shmueli’s article describes the difference between these two goals in statistical modeling and discusses research areas where the distinction raises nuanced issues.

In future blogs and studies, I will use the term predictive analytics, but my primary goal will be to use those techniques to explain or to lead to new insights. Only when I discuss using these models as possible dashboards will I use the stochastic results to create possible predictive tools.

My primary research in the past has been to determine how to extract as much information as possible from stochastic modeling, because the computational overhead of stochastic runs used to be prohibitive. I would use various tools, from fitting distributions and quantile regression to extreme value theory, to extract additional information from those results.

Using distribution fitting, I built several models that would replicate the overall distribution of a stochastic model’s output without the cost of the actual simulation. However, I found that no single parametric distribution could fully replicate the results of a stochastic model.
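As a rough sketch of the mechanics of that approach (not the original models), the Python fragment below fits a single parametric distribution to simulated “stochastic model” output and compares tail quantiles. The simulated data, the log-normal candidate, and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Placeholder stand-in for the output of a stochastic surplus model (not real study data):
# a mixture, so that no single parametric distribution matches the tail exactly.
rng = np.random.default_rng(0)
surplus = np.concatenate([rng.lognormal(3.0, 0.4, 4500),
                          rng.lognormal(3.8, 0.8, 500)])

# Fit a candidate parametric distribution to the simulated results.
shape, loc, scale = stats.lognorm.fit(surplus, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# Compare empirical vs. fitted quantiles -- the mismatch in the tails is
# typically where a single parametric distribution fails to replicate the model.
for q in (0.50, 0.95, 0.99):
    print(f"q={q:.2f}  empirical={np.quantile(surplus, q):8.2f}  fitted={fitted.ppf(q):8.2f}")
```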

In the next post, I’ll share some of those results to give you insight into the filter I will be using as I move forward with the new modeling techniques.

Professional standards and the Daubert standard

Standards: Proof and verification

Modeling doesn’t have to be limited to quantitative areas; models can be qualitative. A good subject to illustrate this is the concept of a “professional standard”. This is a widely discussed topic, but I’m hoping to approach it through a relevant class of theoretical models, proof systems, and to use concepts from them to discuss professional standards and a particular legal standard potentially applicable to your professional work.


Guest Blogger

My name is Matt Powell and Steven has been kind enough to invite me to use his blog to share some thoughts on risk management and modeling. I am just going to try to share some takes on some topics that I think are interesting and that help circulate ideas across different disciplines.

My background is specifically discrete mathematics, computer science, and optimization but I try to draw from practice in a variety of fields. My day job is working for Segal Consulting, primarily in defined benefit retirement plans. I am an Associate of the Society of Actuaries and an Enrolled Actuary.

New Causation Paper

Julia set

The website phys.org reported last week on a paper that addresses an enigmatic issue around downward causation. The enigma is that a more highly organized state of an organism or organization can cause changes at the lower levels that make up that upper level. However, one can argue that the higher level is only temporary, since it exists due to the behavior of the lower levels, and this appears contradictory. This problem relates to complexity models, where there is evident self-similarity as you zoom in on the details.

This is almost a case of which came first, the chicken or the egg?

When creating your ERM models, you can use either a top-down or a bottom-up design. The best models address both directions of design.

Bottom-up design is where you assess all of the known risks and controls. Then you design your ERM program by prioritizing that list and creating your models.

Top-down is where you determine what is needed for strategic planning and decision making.  For instance, you look to solve problems and set up controls at the corporate level.  In this situation you look at controlling management and financial risks.  Also, top-down design places a high priority on the modeling and the efficient use of the company’s capital.  Top-down usually leads to greater understanding and controls, but it is difficult to create buy-in from the divisions and subsidiaries.

Bottom-up design requires risk assessments at the lower levels and is more costly. The summary of these assessments usually leads to some surprises for upper management, but ERM buy-in is natural.

Take a look at the Downward Causation article for more details on coarse-graining and natural systems. Enjoy!

NYC Wind Speed – Fitting Distributions

Windy Day

Risk Modeling – NYC Wind

I just posted my first Mathematica model today. It demonstrates modeling NYC wind speeds. Look for it under the new Model entry in the main menu. It is stored as both a notebook (.nb) and as a computable document (.cdf). To use the CDF, you will need to install the Wolfram Mathematica CDF Player.

I am currently using Mathematica 11.2, and the notebook and CDF are saved in Dropbox.

 

Windy Day
It is a little windy today isn’t it?

Description

The notebook reads in the maximum wind speeds for NYC using the WeatherData Mathematica function. Those values are converted from km/hour to mph. From that converted data, Mathematica then fits several different statistical distributions and displays each fit. I chose these distributions because of their various properties, such as positive support or full infinite support (for example, the log-normal and normal). I also included the simplest distribution used within ERM, the triangular distribution, and I fit the extreme value distribution for modeling extreme winds. However, I find that these distributions don’t seem to produce wind speeds in excess of 100 mph, which is the certified wind speed protection required of NYC skyscrapers.
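The notebook itself is in Mathematica; purely to illustrate the fitting step in another language, here is a sketch in Python with SciPy. It assumes the daily maximum wind speeds, already converted to mph, are available in a local file (the file name nyc_max_wind_mph.txt is hypothetical), and the candidate distributions are stand-ins for those used in the notebook.

```python
import numpy as np
from scipy import stats

# Assume max_wind_mph holds daily maximum NYC wind speeds already converted to mph.
# (The notebook obtains them via Mathematica's WeatherData; here we just load an array.)
max_wind_mph = np.loadtxt("nyc_max_wind_mph.txt")   # hypothetical input file

candidates = {
    "normal":      stats.norm,
    "lognormal":   stats.lognorm,
    "gamma":       stats.gamma,
    "gumbel (EV)": stats.gumbel_r,    # extreme-value type I
    "triangular":  stats.triang,
}

for name, dist in candidates.items():
    params = dist.fit(max_wind_mph)
    frozen = dist(*params)
    # Probability of exceeding 100 mph under the fitted distribution.
    p100 = frozen.sf(100.0)
    print(f"{name:12s}  mean={frozen.mean():6.1f} mph  P(speed > 100 mph)={p100:.2e}")
```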

I also use the Mathematica function FindDistribution to find the ten distributions that best fit the data. Here we look at the maximum, mean, and 98% quantiles of these ten distributions and examine an economic capital (EC) style metric. Even though economic capital doesn’t strictly make sense for a wind speed model, it is a means of determining how far above the average wind speed a 1-in-50-year event would be. This is measured by the 98th percentile of a distribution less its mean. Since 98% = 100% − 1/50, the 98th percentile tells you what the speed would be in a 1-in-50-year event, and the excess of this over the mean is the additional wind speed above the average that you would need to address if you wanted to cover a 1-in-50-year event.
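Again only as a sketch (SciPy rather than FindDistribution, with the same hypothetical data file as above), the 1-in-50 metric described here is simply the 98th percentile of a fitted distribution minus its mean:

```python
import numpy as np
from scipy import stats

max_wind_mph = np.loadtxt("nyc_max_wind_mph.txt")   # hypothetical input file

# 98th percentile minus mean: the excess speed over the average in a 1-in-50-year
# event, since 98% = 100% - 1/50.
fitted = stats.gumbel_r(*stats.gumbel_r.fit(max_wind_mph))   # e.g. the extreme-value fit
q98, mu = fitted.ppf(0.98), fitted.mean()
print(f"98th percentile: {q98:.1f} mph, mean: {mu:.1f} mph, 1-in-50 excess: {q98 - mu:.1f} mph")
```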

Wind Risk Links

Below are several useful wind risk links:

Windstorms and Tornadoes Hazard Analysis for New York City

SEVERE WEATHER: THUNDERSTORMS, TORNADOES, AND WINDSTORMS for NYC

NYC – Coastal Storms Risk Assessment

Sandy spared New York’s skyscrapers, but high-rises carry high risk during hurricanes

List of NY Hurricanes

The 1893 NY Hurricane and the Disappearance of Hog Island

Skyscrapers May Shiver and Sway, but They’re Perfectly Safe (Just Stay Away From the Windows)

ATC Wind Speed by Location

Severe Wind Gust Risk for Australian Capital Cities – A National Risk Assessment approach

A simulation model for assessing bird wind turbine collision risk

Model-Based Estimation of Collision Risks of Predatory Birds with Wind Turbines

Managing Wind Pool Risk with Portfolio Optimization

Wind Gust Forecasting: Managing Risk to Construction Projects

 

 

Miscellaneous Thoughts – What to Write?

Ideas about blog articles

Today, I want to discuss my miscellaneous thoughts about future posts.  Here is a list of some of my ideas:

  1. Discuss capital models and how economic capital (EC) models fit within that universe.
  2. Outline the various measures of EC and discuss the metric that I will actually use going forward with demonstrated models.
  3. Discuss the modeling environment that is used in the blog models.  The majority of my models use Mathematica, but this being a commercial product, I also want to re-engineer them in R or Excel, so they can be useful to a wider audience.
  4. Prepare a series of brief articles on the use of Mathematica and R.
  5. Outline the process of fitting of statistical distributions.
  6. Discuss the use of Copulas.
    1. Discuss positive definite matrices.
      1. How do you get the nearest positive definite matrix?
    2. Discuss Cholesky Decomposition
    3. How do you fit a copula?
  7. Talk about how to construct scenarios.
    1. How you can use stochastic differential equations (SDEs) as the fundamental definition for your scenarios.
    2. How to define the SDE to replicate various market behaviors.
  8. Construct some stand-alone risk models.
  9. Construct some toy ERM models using copulas.
  10. Outline techniques that I’ve seen for modeling risk.
  11. Outline some ideas around legal risk by using Markov Chains.

Do any of you have suggestions that you would like me to discuss? Please feel free to comment, and I will add them to my list if I am able to address them. If not, I will try to locate other experts to discuss your issue.

 

Causal vs. Non-causal Models

Stork Bringing Baby Casual

In certain northern European countries, parents used to tell their children that storks bring babies. For many years, I did not understand how the old tale arose. However, I found out that in these countries, after a baby was born, the child’s nursery was frequently placed at the top of the house. The parents would also increase the heat within the house to keep the newborn warm throughout the night. The storks would discover these warmer roofs, exploit the extra heat, and build their nests over those same rooms at the top of the house. So the birth of the baby actually brought the storks, and not the other way around.

Stork on Nest Causal

This amusing analogy relates to causal vs. non-causal models, as well as serving as an example of the idea of model dependency.

When constructing ERM models, if you know that situation A impacts situation B, which in turn impacts situation C, you want your ERM model to reflect this causality. Usually a causal system is one that depends on current or past input only. If the model depends on future values as well, you have a non-causal (or a-causal) model. See https://en.wikipedia.org/wiki/Causality for further discussion of this topic.

However, when modeling various risks, you may not initially be able to determine how different risks are properly interrelated. In these situations, you might use non-causal modeling to set up various statistical models to estimate a specific risk, or you may set up a loose correlation model. In this situation, you know storks and the delivery of babies are related, so you would use a strong positive correlation.

Non-causal models with correlations were more frequently used before the financial crisis of 2008, primarily because no one actually knew how to model credit. So, for several years, credit derivatives were modeled by setting up a VaR model and using copulas for the aggregation of risk. However, we saw that the high-quality bond issues built off sub-prime mortgages and additional collateral were actually highly correlated in the extreme scenarios.
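As a toy illustration of that non-causal, correlation-based style of aggregation (not a model of the actual pre-2008 credit structures), here is a Python sketch of a Gaussian copula joining two loss distributions and computing an aggregate VaR. The marginals, the correlation, and the 99.5% level are made-up assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
rho = 0.30   # assumed "loose" correlation between the two risks

# Gaussian copula: correlated normals -> uniforms -> each risk's own marginal.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

loss_a = stats.lognorm(s=0.8, scale=100.0).ppf(u[:, 0])   # made-up marginal for risk A
loss_b = stats.gamma(a=2.0, scale=75.0).ppf(u[:, 1])      # made-up marginal for risk B

total = loss_a + loss_b
var_995 = np.quantile(total, 0.995)
print(f"Aggregate 99.5% VaR: {var_995:,.0f}")

# Note: the Gaussian copula has no tail dependence, which is one reason this style of
# aggregation can understate how correlated losses become in extreme scenarios.
```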

Now it is more common to create causal models where you design your scenarios and models to interrelate, which is the best approach.  At least you would model the delivery of babies implying the arrival of storks.

However, in some companies, the non-causal models are still used, especially when that company wants to model a large diversity of risks. Also, if a company has several risks that are modeled separately with differing systems and scenarios, these risks may be segregated into silos. In these situations aggregation of capital may still require non-causal methods to handle the silos.