Professional standards and the Daubert standard

Standards: Proof and verification

Modeling doesn’t have to be limited to quantitative areas; models can be qualitative. A good subject to illustrate this is the concept of a “professional standard”. This is a widely discussed topic, but I hope to approach it by looking at a relevant class of theoretical models, proof systems, and using concepts from them to discuss professional standards and a particular legal standard potentially applicable to your professional work.


Guest Blogger

My name is Matt Powell, and Steven has been kind enough to invite me to use his blog to share some thoughts on risk management and modeling. I plan to share my take on topics that I find interesting and that help circulate ideas across different disciplines.

My background is in discrete mathematics, computer science, and optimization, but I try to draw from practice in a variety of fields. My day job is with Segal Consulting, primarily in defined benefit retirement plans. I am an Associate of the Society of Actuaries and an Enrolled Actuary.

New Causation Paper

Julia set

The website phys.org reported last week on a paper that addresses an enigmatic issue around downward causation. The enigma is how a higher organized state of an organism or organization can cause changes at the lower levels that make up the upper level. However, the higher level arguably exists only because of the behavior of the lower levels, which appears contradictory. This problem relates to complexity models, where there is evident self-similarity as you zoom in on the details.

This is almost a case of which came first, the chicken or the egg.

When creating your ERM models, you can use either a top-down or a bottom-up design. The best models address both directions of design.

Bottom-up design is where you assess all of the known risks and controls. Then you design your ERM program by prioritizing that list and creating your models.

Top-down design is where you determine what is needed for strategic planning and decision making. For instance, you look to solve problems and set up controls at the corporate level, focusing on controlling management and financial risks. Top-down design also places a high priority on modeling and the efficient use of the company’s capital. It usually leads to greater understanding and better controls, but it is difficult to create buy-in from the divisions and subsidiaries.

Bottom-up design requires risk assessments at the lower levels and is more costly. The summary of these assessments usually leads to some surprises for upper management, but ERM buy-in comes naturally.

Take a look at the Downward Causation article for more details on coarse-graining and natural systems. Enjoy!

NYC Wind Speed – Fitting Distributions

Windy Day

Risk Modeling – NYC Wind

I just posted my first Mathematica model today; it demonstrates modeling NYC wind speeds. Look for it under the new Model entry in the main menu. It is stored both as a notebook (.nb) and as a computable document (.cdf). To use the CDF, you will need to install the Wolfram Mathematica CDF Player.

I am currently using Mathematica 11.2, and both the notebook and the CDF are saved in Dropbox.


Windy Day
It is a little windy today, isn’t it?

Description

The notebook reads in maximum wind speeds for NYC using the WeatherData Mathematica function. Those values are converted from km/hour to mph. From that converted data, Mathematica then fits several different statistical distributions and displays each fit. I chose these distributions for their various properties, such as positive support, or infinite support as in the normal and log-normal. I also included the simplest distribution used within ERM, the triangular distribution, and fit the extreme value distribution, which is often used for modeling extreme winds. However, I find that these fitted distributions don’t produce wind speeds in excess of 100 mph, which is the certified wind speed protection required of NYC skyscrapers.
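For readers who want to experiment before opening the notebook, here is a minimal sketch of the workflow. The “KNYC” station ID and the ten-year daily window are my illustrative choices here, not necessarily what the notebook uses:

```mathematica
(* sketch: pull NYC daily maximum wind speeds and fit candidate
   distributions; station and date window are illustrative choices *)
speeds = WeatherData["KNYC", "MaxWindSpeed",
   {{2008, 1, 1}, {2017, 12, 31}, "Day"}];

(* recent versions return a TimeSeries of Quantity values in km/h;
   drop missing observations and convert to mph *)
mph = QuantityMagnitude[DeleteMissing[speeds["Values"]], "Miles"/"Hours"];

(* fit distributions with positive or infinite support, the simple
   triangular, and the extreme value distribution *)
fits = EstimatedDistribution[mph, #] & /@ {
    WeibullDistribution[a, b],
    GammaDistribution[a, b],
    LogNormalDistribution[m, s],
    NormalDistribution[m, s],
    TriangularDistribution[{lo, hi}, c],
    ExtremeValueDistribution[a, b]};

(* overlay each fitted density on a histogram of the data *)
Show[Histogram[mph, Automatic, "PDF"],
 Plot[Evaluate[PDF[#, x] & /@ fits], {x, 0, Max[mph]}]]
```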

I also use the Mathematica function FindDistribution to find the ten best distributions fitting the data. Here we look at the maximum, the mean, and the 98% quantile of each of these ten distributions and examine an Economic Capital metric. Even though economic capital doesn’t strictly make sense for a wind speed model, it is a way to determine how far above the average wind speed a 1-in-50-year event would be. The metric is the 98th percentile of a distribution less its mean: since 1 − 1/50 = 98%, the 98th percentile tells you what the speed would be in a 1-in-50-year event, and the excess over the mean is the extra wind speed, above the average, that you would need to address if you wanted to cover 1-in-50-year events.
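Continuing the sketch above (it reuses the `mph` sample), the metric can be tabulated like this:

```mathematica
(* let FindDistribution propose the ten best-fitting forms, then compute
   the 1-in-50-year metric for each: the 98th percentile less the mean *)
best = FindDistribution[mph, 10];
TableForm[
 {#, N[Mean[#]], N[Quantile[#, 0.98]],
    N[Quantile[#, 0.98] - Mean[#]]} & /@ best,
 TableHeadings -> {None,
   {"Distribution", "Mean", "98% quantile", "Excess over mean"}}]
```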

Wind Risk Links

Below are several useful wind risk links:

Windstorms and Tornadoes Hazard Analysis for New York City

SEVERE WEATHER: THUNDERSTORMS, TORNADOES, AND WINDSTORMS for NYC

NYC – Coastal Storms Risk Assessment

Sandy spared New York’s skyscrapers, but high-rises carry high risk during hurricanes

List of NY Hurricanes

The 1893 NY Hurricane and the Disappearance of Hog Island

Skyscrapers May Shiver and Sway, but They’re Perfectly Safe (Just Stay Away From the Windows)

ATC Wind Speed by Location

Severe Wind Gust Risk for Australian Capital Cities – A National Risk Assessment approach

A simulation model for assessing bird wind turbine collision risk

Model-Based Estimation of Collision Risks of Predatory Birds with Wind Turbines

Managing Wind Pool Risk with Portfolio Optimization

Wind Gust Forecasting: Managing Risk to Construction Projects


Miscellaneous Thoughts – What to Write?

Ideas about blog articles

Today, I want to discuss my miscellaneous thoughts about future posts.  Here is a list of some of my ideas:

  1. Discuss capital models and how economic capital (EC) models fit within that universe.
  2. Outline the various measures of EC and discuss the metric that I will actually use going forward with demonstrated models.
  3. Discuss the modeling environment used in the blog models. The majority of my models use Mathematica, but since it is a commercial product, I also want to re-engineer them in R or Excel so they can be useful to a wider audience.
  4. Prepare a series of brief articles on the use of Mathematica and R.
  5. Outline the process of fitting statistical distributions.
  6. Discuss the use of Copulas.
    1. Discuss positive definite matrices.
      1. How do you get the nearest positive definite matrix?
    2. Discuss Cholesky decomposition.
    3. How do you fit a copula?
  7. Talk about how to construct scenarios.
    1. How you can use stochastic differential equations (SDEs) as the fundamental definition of your scenarios.
    2. How to define an SDE to replicate various market behaviors.
  8. Construct some stand-alone risk models.
  9. Construct some toy ERM models using copulas.
  10. Outline techniques that I’ve seen for modeling risk.
  11. Outline some ideas around legal risk by using Markov Chains.

Do any of you have suggestions that you would like me to discuss? Please feel free to comment, and I will add them to my list if I am able to address them. If not, I will try to locate other experts to discuss your issue.


Causal vs. Non-causal Models

Stork Bringing Baby Causal

In certain northern European countries, parents used to tell their children that storks bring babies. For many years, I did not understand how the old tale actually arose. However, I found out that in these countries, after a baby was born, the child’s nursery was frequently placed at the top of the house, and the parents would increase the heat within the house to keep the newborn warm throughout the night. The storks would discover these warmer roofs, exploit the extra heat, and build their nests over those same rooms at the top of the house. So the birth of the baby actually brought the storks, and not the other way around.

Stork on Nest Causal

This amusing analogy relates to causal vs. non-causal models, as well as to the idea of model dependency.

When constructing ERM models, if you know that situation A impacts situation B, which in turn impacts situation C, you want your ERM model to reflect this causality. Usually a causal system is one that depends on current or past input only; if the model depends on future values as well, you have a non-causal (or acausal) model. See https://en.wikipedia.org/wiki/Causality for further discussion of this topic.

However, when modeling various risks, you may not initially be able to determine how different risks are properly interrelated. In these situations, you might use non-causal modeling to set up various statistical models to estimate a specific risk, or you may set up a loose correlation model. In the stork example, you know storks and the delivery of babies are related, so you would use a strong positive correlation.
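As a toy illustration, a loose correlation model for the stork example might look like the following in Mathematica; the means, standard deviations, and the 0.8 correlation are all invented for the example:

```mathematica
(* hypothetical loose correlation model: stork nests and births are tied
   together only through an assumed strong positive correlation *)
storksAndBabies = BinormalDistribution[{10, 100}, {2, 15}, 0.8];
RandomVariate[storksAndBabies, 5]     (* five joint scenarios *)
Correlation[storksAndBabies][[1, 2]]  (* recovers the assumed 0.8 *)
```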

Non-causal models with correlations were used more frequently before the financial crisis of 2008, primarily because no one actually knew how to model credit. So, for several years, credit derivatives were modeled by setting up a VaR model and using copulas for the aggregation of risk. However, we saw that the high-quality bond issues built off of sub-prime mortgages and additional collateral were actually highly correlated in extreme scenarios.
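The sketch below hints at why this went wrong: two copulas with the same 0.6 correlation can behave very differently in the extreme tail. The standard normal marginals, the correlation, and the t copula with 3 degrees of freedom are all illustrative assumptions:

```mathematica
(* probability that both risks exceed their own 99th percentiles at
   once, under a Gaussian copula vs. a heavier-tailed t copula *)
jointTailProb[kernel_] := Module[{sample, q1, q2},
  sample = RandomVariate[
    CopulaDistribution[kernel,
     {NormalDistribution[], NormalDistribution[]}], 200000];
  q1 = Quantile[sample[[All, 1]], 0.99];
  q2 = Quantile[sample[[All, 2]], 0.99];
  N[Count[sample, {x_, y_} /; x > q1 && y > q2]/Length[sample]]];

{jointTailProb[{"Multinormal", {{1, 0.6}, {0.6, 1}}}],
 jointTailProb[{"MultivariateT", {{1, 0.6}, {0.6, 1}}, 3}]}
(* the t copula produces noticeably more joint extreme scenarios *)
```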

Now it is more common to create causal models, where you design your scenarios and models to interrelate; this is the best approach. At least you would then model the delivery of babies implying the arrival of storks.

However, some companies still use non-causal models, especially when they want to model a large diversity of risks. Also, if a company has several risks that are modeled separately with differing systems and scenarios, those risks may be segregated into silos. In these situations, aggregating capital may still require non-causal methods to handle the silos.

Model Control Life Cycle

Life Cycle

At the center of ERM is the implementation of the model control life cycle. There are four components:

  1. Analyze and determine the key risks to which an entity is exposed,
  2. Design and implement models to estimate the impact of the risks,
  3. Simulate and aggregate and allocate results to quantify the capital impact of the risks, and
  4. Evaluate, report, and determine the strengths and weaknesses of the models.

Once these steps are complete, you return to step 1 to determine how to improve the models or how to add another risk to the existing set of models.

So, as you continue to pursue your career in enterprise risk management, you will find that multiple skills are required to implement and maintain the ERM model life cycle. The first is the ability to examine an entity, such as a company, a line of business, or a country, and determine the various risks to which that entity is exposed. This risk assessment skill is central to step one above. Using these risk assessments, you will also determine which risks are in or out of scope for that specific model cycle.

After determining which risks are in scope, the second skill you develop is the ability to design and implement models to estimate the impact of those risks, which meets step two of the life cycle. The final skill emphasized in this blog is the ability to use the models to simulate and aggregate results, which corresponds to step three.


Dependency continued

I agree with Carlos that reading what Paul Embrechts has been saying about dependence modeling over the last 20 years is extremely useful and enlightening. One of his papers, with Filip Lindskog and Alexander McNeil, is Modelling Dependence with Copulas and Applications to Risk Management. It is a great read and an excellent reference to consider when setting up your dependency models.

Random Dice
Dice Thought Experiment

Thought Experiment

In the next few paragraphs, I’m going to describe a thought experiment that may give some insight into how to think about the use of copulas in dependency models, or about dependency in general.

Imagine that you have a set of multi-sided dice. Each die represents a single risk and has as many faces as the possible number of outcomes of the risk it represents. Now say that you have a portfolio of risks and your set of dice corresponds to that portfolio.

Environment

If you throw your dice individually, there will be little to no interaction between the separate risks, and you could say that the results of the dice are independent of one another. If you throw them all at once, perhaps from a dice cup, there will be a small interaction between the dice that touch. However, if you throw the dice hard enough, you might be able to assume that each die doesn’t affect the faces that turn up on the other dice; they would be affected more by the environment where they land. If you throw them on an infinitely flat table, they would all be affected similarly. However, if the surface is irregular, then where the dice land would be affected by the local “geography” of the surface.

Copula Dice
Dependent Dice

Interaction

Now assume that you in some way tie the dice to each other, or perhaps you place the dice in a clear mesh bag. If you toss the bag and enumerate the separate dice faces, you now have a situation where the mesh bag affects the entire process. The results of the enumeration would then depend on the individual dice, the relationship forced upon them by the mesh bag, and the environment where the bag lands.

Bias

Realize too that the faces of a die don’t all have to have the same area, so a die may be biased toward landing on its larger faces, and this can affect the enumeration as well.

So now you can see the multiple ways that dependency can arise: from each separate die, from the interaction between the separate dice, from the constraints on the tossing of the dice, and from the environment where the dice land. The toy simulation below contrasts independent tosses with tosses coupled by a shared environment.
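Everything in this sketch is invented for illustration: the “tilt” mechanism stands in for the environment, and nothing is calibrated:

```mathematica
(* toy dice: independent tosses vs. tosses coupled by a shared
   environmental tilt that biases every die in the same throw *)
independentRoll[n_] := RandomInteger[{1, 6}, n];

coupledRoll[n_] := Module[{tilt = RandomReal[{-1, 1}]},
  (* one environment draw per toss re-weights all six faces *)
  RandomChoice[Normalize[N[Range[6]^tilt], Total] -> Range[6], n]];

pairCorrelation[sampler_] :=
  Correlation @@ Transpose[Table[sampler[2], {10000}]];

{pairCorrelation[independentRoll], pairCorrelation[coupledRoll]}
(* the independent pair is near 0; the coupled pair is clearly positive *)
```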

Materiality Dependency

If you are modeling risk dependency, you may want to model no dependency, all types of dependency combined, or separate types of dependency. It all depends on the materiality of the risks and their interaction.


Independence Dependence Correlation

Dependency

It may seem odd that we are going to talk about dependency between risks before we talk about individual risks, but this issue is so key to ERM modeling that it is best to discuss it out of order.

Many of the largest ERM failures occur when a complex interrelationship between multiple risks causes a company failure. The Black Swan events that Taleb is famous for discussing arise from this complex interplay between the government, company management, human frailty, and the market.

Unfortunate Events


The Lemony Snicket book “A Series of Unfortunate Events” has led to the frequently used media phrase “a confluence of unfortunate events”. Since 2008, we have seen these interactions, and ERM modeling has become obsessed with how to model them correctly.

In my first blog post, I discussed how different techniques are used to model ERM. The first, used by casualty actuaries and the credit derivatives market, is the use of copulas or correlation matrices to set up dependence modeling between the separate risks. However, the Great Recession led to deterministic modeling in ERM, because banks were required to use multiple deterministic scenarios within their models to gain insight into the impact of the interdependency of the various risks to which their book of business was exposed. Again, Sim Segal’s ERM book discusses this approach and all of the issues behind it.

In the older ERM modeling methods, where dependency is a model assumption (such as the use of correlation matrices or copulas) or a natural result of using the same set of stochastic scenarios in all (or most) of the corporate models, coming to a deep understanding of how separate risks affect each other is much more difficult than with Sim’s Value Added modeling approach.

Portfolio Effect


However, realize that before the meltdown, dependency modeling was actually considered a positive component of ERM, because through that assumption a company’s economic capital was lower. The idea of a portfolio effect was a major selling point for introducing ERM to a company. Management were led to believe that the interactions between their risks would help offset each other, so they would need to hold less capital than might be required by a regulator or a rating agency. It was (and still is) a very popular idea that lower capital requirements give your company greater flexibility in its decisions and make it more competitive. However, historically, a company’s equity is in a sense the last line of self-insurance that allows it to continue to exist through bad times or bad decisions.
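To make the portfolio effect concrete, here is a minimal sketch comparing standalone economic capital with capital on the aggregated position. The marginal distributions, the 0.3 correlation, and the 99.5% confidence level are all illustrative assumptions, not a calibrated model:

```mathematica
(* standalone economic capital for two risks vs. capital computed on the
   aggregated position; the gap is the claimed diversification benefit *)
riskA = LogNormalDistribution[0, 1/2];
riskB = GammaDistribution[2, 1];
ec[dist_] := Quantile[dist, 0.995] - Mean[dist];

standalone = N[ec[riskA] + ec[riskB]];

joint = CopulaDistribution[{"Multinormal", {{1, 0.3}, {0.3, 1}}},
   {riskA, riskB}];
totalLoss = Total /@ RandomVariate[joint, 100000];
aggregated = Quantile[totalLoss, 0.995] - Mean[totalLoss];

{standalone, aggregated}
(* aggregated comes out lower: the diversification "credit" that made
   ERM attractive before 2008 *)
```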