Determining Acceptable Risk

Author: Joe H., Inflow Engineer

In our last post, Risk Assessment, we covered how to determine the overall risk value for a project. If you haven’t read it yet, I recommend doing so now, since this post builds directly on that content. Today, we’ll look at how to use that risk value to guide the decision-making process. The key is determining your level of acceptable risk: the threshold above which a project is no longer worth pursuing. If you’ve ever read the side effects of a headache medication and decided you’d rather keep the headache than risk suffering one of those effects, then you’re already familiar with the concept of acceptable risk. For professional purposes, however, we need to base that threshold on something more than gut feeling or personal preference.

Determining acceptable risk is actually very simple. The first step is to determine the projected payout of the project, in the same units you’ll be calculating risk in, typically dollars. This is your maximum risk: the point at which potential losses completely cancel out potential gains. Next, account for any profit margin the project needs to generate. Subtracting that margin from the maximum risk gives you the acceptable risk for your project.
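
As a minimal sketch, the arithmetic looks like this (the function name and dollar figures below are mine, purely for illustration):

```python
def acceptable_risk(projected_payout, required_profit):
    """Acceptable risk = maximum risk (the projected payout)
    minus the profit margin the project still has to generate."""
    return projected_payout - required_profit

# Hypothetical figures: a $2M payout with a $500K required margin
# leaves $1.5M of acceptable risk.
print(acceptable_risk(2_000_000, 500_000))  # 1500000
```

So now that we have this value, let’s take a look at how it can be used as a decision-making tool.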

The power of calculating an acceptable risk for a project is that it remains the same regardless of the approach used to complete the project. This allows us to look at multiple potential solutions, calculate the risk involved in each one, and compare them not just to each other but to an objective standard. It also allows you to compare entirely different projects. For example, if you needed to build a bridge and wanted to decide between a suspension bridge, an arch bridge, and a truss bridge, you could simply calculate the overall risk of each approach and choose the lowest. However, if you have to choose between building a bridge and building a subway somewhere else, comparing raw risk values alone can be misleading. If the subway is a high-risk project but has a much higher payout, you may still want to build the subway instead of the bridge. Looking at risk versus acceptable risk normalizes everything into one metric, allowing for an accurate comparison between highly dissimilar options.
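
To make that concrete, here’s a sketch with invented risk and payout numbers for the two projects; the point is the ratio, not the figures:

```python
# Hypothetical projects: overall risk and acceptable risk, both in dollars.
projects = {
    "bridge": {"risk": 400_000, "acceptable": 600_000},
    "subway": {"risk": 1_200_000, "acceptable": 2_500_000},
}

# Normalize each project to the fraction of its acceptable risk it consumes.
# Lower is better; anything above 1.0 isn't worth pursuing at all.
for name, p in projects.items():
    print(f"{name}: {p['risk'] / p['acceptable']:.2f}")
# bridge: 0.67, subway: 0.48 -- the riskier subway is still the better bet
```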

As some of you have probably realized, this type of risk analysis is essentially a cost-benefit analysis. The real value of approaching it from a risk perspective, rather than a cost perspective, is the added level of granularity it provides when you need to choose between two approaches with similar cost-benefit ratios. Maybe one option has a lot of low-impact, high-probability risks, while the other has a few high-impact, low-probability risks, yet both have the same cost-benefit ratio. Depending on your situation, that difference can have a huge impact on which option is most suitable.
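
Here’s a toy illustration of that situation; the probabilities and impacts are invented, chosen so that both options produce exactly the same total risk value:

```python
# Each risk is (probability, impact in dollars).
option_a = [(0.5, 18_000)] * 10   # ten low-impact, high-probability risks
option_b = [(0.0625, 1_440_000)]  # one high-impact, low-probability risk

def total_risk(risks):
    return sum(p * impact for p, impact in risks)

print(total_risk(option_a), total_risk(option_b))  # 90000.0 90000.0
# Identical totals, but option B can ruin you in a single stroke.
```

If you can absorb many small losses but not one catastrophic loss, option A is clearly preferable, even though a pure cost-benefit comparison rates the two identically.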

Hopefully, these posts have given non-engineering readers a glimpse into the engineering mindset, and given engineering readers a look at how their training and methods can be applied in ways they may not have considered before. Next post, we’re going to look at some tips for communicating technical topics to a non-technical audience, so be sure to check back!

 


Risk Assessment

Author: Joe H., Inflow Engineer

Pop quiz time: which animal is more dangerous to people, a shark or a cow? I’m willing to bet most people would say that sharks are obviously more dangerous than cows; after all, one is an apex predator with some impressive jaws, and the other is a potential hamburger. If you were one of those people, I’m sorry to say that you’re wrong. And if you’re asking why I would say that, then you’re ready to dig into the subject of risk assessment.

Risk assessment is a form of analysis that determines the impact of things going wrong in a given situation. It’s a discipline used in a variety of industries, from healthcare to project management to systems engineering, but the basics are the same regardless of the application. First, you make a list of potential negative outcomes, how likely each one is to occur, and how serious the consequences would be. For each outcome, you multiply the likelihood of occurrence by the potential impact to get a risk value. Adding all of these risks together gives you an overall risk value for whatever it is you’re analyzing.
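
As a minimal sketch (the outcomes and numbers below are invented for illustration):

```python
# Each potential negative outcome: (likelihood of occurrence, impact in dollars).
outcomes = [
    (0.30, 5_000),    # minor schedule slip
    (0.05, 80_000),   # key component fails acceptance testing
    (0.01, 500_000),  # total project failure
]

# Risk per outcome is likelihood * impact; overall risk is the sum.
overall_risk = sum(likelihood * impact for likelihood, impact in outcomes)
print(round(overall_risk, 2))  # 10500.0
```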

The wonderful thing about this approach to risk assessment is that it lets you compare high-impact, low-likelihood risks with low-impact, high-likelihood risks, and every combination in between, on one scale. The results can be presented in different ways depending on what you’re analyzing. For example, if you’re looking at investment risks, your risk assessment will be a dollar value, whereas a risk assessment for a medical procedure might be presented in terms of deaths per million. As an engineer, I tend to use a matrix to provide a quick overview of each risk, in addition to calculating the total risk. Since engineers often have to choose between multiple design options, being able to quickly compare the risks of each choice is extremely useful. I also prefer a matrix because it’s generic enough to be applicable to many situations: it’s not always appropriate, but it lends itself well to a wide range of problems and can be understood at a glance.
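
A risk matrix is easy to build programmatically, too. Here’s a minimal sketch; the three-bucket thresholds are arbitrary values I picked for the example, not a standard:

```python
# Bucket each risk into a 3x3 likelihood/impact grid for an at-a-glance view.
LIKELIHOOD_BINS = [0.1, 0.5]     # low < 0.1 <= medium < 0.5 <= high
IMPACT_BINS = [10_000, 100_000]  # same idea, in dollars

def bucket(value, bins):
    return sum(value >= b for b in bins)  # 0 = low, 1 = medium, 2 = high

matrix = [[[] for _ in range(3)] for _ in range(3)]
risks = {"schedule slip": (0.30, 5_000), "component failure": (0.05, 80_000)}
for name, (likelihood, impact) in risks.items():
    matrix[bucket(likelihood, LIKELIHOOD_BINS)][bucket(impact, IMPACT_BINS)].append(name)

print(matrix)  # "schedule slip" lands in the medium-likelihood/low-impact cell
```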

Now that we’ve covered the basic concepts behind risk assessment, let’s take another look at our cow versus shark example. The risk we’re looking at here is “being killed,” so the impact of the risk from a cow is the same as the impact of the risk from a shark. Now we need the likelihood of being killed by each animal. This step of defining the impacts and likelihoods is actually the most challenging part of risk assessment, and it can require research or industry knowledge to get right. In this case, I am not an expert in animal attacks, so we’re going to do some research.

According to a CDC report on cattle-related deaths in Iowa, Kansas, Missouri, and Nebraska from 2003 to 2007, twenty-one people were killed by cows [1]. Since the combined population of those four states is approximately 14 million, we can estimate the odds of being killed by a cow at roughly 1 in 700,000. In comparison, from 2005 (the earliest year I could find good statistics for) to 2009, a total of twenty people were killed by sharks globally [2]; against a world population of roughly seven billion, that works out to about 1 in 350 million. Since the impact of each of these events is the same, we can directly compare the two rates and determine that you are approximately 500 times more likely to be killed by a cow than by a shark. We could do a similar calculation for the likelihood of being injured in an attack rather than killed; I’ll leave that to you as an exercise if you care to try it, but again, sharks are significantly less likely to attack people than cows.
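
Running those numbers explicitly (the world population figure is my round approximation):

```python
cow_deaths, four_state_population = 21, 14_000_000  # CDC, 2003-2007 [1]
shark_deaths, world_population = 20, 7_000_000_000  # shark stats, 2005-2009 [2]

cow_odds = four_state_population / cow_deaths   # ~1 in 667,000
shark_odds = world_population / shark_deaths    # 1 in 350,000,000
print(round(shark_odds / cow_odds))             # 525 -- cows win by ~500x
```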

I chose this example because it highlights the power of good risk assessment: the ability to remove biases from your analysis. No matter how scary someone thinks a shark is, it’s very hard to argue that being killed by a shark is more likely than being killed by a cow. Risk assessment forces you to explicitly address the questions “how bad could it be?” and “what are the odds?”, and it therefore allows you to catch mistakes you might not have even known you were making. I’ve put together a basic risk assessment toolkit which you can download here. I encourage you to experiment with it using some real-world examples to get a better sense of how this type of approach can be applied to your daily work processes. In our next post, we’re going to examine the concept of “acceptable risk” and look at ways to use risk assessment to guide your decision-making processes, so be sure to check back for the second post on this topic.

-Joe H., Inflow Engineer

[1] CDC, “Fatalities Caused by Cattle - Four States, 2003-2007,” MMWR: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5829a2.htm

[2] Florida Museum of Natural History, International Shark Attack File statistics: http://www.flmnh.ufl.edu/fish/sharks/statistics/statsw.htm

 


 

Optimization

Author: Joe H., Inflow Engineer

In my last post, Control Systems and Feedback Loops, we talked about a type of control system called a PID controller or a Proportional-Integral-Derivative controller. The key point to remember for the purposes of this article is that a PID controller uses three mathematical operations to adjust the input (and therefore the output) of a process, and each of those three operations has its own multiplier. By changing the values of each multiplier, we can change the effect of a PID controller on a process. In other words, a PID controller is a multi-variable control system. The practice of adjusting a multi-variable system to get a desired output is called optimization, and that’s the topic we’ll be covering in this post.
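
For reference, here’s a minimal discrete-time sketch of such a controller. This is illustrative rather than production code, but it shows the three operations and their multipliers (kp, ki, kd):

```python
class PIDController:
    """Minimal discrete PID: output = kp*error + ki*integral + kd*derivative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd  # the three multipliers
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                  # accumulated error (I)
        derivative = (error - self.prev_error) / dt  # rate of change (D)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Changing kp, ki, or kd changes how the controller responds, which is exactly what makes it a multi-variable system.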

In order to optimize a system, you need to determine some details about the system in question. First, you need to know exactly what your desired output looks like. It’s important to note that in many cases your system will never actually reach that exact output; the goal of optimization is to get as close to it as possible (we’ll come back to this later on). Second, you need to know what your control inputs are. These are the inputs that change the behavior of the system. In a PID-controlled system, the control inputs are the multiplier values, but other systems will have different control inputs. In a stereo system, for example, the control inputs are the bass, treble, and volume settings. Once you know these two details, you can optimize the system.

Optimization is a common engineering challenge, and for widely used control systems such as PID controllers, a number of optimization algorithms and software tools have been developed. However, not every control system has these tools readily available, so today we’re going to focus on manual tuning. The first step in manually tuning a multi-variable system is to determine which control input has the greatest impact on your system, which has the second greatest, and so on. This can be done by changing one control input at a time and measuring the change in the output. With most systems, you’ll find that each control input has a different effect: changing one might cause the output to scale up, while changing another might cause the output to respond to input changes more quickly. Ideally, you want to be able to say, “When I change this control input by this value, the output changes in this way, by this much.” There’s a good deal of trial and error involved in this process, although the more you know about the system you’re optimizing, the more quickly you’ll figure out the input/output relationships.
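
Here’s a sketch of that one-input-at-a-time probing. It assumes you can wrap your process in a run_system(inputs) function that returns a measurable output; the function name and toy process are mine, for illustration only:

```python
def sensitivity(run_system, baseline, step=0.1):
    """Perturb one control input at a time and record the change in output."""
    base_output = run_system(baseline)
    effects = {}
    for name in baseline:
        probe = dict(baseline, **{name: baseline[name] + step})
        effects[name] = run_system(probe) - base_output
    return effects  # rank by abs(effect) to find the most impactful input

# Toy stand-in for a real process, so the sketch runs end to end:
toy = lambda inputs: 3 * inputs["kp"] + 0.5 * inputs["ki"]
print(sensitivity(toy, {"kp": 1.0, "ki": 1.0}))  # kp dominates
```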

Once you’ve determined what each control input does to the output, you can move on to the next step of optimization. Starting with the most impactful input, adjust its value until the output is in the ballpark of the desired value. In a PID system, the P multiplier is adjusted first, the I multiplier second, and the D multiplier last. (If you want to see this tuning in action, click here.) In many systems, increasing a control input too much will cause the output to behave in an unstable manner, so it’s important to avoid over-adjusting at first. Move on to the next most impactful control input, and so on, until you’ve adjusted all of your controls. If you’ve done everything right, you should have an output that is very close to ideal. More likely, however, you’ll have an output that isn’t quite where you want it. To resolve that, take an iterative approach: go back to the most impactful control input and work through the sequence again with finer adjustments. When you’ve reached the point where every change you make moves the output away from the desired output, you’ve optimized your system.
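
That procedure amounts to a greedy, one-coordinate-at-a-time search, which we can sketch in code. The score(inputs) function below is an assumed stand-in for however you measure distance from the desired output (lower is better):

```python
def tune(score, inputs, order, step=0.1):
    """Greedy coordinate tuning: adjust each control input, most impactful
    first, keeping any change that improves the score; repeat until no
    single change helps. At that point, the system is optimized."""
    best = score(inputs)
    improved = True
    while improved:
        improved = False
        for name in order:  # e.g. ["kp", "ki", "kd"]
            for delta in (step, -step):
                trial = dict(inputs, **{name: inputs[name] + delta})
                trial_score = score(trial)
                if trial_score < best:
                    inputs, best, improved = trial, trial_score, True
    return inputs

# Toy objective: pretend the output is 3*kp + 0.5*ki and we want it near 5.
score = lambda p: abs(3 * p["kp"] + 0.5 * p["ki"] - 5.0)
print(tune(score, {"kp": 0.0, "ki": 0.0, "kd": 0.0}, ["kp", "ki", "kd"]))
```

In a real system you’d also shrink the step size on later passes, exactly as the paragraph above describes.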

Again, an optimized system doesn’t necessarily produce an ideal output, but it should produce the best output the system is capable of generating. As such, optimizing a poorly designed system is, generally speaking, a waste of time since no amount of tuning will produce a good enough result. The upshot of this is that when we have a black box system and our optimization efforts fail, we know that we have to go back and redesign how the system works. It’s a very useful tool in determining when a black box approach may not be appropriate.

One other point to remember when optimizing a system is that in the real world, there may be limits on how you can set your control inputs. For example, if you have a process whose output increases with the number of man-hours you spend on it, you will only be able to dedicate a certain amount of time. You may have a process that would perform beautifully if you could spend 200 man-hours a week on it, but if you only have two employees, you will never be able to reach that operating point. When you design a system or process, it’s easy to get attached to your solution, and tempting to blame failures on a lack of input resources; the reality is that if your design can’t accomplish what’s needed with the resources available, you haven’t actually created a solution. By understanding the nature of optimization, you can avoid wasting time and resources on a system that can never produce the desired output, and instead spend that time developing a better solution.

As I said at the beginning of this post, there are plenty of algorithms for optimizing specific systems. We’ve approached the topic from an engineering perspective, but there are professional organizations such as the Workflow Management Coalition and the Association of Business Process Management Professionals, which focus on optimization from a business process perspective, so if you’re interested in learning more about the topic from that viewpoint, I recommend taking a look at their websites. If you want to try your hand at manually tuning a PID controller, there are several PID simulators available for free online. I recommend the one offered by the PID Control Laboratory. In my next post, we’ll be covering risk assessment, so be sure to check back for that.

-Joe H., Inflow Engineer

 

At Inflow we solve complex terror and criminal issues for the United States Government and their partners, by providing high quality and innovative solutions at the right price through the cultivation of a corporate culture dedicated to being #1 in employee and customer engagement. We Make it Matter, by putting people first! If you are interested in working for Inflow or partnering with us on future projects, contact us here.