Determining Acceptable Risk

Author: Joe H., Inflow Engineer

In our last post, Risk Assessment, we talked about how to determine the overall risk value for a project. If you haven't read it, I recommend doing so now, since this post builds on that content. Today we'll look at how we can use that value to guide our decision-making process. The key is determining our level of acceptable risk. Acceptable risk is the threshold above which your project is no longer worth pursuing. If you've ever looked at the side effects of a drug for headaches and decided that you'd rather have a headache than risk suffering one of those effects, then you're already familiar with the concept of acceptable risk. For professional purposes, however, we need to base our threshold value on something more than gut feelings or personal preference.

Determining acceptable risk is actually very simple. The first step is to determine the projected payout of the project in the same units you will be calculating risk in, typically dollars. This gives you a value for your maximum risk. Next, you need to account for any profit margin you need to generate with the project. Subtracting this number from the maximum risk value gives you the acceptable risk for your project. Now that we have this value, let's take a look at how it can be used as a decision-making tool.
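First, though, here's that arithmetic written out as a few lines of Python; the payout and margin figures are invented for illustration.

```python
projected_payout = 1_000_000   # maximum risk: the project's projected payout ($)
required_margin = 150_000      # profit the project still needs to generate ($)

acceptable_risk = projected_payout - required_margin
print(f"acceptable risk: ${acceptable_risk:,}")   # acceptable risk: $850,000
```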

The power of calculating an acceptable risk for a project is that it remains the same regardless of the approach used to complete the project. This allows us to look at multiple potential solutions, calculate the risk involved in each one, and compare them not just to each other, but to an objective standard. This in turn allows you to compare multiple projects. For example, if you needed to build a bridge and wanted to determine whether to use a suspension bridge, an arch bridge, or a truss bridge, you could simply calculate the overall risk of each approach and choose the lowest one. However, if you have to choose between building a bridge and building a subway somewhere else, comparing raw risk values can be misleading. If the subway is a high risk project but has a much higher payout, you may still want to build the subway instead of the bridge. Looking at risk versus acceptable risk normalizes everything into one metric, allowing for an accurate comparison between highly complex options.
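As a rough sketch of that normalization, divide each option's overall risk by its own acceptable risk and compare the ratios; the figures below are invented.

```python
# Dissimilar projects land on one scale once each option's risk is expressed
# as a fraction of its own acceptable risk.
projects = {
    "bridge": {"risk": 400_000, "acceptable_risk": 850_000},      # made-up figures
    "subway": {"risk": 2_000_000, "acceptable_risk": 4_500_000},  # riskier, bigger payout
}

for name, p in projects.items():
    ratio = p["risk"] / p["acceptable_risk"]
    verdict = "worth pursuing" if ratio < 1 else "not worth pursuing"
    print(f"{name}: risk is {ratio:.0%} of acceptable -> {verdict}")
```

Here the subway carries five times the bridge's absolute risk, yet relative to its own acceptable risk it comes out as the slightly safer bet.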

As some of you have probably realized, this type of risk analysis is essentially a cost-benefit analysis. The real value of approaching the problem from a risk perspective, rather than a cost perspective, is that it gives you an added level of granularity should you need to choose between two approaches with similar cost-benefit ratios. Maybe one project has a lot of low impact, high probability risks, and the other has a few high impact, low probability risks, but the same cost-benefit ratio. Depending on your situation, that difference could have a huge impact on which option is most suitable.
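A quick numeric illustration of that difference, with two invented risk profiles that carry the same expected risk but very different worst cases:

```python
many_small = [(0.5, 10_000)] * 20      # 20 risks: (likelihood, impact in $)
few_large = [(0.01, 5_000_000)] * 2    # 2 rare but catastrophic risks

for name, profile in [("many small", many_small), ("few large", few_large)]:
    expected = sum(p * i for p, i in profile)
    worst_case = sum(i for _, i in profile)
    print(f"{name}: expected ${expected:,.0f}, worst case ${worst_case:,.0f}")
```

Both profiles total $100,000 of expected risk, but one can cost you at most $200,000 while the other can cost you $10 million.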

Hopefully, these posts have given non-engineering readers a glimpse into the engineering mindset, and given engineering readers a look at how their training and methods can be applied in ways they may not have considered before. Next post, we're going to look at some tips for communicating technical topics to a non-technical audience, so be sure to check back!

 

At Inflow we solve complex terror and criminal issues for the United States Government and their partners, by providing high quality and innovative solutions at the right price through the cultivation of a corporate culture dedicated to being #1 in employee and customer engagement. We Make it Matter, by putting people first! If you are interested in working for Inflow or partnering with us on future projects, contact us here

Risk Assessment

Author: Joe H., Inflow Engineer

Pop quiz time: which animal is more dangerous to people, a shark or a cow? I'm willing to bet most people would say that sharks are obviously more dangerous than cows; after all, one is an apex predator with some impressive jaws, and one is a potential hamburger. If you were one of these people, I'm sorry to say that you're wrong. If you're asking why I would say that, then you're ready to dig into the subject of risk assessment.

Risk assessment is a form of analysis that determines the impact of things going wrong in a given situation. It’s a discipline that is used in a variety of industries, from healthcare to project management to systems engineering, but the basics are the same regardless of the application. First, you make a list of potential negative outcomes, how likely each one is to occur, and how serious the consequences are for each outcome. For each outcome you will multiply the likelihood of occurrence by the potential impact to get a risk value. You can add all these risks together to get an overall risk value for whatever it is you’re analyzing.
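As a minimal sketch of that calculation, with invented outcomes and dollar figures:

```python
# risk value = likelihood of occurrence * potential impact, summed over outcomes
risks = [
    {"outcome": "supplier delay", "likelihood": 0.30, "impact": 50_000},
    {"outcome": "design rework",  "likelihood": 0.10, "impact": 200_000},
    {"outcome": "key staff loss", "likelihood": 0.05, "impact": 400_000},
]

overall_risk = sum(r["likelihood"] * r["impact"] for r in risks)
print(f"overall risk: ${overall_risk:,.0f}")   # 15,000 + 20,000 + 20,000 = $55,000
```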

The wonderful thing about this approach to risk assessment is that it allows you to compare high impact, low likelihood risks with low impact, high likelihood risks, and every combination in between, on one scale. This can be presented in different ways depending on what you're analyzing. For example, if you're looking at investment risks, your risk assessment will be a dollar value, whereas a risk assessment for a medical procedure might be presented in terms of deaths per million. As an engineer, I tend to use a matrix to provide a quick overview of each risk, in addition to doing an actual calculation of the total risk. Since engineers often have to choose between multiple design options, being able to quickly look at the risks of each choice is extremely useful. A matrix is also generic enough to apply to many situations and can be understood at a glance. It's not always appropriate, but it lends itself well to a wide range of problems.
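One simple way to build that matrix view is to bucket each risk into a likelihood row and an impact column. The 3x3 layout, the cut-off values, and the risks in this sketch are invented for illustration, not a standard:

```python
# Bucket each (outcome, likelihood, impact) risk into a 3x3 matrix cell.
risks = [("supplier delay", 0.30, 50_000),
         ("design rework", 0.10, 200_000),
         ("key staff loss", 0.05, 400_000)]   # (name, probability, $ impact)

def matrix_cell(likelihood, impact, impact_scale=100_000):
    row = 0 if likelihood < 0.1 else 1 if likelihood < 0.5 else 2    # low/med/high
    col = 0 if impact < 0.1 * impact_scale else 1 if impact < impact_scale else 2
    return row, col

grid = [[[] for _ in range(3)] for _ in range(3)]   # rows: likelihood, cols: impact
for name, likelihood, impact in risks:
    r, c = matrix_cell(likelihood, impact)
    grid[r][c].append(name)

print(grid[1][1])   # medium likelihood, medium impact -> ['supplier delay']
```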

Now that we've covered the basic concepts behind risk assessment, let's take another look at our cow versus shark example. The risk we're looking at here is "being killed," so the impact of the risk from a cow is the same as the impact of the risk from a shark. Now we need the likelihood of being killed by each animal. This step of defining the impacts and likelihoods is actually the most challenging part of risk assessment and can require research or industry knowledge to get correct. In this case, I am not an expert in animal attacks, so we're going to do some research.

According to a CDC report on cattle-related deaths in Iowa, Kansas, Missouri, and Nebraska from 2003 to 2007, twenty-one people were killed by cows [1]. Since the population of those four states is approximately 14 million, we can estimate that the odds of being killed by a cow are around 1 in 700 thousand. In comparison, from 2005 (the earliest year I could find good statistics) to 2009, a total of twenty people were killed by sharks globally [2]. With a global population of roughly seven billion, that works out to about 1 in 350 million. Since the impact of each of these events is the same, we can do a direct comparison of the two rates and determine that you are approximately 500 times more likely to be killed by a cow than by a shark. We could do a similar calculation for the likelihood of being injured in an attack instead of being killed. I'll leave it to you as an exercise if you care to do so, but again, sharks are significantly less likely to attack people than cows.
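Here's the back-of-the-envelope version of that arithmetic. The population figures, roughly 14 million for the four states and roughly 7 billion worldwide, are approximations:

```python
cow_odds = 14_000_000 / 21        # ~1 in 667,000 over 2003-2007 [1]
shark_odds = 7_000_000_000 / 20   # ~1 in 350,000,000 over 2005-2009 [2]

print(f"cow:   1 in {cow_odds:,.0f}")
print(f"shark: 1 in {shark_odds:,.0f}")
print(f"ratio: ~{shark_odds / cow_odds:.0f}x")   # cows are ~500x more dangerous
```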

I chose this example because it highlights the power of good risk assessment: the ability to remove biases from your analysis. No matter how scary someone thinks a shark is, it's very hard to argue that being killed by a shark is more likely than being killed by a cow. Risk assessment forces you to explicitly address the questions "how bad could it be?" and "what are the odds?" and therefore allows you to catch mistakes that you might not have even known you were making. I've put together a basic risk assessment toolkit which you can download here. I encourage you to experiment with it using some real world examples to get a better sense of how this type of approach can be applied to your daily work processes. In our next post, we're going to examine the concept of "acceptable risk" and look at ways to use risk assessment to guide your decision-making processes, so be sure to check back for the second post on this topic.

-Joe H., Inflow Engineer

[1] http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5829a2.htm

[2] http://www.flmnh.ufl.edu/fish/sharks/statistics/statsw.htm

 


 

Optimization

Author: Joe H., Inflow Engineer

In my last post, Control Systems and Feedback Loops, we talked about a type of control system called a PID controller or a Proportional-Integral-Derivative controller. The key point to remember for the purposes of this article is that a PID controller uses three mathematical operations to adjust the input (and therefore the output) of a process, and each of those three operations has its own multiplier. By changing the values of each multiplier, we can change the effect of a PID controller on a process. In other words, a PID controller is a multi-variable control system. The practice of adjusting a multi-variable system to get a desired output is called optimization, and that’s the topic we’ll be covering in this post.

In order to optimize a system, you need to determine some details about the system in question. First, you need to know exactly what your desired output looks like. It's important to note that in many cases, your system will never actually reach the exact desired output; the goal of optimization is to get as close to that output as possible (we'll come back to this later on). Second, you need to know what your control inputs are. These are the inputs that change the behavior of the system. In a PID controlled system, the control inputs are the multiplier values, but other systems will have different control inputs. In a stereo system, for example, the control inputs are the bass, treble, and volume settings. Once you know these two details, you can optimize the system.

Optimization is a common engineering challenge, and for widely used control systems such as PID controllers, a number of optimization algorithms and software tools have been developed. However, not every control system has these tools readily available, so today we're going to focus on manual tuning. The first step in manually tuning a multi-variable system is to determine which control input has the greatest impact on your system, which has the second greatest, and so on. This can be done by changing one control input at a time and measuring the change in the output, as sketched below. With most systems, you'll find that each control input you change has a different effect on the output. Changing one control input might cause the output to scale up, while changing another might cause the output to respond to input changes more quickly. Ideally, you want to be able to say, "When I change this control input by this value, the output changes in this way, by this much." There's a good deal of trial and error involved in this process, although the more you know about the system you are optimizing, the more quickly you'll be able to figure out the input/output relationships.
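Here's a rough sketch of that one-at-a-time probing. The `run_system` function stands in for actually running your process and measuring its output; it, the input names, and the step size are all hypothetical:

```python
# Nudge one control input at a time and rank inputs by how much the output moves.
def rank_inputs(run_system, inputs, delta=0.1):
    baseline = run_system(inputs)
    impact = {}
    for name, value in inputs.items():
        nudged = dict(inputs, **{name: value + delta})   # change only this input
        impact[name] = abs(run_system(nudged) - baseline)
    return sorted(impact, key=impact.get, reverse=True)  # most impactful first
```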

Once you’ve determined what each control input does to the output, you can move on to the next step of optimization. Starting with the most impactful input, adjust the input value until the output is in the ballpark of the desired value. In a PID system, the P multiplier is adjusted first, the I multiplier is adjusted second, and the D multiplier is adjusted last. (If you want to see this tuning in action, click here.)  In many systems, increasing a control input too much will cause the output to start behaving in an unstable manner, so it’s important to avoid over-adjusting at first. Move on to the next most impactful control input and so on, until you’ve adjusted all your controls. If you’ve done everything right, you should have an output that is very close to ideal. More likely, however, you’ll have an output that isn’t quite where you want it. To resolve that, use an iterative approach and go back to the most impactful control input. When you’ve reached a point where every change you make moves your output away from the desired output, you’ve optimized your system.
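The whole procedure can be sketched as a simple search loop: adjust one input at a time, keep any change that improves the output, and stop once nothing helps. As with the previous sketch, `measure_error` is a hypothetical stand-in for running the system and comparing its output against the desired output:

```python
# Hedged sketch of the manual tuning loop described above.
def tune(inputs, measure_error, step=0.1):
    best = measure_error(inputs)
    improved = True
    while improved:                     # keep iterating until no change helps
        improved = False
        for name in inputs:             # visit inputs most-impactful first
            for delta in (step, -step):
                trial = dict(inputs, **{name: inputs[name] + delta})
                error = measure_error(trial)
                if error < best:        # keep the change only if it helps
                    inputs, best, improved = trial, error, True
    return inputs                       # every further change makes things worse
```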

Again, an optimized system doesn’t necessarily produce an ideal output, but it should produce the best output the system is capable of generating. As such, optimizing a poorly designed system is, generally speaking, a waste of time since no amount of tuning will produce a good enough result. The upshot of this is that when we have a black box system and our optimization efforts fail, we know that we have to go back and redesign how the system works. It’s a very useful tool in determining when a black box approach may not be appropriate.

One other point to remember when optimizing a system is that in the real world, there may be a limit to how you can set your control inputs. For example, if you have a process whose output increases based on how many man-hours you spend on it, you will only be able to dedicate a certain amount of time. You may have a perfect process, one that could be highly optimized if you could spend 200 man-hours a week on it, but if you only have two employees, you will never be able to optimize it. Oftentimes, when you design a system or process, you get attached to your solution, and it can be tempting to blame failures on a lack of input resources. The reality is that if your design can't accomplish what is needed with the resources available, you haven't actually created a solution. By understanding the nature of optimization, you can avoid wasting time and resources on a system that can never produce the desired output, and instead spend that time developing a better solution.

As I said at the beginning of this post, there are plenty of algorithms for optimizing specific systems. We’ve approached the topic from an engineering perspective, but there are professional organizations such as the Workflow Management Coalition and the Association of Business Process Management Professionals, which focus on optimization from a business process perspective, so if you’re interested in learning more about the topic from that viewpoint, I recommend taking a look at their websites. If you want to try your hand at manually tuning a PID controller, there are several PID simulators available for free online. I recommend the one offered by the PID Control Laboratory. In my next post, we’ll be covering risk assessment, so be sure to check back for that.

-Joe H., Inflow Engineer

 


Control Systems and Feedback Loops

Author: Joe H., Inflow Engineer

We’ve talked a good deal about some of the processes that engineers use to design systems at a high level, but today we’re going to dive down into some of the implementation details and talk about feedback loops and control systems. We’ll look at what control systems are first, then look at feedback, and see how to use it to regulate the behavior of a system. Finally, we’ll talk about one of my favorite control systems, the PID controller, and how you can use it in a non-engineering setting.

A control system is the mechanism that governs how your system behaves. The thermostat in your house is a control system, and so is a light switch for that matter. Almost anything with moving parts is going to have some sort of control system. Generally speaking, a control system falls into one of two categories, open loop or closed loop. A thermostat is a closed loop system, and a light switch is an open loop system, so let’s use them to explore the difference between the two categories.

An open loop control system is one where the controller does not monitor the output of the system, such as the light switch we mentioned above. If you read my earlier post on black boxing, you should remember that one way to look at a system is to ignore how the system functions, and focus instead on the inputs and outputs of the system. In the case of a light, you have two inputs, electricity and the switch position, and one output, light. The light switch doesn't care how much light the bulb is generating; in fact, it doesn't look at the system output at all. It just responds to the inputs a certain way and assumes that the system will function as designed. The lightbulb could be dead, or missing completely, and the control system would not change its behavior at all.

On the other hand, a thermostat monitors the temperature of the air and uses that to adjust the behavior of the air conditioning/heating system it controls. The output of the system, the temperature, is monitored and compared to the desired temperature, and the behavior of the system is adjusted as needed. This makes the thermostat a closed loop controller, one that takes the output of a system, compares it to a desired value, and then modifies the behavior of the system to get a more desirable output.

Looking at these two types of control systems, it should be clear that the difference is the use of feedback in the closed loop controller. Strictly speaking, feedback is any part of the output of a system which is passed back into the system. Feedback can actually be undesirable, such as when a microphone gets too close to a speaker. The microphone amplifies whatever sound it detects, and then the speaker outputs the sound, which is picked up by the microphone and amplified again, and so on in a feedback loop. In order to use feedback to control a system, we need to carefully control what part of the output gets passed back in, as well as what the system does when it gets that input. In our thermostat example, we’re not just pumping air back into the system, we’re measuring the temperature of the air. In other words, we’re only looking at part of the output, and we’re creating a buffer between that input and the output of the system by comparing the actual output with a theoretical desired output. With a good set of metrics, it’s fairly easy to figure out what parts of the output you need to measure for a closed loop system (for instance, we really don’t care about the chemical composition of the air in our thermostat example, even though it’s part of our output), but the comparison process can get a bit tricky.

One of the more common ways to handle the comparison of actual outputs with ideal outputs is with a PID, or Proportional-Integral-Derivative, controller. A PID controller subtracts the output value of a system from the desired output value to create what is known as an error value. The controller then applies three mathematical operations to that error value to generate a new input value for the system, bringing the actual output closer to the desired output and reducing the error value. Note that we'll need a separate PID controller for each input value we want to adjust.

As you may have guessed, the three operations performed by a PID controller are proportional scaling of the error value, integration of the error value over time, and differentiation of the error value with respect to time. This allows a PID controller to correct the behavior of the system based on present error values, past error values, and predicted future error values. The results of each of these operations are added together to create the new input for the system being controlled. In order to understand exactly how a PID controller works, let's look at each operation in a little more detail. We'll keep the math to a minimum, since there's a bit of calculus involved behind the scenes.

The first operation, proportional (P) scaling (or gain), adjusts for the present error value in the system. Since the error value is the desired output value minus the actual output value, when the output is lower than it should be, the error value is positive. This increases the input value generated by the controller, causing the actual output value to increase as well. When the actual output value is larger than the desired output, the error value is negative, and the controller reduces the input value, pulling the output back toward the desired value.

The second operation, integration (I) of error over time, accounts for previous error values in the system. For those of you without a calculus background, an integral is used to calculate the area under a curve. This allows you to take a continuous series of measurements and add them together to get a single value. In a PID controller, we look at the output of the system as a function of time, and subtract that from the ideal output of the system as a function of time to get the error value as a function of time. If the actual output value is lower than desired most of the time, the result of integrating the error function will be positive; conversely, an output value which is predominantly greater than desired produces a negative result. In other words, the longer your output runs low or high, the more the controller will increase or reduce the input value.

The final operation, the derivative (D) with respect to time, estimates what the error value is going to be in the future. A derivative measures how the output of a function changes as its input changes. Since the PID controller is looking at a continuous series of measurements over time, the derivative operation measures how the error value is changing with respect to time at any given moment. When the actual output value of the system is getting closer to the desired output value, the error value decreases and the derivative operation gives a negative value; if the actual output value is getting further away from the desired output value, the operation gives a positive value. The larger the rate of change in either direction, the greater the magnitude of the derivative output. If the actual output value isn't changing at all compared to the desired output value, the derivative is zero. Put simply, the derivative operation makes a large correction when the output value is moving away from the desired value quickly, causing the actual output of the system to converge with the desired output more quickly than it would otherwise. It also reduces the amount of adjustment the PID controller makes as the actual and desired output values converge, preventing the controller from over-correcting.

Each of these operations responds to changes in output differently, and since different systems behave differently, a PID controller uses a multiplier for each operation to control how much influence each one has over the whole system. If needed, the multiplier for one or two of the operations can even be set to zero. We'll talk about how to figure out what multipliers to use next week, but for now it's important to notice that the PID controller has no insight into how the system it's controlling actually works. The system is treated as a black box, which allows the PID controller to be used for a wide variety of applications. There are systems where PID controllers aren't the best choice, but they're a great example of how powerful the concept of black boxing a system can be once you start to really apply it.
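To make the mechanics concrete, here's a minimal discrete-time PID controller sketch in Python. It's an illustration of the three operations described above, not a production controller, and the class and parameter names are my own:

```python
class PID:
    """Minimal Proportional-Integral-Derivative controller sketch."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd   # one multiplier per operation
        self.setpoint = setpoint                 # desired output value
        self.integral = 0.0                      # running sum of past errors
        self.prev_error = None                   # last error, for the derivative

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured         # present error value
        self.integral += error * dt              # accumulated past error
        if self.prev_error is None:
            derivative = 0.0                     # no trend on the first sample
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # New control input: weighted sum of present, past, and predicted error.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```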

Now let's look at how the ideas behind a PID controller can be used in non-engineering settings. First and foremost, you need to look at your process as a black box system. You need to be able to identify exactly what you want your process or system to produce, and also identify the key input for generating that output. For example, if you run an organization which generates policy papers, you could look at the number of papers as the output, and the number of employee man-hours as the input. Let's assume that you want to produce 10 papers a week of a given minimum length.

The first month, your firm spends 10 man-hours on producing papers and generates 2 papers; our output value is lower than desired. We could run exact calculations, which is what you would do with an autonomous system, but since we're estimating, we remember that when the actual output value is lower than the desired value, both the proportional and integral operations are positive. Moreover, the derivative operation gives a small negative value when the actual output moves slowly towards the desired output. That gives us two large positive values and one small negative value, so we increase the number of man-hours. Let's go with 20 hours.

If that results in 7 papers, then once again the proportional and integral operations are positive, but smaller than before. The result of the derivative operation is a slightly larger negative value than before, since the actual output moved towards the desired output more quickly, so for our next input value, let's jump to 26 hours. For the sake of the example, we'll assume that 26 man-hours results in 11 papers. Now we've overshot our goal, so the proportional value will be a small negative number. The result of the integral operation will be just a little lower than before, but still positive, since we've been undershooting our goal for most of the process. The result of the derivative will be about the same as it was before, since the change from 7 to 11 is close to the change from 2 to 7. We take all of that into account, use 24 man-hours as our next input, and get 10 papers.
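Using the PID sketch from earlier in this post, the policy-paper loop might look something like the following. The gains are arbitrary placeholders, so the numbers won't reproduce the story above exactly; the point is the mechanics of feeding each measurement back in:

```python
controller = PID(kp=1.5, ki=0.2, kd=0.5, setpoint=10)   # target: 10 papers/week
man_hours = 10.0
for papers in [2, 7, 11]:                    # observed outputs from the example
    adjustment = controller.update(papers)
    man_hours = max(0.0, man_hours + adjustment)   # can't schedule negative hours
    print(f"{papers} papers -> try {man_hours:.0f} man-hours next")
```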

This was an extremely simple example, but hopefully it illustrates how applying the concepts of PID controllers to a process can allow you to fine-tune it much more accurately than simply guessing at the input values. In the example we used, someone with a bit of real world experience could probably give an accurate off-the-cuff estimate without having to do as much work, but if you have a more complex process, this can be an incredibly powerful method of managing it without having to understand all the intricacies of the process. Because you're not just looking at the present value of your process output, but also at how the process has behaved in the past and will likely behave in the future, you can make much more accurate judgements about how many resources to apply. In my next post, we'll look at fine-tuning this type of control system, and talk about process optimization in general.

If you want to brush up on your higher math skills, you can check out http://www.mathsisfun.com/calculus/index.html for a quick primer, or https://mooculus.osu.edu/ for a full, free, online introductory calculus course.

-Joe H., Inflow Engineer

 


Factors of Safety

Author: Joe H., Inflow Engineer

As I discussed in last week’s post, Choosing the Right Metrics, developing good metrics requires a detailed understanding of the problem you are trying to solve. However, there are plenty of reasons why precise metrics can’t be generated. Maybe the system you’re designing will be used in an environment you can’t predict, or maybe you just don’t know enough about the problem you’re trying to solve. In this post, we’re going to take a look at how an engineer approaches a problem where good metrics can’t be defined.

Like all the other engineering techniques we’ve covered, our approach to dealing with ill-defined or non-existent metrics is going to focus on reducing the complexity of the problem. We know how to use iteration and black boxing to design systems with fully-defined metrics and requirements, so instead of figuring out a whole new approach, we need a method that lets us use the tools we already have. To do so, we need a way to convert metrics like “I need this car to be fast” into something more usable. The engineering solution is to use something called a factor of safety.

A factor of safety is essentially a requirements multiplier. When you have a very well defined metric, you don’t need much of a factor of safety, since you know exactly how your design will be used. For example, if you’re designing a light fixture, you know the fixture will be hooked up to a power source with a specific voltage and current, protected by a fuse box, so your power wires don’t need to be designed to deal with anything beyond that voltage. You can run the numbers, figure out how much insulation and what wire gauge you’ll need, and move on with your design. If you’re designing the fuse box, you know that you’ll normally be getting a certain voltage and current, and that sometimes you might get a surge, but there’s no good way to know how big that surge might be. You could do some research into typical power surge characteristics, but you might still get a larger surge. However, if you take the characteristics of a typical power surge and multiply them all by a factor of four or five, you’ll have a set of requirements which should cover a vast majority of possible power surges. That multiplier is the factor of safety.

The goal of using a factor of safety isn’t to generate a perfect set of requirements, but rather to create metrics that are good enough to be used. The less confident you are about the type of demands that will be placed on your design, the larger you make your factor of safety. When you have a more solid handle on the requirements, a large factor of safety will result in costly over-designing, so you use a much smaller factor, but even a design based on very well defined requirements will typically have a small factor of safety just in case something unexpected happens to the system. In either case, you’ve taken the uncertainty out of the design requirement, converting a complex problem into a simple one.

Within engineering fields, there are best practices regarding exactly what factor of safety you should use for a given design problem, but when you are applying the concept to a general problem, you’ll probably have to make a judgement call. One common real world scenario is providing an estimated completion date on a project. Typically, no one gets upset if you finish a project early or under budget, so it’s better to err on the side of caution. If it’s something that you’ve done before and are comfortable with, you should have a good idea of how long it will take, so whatever you estimate probably doesn’t need to be multiplied by very much. However, for a new project even your best guess might be off by a good deal. Again, the less confident you are in your estimate, the larger your factor of safety should be to compensate.
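As a quick sketch of that judgement call, you can tie the multiplier to how confident you are in the estimate. The confidence levels and factor values here are placeholders, not established best practices:

```python
# Scale a base estimate by a factor of safety chosen by confidence level.
FACTORS = {"routine": 1.2, "familiar": 1.5, "new": 3.0}   # illustrative only

def padded_estimate(base_weeks, confidence):
    return base_weeks * FACTORS[confidence]

print(padded_estimate(4, "routine"))   # 4.8 weeks: well-understood work
print(padded_estimate(4, "new"))       # 12.0 weeks: your best guess may be far off
```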

The concept of using what is essentially a fudge factor is certainly not unique to engineering, but engineering is, as far as I know, unique in formalizing the approach. In order to apply the concept like an engineer would, you need to continuously analyze how accurate your factor of safety estimates are. In our last example, if you typically multiply your time estimates by three for new projects, but consistently finish a week earlier than you estimated, you need to lower your factor of safety. With a little bit of effort, you'll develop a powerful tool for accurately accounting for uncertainties in your work!

-Joe H., Inflow Engineer

 


Choosing the Right Metrics

Author: Joe H., Inflow Engineer

When we started this series, I stated that engineering is the art and science of taking extremely complex systems and getting them to do approximately what you want them to do. In my previous post, Defining a Problem, we discussed how important it is to properly define exactly what it is you want a system to do and we also discussed metrics. In this post, we’re going to dig deeper into just what a metric is and how to use them to measure success.

A metric is a measurement or set of measurements used to determine success. For example, math tests are used to determine a student's mastery of mathematical concepts. Typically, success is determined by one metric: the number of correctly solved problems, completed within the time limit of the test, divided by the total number of problems on the test. If you get a certain number of problems correct, you've succeeded in your task. This seems obvious, but take a moment and reflect on what other metrics could possibly be used to gauge mathematical aptitude. For example, we could measure the amount of time spent on each question or the amount of time spent studying for the test. Would these be good indicators of mathematical skill?
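Written out as code, that single metric is just a ratio; the names and the pass mark below are illustrative:

```python
def test_score(correct_within_limit, total_problems):
    return correct_within_limit / total_problems

print(test_score(17, 20))   # 0.85: a success if the passing bar is, say, 0.8
```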

These are the types of questions any engineer has to ask when approaching a design problem. In the case of a math test, we need to figure out if someone can solve math problems correctly, which is something we can measure completely with our first metric (number of problems solved). If we add a second metric, time spent on each problem, we might get a better idea of what types of math problems are easiest for the student, but we don’t actually gain any more insight into how likely they are to get a correct answer. Likewise, measuring the time spent studying isn’t going to suddenly change our understanding of what the student knows. If we’re viewing the student as a black box which solves math problems, study time and time spent per question are not meaningful measurements of success.

So, what happens if we change the purpose of our test from an analytical tool measuring how well a student solves problems to a diagnostic tool measuring how well a student is learning math? Suddenly, our additional metrics make a great deal of sense. We want the additional insights in order to make changes that will allow the student to become more proficient at solving math problems. We’re no longer analyzing the student’s mathematical ability, we’re trying to determine how to improve the student’s mathematical ability. We might also want to add a metric measuring the increase in performance over time, to make sure that we’re getting the results we want. The metrics we choose vary based on what we’re trying to accomplish.

This type of detailed understanding of what problem you’re trying to solve is the key to picking good metrics. It’s not enough to know what you’re trying to measure, you need to know why you’re measuring it. If an engineer is designing a bridge and I want to make sure that the bridge will stay standing, I’m going to use metrics that relate to the amount of stress the bridge can withstand, how well the materials hold up to environmental conditions, etc. All of my metrics will relate to the bridge and how it performs. If instead I want to measure the engineer, I’m going to use a different set of metrics. I’ll look at the amount of time taken to design the bridge, how many revisions of the design are needed, if the engineer followed good design practices, and maybe some other metrics related to engineering processes. In this case, my metrics are related to the act of designing the bridge instead of the design of the bridge.

As you can see, generating proper metrics isn't as simple as it might appear, but learning how to clearly measure the success of a project or task can lead to a massive reduction in the time and effort spent achieving good results. There's a reason that the terms "metrics-based management" and "performance metrics" have become buzzwords in many industries. Good metrics reduce the amount of time spent solving problems that aren't relevant to your project. Perhaps even more importantly, they allow for an increased amount of creative problem solving. Looking back to our first example, where we viewed the student as a problem solving black box, we can see that focusing on the results instead of the process allows for all sorts of flexibility in the student's approach to studying. We've enabled out-of-the-box thinking, while still ensuring that the solution we reach meets our needs.

Every project and problem requires its own set of metrics, but the basic approach to developing them is fundamentally the same across the board. Again, this type of thought process is something that you can implement in your profession right away. You don't need new equipment or special training, you just need to take the time to look over your normal tasks and projects, and ask yourself what you're doing, why, and how you're measuring success.

-Joe H., Inflow Engineer

 
