Risk Management Approach and Plan

Risk Identification
Definition: Risk identification is the process of determining risks that could potentially prevent the program, enterprise, or investment from achieving its objectives. It includes documenting and communicating the concern.

Background

Risk identification is the critical first step of the risk management process. The objective of risk identification is the early and continuous identification of events that, if they occur, will have negative impacts on the project's ability to achieve performance or capability outcome goals. They may come from within the project or from external sources. There are multiple types of risk assessments, including program risk assessments, risk assessments to support an investment decision, analysis of alternatives, and assessments of operational or cost uncertainty.
 
Risk identification needs to match the type of assessment required to support risk-informed decision-making. For an acquisition program, the first step is to identify the program goals and objectives, thus fostering a common understanding across the team of what is needed for program success. This gives context and bounds the scope by which risks are identified and assessed.
 

Identifying Risks in the Systems Engineering Program

There are multiple sources of risk. For risk identification, the project team should review the program scope, cost estimates, schedule (to include evaluation of the critical path), technical maturity, key performance parameters, performance challenges, stakeholder expectations vs. current plan, external and internal dependencies, implementation challenges, integration, interoperability, supportability, supply-chain vulnerabilities, ability to handle threats, cost deviations, test event expectations, safety, security, and more. 
In addition, historical data from similar projects, stakeholder interviews, and risk lists provide valuable insight into areas for consideration of risk.  Risk identification is an iterative process. As the program progresses, more information will be gained about the program (e.g., specific design), and the risk statement will be adjusted to reflect the current understanding. New risks will be identified as the project progresses through the life cycle.

Best Practices and Lessons Learned

Operational Risk. Understand the operational nature of the capabilities you are supporting and the risks those capabilities pose to the end users, their missions, and their operations. Understanding the operational need/mission helps you appreciate the gravity of risks and the impact they could have on the end users. This is a critical part of risk analysis: realizing the real-world impact that can occur if a risk arises during operational use. Typically, operational users are willing to accept some level of risk if they are able to accomplish their mission (e.g., mission assurance), but you need to help users understand the risks they are accepting and to assess the options, balances, and alternatives available.

Technical maturity. Work with and leverage industry and academia to understand the technologies being considered for an effort and the likely transition of the technology over time. One approach is to work with vendors under a non-disclosure agreement to understand the capabilities and where they are going, so that the risk can be assessed.
Acquisition drivers. Emphasize critical capability enablers, particularly those that have limited alternatives. Evaluate and determine the primary drivers of an acquisition and emphasize their associated risk in formulating risk mitigation recommendations. If a particular aspect of a capability is not critical to its success, its risk should be assessed, but it need not be the primary focus of risk management. For example, if there is risk to a proposed user interface, but the marketplace has numerous alternatives, the proposed approach is probably less critical to the overall success of the capability.
 
On the other hand, an information security feature is likely to be critical. If only one alternative approach satisfies the need, emphasis should be placed on it. Determine the primary success drivers by evaluating needs and designs, and determining the alternatives that exist. Is a unique solution on the critical path to success? Make sure critical path analyses include the entire system engineering cycle and not just development (i.e., system development, per se, may be a "piece of cake," but fielding in an active operational situation may be a major risk).

Use capability evolution to manage risk. If particular requirements are driving implementation of capabilities that are high risk due to unique development, edge-of-the-envelope performance needs, etc., the requirements should be discussed with the users to determine their criticality. It may be that the need could be postponed, and the development community should assess when it might be satisfied in the future. Help users and developers gauge how much risk (and schedule and cost impact) a particular capability should assume against the option of receiving less risky capabilities sooner.
In developing your recommendations, consider technical feasibility and knowledge of related implementation successes and failures to assess the risk of implementing now instead of the future. In deferring capabilities, take care not to fall into the trap of postponing ultimate failure by trading near-term easy successes for a future of multiple high-risk requirements that may be essential to overall success.

Key Performance Parameters (KPPs). Work closely with the users to establish KPPs. Overall risk of program cancellation can center on failure to meet KPPs. Work with the users to ensure the parameters are responsive to mission needs and technically feasible. The parameters should not be so lenient that they can easily be met but fail to meet the mission need; nor should they be so stringent that they cannot be met without an extensive effort or pushing technology, either of which can put a program at risk. Seek results of past operations, experiments, performance assessments, and industry implementations to help determine performance feasibility.

External and internal dependencies. Having an enterprise perspective can help acquirers, program managers, developers, integrators, and users appreciate risk from dependencies of a development effort. With the emergence of service-oriented approaches, a program becomes more dependent on the availability and operation of services provided by others that it intends to use in its development efforts. Work with the developers of services to ensure quality services are being created and that thought has been put into the maintenance and evolution of those services.
Work with the development program staff to assess the services that are available, their quality, and the risk that a program has in using and relying upon the service. Likewise, there is a risk associated with creating the service and not using services from another enterprise effort. Help determine the risks and potential benefits of creating a service internal to the development with possibly a transition to the enterprise service at some future time.

Integration and Interoperability (I&I). I&I will almost always be a major risk factor. They are forms of dependencies in which the value of integrating or interoperating has been judged to override their inherent risks. Techniques such as enterprise federation architecting, composable capabilities on demand, and design patterns can help the government plan and execute a route to navigate I&I risks. Refer to the Enterprise Engineering section of the Systems Engineering Guide for articles on techniques for addressing I&I-associated risks.

Information security. Information security is a risk in nearly every development. Some of this is due to the uniqueness of government needs and requirements in this area. Some of this is because of the inherent difficulties in countering cyber attacks. Creating defensive capabilities to cover the spectrum of attacks is challenging and risky. Help the government develop resiliency approaches (e.g., contingency plans, backup/recovery, etc.). Enabling information sharing across systems in coalition operations with international partners presents technical challenges and policy issues that translate into development risk. The information security issues associated with supply chain management are so broad and complex that even maintaining rudimentary awareness of the threats is a tremendous challenge.
Skill level. The skill or experience level of the developers, integrators, government, and other stakeholders can lead to risks. Be on the lookout for insufficient skills and reach across the corporation to fill any gaps. In doing so, help educate government team members at the same time you are bringing corporate skills and experience to bear.

Cost risks. Programs will typically create a government cost estimate that considers risk. As you develop and refine the program's technical and other risks, the associated cost estimates should evolve as well. Cost estimation is not a one-time activity.

Historical information as a guide to risk identification. Historical information from similar government programs can provide valuable insight into future risks. Seek out information about operational challenges and risks in various operation lessons learned, after action reports, exercise summaries, and experimentation results. Customers often have repositories of these to access. Government leaders normally will communicate their strategic needs and challenges. Understand and factor these into your assessment of the most important capabilities needed by your customer and as a basis for risk assessments.

Historical data to help assess risk is frequently available from the past performance assessments and lessons learned of acquisition programs and contractors. In many cases, staff will assist the government in preparing performance information for a particular acquisition. The AF has a Past Performance Evaluation Guide (PPEG) that identifies the type of information to capture for use in future government source selections. This repository of information can help provide background on previous challenges and where they might arise again, both for the particular type of development activity and for the particular contractors.
There are numerous technical assessments of vendor products that can be accessed to determine the risk and viability of various products. One repository of tool evaluations is the Analysis Toolshed, which contains guidance on and experience with analytical tools. Using resources like these, and seeking out others who have tried products and techniques in prototypes and experiments, can help assess the risks for a particular effort.

How to write a risk: a best practice. A best-practice protocol for writing a risk statement is the Condition-If-Then construct. This protocol applies to risk management processes designed for almost any environment. It recognizes that a risk is, by its nature, probabilistic and that, if it occurs, it has unwanted consequences.

What is the Condition-If-Then construct? The Condition reflects what is known today. It is the root cause of the identified risk event. Thus, the Condition is an event that has occurred, is presently occurring, or will occur with certainty. Risk events are future events that may occur because of the Condition present. The If is the risk event associated with the Condition present. It is critically important to recognize the If and the Condition as a pair. When examined jointly, there may be ways to directly intervene or remedy the risk event's underlying root cause (the Condition) such that the consequences from this event, if it occurs, no longer threaten the project.

The If is the probabilistic portion of the risk statement. The Then is the consequence, or set of consequences, that will impact the engineering system project if the risk event occurs.

Encourage teams to identify risks. The culture in some government projects and programs discourages the identification of risks. This may arise because the risk management activities of tracking, monitoring, and mitigating the risks are seen as burdensome and unhelpful. In this situation, it can be useful to talk to the teams about the benefits of identifying risks and the inability to manage them all in your heads (e.g., determine priority, who needs to be involved, mitigation actions).
Assist the government teams in executing a process that balances management investment with value to the outcomes of the project. In general, a good balance is being achieved when the project's scope, schedule, and cost targets are being met, or threats to them are successfully mitigated by action plans, and the project team believes risk management activities provide value to the project. Cross-team representation is a must; risks should not be identified by an individual, or strictly by the systems engineering team (review the sources of risk above).

Consider organizational and environmental factors. Organizational, cultural, political, and other environmental factors, such as stakeholder support or organizational priorities, can pose as much or more risk to a project than technical factors alone. These risks should be identified and actively mitigated throughout the life of the project.  Mitigation activities could include monitoring legislative mandates or emergency changes that might affect the program or project mission, organizational changes that could affect user requirements or capability usefulness, or changes in political support that could affect funding. In each case, consider the risk to the program and identify action options for discussion with stakeholders.

Include stakeholders in risk identification. Projects and programs usually have multiple stakeholders that bring various dimensions of risk to the outcomes. They include operators, who might be overwhelmed with new systems; users, who might not be properly trained or have fears for their jobs; supervisors who might not support a new capability because it appears to diminish their authority; and policy makers, who are concerned with legislative approval and cost.
In addition, it is important to include all stakeholders, such as certification and accreditation authorities who, if inadvertently overlooked, can pose major risks later in the program. Stakeholders may be keenly aware of various environmental factors, such as pending legislation or political program support that can pose risks to a project that are unknown to the government. Include stakeholders in the risk identification process to help surface these risks.

Write clear risk statements. Using the Condition-If-Then format helps communicate and evaluate a risk statement and develop a mitigation strategy. The root cause is the underlying Condition that has introduced the risk (e.g., a design approach might be the cause), the If reflects the probability (e.g., the assessed probability that the If portion of the risk statement will occur), and the Then communicates the impact to the program (e.g., increased resources to support testing, additional schedule, and concern about meeting performance). The mitigation strategy is almost always better when based on a clearly articulated risk statement.
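As an illustration of how risk statements written in this format might be captured in a simple risk register, the following Python sketch keeps the three parts of the statement separate; the class, field names, and the example risk are assumptions made for this illustration, not material from the article.

```python
from dataclasses import dataclass

@dataclass
class RiskStatement:
    """A risk written with the Condition-If-Then protocol (illustrative sketch)."""
    condition: str    # root cause: a fact that is true today
    if_event: str     # the probabilistic risk event tied to that condition
    then_impact: str  # the consequence to the program if the event occurs

    def text(self) -> str:
        # Render the statement in the standard Condition-If-Then form.
        return (f"Condition: {self.condition} "
                f"If: {self.if_event}, "
                f"Then: {self.then_impact}")

# Hypothetical example risk, invented for illustration:
risk = RiskStatement(
    condition="The current design depends on a vendor library that has not yet been security-accredited.",
    if_event="accreditation is not granted before integration testing begins",
    then_impact="integration slips to the next build and additional test resources are required.",
)
print(risk.text())
```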

Expect risk statement modifications as the risk assessment and mitigation strategy are developed. It is common to have risk statements refined once the team evaluates the impact. When assessing and documenting the potential risk impact (cost, schedule, technical, or timeframe), the understanding and statement of the risk might change.

Do not include the mitigation statement in the risk statement. Be careful not to fall into the trap of introducing the mitigation statement into the risk statement. A risk is an uncertainty with potential negative impact. Some jump quickly to the conclusion of mitigating the risk and, instead of identifying the risk that should be mitigated (with mitigation options identified), they identify the risk as a sub-optimal design approach.
Do not jump to a mitigation strategy before assessing the risk probability and impact. A risk may be refined or changed given further analysis, which might affect what the most efficient/desired mitigation may be. Engineers often jump to the solution; it is best to move to the next step discussed in the Risk Impact Assessment and Prioritization  article to decompose and understand the problem first. Ultimately this will lead to a strategy that is closely aligned with the concern.

As a management process, risk management is used to identify and avoid the potential cost, schedule, and performance/technical risks to a system, take a proactive and structured approach to manage negative outcomes, respond to them if they occur, and identify potential opportunities that may be hidden in the situation. The risk management approach and plan operationalize these management goals.

Because no two projects are exactly alike, the risk management approach and plan should be tailored to the scope and complexity of individual projects. Other considerations include the roles, responsibilities, and size of the project team, the risk management processes required or recommended by the government organization, and the risk management tools available to the project.
Risk occurs across the spectrum of government and its various enterprises, systems-of-systems, and individual systems. At the system level, the risk focus typically centers on development. Risk exists in operations, requirements, design, development, integration, testing, training, fielding, etc. For systems-of-systems, the dependency risks rise to the top. Working consistently across the system-of-systems, synchronizing capability development and fielding, considering whether to interface, interoperate, or integrate, and the risks associated with these paths all come to the forefront in the system-of-systems environment.

At the enterprise level, governance and complexity risks become more prominent. Governance risk arises when different guidance is issued across the enterprise for the benefit of the enterprise as a whole; it trickles down into the systems-of-systems and individual systems, resulting in potentially unanticipated demands and solutions that may be suboptimal at the lower levels even though they are beneficial at the enterprise level.

Risk Management in System-Level Programs

System-level risk management is predominantly the responsibility of the team working to provide capabilities for a particular development effort. Within a system-level risk area, the primary responsibility falls to the system program manager and SE for working risk management, and to the developers and integrators for helping identify and create approaches to reduce risk. In addition, a key responsibility rests with the user community's decision maker: deciding when to accept residual risk after it and its consequences have been identified.

Risk Management in System-of-Systems Programs

Today, the body of literature on engineering risk management is largely aimed at addressing traditional engineering system projects: those systems designed and engineered against a set of well-defined user requirements, specifications, and technical standards. In contrast, little exists on how risk management principles apply to a system whose functionality and performance are governed by the interaction of a set of highly interconnected, yet independent, cooperating systems. Such systems may be referred to as systems-of-systems.

A system-of-systems can be thought of as a set or arrangement of systems that are related or interconnected to provide a given capability that, otherwise, would not be possible. The loss of any part of the supporting systems degrades or, in some cases, eliminates the performance or capabilities of the whole.  
 
What makes risk management in the engineering of systems-of-systems more challenging than managing risk in a traditional system-engineering project? The basic risk management process steps are the same. The challenge comes from implementing and managing the process steps across a large-scale, complex, system-of-systems—one whose subordinate systems, managers, and stakeholders may be geographically dispersed, organizationally distributed, and may not have fully intersecting user needs.
How does the delivery of capability over time affect how risks are managed in a system-of-systems? The difficulty is in aligning or mapping identified risks to capabilities planned to be delivered within a specified build by a specified time. Here, it is critically important that risk impact assessments are made as a function of which capabilities are affected, when these effects occur, and their impacts on users and stakeholders. Lack of clearly defined system boundaries, management lines of responsibility, and accountability further challenge the management of risk in the engineering of systems-of-systems. User and stakeholder acceptance of risk management, and their participation in the process, is essential for success.

Given the above, a program needs to establish an environment where the reporting of risks and their potential consequences is encouraged and rewarded. Without this, there will be an incomplete picture of risk. Risks that threaten the successful engineering of a system-of-systems may become evident only when it is too late to effectively manage or mitigate them.  Frequently a system-of-systems is planned and engineered to deliver capabilities through a series of evolutionary builds. Risks can originate from different sources and threaten the system-of-systems at different times during their evolution.

These risks and their sources should be mapped to the capabilities they potentially affect, according to their planned delivery date. Assessments should be made of each risk's potential impacts to planned capabilities, and whether they have collateral effects on dependent capabilities or technologies.  In most cases, the overall system-of-systems risk is not just a linear "roll-up" of its subordinate system-level risks. 
Rather, it is a combination of specific lower level individual system risks that, when put together, have the potential to adversely impact the system-of-systems in ways that do not equate to a simple roll-up of the system-level risks. The result is that some risks will be important to the individual systems and be managed at that level, while others will warrant the attention of system-of-systems engineering and management.
 

Risk Management in Enterprise Engineering Programs  

Risk management of enterprise systems poses an even greater challenge than risk management in systems-of-systems programs. Enterprise environments (e.g., the Internet) offer users ubiquitous, cross-boundary access to wide varieties of services, applications, and information repositories. Enterprise systems engineering is an emerging discipline. It encompasses and extends "traditional" systems engineering to create and evolve "webs" of systems and systems-of-systems that operate in a network-centric way to deliver capabilities via services, data, and applications through an interconnected network of information and communications technologies. This is an environment in which systems engineering is at its "water's edge."

In an enterprise, risk management is viewed as the integration of people, processes, and tools that together ensure the early and continuous identification and resolution of enterprise risks. The goal is to provide decision makers an enterprise-wide understanding of risks, their potential consequences, interdependencies, and rippling effects within and beyond enterprise "boundaries." Ultimately risk management aims to establish and maintain a holistic view of risks across the enterprise, so capabilities and performance objectives are achieved via risk-informed resource and investment decisions.
Today we are in the early stage of understanding how systems engineering, engineering management, and social science methods weave together to create systems that "live" and "evolve" in enterprise environments.

Requirements for Getting Risk Management Started

Senior leadership commitment and participation are required.
Stakeholder commitment and participation are required.
Risk management is made a program-wide priority and "enforced" as such throughout the program's life cycle.

Technical and program management disciplines are represented and engaged. Both program management and engineering specialties need to be communicating risk information and progress toward mitigation. Program management needs to identify contracting and funding concerns; SEs need to engage across the team and identify risks, costs, and potential ramifications if the risk were to occur, as well as mitigation plans (actions to reduce the risk and the cost/resources needed to execute them successfully).

Risk management is integrated into the program's business processes and systems engineering plans. Examples include risk status included in management meetings and/or program reviews, risk mitigation plan actions tracked in schedules, and cost estimates reflective of risk exposure.

The Risk Management Plan describes a process, such as the fundamental steps shown in Figure 1, that is intended to enable the engineering of a system that is accomplished within cost, delivered on time, and meets user needs.
Best Practices and Lessons Learned

Twenty-One "Musts"

Risk management must be a priority for leadership and throughout the program's management levels. Maintain leadership priority and open communication. Teams will not identify risks if they do not perceive an open environment to share risk information (messenger not shot) or management priority on wanting to know risk information (requested at program reviews and meetings), or if they do not feel the information will be used to support management decisions (lip service; information not informative; team members will not waste their time if the information is not used).
Risk management must never be delegated to staff that lack authority.
A formal and repeatable risk management process must be present, one that is balanced in complexity and data needs, such that meaningful and actionable insights are produced with minimum burden.
The management culture must encourage and reward identifying risk by staff at all levels of program contribution.
Program leadership must have the ability to regularly and quickly engage subject matter experts.
Risk management must be formally integrated into program management.
Participants must be trained in the program's specific risk management practices and procedures.
A risk management plan must be written with its practices and procedures consistent with process training.
Risk management execution must be shared among all stakeholders.
Risks must be identified, assessed, and reviewed continuously, not just prior to major reviews.
Risk considerations must be a central focus of program reviews.
Risk management working groups and review boards must be rescheduled when conflicts arise with other program needs.
Risk mitigation plans must be developed, success criteria defined, and their implementation monitored relative to achieving success criteria outcomes.
Risks must be assigned only to staff with authority to implement mitigation actions and obligate resources.
Risk management must never be outsourced.
Risks that extend beyond the traditional impact dimensions of cost, schedule, and technical performance must be considered (e.g., programmatic, enterprise, cross-program/cross-portfolio, and social, political, and economic impacts).
Technology maturity and its future readiness must be understood.
The adaptability of a program's technology to change in operational environments must be understood.
Risks must be written clearly using the Condition-If-Then protocol.
The nature and needs of the program must drive the design of the risk management process, within which a risk management tool/database conforms.
The risk management tool/database must be maintained with current risk status information; preferably, employ a tool/database that rapidly produces "dashboard-like" status reports for management.

It is important for project and program leaders to keep these minimum conditions in mind, with each taking action appropriate for their roles.
 
In particular, the SE should provide effective support as follows:

Get top-level buy-in. SEs can help gain senior leadership support for risk management by highlighting some of the engineering as well as programmatic risks. SEs should prepare assessments of the impact that risks could have and back them with facts and data (e.g., increased schedule due to more development, increased costs, increased user training for unique, leading-edge technology capabilities, and the potential risk that capabilities will not be used because they do not interoperate with legacy systems). SEs can highlight the various risk areas, present the pros and cons of alternative courses of mitigation actions (and their impacts), and help the decision makers determine the actual discriminators and residual impact of taking one action or another. In addition to data-driven technical assessments, success in getting top-level buy-in requires consideration of political, organizational/operational, and economic factors as seen through the eyes of the senior leadership. Refer to [6] for foundational information on the art of persuasion.

Get stakeholder trust. Gain the trust of stakeholders by clearly basing risk reduction or acceptance recommendations on getting mission capabilities to users.

Leverage your peers. Someone in your organization generally knows a lot about every risk management topic imaginable. This includes the technical, operational, and programmatic dimensions of risks and mitigations. Bringing the organization to bear is more than a slogan; it is a technique to use as risks are determined, particularly in system-of-systems and enterprise environments. In all likelihood, someone is already working other parts of these large problems.
 
Think horizontal. Emphasize cross-program or cross-organization participation in risk identification, assessment, and management. Cross-team coordination and communication can be particularly useful in risk management. All 'ilities' (e.g., information assurance, security, logistics, software) should be represented in the risk reviews. Communication of risk information helps illuminate risks that have impact across organizations and amplifies the benefits of mitigations that are shared.

Stay savvy in risk management processes and tools. Become the knowledgeable advisor in available risk management processes and tools. Many government organizations have program management offices that have defined risk management processes, templates, and tools. These should be used as a starting point to develop the specific approach and plan for an individual project or program. Make sure the government sponsors or customers have the current information about the risk management approach and plan required by their organizations, and assist them in complying with it. Assist the sponsors or customers in determining the minimum set of activities for their particular program that will produce an effective risk management approach and plan.

Risk Impact Assessment and Prioritization

Definition: Risk impact assessment is the process of assessing the probabilities and consequences of risk events if they are realized. The results of this assessment are then used to prioritize risks to establish a most-to-least-critical importance ranking. Ranking risks in terms of their criticality or importance provides insights to the project's management on where resources may be needed to manage or mitigate the realization of high probability/high consequence risk events.  Risk impact assessment and prioritization are the second and third steps of the process. 

Risk Impact Assessment in the Systems Engineering Program

In this step, the impact each risk event could have on the project is assessed. Typically this assessment considers how the event could impact cost, schedule, or technical performance objectives. Impacts are not limited to these criteria, however; political or economic consequences may also need to be considered. The probability (chance) each risk event will occur is also assessed. This often involves the use of subjective probability assessment techniques, particularly if circumstances preclude a direct evaluation of the probability by objective methods (i.e., engineering analysis, modeling, and simulation).

As part of the risk assessment, risk dependencies, interdependencies, and the timeframe of the potential impact (near-, mid-, or far-term) need to be identified. When assessing risk, it is important to match the assessment impact to the decision framework. For program management, risks are typically assessed against cost, schedule, and technical performance targets. Some programs may also include oversight and compliance, or political impacts. 
Garvey provides an extensive set of rating scales for making these multicriteria assessments, as well as ways to combine them into an overall measure of impact or consequence. These scales provide a consistent basis for determining risk impact levels across cost, schedule, performance, and other criteria considered important to the project. In addition, the Risk Matrix tool can help evaluate these risks to particular programs.
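As a minimal sketch of how a probability rating and an impact rating might be combined into a qualitative risk level, the function below assumes simple 1-5 scales and illustrative thresholds; it is not a reproduction of the assessment scales referenced in the tables below.

```python
def risk_level(probability: int, impact: int) -> str:
    """Map 1-5 probability and impact ratings to a qualitative risk level.

    The 1-5 scales and thresholds are assumptions for illustration only,
    not the program or investment assessment scales shown in the tables.
    """
    score = probability * impact  # simple multiplicative exposure
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a likely event (4) with moderate consequence (3) rates "medium".
print(risk_level(4, 3))
```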
 
Performing SWOT (Strengths, Weaknesses, Opportunities, and Threats) assessments can help determine the drivers of the risks. For some programs or projects, the impacts of risk on enterprise or organizational goals and objectives are more meaningful to the managing organization. Risks are assessed against the potential negative impact on enterprise goals. Using risk management tools for the enterprise and its components can help with the consistency of risk determination.
 
This consistency is similar to the scale example shown below, except that the assessment would be done at the enterprise level. Depending on the criticality of a component to enterprise success (e.g., the risk of using commercial communications to support a military operation and the impact on the enterprise's mission success, versus the risk of using commercial communications for peacetime transportation of military equipment), the risks may be viewed differently at the enterprise level even when the solution sets are the same or similar.
One way management plans for engineering an enterprise is to create capability portfolios of technology programs and initiatives that, when synchronized, will deliver time-phased capabilities that advance enterprise goals and mission outcomes. A capability portfolio is a time-dynamic organizing construct to deliver capabilities across specified epochs; a capability can be defined as the ability to achieve an effect to a standard under specified conditions using multiple combinations of means and ways to perform a set of tasks [2]. 
 
With the introduction of capability management, defining the impact of risk on functional or capability objectives may provide valuable insights into what capability is at risk, and which risks could potentially significantly impact the ability to achieve a capability and/or impact multiple capability areas.
 
In portfolio management, a set of investments is administered based on an overall goal(s), timing, tolerance for risk, cost/price interdependencies, a budget, and changes in the relevant environment over time. These factors are generally applicable to the government acquisition environment. Scales are determined for each risk area, and each alternative is assessed against all categories. Risk assessment may also include operational consideration of threat and vulnerability. For cost-risk analysis, the determination of uncertainty bounds is the risk assessment.
When determining the appropriate risk assessment approach, it is important to consider the information need. Shown below are Probability of Occurrence, Program Risk Management Assessment Scale, and Investment Risk Assessment Scale examples used in SE work with government sponsors or clients.

Risk Probability of Occurrence Example

Table 1. Program Risk Management Assessment Scale Example  
Table 2. Example of the Investment Risk Assessment Scale

Risk Prioritization in the Systems Engineering Program

In the risk prioritization step, the overall set of identified risk events, their impact assessments, and their probabilities of occurrences are "processed" to derive a most-to-least-critical rank-order of identified risks. A major purpose of prioritizing risks is to form a basis for allocating resources. Multiple qualitative and quantitative techniques have been developed for risk impact assessment and prioritization. 
 
Qualitative techniques include analysis of probability and impact, developing a probability and impact matrix, risk categorization, risk frequency ranking (risks with multiple impacts), and risk urgency assessment. Quantitative techniques include weighting of cardinal risk assessments of consequence, probability, and timeframe; probability distributions; sensitivity analysis; expected monetary value analysis; and modeling and simulation. Expert judgment is involved in all of these techniques to identify potential impacts, define inputs, and interpret the data.
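As a sketch of the quantitative side, the example below ranks risks by a weighted combination of consequence, probability, and timeframe; the weights, the 0-1 scales, and the sample risks are assumptions invented for this illustration, not a prescribed scheme.

```python
from typing import NamedTuple

class Risk(NamedTuple):
    name: str
    probability: float  # 0.0-1.0, assessed chance of occurrence
    consequence: float  # 0.0-1.0, normalized impact if the risk is realized
    timeframe: float    # 0.0-1.0, urgency (1.0 = near-term)

# Illustrative weights; a real program would calibrate these to its decision framework.
WEIGHTS = {"probability": 0.4, "consequence": 0.4, "timeframe": 0.2}

def score(risk: Risk) -> float:
    return (WEIGHTS["probability"] * risk.probability
            + WEIGHTS["consequence"] * risk.consequence
            + WEIGHTS["timeframe"] * risk.timeframe)

risks = [
    Risk("Vendor accreditation delay", 0.6, 0.7, 0.9),
    Risk("Key interface specification unstable", 0.4, 0.9, 0.5),
    Risk("Test range availability", 0.2, 0.3, 0.8),
]

# Most-to-least-critical ranking, e.g., to inform resource allocation discussions.
for r in sorted(risks, key=score, reverse=True):
    print(f"{r.name}: {score(r):.2f}")
```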

Best Practices and Lessons Learned

Tailor the assessment criteria to the decision or project. When assessing risks, recommend techniques and tools that are suitable for the analysis. For example, if the project is an enterprise management or organizational oversight project, then risk impact might be most suitably assessed against goals in lieu of technical performance, cost, and schedule. If the assessment is to determine the risk of investment options, the risk area scale approach might be best suited. 
 
Document the rationale for the assessment of impact and probability. It is important to document the justification or rationale for each risk impact assessment and probability of occurrence rating. If the conditions or environment change, the assessment might need to be revisited. The rationale helps to communicate the significance of the risk. When using the investment assessment scale approach, the statement of risk is typically captured in the rationale.

Recognize the role of systems engineering. Risk assessment and management are roles of systems engineering, especially as projects and programs become more complex and interdependent. The judgments involved require a breadth of knowledge of system characteristics and the constituent technologies beyond that of design specialists. In addition, the judgments of risk criticality are made at the system and program levels. Risk cuts across the life cycle of systems engineering, and SEs should be prepared to address risk throughout: concept and requirements satisfaction, architecture-level risks, design and development risks, training risks, fielding risks, and environment risks.
Tailor the prioritization approach to the decision or project. Match the prioritizing algorithm, techniques, and tools to the assessment need (e.g., needs could include time criticality as a prioritization factor, the ability to see capability at risk, the need for a single risk score for the portfolio, the ability to have insight into risks with multiple impacts, and more). Each risk area (threat, operations, programmatic, etc.) will have different priorities. Typically, there will be a priority ordering among these areas themselves: a major threat risk could be totally unacceptable and the effort may be abandoned.
 
If the threat risks are acceptable but the operations cannot be effectively performed, then, again, the effort may be abandoned. Be sure to consider these various decisions and their criticality to help the government assess the priorities for mitigating the risks that arise.

Consider Monte Carlo simulations. Monte Carlo simulations use probability distributions to assess the likelihood of achieving particular outcomes, such as cost or completion date.
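A minimal Monte Carlo sketch of this kind is shown below, estimating the chance of finishing a three-task serial schedule by a target date; the triangular duration estimates, the target, and the trial count are invented for illustration.

```python
import random

# Hypothetical (low, most likely, high) duration estimates in days for three serial tasks.
TASKS = [(20, 25, 40), (10, 15, 30), (30, 35, 60)]
TARGET_DAYS = 95
TRIALS = 10_000

def simulate_once() -> float:
    # Sample each task duration from a triangular distribution and sum the serial path.
    return sum(random.triangular(low, high, mode) for low, mode, high in TASKS)

hits = sum(simulate_once() <= TARGET_DAYS for _ in range(TRIALS))
print(f"Estimated probability of completing within {TARGET_DAYS} days: {hits / TRIALS:.1%}")
```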

Risk Mitigation Planning, Implementation, and Progress Monitoring

Definition: Risk mitigation planning is the process of developing options and actions to enhance opportunities and reduce threats to project objectives. Risk mitigation implementation is the process of executing risk mitigation actions. Risk mitigation progress monitoring includes tracking identified risks, identifying new risks, and evaluating risk process effectiveness throughout the project.
Risk mitigation planning, implementation, and progress monitoring. As part of an iterative process, the risk-tracking tool is used to record the results of risk prioritization analysis (step 3) that provides input to both risk mitigation (step 4) and risk impact assessment (step 2).   The risk mitigation step involves development of mitigation plans designed to manage, eliminate, or reduce risk to an acceptable level. Once a plan is implemented, it is continually monitored to assess its efficacy with the intent of revising the course-of-action if needed.

Risk Mitigation Strategies
General guidelines for applying risk mitigation handling options are shown in Figure 2. These options are based on the assessed combination of the probability of occurrence and severity of the consequence for an identified risk. These guidelines are appropriate for many, but not all, projects and programs.

Risk mitigation handling options include:  
Assume/Accept: Acknowledge the existence of a particular risk, and make a deliberate decision to accept it without engaging in special efforts to control it. Approval of project or program leaders is required.
Avoid: Adjust program requirements or constraints to eliminate or reduce the risk. This adjustment could be accommodated by a change in funding, schedule, or technical requirements.
Control: Implement actions to minimize the impact or likelihood of the risk.
Transfer: Reassign organizational accountability, responsibility, and authority to another stakeholder willing to accept the risk.
Watch/Monitor: Monitor the environment for changes that affect the nature and/or the impact of the risk.

Each of these options requires developing a plan that is implemented and monitored for effectiveness. More information on handling options is discussed under best practices and lessons learned below. 
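The five handling options above can be represented as a simple enumeration; the selection sketch below uses hypothetical probability/consequence thresholds for illustration and is not a reproduction of the Figure 2 guidelines.

```python
from enum import Enum

class Handling(Enum):
    ASSUME_ACCEPT = "assume/accept"
    AVOID = "avoid"
    CONTROL = "control"
    TRANSFER = "transfer"
    WATCH_MONITOR = "watch/monitor"

def suggest_handling(probability: int, consequence: int) -> Handling:
    """Suggest a starting handling option from 1-5 ratings.

    The thresholds here are assumptions for illustration; actual guidance depends
    on the program, and the transfer option depends on organizational factors
    rather than the ratings alone.
    """
    if probability <= 2 and consequence <= 2:
        return Handling.ASSUME_ACCEPT
    if probability >= 4 and consequence >= 4:
        return Handling.AVOID
    if consequence >= 3:
        return Handling.CONTROL
    return Handling.WATCH_MONITOR

print(suggest_handling(4, 5).value)  # -> "avoid"
```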
 
From a systems engineering perspective, common methods of risk reduction or mitigation with identified program risks include the following, listed in order of increasing seriousness of the risk [4]:
Intensified technical and management reviews of the engineering process  
Special oversight of designated component engineering  
Special analysis and testing of critical design items  
Rapid prototyping and test feedback  
Consideration of relieving critical design requirements  
Initiation of fallback parallel developments

Best Practices and Lessons Learned

Handling Options

Assume/Accept. Collaborate with the operational users to create a collective understanding of risks and their implications. Risks can be characterized as impacting traditional cost, schedule, and performance parameters. Risks should also be characterized in terms of impact to mission performance resulting from reduced technical performance or capability. Develop an understanding of all these impacts.
 
Bringing users into the mission impact characterization is particularly important to whether the "assume/accept" option is ultimately chosen. Users will decide whether accepting the consequences of a risk is acceptable. Provide the users with the vulnerabilities affecting a risk, countermeasures that can be performed, and the residual risk that may occur. Help the users understand the costs in terms of time and money.

Avoid. Again, work with users to achieve a collective understanding of the implications of risks. Provide users with projections of schedule adjustments needed to reduce risk associated with technology maturity or additional development to improve performance. Identify capabilities that will be delayed and any impacts resulting from dependencies on other efforts. This information better enables users to interpret the operational implications of an "avoid" option.
Control. Help control risks by performing analyses of various mitigation options. For example, one option is to use a commercially available capability instead of a contractor developed one. In developing options for controlling risk in your program, seek out potential solutions from similar risk situations of other customers, industry, and academia. When considering a solution from another organization, take special care in assessing any architectural changes needed and their implications.

Transfer. Reassigning accountability, responsibility, or authority for a risk area to another organization can be a double-edged sword. It may make sense when the risk involves a narrow specialized area of expertise not normally found in program offices. But, transferring a risk to another organization can result in dependencies and loss of control that may have their own complications. Position yourself and your customer to consider a transfer option by acquiring and maintaining awareness of organizations within your customer space that focus on specialized needs and their solutions. Acquire this awareness as early in the program acquisition cycle as possible, when transfer options are more easily implemented.

Watch/Monitor. Once a risk has been identified and a plan put in place to manage it, there can be a tendency to adopt a "heads down" attitude, particularly if the execution of the mitigation appears to be operating on "cruise control." Resist that inclination. Periodically revisit the basic assumptions and premises of the risk. Scan the environment to see whether the situation has changed in a way that affects the nature or impact of the risk. The risk may have changed sufficiently so that the current mitigation is ineffective and needs to be scrapped in favor of a different one. On the other hand, the risk may have diminished in a way that allows resources devoted to it to be redirected.
Determining Mitigation Plans
Understand the users and their needs. The users/operational decision makers will be the decision authority for accepting and avoiding risks. Maintain a close relationship with the user community throughout the system-engineering life cycle. Realize that mission accomplishment is paramount to the user community and acceptance of residual risk should be firmly rooted in a mission decision.  
 
Seek out the experts and use them. Seek out the experts within and outside your organization. Technical centers exist to provide support in their specialty areas. They understand what's feasible, what's worked and been implemented, what's easy, and what's hard. They have the knowledge and experience essential to risk assessment in their areas of expertise. Know your internal centers of excellence, cultivate relationships with them, and know when and how to use them.

Recognize risks that recur. Identify and maintain awareness of the risks that are "always there" — interfaces, dependencies, changes in needs, environment and requirements, information security, and gaps or holes in contractor and program office skill sets. Help create an acceptance by the government that these risks will occur and recur and that plans for mitigation are needed up front. Recommend various mitigation approaches — including adoption of an evolution strategy, prototyping, experimentation, engagement with broader stakeholder community, and the like.
Encourage risk taking. Given all that has been said in this article and its companions, this may appear to be an odd piece of advice. The point is that there are consequences of not taking risks, some of which may be negative. Help the customer and users understand that reality and the potential consequences of being overly timid and not taking certain risks in your program. An example of a negative consequence for not taking a risk when delivering a full capability is that an adversary might realize a gain against our operational users. Risks are not defeats, but simply bumps in the road that need to be anticipated and dealt with.

Recognize opportunities. Help the government understand and see opportunities that may arise from a risk. When considering alternatives for managing a particular risk, be sure to assess whether they provide an opportunistic advantage by improving performance, capacity, flexibility, or desirable attributes in other areas not directly associated with the risk.

Encourage deliberate consideration of mitigation options. This piece of advice is good anytime, but particularly when supporting a fast-paced, quick-reaction government program that is juggling many competing priorities. Carefully analyze mitigation options and encourage thorough discussion by the program team. This follows the wisdom "go slow to go fast."
Not all risks require mitigation plans. Risk events assessed as medium or high criticality should go into risk mitigation planning and implementation. On the other hand, consider whether some low criticality risks might just be tracked and monitored on a watch list. Husband your risk-related resources.
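A small sketch of that triage rule, assuming each risk has already been rated low, medium, or high; the sample risks are invented for illustration.

```python
# Hypothetical risks with already-assessed criticality ratings.
risks = {
    "Key interface specification unstable": "high",
    "Test range availability": "low",
    "Vendor accreditation delay": "medium",
}

# Medium and high criticality risks get mitigation plans; low ones go to a watch list.
mitigation_queue = [name for name, level in risks.items() if level in ("medium", "high")]
watch_list = [name for name, level in risks.items() if level == "low"]

print("Plan mitigation for:", mitigation_queue)
print("Watch list:", watch_list)
```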

Mitigation Plan Content
Determine the appropriate risk manager. The risk manager is responsible for identifying and implementing the risk mitigation plan. He or she must have the knowledge, authority, and resources to implement the plan. Risk mitigation activities will not be effective without an engaged risk manager. It may be necessary to engage higher levels in the customer organization to ensure the need for the risk manager is addressed. 
 
This can be difficult and usually involves engaging more senior levels of the team as well.

Develop a high-level mitigation strategy. This is an overall approach to reduce the risk impact severity and/or probability of occurrence. It could affect a number of risks and include, for example, increasing staffing or reducing scope.
Identify actions and steps needed to implement the mitigation strategy. Ask these key questions:
What actions are needed?  
Make sure you have the right exit criteria for each. For example, appropriate decisions, agreements, and actions resulting from a meeting would be required for exit, not merely the fact that the meeting was held. Look for evaluation, proof, and validation of met criteria. Consider, for example, metrics or test events. Include only and all stakeholders relevant to the step, action, or decisions.

When must actions be completed? (See the date sketch after this list.)
Backward planning: evaluate the risk impact and the schedule of need for the successful completion of the program, and evaluate test events, design considerations, and more.
Forward planning: determine the time needed to complete each action step and when the expected completion date should be.
Evaluate key decision points and determine when a move to a contingency plan should be taken.

Who is the responsible action owner?

What resources are required? Consider, for example, additional funding or collaboration.

How will this action reduce the probability or severity of impact?

Develop a contingency plan ("fall back, plan B") for any high risk. Are cues and triggers identified to activate contingency plans and risk reviews? Include decision point dates to move to fallback plans. The date to move must allow time to execute the contingency plan. Evaluate the status of each action. Determine when each action is expected to be completed successfully. Integrate plans into the IMS and program management baselines. Risk plans are integral to the program, not something apart from it.
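As a sketch of the backward-planning step, the example below works back from a hypothetical need date through assumed action-step durations to find the latest start date, and back from the contingency plan's assumed duration to find the decision point for moving to the fallback; all dates and durations are invented for illustration.

```python
from datetime import date, timedelta

need_date = date(2025, 9, 1)      # hypothetical date the capability is needed
step_durations = [21, 14, 30]     # assumed durations of the mitigation action steps, in days
contingency_duration = 45         # assumed days needed to execute the fallback plan

# Backward planning: latest date the mitigation actions can start and still finish in time.
latest_start = need_date - timedelta(days=sum(step_durations))

# Decision point: last date a switch to the contingency plan still leaves time to execute it.
decision_point = need_date - timedelta(days=contingency_duration)

print("Latest start for mitigation actions:", latest_start)
print("Decision point for moving to the contingency plan:", decision_point)
```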
Monitoring Risk
Include risk monitoring as part of the program review and manage continuously. Monitoring risks should be a standard part of program reviews. At the same time, risks should be managed continuously rather than just before a program review. Routinely review plans in management meetings.  Review and track risk mitigation actions for progress. Determine when each action is expected to be completed successfully.  
 
Refine and redefine strategies and action steps as needed.  Revisit risk analysis as plans and actions are successfully completed. Are the risks burning down? Evaluate impact to program critical path.  Routinely reassess the program's risk exposure. Evaluate the current environment for new risks or modification to existing risks.

The Author: Ala'a Elbeheri
About:
A versatile and highly accomplished senior certified IT risk management Advisor and Senior IT Lead Auditor with over 20 years of progressive experience in all domains of ICT.  
• Program and portfolio management, complex project management, service delivery, and client relationship management.
• Capable of providing invaluable information while making key strategic decisions and spearheading customer-centric projects in IT/ICT in diverse sectors.    
• Displays strong business and commercial acumen and delivers cost-effective solutions contributing to financial and operational business growth in international working environments.      
• Fluent in oral and written English, German, and Arabic, with a professional knowledge of French.
• Energetic and dynamic; relishes challenges and demonstrates in-depth analytical and strategic ability to facilitate operational and procedural planning.
• Fully conversant with industry standards, with a consistent track record in delivering cost-effective strategic solutions.    
• Strong people skills, with proven ability to build successful, cohesive teams and interact well with individuals across all levels of the business. Committed to promoting the ongoing development of IT skills throughout an organization.
