What’s a Disaster? What’s a Crisis?

Write 1–2 single-spaced pages that address the following topic: Risk = the probability of a threat × vulnerability to that threat × the consequences (loss of value of the asset). However, risk is rarely isolated. Comment on the approach in (Griener, 2000) and on top-down versus bottom-up risk analysis.
Here is the lecture:
What’s a Disaster? What’s a Crisis?
Terminology is not completely standardized, so this will complicate your reading for this course. Lindell, Prater, and Perry define the terms hazard, emergency, and disaster at the beginning of Chapter 1. FEMA uses similar definitions (e.g., in the Comprehensive Preparedness Guide [CPG] 101), but the definitions are a bit different in ASIS International’s 2009 national standard and other publications. In the field of information technology, incidents are actions that violate or threaten to violate security policies, while disasters overwhelm an organization’s ability to handle the event and typically require moving essential functions to an alternate location.
A crisis generally poses a strategic threat that requires attention from and involvement of the highest levels of an organization. The impact may fall on the organization’s people, its mission, its financial stability, or its reputation and credibility. Like disaster, the term crisis typically implies a single, sudden, large event, but both terms have secondary meanings that denote a state of affairs, something that has duration. The onset may also be preceded by smaller events or incidents, by the growth of a condition, or by the compounding of inter-related failures. Using an analogy from medicine, think of small strokes preceding a large stroke, or of the buildup of plaque in arteries leading to a heart attack. These would be termed “slow-onset” crises.
Risks to Assets
The assets of an organization include people, intellectual property (both information/data and processes), facilities, equipment, and other physical components. ASIS identifies these assets at risk: “Human Resources and Intellectual Assets; Information/Data; Facilities and Premises; Ethics and Reputation; Transportation, Distribution, and Supply Chain; Environmental, Health, and Safety; Financial Assets; … Vendor/Outsourcing” (2008, p. 5). ASIS also identifies legal and regulatory risks to organizations.
For a community, assets encompass households, businesses (for-profit and not-for-profit), government agencies, infrastructure, and even special features, such as a waterfront. The assets need to be assigned a value, even if it is a ranking such as High, Moderate, and Low.
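To make a High/Moderate/Low ranking usable later in the analysis, the labels can be mapped onto ordinal numbers so that assets can be compared and sorted. The sketch below is only an illustration; the asset names and rankings are hypothetical, not drawn from the readings:

```python
# Map qualitative asset values to ordinal scores so rankings can be
# compared and sorted. Assets and rankings here are hypothetical.
VALUE_SCORES = {"High": 3, "Moderate": 2, "Low": 1}

community_assets = {
    "hospital": "High",
    "waterfront": "Moderate",
    "parking garage": "Low",
}

# Sort assets from most to least valuable using the numeric scores.
ranked = sorted(community_assets,
                key=lambda a: VALUE_SCORES[community_assets[a]],
                reverse=True)
print(ranked)  # ['hospital', 'waterfront', 'parking garage']
```

The point of the mapping is only to impose an ordering; the numbers 1–3 carry no units and should not be treated as measured values.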
The diagram in the attached file presents the concepts that make up risk to assets. Here, too, the terminology can vary. For example:
• Threat, hazard
• Vulnerability, exposure, weakness
• Countermeasure, safeguard, control
• Consequence, impact, harm
One term missing from the diagram is likelihood: the chance of the threat exploiting the vulnerability, which may be characterized in terms of probability or frequency. The following sections explore threats, vulnerabilities, controls, consequences, and likelihood.
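The multiplicative view of risk from the assignment prompt (likelihood × vulnerability × consequence) can be made concrete by scoring each factor on a simple ordinal scale and multiplying. This is a hypothetical sketch, not a formula from any of the cited sources; a real analysis would use probabilities and loss values rather than 1–5 scores:

```python
def risk_score(likelihood: int, vulnerability: int, consequence: int) -> int:
    """Multiply ordinal 1-5 scores for likelihood, vulnerability,
    and consequence into a single relative risk score (maximum 125)."""
    for factor in (likelihood, vulnerability, consequence):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be scored from 1 to 5")
    return likelihood * vulnerability * consequence

# Hypothetical flood scenario: moderately likely (3), facility
# highly exposed (4), severe consequences (5).
print(risk_score(3, 4, 5))  # 60
```

Because the scale is ordinal, the resulting score is useful only for ranking risks against one another, not as an absolute measure of expected loss.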
For other terms, see the Department of Homeland Security’s Risk Lexicon (2010), which defines nearly 125 terms associated with risk management and analysis from a homeland security perspective, from “absolute risk” to “vulnerability assessment.”
Threats are potential causes of harm to assets; they can be events, physical objects, or humans. Lindell et al. present various types of natural threats; their list could be expanded to include disease (such as pandemic flu) and infestations (such as insects, rodents, etc.). They also list technological threats, emphasizing hazardous materials. Other kinds of technological threats include structural and infrastructure failures; examples include bridges or dams, machinery, utilities, and hardware and software. The more complex a system, the more potential there is for problems; Chiles’ book Inviting disaster: Lessons from the edge of technology (2002) provides numerous examples. Many technological threats are exacerbated by human factors such as management and operational problems; the BP oil spill in the Gulf of Mexico in 2010 is a significant example. Perrow (2009) provides a more nuanced analysis, covering not only human factors (as in the 2009 Metrorail disaster) but also the distinction between “integrated” and “modular” designs of complex systems.
Human threats are another major category. The actions of human threat agents may be intentional or deliberate; accidental; or negligent. Intentional acts may be criminal (sabotage, espionage, terrorism, product tampering or counterfeiting, fraud) or not (e.g., boycotting the products or blockading the location of a business; hostile takeovers by another company). Accidental threats may come from errors or omissions. In a business context, accidental threats can occur because of poor procedures (they are incorrect, hard to understand, or non-existent), poor communication or bad information, poor training, or other reasons. Negligence implies that a certain standard of care or due diligence is expected. Negligence is, for example, a failure to comply with existing regulations, policies, standards, or procedures. Acts of management or of employees may be negligent, such as overriding a policy, mismanaging resources, or deception. Note that sometimes it is difficult to determine if an action is accidental or negligent. Indeed, it may not be clear at the start of some events — for example, fires — whether humans or other factors cause them.
The study of human threats also looks at whether the agent is internal to an organization (such as employees or citizens) or external. In business, the distinction between internal and external is not always exact, as there can be a continuum from inside to outside. All sorts of people may be allowed access to an organization, such as customers, contractors and vendors, third-party partners, or visitors. Ex-employees may also have access.
Insider threats may be harder to detect, and insiders may cause more significant damage. Think, for example, of FBI agent Robert Hanssen, who sold information to the Russians over the course of two decades. Insider threats and incidents for the banking and financial industry were the subject of a 2004 study conducted by the U.S. Secret Service’s National Threat Assessment Center and CERT/CC. The insiders typically had “minimal technical skills” but had authorization to use the systems; however, they misused or exceeded their authorization. The National Infrastructure Advisory Council’s 2008 report describes behavioral characteristics of insiders who may pose a threat, explains why organizations fail to deal with such persons, and makes recommendations.
Speed of onset is one characteristic of a threat or hazard. Other characteristics are the scope and the duration of the impact; how predictable the event is; what the primary impacts to assets are, and whether there are secondary impacts; and the state of preparation to respond and recover. Lindell et al. present a similar list in Chapter 6, while CPG 101 lists other factors (see pp. 4-8 through 4-9). These lists were designed for physical threats, but could be adapted for cyber threats or organizational crises such as fraud, sexual harassment, or other forms of malfeasance.
Vulnerabilities allow potential threats to cause actual events. In risk management terms, threats exploit vulnerabilities, which are weaknesses or susceptibilities to damage or disruption. Vulnerabilities may be understood as controls that are absent, weak, or flawed. (Controls are discussed in the next section.) For example,
• No fence = absent control
• Short fence = weak control
• Hole in fence or fence poorly constructed = flawed control
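The fence examples can be restated as a tiny data model: a vulnerability is simply a control in one of the deficient states. The enum and helper function below use hypothetical names chosen only to mirror the list above:

```python
from enum import Enum

class ControlState(Enum):
    ABSENT = "no fence"          # control does not exist
    WEAK = "short fence"         # control exists but is insufficient
    FLAWED = "hole in fence"     # control exists but is defective
    EFFECTIVE = "sound, full-height fence"

def leaves_vulnerability(state: ControlState) -> bool:
    """Per the definition above, an absent, weak, or flawed control
    is a vulnerability that a threat agent can exploit."""
    return state is not ControlState.EFFECTIVE

print(leaves_vulnerability(ControlState.WEAK))       # True
print(leaves_vulnerability(ControlState.EFFECTIVE))  # False
```

The model deliberately treats all three deficient states the same way; in practice, an analyst would also rate how severe each deficiency is.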
Some kinds of vulnerabilities are considered conditions, such as high population density for a community, the location of an organization, or the type of industry a business is in (e.g., airlines, shipping, etc.). Sometimes the distinction between threat and vulnerability becomes fuzzy around the concept of condition or circumstances. My preference is to treat circumstances and conditions as vulnerabilities that a threat agent can exploit.
Lindell et al. provide an in-depth discussion of vulnerabilities in Chapter 6, including what they call “vulnerability dynamics.” Vulnerabilities are categorized in different ways. Social vulnerabilities revolve around how well individuals, households, or other social units can deal with disasters. Physical vulnerabilities arise from design, the materials used, implementation or construction, as well as factors such as neglected or improper maintenance, all of which are influenced by costs. The same concerns apply to software in computers and other electronic devices. Geographic vulnerabilities play a role in natural disasters such as earthquakes, tornadoes, and hurricanes or, for business operations, political conditions or currency markets.
Complexity and interdependencies can obscure vulnerabilities and they can complicate efforts to respond effectively when an incident occurs, such as during the Three Mile Island nuclear reactor incident in 1979. Malfunctioning valves led to overheating of reactor coolant, causing a relief valve to open; the relief valve then did not close when it should have, and coolant drained out of the reactor. Operators, however, mistakenly believed that there was too much coolant, and drained out more coolant, leading to a meltdown of half of the reactor core (Chiles, 2002, p. 47). Perrow (2009) cites examples of airplane computers involved in accidents and near-accidents.
Controls to Mitigate (Reduce) Risk and Other Strategies
As mentioned above, one definition of vulnerability is a control that is absent, flawed or weak. The term control includes safeguards (proactive controls) and countermeasures (reactive controls). Controls mitigate risks by reducing a threat’s impacts or its likelihood, or both. Only rarely do they completely eliminate threats; that is a message that must be stated clearly (and often) to management.
One way to classify controls is whether they help to prevent (or protect or deter), detect, or respond to and recover from adverse events. Lindell et al. use the term hazard mitigation to refer to prevention controls. The levee system in New Orleans is a protection control, as are, for example, regulations about insider trading. Detection may take the form of monitoring. Examples include sensors for natural phenomena (earthquakes) or technology (pressure, temperature, traces of harmful materials, etc.). On the enterprise side, examples of monitoring might be collecting and analyzing reports of product failures or customer complaints, tracking of adherence to policies, etc. Other forms of detection include testing and quality control as well as auditing. The third area is controls for response or recovery. Here are some examples: Stockpiles of vaccines or medicines; enough trained emergency personnel; sufficient insurance or reserve funds to carry out recovery. The list is long and varied.
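The point that controls reduce a threat’s likelihood or its impact, but rarely eliminate the threat outright, can be sketched numerically. The function below is a hypothetical illustration, not a formula from the readings: each control contributes a fractional reduction to one factor, and some residual risk almost always remains.

```python
def residual_risk(likelihood: float, impact: float,
                  likelihood_reduction: float = 0.0,
                  impact_reduction: float = 0.0) -> float:
    """Inherent risk is likelihood * impact; controls scale each
    factor down by a fraction in [0, 1). A reduction of 1.0 would
    mean total elimination, which is rare in practice."""
    return (likelihood * (1.0 - likelihood_reduction)
            * impact * (1.0 - impact_reduction))

# Hypothetical: a levee (protection control) cuts flood impact in
# half but does nothing to the likelihood of the storm itself.
print(residual_risk(likelihood=0.2, impact=1_000_000,
                    impact_reduction=0.5))  # 100000.0
```

Framing controls this way also makes the earlier message easy to deliver to management: unless a reduction factor reaches 1.0, the residual risk is never zero.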
Consequences (Impacts)
Consequences of an adverse event are also referred to as its impacts. Some consequences are primary while others are secondary. For example, storms or flooding may have secondary consequences such as disruption of utilities or transportation.
Human impacts vary, depending on the type of crisis or disaster. Most people think first of death, injury, and illness when they discuss human impacts, but there may also be psychological and social impacts. Many kinds of disasters cause physical damage and destruction, with or without environmental consequences. The consequences can disrupt critical infrastructures such as energy, water, communications, or transportation.
A disruption can impact an organization’s processes, the outputs of the processes, or the resources used to create the outputs, such as people, financial condition, facilities, equipment, materials, and supplies. A disaster or crisis may slow or even disrupt cash flow to an organization (accounts receivable), or tourists may stay away, hurting the local economy. Responding to a crisis can also drain an organization’s or local government’s resources. There may be unplanned expenses to restore and rebuild, for example. A disaster or crisis may also result in increased insurance rates for an organization, or even in being unable to obtain insurance. Primary threats to organizations’ financial assets include fraud, theft, or extortion, or negligence such as making extremely risky investments.
Other non-physical consequences may be political or legal. A crisis may draw the attention of politicians, as happened to Veterans Affairs in summer 2006 because a stolen laptop contained personally identifiable information of 26 million veterans. Regulations may be changed or added to respond to a crisis. There can be legal reactions to crises, such as government investigation and prosecution; for example, Hewlett-Packard was investigated by the California attorney general for spying on members of its board of directors and reporters to uncover leaks. Another legal reaction is filing of lawsuits. Defending against lawsuits incurs legal costs and loss of a lawsuit may require a substantial payout to the plaintiffs, such as in tobacco liability cases.