Emergency Management Blogs

Managing Crisis

by Lucien G. Canton: Melding theory and practice


April 11, 2014

In an article titled “Emergency Managers as Community Change Agents: An expanded vision of the profession” in the January/February edition of the Journal of Emergency Management, Professor Thomas Drabek argues for changing our view of the role of emergency managers. Our profession has undergone significant change in the past decade, and some of those changes raise concerns. Drabek identifies four:

  1. The extensive focus on terrorism following September 11th
  2. The greater centralization of authority at the federal level
  3. The increased role of the military
  4. Assaults on civil liberties

To counteract these trends, emergency managers need to rethink their role. I’ve argued elsewhere that we need to shift our thinking from a purely operational focus to a more strategic view. Drabek reinforces this need in a most elegant fashion.

The first recommendation that Drabek makes is that we expand our thinking on disasters to view them as “non-routine social problems” and shift our focus to the root causes of disasters. In other words, rather than solely focusing on the impacts of disasters, we must address the social vulnerability that is the true cause of the suffering in disasters. Addressing social issues can reduce vulnerability and increase resilience. Drabek puts it this way: “It is a holistic vision that reflects a focus on the local community as the key unit within the intergovernmental system.”

Drabek advocates rethinking how we deal with threats such as terrorism. He suggests shifting our focus away from potential threats and instead asking the question, “What is the fundamental cause of our insecurity?” He makes the case that much of what we fear is, in fact, perceived rather than actual risk. By asking the question, we can reduce unnecessary fear in the public.

One of the areas Drabek believes deserves more attention from emergency managers is business continuity. If we are to take a community approach, businesses must recognize that they are part of the community and become advocates for community preparedness. This ties in well with another area of concern, the mitigation of and adaptation to climate change effects. Dealing with the issues raised by mitigation will require widespread community support, and emergency managers will be called upon to raise awareness and make informed decisions.

If we are indeed to embrace this new vision of the emergency manager as change agent, Drabek believes that emergency managers will require two major attributes: trust and credibility. To achieve this, he recommends strategies such as challenging the status quo, being open to learning, creating a compelling vision, and building competent teams.



April 04, 2014

In response to last week’s article on scenario-based planning, my colleague Mike Selves reminded us that as emergency managers we deal with impacts. This is a fundamental concept that we sometimes tend to overlook. But how do we determine those impacts, and how can we be sure we’ve got it right? Here’s where true scenario-based planning can provide a valuable tool.

Last week I tried to make the case that what we generally call scenario-based planning really pertains more to operational planning and not, as was originally intended, strategic planning. It’s yet another example of how we in emergency management have not clearly defined our terms and argue over names rather than concepts.

So how do we make use of scenario-based planning as a strategic planning tool? A basic concept in emergency management is the three-step risk assessment process: hazard identification, hazard analysis, and impact analysis. Scenario-based planning helps bridge the gap between hazard and impact analysis.

Our normal hazard analysis process calls for ranking hazards by measuring overall impact on the community, such as potential damage, effects on infrastructure, casualties, etc. This helps us narrow the field of hazards and focus on those with the greatest potential impact. However, we tend to work here at a macro level of overall impact that doesn’t necessarily provide the detailed information we need in emergency planning. Consequently, our impact analyses are sometimes nothing more than guesswork.
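
As a rough illustration of that macro-view (every hazard, measure, and score below is a hypothetical placeholder, not real assessment data), the ranking step often amounts to little more than a composite score:

```python
# A minimal sketch of macro-level hazard ranking. Every hazard,
# measure, and score here is a hypothetical placeholder.
hazards = {
    "earthquake": {"damage": 5, "infrastructure": 5, "casualties": 4},
    "flood": {"damage": 4, "infrastructure": 3, "casualties": 2},
    "hazmat release": {"damage": 2, "infrastructure": 2, "casualties": 3},
}

# Composite score: a simple sum of the 1-5 impact measures.
ranked = sorted(hazards.items(), key=lambda kv: sum(kv[1].values()), reverse=True)

for name, scores in ranked:
    print(f"{name}: composite impact score {sum(scores.values())}")
```

A single composite number can tell us that one hazard outranks another, but nothing about how many roads close or shelters open; that gap is where the guesswork creeps in.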

This is where strategic scenario-based planning comes in. We can take those high impact hazards and construct realistic scenarios around them to identify the micro-level impacts that could occur. To be effective, the scenarios must:

  • Be based on the best available research and predictions
  • Consider historical events that have occurred in the community or similar communities
  • Incorporate community vulnerability
  • Be developed with input from subject matter experts in a variety of disciplines

This is not a new idea, by the way. The USGS published an earthquake scenario for Southern California to accompany the first Great ShakeOut event in 2008 and produced the ARkStorm scenario in 2011 based on historical flooding in California’s Central Valley. Both have been extremely useful to emergency planners. San Francisco Bay Area planners have been using the detailed information developed by the Association of Bay Area Governments for years.

Why go to all this trouble? It’s simple. When you’re planning, there’s a big difference between “There will be damage to the roads” and “We need to plan for over 30 road closures, including our two major resupply routes.” Or how about “We can expect multiple fires” versus “We need to be able to cope with a minimum of 7 major fires and over 30 smaller ones”? The difference should be obvious.


March 25, 2014

One of the terms I hear frequently is “scenario-based planning,” but like much of emergency management terminology it seems to mean different things to each user. This is of concern because the use of scenarios in emergency planning can be a useful tool, while true scenario-based planning can create problems if not used carefully.

Scenario-based planning was originally developed in the 1960s as a military intelligence tool and was later picked up by the business community with varying degrees of success. The technique takes elements of a problem that are fairly certain and then couples these known quantities with variables to create scenarios of what might occur. These scenarios are then used to identify impacts and potential strategies.
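
Mechanically, the technique can be sketched in a few lines: hold the reasonably certain elements fixed and enumerate combinations of the uncertain ones. Everything named below (the knowns, the variables, their values) is invented for illustration:

```python
from itertools import product

# Sketch of the technique: fixed "knowns" coupled with every
# combination of uncertain variables. All values are hypothetical.
knowns = {"location": "river valley", "population": 50_000}

variables = {
    "time_of_day": ["day", "night"],
    "river_stage": ["minor flood", "major flood"],
    "warning_time_hours": [2, 12],
}

# Each combination of variable values yields one scenario whose
# impacts and candidate strategies can then be analyzed.
scenarios = [
    dict(knowns, **dict(zip(variables, combo)))
    for combo in product(*variables.values())
]

print(f"{len(scenarios)} scenarios generated")  # 2 x 2 x 2 = 8
```

The output, of course, is only as good as the variable list, which is exactly the weakness discussed next.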

The problem here is obvious: if you get the variables wrong, your scenario and subsequent strategies are wrong. Further, as researchers Karl Weick and Kathleen Sutcliffe wrote in Managing the Unexpected: Assuring high performance in an age of complexity, over-reliance on scenarios may create unrealistic expectations; we tend to see what we expect to see (the scenario) rather than what is actually occurring.

But are we really using scenario-based planning? I would submit that we really aren’t. Using a scenario to test an emergency plan in an exercise is not scenario-based planning. Neither is basing your emergency operations plan on your worst-case scenario; you are really assessing your resource gaps against a realistic scenario and fully intend to use that plan for other purposes as well. While the Department of Homeland Security made many pronouncements at the time about its use of scenario-based planning, even the National Planning Scenarios issued in 2008 really weren’t scenario-based planning. What DHS was really doing was establishing a common framework to measure capabilities across the country. There was no real assessment or analysis of the validity of the scenarios.

The one area where we come closest to scenario-based planning is contingency planning. Contingency plans are normally developed when we know a lot about the potential disaster. For example, when planning for evacuation of a dam inundation zone, we know quite a bit about the area that will be affected, the population involved, and the resources available. We can easily identify the variables and develop plans based on changes to those variables. We can make very specific plans because we can clearly identify the agent-generated needs.

True scenario-based planning is a strategic concept. It looks at what we know and then considers how that changes as we manipulate variables. It’s not the scenario that’s important but the process used to develop it and the analysis that follows. It’s a process that can be useful for developing long range strategies and helping to identify planning assumptions. It is a useful tool that certainly has a role in emergency management planning. But don’t confuse using a scenario with scenario-based planning.


March 10, 2014

In a January article in our sister publication, Governing Magazine, editor Steve Towns identified three technology policy trends for 2014 that have a direct impact on emergency management: the use of data analytics, the increase in civic innovation, and government procurement policies.

Over the past decade or so we’ve seen a change from the type of data that fit neatly into forms and databases to what we now call “big data” – unstructured data from many sources such as video, email, social media, etc. Unfortunately, while government does fairly well in collecting massive amounts of data, we often find it difficult to make use of it. This is certainly the case in disasters where we see a marked increase in data, particularly from social media and internet sources. We’ve made some progress in developing analytical tools, but there’s still much to be done.

While government has been floundering, open access to government data has meant an increase in civic innovation. With limited budgets, governments are finding that partnerships with technology companies can be extremely effective. The State of Delaware, for example, is spending $3 million to develop a partnership with universities and technology companies that will focus on innovation, particularly cybersecurity. Another example is the partnership developed between government and Google during the Southern California fires in 2007 that provided for mapping of the fire perimeter using Google Earth. That partnership has evolved to where fire data is now readily available on Google Earth through the US Forest Service website.

Unfortunately, government procurement policies have made using this civic innovation difficult at times. The cumbersome process discourages newcomers and entrepreneurs while favoring companies with previous contracts and experience gaming the system. There is usually a demand for “previous experience” and a tendency to avoid new and innovative approaches in favor of approaches that have been used in the past. This usually means that government lags behind the private sector when it comes to innovative approaches to using data analytics.

So how do we as emergency managers leverage these policy issues? The obvious answer is to develop partnerships of our own. One way is to look for skills among our volunteers and encourage them to get involved. Supporting intensive collaboration among programmers focused on a specific problem (known as hackathons or hack fests) is relatively cheap and extremely productive. Another is to consider using grant funds not on contracts but as seed money to support the development of long term partnerships that actually solve problems. This presupposes, of course, that we provide open access to data, something we’re not always comfortable with doing.

Big data is not the future – it’s already here. It’s up to us to find innovative approaches to using this valuable resource.


February 24, 2014

During a recent discussion on the Emergency Management Issues Facebook page, there seemed to be some confusion about why emergency management standards are produced by organizations that have no apparent connection with emergency management.

Technically, a standard is nothing more than a consensus document that has been developed by a standards development organization (SDO).  An SDO must in turn be accredited by an overseeing organization. In the United States, the overseeing organization is the American National Standards Institute (ANSI). ANSI does not itself set standards but instead accredits other organizations such as the American Society for Testing and Materials (ASTM) and the National Fire Protection Association (NFPA). Accreditation means that the standards-setting organization follows a structured process that ensures openness, balance, consensus and due process in the development of standards.

While ANSI oversees the process of developing standards, it does not manage that process. That is, ANSI does not decide what should or should not become a standard. This is left to the SDO. The SDO decides which standards to produce based on demand, either from its members or from outside parties. This has led to a number of standards from different organizations related to emergency management, such as NFPA 1600 Standard on Disaster/Emergency Management and Business Continuity or the ASTM EOC guidelines.

There are several things worth noting here. The first is that standards are voluntary. They do not, in themselves, have the force of law. However, once a standard is adopted by state and local jurisdictions, adherence becomes mandatory and organizations are bound to comply. An example is NFPA 70, the National Electrical Code. While the NEC is not itself a law, its use is mandated by state and local laws. However, even where a standard is voluntary, its use may be considered an industry best practice, either through common use or as the result of litigation. Hospital accreditation is voluntary, for example, but a hospital that fails to become accredited faces considerable problems, including loss of government funding.

The second thing worth noting is that a standard represents the consensus of experts on the subject of the standard. It is not the work of the SDO staff. Under the structured process required by the SDO, these experts develop a draft standard. This standard is reviewed by the SDO for form and then sent to the membership of the SDO for review and comment. If necessary, the draft is revised and undergoes another review and comment period. This continues until all the concerns of the reviewers are either addressed or found unpersuasive. The standard is then published by the SDO, which charges a fee to cover its costs.

So if we reexamine emergency management standards, we find that they are not the product of the SDO but are, in fact, written by our own peers. There are two things you can do to help the process. The first is to volunteer for one of the peer groups helping to develop standards. The second is to submit proposed revisions to standards. This is part of the ANSI process required of SDOs and your comments must be reviewed and considered by the peer group. So get involved – standards are only useful if they truly reflect the consensus of our profession.


February 13, 2014

While the ability to use vast amounts of unstructured data has greatly improved government’s situational awareness, it has paradoxically created a situation where there is so much information available that a clear picture of a crisis can sometimes be difficult to obtain. A common analogy likens this situation to trying to drink from a fire hose. This “information overload” can have a serious impact on decision making in a crisis, which depends heavily on the availability of relevant data. During a TED talk at the 2011 TED@AllianzGI conference, Professor John Payne of Duke University summed up the problem, saying, “In today’s world, I will argue, the scarce resource for decision making is not information, the scarce resource is attention.”

Payne suggested two ways of dealing with information overload.  One technique, which he calls “emotional fluency,” is focused on those who provide information to decision makers. Simply put, this means that information must be presented to decision makers in a form that allows them to quickly judge whether the information supports or argues against a course of action. Lacking this cognitive ease, the information will be ignored. The second technique focuses on the decision maker. Payne advocated the use of value-focused thinking in which the decision maker tries to gain clarity on their values and identifies objectives for making decisions before considering the information.

These two coping methods are strongly linked and highlight the importance of a close working relationship between a decision maker and the crisis management team. As Payne pointed out, “…attention is both driven from top down - what our values are, so our values will drive what we pay attention to. But the environment we’re in - what’s salient, what’s easy to process - will also drive our attention. And in doing that our attention will drive our choices, our preferences.” This has very clear implications for an Incident Management Team’s Plans Section. It suggests that, in addition to meeting operational requirements, information collection must be focused on the needs of local government officials, both in terms of the type of information gathered and in the way the information is collated and presented to them.


February 04, 2014

A number of years ago I was part of a team conducting interviews for the National Plan Review. A major focus for the review was catastrophic planning. One state director we interviewed was vehement in his belief that his state did not need to do any catastrophic planning because of its small population and limited risk. In response, we pointed to the map. Located just south of the state border was a major metropolitan area. “Where do you think they’re going in a catastrophic event?” we asked.

Emergency managers by nature focus inward on our communities and organizations. This is totally understandable and one of the things that makes us effective. However, it can also make us a bit insular.

With most of the country buried in record snow and suffering from winter storms, it’s hard to drum up any sympathy for California. We’re enjoying exceptionally fine weather at the moment. But the reason for that clear weather is that the state is experiencing its driest year on record and the Governor has just declared a drought emergency. The drought is expected to have a devastating effect on our agriculture and fisheries.

What’s the big deal? California produces a significant amount of the fruit and vegetables eaten across the country: over 95% of artichokes, broccoli, celery, garlic, kiwis, plums, and walnuts, for example. Pacific salmon accounts for one third of the salmon, including almost all the canned and frozen salmon, consumed in the US. We also produce something like 90% of the wine. A drought in California means higher food prices and a reduced selection across the US.

In Managing for Long-term Community Recovery in the Aftermath of Disaster, researchers Daniel Alesch, Lucy Arendt, and James Holly referred to these consequences as “ripple reverberations,” the effects of a disaster that create changes outside the disaster area, with consequences for both the affected community and the external environment. They give the examples of Kobe, which lost its preeminence as a port to other cities following the 1995 Hanshin earthquake, and the growth of Houston at the expense of Galveston following the 1900 hurricane. We could also add the effect of Katrina on oil production or the impact on the nuclear power industry of the 2011 Japanese earthquake and tsunami.

Now, I’m not suggesting that we need to routinely plan for these types of macro consequences. What I am saying is that we need to break out of our insularity and recognize that events miles away can have an impact on our communities and organizations. We should be sensitive to the potential for increased risk when an event occurs and factor it into our planning if necessary, or help raise the awareness of someone else with responsibility for mitigating that risk. If we don’t do it, who will?


January 28, 2014

Yesterday, January 28th, was the anniversary of the Challenger disaster in 1986. As you may recall, the space shuttle Challenger was destroyed 73 seconds after launch when its solid rocket booster failed. While the anniversary was largely unremarked by the media, I happened to be having dinner with a good friend who was on the naval dive recovery team at the time. After twenty-eight years, he is still haunted by the disaster.

What makes the Challenger disaster so poignant is that, as we later learned, it was both predictable and preventable. The explosion was traced to a failed O-ring that allowed hot gases to burn through the rocket’s casing and ignite the external fuel tank. The potential for this failure had been identified by the contractor who built the rockets as early as 1971.

The reason the design flaw that led to the disaster was not corrected is the typical one: profits placed over potential loss, and political pressure overcoming common sense, with both the contractor and NASA sharing blame. However, one of the key lessons from the disaster that is often overlooked is how the decision to launch was influenced by the information provided to decision makers.

In a 1993 article for the Consortium for Computing Sciences in Colleges titled NASA's Challenger and Decision Support Systems, Professor Craig Fisher noted that NASA’s managers worked with scattered documents totaling over 122,000 pages, forcing them to rely on information provided by contractors. On the night before the launch, the contractor’s engineers were asked to provide documentation on why they were recommending against the launch. In response they faxed NASA 13 handwritten slides so dense with engineering data that they failed to convince the decision makers that a launch was inadvisable.

Dr. Edward Tufte examined the 13 Challenger slides in considerable detail in his classic book Visual Explanations: Images and quantities, evidence and narrative. He demonstrated how the slides separated cause from effect, failed to establish credibility, and focused on the wrong problem. He then showed how the data could have been displayed using a data matrix and a scatterplot to make it immediately apparent that the launch was extremely risky. Tufte’s conclusion was that the engineers had reached the right conclusion and had the data to support it. However, they had failed to display it effectively.
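
As a rough sketch of the kind of display Tufte describes (the data points below are illustrative placeholders, not the actual flight record), a simple scatterplot makes the pattern hard to miss:

```python
import matplotlib.pyplot as plt

# Illustrative sketch of the display Tufte proposed: O-ring damage
# plotted against launch temperature. The data points are hypothetical
# placeholders, not the actual Challenger flight record.
launch_temps_f = [53, 57, 63, 66, 67, 70, 70, 72, 75, 78, 81]
damage_index = [11, 4, 2, 0, 0, 1, 0, 0, 2, 0, 0]

fig, ax = plt.subplots()
ax.scatter(launch_temps_f, damage_index)
ax.axvline(36, linestyle="--", color="red", label="launch-day temperature (36°F)")
ax.set_xlabel("Launch temperature (°F)")
ax.set_ylabel("O-ring damage index")
ax.legend()
plt.show()
```

Plotted this way, the clustering of damage at the cold end of all prior experience carries the argument at a glance, which is exactly what the 13 dense slides failed to do.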

The lesson here is that information has to be displayed in a way that is useful to decision makers. Merely providing large quantities of data is not sufficient. Even worse, if the data is presented in a way that obscures the true issues, it inevitably results in bad decisions. And this, as was the case with the Challenger disaster, can have tragic consequences.


January 22, 2014

Last week I made the argument for prioritizing risk and planning for the risks most likely to affect your community. My colleague, Dan Hahn, raised the question of how we deal with high consequence events that are low frequency and thus tend to be ignored. He suggested that emergency managers should be futurists and consider hazards with future consequences. So how do we commit to all-hazards planning while hampered by limited resources and a lack of interest on the part of our officials?

As my colleague RMP quite correctly pointed out, “risk” is a term that can be defined in many ways. This is the reason why almost every academic paper or book I have seen on risk has to define the term. For emergency managers, the generally accepted definition of risk is the product of frequency and impact of the hazard. We further define impact as the relationship between the severity of the hazard and the vulnerability of the community. Social vulnerability is one of the key concepts of modern emergency management and it suggests that impact (or consequence, if you prefer) varies based on vulnerability. The same event will not have the same impact on a community with limited resources as it does on one with greater resources. Once we understand these relationships, we have the means to analyze and compare the various hazards facing a community.

The standard risk management matrix separates risk into four categories (a short sketch after the list shows how hazards might be sorted among them):

  1. Low consequence/low frequency: risks that hardly ever happen and, if they do, have such minor impact that they present no real response problems
  2. Low consequence/high frequency: risks that have no real impact but occur so often that we deal with them routinely
  3. High consequence/high frequency: risks that have a high impact but are dealt with frequently enough that we have the experience to respond
  4. High consequence/low frequency: risks that have a high impact but do not occur frequently enough for us to gain experience in responding to them
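
As a minimal sketch of how these definitions combine (the hazards, numbers, and thresholds below are hypothetical placeholders, not real assessment data), risk scoring and matrix classification take only a few lines:

```python
# Sketch: risk as the product of frequency and impact, then sorted into
# the four matrix categories. All values and thresholds are hypothetical.
hazards = {
    # name: (estimated events per year, impact score 0-10)
    "winter storm": (2.0, 3),
    "tornado": (0.5, 7),
    "pipeline rupture": (0.05, 9),
    "nuisance flooding": (1.5, 2),
}

FREQ_CUTOFF = 0.2    # events per year dividing high from low frequency
IMPACT_CUTOFF = 5    # impact score dividing high from low consequence

for name, (frequency, impact) in hazards.items():
    risk = frequency * impact  # the working definition of risk
    consequence = "high" if impact >= IMPACT_CUTOFF else "low"
    freq_label = "high" if frequency >= FREQ_CUTOFF else "low"
    print(f"{name}: risk={risk:.2f} ({consequence} consequence/{freq_label} frequency)")
```

Note how the hypothetical pipeline rupture produces a modest risk product yet lands squarely in the high consequence/low frequency quadrant, which is precisely the planning problem discussed below.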

If we look closely at these categories, we note that much of our planning is operational and focused largely on high consequence/high frequency risks. This is entirely appropriate: you need to be able to deal with events that will have the most immediate impact on your community before tackling the future. Yet high consequence/low frequency events actually pose the greater threat precisely because we lack experience in dealing with them. They are also a hard sell to officials because they are low frequency events.

Here are four basic questions to ask when considering high consequence/low frequency events:

  1. What is the likelihood that the event will occur? Just because an event could happen doesn’t necessarily mean that it should receive priority in planning. You’re going to need to sell this to reluctant officials, so do your homework and have the facts to demonstrate that the risk is credible.
  2. If the event does occur, can I really do anything about it? Your local emergency plan and even a regional catastrophic plan have limits. Does the event you’re analyzing exceed those limits? If so, the problem may be one for the State and/or Federal government.
  3. What are the unique agent-generated demands of the event? What planning do I need to do beyond my basic all-hazards plan? Can planning for these demands improve my all-hazards plan?
  4. Who owns the issue? Some events can be planned for operationally, others require a broader, more strategic approach or are properly the domain of another agency. Sometimes you can only be a gadfly, providing information and raising issues.


January 12, 2014

In response to my recent article on climate change, my colleague, Rob Dale, raised a very important point. In his comment Rob said,

“I ‘know’ that in the next three years we will have some sort of tornado/wind damage event, probably a little hazmat incident, and maybe a mass casualty. My planning efforts (and money) should be spent there. In a world of unlimited resources, that can change, but I find it hard to believe many EMs would go to their public and say ‘We are going to cut back on the funds we spend for flood mitigation and dedicate them to heat wave adaptation for conditions that might hit us in 2070.’”

Rob’s point is simple: you plan for the event most likely to affect your community. We sometimes forget that as emergency managers our job is to prioritize risk. A hazard is not a risk. Just because something can happen doesn’t mean it will happen. Risk is dependent on the vulnerability of the community to the effects of a particular hazard.

This sounds like emergency management 101 but, unfortunately, we have created a system that is largely reactive. Terrorists attack and our priority becomes terrorism. Hurricanes strike and hurricane planning becomes the priority. Doing the “right” planning is rewarded with grant funds; true risk-based planning is often unrewarded.

Nevertheless, it’s our job to cut through the “disaster of the month” hype and keep our planning focused on true risk. When September 11th occurred, many members of the public were unaware that we had been working on Metropolitan Medical Task Forces for over two years following the Tokyo subway attack. But in San Francisco, we agreed to the program not because of concerns over terrorism but because the program helped us prepare for a large-scale hazardous materials release, a threat that we had identified but for which we had only a limited response capacity.

The problem is that because risk is relative, we need to define the community we are assessing. National risks do not always equate to local risks. For example, there is no doubt that terrorism is a risk we face as a nation. But the level of risk from terrorism varies from community to community. While it is possible for all communities to be attacked by terrorists, the probability for many communities is low. As Rob points out, planning for an event that has a high probability of occurrence is more important than planning for one with a low probability or long event horizon.

