During a recent discussion on the Emergency Management Issues Facebook page, there seemed to be some confusion about why emergency management standards are produced by organizations that have no apparent connection with emergency management.
Technically, a standard is nothing more than a consensus document that has been developed by a standards development organization (SDO). An SDO must in turn be accredited by an overseeing organization. In the United States, the overseeing organization is the American National Standards Institute (ANSI). ANSI does not itself set standards but instead accredits other organizations such as the American Society for Testing and Materials (ASTM) and the National Fire Protection Association (NFPA). Accreditation means that the standards-setting organization follows a structured process that ensures openness, balance, consensus and due process in the development of standards.
While ANSI oversees the process of developing standards, it does not manage that process. That is, ANSI does not decide what should or should not become a standard. This is left to the SDO. The SDO decides which standards to produce based on demand, either from its members or from outside parties. This has led to a number of standards from different organizations related to emergency management, such as NFPA 1600 Standard on Disaster/Emergency Management and Business Continuity or the ASTM EOC guidelines.
There are several things worth noting here. The first is that standards are voluntary. They do not, in themselves, have the force of law. However, once a standard is adopted by state and local jurisdictions, adherence becomes mandatory and organizations are bound to comply. An example is NFPA 70, the National Electrical Code. While the NEC is not itself a law, its use is mandated by state and local laws. However, even where a standard is voluntary, its use may be considered an industry best practice, either through common use or as the result of litigation. Hospital accreditation is voluntary, for example, but a hospital that fails to become accredited faces considerable problems, including loss of government funding.
The second thing worth noting is that a standard represents the consensus of experts on the subject of the standard. It is not the work of the SDO staff. Under the structured process required by the SDO, these experts develop a draft standard. This standard is reviewed by the SDO for form and then sent to the membership of the SDO for review and comment. If necessary, the draft is revised and undergoes another review and comment period. This continues until all the concerns of the reviewers are either addressed or found unpersuasive. The standard is then published by the SDO, which charges a fee to cover its costs.
So if we reexamine emergency management standards, we find that they are not the product of the SDO but are, in fact, written by our own peers. There are two things you can do to help the process. The first is to volunteer for one of the peer groups helping to develop standards. The second is to submit proposed revisions to standards. This is part of the ANSI process required of SDOs and your comments must be reviewed and considered by the peer group. So get involved – standards are only useful if they truly reflect the consensus of our profession.
While the ability to use vast amounts of unstructured data has greatly improved government’s situational awareness, it has paradoxically created a situation where there is so much information available that a clear picture of a crisis can sometimes be difficult to obtain. A common analogy is to liken this situation to trying to drink from a fire hose. This “information overload” can have a serious impact on decision making in a crisis, which depends heavily on the availability of relevant data. During a TED talk at the 2011 TED@AllianzGI conference, Professor John Payne of Duke University summed up the problem, saying, “In today’s world, I will argue, the scarce resource for decision making is not information, the scarce resource is attention.”
Payne suggested two ways of dealing with information overload. One technique, which he calls “emotional fluency,” is focused on those who provide information to decision makers. Simply put, this means that information must be presented to decision makers in a form that allows them to quickly judge whether the information supports or argues against a course of action. Lacking this cognitive ease, the information will be ignored. The second technique focuses on the decision maker. Payne advocated the use of value-focused thinking in which the decision maker tries to gain clarity on their values and identifies objectives for making decisions before considering the information.
These two coping methods are strongly linked and highlight the importance of a close working relationship between a decision maker and the crisis management team. As Payne pointed out “…attention is both driven from top down - what our values are, so our values will drive what we pay attention to. But the environment we’re in - what’s salient, what’s easy to process - will also drive our attention. And in doing that our attention will drive our choices, our preferences.” This has very clear implications for an Incident Management Team’s Plans Section. It suggests that, in addition to meeting operational requirements, information collection must be focused on the needs of local government officials, both in terms of the type of information gathered and in the way the information is collated and presented to them.
A number of years ago I was part of a team conducting interviews for the National Plan Review. A major focus for the review was catastrophic planning. One state director we interviewed was vehement in his belief that his state did not need to do any catastrophic planning because of its small population and limited risk. In response, we pointed to the map. Located just south of the state border was a major metropolitan area. “Where do you think they’re going in a catastrophic event?” we asked.
Emergency managers by nature focus inward on our communities and organizations. This is totally understandable and one of the things that makes us effective. However, it can also make us a bit insular.
With most of the country buried in record snow and suffering from winter storms, it’s hard to drum up any sympathy for California. We’re enjoying exceptionally fine weather at the moment. But the reason for that clear weather is that the state is experiencing its driest year on record and the Governor has just declared a drought emergency. The drought is expected to have a devastating effect on our agriculture and fisheries.
What’s the big deal? California produces a significant amount of the fruit and vegetables eaten across the country: over 95% of artichokes, broccoli, celery, garlic, kiwis, plums, and walnuts, for example. Pacific salmon accounts for one third of the salmon, including almost all the canned and frozen salmon, consumed in the US. We also produce something like 90% of the wine. A drought in California means higher food prices and a reduced selection of produce across the US.
In Managing for Long-term Community Recovery in the Aftermath of Disaster, researchers Daniel Alesch, Lucy Arendt, and James Holly referred to these consequences as “ripple reverberations,” the effects of a disaster that create changes outside the disaster area, with consequences for both the affected community and the external environment. They give the examples of Kobe, where the city lost its preeminence as a port to other cities following the 1995 Hanshin earthquake, and the growth of Houston at the expense of Galveston following the 1900 hurricane. We could also add the effect of Katrina on oil production or the impact on the nuclear power industry of the 2011 Japanese earthquake and tsunami.
Now, I’m not suggesting that we need to routinely plan for these types of macro consequences. What I am saying is we need to break out of our insularity and recognize that events miles away can have an impact on our communities and organizations. We should be sensitive to the potential for increased risk when an event occurs and factor it into our planning if necessary, or help raise the awareness of someone else with responsibility for mitigating that risk. If we don’t do it, who will?
Yesterday, January 28th, was the anniversary of the Challenger disaster in 1986. As you may recall, the space shuttle Challenger was destroyed 73 seconds after liftoff when a failure in one of its solid rocket boosters caused the vehicle to break apart. While the anniversary was largely unremarked by the media, I happened to be having dinner with a good friend who was on the naval dive recovery team at the time. After twenty-eight years, he is still haunted by the disaster.
What makes the Challenger disaster so poignant is that, as we later learned, it was both predictable and preventable. The explosion was traced to a failed O-ring that allowed hot gases to burn through the rocket’s casing and ignite the fuel tanks. The potential for this failure had been identified by the contractor who built the rockets as early as 1971.
The reason the design flaw that led to the disaster was not corrected is the typical one of placing profits over potential loss and of political pressure overcoming common sense, with both the contractor and NASA sharing blame. However, one of the key lessons from the disaster that is often overlooked is how the decision to launch was influenced by the information provided to decision makers.
In a 1993 article for the Consortium for Computing Sciences in Colleges titled NASA's Challenger and Decision Support Systems, Professor Craig Fisher noted that NASA’s managers worked with scattered documents totaling over 122,000 pages, forcing them to rely on information provided by contractors. On the night before the launch, the contractor’s engineers were asked to provide documentation on why they were recommending against the launch. In response they faxed NASA 13 handwritten slides so dense with engineering data that they failed to convince the decision makers that a launch was inadvisable.
Dr. Edward Tufte examined the 13 Challenger slides in considerable detail in his classic book Visual Explanations: Images and Quantities, Evidence and Narrative. He demonstrated how the slides separated cause from effect, failed to establish credibility, and focused on the wrong problem. He then showed how the data could have been displayed using a data matrix and a scatterplot to make it immediately apparent that the launch was extremely risky. Tufte’s conclusion was that the engineers had reached the right conclusion and had the data to support it. However, they had failed to display it effectively.
The lesson here is that information has to be displayed in a way that is useful to decision makers. Merely providing large quantities of data is not sufficient. Even worse, if the data is presented in a way that obscures the true issues, it inevitably results in bad decisions. And this, as was the case with the Challenger disaster, can have tragic consequences.
Last week I made the argument for prioritizing risk and planning for the risks most likely to affect your community. My colleague, Dan Hahn, raised the question of how we deal with high consequence events that are low frequency and thus tend to be ignored. He suggested that emergency managers should be futurists and consider hazards with future consequences. So how do we commit to all-hazards planning while hampered by limited resources and disinterest on the part of our officials?
As my colleague RMP quite correctly pointed out, “risk” is a term that can be defined in many ways. This is the reason why almost every academic paper or book I have seen on risk has to define the term. For emergency managers, the generally accepted definition of risk is the product of frequency and impact of the hazard. We further define impact as the relationship between the severity of the hazard and the vulnerability of the community. Social vulnerability is one of the key concepts of modern emergency management and it suggests that impact (or consequence, if you prefer) varies based on vulnerability. The same event will not have the same impact on a community with limited resources as it does on one with greater resources. Once we understand these relationships, we have the means to analyze and compare the various hazards facing a community.
The standard risk management matrix separates risk into four categories:
- Low consequence/low frequency: risks that hardly ever happen and, if they do, have such minor impact that they present no real response problems
- Low consequence/high frequency: risks that have no real impact but occur so often that we deal with them routinely
- High consequence/high frequency: risks that have a high impact but are dealt with frequently enough that we have the experience to respond
- High consequence/low frequency: risks that have a high impact but do not occur frequently enough for us to gain experience in responding to them
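The relationships above can be made concrete with a small sketch. This is a hypothetical illustration only: the 1–5 rating scales, the cut-off values, and the example hazards are all invented for demonstration, not drawn from any standard. It simply encodes the definitions given earlier: impact as the product of hazard severity and community vulnerability, risk as the product of frequency and impact, and the four matrix categories.

```python
def impact(severity, vulnerability):
    """Impact combines hazard severity and community vulnerability
    (each rated here on an arbitrary 1-5 scale)."""
    return severity * vulnerability

def risk_score(frequency, severity, vulnerability):
    """Risk as the product of frequency and impact."""
    return frequency * impact(severity, vulnerability)

def quadrant(frequency, severity, vulnerability, freq_cut=3, impact_cut=9):
    """Place a hazard in one of the four matrix categories.
    The cut-off values are arbitrary; a real jurisdiction would set its own."""
    freq_label = "high frequency" if frequency >= freq_cut else "low frequency"
    cons_label = ("high consequence"
                  if impact(severity, vulnerability) >= impact_cut
                  else "low consequence")
    return f"{cons_label}/{freq_label}"

# Invented example hazards: (frequency, severity, vulnerability)
hazards = {
    "winter storm": (5, 2, 2),
    "major earthquake": (1, 5, 4),
}
for name, (f, s, v) in sorted(hazards.items(),
                              key=lambda kv: -risk_score(*kv[1])):
    print(f"{name}: score={risk_score(f, s, v)}, {quadrant(f, s, v)}")
```

Note how the two example hazards can end up with similar overall scores while landing in different quadrants: the point of the matrix is that the high consequence/low frequency quadrant demands attention even when raw frequency-weighted scores look comparable.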
If we look closely at these categories, we note that much of our planning is operational and focused largely on high consequence/high frequency risks. This is entirely appropriate: you need to be able to deal with events that will have the most immediate impact on your community before tackling the future. Yet high consequence/low frequency events actually pose the greater threat precisely because we lack experience in dealing with them. They are also a hard sell to officials because they are low frequency events.
Here are four basic questions to ask when considering high consequence/low frequency events:
- What is the likelihood that the event will occur? Just because an event could happen doesn’t necessarily mean that it should receive priority in planning. You’re going to need to sell this to reluctant officials, so do your homework and have the facts to demonstrate that the risk is credible.
- If the event does occur, can I really do anything about it? Your local emergency plan and even a regional catastrophic plan have limits. Does the event you’re analyzing exceed those limits? If so, the problem may be one for the State and/or Federal government.
- What are the unique agent-generated demands of the event? What planning do I need to do beyond my basic all-hazards plan? Can planning for these demands improve my all-hazards plan?
- Who owns the issue? Some events can be planned for operationally, others require a broader, more strategic approach or are properly the domain of another agency. Sometimes you can only be a gadfly, providing information and raising issues.
In response to my recent article on climate change, my colleague, Rob Dale, raised a very important point. In his comment Rob said,
“I ’know’ that in the next three years we will have some sort of tornado/wind damage event, probably a little hazmat incident, and maybe a mass casualty. My planning efforts (and money) should be spent there. In a world of unlimited resources, that can change, but I find it hard to believe many EMs would go to their public and say ‘We are going to cut back on the funds we spend for flood mitigation and dedicate them to heat wave adaptation for conditions that might hit us in 2070.’"
Rob’s point is simple: you plan for the event most likely to affect your community. We sometimes forget that as emergency managers our job is to prioritize risk. A hazard is not a risk. Just because something can happen, doesn’t mean it will happen. Risk is dependent on the vulnerability of the community to the effects of a particular hazard.
This sounds like emergency management 101 but, unfortunately, we have created a system that is largely reactive. Terrorists attack and our priority becomes terrorism. Hurricanes strike and hurricane planning becomes the priority. Doing the “right” planning is rewarded with grant funds; true risk-based planning is often unrewarded.
Nevertheless, it’s our job to cut through the "disaster of the month" hype and keep our planning focused on true risk. When September 11th occurred, many members of the public were unaware that we had been working on Metropolitan Medical Task Forces for over two years following the Tokyo subway attack. But in San Francisco, we agreed to the program not because of concerns over terrorism but because the program helped us prepare for a large-scale hazardous materials release, a threat that we had identified but for which we had only a limited response capacity.
The problem is that because risk is relative, we need to define the community we are assessing. National risks do not always equate to local risks. For example, there is no doubt that terrorism is a risk we face as a nation. But the level of risk from terrorism varies from community to community. While it is possible for all communities to be attacked by terrorists, the probability for many communities is low. As Rob points out, planning for an event that has a high probability of occurrence is more important than planning for one with a low probability or long event horizon.
One of the problems I’ve noted with discussions on climate change is that people tend to fixate on terms rather than processes. Witness, for example, the recent spate of political cartoons pointing to the recent extreme cold weather across the globe and saying, “What do you mean ‘global warming’?” They either ignore or are unaware that the term “global warming” refers to a rise in average global temperature rather than suggesting that the temperature will get hotter everywhere.
This should come as no surprise since we’ve been experiencing inter-related weather patterns since the human race has been around; we just don’t always recognize them. A number of years ago people in California became familiar with a weather pattern called El Niño associated with heavy rainfall and flooding. Few realized that El Niño (technically the El Niño Southern Oscillation or ENSO) is part of a recurring weather pattern caused by the warming or cooling of Pacific Ocean surface temperatures relative to the long-term average. While associated in the public mind with California, ENSO can spawn typhoons in the Pacific, affect rainfall in Kenya, and affect sea ice formation in Antarctica; in short, it has a global effect on weather patterns.
Whatever your belief on global warming, there is no question that the world is experiencing unusual and extreme weather patterns. As emergency managers, we need to divorce ourselves from partisan discussions on climate change and maintain a risk-based objectivity and commitment to facts. This is particularly important as planning for climate change is a long-term issue. It does not resonate well with politicians who cannot see past the next election.
There is some progress being made, particularly in those areas that see the most obvious effects of climate change. A number of coastal jurisdictions have recognized the need both to reduce their carbon footprint and to incorporate sea-level rise into building codes. The City of San Francisco, for example, requires individual department plans for reducing greenhouse gases. In 2011 the San Francisco Bay Conservation and Development Commission adopted a development plan for land within 100 feet of a coastline that requires developers to assess the risk of sea-level rise on a project and develop appropriate mitigation measures. It is critical that we support and encourage these types of initiatives.
Planning for climate change is not popular and it tends to be outside many emergency managers’ comfort zones as it deals with long-term strategies and requires interaction with politicians, non-responder agencies, and the public. Yet our responsibility as all-hazards planners is to consider all the risks facing a community and contribute to their mitigation as much as we can.
Last week I made the case for the need to find the right mix between education and experience. Recent events in San Francisco offer an interesting case study that demonstrates that experience alone is not always enough when dealing with a high visibility event.
As you may recall, last July an Asiana Airlines flight crashed on landing at San Francisco International Airport, killing three passengers and injuring 181 others. Unfortunately, one of the fatalities was a young girl who was inadvertently run over twice by responding fire apparatus. While much about the accident is still unknown, it appears that the passenger was not wearing her seatbelt and was flung out of the aircraft, landing well away from other passengers. Two San Francisco firefighters saw the passenger but, based on a quick visual assessment, assumed she was dead. The girl was later covered in firefighting foam and was not seen when the fire apparatus was repositioned.
This week the San Francisco papers have been making much of the fact that the senior fire officers in charge of the scene had not received training on aircraft disaster operations. The implication is that their lack of knowledge somehow contributed to the young passenger’s death.
On the surface this would seem to be a major failure on the part of the San Francisco Fire Department. However, the facts are somewhat different:
- FAA regulations did not require that the senior officers be trained since they were not stationed permanently at the airport.
- The officers in charge were all highly experienced and were deployed from other locations to manage the incident because of its size and complexity. One officer is personally known to me and is a thorough professional.
- There is absolutely no suggestion that the senior officers mishandled the incident. Quite the contrary, the response to the crash was highly praised at the time.
- The proximate cause of the tragedy would appear to be the failure on the part of firefighters to physically examine the passenger before deciding she was dead and their failure to notify the incident commander of the location of the passenger as required by their general orders.
However, the public perception is that the experience of the officers is not sufficient to offset responsibility for the tragedy. The fact that these senior officers could deal with the incident was not enough; they have no way of documenting that they have the specialized knowledge needed to deal with aircraft disasters. They did everything right and according to regulations but there will always be a seed of doubt in the mind of the public. And it’s the public who comprise the jury in a liability case.
As emergency management gropes towards becoming a profession, I see more and more discussions based around the conflict between those with years of experience and limited formal education in emergency management and those with limited operational experience but considerable formal education. Unfortunately, it’s usually framed as an either/or argument which obscures several key issues.
As any personnel manager will tell you, success in a job depends on three things: knowledge (education) related to the job, specific skills required by the job (experience), and the ability to apply those knowledge and skills to the job. While ability grows with experience, it is because we are increasing our knowledge and learning new skills. Further, the mix of knowledge and skills can compensate for each other to a certain degree. That is, job skills learned in one job can be transferred to a new one, potentially offsetting knowledge requirements. Conversely, a broad knowledge base can sometimes compensate for lack of experience. While this is a bit oversimplified, the point is that both knowledge and skills are important and work together.
So why do we see a conflict every time the discussion over education comes up? The problem is that we personalize the argument with a false logic. For example: “I don’t have a degree and I’m doing the job just fine, therefore degrees aren’t necessary.” Yes, but your years of experience may be offsetting your lack of formal education in emergency management theory. What about the reverse? “I have a degree and I’m doing just fine without spending thirty years as a first responder.” Fine, but your knowledge may have given you an edge that compensates for limited experience.
The root of the problem, as many of my colleagues have pointed out, is that we still don’t have agreement on what constitutes the core knowledge, skills, and abilities necessary to be an emergency manager (actually, we don’t really have a great deal of agreement on the term “emergency manager” either). Note that word “core”. There will always be exceptions that prove the rule. You may believe that your forty years as an underwater messkit repair technician in the Swiss navy may be just what your jurisdiction needs, but it shouldn’t be a core requirement for the profession. Until we stop personalizing the argument over which is better, education or experience, and recognize that our ability to do our jobs depends on both, we’ll never make real progress towards being recognized as a profession. It’s not about you; it’s about the profession.
It is sad to read that many of the communities in the Philippines have had to resort to mass burials in the wake of Typhoon Haiyan. The news reports speak to hundreds of unidentified remains being placed in mass graves by officials citing concerns over the spread of disease. The use of mass graves or cremation in mass fatality events is not unusual in itself. Where local and regional forensic systems and experts in the processing of remains are not available the task of dealing with mass fatalities defaults to untrained responders who respond with good intentions but limited knowledge of how to deal with the dead. Fearing the spread of disease, these responders generally turn to mass burial or cremation.
This is unfortunate because burying hundreds of victims in mass graves has a profound effect on the survivors and on those family members who will never know what happened to their loved ones. It is also unfortunate because it is largely unnecessary. Research by the World Health Organization (WHO) and others, such as Dr. Joseph Scanlon, has demonstrated that disease is generally not a concern in disasters where most of the victims suffered traumatic injuries, the exception being if the bodies are in contact with the water supply.
In a 2007 article in the International Review of the Red Cross Dr. Morris Tidball-Binz, forensic coordinator for the Assistance Division of the International Committee of the Red Cross, noted that, "It is increasingly acknowledged worldwide that the proper management of the dead is a core component of any humanitarian response to catastrophes..." In its publication, Management of Dead Bodies in Disaster Situations, the World Health Organization expressly forbids both cremation and mass burial in its guiding principles and states, "Every effort must be taken to identify the bodies."
So how does an untrained responder deal with mass fatalities? Recognizing the need to provide guidance to responders the Pan American Health Organization and the International Committee of the Red Cross, with the participation of the World Health Organization and the International Federation of Red Cross and Red Crescent Societies published Management of Dead Bodies after Disasters: A Field Manual for First Responders in 2006. The manual addresses concerns about the risks of infectious disease and provides practical guidance on the recovery, storage, and disposal of remains. Among the techniques available is the use of temporary burial where the bodies are marked for later exhumation and identification. It’s a fine distinction easily overlooked by the media, which may well be the case in the Philippines.
Processing mass fatalities is something to which many emergency planners give only limited thought. Yet it goes to the heart of our mission to give comfort to the bereaved and to provide the dead with dignity and respect.