by Chief Ernst Piercy, CFO

Contributors: Joseph Yacker, Information Systems Director, Spokane Valley Fire Department; Leonard Chan, Management Analyst, Houston Fire Department; and Xavier Anderson, Management Analyst, Los Alamos County Fire Department

The fire service, long steeped in tradition, has provided superb services to our communities for many years. However, its use of data, and of information management in general, is in need of an upgrade, and not just a technological one.

When fire service leaders think of data collection, it is natural for them to turn to what is arguably one of the most successful data collection efforts in our industry: the National Fire Incident Reporting System (NFIRS). What distinguishes the NFIRS report is its completeness and its structure. It also has some weaknesses that can skew our data collection efforts as we move forward.

As anyone who has written an NFIRS report knows, the completeness of the standard is obvious. If you print out a blank report form, it goes on for pages. As you complete a report, you eventually realize that just about anything you would want to report about an incident can be addressed somewhere in the form. It is clear that, while this design may have been built to answer some specific questions many years ago, it was also intended to describe an incident so completely that new ways to interpret and investigate the data would naturally evolve over the years. Unfortunately, the value of the data collected in the NFIRS database has suffered in a number of ways because of this completeness. It takes a significant amount of effort to thoroughly document an incident, and this has led users to take shortcuts. Zero codes, the default general codes at the beginning of each series, have become a common shortcut for avoiding the additional detail that more accurate codes require.
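For agencies that want to see how much this affects their own records, a data-quality check can be as simple as counting how often default codes appear in an export. The following is a minimal sketch in Python; the file layout, column names, and the "ends in zero" heuristic are illustrative assumptions, not the NFIRS specification, and every records management system exports these fields a little differently.

```python
# Minimal sketch: estimate how often default "zero" codes appear in an
# incident export. Column names and CSV layout are assumptions for
# illustration, not the NFIRS specification.
import csv
from collections import Counter

# Hypothetical coded fields to audit for default entries.
CODED_FIELDS = ["incident_type", "cause_of_ignition", "area_of_origin"]

def zero_code_rates(path):
    totals = Counter()
    zeros = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for field in CODED_FIELDS:
                value = (row.get(field) or "").strip()
                if not value:
                    continue
                totals[field] += 1
                # Crude heuristic: codes ending in "0" are treated as the
                # general code at the head of a series. Adjust to your scheme.
                if value.endswith("0"):
                    zeros[field] += 1
    return {f: zeros[f] / totals[f] for f in CODED_FIELDS if totals[f]}

if __name__ == "__main__":
    for field, rate in zero_code_rates("incidents.csv").items():
        print(f"{field}: {rate:.0%} default codes")
```

Tracking a rate like this over time gives a quality-assurance officer a concrete number to drive down, rather than a vague sense that reports are incomplete.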

So, what does this have to do with fire service accreditation? In short, nothing, and yet everything. Clean data are the genesis of accurate reporting and must be ingrained in our daily lives. The modern fire service has at its fingertips an unprecedented number of tools with which to collect data, just as it seems most poised to innovate its operations. Community risk reduction, community paramedicine, fire service accreditation, community outreach, and health and wellness all represent relatively recent innovations for the fire service. They drive data collection and are themselves driven by data collection.

As you read this, you likely have four or five devices within reach that can serve as platforms for data collection. Portable devices, coupled with a global data network, increase your reach, capability, and reliability on a daily basis. An overwhelming market of software tools stands ready to serve your needs. Even so, technology cannot solve our data collection problem. We still need to understand and communicate our data collection objectives, and make decisions based on those understandings and objectives. Those decisions will drive the platforms, the networks, and the software, and allow us to leverage their power for success, but only if we put effort into both the design and the execution of our data collection.

Data as a flashlight

As an example, suppose a department’s community risk reduction program has analyzed census demographics and overlaid them with incident information about fires, their causes, and the losses they cause. Based on this, the department could conclude that many homes in the community are either not equipped with smoke alarms or are old enough that many alarms no longer operate properly. We commit to improving home safety, often by addressing smoke alarm needs. We apply for grants, create community partnerships, acquire equipment, and devise a plan. We may ask firefighters to check for functioning smoke alarms whenever they are in a home for any reason, and equip them with the tools they need. We increase our focus on vulnerable populations and offer services before we are called into their homes. But we want to know whether our efforts result in a measurable improvement. Data collection therefore needs to dovetail with all of these efforts, not only to provide conclusive results at the end of the project (if there is an end), but to dynamically drive continuous improvement during delivery.
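The overlay itself does not have to be elaborate. The sketch below assumes a hypothetical census extract and incident export (the file names, columns, and scoring are invented for illustration) and ranks census tracts for smoke alarm outreach; an agency would substitute its own data sources and weighting.

```python
# Minimal sketch: overlay census housing data with fire incident history to
# prioritize census tracts for smoke alarm outreach. Column names and the
# scoring approach are illustrative assumptions, not a standard methodology.
import pandas as pd

tracts = pd.read_csv("census_tracts.csv")      # tract_id, pct_homes_pre_1980, median_income
incidents = pd.read_csv("fire_incidents.csv")  # tract_id, loss_dollars

fires = (
    incidents.groupby("tract_id")
    .agg(fire_count=("tract_id", "size"), total_loss=("loss_dollars", "sum"))
    .reset_index()
)

risk = tracts.merge(fires, on="tract_id", how="left").fillna(
    {"fire_count": 0, "total_loss": 0}
)

# Simple, transparent priority score: older housing stock plus recent fire history.
risk["priority"] = (
    risk["pct_homes_pre_1980"].rank(pct=True) + risk["fire_count"].rank(pct=True)
)

print(risk.sort_values("priority", ascending=False).head(10))
```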

How do we measure our progress?

There are a variety of ways, but let’s consider the following: input measures, which capture the value of resources committed, typically expressed as a number; output measures, which count the units of work produced; efficiency measures, which relate inputs used to outputs produced; service quality measures, which include customer satisfaction; and outcome measures, which are the gold standard of measurement. Outcome measures determine and evaluate the results of an activity, as compared to your intended results.
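A short worked example may help keep the categories straight. Every number below is invented for a hypothetical smoke alarm program; it simply shows where each type of measure comes from.

```python
# Minimal sketch of the measure types for a hypothetical smoke alarm program.
# All values are invented for illustration.
grant_dollars = 25_000       # input measure: value of resources committed
staff_hours = 400            # input measure: staff time committed
alarms_installed = 1_000     # output measure: units of work produced

cost_per_alarm = grant_dollars / alarms_installed    # efficiency measure
alarms_per_hour = alarms_installed / staff_hours     # another efficiency measure
satisfaction = 0.92          # service quality measure: positive follow-up surveys

# Outcome measure: results achieved compared to intended results.
working_alarm_rate_before = 0.61   # baseline from pre-program surveys
working_alarm_rate_after = 0.78    # follow-up surveys
target = 0.80

print(f"Output: {alarms_installed} alarms installed")
print(f"Efficiency: ${cost_per_alarm:.2f} per alarm, {alarms_per_hour:.1f} alarms per staff hour")
print(f"Service quality: {satisfaction:.0%} satisfied on follow-up")
print(f"Outcome: {working_alarm_rate_after - working_alarm_rate_before:+.0%} change "
      f"vs. a {target - working_alarm_rate_before:+.0%} target")
```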

How much data is enough?

So, let us consider the scope of your data collection, keeping in mind that it must support your outcome objectives. Some would say that an easy way to go wrong is to buy into the (perhaps correct) notion that there are valuable nuggets of information available to us whose value will only be revealed later. The inclination, then, is to guess at what data to collect, well beyond what we know we need to support our current objectives. Is there a downside to collecting more data than you need? You already have someone on site, so you may as well collect as much data as possible, correct? If that person is a robot, or an altruist, then the only risk is a few more minutes spent collecting the extra data. More likely, however, the person working on your report has a real life, full of stresses, distractions, and daily requirements. If you are collecting more data than is obviously needed, they will start to take shortcuts. And there is no way to ensure that they will cut out the least valuable data in your request; they may even give up entirely on the data collection effort.

The answer is to start with the data you really need. As responses start to drive additional questions, add those to the data collection as you go. It is true that you may not end up with a full analysis of every record, but you may end up with a more complete data set in the end. Combine this conservative approach with a substantial and open discussion with your data collectors about why the data are needed and what questions you hope to answer. Involve them in the process; maybe they will be the ones to suggest that you are missing that one critical question on your form. A quick reminder about the importance of both quantitative and qualitative data: quantitative data are measurements of quantities (numbers) and are critical to the measurement of your programs; qualitative data are descriptive, measurements of observed phenomena. Both are important, although quantitative data are certainly easier to validate. Finally, make sure your folks know you care about the data. A good dataset requires leadership.

So, how can we tie data collection into the fire service accreditation model? Here are three quick examples:

Strategic Planning. Your data are important, but needs should be summarized, themes identified, and, where feasible, quantified. Collecting and tracking stakeholder feedback reassures stakeholders that their input was considered. Using these data in the internal development of your strategic plan ensures that your outcomes are aligned with community expectations. In the ninth edition of the model, this is critical in criterion 3B, during the development of your goals and objectives, which serve as the foundation for your strategic plan.
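As a sketch of what “quantified” can look like in practice: if each piece of stakeholder feedback is tagged with a theme during review, a simple tally turns hundreds of comments into something the planning team can act on. The file layout and theme column below are assumptions for illustration.

```python
# Minimal sketch: tally stakeholder feedback by theme so it can be summarized
# and quantified for the strategic plan. File layout is an assumption.
import csv
from collections import Counter

def theme_summary(path):
    # Expects one row per comment with a "theme" column assigned during review.
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["theme"].strip().lower()] += 1
    return counts

for theme, n in theme_summary("stakeholder_feedback.csv").most_common():
    print(f"{theme}: {n} comments")
```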

Community Risk Assessment-Standards of Cover (CRA-SOC). Your risk assessment must contain information that is useful and digestible for all personnel. The document should not be just a “front office” product compiled simply to check a box. The data should help improve day-to-day situational awareness and increase knowledge of the community served. The document should not only help shape top-level decisions about the placement of units and stations, but also guide equipment, training, and public education strategies. In other words, data should drive deployment. A process must be in place to validate your data, including the development and implementation of an outlier policy. Your data sources should be listed, so updates can be done seamlessly while ensuring credibility with the AHJ and stakeholders. These data are critical to the development of and responses to criteria 2B and 2C of the fire and emergency services self-assessment model.
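The outlier policy itself can start very small. The sketch below shows one possible screen for turnout times; the bounds and sample values are assumptions an agency would set in its own policy, not values prescribed by the model.

```python
# Minimal sketch of an outlier screen for response-time data, in the spirit of
# an agency-defined outlier policy. Bounds are illustrative assumptions.
def flag_for_review(times_seconds, low=10, high=300):
    """Return indexes of turnout times a reviewer should validate before analysis."""
    return [i for i, t in enumerate(times_seconds) if t < low or t > high]

turnout = [62, 75, 58, 81, 66, 540, 70, 64, 2, 73]  # invented turnout times (seconds)
print(flag_for_review(turnout))                      # -> [5, 8]: likely data-entry errors
```

Flagged records are not deleted; they are routed back for review, which is what keeps the percentile figures in the CRA-SOC credible.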

The CRA-SOC provides a real glimpse into the performance of the agency. It identifies strengths and challenges and, through an annual appraisal of each response program, provides a road map across the five years of data. In other words, collect data, then act upon it.

Writing to performance indicators in the model. Beyond simply measuring performance times, many agencies struggle to truly measure how successfully they are meeting performance indicators. Too often, agencies just provide the generic “it is going well.” Stakeholders often lack context on what counts as doing a good job. An incident may have looked good from the public’s perspective, but the crews may be frustrated by things such as hydrants that did not work, units that arrived late, or even near misses. So how do we determine how well we are doing? Metrics should not be established for the sake of creating metrics; the numbers should have meaning and not lead to unintended consequences. The temptation to mold the self-assessment model into a public relations document should be avoided. The measures used, and the model itself, are not designed to make an agency look good but rather to identify areas for improvement.

An argument could be made that data should be used in the response to every performance indicator in the model, but, admittedly, there are cases where that could be difficult. Where quantifiable data are not available, qualitative data (observed phenomena) should be used to appraise your programs. As an example, how do you address the appraisal for your response to performance indicator 2A.1? It states, “Service area boundaries for the agency are identified, documented, and legally adopted by the authority having jurisdiction.” If, within the description, you have told the reader that the service area boundaries are identified (and adopted), how do you appraise this? Quantifiable data may not be an option, although telling the reader how clearly defined your response areas are could lead to more accurate response plans, dropped borders, adding or deleting the need for mutual aid, and so on.

On the other hand, there are situations in which quantifiable data must be used to address your appraisals. For performance indicator 5E.1, which states in part, “Given its standards of cover and emergency deployment objectives, the agency meets its staffing, response time….”, you should describe your benchmarks (goals) in the description, and in the appraisal use data to report your actual response performance, including any performance gaps. This should be replicated in each of the programs that follow (5F.1, 5G.1, etc.).
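As a simple illustration of that kind of appraisal, the sketch below compares a hypothetical set of travel times against an adopted benchmark using a nearest-rank 90th percentile. The benchmark and times are invented; an agency would substitute its own adopted objectives and CAD data.

```python
# Minimal sketch: compare actual performance against a benchmark for a
# 5E.1-style appraisal. All values are invented for illustration.
import math

def percentile_90(values):
    """90th percentile using the nearest-rank method common in SOC reporting."""
    ordered = sorted(values)
    rank = math.ceil(0.9 * len(ordered))
    return ordered[rank - 1]

travel_minutes = [3.9, 4.2, 5.1, 4.8, 6.3, 5.5, 4.0, 7.2, 5.0, 4.6]  # hypothetical
benchmark = 5.2   # adopted travel-time objective (minutes, 90th percentile)

actual = percentile_90(travel_minutes)
gap = actual - benchmark
print(f"90th percentile travel time: {actual:.1f} min "
      f"(benchmark {benchmark:.1f} min, gap {gap:+.1f} min)")
```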

The benefits of data collection and information management range from data visualization (in both statistics and mapping) to justifying or shifting funding for your programs. With sound data collection, fire service leaders can readily justify the decisions they are making for the future success of the organization.

Ernst Piercy is a retired fire chief with more than 35 years in the fire service, most recently serving as the Regional Fire Chief for Navy Region Southwest in San Diego, California. He served as the accreditation manager in the Air Force Academy Fire Department’s successful bid for international fire service accreditation in March 2001 and led his team through successful re-accreditation bids in 2006 and 2011.

Chief Piercy is a 2011 graduate of the Senior Executives in State and Local Government Program at Harvard University, completed the Executive Fire Officer Program at the National Fire Academy in 2007, and has held the Chief Fire Officer designation since 2003.