Integration Nation

Be it response time data, building inspection data, or any other, the fire service may be chock full of potential Big Data, but the fire service and data aren’t exactly peanut butter and jelly. (Photo by John Cetrino.)

In the shadow of 9/11, a great rallying cry went out across the fire service: interoperability. No longer could we operate within our own silos. Like the move to standardize hose couplings a generation before, a radio grid that allows us to communicate and collaborate with our neighbors was no longer a luxury. Fast forward 15 years, and interwoven radio networks, enormous mobile command units, and plain language communication are prevalent, if not universal. Yet departmental silos remain, and the most pressing ones these days concern data.

Now, you’re forgiven for rolling your eyes; Big Data, Little Data, and every data in between have all the hallmarks of yet another fad. Everywhere you turn, someone is touting data as the salve for all manner of problems. With this article, I suppose I am just as guilty. But I am not alone. In the December 2014 issue of FireRescue, Erich Roden and Matt Quinn’s article “Big Data in the Fire Service: A Primer” outlined the multitude of ways Big Data is already poised to impact the fire service. As they put it: “Simply put, ‘Big Data’ can be defined as any collection of data that is too large to be processed by any of the standard tools commonly used to work with data.”1

Be it response time data, building inspection data, or any other, the fire service may be chock full of potential Big Data, but the fire service and data aren’t exactly peanut butter and jelly.

Data is something we do as much as it is something we use. Data is something we rush through in the middle of the night so we can get back to bed. It is the forms we begrudgingly complete to keep headquarters and city hall happy. Data is something we tend to do for someone else. Moving to digital data systems has multiplied the amount of data but not necessarily the ways we use it. And the more we improve the ways we use data, the more we realize just how siloed our data and our data systems are. Data system integration is the next rallying cry poised to sweep the fire service.

This article focuses specifically on data integrations as they relate to the fire service. It briefly introduces a number of concepts around data architecture, software configurations, and data portability, as they are essential to understanding the capabilities and limitations of current data and records management systems (RMSs). This article is not intended to provide a comprehensive technical how-to manual on data integrations; it is intended to help a nontechnical audience understand the fundamentals of linking data systems.

Data System Integrations

How many computer systems does your organization have to record a firefighter’s certifications? How many different places does your organization record who worked, on what day, and on what unit? When a family narrowly escapes a residential fire, how many places would you have to look to determine if that was one of the houses where your department installed a smoke detector or had previous contact? Data system integration is the process by which different data and RMSs are linked on the back end so that data transfer directly, seamlessly, and accurately from one system to another to improve efficiency and effectiveness. Or, as IBM Analytics put it: “Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. A complete data integration solution delivers trusted data from a variety of sources.”2

For the end user, it should seem intuitive: A change is made in one system and is automatically updated in another system. No more keeping multiple lists and spreadsheets or logging into multiple systems to ensure they all match up.

To illustrate the concept, let’s begin with a scenario: A firefighter/paramedic has been diagnosed with meningitis, and now the department needs to cover the employee’s upcoming shifts and identify personnel and patients the employee likely came into contact with. First, the automated shift-scheduling system pulls a list of certifications from the training RMS and identifies the next person on the overtime list who meets all the certifications and criteria required to fill the shift. Next, the department queries the incident RMS to identify all calls to which the sick employee responded. An automated message goes out to all potentially exposed personnel and other stakeholders, alerting them to the possible exposures. While departmental personnel actively oversee this process, the data are exchanged directly, in real time, among the respective computer systems and people. As far as the end user is concerned, the systems just “talk” to one another and pass the necessary information back and forth seamlessly.
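As a rough illustration only, the following sketch (written in Python, with entirely hypothetical system exports and field names) shows the kind of logic an integration performs behind the scenes: pull certification records from the training RMS, walk the overtime list in callback order, and return the first member who holds every certification the vacant seat requires.

```python
# Hypothetical exports from two siloed systems; a real integration would
# pull these over each vendor's interface rather than hard-coding them.
training_rms = {                      # employee ID -> current certifications
    "FF1027": {"Firefighter II", "Paramedic", "HazMat Ops"},
    "FF2204": {"Firefighter II", "EMT-Basic"},
    "FF3318": {"Firefighter II", "Paramedic"},
}

overtime_list = ["FF2204", "FF3318", "FF1027"]   # callback priority order

def next_qualified(required_certs, overtime_list, training_rms):
    """Return the first member on the overtime list who holds every
    certification required to fill the vacant seat, or None."""
    for employee_id in overtime_list:
        certs = training_rms.get(employee_id, set())
        if required_certs <= certs:          # subset test: all requirements met
            return employee_id
    return None

print(next_qualified({"Firefighter II", "Paramedic"}, overtime_list, training_rms))
# FF3318 -- the first member on the list who meets both requirements
```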

There is a certain magic to data integrations. The data just go where they need to go, arrive in the format they need to be received in, and trigger the outcomes they need to accomplish. However, this magic is anything but: it requires extensive programming, system configuration, and information management planning. To be effective, data integrations need to be carefully planned and actively project managed. The magicians of an integration are the IT staff and the software project managers. While the technical details and specifications may be best left to the experts, it is important for firefighters and fire service leaders to be at least broadly familiar with the language and terminology of “data.”

Data for … Firefighters

These broadly accepted definitions, borrowed from the data industry, are useful and necessary to ensure a fundamental baseline of knowledge about data systems:

Data: Data are* any values, labels, or information related to a thing. (*Data are always plural.) Said another way, data are information that has been translated into a form that is more convenient to move or process.

Data dictionaries: A data dictionary is a collection of descriptions of the data objects or items in a data model for the benefit of programmers and others who need to refer to them. Each data object or item is given a descriptive name, its relationships are described (or it becomes part of a structure that implicitly describes those relationships), the type of data (such as text, image, or binary value) is described, possible predefined values are listed, and a brief textual description is provided.

Data quality: Data quality is the term used for information that has five elements of quality: completeness, consistency, accuracy, time-stamping, and adherence to standards. Within an organization, acceptable data quality is crucial to operational and transactional processes and to the reliability of business analytics (BA)/business intelligence (BI) reporting. Data quality is affected by the way data are entered, stored, and managed.

Data portability: Data portability is the ability to move data among different application programs, computing environments, or cloud services. Data portability has become commonplace (although not universal) among application programs designed for use on diverse vendors’ software.

Authorization and permissions: Authorization is the process of giving someone permission to do or have something within a computer network or program. Authorization is sometimes understood to include both the preliminary setting up of permissions by a system administrator and the actual checking of those permission values when a user requests access.

Data architecture: Data architecture is a term applied to both the process and the outcome of thinking out and specifying the overall structure, logical components, and logical interrelationships of data.

Data mapping: Data mapping is a process used in data warehousing by which different data models are linked to each other using a defined set of methods to characterize the data in a specific definition. Data mapping serves as the initial step in data integration.
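To make that last definition concrete, here is a minimal data-mapping sketch in Python. The field names are invented for illustration; an actual mapping would come from the data dictionaries of the two systems being linked.

```python
# Hypothetical record as exported by a training RMS.
source_record = {"emp_no": "1027", "cert_name": "Paramedic", "exp_date": "2025-06-30"}

# Data map: source field name -> field name expected by the scheduling system.
field_map = {
    "emp_no":    "employee_id",
    "cert_name": "certification",
    "exp_date":  "expires_on",
}

def map_record(record, field_map):
    """Rename each field per the data map, dropping anything that is not mapped."""
    return {target: record[source] for source, target in field_map.items() if source in record}

print(map_record(source_record, field_map))
# {'employee_id': '1027', 'certification': 'Paramedic', 'expires_on': '2025-06-30'}
```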

All-In-One Solutions vs. Integrations

When we begin discussing data system integrations, the most obvious question is: Why not just buy one piece of software that does everything? Many of the RMSs fire departments already have do just that. Most major RMS products are designed to allow a department to enter fire and EMS incident data, maintain training logs, document fire investigations, manage various inventory and daily inspections, and even conduct building inspections. Some RMS solutions are bundled with the computer-aided dispatch (CAD) system to provide the whole package. So, with built-in all-in-one functionality, why would a department even want to consider maintaining separate systems?

The all-in-one package approach served us well during the first generation of the digital fire service as we moved from paper forms to digital forms. Yet, as fire departments become more tech-savvy and tech-capable, many are looking to independent software solutions to meet specific needs. Moreover, even if a department uses an all-in-one solution, there is likely other software across the department and municipality that could be integrated: human resources/payroll software, building/planning department software, etc. Data integrations are just as important to all-in-one solutions.

Many software vendors, including vendors of all-in-one systems, have anticipated this move and have already created boilerplate integrations between popular software pairings to provide these linkages at reduced cost. Ask any software vendor at a fire service trade show what other software (sometimes even competitors’) they integrate with, and they are likely to have a list of software with which they currently integrate successfully and a list of other software with which they could potentially integrate. Integrations are becoming the bread and butter of the software industry.

Application Program Interfaces

Connecting computer systems, each with its own language, structure, and security, is a bit like connecting fire pumps, each with its own threads, plumbing, and design. Like the move to standardize couplings, computer engineers created a standardized connection interface, known as an application program interface (API), to facilitate the linking of different computer systems.

Many of the data systems and RMSs fire departments are already using have built-in APIs, although many fire departments may not be using them to their full potential. More importantly, just because your data systems have an API doesn’t mean all your systems are automatically or necessarily inexpensively connected.
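For the curious, a call to a vendor API typically looks something like the sketch below (Python, using the widely used requests library). The base URL, endpoint, and parameter names here are placeholders invented for illustration, not any real vendor’s API; the vendor’s documentation defines the actual details.

```python
import requests

# Placeholder values only; a real integration uses the vendor's documented
# base URL, authentication scheme, and field names.
BASE_URL = "https://rms.example.gov/api/v1"
API_KEY = "stored-securely-not-in-source-code"

def fetch_incidents(station, start_date, end_date):
    """Request incident records for one station over a date range."""
    response = requests.get(
        f"{BASE_URL}/incidents",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"station": station, "from": start_date, "to": end_date},
        timeout=30,
    )
    response.raise_for_status()   # surface HTTP errors instead of passing along bad data
    return response.json()        # structured records, ready to be mapped into another system

# Example use (would require a live endpoint):
# incidents = fetch_incidents("Station 4", "2016-01-01", "2016-01-31")
```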

Identifying essential and preferred data integrations should be a vital component of specifying, bidding, or purchasing new software. Data integrations can add significant costs to the overall purchase price of new software and should be anticipated in budget requisitions. In the tight financial climates in which most agencies operate, these additional costs are likely to face intense scrutiny. However, the upfront costs are often considerably less than the cost of deciding after the fact to integrate systems to address the frustrating inefficiencies, errors, and duplications of new but still siloed computer systems.

Linking different systems requires a comprehensive data integration to ensure the data flow in the correct direction, arrive in the correct form, and link to the correct information. When data exist in a silo, they can take any form that is convenient. When linked to other systems, the data architecture (the way the data are configured, ordered, and populated) becomes very important.

Sharing Data

Discussing data integrations very quickly gets mired in complex technical jargon and concepts. Again, many of those specifics are best left to the IT professionals. For nontechnical staff, the question is: How could you improve your agency’s efficiency and effectiveness by sharing data? This is a question being asked all across the country and serves as the lynchpin of “Open Data Initiatives” in municipalities large and small. The challenge of linking data systems among city agencies can be as fundamental as ensuring that record A in one system relates to the same location as record B in another.

Traditionally, location linkages are based on the physical address of the property. However, given the complexity of matching addresses in large or subdivided buildings and the differences in data architecture, addresses are difficult to use for this purpose. For example, some systems concatenate the address into one field, as seen in Version 1 in Table 1. Others split the street address into varying components: street number, direction, name, type, post direction, and so on (Versions 2 through 5).

When specific unit numbers or address ranges are included, they only further complicate matters, as can be seen in Table 2.

In other areas, parcel numbers are used for this purpose. Parcel numbers are hardly ideal either, as most use both alpha and numeric characters, are often spaced inconsistently, and may even have different numbers of digits. Additionally, there may be multiple buildings on one parcel, or one large building may sit on multiple parcels. The three actual parcel numbers in Table 3 are all from within the same city limits.
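A small sketch (in Python, with invented records loosely patterned on the Table 1 configurations) shows why address matching is so brittle: two systems store the same physical address differently, the raw values do not match, and even a crude normalization step only papers over punctuation and capitalization; abbreviation and spelling differences would still defeat it.

```python
# The same physical address as stored in two hypothetical systems,
# loosely following Version 1 and Version 5 in Table 1.
inspection_record = {"address": "123 S. Smith St. SE"}
incident_record = {"number": "123", "pre_dir": "S", "name": "Smith", "type": "ST", "post_dir": "SE"}

def normalize(parts):
    """Crude normalization: join the pieces, uppercase, drop periods and extra spaces."""
    joined = " ".join(p for p in parts if p)
    return " ".join(joined.upper().replace(".", "").split())

key_a = normalize([inspection_record["address"]])
key_b = normalize([incident_record[f] for f in ("number", "pre_dir", "name", "type", "post_dir")])

print(key_a)            # 123 S SMITH ST SE
print(key_a == key_b)   # True, but only after normalization; the raw strings differ, and a
                        # spelled-out "Street" or a misspelled name would still defeat the match
```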

What is needed is a unique identification number that all municipal services would share to facilitate data integration across data systems. New York City is an example of best practice in this arena. Under the Bloomberg administration, the city adopted an Open Data Initiative to link previously siloed databases. The goal was to create more efficient and transparent municipal services. One of the lynchpins of that initiative was the creation of a unique building identification number (BIN) that would link each of the disparate systems together.
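By contrast, when every system carries the same unique identifier, linking records becomes a simple key lookup. The sketch below (again with made-up records and a hypothetical BIN value) joins an incident record to a building’s inspection history without touching the address text at all.

```python
# Hypothetical records from two city systems that share a building identifier.
inspections = {"1001234": {"last_inspection": "2015-11-04", "smoke_detector_program": True}}
incident = {"bin": "1001234", "type": "Structure fire", "date": "2016-03-18"}

# Joining on the shared identifier is a direct lookup -- no address parsing required.
history = inspections.get(incident["bin"], {})
print(history.get("smoke_detector_program", False))   # True: the department had prior contact here
```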

Interdepartmental Benchmarking

Similarly, as fire departments work to ensure they are providing the highest quality service to their respective communities, more communities and governments are asking: How are we doing relative to similar communities? Unspoken in that question is a presumption of currency: How are we doing relative to similar communities right now? With the advent of the National Fire Incident Reporting System (NFIRS) Enterprise Data Warehouse (EDW), the National Fire Operations Reporting System (NFORS), and the Fire Community Assessment Response Evaluation System (FireCARES), we are on the cusp of an explosion of real-time, interdepartmental benchmarking and comparison.

The forthcoming NFIRS EDW is, as fire data evangelist Sara Wood puts it, “the missing piece of NFIRS,” allowing departments more real-time access not only to their own data but also to that of comparable, or not so comparable, departments across the country. While the data remain only as accurate as they were entered, the EDW will allow users to create reports that drill down into every corner of NFIRS data.

The NFORS system is designed to supplement and expand fire incident data capture beyond the current parameters of NFIRS. NFORS is a Web-based system to capture, analyze, and benchmark data on what occurred on the fireground, how long it took, and what the outcome was. With the data available in real time, personnel can review what occurred; document positive and negative outcomes; and translate that data into usable information for policy makers, training personnel, and elected leaders. While each of the specific data points can be entered manually after the incident, NFORS is configured to allow importation of data from CAD and RMS solutions. Moreover, since many of the largest RMS vendors have been active stakeholders in designing NFORS, many are working on direct integrations to allow seamless data entry into NFORS without having to enter data in multiple locations.

FireCARES is loosely related to NFORS, having also been born out of the landmark Firefighter Safety and Deployment Study. FireCARES measures and scores how well, comparatively, a fire department performs. Research has demonstrated that when fire department resources (both mobile resources and personnel) are deployed to match the risk levels inherent in a community’s hazards, the community is far less vulnerable to negative outcomes in firefighter injuries and deaths, civilian injuries and deaths, and property losses. FireCARES analyzes massive amounts of fire department data to identify whether resources are appropriately deployed to match a community’s risk level.

This initiative measures the fire loss outcomes of a fire department relative to an idealized version of itself, resulting in a performance score. The program is based on a complex mathematical model that factors in many data points and makes a number of theoretical assumptions to derive the performance score.

This is a prime example of the difference between Big Data and traditional approaches to analyzing performance. Traditionally, we want to look at specific incidents to figure out what caused positive or negative performance outcomes. Or we want to break it down by specific territories (“everyone here knows Battalion A is unique”) or other local nuances (“well, the train along XYZ Street always slows Engine Q down”). With Big Data, the individual data points become less important; outliers, missing data, and other data quality problems are identified and removed using statistical techniques, often without anyone ever looking at the specific data point. Relationships are identified by analyzing the data in aggregate.
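As a simple illustration of that kind of statistical screening (not FireCARES’ actual method, which is far more sophisticated), the sketch below drops response-time outliers using the common interquartile-range rule before computing a summary statistic. The values are invented.

```python
import statistics

# Hypothetical turnout-plus-travel times in seconds; two obvious outliers.
response_times = [290, 305, 312, 298, 301, 315, 2900, 288, 307, 12]

def drop_outliers(values):
    """Remove values outside 1.5x the interquartile range -- a common screening rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if low <= v <= high]

cleaned = drop_outliers(response_times)
print(len(response_times) - len(cleaned), "outliers removed")              # 2 outliers removed
print(round(statistics.mean(cleaned)), "seconds average after screening")  # 302 seconds
```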

Join the Chorus

The chorus for this rallying cry is growing louder. Fire departments across the country are becoming increasingly data savvy. Data are becoming less something we create simply for someone else. Integrating data systems is the backbone of this paradigm shift. Fire service leaders don’t need to be experts in data architecture or API configurations to realize its value. They do, however, need to broadly understand what integrations can and cannot do and budget appropriately to ensure vital systems are appropriately integrated. With the current speed of technological development and data production, the only limit on how interconnected our data systems will be in another 15 years is our willingness to embrace integrations. As expensive as integrations can be, can we afford not to?

References

1. Roden, Erich, and Matt Quinn, “Big Data in the Fire Service: A Primer,” FireRescue, December 2014.

2. IBM Analytics, “Data Integration,” www.ibm.com/analytics/us/en/technology/data-integration.

Table 1. The same address stored under five different field configurations

Version 1: 123 S. Smith St. SE
Version 2: 123 | S. Smith St. SE
Version 3: 123 | S. Smith St. | SE
Version 4: 123 | S. Smith | St. | SE
Version 5: 123 | S. | Smith | St. | SE

Table 2. Unit numbers and address ranges

Unit number: 123 S. Smith St. SE, Apartment 2304
Address range: 120-125 S. Smith St. SE

Table 3. Three actual parcel numbers within the same city limits

Building 1: 14 007700100445
Building 2: 15 206 04 095
Building 3: 17 0161 LL0158

 
