Proposed Model Performance Measures for State Traffic Records Systems, 20438-20448 [2011-8738]
statutory requirement does not impose
any additional burden.
Number of respondents: We estimate
that there are roughly 1,000
manufacturers of motor vehicles that
collect and keep first purchaser
information.
Comments are invited on: whether the
proposed collection of information is
necessary for the proper performance of
the functions of the Department,
including whether the information will
have practical utility; the accuracy of
the Department's estimate of the burden
of the proposed information collection;
ways to enhance the quality, utility, and
clarity of the information to be
collected; and ways to minimize the
burden of the collection of information
on respondents, including the use of
automated collection techniques or
other forms of information technology.
A comment to OMB is most effective
if OMB receives it within 30 days of
publication.
Issued on: April 6, 2011.
Frank Borris,
Director, Office of Defects Investigation.
[FR Doc. 2011–8746 Filed 4–11–11; 8:45 am]
BILLING CODE 4910–59–P
DEPARTMENT OF TRANSPORTATION
National Highway Traffic Safety
Administration
[U.S. DOT Docket Number NHTSA–2010–0181]
Reports, Forms, and Recordkeeping
Requirements
AGENCY: National Highway Traffic
Safety Administration (NHTSA), U.S.
Department of Transportation.
ACTION: Notice.
SUMMARY: In compliance with the
Paperwork Reduction Act of 1995 (44
U.S.C. 3501 et seq.), this notice
announces that the Information
Collection Request (ICR) abstracted
below has been forwarded to the Office
of Management and Budget (OMB) for
review and comment. The ICR describes
the nature of the information collections
and their expected burden. The Federal
Register Notice with a 60-day comment
period was published on February 4,
2011 (76 FR 6515).
DATES: Comments must be submitted to
OMB on or before May 12, 2011.
ADDRESSES: Send comments to the
Office of Information and Regulatory
Affairs, OMB, 725 17th Street, NW.,
Washington, DC 20503, Attention: Desk
Officer.
FOR FURTHER INFORMATION CONTACT: Alex
Ansley, Recall Management Division
(NVS–215), Room W46–412, NHTSA,
1200 New Jersey Ave., Washington, DC
20590. Telephone: (202) 493–0481.
SUPPLEMENTARY INFORMATION: Under the
Paperwork Reduction Act of 1995,
before an agency submits a proposed
collection of information to OMB for
approval, it must first publish a
document in the Federal Register
providing a 60-day comment period and
otherwise consult with members of the
public and affected agencies concerning
each proposed collection of information.
The OMB has promulgated regulations
describing what must be included in
such a document. Under OMB’s
regulation, see 5 CFR 1320.8(d), an
agency must ask for public comment on
the following:
(i) Whether the proposed collection of
information is necessary for the proper
performance of the functions of the
agency, including whether the
information will have practical utility;
(ii) The accuracy of the agency’s
estimate of the burden of the proposed
collection of information, including the
validity of the methodology and
assumptions used;
(iii) How to enhance the quality,
utility, and clarity of the information to
be collected; and
(iv) How to minimize the burden of
the collection of information on those
who are to respond, including the use
of appropriate automated, electronic,
mechanical, or other technological
collection techniques or other forms of
information technology, e.g., permitting
electronic submission of responses.
In compliance with these
requirements, NHTSA asks for public
comments on the following collection of
information:
Title: Petitions for Hearings on
Notification and Remedy of Defects.
Type of Request: Extension of a
currently approved information
collection.
OMB Control Number: 2127–0039.
Affected Public: Businesses or other
for-profit.
Abstract: Sections 30118(e) and
30120(e) of Title 49 of the United States
Code specify that any interested person
may petition NHTSA to hold a hearing
to determine whether a manufacturer of
motor vehicles or motor vehicle
equipment has met its obligation to
notify owners, purchasers, and dealers
of vehicles or equipment of a safety-related defect or noncompliance with a
Federal motor vehicle safety standard in
the manufacturer’s products and to
remedy that defect or noncompliance.
To implement these statutory
provisions, NHTSA promulgated 49
CFR part 557, Petitions for Hearings on
Notification and Remedy of Defects. Part
557 establishes procedures providing for
the submission and disposition of
petitions for hearings on the issues of
whether the manufacturer has met its
obligation to notify owners, purchasers,
and dealers of safety-related defects or
noncompliance, or to remedy such
defect or noncompliance free of charge.
Estimated annual burden: During
NHTSA’s last renewal of this
information collection, the agency
estimated it would receive one petition
a year, with an estimated one hour of
preparation for each petition, for a total
of one burden hour per year. That
estimate remains unchanged with this
notice.
Number of respondents: 1.
Comments are invited on: Whether
the proposed collection of information
is necessary for the proper performance
of the functions of the Department,
including whether the information will
have practical utility; the accuracy of
the Department's estimate of the burden
of the proposed information collection;
ways to enhance the quality, utility, and
clarity of the information to be
collected; and ways to minimize the
burden of the collection of information
on respondents, including the use of
automated collection techniques or
other forms of information technology.
A comment to OMB is most effective
if OMB receives it within 30 days of
publication.
Issued on: April 6, 2011.
Frank Borris,
Director, Office of Defects Investigation.
[FR Doc. 2011–8739 Filed 4–11–11; 8:45 am]
BILLING CODE 4910–59–P
DEPARTMENT OF TRANSPORTATION
National Highway Traffic Safety
Administration
[Docket No. NHTSA–2011–0044]
Proposed Model Performance
Measures for State Traffic Records
Systems
AGENCY: National Highway Traffic
Safety Administration (NHTSA),
Department of Transportation (DOT).
ACTION: Notice.
SUMMARY: This notice announces the
publication of Model Performance
Measures for State Traffic Records
Systems (DOT HS 811 44), which
proposes model performance measures
for State traffic record systems to
monitor the development and
implementation of traffic record data
systems, strategic plans, and data-improvement grants. These model
performance measures are voluntary
and are to help States monitor and
improve the quality of the data in their
traffic record systems.
DATES: Written comments may be
submitted to this agency and must be
received no later than June 13, 2011.
ADDRESSES: You may submit comments
identified by DOT Docket ID number
NHTSA–2011–0044 by any of the
following methods:
• Electronic Submissions: Go to
https://www.regulations.gov. Follow the
online instructions for submitting
comments.
• Fax: 202–366–2746.
• Mail: Docket Management Facility,
M–30, U.S. Department of
Transportation, West Building, Ground
Floor, Room W12–140, 1200 New Jersey
Ave., SE., Washington, DC 20590.
• Hand Delivery or Courier: Docket
Management Facility, M–30, U.S.
Department of Transportation, West
Building, Ground Floor, Room W12–
140, 1200 New Jersey Ave., SE.,
Washington, DC 20590, between 9 a.m.
and 5 p.m. Eastern time, Monday
through Friday, except Federal holidays.
Regardless of how you submit your
comments, you should identify the
Docket number of this document.
Instructions: For detailed instructions
on submitting comments and additional
information, see https://
www.regulations.gov. Note that all
comments received will be posted
without change to https://
www.regulations.gov, including any
personal information provided. Please
read the ‘‘Privacy Act’’ heading below.
Privacy Act: Anyone is able to search
the electronic form of all comments
received into any of our dockets by the
name of the individual submitting the
comment (or signing the comment, if
submitted on behalf of an association,
business, labor union, etc.). You may
review the complete User Notice and
Privacy Notice for Regulations.gov at
https://www.regulations.gov/search/
footer/privacyanduse.jsp.
Docket: For access to the docket to
read background documents or
comments received, go to https://
www.regulations.gov at any time or to
West Building Ground Floor, Room
W12–140, 1200 New Jersey Avenue, SE.,
Washington, DC between 9 a.m. and 5
p.m., Eastern Time, Monday through
Friday, except Federal holidays.
FOR FURTHER INFORMATION CONTACT: For
programmatic issues: Luke Johnson,
Office of Traffic Records and Analysis,
NPO–423, National Highway Traffic
Safety Administration, 400 Seventh
Street, SW., Washington, DC 20590.
Telephone (202) 366–1722. For legal
issues: Roland Baumann, Office of Chief
Counsel, NCC–113, National Highway
Traffic Safety Administration, 400
Seventh Street, SW., Washington, DC
20590. Telephone (202) 366–5260.
SUPPLEMENTARY INFORMATION: The
National Highway Traffic Safety
Administration (NHTSA) has identified
61 model performance measures for the
six core State traffic records data
systems: Crash, vehicle, driver,
roadway, citation/adjudication, and
EMS/injury surveillance. These model
performance measures address the six
performance attributes: Timeliness,
accuracy, completeness, uniformity,
integration, and accessibility. States can
use these measures to develop and track
performance goals in their Traffic
Records Strategic Plans, Traffic Records
Assessments, and Highway Safety Plans;
establish data-quality improvement
measures for specific traffic records
projects; and support data improvement
goals in the Strategic Highway Safety
Plan. The full text of the report Model
Performance Measures for State Traffic
Records Systems (DOT HS 811 44) is
available at https://www.nhtsa.gov/.
Key Features of the Model Performance
Measures
Use is voluntary: States should use
the measures for those data system
performance attributes they wish to
monitor or improve. If the suggested
measures are not deemed appropriate,
States are free to modify them or
develop their own.
The measures are flexible: The
measures are models. States can modify
a measure to meet a specific need as
long as its overall intent remains the
same.
The measures do not set numerical
performance goals: They describe what
to measure and suggest how it should be
measured but are not intended to
establish a numerical performance goal.
Each State should set its own
performance goals.
The measures provide a template or
structure States can populate with
specific details: For example, the States
must decide what data files to use and
what data elements are critical. States
should take advantage of these decision-making opportunities to focus on their
most important performance features.
The measures are not exhaustive: The
measures attempt to capture one or two
key performance features of each data
system performance attribute. States
may wish to use additional or
alternative measures to address specific
performance issues.
The measures are not intended to be
used to compare States: Their purpose
is to help each State improve its own
performance. Each State selects the
measures it uses, establishes its own
definitions of key terms, and may
modify the measures to fit its
circumstances. Since the measures will
vary considerably from State to State, it
is unlikely that they could be used for
any meaningful comparisons between
States. NHTSA has no intention of using
the measures to make interstate
comparisons.
Core Traffic Records Data Systems
The model performance measures
cover the six core traffic data systems.
1. Crash: The State repository that
stores law enforcement officer crash
reports.
2. Vehicle: The State repository that
stores information on registered vehicles
within the State (also known as the
vehicle registration system). This
database can also include records for
vehicles not registered in the State—e.g.,
a vehicle that crashed in the State but
is registered in another State.
3. Driver: The State repository that
stores information on licensed drivers
within the State and their driver
histories. This is also known as the
driver license and driver history system.
The driver file also could contain a
substantial number of records for
drivers not licensed within the State—
e.g., an unlicensed driver involved in a
crash.
4. Roadway: The State repository that
stores information about the roadways
within the State. It should include
information on all roadways within the
State and is typically composed of
discrete sub-files that include: Roadway
centerline and geometric data, location
reference data, geographical information
system data, travel and exposure data,
etc.
5. Citation/Adjudication: The
component repositories, managed by
multiple State or local agencies, which
store traffic citation, arrest, and final
disposition of charge data.
6. EMS/Injury Surveillance: The
component repositories, managed by
multiple State or local agencies, which
store data on motor vehicle-related
injuries and deaths. Typical
components of an EMS/injury
surveillance system are pre-hospital
EMS data, hospital emergency
department data systems, hospital
discharge data systems, trauma
registries, and long-term care/
rehabilitation patient data systems.
Performance Attributes
The model performance measures are
based on six core characteristics:
1. Timeliness: Timeliness reflects the
span of time between the occurrence of
an event and entry of information into
the appropriate database. Timeliness
can also measure the time from when
the custodial agency receives the data to
the point when the data are entered into
the database.
2. Accuracy: Accuracy reflects the
degree to which the data are error-free,
satisfy internal consistency checks, and
do not exist in duplicate within a single
database. Error means the recorded
value for some data element of interest
is incorrect. Error does not mean the
information is missing from the record.
Erroneous information in a database
cannot always be detected. In some
cases, it is possible to determine that the
values entered for a variable or data
element are not legitimate codes. In
other cases, errors can be detected by
matching with external sources of
information. It may also be possible to
determine that duplicate records have
been entered for the same event (e.g.,
title transfer).
3. Completeness: Completeness
reflects both the number of records that
are missing from the database (e.g.,
events of interest that occurred but were
not entered into the database) and the
number of missing (blank) data elements
in the records that are in a database. In
the crash database, internal
completeness reflects the amount of
specified information captured in each
individual crash record. External crash
completeness reflects the number or
percentage of crashes for which crash
reports are entered into the database.
However, external crash completeness
cannot be determined precisely because
the number of unreported crashes is
unknown. The measures in
this report only address internal
completeness by measuring what is not
missing.
4. Uniformity: Uniformity reflects the
consistency among the files or records
in a database and may be measured
against some independent standard,
preferably a national standard. Within a
State all jurisdictions should collect and
report the same data using the same
definitions and procedures. If the same
data elements are used in different State
files, they should be identical or at least
compatible (e.g., names, addresses,
geographic locations). Data collection
procedures and data elements should
also agree with nationally accepted
guidelines and standards (such as the
Model Minimum Uniform Crash
Criteria [MMUCC]).
5. Integration: Integration reflects the
ability of records in a database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers. Integration differs in one
important respect from the first four
attributes of data quality. By definition,
integration is a performance attribute
that always involves two or more traffic
records subsystems (i.e., databases or
files). For integration, the model
performance measures offer a single
performance measure with database-specific applications that typically are
of interest to many States. The samples
included are of course non-exhaustive.
Many States will be interested in
establishing links between databases
and sub-databases other than those
listed here, and therefore will be
interested in measuring the quality of
those other integrations. Note that some
of the specific examples herein involve
integration of files within databases
rather than the integration of entire
databases.
6. Accessibility: Accessibility reflects
the ability of legitimate users to
successfully obtain desired data.
Measurement of accessibility differs
from that of the other performance
attributes: for those attributes, the
owners and operators of the various
databases and sub-files examine the
data in the files and the internal
workings of the files. In
contrast, accessibility is measured in
terms of customer satisfaction. Every
database and file in a traffic records
system has a set of legitimate users who
are entitled to request and receive data.
The accessibility of the database or sub-file is determined by obtaining the
users’ perceptions of how well the
system responds to their requests. Some
users’ perceptions may be more relevant
to measurement of accessibility than
others’. Each database manager should
decide which of the legitimate users of
the database would be classified as
principal users, whose satisfaction with
the system’s response to requests for
data and other transactions will provide
the basis for the measurement of
accessibility. Thus, the generic
approach to measurement of database
accessibility in the model performance
measures is (1) identifying the
principal users of the database; (2)
querying the principal users to assess
(a) their ability to obtain the data or
other services requested and (b) their
satisfaction with the timeliness of the
response to their request; and (3)
documenting the method of data
collection and the principal users’
responses. How the principal users are
contacted and queried is up to the
database managers. Similarly, the extent
to which the principal users’ responses
are quantified is left to the managers to
determine. However, this measure does
require supporting documentation that
provides evidentiary support to the
claims of accessibility. This measure
would be best used to gauge the impact
of an improvement to a data system.
Surveying the principal users before and
after the rollout of a specific upgrade
would provide the most meaningful
measure of improved database
accessibility.
Performance Measure Criteria
Each model performance measure was
developed in accordance with the
following criteria:
Specific and well-defined: The
measures are appropriate and
understandable.
Performance based: The measures are
defined by data system performance, not
supporting activities or milestones:
‘‘awarded a contract’’ or ‘‘formed a
Traffic Records Coordinating
Committee’’ are not acceptable
performance measures.
Practical: The measures use data that
are readily available at reasonable cost
and can be duplicated.
Timely: The measures should provide
an accurate and current—near real-time—snapshot of the database’s
timeliness, accuracy, completeness,
uniformity, integration, and
accessibility.
Accurate: The measures use data that
are valid and consistent with values that
are properly calculated.
Important: The measures capture the
essence of this performance attribute for
the data system; for example, an
accuracy measure should not be
restricted to a single unimportant data
element.
Universal: The measures are usable by
all States, though not necessarily
immediately.
These criteria take a broad view of
performance measures. For example,
performance on some of the model
measures may not change from year to
year. Once a State has incorporated
uniform data elements, established data
linkages, or provided appropriate data
file access, further improvement may
not be expected. Some States cannot use
all measures. For example, States that
do not currently maintain a statewide
data file cannot use measures based on
this file (see in particular the injury data
files). Some measures require States to
define a set of critical data elements.
Many measures require States to define
their own performance goals or
standards. The model measures should
be a guide for States as they assess their
data systems and work to improve their
performance. Each State should select
performance measures most appropriate
to its circumstances and should define
and modify them to fit its specific
needs.
Performance Measures
Listed below are the 61 measures
classified by data system and
performance attribute.
Crash—Timeliness
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the crash database, the events of
interest are crashes. States must
measure the time between the
occurrence of a crash and the entry of
the report into the crash database. The
model performance measures offer two
approaches to measuring the timeliness
of a crash database:
C–T–1: The median or mean number
of days from (A) the crash date to (B) the
date the crash report is entered into the
database. The median value is the point
at which 50 percent of the crash reports
were entered into the database within a
period defined by the State.
Alternatively, the arithmetic mean
could be calculated for this measure.
C–T–2: The percentage of crash
reports entered into the database within
XX days after the crash. The XX usually
reflects a target or goal set by the State
for entry of reports into the database.
The higher the percentage of reports
entered within XX days, the timelier the
database. Many States set the XX for
crash data entry at 30, 60, or 90 days, but
any other target or goal is equally
acceptable.
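As an illustrative aid only, and not part of the model measures, the arithmetic behind C–T–1 and C–T–2 could be sketched as follows; the sample dates, the 30-day target, and all names in the sketch are hypothetical placeholders that a State would replace with its own data and goal.

from datetime import date
from statistics import mean, median

# Hypothetical sample: (crash date, date the crash report was entered into the database)
records = [
    (date(2011, 1, 3), date(2011, 1, 20)),
    (date(2011, 1, 5), date(2011, 2, 28)),
    (date(2011, 1, 9), date(2011, 1, 15)),
]
days_to_entry = [(entered - crashed).days for crashed, entered in records]

# C-T-1: median (or mean) number of days from crash date to entry date
print("C-T-1 median days:", median(days_to_entry))
print("C-T-1 mean days:", mean(days_to_entry))

# C-T-2: percentage of crash reports entered within XX days of the crash
XX = 30  # placeholder target; a State substitutes its own goal
pct_within = 100 * sum(d <= XX for d in days_to_entry) / len(days_to_entry)
print("C-T-2 percent within", XX, "days:", pct_within)

The same arithmetic applies, with different event dates, to the timeliness measures for the other core databases.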
Crash—Accuracy
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
Determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records entered for
the same event. The model performance
measures offer two approaches to
measuring crash database accuracy:
C–A–1: The percentage of crash
records with no errors in critical data
elements. The State selects one or more
crash data elements it considers critical
and assesses the accuracy of that
element or elements in all of the crash
records entered into the database within
a period defined by the State. Many
States consider the following crash
elements critical:
Environmental elements: Record #,
Location (on/at/distance from; lat/long,
location code), Date, time (can calculate
day of week from this too), Environment
contributing factors (up to 3), Location
description (roadway type, location
type, roadway-contributing factors—up
to 3), Crash type, severity, # involved
units, Harmful events (first harmful,
most harmful).
Vehicle/Unit elements: Crash record
#, vehicle/unit #, VIN decoded sub-file
of values for make, model, year, other
decode values, Sequence of events
(multiple codes), Harmful events (1st
and most harmful for each vehicle),
SafetyNet variables for reportable
vehicles/crashes (carrier name/ID,
additional vehicle codes, tow away due
to damage).
Person elements: Crash record #,
vehicle/unit #, person #, Person type
(driver, occupant, non-occupant),
Demographics (age, sex, other), Seating
position, Protective device type
(occupant protection, helmet, etc.),
Protective device use, Airbag (presence,
deployment: Front, side, both, none),
Injury severity (if this can be sourced
through EMS/Trauma/hospital records).
C–A–2: The percentage of in-State
registered vehicles on the State crash
file with Vehicle Identification Number
(VIN) matched to the State vehicle
registration file.
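For illustration only, C–A–1 could be computed along the following lines; the element names, the severity codes used as an example, and the edit checks are hypothetical placeholders for a State's own critical elements and validation rules.

# Hypothetical crash records keyed by State-defined critical data elements
crash_records = [
    {"crash_severity": "K", "crash_date": "2011-01-03"},
    {"crash_severity": "X", "crash_date": "2011-01-05"},  # "X" is not a legitimate code
]
VALID_SEVERITY_CODES = {"K", "A", "B", "C", "O"}  # example code set only

def element_is_valid(name, value):
    # Each State supplies its own edit checks; this function is a stand-in.
    if name == "crash_severity":
        return value in VALID_SEVERITY_CODES
    return value not in (None, "")

critical_elements = ["crash_severity", "crash_date"]
error_free = sum(
    all(element_is_valid(e, rec.get(e)) for e in critical_elements)
    for rec in crash_records
)
print("C-A-1:", 100 * error_free / len(crash_records), "percent error-free")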
Crash—Completeness
Completeness reflects both the
number of records that are missing from
the database (e.g., events of interest that
occurred but were not entered into the
database) and the number of missing
(blank) data elements in the records that
are in a database. Completeness has
internal and external aspects. In the
crash database, external crash
completeness reflects the number or
percentage of crashes for which crash
reports are entered into the database. It
is impossible, however, to establish
precisely external crash completeness as
the number of unreported crashes
cannot be determined. Internal
completeness can be determined since it
reflects the amount of specified
information captured in each individual
crash record. The model performance
measures offer three approaches to
measuring the internal completeness of
a crash database:
C–C–1: The percentage of crash
records with no missing critical data
elements. The State selects one or more
crash data elements it considers critical
and assesses internal completeness by
dividing the number of records not
missing a critical element by the total
number of records entered into the
database within a period defined by the
State.
C–C–2: The percentage of crash
records with no missing data elements.
The State can assess overall
completeness by dividing the number of
records missing no elements by the total
number of records entered into the
database within a period defined by the
State.
C–C–3: The percentage of unknowns
or blanks in critical data elements for
which unknown is not an acceptable
value. This measure should be used
when States wish to track improvements
on specific critical data values and
reduce the occurrence of illegitimate
null values.
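A minimal sketch of C–C–1 and C–C–3, assuming crash records are available as simple field/value pairs, is shown below; the element names and values are hypothetical, and each State would substitute its own critical elements.

# Hypothetical crash records; element names are placeholders
crash_records = [
    {"location": "I-95 MP 12", "time": "1430", "severity": "B"},
    {"location": "",           "time": "0915", "severity": "Unknown"},
]
critical_elements = ["location", "severity"]

def is_blank(value):
    return value is None or value == ""

# C-C-1: percentage of records with no missing critical data elements
complete = sum(
    all(not is_blank(rec.get(e)) for e in critical_elements)
    for rec in crash_records
)
print("C-C-1:", 100 * complete / len(crash_records), "percent complete")

# C-C-3: percentage of unknown or blank values among critical elements
# for which "Unknown" is not an acceptable value
values = [rec.get(e) for rec in crash_records for e in critical_elements]
bad = sum(is_blank(v) or v == "Unknown" for v in values)
print("C-C-3:", 100 * bad / len(values), "percent unknown or blank")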
Crash—Uniformity
Uniformity reflects the consistency
among the files or records in a database
and may be measured against some
independent standard, preferably a
national standard. The model
performance measures offer one
approach to measure crash database
uniformity:
C–U–1: The number of MMUCC-compliant data elements entered into
the crash database or obtained via
linkage to other database(s). The Model
Minimum Uniform Crash Criteria
(MMUCC) Guideline is the national
standard for crash records.
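As a rough illustration, C–U–1 amounts to counting how many elements in the State's crash data dictionary match the MMUCC Guideline; both element sets below are hypothetical placeholders, not actual MMUCC content.

# Hypothetical element sets; a State would load its own data dictionary
# and the element list from the MMUCC Guideline
mmucc_elements = {"crash_date", "crash_time", "first_harmful_event", "person_age"}
state_crash_elements = {"crash_date", "crash_time", "local_weather_memo", "person_age"}

compliant = state_crash_elements & mmucc_elements
print("C-U-1:", len(compliant), "MMUCC-compliant data elements")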
Crash-Integration
Integration reflects the ability of
records in the crash database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
C–I–1: The percentage of appropriate
records in the crash database that are
linked to another system or file. Linking
the crash database with the five other
core traffic records databases can
provide important information. For
example, a State may wish to determine
the percentage of in-State drivers on
crash records that link to the driver file.
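For illustration, the linkage rate in C–I–1 reduces to the share of crash-record identifiers found in the target file. The sketch below uses hypothetical driver license numbers to show a crash-to-driver linkage; actual linkage methods and identifiers vary by State.

# Hypothetical identifiers only
driver_file_license_numbers = {"D1234567", "D7654321", "D0000001"}
in_state_drivers_on_crash_records = ["D1234567", "D9999999", "D0000001"]

linked = sum(dl in driver_file_license_numbers
             for dl in in_state_drivers_on_crash_records)
pct = 100 * linked / len(in_state_drivers_on_crash_records)
print("C-I-1:", pct, "percent of crash driver records linked to the driver file")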
Crash-Accessibility
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data. The below process
outlines one way of measuring crash
database accessibility:
C–X–1: To measure crash
accessibility: (1) Identify the principal
users of the crash database; (2) Query
the principal users to assess (A) their
ability to obtain the data or other
services requested and (B) their
satisfaction with the timeliness of the
response to their request; (3) Document
the method of data collection and the
principal users’ responses.
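Because C–X–1 is survey-based rather than computed from the database itself, the only arithmetic is a tally of the principal users' responses. The sketch below assumes a simple yes/no questionnaire; the user names and fields are hypothetical, and each State decides how its principal users are contacted and queried.

# Hypothetical survey of principal users of the crash database
survey_responses = [
    {"user": "State DOT safety office",    "obtained_data": True,  "satisfied_with_timeliness": True},
    {"user": "University research center", "obtained_data": True,  "satisfied_with_timeliness": False},
    {"user": "MPO planning staff",         "obtained_data": False, "satisfied_with_timeliness": False},
]
n = len(survey_responses)
print("Able to obtain requested data:",
      100 * sum(r["obtained_data"] for r in survey_responses) / n, "percent")
print("Satisfied with timeliness of response:",
      100 * sum(r["satisfied_with_timeliness"] for r in survey_responses) / n, "percent")

The supporting documentation the measure requires would record the survey method and the raw responses alongside these summary figures.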
Vehicle-Timeliness
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the vehicle database, the State
determines the events of principal
interest that will be used to measure
timeliness. For example, a State may
determine that the transfer of the title of
the vehicle constitutes a critical status
change of that vehicle record. There are
many ways to measure the timeliness of
the entry of a report on the transfer of
a vehicle title or any other critical status
change. The model performance
measures offer two general approaches
to measuring vehicle database
timeliness:
V–T–1: The median or mean number
of days from (A) the date of a critical
status change in the vehicle record to
(B) the date the status change is entered
into the database. The median value is
the point at which 50 percent of the
vehicle record updates were entered
into the database within a period
defined by the State. Alternatively, the
arithmetic mean could be calculated for
this measure.
V–T–2: The percentage of vehicle
record updates entered into the database
within XX days after the critical status
change. The XX usually reflects a target
or goal set by the State for entry of
reports into the database. The higher the
percentage of reports entered within XX
days, the timelier the database. Many
States set the XX for vehicle data entry
at one, five, or 10 days, but any target
or goal is equally acceptable.
Vehicle-Accuracy
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
Determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records that have been
entered for the same event. The model
performance measures offer one
approach to measuring vehicle database
accuracy:
V–A–1: The percentage of vehicle
records with no errors in critical data
elements. The State selects one or more
vehicle data elements it considers
critical and assesses the accuracy of that
element or elements in all of the
vehicle records entered into the
database within a period defined by the
State. Many States have identified the
following critical data elements: Vehicle
Identification Number (VIN), Current
registration status, Commercial or non-CMV, State of registration, State of title,
Stolen flag (as appropriate), Motor
carrier name, Motor carrier ID, and Title
brands.
Vehicle-Completeness
Completeness has internal and
external aspects. For the vehicle
database, external vehicle completeness
reflects the portion of the critical
changes to the vehicle status reported
and entered into the database. It is not
possible to determine precisely external
vehicle database completeness because
one can never know how many critical
status changes occurred but went
unreported. Internal completeness
reflects the amount of specified
information captured by individual
vehicle records. It is possible to
determine precisely internal vehicle
completeness; for example, one can
calculate the percentage of vehicle
records in the database that are missing
one or more critical data elements. The
model performance measures offer four
approaches to measuring the
completeness of a vehicle database:
V–C–1: The percentage of vehicle
records with no missing critical data
elements. The State selects one or more
vehicle data elements it considers
critical and assesses internal
completeness by dividing the number of
records not missing a critical element by
the total number of records entered into
the database within a period defined by
the State.
V–C–2: The percentage of records on
the State vehicle file that contain no
missing data elements. The State can
assess overall completeness by dividing
the number of records missing no
elements by the total number of records
entered into the database within a
period defined by the State.
V–C–3: The percentage of unknowns
or blanks in critical data elements for
which unknown is not an acceptable
value. This measure should be used
when States wish to track improvements
on specific critical data values to reduce
the occurrence of illegitimate null
values.
V–C–4: The percentage of vehicle
records from large trucks and buses that
have all of the following data elements:
Motor Carrier ID, Gross Vehicle Weight
Rating/Gross Combination Weight
Rating, Vehicle Configuration, Cargo
Body Type, and Hazardous Materials
(Cargo Only). This is a measure of
database completeness in specific
critical fields.
Vehicle-Uniformity
Uniformity reflects the consistency
among the files or records in a database
and may be measured against some
independent standard, preferably a
national standard. The model
performance measures offer one general
approach to measuring vehicle database
uniformity.
V–U–1: The number of standards-compliant data elements entered into a
database or obtained via linkage to other
database(s). These standards include the
Model Minimum Uniform Crash Criteria
(MMUCC).
Vehicle-Integration
Integration reflects the ability of
records in the vehicle database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
V–I–1: The percentage of appropriate
records in the vehicle file that are linked
to another system or file. Linking the
vehicle database with the five other core
traffic record databases can provide
important information. For example, a
State may wish to determine the
percentage of vehicle registration
records that link to a driver record.
Vehicle-Accessibility
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data. The below process
outlines one way of measuring the
vehicle database’s accessibility.
V–X–1: To measure accessibility: (1)
Identify the principal users of the
vehicle database; (2) Query the principal
users to assess (A) their ability to obtain
the data or other services requested and
(B) their satisfaction with the timeliness
of the response to their request; (3)
Document the method of data collection
and the principal users’ responses.
Driver-Timeliness
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the driver database, the State
determines the events of principal
interest that shall be used to measure
timeliness. For example, the State may
determine that an adverse action against
a driver’s license constitutes a critical
status change of that driver record.
There are many ways to measure the
timeliness of the entry of a report on an
adverse action against a driver’s license
or any other critical status change. The
model performance measures offer two
approaches to measuring the timeliness
of the driver database. The first is a true
measure of timeliness from time of
conviction to entry into the driver
database, while the second is a measure
internal to the agency with custody of
the driver database.
D–T–1: The median or mean number
of days from (A) the date of a driver’s
adverse action to (B) the date the
adverse action is entered into the
database. This measure represents the
time from final adjudication of a citation
to entry into the driver database within
a period defined by the State. This
process can occur in a number of ways,
from the entry of paper reports and data
conversion to a seamless electronic
process. An entry of a citation
disposition into the driver database
cannot occur until the adjudicating
agency (usually a court) notifies the
repository that the disposition has
occurred. Since the custodial agency of
the driver database in most States has
no control over the transmission of the
disposition notification, many States
may wish to track the portion of driver
database timeliness involving citation
dispositions that they can control. Measure
D–T–2 is offered for that purpose.
D–T–2: The median or mean number
of days from (A) the date of receipt of
citation disposition notification by the
driver repository to (B) the date the
disposition report is entered into the
driver’s record in the database within a
period determined by the State. This
measure represents the internal (to the
driver database) time lapse from the
receipt of disposition information to
entry into the driver database within a
period defined by the State.
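The two driver timeliness measures differ only in their starting point, as the following sketch illustrates; the dates are hypothetical, and each triple represents one citation disposition (date of disposition, date of receipt by the driver repository, date of entry into the driver database).

from datetime import date
from statistics import median

# Hypothetical dispositions: (disposition date, date received by repository, entry date)
events = [
    (date(2011, 2, 1), date(2011, 2, 10), date(2011, 2, 12)),
    (date(2011, 2, 3), date(2011, 3, 1),  date(2011, 3, 2)),
]

# D-T-1: disposition date to entry date (total elapsed time)
d_t_1 = [(entered - disposed).days for disposed, received, entered in events]
# D-T-2: receipt by the driver repository to entry date (time under the agency's control)
d_t_2 = [(entered - received).days for disposed, received, entered in events]

print("D-T-1 median days:", median(d_t_1))
print("D-T-2 median days:", median(d_t_2))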
Driver-Accuracy
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
Determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records that have been
entered for the same event. The model
performance measures offer two
approaches to measuring driver
database accuracy:
D–A–1: The percentage of driver
records with no errors in critical data
elements. The State selects one or more
driver data elements it considers critical
and assesses the accuracy of that
element or elements in all of the driver
records entered into the database within
a period defined by the State. Several
States have identified the following
critical data elements: Name, Date of
birth, Sex, Driver license number, State
of driver license issuance, Date license
issued or renewed, Social Security
Number, License type, Restrictions,
Crash involvement, Conviction offenses,
Violation date per event, Conviction
date per event, Driver control actions
(Suspensions, Revocations,
Withdrawals), and Date of each action.
D–A–2: The percentage of records on
the State driver file with Social Security
Numbers (SSN) successfully verified
using Social Security Online
Verification (SSOLV) or other means.
Driver-Completeness
Completeness has internal and
external aspects. For the driver
database, external completeness reflects
the portion of critical driver status
changes that are reported and entered
into the database. It is not possible to
determine precisely the external
completeness of driver records because
one can never know how many critical
driver status changes occurred but went
unreported. Internal completeness
reflects the amount of specified
information captured in individual
driver records. It is possible to
determine precisely internal driver
record completeness. One can, for
example, calculate the percentage of
driver records in the database that are
missing one or more critical data
elements. The model performance
measures offer three approaches to
measuring the internal completeness of
the driver database:
D–C–1: The percentage of driver
records with no missing critical data
elements. The State selects one or more
driver elements it considers critical and
assesses internal completeness by
dividing the number of records not
missing a critical element by the total
number of records entered into the
database within a period defined by the
State.
D–C–2: The percentage of driver
records with no missing data elements.
The State can assess overall
completeness by dividing the number of
records missing no elements by the total
number of records entered into the
database within a period defined by the
State.
D–C–3: The percentage of unknowns
or blanks in critical data elements for
which unknown is not an acceptable
value. This measure should be used
when States wish to track improvements
on specific critical data values and
reduce the occurrence of illegitimate
null values.
Driver-Uniformity
Uniformity reflects the consistency
among the files or records in a database
and may be measured against an
independent standard, preferably a
national standard. The model
performance measures offer one general
approach to measuring driver database
uniformity:
D–U–1: The number of standards-compliant data elements entered into
the driver database or obtained via
linkage to other database(s). The
relevant standards include MMUCC.
Driver-Integration
Integration reflects the ability of
records in the driver database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
D–I–1: The percentage of appropriate
records in the driver file that are linked
to another system or file. Linking the
driver database with the five other core
traffic record databases can provide
important information. For example, a
State may wish to determine the
percentage of drivers in crashes linked
to the adjudication file.
Driver-Accessibility
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data. The below process
outlines one way of measuring the
driver database’s accessibility.
D–X–1: To measure accessibility: (1)
Identify the principal users of the driver
database; (2) Query the principal users
to assess (A) their ability to obtain the
data or other services requested and (B)
their satisfaction with the timeliness of
the response to their request; (3)
Document the method of data collection
and the principal users’ responses.
Roadway-Timeliness
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the roadway database, the State
determines the events of principal
interest that will be used to measure
timeliness. A State may determine that
the completion of periodic collection of
a critical roadway data element or
elements constitutes a critical status
change of that roadway record. For
example, one critical roadway data
element that many States periodically
collect is annual average daily traffic
(AADT). Roadway database timeliness
can be validly gauged by measuring the
time between the completion of data
collection and the entry into the
database of AADT for roadway segments
of interest. There are many ways to do
this. The model performance measures
offer two general approaches to
measuring roadway database timeliness:
R–T–1: The median or mean number
of days from (A) the date a periodic
collection of a critical roadway data
element is complete (e.g., Annual
Average Daily Traffic) to (B) the date the
updated critical roadway data element
is entered into the database. The median
value is the duration within which 50
percent of the changes to critical
roadway elements were updated in the
database. Alternatively, the arithmetic
mean is the average number of days
between the completion of the
collection of critical roadway elements
and when the data are entered into the
database.
R–T–2: The median or mean number
of days from (A) roadway project
completion to (B) the date the updated
critical data elements are entered into
the roadway inventory file. The median
value is the point at which 50 percent
of the updated critical data elements
from a completed roadway project were
entered into the roadway inventory file.
Alternatively, the arithmetic mean
could be calculated for this measure.
Each State will determine its short list
of critical data elements, which should
be a subset of the Model Inventory of Roadway Elements (MIRE). For example, it
could be some or all of the elements
required for Highway Performance
Monitoring System (HPMS) sites. The
database should be updated at regular
intervals or when a change is made to
the inventory. For example, when a
roadway characteristic or attribute (e.g.,
traffic counts, speed limits, signs,
markings, lighting, etc.) that is
contained in the inventory is modified,
the inventory should be updated within
a reasonable period.
Roadway-Accuracy
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
Determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records that have been
entered for the same event. The model
performance measures offer one
approach to measuring roadway
database accuracy:
R–A–1: The percentage of all road
segment records with no errors in
critical data elements. The State selects
one or more roadway data elements it
considers critical and assesses the
accuracy of that element or elements in
all of the roadway records within a
period defined by the State. Many States
consider the HPMS standards to be
critical.
Roadway-Completeness
Completeness has internal and
external aspects. For the roadway
database, external roadway
completeness reflects the portion of
road segments in the State for which
data are collected and entered into the
database. It is very difficult to determine
precisely external roadway
completeness because many States do
not know the characteristics or even the
existence of roadway segments that are
not State-owned, State-maintained, or
reported in the HPMS. Internal
completeness reflects the amount of
specified information that is captured in
individual road segment records. It is
possible to determine precisely internal
roadway completeness. One can, for
example, calculate the percentage of
roadway segment records in the
database that are missing one or more
critical elements (e.g., number of traffic
lanes). The model performance measures
offer four general approaches to
measuring the roadway database’s
internal completeness:
R–C–1: The percentage of road
segment records with no missing critical
data elements. The State selects one or
more roadway elements it considers
critical and assesses internal
completeness by dividing the number of
records not missing a critical element by
the total number of roadway records in
the database.
R–C–2: The percentage of public road
miles or jurisdictions identified on the
State’s basemap or roadway inventory
file. A jurisdiction may be defined by
the limits of a State, county, parish,
township, Metropolitan Planning
Organization (MPO), or municipality.
R–C–3: The percentage of unknowns
or blanks in critical data elements for
which unknown is not an acceptable
value. This measure should be used
when States wish to track improvements
on specific critical data elements and
reduce the occurrence of illegitimate
null values.
R–C–4: The percentage of total
roadway segments that include location
coordinates, using measurement frames
such as a GIS basemap. This is a
measure of the database’s overall
completeness.
Roadway-Uniformity
Uniformity reflects the consistency
among the files or records in a database
and may be measured against some
independent standard, preferably a
national standard. The model
performance measures offer one general
approach to measuring roadway
database uniformity:
R–U–1: The number of Model
Inventory of Roadway Elements (MIRE)compliant data elements entered into a
database or obtained via linkage to other
database(s).
Roadway-Integration
Integration reflects the ability of
records in the roadway database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
R–I–1: The percentage of appropriate
records in a specific file in the roadway
database that are linked to another
system or file. For example, a State may
wish to determine the percentage of
records in the State’s bridge inventory
that link to the basemap file.
Roadway-Accessibility
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data. The below process
outlines one way of measuring roadway
database accessibility:
R–X–1: To measure accessibility of a
specific file in the roadway database: (1)
Identify the principal users of the file;
(2) Query the principal users to assess
(A) their ability to obtain the data or
other services requested and (B) their
satisfaction with the timeliness of the
response to their request; (3) Document
the method of data collection and the
principal users’ responses.
Citation/Adjudication-Timeliness
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the citation and adjudication
databases, the State determines the
events of principal interest that will be
used to measure timeliness. Many States
will include the critical events of
citation issuance and citation
disposition among those events of
principal interest used to track
timeliness. There are many ways to
measure the timeliness of either citation
issuance or citation disposition. The
model performance measures offer one
general approach to measuring citation
and adjudication database timeliness:
C/A–T–1: The median or mean
number of days from (A) the date a
citation is issued to (B) the date the
citation is entered into the statewide
citation database, or a first available
repository. The median value is the
point at which 50 percent of the citation
records were entered into the citation
database within a period defined by the
State. Alternatively, the arithmetic mean
could be calculated for this measure.
C/A–T–2: The median or mean
number of days from (A) the date of
charge disposition to (B) the date the
charge disposition is entered into the
statewide adjudication database, or a
first available repository. The median
value is the point at which 50 percent
of the charge dispositions were entered
into the statewide database.
Alternatively, the arithmetic mean
could be calculated for this measure.
Note: Many States do not have statewide
databases for citation or adjudication records.
Therefore, in some citation and adjudication
data systems, timeliness and other attributes
of data quality should be measured at
individual first available repositories.
Citation/Adjudication-Accuracy
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
Determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records that have
been entered for the same event. The
State selects one or more citation data
elements and one or more charge
disposition data elements it considers
critical and assesses the accuracy of
those elements in all of the citation and
adjudication records entered into the
database within a period of interest. The
model performance measures offer two
approaches to measuring citation and
adjudication database accuracy:
C/A–A–1: The percentage of citation
records with no errors in critical data
elements. The State selects one or more
citation data elements it considers
critical and assesses the accuracy of that
element or elements in all of the citation
records entered into the database within
a period defined by the State. Below is
a list of suggested critical data elements.
C/A–A–2: The percentage of charge
disposition records with no errors in
critical data elements. The State selects
one or more charge disposition data
elements it considers critical and
assesses the accuracy of that element or
elements for the charge-disposition
records entered into the database within
a period defined by the State. Many
States have identified the following as
critical data elements: Critical elements
from the Issuing Agency include the
offense/charge code, date, time, officer,
Agency, citation number, crash report
number (as applicable), and BAC (as
applicable). Critical elements from the
Citation Data include the Offender’s
name, driver license number, age, and
sex. Critical data elements from the
Charge Disposition/Adjudication
include the offender’s name, driver
license number, age, sex, citation
number, court, date of receipt, date
of disposition, disposition, and date of
transmittal to DMV (as applicable).
Citation/Adjudication-Completeness *
Completeness has internal and
external aspects. For the citation/
adjudication databases, external
completeness can only be assessed by
identifying citation numbers for which
there are no records. Missing citations
should be monitored at the place of first
repository. Internal completeness
reflects the amount of specified
information that is captured in
individual citation and charge
disposition records. It is possible to
determine precisely internal citation
and adjudication completeness. One
can, for example, calculate the
percentage of citation records in the
database that are missing one or more
critical data elements. The model
performance measures offer three
approaches to measuring internal
completeness:
C/A–C–1: The percentage of citation
records with no missing critical data
elements. The State selects one or more
citation data elements it considers
critical and assesses internal
completeness by dividing the number of
records not missing a critical element by
the total number of records entered into
the database within a period defined by
the State.
C/A–C–2: The percentage of citation
records with no missing data elements.
The State can assess overall
completeness by dividing the number of
records missing no elements by the total
number of records entered into the
database.
C/A–C–3: The percentage of
unknowns or blanks in critical citation
data elements for which unknown is not
an acceptable value. This measure
should be used when States wish to
track improvements on specific critical
data elements and reduce the
occurrence of illegitimate null values.
Note: These measures of completeness are
also applicable to the adjudication file.
Citation/Adjudication-Uniformity *
Uniformity reflects the consistency
among the files or records in a database
and may be measured against some
independent standard, preferably a
national standard. The model
performance measures offer two general
approaches to measuring database
uniformity:
C/A–U–1: The number of Model
Impaired Driving Record Information
System (MIDRIS)-compliant data
elements entered into the citation
database or obtained via linkage to other
database(s).
C/A–U–2: The percentage of citation
records entered into the database with
common uniform statewide violation
codes. The State identifies the number
of citation records with common
uniform violation codes entered into the
database within a period defined by the
State and assesses uniformity by
dividing this number by the total
number of citation records entered into
the database during the same period.
* Note: These measures of uniformity are
also applicable to the adjudication file.
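As an illustration of C/A–U–2, the calculation is simply the share of citation records whose violation code appears in the statewide uniform code table; the codes and records below are hypothetical placeholders.

# Hypothetical statewide violation code table and citation records
statewide_violation_codes = {"21-801", "21-902", "13-401"}
citation_records = [
    {"citation_number": "A001", "violation_code": "21-801"},
    {"citation_number": "A002", "violation_code": "LOCAL-17"},  # non-uniform local code
    {"citation_number": "A003", "violation_code": "21-902"},
]

uniform = sum(r["violation_code"] in statewide_violation_codes
              for r in citation_records)
print("C/A-U-2:", 100 * uniform / len(citation_records),
      "percent with uniform statewide codes")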
Citation/Adjudication-Integration *
Integration reflects the ability of
records in the citation database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
C/A–I–1: The percentage of
appropriate records in the citation files
that are linked to another system or file.
Linking the citation database with the
five other core traffic record databases
can provide important information. For
example, a State may wish to determine
the percentage of DWI citations that
have been adjudicated.
* Note: This measure of integration is also
applicable to the adjudication file.
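As an illustrative sketch only, the linkage percentage could be computed as below, assuming the citation and adjudication files share a citation-number field; the field name and the DWI filtering step are hypothetical.

    def linkage_rate(citations, adjudications, key="citation_number"):
        """C/A-I-1 sketch: percentage of appropriate citation records that are
        linked to a record in another file (here, the adjudication file)."""
        adjudicated = {a[key] for a in adjudications if a.get(key)}
        appropriate = [c for c in citations if c.get(key)]  # records eligible for linkage
        linked = sum(1 for c in appropriate if c[key] in adjudicated)
        return 100.0 * linked / len(appropriate) if appropriate else 0.0

    # Restricting the citation file to DWI charges before calling linkage_rate()
    # would yield the percentage of DWI citations that have been adjudicated.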
Citation/Adjudication-Accessibility *
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data. The below process
outlines one way of measuring the
citation database’s accessibility.
C/A–X–1: To measure accessibility of
the citation database: (1) Identify the
principal users of the citation database;
(2) Query the principal users to assess
(A) their ability to obtain the data or
other services requested and (B) their
satisfaction with the timeliness of the
response to their request; (3) Document
the method of data collection and the
principal users’ responses. The EMS/
Injury Surveillance database is actually
a set of related databases. The principal
files of interest are: Pre-hospital
Emergency Medical Services (EMS)
data, Hospital Emergency Department
Data Systems, Hospital Discharge Data
Systems, State Trauma Registry File,
and State Vital Records. States typically
wish to measure data quality separately
for each of these files. These measures
may be applied to each of the EMS/
Injury Surveillance databases
individually.
Injury Surveillance-Timeliness *
Timeliness always reflects the span of
time between the occurrence of some
event and the entry of information from
the event into the appropriate database.
For the EMS/Injury Surveillance
databases, the State determines the
events of principal interest that will be
used to measure timeliness. A State
may, for example, determine that the
occurrence of an EMS run constitutes a
critical event to measure the timeliness
of the EMS database. As another
example, a State can select the
occurrence of a hospital discharge as the
critical event to measure the timeliness
of the hospital discharge data system.
There are many ways to measure the
timeliness of the EMS/Injury
Surveillance databases. The model
performance measures offer two general
approaches to measuring timeliness:
I–T–1: The median or mean number
of days from (A) the date of an EMS run
to (B) the date when the EMS patient
care report is entered into the database.
The median value is the point at which
50 percent of the EMS run reports were
entered into the database within a
period defined by the State.
Alternatively, the arithmetic mean
could be calculated for this measure.
I–T–2: The percentage of EMS patient
care reports entered into the State EMS
discharge file within XX* days after the
EMS run. The XX usually reflects a
target or goal set by the State for entry
of reports into the database. The higher
percentage of reports entered within XX
days, the timelier the database. Many
States set the XX for EMS data entry at
5, 30, or 90 days, but any target or goal
is equally acceptable.
* Note: This measure of timeliness is also
applicable to the following files: State
Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, &
State Vital Records.
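For illustration, I-T-1 and I-T-2 might be computed as follows, assuming each EMS patient care report supplies the run date and the date of entry into the database; the dates and the 30-day target shown are hypothetical examples.

    from datetime import date
    from statistics import mean, median

    # Hypothetical (run_date, entry_date) pairs for reports entered during the period.
    reports = [(date(2010, 6, 1), date(2010, 6, 12)),
               (date(2010, 6, 3), date(2010, 7, 20)),
               (date(2010, 6, 5), date(2010, 6, 9))]

    lags = [(entered - run).days for run, entered in reports]
    target = 30  # the State-defined "XX" days
    print("I-T-1 median days:", median(lags))
    print("I-T-1 mean days:", mean(lags))
    print("I-T-2 % within target:", 100.0 * sum(1 for d in lags if d <= target) / len(lags))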
Injury Surveillance-Accuracy *
Accuracy reflects the number of errors
in information in the records entered
into a database. Error means the
recorded value for some data element of
interest is incorrect. Error does not
mean the information is missing from
the record. Erroneous information in a
database cannot always be detected.
Methods for detecting errors include: (1)
determining that the values entered for
a variable or element are not legitimate
codes, (2) matching with external
sources of information, and (3)
identifying duplicate records that have
been entered for the same event. The model
performance measures offer one general
approach to measuring the accuracy of
the injury surveillance databases that is
applicable to each of the five principal
files:
I–A–1: The percentage of EMS patient
care reports with no errors in critical
data elements. The State selects one or
more EMS data elements it considers
critical—response times, for example—
and assesses the accuracy of that
element or elements for all the records
entered into the database within a
period defined by the State. Critical
EMS/Injury Surveillance Data elements
used by many States include: Hospital
Emergency Department/Inpatient Data
elements such as E-code, date of birth,
name, sex, admission date/time, zip
code of hospital, emergency dept.
disposition, inpatient disposition,
diagnosis codes, and discharge date/
time. Elements from the Trauma
Registry Data (National Trauma Data
Bank [NTDB] standard) such as E-code,
date of birth, name, sex, zip code of
injury, admission date, admission time,
inpatient disposition, diagnosis codes,
zip code of hospital, discharge date/
time, and EMS patient report number.
Data from the EMS Data (National
Emergency Medical Services
Information System [NEMSIS] standard)
includes date of birth, name, sex,
incident date/time, scene arrival date/
time, provider’s primary impression,
injury type, scene departure date/time,
destination arrival date/time, county/zip
code of hospital, and county/zip code of
injury. Critical data elements from the
Death Certificate (Mortality) Data
(National Center for Health Statistics
[NCHS] standard) include date of birth,
date of death, name, sex, manner of
death, underlying cause of death,
contributory cause of death, county/zip
code of death, and location of death.
* Note: This measure of accuracy is also
applicable to the following files: State
Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, &
State Vital Records.
Injury Surveillance-Completeness*
Completeness has internal and
external aspects. For EMS/Injury
Surveillance databases, external
completeness reflects the portion of
critical events (e.g., EMS runs, hospital
admissions, etc.) that are reported and
entered into the databases. It is not
possible to determine precisely external
EMS/injury surveillance completeness
because one can never know how many
critical events occurred but went
unreported. Internal completeness
reflects the amount of specified
information that is captured in
individual EMS run records, State
Emergency Department records, State
Hospital Discharge File records, and
State Trauma Registry File records. It is
possible to determine precisely internal
EMS/Injury Surveillance completeness.
One can, for example, calculate the
percentage of EMS run records in the
database that are missing one or more
critical data elements. The model
performance measures offer three
approaches to measuring completeness
for each of the files:
I–C–1: The percentage of EMS patient
care reports with no missing critical
data elements. The State selects one or
more EMS data elements it considers
critical and assesses internal
completeness by dividing the number of
EMS run records not missing a critical
element by the total number of EMS run
records entered into the database within
a period defined by the State.
I–C–2: The percentage of EMS patient
care reports with no missing data
elements. The State can assess overall
completeness by dividing the number of
records missing no elements by the total
number of records entered into the
database.
I–C–3: The percentage of unknowns
or blanks in critical data elements for
which unknown is not an acceptable
value. This measure should be used
when States wish to track improvement
on specific critical data values and
reduce the occurrence of illegitimate
null values. E-code, for example, is an
appropriate EMS/Injury Surveillance
data element that may be tracked with
this measure.
* Note: These measures of completeness
are also applicable to the following files:
State Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, &
State Vital Records.
Injury Surveillance-Uniformity
Uniformity reflects the consistency
among the files or records in a database
and may be measured against an
independent standard, preferably a
national standard. The model
performance measures offer one
approach to measuring uniformity that
can be applied to each discrete file
using the appropriate standard as
enumerated below.
I–U–1: The percentage of National
Emergency Medical Services
Information System (NEMSIS)-
compliant data elements on EMS patient
care reports entered into the database or
obtained via linkage to other
database(s).
I–U–2: The number of National
Emergency Medical Services
Information System (NEMSIS)compliant data elements on EMS patient
care reports entered into the database or
obtained via linkage to other
database(s).
The national standards for many of
the other major EMS/Injury Surveillance
database files are: The Universal Billing
04 (UB04) for State Emergency
Department Discharge File and State
Hospital Discharge File; the National
Trauma Data Standards (NTDS) for State
Trauma Registry File; and the National
Association for Public Health Statistics
and Information Systems (NAPHSIS) for
State Vital Records.
Injury Surveillance-Integration*
Integration reflects the ability of
records in the EMS database to be
linked to a set of records in another of
the six core databases—or components
thereof—using common or unique
identifiers.
I–I–1: The percentage of appropriate
records in the EMS file that are linked
to another system or file. Linking the
EMS file to other files in the EMS/Injury
Surveillance database or any of the five
other core databases can provide
important information. For example, a
State may wish to determine the
percentage of records in the trauma file
that are linked to the EMS file.
* Note: This measure of integration is also
applicable to the following files: State
Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, &
State Vital Records.
Injury Surveillance-Accessibility *
Accessibility reflects the ability of
legitimate users to successfully obtain
desired data.
I–X–1: To measure accessibility of the
EMS file: (1) Identify the principal users
of the EMS file, (2) Query the principal
users to assess (A) their ability to obtain
the data or other services requested and
(B) their satisfaction with the timeliness
of the response to their request, and (3)
Document the method of data collection
and the principal users’ responses.
* Note: This measure of accessibility is also
applicable to the State Emergency Dept. File,
the State Hospital Discharge File, the State
Trauma Registry File, & State Vital Records.
Recommendations
While use of the performance
measures is voluntary, States will be
better able to track the success of
upgrades and identify areas for
improvement in their traffic records
systems if they elect to utilize the
measures appropriate to their
circumstances. Adopting the measures
will also put States ahead of the curve
should performance metrics be
mandated in any future legislation. The
measures are not exhaustive. They
describe what to measure and suggest
how to measure it, but do not
recommend numerical performance
goals. The measures attempt to capture
one or two key performance features of
each data system performance attribute.
States may wish to use additional or
alternative measures to address specific
performance issues.
States that elect to use these measures
to demonstrate progress in a particular
system should start using them
immediately. States should begin by
judiciously selecting the appropriate
measures and modifying them as
needed. States should use only the
measures for the data system
performance attributes they wish to
monitor or improve. No State is
expected to use a majority of the
measures, and States may wish to
develop their own additional measures
to track State-specific issues or
programs.
Once States have developed their
specific performance indices, they
should be measured consistently to
track changes over time. Since the
measures will vary considerably from
State to State, it is unlikely that they
could be used for any meaningful
comparisons between States. In any
event, NHTSA does not anticipate using
the measures for interstate comparison
purposes.
Notes on Terminology Used
The following terms are used
throughout the document:
Data system: One of the six
component State traffic records
databases, such as crash, injury
surveillance, etc.
Data file (such as ‘‘crash file’’ or ‘‘State
Hospital Discharge file’’): A data system
may contain a single data file—such as
a State’s driver file—or more than one,
e.g., the injury system has several data
files.
Record: All the data entered in a file
for a specific event (a crash, a patient
hospital discharge, etc.).
Data element: Individual fields coded
within each record.
Data element code value: The
allowable code values or attributes for a
data element.
Data linkages: The links established
by matching at least one data element in
a record in one file with the
corresponding element or elements in
one or more records in another file or
files.
State: The 50 States, the District of
Columbia, Puerto Rico, the territories,
and the Bureau of Indian Affairs. These
are the jurisdictions eligible to receive
State data improvement grants.
Defining and Calculating Performance
Measures
Specified number of days: Some
measures are defined in terms of a
specified number of days (such as 30,
60, or 90). Each State can establish its
own period for these measures.
Defining periods of interest: States
will need to define periods of interest
for several of the measures. These
periods should be of an appropriate
length for the data being gathered. A
State may wish to calculate the
timeliness of its crash database on an
annual basis. The same State may also
wish to calculate the timeliness of its
other databases (e.g., driver, vehicle) on
a monthly or weekly basis because of
their ability to generate revenue. These
decisions are left to the State to make
per the situation and their data needs.
Critical data elements: Some
measures are defined using a set of
‘‘critical data elements.’’ Unless a
measure is specifically defined in a
national standard, each State can define
its own set of critical data elements.
Data elements that many States use are
presented as examples for each data
system.
When measures should be calculated:
Many measures can be calculated and
monitored using data from some period
of time such as a month, a quarter, or
a year. All measures should be
calculated and monitored at least
annually. A few measures are defined
explicitly for annual files. States should
calculate measures at the same time or
times each year for consistency in
tracking progress.
Missing data: Some completeness
measures are defined in terms of
‘‘missing’’ data, such as C–C–1—the
percentage of crash records with no
missing critical data elements.
‘‘Missing’’ means that the data element is
not coded—nothing was entered. Many
data elements have null codes that
indicate that information is not
available for some reason. Typical null
codes are ‘‘not available,’’ ‘‘not
documented,’’ ‘‘not known,’’ or ‘‘not
recorded.’’ A data element with a null
value is not counted as missing data
because it does contain a valid code,
even though the data element may
contain no useful information. The
States should determine under what
circumstances a null value is valid for
a particular data element. For accuracy
measures, a data element with missing
data or a null value is not considered an
error. It is up to the State—specifically,
the custodians of a database—to decide
if null codes should be accepted as
legitimate entries or treated as missing
values.
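Purely as an illustration, the distinction among missing data, accepted null codes, and substantive values could be encoded as follows; the null-code list is hypothetical and, as stated above, is for each State to decide.

    # Hypothetical null codes a State might accept for a given data element.
    NULL_CODES = {"not available", "not documented", "not known", "not recorded"}

    def classify(value, null_codes=NULL_CODES):
        """Distinguish missing data (nothing entered) from an accepted null code
        and from a substantive value, per the conventions described above."""
        if value is None or value == "":
            return "missing"   # counts against completeness measures
        if str(value).strip().lower() in null_codes:
            return "null"      # a valid code, though it may carry no useful information
        return "value"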
How to define ‘‘entered into a
database’’: Some records do not have all
their data entered into a database at the
same time. In general, an event is
considered to be ‘‘entered into a
database’’ when a specified set of critical
data elements has been entered. In fact,
many databases will not accept a record
until all data from a critical set are
available. States may define ‘‘entered
into a database’’ using their own data
entry and data access processes.
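A minimal sketch of that convention, assuming a hypothetical critical set, is shown below; each State would substitute its own definition.

    CRITICAL_SET = ("crash_date", "location", "severity")  # hypothetical critical elements

    def is_entered(record, critical=CRITICAL_SET):
        """A record counts as "entered into the database" once every element in
        the State-specified critical set has been supplied."""
        return all(record.get(e) not in (None, "") for e in critical)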
How to calculate a timeliness
measure: For all systems, there will be
a period of time between the event
generating the record and when the
information is entered into the file (or
is available for use). The model
performance measures include several
methods to define a single number that
captures the entire distribution of times.
Each method is appropriate in different
situations.
The median time for events to be
entered into the file can be calculated as
the point at which 50 percent of events
within a period of interest are entered
into the file.
The mean time for events to be
entered into the file (counting all
events). The mean can be calculated as
the average (the sum of the times for all
events divided by the number of
events).
The percentage of events on file
within some fixed time (such as 24
hours or 30 days).
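For illustration, the three single-number summaries could be produced from the same set of event and entry dates as sketched below; the field layout and the 30-day window are hypothetical.

    from statistics import mean, median

    def timeliness_summary(pairs, window_days=30):
        """Median, mean, and percentage-within-a-fixed-time summaries of the
        distribution of entry lags for one period of interest. Each pair is
        (event_date, entry_date)."""
        lags = [(entered - event).days for event, entered in pairs]
        if not lags:
            return {}
        return {
            "median_days": median(lags),
            "mean_days": mean(lags),
            "pct_within_window": 100.0 * sum(1 for d in lags if d <= window_days) / len(lags),
        }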
Tradeoffs between timeliness and
completeness: Generally speaking, the
relationship between timeliness and
completeness is inversely proportional:
The more timely the data, the less
complete it is and vice versa. This is
because many data files have records or
data elements added well past the date
of the event producing the record, so the
files may be incomplete when the
performance measure is calculated.
There are three methods of choosing
data to calculate the performance
measures that offer different
combinations of timeliness and
completeness. Depending on the need
for greater timeliness or completeness,
users should choose accordingly.
For example, if timeliness is
important, the first Crash Completeness
measure C–C–1—the percentage of crash
records with no missing critical data
elements—could be
calculated in the following manner: (1)
Select the period: Calendar year 2007
crash file; (2) Select the date for
calculation: April 1 of the following
year. So calculate using the 2007 crash
file as it exists on April 1, 2008; (3)
Calculate: Take all crashes from 2007 on
file as of April 1, 2008; calculate the
percentage with missing data for one or
more critical data elements.
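A rough sketch of this as-of-date calculation, using hypothetical record fields (crash_date, entered_date) and the 2007/April 1, 2008 example above, might read:

    from datetime import date

    def pct_missing_critical(crash_records, critical,
                             period_year=2007, as_of=date(2008, 4, 1)):
        """Simple method for C-C-1: take all crashes from the period that are on
        file as of the calculation date, then find the share missing one or more
        critical data elements."""
        on_file = [r for r in crash_records
                   if r["crash_date"].year == period_year and r["entered_date"] <= as_of]
        missing = sum(1 for r in on_file
                      if any(r.get(e) in (None, "") for e in critical))
        return 100.0 * missing / len(on_file) if on_file else 0.0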
This method offers several
advantages. It is easy to understand and
use, and can produce performance
measures in a timely manner. Its
disadvantage is that performance
measures calculated fairly soon after the
end of the data file’s period may not be
based on complete data. For example,
NHTSA’s Fatality Analysis Reporting
System (FARS) is not closed and
complete for a full year; the 2007 file
was not closed until Dec. 31, 2008.
Timeliness measures will exclude any
records that have not yet been entered
by the calculation date, so timeliness
measures may make the file appear to be
timelier than it will be when the file is
closed and completed. Completeness
measures will exclude any information
entered after the calculation date for
records on file. Completeness measures
calculated on open files will make those
files appear less complete than
measures calculated on files that are
closed and completed.
When completeness is most important
the performance measure could be
calculated after a file (say an annual file)
is closed and no further information can
be added to it. This method reverses the
simple method’s advantages and
disadvantages, providing performance
measures that are accurate but not
timely. The final FARS file, for example,
is a very complete database. Its
completeness, however, comes at the
expense of timeliness. In comparison,
the annual FARS file is less complete,
but is more timely.
Another, preferable, method calculates
a performance measure using all records
entered into a file during a specified
period. The timeliness measures
produced by this method will be
accurate, but the completeness and
accuracy measures may not be, because the
records entered during a given time
period may not be complete when the
measure is calculated. For example, the
Crash Timeliness measure C–T–1—the
median or mean number of days from
(A) the crash date to (B) the date the crash
report is entered into the database—could be
calculated as follows: (1) Select the
period: calendar year 2007; (2) Take all
records entered into the State crash file
during the period: if the period is
calendar year 2007, the crashes could
have occurred in 2007 or 2006 (or
perhaps even earlier depending on the
State’s reporting criteria); (3) Calculate
the measure: The median or mean time
between the crash date and the date
when entered into the crash file.
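Under the same hypothetical field names, this entered-during-a-period approach to C-T-1 could be sketched as:

    from datetime import date
    from statistics import mean, median

    def ct1_entered_during(crash_records, start=date(2007, 1, 1), end=date(2007, 12, 31)):
        """C-T-1 sketch using all records entered into the crash file during the
        period, regardless of when the crash itself occurred."""
        lags = [(r["entered_date"] - r["crash_date"]).days
                for r in crash_records if start <= r["entered_date"] <= end]
        if not lags:
            return {}
        return {"median_days": median(lags), "mean_days": mean(lags)}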
States should choose methods that are
accurate, valid, reliable, and useful.
They may choose different methods for
different measures. Or they may use two
different methods for the same measure,
for example calculating a timeliness
measure first with an incomplete file
(for example the 2007 crash file on April
1, 2008) and again with the complete
and closed file (the 2007 crash file on
January 1, 2009, after it is closed). Once
methods have been selected for a
measure, States should be consistent
and use the same methods to calculate
that measure using the same files in the
same way each year. To accurately
gauge progress, States must compare
measures calculated by the same
method using the same files for
successive years.
Privacy issues in file access and
linkage: Data file access and linkage
both raise broad issues of individual
privacy and the use of personal
identifiers. The Driver Privacy
Protection Act (DPPA), the Health
Insurance Portability and
Accountability Act (HIPAA), and other
regulations restrict the release of
personal information on traffic safety
data files. Information in many files may
be sought for use in legal actions. All
data file linkage and all data file access
actions must consider these privacy
issues.
Authority: 44 U.S.C. Section 3506(c)(2)(A).
Jeffrey Michael,
Acting Associate Administrator, National
Center for Statistics and Analysis.
[FR Doc. 2011–8738 Filed 4–11–11; 8:45 am]
BILLING CODE 4910–59–P
DEPARTMENT OF THE TREASURY
Submission for OMB Review;
Comment Request
April 7, 2011.
The Department of the Treasury will
submit the following public information
collection requirements to OMB for
review and clearance under the
Paperwork Reduction Act of 1995,
Public Law 104–13 on or after the date
of publication of this notice. A copy of
the submissions may be obtained by
calling the Treasury Bureau Clearance
Officer listed. Comments regarding
these information collections should be
addressed to the OMB reviewer listed
and to the Treasury PRA Clearance
Officer, Department of the Treasury,
[Federal Register Volume 76, Number 70 (Tuesday, April 12, 2011)]
[Notices]
[Pages 20438-20448]
From the Federal Register Online via the Government Printing Office [www.gpo.gov]
[FR Doc No: 2011-8738]
-----------------------------------------------------------------------
DEPARTMENT OF TRANSPORTATION
National Highway Traffic Safety Administration
[Docket No. NHTSA-2011-0044]
Proposed Model Performance Measures for State Traffic Records
Systems
AGENCY: National Highway Traffic Safety Administration (NHTSA),
Department of Transportation (DOT).
ACTION: Notice.
-----------------------------------------------------------------------
SUMMARY: This notice announces the publication of Model Performance
Measures for State Traffic Records Systems DOT HS 811 44, which
proposes model performance measures for State traffic record systems to
monitor the development and implementation of traffic record data
[[Page 20439]]
systems, strategic plans, and data-improvement grants. These model
performance measures are voluntary and are intended to help States monitor
and improve the quality of the data in their traffic records systems.
DATES: Written comments may be submitted to this agency and must be
received no later than June 13, 2011.
ADDRESSES: You may submit comments identified by DOT Docket ID number
NHTSA-2011-0044 by any of the following methods:
Electronic Submissions: Go to https://www.regulations.gov.
Follow the online instructions for submitting comments.
Fax: 202-366-2746.
Mail: Docket Management Facility, M-30 U.S. Department of
Transportation, West Building, Ground Floor, Room W12-140, 1200 New
Jersey Ave., SE., Washington, DC 20590.
Hand Delivery or Courier: Docket Management Facility, M-30
U.S. Department of Transportation, West Building, Ground Floor, Room
W12-140, 1200 New Jersey Ave., SE., Washington, DC 20590, between 9
a.m. and 5 p.m. Eastern time, Monday through Friday, except Federal
holidays.
Regardless of how you submit your comments, you should identify the
Docket number of this document.
Instructions: For detailed instructions on submitting comments and
additional information, see https://www.regulations.gov. Note that all
comments received will be posted without change to https://www.regulations.gov, including any personal information provided.
Please read the ``Privacy Act'' heading below.
Privacy Act: Anyone is able to search the electronic form of all
contents received into any of our dockets by the name of the individual
submitting the comment (or signing the comment, if submitted on behalf
of an association, business, labor union, etc.). You may review the
complete User Notice and Privacy Notice for Regulations.gov at https://www.regulations.gov/search/footer/privacyanduse.jsp.
Docket: For access to the docket to read background documents or
comments received, go to https://www.regulations.gov at any time or to
West Building Ground Floor, Room W12-140, 1200 New Jersey Avenue, SE.,
Washington, DC between 9 a.m. and 5 p.m., Eastern Time, Monday through
Friday, except Federal holidays.
FOR FURTHER INFORMATION CONTACT: For programmatic issues: Luke Johnson,
Office of Traffic Records and Analysis, NPO-423, National Highway
Traffic Safety Administration, 400 Seventh Street, SW., Washington, DC
20590. Telephone (202) 366-1722. For legal issues: Roland Baumann,
Office of Chief Counsel, NCC-113, National Highway Traffic Safety
Administration, 400 Seventh Street, SW., Washington, DC 20590.
Telephone (202) 366-5260.
SUPPLEMENTARY INFORMATION: The National Highway Traffic Safety
Administration (NHTSA) has identified 61 model performance measures for
the six core State traffic records data systems: Crash, vehicle,
driver, roadway, citation/adjudication, and EMS/injury surveillance.
These model performance measures address the six performance
attributes: Timeliness, accuracy, completeness, uniformity,
integration, and accessibility. States can use these measures to
develop and track performance goals in their Traffic Records Strategic
Plans, Traffic Records Assessments, and Highway Safety Plans; establish
data-quality improvement measures for specific traffic records
projects; and support data improvement goals in the Strategic Highway
Safety Plan. The full text of the report, Model Performance Measures for
State Traffic Records Systems, DOT HS 811 44, is available at https://www.nhtsa.gov/.
Key Features of the Model Performance Measures
Use is voluntary: States should use the measures for those data
system performance attributes they wish to monitor or improve. If the
suggested measures are not deemed appropriate, States are free to
modify them or develop their own.
The measures are flexible: The measures are models. States can
modify a measure to meet a specific need as long as its overall intent
remains the same.
The measures do not set numerical performance goals: They describe
what to measure and suggest how it should be measured but are not
intended to establish a numerical performance goal. Each State should
set its own performance goals.
The measures provide a template or structure States can populate
with specific details: For example, the States must decide what data
files to use and what data elements are critical. States should take
advantage of these decision-making opportunities to focus on their most
important performance features.
The measures are not exhaustive: The measures attempt to capture
one or two key performance features of each data system performance
attribute. States may wish to use additional or alternative measures to
address specific performance issues.
The measures are not intended to be used to compare States: Their
purpose is to help each State improve its own performance. Each State
selects the measures it uses, establishes its own definitions of key
terms, and may modify the measures to fit its circumstances. Since the
measures will vary considerably from State to State, it is unlikely
that they could be used for any meaningful comparisons between States.
NHTSA has no intention of using the measures to make interstate
comparisons.
Core Traffic Records Data Systems
The model performance measures cover the six core traffic data
systems.
1. Crash: The State repository that stores law enforcement officer
crash reports.
2. Vehicle: The State repository that stores information on
registered vehicles within the State (also known as the vehicle
registration system). This database can also include records for
vehicles not registered in the State--e.g., a vehicle that crashed in
the State but registered in another State.
3. Driver: The State repository that stores information on licensed
drivers within the State and their driver histories. This is also known
as the driver license and driver history system. The driver file also
could contain a substantial number of records for drivers not licensed
within the State--e.g., an unlicensed driver involved in a crash.
4. Roadway: The State repository that stores information about the
roadways within the State. It should include information on all
roadways within the State and is typically composed of discrete sub-
files that include: Roadway centerline and geometric data, location
reference data, geographical information system data, travel and
exposure data, etc.
5. Citation/Adjudication: The component repositories, managed by
multiple State or local agencies, which store traffic citation, arrest,
and final disposition of charge data.
6. EMS/Injury Surveillance: The component repositories, managed by
multiple State or local agencies, which store data on motor vehicle-
related injuries and deaths. Typical components of an EMS/injury
surveillance system are pre-hospital EMS data, hospital emergency
department data systems, hospital discharge data systems, trauma
registries, and long term care/rehabilitation patient data systems.
[[Page 20440]]
Performance Attributes
The model performance measures are based on six core
characteristics:
1. Timeliness: Timeliness reflects the span of time between the
occurrence of an event and entry of information into the appropriate
database. Timeliness can also measure the time from when the custodial
agency receives the data to the point when the data are entered into
the database.
2. Accuracy: Accuracy reflects the degree to which the data are
error-free, satisfy internal consistency checks, and do not exist in
duplicate within a single database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. In some cases, it is possible to
determine that the values entered for a variable or data element are
not legitimate codes. In other cases, errors can be detected by
matching with external sources of information. It may also be possible
to determine that duplicate records have been entered for the same
event (e.g., title transfer).
3. Completeness: Completeness reflects both the number of records
that are missing from the database (e.g., events of interest that
occurred but were not entered into the database) and the number of
missing (blank) data elements in the records that are in a database. In
the crash database, internal completeness reflects the amount of
specified information captured in each individual crash record.
External crash completeness reflects the number or percentage of crashes
for which crash reports are entered into the database. However, it is not
possible to determine precisely external crash completeness as it is
impossible to determine the number of unreported crashes. The measures
in this report only address internal completeness by measuring what is
not missing.
4. Uniformity: Uniformity reflects the consistency among the files
or records in a database and may be measured against some independent
standard, preferably a national standard. Within a State all
jurisdictions should collect and report the same data using the same
definitions and procedures. If the same data elements are used in
different State files, they should be identical or at least compatible
(e.g., names, addresses, geographic locations). Data collection
procedures and data elements should also agree with nationally accepted
guidelines and standards (such as the Model Minimum Uniform Crash
Criteria, [MMUCC]).
5. Integration: Integration reflects the ability of records in a
database to be linked to a set of records in another of the six core
databases--or components thereof--using common or unique identifiers.
Integration differs in one important respect from the first four
attributes of data quality. By definition, integration is a performance
attribute that always involves two or more traffic records subsystems
(i.e., databases or files). For integration, the model performance
measures offer a single performance measure with database-specific
applications that typically are of interest to many States. The samples
included are of course non-exhaustive. Many States will be interested
in establishing links between databases and sub-databases other than
those listed here, and therefore will be interested in measuring the
quality of those other integrations. Note that some of the specific
examples herein involve integration of files within databases rather
than the integration of entire databases.
6. Accessibility: Accessibility reflects the ability of legitimate
users to successfully obtain desired data, and it differs from the other
attributes in how it is measured. For the other performance attributes,
the owners and operators of the various databases and sub-files examine
the data in the files and the internal
workings of the files. In contrast, accessibility is measured in terms
of customer satisfaction. Every database and file in a traffic records
system has a set of legitimate users who are entitled to request and
receive data. The accessibility of the database or sub-file is
determined by obtaining the users' perceptions of how well the system
responds to their requests. Some users' perceptions may be more
relevant to measurement of accessibility than others'. Each database
manager should decide which of the legitimate users of the database
would be classified as principal users, whose satisfaction with the
system's response to requests for data and other transactions will
provide the basis for the measurement of accessibility. Thus, the
generic approach to measuring database accessibility in the model
performance measures consists of (1) identifying the principal users of the
database; (2) querying the principal users to assess (a) their ability
to obtain the data or other services requested and (b) their
satisfaction with the timeliness of the response to their request; and
(3) documenting the method of data collection and the principal users'
responses. How the principal users are contacted and queried is up to
the database managers. Similarly, the extent to which the principal
users' responses are quantified is left to the managers to determine.
However, this measure does require supporting documentation that
provides evidentiary support to the claims of accessibility. This
measure would be best used to gauge the impact of an improvement to a
data system. Surveying the principal users before and after the rollout
of a specific upgrade would provide the most meaningful measure of
improved database accessibility.
Performance Measure Criteria
Each model performance measure was developed in accordance with the
following criteria:
Specific and well-defined: The measures are appropriate and
understandable.
Performance based: The measures are defined by data system
performance, not supporting activities or milestones: ``awarded a
contract'' or ``formed a Traffic Records Coordinating Committee'' are
not acceptable performance measures.
Practical: The measures use data that are readily available at
reasonable cost and can be duplicated.
Timely: The measures should provide an accurate and current--near
real-time--snapshot of the database's timeliness, accuracy,
completeness, uniformity, integration, and accessibility.
Accurate: The measures use data that are valid and consistent with
values that are properly calculated.
Important: The measures capture the essence of this performance
attribute for the data system; for example, an accuracy measure should
not be restricted to a single unimportant data element.
Universal: The measures are usable by all States, though not
necessarily immediately.
These criteria take a broad view of performance measures. For
example, performance on some of the model measures may not change from
year to year. Once a State has incorporated uniform data elements,
established data linkages, or provided appropriate data file access,
further improvement may not be expected. Some States cannot use all
measures. For example, States that do not currently maintain a
statewide data file cannot use measures based on this file (see in
particular the injury data files). Some measures require States to
define a set of critical data elements. Many measures require States to
define their own performance goals or standards. The model measures
should be a guide for States as they assess their data systems and work
to improve their
[[Page 20441]]
performance. Each State should select performance measures most
appropriate to its circumstances and should define and modify them to
fit its specific needs.
Performance Measures
Listed below are the 61 measures classified by data system and
performance attribute.
Crash--Timeliness
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the crash database, the events of interest
are crashes. States must measure the time between the occurrence of a
crash and the entry of the report into the crash database. The model
performance measures offer two approaches to measuring the timeliness
of a crash database:
C-T-1: The median or mean number of days from (A) the crash date to
(B) the date the crash report is entered into the database. The median
value is the point at which 50 percent of the crash reports were
entered into the database within a period defined by the State.
Alternatively, the arithmetic mean could be calculated for this
measure.
C-T-2: The percentage of crash reports entered into the database
within XX days after the crash. The XX usually reflects a target or
goal set by the State for entry of reports into the database. The
higher percentage of reports entered within XX days, the timelier the
database. Many States set the XX for crash data entry at 30, 60, or 90
days but any other target or goal is equally acceptable.
Crash--Accuracy
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records entered for the same
event. The model performance measures offer two approaches to measuring
crash database accuracy:
C-A-1: The percentage of crash records with no errors in critical
data elements. The State selects one or more crash data elements it
considers critical and assesses the accuracy of that element or
elements in all of the crash records entered into the database within a
period defined by the State. Many States consider the following crash
elements critical:
Environmental elements: Record number, Location (on/at/distance
from; lat/long, location code), Date, time (day of week can also be
calculated from this), Environment contributing factors (up to 3), Location
description (roadway type, location type, roadway-contributing
factors--up to 3), Crash type, severity, involved units,
Harmful events (first harmful, most harmful).
Vehicle/Unit elements: Crash record number, vehicle/unit
number, VIN decoded sub-file of values for make, model, year, other
decode values, Sequence of events (multiple codes), Harmful events (1st
and most harmful for each vehicle), SafetyNet variables for reportable
vehicles/crashes (carrier name/ID, additional vehicle codes, tow away
due to damage).
Person elements: Crash record number, vehicle/unit number,
person number, Person type (driver, occupant, non-occupant),
Demographics (age, sex, other), Seating position, Protective device
type (occupant protection, helmet, etc.), Protective device use, Airbag
(presence, deployment: front, side, both, none), Injury severity (if
this can be sourced through EMS/Trauma/hospital records).
C-A-2: The percentage of in-State registered vehicles on the State
crash file with Vehicle Identification Number (VIN) matched to the
State vehicle registration file.
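As an illustration only, C-A-2 could be approximated by a VIN match against the registration file as sketched below; the field names and the placeholder State code are hypothetical.

    def ca2_vin_match_rate(crash_vehicles, registration_vins, state_code="XX"):
        """C-A-2 sketch: share of in-State registered vehicles on the crash file
        whose VIN matches the State vehicle registration file."""
        in_state = [v for v in crash_vehicles
                    if v.get("registration_state") == state_code and v.get("vin")]
        registered = set(registration_vins)
        matched = sum(1 for v in in_state if v["vin"] in registered)
        return 100.0 * matched / len(in_state) if in_state else 0.0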
Crash--Completeness
Completeness reflects both the number of records that are missing
from the database (e.g., events of interest that occurred but were not
entered into the database) and the number of missing (blank) data
elements in the records that are in a database. Completeness has
internal and external aspects. In the crash database, external crash
completeness reflects the number or percentage of crashes for which
crash reports are entered into the database. It is impossible, however,
to establish precisely external crash completeness as the number of
unreported crashes cannot be determined. Internal completeness can be
determined since it reflects the amount of specified information
captured in each individual crash record. The model performance
measures offer three approaches to measuring the internal completeness
of a crash database:
C-C-1: The percentage of crash records with no missing critical
data elements. The State selects one or more crash data elements it
considers critical and assesses internal completeness by dividing the
number of records not missing a critical element by the total number of
records entered into the database within a period defined by the State.
C-C-2: The percentage of crash records with no missing data
elements. The State can assess overall completeness by dividing the
number of records missing no elements by the total number of records
entered into the database within a period defined by the State.
C-C-3: The percentage of unknowns or blanks in critical data
elements for which unknown is not an acceptable value. This measure
should be used when States wish to track improvements on specific
critical data values and reduce the occurrence of illegitimate null
values.
Crash--Uniformity
Uniformity reflects the consistency among the files or records in a
database and may be measured against some independent standard,
preferably a national standard. The model performance measures offer
one approach to measure crash database uniformity:
C-U-1: The number of MMUCC-compliant data elements entered into the
crash database or obtained via linkage to other database(s). The Model
Minimum Uniform Crash Criteria (MMUCC) Guideline is the national
standard for crash records.
Crash-Integration
Integration reflects the ability of records in the crash database
to be linked to a set of records in another of the six core databases--
or components thereof--using common or unique identifiers.
C-I-1: The percentage of appropriate records in the crash database
that are linked to another system or file. Linking the crash database
with the five other core traffic records databases can provide
important information. For example, a State may wish to determine the
percentage of in-State drivers on crash records that link to the driver
file.
Crash-Accessibility
Accessibility reflects the ability of legitimate users to
successfully obtain desired data. The below process outlines one way of
measuring crash database accessibility:
C-X-1: To measure crash accessibility: (1) Identify the principal
users of the crash database; (2) Query the principal users to assess
(A) their ability to obtain the data or other services requested and
(B) their satisfaction with the timeliness of the
[[Page 20442]]
response to their request; (3) Document the method of data collection
and the principal users' responses.
Vehicle-Timeliness
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the vehicle database, the State determines
the events of principal interest that will be used to measure
timeliness. For example, a State may determine that the transfer of the
title of the vehicle constitutes a critical status change of that
vehicle record. There are many ways to measure the timeliness of the
entry of a report on the transfer of a vehicle title or any other
critical status change. The model performance measures offer two
general approaches to measuring vehicle database timeliness:
V-T-1: The median or mean number of days from (A) the date of a
critical status change in the vehicle record to (B) the date the status
change is entered into the database. The median value is the point at
which 50 percent of the vehicle record updates were entered into the
database within a period defined by the State. Alternatively, the
arithmetic mean could be calculated for this measure.
V-T-2: The percentage of vehicle record updates entered into the
database within XX days after the critical status change. The XX
usually reflects a target or goal set by the State for entry of reports
into the database. The higher percentage of reports entered within XX
days, the timelier the database. Many States set the XX for vehicle
data entry at one, five, or 10 days, but any target or goal is equally
acceptable.
Vehicle-Accuracy
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records that have been entered
for the same event. The model performance measures offer one approach
to measuring vehicle database accuracy:
V-A-1: The percentage of vehicle records with no errors in
critical data elements. The State selects one or more vehicle data
elements it considers critical and assesses the accuracy of that
element or elements in all of the vehicles records entered into the
database within a period defined by the State. Many States have
identified the following critical data elements: Vehicle Identification
Number (VIN), Current registration status, Commercial or non-CMV, State
of registration, State of title, Stolen flag (as appropriate), Motor
carrier name, Motor carrier ID, and Title brands.
Vehicle-Completeness
Completeness has internal and external aspects. For the vehicle
database, external vehicle completeness reflects the portion of the
critical changes to the vehicle status reported and entered into the
database. It is not possible to determine precisely external vehicle
database completeness because one can never know how many critical
status changes occurred but went unreported. Internal completeness
reflects the amount of specified information captured by individual
vehicle records. It is possible to determine precisely internal vehicle
completeness; for example, one can calculate the percentage of vehicle
records in the database that are missing one or more critical data
elements. The model performance measures offer four approaches to
measuring the completeness of a vehicle database:
V-C-1: The percentage of vehicle records with no missing critical
data elements. The State selects one or more vehicle data elements it
considers critical and assesses internal completeness by dividing the
number of records not missing a critical element by the total number of
records entered into the database within a period defined by the State.
V-C-2: The percentage of records on the State vehicle file that
contain no missing data elements. The State can assess overall
completeness by dividing the number of records missing no elements by
the total number of records entered into the database within a period
defined by the State.
V-C-3: The percentage of unknowns or blanks in critical data
elements for which unknown is not an acceptable value. This measure
should be used when States wish to track improvements on specific
critical data values to reduce the occurrence of illegitimate null
values.
V-C-4: The percentage of vehicle records from large trucks and
buses that have all of the following data elements: Motor Carrier ID,
Gross Vehicle Weight Rating/Gross Combination Weight Rating, Vehicle
Configuration, Cargo Body Type, and Hazardous Materials (Cargo Only).
This is a measure of database completeness in specific critical fields.
Vehicle-Uniformity
Uniformity reflects the consistency among the files or records in a
database and may be measured against some independent standard,
preferably a national standard. The model performance measures offer
one general approach to measuring vehicle database uniformity.
V-U-1: The number of standards-compliant data elements entered into
a database or obtained via linkage to other database(s). These
standards include the Model Minimum Uniform Crash Criteria (MMUCC).
Vehicle-Integration
Integration reflects the ability of records in the vehicle database
to be linked to a set of records in another of the six core databases--
or components thereof--using common or unique identifiers.
V-I-1: The percentage of appropriate records in the vehicle file
that are linked to another system or file. Linking the vehicle database
with the five other core traffic record databases can provide important
information. For example, a State may wish to determine the percentage
of vehicle registration records that link to a driver record.
Vehicle-Accessibility
Accessibility reflects the ability of legitimate users to
successfully obtain desired data. The below process outlines one way of
measuring the vehicle database's accessibility.
V-X-1: To measure accessibility: (1) Identify the principal users
of the vehicle database; (2) Query the principal users to assess (A)
their ability to obtain the data or other services requested and (B)
their satisfaction with the timeliness of the response to their
request; (3) Document the method of data collection and the principal
users' responses.
Driver-Timeliness
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the driver database, the State determines the
events of principal interest that shall be used to measure timeliness.
For example, the State may determine that an adverse action against a
driver's license constitutes a critical status change of that driver
record. There are many ways to measure the timeliness of the entry of a
report on an adverse action against a driver's license or any other
critical status change. The
[[Page 20443]]
model performance measures offer two approaches to measuring the
timeliness of the driver database. The first is a true measure of
timeliness from time of conviction to entry into the driver database,
while the second is a measure internal to the agency with custody of
the driver database.
D-T-1: The median or mean number of days from (A) the date of a
driver's adverse action to (B) the date the adverse action is entered
into the database. This measure represents the time from final
adjudication of a citation to entry into the driver database within a
period defined by the State. This process can occur in a number of
ways, from the entry of paper reports and data conversion to a seamless
electronic process. An entry of a citation disposition into the driver
database cannot occur until the adjudicating agency (usually a court)
notifies the repository that the disposition has occurred. Since the
custodial agency of the driver database in most States has no control
over the transmission of the disposition notification, many States may
wish to track the portion of driver database timeliness involving
citation dispositions that they can control. Measure D-T-2 is offered for
that purpose.
D-T-2: The median or mean number of days from (A) the date of
receipt of citation disposition notification by the driver repository
to (B) the date the disposition report is entered into the driver's
record in the database within a period determined by the State. This
measure represents the internal (to the driver database) time lapse
from the receipt of disposition information to entry into the driver
database within a period defined by the State.
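A minimal sketch of the two driver timeliness measures, assuming hypothetical adjudication, receipt, and entry dates on each disposition record, follows.

    from statistics import median

    def driver_timeliness(dispositions):
        """D-T-1 and D-T-2 sketch. Each disposition supplies the adjudication
        date, the date the driver repository received the notification, and the
        date the disposition was entered on the driver record."""
        d_t_1 = median((d["entered"] - d["adjudicated"]).days for d in dispositions)
        d_t_2 = median((d["entered"] - d["received"]).days for d in dispositions)
        return {"D-T-1 median days": d_t_1, "D-T-2 median days": d_t_2}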
Driver-Accuracy
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records that have been entered
for the same event. The model performance measures offer two approaches
to measuring driver database accuracy:
D-A-1: The percentage of driver records with no errors in critical
data elements. The State selects one or more driver data elements it
considers critical and assesses the accuracy of that element or
elements in all of the driver records entered into the database within
a period defined by the State. Several States have identified the
following critical data elements: Name, Date of birth, Sex, Driver
license number, State of driver license issuance, Date license issued
or renewed, Social Security Number, License type, Restrictions, Crash
involvement, Conviction offenses, Violation date per event, Conviction
date per event, Driver control actions (Suspensions, Revocations,
Withdrawals), and Date of each action.
D-A-2: The percentage of records on the State driver file with
Social Security Numbers (SSN) successfully verified using Social
Security Online Verification (SSOLV) or other means.
Driver-Completeness
Completeness has internal and external aspects. For the driver
database, external completeness reflects the portion of critical driver
status changes that are reported and entered into the database. It is
not possible to determine precisely the external completeness of driver
records because one can never know how many critical driver status
changes occurred but went unreported. Internal completeness reflects the
amount of specified information captured in individual driver records.
It is possible to determine precisely internal driver record
completeness. One can, for example, calculate the percentage of driver
records in the database that are missing one or more critical data
elements. The model performance measures offer three approaches to
measuring the internal completeness of the driver database:
D-C-1: The percentage of driver records with no missing critical
data elements. The State selects one or more driver elements it
considers critical and assesses internal completeness by dividing the
number of records not missing a critical element by the total number of
records entered into the database within a period defined by the State.
D-C-2: The percentage of driver records with no missing data
elements. The State can assess overall completeness by dividing the
number of records missing no elements by the total number of records
entered into the database within a period defined by the State.
D-C-3: The percentage of unknowns or blanks in critical data
elements for which unknown is not an acceptable value. This measure
should be used when States wish to track improvements on specific
critical data values and reduce the occurrence of illegitimate null
values.
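Each of the three completeness measures is a simple ratio over the
records entered during the period of interest. The sketch below computes
all three; the critical elements, the treatment of blanks, and the
sample records are hypothetical.

CRITICAL_ELEMENTS = ["name", "date_of_birth", "driver_license_number"]

driver_records = [
    {"name": "DOE, J", "date_of_birth": "1970-01-01",
     "driver_license_number": "A123", "restrictions": ""},
    {"name": "ROE, R", "date_of_birth": "",
     "driver_license_number": "B456", "restrictions": "B"},
]

def is_blank(value):
    # Treat None, empty strings, and an explicit UNKNOWN code as missing.
    return value is None or str(value).strip() in ("", "UNKNOWN")

total = len(driver_records)

# D-C-1: records missing no critical data element.
d_c_1 = sum(1 for r in driver_records
            if not any(is_blank(r[e]) for e in CRITICAL_ELEMENTS)) / total

# D-C-2: records missing no data element at all.
d_c_2 = sum(1 for r in driver_records
            if not any(is_blank(v) for v in r.values())) / total

# D-C-3: share of critical-element values that are unknown or blank.
critical_values = [r[e] for r in driver_records for e in CRITICAL_ELEMENTS]
d_c_3 = sum(1 for v in critical_values if is_blank(v)) / len(critical_values)

print(f"D-C-1: {d_c_1:.0%}  D-C-2: {d_c_2:.0%}  D-C-3: {d_c_3:.0%}")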
Driver-Uniformity
Uniformity reflects the consistency among the files or records in a
database and may be measured against an independent standard,
preferably a national standard. The model performance measures offer
one general approach to measuring driver database uniformity:
D-U-1: The number of standards-compliant data elements entered into
the driver database or obtained via linkage to other database(s). The
relevant standards include MMUCC.
Driver-Integration
Integration reflects the ability of records in the driver database
to be linked to a set of records in another of the six core databases--
or components thereof--using common or unique identifiers.
D-I-1: The percentage of appropriate records in the driver file
that are linked to another system or file. Linking the driver database
with the five other core traffic record databases can provide important
information. For example, a State may wish to determine the percentage
of drivers in crashes linked to the adjudication file.
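A simplified sketch of the linkage calculation follows, assuming the
driver license number serves as the common identifier; the file contents
and field names are hypothetical.

# Hypothetical driver-file extract and the set of driver license numbers
# found in the adjudication file.
driver_file = [
    {"dl_number": "A123", "crash_involved": True},
    {"dl_number": "B456", "crash_involved": True},
    {"dl_number": "C789", "crash_involved": False},
]
adjudication_ids = {"A123", "C789"}

# "Appropriate" records here are the drivers flagged as crash-involved.
appropriate = [r for r in driver_file if r["crash_involved"]]
linked = [r for r in appropriate if r["dl_number"] in adjudication_ids]

print("D-I-1: {:.1f} percent of crash-involved drivers linked".format(
    100.0 * len(linked) / len(appropriate)))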
Driver-Accessibility
Accessibility reflects the ability of legitimate users to
successfully obtain desired data. The process below outlines one way of
measuring the driver database's accessibility.
D-X-1: To measure accessibility: (1) Identify the principal users
of the driver database; (2) Query the principal users to assess (A)
their ability to obtain the data or other services requested and (B)
their satisfaction with the timeliness of the response to their
request; (3) Document the method of data collection and the principal
users' responses.
Roadway-Timeliness
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the roadway database, the State determines
the events of principal interest that will be used to measure
timeliness. A State may determine that the completion of periodic
collection of a critical roadway data element or elements constitutes a
critical status change of that roadway record. For example, one
critical roadway data element that many States periodically collect is
annual average daily traffic (AADT). Roadway database timeliness can be
validly gauged by measuring the
[[Page 20444]]
time between the completion of data collection and the entry into the
database of AADT for roadway segments of interest. There are many ways
to do this. The model performance measures offer two general approaches
to measuring roadway database timeliness:
R-T-1: The median or mean number of days from (A) the date a
periodic collection of a critical roadway data element is complete
(e.g., Annual Average Daily Traffic) to (B) the date the updated
critical roadway data element is entered into the database. The median
value is the duration within which 50 percent of the changes to
critical roadway elements were updated in the database. Alternatively,
the arithmetic mean is the average number of days between the
completion of the collection of critical roadway elements and when the
data are entered into the database.
R-T-2: The median or mean number of days from (A) roadway project
completion to (B) the date the updated critical data elements are
entered into the roadway inventory file. The median value is the point
at which 50 percent of the updated critical data elements from a
completed roadway project were entered into the roadway inventory file.
Alternatively, the arithmetic mean could be calculated for this
measure. Each State will determine its short list of critical data
elements, which should be a subset of MIRE. For example, it could be
some or all of the elements required for Highway Performance Monitoring
System (HPMS) sites. The database should be updated at regular
intervals or when a change is made to the inventory. For example, when
a roadway characteristic or attribute (e.g., traffic counts, speed
limits, signs, markings, lighting, etc.) that is contained in the
inventory is modified, the inventory should be updated within a
reasonable period.
Roadway-Accuracy
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records that have been entered
for the same event. The model performance measures offer one approach
to measuring roadway database accuracy:
R-A-1: The percentage of all road segment records with no errors in
critical data elements. The State selects one or more roadway data
elements it considers critical and assesses the accuracy of that
element or elements in all of the roadway records within a period
defined by the State. Many States consider the HPMS standards to be
critical.
Roadway-Completeness
Completeness has internal and external aspects. For the roadway
database, external roadway completeness reflects the portion of road
segments in the State for which data are collected and entered into the
database. It is very difficult to determine precisely external roadway
completeness because many States do not know the characteristics or
even the existence of roadway segments that are non-State owned,
maintained, or reported in the HPMS. Internal completeness reflects the
amount of specified information that is captured in individual road
segment records. It is possible to determine precisely internal roadway
completeness. One can, for example, calculate the percentage of roadway
segment records in the database that are missing one or more critical
elements (e.g., number of traffic lanes). The model performance measures
offer four general approaches to measuring the roadway database's
internal completeness:
R-C-1: The percentage of road segment records with no missing
critical data elements. The State selects one or more roadway elements
it considers critical and assesses internal completeness by dividing
the number of records not missing a critical element by the total
number of roadway records in the database.
R-C-2: The percentage of public road miles or jurisdictions
identified on the State's basemap or roadway inventory file. A
jurisdiction may be defined by the limits of a State, county, parish,
township, Metropolitan Planning Organization (MPO), or municipality.
R-C-3: The percentage of unknowns or blanks in critical data
elements for which unknown is not an acceptable value. This measure
should be used when States wish to track improvements on specific
critical data elements and reduce the occurrence of illegitimate null
values.
R-C-4: The percentage of total roadway segments that include
location coordinates, using measurement frames such as a GIS basemap.
This is a measure of the database's overall completeness.
Roadway-Uniformity
Uniformity reflects the consistency among the files or records in a
database and may be measured against some independent standard,
preferably a national standard. The model performance measures offer
one general approach to measuring roadway database uniformity:
R-U-1: The number of Model Inventory of Roadway Elements (MIRE)-
compliant data elements entered into a database or obtained via linkage
to other database(s).
Roadway-Integration
Integration reflects the ability of records in the roadway database
to be linked to a set of records in another of the six core databases--
or components thereof--using common or unique identifiers.
R-I-1: The percentage of appropriate records in a specific file in
the roadway database that are linked to another system or file. For
example, a State may wish to determine the percentage of records in the
State's bridge inventory that link to the basemap file.
Roadway-Accessibility
Accessibility reflects the ability of legitimate users to
successfully obtain desired data. The process below outlines one way of
measuring roadway database accessibility:
R-X-1: To measure accessibility of a specific file in the roadway
database: (1) Identify the principal users of the file; (2) Query the
principal users to assess (A) their ability to obtain the data or other
services requested and (B) their satisfaction with the timeliness of
the response to their request; (3) Document the method of data
collection and the principal users' responses.
Citation/Adjudication-Timeliness
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the citation and adjudication databases, the
State determines the events of principal interest that will be used to
measure timeliness. Many States will include the critical events of
citation issuance and citation disposition among those events of
principal interest used to track timeliness. There are many ways to
measure the timeliness of either citation issuance or citation
disposition. The model performance measures offer one general approach
to measuring citation and adjudication database timeliness:
C/A-T-1: The median or mean number of days from (A) the date a
citation is issued to (B) the date the
[[Page 20445]]
citation is entered into the statewide citation database, or a first
available repository. The median value is the point at which 50 percent
of the citation records were entered into the citation database within
a period defined by the State. Alternatively, the arithmetic mean could
be calculated for this measure.
C/A-T-2: The median or mean number of days from (A) the date of
charge disposition to (B) the date the charge disposition is entered
into the statewide adjudication database, or a first available
repository. The median value is the point at which 50 percent of the
charge dispositions were entered into the statewide database.
Alternatively, the arithmetic mean could be calculated for this
measure.
Note: Many States do not have statewide databases for citation
or adjudication records. Therefore, in some citation and
adjudication data systems, timeliness and other attributes of data
quality should be measured at individual first available
repositories.
Citation/Adjudication-Accuracy
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records that have been
entered for the same event. The State selects one or more citation data
elements and one or more charge disposition data elements it considers
critical and assesses the accuracy of those elements in all of the
citation and adjudication records entered into the database within a
period of interest. The model performance measures offer two approaches
to measuring citation and adjudication database accuracy:
C/A-A-1: The percentage of citation records with no errors in
critical data elements. The State selects one or more citation data
elements it considers critical and assesses the accuracy of that
element or elements in all of the citation records entered into the
database within a period defined by the State. Below is a list of
suggested critical data elements.
C/A-A-2: The percentage of charge disposition records with no
errors in critical data elements. The State selects one or more charge
disposition data elements it considers critical and assesses the
accuracy of that element or elements for the charge-disposition records
entered into the database within a period defined by the State. Many
States have identified the following as critical data elements:
Critical elements from the Issuing Agency include the offense/charge
code, date, time, officer, Agency, citation number, crash report number
(as applicable), and BAC (as applicable). Critical elements from the
Citation Data include the offender's name, driver license number, age,
and sex. Critical data elements from the Charge Disposition/
Adjudication include the offender's name, driver license number, age,
sex, citation number, court, date of receipt, date of disposition,
disposition, and date of transmittal to DMV (as applicable).
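One of the error-detection methods described above is identifying
duplicate records entered for the same event. A minimal sketch of such a
check, using the citation number as the event key, is shown below; the
records and field names are hypothetical.

from collections import Counter

# Hypothetical citation records; the citation number should be unique
# to a single issued citation.
citations = [
    {"citation_number": "C-0001", "offense_code": "21-801"},
    {"citation_number": "C-0002", "offense_code": "21-902"},
    {"citation_number": "C-0001", "offense_code": "21-801"},  # duplicate entry
]

counts = Counter(c["citation_number"] for c in citations)
duplicates = {number: n for number, n in counts.items() if n > 1}
print("Duplicate citation numbers and their counts:", duplicates)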
Citation/Adjudication-Completeness *
Completeness has internal and external aspects. For the citation/
adjudication databases, external completeness can only be assessed by
identifying citation numbers for which there are no records. Missing
citations should be monitored at the place of first repository.
Internal completeness reflects the amount of specified information that
is captured in individual citation and charge disposition records. It
is possible to determine precisely internal citation and adjudication
completeness. One can, for example, calculate the percentage of
citation records in the database that are missing one or more critical
data elements. The model performance measures offer three approaches to
measuring internal completeness:
C/A-C-1: The percentage of citation records with no missing
critical data elements. The State selects one or more citation data
elements it considers critical and assesses internal completeness by
dividing the number of records not missing a critical element by the
total number of records entered into the database within a period
defined by the State.
C/A-C-2: The percentage of citation records with no missing data
elements. The State can assess overall completeness by dividing the
number of records missing no elements by the total number of records
entered into the database.
C/A-C-3: The percentage of unknowns or blanks in critical citation
data elements for which unknown is not an acceptable value. This
measure should be used when States wish to track improvements on
specific critical data elements and reduce the occurrence of
illegitimate null values.
Note: These measures of completeness are also applicable to the
adjudication file.
Citation/Adjudication-Uniformity *
Uniformity reflects the consistency among the files or records in a
database and may be measured against some independent standard,
preferably a national standard. The model performance measures offer
two general approaches to measuring database uniformity:
C/A-U-1: The number of Model Impaired Driving Record Information
System (MIDRIS)-compliant data elements entered into the citation
database or obtained via linkage to other database(s).
C/A-U-2: The percentage of citation records entered into the
database with common uniform statewide violation codes. The State
identifies the number of citation records with common uniform violation
codes entered into the database within a period defined by the State
and assesses uniformity by dividing this number by the total number of
citation records entered into the database during the same period.
* Note: These measures of uniformity are also applicable to the
adjudication file.
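Measure C/A-U-2 reduces to a simple ratio: citation records carrying a
statewide uniform violation code divided by all citation records entered
during the same period. A minimal sketch follows; the code table and
records are invented for the example.

# Hypothetical table of statewide uniform violation codes.
UNIFORM_CODES = {"21-801", "21-902", "16-303"}

citations = [
    {"violation_code": "21-801"},
    {"violation_code": "LOCAL-17"},  # local, non-uniform code
    {"violation_code": "21-902"},
]

uniform = sum(1 for c in citations if c["violation_code"] in UNIFORM_CODES)
print("C/A-U-2: {:.1f} percent with uniform statewide codes".format(
    100.0 * uniform / len(citations)))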
Citation/Adjudication-Integration *
Integration reflects the ability of records in the citation
database to be linked to a set of records in another of the six core
databases--or components thereof--using common or unique identifiers.
C/A-I-1: The percentage of appropriate records in the citation
files that are linked to another system or file. Linking the citation
database with the five other core traffic record databases can provide
important information. For example, a State may wish to determine the
percentage of DWI citations that have been adjudicated.
* Note: This measure of integration is also applicable to the
adjudication file.
Citation/Adjudication-Accessibility *
Accessibility reflects the ability of legitimate users to
successfully obtain desired data. The process below outlines one way of
measuring the citation database's accessibility.
C/A-X-1: To measure accessibility of the citation database: (1)
Identify the principal users of the citation database; (2) Query the
principal users to assess (A) their ability to obtain the data or other
services requested and (B) their satisfaction with the timeliness of
the response to their request; (3) Document the method of data
collection and the principal users' responses.
    The EMS/Injury Surveillance database is actually a set of related
databases. The principal files of interest are: Pre-hospital
[[Page 20446]]
Emergency Medical Services (EMS) data, Hospital Emergency Department
Data Systems, Hospital Discharge Data Systems, State Trauma Registry
File, and State Vital Records. States typically wish to measure data
quality separately for each of these files. These measures may be
applied to each of the EMS/Injury Surveillance databases individually.
Injury Surveillance-Timeliness *
Timeliness always reflects the span of time between the occurrence
of some event and the entry of information from the event into the
appropriate database. For the EMS/Injury Surveillance databases, the
State determines the events of principal interest that will be used to
measure timeliness. A State may, for example, determine that the
occurrence of an EMS run constitutes a critical event to measure the
timeliness of the EMS database. As another example, a State can select
the occurrence of a hospital discharge as the critical event to measure
the timeliness of the hospital discharge data system. There are many
ways to measure the timeliness of the EMS/Injury Surveillance
databases. The model performance measures offer two general approaches
to measuring timeliness:
I-T-1: The median or mean number of days from (A) the date of an
EMS run to (B) the date when the EMS patient care report is entered
into the database. The median value is the point at which 50 percent of
the EMS run reports were entered into the database within a period
defined by the State. Alternatively, the arithmetic mean could be
calculated for this measure.
I-T-2: The percentage of EMS patient care reports entered into the
State EMS discharge file within XX days after the EMS run. The XX
usually reflects a target or goal set by the State for entry of reports
into the database. The higher the percentage of reports entered within
XX days, the timelier the database. Many States set the XX for EMS data
entry at 5, 30, or 90 days, but any target or goal is equally
acceptable.
* Note: This measure of timeliness is also applicable to the
following files: State Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, & State Vital Records.
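A minimal sketch of how measure I-T-2 might be computed, assuming the
State has chosen a 30-day target; the dates and field names are
illustrative only.

from datetime import date

TARGET_DAYS = 30  # the "XX" target chosen by the State

# Hypothetical EMS patient care reports: run date and database entry date.
reports = [
    {"run_date": date(2011, 2, 1), "entry_date": date(2011, 2, 20)},
    {"run_date": date(2011, 2, 3), "entry_date": date(2011, 4, 1)},
    {"run_date": date(2011, 2, 5), "entry_date": date(2011, 2, 6)},
]

within_target = sum(
    1 for r in reports
    if (r["entry_date"] - r["run_date"]).days <= TARGET_DAYS
)
print("I-T-2: {:.1f} percent entered within {} days".format(
    100.0 * within_target / len(reports), TARGET_DAYS))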
Injury Surveillance-Accuracy *
Accuracy reflects the number of errors in information in the
records entered into a database. Error means the recorded value for
some data element of interest is incorrect. Error does not mean the
information is missing from the record. Erroneous information in a
database cannot always be detected. Methods for detecting errors
include: (1) Determining that the values entered for a variable or
element are not legitimate codes, (2) matching with external sources of
information, and (3) identifying duplicate records that have been entered for
the same event. The model performance measures offer one general
approach to measuring the accuracy of the injury surveillance databases
that is applicable to each of the five principal files:
I-A-1: The percentage of EMS patient care reports with no errors in
critical data elements. The State selects one or more EMS data elements
it considers critical--response times, for example--and assesses the
accuracy of that element or elements for all the records entered into
the database within a period defined by the State. Critical EMS/Injury
Surveillance Data elements used by many States include: Hospital
Emergency Department/Inpatient Data elements such as E-code, date of
birth, name, sex, admission date/time, zip code of hospital, emergency
dept. disposition, inpatient disposition, diagnosis codes, and
discharge date/time. Elements from the Trauma Registry Data (National
Trauma Data Bank [NTDB] standard) such as E-code, date of birth, name,
sex, zip code of injury, admission date, admission time, inpatient
disposition, diagnosis codes, zip code of hospital, discharge date/
time, and EMS patient report number. Elements from the EMS Data (National
Emergency Medical Services Information System [NEMSIS] standard)
include date of birth, name, sex, incident date/time, scene arrival
date/time, provider's primary impression, injury type, scene departure
date/time, destination arrival date/time, county/zip code of hospital,
and county/zip code of injury. Critical data elements from the Death
Certificate (Mortality) Data (National Center for Health Statistics
[NCHS] standard) include date of birth, date of death, name, sex,
manner of death, underlying cause of death, contributory cause of
death, county/zip code of death, and location of death.
* Note: This measure of accuracy is also applicable to the
following files: State Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, & State Vital Records.
Injury Surveillance-Completeness *
Completeness has internal and external aspects. For EMS/Injury
Surveillance databases, external completeness reflects the portion of
critical events (e.g., EMS runs, hospital admissions, etc.) that are
reported and entered into the databases. It is not possible to
determine precisely external EMS/injury surveillance completeness
because one can never know how many critical events occurred but
went unreported. Internal completeness reflects the amount of specified
information that is captured in individual EMS run records, State
Emergency Department records, State Hospital Discharge File records,
and State Trauma Registry File records. It is possible to determine
precisely internal EMS/Injury Surveillance completeness. One can, for
example, calculate the percentage of EMS run records in the database
that are missing one or more critical data elements. The model
performance measures offer three approaches to measuring completeness
for each of the files:
I-C-1: The percentage of EMS patient care reports with no missing
critical data elements. The State selects one or more EMS data elements
it considers critical and assesses internal completeness by dividing
the number of EMS run records not missing a critical element by the
total number of EMS run records entered into the database within a
period defined by the State.
I-C-2: The percentage of EMS patient care reports with no missing
data elements. The State can assess overall completeness by dividing
the number of records missing no elements by the total number of
records entered into the database.
I-C-3: The percentage of unknowns or blanks in critical data
elements for which unknown is not an acceptable value. This measure
should be used when States wish to track improvement on specific
critical data values and reduce the occurrence of illegitimate null
values. E-code, for example, is an appropriate EMS/Injury Surveillance
data element that may be tracked with this measure.
* Note: These measures of completeness are also applicable to
the following files: State Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, & State Vital Records.
Injury Surveillance-Uniformity
Uniformity reflects the consistency among the files or records in a
database and may be measured against an independent standard,
preferably a national standard. The model performance measures offer
two approaches to measuring uniformity that can be applied to each
discrete file using the appropriate standard as enumerated below.
I-U-1: The percentage of National Emergency Medical Services
Information System (NEMSIS)-
[[Page 20447]]
compliant data elements on EMS patient care reports entered into the
database or obtained via linkage to other database(s).
I-U-2: The number of National Emergency Medical Services
Information System (NEMSIS)-compliant data elements on EMS patient care
reports entered into the database or obtained via linkage to other
database(s).
The national standards for many of the other major EMS/Injury
Surveillance database files are: The Universal Billing 04 (UB04) for
State Emergency Department Discharge File and State Hospital Discharge
File; the National Trauma Data Standards (NTDS) for State Trauma
Registry File; and the National Association for Public Health
Statistics and Information Systems (NAPHSIS) for State Vital Records.
Injury Surveillance-Integration *
Integration reflects the ability of records in the EMS database to
be linked to a set of records in another of the six core databases--or
components thereof--using common or unique identifiers.
I-I-1: The percentage of appropriate records in the EMS file that
are linked to another system or file. Linking the EMS file to other
files in the EMS/Injury Surveillance database or any of the five other
core databases can provide important information. For example, a State
may wish to determine the percentage of EMS records that link to the
trauma file.
* Note: This measure of integration is also applicable to the
following files: State Emergency Dept. File, State Hospital
Discharge File, State Trauma Registry File, & State Vital Records.
Injury Surveillance-Accessibility *
Accessibility reflects the ability of legitimate users to
successfully obtain desired data.
I-X-1: To measure accessibility of the EMS file: (1) Identify the
principal users of the EMS file; (2) Query the principal users to
assess (A) their ability to obtain the data or other services requested
and (B) their satisfaction with the timeliness of the response to their
request; (3) Document the method of data collection and the
principal users' responses.
Note: This measure of accessibility is also applicable to the
State Emergency Dept. File, the State Hospital Discharge File, the
State Trauma Registry File, & State Vital Records.
Recommendations
While use of the performance measures is voluntary, States will be
better able to track the success of upgrades and identify areas for
improvement in their traffic records systems if they elect to utilize
the measures appropriate to their circumstances. Adopting the measures
will also put States ahead of the curve should performance metrics be
mandated in any future legislation. The measures are not exhaustive.
They describe what to measure and suggest how to measure it, but do not
recommend numerical performance goals. The measures attempt to capture
one or two key performance features of each data system performance
attribute. States may wish to use additional or alternative measures to
address specific performance issues.
States that elect to use these measures to demonstrate progress in
a particular system should start using them immediately. States should
begin by judiciously selecting the appropriate measures and modifying
them as needed. States should use only the measures for the data system
performance attributes they wish to monitor or improve. No State is
expected to use a majority of the measures, and States may wish to
develop their own additional measures to track State-specific issues or
programs.
Once States have developed their specific performance measures, they
should apply them consistently to track changes over time. Since the
measures will vary considerably from State to State, it is unlikely
that they could be used for any meaningful comparisons between States.
In any event, NHTSA does not anticipate using the measures for
interstate comparison purposes.
Notes on Terminology Used
The following terms are used throughout the document:
Data system: One of the six component State traffic records
databases, such as crash, injury surveillance, etc.
Data file (such as ``crash file'' or ``State Hospital Discharge
file''): A data system may contain a single data file--such as a
State's driver file--or more than one, e.g., the injury system has
several data files.
Record: All the data entered in a file for a specific event (a
crash, a patient hospital discharge, etc.).
Data element: Individual fields coded within each record.
Data element code value: The allowable code values or attributes
for a data element.
Data linkages: The links established by matching at least one data
element in a record in one file with the corresponding element or
elements in one or more records in another file or files.
State: The 50 States, the District of Columbia, Puerto Rico, the
territories, and the Bureau of Indian Affairs. These are the
jurisdictions eligible to receive State data improvement grants.
Defining and Calculating Performance Measures
Specified number of days: Some measures are defined in terms of a
specified number of days (such as 30, 60, or 90). Each State can
establish its own period for these measures.
Defining periods of interest: States will need to define periods of
interest for