ENVIRONMENTAL PROTECTION
AGENCY
40 CFR Parts 60, 61, and 63
[EPA–HQ–OAR–2009–0174; FRL–8968–8]
RIN 2060–AP63
Emissions Factors Program
Improvements
AGENCY: Environmental Protection
Agency (EPA).
ACTION: Advance notice of proposed
rulemaking.
SUMMARY: The purpose of this Advance
Notice of Proposed Rulemaking
(ANPRM) is to convey issues raised by
stakeholders about EPA’s emissions
factors program, inform the public of
our initial ideas on how to address these
issues, and solicit comments on our
current thinking to resolve these issues.
Our goal is to develop a self-sustaining
emissions factors program that produces
high quality, timely emissions factors,
better indicates the precision and
accuracy of emissions factors,
encourages the appropriate use of
emissions factors, and ultimately
improves emissions quantification.
Although emissions factors were
initially developed only for emissions
inventory purposes, their use has been
expanded to a variety of air pollution control
activities including permitting,
enforcement, modeling, control strategy
development, and risk analysis. This
ANPRM discusses the appropriateness
of using emissions factors for these
activities.
DATES: Comments must be received on
or before November 13, 2009.
ADDRESSES: EPA has established a
docket for this action under Docket ID
No. EPA–HQ–OAR–2009–0174. All
documents in the docket are listed in
the Federal Docket Management System
index at https://www.regulations.gov.
Publicly available docket materials are
available either electronically through
https://www.regulations.gov or in hard
copy at the EPA Docket Center, Public
Reading Room, ANPRM Docket, EPA
West, Room 3334, 1301 Constitution
Ave., NW., Washington, DC. The Public
Reading Room is open from 8:30 a.m. to
4:30 p.m., Monday through Friday,
excluding legal holidays. The telephone
number for the Public Reading Room is
(202) 566–1744, and the telephone
number for the Air Docket is (202) 566–
1742.
Instructions: Direct your comments to
Docket ID No. EPA–HQ–OAR–2009–
0174. The U.S. Environmental
Protection Agency’s (EPA’s) policy is
that all comments received will be
included in the public docket without
change and may be made available
online at https://www.regulations.gov,
including any personal information
provided, unless the comment includes
information claimed to be Confidential
Business Information (CBI) or other
information whose disclosure is
restricted by statute. Do not submit
information that you consider to be CBI
or otherwise protected through https://
www.regulations.gov or e-mail. The
https://www.regulations.gov Web site is
an ‘‘anonymous access’’ system, which
means EPA will not know your identity
or contact information unless you
provide it in the body of your comment.
If you send an e-mail comment directly
to EPA without going through https://
www.regulations.gov, your e-mail
address will be automatically captured
and included as part of the comment
that is placed in the public docket and
made available on the Internet. If you
submit an electronic comment, EPA
recommends that you include your
name and other contact information in
the body of your comment and with any
disk or CD–ROM you submit. If EPA
cannot read your comment due to
technical difficulties and cannot contact
you for clarification, EPA may not be
able to consider your comment.
Electronic files should avoid the use of
special characters and any form of
encryption, and should be free of any defects or
viruses. For additional information
about EPA’s public docket, visit the EPA
Docket Center homepage at https://
www.epa.gov/epahome/dockets.htm.
Docket: All documents in the docket
are listed in the https://
www.regulations.gov index. Although
listed in the index, some information is
not publicly available, e.g., CBI or other
information whose disclosure is
restricted by statute. Certain other
material, such as copyrighted material,
will be publicly available only in hard
copy. Publicly available docket
materials are available either
electronically in https://
www.regulations.gov or in hard copy at
the Public Reading Room.
FOR FURTHER INFORMATION CONTACT: Mr.
Thomas A. Driscoll, Measurement
Policy Group (MPG), Office of Air
Quality Planning and Standards (D243–
05), Environmental Protection Agency,
Research Triangle Park, North Carolina
27711, telephone number: (919) 541–
5135; fax number: (919) 541–1039;
e-mail address: driscoll.tom@epa.gov.
SUPPLEMENTARY INFORMATION:
Outline. The information in this
preamble is organized as follows:
I. General Information
A. Does this action apply to me?
B. What should I consider as I prepare my
comments for EPA?
C. Where can I get a copy of this document
and other related information?
II. Background Information
A. The Role of Emissions Factors and
Stakeholder Comments
B. Overview of the Emissions Factors
Improvement Program
C. Goals for the Emissions Factors
Improvement Program
III. Emissions Factors Development Process
and Tools
A. WebFIRE
B. Electronic Reporting Tool (ERT)
C. Emissions Factors Development
Guidance
IV. Changes to the Emissions Factors
Program, Emissions Factors
Development, and Associated Tools
A. Potential Revisions to the Emissions
Factors Development Process: Overview
and Issues
B. Test Data Submittal Requirements
C. Emissions Factors Content and Format
D. Interacting with the SPECIATE Database
V. Request for Comment and Next Steps
VI. Statutory and Executive Order Reviews
I. General Information
A. Does this action apply to me?
This notice is likely to be of interest
to a variety of parties, including owners
and operators of stationary sources who
use emissions factors and, in particular,
those that are subject to source testing
requirements under EPA air rules (i.e.,
New Source Performance Standards
(NSPS), National Emissions Standards
for Hazardous Air Pollutants (NESHAP),
and Maximum Achievable Control
Technology (MACT) standards);
industry sectors that believe that the
emissions factors currently used to
characterize their emission sources
could be updated and improved;
industry sectors that currently lack
emissions factors; State, local, and tribal
air pollution control agencies (S/L/Ts)
and other individuals and organizations
with an interest in emissions factors.
Because the use of emissions factors has
expanded beyond developing emissions
inventories to other uses (e.g.,
developing emissions limits for
incorporation into New Source Review
(NSR) and Title V operating permits,
determining applicability to air
pollution regulations, determining
compliance with emissions standards,
conducting air quality impact analyses,
developing control strategies, and
performing risk analyses (i.e., section
112(f) residual risk requirements)),
S/L/Ts, industry representatives,
environmental action groups,
individuals and other organizations may
have a vested interest in this notice.
All of these parties are encouraged to
read this notice and to submit
comments for EPA’s consideration. We
realize that in many cases organizations
other than EPA develop emissions
factors for a variety of purposes, and, in
most cases, we do not require the use of
EPA emissions factors. However,
because the EPA factors are so broadly
used and accepted, we are soliciting
information and feedback on how they
are developed, currently used, and how
they can be improved.
B. What should I consider as I prepare
my comments for EPA?
Do not submit CBI to EPA through
https://www.regulations.gov or e-mail.
Clearly mark the part or all of the
information that you claim to be CBI.
For CBI information in a disk or CD–
ROM that you mail to EPA, mark the
outside of the disk or CD–ROM as CBI
and then identify electronically within
the disk or CD–ROM the specific
information that is claimed as CBI. In
addition to one complete version of the
comment that includes information
claimed as CBI, a copy of the comment
that does not contain the information
claimed as CBI must be submitted for
inclusion in the public docket.
Information so marked will not be
disclosed except in accordance with
procedures set forth in 40 CFR part 2.
C. Where can I get a copy of this
document and other related
information?
In addition to being available in the
docket, an electronic copy of this notice
will be available on the World Wide Web
through the Technology Transfer
Network (TTN). The TTN provides
information and technology exchange in
various areas of air pollution control.
Following signature, an electronic
version of this document will be posted
at https://www.epa.gov/ttn/oarpg under
‘‘Recent Additions.’’
II. Background Information
A. The Role of Emissions Factors and
Stakeholder Comments
An emissions factor is a
representative value that attempts to
relate the quantity of a pollutant
released to the atmosphere with an
activity associated with the release of
that pollutant. These factors are usually
expressed as the mass of pollutant
divided by a unit mass, volume,
distance, or duration of the activity
emitting the pollutant (e.g., kilograms of
particulate emitted per megagram of
coal burned). Such factors facilitate
estimation of emissions from various
sources of air pollution. In most cases,
these factors are simply averages of all
available data of acceptable quality that
were collected through source
performance testing, and are generally
assumed to be representative of
population averages for all facilities in
the source category.
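To make the arithmetic concrete, a minimal illustrative sketch follows (it is not part of the notice, and every value in it is hypothetical): the factor is taken as the average of acceptable test-derived values, and an emissions estimate is the factor multiplied by an activity level.

    # Hypothetical values only: kg of particulate emitted per Mg of coal
    # burned, from three invented source tests of acceptable quality.
    test_values_kg_per_mg_coal = [2.1, 2.6, 2.2]

    # The emissions factor is the simple average of the acceptable test data.
    emissions_factor = sum(test_values_kg_per_mg_coal) / len(test_values_kg_per_mg_coal)

    # Applying the factor to an activity level yields an emissions estimate.
    coal_burned_mg_per_year = 150_000.0
    estimated_emissions_kg = emissions_factor * coal_burned_mg_per_year
    print(f"factor = {emissions_factor:.2f} kg/Mg; "
          f"estimate = {estimated_emissions_kg:,.0f} kg/yr")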
Quantifying air emissions is a vital
aspect of all air pollution programs.
Emissions factors have long been a
fundamental tool in developing
national, regional, state, and local
emissions inventories for air quality
management decisions and in
developing emissions control strategies.
More recently, emissions factors have
been applied in determining site-specific applicability and emissions
limitations in operating permits by
federal agencies, S/L/Ts, consultants,
and industry. These users have
requested guidance on the use of
emissions factors and other emissions
quantification tools (e.g., emissions
testing and monitoring, mass balance
techniques) in developing permits that
are more practical in their enforcement.
Under ideal circumstances, all
emissions data users would quantify
emissions from ongoing operations with
continuous emissions monitoring,
periodic emissions performance testing,
or frequent calculation using well-accepted engineering principles, such as
mass balances or other detailed
engineering calculations. Because these
methods can be time and resource
intensive, users sometimes do not have
or are unable to secure data sufficient to
allow detailed site-specific emissions
determinations. In some cases,
measurement via instruments or long-term performance testing, which would
provide such data, is not feasible or too
costly. Without such data, emissions
factors, which are assumed to be
representative of population-average
values, are frequently used, along with
production information, as a quick, low-cost method to estimate emissions.
EPA’s Office of Air Quality Planning
and Standards (OAQPS) has long
recognized the importance of emissions
factors and has focused effort and
resources on developing and
documenting emissions factors. The
EPA-approved emissions factors are
contained in an online document called
the ‘‘AP–42 Compilation of Air
Pollutant Emissions Factors’’ (hereafter
referred to as ‘‘AP–42’’) available at
https://www.epa.gov/ttn/chief/ap42/
index.html. The document is organized
into 15 chapters that describe industrial
emission sources and the derivation of
industry-specific emissions factors.
Many of the individual sections of this
document are supported by an
associated background report providing
summaries of the individual test data
and a corresponding assigned quality
rating, the rationale for grouping and
using individual data, and the
assignment of the factor and factor
quality.
Emissions factors were originally
established only for use in estimating
emissions for developing national
emissions inventories. However, as
mentioned earlier, emissions factors are
used for many other air pollution
control activities for which they were
not designed.
AP–42, which was developed by
OAQPS, is not the only repository of
emissions factors. Emissions factors
have been developed for a number of
other programs and there are other
databases that contain emissions factors.
For example, EPA’s Office of
Atmospheric Programs has recently
proposed a greenhouse gas reporting
rule and provided many emissions
factors for sources to use in assessing
their emissions. In addition, EPA’s
Office of Research and Development
administers the SPECIATE database that
contains many emissions factors.
Because the applications, uses, and
requirements of these other emissions
factors databases differ from those of
AP–42, these databases have operated in a
fairly autonomous manner.
we are seeking comment on whether
there should be more interaction among
these databases. For a discussion of
SPECIATE, see section IV.D.
As part of a reevaluation of the
emissions factors program, EPA
interviewed and surveyed various
emissions factors users and held a series
of workshops in 2003 and 2004 with
stakeholders to solicit their input on
what is needed to update and improve
the emissions factors program.1 First
and foremost, stakeholders (industry,
S/L/Ts, EPA program offices,
environmental action groups, and
others) indicated that EPA needs to
continue to maintain the AP–42 factors
information compilation and retrieval
system. In addition, they indicated that
it takes EPA too long to develop
emissions factors, that data submitted
for regulatory development have not
been used to develop new emissions
factors, that there have been several
inappropriate uses for emissions factors,
and that, in general, EPA is not
developing new emissions factors. The
stakeholders said that EPA should
develop criteria to address the
development and uses of emissions
factors for purposes other than just
emissions inventory development, such
as for use as screening tools for
compliance determinations,
applicability purposes, and preparing
air program permit applications. They
also said that the current program is
unresponsive to their needs, too
complex for their active participation,
and lacks transparency concerning data
manipulation. More recently, the
National Academy of Sciences (NAS)
(see National Research Council of the
National Academies, 2004, Air Quality
Management in the United States,
Washington, DC: The National
Academies Press) and EPA’s Office of
Inspector General (OIG) (see U.S. EPA,
Office of Inspector General Evaluation
Report: EPA Can Improve Emissions
Factors Development and Management,
Report No. 2006–P–00017, March 22,
2006) also reviewed and commented on
the emissions factors program. Their
comments echoed those of all other
stakeholders in that the EPA must
continue to maintain the emissions
factors program, but it must be
improved to support EPA and
stakeholder uses. They also noted that
EPA should quantify uncertainty to
improve emissions factors and that EPA
should be developing and updating
emissions factors regularly.
1 A copy of the draft report, Emissions Factors Program Improvement Efforts (September 2005), is available on EPA's Web site at: https://www.epa.gov/ttn/chief/efpac/workshops/efp_improvement_efforts_draft.pdf.
B. Overview of the Emissions Factors
Improvement Program
Based on the results of the emissions
factors reevaluation process that
included collecting stakeholder input,
preparing an improvement plan, and an
internal effort to review and reexamine
our efforts, we have identified four
focus areas for improvement that are the
basis for this action:
• Designing a process for developing
and improving emissions factors to
allow easier and more effective
participation by interested parties, to be
open and transparent, to accommodate
the continuing (self-sustaining)
development and improvement of
factors rather than being a large, one-time effort to address the current needs,
and to provide an electronic mechanism
for test report submittal and review. We
want to develop a process that, at the
end of the emissions factors
development, will result in high quality
emissions factors.
• Improving methods for compiling
and providing emissions factors data
and other pertinent information to
users, including complete and easy
access to all available test data.
• Developing guidance on the
application of EPA’s default emissions
factor or the selection of a more
appropriate emissions factor for specific
applications, calculating emissions
factors from available test data or other
information, conducting emissions tests
to facilitate the development of
emissions factors, and evaluating and
considering data quality.
• Updating existing emissions factors
and developing more factors where gaps
currently exist.
EPA intends to implement a multi-part process to improve the emissions
factors program. The first part involves
further development of the existing
electronic reporting tool (ERT) to make
it easier for S/L/Ts, industry, and other
stakeholders to plan, document, accept,
assess, and transmit emissions test data.
The second part involves upgrading the
AP–42 factors information system into
WebFIRE. WebFIRE is an Internet-based
application that compiles and retrieves
emissions factors and performance test
data and information, making it an
interactive, up-to-date, and easily
expandable replacement for
the current AP–42. Additionally, to
make the emissions factors development
process easier and more transparent,
EPA plans to rewrite the existing
emissions factors development
procedures and reissue the revised
document following a public review and
comment process. Finally, in order to
acquire adequate data for the
development or improvement of the
emissions factors, we are considering
requiring the submission of certain
performance testing information by
industry to EPA’s OAQPS via electronic
reporting. Implementing this multi-part
effort will result in a self-sustaining
emissions factors program receiving
ongoing data submittals to improve
emissions estimation for regulatory
authorities and others to use in:
(1) Developing emissions inventories,
(2) updating emissions standards,
(3) identifying and evaluating control
strategies, (4) determining applicability
of permit and regulatory requirements,
(5) assessing risks, and (6) conducting
other air pollution control activities. We
believe this effort will reduce the
burden of handling test data, while
improving access to and the utility of
the data.
C. Goals for the Emissions Factors
Improvement Program
We believe the critical element in
improving the emissions factors
program is changing the role of OAQPS
from sole developer of emissions factors
to a facilitator who provides
stakeholders with the tools to
participate in all aspects of the process,
generates tools that capture the existing
work performed by stakeholders and
enhance consistency across the
program, audits and oversees the
program, and develops policies for the
appropriate use of emissions factors in
non-inventory applications where there
are no policies or where existing
policies are inadequate. To this end, we
encourage collection and submission of
critical site-specific process and testing
information that will allow stakeholders
to improve the predictive accuracy of
emissions factors and characterize the
associated uncertainties. We also want
to encourage and facilitate the electronic
documentation and transfer of source
test information to reduce stakeholder
workload, ease assessment, increase
communications, establish consistency
(content and assessment), increase the
transparency of the entire program, and
provide information transfer to critical
air programs (emissions factors
development, compliance verification,
emissions inventory, permitting, etc.).
Finally, we currently are considering
replacing the highly subjective manual
method of updating all emissions factors
for a source category with a more
consistent, objective, and automated
system that better delineates source
descriptions so that emissions factors’
source categories are more meaningful
and useful. Guidance is a critical part of
developing emissions factors. As such,
we are updating guidance of procedures
for preparing emissions factors to make
the procedures clearer, improve the
predictive accuracy of the resulting
emissions factors, improve stakeholders’
confidence in the revised process, and
help us achieve our overall goals of
improving the emissions factors
program.
III. Emissions Factors Development
Process and Tools
As will be discussed in more detail in
section IV, we propose to move from
this subjective, resource-intensive
system where EPA relies on a relatively
open-ended set of criteria to make major
decisions such as the test data and
factor quality ratings to one that is
objective (more science based) and
designed to reduce the variability
associated with manual emissions factor
development. The new system will
provide an objective evaluation scheme
for grading the quality of each emissions
test, as well.
We are in the process of updating and
revising three key existing tools
(WebFIRE, ERT, and the emissions
factors guidance document) to help us
improve the current system. Note that
the revised emissions factors guidance
document will provide information for
implementing both WebFIRE and ERT.
The existing tools are described in the
remainder of this section. Section IV
describes how we plan to augment and
update these tools to develop the
improved emissions factors
development program.
We seek to replace the manual emissions factor development process, which is shown in Figure 1. The manual emissions factors development process begins with the performance and documentation of source tests at individual facilities. After obtaining the report of the source test, the emissions factors developer (EPA) assesses the documentation with respect to its representativeness to the source category and its precision and accuracy of quantifying the facility's emissions. Test reports are then grouped by process (using the source classification code, or SCC), control device employed, and pollutant. These groupings are reviewed to combine related processes and control technologies that will result in comparable data being used to establish or revise emissions factors. After making determinations about the use of data with differing test report quality ratings, the emissions factors are calculated (or recalculated) with an associated factor quality rating. The public is notified of the availability of the draft factors and is given an opportunity to comment on them. After consideration of the public comments, EPA publishes the new or revised factors in AP–42.

[Figure 1: The manual emissions factors development process.]

A. WebFIRE

WebFIRE, on the EPA Web site at https://cfpub.epa.gov/oarweb/index.cfm?action=fire.main, is the Internet version of the Factor Information Retrieval (FIRE) Data System software application (in a Microsoft Access format) database. WebFIRE contains EPA's recommended emission estimation factors for criteria and hazardous air pollutants obtained from AP–42, Locating and Estimating (L&E) documents, and other documents. The WebFIRE database usually contains a single value (factor) for source classification code (SCC),2 control, and pollutant combination. Users can conduct simple or detailed searches for emissions factors by process, control device, and/or pollutant. There is a separate database (https://www.epa.gov/ttn/chief/database/search.html) that is available to access the complete test reports and other references cited in the section and background report. Also, for many AP–42 sections there is a background report containing summaries of the contents of the supporting test reports, assessments of the quality of these test reports, judgments on the combining and separation of reports for averaging, and the final assessment of the quality rating assigned to the final factor. We are modifying WebFIRE to connect these three components and provide stakeholders with improved access and management capabilities.

2 There are currently a few emissions factors in AP–42 with duplicate values (factors). EPA is working to correct these emissions factors so that there are no duplicates.

B. Electronic Reporting Tool (ERT)

In order to streamline the collection of source test data and ensure the completeness of data collection for the development of emissions factors, we created the ERT. The current version of the ERT is available at https://www.epa.gov/ttn/chief/ert/ert_tool.html. The ERT is a Microsoft Access desktop application that is currently an electronic alternative to the submittal of paper test plans, reports, and evaluations. Currently, data collected using 19 of EPA's emissions measurement methods for stationary sources can be handled by the ERT. The ERT supplements the time-intensive manual preparation and transcription of stationary source emissions test plans and reports with an electronic alternative where the resulting data can be transmitted more easily and quickly to the Agency and S/L/Ts who choose to use this system. The ERT provides a format and a process that: (1) Documents the key information and procedures required by the existing EPA Federal Test Methods; (2) facilitates coordination among the source, the test contractor, and the regulatory agency in planning and preparing for the emissions test; (3) provides for consistent criteria to characterize quantitatively the quality of the data collected during the emissions test; (4) standardizes the form and content of test reports; and (5) calculates the emissions factor, and exports the emissions factor and associated data to WebFIRE. We expect the ERT to significantly reduce the monitoring and testing burden for testers, source owners or operators, S/L/Ts, EPA, and other interested stakeholders in collecting, reviewing, storing, and accessing test data and reports.
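As a rough sketch only: item (5) above amounts to computing a per-test factor from run-level emissions and concurrent process-rate data and packaging it with identifying information for transfer to WebFIRE. The record layout and field names below are invented for illustration and do not reflect the actual ERT schema.

    # Hypothetical, simplified test-report record; not the actual ERT structure.
    test_report = {
        "facility_id": "FAC-0001",
        "scc": "30500623",                    # invented source classification code
        "pollutant": "PM",
        "control_device": "fabric filter",
        "test_method": "Method 5",
        "runs": [
            {"emissions_kg_per_hr": 1.8, "process_rate_Mg_per_hr": 20.0},
            {"emissions_kg_per_hr": 2.1, "process_rate_Mg_per_hr": 22.0},
            {"emissions_kg_per_hr": 1.9, "process_rate_Mg_per_hr": 21.0},
        ],
    }

    # Per-run factors (kg pollutant per Mg processed), averaged into a per-test factor.
    run_factors = [r["emissions_kg_per_hr"] / r["process_rate_Mg_per_hr"]
                   for r in test_report["runs"]]
    test_factor = sum(run_factors) / len(run_factors)
    print(f"{test_report['pollutant']} factor for this test: {test_factor:.4f} kg/Mg")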
C. Emissions Factors Development Guidance

We have developed guidance to assist in the emissions factors development process titled, "Procedures for Preparing Emissions Factors" (EPA–454/R–95–015).3 This document is intended for use by EPA employees, EPA contractors, and external stakeholders. It describes the procedures, technical criteria, and standards and specifications for developing and reporting air pollutant emissions factors or equations for publication in AP–42. The document also includes background on emission factors and their uses and limitations. It describes the pollutant terminology used in AP–42 and discusses some of the emissions test methods used to measure these pollutants. The reasons and procedures for initiating revisions to emissions factors are also discussed. In addition, public participation procedures are discussed. Many of the changes discussed in the proposed emissions factor development process will be reflected in a revised procedures document.

3 We have previously prepared a revised procedures document (2006 draft) for public review. Based on the comments we received, that document was withdrawn and never finalized.
IV. Changes to the Emissions Factors Program, Emissions Factors Development, and Associated Tools

A. Potential Revisions to the Emissions Factors Development Process: Overview and Issues

As described in this notice, our current plans are to move from the relatively static format for emissions factors development to one that is more flexible, current, and transparent. We will strive for a balanced process that may be more prescriptive in many aspects of the program while providing users with the flexibility to derive factors that are more suitable for their specific intended purpose. Figure 2 provides an overview of how this process could work. We believe this process can provide source owners or operators with the tools they need to develop emissions factors and provide environmental authorities with the tools they can use to assess the quality and uncertainty of emissions test data. These tools should reduce real or perceived barriers to emissions factors development and result in a substantially improved emissions factors development process.

[Figure 2: Overview of the proposed emissions factors development process.]

Under the proposed system, source test data would be compiled electronically via the ERT or another electronic format by the source submitting the data. Because the ERT does not yet support all test methods and because some users may prefer to use a different format, we have provided a spreadsheet template that is to be used to submit source test reports that do not use the ERT. See https://www.epa.gov/ttn/chief/ert/ert_tool.html for a copy of the current version of the spreadsheet.
We are also seeking comment on the
availability of other electronic formats
that currently may be used by sources
to report source test information to their
S/L/Ts and whether these formats could
be used or adapted to fit into this
proposed process.
In general, we believe that
standardization of the test report's form
and content will enhance the emission
factor development process while at the
same time increasing the accuracy of the
emissions factors. Performance test data
compiled in the ERT will also provide
value to the enforcement and
compliance monitoring community
through the readily-available
information from the tests in an
electronic format. The ERT will also provide
other items of information from stack
tests that may be used for evaluation
and that EPA's stationary source compliance
monitoring/enforcement system, the Air
Facility System (AFS), does not
currently house, such as the test method
used, the process being tested, emissions
levels, and the stack test review date.
However, we recognize that such report
standardization could have an impact
on S/L/T data systems and how they
electronically store such information.
Some sources might still be required to
submit paper or other reports to satisfy
S/L/T requirements. We request
comment on how the design of the ERT
might mitigate these concerns.
We expect that our improved
emissions factors’ development process,
including the ERT, will facilitate the
submittal of new test data from a
number of sources. As explained later in
this notice, we are considering requiring
certain facilities to submit electronically
their performance test data to WebFIRE.
In addition, it is possible that sources or
groups with an interest in adding or
revising emissions factors for certain
categories might be motivated to submit
data from previous tests or tests
conducted for purposes other than
complying with a Federal standard. To
the extent that these data are
representative of current practices in the
category, they could and should be
considered in emissions factor
development.
We believe that the field evaluations
and source test assessments performed
by S/L/Ts improve the reliability of the
test data. For example, such assessments
will help to ensure testing requirements
are met, the test plan was followed, and
results were accurately recorded while
also minimizing sample recovery/
handling errors and equipment errors.
We want to encourage this type of third
party review of all source tests. Ideally
the S/L/T would use the tools and
criteria we provide to conduct this
review, but in some cases acceptable
reviews might be provided by
independent contractors or others with
an interest in developing or revising
certain emissions factors. Well-conducted
and documented source tests
that have been subject to such review
can potentially receive a higher quality
rating than tests that have not been
reviewed.
We seek comment on other ways that
we could encourage independent ‘‘third
party’’ reviews and the weight we
should give them in assigning a quality
rating. Even in the absence of quality
reviews for a test, there will be broader
quality assurance provisions in the
proposed process. EPA plans to conduct
audits of selected tests to ensure their
quality as part of the overall program. In
addition, we will retain the public
review and comment features of the
existing system to provide additional
assurance that tests submitted to the
system are assigned an appropriate
quality rating. However, at this time, it
is not our intent to make this process a
formal rulemaking process.
Under the current performance test
evaluation system, test data quality is
rated A through D, with A-ratings
assigned to well documented tests
performed by using an EPA reference
test method, or when not applicable, a
sound methodology that is well-documented. At the other end of the
spectrum, a D-rated test is based on test
reports with minimal documentation or
where a generally unacceptable method
was employed. The test quality is
reported in enough detail for adequate
validation, and raw data are provided
that can be used to duplicate the
emission results presented in the test
report. In the absence of better test
reports, lower-rated tests may provide
an order-of-magnitude value for a source
category emission factor. Specific
criteria that are considered in assigning
the test report quality ratings include
source operation (e.g., whether the
source was conducting the test under
representative operating conditions),
test method and sampling procedures,
process information (extent to which
process variation explains variation in
test runs), and documentation of the
analysis and calculations. After
assigning a preliminary emission data
quality rating based on these criteria,
the quality of production data is
considered. Test data that include the
collection of production or process data
during the test are rated at a higher level
than tests that do not include
production data.
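A minimal sketch of how rating logic of this general shape could be expressed follows. The specific criteria, downgrade rule, and thresholds are hypothetical and are not EPA's adopted algorithm.

    # Hypothetical A-D rating logic for a single test report; illustrative only.
    def rate_test_report(used_reference_method, documentation_complete,
                         representative_operation, process_data_collected):
        if not documentation_complete:
            return "D"                       # minimal documentation
        if used_reference_method and representative_operation:
            rating = "A"
        elif used_reference_method or representative_operation:
            rating = "B"
        else:
            rating = "C"
        # Tests lacking concurrent production/process data rate one level lower.
        if not process_data_collected and rating in ("A", "B"):
            rating = {"A": "B", "B": "C"}[rating]
        return rating

    print(rate_test_report(True, True, True, False))   # prints "B"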
Under the process being considered,
the ERT or alternative electronic format
would be modified to provide a rating
for the quality of the individual test
based on specified algorithms and data
quality objectives. The very process of
using the ERT will address many of the
rating issues described above by
encouraging submittal of the
information needed for an A rating. We
are not seeking comment on specific
changes to the ERT and associated
procedures document. However, we are
interested in comments on the general
features we should incorporate to move
us to an automated system for compiling
test data and calculating or assigning
corresponding test ratings. We are also
seeking comments on whether the use of
different formats for the ratings might be
helpful for stakeholders. For example,
would a more prescriptive numerical
test report assessment rating focus more
attention on the quality of the test
reports, thereby improving the
information in these reports and provide
more information to the stakeholders on
the quality of the data? As described
above, should a well-documented
performance test conducted according
to the Federal Reference Method that
has been reviewed by an independent
third party receive a rating adjustment
to reflect the results of the third party
verification? Also, we are seeking
comment on whether the third party
reviewer should have the authority to
reduce the quality rating of a test report
(such as noting poor documentation or
test performance deficiencies).
Under our conceptual approach, the
source test data would be transferred
from ERT to EPA’s Central Data
Exchange 4 (CDX), which is the point of
entry on the Environmental Information
Exchange Network (Exchange Network)
for environmental data exchanges to the
Agency. In the future, we may consider
using the capabilities of the CDX to
provide for future exchanges of
information in these reports
electronically with facility, state, or
federal data systems. For example, as
mentioned earlier, it is possible that
there might be other audiences for the
ERT data such as the AFS. This EPA
database contains compliance
monitoring and enforcement data for
stationary sources of air pollution
regulated by EPA and S/L/Ts. The
environmental regulatory community
uses this information to track the
compliance status of point sources with
various programs regulated under the
Clean Air Act. With certain
modifications, the ERT could be
designed to collect information used by
AFS. We believe that providing stack test and facility data electronically through the ERT, in a format that S/L/Ts can use to update AFS, would reduce some of the existing reporting burden for S/L/Ts. We seek comments on whether the ERT information should be used to provide input to the AFS (and whether this would decrease S/L/T reporting burden). Transfers to other data systems such as the National Emissions Inventory, Toxics Release Inventory, and Title V reporting also may be desirable. We request comments on how and whether the ERT could be expanded to address other program needs.
4 For more information on the CDX, see https://www.epa.gov/cdx/.
The Cross-Media Electronic Reporting Regulation (CROMERR) 5 was recently promulgated to provide the legal framework for electronic reporting of information and data to EPA and others who administer EPA programs. CROMERR is intended to reduce the cost and burden of electronic reporting while maintaining the level of corporate, legal, and individual responsibility and accountability that exists in the traditional paper format. At this time, we intend to develop the ERT to fully comply with CROMERR.
5 For more information on CROMERR, see EPA's Web site at: https://www.epa.gov/CROMERR/index.html.
Once received through CDX, the
source test data would be stored in
WebFIRE. We currently plan to update
WebFIRE to collate and integrate the
data into emissions factors calculations
for similar processes, pollutants, and
control devices. For example, our
current plan is to upgrade WebFIRE to
calculate automatically the arithmetic
mean of the data in individual source
test reports to provide updated
emissions factors on a periodic
schedule. Please note that we do not
envision that this approach would be
used to update emissions factors as each
source test is received. Source test data
will not be used for new or amended
emissions factors until the data have
been vetted through our public review
process. Additional features such as
calculations of other statistical and
distribution characteristics, including
the standard deviation and range of data
values, could also be added. We seek
comments on what kinds of statistical
information would be helpful for
stakeholders.
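A minimal sketch of the kind of summary statistics contemplated above, assuming one averaged factor value per accepted test report (all values hypothetical):

    import statistics

    # One averaged factor value (kg/Mg) per accepted source test report; invented data.
    report_factors = [0.112, 0.098, 0.131, 0.104, 0.120]

    factor_mean = statistics.mean(report_factors)      # the updated emissions factor
    factor_stdev = statistics.stdev(report_factors)    # sample standard deviation
    factor_range = (min(report_factors), max(report_factors))

    print(f"factor = {factor_mean:.3f} kg/Mg; "
          f"std dev = {factor_stdev:.3f}; range = {factor_range}")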
The frequency of emissions factors
updates is an issue for which we are
seeking comment. As noted above,
while WebFIRE might theoretically be
structured to calculate a new or revised
emissions factor whenever a qualified
test is submitted, we understand that
updating emissions factors very
frequently may be disruptive to
emissions factors users because it could
create a rapidly moving target and add
significant uncertainty for users.
Instead, we think a better approach is to
schedule periodic updates. Such
updates might be based on a specified
calendar schedule to allow interested
parties to understand when an update
might be expected. Because updating
emissions factors impacts many other
programs, such as operating and new
source review permitting, modeling, risk
and technology analysis, control
strategy development, enforcement, and
others, we believe that updating specific
emissions factors more than once per
year would complicate activities of
these other programs. Other triggers
could be when a certain volume of new
data is submitted in certain categories,
or when the newly submitted data
results in significant changes to the
emissions factor. There also might be
value in making supplementary updates
whenever there is an associated review
of an existing standard (every 8 to 10
years). We are seeking comments on the
frequency and scheduling of emissions
factors updates.
Some stakeholders have expressed
concern that new data would be used to
automatically update emissions factors
and that there would be no opportunity
afforded to comment on the accuracy,
representativeness, and completeness of
the new data. We believe this is a valid
concern and are planning, as discussed
above, to only update emissions factors
on a periodic schedule. In addition, we
are planning on incorporating a full
public review and comment period into
WebFIRE, similar to the existing system
for updating emissions factors. When all
data for a specific source category,
control device, and pollutant are
compiled and resultant emissions
factors are drafted, we currently notify
all subscribers to the CHIEF list serve
(https://www.epa.gov/ttn/chief/
listserv.html) that new draft emissions
factors are available for public review.
We plan to add a feature into WebFIRE
that will automatically notify
subscribers of the availability of new
proposed emissions factors for review
and comment.
We plan to add flexibility to WebFIRE
so that the user may calculate their own
emissions factor using a different mix of
test reports than those used for the
existing emissions factor. Sources
already have the ability to suggest
alternative factors, but this change to
WebFIRE could help make the
development process more transparent.
This capability might lessen the need
for extremely frequent updates and
would allow the calculation of
emissions factors for specific
applications for which the average
emissions factor is inappropriate.
However, the resulting ‘‘user
calculated’’ emissions factors would not
be considered ‘‘official’’ EPA factors and
we do not plan to retain these emissions
factors in WebFIRE.
We currently plan to build into
WebFIRE decision criteria that would be
used to select the test data to be used
in an emissions factor update. For
example, one of the current decision
criteria includes the exclusion of C- and
D-rated data whenever A- or B-rated test
data are available. We seek comment on
this approach and other criteria we
should consider. We anticipate that the
changes to the data reporting system
will generally result in higher quality
and significantly more data than may
have been available in the past for
developing some emissions factors. At
what point and under what conditions
do we drop lower quality data from the
emissions factor calculation? If we allow
the use of lower quality data, how
should it be incorporated? For example,
if we have an existing emissions factor
that is based upon several ‘‘C’’ rated
tests and we receive a new high quality
performance test, should we average
together all of the data or only use the
most recent high quality test? Would a
numerical quality rating that would
allow automated selection criteria be
more useful than the current letter
rating system?
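The data-selection rule mentioned above (dropping C- and D-rated data whenever A- or B-rated data are available) could be sketched as follows; ratings and factor values are hypothetical:

    # Illustrative only: keep A/B-rated tests when any exist, else use everything.
    tests = [
        {"rating": "A", "factor": 0.10},
        {"rating": "B", "factor": 0.12},
        {"rating": "C", "factor": 0.30},
        {"rating": "D", "factor": 0.05},
    ]

    high_quality = [t for t in tests if t["rating"] in ("A", "B")]
    selected = high_quality if high_quality else tests

    emissions_factor = sum(t["factor"] for t in selected) / len(selected)
    print(f"{len(selected)} tests used; factor = {emissions_factor:.3f}")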
WebFIRE will be revised to assign an
emissions factor quality rating based on
specified criteria. We presently assign
an emissions factor rating to indicate the
ability of the overall average factor to
represent a national annual average
emissions rate for the source category.
The emission factor rating is an overall
assessment of how good a factor is,
based on both the quality of the test(s)
or information that is the source of the
factor and on how well the factor
represents the emission source. Higher
ratings are for emission factors based on
many unbiased observations, or on
widely accepted test procedures. In the
current procedures guidance document,
we state as an example that an
emissions factor based on 20 or more
source tests on different randomly
selected plants would likely be assigned
an ‘‘A’’ rating if all tests are conducted
using a single valid federal reference
measurement method. Likewise, the
guidance indicates that a single
observation based on questionable
methods of testing would be assigned an
‘‘E’’ rating. Should the current EPA
approach for WebFIRE incorporate more
standardized and consistent criteria for
assigning emissions factor quality
ratings? Should the criteria be
predicated upon an estimated predictive
accuracy of the national average
emissions factor? How should the
quality rating of the supporting test data
be incorporated into the emissions
factor quality rating?
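As a hypothetical sketch only: the two examples cited from the current guidance (20 or more reference-method tests at randomly selected plants supporting an "A" rating, and a single observation from questionable methods receiving an "E") could anchor a mapping such as the one below. The intermediate thresholds are invented for illustration.

    # Hypothetical factor-rating mapping; only the "A" and "E" anchor points
    # come from the guidance examples cited above.
    def rate_factor(num_tests, all_reference_method, questionable_methods):
        if questionable_methods and num_tests <= 1:
            return "E"        # single observation, questionable test methods
        if num_tests >= 20 and all_reference_method:
            return "A"        # many tests at randomly selected plants
        if num_tests >= 10 and all_reference_method:
            return "B"        # invented threshold
        return "C" if num_tests >= 3 else "D"   # invented thresholds

    print(rate_factor(20, True, False))   # prints "A"
    print(rate_factor(1, False, True))    # prints "E"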
As we revise WebFIRE, a key issue
will be how it groups emissions data
into related clusters for which the
average emissions factors will be
developed. What groupings could be
performed automatically and which
ones would require external manual
assessment and management? Who
should be responsible and what
additional level of peer review should
be introduced? Examples of some of the
groupings we consider in the present
system include the source category,
process type, representativeness of
source, emission source, equipment
design, operating conditions, raw
material or fuel characteristics, control
devices, and test method used. We
request comment on the ways we
should incorporate these groupings into
WebFIRE and whether there are
additional criteria that should be added.
For example, what is the best way to
characterize facilities for emissions
factor development purposes? Currently
we are using SCC and pollutant codes
with control device type. Is the current
characterization system robust enough?
Once the SCC for the facility tested,
the specific pollutant measured, and the
control device are determined, the
existing procedures should guide the
developer through a process of grouping
the data. One type of grouping may
result in combining data from several
SCCs (for example Utility, Industrial,
Commercial and Institutional
combustion, or the four types of
Portland Cement Manufacturing
processes). Another type of grouping
could result in data from different types
of control devices being combined. In
the emissions factor development
process, these characteristics (and
others) are evaluated to determine
whether there is a significant difference
in the factors when different SCC and/
or controls are represented. We
traditionally combine data from
different SCC and controls for some
pollutants, if the factors are not
significantly different. The criteria used
to determine whether to combine data
have varied. Should a more
standardized assessment and decision
criteria be developed? Should these
criteria be based upon a statistical
approach? Would a combination of
statistical and non-statistical approaches
be reasonable? If so, when would one
approach be preferred over the other
approach?
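One way to sketch the grouping and combination question raised above is shown below, using a two-sample t-test as a purely hypothetical statistical criterion for whether two SCC groups may be combined; the records, codes, and threshold are invented, and this is not an adopted EPA procedure.

    from collections import defaultdict
    from statistics import mean
    from scipy import stats   # third-party dependency used here for the t-test

    # Invented test-derived factors keyed by (SCC, control device, pollutant).
    records = [
        {"scc": "10100222", "control": "ESP", "pollutant": "PM", "factor": 0.11},
        {"scc": "10100222", "control": "ESP", "pollutant": "PM", "factor": 0.13},
        {"scc": "10100226", "control": "ESP", "pollutant": "PM", "factor": 0.12},
        {"scc": "10100226", "control": "ESP", "pollutant": "PM", "factor": 0.15},
    ]

    groups = defaultdict(list)
    for r in records:
        groups[(r["scc"], r["control"], r["pollutant"])].append(r["factor"])

    (key_a, a), (key_b, b) = list(groups.items())
    result = stats.ttest_ind(a, b, equal_var=False)   # Welch's two-sample t-test
    if result.pvalue > 0.05:          # no significant difference: combine the groups
        print(f"combine {key_a} and {key_b}: factor = {mean(a + b):.3f}")
    else:
        print(f"keep separate: {mean(a):.3f} vs {mean(b):.3f}")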
In some cases, a grouping of SCC and
control device type has what appears to
be a bimodal distribution of emissions.
When detailed information is available
in the test reports, these differences
could be attributed to differences in the
raw material, the production method,
the end product specification, or one or
more production or control device
parameters. What methods should be
used to assess and address these
situations? Should the same assessment
approach used to cluster data be used?
Should there be a more rigorous
approach adopted? In addressing
situations where there are significant
differences, how should they be
addressed? In the past, these situations
have been addressed through the
expansion of the available SCCs. In
some cases this has led to increased
confusion for the user of emissions
factors. In lieu of expanding the
available SCCs, should we develop
additional criteria in WebFIRE to allow
for broader differentiation of the
emissions factors?
How do we determine whether a
specific source has significantly
changed such that the existing
emissions factor is no longer
appropriate? There are many examples
of significant changes, including
variance in control device performance
over time or process changes that alter
emissions. We are seeking comment on
how to determine whether a process
change is significant enough to warrant
a new or revised emissions factor. We
are also seeking comment on how to
account for control device performance
in establishing emissions factors.
Another question is how WebFIRE
will assess data collected by non-EPA
reference methods, such as those
developed by the California Air
Resources Board or the American
Society for Testing and Materials
(ASTM). We believe that, in many cases,
these "other" methods may not be
significantly different from EPA reference methods and, as is the case with
some ASTM methods, can be used as
alternatives to EPA reference methods
or are referenced in some of EPA’s
reference methods. To the extent the
method is a close replica of the EPA
method, we believe that WebFIRE
should be able to note the different, but
similar, method when using its data to
develop emissions factors. We currently
accept performance test data collected
from non-EPA reference methods to
develop or revise emissions factors and
we are inclined to continue this
practice. We are seeking comment on
whether the use of methods other than
EPA-reference methods should be noted
when used to develop emissions factors.
Another similar issue is where multiple
methods can be employed to test a
pollutant. For example, there are several
federal reference methods for testing
particulate matter. The particulate
matter methods were usually designed
for a specific source category or process,
but now have been used for other
sources. One approach we have been
considering is a crosswalk in WebFIRE
and/or the ERT to explain the
differences between the various
methods and pollutants being tested and
when such methods are appropriate.
Are there some methods that should be
excluded from WebFIRE? For example,
EPA Method 25A can be used to
develop a mass emissions factor.
However, it does not measure all the
components of hydrocarbons. We also
request comment on how the quality
rating might be adjusted to account for
methods that are less easy to compare
directly.
There are issues associated with the
process for developing draft factors. We
request comment on how new test data
should be presented (prior to WebFIRE
calculating the emissions factor), when
a commenter believes there are errors in
the test data. Some stakeholders have
suggested that we should make all data
available as they are submitted (for
public review and comment), but not to
be used to update the emissions factors
until all available data are compiled and
evaluated. Should the commenter
provide a third party review or update,
should the test be returned to the
facility for correction, or should EPA
perform the third party review? Should
the draft emissions factor be presented
(along with the new test data) and
should the draft factor quality be
presented? In general, what should be
the responsibilities of the commenters,
EPA, and the tested source? We are also
seeking comment on whether there
should be a specified time for
submitting comments? Should data be
posted to the site when it is submitted
or during some specified period prior to
the update of the emission factor in
WebFIRE?
There are several data handling
criteria associated with preparing draft
emission factors. These criteria are
addressed in the current procedures
document and include data averaging,
rounding, outliers, detection limits, use
of blanks, and format and unit of
measure of the factor. We are requesting
comment on whether any changes or
additions are needed regarding these
criteria as we develop changes to
WebFIRE. We are especially interested
in your comments on how to average
test data that are below the detection
limits of the analyzer. Similarly, we
currently provide the arithmetic mean
as the best measure of an emissions
factor to provide a tool for estimating
emissions where there are gaps in
emissions inventories. However, other
descriptive statistics such as median,
mode, range, percentiles, and standard
deviation may also be useful in
characterizing emissions for other
purposes. How the precision of the
supporting data is characterized is a
related issue. In general, we believe that
the impact associated with the
emissions variability between sources
will be reduced when we obtain
improved test reports via the ERT or
alternative electronic format and as we
obtain a larger number of higher quality
tests. We expect that more high quality
data will yield more accurate emissions
factors. In addition, improved process
information will allow for developing a
process-based factor, which will improve
the predictive accuracy of the resulting
emissions estimate. We request
comment on our plans to provide
additional information on the precision
and accuracy of the emissions factors in
the new emissions factors development
process. This additional information
would include the median, mode, range,
and standard deviation of the data set
used to develop the emissions factor.
What methodologies and criteria should
be used to achieve more and better
factors? Should WebFIRE be limited
only to factors that have documented
supporting source test data? Should we
continue to allow the expansion of
emissions factors based upon
unsupported assessments (i.e., assumed
control efficiencies applied to average
controlled factors to arrive at an
uncontrolled factor, and then a
subsequent assumed control efficiency
applied to that uncontrolled factor to
arrive at a controlled factor)?
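Two of the points above lend themselves to a short, purely illustrative sketch: common substitution conventions for averaging runs below the detection limit (zero, one-half the detection limit, or the full detection limit), and the kind of back-calculation described in the parenthetical, in which an uncontrolled factor is derived from a controlled factor and an assumed control efficiency. All numbers are invented, and no convention is endorsed here.

    # Averaging runs where some results are below the detection limit (None).
    runs = [0.040, 0.055, None, None]
    detection_limit = 0.010

    def average_with_substitution(values, substitute):
        filled = [v if v is not None else substitute for v in values]
        return sum(filled) / len(filled)

    for label, sub in [("zero", 0.0), ("half DL", detection_limit / 2),
                       ("full DL", detection_limit)]:
        print(label, round(average_with_substitution(runs, sub), 4))

    # Back-calculating an uncontrolled factor from a controlled factor and an
    # assumed control efficiency (the "unsupported" expansion described above).
    controlled_factor = 0.12      # kg/Mg, from tests of controlled units
    assumed_efficiency = 0.95     # assumed 95 percent control efficiency
    uncontrolled_factor = controlled_factor / (1.0 - assumed_efficiency)
    print(round(uncontrolled_factor, 2))    # 2.4 kg/Mg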
Some stakeholders have requested
development of emissions factors for
uncontrolled processes. It is not
surprising that the existing emissions
factors characterize emissions for
controlled processes, because these are
the emissions sources that typically are
subject to regulation and required to
conduct performance tests to
demonstrate compliance. However,
should a source desire to test
uncontrolled processes and enter the
information into the ERT, we would
accept such data. A broader issue might
be how we could encourage
stakeholders to provide any data
(controlled or uncontrolled) and/or to
adopt the use of the ERT for reporting
of testing programs not required for
federal regulatory purposes.
Some industry groups and trade
associations independently have
developed industry-specific emissions
factors. In some cases, these
stakeholders have asked us to include
their emissions factors in WebFIRE
without a critical review of the source
testing and resultant data. If these
groups choose to submit their data
through the ERT or an alternative
electronic format and the submittals
result in highly rated tests, we believe
their data should be considered the same
as any other data for calculating
emissions factors.
However, some of these tests may
involve information that the sources
being tested consider proprietary or the
test reports may lack critical details
because they were conducted for
different purposes. Where do we draw
the line in accepting such data for use
in developing emissions factors? If we
accept some lesser quality tests and
data, would others be encouraged to do
the same, which may result in less
transparency in the process and poorer
quality emissions factors? If we consider
CBI data, how can we assure the other
stakeholders of the reliability of the
supporting data without incurring a
workload on ourselves that would
substantially slow the process? A similar
issue is whether we should accept
stakeholders' own assessments of their
source test data. We believe
one way to address this concern is to
have an independent third party review.
We have discussed third party review to
ensure objectivity of the data elsewhere
in this notice.
We intend for the revised emissions
factors development process guidance to
retain the opportunity for public review
of the individual test data, the emissions
factor calculations, and associated
quality rating prior to finalizing any
new or revised emissions factor.
However, as previously discussed, our
current thinking is to modify some
aspects of the review process. For
example, we currently plan to change
from revising entire sections in AP–42
at one time to a review of recently
added source test data. We are also
considering conducting a periodic
review of the entire WebFIRE database
(limited to data that have been submitted
since the last review) at a single time. We request
comment on these changes and
suggestions for alternative approaches to
updating emissions factors and handling
data before they are used to update
emissions factors. We also recognize the
potential impact that changing
emissions factors can have on sources
(e.g., a higher revised emissions factor
could mean that the source may be out
of compliance, or the source may
become subject to newly applicable
requirements such as Title V or Toxics
Release Inventory reporting). Should we
limit reviews to the additional source
tests or should we allow reviewers to
address the implications of these
additions? We request comment on any
steps that could enhance public review
of the emissions factor development
process and outcome and that would
contribute to the timely development of
new and revised factors.
B. Test Data Submittal Requirements
We believe that an additional
enhancement to the current emissions
factors system is for us to take steps to increase
the quality and quantity of performance
test data submittals. With the ERT, we
believe we have a tool to encourage the
submission of higher quality test data.
However, the quantity of data submittals
has to be increased to ensure continuous
development of better emissions factors.
Unfortunately, while the ERT has been
available for several years, we are not
seeing widespread use of it to submit
data to EPA for use in emissions factors
development. There could be several
reasons that test data submittals to EPA
are not more widespread.
• There is no regulatory driver
requiring submission of data.
• Stakeholders are worried that data
submitted this way will result in
emissions factors being updated too
quickly, making the verification of
appropriate emissions factors a more
difficult process.
• The ERT is perceived as requiring
too much data, or as requiring more data
than S/L/Ts normally require for
performance testing.
• There are electronic compatibility
issues for agencies with electronic
reporting systems that are similar to
ERT in scope. Some agencies may have
their own electronic reporting systems,
but these may be limited to the
reporting of the test results only.
• There is a perception that using the
ERT costs more than the traditional
paper formats or that using the ERT will
increase the costs of performance testing
to collect the information required by
the ERT.
• Agencies still require paper reports
or a signed copy of the report.
In order to ensure we receive timely
submittal of data necessary for a robust
emissions factors program, we are
considering using the authority under
section 114 of the Clean Air Act to
require the electronic submission to
EPA of performance test reports
conducted for compliance certifications
or other regulatory purposes.
Specifically, we are considering
amending the reporting provisions of
the 40 CFR parts 60 (New Source
Performance Standards (NSPS)), 61
(National Emission Standards for
Hazardous Air Pollutants (NESHAP)),
and 63 (Maximum Achievable Control
Technology (MACT standards)) General
Provisions to require electronic
submittal of performance tests that are
already required by standards in these
parts. The General Provisions contain
requirements, such as monitoring,
recordkeeping, and reporting that are
common to all NSPS, NESHAP, and
MACT rules. We want to emphasize that
this approach would not add any
additional performance testing. Nor do
we anticipate that this requirement
would significantly increase the
reporting and recordkeeping burden of
sources that are already required to
submit their performance test data. As
described below, we think that using the
ERT will likely result in reducing the
overall burden of submitting test data by
standardizing the reporting form and
automating many of the quality
assurance and calculation features
associated with paper reporting. We are
seeking comments on the concept of
requiring electronic submittal of
performance reports. We are also
seeking comments on any perceived
reduction (or other benefits) or addition
in costs to stakeholders should we
require the submittal of performance
tests required by parts 60, 61, and 63.
Should we propose such requirements
in a future rulemaking, we will assess
this potential burden reduction.
We also request comment on whether
we should specify required
elements to be contained in source test
reports. The components would include
not only the documentation of the
conduct of the stack sampling activities,
but also the process parameters, such as
process operations, control device
design, and monitoring parameters that
are indicative of the emissions
performance of the process and control
device. We believe that requiring these
components should not increase
performance test burdens, because this
kind of information is required in the
existing methods and is necessary to
evaluate conformance with the test
method or compliance with the
applicable parts 60, 61, or 63 provisions.
The advantage of the ERT, which was
developed with input from stack testing
companies, is that it would provide a
standardized method and template to
collect and store all the documentation
required.
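As a purely illustrative aid, the sketch below gathers the report elements named in this paragraph (stack sampling documentation plus process and control device parameters) into a single Python record. The field names, example values, and structure are hypothetical; they do not describe the ERT's actual data model.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestRun:
        run_id: str
        pollutant: str
        result: Optional[float]        # measured value; None if below detection limit
        units: str = "lb/ton"

    @dataclass
    class SourceTestReport:
        facility_name: str
        scc: str                       # source classification code
        test_method: str               # e.g., an EPA reference method designation
        process_description: str
        process_rate: float            # production rate during testing
        process_rate_units: str
        control_device: str            # control device design/type in service
        monitoring_parameters: dict    # parameters indicative of control performance
        runs: List[TestRun] = field(default_factory=list)

    # Hypothetical example record.
    report = SourceTestReport(
        facility_name="Example Plant No. 1",
        scc="30500399",                # hypothetical SCC
        test_method="EPA Method 5",
        process_description="rotary dryer",
        process_rate=42.0,
        process_rate_units="tons/hr",
        control_device="fabric filter",
        monitoring_parameters={"baghouse_pressure_drop_in_wc": 4.5},
        runs=[TestRun("R1", "PM", 0.012), TestRun("R2", "PM", 0.015)],
    )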
We believe that obtaining these test
data already collected for other
purposes and using them in the
emissions factors development program
will save industry, S/L/Ts, and EPA
time and money. A benefit of submitting
these data to WebFIRE electronically is
that these data will greatly improve the
overall quality of the existing and new
emissions factors by supplementing the
pool of emissions test data upon which
the emission factor is based and by
ensuring that data are more
representative of current industry
operational procedures. Submitting
these data to EPA will address a
common complaint we hear from
industry and regulators that emissions
factors are outdated and/or not
representative of a particular source
category. We also believe that having
these data will enable EPA to conduct
more effective residual risk analyses
(required under section 112(f) of the
Clean Air Act Amendments of 1990)
and periodic technology reviews for
parts 60 and 63 (NSPS and MACT standards,
respectively), without requiring industry
to submit additional data. Moreover, as
the emissions factors for each source
category are populated with more high-quality
tests, the accuracy of the emissions
factors will increase. The regulations at
40 CFR parts 60, 61, and 63 (the NSPS,
NESHAP, and MACT standards) already have
performance test requirements and,
again, this rule would not add
additional testing. However, we will
need to revise the reporting
requirements for these rules. One option
we are contemplating is to amend the
reporting requirements of the general
provisions for 40 CFR parts 60, 61, and
63 to require submittal of required
performance testing to EPA. Hundreds
of these performance tests are
conducted each year and the resultant
test reports and pertinent data reside in
S/L/Ts’ filing cabinets. EPA does not
receive these tests routinely, and does
not have funding to travel to the S/L/T
offices to copy and/or scan these tests to
obtain the data. Consequently, emissions
factors remain static.
We are seeking comment on the scope
of required data submittals. For
example, there are some source
categories with numerous sources and
frequent testing requirements. In some
cases, this might result in hundreds of
submittals for the same category. Should
there be a process to limit the number
of reports in these situations? Also,
should there eventually be a cutoff in
the submittal requirement after several
years of data have been submitted?
Statistical analyses show that data from
more than 30 source tests normally do
not appreciably impact the mean value
of the emissions factor. On the other
hand, if we limit the number of source
test reports, then how would we
determine that there had been
significant changes in processes and/or
controls that might influence the
existing emissions factors, suggest the
need for new emissions factors, or the
need for new source classification
codes?
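The observation that roughly 30 source tests are usually enough to stabilize the mean follows from elementary statistics: if individual test results have standard deviation s, the standard error of the mean falls as s divided by the square root of the number of tests n, so each test beyond about 30 shifts the mean relatively little. The short sketch below, using an assumed standard deviation, is offered only to make that reasoning concrete; it is not part of EPA's procedures.

    import math

    s = 0.20   # assumed standard deviation of individual test results (lb/ton)
    for n in (5, 10, 30, 50, 100):
        sem = s / math.sqrt(n)   # standard error of the mean for n tests
        print(f"n = {n:3d} tests -> standard error of the mean = {sem:.3f} lb/ton")

Going from 5 to 30 tests cuts the standard error by more than half, while going from 30 to 100 cuts it by less than half, which is the diminishing return described here; any submittal cutoff would still have to weigh the separate need to detect changes in processes or controls.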
Requiring submission of performance
test data will require coordination with
respect to changes to ERT and WebFIRE.
For example, ERT will need to be
updated to accommodate other
pollutant measurements that may be
required in 40 CFR parts 60, 61, and 63.
The ERT also needs to be modified to
transmit data to a centralized point
(EPA's Central Data Exchange), so that
the data can be stored in WebFIRE for
future use.
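As a sketch of what a standardized electronic hand-off could look like, the example below validates a completed test record against a list of required fields and serializes it to JSON, a neutral interchange format that a desktop tool could produce for transmission to a central collection point. The field names and the validation rule are assumptions for illustration; nothing here describes the actual formats or protocols used by the ERT or the Central Data Exchange.

    import json

    # Hypothetical completed test record (field names are illustrative only).
    record = {
        "facility_id": "EXAMPLE-0001",
        "scc": "30500399",
        "pollutant": "PM",
        "test_method": "EPA Method 5",
        "runs_lb_per_ton": [0.012, 0.015, 0.011],
        "control_device": "fabric filter",
        "process_rate_tons_per_hr": 42.0,
    }

    required = {"facility_id", "scc", "pollutant", "test_method", "runs_lb_per_ton"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"record is missing required fields: {sorted(missing)}")

    # Write the validated record to a file that could then be transmitted for
    # central storage and later emissions factor calculations.
    with open("source_test_record.json", "w") as f:
        json.dump(record, f, indent=2)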
We believe that the ERT, or an alternate
system (such as some existing S/L/T
electronic performance test submittal
software), should be the preferred
method for submitting test data, to help
ensure the quality of the data that are
used in emissions factors development.
In addition to providing an easy way to
submit performance tests and more
consistency in these submissions, the
ERT addresses some source test
reporting deficiencies we have observed
over the years. For example, not all
source tests received from S/L/Ts
include the documentation necessary to
verify that the procedures established in
the applicable test method are being
performed. Test reports also may fail to
include reports and the requisite
documentation from laboratories
describing the analyses performed.
Documentation is sometimes lacking
regarding the facility’s production level,
process flow rate, secondary products,
final products, or other integral
information. Information regarding the
facility's operating level at the time of
testing, i.e., whether at normal or near
maximum production levels, may also be needed.
Critical design and operational
information on the equipment used to
control the pollutants being tested also
may be missing. Given our objective to
improve the quality of data used to
develop emissions factors, we think this
detailed information may be needed.
The absence of any of this information
will be considered in rating the quality
of the performance test data.
In summary, we request comment on
whether additional source and testing
information should be required to be
submitted to the ERT to enhance the
emissions factor development process.
To what extent should background
information on the source, such as
process flow data, be required to be provided?
Finally, additional data may be needed
to develop algorithms (based on
emissions factors), such as those used in
the TANKS \6\ program. In cases where
we seek information on process
conditions, we may find that a few
sources may consider this information
or data to be CBI. There are several
issues with requiring CBI, and we are
seeking comment on the receipt of CBI
to develop more accurate emissions
factors.
C. Emissions Factors Content and
Format
The existing AP–42 currently
expresses emissions factors as the
arithmetic mean, which generally is an
expeditious choice for use in traditional
applications such as filling gaps in
emissions inventories. However, our
current thinking is to identify ways to
expand the scope of emissions factors’
application into areas where the existing
format of the factors may not satisfy the
new application. For example, it may be
helpful to provide the range of the test
data to users, so that they can
understand the variability of the source
tests used to develop a particular
emissions factor. Also, WebFIRE could
be modified to calculate and provide
other relevant statistical and
distribution characteristics, including
the standard deviation, in order to
provide users with a more complete
description of the data. Such a
description, whether tabular or
graphical, could help educate users and
allow them to make better informed
decisions. We seek comment on the type
and format of emission factor
information beyond the mean value that
would be useful for stakeholders.
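For illustration, the sketch below computes several of the descriptive statistics named above for a hypothetical set of test-derived factors and prints them as a simple table of the kind WebFIRE might display. The data and the layout are assumptions, not a WebFIRE specification.

    import statistics

    # Hypothetical factors derived from individual source tests (lb/ton).
    tests = [0.42, 0.37, 0.05, 0.51, 0.08, 0.45, 0.39, 0.60, 0.33, 0.47]

    quartiles = statistics.quantiles(tests, n=4)   # 25th, 50th, 75th percentiles
    summary = {
        "mean (traditional factor)": statistics.mean(tests),
        "median": statistics.median(tests),
        "minimum": min(tests),
        "maximum": max(tests),
        "standard deviation": statistics.stdev(tests),
        "25th percentile": quartiles[0],
        "75th percentile": quartiles[2],
    }

    print(f"Number of tests: {len(tests)}")
    print(f"{'Statistic':<28}{'Value (lb/ton)':>16}")
    for name, value in summary.items():
        print(f"{name:<28}{value:>16.3f}")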
D. Interacting With the SPECIATE
Database
SPECIATE is the EPA repository of
total organic compound (TOC) and
particulate matter (PM) speciation
profiles for emissions from stationary
and mobile air pollution sources. The
profiles are key inputs to air quality
modeling and source-receptor modeling
applications. SPECIATE essentially
provides emissions factors and
information for pollutants, from both
controlled and uncontrolled processes,
at a level of detail that is not adequately
or traditionally presented in AP–42. The
emissions factors developed for
SPECIATE are gleaned from available
sources, such as test data, literature
searches or academic studies.
References and data quality ratings are
provided to guide the user.
    \6\ TANKS is a Windows-based computer software
program that estimates volatile organic compound
(VOC) and hazardous air pollutant (HAP) emissions
from fixed- and floating-roof storage tanks. TANKS
is based on the emission estimation procedures
from Chapter 7 of AP–42.
    We are
seeking comment on whether SPECIATE
(or any other source of emissions
factors) should be linked to or contained
in WebFIRE.
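For readers less familiar with speciation profiles, the sketch below shows the basic arithmetic involved: a total organic compound (TOC) emissions factor is multiplied by the mass fractions in a profile to yield species-level factors. The factor and the profile values are invented for illustration and are not taken from SPECIATE.

    # Hypothetical TOC emissions factor for a process (lb TOC per ton of product).
    toc_factor = 2.5

    # Hypothetical speciation profile: mass fraction of TOC assigned to each species.
    profile = {
        "toluene": 0.20,
        "xylenes": 0.15,
        "formaldehyde": 0.05,
        "other organic compounds": 0.60,
    }

    assert abs(sum(profile.values()) - 1.0) < 1e-9, "profile fractions should sum to 1"

    for species, fraction in profile.items():
        print(f"{species:<26}{toc_factor * fraction:6.3f} lb/ton")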
V. Request for Comment and Next Steps
As described throughout this notice,
EPA is soliciting comments to help
improve the way emissions factors are
developed and used. We also encourage
readers to submit other general
comments and supporting data that
could help us further improve the
emissions factors program. In order to
ensure a well-balanced response and
develop the best possible product, we
encourage the submittal of both
comments offering suggestions and
changes and those supporting our
current thinking on potential emissions
factors program improvements.
For the convenience of the reader, the
following list summarizes the major
areas for which we are seeking
comment:
• Is it appropriate to amend the
reporting provisions of the 40 CFR parts
60, 61, and 63 General Provisions to
require electronic submittal of
performance tests that are already
required by standards in these parts?
• As acknowledged earlier, emissions
factors are used for many air pollution
control activities that were not
envisioned when this program was
established. We are seeking comment on
the appropriateness of using emissions
factors for these other purposes and, if
they are to be used for other purposes,
whether there should be any additional
requirements for these emissions factors
(such as using only highly rated emissions
factors for permitting) or more information
required for these emissions factors
(such as greater precision and accuracy).
• Are third party reviews of
performance tests needed and, if so,
then how could we encourage third
party reviews of test reports and what
weight should we give reviews in
assigning a quality rating?
• Should we require electronic
submittal of performance tests via the
ERT or some similar electronic
submittal software (such as existing S/
L/T submittal software)? What is the
availability of other electronic formats
that currently may be used by sources
to report source test information to their
S/L/Ts? Could these formats be used or
adapted to fit into our proposed
process?
• Would a different format for the
ratings of test data be useful? For
example, would a numerical system
provide more information on the quality
of the test rating?
• If needed, should additional
information be required as part of ERT
to enhance the emissions factors
development process? Should we obtain
continuous emissions monitoring data
in a fashion that could be used for
emissions factors development in the
next versions of ERT and WebFIRE?
• We plan to build into WebFIRE
decision criteria that would be used to
select the test data to be used in an
emissions factors update. For example,
we may have four performance tests
conducted in 1979 and four
performance tests conducted in 1995
where the source made a slightly
different product. What tests should we
use to develop the emissions factors and
what criteria should we consider to
select the performance tests?
• How should emissions data be
grouped into related clusters for which
the average emissions factors will be
developed? Examples of some of the
criteria we consider in the present
system include the source category,
process type, representativeness of
source, emission source, equipment
design, operating conditions, raw
material or fuel characteristics, control
devices, and test method used (see the
illustrative grouping sketch following this list).
• How should WebFIRE assess data
collected by non-EPA reference methods
(such as those developed by the
California Air Resources Board) or data
from two different methods that are
averaged to develop an emissions
factor? How might the quality rating be
adjusted to account for methods that are
less easy to compare directly?
• At what frequency or schedule
should emissions factors in WebFIRE be
updated?
• There are several data handling
criteria associated with preparing draft
emission factors. These criteria include
data averaging, rounding, outliers,
detection limits, use of blanks, and
format and unit of measure of the factor.
How should we account for these
potential variables in emissions factors?
• Besides the arithmetic mean used as
the traditional emissions factor, what
other statistical characteristics (such as
median and mode factors or other
information from the data sets) should
also be provided, and in what format,
i.e., tabular or graphical, should they be
provided?
• Should there be a process to limit
the number of performance test reports
from a particular source category
submitted to EPA? For example, should
we establish a threshold in the submittal
requirement after 50 or 100 performance
tests have been submitted? If so, then
how would EPA know when source
categories significantly change process
or controls, such that we would want
additional performance tests for
emissions factors revisions?
• What steps could enhance public
review of the emissions factors
development process and outcome and
contribute to the timely development of
new and revised factors?
When finalized, the Emissions Factors
Guidance will address many of these
issues.
We will consider the comments
submitted in response to this ANPRM as
we proceed to implement an improved
emissions factors program.
VI. Statutory and Executive Order
Reviews
Executive Order 12866: Regulatory
Planning and Review
Under Executive Order 12866,
entitled Regulatory Planning and
Review (58 FR 51735, October 4, 1993),
this is a ‘‘significant regulatory action’’
because we expect this action to raise
novel legal or policy issues.
Accordingly, EPA submitted this action
to the Office of Management and Budget
(OMB) for review under Executive
Order 12866 and any changes made in
response to OMB recommendations
have been documented in the docket for
this action. Because this action does not
propose or impose any requirements,
and instead seeks comments and
suggestions for the Agency to consider
in possibly developing a subsequent
proposed rule, the various statutes and
Executive Orders that normally apply to
rulemaking do not apply in this case.
Should EPA subsequently determine to
pursue a rulemaking, EPA will address
the statutes and Executive Orders as
applicable to that rulemaking.
List of Subjects in 40 CFR Parts 60, 61,
and 63
Environmental protection, Air
pollution control, Hazardous
substances, Reporting and
recordkeeping requirements, Emissions
factors, Performance testing.
Dated: October 7, 2009.
Lisa P. Jackson,
Administrator.
[FR Doc. E9–24684 Filed 10–13–09; 8:45 am]
BILLING CODE 6560–50–P
I. General Information
A. Does this action apply to me?
This notice is likely to be of interest to a variety of parties,
including owners and operators of stationary sources who use emissions
factors and, in particular, those that are subject to source testing
requirements under EPA air rules (i.e., New Source Performance
Standards (NSPS), National Emissions Standards for Hazardous Air
Pollutants (NESHAP), and Maximum Achievable Control Technology (MACT)
standards); industry sectors that believe that the emissions factors
currently used to characterize their emission sources could be updated
and improved; industry sectors that currently lack emissions factors;
State, local, and tribal air pollution control agencies (S/L/Ts) and
other individuals and organizations with an interest in emissions
factors. In that the use of emissions factors has expanded beyond
developing emissions inventories to other uses (e.g., developing
emissions limits for incorporation into New Source Review (NSR) and
Title V operating permits, determining applicability to air pollution
regulations, determining compliance with emissions standards,
conducting air quality impact analyses, developing control strategies,
and performing risk analyses (i.e., section 112(f) residual risk
requirements)), S/L/Ts, industry representatives, environmental action
groups, individuals and other organizations may have a vested interest
in this notice.
All of these parties are encouraged to read this notice and to
submit comments for EPA's consideration. We realize that in many cases
organizations other than EPA develop emissions factors for a variety of
purposes, and, in most cases, we do not require the use of EPA
emissions factors. However, because the EPA factors are so broadly used
and accepted, we are soliciting information and feedback on how they
are developed, currently used, and how they can be improved.
B. What should I consider as I prepare my comments for EPA?
Do not submit CBI to EPA through https://www.regulations.gov or e-
mail. Clearly mark the part or all of the information that you claim to
be CBI. For CBI information in a disk or CD-ROM that you mail to EPA,
mark the outside of the disk or CD-ROM as CBI and then identify
electronically within the disk or CD-ROM the specific information that
is claimed as CBI. In addition to one complete version of the comment
that includes information claimed as CBI, a copy of the comment that
does not contain the information claimed as CBI must be submitted for
inclusion in the public docket. Information so marked will not be
disclosed except in accordance with procedures set forth in 40 CFR part
2.
C. Where can I get a copy of this document and other related
information?
In addition to being available in the docket, an electronic copy of
this notice will be available on the Worldwide Web through the
Technology Transfer Network (TTN). The TTN provides information and
technology exchange in various areas of air pollution control.
Following signature, an electronic version of this document will be
posted at https://www.epa.gov/ttn/oarpg under ``Recent Additions.''
II. Background Information
A. The Role of Emissions Factors and Stakeholder Comments
An emissions factor is a representative value that attempts to
relate the quantity of a pollutant released to the atmosphere with an
activity associated with the release of that pollutant. These factors
are usually expressed as the mass of pollutant divided by a unit mass,
volume, distance, or duration of the activity emitting the pollutant
(e.g., kilograms of particulate emitted per megagram of coal burned).
Such factors facilitate estimation of emissions from various sources of
air pollution. In most cases, these factors are simply averages of all
available data of acceptable quality that were collected through source
performance testing, and are generally assumed to be representative of
population averages for all facilities in the source category.
Quantifying air emissions is a vital aspect of all air pollution
programs. Emissions factors have long been a fundamental tool in
developing national, regional, state, and local emissions inventories
for air quality management decisions and in developing emissions
control strategies. More recently, emissions factors have been applied
in determining site-specific applicability and emissions limitations in
operating permits by federal agencies, S/L/Ts, consultants, and
industry. These users have requested guidance on the use of emissions
factors and other emissions quantification tools (e.g., emissions
testing and monitoring, mass balance techniques) in developing permits
that are more practical in their enforcement.
Under ideal circumstances, all emissions data users would quantify
emissions from ongoing operations with continuous emissions monitoring,
periodic emissions performance testing, or frequent calculation using
well-accepted engineering principles, such as mass balances or other
detailed engineering calculations. Because these methods can be time
and resource intensive, users sometimes do not have or are unable to
secure data sufficient to allow detailed site-specific emissions
determinations. In some cases, measurement via instruments or long-term
performance testing, which would provide such data, is not feasible or
too costly. Without such data, emissions factors, which are assumed to
be representative of population-average values, are frequently used,
along with production information as a quick, low-cost method to
estimate emissions.
EPA's Office of Air Quality Planning and Standards (OAQPS) has long
recognized the importance of emissions factors and has focused effort
and resources on developing and documenting emissions factors. The EPA-
approved emissions factors are contained in an online document called
the ``AP-42 Compilation of Air Pollutant Emissions Factors'' (hereafter
referred to as ``AP-42'') available at https://www.epa.gov/ttn/chief/ap42/. The document is organized into 15 chapters that
describe industrial emission sources and the derivation of industry-
specific emissions factors. Many of the individual sections of this
document are supported by an associated background report providing
summaries of the individual test data and a corresponding assigned
quality rating, the rationale for grouping and using individual data,
and the assignment of the factor and factor quality.
Emissions factors were originally established only for use in
estimating emissions for developing national emissions inventories.
However, as mentioned earlier, emissions factors are used for many
other air pollution control activities for which they were not
designed.
AP-42, which was developed by OAQPS, is not the only repository of
emissions factors. Emissions factors have been developed for a number
of other programs and there are other databases that contain emissions
factors. For example, EPA's Office of Atmospheric Programs has recently
proposed a greenhouse gas reporting rule and provided many emissions
factors for sources to use in assessing their emissions. In addition,
EPA's Office of Research and Development
[[Page 52725]]
administers the SPECIATE database that contains many emissions factors.
Because the applications, uses, and requirements of these other
emissions factors databases are different than AP-42, these databases
have operated in a fairly autonomous manner. However, we are seeking
comment on whether there should be more interaction among these
databases. For a discussion of SPECIATE, see section IV.D.
As part of a reevaluation of the emissions factors program, EPA
interviewed and surveyed various emissions factors users and held a
series of workshops in 2003 and 2004 with stakeholders to solicit their
input on what is needed to update and improve the emissions factors
program.\1\ First and foremost, stakeholders (industry, S/L/Ts, EPA
program offices, environmental action groups, and others) indicated
that EPA needs to continue to maintain the AP-42 factors information
compilation and retrieval system. In addition, they indicated that it
takes EPA too long to develop emissions factors, that data submitted
for regulatory development have not been used to develop new emissions
factors, that there have been several inappropriate uses for emissions
factors, and that, in general, EPA is not developing new emissions
factors. The stakeholders said that EPA should develop criteria to
address the development and uses of emissions factors for purposes
other than just emissions inventory development, such as for use as
screening tools for compliance determinations, applicability purposes,
and preparing air program permit applications. They also said that the
current program is unresponsive to their needs, too complex for their
active participation, and lacks transparency concerning data
manipulation. More recently, the National Academy of Sciences (NAS)
(see National Research Council of the National Academies, 2004, Air
Quality Management in the United States, Washington, DC: The National
Academies Press) and EPA's Office of Inspector General (OIG) (see U.S.
EPA, Office of Inspector General Evaluation Report: EPA Can Improve
Emissions Factors Development and Management, Report No. 2006-P-00017,
March 22, 2006) also reviewed and commented on the emissions factors
program. Their comments echoed those of all other stakeholders in that
the EPA must continue to maintain the emissions factors program, but it
must be improved to support EPA and stakeholder uses. They also noted
that EPA should quantify uncertainty to improve emissions factors and
that EPA should be developing and updating emissions factors regularly.
---------------------------------------------------------------------------
\1\ A copy of the draft report, Emissions Factors Program
Improvement Efforts (September 2005), is available on EPA's Web site
at: https://www.epa.gov/ttn/chief/efpac/workshops/efp_improvement_efforts_draft.pdf.
---------------------------------------------------------------------------
B. Overview of the Emissions Factors Improvement Program
Based on the results of the emissions factors reevaluation process
that included collecting stakeholder input, preparing an improvement
plan, and an internal effort to review and reexamine our efforts, we
have identified four focus areas for improvement that are the basis for
this action:
Designing a process for developing and improving emissions
factors to allow easier and more effective participation by interested
parties, to be open and transparent, to accommodate the continuing
(self-sustaining) development and improvement of factors rather than
being a large, one-time effort to address the current needs, and to
provide an electronic mechanism for test report submittal and review.
We want to develop a process that, at the end of the emissions factors
development, will result in high quality emissions factors.
Improving methods for compiling and providing emissions
factors data and other pertinent information to users, including
complete and easy access to all available test data.
Developing guidance on the application of EPA's default
emissions factor or the selection of a more appropriate emissions
factor for specific applications, calculating emissions factors from
available test data or other information, conducting emissions tests to
facilitate the development of emissions factors, and evaluating and
considering data quality.
Updating existing emissions factors and developing more
factors where gaps currently exist.
EPA intends to implement a multi-part process to improve the
emissions factors program. The first part involves further development
of the existing electronic reporting tool (ERT) to make it easier for
S/L/Ts, industry, and other stakeholders to plan, document, accept,
assess, and transmit emissions test data. The second part involves
upgrading the AP-42 factors information system into WebFIRE. WebFIRE is
an Internet-based application that compiles and retrieves emissions
factors and performance test data and information; making it an
interactive, up-to-date, and easy to expand and enhance replacement for
the current AP-42. Additionally, to make the emissions factors
development process easier and more transparent, EPA plans to rewrite
the existing emissions factors development procedures and reissue the
revised document following a public review and comment process.
Finally, in order to acquire adequate data for the development or
improvement of the emissions factors, we are considering requiring the
submission of certain performance testing information by industry to
EPA's OAQPS via electronic reporting. Implementing this multi-part
effort will result in a self-sustaining emissions factors program
receiving ongoing data submittals to improve emissions estimation for
regulatory authorities and others to use in: (1) Developing emissions
inventories, (2) updating emissions standards, (3) identifying and
evaluating control strategies, (4) determining applicability of permit
and regulatory requirements, (5) assessing risks, and (6) conducting
other air pollution control activities. We believe this effort will
reduce the burden of handling test data, while improving access to and
the utility of the data.
C. Goals for the Emissions Factors Improvement Program
We believe the critical element in improving the emissions factors
program is changing the role of OAQPS from sole developer of emissions
factors to a facilitator who provides stakeholders with the tools to
participate in all aspects of the process, generates tools that capture
the existing work performed by stakeholders and enhance consistency
across the program, audits and oversees the program, and develops
policies for the appropriate use of emissions factors in non-inventory
applications where there are no policies or where existing policies are
inadequate. To this end, we encourage collection and submission of
critical site-specific process and testing information that will allow
stakeholders to improve the predictive accuracy of emissions factors
and characterize the associated uncertainties. We also want to
encourage and facilitate the electronic documentation and transfer of
source test information to reduce stakeholder workload, ease
assessment, increase communications, establish consistency (content and
assessment), increase the transparency of the entire program, and
provide information transfer to critical air programs (emissions
factors development, compliance verification, emissions inventory,
permitting, etc.).
Finally, we currently are considering replacing the highly
subjective manual method of updating all emissions factors
[[Page 52726]]
for a source category with a more consistent, objective, and automated
system that better delineates source descriptions so that emissions
factors' source categories are more meaningful and useful. Guidance is
a critical part of developing emissions factors. As such, we are
updating guidance of procedures for preparing emissions factors to make
the procedures clearer, improve the predictive accuracy of the
resulting emissions factors, improve stakeholders' confidence in the
revised process, and help us achieve our overall goals of improving the
emissions factors program.
III. Emissions Factors Development Process and Tools
We seek to replace the manual emissions factor development process,
which is shown in Figure 1. The manual emissions factors development
process begins with the performance and documentation of source tests
at individual facilities. After obtaining the report of the source
test, the emissions factors developer (EPA) assesses the documentation
with respect to its representativeness to the source category and its
precision and accuracy of quantifying the facility's emissions. Test
reports are then grouped by process (using the source classification
code, or SCC), control device employed, and pollutant. These groupings
are reviewed to combine related processes and control technologies that
will result in comparable data being used to establish or revise
emissions factors. After making determinations about the use of data
with differing test report quality ratings, the emissions factors are
calculated (or recalculated) with an associated factor quality rating.
The public is notified of the availability of the draft factors and is
given an opportunity to comment on them. After consideration of the
public comments, EPA publishes the new or revised factors in AP-42.
[GRAPHIC] [TIFF OMITTED] TP14OC09.000
As will be discussed in more detail in section IV, we propose to
move from this subjective resource intensive system where EPA relies on
a relatively open-ended set of criteria to make major decisions such as
the test data and factor quality ratings to one that is objective (more
science based) and designed to reduce the variability associated with
manual emissions factor development. The new system will provide an
objective evaluation scheme for grading the quality of each emissions
test, as well.
We are in the process of updating and revising three key existing
tools (WebFIRE, ERT, and the emissions factors guidance document) to
help us improve the current system. Note that the revised emissions
factors guidance document will provide information for implementing
both WebFIRE and ERT. The existing tools are described in the remainder
of this section. Section IV describes how we plan to augment and update
these tools to develop the improved emissions factors development
program.
A. WebFIRE
WebFIRE, on the EPA Web site at https://cfpub.epa.gov/oarweb/index.cfm?action=fire.main, is the Internet version of the Factor
Information Retrieval (FIRE) Data System software application (in a
Microsoft Access format) database. WebFIRE contains EPA's recommended
emission estimation factors for criteria and hazardous air pollutants
obtained from AP-42, Locating and Estimating (L&E) documents, and other
documents. The WebFIRE database usually contains a single value
(factor) for source classification code (SCC),\2\ control, and
pollutant combination. Users can conduct simple or detailed searches
for emissions factors by process, control device, and/or pollutant.
There is a separate database (https://www.epa.gov/ttn/chief/database/search.html) that is available to access the complete test reports and
other references cited in the section and background report. Also, for
many AP-42 sections there is a background report containing summaries
of the contents of the supporting test reports, assessments of the
quality of these test reports, judgments on the combining and
separation of reports for averaging, and the final assessment of the
quality rating assigned to the final factor. We are modifying WebFIRE
to connect these three components and provide stakeholders with
improved access and management capabilities.
---------------------------------------------------------------------------
\2\ There are currently a few emissions factors in AP-42 with
duplicate values (factors). EPA is working to correct these
emissions factors so that there are no duplicates.
---------------------------------------------------------------------------
B. Electronic Reporting Tool (ERT)
In order to streamline the collection of source test data and
ensure the completeness of data collection for the development of
emissions factors, we created the ERT. The current version of the ERT
is available at https://www.epa.gov/ttn/chief/ert/ert_tool.html. The
ERT is a Microsoft Access desktop application that is currently an
electronic alternative to the submittal of paper test plans, reports,
and
[[Page 52727]]
evaluations. Currently, data collected using 19 of EPA's emissions
measurement methods for stationary sources can be handled by the ERT.
The ERT supplements the time-intensive manual preparation and
transcription of stationary source emissions test plans and reports for
emissions sources testing with an electronic alternative where the
resulting data can be transmitted more easily and quickly to the Agency
and S/L/Ts who choose to use this system.
The ERT provides a format and a process that: (1) Documents the key
information and procedures required by the existing EPA Federal Test
Methods; (2) facilitates coordination among the source, the test
contractor, and the regulatory agency in planning and preparing for the
emissions test; (3) provides for consistent criteria to characterize
quantitatively the quality of the data collected during the emissions
test; (4) standardizes the form and content of test reports; and (5)
calculates the emissions factor, and exports the emissions factor and
associated data to WebFIRE. We expect the ERT to significantly reduce
the monitoring and testing burden for testers, source owners or
operators, S/L/Ts, EPA, and other interested stakeholders in
collecting, reviewing, storing, and accessing test data and reports.
C. Emissions Factors Development Guidance
We have developed guidance to assist in the emissions factors
development process titled, ``Procedures for Preparing Emissions
Factors'' (EPA-454/R-95-015).\3\ This document is intended for use by
EPA employees, EPA contractors, and external stakeholders. It describes
the procedures, technical criteria, and standards and specifications
for developing and reporting air pollutant emissions factors or
equations for publication in AP-42. The document also includes
background on emission factors and their uses and limitations. It
describes the pollutant terminology used in AP-42 and discusses some of
the emissions test methods used to measure these pollutants. The
reasons and procedures for initiating revisions to emissions factors
are also discussed. In addition, public participation procedures are
discussed. Many of the changes discussed in the proposed emissions
factor development process will be reflected in a revised procedures
document.
---------------------------------------------------------------------------
\3\ We have previously prepared a revised procedures document
(2006 draft) for public review. Based on the comments we received,
that document was withdrawn and never finalized.
---------------------------------------------------------------------------
IV. Changes to the Emissions Factors Program, Emissions Factors
Development, and Associated Tools
A. Potential Revisions to the Emissions Factors Development Process:
Overview and Issues
As described in this notice, our current plans are to move from the
relatively static format for emissions factors development to one that
is more flexible, current, and transparent. We will strive for a
balanced process that may be more prescriptive in many aspects of the
program while providing users with the flexibility to derive factors
that are more suitable for their specific intended purpose. Figure 2
provides an overview of how this process could work. We believe this
process can provide source owners or operators with the tools they need
to develop emissions factors and provide environmental authorities with
the tools they can use to assess the quality and uncertainty of
emissions test data. These tools should reduce real or perceived
barriers to emissions factors development and result in a substantially
improved emissions factors development process.
[GRAPHIC] [TIFF OMITTED] TP14OC09.001
Under the proposed system, source test data would be compiled
electronically via the ERT or another electronic format by the source
submitting the data. Because the ERT does not yet support all test
methods and because some users may prefer to use a different format, we
have provided a spreadsheet template that is to be used to submit
source test reports that do not use the ERT. See https://www.epa.gov/ttn/chief/ert/ert_tool.html for a copy of
[[Page 52728]]
the current version of the spreadsheet. We are also seeking comment on
the availability of other electronic formats that currently may be used
by sources to report source test information to their S/L/Ts and
whether these formats could be used or adapted to fit into this
proposed process.
In general, we believe that standardization of the test report's
form and content will enhance the emission factor development process,
while at the same time increase accuracy of the emissions factors.
Performance test data compiled in the ERT will also provide value to
the enforcement and compliance monitoring community through the
readily-available information from the tests in an electronic format.
The ERT will provide other items of information from stack tests that
may be used for evaluation that EPA's stationary source compliance
monitoring/enforcement system, the Air Facility System (AFS), does not
currently house such as method test used, process being tested,
emissions levels and stack test review date. However, we recognize that
such report standardization could have an impact on S/L/T data systems
and how they electronically store such information. Some sources might
still be required to submit paper or other reports to satisfy S/L/T
requirements. We request comment on how the design of the ERT might
mitigate these concerns.
We expect that our improved emissions factors' development process,
including the ERT, will facilitate the submittal of new test data from
a number of sources. As explained later in this notice, we are
considering requiring certain facilities to submit electronically their
performance test data to WebFIRE. In addition, it is possible that
sources or groups with an interest in adding or revising emissions
factors for certain categories might be motivated to submit data from
previous tests or tests conducted for other purposes than complying
with a Federal standard. To the extent that these data are
representative of current practices in the category, they could and
should be considered in emissions factor development.
We believe that the field evaluations and source test assessments
performed by S/L/Ts improve the reliability of the test data. For
example, such assessments will help to ensure testing requirements are
met, the test plan was followed, and results were accurately recorded
while also minimizing sample recovery/handling errors and equipment
errors. We want to encourage this type of third party review of all
source tests. Ideally the S/L/T would use the tools and criteria we
provide to conduct this review, but in some cases acceptable reviews
might be provided by independent contractors or others with an interest
in developing or revising certain emissions factors. Well conducted and
documented source tests that have been subject to such review can
potentially receive a higher quality rating than tests that have not
been reviewed.
We seek comment on other ways that we could encourage independent
``third party'' reviews and the weight we should give them in assigning
a quality rating. Even in the absence of quality reviews for a test,
there will be broader quality assurance provisions in the proposed
process. EPA plans to conduct audits of selected tests to ensure their
quality as part of the overall program. In addition, we will retain the
public review and comment features of the existing system to provide
additional assurance that tests submitted to the system are assigned an
appropriate quality rating. However, at this time, it is not our intent
to make this process a formal rulemaking process.
Under the current performance test evaluation system, test data
quality is rated A through D, with A-ratings assigned to well
documented tests performed by using an EPA reference test method, or
when not applicable, a sound methodology that is well-documented. At
the other end of the spectrum, a D-rated test is based on test reports
with minimal documentation or where a generally unacceptable method was
employed. The test quality is reported in enough detail for adequate
validation, and raw data are provided that can be used to duplicate the
emission results presented in the test report. In the absence of better
test reports, lower-rated tests may provide an order-of-magnitude value
for a source category emission factor. Specific criteria that are
considered in assigning the test report quality ratings include source
operation (e.g., whether the source was conducting the test under
representative operating conditions), test method and sampling
procedures, process information (extent to which process variation
explains variation in test runs), and documentation of the analysis and
calculations. After assigning a preliminary emission data quality
rating based on these criteria, the quality of production data is
considered. Test data that include the collection of production or
process data during the test are rated at a higher level than tests
that do not include production data.
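For illustration only, the following minimal sketch shows how criteria such as test method, documentation, representative operation, and the presence of concurrent production data might combine into a letter rating. The function name, field names, and the precedence of the checks are assumptions made for this sketch; they are not the rating algorithm in the existing procedures document.

```python
# Illustrative sketch only: a simplified letter-rating function for a single
# source test report. Field names and decision order are hypothetical, not
# EPA's actual rating procedure.

def rate_test_report(report: dict) -> str:
    """Assign a preliminary A-D quality rating to a source test report."""
    # A generally unacceptable method or minimal documentation caps the
    # rating at D.
    if not report.get("acceptable_method") or not report.get("documented"):
        return "D"

    # Well-documented EPA reference method tests start at A; otherwise B.
    rating = "A" if report.get("epa_reference_method") else "B"

    # Downgrade if the source was not operating under representative
    # conditions during the test.
    if not report.get("representative_operation"):
        rating = "C"

    # Tests without concurrent production/process data are downgraded one
    # level relative to tests that include those data.
    if not report.get("production_data"):
        rating = {"A": "B", "B": "C", "C": "D"}.get(rating, "D")

    return rating
```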
Under the process being considered, the ERT or alternative
electronic format would be modified to provide a rating for the quality
of the individual test based on specified algorithms and data quality
objectives. The very process of using the ERT will address many of the
rating issues described above by encouraging submittal of the
information needed for an A rating. We are not seeking comment on
specific changes to the ERT and associated procedures document.
However, we are interested in comments on the general features we
should incorporate to move us to an automated system for compiling test
data and calculating or assigning corresponding test ratings. We are
also seeking comments on whether the use of different formats for the
ratings might be helpful for stakeholders. For example, would a more
prescriptive numerical test report assessment rating focus more
attention on the quality of the test reports, thereby improving the
information in these reports and providing stakeholders with more
information on the quality of the data? As described above, should a
well-documented performance test conducted according to the Federal
Reference Method that has been reviewed by an independent third party
receive a rating adjustment to reflect the results of the third party
verification? Also, we are seeking comment on whether the third party
reviewer should have the authority to reduce the quality rating of a
test report (such as noting poor documentation or test performance
deficiencies).
Under our conceptual approach, the source test data would be
transferred from ERT to EPA's Central Data Exchange \4\ (CDX), which is
the point of entry on the Environmental Information Exchange Network
(Exchange Network) for environmental data exchanges to the Agency. We
may also consider using the capabilities of the CDX to exchange the
information in these reports electronically with facility, state, or
federal data systems. For
example, as mentioned earlier, it is possible that there might be other
audiences for the ERT data such as the AFS. This EPA database contains
compliance monitoring and enforcement data for stationary sources of
air pollution regulated by EPA and S/L/Ts. The environmental regulatory
community uses this information to track the compliance status of point
sources with various programs regulated under the Clean Air Act. With
certain modifications, the ERT could be designed to collect information
used by AFS. We believe that providing stack
[[Page 52729]]
test and facility data electronically through the ERT in a format that
S/L/Ts can use to update AFS would reduce some of the existing
reporting burden for S/L/Ts. We seek comments on whether
the ERT information should be used to provide input to the AFS (and
whether this would decrease S/L/T reporting burden). Transfers to other
data systems such as the National Emissions Inventory, Toxics Release
Inventory, and Title V reporting also may be desirable. We request
comments on how and whether the ERT could be expanded to address other
program needs.
---------------------------------------------------------------------------
\4\ For more information on the CDX, see https://www.epa.gov/cdx/.
---------------------------------------------------------------------------
The Cross-Media Electronic Reporting Regulation (CROMERR) \5\ was
recently promulgated to provide the legal framework for electronic
reporting of information and data to EPA and others who administer EPA
programs. CROMERR is intended to reduce the cost and burden of
electronic reporting while maintaining the level of corporate, legal,
and individual responsibility and accountability that exists in the
traditional paper format. At this time, we intend to develop ERT to
fully comply with CROMERR.
---------------------------------------------------------------------------
\5\ For more information on CROMERR, see EPA's Web site at:
https://www.epa.gov/CROMERR/.
---------------------------------------------------------------------------
Once received through CDX, the source test data would be stored in
WebFIRE. We currently plan to update WebFIRE to collate and integrate
the data into emissions factors calculations for similar processes,
pollutants, and control devices. For example, our current plan is to
upgrade WebFIRE to calculate automatically the arithmetic mean of the
data in individual source test reports to provide updated emissions
factors on a periodic schedule. Please note that we do not envision
that this approach would be used to update emissions factors as each
source test is received. Source test data will not be used for new or
amended emissions factors until the data have been vetted through our
public review process. Additional features such as calculations of
other statistical and distribution characteristics, including the
standard deviation and range of data values, could also be added. We
seek comments on what kinds of statistical information would be helpful
for stakeholders.
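As a rough illustration of the kind of automated collation described above, the sketch below pools emission rates from individual test reports for a given process, pollutant, and control device combination and computes the arithmetic mean along with the supplementary statistics mentioned (standard deviation and range). The record structure, field names, and grouping key are assumptions for illustration only, not the WebFIRE data model.

```python
# Illustrative sketch: pooling test-level emission rates and computing the
# arithmetic mean plus supplementary statistics. Field names and the grouping
# key are assumptions, not the actual WebFIRE schema.
import statistics
from collections import defaultdict

def summarize_factors(test_records):
    """test_records: iterable of dicts with hypothetical 'scc', 'pollutant',
    'control_device', and 'rate_lb_per_ton' keys."""
    groups = defaultdict(list)
    for rec in test_records:
        key = (rec["scc"], rec["pollutant"], rec["control_device"])
        groups[key].append(rec["rate_lb_per_ton"])

    summary = {}
    for key, rates in groups.items():
        summary[key] = {
            "n_tests": len(rates),
            "mean": statistics.mean(rates),  # candidate emissions factor
            "stdev": statistics.stdev(rates) if len(rates) > 1 else None,
            "range": (min(rates), max(rates)),
        }
    return summary
```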
The frequency of emissions factors updates is an issue for which we
are seeking comment. As noted above, while WebFIRE might theoretically
be structured to calculate a new or revised emissions factor whenever a
qualified test is submitted, we understand that updating emissions
factors very frequently may be disruptive to emissions factors users
because it could create a rapidly moving target and add significant
uncertainty for users. Instead, we think a better approach
is to schedule periodic updates. Such updates might be based on a
specified calendar schedule to allow interested parties to understand
when an update might be expected. Because updating emissions factors
impacts many other programs, such as operating and new source review
permitting, modeling, risk and technology analysis, control strategy
development, enforcement, and others, we believe that updating specific
emissions factors more than once per year would complicate activities
of these other programs. Other triggers could be when a certain volume
of new data is submitted in certain categories, or when newly submitted
data result in significant changes to the emissions factor.
There also might be value in making supplementary updates whenever
there is an associated review of an existing standard (every 8 to 10
years). We are seeking comments on the frequency and scheduling of
emissions factors updates.
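The scheduling and trigger options discussed above could be expressed as a simple decision rule. In the sketch below, the annual calendar check, the minimum count of new qualified tests, and the relative-change threshold are placeholders intended only to frame the question; they are not proposed values.

```python
# Illustrative sketch of possible update triggers. All thresholds are
# placeholders for discussion, not proposed EPA values.
from datetime import date

def update_due(last_update: date, today: date,
               new_qualified_tests: int,
               current_factor: float, draft_factor: float,
               min_new_tests: int = 10,
               rel_change_threshold: float = 0.25) -> bool:
    """Return True when a periodic or data-driven update would be triggered."""
    annual_due = (today - last_update).days >= 365
    enough_new_data = new_qualified_tests >= min_new_tests
    significant_change = (
        current_factor > 0
        and abs(draft_factor - current_factor) / current_factor
        >= rel_change_threshold
    )
    return annual_due or enough_new_data or significant_change
```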
Some stakeholders have expressed concern that new data would be
used to automatically update emissions factors and that there would be
no opportunity afforded to comment on the accuracy, representativeness,
and completeness of the new data. We believe this is a valid concern
and are planning, as discussed above, to only update emissions factors
on a periodic schedule. In addition, we are planning on incorporating a
full public review and comment period into WebFIRE, similar to the
existing system for updating emissions factors. When all data for a
specific source category, control device, and pollutant are compiled
and resultant emissions factors are drafted, we currently notify all
subscribers to the CHIEF list serve (https://www.epa.gov/ttn/chief/listserv.html) that new draft emissions factors are available for
public review. We plan to add a feature into WebFIRE that will
automatically notify subscribers of the availability of new proposed
emissions factors for review and comment.
We plan to add flexibility to WebFIRE so that users may calculate
their own emissions factors using a different mix of test reports than
those used for the existing emissions factor. Sources
already have the ability to suggest alternative factors, but this
change to WebFIRE could help make the development process more
transparent. This capability might lessen the need for extremely
frequent updates and would allow the calculation of emissions factors
for specific applications for which the average emissions factor is
inappropriate. However, the resulting ``user-calculated'' emissions
factors would not be considered ``official'' EPA factors and we do not
plan to retain these emissions factors in WebFIRE.
We currently plan to build into WebFIRE decision criteria that
would be used to select the test data to be used in an emissions factor
update. For example, one of the current decision criteria is the
exclusion of C- and D-rated data whenever A- or B-rated test data are
available. We seek comment on this approach and other criteria we
should consider. We anticipate that the changes to the data reporting
system will generally result in higher quality and significantly more
data than may have been available in the past for developing some
emissions factors. At what point and under what conditions do we drop
lower quality data from the emissions factor calculation? If we allow
the use of lower quality data, how should it be incorporated? For
example, if we have an existing emissions factor that is based upon
several ``C'' rated tests and we receive a new high quality performance
test, should we average together all of the data or only use the most
recent high quality test? Would a numerical quality rating that would
allow automated selection criteria be more useful than the current
letter rating system?
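One way to read the current decision criterion (dropping C- and D-rated data whenever A- or B-rated data exist) is as a simple filter applied before averaging. The sketch below also shows how a numerical score mapped from the letter ratings could make such selection rules easier to automate. The letter-to-number mapping and the selection rule are illustrative assumptions only.

```python
# Illustrative sketch of rating-based data selection before an emissions
# factor is calculated. The letter-to-number mapping and selection rule are
# assumptions for discussion, not adopted criteria.
import statistics

RATING_SCORE = {"A": 4, "B": 3, "C": 2, "D": 1}

def select_rates(tests):
    """tests: list of (letter_rating, emission_rate) tuples.
    Use C/D-rated data only when no A/B-rated data are available."""
    high = [rate for grade, rate in tests if RATING_SCORE[grade] >= 3]
    low = [rate for grade, rate in tests if RATING_SCORE[grade] < 3]
    return high if high else low

def draft_factor(tests):
    """Arithmetic mean of the selected emission rates, or None if empty."""
    rates = select_rates(tests)
    return statistics.mean(rates) if rates else None
```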
WebFIRE will be revised to assign an emissions factor quality
rating based on specified criteria. We presently assign an emissions
factor rating to indicate the ability of the overall average factor to
represent a national annual average emissions rate for the source
category. The emission factor rating is an overall assessment of how
good a factor is, based on both the quality of the test(s) or
information that is the source of the factor and on how well the factor
represents the emission source. Higher ratings are for emission factors
based on many unbiased observations, or on widely accepted test
procedures. In the current procedures guidance document, we state as an
example that an emissions factor based on 20 or more source tests on
different randomly selected plants would likely be assigned an ``A''
rating if all tests are conducted using a single valid federal
reference measurement method. Likewise, the guidance indicates that a
single observation based on questionable methods of testing would be
assigned an ``E'' rating. Should the current EPA approach for WebFIRE
incorporate more standardized and consistent criteria for
[[Page 52730]]
assigning emissions factor quality ratings? Should the criteria be
predicated upon an estimated predictive accuracy of the national
average emissions factor? How should the quality rating of the
supporting test data be incorporated into the emissions factor quality
rating?
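The guidance examples cited above (20 or more reference-method tests at randomly selected plants supporting an ``A'' rating, and a single observation from questionable methods supporting an ``E'') suggest a factor-rating scale driven by both the number and the quality of supporting tests. The cutoffs in the sketch below are hypothetical and are shown only to illustrate how such criteria might be standardized.

```python
# Illustrative sketch of an emissions factor (not test report) quality rating
# based on the count and quality of supporting tests. Cutoffs are
# hypothetical, not the actual guidance criteria.

def rate_emissions_factor(n_tests: int, all_reference_method: bool,
                          randomly_selected_plants: bool) -> str:
    if n_tests >= 20 and all_reference_method and randomly_selected_plants:
        return "A"
    if n_tests >= 10 and all_reference_method:
        return "B"
    if n_tests >= 3:
        return "C"
    if n_tests == 1 and not all_reference_method:
        return "E"
    return "D"
```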
As we revise WebFIRE, a key issue will be how it groups emissions
data into related clusters for which the average emissions factors will
be developed. What groupings could be performed automatically and which
ones would require external manual assessment and management? Who
should be responsible and what additional level of peer review should
be introduced? Examples of some of the groupings we consider in the
present system include the source category, process type,
representativeness of source, emission source, equipment design,
operating conditions, raw material or fuel characteristics, control
devices, and test method used. We request comment on the ways we should
incorporate these groupings into WebFIRE and whether there are
additional criteria that should be added. For example, what is the best
way to characterize facilities for emissions factor development
purposes? Currently we are using SCC and pollutant codes with control
device type. Is the current characterization system robust enough?
Once the SCC for the tested facility, the specific pollutant
measured, and the control device are determined, the existing procedures
guide the developer through a process of grouping the data. One
type of grouping may result in combining data from several SCCs (for
example Utility, Industrial, Commercial and Institutional combustion,
or the four types of Portland Cement Manufacturing processes). Another
type of grouping could result in data from different types of control
devices being combined. In the emissions factor development process,
these characteristics (and others) are evaluated to determine whether
there is a significant difference in the factors when different SCC
and/or controls are represented. We traditionally combine data from
different SCC and controls for some pollutants, if the factors are not
significantly different. The criteria used to determine whether to
combine data have varied. Should more standardized assessment and
decision criteria be developed? Should these criteria be based upon a
statistical approach? Would a combination of statistical and non-
statistical approaches be reasonable? If so, when would one approach be
preferred over the other approach?
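One statistical approach to the combination question is a two-sample test of whether the candidate groups (for example, two SCCs or two control device types) have significantly different emission rates, combining the groups only when no significant difference is detected. The sketch below uses a Welch t-test as an example; the choice of test and the significance level are illustrative assumptions, not proposed criteria.

```python
# Illustrative sketch: decide whether to combine test data from two candidate
# groupings (e.g., two SCCs or two control device types) using a two-sample
# Welch t-test. The test choice and alpha are assumptions for discussion.
from scipy import stats

def can_combine(rates_a, rates_b, alpha: float = 0.05) -> bool:
    """Return True when the two groups are not significantly different."""
    t_stat, p_value = stats.ttest_ind(rates_a, rates_b, equal_var=False)
    return p_value >= alpha
```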
In some cases, a grouping of SCC and control device type has what
appears to be a bimodal distribution of emissions. When detailed
information is available in the test reports, these differences could
be attributed to differences in the raw material, the production
method, the end product specification, or one or more production or
control device parameters. What methods should be used to assess and
address these situations? Should the same assessment approach used to
cluster data be used? Should there be a more rigorous approach adopted?
Where there are significant differences, how should they be addressed?
In the past, these situations have been
addressed through the expansion of the available SCCs. In some cases
this has led to increased confusion for the user of emissions factors.
In lieu of expanding the available SCCs, should we develop additional
criteria in WebFIRE to allow for broader differentiation of the
emissions factors?
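Where a grouping appears bimodal, one possible screening step is to compare a one-component and a two-component mixture model and then examine whether the apparent split aligns with a documented difference in raw material, production method, or control parameters. The sketch below uses Gaussian mixture fits compared by the Bayesian information criterion (BIC); this is one possible approach offered for discussion, not an EPA criterion, and the minimum sample size is an arbitrary placeholder.

```python
# Illustrative sketch: flag a possibly bimodal grouping by comparing the BIC
# of one- and two-component Gaussian mixture fits. One screening option only,
# not an EPA criterion; the minimum sample size is a placeholder.
import numpy as np
from sklearn.mixture import GaussianMixture

def looks_bimodal(rates, min_n: int = 10) -> bool:
    """Return True when a two-component fit is clearly preferred."""
    if len(rates) < min_n:
        return False
    x = np.asarray(rates, dtype=float).reshape(-1, 1)
    bic1 = GaussianMixture(n_components=1, random_state=0).fit(x).bic(x)
    bic2 = GaussianMixture(n_components=2, random_state=0).fit(x).bic(x)
    return bic2 < bic1  # lower BIC indicates the better-supported model
```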
How do we determine whether a specific source has significantly
changed such that the existing emissions factor is no longer
appropriate? There are many examples of significant changes, including
variance in control device performance over time or process changes
that alter emissions. We are seeking comment on how to determine
whether a process change is significant enough to warrant a new or
revised emissions factor. We are also seeking comment on how to account
for control device performance in establishing emissions factors.
Another question is how WebFIRE will assess data collected by non-
EPA reference methods, such as those developed by the California Air
Resources Board or the American Society for Testing and Materials
(ASTM). We believe that, in many cases, these ``other'' methods may not
be significantly different from EPA-reference methods and, as is the
case with some ASTM methods, can be used as alternatives to EPA reference
methods or are referenced in some of EPA's reference methods. To the
extent the method is a close replica of the EPA method, we believe that
WebFIRE should be able to note the different, but similar, method when
using its data to develop emissions factors. We currently accept
performance test data collected from non-EPA reference methods to
develop or revise emissions factors and we are inclined to continue
this practice. We are seeking comment on whether the use of methods
other than EPA-reference methods should be noted when used to develop
emissions factors. A similar issue arises where multiple methods can
be employed to test a pollutant. For example, there are several federal
reference methods for testing particulate matter. The particulate
matter methods were usually designed for a specific source category or
process, but now have been used for other sources. One approach we have
been considering is a crosswalk in WebFIRE and/or the ERT to explain
the differences between the various methods and pollutants being tested
and when such methods are appropriate. Are there some methods that
should be excluded from WebFIRE? For example, EPA Method 25A can be
used to develop a mass emissions factor. However, it does not measure
all the components of hydrocarbons. We also request comment on how the
quality rating might be adjusted to account for methods that are less
easy to compare directly.
There are issues associated with the process for developing draft
factors. We request comment on how new test data should be presented
(prior to WebFIRE calculating the emissions factor), when a commenter
believes there are errors in the test data. Some stakeholders have
suggested that we should make all data available as they are submitted
(for public review and comment), but not use them to update the
emissions factors until all available data are compiled and evaluated.
Should the commenter provide a third party review or update, should the
test be returned to the facility for correction, or should EPA perform
the third party review? Should the draft emissions factor be presented
(along with the new test data) and should the draft factor quality be
presented? In general, what should be the responsibilities of the
commenters, EPA, and the tested source? We are also seeking comment on
whether there should be a specified time period for submitting comments.
Should data be posted to the site when they are submitted or during some
specified period prior to the update of the emission factor in WebFIRE?
There are several data handling criteria associated with preparing
draft emission factors. These criteria are addressed in the current
procedures document and include data averaging, rounding, outliers,
detection limits, use of blanks, and format and unit of measure of the
factor. We are requesting comment on whether any changes or additions
are needed regarding these criteria as we develop changes to WebFIRE.
We are especially interested in your comments on how to average
[[Page 52731]]
test data that are below the detection limits of the analyzer.
Similarly, we currently provide the arithmetic mean as the best measure
of an emissions factor to provide a tool for estimating emissions where
there are gaps in emissions inventories. However, other descriptive
statistics such as median, mode, range, percentiles, and standard
deviation may also be useful in characterizing emissions for other
purposes. How the precision of the supporting data is characterized is
a related issue. In general, we believe that the impact associated with
the emissions variability between sources will be reduced when we
obtain improved test reports via the ERT or alternative electronic
format and as we obtain a larger number of higher quality tests. We
expect that more high quality data will yield more accurate emissions
factors. In addition, improved process information will allow for
developing a process-based factor, which will improve the predictive
accuracy of the resulting emissions estimate. We request comment on our
plans to provide additional information on the precision and accuracy
of the emissions factors in the new emissions factors development
process. This additional information would include the median, mode,
range, and standard deviation of the data set used to develop the
emissions factor. What methodologies and criteria should be used to
achieve more and better factors? Should WebFIRE be limited only to
factors that have documented supporting source test data? Should we
continue to allow the expansion of emissions factors based upon
unsupported assessments (i.e., assumed control efficiencies applied to
average controlled factors to arrive at an uncontrolled factor, and
then a subsequent assumed control efficiency applied to that
uncontrolled factor to arrive at a controlled factor)?
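For test runs below the analyzer's detection limit, one commonly discussed convention is to substitute a fraction of the detection limit (zero, one-half, or the full limit) before averaging. The sketch below shows how that choice affects the resulting mean; the substitution approach and the run data structure are assumptions offered only to frame the question, not a recommended method.

```python
# Illustrative sketch: averaging test runs when some values are below the
# detection limit (DL). Substituting 0, DL/2, or the full DL are common
# conventions; none is endorsed here, the sketch only shows how the choice
# matters.
import statistics

def mean_with_nondetects(runs, substitution: float = 0.5):
    """runs: list of (measured_value_or_None, detection_limit) pairs;
    None indicates a run below the detection limit."""
    adjusted = [
        value if value is not None else substitution * dl
        for value, dl in runs
    ]
    return statistics.mean(adjusted)

# Example with illustrative numbers: one of three runs below a DL of 0.2
# mean_with_nondetects([(1.0, 0.2), (0.8, 0.2), (None, 0.2)])  # about 0.63
```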
Some stakeholders have requested development of emissions factors
for uncontrolled processes. It is not surprising that the existing
emissions factors characterize emissions for controlled processes,
because these are the emissions sources that typically are subject to
regulation and required to conduct performance tests to demonstrate
compliance. However, should a source desire to test uncontrolled
processes and enter the information into the ERT, we would accept such
data. A broader issue might be how we could encourage stakeholders to
provide any data (controlled or uncontrolled) and/or to adopt the use
of the ERT for reporting of testing programs not required for federal
regulatory purposes.
Some industry groups and trade associations independently have
developed industry-specific emissions factors. In some cases, these
stakeholders have asked us to include their emissions factors in
WebFIRE without a critical review of the source testing and resultant
data. Should these groups choose to submit their data through the ERT
or an alternative electronic format, and should those submittals result
in highly rated tests, we believe their data should be considered the
same as any other data
for calculating emissions factors. However, some of these tests may
involve information that the sources being tested consider proprietary
or the test reports may lack critical details because they were
conducted for different purposes. Where do we draw the line in
accepting such data for use in developing emissions factors? If we
accept some lesser quality tests and data, would others be encouraged
to do the same, which could result in less transparency in the process
and poorer quality emissions factors? If we consider confidential
business information (CBI) data, how can we assure other stakeholders of
the reliability of the supporting data without incurring a workload on
ourselves that would substantially slow the process? A similar issue is
whether we should accept stakeholders' assessments of their own source
test data. We believe one way to address this concern is to have an
independent third party review. We have discussed third party review to
ensure objectivity of the data elsewhere in this notice.
We intend for the revised emissions factors development process
guidance to retain the opportunity for public review of the individual
test data, the emissions factor calculations, and associated quality
rating prior to finalizing any new or revised emissions factor.
However, as previously discussed, our current thinking is to modify some
of the aspects of the review process. For example, we currently plan to
change from revising entire sections in AP-42 at one time to a review
of recently added source test data. We are also considering conducting
a periodic review of the entire WebFIRE database (limited to data that
have been submitted since the last review) at a single time. We request comment
on these changes and suggestions for alternative approaches to updating
emissions factors and handling data before they are used to update
emissions factors. We also recognize the potential impact that changing
emissions factors can have on sources (e.g., a higher revised emissions
factor could mean that the source may be out of compliance, or the
source may become subject to newly applicable requirements such as
Title V or Toxics Release Inventory reporting). Should we limit reviews
to the additional source tests or should we allow reviewers to address
the implications of these additions? We request comment on any steps
that could enhance public review of the emissions factor development
process and outcome and contribute to the timely development of
new and revised factors.
B. Test Data Submittal Requirements
We believe that an additional enhancement to the current emissions
factors program would be for us to take steps to increase the quality
and quantity of performance test data submittals. With the ERT, we believe we have a
tool to encourage the submission of higher quality test data. However,
the quantity of data submittals has to be increased to ensure
continuous development of better emissions factors. Unfortunately,
while the ERT has been available for several years, we are not seeing
widespread use of it to submit data to EPA for use in emissions factors
development. There could be several reasons that test data submittals
to EPA are not more widespread.
• There is no regulatory driver requiring submission of data.
• Stakeholders are worried that data submitted this way will result
in emissions factors being updated too quickly, making the verification
of appropriate emissions factors a more difficult process.
• The ERT is perceived as requiring too much data, or as requiring
more data than S/L/Ts normally require for performance testing.
• There are electronic compatibility issues for agencies with
electronic reporting systems that are similar to the ERT in scope. Some
agencies may have their own electronic reporting systems, but these may
be limited to the reporting of test results only.
• There is a perception that using the ERT costs more than the
traditional paper formats or that using the ERT will increase the costs
of performance testing to collect the information required by the ERT.
• Agencies still require paper reports or a signed copy of the
report.
In order to ensure we receive timely submittal of data necessary
for a robust emissions factors program, we are considering using the
authority under section 114 of the Clean Air Act to require the
electronic submission to EPA of performance test reports conducted for
compliance certifications or other regulatory purposes. Specifically,
we are considering
[[Page 52732]]
amending the reporting requirements in the General Provisions of 40 CFR
parts 60 (New Source Performance Standards (NSPS)), 61 (National
Emission Standards for Hazardous Air Pollutants (NESHAP)), and 63
(Maximum Achievable Control Technology (MACT) standards) to require electronic
submittal of performance tests that are already required by standards
in these parts. The General Provisions contain requirements, such as
monitoring, recordkeeping, and reporting, that are common to all NSPS,
NESHAP, and MACT rules. We want to emphasize that this approach would
not add any additional performance testing. Nor do we anticipate that
this requirement would significantly increase the reporting and
recordkeeping burden of sources that are already required to submit
their performance test data. As described below, we think that using
the ERT will likely result in reducing the overall burden of submitting
test data by standardizing the reporting form and automating many of
the quality assurance and calculation features associated with paper
reporting. We are seeking comments on the concept of requiring
electronic submittal of performance reports. We are also seeking
comments on any perceived reduction or increase in costs (or other
benefits) to stakeholders should we require the submittal of performance
tests required by parts 60, 61, and 63. Should we propose such
requirements in a future rulemaking, we will assess this potential
burden reduction.
We also request comment on whether we should specify required
elements to be contained in source test reports. The
components would include not only the documentation of the conduct of
the stack sampling activities, but also the process parameters, such
as process operations, control device design, and monitoring
parameters that are indicative of the emissions performance of the
process and control device. We believe that requiring these components
should not increase performance test burdens, because this kind of
information is required by the existing methods and is necessary to
evaluate conformance with the test method or compliance with
applicable parts 60, 61, or 63 provisions. The advantage of the ERT,
which was developed with input from stack testing companies, is that it
would provide a standardized method and template to collect and store
all the documentation required.
We believe that obtaining these test data already collected for
other purposes and using them in the emissions factors development
program will save industry, S/L/Ts, and EPA time and money. A benefit
of submitting these data to WebFIRE electronically is that these data
will greatly improve the overall quality of the existing and new
emissions factors by supplementing the pool of emissions test data
upon which the emission factor is based and by ensuring that data are
more representative of current industry operational procedures.
Submitting these data to EPA will address a common complaint we hear
from industry and regulators that emissions fact