Notice of Availability of Interim Staff Guidance Documents For Fuel Cycle Facilities, 36558-36568 [06-5738]
NUCLEAR REGULATORY COMMISSION

Notice of Availability of Interim Staff Guidance Documents For Fuel Cycle Facilities

AGENCY: Nuclear Regulatory Commission.

ACTION: Notice of availability.
FOR FURTHER INFORMATION CONTACT:
James Smith, Project Manager,
Technical Support Section, Division of
Fuel Cycle Safety and Safeguards, Office
of Nuclear Material Safety and
Safeguards, U.S. Nuclear Regulatory
Commission, Washington, DC 20555–
0001. Telephone: (301) 415–6459; fax
number: (301) 415–5370; e-mail:
jas4@nrc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
The Nuclear Regulatory Commission
(NRC) continues to prepare and issue
Interim Staff Guidance (ISG) documents
for fuel cycle facilities. These ISG
documents provide clarifying guidance
to the NRC staff when reviewing
licensee integrated safety analyses, license applications, amendment requests, or other related licensing
activities for fuel cycle facilities under
10 CFR part 70. FCSS–ISG–10 has been
issued and is provided for information.
II. Summary
The purpose of this notice is to inform the public of the issuance of FCSS–ISG–10, Revision 0, which provides guidance to NRC staff for evaluating the justification for the minimum margin of subcriticality for safety in license applications or amendment requests under 10 CFR part
70, subpart H. FCSS–ISG–10, Revision
0, has been approved and issued after a
general revision based on NRC staff and
public comments on the initial draft.
III. Further Information
The document related to this action is
available electronically at the NRC’s
Electronic Reading Room at https://
www.nrc.gov/reading-rm/adams.html.
From this site, you can access the NRC’s
Agencywide Documents Access and
Management System (ADAMS), which
provides text and image files of NRC’s
public documents. The ADAMS
accession number for the document
related to this notice is provided in the
following table. If you do not have
access to ADAMS or if there are
problems in accessing the document
located in ADAMS, contact the NRC
Public Document Room (PDR) Reference staff at 1–800–397–4209, 301–415–4737, or by e-mail to pdr@nrc.gov.
Interim staff guidance                                    ADAMS accession No.
FCSS Interim Staff Guidance—10, Revision 0                ML061650370
This document may also be viewed
electronically on the public computers
located at the NRC’s PDR, Room O1–F21, One
White Flint North, 11555 Rockville
Pike, Rockville, MD 20852. The PDR
reproduction contractor will copy
documents for a fee. Comments on this document may be forwarded to James Smith, Project Manager, Technical Support Section, Division of Fuel Cycle Safety and Safeguards, Office of Nuclear Material Safety and Safeguards, U.S. Nuclear Regulatory Commission, Washington, DC 20555–0001. Comments may also be submitted by telephone, fax, or e-mail: Telephone: (301) 415–6459; fax: (301) 415–5370; e-mail: jas4@nrc.gov.
Dated at Rockville, Maryland this 15th day
of June 2006.
For the Nuclear Regulatory Commission.
Dennis C. Morey,
Acting Chief, Technical Support Section,
Special Projects Branch, Division of Fuel
Cycle Safety and Safeguards, Office of
Nuclear Material Safety and Safeguards.
FCSS Interim Staff Guidance—10,
Revision 0; Justification for Minimum
Margin of Subcriticality for Safety
Prepared by Division of Fuel Cycle
Safety and Safeguards Office of Nuclear
Material Safety and Safeguards
Issue
Technical justification for the
selection of the minimum margin of
subcriticality for safety for fuel cycle
facilities, as required by 10 CFR 70.61(d).
Introduction
10 CFR 70.61(d) requires, in part, that
licensees or applicants (henceforth to be
referred to as ‘‘licensees’’) demonstrate
that ‘‘under normal and credible
abnormal conditions, all nuclear
processes are subcritical, including use
of an approved margin of subcriticality
for safety.’’ There are a variety of
methods that may be used to
demonstrate subcriticality, including
use of industry standards, handbooks,
hand calculations, and computer
methods. Subcriticality is assured, in
part, by providing margin between
actual conditions and expected critical
conditions. This interim staff guidance
(ISG), however, applies only to margin
used in those methods that rely on
calculation of keff, including
deterministic and probabilistic
computer methods. The use of other
methods (e.g., use of endorsed industry
standards, widely accepted
handbooks, certain hand calculations),
containing varying amounts of margin,
is outside the scope of this ISG.
For methods relying on calculation of
keff, margin may be provided either in
terms of limits on physical parameters
of the system (of which keff is a
function), or in terms of limits on keff
directly, or both. For the purposes of
this ISG, the term margin of safety will
be used to refer to the margin to
criticality in terms of system
parameters, and the term margin of
subcriticality (MoS) will refer to the
margin to criticality in terms of keff. A
common approach to ensuring
subcriticality is to determine a
maximum keff limit below which the
licensee’s calculations must fall. This
limit will be referred to in this ISG as
the Upper Subcritical Limit (USL).
Licensees using calculational methods
perform validation studies, in which
critical experiments similar to actual or
anticipated facility applications are
chosen and then analyzed to determine
the bias and uncertainty in the bias. The
bias is a measure of the systematic
differences between calculational
method results and experimental data.
The uncertainty in the bias is a measure
of both the accuracy and precision of
the calculations and the uncertainty in
the experimental data. A USL is then
established that includes allowances for
bias and bias uncertainty as well as an
additional margin, to be referred to in
this ISG as the minimum margin of
subcriticality (MMS). The MMS is
variously referred to in the nuclear
industry as minimum subcritical
margin, administrative margin, and
arbitrary margin, and the term MMS
should be regarded as synonymous with
those terms. The term MMS will be used
throughout this ISG, and has been
chosen for consistency with the rule.
The MMS is an allowance for any
unknown (or difficult to identify or
quantify) errors or uncertainties in the
method of calculating keff that may exist
beyond those which have been
accounted for explicitly in calculating
the bias and its uncertainty.
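A common formulation of the USL, in which the bias and its uncertainty are applied directly to the critical condition, is

    USL = 1.0 + β − Δβ − MMS

where β is the bias (with no credit taken for a positive bias) and Δβ is the uncertainty in the bias; a calculated application keff, including its calculational uncertainty, must then fall at or below the USL. The specific statistical treatment of β and Δβ varies among accepted validation methods.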
There is little guidance in the fuel
facility Standard Review Plans (SRPs) as
to what constitutes sufficient technical
justification for the MMS. NUREG–
1520, ‘‘Standard Review Plan for the
Review of a License Application for a
Fuel Cycle Facility,’’ Section 5.4.3.4.4,
states that there must be margin that
includes, among other uncertainties,
‘‘adequate allowance for uncertainty in
the methodology, data, and bias to
assure subcriticality.’’ An important
component of this overall margin is the
MMS. However, there has been almost
no guidance on how to determine an
appropriate MMS. Partly due to the lack
of historical guidance, and partly due to
differences between facilities’ processes
and methods of calculation, there have
been significantly different MMS values
approved for the various fuel cycle
facilities over time. In addition, the
different ways licensees have of
defining margins and calculating keff
limits have made a consistent approach
to reviewing keff limits difficult. Recent
licensing experience has highlighted the
need for further guidance to clarify what
constitutes an acceptable justification
for the MMS.
The MMS can have a substantial
effect on facility operations (e.g., storage
capacity, throughput) and there has,
therefore, been considerable recent
interest in decreasing margin in keff
below what has been licensed
previously. In addition, the increasing
sophistication of computer codes and
the ready availability of computing
resources means that there has been a
gradual move towards more realistic
(often resulting in less conservative)
modeling of process systems. The
increasing interest in reducing the MMS
and the reduction in modeling
conservatism make technical
justification of the MMS more risk-significant than it has been in the past. In general, consistent with a risk-informed approach to regulation, a
smaller MMS requires a more
substantial technical justification.
This ISG is only applicable to fuel
enrichment and fabrication facilities
licensed under 10 CFR part 70.
Discussion
This guidance is applicable to
evaluating the MMS in methods of
evaluation that rely on calculation of
keff. The keff value of a fissionable
system depends, in general, on a large
number of physical variables. The
factors that can affect the calculated
value of keff may be broadly divided into
the following categories: (1) The
geometric configuration; (2) the material
composition; and (3) the neutron
distribution. The geometric form and
material composition of the system—
together with the underlying nuclear
data (e.g., ν̄, χ(E), cross section data)—
determine the spatial and energy
distribution of neutrons in the system
(flux and energy spectrum). An error in
the nuclear data or the geometric or
material modeling of these systems can
produce an error in the neutron flux and
energy spectrum, and thus in the
calculated value of keff. The bias
associated with a single system is
defined as the difference between the
calculated and physical values of keff, by
the following equation:
β = k_calc − k_physical
Thus, determining the bias requires
knowing both the calculated and
physical keff values of the system. The
bias associated with a single critical
experiment can be known with a high
degree of confidence, because the
physical (experimental) value is known
a priori (kphysical ≈ 1). However, for
calculations performed to demonstrate
subcriticality of facility processes (to be
referred to as ‘‘applications’’), this is not
generally the case. The bias associated
with such an application (i.e., not a
known critical configuration) is not
typically known with this same high
degree of confidence, because the actual
physical keff of the system is usually not
known. In practice, the bias is
determined from the average calculated
keff for a set of experiments that cover
different aspects of the licensee’s
applications. The bias and its
uncertainty must be estimated by
calculating the bias associated with a set
of critical experiments having geometric
forms, material compositions, and
neutron spectra similar to those of the
application. Because of the large
number of factors that can affect the
bias, and the finite number of critical
experiments available, staff should
recognize that this is only an estimate of
the true bias of the system. The
experiments analyzed cannot cover all
possible combinations of conditions or
sources of error that may be present in
the applications to be evaluated. The
effect on keff of geometric, material, or
spectral differences between critical
experiments and applications cannot be
known with precision. Therefore, an
additional margin (MMS) must be
applied to allow for the effects of any
unknown uncertainties that may exist in
the calculated value of keff beyond those
accounted for in the calculation of the
bias and its uncertainty. As the MMS
decreases, there needs to be a greater
level of assurance that the various
sources of bias and uncertainty have
been taken into account, and that the
bias and uncertainty are known with a
high degree of accuracy. In general, the
more similar the critical experiments are
to the applications, the more confidence
there is in the estimate of the bias and
the less MMS is needed.
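As a simple illustration of these quantities, the sketch below uses hypothetical calculated keff values for a set of critical experiments; the sample standard deviation is used only as a stand-in for a formal tolerance-limit treatment of the bias uncertainty.

    import statistics

    # Hypothetical calculated keff values for a set of critical experiments
    # (each experiment is physically critical, so k_physical is taken as 1.0).
    k_calc = [0.9962, 0.9987, 1.0004, 0.9971, 0.9993, 0.9958, 0.9980]

    k_bar = statistics.mean(k_calc)
    bias = min(k_bar - 1.0, 0.0)                  # beta = k_calc - k_physical; no credit for positive bias
    bias_uncertainty = statistics.stdev(k_calc)   # stand-in for a tolerance-limit treatment
    MMS = 0.02                                    # minimum margin of subcriticality (illustrative)

    USL = 1.0 + bias - bias_uncertainty - MMS
    print(f"bias = {bias:.4f}, uncertainty = {bias_uncertainty:.4f}, USL = {USL:.4f}")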
In determining an appropriate MMS,
the reviewer should consider the
specific conditions and process
characteristics present at the facility in
question. However, the MMS should not
be reduced below 0.02. The nuclear
cross sections are not generally known
to better than ∼ 1–2%. While this does
not necessarily translate into a 2% Δkeff,
it has been observed over many years of
experience with criticality code
validation that biases and spreads in the
data of a few percent can be expected.
As stated in NUREG–1520, MoS should
be large compared to the uncertainty in
the bias. Moreover, errors in the
criticality codes have been discovered
over time that have produced keff
differences of roughly this same
magnitude of 1–2% (e.g., Information
Notice 2005–13, ‘‘Potential Non-Conservative Error in Modeling
Geometric Regions in the KENO–V.a
Criticality Code’’). While the possibility
of having larger undiscovered errors
cannot be entirely discounted, modeling
sufficiently similar critical experiments
with the same code options to be used
in modeling applications should
minimize the potential for this to occur.
However, many years of experience
with the typical distribution of
calculated keff values and with the
magnitude of code errors that have
occasionally surfaced support
establishing 0.02 as the minimum MMS
that should be considered acceptable
under the best possible conditions.
Staff should recognize the important
distinction between ensuring that
processes are safe and ensuring that
they are adequately subcritical. The
value of keff is a direct indication of the
degree of subcriticality of the system,
but is not fully indicative of the degree
of safety. A system that is very
subcritical (i.e., with keff ≪ 1) may have
a small margin of safety if a small
change in a process parameter can result
in criticality. An example of this would
be a UO2 powder storage vessel, which
is subcritical when dry, but may require
only the addition of water for criticality.
Similarly, a system with a small MoS
(i.e., with keff ∼1) may have a very large
margin of safety if it cannot credibly
become critical. An example of this
would be a natural uranium system in
light water, which may have a keff value
close to 1 but will never exceed 1.
Because of this, a distinction should be
made between the margin of
subcriticality and the margin of safety.
Although a variety of terms are in use
in the nuclear industry, the term margin
of subcriticality will be taken to mean
the difference between the actual
(physical) value of keff and the value of
keff at which the system is expected to
be critical. The term margin of safety
will be taken to mean the difference
between the actual value of a parameter
and the value of the parameter at which
the system is expected to be critical. The
MMS is intended to provide confidence that applications calculated to be subcritical will, in fact, be subcritical. It is not intended to account
for other aspects of the process (e.g.,
safety of the process or the ability to
control parameters within certain
bounds) that may need to be reviewed
as part of an overall licensing review.
There are a variety of different
approaches that a licensee could choose
in justifying the MMS. Some of these
approaches and means of reviewing
them are described in the following
sections, in no particular preferential
order. Many of these approaches consist
of qualitative arguments, and therefore
there will be some degree of subjectivity
in determining the adequacy of the
MMS. Because the MMS is an allowance
for unknown (or difficult to identify or
quantify) errors, the reviewer must
ultimately exercise his or her best
judgement in determining whether a
specific MMS is justified. Thus, the
topics listed below should be regarded
as factors the reviewer should take into
consideration in exercising that
judgement, rather than any kind of
prescriptive checklist.¹

¹ In the discussion of these factors, the purpose is not to impose any new requirements or standards for acceptability on licensees. However, in many cases it will be necessary to go beyond the minimum requirements for a given factor, if that factor is being used as part of the technical basis for justifying a smaller MMS than would otherwise be acceptable.
The reviewer should also bear in
mind that the licensee is not required to
use any or all of these approaches, but
may choose an approach that is
applicable to its facility or a particular
process within its facility. While it may
be desirable and convenient to have a
single keff limit or MMS value (and
single corresponding justification)
across an entire facility, it is not
necessary for this to be the case. The
MMS may be easier to justify for one
process than for another, or for a limited
application versus generically for the
entire facility. The reviewer should
expect to see various combinations of
these approaches, or entirely different
approaches, used, depending on the
nature of the licensee’s processes and
methods of calculation. Any approach
used must ultimately lead to a
determination that there is adequate
assurance of subcriticality.
(1) Conservatism in the Calculational
Models
The margin in keff produced by the
licensee’s modeling practices, together
with the MMS, provide the margin
between actual conditions and expected
critical conditions. In terms of the
subcriticality criterion taken from ANSI/
ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR Fuel Outside
Reactors’’ (as explained in Appendix A):

MoS ≥ Δk_m + Δk_sa

where Δk_m is the MMS and Δk_sa is the
modeling of the system (i.e.,
conservative values of system
parameters).
Two different applications for which
the sums on the right hand side of the
equation above are equal to each other
are equally subcritical. Assurance of
subcriticality may thus be provided by
specifying a margin in keff (Δk_m), or specifying conservative modeling practices (Δk_sa), or some combination
thereof. This principle will be
particularly useful to the reviewer
evaluating a proposed reduction in the
currently approved MMS; the review of
such a reduction should prove
straightforward in cases in which the
overall combination of modeling
conservatism and MMS has not
changed. Because of this straightforward
quantitative relationship, any modeling
conservatism that has not been
previously credited should be
considered before examining other
factors. Cases in which the overall MoS
has decreased may still be acceptable,
but would have to be justified by other
means.
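For illustration, with hypothetical values: if a currently approved analysis uses Δk_m = 0.05 with no credited modeling conservatism (Δk_sa = 0), and a proposed analysis uses Δk_m = 0.03 while demonstrating a dependable Δk_sa = 0.02 of modeling conservatism, then in both cases

    Δk_m + Δk_sa = 0.05

and the overall margin between actual conditions and expected critical conditions is unchanged.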
In evaluating justification for the
MMS relying on conservatism in the
model, the reviewer should consider
only that conservatism in excess of any
manufacturing tolerances, uncertainties
in system parameters, or credible
process variations. That is, the
conservatism should consist of
conservatism beyond the worst-case
normal or abnormal conditions, as
appropriate, including allowance for
any tolerances. Examples of this added
conservatism may include assuming
optimum concentration in solution
processes, neglecting neutron absorbers
in structural materials, or assuming
minimum reflector conditions (e.g., at
least a 1-inch, tight-fitting reflector
around process equipment). These
technical practices used to perform
criticality calculations generally result
in conservatism of at least several
percent in keff. To credit this as part of
the justification for the MMS, the
reviewer should have assurance that the
modeling practices described will result
in a predictable and dependable amount
of conservatism in keff. In some cases,
the conservatism may be process-dependent, in which case it may be
relied on as justification for the MMS
for a particular process. However, only
modeling practices that result in a
global conservatism across the entire
facility should be relied on as
justification for a site-wide MMS.
Ensuring predictable and dependable
conservatism includes verifying that
this conservatism will be maintained
over the facility lifetime, such as
through the use of license commitments
or conditions.
If the licensee has a program that
establishes operating limits (to ensure
that subcritical limits are not exceeded)
below subcritical limits determined in
nuclear criticality safety evaluations, the
margin provided by this (optional)
practice may be credited as part of the
conservatism. In such cases, the
reviewer should credit only the
difference between operating and
subcritical limits that exceeds any
tolerances or process variation, and
should ensure that operating limits will
be maintained over the facility lifetime,
through the use of license commitments
or conditions.
Some questions that the reviewer may
ask in evaluating the use of modeling
conservatism as justification for the
MMS include:
• How much margin in keff is
provided due to conservatism in
modeling practices?
• How much of this margin exceeds
allowance for tolerances and process
variations?
• Is this margin specific to a
particular process or does it apply to all
facility processes?
• What provides assurance that this
margin will be maintained over the
facility lifetime?
(2) Validation Methodology and Results
Assurance of subcriticality for
methods that rely on the calculation of
keff requires that those methods be
appropriately validated. One of the
goals of validation is to determine the
method’s bias and the uncertainty in the
bias. After this has been done, an
additional margin (MMS) is specified to
account for any additional uncertainties
that may exist. The appropriate MMS
depends, in part, on the degree of
confidence in the validation results.
Having a high degree of confidence in
the bias and bias uncertainty requires
both that there be sufficient (for the
statistical method used) applicable
benchmark-quality experiments and that
there be a rigorous validation
methodology. Critical experiments that
do not rise to the level of benchmark-quality experiments may also be
acceptable, but may require additional
margin. If either the data or the
methodology is deficient, a high degree
of confidence in the results cannot be
attained, and a larger MMS may need to
be employed than would otherwise be
acceptable. Therefore, although
validation and determining the MMS
are separate exercises, they are related.
The more confidence one has in the
validation results, the less additional
margin (MMS) is needed. The less
confidence one has in the validation
results, the more MMS is needed.
Any review of a licensing action
involving the MMS should involve
examination of the licensee’s validation
methodology and results. While there is
no clear quantifiable relationship
between the validation and MMS (as
exists with modeling conservatism),
several aspects of validation should be
considered before making a qualitative
determination of the adequacy of the
MMS.
There are four factors that the
reviewer should consider in evaluating
the validation: (1) The similarity of
critical experiments to actual
applications; (2) sufficiency of the data
(including the number and quality of
experiments); (3) adequacy of the
validation methodology; and (4)
conservatism in the calculation of the
bias and its uncertainty. These factors
are discussed in more detail below.
Similarity of Critical Experiments
Because the bias and its uncertainty
must be estimated based on critical
experiments having geometric form,
material composition, and neutronic
behavior similar to specific
applications, the degree of similarity
between the critical experiments and
applications is a key consideration in
determining the appropriateness of the
MMS. The more closely critical
experiments represent the
characteristics of applications being
validated, the more confidence the
reviewer has in the estimate of the bias
and the bias uncertainty for those
applications.
The reviewer must understand both
the critical experiments and
applications in sufficient detail to
ascertain the degree of similarity
between them. Validation reports
generally contain a description of
critical experiments (including source
references). The reviewer may need to
consult these references to understand
the physical characteristics of the
experiments. In addition, the reviewer
may need to consult process
descriptions, nuclear criticality safety
evaluations, drawings, tables, input
files, or other information to understand
the physical characteristics of
applications. The reviewer must
consider the full spectrum of normal
and abnormal conditions that may have
to be modeled when evaluating the
similarity of the critical experiments to
applications.
In evaluating the similarity of
experiments to applications, the
reviewer must recognize that some
parameters are more significant than
others to accurately calculate keff. The
parameters that have the greatest effect
on the calculated keff of the system are
those that are most important to match
when choosing critical experiments.
Because of this, there is a close
relationship between similarity of
critical experiments to applications and
system sensitivity. Historically, certain
parameters have been used to trend the
bias because these are the parameters
that have been found to have the
greatest effect on the bias. These
parameters include the moderator-to-fuel ratio (e.g., H/U, H/X, vm/vf),
isotopic abundance (e.g., uranium-235
(235U), plutonium-239 (239Pu), or overall
Pu-to-uranium ratio), and parameters
that characterize the neutron energy
spectrum (e.g., energy of average
lethargy causing fission (EALF), average
energy group (AEG)). Other parameters,
such as material density or overall
geometric shape, are generally
considered to be of less importance. The
reviewer should consider all important
system characteristics that can
reasonably be expected to affect the
bias. For example, the critical
experiments should include any
materials that can have an appreciable
effect on the calculated keff, so that the
effect due to the cross sections of those
materials is included in the bias.
Furthermore, these materials should
have at least the same reactivity worth
in the experiments (which may be
evidenced by having similar number
densities) as in the applications.
Otherwise, the effect of any bias from
the underlying cross sections or the
assumed material composition may be
masked in the applications. The
materials must be present in a
statistically significant number of
experiments having similar neutron
spectra to the application. Conversely,
materials that do not have an
appreciable effect on the bias may be
neglected and would not have to be
represented in the critical experiments.
Merely having critical experiments
that are representative of applications is
the minimum acceptance criterion, and
does not alone justify having any
particular value of the MMS. There are
some situations, however, in which
there is an unusually high degree of
similarity between the critical
experiments and applications, and in
these cases, this fact may be credited as
justification for having a smaller MMS
than would otherwise be acceptable. If
the critical experiments have geometric
forms, material compositions, and
neutron spectra that are nearly
indistinguishable from those of the
applications, this may be justification
for a smaller MMS than would
otherwise be acceptable. For example,
justification for having a small MMS for
finished fuel assemblies could include
selecting critical experiments consisting
of fuel assemblies in water, where the
fuel has nearly the same pellet diameter,
pellet density, cladding materials, pitch,
absorber content, enrichment, and
neutron energy spectrum as the
licensee’s fuel. In this case, the
validation should be very specific to
this type of system, because including
other types of critical experiments could
mask variations in the bias. Therefore,
this type of justification is generally
easiest when the area of applicability
(AOA) is very narrowly defined. The
reviewer should pay particular attention
to abnormal conditions. In this example,
changes in process conditions such as
damage to the fuel or partial flooding
may significantly affect the applicability
of the critical experiments.
There are several tools available to the
reviewer to ascertain the degree of
similarity between critical experiments
and applications. Some of these are
listed below:
1. NUREG/CR–6698, ‘‘Guide for Validation of Nuclear Criticality Safety Calculational Methodology,’’ Table 2.3,
contains a set of screening criteria for
determining the applicability of critical
experiments. As is stated in the NUREG,
these criteria were arrived at by
consensus among experienced nuclear
criticality safety specialists and may be
considered to be conservative. The
reviewer should consider agreement on
all screening criteria to be justification
for demonstrating a very high degree of
critical experiment similarity.
(Agreement on the most significant
screening criteria for a particular system
should be considered as demonstration
of an acceptable degree of critical
experiment similarity.) Less
conservative (i.e., broader) screening
criteria may also be acceptable, if
appropriately justified.
2. Analytical methods that
systematically quantify the degree of
similarity between a set of critical
experiments and applications in pairwise fashion may be used. One example
of this is the TSUNAMI code in the
SCALE 5 code package. One strength of
TSUNAMI is that it calculates an overall
correlation that is a quantitative
measure of the degree of similarity
between an experiment and an
application. Another strength is that this
code considers all the nuclear
phenomena and underlying cross
sections and weights them by their
importance to the calculated keff (i.e.,
sensitivity of keff to the data). The NRC
staff currently considers a correlation
coefficient of ck ≥ 0.95 to be indicative
of a very high degree of similarity. This
is based on the staff’s experience
comparing the results from TSUNAMI
to those from a more traditional
screening criterion approach. The NRC
staff also considers a correlation
coefficient between 0.90 and 0.95 to be
indicative of a high degree of similarity.
However, owing to the limited amount of
experience with TSUNAMI, in this
range use of the code should be
supplemented with other methods of
evaluating critical experiment
similarity. Conversely, a correlation
coefficient less than 0.90 should not be
used as a demonstration of a high or
very high degree of critical experiment
similarity. Because of limited use of the
code to date, all of these observations
should be considered tentative and thus
the reviewer should not use TSUNAMI
as a ‘‘black box,’’ or base conclusions of
adequacy solely on its use. However, it
may be used to test a licensee’s
statement that there is a high degree of
similarity between experiments and
applications.
3. Traditional parametric sensitivity
studies may be employed to
demonstrate that keff is highly sensitive
or insensitive to a particular parameter.
For example, if a 50% reduction in the
10B cross section is needed to produce
a 1% change in the system keff, then it
can be concluded that the system is
highly insensitive to the boron content,
in the amount present. This is because
a credible error in the 10B cross section
of a few percent will have a statistically
insignificant effect on the bias.
Therefore, in the amount present, the
boron content is not a parameter that is
important to match in order to conclude
that there is a high degree of similarity
between critical experiments and
applications (a worked illustration follows this list).
4. Physical arguments may
demonstrate that keff is highly sensitive
or insensitive to a particular parameter.
For example, the fact that oxygen and
fluorine are almost transparent to
thermal neutrons (i.e., cross sections are
very low) may justify why experiments
consisting of UO2F2 may be considered
similar to UO2 or UF4 applications,
provided that both experiments and
applications occur in the thermal energy
range.
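As a worked illustration of the parametric sensitivity argument in item 3 above (hypothetical numbers): if a 50% reduction in the 10B cross section produces a 1% change in keff, the relative sensitivity is approximately

    S = (Δk/k)/(Δσ/σ) ≈ 0.01/0.50 = 0.02

so a credible cross-section error of, say, 3% would be expected to change keff by only about 0.02 × 0.03 ≈ 0.0006, which is small compared with typical bias uncertainties.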
The reviewer should ensure that all
parameters which can measurably affect
the bias are considered when assessing
critical experiment similarity. For
example, comparison should not be
based solely on agreement in the 235U
fission spectrum for systems in which
the system keff is highly sensitive to 238U
fission, 10B absorption, or 1H scattering.
A method such as TSUNAMI that
considers the complete set of reactions
and nuclides present can be used to
rank the various system sensitivities,
and to thus determine whether it is
reasonable to rely on the fission
spectrum alone in assessing the
similarity of critical experiments to
applications.
Some questions that the reviewer may
ask in evaluating reliance on critical
experiment similarity as justification for
the MMS include:
• Do the critical experiments
adequately span the range of geometric
forms, material compositions, and
neutron energy spectra expected in
applications?
• Are the materials present with at
least the same reactivity worth as in
applications?
• Do the licensee’s criteria for
determining whether experiments are
sufficiently similar to applications
consider all nuclear reactions and
nuclides that can have a statistically
significant effect on the bias?
Sufficiency of the Data
Another aspect of evaluating the
selected critical experiments for a
specific MMS is evaluating whether
there is a sufficient number of
benchmark-quality experiments to
determine the bias across the entire
AOA. Having a sufficient number of
benchmark-quality experiments means
that: (1) There are enough (applicable)
critical experiments to make a
statistically meaningful calculation of
the bias and its uncertainty; (2) the
experiments somewhat evenly span the
entire range of all the important
parameters, without gaps requiring
extrapolation or wide interpolation; and
(3) the experiments are, preferably,
benchmark-quality experiments. The
number of critical experiments needed
is dependent on the statistical method
used to analyze the data. For example,
some methods require a minimum
number of data points to reliably
determine whether the data are
normally distributed. Merely having a
large number of experiments is not
sufficient to provide confidence in the
validation result, if the experiments are
not applicable to the application. The
reviewer should particularly examine
whether consideration of only the most
applicable experiments would result in
a larger negative bias (and thus a lower
USL) than that determined based on the
full set of experiments. The experiments
should also ideally be sufficiently well-characterized (including experimental
parameters and their uncertainties) to be
considered benchmark experiments.
They should be drawn from established
sources (such as from the International
Handbook of Evaluated Criticality
Safety Benchmark Experiments
(IHECSBE), laboratory reports, or peer-reviewed journals). For some
applications, benchmark-quality
experiments may not be available; when
necessary, critical experiments that do
not rise to the level of benchmark-quality experiments may be used.
However, the reviewer should take this
into consideration and should evaluate
the need for additional margin.
Some questions that the reviewer may
ask in evaluating the number and
quality of critical experiments as
justification for the MMS include:
• Are the critical experiments chosen
all high-quality benchmarks from
reliable (e.g., peer-reviewed and widely accepted) sources?
• Are the critical experiments chosen
taken from multiple independent
sources, to minimize the possibility of
systematic errors?
• Have the experimental uncertainties
associated with the critical experiments
been provided and used in calculating
the bias and bias uncertainty?
• Is the number and distribution of
critical experiments sufficient to
establish trends in the bias across the
entire range of parameters?
• Is the number of critical
experiments commensurate with the
statistical methodology being used?
Validation Methodological Rigor
Having a sufficiently rigorous
validation methodology means having a
methodology that is appropriate for the
number and distribution of critical
experiments, that calculates the bias and
its uncertainty using an established
statistical methodology, that accounts
for any trends in the bias, and that
accounts for all apparent sources of
uncertainty in the bias (e.g., the increase
in uncertainty due to extrapolating the
bias beyond the range covered by the
experimental data.) Examples of
deficiencies in the validation
methodology may include: (1) Using a
statistical methodology relying on the
data being normally distributed about
the mean keff to analyze data that are not
normally distributed; (2) using a linear
regression fit on data that has a nonlinear dependence on a trending
parameter; (3) use of a single pooled
bias when very different types of critical
experiments are being evaluated in the
same validation. These deficiencies
serve to decrease confidence in the
validation results and may warrant
additional margin (i.e., a larger MMS).
Additional guidance on some of the
more commonly observed deficiencies
is provided below.
The assumption that the data are normally
distributed is generally valid, unless
there is a strong trend in the data or
different types of critical experiments
with different mean calculated keff
values are being combined. Tests for
normality require a minimum number of
critical experiments to attain a specified
confidence level (generally 95%). If
there is insufficient data to verify that
the data are normally distributed, or the
data are shown to be not normally
distributed, a non-parametric technique
should be used to analyze the data.
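The sketch below illustrates one way such a check might be performed; the Shapiro–Wilk test is used only as an example of a normality test, the keff values are hypothetical, and the sample-minimum bound is one simple distribution-free alternative.

    from scipy import stats

    # Hypothetical calculated keff values for the critical-experiment set.
    k_calc = [0.9962, 0.9987, 1.0004, 0.9971, 0.9993, 0.9958, 0.9980,
              0.9969, 0.9991, 0.9976, 0.9984, 0.9966]

    stat, p_value = stats.shapiro(k_calc)   # test for normality
    if p_value >= 0.05:
        print("No evidence of non-normality; a parametric tolerance-limit method may be used.")
    else:
        # Distribution-free alternative: for n independent samples from a continuous
        # distribution, the confidence that at least 95% of the population of
        # calculated keff values lies above the sample minimum is 1 - 0.95**n.
        n = len(k_calc)
        confidence = 1.0 - 0.95 ** n
        print(f"Sample minimum {min(k_calc):.4f} bounds 95% of the population "
              f"with {confidence:.0%} confidence.")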
The critical experiments chosen
should ideally provide a continuum of
data across the entire validated range, so
that any variation in the bias as a
function of important system parameters
may be observed. The presence of
discrete clusters of experiments having
a calculated keff lower than the set of
critical experiments as a whole should
be examined closely to determine if
there is some systematic effect common
to a particular type of calculation that
makes use of the overall bias non-conservative. Because the bias can vary
with system parameters, if the licensee
has combined different subsets of data
(e.g., solutions and powders, low- and
high-enriched, homogeneous and
heterogeneous), the bias for the different
subsets should be analyzed. In addition,
the goodness-of-fit for any function used
to trend the bias should be examined to
ensure it is appropriate to the data being
analyzed.
If critical experiments do not cover
the entire range of parameters needed to
cover anticipated applications, it may be
necessary to extend the AOA by making
use of trends in the bias. Any
extrapolation (or wide interpolation) of
the data should be done by means of an
established mathematical methodology
that takes into account the functional
form of both the bias and its
uncertainty. The extrapolation should
not be based on judgement alone, such
as by observing that the bias is
increasing in the extrapolated range,
because this may not account for the
increase in the bias uncertainty that will
occur with increasing extrapolation. The
reviewer should independently confirm
that the derived bias is valid in the
extrapolated range and should ensure
that the extrapolation is not large.
NUREG/CR–6698 states that critical
experiments should be added if the data
must be extrapolated more than 10%.
There is no corresponding guidance
given for interpolation; however, if the
gap represents a significant fraction of
the total range of the data (e.g., more
than 20% of the range of the data), then
the reviewer should consider this to be
a wide interpolation. If the extrapolation
or interpolation is too large, new factors
that could affect the bias may be
introduced as the physical phenomena
in the system change. The reviewer
should not view validation as a purely
mathematical exercise, but should bear
in mind the neutron physics and
underlying physical phenomena when
interpreting the results.
Discarding an unusually large number
of critical experiments as outliers (i.e.,
more than 1–2%) should also be viewed
with some concern. Apparent outliers
should not be discarded based purely
upon judgement or statistical grounds
(such as causing the data to fail tests for
normality), because they could be
providing valuable information on the
method’s validity for a particular
application. The reviewer should verify
that there are specific defensible
reasons, such as reported
inconsistencies in the experimental
data, for discarding any outliers. If any
of the critical experiments from a
particular data set are discarded, the
reviewer should examine other
experiments included to determine
whether they may be subject to the same
systematic errors. Outliers should be
examined carefully especially when
they have a lower calculated keff than
the other experiments included.
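A minimal sketch of such a screen follows; the keff values are hypothetical, the two-standard-deviation threshold is only an example, and flagged results call for investigation rather than automatic removal.

    import statistics

    k_calc = [0.9962, 0.9987, 1.0004, 0.9971, 0.9993, 0.9875, 0.9980, 0.9969]
    k_bar, s = statistics.mean(k_calc), statistics.stdev(k_calc)

    # Flag results well below the mean; investigate for documented experimental
    # inconsistencies before considering removal.
    low_outliers = [k for k in k_calc if k < k_bar - 2 * s]
    print(low_outliers)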
NUREG–1520 states that the MoS
should be large compared to the
uncertainty in the bias. The observed
spread of the data about the mean keff
should be examined as an indicator of
the overall precision of the calculational
method. The reviewer should ascertain
whether the statistical method of
validation considers both the observed
spread in the data and the experimental
and calculational uncertainty in
determining the USL. The reviewer
should also evaluate whether the
observed spread in the data is consistent
with the reported uncertainty (e.g.,
whether χ²/N ≈ 1). If the spread in the
data is larger than, or comparable to, the
MMS, then the reviewer should
consider whether additional margin
(i.e., a larger MMS) is needed.
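One common form of this consistency check (written here with the mean calculated keff as the reference; other forms are possible) is

    χ²/N = (1/N) Σ_{i=1}^{N} (k_i − k̄)² / σ_i²

where k_i is the calculated keff for experiment i, k̄ is the mean calculated keff, σ_i is the combined experimental and calculational uncertainty for experiment i, and N is the number of experiments; values well above 1 indicate that the observed spread exceeds the reported uncertainties.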
As a final test of the code’s accuracy,
the bias should be relatively small (i.e.,
bias ≤ 2 percent), or else the reason for
the bias should be determined. No
credit should be taken for positive bias,
because this would result in making
changes in a non-conservative direction
without having a clear understanding of
those changes. If the absolute value of
the bias is very large—and especially if
the reason for the large bias cannot be
determined—this may indicate that the
calculational method is not very
accurate, and a larger MMS may be
appropriate.
Some questions that the reviewer may
ask in evaluating the rigor of the
validation methodology as justification
for the MMS include:
• Are the results from use of the
methodology consistent with the data
(e.g., normally distributed)?
• Is the normality of the data
confirmed prior to performing statistical
calculations? If the data does not pass
the tests for normality, is a non-parametric method used?
• Does the assumed functional form
of the bias represent a good fit to the
critical experiments? Is a goodness-of-fit
test performed?
• Does the method determine a
pooled bias across disparate types of
critical experiments, or does it consider
variations in the bias for different types
of experiments? Are there discrete
clusters of experiments for which the
bias appears to be non-conservative?
• Has additional margin been applied
to account for extrapolation or wide
interpolation? Is this done based on an
established mathematical methodology?
• Have critical experiments been
discarded as apparent outliers? Is there
a valid reason for doing so?
Performing an adequate code
validation is not by itself sufficient
justification for any specific MMS. The
reason for this is that the validation
analysis determines the bias and its
uncertainty, but not the MMS. The
MMS is added after the validation has
been performed to provide added
assurance of subcriticality. However,
having a validation methodology that
either exceeds or falls short of accepted
practices for validation may be a basis
for either reducing or increasing the
MMS.
Statistical Conservatism
In addition to having conservatism in
keff due to modeling practices, licensees
may also provide conservatism in the
statistical methods used to calculate the
USL. For example, NUREG/CR–6698
states that an acceptable method for
calculating the bias is to use the single-sided tolerance limit approach with a
95/95 confidence (i.e., 95% confidence
that 95% of all future critical
calculations will lie above the USL). If
the licensee decides to use the singlesided tolerance limit approach with a
95/99.9 confidence, this would result in
a more conservative USL than with a
95/95 confidence. This would be true of
other methods for which the licensee’s
confidence criteria exceed the minimum
accepted criteria. Generally, the NRC
has accepted 95% confidence levels for
validation results, so using more
stringent confidence levels may provide
conservatism. In addition, there may be
other reasons a larger bias and/or bias
uncertainty than necessary has been
used (e.g., because of the inclusion of
inapplicable critical experiments that
have a lower calculated keff).
The reviewer may credit this
conservatism towards having an
adequate MoS if: (1) The licensee
demonstrates that this translates into a
specific Δkeff; and (2) the licensee
demonstrates that the margin will be
dependably present, based on license or
other commitments.
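As an illustration, the single-sided normal tolerance factor can be computed exactly from the noncentral t distribution; the sketch below uses hypothetical validation statistics to compare a 95/95 limit with a more conservative 95/99.9 limit.

    import math
    from scipy import stats

    def one_sided_tolerance_factor(n, coverage, confidence):
        # Exact one-sided normal tolerance factor k such that, with the stated
        # confidence, at least `coverage` of the population lies above mean - k*s.
        z_p = stats.norm.ppf(coverage)
        return stats.nct.ppf(confidence, df=n - 1, nc=z_p * math.sqrt(n)) / math.sqrt(n)

    n, k_bar, s = 30, 0.9978, 0.0032   # hypothetical number of experiments, mean, std. dev.
    for coverage in (0.95, 0.999):
        k_factor = one_sided_tolerance_factor(n, coverage, confidence=0.95)
        print(f"95/{coverage:.1%} lower tolerance limit = {k_bar - k_factor * s:.4f}")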
(3) Additional Risk-Informed
Considerations
Besides modeling conservatism and
the validation results, other factors may
provide added assurance of
subcriticality. These factors should be
considered in evaluating whether there
is adequate MoS and are discussed
below.
System Sensitivity and Uncertainty
The sensitivity of keff to changes in
system parameters can be used to assess
the potential effect of errors on the
calculation of keff. If the calculated keff
is especially sensitive to a given
parameter, an error in that parameter
could have a correspondingly large
contribution to the bias. Conversely, if
keff is very insensitive to a given
parameter, then an error may have a
negligible effect on the bias. This is of
particular importance when assessing
whether the chosen critical experiments
are sufficiently similar to applications to
justify a small MMS.
The reviewer should not consider the
sensitivity in isolation, but should also
consider the magnitude of uncertainties
in the parameters. If keff is very sensitive
to a given parameter, but the value of
that parameter is known with very high
accuracy (and its variations are well-controlled), the potential contribution to
the bias may still be very small. Thus,
the contribution to the bias is a function
of the product of the keff sensitivity with
the uncertainty. To illustrate this,
suppose that keff is a function of a large
number of variables, x1, x2,..., xN. Then
the uncertainty in keff may be expressed
as follows, if all the individual terms are
independent:
(δk)² = Σ_{i=1}^{N} (∂k/∂x_i)² (δx_i)²

where the partial derivatives ∂k/∂x_i are proportional to the sensitivity and the terms δx_i represent the uncertainties, or
likely variations, in the parameters. (If
the variables are not all independent, then
there may be additional terms.) Each
term in this equation then represents the
contribution to the overall uncertainty
in keff.
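A brief sketch of this propagation follows; the sensitivities, uncertainties, and parameter labels are hypothetical.

    import math

    # Hypothetical (dk/dx_i, delta_x_i) pairs for independent parameters.
    terms = [
        (0.30, 0.005),   # e.g., sensitivity to enrichment vs. enrichment uncertainty
        (0.10, 0.010),   # e.g., sensitivity to moderator density vs. its variation
        (0.02, 0.020),   # e.g., sensitivity to an absorber number density vs. its tolerance
    ]

    # (delta k)^2 = sum over i of (dk/dx_i)^2 * (delta x_i)^2
    delta_k = math.sqrt(sum((dk_dx * dx) ** 2 for dk_dx, dx in terms))
    print(f"total delta k = {delta_k:.5f}")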
There are several tools available to the
reviewer to ascertain the sensitivity of
keff to changes in the underlying
parameters. Some of these are listed
below:
1. Analytical tools that calculate the
sensitivity for each nuclide-reaction pair
present in the problem may be used.
One example of this is the TSUNAMI
code in the SCALE 5 code package.
TSUNAMI calculates both an integral
sensitivity coefficient (i.e., summed over
all energy groups) and a sensitivity
profile as a function of energy group.
The reviewer should recognize that
TSUNAMI only calculates the keff
sensitivity to changes in the underlying
nuclear data, and not to other
parameters that could affect the bias and
should be considered. (See section on
Critical Experiment Similarity for
caveats about using TSUNAMI.)
2. Direct sensitivity calculations may
be used, in which system parameters are
perturbed and the resulting impact on
keff determined. Perturbation of atomic
number densities can also be used to
confirm the sensitivity calculated by
other methods (e.g., TSUNAMI). Such
techniques are not limited to
considering the effect of the nuclear
data.
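A sketch of the bookkeeping for such a direct (central-difference) sensitivity estimate follows; calculate_keff is a hypothetical stand-in for the criticality code actually being used, and the simple surrogate inside it has no physical meaning.

    def calculate_keff(params):
        # Stand-in surrogate so the sketch runs; in practice this would build an
        # input model and run the criticality code being validated.
        return 0.70 + 0.05 * params["enrichment"] + 0.01 * params["h_to_x"]

    def relative_sensitivity(params, name, rel=0.01):
        # Central-difference estimate of S = (dk/k)/(dx/x) for parameter `name`.
        k0 = calculate_keff(params)
        up = dict(params, **{name: params[name] * (1 + rel)})
        down = dict(params, **{name: params[name] * (1 - rel)})
        return ((calculate_keff(up) - calculate_keff(down)) / k0) / (2 * rel)

    base = {"enrichment": 4.0, "h_to_x": 3.0}
    for name in base:
        print(name, round(relative_sensitivity(base, name), 4))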
There are also several sources
available to the reviewer to ascertain the
uncertainty associated with the
underlying parameters. For process
parameters, these sources of uncertainty
may include manufacturing tolerances,
quality assurance records, and
experimental and/or measurement
results. For nuclear data parameters,
these sources of uncertainty may
include published data, uncertainty data
distributed with the cross section
libraries, or the covariance data used in
methods such as TSUNAMI.
Some systems are inherently more
sensitive to changes in the underlying
parameters than others. For example,
high-enriched uranium systems
typically exhibit a greater sensitivity to
changes in system parameters (e.g.,
mass, moderation) than low-enriched
systems. This has been the reason that
HEU (i.e., > 20wt% 235U) facilities have
been licensed with larger MMS values
than LEU (≤ 10wt% 235U) facilities. This
greater sensitivity would also be true of
weapons-grade Pu compared to low-
assay mixed oxides (i.e., with a few
percent Pu/U). However, it is also true
that the uncertainties associated with
measurement of the 235U cross sections
are much smaller than those associated
with measurement of the 238U cross
sections. Both the greater sensitivity and
smaller uncertainty would need to be
considered in evaluating whether a
larger MMS is needed for high-enriched
systems.
Frequently, operating limits that are
more conservative than safety limits
determined using keff calculations are
established to prevent those safety
limits from being exceeded. For systems
in which keff is very sensitive to the
system parameters, more margin
between the operating and safety limits
may be needed. Systems in which keff is
very sensitive to the process parameters
may need both a larger margin between
operating and safety limits and a larger
MMS. This is because the system is
sensitive to any change, whether it be
caused by normal process variations or
caused by unknown errors. Because of
this, the assumption is often made that
the MMS is meant to account for
variations in the process or the ability
to control the process parameters.
However, the MMS is meant only to
allow for unknown (or difficult to
quantify) uncertainties in the
calculation of keff. The reviewer should
recognize that determination of an
appropriate MMS is not dependent on
the ability to control process parameters
within safety limits (although both may
depend on the system sensitivity).
Some questions that the reviewer may
ask in evaluating the system sensitivity
as justification for the MMS include:
• How sensitive is keff to changes in
the underlying nuclear data (e.g., cross
sections)?
• How sensitive is keff to changes in
the geometric form and material
composition?
• Are the uncertainties associated
with these underlying parameters well-known?
• How does the MMS compare to the
expected magnitude of changes in keff
resulting from uncertainties in these
underlying parameters?
Knowledge of the Neutron Physics
Another important consideration that
may affect the appropriate MMS is the
extent to which the physical behavior of
the system is known. Fissile systems
which are known to be subcritical with
a high degree of confidence do not
require as much MMS as systems where
subcriticality is less certain. An example
of a system known to be subcritical with
high confidence is a light-water reactor
fuel assembly. The design of these
systems is such that they can only be
made critical when highly thermalized.
Due to extensive analysis and reactor
experience, the flooded isolated
assembly is known to be subcritical. In
addition, the thermal neutron cross
sections for materials in finished reactor
fuel have been measured with a very
high degree of accuracy (as opposed to
cross sections in the resonance region).
Other examples of systems in which
there is independent corroborating
evidence of subcriticality may include
systems consisting of very simple
geometric shapes, or other idealized
situations, in which there is strong
evidence that the system is subcritical
based on comparison with highly
similar systems in published sources
(e.g., standards and handbooks). In these
cases, the MMS may be significantly
reduced due to the fact that the
calculation of keff is not relied on alone
to provide assurance of subcriticality.
Reliance on independent knowledge
that a given system is subcritical
necessarily requires that the
configuration of the system be fixed. If
the configuration can change from the
reference case, there will be less
knowledge about the behavior of the
changed system. For example, a finished
fuel assembly is subject to strict quality
assurance checks and would not reach
final processing if it were outside
specifications. In addition, it has a form
that has both been extensively studied
and is highly stable. For these reasons,
there is a great deal of certainty that this
system is well-characterized and is not
subject to change. A typical solution or
powder system (other than one with a
simple geometric arrangement) would
not have been studied with the same
level of rigor as a finished fuel
assembly. Even if they were studied
with the same level of rigor, these
systems have forms that are subject to
change into forms whose neutron
physics has not been as extensively
studied.
Some questions that the reviewer may
ask in evaluating the knowledge of the
neutron physics as justification for the
MMS include:
• Are the geometric form and material composition of the system fixed and very unlikely to change?
• Are the geometric form and material composition of the system subject to strict quality assurance, such that tolerances have been bounded?
• Has the system been extensively
studied in the nuclear industry and
shown to be subcritical (e.g., in reactor
fuel studies)?
• Are there other reasons besides
criticality calculations to conclude that
the system will be subcritical (e.g.,
handbooks, standards, published data)?
• How well-known is the nuclear data
(e.g., cross sections) in the energy range
of interest?
Likelihood of the Abnormal Condition
Some facilities have been licensed
with different sets of keff limits for
normal and abnormal conditions.
Separate keff limits for normal and
abnormal conditions are permissible,
but are not required. There is some low
likelihood that processes calculated to
be subcritical will, in fact, be critical,
and this likelihood increases as the
MMS is reduced (though it cannot in
general be quantified). NUREG–1718,
‘‘Standard Review Plan for the Review
of an Application for a Mixed Oxide
(MOX) Fuel Fabrication Facility,’’ states
that abnormal conditions should be at
least unlikely from the standpoint of the
double contingency principle. Because an abnormal condition is itself unlikely to be realized, a somewhat higher likelihood that a system calculated to be subcritical is, in fact, critical can be accepted for abnormal conditions than for normal conditions. The reviewer should verify that the licensee has defined abnormal conditions such that achieving the abnormal condition requires at least one contingency to have occurred, that the system will be closely monitored so that the abnormal condition is promptly detected, and that it will be promptly corrected upon detection. Also, there is
generally more conservatism present in
the abnormal case, because the
parameters that are assumed to have
failed are analyzed at their worst-case
credible condition.
The increased risk associated with
having a smaller MMS for abnormal
conditions should be commensurate
with, and offset by, the low likelihood
of achieving the abnormal condition.
That is, if the normal case keff limit is
judged to be acceptable, then the
abnormal case limit will also be
acceptable, provided the increased
likelihood (that a system calculated to
be subcritical will be critical) is offset by
the reduced likelihood of realizing the
abnormal condition because of the
controls that have been established.
Note that if two or more contingencies
must occur to reach a given condition,
there is no requirement to ensure that
the resulting condition is subcritical. If
a single keff limit is used (i.e., no credit
for unlikelihood of the abnormal
condition), then the limit must be found
acceptable to cover both normal and
credible abnormal conditions. The
reviewer should always make this
finding considering specific conditions
and controls in the process(es) being
evaluated.
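Where a licensee does use separate limits, the acceptance logic can be shown schematically. In the following sketch the limit values, case descriptions, and keff results are hypothetical and are not prescribed by this ISG.

# Illustrative check of separate normal/abnormal keff limits (all values
# hypothetical; the ISG does not prescribe specific limits).
usl_normal   = 0.94    # limit applied to normal-condition cases
usl_abnormal = 0.97    # higher limit permitted for credible abnormal cases,
                       # justified by the low likelihood of the condition
cases = [
    # (description, calculated keff plus uncertainty, is_abnormal)
    ("normal operation",                         0.921, False),
    ("abnormal: loss of one moderation control", 0.958, True),
]
for name, k_total, abnormal in cases:
    limit = usl_abnormal if abnormal else usl_normal
    print(f"{name}: k = {k_total:.3f}, limit = {limit:.2f}, acceptable = {k_total <= limit}")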
(4) Statistical Justification for the MMS
The NRC does not consider statistical
justification an appropriate basis for a
specific MMS. Previously, some
licensees have attempted to justify
specific MMS values based on a
comparison of two statistical methods.
For example, the USLSTATS code
issued with the SCALE code package
contains two methods for calculating
the USL: (1) The Confidence Band with
Administrative Margin approach
(calculating USL–1), and (2) the Lower
Tolerance Band approach (calculating
USL–2). The value of the MMS is an
input parameter to the Confidence Band
approach but is not included explicitly
in the Lower Tolerance Band approach.
In this particular justification, adequacy
of the MMS is based on a comparison
of USL–1 and USL–2 (i.e., the condition
that USL–1, including the chosen MMS,
is less than USL–2). However, the
reviewer should not accept this
justification.
The condition that USL–1 (with the
chosen MMS) is less than USL–2 is
necessary, but is not sufficient, to show
that an adequate MMS has been used.
These methods are both statistical
methods, and a comparison can only
demonstrate whether the MMS is
sufficient to bound any statistical
uncertainties included in the Lower
Tolerance Band approach but not
included in the Confidence Band
approach. There may be other statistical
or systematic errors in calculating keff
that are not included in either statistical
treatment. Because of this, an MMS
value should be specified regardless of
the statistical method used. Therefore,
the reviewer should not consider such
a statistical approach an acceptable
justification for any specific value of the
MMS.
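The distinction can be seen in a simplified numerical sketch. The following does not reproduce the USLSTATS algorithms; it only contrasts a confidence-band-plus-administrative-margin style limit with a tolerance-band style limit for a hypothetical data set, to show that such a comparison bounds only the statistical spread already present in that data.

# Simplified illustration (not the USLSTATS algorithms): two statistical
# treatments of the same hypothetical validation data set.
import statistics

k_calc = [0.994, 0.998, 1.001, 0.996, 0.999, 0.993, 1.000, 0.997]
bias = statistics.mean(k_calc) - 1.0        # systematic difference from critical
sigma = statistics.stdev(k_calc)            # spread of the calculated results

mms = 0.05                                  # administrative margin (MMS), an input
usl_1 = 1.0 + bias - 1.645 * sigma - mms    # confidence-band style limit (95% one-sided)

tolerance_factor = 3.0                      # illustrative 95/95-style multiplier
usl_2 = 1.0 + bias - tolerance_factor * sigma   # tolerance-band style limit, no explicit MMS

print(f"USL-1 ~ {usl_1:.4f}, USL-2 ~ {usl_2:.4f}")
# USL-1 < USL-2 here shows only that the MMS exceeds the statistical spread in
# this data set; it says nothing about errors absent from both treatments.
print("USL-1 below USL-2:", usl_1 < usl_2)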
(5) Summary
Based on a review of the licensee’s
justification for its chosen MMS, taking
into consideration the aforementioned
factors, the staff should make a
determination as to whether the chosen
MMS provides reasonable assurance of
subcriticality under normal and credible
abnormal conditions. The staff’s review
should be risk-informed, in that the
review should be commensurate with
the MoS and should consider the
specific facility and process
characteristics, as well as the specific
modeling practices used. As an
example, approving an MMS value
greater than 0.05 for processes typically
encountered in enrichment and fuel
fabrication facilities should require only
a cursory review, provided that an
acceptable validation has been
performed and modeling practices at
least as conservative as those in
NUREG–1520 have been utilized. The
approval of a smaller MMS will require
a somewhat more detailed review,
commensurate with the MMS that is
requested. However, the MMS should
not be reduced below 0.02 due to
inherent uncertainties in the cross
section data and the magnitude of code
errors that have been discovered.
Quantitative arguments (such as
modeling conservatism) should be used
to the extent practical. However, in
many instances, the reviewer will need
to make a judgement based at least
partly on qualitative arguments. The
staff should document the basis for
finding the chosen MMS value to be
acceptable or unacceptable in the Safety
Evaluation Report (SER), and should
ensure that any factors upon which this determination rests will remain in place over the facility lifetime (e.g., through license commitment or condition).
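The review-depth logic described above can be summarized schematically. The following sketch simply restates the 0.02 floor and the 0.05 threshold for low-enriched fuel cycle processes; the function name and inputs are illustrative, and it is not a substitute for the qualitative judgement this section calls for.

# Schematic reviewer's aid reflecting the thresholds discussed above for
# low-enriched fuel cycle processes (illustrative only).
def review_depth(mms, validation_acceptable, modeling_at_least_nureg1520):
    if mms < 0.02:
        return "not acceptable: below the 0.02 floor"
    if mms >= 0.05 and validation_acceptable and modeling_at_least_nureg1520:
        return "cursory review"
    return "detailed review, commensurate with the requested MMS"

print(review_depth(0.05, True, True))    # cursory review
print(review_depth(0.03, True, True))    # detailed review, commensurate with the requested MMS
print(review_depth(0.015, True, True))   # not acceptable: below the 0.02 floor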
Regulatory Basis
In addition to complying with
paragraphs (b) and (c) of this section,
the risk of nuclear criticality accidents
must be limited by assuring that under
normal and credible abnormal
conditions, all nuclear processes are
subcritical, including use of an
approved margin of subcriticality for
safety. [10 CFR 70.61(d)]
Technical Review Guidance
Determination of an adequate MMS is
strongly dependent upon specific
processes, conditions, and calculational
practices at the facility being licensed.
Judgement and experience must be
employed in evaluating the adequacy of
the proposed MMS. In the past, an MMS
of 0.05 has generally been found
acceptable for most typical low-enriched fuel cycle facilities without a
detailed technical justification. A
smaller MMS may be acceptable but
will require some level of technical
review.2 However, for reasons stated
previously, the MMS should not be
reduced below 0.02.
An MMS of 0.05 should be found
acceptable for low-enriched fuel cycle
processes and facilities if:
2 For high-enriched and plutonium or other fuel
cycle facilities, no general guidance on the
appropriate MMS is given. The reviewer should
consider any relevant differences between these
facilities and low-enriched uranium facilities (e.g., generally increased sensitivity of keff, generally reduced cross section uncertainty) on a case-by-case
basis.
1. A validation has been performed
that meets accepted industry guidelines
(e.g., meets the requirements of ANSI/
ANS–8.1–1998, NUREG/CR–6361, and/
or NUREG/CR–6698).
2. There is an acceptable number of
critical experiments with similar
geometric forms, material compositions,
and neutron energy spectra to
applications. These experiments cover
the range of parameters of applications,
or else margin is provided to account for
extensions to the AOA.
3. The processes to be evaluated
include materials and process
conditions similar to those that occur in
low-enriched fuel cycle applications
(i.e., no new fissile materials, unusual
moderators or absorbers, or technologies
new to the industry that can affect the
types of systems to be modeled).
The reviewer should consider any
factors, including those enumerated in
the discussion above, that could result
in applying additional margin (i.e., a
larger MMS) or may justify reducing the
MMS. The reviewer must then exercise
judgment in arriving at an MMS that
provides for adequate assurance of
subcriticality.
Some of the factors that may serve to
justify reducing the MMS include:
1. There is a predictable and
dependable amount of conservatism in
modeling practices, in terms of keff, that
is assured to be maintained (in both
normal and abnormal conditions) over
the facility lifetime.
2. Critical experiments have nearly
identical geometric forms, material
compositions, and neutron energy
spectra to applications, and the
validation is specific to this type of
application.
3. The validation methodology
substantially exceeds accepted industry
guidelines (e.g., it uses a very
conservative statistical approach,
considers an unusually large number of
trending parameters, or analyzes the
bias for a large number of subgroups of
critical experiments).
4. The system keff is demonstrably
much less sensitive to uncertainties in
cross sections or variations in other
system parameters than typical low-enriched fuel cycle processes.
5. There is reliable information
besides results of calculations that
provides assurance that the evaluated
applications will be subcritical (e.g.,
experimental data, historical evidence,
industry standards or widely accepted
handbooks).
6. The MMS is only applied to
abnormal conditions, which are at least
unlikely to be achieved, based on
credited controls.
Some of the factors that may
necessitate increasing (or not approving)
the MMS include:
1. The technical practices employed
by the licensee are less conservative
than standard industry modeling
practices (e.g., do not adequately bound
reflection or the full range of credible
moderation, do not take geometric
tolerances into account).
2. There are few similar critical
experiments of benchmark quality that
cover the range of parameters of
applications.
3. The validation methodology
substantially falls below accepted
industry guidelines (e.g., it uses less
than a 95% confidence in the statistical
approach, fails to consider trends in the
bias, fails to account for extensions to
the AOA).
4. The validation results otherwise
tend to cast doubt on the accuracy of the
bias and its uncertainty (i.e., the critical
experiments are not normally
distributed, there is a large number of
outliers discarded (> 2%), there are
distinct subgroups of experiments with
lower keff than the experiments as a
whole, trending fits do not pass
goodness-of-fit tests, etc.).
5. The system keff is demonstrably
much more sensitive to uncertainties in
cross sections or other system
parameters than typical low-enriched
fuel cycle processes.
6. There is reliable information that
casts doubt on the results of the
calculational method or the
subcriticality of evaluated applications
(e.g., experimental data, reported
concerns with the nuclear data).
The purpose of asking the questions
in the individual discussion sections is
to ascertain the degree to which these
factors either provide justification for
reducing the MMS or necessitate
increasing the MMS. These lists are not
all-inclusive, and any other technical
information that demonstrates the
degree of confidence in the calculational
method should be considered.
Recommendation
The guidance in this ISG should
supplement the current guidance in the
nuclear criticality safety chapters of the
fuel facility SRPs (NUREG–1520 and
–1718). However, NUREG–1718, Section
6.4.3.3.4, states that the licensee should
submit justification for the MMS, but
then states that an MMS of 0.05 is
‘‘generally considered to be acceptable
without additional justification when
both the bias and its uncertainty are
determined to be negligible.’’ These two
statements are inconsistent. Therefore,
NUREG–1718, Section 6.4.3.3.4, should
be revised to remove the following
sentence:
‘‘A minimum subcritical margin of
0.05 is generally considered to be
acceptable without additional
justification when both the bias and its
uncertainty are determined to be
negligible.’’
References
ANSI/ANS–8.1–1998, ‘‘Nuclear Criticality
Safety in Operations with Fissionable
Materials Outside Reactors,’’ American
Nuclear Society.
ANSI/ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR [Light Water Reactor]
Fuel Outside Reactors,’’ American Nuclear
Society.
‘‘International Handbook of Evaluated
Criticality Safety Experiments,’’ NEA/NSC/
DOC (95) 03, Nuclear Energy Agency,
Organization for Economic Co-operation and
Development, 2003.
U.S. Nuclear Regulatory Commission (NRC). Information Notice 2005–13, ‘‘Potential Non-Conservative Error in Modeling Geometric Regions in the KENO–V.a Criticality Code.’’ May 17, 2005.
U.S. Nuclear Regulatory Commission (NRC). NUREG–1520, ‘‘Standard Review Plan for the Review of a License Application for a Fuel Cycle Facility.’’ Washington, DC: NRC, March 2002.
U.S. Nuclear Regulatory Commission (NRC). NUREG–1718, ‘‘Standard Review Plan for the Review of an Application for a Mixed Oxide (MOX) Fuel Fabrication Facility.’’ Washington, DC: NRC, August 2000.
U.S. Nuclear Regulatory Commission (NRC). NUREG/CR–6698, ‘‘Guide for Validation of Nuclear Criticality Safety Calculational Methodology.’’ Washington, DC: NRC, January 2001.
U.S. Nuclear Regulatory Commission (NRC). NUREG/CR–6361, ‘‘Criticality Benchmark Guide for Light-Water-Reactor Fuel in Transportation and Storage Packages.’’ Washington, DC: NRC, March 1997.
Approved: _______________________
Robert C. Pierson, Director, Division of Fuel Cycle Safety and Safeguards, NMSS
Date: _______________________
Appendix A—ANSI/ANS–8.17 Calculation of
Maximum keff
ANSI/ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR Fuel Outside
Reactors,’’ contains a detailed discussion of
the various factors that should be considered
in setting keff limits. This is consistent with,
but more detailed than, the discussion in
ANSI/ANS–8.1–1998.
The subcriticality criterion from Section
5.1 of ANSI/ANS–8.17–2004 is:
ks ≤ kc − Δks − Δkc − Δkm
where ks is the calculated keff corresponding to the application, Δks is its uncertainty, kc is the mean keff resulting from the calculation of critical experiments, Δkc is its uncertainty, and Δkm is the MMS. The types of uncertainties included in each of these ‘‘delta’’ terms are provided, and include the following:
Δks = (1) Statistical uncertainties in computing ks; (2) convergence uncertainties in computing ks; (3) material tolerances; (4) fabrication tolerances; (5) uncertainties due to limitations in the geometric representations used in the method; and (6) uncertainties due to limitations in the material representations used in the method.
Δkc = (7) Uncertainties in the critical experiments; (8) statistical uncertainties in computing kc; (9) convergence uncertainties in computing kc; (10) uncertainties due to extrapolating kc outside the range of experimental data; (11) uncertainties due to limitations in the geometric representations used in the method; and (12) uncertainties due to limitations in the material representations used in the method.
Δkm = An allowance for any additional uncertainties (MMS).
To the extent that not all 12 sources of uncertainty listed above have been explicitly taken into account, they may be allowed for by increasing the value of Δkm. The more of these sources of uncertainty that have been taken into account, the smaller the necessary additional margin Δkm. As a general principle, however, the MMS should be large compared to known uncertainties in the nuclear data and limitations of the methodology, and a value of the MMS below 0.02 should not be used.
Frequently, the terms in the above equation relating to the application are grouped on the left-hand side of the equation, so that the equation is rewritten as follows:
ks + Δks ≤ kc − Δkc − Δkm
where the terms on the right-hand side of the equation are often lumped together and termed the Upper Subcritical Limit (USL), so that the USL = kc − Δkc − Δkm.
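As a purely numerical illustration of this rearranged criterion (all values below are hypothetical and are not data for any real application):

# Minimal sketch of the rearranged criterion with illustrative values.
k_c, dk_c = 0.995, 0.004      # mean keff of critical experiments and its uncertainty
dk_m = 0.05                   # minimum margin of subcriticality (MMS)
usl = k_c - dk_c - dk_m       # Upper Subcritical Limit

k_s, dk_s = 0.930, 0.002      # calculated application keff and its uncertainty
print(f"USL = {usl:.3f}")
print("Application acceptable:", k_s + dk_s <= usl)   # 0.932 <= 0.941 -> True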
Relation to the Minimum Subcritical Margin
(MMS)
The MoS has been defined as the
difference between the actual value of keff
and the value of keff at which the system is
expected to be critical. The expected (best
estimate) critical value of keff is the mean keff
value of all critical experiments analyzed
(i.e., kc), including consideration of the
uncertainty in the bias (i.e., Dkc). The
calculated value of keff for an application
generally exceeds the actual (physical) keff
value due to conservative assumptions in
modeling the system. In terms of the above
USL equation, the MoS may be expressed
mathematically as:
MoS = kc − Δkc − (ks − Δksa) − Δks
where the term in parentheses is equal to the actual (physical) keff of the application, ksa. A term, Δksa, has been added to represent the difference between the actual and calculated value of keff for the application (i.e., Δksa = change in keff resulting from modeling conservatism). In terms of the USL:
MoS = USL + Δkm − ks + Δksa − Δks
The minimum allowed value of the MoS is reached when the calculated keff for the application, ks + Δks, is equal to the USL. When this occurs, the minimum value of the MoS is:
MoS ≥ Δkm + Δksa
Thus, adequate margin (MoS) may be
assured either by conservatism in modeling
practices or in the explicit specification of
Δkm (MMS). This is discussed in the ISG
section on modeling conservatism.
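As a purely illustrative numerical example: if Δkm = 0.03 and the modeling practices are conservative by Δksa = 0.02 in keff, an application whose calculated keff plus uncertainty falls exactly at the USL still retains MoS = 0.03 + 0.02 = 0.05; reducing Δkm to 0.02 while crediting an additional 0.01 of demonstrated modeling conservatism leaves the overall MoS unchanged.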
Glossary
Application: calculation of a fissionable
system in the facility performed to
demonstrate subcriticality under normal or
credible abnormal conditions.
Area of applicability (AOA): the ranges of
material compositions and geometric
arrangements within which the bias of a
calculational method is established.
Benchmark experiment: a critical
experiment that has been peer-reviewed and
published and is sufficiently well-defined to
be used for validation of calculational
methods.
Bias: a measure of the systematic
differences between calculational method
results and experimental data.
Bias uncertainty: a measure of both the
accuracy and precision of the calculations
and the uncertainty in the experimental data.
Calculational method: includes the
hardware platform, operating system,
computer algorithms and methods, nuclear
reaction data, and methods used to construct
computer models.
Critical experiment: a fissionable system
that has been experimentally determined to
be critical (with keff ≈ 1).
Margin of safety: the difference between the actual value of a parameter and the value of the parameter at which the system is expected to be critical, with critical defined as keff = 1 − bias − bias uncertainty.
Margin of subcriticality (MoS): the difference between the actual value of keff and the value of keff at which the system is expected to be critical, with critical defined as keff = 1 − bias − bias uncertainty.
Minimum margin of subcriticality (MMS): a
minimum allowed margin of subcriticality,
which is an allowance for any unknown
uncertainties in calculating keff.
Subcritical limit: the maximum allowed
value of a controlled parameter under normal
case conditions.
Upper subcritical limit (USL): the
maximum allowed value of keff (including
uncertainty in keff), under both normal and
credible abnormal conditions, including
allowance for the bias, the bias uncertainty,
and a minimum margin of subcriticality.
[FR Doc. 06–5738 Filed 6–26–06; 8:45 am]
BILLING CODE 7590–01–P
OFFICE OF MANAGEMENT AND BUDGET
Executive Office of the President; Acquisition Advisory Panel; Notification of Upcoming Meetings of the Acquisition Advisory Panel
AGENCY: Office of Management and Budget, Executive Office of the President.
ACTION: Notice of Federal Advisory Committee Meetings.
SUMMARY: The Office of Management
and Budget announces one meeting of
the Acquisition Advisory Panel (AAP or
‘‘Panel’’) established in accordance with
the Services Acquisition Reform Act of
2003.
DATES: There is one meeting announced
in this Federal Register Notice. A public
meeting of the Panel will be held on
July 12, 2006 beginning at 9 a.m. Eastern
Time and ending no later than 5 p.m.
There are additional public meetings of
the Acquisition Advisory Panel for June
and July 2006 previously published in
the Federal Register. For a schedule of
all public meetings, visit https://
acquisition.gov/comp/aap/
and select the link called ‘‘Schedule.’’
ADDRESSES: The July 12th meeting will
be held at the new FDIC Building, 3501
N. Fairfax Drive, Arlington, VA in the
new auditorium Room C3050D. This
facility is 1⁄4 block off of the orange line
metro stop for Virginia Square. The
public is asked to pre-register one week
in advance of the meeting due to
security and/or seating limitations (see
below for information on pre-registration).
FOR FURTHER INFORMATION CONTACT:
Members of the public wishing further
information concerning these meetings
or the Panel itself, or to pre-register for
the meeting, should contact Ms. Laura
Auletta, Designated Federal Officer
(DFO), at: laura.auletta@gsa.gov, phone/
voice mail (202) 208–7279, or mail at:
General Services Administration, 1800 F
Street, NW., Room 4006, Washington,
DC 20405. Members of the public
wishing to reserve speaking time must
contact Mr. Emile Monette, AAP Staff
Analyst, in writing at:
emile.monette@gsa.gov or by fax at 202–
501–3341, or mail at the address given
above for the DFO, no later than one
week prior to the meeting.
SUPPLEMENTARY INFORMATION:
(a) Background: The purpose of the
Panel is to provide independent advice
and recommendations to the Office of
Federal Procurement Policy and
Congress pursuant to Section 1423 of
the Services Acquisition Reform Act of
2003. The Panel’s statutory charter is to
review Federal contracting laws,
regulations, and governmentwide
policies, including the use of
commercial practices, performance-based contracting, performance of
acquisition functions across agency
lines of responsibility, and
governmentwide contracts. Interested
parties are invited to attend the meeting.
Opportunity for public comments will
be provided at this meeting. Any change
will be announced in the Federal
Register.
Meeting—While the Panel may hear
from additional invited speakers, the
focus of this meeting will be discussions
of and voting on working group findings
and recommendations from selected
working groups, established at the
February 28, 2005 and May 17, 2005
public meetings of the AAP (see
https://acquisition.gov/comp/aap/
index.html for a list of working groups).
The Panel welcomes oral public
comments at this meeting and has
reserved one-half hour for this purpose.
Members of the public wishing to
address the Panel during the meeting
must contact Mr. Monette, in writing, as
soon as possible to reserve time (see
contact information above).
(b) Posting of Draft Reports: Members
of the public are encouraged to regularly
visit the Panel’s Web site for draft
reports. Currently, the working groups
are staggering the posting of various
sections of their draft reports at https://
acquisition.gov/comp/aap/
under the link for ‘‘Working Group
Reports.’’ The most recent posting is
from the Commercial Practices Working
Group. The public is encouraged to
submit written comments on any and all
draft reports.
(c) Adopted Recommendations: The
Panel has adopted recommendations
presented by the Small Business,
Interagency Contracting, and
Performance-Based Acquisition
Working Groups. While additional
recommendations from some of these
working groups are likely, the public is
encouraged to review and comment on
the recommendations adopted by the
Panel to date by going to https://
acquisition.gov/comp/aap/
and selecting the link for ‘‘Adopted
Recommendations.’’
(d) Availability of Meeting Materials:
Please see the Panel’s Web site for any
available materials, including draft
agendas and minutes. Questions/issues
of particular interest to the Panel are
also available to the public on this Web
site on its front page, including
‘‘Questions for Government Buying
Agencies,’’ ‘‘Questions for Contractors
that Sell Commercial Goods or Services
to the Government,’’ ‘‘Questions for
Commercial Organizations,’’ and an
issue raised by one Panel member
regarding the rules of interpretation and
performance of contracts and liabilities
of the parties entitled ‘‘Revised
Commercial Practices Proposal for
Public Comment.’’ The Panel
encourages the public to address any of
these questions/issues when presenting
either oral public comments or written
statements to the Panel.
(e) Procedures for Providing Public
Comments: It is the policy of the Panel
to accept written public comments of
any length, and to accommodate oral
-----------------------------------------------------------------------
NUCLEAR REGULATORY COMMISSION
Notice of Availability of Interim Staff Guidance Documents For
Fuel Cycle Facilities
AGENCY: Nuclear Regulatory Commission.
ACTION: Notice of availability.
-----------------------------------------------------------------------
FOR FURTHER INFORMATION CONTACT: James Smith, Project manager,
Technical Support Section, Division of Fuel Cycle Safety and
Safeguards, Office of Nuclear Material Safety and Safeguards, U.S.
Nuclear Regulatory Commission, Washington, DC 20005-0001. Telephone:
(301) 415-6459; fax number: (301) 415-5370; e-mail: jas4@nrc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
The Nuclear Regulatory Commission (NRC) continues to prepare and
issue Interim Staff Guidance (ISG) documents for fuel cycle facilities.
These ISG documents provide clarifying guidance to the NRC staff when
reviewing licensee integrated safety analysis, license applications or
amendment requests or other related licensing activities for fuel cycle
facilities under 10 CFR part 70. FCSS-ISG-10 has been issued and is
provided for information.
II. Summary
The purpose of this notice is to provide notice to the public of
the issuance of FCSS-ISG-10, Revision 0, which provides guidance to NRC
staff to address justification for minimum margin of subcriticality for
safety relative to license application or amendment request under 10
CFR part 70, subpart H. FCSS-ISG-10, Revision 0, has been approved and
issued after a general revision based on NRC staff and public comments
on the initial draft.
III. Further Information
The document related to this action is available electronically at
the NRC's Electronic Reading Room at https://www.nrc.gov/reading-rm/
adams.html. From this site, you can access the NRC's Agencywide
Documents Access and Management System (ADAMS), which provides text and
image files of NRC's public documents. The ADAMS accession number for
the document related to this notice is provided in the following table.
If you do not have access to ADAMS or if there are problems in
accessing the document located in ADAMS, contact the NRC Public
Document Room (PDR) Reference staff at 1-800-397-4209, 301-415-4737, or
by e-mail to pdr@nrc.gov.
------------------------------------------------------------------------
Interim staff guidance                               ADAMS accession No.
------------------------------------------------------------------------
FCSS Interim Staff Guidance--10, Revision 0          ML061650370
------------------------------------------------------------------------
This document may also be viewed electronically on the public
computers located at the NRC's PDR, O 1 F21, One White Flint North,
11555 Rockville Pike, Rockville, MD 20852. The PDR reproduction
contractor will copy documents for a fee. Comments on these documents
may be forwarded to James Smith, Project Manager, Technical Support
Section, Division of Fuel Cycle Safety and Safeguards, Office of
Nuclear Material Safety and Safeguards, U.S. Nuclear Regulatory
Commission, Washington, DC 20005-0001. Comments can also be submitted
by telephone, fax, or e-mail which are as follows: Telephone: (301)
415-6459; fax number: (301) 415-5370; e-mail: jas4@nrc.gov.
Dated at Rockville, Maryland this 15th day of June 2006.
For the Nuclear Regulatory Commission.
Dennis C. Morey,
Acting Chief, Technical Support Section, Special Projects Branch,
Division of Fuel Cycle Safety and Safeguards, Office of Nuclear
Material Safety and Safeguards.
FCSS Interim Staff Guidance--10, Revision 0; Justification for Minimum
Margin of Subcriticality for Safety
Prepared by the Division of Fuel Cycle Safety and Safeguards, Office of Nuclear Material Safety and Safeguards
Issue
Technical justification for the selection of the minimum margin of
subcriticality for safety for fuel cycle facilities, as required by 10
CFR 70.61(d)
Introduction
10 CFR 70.61(d) requires, in part, that licensees or applicants
(henceforth to be referred to as ``licensees'') demonstrate that
``under normal and credible abnormal conditions, all nuclear processes
are subcritical, including use of an approved margin of subcriticality
for safety.'' There are a variety of methods that may be used to
demonstrate subcriticality, including use of industry standards,
handbooks, hand calculations, and computer methods. Subcriticality is
assured, in part, by providing margin between actual conditions and
expected critical conditions. This interim staff guidance (ISG),
however, applies only to margin used in those methods that rely on
calculation of keff, including deterministic and
probabilistic computer methods. The use of other methods (e.g., use of
endorsed industry standards, widely accepted handbooks, certain hand
calculations), containing varying amounts of margin, is outside the
scope of this ISG.
For methods relying on calculation of keff, margin may
be provided either in terms of limits on physical parameters of the
system (of which keff is a function), or in terms of limits
on keff directly, or both. For the purposes of this ISG, the
term margin of safety will be used to refer to the margin to criticality in terms of system parameters, and the term margin of subcriticality (MoS) will refer to the margin to criticality in terms
of keff. A common approach to ensuring subcriticality is to
determine a maximum keff limit below which the licensee's
calculations must fall. This limit will be referred to in this ISG as
the Upper Subcritical Limit (USL). Licensees using calculational
methods perform validation studies, in which critical experiments
similar to actual or anticipated facility applications are chosen and
then analyzed to determine the bias and uncertainty in the bias. The
bias is a measure of the systematic differences between calculational
method results and experimental data. The uncertainty in the bias is a
measure of both the accuracy and precision of the calculations and the
uncertainty in the experimental data. A USL is then established that
includes allowances for bias and bias uncertainty as well as an
additional margin, to be referred to in this ISG as the minimum margin
of subcriticality (MMS). The MMS is variously referred to in the
nuclear industry as minimum subcritical margin, administrative margin,
and arbitrary margin, and the term MMS should be regarded as synonymous
with those terms. The term MMS will be used throughout this ISG, and
has been chosen for consistency with the rule. The MMS is an allowance
for any unknown (or difficult to identify or quantify) errors or
uncertainties in the method of calculating keff that may
exist beyond those which have been accounted for explicitly in
calculating the bias and its uncertainty.
There is little guidance in the fuel facility Standard Review Plans
(SRPs) as to what constitutes sufficient technical justification for
the MMS. NUREG-1520, ``Standard Review Plan for the Review of a License
Application for a Fuel Cycle Facility,'' Section 5.4.3.4.4, states that
there must be margin that includes, among other uncertainties,
``adequate allowance for uncertainty in the methodology, data, and bias
to assure subcriticality.'' An important component of this overall
margin is the MMS. However, there has been almost no guidance on how to
determine an appropriate MMS. Partly due to the lack of historical
guidance, and partly due to differences between facilities' processes
and methods of calculation, there have been significantly different MMS
values approved for the various fuel cycle facilities over time. In
addition, the different ways licensees have of defining margins and
calculating keff limits have made a consistent approach to
reviewing keff limits difficult. Recent licensing experience
has highlighted the need for further guidance to clarify what
constitutes an acceptable justification for the MMS.
The MMS can have a substantial effect on facility operations (e.g.,
storage capacity, throughput) and there has, therefore, been
considerable recent interest in decreasing margin in keff
below what has been licensed previously. In addition, the increasing
sophistication of computer codes and the ready availability of
computing resources means that there has been a gradual move towards
more realistic (often resulting in less conservative) modeling of
process systems. The increasing interest in reducing the MMS and the
reduction in modeling conservatism make technical justification of the
MMS more risk-significant than it has been in the past. In general,
consistent with a risk-informed approach to regulation, a smaller MMS
requires a more substantial technical justification.
This ISG is only applicable to fuel enrichment and fabrication
facilities licensed under 10 CFR part 70.
Discussion
This guidance is applicable to evaluating the MMS in methods of
evaluation that rely on calculation of keff. The
keff value of a fissionable system depends, in general, on a
large number of physical variables. The factors that can affect the
calculated value of keff may be broadly divided into the
following categories: (1) The geometric configuration; (2) the material
composition; and (3) the neutron distribution. The geometric form and
material composition of the system--together with the underlying
nuclear data (e.g., ν, χ(E), cross section data)--determine the spatial
and energy distribution of neutrons in the system (flux and energy
spectrum). An error in the nuclear data or the geometric or material
modeling of these systems can produce an error in the neutron flux and
energy spectrum, and thus in the calculated value of keff.
The bias associated with a single system is defined as the difference
between the calculated and physical values of keff, by the
following equation:
bias = kcalculated − kphysical
Thus, determining the bias requires knowing both the calculated and
physical keff values of the system. The bias associated with
a single critical experiment can be known with a high degree of
confidence, because the physical (experimental) value is known a priori
(kphysical ≈ 1). However, for calculations performed to
demonstrate subcriticality of facility processes (to be referred to as
``applications''), this is not generally the case. The bias associated
with such an application (i.e., not a known critical configuration) is
not typically known with this same high degree of confidence, because
the actual physical keff of the system is usually not known.
In practice, the bias is determined from the average calculated
keff for a set of experiments that cover different aspects
of the licensee's applications. The bias and its uncertainty must be
estimated by calculating the bias associated with a set of critical
experiments having geometric forms, material compositions, and neutron
spectra similar to those of the application. Because of the large
number of factors that can affect the bias, and the finite number of
critical experiments available, staff should recognize that this is
only an estimate of the true bias of the system. The experiments
analyzed cannot cover all possible combinations of conditions or
sources of error that may be present in the applications to be
evaluated. The effect on keff of geometric, material, or
spectral differences between critical experiments and applications
cannot be known with precision. Therefore, an additional margin (MMS)
must be applied to allow for the effects of any unknown uncertainties
that may exist in the calculated value of keff beyond those
accounted for in the calculation of the bias and its uncertainty. As
the MMS decreases, there needs to be a greater level of assurance that
the various sources of bias and uncertainty have been taken into
account, and that the bias and uncertainty are known with a high degree
of accuracy. In general, the more similar the critical experiments are
to the applications, the more confidence there is in the estimate of
the bias and the less MMS is needed.
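The following sketch illustrates, with hypothetical benchmark results, how a bias, a bias uncertainty, and a USL might be assembled; it omits the normality testing, trending, and other steps an actual validation (e.g., one following NUREG/CR-6698) would include, and the practice of not crediting a positive bias is noted only as a common convention.

# Simplified sketch: bias and bias uncertainty from hypothetical
# critical-experiment results (k_physical taken as 1.0 for each case).
import statistics

k_calc = [0.9962, 0.9987, 1.0004, 0.9971, 0.9990, 0.9958, 0.9979]
bias = statistics.mean(k_calc) - 1.0              # systematic difference
bias_uncertainty = statistics.stdev(k_calc)       # spread about the mean

mms = 0.05                                        # additional margin (MMS)
usl = 1.0 + min(bias, 0.0) - bias_uncertainty - mms   # positive bias not credited
print(f"bias = {bias:+.4f}, bias uncertainty = {bias_uncertainty:.4f}, USL = {usl:.4f}")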
In determining an appropriate MMS, the reviewer should consider the
specific conditions and process characteristics present at the facility
in
question. However, the MMS should not be reduced below 0.02. The
nuclear cross sections are not generally known to better than ~ 1-2%.
While this does not necessarily translate into a 2%
[Delta]keff, it has been observed over many years of
experience with criticality code validation that biases and spreads in
the data of a few percent can be expected. As stated in NUREG-1520, MoS
should be large compared to the uncertainty in the bias. Moreover,
errors in the criticality codes have been discovered over time that
have produced keff differences of roughly this same
magnitude of 1-2% (e.g., Information Notice 2005-13, ``Potential Non-
Conservative Error in Modeling Geometric Regions in the KENO-V.a
Criticality Code''). While the possibility of having larger
undiscovered errors cannot be entirely discounted, modeling
sufficiently similar critical experiments with the same code options to
be used in modeling applications should minimize the potential for this
to occur. However, many years of experience with the typical
distribution of calculated keff values and with the
magnitude of code errors that have occasionally surfaced support
establishing 0.02 as the minimum MMS that should be considered
acceptable under the best possible conditions.
Staff should recognize the important distinction between ensuring
that processes are safe and ensuring that they are adequately
subcritical. The value of keff is a direct indication of the
degree of subcriticality of the system, but is not fully indicative of
the degree of safety. A system that is very subcritical (i.e., with
keff ≪ 1) may have a small margin of safety if a small
change in a process parameter can result in criticality. An example of
this would be a UO2 powder storage vessel, which is
subcritical when dry, but may require only the addition of water for
criticality. Similarly, a system with a small MoS (i.e., with
keff ~1) may have a very large margin of safety if it cannot
credibly become critical. An example of this would be a natural uranium
system in light water, which may have a keff value close to
1 but will never exceed 1. Because of this, a distinction should be
made between the margin of subcriticality and the margin of safety.
Although a variety of terms are in use in the nuclear industry, the
term margin of subcriticality will be taken to mean the difference
between the actual (physical) value of keff and the value of
keff at which the system is expected to be critical. The
term margin of safety will be taken to mean the difference between the
actual value of a parameter and the value of the parameter at which the
system is expected to be critical. The MMS is intended to account for
the degree of confidence that applications calculated to be subcritical
will be subcritical. It is not intended to account for other aspects of
the process (e.g., safety of the process or the ability to control
parameters within certain bounds) that may need to be reviewed as part
of an overall licensing review.
There are a variety of different approaches that a licensee could
choose in justifying the MMS. Some of these approaches and means of
reviewing them are described in the following sections, in no
particular preferential order. Many of these approaches consist of
qualitative arguments, and therefore there will be some degree of
subjectivity in determining the adequacy of the MMS. Because the MMS is
an allowance for unknown (or difficult to identify or quantify) errors,
the reviewer must ultimately exercise his or her best judgement in
determining whether a specific MMS is justified. Thus, the topics
listed below should be regarded as factors the reviewer should take
into consideration in exercising that judgement, rather than any kind
of prescriptive checklist.\1\
---------------------------------------------------------------------------
\1\ In the discussion of these factors, the purpose is not to
impose any new requirements or standards for acceptability on
licensees. However, in many cases it will be necessary to go beyond
the minimum requirements for a given factor, if that factor is being
used as part of the technical basis for justifying a smaller MMS
than would otherwise be acceptable.
---------------------------------------------------------------------------
The reviewer should also bear in mind that the licensee is not
required to use any or all of these approaches, but may choose an
approach that is applicable to its facility or a particular process
within its facility. While it may be desirable and convenient to have a
single keff limit or MMS value (and single corresponding justification)
across an entire facility, it is not necessary for this to be the case.
The MMS may be easier to justify for one process than for another, or
for a limited application versus generically for the entire facility.
The reviewer should expect to see various combinations of these
approaches, or entirely different approaches, used, depending on the
nature of the licensee's processes and methods of calculation. Any
approach used must ultimately lead to a determination that there is
adequate assurance of subcriticality.
(1) Conservatism in the Calculational Models
The margin in keff produced by the licensee's modeling
practices, together with the MMS, provide the margin between actual
conditions and expected critical conditions. In terms of the
subcriticality criterion taken from ANSI/ANS-8.17-2004, ``Criticality
Safety Criteria for the Handling, Storage, and Transportation of LWR
Fuel Outside Reactors,'' (as explained in Appendix A):
MoS >= [Delta]km + [Delta]ksa
where [Delta]km is the MMS and [Delta]ksa is the
margin in keff due to conservative modeling of the system
(i.e., conservative values of system parameters).
Two different applications for which the sums on the right hand
side of the equation above are equal to each other are equally
subcritical. Assurance of subcriticality may thus be provided by
specifying a margin in keff ([Delta]km), or
specifying conservative modeling practices ([Delta]ksa), or
some combination thereof. This principle will be particularly useful to
the reviewer evaluating a proposed reduction in the currently approved
MMS; the review of such a reduction should prove straightforward in
cases in which the overall combination of modeling conservatism and MMS
has not changed. Because of this straightforward quantitative
relationship, any modeling conservatism that has not been previously
credited should be considered before examining other factors. Cases in
which the overall MoS has decreased may still be acceptable, but would
have to be justified by other means.
In evaluating justification for the MMS relying on conservatism in
the model, the reviewer should consider only that conservatism in
excess of any manufacturing tolerances, uncertainties in system
parameters, or credible process variations. That is, the conservatism
should consist of conservatism beyond the worst-case normal or abnormal
conditions, as appropriate, including allowance for any tolerances.
Examples of this added conservatism may include assuming optimum
concentration in solution processes, neglecting neutron absorbers in
structural materials, or assuming minimum reflector conditions (e.g.,
at least a 1-inch, tight-fitting reflector around process equipment).
These technical practices used to perform criticality calculations
generally result in conservatism of at least several percent in
keff. To credit this as part of the justification for the
MMS, the reviewer should have assurance that the modeling practices
described will result in a predictable and dependable amount of
conservatism in keff. In some cases, the conservatism may be
process-dependent, in which case it may be
relied on as justification for the MMS for a particular process.
However, only modeling practices that result in a global conservatism
across the entire facility should be relied on as justification for a
site-wide MMS. Ensuring predictable and dependable conservatism
includes verifying that this conservatism will be maintained over the
facility lifetime, such as through the use of license commitments or
conditions.
If the licensee has a program that establishes operating limits (to
ensure that subcritical limits are not exceeded) below subcritical
limits determined in nuclear criticality safety evaluations, the margin
provided by this (optional) practice may be credited as part of the
conservatism. In such cases, the reviewer should credit only the
difference between operating and subcritical limits that exceeds any
tolerances or process variation, and should ensure that operating
limits will be maintained over the facility lifetime, through the use
of license commitments or conditions.
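The crediting rule described above can be illustrated with hypothetical numbers; the variable names and delta-keff values below are illustrative only.

# Illustrative sketch of crediting only the conservatism in excess of
# tolerances and credible process variation (all delta-keff values hypothetical).
modeled_margin  = 0.035   # delta-keff between the as-modeled and best-estimate conditions
tolerance_allow = 0.010   # portion needed to cover tolerances and process variation
creditable = max(modeled_margin - tolerance_allow, 0.0)
print(f"Conservatism creditable toward justifying the MMS: {creditable:.3f}")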
Some questions that the reviewer may ask in evaluating the use of
modeling conservatism as justification for the MMS include:
How much margin in keff is provided due to
conservatism in modeling practices?
How much of this margin exceeds allowance for tolerances
and process variations?
Is this margin specific to a particular process or does it
apply to all facility processes?
What provides assurance that this margin will be
maintained over the facility lifetime?
(2) Validation Methodology and Results
Assurance of subcriticality for methods that rely on the
calculation of keff requires that those methods be
appropriately validated. One of the goals of validation is to determine
the method's bias and the uncertainty in the bias. After this has been
done, an additional margin (MMS) is specified to account for any
additional uncertainties that may exist. The appropriate MMS depends,
in part, on the degree of confidence in the validation results. Having
a high degree of confidence in the bias and bias uncertainty requires
both that there be sufficient (for the statistical method used)
applicable benchmark-quality experiments and that there be a rigorous
validation methodology. Critical experiments that do not rise to the
level of benchmark-quality experiments may also be acceptable, but may
require additional margin. If either the data or the methodology is
deficient, a high degree of confidence in the results cannot be
attained, and a larger MMS may need to be employed than would otherwise
be acceptable. Therefore, although validation and determining the MMS
are separate exercises, they are related. The more confidence one has
in the validation results, the less additional margin (MMS) is needed.
The less confidence one has in the validation results, the more MMS is
needed.
Any review of a licensing action involving the MMS should involve
examination of the licensee's validation methodology and results. While
there is no clear quantifiable relationship between the validation and
MMS (as exists with modeling conservatism), several aspects of
validation should be considered before making a qualitative
determination of the adequacy of the MMS.
There are four factors that the reviewer should consider in
evaluating the validation: (1) The similarity of critical experiments
to actual applications; (2) sufficiency of the data (including the
number and quality of experiments); (3) adequacy of the validation
methodology; and (4) conservatism in the calculation of the bias and
its uncertainty. These factors are discussed in more detail below.
Similarity of Critical Experiments
Because the bias and its uncertainty must be estimated based on
critical experiments having geometric form, material composition, and
neutronic behavior similar to specific applications, the degree of
similarity between the critical experiments and applications is a key
consideration in determining the appropriateness of the MMS. The more
closely critical experiments represent the characteristics of
applications being validated, the more confidence the reviewer has in
the estimate of the bias and the bias uncertainty for those
applications.
The reviewer must understand both the critical experiments and
applications in sufficient detail to ascertain the degree of similarity
between them. Validation reports generally contain a description of
critical experiments (including source references). The reviewer may
need to consult these references to understand the physical
characteristics of the experiments. In addition, the reviewer may need
to consult process descriptions, nuclear criticality safety
evaluations, drawings, tables, input files, or other information to
understand the physical characteristics of applications. The reviewer
must consider the full spectrum of normal and abnormal conditions that
may have to be modeled when evaluating the similarity of the critical
experiments to applications.
In evaluating the similarity of experiments to applications, the
reviewer must recognize that some parameters are more significant than
others to accurately calculate keff. The parameters that
have the greatest effect on the calculated keff of the
system are those that are most important to match when choosing
critical experiments. Because of this, there is a close relationship
between similarity of critical experiments to applications and system
sensitivity. Historically, certain parameters have been used to trend
the bias because these are the parameters that have been found to have
the greatest effect on the bias. These parameters include the
moderator-to-fuel ratio (e.g., H/U, H/X, vm/vf),
isotopic abundance (e.g., uranium-235 (235U), plutonium-239
(239Pu), or overall Pu-to-uranium ratio), and parameters
that characterize the neutron energy spectrum (e.g., energy of average
lethargy causing fission (EALF), average energy group (AEG)). Other
parameters, such as material density or overall geometric shape, are
generally considered to be of less importance. The reviewer should
consider all important system characteristics that can reasonably be
expected to affect the bias. For example, the critical experiments
should include any materials that can have an appreciable effect on the
calculated keff, so that the effect due to the cross
sections of those materials is included in the bias. Furthermore, these
materials should have at least the same reactivity worth in the
experiments (which may be evidenced by having similar number densities)
as in the applications. Otherwise, the effect of any bias from the
underlying cross sections or the assumed material composition may be
masked in the applications. The materials must be present in a
statistically significant number of experiments having similar neutron
spectra to the application. Conversely, materials that do not have an
appreciable effect on the bias may be neglected and would not have to
be represented in the critical experiments.
Merely having critical experiments that are representative of
applications is the minimum acceptance criterion, and does not alone
justify having any particular value of the MMS. There are some
situations, however, in which there is an unusually high degree of
similarity between the critical experiments and applications, and in
these cases, this fact may be credited as justification for having a
smaller MMS than would otherwise be acceptable. If the critical
experiments have geometric forms, material compositions, and neutron
spectra that are nearly indistinguishable from those of the
applications, this may be justification for a smaller MMS than would
otherwise be acceptable. For example, justification for having a small
MMS for finished fuel assemblies could include selecting critical
experiments consisting of fuel assemblies in water, where the fuel has
nearly the same pellet diameter, pellet density, cladding materials,
pitch, absorber content, enrichment, and neutron energy spectrum as the
licensee's fuel. In this case, the validation should be very specific
to this type of system, because including other types of critical
experiments could mask variations in the bias. Therefore, this type of
justification is generally easiest when the area of applicability (AOA)
is very narrowly defined. The reviewer should pay particular attention
to abnormal conditions. In this example, changes in process conditions
such as damage to the fuel or partial flooding may significantly affect
the applicability of the critical experiments.
There are several tools available to the reviewer to ascertain the
degree of similarity between critical experiments and applications.
Some of these are listed below:
1. NUREG/CR-6698, ``Guide for Validation of Nuclear Criticality Safety Calculational Methodology,'' Table 2.3, contains a set of screening
criteria for determining the applicability of critical experiments. As
is stated in the NUREG, these criteria were arrived at by consensus
among experienced nuclear criticality safety specialists and may be
considered to be conservative. The reviewer should consider agreement
on all screening criteria to be justification for demonstrating a very
high degree of critical experiment similarity. (Agreement on the most
significant screening criteria for a particular system should be
considered as demonstration of an acceptable degree of critical
experiment similarity.) Less conservative (i.e., broader) screening
criteria may also be acceptable, if appropriately justified.
2. Analytical methods that systematically quantify the degree of
similarity between a set of critical experiments and applications in
pair-wise fashion may be used. One example of this is the TSUNAMI code
in the SCALE 5 code package. One strength of TSUNAMI is that it
calculates an overall correlation that is a quantitative measure of the
degree of similarity between an experiment and an application. Another
strength is that this code considers all the nuclear phenomena and
underlying cross sections and weights them by their importance to the
calculated keff (i.e., sensitivity of keff to the
data). The NRC staff currently considers a correlation coefficient of
ck greater than about 0.95 to be indicative of a very high degree of
similarity. This is based on the staff's experience comparing the
results from TSUNAMI to those from a more traditional screening
criterion approach. The NRC staff also considers a correlation
coefficient between 0.90 and 0.95 to be indicative of a high degree of
similarity. However, owing to the limited amount of experience with TSUNAMI, in this range use of the code should be supplemented with other methods of evaluating critical experiment similarity. Conversely, a correlation
coefficient less than 0.90 should not be used as a demonstration of a
high or very high degree of critical experiment similarity. Because of
limited use of the code to date, all of these observations should be
considered tentative and thus the reviewer should not use TSUNAMI as a
``black box,'' or base conclusions of adequacy solely on its use.
However, it may be used to test a licensee's statement that there is a
high degree of similarity between experiments and applications.
3. Traditional parametric sensitivity studies may be employed to
demonstrate that keff is highly sensitive or insensitive to
a particular parameter. For example, if a 50% reduction in the
10B cross section is needed to produce a 1% change in the
system keff, then it can be concluded that the system is
highly insensitive to the boron content in the amount present. This is
because a credible error in the 10B cross section of a few percent will
have a statistically insignificant effect on the bias (the arithmetic is
carried through in the brief sketch following this list).
Therefore, in the amount present, the boron content is not a parameter
that is important to match in order to conclude that there is a high
degree of similarity between critical experiments and applications.
4. Physical arguments may demonstrate that keff is
highly sensitive or insensitive to a particular parameter. For example,
the fact that oxygen and fluorine are almost transparent to thermal
neutrons (i.e., cross sections are very low) may justify why
experiments consisting of UO2F2 may be considered
similar to UO2 or UF4 applications, provided that
both experiments and applications occur in the thermal energy range.
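To make the arithmetic behind the boron example in item 3 concrete, the following short sketch carries the quoted numbers through; the 3 percent cross-section error is a hypothetical illustration of a ``credible error of a few percent.''

```python
# Worked version of the item-3 example.  If a 50% change in the 10B cross
# section is needed to change keff by 1%, the relative sensitivity is small,
# and a credible few-percent cross-section error has a negligible effect.
sensitivity = 0.01 / 0.50        # change in keff per fractional change in the 10B cross section
credible_error = 0.03            # hypothetical 3% error in the 10B cross section
effect_on_keff = sensitivity * credible_error
print(f"relative sensitivity = {sensitivity:.2f}")
print(f"effect of a 3% cross-section error on keff ~ {effect_on_keff:.4f}")  # about 0.0006
```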
The reviewer should ensure that all parameters which can measurably
affect the bias are considered when assessing critical experiment
similarity. For example, comparison should not be based solely on
agreement in the 235U fission spectrum for systems in which
the system keff is highly sensitive to 238U
fission, 10B absorption, or 1H scattering. A
method such as TSUNAMI that considers the complete set of reactions and
nuclides present can be used to rank the various system sensitivities,
and to thus determine whether it is reasonable to rely on the fission
spectrum alone in assessing the similarity of critical experiments to
applications.
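As an illustration of the kind of sensitivity-weighted similarity index that a tool such as TSUNAMI reports, the following sketch computes a correlation coefficient ck from two sensitivity vectors and a cross-section covariance matrix. It is not the TSUNAMI implementation; the three-group data, variable names, and covariance values are hypothetical.

```python
# Illustration only (not the TSUNAMI implementation): a sensitivity-weighted
# correlation coefficient ck between an application and a critical experiment.
import numpy as np

def ck(sens_app: np.ndarray, sens_exp: np.ndarray, cov: np.ndarray) -> float:
    """Correlation of the nuclear-data-induced uncertainty shared by two systems."""
    num = sens_app @ cov @ sens_exp
    den = np.sqrt((sens_app @ cov @ sens_app) * (sens_exp @ cov @ sens_exp))
    return float(num / den)

s_application = np.array([0.30, 0.15, 0.05])          # keff sensitivities of the application
s_experiment  = np.array([0.28, 0.16, 0.04])          # keff sensitivities of the experiment
cov_matrix    = np.diag([0.02**2, 0.03**2, 0.05**2])  # hypothetical relative covariance of the data

print(f"ck = {ck(s_application, s_experiment, cov_matrix):.3f}")
# Values near 1.0 indicate that the two systems respond to the underlying
# nuclear data in nearly the same way; values below about 0.90 would not
# demonstrate a high degree of similarity.
```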
Some questions that the reviewer may ask in evaluating reliance on
critical experiment similarity as justification for the MMS include:
Do the critical experiments adequately span the range of
geometric forms, material compositions, and neutron energy spectra
expected in applications?
Are the materials present with at least the same
reactivity worth as in applications?
Do the licensee's criteria for determining whether
experiments are sufficiently similar to applications consider all
nuclear reactions and nuclides that can have a statistically
significant effect on the bias?
Sufficiency of the Data
Another aspect of evaluating the selected critical experiments for
a specific MMS is evaluating whether there is a sufficient number of
benchmark-quality experiments to determine the bias across the entire
AOA. Having a sufficient number of benchmark-quality experiments means
that: (1) There are enough (applicable) critical experiments to make a
statistically meaningful calculation of the bias and its uncertainty;
(2) the experiments somewhat evenly span the entire range of all the
important parameters, without gaps requiring extrapolation or wide
interpolation; and (3) the experiments are, preferably, benchmark-
quality experiments. The number of critical experiments needed is
dependent on the statistical method used to analyze the data. For
example, some methods require a minimum number of data points to
reliably determine whether the data are normally distributed. Merely
having a large number of experiments is not sufficient to provide
confidence in the validation result, if the experiments are not
applicable to the application. The reviewer should particularly examine
whether consideration of only the most applicable experiments would
result in a larger negative bias (and thus a lower
[[Page 36563]]
USL) than that determined based on the full set of experiments. The
experiments should also ideally be sufficiently well-characterized
(including experimental parameters and their uncertainties) to be
considered benchmark experiments. They should be drawn from established
sources (such as from the International Handbook of Evaluated
Criticality Safety Benchmark Experiments (IHECSBE), laboratory reports,
or peer-reviewed journals). For some applications, benchmark-quality
experiments may not be available; when necessary, critical experiments
that do not rise to the level of benchmark-quality experiments may be
used. However, the reviewer should take this into consideration and
should evaluate the need for additional margin.
Some questions that the reviewer may ask in evaluating the number
and quality of critical experiments as justification for the MMS
include:
Are the critical experiments chosen all high-quality
benchmarks from reliable (e.g., peer-reviewed and widely-accepted)
sources?
Are the critical experiments chosen taken from multiple
independent sources, to minimize the possibility of systematic errors?
Have the experimental uncertainties associated with the
critical experiments been provided and used in calculating the bias and
bias uncertainty?
Is the number and distribution of critical experiments
sufficient to establish trends in the bias across the entire range of
parameters?
Is the number of critical experiments commensurate with
the statistical methodology being used?
Validation Methodological Rigor
Having a sufficiently rigorous validation methodology means having
a methodology that is appropriate for the number and distribution of
critical experiments, that calculates the bias and its uncertainty
using an established statistical methodology, that accounts for any
trends in the bias, and that accounts for all apparent sources of
uncertainty in the bias (e.g., the increase in uncertainty due to
extrapolating the bias beyond the range covered by the experimental
data.) Examples of deficiencies in the validation methodology may
include: (1) Using a statistical methodology relying on the data being
normally distributed about the mean keff to analyze data
that are not normally distributed; (2) using a linear regression fit on
data that has a non-linear dependence on a trending parameter; (3) use
of a single pooled bias when very different types of critical
experiments are being evaluated in the same validation. These
deficiencies serve to decrease confidence in the validation results and
may warrant additional margin (i.e., a larger MMS). Additional guidance
on some of the more commonly observed deficiencies is provided below.
The assumption that data is normally distributed is generally
valid, unless there is a strong trend in the data or different types of
critical experiments with different mean calculated keff
values are being combined. Tests for normality require a minimum number
of critical experiments to attain a specified confidence level
(generally 95%). If there are insufficient data to verify that the data
are normally distributed, or the data are shown to be not normally
distributed, a non-parametric technique should be used to analyze the
data.
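A minimal sketch of this kind of check follows; the keff values are hypothetical, and the Shapiro-Wilk test is used here only as one common choice of normality test. The fallback shown is a simple distribution-free bound, standing in for whatever non-parametric technique the licensee actually employs.

```python
# Sketch only: test whether calculated keff results look normally distributed
# and fall back to a distribution-free bound if not.  Data are hypothetical.
import numpy as np
from scipy import stats

keff_calc = np.array([0.9968, 0.9981, 1.0002, 0.9954, 0.9990, 0.9973,
                      0.9961, 0.9987, 0.9970, 0.9995, 0.9958, 0.9979])

stat, p_value = stats.shapiro(keff_calc)          # Shapiro-Wilk normality test
if p_value < 0.05:
    # Normality rejected at the 95% level: use a non-parametric (rank-based)
    # bound -- shown here simply as the lowest observed value -- instead of a
    # normal-theory tolerance limit.
    lower_bound = keff_calc.min()
    basis = "non-parametric"
else:
    # Normality not rejected: a normal-theory one-sided tolerance limit may be
    # used (tolerance factor shown as a placeholder value here).
    lower_bound = keff_calc.mean() - 2.74 * keff_calc.std(ddof=1)
    basis = "normal-theory"
print(f"p = {p_value:.3f}; {basis} lower bound on keff = {lower_bound:.4f}")
```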
The critical experiments chosen should ideally provide a continuum
of data across the entire validated range, so that any variation in the
bias as a function of important system parameters may be observed. The
presence of discrete clusters of experiments having a calculated
keff lower than the set of critical experiments as a whole
should be examined closely to determine if there is some systematic
effect common to a particular type of calculation that makes use of the
overall bias non-conservative. Because the bias can vary with system
parameters, if the licensee has combined different subsets of data
(e.g., solutions and powders, low- and high-enriched, homogeneous and
heterogeneous), the bias for the different subsets should be analyzed.
In addition, the goodness-of-fit for any function used to trend the
bias should be examined to ensure it is appropriate to the data being
analyzed.
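The trending examination described above can be sketched as follows; the trending parameter (a hypothetical H/X moderation ratio), the keff results, and the use of a simple linear fit with an r-squared check are illustrative stand-ins for whatever trending analysis the licensee actually performs.

```python
# Sketch of trending the calculated keff against a system parameter and checking
# whether a simple linear fit is appropriate.  All data are hypothetical.
import numpy as np
from scipy import stats

h_to_x = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0])
keff   = np.array([0.9975, 0.9970, 0.9968, 0.9961, 0.9958, 0.9955, 0.9950, 0.9948])

fit = stats.linregress(h_to_x, keff)
residuals = keff - (fit.intercept + fit.slope * h_to_x)
print(f"slope = {fit.slope:.2e} per unit H/X, r^2 = {fit.rvalue**2:.3f}")
print(f"max |residual| = {np.abs(residuals).max():.5f}")
# A poor r^2 or structured residuals would indicate that the assumed linear
# form does not fit the data, and that separate subset biases or a different
# functional form should be used.
```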
If critical experiments do not cover the entire range of parameters
needed to cover anticipated applications, it may be necessary to extend
the AOA by making use of trends in the bias. Any extrapolation (or wide
interpolation) of the data should be done by means of an established
mathematical methodology that takes into account the functional form of
both the bias and its uncertainty. The extrapolation should not be
based on judgement alone, such as by observing that the bias is
increasing in the extrapolated range, because this may not account for
the increase in the bias uncertainty that will occur with increasing
extrapolation. The reviewer should independently confirm that the
derived bias is valid in the extrapolated range and should ensure that
the extrapolation is not large. NUREG/CR-6698 states that critical
experiments should be added if the data must be extrapolated more than
10%. There is no corresponding guidance given for interpolation;
however, if the gap represents a significant fraction of the total
range of the data (e.g., more than 20% of the range of the data), then
the reviewer should consider this to be a wide interpolation. If the
extrapolation or interpolation is too large, new factors that could
affect the bias may be introduced as the physical phenomena in the
system change. The reviewer should not view validation as a purely
mathematical exercise, but should bear in mind the neutron physics and
underlying physical phenomena when interpreting the results.
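The 10 percent extrapolation figure from NUREG/CR-6698 and the 20-percent-of-range reading of a wide interpolation given above can be applied mechanically, as in the following sketch; the enrichment values are hypothetical, and such a check supplements, rather than replaces, judgment about the underlying physics.

```python
# Mechanical check of the coverage thresholds discussed above: extrapolation
# beyond about 10% of the data range (NUREG/CR-6698) or an internal gap wider
# than about 20% of the range.  The enrichment values (wt% 235U) are hypothetical.
def coverage_check(exp_values, app_value):
    pts = sorted(exp_values)
    lo, hi = pts[0], pts[-1]
    span = hi - lo
    if app_value < lo or app_value > hi:
        extrap = max(lo - app_value, app_value - hi) / span
        note = " -- add experiments (extrapolation > 10% of range)" if extrap > 0.10 else ""
        return f"extrapolation of {extrap:.0%} of the data range{note}"
    widest_gap = max(b - a for a, b in zip(pts, pts[1:])) / span
    note = " -- treat as a wide interpolation (gap > 20% of range)" if widest_gap > 0.20 else ""
    return f"within the data; widest gap is {widest_gap:.0%} of the range{note}"

print(coverage_check([2.0, 3.0, 3.5, 4.0, 4.5, 5.0], 5.4))
```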
Discarding an unusually large number of critical experiments as
outliers (i.e., more than 1-2%) should also be viewed with some
concern. Apparent outliers should not be discarded based purely upon
judgement or statistical grounds (such as causing the data to fail
tests for normality), because they could be providing valuable
information on the method's validity for a particular application. The
reviewer should verify that there are specific defensible reasons, such
as reported inconsistencies in the experimental data, for discarding
any outliers. If any of the critical experiments from a particular data
set are discarded, the reviewer should examine other experiments
included to determine whether they may be subject to the same
systematic errors. Outliers should be examined carefully especially
when they have a lower calculated keff than the other
experiments included.
NUREG-1520 states that the MoS should be large compared to the
uncertainty in the bias. The observed spread of the data about the mean
keff should be examined as an indicator of the overall
precision of the calculational method. The reviewer should ascertain
whether the statistical method of validation considers both the
observed spread in the data and the experimental and calculational
uncertainty in determining the USL. The reviewer should also evaluate
whether the observed spread in the data is consistent with the reported
uncertainty (e.g., whether χ²/N ≈ 1). If the spread in the data is
larger than, or comparable to, the MMS, then the reviewer should
consider whether additional margin (i.e., a larger MMS) is needed.
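One way to carry out this consistency check is sketched below; the keff results and reported one-sigma uncertainties are hypothetical, and the simple χ²/N statistic shown is only one reasonable form of the check.

```python
# Sketch of the spread-versus-uncertainty consistency check.  The keff results
# and the reported combined (experimental plus Monte Carlo) uncertainties are
# hypothetical.
import numpy as np

keff  = np.array([0.9962, 0.9981, 0.9970, 0.9955, 0.9990, 0.9975])
sigma = np.array([0.0012, 0.0010, 0.0011, 0.0013, 0.0010, 0.0012])

chi2_per_n = np.mean(((keff - keff.mean()) / sigma) ** 2)
print(f"chi-squared per experiment = {chi2_per_n:.2f}")
# Values well above 1 indicate that the observed scatter exceeds the reported
# uncertainties, suggesting unquantified error sources and possibly the need
# for a larger MMS.
```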
As a final test of the code's accuracy, the bias should be
relatively small (i.e., bias ≲ 2 percent), or else the reason for
the bias should be determined. No credit should be taken for positive
bias, because this would result in making changes in a non-conservative
direction without having a clear understanding of those changes. If the
absolute value of
[[Page 36564]]
the bias is very large--and especially if the reason for the large bias
cannot be determined--this may indicate that the calculational method
is not very accurate, and a larger MMS may be appropriate.
Some questions that the reviewer may ask in evaluating the rigor of
the validation methodology as justification for the MMS include:
Are the results from use of the methodology consistent
with the data (e.g., normally distributed)?
Is the normality of the data confirmed prior to performing
statistical calculations? If the data does not pass the tests for
normality, is a non-parametric method used?
Does the assumed functional form of the bias represent a
good fit to the critical experiments? Is a goodness-of-fit test
performed?
Does the method determine a pooled bias across disparate
types of critical experiments, or does it consider variations in the
bias for different types of experiments? Are there discrete clusters of
experiments for which the bias appears to be non-conservative?
Has additional margin been applied to account for
extrapolation or wide interpolation? Is this done based on an
established mathematical methodology?
Have critical experiments been discarded as apparent
outliers? Is there a valid reason for doing so?
Performing an adequate code validation is not by itself sufficient
justification for any specific MMS. The reason for this is that the
validation analysis determines the bias and its uncertainty, but not
the MMS. The MMS is added after the validation has been performed to
provide added assurance of subcriticality. However, having a validation
methodology that either exceeds or falls short of accepted practices
for validation may be a basis for either reducing or increasing the
MMS.
Statistical Conservatism
In addition to having conservatism in keff due to
modeling practices, licensees may also provide conservatism in the
statistical methods used to calculate the USL. For example, NUREG/CR-
6698 states that an acceptable method for calculating the bias is to
use the single-sided tolerance limit approach with a 95/95 confidence
(i.e., 95% confidence that 95% of all future critical calculations will
lie above the USL). If the licensee decides to use the single-sided
tolerance limit approach with a 95/99.9 confidence, this would result
in a more conservative USL than with a 95/95 confidence. This would be
true of other methods for which the licensee's confidence criteria
exceed the minimum accepted criteria. Generally, the NRC has accepted
95% confidence levels for validation results, so using more stringent
confidence levels may provide conservatism. In addition, there may be
other reasons a larger bias and/or bias uncertainty than necessary has
been used (e.g., because of the inclusion of inapplicable critical
experiments that have a lower calculated keff).
The reviewer may credit this conservatism towards having an
adequate MoS if: (1) The licensee demonstrates that this translates
into a specific Δkeff; and (2) the licensee
demonstrates that the margin will be dependably present, based on
license or other commitments.
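The effect of the coverage proportion on a one-sided lower tolerance limit can be illustrated as follows, assuming normally distributed keff results; this sketch is not the full NUREG/CR-6698 or USLSTATS procedure, and the keff values are hypothetical.

```python
# Sketch of how the coverage proportion affects a one-sided lower tolerance
# limit under a normality assumption.  Not the full NUREG/CR-6698 or USLSTATS
# procedure; the keff values are hypothetical.
import numpy as np
from scipy import stats

def one_sided_k(n: int, p: float, gamma: float) -> float:
    """Tolerance factor: gamma confidence that a proportion p of the population
    lies above (mean - k * s), from the noncentral t distribution."""
    delta = stats.norm.ppf(p) * np.sqrt(n)
    return stats.nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

keff = np.array([0.9968, 0.9981, 1.0002, 0.9954, 0.9990, 0.9973,
                 0.9961, 0.9987, 0.9970, 0.9995, 0.9958, 0.9979])
n, mean, s = len(keff), keff.mean(), keff.std(ddof=1)

for p in (0.95, 0.999):
    print(f"95/{p * 100:g} lower tolerance limit: {mean - one_sided_k(n, p, 0.95) * s:.4f}")
# The 95/99.9 limit is lower (more conservative) than the 95/95 limit, which is
# the sense in which a more stringent coverage criterion adds conservatism.
```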
(3) Additional Risk-Informed Considerations
Besides modeling conservatism and the validation results, other
factors may provide added assurance of subcriticality. These factors
should be considered in evaluating whether there is adequate MoS and
are discussed below.
System Sensitivity and Uncertainty
The sensitivity of keff to changes in system parameters
can be used to assess the potential effect of errors on the calculation
of keff. If the calculated keff is especially
sensitive to a given parameter, an error in that parameter could have a
correspondingly large contribution to the bias. Conversely, if
keff is very insensitive to a given parameter, then an error
may have a negligible effect on the bias. This is of particular
importance when assessing whether the chosen critical experiments are
sufficiently similar to applications to justify a small MMS.
The reviewer should not consider the sensitivity in isolation, but
should also consider the magnitude of uncertainties in the parameters.
If keff is very sensitive to a given parameter, but the
value of that parameter is known with very high accuracy (and its
variations are well-controlled), the potential contribution to the bias
may still be very small. Thus, the contribution to the bias is a
function of the product of the keff sensitivity with the
uncertainty. To illustrate this, suppose that keff is a
function of a large number of variables, x1,
x2,..., xN. Then the uncertainty in
keff may be expressed as follows, if all the individual
terms are independent:
\sigma_k = \sqrt{\sum_{i=1}^{N} \left( \frac{\partial k}{\partial x_i} \right)^{2} \sigma_{x_i}^{2}}
where the partial derivatives ∂k/∂xi are proportional to the sensitivity
and the terms σxi represent the uncertainties, or likely variations, in
the parameters. (If the variables are not all independent, then there may
be additional cross terms.) Each term in this equation then represents the
contribution of one parameter to the overall uncertainty in keff.
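A minimal numerical sketch of this expression follows; the three parameters, their keff sensitivities, and their uncertainties are hypothetical, and the point is simply that each contribution is the product of a sensitivity and an uncertainty, combined in quadrature for independent parameters.

```python
# Numerical sketch of the propagation formula above.  The parameters, their
# keff sensitivities, and their uncertainties are hypothetical.
import numpy as np

# dk/dx_i: sensitivity of keff to each parameter (per unit of that parameter)
sensitivity = np.array([0.004, 0.0008, 0.002])
# sigma_x_i: standard uncertainty, or likely variation, of each parameter
uncertainty = np.array([0.05, 0.10, 0.02])

contributions = sensitivity * uncertainty          # one term per parameter
sigma_keff = np.sqrt(np.sum(contributions ** 2))   # combined in quadrature
print("per-parameter contributions to keff uncertainty:", contributions)
print(f"combined uncertainty in keff: {sigma_keff:.5f}")
```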
There are several tools available to the reviewer to ascertain the
sensitivity of keff to changes in the underlying parameters.
Some of these are listed below:
1. Analytical tools that calculate the sensitivity for each
nuclide-reaction pair present in the problem may be used. One example
of this is the TSUNAMI code in the SCALE 5 code package. TSUNAMI
calculates both an integral sensitivity coefficient (i.e., summed over
all energy groups) and a sensitivity profile as a function of energy
group. The reviewer should recognize that TSUNAMI only calculates the
keff sensitivity to changes in the underlying nuclear data,
and not to other parameters that could affect the bias and should be
considered. (See section on Critical Experiment Similarity for caveats
about using TSUNAMI.)
2. Direct sensitivity calculations may be used, in which system
parameters are perturbed and the resulting impact on keff
determined. Perturbation of atomic number densities can also be used to
confirm the sensitivity calculated by other methods (e.g., TSUNAMI).
Such techniques are not limited to considering the effect of the
nuclear data.
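A direct sensitivity calculation of the kind described in item 2 can be sketched as follows; run_keff stands in for the actual transport calculation, and the simple linear stand-in model and the boron concentration parameter are hypothetical.

```python
# Sketch of a direct (central-difference) sensitivity estimate: perturb one
# system parameter, rerun the keff calculation, and take the finite-difference
# slope.  run_keff is a stand-in for the actual transport calculation.
def direct_sensitivity(run_keff, x_nominal: float, rel_perturbation: float = 0.01) -> float:
    dx = x_nominal * rel_perturbation
    return (run_keff(x_nominal + dx) - run_keff(x_nominal - dx)) / (2.0 * dx)

model = lambda boron_ppm: 0.95 - 2.0e-5 * boron_ppm   # hypothetical stand-in only
print(f"dkeff/d(ppm boron) ~ {direct_sensitivity(model, 500.0):.2e}")
```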
There are also several sources available to the reviewer to
ascertain the uncertainty associated with the underlying parameters.
For process parameters, these sources of uncertainty may include
manufacturing tolerances, quality assurance records, and experimental
and/or measurement results. For nuclear data parameters, these sources
of uncertainty may include published data, uncertainty data distributed
with the cross section libraries, or the covariance data used in
methods such as TSUNAMI.
Some systems are inherently more sensitive to changes in the
underlying parameters than others. For example, high-enriched uranium
systems typically exhibit a greater sensitivity to changes in system
parameters (e.g., mass, moderation) than low-enriched systems. This has
been the reason that HEU (i.e., > 20 wt% 235U) facilities have been
licensed with larger MMS values than LEU (<= 10 wt% 235U) facilities.
This greater sensitivity would also be true of weapons-grade Pu
compared to low-
[[Page 36565]]
assay mixed oxides (i.e., with a few percent Pu/U). However, it is also
true that the uncertainties associated with measurement of the 235U
cross sections are much smaller than those associated with measurement
of the 238U cross sections. Both the greater sensitivity and smaller
uncertainty would need to be considered in evaluating whether a larger
MMS is needed for high-enriched systems.
Frequently, operating limits that are more conservative than safety
limits determined using keff calculations are established to
prevent those safety limits from being exceeded. For systems in which
keff is very sensitive to the system parameters, more margin
between the operating and safety limits may be needed. Systems in which
keff is very sensitive to the process parameters may need
both a larger margin between operating and safety limits and a larger
MMS. This is because the system is sensitive to any change, whether it
be caused by normal process variations or caused by unknown errors.
Because of this, the assumption is often made that the MMS is meant to
account for variations in the process or the ability to control the
process parameters. However, the MMS is meant only to allow for unknown
(or difficult to quantify) uncertainties in the calculation of
keff. The reviewer should recognize that determination of an
appropriate MMS is not dependent on the ability to control process
parameters within safety limits (although both may depend on the system
sensitivity).
Some questions that the reviewer may ask in evaluating the system
sensitivity as justification for the MMS include:
How sensitive is keff to changes in the
underlying nuclear data (e.g., cross sections)?
How sensitive is keff to changes in the
geometric form and material composition?
Are the uncertainties associated with these underlying
parameters well-known?
How does the MMS compare to the expected magnitude of
changes in keff resulting from uncertainties in these
underlying parameters?
Knowledge of the Neutron Physics
Another important consideration that may affect the appropriate MMS
is the extent to which the physical behavior of the system is known.
Fissile systems which are known to be subcritical with a high degree of
confidence do not require as much MMS as systems where subcriticality
is less certain. An example of a system known to be subcritical with
high confidence is a light-water reactor fuel assembly. The design of
these systems is such that they can only be made critical when highly
thermalized. Due to extensive analysis and reactor experience, the
flooded isolated assembly is known to be subcritical. In addition, the
thermal neutron cross sections for materials in finished reactor fuel
have been measured with a very high degree of accuracy (as opposed to
cross sections in the resonance region). Other examples of systems in
which there is independent corroborating evidence of subcriticality may
include systems consisting of very simple geometric shapes, or other
idealized situations, in which there is strong evidence that the system
is subcritical based on comparison with highly similar systems in
published sources (e.g., standards and handbooks). In these cases, the
MMS may be significantly reduced due to the fact that the calculation
of keff is not relied on alone to provide assurance of
subcriticality.
Reliance on independent knowledge that a given system is
subcritical necessarily requires that the configuration of the system
be fixed. If the configuration can change from the reference case,
there will be less knowledge about the behavior of the changed system.
For example, a finished fuel assembly is subject to strict quality
assurance checks and would not reach final processing if it were
outside specifications. In addition, it has a form that has both been
extensively studied and is highly stable. For these reasons, there is a
great deal of certainty that this system is well-characterized and is
not subject to change. A typical solution or powder system (other than
one with a simple geometric arrangement) would not have been studied
with the same level of rigor as a finished fuel assembly. Even if they
were studied with the same level of rigor, these systems have forms
that are subject to change into forms whose neutron physics has not
been as extensively studied.
Some questions that the reviewer may ask in evaluating the
knowledge of the neutron physics as justification for the MMS include:
Is the geometric form and material composition of the
system fixed and very unlikely to change?
Is the geometric form and material composition of the
system subject to strict quality assurance, such that tolerances have
been bounded?
Has the system been extensively studied in the nuclear
industry and shown to be subcritical (e.g., in reactor fuel studies)?
Are there other reasons besides criticality calculations
to conclude that the system will be subcritical (e.g., handbooks,
standards, published data)?
How well-known is the nuclear data (e.g., cross sections)
in the energy range of interest?
Likelihood of the Abnormal Condition
Some facilities have been licensed with different sets of
keff limits for normal and abnormal conditions. Separate
keff limits for normal and abnormal conditions are
permissible, but are not required. There is some low likelihood that
processes calculated to be subcritical will, in fact, be critical, and
this likelihood increases as the MMS is reduced (though it cannot in
general be quantified). NUREG-1718, ``Standard Review Plan for the
Review of an Application for a Mixed Oxide (MOX) Fuel Fabrication
Facility,'' states that abnormal conditions should be at least unlikely
from the standpoint of the double contingency principle. Then, a
somewhat higher likelihood that a system calculated to be subcritical
is, in fact, critical is more permissible for abnormal conditions than
for normal conditions, because of the low likelihood of the abnormal
condition being realized. The reviewer should verify that the licensee
has defined abnormal conditions such that achieving the abnormal
condition requires at least one contingency to have occurred, that the
system will be closely monitored so that it is promptly detected, and
that it will be promptly corrected upon detection. Also, there is
generally more conservatism present in the abnormal case, because the
parameters that are assumed to have failed are analyzed at their worst-
case credible condition.
The increased risk associated with having a smaller MMS for
abnormal conditions should be commensurate with, and offset by, the low
likelihood of achieving the abnormal condition. That is, if the normal
case keff limit is judged to be acceptable, then the
abnormal case limit will also be acceptable, provided the increased
likelihood (that a system calculated to be subcritical will be
critical) is offset by the reduced likelihood of realizing the abnormal
condition because of the controls that have been established. Note that
if two or more contingencies must occur to reach a given condition,
there is no requirement to ensure that the resulting condition is
subcritical. If a single keff limit is used (i.e., no credit
for unlikelihood of the abnormal condition), then the limit must be
found acceptable to cover both normal and credible abnormal conditions.
The reviewer should always make this finding considering specific
conditions
[[Page 36566]]
and controls in the process(es) being evaluated.
(4) Statistical Justification for the MMS
The NRC does not consider statistical justification an appropriate
basis for a specific MMS. Previously, some licensees have attempted to
justify specific MMS values based on a comparison of two statistical
methods. For example, the USLSTATS code issued with the SCALE code
package contains two methods for calculating the USL: (1) The
Confidence Band with Administrative Margin approach (calculating USL-
1), and (2) the Lower Tolerance Band approach (calculating USL-2). The
value of the MMS is an input parameter to the Confidence Band approach
but is not included explicitly in the Lower Tolerance Band approach. In
this particular justification, adequacy of the MMS is based on a
comparison of USL-1 and USL-2 (i.e., the condition that USL-1,
including the chosen MMS, is less than USL-2). However, the reviewer
should not accept this justification.
The condition that USL-1 (with the chosen MMS) is less than USL-2
is necessary, but is not sufficient, to show that an adequate MMS has
been used. These methods are both statistical methods, and a comparison
can only demonstrate whether the MMS is sufficient to bound any
statistical uncertainties included in the Lower Tolerance Band approach
but not included in the Confidence Band approach. There may be other
statistical or systematic errors in calculating keff that
are not included in either statistical treatment. Because of this, an
MMS value should be specified regardless of the statistical method
used. Therefore, the reviewer should not consider such a statistical
approach an acceptable justification for any specific value of the MMS.
(5) Summary
Based on a review of the licensee's justification for its chosen
MMS, taking into consideration the aforementioned factors, the staff
should make a determination as to whether the chosen MMS provides
reasonable assurance of subcriticality under normal and credible
abnormal conditions. The staff's review should be risk-informed, in
that the review should be commensurate with the MoS and should consider
the specific facility and process characteristics, as well as the
specific modeling practices used. As an example, approving an MMS value
greater than 0.05 for processes typically encountered in enrichment and
fuel fabrication facilities should require only a cursory review,
provided that an acceptable validation has been performed and modeling
practices at least as conservative as those in NUREG-1520 have been
utilized. The approval of a smaller MMS will require a somewhat more
detailed review, commensurate with the MMS that is requested. However,
the MMS should not be reduced below 0.02 due to inherent uncertainties
in the cross section data and the magnitude of code errors that have
been discovered. Quantitative arguments (such as modeling conservatism)
should be used to the extent practical. However, in many instances, the
reviewer will need to make a judgement based at least partly on
qualitative arguments. The staff should document the basis for finding
the chosen MMS value to be acceptable or unacceptable in the Safety
Evaluation Report (SER), and should ensure that any factors upon which
this determination rests are ensured to be present over the facility
lifetime (e.g., through license commitment or condition).
Regulatory Basis
In addition to complying with paragraphs (b) and (c) of this
section, the risk of nuclear criticality accidents must be limited by
assuring that under normal and credible abnormal conditions, all
nuclear processes are subcritical, including use of an approved margin
of subcriticality for safety. [10 CFR 70.61(d)]
Technical Review Guidance
Determination of an adequate MMS is strongly dependent upon
specific processes, conditions, and calculational practices at the
facility being licensed. Judgement and experience must be employed in
evaluating the adequacy of the proposed MMS. In the past, an MMS of
0.05 has generally been found acceptable for most typical low-enriched
fuel cycle facilities without a detailed technical justification. A
smaller MMS may be acceptable but will require some level of technical
review.\2\ However, for reasons stated previously, the MMS should not
be reduced below 0.02.
---------------------------------------------------------------------------
\2\ For high-enriched and plutonium or other fuel cycle
facilities, no general guidance on the appropriate MMS is given. The
reviewer should consider any relevant differences between these
facilities and low-enriched uranium facilities (e.g., generally
increased sensitivity of keff and generally reduced cross
section uncertainty) on a case-by-case basis.
---------------------------------------------------------------------------
An MMS of 0.05 should be found acceptable for low-enriched fuel
cycle processes and facilities if:
1. A validation has been performed that meets accepted industry
guidelines (e.g., meets the requirements of ANSI/ANS-8.1-1998, NUREG/
CR-6361, and/or NUREG/CR-6698).
2. There is an acceptable number of critical experiments with
similar geometric forms, material compositions, and neutron energy
spectra to applications. These experiments cover the range of
parameters of applications, or else margin is provided to