Notice of Availability of Draft Interim Staff Guidance Document for Fuel Cycle Facilities, 14018-14028 [06-2611]
14018
Federal Register / Vol. 71, No. 53 / Monday, March 20, 2006 / Notices
NUCLEAR REGULATORY
COMMISSION
Notice of Availability of Draft Interim
Staff Guidance Document for Fuel
Cycle Facilities
AGENCY: Nuclear Regulatory Commission.

ACTION: Notice of availability.

FOR FURTHER INFORMATION CONTACT: James Smith, Project Manager, Technical Support Group, Division of Fuel Cycle Safety and Safeguards, Office of Nuclear Material Safety and Safeguards, U.S. Nuclear Regulatory Commission, Washington, DC 20005–0001. Telephone: (301) 415–6459; fax number: (301) 415–5370; e-mail: jas4@nrc.gov.

SUPPLEMENTARY INFORMATION:

I. Introduction

The Nuclear Regulatory Commission (NRC) continues to prepare and issue Interim Staff Guidance (ISG) documents for fuel cycle facilities. These ISG documents provide clarifying guidance to the NRC staff when reviewing licensee integrated safety analyses, license applications or amendment requests, and other related licensing activities for fuel cycle facilities under 10 CFR part 70.

II. Summary

The purpose of this notice is to provide the public an opportunity to review a draft ISG, FCSS–ISG–10, Revision 2, which provides guidance to NRC staff for determining whether the minimum margin of subcriticality is sufficient to provide adequate assurance of subcriticality for safety, demonstrating compliance with the performance requirements of 10 CFR 70.61(d). A listing of comments received on the previous draft, together with their disposition, is also provided. These documents are being issued to support a public meeting scheduled for April 28, 2006, at the NRC Headquarters Auditorium, at which the NRC will discuss revision of the guidance document and its resolution of comments received on Revision 1. A separate meeting notice giving specific details of the meeting agenda will be provided shortly.

III. Further Information
The documents related to this action
are available electronically at the NRC’s
Electronic Reading Room at https://
www.nrc.gov/reading-rm/adams.html.
From this site, you can access the NRC’s
Agencywide Documents Access and
Management System (ADAMS), which
provides text and image files of NRC’s
public documents. The ADAMS
accession numbers for the documents
related to this notice are provided in the
following table. If you do not have
access to ADAMS or if there are
problems in accessing the documents
located in ADAMS, contact the NRC
Public Document Room (PDR) Reference
staff at 1–800–397–4209, 301–415–4737,
or by e-mail to pdr@nrc.gov.
Interim staff guidance                                                ADAMS accession No.
Draft FCSS Interim Staff Guidance—10, Revision 2 .................... ML060260479
Comments on Draft FCSS ISG–10, Rev. 1 and Resolution ................ ML060470150
This document may also be viewed
electronically on the public computers
located at the NRC’s PDR, O1 F21, One
White Flint North, 11555 Rockville
Pike, Rockville, MD 20852. The PDR
reproduction contractor will copy
documents for a fee.
For the Nuclear Regulatory Commission.
Dated at Rockville, Maryland this 7th day
of March 2006.
Melanie A. Galloway,
Chief, Technical Support Group, Division of
Fuel Cycle Safety and Safeguards, Office of
Nuclear Material Safety and Safeguards.
Draft FCSS Interim Staff Guidance–10,
Revision 2
Justification for Minimum Margin of
Subcriticality for Safety
Prepared by the Division of Fuel Cycle
Safety and Safeguards, Office of Nuclear
Material Safety and Safeguards
Issue
Technical justification for the
selection of the minimum margin of
subcriticality for safety for fuel cycle
facilities, as required by 10 CFR 70.61(d).
Introduction
10 CFR 70.61(d) requires, in part, that
licensees or applicants (henceforth to be
referred to as ‘‘licensees’’) demonstrate
that ‘‘under normal and credible
abnormal conditions, all nuclear
processes are subcritical, including use
of an approved margin of subcriticality
for safety.’’ There are a variety of
methods that may be used to
demonstrate subcriticality, including
use of industry standards, handbooks,
hand calculations, and computer
methods. Subcriticality is assured, in
part, by providing margin between
actual conditions and expected critical
conditions. This interim staff guidance
(ISG), however, applies only to margin
used in those methods that rely on
calculation of keff, including
deterministic and probabilistic
computer methods. The use of other
methods (e.g., use of endorsed industry
standards, widely accepted handbooks,
certain hand calculations), containing
varying amounts of margin, is outside
the scope of this ISG.
For methods relying on calculation of
keff, margin may be provided either in
terms of limits on physical parameters
of the system (of which keff is a
function), or in terms of limits on keff
directly, or both. For the purposes of
this ISG, the term margin of safety will
be used to refer to the margin to
criticality in terms of system
parameters, and the term margin of
subcriticality (MoS) will refer to the
margin to criticality in terms of keff. A
common approach to ensuring
subcriticality is to determine a
maximum keff limit below which the
licensee’s calculations must fall. This
limit will be referred to in this ISG as
the Upper Subcritical Limit (USL).
Licensees using calculational methods
perform validation studies, in which
critical experiments similar to actual or
anticipated facility applications are
chosen and then analyzed to determine
the bias and uncertainty in the bias. The
bias is a measure of the systematic
differences between calculational
method results and experimental data.
The uncertainty in the bias is a measure
of both the accuracy and precision of
the calculations and the uncertainty in
the experimental data. A USL is then
established that includes allowances for
bias and bias uncertainty as well as an
additional margin, to be referred to in
this ISG as the minimum margin of
subcriticality (MMS). The MMS is
variously referred to in the nuclear
industry as minimum subcritical
margin, administrative margin, and
arbitrary margin, and the term MMS
should be regarded as synonymous with
those terms. The term MMS will be used
throughout this ISG, and has been
chosen for consistency with the rule.
The MMS is an allowance for any
unknown (or difficult to identify or
quantify) errors or uncertainties in the
method of calculating keff that may exist
beyond those which have been
accounted for explicitly in calculating
the bias and its uncertainty.
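The way the bias, the bias uncertainty, and the MMS combine into a USL can be sketched numerically. This is an illustrative simplification only: the function name is hypothetical, and the two-standard-deviation spread allowance stands in for the tolerance-interval statistics an actual validation would use (e.g., per NUREG/CR–6698).

```python
import statistics

def usl(k_calc_experiments, mms=0.05):
    """Illustrative Upper Subcritical Limit from critical-experiment results.

    Each experiment is physically critical (k_physical ~ 1), so the bias is
    the mean calculated keff minus 1.  A positive bias is conservatively
    ignored; only a negative bias lowers the limit.
    """
    bias = statistics.mean(k_calc_experiments) - 1.0
    bias = min(bias, 0.0)  # never take credit for a positive bias
    # Simple one-sided allowance for the spread of the data; real validations
    # use rigorous tolerance-interval methods rather than a flat two sigma.
    bias_uncertainty = 2.0 * statistics.stdev(k_calc_experiments)
    return 1.0 + bias - bias_uncertainty - mms

# Example: a method that slightly under-predicts keff for critical systems,
# with the 0.02 minimum MMS discussed below.
print(round(usl([0.995, 0.998, 0.992, 0.996, 0.994], mms=0.02), 4))  # → 0.9705
```

A licensee's calculated keff for an application (plus its calculational uncertainty) must then fall below this USL to be judged acceptably subcritical.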
There is little guidance in the fuel
facility Standard Review Plans (SRPs) as
to what constitutes sufficient technical
justification for the MMS. NUREG–
1520, ‘‘Standard Review Plan for the
Review of a License Application for a
Fuel Cycle Facility,’’ Section 5.4.3.4.4,
states that there must be margin that
includes, among other uncertainties,
‘‘adequate allowance for uncertainty in
the methodology, data, and bias to
assure subcriticality.’’ An important
component of this overall margin is the
MMS. However, there has been almost
no guidance on how to determine an
appropriate MMS. Partly due to the lack
of historical guidance, and partly due to
differences between facilities’ processes
and methods of calculation, there have
been significantly different MMS values
approved for the various fuel cycle
facilities over time. In addition, the
different ways licensees have of
defining margins and calculating keff
limits have made a consistent approach
to reviewing keff limits difficult. Recent
licensing experience has highlighted the
need for further guidance to clarify what
constitutes an acceptable justification
for the MMS.
The MMS can have a substantial
effect on facility operations (e.g., storage
capacity, throughput) and there has,
therefore, been considerable recent
interest in decreasing margin in keff
below what has been licensed
previously. In addition, the increasing
sophistication of computer codes and
the ready availability of computing
resources means that there has been a
gradual move towards more realistic
(often resulting in less conservative)
modeling of process systems. The
increasing interest in reducing the MMS
and the reduction in modeling
conservatism make technical
justification of the MMS more risk-significant than it has been in the past.
In general, consistent with a risk-informed approach to regulation, a
smaller MMS requires a more
substantial technical justification.
This ISG is only applicable to fuel
enrichment and fabrication facilities
licensed under 10 CFR part 70.
Discussion
This guidance is applicable to
evaluating the MMS in methods of
evaluation that rely on calculation of
keff. The keff value of a fissionable
system depends, in general, on a large
number of physical variables. The
factors that can affect the calculated
value of keff may be broadly divided into
the following categories: (1) The
geometric configuration; (2) the material
composition; and (3) the neutron
distribution. The geometric form and
material composition of the system—
together with the underlying nuclear
data (e.g., ν, χ(E), cross section data)—
determine the spatial and energy
distribution of neutrons in the system
(flux and energy spectrum). An error in
the nuclear data or the geometric or
material modeling of these systems can
produce an error in the neutron flux and
energy spectrum, and thus in the
calculated value of keff. The bias
associated with a single system is
defined as the difference between the
calculated and physical values of keff, by
the following equation:
b = kcalc − kphysical
Thus, determining the bias requires
knowing both the calculated and
physical keff values of the system. The
bias associated with a single critical
experiment can be known with a high
degree of confidence, because the
physical (experimental) value is known
a priori (kphysical ≈ 1). However, for
calculations performed to demonstrate
subcriticality of facility processes (to be
referred to as ‘‘applications’’), this is not
generally the case. The bias associated
with such an application (i.e., not a
known critical configuration) is not
typically known with this same high
degree of confidence, because the actual
physical keff of the system is usually not
known. In practice, the bias is
determined from the average calculated
keff for a set of experiments that cover
different aspects of the licensee’s
applications. The bias and its
uncertainty must be estimated by
calculating the bias associated with a set
of critical experiments having geometric
forms, material compositions, and
neutron spectra similar to those of the
application. Because of the large
number of factors that can affect the
bias, and the finite number of critical
experiments available, staff should
recognize that this is only an estimate of
the true bias of the system. The
experiments analyzed cannot cover all
possible combinations of conditions or
sources of error that may be present in
the applications to be evaluated. The
effect on keff of geometric, material, or
spectral differences between critical
experiments and applications cannot be
known with precision. Therefore, an
additional margin (MMS) must be
applied to allow for the effects of any
unknown uncertainties that may exist in
the calculated value of keff beyond those
accounted for in the calculation of the
bias and its uncertainty. As the MMS
decreases, there needs to be a greater
level of assurance that the various
sources of bias and uncertainty have
been taken into account, and that the
bias and uncertainty are known with a
high degree of accuracy. In general, the
more similar the critical experiments are
to the applications, the more confidence
there is in the estimate of the bias and
the less MMS is needed.
In determining an appropriate MMS,
the reviewer should consider the
specific conditions and process
characteristics present at the facility in
question. However, the MMS should not
be reduced below 0.02. The nuclear
cross sections are not generally known
to better than ∼ 1–2%. While this does
not necessarily translate into a 2% Δkeff,
it has been observed over many years of
experience with criticality code
validation that biases and spreads in the
data of a few percent can be expected.
As stated in NUREG–1520, MoS should
be large compared to the uncertainty in
the bias. Moreover, errors in the
criticality codes have been discovered
over time that have produced keff
differences of roughly this same
magnitude of 1–2% (e.g., Information
Notice 2005–13, ‘‘Potential Non-Conservative Error in Modeling
Geometric Regions in the KENO–V.a
Criticality Code’’). While the possibility
of having larger undiscovered errors
cannot be entirely discounted, modeling
sufficiently similar critical experiments
with the same code options to be used
in modeling applications should
minimize the potential for this to occur.
However, many years of experience
with the typical distribution of
calculated keff values and with the
magnitude of code errors that have
occasionally surfaced support
establishing 0.02 as the minimum MMS
that should be considered acceptable
under the best possible conditions.
Staff should recognize the important
distinction between ensuring that
processes are safe and ensuring that
they are adequately subcritical. The
value of keff is a direct indication of the
degree of subcriticality of the system,
but is not fully indicative of the degree
of safety. A system that is very
subcritical (i.e., with keff ≪ 1) may have
a small margin of safety if a small
change in a process parameter can result
in criticality. An example of this would
be a UO2 powder storage vessel, which
is subcritical when dry, but may require
only the addition of water for criticality.
Similarly, a system with a small MoS
(i.e., with keff ∼1) may have a very large
margin of safety if it cannot credibly
become critical. An example of this
would be a natural uranium system in
light water, which may have a keff value
close to 1 but will never exceed 1.
Because of this, a distinction should be
made between the margin of
subcriticality and the margin of safety.
Although a variety of terms are in use
in the nuclear industry, the term margin
of subcriticality will be taken to mean
the difference between the actual
(physical) value of keff and the value of
keff at which the system is expected to
be critical. The term margin of safety
will be taken to mean the difference
between the actual value of a parameter
and the value of the parameter at which
the system is expected to be critical. The
MMS is intended to account for the
degree of confidence that applications
calculated to be subcritical will be
subcritical. It is not intended to account
for other aspects of the process (e.g.,
safety of the process or the ability to
control parameters within certain
bounds) that may need to be reviewed
as part of an overall licensing review.
There are a variety of different
approaches that a licensee could choose
in justifying the MMS. Some of these
approaches and means of reviewing
them are described in the following
sections, in no particular preferential
order. Many of these approaches consist
of qualitative arguments, and therefore
there will be some degree of subjectivity
in determining the adequacy of the
MMS. Because the MMS is an allowance
for unknown (or difficult to identify or
quantify) errors, the reviewer must
ultimately exercise his or her best
judgement in determining whether a
specific MMS is justified. Thus, the
topics listed below should be regarded
as factors the reviewer should take into
consideration in exercising that
judgement, rather than any kind of
prescriptive checklist.
The reviewer should also bear in
mind that the licensee is not required to
use any or all of these approaches, but
may choose an approach that is
applicable to its facility or a particular
process within its facility. While it may
be desirable and convenient to have a
single keff limit or MMS value (and
single corresponding justification)
across an entire facility, it is not
necessary for this to be the case. The
MMS may be easier to justify for one
process than for another, or for a limited
application versus generically for the
entire facility. The reviewer should
expect to see various combinations of
these approaches, or entirely different
approaches, used, depending on the
nature of the licensee’s processes and
methods of calculation. Any approach
used must ultimately lead to a
determination that there is adequate
assurance of subcriticality.
(1) Conservatism in the Calculational
Models
The margin in keff produced by the
licensee’s modeling practices, together
with the MMS, provide the margin
between actual conditions and expected
critical conditions. In terms of the
subcriticality criterion taken from ANSI/
ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR Fuel Outside
Reactors’’ (as explained in Appendix A):
MoS ≥ Δkm + Δksa
where Δkm is the MMS and Δksa is the
margin in keff due to conservative
modeling of the system (i.e.,
conservative values of system
parameters).
Two different applications for which
the sums on the right hand side of the
equation above are equal to each other
are equally subcritical. Assurance of
subcriticality may thus be provided by
specifying a margin in keff (Δkm), or
specifying conservative modeling
practices (Δksa), or some combination
thereof. This principle will be
particularly useful to the reviewer
evaluating a proposed reduction in the
currently approved MMS; the review of
such a reduction should prove
straightforward in cases in which the
overall combination of modeling
conservatism and MMS has not
changed. Because of this straightforward
quantitative relationship, any modeling
conservatism that has not been
previously credited should be
considered before examining other
factors. Cases in which the overall MoS
has decreased may still be acceptable,
but would have to be justified by other
means.
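The equivalence argument above can be illustrated with a hypothetical numerical case (the values and function name are illustrative, not from any license):

```python
import math

def total_margin(mms, modeling_conservatism):
    """Overall margin of subcriticality per the criterion above:
    MoS >= Δk_m + Δk_sa."""
    return mms + modeling_conservatism

# Currently approved case: MMS of 0.05 with modest credited conservatism.
current = total_margin(0.05, 0.01)
# Proposed case: MMS reduced to 0.03, offset by newly credited conservatism.
proposed = total_margin(0.03, 0.03)
# The sums are equal, so the two applications are equally subcritical and the
# review of the MMS reduction is straightforward.
print(math.isclose(current, proposed))  # True
```

If the proposed sum were smaller than the current one, the overall MoS would have decreased, and the reduction would have to be justified by the other means this ISG describes.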
In evaluating justification for the
MMS relying on conservatism in the
model, the reviewer should consider
only that conservatism in excess of any
manufacturing tolerances, uncertainties
in system parameters, or credible
process variations. That is, the
conservatism should consist of
conservatism beyond the worst-case
normal or abnormal conditions, as
appropriate, including allowance for
any tolerances. Examples of this added
conservatism may include assuming
optimum concentration in solution
processes, neglecting neutron absorbers
in structural materials, or assuming
minimum reflector conditions (e.g., at
least a 1-inch, tight-fitting reflector
around process equipment). These
technical practices used to perform
criticality calculations generally result
in conservatism of at least several
percent in keff. To credit this as part of
the justification for the MMS, the
reviewer should have assurance that the
modeling practices described will result
in a predictable and dependable amount
of conservatism in keff. In some cases,
the conservatism may be process-dependent, in which case it may be
relied on as justification for the MMS
for a particular process. However, only
modeling practices that result in a
global conservatism across the entire
facility should be relied on as
justification for a site-wide MMS.
Ensuring predictable and dependable
conservatism includes verifying that
this conservatism will be maintained
over the facility lifetime, such as
through the use of license commitments
or conditions.
If the licensee has a program that
establishes operating limits (to ensure
that subcritical limits are not exceeded)
below subcritical limits determined in
nuclear criticality safety evaluations, the
margin provided by this (optional)
practice may be credited as part of the
conservatism. In such cases, the
reviewer should credit only the
difference between operating and
subcritical limits that exceeds any
tolerances or process variation, and
should ensure that operating limits will
be maintained over the facility lifetime,
through the use of license commitments
or conditions.
Some questions that the reviewer may
ask in evaluating the use of modeling
conservatism as justification for the
MMS include:
• How much margin in keff is
provided due to conservatism in
modeling practices?
• How much of this margin exceeds
allowance for tolerances and process
variations?
• Is this margin specific to a
particular process or does it apply to all
facility processes?
• What provides assurance that this
margin will be maintained over the
facility lifetime?
(2) Validation Methodology and Results
Assurance of subcriticality for
methods that rely on the calculation of
keff requires that those methods be
appropriately validated. One of the
goals of validation is to determine the
method’s bias and the uncertainty in the
bias. After this has been done, an
additional margin (MMS) is specified to
account for any additional uncertainties
that may exist. The appropriate MMS
depends, in part, on the degree of
confidence in the validation results.
Having a high degree of confidence in
the bias and bias uncertainty requires
both that there be sufficient (for the
statistical method used) applicable
benchmark-quality experiments and that
there be a rigorous validation
methodology. Critical experiments that
do not rise to the level of benchmark-quality experiments may also be
acceptable, but may require additional
margin. If either the data or the
methodology is deficient, a high degree
of confidence in the results cannot be
attained, and a larger MMS may need to
be employed than would otherwise be
acceptable. Therefore, although
validation and determining the MMS
are separate exercises, they are related.
The more confidence one has in the
validation results, the less additional
margin (MMS) is needed. The less
confidence one has in the validation
results, the more MMS is needed.
Any review of a licensing action
involving the MMS should involve
examination of the licensee’s validation
methodology and results. While there is
no clear quantifiable relationship
between the validation and MMS (as
exists with modeling conservatism),
several aspects of validation should be
considered before making a qualitative
determination of the adequacy of the
MMS.
There are four factors that the
reviewer should consider in evaluating
the validation: (1) The similarity of
critical experiments to actual
applications; (2) sufficiency of the data
(including the number and quality of
experiments); (3) adequacy of the
validation methodology; and (4)
conservatism in the calculation of the
bias and its uncertainty. These factors
are discussed in more detail below.
Similarity of Critical Experiments
Because the bias and its uncertainty
must be estimated based on critical
experiments having geometric form,
material composition, and neutronic
behavior similar to specific
applications, the degree of similarity
between the critical experiments and
applications is a key consideration in
determining the appropriateness of the
MMS. The more closely critical
experiments represent the
characteristics of applications being
validated, the more confidence the
reviewer has in the estimate of the bias
and the bias uncertainty for those
applications.
The reviewer must understand both
the critical experiments and
applications in sufficient detail to
ascertain the degree of similarity
between them. Validation reports
generally contain a description of
critical experiments (including source
references). The reviewer may need to
consult these references to understand
the physical characteristics of the
experiments. In addition, the reviewer
may need to consult process
descriptions, nuclear criticality safety
evaluations, drawings, tables, input
files, or other information to understand
the physical characteristics of
applications. The reviewer must
consider the full spectrum of normal
and abnormal conditions that may have
to be modeled when evaluating the
similarity of the critical experiments to
applications.
In evaluating the similarity of
experiments to applications, the
reviewer must recognize that some
parameters are more significant than
others to accurately calculate keff. The
parameters that have the greatest effect
on the calculated keff of the system are
those that are most important to match
when choosing critical experiments.
Because of this, there is a close
relationship between similarity of
critical experiments to applications and
system sensitivity. Historically, certain
parameters have been used to trend the
bias because these are the parameters
that have been found to have the
greatest effect on the bias. These
parameters include the moderator-to-fuel ratio (e.g., H/U, H/X, vm/vf),
isotopic abundance (e.g., uranium-235
(235U), plutonium-239 (239Pu), or overall
Pu-to-uranium ratio), and parameters
that characterize the neutron energy
spectrum (e.g., energy of average
lethargy causing fission (EALF), average
energy group (AEG)). Other parameters,
such as material density or overall
geometric shape, are generally
considered to be of less importance. The
reviewer should consider all important
system characteristics that can
reasonably be expected to affect the
bias. For example, the critical
experiments should include any
materials that can have an appreciable
effect on the calculated keff, so that the
effect due to the cross sections of those
materials is included in the bias.
Furthermore, these materials should
have at least the same reactivity worth
in the experiments (which may be
evidenced by having similar number
densities) as in the applications.
Otherwise, the effect of any bias from
the underlying cross sections or the
assumed material composition may be
masked in the applications. The
materials must be present in a
statistically significant number of
experiments having similar neutron
spectra to the application. Conversely,
materials that do not have an
appreciable effect on the bias may be
neglected and would not have to be
represented in the critical experiments.
Merely having critical experiments
that are representative of applications is
the minimum acceptance criterion, and
does not alone justify having any
particular value of the MMS. There are
some situations, however, in which
there is an unusually high degree of
similarity between the critical
experiments and applications, and in
these cases, this fact may be credited as
justification for having a smaller MMS
than would otherwise be acceptable. If
the critical experiments have geometric
forms, material compositions, and
neutron spectra that are nearly
indistinguishable from those of the
applications, this may be justification
for a smaller MMS than would
otherwise be acceptable. For example,
justification for having a small MMS for
finished fuel assemblies could include
selecting critical experiments consisting
of fuel assemblies in water, where the
fuel has nearly the same pellet diameter,
pellet density, cladding materials, pitch,
absorber content, enrichment, and
neutron energy spectrum as the
licensee’s fuel. In this case, the
validation should be very specific to
this type of system, because including
other types of critical experiments could
mask variations in the bias. Therefore,
this type of justification is generally
easiest when the area of applicability
(AOA) is very narrowly defined. The
reviewer should pay particular attention
to abnormal conditions. In this example,
changes in process conditions such as
damage to the fuel or partial flooding
may significantly affect the applicability
of the critical experiments.
There are several tools available to the
reviewer to ascertain the degree of
similarity between critical experiments
and applications. Some of these are
listed below:
1. NUREG/CR–6698, ‘‘Guide to
Validation of Nuclear Criticality Safety
Calculational Method,’’ Table 2.3,
contains a set of screening criteria for
determining the applicability of critical
experiments. As is stated in the NUREG,
these criteria were arrived at by
consensus among experienced nuclear
criticality safety specialists and may be
considered to be conservative. The
reviewer should consider agreement on
all screening criteria to be justification
for demonstrating a very high degree of
critical experiment similarity.
(Agreement on the most significant
screening criteria for a particular system
should be considered as demonstration
of an acceptable degree of critical
experiment similarity.) Less
conservative (i.e., broader) screening
criteria may also be acceptable, if
appropriately justified.
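A screen of this kind amounts to requiring agreement on every criterion at once. The sketch below shows the idea; the parameter names and tolerances are illustrative assumptions and are NOT the actual NUREG/CR–6698 Table 2.3 values.

```python
def within(tol):
    """Agreement test: two parameter values agree when within a tolerance."""
    return lambda e, a: abs(e - a) <= tol

def screening_passed(experiment, application, criteria):
    """Hypothetical screen in the spirit of NUREG/CR-6698 Table 2.3: the
    experiment is applicable only if every screening parameter agrees."""
    return all(check(experiment[p], application[p])
               for p, check in criteria.items())

# Illustrative parameters and tolerances -- not the published criteria.
criteria = {"enrichment_wt_pct": within(1.0), "ealf_ev": within(0.05)}
exp = {"enrichment_wt_pct": 4.9, "ealf_ev": 0.20}
app = {"enrichment_wt_pct": 5.0, "ealf_ev": 0.22}
print(screening_passed(exp, app, criteria))  # True
```

Broadening a tolerance admits more experiments at the cost of a weaker similarity claim, which is why broader screening criteria require additional justification.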
2. Analytical methods that
systematically quantify the degree of
similarity between a set of critical
experiments and applications in pairwise fashion may be used. One example
of this is the TSUNAMI code in the
SCALE 5 code package. One strength of
TSUNAMI is that it calculates an overall
correlation that is a quantitative
measure of the degree of similarity
between an experiment and an
application. Another strength is that this
code considers all the nuclear
phenomena and underlying cross
sections and weights them by their
importance to the calculated keff (i.e.,
sensitivity of keff to the data). The NRC
staff currently considers a correlation
coefficient of ck ≥ 0.95 to be indicative
of a very high degree of similarity. This
is based on the staff’s experience
comparing the results from TSUNAMI
to those from a more traditional
screening criterion approach. The NRC
staff also considers a correlation
coefficient between 0.90 and 0.95 to be
indicative of a high degree of similarity.
However, owing to the limited amount of
experience with TSUNAMI, use of the
code in this range should be
supplemented with other methods of
evaluating critical experiment
similarity. Conversely, a correlation
coefficient less than 0.90 should not be
used as a demonstration of a high or
very high degree of critical experiment
similarity. Because of limited use of the
code to date, all of these observations
should be considered tentative and thus
the reviewer should not use TSUNAMI
as a ‘‘black box,’’ or base conclusions of
adequacy solely on its use. However, it
may be used to test a licensee’s
statement that there is a high degree of
similarity between experiments and
applications.
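The staff positions on the ck correlation coefficient described above can be summarized as simple bins; the function below is only an illustrative restatement of those thresholds, not an NRC tool.

```python
def similarity_category(ck):
    """Bin a TSUNAMI ck correlation coefficient per the staff positions
    described above (the function name and structure are illustrative)."""
    if ck >= 0.95:
        return "very high"        # may help justify a smaller MMS
    if ck >= 0.90:
        return "high"             # supplement with other similarity evidence
    return "not demonstrated"     # do not rely on ck alone below 0.90

print(similarity_category(0.97))  # very high
```

In every bin, the reviewer should treat the ck value as one piece of evidence rather than a conclusion, consistent with the caution against using TSUNAMI as a "black box."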
3. Traditional parametric sensitivity
studies may be employed to
demonstrate that keff is highly sensitive
or insensitive to a particular parameter.
For example, if a 50% reduction in the
10B cross section is needed to produce
a 1% change in the system keff, then it
can be concluded that the system is
highly insensitive to the boron content,
in the amount present. This is because
a credible error in the 10B cross section
of a few percent will have a statistically
insignificant effect on the bias.
Therefore, in the amount present, the
boron content is not a parameter that is
important to match in order to conclude
that there is a high degree of similarity
between critical experiments and
applications.
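The 10B illustration above amounts to a simple sensitivity calculation; a minimal worked version follows (the 50% reduction and 1% change come from the text, the 3% "credible error" is illustrative):

```python
# Worked arithmetic for the 10B example above: a 50% cross-section
# reduction was needed to produce a 1% change in keff.
delta_k_rel = 0.01        # observed fractional change in keff
delta_sigma_rel = 0.50    # imposed fractional change in the 10B cross section

# Relative sensitivity S = (dk/k) / (dsigma/sigma)
sensitivity = delta_k_rel / delta_sigma_rel   # = 0.02

# A credible few-percent (here 3%) error in the 10B data then shifts keff by
delta_k = sensitivity * 0.03                  # = 0.0006 in keff, negligible
print(sensitivity, delta_k)
```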
4. Physical arguments may
demonstrate that keff is highly sensitive
or insensitive to a particular parameter.
For example, the fact that oxygen and
fluorine are almost transparent to
thermal neutrons (i.e., cross sections are
very low) may justify why experiments
consisting of UO2F2 may be considered
similar to UO2 or UF4 applications,
provided that both experiments and
applications occur in the thermal energy
range.
The reviewer should ensure that all
parameters which can measurably affect
the bias are considered when assessing
critical experiment similarity. For
example, comparison should not be
based solely on agreement in the 235U
fission spectrum for systems in which
the system keff is highly sensitive to
238U fission, 10B absorption, or 1H
scattering. A method such as TSUNAMI
that considers the complete set of
reactions and nuclides present can be
used to rank the various system
sensitivities, and to thus determine
whether it is reasonable to rely on the
fission spectrum alone in assessing the
similarity of critical experiments to
applications.
Some questions that the reviewer may
ask in evaluating reliance on critical
experiment similarity as justification for
the MMS include:
• Do the critical experiments
adequately span the range of geometric
forms, material compositions, and
neutron energy spectra expected in
applications?
• Are the materials present with at
least the same reactivity worth as in
applications?
• Do the licensee’s criteria for
determining whether experiments are
sufficiently similar to applications
consider all nuclear reactions and
nuclides that can have a statistically
significant effect on the bias?
Sufficiency of the Data
Another aspect of evaluating the
selected critical experiments for a
specific MMS is evaluating whether
there is a sufficient number of
benchmark-quality experiments to
determine the bias across the entire
AOA. Having a sufficient number of
benchmark-quality experiments means
that: (1) There are enough (applicable)
critical experiments to make a
statistically meaningful calculation of
the bias and its uncertainty; (2) the
experiments somewhat evenly span the
entire range of all the important
parameters, without gaps requiring
extrapolation or wide interpolation; and
(3) the experiments are, preferably,
benchmark-quality experiments. The
number of critical experiments needed
is dependent on the statistical method
used to analyze the data. For example,
some methods require a minimum
number of data points to reliably
determine whether the data are
normally distributed. Merely having a
large number of experiments is not
sufficient to provide confidence in the
validation result, if the experiments are
not applicable to the application. The
reviewer should particularly examine
whether consideration of only the most
applicable experiments would result in
a larger negative bias (and thus a lower
USL) than that determined based on the
full set of experiments. The experiments
should also ideally be sufficiently well-characterized (including experimental
parameters and their uncertainties) to be
considered benchmark experiments.
They should be drawn from established
sources (such as from the International
Handbook of Evaluated Criticality
Safety Benchmark Experiments
(IHECSBE), laboratory reports, or peer-reviewed journals). For some
applications, benchmark-quality
experiments may not be available; when
necessary, critical experiments that do
not rise to the level of benchmark-quality experiments may be used.
However, the reviewer should take this
into consideration and should evaluate
the need for additional margin.
Some questions that the reviewer may
ask in evaluating the number and
quality of critical experiments as
justification for the MMS include:
• Are the critical experiments chosen
all high-quality benchmarks from
reliable (e.g., peer-reviewed and widely-accepted) sources?
• Are the critical experiments chosen
taken from multiple independent
sources, to minimize the possibility of
systematic errors?
• Have the experimental uncertainties
associated with the critical experiments
been provided and used in calculating
the bias and bias uncertainty?
• Is the number and distribution of
critical experiments sufficient to
establish trends in the bias across the
entire range of parameters?
• Is the number of critical
experiments commensurate with the
statistical methodology being used?
Federal Register / Vol. 71, No. 53 / Monday, March 20, 2006 / Notices
Validation Methodological Rigor
Having a sufficiently rigorous
validation methodology means having a
methodology that is appropriate for the
number and distribution of critical
experiments, that calculates the bias and
its uncertainty using an established
statistical methodology, that accounts
for any trends in the bias, and that
accounts for all apparent sources of
uncertainty in the bias (e.g., the increase
in uncertainty due to extrapolating the
bias beyond the range covered by the
experimental data). Examples of
deficiencies in the validation
methodology may include: (1) Using a
statistical methodology relying on the
data being normally distributed about
the mean keff to analyze data that are not
normally distributed; (2) using a linear
regression fit on data that has a nonlinear dependence on a trending
parameter; (3) use of a single pooled
bias when very different types of critical
experiments are being evaluated in the
same validation. These deficiencies
serve to decrease confidence in the
validation results and may warrant
additional margin (i.e., a larger MMS).
Additional guidance on some of the
more commonly observed deficiencies
is provided below.
The assumption that the data are normally
distributed is generally valid, unless
there is a strong trend in the data or
different types of critical experiments
with different mean calculated keff
values are being combined. Tests for
normality require a minimum number of
critical experiments to attain a specified
confidence level (generally 95%). If
there is insufficient data to verify that
the data are normally distributed, or the
data are shown to be not normally
distributed, a non-parametric technique
should be used to analyze the data.
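As a sketch of that decision point, one widely used normality test is Shapiro-Wilk (the choice of test, the sample size, and the 95% level here are illustrative, not requirements of this ISG):

```python
import numpy as np
from scipy import stats

# Illustrative set of calculated keff values from a validation suite.
rng = np.random.default_rng(42)
keff = rng.normal(loc=0.999, scale=0.002, size=30)

stat, p = stats.shapiro(keff)   # Shapiro-Wilk test for normality
if p >= 0.05:
    print("no evidence against normality; parametric limits may be used")
else:
    print("normality rejected; use a non-parametric technique instead")
```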
The critical experiments chosen
should ideally provide a continuum of
data across the entire validated range, so
that any variation in the bias as a
function of important system parameters
may be observed. The presence of
discrete clusters of experiments having
a calculated keff lower than the set of
critical experiments as a whole should
be examined closely to determine if
there is some systematic effect common
to a particular type of calculation that
makes use of the overall bias nonconservative. Because the bias can vary
with system parameters, if the licensee
has combined different subsets of data
(e.g., solutions and powders, low- and
high-enriched, homogeneous and
heterogeneous), the bias for the different
subsets should be analyzed. In addition,
the goodness-of-fit for any function used
to trend the bias should be examined to
ensure it is appropriate to the data being
analyzed.
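A goodness-of-fit check on a trended bias can be as simple as examining r² and the residuals of the fitted function; a sketch with invented data:

```python
import numpy as np
from scipy import stats

# Trending parameter (e.g., H/X) and calculated keff for each experiment;
# numbers are invented for illustration.
h_to_x = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0])
keff = np.array([0.9990, 0.9986, 0.9981, 0.9978, 0.9974, 0.9969])

fit = stats.linregress(h_to_x, keff)
residuals = keff - (fit.intercept + fit.slope * h_to_x)

print(f"slope = {fit.slope:.2e}, r^2 = {fit.rvalue**2:.3f}")
# A low r^2, or visible curvature in the residuals, indicates that a
# linear trend is not appropriate to the data being analyzed.
```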
If critical experiments do not cover
the entire range of parameters needed to
cover anticipated applications, it may be
necessary to extend the AOA by making
use of trends in the bias. Any
extrapolation (or wide interpolation) of
the data should be done by means of an
established mathematical methodology
that takes into account the functional
form of both the bias and its
uncertainty. The extrapolation should
not be based on judgement alone, such
as by observing that the bias is
increasing in the extrapolated range,
because this may not account for the
increase in the bias uncertainty that will
occur with increasing extrapolation. The
reviewer should independently confirm
that the derived bias is valid in the
extrapolated range and should ensure
that the extrapolation is not large.
NUREG/CR–6698 states that critical
experiments should be added if the data
must be extrapolated more than 10%.
There is no corresponding guidance
given for interpolation; however, if the
gap represents a significant fraction of
the total range of the data, then the
reviewer should question whether
interpolation is reasonable. The
reviewer should consider, for instance,
how rapidly the underlying physics or
neutronic behavior is changing in the
vicinity of the gap (e.g., if interpolation
in H/X is required, is the system fully
thermalized, or is the spectrum
changing from a fast-to-thermal
spectrum over the gap?) In general, if
the extrapolation or interpolation is too
large, new factors that could affect the
bias may be introduced as the physical
phenomena in the system change. The
reviewer should not view validation as
a purely mathematical exercise, but
should bear in mind the neutron
physics and underlying physical
phenomena when interpreting the
results.
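The 10% rule of thumb quoted from NUREG/CR-6698 can be expressed as a simple check on the trending parameter (all numbers invented):

```python
# Flag extrapolation of the bias trend beyond 10% of the data range.
exp_min, exp_max = 100.0, 500.0   # experimental range of the trending parameter
application = 560.0               # value required by the application

data_range = exp_max - exp_min
extrapolation = max(0.0, application - exp_max, exp_min - application)
fraction = extrapolation / data_range

print(f"extrapolation = {fraction:.0%} of the data range")
if fraction > 0.10:
    print("more than 10%: critical experiments should be added")
```

No such numeric rule exists for interpolation; as the text notes, the reviewer must judge whether the underlying physics changes significantly across the gap.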
Discarding an unusually large number
of critical experiments as outliers (i.e.,
more than 1–2%) should also be viewed
with some concern. Apparent outliers
should not be discarded based purely
upon judgement or statistical grounds
(such as causing the data to fail tests for
normality), because they could be
providing valuable information on the
method’s validity for a particular
application. The reviewer should verify
that there are specific defensible
reasons, such as reported
inconsistencies in the experimental
data, for discarding any outliers. If any
of the critical experiments from a
particular data set are discarded, the
reviewer should examine other
experiments included to determine
whether they may be subject to the same
systematic errors. Outliers should be
examined carefully especially when
they have a lower calculated keff than
the other experiments included.
NUREG–1520 states that the MoS
should be large compared to the
uncertainty in the bias. The observed
spread of the data about the mean keff
should be examined as an indicator of
the overall precision of the calculational
method. The reviewer should ascertain
whether the statistical method of
validation considers both the observed
spread in the data and the experimental
and calculational uncertainty in
determining the USL. The reviewer
should also evaluate whether the
observed spread in the data is consistent
with the reported uncertainty (e.g.,
whether χ²/N ≈ 1). If the spread in the
data is larger than, or comparable to, the
MMS, then the reviewer should
consider whether additional margin
(i.e., a larger MMS) is needed.
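The χ²-type consistency check mentioned above compares the observed scatter of the calculated keff values with the reported uncertainties; a minimal sketch with invented data:

```python
import numpy as np

# Calculated keff values and their reported one-sigma uncertainties.
keff = np.array([0.9985, 0.9991, 0.9978, 0.9994, 0.9982, 0.9989])
sigma = np.array([0.0008, 0.0009, 0.0007, 0.0010, 0.0008, 0.0009])

k_bar = np.mean(keff)
chi2_per_n = np.mean(((keff - k_bar) / sigma) ** 2)

print(f"chi^2/N = {chi2_per_n:.2f}")
# Values much greater than 1 mean the spread exceeds the reported
# uncertainties, pointing to an unaccounted-for source of scatter.
```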
As a final test of the code’s accuracy,
the bias should be relatively small (i.e.,
bias ≤ 2 percent), or else the reason for
the bias should be determined. No
credit should be taken for positive bias,
because this would result in making
changes in a non-conservative direction
without having a clear understanding of
those changes. If the absolute value of
the bias is very large—and especially if
the reason for the large bias cannot be
determined—this may indicate that the
calculational method is not very
accurate, and a larger MMS may be
appropriate.
Some questions that the reviewer may
ask in evaluating the rigor of the
validation methodology as justification
for the MMS include:
• Are the results from use of the
methodology consistent with the data
(e.g., normally distributed)?
• Is the normality of the data
confirmed prior to performing statistical
calculations? If the data do not pass
the tests for normality, is a non-parametric method used?
• Does the assumed functional form
of the bias represent a good fit to the
critical experiments? Is a goodness-of-fit
test performed?
• Does the method determine a
pooled bias across disparate types of
critical experiments, or does it consider
variations in the bias for different types
of experiments? Are there discrete
clusters of experiments for which the
bias appears to be non-conservative?
• Has additional margin been applied
to account for extrapolation or wide
interpolation? Is this done based on an
established mathematical methodology?
• Have critical experiments been
discarded as apparent outliers? Is there
a valid reason for doing so?
Performing an adequate code
validation is not by itself sufficient
justification for any specific MMS. The
reason for this is that the validation
analysis determines the bias and its
uncertainty, but not the MMS. The
MMS is added after the validation has
been performed to provide added
assurance of subcriticality. However,
having a validation methodology that
either exceeds or falls short of accepted
practices for validation may be a basis
for either reducing or increasing the
MMS.
Statistical Conservatism
In addition to having conservatism in
keff due to modeling practices, licensees
may also provide conservatism in the
statistical methods used to calculate the
USL. For example, NUREG/CR–6698
states that an acceptable method for
calculating the bias is to use the single-sided tolerance limit approach with a
95/95 confidence (i.e., 95% confidence
that 95% of all future critical
calculations will lie above the USL). If
the licensee decides to use the single-sided tolerance limit approach with a
95/99.9 confidence, this would result in
a more conservative USL than with a
95/95 confidence. This would be true of
other methods for which the licensee’s
confidence criteria exceed the minimum
accepted criteria. Generally, the NRC
has accepted 95% confidence levels for
validation results, so using more
stringent confidence levels may provide
conservatism. In addition, there may be
other reasons a larger bias and/or bias
uncertainty than necessary has been
used (e.g., because of the inclusion of
inapplicable critical experiments that
have a lower calculated keff).
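The effect of tightening the tolerance criteria can be quantified: the one-sided normal tolerance factor has an exact expression in terms of the noncentral t distribution (a standard statistical result, not a USLSTATS interface; the sample size here is illustrative):

```python
import numpy as np
from scipy import stats

def one_sided_tolerance_factor(n, coverage, confidence):
    """Exact one-sided normal tolerance factor k: with the stated
    confidence, at least `coverage` of the population lies above
    (mean - k * s)."""
    z_p = stats.norm.ppf(coverage)
    return stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

n = 30  # number of critical experiments (illustrative)
k_95_95 = one_sided_tolerance_factor(n, 0.95, 0.95)
k_999_95 = one_sided_tolerance_factor(n, 0.999, 0.95)

print(f"95/95 factor = {k_95_95:.2f}, 99.9/95 factor = {k_999_95:.2f}")
# The higher-coverage factor is larger, so it produces a lower (more
# conservative) USL, as described above.
```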
The reviewer may credit this
conservatism towards having an
adequate MoS if: (1) The licensee
demonstrates that this translates into a
specific dkeff; and (2) the licensee
demonstrates that the margin will be
dependably present, based on license or
other commitments.
(3) Additional Risk-Informed
Considerations
Besides modeling conservatism and
the validation results, other factors may
provide added assurance of
subcriticality. These factors should be
considered in evaluating whether there
is adequate MoS and are discussed
below.
System Sensitivity and Uncertainty
The sensitivity of keff to changes in
system parameters can be used to assess
the potential effect of errors on the
calculation of keff. If the calculated keff
is especially sensitive to a given
parameter, an error in that parameter
could have a correspondingly large
contribution to the bias. Conversely, if
keff is very insensitive to a given
parameter, then an error may have a
negligible effect on the bias. This is of
particular importance when assessing
whether the chosen critical experiments
are sufficiently similar to applications to
justify a small MMS.
The reviewer should not consider the
sensitivity in isolation, but should also
consider the magnitude of uncertainties
in the parameters. If keff is very sensitive
to a given parameter, but the value of
that parameter is known with very high
accuracy (and its variations are well-controlled), the potential contribution to
the bias may still be very small. Thus,
the contribution to the bias is a function
of the product of the keff sensitivity
with the uncertainty. To illustrate this,
suppose that keff is a function of a large
number of variables, x1, x2, …, xN.
Then the uncertainty in keff may be
expressed as follows, if all the
individual terms are independent:

(Δkeff)² = Σi (∂k/∂xi)² (Δxi)²

where the partial derivatives ∂k/∂xi are
proportional to the sensitivity and the
terms Δxi represent the uncertainties,
or likely variations, in the parameters.
(If the variables are not all
independent, then there may be
additional cross terms.) Each term in
this equation then represents the
contribution to the overall uncertainty
in keff.
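The propagation formula above can be implemented directly; the sensitivities and uncertainties below are invented for illustration:

```python
import numpy as np

# (delta_keff)^2 = sum_i (dk/dx_i)^2 * (delta_x_i)^2, valid when the
# parameters x_i are independent.
dk_dx = np.array([0.020, 0.150, 0.005])   # partial derivatives dk/dx_i
dx = np.array([0.10, 0.02, 0.50])         # uncertainties delta_x_i

terms = (dk_dx * dx) ** 2                 # per-parameter contributions
delta_keff = np.sqrt(terms.sum())

print(f"delta_keff = {delta_keff:.4f}")
# The largest term identifies the parameter that dominates the overall
# uncertainty in keff.
```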
There are several tools available to the
reviewer to ascertain the sensitivity of
keff to changes in the underlying
parameters. Some of these are listed
below:
1. Analytical tools that calculate the
sensitivity for each nuclide-reaction pair
present in the problem may be used.
One example of this is the TSUNAMI
code in the SCALE 5 code package.
TSUNAMI calculates both an integral
sensitivity coefficient (i.e., summed over
all energy groups) and a sensitivity
profile as a function of energy group.
The reviewer should recognize that
TSUNAMI only calculates the keff
sensitivity to changes in the underlying
nuclear data, and not to other
parameters that could affect the bias and
should be considered. (See section on
Critical Experiment Similarity for
caveats about using TSUNAMI.)
2. Direct sensitivity calculations may
be used, in which system parameters are
perturbed and the resulting impact on
keff determined. Perturbation of atomic
number densities can also be used to
confirm the sensitivity calculated by
other methods (e.g., TSUNAMI). Such
techniques are not limited to
considering the effect of the nuclear
data.
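A direct sensitivity calculation of this kind might be scripted as below; `run_keff` is a hypothetical stand-in for an actual criticality code, with a toy linear model substituted so the sketch is self-contained:

```python
def run_keff(boron_ppm):
    """Hypothetical placeholder for a criticality calculation; a real
    study would invoke the actual code with a perturbed input model."""
    return 0.95 - 2.0e-5 * boron_ppm   # toy model: keff falls with boron

nominal = 500.0            # ppm boron (illustrative)
delta = 0.01 * nominal     # 1% perturbation

k0 = run_keff(nominal)
k1 = run_keff(nominal + delta)
sensitivity = ((k1 - k0) / k0) / (delta / nominal)   # (dk/k)/(dx/x)

print(f"relative sensitivity of keff to boron content: {sensitivity:.4f}")
```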
There are also several sources
available to the reviewer to ascertain the
uncertainty associated with the
underlying parameters. For process
parameters, these sources of uncertainty
may include manufacturing tolerances,
quality assurance records, and
experimental and/or measurement
results. For nuclear data parameters,
these sources of uncertainty may
include published data, uncertainty data
distributed with the cross section
libraries, or the covariance data used in
methods such as TSUNAMI.
Some systems are inherently more
sensitive to changes in the underlying
parameters than others. For example,
high-enriched uranium systems
typically exhibit a greater sensitivity to
changes in system parameters (e.g.,
mass, moderation) than low-enriched
systems. This has been the reason that
HEU (i.e., >20 wt% 235U) facilities have
been licensed with larger MMS values
than LEU (≤10 wt% 235U) facilities. This
greater sensitivity would also be true of
weapons-grade Pu compared to low-assay
mixed oxides (i.e., with a few
percent Pu/U). However, it is also true
that the uncertainties associated with
measurement of the 235U cross sections
are much smaller than those associated
with measurement of the 238U cross
sections. Both the greater sensitivity and
smaller uncertainty would need to be
considered in evaluating whether a
larger MMS is needed for high-enriched
systems.
Frequently, operating limits that are
more conservative than safety limits
determined using keff calculations are
established to prevent those safety
limits from being exceeded. For systems
in which keff is very sensitive to the
system parameters, more margin
between the operating and safety limits
may be needed. Systems in which keff is
very sensitive to the process parameters
may need both a larger margin between
operating and safety limits and a larger
MMS. This is because the system is
sensitive to any change, whether it be
caused by normal process variations or
caused by unknown errors. Because of
this, the assumption is often made that
the MMS is meant to account for
variations in the process or the ability
to control the process parameters.
However, the MMS is meant only to
allow for unknown (or difficult to
quantify) uncertainties in the
calculation of keff. The reviewer should
recognize that determination of an
appropriate MMS is not dependent on
the ability to control process parameters
within safety limits (although both may
depend on the system sensitivity).
Some questions that the reviewer may
ask in evaluating the system sensitivity
as justification for the MMS include:
• How sensitive is keff to changes in
the underlying nuclear data (e.g., cross
sections)?
• How sensitive is keff to changes in
the geometric form and material
composition?
• Are the uncertainties associated
with these underlying parameters well-known?
• How does the MMS compare to the
expected magnitude of changes in keff
resulting from uncertainties in these
underlying parameters?
Knowledge of the Neutron Physics
Another important consideration that
may affect the appropriate MMS is the
extent to which the physical behavior of
the system is known. Fissile systems
which are known to be subcritical with
a high degree of confidence do not
require as much MMS as systems where
subcriticality is less certain. An example
of a system known to be subcritical with
high confidence is a light-water reactor
fuel assembly. The design of these
systems is such that they can only be
made critical when highly thermalized.
Due to extensive analysis and reactor
experience, the flooded isolated
assembly is known to be subcritical. In
addition, the thermal neutron cross
sections for materials in finished reactor
fuel have been measured with a very
high degree of accuracy (as opposed to
cross sections in the resonance region).
Other examples of systems in which
there is independent corroborating
evidence of subcriticality may include
systems consisting of very simple
geometric shapes, or other idealized
situations, in which there is strong
evidence that the system is subcritical
based on comparison with highly
similar systems in published sources
(e.g., standards and handbooks). In these
cases, the MMS may be significantly
reduced due to the fact that the
calculation of keff is not relied on alone
to provide assurance of subcriticality.
Reliance on independent knowledge
that a given system is subcritical
necessarily requires that the
configuration of the system be fixed. If
the configuration can change from the
reference case, there will be less
knowledge about the behavior of the
changed system. For example, a finished
fuel assembly is subject to strict quality
assurance checks and would not reach
final processing if it were outside
specifications. In addition, it has a form
that has both been extensively studied
and is highly stable. For these reasons,
there is a great deal of certainty that this
system is well-characterized and is not
subject to change. A typical solution or
powder system (other than one with a
simple geometric arrangement) would
not have been studied with the same
level of rigor as a finished fuel
assembly. Even if they were studied
with the same level of rigor, these
systems have forms that are subject to
change into forms whose neutron
physics has not been as extensively
studied.
Some questions that the reviewer may
ask in evaluating the knowledge of the
neutron physics as justification for the
MMS include:
• Is the geometric form and material
composition of the system fixed and
very unlikely to change?
• Is the geometric form and material
composition of the system subject to
strict quality assurance, such that
tolerances have been bounded?
• Has the system been extensively
studied in the nuclear industry and
shown to be subcritical (e.g., in reactor
fuel studies)?
• Are there other reasons besides
criticality calculations to conclude that
the system will be subcritical (e.g.,
handbooks, standards, published data)?
• How well-known is the nuclear data
(e.g., cross sections) in the energy range
of interest?
Likelihood of the Abnormal Condition
Some facilities have been licensed
with different sets of keff limits for
normal and abnormal conditions.
Separate keff limits for normal and
abnormal conditions are permissible,
but are not required. There is some low
likelihood that processes calculated to
be subcritical will, in fact, be critical,
and this likelihood increases as the
MMS is reduced (though it cannot in
general be quantified). NUREG–1718,
‘‘Standard Review Plan for the Review
of an Application for a Mixed Oxide
(MOX) Fuel Fabrication Facility,’’ states
that abnormal conditions should be at
least unlikely from the standpoint of the
double contingency principle. Then, a
somewhat higher likelihood that a
system calculated to be subcritical is, in
fact, critical is more permissible for
abnormal conditions than for normal
conditions, because of the low
likelihood of the abnormal condition
being realized. The reviewer should
verify that the licensee has defined
abnormal conditions such that
achieving the abnormal condition
requires at least one contingency to have
occurred, that the system will be closely
monitored so that it is promptly
detected, and that it will be promptly
corrected upon detection. Also, there is
generally more conservatism present in
the abnormal case, because the
parameters that are assumed to have
failed are analyzed at their worst-case
credible condition.
The increased risk associated with
having a smaller MMS for abnormal
conditions should be commensurate
with, and offset by, the low likelihood
of achieving the abnormal condition.
That is, if the normal case keff limit is
judged to be acceptable, then the
abnormal case limit will also be
acceptable, provided the increased
likelihood (that a system calculated to
be subcritical will be critical) is offset by
the reduced likelihood of realizing the
abnormal condition because of the
controls that have been established.
Note that if two or more contingencies
must occur to reach a given condition,
there is no requirement to ensure that
the resulting condition is subcritical. If
a single keff limit is used (i.e., no credit
for unlikelihood of the abnormal
condition), then the limit must be found
acceptable to cover both normal and
credible abnormal conditions. The
reviewer should always make this
finding considering specific conditions
and controls in the process(es) being
evaluated.
(4) Statistical Justification for the MMS
The NRC does not consider statistical
justification an appropriate basis for a
specific MMS. Previously, some
licensees have attempted to justify
specific MMS values based on a
comparison of two statistical methods.
For example, the USLSTATS code
issued with the SCALE code package
contains two methods for calculating
the USL: (1) The Confidence Band with
Administrative Margin approach
(calculating USL–1), and (2) the Lower
Tolerance Band approach (calculating
USL–2). The value of the MMS is an
input parameter to the Confidence Band
approach, but is not included explicitly
in the Lower Tolerance Band approach.
In this particular justification, adequacy
of the MMS is based on a comparison
of USL–1 and USL–2 (i.e., the condition
that USL–1, including the chosen MMS,
is less than USL–2). However, the
reviewer should not accept this
justification.
The condition that USL–1 (with the
chosen MMS) is less than USL–2 is
necessary, but is not sufficient, to show
that an adequate MMS has been used.
These methods are both statistical
methods, and a comparison can only
demonstrate whether the MMS is
sufficient to bound any statistical
uncertainties included in the Lower
Tolerance Band approach but not
included in the Confidence Band
approach. There may be other statistical
or systematic errors in calculating keff
that are not included in either statistical
treatment. Because of this, an MMS
value should be specified regardless of
the statistical method used. Therefore,
the reviewer should not consider such
a statistical approach an acceptable
justification for any specific value of the
MMS.
(5) Summary
Based on a review of the licensee’s
justification for its chosen MMS, taking
into consideration the aforementioned
factors, the staff should make a
determination as to whether the chosen
MMS provides reasonable assurance of
subcriticality under normal and credible
abnormal conditions. The staff’s review
should be risk-informed, in that the
review should be commensurate with
the MoS and should consider the
specific facility and process
characteristics, as well as the specific
modeling practices used. As an
example, approving an MMS value
greater than 0.05 for processes typically
encountered in enrichment and fuel
fabrication facilities should require only
a cursory review, provided that an
acceptable validation has been
performed and modeling practices at
least as conservative as those in
NUREG–1520 have been utilized. The
approval of a smaller MMS will require
a somewhat more detailed review,
commensurate with the MMS that is
requested. However, the MMS should
not be reduced below 0.02 due to
inherent uncertainties in the cross
section data and the magnitude of code
errors that have been discovered.
Quantitative arguments (such as
modeling conservatism) should be used
to the extent practical. However, in
many instances, the reviewer will need
to make a judgement based at least
partly on qualitative arguments. The
staff should document the basis for
finding the chosen MMS value to be
acceptable or unacceptable in the Safety
Evaluation Report (SER), and should
ensure that any factors upon which this
determination rests are ensured to be
present over the facility lifetime (e.g.,
through license commitment or
condition).
Regulatory Basis
In addition to complying with
paragraphs (b) and (c) of this section,
the risk of nuclear criticality accidents
must be limited by assuring that under
normal and credible abnormal
conditions, all nuclear processes are
subcritical, including use of an
approved margin of subcriticality for
safety. [10 CFR 70.61(d)]
Technical Review Guidance
Determination of an adequate MMS is
strongly dependent upon specific
processes, conditions, and calculational
practices at the facility being licensed.
Judgement and experience must be
employed in evaluating the adequacy of
the proposed MMS. In the past, an MMS
of 0.05 has generally been found
acceptable for most typical low-enriched fuel cycle facilities without a
detailed technical justification. A
smaller MMS may be acceptable but
will require some level of technical
review. (No specific guidance on the
appropriate MMS for other types of
facilities, such as high-enriched or
plutonium fuel cycle facilities, is
provided. Rather, the MMS for these
facilities should be evaluated on a case-by-case basis using the criteria in this
ISG; an example of the consideration of
sensitivity and uncertainty for high-enriched uranium is given in the section
on ‘‘System Sensitivity and
Uncertainty.’’) Also, for reasons stated
previously, the MMS should not be
reduced below 0.02.
An MMS of 0.05 should be found
acceptable for low-enriched fuel cycle
processes and facilities if:
1. A validation has been performed
that meets accepted industry guidelines
(e.g., meets the requirements of ANSI/
ANS–8.1–1998, NUREG/CR–6361, and/
or NUREG/CR–6698).
2. There is an acceptable number of
critical experiments with similar
geometric forms, material compositions,
and neutron energy spectra to
applications. These experiments cover
the range of parameters of applications,
or else margin is provided to account for
extensions to the AOA.
3. The processes to be evaluated
include materials and process
conditions similar to those that occur in
low-enriched fuel cycle applications
(i.e., no new fissile materials, unusual
moderators or absorbers, or technologies
new to the industry that can affect the
types of systems to be modeled).
The reviewer should consider any
factors, including those enumerated in
the discussion above, that could result
in applying additional margin (i.e., a
larger MMS) or may justify reducing the
MMS. The reviewer must then exercise
judgement in arriving at an MMS that
provides for adequate assurance of
subcriticality.
Some of the factors that may serve to
justify reducing the MMS include:
1. There is a predictable and
dependable amount of conservatism in
modeling practices, in terms of keff, that
is assured to be maintained (in both
normal and abnormal conditions) over
the facility lifetime.
2. Critical experiments have nearly
identical geometric forms, material
compositions, and neutron energy
spectra to applications, and the
validation is specific to this type of
application.
3. The validation methodology
substantially exceeds accepted industry
guidelines (e.g., it uses a very
conservative statistical approach,
considers an unusually large number of
trending parameters, or analyzes the
bias for a large number of subgroups of
critical experiments).
4. The system keff is demonstrably
much less sensitive to uncertainties in
cross sections or variations in other
system parameters than typical low-enriched fuel cycle processes.
5. There is reliable information
besides results of calculations that
provides assurance that the evaluated
applications will be subcritical (e.g.,
experimental data, historical evidence,
industry standards or widely-accepted
handbooks).
6. The MMS is only applied to
abnormal conditions, which are at least
unlikely to be achieved, based on
credited controls.
Some of the factors that may
necessitate increasing (or not approving)
the MMS include:
1. The technical practices employed
by the licensee are less conservative
than standard industry modeling
practices (e.g., do not adequately bound
reflection or the full range of credible
moderation, do not take geometric
tolerances into account).
2. There are few similar critical
experiments of benchmark quality that
cover the range of parameters of
applications.
3. The validation methodology
substantially falls below accepted
industry guidelines (e.g., it uses less
than a 95% confidence in the statistical
approach, fails to consider trends in the
bias, fails to account for extensions to
the AOA).
4. The validation results otherwise
tend to cast doubt on the accuracy of the
bias and its uncertainty (i.e., the critical
experiments are not normally
distributed, there is a large number of
outliers discarded (≥2%), there are
distinct subgroups of experiments with
lower keff than the experiments as a
whole, trending fits do not pass
goodness-of-fit tests, etc.).
5. The system keff is demonstrably
much more sensitive to uncertainties in
cross sections or other system
parameters than typical low-enriched
fuel cycle processes.
6. There is reliable information that
casts doubt on the results of the
calculational method or the
subcriticality of evaluated applications
(e.g., experimental data, reported
concerns with the nuclear data).
The purpose of asking the questions
in the individual discussion sections is
to ascertain the degree to which these
factors either provide justification for
reducing the MMS or necessitate
increasing the MMS. These lists are not
all-inclusive, and any other technical
information that demonstrates the
degree of confidence in the calculational
method should be considered.
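Two of the quantitative red flags listed above (a discarded-outlier fraction of 2% or more, and trends in the bias) lend themselves to simple mechanical checks. The following Python sketch illustrates both with hypothetical data; the function names and all numerical values are illustrative only and are not drawn from the ISG or any actual validation.

```python
# Illustrative screens for two of the validation red flags discussed above:
# (a) the fraction of critical experiments discarded as outliers, and
# (b) a linear trend of calculated keff against a system parameter.
# All numbers are hypothetical.
from statistics import mean

def outlier_fraction(n_discarded, n_total):
    """Fraction of benchmark experiments discarded as outliers."""
    return n_discarded / n_total

def trend_slope(x, y):
    """Ordinary least-squares slope of keff versus a trending parameter."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# (a) 2 of 150 experiments discarded: about 1.3%, below the 2% threshold noted above.
frac = outlier_fraction(2, 150)
print(frac >= 0.02)  # False

# (b) Hypothetical keff versus a moderation parameter; a slope clearly different
# from zero indicates a trend in the bias that the validation must address.
param = [100, 200, 300, 400, 500]
keff = [0.9990, 0.9984, 0.9978, 0.9971, 0.9966]
print(f"slope = {trend_slope(param, keff):.2e}")  # negative: keff falls with moderation
```

A licensing-grade validation would go further (goodness-of-fit tests on the trend, normality tests on the residuals), but even these simple screens make the red flags concrete.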
Recommendation
The guidance in this ISG should
supplement the current guidance in the
nuclear criticality safety chapters of the
fuel facility SRPs (NUREG–1520 and
–1718). However, NUREG–1718, Section
6.4.3.3.4, states that the licensee should
submit justification for the MMS, but
then states that an MMS of 0.05 is
‘‘generally considered to be acceptable
without additional justification when
both the bias and its uncertainty are
determined to be negligible.’’ These two
statements are inconsistent. Therefore,
NUREG–1718, Section 6.4.3.3.4, should
be revised to remove the following
sentence:
‘‘A minimum subcritical margin of
0.05 is generally considered to be
acceptable without additional
justification when both the bias and its
uncertainty are determined to be
negligible.’’
References
ANSI/ANS–8.1–1998, ‘‘Nuclear Criticality
Safety in Operations with Fissionable
Materials Outside Reactors,’’ American
Nuclear Society.
ANSI/ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR [Light Water
Reactor] Fuel Outside Reactors,’’ American
Nuclear Society.
‘‘International Handbook of Evaluated
Criticality Safety Experiments,’’ NEA/NSC/
DOC (95) 03, Nuclear Energy Agency,
Organization for Economic Co-operation
and Development, 2003.
IN 2005–13, ‘‘Potential Non-Conservative
Error in Modeling Geometric Regions in
the KENO–V.a Criticality Code,’’ May 17,
2005.
U.S. Nuclear Regulatory Commission (U.S.)
(NRC). NUREG–1520, ‘‘Standard Review
Plan for the Review of a License
Application for a Fuel Cycle Facility.’’
NRC: Washington, DC March 2002.
U.S. Nuclear Regulatory Commission (U.S.)
(NRC). NUREG–1718, ‘‘Standard Review
Plan for the Review of an Application for
a Mixed Oxide (MOX) Fuel Fabrication
Facility.’’ NRC: Washington, DC August
2000.
U.S. Nuclear Regulatory Commission (U.S.)
(NRC). NUREG/CR–6698, ‘‘Guide for
Validation of Nuclear Criticality Safety
Calculational Methodology.’’ NRC:
Washington, DC January 2001.
U.S. Nuclear Regulatory Commission (U.S.)
(NRC). NUREG/CR–6361, ‘‘Criticality
Benchmark Guide for Light-Water-Reactor
Fuel in Transportation and Storage
Packages.’’ NRC: Washington, DC March
1997.
Approved: _______________
Date: _______________
Director, Division of Fuel Cycle Safety and
Safeguards, NMSS.
Appendix A—ANSI/ANS–8.17
Calculation of Maximum keff
ANSI/ANS–8.17–2004, ‘‘Criticality Safety
Criteria for the Handling, Storage, and
Transportation of LWR Fuel Outside
Reactors,’’ contains a detailed discussion of
the various factors that should be considered
in setting keff limits. This is consistent with,
but more detailed than, the discussion in
ANSI/ANS–8.1–1998.
The subcriticality criterion from Section
5.1 of ANSI/ANS–8.17–2004 is:
ks ≤ kc − Δks − Δkc − Δkm
Where ks is the calculated keff corresponding
to the application, Δks is its uncertainty,
kc is the mean keff resulting from the
calculation of critical experiments, Δkc is
its uncertainty, and Δkm is the MMS. The
types of uncertainties included in each
of these ‘‘delta’’ terms are provided, and
include the following:
Δks = (1) statistical uncertainties in
computing ks; (2) convergence
uncertainties in computing ks; (3)
material tolerances; (4) fabrication
tolerances; (5) uncertainties due to
limitations in the geometric
representation used in the method; and
(6) uncertainties due to limitations in the
material representations used in the
method.
Δkc = (7) uncertainties in the critical
experiments; (8) statistical uncertainties
in computing kc; (9) convergence
uncertainties in computing kc; (10)
uncertainties due to extrapolating kc
outside the range of experimental data;
(11) uncertainties due to limitations in
the geometric representations used in the
method; and (12) uncertainties due to
limitations in the material
representations used in the method.
Δkm = an allowance for any additional
uncertainties (MMS).
To the extent that not all 12 sources of
uncertainty listed above have been explicitly
taken into account, they may be allowed for
by increasing the value of Δkm. The more of
these sources of uncertainty that have been
taken into account, the smaller the necessary
additional margin Δkm. As a general
principle, however, the MMS should be large
compared to known uncertainties in the
nuclear data and limitations of the
methodology. However, a value of the MMS
below 0.02 should not be used.
Frequently, the terms in the above equation
relating to the application are grouped on the
left-hand side of the equation, so that the
equation is rewritten as follows:
ks + Δks ≤ kc − Δkc − Δkm
Where the terms on the right-hand side of the
equation are often lumped together and
termed the Upper Subcritical Limit
(USL), so that the USL = kc − Δkc − Δkm.
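The USL arithmetic can be sketched in a few lines of Python. The keff values and uncertainties below are hypothetical illustrations chosen for the example, not values from any actual validation:

```python
# Sketch of the ANSI/ANS-8.17 subcriticality check, using hypothetical numbers.
# USL = kc - delta_kc - delta_km; an application passes if ks + delta_ks <= USL.

def upper_subcritical_limit(kc, delta_kc, delta_km):
    """USL from the mean critical-experiment keff, its uncertainty, and the MMS."""
    return kc - delta_kc - delta_km

def is_subcritical(ks, delta_ks, usl):
    """True if the application keff plus its uncertainty falls at or below the USL."""
    return ks + delta_ks <= usl

# Hypothetical validation results: small negative bias, modest uncertainty.
kc = 0.998          # mean keff of critical experiments (bias = kc - 1 = -0.002)
delta_kc = 0.010    # bias uncertainty
delta_km = 0.05     # MMS typical of low-enriched fuel cycle processes

usl = upper_subcritical_limit(kc, delta_kc, delta_km)
print(round(usl, 3))                                      # 0.938
print(is_subcritical(ks=0.930, delta_ks=0.005, usl=usl))  # 0.935 <= 0.938 -> True
```

With these numbers the MMS dominates the margin: the bias and its uncertainty together contribute 0.012, while Δkm contributes 0.05.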
Relation to the Minimum Margin of
Subcriticality (MMS)
The MoS has been defined as the
difference between the actual value of keff
and the value of keff at which the system is
expected to be critical. The expected (best
estimate) critical value of keff is the mean keff
value of all critical experiments analyzed
(i.e., kc), including consideration of the
uncertainty in the bias (i.e., Δkc). The
calculated value of keff for an application
generally exceeds the actual (physical) keff
value due to conservative assumptions in
modeling the system. In terms of the above
USL equation, the MoS may be expressed
mathematically as:
MoS = kc − Δkc − (ks − Δksa) − Δks
Where the term in parentheses is equal to the
actual (physical) keff of the application,
ksa. A term, Δksa, has been added to
represent the difference between the
actual and calculated value of keff for the
application (i.e., Δksa = change in keff
resulting from modeling conservatism).
In terms of the USL:
MoS = USL + Δkm − ks + Δksa − Δks
The minimum allowed value of the MoS is
reached when the calculated keff for the
application, ks + Δks, is equal to the USL.
When this occurs, the minimum value of the
MoS is:
MoS ≥ Δkm + Δksa
Thus, adequate margin (MoS) may be
assured either by conservatism in modeling
practices or in the explicit specification of
Δkm (MMS). This is discussed in the ISG
section on modeling conservatism.
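The reduction of the MoS to Δkm + Δksa at the limit can be confirmed numerically. The values below are hypothetical, chosen only to exercise the algebra:

```python
# Numerical check (with hypothetical values) that the MoS equals
# delta_km + delta_ksa when the application sits exactly at the USL,
# i.e., when ks + delta_ks = USL.

kc, delta_kc = 0.998, 0.010   # mean critical-experiment keff and its uncertainty
delta_km = 0.05               # MMS
delta_ks = 0.005              # application calculation uncertainty
delta_ksa = 0.015             # assumed keff reduction from modeling conservatism

usl = kc - delta_kc - delta_km
ks = usl - delta_ks           # place the calculated keff exactly at the limit

mos = kc - delta_kc - (ks - delta_ksa) - delta_ks
print(round(mos, 6) == round(delta_km + delta_ksa, 6))  # True
```

The check shows the two routes to margin are interchangeable at the limit: conservatism in modeling (Δksa) and explicit margin (Δkm) add directly.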
Glossary
Application: calculation of a fissionable
system in the facility performed to
demonstrate subcriticality under normal or
credible abnormal conditions.
Area of Applicability (AOA): the ranges of
material compositions and geometric
arrangements within which the bias of a
calculational method is established.
Benchmark experiment: a critical
experiment that has been peer-reviewed and
published and is sufficiently well-defined to
be used for validation of calculational
methods.
Bias: a measure of the systematic
differences between calculational method
results and experimental data.
Bias uncertainty: a measure of both the
accuracy and precision of the calculations
and the uncertainty in the experimental data.
Calculational method: includes the
hardware platform, operating system,
computer algorithms and methods, nuclear
reaction data, and methods used to construct
computer models.
Critical experiment: a fissionable system
that has been experimentally determined to
be critical (with keff ≈ 1).
Margin of safety: the difference between
the actual value of a parameter and the value
of the parameter at which the system is
expected to be critical, with critical defined
as keff = 1 − bias − bias uncertainty.
Margin of Subcriticality (MoS): the
difference between the actual value of keff
and the value of keff at which the system is
expected to be critical, with critical defined
as keff = 1 − bias − bias uncertainty.
Minimum Margin of Subcriticality (MMS):
a minimum allowed margin of subcriticality,
which is an allowance for any unknown
uncertainties in calculating keff.
Subcritical limit: the maximum allowed
value of a controlled parameter under normal
case conditions.
Upper Subcritical Limit (USL): the
maximum allowed value of keff (including
uncertainty in keff), under both normal and
credible abnormal conditions, including
allowance for the bias, the bias uncertainty,
and a minimum margin of subcriticality.
[FR Doc. 06–2611 Filed 3–17–06; 8:45 am]
BILLING CODE 7590–01–P
SECURITIES AND EXCHANGE
COMMISSION
Proposed Collection; Comment
Request
Upon written request, copies available
from: Securities and Exchange
Commission, Office of Filings and
Information Services, Washington, DC
20549.
Extension: Rules 17Ad–6 and 17Ad–7, SEC
File No. 270–151, OMB Control No.
3235–0291.
Notice is hereby given that, pursuant
to the Paperwork Reduction Act of 1995
(44 U.S.C. 3501 et seq.), the Securities
and Exchange Commission
(‘‘Commission’’) is soliciting comments
on the collection of information
summarized below. The Commission
plans to submit this existing collection
of information to the Office of
Management and Budget for extension
and approval.
Rules 17Ad–6 and 17Ad–7:
Recordkeeping Requirements for
Transfer Agents
Rule 17Ad–6 under the Securities
Exchange Act of 1934 (15 U.S.C. 78b et
seq.) requires every registered transfer
agent to make and keep current records
about a variety of information, such as:
(1) Specific operational data regarding
the time taken to perform transfer agent
activities (to ensure compliance with
the minimum performance standards in
Rule 17Ad–2 (17 CFR 240.17Ad–2)); (2)
written inquiries and requests by
shareholders and broker-dealers and
response time thereto; (3) resolutions,
contracts or other supporting documents
concerning the appointment or
termination of the transfer agent; (4)
stop orders or notices of adverse claims
to the securities; and (5) all canceled
registered securities certificates.
Rule 17Ad–7 under the Securities
Exchange Act of 1934 (15 U.S.C. 78b et
seq.) requires each registered transfer
agent to retain the records specified in
Rule 17Ad–6 in an easily accessible
place for a period of six months to six
years, depending on the type of record
or document. Rule 17Ad–7 also
specifies the manner in which records
may be maintained using electronic,
microfilm, and microfiche storage
methods.
These recordkeeping requirements
ensure that all registered transfer agents
are maintaining the records necessary to
monitor and keep control over their own
performance and for the Commission to
adequately examine registered transfer
agents on an historical basis for
compliance with applicable rules.
We estimate that approximately 785
registered transfer agents will spend a
total of 392,500 hours per year
complying with Rules 17Ad–6 and
17Ad–7. Based on average cost per hour
of $50, the total cost of compliance with
Rule 17Ad–6 is $19,625,000.
Written comments are invited on: (a)
Whether the proposed collection of
information is necessary for the proper
performance of the functions of the
agency, including whether the
information shall have practical utility;
(b) the accuracy of the agency’s estimate
of the burden of the proposed collection
of information; (c) ways to enhance the
quality, utility, and clarity of the
information to be collected; and (d)
ways to minimize the burden of the
collection of information on
respondents, including through the use
of automated collection techniques or
other forms of information technology.
Consideration will be given to
comments and suggestions submitted in
writing within 60 days of this
publication.
Please direct your written comments
to R. Corey Booth, Director/Chief
Information Officer, Office of
Information Technology, Securities and
Exchange Commission, 100 F Street,
NE., Washington, DC 20549.
Dated: March 13, 2006.
Nancy M. Morris,
Secretary.
[FR Doc. E6–3981 Filed 3–17–06; 8:45 am]
BILLING CODE 8010–01–P
SECURITIES AND EXCHANGE
COMMISSION
[File No. 1–03701]
Issuer Delisting; Notice of Application
of Avista Corporation To Withdraw Its
Common Stock, No Par Value,
Together With the Preferred Share
Purchase Rights Appurtenant Thereto,
From Listing and Registration on the
Pacific Exchange, Inc.
March 14, 2006.
On March, 2006, Avista Corporation,
a Washington corporation (‘‘Issuer’’),
filed an application with the Securities
and Exchange Commission
(‘‘Commission’’), pursuant to section
12(d) of the Securities Exchange Act of
1934 (‘‘Act’’) 1 and Rule 12d2–2(d)
thereunder,2 to withdraw its common
stock, no par value, together with the
preferred share purchase rights
appurtenant thereto (collectively
‘‘Securities’’), from listing and
registration on the Pacific Exchange,
Inc. (‘‘PCX’’).
The Board of Directors (‘‘Board’’) of
the Issuer adopted resolutions on
February 10, 2006 to withdraw the
Securities from listing and registration
on PCX. The Issuer stated that the Board
determined the benefits of remaining
listed on PCX do not justify the
associated expense and administrative
burdens. The Issuer stated that the
Securities are listed on the New York
Stock Exchange, Inc. (‘‘NYSE’’) and will
remain listed on NYSE.
The Issuer stated in its application
that it has complied with applicable
rules of PCX by providing PCX with the
required documents governing the
withdrawal of securities from listing
and registration on PCX. The Issuer also
stated that withdrawal of the Securities
from PCX will not violate any law of the
State of Washington, the state in which
the Issuer is incorporated.
The Issuer’s application relates solely
to the withdrawal of the Securities from
listing on PCX and shall not affect their
continued listing on NYSE or their
obligation to be registered under section
12(b) of the Act.3
Any interested person may, on or
before April 7, 2006, comment on the
facts bearing upon whether the
application has been made in
accordance with the rules of PCX, and
what terms, if any, should be imposed
by the Commission for the protection of
investors. All comment letters may be
submitted by either of the following
methods:
1 15 U.S.C. 78l(d).
2 17 CFR 240.12d2–2(d).
3 15 U.S.C. 78l(b).
[Federal Register Volume 71, Number 53 (Monday, March 20, 2006)]
[Notices]
[Pages 14018-14028]
From the Federal Register Online via the Government Printing Office [www.gpo.gov]
[FR Doc No: 06-2611]
-----------------------------------------------------------------------
NUCLEAR REGULATORY COMMISSION
Notice of Availability of Draft Interim Staff Guidance Document
for Fuel Cycle Facilities
AGENCY: Nuclear Regulatory Commission.
ACTION: Notice of availability.
-----------------------------------------------------------------------
FOR FURTHER INFORMATION CONTACT: James Smith, Project Manager,
Technical Support Group, Division of Fuel Cycle Safety and Safeguards,
Office of Nuclear Material Safety and Safeguards, U.S. Nuclear
Regulatory Commission, Washington, DC 20005-0001. Telephone: (301) 415-
6459; fax number: (301) 415-5370; e-mail: jas4@nrc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
The Nuclear Regulatory Commission (NRC) continues to prepare and
issue Interim Staff Guidance (ISG) documents for fuel cycle facilities.
These ISG documents provide clarifying guidance to the NRC staff when
reviewing licensee integrated safety analysis, license applications or
amendment requests or other related licensing activities for fuel cycle
facilities under 10 CFR part 70.
II. Summary
The purpose of this notice is to provide the public an opportunity
to review a draft ISG, FCSS-ISG-10, Revision 2, which provides guidance
to NRC staff to determine whether the minimum margin of subcriticality
is sufficient to provide an adequate assurance of subcriticality for
safety to demonstrate compliance with the performance requirements of
10 CFR 70.61(d). Additionally, listing of comments received on the
previous draft and the dispositioning of these comments is also
provided. These documents are being issued to support a public meeting
scheduled for April 28, 2006, at the NRC Headquarters Auditorium in
which the NRC will discuss revision of the guidance document and its
resolution of comments received on Revision 1. A separate meeting
notice will be provided shortly, to give specific details regarding the
meeting agenda.
III. Further Information
The documents related to this action are available electronically
at the NRC's Electronic Reading Room at
https://www.nrc.gov/reading-rm/adams.html. From this site, you can
access the NRC's Agencywide
Documents Access and Management System (ADAMS), which provides text and
image files of NRC's public documents. The ADAMS accession numbers for
the documents related to this notice are provided in the following
table. If you do not have access to ADAMS or if there are problems in
accessing the documents located in ADAMS, contact the NRC Public
Document Room (PDR) Reference staff at 1-800-397-4209, 301-415-4737, or
by e-mail to pdr@nrc.gov.
------------------------------------------------------------------------
        Interim staff guidance                    ADAMS accession No.
------------------------------------------------------------------------
Draft FCSS Interim Staff Guidance-10,             ML060260479
  Revision 2.
Comments on Draft FCSS ISG-10, Rev. 1             ML060470150
  and Resolution.
------------------------------------------------------------------------
This document may also be viewed electronically on the public
computers located at the NRC's PDR, O 1 F21, One White Flint North,
11555 Rockville Pike, Rockville, MD 20852. The PDR reproduction
contractor will copy documents for a fee.
For the Nuclear Regulatory Commission.
Dated at Rockville, Maryland this 7th day of March 2006.
Melanie A. Galloway,
Chief, Technical Support Group, Division of Fuel Cycle Safety and
Safeguards, Office of Nuclear Material Safety and Safeguards.
Draft FCSS Interim Staff Guidance-10, Revision 2
Justification for Minimum Margin of Subcriticality for Safety
Prepared by Division of Fuel Cycle Safety and Safeguards Office of
Nuclear Material Safety and Safeguards
Issue
Technical justification for the selection of the minimum margin of
subcriticality for safety for fuel cycle facilities, as required by 10
CFR 70.61(d)
Introduction
10 CFR 70.61(d) requires, in part, that licensees or applicants
(henceforth to be referred to as ``licensees'') demonstrate that
``under normal and credible abnormal conditions, all nuclear processes
are subcritical, including use of an approved margin of subcriticality
for safety.'' There are a variety of methods that may be used to
demonstrate subcriticality, including use of industry standards,
handbooks, hand calculations, and computer methods. Subcriticality is
assured, in part, by providing margin between actual conditions and
expected critical conditions. This interim staff guidance (ISG),
however, applies only to margin used in those methods that rely on
calculation of keff, including deterministic and
probabilistic computer methods. The use of other methods (e.g., use of
endorsed industry standards, widely accepted handbooks, certain hand
calculations), containing varying amounts of margin, is outside the
scope of this ISG.
For methods relying on calculation of keff, margin may
be provided either in terms of limits on physical parameters of the
system (of which keff is a function), or in terms of limits
on keff directly, or both. For the purposes of this ISG, the
term margin of safety will be used to refer to the margin to
criticality in terms of system parameters, and the term margin of
subcriticality (MoS) will refer to the margin to criticality in terms
of keff. A common approach to ensuring subcriticality is to
determine a maximum keff limit below which the licensee's
calculations must fall. This limit will be referred to in this ISG as
the Upper Subcritical Limit (USL). Licensees using calculational
methods perform validation studies, in which critical experiments
similar to actual or anticipated facility applications are chosen and
then analyzed to determine the bias and uncertainty in the bias. The
bias is a measure of the systematic differences between calculational
method results and experimental data. The uncertainty in the bias is a
measure
of both the accuracy and precision of the calculations and the
uncertainty in the experimental data. A USL is then established that
includes allowances for bias and bias uncertainty as well as an
additional margin, to be referred to in this ISG as the minimum margin
of subcriticality (MMS). The MMS is variously referred to in the
nuclear industry as minimum subcritical margin, administrative margin,
and arbitrary margin, and the term MMS should be regarded as synonymous
with those terms. The term MMS will be used throughout this ISG, and
has been chosen for consistency with the rule. The MMS is an allowance
for any unknown (or difficult to identify or quantify) errors or
uncertainties in the method of calculating keff that may
exist beyond those which have been accounted for explicitly in
calculating the bias and its uncertainty.
There is little guidance in the fuel facility Standard Review Plans
(SRPs) as to what constitutes sufficient technical justification for
the MMS. NUREG-1520, ``Standard Review Plan for the Review of a License
Application for a Fuel Cycle Facility,'' Section 5.4.3.4.4, states that
there must be margin that includes, among other uncertainties,
``adequate allowance for uncertainty in the methodology, data, and bias
to assure subcriticality.'' An important component of this overall
margin is the MMS. However, there has been almost no guidance on how to
determine an appropriate MMS. Partly due to the lack of historical
guidance, and partly due to differences between facilities' processes
and methods of calculation, there have been significantly different MMS
values approved for the various fuel cycle facilities over time. In
addition, the different ways licensees have of defining margins and
calculating keff limits have made a consistent approach to
reviewing keff limits difficult. Recent licensing experience
has highlighted the need for further guidance to clarify what
constitutes an acceptable justification for the MMS.
The MMS can have a substantial effect on facility operations (e.g.,
storage capacity, throughput) and there has, therefore, been
considerable recent interest in decreasing margin in keff
below what has been licensed previously. In addition, the increasing
sophistication of computer codes and the ready availability of
computing resources means that there has been a gradual move towards
more realistic (often resulting in less conservative) modeling of
process systems. The increasing interest in reducing the MMS and the
reduction in modeling conservatism make technical justification of the
MMS more risk-significant than it has been in the past. In general,
consistent with a risk-informed approach to regulation, a smaller MMS
requires a more substantial technical justification.
This ISG is only applicable to fuel enrichment and fabrication
facilities licensed under 10 CFR part 70.
Discussion
This guidance is applicable to evaluating the MMS in methods of
evaluation that rely on calculation of keff. The
keff value of a fissionable system depends, in general, on a
large number of physical variables. The factors that can affect the
calculated value of keff may be broadly divided into the
following categories: (1) The geometric configuration; (2) the material
composition; and (3) the neutron distribution. The geometric form and
material composition of the system--together with the underlying
nuclear data (e.g., ν, χ(E), cross section data)--determine
the spatial and energy distribution of neutrons in the system (flux and
energy spectrum). An error in the nuclear data or the geometric or
material modeling of these systems can produce an error in the neutron
flux and energy spectrum, and thus in the calculated value of
keff. The bias associated with a single system is defined as
the difference between the calculated and physical values of
keff, by the following equation:
β = kcalc − kphysical
Thus, determining the bias requires knowing both the calculated and
physical keff values of the system. The bias associated with
a single critical experiment can be known with a high degree of
confidence, because the physical (experimental) value is known a priori
(kphysical ≈ 1). However, for calculations performed to
demonstrate subcriticality of facility processes (to be referred to as
``applications''), this is not generally the case. The bias associated
with such an application (i.e., not a known critical configuration) is
not typically known with this same high degree of confidence, because
the actual physical keff of the system is usually not known.
In practice, the bias is determined from the average calculated
keff for a set of experiments that cover different aspects
of the licensee's applications. The bias and its uncertainty must be
estimated by calculating the bias associated with a set of critical
experiments having geometric forms, material compositions, and neutron
spectra similar to those of the application. Because of the large
number of factors that can affect the bias, and the finite number of
critical experiments available, staff should recognize that this is
only an estimate of the true bias of the system. The experiments
analyzed cannot cover all possible combinations of conditions or
sources of error that may be present in the applications to be
evaluated. The effect on keff of geometric, material, or
spectral differences between critical experiments and applications
cannot be known with precision. Therefore, an additional margin (MMS)
must be applied to allow for the effects of any unknown uncertainties
that may exist in the calculated value of keff beyond those
accounted for in the calculation of the bias and its uncertainty. As
the MMS decreases, there needs to be a greater level of assurance that
the various sources of bias and uncertainty have been taken into
account, and that the bias and uncertainty are known with a high degree
of accuracy. In general, the more similar the critical experiments are
to the applications, the more confidence there is in the estimate of
the bias and the less MMS is needed.
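The bias estimation described above can be sketched as follows. The calculated keff values are hypothetical, and the simple multiplier used for the bias uncertainty is an illustrative stand-in for the tolerance-band statistics that an actual validation (e.g., per NUREG/CR-6698) would employ:

```python
# Sketch of estimating the bias and its uncertainty from a set of critical
# experiments (kphysical ~ 1). The keff values are hypothetical, and the
# 2.576 multiplier is an illustrative stand-in for the one-sided tolerance
# statistics of a licensing-grade validation.
from statistics import mean, stdev

# Calculated keff for benchmark experiments that are physically critical.
k_calc = [0.9971, 0.9988, 1.0003, 0.9969, 0.9992, 0.9980, 0.9995, 0.9976]

kc = mean(k_calc)                 # mean keff of the critical experiments
bias = kc - 1.0                   # negative bias: the method underpredicts keff
delta_kc = 2.576 * stdev(k_calc)  # spread of the results as a bias-uncertainty proxy

print(f"kc = {kc:.5f}, bias = {bias:.5f}, bias uncertainty ~ {delta_kc:.5f}")
```

The MMS is then applied on top of this estimated bias and uncertainty, precisely because the eight experiments here cannot cover every combination of conditions present in the applications.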
In determining an appropriate MMS, the reviewer should consider the
specific conditions and process characteristics present at the facility
in question. However, the MMS should not be reduced below 0.02. The
nuclear cross sections are not generally known to better than ~ 1-2%.
While this does not necessarily translate into a 2%
Δkeff, it has been observed over many years of
experience with criticality code validation that biases and spreads in
the data of a few percent can be expected. As stated in NUREG-1520, MoS
should be large compared to the uncertainty in the bias. Moreover,
errors in the criticality codes have been discovered over time that
have produced keff differences of roughly this same
magnitude of 1-2% (e.g., Information Notice 2005-13, ``Potential Non-
Conservative Error in Modeling Geometric Regions in the KENO-V.a
Criticality Code''). While the possibility of having larger
undiscovered errors cannot be entirely discounted, modeling
sufficiently similar critical experiments with the same code options to
be used in modeling applications should minimize the potential for this
to occur. However, many years of experience with the typical
distribution of calculated keff values and with the
magnitude of code errors that have occasionally surfaced support
establishing 0.02 as the minimum MMS that should be considered
acceptable under the best possible conditions.
Staff should recognize the important distinction between ensuring
that processes are safe and ensuring that
they are adequately subcritical. The value of keff is a
direct indication of the degree of subcriticality of the system, but is
not fully indicative of the degree of safety. A system that is very
subcritical (i.e., with keff ≪ 1) may have a small margin
of safety if a small change in a process parameter can result in
criticality. An example of this would be a UO2 powder
storage vessel, which is subcritical when dry, but may require only the
addition of water for criticality. Similarly, a system with a small MoS
(i.e., with keff ~1) may have a very large margin of safety
if it cannot credibly become critical. An example of this would be a
natural uranium system in light water, which may have a keff
value close to 1 but will never exceed 1. Because of this, a
distinction should be made between the margin of subcriticality and the
margin of safety. Although a variety of terms are in use in the nuclear
industry, the term margin of subcriticality will be taken to mean the
difference between the actual (physical) value of keff and
the value of keff at which the system is expected to be
critical. The term margin of safety will be taken to mean the
difference between the actual value of a parameter and the value of the
parameter at which the system is expected to be critical. The MMS is
intended to account for the degree of confidence that applications
calculated to be subcritical will be subcritical. It is not intended to
account for other aspects of the process (e.g., safety of the process
or the ability to control parameters within certain bounds) that may
need to be reviewed as part of an overall licensing review.
There are a variety of different approaches that a licensee could
choose in justifying the MMS. Some of these approaches and means of
reviewing them are described in the following sections, in no
particular preferential order. Many of these approaches consist of
qualitative arguments, and therefore there will be some degree of
subjectivity in determining the adequacy of the MMS. Because the MMS is
an allowance for unknown (or difficult to identify or quantify) errors,
the reviewer must ultimately exercise his or her best judgement in
determining whether a specific MMS is justified. Thus, the topics
listed below should be regarded as factors the reviewer should take
into consideration in exercising that judgement, rather than any kind
of prescriptive checklist.
The reviewer should also bear in mind that the licensee is not
required to use any or all of these approaches, but may choose an
approach that is applicable to its facility or a particular process
within its facility. While it may be desirable and convenient to have a
single keff limit or MMS value (and single corresponding
justification) across an entire facility, it is not necessary for this
to be the case. The MMS may be easier to justify for one process than
for another, or for a limited application versus generically for the
entire facility. The reviewer should expect to see various combinations
of these approaches, or entirely different approaches, used, depending
on the nature of the licensee's processes and methods of calculation.
Any approach used must ultimately lead to a determination that there is
adequate assurance of subcriticality.
(1) Conservatism in the Calculational Models
The margin in keff produced by the licensee's modeling
practices, together with the MMS, provide the margin between actual
conditions and expected critical conditions. In terms of the
subcriticality criterion taken from ANSI/ANS-8.17-2004, ``Criticality
Safety Criteria for the Handling, Storage, and Transportation of LWR
Fuel Outside Reactors'' (as explained in Appendix A):
MoS >= [Delta]km + [Delta]ksa
where [Delta]km is the MMS and [Delta]ksa is the
margin in keff due to conservative modeling of the system
(i.e., conservative values of system parameters).
Two applications for which the sums on the right-hand side of the
equation above are equal are equally subcritical. Assurance of
subcriticality may thus be provided by
specifying a margin in keff ([Delta]km), or
specifying conservative modeling practices ([Delta]ksa), or
some combination thereof. This principle will be particularly useful to
the reviewer evaluating a proposed reduction in the currently approved
MMS; the review of such a reduction should prove straightforward in
cases in which the overall combination of modeling conservatism and MMS
has not changed. Because of this straightforward quantitative
relationship, any modeling conservatism that has not been previously
credited should be considered before examining other factors. Cases in
which the overall MoS has decreased may still be acceptable, but would
have to be justified by other means.
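The trade-off between the MMS and modeling conservatism described above can be illustrated numerically. This is a minimal Python sketch; the function name and the numerical values are hypothetical and chosen only to show that equal sums of [Delta]km and [Delta]ksa provide equal overall margin.

```python
import math

def mos_satisfied(mos, delta_k_m, delta_k_sa):
    """Subcriticality criterion from the text:
    MoS >= delta_k_m (the MMS) + delta_k_sa (modeling conservatism)."""
    return mos >= delta_k_m + delta_k_sa

# Two hypothetical applications trading MMS against conservatism:
app_a = 0.05 + 0.00   # large MMS, no credited modeling conservatism
app_b = 0.02 + 0.03   # smaller MMS offset by modeling conservatism
print(math.isclose(app_a, app_b))  # True: equal overall margin

print(mos_satisfied(0.05, 0.02, 0.03))  # True
print(mos_satisfied(0.04, 0.02, 0.03))  # False: overall MoS reduced
```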
In evaluating justification for the MMS relying on conservatism in
the model, the reviewer should consider only that conservatism in
excess of any manufacturing tolerances, uncertainties in system
parameters, or credible process variations. That is, the conservatism
should consist of conservatism beyond the worst-case normal or abnormal
conditions, as appropriate, including allowance for any tolerances.
Examples of this added conservatism may include assuming optimum
concentration in solution processes, neglecting neutron absorbers in
structural materials, or assuming minimum reflector conditions (e.g.,
at least a 1-inch, tight-fitting reflector around process equipment).
These technical practices used to perform criticality calculations
generally result in conservatism of at least several percent in
keff. To credit this as part of the justification for the
MMS, the reviewer should have assurance that the modeling practices
described will result in a predictable and dependable amount of
conservatism in keff. In some cases, the conservatism may be
process-dependent, in which case it may be relied on as justification
for the MMS for a particular process. However, only modeling practices
that result in a global conservatism across the entire facility should
be relied on as justification for a site-wide MMS. Ensuring predictable
and dependable conservatism includes verifying that this conservatism
will be maintained over the facility lifetime, such as through the use
of license commitments or conditions.
If the licensee has a program that establishes operating limits (to
ensure that subcritical limits are not exceeded) below subcritical
limits determined in nuclear criticality safety evaluations, the margin
provided by this (optional) practice may be credited as part of the
conservatism. In such cases, the reviewer should credit only the
difference between operating and subcritical limits that exceeds any
tolerances or process variation, and should ensure that operating
limits will be maintained over the facility lifetime, through the use
of license commitments or conditions.
Some questions that the reviewer may ask in evaluating the use of
modeling conservatism as justification for the MMS include:
How much margin in keff is provided due to
conservatism in modeling practices?
How much of this margin exceeds allowance for tolerances
and process variations?
Is this margin specific to a particular process or does it
apply to all facility processes?
What provides assurance that this margin will be
maintained over the facility lifetime?
(2) Validation Methodology and Results
Assurance of subcriticality for methods that rely on the
calculation of keff requires that those methods be
appropriately validated. One of the goals of validation is to determine
the method's bias and the uncertainty in the bias. After this has been
done, an additional margin (MMS) is specified to account for any
additional uncertainties that may exist. The appropriate MMS depends,
in part, on the degree of confidence in the validation results. Having
a high degree of confidence in the bias and bias uncertainty requires
both that there be sufficient (for the statistical method used)
applicable benchmark-quality experiments and that there be a rigorous
validation methodology. Critical experiments that do not rise to the
level of benchmark-quality experiments may also be acceptable, but may
require additional margin. If either the data or the methodology is
deficient, a high degree of confidence in the results cannot be
attained, and a larger MMS may need to be employed than would otherwise
be acceptable. Therefore, although validation and determining the MMS
are separate exercises, they are related. The more confidence one has
in the validation results, the less additional margin (MMS) is needed.
The less confidence one has in the validation results, the more MMS is
needed.
Any review of a licensing action involving the MMS should involve
examination of the licensee's validation methodology and results. While
there is no clear quantifiable relationship between the validation and
MMS (as exists with modeling conservatism), several aspects of
validation should be considered before making a qualitative
determination of the adequacy of the MMS.
There are four factors that the reviewer should consider in
evaluating the validation: (1) The similarity of critical experiments
to actual applications; (2) sufficiency of the data (including the
number and quality of experiments); (3) adequacy of the validation
methodology; and (4) conservatism in the calculation of the bias and
its uncertainty. These factors are discussed in more detail below.
Similarity of Critical Experiments
Because the bias and its uncertainty must be estimated based on
critical experiments having geometric form, material composition, and
neutronic behavior similar to specific applications, the degree of
similarity between the critical experiments and applications is a key
consideration in determining the appropriateness of the MMS. The more
closely critical experiments represent the characteristics of
applications being validated, the more confidence the reviewer has in
the estimate of the bias and the bias uncertainty for those
applications.
The reviewer must understand both the critical experiments and
applications in sufficient detail to ascertain the degree of similarity
between them. Validation reports generally contain a description of
critical experiments (including source references). The reviewer may
need to consult these references to understand the physical
characteristics of the experiments. In addition, the reviewer may need
to consult process descriptions, nuclear criticality safety
evaluations, drawings, tables, input files, or other information to
understand the physical characteristics of applications. The reviewer
must consider the full spectrum of normal and abnormal conditions that
may have to be modeled when evaluating the similarity of the critical
experiments to applications.
In evaluating the similarity of experiments to applications, the
reviewer must recognize that some parameters are more significant than
others to accurately calculate keff. The parameters that
have the greatest effect on the calculated keff of the
system are those that are most important to match when choosing
critical experiments. Because of this, there is a close relationship
between similarity of critical experiments to applications and system
sensitivity. Historically, certain parameters have been used to trend
the bias because these are the parameters that have been found to have
the greatest effect on the bias. These parameters include the
moderator-to-fuel ratio (e.g., H/U, H/X, vm/vf), isotopic abundance
(e.g., uranium-235 (U-235), plutonium-239 (Pu-239), or overall Pu-to-
uranium ratio), and parameters that characterize the neutron energy
spectrum (e.g., energy of average lethargy causing fission (EALF),
average energy group (AEG)). Other parameters, such as material density
or overall geometric shape, are generally considered to be of less
importance. The reviewer should consider all important system
characteristics that can reasonably be expected to affect the bias. For
example, the critical experiments should include any materials that can
have an appreciable effect on the calculated keff, so that
the effect due to the cross sections of those materials is included in
the bias. Furthermore, these materials should have at least the same
reactivity worth in the experiments (which may be evidenced by having
similar number densities) as in the applications. Otherwise, the effect
of any bias from the underlying cross sections or the assumed material
composition may be masked in the applications. The materials must be
present in a statistically significant number of experiments having
similar neutron spectra to the application. Conversely, materials that
do not have an appreciable effect on the bias may be neglected and
would not have to be represented in the critical experiments.
Merely having critical experiments that are representative of
applications is the minimum acceptance criterion, and does not alone
justify having any particular value of the MMS. There are some
situations, however, in which there is an unusually high degree of
similarity between the critical experiments and applications, and in
these cases, this fact may be credited as justification for having a
smaller MMS than would otherwise be acceptable. If the critical
experiments have geometric forms, material compositions, and neutron
spectra that are nearly indistinguishable from those of the
applications, this may be justification for a smaller MMS than would
otherwise be acceptable. For example, justification for having a small
MMS for finished fuel assemblies could include selecting critical
experiments consisting of fuel assemblies in water, where the fuel has
nearly the same pellet diameter, pellet density, cladding materials,
pitch, absorber content, enrichment, and neutron energy spectrum as the
licensee's fuel. In this case, the validation should be very specific
to this type of system, because including other types of critical
experiments could mask variations in the bias. Therefore, this type of
justification is generally easiest when the area of applicability (AOA)
is very narrowly defined. The reviewer should pay particular attention
to abnormal conditions. In this example, changes in process conditions
such as damage to the fuel or partial flooding may significantly affect
the applicability of the critical experiments.
There are several tools available to the reviewer to ascertain the
degree of similarity between critical experiments and applications.
Some of these are listed below:
1. NUREG/CR-6698, ``Guide to Validation of Nuclear Criticality
Safety Calculational Method,'' Table 2.3, contains a set of screening
criteria for determining the applicability of critical experiments. As
is stated in the NUREG, these criteria were arrived at by
consensus among experienced nuclear criticality safety specialists and
may be considered to be conservative. The reviewer should consider
agreement on all screening criteria to be justification for
demonstrating a very high degree of critical experiment similarity.
(Agreement on the most significant screening criteria for a particular
system should be considered as demonstration of an acceptable degree of
critical experiment similarity.) Less conservative (i.e., broader)
screening criteria may also be acceptable, if appropriately justified.
2. Analytical methods that systematically quantify the degree of
similarity between a set of critical experiments and applications in
pair-wise fashion may be used. One example of this is the TSUNAMI code
in the SCALE 5 code package. One strength of TSUNAMI is that it
calculates an overall correlation that is a quantitative measure of the
degree of similarity between an experiment and an application. Another
strength is that this code considers all the nuclear phenomena and
underlying cross sections and weights them by their importance to the
calculated keff (i.e., sensitivity of keff to the
data). The NRC staff currently considers a correlation coefficient of
ck >= 0.95 to be indicative of a very high degree of
similarity. This is based on the staff's experience comparing the
results from TSUNAMI to those from a more traditional screening
criterion approach. The NRC staff also considers a correlation
coefficient between 0.90 and 0.95 to be indicative of a high degree of
similarity. However, owing to the limited amount of experience with
TSUNAMI, use of the code in this range should be supplemented with other methods of
evaluating critical experiment similarity. Conversely, a correlation
coefficient less than 0.90 should not be used as a demonstration of a
high or very high degree of critical experiment similarity. Because of
limited use of the code to date, all of these observations should be
considered tentative and thus the reviewer should not use TSUNAMI as a
``black box,'' or base conclusions of adequacy solely on its use.
However, it may be used to test a licensee's statement that there is a
high degree of similarity between experiments and applications.
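The thresholds quoted above can be sketched in code. Note loudly that the real TSUNAMI ck index folds in cross-section covariance data; the cosine-similarity proxy below is NOT the TSUNAMI algorithm, only a crude stand-in to illustrate how a correlation-style index and the staff's 0.90/0.95 thresholds might be applied.

```python
import math

def ck_proxy(s_exp, s_app):
    """Crude stand-in for TSUNAMI's ck: cosine similarity of two keff
    sensitivity vectors (one entry per nuclide-reaction pair). The
    real ck additionally weights by cross-section covariance data."""
    dot = sum(a * b for a, b in zip(s_exp, s_app))
    norms = (math.sqrt(sum(a * a for a in s_exp))
             * math.sqrt(sum(b * b for b in s_app)))
    return dot / norms

def similarity_reading(ck):
    """Interpretation thresholds quoted in the text."""
    if ck >= 0.95:
        return "very high similarity"
    if ck >= 0.90:
        return "high similarity (supplement with other methods)"
    return "not a demonstration of high similarity"

print(similarity_reading(0.97))  # very high similarity
```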
3. Traditional parametric sensitivity studies may be employed to
demonstrate that keff is highly sensitive or insensitive to
a particular parameter. For example, if a 50% reduction in the B-10
cross section is needed to produce a 1% change in the system
keff, then it can be concluded that the system is highly
insensitive to the boron content, in the amount present. This is
because a credible error in the \10\B cross section of a few percent
will have a statistically insignificant effect on the bias. Therefore,
in the amount present, the boron content is not a parameter that is
important to match in order to conclude that there is a high degree of
similarity between critical experiments and applications.
4. Physical arguments may demonstrate that keff is
highly sensitive or insensitive to a particular parameter. For example,
the fact that oxygen and fluorine are almost transparent to thermal
neutrons (i.e., cross sections are very low) may justify why
experiments consisting of UO2F2 may be considered
similar to UO2 or UF4 applications, provided that
both experiments and applications occur in the thermal energy range.
The reviewer should ensure that all parameters which can measurably
affect the bias are considered when assessing critical experiment
similarity. For example, comparison should not be based solely on
agreement in the U-235 fission spectrum for systems in which the
system keff is highly sensitive to U-238 fission, B-10
absorption, or H-1 scattering. A method such as TSUNAMI that considers
the complete set of reactions and nuclides present can be used to rank
the various system sensitivities, and to thus determine whether it is
reasonable to rely on the fission spectrum alone in assessing the
similarity of critical experiments to applications.
Some questions that the reviewer may ask in evaluating reliance on
critical experiment similarity as justification for the MMS include:
Do the critical experiments adequately span the range of
geometric forms, material compositions, and neutron energy spectra
expected in applications?
Are the materials present with at least the same
reactivity worth as in applications?
Do the licensee's criteria for determining whether
experiments are sufficiently similar to applications consider all
nuclear reactions and nuclides that can have a statistically
significant effect on the bias?
Sufficiency of the Data
Another aspect of evaluating the selected critical experiments for
a specific MMS is evaluating whether there is a sufficient number of
benchmark-quality experiments to determine the bias across the entire
AOA. Having a sufficient number of benchmark-quality experiments means
that: (1) There are enough (applicable) critical experiments to make a
statistically meaningful calculation of the bias and its uncertainty;
(2) the experiments somewhat evenly span the entire range of all the
important parameters, without gaps requiring extrapolation or wide
interpolation; and (3) the experiments are, preferably, benchmark-
quality experiments. The number of critical experiments needed is
dependent on the statistical method used to analyze the data. For
example, some methods require a minimum number of data points to
reliably determine whether the data are normally distributed. Merely
having a large number of experiments is not sufficient to provide
confidence in the validation result, if the experiments are not
applicable to the application. The reviewer should particularly examine
whether consideration of only the most applicable experiments would
result in a larger negative bias (and thus a lower USL) than that
determined based on the full set of experiments. The experiments should
also ideally be sufficiently well-characterized (including experimental
parameters and their uncertainties) to be considered benchmark
experiments. They should be drawn from established sources (such as
from the International Handbook of Evaluated Criticality Safety
Benchmark Experiments (IHECSBE), laboratory reports, or peer-reviewed
journals). For some applications, benchmark-quality experiments may not
be available; when necessary, critical experiments that do not rise to
the level of benchmark-quality experiments may be used. However, the
reviewer should take this into consideration and should evaluate the
need for additional margin.
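The check described above, that restricting attention to the most applicable experiments should not produce a more negative bias than the full set, can be sketched briefly. The keff values below are hypothetical, and the bias here is the simple mean offset from criticality; actual validation would also carry the bias uncertainty.

```python
def calc_bias(keff_values):
    """Bias: mean calculated keff of critical experiments minus 1.0
    (each experiment is known to be critical, so keff should be 1)."""
    return sum(keff_values) / len(keff_values) - 1.0

# Hypothetical data: full experiment set vs. the subset most
# applicable to the process being validated.
full_set = [0.998, 1.001, 0.995, 1.002, 0.990, 0.999]
most_applicable = [0.995, 0.990, 0.998]

# If the applicable subset shows a more negative bias, that subset
# (and the lower USL it implies) should govern the validation.
print(calc_bias(most_applicable) < calc_bias(full_set))  # True
```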
Some questions that the reviewer may ask in evaluating the number
and quality of critical experiments as justification for the MMS
include:
Are the critical experiments chosen all high-quality
benchmarks from reliable (e.g., peer-reviewed and widely-accepted)
sources?
Are the critical experiments chosen taken from multiple
independent sources, to minimize the possibility of systematic errors?
Have the experimental uncertainties associated with the
critical experiments been provided and used in calculating the bias and
bias uncertainty?
Is the number and distribution of critical experiments
sufficient to establish trends in the bias across the entire range of
parameters?
Is the number of critical experiments commensurate with
the statistical methodology being used?
Validation Methodological Rigor
Having a sufficiently rigorous validation methodology means having
a methodology that is appropriate for the number and distribution of
critical experiments, that calculates the bias and its uncertainty
using an established statistical methodology, that accounts for any
trends in the bias, and that accounts for all apparent sources of
uncertainty in the bias (e.g., the increase in uncertainty due to
extrapolating the bias beyond the range covered by the experimental
data). Examples of deficiencies in the validation methodology may
include: (1) Using a statistical methodology relying on the data being
normally distributed about the mean keff to analyze data
that are not normally distributed; (2) using a linear regression fit on
data that has a non-linear dependence on a trending parameter; (3) use
of a single pooled bias when very different types of critical
experiments are being evaluated in the same validation. These
deficiencies serve to decrease confidence in the validation results and
may warrant additional margin (i.e., a larger MMS). Additional guidance
on some of the more commonly observed deficiencies is provided below.
The assumption that data are normally distributed is generally
valid, unless there is a strong trend in the data or different types of
critical experiments with different mean calculated keff
values are being combined. Tests for normality require a minimum number
of critical experiments to attain a specified confidence level
(generally 95%). If there is insufficient data to verify that the data
are normally distributed, or the data are shown to be not normally
distributed, a non-parametric technique should be used to analyze the
data.
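One reason non-parametric techniques demand more experiments can be shown with the standard distribution-free order-statistic argument (a Python sketch, not drawn from the guidance itself): the smallest observed keff serves as a one-sided tolerance bound only once enough experiments are available, because the chance that every sample misses the lower tail must fall below the allowed risk.

```python
def min_samples_nonparametric(coverage=0.95, confidence=0.95):
    """Smallest n for which the sample minimum is a distribution-free
    one-sided tolerance bound: the probability that all n points fall
    above the (1 - coverage) quantile, coverage**n, must not exceed
    1 - confidence."""
    n = 1
    while coverage ** n > 1.0 - confidence:
        n += 1
    return n

print(min_samples_nonparametric())  # 59 experiments for a 95/95 bound
```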
The critical experiments chosen should ideally provide a continuum
of data across the entire validated range, so that any variation in the
bias as a function of important system parameters may be observed. The
presence of discrete clusters of experiments having a calculated
keff lower than the set of critical experiments as a whole
should be examined closely to determine if there is some systematic
effect common to a particular type of calculation that makes use of the
overall bias non-conservative. Because the bias can vary with system
parameters, if the licensee has combined different subsets of data
(e.g., solutions and powders, low- and high-enriched, homogeneous and
heterogeneous), the bias for the different subsets should be analyzed.
In addition, the goodness-of-fit for any function used to trend the
bias should be examined to ensure it is appropriate to the data being
analyzed.
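The goodness-of-fit check mentioned above can be sketched with an ordinary least-squares fit of the bias against a trending parameter. This is an illustrative Python sketch; a low r-squared suggests the assumed linear form is a poor fit and a different trending function (or no trend at all) should be considered.

```python
def linear_trend(x, y):
    """Least-squares fit of bias (y) vs. a trending parameter (x),
    returning (slope, intercept, r_squared) so the goodness-of-fit
    can be examined before the trend is relied on."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot
```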
If critical experiments do not cover the entire range of parameters
needed to cover anticipated applications, it may be necessary to extend
the AOA by making use of trends in the bias. Any extrapolation (or wide
interpolation) of the data should be done by means of an established
mathematical methodology that takes into account the functional form of
both the bias and its uncertainty. The extrapolation should not be
based on judgement alone, such as by observing that the bias is
increasing in the extrapolated range, because this may not account for
the increase in the bias uncertainty that will occur with increasing
extrapolation. The reviewer should independently confirm that the
derived bias is valid in the extrapolated range and should ensure that
the extrapolation is not large. NUREG/CR-6698 states that critical
experiments should be added if the data must be extrapolated more than
10%. There is no corresponding guidance given for interpolation;
however, if the gap represents a significant fraction of the total
range of the data, then the reviewer should question whether
interpolation is reasonable. The reviewer should consider, for
instance, how rapidly the underlying physics or neutronic behavior is
changing in the vicinity of the gap (e.g., if interpolation in H/X is
required, is the system fully thermalized, or is the spectrum changing
from a fast-to-thermal spectrum over the gap?) In general, if the
extrapolation or interpolation is too large, new factors that could
affect the bias may be introduced as the physical phenomena in the
system change. The reviewer should not view validation as a purely
mathematical exercise, but should bear in mind the neutron physics and
underlying physical phenomena when interpreting the results.
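The 10% guideline cited from NUREG/CR-6698 can be sketched as a simple range check. This Python sketch assumes one plausible reading of the guideline, extrapolation distance measured relative to the width of the experimental range; the parameter values are hypothetical.

```python
def extrapolation_fraction(data_min, data_max, point):
    """Fraction by which `point` extrapolates beyond the experimental
    range, relative to the range width (an assumed reading of the
    NUREG/CR-6698 10% guideline)."""
    if data_min <= point <= data_max:
        return 0.0
    gap = data_min - point if point < data_min else point - data_max
    return gap / (data_max - data_min)

# Hypothetical case: experiments span H/X of 100-500; an application
# at H/X = 560 extrapolates 15%, warranting additional experiments.
print(extrapolation_fraction(100.0, 500.0, 560.0))  # 0.15
```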
Discarding an unusually large number of critical experiments as
outliers (i.e., more than 1-2%) should also be viewed with some
concern. Apparent outliers should not be discarded based purely upon
judgement or statistical grounds (such as causing the data to fail
tests for normality), because they could be providing valuable
information on the method's validity for a particular application. The
reviewer should verify that there are specific defensible reasons, such
as reported inconsistencies in the experimental data, for discarding
any outliers. If any of the critical experiments from a particular data
set are discarded, the reviewer should examine other experiments
included to determine whether they may be subject to the same
systematic errors. Outliers should be examined carefully especially
when they have a lower calculated keff than the other
experiments included.
NUREG-1520 states that the MoS should be large compared to the
uncertainty in the bias. The observed spread of the data about the mean
keff should be examined as an indicator of the overall
precision of the calculational method. The reviewer should ascertain
whether the statistical method of validation considers both the
observed spread in the data and the experimental and calculational
uncertainty in determining the USL. The reviewer should also evaluate
whether the observed spread in the data is consistent with the reported
uncertainty (e.g., whether X^2/N is approximately 1). If the spread in the data is
larger than, or comparable to, the MMS, then the reviewer should
consider whether additional margin (i.e., a larger MMS) is needed.
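The consistency check described above, comparing the observed spread to the reported uncertainties, amounts to computing a chi-squared statistic per datum. The sketch below is illustrative; the keff and sigma values are hypothetical.

```python
def spread_consistency(keff_values, sigmas):
    """Chi-squared per datum: the spread of calculated keff values
    about their mean, weighted by the reported (experimental plus
    Monte Carlo) uncertainties. A result near 1 means the observed
    spread is consistent with the stated uncertainties."""
    n = len(keff_values)
    mean = sum(keff_values) / n
    return sum(((k - mean) / s) ** 2
               for k, s in zip(keff_values, sigmas)) / n

# Two results straddling the mean by exactly one reported sigma:
print(spread_consistency([0.99, 1.01], [0.01, 0.01]))  # approximately 1
```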
As a final test of the code's accuracy, the bias should be
relatively small (i.e., no more than approximately 2 percent), or else the reason for
the bias should be determined. No credit should be taken for positive
bias, because this would result in making changes in a non-conservative
direction without having a clear understanding of those changes. If the
absolute value of the bias is very large--and especially if the reason
for the large bias cannot be determined--this may indicate that the
calculational method is not very accurate, and a larger MMS may be
appropriate.
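The rule above, that no credit is taken for positive bias, can be sketched as a simple truncation before the USL is constructed. The function name is hypothetical and the sketch is illustrative only.

```python
def bias_for_usl(mean_keff):
    """Bias is the mean calculated keff of the critical experiments
    minus 1.0; positive bias receives no credit, so it is truncated
    to zero before constructing the USL."""
    return min(mean_keff - 1.0, 0.0)

print(bias_for_usl(1.004))  # 0.0: positive bias receives no credit
print(bias_for_usl(0.996))  # about -0.004: negative bias is retained
```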
Some questions that the reviewer may ask in evaluating the rigor of
the validation methodology as justification for the MMS include:
Are the results from use of the methodology consistent
with the data (e.g., normally distributed)?
Is the normality of the data confirmed prior to performing
statistical calculations? If the data does not pass the tests for
normality, is a non-parametric method used?
Does the assumed functional form of the bias represent a
good fit to the critical experiments? Is a goodness-of-fit test
performed?
Does the method determine a pooled bias across disparate
types of critical experiments, or does it consider variations in the
bias for different types of experiments? Are there discrete clusters of
experiments for which the bias appears to be non-conservative?
Has additional margin been applied to account for
extrapolation or wide interpolation? Is this done based on an
established mathematical methodology?
Have critical experiments been discarded as apparent
outliers? Is there a valid reason for doing so?
Performing an adequate code validation is not by itself sufficient
justification for any specific MMS. The reason for this is that the
validation analysis determines the bias and its uncertainty, but not
the MMS. The MMS is added after the validation has been performed to
provide added assurance of subcriticality. However, having a validation
methodology that either exceeds or falls short of accepted practices
for validation may be a basis for either reducing or increasing the
MMS.
Statistical Conservatism
In addition to having conservatism in keff due to
modeling practices, licensees may also provide conservatism in the
statistical methods used to calculate the USL. For example, NUREG/CR-
6698 states that an acceptable method for calculating the bias is to
use the single-sided tolerance limit approach with a 95/95 confidence
(i.e., 95% confidence that 95% of all future critical calculations will
lie above the USL). If the licensee decides to use the single-sided
tolerance limit approach with a 95/99.9 confidence, this would result
in a more conservative USL than with a 95/95 confidence. This would be
true of other methods for which the licensee's confidence criteria
exceed the minimum accepted criteria. Generally, the NRC has accepted
95% confidence levels for validation results, so using more stringent
confidence levels may provide conservatism. In addition, there may be
other reasons a larger bias and/or bias uncertainty than necessary has
been used (e.g., because of the inclusion of inapplicable critical
experiments that have a lower calculated keff).
The reviewer may credit this conservatism towards having an
adequate MoS if: (1) The licensee demonstrates that this translates
into a specific [Delta]keff; and (2) the licensee
demonstrates that the margin will be dependably present, based on
license or other commitments.
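The single-sided tolerance limit approach discussed above can be sketched as follows. This Python sketch assumes the one-sided tolerance factor is taken from standard statistical tables (e.g., roughly 2.29 for n = 25 at 95/95; the 95/99.9 factor is larger), and the keff data are hypothetical; it is not the complete NUREG/CR-6698 procedure.

```python
import statistics

def usl(keff_values, tol_factor, mms=0.02):
    """Sketch of a single-sided-tolerance-limit upper safety limit:
    USL = 1 + bias - tol_factor * s - MMS, with positive bias not
    credited. A larger tolerance factor (higher confidence, e.g.
    95/99.9 vs. 95/95) gives a lower, more conservative USL."""
    bias = min(statistics.mean(keff_values) - 1.0, 0.0)
    s = statistics.stdev(keff_values)
    return 1.0 + bias - tol_factor * s - mms

data = [0.99, 1.00, 1.01]
print(usl(data, 2.29) < usl(data, 2.0))  # True: larger factor, lower USL
```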
(3) Additional Risk-Informed Considerations
Besides modeling conservatism and the validation results, other
factors may provide added assurance of subcriticality. These factors
should be considered in evaluating whether there is adequate MoS and
are discussed below.
System Sensitivity and Uncertainty
The sensitivity of keff to changes in system parameters
can be used to assess the potential effect of errors on the calculation
of keff. If the calculated keff is especially
sensitive to a given parameter, an error in that parameter could have a
correspondingly large contribution to the bias. Conversely, if
keff is very insensitive to a given parameter, then an error
may have a negligible effect on the bias. This is of particular
importance when assessing whether the chosen critical experiments are
sufficiently similar to applications to justify a small MMS.
The reviewer should not consider the sensitivity in isolation, but
should also consider the magnitude of uncertainties in the parameters.
If keff is very sensitive to a given parameter, but the
value of that parameter is known with very high accuracy (and its
variations are well-controlled), the potential contribution to the bias
may still be very small. Thus, the contribution to the bias is a
function of the product of the keff sensitivity and the
uncertainty. To illustrate this, suppose that keff is a
function of a large number of variables, x1, x2, . . ., xN. Then, if
all the individual terms are independent, the uncertainty in
keff may be expressed as follows:
    ([delta]keff)\2\ = [Sigma]i ([part]k/[part]xi)\2\ ([delta]xi)\2\
where the sum is taken over i = 1, . . ., N, the partial derivatives
[part]k/[part]xi are proportional to the sensitivities, and the terms
[delta]xi represent the uncertainties, or likely variations, in the
parameters. (If the variables are not all independent, then there may
be additional cross terms.) Each term in this equation then represents
the contribution of one parameter to the overall
uncertainty in keff.
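This quadrature sum can be sketched in a few lines. The function below
is purely illustrative, assumes independent parameters as in the
equation above, and uses hypothetical sensitivity and uncertainty
values not taken from any real system:

```python
from math import sqrt

def keff_uncertainty(sensitivities, uncertainties):
    """Combine per-parameter contributions in quadrature, assuming
    independent parameters: each term is (dk/dx_i * delta_x_i)**2."""
    assert len(sensitivities) == len(uncertainties)
    return sqrt(sum((s * dx) ** 2
                    for s, dx in zip(sensitivities, uncertainties)))

# Hypothetical example: a keff sensitivity of 0.3 per wt% enrichment with
# a 0.05 wt% tolerance, plus 0.002 per ppm of absorber known to 0.5 ppm:
dk = keff_uncertainty([0.3, 0.002], [0.05, 0.5])
```

The example shows the point made in the text: a very sensitive
parameter that is tightly controlled can still dominate, or be
dominated by, an insensitive parameter with a large uncertainty,
depending on the product of the two factors.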
There are several tools available to the reviewer to ascertain the
sensitivity of keff to changes in the underlying parameters.
Some of these are listed below:
1. Analytical tools that calculate the sensitivity for each
nuclide-reaction pair present in the problem may be used. One example
of this is the TSUNAMI code in the SCALE 5 code package. TSUNAMI
calculates both an integral sensitivity coefficient (i.e., summed over
all energy groups) and a sensitivity profile as a function of energy
group. The reviewer should recognize that TSUNAMI calculates the
keff sensitivity only to changes in the underlying nuclear
data, and not to other parameters that could affect the bias, which
should also be considered. (See the section on Critical Experiment
Similarity for caveats about using TSUNAMI.)
2. Direct sensitivity calculations may be used, in which system
parameters are perturbed and the resulting impact on keff
determined. Perturbation of atomic number densities can also be used to
confirm the sensitivity calculated by other methods (e.g., TSUNAMI).
Such techniques are not limited to considering the effect of the
nuclear data.
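A direct sensitivity calculation of this kind can be sketched as a
central-difference perturbation. In the sketch below, `toy_keff` is a
stand-in for a real transport calculation (every name and coefficient
is hypothetical); in practice each evaluation would be a separate code
run:

```python
def central_difference_sensitivity(keff_model, params, name, rel_step=0.01):
    """Estimate dk/dx for one parameter by perturbing it up and down
    and re-evaluating the model (in practice, rerunning the code)."""
    x0 = params[name]
    h = abs(x0) * rel_step
    up = dict(params, **{name: x0 + h})
    down = dict(params, **{name: x0 - h})
    return (keff_model(up) - keff_model(down)) / (2 * h)

# Toy linear stand-in for a transport calculation (purely illustrative):
def toy_keff(p):
    return 0.6 + 0.03 * p["enrichment"] - 0.0005 * p["absorber_ppm"]

sens = central_difference_sensitivity(
    toy_keff, {"enrichment": 4.0, "absorber_ppm": 500.0}, "enrichment")
# For this linear toy model the estimate recovers the coefficient, ~0.03.
```

The same perturbation machinery applies to number densities,
geometry, or any other model input, which is why such techniques are
not limited to the nuclear data.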
There are also several sources available to the reviewer to
ascertain the uncertainty associated with the underlying parameters.
For process parameters, these sources of uncertainty may include
manufacturing tolerances, quality assurance records, and experimental
and/or measurement results. For nuclear data parameters, these sources
of uncertainty may include published data, uncertainty data distributed
with the cross section libraries, or the covariance data used in
methods such as TSUNAMI.
Some systems are inherently more sensitive to changes in the
underlying parameters than others. For example, high-enriched uranium
systems typically exhibit a greater sensitivity to changes in system
parameters (e.g., mass, moderation) than low-enriched systems. This has
been the reason that HEU (i.e., >20wt% \235\ U) facilities have been
licensed with larger MMS values than LEU (<=10wt% \235\ U) facilities.
This greater sensitivity would also be true of weapons-grade Pu
compared to low-assay mixed oxides (i.e., with a few percent Pu/U).
However, it is also true that the uncertainties associated with
measurement of the \235\ U cross sections are much smaller than those
associated with measurement of the \238\ U cross sections. Both the
greater sensitivity and smaller uncertainty would need to be considered
in evaluating whether a larger MMS is needed for high-enriched systems.
Frequently, operating limits that are more conservative than safety
limits determined using keff calculations are established to
prevent those safety limits from being exceeded. For systems in which
keff is very sensitive to the system parameters, more margin
between the operating and safety limits may be needed. Systems in which
keff is very sensitive to the process parameters may need
both a larger margin between operating and safety limits and a larger
MMS. This is because the system is sensitive to any change, whether
caused by normal process variations or by unknown errors. For this
reason, it is often assumed that the MMS is meant to account for
variations in the process or for the ability to control the process
parameters. However, the MMS is meant only to
[[Page 14025]]
allow for unknown (or difficult to quantify) uncertainties in the
calculation of keff. The reviewer should recognize that
determination of an appropriate MMS is not dependent on the ability to
control process parameters within safety limits (although both may
depend on the system sensitivity).
Some questions that the reviewer may ask in evaluating the system
sensitivity as justification for the MMS include:
How sensitive is keff to changes in the
underlying nuclear data (e.g., cross sections)?
How sensitive is keff to changes in the
geometric form and material composition?
Are the uncertainties associated with these underlying
parameters well-known?
How does the MMS compare to the expected magnitude of
changes in keff resulting from uncertainties in these
underlying parameters?
Knowledge of the Neutron Physics
Another important consideration that may affect the appropriate MMS
is the extent to which the physical behavior of the system is known.
Fissile systems which are known to be subcritical with a high degree of
confidence do not require as much MMS as systems where subcriticality
is less certain. An example of a system known to be subcritical with
high confidence is a light-water reactor fuel assembly. The design of
these systems is such that they can only be made critical when highly
thermalized. Due to extensive analysis and reactor experience, the
flooded isolated assembly is known to be subcritical. In addition, the
thermal neutron cross sections for materials in finished reactor fuel
have been measured with a very high degree of accuracy (as opposed to
cross sections in the resonance region). Other examples of systems in
which there is independent corroborating evidence of subcriticality may
include systems consisting of very simple geometric shapes, or other
idealized situations, in which there is strong evidence that the system
is subcritical based on comparison with highly similar systems in
published sources (e.g., standards and handbooks). In these cases, the
MMS may be significantly reduced due to the fact that the calculation
of keff is not relied on alone to provide assurance of
subcriticality.
Reliance on independent knowledge that a given system is
subcritical necessarily requires that the configuration of the system
be fixed. If the configuration can change from the reference case,
there will be less knowledge about the behavior of the changed system.
For example, a finished fuel assembly is subject to strict quality
assurance checks and would not reach final processing if it were
outside specifications. In addition, it has a form that has both been
extensively studied and is highly stable. For these reasons, there is a
great deal of certainty that this system is well-characterized and is
not subject to change. A typical solution or powder system (other than
one with a simple geometric arrangement) would not have been studied
with the same level of rigor as a finished fuel assembly. Even if they
were studied with the same level of rigor, these systems have forms
that are subject to change into forms whose neutron physics has not
been as extensively studied.
Some questions that the reviewer may ask in evaluating the
knowledge of the neutron physics as justification for the MMS include:
Are the geometric form and material composition of the
system fixed and very unlikely to change?
Are the geometric form and material composition of the
system subject to strict quality assurance, such that tolerances have
been bounded?
Has the system been extensively studied in the nuclear
industry and shown to be subcritical (e.g., in reactor fuel studies)?
Are there other reasons besides criticality calculations
to conclude that the system will be subcritical (e.g., handbooks,
standards, published data)?
How well-known is the nuclear data (e.g., cross sections)
in the energy range of interest?
Likelihood of the Abnormal Condition
Some facilities have been licensed with different sets of
keff limits for normal and abnormal conditions. Separate
keff limits for normal and abnormal conditions are
permissible, but are not required. There is some low likelihood that
processes calculated to be subcritical will, in fact, be critical, and
this likelihood increases as the MMS is reduced (though it cannot in
general be quantified). NUREG-1718, ``Standard Review Plan for the
Review of an Application for a Mixed Oxide (MOX) Fuel Fabrication
Facility,'' states that abnormal conditions should be at least unlikely
from the standpoint of the double contingency principle. A somewhat
higher likelihood that a system calculated to be subcritical is, in
fact, critical is therefore more permissible for abnormal conditions
than for normal conditions, because of the low likelihood that the
abnormal condition will be realized. The reviewer should verify that
the licensee has defined abnormal conditions such that reaching the
abnormal condition requires at least one contingency to have occurred,
that the system will be closely monitored so that the abnormal
condition is promptly detected, and that it will be promptly corrected
upon detection. Also, there is
generally more conservatism present in the abnormal case, because the
parameters that are assumed to have failed are analyzed at their worst-
case credible condition.
The increased risk associated with having a smaller MMS for
abnormal conditions should be commensurate with, and offset by, the low
likelihood of achieving the abnormal condition. That is, if the normal
case keff limit is judged to be acceptable, then the
abnormal case limit will also be acceptable, provided the increased
likelihood (that a system calculated to be subcritical will be
critical) is offset by the reduced likelihood of realizing the abnormal
condition because of the controls that have been established. Note that
if two or more contingencies must occur to reach a given condition,
there is no requirement to ensure that the resulting condition is
subcritical. If a single keff limit is used (i.e., no credit
for unlikelihood of the abnormal condition), then the limit must be
found acceptable to cover both normal and credible abnormal conditions.
The reviewer should always make this finding in light of the specific
conditions and controls in the process(es) being evaluated.
(4) Statistical Justification for the MMS
The NRC does not consider statistical justification an appropriate
basis for a specific MMS. Previously, some licensees have attempted to
justify specific MMS values based on a comparison of two statistical
methods. For example, the USLSTATS code issued with the SCALE code
package contains two methods for calculating the USL: (1) The
Confidence Band with Administrative Margin approach (calculating USL-
1), and (2) the Lower Tolerance Band approach (calculating USL-2). The
value of the MMS is an input parameter to the Confidence Band approach,
but is not included explicitly in the Lower Tolerance Band approach. In
this particular justification, adequacy of the MMS is based on a
comparison of USL-1 and USL-2 (i.e., the condition that USL-1,
including the chosen MMS, is less than USL-2). However, the reviewer
should not accept this justification.
The condition that USL-1 (with the chosen MMS) is less than USL-2
is necessary, but is not sufficient, to show that an adequate MMS has
been used. These methods are both statistical
[[Page 14026]]
methods, and a comparison can only demonstrate whether the MMS is
sufficient to bound any statistical uncertainties included in the Lower
Tolerance Band approach but not included in the Confidence Band
approach. There may be other statistical or systematic errors in
calculating keff that are not included in either statistical
treatment. Because of this, an MMS value should be specified regardless
of the statistical method used. Therefore, the reviewer should not
consider such a statistical approach an acceptable justification for
any specific value of the MMS.
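The structure of the comparison can be made concrete with a highly
simplified sketch. The two functions below are schematic stand-ins for
the USLSTATS outputs (the actual USLSTATS formulas contain additional
terms, and all numerical values here are hypothetical); the point is
that the comparison tests only the difference between two statistical
bands:

```python
def usl_confidence_band(bias, band_width, mms):
    """Schematic USL-1: Confidence Band with Administrative Margin.
    The MMS enters explicitly as the administrative margin."""
    return 1.0 + bias - band_width - mms

def usl_lower_tolerance_band(bias, tolerance_band_width):
    """Schematic USL-2: Lower Tolerance Band; no explicit MMS term."""
    return 1.0 + bias - tolerance_band_width

# Hypothetical values: negative bias of -0.010, a 0.005 confidence band,
# a 0.012 tolerance band, and a chosen MMS of 0.02:
usl1 = usl_confidence_band(-0.010, 0.005, 0.02)       # 0.965
usl2 = usl_lower_tolerance_band(-0.010, 0.012)        # 0.978
# usl1 < usl2 holds here, but that only shows the MMS bounds the gap
# between the two statistical bands; it says nothing about systematic
# errors excluded from both treatments.
```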
(5) Summary
Based on a review of the licensee's justification for its chosen
MMS, taking into consideration the aforementioned factors, the staff
should make a determination as to whether the chosen MMS provides
reasonable assurance of subcriticality under normal and credible
abnormal conditions. The staff's review should be risk-informed, in
that the review should be commensurate with the MoS and should consider
the specific facility and process characteristics, as well as the
specific modeling practices used. As an example, approving an MMS value
greater than 0.05 for processes typically encountered in enrichment and
fuel fabrication facilities should require only a cursory review,
provided that an acceptable validation has been performed and modeling
practices at least as conservative as those in NUREG-1520 have been
utilized. The approval of a smaller MMS will require a somewhat more
detailed review, commensurate with the MMS that is requested. However,
the MMS should not be reduced below 0.02 due to inherent uncertainties
in the cross section data and the magnitude of code errors that have
been discovered. Quantitative arguments (such as modeling conservatism)
should be used to the extent practical. However, in many instances, the
reviewer will need to make a judgement based at least partly on
qualitative arguments. The staff should document the basis for finding
the chosen MMS value to be acceptable or unacceptable in the Safety
Evaluation Report (SER), and should ensure that any factors upon which
this determination rests are ensured to be present over the facility
lifetime (e.g., through license commitment or condition).
Regulatory Basis
In addition to complying with paragraphs (b) and (c) of this
section, the risk of nuclear criticality accidents must be limited by
assuring that under normal and credible abnormal conditions, all
nuclear processes are subcritical, including use of an approved margin
of subcriticality for safety. [10 CFR 70.61(d)]
Technical Review Guidance
Determination of an adequate MMS is strongly dependent upon
specific processes, conditions, and calculational practices at the
facility being licensed. Judgement and experience must be employed in
evaluating the adequacy of the proposed MMS. In the past, an MMS of
0.05 has generally been found acceptable for most typical low-enriched
fuel cycle facilities without a detailed technical justification. A
smaller MMS may be acceptable but will require some level of technical
review. (No specific guidance on the appropriate MMS for other types of
facilities, such as high-enriched or plutonium fuel cycle facilities,
is provided. Rather, the MMS for these facilities should be evaluated
on a case-by-case basis using the criteria in this ISG; an example of
the consideration of sensitivity and uncertainty for high-enriched
uranium is given in the section on ``System Sensitivity and
Uncertainty.'') Also, for reasons stated previously, the MMS should not
be reduced below 0.02.
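The triage implied by this guidance for low-enriched facilities can be
encoded schematically. The function below is illustrative only and is
not a substitute for the case-by-case judgment the guidance calls for;
the category labels are invented for the sketch, while the 0.05 and
0.02 thresholds come from the text:

```python
def mms_review_depth(mms):
    """Schematic triage of the review effort implied by the guidance
    for low-enriched fuel cycle facilities (illustrative only)."""
    if mms < 0.02:
        return "unacceptable"    # below the floor set by cross section
                                 # uncertainties and known code errors
    if mms < 0.05:
        return "detailed review" # smaller margin needs commensurate
                                 # technical justification
    return "cursory review"      # 0.05 or more, given acceptable
                                 # validation and modeling practices
```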
An MMS of 0.05 should be found acceptable for low-enriched fuel
cycle processes and facilities if:
1. A validation has been performed that meets accepted industry
guidelines (e.g., meets the requirements of ANSI/ANS-8.1-1998, NUREG/
CR-6361, and/or NUREG/CR-6698).
2. There is an acceptable number of critical experiments with
similar geometric forms, material compositions, and neutron energy
spectra to applications. These experiments cover the range of
parameters of applications, or else margin is provided to account for
extensions to the AOA.
3. The processes to be evaluated include materials and process
conditions similar to those that occur in low-enriched fuel cycle
applications (i.e., no new fissile materials, unusual moderators or
absorbers, or technologies new to the industry that can affect the
types of systems to be modeled).
The reviewer should consider any factors, including those
enumerated in the discussion above, that could result in applying
additional margin (i.e., a larger MMS) or may justify reducing the MMS.
The reviewer must then exercise judge