Notice: Request for Comments; National Registry of Evidence-Based Programs and Practices (NREPP), 50381-50390 [05-17034]
Prevention and Treatment (SAPT) Block
Grant will be collected in TEDS.
The request for OMB approval
includes a request to conduct the 2006
N–SSATS survey and the 2006 Mini-N–
SSATS. The Mini-N–SSATS is a
procedure for collecting services data
from newly identified facilities between
main cycles of the survey and will be
used to improve the listing of treatment
facilities in the on-line treatment facility
Locator. The 2006 N–SSATS
questionnaire will include several new
items, including the addition of nicotine
replacement therapy and psychiatric
medications to the pharmacotherapies
list and the addition of new services to
the list of services provided. The request
also includes a request to conduct a
pretest of the 2007 N–SSATS
questionnaire in 2006. The 2007 pretest
questionnaire will include several
changes, including the modification of
the treatment categories to better reflect
the practices and terminology currently
used in the treatment field; modification
of the detoxification question, including
the addition of a follow-up question on
whether the facility uses drugs in
detoxification and for which substances;
the addition of questions on treatment
approaches and behavioral
interventions; the addition of a question
on quality control procedures used by
the facility; and, the addition of a
question on whether the facility accepts
ATR vouchers and how many annual
admissions were funded by ATR
vouchers. Following the pretest of the
2007 N–SSATS, a separate request for
OMB approval will be submitted for the
2007 and 2008 N–SSATS, including the
Mini-N–SSATS for those years.
The request for OMB approval will
also include the addition of several new
NOMS data elements to the TEDS
client-level record. To the extent that
states already collect the elements from
their treatment providers, the following
elements will be included in the TEDS
data collection: Number of arrests at
admission and at discharge; substances
used/frequency of use at discharge;
employment at discharge; and living
arrangement at discharge.
Estimated annual burden for the
DASIS activities is shown below:
Type of respondent and activity                  Number of     Responses per    Hours per    Total burden
                                                respondents     respondent      response        hours

States:
    TEDS Admission Data ....................         52              4               6           1,248
    TEDS Discharge Data ....................         40              4               8           1,280
    TEDS Discharge Crosswalks ..............          5              1              10              50
    I–SATS Update ..........................         56             67             .08             300
        State subtotal .....................         56         ............   ............      2,878
Facilities:
    I–SATS Update ..........................        100              1             .08               8
    Pretest of N–SSATS revisions ...........        200              1             .17              34
    N–SSATS questionnaire ..................     17,000              1              .2           3,400
    Augmentation screener ..................      1,000              1             .08              80
    Mini-N–SSATS ...........................        700              1             .13              91
        Facility subtotal ..................     19,000         ............   ............      3,613
            Total ..........................     19,056         ............   ............      6,491
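Each total burden figure above is the product of the three preceding columns (burden hours = respondents × responses per respondent × hours per response), with subtotals summing the rows. A minimal arithmetic sketch, using values from the table (the helper name is illustrative, not part of the notice):

```python
# Burden hours = respondents x responses per respondent x hours per response.
# Values come from the table above; the function name is illustrative.
def burden_hours(respondents, responses_per_respondent, hours_per_response):
    return respondents * responses_per_respondent * hours_per_response

assert burden_hours(52, 4, 6) == 1_248            # TEDS Admission Data
assert burden_hours(40, 4, 8) == 1_280            # TEDS Discharge Data
assert round(burden_hours(56, 67, 0.08)) == 300   # I-SATS Update (rounded)
assert round(burden_hours(17_000, 1, 0.2)) == 3_400  # N-SSATS questionnaire
```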
Written comments and
recommendations concerning the
proposed information collection should
be sent by September 26, 2005 to:
SAMHSA Desk Officer, Human
Resources and Housing Branch, Office
of Management and Budget, New
Executive Office Building, Room 10235,
Washington, DC 20503; due to potential
delays in OMB’s receipt and processing
of mail sent through the U.S. Postal
Service, respondents are encouraged to
submit comments by fax to: 202–395–
6974.
Dated: August 12, 2005.
Anna Marsh,
Executive Officer, SAMHSA.
[FR Doc. 05–16989 Filed 8–25–05; 8:45 am]
BILLING CODE 4162–20–P
DEPARTMENT OF HEALTH AND
HUMAN SERVICES
Substance Abuse and Mental Health
Services Administration (SAMHSA)
Notice: Request for Comments;
National Registry of Evidence-Based
Programs and Practices (NREPP)
Authority: Sec. 501, Pub. L. 106–310
SUMMARY: The Substance Abuse and
Mental Health Services Administration
(SAMHSA) is committed to preventing
the onset and reducing the progression
of mental illness, substance abuse and
substance-related problems among all
individuals, including youth. As part of
this effort, SAMHSA is expanding and
refining the agency’s National Registry
of Evidence-based Programs and
Practices (NREPP) so that the system
serves as a leading national resource for
contemporary and reliable information
on the scientific basis and practicality of
interventions to prevent and/or treat
mental illness and substance use and
abuse.
NREPP represents a major agency
activity within SAMHSA’s Science to
Service initiative. The initiative seeks to
accelerate the translation of research
into practice by promoting the
implementation of effective, evidence-based interventions for preventing and/
or treating mental disorders and
substance use and abuse. Of equal
measure, the initiative emphasizes the
essential role of the services community
in providing input and feedback to
influence and better frame the research
questions and activities pursued by
researchers in these areas.
Through SAMHSA’s Science to
Service initiative, the agency ultimately
seeks to develop a range of tools that
will facilitate evidence-based decision-making in substance abuse prevention,
mental health promotion, and the
treatment of mental and substance use
disorders. In addition to NREPP,
SAMHSA is developing an
informational guide of web-based
resources on evidence-based
interventions that will be available in
2006. SAMHSA also is exploring the
feasibility of supporting a searchable
web database of evidence-based
information (e.g., systematic reviews,
meta-analyses, clinical guidelines) for
mental health and substance abuse
prevention and treatment providers.
Such a system could reduce the lag time
between the initial development and
broader application of research
knowledge by serving as a real-time
resource to providers for ‘‘keeping
current’’ in ways that will enhance their
delivery of high quality, effective
services. In combination, these three
tools—NREPP, guide to web-based
resources, and database of evidence-based information—would provide
valuable information that can be used in
a variety of ways by a range of interested
stakeholders.
With regard to NREPP, during the past
two years, SAMHSA convened a series
of scientific/stakeholder panels to
inform the agency’s expansion of the
system to include interventions in all
substance abuse and mental health
treatment and prevention domains.
These panels thoroughly assessed the
existing NREPP review process and
review criteria and provided comments
and suggestions for refining and
enhancing NREPP. As part of this
expansion effort, SAMHSA also engaged
a contractor to assess the NREPP process
and review criteria, including how the
system and criteria compare to other,
similar evidence review and rating
systems in the behavioral and social
sciences. The cumulative results of
these activities have guided efforts to
refine the NREPP review process and
review criteria, as well as inform the
agency’s plans for how such a system
may be used to promote greater
adoption of evidence-based
interventions within typical
community-based settings.
This Federal Register Notice (FRN)
provides an opportunity for interested
parties to become familiar with and
comment on SAMHSA’s plans for
expansion and use of NREPP.
DATES: Submit comments on or before
October 25, 2005.
ADDRESSES: Address all comments
concerning this notice to: SAMHSA
c/o NREPP Notice, 1 Choke Cherry
Road, Rockville, MD 20857. See
SUPPLEMENTARY INFORMATION for
information about electronic filing.
FOR FURTHER INFORMATION CONTACT:
Kevin D. Hennessy, PhD, Science to
Service Coordinator/SAMHSA, 1 Choke
Cherry Road, Room 8–1017, Rockville,
Maryland 20857. Dr. Hennessy may be
reached at (240) 276–2234.
SUPPLEMENTARY INFORMATION:
Electronic Access and Filing Addresses
You may submit comments by
sending electronic mail (e-mail) to
nrepp.comments@samhsa.hhs.gov.
Dated: August 18, 2005.
Charles G. Curie,
Administrator.
Overview
Increasingly, individuals and
organizations responsible for
purchasing, providing and receiving
services to prevent substance abuse and/
or treat mental and substance use
disorders are considering the extent to
which these services are ‘‘evidence-based’’—that there exists some degree of
documented scientific support for the
outcomes obtained by these services. As
the Federal agency responsible for
promoting the delivery of substance
abuse and mental health services,
SAMHSA is particularly interested in
supporting and advancing activities that
encourage greater adoption of effective,
evidence-based interventions to prevent
and/or treat mental and substance use
disorders. With this in mind, SAMHSA
proposes to refine and expand its
National Registry of Evidence-based
Programs and Practices (NREPP).
SAMHSA believes that the growth and
evolution of NREPP can serve as an
important mechanism for promoting
greater adoption of evidence-based
substance abuse and mental health
services—one that can do so in
conjunction with an ever-growing array
of scientific knowledge, clinical
expertise and judgment, and patient/
recipient values and perspectives. By
clearly identifying and assessing the
scientific basis and disseminability of a
range of behavioral interventions,
NREPP is likely to prove an important
resource to both individuals and
systems seeking information on the
effectiveness of various services to
prevent and/or treat mental and
substance use disorders.
Background and Need
As SAMHSA promotes the
identification and greater use of
effective, evidence-based interventions
for individual-, population-, policy-,
and system-level changes, the agency
seeks to build upon the strong
foundation provided by the precursor to
the National Registry of Evidence-based
Programs and Practices—namely, the
National Registry of Effective Prevention
Programs. The previous system provides
an important building block in the
agency’s efforts to develop a SAMHSA-wide registry.
The National Registry of Effective
Prevention Programs developed in
SAMHSA’s Center for Substance Abuse
Prevention (CSAP) beginning in 1997 as
a way to help professionals in the field
become better consumers of prevention
programs. Between 1997 and 2004,
NREPP reviewed and rated more than
1,100 prevention programs, with more
than 150 obtaining designation as a
Model, Effective, or Promising Program.
Information on all current NREPP
programs is available through the Model
Programs Web site at https://www.modelprograms.samhsa.gov.
Additional details about the review
process, review criteria, and rating
system for the National Registry of
Effective Prevention Programs are
available in the SAMHSA publication
‘‘Science-Based Prevention Programs
and Principles 2002,’’ which can be
downloaded from SAMHSA’s Model
Programs Web site (https://www.modelprograms.samhsa.gov) by
clicking on ‘‘Publications’’ on the tool
bar on the left side of the page; or by
requesting the publication through
SAMHSA’s National Clearinghouse for
Alcohol and Drug Information (NCADI)
at 1–800–729–6686 (or by visiting the
NCADI Web site at https://www.health.org).
As SAMHSA expands the NREPP
system, one area of potential
improvement is in the efficient
screening and triage of applications.
Given the historical applications trends
among substance abuse prevention
programs, combined with the increased
demands on the system through
expansion to other SAMHSA domains,
it is essential that the agency develop a
transparent and scientifically defensible
process for screening and triaging
applications.
Moreover, as SAMHSA engaged
NREPP scientific/stakeholder panels
over the past 2 years, concerns about the
existing review process and review
criteria emerged. In particular, a range
of scientific experts voiced concerns
regarding specific review criteria and
other elements of the review process.
In addition, systematic efforts to
examine and compare the current
NREPP review criteria with other
evidence-grading systems in the social
and behavioral sciences have revealed
both areas of relative strength and
relative weakness. At a minimum, this
comparison has affirmed for SAMHSA
the importance and value of
reexamining and refining the NREPP
review process and review criteria in
ways that reflect to the public
SAMHSA’s commitment to identifying
and promoting interventions that have
been shown to be effective through prevailing
scientific standards. One important
element of this process is providing
support for the re-review of existing
NREPP programs against these
prevailing scientific standards (see
below), while another component is
identifying both SAMHSA and other
mechanisms and resources for
supporting efforts to evaluate and
document the evidence-base of
innovative interventions in ways that
will maximize their opportunity for
entry into NREPP.
Further, SAMHSA’s experience with
NREPP to date suggests that the system
is limited in its ability to identify and
rate interventions designed to promote
population-, policy-, and system-level
outcomes, such as those promoted by
community prevention coalitions.
SAMHSA’s plans for NREPP include an
expansion of the system in this area. As
part of this expansion, SAMHSA
proposes a second set of review criteria
for these interventions, with the
recognition that some interventions may
be designed to affect a community over
time, and that the prevailing scientific
standards for assessing the effectiveness
of these interventions may indeed be
different than those for interventions
seeking to change individual-level
outcomes. Finally, input into the NREPP
process to date suggests the need for
SAMHSA to provide greater policy
guidance on how best to use the system
to appropriately select specific
interventions, as well as contextual
guidance on how NREPP might be used
in conjunction with other important
information—such as clinical expertise,
patient values, and administrative and
policy perspectives and data—in
making decisions regarding the
financing and delivery of substance
abuse and mental health services.
Proposal
After extensive consultation with both
scientific experts and a range of
stakeholders, SAMHSA is seeking your
comments on a proposal to advance a
voluntary rating and classification
system for mental health and substance
abuse prevention and treatment
interventions—a system designed to
categorize and disseminate information
about programs and practices that meet
established evidentiary criteria. This
proposal presents in detail the new
NREPP system, including refinements to
the review process and review criteria
for programs and practices in substance
abuse and mental health, as well as an
expansion of the system to include
successful community coalition efforts
to achieve population-, policy-, or
system-level outcomes. The proposal
also describes SAMHSA’s plans for a
new Web site that will highlight the
scientific evidence-base of
interventions, as well as provide a range
of practical information needed by those
who are considering implementation of
programs or practices on the Registry.
SAMHSA further anticipates that
additional revisions and refinements to
the NREPP system may be needed on a
periodic basis, and proposes the
formation of an external advisory panel
to regularly assist the agency in
assessing proposed suggestions for
improvements to the system (see
Question # 10 below).
Initial Input From the Field
Upon determining that SAMHSA
would expand NREPP to include
interventions in all agency domains,
three expert panel meetings of both
scientist and nonscientist stakeholders
were convened to provide feedback on
the current review system and review
criteria, as well as solicit suggestions
about redesigning the system to promote
the goals noted above. Each meeting was
conducted over a 2-day period, and
included invited participants
representing a range of relevant
organizations, expertise, and
perspectives. All meetings took place in
Washington, DC, in 2003, with mental
health experts meeting in April,
substance abuse prevention and mental
health promotion experts meeting in
September, and substance abuse
treatment experts meeting in December.
Transcripts of these meetings are
available on-line at the NREPP Web
page accessible through the ‘‘Quick
Picks’’ section on SAMHSA’s Home
page (https://www.samhsa.gov).
SAMHSA also convened a meeting in
May 2005 to solicit recommendations
for integrating evidence-based findings
from community coalitions into NREPP.
The 2-day meeting brought together
prominent researchers and practitioners
who reaffirmed the importance of
including prevention coalitions within
NREPP, and offered suggestions as to
the types of outcomes and evidence
criteria appropriate to the assessment of
community coalitions. A summary of
this meeting is available on-line at the
NREPP web page accessible through the
‘‘Quick Picks’’ section on SAMHSA’s
Home page (https://www.samhsa.gov).
Review Process for Determining
Individual-Level Outcome Ratings for
Interventions
A primary goal of the Registry is to
provide the public with reliable
information about the evidence quality
and strength of scientific support for
specific interventions. The strength of
scientific support includes: the quality
of evaluation design (e.g., experimental
or quasi-experimental designs); fidelity
to predetermined intervention
components; confidence in the link
between intervention components and
specific outcome(s) achieved; freedom
from internal and external sources of
bias and error; and the statistical
significance and practical magnitude
(e.g., effect size) of outcomes achieved.
An additional goal is to provide key
information about the transferability of
these programs and practices to real-world prevention and treatment
settings. NREPP utility descriptors
provide information about the
appropriate settings and populations for
implementation of specific
interventions, the availability of training
and dissemination materials, and their
practicality and costs.
This section describes the NREPP
review process, including the evidence
rating criteria and utility descriptors
that will form the basis for Web-based
information about programs and
practices.
Based on important feedback from
scientists and practitioners in the
prevention and treatment fields, the
NREPP review process has been
enhanced in several important respects:
—Programs and practices will be rated
on the strength of evidence for
specific outcomes achieved, rather
than on global assessments of the
effectiveness of intervention(s). In
addition, indicators of strength of
association or magnitude of outcome
effects, such as effect size statistics,
will be utilized in NREPP to
complement traditional, statistical
significance (null-hypothesis) testing.
—There will be multiple, outcome-specific
ratings of evidence quality and
strength. All programs and/or
practices listed on the Registry will be
considered ‘‘Effective’’ for
demonstrating specific outcomes
having varying levels of evidence
quality and confidence resulting from
independent (or applicant)
replication(s).
—Evidence rating criteria have been
refined and now emphasize
intervention impacts, evaluation
design and fidelity, quality of
comparison conditions, and
replications.
The section below is an overview of
the NREPP process for obtaining expert
reviewers’ ratings of the evidence
quality for outcome-specific program
and practice interventions. The process
includes an internal screening and
triage process conducted by qualified
NREPP contractor staff serving as review
coordinators, as well as an independent,
external scientific review process
conducted by qualified and trained
external scientists—working
independently of one another—to assess
the evidence quality of candidate
interventions.
I. Submitted Materials
Applicants will submit a variety of
documents that allow a panel of expert
reviewers to rate objectively the
evidence for an intervention’s impact on
substance abuse or mental health
outcomes. Materials submitted may
include:
• Descriptive program summaries,
including the specific outcomes
targeted;
• Research reports and published and
unpublished articles to provide
scientific evidence (experimental or
quasi-experimental studies) about the
effectiveness of the intervention for
improving specific outcomes;
• Documents to verify that
participants were assured privacy and
freedom from negative consequences
regardless of participation (used to
assess participant response bias, not
human subjects protections per se); and
• Documents to provide evidence that
outcomes and analytic approaches were
developed through a theory-driven or
hypothesis-based approach.
These materials will provide
SAMHSA and potential reviewers with
objective evidence to support
conclusions about the validity and
impact of the program or practice.
Reviewers must be assured that program
investigators did not capitalize on
chance findings or excessive
postintervention data analyses to find
effects that were not components of the
intervention design or theory.
II. Internal Review and Triage
Upon receipt, each set of materials
will be assigned to an NREPP review
coordinator (contractor staff), who will
inventory and document the contents of
the submission. The review coordinator
will contact the applicant by phone
and/or e-mail confirming receipt and
notifying the applicant if additional
application materials are required.
When all materials for a program have
been received by the review
coordinator, an internal review is
conducted to eliminate those programs
or practices that do not meet NREPP
minimum evidence-based standards.
These minimum standards are (1) An
intervention that is consistent with
SAMHSA’s matrix of program priority
areas; (2) one or more evaluations of the
intervention using an experimental or
quasi-experimental design; and (3)
statistically significant intervention
outcome(s) related to either the
prevention or treatment of mental or
substance use disorders. Non-reviewed
programs will receive written
notification of this decision, including
problematic or missing components of
their application, and may be
considered for re-review at a later date.
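Read as a screening rule, these minimum standards amount to three checks that must all pass before a submission moves to external review. The sketch below is illustrative only, under the assumption that a submission can be summarized in three fields; the field and function names are assumptions, not part of the notice:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    # Illustrative fields; the notice does not define a data format.
    in_priority_matrix: bool    # consistent with SAMHSA's matrix of program priority areas
    rigorous_evaluations: int   # count of experimental or quasi-experimental evaluations
    significant_outcome: bool   # statistically significant prevention/treatment outcome(s)

def passes_minimum_standards(s: Submission) -> bool:
    # All three NREPP minimum evidence-based standards must be met.
    return (s.in_priority_matrix
            and s.rigorous_evaluations >= 1
            and s.significant_outcome)
```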
SAMHSA will maintain oversight
over the entire NREPP application and
selection process. Moreover, SAMHSA’s
Administrator and Center Directors may
establish specific program and practice
areas for priority review.
III. Ratings by Reviewers
For all NREPP applications
determined appropriate for review,
three (3) independent, external
reviewers will evaluate and rate the
intervention. Reviewers are doctoral-level researchers and practitioners who
have been selected based on their
training and expertise in the fields of
mental health promotion or treatment
and/or substance abuse prevention or
treatment. Moreover, reviewers will be
thoroughly trained in the NREPP review
process, and will be monitored and
provided feedback periodically on their
performance. Of note, interventions
targeting individuals or populations
with co-occurring mental health and
substance use disorders (or other cross-domain initiatives) will be assigned to
reviewers across these categorical
domains.
To maintain the objectivity and
autonomy of the peer review process,
SAMHSA will not disclose to the public
or applicants the identities of individual
reviewers assigned to specific reviews.
On a periodic basis, SAMHSA may post
listings of reviewer panels within
specific SAMHSA domains as an
informational resource to the public.
Reviewers will be selected based on
their qualifications and expertise related
to specific interventions and/or
SAMHSA priority areas. In addition to
reviewers identified by SAMHSA,
NREPP will consider third-party and
self-nominations to become part of the
reviewer pool. All reviewers will
provide written assurance, to be
maintained on file with the NREPP
contractor, that they do not have a
current or previous conflict of interest
(e.g., financial or business interest) that
might impact a fair and impartial review
of specific programs or practices
applying for NREPP review.
Reviewers provide independent
assessments of the evidence quality and
provide numerical summary scores
across the 16 outcome-specific evidence
quality criteria. Each criterion is scored
on a 0 to 4 scale. The 16 evidence
quality criteria are presented below.
Individual-Level Outcome Evidence
Rating Criteria
1. Theory-Driven Measure Selection
Outcome measures for a study should
be selected before data are collected and
should be based on a priori theories or
hypotheses.
0 = The applicant selected the measure
after data collection for the apparent
purpose of obtaining more favorable
results than would be expected from
using the measures originally
planned, OR there is no
documentation of selection prior to
data analysis.
4 = Documentation shows that the
applicant selected the measure prior
to study implementation, OR the
measure was selected after study
inception, but before data analysis,
and is supported by a peer review
panel or literature related to study
theories or hypotheses.
2. Reliability
Outcome measures should have
acceptable reliability to be interpretable.
‘‘Acceptable’’ here means reliability at a
level that is conventionally accepted by
experts in the field.
0 = No evidence of measure reliability.
1 = Reliability coefficients indicate that
some but not all relevant types of
reliability (e.g., test-retest, inter-rater,
inter-item) are acceptable.
3 = All relevant types of reliability have
been documented to be at acceptable
levels in studies by the applicant.
4 = All relevant types of reliability have
been documented to be at acceptable
levels in studies by independent
investigators.
3. Validity
Outcome measures should have
acceptable validity to be interpretable.
‘‘Acceptable’’ here means validity at a
level that is conventionally accepted by
experts in the field.
0 = No evidence of measure validity, or
some evidence that the measure is not
valid.
1 = Measure has face validity.
3 = Studies by applicant show that
measure has one or more acceptable
forms of criterion-related validity that
are correlated with appropriate,
validated measures or objective
criteria; OR, as an objective measure
of response, there are procedural
checks to confirm data validity, but
they have not been adequately
documented.
4 = Studies by independent
investigators show that measure has
one or more acceptable forms of
criterion-related validity that are
correlated with appropriate, validated
measures or objective criteria; OR, as
an objective measure of response,
there are adequately documented
procedural checks that confirm data
validity.
4. Intervention Fidelity
The ‘‘experimental’’ intervention
implemented in a study should have
fidelity to the intervention proposed by
the applicant. Instruments that have
tested acceptable psychometric
properties (e.g., interrater reliability,
validity as shown by positive
association with outcomes) provide the
highest level of evidence.
0 = There is evidence the intervention
implemented was substantially
different from the one proposed.
1 = There is only narrative evidence that
the applicant or provider believes the
intervention was implemented with
acceptable fidelity.
2 = There is evidence of acceptable
fidelity in the form of judgment(s) by
experts, based on limited on-site
evaluation and data collection.
3 = There is evidence of acceptable
fidelity, based on the systematic
collection of data on factors such as
dosage, time spent in training,
adherence to guidelines or a manual,
or a fidelity measure with unspecified
or unknown psychometric properties.
4 = There is evidence of acceptable
fidelity from a tested fidelity
instrument shown to have reliability
and validity.
5. Comparison Fidelity
A study’s comparison condition
should be implemented with fidelity to
the comparison condition proposed by
the applicant. Instruments for
measuring fidelity that have tested
acceptable psychometric properties
(e.g., interrater reliability, validity as
shown by predicted association with
outcomes) provide the highest level of
evidence.
0 = There is evidence that the
comparison condition implemented
was substantially different from the one
proposed.
1 = There is only narrative evidence that
the applicant or provider believes the
comparison condition was
implemented with fidelity.
2 = Researchers report observational or
administrative data that the
comparison condition was
implemented with fidelity.
3 = Documentation confirms that
comparison group participants did
not receive interventions that were
very similar or identical to
intervention participants, AND there
is documentation of degree of
participation in any comparison
conditions such as lectures or
treatment.
4 = There is evidence from a tested
instrument suggesting that the
comparison condition was
implemented with fidelity.
6. Nature of Comparison Condition
The quality of evidence for an
intervention depends in part on the
nature of the comparison condition(s),
including assessments of their active
components and overall effectiveness.
Interventions have the potential to cause
more harm than good; therefore, an
active comparison intervention should
be shown to be better than no treatment.
0 = There was no comparison condition.
1 = The comparison condition is an
active intervention that has not been
proven to be better than no treatment.
2 = The comparison condition is no
service or wait-list, or an active
intervention shown to be as effective
as or better than no treatment.
3 = The comparison condition is an
attention control.
4 = The comparison condition was
shown to be as safe or safer and more
effective than an attention control.
7. Assurances to Participants
Study participants should always be
assured that their responses will be kept
confidential and not affect their care or
services. When these procedures are in
place, participants are more likely to
disclose valid data.
0 = There was no effort to encourage
and reassure subjects about privacy
and that consent or participation
would have no effect on services.
1 = Data collector was the service
provider, AND there were
documented assurances to
participants about privacy and that
consent or participation would have
no effect on care or services.
2 = Data collector was not the service
provider. There were indications, but
no documentation, that participants
were assured about their privacy and
that consent or participation would
have no effect on care or services.
4 = Data collector was not the service
provider, AND there were
documented assurances to
participants about privacy and that
consent or participation would have
no effect on care or services; OR, data
were not collected directly from
participants.
8. Participant Expectations
Participants can be biased by how an
intervention is introduced to them and
by an awareness of their study
condition. Information used to recruit
and inform study participants should be
carefully crafted to equalize
expectations. Masking treatment
conditions during implementation of
the study provides the strongest control
for participant expectancies.
0 = Investigators did not make adequate
attempts to mask study conditions or
equalize expectations among
participants in the experimental and
comparison conditions, or differential
participant expectations were
measured and found to be too great to
control for statistically.
2 = Investigators attempted to mask
study conditions or equalize
expectations among participants in
the experimental and comparison
conditions. Some participants
appeared likely to have known their
study condition assignment
(experimental or comparison).
3 = Investigators attempted to mask
study conditions or equalize
expectations among participants in
the experimental and comparison
conditions. Some participants
appeared likely to have known their
study condition assignment
(experimental or comparison), but
these differential participant
expectations were measured and
appropriately controlled for
statistically.
4 = Investigators adequately masked
study conditions. Participants did not
appear likely to have known their
study condition assignment.
9. Standardized Data Collection
All outcome data should be collected
in a standardized manner. Data
collectors trained and monitored for
adherence to standardized protocols
provide the highest quality evidence of
standardized data collection.
0 = Applicant did not use standardized
data collection protocols.
2 = Data was collected using
standardized protocol and trained
data collectors.
3 = Data was collected using
standardized protocol and trained
data collectors, with evidence of good
initial adherence by data collectors to
the standardized protocol.
4 = Data was collected using
standardized protocol and trained
data collectors, with evidence of good
initial adherence by data collectors to
the standardized protocol and
evidence of data collector retraining
when necessary to control for rater
‘‘drift.’’
10. Data Collector Bias
Data collector bias is most strongly
controlled when data collectors are not
aware of the conditions to which study
participants have been assigned. When
data collectors are aware of specific
study conditions, their expectations
should be controlled for through
training and/or statistical methods.
0 = Data collectors were not masked to
participants’ conditions, and nothing
was done to control for possible bias,
OR collector bias was measured and
found to be too great to control for
statistically.
2 = Data collectors were not masked to
participants’ conditions, but data
collectors received training to reduce
possible bias.
3 = Data collectors were not masked to
participants’ conditions; possible bias
was appropriately controlled for
statistically.
4 = Data collectors were masked to
participants’ conditions.
11. Selection Bias
Concealed random assignment of
participants provides the strongest
evidence of control for selection bias.
When participants are not randomly
assigned, covariates and confounding
variables should be controlled as
indicated by theory and research.
0 = There was no comparison condition,
OR participants or providers selected
conditions.
3 = Participants were not assigned
randomly, but researchers controlled
for theoretically relevant confounding
variables, OR participants were
assigned with non-concealed
randomization.
4 = Selection bias was controlled with
concealed random assignment.
12. Attrition
Study results can be biased by
participant attrition. Statistical methods
as supported by theory and research can
be employed to control for attrition that
would bias results, but studies with no
attrition needing adjustment provide the
strongest evidence that results are not
biased.
0 = Attrition was taken into account
inadequately, OR there was too much
attrition to control for bias.
1 = No significant differences were
found between participants lost to
attrition and remaining participants.
2 = Attrition was taken into account by
simpler methods that crudely estimate
data for missing observations.
3 = Attrition was taken into account by
more sophisticated methods that
model missing data, observations, or
participants.
4 = There was no attrition, OR there was
no attrition needing adjustment.
13. Missing Data
Study results can be biased by
missing data. Statistical methods as
supported by theory and research can be
employed to control for missing data
that would bias results, but studies with
no missing data needing adjustment
provide the strongest evidence.
0 = Missing data were an issue and were
taken into account inadequately, OR
levels of missing data were too high
to control for bias.
1 = Missing data were an issue and were
taken into account, but high quantity
makes the control for bias suspect.
2 = Missing data were an issue and were
taken into account by simpler
methods (mean replacement, last
point carried forward) that
simplistically estimate missing data;
control for missing data is plausible.
3 = Missing data were an issue and were
taken into account by more
sophisticated methods that model
missing data; control for missing data
very plausible.
4 = Missing data were not an issue.
14. Analysis Meets Data Assumptions
The appropriateness of statistical
analyses is a function of the properties
of the data being analyzed and the
degree to which the data meet statistical
assumptions.
0 = Analyses were clearly inappropriate
to the data collected; severe
violation(s) of assumptions make
analysis uninterpretable.
1 = Some data were analyzed
appropriately, but for other analyses
important violation(s) of assumptions
cast doubt on interpretation.
2 = There were minor violations of
assumptions for most or all analyses,
making interpretation of results
arguable.
3 = There were minor violations of
assumptions for only a few analyses;
results were generally interpretable.
4 = There were no violations of
assumptions for any analysis.
15. Theory-Driven Selection of Analytic
Methods
Analytic methods should be selected
for a study based on a priori theories or
hypotheses underlying the intervention.
Changes to analytic methods after initial
data analysis (e.g., to ‘‘dredge’’ for
significant results) decrease the
confidence that can be placed in the
findings.
0 = Analysis selected appears
inconsistent with the intervention
theory or hypotheses; insufficient
rationale provided by the investigator.
1 = Analysis selected appears
inconsistent with the intervention
theory or hypotheses, but applicant
provides a potentially viable
rationale.
3 = Analysis is widely accepted by the
field as the most consistent with
study theory or hypotheses; no
documentation showing methods
were selected prior to data analysis.
4 = Analysis is widely accepted by the
field as the most consistent with
study theory or hypotheses;
documentation shows that methods
were selected prior to data analysis.
16. Anomalous Findings
Findings that contradict the theories
and hypotheses underlying an
intervention suggest the possibility of
confounding causal variables and limit
the validity of study findings.
0 = There were anomalous findings
suggesting alternate explanations for
outcomes reported.
4 = There were no anomalous findings,
OR researchers explained anomalous
findings in a way that preserves the
validity of results reported.
Based upon the independent reviewer
assessments, review coordinators will
compute average evidence quality
ratings for specific outcome measures
(based on the 16 evidence quality
criteria), and then ask reviewers to
determine the overall intervention
outcome evidence ratings according to
two components: quality of evidence
and intervention replications. Average
evidence quality rating scores below
2.0 will be considered ‘‘insufficient
current evidence’’ for the effectiveness
of a given outcome, and will not be
included in the Registry. Evidence
quality rating scores of 2.0 to 2.5 will be
considered ‘‘emerging evidence’’ for
effectiveness, and scores of 2.5 and
higher (4.0 is the maximum) will be
considered ‘‘strong evidence.’’
Specific rating category labels for
effective outcomes remain to be
finalized, but might include categories
such as: (1) Strong evidence with
independent replication(s); (2) Strong
evidence with developer replication(s);
(3) Strong evidence without replication;
(4) Emerging evidence with
independent replication(s); (5) Emerging
evidence with developer replication(s);
and (6) Emerging evidence without
replication.
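Read together, the two preceding paragraphs describe a small classification routine: average the 16 criterion scores, bucket the average into an evidence level, then qualify the level by replication status. The sketch below is illustrative only; the notice leaves the boundary at exactly 2.5 ambiguous (both ranges include it), so treating 2.5 as ‘‘strong’’ is an assumption, and the replication encoding is hypothetical:

```python
def evidence_level(avg_score: float) -> str:
    # avg_score is the mean of the 16 criterion scores, each rated 0 to 4.
    # Below 2.0 = insufficient; 2.0 up to 2.5 = emerging; 2.5+ = strong.
    # Treating exactly 2.5 as "strong" is an assumption (the two stated
    # ranges in the notice overlap at that point).
    if avg_score < 2.0:
        return "insufficient current evidence"
    if avg_score < 2.5:
        return "emerging evidence"
    return "strong evidence"

def rating_category(avg_score: float, replication: str) -> str:
    # replication is an illustrative encoding: "independent", "developer",
    # or "none". Outcomes with insufficient evidence are not listed.
    level = evidence_level(avg_score)
    if level == "insufficient current evidence":
        return level
    suffix = {
        "independent": "with independent replication(s)",
        "developer": "with developer replication(s)",
        "none": "without replication",
    }[replication]
    return f"{level.capitalize()} {suffix}"
```

For example, rating_category(2.7, "developer") would yield ‘‘Strong evidence with developer replication(s),’’ matching draft category (2) above.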
IV. Applicant Notification
Applicants will be notified in writing
of their final NREPP rating category(s)
by the SAMHSA Administrator or his/
her designee within 2 weeks following
the completion of the review. This
notification will include the summary
comments of reviewers as well as the
consensus ratings on each review
criterion. Where relevant, the notification
letter will provide applicants with the
effective date of the program status
designation.
Applicants will have the opportunity
to appeal any review decision by
submitting a written request to the
NREPP contractor within 30 days of
notification. Appeals will be resolved
through the assignment of two (2)
additional reviewers to conduct focused
reviews of the evidence quality for
specific, disputed outcome ratings.
Reviews conducted as part of a formal
appeal process will be independent and
reviewers will be unaware of previous
ratings and decisions. The numeric
evidence ratings will be averaged across
the five (i.e., 3 original; 2 appeal)
reviews for a final determination of
intervention outcome rating(s).
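As a worked example of this averaging rule (all scores hypothetical):

```python
from statistics import mean

# Hypothetical summary scores for one disputed outcome rating:
original_reviews = [2.8, 3.0, 2.2]  # the three original reviewers
appeal_reviews = [2.6, 2.9]         # the two appeal reviewers

# The final determination averages across all five reviews:
final_rating = mean(original_reviews + appeal_reviews)
print(final_rating)  # (2.8 + 3.0 + 2.2 + 2.6 + 2.9) / 5 = 2.7
```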
V. Utility Descriptors
The NREPP utility descriptors will
provide information to program
purchasers and planners, service
providers, consumers and the general
public about the transferability of
intervention technologies to different
(including non-research-based) settings.
These descriptors complement NREPP’s
scientific evidence-based program and
practice ratings with information
pertaining to the following dimensions:
1. Implementation. What kinds of
materials are available to support
implementation and what audiences are
they appropriate for? What kinds of
training and training resources are
available to support implementation?
2. Quality Monitoring. What tools,
procedures, and documentation are
available to support quality monitoring
and quality improvement as the
program or practice is implemented?
3. Unintended or Adverse Events.
What procedures, systems and data have
been identified to indicate whether the
intervention has ever resulted in
unintended or adverse events?
4. Population Coverage. Were the
study samples recruited representative
of the persons identified to receive the
intervention in the theory/framework?
5. Cultural Relevance and Cultural
Competence. Were the outcomes
demonstrated to be effective and
applicable to specific demographic and
culturally defined groups? If so, are
training and other implementation
materials available to promote culturally
competent implementation of the
intervention?
6. Staffing. What is the aggregate level of
staffing (e.g., FTEs) required to
implement the intervention effectively?
What are the individual skills, expertise,
and training required of staff to deliver
the intervention?
7. Cost. What are the estimated start-up and annual costs per person served
and unit of service for the intervention
at full operation?
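One way to picture the descriptor set above is as a simple record attached to each listed intervention. The sketch below is illustrative only; the field names paraphrase the seven dimensions and are not an official NREPP schema:

```python
from dataclasses import dataclass

@dataclass
class UtilityDescriptors:
    # Paraphrased from the seven dimensions above; not an official schema.
    implementation_materials: str   # materials and training supporting implementation
    quality_monitoring: str         # tools and documentation for quality monitoring
    unintended_adverse_events: str  # procedures/data for detecting adverse events
    population_coverage: str        # representativeness of study samples
    cultural_relevance: str         # effectiveness/materials for specific groups
    staffing: str                   # aggregate staffing (e.g., FTEs) and required skills
    cost: str                       # estimated start-up and annual costs per person/unit
```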
Interventions with outcomes
achieving any one of the effective
statuses will be asked to provide
descriptive information about the
intervention’s readiness for
implementation, appropriateness for
different populations, freedom from
unintended or adverse effects, and
staffing and cost requirements. These
utility descriptors will be featured on
the NREPP Web site to help assure a
proper match between specific
prevention and treatment interventions
and the settings and populations to
which they are most appropriate.
In light of SAMHSA’s commitment to
consumer and family involvement, the
agency is seeking ways to ensure that
these groups provide input into the
assessment of interventions that achieve
NREPP status. SAMHSA seeks
suggestions regarding the most useful
and efficient way to conduct this
process (see Question 7 below).
Review Process for Determining
Population-, Policy-, and System-Level
Outcome Ratings for Interventions
The NREPP Evidence Rating Criteria
for Population-, Policy-, and System-Level Outcomes are proposed as the
basis for reviewer ratings of outcomes
generated by community prevention
coalitions and other environmental
interventions to promote resiliency and
recovery at the community level.
SAMHSA’s rationale for use of these
separate criteria comes through a
recognition that some interventions may
be designed to affect a community over
time, and that the prevailing scientific
standards for assessing the effectiveness
of these interventions may indeed be
different than those for interventions
seeking to change individual-level
outcomes.
An outcome of a prevention or
treatment intervention qualifies for
review under these 12 criteria only
when it can be included in one of the
following three categories:
1. Population-Level Outcome—
measures the effect of an intervention on
an existing, predefined population.
Examples of such existing, predefined
populations include ‘‘all youth residing
in a neighborhood,’’ ‘‘all female
employees of a manufacturing plant,’’ or
‘‘all Native Americans receiving social
services from a tribal government.’’ ‘‘All
patients receiving a specific treatment,’’
in contrast, cannot be defined as an
existing, predefined population because
that population would have come into
existence as a direct response to the
intervention.
2. Policy-Level Outcome—measures
the effect of an intervention on
enactment, maintenance, or
enforcement of policies that are
assumed to have a positive aggregate
impact on resiliency or recovery.
Examples of such outcomes include
‘‘the rate of passage of legislation
restricting access to alcoholic
beverages’’ or ‘‘the percentage of arrests
for illicit drug manufacturing that result
in convictions.’’
3. System-Level Outcome—measures
the effect of an intervention on
prevention and treatment capacity,
efficiency, or effectiveness in an existing
system or community. Examples of such
outcomes include ‘‘increased capacity of
a State government to quantify alcohol- or drug-related problems’’ or ‘‘increased
effectiveness of a community treatment
system to respond to the comprehensive
needs of individuals with Axis I mental
health diagnoses.’’
To support the transparency of the
review process, SAMHSA wants
stakeholders to understand clearly the
NREPP procedures and decision-making
processes. All community coalition
interventions included in NREPP will
have demonstrated evidence of
effectiveness at the population, policy,
or system level. The ratings will
indicate the strength of the supporting
evidence, and may be as follows:
(1) Strong evidence with replication;
(2) Strong evidence without
replication;
(3) Emerging evidence with
replication; and
(4) Emerging evidence without
replication.
All NREPP evidence ratings are
defined at the level of specific
outcomes. The 12 evidence rating
criteria used for population-, policy-, and system-level outcomes, summarized
as an average Evidence Quality Score
(EQS) for each outcome, allow
independent expert reviewers to score
along dimensions of outcome
measurement, intervention fidelity,
comparison conditions, participant and
data collector biases, design and
analysis, and anomalous findings. Each
of the 12 criteria is assessed by
independent reviewers on a 0 to 2 scale,
in which a ‘‘1’’ indicates that
methodological rigor may have been
acceptable and a ‘‘2’’ indicates that
adequate methodological rigor was
achieved for this type of outcome.
Preliminary discussions of
classifications have suggested that
‘‘Strong evidence’’ be defined as an
average EQS of 1.75 or above (out of a
possible 2.0), and that ‘‘Emerging
evidence’’ be defined as an average EQS
between 1.50 and 1.74 (out of a possible
2.0).
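Under these preliminary definitions, an outcome's classification reduces to the mean of twelve 0-to-2 criterion scores. A minimal sketch follows; how outcomes scoring below 1.50 are labeled is not stated in the notice, so the fallback label here is an assumption:

```python
from statistics import mean

def eqs_classification(criterion_scores: list[int]) -> str:
    # Twelve criteria, each scored 0, 1, or 2 by independent reviewers.
    assert len(criterion_scores) == 12
    eqs = mean(criterion_scores)  # average Evidence Quality Score (EQS)
    if eqs >= 1.75:
        return "Strong evidence"
    if eqs >= 1.50:
        return "Emerging evidence"
    # The notice does not say how scores below 1.50 are labeled; presumably
    # the outcome would not qualify for listing.
    return "Below emerging-evidence threshold"
```

For example, nine criteria scored 2 and three scored 1 give an EQS of 21/12 = 1.75, which these preliminary definitions would classify as ‘‘Strong evidence.’’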
The 12 criteria applied to each
population-, policy-, or system-level
outcome measure are:
1. Logic-Driven Selection of Measures
2. Reliability
3. Validity
4. Intervention Fidelity
5. Nature of Comparison Condition
6. Standardized Data Collection
7. Data Collector Bias
8. Population Studied
9. Missing Data
10. Analysis Meets Data Assumptions
11. Theory-Driven Selection of
Analytic Methods
12. Anomalous Findings
Outcome Measurement Criteria
1. Logic-Driven Selection of Measures
Outcome measures should be based
on a theory or logic model that
associates them with the intervention.
0 = The applicant appears to have
selected outcome measures for the
purpose of identifying favorable
results rather than from a logic-based
rationale.
1 = There is no explicit description of
a guiding logic model or theory for
measures, although a rationale for the
inclusion of most measures can be
inferred.
2 = Measures are supported by a theory
or logic model that associates the
intervention with the outcome.
2. Reliability
Outcome measures should have
acceptable reliability to be interpretable.
‘‘Acceptable’’ here means reliability at a
level that is conventionally accepted by
experts in the field.
0 = No evidence of reliability of
measures is presented.
1 = Relevant reliability measures are in
the marginal range.
2 = Relevant reliability measures are in
clearly acceptable ranges.
3. Validity
Outcome measures should have
acceptable validity to be interpretable.
0 = No evidence of validity of measures
is presented, or evidence that is
presented suggests measures are not
valid.
1 = Measure has face validity.
2 = Relevant validity has been
documented to be at acceptable levels
in independent studies.
4. Intervention Fidelity
The intervention should be well
defined and its implementation should
be described in sufficient detail to
assess whether implementation affected
outcomes.
0 = The intervention and/or its
implementation are not described in
sufficient detail to verify that the
intervention was implemented as
intended.
1 = The intervention and its
implementation are described in
adequate detail, including
justification for significant variation
during implementation.
2 = The intervention and its
implementation are described in
adequate detail, reflecting variation
during implementation with little or
no plausible impact on outcomes.
5. Nature of Comparison Condition
The quality of evidence for an
intervention depends in part on the
nature of the comparison condition(s).
0 = Research design either lacks a
comparison condition, or employs a
before/after comparison.
1 = Comparison condition was no
service or wait-list (including baseline
comparison for a multipoint time
series), or an active intervention that
has not been shown to be safer or
more effective than no service.
2 = Comparison condition was an active
intervention shown to be as safe as, or
safer and more effective than, no
service.
6. Standardized Data Collection
All outcome data should be collected
in a standardized manner. Data
collectors trained and monitored for
adherence to standardized protocols
provide the highest quality evidence of
standardized data collection.
0 = Data collection or archival sources
used by the evaluation to assess
outcome did not use standardized
data collection protocol(s).
1 = All outcome data were collected
using standardized protocol(s).
2 = All outcome data were collected
using standardized protocol(s) and
trained data collectors.
7. Data Collector Bias
Data collector bias is most strongly
controlled when data collectors are not
aware of the interventions to which
populations have been exposed. When
data collectors are aware of specific
interventions, their expectations should
be controlled for through training and/
or statistical analysis methods on
resultant data.
0 = Data collectors were not masked to
the population’s condition, and
nothing was done to control for
possible bias, OR collector bias was
identified and not controlled for
statistically.
1 = Data collectors were not masked to
the population’s condition; possible
bias was appropriately controlled for
statistically or through training.
2 = Data collectors were masked to the
population’s condition, or only
archival data was employed.
8. Population Studied
0 = A single group pre/posttest design
was applied without a comparison
group, OR the alleged comparison
group is significantly different from
the population receiving the
intervention.
1 = Population(s) were studied using
time trend analysis, multiple baseline
design, or a regression-discontinuity
design that uses within-group
differences as a substitute for
comparison groups.
2 = Population matching or similar
techniques were used to compare
outcomes of the population that received
the intervention with the outcomes of
a valid comparison group.
9. Missing Data
Study results can be biased by
missing data. Statistical methods as
supported by theory and research can be
employed to control for missing data
that would bias results, but studies with
no missing data needing adjustment
provide the strongest evidence.
0 = Missing data were an issue and were
taken into account inadequately, OR
levels of missing data were too high
to control for bias.
1 = Missing data were an issue and were
taken into account, but high quantity
makes the control for bias suspect.
2 = Missing data were not an issue or
were taken into account by methods
that estimate missing data.
10. Analysis Meets Data Assumptions
The appropriateness of statistical
analysis is a function of the properties
of the data being analyzed and the
degree to which data meet statistical
assumptions.
0 = Analyses were clearly inappropriate
to the data collected; severe
violation(s) of assumptions make
analysis uninterpretable.
1 = There were minor violations of
assumptions, making interpretation of
results arguable.
2 = There were no or only very minor
violations of assumptions; results were
generally interpretable.
11. Theory-Driven Selection of Analytic
Methods
In addition to the properties of the
data, analytic methods should be based
on a logic model or theory underlying
the intervention. Changes to analytic
methods after initial data analysis (e.g.,
to dredge for significant results)
decrease the confidence that can be
placed in the findings.
0 = Analysis selected appears
inconsistent with the intervention
theory or hypotheses; insufficient
rationale was provided by the
investigator.
1 = Analysis selected appears
inconsistent with the intervention
logic model or hypotheses, but the
investigator provides a potentially
viable rationale.
2 = Analysis is widely accepted by the
field as consistent with the
intervention logic model or
hypotheses.
12. Anomalous Findings
Findings that contradict the theories
and hypotheses underlying an
intervention suggest the possibility of
confounding causal variables and limit
the validity of study findings.
0 = There were anomalous findings
suggesting alternate explanations for
outcomes reported that were not
acknowledged by the applicant.
1 = There were a few anomalous
findings, but additional analysis or
previous literature cited by the
applicant provide a reasonable
explanation.
2 = There were no anomalous findings,
OR researchers explained anomalous
findings in a way that preserves the
validity of results reported.
Re-Review of Existing NREPP Programs
As noted above, SAMHSA believes it
is important to ensure that both current
and future NREPP interventions meet
consistent scientific standards so that
the public and other interested
stakeholders can be confident in the
effectiveness of these interventions.
With this in mind, SAMHSA is
committed to expeditiously re-reviewing all existing NREPP programs
under the new process. As part of this
effort, SAMHSA plans to provide—
directly or indirectly—sufficient
resources to each existing NREPP
program to cover costs associated with
a re-review. In addition, programs
already received by NREPP and pending
review will be reviewed under the new
process. If these pending programs need additional support with the new review process, those resources will also be provided.
In order to accomplish these re-reviews efficiently and expeditiously,
NREPP review coordinators will work
with each program to obtain any
additional documentation that might be
needed for re-review. These review
coordinators will then conduct a re-review of each program against the new
review criteria. Programs with favorable
re-reviews will be included in the new
NREPP system when it is launched in
2006. Programs not receiving favorable
re-reviews will have the opportunity to
appeal the re-review decision, and will
be eligible for re-review by independent,
external reviewers. However, the
schedule for re-reviews of appealed
programs will be subject to SAMHSA
Administrator and SAMHSA Center
Director review priorities.
New Web Site
The primary goal of the revised
NREPP Web site—https://
www.nationalregistry.samhsa.gov—will
be to provide the public with
contemporary and reliable information
about the scientific basis and
practicality of interventions to prevent
and treat mental and substance use
disorders. All interventions achieving
NREPP status will be listed on the Web
site. Average ratings and evaluative scores
from scientific peer reviewers, as well as
information on the utility and
transferability of these interventions,
will be posted on the site.
In addition, a searchable outcomes
database of evidence-based
interventions will be a key feature. The
Web site will also contain a variety of
learning and self-assessment tools for
prospective and current NREPP
interventions to continuously improve
their scientific evidence base. Features
of the new Web site will include:
• Evidence rating criteria and utility
descriptors
• Detailed review guidelines
• Self-assessment tool to assist
interventions in determining if they are
ready to submit an application to
NREPP
• Detailed information on how to
apply
• Links to technical assistance
resources available to potential
applicants
• Relevant resources, including
publications, presentations, links, and
other supplemental materials
• Frequently Asked Questions (FAQs)
• Information on how to contact a
representative from the NREPP team
• Glossary of terms
Support for Innovative Interventions
SAMHSA recognizes that the long-term utility and value of NREPP rests,
in part, on the ability of SAMHSA and
others to support efforts to evaluate and
document the evidence-base of
innovative interventions in ways that
will maximize their opportunity for
entry into NREPP. SAMHSA is
considering potential options for both
the direct and indirect provision of such
support, and will seek to clarify its
intentions in this area sometime in
Fiscal Year 2006.
Questions To Consider in Making Your
Comments
Responders should feel free to comment on any or all of the questions, as well as to provide relevant suggestions not addressed by the specific questions. In
order to facilitate the compilation and
analysis of comments, responders are
asked to be explicit about the questions
to which they are responding.
1. SAMHSA is seeking to establish an
objective, transparent, efficient, and
scientifically defensible process for
identifying effective, evidence-based
interventions to prevent and/or treat
mental and substance use disorders. Is
the proposed NREPP system—including
the suggested provisions for screening
and triage of applications, as well as
potential appeals by applicants—likely
to accomplish these goals?
2. SAMHSA’s NREPP priorities are
reflected in the agency’s matrix of
program priority areas. How might
SAMHSA engage interested
stakeholders on a periodic basis in
helping the agency determine
intervention priority areas for review by
NREPP?
3. There has been considerable
discussion in the scientific literature on
how to use statistical significance and
various measures of effect size in
assessing the effectiveness of
interventions based upon both single
and multiple studies (Schmidt & Hunter, 1995; Rosenthal, 1996; Mason, Scott, Chapman, & Tu, 2000; Rutledge & Loh, 2004). How should SAMHSA use statistical significance and measures of effect size in NREPP? (A toy illustration of the distinction follows these questions.) Note that SAMHSA would appreciate receiving citations for published materials elaborating upon responders' suggestions in this area.
4. SAMHSA’s proposal for NREPP
would recognize as effective several
categories of interventions, ranging from
those with high-quality evidence and
more replication to those with lower
quality evidence and fewer replications.
This would allow for the recognition of
emerging as well as fully evidence-based interventions. Some view this as
a desirable feature that reflects the
continuous nature of evidence; provides
important options for interventions
recipients, providers, and funders when
no or few fully evidence-based
interventions are available; and helps
promote continued innovation in the
development of evidence-based
interventions. Others have argued that
several distinct categories will confuse
NREPP users. Please comment on
SAMHSA’s proposal in this area.
5. SAMHSA recognizes the
importance of considering the extent to
which interventions have been tested
with diverse populations and in diverse
settings. Therefore, the agency
anticipates incorporating this
information into the Web site
descriptions of interventions listed on
NREPP. This may allow NREPP users to
learn if interventions are applicable to
their specific needs and situations, and
may also help to identify areas where
additional studies are needed to address
the effectiveness of interventions with
diverse populations and in diverse
locations.
SAMHSA is aware that more evidence
is needed on these topics. Please
comment on SAMHSA’s approach in
this area.
6. To promote consistent, reliable, and
transparent standards to the public,
SAMHSA proposes that all existing
programs on NREPP meet the prevailing
scientific criteria described in this
proposal, and that this be accomplished
through required re-reviews of all
programs currently on NREPP.
SAMHSA has considered an alternative
approach that would ‘‘grandfather’’ all
existing NREPP programs under the new
system, but would provide clear
communication that these existing
programs have not been assessed against
the new NREPP scientific standards.
Please comment on which approach you
believe to be in the best interests of
SAMHSA stakeholders.
7. What types of guidance, resources,
and/or specific technical assistance
activities are needed to promote greater
adoption of NREPP interventions, and
what direct and indirect methods
should SAMHSA consider in advancing
this goal?
8. SAMHSA is committed to
consumer, family, and other
nonscientist involvement in the NREPP
process. The panels convened by
SAMHSA and described earlier in this
notice suggested that these stakeholders
be included specifically to address
issues of intervention utility and
practicality. Please comment on how
consumer, family, and other
nonscientist stakeholders could be
involved in NREPP.
9. SAMHSA has identified NREPP as
one source of evidence-based
interventions for selection by potential
agency grantees in meeting the
requirements related to some of
SAMHSA’s discretionary grants. What
guidance, if any, should SAMHSA
provide related to NREPP as a source of
evidence-based interventions for use
under the agency’s substance abuse and
mental health block grants?
10. SAMHSA believes that NREPP
should serve as an important, but not exclusive, source of evidence-based
interventions to prevent and/or treat
mental and substance use disorders.
What steps should SAMHSA take to
promote consideration of other sources
(e.g., clinical expertise, consumer or
recipient values) in stakeholders’
decisions regarding the selection,
delivery and financing of mental health
and substance abuse prevention and
treatment services?
11. SAMHSA anticipates that once
NREPP is in operation, various
stakeholders will make suggestions for
improving the system. To consider this
input in a respectful, deliberate, and
orderly manner, SAMHSA anticipates
annually reviewing these suggestions.
These reviews would be conducted by
a group of scientist and nonscientist
stakeholders knowledgeable about
evidence in behavioral health and the
social sciences. Please comment on
SAMHSA’s proposal in this area.
References
Mason, CA, Scott, KG, Chapman, DA, Tu,
S. A review of some individual and
community level effect size indices for the
study of risk factors for child and adolescent
development. Ed. Psychol. Measurement.
2000; 60(3): 385–410.
Rosenthal, JA. Qualitative descriptors of
strength of association and effect size. J. of
Soc. Serv. Research. 1996; 21(4): 37–59.
Rutledge, T, Loh, C. Effect size and
statistical testing in the determination of
clinical significance in behavioral medicine
research. Annals of Beh. Med. 2004; 27(2):
138–145.
Schmidt, F, Hunter, JE. The impact of data-analysis methods on cumulative research
knowledge: statistical significance testing,
confidence intervals, and meta-analysis. Eval.
Health Prof. 1995 Dec; 18(4): 408–27.
[FR Doc. 05–17034 Filed 8–25–05; 8:45 am]
BILLING CODE 4160–01–M
DEPARTMENT OF HOMELAND
SECURITY
Customs and Border Protection
Entries of Antidumping and/or
Countervailing Duties Destroyed
September 11, 2001
AGENCY: Customs and Border Protection; Department of Homeland Security.
ACTION: General notice.
SUMMARY: The Bureau of Customs and
Border Protection (CBP) suspends the
liquidation of entries of merchandise
subject to antidumping and/or
countervailing duties (AD/CVD) until
liquidation instructions are received
from the Department of Commerce. Due
to the extended liquidation cycle of AD/
CVD entries, CBP is only now beginning
to receive liquidation instructions from
the Department of Commerce for many
AD/CVD entries from previous years.
Unfortunately, AD/CVD entry
documents which were maintained by
CBP at 6 World Trade Center in New
York, New York, were destroyed in the
terrorist attack of September 11, 2001.
This notice announces that CBP is
providing importers with the option to
provide a reconstructed entry summary
package to CBP for liquidation of these
entries. Failure by the importer to
provide a reconstructed entry summary
package within the time frame described
in this notice may result in liquidation
by CBP of the entry, or entries, based
upon the information available within
the Automated Commercial System
(ACS).
DATES: If a reconstructed entry summary package is not received by the Bureau of Customs and Border Protection within 30 days following publication by the Department of Commerce that suspension of the liquidation of the subject entry, or entries, has been lifted, and the Department of Commerce has issued final assessment instructions, CBP will begin liquidating the entries based on the information available in ACS.
ADDRESSES: The reconstructed entry package should be mailed to: Customs and Border Protection, ATTN: ADCVD 6WTC Reconstructed Entry(s), 1100 Raymond Boulevard, Newark, NJ 07102.
FOR FURTHER INFORMATION CONTACT:
Christine Furgason, Office of Field
Operations, (202) 344–2293. For
inquiries about specific entry summary
packages: Walter Springer, Supervisory
Import Specialist, Newark, N.J., (973)
368–6785. Importers, or their
representatives, may also directly
contact the Import Specialist Teams to
whom the entries were assigned. A
party making a telephonic inquiry
regarding a specific entry summary
package should be prepared to provide
its importer name and identification
number.
SUPPLEMENTARY INFORMATION:
Background
U.S. Antidumping and Countervailing Duty (AD/CVD) laws are intended to protect domestic industries from injury caused by imported merchandise sold at less than fair value or subsidized by foreign governments.
Agencies
[Federal Register Volume 70, Number 165 (Friday, August 26, 2005)]
[Notices]
[Pages 50381-50390]
From the Federal Register Online via the Government Printing Office [www.gpo.gov]
[FR Doc No: 05-17034]
-----------------------------------------------------------------------
DEPARTMENT OF HEALTH AND HUMAN SERVICES
Substance Abuse and Mental Health Services Administration (SAMHSA)
Notice: Request for Comments; National Registry of Evidence-Based
Programs and Practices (NREPP)
Authority: Sec. 501, Pub. L. 106-310
SUMMARY: The Substance Abuse and Mental Health Services Administration
(SAMHSA) is committed to preventing the onset and reducing the
progression of mental illness, substance abuse and substance related
problems among all individuals, including youth. As part of this
effort, SAMHSA is expanding and refining the agency's National Registry
of Evidence-based Programs and Practices (NREPP) so that the system
serves as a leading national resource for contemporary and reliable
information on the scientific basis and practicality of interventions
to prevent and/or treat mental illness and substance use and abuse.
NREPP represents a major agency activity within SAMHSA's Science to
Service initiative. The initiative seeks to accelerate the translation
of research into practice by promoting the implementation of effective,
evidence-based interventions for preventing and/or treating mental
disorders and substance use and abuse. Of equal measure, the initiative
emphasizes the essential role of the services community in providing
input and feedback to influence and better frame the research questions
and activities pursued by researchers in these areas.
Through SAMHSA's Science to Service initiative, the agency
ultimately seeks to develop a range of tools that will facilitate
evidence-based decision-making in substance abuse prevention, mental
health promotion, and the treatment of mental and substance use
disorders. In addition to NREPP, SAMHSA is developing an informational
guide of web-based
[[Page 50382]]
resources on evidence-based interventions that will be available in
2006. SAMHSA also is exploring the feasibility of supporting a
searchable web database of evidence-based information (e.g., systematic
reviews, meta-analyses, clinical guidelines) for mental health and
substance abuse prevention and treatment providers. Such a system could
reduce the lag time between the initial development and broader
application of research knowledge by serving as a real-time resource to
providers for ``keeping current'' in ways that will enhance their
delivery of high quality, effective services. In combination, these
three tools--NREPP, guide to web-based resources, and database of
evidence-based information--would provide valuable information that can
be used in a variety of ways by a range of interested stakeholders.
With regard to NREPP, during the past two years, SAMHSA convened a
series of scientific/stakeholder panels to inform the agency's
expansion of the system to include interventions in all substance abuse
and mental health treatment and prevention domains. These panels
thoroughly assessed the existing NREPP review process and review
criteria and provided comments and suggestions for refining and
enhancing NREPP. As part of this expansion effort, SAMHSA also engaged
a contractor to assess the NREPP process and review criteria, including
how the system and criteria compare to other, similar evidence review
and rating systems in the behavioral and social sciences. The
cumulative results of these activities have guided efforts to refine
the NREPP review process and review criteria, as well as inform the
agency's plans for how such a system may be used to promote greater
adoption of evidence-based interventions within typical community-based
settings.
This Federal Register Notice (FRN) provides an opportunity for
interested parties to become familiar with and comment on SAMHSA's
plans for expansion and use of NREPP.
DATES: Submit comments on or before October 25, 2005.
ADDRESSES: Address all comments concerning this notice to: SAMHSA c/o
NREPP Notice, 1 Choke Cherry Road, Rockville, MD 20857. See
SUPPLEMENTARY INFORMATION for information about electronic filing.
FOR FURTHER INFORMATION CONTACT: Kevin D. Hennessy, PhD, Science to
Service Coordinator/SAMHSA, 1 Choke Cherry Road, Room 8-1017,
Rockville, Maryland 20857. Dr. Hennessy may be reached at (240) 276-
2234.
SUPPLEMENTARY INFORMATION:
Electronic Access and Filing Addresses
You may submit comments by sending electronic mail (e-mail) to
nrepp.comments@samhsa.hhs.gov.
Dated: August 18, 2005.
Charles G. Curie,
Administrator.
Overview
Increasingly, individuals and organizations responsible for
purchasing, providing and receiving services to prevent substance abuse
and/or treat mental and substance use disorders are considering the
extent to which these services are ``evidence-based''--that there
exists some degree of documented scientific support for the outcomes
obtained by these services. As the Federal agency responsible for
promoting the delivery of substance abuse and mental health services,
SAMHSA is particularly interested in supporting and advancing
activities that encourage greater adoption of effective, evidence-based
interventions to prevent and/or treat mental and substance use
disorders. With this in mind, SAMHSA proposes to refine and expand its
National Registry of Evidence-based Programs and Practices (NREPP).
SAMHSA believes that the growth and evolution of NREPP can serve as an
important mechanism for promoting greater adoption of evidence-based
substance abuse and mental health services,--one that can do so in
conjunction with an ever-growing array of scientific knowledge,
clinical expertise and judgment, and patient/recipient values and
perspectives. By clearly identifying and assessing the scientific basis
and disseminability of a range of behavioral interventions, NREPP is
likely to prove an important resource to both individuals and systems
seeking information on the effectiveness of various services to prevent
and/or treat mental and substance use disorders.
Background and Need
As SAMHSA promotes the identification and greater use of effective,
evidence-based interventions for individual-, population-, policy-, and
system-level changes, the agency seeks to build upon the strong
foundation provided by the precursor to the National Registry of
Evidence-based Programs and Practices--namely, the National Registry of
Effective Prevention Programs. The previous system provides an
important building block in the agency's efforts to develop a SAMHSA-
wide registry.
The National Registry of Effective Prevention Programs developed in
SAMHSA's Center for Substance Abuse Prevention (CSAP) beginning in 1997
as a way to help professionals in the field become better consumers of
prevention programs. Between 1997 and 2004, NREPP reviewed and rated
more than 1,100 prevention programs, with more than 150 obtaining
designation as a Model, Effective, or Promising Program.
Information on all current NREPP programs is available through the
Model Programs Web site at https://www.modelprograms.samhsa.gov.
Additional details about the review process, review criteria, and
rating system for the National Registry of Effective Prevention
Programs are available in the SAMHSA publication ``Science-Based
Prevention programs and Principles 2002,'' which can be downloaded from
SAMHSA's Model Programs Web site https://www.modelprograms.samhsa.gov)
by clicking on ``Publications'' on the tool bar on the left side of the
page; or by requesting the publication through SAMHSA's National
Clearinghouse for Alcohol and Drug Information (NCADI) at 1-800-729-
6686 (or by visiting the NCADI Web site at https://www.health.org).
As SAMHSA expands the NREPP system, one area of potential
improvement is in the efficient screening and triage of applications.
Given the historical applications trends among substance abuse
prevention programs, combined with the increased demands on the system
through expansion to other SAMHSA domains, it is essential that the
agency develop a transparent and scientifically defensible process for
screening and triaging applications.
Moreover, as SAMHSA engaged NREPP scientific/stakeholder panels
over the past 2 years, concerns about the existing review process and
review criteria emerged. In particular, a range of scientific experts
voiced concerns regarding specific review criteria and other elements
of the review process.
In addition, systematic efforts to examine and compare the current
NREPP review criteria with other evidence-grading systems in the social
and behavioral sciences has revealed both areas of relative strength
and relative weakness. At a minimum, this comparison has affirmed for
SAMHSA the importance and value of reexamining and refining the NREPP
review process and review criteria in ways that reflect to the public
SAMHSA's commitment to identifying
[[Page 50383]]
and promoting interventions that have shown to be effective through
prevailing scientific standards. One important element of this process
is providing support for the re-review of existing NREPP programs
against these prevailing scientific standards (see below), while
another component is identifying both SAMHSA and other mechanisms and
resources for supporting efforts to evaluate and document the evidence-
base of innovative interventions in ways that will maximize their
opportunity for entry into NREPP.
Further, SAMHSA's experience with NREPP to date suggests that the
system is limited in its ability to identify and rate interventions
designed to promote population-, policy-, and system-level outcomes,
such as those promoted by community prevention coalitions. SAMHSA's
plans for NREPP include an expansion of the system in this area. As
part of this expansion, SAMHSA proposes a second set of review criteria
for these interventions, with the recognition that some interventions
may be designed to affect a community over time, and that the
prevailing scientific standards for assessing the effectiveness of
these interventions may indeed be different than those for
interventions seeking to change individual-level outcomes. Finally,
input into the NREPP process to date suggests the need for SAMHSA to
provide greater policy guidance on how best to use the system to
appropriately select specific interventions, as well as contextual
guidance on how NREPP might be used in conjunction with other important
information--such as clinical expertise, patient values, and
administrative and policy perspectives and data--in making decisions
regarding the financing and delivery of substance abuse and mental
health services.
Proposal
After extensive consultation with both scientific experts and a
range of stakeholders. SAMHSA is seeking your comments on a proposal to
advance a voluntary rating and classification system for mental health
and substance abuse prevention and treatment interventions--a system
designed to categorize and disseminate information about programs and
practices that meet established evidentiary criteria. This proposal
presents in detail the new NREPP system, including refinements to the
review process and review criteria for programs and practices in
substance abuse and mental health, as well as an expansion of the
system to include successful community coalition efforts to achieve
population-, policy-, or system-level outcomes. The proposal also
describes SAMHSA's plans for a new Web site that will highlight the
scientific evidence-base of interventions, as well as provide a range
of practical information needed by those who are considering
implementation of programs or practices on the Registry.
SAMHSA further anticipates that additional revisions and
refinements to the NREPP system may be needed on a periodic basis, and
proposes the formation of an external advisory panel to regularly
assist the agency in assessing proposed suggestions for improvements to
the system (see Question 10 below).
Initial Input From the Field
Upon determining that SAMHSA would expand NREPP to include
interventions in all agency domains, three expert panel meetings of
both scientist and nonscientist stakeholders were convened to provide
feedback on the current review system and review criteria, as well as
solicit suggestions about redesigning the system to promote the goals
noted above. Each meeting was conducted over a 2-day period, and
included invited participants representing a range of relevant
organizations, expertise, and perspectives. All meetings took place in
Washington, DC, in 2003, with mental health experts meeting in April,
substance abuse prevention and mental health promotion experts meeting
in September, and substance abuse treatment experts meeting in
December. Transcripts of these meetings are available on-line at the
NREPP Web page accessible through the ``Quick Picks'' section on
SAMHSA's Home page (https://www.samhsa.gov).
SAMHSA also convened a meeting in May 2005 to solicit
recommendations for integrating evidence-based findings from community
coalitions into NREPP. The 2-day meeting brought together prominent
researchers and practitioners who reaffirmed the importance of
including prevention coalitions within NREPP, and offered suggestions
as to the types of outcomes and evidence criteria appropriate to the
assessment of community coalitions. A summary of this meeting is
available on-line at the NREPP web page accessible through the ``Quick
Picks'' section on SAMHSA's Home page (https://www.samhsa.gov).
Review Process for Determining Individual-Level Outcome Ratings for
Interventions
A primary goal of the Registry is to provide the public with
reliable information about the evidence quality and strength of
scientific support for specific interventions. The strength of
scientific support includes: the quality of evaluation design (e.g.,
experimental or quasi-experimental designs); fidelity to predetermined
intervention components; confidence in the link between intervention
components and specific outcome(s) achieved; freedom from internal and
external sources of bias and error; and the statistical significance
and practical magnitude (e.g., effect size) of outcomes achieved.
An additional goal is to provide key information about the
transferability of these programs and practices to real-world
prevention and treatment settings. NREPP utility descriptors provide
information about the appropriate settings and populations for
implementation of specific interventions, the availability of training
and dissemination materials, and their practicality and costs.
This section describes the NREPP review process, including the
evidence rating criteria and utility descriptors that will form the
basis for Web-based information about programs and practices.
Based on important feedback from scientists and practitioners in
the prevention and treatment fields, the NREPP review process has been
enhanced in several important respects:
--Programs and practices will be rated on the strength of evidence for
specific outcomes achieved, rather than on global assessments of the
effectiveness of intervention(s). In addition, indicators of strength
of association or magnitude of outcome effects, such as effect size
statistics, will be utilized in NREPP to complement traditional,
statistical significance (null-hypothesis) testing.
--There will be multiple, outcome-specific ratings of evidence quality
strength. All programs and/or practices listed on the Registry will be
considered ``Effective'' for demonstrating specific outcomes having
varying levels of evidence quality and confidence resulting from
independent (or applicant) replication(s).
--Evidence rating criteria have been refined and now emphasize
intervention impacts, evaluation design and fidelity, quality of
comparison conditions, and replications.
The section below is an overview of the NREPP process for obtaining
expert reviewers' ratings of the evidence quality for outcome-specific
program and practice interventions. The process includes an internal
screening and triage process conducted by qualified
[[Page 50384]]
NREPP contractor staff serving as review coordinators, as well as an
independent, external scientific review process conducted by qualified
and trained external scientists--working independently of one another--
to assess the evidence quality of candidate interventions.
I. Submitted Materials
Applicants will submit a variety of documents that allow a panel of
expert reviewers to rate objectively the evidence for an intervention's
impact on substance abuse or mental health outcomes. Materials
submitted may include:
Descriptive program summaries, including the specific
outcomes targeted;
Research reports and published and unpublished articles to
provide scientific evidence (experimental or quasi-experimental
studies) about the effectiveness of the intervention for improving
specific outcomes;
Documents to verify that participants were assured privacy
and freedom from negative consequences regardless of participating
(used to assess participant response bias, not human subjects
protections per se); and
Documents to provide evidence that outcomes and analytic
approaches were developed through a theory-driven or hypothesis-based
approach.
These materials will provide SAMHSA and potential reviewers with
objective evidence to support conclusions about the validity and impact
of the program or practice. Reviewers must be assured that program
investigators did not capitalize on chance findings or excessive
postintervention data analyses to find effects that were not components
of the intervention design or theory.
II. Internal Review and Triage
Upon receipt, each set of materials will be assigned to an NREPP
review coordinator (contractor staff), who will inventory and document
the contents of the submission. The review coordinator will contact the
applicant by phone and/or e-mail confirming receipt and notifying the
applicant if additional application materials are required.
When all materials for a program have been received by the review
coordinator, an internal review is conducted to eliminate those
programs or practices that do not meet NREPP minimum evidence-based
standards. These minimum standards are (1) An intervention that is
consistent with SAMHSA's matrix of program priority areas; (2) one or
more evaluations of the intervention using an experimental or quasi-
experimental design; and (3) statistically significant intervention
outcome(s) related to either the prevention or treatment of mental or
substance use disorders. Non-reviewed programs will receive written
notification of this decision, including problematic or missing
components of their application, and may be considered for re-review at
a later date.
SAMHSA will maintain oversight over the entire NREPP application
and selection process. Moreover, SAMHSA's Administrator and Center
Directors may establish specific program and practice areas for
priority review.
III. Ratings by Reviewers
For all NREPP applications determined appropriate for review, three
(3) independent, external reviewers will evaluate and rate the
intervention. Reviewers are doctoral-level researchers and
practitioners who have been selected based on their training and
expertise in the fields of mental health promotion or treatment and/or
substance abuse prevention or treatment. Moreover, reviewers will be
thoroughly trained in the NREPP review process, and will be monitored
and provided feedback periodically on their performance. Of note,
interventions targeting individuals or populations with co-occurring
mental health and substance use disorders (or other cross-domain
initiatives) will be assigned to reviewers across these categorical
domains.
To maintain the objectivity and autonomy of the peer review
process, SAMHSA will not disclose to the public or applicants the
identities of individual reviewers assigned to specific reviews. On a
periodic basis, SAMHSA may post listings of reviewer panels within
specific SAMHSA domains as an informational resource to the public.
Reviewers will be selected based on their qualifications and
expertise related to specific interventions and/or SAMHSA priority
areas. In addition to reviewers identified by SAMHSA, NREPP will
consider third-party and self-nominations to become part of the
reviewer pool. All reviewers will provide written assurance, to be
maintained on file with the NREPP contractor, that they do not have a
current or previous conflict of interest (e.g., financial or business
interest) that might impact a fair and impartial review of specific
programs or practices applying for NREPP review.
Reviewers provide independent assessments of the evidence quality
and provide numerical summary scores across the 16 outcome-specific
evidence quality criteria. Each criterion is scored on a 0 to 4 scale.
The 16 evidence quality criteria are presented below.
Individual-Level Outcome Evidence Rating Criteria
1. Theory-Driven Measure Selection
Outcome measures for a study should be selected before data are
collected and should be based on a priori theories of hypotheses.
0 = The applicant selected the measure after data collection for the
apparent purpose of obtaining more favorable results than would be
expected from using the measures originally planned, OR there is no
documentation of selection prior to data analysis.
4 = Documentation shows that the applicant selected the measure prior
to study implementation, OR the measure was selected after study
inception, but before data analysis, and is supported by a peer review
panel or literature related to study theories or hypotheses.
2. Reliability
Outcome measures should have acceptable reliability to be
interpretable. ``Acceptable'' here means reliability at a level that is
conventionally accepted by experts in the field.
0 = No evidence of measure realibility.
1 = Reliability coefficients indicate that some but not all relevant
types of reliability (e.g., test-retest, inter-rater, inter-item) are
acceptable.
3 = All relevant types of realibility have been documented to be at
acceptable levels in studies by the applicant.
4 = All relevant types of reliability have been documented to be
acceptable levels in studies by independent investigators.
3. Validity
Outcome measures should have acceptable validity to be
interpretable. ``Acceptable'' here means validity at a level that is
conventionally accepted by experts in the field.
0 = No evidence of measure validity, or some evidence that the measure
is not valid.
1 = Measure has face validity.
3 = Studies by applicant show that measure has one or more acceptable
forms of criterion-related validity that are correlated with
appropriate, validated measures or objective criteria; OR, as an
objective measure of response, there are procedural checks to confirm
data validity, but they have not been adequately documented.
4 = Studies by independent investigators show that measure has
[[Page 50385]]
one or more acceptable forms of criterion-related validity that are
correlated with appropriate, validated measures or objective criteria;
OR, as an objective measure of response, there are adequately
documented procedural checks that confirm data validity.
4. Intervention Fidelity
The ``experimental'' intervention implemented in a study should
have fidelity to the intervention proposed by the applicant.
Instruments that have tested acceptable psychometric properties (e.g.,
interrator reliability, validity as shown by positive association with
outcomes) provides the highest level of evidence.
0 = There is evidence the intervention implemented was substantially
different from the one proposed.
1 = There is only narrative evidence that the applicant or provider
believes the intervention was implemented with acceptable fidelity.
2 = There is evidence of acceptable fidelity in the form of judgment(s)
by experts, based on limited on-site evaluation and data collection.
3 = There is evidence of acceptable fidelity, based on the systematic
collection of data on factors such as dosage, time spent in training,
adherence to guidelines or a manual, or a fidelity measure with
unspecified or unknown psychometric properties.
4 = There is evidence of acceptable fidelity from a tested fidelity
instrument shown to have reliability and validity.
5. Comparison Fidelity
A study's comparison condition should be implemented with fidelity
to the comparison condition proposed by the applicant. Instruments for
measuring fidelity that have tested acceptable psychometric properties
(e.g., interrater reliability, validity as shown by predicted
association with outcomes) provide the highest level of evidence.
0 = There is evidence that the comparison condition implemented was
substantially different from one proposed.
1 = There is only narrative evidence that the applicant or provider
believes the comparison condition was implemented with fidelity.
2 = Researchers report observational or administrative data that the
comparison condition was implemented with fidelity.
3 = Documentation confirms that comparison group participants did not
receive interventions that were very similar or identical to
intervention participants, AND there is documentation of degree of
participation in any comparison conditions such as lectures or
treatment.
4 = There is evidence from a tested instrument suggesting that the
comparison condition was implemented with fidelity.
6. Nature of Comparison Condition
The quality of evidence for an intervention depends in part on the
nature of the comparison condition(s), including assessments of their
active components and overall effectiveness. Interventions have the
potential to cause more harm than good; therefore, an active comparison
intervention should be shown to be better than no treatment.
0 = There was no comparison condition.
1 = The comparison condition is an active intervention that has not
been proven to better than no treatment.
2 = The comparison condition is no service or wait-list, or an active
intervention shown to be as effective as or better than no treatment.
3 = The comparison condition is an attention control.
4 = The comparison condition was shown to be as safe or safer and more
effective than an attention control.
7. Assurances to Participants
Study participants should always be assured that their responses
will be kept confidential and not affect their care or services. When
these procedures are in place, participants are more likely to disclose
valid data.
0 = There was no effort to encourage and reassure subjects about
privacy and that consent or participation would have no effect on
services.
1 = Data collector was the service provider, AND there were documented
assurances to participants about privacy and that consent or
participation would have no effect on care or services.
2 = Data collector was not the service provider. There were
indications, but no documentation, that participants were assured about
their privacy and that consent or participation would have no effect on
care or services.
4 = Data collector was not the service provider, AND there were
documented assurances to participants about privacy and that consent or
participation would have no effect on care or services; OR, data were
not collected directly from participants.
8. Participant Expectations
Participants can be biased by how an intervention is introduced to
them and by an awareness of their study condition. Information used to
recruit and inform study participants should be carefully crafted to
equalize expectations. Masking treatment conditions during
implementation of the study provides the strongest control for
participant expectancies.
0 = Investigators did not make adequate attempts to mask study
conditions or equalize expectations among participants in the
experimental and comparison conditions, or differential participant
expectations were measured and found to be too great to control for
statistically.
2 = Investigators attempted to mask study conditions or equalize
expectations among participants in the experimental and comparison
conditions. Some participants appeared likely to have known their study
condition assignment (experimental or comparison).
3 = Investigators attempted to mask study conditions or equalize
expectations among participants in the experimental and comparison
conditions. Some participants appeared likely to have known their study
condition assignment (experimental or comparison), but these
differential participant expectations were measured and appropriately
controlled for statistically.
4 = Investigators adequately masked study conditions. Participants did
not appear likely to have known their study condition assignment.
9. Standardized Data Collection
All outcome data should be collected in a standardized manner. Data
collectors trained and monitored for adherence to standardized
protocols provide the highest quality evidence of standardized data
collection.
0 = Applicant did not use standardized data collection protocols.
2 = Data was collected using standardized protocol and trained data
collectors.
3 = Data was collected using standardized protocol and trained data
collectors, with evidence of good initial adherence by data collectors
to the standardized protocol.
4 = Data was collected using standardized protocol and trained data
collectors, with evidence of good initial adherence to data collectors
to the standardized protocol and evidence of data collector retraining
[[Page 50386]]
when necessary to control for rater ``drift.''
10. Data Collector Bias
Data collector bias is most strongly controlled when data
collectors are not aware of the conditions to which study participants
have been assigned. When data collectors are aware of specific study
conditions, their expectations should be controlled for through
training and/or statistical methods.
0 = Data collectors were not masked to participants' conditions, and
nothing was done to control for possible bias, OR collector bias was
measured and found to be too great to control for statistically.
2 = Data collectors were not masked to participants' conditions, but
data collectors received training to reduce possible bias.
3 = Data collectors were not masked to participants' conditions;
possible bias was appropriately controlled for statistically.
4 = Data collectors were masked to participants' conditions.
11. Selection Bias
Concealed random assignment of participants provides the strongest
evidence of control for selection bias. When participants are not
randomly assigned, covariates and confounding variables should be
controlled as indicated by theory and research.
0 = There was no comparison condition, OR participants or providers
selected conditions.
3 = Participants were not assigned randomly, but researchers controlled
for theoretically relevant confounding variables, OR participants were
assigned with non-concealed randomization.
4 = Selection bias was controlled with concealed random assignment.
12. Attrition
Study results can be biased by participant attrition. Statistical
methods as supported by theory and research can be employed to control
for attrition that would bias results, but studies with no attrition
needing adjustment provide the strongest evidence that results are not
biased.
0 = Attrition was taken into account inadequately, OR there was too
much attrition to control for bias.
1 = No significant differences were found between participants lost to
attrition and remaining participants.
2 = Attrition was taken into account by simpler methods that crudely
estimate data for missing observations.
3 = Attrition was taken into account by more sophisticated methods that
model missing data, observations, or participants.
4 = There was no attrition, OR there was no attrition needing
adjustment.
13. Missing Data
Study results can be biased by missing data. Statistical methods as
supported by theory and research can be employed to control for missing
data that would bias results, but studies with no missing data needing
adjustment provide the strongest evidence.
0 = Missing data were an issue and were taken into account
inadequately, OR levels of missing data were too high to control for
bias.
1= Missing data were an issue and were taken into account, but high
quantity makes the control for bias suspect.
2= Missing data were an issue and were taken into account by simpler
methods (mean replacement, last point carried forward) that
simplistically estimate missing data; control for missing data is
plausible.
3= Missing data were an issue and were taken into account by more
sophisticated methods that model missing data; control for missing data
very plausible.
4= Missing data were not an issue.
14. Analysis Meets Data Assumptions
The appropriateness of statistical analyses is a function of the
properties of the data being analyzed and the degree to which meet
statistical assumptions.
0= Analyses were clearly inappropriate to the data collected; severe
violation(s) of assumptions make analysis uninterpretable.
1= Some data were analyzed appropriately, but for other analyses
important violation(s) of assumptions cast doubt on interpretation.
2= There were minor violations of assumptions for most or all analyses,
making interpretation of results arguable.
3= There were minor violations of assumptions for only a few analyses;
results were generally interpretable.
4= There were no violations of assumptions for any analysis.
15. Theory-Driven Selection of Analytic Methods
Analytic methods should be selected for a study based on a priori
theories or hypotheses underlying the intervention. Changes to analytic
methods after initial data analysis (e.g., to ``dredge'' for
significant results) decrease the confidence that can be placed in the
findings.
0= Analysis selected appears inconsistent with the intervention theory
or hypotheses; insufficient rational provided by investigator.
1= Analysis selected appears inconsistent with the intervention theory
or hypotheses, but applicant provides a potentially viable rationale.
3= Analysis is widely accepted by the field as the most consistent with
study theory or hypotheses; no documentation showing methods were
selected prior to data analysis.
4= Analysis is widely accepted by the field as the most consistent with
study theory or hypotheses; documentation shows that methods were
selected prior to data analysis.
16. Anomalous Findings
Findings that contradict the theories and hypotheses underlying an
intervention suggest the possibility of confounding causal variables
and limit the validity of study findings.
0 = There were anomalous findings suggesting alternate explanations for
outcomes reported.
4 = There were no anomalous findings, OR researchers explained
anomalous findings in a way that preserves the validity of results
reported.
Based upon the independent reviewer assessments, review coordinators
will compute average evidence quality ratings for specific outcome
measures (based on the 16 evidence quality criteria), and then ask
reviewers to determine the overall intervention outcome evidence
ratings according to two components: quality of evidence and
intervention replications. Average evidence quality ratings scores
below 2.0 will be considered ``insufficient current evidence'' for the
effectiveness of a given outcome, and will not be included in the
Registry. Evidence quality rating scores of 2.0 to 2.5 will be
considered ``emerging evidence'' for effectiveness, and scores of 2.5
and higher (4.0 is the maximum) will be considered ``strong evidence.''
Specific rating category labels for effective outcomes remain to be
finalized, but might include categories such as: (1) Strong evidence
with independent replication(s); (2) Strong evidence with developer
replication(s); (3) Strong evidence without replication; (4) Emerging
evidence with independent replication(s); (5) Emerging evidence with
developer replication(s); and (6) Emerging evidence without
replication.
IV. Applicant Notification
Applicants will be notified in writing of their final NREPP rating
category(s) by the SAMHSA Administrator or his/
[[Page 50387]]
her designee within 2 weeks following the completion of the review.
This notification will include the summary comments of reviewers as
well as the consensus ratings on each review criteria. Where relevant,
the notification letter will provide applicants with the effective date
of the program status designation.
Applicants will have the opportunity to appeal any review decision
by submitting a written request to the NREPP contractor within 30 days
of notification. Appeals will be resolved through the assignment of two
(2) additional reviewers to conduct focused reviews of the evidence
quality for specific, disputed outcome ratings. Reviews conducted as
part of a formal appeal process will be independent and reviewers will
be unaware of previous ratings and decisions. The numeric evidence
ratings will be averaged across the five (i.e., 3 original; 2 appeal)
reviews for a final determination of intervention outcome rating(s).
V. Utility Descriptors
The NREPP utility descriptors will provide information to program
purchasers and planners, service providers, consumers and the general
public about the transferability of intervention technologies to
different (including non-research-based) settings. These descriptors
complement NREPP's scientific evidence-based program and practice
ratings with information pertaining to the following dimensions:
1. Implementation. What kinds of materials are available to support
implementation and what audiences are they appropriate for? What kinds
of training and training resources are available to support
implementation?
2. Quality Monitoring. What tools, procedures, and documentation
are available to support quality monitoring and quality improvement as
the program or practice is implemented?
3. Unintended or Adverse Events. What procedures, systems and data
have been identified to indicate whether the intervention has ever
resulted in unintended or adverse events?
4. Population Coverage. Were the study samples recruited
representative of the persons identified to receive the intervention in
the theory/framework?
5. Cultural Relevance and Cultural Competence. Were the outcomes
demonstrated to be effective and applicable to specific demographic and
culturally defined groups? If so, are training and other implementation
materials available to promote culturally competent implementation of
the intervention?
6. Staffing. What is aggregate level of staffing (e.g., FTEs)
required to implement the intervention effectively? What are the
individual skills, expertise, and training required of staff to deliver
the intervention?
7. Cost. What are the estimated start-up and annual costs per
person served and unit of service for the intervention at full
operation?
Interventions with outcomes achieving any one of the effective
statuses will be asked to provide descriptive information about the
intervention's readiness for implementation, appropriateness for
different populations, freedom from unintended or adverse effects, and
staffing and cost requirements. These utility descriptors will be
featured on the NREPP Web site to help assure a proper match between
specific prevention and treatment interventions and the settings and
populations to which they are most appropriate.
In light of SAMHSA's commitment to consumer and family involvement,
the agency is seeking ways to ensure that these groups provide input
into the assessment of interventions that achieve NREPP status. SAMHSA
seeks suggestions regarding the most useful and efficient way to
conduct this process (see Question 7 below).
Review Process for Determining Population-, Policy-, and System-Level
Outcome Ratings for Interventions
The NREPP Evidence Rating Criteria for Population-, Policy-, and
System-Level Outcomes are proposed as the basis for reviewer ratings of
outcomes generated by community prevention coalitions and other
environmental interventions to promote resiliency and recovery at the
community level. SAMHSA's rationale for use of these separate criteria
comes through a recognition that some interventions may be designed to
affect a community over time, and that the prevailing scientific
standards for assessing the effectiveness of these interventions may
indeed be different than those for interventions seeking to change
individual-level outcomes.
An outcome of a prevention or treatment intervention qualifies for
review under these 12 criteria only when it can be included in one of
the following three categories:
1. Population-Level Outcome--measures the effect of an intervention
of an existing, predefined population. Examples of such existing,
predefined populations include ``all youth residing in a
neighborhood,'' ``all female employees of a manufacturing plant,'' or
``all Native Americans receiving social services from a tribal
government.'' ``All patients receiving a specific treatment,'' in
contrast, cannot be defined as an existing, predefined population
because that population would have come into existence as a direct
response to the intervention.
2. Policy-Level Outcome--measures the effect of an intervention on
enactment, maintenance, or enforcement of policies that are assumed to
have a positive aggregate impact on resiliency or recovery. Examples of
such outcomes include ``the rate of passage of legislation restricting
access to alcoholic beverages'' or ``the percentage of arrests for
illicit drug manufacturing that result in convictions.''
3. System-Level Outcome--measures the effect of an intervention on
prevention and treatment capacity, efficiency, or effectiveness in an
existing system or community. Examples of such outcomes include
``increased capacity of a State government to quantify alcohol- or
drug-related problems'' or ``increased effectiveness of a community
treatment system to respond to the comprehensive needs of individuals
with Axis I mental health diagnoses.''
To support the transparency of the review process, SAMHSA wants
stakeholders to understand clearly the NREPP procedures and decision-
making processes. All community coalition interventions included in
NREPP will have demonstrated evidence of effectiveness at the
population, policy, or system level. The ratings will indicate the
strength of the supporting evidence, and may be as follows:
(1) Strong evidence with replication;
(2) Strong evidence without replication;
(3) Emerging evidence with replication; and
(4) Emerging evidence without replication.
All NREPP evidence ratings are defined at the level of specific
outcomes. The 12 evidence rating criteria used for population-, policy-
and system-level outcomes, summarized as an average Evidence Quality
Score (EQS) for each outcome, allow independent expert reviewers to
score along dimensions of outcome measurement, intervention fidelity,
comparison conditions, participant and data collector biases, design
and analysis, and anomalous findings. Each of the 12 criteria is
assessed by independent reviewers on a 0 to 2 scale, in which a ``1''
indicates that methodological rigor may have been acceptable and a
``2'' indicates that adequate methodological rigor was achieved for
this type of outcome.
[[Page 50388]]
Preliminary discussions of classifications have suggested that ``Strong
evidence'' be defined as an average EQS of 1.75 or above (out of a
possible 2.0), and that ``Emerging evidence'' be defined as an average
EQS between 1.50 and 1.74 (out of a possible 2.0).
The 12 criteria applied to each population-, policy-, or system-
level outcome measures are:
1. Logic-Driven Selection of Measures
2. Reliability
3. Validity
4. Intervention Fidelity
5. Nature of Comparison Condition
6. Standardized Data Collection
7. Data Collector Bias
8. Population Studied
9. Missing Data
10. Analysis Meets Data Assumptions
11. Theory-Driven Selection of Analytic Methods
12. Anomalous Findings
Outcome Measurement Criteria
1. Logic-Driven Selection of Measures
Outcome measures should be based on a theory or logic model that
associates them with the intervention.
0 = The applicant appears to have selected outcome measures for the
purpose of identifying favorable results rather than from a logic-based
rationale.
1 = There is no explicit description of a guiding logic model or theory
for measures, although a rationale for the inclusion of most measures
can be inferred.
2 = Measures are supported by a theory or logic model that associates
the intervention with the outcome.
2. Reliability
Outcome measures should have acceptable reliability to be
interpretable. ``Acceptable'' here means reliability at a level that is
conventionally accepted by experts in the field.
0 = No evidence of reliability of measures is presented.
1 = Relevant reliability measures are in the marginal range.
2 = Relevant reliability measures are in clearly acceptable ranges.
3. Validity
Outcome measures should have acceptable validity to be
interpretable.
0 = No evidence of validity of measures is presented or evidence that
is presented suggests measures are not valid.
1 = Measures have face validity.
2 = Relevant validity has been documented to be at acceptable levels in
independent studies.
4. Intervention Fidelity
The intervention should be well defined and its implementation
should be described in sufficient detail to assess whether
implementation affected outcomes.
0 = The intervention and/or its implementation are not described in
sufficient detail to verify that the intervention was implemented as
intended.
1 = The intervention and its implementation are described in adequate
detail, including justification for significant variation during
implementation.
2 = The intervention and its implementation are described in adequate
detail, reflecting variation during implementation with little or no
plausible impact on outcomes.
5. Nature of Comparison Condition
The quality of evidence for an intervention depends in part on the
nature of the comparison condition(s).
0 = Research design either lacks a comparison condition, or employs a
before/after comparison.
1 = Comparison condition was no service or wait-list (including
baseline comparison for a multipoint time series), or an active
intervention that has not been shown to be safer or more effective than
no service.
2 = Comparison condition was an active intervention shown to be at least
as safe as, and more effective than, no service.
6. Standardized Data Collection
All outcome data should be collected in a standardized manner. Data
collectors trained and monitored for adherence to standardized
protocols provide the highest quality evidence of standardized data
collection.
0 = Data collection or archival sources used by the evaluation to
assess outcomes did not use standardized data collection protocol(s).
1 = All outcome data were collected using standardized protocol(s).
2 = All outcome data were collected using standardized protocol(s) and
trained data collectors.
7. Data Collector Bias
Data collector bias is most strongly controlled when data
collectors are not aware of the interventions to which populations have
been exposed. When data collectors are aware of specific interventions,
their expectations should be controlled for through training and/or
statistical analysis methods on resultant data.
0 = Data collectors were not masked to the population's condition, and
nothing was done to control for possible bias, OR collector bias was
identified and not controlled for statistically.
1 = Data collectors were not masked to the population's condition;
possible bias was appropriately controlled for statistically or through
training.
2 = Data collectors were masked to the population's condition, or only
archival data were employed.
8. Population Studied
0 = A single group pre/posttest design was applied without a comparison
group, OR the alleged comparison group is significantly different from
the population receiving the intervention.
1 = Population(s) were studied using time trend analysis, multiple
baseline design, or a regression-discontinuity design that uses within-
group differences as a substitute for comparison groups.
2 = Population matching or similar techniques were used to compare
outcomes of the population that received the intervention with the
outcomes of a valid comparison group.
9. Missing Data
Study results can be biased by missing data. Statistical methods
supported by theory and research can be employed to control for missing
data that would otherwise bias results, but studies with no missing data
needing adjustment provide the strongest evidence. (An illustrative
sketch of one elementary adjustment appears after criterion 12.)
0 = Missing data were an issue and were taken into account
inadequately, OR levels of missing data were too high to control for
bias.
1 = Missing data were an issue and were taken into account, but high
levels of missing data make the control for bias suspect.
2 = Missing data were not an issue or were taken into account by
methods that estimate missing data.
10. Analysis Meets Data Assumptions
The appropriateness of statistical analysis is a function of the
properties of the data being analyzed and the degree to which data meet
statistical assumptions.
0 = Analyses were clearly inappropriate to the data collected; severe
violation(s) of assumptions make analysis uninterpretable.
1 = There were minor violations of assumptions, making interpretation
of results arguable.
2 = There were no or only very minor violations of assumptions; results
were generally interpretable.
11. Theory-Driven Selection of Analytic Methods
In addition to the properties of the data, analytic methods should
be based on a logic model or theory underlying the intervention.
Changes to analytic methods after initial data analysis (e.g., to
dredge for significant results) decrease the confidence that can be
placed in the findings.
0 = Analysis selected appears inconsistent with the intervention theory
or hypotheses; insufficient rationale was provided by the investigator.
1 = Analysis selected appears inconsistent with the intervention logic
model or hypotheses, but the investigator provides a potentially viable
rationale.
2 = Analysis is widely accepted by the field as consistent with the
intervention logic model or hypotheses.
12. Anomalous Findings
Findings that contradict the theories and hypotheses underlying an
intervention suggest the possibility of confounding causal variables
and limit the validity of study findings.
0 = There were anomalous findings suggesting alternate explanations for
outcomes reported that were not acknowledged by the applicant.
1 = There were a few anomalous findings, but additional analysis or
previous literature cited by the applicant provide a reasonable
explanation.
2 = There were no anomalous findings, OR researchers explained
anomalous findings in a way that preserves the validity of results
reported.
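As a concrete illustration of the kind of adjustment contemplated by
criterion 9 above, the sketch below shows group-mean imputation, one
elementary method of estimating missing outcome values. It is
illustrative only: NREPP does not prescribe an imputation method, the
data are hypothetical, and more sophisticated approaches (e.g., multiple
imputation) are generally preferred in practice.

```python
# Illustrative sketch only: group-mean imputation is one elementary way to
# estimate missing outcome data, as contemplated by criterion 9. NREPP does
# not prescribe a method; multiple imputation and model-based approaches
# are generally preferred in practice. All data here are hypothetical.

def impute_group_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    if not observed:
        raise ValueError("Cannot impute: no observed values in this group.")
    group_mean = sum(observed) / len(observed)
    return [group_mean if v is None else v for v in values]

# Hypothetical outcome scores with two missing observations:
outcomes = [12, None, 15, 14, None, 13]
print(impute_group_mean(outcomes))  # [12, 13.5, 15, 14, 13.5, 13]
```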
Re-Review of Existing NREPP Programs
As noted above, SAMHSA believes it is important to ensure that both
current and future NREPP interventions meet consistent scientific
standards so that the public and other interested stakeholders can be
confident in the effectiveness of these interventions. With this in
mind, SAMHSA is committed to expeditiously re-reviewing all existing
NREPP programs under the new process. As part of this effort, SAMHSA
plans to provide--directly or indirectly--sufficient resources to each
existing NREPP program to cover costs associated with a re-review. In
addition, programs already received by NREPP and pending review will be
reviewed under the new process. If these pending programs need
additional support with the new review processes, those resources will
also be provided.
In order to accomplish these re-reviews efficiently and
expeditiously, NREPP review coordinators will work with each program to
obtain any additional documentation that might be needed for re-review.
These review coordinators will then conduct a re-review of each program
against the new review criteria. Programs with favorable re-reviews
will be included in the new NREPP system when it is launched in 2006.
Programs not receiving favorable re-reviews will have the opportunity
to appeal the re-review decision, and will be eligible for re-review by
independent, external reviewers. However, the schedule for re-reviews
of appealed programs will be subject to SAMHSA Administrator and SAMHSA
Center Director review priorities.
New Web Site
The primary goal of the revised NREPP Web site--
https://www.nationalregistry.samhsa.gov--will be to provide the public with
contemporary and reliable information about the scientific basis and
practicality of interventions to prevent and treat mental and substance
use disorders. All interventions achieving NREPP status will be listed
on the Web site. Average ratings and evaluation scores from scientific
peer reviewers, as well as information on the utility and
transferability of these interventions, will be posted on the site.
In addition, a searchable outcomes database of evidence-based
interventions will be a key feature. The Web site will also contain a
variety of learning and self-assessment tools for prospective and
current NREPP interventions to continuously improve their scientific
evidence base. Features of the new Web site will include:
• Evidence rating criteria and utility descriptors
• Detailed review guidelines
• Self-assessment tool to assist interventions in determining if they
are ready to submit an application to NREPP
• Detailed information on how to apply
• Links to technical assistance resources available to potential
applicants
• Relevant resources, including publications, presentations, links,
and other supplemental materials
• Frequently Asked Questions (FAQs)
• Information on how to contact a representative from the NREPP team
• Glossary of terms
Support for Innovative Interventions
SAMHSA recognizes that the long-term utility and value of NREPP
rests, in part, on the ability of SAMHSA and others to support efforts
to evaluate and document the evidence base of innovative interventions
in ways that will maximize their opportunity for entry into NREPP.
SAMHSA is considering potential options for both the direct and
indirect provision of such support, and will seek to clarify its
intentions in this area sometime in Fiscal Year 2006.
Questions To Consider in Making Your Comments
Responders should feel free to comment on any, or all, questions,
as well as provide relevant suggestions not included in the specific
questions. In order to facilitate the compilation and analysis of
comments, responders are asked to be explicit about the questions to
which they are responding.
1. SAMHSA is seeking to establish an objective, transparent,
efficient, and scientifically defensible process for identifying
effective, evidence-based interventions to prevent and/or treat mental
and substance use disorders. Is the proposed NREPP system--including
the suggested provisions for screening and triage of applications, as
well as potential appeals by applicants--likely to accomplish these
goals?
2. SAMHSA's NREPP priorities are reflected in the agency's matrix
of program priority areas. How might SAMHSA engage interested
stakeholders on a periodic basis in helping the agency determine
intervention priority areas for review by NREPP?
3. There has been considerable discussion in the scientific
literature on how to use statistical significance and various measures
of effect size in assessing the effectiveness of interventions based
upon both single and multiple studies (Schmidt & Hunter, 1995;
Rosenthal, 1996; Mason, Scott, Chapman, & Tu, 2000; Rutledge & Loh,
2004). How should SAMHSA use statistical significance and measures of
effect size in NREPP? Note that SAMHSA would appreciate receiving
citations for published materials elaborating upon responders'
suggestions in this area. (An illustrative sketch of one common effect
size measure appears after question 11.)
4. SAMHSA's proposal for NREPP would recognize as effective several
categories of interventions, ranging from those with high-quality
evidence and more replication to those with lower quality evidence and
fewer replications. This would allow for the recognition of emerging as
well as fully evidence-based interventions. Some view this as a
desirable feature that reflects the continuous nature of evidence;
provides important options for intervention recipients, providers, and
funders when no or few fully evidence-based
interventions are available; and helps promote continued innovation in
the development of evidence-based interventions. Others have argued
that several distinct categories will confuse NREPP users. Please
comment on SAMHSA's proposal in this area.
5. SAMHSA recognizes the importance of considering the extent to
which interventions have been tested with diverse populations and in
diverse settings. Therefore, the agency anticipates incorporating this
information into the Web site descriptions of interventions listed on
NREPP. This may allow NREPP users to learn if interventions are
applicable to their specific needs and situations, and may also help to
identify areas where additional studies are needed to address the
effectiveness of interventions with diverse populations and in diverse
locations.
SAMHSA is aware that more evidence is needed on these topics.
Please comment on SAMHSA's approach in this area.
6. To promote consistent, reliable, and transparent standards to
the public, SAMHSA proposes that all existing programs on NREPP meet
the prevailing scientific criteria described in this proposal, and that
this be accomplished through required re-reviews of all programs
currently on NREPP. SAMHSA has considered an alternative approach that
would ``grandfather'' all existing NREPP programs under the new system,
but would provide clear communication that these existing programs have
not been assessed against the new NREPP scientific standards. Please
comment on which approach you believe to be in the best interests of
SAMHSA stakeholders.
7. What types of guidance, resources, and/or specific technical
assistance activities are needed to promote greater adoption of NREPP
interventions, and what direct and indirect methods should SAMHSA
consider in advancing this goal?
8. SAMHSA is committed to consumer, family, and other nonscientist
involvement in the NREPP process. The panels convened by SAMHSA and
described earlier in this notice suggested that these stakeholders be
included specifically to address issues of intervention utility and
practicality. Please comment on how consumer, family, and other
nonscientist stakeholders could be involved in NREPP.
9. SAMHSA has identified NREPP as one source of evidence-based
interventions for selection by potential agency grantees in meeting the
requirements related to some of SAMHSA's discretionary grants. What
guidance, if any, should SAMHSA provide related to NREPP as a source of
evidence-based interventions for use under the agency's substance abuse
and mental health block grants?
10. SAMHSA believes that NREPP should serve as an important, but
not exclusive, source of evidence-based interventions to prevent and/or
treat mental and substance use disorders. What steps should SAMHSA take
to promote consideration of other sources (e.g., clinical expertise,
consumer or recipient values) in stakeholders' decisions regarding the
selection, delivery, and financing of mental health and substance abuse
prevention and treatment services?
11. SAMHSA anticipates that once NREPP is in operation, various
stakeholders will make suggestions for improving the system. To
consider this input in a respectful, deliberate, and orderly manner,
SAMHSA anticipates annually reviewing these suggestions. These reviews
would be conducted by a group of scientist and nonscientist
stakeholders knowledgeable about evidence in behavioral health and the
social sciences. Please comment on SAMHSA's proposal in this area.
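As referenced in question 3, the sketch below illustrates one widely
used standardized effect size measure, Cohen's d (the difference between
group means divided by the pooled standard deviation). It is
illustrative only: this notice does not prescribe any particular
measure, and the data shown are hypothetical.

```python
# Illustrative only: Cohen's d is one widely used standardized effect size
# measure; this notice does not prescribe it. All values are hypothetical.
from statistics import mean, stdev

def cohens_d(treatment, comparison):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    var1, var2 = stdev(treatment) ** 2, stdev(comparison) ** 2
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Hypothetical outcome scores for an intervention group and a comparison group:
print(round(cohens_d([14, 15, 17, 18, 16], [11, 12, 13, 12, 14]), 2))  # 2.61
```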
References
Mason, CA, Scott, KG, Chapman, DA, Tu, S. A review of some
individual and community level effect size indices for the study of
risk factors for child and adolescent development. Ed. Psychol.
Measurement. 2000; 60(3): 385-410.
Rosenthal, JA. Qualitative descriptors of strength of
association and effect size. J. of Soc. Serv. Research. 1996; 21(4):
37-59.
Rutledge, T, Loh, C. Effect size and statistical testing in the
determination of clinical significance in behavioral medicine
research. Annals of Beh. Med. 2004; 27(2): 138-145.
Schmidt, F, Hunter, JE. The impact of data-analysis methods on
cumulative research knowledge: statistical significance testing,
confidence intervals, and meta-analysis. Eval. Health Prof. 1995
Dec; 18(4): 408-27.
[FR Doc. 05-17034 Filed 8-25-05; 8:45 am]
BILLING CODE 4160-01-M