Changes to the National Registry of Evidence-Based Programs and Practices (NREPP), 13132-13155 [06-2313]
DEPARTMENT OF HEALTH AND
HUMAN SERVICES
Substance Abuse and Mental Health Services Administration
Changes to the National Registry of
Evidence-Based Programs and
Practices (NREPP)
AGENCY: Substance Abuse and Mental Health Services Administration, HHS.
ACTION: Notice.
SUMMARY: The Substance Abuse and
Mental Health Services Administration
(SAMHSA) is committed to preventing
the onset and reducing the progression
of mental illness, substance abuse, and
substance-related problems among all
individuals, including youth. As part of
this effort, SAMHSA has expanded and
refined the agency’s National Registry of
Evidence-based Programs and Practices
(NREPP) based on a systematic analysis
and consideration of public comments
received in response to a previous
Federal Register notice (70 FR 50381,
Aug. 26, 2005).
This Federal Register notice
summarizes SAMHSA’s redesign of
NREPP as a decision support tool for
promoting a greater adoption of
evidence-based interventions within
typical community-based settings, and
provides an opportunity for interested
parties to become familiar with the new
system.
FOR FURTHER INFORMATION CONTACT:
Kevin D. Hennessy, Ph.D., Science to
Service Coordinator/SAMHSA, 1 Choke
Cherry Road, Room 8–1017, Rockville,
MD 20857, (240) 276–2234.
Charles G. Curie,
Administrator, SAMHSA.
Advancing Evidence-Based Practice
Through Improved Decision Support
Tools: Reconceptualizing NREPP
Introduction
The Substance Abuse and Mental
Health Services Administration
(SAMHSA) strives to provide
communities with effective, high-quality, and cost-efficient prevention
and treatment services for mental and
substance use disorders. To meet this
goal, SAMHSA recognizes the needs of
a wide range of decisionmakers at the
local, state, and national levels to have
readily available and timely information
about scientifically established
interventions to prevent and/or treat
these disorders.
SAMHSA, through its Science to
Service Initiative, actively seeks to
promote Federal collaboration (e.g.,
with the National Institutes of Health
[NIH]) in translating research into
practice. The ideal outcome of this
Initiative is that individuals at risk for
or directly experiencing mental and
substance use disorders will be
more likely to receive appropriate
preventive or treatment services, and
that these services will be the most
effective and the highest quality that the
field has to offer.
This report provides a summary of
activities conducted during the past
year to critically evaluate SAMHSA’s
recent activities and future plans for the
National Registry of Evidence-based
Programs and Practices (NREPP). It
outlines the major themes that emerged
from a formal public comment process
and links this feedback to new review
procedures and Web-based decision
support tools that will enhance access to
evidence-based knowledge for multiple
audiences.
The report is presented in five sections:
• Section I briefly states the
background of NREPP and SAMHSA’s
recent request for public comments.
• Section II discusses the analysis of
comments that was conducted and
presents the key recommendations for
NREPP based on this analysis.
• Section III describes the new
approach that SAMHSA is advancing
for NREPP.
• Section IV presents the specific
dimensions of the NREPP system in its
new framework as a decision support
tool.
• Section V describes future activities
at SAMHSA to support NREPP.
I. Background: The National Registry of
Evidence-Based Programs and Practices
The National Registry of Evidence-based Programs and Practices was
designed to represent a key component
of the Science to Service Initiative. It
was intended to serve as a voluntary
rating and classification system to
identify programs and practices with a
strong scientific evidence base. An
important reason for developing NREPP
was to reduce the significant time lag
between the generation of scientific
knowledge and its application within
communities.1 Quality treatment and
prevention services depend on service
providers’ ability to access evidence-based scientific knowledge,
standardized protocols, practice
guidelines, and other practical
resources.
The precursor of NREPP, the National
Registry of Effective Prevention
Programs, was developed by SAMHSA’s
Center for Substance Abuse Prevention
(CSAP) as a way to help professionals in
the field become better consumers of
substance abuse prevention programs.
Through CSAP’s Model Program
Initiative, over 1,100 programs were
reviewed, and more than 150 were
designated as Model, Effective, or
Promising Programs.
Over the past 2 years, SAMHSA
convened a number of scientific panels
to explore the expansion of the NREPP
review system to include interventions
in all domains of mental health and
substance abuse prevention and
treatment. In addition, SAMHSA
committed itself to three guiding
principles—transparency, timeliness,
and accuracy of information—in the
development of an evidence-based
registry of programs and practices.
During this process it was determined
that, to provide the most transparent
and accurate information to the public,
evidence should be assessed at the level
of outcomes targeted by an intervention,
not at the more global level of
interventions or programs. Based on this
decision, SAMHSA’s current NREPP
contractor conducted a series of pilot
studies to explore the validity and
feasibility of applying an outcome-specific, 16-criteria evidence rating system to an expanded array of programs and practices. Through extensive dialogues with the prevention community, SAMHSA also explored ways to provide evidence-based reviews of population- and community-level interventions within NREPP.

1 As cited by the Institute of Medicine (2001), studies have suggested it takes an average of 17 years for research evidence to diffuse to clinical practice. Source: Balas, E.A., & Boren, S.A. (2000). Managing clinical knowledge for health care improvement. In: J. Bemmel & A.T. McCray (Eds.), Yearbook of medical informatics 2000: Patient-centered systems. Stuttgart, Germany: Schattauer.
In an effort to augment the
information gained through these
activities, SAMHSA solicited formal
public comments through a notice
posted in the Federal Register on
August 26, 2005. The notice asked for
responses to the agency’s plans for
NREPP, including (1) revisions to the
scientific review process and review
criteria; (2) the conveying of practical
implementation information about
NREPP programs and practices to those
who might purchase, provide, or receive
these interventions; and (3) the types of
additional agency activities that may be
needed to promote wider adoption of
interventions on NREPP, as well as
support innovative interventions
seeking NREPP status. A brief summary
of the public comments and key public
recommendations is presented in
Section II. The complete analysis of the
public responses is included in the
Appendix to this report.
II. Public Responses to the Federal
Register Notice
Senior staff at SAMHSA engaged in a
comprehensive review of comments
received in response to the Federal
Register notice. Particular attention was
directed to comments from prominent
state and Federal stakeholders,
including providers and policymakers,
who stand to be the most affected by
whatever system is ultimately
implemented. Efforts were taken to
balance SAMHSA’s responsiveness to
public feedback with the need to adhere
to rigorous standards of scientific
accuracy and to develop a system that
will be fair and equitable to multiple
stakeholder groups.
Recommendations for NREPP
In the more than 100 comments
received as part of the public comment
process, a number of recurring themes
and recommendations were identified.
While all specific and general
recommendations for modification of
the NREPP review process were
carefully considered by SAMHSA, the
following are those that were considered
most essential to the development of an
accurate, efficient, and equitable system
that can meet the needs of multiple
stakeholders:
• Limit the system to interventions
that have demonstrated behavioral
change outcomes. It is inherently
appealing to the funders, providers, and
consumers of prevention and treatment
services to know that an intervention
has a measurable effect on the actual
behavior of participants. As researchers
at the University of Washington
recommended, ‘‘the system should be
reserved for policies, programs, and
system-level changes that have
produced changes in actual drug use or
mental health outcomes.’’
• Rereview all existing programs.
There was near consensus among the
respondents to the notice that existing
programs with Model, Effective, and
Promising designations from the old
reviews should be rereviewed under the
new system. The Committee for
Children pointed out that ‘‘a
‘grandfather’ system may give the
impression to users, right or wrong, that
these interventions aren’t as good as
those that have undergone the new
review process.’’ One individual
suggested that programs and practices
needed to be rated ‘‘according to a
consistent set of criteria’’ so that ‘‘the
adoption of an intervention by a
provider can be made with confidence.’’
• Train and utilize panels of
reviewers with specific expertise related
to the intervention(s) under review.
Respondents to the notice noted that it
would be important for the NREPP
review process to utilize external
reviewers with relevant scientific and
practical expertise related to the
intervention being assessed. In addition,
the pool of available reviewers should
broadly include community-level and
individual-level prevention as well as
treatment perspectives. In order to
promote transparency of the review
process, the reviewer training protocols
should be available for review by the
public (e.g., posted on the NREPP Web
site).
• Provide more comprehensive and
balanced descriptions of evidence-based
practices, by emphasizing the important
dimension of readiness for
dissemination. The American
Psychological Association (APA)
Committee on Evidence-Based Practice
recommended greater emphasis on the
utility descriptors (i.e., those items
describing materials and resources to
support implementation), stating, ‘‘these
are key outcomes for implementation
and they are not adequately addressed
in the description of NREPP provided to
date. This underscores earlier concerns
noted about the transition from efficacy
to effectiveness.’’ The APA committee
noted that generalizability of programs
listed on NREPP will remain an issue
until this ‘‘gap between efficacy and
effectiveness’’ is explicitly addressed
under a revised review system.
• Avoid limiting flexibility and
innovation; implement a system that is
fair and inclusive of programs and
practices with limited funding, and
establish policies that seek to prevent
the misuse of information contained on
NREPP. The National Association for
Children of Alcoholics voiced this
concern: ‘‘It has been intrinsically unfair
that only grants [referring to NIH-funded
efforts] have been able to establish
‘evidence’ while many programs appear
very effective—often more effective in
some circumstances than NREPP
approved programs, but have not had
the Federal support or other major grant
support to evaluate them. The SAMHSA
grant programs continue to reinforce the
designation of NREPP programs in order
to qualify for funding, and the states
tend to strengthen this ‘stipulation’ to
local programs, who then drop good
(non-NREPP) work they have been
doing or purchase and manipulate
NREPP programs that make the grant
possible. This is not always in the best
interest of the client population to be
served.’’
• Recognize multiple ‘‘streams of
evidence’’ (e.g., researcher, practitioner,
and consumer) and the need to provide
information to a variety of stakeholders
in a decision support context. A number
of comments suggested that NREPP
should be more inclusive of the
practitioner and consumer perspective
on what defines evidence. For example,
one commenter noted: ‘‘The narrowed
interpretation of evidence-based
practice by SAMHSA focuses almost
solely on the research evidence to the
exclusion of clinical expertise and
patient values.’’ Several comments
noted that NREPP should be consistent
with the Institute of Medicine’s
definition of evidence-based practice,
which reflects multiple ‘‘streams of
evidence’’ that include research,
clinical, and patient perspectives.
• Provide a summary rating system
that reflects the continuous nature of
evidence quality. There was substantial
disagreement among those responding
to the notice concerning whether
NREPP should include multiple
categories of evidence quality. While a
number of individuals and
organizations argued for the use of
categorical evidence ratings, there were
many who suggested that NREPP should
provide an average, numeric scale rating
on specific evidence dimensions to
better reflect the ‘‘continuous nature of
evidence.’’ This approach would allow
the user of the system to determine what
level of evidence strength is required for
their particular application of an
intervention.
• Recognize the importance of
cultural diversity and provide complete
descriptive information on the
populations for which interventions
have been developed and applied. Most
comments reflected the knowledge that
cultural factors can play an important
role in determining the effectiveness of
interventions. The Oregon Office of
Mental Health and Addiction Services
noted, ‘‘SAMHSA should focus
considerable effort on identifying and
listing practices useful and applicable
for diverse populations and rural areas.
Providers and stakeholders from these
groups have repeatedly expressed the
concern they will be left behind if no
practices have been identified which fit
the need of their area. We need to take
particular care to ensure that their fear
is not realized.’’
• In addition to estimating the effect
size of intervention outcomes, NREPP
should include additional descriptive
information about the practical impacts
of programs and practices. In general,
comments suggested that effect size
should not be used as an exclusionary
criterion in NREPP. It was widely noted
that effect size estimates for certain
types of interventions (e.g., community-level or population-based) will tend to
be of smaller magnitude, and that
‘‘professionals in the field have not
reached consensus on how to use effect
size.’’ Researchers at the University of
Washington suggested the inclusion of
information about the reach of an
intervention, when available, as
complementary information to effect
sizes. Several comments also suggested
that effect size is often confused with
the clinical significance of an
intervention and its impact on
participants.
• Acknowledge the need to develop
additional mechanisms of Federal
support for technical assistance and the
development of a scientific evidence
base within local prevention and
treatment communities. Nearly one
third of the comments directly
addressed the need for SAMHSA to
identify and/or provide additional
technical assistance resources to
communities to help them adapt and
implement evidence-based practices.
The Oregon Office of Mental Health and
Addiction Services wrote, ‘‘The
adoption of new practices by any entity
is necessarily a complex and long-term
process. Many providers will need
technical support if adoption and
implementation is to be accomplished
effectively. Current resources are not
adequate to meet this challenge.’’
In order to align NREPP with the
important recommendations solicited
through the public comment process,
SAMHSA also recognized the
importance of the following goals:
• Provide a user-friendly, searchable
array of descriptive summary
information as well as reviewer ratings
of evidence quality.
• Provide an efficient and cost-effective system for the assessment and
review of prospective programs and
practices.
Section III, Streamlined Review
Procedures, provides a complete
description of the modified and
streamlined review process that
SAMHSA will adopt in conducting
evidence-based evaluations of mental
health and substance abuse
interventions.
III. Streamlined Review Procedures
The number and range of NREPP
reviews are likely to expand
significantly under the new review
system, requiring that SAMHSA
develop an efficient and cost-effective
review process. The streamlined review
procedures, protocols, and training
materials will be made available on the
NREPP Web site for access by all
interested individuals and
organizations.
Reviews of interventions will be
facilitated by doctoral-level Review
Coordinators employed by the NREPP
contractor. Each Review Coordinator
will support two external reviewers who
will assign numeric, criterion-based
ratings on the dimensions of Strength of
Evidence and Readiness for
Dissemination. Review Coordinators
will provide four important support and
facilitative functions within the peer
review process: (1) They will assess
incoming applications for the
thoroughness of documentation related
to the intervention, including
documentation of significant outcomes,
and will convey summaries of this
information to SAMHSA Center
Directors for their use in prioritizing
interventions for review; (2) they will
serve as the primary liaison with the
applicant to expedite the review of
interventions; (3) they will collaborate
with the NREPP applicant to draft the
descriptive dimensions for the
intervention summaries; and (4) they
will provide summary materials and
guidance to external reviewers to
facilitate initial review and consensus
discussions of intervention ratings.
Interventions Qualifying for Review
While NREPP will retain its open
submission policy, the new review
system emphasizes the important role of
SAMHSA’s Center Directors and their
staff (in consultation with key
stakeholders) in setting intervention
review priorities that will identify the
particular content areas, types of
intervention approaches, populations,
or even types of research designs that
will qualify for review under NREPP.
Under the streamlined review
procedures, the sole requirement for
potential inclusion in the NREPP review
process is for an intervention to have
demonstrated one or more significant
behavioral change outcomes. Center-specific review priorities will be established and communicated to the field by posting them to the NREPP Web site at the beginning of each fiscal year.2

2 Except for FY06, when priorities will be established and posted when the new system Web site is launched (i.e., within the third FY quarter).
Review of Existing NREPP Programs and
Practices
It will be the prerogative of SAMHSA
Center Directors to establish priorities
for the review of interventions already
on, and pending entry on, NREPP. As
indicated above, these decisions may be
linked to particular approaches,
populations, or strategic objectives as
identified by SAMHSA as priority areas.
Until reviews of existing NREPP
programs and practices are completed
and posted to the new NREPP Web site,
the current listing on the SAMHSA
Model Programs Web site will remain
intact.
Notifications to Program/Practice
Developers
Upon the completion of NREPP reviews, program/practice developers (or principal investigators of a research-based intervention) will be notified in writing within 2 weeks of the review results.
results. A complete summary,
highlighting information from each of
the descriptive and rating dimensions,
will be provided for review. Program/
practice developers who disagree with
the descriptive information or ratings
contained in any of the dimensions will
have an opportunity to discuss their
concerns with the NREPP contractor
during the 2-week period following
receipt of the review outcome
notification. These concerns must be
expressed in writing to the contractor
within this 2-week period. If no
comments are received, the review is
deemed completed, and the results may
be posted to the NREPP Web site. If
points of disagreement cannot be
resolved by the end of this 2-week
period, then written appeals for a
rereview of the intervention may be
considered on a case-by-case basis.
NREPP Technical Expert Panel
SAMHSA will organize one or more expert panels to perform periodic (e.g., annual) assessments of the evidence review system and recommend enhancements to the review procedures and/or standards for evidence-based science and practice.
Panel membership will represent a
balance of perspectives and expertise.
The panels will be comprised of
researchers with knowledge of
evidence-based practices and initiatives,
policymakers, program planners and
funders, practitioners, and consumers.
The modified NREPP system
embodies a commitment by SAMHSA
and its Science to Service Initiative to
broaden the appeal and utility of the
system to multiple audiences. While
maintaining the focus on the
documented outcomes achieved through
a program or practice, NREPP also is
being developed as a user-friendly
decision support tool to present
information along multiple dimensions
of evidence. Under the new system,
interventions will not receive single,
overall ratings as was the case with the
previous NREPP (e.g., Model, Effective,
or Promising). Instead, an array of
information from multiple evidence
dimensions will be provided to allow
different user audiences to both identify
(through Web-searchable means) and
prioritize the factors that are important
to them in assessing the relative
strengths of different evidence-based
approaches to prevention or treatment
services.
Section IV presents in more detail the
specific dimensions of descriptive
information and ratings that NREPP will
offer under this new framework.
IV. NREPP Decision Support Tool
Dimensions
The NREPP system will support
evidence-based decisionmaking by
providing a wide array of information
across multiple dimensions. Many of
these are brief descriptive dimensions
that will allow users to identify and
search for key intervention attributes of
interest. Descriptive dimensions would
frequently include a brief, searchable
keyword or attribute (e.g., ‘‘randomized
control trial’’ under the Evaluation
Design dimension) in addition to
narrative text describing that dimension.
Two dimensions, Strength of Evidence
and Readiness for Dissemination, will
consist of quantitative, criterion-based
ratings by reviewers. These quantitative
ratings will be accompanied by reviewer
narratives summarizing the strengths
and weaknesses of the intervention
along each dimension.
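To make this arrangement concrete, the following is a minimal sketch, in Python, of how a single NREPP entry might combine searchable descriptive dimensions with the two reviewer-rated dimensions just described. All field names and types here are illustrative assumptions, not SAMHSA's actual schema.

```python
# Hypothetical sketch of an NREPP entry's structure; field names and types
# are illustrative assumptions, not SAMHSA's actual schema.
from dataclasses import dataclass, field

@dataclass
class DescriptiveDimension:
    name: str            # e.g., "Evaluation Design"
    keywords: list[str]  # searchable attributes, e.g., ["randomized control trial"]
    narrative: str       # free-text description of the dimension

@dataclass
class RatedDimension:
    name: str                            # "Strength of Evidence" or "Readiness for Dissemination"
    criterion_ratings: dict[str, float]  # average 0-4 rating per criterion, across reviewers
    reviewer_narrative: str              # summary of strengths and weaknesses

@dataclass
class NreppEntry:
    intervention_name: str
    outcomes: list[str]  # searchable behavioral outcomes targeted by the intervention
    descriptive: list[DescriptiveDimension] = field(default_factory=list)
    rated: list[RatedDimension] = field(default_factory=list)
```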
Considerations for Using NREPP as a
Decision Support Tool
It is essential for end-users to
understand that the descriptive
information and ratings provided by
NREPP are only useful within a much
broader context that incorporates a wide
range of perspectives—including
clinical, consumer, administrative,
fiscal, organizational, and policy—into
decisions regarding the identification,
selection, and successful
implementation of evidence-based
services. In fact, an emerging body of
literature on implementation science 3
suggests that a failure to carefully attend
to this broader array of data and
perspectives may well lead to
disappointing or unsuccessful efforts to
adopt evidence-based interventions.
Because each NREPP user is likely to be
seeking somewhat different information,
and for varied purposes, it is unlikely
that any single intervention included on
NREPP will fulfill all of the specific
requirements and unique circumstances
of a given end-user. Appreciation of this
basic premise of NREPP as a decision
support tool to be utilized in a broader
context will thus enable system users to
make their own determinations
regarding how best to assess and apply
the information provided.
The NREPP decision support
dimensions include:
• Descriptive Dimensions
• Strength of Evidence Dimension
Ratings
• Readiness for Dissemination
Dimension Ratings
A complete description of these
dimensions is provided in the sections
below.
Descriptive Dimensions
• Intervention Name and Summary:
Provides a brief summary of the
intervention, including title, description
of conceptual or theoretical foundations,
and overall goals. Hyperlinks to graphic
logic model(s), when available, could be
accessed from this part of the summary.
• Contact Information: Lists key
contact information. Typically will
include intervention developer’s title(s),
affiliation, mailing address, telephone
and fax numbers, e-mail address, and
Web site address.
• Outcome(s): A searchable listing of
the behavioral outcomes that the
intervention has targeted.
• Effects and Impact: Provides a
description and quantification of the
effects observed for each outcome.
Includes information on the statistical significance of outcomes, the magnitude of changes reported including effect size and measures of clinical significance (if available), and the typical duration of behavioral changes produced by the intervention.

3 Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231). Rogers, E.M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.
• Relevant Populations and Settings:
Identifies the populations and sample
demographics that characterize existing
evaluations. The settings in which
different populations have been
evaluated will be characterized along a
dimension that ranges from highly
controlled and selective (i.e., efficacy
studies), to less controlled and more
representative (i.e., effectiveness
studies), to adoption in the most diverse
and realistic public health and clinical
settings (i.e., dissemination studies).4
• Costs: Provides a breakdown of intervention cost(s) per recipient/participant or per year, as appropriate, including capital costs, other direct costs (travel, etc.), and start-up costs such as staff training and development. A standardized template would be provided to applicants for estimating and summarizing the implementation and maintenance costs of an intervention.
• Adverse Effects: Reported with
regard to type and number, amounts of
change reported, type of data collection,
analyses used, intervention and
comparison group, and subgroups.
• Evaluation Design: Contains both a
searchable index of specific
experimental and quasi-experimental
designs (e.g., pre-/posttest
nonequivalent groups designs,
regression-discontinuity designs,
interrupted time series designs, etc.)5 as well as a narrative description of the design (including intervention and comparison group descriptions) used to document intervention outcomes.

4 For more description of these types of studies and their role in supporting evidence-based services, see the report: Bridging science and service: A report by the National Advisory Mental Health Council's Clinical Treatment and Services Research Workgroup (https://www.nimh.nih.gov/publicat/nimhbridge.pdf).

5 Campbell, D.T., & Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
• Replication(s): Coded as ‘‘None,’’ or
will state the number of replications to
date (only those that have been
evaluated for outcomes). Replications
will be additionally characterized as
having been conducted in efficacy,
effectiveness, or dissemination contexts.
• Proprietary or Public Domain
Intervention: Typically will be one or
the other, but proprietary components
or instruments used as part of an
intervention will be identified.
• Cultural Appropriateness: Coded as ''Not Available'' (N/A) if either no data or no implementation/training materials for particular culturally identified groups are available. When culture-specific data and/or implementation materials exist for one or more groups, the following two Yes/No questions will be provided for each group (a coding sketch follows this list):
• Was the intervention developed
with participation by members of the
culturally identified group?
• Are intervention and training
materials translated or adapted to
members of the culturally identified
group?
• Implementation History: Provides
information relevant to the
sustainability of interventions. Provides
descriptive information on (1) the
number of sites that have implemented
the intervention; (2) how many of those
have been evaluated for outcomes; (3)
the longest continuous length of
implementation (in years); (4) the
average or modal length of
implementation; and (5) the
approximate number of individuals who have received or participated in the intervention.
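As flagged in the Cultural Appropriateness item above, that coding rule is simple enough to express as a short sketch. The function and field names below are hypothetical, one illustrative reading of the rule rather than SAMHSA's implementation.

```python
# Illustrative sketch of the Cultural Appropriateness coding rule described
# above; function and field names are hypothetical, not SAMHSA's.

def code_cultural_appropriateness(groups):
    """groups: one dict per culturally identified group for which
    culture-specific data and/or implementation materials exist, e.g.
    {"name": "...", "developed_with_participation": True,
     "materials_translated_or_adapted": False}"""
    if not groups:
        # No culture-specific data or implementation/training materials exist.
        return "N/A"
    return {
        g["name"]: {
            "developed_with_group_participation":
                "Yes" if g["developed_with_participation"] else "No",
            "materials_translated_or_adapted":
                "Yes" if g["materials_translated_or_adapted"] else "No",
        }
        for g in groups
    }

# Example: one group with participation but untranslated materials.
print(code_cultural_appropriateness(
    [{"name": "Group A", "developed_with_participation": True,
      "materials_translated_or_adapted": False}]))
```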
Strength of Evidence Dimension Ratings
Quantitative, reviewer-based ratings
on this dimension will be provided
within specific categories of research/
evaluation design. In this manner, users
can search and select within those
categories of research designs that are
most relevant to their particular
standards of evidence-based knowledge.
The categories of research design that
are accepted within the NREPP system
are described below.
Research Design
Quality of evidence for an
intervention depends on the strength of
adequately implemented research
design controls, including comparison
conditions for quasi-experimental and
randomized experimental designs
(individual studies). Aggregation (e.g.,
meta-analysis and systematic research
reviews) and/or replication across well-designed series of quasi-experimental
and randomized control studies provide
the strongest evidence. The evidence
pyramid presented below represents a
typical hierarchy for classifying the
strength of causal inferences that can be
obtained by implementing various
research designs with rigor.6 Designs at
the lowest level of the evidence pyramid
(i.e., observational, pilot, or case
studies), while acceptable as evidence
in some knowledge development
contexts, would not be included in the
NREPP system.
6 Biglan, A., Mrazek, P., Carnine, D.W., & Flay, B.
R. (2003). The integration of research and practice
in the prevention of youth problem behaviors.
American Psychologist, 58, 433–440.
Chambless, D. L., & Hollon, S. (1998). Defining
empirically supported therapies. Journal of
Consulting and Clinical Psychology, 66, 7–18.
Gray, J.A. (1997). Evidence-based healthcare:
How to make health policy and management
decisions. New York: Churchill Livingstone.
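In place of the pyramid graphic, which could not be carried over into this text version, the short sketch below reconstructs the hierarchy it depicted, using only the tiers the preceding paragraph names explicitly (strongest evidence first); any finer orderings shown in the original figure are not reproduced here.

```python
# Reconstruction of the omitted evidence pyramid, inferred from the
# surrounding text; tiers are listed strongest first.
EVIDENCE_TIERS = [
    "Aggregation (meta-analysis, systematic reviews) and/or replication "
    "across well-designed quasi-experimental and randomized studies",
    "Individual quasi-experimental and randomized experimental designs "
    "with adequately implemented design controls, including comparison conditions",
    "Observational, pilot, or case studies (not included in the NREPP system)",
]
```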
1. Reliability 7

Outcome measures should have acceptable reliability to be interpretable. ''Acceptable'' here means reliability at a level that is conventionally accepted by experts in the field.8

0 = Absence of evidence of reliability, or evidence that some relevant types of reliability (e.g., test-retest, interrater, interitem) did not reach acceptable levels.

2 = All relevant types of reliability have been documented to be at acceptable levels in studies by the applicant.

4 = All relevant types of reliability have been documented to be at acceptable levels in studies by independent investigators.

7 Each criterion would be rated on an ordinal scale ranging from 0 to 4. The endpoints and midpoints of the scale would be anchored to a narrative description of that rating. The remaining integer points of the scale (i.e., 1 and 3) would not be explicitly anchored, but could be used by reviewers to assign intermediate ratings at their discretion.

8 Marshall, M., Lockwood, A., Bradley, C., Adams, C., Joy, C., & Fenton, M. (2000). Unpublished rating scales: A major source of bias in randomised controlled trials of treatments for schizophrenia. British Journal of Psychiatry, 176, 249–252.

2. Validity

Outcome measures should have acceptable validity to be interpretable. ''Acceptable'' here means validity at a level that is conventionally accepted by experts in the field.

0 = Absence of evidence of measure validity, or some evidence that the measure is not valid.

2 = Measure has face validity; absence of evidence that measure is not valid.

4 = Measure has one or more acceptable forms of criterion-related validity (correlation with appropriate, validated measures or objective criteria); OR, for objective measures of response, there are procedural checks to confirm data validity; absence of evidence that measure is not valid.

3. Intervention Fidelity

The ''experimental'' intervention implemented in a study should have fidelity to the intervention proposed by the applicant. Instruments that have tested acceptable psychometric properties (e.g., interrater reliability, validity as shown by positive association with outcomes) provide the highest level of evidence.

0 = Absence of evidence, or only narrative evidence, that the applicant or provider believes the intervention was implemented with acceptable fidelity.

2 = There is evidence of acceptable fidelity in the form of judgment(s) by experts, systematic collection of data (e.g., dosage, time spent in training, adherence to guidelines or a manual), or a fidelity measure with unspecified or unknown psychometric properties.

4 = There is evidence of acceptable fidelity from a tested fidelity instrument shown to have reliability and validity.

4. Missing Data and Attrition

Study results can be biased by participant attrition and other forms of missing data. Statistical methods as supported by theory and research can be employed to control for missing data and attrition that would bias results, but studies with no attrition needing adjustment provide the strongest evidence that results are not biased.

0 = Missing data and attrition were taken into account inadequately, OR there was too much to control for bias.

2 = Missing data and attrition were taken into account by simple estimates of data and observations, or by demonstrations of similarity between remaining participants and those lost to attrition.

4 = Attrition was taken into account by more sophisticated methods that model missing data, observations, or participants; OR there was no attrition needing adjustment.

5. Potential Confounding Variables

Often variables other than the intervention may account for the reported outcomes. The degree to which confounds are accounted for affects the strength of causal inference.

0 = Confounding variables or factors were as likely to account for the outcome(s) reported as were hypothesized causes.

2 = One or more potential confounding variables or factors were not completely addressed, but the intervention appears more likely than these confounding factors to account for the outcome(s) reported.

4 = All known potential confounding variables appear to have been completely addressed in order to allow causal inference between intervention and outcome(s) reported.

6. Appropriateness of Analyses

Appropriate analysis is necessary to make an inference that an intervention caused reported outcomes.

0 = Analyses were not appropriate for inferring relationships between intervention and outcome, OR the sample size was inadequate.

2 = Some analyses may not have been appropriate for inferring relationships between intervention and outcome, OR the sample size may have been inadequate.

4 = Analyses were appropriate for inferring relationships between intervention and outcome. Sample size and power were adequate.

Readiness for Dissemination Dimension Ratings

1. Availability of Implementation Materials (e.g., Treatment Manuals, Brochures, Information for Administrators, etc.)

0 = Applicant has insufficient implementation materials.

2 = Applicant has provided a limited range of implementation materials, or a comprehensive range of materials of varying or limited quality.

4 = Applicant has provided a comprehensive range of standard implementation materials of apparent high quality.

2. Availability of Training and Support Resources

0 = Applicant has limited or no training and support resources.

2 = Applicant provides training and support resources that are partially adequate to support initial and ongoing implementation.

4 = Applicant provides training and support resources that are fully adequate to support initial and ongoing implementation (tested training curricula, mechanisms for ongoing supervision and consultation).

3. Quality Improvement (QI) Materials (e.g., Fidelity Measures, Outcome and Performance Measures, Manuals on How To Provide QI Feedback and Improve Practices)

0 = Applicant has limited or no materials.

2 = Applicant has materials that are partially adequate to support initial and ongoing implementation.

4 = Applicant provides resources that are fully adequate to support initial and ongoing implementation (tested quality fidelity and outcome measures, comprehensive and user-friendly QI materials).

Scoring the Strength of Evidence and Readiness for Dissemination Dimensions

The ratings for the decision support dimensions of Strength of Evidence and Readiness for Dissemination are calculated by averaging individual rating criteria that have been scored by reviewers according to a uniform five-point scale. For these two quantitative dimensions, the average score on each dimension (i.e., across criteria and reviewers) as well as the average score for each rating criterion (across reviewers) will be provided on the Web site for each outcome targeted by the intervention.9

9 Note that it is unlikely that the Readiness for Dissemination dimension will vary by targeted outcome(s), insofar as the materials and resources are usually program specific as opposed to outcome specific.
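The scoring rule just described lends itself to a short illustration. The sketch below uses hypothetical 0-4 ratings from two reviewers for one targeted outcome (it is not SAMHSA's code) and computes the per-criterion averages across reviewers and the overall dimension average across criteria and reviewers, per the description above and footnote 7.

```python
# Minimal sketch of the scoring rule described above, with hypothetical
# 0-4 ratings from two reviewers for one targeted outcome.
from statistics import mean

# {criterion: [one 0-4 score per reviewer]} for the Strength of Evidence dimension
strength_of_evidence = {
    "reliability": [4, 2],
    "validity": [2, 2],
    "intervention_fidelity": [4, 4],
    "missing_data_and_attrition": [2, 0],
    "potential_confounding_variables": [2, 2],
    "appropriateness_of_analyses": [4, 2],
}

# Average score for each rating criterion, across reviewers.
criterion_averages = {c: mean(scores) for c, scores in strength_of_evidence.items()}

# Average score on the dimension, across criteria and reviewers.
dimension_average = mean(s for scores in strength_of_evidence.values() for s in scores)

print(criterion_averages)           # per-criterion averages, e.g., {"reliability": 3, ...}
print(round(dimension_average, 2))  # 2.5 for the hypothetical ratings above
```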
V. Future Activities: Implementing and Sustaining a Streamlined NREPP

SAMHSA plans to initiate reviews using the new NREPP review process and procedures in summer 2006. The precise number and characteristics of new interventions that will be prioritized for the first series of reviews have yet to be determined. SAMHSA anticipates that many of the existing programs and practices currently listed on the SAMHSA Model Programs Web site will undergo an expedited set of reviews using the new system. Regardless, the current Model Programs Web site will remain intact until all relevant programs have been included in a new Web site, https://www.nationalregistry.samhsa.gov.
The identification of collaborative
mechanisms for supporting the
continued development and refinement
of NREPP will represent a SAMHSA
priority in 2006. SAMHSA will explore
means for providing adequate technical
assistance resources to communities
seeking to initiate and/or augment
evidence-based practices. In addition,
appropriate technical advisors and other
scientific resources will be utilized to
assure the continued evolution of
NREPP as a state-of-the-art decision
support tool.
Appendix: Analysis of Public
Comments in Response to Federal
Register Notice
Background and Overview
The Substance Abuse and Mental
Health Services Administration
(SAMHSA), through its Science to
Service initiative, develops tools and
resources for providers of prevention
and treatment services to facilitate
evidence-based decisionmaking and
practice. An important informational
resource is the National Registry of
Evidence-based Programs and Practices
(NREPP). NREPP is a voluntary rating
and classification system designed to
provide the public with reliable
information on the scientific basis and
practicality of interventions designed to
prevent and/or treat mental and
addictive disorders. NREPP originated
in SAMHSA’s Center for Substance
Abuse Prevention (CSAP) in 1997 as a
way to help professionals in the field
become better consumers of prevention
programs. The program was expanded
in 2004 to include substance abuse
treatment interventions within
SAMHSA’s Center for Substance Abuse
Treatment (CSAT) and mental health
promotion and treatment interventions
within the Center for Mental Health
Services (CMHS).
During the past 2 years, SAMHSA
reviewed existing evidence rating
systems and developed and pilot-tested
a revised approach to the rating of
specific outcomes achieved by programs
and practices. This development effort
led SAMHSA to propose 16 evidence
rating criteria as well as a set of
proposed utility descriptors to describe
the potential of a given intervention to
be ‘‘transported’’ to real-world settings
and populations.
Considering the prominence of
NREPP within its Science-to-Service
initiative and the potential impact of
NREPP on the research and provider
communities, SAMHSA announced a
formal request for public comments in
the Federal Register on August 26, 2005
(70 FR 165, 50381–50390) with a 60-day
public comment period ending October
26, 2005. The notice outlined in some
detail the proposed review system,
including scientific criteria for evidence
reviews, the screening and triage of
NREPP applications, and the
identification by SAMHSA of priority
review areas. The notice invited general
as well as specific comments and
included 11 questions soliciting targeted
feedback. By request of the SAMHSA
Project Officer, MANILA Consulting
Group coded and analyzed the
responses received to the 11 questions posted in the Federal Register notice. The results of the analysis are
presented below.
Method
A total of 135 respondents submitted
comments via e-mail, fax, and postal
mail during the comment period. Of
these 135 respondents, 109 (81%)
answered at least some of the 11
questions posted in the Federal Register
notice.
Respondents
The 135 respondents included 53
providers, 36 researchers, 4 consumers,
21 respondents with multiple roles, and
21 with unknown roles vis-à-vis
NREPP. Respondents were labeled as
having one or more of the following
domains of interest: substance abuse
prevention (N=68), substance abuse
treatment (N=48), mental health
promotion (N=22); and mental health
treatment (N=20). The domain of
interest was unknown for 33
respondents. The respondents
represented 16 national organizations,
10 state organizations, and 14 local
organizations; 90 were private citizens;
and 5 were individuals with unknown
affiliations. Fifty-one respondents (38%)
were labeled ‘‘noteworthy’’ at the
request of the SAMHSA Project Officer.
Noteworthy respondents included those
representing national or state
governments or national organizations,
and nationally known experts in
substance abuse or mental health
research or policy.
Twenty-six responses were judged by
the four MANILA coders and the
SAMHSA Project Officer to contain no
information relevant to the 11 questions
in the notice. These responses, labeled
‘‘unanalyzable’’ for the purposes of this
report, could be categorized as follows:
• Mentioned topics related to
SAMHSA but made no point relevant to
the questions posted in the Federal
Register notice (N=10);
• Mentioned only topics unrelated to
SAMHSA or incoherent text (N=7);
• Asked general questions about
NREPP and the Federal Register notice
(N=4);
• Wanted to submit a program for
NREPP review (N=4); and
• Responded to another Federal
Register notice (N=1).
Procedure
Before coding began, responses were
read to identify recurrent themes to
include in the codebook (presented in
Subpart A of this Appendix). Using this
codebook, each submission was then
assigned codes identifying respondent
characteristics (name, location, domain
of interest, affiliation/type of
organization, functional role, and level
of response) and the content or topical
themes contained in the response. One
pair of coders coded the respondent
data, while another pair coded the
content. Content coding was conducted
by two doctoral-level psychologists with
extensive training and experience in
social science research and
methodology.
Each response could be assigned
multiple codes for content. Coders
compared their initial code assignments
for all responses, discussed reasons for
their code assignments when there were
discrepancies, and then decided upon
final code assignments. In many cases,
coders initially assigned different codes
but upon discussion agreed that both
coders’ assignments were applicable.
Coding assignments were ultimately
unanimous for all text in all responses.
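As a concrete illustration of this reconciliation step, the sketch below shows one way two coders' multi-code assignments could be merged after discussion. It is a hypothetical rendering of the process described above (in many cases both coders' codes were judged applicable, i.e., the union was kept), not MANILA's actual procedure, and the code labels are invented.

```python
# Hypothetical sketch of the consensus-coding step described above: each
# coder assigns a set of content codes to a response; after discussion,
# codes judged applicable by either coder are retained.

def reconcile(codes_a: set[str], codes_b: set[str]) -> set[str]:
    """Final codes after consensus discussion, modeled here as the union."""
    return codes_a | codes_b

print(reconcile({"impact_on_innovation", "provider_factors"},
                {"provider_factors", "funding_exclusion"}))
```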
Results
The following discussion of key
themes in the public comments is
presented in order of the 11 questions
from the Federal Register notice. Tables
containing detailed frequencies of
themes in the comments and other
descriptive information are provided in
Subpart B.
Comments Addressing Question 1
Question 1. ‘‘SAMHSA is seeking to
establish an objective, transparent, efficient,
and scientifically defensible process for
identifying effective, evidence-based
interventions to prevent and/or treat mental
and substance use disorders. Is the proposed
NREPP system—including the suggested
provisions for screening and triage of
applications, as well as potential appeals by
applicants—likely to accomplish these
goals?’’
Respondents submitted a wide range
of comments addressing Question 1.
Highlights of these comments are
presented below, organized by topic as
follows:
1. Individual-Level Criteria
2. Population-, Policy-, and System-Level Criteria
3. Utility Descriptors
4. Exclusion From NREPP Due to Lack
of Funding
5. Potential Impact on Minority
Populations
6. Potential Impact on Innovation
7. Provider Factors
8. Other Agencies’ Standards and
Resources
9. Reliance on Intervention
Developers To Submit Applications
10. Generalizability
11. Other Themes and Notable
Comments
1. Individual-Level Criteria
Number of respondents: 24 (22%).
Recommendations made by
respondents included adding cost
feasibility as a 13th criterion (one
respondent) and scoring all criteria
equally (two respondents). Comments
regarding specific criteria are presented
in Subpart C.
2. Population-, Policy-, and System-Level Criteria
Number of respondents: 29 (27%).
Comments on specific criteria are
presented in Subpart D. Highlights of
comments on more general issues are
presented below.
Differences in Evaluation Approaches
for Individual-Level and Population-,
Policy-, and System-Level Outcomes
Two respondents noted the proposed
NREPP approach does not acknowledge
key differences between evaluating
individual-level outcomes and
population-, policy-, and system-level
outcomes. One of these respondents
argued that NREPP is based on theories
of change that operate only at the
individual level of analysis, with the
assumption that discrete causes lead to
discrete effects, and therefore ‘‘many of
the NREPP criteria appear to be
insufficient or inappropriate for
determining the validity of community-based interventions and their context-dependent effects.''
Unclear What Interventions Are of
Interest to NREPP
One organization, Community Anti-Drug Coalitions of America,
recommended that SAMHSA present a
clear, operational definition of the types
of interventions it wants to include in
NREPP.
Match Scale to Individual-Level
Outcomes
Twelve respondents, including the
Society for Prevention Research and a
group of researchers from a major
university, recommended that the same
scale be used for outcomes at the
individual level as for the population,
policy, and system levels.
Add Attrition Criterion
The same group of university
researchers suggested adding attrition as
a 13th criterion to the rating criteria for
studies of population outcomes. They
noted, ‘‘Just as attention to attrition of
individuals from conditions is essential
in individual-level studies, attention to
attrition of groups or communities from
studies is essential in group-level
studies. This is necessary in order to
assess attrition as a possible threat to the
validity of the claim that the
population-, policy-, or system-level
intervention produced observed
outcomes.’’
Include Only Interventions That Change
Behavior
It was recommended that NREPP only
include interventions proven to change
behavior. A group of university
researchers noted:
As currently described, these outcomes
refer to implementation of changes in policy
or community service systems, not to
changes in behavioral outcomes themselves.
In fact, as currently described, the policy or
system change would not be required to
show any effects on behavior in order to be
included in NREPP. This is a serious mistake.
The NREPP system should be reserved for
policies, programs, and system-level changes
that have produced changes in actual drug
use or mental health outcomes.
3. Utility Descriptors
Number of respondents: 15 (14%).
Only one respondent, the Committee
for Children, recommended specific
changes to the utility descriptors. Their
comments are presented in Subpart E of
this Appendix.
Seven other respondents
recommended using utility descriptors
in some way to score programs. The
American Psychological Association
(APA) Committee on Evidence-Based
Practice recommended more emphasis
on the utility descriptors ‘‘as these are
key outcomes for implementation and
they are not adequately addressed in the
description of NREPP provided to date.
This underscores earlier concerns noted
about the transition from effectiveness
to efficacy.’’
4. Exclusion From NREPP Due To Lack
of Funding
Number of respondents: 28 (26%).
The possibility that NREPP will
exclude programs due to lack of funding
was a concern voiced by several
organizations, including the National
Association for Children of Alcoholics,
the APA Committee on Evidence-Based
Practice, the National Association of
State Alcohol and Drug Abuse Directors,
Community Anti-Drug Coalitions of
America, and the California Association
of Alcohol and Drug Program
Executives. The National Association
for Children of Alcoholics provided the
following comment:
NREPP should establish differing criteria
for projects that collected data with [National
Institutes of Health] grant funds and projects
that collected data with no or very small
amounts of funds. It has been intrinsically
unfair that only grants have been able to
establish ‘‘evidence’’ while many programs
appear very effective—often more effective in
some circumstances than NREPP approved
programs—but have not had the Federal
support or other major grant support to
evaluate them. The SAMHSA grant programs
continue to reinforce the designation of
NREPP programs in order to qualify for
funding, and the states tend to strengthen
this ‘stipulation’ to local programs, who then
drop good (non-NREPP) work they have been
doing or purchase and manipulate NREPP
programs that make the grant possible. This
is not always in the best interest of the client
population to be served.
Another key concern was that funding
for replication research is rarely
available. Several respondents suggested
that SAMHSA consider funding
evaluation research, and many argued
that the lack of funding resources could
negatively impact minority populations
or inhibit treatment innovation. The
latter two themes were frequent enough
to be coded and analyzed separately.
Results are summarized in the following
sections.
5. Potential Impact on Minority
Populations
Number of respondents: 13 (12%).
Thirteen respondents noted that the
proposed NREPP approach could
negatively impact specific populations,
including minority client populations.
The Federation of Families for
Children’s Mental Health suggested that
NREPP would effectively promote
certain practices ‘‘simply because the
resources for promotion, training,
evaluation are readily accessible * * *
thus widening the expanse and
disparities that currently exist.’’
Another frequently noted concern was
that evidence-based practices are
currently too narrowly defined, and
thus as more funding sources begin to
require evidence-based practices as a
prerequisite for funding, some ethnic or
racial minority organizations may be
excluded from funding. One respondent
also pointed to potential validity
concerns, noting that ‘‘Very little
clinical trial evidence is available for
how to treat substance use disorders in
specific populations who may constitute
most or all of those seen in particular
agencies: HIV-positive patients, Native
Americans, adolescents, Hispanics, or
African Americans. Although it is
unreasonable to expect all EBTs to be
tested with all populations, the external
validity of existing studies remains a
serious concern.’’ For these reasons, many
respondents surmised that the widespread
application of interventions developed in
research contexts that tend to underrepresent
minority and/or underserved populations
could ultimately result in decreased cultural
competence among service providers.
6. Potential Impact on Innovation
Number of respondents: 21 (19%).
Twenty-one respondents cited
concerns that the proposed NREPP
approach could hamper innovation.
CAADPE noted that its main concerns
were ‘‘the focus on the premise that
treatment will improve if confined to
interventions for which a certain type of
research evidence is available’’ and ‘‘the
issue of ‘branding,’ which could lead to
some of our most innovative and
effective small scale providers being
eliminated from funding
considerations.’’
One respondent suggested that lists of
evidence-based treatments could ‘‘ossify
research and practice, and thus become
self-fulfilling prophecies * * * stifling
innovation and the validation of
existing alternatives.’’ Several
respondents observed that the potential
for stifling innovation is even greater
given that SAMHSA’s NREPP is not the
only list of evidence-based practices
used by funders.
The APA Practice Organization
recommended that NREPP focus on
‘‘developing and promoting a range of
more accessible and less stigmatized
services that are responsive to
consumers’ needs and preferences, and
offer more extensive care
opportunities.’’
7. Provider Factors
Number of respondents: 22 (20%).
A number of respondents noted that the
proposed NREPP approach does not
acknowledge provider effects on
treatment outcomes. The APA
Committee on Evidence-Based Practice
wrote, ‘‘Relationship factors in a
therapeutic process may be more
important than specific interventions
and may in fact be the largest
determinant in psychotherapy outcome
(see Lambert & Barley, 2002). How will
NREPP address this concern and make
this apparent to users?’’
Another respondent cited the Institute
of Medicine’s definition of evidence-
based practice as ‘‘the integration of the
best research evidence with clinical
expertise and client values,’’ noting that
‘‘The narrowed interpretation of
evidence-based practice by SAMHSA
focuses almost solely on the research
evidence to the exclusion of clinical
expertise and patient values.’’
Several respondents suggested that
NREPP could place too much emphasis
on highly prescriptive, manualized
treatments. Counselors can become
bored when they are not able to ‘‘tinker’’
with or adapt treatments. In addition,
making minor modifications may
actually make treatments more effective
with different population groups.
8. Other Agencies’ Standards and
Resources
Number of respondents: 27 (25%).
Nineteen respondents suggested that,
in developing NREPP, SAMHSA should
consult other agencies’ standards and
resources related to evidence-based
practices—for example, the standards
published by the APA, the American
Society of Addiction Medicine, and the
Society for Prevention Research. One
respondent suggested consulting with
National Institutes of Health scientists
about approaches for aggregating
evidence; another recommended
including in NREPP model programs
identified by other agencies. One
respondent submitted a bibliography of
references for assessing the rigor of
qualitative research.
One respondent suggested that
SAMHSA did not provide other
institutions the opportunity to provide
input on the development of NREPP
prior to the request for public
comments.
9. Reliance on Intervention Developers
To Submit Applications
Number of respondents: 4 (4%).
Four respondents cited problems with
NREPP’s reliance on intervention
developers to submit applications, and
suggested that literature reviews instead
be used to identify programs eligible for
NREPP. One private citizen wrote, ‘‘If
no one applies on behalf of a treatment
method, is that one ignored? Why not
simply start with the literature and
identify treatment methods with
adequate evidence of efficacy?’’
Another respondent observed that
requiring an application creates a bias
toward programs with advocates ‘‘either
ideologically or because of a vested
interest in sales, visibility, and profits.
An alternative is to select interventions
for NREPP consideration solely by
monitoring the peer-reviewed published
literature, and including them
regardless of whether or not the scientist
responds or furthers the registration
process.’’
The Society for Prevention Research
suggested that SAMHSA convene a
panel to periodically review available
interventions that might not be
submitted to NREPP because they ‘‘lack
a champion.’’
10. Generalizability
Number of respondents: 48 (44%).
Many respondents discussed the issue
of generalizability of evidence,
especially the concern that
interventions proven to work in clinical
trials do not always work in real-world
settings. Several respondents pointed
out the potential conflict between
implementing an intervention with
fidelity and having to adapt it for the
setting.
The APA Evidence-Based Practice
Committee suggested that the proposed
NREPP approach does not adequately
distinguish between ‘‘efficacy’’ and
‘‘effectiveness,’’ and strongly
recommended that SAMHSA look for
ways to bridge the two.
The State Associations of Addiction
Services recommended paying more
attention to how and where treatments
are replicated: ‘‘The highest level of
evidence should be successful
replication of the approach in multiple
community treatment settings.
Experience with [the National Institute
on Drug Abuse] Clinical Trials Network
suggests that an approach that shows
meaningful outcome improvements in
the ‘noisy’ setting of a publicly funded
community treatment program is truly
an approach worth promoting.’’
A few respondents suggested that
NREPP score interventions according to
their readiness and amenability to
application in real-world settings.
11. Other Themes and Notable
Comments
Distinguishing Treatment and
Prevention
Number of respondents: 7 (6%).
A few respondents called for
evaluating treatment and prevention
approaches differently. One respondent
noted that some criteria appear to be
more appropriate for treatment
modalities than for preventive
interventions, and recommended that
SAMHSA ‘‘confer with research experts
in those respective fields and separate
out those criteria that are more relevant
to only treatment or prevention.’’
Another respondent suggested that
the criteria are more appropriate for
prevention than treatment:
The criteria and selection for the peer
review panels should be separate for
prevention and treatment programs. The
criteria and models are different and the
panels should not be an across the board
effort, but rather representative of prevention
and treatment experts specific to the program
being evaluated. The plan is based, as the
notice states, on 1,100 prevention programs
with little experience with treatment
programs/practices.
Synthesizing Evidence
Three respondents suggested using
meta-analysis to synthesize evidence for
outcomes. One recommended SAMHSA
consult with National Institutes of
Health experts in this area.
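The respondents did not specify a pooling method. As a minimal sketch of one standard technique, fixed-effect inverse-variance weighting, the following Python fragment pools hypothetical per-study effect sizes; the function name and all numbers are illustrative assumptions, not data from any NREPP submission.

import math

def pool_fixed_effect(effects, variances):
    # Weight each study by the inverse of its variance, so more precise
    # studies contribute more to the pooled estimate.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies of one outcome: effect sizes and variances.
pooled, se = pool_fixed_effect([0.30, 0.45, 0.20], [0.02, 0.05, 0.03])
print(f"pooled effect = {pooled:.2f}, "
      f"95% CI = ({pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f})")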
Replications
The Teaching-Family Association
recommended considering replications
when evaluating evidence. The Society
for Prevention Research wrote that it is
unclear how replications would be used
in the proposed NREPP, and suggested
averaging ratings across studies.
Add Criteria
The National Student Assistance
Association Scientific Advisory Board
and one other respondent suggested
adding a cultural competence criterion.
The Society for Prevention Research
recommended adding a criterion to
assess the clarity of causal inference.
Range of Reviewer Perspectives
The APA Practice Organization noted
the importance of having a ‘‘large and
broad’’ reviewer pool: ‘‘A small group of
reviewers representing a limited range
of perspectives and constituencies
would have an undue impact on the
entire system. We are pleased that a
nominations process is envisioned.’’
Cost Effectiveness
One respondent called for
incorporating program cost effectiveness
into NREPP. In choosing what program
to implement, end users often have to
decide between diverse possibilities,
such as attempting to pass a tax increase
on beer or implementing additional
classroom prevention curricula, each
with competing claims about
effectiveness. A cost-effectiveness
framework may be the only way to
compare these choices.
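The comparison the respondent describes reduces to cost per unit of outcome. A minimal sketch in Python, with entirely hypothetical costs, reach, and effects:

def cost_per_outcome(total_cost, people_reached, effect_per_person):
    # Dollars spent per unit of behavioral outcome achieved.
    return total_cost / (people_reached * effect_per_person)

# Hypothetical comparison of the two options mentioned above.
tax_campaign = cost_per_outcome(50_000, people_reached=100_000,
                                effect_per_person=0.01)
curriculum = cost_per_outcome(50_000, people_reached=2_000,
                              effect_per_person=0.20)
print(f"tax campaign: ${tax_campaign:.0f} per outcome; "
      f"curriculum: ${curriculum:.0f} per outcome")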
Comments Addressing Question 2
Question 2. ‘‘SAMHSA’s NREPP priorities
are reflected in the agency’s matrix of
program priority areas. How might SAMHSA
engage interested stakeholders on a periodic
basis in helping the agency determine
intervention priority areas for review by
NREPP?’’
Number of respondents: 16 (15%).
Respondents recommended a number
of approaches to engage stakeholders:
• Conduct meetings, conferences, and
seminars.
• Send and solicit information via e-mail or a Web site.
• Send informational notices via
newsletters.
• Survey stakeholders.
• Work with the Addiction
Technology Transfer Centers (ATTCs) to
administer surveys.
• Consult the National Prevention
Network and the Society for Prevention
Research, which ‘‘have forged a close
working relationship to foster the
integration of science and practice and
* * * would be very helpful in
answering this question.’’
Comments Addressing Question 3
Question 3. ‘‘There has been considerable
discussion in the scientific literature on how
to use statistical significance and various
measures of effect size in assessing the
effectiveness of interventions based upon
both single and multiple studies (Schmidt &
Hunter, 1995; Rosenthal, 1996; Mason,
Schott, Chapman, & Tu, 2000; Rutledge &
Loh, 2004). How should SAMHSA use
statistical significance and measures of effect
size in NREPP? Note that SAMHSA would
appreciate receiving citations for published
materials elaborating upon responders’
suggestions in this area.’’
Statistical Significance
Number of respondents: 13 (12%).
A group of university researchers
recommended that, for programs to be
included in NREPP, they be required to
provide statistically significant results
on drug use and/or mental health
outcomes using two-tailed tests of
significance at p<.05. The
APA Evidence-Based Practices
Committee recommended further
discussion and consideration by NREPP
of the conceptual distinction between
statistical and clinical significance.
The County of Los Angeles
Department of Health Services urged
SAMHSA ‘‘not to place undue
preference only on programs that offer
statistically significant results. Studies
of innovative approaches and of
emerging populations may not have
sample sizes large enough to support
sophisticated statistical analyses, yet
may offer valuable qualitative
information on effective approaches.’’
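The criterion the university researchers recommend is a conventional two-sided hypothesis test at the .05 level. A minimal sketch using hypothetical outcome scores and the widely used scipy library (the data and variable names are illustrative only):

from scipy import stats

# Hypothetical post-intervention symptom scores (lower is better).
treatment = [12, 9, 15, 11, 8, 14, 10, 13]
control = [15, 17, 12, 16, 18, 14, 19, 13]

# ttest_ind performs a two-tailed independent-samples t test by default.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"significant at .05: {p_value < 0.05}")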
Effect Size
Number of respondents: 24 (22%).
Most of the respondents discussing
effect size noted that interventions
aimed at achieving population change
were likely to have small effect sizes,
even if they are very successful. Several
respondents recommended combining
effect size with reach. A group of
researchers from a major university
noted:
Effect sizes should be reported, but they
should not be used as a criterion for
inclusion or exclusion from NREPP. From a
public health perspective, the impact of an
intervention is a function of both its efficacy
and its reach (Glasgow, Vogt, & Boles, 1999).
An intervention with even a very modest
effect size can have a substantial impact on
public health if it reaches many people.
Therefore, NREPP should report effect sizes
for each statistically significant outcome
reported and NREPP should also include and
provide an assessment of the ‘‘reach’’ of that
intervention. Specifically, the inclusion
criteria for participation and the proportion
of the recruited population that participated
in the intervention study should be included
in describing the likely ‘‘reach’’ of the
program.
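The quoted argument turns on a simple product: an intervention's population impact is roughly its per-person effect multiplied by the proportion of the population it reaches (the efficacy-times-reach logic of Glasgow, Vogt, & Boles, 1999). A minimal sketch with hypothetical values:

def population_impact(effect_size, reach):
    # Crude impact index: per-person effect x proportion of population reached.
    return effect_size * reach

# A modest effect delivered broadly can outweigh a strong effect
# delivered narrowly.
broad = population_impact(effect_size=0.10, reach=0.80)   # 0.080
narrow = population_impact(effect_size=0.60, reach=0.05)  # 0.030
print(f"broad program: {broad:.3f}, narrow program: {narrow:.3f}")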
Three respondents noted that
professionals in the field have not
reached consensus on how to use effect
size. One noted, ‘‘Effect sizes may vary
with the difficulty of the prevention
goal and the methodological rigor of the
analysis. Applying standards for ‘weak,’
‘moderate,’ ‘strong’ or other labels fails
to take into account differences in
results that may be attributable to
differences in goals or methods.’’
One respondent suggested
considering other indicators of clinical
effectiveness, such as use of the RCI
(reliable change index; Jacobson &
Truax, 1991).
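The RCI is a published formula: the change in an individual's score divided by the standard error of the difference between two scores. The sketch below applies it to hypothetical numbers and is not part of any NREPP procedure.

import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    # Standard error of measurement, then of the difference between scores.
    se_measurement = sd_baseline * math.sqrt(1.0 - reliability)
    se_difference = math.sqrt(2.0) * se_measurement
    return (post - pre) / se_difference

# Hypothetical client: score drops from 30 to 18 on a scale with baseline
# SD = 7 and test-retest reliability = .85; |RCI| > 1.96 is the usual
# threshold for reliable change.
rci = reliable_change_index(pre=30, post=18, sd_baseline=7, reliability=0.85)
print(f"RCI = {rci:.2f}; reliable change: {abs(rci) > 1.96}")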
Other points made regarding effect
size included the following:
• Between-group effect sizes assume a
standard comparison condition, which
is rare in nonmedical interventions.
Meta-analyses with baseline-follow-up
effect sizes or a ‘‘network approach’’ to
effect sizes are ways to overcome this
problem.
• Effect size is not the equivalent of
client improvement and does not assess
the significance of interventions for
their clients.
• Effect size alone is not sufficient to
evaluate and rate programs; cost-benefit
or other practical information is also
needed.
Comments Addressing Question 4
Question 4. ‘‘SAMHSA’s proposal for
NREPP would recognize as effective several
categories of interventions, ranging from
those with high-quality evidence and more
replication to those with lower quality
evidence and fewer replications. This would
allow for the recognition of emerging as well
as fully evidence-based interventions. Some
view this as a desirable feature that reflects
the continuous nature of evidence; provides
important options for intervention
recipients, providers, and funders when no
or few fully evidence-based interventions are
available; and helps promote continued
innovation in the development of evidence-based interventions. Others have argued that
several distinct categories will confuse
NREPP users. Please comment on SAMHSA’s
proposal in this area.’’
Number of respondents: 35 (32%).
Thirty-three respondents supported
the use of multiple categories as
outlined in Question 4; two respondents
were opposed. Of those in favor of
multiple categories, nine respondents
wrote that this approach would reflect
the process of emerging evidence and
encourage knowledge sharing early in
the process. The APA Evidence-Based
Practice Committee argued that
‘‘Including all of these NREPP products
is seen as a desirable feature that reflects
the continuous nature of evidence. This
may also be critical information for
providing reasonable options for
stakeholders when there are no or few
evidence-based practices available.’’
The State Associations of Addiction
Services pointed out that multiple
categories would lessen the likelihood
of misinterpreting information in
NREPP, and the California Department
of Alcohol and Drug Programs added
that including multiple categories of
intervention would give greater
flexibility to programs using the list.
Of the two respondents against
multiple categories, one suggested that a
clear designation of effectiveness is
needed if NREPP is to be useful to the
field.
Additional Comments
One respondent argued that only two
categories should be used, effective and
emergent: ‘‘While distinctions such as
whether a program has had independent
replications as opposed to developer
replications may be of interest to
researchers, the majority of those
responsible for choosing and
implementing programs may find this
level of detail to be confusing rather
than particularly helpful or relevant.’’
A group of university researchers
recommended assigning scores to
several categories of evidence quality:
theoretical foundation, design adequacy,
measure adequacy, fidelity, and analysis
adequacy.
Several other organizations suggested
adding a category for programs not yet
shown to be evidence-based, but
recommended for further study. One
noted that categories of effectiveness
should be the same for individual-level
and population-, policy-, or system-level
outcomes.
One respondent proposed an
approach in which SAMHSA would
document the strength of evidence for
each approach, and allow consumers to
decide what is effective:
Various authorities have established
different and sometimes conflicting
standards for when there is enough evidence
to constitute an EBT. Part of the problem here
is drawing a discrete line (EBT or not) on
what is actually a continuous dimension.
* * * To inform and demystify the
dichotomous and somewhat arbitrary
decision as to which treatments are evidence-based and which are not, it is useful to have
a compilation of the strength of evidence for
(or against) different approaches. * * * Why
not just stick to your main emphasis on
documenting the strength of evidence for
each approach, and let others decide where
they want to draw the line for what they
regard to be ‘‘effective.’’
Another respondent argued that
providing information on replications
and having six potential categorizations
for evidence-based practices could be
too technical and confusing for some.
Most consumers will be most interested
in whether there is some body of
evidence that the program they are
considering works.
One respondent, a private citizen,
recommended that SAMHSA ask
stakeholders what categories would be
useful to them.
Comments Addressing Question 5
Question 5. ‘‘SAMHSA recognizes the
importance of considering the extent to
which interventions have been tested with
diverse populations and in diverse settings.
Therefore, the agency anticipates
incorporating this information into the Web
site descriptions of interventions listed on
NREPP. This may allow NREPP users to learn
if interventions are applicable to their
specific needs and situations, and may also
help to identify areas where additional
studies are needed to address the
effectiveness of interventions with diverse
populations and in diverse locations.
SAMHSA is aware that more evidence is
needed on these topics. Please comment on
SAMHSA’s approach in this area.’’
Number of respondents: 27 (25%).
Most respondents affirmed the
importance of the issues raised in
Question 5. Two respondents suggested
that SAMHSA should facilitate research
aimed at developing services for
minority populations. Comments
regarding what and how to report are
noted below.
What To Report
Regarding what to report, respondents
suggested tracking and reporting
demographic changes; reporting the
impact of interventions on different
populations; and requiring programs
that use NREPP interventions to report
to SAMHSA on the impact on their
client populations, as well as providers’
thoughts about the intervention’s
applicability to various client
populations.
The Oregon Office of Mental Health
and Addiction Services suggested that
SAMHSA ‘‘focus considerable effort on
identifying and listing practices useful
and applicable for diverse populations
and rural areas. Providers and
stakeholders from these groups have
repeatedly expressed the concern they
will be left behind if no practices have
been identified which fit the need of
their area. We need to take particular
care to ensure that their fear is not
realized.’’
The Committee for Children suggested
reporting data for two separate
dimensions: setting and population.
Setting dimensions would include
community data—size of community,
community context (e.g., suburb, town),
geographic location, community
socioeconomic status—and agency data,
which includes the type of agency (e.g.,
hospital, child care, school),
characteristics (e.g., outpatient vs.
inpatient, middle school vs. elementary
school), size, and resources required for
implementation. Population dimensions
would include age, socioeconomic
status, ethnicity, cultural identification,
immigrant/acculturation status, race,
and gender.
How To Report
Three respondents submitted
suggestions for how to report on
intervention effectiveness with diverse
populations. The APA Evidence-Based
Practices Committee suggested that
SAMHSA develop ‘‘a comprehensive
glossary that addresses definitions of
different constituencies, populations,
and settings.’’ The Family and Child
Guidance Clinic and the Native
American Health Center of Oakland
both suggested that a panel of Native
Americans be convened to decide which
evidence-based programs and practices
are effective for Native Americans, then
submit a monograph describing these
programs and practices.
Comments Addressing Question 6
Question 6. ‘‘To promote consistent,
reliable, and transparent standards to the
public, SAMHSA proposes that all existing
programs on NREPP meet the prevailing
scientific criteria described in this proposal,
and that this be accomplished through
required rereviews of all programs currently
on NREPP. SAMHSA has considered an
alternative approach that would
‘‘grandfather’’ all existing NREPP programs
under the new system, but would provide
clear communication that these existing
programs have not been assessed against the
new NREPP scientific standards. Please
comment on which approach you believe to
be in the best interests of SAMHSA
stakeholders.’’
Number of respondents: 32 (29%).
Twenty-seven respondents proposed
rereviewing existing programs under the
revised NREPP criteria. Five
respondents advocated grandfathering
the programs into NREPP without
review. Highlights of these viewpoints
are provided below.
Arguments for Rereview
The Committee for Children wrote that a
grandfathering system ‘‘may give the
impression to NREPP users, right or
wrong, that ‘grandfathered’
interventions aren’t as good as those
that have undergone the new review
process.’’
Another respondent supported a
single review process to assure
programs that ‘‘all programs and
practices are being rated according to a
consistent set of criteria, and therefore
that the adoption of an intervention by
a provider can be made with
confidence.’’
Two researchers (both SAMHSA
Model Program affiliates) noted that
grandfathering will ‘‘water down’’ the
NREPP criteria, and recommended
establishing a mechanism to remove
programs from NREPP when the
evidence warrants.
A program developer called for a
gradual transition from Model Program
to rereview:
I suggest that SAMHSA maintain the
current Model Program designation and grant
these programs status within the new NREPP
for up to 3 years. During that time period the
existing programs would be screened against
the new review criteria and provided an
opportunity to obtain additional research
findings, if needed, in order to help achieve
evidence-based status within the new
NREPP. * * * Many current model programs
have invested extensive time and financial
resources to reference SAMHSA Model
Program status in their informational,
training, and curricula materials, under the
auspices of their partnership agreements with
the SAMHSA Model Program Dissemination
Project. They did this in good faith. While
the SAMHSA Model Program Project has
been disbanded, it is reasonable to expect
SAMHSA to honor their agreements with the
model programs for a period of time during
the transitional phase. During this
transitional phase I recommend that the
model program not be earmarked as not
having been assessed against the new NREPP
scientific standards, but rather that they have
been found to be effective under the former
NREP and are awaiting review under the new
criteria.
Arguments for Grandfathering
Those who argued for grandfathering
previous Model Programs discussed the
possible detrimental effects that not
grandfathering would have. One
respondent described taking away the
Model Program designation as ‘‘a
breaking of faith that is just not
acceptable. A subjective change in
criteria does not justify harming
programs that previously met the grade
in all good faith * * * It also makes it
hard for the end user to take the list
seriously, especially if they have already
expended considerable resources to
replace a non-evidence-based program
with one currently designated evidence-based.’’
Another respondent described the
destabilizing effects and potential
impact on credibility of programs:
Imagine if the ‘‘model’’ you just selected
this year at the cost of thousands of dollars
(and redesigned your prevention delivery
system upon) is somehow diminished or
lessened in ‘‘scientific’’ credibility. Would
you not begin to wonder if you could trust
the next ‘‘model’’ to hold credibility? * * *
There is a very real need to be careful about
the criteria, and planning for a smooth and
gentle segue for change * * * at the
grassroots level if programs are rotating on
and off of the registry system. One might well
ask, how could a ‘‘model’’ program of today
not be worthy of some level of inclusion
tomorrow?
Yet another respondent pointed out
that not grandfathering programs could
pose financial problems for
organizations offering model programs.
Since some organizations may only
receive funding for programs designated
as ‘‘model programs,’’ they may not be
able to offer the programs while
awaiting rereview.
Comments Addressing Question 7
Question 7. ‘‘What types of guidance,
resources, and/or specific technical
assistance activities are needed to promote
greater adoption of NREPP interventions, and
what direct and indirect methods should
SAMHSA consider in advancing this goal?’’
Venue, Channel, and Format for
Promoting Adoption of NREPP
Interventions
Number of respondents: 7.
Proposed strategies for promotion
(venue, channel, and format) include
the following:
• Identify stakeholders and take the
information to them (e.g., through
conferences, journals, professional
magazines, professional newsletters,
physicians, churches, and PTAs).
• Convene program developers and
state administrators for regular meetings
about programs and implementation.
• Showcase NREPP programs at
national, regional, and state
conferences.
• Develop fact sheets about NREPP
programs (in collaboration with the
program developers).
• Conduct training on NREPP
programs through the Addiction
Technology Transfer Centers (ATTCs).
• Work with the Office of National
Drug Control Policy’s National Media
campaign.
• On the NREPP Web site, offer
downloadable information on programs
as well as a way for consumers to
contact the program developers for more
information.
(Note: SAMHSA’s Model Program Web site
currently does provide program summaries
and contact information for program
developers).
Technical Assistance for Promoting
Adoption of NREPP Interventions
Number of respondents: 30 (28%).
Many respondents noted the
importance of providing technical
assistance to those looking to adopt
NREPP-listed interventions. The Oregon
Office of Mental Health and Addiction
Services wrote, ‘‘The adoption of new
practices by any entity is necessarily a
complex and long-term process. Many
providers will need technical support if
adoption and implementation is to be
accomplished effectively. Current
resources are not adequate to meet this
challenge.’’
Another respondent suggested that
SAMHSA identify point people, either
at the Federal level or through the
CAPTs, who can ‘‘partner with
developers to gain a clear understanding
of their evidence-based interventions
and become knowledgeable enough to
accurately discuss them with
community-based preventionists.’’
A group of university researchers
agreed that substantial training and
technical assistance are required for the
effective implementation of preventive
interventions. They recommended using
SAMHSA’s Communities That Care,
which has been shown to increase the
adoption of tested and effective
preventive interventions in
communities, to increase adoption of
NREPP interventions.
The National Student Assistance
Association Scientific Advisory Board
recommended that SAMHSA use
existing effective program and practice
structures, such as Student Assistance
Programs, for technical assistance,
resources, and guidance.
Guidance on Adopting NREPP
Interventions
Number of respondents: 10 (9%).
Several respondents recommended
that SAMHSA provide guidance to
individuals and organizations looking to
adopt NREPP interventions. The Center
for Evidence-Based Interventions for
Crime and Addiction wrote, ‘‘We do not
believe that just providing information
about model programs on the Web will
result in much diffusion of the
innovation. NREPP must pay attention
to training, dissemination, fidelity, and
sustainability.’’
The Society for Prevention Research
suggested that SAMHSA survey
decisionmakers and practitioners to
determine their perceptions of NREPP,
as well as other factors influencing their
decisions, in order to determine how to
encourage adoption of
NREPP interventions.
The APA Evidence-Based Practice
Committee recommended that SAMHSA
‘‘anticipate misuses of NREPP so as to
insure that funding bodies do not
mistakenly assume that improving
treatment comes from confining
treatment to a list of recommended
techniques.’’
Resources for Promoting NREPP
Interventions
Number of respondents: 27 (25%).
Many respondents articulated ways
that SAMHSA could support and
promote NREPP interventions. One
common suggestion was that SAMHSA
should provide the funding for and/or
help create the infrastructure that is
required for program implementation.
For example, the California-based
Coalition of Alcohol and Drug
Associations wrote:
The existing treatment infrastructure
cannot handle the expectation for data
collection. It is currently unlikely that most
community-based treatment programs could
meet the standard to be listed on the registry.
How can the infrastructure be strengthened?
What funding streams is SAMHSA promoting
to accomplish this? * * * The initiative
promises technical assistance, but this is not
a substitute for missing infrastructure. The
financial resources to support such efforts
[have] always been absent, yet the
expectations and demands continue to be
placed upon underfunded community-based
providers, driving some out of business and
requiring others to reduce services.
The Coalition of Alcohol and Drug
Associations also asked how SAMHSA
plans to protect providers from
exploitation: ‘‘Already there are
examples of large sums of money being
asked for training materials on
interventions developed with tax
dollars. Consultants representing
particular practices (especially those
listed on RFAs or on SAMHSA lists) are
charging fees of $3,000 per day. This is
not something most nonprofits can
afford.’’
Another respondent, a private citizen,
suggested that SAMHSA fund Services
to Science grants, ‘‘a category of funding
which was originally designed by
SAMHSA but [is] rarely utilized.’’
The State Associations of Addiction
Services suggested that SAMHSA
‘‘consider new mechanisms for funding
the development of the organizational
capacity needed by providers to
implement and sustain evidence-based
practices. Such mechanisms might
require new legislative authority and/or
new funding.’’
Comments Addressing Question 8
Question 8. ‘‘SAMHSA is committed to
consumer, family, and other nonscientist
involvement in the NREPP process. The
panels convened by SAMHSA and described
earlier in this notice suggested that these
stakeholders be included specifically to
address issues of intervention utility and
practicality. Please comment on how
consumer, family, and other nonscientist
stakeholders could be involved in NREPP.’’
Development of NREPP Process
Number of respondents: 22 (20%).
A number of respondents discussed
the need to involve nonscientist
stakeholders (primarily providers) in
developing the NREPP process. Seven
respondents said consumers should be
involved in NREPP development. The
Pennsylvania Department of Health
pointed out that ‘‘the use of such
approaches depends heavily on local,
state, and national networks of
community-based providers who need
to be in a position to be an active
participant in discussions related to the
evaluation of interventions, practices,
and programs.’’
The Oregon Office of Mental Health
and Addiction Services argued that
‘‘Practices that are not readily
acceptable by consumers and families
may have limited usefulness, regardless
of the evidence of technical adequacy.
Consumers and families should be
involved in advising SAMHSA at every
level of design, development and
implementation of NREPP. SAMHSA
may wish to establish a specific
consumer and family advisory group to
provide advice on NREPP issues.’’
Community Anti-Drug Coalitions of
America suggested that nonscientists
should review publications and
recommendations to ensure they are
clear to nonresearchers.
Role in NREPP Reviews
Number of respondents: 21 (19%).
Suggestions for NREPP reviews
included the following:
• Involve consumers and
practitioners in reviewing programs.
• Have practitioners assess the degree
to which a program is implementable.
• Have consumer groups rate
programs’ utility.
• Have clinicians review materials for
clarity.
Comments Addressing Question 9
Question 9. ‘‘SAMHSA has identified
NREPP as one source of evidence-based
interventions for selection by potential
agency grantees in meeting the requirements
related to some of SAMHSA’s discretionary
grants. What guidance, if any, should
SAMHSA provide related to NREPP as a
source of evidence-based interventions for
use under the agency’s substance abuse and
mental health block grants?’’
Technical Assistance
Number of respondents: 11 (10%).
A number of respondents suggested
that SAMHSA provide training to users
on the NREPP review process, as well as
guidance on the appropriate use of
NREPP and how to avoid misuse. For
example, Student Assistance Programs
(SAPs) and CAPTs could be used as
technical assistance resources. One
respondent wrote, ‘‘SAMHSA needs to
make it clear that the NREPP ratings are
established as recommendations for the
field, rather than as demands upon
agencies and programs—that it
discourages thinking of NREPP-approved programs or practices as a
finite list and encourages efforts that
further refine and extend these
programs and practices to new
populations and settings.’’
Another respondent noted that
government agencies responsible for
block grant allocation may need
protection from mandates about using
NREPP interventions that may not be
affordable or appropriate for their client
populations.
Regulation
A number of respondents provided
recommendations related to regulation
and funding priority tied to NREPP.
Twelve respondents said block grant
funds should not be restricted based on
NREPP status. The Society for
Prevention Research and several other
organizations recommended giving
priority to NREPP programs, while
reserving some funds specifically for
innovation. One respondent suggested
that block grant funding should give
priority to NREPP interventions. The
Maryland Alcohol and Drug Abuse
Administration argued that state
authority should supersede Federal
authority in block grant allocation.
Another respondent recommended
giving funding priority to systems that
implement practices known to be
effective, except where evidence-based
practices have not yet been identified:
‘‘Although it is clear that funding
cannot entirely be limited to existing
evidence-based programs because of the
chilling effect on innovation that such a
stance would have, nevertheless, it
might be appropriate to require that a
certain percentage of block grant dollars
be committed to the dissemination and
use of block grant monies, or to
establish additional incentives for the
adoption of such programs.’’
One respondent warned of the
potential danger of unfunded mandates:
‘‘The worst case scenario is that best of
practices could cost the most money but
by law or regulation become an
unfunded mandate for a government-funded or not-for-profit program.’’
The APA Practice Organization noted
that as NREPP is voluntary, ‘‘applicants
should not be penalized for studying
programs or interventions that are not
on the NREPP.’’
Two organizations, the State
Associations of Addiction Services and
the California Department of Alcohol and Drug Programs,
considered the revised NREPP approach
to be too new to use as a block grant
requirement.
Comments Addressing Question 10
Question 10. ‘‘SAMHSA believes that
NREPP should serve as an important, but not
exclusive source, of evidence-based
interventions to prevent and/or treat mental
and substance use disorders. What steps
should SAMHSA take to promote
consideration of other sources (e.g., clinical
expertise, consumer or recipient values) in
stakeholders’ decisions regarding the
selection, delivery and financing of mental
health and substance abuse prevention and
treatment services?’’
Number of respondents: 25 (23%).
The following suggestions were noted:
• Develop a directory of other sources
of evidence-based practices. Some
suggested providing links to these
sources on the NREPP Web site.
• Use an external advisory committee
to identify other sources of evidence-based practices.
• Include a disclaimer page that
includes an introduction consistent
with the issues raised in Question 10.
Advertising or other promotional
material created around NREPP could
also include this information.
• List other sources of evaluation
research such as the Collaborative for
Academic, Social, and Emotional
Learning, the U.S. Department of
Education, the Office of Juvenile Justice
and Delinquency Prevention, and the
National Institute of Mental Health.
The National Association of State
Alcohol and Drug Abuse Directors wrote
that its Exemplary Awards Program
should ‘‘serve as an ‘incubator’ for
programs that may wish to consider
submitting into the NREPP process.’’
Comments Addressing Question 11
Question 11. ‘‘SAMHSA anticipates that
once NREPP is in operation, various
stakeholders will make suggestions for
improving the system. To consider this input
in a respectful, deliberate, and orderly
manner, SAMHSA anticipates annually
reviewing these suggestions. These reviews
would be conducted by a group of scientist
and nonscientist stakeholders knowledgeable
about evidence in behavioral health and the
social sciences. Please comment on
SAMHSA’s proposal in this area.’’
Number of respondents: 35 (32%).
Many of the 35 responses stated that
annual review of suggestions from
stakeholders is important. Four
respondents noted that feedback should
be reviewed more frequently than once
per year. Other themes included the
following:
• Use the annual review process as a
mechanism for fostering innovation.
• Use marketing strategies to
encourage participation in the annual
review process.
• Solicit annual feedback from
NREPP applicants whose programs have
been labeled effective, as well as those
whose programs have not been labeled
effective.
• Compare NREPP results to those in
other similar systems.
• Include a mechanism in NREPP for
programs to be dropped from, or
improve their status on, the registry
(possibly through the annual review).
• Periodically conduct a meta-analysis of evaluation results (possibly
through the annual review).
• To ensure the stability of NREPP,
the criteria should be maintained
without changes for a set period of time
(e.g., 5 years).
Comments Beyond the 11 Posted
Questions
Twenty-two respondents (20%)
submitted comments on issues that were
relevant but not specifically within the
parameters of the 11 posted questions.
These are summarized below.
Programs Versus Practices
Fourteen respondents (13%) objected
to using the terms ‘‘programs’’ and
‘‘practices’’ as if they were
interchangeable. One private citizen
who submitted comments wrote:
It is important to distinguish between the
value of rating practices and the value of
rating programs. Although it makes sense for
reviewers to rate the quality/strength of
evidence regarding a treatment practice, it is
a much different proposition to rate the
effectiveness of a program. The effectiveness
of a treatment program is a function, among
other things, of the treatment practices it
employs, the ancillary services (e.g.,
employment counseling) it provides, the
qualities and behaviors of its treatment
providers * * * One could imagine a very
ineffective program using evidence-based
practices (e.g., one having disengaged or
poorly trained counselors), and a very
effective program that used other than
evidence-based practices (e.g., one with
committed, empathic counselors using
practices that had not yet been subjected to
research). Furthermore, given the multiple
elements that contribute to a program’s
overall effectiveness, its effectiveness could
change rapidly (e.g., when a charismatic
program leader leaves, when there is
significant counselor turnover, when funding
source/amount changes, etc.). Thus, it makes
much less sense to rate the effectiveness of
individual programs than it does to rate the
strength of evidence supporting specific
treatment practices.
Terminology
The APA Evidence-Based Practices
Committee suggested using a site
glossary to define diagnostic
terminology and client populations and
communities.
Standard Outcomes
One respondent recommended
including a standard set of outcomes to
be evaluated.
Effect of Including Mental Health
Interventions
One national organization expressed a
concern that including mental health
interventions will detract from the focus
on substance abuse:
The proposed expansion of NREPP to
include substance abuse treatment and
mental health will dramatically dilute the
focus of substance abuse prevention. The
resources NREPP requires will necessarily be
diluted across a broader range of issues and
inevitably detract from a focused mission of
supporting efforts to prevent substance
abuse.
Reporting the Date of Reviews
One respondent recommended that
SAMHSA document and report the date
on which a review was conducted. This
will allow users to know how much
time has passed since the review and
prompt them to search for more recent
evidence if needed.
Rationale for Revising NREPP
One respondent questioned whether
SAMHSA had sufficiently evaluated the
existing system before deciding to revise
it.
Subpart A.—Federal Register Notice
Comment Codebook
Comment ID Number:
Coded by:
Date coded:
Coded by: (each item is coded by two
individual coders)
Date coded:
Entered by:
Date entered:
1. Respondent Category
1.1 Commenter Name
1.1.1 First
1.1.2 MI
1.1.3 Last
1.2 Location
1.2.1 City
1.2.2 State
1.2.3 ZIP code
1.2.4 Unknown
1.3 Domain Interest
1.3.1 SAP
1.3.2 SAT
1.3.3 MHP
1.3.4 MHT
1.3.5 Unknown
1.4 Affiliation
1.4.1 Private
1.4.2 Organization
1.4.2.1 National
1.4.2.2 State
1.4.2.3 Local
1.4.2.4 Unknown
1.5 Functional Role
1.5.1 Provider
1.5.2 Researcher
1.5.3 Consumer
1.5.4 Multiple
1.5.5 Unknown
1.6 Response Level
1.6.1 Nonresponsive
1.6.2 Routine
1.6.3 Noteworthy (responder or comment
content)
2. Topical Themes
2.1 Will the proposed NREPP system
identify effective interventions
2.1.1 General, not criteria specific
2.1.2 Individual-level outcome criteria
2.1.3 Population/policy/system-level
outcome criteria
2.1.4 Utility descriptors
2.1.5 Exclusion due to lack of funding
2.1.6 Negative impact on minority
populations
2.1.7 Negative impact on program
innovation
2.1.8 Lack of acknowledgment of
provider factors
2.1.9 Use of other agencies’ standards and
resources
2.1.10 Reliance on developers for
submitting applications
2.1.11 Generalizability issues
2.2 How can stakeholders be engaged to
identify priority review areas
2.2.1 Identification (of priority areas)
2.2.2 Engagement (of stakeholders)
2.3 How should statistical significance
and effect size be used to judge
effectiveness
2.3.1 Statistical significance
2.3.2 Effect size
2.3.3 General, NEC
2.4 Should NREPP use multiple
categories of effectiveness
2.4.1 General, not outcome specific
2.4.1.1 Pro
2.4.1.2 Con
2.4.2 Individual-level outcome rating
categories
2.4.2.1 Pro
2.4.2.2 Con
2.4.3 Population/policy/system-level
outcome rating categories
2.4.3.1 Pro
2.4.3.2 Con
2.5 How can NREPP best provide
information on population-specific
needs and situations
2.5.1 General comment
2.5.2 Venue (e.g., organized events/
meetings, national or regional
organizations)
2.5.3 Channel (distribution mechanisms,
e.g., listservs, clearinghouses, etc.)
2.5.4 Format (media type, document type,
e.g., fact sheets, white papers, policy
publications, etc.)
2.6 Should current NREPP programs be
‘‘grandfathered’’ or rereviewed
2.6.1 Grandfathered
2.6.2 Rereviewed
2.6.3 General, NEC
2.7 How should SAMHSA promote
greater adoption of NREPP interventions
2.7.1 General comment
2.7.2 Venue
2.7.3 Channel
2.7.4 Format
2.7.5 Technical assistance
2.7.6 Guidance
2.7.7 Resources
2.8 How should nonscientist stakeholders
be involved in the NREPP process
2.8.1 General comment
2.8.2 Venue, channel, format
2.8.3 Potential stakeholders
2.8.4 Involvement in the development of
the NREPP process
2.8.5 Involvement in program reviews
2.9 What relationship should exist
between NREPP and SAMHSA block
grants
2.9.1 Technical assistance provision
2.9.2 Funding support
2.9.3 Regulatory (required to use)
2.10 What additional sources of
information should be considered
regarding SAMHSA services
2.10.1 Steps SAMHSA should take
2.10.2 Source
2.11 How should an annual review of
NREPP procedures and practices be
conducted
2.12 Other issues
2.12.1 Program vs. practice
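To illustrate how a single comment might be captured under the codebook above, the following Python sketch represents one coded record; the field names and example values are hypothetical illustrations, not part of the published coding scheme.

comment_record = {
    "comment_id": "001",
    # Per the form above, each item is coded by two individual coders.
    "coders": [
        {"coded_by": "Coder A", "date_coded": "2005-11-01"},
        {"coded_by": "Coder B", "date_coded": "2005-11-02"},
    ],
    "entered_by": "Entry staff",
    "date_entered": "2005-11-03",
    "respondent": {
        "domain_interest": ["1.3.2"],   # SAT (substance abuse treatment)
        "affiliation": "1.4.2.1",       # national organization
        "functional_role": "1.5.2",     # researcher
        "response_level": "1.6.3",      # noteworthy
    },
    "topical_themes": ["2.1.5", "2.3.2"],  # funding exclusion; effect size
}
print(len(comment_record["topical_themes"]), "themes coded")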
Subpart B.—Comments on SAMHSA’s
Federal Register Notice: Frequencies
and Percentages
TABLE 1.—CHARACTERISTICS OF RESPONDENTS
[N=135]

                                                            n     Percent

Domain interest (not mutually exclusive)
  Substance abuse prevention ..........................    68      50.4
  Substance abuse treatment ...........................    48      35.6
  Mental health promotion .............................    22      16.3
  Mental health treatment .............................    20      14.8
  Unknown .............................................    33      24.4

Affiliation
  Private .............................................    90      66.7
  National organization ...............................    16      11.9
  State organization ..................................    10       7.4
  Local organization ..................................    14      10.4
  Unknown organization ................................     5       3.7

Functional role
  Provider ............................................    53      39.3
  Researcher ..........................................    36      26.7
  Consumer ............................................     4       3.0
  Multiple roles ......................................    21      15.6
  Unknown .............................................    21      15.6

Respondent clout
  Noteworthy ..........................................    51      37.8
  Responsive ..........................................    58      43.0
  Unanalyzable ........................................    26      19.3

Current program status
  Affiliated with a current program ...................    10       7.4
  No known affiliation with a current program .........   125      92.6
TABLE 2.—COMMENTS REGARDING THE PROPOSED NREPP SYSTEM ACCOMPLISHING ITS GOALS
[Question 1]

                                             National org.   State org.   Local org.   Unknown org.    Private
                                              n      % 1      n      %     n      %      n      %      n      %

‘‘Noteworthy’’ respondents

General, not criteria specific 2 ..........  11    78.6       4    50.0    2     100     2    66.7    16    84.2
Individual-level outcome criteria .........   1     7.1       1    12.5    0     0.0     1    33.3    14    73.7
Population-, policy-, or system-level
  outcome criteria ........................   2    14.3       4    50.0    1    50.0     1    33.3    14    73.7
Utility descriptors .......................   4    28.6       1    12.5    0     0.0     0     0.0     3    15.8
Funding ...................................   7    50.0       3    37.5    1    50.0     0     0.0     3    15.8
Minority populations ......................   1     7.1       0     0.0    1    50.0     0     0.0     2    10.5
Program innovation ........................   4    28.6       4    50.0    2     100     0     0.0     2    10.5
Provider factors ..........................   4    28.6       4    50.0    1    50.0     1    33.3     4    21.1
Use of other agencies’ standards and
  resources ...............................   4    28.6       2    25.0    0     0.0     0     0.0    12    63.2
Developers submitting applications ........   1     7.1       0     0.0    0     0.0     0     0.0     2    10.5
Generalizability ..........................   7    50.0       5    62.5    2     100     0     0.0     5    26.3

‘‘Responsive’’ respondents

General, not criteria specific ............   0     0.0       0     0.0    4    40.0     2     100    18    43.9
Individual-level outcome criteria .........   0     0.0       0     0.0    1    10.0     0     0.0     6    14.6
Population-, policy-, or system-level
  outcome criteria ........................   0     0.0       0     0.0    2    20.0     0     0.0     5    12.2
Utility descriptors .......................   0     0.0       0     0.0    0     0.0     0     0.0     7    17.1
Funding ...................................   0     0.0       0     0.0    5    50.0     0     0.0     9    22.0
Minority populations ......................   0     0.0       0     0.0    2    20.0     0     0.0     7    17.1
Program innovation ........................   0     0.0       0     0.0    3    30.0     0     0.0     6    14.6
Provider factors ..........................   0     0.0       0     0.0    4    40.0     0     0.0     4     9.8
Use of other agencies’ standards and
  resources ...............................   0     0.0       0     0.0    3    30.0     0     0.0     6    14.6
Developers submitting applications ........   0     0.0       0     0.0    0     0.0     0     0.0     1     2.4
Generalizability ..........................   0     0.0       0     0.0    7    70.0     1    50.0    21    51.2

1 All percentages are calculated based on those providing comments.
2 These categories are not mutually exclusive.
TABLE 3.—COMMENTS REGARDING HOW SAMHSA MIGHT ENGAGE INTERESTED STAKEHOLDERS TO DETERMINE INTERVENTION PRIORITY AREAS FOR REVIEW
[Question 2]

                                             National org.   State org.   Local org.   Unknown org.    Private
                                              n      % 1      n      %     n      %      n      %      n      %

‘‘Noteworthy’’ respondents

Identification of priority areas 2 ........   3    42.9       0     0.0    0     0.0     0     0.0     2     100
Engagement of stakeholders ................   5    71.4       1     100    1     100     0     0.0     1    50.0

‘‘Responsive’’ respondents

Identification of priority areas ..........   0     0.0       0     0.0    1    50.0     0     0.0     1    33.3
Engagement of stakeholders ................   0     0.0       0     0.0    2     100     0     0.0     3     100

1 All percentages are calculated based on those providing comments.
2 These categories are not mutually exclusive.
TABLE 4.—COMMENTS REGARDING STATISTICAL SIGNIFICANCE AND EFFECT SIZE
[Question 3]

                                             National org.   State org.   Local org.   Unknown org.    Private
                                              n      % 1      n      %     n      %      n      %      n      %

‘‘Noteworthy’’ respondents

Statistical significance 2 ................   1    25.0       0     0.0    1    50.0     0     0.0    11    84.6
Effect size ...............................   2    50.0       3     100    1    50.0     1     100    13     100
General ...................................   2    50.0       0     0.0    0     0.0     0     0.0     2    15.4

‘‘Responsive’’ respondents

Statistical significance ..................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
Effect size ...............................   0     0.0       0     0.0    3     100     0     0.0     6    85.7
General ...................................   0     0.0       0     0.0    0     0.0     0     0.0     1    14.3

1 All percentages are calculated based on those providing comments.
2 These categories are not mutually exclusive.
TABLE 5.—COMMENTS REGARDING THE USE OF MULTIPLE CATEGORIES OF EFFECTIVENESS
[Question 4]

                                             National org.   State org.   Local org.   Unknown org.    Private
                                              n      % 1      n      %     n      %      n      %      n      %

‘‘Noteworthy’’ respondents

General, not outcome specific:
  General comment 2 .......................   2    20.0       0     0.0    0     0.0     0     0.0     3    20.0
  Pro .....................................  10     100       3     100    1     100     0     0.0    12    80.0
  Con .....................................   0     0.0       0     0.0    0     0.0     0     0.0     1     6.7
Individual-level outcome rating categories:
  General comment .........................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Pro .....................................   1    10.0       0     0.0    0     0.0     0     0.0     0     0.0
  Con .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
Population-, policy-, or system-level outcome rating categories:
  General comment .........................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Pro .....................................   0     0.0       1    33.3    0     0.0     0     0.0     0     0.0
  Con .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0

‘‘Responsive’’ respondents

General, not outcome specific:
  General comment .........................   0     0.0       0     0.0    1    50.0     0     0.0     3    37.5
  Pro .....................................   0     0.0       0     0.0    1    50.0     1     100     6    75.0
  Con .....................................   0     0.0       0     0.0    1    50.0     0     0.0     0     0.0
Individual-level outcome rating categories:
  General comment .........................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Pro .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Con .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
Population-, policy-, or system-level outcome rating categories:
  General comment .........................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Pro .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0
  Con .....................................   0     0.0       0     0.0    0     0.0     0     0.0     0     0.0

1 All percentages are calculated based on those providing comments.
2 These categories are not mutually exclusive.
TABLE 6.—COMMENTS REGARDING SAMHSA’S APPROACH FOR INCORPORATING INFORMATION ON THE EXTENT TO WHICH INTERVENTIONS HAVE BEEN TESTED WITH DIVERSE POPULATIONS AND IN DIVERSE SETTINGS
[Question 5]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
General comment² .........................    6     100      2     100     1     100     0     0.0    12     100
Venue ....................................    0     0.0      0     0.0     0     0.0     0     0.0     0     0.0
Channel ..................................    0     0.0      0     0.0     0     0.0     0     0.0     0     0.0
Format ...................................    1    16.7      0     0.0     0     0.0     0     0.0     0     0.0
                                           ‘‘Responsive’’ respondents
General comment² .........................    0     0.0      0     0.0     1     100     0     0.0     4    80.0
Venue ....................................    0     0.0      0     0.0     0     0.0     0     0.0     0     0.0
Channel ..................................    0     0.0      0     0.0     0     0.0     0     0.0     0     0.0
Format ...................................    0     0.0      0     0.0     0     0.0     0     0.0     1    20.0

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
TABLE 7.—COMMENTS REGARDING WHETHER ALL EXISTING PROGRAMS ON NREPP SHOULD BE REREVIEWED OR ‘‘GRANDFATHERED’’
[Question 6]

                                                                 Noteworthy                Responsive
                                                                      Percent of those         Percent of those
                                                               n      providing           n    providing
                                                                      comments                 comments
-----------------------------------------------------------------------------------------------------------------
              Comments from individuals affiliated with an existing NREPP program
         (8 individuals [3 Noteworthy, 5 Responsive] provided comments on this question)
Rereview* ................................................     2         66.7              1       20.0
Grandfather ..............................................     1         33.3              3       60.0
General comment ..........................................     1         33.3              2       40.0
         Comments from individuals not known to be affiliated with an existing NREPP program
         (29 individuals [21 Noteworthy, 8 Responsive] provided comments on this question)
Rereview .................................................    19         90.5              5       62.5
Grandfather ..............................................     0          0.0              1       12.5
General comment ..........................................     2          9.5              2       25.0

*Note: These categories are not mutually exclusive. There were instances of individuals who both commented specifically on whether to rereview or grandfather a program and also provided a general comment with regard to this question.
TABLE 8.—COMMENTS REGARDING GUIDANCE, RESOURCES, AND/OR TECHNICAL ASSISTANCE TO PROMOTE GREATER ADOPTION OF NREPP INTERVENTIONS
[Question 7]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
General comment² .........................    3    30.0      2    25.0     0     0.0     0     0.0     2    11.8
Venue ....................................    0     0.0      0     0.0     0     0.0     0     0.0     0     0.0
Channel ..................................    2    20.0      0     0.0     0     0.0     0     0.0     1     5.9
Format ...................................    1    10.0      0     0.0     0     0.0     0     0.0     0     0.0
Technical assistance .....................    5    50.0      5    62.5     1     100     0     0.0    11    64.7
Guidance .................................    4    40.0      0     0.0     0     0.0     0     0.0     1     5.9
Resources ................................    6    60.0      5    62.5     0     0.0     1     100     3    17.6
                                           ‘‘Responsive’’ respondents
General comment² .........................    0     0.0      0     0.0     0     0.0     0     0.0     3    16.7
Venue ....................................    0     0.0      0     0.0     0     0.0     0     0.0     2    11.1
Channel ..................................    0     0.0      0     0.0     1    20.0     0     0.0     3    16.7
Format ...................................    0     0.0      0     0.0     0     0.0     0     0.0     1     5.6
Technical assistance .....................    0     0.0      0     0.0     2    40.0     0     0.0     6    33.3
Guidance .................................    0     0.0      0     0.0     1    20.0     0     0.0     4    22.2
Resources ................................    0     0.0      0     0.0     3    60.0     0     0.0     9    50.0

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
TABLE 9.—COMMENTS REGARDING HOW CONSUMER, FAMILY, AND OTHER NONSCIENTIST STAKEHOLDERS COULD BE INVOLVED IN NREPP
[Question 8]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
General comment² .........................    0     0.0      0     0.0     1    50.0     0     0.0     1    50.0
Venue, channel, format ...................    2    20.0      0     0.0     0     0.0     0     0.0     1    50.0
Potential stakeholders ...................    7    70.0      5    71.4     0     0.0     0     0.0     0     0.0
Involvement in the development of the
  NREPP process ..........................    5    50.0      4    57.1     1    50.0     0     0.0     0     0.0
Involvement in program reviews ...........    6    60.0      5    71.4     1    50.0     0     0.0     0     0.0
                                           ‘‘Responsive’’ respondents
General comment² .........................    0     0.0      0     0.0     1    16.7     0     0.0     1     5.6
Venue, channel, format ...................    0     0.0      0     0.0     1    16.7     0     0.0     4    22.2
Potential stakeholders ...................    0     0.0      0     0.0     4    66.7     1     100    14    77.8
Involvement in the development of the
  NREPP process ..........................    0     0.0      0     0.0     4    66.7     0     0.0     8    44.4
Involvement in program reviews ...........    0     0.0      0     0.0     2    33.3     1     100     6    33.3

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
TABLE 10.—COMMENTS REGARDING GUIDANCE SAMHSA SHOULD PROVIDE FOR USE UNDER THE AGENCY’S SUBSTANCE ABUSE AND MENTAL HEALTH BLOCK GRANTS
[Question 9]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
Technical assistance² ....................    1    11.1      2    50.0     0     0.0     0     0.0     1     8.3
Funding support ..........................    4    44.4      3    75.0     1     100     1     100     9    75.0
Regulatory ...............................    6    66.7      1    25.0     0     0.0     0     0.0     2    16.7
                                           ‘‘Responsive’’ respondents
Technical assistance² ....................    0     0.0      0     0.0     1    50.0     0     0.0     2    18.2
Funding support ..........................    0     0.0      0     0.0     2     100     0     0.0     9    81.8
Regulatory ...............................    0     0.0      0     0.0     1    50.0     0     0.0     2    18.2

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
TABLE 11.—COMMENTS REGARDING STEPS SAMHSA SHOULD TAKE TO PROMOTE CONSIDERATION OF OTHER SOURCES OF EVIDENCE-BASED INTERVENTIONS
[Question 10]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
Steps SAMHSA should take² ................    4    80.0      1     100     0     0.0     0     0.0    12     100
Source ...................................    1    20.0      0     0.0     1     100     1     100     0     0.0
                                           ‘‘Responsive’’ respondents
Steps SAMHSA should take² ................    0     0.0      0     0.0     2     100     0     0.0     2    66.7
Source ...................................    0     0.0      0     0.0     0     0.0     0     0.0     2    66.7

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
TABLE 12.—COMMENTS REGARDING ANNUAL REVIEWS OF SUGGESTIONS FOR IMPROVING THE SYSTEM
[Question 11]

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
General comment ..........................    8     100      3     100     1     100     0     0.0    14     100
                                           ‘‘Responsive’’ respondents
General comment ..........................    0     0.0      0     0.0     2     100     0     0.0     7     100

¹ All percentages are calculated based on those providing comments.
TABLE 13.—ADDITIONAL COMMENTS NOT CLASSIFIED ELSEWHERE

                                            National org.   State org.    Local org.   Unknown org.     Private
                                              n      %¹      n      %      n      %      n      %      n      %
-----------------------------------------------------------------------------------------------------------------
                                           ‘‘Noteworthy’’ respondents
Other issues² ............................    4    66.7      1    25.0     1    50.0     0     0.0     1     100
Defining terms ...........................    5    83.3      3    75.0     1    50.0     0     0.0     1     100
                                           ‘‘Responsive’’ respondents
Other issues² ............................    0     0.0      0     0.0     1    50.0     0     0.0     5    71.4
Defining terms ...........................    0     0.0      0     0.0     2     100     0     0.0     2    28.6

¹ All percentages are calculated based on those providing comments.
² These categories are not mutually exclusive.
Subpart C.—Comments on Specific
Evidence Rating Criteria
Some of the respondents to
SAMHSA’s August 2005 Federal
Register notice submitted comments
about specific evidence rating criteria. A
summary and highlights of key
comments about these criteria are
presented below.
Intervention Fidelity
Two respondents commented on this
criterion. One noted that it is difficult to
monitor or confirm how treatment is
delivered and how staff are trained in
programs with complex approaches,
such as community reinforcement or
family training.
Comparison Fidelity
Eleven respondents commented on
this criterion. Ten of the respondents, a
group of researchers from a major
university, wrote:
The comparison fidelity evidence quality
criterion assumes the implementation and
fidelity monitoring of a ‘‘comparison
condition.’’ In universal and selective
prevention trials, this is not standard
protocol. Rather, individuals or communities
selected for comparison/control conditions
receive standard prevention services
available in the community. In such studies,
it does not make sense to measure the
‘‘fidelity’’ of the comparison condition.
However, as currently scored, this criterion
will penalize prevention studies. I
recommend the criterion and rating system
be changed to reflect this difference between
prevention and treatment research.
Nature of Comparison Condition
Fourteen respondents provided comments on this criterion. One respondent, a director of research and evaluation for a prevention program, noted:
Many program participants are drawn from underserved or marginalized populations, e.g., incarcerated youth, the mentally ill, linguistically isolated subgroups, or those suffering from Human Immunodeficiency Virus (HIV). For these populations, there may be no option to withhold active treatment only to the intervention group, due to legal requirements, health and safety considerations, or other ethical constraints. The American Evaluation Association (AEA) duly notes this consideration in its 2003 commentary on scientifically based evaluation methods.
Another service provider noted that
studies that include the target
intervention, comparison intervention,
and attention control ‘‘would require
funding at extremely high levels to have
enough N in each group for statistical
analysis. To conduct such a study in
today’s economic climate is probably
impractical.’’
A private citizen who submitted
comments wrote:
This is a critical criterion and should be
weighted more heavily than many, if not all,
of the other criteria. With the proposed
system, if one were trying to ‘‘game the
system,’’ it would be advantageous to choose
a comparison intervention that was
ineffective (and thus receive a low score on
this criterion), so as to increase the likelihood
of a significant treatment effect. Nevertheless,
the practice being evaluated could have
‘‘strong evidence’’ by scoring highly on other
criteria.
A group of university researchers said
that it is unclear how prevention
practices being compared to existing
prevention services would be scored
using this criterion.
Assurances to Participants
One respondent questioned ‘‘whether
such studies [without documented
assurances to participants] should ever
clear the bar for NREPP consideration.
If investigators do not observe
appropriate procedures to safeguard
study participants’ interests, it is at least
questionable whether their products
should receive any degree of attention
and support from SAMHSA.’’
Participant Expectations
Three respondents commented on this
criterion. Two respondents listed
potential problems with controlling
expectations in school settings. For
example, for an intervention to be
implemented effectively by teachers, the
teachers would have to be trained and
therefore would be aware of the
intervention they implement.
Two respondents pointed out that
expectations might be an active
component of the intervention. One
wrote that ‘‘trying to control
[expectations] might reduce
generalization of the eventual findings.
In addition, given current ethical
guidelines and human subjects policies,
it is hard to see how one could ‘mask’
study conditions in many studies. In
obtaining consent, one has to tell
participants about the conditions to
which they might be assigned and it is
likely that participants will know to
which condition they have been
assigned.’’
Data Collector Bias
Three respondents commented on this
criterion. One noted, ‘‘Changes to this
criterion should recognize the critical
need to ensure the fidelity of
psychosocial treatment interventions.
Fidelity, in these cases, can only be
ensured through staff awareness of the
actions required of them. Masking
conditions actually inhibits
psychosocial treatment fidelity.’’
Selection Bias
Three respondents commented on this
criterion. One suggested that approaches
other than random assignment, such as
blocking variables of interest, should
qualify for the highest score on this
item. Another pointed out that random
assignment to psychosocial
interventions might not be possible due
to ethical problems with nondisclosure.
He suggested rewording the item to
clarify that random assignment does not
refer only to ‘‘blinding’’ participants to
their treatment condition.
Attrition
Two respondents commented on this
criterion. One pointed out that the
criterion is unclear, and that ‘‘attrition
needing adjustment’’ is not defined, nor
is the difference between ‘‘crude’’ and
‘‘sophisticated’’ methods of adjusting for
attrition. This respondent also pointed
out that ‘‘sophisticated’’ does not
necessarily mean better than ‘‘crude’’
(this comment also applied to the
Missing Data criterion).
Theory-Driven Method Selection
Eleven respondents commented on
this criterion. A group of university
researchers wrote:
This is an important criterion. However,
this criterion should recognize that a number
of preventive interventions seek to address
and reduce risk factors or enhance protective
factors that research has shown are common
shared predictors of a range of drug use,
mental health, and other outcomes. It is
important to explicitly recognize this fact in
formulating and describing this criterion
* * * Not all reviewers, especially those
from treatment backgrounds, will be familiar
with the concept of addressing shared
predictors of broader outcomes in preventive
trials in order to affect wide-ranging
outcomes. This criterion needs to educate
reviewers about this in the same way that the
criterion currently warns against ‘‘dredging’’
for current significant results.
Subpart D.—Criterion-Specific Themes for Population-, Policy-, and System-Level Outcomes
Logic-Driven Selection of Measures
A group of researchers from a major
university suggested that this item and
the parallel item for individual-level
outcomes, Theory-Driven Measure
Selection, should have the same label.
Intervention Fidelity
The seven respondents who
commented on this criterion observed
that interventions must be adapted for
individual communities to be effective.
The criterion as written does not
account for this.
Nature of Comparison Condition
One respondent stated that there is
not consensus among evaluation
researchers on this topic, and until there
is, ‘‘we should reserve judgment on how
best to define the nature of comparison
conditions within community level
interventions.’’ She also pointed out,
‘‘Since the collective behaviors of
members in each community will vary
* * * how can they possibly be
compared to each other in a valid and
reliable way.’’
Data Collector Bias
A group of university researchers
pointed out that the item assumes
archival data are unbiased, while they
may be biased by institutional practices.
They suggested that the highest rating
‘‘be reserved for studies in which data
collectors were masked to the
population’s condition.’’
Another respondent, a national
organization, wrote:
The very nature of coalition work requires
coalition members to be involved in its
evaluation and research efforts. It is
culturally detrimental and unethical to work
with coalitions in such a way that they are
not involved in the evaluation process.
Expecting the data collectors to be blind to
the efforts of the community means that the
researchers are outside the community and
would have no understanding of the context
in which the coalition works. Many
evaluators and researchers view this as the
absolute wrong way to work with coalitions.
Criterion Seven [Data Collector Bias] runs
counter to participatory research which is the
standard in working with coalitions.
Population Studied
Eleven respondents commented on
this criterion. One respondent stated
that quasi-experimental time-series
designs might be as internally valid as
randomized control designs, and felt
this should be reflected in the criterion.
A group of university researchers
advocated excluding single-group pre-/
posttest design studies from NREPP.
They wrote, ‘‘A group randomized
design with adequate numbers of groups
in each condition holds the greatest
potential for ruling out threats to
internal validity in community-level
studies. This criterion should be
expanded to provide a rating of four for
group randomized studies with
adequate Ns.’’
Subpart E.—Committee for Children’s Suggestions for Utility Descriptors
1. Implementation Support
Regarding the ease of acquiring
materials, is there centralized ordering
for all materials? What implementation
support materials are included in initial
program cost, and are they adequate?
Are basic program updates and
replacement parts all easily available?
Regarding start-up support, research
suggests that there are several features
that are important to the effectiveness
and sustainability of programs. These
include an active steering committee,
administrator support, engagement of
family members, and whole-school
implementation (for school-based
programs). Do the basic program
materials provided supply adequate
guidance for effectively gaining these
sources of support? On the other hand,
some clients are not in a position to
achieve all of these goals. Is it possible
to effectively implement the program
without them? Are needs assessment
tools offered? This is important for
determining whether implementation
should take place at all. What is the
nature of the start-up implementation
support? What is the nature of the
ongoing implementation support? Is
client support differentiated for new
and experienced clients? Do client
support personnel have adequate
training to answer sophisticated
questions from the most highly
experienced program implementers? Is
there implementation support through a
variety of media? What support is there
for transfer of learning? For example: practice beyond specific lessons; opportunities for the population served to demonstrate, and be reinforced for, skills beyond specific lessons; support for staff awareness of skills, how to recognize skills, and how to reinforce skills; examples typical of the daily setting; materials for engaging family members of the population served; materials for engaging staff outside the implementers of the program (e.g., residential housekeeping staff, school playground monitors); and support for engaging community members outside the implementation setting. What training is required, and what training is available beyond that which is required?
2. Quality Monitoring
Are the tools supplied for quality
monitoring user-friendly and
inexpensive? How well are they adapted
specifically to the program? What are
their psychometric characteristics?
3. Unintended or Adverse Events
No further comments.
4. Population Coverage
Are the materials appropriate to the population to be served in regard to, for example, length of lessons, vocabulary, concepts and behavioral expectations, and teaching strategies?
5. Cultural Relevance and Cultural
Competence
To what extent was cultural relevance
addressed during the development of
the program? Is there a theoretical basis
to the program that addresses cultural
relevance? Were stakeholders from a
variety of relevant backgrounds engaged
in the development process? How early
in the development process were they
involved? In what ways were they
involved? Were professionals with
multicultural expertise involved in the
development process? How early in the
development process were they
involved? In what ways were they
involved?
6. Staffing
Since FTEs are often difficult to
estimate and estimates may therefore
be unreliable, the required time should
be estimated for the following: Required
training time, on-site start-up activities,
implementer preparation time per week,
lesson length × number of lessons per
implementer, time required for other
activities.
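As an illustration of the Committee’s suggested time breakdown, the figures below are hypothetical and not drawn from the notice; they simply show how the per-activity estimates would sum to a total implementer time commitment:

\[
\underbrace{16\ \text{h}}_{\text{training}}
+ \underbrace{8\ \text{h}}_{\text{start-up}}
+ \underbrace{0.5\ \tfrac{\text{h}}{\text{wk}} \times 30\ \text{wk}}_{\text{preparation}}
+ \underbrace{0.75\ \text{h} \times 20\ \text{lessons}}_{\text{lesson delivery}}
+ \underbrace{5\ \text{h}}_{\text{other activities}}
= 59\ \text{h}.
\]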
7. Cost
No further comments on this
descriptor except to reiterate that cost
considerations play into several of the
other descriptors.
8. Motivational Issues Affecting
Implementation
We suggest that consideration be
given to examining what further
motivational issues may impact whether
the programs are implemented and
sustained with fidelity. These include: appeal of materials and activities for the population to be served; appeal of materials and activities for the staff who will implement the programs; support of the program for the preexisting goals and programs of the site (e.g., school-based programs that support academics); how well the program otherwise integrates with existing goals, programs, and activities of the site (e.g., teachers are expected to direct student discussions, but not therapy); support offered for adapting the program to specific local populations; and fit of materials to the typical structures of the setting (e.g., lessons short enough to fit within a class period, necessary equipment that is usually available in the setting).
[FR Doc. 06–2313 Filed 3–13–06; 8:45 am]
BILLING CODE 4160–01–M
DEPARTMENT OF HOUSING AND
URBAN DEVELOPMENT
[Docket No. FR–5037–N–12]
Notice of Submission of Proposed Information Collection to OMB; Deed-in-Lieu of Foreclosure (Corporate Mortgagors or Mortgagors Owning More than One Property)
AGENCY: Office of the Chief Information Officer, HUD.
ACTION: Notice.
SUMMARY: The proposed information
collection requirement described below
has been submitted to the Office of
Management and Budget (OMB) for
review, as required by the Paperwork
Reduction Act. The Department is
soliciting public comments on the
subject proposal.
Mortgagees must obtain written
consent from HUD’s National Servicing
Center to accept a deed-in-lieu of
foreclosure when the mortgagor is a
corporate mortgagor or a mortgagor
owning more than one property insured
by the Department of Housing and
Urban Development (HUD). Mortgagees
must provide HUD with specific
information.
DATES: Comments Due Date: April 13,
2006.
ADDRESSES: Interested persons are
invited to submit comments regarding
this proposal. Comments should refer to
the proposal by name and/or OMB
approval Number (2502–0301) and
should be sent to: HUD Desk Officer,
Office of Management and Budget, New
Executive Office Building, Washington,
DC 20503; fax: 202–395–6974.
FOR FURTHER INFORMATION CONTACT:
Lillian Deitzer, Reports Management
Officer, AYO, Department of Housing
and Urban Development, 451 Seventh
Street, SW., Washington, DC 20410; e-mail Lillian Deitzer at
Lillian_L_Deitzer@HUD.gov or
telephone (202) 708–2374. This is not a
toll-free number.
Copies of available documents
submitted to OMB may be obtained
from Ms. Deitzer.
SUPPLEMENTARY INFORMATION: This
notice informs the public that the
Department of Housing and Urban
Development has submitted to OMB a
request for approval of the information
collection described below. This notice
is soliciting comments from members of
the public and affected agencies
concerning the proposed collection of
information to: (1) Evaluate whether the
proposed collection of information is
necessary for the proper performance of
the functions of the agency, including
whether the information will have
practical utility; (2) Evaluate the
accuracy of the agency’s estimate of the
burden of the proposed collection of
information; (3) Enhance the quality,
utility, and clarity of the information to
be collected; and (4) Minimize the
burden of the collection of information
on those who are to respond, including
through the use of appropriate
automated collection techniques or
other forms of information technology,
e.g., permitting electronic submission of
responses.
This notice also lists the following
information:
Title of Proposal: Deed-in-Lieu of
Foreclosure (Corporate Mortgagors or
Mortgagors Owning More than One
Property).
OMB Approval Number: 2502–0301.
Form Numbers: None.
Description of the Need for the
Information and Its Proposed Use:
Mortgagees must obtain written consent
from HUD’s National Servicing Center
to accept a deed-in-lieu of foreclosure
when the mortgagor is a corporate
mortgagor or a mortgagor owning more
than one property insured by the
Department of Housing and Urban
Development (HUD). Mortgagees must
provide HUD with specific information.
Frequency of Submission: On occasion.

                                        Number of         Annual      ×    Hours per    =    Burden
                                       respondents       responses          response          hours
Reporting Burden ..................        600             0.041              0.5              12.5
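As an arithmetic check on the burden estimate (the intermediate rounding shown here is an inference, not stated in the notice):

\[
600 \times 0.041 = 24.6 \approx 25\ \text{responses per year}, \qquad 25 \times 0.5\ \text{h} = 12.5\ \text{burden hours}.
\]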
Total Estimated Burden Hours: 12.5.
Status: Extension of a currently approved collection.
Authority: Section 3507 of the Paperwork Reduction Act of 1995, 44 U.S.C. 35, as amended.
Dated: March 9, 2006.
Lillian L. Deitzer,
Departmental Paperwork Reduction Act Officer, Office of the Chief Information Officer.
[FR Doc. E6–3616 Filed 3–13–06; 8:45 am]
BILLING CODE 4210–67–P
DEPARTMENT OF THE INTERIOR
Fish and Wildlife Service
Draft Conservation Agreement for the Yellow-Billed Loon (Gavia adamsii)
AGENCY: U.S. Fish and Wildlife Service, Interior.
ACTION: Notice of document availability for review and comment.
SUMMARY: We, the U.S. Fish and Wildlife Service, announce the availability of the Draft Conservation Agreement for the Yellow-billed Loon (Gavia adamsii) for public review and comment.
DATES: Comments on the draft conservation agreement must be received on or before April 13, 2006.
ADDRESSES: Copies of the conservation agreement are available for inspection, by appointment, during normal business hours at the following location: U.S. Fish and Wildlife Service, Fairbanks Fish and Wildlife Field Office, 101 12th
Agencies
[Federal Register Volume 71, Number 49 (Tuesday, March 14, 2006)]
[Notices]
[Pages 13132-13155]
From the Federal Register Online via the Government Printing Office [www.gpo.gov]
[FR Doc No: 06-2313]
-----------------------------------------------------------------------
DEPARTMENT OF HEALTH AND HUMAN SERVICES
Substance Abuse and Mental Health Service Administration
Changes to the National Registry of Evidence-Based Programs and
Practices (NREPP)
AGENCY: Substance Abuse and Mental Health Services Administration, HHS.
ACTION: Notice.
-----------------------------------------------------------------------
SUMMARY: The Substance Abuse and Mental Health Services Administration
(SAMHSA) is committed to preventing the onset and reducing the
progression of mental illness, substance abuse, and substance-related
problems among all individuals, including youth. As part of this
effort, SAMHSA has expanded and refined the agency's National Registry
of Evidence-based Programs and Practices (NREPP) based on a systematic
analysis and consideration of public comments received in response to a
previous Federal Register notice (70 FR 50381, Aug. 26, 2005).
This Federal Register notice summarizes SAMHSA's redesign of NREPP
as a decision support tool for promoting a greater adoption of
evidence-based interventions within typical community-based settings,
and provides an opportunity for interested parties to become familiar
with the new system.
FOR FURTHER INFORMATION CONTACT: Kevin D. Hennessy, Ph.D., Science to
Service Coordinator/SAMHSA, 1 Choke Cherry Road, Room 8-1017,
Rockville, MD 20857, (240) 276-2234.
Charles G. Curie,
Administrator, SAMHSA.
Advancing Evidence-Based Practice Through Improved Decision Support
Tools: Reconceptualizing NREPP
Introduction
The Substance Abuse and Mental Health Services Administration
(SAMHSA) strives to provide communities with effective, high-quality,
and cost-efficient prevention and treatment services for mental and
substance use disorders. To meet this goal, SAMHSA recognizes the needs
of a wide range of decisionmakers at the local, state, and national
levels to have readily available and timely information about
scientifically established interventions to prevent and/or treat these
disorders.
SAMHSA, through its Science to Service Initiative, actively seeks
to promote Federal collaboration (e.g., with the National Institutes of
Health [NIH]) in translating research into practice. The ideal outcome
of this Initiative is that individuals at risk for or directly
experiencing mental and substance abuse use disorders will be more
likely to receive appropriate preventive or treatment services, and
that these services will be the most effective and the highest quality
that the field has to offer.
This report provides a summary of activities conducted during the
past year to critically evaluate SAMHSA's recent activities and future
plans for the National Registry of Evidence-based Programs and
Practices (NREPP). It outlines the major themes that emerged from a
formal public comment process and links this feedback to new review
procedures and Web-based decision support tools that will enhance
access to evidence-based knowledge for multiple audiences.
The report is presented in four sections:
Section I briefly states the background of NREPP and
SAMHSA's recent request for public comments.
Section II discusses the analysis of comments that was
conducted and presents the key recommendations for NREPP based on this
analysis.
Section III describes the new approach that SAMHSA is
advancing for NREPP.
Section IV presents the specific dimensions of the NREPP
system in its new framework as a decision support tool.
[[Page 13133]]
Section V describes future activities at SAMHSA to support
NREPP.
I. Background: The National Registry of Evidence-Based Programs and
Practices
The National Registry of Evidence-based Programs and Practices was
designed to represent a key component of the Science to Service
Initiative. It was intended to serve as a voluntary rating and
classification system to identify programs and practices with a strong
scientific evidence base. An important reason for developing NREPP was
to reduce the significant time lag between the generation of scientific
knowledge and its application within communities.\1\ Quality treatment
and prevention services depend on service providers' ability to access
evidence-based scientific knowledge, standardized protocols, practice
guidelines, and other practical resources.
---------------------------------------------------------------------------
\1\ As cited by the Institute of Medicine (2001), studies have
suggested it takes an average of 17 years for research evidence to
diffuse to clinical practice. Source: Balas, E.A., & Boren, S.A.
(2000). Managing clinical knowledge for health care improvement. In:
J. Bemmel & A.T. McCray (Eds.), Yearbook of medical informatics
2000: Patient-centered systems. Stuttgart, Germany: Schattauer.
---------------------------------------------------------------------------
The precursor of NREPP, the National Registry of Effective
Prevention Programs, was developed by SAMHSA's Center for Substance
Abuse Prevention (CSAP) as a way to help professionals in the field
become better consumers of substance abuse prevention programs. Through
CSAP's Model Program Initiative, over 1,100 programs were reviewed, and
more than 150 were designated as Model, Effective, or Promising
Programs.
Over the past 2 years, SAMHSA convened a number of scientific
panels to explore the expansion of the NREPP review system to include
interventions in all domains of mental health and substance abuse
prevention and treatment. In addition, SAMHSA committed itself to three
guiding principles--transparency, timeliness, and accuracy of
information--in the development of an evidence-based registry of
programs and practices.
During this process it was determined that, to provide the most
transparent and accurate information to the public, evidence should be
assessed at the level of outcomes targeted by an intervention, not at
the more global level of interventions or programs. Based on this
decision, SAMHSA's current NREPP contractor conducted a series of pilot
studies to explore the validity and feasibility of applying an outcome-
specific, 16-criteria evidence rating system to an expanded array of
programs and practices. Through extensive dialogues with the prevention
community, SAMHSA also explored ways to provide evidence-based reviews
of population- and community-level interventions within NREPP.
In an effort to augment the information gained through these
activities, SAMHSA solicited formal public comments through a notice
posted in the Federal Register on August 26, 2005. The notice asked for
responses to the agency's plans for NREPP, including (1) revisions to
the scientific review process and review criteria; (2) the conveying of
practical implementation information about NREPP programs and practices
to those who might purchase, provide, or receive these interventions;
and (3) the types of additional agency activities that may be needed to
promote wider adoption of interventions on NREPP, as well as support
innovative interventions seeking NREPP status. A brief summary of the
public comments and key public recommendations is presented in Section
II. The complete analysis of the public responses is included in the
Appendix to this report.
II. Public Responses to the Federal Register Notice
Senior staff at SAMHSA engaged in a comprehensive review of
comments received in response to the Federal Register notice.
Particular attention was directed to comments from prominent state and
Federal stakeholders, including providers and policymakers, who stand
to be the most affected by whatever system is ultimately implemented.
Efforts were taken to balance SAMHSA's responsiveness to public
feedback with the need to adhere to rigorous standards of scientific
accuracy and to develop a system that will be fair and equitable to
multiple stakeholder groups.
Recommendations for NREPP
In the more than 100 comments received as part of the public
comment process, a number of recurring themes and recommendations were
identified. While all specific and general recommendations for
modification of the NREPP review process were carefully considered by
SAMHSA, the following are those that were considered most essential to
the development of an accurate, efficient, and equitable system that
can meet the needs of multiple stakeholders:
Limit the system to interventions that have demonstrated
behavioral change outcomes. it is inherently appealing to the funders,
providers, and consumers of prevention and treatment services to know
that an intervention has a measurable effect on the actual behavior of
participants. As researchers at the University of Washington
recommended, ``the system should be reserved for policies, programs,
and system-level changes that have produced changes in actual drug use
or mental health outcomes.''
Rereview all existing programs. There was near consensus
among the respondents to the notice that existing programs with Model,
Effective, and Promising designations from the old reviews should be
rereviewed under the new system. The Committee for Children pointed out
that ``a `grandfather' system may give the impression to users, right
or wrong, that these interventions aren't as good as those that have
undergone the new review process.'' One individual suggested that
programs and practices needed to be rated ``according to a consistent
set of criteria'' so that ``the adoption of an intervention by a
provider can be made with confidence.''
Train and utilize panels of reviewers with specific
expertise related to the intervention(s) under review. Respondents to
the notice noted that it would be important for the NREPP review
process to utilize external reviewers with relevant scientific and
practical expertise related to the intervention being assessed. In
addition, the pool of available reviewers should broadly include
community-level and individual-level prevention as well as treatment
perspectives. In order to promote transparency of the review process,
the reviewer training protocols should be available for review by the
public (e.g., posted on the NREPP Web site).
Provide more comprehensive and balanced descriptions of
evidence-based practices, by emphasizing the important dimension of
readiness for dissemination. The American Psychological Association
(APA) Committee on Evidence-Based Practice recommended greater emphasis
on the utility descriptors (i.e., those items describing materials and
resources to support implementation), stating, ``these are key outcomes
for implementation and they are not adequately addressed in the
description of NREPP provided to date. This underscores earlier
concerns noted about the transition from efficacy to effectiveness.''
The APA committee noted that generalizability of programs listed on
NREPP will remain an issue until this ``gap between efficacy and
effectiveness'' is explicitly addressed under a revised review system.
Avoid limiting flexibility and innovation; implement a
system that is
[[Page 13134]]
fair and inclusive of programs and practices with limited funding, and
establish policies that seek to prevent the misuse of information
contained on NREPP. The National Association for Children of Alcoholics
voiced this concern: ``It has been intrinsically unfair that only
grants [referring to NIH-funded efforts] have been able to establish
`evidence' while many programs appear very effective--often more
effective in some circumstances than NREPP approved programs, but have
not had the Federal support or other major grant support to evaluate
them. The SAMHSA grant programs continue to reinforce the designation
of NREPP programs in order to qualify for funding, and the states tend
to strengthen this `stipulation' to local programs,who then drop good
(non-NREPP) work they have been doing or purchase and manipulate NREPP
programs that make the grant possible. This is not always in the best
interest of the client population to be served.''
Recognize multiple ``streams of evidence'' (e.g.,
researcher, practitioner, and consumer) and the need to provide
information to a variety of stakeholders in a decision support context.
A number of comments suggested that NREPP should be more inclusive of
the practitioner and consumer perspective on what defines evidence. For
example, one commenter noted: ``The narrowed interpretation of
evidence-based practice by SAMHSA focuses almost solely on the research
evidence to the exclusion of clinical expertise and patient values.''
Several comments noted that NREPP should be consistent with the
Institute of Medicine's definition of evidence-based practice, which
reflects multiple ``streams of evidence'' that include research,
clinical, and patient perspectives.
Provide a summary rating system that reflects the
continuous nature of evidence quality. There was substantial
disagreement among those responding to the notice concerning whether
NREPP should include multiple categories of evidence quality. While a
number of individuals and organizations argued for the use of
categorical evidence ratings, there were many who suggested that NREPP
should provide an average, numeric scale rating on specific evidence
dimensions to better reflect the ``continuous nature of evidence.''
This approach would allow the user of the system to determine what
level of evidence strength is required for their particular application
of an intervention.
Recognize the importance of cultural diversity and provide
complete descriptive information on the populations for which
interventions have been developed and applied. Most comments reflected
the knowledge that cultural factors can play an important role in
determining the effectiveness of interventions. The Oregon Office of
Mental Health and Addiction Services noted, ``SAMHSA should focus
considerable effort on identifying and listing practices useful and
applicable for diverse populations and rural areas. Providers and
stakeholders from these groups have repeatedly expressed the concern
they will be left behind if no practices have been identified which fit
the need of their area. We need to take particular care to ensure that
their fear is not realized.''
In addition to estimating the effect size of intervention
outcomes, NREPP should include additional descriptive information about
the practical impacts of programs and practices. In general, comments
suggested that that effect size should not be used as an exclusionary
criterion in NREPP. It was widely noted that effect size estimates for
certain types of interventions (e.g., community-level or population-
based) will tend to be of smaller magnitude, and that ``professionals
in the field have not reached consensus on how to use effect size.''
Researchers at the University of Washington suggested the inclusion of
information about the reach of an intervention, when available, as
complementary information to effect sizes. Several comments also
suggested that effect size is often confused with the clinical
significance of an intervention and its impact on participants.
Acknowledge the need to develop additional mechanisms of
Federal support for technical assistance and the development of a
scientific evidence base within local prevention and treatment
communities. Nearly one third of the comments directly addressed the
need for SAMHSA to identify and/or provide additional technical
assistance resources to communities to help them adapt and implement
evidence-based practices. The Oregon Office of Mental Health and
Addiction Services wrote, ``The adoption of new practices by any entity
is necessarily a complex and long-term process. Many providers will
need technical support if adoption and implementation is to be
accomplished effectively. Current resources are not adequate to meet
this challenge.''
In order to align NREPP with the important recommendations
solicited through the public comment process, SAMHSA also recognized
the importance of the following goals:
Provide a user-friendly, searchable array of descriptive
summary information as well as reviewer ratings of evidence quality.
Provide an efficient and cost-effective system for the
assessment and review of prospective programs and practices.
Section III, Streamlined Review Procedures, provides a complete
description of the modified and streamlined review process that SAMHSA
will adopt in conducting evidence-based evaluations of mental health
and substance abuse interventions.
III. Streamlined Review Procedures
The number and range of NREPP reviews are likely to expand
significantly under the new review system, requiring that SAMHSA
develop an efficient and cost-effective review process. The streamlined
review procedures, protocols, and training materials will be made
available on the NREPP Web site for access by all interested
individuals and organizations.
Reviews of interventions will be facilitated by doctoral-level
Review Coordinators employed by the NREPP contractor. Each Review
Coordinator will support two external reviewers who will assign
numeric, criterion-based ratings on the dimensions of Strength of
Evidence and Readiness for Dissemination. Review Coordinators will
provide four important support and facilitative functions within the
peer review process: (1) They will assess incoming applications for the
thoroughness of documentation related to the intervention, including
documentation of significant outcomes, and will convey summaries of
this information to SAMHSA Center Directors for their use in
prioritizing interventions for review; (2) they will serve as the
primary liaison with the applicant to expedite the review of
interventions; (3) they will collaborate with the NREPP applicant to
draft the descriptive dimensions for the intervention summaries; and
(4) they will provide summary materials and guidance to external
reviewers to facilitate initial review and consensus discussions of
intervention ratings.
Interventions Qualifying for Review
While NREPP will retain its open submission policy, the new review
system emphasizes the important role of SAMHSA's Center Directors and
their staff (in consultation with key stakeholders) in setting
intervention review priorities that will identify the particular
content areas, types of
[[Page 13135]]
intervention approaches, populations, or even types of research designs
that will qualify for review under NREPP. Under the streamlined review
procedures, the sole requirement for potential inclusion in the NREPP
review process is for an intervention to have demonstrated one or more
significant behavioral change outcomes. Center-specific review
priorities will be established and communicated to the field by posting
them to the NREPP Web site at the beginning of each fiscal year.\2\
---------------------------------------------------------------------------
\2\ Except for FY06 when priorities will be established and
posted when the new system Web site is launched (i.e., within the
third FY quarter).
---------------------------------------------------------------------------
Review of Existing NREPP Programs and Practices
It will be the prerogative of SAMHSA Center Directors to establish
priorities for the review and interventions already on, and pending
entry on, NREPP. As indicated above, these decisions may be linked to
particular approaches, populations, or strategic objectives as
identified by SAMHSA as priority areas. Until reviews of existing NREPP
programs and practices are completed and posted to the new NREPP Web
site, the current listing on the SAMHSA Model Programs Web site will
remain intact.
Notifications to Program/Practice Developers
Upon the completion of NREPP reviews program/practice developers
(or principal investigators of a research-based intervention) will be
notified in writing within 2 weeks of the review results. A complete
summary, highlighting information from each of the descriptive and
rating dimensions, will be provided for review. Program/practice
developers who disagree with the descriptive information or ratings
contained in any of the dimensions will have an opportunity to discuss
their concerns with the NREPP contractor during the 2-week period
following receipt of the review outcome notification. These concerns
must be expressed in writing to the contractor within this 2-week
period. If no comments are received, the review is deemed completed,
and the results may be posted to the NREPP Web site. If points of
disagreement cannot be resolved by the end of this 2-week period, then
written appeals for a rereview of the intervention may be considered on
a case-by-case basis.
NREPP Technical Expert Panel
SAMHSA will organize one or more expert panels to perform periodic
(e.g., annual assessments of the evidence review system and recommend
enhancements to to the review procedures and/or standards for evidence-
based science and practice. Panel membership will represent a balance
of perspectives and expertise. The panels will be comprised of
researchers with knowledge of evidence-based practices and initiatives,
policymakers, program planners and funders, practitioners, and
consumers.
The modified NREPP system embodies a commitment by SAMHSA and its
Science to Service Initiative to broaden the appeal and utility of the
system to multiple audiences. While maintaining the focus on the
documented outcomes achieved through a program or practice, NREPP also
is being developed as a user-friendly decision support tool to present
information along multiple dimensions of evidence. Under the new
system, interventions will not receive single, overall ratings as was
the case with the previous NREPP (e.g., Model, Effective, or
Promising). Instead, an array of information from multiple evidence
dimensions will be provided to allow different user audiences to both
identify (through Web-searchable means) and prioritize the factors that
are important to them in assessing the relative strengths of different
evidence-based approaches to prevention or treatment services.
Section IV presents in more detail the specific dimensions of
descriptive information and ratings that NREPP will offer under this
new framework.
IV. NREPP Decision Support Tool Dimensions
The NREPP system will support evidence-based decisionmaking by
providing a wide array of information across multiple dimensions. Many
of these are brief descriptive dimensions that will allow users to
identify and search for key intervention attributes of interest.
Descriptive dimensions would frequently include a brief, searchable
keyword or attribute (e.g., ``randomized control trial'' under the
Evaluation Design dimension) in addition to narrative text describing
that dimension. Two dimensions, Strength of Evidence and Readiness for
Dissemination, will consist of quantitative, criterion-based ratings by
reviewers. These quantitative ratings will be accompanied by reviewer
narratives summarizing the strengths and weaknesses or the intervention
along each dimension.
Considerations for Using NREPP as a Decision Support Tool
It is essential for end-users to understand that the descriptive
information and ratings provided by NREPP are only useful within a much
broader context that incorporates a wide range of perspectives--
including clinical, consumer, administrative, fiscal, organizational,
and policy--into decisions regarding the identification, selection, and
successful implementation of evidence-based services. In fact, an
emerging body of literature on implementation science \3\ suggests that
a failure to carefully attend to this broader array of data and
perspectives may well lead to disappointing or unsuccessful efforts to
adopt evidence-based interventions. Because each NREPP user is likely
to be seeking somewhat different information, and for varied purposes,
it is unlikely that any single intervention included on NREPP will
fulfill all of the specific requirements and unique circumstances of a
given end-user. Appreciation of this basic premise of NREPP as a
decision support tool to be utilized in a broader context will thus
enable system users to make their own determinations regarding how best
to assess and apply the information provided.
---------------------------------------------------------------------------
\3\ Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M., &
Wallace, F. (2005). Implementation research: A synthesis of the
literature. Tampa, Florida: University of South Florida, Louis de la
Parte Florida mental Health Institute, The National Implementation
Network (FMHI Publication 231).
Rogers (1995). Diffusion of innovaations (5th Ed.) New York: The
Free Press.
---------------------------------------------------------------------------
The NREPP decision support dimensions include:
Descriptive Dimensions
Strength of Evidence Dimension Ratings
Readiness for Dissemination Dimension Ratings
A complete description of these dimensions is provided in the
sections below.
Descriptive Dimensions
Intervention Name and Summary: Provides a brief summary of
the intervention, including title, description of conceptual or
theoretical foundations, and overall goals. Hyperlinks to graphic logic
model(s), when available, could be accessed from this part of the
summary.
Contract Information: Lists key contact information.
Typically will include intervention developer's title(s), affiliation,
mailing address, telephone and fax numbers, e-mail address, and Web
site address.
Outcome(s): A searchable listing of the behavioral
outcomes that the intervention has targeted.
Effects and Impact: Provides a description and
quantification of the effects observed for each outcome.
[[Page 13136]]
Includes information on the statistical significance of outcomes, the
magnitude of changes reported including effect size and measures of
clinical significance (if available), and the typical duration of
behavioral changes produced by the intervention.
Relevant Populations and Settings: Identifies the
populations and sample demographics that characterize existing
evaluations. The settings in which different populations have been
evaluated will be characterized along a dimension that ranges from
highly controlled and selective (i.e., efficacy studies), to less
controlled and more representative (i.e., effectiveness studies), to
adoption in the most diverse and realistic public health and clinical
settings (i.e., dissemination studies).\4\
---------------------------------------------------------------------------
\4\ For more description of these types of studies and their
role in supporting evidence-based services, see the report: Bridging
science and service: A report by the National Advisory mental Health
Council's Clinical Treatment and Services Research Workgroup (http:/
/www.nimh.nih.gov/publicat/nimhbridge.pdf).
---------------------------------------------------------------------------
Costs: Provides a breakdown of intervention cost(s) per
recipient/participant or annual as appropriate (including capital
costs, other direct costs [travel, etc.]). Start-up costs including
staff training and development. A standardized template would be
provided to applicants for estimating and summarizing the
implementation and maintenance costs of an intervention.
Adverse Effects: Reported with regard to type and number,
amounts of change reported, type of data collection, analyses used,
intervention and comparison group, and subgroups.
Evaluation Design: Contains both a searchable index of
specific experimental and quasi-experimental designs (e.g., pre-/
posttest nonequivalent groups designs, regression-discontinuity
designs, interrupted time series designs, etc.) \5\ as well as a
narrative description of the design (including intervention and
comparison group descriptions) used to document intervention outcomes.
---------------------------------------------------------------------------
\5\ Campbell, D.T., & Stanley, J.C. (1966). Experimental and
quasi-experimental designs for research. Chicago: Rand McNally.
---------------------------------------------------------------------------
Replication(s): Coded as ``None,'' or will state the
number of replications to date (only those that have been evaluated for
outcomes). Replications will be additionally characterized as having
been conducted in efficacy, effectiveness, or dissemination contexts.
Proprietary or Public Domain Intervention: Typically will
be one or the other, but proprietary components or instruments used as
part of an intervention will be identified.
Cultural Appropriateness: Coded as ``Not Available'' (N/A)
if either no data or no implementation/training materials for
particular culturally identified groups are available. When culture-
specific data and/or implementation materials exist for one or more
groups, the following two Yes/No questions will be provided for each
group:
Was the intervention developed with participation by
members of the culturally identified group?
Are intervention and training materials translated or
adapted to members of the culturally identified group?
Implementation History: Provides information relevant to
the sustainability of interventions. Provides descriptive information
on (1) the number of sites that have implemented the intervention; (2)
how many of those have been evaluated for outcomes; (3) the longest
continuous length of implementation (in years); (4) the average or
modal length of implementation; and (5) the approximate number of
individuals who have received or participated in the intervention.
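The descriptors above effectively define the fields of a registry
record. As a purely illustrative sketch (not part of the notice; all
names are hypothetical), the Python fragment below shows one way such
a record might be represented, with codings following the
descriptions above:

from dataclasses import dataclass, field

@dataclass
class NREPPEntry:
    # Replication(s): None corresponds to the coding ``None''; otherwise
    # the count of replications that have been evaluated for outcomes.
    replications: int | None = None
    # Context of each replication: "efficacy", "effectiveness", or
    # "dissemination".
    replication_contexts: list[str] = field(default_factory=list)
    # Proprietary or Public Domain Intervention.
    proprietary: bool = False
    # Cultural Appropriateness: per culturally identified group, the two
    # Yes/No questions (developed with group participation; materials
    # translated or adapted). An empty dict corresponds to the ``Not
    # Available'' (N/A) coding.
    cultural_groups: dict[str, tuple[bool, bool]] = field(default_factory=dict)
    # Implementation History.
    sites_implemented: int = 0
    sites_evaluated_for_outcomes: int = 0
    longest_implementation_years: float = 0.0
    individuals_served: int = 0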
Strength of Evidence Dimension Ratings
Quantitative, reviewer-based ratings on this dimension will be
provided within specific categories of research/evaluation design. In
this manner, users can search and select within those categories of
research designs that are most relevant to their particular standards
of evidence-based knowledge. The categories of research design that are
accepted within the NREPP system are described below.
Research Design
Quality of evidence for an intervention depends on the strength of
adequately implemented research design controls, including comparison
conditions for quasi-experimental and randomized experimental designs
(individual studies). Aggregation (e.g., meta-analysis and systematic
research reviews) and/or replication across well-designed series of
quasi-experimental and randomized control studies provide the strongest
evidence. The evidence pyramid presented below represents a typical
hierarchy for classifying the strength of causal inferences that can be
obtained by implementing various research designs with rigor.\6\
Designs at the lowest level of the evidence pyramid (i.e., observational,
pilot, or case studies), while acceptable as evidence in some knowledge
development contexts, would not be included in the NREPP system.
---------------------------------------------------------------------------
\6\ Biglan, A., Mrazek, P., Carnine, D.W., & Flay, B. R. (2003).
The integration of research and practice in the prevention of youth
problem behaviors. American Psychologist, 58, 433-440.
Chambless, D. L., & Hollon, S. (1998). Defining empirically
supported therapies. Journal of Consulting and Clinical Psychology,
66, 7-18.
Gray, J. A. (1997). Evidence-based healthcare: How to make
health policy and management decisions. New York: Churchill
Livingstone.
---------------------------------------------------------------------------
[[Page 13137]]
[GRAPHIC] [TIFF OMITTED] TN14MR06.000
1. Reliability \7\
---------------------------------------------------------------------------
\7\ Each criterion would be rated on an ordinal scale ranging
from 0 to 4. The endpoints and midpoints of the scale would be
anchored to a narrative description of that rating. The remaining
integer points of the scale (i.e., 1 and 3) would not be explicitly
anchored, but could be used by reviewers to assign intermediate
ratings at their discretion.
---------------------------------------------------------------------------
Outcome measures should have acceptable reliability to be
interpretable. ``Acceptable'' here means reliability at a level that is
conventionally accepted by experts in the field.\8\
---------------------------------------------------------------------------
\8\ Marshall, M., Lockwood, A., Bradley, C., Adams, C., Joy, C.,
& Fenton, M. (2000). Unpublished rating scales: A major source of
bias in randomised controlled trials of treatments for
schizophrenia. British Journal of Psychiatry, 176, 249-252.
---------------------------------------------------------------------------
0 = Absence of evidence of reliability or evidence that some relevant
types of reliability (e.g., test-retest, interrater, interitem) did not
reach acceptable levels.
2 = All relevant types of reliability have been documented to be at
acceptable levels in studies by the applicant.
4 = All relevant types of reliability have been documented to be at
acceptable levels in studies by independent investigators.
2. Validity
Outcome measures should have acceptable validity to be
interpretable. ``Acceptable'' here means validity at a level that is
conventionally accepted by experts in the field.
0 = Absence of evidence of measure validity, or some evidence that the
measure is not valid.
2 = Measure has face validity; absence of evidence that measure is not
valid.
4 = Measure has one or more acceptable forms of criterion-related
validity (correlation with appropriate, validated measures or objective
criteria); OR, for objective measures of response, there are procedural
checks to confirm data validity; absence of evidence that measure is
not valid.
3. Intervention Fidelity
The ``experimental'' intervention implemented in a study should
have fidelity to the intervention proposed by the applicant.
Instruments that have been tested and shown to have acceptable
psychometric properties (e.g., interrater reliability, validity as
shown by positive association with outcomes) provide the highest level
of evidence.
0 = Absence of evidence or only narrative evidence that the applicant
or provider believes the intervention was implemented with acceptable
fidelity.
2 = There is evidence of acceptable fidelity in the form of judgment(s)
by experts, systematic collection of data (e.g., dosage, time spent in
training, adherence to guidelines or a manual), or a fidelity measure
with unspecified or unknown psychometric properties.
4 = There is evidence of acceptable fidelity from a tested fidelity
instrument shown to have reliability and validity.
4. Missing Data and Attrition
Study results can be biased by participant attrition and other
forms of missing data. Statistical methods as supported by theory and
research can be employed to control for missing data and attrition that
would bias results, but studies with no attrition needing adjustment
provide the strongest evidence that results are not biased.
0 = Missing data and attrition were taken into account inadequately, OR
there was too much missing data or attrition to control for bias.
2 = Missing data and attrition were taken into account by simple
estimates of data and observations, or by demonstrations of similarity
between remaining participants and those lost to attrition.
4 = Attrition was taken into account by more sophisticated methods that
[[Page 13138]]
model missing data, observations, or participants; OR there was no
attrition needing adjustment.
5. Potential Confounding Variables
Often variables other than the intervention may account for the
reported outcomes. The degree to which confounds are accounted for
affects the strength of causal inference.
0 = Confounding variables or factors were as likely to account for the
outcome(s) reported as were hypothesized causes.
2 = One or more potential confounding variables or factors were not
completely addressed, but the intervention appears more likely than
these confounding factors to account for the outcome(s) reported.
4 = All known potential confounding variables appear to have been
completely addressed in order to allow causal inference between
intervention and outcome(s) reported.
6. Appropriateness of Analyses
Appropriate analysis is necessary to make an inference that an
intervention caused reported outcomes.
0 = Analyses were not appropriate for inferring relationships between
intervention and outcome, OR the sample size was inadequate.
2 = Some analyses may not have been appropriate for inferring
relationships between intervention and outcome, OR the sample size may
have been inadequate.
4 = Analyses were appropriate for inferring relationships between
intervention and outcome. Sample size and power were adequate.
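To make the scale mechanics concrete, the fragment below is a minimal
illustrative sketch (hypothetical names; it appears nowhere in the
notice) of the six Strength of Evidence criteria and the 0-4 ordinal
scale described in footnote 7, under which 0, 2, and 4 carry narrative
anchors while 1 and 3 are intermediate ratings at reviewer discretion:

STRENGTH_OF_EVIDENCE_CRITERIA = [
    "Reliability",
    "Validity",
    "Intervention Fidelity",
    "Missing Data and Attrition",
    "Potential Confounding Variables",
    "Appropriateness of Analyses",
]

ANCHORED_POINTS = {0, 2, 4}         # points with narrative anchors
PERMITTED_POINTS = {0, 1, 2, 3, 4}  # 1 and 3 are unanchored intermediates

def validate_rating(criterion: str, rating: int) -> int:
    # Reject ratings that are not permitted scale points.
    if criterion not in STRENGTH_OF_EVIDENCE_CRITERIA:
        raise ValueError(f"unknown criterion: {criterion}")
    if rating not in PERMITTED_POINTS:
        raise ValueError(f"rating must be an integer from 0 to 4, got {rating}")
    return rating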
Readiness for Dissemination Dimension Ratings
1. Availability of Implementation Materials (e.g., Treatment Manuals,
Brochures, Information for Administrators, etc.)
0 = Applicant has insufficient implementation materials.
2 = Applicant has provided a limited range of implementation materials,
or a comprehensive range of materials of varying or limited quality.
4 = Applicant has provided a comprehensive range of standard
implementation materials of apparent high quality.
2. Availability of Training and Support Resources
0 = Applicant has limited or no training and support resources.
2 = Applicant provides training and support resources that are
partially adequate to support initial and ongoing implementation.
4 = Applicant provides training and support resources that are fully
adequate to support initial and ongoing implementation (tested training
curricula, mechanisms for ongoing supervision and consultation).
3. Quality Improvement (QI) Materials (e.g., Fidelity Measures, Outcome
and Performance Measures, Manuals on How To Provide QI Feedback and
Improve Practices)
0 = Applicant has limited or no materials.
2 = Applicant has materials that are partially adequate to support
initial and ongoing implementation.
4 = Applicant provides resources that are fully adequate to support
initial and ongoing implementation (tested fidelity and outcome
measures, comprehensive and user-friendly QI materials).
Scoring the Strength of Evidence and Readiness for Dissemination
Dimensions
The ratings for the decision support dimensions of Strength of
Evidence and Readiness for Dissemination are calculated by averaging
the scores that reviewers assign to individual rating criteria on a
uniform five-point scale. For these two quantitative dimensions, the
average score on each dimension (i.e., across criteria and reviewers)
as well as the average score for each rating criterion (across
reviewers) will be provided on the Web site for each outcome targeted
by the intervention.\9\
---------------------------------------------------------------------------
\9\ Note that it is unlikely that the Readiness for
Dissemination dimension will vary by targeted outcome(s), insofar as
the materials and resources are usually program specific as opposed
to outcome specific.
---------------------------------------------------------------------------
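As a hypothetical illustration of the scoring rule just described (the
data and names below are invented, not drawn from the notice), each
reviewer rates each criterion on the uniform 0-4 scale, and the
published scores are the per-criterion average across reviewers and
the dimension average across criteria and reviewers, computed
separately for each targeted outcome:

from statistics import mean

# ratings[reviewer][criterion] = rating on the 0-4 scale (invented data)
ratings = {
    "reviewer_1": {"Reliability": 4, "Validity": 2, "Intervention Fidelity": 3},
    "reviewer_2": {"Reliability": 4, "Validity": 3, "Intervention Fidelity": 2},
}

def criterion_averages(ratings):
    # Average score for each rating criterion across reviewers.
    criteria = next(iter(ratings.values())).keys()
    return {c: mean(r[c] for r in ratings.values()) for c in criteria}

def dimension_score(ratings):
    # Average across all criteria and all reviewers.
    return mean(v for r in ratings.values() for v in r.values())

print(criterion_averages(ratings))  # Reliability -> 4, Validity -> 2.5, Fidelity -> 2.5
print(dimension_score(ratings))     # -> 3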
V. Future Activities: Implementing and Sustaining a Streamlined NREPP
SAMHSA plans to initiate reviews using the new NREPP review process
and procedures in summer 2006. The precise number and characteristics
of new interventions that will be prioritized for the first series of
reviews have yet to be determined. SAMHSA anticipates that many of the
existing programs and practices currently listed on the SAMHSA Model
Programs Web site will undergo an expedited set of reviews using the
new system. Regardless, the current Model Programs Web site will remain
intact until all relevant programs have been included in a new Web
site, https://www.nationalregistry.samhsa.gov.
The identification of collaborative mechanisms for supporting the
continued development and refinement of NREPP will represent a SAMHSA
priority in 2006. SAMHSA will explore means for providing adequate
technical assistance resources to communities seeking to initiate and/
or augment evidence-based practices. In addition, appropriate technical
advisors and other scientific resources will be utilized to assure the
continued evolution of NREPP as a state-of-the-art decision support
tool.
Appendix: Analysis of Public Comments in Response to Federal Register
Notice
Background and Overview
The Substance Abuse and Mental Health Services Administration
(SAMHSA), through its Science-to-Service initiative, develops tools and
resources for providers of prevention and treatment services to
facilitate evidence-based decisionmaking and practice. An important
informational resource is the National Registry of Evidence-based
Programs and Practices (NREPP). NREPP is a voluntary rating and
classification system designed to provide the public with reliable
information on the scientific basis and practicality of interventions
designed to prevent and/or treat mental and addictive disorders. NREPP
originated in SAMHSA's Center for Substance Abuse Prevention (CSAP) in
1997 as a way to help professionals in the field become better
consumers of prevention programs. The program was expanded in 2004 to
include substance abuse treatment interventions within SAMHSA's Center
for Substance Abuse Treatment (CSAT) and mental health promotion and
treatment interventions within the Center for Mental Health Services
(CMHS).
During the past 2 years, SAMHSA reviewed existing evidence rating
systems and developed and pilot-tested a revised approach to the rating
of specific outcomes achieved by programs and practices. This
development effort led SAMHSA to propose 16 evidence rating criteria as
well as a set of proposed utility descriptors to describe the potential
of a given intervention to be ``transported'' to real-world settings
and populations.
Considering the prominence of NREPP within its Science-to-Service
initiative and the potential impact of NREPP on the research and
provider communities, SAMHSA announced a formal request for public
comments in the Federal Register on August 26, 2005 (70 FR 165, 50381-
50390) with a 60-day
[[Page 13139]]
public comment period ending October 26, 2005. The notice outlined in
some detail the proposed review system, including scientific criteria
for evidence reviews, the screening and triage of NREPP applications,
and the identification by SAMHSA of priority review areas. The notice
invited general as well as specific comments and included 11 questions
soliciting targeted feedback. By request of the SAMHSA Project Officer,
MANILA Consulting Group coded and analyzed the responses received in
response to the 11 questions posted in the Federal Register notice. The
results of the analyses are presented below.
Method
A total of 135 respondents submitted comments via e-mail, fax, and
postal mail during the comment period. Of these 135 respondents, 109
(81%) answered at least some of the 11 questions posted in the Federal
Register notice.
Respondents
The 135 respondents included 53 providers, 36 researchers, 4
consumers, 21 respondents with multiple roles, and 21 with unknown
roles vis-à-vis NREPP. Respondents were labeled as having one
or more of the following domains of interest: substance abuse
prevention (N=68), substance abuse treatment (N=48), mental health
promotion (N=22), and mental health treatment (N=20). The domain of
interest was unknown for 33 respondents. The respondents represented 16
national organizations, 10 state organizations, and 14 local
organizations; 90 were private citizens; and 5 were individuals with
unknown affiliations. Fifty-one respondents (38%) were labeled
``noteworthy'' at the request of the SAMHSA Project Officer. Noteworthy
respondents included those representing national or state governments
or national organizations, and nationally known experts in substance
abuse or mental health research or policy.
Twenty-six responses were judged by the four MANILA coders and the
SAMHSA Project Officer to contain no information relevant to the 11
questions in the notice. These responses, labeled ``unanalyzable'' for
the purposes of this report, could be categorized as follows:
Mentioned topics related to SAMHSA but made no point
relevant to the questions posted in the Federal Register notice (N=10);
Mentioned only topics unrelated to SAMHSA or incoherent
text (N=7);
Asked general questions about NREPP and the Federal
Register notice (N=4);
Wanted to submit a program for NREPP review (N=4); and
Responded to another Federal Register notice (N=1).
Procedure
Before coding began, responses were read to identify recurrent
themes to include in the codebook (presented in Subpart A of this
Appendix). Using this codebook, each submission was then assigned codes
identifying respondent characteristics (name, location, domain of
interest, affiliation/type of organization, functional role, and level
of response) and the content or topical themes contained in the
response. One pair of coders coded the respondent data, while another
pair coded the content. Content coding was conducted by two doctoral-
level psychologists with extensive training and experience in social
science research and methodology.
Each response could be assigned multiple codes for content. Coders
compared their initial code assignments for all responses, discussed
reasons for their code assignments when there were discrepancies, and
then decided upon final code assignments. In many cases, coders
initially assigned different codes but upon discussion agreed that both
coders' assignments were applicable. Coding assignments were ultimately
unanimous for all text in all responses.
Results
The following discussion of key themes in the public comments is
presented in order of the 11 questions from the Federal Register
notice. Tables containing detailed frequencies of themes in the
comments and other descriptive information are provided in Subpart B.
Comments Addressing Question 1
Question 1. ``SAMHSA is seeking to establish an objective,
transparent, efficient, and scientifically defensible process for
identifying effective, evidence-based interventions to prevent and/
or treat mental and substance use disorders. Is the proposed NREPP
system--including the suggested provisions for screening and triage
of applications, as well as potential appeals by applicants--likely
to accomplish these goals?''
Respondents submitted a wide range of comments addressing Question
1. Highlights of these comments are presented below, organized by topic
as follows:
1. Individual-Level Criteria
2. Population-, Policy-, and System-Level Criteria
3. Utility Descriptors
4. Exclusion From NREPP Due to Lack of Funding
5. Potential Impact on Minority Populations
6. Potential Impact on Innovation
7. Provider Factors
8. Other Agencies' Standards and Resources
9. Reliance on Intervention Developers To Submit Applications
10. Generalizability
11. Other Themes and Notable Comments
1. Individual-Level Criteria
Number of respondents: 24 (22%).
Recommendations made by respondents included adding cost
feasibility as a 13th criterion (one respondent) and scoring all
criteria equally (two respondents). Comments regarding specific
criteria are presented in Subpart C.
2. Population-, Policy-, and System-Level Criteria
Number of respondents: 29 (27%).
Comments on specific criteria are presented in Subpart D.
Highlights of comments on more general issues are presented below.
Differences in Evaluation Approaches for Individual-Level and
Population-, Policy-, and System-Level Outcomes
Two respondents noted the proposed NREPP approach does not
acknowledge key differences between evaluating individual-level
outcomes and population-, policy-, and system-level outcomes. One of
these respondents argued that NREPP is based on theories of change that
operate only at the individual level of analysis, with the assumption
that discrete causes lead to discrete effects, and therefore ``many of
the NREPP criteria appear to be insufficient or inappropriate for
determining the validity of community-based interventions and their
context-dependent effects.''
Unclear What Interventions Are of Interest to NREPP
One organization, Community Anti-Drug Coalitions of America,
recommended that SAMHSA present a clear, operational definition of the
types of interventions it wants to include in NREPP.
Match Scale to Individual-Level Outcomes
Twelve respondents, including the Society for Prevention Research
and a group of researchers from a major university, recommended that
the same scale be used for outcomes at the
[[Page 13140]]
individual level as for the population, policy, and system levels.
Add Attrition Criterion
The same group of university researchers suggested adding attrition
as a 13th criterion to the rating criteria for studies of population
outcomes. They noted, ``Just as attention to attrition of individuals
from conditions is essential in individual-level studies, attention to
attrition of groups or communities from studies is essential in group-
level studies. This is necessary in order to assess attrition as a
possible threat to the validity of the claim that the population-,
policy-, or system-level intervention produced observed outcomes.''
Include Only Interventions That Change Behavior
It was recommended that NREPP only include interventions proven to
change behavior. A group of university researchers noted:
As currently described, these outcomes refer to implementation
of changes in policy or community service systems, not to changes in
behavioral outcomes themselves. In fact, as currently described, the
policy or system change would not be required to show any effects on
behavior in order to be included in NREPP. This is a serious
mistake. The NREPP system should be reserved for policies, programs,
and system-level changes that have produced changes in actual drug
use or mental health outcomes.
3. Utility Descriptors
Number of respondents: 15 (14%).
Only one respondent, the Committee for Children, recommended
specific changes to the utility descriptors. Their comments are
presented in Subpart E of this Appendix.
Seven other respondents recommended using utility descriptors in
some way to score programs. The American Psychological Association
(APA) Committee on Evidence-Based Practice recommended more emphasis on
the utility descriptors ``as these are key outcomes for implementation
and they are not adequately addressed in the description of NREPP
provided to date. This underscores earlier concerns noted about the
transition from effectiveness to efficacy.''
4. Exclusion From NREPP Due To Lack of Funding
Number of respondents: 28 (26%).
The possibility that NREPP will exclude programs due to lack of
funding was a concern voiced by several organizations, including the
National Association for Children of Alcoholics, the APA Committee on
Evidence-Based Practice, the National Association of State Alcohol and
Drug Abuse Directors, Community Anti-Drug Coalitions of America, and
the California Association of Alcohol and Drug Program Executives
(CAADPE). The
National Association for Children of Alcoholics provided the following
comment:
NREPP should establish differing criteria for projects that
collected data with [National Institutes of Health] grant funds and
projects that collected data with no or very small amounts of funds.
It has been intrinsically unfair that only grants have been able to
establish ``evidence'' while many programs appear very effective--
often more effective in some circumstances than NREPP approved
programs--but have not had the Federal support or other major grant
support to evaluate them. The SAMHSA grant programs continue to
reinforce the designation of NREPP programs in order to qualify for
funding, and the states tend to strengthen this `stipulation' to
local programs, who then drop good (non-NREPP) work they have been
doing or purchase and manipulate NREPP programs that make the grant
possible. This is not always in the best interest of the client
population to be served.
Another key concern was that funding for replication research is
rarely available. Several respondents suggested that SAMHSA consider
funding evaluation research, and many argued that the lack of funding
resources could negatively impact minority populations or inhibit
treatment innovation. The latter two themes were frequent enough to be
coded and analyzed separately. Results are summarized in the following
sections.
5. Potential Impact on Minority Populations
Number of respondents: 13 (12%).
Thirteen respondents noted that the proposed NREPP approach could
negatively impact specific populations, including minority client
populations. The Federation of Families for Children's Mental Health
suggested that NREPP would effectively promote certain practices
``simply because the resources for promotion, training, evaluation are
readily accessible * * * thus widening the expanse and disparities that
currently exist.''
Another frequently noted concern was that evidence-based practices
are currently too narrowly defined, and thus as more funding sources
begin to require evidence-based practices as a prerequisite for
funding, some ethnic or racial minority organizations may be excluded
from funding. One respondent also pointed to potential validity
concerns, noting that ``Very little clinical trial evidence is
available for how to treat substance use disorders in specific
populations who may constitute most or all of those seen in particular
agencies: HIV positive patients, native Americans, adolescents,
Hispanics, or African Americans. Although it is unreasonable to expect
all EBTs to be tested with all populations, the external validity of
existing studies remains a serious concern.'' For these reasons, many
respondents surmised that the widespread application of interventions
developed in research contexts, which tend to limit the inclusion of
minority and/or underserved populations, could ultimately result in
decreased cultural competence among service providers.
6. Potential Impact on Innovation
Number of respondents: 21 (19%).
Twenty-one respondents cited concerns that the proposed NREPP
approach could hamper innovation. CAADPE noted that its main concerns
were ``the focus on the premise that treatment will improve if confined
to interventions for which a certain type of research evidence is
available'' and ``the issue of `branding,' which could lead to some of
our most innovative and effective small scale providers eliminated from
funding considerations.''
One respondent suggested that lists of evidence-based treatments
could ``ossify research and practice, and thus become self-fulfilling
prophecies * * * stifling innovation and the validation of existing
alternatives.'' Several respondents observed that the potential for
stifling innovation is even greater given that SAMHSA's NREPP is not
the only list of evidence-based practices used by funders.
The APA Practice Organization recommended that NREPP focus on
``developing and promoting a range of more accessible and less
stigmatized services that are responsive to consumers' needs and
preference, and offer more extensive care opportunities.''
7. Provider Factors
Number of respondents: 22 (20%).
A number of respondents noted the proposed NREPP approach does not
acknowledge provider effects on treatment outcomes. The APA Committee
on Evidence-Based Practice wrote, ``Relationship factors in a
therapeutic process may be more important than specific interventions
and may in fact be the largest determinant in psychotherapy outcome
(see Lambert & Barley, 2002). How will NREPP address this concern and
make this apparent to users?''
Another respondent cited the Institute of Medicine's definition of
evidence-
[[Page 13141]]
based practice as ``the integration of the best research evidence with
clinical expertise and client values,'' noting that ``The narrowed
interpretation of evidence-based practice by SAMHSA focuses almost
solely on the research evidence to the exclusion of clinical expertise
and patient values.''
Several respondents suggested that NREPP could place too much
emphasis on highly prescriptive, manualized treatments. Counselors can
become bored when they are not able to ``tinker'' with or adapt
treatments. In addition, making minor modifications may actually make
treatments more effective with different population groups.
8. Other Agencies' Standards and Resources
Number of respondents: 27 (25%).
Nineteen respondents suggested that, in developing NREPP, SAMHSA
should consult other agencies' standards and resources related to
evidence-based practices--for example, the standards published by the
APA, American Society of Addiction Medicine, and the Society for
Prevention Research. One respondent suggested consulting with National
Institutes of Health scientists about approaches for aggregating
evidence; another recommended including in NREPP model programs
identified by other agencies. One respondent submitted a bibliography
of references for assessing the rigor of qualitative research.
One respondent suggested that SAMHSA did not provide other
institutions the opportunity to provide input on the development of
NREPP prior to the request for public comments.
9. Reliance on Intervention Developers To Submit Applications
Number of respondents: 4 (4%).
Four respondents cited problems with NREPP's reliance on
intervention developers to submit applications, and suggested that
literature reviews instead be used to identify programs eligible for
NREPP. One private citizen wrote, ``If no one applies on behalf of a
treatment method, is that one ignored? Why not simply start with the
literature and identify treatment methods with adequate evidence of
efficacy?''
Another respondent observed that requiring an application creates a
bias toward programs with advocates ``either ideologically or because
of a vested interest in sales, visibility, and profits. An alternative
is to select interventions for NREPP consideration solely by monitoring
the peer-reviewed published literature, and including them regardless
of whether or not the scientist responds or furthers the registration
process.''
The Society for Prevention Research suggested that SAMHSA convene a
panel to periodically review available interventions that might not be
submitted to NREPP because they ``lack a champion.''
10. Generalizability
Number of respondents: 48 (44%).
Many respondents discussed the issue of generalizability of
evidence, especially the concern that interventions proven to work in
clinical trials do not always work in real-world settings. Several
respondents pointed out the potential conflict between implementing an
intervention with fidelity and having to adapt it for the setting.
The APA Evidence-Based Practice Committee suggested that the
proposed NREPP approach does not adequately distinguish between
``efficacy'' and ``effectiveness,'' and strongly recommended that
SAMHSA look for ways to bridge the two.
The Associations of Addiction Services recommended paying more
attention to how and where treatments are replicated: ``The highest
level of evidence should be successful replication of the approach in
multiple community treatment settings. Experience with [the National
Institute on Drug Abuse] Clinical Trials Network suggests that an
approach that shows meaningful outcome improvements in the `noisy'
setting of a publicly funded community treatment program is truly an
approach worth promoting.''
A few respondents suggested that NREPP score interventions
according to their readiness and amenability to application in real-
world settings.
11. Other Themes and Notable Comments
Distinguishing Treatment and Prevention
Number of respondents: 7 (6%).
A few respondents called for evaluating treatment and prevention
approaches differently. One respondent noted that some criteria appear
to be more appropriate for treatment modalities than for preventive
interventions, and recommended that SAMHSA ``confer with research
experts in those respective fields and separate out those criteria that
are more relevant to only treatment or prevention.''
Another respondent suggested that the criteria are more appropriate
for prevention than treatment:
The criteria and selection for the peer review panels should be
separate for prevention and treatment programs. The criteria and
models are different and the panels should not be an across the
board effort, but rather representative of prevention and treatment
experts specific to the program being evaluated. The plan is based,
as the notice states, on 1,100 prevention programs with little
experience with treatment programs/practices.
Synthesizing Evidence
Three respondents suggested using meta-analysis to synthesize
evidence for outcomes. One recommended SAMHSA consult with National
Institutes of Health experts in this area.
Replications
The Teaching-Family Association recommended considering
replications when evaluating evidence. The Society for Prevention
Research wrote that it is unclear how replications would be used in the
proposed NREPP, and suggested averaging ratings across studies.
Add Criteria
The National Student Assistance Association Scientific Advisory
Board and one other respondent suggested adding a cultural competence
criterion. The Society for Prevention Research recommended adding a
criterion to assess the clarity of causal inference.
Range of Reviewer Perspectives
The APA Practice Organization noted the importance of having a
``large and broad'' reviewer pool: ``A small group of reviewers
representing a limited range of perspectives and constituencies would
have an undue impact on the entire system. We are pleased that a
nominations process is envisioned.''
Cost Effectiveness
One respondent called for incorporating program cost effectiveness
into NREPP. In choosing what program to implement, end users often have
to decide between diverse possibilities, such as attempting to pass a
tax increase on beer or implementing additional classroom prevention
curricula, each with competing claims about effectiveness. A cost-
effectiveness framework may be the only way to compare these choices.
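The following toy comparison (all figures invented for illustration,
not drawn from the notice) shows the kind of ranking such a framework
would permit, using cost per unit of outcome:

# Invented figures for illustration only.
options = {
    "beer tax increase campaign": {"cost": 250_000, "cases_averted": 1_000},
    "classroom prevention curriculum": {"cost": 90_000, "cases_averted": 300},
}

for name, o in sorted(options.items(),
                      key=lambda kv: kv[1]["cost"] / kv[1]["cases_averted"]):
    print(f"{name}: ${o['cost'] / o['cases_averted']:.0f} per case averted")
# -> beer tax increase campaign: $250 per case averted
# -> classroom prevention curriculum: $300 per case averted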
Comments Addressing Question 2
Question 2. ``SAMHSA's NREPP priorities are reflected in the
agency's matrix of program priority areas. How might SAMHSA engage
interested stakeholders on a periodic basis in helping the agency
determine intervention priority areas for review by NREPP?''
Number of respondents: 16 (15%).
[[Page 13142]]
Respondents recommended a number of approaches to engage
stakeholders:
Conduct meetings, conferences, and seminars.
Send and solicit information via e-mail or a Web site.
Send informational notices via newsletters.
Survey stakeholders.
Work with the Addiction Technology Transfer Centers
(ATTCs) to administer surveys.
Consult the National Prevention Network and the Society
for Prevention Research, which ``have forged a close working
relationship to foster the integration of science and practice and * *
* would be very helpful in answering this question.''
Comments Addressing Question 3
Question 3. ``There has been considerable discussion in the
scientific literature on how to use statistical significance and
various measures of effect size in assessing