DEPARTMENT OF EDUCATION
34 CFR Parts 612 and 686
[Docket ID ED–2014–OPE–0057]
RIN 1840–AD07
Teacher Preparation Issues
AGENCY: Office of Postsecondary Education, Department of Education.
ACTION: Final regulations.
SUMMARY: The Secretary establishes new
regulations to implement requirements
for the teacher preparation program
accountability system under title II of
the Higher Education Act of 1965, as
amended (HEA), that will result in the
collection and dissemination of more
meaningful data on teacher preparation
program quality (title II reporting
system). The Secretary also amends the
regulations governing the Teacher
Education Assistance for College and
Higher Education (TEACH) Grant
program under title IV of the HEA to
condition TEACH Grant program
funding on teacher preparation program
quality and to update, clarify, and
improve the current regulations and
align them with title II reporting system
data.
DATES: The regulations in 34 CFR part
612 are effective November 30, 2016.
The amendments to part 686 are
effective on July 1, 2017, except for
amendatory instructions 4.A., 4.B.,
4.C.iv., 4.C.x. and 4.C.xi., amending 34
CFR 686.2(d) and (e), and amendatory
instruction 6, amending 34 CFR 686.11,
which are effective on July 1, 2021.
FOR FURTHER INFORMATION CONTACT:
Sophia McArdle, Ph.D., U.S.
Department of Education, 400 Maryland
Avenue SW., Room 6W256,
Washington, DC 20202. Telephone:
(202) 453–6318 or by email:
sophia.mcardle@ed.gov.
If you use a telecommunications
device for the deaf (TDD) or a text
telephone (TTY), call the Federal Relay
Service (FRS), toll free, at 1–800–877–
8339.
SUPPLEMENTARY INFORMATION:
Executive Summary
Purpose of This Regulatory Action
Section 205 of the HEA requires
States and institutions of higher
education (IHEs) annually to report on
various characteristics of their teacher
preparation programs, including an
assessment of program performance.
These reporting requirements exist in
part to ensure that members of the
public, prospective teachers and
employers (districts and schools), and
the States, IHEs, and programs
themselves have accurate information
on the quality of these teacher
preparation programs. These
requirements also provide an impetus to
States and IHEs to make improvements
where they are needed. Thousands of
novice teachers enter the profession
every year,1 and their students deserve
to have well-prepared teachers.
Research from States such as
Tennessee, North Carolina, and
Washington indicates statistically
significant differences across teacher
preparation programs in the student
learning outcomes of their graduates.2
Statutory reporting requirements on
teacher preparation program quality for
States and IHEs are broad. The
Department’s existing title II reporting
system framework has not, however,
ensured sufficient quality feedback to
various stakeholders on program
performance. A U.S. Government
Accountability Office (GAO) report
found that some States are not assessing
whether teacher preparation programs
are low-performing, as required by law,
and so prospective teachers may have
difficulty identifying low-performing
teacher preparation programs, possibly
resulting in teachers who are not fully
prepared to educate children.3 In
addition, struggling teacher preparation
programs may not receive the technical
assistance they need and, like the
teaching candidates themselves, school
districts, and other stakeholders, will
not be able to make informed decisions.
Moreover, section 205 of the HEA
requires States to report on the criteria
they use to assess whether teacher
preparation programs are lowperforming or at-risk of being lowperforming, but it is difficult to identify
programs in need of remediation or
closure because few of the reporting
requirements ask for information
indicative of program quality. The GAO
report noted that half the States said
1 U.S. Department of Education, Digest of
Education Statistics (2013). Public and private
elementary and secondary teachers, enrollment,
pupil/teacher ratios, and new teacher hires:
Selected years, fall 1955 through fall 2023 [Data
File]. Retrieved from: https://nces.ed.gov/programs/
digest/d13/tables/dt13_208.20.asp.
2 See Report Card on the Effectiveness of Teacher
Training Programs, Tennessee 2014
Report Card. (n.d.). Retrieved from: www.tn.gov/
thec/article/report-card; Goldhaber, D., & Liddle, S.
(2013). The Gateway to the Profession: Assessing
Teacher Preparation Programs Based on Student
Achievement. Economics of Education Review, 34,
29–44.
3 See U.S. Government Accountability Office
(GAO) (2015). Teacher Preparation Programs:
Education Should Ensure States Identify Low-Performing Programs and Improve Information-Sharing. GAO–15–598. Washington, DC. Retrieved
from: https://gao.gov/products/GAO-15-598.
(Hereafter referred to as ‘‘GAO.’’)
current title II reporting system data
were ‘‘slightly useful,’’ ‘‘neither useful
nor not useful,’’ or ‘‘not useful’’; over
half the teacher preparation programs
surveyed said the data were not useful
in assessing their programs; and none of
the surveyed school district staff said
they used the data.4 The Secretary is
committed to ensuring that the
measures by which States judge the
quality of teacher preparation programs
reflect the true quality of the programs
and provide information that facilitates
program improvement and, by
extension, improvement in student
achievement.
The final regulations address
shortcomings in the current system by
defining the indicators of quality that a
State must use to assess the performance
of its teacher preparation programs,
including more meaningful indicators of
program inputs and program outcomes,
such as the ability of the program’s
graduates to produce gains in student
learning5 (understanding that not all
students will learn at the same rate).
The final regulations build on current
State data systems and linkages and
create a much-needed feedback loop to
facilitate program improvement and
provide valuable information to
prospective teachers, potential
employers, and the general public.
The final regulations also link
assessments of program performance
under HEA title II to eligibility for the
Federal TEACH Grant program. The
TEACH Grant program, authorized by
section 420M of the HEA, provides
grants to eligible IHEs, which, in turn,
use the funds to provide grants of up to
$4,000 annually to eligible teacher
preparation candidates who agree to
serve as full-time teachers in high-need
fields at low-income schools for not less
than four academic years within eight
years after completing their courses of
study. If a TEACH Grant recipient fails
to complete his or her service
obligation, the grant is converted into a
Federal Direct Unsubsidized Stafford
Loan that must be repaid with interest.
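To make these mechanics concrete, the following is a minimal sketch in Python (hypothetical and simplified; the function name is invented, and actual conversion determinations involve suspensions, discharges, and other conditions not modeled here):

    from datetime import date, timedelta

    def grant_converts_to_loan(completion_date: date,
                               qualifying_years_taught: int,
                               as_of: date) -> bool:
        """Return True if a TEACH Grant would convert to a loan under
        this simplified sketch: four academic years of qualifying
        teaching within eight years of completing the course of study,
        else the grant converts."""
        if qualifying_years_taught >= 4:
            return False  # service obligation satisfied
        # Approximate the eight-year window (leap days ignored).
        window_end = completion_date + timedelta(days=8 * 365)
        # Window closed without four qualifying years: the grant
        # converts to a Federal Direct Unsubsidized Loan that must
        # be repaid with interest.
        return as_of > window_end

For example, under this sketch a recipient who completed study on June 1, 2016, and has taught only three qualifying years as of mid-2025 would see the grant convert, since the eight-year window has closed.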
Pursuant to section 420L(1)(A) of the
HEA, one of the eligibility requirements
for an institution to participate in the
TEACH Grant program is that it must
provide high-quality teacher
preparation. However, of the 38
programs identified by States as ‘‘low-performing’’ or ‘‘at-risk,’’ 22 programs
were offered by IHEs participating in the
TEACH Grant program. The final
4 GAO at 26.
5 Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417–458. https://doi.org/10.1111/j.1468-0262.2005.00584.x.
regulations limit TEACH Grant
eligibility to only those programs that
States have identified as ‘‘effective’’ or
higher in their assessments of program
performance under HEA title II.
Summary of the Major Provisions of
This Regulatory Action
The final regulations—
• Establish necessary definitions and
requirements for IHEs and States related
to the quality of teacher preparation
programs, and require States to develop
measures for assessing teacher
preparation performance.
• Establish indicators that States must
use to report on teacher preparation
program performance, to help ensure
that the quality of teacher preparation
programs is judged on reliable and valid
indicators of program performance.
• Establish the areas States must
consider in identifying teacher
preparation programs that are low-performing and at-risk of being low-performing, the actions States must take
with respect to those programs, and the
consequences for a low-performing
program that loses State approval or
financial support. The final regulations
also establish the conditions under
which a program that loses State
approval or financial support may
regain its eligibility for title IV, HEA
funding.
• Establish a link between the State’s
classification of a teacher preparation
program’s performance under the title II
reporting system and that program’s
identification as ‘‘high-quality’’ for
TEACH Grant eligibility purposes.
• Establish provisions that allow
TEACH Grant recipients to satisfy the
requirements of their agreement to serve
by teaching in a high-need field that was
designated as high-need at the time the
grant was received.
• Establish conditions that allow
TEACH Grant recipients to have their
service obligations discharged if they
are totally and permanently disabled.
The final regulations also establish
conditions under which a student who
had a prior service obligation
discharged due to total and permanent
disability may receive a new TEACH
Grant.
Costs and Benefits
The benefits, costs, and transfers
related to the regulations are discussed
in more detail in the Regulatory Impact
Analysis (RIA) section of this document.
Significant benefits of the final
regulations include improvements to the
HEA title II accountability system that
will enable prospective teachers to make
more informed choices about their
enrollment in a teacher preparation
program, and will enable employers of
prospective teachers to make more
informed hiring decisions. Further, the
final regulations will create incentives
for States and IHEs to monitor and
continuously improve the quality of
their teacher preparation programs.
Most importantly, the final regulations
will help support elementary and
secondary school students because the
changes will lead to better prepared,
higher quality teachers in classrooms,
including for students in high-need
schools and communities, who are
disproportionately taught by less
experienced teachers.
The net budget impact of the final
regulations is approximately $0.49
million in reduced costs over the
TEACH Grant cohorts from 2016 to
2026. We estimate that the total cost
annualized over 10 years of the final
regulations is between $27.5 million
and $27.7 million (see the Accounting
Statement section of this document).
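For readers unfamiliar with annualized cost figures of this kind, the following Python sketch shows the standard present-value computation behind them (illustrative only; the 7 percent discount rate is an assumed convention, and the cost stream below is invented, not drawn from the RIA):

    def annualized_cost(costs, rate=0.07):
        """Spread a stream of yearly costs into the constant annual
        amount having the same present value over the same horizon."""
        pv = sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(costs))
        n = len(costs)
        return pv * rate / (1 - (1 + rate) ** -n)

    # Invented 10-year cost stream in millions of dollars, for
    # illustration only.
    print(round(annualized_cost([40, 35, 30, 28, 26, 25, 24, 24, 23, 23]), 1))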
On December 3, 2014, the Secretary
published a notice of proposed
rulemaking (NPRM) for these parts in
the Federal Register (79 FR 71820). The
final regulations contain changes from
the NPRM, which are fully explained in
the Analysis of Comments and Changes
section of this document. Some
commenters requested clarification
regarding how the proposed State
reporting requirements would affect
teacher preparation programs provided
through distance education and TEACH
Grant eligibility for students enrolled in
teacher preparation programs provided
through distance education. In response
to these comments, on April 1, 2016, the
Department published a supplemental
notice of proposed rulemaking
(Supplemental NPRM) in the Federal
Register (81 FR 18808) that reopened
the public comments period for 30 days
solely to seek comment on those
specific issues. The Department
specifically requested public comment
on issues related to reporting
by States on teacher preparation
programs provided through distance
education, and TEACH Grant eligibility
requirements for teacher preparation
programs provided through distance
education. The comment period for the
Supplemental NPRM closed on May 2,
2016.
Public Comment: In response to our
invitation in the December 3, 2014,
NPRM, approximately 4,800 parties
submitted comments on the proposed
regulations. In response to our
invitation in the Supplemental NPRM,
the Department received 58 comments.
We discuss substantive issues under
the sections of the proposed regulations
to which they pertain. Generally, we do
not address technical or other minor
changes.
Analysis of Comments and Changes:
An analysis of the comments and of any
changes in the regulations since
publication of the NPRM and the
Supplemental NPRM follows.
Part 612—Title II Reporting System
Subpart A—Scope, Purpose, and
Definitions
Section 612.1 Scope and Purpose
Statutory Authority
Comments: A number of commenters
raised concerns about whether the
Department has authority under the
HEA to issue these regulations. In this
regard, several commenters asserted that
the Department does not have the
statutory authority to require States to
include student learning outcomes,
employment outcomes, and survey
outcomes among the indicators of
academic content knowledge and
teaching skills that would be included
in the State’s report card under § 612.5.
Commenters also claimed that the HEA
does not authorize the Department to
require States, in identifying low-performing or at-risk teacher
preparation programs, to use those
indicators of academic content
knowledge and teaching skills as would
be required under § 612.6. These
commenters argued that section 207 of
the HEA provides that levels of
performance shall be determined solely
by the State, and that the Department
may not provide itself authority to
mandate these requirements through
regulations when the HEA does not do
so.
Commenters argued that only the
State may determine whether to include
student academic achievement data
(and by inference our other proposed
indicators of academic content
knowledge and teaching skills) in their
assessments of teacher preparation
program performance. One commenter
contended that the Department’s
attempt to ‘‘shoehorn’’ student
achievement data into the academic
content knowledge and teaching skills
of students enrolled in teacher
preparation programs (section
205(b)(1)(F)) would render meaningless
the language of section 207(a) that gives
the State the authority to establish levels
of performance, and what those levels
contain. These commenters argued that,
as a result, the HEA prohibits the
Department from requiring States to use
any particular indicators. Other
commenters argued that such State
authority also flows from section
205(b)(1)(F) of the HEA, which provides
that, in the State Report Card (SRC), the
State must include a description of the
method of assessing teacher preparation
program performance. This includes
indicators of the academic content
knowledge and teaching skills of the
students enrolled in such programs.
Commenters also stated that the
Department does not have the authority
to require that a State’s criteria for
assessing the performance of any
teacher preparation program include the
indicators of academic content
knowledge and teaching skills,
including, ‘‘in significant part,’’ student
learning outcomes and employment
outcomes for high-need schools. See
proposed §§ 612.6(a)(1) and 612.4(b)(1).
Similar concerns were expressed with
respect to proposed § 612.4(b)(2), which
provided that a State could determine
that a teacher preparation program was
effective (or higher) only if the program
was found to have ‘‘satisfactory or
higher’’ student learning outcomes.
Discussion: Before we respond to the
comments about specific regulations
and statutory provisions, we think it
would be helpful to outline the statutory
framework under which we are issuing
these regulations. Section 205(a) of the
HEA requires that each IHE that
provides a teacher preparation program
leading to State certification or licensure
and that enrolls students who receive
HEA student financial assistance report
on a statutorily enumerated series of
data elements for the programs it
provides. Section 205(b) of the HEA
requires each State that receives funds
under the HEA to provide to the
Secretary and make widely available to
the public information on, among other
things, the quality of traditional and
alternative route teacher preparation
programs that includes not less than the
statutorily enumerated series of data
elements. The State must do so in a
uniform and comprehensible manner,
conforming to definitions and methods
established by the Secretary. Section
205(c) of the HEA directs the Secretary
to prescribe regulations to ensure the
validity, reliability, accuracy, and
integrity of the data submitted. Section
206(b) requires that IHEs provide
assurance to the Secretary that their
teacher training programs respond to the
needs of LEAs, are closely linked with
the instructional decisions novice
teachers confront in the classroom, and
prepare candidates to work with diverse
populations and in urban and rural
settings, as applicable. Section 207(a) of
the HEA provides that in order to
receive funds under the HEA, a State
must conduct an assessment to identify
low-performing teacher preparation
programs in the State, and help those
programs through provision of technical
assistance. Section 207(a) further
provides that the State’s report identify
programs that the State determines to be
low-performing or at risk of being low-performing, and that levels of
performance are to be determined solely
by the State.
The proposed regulations, like the
final regulations, reflect the
fundamental principle and the statutory
requirement that the assessment of
teacher preparation program
performance must be conducted by the
State, with criteria the State establishes
and levels of differentiated performance
that are determined by the State. Section
205(b)(1)(F) of the HEA provides that a
State must include in its report card a
description of its criteria for assessing
the performance of teacher preparation
programs within IHEs in the State and
that those criteria must include
indicators of the academic content
knowledge and teaching skills of
students enrolled in such programs.
Significantly, section 205(b)(1) further
provides that the State’s report card
must conform with definitions and
methods established by the Secretary,
and section 205(c) authorizes the
Secretary to prescribe regulations to
ensure the reliability, validity, integrity,
and accuracy of the data submitted in
the report cards.
Consistent with those statutory
provisions, § 612.5 establishes the
indicators States must use to comply
with the reporting requirement in
section 205(b)(1)(F), namely by having
States include in the report card their
criteria for program assessment and the
indicators of academic content
knowledge and teaching skills that they
must include in those criteria. While the
term ‘‘teaching skills’’ is defined in
section 200(23) of the HEA, the
definition is complex and the statute
does not indicate what are appropriate
indicators of academic content
knowledge and teaching skills of those
who complete teacher preparation
programs. Thus, in § 612.5, we establish
reasonable definitions of these basic, but
ambiguous statutory phrases in an
admittedly complex area—how States
may reasonably assess the performance
of their teacher preparation programs—
so that the conclusions States reach
about the performance of individual
programs are valid and reliable in
compliance with the statute. We discuss
the reasonableness of the four general
indicators of academic content
knowledge and teaching skills that the
Secretary has established in § 612.5 later
in this preamble under the heading
What indicators must a State use to
report on teacher preparation program
performance for purposes of the State
report card?. Ultimately though, section
205(b) clearly permits the Secretary to
establish definitions for the types of
information that must be included in
the State report cards, and, in doing so,
complements the Secretary’s general
authority to define statutory phrases
that are ambiguous or require
clarification.
The provisions of § 612.5 are also
wholly consistent with section 207(a) of
the HEA. Section 207(a) provides that
States determine the levels of program
performance in their assessments of
program performance and discusses the
criteria a State ‘‘may’’ include in those
levels of performance. However, section
207(a) does not negate the basic
requirement in section 205(b) that States
include indicators of academic content
knowledge and teaching skills within
their program assessment criteria or the
authority of the Secretary to establish
definitions for report card elements.
Moreover, the regulations do not limit a
State’s authority to establish, use, and
report other criteria that the State
determines are appropriate for
generating a valid and reliable
assessment of teacher preparation
program performance. Section 612.5(b)
of the regulations expressly permits
States to supplement the required
indicators with other indicators of a
teacher’s effect on student performance,
including other indicators of academic
content knowledge and teaching
skills, provided that the State uses the
same indicators for all teacher
preparation programs in the State. In
addition, working with stakeholders,
States are free to determine how to
apply these various criteria and
indicators in order to determine, assess,
and report whether a preparation
program is low-performing or at-risk of
being low-performing.
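To illustrate the State discretion just described, a State-designed aggregation might resemble the following Python sketch (purely hypothetical; the weights, score scale, and thresholds are invented, and nothing in § 612.5 or any other provision prescribes a particular formula):

    # Hypothetical State-chosen weights over the Sec. 612.5 indicator
    # categories; a State may also add indicators of its own choosing.
    WEIGHTS = {
        "student_learning_outcomes": 0.40,
        "employment_outcomes": 0.30,
        "survey_outcomes": 0.15,
        "program_characteristics": 0.15,
    }

    def performance_level(scores):
        """Map State-normalized 0-100 indicator scores to a reporting
        level; the thresholds here are illustrative, not regulatory."""
        composite = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
        if composite < 50:
            return "low-performing"
        if composite < 65:
            return "at-risk of being low-performing"
        return "effective or higher"

The design choice of weights and cut points, and whether to use a weighted composite at all, remains entirely with the State and its stakeholders.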
We appreciate commenters’ concerns
regarding the provisions in
§§ 612.4(b)(1) and (b)(2) and 612.6(b)(1)
regarding weighting and consideration
of certain indicators. Based on
consideration of the public comments
and the potential complexity of these
requirements, we have removed these
provisions from the final regulations.
While we have taken this action, we
continue to believe strongly that
providing significant weight to these
indicators when determining a teacher
preparation program’s level of
performance is very important. The
ability of novice teachers to promote
positive student academic growth
should be central to the missions of all
teacher preparation programs, and
having those programs focus on
producing well-prepared novice
teachers who work and stay in high-need schools is critical to meeting the
Nation’s needs. Therefore, as they
develop their measures and weights for
assessing and reporting the performance
of each teacher preparation program in
their SRCs, we strongly encourage
States, in consultation with their
stakeholders, to give significant weight
to these indicators.
Changes: We have revised
§§ 612.4(b)(1) and 612.6(a)(1) to remove
the requirement for States to include
student learning outcomes and
employment outcomes, ‘‘in significant
part,’’ in their use of indicators of
academic content knowledge and
teaching skills as part of their criteria for
assessing the performance of each
teacher preparation program. We also
have revised § 612.4(b)(2) to remove the
requirement that permitted States to
determine that a teacher preparation
program was effective (or higher quality)
only if the State found the program to
have ‘‘satisfactory or higher’’ student
learning outcomes.
Comments: Several commenters
objected to the Department’s proposal to
establish four performance levels for
States’ assessment of their teacher
preparation programs. They argued that
section 207(a), which specifically
requires States to report those programs
found to be either low-performing or at-risk of being low-performing, establishes
the need for three performance levels
(low-performing, at-risk of being low-performing, and all other programs) and
that the Department lacks authority to
require reporting on the four
performance levels proposed in the
NPRM, i.e., those programs that are
‘‘low-performing,’’ ‘‘at-risk,’’
‘‘exceptional,’’ and everything else. These
commenters stated that these provisions
of the HEA give to the States the
authority to determine whether to
establish more than three performance
levels.
Discussion: Section 205(b) of the HEA
provides that State reports ‘‘shall
include not less than the following,’’
and this provision authorizes the
Secretary to add reporting elements to
the State reports. It was on this basis
that we proposed, in § 612.4(b)(1), to
supplement the statutorily required
elements to require States, when making
meaningful differentiation in teacher
preparation program performance, to
use at least four performance levels,
including exceptional. While we
encourage States to identify programs
that are exceptional in order to
recognize and celebrate outstanding
programs, and so that prospective
teachers and their employers know of
them and others may learn from them,
in consideration of comments that urged
the Secretary not to require States to
report a fourth performance level and
other comments that expressed concerns
about overall implementation costs, we
are not adopting this proposal in the
final regulations.
Changes: We have revised
§ 612.4(b)(1) to remove the requirement
for States to rate their teacher
preparation programs using the category
‘‘exceptional.’’ We have also removed
the definition of ‘‘exceptional teacher
preparation program’’ from the
Definitions section in § 612.2.
Comments: Several commenters
raised concerns about whether the
provisions of § 612.6 are consistent with
section 205(b)(2) of the HEA, which
prohibits the Secretary from creating a
national list or ranking of States,
institutions, or schools using the scaled
scores required under section 205. Some
of these commenters acknowledged the
usefulness of a system for public
information on teacher preparation.
However, the commenters argued that, if
these regulations are implemented, the
Federal government would instead be
creating a program rating system in
violation of section 205(b)(2).
Commenters also stated that by
mandating a system for rating teacher
preparation programs, including the
indicators by which teacher preparation
programs must be rated, what a State
must consider in identifying low-performing or at-risk teacher
preparation programs, and the actions a
State must take with respect to low-performing programs (proposed
§§ 612.4, 612.5, and 612.6), the Federal
government is impinging on the
authority of States, which authorize,
regulate, and approve IHEs and their
teacher preparation programs.
Discussion: Although section 207(a) of
the HEA expressly requires States to
include in their SRCs a list of programs
that they have identified as low-performing or at-risk of being low-performing, the regulations do not in
any other way require States to specify
or create a list or ranking of institutions
or programs and the Department has no
intention of requiring States to do so.
Nor will the Department be creating a
national list or ranking of States,
institutions, or teacher preparation
programs. Thus, there is no conflict
with section 205(b)(2).
As we discussed in response to the
prior set of comments, these regulations
establish definitions for terms provided
in title II of the HEA in order to help
ensure that the State and IHE reporting
system meets its purpose. In authorizing
the Secretary to define statutory terms
and establish reporting methods needed
to properly implement the title II
reporting system, neither Congress nor
the Department is abrogating State
authority to authorize, regulate, and
approve IHEs and their teacher
preparation programs. Finally, in
response to the comments that proposed
§§ 612.4, 612.5, and 612.6 would
impermissibly impinge on the authority
of States in terms of actions they must
take with respect to low-performing
programs, we note that the regulations
do little more than clarify the sanctions
that Congress requires in section 207(b)
of the HEA. Those sanctions address the
circumstances in which students
enrolled in a low-performing program
may continue to receive or regain
Federal student financial assistance, and
thus the Federal government has a
direct interest in the subject.
Changes: None.
Comments: One commenter
contended that Federal law provides no
authority to compel LEAs to develop the
criteria and implement the collection
and reporting of student learning
outcome data, and that there is little that
the commenter’s State can do to require
LEA compliance with those reporting
requirements.
Discussion: Section 205(b) of the HEA
requires all States receiving HEA funds
to provide the information the law
identifies ‘‘in a uniform and
comprehensible manner that conforms
with the definitions and methods
established by the Secretary.’’ These
regulations place responsibility for
compliance upon the States, not the
LEAs.
Since all LEAs stand to benefit from
the success of the new reporting system
through improved transparency and
information about the quality of teacher
preparation programs from which they
may recruit and hire new teachers, we
assume that all LEAs will want to work
with their States to find manageable
ways to implement the regulations.
Moreover, without more information
from the commenter, we cannot address
why a particular State would not have
the authority to insist that an LEA
provide the State with the information
it needs to meet these reporting
requirements.
Changes: None.
Federal-State-Institution Relationship,
Generally
Comments: Many commenters
commented generally that the proposed
regulations are an example of Federal
overreach and represent a profound and
improper shift in the historic
relationship among institutions, States,
school districts, accrediting agencies,
and the Federal government in the area
of teacher preparation and certification.
For example, one commenter stated that
the proposal threatens the American
tradition of Federal non-interference
with academic judgments, and makes
the Department the national arbiter of
what teacher preparation programs
should teach, who they should teach,
and how they should teach.
Commenters also contended that the
proposed regulations impermissibly
interfere with local and State control
and governance by circumventing
States’ rights delegated to local school
districts and the citizens of those
districts to control the characteristics of
quality educators and to determine
program approval.
Discussion: The need for teacher
preparation programs to produce
teachers who can adequately and
effectively teach to the needs of the
Nation’s elementary and secondary
school students is national in scope and
self-evident. Congress enacted the HEA
title II reporting system as an important
tool to address this need. Our final
regulations are intended to give the
public confidence that, as Congress
anticipated when it enacted sections
205(b) and 207 of the HEA, States have
reasonably determined whether teacher
preparation programs are, or are not,
meeting the States’ expectations for
their performance. While the regulations
provide for use of certain minimum
indicators and procedures for
determining and reporting program
performance, they provide States with a
substantial amount of discretion in how
to measure these indicators, what
additional indicators a State may choose
to add, and how to weight and combine
these indicators and criteria into an
overall assessment of a teacher
preparation program’s performance.
Thus, the final regulations are
consistent with the traditional
importance of State decision-making in
the area of evaluating educational
performance. The public, however, must
have confidence that the procedures and
criteria that each State uses to assess
program performance and to report
programs as low-performing or at-risk
are reasonable and transparent.
Consistent with the statutory
requirement that States report annually
to the Secretary and to the public ‘‘in a
uniform and comprehensible manner
that conforms to the definitions and
methods established by the Secretary,’’
the regulations aim to help ensure that
each State report meets this basic test.
We disagree with comments that
allege that the regulations reflect
overreach by the Federal government
into the province of States regarding the
approval of teacher preparation
programs and the academic domain of
institutions that conduct these
programs. The regulations do not
constrain the academic judgments of
particular institutions, what those
institutions should teach in their
specific programs, which students
should attend those programs, or how
those programs should be conducted.
Nor do they dictate which teacher
preparation programs States should
approve or should not approve. Rather,
by clarifying limited areas in which
sections 205 and 207 of the HEA are
unclear, the regulations implement the
statutory mandate that, consistent with
definitions and reporting methods the
Secretary establishes, States assess the
quality of the teacher preparation
programs in their State, identify those
that are low-performing or at-risk of
being low-performing, and work to
improve the performance of those
programs.
With the changes we are making in
these final regulations, the system for
determining whether a program is low-performing or at-risk of being low-performing is unarguably a State-determined system. Specifically, as
noted above, in assessing and reporting
program performance, each State is free
to (1) adopt and report other measures
of program performance it believes are
appropriate, (2) use discretion in how to
measure student learning outcomes,
employment outcomes, survey
outcomes, and minimum program
characteristics, and (3) determine for
itself how these indicators of academic
content knowledge and teaching skills
and other criteria a State may choose to
use will produce a valid and reliable
overall assessment of each program’s
performance. Thus, the assessment
system that each State will use is
developed by the State, and does not
compromise the ability of the State and
its stakeholders to determine what is
and is not a low-performing or at-risk
teacher preparation program.
Changes: None.
Constitutional Issues
Comments: One commenter stated
that the proposed regulations amounted
to a coercive activity that violates the
U.S. Constitution’s Spending Clause
(i.e., Article I, Section 8, Clause 1 of the
U.S. Constitution). The commenter
argued that sections 205 and 207 of the
HEA are grounded in the Spending
Clause and Spending Clause
jurisprudence, including cases such as
Arlington C. Sch. Dist. Bd. of Educ. v.
Murphy, 548 U.S. 291 (2006), which
provides that States are not bound by
requirements of which they have no
clear notice. In particular, the
commenter asserted that, in examining
the text of the statute in order to decide
whether to accept Federal financial
assistance, a State would not have clear
notice that it would be required to
commit substantial amounts of funds to
develop the infrastructure required to
include student learning outcome data
in its SRC or include student learning
outcomes in its evaluation of teacher
preparation programs. Some
commenters stated that the proposed
regulations violate the Tenth
Amendment to the U.S. Constitution.
Discussion: Congress’ authority to
enact the provisions in title II of the
HEA governing the State reporting
system flows from its authority to
‘‘provide for the . . . general Welfare of the
United States.’’ Article I, Section 8,
Clause 1 (commonly referred to as
Congress’ ‘‘spending authority’’). Under
that authority, Congress authorized the
Secretary to implement the provisions
of sections 205 through 207. Thus, the
regulations do not conflict with
Congress’ authority under the Spending
Clause. With respect to cases such as
Arlington C. Sch. Dist. Bd. of Educ. v.
Murphy, States have full notice of their
responsibilities under the reporting
system through the rulemaking process
the Department has conducted under
the Administrative Procedure Act and
the General Education Provisions Act to
develop these regulations.
We also do not perceive a legitimate
Tenth Amendment issue. The Tenth
Amendment provides in pertinent part
that powers not delegated to the Federal
government by the Constitution are
reserved to the States. Congress used its
spending authority to require
institutions that enroll students who
receive Federal student financial
assistance in teacher preparation
programs, and States that receive HEA
funds, to submit information as required
by the Secretary in their institutional
report cards (IRCs) and SRCs. Thus, the
Secretary’s authority to define the
ambiguous statutory term ‘‘indicators of
academic content knowledge and
teaching skills’’ to include the measures
the regulations establish, coupled with
the authority States have under section
205(b)(1)(F) of the HEA to establish
other criteria with which they assess
program performance, resolves any
claim that the assessment of program
performance is a matter left to the States
under the Tenth Amendment.
Changes: None.
Unfunded Mandates
Comments: Some commenters stated
that the proposed regulations would
amount to an unfunded mandate, in that
they would require States, institutions
with teacher preparation programs, and
public schools to bear significant
implementation costs, yet offer no
Federal funding to cover them. To pay
for this unfunded mandate, several
commenters stated that costs would be
passed on to students via tuition
increases, decreases in funding for
higher education, or both.
Discussion: These regulations do not
constitute an unfunded mandate.
Section 205(b) makes reporting ‘‘in a
uniform and comprehensible manner
that conforms with the definitions and
methods established by the Secretary’’ a
condition of the State’s receipt of HEA
funds. And, as we have stated, the
regulations implement this statutory
mandate.
Changes: None.
Loss of Eligibility To Enroll Students
Who Receive HEA-Funded Student
Financial Aid
Comments: Many commenters stated
that the Department lacks authority to
establish Federally defined performance
criteria for the purpose of determining
a teacher preparation program’s
eligibility for student financial aid
under title IV of the HEA. Commenters
expressed concern that the Department
is departing from the current model, in
which the Department determines
institutional eligibility for title IV
student aid, to a model in which this
function would be outsourced to the
States. While some commenters
acknowledged that, under the HEA, a
teacher preparation program loses its
title IV eligibility if its State decides to
withdraw approval or financial support,
commenters asserted that the HEA does
not intend for this State determination
to be coupled with a prescriptive
Federal mandate governing how the
determination should be made. A
number of commenters also stated that
the regulations would result in a process
of determining eligibility for Federal
student aid that will vary by State.
Similarly, some commenters stated
that the proposed requirements in
§ 612.8(b)(1) for regaining eligibility to
enroll students who receive title IV aid
exceed the statutory authority in section
207(b)(4) of the HEA, which provides
that a program is reinstated upon a
demonstration of improved
performance, as determined by the
State. Commenters expressed concern
that the proposed regulations would
shift this responsibility from the State to
the Federal government, and stated that
teacher preparation programs could be
caught in limbo. They argued that if a
State had already reinstated funding and
identified that a program had improved
performance, the program’s ability to
enroll students who receive student
financial aid would be conditioned on
the Secretary’s approval. The
commenters contended that policy
changes as significant as these should
come from Congress, after scrutiny and
deliberation of a reauthorized HEA.
Discussion: Section 207(b) of the HEA
states, in relevant part:
Any teacher preparation program
from which the State has withdrawn the
State’s approval, or terminated the
State’s financial support, due to the low
performance of the program based upon
the State assessment described in
subsection (a)—
(1) Shall be ineligible for any funding
for professional development activities
awarded by the Department;
(2) May not be permitted to accept or
enroll any student who receives aid
under title IV in the institution’s teacher
preparation program;
(3) Shall provide transitional support,
including remedial services if necessary,
for students enrolled at the institution at
the time of termination of financial
support or withdrawal of approval; and
(4) Shall be reinstated upon
demonstration of improved
performance, as determined by the
State.
Sections 612.7 and 612.8 implement
this statutory provision through
procedures that mirror existing
requirements governing termination and
reinstatement of student financial
support under title IV of the HEA. As
noted in the preceding discussion, our
regulations do not usurp State authority
to determine how to assess whether a
given program is low-performing, and
our requirement that States do so using,
among other things, the indicators of
novice teachers’ academic content
knowledge and teaching skills identified
in § 612.5 is consistent with title II of
the HEA.
Consistent with section 207(a) of the
HEA, a State determines a teacher
preparation program’s performance
level based on the State’s use of those
indicators and any other criteria or
indicators the State chooses to use to
measure the overall level of the
program’s performance. In addition,
consistent with section 207(b), the loss
of eligibility to enroll students receiving
Federal student financial aid does not
depend upon a Department decision.
Rather, the State determines whether
the performance of a particular teacher
preparation program is so poor that it
withdraws the State’s approval of, or
terminates the State’s financial support
for, that program. Each State may use a
different decision model to make this
determination, as contemplated by
section 207(b).
Commenters’ objections to our
proposal for how a program subject to
section 207(b) may regain eligibility to
enroll students who receive title IV aid
are misplaced. Section 207(b)(4) of the
HEA provides that a program found to
be low-performing is reinstated upon
the State’s determination that the
program has improved, which
presumably would need to include the
State’s reinstatement of State approval
or financial support, since otherwise the
institution would continue to lose its
ability to accept or enroll students who
receive title IV aid in its teacher
preparation programs. However, the
initial loss of eligibility to enroll
students who receive title IV aid is a
significant event, and we believe that
Congress intended that section 207(b)(4)
be read and implemented not in
isolation, but rather in the context of the
procedures established in 34 CFR
600.20 for reinstatement of eligibility
based on the State’s determination of
improved performance.
Changes: None.
Relationship to Department Waivers
Under ESEA Flexibility
Comments: A number of commenters
stated that the proposed regulations
inappropriately extend the Federal
requirements of the Department’s
Elementary and Secondary Education
Act (ESEA) flexibility initiative to States
that have either chosen not to seek a
waiver of certain ESEA requirements or
have applied for a waiver but not
received one. The commenters argued
that requiring States to assess all
students in non-tested grades and
subjects (i.e., those grades and subjects
for which testing is not required under
title I, part A of the ESEA)—a practice
that is currently required only in States
with ESEA flexibility or in States that
have chosen to participate in the Race
to the Top program—sets a dangerous
precedent.
Discussion: While the regulations are
similar to requirements the Department
established for States that received
ESEA flexibility or Race to the Top
grants regarding linking data on student
growth to individual teachers of non-tested grades and subjects under ESEA
title I, part A, they are independent of
those requirements. While section 4(c)
of the Every Student Succeeds Act
(ESSA) 6 ends conditions of waivers
granted under ESEA flexibility on
August 1, 2016, States that received
ESEA flexibility or a Race to the Top
grant may well have a head start in
6 ESSA, which was signed into law in December
2015 (i.e., after the NPRM was published),
reauthorizes and amends the ESEA.
implementing systems for linking
academic growth data for elementary
and secondary school students to
individual novice teachers, and then
linking data on these novice teachers to
individual teacher preparation
programs. However, we believe that all
States have a strong interest and
incentive in finding out whether each of
their teacher preparation programs is
meeting the needs of their K–12
students and the expectations of their
parents and the public. We therefore
expect that States will seek to work with
other stakeholders to find appropriate
ways to generate the data needed to
perform the program assessments that
these regulations implementing section
205 of the HEA require.
Changes: None.
Consistency With State Law and
Practice
Comments: A number of commenters
expressed concerns about whether the
proposed regulations were consistent
with State law. Some commenters stated
that California law prohibits the kind of
data sharing between the two State
agencies, the California Commission on
Teacher Credentialing (CTC) and the
California Department of Education
(CDE), that would be needed to
implement the proposed regulations.
Specifically, the commenter stated that
section 44230.5 of the California
Education Code (CEC) does not allow
CTC to release information on credential
holders to any entity other than the type
of credential and employing district. In
addition, the commenter noted that
California statutes (sections 44660–
44665 of the CEC) authorize each of the
approximately 1,800 districts and
charter schools to independently
negotiate and implement teacher
evaluations, so there is no statewide
collection of teacher evaluation data.
The commenter also noted that current
law prohibits employers from sharing
teacher evaluation data with teacher
preparation programs or with the State
if an individual teacher would be
identifiable.
Another commenter argued that in
various ways the proposed regulations
constitute a Federal overreach with
regard to what Missouri provides in
terms of State and local control and
governance. Specifically, the commenter
stated that proposed regulations
circumvent: The rights of Missouri
school districts and citizens under the
Missouri constitution to control the
characteristics of quality education; the
authority of the Missouri legislative
process and the State Board of
Education to determine program quality;
State law, specifically, according to the
commenter, Missouri House Bill 1490
limits how school districts can share
locally held student data such as
student learning outcomes; and the
process already underway to improve
teacher preparation in Missouri.
Other commenters expressed concern
that our proposal to require States to use
student learning outcomes, employment
outcomes, and survey outcomes, as
defined in the proposed regulations,
would create inconsistencies with what
they consider to be the more
comprehensive and more nuanced way
in which their States assess teacher
preparation program performance and
then provide relevant feedback to
programs and the institutions that
operate them.
Finally, a number of commenters
argued that requirements related to
indicators of academic content
knowledge and teaching skills are
unnecessary because there is already an
organization, the Council for the
Accreditation of Educator Preparation
(CAEP), which requires IHEs to report
information similar to what the
regulations require. These commenters
claimed that the reporting of data on
indicators of academic content
knowledge and teaching skills related to
each individual program on the SRC
may be duplicative and unnecessary.
Discussion: With respect to comments
on the CEC, we generally defer to each
State to interpret its own laws.
However, assuming that the CTC will
play a role in how California would
implement these regulations, we do not
read section 44230.5 of the CEC to
prohibit CTC from releasing information
on credential holders to any entity other
than the type of credential and
employing district, as the commenters
state. Rather, the provision requires CTC
to ‘‘establish a nonpersonally
identifiable educator identification
number for each educator to whom it
issues a credential, certificate, permit, or
other document authorizing that
individual to provide a service in the
public schools.’’ Moreover, while
sections 44660 through 44665 of the
CEC authorize each LEA in California to
independently negotiate and implement
teacher evaluations, we do not read this
to mean that California is prohibited
from collecting data relevant to the
student learning outcomes of novice
teachers and linking them to the teachers’
preparation programs. Commenters did
not cite any provision of the CEC that
prohibits LEAs from sharing teacher
evaluation data with teacher preparation
programs or the State if it is done
without identifying any individual
teachers. We assume that use of the
nonpersonally identifiable educator
identification number that section
44230.5 of the CEC directs would
provide one way to accomplish this
task. Finally, we have reviewed the
commenters’ brief description of the
employer surveys and teacher entry and
retention data that California is
developing for use in its assessments of
teacher preparation programs. Based on
the comments, and as discussed more
fully under the subheading Student
Learning Outcomes, we believe that the
final regulations are not inconsistent
with California’s approach.
While the commenter who referred to
Missouri law raised several broad
concerns about purported Federal
overreach of the State’s laws, these
concerns were very general. However,
we note that in previously applying for
and receiving ESEA flexibility, the
Missouri Department of Elementary and
Secondary Education (MDESE) agreed to
have LEAs in the State implement basic
changes in their teacher evaluation
systems that would allow them to
generate student growth data that would
fulfill the student learning outcomes
requirement. In doing so the MDESE
demonstrated that it was fully able to
implement these types of activities
without conflict with State law.
Moreover, the regulations address
neither how a State or LEA are to
determine the characteristics of effective
educators, nor State procedures and
authority for determining when to
approve a teacher preparation program.
Nor do the regulations undermine any
State efforts to improve teacher
preparation; in implementing their
responsibilities under sections 205(b)
and 207(a) of the HEA, they simply
require that, in assessing the level of
performance of each teacher preparation
program, States examine and report data
about the performance of novice
teachers the program produces.
Finally, we note that, as enacted,
House Bill 1490 specifically directs the
Missouri State Board of Education to
issue a rule regarding gathering student
data in the Statewide Longitudinal Data
System in terms of the Board’s need to
make certain data elements available to
the public. This is the very process the
State presumably would use to gather
and report the data that these
regulations require. In addition, we read
House Bill 1490 to prohibit the MDESE,
unless otherwise authorized, ‘‘to
transfer personally identifiable student
data’’, something that the regulations do
not contemplate. Further, we do not
read House Bill 1490 as establishing the
kind of limitation on LEAs’ sharing
student data with the MDESE that the
commenter stresses. House Bill 1490
also requires the State Board to ensure
compliance with the Family
Educational Rights and Privacy Act
(FERPA) and other laws and policies;
see our discussion of comment on
FERPA and State privacy laws under
§ 612.4(b)(3)(ii)(E).
We are mindful that a number of
States have begun their own efforts to
use various methods and procedures to
examine how well their teacher
preparation programs are performing.
For the title II reporting system, HEA
provides that State reporting must use
common definitions and reporting
methods as the Secretary shall
determine necessary. While the
regulations require all States to use data
on student learning outcomes,
employment outcomes, survey
outcomes, and minimum program
characteristics to determine which
programs are low-performing or at-risk
of being low-performing, States may,
after working with their stakeholders,
also adopt other criteria and indicators.
We also know from the recent GAO
report that more than half the States
were already using information on
program graduates’ effectiveness in their
teacher preparation program approval or
renewal processes and at least 10 others
planned to do so—data we would
expect to align with these reporting
requirements.7 Hence, we trust that
what States report in the SRCs will
complement their own systems of
assessing program performance.
Finally, with regard to the work of
CAEP, we agree that CAEP may require
some institutional reporting that may be
similar to the reporting required under
the title II reporting system; however,
reporting information to CAEP does not
satisfy the reporting requirements under
title II. Regardless of the information
reported to CAEP, States and
institutions still have a statutory
obligation to submit SRCs and IRCs. The
CAEP reporting requirements include
the reporting of data associated with
student learning outcomes, employment
outcomes, and survey outcomes;
however, CAEP standards do not require
the disaggregation of data for individual
teacher preparation programs but this
disaggregation is necessary for title II
reporting.
Changes: None.
Cost Implications
Comments: A number of commenters
raised concerns about the costs of
implementing the regulations. They
stated that the implementation costs,
such as those for the required statewide
data systems to be designed,
implemented, and refined in the pilot
7 GAO at 13–14.
year, would require States either to take
funds away from other programs or raise
taxes or fees to comply. The
commenters noted that these costs could
be passed on to students via tuition
increases or result in decreased State
funding for higher education, and that
doing so would create many other
unintended consequences, such as
drawing State funding away from hiring
of educators, minority-serving
institutions, or future innovation,
reforms, and accountability initiatives.
Commenters also stated that the cost to
institutions of implementing the
regulations could pull funding away
from earning national accreditation.
Some commenters also expressed
concern about the costs to States of
providing technical assistance to teacher
preparation programs that they find to
be low-performing, and suggested that
those programs could lose State
approval or financial support.
Finally, in view of the challenges in
collecting accurate and meaningful data
on teacher preparation program
graduates who fan out across the United
States, commenters argued that the
Department should find ways to provide
financial resources to States and
institutions to help them gather the
kinds of data the regulations will
require.
Discussion: The United States has a
critical need to ensure that it is getting
a good return on the billions of dollars
of public funds it spends producing
novice teachers. The teacher preparation
program reporting system established in
title II of the HEA provides an important
tool for understanding whether these
programs are making good on this
investment. But the system can only
serve its purpose if States measure and
report a program’s performance in a
variety of ways—in particular, based on
important inputs, such as good clinical
education and support, as well as on
important outcomes, such as novice
teachers’ success in improving student
performance.
The regulations are designed to
achieve these goals, while maintaining
State responsibility for deciding how to
consider the indicators of academic
content knowledge and teaching skills
described in § 612.5, along with other
relevant criteria States choose to use.
We recognize that moving from the
current system—in which States, using
criteria of their choosing, identified only
39 programs nationally in 2011 as low-performing or at-risk of being low-performing (see the NPRM, 79 FR
71823)—to one in which such
determinations are based on meaningful
indicators and criteria of program
effectiveness is not without cost. We
understand that States will need to
make important decisions about how to
provide for these costs. However, as
explained in the Regulatory Impact
Analysis section of this document, we
concluded both that (1) these costs are
manageable, regardless of States’ current
ability to establish the systems they will
need, and (2) the benefits of a system in
which the public has confidence that
program reporting is valid and reliable
are worth those costs.
While providing technical assistance
to low-performing teacher preparation
programs will entail some costs,
§ 612.6(b) simply codifies the statutory
requirement Congress established in
section 207(a) of the HEA and offers
examples of what this technical
assistance could entail. Moreover, we
assume that a State would want to
provide such technical assistance rather
than have the program continue to be
low-performing and so remain at-risk of
losing State support (and eligibility to
enroll students who receive title IV aid).
Finally, commenters requested that
we identify funding sources to help
States and IHEs gather the required data
on students who, upon completing their
programs, do not stay in the State. We
encourage States to gather and use data
on all program graduates regardless of
the State to which they ultimately move.
However, given the evident costs of
doing so on an interstate basis, the final
regulations permit States to exclude
these students from their calculations of
student learning outcomes, their teacher
placement and retention rates, and from
the employer and teacher survey (see
the definitions of teacher placement and
retention rate in § 612.2) and provisions
governing student learning outcomes
and survey outcomes in § 612.5(a)(1)(iii)
and (a)(3)(ii).
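For illustration only, the following minimal sketch (in Python, with hypothetical field names) shows one way a State might apply this exclusion when computing a teacher placement rate; the regulations do not prescribe any particular implementation.

def teacher_placement_rate(recent_graduates, exclude_out_of_state=True):
    # Illustrative only. Each entry in recent_graduates is a dict with
    # two hypothetical keys: 'placed' (became a teacher of record in the
    # State) and 'moved_out_of_state'. The final regulations permit a
    # State to exclude graduates who leave the State; this helper simply
    # applies that option to the numerator and denominator alike.
    cohort = [g for g in recent_graduates
              if not (exclude_out_of_state and g['moved_out_of_state'])]
    if not cohort:
        return None  # no reportable cohort remains after exclusions
    placed = sum(1 for g in cohort if g['placed'])
    return 100.0 * placed / len(cohort)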
Changes: None.
Section 612.2 Definitions
Content and Pedagogical Knowledge
Comments: Several commenters
requested that we revise the definition
of ‘‘content and pedagogical
knowledge’’ to specifically refer to a
teacher’s ability to factor students’
cultural, linguistic, and experiential
backgrounds into the design and
implementation of productive learning
experiences. The commenters stated
that pedagogical diversity is an
important construct in elementary and
secondary education and should be
included in this definition.
Additional commenters requested that
this definition specifically refer to
knowledge and skills regarding
assessment. These commenters stated
that the ability to measure student
learning outcomes depends upon a teacher’s ability to understand the assessment of such learning, not just upon the conveyance and explanation of content.
Another commenter recommended
that we specifically mention the distinct
set of instructional skills necessary to
address the needs of students who are
gifted and talented. This commenter
stated that there is a general lack of
awareness of how to identify and
support advanced and gifted learners,
and that this lack of awareness has
contributed to concerns about how well
the Nation’s top students are doing
compared to top students around the
world. The commenter also stated that
this disparity could be rectified if
teachers were required to address the
specific needs of this group of students.
Multiple commenters requested that
we develop data definitions and metrics
related to the definition of ‘‘content and
pedagogical knowledge,’’ and then
collect related data on a national level.
They stated that such a national
reporting system would facilitate
continuous improvement and quality
assurance on a systemic level, while
significantly reducing burden on States
and programs.
Other commenters recommended that
to directly assess for content knowledge
and pedagogy, the definition of the term
include rating graduates of teacher
preparation programs based on a
portfolio of the teaching candidates’
work over the course of the academic
program. These commenters stated that
reviewing a portfolio reflecting a recent
graduate’s pedagogical preparation
would be more reliable than rating an
individual based on student learning,
which cannot be reliably measured.
Discussion: The proposed definition
of ‘‘content and pedagogical
knowledge’’ reflected the specific and
detailed suggestions of a consensus of
non-Federal negotiators. We believe that
the definition is sufficiently broad to
address, in general terms, the key areas
of content and pedagogical knowledge
that aspiring teachers should gain in
their teacher preparation programs.
In this regard, we note that the
purpose here is not to offer a
comprehensive definition of the term
that all States must use, as the
commenters appear to recommend.
Rather, it is to provide a general
roadmap for States to use as they work
with stakeholders (see § 612.4(c)) to
decide how best to determine whether
programs that lack the accreditation
referenced in § 612.5(a)(4)(i) will ensure
that students have the requisite content
and pedagogical knowledge they will
need as teachers before they complete
the programs.
For this reason, we believe that
requiring States to use a more
prescriptive definition or to develop
common data definitions and metrics
aligned to that definition, as many
commenters urged, would create
unnecessary costs and burdens.
Similarly, we do not believe that
collecting this kind of data on a national
level through the title II reporting
system is worth the significant cost and
burden that it would entail. Instead, we
believe that States, working in
consultation with stakeholders, should
determine whether their State systems
for evaluating program performance
should include the kinds of additions to
the definition of content and
pedagogical knowledge that the
commenters recommend.
We also stress that our definition
underscores the need for teacher
preparation programs to train teachers
to have the content knowledge and
pedagogical skills needed to address the
learning needs of all students. It
specifically refers to the need for a
teacher to possess the distinct skills
necessary to meet the needs of English
learners and students with disabilities,
both because students in these two
groups face particular challenges and
require additional support, and to
emphasize the need for programs to
train aspiring teachers to teach to the
learning needs of the most vulnerable
students they will have in their
classrooms. While the definition’s focus
on all students plainly includes
students who are gifted and talented, as
well as students in all other subgroups,
we do not believe that, for purposes of
this title II reporting system, the
definition of ‘‘content and pedagogical
skills’’ requires similar special reference
to those or other student groups.
However, we emphasize again that
States are free to adopt many of the
commenters’ recommendations. For
example, because the definition refers to
‘‘effective learning experiences that
make the discipline accessible and
meaningful for all students,’’ States may
consider a teacher’s ability to factor
students’ cultural, linguistic, and
experiential backgrounds into the
design and implementation of
productive learning experiences, just as
States may include a specific focus on
the learning needs of students who are
gifted and talented.
Finally, through this definition we are
not mandating a particular method for
assessing the content and pedagogical
knowledge of teachers. As such, under
the definition, States may allow teacher
preparation programs to use a portfolio
review to assess teachers’ acquisition of
content and pedagogical knowledge.
Changes: None.
Employer Survey
Comments: None.
Discussion: The proposed definition
of ‘‘survey outcomes’’ specified that a
State would be required to survey the
employers or supervisors of new
teachers who were in their first year of
teaching in the State where their teacher
preparation program is located. To
avoid confusion with regard to teacher
preparation programs provided through
distance education, in the final
regulations we have removed the phrase
‘‘where their teacher preparation
program is located’’ from the final
definition of ‘‘employer survey.’’ In
addition to including a requirement to
survey those in their first year of
teaching in the State and their
employers in the ‘‘survey outcomes’’
provision that we have moved to
§ 612.5(a)(3) of the final regulations, we
are including the same clarification in
the definitions of ‘‘employer survey’’
and ‘‘teacher survey’’. We also changed
the term ‘‘new teacher’’ to ‘‘novice
teacher’’ for the reasons discussed in
this document under the definition of
‘‘novice teacher.’’
Changes: We have revised the
definition of ‘‘employer survey’’ to
clarify that this survey is of employers
or supervisors of novice teachers who
are in their first year of teaching.
Employment Outcomes
Comments: None.
Discussion: Upon review of the
proposed regulations, we recognized
that the original structure of the
regulations could have generated
confusion. We are concerned that
having a definition for the term
‘‘employment outcomes’’ in § 612.2,
when that provision largely serves to
operationalize other definitions in the
context of § 612.5, was not the clearest
way to present these requirements. We
therefore are moving the explanations
and requirements of those terms into the
text of § 612.5(a).
Changes: We have removed the
proposed definition of ‘‘employment
outcomes’’ from § 612.2, and moved the
text and requirements from the
proposed definition to § 612.5(a)(2).
Exceptional Teacher Preparation
Program
Comments: Many commenters
opposed having the regulations define,
and having States identify in their SRCs,
‘‘exceptional teacher preparation
programs’’, stating that section 207(a) of
the HEA only gives the Department
authority to require reporting of three
categories of teacher preparation
programs: Low-performing, at-risk of
being low-performing, and teacher
preparation programs that are neither
low-performing nor at-risk. A number of
commenters noted that some States have
used a designation of exceptional and
found that the rating did not indicate
truly exceptional educational quality.
They also stated that teacher
preparation programs have used that
rating in their marketing materials, and
that it may mislead the public as to the
quality of the program. In addition,
commenters noted that, with respect to
the determination of a high-quality
teacher preparation program for TEACH
Grant program eligibility, it makes no
practical difference whether a teacher
preparation program is rated as effective
or exceptional because eligible students
would be able to receive TEACH Grants
whether the programs in which they
enroll are effective, exceptional, or some
other classification above effective.
Discussion: Section 207(a) of the HEA
requires that a State identify programs
as low-performing or at-risk of being
low-performing, and report those
programs in its SRC. However, section
205(b) of the HEA authorizes the
Secretary to require States to include
other information in their SRCs.
Therefore, we proposed that States
report which teacher preparation
programs they had identified as
exceptional because we believe the
public should know which teacher
preparation programs each State has
concluded are working very well. We
continue to urge States to identify for
the public those teacher preparation
programs that are indeed exceptional.
Nonetheless, based on our consideration
of the concerns raised in the comments,
and the costs of reporting using this
fourth performance level, we have
decided to remove this requirement
from the final regulations. Doing so has
no impact on TEACH Grants because, as
commenters noted, an institution’s
eligibility to offer TEACH Grants is
impacted only where a State has
identified a teacher preparation program
as low-performing or at-risk. Despite
these changes, we encourage States to
adopt and report on this additional
performance level.
Changes: We have removed the
proposed definition of ‘‘exceptional
teacher preparation program,’’ and
revised the proposed definition of
‘‘effective teacher preparation program’’
under § 612.2 to mean a teacher
preparation program with a level of
performance that is higher than low-performing or at-risk. We have also
revised § 612.4(b)(1) to remove the
requirement that an SRC include
‘‘exceptional’’ as a fourth teacher
preparation program performance level.
High-Need School
Comments: Multiple commenters
requested that States be allowed to
develop and use their own definitions of
‘‘high-need school’’ so that State
systems do not need to be modified to
comply with the regulations. These
commenters stated that many States had
made great strides in improving the
quality of teacher preparation programs,
and that the definition of ‘‘high-need
school’’ may detract from the reforms
already in place in those States. In
addition, the commenters noted that
States are in the best position to define
a high-need school since they can do so
with better knowledge of State-specific
context.
Some commenters suggested,
alternatively, that the Department
include an additional disaggregation
requirement for high-need subject areas.
These commenters stated that targeting
high-need subject areas would have a
greater connection to employment
outcomes than would high-need schools
and, as such, should be tracked as a
separate category when judging the
quality of teacher preparation programs.
A number of commenters requested
that the definition of high-need school
include schools with low graduation
rates. Other commenters agreed that this
definition should be based on poverty,
as defined in section 200(11) of the
HEA, but also recommended that a
performance component should be
included. Specifically, these
commenters suggested that high schools
in which one-third or more of the
students do not graduate on time be
designated as high-need schools. Other
commenters recommended including
geography as an indicator of a school’s
need, arguing that, in their experience,
high schools’ urbanicity plays a
significant role in determining student
success.
Other commenters expressed
concerns with using a quartile-based
ranking of all schools to determine
which schools are considered high
need. These commenters stated that
such an approach may lead to schools
with very different economic conditions
being considered high need. For
example, a school in one district might
fall into the lowest quartile with only 15
percent of students living in poverty
while a school in another district would
need to have 75 percent of students
living in poverty to meet the same
designation.
Discussion: Our definition of ‘‘high-need school’’ mirrors the definition of
that term in section 200(11)(A) of the
HEA and, we believe, provides
sufficient breadth and flexibility for all
States to use it to help determine the
performance of their teacher preparation
programs. Under the definition, all
schools that are in an LEA’s highest
quartile of schools ranked by family
need based on measures that include
student eligibility for free and reduced
price lunch are deemed high-need
schools. (We focus here on this measure
of poverty because we believe that this
is the primary measure on which many
LEAs will collect data.) So, too, are
schools with high individual family
poverty rates measured by large
numbers or percentages of students who
are eligible for free and reduced price
lunches. Hence, for purposes of title II
reporting, not only will all schools with
sufficiently high family poverty rates be
considered high-need schools, but,
regardless of the school’s level of family poverty, every LEA in the Nation
with four or more schools will have at
least one high-need school. The
definition therefore eliminates a novice
teacher’s LEA preference as a factor
affecting the placement or retention rate
in high-need schools, and thus permits
these measures to work well with this
definition of high-need school. This
would not necessarily be true if we
permitted States to adopt their own
definitions of this term.
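As an illustration of the quartile mechanics described above, the sketch below (in Python) ranks an LEA’s schools by the share of students eligible for free and reduced price lunch; the school names, data values, and tie-handling rule are hypothetical assumptions, not requirements of the regulations.

def highest_need_quartile(schools):
    # Illustrative only. schools is a list of (name, percent_frpl)
    # pairs, where percent_frpl is the share of students eligible for
    # free or reduced price lunch. With four or more schools, at least
    # one school is always identified, consistent with the discussion
    # above.
    ranked = sorted(schools, key=lambda s: s[1], reverse=True)
    cutoff = max(1, len(ranked) // 4)  # assumed simple integer quartile
    return ranked[:cutoff]

# A hypothetical LEA with four schools; only the neediest is identified.
lea = [('School A', 82.0), ('School B', 47.5),
       ('School C', 33.1), ('School D', 15.0)]
print(highest_need_quartile(lea))  # [('School A', 82.0)]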
We acknowledge the concern
expressed by some commenters that the
definition of ‘‘high-need school’’
permits schools in different LEAs (and
indeed, depending on the breakdown of
an LEA’s schools in the highest quartile
based on poverty, in the same LEA as
well) that serve communities with very
different levels of poverty all to be
considered high-need. However, for a
reporting system that will use
placement and retention rates in high-need schools as factors bearing on the
performance of each teacher preparation
program, States may consider applying
significantly greater weight to
employment outcomes for novice
teachers who work in LEAs and schools
that serve high-poverty areas than for
novice teachers who work in LEAs and
schools that serve low-poverty areas.
Moreover, while we acknowledge that
the definition of ‘‘high-need school’’ in
section 200(11)(A) of the HEA does not
apply to the statutory provisions
requiring the submission of SRCs and
IRCs, we believe that if we use the term
in the title II reporting system it is
reasonable that we should give some
deference to the definition used
elsewhere in title II of the HEA. For
reasons provided above, we believe the
definition can work well for the
indicators concerning teacher placement
and retention rates in high-need
schools.
Furthermore, we disagree with the
comments that the definition of ‘‘high-need school’’ should include high-need
subject areas. As defined in the
regulations, a ‘‘teacher preparation
program’’ is a program that leads to an
initial State teacher certification or
licensure in a specific field. Thus, the
State’s assessment of a teacher
preparation program’s performance
already focuses on a specific subject
area, including those we believe States
would generally consider to be high-need. In addition, maintaining focus on
placement of teachers in schools where
students come from families with high
actual or relative poverty levels, and not
on the subject areas they teach in those
schools, will help maintain a focus on
the success of students who have fewer
opportunities. We therefore do not see
the benefit of further burdening State
reporting by separately carrying into the
definition of a ‘‘high-need school,’’ as
commenters recommend, factors that
focus on high-need subjects.
We also disagree that the definition of
‘‘high-need school’’ should include an
additional criterion of low graduation
rates. While we agree that addressing
the needs of schools with low
graduation rates is a major priority, we
believe the definition of ‘‘high-need
school’’ should focus on the poverty
level of the area the school serves. The
measure is easy to calculate and
understand, and including this
additional component would
complicate the data collection and
analysis process for States. However, we
believe there is a sufficiently high
correlation between schools in high-poverty areas, which our definition
would deem high-need, and the schools
with low graduation rates on which the
commenters desire to have the
definition focus. We believe this
correlation means that a large
proportion of low-performing schools
would be included in a definition of
high-need schools that focuses on
poverty.
Changes: None.
Comments: None.
Discussion: Under paragraphs (i)(B)
and (ii) of the definition of ‘‘high-need
school’’ in the regulations, the
identification of a high-need school may
be based, in part, on the percentage of
students enrolled in the school that are
eligible for free or reduced price school
lunch under the Richard B. Russell
National School Lunch Act. With the
passage of the Healthy, Hunger-Free
Kids Act of 2010, the National School
Lunch Program (NSLP) now includes a
new universal meal option, the
‘‘Community Eligibility Provision’’ (CEP
or Community Eligibility). CEP reduces
burden at the household and local level
by eliminating the need to obtain
eligibility data from families through
individual household applications, and
permits schools, if they meet certain
criteria, to provide meal service to all
students at no charge to the students or
their families. To be eligible to
participate in Community Eligibility,
schools must: (1) Have at least 40
percent of their students qualify for free
meals through ‘‘direct certification’’ 8 in
the year prior to implementing
Community Eligibility; (2) agree to serve
free breakfasts and lunches to all
students; and, (3) agree to cover, with
non-Federal funds, any costs of
providing free meals to students above
the amounts provided by Federal
assistance.

8 ‘‘Direct certification’’ is a process by which schools identify students as eligible for free meals using data from, among other sources, the Supplemental Nutrition Assistance Program (SNAP) or the Temporary Assistance for Needy Families (TANF) program.
CEP schools are not permitted to use
household applications to determine a
reimbursement percentage from the
USDA. Rather, the USDA determines
meal reimbursement for CEP schools
based on ‘‘claiming percentages,’’
calculated by multiplying the
percentage of students identified
through the direct certification data by
a multiplier established in the Healthy,
Hunger-Free Kids Act of 2010 and set in
regulation at 1.6. The 1.6 multiplier
provides an estimate of the number of
students that would be eligible for free
and reduced-price meals in CEP schools
if the schools determined eligibility
through traditional means, using both
direct certification and household
applications. If a State uses NSLP data
from CEP schools when determining
whether schools are high-need schools,
it should not use the number of children
actually receiving free meals in CEP
schools to determine the percentage of
students from low-income families
because, in those schools, some children
receiving free meals live in households
that do not meet a definition of low-income. Therefore, States that wish to
use NSLP data for purposes of
determining the percentage of children
from low-income families in schools
that are participating in Community
Eligibility should use the number of
children for whom the LEA is receiving
reimbursement from the USDA (direct
certification total with the 1.6
multiplier), not to exceed 100 percent of
children enrolled. For example, we can
consider a school that participates in
Community Eligibility with an
enrollment of 1,000 children. The
school identifies 600 children through
direct certification data as eligible for
the NSLP. The school multiplies 600 by
1.6, and that result is 960. The LEA
would receive reimbursement through
the NSLP for meals for 960 children, or
96 percent of students enrolled. In a
ranking of schools in the LEA on the
basis of the percentage of students from
low-income families, even though 100
percent of students are receiving free
meals through NSLP, the school would
be ranked on the basis of 96 percent of
students from low-income families. The
use of claiming percentages for
identifying CEP schools as high-need
schools, rather than the number of
students actually receiving free lunch
through NSLP, ensures comparability,
regardless of an individual school’s
decision regarding participation in the
program.
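To make the arithmetic in this example concrete, the short sketch below (in Python) re-derives the 96 percent figure; the function name is hypothetical, but the 1.6 multiplier and the 100 percent cap follow the discussion above.

def cep_low_income_percentage(enrollment, directly_certified, multiplier=1.6):
    # Illustrative only: estimate the share of students from low-income
    # families at a Community Eligibility school using the
    # claiming-percentage approach described above. The direct
    # certification count is scaled by the 1.6 multiplier and capped at
    # 100 percent of enrollment.
    estimated = directly_certified * multiplier
    capped = min(estimated, enrollment)  # not to exceed 100 percent
    return 100.0 * capped / enrollment

# The example from the text: 1,000 students enrolled, 600 directly
# certified; 600 x 1.6 = 960, so the school is ranked at 96 percent.
print(cep_low_income_percentage(1000, 600))  # 96.0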
Changes: None.
Novice Teacher
Comments: Many commenters
expressed concerns about the proposed
definition of ‘‘new teacher.’’ These
commenters noted that the definition
distinguishes between traditional
teacher preparation programs and
alternative route teacher preparation
programs. The commenters argued that,
because alternative route teacher
preparation programs place their
participants as teachers while they are
still enrolled, these participants will
have already established teacher
retention rates by the time they
complete their programs. Traditional
program participants, on the other hand,
are only placed as teachers after earning
their credential, leaving their programs
at a comparative disadvantage under the
indicators of academic content
knowledge and teaching skills. Many of
these commenters contended that, as a
result, comparisons between traditional
teacher preparation programs and
alternative route teacher preparation
programs will be invalid. Others
recommended that the word ‘‘licensure’’
be changed to ‘‘professional licensure’’
to alleviate the need for States to
compare traditional teacher preparation
programs and alternative route teacher
preparation programs.
A number of commenters claimed that
the proposed definition confused the
attainment of certification or licensure
with graduation from a program, which
is often a precursor for certification or
licensure. They stated that the proposed
definition was not clear regarding how
States would report on recent program
completers who are entering the
classroom. Others noted that some
States allow individuals to be employed
as full-time teachers for up to five years
before obtaining licensure. They
contended that reporting all of these
categories together would provide
misleading statistics on teacher
preparation programs.
Other commenters specifically
requested that the definition include
pre-kindergarten teachers (if a State
requires postsecondary education and
training for pre-kindergarten teachers),
and that pre-kindergarten teachers be
reflected in teacher preparation program
assessment.
A number of commenters also
recommended that the word ‘‘recent’’ be
removed from the definition of ‘‘new
teacher’’ so that individuals who take
time off between completing their
teaching degree and obtaining a job in
a classroom are still considered to be
new teachers. They argued that
individuals who take time off to raise a
family or who do not immediately find
a full-time teaching position should still
be considered new teachers if they have
not already had full-time teaching
experience. Other commenters stated
that the term ‘‘new teacher’’ may result
in confusion based on State decisions
about when an individual may begin
teaching. For example, the commenters
stated that in Colorado teachers may
obtain an alternative license and begin
teaching before completing a formal
licensure program. As such, new
teachers may have been teaching for up
to three years at the point that the
proposed definition would consider
them to be a ‘‘new teacher,’’ and the
proposed definition therefore may cause
confusion among data entry staff about
which individuals should be reported as
new teachers. They recommended that
we replace the term ‘‘new teacher’’ with
the term ‘‘employed completer’’ because
the latter more clearly reflects that an
individual would need to complete his
or her program and have found
employment to be included in the
reporting requirements.
Discussion: The intent of the
proposed definition of ‘‘new teacher’’
was to capture those individuals who
have newly entered the classroom and
become responsible for student
outcomes. Upon review of the public
comments, we agree that the proposed
definition of ‘‘new teacher’’ is unclear
and needs revision.
We understand that many alternative
route teacher preparation programs
place their participants as teachers
while they are enrolled in their
programs, and many traditional
preparation program participants are
only placed after earning their
credential. Furthermore, we agree that
direct comparisons between alternative
route and traditional teacher
preparation programs could be
misleading if done without a more
complete understanding of the inherent
differences between the two types of
programs. For example, a recent
completer of an alternative route
program may actually have several more
years of teaching experience than a
recent graduate of a traditional teacher
preparation program, so apparent
differences in their performance may be
based more on the specific teacher’s
experience than the quality of the
preparation program.
In addition, we agree with
commenters that the preparation of
preschool teachers is a critical part of
improving early childhood education,
and inclusion of these staff in the
assessment of teacher preparation
program quality could provide valuable
insights. We strongly encourage States
that require preschool teachers to obtain
either the same level of licensure as
elementary school teachers, or a level of
licensure focused on preschool or early
childhood education, to include
preschool teachers who teach in public
schools in their assessment of the
quality of their teacher preparation
programs. However, we also recognize
that preschool licensure and teacher
evaluation requirements vary among
States and among settings, and therefore
believe that it is important to leave the
determination of whether and how to
include preschool teachers in this
measure to the States. We hope that
States will base their determination on
what is most supportive of high-quality
early childhood education in their State.
We also agree with commenters that
the proposed term ‘‘new teacher’’ may
result in confusion based on State
decisions about when individuals in an
alternative route program have the
certification they need to begin
teaching, and that, in some cases, these
individuals may have taught for up to
three years before the proposed
definition would consider them to be
new teachers. We believe, however, that
the term ‘‘employed completer’’ could
be problematic for alternative route
programs because, while their
participants are employed, they may not
have yet completed their program.
Likewise, we agree with commenters
who expressed concern that our
proposed definition of ‘‘new teacher’’
confuses the attainment of certification
or licensure with graduation from a
program leading to recommendation for
certification or licensure.
For all of these reasons, we are
removing the term and definition of
‘‘new teacher’’ and replacing it with the
term ‘‘novice teacher,’’ which we are
defining as ‘‘a teacher of record in the
first three years of teaching who teaches
elementary or secondary public school
students, which may include, at a
State’s discretion, preschool students.’’
We believe this new term and definition
more clearly distinguish between
individuals who have met all the
requirements of a teacher preparation
program (recent graduates), and those
who have been assigned the lead
responsibility for a student’s learning
(i.e., a teacher of record as defined in
this document) but who may or may not
have completed their teacher
preparation program. In doing so, we
also have adopted language that
captures as novice teachers those
individuals who are responsible for
student outcomes, because these are the
teachers on whom a program’s student
learning outcomes should focus. We
chose a period of three years because we
believe this is a reasonable timeframe in
which one could consider a teacher to
be a novice, and because it is the length
of time for which retention rate data
will be collected. In this regard, the
definition of novice teacher continues to
include three cohorts of teachers, but
treats the first year of teaching as the
first year as a teacher of record
regardless of whether the teacher has
completed a preparation program (as is
the case for most traditional programs)
or is still in the process of completing it (as
is the case for alternate route programs).
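For illustration only, the sketch below (in Python) encodes the three-year window in this definition; the parameter names and the simple school-year arithmetic are assumptions about how a State might implement the count, not part of the regulatory text.

def is_novice_teacher(first_year_as_teacher_of_record, current_year,
                      teaches_public_school=True):
    # Illustrative only: a teacher of record is a novice during the
    # first three years of teaching, counted from the first year as
    # teacher of record, whether or not the preparation program was
    # complete in that first year.
    if not teaches_public_school:
        return False
    years_teaching = current_year - first_year_as_teacher_of_record + 1
    return 1 <= years_teaching <= 3

# An alternative route participant who became a teacher of record in
# 2014 is still a novice teacher in 2016 (the third year of teaching).
print(is_novice_teacher(2014, 2016))  # True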
Finally, we agree with commenters
that we should remove the word
‘‘recent’’ from the definition, and have
made this change. As commenters
suggest, making this change will ensure
that individuals who take time off
between completing their teacher
preparation program and obtaining a job
in a classroom, or who do not
immediately find a full-time teaching
position, are still included in the
definition of ‘‘novice teacher.’’
Therefore, our definition of ‘‘novice
teacher’’ does not include the word
‘‘recent’’; the term instead clarifies that
a novice teacher is an individual who is
responsible for student outcomes, while
still allowing individuals who are recent
graduates to be categorized as novice
teachers for three years in order to
account for delays in placement.
Changes: We have removed the term
‘‘new teacher’’ and replaced it with the
term ‘‘novice teacher,’’ which we define
as ‘‘a teacher of record in the first three
years of teaching who teaches
elementary or secondary public school
students, which may include, at a
State’s discretion, preschool students.’’
See the discussion below regarding the
definition of ‘‘teacher of record.’’
Quality Clinical Preparation
Comments: Commenters provided a
number of specific suggestions for
revising the proposed definition of
‘‘quality clinical preparation.’’
Commenters suggested that the
definition include a requirement that
mentor teachers be ‘‘effective.’’ While
our proposed definition did not use the
term ‘‘mentor teacher,’’ we interpret the
comments as pertaining to the language
of paragraph (1) of the proposed
definition—the requirement that those
LEA-based personnel who provide
training be qualified clinical instructors.
Commenters also suggested that we
eliminate the phrase ‘‘at least in part’’
when referring to the training to be
provided by qualified clinical
instructors, and that we require the
clinical practice to include experience
with high-need and high-ability
students, as well as the use of data
analysis and development of classroom
management skills.
Other commenters suggested that the
definition require multiple clinical or
field experiences, or both, with effective
mentor teachers who (1) address the
needs of diverse, rural, or
underrepresented student populations
in elementary and secondary schools,
including English learners, students
with disabilities, high-need students,
and high-ability students, and (2) assess
the clinical experiences using a
performance-based protocol to
demonstrate teacher candidates’ mastery
of content and pedagogy.
Some commenters suggested that the
definition require that teacher
candidates use specific research-based
practices in addition to those currently
listed in the definition, including data
analysis, differentiation, and classroom
management. The commenters
recommended that all instructors be
qualified clinical instructors, and that
they ensure that clinical experiences
include working with high-need and
high-ability students because doing so
will provide a more robust and realistic
clinical experience.
Commenters further suggested that
‘‘quality clinical preparation’’ use a
program model similar to that utilized
by many alternative route programs.
This model would include significant
in-service training and support as a
fundamental and required component,
alongside an accelerated pre-service
training program. Another commenter
suggested the inclusion of residency
programs in the definition.
Commenters also suggested that the
Department adopt, for the title II
reporting system, the definitions of the
terms ‘‘clinical experience’’ and
‘‘clinical practice’’ used by CAEP so that
the regulatory definitions describe a
collaborative relationship between a
teacher preparation program and a
school district. Commenters explained
that CAEP defines ‘‘clinical
experiences’’ as guided, hands-on,
practical applications and
demonstrations of professional
knowledge of theory to practice, skills,
and dispositions through collaborative
and facilitated learning in field-based
assignments, tasks, activities, and
assessments across a variety of settings.
Commenters further explained that
CAEP defines ‘‘clinical practice’’ as
student teaching or internship
opportunities that provide candidates
with an intensive and extensive
culminating field-based set of
responsibilities, assignments, tasks,
activities, and assessments that
demonstrate candidates’ progressive
development of the professional
knowledge, skills, and dispositions to be
effective educators. Another commenter
recommended that we develop common
definitions of data and metrics on
quality clinical preparation.
Discussion: We agree with the
commenters that it is important to
ensure that mentor teachers and
qualified clinical instructors are
effective. Effective instructors play an
important role in ensuring that students
in teacher preparation programs receive
the best possible clinical training if they
are to become effective educators.
However, we believe that defining the
term ‘‘quality clinical preparation’’ to
provide that all clinical instructors,
whether LEA-based or not, meet specific
established qualification requirements
and use a training standard that is
publicly available (as required by
paragraph (1) of our definition)
reasonably ensures that students are
receiving clinical training from effective
instructors.
We agree with the recommendation to
remove the phrase ‘‘at least in part’’
from the definition, so that all training
must be provided by quality clinical
instructors.
We decline to revise the definition to
provide that quality clinical preparation
specifically include work with high-need or high-ability students, using data
analysis and differentiation, and
developing classroom management
skills. We agree that these are important
elements in developing highly effective
educators and could be an important
part of clinical preparation. However,
the purpose of this definition is to
highlight general characteristics of
quality clinical instruction that must be
reflected in how a State assesses teacher
preparation program performance,
rather than provide a comprehensive list
of elements of quality clinical
preparation. We believe that including
the additional elements suggested by the
commenters would result in an overly
prescriptive definition. We note,
however, that States are free to
supplement this definition with
additional criteria for assessing teacher
preparation program performance.
We also decline to revise the
definition to provide that quality
clinical preparation be assessed using a
performance-based protocol as a means
of demonstrating student mastery of
content and pedagogy. While this is a
strong approach that States may choose
to take, we are not revising the
definition to prescribe this particular
method because we believe it may in
some cases be overly burdensome.
We decline commenters’
recommendation to include significant
in-service training and support as a
fundamental and required component,
alongside an accelerated pre-service
training program. Similarly, we reject
the suggestion to include residency
programs in the definition. Here again,
we feel that both of these additional
qualifications would result in a
definition that is too prescriptive.
Moreover, as noted above, this
definition is meant to highlight general
characteristics of quality clinical
instruction that must be reflected in
how a State assesses teacher preparation
program performance, rather than to
provide a comprehensive list of
elements of quality clinical preparation.
Furthermore, while we understand
why commenters recommended that we
use CAEP’s definitions, we do not want
to issue an overly prescriptive definition
of what is and is not quality clinical
preparation, nor do we want to endorse
any particular organization’s approach.
Rather, we are defining a basic indicator
of teacher preparation program
performance for programs that do not
meet the program accreditation
provision in § 612.5(a)(4)(i). However,
States are free to build the CAEP
definitions into their own criteria for
assessing teacher preparation program
performance; furthermore, programs
may implement CAEP criteria.
We encourage States and teacher
preparation programs to adopt research-based practices of effective teacher
preparation for all aspects of their
program accountability systems. Indeed,
we believe the accountability systems
that States establish will help programs
and States to gather more evidence
about what aspects of clinical training
and other parts of preparation programs
lead to the most successful teachers.
However, we decline to develop more
precise regulatory definitions of data
and metrics on quality clinical
preparation because we feel that these
should be determined by the State in
collaboration with IHEs, LEAs, and
other stakeholders (see § 612.4(c)).
Changes: We have revised the
definition of ‘‘quality clinical
preparation’’ by removing the phrase ‘‘at
least in part’’ to ensure that all training
is provided by quality clinical
instructors.
Recent Graduate
Comments: Multiple commenters
recommended replacing the term
‘‘recent graduate’’ with the term
‘‘program completer’’ to include
candidates who have met all program
requirements, regardless of enrollment
in a traditional teacher preparation
program or an alternative route teacher
preparation program. In addition, they
recommended that States be able to
determine the criteria that a candidate
must satisfy in order to be considered a
program completer.
Other commenters recommended
changing the definition of ‘‘recent
graduate’’ to limit it to those graduates
of teacher preparation programs who are
currently credentialed and practicing
teachers. The commenters stated that
this would avoid penalizing programs whose completers become gainfully employed in a non-education field or enroll in graduate school when the State determines the program’s performance.
Discussion: We intended the term
‘‘recent graduate’’ to capture those
individuals who have met all the
requirements of the teacher preparation
program within the last three title II
reporting years. We recognize that a
number of alternative route programs do
not use the term ‘‘graduate’’ to refer to
individuals who have met those
requirements. However, using the term
‘‘recent graduate’’ to encompass both
individuals who complete traditional
teacher preparation programs and those
who complete alternative route
programs is simpler than creating a
separate term for alternative route
participants. Thus, we continue to
believe that the term ‘‘recent graduate,’’
as defined, appropriately captures the
relevant population for purposes of the
regulations.
Furthermore, we decline to amend the
definition to include only those
individuals who are currently
credentialed and practicing teachers.
Doing so would create confusion
between this term and ‘‘novice teacher’’
(defined elsewhere in this document).
The term ‘‘novice teacher’’ is designed
to capture individuals who are in their
first three years of teaching, whereas the
definition of ‘‘recent graduate’’ is
designed to capture individuals who
have completed a program, regardless of
whether they are teaching. In order to
maintain this distinction, we have
retained the prohibitions that currently
exist in the definitions in the title II
reporting system against using
recommendation to the State for
licensure or becoming a teacher of
record as a condition of being identified
as a recent graduate.
We are, however, making slight
modifications to the proposed
definition. Specifically, we are
removing the reference to being hired as
a full-time teacher and instead using the
phrase ‘‘becoming a teacher of record.’’
We do not believe this substantially
changes the meaning of ‘‘recent
graduate,’’ but it does clarify which
newly hired, full-time teachers are to be
captured under the definition.
We decline to provide States with
additional flexibility in establishing
other criteria for making a candidate a
program completer because we believe
that the revised definition of the term
‘‘recent graduate’’ provides States with
sufficient flexibility. We believe that the
additional flexibility suggested by the
commenters would result in definitions
that stray from the intent of the
regulations.
Some commenters expressed concern
that programs would be penalized if
some individuals who have completed
them go on to become gainfully
employed in a non-education field or
enroll in graduate school. We feel that
it is important for the public and
prospective students to know the degree
to which participants in a teacher
preparation program do not become
teachers, regardless of whether they
become gainfully employed in a non-education field. However, we think it is
reasonable to allow States flexibility to
exclude certain individuals when
determining the teacher placement and
retention rates (i.e., those recent
graduates who have taken teaching
positions in another State, or who have
enrolled in graduate school or entered
military service). For these reasons, we
have not adopted the commenters’
recommendation to limit the definition
of ‘‘recent graduate’’ to those graduates
of teacher preparation programs who are
currently credentialed and practicing
teachers.
Changes: We have revised the
definition of ‘‘recent graduate’’ to clarify
that a teacher preparation program may
not use the criterion ‘‘becoming a
teacher of record’’ when it determines if
an individual has met all of the program
requirements.
Rigorous Teacher Candidate Exit
Qualifications
Comments: One commenter
recommended that we remove the
reference to entry requirements from the
proposed definition of ‘‘rigorous teacher
entry and exit requirements’’ because
using rigorous entry requirements to
assess teacher preparation program
performance could compromise the
mission of minority-serving institutions,
which often welcome disadvantaged
students and develop them into
profession-ready teachers. Commenters
said that those institutions and others
seek, in part, to identify potential
teacher candidates whose backgrounds
are similar to students they may
ultimately teach but who, while not
meeting purely grade- or test-based
entry requirements, could become well-qualified teachers through an effective
preparation program.
Commenters recommended adding a
number of specific items to the
definition of exit qualifications, such as
classroom management, differentiated
instructional planning, and an
assessment of student growth over time.
Another commenter suggested
amending the definition to include
culturally competent teaching, which
the commenter defined as the ability of
educators to teach students intellectual,
social, emotional, and political
knowledge by utilizing their diverse
cultural knowledge, prior experiences,
linguistic needs, and performance
styles. This commenter stated that
culturally competent teaching is an
essential pedagogical skill that teachers
must possess. The commenter also
recommended that we include as
separate terms and define ‘‘culturally
competent education’’ and ‘‘culturally
competent leadership’’. Finally, this
commenter requested that we develop
guidance on culturally and
linguistically appropriate approaches in
education.
Discussion: Although overall research
findings regarding the effect of teacher
preparation program selectivity on
student outcomes are generally mixed,
some research indicates there is a
correlation between admission
requirements for teacher preparation
programs and the teaching effectiveness
of program graduates.9 In addition,
under our proposed definition, States
and programs could define ‘‘rigorous
entry requirements’’ in many and varied
ways, including through evidence of
other skills and characteristics
determined by programs to correlate
with graduates’ teaching effectiveness,
such as grit, disposition, or
performance-based assessments relevant
to teaching. Nonetheless, we understand
that prospective teachers who
themselves come from high-need
schools—and who may therefore bring a
strong understanding of the
backgrounds of students they may
eventually teach—could be
disproportionately affected by grade-based or test-based entry requirements.
Additionally, because the primary
emphasis of the regulations is to ensure
that candidates graduate from teacher
preparation programs ready to teach, we
agree that measures of program
effectiveness should emphasize rigorous
exit requirements over program entry
requirements. Therefore, we are revising
the regulations to require only rigorous
exit standards.

9 See, for example: Henry, G., & Bastian, K. (2015). Measuring Up: The National Council on Teacher Quality’s Ratings of Teacher Preparation Programs and Measures of Teacher Performance.
In our definition of rigorous exit
requirements, we identified four basic
characteristics that we believe all
teacher candidates should possess.
Regarding the specific components of
rigorous exit requirements that
commenters suggested (such as
standards-based and differentiated
planning, classroom management, and
cultural competency), the definition
does not preclude States from including
those kinds of elements as rigorous exit
requirements. We acknowledge that
these additional characteristics,
including cultural competency, may
also be important, but we believe that
the inclusion of these additional
characteristics should be left to the
discretion of States, in consultation with
their stakeholders. To the extent that
they choose to include them, States
would need to develop definitions for
each additional element. We also
encourage interested parties to bring
these suggestions forward to their States
in the stakeholder engagement process
required of all States in the design of
their performance rating systems (see
§ 612.4(c)). Given that we are not adding
cultural competency into the definition
of rigorous candidate exit requirements,
we are not adding the recommended
related definitions or developing
guidance on this topic at this time.
In addition, as we reviewed
comments, we realized both that the
phrase ‘‘at a minimum’’ was misplaced
in the sentence and should refer not to
the use of an assessment but to the use
of validated standards and measures of
the candidate’s effectiveness, and that
the second use of ‘‘measures of’’ in the
phrase ‘‘measures of candidate
effectiveness including measures of
curriculum planning’’ was redundant.
Changes: We have revised the term
‘‘rigorous teacher candidate entry and
exit qualifications’’ by removing entry
qualifications. We have also revised the
language in § 612.5(a)(4)(ii)(C)
accordingly. In addition, we have
moved the phrase ‘‘at a minimum’’ from
preceding ‘‘assessment of candidate
performance’’ to preceding ‘‘on
validated professional teaching
standards.’’ Finally, we have revised the
phrase ‘‘measures of candidate
effectiveness including measures of
curriculum planning’’ to read ‘‘measures
of candidate effectiveness in curriculum
planning.’’
Student Achievement in Non-Tested
Grades and Subjects
Comments: Multiple commenters
opposed the definition of the term
‘‘student achievement in non-tested
grades and subjects,’’ and provided
different recommendations on how the
definition should be revised. Some
commenters recommended removing
the definition from the regulations
altogether, noting that, for some subjects
(such as music, art, theater, and
physical education), there simply are
not effective or valid ways to judge the
growth of student achievement by test
scores. Others recommended that
student achievement in non-tested
grades and subjects be aligned to State
and local standards. These commenters
asserted that alignment with State and
local standards will ensure rigor and
consistency for non-tested grades and
subjects. A number of commenters also
recommended that teachers who teach
in non-tested subjects should be able to
use scores from an already administered
test to count toward their effectiveness
rating, a policy that some States have
already implemented to address student
achievement in non-tested subjects.
Discussion: We have adopted the
recommendation to remove the
definition of ‘‘student achievement in
non-tested grades and subjects,’’ and
have moved the substance of this
definition to the definition of ‘‘student
growth.’’ Upon review of comments
regarding this definition, as well as
comments pertaining to student learning
outcomes more generally, we have also
altered the requirements in
§ 612.5(a)(1)(ii) for the calculation of
student learning outcomes—specifically
by permitting a State to use another
State-determined measure relevant to
calculating student learning outcomes
instead of only student growth or a
teacher evaluation measure. We believe
that the increased flexibility resulting
from these changes sufficiently
addresses commenter concerns
regarding the definition of ‘‘student
achievement in non-tested grades and
subjects.’’ We also believe it is
important that the regulations permit
States to determine an effective and
valid way to measure growth for
students in all grades and subjects not
covered by section 1111(b)(2) of the
ESEA, as amended by the ESSA, and
that the revisions we have made provide
sufficient flexibility for States to do so.
Under the revised definition of
student growth, States must use
measures of student learning and
performance, such as students’ results
on pre-tests and end-of-course tests,
objective performance-based
assessments, student learning
objectives, student performance on
English language proficiency
assessments, and other measures of
student achievement that are rigorous,
comparable across schools, and
consistent with State requirements.
Further, as a number of commenters
recommended that the definition of
student achievement in non-tested
grades and subjects include alignment
to State and local standards, we feel that
this new definition of student growth, in
conjunction with altered requirements
in the calculation of student learning
outcomes, is sufficiently flexible to
allow such alignment. Further, a State
could adopt the commenters’
recommendations summarized above
under the revised requirements for the
calculation of student learning
outcomes and the revised definition of
‘‘student growth.’’
We note that the quality of individual
teachers is not being measured by the
student learning outcomes indicator.
Rather, it will help measure overall
performance of a teacher preparation
program through an examination of
student growth in the many grades and
subjects taught by novice teachers that
are not part of the State’s assessment
system under section 1111(b) of the
ESEA, as amended by the ESSA.
Changes: The definition of student
achievement in non-tested grades and
subjects has been removed. The
substance of the definition has been
moved to the definition of student
growth.
Student Achievement in Tested Grades
and Subjects
Comments: A number of commenters
opposed the definition of ‘‘student
achievement in tested grades and
subjects’’ because of its link to ESEA
standardized test scores and the
definitions used in ESEA flexibility.
Commenters found this objectionable
because these sources are subject to
change, which could present
complications in future implementation
of the regulations. Further, the
commenters asserted that standardized
testing and value-added models
(VAM) 10 are not valid or reliable and
should not be used to assess teacher
preparation programs.
Discussion: We have adopted the
recommendation to remove the
definition of ‘‘student achievement in
tested grades and subjects.’’ While we
have moved the substance of this
definition to the definition of ‘‘student
growth,’’ we have also altered the
requirements for the calculation of
student learning outcomes upon review
of comments related to this definition
and comments pertaining to student
learning outcomes more generally. We
believe that the increased flexibility
resulting from these changes sufficiently
addresses commenter concerns
regarding the definition of ‘‘student
achievement in tested grades and
subjects.’’ We believe it is important
that the regulations permit States to
determine an effective and valid way to
measure growth for students in grades
and subjects covered by section
1111(b)(2) of the ESEA, as amended by
ESSA, and that the revisions we have
made provide sufficient flexibility for
States to do so.
While the revised requirement does
not necessitate the use of ESEA
standardized test scores, we believe that
the use of such scores could be a valid
and reliable measure of student growth
and encourage its use in determining
student learning outcomes where
appropriate.11
We now turn to the comments from
those who asserted that maintaining a
link between this definition and
conditions of waivers granted to States
under ESEA flexibility is problematic.
While we maintain the substance of this
definition in the definition of ‘‘student
growth,’’ in view of section 4(c) of
ESSA, which terminates waivers the
Department granted under ESEA
flexibility as of August 1, 2016, we have
revised the requirements for calculation
of student learning outcomes in
§ 612.5(a)(1)(ii) to allow States the
flexibility to use ''another State-
determined measure relevant to
calculating student learning outcomes.''
We believe that doing so allows the
flexibility recommended by
commenters. In addition, as we have
stressed above in the discussion of
Federal-State-Institution Relationship,
Generally, under the regulations States
have flexibility in how to weight each
of the indicators of academic content
knowledge and teaching skills.

10 In various comments, commenters used the phrases ''value-added modeling,'' ''value-added metrics,'' ''value-added measures,'' ''value-added methods,'' ''value-added estimation,'' and ''value-added analysis.'' For purposes of these comments, we understand the use of these terms to reflect similar ideas and concepts, so for ease of presentation of our summary of the comments and our responses to them, we use the single phrase ''value-added models,'' abbreviated as VAM.

11 See, for example: Chetty, R., Friedman, J., & Rockoff, J. (2014). Measuring the Impacts of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood. American Economic Review, 104(9), 2633–2679 (hereafter referred to as ''Chetty et al.'').
Finally, the use of value-added
measures is not specifically included
in the definition in the revised
requirements for the calculation of
student learning outcomes, or otherwise
required by the regulations. However,
we believe that there is convincing
evidence that value-added scores, based
on standardized tests, can be valid and
reliable measures of teacher
effectiveness and a teacher’s effect on
long-term student outcomes.12 See our
response to comments regarding
§ 612.5(a)(1), which provides an in-depth discussion of the use of student
growth and VAM, and why we firmly
believe that our student learning
outcome measure, which references
‘‘student achievement in tested grades
and subjects’’ is valid and reliable.
Changes: The definition of student
achievement in tested grades and
subjects has been removed. The
substance of the definition has been
moved to the definition of student
growth.
Student Growth
Comments: Multiple commenters
opposed the proposed definition of
‘‘student growth’’ because the
definition, which was linked to ESEA
standardized test scores and definitions
of terms used for Race to the Top, would
also be linked to VAM, which
commenters stated are not valid or
reliable. Additionally, other
commenters disagreed with the
suggestion that student growth may be
defined as a simple comparison of
achievement between two points in
time, which they said downplays the
potential challenges of incorporating
such measures into evaluation systems.
A number of commenters also stated
that the definition of ‘‘student growth’’
has created new testing requirements in
areas that were previously not tested.
They urged that non-tested grades and
subjects should not be a part of the
definition of student growth. By
including them in this definition, the
commenters argued, States and school
districts would be required to test
students in currently non-tested areas,
which they contended should remain
non-tested. Several commenters also
stated that, even as the value of yearly
student testing is being questioned, the
regulations would effectively add cost
and burden to States that have not
sought ESEA flexibility or received Race
to the Top funds.

12 See, for example: Chetty, et al. at 2633–2679.
Discussion: These regulations define
student growth as the change in student
achievement between two or more
points in time, using a student’s score
on the State’s assessments under section
1111(b)(2) of the ESEA, as amended by
ESSA, or other measures of student
learning and performance, such as
student results on pre-tests and end-of-course tests; objective performance-based assessments; student learning
objectives; student performance on
English language proficiency
assessments; and other measures that
are rigorous, comparable across schools,
and consistent with State guidelines.
Due to the removal of separate
definitions of student achievement in
tested grades and subjects and student
achievement in non-tested grades and
subjects, and their replacement by one
flexible definition of student growth, we
believe we have addressed many
concerns raised by commenters. This
definition, for example, no longer
requires States to use ESEA
standardized test scores to measure
student growth in any grade or subject,
and does not require the use of
definitions of terms used for Race to the
Top.
We recognize commenters’ assertion
that student growth defined as a
comparison of achievement between
two points in time downplays the
potential challenges of incorporating
such measures into evaluation systems.
However, since the revised definition of
student growth and the revised
requirements for calculating student
learning outcomes allow States a large
degree of flexibility in how such
measures are applied, we do not believe
the revised definition will place a
significant burden on States to
implement and incorporate these
concepts into their teacher preparation
assessment systems.
We have addressed commenters’
recommendation that non-tested grades
and subjects not be a part of the
definition of student growth by
removing the definition of student
achievement in non-tested grades and
subjects, and providing States with
flexibility in how they apply the
definition of student growth, should
they choose to use it for measuring a
program’s student learning outcomes.
However, we continue to believe that
student growth in non-tested grades and
subjects can and should be measured at
regular intervals. Further, the revisions
to the definition address commenters’
concerns that the regulations would
effectively add cost and burden to States
that have not sought ESEA flexibility or
received Race to the Top funds.
Consistent with the definition, and in
conjunction with the altered
requirements for the calculation of
student learning outcomes, and the
removal of the definition of student
achievement in tested grades and
subjects as well as the definition of
student achievement in non-tested
grades and subjects, States have
significant flexibility to determine the
methods they use for measuring student
growth and the extent to which it is
factored into a teacher preparation
program’s performance rating. The
Department’s revised definition of
‘‘student growth’’ is meant to provide
States with more flexibility in response
to commenters. Additionally, if a State
chooses to use a method that controls
for additional factors affecting student
and teacher performance, like VAM, the
regulations permit it to do so. See our
response to comments in § 612.5(a)(1),
which provides an in-depth discussion
of the use of student growth and VAM.
Changes: The definition of student
growth has been revised to be the
change in student achievement between
two or more points in time, using a
student’s scores on the State’s
assessments under section 1111(b)(2) of
the ESEA or other measures of student
learning and performance, such as
student results on pre-tests and end-of-course tests; objective performance-based assessments; student learning
objectives; student performance on
English language proficiency
assessments; and other measures that
are rigorous, comparable across schools,
and consistent with State guidelines,
rather than the change between two or
more points in time in student
achievement in tested grades and
subjects and non-tested grades and
subjects.
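
For illustration only, the following minimal sketch shows one way the revised definition of student growth might be operationalized; the function, data, and score scale are hypothetical, and nothing in the regulations prescribes any particular computation.

# Hypothetical sketch: ''student growth'' as the change in student
# achievement between two points in time, here a pre-test and an
# end-of-course test reported on a common scale.

def student_growth(pre_score: float, post_score: float) -> float:
    """Growth as the change in achievement between two measurement points."""
    return post_score - pre_score

# A simple class-level aggregate a State might examine for a novice teacher.
pre_post_pairs = [(310, 355), (295, 330), (320, 340)]  # hypothetical scale scores
average_growth = sum(student_growth(pre, post)
                     for pre, post in pre_post_pairs) / len(pre_post_pairs)
print(f"Average growth: {average_growth:.1f} scale points")  # 33.3

Other rigorous measures listed in the definition, such as student learning objectives or objective performance-based assessments, could stand in for the test scores shown here.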
Student Learning Outcomes
Comments: None.
Discussion: Due to many commenters’
concerns regarding State flexibility, the
use of ESEA standardized test scores,
and the relationships between our
original proposed requirements and
those under ESEA flexibility, we have
included a provision in
§ 612.5(a)(1)(ii)(C) allowing States to use
a State-determined measure relevant to
calculating student learning outcomes.
This measure may be used alone, or in
combination with student growth and a
teacher evaluation measure, as defined.
As with the measure for student growth,
State-determined learning outcomes
must be rigorous, comparable across
schools, and consistent with State
guidelines. Additionally, such measures
should allow for meaningful
differentiation between teachers. If a
State did not select an indicator that
allowed for such meaningful
differentiation among teachers, and
instead chose an indicator that led to
consistently high results among teachers
without reflecting existing
inconsistencies in student learning
outcomes—such as average daily
attendance in schools, which is often
uniformly quite high even in the lowest
performing schools—the result would
be very problematic. This is because
doing so would not allow the State to
meaningfully differentiate among
teachers for the purposes of identifying
which teachers, and thus which teacher
preparation programs, are making a
positive contribution to improving
student learning outcomes.
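
To make this concern concrete, the following hypothetical sketch shows a simple screen a State might apply to a candidate indicator; the threshold and data are invented, and the regulations do not prescribe any such test.

# Hypothetical sketch: screening a candidate indicator for its capacity
# to meaningfully differentiate among teachers. An indicator that is
# uniformly high for nearly all teachers, like average daily attendance,
# carries little signal.
from statistics import pstdev

attendance = [0.96, 0.95, 0.97, 0.96, 0.95]  # uniformly high; little spread
growth = [12.0, 3.5, 8.0, -2.0, 15.5]        # spreads teachers apart

def differentiates(indicator_values, min_spread):
    # Require a minimum dispersion before treating the indicator as
    # capable of meaningful differentiation.
    return pstdev(indicator_values) >= min_spread

print(differentiates(attendance, min_spread=0.05))  # False
print(differentiates(growth, min_spread=2.0))       # True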
Further, upon review of the proposed
regulations, we recognized that the
structure could be confusing. In
particular, we were concerned that
having a definition for the term ‘‘student
learning outcomes’’ in § 612.2, when it
largely serves to operationalize other
definitions in the context of § 612.5, was
not the clearest way to present these
requirements. We therefore are moving
the explanations and requirements of
this term into the text of § 612.5(a).
Changes: We have altered the
requirements in § 612.5(a)(1)(ii) for
calculating ‘‘student learning outcomes’’
to provide States with additional
flexibility. We have also removed the
proposed definition of ‘‘student learning
outcomes’’ from § 612.2, and moved the
substance of the text and requirements
of the student learning outcomes
definition to § 612.5(a)(1).
Survey Outcomes
Comments: Commenters argued that
States need flexibility on the types of
indicators used to evaluate and improve
teacher preparation programs. They
suggested that States be required to
gather data through teacher and
employer surveys in a teacher’s first
three years of teaching, but be afforded
the flexibility to determine the content
of the surveys. Commenters added that
specific content dictated from the
Federal level would limit innovation in
an area where best practices are still
developing.
Some commenters also stated that it is
important to follow graduates through
surveys for their first five years of
employment, rather than just their first
year of teaching (as proposed in the
regulations) to obtain a rich and well-
informed understanding of the
profession over time, as the first five
years is a significant period when
teachers decide whether to leave or stay
in the profession.
Commenters were concerned about
the inclusion of probationary certificate
teachers in surveys of teachers and
employers for purposes of reporting
teacher preparation program
performance. Commenters noted that, in
Texas, alternate route participants may
be issued a probationary certificate that
allows the participants to be employed
as teachers of record for a period of up
to three years while they are completing
the requirements for a standard
certificate. As a result, these
probationary certificate holders would
meet the proposed definition of ‘‘new
teacher’’ and, therefore, they and their
supervisors would be asked to respond
to surveys that States would use to
determine teacher preparation program
performance, even though they have not
completed their programs.
In addition, commenters asked which
States are responsible for surveying
teachers from a distance education
program and their employers or
supervisors.
Discussion: The regulations do not
specify the number or type of questions
to be included in employer or teacher
surveys. Rather, we have left decisions
about the content of these surveys to
each State. We also note that, under the
regulations, States may survey novice
teachers and their employers for a
number of consecutive years, even
though they are only required to survey
during the first year of teaching.
The goal of every teacher preparation
program is to effectively prepare
aspiring teachers to step into a
classroom and teach all of their students
well. As the regulations are intended to
help States determine whether each
teacher preparation program is meeting
this goal, we have decided to focus on
novice teachers in their first year of
teaching, regardless of the type of
certification the teachers have or the
type of teacher preparation program
they attended or are attending. When a
teacher is given primary responsibility
for the learning outcomes of a group of
students, the type of program she
attended or is still attending is largely
irrelevant—she is expected to ensure
that her students learn. We expect that
alternative route teacher preparation
programs are ensuring that the teachers
they place in classrooms prior to
completion of their coursework are
sufficiently prepared to ensure student
growth in that school year. We
recognize that these teachers, and those
who completed traditional teacher
preparation programs, will grow and
develop as teachers in their first few
years in the classroom.
We agree with commenters who
suggested that surveying teachers and
their employers about the quality of
training in the teachers’ preparation
program would provide a richer and more
well-informed understanding of the
programs over time. However, we
decline to require that States survey
novice teachers and their employers for
more than one year. As an indicator of
novice teachers’ academic content
knowledge and teaching skills, these
surveys are a much more robust
indicator of program performance in
preparing novice teachers for teaching
when completed in the first year of
teaching. At that point, the program is
still fresh in respondents' minds, and
teachers and employers can best focus
on the unique impact of
the program independent of other
factors that may contribute to teaching
quality such as on-the-job training.
However, if they so choose, States are
free to survey novice teachers and their
employers in subsequent years beyond a
teacher’s first year of teaching, and
consider the survey results in their
assessment of teacher preparation
program effectiveness.
For teacher preparation programs
provided through distance education, a
State must survey the novice teachers
described in the definition of ‘‘teacher
survey’’ who have completed such a
program and who teach in that State, as
well as the employers of those same
teachers.
Changes: None.
Comments: None.
Discussion: Upon review, we
recognized that the structure of the
proposed regulations could be
confusing. In particular, we were
concerned that having a definition for
the term ‘‘survey outcomes’’ in § 612.2,
when it largely serves to operationalize
other definitions in the context of
§ 612.5, was not the clearest way to
present these requirements. We
therefore are removing the definition of
‘‘survey outcomes’’ from § 612.2 and
moving its explanations and
requirements into § 612.5(a)(3).
Through this change, we are clarifying
that the surveys will assess whether
novice teachers possess the academic
content knowledge and teaching skills
needed to succeed in the classroom. We
do so for consistency with § 612.5(a),
which requires States to assess, for each
teacher preparation program, indicators
of academic content knowledge and
teaching skills of novice teachers from
that program. We also have removed the
provision that the survey is of teachers
in their first year of teaching in the State
where the teacher preparation program is
located, and instead provide that the
survey is of teachers in their first year
teaching in the State. This change is
designed to be consistent with new
language related to the reporting of
teacher preparation programs provided
through distance education, as
discussed later in this document.
Finally, we are changing the term ‘‘new
teacher’’ to ‘‘novice teacher’’ for the
reasons discussed under the definition
of ‘‘novice teacher.’’
Changes: We have moved the content
of the proposed definition of ‘‘survey
outcomes’’ from § 612.2, with edits for
clarity, to § 612.5(a)(3). We have also
replaced the term ‘‘new teacher’’ with
‘‘novice teacher’’ in § 612.5(a)(3).
Teacher Evaluation Measure
Comments: Many commenters noted
that the proposed definition of ‘‘teacher
evaluation measure’’ is based on the
definition of ‘‘student growth.’’
Therefore, commenters stated that the
definition is based on VAM, which they
argued, citing research, is not valid or
reliable for this purpose.
Discussion: The proposed definition
of ‘‘teacher evaluation measure’’ did
include a measure of student growth.
However, while VAM reflects a
permissible way to examine student
growth, neither in the final definition of
teacher evaluation measure nor
anywhere else in these regulations is the
use of VAM required. For a more
detailed discussion of the use of VAM,
please see the discussion of
§ 612.5(a)(1).
Changes: None.
Comments: Commenters stated that
the proposed definitions of ‘‘teacher
evaluation measure’’ and ‘‘student
growth’’ offer value from a reporting
standpoint and should be used when
available. Commenters also noted that it
would be useful to understand novice
teachers’ impact on student growth and
recommended that States be required to
report student growth outcomes
separately from teacher evaluation
measures where both are available.
Commenters also noted that not all
States may have teacher evaluation
measures that meet the proposed
definition because not all States require
student growth to be a significant factor
in teacher evaluations, as required by
the proposed definition. Other
commenters suggested that, while
student growth or achievement should
be listed as the primary factors in
calculating teacher evaluation measures,
other factors such as teacher portfolios
and student and teacher surveys should
be included as secondary
considerations.
Some commenters felt that any use of
student performance to evaluate
effectiveness of teacher instruction
needs to include multiple measures over
a period of time (more than one to two
years) and take into consideration the
context (socioeconomic, etc.) in which
the instruction occurred.
Discussion: We first stress that the
regulations allow States to use ‘‘teacher
evaluation measures’’ as one option for
student learning outcomes; use of these
measures is not required. States also
may use student growth, another
State-determined measure relevant to
calculating student learning outcomes,
or a combination of these three options.
Furthermore, while we agree that
reporting on student growth separately
from teacher evaluation measures would
likely provide the public with more
information about the performance of
novice teachers, we are committed to
providing States the flexibility to
develop performance systems that best
meet their specific needs. In addition,
because of the evident cost and burden
of disaggregating student growth data
from teacher evaluation measures, we
do not believe that the HEA title II
reporting system is the right vehicle for
gathering this information. As a result,
we decline to require separate reporting.
States may consider having LEAs
incorporate teacher portfolios and
student and teacher surveys into teacher
evaluation measures, as the commenters
recommended. In this regard, we note
that the definition of ‘‘teacher
evaluation measure’’ requires use of
multiple valid measures, and we believe
that teacher evaluation systems that use
such additional measures of
professional practice provide the best
information on a teacher’s effectiveness.
We also note that, because the definition
of ‘‘novice teacher’’ encompasses the
first three years as a teacher of record,
teacher evaluation measures that
include up to three years of student
growth data are acceptable measures of
student learning outcomes under
§ 612.5(a)(1). In addition, States can
control for different kinds of student
and classroom characteristics in ways
that apply our definition of student
learning outcomes and student growth.
See the discussion of § 612.5(a)(2) for
further information on the student
learning outcomes indicator.
With regard to the comment that some
States lack teacher evaluation measures
that meet the proposed definition
because they do not require student
growth to be a significant factor in
teacher evaluations, we previously
explained in our discussion of § 612.1
(and do so again in our discussion of
§ 612.6) our reasons for removing any
proposed weightings of indicators from
these regulations. Thus we have
removed the phrase, ‘‘as a significant
factor,’’ from the definition of teacher
evaluation measure.
Changes: We have removed the words
‘‘as a significant factor’’ from the second
sentence of the definition.
Comments: None.
Discussion: In response to the student
learning outcomes indicator, some
commenters recommended that States
be allowed to use the teacher evaluation
system they have in place. By proposing
definitions relevant to student learning
outcomes that align with previous
Department initiatives, our intention
was that the teacher evaluation systems
of States that include student growth as
a significant factor, especially those that
had been granted ESEA flexibility,
would meet the requirements for
student learning outcomes under the
regulations. Upon further review, we
determined that revision to the
definition of ‘‘teacher evaluation
measure’’ is necessary to ensure that
States are able to use teacher evaluation
measures to collect data for student
learning outcomes if the teacher
evaluation measures include student
growth, and in order to ensure that the
definition describes the measure itself,
which is then operationalized through a
State’s calculation.
We understand that some States and
districts that use student growth in their
teacher evaluation systems do not do so
for teachers in their first year, or first
several years, of teaching. We are
satisfied that such systems meet the
requirements of the regulations so long
as student growth is used as one of the
multiple valid measures to assess
teacher performance within the first
three years of teaching. To ensure such
systems meet the definition of ‘‘teacher
evaluation measure,’’ we are revising
the phrase ‘‘in determining each
teacher’s performance level’’ in the first
sentence of the definition so that it
reads ‘‘in determining teacher
performance.’’
Furthermore, for the reasons included
in the discussion of §§ 612.1 and 612.6,
we are removing the phrase ‘‘as a
significant factor’’ from the definition.
In addition, we are removing the phrase
‘‘of performance levels’’ from the second
sentence of the definition, as inclusion
of that phrase in the NPRM was an
error.
In addition, we have determined that
the parenthetical phrase beginning
‘‘such as’’ could be shortened without
changing the intent, which is to provide
examples of other measures of
professional practice.
Finally, in response to commenters’
desire for additional flexibility in
calculating student learning outcomes,
and given the newly enacted ESSA,
under which waivers granted under
ESEA flexibility will terminate as of
August 1, 2016, we have revised the
regulations so that States may use any
State-determined measure relevant to
calculating student learning outcomes,
alone or in combination with student
growth and a teacher evaluation
measure.
Changes: We have revised the
definition of ‘‘teacher evaluation
measure’’ by removing the phrase ‘‘By
grade span and subject area and
consistent with statewide guidelines,
the percentage of new teachers rated at
each performance level under’’ and
replaced it with ‘‘A teacher’s
performance level based on’’. We have
removed the final phrase ‘‘determining
each teacher’s performance level’’ and
replaced it with ‘‘assessing teacher
performance.’’ We have also revised the
parenthetical phrase beginning ‘‘such
as’’ so that it reads ‘‘such as
observations based on rigorous teacher
performance standards, teacher
portfolios, and student and parent
surveys.’’
Teacher of Record
Comments: Commenters requested
that the Department establish a
definition of ‘‘teacher of record,’’ but
did not provide us with recommended
language.
Discussion: We used the term
‘‘teacher of record’’ in the proposed
definition of ‘‘new teacher,’’ and have
retained it as part of the definitions of
‘‘novice teacher’’ and ‘‘recent graduate.’’
We agree that a definition of ‘‘teacher of
record’’ will be helpful and will add
clarity to those two definitions.
We are adopting a commonly used
definition of ‘‘teacher of record’’ that
focuses on a teacher or co-teacher who
is responsible for student outcomes and
determining a student’s proficiency in
the grade or subject being taught.
Changes: We have added to § 612.2 a
definition of ‘‘teacher of record,’’ and
defined it to mean a teacher (including
a teacher in a co-teaching assignment)
who has been assigned the lead
responsibility for student learning in a
subject or course section.
Teacher Placement Rate
Comments: Some commenters
questioned whether it was beyond the
Department’s authority to set detailed
expectations for teacher placement
rates. Several commenters expressed
concerns about which individuals
would and would not be counted as
‘‘placed’’ when calculating this rate. In
this regard, the commenters argued that
the Federal government should not
mandate the definitive list of
individuals whom a State may exclude
from the placement rate calculation;
rather, they stated that those decisions
should be entirely up to the States.
Discussion: In response to
commenters who questioned the
Department’s authority to establish
detailed expectations for a program’s
teacher placement rate, we note that the
regulations simply define the teacher
placement rate and how it is to be
calculated. The regulations also
generally require that States use it as an
indicator of academic content and
teaching skills when assessing a
program’s level of performance. And
they require this use because we
strongly believe both (1) that a
program’s teacher placement rate is an
important indicator of academic content
knowledge and teaching skills of recent
graduates, and (2) that a rate that is very
low, like one that is very high, is a
reasonable indicator of whether the
program is successfully performing one
of its basic functions—to produce
individuals who are hired as
teachers of record.
The regulations do not, as the
commenters state, establish any detailed
expectations of what such a low (or
high) teacher placement rate is or
should be. This they leave up to each
State, in consultation with its group of
stakeholders as required under
§ 612.4(c).
We decline to accept commenters’
recommendations to allow States to
determine who may be excluded from
placement rate calculations beyond the
exclusions the regulations permit in the
definition of ‘‘teacher placement rate.’’
Congress has directed that States report
their teacher placement rate data ‘‘in a
uniform and comprehensible manner
that conforms to the definitions and
methods established by the Secretary.’’
See section 205(a) of the HEA. We
believe the groups of recent graduates
that we permit States, at their
discretion, to exclude from these
calculations—teachers teaching out of
State and in private schools, and
teachers who have enrolled in graduate
school or entered the military—reflect
the most common and accepted groups
of recent graduates that States should be
able to exclude, either because States
cannot readily track them or because
individual decisions to forgo becoming
teachers do not speak to the program's
performance. Commenters did not
propose another comparable group
whose failure to become novice teachers
should allow a State to exclude them in
calculations of a program’s teacher
placement rate, and upon review of the
comments we have not identified such
a group.
We accept that, in discussing this
matter with its group of stakeholders, a
State may identify one or more such
groups of recent graduates whose
decisions to pass up opportunities to
become novice teachers are also
reasonable. However, as we said above,
a teacher placement rate becomes an
indicator of a teacher preparation
program’s performance when it is
unreasonably low, i.e., below a level of
reasonableness the State establishes
based on the fact that the program exists
to produce new teachers. We are not
aware of any additional categories of
recent graduates, beyond those already
included in the allowable exclusions,
that are both sufficiently large and
sufficiently outside the control of the
teacher preparation program that their
inclusion would result in an
unreasonably low teacher placement
rate.
believe States do not need the
additional flexibility that the
commenters propose.
Changes: None.
Comments: Commenters also
expressed concern about participants
who are hired in non-teaching jobs
while enrolled and then withdraw from
the program to pursue those jobs,
suggesting that these students should
not be counted against the program.
Some commenters questioned the
efficacy of teacher placement rates as an
indicator of teacher preparation program
performance, given the number of
teachers who may be excluded from the
calculation for various reasons (e.g.,
those who teach in private schools).
Other commenters were more generally
concerned that the discretion granted to
States to exclude certain categories of
novice teachers meant that the
information available on teacher
preparation programs would not be
comparable across States.
Some commenters objected to
permitting States to exclude teachers or
recent graduates who take teaching
positions out of State, arguing that, to be
useful, placement rate data need to be
gathered across State boundaries as
program graduates work in numerous
States.
Discussion: We believe that the
revised definition of ‘‘recent graduate,’’
as well as the allowable exclusions in
the definitions of both teacher
placement and retention rates, not only
alleviate obvious sources of burden, but
provide States with sufficient flexibility
to calculate these rates in reasonable
ways. Program participants who do not
complete the program do not become
recent graduates, and would not be
included in calculations of the teacher
placement rate. However, if the
commenters intended to address recent
graduates who were employed in non-teaching positions while in or after
completing the program, we would
decline to accept the recommendation
to exclude individuals because we
believe that, except for those who
become teachers out of State or in
private schools, those who enroll in
graduate school, or those who enter the
military (which the regulations permit
States to exclude), it is important to
assess teacher preparation programs
based on factors that include their
success rates in having recent graduates
hired as teachers of record.
With regard to the efficacy of the
teacher placement rate as an indicator of
program performance, we understand
that employment outcomes, including
teacher placement rates, are influenced
by many factors, some of which are
outside of a program’s control. However,
we believe that employment outcomes
are, in general, a good reflection of
program quality because they signal a program's
ability to produce graduates whom
schools and districts deem to be
qualified and seek to hire and retain.
Moreover, abnormally low employment
outcomes are an indication that
something about the program is amiss
(just as abnormally high outcomes
suggest something is working very well).
Further discussion on this topic can be
found under the subheading
Employment Outcomes as a Measure of
Performance, § 612.5(a)(2).
While we are sympathetic to the
commenters’ concern that the proposed
definition of teacher placement rate
permits States to calculate employment
outcomes only using data on teachers
hired to teach in public schools, States
may not, depending on State law, be
able to require that private schools
cooperate in the State data collection
that the regulations require. We do note
that, generally, teacher preparation
programs are designed to prepare
teachers to meet the requirements to
teach in public schools nationwide, and
over 90 percent of teachers in
elementary and secondary schools do
not work in private schools.13
Additionally, requiring States to collect
data on teachers employed in private
schools or out of State, as well as those
who enroll in graduate school or enter
the military, would create undue burden
on States. The regulations do not
prevent teacher preparation entities
from working with their States to secure
data on recent graduates who are subject
to one or more of the permissible State
exclusions, and likewise do not prevent
the State from using those data in
calculating the program's employment
outcomes, including teacher placement
rates.

13 According to data from the Bureau of Labor Statistics, in May 2014, of the 3,696,580 individuals employed as preschool, primary, secondary, and special education school teachers in elementary and secondary schools nationwide, only 358,770 were employed in private schools. See www.bls.gov/oes/current/naics4_611100.htm and www.bls.gov/oes/current/611100_5.htm.
Similarly, we appreciate commenters’
recommendation that the regulations
include placement rate data for those
recent graduates who take teaching
positions in a different State. Certainly,
many novice teachers do become
teachers of record in States other than
those where their teacher preparation
programs are located. We encourage
States and programs to develop
interstate data-sharing mechanisms to
facilitate reporting on indicators of
program performance to be as
comprehensive and meaningful as
possible.
Until States have a ready means of
gathering these kinds of data on an
interstate basis, we appreciate that many
States may find the costs and
complexities of this data-gathering to be
daunting. On the other hand, we do not
view the lack of these data (or the lack
of data on recent graduates teaching in
private schools) to undermine the
reasonableness of employment
outcomes as indicators of program
performance. As we have explained, it
is when employment outcomes are
particularly low that they become
indicators of poor performance, and we
are confident that the States, working in
consultation with their stakeholders,
can determine an appropriate threshold
for teacher placement and retention
rates.
Finally, we understand that the
discretion that the regulations grant to
each State to exclude novice teachers
who teach in other States and who work
in private schools (and those program
graduates who go on to graduate school
or join the military) means that the
teacher placement rates for teacher
preparation programs will not be
comparable across States. This is not a
major concern. The purpose of the
regulations and the SRC itself is to
ensure that each State reports those
programs that have been determined to
be low-performing or at-risk of being
low-performing based on reasonable and
transparent criteria. We believe that
each State, in consultation with its
stakeholders (see § 612.4(c)), should
exercise flexibility to determine whether
to have the teacher placement rate
reflect inclusion of those program
graduates identified in paragraph (ii) of
the definition.
Changes: None.
Comments: Several commenters
recommended that a State with a
statewide preschool program that
requires early educators to have
postsecondary training and certification
and State licensure be required to
include data on early educators in the
teacher placement rate, rather than
simply permit such inclusion at the
State’s discretion.
Discussion: We strongly encourage
States with a statewide preschool
program where early educators are
required to obtain State licensure
equivalent to elementary school
teachers to include these teachers in
their placement data. However, we
decline to require States to include
these early educators in calculations of
programs’ teacher placement rates
because early childhood education
centers are often independent from local
districts, or are run by external entities.
This would make it extremely difficult
for States to determine a valid and
reasonable placement rate for these
teachers.
Changes: None.
Comments: Commenters
recommended that teachers who have
been hired in part-time teaching
positions be counted as ‘‘placed,’’
arguing that the placement of teachers
in part-time teaching positions is not
evidence of a lower quality teacher
preparation program.
Discussion: We are persuaded by
comments that a teacher may function
in a part-time capacity as a teacher of
record in the subject area and grade
level for which the teacher was trained
and that, in those instances, it would
not be appropriate to count this part-time placement against a program's
teacher placement rate. As such, we
have removed the requirement that a
teacher placement rate be based on the
percentage of recent graduates teaching
in full-time positions.
Changes: We have removed the full-time employment requirement from the
definition of ‘‘teacher placement rate.’’
Comments: Commenters asked
whether a participant attending a
teacher preparation program who is
already employed as a teacher by an
LEA prior to graduation would be
counted as ‘‘placed’’ post-graduation.
Commenters felt that excluding such
students may unduly penalize programs
that tailor their recruitment of aspiring
teachers to working adults.
Discussion: We are uncertain whether
the commenter is referring to a teacher
who has already received initial
certification or licensure and is enrolled
in a graduate degree program or is a
participant in an alternative route to
certification program and is working as
a teacher as a condition of participation
in the program. As discussed in the
section titled ‘‘Teacher Preparation
Program,’’ a teacher preparation
program is defined, in part, as a program
that prepares an individual for initial
certification or licensure. As a result, it
is unlikely that a working teacher would
be participating in such a program. See
the section titled ‘‘Alternative Route
Programs’’ for a discussion of the use of
teacher placement rate in alternative
route programs.
Comments: Some commenters
recommended that the teacher
placement rate calculation account for
regional differences in job availability
and the general competitiveness of the
employment market. In addition,
commenters argued that placement rates
should also convey whether the
placement is in the area in which the
candidate is trained to teach or out-of-field (i.e., where there is a mismatch
between the teacher’s content training
and the area of the placement). The
commenters suggested that young
teachers may be more likely to get hired
in out-of-field positions because they
are among the few willing to take those
jobs. Commenters contended that many
teachers from alternative route programs
(including Teach for America) are in
out-of-field placements and should be
recognized as such. Commenters also
argued that high-need schools are
notoriously staffed by out-of-field
teachers, thus, they recommended that
placement rate data account for the
congruency of the placement. The
commenters stated this is especially
important if the final regulations
include placement rates in high-need
schools as an indicator of program
performance.
Discussion: We encourage entities
operating teacher preparation programs
to take factors affecting supply and
demand, such as regional differences in
job availability and the general
competitiveness of the employment
market, into consideration when they
design and implement their programs
and work to have their participants
placed as teachers.
Nonetheless, we decline to accept the
recommendation that the regulations
require that the teacher placement rate
calculation account for these regional
differences in job availability and the
competitiveness of the employment
market. Doing so would be complex,
and would entail very large costs of
cross-tabulating data on teacher
preparation program location, area of
residence of the program graduate,
teacher placement data, and a series of
employment and job market indicators.
States may certainly choose to account
for regional differences in job
availability and the general
competitiveness of the employment
market and pursue the additional data
collection that such effort would entail.
However, we decline to require it.
As explained in the NPRM, while we
acknowledge that teacher placement
rates are affected by some
considerations outside of the program’s
control, we believe that placement rates
are still a valid indicator of the quality
of a teacher preparation program (see
the discussion of employment outcomes
under § 612.5(a)(2)).
We understand that teachers may be
hired to teach subjects and areas in
which they were not prepared, and that
out-of-field placement is more frequent
in high-need schools. However, we
maintain the requirement that the
teacher placement rate assess the extent
to which program graduates become
novice teachers in the grade-level,
grade-span, and subject area in which
they were trained. A high incidence of
out-of-field placement reflects that the
teacher preparation program is not in
touch with the hiring needs of likely
prospective employers, and is providing
its participants with academic content
knowledge and teaching skills in fields
that do not match employers' teaching
needs.
recognize that placing teachers in
positions for which they were not
prepared could lead to less effective
teaching and exacerbate the challenges
already apparent in high-need schools.
Changes: None.
Comments: Some commenters stated
that, while it is appropriate to exclude
the categories of teachers listed in the
proposed definition of ‘‘teacher
placement rate,’’ data on the excluded
teachers would still be valuable to track
for purposes of the State’s quality rating
system. Commenters proposed requiring
States to report the number of teachers
excluded in each category.
Discussion: Like the commenters, we
believe that the number of recent
graduates that a State excludes from its
calculation of a program’s teacher
placement rate could provide useful
information to the program. For reasons
expressed above in response to
comments, however, we believe a
program’s teacher placement rate will be
a reasonable measure of program
performance without reliance on the
number of teachers in each category
whom a State chooses to exclude from
its calculations. Moreover, we do not
believe that the number of recent
graduates who go on to teach in other
States or in private schools, or who
enter graduate school or the military, is
a reflection of a program’s quality.
Because the purpose of the teacher
placement rate, like all of the
regulations’ indicators of academic
content knowledge and teaching skills,
is to provide information on the
performance of the program, we decline
to require that States report these data in their
SRCs. We nonetheless encourage States
to consider obtaining, securing, and
publicizing these data as a way to make
information they provide about each
program more robust.
Changes: None.
Comments: Commenters stated that it
is important to have teacher placement
data beyond the first year following
graduation, because graduates
sometimes move among districts in the
early years of their careers. One
commenter noted that, in the
commenter’s State, data are currently
available only for teachers in their first
year of teaching, and that there is an
important Federal role in securing these
data beyond this first year.
Discussion: From our review of the
comments, we are unclear whether the
commenters intended to refer to a
program’s teacher retention rate,
because recent graduates who become
novice teachers and then immediately
move to another district would be
captured by the teacher retention rate
calculation. But because our definition
of ‘‘novice teacher’’ includes an initial
three-year teaching period, a program's
teacher retention rate would still
continue to track these teachers in
future years.
In addition, we believe a number of
commenters may have misunderstood
how the teacher placement rate is
calculated and used. Specifically, a
number of commenters seemed to
believe that the teacher placement rate
is only calculated in the first year after
program completion. This is inaccurate.
The teacher placement rate is
determined by calculating the
percentage of recent graduates who have
become novice teachers, regardless of
their retention. As such, the teacher
placement rate captures any recent
graduate who works as a teacher of
record in an elementary or secondary
public school, which may include
preschool at the State’s discretion,
within three years of program
completion.
In order to provide additional clarity,
we provide the following example. We
examine a theoretical group of graduates
from a single teacher preparation
program, as outlined in Table 1. In
examining the example, it is important
to understand that a State reports in its
SRC for a given year a program’s teacher
placement rate based on data from the
second preceding title II reporting year
(as the term is defined in the
regulations). Thus, recent graduates in
2018 (in the 2017–2018 title II reporting
year) might become novice teachers in
2018–2019. The State collects these data
in time to report them in the SRC to be
submitted in October 2019. Please see
the discussion of the timing of the SRC
under § 612.4(a)(1)(i) General State
Report Card reporting and § 612.4(b)
Timeline for changes in the reporting
timeline from that proposed in the
NPRM.
Table 1. Example of Calculations of a Teacher Placement Rate for a Single Teacher Preparation Program

Completers A, B, C, D, and E met all requirements for program completion in the 2017 title II reporting year (the 2016–2017 academic year); completers F, G, H, I, J, and K met all requirements in the 2018 title II reporting year (the 2017–2018 academic year). A recent graduate is counted in the numerator if he or she has ever been a teacher of record for P-12 public school students, regardless of retention in later years. The 2017 reporting year is the pilot year, and the State does not report a rate in that year.

SRC year   Recent graduates (denominator)   Ever placed as novice teachers (numerator)   Teacher placement rate
2018       A, B, C, D, E (5)                A, B (2)                                     2/5 = 40%
2019       A–E and F–K (11)                 A, B, C, F, G, H, J, K (8)                   8/11 = 72.7%
2020       A–E and F–K (11)                 A, B, C, D, F, G, H, J, K (9)                9/11 = 81.8%
2021       F, G, H, I, J, K (6)             F, G, H, I, J, K (6)                         6/6 = 100%
2022       None                             None                                         N/A
In this example, the teacher
preparation program has five
individuals who met all of the
requirements for program completion in
the 2016–2017 academic year. The State
counts these individuals (A, B, C, D, and
E) in the denominator of the placement
rate for the program’s recent graduates
in each of the State’s 2018, 2019, and
2020 SRCs because they are, or could
be, recent graduates who had become
novice teachers in each of the prior title
II reporting years. Moreover, in each of
these years, the State would determine
how many of these individuals have
become novice teachers. In the 2018
SRC, the State identifies that A and B
have become novice teachers in the
prior reporting year. As such, the State
divides the total number of recent
graduates who have become novice
teachers (2) by the total number of
recent graduates from 2016–2017 (5).
Hence, in the 2018 SRC, this teacher
preparation program has a teacher
placement rate of 40 percent.
In the State’s 2019 SRC, all
individuals who completed the program
in 2017 and those who completed in
2018 (the 2016–2017 and 2017–2018
title II reporting years) meet the
definition of recent graduate. In the
2018–2019 academic year, one
additional completer from the 2016–
2017 academic year has become a
novice teacher (C), and five (F, G, H, J,
and K) of the six 2017–2018 program
completers have become novice
teachers. In this instance, Teacher J is
included as a recent graduate who has
become a novice teacher even though
Teacher J is not teaching in the current
year. This is because the definition
requires inclusion of all recent
graduates who have become novice
teachers at any time, regardless of their
retention. Teacher J is counted as a
successfully placed teacher. The fact
that Teacher J is no longer still
employed as a teacher is captured in the
teacher retention rate, not here. As such,
in the 2019 SRC, the teacher preparation
program’s teacher placement rate is 73
percent (eight program completers out
of eleven have been placed).
In the State’s 2020 SRC, there are no
additional cohorts to add to the pool of
recent graduates in this example
although, in reality, States will be
calculating this measure using three
rolling cohorts of program completers
each year. In this example, Teacher D
has newly obtained placement as a
novice teacher and would therefore be
included in the numerator. As with
Teacher J in the prior year’s SRC,
Teachers G and K remain in the
numerator even though they are no
longer teachers of record because they
have been placed as novice teachers
previously. In the 2020 SRC, the teacher
preparation program’s teacher
placement rate is 82 percent (nine
program completers out of eleven have
been placed).
In the 2021 SRC, individuals who
completed their teacher preparation
program in the 2016–2017 academic
year (A, B, C, D, and E) are no longer
considered recent graduates since they
completed their programs prior to the
preceding three title II reporting years
(2018, 2019, 2020). As such, the only
cohort of recent graduates the State
examines for this hypothetical teacher
preparation program are those that
completed the program in the 2017–
2018 academic year (F, G, H, I, J, and K).
In the 2020–2021 academic year,
Teacher I is placed as a novice teacher.
Once again, Teachers G and J are
included in the numerator even though
they are not currently employed as
teachers because they have previously
been placed as novice teachers. The
program’s teacher placement rate in the
2021 SRC would be 100 percent.
In the 2022 SRC, this hypothetical
teacher preparation program has no
recent graduates, as no one completed
the requirements of the program in any
of the three preceding title II reporting
years (2019, 2020, or 2021).
As noted above, it is important to
restate that recent graduates who have
become novice teachers at any point,
such as Teacher J, are included in the
numerator of this calculation, regardless
of whether they were retained as a
teacher of record in a subsequent year.
As such, if an individual completed a
teacher preparation program in Year 1
and became a novice teacher in Year 2,
regardless of whether he or she is still
a novice teacher in Year 3, the
individual is considered to have been
successfully placed under this measure.
Issues regarding retention of teachers
are captured by the teacher retention
rate measure, and therefore departures
from a teaching position have no
negative consequences under the
teacher placement rate.
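
The arithmetic of this example can also be expressed compactly. The following sketch is illustrative only; the data structures and function are hypothetical, and nothing in the regulations requires any particular implementation. It reproduces the rates shown in Table 1.

# Hypothetical sketch reproducing the Table 1 example. Years are title II
# reporting years (e.g., 2018 refers to the 2017-2018 academic year).
completion_year = {
    "A": 2017, "B": 2017, "C": 2017, "D": 2017, "E": 2017,
    "F": 2018, "G": 2018, "H": 2018, "I": 2018, "J": 2018, "K": 2018,
}
# First reporting year in which each completer became a teacher of record
# in a P-12 public school; None means never placed in the years shown.
first_placed = {
    "A": 2018, "B": 2018, "C": 2019, "D": 2020, "E": None,
    "F": 2019, "G": 2019, "H": 2019, "I": 2021, "J": 2019, "K": 2019,
}

def teacher_placement_rate(src_year):
    # Recent graduates: completed the program in one of the three
    # preceding title II reporting years.
    recent = [t for t, y in completion_year.items()
              if src_year - 3 <= y <= src_year - 1]
    if not recent:
        return None  # no recent graduates, so no rate to report
    # Placed: has ever become a novice teacher, regardless of retention.
    placed = [t for t in recent
              if first_placed[t] is not None and first_placed[t] <= src_year]
    return len(placed) / len(recent)

for src in range(2018, 2023):
    rate = teacher_placement_rate(src)
    print(src, "N/A" if rate is None else f"{rate:.1%}")
# Prints: 2018 40.0%, 2019 72.7%, 2020 81.8%, 2021 100.0%, 2022 N/A

Because a graduate who has ever been placed stays in the numerator, departures from teaching affect only the teacher retention rate, never this rate.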
We have adopted these procedures for
State reporting of a program’s teacher
placement rate in each year’s SRC to
keep them consistent with the proposal
we presented in the NPRM for reporting
teacher placement rates over a three-year period, in line with the change in
the SRC reporting date, and as simple
and straightforward as possible. This led
us to make certain non-substantive
changes to the proposed definition of
teacher retention rate so that the
definition is clearer and less verbose. In
doing so, we have removed the State’s
option of excluding novice teachers who
have taken teaching positions that do
not require State certification (paragraph
(ii)(C) of the proposed definition)
because it seems superfluous; our
definition of teacher preparation
program is one that leads to an initial
State teacher certification or licensure in
a specific field.
Changes: We have revised the
definition of ‘‘teacher placement rate’’ to
include:
(i) The percentage of recent graduates
who have become novice teachers
(regardless of retention) for the grade
level, span, and subject area in which
they were prepared.
(ii) At the State’s discretion, exclusion
from the rate calculated under
paragraph (i) of this definition of one or
more of the following, provided that the
State uses a consistent approach to
assess and report on all of the teacher
preparation programs in the State:
(A) Recent graduates who have taken
teaching positions in another State.
(B) Recent graduates who have taken
teaching positions in private schools.
(C) Recent graduates who have
enrolled in graduate school or entered
military service.
Comments: None.
Discussion: The Department
recognizes that a State may be unable to
accurately determine the total number
of recent graduates in cases where a
teacher preparation program provided
through distance education is offered by
a teacher preparation entity that is
physically located in another State.
Each institution of higher education
conducting a teacher preparation
program is required to submit an IRC,
which would include the total number
of recent graduates from each program,
to the State in which it is physically
located. If the teacher preparation entity
operates a teacher preparation program
provided through distance education in
other States, it is not required to submit
an IRC in those States. As a result, a
State with a teacher preparation
program provided through distance
education that is operated by an entity
physically located in another State will
not have access to information on the
total number of recent graduates from
such program. Even if the State could
access the number of recent graduates,
recent graduates who neither reside in
nor intend to teach in such State would
be captured, inflating the number of
recent graduates and resulting in a
teacher placement rate that is artificially
low.
For these reasons, we have
determined that it is appropriate to
allow States to use the total number of
recent graduates who have obtained
initial certification or licensure in the
State, rather than the total number of
recent graduates, when calculating
teacher placement rates for teacher
preparation programs provided through
distance education. We believe that
a teacher placement rate calculated using
the number of recent graduates who
have obtained initial certification or
licensure is likely more accurate in
these instances than total recent
graduates from a multi-state program.
Even so, since fewer recent graduates
obtain initial certification or licensure
than the total number of recent
graduates, the teacher placement rate
may be artificially high. To address this,
we have also revised the employment
outcomes section in § 612.5(a)(2) to
allow States a greater degree of
flexibility in calculating and weighting
employment outcomes for teacher
preparation programs offered through
distance education.
Changes: We have revised the
definition of teacher placement rate in
§ 612.2 to allow States to use the total
number of recent graduates who have
obtained initial certification or licensure
in the State during the three preceding
title II reporting years as the
denominator in their calculation of
teacher placement rate for teacher
preparation programs provided through
distance education instead of the total
number of recent graduates.
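Expressed in the same illustrative terms as the earlier sketch
(hypothetical Python; nothing here is prescribed by the regulations, and
the names are assumptions), this flexibility amounts to substituting the
denominator:

    # Hypothetical sketch of the alternative calculation permitted for
    # teacher preparation programs provided through distance education.
    def placement_rate_distance_ed(certified_in_state, ever_placed):
        # certified_in_state: recent graduates who obtained initial
        # certification or licensure in the reporting State during the
        # three preceding title II reporting years. This replaces the
        # full set of recent graduates, which the State cannot observe
        # when the entity is physically located in another State.
        numerator = certified_in_state & ever_placed
        return 100 * len(numerator) / len(certified_in_state)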
Teacher Preparation Program
Comments: Commenters stated that
the regulations are designed for
undergraduate teacher preparation
programs rather than graduate programs,
in part because the definition of teacher
preparation program is linked to
specific teaching fields. This could
result in small program sizes for post-baccalaureate preparation programs.
Another commenter noted that it
offers a number of graduate degree
programs in education that do not lead
to initial certification, but that the
programs which institutions and States
report on under part 612 are limited to
those leading to initial certification.
Other commenters urged that
aggregation of data to elementary and
secondary data sets would be more
appropriate in States with a primarily
post-baccalaureate teacher preparation
model. We understand that commenters
are suggesting that our proposed
definition of ‘‘teacher preparation
program,’’ with its focus on the
provision of a specific license or
certificate in a specific field, will give
States whose programs are primarily at
the post-baccalaureate level
considerable trouble collecting and
reporting data for the required
indicators given their small size. (See
generally § 612.4(b)(3).)
Discussion: The definition of teacher
preparation program in the regulations
is designed to apply to both
undergraduate and graduate level
teacher preparation programs. We do
not agree that the definition is designed
to fit teacher preparation programs
better at one or another level. With
regard to the commenters’ concerns
about greater applicability to graduate-level programs, while the commenters
identified these as concerns regarding
the definition of teacher preparation
program, we understand the issues
described to be about program size,
which is addressed in § 612.4(b). As
such, these comments are addressed in
the discussion of program size under
§ 612.4(b)(3)(ii). We do believe that it is
important to clarify that a teacher
preparation program for purposes of
title II, HEA reporting is one that leads
to initial certification, as has been the
case under the title II reporting system
since its inception.
Changes: We have revised the
definition of the term ‘‘teacher
preparation program’’ to clarify that it is
one that leads to initial State teacher
certification or licensure in a specific
field.
Comments: Commenters noted that,
because teacher preparation programs in
some States confer academic degrees
(e.g., Bachelor of Arts in English) on
graduates rather than degrees in
education, it would be impossible to
identify graduates of teacher preparation
programs and obtain information on
teacher preparation graduates.
Additionally, some commenters were
concerned that the definition does not
account for students who transfer
between programs or institutions, or
distinguish between students who
attended more than one program; it
confers all of the credit or responsibility
for these students’ academic content
knowledge and teaching skills on the
program from which the student
graduates. In the case of alternative
route programs, commenters stated that
students may have received academic
training from a different program, which
could unfairly either reflect poorly on,
or give credit to, the alternative route
program.
Discussion: Under the regulatory
definition of the term, a teacher
preparation program, whether
alternative route or traditional, must
lead to an initial State teacher
certification or licensure in a specific
field. As a result, a program that does
not lead to an initial State teacher
certification or licensure in a specific
field (e.g., a Bachelor of Arts in English
without some additional education-related coursework) is not considered a
teacher preparation program that is
reported on under title II. For example,
a program that provides a degree in
curriculum design, confers a Master of
Education, but does not prepare
students for an initial State certification
or licensure, would not qualify as a
teacher preparation program under this
definition. However, a program that
prepares individuals to be high school
English teachers, including preparing
them for an initial State certification or
licensure, but confers no degree would
be considered a teacher preparation
program. The specific type of degree
granted by the program (if any) is
irrelevant to the definition in these
regulations. Regardless of their
structure, all teacher preparation
programs are responsible for ensuring
their students are prepared with the
academic content knowledge and
teaching skills they need to succeed in
the classroom. Therefore, because the
regulatory definition of teacher
preparation program encompasses all
teacher preparation programs, regardless
of their structure, that lead to initial
State teacher certification or licensure in
a specific field, it is appropriate that
States report on the performance
and associated data of each of these
programs.
While we understand that students
often transfer during their college
careers, we believe that the teacher
preparation program that ultimately
determines that a student is prepared for
initial certification or licensure is the
one responsible for his or her
performance as a teacher. This is so
regardless of whether the student started
in that program or a different one. The
same is true for alternative route
programs. Since alternative route
programs enroll individuals who have
had careers, work experiences, or
academic training in fields other than
education, participants in these
programs have almost by definition had
academic training elsewhere. However,
we believe it is fully appropriate to have
the alternative route program assume
full responsibility for effective teacher
training under the title II reporting
system, as it is the program that
determined the teacher to have
sufficient academic content knowledge
and teaching skills to complete the
requirements of the program.
Finally, we note that in § 612.5(a)(4),
the regulations also require States to
determine whether teacher preparation
programs have rigorous exit
requirements. Hence, regardless of
student transfers, the public will know
whether the State considers program
completers to have reached a high
standard of preparation.
Changes: None.
Comments: None.
Discussion: In considering the
comments we received on alternative
route to certification programs, we
realized that our proposed definition of
‘‘teacher preparation program’’ did not
address the circumstance where the
program, while leading to an initial
teacher certification or licensure in a
specific field, enrolls some students in
a traditional teacher preparation
program and other students in an
alternative route to certification program
(i.e., hybrid programs). Like the students
enrolled in each of these two
programmatic components, the
components themselves are plainly very
different. Principally, one offers
instruction to those who will not
become teachers of record until after
they graduate and become certified to
teach, while the other offers instruction
to those who already are teachers of
record (and have met State requirements
to teach while enrolled in their teacher
preparation program), and that thereby
supports and complements those
individuals’ current teaching
experiences. Thus, while each
component is ‘‘offered by [the same]
teacher preparation entity’’ and ‘‘leads
to an initial State teacher certification or
licensure in a specific field,’’ this is
where the similarity may end.
We therefore have concluded that our
proposed definition of a teacher
preparation program does not fit these
hybrid programs. Having an IHE or the
State report composite information for a
teacher preparation program that has
both a traditional and alternative route
component does not make sense;
reporting in the aggregate will mask
what is happening with or in each
component. The clearest and simplest
way to avoid the confusion in reporting
that would otherwise result is to have
IHEs and States treat each component of
such a hybrid program as its own
teacher preparation program. We have
revised the definition of a ‘‘teacher
preparation program’’ in § 612.2 to do
just that. While doing so may create
more small teacher preparation
programs that require States to aggregate
data under § 612.4(b)(3)(ii), this
consequence will be far outweighed by
the benefits of cleaner and clearer
information.
Changes: We have revised the
definition of a ‘‘teacher preparation
program’’ in § 612.2 to clarify that where
some participants in the program are in
a traditional route to certification or
licensure in a specific field, and others
are in an alternative route to
certification or licensure in that same
field, the traditional route and alternative
route components each constitute a separate
teacher preparation program.
Teacher Retention Rate
Comments: Some commenters stated
that by requiring reporting on teacher
retention rates, both generally and for
high-need schools, program officials—
and their potential applicants—can
ascertain if the programs are aligning
themselves with districts’ staffing needs.
Other commenters stated that two of
the allowable options for calculating the
teacher retention rate would provide
useful information regarding: (1) The
percentage of new teachers hired into
full-time teaching positions and serving
at least three consecutive years within
five years of being certified or licensed;
and (2) the percentage of new teachers
hired full-time and reaching tenure
within five years of being certified.
According to commenters, the focus of
the third option, new teachers who were
hired and then fired for reasons other
than budget cuts, could be problematic
because it overlooks teachers who
voluntarily leave high-need schools, or
the profession altogether. Other
commenters recommended removing
the definition of teacher retention rate
from the regulations.
Another commenter stated that the
teacher retention rate, which we had
proposed to define as any of the three
specific rates selected by the State,
creates the potential for incorrect
calculations and confusion for
consumers when teachers have initial
certification in multiple States;
however, the commenter did not offer
further information to clarify its
meaning. In addition, commenters
stated that the proposed definition
allows for new teachers who are not
retained due to market conditions or
circumstances particular to the LEA and
beyond the control of teachers or
schools to be excluded from calculation
of the retention rate, a standard that
allows each school to determine the
criteria for those conditions, which are
subject to interpretation.
Several commenters requested
clarification of the definition. Some
asked us to clarify what we meant by
tenure. Another commenter asked us to
clarify how to treat teachers on
probationary certificates.
Another commenter recommended
that the Department amend the teacher
retention rate definition so that it is
used to help rate teacher preparation
programs by comparing the program’s
recent graduates who demonstrate
effectiveness and remain in teaching to
those who fail to achieve high ratings on
evaluations. One commenter suggested
that programs track the number of years
graduates taught over the course of five
years, regardless of whether or not the
years taught were consecutive. Others
suggested shortening the timeframe for
reporting on retention so that the rate
would be reported for each of three
consecutive years and, as we
understand the comments, would apply
to individuals after they became novice
teachers.
Discussion: We agree with the
commenters who stated that reporting
on teacher retention rates both generally
and for high-need schools helps show whether
teacher preparation programs are
aligning themselves with districts’
staffing needs.
In response to comments, we have
clarified and simplified the definition of
teacher retention rate. We agree with
commenters that the third proposed
option, by which one subtracts from 100
percent the percentage of novice
teachers who were hired and fired for
reasons other than budget cuts, is not a
true measure of retention because it
excludes those who voluntarily leave
the profession. Therefore, we have
removed it as an option for calculating
the retention rate. Doing so also
addresses those concerns that the third
option allowed for too much discretion
in interpreting when local conditions
beyond the schools’ control caused
teachers to no longer be retained.
We also agree with commenters that
the second proposed option for
calculating the rate, which looked to the
percentage of new teachers reaching
tenure within five years, is confusing
and does not make sense when looking
at new teachers, which we had
proposed to define as covering a three-year teaching period, as tenure may not
be reached during that timeframe. For
these reasons, we also have removed
this option from the definition. Doing so
addresses the commenters’ concerns
that multiple methods for calculating
the rate would create confusion. We also
believe this addresses the comments
regarding our use of the term tenure as
potentially causing confusion.
We also note that our proposed
definition of teacher retention rate did
not bring in the concept of certification
in the State in which one teaches.
Therefore, we do not believe this
definition will cause the confusion
identified by the commenter who was
concerned about teachers who were
certified to teach in multiple States.
Additionally, we revised the first
option for calculating the teacher
retention rate to clarify that the rate
must be calculated three times for each
cohort of novice teachers—after the first,
second, and third years as a novice
teacher. We agree with commenters who
recommended shortening the timeframe
for reporting on retention from three of
five years to the first three consecutive
years. We made this change because the
definition of recent graduate already
builds in a three-year window to allow
for delay in placement, and to simplify
the data collection and reporting
requirements associated with this
indicator.
We also agree with the
recommendation that States calculate a
program’s retention rate based on three
consecutive years after individuals
become novice teachers. We believe
reporting on each year for the first three
years is a reasonable indicator of
academic content and teaching skills in
that it shows how well a program
prepares novice teachers to remain in
teaching, and also both promotes greater
transparency and helps employers make
more informed hiring decisions. We
note that teacher retention rate is
calculated for all novice teachers, which
includes those on probationary
certificates. This is further explained in
the discussion of ‘‘Alternative Route
Programs’’ under § 612.5(a)(2).
We appreciate the suggestions that we
should require States to report a
comparison of retention rates of novice
teachers based on their evaluation
ratings, but decline to prescribe this
measure as doing so would create costs
and complexities that we do not think are
warranted for determining a program’s broad level of
performance. States that are interested
in such information for the purposes of
transparency or accountability are
welcome to consider it as another
criterion for assessing program
performance or for other purposes.
Table 2a. Example of Calculations of a Teacher Retention Rate for a Single Teacher Preparation Program

[Table 2a appears as an image in the printed document. It recreates Table 1, with calculations for the teacher retention rate instead of the teacher placement rate, and shows the following rates by SRC and cohort:]

2018 SRC: State does not report a teacher retention rate in this year.
2019 SRC: 2017–2018 cohort: (A+B+F+G)/(A+B+F+G+J) = 4/5 = 80 percent.
2020 SRC: 2017–2018 cohort: (A+B+F)/(A+B+F+G+J) = 3/5 = 60 percent; 2018–2019 cohort: (C+H)/(C+H+K) = 2/3 = 66.7 percent.
2021 SRC: 2017–2018 cohort: (A+F)/(A+B+F+G+J) = 2/5 = 40 percent; 2018–2019 cohort: (C+H)/(C+H+K) = 2/3 = 66.7 percent; 2019–2020 cohort: D/D = 1/1 = 100 percent.
2022 SRC: 2018–2019 cohort: (C+H)/(C+H+K) = 2/3 = 66.7 percent; 2019–2020 cohort: D/D = 1/1 = 100 percent; 2020–2021 cohort: (E+I)/(E+I) = 2/2 = 100 percent.

NOTES:
Grad = Individual met all the requirements for program completion in that year.
Y = Teacher of record for P-12 public school students in that year.
N = Not a teacher of record for P-12 public school students in that year.
Dark shaded cells represent the first year that a teacher was a teacher of record for P-12 students in public schools.
Light shaded cells represent years in which a State calculates and reports a teacher retention rate using data from that teacher.
When calculating teacher retention
rate, it is important to first note that the
academic year in which an individual
met all of the requirements for program
completion is not relevant. Unlike the
teacher placement rate, the defining
concern of a teacher retention rate
calculation is the first year in which an
individual becomes a teacher of record
for P–12 public school students. In this
example, we use the same basic
information as we did for the teacher
placement rate example. As such, Table
2a recreates Table 1, with calculations
for teacher retention rate instead of the
teacher placement rate. However,
because the first year in which an
individual becomes a novice teacher is
the basis for the calculations, rather
than the year of program completion, we
could rearrange Table 2a in the order in
which teachers first became novice
teachers as in Table 2b.
In addition, Table 2b removes data on
program completion, and eliminates
both extraneous information before an
individual becomes a novice teacher
and employment information after the
State is no longer required to report on
these individuals for purposes of the
teacher retention rate.
Table 2b. Example of Calculations of a Teacher Retention Rate for a Single Teacher Preparation Program

[Table 2b appears as an image in the printed document. It rearranges Table 2a in the order in which teachers first became novice teachers, removes program completion data, and reports the same teacher retention rates by SRC and cohort as Table 2a.]

NOTES:
Y = Teacher of record for P-12 public school students in that year.
N = Not a teacher of record for P-12 public school students in that year.
Dark shaded cells represent the first year that a teacher was a teacher of record for P-12 students in public schools.
Light shaded cells represent years in which a State calculates and reports a teacher retention rate using data from that teacher.
In this example, this particular
teacher preparation program has five
individuals who became novice teachers
for the first time in the 2017–2018
academic year (Teachers A, B, F, G, and
J). For purposes of this definition, we
refer to these individuals as a cohort of
novice teachers. As described below, the
State will first calculate a teacher
retention rate for this teacher
preparation program in the October
2019 State report card. In that year, the
State will determine how many
members of the 2017–2018 cohort of
novice teachers have been continuously
employed through the current year. Of
Teachers A, B, F, G, and J, only Teachers
A, B, F, and G are still teaching in 2018–
2019. As such, the State calculates a
teacher retention rate of 80 percent for
this teacher preparation program for the
2019 State Report Card.
In the October 2020 SRC, the State is
required to report on the 2017–2018
cohort and the 2018–2019 cohort. The
membership of the 2017–2018 cohort
does not change. From that cohort,
Teachers A, B, and F were employed in
both the 2018–2019 academic year and
the 2019–2020 academic year. The
2018–2019 cohort consists of Teachers
C, H, and K. Of those, only Teachers C
and H are employed as teachers of
record in the 2019–2020 academic year.
Therefore, the State reports a teacher
retention rate of 60 percent for the
2017–2018 cohort—because three
teachers (A, B, and F) were
continuously employed through the
current year out of the five total teachers
(A, B, F, G, and J) in that cohort—and
67 percent for the 2018–2019 cohort—
because two teachers (C and H) were
continuously employed through the current year out of the
three total teachers (C, H, and K) in that
cohort.
In the October 2021 SRC, the State
will be reporting on three cohorts of
novice teachers for the first time—the
2017–2018 cohort (A, B, F, G, and J), the
2018–2019 cohort (C, H, and K), and the
2019–2020 cohort (D). Of the 2017–2018
cohort, only Teachers A and F have
been continuously employed as a
teacher of record since the 2017–2018
academic year; therefore, the State will
report a retention rate of 40 percent for
this cohort (two out of five). Of the
2018–2019 cohort, only Teachers C and
H have been continuously employed
since the 2018–2019 academic year.
Despite being a teacher of record for the
2020–2021 academic year, Teacher K
does not count towards this program’s
teacher retention rate because Teacher K
was not a teacher of record in the 2019–
2020 academic year, and therefore has
not been continuously employed. The
State would report a 67 percent
retention rate for the 2018–2019 cohort
(two out of three). For the 2019–2020
cohort, Teacher D is still a teacher of
record in the current year. As such, the
State reports a teacher retention rate of
100 percent for that cohort.
Beginning with the 2022 SRC, the
State no longer reports on the 2017–
2018 cohort. Instead, the State reports
on the three most recent cohorts of
novice teachers—2018–2019 (C, H, and
K), 2019–2020 (D), and 2020–2021 (E
and I). Of the members of the 2018–2019
cohort, both Teachers C and H have
been employed as teachers of record in
each year from their first year as
teachers of record through the current
reporting year. Teacher K is still not
included in the calculation because of
the failure to be employed as a teacher
of record in the 2019–2020 academic
year. Therefore, the State reports a 67
percent retention rate for this cohort. Of
the 2019–2020 cohort, Teacher D has
been employed in each academic year
since first becoming a teacher of record.
The State would report a 100 percent
retention rate for this cohort. Teachers
E and I, of the 2020–2021 cohort, have
also been retained in the 2021–2022
academic year. As such, the State
reports a teacher retention rate of 100
percent in the 2022 SRC for this cohort.
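The cohort arithmetic in this example can be checked with a short
illustrative sketch. This is hypothetical Python (the regulations
prescribe no implementation); the yearly employment data are transcribed
from the example above, with each academic year labeled by its ending
calendar year (so 2017–2018 becomes 2018):

    # Years in which each teacher in the example was a teacher of record.
    employed = {
        "A": {2018, 2019, 2020, 2021}, "B": {2018, 2019, 2020},
        "C": {2019, 2020, 2021, 2022}, "D": {2020, 2021, 2022},
        "E": {2021, 2022},             "F": {2018, 2019, 2020, 2021},
        "G": {2018, 2019},             "H": {2019, 2020, 2021, 2022},
        "I": {2021, 2022},             "J": {2018},
        "K": {2019, 2021},  # K was not a teacher of record in 2019-2020
    }
    # Cohorts keyed by each teacher's first year as a novice teacher.
    cohorts = {2018: "ABFGJ", 2019: "CHK", 2020: "D", 2021: "EI"}

    def retention_rate(first_year, src_year):
        # Share of the cohort continuously employed from its first year
        # through the academic year covered by the current SRC.
        required = set(range(first_year, src_year + 1))
        retained = [t for t in cohorts[first_year] if required <= employed[t]]
        return 100 * len(retained) / len(cohorts[first_year])

    # Each SRC reports on the three cohorts preceding the reporting year.
    for src in (2019, 2020, 2021, 2022):
        for first in sorted(cohorts):
            if src - 3 <= first <= src - 1:
                print(src, first, round(retention_rate(first, src), 1))
    # Reproduces the rates above: 80.0; then 60.0 and 66.7; then 40.0,
    # 66.7, and 100.0; then 66.7, 100.0, and 100.0.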
Changes: We have revised the
definition of teacher retention rate by
removing the second and third proposed
options for calculating it. We have
replaced the first option with a method
for calculating the percentage of novice
teachers who have been continuously
employed as teachers of record in each
year between their first year as a novice
teacher and the current reporting year.
In doing so, we also clarify that the
teacher retention rate is based on the
percentage of novice teachers in each of
the three cohorts of novice teachers
immediately preceding the current title
II reporting year.
Comments: None.
Discussion: Upon review of
comments, we recognized that the data
necessary to calculate teacher retention
rate, as we had proposed to define this
term, will not be available for the
October 2018, 2019 and 2020 State
reports. We have therefore clarified in
§ 612.5(a)(2)(ii) the reporting
requirements for this indicator for these
initial implementation years. In doing
so, we have re-designated proposed
§ 612.5(a)(2)(ii), which permits States to
assess traditional and alternative route
teacher preparation programs differently
based on whether there are specific
components of the programs’ policies or
structure that affect employment
outcomes, as § 612.5(a)(2)(iii).
Changes: We have added
§ 612.5(a)(2)(ii) to clarify that: For the
October 2018 State report, the rate does
not apply; for the October 2019 State
report, the rate is based on the cohort of
novice teachers identified in the 2017–
18 title II reporting year; for the October
2020 State report, separate rates will be
calculated for the cohorts of novice
teachers identified in the 2017–18 and
2018–19 title II reporting years. In
addition, we have re-designated
proposed § 612.5(a)(2)(ii) as
§ 612.5(a)(2)(iii).
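To make the phase-in concrete, the cohorts on which a State reports in
each of the early SRCs can be laid out as a simple mapping. This is an
illustrative summary only (hypothetical Python, mirroring the sketch
above), not regulatory text:

    # Cohorts of novice teachers reported in each early SRC under
    # § 612.5(a)(2)(ii), as described in the Changes paragraph above.
    retention_cohorts_reported = {
        2018: [],                                 # rate does not apply
        2019: ["2017-18"],
        2020: ["2017-18", "2018-19"],             # separate rates
        2021: ["2017-18", "2018-19", "2019-20"],  # full three-cohort reporting
    }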
Teacher Survey
Comments: Commenters stated that
the proposed definition of teacher
survey was unclear about whether all
novice teachers or only a sample of
novice teachers must be surveyed.
Commenters also stated that the
proposed definition missed an
opportunity to collect meaningful data
about teacher preparation program
performance because it would only
require a survey of novice teachers
serving in full-time teaching positions
for the grade level, span, and subject
area in which they were prepared, and
not all the completers of programs. One
commenter noted that Massachusetts
plans to collect survey data from recent
graduates upon completion and novice
teachers after a year of employment.
Some commenters provided
recommendations regarding survey
content. These commenters argued that
the teacher survey include questions to
determine whether a teacher
preparation program succeeded in the
following areas, which, according to the
commenters, research shows are
important for preparing teachers to
advance student achievement:
producing student learning and raising
student achievement for all students;
using data to assess and address student
learning challenges and successes;
providing differentiated teaching
strategies for students with varied
learning needs, including English
learners; keeping students engaged;
managing classroom behavior; and using
technology to improve teaching and
increase student learning.
Discussion: While the proposed
definition of survey outcomes provided
that States would have to survey all
novice teachers in their first year of
teaching in the State where their teacher
preparation program is located, our
proposed definition of teacher survey
limited this to those teachers in full-time teaching positions. We agree with
the commenters’ explanations of why
States should survey all novice
teachers, and not just those who are in
full-time teaching positions. For clarity,
in addition to including the requirement
that ‘‘survey outcomes’’ be of all novice
teachers, which we have moved from its
own definition in proposed § 612.2 to §
612.5(a)(3), we have revised the
definition of ‘‘teacher survey’’
accordingly. We are also changing the
term ‘‘new teacher’’ to ‘‘novice teacher’’
for the reasons discussed under the
definition of ‘‘novice teacher.’’
However, we believe that requiring
States to survey all program completers
would put undue burden on States by
requiring them to locate individuals
who have not been hired as teachers.
Rather, we believe it is enough that
States ensure that surveys are conducted
of all novice teachers who are in their
first year of teaching. We note that this
change provides consistency with the
revised definition of employer survey,
which is a survey of employers or
supervisors designed to capture their
perceptions of whether the novice
teachers they employ or supervise, who
are in their first year of teaching, were
effectively prepared. The goal of a
teacher preparation program is to
effectively prepare aspiring teachers to
step into a classroom prepared to teach.
As the regulations seek to help States
reach reasonable determinations of
whether teacher preparation programs
are meeting this goal, the definition of
survey outcomes focuses on novice
teachers in their first year of teaching.
We note that the regulations do not
prohibit States from surveying
additional individuals or conducting
their surveys of cohorts of teachers over
longer periods of time, and we
encourage States to consider doing so.
However, considering the costs
associated with further surveys of the
same cohorts of novice teachers, we
believe that requiring that these teachers
be surveyed once, during their first year
of teaching, provides sufficient
information about the basic issue—how
well their program prepared them to
teach.
We believe that States, in consultation
with their stakeholders (see § 612.4(c)),
are in the best position to determine the
content of the surveys used to evaluate
the teacher preparation programs in
their State. Therefore, the regulations do
not specify the number or types of
questions to be included in employer or
teacher surveys.
Changes: We have revised the
definition of ‘‘teacher survey’’ to require
States to administer surveys to all
novice teachers in their first year of
teaching in the State.
Title II Reporting Year
Comments: None.
Discussion: Since its inception, the
title II reporting system has used the
term ‘‘academic year’’ to refer to a
period of twelve consecutive months,
starting September 1 and ending August
31, during which States collect and
subsequently report data on their annual
report cards. This period of data
collection and reporting is familiar to
States, institutions, and the public;
however, the proposed regulations did
not contain a definition of this reporting
period. In order to confirm that we do
not intend for States to implement the
regulations in a way that changes their
longstanding practice of using that
‘‘academic year’’ as the period for their
data collection and reporting, we
believe that it is appropriate to add a
definition to the regulations. However,
to avoid confusion with the very generic
term academic year, which may mean
different things at the teacher
preparation program and LEA levels, we
instead use the term ‘‘title II reporting
year.’’
Changes: We added the term ‘‘title II
reporting year’’ under § 612.2, and
defined it as a period of twelve
consecutive months, starting September
1 and ending August 31.
Subpart B—Reporting Requirements
Section 612.3 What are the regulatory
reporting requirements for the
institutional report card?
Timeline of Reporting Requirements (34
CFR 612.3)
Comments: While there was some
support for our proposal to change the
IRC due date from April to October,
many commenters stated that the
proposed October 2017 pilot start date
for the annual reporting cycle for the
IRC, using data pertaining to an
institution’s programs and novice
teachers for the 2016–2017 academic
year, would be unworkable. Several
commenters therefore strongly
recommended that our proposal to move
the due date for the IRC up by six
months to October following the end of
the institutions’ academic year not be
implemented.
Commenters said that the change
would make it impossible to collect
reliable data on several factors and on
large numbers of recent students. They
stated that it would be impossible to
submit a final IRC by October 1 because
students take State licensing
assessments, as well as enter into, drop
from, and complete programs through
August 31, and therefore final student
data, pass rates for students who took
assessments used for teacher
certification or licensure by the State,
and other information would not be
available until September or October of
each year. Other commenters indicated
that, because most teacher preparation
programs will need to aggregate
multiple years of data to meet the
program size threshold for reporting, the
October submission date will
unnecessarily rush the production and
posting of their aggregated teacher
preparation program data. Some
commenters noted that changing the IRC
due date to October (for reporting on
students and programs for the prior
academic year) would require a change
in the definition of academic year
because, without such a change, the
October reports could not reflect scores
on assessment tests that students or
program completers took through
August 31st. Alternatively, the proposal
would require institutions to prepare
and submit supplemental reports later
in the year in order for the reports to
fully reflect information for the prior
academic year.
Some commenters also stated that
LEAs have limited staffing and cannot
provide assistance to institutions during
the summer when data would be
collected, or that because teacher hiring
often occurs in August, an October IRC
due date does not provide enough time
to collect reliable employment data.
Discussion: We believe that the NPRM
confused many commenters, leading
them to believe that IRC reporting
would occur in the October immediately
after the end of the title II academic year
on August 31. Rather, we had intended
that the reporting would be on the prior
year’s academic year (e.g., the October 1,
2018 IRC would report data on the
2016–2017 academic year). However, as
we discuss in our response to comments
on our proposals for the timing of the
SRC under § 612.4(a)(1)(i) General State
Report Card reporting and § 612.4(b)
Timeline, we have decided to maintain
the submission date for the SRC report
in October, and so also maintain the due
date for the IRC as April of the year
following the title II reporting year.
Finally, while several commenters
opined that an October date for
submission of the IRC did not provide
sufficient time for institutions to receive
information from LEAs, we do not
believe that the regulations require
LEAs to submit any information to
institutions for purposes of the IRC. We
assume that the comments were based
on a misunderstanding surrounding the
data to be reported in the IRC. While our
proposed indicators of program
performance would require States to
receive and report information from
LEAs, institutions would not need to
receive comparable information from
LEAs in order to prepare and submit
their IRCs.
Changes: We have revised § 612.3 to
provide that the first IRC under the
regulations, which would cover the
2016–2017 academic year, is due not
later than April 30, 2018.
Institutional Report Card (34 CFR
612.3(a))
Comments: Multiple commenters
noted that the proposed regulations
regarding the IRCs do not take into
account all of the existing reporting
demands, including not only the title II
report, but also reports for national and
regional accrediting bodies. Another
commenter stated that, because
feedback loops already exist to improve
teacher preparation programs, there is
no need to have a Federal report card on
each teacher preparation program.
On the other hand, some commenters
suggested that teacher preparation
programs report the demographics and
outcomes of enrolled teacher candidates
by race and ethnicity. Specifically,
commenters suggested reporting the
graduation rates, dropout rates,
placement rates for graduates, first-year
evaluation scores (if available), and the
percentage of teacher candidates who
stay within the teaching profession for
one, three, and five years. Another
commenter also suggested that gender,
age, grade-level, and specialized areas of
study be included; and that the data be
available for cross-tabulation (a method
of analysis allowing comparison of the
relationship between two variables).
One commenter stated that because title
II reporting metrics are geared to
evaluate how IHEs provide training,
recruitment, and education to first-time
graduates of education programs, the
metrics cannot be applied to alternative
route certification programs, which
primarily train career changers who
already have a degree and content
knowledge. This commenter argued that
attempting to compare the results of title
II metrics from alternative route
certification programs and traditional
IHE-based programs will result in
untrue conclusions because the
programs’ student candidates are so
different.
Another commenter suggested that, in
order to ensure that States are able to
separately report on the performance of
alternative route preparation programs,
IHEs should report whether they have a
partnership agreement with alternative
route providers, and identify the
candidates enrolled in each of those
programs. The commenter noted that,
while doing so may lead States to
identify groups of small numbers of
alternative route program participants, it
may eliminate the possibility that
candidates who actually participate in
alternative route programs are identified
as graduates of a traditional preparation
program at the same IHE.
Another commenter stated that the
variety of program academic calendars,
with their different ‘‘start’’ and ‘‘end’’
dates in different months and seasons of
the year, created another source of
inaccurate reporting. The commenter
explained that, with students entering a
program on different dates, the need to
aggregate cohorts will result in diffuse
data that have relatively little meaning
since the cohort will lose its
cohesiveness. As such, the commenter
stated, the data reported based on
aggregate cohorts should not be used in
assessing or evaluating the impact of
programs on participants.
A number of commenters noted what
they claimed were inherent flaws in our
proposed IRC. They argued that it has
not been tested for validity, feasibility,
or unintended consequences, and
therefore should not be used to judge
the quality of teacher preparation
programs.
Discussion: In response to comments
that would have IHEs report more
information on race, ethnicity, sex, and
other characteristics of their students or
graduates, we note that the content of the IRC is
mandated by section 205(a) of the HEA.
Section 205(a)(C)(ii) of the HEA
provides the sole information that IHEs
must report regarding the characteristics
of their students: ‘‘the number of
students in the program (disaggregated
by race, ethnicity, and gender).’’
Therefore, we do not have the authority
to waive or change the statutorily
prescribed annual reporting
requirements for the IRC.
Regarding the recommendation that
institutions report whether their teacher
preparation programs have partnership
agreements with alternative route
providers, we note that section 205(a) of
the HEA neither provides for IHEs to
include this type of information in their
IRCs nor authorizes the Secretary to add
reporting elements to them. However, if
they choose, States could require
institutions to report such data to them
for inclusion in the SRCs. We defer to
States on whether they need such
information and, if so, the best way to
require IHEs to provide it.
In response to the comment that the
IRC is unnecessary because institutions
already have feedback loops for program
improvement, we note that by requiring
each institution to make the information
in the IRC available to the general
public, Congress plainly intends that the
report serve a public interest that goes
beyond the private use the institution
may make of the reported data. We thus
disagree that the current feedback loops
that IHEs may have for program
improvement satisfy Congress’ intent in
this regard.
We understand that there are
differences between traditional and
alternative route teacher preparation
programs and that variability among
programs in each category (including
program start and end dates) exists.
However, section 205(a) of the HEA is
very clear that an IHE that conducts
either a traditional or alternative route
teacher preparation program must
submit an IRC that contains the
information Congress has prescribed.
Moreover, we do not agree that the
characteristics of any of these programs,
specifically the demographics of the
participants in these programs or
whether participants have already
earned an undergraduate degree, would
necessarily lead to inaccurate or
confusing reporting of the information
Congress requires. Nor do we believe
that the IRC reporting requirements are
so geared to evaluate how IHEs provide
training, recruitment, and education to
first-time graduates of education
programs that IHEs operating alternative
route programs cannot explain the
specifics of their responses.
We do acknowledge that direct
comparisons of traditional and
alternative route programs would
potentially be misleading without
additional information. However, this is
generally true for comparisons of all
types of programs. For example, a
comparison of average cost of tuition
and fees between two institutions could
be misleading without the additional
context of the average value of financial
aid provided to each student. Simply
because analyzing specific data out of
context could potentially generate
confusion does not diminish the value of
reporting the information to the general
public that, as we have noted, Congress
requires.
With specific regard to the fact that
programs have different operating
schedules, the IRC would have all IHEs
report on students participating in
teacher preparation programs during the
reporting year based on their graduation
date from the program. This would be
true regardless of the programs’ start
date or whether the students have
previous education credentials. We also
believe the IRC would become too
cumbersome if we tried to tailor the
specific reporting requirements in
section 205(a) of the HEA to address and
reflect each individual program start
time, or if the regulations created
different reporting structures based on
the program start time or the previous
career or educational background of the
program participants.
Furthermore, we see no need for any
testing of data reported in the IRC for
validity, feasibility, or unintended
consequences. The data required by
these regulations are the data that
Congress has specified in section 205(a)
of the HEA. We do not perceive the data
elements in section 205(a) as posing any
particular issues of validity. Just as they
would in any congressionally mandated
report, we expect all institutions to
report valid data in their IRCs and, if
data quality issues exist, we expect
institutions will address them so as to
meet their statutory obligations. Further,
we have identified no issues with the
feasibility of reporting the required data.
While we have worked to simplify
institutional reporting, institutions have
previously reported the same or similar
data in their IRCs, albeit at a different
level of aggregation. Finally, we fail to
see any unintended consequences that
follow from meeting the statutory
reporting requirements. To the extent
that States use the data in the IRC to
help assess whether a program is low-performing or at-risk of being low-performing under section 207(a) of the
HEA, under our regulations this would
occur only if, in consultation with their
stakeholders under § 612.4(c), States
decide to use these data for this
purpose. If institutions are concerned
about such a use of these data, we
encourage them to be active participants
in the consultative process.
Changes: None.
Prominent and Prompt Posting of
Institutional Report Card (34 CFR
612.3(b))
Comments: Multiple commenters
supported the requirement to have each
IHE post the information in its IRC on
its Web site and, if applicable, on the
teacher preparation program’s own Web
site. Based on the cost estimates in the
NPRM, however, several commenters
raised concerns about the ability of IHEs
to do so.
Discussion: We appreciate the
commenters’ support for our proposal as
an appropriate and efficient way for
IHEs to meet their statutory
responsibility to report annually the
content of their IRCs to the general public
(see section 205(a)(1) of the HEA).
We discuss the comments regarding
concerns about the cost estimates in the
IRC Reporting Requirements section of
the Discussion of Costs, Benefits, and
Transfers in this document.
Changes: None.
Availability of Institutional Report Card
(34 CFR 612.3(c))
Comments: One commenter
recommended that we mandate that
each IHE provide the information
contained in its IRC in promotional and
other materials it makes available to
prospective students, rather than
leaving it to the discretion of the
institution.
Discussion: While we believe that
prospective students or participants of a
teacher preparation program need to
have ready access to the information in
the institution’s IRC, we do not believe
that requiring the IHE to provide this
information in its promotional materials
is either reasonable or necessary. We
believe that the costs of doing so would
be very large and would likely outweigh
the benefits. For example, many
institutions may make large printing
orders for pamphlets, brochures, and
other promotional materials that get
used over the course of several years.
Requiring the inclusion of IRC
information in those materials would
require that institutions both make these
promotional materials longer and print
them more often. As the regulations
already mandate that this information
be prominently posted on the
institution’s Web site, we fail to see a
substantial benefit to prospective
students that outweighs the additional
cost to the institution.
However, while not requiring the
information to be included in
promotional materials, we encourage
IHEs and their teacher preparation
programs to provide it in places that
prospective students can easily find and
access. We believe IHEs can find
creative ways to go beyond the
regulatory requirements to provide this
information to students and the public
without incurring significant costs.
Changes: None.
Section 612.4 What are the regulatory
reporting requirements for the State
report card?
General (34 CFR 612.4(a))
Comments: None.
Discussion: As proposed, § 612.4(a)
required all States to meet the annual
reporting requirements. For clarity, we
have revised this provision to provide,
as does section 205(b) of the HEA, that
all States that receive HEA funds must
do so.
Changes: We have revised § 612.4(a)
to provide that all States that receive
funds under the HEA must meet the
reporting requirements required by this
regulation.
General (Timeline) (34 CFR
612.4(a)(1)(i)) and Reporting of
Information on Teacher Preparation
Program Performance (Timeline) (34
CFR 612.4(b))
Comments: Many commenters
expressed concern with their State’s
ability to build data systems and to
collect and report the required data
under the proposed timeline.
Commenters noted that the proposed
timeline does not allow States enough
time to implement the proposed
regulations, and that the associated
logistical challenges impose undue and
costly burdens on States. Commenters
noted that States need more time to
make decisions about data collection,
involve stakeholders, and to pilot and
revise the data systems—activities that
they said cannot be completed in one
year.
Several commenters recommended
extending the timeline for
implementation by at least five years.
Some commenters suggested delaying
the reporting of program ratings until at
least 2021 to give States more time to
create data linkages and validate data.
Other commenters pointed out that their
States receive employment and student
learning data from LEAs in the fall or
winter, which they said makes reporting
outcomes in their SRCs in October of
each year, as we had proposed,
impossible. Still other commenters
noted that some data, by their nature,
may not be available to report by
October. Another commenter suggested
that institutions should report in
October, States should report outcome
data (but not performance designations)
in February, and then the States should
report performance designations in
June, effectively creating an additional
reporting requirement. To address the
timing problems in the proposed
schedule for SRC submission, other
commenters recommended that the
Department continue having States
submit their SRCs in October. On the
other hand, some commenters
supported or encouraged the
Department to maintain the proposed
timelines.
Many commenters stated that no State
currently implements the proposed
teacher preparation program rating
system. Therefore, to evaluate
effectiveness, or to uncover unintended
consequences, these commenters
emphasized the importance of
permitting States to develop and
evaluate pilot programs before broader
implementation. Some commenters
therefore recommended that the
proposed implementation timeline be
delayed until the process had been
piloted and evaluated for efficacy
while others recommended a multiyear
pilot program.
Discussion: We appreciate the
comments supporting the proposed
reporting timeline changes to the SRC.
However, in view of the public’s
explanation of problems that our
proposed reporting schedule could
cause, we are persuaded that the title II
reporting cycle should remain as
currently established—with the
institutions submitting their IRCs in
April of each year, and States
submitting their SRCs the following
October. IHEs and States are familiar
with this schedule, and we see that our
proposal to switch the reporting dates,
while having the theoretical advantage
of permitting the public to review
information much earlier, was largely
unworkable.
Under the final regulations, the initial
SRC (a pilot) would be due October 31,
2018, for the 2016–2017 academic year.
The October 2018 due date provides
much more time for submission of the
SRC. As we note in the discussion of
comments received on § 612.3(a)
(Reporting Requirements for the IRC),
IHEs will continue to report on their
programs, including pass rates for
students who took assessments used for
initial certification or licensure by the
State in which the teacher preparation
program is located, from the prior
academic year, by April 30 of each year.
States therefore will have these data
available for their October 31 reporting.
Because the outcome data States will
need to collect to help assess the
performance of their teacher preparation
programs (i.e., student learning
outcomes, employment outcomes, and
survey outcomes) would be collected on
novice teachers employed by LEAs from
the prior school year, these data would
likewise be available in time for the
October 31 SRC reporting. Given this,
we believe all States will have enough
time by October 31 of each year to
obtain the data they need to submit their
SRCs. In addition, since States are
expected to periodically examine the
quality of their data collection and
reporting under § 612.4(c)(2), we expect
that States have a process by which to
make modifications to their system if
they desire to do so.
By maintaining the current reporting
cycle, States will have a year (2016–
2017) to design and implement a
system. The 42 States, District of
Columbia, and the Commonwealth of
Puerto Rico that were previously
granted ESEA flexibility are therefore
well positioned to meet the
requirements of these regulations
because they either already have the
systems in place to measure student
learning outcomes or have worked to do
so. Moreover, with the flexibility that
§ 612.5(a)(1)(ii) now provides for States
to measure student learning outcomes
using student growth, a teacher
evaluation measure, or another State-determined measure relevant to
calculating student learning outcomes
(or any combination of these three), all
States should be able to design and
implement their systems in time to
submit their initial reports by October
31, 2018. Additionally, at least 30
States, the District of Columbia, and the
Commonwealth of Puerto Rico
already have the ability to aggregate data
on the achievement of students taught
by recent graduates and link those data
back to teacher preparation programs.
Similarly, as discussed below, 30 States
already implement teacher surveys that
could be modified to be used in this
accountability system.
Particularly given the added
flexibility in § 612.5(a)(1)(ii), as most
States already have or are well on their
way to having the systems required to
implement the regulations, we are
confident that the reduced time available to prepare before the pilot SRC is submitted will prove to be
manageable. We understand that some
States will not have complete datasets
available for all indicators during initial
implementation, and so may need to
make adjustments based on experience
during the pilot year. We also stress that
the October 2018 SRC is a pilot report;
any State identification of a program as
low-performing or at-risk of being low-performing included in that report
would not have implications either for
the program generally or for that
program’s eligibility to participate in the
TEACH Grant program. Full SRC
reporting begins in October 2019.
In addition, maintaining the SRC
reporting date of October 31 also is
important so that those who want to
apply for admission to teacher
preparation programs and for receipt of
TEACH Grants as early as January of the
year they wish to begin the program
know which IHEs have programs that
States have identified in their SRCs as
at-risk or low-performing. Prospective
students should have this information
as soon as they can so that they know
both the State’s assessment of each
program’s level of performance and
which IHEs lack authority to award
TEACH Grants. See our response to
public comment regarding the definition
of a TEACH Grant-eligible institution in
§ 686.2.
In summary, under our revised
reporting cycle, the SRC is due about
five months earlier than in the proposed
regulations. However, because the
report due October 31, 2018 is a pilot
report, we believe that States will have
sufficient time to complete work
establishing their reporting and related
systems to permit submission of all
information in the SRC by the first full
reporting date of October 31, 2019.
While we appreciate the comments
suggesting that States be able to develop
and evaluate pilot programs before
broader implementation, or that the
implementation timeline be delayed
until the State process has been piloted
and evaluated for efficiency, we do not
believe that adding more time for States
to develop their systems is necessary.
Lastly, maintaining the existing timeline
does not affect the timing of
consequences for TEACH Grants for at-risk or low-performing teacher
preparation programs. Under the
regulations, the TEACH Grant
consequences would apply for the
2021–2022 award year.
Changes: We have revised § 612.4(a)
to provide that State reports under these
final regulations would be due on
October 31, 2018. We also changed the
date for SRC reporting to October
wherever it appears in the final
regulations.
Comments: Some commenters
expressed concern with States’ ability to
implement valid and reliable surveys in
the time provided. Commenters argued
that issues related to who to survey,
when to survey, and how often to
survey would make this the most
challenging performance indicator to
develop, implement, and use for
determining a program’s performance
level. Commenters stated that an
institution’s capacity to track graduates
accurately and completely is highly
dependent on the existence of
sophisticated State data systems that
track teacher employment and on
appropriate incentives to assure high
response rates to surveys, noting that
many States do not have such systems
in place and some are just beginning to
implement them. Commenters suggested
that the Department consider easing the
timeline for implementation of surveys
to reduce the cost and burden of
implementation of surveys.
Discussion: According to the GAO
survey of States, 30 States have used
surveys that assessed principals’ and
other district personnel’s satisfaction
with recent traditional teacher
preparation program graduates when
evaluating programs seeking State
approval.14 We believe these States can
modify these existing survey
instruments to develop teacher and
employer surveys that comply with the
regulations without substantial
additional burden. Additionally, States
that do not currently use such surveys
may be able to shorten the time period
for developing their own surveys by
using whole surveys or individual
questions already employed by other
States as a template. States may also
choose to shorten the time required to
analyze survey results by focusing on
quantitative survey responses (e.g.,
score on a Likert scale or number of
hours of training in a specific teaching
skill) rather than taking the time to code
and analyze qualitative written
responses. However, we note that, in
many instances, qualitative responses
may provide important additional
information on program quality. As
such, States could opt to include
qualitative questions in their surveys
and send the responses to the applicable
teacher preparation programs for their
own analysis. With a far smaller set of
responses to analyze, individual
programs would be able to review and
respond much more quickly than the
State. However, these are decisions left
to the States and their stakeholders to
resolve.
14 GAO at 13.
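Purely as an illustration of the kind of quantitative tabulation described above—the regulations prescribe no particular survey instrument or analysis, and every program name, field, and score in this sketch is hypothetical—a State might summarize a single Likert-scale item by program as follows:

```python
# Illustrative sketch only: summarizing hypothetical quantitative survey
# responses (a 1-5 Likert item) by teacher preparation program. All
# program names, field names, and scores are invented for illustration.
from collections import defaultdict
from statistics import mean

responses = [
    {"program": "State Univ. - Elementary Ed", "preparedness": 4},
    {"program": "State Univ. - Elementary Ed", "preparedness": 5},
    {"program": "State Univ. - Secondary Math", "preparedness": 3},
    {"program": "State Univ. - Secondary Math", "preparedness": 4},
]

# Group each numeric response under its program.
scores_by_program = defaultdict(list)
for r in responses:
    scores_by_program[r["program"]].append(r["preparedness"])

# Report the mean score and response count for each program.
for program, scores in sorted(scores_by_program.items()):
    print(f"{program}: mean = {mean(scores):.2f}, n = {len(scores)}")
```

A tabulation of this kind avoids the coding of qualitative written responses, which, as noted above, States could route to the programs themselves for their own analysis.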
Changes: None.
Comments: A number of commenters
indicated confusion about when
particular aspects of the proposed IRC
and SRC are to be reported and
recommended clarification.
Discussion: We agree with the
recommendation to clarify the reporting
of cohorts and metrics for reporting
years. The chart below outlines how
certain metrics will be reported and the
reporting calendar. We understand that
the information reported on the SRC
may differ from the example provided
below because initially some data may
be unavailable or incomplete. In these
instances, we expect that States will
weight indicators for which data are
unavailable in a way that is consistent
and applies equivalent levels of
accountability across programs.
TABLE 3—IMPLEMENTATION AND REPORTING CALENDAR EXAMPLE

Year                       2018                2019                2020                2021                2022

Institutional Report Card (IRC)
IRC Due Date               April 30, 2018      April 30, 2019      April 30, 2020      April 30, 2021      April 30, 2022
Pass Rate                  Recent graduates    Recent graduates    Recent graduates    Recent graduates    Recent graduates
                           (from AY 2016–17)   (from AY 2017–18)   (from AY 2018–19)   (from AY 2019–20)   (from AY 2020–21)

State Report Card (SRC)
SRC Due Date               October 31, 2018    October 31, 2019    October 31, 2020    October 31, 2021    October 31, 2022
                           (Pilot)
Placement Rate             C1                  C1, C2              C1, C2, C3          C2, C3, C4          C3, C4, C5
Retention Rate             N/A                 C1                  C1, C2              C1, C2, C3          C2, C3, C4
Student Learning Outcomes  C1                  C1, C2              C1, C2, C3          C2, C3, C4          C3, C4, C5
Survey Outcomes            C1                  C2                  C3                  C4                  C5
TEACH Eligibility          Not impacted        Not impacted        Impacts 2021–22     Impacts 2022–23     Impacts 2023–24
                                                                   Award Year          Award Year          Award Year

Academic Year (AY): Title II academic year runs from September 1 to August 31.
Award year: Title IV award year runs from July 1 to June 30.
Note: Data systems are to be designed and implemented during the 2016–17 school year.
C1: Cohort 1, novice teachers whose first year in the classroom is 2017–18.
C2: Cohort 2, novice teachers whose first year in the classroom is 2018–19.
C3: Cohort 3, novice teachers whose first year in the classroom is 2019–20.
C4: Cohort 4, novice teachers whose first year in the classroom is 2020–21.
C5: Cohort 5, novice teachers whose first year in the classroom is 2021–22.
Changes: None.
Comments: To reduce information
collection and dissemination burden on
States, a commenter asked that the
Department provide a mechanism for
rolling up IRC data into the State data
systems.
Discussion: The Department currently
provides a system by which all IHEs
may electronically submit their IRC
data, and which also prepopulates the
SRC with relevant information from the
IRCs. We intend to continue to provide
this system.
Changes: None.
Comments: Some commenters stated
that States should be able to replace the
SRC reporting requirements in these
regulations with their own State-defined
accountability and improvement
systems for teacher preparation
programs.
Discussion: We disagree that States
should be able to replace the SRC
reporting requirements with their own
State-defined accountability and
improvement systems for teacher
preparation programs. Section 205(b) of
the HEA requires reporting of the
elements in the SRC by any State that
receives HEA funding. The measures
included in the regulations are either
specifically required by that provision
or are needed to give reasonable
meaning to the statutorily required
indicators of academic content
knowledge and teaching skills a State
must use to assess a teacher preparation
program’s performance. However,
§ 612.5(b) specifically permits a State to
assess a program’s performance using
additional indicators predictive of a
teacher’s effect on student performance,
provided that it uses the same indicators
for all teacher preparation programs in
the State. Following stakeholder
consultation (see § 612.4(c)), States are
free to adopt criteria for assessing
program performance beyond those
addressed in the regulations.
Changes: None.
Comments: Some commenters
recommended that the Department
provide adequate time for States to
examine and address the costs of
tracking student progress and academic
gains for teacher preparation program
completers who teach out of State.
Discussion: Section 612.5(a)(1) has
been revised to clarify that States may
exclude data regarding teacher
performance, or student academic
progress or growth, for calculating a
program’s student learning outcomes for
novice teachers who teach out of State
(and who teach in private schools). See
also the discussion of comments for
§ 612.5(a)(1) (student learning
outcomes). To the extent that States
wish to include this information, they
can continue to pilot and analyze data
collection quality and methodology for
a number of years before including it in
their SRCs.
Changes: None.
Comments: One commenter
specifically recommended laddering in
the proposed performance criteria only
after norming has occurred. We
interpret this comment to mean that
States should have time to collect data
on the required indicators for multiple
years on all programs and use that data
to establish specific thresholds for
acceptable program performance on
each indicator. This would require a
longer timeline before using the
indicators to assess program
performance than the Department had
proposed.
Discussion: We will not require
‘‘laddering in’’ the criteria in § 612.5
only after norming has occurred, as the
commenter suggested, because we
believe that States should be able to set
identifiable targets for these criteria
without respect to the current
distribution of programs for an indicator
(e.g., a teacher retention rate of less than
50 percent as an indicator of low
performance). These regulations are not
intended to have States identify any
particular percentage of teacher
preparation programs as low-performing
or at-risk of being low-performing.
Rather, while they establish indicators
that each State will use and report, they
leave the process for how it determines
a teacher preparation program’s overall
rating to the discretion of the State and
its consultative group. If States wish to
incorporate norming, norming around
specific performance thresholds could
be completed during the pilot year and,
over time, performance thresholds can
be adjusted during the periodic
examinations of the evaluation systems
that States must conduct.
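To make the distinction concrete, the following sketch—hypothetical in every particular, since thresholds are left to each State in consultation with its stakeholders—applies a fixed, identifiable target of the kind described above (a retention rate below 50 percent flagging low performance) without any norming against the distribution of programs:

```python
# Illustrative sketch only: applying a fixed, State-identified target
# (the discussion's example of a retention rate below 50 percent as an
# indicator of low performance) without norming against the distribution
# of all programs. Program names and rates are invented.
RETENTION_FLOOR = 0.50  # hypothetical State-set threshold

retention_rates = {"Program A": 0.72, "Program B": 0.44, "Program C": 0.58}

for program, rate in retention_rates.items():
    if rate < RETENTION_FLOOR:
        status = "below threshold (low-performance indicator)"
    else:
        status = "meets threshold"
    print(f"{program}: retention {rate:.0%} -> {status}")
```

A norm-referenced alternative would instead derive the cut point from the observed distribution (for example, the bottom decile of programs), which is the approach a State could pilot and adjust over time as described above.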
Changes: None.
Comments: Some commenters noted
that having States assess the
performance of teacher preparation
programs on a yearly basis seems likely
to drain already limited State and
institutional resources.
Discussion: Section 207(a) of the HEA
expressly requires States to provide an
‘‘annual list of low-performing [and at-risk] teacher preparation programs.’’ We
believe that Congress intended the State
program assessment requirement itself
also to be met annually. While we have
strived to develop a system that keeps
costs manageable, we also believe that
the improvement of teacher preparation
programs and consumers’ use of
information in the SRC on program
performance necessitate both annual
reporting and program determinations.
Changes: None.
Comments: A number of commenters
stated that the availability of student
growth and achievement data that are
derived from State assessment results
and district-determined measures is
subject to State legislative requirements
and, if the legislature changes them, the
State assessments given or the times
when they are administered could be
drastically impacted. One commenter
stated that, because the State operates
on a biennial budget cycle, it could not
request authority for creating the
administrative position the State needs
to comply with the proposed regulations
until the 2017–2019 budget cycle.
Discussion: We understand that the
availability of data States will need to
calculate student learning outcomes for
student achievement in tested grades
and subjects depends to some extent on
State legislative decisions to maintain
compatible State assessments subject to
section 1111(b)(2) of the ESEA, as
amended by the ESSA. But we also
assume that State legislatures will
ensure that their States have the means
to comply with this Federal law, as well
as the means to permit the State to
calculate student growth based on the
definition of ‘‘student achievement in
tested grades and subjects’’ in § 612.2.
Moreover, we believe that our decision
to revise § 612.5(a)(1)(ii) to include an
option for States to use ‘‘another State-determined measure relevant to
calculating student learning outcomes’’
should address the commenters’
concerns.
In addition, the commenter who
raised concerns based on the State
legislature being in session on only a
biennial basis did not provide enough
information to permit us to consider
why this necessarily bars the State’s
compliance with these regulations.
Changes: None.
Program-Level Reporting (Including
Distance Education) (34 CFR
612.4(a)(1)(i))
Comments: Some commenters
supported the shift to reporting at the
individual teacher preparation program
rather than at the overall institutional
level. A couple of commenters agreed
that States should perform assessments
of each program, but be allowed to
determine the most appropriate way to
include outcomes in the individual
program determinations, including
determining how to roll-up outcomes
from the program level to the entity
level. Other commenters noted that
States should be required to report
outcomes by the overall entity, rather
than by the individual program, because
such reporting would increase the
reliability of the measures and would be
less confusing to students. Some
commenters expressed concerns that
only those programs that have data
demonstrating their graduates’
effectiveness in the public schools in
the State where the institution is located
would receive a top rating, and entity-level reporting and rating would reduce
this concern. If States report by entity,
they could report the range in data
across programs in addition to the
median, or report data by quartile. This
would make transparent the differences
within an entity while maintaining
appropriate thresholds.
Commenters also stated that there are
too many variations in program size
and, as we understand the comment, in
the way States credential their teacher
preparation programs to mandate a
single Federal approach to disaggregated
program reporting for the entire Nation.
Discussion: We appreciate the
comments supporting the shift to
reporting at the program level. The
regulations provide extensive flexibility
to States to determine how to measure
and use outcomes in determining
program ratings. If a State wishes to
aggregate program level outcomes to the
entity level, it is free to do so, though
such aggregation would not replace the
requirements to report at the program
level unless the program (and the
method of aggregation) meets the small-size requirements in § 612.4(b)(3)(ii).
Regarding the comment that reporting at
the institutional level is more reliable,
we note that the commenter did not
provide any additional context for this
statement, though we assume this
statement is based on a generalized
notion that data for the institution as a
whole might be more robust because of
the overall institution’s much larger
number of recent graduates. While we
agree that aggregation at a higher level
would generate more data for each
indicator, we believe that the program
size threshold in § 612.5(b)(3)
sufficiently addresses this concern
while also ensuring that the general
public and prospective students have
access to data that are as specific as
possible to the individual programs
operated by the institution.
We fail to understand how defining a
teacher preparation program as we have,
in terms of initial State teacher
certification or licensure in a specific
field, creates concerns that top ratings
would only go to programs with data
showing the effectiveness of graduates
working in public schools in the State.
So long as the number of novice
teachers the program produces meets
the minimum threshold size addressed
in § 612.4(b)(3) (excluding, at the State’s
discretion, teachers teaching out of State
and in private schools from
determinations of student learning
outcomes and teacher placement and
retention rates as permitted by
§ 612.5(a)(1) and § 612.2, respectively),
we are satisfied that the reporting of
program information will be sufficiently
robust and obviate concerns about data
reliability.
Moreover, we disagree with the
comments that students would find
reporting of outcomes at the institution
level less confusing than reporting at the
teacher preparation program level. We
believe students want information about
teacher preparation programs that are
specific to the areas in which they want
to teach so they can make important
educational and career decisions, such
as whether to enroll in a specific teacher
preparation program. This information
would be presented most clearly at the
teacher preparation program level rather
than at the institutional level, where
many programs would be collapsed
such that a student would not only lack
information about whether a specific
program in which she is interested is
low-performing or at risk of being low-performing, but also be unable to review
data relative to indicators of the
program’s performance.
We also disagree with the claim that
program level reporting as required
under these regulations is inappropriate
due to the variation in program size and
structure across and within States. Since
the commenters did not provide an
example of how the requirements of
these regulations make program level
reporting impossible to implement, we
cannot address these concerns more
specifically than to say that since use of
indicators of program performance will
generate information unique to each
program, we fail to see why variation in
program size and structure undermines
these regulations.
Changes: None.
Comments: There were many
comments related to the reporting of
information for teacher preparation
programs provided through distance
education. Several commenters
indicated that the proposed regulations
are unclear on how the reporting
process would work for distance
education programs large enough to
meet a State’s threshold for inclusion on
their report card (see § 612.4(b)(3)), but
that lack a physical presence in the
State. Commenters indicated that, under
our proposed regulations, States would
need to identify out-of-State institutions
(and their teacher preparation programs)
that are serving individuals within their
borders through distance education, and
then collect the data, analyze it, and
provide assessments on these programs
operated from other States. Thus,
commenters noted, States may need
more authority either through regulatory
action or legislation to be able to collect
information from institutions over
which they do not currently have
authority.
Commenters also requested that the
Department clarify what would happen
to distance education programs and
their currently enrolled students if
multiple States would be assessing a
single program’s effectiveness and doing
so with differing results. One
commenter suggested a ‘‘home State’’
model in which, rather than developing
ratings for each program in each State,
all of a provider’s distance education
programs would be evaluated by the
State in which the provider, as opposed
to the program participants, is
physically located. The commenter
argued that this model would increase
the reliability of the measures and
decrease student confusion, especially
where comparability of measures
between States is concerned. Unless
such a home State model is adopted, the
commenter argued, other States may
discriminate against programs
physically located and operated in other
States by, as we understand the
comment, using the process of
evaluating program performance to
create excessive barriers to entry in
order to protect in-State institutions.
Another commenter asked that the
proposed regulations provide a specific
definition of the term ‘‘distance
education.’’
Several commenters expressed
support for the change to
§ 612.4(a)(1)(ii) proposed in the
Supplemental NPRM, which would
require that reporting on the quality of
all teacher preparation programs
provided through distance education in
the State be made by using procedures
for reporting that are consistent with
§ 612.4(b)(4), but based on whether the
program produces at least 25 or fewer
than 25 new teachers whom the State
certified to teach in a given reporting
year.
While commenters indicated that
reporting on hybrid teacher preparation
programs was a complicated issue,
commenters did not provide
recommendations specific to two
questions regarding hybrid programs
that were posed in the Supplemental
NPRM. The first question asked under
what circumstances, for purposes of
both reporting and determining the
teacher preparation program’s level of
overall performance, a State should use
procedures applicable to teacher
education programs offered through
distance education and when it should
use procedures for teacher preparation
programs provided at brick-and-mortar
institutions. Second, we asked, for a
single program, if one State uses
procedures applicable to teacher
preparation programs provided through
distance education, and another State
uses procedures for teacher preparation
programs provided at brick-and-mortar
institutions, what are the implications,
especially for TEACH Grant eligibility,
and how these inconsistencies should
be addressed.
In response to our questions, many
commenters indicated that it was
unclear how to determine whether a
teacher preparation program should be
classified as a teacher preparation
program provided through distance
education for reporting under
§ 612.4(a)(1)(ii) and asked for
clarification regarding how to determine
under what circumstances a teacher
preparation program should be
considered a teacher preparation
program provided through distance
education. One commenter
recommended that we define a teacher
preparation program provided through
distance education to be one
where the full and complete program
can be completed without an enrollee
ever being physically present at the
brick-and-mortar institution or any of its
branch offices.
Commenters expressed a number of
concerns about reporting. Some
commenters indicated that while the
December 3, 2014, NPRM allowed States
to report on programs that produced
fewer than 25 new teachers, it was
unclear whether the same permission
would be applied to distance education
programs through the Supplemental
NPRM. Additionally, a few commenters
thought that, in cases where students
apply for certification in more than one
State, the outcomes of a single student
could be reported multiple times by
multiple States. Other commenters felt
that if States are expected to evaluate
distance education graduates from other
States’ programs, the regulations should
be revised to focus on programs that are
tailored to meet other States’
requirements. A commenter suggested
that the State in which a distance
education program is headquartered
should be responsible for gathering the
data reported by the other States in
which the program operates and then,
using their data along with other States’
data, that State where the program is
headquartered should make the
determination as to the performance
rating of that program. Doing so would
establish one rating for each distance
education program, which would come
from the State in which it is
headquartered. The commenter
expressed that this would create a
simplified rating system similar to
brick-and-mortar institutions. Another
commenter stated that the proposed
approach would force the States to
create a duplicative and unnecessary
second tracking system through their
licensure process for graduates of their
own teacher preparation programs
provided through distance education
who remain in the State.
Many commenters voiced concerns
related to the identification and tracking
of teacher preparation programs provided through distance education.
Specifically, commenters indicated that,
because the method by which a teacher
preparation program is delivered is not
transcribed or officially recorded on
educational credentials, the receiving
State (the State where the teacher has
applied for certification) has no way to
distinguish teacher preparation
programs provided through distance
education from brick-and-mortar teacher
preparation programs. Furthermore,
receiving States would not be able to
readily distinguish individual teacher
preparation programs provided through
distance education from one another.
Finally, a commenter stated that the
proposed regulations do not require
States to provide any notice of their
rating, and do not articulate an appeal
process to enable institutions to
challenge, inspect, or correct the data
and information on the basis of which
they might have received an adverse
rating. Commenters also indicated that
teacher preparation programs
themselves should receive data on
States’ student and program evaluation
criteria.
Discussion: Regarding comments that
the regulations need to describe how
teacher preparation programs provided
through distance education
should be reported, we intended for a
State to report on these programs
operating in that State in the same way
it reports on the State’s brick-and-mortar
teacher preparation programs.
We appreciate commenters’
expressions of support for the change to
the proposed regulations under
§ 612.4(a)(1)(ii), as proposed in the
Supplemental NPRM, requiring that
reporting on the quality of all teacher
preparation programs provided through
distance education in the State be made
by using procedures for reporting that
are consistent with proposed
§ 612.4(b)(4), but based on whether the
program produces at least 25 or fewer
than 25 new teachers whom the State
certified to teach in a given reporting
year. In considering the language of
proposed § 612.4(a)(1)(ii) and the need
for clarity on the reporting requirements
for teacher preparation programs
provided through distance education,
we have concluded that the provision
would be simpler if it simply
incorporated by reference the reporting
requirements for those programs in
§ 612.4(b)(3) of the final regulations.
While we agree with the commenters
who stated that the proposed
regulations were unclear on what
constitutes a teacher preparation
program provided through distance
education, we decline to accept the
recommendation to define a distance education program as one where the full and complete program can be completed without an enrollee ever being physically present at the brick-and-mortar institution or any of its branch offices, because this definition would
not be inclusive of teacher preparation
programs providing significant portions
of the program through distance
education. In addition, the proposed
definition would allow the teacher
preparation program to easily modify its
requirements such that it would not be
considered a teacher preparation
program provided through distance
education.
Instead, in order to clarify what
constitutes a teacher preparation
program provided through distance
education, we are adding the term
‘‘teacher preparation program provided
through distance education’’ to § 612.2
and defining it as a teacher preparation
program in which 50 percent or more of
the program’s required coursework is
offered through distance education. The
term distance education is defined
under 34 CFR 600.2 to mean education
that uses one or more specified
technologies to deliver instruction to
students who are separated from the
instructor and to support regular and
substantive interaction between the
students and the instructor, either
synchronously or asynchronously. The
technologies may include the internet;
one-way and two-way transmissions
through open broadcast, closed circuit,
cable, microwave, broadband lines, fiber
optics, satellite, or wireless
communications devices; audio
conferencing; or video cassettes, DVDs,
and CD–ROMs, if the cassettes, DVDs, or
CD–ROMs are used in a course in
conjunction with any of the
technologies listed previously in this
definition. We have incorporated this
definition by reference (see § 612.2(a)).
In the Supplemental NPRM, we
specifically requested public comment
on how to determine when a program
that has both brick-and-mortar and
distance education components should
be considered a teacher preparation
program provided through distance
education. While we received no
suggestions, we believe that it is
reasonable that if 50 percent or more of
a teacher preparation program’s
required coursework is offered through
distance education, it should be
considered a teacher preparation
program provided through distance
education because the majority of the
program is offered through distance
education. This 50 percent threshold is
consistent with thresholds used
elsewhere in Departmental regulations,
such as those relating to correspondence
courses under 34 CFR 600.7 or
treatment of institutional eligibility for
disbursement of title IV HEA funds for
additional locations under 34 CFR
600.10(b)(3).
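The sketch below is illustrative only: the regulations do not prescribe a computation, and the example counts required courses as a rough proxy for the share of required coursework (a State might instead weight by credit hours); the course list is invented:

```python
# Illustrative sketch only: testing the 50 percent threshold in the new
# definition of a "teacher preparation program provided through distance
# education." Counting courses is a rough proxy for "required coursework";
# weighting by credit hours would be an equally plausible reading.
def is_provided_through_distance_education(required_courses):
    """required_courses: list of (course_name, offered_via_distance) pairs."""
    if not required_courses:
        return False
    distance_count = sum(1 for _, via_distance in required_courses if via_distance)
    return distance_count / len(required_courses) >= 0.5

program = [
    ("Foundations of Education", True),
    ("Literacy Methods", True),
    ("Classroom Management", False),
    ("Student Teaching Practicum", False),
]
# 2 of 4 required courses (50 percent) are distance-delivered -> True
print(is_provided_through_distance_education(program))
```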
In addition, we do not agree with the
suggestion for a ‘‘home State’’ reporting
model, in which all of a provider’s
distance education programs would be
evaluated by the State in which the
provider is physically located. First,
section 205(b) of the HEA requires
States to report on the performance of
their teacher preparation programs. We
feel strongly both that, to date, defining
the program at the institutional level has
not produced meaningful results, and
that where programs provided through
distance education prepare individuals
to teach in different States, those
States—and not only the ‘‘home
State’’—should assess those programs’
performance. In addition, we believe
that each State should, as the law
anticipates, speak for itself about what
it concludes is the performance of each
teacher preparation program provided
through distance education operating
within its boundaries. Commenters did
not provide any evidence to support
their assertion that States would
discriminate against distance learning
programs physically located in other
States, nor do we understand how they
would do so if, as § 612.4(a) anticipates,
they develop and apply the same set of
criteria (taking into consideration the
need to have different employment
outcomes as provided in § 612.4(b)(2)
given the nature of these programs) for
assessing the performance of brick-and-mortar programs and programs provided through distance education.
Regarding reporting concerns, we
provide under § 612.4(b)(3)(i) for annual
reporting on the performance of each
teacher preparation program that
produces a total of 25 or more recent
graduates in a given reporting year (that
is, a program size threshold of 25), or,
at the State’s discretion, a lower
program size threshold (e.g., 15 or 20).
Thus, States can use a lower threshold
than the 25 recent graduates. We do not
agree that in cases where students apply
for certification in more than one State,
a single student would necessarily be
counted multiple times. For calculations
of the placement rate for a program
provided through distance education,
the student who teaches in one State but
who has received teaching certification
in that State and others would be
included in the denominator of
placement rates calculated by these
other States only if those States chose
not to exclude recent graduates teaching
out of State from their calculations. (The
same would be true of graduates of
brick-and-mortar programs.) But those
other States would only report and use
a placement rate in assessing the
performance of programs provided
through distance education if they have
graduates of those programs who are
certified in their States (in which case
the program size threshold and
aggregation procedures in § 612.4(b)
would apply).
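Purely for illustration—the regulations do not prescribe a formula, and the records and field names below are invented—the denominator logic described above, including a State's optional exclusion of recent graduates teaching out of State, might be sketched as:

```python
# Illustrative sketch only: a teacher placement rate in which the State
# exercises its option to exclude recent graduates teaching out of State
# from the calculation. Graduate records and field names are invented.
graduates = [
    {"id": 1, "employed_as_teacher": True,  "teaches_in_state": True},
    {"id": 2, "employed_as_teacher": True,  "teaches_in_state": False},  # excluded
    {"id": 3, "employed_as_teacher": False, "teaches_in_state": False},
    {"id": 4, "employed_as_teacher": True,  "teaches_in_state": True},
]

def placement_rate(grads, exclude_out_of_state=True):
    # Build the denominator pool, dropping graduates who teach out of
    # State when the State has chosen to exclude them.
    pool = [
        g for g in grads
        if not (exclude_out_of_state
                and g["employed_as_teacher"]
                and not g["teaches_in_state"])
    ]
    placed = sum(1 for g in pool if g["employed_as_teacher"])
    return placed / len(pool) if pool else None

# 2 placed of 3 in the pool (graduate 2 excluded) -> about 67 percent
print(f"{placement_rate(graduates):.0%}")
```

With the exclusion turned off, graduate 2 would remain in both the numerator and the denominator, which is the situation described above for States that choose not to exclude recent graduates teaching out of State.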
Further, for the purposes of the
teacher placement rate, § 612.5(a)(2)(iv)
permits a State, at its discretion, to
assess the teacher placement rate for
teacher preparation programs provided
through distance education differently
from the teacher placement rate for
other teacher preparation programs
based on whether the differences in the
way the rate is calculated for teacher
preparation programs provided through
distance education affect employment
outcomes.
States that certify at least 25 teachers
from a teacher preparation program
provided through distance education do
have an interest in that program and
will be reporting on the program as a
program in their States. Moreover, we
disagree that States in which distance
education programs are headquartered
should round up data from other States,
determine a performance rating, and
report it, for several reasons. In addition
to placing a higher cost and burden on
a particular State, this methodology
would undermine the goal of States
having a say in the quality of the
program that is being used to certify
teachers in the State. The State where a
teacher preparation program operating
in multiple States is housed is not the
only State with an interest in the
program. Finally, we do not believe that
the regulations would force States to
create a duplicative and unnecessary
second tracking system because a State
is already required to report on teacher
preparation programs in the State.
We agree with commenters’ concerns
regarding the identification and tracking
of teacher preparation programs
provided through distance education.
To address this concern, institutions
will be asked to report which of their
teacher preparation programs are
teacher preparation programs provided
through distance education in the IRC,
which the institutions provide to the
State. The receiving State can then
verify this information during the
teacher certification process for a
teacher candidate in the State.
We note that an appeal process
regarding a teacher preparation
program’s performance is provided for
under § 612.4(c). We also note that
teacher preparation programs will have
access to data on States’ student and
program evaluation criteria because
State report cards are required to be
publicly available.
Changes: We are adding the term
‘‘teacher preparation program provided
through distance education’’ to § 612.2
and defining it as a teacher preparation
program in which 50 percent or more of
the program’s required coursework is
offered through distance education. We
are also providing under § 612.4(a)(1)(ii)
that States must report on the quality of
all teacher preparation programs
provided through distance education in
the State consistent with § 612.4(b)(3).
Making the State Report Card Available
on the State’s Web Site (34 CFR
612.4(a)(2))
Comments: One commenter
supported the proposed change that any
data used by the State to help evaluate
program performance should be
published at the indicator level to
ensure that programs understand the
areas they need to improve, and to
provide additional information to
students about program success. Other
commenters stated that posting SRCs
does not lead to constructive student
learning or to meeting pre-service
preparation program improvement
goals. Many commenters stated that the
method by which States would share
information with consumers to ensure
understanding of a teacher preparation
program’s employment outcomes or
overall rating is not stipulated in the
regulations and, furthermore, that the
Department does not specifically require
that this information be shared.
Discussion: We appreciate the
comment supporting publication of the
SRC data on the State’s Web site. The
regulation specifically requires posting
‘‘the State report card information’’ on
the Web site, and this information
includes all data that reflect how well
a program meets indicators of academic
content and teaching skills and other
criteria the State uses to assess a
program’s level of performance, the
program’s identified level of
performance, and all other information
contained in the SRC.
While posting of the SRC data on the
State’s Web site may not lead directly to
student learning or teacher preparation
program improvement, it does provide
the public with basic information about
the performance of each program and
other, broader measures about teacher
preparation in the State. Moreover,
making this information widely
available to the general public is a
requirement of section 205(b)(1) of the
HEA. Posting this information on the
State’s Web site is the easiest and least
costly way for States to meet this
requirement. We also note that the
commenters are mistaken in their belief
that our proposed regulations did not
require that information regarding
teacher preparation programs be shared
with consumers. Proposed § 612.4(a)(2)
would require States to post on their
Web sites all of the information required
to be included in their SRCs, and these
data include the data on each program’s
student learning outcomes, employment
outcomes, and survey outcomes, and
how the data contribute to the State’s
overall evaluation of the program’s
performance. The final regulations
similarly require the State to include all
of these data in the SRC, and
§ 612.4(a)(2) specifically requires the
State to make the same SRC information
it provides to the Secretary in its SRC
widely available to the general public by
posting it on the State’s Web site.
Changes: None.
Meaningful Differentiations in Teacher
Preparation Program Performance (34
CFR 612.4(b)(1))
Comments: Multiple commenters
expressed general opposition to our
proposal that in the SRC the State make
meaningful differentiation of teacher
preparation program performance using
at least four performance levels. These
commenters stated that such ratings
would not take into account the
uniqueness of each program, such as the
program’s size, mission, and diversity,
and therefore would not provide an
accurate rating of a program.
Others noted that simply ascribing
one of the four proposed performance
levels to a program is not nuanced or
sophisticated enough to fully explain
the quality of a teacher preparation
program. They recommended removing
the requirement that SEAs provide a
single rating to each program, and allow
States instead to publish the results of
a series of performance criteria for each
program.
Discussion: As noted under § 612.1,
we have withdrawn our proposal to
require States to identify programs that
are exceptional. Therefore, § 612.4(b)(1),
like section 207(a) of the HEA, requires
States in their SRCs to identify programs
as being low-performing, at-risk of being
low-performing, or effective or better,
with any additional categories
established at the State’s discretion.
This revised rating requirement mirrors
the requirements of section 207(a) of the HEA for reporting programs that are low-performing or at-risk of being low-performing (and thus by inference also
identifying those programs that are
performing well).
States cannot meet this requirement
unless they establish procedures for
using criteria, including indicators of
academic content knowledge and
teaching skills (see § 612.4(b)(2)(i)), to
determine which programs are classified
in each category. The requirement of
§ 612.4(b)(1) that States make
meaningful differentiation of teacher
preparation program performance using
at least these three categories simply
gives this statutory requirement
regulatory expression. While
§ 612.4(b)(1) permits States to categorize
teacher preparation programs using
more than three levels of performance if
they wish, the HEA cannot be properly
implemented without States making
meaningful differentiation among
programs based on their overall
performance.
We do not believe that these
regulations disregard the uniqueness of
each program’s size, mission, or
diversity, as they are intended to
provide a minimum set of criteria with
which States determine program
performance. They do not prescribe the
methods by which programs meet a
State’s criteria for program effectiveness.
Changes: We have revised
§ 612.4(b)(1) by removing the proposed
fourth program performance level,
‘‘exceptional teacher preparation
program,’’ from the rating system.
Comments: Commenters, for various
reasons, opposed our proposal to
require States, in making meaningful
differentiation in program performance,
to consider employment outcomes in
high-need schools and student learning
outcomes ‘‘in significant part.’’ Some
commenters requested clarification on
what ‘‘significant’’ means with regard to
weighting employment outcomes for
high-need schools and student learning
outcomes in determining meaningful
differentiations of teacher preparation
programs. Commenters also noted that
including employment outcomes for
high-need schools will add another
level of complexity to an already
confusing and challenging process.
Some commenters recommended the
Department maintain the focus on
teacher placement and retention rates,
but eliminate the incentives to place
recent graduates in high-need schools.
They stated that doing so will permit
these indicators to focus on the quality
of the program without requiring the
program to have a focus on having its
students teach in high-need schools,
something that may not be in the
mission of all teacher preparation
programs.
Multiple other commenters expressed
confusion about whether or not the
regulations incentivize placement in
high-need schools by making such
placement a significant part of how
States must determine the rating of a
teacher preparation program. Some
commenters argued that, on the one
hand, the requirement that States use
student learning outcomes to help
assess a program’s overall performance
could incentivize teacher preparation
programs having teaching candidates
become teachers in schools where
students are likely to have higher test
scores. On the other hand, they argued
that the proposed regulations would
also assess program performance using,
as one indicator, placement of
candidates in high-need schools, an
indicator that commenters stated would
work in the opposite direction. These
commenters argued that this could
cause confusion and will create
challenges in implementing the
regulations by not giving States and
programs a clear sense of which issue is
of greater importance—student learning
outcomes or placement of teachers in
high-need schools.
Other commenters recommended that
the Department set specific thresholds
based on the affluence of the area the
school serves. For example, commenters
recommended that 85 percent of
program graduates who work in
affluent, high-performing schools
should have a certain level of student
learning outcomes, but that, to have the
same level of program performance,
only 60 percent of program graduates
who work in high-need schools would have to perform at that same level.
Multiple commenters also opposed
the inclusion of student learning
outcomes, employment outcomes, and
survey outcomes as indicators of the
performance of teacher preparation
programs. These commenters believed
that student learning outcomes are
embedded in the concept of VAM found
in standardized testing, a concept they
believe constitutes a flawed
methodology that does not accurately
represent teacher preparation program
effectiveness.
Discussion: The final regulations
require meaningful differentiation of
teacher preparation programs on the
basis of criteria that include
employment in high-need schools as an
indicator of program graduates’ (or in
the case of alternative route programs,
participants’) academic content
knowledge and teaching skills for
several reasons. First, like much of the
education community, we recognize
that the Nation needs more teachers
who are better prepared to teach in
high-need schools. We strongly believe
that teacher preparation programs
should accept a share of the
responsibility for meeting this
challenge. Second, data collected in
response to this indicator should
actually help distinguish the distinct
missions of teacher preparation
programs. For example, certain schools
have historically focused their programs
on recruiting and preparing teachers to
teach in high-need schools—a
contribution States and those
institutions may understandably want to
recognize. Third, we know that some
indicators may be influenced by
graduates’ (or in the case of alternative
route programs, participants’) placement
in high-need schools (e.g., teacher
retention rates tend to be lower in high-need schools), and States may also want
to consider this factor as they determine
how to use the various criteria and
indicators of academic content
knowledge and teaching skills to
identify an overall level of program
performance.
However, while States retain the
authority to determine thresholds for
performance under each indicator, in
consultation with their stakeholder
groups (see § 612.4(c)), we encourage
States to choose thresholds
purposefully. We believe that all
students, regardless of their race,
ethnicity, or socioeconomic status, are
capable of performing at high levels,
and that all teacher preparation
programs need to work to ensure that
teachers in all schools are capable of
helping them do so. We encourage
States to carefully consider whether
differential performance standards for
teachers in high-need schools reflect
sufficiently ambitious targets to ensure
that all children have access to a high
quality education.
Similarly, we encourage States to
employ measures of student learning
outcomes that are nuanced enough to
control for prior student achievement
and observable socio-economic factors
so that a teacher’s contribution to
student learning is not affected by the
affluence of his or her school. Overall,
the concerns stated here would also be
mitigated by use of growth, rather than
some indicator of absolute performance,
in the measure of student learning
outcomes. But, here again, we feel
strongly that decisions about how and
when student learning outcomes are
weighted differently should be left to
each State and its consultation with
stakeholders.
We respond to the commenters’
objections to our requirement that States
use student learning outcomes,
employment outcomes, and survey
outcomes in their assessment of the
performance levels of their teacher
preparation programs in our discussion
of comment on these subjects in
§ 612.5(a). For reasons we addressed
above in the discussion of § 612.1, while
still strongly encouraging States to give
significant weight to these indicators in
assessing a program’s performance, we
have omitted from the final regulations
any requirement that States consider
employment outcomes in high-need
schools and student outcomes ‘‘in
significant part.’’
Changes: We have revised
§ 612.4(b)(1) by removing the phrase
‘‘including, in significant part,
employment outcomes for high-need
schools and student learning
outcomes.’’
Comments: Commenters
recommended that States and their
stakeholders have the authority to
determine how and to what extent
outcomes are included in accountability
decisions for teacher preparation
programs in order to mitigate the
concerns regarding the validity and
reliability of the student growth
indicators. These commenters stated
that we should give more authority to
States and LEAs to identify indicators
and their relative weighting that would
be the greatest benefit to their
community. Other commenters also
stated that the proposal to require States
to provide meaningful differentiations
in teacher preparation programs may
conflict with existing State structures of
accountability, and that, by giving States
increased flexibility, the Department
would avoid inconsistencies with State-determined levels of quality.
Discussion: Having withdrawn our
proposal to require that student growth
and employment outcomes in high-need
schools be considered ‘‘in significant
part,’’ the final regulations provide
States with broad flexibility in how they
weight different indicators of academic
content knowledge and teaching skills
in evaluating teacher preparation
programs. While we strongly encourage
States to give significant weight to these
important indicators of a teacher
preparation program’s performance, we
provide each State full authority to
determine, in consultation with its
stakeholders, how each of their criteria,
including the required indicators of
academic content knowledge and
teaching skills, can be best used to fit
the individual needs of its schools,
teachers, and teacher preparation
programs.
Changes: None.
Satisfactory or Higher Student Learning
Outcomes for Programs Identified as
Effective or Higher (34 CFR 612.4(b)(2))
Comments: Multiple commenters
asked us to define the phrase
‘‘satisfactory or higher student learning
outcomes,’’ asking specifically what
requirements a program would have to
meet to be rated as effective or higher.
They also stated that States had
insufficient guidance on how to define
programs as ‘‘effective.’’ Some
commenters also noted that providing
flexibility to States to determine when
a program’s student learning outcomes
are satisfactory would diminish the
ability to compare teacher preparation
programs, and opposed giving States the
flexibility to determine for themselves
when a program has ‘‘satisfactory’’
student learning outcomes. However,
other commenters disagreed, stating that
States should have flexibility to
determine when the teachers trained by
a particular teacher preparation program
have students who have achieved
satisfactory student learning outcomes
since States would have a better ability
to know how individual teacher
preparation programs have helped to
meet these States’ needs.
Other commenters recommended
modifying the regulations so that States
would need to determine programs to
have ‘‘above average student learning
outcomes’’ in order to rate them in the
highest category of teacher preparation
performance. Another commenter
suggested that student learning data be
disaggregated by student groups to show
hidden inequities, and that States be
required to develop a pilot program to
use subgroup data in their measurement
of teacher preparation programs, such
that if the student subgroup
performance falls short, the program
could not be rated as effective or higher.
Discussion: The Department
continues to believe that a teacher
preparation program should not be rated
effective if the learning outcomes of the
students taught by its graduates (or, in
the case of alternative route programs,
its participants) are not satisfactory.
And we appreciate the comments from
those who supported our proposal.
Nonetheless, we are persuaded by the
comments from those who urged that
States should have the flexibility to
determine how to apply the criteria and
indicators of student academic
achievement and learning needs to
determine the performance level of each
program, and have removed this
provision from the regulations.
Changes: We have removed
§ 612.4(b)(2). In addition, we have
renumbered § 612.4(b)(3) through (b)(5)
as § 612.4(b)(2) through (b)(4).
Data for Each Indicator (34 CFR
612.4(b)(2)(i))
Comments: One commenter requested
confirmation that the commenter’s State
would not be required to report the
disaggregated data on student growth
based on assessment test scores for
individual teachers, teacher preparation
programs, or entities on the SRC
because the educator effectiveness
measure approved for its ESEA
flexibility waiver meets the
requirements for student learning
outcomes in proposed §§ 612.4(b) and
612.5(a)(1) for both tested and non-tested subjects. The commenter stated
that it would be cost prohibitive to
submit student growth information on
the SRC separately from reporting on its
educator effectiveness measure under
ESEA flexibility. Furthermore, some
commenters were concerned that a
State’s student privacy laws would
make it difficult to access the
disaggregated data as required.
In addition, some commenters
opposed our proposed § 612.4(b)(2)(i)(B)
requiring each State to include in its
SRC an assurance that a teacher
preparation program either is accredited
or produces teachers with content and
pedagogical knowledge because of what
they described as the federalization of
professional standards. They indicated
that our proposal to offer each State the
option of presenting an assurance that
the program is accredited by a
specialized accrediting agency would, at
best, make the specialized accreditor an
agent of the Federal government, and at
worst, effectively mandate specialized
accreditation by CAEP. The commenters
argued instead that professional
accreditation should remain a
voluntary, independent process based
on evolving standards of the profession.
Commenters also noted that no
definition of specialized accreditation
was proposed and requested that we
include a definition of this term. One
commenter recommended that a
definition of specialized accreditation
include the criteria that would be used
by the Secretary to recognize an agency
for the accreditation of professional
teacher preparation programs, and that
one of the criteria for a specialized
agency should be the inclusion of
alternative certification programs as
eligible professional teacher preparation
programs.
Discussion: Under § 612.4(b)(2)(i),
States may choose to report student
learning outcomes using a teacher
evaluation measure that meets the
definition in § 612.2. But if they do so,
States still must report student learning
outcomes for each teacher preparation
program in the SRC.
We believe that the costs of this SRC
reporting will be manageable for all
States, and have provided a detailed
discussion of costs in the RIA section of
this document. For further discussion of
reporting on student learning outcomes,
see the discussion in this document of
§ 612.5(a)(1). We also emphasize that
States will report these data in the
aggregate at the teacher preparation
program level and not at the teacher
level. Furthermore, while States will
need to comply with applicable Federal
and State student privacy laws in the
data they report in their SRC, the
commenters have not provided
information to help us understand how
our requirements, except as we discuss
for § 612.4(b)(3)(ii)(E), are affected by
State student privacy laws.
In addition, as we reviewed these
comments and the proposed regulatory
language, we realized the word
‘‘disaggregated’’ was unclear with regard
to the factors by which the data should
be disaggregated, and redundant with
regard to the description of indicators in
§ 612.5. We have therefore removed this
word from § 612.4(b)(2)(i).
Under § 612.5(a)(4) States must
annually report whether each program
is administered by an entity that is
accredited by a specialized accrediting
agency recognized by the Secretary, or
produces candidates (1) with content
and pedagogical knowledge and quality
clinical preparation, and (2) who have
met rigorous teacher candidate exit
qualifications. Upon review of the
comments and the language of
§ 612.5(a)(4), we have determined that
proposed § 612.4(b)(3)(i)(B), which
would have had States provide an
assurance in their SRCs that each
program met the characteristics
described in § 612.5(a)(4), is not needed.
We address the substantive comments
offered on that provision in our
discussion of comments on § 612.5(a)(4).
Finally, in reviewing the public
comment, we realized that the proposed
regulations focused only on having
States report in their SRCs the data they
would provide for indicators of
academic knowledge and teaching skills
that are used to determine the
performance level of each teacher
preparation program. This, of course,
was because State use of those
indicators was the focus of the proposed
regulations. But we did not mean to
suggest that in their SRCs, States would
not also report the data they would use
for other indicators and criteria they
establish for identifying each program's level of performance. While
the instructions in section V of the
proposed SRCs imply that States are to
report their data for all indicators and
criteria they use, we have revised those
instructions to clarify this point.
Changes: We have revised
§ 612.4(b)(2)(i) by removing the word
‘‘disaggregated.’’ We also have removed
proposed § 612.4(b)(2)(i)(B) from the
regulations.
Weighting of Indicators (34 CFR
612.4(b)(2)(ii))
Comments: Some commenters stated
that a formulaic approach, which they
argued was implied by the requirement
to establish the weights of each
indicator, will not yield meaningful
differentiations among programs. The
commenters recommended that States
be allowed to use a multiple-measures
system for assessing the performance of
teacher preparation programs that relies
on robust evidence, includes outcomes,
and gives weight to professional
judgment. In addition, some
commenters recommended that
stakeholders provide input as to how
and to what extent outcomes are
included in a teacher preparation
program’s overall performance rating.
Several commenters noted that the
flexibility our proposed regulations
provide to States to determine the
weighting system for use of criteria and
indicators to assess teacher preparation
program performance undermines what
the commenters state is the
Department’s goal of providing
meaningful data to, among other things,
facilitate State-to-State comparisons.
The commenters argued that consumers might incorrectly assume that all States are applying the same metrics to assess program performance, and so draw
incorrect conclusions especially for
programs located near each other but
located in different States. Several
commenters also expressed concerns
about the Department’s proposal in
§ 612.5(a)(2) that States be able to weigh
employment outcomes differently for
alternative route programs and
traditional teacher preparation
programs. The commenters argued that
all teacher preparation programs should
be held to the same standards and levels
of accountability.
Commenters also stated that our
proposal, by which we understand the
commenters to mean the proposed use
of student learning outcomes,
employment outcomes and survey
outcomes as indicators of academic
content knowledge and teaching skills
of teachers whom programs prepare,
should be adjusted based on the
duration of the teachers’ experience.
Commenters stated we should do so
because information about newer
teachers’ training programs should be
emphasized over information about
more experienced teachers, for whom
data reflecting these indicators would
likely be less useful.
Some commenters asked whether, if a
VAM is used to generate information for
indicators of student learning outcomes,
the indicators should be weighted to
count gains made by the lower
performing third of the student
population more than gains made by the
upper third of the population because it
would be harder to increase the former
students’ scores. The commenters noted
that poorer performing students will
have the ability to improve by greater
amounts than those who score higher on
tests.
Several commenters believed that the
weighting of the indicators used to
report on teacher preparation program
performance is a critical decision,
particularly with respect to the
weighting of indicators specific to high-need schools, and because of this,
decisions on weighting should be
determined after data are collected and
analyzed. As an example of why the
group of stakeholders should have
information available prior to making
weighting decisions, the commenter
noted that, if teacher placement in high-need schools has a relatively low weight
and student growth is negatively
associated with the percentage of
economically disadvantaged students
enrolled in the school, programs may
game the system by choosing to counsel
students to seek employment in non-high-need schools.
Finally, several commenters stated
that the regulations incentivize
programs to place graduates in better
performing schools, noting that the
proposed regulations appeared to
require that student learning outcomes
be given the most weight. On the other
hand, the commenters stated that the
proposed regulations incentivize the
placement of graduates in high-need
schools, and argued that employment
rates in high-need schools would
receive the next highest weight. They
argued that this contradiction would
lead to confusion and challenges in
implementing the regulations.
Discussion: We have included a
summary of these comments here
because they generally address how
States should weight the indicators and
criteria used to assess the performance
of teacher preparation programs, and
advantages and disadvantages of giving
weight to certain indicators. However,
we stress that we did not intend for
States to adopt any particular system of
weighting to generate an overall level of
performance for each teacher
preparation program from the various
indicators and criteria they would use.
Rather, proposed § 612.4(b)(3)(ii), like § 612.4(b)(2)(ii) of the final regulations, simply directs States to report in their SRCs the weighting they have given to the various indicators in § 612.5. Thus, we
are not requiring any State to adopt
some form of formulaic approach. And
States may, if they choose, build into
their indicators and criteria a reliance
on robust evidence and outcomes, and
give weight to professional judgment.
States plainly need to be able to
implement procedures for taking the
data relevant to each of the indicators of
academic knowledge and teaching skills
and other criteria they use to assess
program performance, and turn those
data into a reported overall level of
program performance. We do not see
how States can do this without
somehow providing some form of
weight to each of the indicators they
use. However, the specific method by
which a State does so is left to each
State, in consultation with its
stakeholders (see § 612.4(c)), to
determine.
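To illustrate one way a State might operationalize such a procedure, the following sketch shows a simple weighted-composite calculation in Python. It is illustrative only: the regulations prescribe no formula, and the indicator names, weights, and cut points below are hypothetical choices a State would make in consultation with its stakeholders.

# Hypothetical weighted-composite rating for a teacher preparation
# program. Indicator names, weights, and cut points are illustrative
# only; each State sets its own under 34 CFR 612.4 and 612.5.
WEIGHTS = {
    "student_learning_outcomes": 0.40,
    "employment_outcomes": 0.30,
    "survey_outcomes": 0.20,
    "other_state_criteria": 0.10,
}

def performance_level(scores):
    """Combine 0-100 indicator scores into a reported level."""
    composite = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    # Hypothetical cut points separating the reporting categories.
    if composite < 40:
        return "low-performing"
    if composite < 55:
        return "at-risk of being low-performing"
    return "effective or higher"

# Example: scores of 62, 70, 55, and 80 on the four indicators
# yield a composite of 64.8, i.e. "effective or higher".
print(performance_level({
    "student_learning_outcomes": 62,
    "employment_outcomes": 70,
    "survey_outcomes": 55,
    "other_state_criteria": 80,
}))

A State could equally well replace a numeric composite with a rubric that gives weight to professional judgment; the SRC simply must disclose whatever weighting the State actually uses.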
As we addressed in the discussion of
§ 612.1, we had proposed in
§ 612.4(b)(1) that a State’s assessment of
a program’s performance needed to be
based ‘‘in significant part’’ on the results
for two indicators, student learning
outcomes and employment outcomes in
high-need schools. But as we noted in
our discussion of comment on §§ 612.1
and 612.4(b)(1), while strongly
encouraging States to adopt these
provisions in their procedures for
assessing a program’s performance, we
have revised these final regulations to
omit that proposal and any other
language that any regulatory indicator
receive special weight.
Furthermore, the flexibility the
regulations accord to States to
determine how these factors should be
weighed to determine a program’s level
of performance extends to the relative
weight a State might accord to factors
like a teacher’s experience and to
student learning outcomes of teachers in
low-performing versus high-performing
schools. It also extends to the weight a
State would provide to employment
outcomes for traditional teacher
preparation programs and alternative
route teacher preparation programs;
after all, these types of programs are
very different in their concept, who they
recruit, and when they work with LEAs
to place aspiring teachers as teachers of
record. In addition, State flexibility
extends to a State’s ability to assess the
overall performance of each teacher
preparation program using other
indicators of academic content
knowledge and teaching skills beyond
those contained in the regulations. We
do not believe that this flexibility
undermines any Departmental goal, or
goal that Congress had in enacting the
title II reporting system.
Thus, while a State must report the
procedures and weighting of indicators
of academic content knowledge and
teaching skills and other criteria it uses
to assess program performance in its
SRC, we believe States should be able to
exercise flexibility to determine how
they will identify programs that are low-performing or at-risk of being so. In
establishing these regulations, we stress
that our goal is simple: to ensure that
the public—prospective teaching
candidates, LEAs that will employ
novice teachers, and State and national
policy makers alike—has confidence
that States are reasonably identifying
programs that are and are not working,
and understand how States are
distinguishing between the two. The flexibility the regulations accord to States to determine a program's level of performance is fully consistent with this goal. Furthermore,
given the variation we expect to find in
State approaches and the different
environments in which each State
operates, we reiterate that any State-to-State comparisons will need to be made only with utmost caution.
As noted above, our discussion of
§§ 612.1 and 612.4(b)(1) stressed both
(1) our hope that States would adopt our
proposals that student learning
outcomes and employment outcomes for
high-need schools be given significant
weight, and that to be considered
effective a teacher preparation program
would show positive student learning
outcomes, and (2) our decision not to
establish these proposals as State
requirements. Thus, we likewise leave
to States issues regarding incentives that
any given weight might cause to
placements of aspiring teachers and the
programs themselves.
Finally, in reviewing the public
comment, we realized that the proposed
regulations focused only on having
States report in their SRCs the weights
they would provide to indicators of
academic knowledge and teaching skills
used to determine the performance level
of each teacher preparation program.
This, of course, was because State use
of those indicators was the focus of the
proposed regulations. But we did not
mean to suggest that in their SRCs,
States would not also report the weights
they would provide to other indicators
and criteria they establish for
identifying each program’s level of
performance. While the instructions in
section V of the proposed SRCs imply
that States are to report their weighting
for all indicators and criteria they use,
we have revised them to clarify this
point.
Changes: None.
Reporting the Performance of All
Teacher Preparation Programs (34 CFR
612.4(b)(3))
Comments: Commenters stated that a
number of non-traditional teacher
preparation program providers will
never meet the criteria for inclusion in
annual reports due to their small
numbers of students. Commenters noted
that this implies that many of the most
exemplary programs will neither be
recognized nor rewarded and may even
be harmed by their omission in reports
provided to the media and public.
Commenters expressed concern that this
might lead prospective students and
parents to exclude them as viable
options, resulting in decreased program
enrollment.
Other commenters asked for more
clarity on the various methods for a
program to reach the threshold of 25
new teachers (or other threshold set by
the State). The commenters also stated
that a State could design this threshold
to limit the impact on programs. Other
commenters noted that smaller teacher
preparation programs may not have the
technical and human resources to
collect the data for proposed reporting
requirements, i.e., tracking employment
and impact on student learning, and
asked if the goal of these proposed
regulations is to encourage small
programs to close or merge with larger
ones.
Discussion: The regulations establish
minimum requirements for States to use
in assessing and reporting the
performance of each teacher preparation
program, and are not intended to
facilitate the merger or closure of small
programs. The proposed regulations
provided States with three methods of
identifying and reporting the
performance of teacher preparation
programs that produce fewer than 25
new teachers—or such lower number as
the State might choose—in a given
reporting year by aggregating data to
reach the minimum thresholds. Under
the final regulations, States could: (1)
Combine a teacher preparation
program’s performance data with data
for other teacher preparation programs
that are operated by the same teacher
preparation entity and are similar to or
broader than the program in content; (2)
combine data over multiple years for up
to four years until the size threshold is
met; or (3) use a combination of the two
methods. Given statistical and privacy
issues that are particular to small
programs, we believe that these
aggregation methods will adequately
address the desire to have the
performance of all programs, large and
small, reported in SRCs. In addition,
while we strongly believe that all
teacher preparation programs should
want to gather student learning
outcomes and results of employment
and survey results to help them to
improve their programs, States, not
institutions, ultimately have the
responsibility to report under § 612.4.
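As a concrete illustration of the three aggregation options, the following Python sketch shows how a State data system might apply them in sequence. It is a sketch only: the record structures are hypothetical, and the regulations do not prescribe any particular implementation.

# Hypothetical sketch of the small-program aggregation options in
# § 612.4(b)(3)(ii). `program` maps year -> list of recent-graduate
# records; `siblings` holds similar-or-broader programs of the same
# teacher preparation entity in the same form. Shapes are assumptions.
THRESHOLD = 25  # or any lower threshold the State establishes

def reportable_cohort(program, siblings):
    latest = max(program)
    cohort = list(program[latest])
    if len(cohort) >= THRESHOLD:
        return cohort  # large enough; no aggregation needed
    # Option (A): combine with similar or broader programs of the entity.
    option_a = cohort + [r for p in siblings for r in p.get(latest, [])]
    if len(option_a) >= THRESHOLD:
        return option_a
    # Option (B): combine the same program's data over up to four years.
    option_b = [r for y in sorted(program, reverse=True)[:4]
                for r in program[y]]
    if len(option_b) >= THRESHOLD:
        return option_b
    # Option (C): apply both methods together.
    option_c = option_b + [r for p in siblings
                           for y in sorted(p, reverse=True)[:4]
                           for r in p.get(y, [])]
    if len(option_c) >= THRESHOLD:
        return option_c
    return None  # still below the threshold: no reporting required

# Example: 10 + 8 + 9 recent graduates over three years reach the
# threshold under option (B) without any sibling programs.
cohort = reportable_cohort({2016: ["g"] * 10, 2015: ["g"] * 8,
                            2014: ["g"] * 9}, siblings=[])
print(len(cohort))  # 27

The final `None` branch corresponds to the exemption, discussed below, for programs that cannot reach the threshold under any of the options.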
The proposed regulations had focused
State reporting and small program
aggregation procedures on the number
of new teachers a teacher preparation
program produced. Based on further
consideration of these and other
comments, it became clear that the term
‘‘new teacher’’ was problematic in this
case as it was in other places. We
realized that this approach would not
hold teacher preparation programs
accountable for producing recent
graduates who do not become novice
teachers. Because we believe that the
fundamental purpose of these programs
is to produce novice teachers, we have
concluded that our proposal to have
State reporting of a program’s
performance depend on the number of
new teachers that the program produces
was misplaced.
Therefore, in order to better account
for individuals who complete a teacher
preparation program but who do not
become novice teachers, we are
requiring a State to report annually on
the performance of each ‘‘brick-and-mortar’’ teacher preparation program
that produces a total of 25 or more
recent graduates (or such lower
threshold as the State may establish).
Similarly, aggregation procedures for
smaller programs apply to each teacher
preparation program that produces
fewer than 25 recent graduates (or such
lower threshold as the State may
establish). For teacher preparation
programs provided through distance
education, the requirement is the same
except that, since States are not likely to
know the number of recent graduates,
States will continue to look at whether
the program has that same threshold
number of 25 recent graduates, but in
this case, to be counted, these recent
graduates need to have received an
initial certification or licensure from the
State that allows them to serve in the
State as teachers of record for K–12
students.
Changes: We have revised
§ 612.4(b)(3) to provide that a State’s
annual reporting of a teacher
preparation program’s performance, and
whether it provides this reporting
alternatively through small program
aggregation procedures, depends on
whether the program produces a total of
25 or more recent graduates (or such
lower threshold as the State may
establish). For programs provided
through distance education, the number
of recent graduates counted will be
those who have received an initial
certification or licensure from the State
that allows them to serve in the State as
teachers of record for K–12 students.
Annual Performance Reporting of
Teacher Preparation Programs
(34 CFR 612.4(b)(3)(i))
Comments: Two commenters stated
that differentiated reporting for large
and small teacher preparation programs,
coupled with allowing States to
establish what the commenters referred
to as ‘‘certain criteria,’’ will lead to
invalid comparisons and rankings both
within and among States.
Discussion: The regulations require
separate reporting of the performance of
any teacher preparation program that
annually produces 25 or more recent
graduates. For programs that annually
produce fewer recent graduates, the
regulations also establish procedures for
data aggregation that result in reporting
on all of the State’s teacher preparation
programs (except for those programs
that are particularly small and for which
aggregation procedures cannot be
applied, or where the aggregation would
be in conflict with State or Federal
privacy or confidentiality laws). Based
on concerns expressed during the
negotiated rulemaking sessions, the
Department believes that use of an ‘‘n-size’’ of 25 (or such smaller number that
a State may adopt) and the means of
reporting the performance of smaller
programs through the aggregation
procedures address privacy and
reliability concerns while promoting the
goal of having States report on the
performance of as many programs as
possible. Moreover, we reiterate that the
purpose of these regulations is to
identify key indicators that States will
use to assess the level of performance for each program, and provide transparency about how they identify that level. We are not proposing any
rankings and continue to caution against
making comparisons of programs based
on data States report.
Changes: None.
Performance Reporting of Small Teacher
Preparation Programs: General (34 CFR
612.4(b)(3)(ii))
Comments: Commenters stated that
the low population in some States
makes privacy of students in elementary
and secondary schools, and in teacher
preparation programs, difficult or
impossible to assure. The commenters
further stated that aggregating student
growth data to the school level to assure
privacy in the title II report would result
in meaningless ratings, because the
teachers in the schools more than likely
completed the preparation program at
different institutions.
Several commenters were concerned
that our proposals for aggregating data
to be used to annually identify and
report the level of performance of small
teacher preparation programs would
make year-by-year comparisons and
longitudinal trends difficult to assess in
any meaningful way, since it is very
likely that States will use different
aggregation methods institution-byinstitution and year-by-year.
Commenters noted that many small
rural teacher preparation programs and
programs producing small numbers of
teachers who disperse across the
country after program completion do
not have the requisite threshold size of
25. Commenters stated that for these
programs, States may be unable to
collect sufficient valid data. The result
will be misinformed high-stakes
decision making.
Some commenters proposed that
States be able to report a minimum of
10 new teachers with aggregation when
a minimum is not met instead of 25.
Other options would be to report what
data they have or aggregate previous
years to meet ‘‘n’’ size.
One commenter recommended that
rankings be initially based on a
relatively few, normed criteria common
to, and appropriate for, all sized
programs and States, i.e., a common
baseline ranking system. The
commenter stated that to do otherwise
could result in States rushing to the
lowest (not highest) common
denominator to protect both quality
programs from being unfairly ranked in
comparison with weaker programs in
other States, and small premier
programs from unfair comparisons with
mediocre larger programs.
Two commenters stated that even
though the proposed rules create several
ways in which States may report the
performance of teacher preparation
programs that annually produce fewer
than 25 teachers per year, the feasibility
of annual reporting at the program level
in some States would be so limited it
would not be meaningful. The
commenters added that regardless of the
aggregation strategy, having a minimum
threshold of 25 will protect the
confidentiality of completers for
reporting, but requiring annual
reporting of programs that produce 25 or
more recent graduates per year will omit
a significant number of individual
programs from the SRC. Several
commenters had similar concerns and
stated that annual reporting of the
teacher preparation program
performance would not be feasible for
the majority of teacher preparation
programs across the country due to their size or where their students live.
Commenters specifically mentioned that
many programs at Historically Black
Colleges and Universities will have
small cell sizes for graduates, which
will make statistical conclusions
difficult. Another commenter had
concerns with the manner in which
particular individual personnel data
will be protected from public
disclosure, while commenters
supported procedural improvements in
the proposed regulations discussed in
the negotiated rulemaking sessions that
addressed student privacy concerns by
increasing the reporting threshold from
10 to 25.
Commenters further expressed
concerns that for some States, where the
number of teachers a program produces
per year is less than 25, the manual
calculation that States would need to
perform to combine programs to
aggregate the number of students up to
25 so that the States would then report
the assessment of program performance
and information on indicators would
not only be excessive, but may lead to
significant inconsistencies across
entities and from one year to the next.
Discussion: We first reiterate that we
have revised § 612.5(a)(1)(ii) so that
States do not need to use student
growth, either by itself or as used in a
teacher evaluation measure, for student
learning outcomes when assessing a
teacher preparation program’s level of
performance. While we encourage them
to do so, if, for reasons the commenters
provided or other reasons, they do not
want to do so, States may instead use
‘‘another State-determined measure
relevant to calculating student learning
outcomes.’’
We do not share commenters’
concerns about small elementary and
secondary schools where privacy
concerns purportedly require a school-level calculation of student growth
measures rather than calculation of
student growth at the teacher level, or
related concerns about student learning
outcomes for an individual teacher not
yielding useable information about a
particular teacher preparation program.
Student learning outcomes applicable to
a particular teacher preparation program
would not be aggregated at the school
level. Whether measured using student
growth, a teacher evaluation measure, or
another State-determined measure
relevant to calculating student learning
outcomes, each teacher—whether
employed in a large school or a small
school—has some impact on student
learning. Under our regulations, these
impacts would be aggregated across all
schools (or at least all public schools in
the State in which the program is
located) that employ novice teachers the
program had prepared.
For small teacher preparation
programs, we believe that a State’s use
of the aggregation methods reasonably
balances the need for annual reporting
on teacher preparation program
performance with the special challenges
of generating a meaningful annual
snapshot of program quality for
programs that annually produce few
teachers. By permitting aggregation to
the threshold level of similar or broader
programs run by the same teacher
preparation entity (paragraph
(b)(3)(ii)(A)), or over a period of up to four years (paragraph (b)(3)(ii)(B)), or both (paragraph (b)(3)(ii)(C)), we are
offering States options for meeting their
annual reporting responsibilities for all
programs. However, if aggregation under
any of the methods identified in
§ 612.4(b)(3)(ii)(A)–(C) would still not
yield the requisite program size
threshold of 25 recent graduates or such
lower number that a State establishes, or
if reporting such data would be
inconsistent with Federal or State
privacy and confidentiality laws and
regulations, § 612.4(b)(3)(ii)(D) and
§ 612.4(b)(5) provide that the State
would not need to report data on, or
identify an overall performance rating
for, that program.
Our regulations give States flexibility
to determine, with their consultative
groups, their own ways of determining
a teacher preparation program’s
performance. But if a State were to use
the ‘‘lowest common denominator’’ in
evaluating programs, as the commenter
suggested, it would not be meeting the
requirement in § 612.4(b)(1) to identify
meaningful differentiation between
programs. We continue to caution
against making comparisons of the
performance of each teacher preparation
program, or the data for each indicator
and criterion a State uses to determine
the overall level of performance, that
States report in their SRCs. Each teacher
preparation program is different; each
has a different mission and draws
different groups of aspiring teachers.
The purpose of this reporting is to
permit the public to understand which
programs a State determines to be low-performing or at-risk of being low-performing, and the reasons for this
determination. The regulations do not
create a national ranking system for
comparing the performance of programs
across States. For these reasons, we do
not believe that the regulations provide
perverse incentives for States to lower
their standards relative to other States.
While we appreciate the commenter’s
recommendation that States be required
to use a set of normed criteria common
across all sized programs and all States,
section 205(b) of the HEA requires each
State to include in its SRC its criteria for
assessing program performance,
including indicators of academic
content knowledge and teaching skills.
Therefore, subject only to use of the
indicators of academic content
knowledge and teaching skills defined
in these regulations, the law provides
that each State determine how to assess
a program’s performance and, in doing
so, how to weight different criteria and
indicators that bear on the overall
assessment of a program’s performance.
We appreciate the commenters’
statements about potential challenges
and limitations that the regulations’
aggregation procedures pose for small
teacher preparation programs. However,
while we agree that a State’s use of these
procedures for small programs may
produce results that are less meaningful
than those for programs that annually
produce 25 or more recent graduates (or
such lower threshold as the State
establishes), we believe that they do
provide information that is far more
meaningful than the omission of
information about performance of these
small programs altogether. We also
appreciate commenters’ concerns that
for some States, the process of
aggregating program data could entail
significant effort. But we assume that
data for indicators of this and other
programs of the same teacher
preparation entities would be procured
electronically, and, therefore, do not
believe that aggregation of data would
necessarily need to be performed
manually or that the effort involved
would be ‘‘excessive’’. Moreover, the
commenters do not explain why use of
the aggregation methods to identify
programs that are low-performing or atrisk of being low-performing should
lead to significant inconsistencies across
entities and from one year to the next,
nor do we agree this will be the case.
Like the commenter, we are
concerned about protection of
individual personnel data from public
disclosure. But we do not see how the
procedures for aggregating data on small
programs, such that what the State
reports concerns a combined program
that meets the size threshold of 25 (or
such lower size threshold as the State
establishes) creates legitimate concerns
about such disclosure. And as our
proposed regulations did not contain a
size threshold of 10, we do not believe
we need to make edits to address the
specific commenters’ concerns
regarding our threshold number.
Changes: None.
Aggregating Data for Teacher
Preparation Programs Operated by the
Same Entity (34 CFR 612.4(b)(3)(ii)(A))
Comments: One commenter expressed
concerns for how our proposed
definition of a teacher preparation
program meshed with how States would
report data for and make an overall
assessment of the performance of small
teacher preparation programs. The
commenter noted that the proposed
regulations define a teacher preparation
program as a program that is ‘‘offered by
a teacher preparation entity that leads to
a specific State teacher certification or
licensure in a specific field.’’ It therefore
appears that a program that is a
‘‘secondary mathematics program’’
would instead be a ‘‘secondary
program.’’ Based on the proposed
regulatory language about aggregation of
performance data among teacher
preparation programs that are operated
by the same teacher preparation entity
and are similar to or broader than the
program (§ 612.4(b)(3)(ii)(A)), the
commenter added that it appears that a
State can collapse secondary content
areas (e.g., biology, physics) and call it
a ‘‘secondary program.’’
Discussion: As explained in our
discussion of the prior comments, we
feel that meeting the program size
threshold of 25 novice teachers (or any
lower threshold a State establishes) by
aggregating performance data for each of
these smaller programs with
performance data of similar or broader
programs that the teacher preparation
entity operates (thus, in effect, reporting
on a broader-based set of teacher
preparation programs) is an acceptable
and reasonable way for a State to report
on the performance of these programs.
Depending on program size, reporting
could also be even broader, potentially
having reporting for the entire teacher
preparation entity. Indicators of teacher
preparation performance would then be
outcomes for all graduates of the
combined set of programs, regardless of
what subjects they teach. A State’s use
of these aggregation methods balances
the need to annually report on program
performance with the special challenges
of generating a meaningful annual
snapshot of program quality for
programs that annually produce few
novice teachers. We understand the
commenter’s concern that these
aggregation measures do not precisely
align with the definition of teacher
preparation program and permit, to use
the commenter’s example, a program
that is a ‘‘secondary mathematics
program’’ to potentially have its
performance reported as a broader
‘‘secondary program.’’ But as we noted
in our response to prior comments, if a
State does not choose to establish a
lower size threshold that would permit
reporting of the secondary mathematics
program, aggregating performance data
for that program with another similar
program still provides benefits that far
exceed having the State report no
program performance information at all.
TEACH Grant eligibility would not be
impacted because either the State will
determine and report the program’s
performance by aggregating relevant
data on that program with data for other
teacher preparation programs that are
operated by the same teacher
preparation entity and are similar to or
broader than the program in content, or
the program will meet the exceptions
provided in § 612.4(b)(3)(ii)(D) and
§ 612.4(b)(5).
Changes: None.
Aggregating Data in Performance
Reporting (34 CFR 612.4(b)(3)(ii)(B))
Comments: Several commenters
stated that aggregating data for any
given teacher preparation program over
four years to meet the program size
threshold would result in a significant
lack of reliability; some urged the
Department to cap the number of years
allowed for aggregating data at three
years. Another commenter raised
concerns about reported data on any
given program being affected by
program characteristics that are prone to
change significantly in the span of four
years (i.e., faculty turnover and changes
in clinical practice, curriculum, and
assessments). The commenter noted that
many States’ programs will not meet the
criterion of setting the minimum
number of program completers, which
the commenter stated our proposed
regulations set at ten. The commenter
asked the Department to consider a
number of aggregation methods to reach
a higher completer count.
Discussion: The proposed regulations
did not establish, as a threshold for
reporting performance data and the
program’s level of performance, a
minimum of ten program completers.
Rather, where a teacher preparation
program does not annually produce 25
or more recent graduates (or such lower
threshold as the State may establish),
proposed § 612.4(b)(3)(ii)(B) would
permit a State to aggregate its
performance data in any year with
performance data for the same program
generated over a period of up to four
years. We appreciate that aggregating
data on a program’s new teachers over
a period of up to four years is not ideal;
as commenters note, program
characteristics may change significantly
in the span of four years.
However, given the challenges of
having States report on the performance
of small programs, we believe that
providing States this option, as well as
options for aggregating data on the
program with similar or broader
programs of the same teacher
preparation entity (§§ 612.4(b)(3)(ii)(A)
and (C)), allows the State to make a
reasonable determination of the
program’s level of performance. This is
particularly so given that the regulations
require that the State identify only
whether a given teacher preparation
program is low-performing or at-risk of
being low-performing. We note that
States have the option to aggregate
across programs within an entity if, in consultation with stakeholders, they find that produces a more accurate
representation of program quality. See
§ 612.4(b)(3)(ii)(A). We believe that a
State’s use of these alternative methods
would produce more reliable and valid
measures of quality for each of these
smaller programs and reasonably
balance the need annually to report on
program performance with the special
challenges of generating a meaningful
annual snapshot of program quality for
programs that annually produce few
novice teachers.
The commenters who recommended
reducing the maximum time for
aggregating data on the same small
program from four years to three did not explain why omitting a report on program performance
altogether, if the program was still below the size threshold after three years, was preferable to the option of having an additional year in which to reach that threshold. We do not believe that it is.
Moreover, if a State does not want to
aggregate performance data for the same
small program over a full four years, the
regulations permit it instead to combine
performance data with data for other
programs operated by the same entity
that are similar or broader.
Changes: None.
Aggregating Data in Performance
Reporting of Small Teacher Preparation
Programs (34 CFR 612.4(b)(3)(ii)(C))
Comments: Commenters noted that
while the proposed rule asserts that
States may use their discretion on how
to report on the performance of teacher
preparation programs that do not meet
the threshold of 25 novice teachers (or
any lower threshold the State
establishes), the State may still be
reporting on less than half of its
programs. Commenters note that if this
occurs, the Department’s approach will
not serve the purpose of increased
accountability of all programs. Another
commenter stated that human judgment
would have to be used to aggregate data
across programs or across years in order
to meet the reporting threshold, and this
would introduce error in the level of
performance the State assigns to the
program in what the commenter
characterizes as a high-stakes
accountability system.
Another commenter appears to
understand that the government wants
to review larger data fields for analysis
and reporting, but stated that the
assumption that data from a program with a smaller ‘‘n’’ size are not worth reporting may dampen innovation and learning at a sponsoring organization that has a stated goal of producing a limited number of teachers or that is located in a locale needing a limited number of teachers.
The commenter noted that, if a State
were to combine programs, report years,
or some other combination to get to 25,
the Federally stated goal of collecting
information about each program, rather
than the overall sponsoring
organization, is gone. The commenter
argued that § 612.4(c), which the
commenter states requires that States
report on teacher preparation at the
individual program level, appears to
contradict the 25-or-more completer rule for reporting.
Discussion: We expect that, working
with their consultative group (see
§ 612.4(c)), States will adopt reasonable
criteria for deciding which procedure to
use in aggregating performance data for
programs that do not meet the minimum
threshold. We also expect that a key
factor in the State’s judgment of how to
proceed will be how best to minimize
error and confusion in reporting data for
indicators of academic content
knowledge and teaching skills and other
criteria the State uses, and the program’s
overall level of performance. States will
want to produce the most reliable and
valid measures of quality for each of
these smaller programs. Finally, while
the commenter is correct that § 612.4(c)
requires States to work with a
consultative group on procedures for
assessing and reporting the performance
of each teacher preparation program in
the State, how the State does so for
small programs is governed by
§ 612.4(b)(3)(ii).
Changes: None.
No Required State Reporting on Small
Teacher Preparation Programs That
Cannot Meet Reporting Options (34 CFR 612.4(b)(3)(ii)(D))
Comments: Some commenters urged
the Department not to exempt from
State title II reporting those teacher
preparation programs that are so small
they are unable to meet the proposed
threshold size requirements even with
the options for small programs we had
proposed.
Discussion: If a teacher preparation
program produces so few recent
graduates that the State cannot use any
of the aggregation methods to enable
reporting of program performance
within a four-year period, we do not
believe that use of the regulations’
indicators of academic content
knowledge and teaching skills to assess
its performance will produce
meaningful results.
Changes: None.
No Required State Reporting Where
Inconsistent With Federal and State
Privacy and Confidentiality Laws (34
CFR 612.4(b)(3)(ii)(E))
Comments: Two commenters objected
to the proposed regulations because of
concerns that the teacher evaluation
data and individual student data that
would be collected and reported would
potentially violate State statutes that protect elementary and secondary student performance data and teacher evaluation results from being shared with any outside entity. One commenter
expressed general concern about
whether this kind of reporting would
violate the privacy rights of teachers,
particularly those who are working in
their initial years of teaching.
Another commenter recommended
that the proposed regulations include
what the commenter characterized as
the exemption in the Family
Educational Rights and Privacy Act
(FERPA) (34 CFR 99.31 or 99.35) that
allows for the re-disclosure of student-level data for the purposes of teacher
preparation program accountability. The
commenter stressed that the proposed
regulations do not address a restriction
in FERPA that prevents teacher
preparation programs from being able to
access data that the States will receive
on program performance. The
commenter voiced concern that as a
result of this restriction in FERPA, IHEs
will be unable to perform the analyses
to determine which components of their
teacher preparation programs are
leading to improvements in student
academic growth and which are not,
and urged that we include an exemption
in 34 CFR 99.31 or 99.35 to permit the
re-disclosure of student-level data to
IHEs for the purposes of promoting
teacher preparation program
accountability. From a program
improvement standpoint, the
commenter argues that aggregated data
are meaningless; teacher preparation
programs need fine-grained, person-specific data (data at the lowest level
possible) that can be linked to student
information housed within the program.
Yet another commenter stated that
surveying students (by which we
interpret the comment to mean
surveying elementary or secondary
school students) or parents raises
general issues involving FERPA.
Discussion: The Department
appreciates the concerns raised about
the privacy of information on students
and teachers. Proposed
§ 612.4(b)(4)(ii)(E) provided that a State
is not required to report data on a
particular teacher preparation program
that does not meet the size thresholds
under § 612.4(b)(4)(ii)(A)–(C) if
reporting these data would be
inconsistent with Federal or State
privacy and confidentiality laws and
regulations. We had proposed to limit
this provision to these small programs
because we did (and do) not believe
that, for larger programs, Federal or
State laws would prohibit States or State
agencies from receiving the information
they need under our indicators of
academic content knowledge and
teaching skills to identify a program’s
level of performance. The commenters
did not provide the text of any specific
State law to make us think otherwise,
and for reasons we discuss below, we
are confident that FERPA does not
create such concerns. Still, in an
abundance of caution, we have revised
this provision to clarify that no
reporting of data under § 612.4(b) is
needed if such reporting is inconsistent
with Federal or State confidentiality
laws. We also have redesignated this
provision as § 612.4(b)(5) to clarify that
it is not limited to reporting of small
teacher preparation programs. States
should be aware of any restrictions in
reporting because of State privacy laws
that affect students or teachers.
At the Federal level, the final
regulations do not amend 34 CFR part
99, which are the regulations
implementing section 444 of the General
Education Provisions Act (GEPA),
commonly referred to as FERPA. FERPA
is a Federal law that protects the privacy
of personally identifiable information in
students’ education records. See 20
U.S.C. 1232g; 34 CFR part 99. FERPA
applies to educational agencies and
institutions (elementary and secondary
schools, school districts, colleges and
universities) that are recipients of
Federal funds under a program
administered by the Department. FERPA
prohibits educational agencies and
institutions to which it applies from
disclosing personally identifiable
information from students’ education
records, without the prior written
consent of the parent or eligible student,
unless the disclosure meets an
exception to FERPA’s general consent
requirement. The term ‘‘education
records’’ means those records that are:
(1) Directly related to a student; and (2)
maintained by an educational agency or
institution or by a party acting for the
agency or institution. Education records
would encompass student records that
LEAs maintain and that States will need
in order to have the data needed to
apply the regulatory indicators of
academic content and teaching skills to
individual teacher preparation
programs.
As the commenter implicitly noted,
one of the exceptions to FERPA’s
general consent requirement permits the
disclosure of personally identifiable
information from education records by
an educational agency or institution to
authorized representatives of a State
educational authority (as well as to local
educational authorities, the Secretary,
the Attorney General of the United
States, and the Comptroller General of
the United States) as may be necessary
in connection with the audit,
evaluation, or the enforcement of
Federal legal requirements related to
Federal or State supported education
programs (termed the ‘‘audit and
evaluation exception’’). The term ‘‘State
and local educational authority’’ is not
specifically defined in FERPA.
However, we have previously explained
in the preamble to FERPA regulations
published in the Federal Register on
December 2, 2011 (76 FR 75604, 75606),
that the term ‘‘State and local
educational authority’’ refers to an SEA,
a State postsecondary commission,
Bureau of Indian Education, or any
other entity that is responsible for and
authorized under local, State, or Federal
law to supervise, plan, coordinate,
advise, audit, or evaluate elementary,
secondary, or postsecondary Federal- or
State-supported education programs and
services in the State. Accordingly, an
educational agency or institution, such
as an LEA, may disclose personally
identifiable information from students’
education records to a State educational
authority that has the authority to access
such information for audit, evaluation,
compliance, or enforcement purposes
under FERPA.
We understand that all SEAs exercise
this authority with regard to data
provided by LEAs, and therefore FERPA
permits LEAs to provide to SEAs the
data the State needs to assess the
indicators our regulations require.
Whether other State agencies such as
those that oversee or help to administer
aspects of higher education programs or
State teacher certification requirements
are also State education authorities, and
so may likewise receive such data,
depends on State law. The Department
would therefore need to consider State
law (including valid administrative
regulations) and the particular
responsibilities of a State agency before
providing additional guidance about
whether a particular State entity
qualifies as a State educational authority
under FERPA.
The commenter would have us go
further, and amend the FERPA
regulations to permit State educational
authorities to re-disclose this personally
identifiable information from students’
education records to IHEs or the
programs themselves in order to give
them the disaggregated data they need
to improve the programs. While we
understand the commenter’s objective,
we do not have the legal authority to do
this.
Finally, in response to other
comments, FERPA does not extend
privacy protections to an LEA’s records
on teachers. Nor do the final regulations
require any reporting of survey results
from elementary or secondary school
students or their parents. To the extent
that either is maintained by LEAs,
disclosures would be subject to the
same exceptions and limitations under
FERPA as records of or related to
students.
Changes: We have revised
§ 612.4(b)(3)(ii)(E) and have
redesignated it as § 612.4(b)(5) to clarify
that where reporting of data on a
particular program would be
inconsistent with Federal or State
privacy or confidentiality laws or
regulations, the exclusion from State
reporting of these data is not limited to
small programs subject to
§ 612.4(b)(3)(ii).
Fair and Equitable Methods:
Consultation With Stakeholders (34 CFR
612.4(c)(1))
Comments: We received several
comments on the proposed list of
stakeholders that each State would be
required to include, at a minimum, in
the group with which the State must
consult when establishing the
procedures for assessing and reporting
the performance of each teacher
preparation program in the State
(proposed § 612.4(c)(1)(i)). Some
commenters supported the list of
stakeholders. One commenter
specifically supported the inclusion of
representatives of institutions serving
minority and low-income students.
Some commenters believed that, as
the relevant stakeholders will vary by
State, the regulations should not specify
any of the stakeholders that each State
must include, leaving the determination
of necessary stakeholders to each State’s
discretion.
Some commenters suggested that
States be required to include
representatives beyond those listed in
the proposed rule. In this regard,
commenters stated that representatives
of small teacher preparation programs
are needed to help the State to annually
revisit the aggregation of data for
programs with fewer novice teachers
than the program size threshold, as
would be required under proposed
§ 612.4(b)(4)(ii). Some commenters
recommended adding advocates for low-income and underserved elementary
and secondary school students. Some
commenters also stated that advocates
for students of color, including civil
rights organizations, should be required
members of the group. In addition,
commenters believed that the
regulations should require the inclusion
of a representative of at least one teacher
preparation program provided through
distance education, as distance
education programs will have unique
concerns.
One commenter recommended adding
individuals with expertise in testing and
assessment to the list of stakeholders.
This commenter noted, for example,
that there are psychologists who have
expertise in aspects of psychological
testing and assessment across the
variety of contexts in which
psychological and behavioral tests are
administered. The commenter stated
that, when possible, experts such as
these who are vested stakeholders in
education should be consulted in an
effort to ensure the procedures for
assessing teacher preparation programs
are appropriate and of high quality, and
that their involvement would help
prevent potential adverse, unintended
consequences in these assessments.
Some commenters supported the need
for student and parent input into the
process of establishing procedures for
evaluating program performance but
questioned the degree to which
elementary and secondary school
students and their parents should be
expected to provide input on the
effectiveness of teacher preparation
programs.
One commenter supported including
representatives of school boards, but
recommended adding the word ‘‘local’’
before ‘‘school boards’’ to clarify that
the phrase ‘‘school boards’’ does not
simply refer to State boards of
education.
Discussion: We believe that all States
must consult with the core group of
individuals and entities that are most
involved with, and affected by, how
teachers are prepared to teach. To
ensure that this is done, we have
specified this core group of individuals
and entities in the regulations. We agree
with the commenters that States should
be required to include in the group of
stakeholders with whom a State must
consult representatives of small teacher
preparation programs (i.e., programs
that produce fewer than a program size
threshold of 25 novice teachers in a
given year or any lower threshold set by
a State, as described in § 612.4(b)(3)(ii)).
We agree that the participation of
representatives of small programs, as is
required by § 612.4(c)(ii)(D), is essential
because one of the procedures for
assessing and reporting the performance
of each teacher preparation program that
States must develop with stakeholders
includes the aggregation of data for
small programs (§ 612.4(c)(1)(ii)(B)).
We also agree with commenters that
States should be required to include as
stakeholders advocates for underserved
students, such as low-income students
and students of color, who are not
specifically advocates for English
learners and students with disabilities.
Section 612.4(c)(ii)(I) includes these
individuals, and they could be, for
example, representatives of civil rights
organizations. To best meet the needs of
each State, and to provide room for
States to identify other groups of
underserved students, the regulations
do not specify what those additional
groups of underserved students must be.
We agree with the recommendation to
require States to include a
representative of at least one teacher
preparation program provided through
distance education in the group of
stakeholders, because teacher preparation programs provided through distance education differ from brick-and-mortar programs and warrant representation in the stakeholder group.
Under the final regulations, except for
the teacher placement rates, States
collect information on those programs
and report their performance on the
same basis as brick-and-mortar
programs. See the discussion of
comment on Program-Level Reporting
(including distance education) (34 CFR
612.4(a)(1)(i)).
While a State may include individuals with expertise in testing and assessment in the group of stakeholders, we do not require this because States may instead wish to consult with such individuals through other arrangements, or may have other means of acquiring the information they need in this area. Nonetheless, we encourage States to use their discretion to add representatives from other groups to ensure that the process for developing their procedures and for assessing and reporting program performance is fair and equitable.
We thank commenters for their
support for our inclusion of
representatives of ‘‘elementary through
secondary students and their parents’’
in the consultative group. We included
them because of the importance of
having teacher preparation programs
focus on their ultimate customers—
elementary and secondary school
students.
Finally, we agree that the regulation
should clarify that the school board
representatives whom a State must
include in its consultative group of
stakeholders are those of local school
boards. Similarly, we believe that the
regulation should clarify that the
superintendents whom a State must
include in the group of stakeholders are
LEA superintendents.
Changes: We have revised § 612.4(c)(1)(i) to clarify that a State must include representatives of small programs, of other groups of underserved students, of local school boards, and of LEA superintendents, as well as a representative of at least one teacher preparation program provided through distance education, in the group with which the State must consult when establishing its procedures.
Comments: Commenters
recommended that States should not be
required to establish consequences
(associated with a program’s identification as low-performing or at-risk of being low-performing), as required under proposed § 612.4(c)(1)(ii)(C), until after the phase-in of the regulations. Commenters stated
that, because errors will be made in the
calculation of data and in determining
the weights associated with specific
indicators, States should be required to
calculate, analyze, and publish the data
for at least two years before high-stakes
consequences are attached. Commenters
believed that this would ensure initial
unintended consequences are identified
and addressed before programs are
subject to high-stakes consequences.
Commenters also expressed concerns
about the ability of States, under the
proposed timeline for implementation,
to implement appropriate opportunities
for programs to challenge the accuracy
of their performance data and
classification of their program under
proposed § 612.4(c)(1)(ii)(D).
Commenters also stated that the
proposed requirement that the
procedures for assessing and reporting
the performance of each teacher
preparation program in the State must
include State-level rewards and
consequences associated with the
designated performance levels is
inappropriate because the HEA does not
require States to develop rewards or
consequences associated with the
designated performance levels of
teacher preparation programs.
Commenters also questioned how much information about the fiscal status of the State would need to be shared with the group of stakeholders establishing the procedures in order to determine what the rewards should be for high-performing programs.
Commenters noted that rewards are
envisioned as financial in nature, but
States operate under tight fiscal
constraints. Commenters believed that
States would not want to find
themselves in an environment where
rewards could not be distributed yet
consequences (i.e., the retracting of
monies) would ensue.
In addition, commenters were
concerned about the lack of standards in
the requirement that States implement a
process for programs to challenge the
accuracy of their performance data and
classification. Commenters noted that
many aspects of the rating system carry
the potential for inaccurate data to be
inputted or for data to be miscalculated.
Commenters noted that the proposed
regulations do not address how to
ensure a robust and transparent appeals
process for programs to challenge their
classification.
Discussion: We believe the
implementation schedule for these final
regulations provides sufficient time for
States to implement the regulations,
including the time necessary to develop
the procedures for assessing and
reporting the performance of each
teacher preparation program in the State
(see the discussion of comments related
to the implementation timeline for the
regulations in General (Timeline) (34
CFR 612.4(a)(1)(i)) and Reporting of
Information on Teacher Preparation
Program Performance (Timeline) (34
CFR 612.4(b)). We note that States can
use results from the pilot reporting year,
when States are not required to classify
program performance, to adjust their
procedures. These adjustments could
include the weighting of indicators, the
procedure for program challenges, and
other changes needed to ensure that
unintended consequences are identified
and addressed before the consequences
have high stakes for programs.
Additionally, under § 612.4(c)(2), a State
has the discretion to determine how
frequently it will periodically examine
the quality of the data collection and
reporting activities it conducts, and
States may find it beneficial to examine
and make changes to their systems more
frequently during the initial
implementation stage.
The regulations do not require a State
to have State-level rewards or
consequences associated with teacher
preparation performance levels. To the
extent that the State does,
§ 612.4(b)(2)(iii) requires a State to
provide that information in the SRC,
and § 612.4(c)(1)(ii)(C) requires the State
to include those rewards or
consequences in the procedures for
assessing and reporting program
performance it establishes in
consultation with a representative group
of stakeholders in accordance with
§ 612.4(c)(1)(i).
Certainly, whether a State can afford
to provide financial rewards is an
essential consideration in the
development of any State-level rewards.
We leave it up to each State to
determine, in accordance with any
applicable State laws or regulations, the
amount of information to be shared in
the development of any State-level
rewards or consequences.
As a part of establishing appropriate
opportunities for teacher preparation
programs to challenge the accuracy of
their performance data and program
classification, States are responsible for
determining the related procedures and
standards, again in consultation with
the required representative group of
stakeholders. We expect that these
procedures and standards will afford
programs meaningful and timely
opportunities to appeal the accuracy of
their performance data and overall
program performance level.
Changes: None.
Fair and Equitable Methods: State
Examination of Data Collection and
Reporting (34 CFR 612.4(c)(2))
Comments: Commenters asserted that
the proposed requirement for a State to
periodically examine the quality of its
data collection and reporting activities
under proposed § 612.4(c)(2) is
insufficient. The commenters contended
that data collection and reporting
activities must be routinely and
rigorously examined and analyzed to
ensure transparency and accuracy in the
data and in the high-stakes decisions that result from the use of the data.
According to these commenters, State
data systems are not at this time
equipped to fully implement the
regulations, and thus careful scrutiny of
the data collection—especially in the
early years of the data systems—is vital
to ensure that data from multiple
sources are accurate, and, if they are
not, that modifications are made.
Commenters also suggested that there
should be a mechanism to adjust
measures when schools close or school
boundaries change as programs with
smaller numbers of graduates
concentrated in particular schools could
be significantly impacted by these
changes that are outside the control of
teacher preparation programs.
Discussion: The regulations do not
specify how often a State must examine
the quality of its data collection and
reporting activities and make any
appropriate modifications, requiring
only that it be done ‘‘periodically.’’ We
think that the frequency and extent of
this review is best left to each State, in
consultation with its representative
group of stakeholders. We understand,
as indicated by commenters, that many
State data systems are not currently
ready to fully implement the
regulations, and therefore it is likely
that such examinations and
modifications will need to be made
more frequently during the development
stage than will be necessary once the
systems have been in place and
operating for a while. As States have the
discretion to determine the frequency of
their examinations and modifications,
they may establish triggers for
examining and, if necessary, modifying
their procedures. This could include
developing a mechanism to modify the
procedures in certain situations, such as
where school closures and school
boundary changes may inadvertently
affect certain teacher preparation
programs.
Changes: None.
Section 612.5 What indicators must a
State use to report on teacher
preparation program performance for
purposes of the State report card?
Indicators a State Must Use To Report
on Teacher Preparation Programs in the
State Report Card (34 CFR 612.5(a))
Comments: Some commenters
expressed support for the proposed
indicators, believing they may push
States to hold teacher preparation
programs more accountable. Some
commenters were generally supportive
of the feedback loop where teacher
candidate placement, retention, and
elementary and secondary classroom
student achievement results can be
reported back to the programs and
published so that the programs can
improve.
In general, many commenters
opposed the use of the indicators of
academic content knowledge and
teaching skills in the SRC, stating that
these indicators are arbitrary, and that
there is no empirical evidence that
connects the indicators to a quality
teacher preparation program; that the
proposed indicators have never been
tested or evaluated to determine their
workability; and that there is no
consensus in research or among the
teaching profession that the proposed
performance indicators combine to
accurately represent teacher preparation
program quality. Other commenters
opined that there is no evidence that the
indicators selected actually represent
program effectiveness, and further
stated that no algorithm would
accurately reflect program effectiveness
and be able to connect those variables
to a ranking system. Many commenters
expressed concern about the proposed
assessment system, stating that
reliability and validity data are lacking.
Some commenters indicated that
reporting may not need to be annual
since multi-year data are more reliable.
Commenters also stated that valid
conclusions about teacher preparation
program quality cannot be drawn using
data with questionable validity and with
confounding factors that cannot be
controlled at the national level to
produce a national rating system for
teacher preparation programs. Many
other commenters stated that teacher
performance cannot be equated with the
performance of the students they teach
and that there are additional factors that
impact teacher preparation program
effectiveness that have not been taken
into account by the proposed
regulations. We interpret other
comments as expressing concern that
use of the outcome indicators would not
necessarily help to ensure that teachers
are better prepared before entering the
classroom.
Commenters stated that there are
many potential opportunities for
measurement error in the outcome
indicators and therefore the existing
data do not support a large, fully scaled
implementation of this accountability
system. Commenters argued that the
regulations extend an untested performance assessment into a high-stakes realm by determining eligibility for Federal student aid through
assessing the effectiveness of each
teacher preparation program. One
commenter stated that, in proposing the
regulations, the Department did not
consider issues that increase
measurement error, and thus decrease
the validity of inferences that can be
made about teacher quality. For
example, students who graduate but do
not find a teaching job because they
have chosen to stay in a specific
geographic location would essentially
count against a school and its respective
ranking. Several commenters suggested
that we pilot the proposed system and
assess its outcomes, using factors that
are flexible and contextualized within a
narrative, without high-stakes
consequences until any issues in data
collection are worked out.
Discussion: We appreciate
commenters’ concerns about the validity
and reliability of the individual
indicators of academic content
knowledge and teaching skill in the
proposed regulations, as well as the
relationship between these indicators
and the level of performance of a
teacher preparation program. However,
we believe the commenters
misunderstood the point we were
making in the preamble to the NPRM
about the basis for the proposed
indicators. We were not asserting that
rigorous research studies had
necessarily demonstrated the proposed
indicators—and particularly those for
student learning outcomes, employment
outcomes, employment outcomes in
high-need schools and survey outcomes—to be valid and reliable.
Where we believe that such research
shows one or more of the indicators to
be valid and reliable, we have
highlighted those findings in our
response to the comment on that
indicator. But our assertion in the
preamble to the NPRM was that use of
these indicators would produce
information about the performance-level
of each teacher preparation program
that, speaking broadly, is valid and
reliable. We certainly did not say that
these indicators were necessarily the
only measures that would permit the
State’s identification of each program’s
level of performance to be appropriate.
And in our discussion of public
comments we have clarified that States
are free to work with their consultative
group (see § 612.4(c)) to establish other
measures the State would use as well.
In broad terms, validity here refers to
the accuracy of these indicators in
measuring what they are supposed to
measure, i.e., that they collectively work
to provide significant information about
a teacher preparation program’s level of
performance. Again, in broad terms,
reliability here refers to the extent to
which these indicators collectively can
be used to assess a program’s level of
performance and to yield consistent
results.
For reasons we explain below, we
believe it is important that teacher
preparation programs produce new
teachers who positively impact student
academic success, take jobs as teachers
and stay in the profession at least three
years, and feel confident about the
training the programs have provided to
them. This is what these three
indicators in our final regulations do—and, by contrast, what is missing from the criteria that States have to date reported using in their SRCs to assess program performance.
We do not believe that State conclusions about the performance levels of their teacher preparation programs can be valid or reliable if they focus, as State criteria have done to date, on the inputs a program offers, any more than an automobile manufacturer’s assessment of the validity and reliability of its safety and performance testing makes sense if it does not pay attention to how the vehicles actually perform on the road.
Our final regulations give States,
working with their stakeholders, the
responsibility for establishing
procedures for ensuring that use of these
indicators, and such other indicators of
academic content knowledge and
teaching skills and other criteria the
State may establish, permits the State to
reasonably identify (i.e., with reasonable
validity and reliability) those teacher
preparation programs that are low-performing or at-risk of being low-performing. We understand that, to do
this, they will need to identify and
implement procedures for generating
relevant data on how each program
reflects these measures and criteria, and
for using those data to assess each
program in terms of its differentiated
levels of performance. But we have no
doubt that States can do this in ways
that are fair to entities that are operating
good programs while at the same time
are fair to prospective teachers,
prospective employers, elementary and
secondary school students and their
parents, and the general public—all of
whom rely on States to identify and
address problems with low-performing
or at-risk programs.
We further note that by defining novice teacher to include a three-year teaching period, which applies to the data collected for student learning outcomes and employment outcomes, the regulations will have States use data for these indicators of program performance over multiple years. Doing so will increase
reliability of the overall level of
performance the State assigns to each
program in at least two respects. First,
it will decrease the chance that one
aberrational year of performance or any
given cohort of program graduates (or
program participants in the case of
alternative route teacher preparation
programs) has a disproportionate effect
on a program’s performance. And
second, it will decrease the chance that
the level of performance a State reports
for a program will be invalid or
unreliable.
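For illustration only (the figures and names below are invented, and the regulations prescribe no formula), a simple size-weighted pooling shows how multi-year data dampen the effect of one aberrational cohort:

# Hypothetical illustration: one aberrational cohort moves a three-year
# pooled mean far less than it moves a single-year figure.

cohort_means = {2015: 0.52, 2016: 0.49, 2017: 0.21}  # 2017 is aberrational
cohort_sizes = {2015: 30, 2016: 28, 2017: 26}

single_year = cohort_means[2017]

# Size-weighted mean across the three cohorts.
pooled = (sum(cohort_means[y] * cohort_sizes[y] for y in cohort_means)
          / sum(cohort_sizes.values()))

print(f"single-year figure: {single_year:.2f}")  # 0.21
print(f"three-year pooled:  {pooled:.2f}")       # 0.41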
We stress, however, that the student
learning outcomes, employment
outcomes, and survey outcomes that the
regulations require States to use as
indicators of academic content and
teaching skills are not simply measures
that logically are important to assessing
a program’s true level of performance.
Rather, as we discuss below, we believe
that these measures are also workable,
based on research, and reflective of the
direction in which many States and
programs are going, even if not
reflecting an outright consensus of all
teacher preparation programs.
In this regard, we disagree with the
commenters’ assertions that these
measures are arbitrary, lack evidence of
support, and have not been tested. The
Department’s decision to require use of
these measures as indicators of
academic content knowledge and
teaching skills is reinforced by the
adoption of similar indicators by
CAEP,15 which reviews over half of the
Nation’s teacher preparation programs—
and by the States of North Carolina,
Tennessee, Ohio, and Louisiana, which
already report annually on indicators of
teacher preparation program
performance based on data from State
assessments. The recent GAO report
determined that more than half the
States already utilize data on program
completers’ effectiveness (such as
surveys, placement rates, and teacher
evaluation results) in assessing
15 CAEP 2013 Accreditation Standards, Standard
4, Indicator 4. (2013). Retrieved from https://
caepnet.org/standards/introduction. Amended by
the CAEP Board of Directors February 13, 2015.
programs, with at least ten more
planning to do so.16 These measures
also reflect input received from many
non-Federal negotiators during
negotiated rulemaking. Taken together,
we believe that the adoption of these
measures of academic content
knowledge and teaching skills reflects
the direction in which the field is
moving, and the current use of similar
indicators by several SEAs demonstrates
their feasibility.
We acknowledge that many factors
account for the variation in a teacher’s
impact on student learning. However,
we strongly believe that a principal
function of any teacher preparation
program is to train teachers to promote
the academic growth of all students
regardless of their personal and family
circumstances, and that the indicators
whose use the regulations prescribe are
already being used to help measure
programs’ success in doing so. For
example, Tennessee employs some of
the outcome measures that the
regulations require, and reports that
some teacher preparation programs
consistently produce teachers with
statistically significant student learning
outcomes over multiple years.17
Delaware also collects and reports data
on the performance and effectiveness of
program graduates by student
achievement and reports differentiated
student learning outcomes by teacher
preparation program.18 Studies of
programs in Washington State 19 and
New York City,20 as well as data from
the University of North Carolina
system,21 also demonstrate that
graduates of different teacher
preparation programs show statistically
significant differences in value-added
scores. The same kinds of data from
Tennessee and North Carolina show
large differences in teacher placement
and retention rates among programs. In
Ohio 22 and North Carolina, survey data
16 GAO at 13–14.
17 See Report Card on the Effectiveness of Teacher Training Programs, Tennessee 2014 Report Card. (n.d.). Retrieved from www.tn.gov/thec/article/report-card.
18 See 2015 Delaware Educator Preparation Program Reports. (n.d.). Retrieved June 27, 2016, from www.doe.k12.de.us/domain/398.
19 Goldhaber, D., & Liddle, S. (2013). The Gateway to the Profession: Assessing Teacher Preparation Programs Based on Student Achievement. Economics of Education Review, 34, 29–44.
20 Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher Preparation and Student Achievement. Education Evaluation and Policy Analysis, 31(4), 416–440.
21 See UNC Educator Quality Dashboard. (n.d.). Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
22 See, for example: 2013 Educator Preparation Performance Report, Adolescence to Young Adult (7–12) Integrated Mathematics, Ohio State University. Retrieved from https://regents.ohio.gov/educator-accountability/performance-report/2013/OhioStateUniversity/OHSU_IntegratedMathematics.pdf.
also demonstrate that, on average,
graduates of teacher preparation
programs can have large differences in
opinions of the quality of their
preparation for the classroom. And a
separate study of North Carolina teacher
preparation programs found statistically
significant correlations between
programs that collect outcomes data on
graduates and their graduates’ value-added scores.23 These results reinforce
that teacher preparation programs play
an important role in teacher
effectiveness, and so give prospective
students and employers important
information about which teacher
preparation programs most consistently
produce teachers who can best promote
student academic achievement.
While we acknowledge that some
studies of teacher preparation
programs 24 find very small differences
at the program level in graduates’
average effect on student outcomes, we
believe that the examples we have cited
above provide a reasonable basis for
States’ use of student learning outcomes
weighted in ways that they have
determined best reflect the importance
of this indicator. In addition, we believe
the data will help programs develop
insights into how they can more
consistently generate high-performing
graduates.
We have found little research one way
or the other that directly ties the
performance of teacher preparation
programs to employment outcomes and
survey outcomes. However, we believe
that these other measures—program
graduates and alternative route program
participants’ employment as teachers,
retention in the profession, and
perceptions (with those of their
employers) of how well their programs
have trained them for the classroom—
strongly complement use of student
learning outcomes in that they help to
complete the picture of how well
programs have really trained teachers to
23 Henry, G., & Bastian, K. (2015). Measuring Up:
The National Council on Teacher Quality’s Ratings
of Teacher Preparation Programs and Measures of
Teacher Performance.
24 For example: C. Koedel, E. Parsons, M.
Podgursky, & M. Ehlert (2015). ‘‘Teacher
Preparation Programs and Teacher Quality: Are
There Real Differences Across Programs?’’
Education Finance and Policy, 10(4): 508–534; P.
von Hippel, L. Bellows, C. Osborne, J. Arnold
Lincove, & N. Mills (2014). ‘‘Teacher Quality
Differences Between Teacher Preparation Programs:
How Big? How Reliable? Which Programs Are
Different?’’ Retrieved from Social Science Research
Network, https://papers.ssrn.com/sol3/
papers.cfm?abstract_id=2506935.
take and maintain their teaching
responsibilities.
We understand that research into how
best to evaluate both teacher
effectiveness and the quality of teacher
preparation programs continues. To
accommodate future developments in
research that improve a State’s ability to
measure program quality as well as
State perspectives of how the
performance of teacher preparation
programs should best be measured, the
regulations allow a State to include
other indicators of academic content
knowledge and teaching skills that
measure teachers’ effects on student
performance (see § 612.5(b)). In
addition, given their importance, while
we strongly encourage States to provide
significant weight in particular to the
student learning outcomes and retention
rate outcomes in high-need schools in
their procedures for assessing program
performance, the Department has
eliminated the proposed requirements
in § 612.4(b)(1) that States consider
these measures ‘‘in significant part.’’
The change confirms States’ ability to
determine how to weight each of these
indicators to reflect their own
understanding of how best to assess
program performance and address any
concerns with measurement error.
Moreover, the regulations offer States a
pilot year, corresponding to the 2017–18
reporting year (for data States are to
report in SRCs by October 31, 2018), in
which to address and correct for any
issues with data collection,
measurement error, validity, or
reliability in their reported data.
Use of these indicators themselves, of
course, does not ensure that novice
teachers are prepared to enter the
classroom. However, we believe that the
regulations, including the requirement
for public reporting on each indicator
and criterion a State uses to assess a
program’s level of performance, provide
strong incentives for teacher preparation
programs to use the feedback from these
measures to ensure that the novice
teachers they train are ready to take on
their teaching responsibilities when
they enter the classroom.
We continue to stress that the data on
program performance that States report
in their SRCs do not create and are not
designed to promote any kind of a
national, in-State, or interstate rating
system for teacher preparation
programs, and caution the public
against using reported data in this way.
Rather, States will use reported data to
evaluate program quality based on the
indicators of academic content
knowledge and teaching skills and other
criteria of program performance that
they decide to use for this purpose. Of
course, the Department and the public
at large will use the reported
information to gain confidence in State
decisions about which programs are
low-performing and at-risk of being low-performing (and are at any other
performance level the State establishes)
and the process and data States use to
make these decisions.
Changes: None.
Comments: Commenters stated that it
is not feasible to collect and report
student learning outcomes or survey
data separately by credential program
for science, technology, engineering,
and mathematics (STEM) programs in a
meaningful way when only one science
test is administered, and teacher
preparation program graduates teach
two or more science disciplines with job
placements in at least two fields.
Discussion: We interpret these
comments to be about teacher
preparation programs that train teachers
to teach STEM subjects. We also
interpret these comments to mean that
certain conditions—including, the
placement or retention of recent
graduates in more than one field, having
only one statewide science assessment
at the high school level, and perhaps
program size—may complicate State
data collection and reporting on the
required indicators for preparation
programs that produce STEM teachers.
The regulations define the term
‘‘teacher of record’’ to clarify that
teacher preparation programs will be
assessed on the aggregate outcomes of
novice teachers who are assigned the
lead responsibility for a student’s
learning in the subject area. In this way,
although they may generate more data
for the student learning outcomes
measure, novice teachers who are
teachers of record for more than one
subject area are treated the same as
those who teach in only one subject
area.
We do not understand why a science
teacher whose district administers only
one examination is in a different
position than a teacher of any other
subject. More important, science is not
yet a tested grade or subject under
section 1111(b)(2) of the ESEA, as
amended by ESSA. Therefore, for the
purposes of generating data on a
program’s student learning outcomes,
States that use the definition of ‘‘student
growth’’ in § 612.2 will determine
student growth for teacher preparation
programs that train science teachers
through use of measures of student
learning and performance that are
rigorous, comparable across schools,
and consistent with State guidelines.
These might include student results on
pre-tests and end-of-course tests,
objective performance-based
assessments, and student learning
objectives.
To the extent that the comments refer
to small programs that train STEM
teachers, the commenters did not
indicate why our proposed procedures
for reporting data and levels of
performance for small teacher
preparation programs did not
adequately address their concerns. For
reasons we discussed in response to
comments on aggregating and then
reporting data for small teacher
preparation programs (§ 612.4(b)(3)(ii)),
we believe the procedures the
regulations establish for reporting
performance of small programs
adequately address concerns about
program size.
Changes: None.
Comments: Commenters noted that
the transition to new State assessments
may affect reporting on student learning
outcomes and stated that the proposed
regulations fail to indicate when and
how States must use the results of State
assessments during such a transition for
the purpose of evaluating teacher
preparation program quality.
Discussion: For various reasons, one
or more States are often transitioning to
new State assessments, and this is likely
to continue as States implement section
1111(b)(2) of the ESEA, as amended by
ESSA. Therefore, transitioning to new
State assessments should not impact a
State’s ability to use data from these
assessments as a measure of student
learning outcomes, since there are valid
statistical methods for determining
student growth even during these
periods of transition. However, how this
should occur is best left to each State
that is going through such a transition,
just as it is best to leave to each State
whether to use another State-determined measure relevant to
calculating student learning outcomes
as permitted by § 612.5(a)(1)(ii)(C)
instead.
Changes: None.
Comments: Commenters
recommended that the student learning
outcomes indicator take into account
whether a student with disabilities uses
accommodations, and who is providing
the accommodation. Another
commenter was especially concerned
about special education teachers’
individualized progress monitoring
plans created to evaluate a student’s
progress on individualized learning
outcomes. The commenter noted that
current research cautions against
aggregation of student data gathered
with these tools for the purposes of
teacher evaluation.
Discussion: Under the regulations,
outcome data is reported on ‘‘teachers of
record,’’ defined as teachers (including
a teacher in a co-teaching assignment)
who have been assigned the lead
responsibility for a student’s learning in
a subject or course section. The teacher
of record for a class that includes
students with disabilities who require
accommodations is responsible for the
learning of those students, which may
include ensuring the proper
accommodations are provided. We
decline to require, as data to be reported
as part of the indicator, the number of
students with disabilities requiring
special accommodations because we
assume that the LEA will meet its
responsibilities to provide needed
accommodations, and out of
consideration for the additional
reporting burden the proposal would
place on States. However, States are free
to adopt this recommendation if they
choose to do so.
In terms of gathering data about the
learning outcomes for students with
disabilities, the regulations do not
require the teacher of record to use
special education teachers’
individualized monitoring plans to
document student learning outcomes
but rather expect teachers to identify,
based on the unique needs of the
students with disabilities, the
appropriate data source. However, we
stress that this issue highlights the
importance of consultation with key
stakeholders, like parents of and
advocates for students with disabilities,
as States determine how to calculate
their student learning outcomes.
Changes: None.
Comments: Commenters
recommended that the regulations
establish the use of other or additional
indicators, including the new teacher
performance assessment edTPA,
measures suggested by the Higher
Education Task Force on Teacher
Preparation, and standardized
observations of teachers in the
classroom. Some commenters
contended that a teacher’s effectiveness
can only be measured by mentor
teachers and university field instructors.
Other commenters recommended
applying more weight to some
indicators, such as students’ evaluations
of their teachers, or increasing emphasis
on other indicators, such as teachers’
scores on their licensure tests.
Discussion: We believe that the
indicators of academic content
knowledge and teaching skills that the
regulations require States to use in
assessing a program’s performance (i.e.,
student learning outcomes, employment
outcomes, survey outcomes, and
information about basic aspects of the
program) are the most important such
indicators in that, by focusing on a few
key areas, they provide direct
information about whether the program
is meeting its basic purposes. We
decline to require that States use
additional or other indicators like those
suggested because we strongly believe they are less direct measures of academic content knowledge and teaching skills and would also add significant cost and complexity.
However, we note that if district
evaluations of novice teachers use
multiple valid measures in determining
performance levels that include, among
other things, data on student growth for
all students, they are ‘‘teacher
evaluation measures’’ under § 612.2.
Therefore, § 612.5(a)(1)(ii) permits the
State to use and report the results of
those evaluations as student learning
outcomes.
Moreover, under § 612.5(b), in
assessing the performance of each
teacher preparation program, a State
may use additional indicators of
academic content and teaching skills of
its choosing, provided the State uses a
consistent approach for all of its teacher
preparation programs and these
additional indicators provide
information on how the graduates
produced by the program perform in the
classroom. In consultation with their
stakeholder groups, States may wish to
use additional indicators, such as
edTPA, teacher classroom observations,
or student survey results, to assess
teacher preparation program
performance.
As we addressed in our discussion of
comment on § 612.4(b)(2)(ii) (Weighting
of Indicators), we encourage States to
give significant weight to student
learning outcomes and employment
outcomes in high-need schools.
However, we have removed from the
final regulations any requirement that
States give special weight to these or
other indicators of academic content
knowledge and teaching skills. Thus,
while States must include in their SRCs
the weights they give to each indicator
and any other criteria they use to
identify a program’s level of
performance, each State has full
authority to determine the weighting it
gives to each indicator or criterion.
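For illustration only (each State sets its own weights, indicators, and cut points, so every name and number below is hypothetical), a State-chosen weighting scheme might combine indicator scores into a performance level along these lines:

# Hypothetical sketch of one State's weighting scheme; the regulations
# leave weights, indicators, and cut points entirely to each State.

WEIGHTS = {
    "student_learning_outcomes": 0.40,
    "employment_outcomes": 0.30,
    "survey_outcomes": 0.20,
    "program_characteristics": 0.10,
}

def composite_score(indicator_scores):
    """Weighted sum of indicator scores, each normalized to [0, 1]."""
    return sum(WEIGHTS[name] * score
               for name, score in indicator_scores.items())

def performance_level(score):
    """Map a composite score onto the minimum three categories the
    regulations require, using hypothetical State-set cut points."""
    if score < 0.40:
        return "low-performing"
    if score < 0.55:
        return "at-risk of being low-performing"
    return "effective or higher"

example = {
    "student_learning_outcomes": 0.45,
    "employment_outcomes": 0.60,
    "survey_outcomes": 0.70,
    "program_characteristics": 1.00,
}
print(performance_level(composite_score(example)))  # effective or higher

Whatever weights a State adopts must, as noted above, themselves be reported in the SRC.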
Changes: None.
Comments: Some commenters
expressed concerns that the regulations
permit the exclusion of some program
graduates (e.g., those leaving the State or
taking jobs in private schools), thus
providing an incomplete representation
of program performance. In particular,
commenters recommended using
measures that capture the academic
content knowledge and teaching skills
of all recent graduates, such as State
licensure test scores, portfolio
assessments, student and parent
surveys, performance on the edTPA,
and the rate at which graduates retake
licensure assessments (as opposed to
pass rates).
Discussion: While the three outcome-based measures required by the
regulations assess the performance of
program graduates who become novice
teachers, the requirement in
§ 612.5(a)(4) for an indication of either
a program’s specialized accreditation or
that it provides certain minimum
characteristics examines performance
based on multiple input-based measures
that apply to all program participants,
including those who do not become
novice teachers. States are not required
to also assess teacher preparation
programs on the basis of any of the
additional factors that commenters
suggest, i.e., State licensure test scores,
portfolio assessments, student and
parent surveys, performance on the
edTPA, and the rate at which graduates
retake licensure assessments. However,
we note that IHEs must continue to
include information in their IRCs on the
pass rates of a program’s students on
assessments required for State
certification. Furthermore, in
consultation with their stakeholders,
States may choose to use the data and
other factors commenters recommend to
help determine a program’s level of
performance.
Changes: None.
Comments: One commenter
recommended that the Department fund
a comprehensive five-year pilot of a
variety of measures for assessing the
range of K–12 student outcomes
associated with teacher preparation.
Discussion: Committing funds for
research is outside the scope of the
regulations. We note that the Institute of
Education Sciences and other research
organizations are conducting research
on teacher preparation programs that
the Department believes will inform
advances in the field.
Changes: None.
Comments: Some commenters stated
that a teacher preparation program’s
cost of attendance and the average
starting salary of the novice teachers
produced by the program should be
included as mandatory indicators for
program ratings because these two
factors, along with student outcomes,
would better allow stakeholders to
understand the costs and benefits of a
specific teacher preparation program.
Discussion: Section 205(b)(1)(F) of the
HEA requires each State to identify in
its SRC the criteria it is using to identify
the performance of each teacher
preparation program within the State,
including its indicators of the academic
knowledge and teaching skills of the
program’s students. The regulations
define these indicators to include four
measures that States must use as these
indicators.
While we agree that information that
helps prospective students identify
programs that offer a good value is
important, the purpose of sections
205(b)(1)(F) and 207(a) of the HEA, and
thus our regulations, is to have States
identify and report on meaningful
criteria that they use to identify a
program’s level of performance—and
specifically whether the program is low-performing or at-risk of being low-performing. While we encourage States
to find ways to make information on a
program’s costs available to the public,
we do not believe the information is
sufficiently related to a program’s level
of performance to warrant the additional
costs of requiring States to report it. For
similar reasons, we decline to add this
consumer information to the SRC as
additional data States need to report
independent of its use in assessing the
program’s level of performance.
Changes: None.
Comments: Multiple commenters
stated that the teacher preparation
system in the United States should
mirror that of other countries and
broaden the definition of classroom
readiness. These commenters stated that
teacher preparation programs should
address readiness within a more
holistic, developmental, and collective
framework. Others stated that the
teacher preparation system should
emphasize experiential and community
service styles of teaching and learning to
increase student engagement.
Discussion: While we appreciate
commenters’ suggestions that teacher
preparation programs should be
evaluated using holistic measures
similar to those used by other countries,
we decline to include these kinds of
criteria because we believe that the
ability to influence student growth and
achievement is the most direct measure
of academic knowledge and teaching
skills. However, the regulations permit
States to include indicators like those
recommended by the commenters in
their criteria for assessing program
performance.
Changes: None.
Comments: Commenters noted that
post-graduation professional
development impacts a teacher’s job
performance in that there may be a
difference between teachers who
continue to learn during their early
teaching years compared to those who
do not, but that the proposed
regulations did not take this factor into
account.
Discussion: By requiring the use of
data from the first, second, and third
year of teaching, the student learning
outcomes measure captures
improvements in the impact of teachers
on student learning made over the first
three years of teaching. To the extent
that professional development received
in the first three years of teaching
contributes to a teacher’s impact on
student learning, the student learning
outcomes measure may reflect it.
The commenters may be suggesting
that student learning outcomes of
novice teachers are partially the
consequence of the professional
development they receive, yet the
proposed regulations seem to attribute
student learning outcomes to only the
teacher preparation program. The
preparation that novice teachers receive
in their teacher preparation programs, of
course, is not the only factor that
influences student learning outcomes.
But for reasons we have stated, the
failure of recent graduates as a whole to
demonstrate positive student learning
outcomes is an indicator that something
in the teacher preparation program is
not working. We recognize that novice
teachers receive various forms of
professional development, but believe
that high-quality teacher preparation
programs produce graduates who have
the knowledge and skills they need to
earn positive reviews and stay in the
classroom regardless of the type of
training they receive on the job.
Changes: None.
Comments: Commenters were
concerned that the proposed regulations
would pressure States to rate some
programs as low-performing even if all
programs in a State are performing
adequately. Commenters noted that the
regulations need to ensure that
programs are all rated on their own
merits, rather than ranked against one
another—i.e., criterion-referenced rather
than norm-referenced. The commenters
contended that, otherwise, programs
would compete against one another
rather than work together to continually
improve the quality of novice teachers.
Commenters stated that such
competition could lead to further
isolation of programs rather than
fostering the collaboration necessary for
addressing shortages in high-need
fields.
Some commenters stated that
although there can be differences in
traditional and alternative route
programs that make comparison
difficult, political forces that are pro- or
anti-alternative route programs can
attempt to make certain types of
programs look better or worse. Further,
commenters noted that it will be
difficult for the Department to enforce
equivalent levels of accountability and
reporting when differences exist across
States’ indicators and relative weighting
decisions.
Another commenter recommended
that, to provide context, programs and
States should also report raw numbers
in addition to rates for these metrics.
Discussion: We interpret the comment
on low-performing programs to argue
that these regulations might be viewed
as requiring a State to rate a certain
number of programs as low performing
regardless of their performance. Section
207(a) of the HEA requires that States
provide in the SRCs an annual list of
low-performing teacher preparation
programs and identify those programs
that are at risk of being put on the list
of low-performing programs. While the
regulations require States to establish at
least three performance categories (those
two and all other programs, which
would therefore be considered effective
or higher), we encourage States also to
differentiate between teacher
preparation programs whose
performance is satisfactory and those
whose performance is truly exceptional.
We believe that recognizing, and where
possible rewarding (see
§ 612.4(c)(1)(ii)(C)), excellence will help
other programs learn from best practice
and facilitate program improvement of
teacher preparation programs and
entities. Actions like these will
encourage collaboration, especially in
preparing teachers to succeed in high-need areas.
However, we stress that the
Department has no expectation or desire
that a State will designate a certain
number or percentage of its programs as
low-performing or at-risk of being low-performing. Rather, we want States to
do what our regulations provide: Assess
the level of performance of each teacher
preparation program based on what they
determine to be differentiated levels of
performance, and report in the SRCs (1)
the data they secure about each program
based on the indicators and other
criteria they use to assess program
performance, (2) the weighting of these
data to generate the program’s level of
performance, and (3) a list of programs
it found to be low-performing or at-risk
of being low-performing. Beyond this,
these regulations do not create, and are
not designed to promote, an in-State or
inter-State ranking system, or to rank
traditional versus alternative route
programs based on the reported data.
We acknowledge that if they choose,
States may employ growth measures
specifically based on a relative
distribution of teacher scores statewide,
which could constitute a ‘‘norm-referenced’’ indicator. While these
statewide scores may not improve on
the whole, an individual teacher
preparation program’s performance can
still show improvement (or declines)
relative to average teacher performance
in the State. The Department notes that
programs are evaluated on multiple
measures of program quality and the
other required indicators can be
criterion-referenced. For example, a
State may set a specific threshold for
retention rate or employer satisfaction
that a program must meet to be rated as
effective. Additionally, States may
decide to compare any norm-referenced
student learning outcomes, and other
indicators, to those of teachers prepared
out of State to determine relative
improvement of teacher preparation
programs as a whole.25 But whether or
not to take steps like these is purely a
State decision.
With respect to the recommendation
that report cards include raw numbers
as well as rates attributable to the
indicators and other criteria used to
assess program performance,
§ 612.4(b)(2)(i) requires the State to
report data relative to each indicator
identified in § 612.5. Section V of the
instructions for the SRC asks for the
numbers and percentages used in the
calculation of the indicators of academic
content knowledge and teaching skills
and any other indicators and criteria a
State uses.
Changes: None.
Comments: Commenters contended
that the proposed regulations do not
specifically address the skills
enumerated in the definition of
‘‘teaching skills.’’
Discussion: The commenters are
correct that the regulations do not
specifically address the various
‘‘teaching skills’’ identified in the
definition of the term in section 200(23)
of the HEA. However, we strongly
believe that they do not need to do so.
The regulations require States to use four indicators of academic
content knowledge and teaching skills—
student learning outcomes, employment
outcomes, survey results, and minimum
program characteristics—in assessing
the level of a teacher preparation
program’s performance under sections
205(b)(1)(F) and 207(a) of the HEA. In
25 See, for example: UNC Educator Quality Dashboard. (n.d.). Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
establishing these indicators, we are
mindful of the definition of ‘‘teaching
skills’’ in section 200(23) of the HEA,
which includes skills that enable a
teacher to increase student learning,
achievement, and the ability to apply
knowledge, and to effectively convey
and explain academic subject matter. In
both the NPRM and the discussion of
our response to comment on
§ 612.5(a)(1)–(4), we explain why each
of the four measures is, in fact, a
reasonable indicator of whether teachers
have academic content knowledge and
teaching skills. We see no reason the
regulations need either to enumerate the
definition of teaching skills in section
200(23) or to expressly tie these
indicators to the statutory definition of
one term included in ‘‘academic content
knowledge and teaching skills’’.
Changes: None.
Comments: Some commenters stated
that the use of a rating system with
associated consequences is a ‘‘test and
punish’’ accountability model similar to
the K–12 accountability system under
the ESEA, as amended by the No Child
Left Behind Act (NCLB). They
contended that such a system limits
innovation and growth within academia
and denies the importance of capacity
building.
Discussion: We do not believe that the
requirements the regulations establish
for the title II reporting system are
punitive. The existing HEA title II
reporting framework has not provided
useful feedback to teacher preparation
programs, prospective teachers, other
stakeholders, or the public on program
performance. Until now, States have
identified few programs deserving of
recognition or remediation. This is because few of the criteria that States have to date reported using to assess program performance, under section
205(b)(1)(F) of the HEA, rely on
information that examines program
quality from the most critical
perspective—teachers’ ability to impact
student achievement once they begin
teaching. Given the importance of
academic knowledge and teaching
skills, we are confident that the
associated indicators in the regulations
will help provide more meaningful
information about the quality of these
programs, which will then facilitate self-improvement and, by extension,
production of novice teachers better
trained to help students achieve once
they enter the classroom.
Thus, the regulations address
shortcomings in the current State
reporting system by defining indicators
of academic content knowledge and
teaching skills, focusing on program
outcomes that States will use to assess
program performance. The regulations
build on current State systems and
create a much-needed feedback loop to
facilitate program improvement and
provide valuable information to
prospective teachers, potential
employers, the general public, and the
programs themselves. We agree that
program innovation and capacity
building are worthwhile, and we believe
that what States will report on each
program will encourage these efforts.
Under the regulations, teacher
preparation programs whose graduates
(or participants, if they are teachers
while being trained in an alternative
route program) do not demonstrate
positive student learning outcomes are
not punished, nor are States required to
punish programs. To the extent that
proposed § 612.4(b)(2), which would
have permitted a program to be
considered effective or higher only if the
teachers it produces demonstrate
satisfactory or higher student learning
outcomes, raised concerns about the
regulations seeming punitive, we have
removed that provision from the final
regulations. Thus, the regulations echo
the requirements of section 207(a) of the
HEA, which requires that States
annually identify teacher preparation
programs that are low-performing or
that are at-risk of becoming low-performing, and section 207(b) of the
HEA, which prescribes the
consequences for a program from which
the State has withdrawn its approval or
terminated its financial support. For a
discussion of the relationship between
the State classification of teacher
preparation programs and TEACH Grant
eligibility, see § 686.2 regarding a
TEACH Grant-eligible program.
Changes: None.
Comments: None.
Discussion: In removing the term
‘‘new teacher’’ and adding the term
‘‘novice teacher,’’ as discussed earlier in
this document, it became unclear for
what period of time a State must report
data related to those teachers. To resolve
this, we have clarified that a State may,
at its discretion, exclude from reporting
those individuals who have not become
novice teachers after three years of
becoming a ‘‘recent graduate,’’ as
defined in the regulations. We believe
that requiring States to report on
individuals who become novice
teachers more than three years after
those teachers graduated from a teacher
preparation program is overly
burdensome and would not provide an
accurate reflection of teacher
preparation program quality.
Changes: We have added § 612.5(c) to
clarify that States may exclude from
reporting under § 612.5(a)(1)–(3)
individuals who have not become
novice teachers after three years of
becoming recent graduates.
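For illustration only (the field names and records below are hypothetical), the discretion added in § 612.5(c) amounts to a simple filter on the reporting cohort:

# Hypothetical sketch of the § 612.5(c) option: a State may exclude
# individuals who have not become novice teachers after three years of
# becoming recent graduates.

def reporting_cohort(recent_graduates, current_year):
    """Keep those who became novice teachers, plus those still within
    three years of becoming recent graduates."""
    return [
        g for g in recent_graduates
        if g["novice_teacher"] or current_year - g["graduation_year"] <= 3
    ]

grads = [
    {"id": 1, "graduation_year": 2015, "novice_teacher": True},
    {"id": 2, "graduation_year": 2012, "novice_teacher": False},  # excludable
    {"id": 3, "graduation_year": 2016, "novice_teacher": False},  # still counted
]
print([g["id"] for g in reporting_cohort(grads, 2017)])  # [1, 3]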
Student Learning Outcomes (34 CFR
612.5(a)(1))
Growth, VAM, and Other
Methodological Concerns
Comments: Many commenters argued
that the proposed definition of ‘‘student
learning outcomes’’ invites States to use
VAM to judge teachers and teacher
preparation programs. Those
commenters argued that because the
efficacy of VAM is not established, the
definition of ‘‘student learning
outcomes’’ is not solidly grounded in
research.
Discussion: For those States that
choose to do so, the final regulations
permit States to use any measures of
student growth for novice teachers that
meet the definitions in § 612.2 in
reporting on a program’s student
learning outcomes. Their options
include a simple comparison of student
scores on assessments between two
points in time for grades and subjects
subject to section 1111(b)(2) of the
ESEA, as amended by ESSA, a range of
options measuring student learning and
performance for non-tested grades and
subjects (which can also be used to
supplement scores for tested grades and
subjects), or more complex statistical
measures, like student growth
percentiles (SGPs) or VAM that control
for observable student characteristics. A
detailed discussion of the use of VAM
as a specific growth measure follows
below; the discussion addresses the use
of VAM in student learning outcomes,
should States choose to use it. However,
we also note that the requirement for
States to assess teacher preparation
programs based, in part, on student
learning outcomes also allows States
that choose not to use student growth to
use a teacher evaluation measure or
another State-determined measure
relevant to calculating student learning
outcomes. Nothing in the final
regulations requires the use of VAM over
other methodologies for calculating
student growth, specifically, or student
learning outcomes, more broadly.
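To illustrate only the simplest of these options, a comparison of student scores between two points in time aggregated by preparation program, consider the following sketch. The record layout, program names, and scores are invented for illustration; nothing in the regulations prescribes this calculation or any particular data format.

from collections import defaultdict

# Hypothetical records linking students' scores at two points in time
# to the program that prepared each student's novice teacher.
records = [
    {"program": "Program A", "score_t1": 412, "score_t2": 431},
    {"program": "Program A", "score_t1": 388, "score_t2": 402},
    {"program": "Program B", "score_t1": 405, "score_t2": 407},
]

gains = defaultdict(list)
for r in records:
    # Simple growth: the change in achievement between two points in time.
    gains[r["program"]].append(r["score_t2"] - r["score_t1"])

for program, g in sorted(gains.items()):
    print(program, "mean gain:", sum(g) / len(g))

More complex measures, such as SGPs or VAM, would replace the simple difference above with a statistical model, but the reporting structure, outcomes aggregated to the program level, would be the same.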
These comments also led us to see
potential confusion in the proposed
definitions of student learning outcomes
and student growth. In reviewing the
proposed regulations, we recognized
that the original structure of the
definition of ‘‘student learning
outcomes’’ could cause confusion. We
are concerned that having a definition
for the term, which was intended only
to operationalize the other definitions in
the context of § 612.5, was not the
clearest way to present the
requirements. To clarify how student
learning outcomes are considered under
the regulations, we have removed the
definition of ‘‘student learning
outcomes’’ from § 612.2, and revised
§ 612.5(a)(1) to incorporate, and
operationalize, that definition.
Changes: We have removed the
definition of ‘‘student learning
outcomes’’ and revised § 612.5(a)(1) to
incorporate key aspects of that proposed
definition. In addition, we have
provided States with the option to
determine student learning outcomes
using another State-determined measure
relevant to calculating student learning
outcomes.
Comments: Many commenters stated
that the proposed student learning
outcomes would not adequately serve as
an indicator of academic content
knowledge and teaching skills for the
purpose of assessing teacher preparation
program performance. Commenters also
contended that tests only measure the
ability to memorize and that several
kinds of intelligence and ways of
learning cannot be measured by testing.
In general, commenters questioned
the Department’s basis for the use of
student learning outcomes as one
measure of teacher preparation program
performance, citing research to support
their claim that the method of
measuring student learning outcomes as
proposed in the regulations is neither
valid nor reliable, and that there is no
evidence to support the idea that
student outcomes are related to the
quality of the teacher preparation
program attended by the teacher.
Commenters further expressed concerns
about the emphasis on linking
children’s test scores on mandated
standardized tests to student learning
outcomes. Commenters also stated that
teacher preparation programs are
responsible for only a small portion of
the variation in teacher quality.
Commenters proposed that aggregate
teacher evaluation results be the only
measure of student learning outcomes
so long as the State teacher evaluations
do not overly rely on results from
standardized tests. Commenters stated
that in at least one State, teacher
evaluations cannot be used as part of
teacher licensure decisions or to
reappoint teachers due to the subjective
nature of the evaluations.
Some commenters argued that student
growth cannot be defined as a simple
comparison of achievement between
two points in time.
One commenter, who stated that the
proposed regulatory approach is
thorough and aligned with current
trends in evaluation, also expressed
concern that K–12 student performance
(achievement) data are generally a
snapshot in time, typically the result of
one standardized test, that does not
identify growth over time, the context of
the test taking, or other variables that
impact student learning.
Commenters further cited research
that concluded that student
achievement in the classroom is not a
valid predictor of whether the teacher’s
preparation program was high quality
and asserted that other professions do
not use data in such a simplistic way.
Another commenter stated that local
teacher evaluation instruments vary
significantly across towns and States.
Another commenter stated that
student performance data reported in
the aggregate and by subgroups to
determine trends and areas for
improvement is acceptable but should
not be used to label or categorize a
school system, school, or classroom
teacher.
Discussion: As discussed above, in the
final regulations we have removed the
requirement that States consider student
growth ‘‘in significant part,’’ in their
procedures for annually assessing
teacher preparation program
performance. Therefore, while we
encourage States to use student growth
as their measure of student learning
outcomes and to adopt such a weighting
of student learning outcomes on their
own, our regulations give States broad
flexibility to decide how to weight
student learning outcomes in
consultation with stakeholders (see
§ 612.4(c)), with the aim of it being a
sound and reasonable indicator of
teacher preparation program
performance. Similarly, we decline
commenters’ suggestions to restrict the
measure of student learning outcomes to
only aggregated teacher evaluation
results, in order to maintain that
flexibility. With our decision to permit
States to use their own State-determined
measure relevant to calculating student
learning outcomes rather than student
growth or a teacher evaluation measure,
we have provided even more State
flexibility in calculating student
learning outcomes than commenters had
requested.
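Purely as a hypothetical sketch of the flexibility described above, one of many possible State-determined weightings might combine normalized indicators into a composite program score as follows. The indicator names, weights, and values are invented; the regulations neither require nor recommend any particular formula.

# Hypothetical State-chosen weights over indicators normalized to 0-1.
# Each State sets its own indicators and weights in consultation with
# stakeholders; these values are illustrative only.
weights = {
    "student_learning_outcomes": 0.40,
    "employment_outcomes": 0.35,
    "survey_outcomes": 0.25,
}

program_indicators = {
    "student_learning_outcomes": 0.72,
    "employment_outcomes": 0.81,
    "survey_outcomes": 0.65,
}

composite = sum(w * program_indicators[k] for k, w in weights.items())
print(round(composite, 3))  # 0.734 with these invented numbers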
As we have previously stated, we
intend the use of all indicators of
academic content knowledge and
teaching skills to produce information
about the performance level of each
teacher preparation program that,
speaking broadly, is valid and reliable.
It is clear from the comments we
received that there is not an outright
consensus on using student learning
outcomes to help measure teacher
preparation program performance;
however, we strongly believe that a
program’s ability to prepare teachers
who can positively influence student
academic achievement is both an
indicator of their academic content
knowledge and teaching skills, and a
critical measure for assessing a teacher
preparation program’s performance.
Student learning outcomes therefore
belong among multiple measures States
must use. We continue to highlight
growth as a particularly appropriate way
to measure a teacher’s effect on student
learning because it takes a student’s
prior achievement into account, gives a
teacher an opportunity to demonstrate
success regardless of the student
characteristics of the class, and therefore
reflects the contribution of the teacher
to student learning. Even where student
growth is not used, producing teachers
who can make a positive contribution to
student learning should be a
fundamental objective of any teacher
preparation program and the reason
why it should work to provide
prospective teachers with academic
content and teaching skills. Hence,
student learning outcomes, as we define
them in the regulations, associated with
each teacher preparation program are an
important part of an assessment of any
program’s performance.
States therefore need to collect data
on student learning outcomes—through
either student growth that examines the
change in student achievement in both
tested and non-tested grades and
subjects, a teacher evaluation measure
as defined in the regulations, or another
State-determined measure relevant to
calculating student learning outcomes—
and then link these data to the teacher
preparation program that produced (or
in the case of an alternative route
program, is producing) these teachers.
In so doing, States may if they wish
choose to use statistical measures of
growth, like VAM or student growth
percentiles, that control for student
demographics that are typically
associated with student achievement.
There are multiple examples of the use
of similar student learning outcomes in
existing research and State reporting.
Tennessee, for example, reports that
some teacher preparation programs
consistently exhibit statistically
significant differences in student
learning outcomes over multiple years,
indicating that scores are reliable from
one year to the next.26 Studies from
Washington State 27 and New York
26 See Report Card on the Effectiveness of Teacher
Training Programs, Tennessee 2014 Report Card.
(n.d.). Retrieved from www.tn.gov/thec/article/
report-card.
27 D. Goldhaber & S. Liddle (2013). ''The Gateway to the Profession: Assessing Teacher Preparation Programs Based on Student Achievement.'' Economics of Education Review, 34: 29–44.
City 28 also find statistically significant
differences in the student learning
outcomes of teachers from different
teacher preparation programs as does
the University of North Carolina in how
it assesses its own teacher preparation
programs.29 Moreover, a teacher’s effect
on student growth is commonly used in
education research and evaluation
studies conducted by the Institute of
Education Sciences as a valid measure
of the effectiveness of other aspects of
teacher training, like induction or
professional development.30
While some studies of teacher
preparation programs 31 in other States
have not found statistically significant
differences at the preparation program
level in graduates’ effects on student
outcomes, we believe that there are
enough examples of statistically
significant differences in program
performance on student learning
outcomes to justify their inclusion in
the SRC. In addition, because even these
studies show a wide range of individual
teacher effectiveness within a program,
using these data can provide new
insights that can help programs to
produce more consistently high-performing graduates.
Moreover, looking at the related issue
of educator evaluations, there is debate
about the level of reliability and validity
of the individual elements used in
different teacher evaluation systems.
However, there is evidence that student
growth can be a useful and effective
component in teacher evaluation
systems. For example, a study found
that dismissal threats and financial
incentives based partially upon growth
scores positively influenced teacher
performance.32 33 In addition, there is
28 D. Boyd, P. Grossman, H. Lankford, S. Loeb, &
J. Wyckoff. (2009). Teacher Preparation and Student
Achievement. Educational Evaluation and Policy
Analysis, 31(4), 416–440.
29 See UNC Educator Quality Dashboard. (n.d.). Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
30 See for example, S. Glazerman, E. Isenberg, S.
Dolfin, M. Bleeker, A. Johnson, M. Grider & M.
Jacobus (2010). Impacts of comprehensive teacher
induction: Final results from a randomized
controlled study (NCEE 2010–4027). Washington,
DC: National Center for Education Evaluation and
Regional Assistance, Institute of Education
Sciences, U.S. Department of Education.
31 Koedel, C., Parsons, E., Podgursky, M., &
Ehlert, M. (2015). Teacher Preparation Programs
and Teacher Quality: Are There Real Differences
Across Programs? Education Finance and Policy,
10(4), 508–534.
32 Dee, T., & Wyckoff, J. (2015). Incentives,
Selection, and Teacher Performance: Evidence from
IMPACT. Journal of Policy Analysis and
Management, 34(2), 267–297. doi:10.3386/w19529.
33 Henry, G., & Bastian, K. (2015). Measuring Up: The National Council on Teacher Quality's Ratings of Teacher Preparation Programs and Measures of Teacher Performance.
evidence that combining multiple
measures, including student growth,
into an overall evaluation result for a
teacher can produce a more valid and
reliable result than any one measure
alone.34 For these reasons, this
regulation and § 612.5(b) continue to
give States the option of using teacher
evaluation systems based on multiple
measures that include student growth to
satisfy the student learning outcomes
requirement.
Teacher preparation programs may
well only account for some of the
variation in student learning outcomes.
However, this does not absolve
programs from being accountable for the
extent to which their graduates
positively impact student achievement.
Thus, while the regulations are not
intended to address the entire scope of
student achievement or all factors that
contribute to student learning outcomes,
the regulations focus on student
learning outcomes as an indicator of
whether or not the program is
performing properly. In doing so, one would expect that, through a greater focus on their student learning outcomes, States and teacher preparation programs will have the benefit of some basic data about where their work to provide all students with academic content knowledge and teaching skills needs to improve.
Changes: None.
Comments: Other commenters stated
that there are many additional factors
that can impact student learning
outcomes that were not taken into
account in the proposed regulations;
that teacher evaluation is incomplete
without taking into account the context
in which teachers work on a daily basis;
and that VAM only account for some
contextual factors. Commenters stated
that any proposed policies to directly
link student test scores to teacher
evaluation and teacher preparation
programs must recognize that schools
and classrooms are situated in a broader
socioeconomic context.
Commenters pointed out that not all
graduates from a specific institution or
program will be teaching in similar
school contexts and that many factors
influencing student achievement cannot
be controlled for between testing
intervals. Commenters also cited other
contributing factors to test results that
are not in a teacher’s control, including
poverty and poverty-related stress;
inadequate access to health care; food
34 Mihaly, K., McCaffrey, D., Staiger, D., &
Lockwood, J. (2013, January 8). A Composite
Estimator of Effective Teaching.
insecurity; the student’s development,
family, home life, and community; the
student’s background knowledge; the
available resources in the school district
and classroom; school leadership,
school curriculum, students not taking
testing situations seriously; and school
working conditions. Commenters also
noted that students are not randomly
placed into classrooms or schools, and
are often grouped by socioeconomic
class and linguistic segregation, which
influences test results.
Discussion: Many commenters
described unmeasured or poorly
measured student and classroom
characteristics that might bias the
measurement of student outcomes and
noted that students are not randomly
assigned to teachers. These are valid
concerns and many of the factors stated
are correlated with student
performance.
However, teacher preparation
programs should prepare novice
teachers to be effective and successful in
all classroom environments, including
in high-need schools. It is for this
reason, as well as to encourage States to
highlight successes in these areas, that
we include as indicators of academic
content knowledge and teaching skills,
placement and retention rates in high-need schools.
In addition, States and school districts
can control for different kinds of student
and classroom characteristics in the
ways in which they determine student
learning outcomes (and student growth).
States can, for example, control for
school level characteristics like the
concentration of low-income students in
the school and in doing so compare
teachers who teach in similar schools.
Evidence cited below that student
growth, as measured by well-designed
statistical models, captures the causal
effects of teachers on their students also
suggests that measures of student
growth can successfully mitigate much
of the potential bias, and supports the
conclusion that non-random sorting of
students into classrooms does not cause
substantial bias in student learning
outcomes. We stress, however, that the
decision to use such controls and other
statistical measures to control for
student and school characteristics in
calculating student learning outcomes is
up to States in consultation with their
stakeholder groups.
Changes: None.
Comments: Commenters contended
that although the proposed regulations
offer States the option of using a teacher
evaluation measure in lieu of, or in
addition to, a student growth measure,
this option does not provide a real
alternative because it also requires that
the three performance levels in the
teacher evaluation measure include, as
a significant factor, data on student
growth, and student growth relies on
student test scores. Also, while the
regulations provide that evaluations
need not rely on VAM, commenters
suggested that VAM will drive teacher
effectiveness determinations because
student learning is assessed either
through student growth (which includes
the use of VAM) or teacher evaluation
(which is based in large part on student
growth), so there really is no realistic
option besides VAM. Commenters also
stated that VAM requirements in Race to
the Top and ESEA flexibility, along with
State-level legislative action, create a
context in which districts are compelled
to use VAM.
A large number of commenters stated
that research points to the challenges
and ineffectiveness of using VAM to
evaluate both teachers and teacher
preparation programs, and asserted that
the data collected will be neither
meaningful nor useful. Commenters also
stated that use of VAM for decision-making in education has been
discredited by leading academic and
professional organizations such as the
American Statistical Association
(ASA) 35, the American Educational
Research Association, and the National
Academy of Education.36 37 Commenters
provided research in support of their
arguments, asserting in particular ASA’s
contention that VAM do not meet
professional standards for validity and
reliability when applied to teacher
preparation programs. Commenters
voiced concerns that VAM typically
measure correlation and not causation,
often citing the ASA’s assertions.
Commenters also contended that
student outcomes have not been shown
to be correlated with, much less
predictive of, good teaching; VAM
scores and rankings can change
substantially when a different model or
test is used, and variation among
teachers accounts for a small part of the
variation in student test scores. One
commenter stated that student learning
outcomes are not data but target skills
and therefore the Department
incorrectly defined ‘‘student learning
outcomes.’’ We interpret this comment
35 American Statistical Association. (2014). ASA
Statement on Using Value-Added Models for
Educational Assessment: www.amstat.org/policy/
pdfs/ASA_VAM_Statement.pdf.
36 American Educational Research Association (AERA) and National Academy of Education. (2011). Getting teacher evaluation right: A brief for
policymakers. Washington, DC: AERA.
37 Feuer, M. J., Floden, R. E., Chudowsky, N., &
Ahn, J. (2013). Evaluation of Teacher Preparation
Programs: Purposes, Methods, and Policy Options.
Washington, DC: National Academy of Education.
to mean that tests that may form the basis of student growth only measure certain skills rather than longer-term student outcomes.
Many commenters also noted that
value-added models of student
achievement are developed and normed
to test student achievement, not to
evaluate educators, so using these
models to evaluate educators is invalid
because the tests have not been
validated for that purpose. Commenters
further noted that value-added models
of student achievement tied to
individual teachers should not be used
for high-stakes, individual-level
decisions or comparisons across highly
dissimilar schools or student
populations.
Commenters stated that in
psychometric terms, VAM are not
reliable. They contended that it is a
well-established principle that
reliability is a necessary but not
sufficient condition for validity. If
judgments about a teacher preparation
program vary based on the method of
estimating value-added scores,
inferences made about programs cannot
be trusted.
Others noted Edward Haertel’s 38
conclusion that no statistical
manipulation can assure fair
comparisons of teachers working in very
different schools, with very different
students, under very different
conditions. Commenters also noted
Bruce Baker’s conclusions that even a
20 percent weight to VAM scores can
skew results too much. Thus, according
to the commenters, though the proposed
regulations permit States to define what
is ‘‘significant’’ for the purposes of using
student learning outcomes ‘‘in
significant part,’’ unreliable and invalid
VAM scores end up with at least a 20
percent weight in teacher evaluations.
Discussion: The proposed definition
of teacher evaluation measure in § 612.2
did provide that student growth be
considered in significant part, but we
have removed that aspect of the
definition of teacher evaluation measure
from the final regulations. Moreover, we
agree that use of such an evaluation
system may have been required, for
example, in order for a State to receive
ESEA flexibility, and States may still
choose to consider student growth in
significant part in a teacher evaluation
measure. However, not only are States
not required to include growth ‘‘in
significant part’’ in a teacher evaluation
38 Haertel, E. (2013). Reliability and Validity of
Inferences about Teachers Based on Student Test
Scores. The 14th William H. Angoff Memorial
Lecture, March 22. Princeton, NJ: Educational
Testing Service. Retrieved from www.ets.org/Media/
Research/pdf/PICANG14.pdf.
measure used for student learning
outcomes, but § 612.5(a)(1)(ii) clarifies
that States may choose to measure
student learning outcomes without
using student growth at all.
On the use of VAM specifically, we
reiterate that the regulations permit
multiple ways of measuring student
learning outcomes without use of VAM;
if they use student growth, States are
not required to use VAM. We note also
that use of VAM was not a requirement
of Race to the Top, nor was it a
requirement of ESEA Flexibility,
although many States that received Race
to the Top funds or ESEA flexibility
committed to using statistical models of
student growth based on test scores. We
also stress that in the context of these
regulations, a State that chooses to use
VAM and other statistical measures of
student growth would use them to help
assess the performance of teacher
preparation programs as a whole.
Neither the proposed nor final
regulations address, as many
commenters stated, how or whether a
State or district might use the results of
a statistical model for individual
teachers’ evaluations and any resulting
personnel actions.
Many States and districts currently
use a variety of statistical methods in
teacher, principal, and school
evaluation, as well as in State
accountability systems. VAM are one
such way of measuring student learning
outcomes that are used by many States
and districts for these accountability
purposes. While we stress that the
regulations do not require or anticipate
the use of VAM to calculate student
learning outcomes or teacher evaluation
measures, we offer the following
summary of VAM in view of the
significant number of comments the
Department received on the subject.
VAM are statistical methodologies
developed by researchers to estimate a
teacher’s unique contribution to growth
in student achievement, and are used in
teacher evaluation and evaluation of
teacher preparation programs. Several
experimental and quasi-experimental
studies conducted in a variety of
districts have found that VAM scores
can measure the causal impact teachers
have on student learning.39 There is also
39 For example: Kane, T., & Staiger, D. (2008).
Estimating teacher impacts on student achievement:
An experimental evaluation. doi:10.3386/w14607;
Kane, T., McCaffrey, D., Miller, T., & Staiger, D.
(2013). Have We Identified Effective Teachers?
Validating Measures of Effective Teaching Using
Random Assignment; Bacher-Hicks, A., Kane, T., &
Staiger, D. (2014). Validating Teacher Effect
Estimates Using Changes in Teacher Assignment in
Los Angeles (Working Paper No. 20657). Retrieved
from National Bureau of Economic Research Web site: www.nber.org/papers/w20657; Chetty, et al. at 2633–2679 and 2593–2632.
strong evidence that VAM measure
more than a teacher’s ability to improve
test scores; a recent paper found that
teachers with higher VAM scores
improved long term student outcomes
such as earnings and college
enrollment.40 While tests often measure
specific skills, these long-term effects
show that measures of student growth
are, in fact, measuring a teacher’s effect
on student outcomes rather than simple,
rote memorization, test preparation on
certain target skills, or a teacher’s
performance based solely on one
specific student test. VAM have also
been shown to consistently measure
teacher quality over time and across
different kinds of schools. A well-executed randomized controlled trial
found that, after the second year,
elementary school students taught by
teachers with high VAM scores who
were induced to transfer to low-performing schools had higher reading
and mathematics scores than students
taught by comparison teachers in the
same kinds of schools.41
The Department therefore disagrees
with commenters who state that the
efficacy of VAM is not grounded in
sound research. We believe that VAM is
commonly used as a component in
many teacher evaluation systems
precisely because the method minimizes
the influence of observable factors
independent of the teacher that might
affect student achievement growth, like
student poverty levels and prior levels
of achievement.
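For reference only, one covariate-adjustment specification that appears frequently in the value-added research literature (and that these regulations neither mandate nor endorse over alternatives) can be written in LaTeX notation as:

A_{it} = \beta A_{i,t-1} + \gamma X_{it} + \mu_{j(i,t)} + \varepsilon_{it}

where A_{it} is student i's achievement in year t, A_{i,t-1} is the student's prior achievement, X_{it} is a vector of observable student (and possibly classroom or school) characteristics such as poverty status, \mu_{j(i,t)} is the estimated value-added effect of the teacher assigned to student i in year t, and \varepsilon_{it} is an error term. Including prior achievement and the covariates is what is meant above by minimizing the influence of observable factors independent of the teacher.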
Several commenters raised important
points to consider with using VAM for
teacher evaluation. Many cited the April
8, 2014, ‘‘ASA Statement on Using
Value-Added Models for Educational
Assessment’’ cited in the summary of
comments, which makes several reasonable
recommendations regarding the use of
VAM, including its endorsement of wise
use of data, statistical models, and
designed experiments for improving the
quality of education. We believe that the
definitions of ‘‘student learning
outcomes’’ and ‘‘student growth’’ in the
regulations are fully compatible with
valid and reliable ways of including
VAM to assess the impact of teachers on
student academic growth. Therefore,
States that choose to use VAM to generate
student learning outcomes would have
40 Chetty, et al. at 2633–2679.
41 Glazerman, S., Protik, A., Teh, B., Bruch, J., &
Max, J. (2013). Transfer incentives for high-performing teachers: Final results from a multisite
randomized experiment (NCEE 2014–4003).
Washington, DC: National Center for Education
Evaluation and Regional Assistance, Institute of
Education Sciences, U.S. Department of Education.
https://files.eric.ed.gov/fulltext/ED544269.pdf.
the means to do what the ASA study
recommends: Use data and statistical
models to improve the quality of their
teacher preparation programs. The ASA
also wisely cautions that VAMs are complex statistical models that necessitate high levels of statistical expertise to develop and run, and that reported results should include estimates of the model's precision. These specific
recommendations are entirely consistent
with the regulations, and we encourage
States to follow them when using VAM.
We disagree, however, with the ASA
and commenters’ assertions that VAM
typically measures correlation, not
causation, and that VAM does not
measure teacher contributions toward
other student outcomes. These
assertions contradict the evidence cited
above that VAM does measure the
causal effects of teachers on student
achievement, and that teachers with
high VAM scores also improve long-term student outcomes.
The implication of the various studies
we cited in this section is clear: not only
can VAM identify teachers who improve
short- and long-term student outcomes,
but VAM can play a substantial role in
effective, useful teacher evaluation
systems.
However, as we have said, States do
not need to use VAM to generate
student learning outcomes. Working
with their stakeholders, States can, if
they choose, establish other means of
reporting a teacher preparation
program’s ‘‘student learning outcomes’’
that meet the basic standard in
§ 612.5(a)(1).
Changes: None.
Comments: Two commenters
suggested that the United States
Government Accountability Office
(GAO) do an analysis and suggest
alternatives to VAM.
Discussion: The Secretary of
Education has no authority to direct
GAO’s work, so these comments are
outside the Department’s authority, and
the scope of the regulations.
Changes: None.
Comments: Several commenters
opined that it is not fair to measure new
teachers in the manner proposed in the
regulations because it takes new
teachers three to five years to become
good at their craft. Other commenters
mentioned that value-added scores
cannot be generated until at least two
years after a teacher candidate has
graduated.
Discussion: We recognize the
importance of experience in a teacher’s
development. However, while teachers
can be expected to improve in
effectiveness throughout their first few
years in the classroom, under
§ 612.5(a)(1), a State is not using student
learning outcomes to measure or predict
the future or long-term performance of
any individual teacher. It is using
student learning outcomes to measure
the performance of the teacher
preparation program that the novice
teacher completed—performance that,
in part, should be measured in terms of
a novice teacher’s ability to achieve
positive student learning outcomes in
the first year the teacher begins to teach.
We note, however, that there is strong
evidence that early career performance
is a significant predictor of future
performance. Two studies have found
that growth scores in the first two years
of a teacher’s career, as measured by
VAM, better predict future performance
than measured teacher characteristics
that are generally available to districts,
such as a teacher’s pathway into
teaching, available credentialing scores
and SAT scores, and competitiveness of
undergraduate institution.42 Given that
early career performance is a good
predictor of future performance, it is
reasonable to use early career results of
the graduates of teacher preparation
programs as an indicator of the
performance of those programs. These
studies also demonstrate that VAM
scores can be calculated for first-year
teachers.
Moreover, even if States choose not to
use VAM results as student growth
measures, the function of teacher
preparation programs is to train teachers
to be ready to teach when they enter the
classroom. We believe student learning
outcomes should be measured early in
a teacher’s career, when the impact of
their preparation is likely to be the
strongest. However, while we urge
States to give significant weight to their
student outcome measures across the
board, the regulations leave to each
State how to weight the indicators of
academic content knowledge and
teaching skills for novice teachers in
their first and other years of teaching.
Changes: None.
Differences Between Accountability and
Improvement
Comments: Commenters stated that
the Department is confusing
accountability with improvement by
requiring data on and accountability of
programs. Several commenters
42 Atteberry, A., Loeb, S., & Wyckoff, J. (2015). Do
first impressions matter? Improvement in early
career teacher effectiveness. American Educational
Research Association (AERA) Open; Goldhaber, D.,
& Hansen, M. (2010). Assessing the Potential of
Using Value-Added Estimates of Teacher Job
Performance for Making Tenure Decisions. Working
Paper 31. National Center for Analysis of
Longitudinal Data in Education Research.
remarked that VAM will not guarantee
continuous program improvement.
Discussion: The regulations require
States to use the indicators of academic
content knowledge and teaching skills
identified in § 612.5(a), which may
include VAM if a State chooses, to
determine the performance level of each
teacher preparation program, to report
the data generated for each program,
and to provide a list of which programs
the State considers to be low-performing
or at-risk of being low-performing. In
addition, reporting the data the State
uses to measure student learning
outcomes will help States, IHEs, and
other entities with teacher preparation
programs to determine where their
program graduates (or program
participants in the case of alternative
route to teaching programs) are or are
not succeeding in increasing student
achievement. No information available
to those operating teacher preparation
programs, whether from VAM or
another source, can, on its own, ensure
the programs’ continuous improvement.
However, those operating teacher
preparation programs can use data on a
program’s student learning outcomes—
along with data from employment
outcomes, survey outcomes, and
characteristics of the program—to
identify key areas for improvement and
focus their efforts. In addition, the
availability of these data will provide
States with key information in deciding
what technical assistance to provide to
these programs.
Changes: None.
Consistency
Comments: One commenter noted that the
lack of consistency in assessments at the
State level, which we understand to be
assessments of students across LEAs
within the same State, will make the
regulations almost impossible to
operationalize. Another commenter
noted that the comparisons will be
invalid, unreliable, and inherently
biased in favor of providers that enjoy
State sponsorship and are most likely to
receive favorable treatment under a
State-sponsored assessment schema
(which we understand to mean
‘‘scheme’’). Until there is a common
State assessment which we understand
to mean common assessment of students
across States, the commenter argued
that any evaluation of teachers using
student progress and growth will be
variable at best.
Discussion: We first note that,
regardless of the assessments a State
uses to calculate student learning
outcomes, the definition of student
growth in § 612.2 requires that such
assessments be comparable across
schools and consistent with State
policies. While comparability across
LEAs is not an issue for assessments
administered pursuant to section 1111(b)(2) of the ESEA, other assessments used by the State for purposes of calculating student growth may not be identical, but are required to be comparable. As such, we do not believe that LEA-to-LEA or school-to-school variation in the particular
assessments that are administered
should inherently bias the calculation of
student learning outcomes across
teacher preparation programs.
Regarding comparability across States
in the assessments administered to
students, nothing in this regulation
requires such comparability and, we
believe such a requirement would
infringe upon the discretion States have
historically been provided under the
ESEA in determining State standards,
assessments, and curricula.
We understand the other comment to
question the validity of comparisons of
teacher preparation program ratings, as
reported in the SRC. We continue to
stress that the data regarding program
performance reported in the SRCs and
required by the regulations do not
create, or intend to promote, any in-State or inter-State ranking system.
Rather, we anticipate that States will
use reported data to evaluate program
performance based on State-specific
weighting.
Changes: None.
Special Populations and Untested
Subjects
Comments: Two commenters stated
that VAMs will have an unfair impact
on special education programs. Another
commenter stated that for certain
subjects, such as music education, it is
difficult for students to demonstrate
growth.
One commenter stated that there are
validity issues with using tests to
measure the skills of deaf children since
standardized tests are based on hearing
norms and may not be applicable to deaf
children. Another commenter noted that
deaf and hard-of-hearing K–12 students
almost always fall below expected grade
level standards, impacting student
growth and, as a result, teacher
preparation program ratings under our
proposed regulations. In a similar vein,
one commenter expressed concern that
teacher preparation programs that
prepare teachers of English learners may
be unfairly branded as low-performing
or at-risk because the students are
forced to conform to tests that are
neither valid nor reliable for them.
Discussion: The Department is very
sensitive to the different teaching and
learning experiences associated with
students with disabilities (including
deaf and hard-of-hearing students) and
English learners, and encourages States
to use student learning outcome
measures that allow teachers to
demonstrate positive impact on student
learning outcomes regardless of the
prior achievement or other
characteristics of students in their
classroom. Where States use the results
of assessments or other tests for student
learning outcomes, such measures must
also conform to appropriate testing
accommodations provided to students
that allow them to demonstrate content
mastery instead of reflecting specific
disabilities or language barriers.
We expect that these measures of
student learning outcomes and other
indicators used in State systems under
this regulation will be developed in
consultation with key stakeholders (see
§ 612.4(c)), and be based on measures of
achievement that conform to student
learning outcomes as described in
§ 612.5(a)(1)(ii).
Changes: None.
Comments: Several commenters cited
a study 43 describing unintended consequences associated with the high-stakes use of VAM that emerged through teachers' responses.
Commenters stated that the study
revealed, among other things, that
teachers felt heightened pressure and
competition. This reduced morale and
collaboration, and encouraged cheating
or teaching to the test.
Some commenters stated that by, in
effect, telling teacher preparation
programs that their graduates should
engage in behaviors that lift the test
scores of their students, the likely main
effect will be classrooms that are more
directly committed to test preparation
(and to what the psychometric
community calls score inflation) than to
advancement of a comprehensive
education.
Discussion: The Department is
sensitive to issues of pressure on
teachers to artificially raise student
assessment scores, and perceptions of
some teachers that this emphasis on
testing reduces teacher morale and
collaboration. However, States and
LEAs have responsibility to ensure that
test data are monitored for cheating and
other forms of manipulation, and we
have no reason to believe that the
regulations will increase these
incidents. With regard to reducing
teacher morale and collaboration, value-added scores are typically calculated
statewide for all teachers in a common
grade and subject. Because teachers are
compared to all similarly situated
teachers statewide, it is very unlikely
that a teacher could affect her own score
by refusing to collaborate with other
teachers in a single school. We
encourage teachers to collaborate across
grades, subjects, and schools to improve
their practice, but also stress that the
regulations use student learning
outcomes only to help assess the
performance of teacher preparation
programs. Under the regulations, where
a State does not use student growth or
teacher evaluation data already gathered
for purposes of an LEA educator
evaluation, data related to student
learning outcomes is only used to help
assess the quality of teacher preparation
programs, and not the quality of
individual teachers.
43 Collins, C. (2014). Houston, we have a problem: Teachers find no value in the SAS education value-added assessment system (EVAAS®). Education Policy Analysis Archives, 22(98).
Changes: None.
Comments: Commenters were
concerned that the regulations will not
benefit high-need schools and
communities because the indicator for
student learning outcomes creates a
disincentive for programs to place
teachers in high-need schools and
certain high-need fields, such as English
as a Second Language. In particular,
commenters expressed concern about
the requirements that student learning
outcomes be given significant weight
and that a program have satisfactory or
higher student learning outcomes in
order to be considered effective.
Commenters expressed particular
concern in these areas with regard to
Historically Black Colleges and
Universities and other programs whose
graduates, the commenters stated, are
more likely to work in high-need
schools.
Commenters opined that, to avoid
unfavorable outcomes, teacher
preparation programs will seek to place
their graduates in higher-performing
schools. Rather than encouraging
stronger partnerships, commenters
expressed concern that programs will
abandon efforts to place graduates in
low-performing schools. Others were
concerned that teachers will self-select
out of high-need schools, and a few
commenters noted that high-performing
schools will continue to have the most
resources while teacher shortages in
high-need schools, such as those in
Native American communities, will be
exacerbated.
Some commenters stated that it was
unfair to assess a teacher preparation
program based on, as we interpret the
comment, the student learning
outcomes of the novice teachers
produced by the program because the
students taught by novice teachers may
also receive instruction from other
teachers who may have more than three
years of experience teaching.
Discussion: As we have already noted,
under the final regulations, States are
not required to apply special weight to
any of the indicators of academic
content knowledge and teaching skills.
Because of their special importance to
the purpose of teacher preparation
programs, we strongly encourage, but do
not require, States to include
employment outcomes for high-need
schools and student learning outcomes
in significant part when assessing
teacher preparation program
performance. We also encourage, but do
not require, States to identify the quality
of a teacher preparation program as
effective or higher if the State
determined that the program’s graduates
produce student learning outcomes that
are satisfactory or higher.
For the purposes of the regulations,
student learning outcomes may be
calculated using student growth.
Because growth measures the change in
student achievement between two or
more points in time, the prior
achievement of students is taken into
account. Teacher preparation programs
may thus be assessed, in part, based on
their recent graduates’ efforts to increase
student growth, not on whether the
teachers’ classrooms contained students
who started as high or low achieving.
For this reason, teachers—regardless of
the academic achievement level of the
students they teach—have the same
opportunity to positively impact student
growth. Likewise, teacher preparation
programs that place students in high-need schools have the same opportunity
to achieve satisfactory or higher student
learning outcomes. These regulations
take into account the commenters’
concerns related to teacher equity as
placement and retention in high-need
schools are required metrics.
We recognize that many factors
influence student achievement.
Commenters who note that students
taught by novice teachers may also
receive instruction from other teachers
who may have more than three years of
experience teaching cite but one factor.
But the objective in having States use
student growth as an indicator of the
performance of a teacher preparation
program is not to finely calculate how
novice teachers impact student growth.
As we have said, it rather is to have the
State determine whether a program’s
student learning outcomes are so far
from the mark as to be an indicator of
poor program performance.
For these reasons, we disagree with
commenters that the student learning
outcomes measure will discourage
preparation programs and teachers from
serving high-need schools. We therefore
decline to make changes to the
regulations.
Changes: None.
Comments: Commenters expressed
concern with labeling programs as low-performing if student data are not made
available about such programs. The
commenters stated that this may lead to
identifying high-quality programs as
low-performing. They were also
concerned about transparency, and
noted that it would be unfair to label
any program without actual information
on how that label was earned.
Discussion: We interpret the
commenters’ concern to be that States
may not be able to report on student
learning outcomes for particular teacher
preparation programs because districts
do not provide data on student learning
outcomes, and yet still identify
programs as low-performing. In
response, we clarify that the State is
responsible for securing the information
needed to report on each program’s
student learning outcomes. Given the
public interest in program performance
and the interest of school districts in
having better information about the
programs in which prospective
employees have received their training,
we are confident that each State can
influence its school districts to get
maximum cooperation in providing
needed data.
Alternatively, to the extent that the
commenter was referring to difficulties
obtaining data for student learning
outcomes (or other of our indicators of
academic content and teaching skills)
because of the small size of the teacher
preparation programs, § 612.4(b)(3)(ii)
provides different options for
aggregation of data so the State can
provide these programs with
appropriate performance ratings. In this
case, except for teacher preparation
programs that are so small that even
these aggregation methods will not
permit the State to identify a
performance level (see
§ 612.4(b)(3)(ii)(D) and § 612.4(b)(5)), all
programs will have data on student
learning outcomes with which to
determine the program’s level of
performance.
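As a hypothetical sketch of one such aggregation approach, pooling successive years of data until a small program reaches a reportable size, consider the following. The invented years of data and the minimum n-size of 10 are illustrative assumptions, not values set by § 612.4(b)(3)(ii).

# Hypothetical sketch: pool the most recent years of outcome data until
# a small program reaches an illustrative minimum n-size for reporting.
MIN_N = 10  # illustrative threshold, not a regulatory value

# yearly_outcomes[year] holds invented outcome values for the program's
# novice teachers in that year.
yearly_outcomes = {
    2019: [0.8, 0.6, 0.7],
    2018: [0.9, 0.5],
    2017: [0.7, 0.8, 0.6, 0.9],
    2016: [0.6, 0.7, 0.8],
}

pooled = []
for year in sorted(yearly_outcomes, reverse=True):
    pooled.extend(yearly_outcomes[year])
    if len(pooled) >= MIN_N:
        break

if len(pooled) >= MIN_N:
    print("reportable mean outcome:", sum(pooled) / len(pooled))
else:
    print("program remains too small to assign a performance level")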
Changes: None.
State and Local Concerns
Comments: Several commenters
expressed concerns about their specific
State laws regarding data collection as
they affect data needed for student
learning outcomes. Other commenters
noted that some States have specific
laws preventing aggregated student
achievement data from being reported
for individual teachers. One commenter
said that its State did not require annual
teacher evaluations. Some commenters
indicated that State standards should be
nationally coordinated.
One commenter asked the Department
to confirm that the commenters’ State’s
ESEA flexibility waiver would meet the
student learning outcome requirements
for both tested and non-tested grades
and subjects, and if so, given the
difficulty and cost, whether the State
would still be required to report
disaggregated data on student growth in
assessment test scores for individual
teachers, programs, or entities in the
SRC. Commenters also noted that LEAs
could be especially burdened, with no
corresponding State or Federal authority
to compel LEA compliance. A
commenter stated that in one city most
teachers have 20 to 40 percent of their
evaluations based on tests in subjects
they do not teach.
Commenters urged that States be
given flexibility in determining the
components of data collection and
reporting systems with minimal
common elements. This would, as
commenters indicated, ultimately delay
the State’s ability to make valid and
reliable determinations of teacher
preparation program quality. Some
commenters stated that States should be
required to use student learning
outcomes as a factor in performance
designations, but allow each State to
determine how best to incorporate these
outcomes into accountability systems.
Commenters noted that a plan for
creating or implementing a measure of
student achievement in content areas for
which States do not have valid
statewide achievement data was not
proposed, nor was a plan proposed to
pilot or fund such standardized
measures.
Discussion: We agree and understand
that some States may have to make
changes (including legislative,
regulatory, budgetary, etc.) in order to
comply with the regulations. We have
allowed time for these activities to take
place, if necessary, by providing time
for data system set-up and piloting
before full State reporting is required as
of October 31, 2019. We note that
§ 612.4(b)(4)(ii)(E) of the proposed
regulations and § 612.4(b)(5) of the final
regulations expressly exempt reporting
of data where doing so would violate
Federal or State privacy laws or
regulations. We also provide in
§ 612.4(c)(2) that States must
periodically examine the quality of the
data collection and make adjustments as
necessary. So if problems arise, States
need to work on ways to resolve them.
Regarding the suggestion that State
standards for student learning outcomes
should be nationally coordinated, States
are free to coordinate. But how each
State assesses a program’s performance
is a State decision; the HEA does not
otherwise provide for such national
coordination.
With respect to the comment asking
whether a State’s ESEA flexibility
waiver would meet the student learning
outcomes requirement for both tested
and non-tested grades and subjects, this
issue is likely no longer relevant since
the enactment of the ESSA will make
ESEA flexibility waivers null and void
on August 1, 2016. However, in
response to the commenters’ question,
so long as the State is implementing the
evaluation systems as they committed to
do in order to receive ESEA flexibility,
the data it uses for student learning
outcomes would most likely represent
an acceptable way, among other ways,
to comply with the title II reporting
requirements.
We understand the comment, that
LEAs would be especially burdened
with no corresponding State or Federal
authority to compel LEA compliance, to
refer to LEA financial costs. It is unclear
that LEAs would be so burdened. We
believe that our cost estimates, as
revised to respond to public comment,
are accurate. Therefore, we also believe
that States, LEAs, and IHEs will be able to meet their responsibilities under this
reporting system without need for new
funding sources. We discuss authorities
related to LEA compliance in the
discussion under § 612.1.
Regarding specific reporting
recommendations for State flexibility in
use of student learning outcomes, States
must use the indicators of academic
content knowledge and teaching skills
identified in § 612.5(a). However, States
otherwise determine for themselves how
to use these indicators and other
indicators and criteria they may
establish to assess a program’s
performance. In identifying the
performance level of each program,
States also determine the weighting of
all indicators and criteria they use to
assess program performance.
Finally, we understand that all States
are working to implement their
responsibilities to provide results of
student assessments for grades and
subjects in which assessments are
required under section 1111(b)(2) of the
ESEA, as amended by ESSA. With
respect to the comment that the
Department did not propose a plan for
creating or implementing a measure of
student achievement in content areas for
which States do not have valid
statewide achievement data, the
regulations give States substantial
flexibility in how they measure student
achievement. Moreover, we do not agree
that time to pilot such new assessments
or growth calculations, or more Federal
funding in this area, is needed.
Changes: None.
Permitted Exclusions From Calculation
of Student Learning Outcomes
Comments: None.
Discussion: In proposing use of
student learning outcomes for assessing
a teacher preparation program’s
performance, we had intended that
States be able, in their discretion, to
exclude student learning outcomes
associated with recent graduates who
take teaching positions out of State or in
private schools—just as the proposed
regulations would have permitted States
to do in calculating employment
outcomes. Our discussion of costs
associated with implementation of
student learning outcomes in the NPRM
(79 FR 71879) noted that the proposed
regulations permitted the exclusion for
teachers teaching out of State. And
respectful of the autonomy accorded to
private schools, we never intended that
States be required to obtain data on
student learning outcomes regarding
recent graduates teaching in those
schools.
However, upon review of the
definitions of the terms ‘‘student
achievement in non-tested grades and
subjects,’’ ‘‘student achievement in
tested grades and subjects,’’ and
‘‘teacher evaluation measure’’ in
proposed § 612.2, we realized that these
definitions did not clearly authorize
States to exclude student learning
outcomes associated with these teachers
from their calculation of a teacher
preparation program’s aggregate student
learning outcomes. Therefore, we have
revised § 612.5(a)(1) to include authority
for the State to exclude data on student
learning outcomes for students of novice
teachers teaching out of State or in
private schools from its calculation of a
teacher preparation program’s student
learning outcomes. In doing so, as with
the definitions of teacher placement rate
and teacher retention rate, we have
included in the regulations a
requirement that the State use a
consistent approach with regard to
omitting or using these data in assessing
and reporting on all teacher preparation
programs.
Changes: We have revised section
612.5(a)(1) to provide that in calculating
a teacher preparation program’s
aggregate student learning outcomes, at
its discretion a State may exclude
student learning outcomes of students
taught by novice teachers teaching out
of State or in private schools, or both,
provided that the State uses a consistent
approach to assess and report on all of
the teacher preparation programs in the
State.
Employment Outcomes (34 CFR
612.5(a)(2))
Measures of Employment Outcomes
Comments: Many commenters
suggested revisions to the definition of
‘‘employment outcomes.’’ Some
commenters mentioned that the four
measures included in the definition
(placement rates, high-need school
placement rates, retention rates, and
high-need school retention rates) are not
appropriate measures of a program’s
success in preparing teachers. One
commenter recommended that high-need school placement rates not be
included as a required program
measure, and that instead the
Department allow States to use it at
their discretion. Other commenters
recommended including placement and retention data for preschool teachers in States where the statewide preschool program requires postsecondary training and certification and the State licenses those educators.
Discussion: For several reasons, we
disagree with commenters that the
employment outcome measures are
inappropriate measures of teacher
preparation program quality. The goals
of any teacher preparation program
should be to provide prospective
teachers with the skills and knowledge
needed to pursue a teaching career,
remain successfully employed as a
teacher, and in doing so produce
teachers who meet the needs of LEAs
and their students. Therefore, the rate at
which a program’s graduates become
and remain employed as teachers is a
critical indicator of program quality.
In addition, programs that persistently
produce teachers who fail to find jobs,
or, once teaching, fail to remain
employed as teachers, may well not be
providing the level of academic content
knowledge and teaching skills that
novice teachers need to succeed in the
classroom. Working with their
stakeholders (see § 612.4(c)), each State
will determine the point at which the
reported employment outcomes for a
program go from the acceptable to the
unacceptable, the latter indicating a
problem with the quality of the
program. We fully believe that these
outcomes reflect another reasonable way
to define an indicator of academic
content knowledge and teaching skills,
and that unacceptable employment
outcomes show something is wrong
with the quality of preparation the
teaching candidates have received.
Further, we believe that given the
need for teacher preparation programs
to produce teachers who are prepared to
address the needs of students in high-need schools, it is reasonable and
appropriate that indicators of academic
content and teaching skills used to help
assess a program’s performance focus
particular attention on teachers in those
schools. Therefore, we do not believe that the inclusion of teacher placement rates (and teacher retention rates) for high-need schools in SRCs should be left to State discretion.
We agree with commenters that, in
States where postsecondary training and
certification is required, and the State
licenses those teachers, data on the
placement and retention of preschool
teachers should be reported. We
strongly encourage States to report this
information. However, we decline to
require that they do so because pre-kindergarten licensure and teacher
evaluation requirements vary
significantly between States and among
settings, and given these State and local
differences in approach we believe that
it is important to leave the
determination of whether and how to
include preschool teachers in this
measure to the States.
Changes: None.
Teacher Placement Rate
Comments: One commenter
recommended that the teacher
placement rate account for
‘‘congruency,’’ which we interpret to
mean whether novice teachers are
teaching in the grade level, grade span,
and subject area in which they were
prepared. The commenter noted that
teacher preparation programs that are
placing teachers in out-of-field positions
are not aligning with districts’ staffing
needs. In addition, we understand the
commenter was noting that procedures
LEAs use for filling vacancies with
teachers from alternative route programs
need to acknowledge the congruency
issue and build in a mechanism to
remediate it.
Discussion: We agree that teachers
should be placed in a position for which
they have content knowledge and are
prepared. For this reason, the proposed
and final regulations define ‘‘teacher
placement rate’’ as the percentage of
recent graduates who have become
novice teachers (regardless of retention)
for the grade level, grade span, and
subject area in which they were
prepared, except, as discussed in the
section titled ‘‘Alternative Route
Programs,’’ we have revised the
regulations to provide that a State is not
required to calculate a teacher
placement rate for alternative route to
certification programs. While we do not
agree that teacher preparation programs
typically place teachers in their teaching
positions, programs that do not work to
ensure that novice teachers obtain
employment as teachers in a grade level,
span, or subject area that is the same as
that for which they were prepared will
likely fare relatively poorly on the
placement rate measure.
We disagree with the commenter’s
suggestion that alternative route
program participants are teaching in
out-of-field positions. Employment as a
teacher is generally a prerequisite to
entry into alternative route programs,
and the alternative route program
participants are being prepared for an
initial certification or licensure in the
field in which they are teaching. We do
not know of evidence to suggest that
most participants in alternative route
programs become teachers of record
without first having demonstrated
adequate subject-matter content
knowledge in the subjects they teach.
Nonetheless, traditional route
programs and alternative route programs
recruit from different groups of
prospective teachers and have different
characteristics. It is for this reason that,
both in our proposed and final
regulations, States are permitted to
assess the employment outcomes of
traditional route programs versus
alternative route programs differently,
provided that the different assessments
result in equivalent standards of
accountability and reporting.
Changes: None.
Teacher Retention Rate
Comments: Many commenters
expressed concern that the teacher
retention rate measure does not consider
other factors that influence retention,
including induction programs, the
support novice teachers receive in the
classroom, and the districts’ resources.
Other commenters suggested requiring
each State to demand from its
accredited programs a 65 percent
retention rate after five years.
Some commenters also expressed
concern about how the retention rate
measure will be used to assess
performance during the first few years
of implementation. They stated that it
would be unfair to rate teacher
preparation programs without complete
information on retention rates.
Discussion: We acknowledge that
retention rates are affected by factors
outside the teacher preparation
program’s control. However, we believe
that a teacher retention rate that is
extraordinarily low, just as one that is
extraordinarily high, is an important
indicator of the degree to which a
teacher preparation program adequately
prepares teachers to teach in the schools
that hire them and thus is a useful and
appropriate indicator of academic
content knowledge and teaching skills
that the State would use to assess the
program’s performance. The regulations
leave to the States, in consultation with
their stakeholders (see § 612.4(c)) the
determination about how they calculate
and then weight a program’s retention
rate. While we agree that programs
should strive for high retention rates,
and encourage States to set rigorous
performance goals for their programs,
we do not believe that the Department
should set a specific desired rate for this
indicator. Rather, we believe the States
are best suited to determine how to
implement and weight this measure.
However, we retain the proposal to have
the retention rate apply over the first
three years of teaching both because we
believe that having novice teachers
remain in teaching for the first three
years is key, and because having States
continue to generate data five years out
as the commenter recommended is
unnecessarily burdensome.
We understand that, during the initial
years of implementation, States will not
have complete data on retention. We
expect that States will weight indicators
for which data are unavailable during
these initial implementation years in a
way that is consistent and applies
equivalent levels of accountability
across programs. For further discussion
of the reporting cycle and
implementation timeline, see § 612.4(a).
We also note that, as we explain in our
response to comments on the definition
of ‘‘teacher retention rate’’, under the
final regulations States will report on
teachers who remain in the profession
in the first three consecutive years after
placement.
Changes: None.
Comments: Commenters expressed
concern that the categories of teachers
who can be excluded from the ‘‘teacher
placement rate’’ calculation are different
from those who can be excluded from
the ‘‘teacher retention rate’’ calculation.
Commenters believed this could
unfairly affect the rating of teacher
preparation programs.
Discussion: We agree that differences
in the categories of teachers who can be
excluded from the ‘‘teacher placement
rate’’ calculation and the ‘‘teacher
retention rate’’ calculation should not
result in an inaccurate portrayal of
teacher preparation program
performance on these measures. Under
the proposed regulations, the categories
of teachers who could be excluded from
these calculations would have been the
same with two exceptions: Novice
teachers who are not retained
specifically and directly due to budget
cuts may be excluded from the
calculation of teacher retention rate
only, as may recent graduates who have
taken teaching positions that do not
require State certification. A teacher
placement rate captures whether a
recent graduate has ever become a
novice teacher and therefore is reliant
on initial placement as a teacher of
record. Retention in a teaching position
has no bearing on this initial placement,
and therefore allowing States to exclude
teachers from the placement rate who
were not retained due to budget cuts
would not be appropriate. Therefore, the
option to exclude this category of
teachers from the retention rate
calculation does not create
inconsistencies between these measures.
However, permitting States to exclude
from the teacher placement rate
calculation, but not from the teacher
retention rate calculation, recent
graduates who have taken teaching
positions that do not require State
certification could create
inconsistencies between the measures.
Moreover, upon further review, we
believe permitting the exclusion of this
category of teachers from either
calculation runs contrary to the purpose
of the regulations, which is to assess the
performance of programs that lead to an
initial State teacher certification or
licensure in a specific field. For these
reasons, the option to exclude this
category of teachers has been removed
from the definition of ‘‘teacher
placement rate’’ in the final regulations
(see § 612.2). With this change, the
differences between the categories of
teachers that can be excluded from
teacher placement rate and teacher
retention rate will not unfairly impact
the outcomes of these measures, so long
as the State uses a consistent approach
to assess and report on all programs in
the State.
Changes: None.
Comments: Commenters stated that
the teacher retention rate measure
would reflect poorly on special
education teachers, who have a high
turnover rate, and on the programs that
prepare them. They argued that, in
response to the regulations, some
institutions will reduce or eliminate
their special education preparation
programs rather than risk low ratings.
Discussion: Novice special education
teachers have chosen their area of
specialization, and their teacher
preparation programs trained them
consistent with State requirements. The
percentage of these teachers, like
teachers trained in other areas, who
leave their area of specialization within
their first three years of teaching, or
leave teaching completely, is too high
on an aggregated national basis.
We acknowledge that special
education teachers face particular
challenges, and that like other teachers,
there are a variety of reasons—some
dealing with the demands of their
specialty, and some dealing with a
desire for other responsibilities, or
personal factors—for novice special
education teachers to decide to move to
other professional areas. For example,
some teachers with special education
training, after initial employment, may
choose to work in regular education
classrooms, where many children with
disabilities are taught consistent with
the least restrictive environment
provisions of the Individuals with
Disabilities Education Act. Their
specialized training can be of great
benefit in the regular education setting.
Under our regulations, States will
determine how to apply the teacher
retention indicator, and so determine in
consultation with their stakeholders (see
§ 612.4(c)) what levels of retention
would be so unreasonably low (or so
unexpectedly high) as to reflect on the
quality of the teacher preparation
program. We believe this State
flexibility will incorporate
consideration of the programmatic
quality of special education teacher
preparation and the general
circumstances of employment of these
teachers. Special education teachers are
teachers first and foremost, and we do
not believe the programs that train
special education teachers should be
exempted from the State’s overall
calculations of their teacher retention
rates. Demand for teachers trained in
special education is expected to remain
high, and given the flexibility States
have to determine what is a reasonable
retention rate for novice special
education teachers, we do not believe
that this indicator of program quality
will result in a reduction of special
education preparation programs.
Changes: None.
Placement in High-Need Schools
Comments: Many commenters noted
that incentivizing the placement of
novice teachers in high-need schools
contradicts the ESEA requirement that
States work against congregating novice
teachers in high-need schools. The
‘‘Excellent Educators for All’’ 44
initiative asks States to work to ensure
44 Equitable Access to Excellent Educators: State
Plans to Ensure Equitable Access to Excellent
Educators. (2014). Retrieved from https://
www2.ed.gov/programs/titleiparta/resources.html.
that high-need schools obtain and retain
more experienced teachers. Commenters
believed States would be challenged to
meet the contradictory goals of the
mandated rating system and the
Department’s other initiatives.
Discussion: The required use of
teacher placement and retention rates
(i.e., our employment rate outcomes) is
intended to provide data that confirm
the extent to which those whom a
teacher preparation program prepares go
on to become novice teachers and
remain in teaching for at least three
years. Moreover, placement rates overall
are particularly important, in that they
provide a baseline context for evaluating
a program’s retention rates. Our
employment outcomes include similar
measures that focus on high-need
schools because of the special
responsibility of programs to meet the
needs of those schools until such time
as SEAs and LEAs truly have
implemented their responsibilities
under sections 1111(g)(1)(B) and 1112(b)(2) of
the ESEA, as amended by ESSA,
(corresponding to similar requirements
in sections 1111(b)(8)(C) and
1112(c)(1)(L) of the ESEA, as previously
amended by NCLB) to take actions to
ensure that low-income children and
children of color are not taught at higher
rates than other children by
inexperienced, unqualified, or out-of-field teachers.
The Department required all States to
submit State Plans to Ensure Equitable
Access to Excellent Educators
(Educator Equity Plans) to address this
requirement, and we look forward to the
time when employment outcomes that
focus on high-need schools are
unnecessary. However, it is much too
early to remove employment indicators
that focus on high-need schools. For this
reason, we decline to accept the
commenters’ recommendation that we
do so because of concern that these
reporting requirements are inconsistent
with those under the ESEA.
We add that, just as States will
establish the weights to these outcomes
in assessing the level of program
performance, States also may adjust
their expectations for placement and
retention rates for high-need schools in
order to support successful
implementation of their State plans.
Changes: None.
Comments: Many commenters
expressed concern about placing novice
teachers in high-need schools without
additional support systems. Several
other commenters stated that the
proposed regulations would add to the
problem of chronic turnover of the least
experienced teachers in high-need
schools.
Discussion: We agree that high-need
schools face special challenges, and that
teachers who are placed in high-need
schools need to be prepared for those
challenges so that they have a positive
impact on the achievement and growth
of their students. By requiring
transparency in reporting of
employment outcomes through
disaggregated information about high-need schools, we hope that preparation
programs and high-need schools and
districts will work together to ensure
novice teachers have the academic
content knowledge and teaching skills
they need when placed as well as the
supports they need to stay in high-need
schools.
We disagree with commenters that the
regulations will lead to higher turnover
rates. By requiring reporting on placement and retention rates by program, we believe that employers will be better able to identify programs with strong track records for preparing novice teachers who stay, and succeed, in high-need schools. This information will
help employers make informed hiring
decisions and may ultimately help
districts reduce teacher turnover rates.
Changes: None.
State Flexibility To Define and
Incorporate Measures
Comments: Commenters suggested
that States be able to define the specific
employment information they are
collecting, as well as the process for
collecting it, so that they can use the
systems they already have in place.
Other commenters suggested that the
Department require that States use
employment outcomes as a factor in
performance designations, but allow
each State to determine how best to
incorporate these outcomes into
accountability systems.
Several commenters suggested
additional indicators that could be used
to report on employment outcomes.
Specifically, commenters suggested that
programs should report the
demographics and outcomes of enrolled
teacher candidates by race and ethnicity
(graduation rate, dropout rates,
placement rates for graduates, first-year
evaluation scores (if available), and the
percentage of teacher candidates who
stay within the teaching profession for
one, three, and five years). Also,
commenters suggested that the
Department include the use of readily available financial data when reporting
employment outcomes. Another
commenter suggested that the
Department collect information on how
many teachers from each teacher
preparation program attain an
exemplary rating through the statewide
evaluation systems. Finally, one
commenter suggested counting the
number of times schools hire graduates
from the same teacher preparation
program.
Discussion: As with the other
indicators, States have flexibility to
determine how the employment
outcome measures will be implemented
and used to assess the performance of
teacher preparation programs. If a State
wants to adopt the recommendations in
the way it implements collecting data
on placement and retention rates, it
certainly may do so. But we are mindful
of the additional costs associated with
calculating these employment measures
for each teacher preparation program
that would come from adopting
commenters’ recommendations to
disaggregate their employment measures
by category of teachers or to include the
other categories of data they
recommend.
We do not believe that further
disaggregation of data as recommended
will produce a sufficiently useful
indicator of teacher preparation program
performance to justify a requirement
that all States implement one or more of
these recommendations. We therefore
decline to adopt them. We also do not
believe additional indicators are
necessary to assess the academic
content knowledge and teaching skills
of the novice teachers from each teacher
preparation program, though, consistent
with § 612.5(b), States are free to adopt
them if they choose to do so.
Changes: None.
Employment Outcomes as a Measure of
Program Performance
Comments: Commenters suggested
that States be expected to report data on
teacher placement, without being
required to use the data in making
annual program performance
designations.
Several commenters noted that school
districts often handle their own
decisions about hiring and placement of
new school teachers, which severely
limits institutions’ ability to place
teachers in schools. Many commenters
advised against using employment data
in assessments of teacher preparation
programs. Some stated that these data
would fail to recognize the importance
of teacher preparation program students’
variable career paths and potential for
employment in teaching-related fields.
They argued that narrowly defining teacher preparation program quality in terms of a limited conception of employment for graduates is misguided and unnecessarily damaging.
Other commenters argued that the
assumption underlying this proposed
measure of a relationship between
program quality and teacher turnover is
not supported by research, especially in
high-need schools. They stated that
there are too many variables that impact
teacher hiring, placement, and retention
to effectively connect that variable to
the quality of teacher preparation
programs. Examples provided include:
The economy and budget cuts, layoffs
that poor school districts are likely to
implement, State politics, the
unavailability of a position in given
content area, personal choices (e.g.,
having a family), better paying
positions, out of State positions, private
school positions, military installations
and military spouses, few opportunities
for advancement, and geographic hiring
patterns (e.g., rural versus urban hiring
patterns). Some commenters also stated
that edTPA, which they described as an
exam that is similar to a bar exam for
teaching, would be a much more direct,
valid measure of a graduate’s skills.
Discussion: We acknowledge that
there are factors outside of a program’s
control that influence teacher placement
rates and teacher retention rates. As
commenters note, teacher preparation
program graduates (or alternative route
program participants if a State chooses
to look at them rather than program
graduates) may decide to enter or leave
the profession due to family
considerations, working conditions at
their school, or other reasons that do not
necessarily reflect upon the quality of
their teacher preparation program or the
level of content knowledge and teaching
skills of the program’s graduates.
In applying these employment
outcome measures, we do not expect that States will treat a rate
that is below 100 percent as a poor
reflection on the quality of the teacher
preparation program. Rather, in
applying these measures States may
determine what placement rates and
retention rates would be so low (or so
high, if they choose to identify
exceptionally performing programs) as
to speak to the quality of the program
itself.
However, while factors like those
commenters identify affect employment
outcomes, we believe that the primary
goal of teacher preparation programs
should be to produce graduates who
successfully become classroom teachers
and stay in teaching at least several
years. We believe that high placement
and retention rates are indicators that a
teacher preparation program’s graduates
(or an alternative route program’s
participants if a State chooses to look at
them rather than program graduates)
have the requisite content knowledge
and teaching skills to demonstrate
sufficient competency to find a job, earn
positive reviews, and choose to stay in
the profession. This view is shared by
States like North Carolina, Louisiana,
and Tennessee, as well as CAEP, which
require reporting on similar outcomes
for teacher preparation programs.
Commenters accurately point out that
teachers in low-performing schools with
high concentrations of students of color
have significantly higher rates of
turnover. Research from New York State
confirms this finding, but also shows
that first-year teachers who leave a
school are, on average, significantly less
effective than those who stay.45 This
finding, along with other similar
findings,46 indicates that teacher
retention and teaching skills are
positively associated with one another.
Another study found that, when choosing among teachers seeking to transfer schools, schools tend to choose the teachers with greater impact on student
outcomes,47 suggesting that hiring
decisions are also indications of teacher
skills and content knowledge. Research
studies 48 and available State data 49 on
teacher preparation programs' placement
and retention rates also show that there
can be large differences in employment
outcomes across programs within a
State. While these rates are no doubt
influenced by many factors, the
Department believes that they are in
part a reflection of the quality of the
program, because they signal a
program’s ability to produce graduates
that schools and districts deem to be
qualified.
The use of employment outcomes as
indicators of the performance of a
45 Boyd, D., Grossman, P., Lankford, H., & Loeb, S. (2008). Who Leaves? Teacher Attrition and Student Achievement (Working Paper No. 14022). Retrieved from National Bureau of Economic Research.
46 Goldhaber, D., Gross, B., & Player, D. (2007).
Are public schools really losing their ‘‘best’’?
Assessing the career transitions of teachers and
their implications for the quality of the teacher
workforce (Working Paper No. 12).
47 Boyd, D., Lankford, H., Loeb, S., Ronfeldt, M.,
& Wyckoff, J. (2011). The role of teacher quality in
retention and hiring: Using applications to transfer
to uncover preferences of teachers and schools.
Journal of Policy Analysis and Management, 30(1),
88–110.
48 Kane, T., Rockoff, J., & Staiger, D. (2008). What does certification tell us about teacher effectiveness? Evidence from New York City. Economics of Education Review, 27(6), 615–631.
49 See, for example, information on these
indicators reported by Tennessee and North
Carolina: Report Card on the Effectiveness of
Teacher Training Programs, Tennessee 2014 Report
Card. (n.d.). Retrieved November 30, 2015, from
www.tn.gov/thec/Divisions/AcademicAffairs/rttt/
report_card/2014/report_card/14report_card.shtml;
UNC Educator Quality Dashboard. (n.d.). Retrieved
from https://tqdashboard.northcarolina.edu/
performance-employment/.
teacher preparation program also
reflects the relationship between teacher
retention rates and student outcomes. At
the school level, high teacher turnover
can have multiple negative effects on
student learning. When a teacher leaves
a school, it is more likely that the
vacancy will be filled by a less experienced and, on average, less effective teacher, which will lower the
achievement of students in the school.
In addition to this effect on the
composition of a school’s teacher
workforce, the findings of Ronfeldt, et
al. suggest that disruption from teacher
turnover has an additional negative
effect on the school as a whole, in part,
by lowering the effectiveness of the
teachers who remain in the school.50
Thus, we believe that employment
outcomes, taken together, serve not only
as reasonable indicators of academic
content knowledge and teaching skill,
but also as potentially important
incentives for programs and States to
focus on a program’s ability to produce
graduates with the skills and
preparation to teach for many years.
Placement rates overall, and in high-need schools specifically, are
particularly important, in that they
provide a baseline context for evaluating
a program’s retention rates. In an
extreme example, a program may have 100 graduates, but if only one graduate actually secures employment as a teacher and that graduate continues to teach, the program would have a retention rate of 100 percent. Plainly, such a retention
rate does not provide a meaningful or
complete assessment of the program’s
impact on teacher retention rate, and
thus on this indicator of program
quality. Similarly, two programs may
each produce 100 teachers, but one
program only places teachers in high-need schools, while the other places no
teachers in high-need schools. Even if
the programs produced graduates of the
exact same quality, the program that
serves high-need schools would be
likely to have lower retention rates, due
to the challenges described in comments
and above.
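The arithmetic behind this extreme example is easy to make explicit. The following sketch, in Python with the preamble's hypothetical counts, shows why a retention rate reported without its companion placement rate can look perfect while describing almost no one; the function names are ours, and the rates here simplify the regulatory definitions in § 612.2.

```python
def placement_rate(placed, recent_graduates):
    """Share of recent graduates who ever became novice teachers."""
    return placed / recent_graduates

def retention_rate(retained, placed):
    """Share of placed novice teachers who remain teaching. The
    denominator is placements, not graduates, which is exactly why
    placement rates supply the baseline context discussed above."""
    return retained / placed

# The preamble's extreme case: 100 graduates, 1 placed, that 1 retained.
print(placement_rate(1, 100))  # 0.01 -> 1 percent placement
print(retention_rate(1, 1))    # 1.0  -> 100 percent retention
```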
Finally, we reiterate that States have
flexibility to determine how
employment outcomes should be
weighted, so that they may match their
metrics to their individual needs and
conditions. In regard to using other
available measures of teaching ability
and academic content knowledge, like
edTPA, we believe that, taken together,
outcome-based measures that we require
50 Ronfeldt, M., Loeb, S., & Wyckoff, J. (2013).
How Teacher Turnover Harms Student
Achievement. American Education Research
Journal, 50(1), 4–36.
(student learning outcomes,
employment outcomes, and survey
outcomes) are the most direct measures
of academic content knowledge and
teaching skills. Placement and retention
rates reflect the experiences of
program’s recent graduates and novice
teachers over the course of three to six
years (depending on when recent
graduates become novice teachers),
which cannot be captured by other
measures. We acknowledge that States
may wish to include additional
indicators, such as student survey
results, to assess teacher preparation
program performance. Section 612.5(b)
permits States to do so. However, we
decline to require that States use
additional or other indicators like those
suggested in place of employment
outcomes, because we strongly believe
they are less direct measures of
academic content knowledge and
teaching skills.
Changes: None.
Validity and Reliability
Comments: Several commenters
indicated that the teacher retention data
that States would need to collect for
each program do not meet the standards
for being valid or reliable. They stated
that data on program graduates will be
incomplete because States can exclude
teachers who move across State lines,
teach in private schools or in positions
which do not require certification, or
who join the military or go to graduate
school. Commenters further expressed
concern over the numerous requests for
additional data regarding persistence,
academic achievement, and job
placement that are currently beyond the
reach of most educator preparation
programs.
Discussion: As we have previously
stated, we intend the use of all
indicators of academic content
knowledge and teaching skill to produce
information about the performance level
of each teacher preparation program
that, speaking broadly, is valid and
reliable. See, generally, our discussion
of the issue in response to public
comment on Indicators a State Must Use
to Report on Teacher Preparation
Programs in the State Report Card (34
CFR 612.5(a)).
It is clear from the comments we
received that there is not an outright
consensus on using employment
outcomes to measure teacher
preparation programs; however, we
strongly believe that the inclusion of
employment outcomes with other
measures contributes to States’ abilities
to make valid and reliable decisions
about program performance. Under the
regulations, States will work with their
stakeholders (see § 612.4(c)) to establish
methods for evaluating the quality of
data related to a program’s outcome
measures, and all other indicators, to
ensure that the reported data are fair
and equitable. As we discussed in the
NPRM, in doing so, the State should use
this process to ensure the reliability,
validity, integrity, and accuracy of all
data reported about the performance of
teacher preparation programs. We
recognize the burden that reporting on
employment outcomes may place on
individual programs, and for this
reason, we suggest, but do not require,
that States examine their capacity,
within their longitudinal data systems,
to track employment outcomes because
we believe this will reduce costs for
IHEs and increase efficiency of data
collection.
We recognize that program graduates
may not end up teaching in the same
State as their teacher preparation
program for a variety of reasons and
suggest, but do not require, that States
create inter-State partnerships to better
track employment outcomes of program
completers as well as agreements that
allow them to track military service,
graduate school enrollment, and
employment as a teacher in a private
school. But we do not believe that the
exclusion of these recent graduates, or
those who go on to teach in private
schools, jeopardizes reasonable use of
this indicator of teacher preparation
program performance. As noted
previously, we have revised the
regulations so that States may not
exclude recent graduates employed in
positions which do not require
certification from their calculations of
employment outcomes. Working with
their stakeholders (see § 612.4(c)), States
will be able to determine how best to
apply the retention rate data that they
have.
Finally, we understand that many
teacher preparation programs do not
currently collect data on factors like job
placement, how long their graduates
who become teachers stay in the
profession, and the gains in academic
achievement that are associated with
their graduates. However, collecting this
information is not beyond those
programs’ capacity. Moreover, the
regulations make the State responsible
for ensuring that data needed for each
indicator to assess program performance
are secured and used. How they will do
so would be a subject for State
discussion with its consultative group.
Changes: None.
Data Collection and Reporting
Concerns
Comments: Commenters
recommended that placement-rate data
be collected beyond the first year after
graduation and across State boundaries.
Another commenter noted that a State
would need to know which ‘‘novice
teachers’’ or ‘‘recent graduates’’ who
attended teacher preparation programs
in their State are not actually teaching
in their State, and it is unclear how a
State would be able to get this
information. Several commenters
further stated that States would need
information about program graduates
who teach in private schools that is not
publicly available and may violate
privacy laws to obtain.
Commenters were concerned about
how often data will be updated by the
Department. They stated that, due to
teachers changing schools mid-year,
data will be outdated and not helpful to
the consumer. Several commenters
suggested that a national database
would need to be in place for accurate
data collection so institutions would be
able to track graduates across State
boundaries. Two commenters noted that
it will be difficult to follow graduates
over several years and collect accurate
data to address all of the areas relevant
to a program’s retention rate, and that
therefore reported rates would reflect a
great deal of missing data.
Another commenter suggested that
the Department provide support for the
development and implementation of
data systems that will allow States to
safely and securely share employment,
placement, and retention data.
Discussion: We note first that, due to
the definition of the terms ‘‘teacher
placement rate’’ and ‘‘recent graduate’’
(see § 612.2), placement rate data are collected on individuals who have met the requirements of a program in any of
the three title II reporting years
preceding the current reporting year.
In order to decrease the costs
associated with calculating teacher
placement and teacher retention rates
and to better focus the data collection,
our proposed and final definitions of
teacher placement rate and teacher
retention rate in § 612.2 permit States to
exclude certain categories of novice
teachers from their calculations for their
teacher preparation programs, provided
that each State uses a consistent
approach to assess and report on all of
the teacher preparation programs in the
State. As we have already noted, these
categories include teachers who teach in
other States, teach in private schools,
are not retained specifically and directly
due to budget cuts, or join the military
or enroll in graduate school. While we
encourage States to work to capture
these data to make the placement and
retention rates for each program as
robust as possible, we understand that
current practicalities may affect their
ability to do so for one or more of these
categories of teachers. But we strongly
believe that, except in rare
circumstances, States will have enough
data on employment outcomes for each
program, based on the numbers of
recent graduates who take teaching
positions in the State, to use as an
indicator of the program’s performance.
To address confidentiality concerns,
§ 612.4(b)(5) expressly exempts
reporting of data where doing so would
violate Federal or State privacy laws or
regulations.
The regulations do not require States
to submit documentation with the SRCs
that supports their data collections; they
only must submit the ultimate
calculation for each program’s indicator
(and its weighting). However, States
may not omit program graduates (or
participants in alternative route
programs if a State chooses to look at
participants rather than program
graduates) from any of the calculations
of employment or survey outcomes
indicators without being able to verify
that these individuals are in the groups
that the regulations permit States to omit.
Some commenters recommended that
the Department maintain a national
database, while others seemed to think
that we plan to maintain such a
database. States must submit their SRCs
to the Department annually, and the
Department intends to make these
reports and the data they include, like
SRCs that States annually submitted in
prior years, publicly available. The
Department has no other plans for
activities relevant to a national database.
Commenters were concerned about
difficulties in following graduates for
the three-year period proposed in the
NPRM. As discussed in response to
comment on the ‘‘teacher retention rate’’
definition in § 612.2, we have modified
the definition of ‘‘teacher retention rate’’
so that States will be reporting on the
first three years a teacher is in the
classroom rather than three out of the
first five years. We believe this change
addresses the commenters’ concerns.
As we interpret the comment, one
commenter suggested we provide
support for more robust data systems so
that States have access to the
employment data of teachers who move
to other States. We have technical
assistance resources dedicated to
helping States collect and use
longitudinal data, including the
Statewide Longitudinal Data System’s
Education Data Technical Assistance
Program and the Privacy Technical
Assistance Center, which focuses on the
privacy and security of student data. We
will look into whether these resources
may be able to help address this matter.
Changes: None.
Alternative Route Programs
Comments: Commenters stated that
the calculation of placement and
retention rates for alternative route
teacher preparation programs should be
different from those for traditional route
teacher preparation programs. Others
asked that the regulations ensure the use
of multiple measures by States in
assessing traditional and alternative
route programs. Many commenters
stated that the proposed regulations give
advantages to alternative route
programs, as programs that train
teachers on the job get significant
advantages by being allowed to count all
of their participants as employed while
they are still learning to teach, virtually
ensuring a very high placement rate for
those programs. Other commenters
suggested that the common starting
point for both alternative and traditional
route programs should be the point at
which a candidate has the opportunity
to become a teacher of record.
As an alternative, commenters
suggested that the Department alter the
definition of ‘‘new teacher’’ so that both
traditional and alternative route teacher
candidates start on equal ground. For
example, the definition might include
‘‘after all coursework is completed,’’ ‘‘at
the point a teacher is placed in the
classroom,’’ or ‘‘at the moment a teacher
becomes a teacher of record.’’
Commenters recommended that teacher
retention rate should be more in line
with CAEP standards, which do not
differentiate accountability for alternate
and traditional route teacher
preparation programs.
Many commenters were concerned
about the ability of States to weight
employment outcomes differently for
alternative and traditional route
programs, thus creating unfair
comparisons among States or programs
in different States while providing the
illusion of fair comparisons by using the
same metrics. One commenter was
concerned about a teacher preparation
program’s ability to place candidates in
fields where a degree in a specific
discipline is needed, as those jobs will
go to those with the discipline degree
and not to a teacher preparation
program degree, thus giving teachers
from alternative route programs an
advantage. Others stated that
demographics may impact whether a
student enrolls in a traditional or an
alternative route program, so comparing
the two types of programs in any way
is not appropriate.
Discussion: We agree that
employment outcomes could vary based
solely on the type, rather than the
quality, of a teacher preparation
program. While there is great variability
both among traditional route programs
and among alternative route programs,
those two types of programs have
characteristics that are generally very
different from each other. We agree with
commenters that, due to the
fundamental characteristics of
alternative certification programs (in
particular the likelihood that all
participants will be employed as
teachers of record while completing
coursework), the reporting of teacher
placement rate data of individuals who
participated in such programs will
inevitably result in a 100 percent
placement rate. However, creation of a
different methodology for calculating
the teacher placement rate solely for
alternative route programs would be
unnecessarily complex and potentially
confusing for States as they implement
these regulations and for the public as
they examine the data. Accordingly, we
have removed the requirement that
States report and assess the teacher
placement rate of alternative route
programs from the final regulations.
States may, at their discretion, continue
to include teacher placement rate for
alternative certification programs in
their reporting system if they determine
that this information is meaningful and
deserves weight. However, they are not
required to do so by these final
regulations.
For reasons discussed in the
Meaningful Differentiations in Teacher
Preparation Program Performance
section of this preamble, we have not
removed the requirement that States
report the teacher placement rate in
high-need schools for alternative route
programs. If a teacher is employed as a
teacher of record in a high-need school
prior to program completion, that
teacher will be considered to have been
placed when the State calculates and
reports a teacher placement rate for
high-need schools. Unlike teacher
placement rate generally, the teacher
placement rate in high-need schools can
be used to meaningfully differentiate
between programs of varying quality.
Recognizing both that (a) the
differences in the characteristics of
traditional and alternative route
programs may create differences
in teacher placement rates in high-need schools and (b) our removal of the
requirement to include teacher
placement rate for alternative
certification programs creates a different
number of required indicators for
Employment Outcomes between the two
program types, we have revised
§ 612.5(a)(2) to clarify that (1) in their
overall assessment of program
performance States may assess
employment outcomes for these
programs differently, and (2) States may
do so provided that differences in
assessments and the reasons for those
differences are transparent and that
assessments result in equivalent levels
of accountability and reporting
irrespective of the type of program.
We believe States are best suited to
analyze their traditional and alternative
route programs and determine how best
to apply employment outcomes to
assess the overall performance of these
programs. As such, to further promote
transparency and fair treatment, we
have revised section V of the SRC to require each State to describe the rationale for how it is treating the employment outcomes differently where the State has not chosen to add a measure of placement rate for alternative route programs and does in fact have different bases for accountability.
We also believe that, as we had
proposed, States should apply
equivalent standards of accountability
in how they treat employment outcomes
for traditional programs and alternative
route programs, and suggest a few
approaches States might consider for
achieving such equivalency.
For example, a State might devise a
system with five areas in which a
teacher preparation program must have
satisfactory outcomes in order to be
considered not low-performing or at-risk
of being low-performing. For the
employment outcomes measure (and
leaving aside the need for employment
outcomes for high-need schools), a State
might determine that traditional route
programs must have a teacher
placement rate of at least 80 percent and
a second-year teacher retention rate of at
least 70 percent to be considered as
having satisfactory employment
outcomes. The State may, in
consultation with stakeholders,
determine that a second-year retention
rate of 85 percent for alternative
certification programs results in an
equivalent level of accountability for
those programs, given that almost all
participants in such programs in the
State are placed and retained for some
period of time during their program.
As another example, a State might
establish a numerical scale wherein the
employment outcomes for all teacher
preparation programs in the State
account for 20 percent. A State might
then determine that teacher placement (overall and in high-need schools) and teacher retention (overall and in high-need schools) outcomes are weighted equally, say at 10 percent each, for all traditional route programs, but weight the placement rate in high-need schools at 10 percent and the retention rate (overall and in high-need schools) at 10 percent for alternative route programs.
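As a purely illustrative sketch of this second approach, the snippet below encodes the hypothetical weights: both program types assign employment outcomes the same 20 percent of the overall score, so the levels of accountability are equivalent even though the components differ. The dictionary keys and the scoring function are our own shorthand, not terms from the regulations.

```python
# Hypothetical weights from the preamble's second example. Each scheme
# allocates 20 percent of the overall score to employment outcomes.
TRADITIONAL = {"placement": 0.10,            # overall and high-need, combined
               "retention": 0.10}            # overall and high-need, combined
ALTERNATIVE = {"placement_high_need": 0.10,  # overall placement not required
               "retention": 0.10}

def employment_component(rates, weights):
    """Weighted employment portion of a program's overall assessment."""
    return sum(weights[key] * rates[key] for key in weights)

# Equivalency check: both schemes carry the same total weight.
assert sum(TRADITIONAL.values()) == sum(ALTERNATIVE.values()) == 0.20
```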
We also recognize that some
alternative route programs are
specifically designed to recruit high-quality participants who may be
committed to teach only for a few years.
Many also recruit participants who in
college had academic majors in fields
similar to what they will teach. Since a
significant aspect of our indicators of
academic content knowledge and
teaching skills focuses on the success of
novice teachers regardless of the nature
of their teacher preparation program, we
do not believe we should establish a
one-size-fits-all rule here. Rather, we
think that States are in a better position
to determine how the employment
outcomes should best be used to help
assess the performance of alternative
route and traditional route programs.
We agree that use of multiple
measures of program performance is
important. We reiterate that the
regulations require that, in reporting the
performance of all programs, both
traditional and alternative route, States
must use the four indicators of academic
content knowledge and teaching skills
the regulations identify in § 612.5(a),
including employment outcomes—the
teacher placement rate (excepting the
requirement here for alternative route
programs), teacher placement rate in
high-need schools, teacher retention
rate, and teacher retention rate in high-need schools—in addition to any
indicators of academic content
knowledge and teaching skills and other
criteria they may establish on their own.
However, we do not know of any
inherent differences between traditional
route programs and alternative route
programs that should require different
treatment of the other required
indicators—student learning outcomes,
survey outcomes, and the basic
characteristics of the program addressed
in § 612.5(a)(4). Nor do we see any
reason why any differences in the type
of individuals that traditional route
programs and alternative route programs
enroll should mean that the program’s
student learning outcomes should be
assessed differently.
Finally, while some commenters
argued about the relative advantage of
alternative route or traditional route
programs in reporting on employment
outcomes, we reiterate that neither the
regulations nor the SRCs pit programs
against each other. Each State
determines what teacher preparation
programs are and are not low-performing or at-risk of being low-performing (as well as in any other
category of performance it may
establish). Each State then reports the
data that reflect the indicators and
criteria used to make this determination,
and identifies those programs that are
low-performing or at-risk of being low-performing. Of course, any differences
in how employment outcomes are
applied to traditional route and
alternative route programs would need
to result in equivalent levels of
accountability and reporting (see
§ 612.5(a)(2)(B)). But the issue for each
State is identifying each program’s level
of performance relative to the level of
expectations the State established—not
relative to levels of performance or
results for indicators or criteria that
apply to other programs.
Changes: We have revised
§ 612.5(a)(2)(iii) to clarify that in their
overall assessment of program
performance States may assess
employment outcomes for traditional
route programs and programs provided
through alternative routes differently
provided that doing so results in
equivalent levels of accountability.
We have also added a new
§ 612.5(a)(2)(v) to provide that a State is
not required to calculate a teacher
placement rate under paragraph
(a)(2)(i)(A) of that section for alternative
route to certification programs.
Teacher Preparation Programs Provided
Through Distance Education
Comments: None.
Discussion: In reviewing the proposed
regulations, we recognized that, as with
alternative route programs, teacher
preparation programs provided through
distance education may pose unique
challenges to States in calculating
employment outcomes under
§ 612.5(a)(2). Specifically, because such
programs may operate across State lines,
an individual State may be unable to
accurately determine the total number
of recent graduates from any given program, when only a subset of that total would, in theory, be preparing to teach
in that State. For example, a teacher
preparation entity may be physically
located in State A and operate a teacher
preparation program provided through
distance education in both State A and
State B. While the teacher preparation
entity is required to submit an IRC to
State A, which would include the total
number of recent graduates from their
program, only a subset of that total
number would be residing in or
preparing to teach in State A. Therefore,
when State A calculates the teacher
placement rate for that program, it
would generate an artificially low rate.
In addition, State B would face the same issue even if it had ready access to the total number of recent graduates (which it would not, as the program would not be required to submit an IRC to State B).
Any teacher placement rate that State B
attempts to calculate for this, or any
other, teacher preparation program
provided through distance education
would be artificially low as recent
graduates who did not reside in State B,
did not enroll in a teacher preparation
program in State B, and never intended
to seek initial certification or licensure
in State B would be included in the
denominator of the teacher placement
rate calculation.
Recognizing these types of issues, the
Department has determined that it is
appropriate to create an alternative
method for States to calculate
employment outcomes for teacher
preparation programs provided through
distance education. Specifically, we
have revised the definition of teacher
placement rate to allow States, in
calculating teacher placement rate for
teacher preparation programs provided
through distance education, to use the
total number of recent graduates who
have obtained initial certification or
licensure in the State during the three
preceding title II reporting years as the
denominator in their calculation instead
of the total number of recent graduates.
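A minimal sketch of the two denominators may help. Using hypothetical counts (the function and variable names are ours), it shows how the revised definition keeps a distance education program's placement rate from being deflated by graduates who never sought certification in the State:

```python
def placement_rate_standard(placed_in_state, recent_graduates_total):
    # Default denominator: every recent graduate of the program.
    return placed_in_state / recent_graduates_total

def placement_rate_distance_ed(placed_in_state, certified_in_state):
    # Alternative denominator for distance education programs: recent
    # graduates who obtained initial certification or licensure in the
    # State during the three preceding title II reporting years.
    return placed_in_state / certified_in_state

# Hypothetical program: 500 graduates nationwide, 60 certified in
# State A, 45 of whom are teaching there.
print(placement_rate_standard(45, 500))    # 0.09 -- artificially low
print(placement_rate_distance_ed(45, 60))  # 0.75 -- State A's own graduates
```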
Additionally, we believe it is
appropriate to give States greater
flexibility in assessing these outcomes,
and have added a new § 612.5(a)(2)(iv)
which allows States to assess teacher
placement rates differently for teacher
preparation programs provided through
distance education provided that the
differences in assessment are
transparent and result in similar levels
of accountability for all teacher
preparation programs.
Changes: We have added
§ 612.5(a)(2)(iv), which allows States to
assess teacher placement rates
differently for teacher preparation
programs provided through distance
education so long as the differences in
assessment are transparent and result in
similar levels of accountability.
Survey Outcomes (34 CFR 612.5(a)(3))
Comments: Several commenters
agreed that there is value in using
surveys of teacher preparation program
graduates and the administrators who
employ and supervise them to evaluate
the programs, with some commenters
noting that such surveys are already in
place. Some commenters expressed
concerns about the use of survey data as
part of a rating system with high-stakes
consequences for teacher preparation
programs. Some commenters felt that
States should have discretion about how
or even whether to incorporate survey
outcomes into an accountability system.
Other commenters suggested making
surveys one of a number of options that
States could elect to include in their
systems for evaluating the quality of
teacher preparation programs. Still other
commenters felt that, because surveys
are currently in place for the evaluation
of teacher preparation programs (for
example, through State, accrediting
agency, and institutional requirements),
Federal regulations requiring the use of
survey outcomes for this purpose would
be either duplicative or add unnecessary
burden if they differ from what
currently exists. One commenter stated
that Massachusetts is currently building
valid and reliable surveys of novice
teachers, recent graduates, employers,
and supervising practitioners on
educator preparation, and this work
exceeds the expectation of the proposed
rules. However, the commenter also was
concerned about the reliability, validity,
and feasibility of using survey outcomes
as an independent measure for assessing
teacher preparation program
performance. The commenter felt that
the proposed regulations do not specify
how States would report survey results
in a way that captures both qualitative
and quantitative data. The commenter
expressed doubt that aggregating survey
data into a single data point for
reporting purposes would convey
valuable information, and stated that
doing so would diminish the usefulness
of the survey data and could lead to
distorted conclusions.
In addition, commenters
recommended allowing institutions
themselves to conduct and report
annual survey data for teacher graduates
and employers, noting that a number of
institutions currently conduct well-honed, rigorous surveys of teacher
preparation program graduates and their
employers. Commenters were concerned
with the addition of a uniform State-level survey for assessing teacher
preparation programs, stating that it is
not possible to obtain high individual
response rates for two surveys
addressing the same area. Commenters
contended that, as a result, the extensive
longitudinal survey databases
established by some of the best teacher
education programs in the Nation will
be at risk, resulting in the potential loss
of the baseline data, the annual data,
and the continuous improvement
systems associated with these surveys
despite years of investment in them and
substantial demonstrated benefits.
Some commenters noted that it is
hard to predict how reliable the teacher
and employer surveys required by the
regulations would be as an indicator of
teacher preparation program quality,
since the proposed regulations do not
specify how these surveys would be
developed or whether they would be the
same across the State or States. In
addition, the commenters noted that it
is hard to predict how reliable the
surveys may be in capturing teacher and
employer perceptions of how
adequately prepared teachers are since
these surveys do not exist in most
places and would have to be created.
Commenters also stated that survey data
will need to be standardized for all of
a State’s institutions, which will likely
result in a significant cost to States.
Some commenters stated that, in lieu
of surveys, States should be allowed to
create preparation program-school
system partnerships that provide for
joint design and administration of the
preparation program. They claimed
when local school systems and
preparation programs jointly design and
oversee the preparation program,
surveys are unnecessary because the
partnership creates one preparation
program entity that is responsible for
the quality of preparation and
satisfaction of district and school
leaders.
Discussion: As we stressed in the
NPRM, many new teachers report
entering the profession feeling
unprepared for classroom realities.
Since teacher preparation programs
have responsibility for preparing
teachers for these classroom realities,
we believe that asking novice teachers
whether they feel prepared to teach, and
asking those who supervise them
whether they feel those novice teachers
are prepared to teach, generate results
that are necessary components in any
State’s process of assessing the level of
a teacher preparation program’s
performance. Moreover, while not all States have experience employing surveys to determine program effectiveness, we believe that their use for this purpose has been well
established. As noted in the NPRM, two
major national organizations focused on
teacher preparation and others in the
higher education world are now
incorporating this kind of survey data as
an indicator of program quality (see 79
FR 71840).
We share the belief of these
organizations that a novice teacher’s
perception, and that of his or her
employer, of the teacher’s readiness and
capability during the first year of teaching
are key indicators of that individual’s
academic knowledge and teaching skills
as well as whether his or her
preparation program is training teachers
well. In addition, aside from wanting to
ensure that what States report about
each program’s level of performance is
reasonable, a major byproduct of the
regulations is that they can ensure that
States have accurate information on the
quality of teacher preparation programs
so that they and the programs can make
improvements where needed and
recognize excellence where it exists.
Regarding commenters' concerns
about the validity and reliability of the
use of survey results to help assess
program performance, we first reference
our general discussion of the issue in
response to public comment on
Indicators a State Must Use to Report on
Teacher Preparation Programs in the
State Report Card (34 CFR 612.5(a)).
Beyond this, it plainly is important
that States develop procedures to enable
teachers’ and employers’ perceptions to
be appropriately used and have the
desired impacts, and at the same time to
enable States to use survey results in
ways that treat all programs fairly. To do
so, we strongly encourage States to
standardize their use of surveys so that
for novice teachers who are similarly
situated, the surveys seek common information
from them and their employers. We are
confident that, in consultation with key
stakeholders as provided for in
§ 612.4(c)(1), States will be able to
develop a standardized, unbiased, and
reliable set of survey questions, or
ensure that IHE surveys meet the same
standard. This goal would be very
difficult to achieve, however, if States
relied on existing surveys (unless
modified appropriately) whose
questions vary in content and thus
solicit different information and
responses. Of course, it is likely that
many strong surveys already exist and
are in use, and we encourage States to
consider using such an existing survey
so long as it comports with § 612.5(a)(3).
Where a State finds an existing survey
of novice teachers and their employers
to be adequate, using it will avoid the
cost and time of preparing another, and
to the extent possible, prevent the need
for teachers and employers to complete
more than one survey, which
commenters reasonably would like to
avoid. Concerns about the cost and
burden of implementing teacher and
employer surveys are discussed further
with the next set of comments on this
section.
We note that States have the
discretion to determine how they will
publicly post the results of surveys and
how they will aggregate the results
associated with teachers from each
program for use as an indicator of that
program’s performance. We encourage
States to report survey results
disaggregated by question (as is done,
for example, by Ohio 51 and North
Carolina 52), as we believe this
information would be particularly
useful for prospective teachers in
evaluating the strengths of different
teacher preparation programs. At some
point, however, States must identify any
programs that are low-performing or at-risk of being low-performing, and to
accomplish this they will need to
aggregate quantitative and qualitative
survey responses in some way, in a
method developed in consultation with
key stakeholders as provided for in
§ 612.4(c)(1).
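As one hypothetical illustration of such aggregation (not a method the regulations prescribe), the following Python sketch reports invented Likert-scale results disaggregated by question and then collapses them into a single program-level data point:

```python
from statistics import mean

# Invented 1-5 Likert responses from novice teachers about one program;
# actual instruments and scales are left to each State and its
# stakeholders under § 612.4(c)(1).
responses = {
    "prepared_to_manage_classroom": [4, 5, 3, 4],
    "prepared_to_teach_content":    [5, 4, 4, 5],
    "prepared_to_assess_students":  [3, 3, 4, 2],
}

# Report disaggregated by question (as Ohio and North Carolina do)...
by_question = {q: mean(r) for q, r in responses.items()}

# ...then collapse to one data point when identifying low-performing
# or at-risk programs.
overall = mean(by_question.values())

for q, m in by_question.items():
    print(f"{q}: {m:.2f}")
print(f"overall survey outcome: {overall:.2f}")
```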
Like those who commented, we
believe that partnerships between
teacher preparation programs and local
school systems have great value in
improving the transition of individuals
whom teacher preparation programs
train to the classroom and a novice
teacher’s overall effectiveness. However,
these partnerships cannot replace
survey results as an indicator of the
program’s performance.
Changes: None.
Comments: Commenters suggested
that the Department consider options for
reducing the cost and burden of
implementation, such as clarifying that
States would not have to survey 100
percent of novice teachers or permitting
States to conduct surveys less frequently
than every year.
Commenters stated that, if used as
expected for comparability purposes,
the survey would likely need to be
designed by and conducted through a
third-party agency with professional
credentials in survey design and survey
administration. They stated that
sampling errors and various forms of
bias can easily skew survey results and
the survey would need to be managed
by a professional third-party group,
which would likely be a significant cost
to States.
One commenter recommended that a
national training and technical
assistance center be established to build
data capacity, consistency, and quality
among States and educator preparation
providers to support scalable
continuous improvement and program
quality in teacher preparation. In
support of this recommendation, the
commenter, an accreditor of education
51 See, for example: 2013 Educator Preparation Performance Report, Adolescence to Young Adult (7–12) Integrated Mathematics, Ohio State University (2013). Retrieved from https://regents.ohio.gov/educator-accountability/performance-report/2013/OhioStateUniversity/OHSU_IntegratedMathematics.pdf.
52 See UNC Educator Quality Dashboard. Retrieved from https://tqdashboard.northcarolina.edu/performanceemployment/.
preparation providers, stated that, based
on its analysis of its first annual
collection of outcome data from
education preparation providers, and its
follow-up survey of education
preparation providers, the availability of
survey outcomes data differs by survey
type. The commenter noted that while
714 teacher preparation program
providers reported that they have access
to completer survey data, 250 providers
reported that they did not have access.
In addition, the commenter noted that
teacher preparation program providers
indicated that there were many
challenges in reporting employment
status, including State data systems as
well as programs that export completers
across the nation or internationally.
Discussion: To obtain the most
comprehensive feedback possible, it is
important for States to survey all novice
teachers who are employed as teachers
in their first year of teaching and their
employers. This is because feedback
from novice teachers is one indicator of
how successfully a preparation program
imparts knowledge of content and
academic skills, and survey results from
only a sample may introduce
unnecessary opportunities for error and
increased cost and burden. There is no
established n-size at which a
sample is guaranteed to be
representative, but rather, statistical
calculations must be made to verify that
the sample is representative of the
characteristics of program completers or
participants. While drawing a larger
sample often increases the likelihood
that it will be representative, we believe
that for nearly all programs, a
representative sample will not be
substantially smaller than the total
population of completers. Therefore, we
do not believe that there is a meaningful
advantage to undertaking the analysis
required to draw a representative
sample. Furthermore, we believe that
any potential advantage does not
outweigh the potential for error that
could be introduced by States or
programs that unwittingly draw a biased
sample, or report that their sample is
representative, when in fact it is not. As
with student learning outcomes and
employment outcomes, we have
clarified in § 612.5(a)(3)(ii) that a State
may exclude from its calculations of a
program’s survey outcomes those survey
outcomes for all novice teachers who
have taken teaching positions in private
schools so long as the State uses a
consistent approach to assess and report
on all of the teacher preparation
programs in the State.
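A standard sample-size calculation illustrates the point about sampling. The sketch below applies Cochran's formula with a finite population correction, under illustrative assumptions of 95 percent confidence and a 5 percent margin of error (parameters the regulations do not specify); for cohort sizes typical of individual programs, the required representative sample approaches a census:

```python
import math

def required_sample(N: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample-size formula with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # ~384 for an infinite population
    return math.ceil(n0 / (1 + (n0 - 1) / N))

for cohort in (25, 50, 200):
    n = required_sample(cohort)
    print(f"cohort of {cohort:3d}: sample of {n:3d} ({n / cohort:.0%} of completers)")
# cohort of  25: sample of  24 (96% of completers)
# cohort of  50: sample of  45 (90% of completers)
# cohort of 200: sample of 132 (66% of completers)
```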
We note that in ensuring that the
required surveys are reasonable and
appropriate, States have some control
over the cost of, and the time necessary
for, implementing the surveys. Through
consultation with their stakeholders (see
§ 612.4(c)), they determine the number
and type of questions in each survey,
and the method of dissemination and
collection. However, we believe that it
is important that teacher and employer
surveys be conducted at least annually.
Section 207(a) of the HEA requires that
States annually identify teacher
preparation programs that are low-performing or at-risk of being low-performing. To implement this
requirement, we strongly believe that
States need to use data from the year
being evaluated to identify those
programs. If data from past years were
used for annual evaluations, it would
impede low-performing programs from
seeing the benefits of their
improvement, tend to enable
deteriorating programs to rest on their
laurels, and prevent prospective
teachers and employers and the public
at large from seeing the program’s actual
level of performance. Moreover, because
the regulations require these surveys
only of novice teachers in their first year
of teaching, the commenter’s proposal to
collect survey outcomes less than
annually would mean that entire
cohorts of graduates would not be
providing their assessment of the
quality of their preparation program.
In considering the comment, we
realized that while we estimated costs of
reporting all indicators of academic
content knowledge and teaching skills,
including survey outcomes, on an
annual basis, the regulations did not
adequately clarify the need to collect
and report data related to each indicator
annually. Therefore, we have revised
§ 612.4(b)(2)(i) to require that data for
each indicator be provided annually for
the most recent title II reporting year.
Further discussion regarding the cost
and burden of implementing teacher
and employer surveys can be found in
the Discussion of Costs, Benefits, and
Transfers in the RIA section of this
document.
The regulations do not prescribe any
particular method for obtaining the
completed surveys, and States may
certainly work with their teacher
preparation programs and teacher
preparation entities to implement
effective ways to obtain survey results.
Beyond this, we expect that States will
seek and employ the assistance that they
need to develop, implement, and
manage teacher and employer surveys
as they see fit. We expect that States
will ensure the validity and reliability of
survey outcomes—including how to
address responder bias—when they
establish their procedures for assessing
and reporting the performance of each
teacher preparation program with a
representative group of stakeholders, as
is required under § 612.4(c)(1)(i). The
regulations do not specify the process
States must use to develop, implement,
or manage their employer surveys, so
whether they choose to use third-party
entities to help them do so is up to
them.
Finally, we believe it is important for
the Department to work with States and
teacher preparation programs across the
nation to improve those programs, and
we look forward to engaging in
continuing dialogue about how this can
be done and what the appropriate role
of the Department should be. However,
the commenters’ request for a national
training and technical assistance center
to support scalable continuous
improvement and to improve program
quality is outside the scope of this
regulation—which is focused on the
States’ use of indicators of academic
content knowledge and teaching skills
in their processes of identifying those
programs that are low-performing, or at-risk of being low-performing, and other
matters related to reporting under the
title II reporting system.
Changes: We have added
§ 612.5(a)(3)(ii) to clarify that a State
may exclude from its calculations of a
program’s survey outcomes those for
novice teachers who take teaching
positions in private schools so long as
the State uses a consistent approach to
assess and report on all of the teacher
preparation programs in the State. In
addition, we have revised § 612.4(b)(2)(i)
to provide that data for each of the
indicators identified in § 612.5 is to be
for the most recent title II reporting year.
Comments: Commenters also
expressed specific concerns about
response bias on surveys, such as the
belief that teacher surveys often end up
providing information about the
personal likes or dislikes of the
respondent that can be attributed to
issues not related to program
effectiveness. Commenters stated that
surveys can be useful tools for the
evaluation of programs and methods,
but believed the use of surveys in a
ratings scheme is highly problematic
given how susceptible they are to what
some commenters referred to as
‘‘political manipulation.’’ In addition,
commenters stated that surveys of
employer satisfaction may be
substantially biased by the relationship
of school principals to the teacher
preparation program. Commenters felt
that principals who are graduates of
programs at specific institutions are
likely to have a positive bias toward
teachers they hire from those
institutions. Commenters also believed
that teacher preparation programs
unaffiliated with the educational
leadership at the school will be
disadvantaged by comparison.
Commenters also felt that two of our
suggestions in the NPRM to ensure
completion of surveys—that States
consider using commercially available
survey software or that teachers be
required to complete a survey before
they can access their class rosters—raise
tremendous questions about the security
of student data and the sharing of
identifying information with
commercial entities.
Discussion: We expect that States will
ensure the validity and reliability of
survey outcomes, including how to
address responder bias and avoid
‘‘political manipulation’’ and like
problems when they establish their
procedures for assessing and reporting
the performance of each teacher
preparation program with a
representative group of stakeholders, as
is required under § 612.4(c)(1)(i).
While it may be true that responder
bias could impact any survey data, we
expect that the variety and number of
responses from novice teachers
employed at different schools and
within different school districts will
ensure that such bias will not
substantially affect overall survey
results.
There is no reason student data
should ever be captured in any survey
results, even if commercially available
software is used or teachers are required
to complete a survey before they can
access and verify their class rosters.
Commenters did not identify any
particular concerns related to State or
Federal privacy laws, and we do not
understand what they might be. That
being said, we fully expect States will
design their survey procedures in
keeping with requirements of any
applicable privacy laws.
Changes: None.
Comments: Some commenters
expressed concerns with the effect that
a low response rate would have on the
use of survey data as an indicator of
teacher preparation program quality.
Commenters noted that obtaining
responses to teacher and employer
surveys can be quite burdensome due to
the difficulty in tracking graduates and
identifying their employers. Moreover,
commenters stated that obtaining their
responses is frequently unsuccessful.
Some commenters noted that, even with
aggressive follow-up, it would be
difficult to obtain a sufficient number of
responses to warrant using them in
high-stakes decision making about
program quality. Some commenters felt
that the regulations should offer
alternatives or otherwise address what
happens if an institution is unable to
secure sufficient survey responses.
One commenter shared that, since
2007, the Illinois Association of Deans
of Public Colleges of Education has
conducted graduate surveys of new
teachers from the twelve Illinois public
universities, by mailing surveys to new
teachers and their employers. The
response rate for new teachers has been
extremely low (44.2 percent for the 2012
survey and 22.6 percent for the 2013
survey). The supervisor response has
been higher, but still insufficient,
according to the commenter, for the
purpose of rating programs (65.3 percent
for the 2012 survey and 40.5 percent for
the 2013 survey). In addition, the
commenter stated that some data from
these surveys indicate differences in the
responses provided by new teachers and
their supervisors. The commenter felt
that the low response rate is
compounded when trying to find
matched pairs of teachers and
supervisors. Using results from an
institution’s new teacher survey data,
the commenter was only able to identify
29 out of 104 possible matched pairs in
2012 and 11 out of 106 possible
matched pairs in 2013.
One commenter from an IHE stated
that the institution’s return rate on
graduate surveys over the past 24 years
has been 10 to 24 percent, which they
stated is in line with national response
rates. While the institution’s last survey
of 50 school principals had a 50 percent
return rate, the commenter noted that
her institution only surveys those
school divisions which they know
regularly hire its graduates because it
does not have a source from which it
can obtain actual employment
information for all graduates. According
to the commenter, a statewide process
that better ensures that all school
administrators provide feedback would
be very helpful, but could also be very
burdensome for the schools.
Another commenter noted that the
response rate from the institution’s
graduates increased significantly when
the questionnaire went out via email,
rather than through the United States
Postal Service; however, the response
rate from school district administrators
remained dismal, no matter what format
was used—mail, email, Facebook,
Instagram, SurveyMonkey, etc. One
commenter added that defaulting back
to the position of having teachers
complete surveys during their school
days, and thus being yet another
imposition on content time in the
classroom, was not a good alternative to
address low response rates. Commenters
saw an important Federal role in
accurately tracking program graduates
across State boundaries.
Discussion: We agree that low
response rates can affect the validity
and reliability of survey outcomes as an
indicator of program performance.
While we are not sure why States would
necessarily need to have matched pairs of
surveys from novice teachers and their
employers as long as they achieve what
the State and its consultative group
determine to be a sufficient response
rate, we expect that States will work to
develop procedures that will promote
adequate response rates in their
consultation with stakeholders, as
required under § 612.4(c)(1)(i)). We also
expect that States will use survey data
received for the initial pilot reporting
year (2017–2018), when States are not
required to identify program
performance, to adjust their procedures,
address insufficient response rates, and
address other issues affecting validity
and reliability of survey results. We also
note that since States, working with
their stakeholders, may determine how
to weight the various indicators and
criteria they use to come up with a
program’s overall level of performance,
they also have the means to address
survey response rates that they deem too
low to provide any meaningful indicator
of program quality.
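As a purely hypothetical sketch of how a State might exercise that discretion, the following code combines invented indicator scores under State-chosen weights and zeroes out a survey indicator whose response rate falls below a State-chosen threshold; none of the names, weights, or thresholds come from the regulations:

```python
# Invented indicator scores (scaled 0-100), weights, and threshold;
# § 612.4(c) leaves all of these choices to each State and its
# stakeholders.

def overall_score(indicators: dict, weights: dict,
                  survey_response_rate: float,
                  min_response_rate: float = 0.5) -> float:
    w = dict(weights)
    if survey_response_rate < min_response_rate:
        # Drop a survey indicator the State deems too unreliable and
        # renormalize the remaining weights.
        w["survey_outcomes"] = 0.0
    return sum(indicators[k] * w[k] for k in w) / sum(w.values())

indicators = {"student_learning": 70.0, "employment": 85.0, "survey_outcomes": 40.0}
weights = {"student_learning": 0.4, "employment": 0.4, "survey_outcomes": 0.2}

print(overall_score(indicators, weights, survey_response_rate=0.8))  # 70.0
print(overall_score(indicators, weights, survey_response_rate=0.3))  # 77.5
```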
We believe that States can increase
their response rate by incorporating the
surveys into other structures, for
example, having LEAs disseminate the
survey at various points throughout
teachers’ induction period. Surveys may
also be made part of required end-of-year closeout activities for teachers and
their supervisors. As the regulations
require States to survey only those
teachers who are teaching in public
schools and the public school
employees who employ them (see the
discussion of the definition of a novice
teacher under § 612.2(d)), we believe
that approaches such as these will
enable States to achieve reasonably high
response rates and, thus, valid survey
results.
Finally, before the Department would
consider working to develop a system,
like one the commenter suggested, for
tracking program graduates across State
boundaries, we would want to consult
with States, IHEs and other
stakeholders.
Changes: None.
Specialized Accreditation (34 CFR
612.5(a)(4)(i))
Comments: Commenters were both
supportive of and opposed to the
proposed provision regarding
specialized accreditation. Some
commenters noted that CAEP, the new
specialized accreditor for teacher
preparation programs, is not an
accreditor currently recognized by the
Department, which creates the
possibility that there would be no
federally-recognized specialized
accreditor for teacher preparation
programs. Commenters believed that the
inclusion of this metric is premature
without an organization, which the
Secretary recognizes, that can confer
accreditation on these programs. Other
commenters argued that this provision
inserts the Federal government into the
State program approval process by
mandating specific requirements that a
State must consider when approving
teacher preparation programs within its
jurisdiction. They further stated that,
although the Department references
CAEP and its standards for what they
referred to as a justification for some of
the mandated indicators, CAEP does not
accredit at the program level. They
noted that, in fact, no accreditor
provides accreditation specifically to
individual teacher preparation
programs; CAEP does so only to entities
that offer these programs.
Commenters raised an additional
concern that the Department is seeking
to implicitly mandate national
accreditation, which would result in
increased costs; and that the proposed
regulations set a disturbing precedent by
effectively mandating specialized
accreditation as a requirement for
demonstrating program quality. Some
commenters were concerned that with
CAEP as the only national accreditor for
teacher preparation, variety of and
access to national accreditation would
be limited and controlled.
Other commenters expressed concern
that our proposal to offer each State the
option of presenting an assurance that
the program is accredited by a
specialized accrediting agency would, at
best, make the specialized accreditor an
agent of the Federal government, and at
worst, effectively mandate specialized
accreditation by CAEP. The commenters
argued instead that professional
accreditation should remain a
voluntary, independent process based
on evolving standards of the profession.
Some commenters asked that the
requirement for State reporting on
accreditation or program characteristics
in § 612.5(a)(4)(i) and (ii) be removed
because these are duplicative of existing
State efforts with no clear benefit to
understanding whether a teacher
preparation program can effectively
prepare candidates for classroom
success, and because the proposed
regulations are redundant to work being
done for State and national
accreditation. Other commenters
recommended that States should not be
required to adhere to one national
system because absent a floor for
compliance purposes, States may build
better accreditation systems. One
commenter proposed that, as an
alternative to program accreditation,
States be allowed to include other
indicators predictive of a teacher’s effect
on student performance, such as
evidence of the effective use of positive
behavioral interventions and supports
on the basis of the aggregate number of
suspensions and expulsions written by
educators from each teacher preparation
program.
Some commenters argued that
stronger standards are essential to
improving teacher preparation
programs, and providing some gradation
of ratings of how well preparation
programs are doing would provide
useful information to the prospective
candidates, hiring districts, and the
teacher preparation programs the IRCs
and SRCs are intended to inform. They
noted that as long as CAEP continued
with these accreditation levels, rather
than lumping them all together under a
high-level assurance, indicators of these
levels should be reflected in the rating
system. They also stated that where
States do not require accreditation,
States should attempt to assess the level
at which programs are meeting the
additional criteria.
Some commenters argued that
accreditation alone is sufficient to hold
teacher preparation programs
accountable. Other commenters stated
their agreement that active participation
in professional accreditation should be
recognized as an indicator of program
quality. One commenter supported the
alignment between the proposed
regulations and CAEP’s annual
outcomes-based reporting measures, but
was concerned that the regulations as
proposed would spawn 50 separate
State reporting systems, data
definitions, and processes for quality
assurance. The commenter supported
incentivizing accreditation and holding
all teacher preparation programs to the
same standards and reporting
requirements, and stated that CAEP’s
new accreditation process would
achieve the goals of the proposed rules
on a national level, while removing
burden from the States. The commenter
expressed concern about the
requirement that the Secretary recognize
the specialized accrediting agency, and
the statement in the preamble of the
NPRM that alternative route programs
are often not eligible for specialized
accreditation.
The commenter also indicated that
current input- and compliance-based
system requirements within the
Department’s recognition process for
accreditors run counter to the
overarching goal of providing
meaningful data and feedback loops for
continuous improvement. The
commenter noted that CAEP was
launched to bring all teacher
preparation programs, whether
alternative, higher education based, or
online-based, into the fold of
accreditation. The commenter
recommended that specialized
accrediting agencies recognized by the
Council for Higher Education
Accreditation (CHEA) should be
allowed to serve as a State indicator for
program quality.
Commenters also noted that no
definition of specialized accreditation
was proposed, and requested that we
include a definition of this term. One
commenter recommended that a
definition of specialized accreditation
include the criteria that the Secretary
would use to recognize an agency for
the accreditation of professional teacher
preparation programs, and that one of
the criteria for a specialized agency
should be the inclusion of alternative
certification programs as eligible
professional teacher preparation
programs.
Discussion: First, it is important to
note that these regulations do not set
requirements for States’ teacher
preparation program approval
processes. The regulations establish
requirements for States’ reporting to the
Secretary on teacher preparation
programs in their States, and
specifically their identification of
programs determined to be low-performing or at-risk of being low-performing, and the basis for those
determinations.
Also, upon review of the comments,
we realized that imprecise wording in
the proposed regulations likely led to
misunderstanding of their intent regarding
program-level accreditation. Our intent
was simple: to allow a State, if it can certify that the entity offering the teacher preparation program has been accredited by a teacher preparation program accreditor recognized by the Secretary, to rely on that accreditation to
demonstrate that the program produces
teacher candidates with the basic
qualifications identified in
§ 612.5(a)(4)(ii) rather than having to
separately report on those
qualifications. The proposed regulations
would not have required separate
accreditation of each individual
program offered by an entity, but we
have revised § 612.5(a)(4)(i) to better
reflect this intent. In response to the
concern about whether an entity that
administers an alternative route
program can receive such accreditation,
the entity can apply for CAEP
accreditation, as one of the commenters
noted.
As summarized above, commenters
presented opposing views of the role of national accreditation (through an accreditor recognized by the Secretary) in the regulations: opinions that the inclusion of
national accreditation in the regulations
represented an unauthorized mandate
for accreditation on the one hand, and
an implication that accreditation alone
was sufficient, thus making other
options or further indicators
unnecessary, on the other. Similarly,
some commenters argued that the
regulations require too much
standardization across States (through
either accreditation or a consistent set of
broad indicators), while others argued
that the regulations either allow too
much variability among States (leading
to lack of comparability) or encourage
the duplicative effort of creating over 50
separate systems.
In the final regulations we seek to
balance these concerns. States are to
assess whether a program either has
Federally recognized accreditation
(§ 612.5(a)(4)(i)) or produces teacher
candidates with certain characteristics
(§ 612.5(a)(4)(ii)). Allowing States to
report and assess whether their teacher
preparation programs have specialized
accreditation or produce teacher
candidates with specific characteristics
is not a mandate that a program fulfill
either option, and it may eliminate or
reduce duplication of effort by the State.
If a State has an existing process to
assess the program characteristics in
§ 612.5(a)(4)(ii), it can use that process
rather than report on whether a program
has specialized accreditation;
conversely, if a State would like to
simply use accreditation by an agency
that evaluates factors in § 612.5(a)(4)(ii)
(whether federally recognized or not) to
fulfill this requirement, it may choose
to do so. We believe these factors do relate
to preparation of effective teachers,
which is reflected in standards and
expectations developed by the field,
including the CAEP standards. And
since accreditation remains a voluntary
process, we cannot rely on it alone for
transparency and accountability across
all programs.
We now address the commenters’
statement that there may be no federally
recognized accreditor for educator
preparation entities. If there is none,
and a State would like to use
accreditation by an agency whose
standards align with the elements listed
in § 612.5(a)(4)(ii) (whether federally
recognized or not) to fulfill the
requirements in § 612.5(a)(4)(ii), it may
do so. In fact, many States have worked
or are working with CAEP on
partnerships to align standards, data
collection, and processes.
As we summarized above, some
commenters requested that we include a
definition of specialized accreditation,
and that it include criteria the Secretary
would use to recognize an agency for
accreditation of teacher preparation
programs, and that one of the criteria
should be inclusion of alternative
certification programs as eligible
programs. While we appreciate these
comments, we believe they are outside
the scope of the proposed and final
regulations.
Finally, because teacher preparation
program oversight authority lies with
the States, we do not intend for the
regulations to require a single
approach—via accreditation or
otherwise—for all States to use in
assessing the characteristics of teacher
preparation programs. We do, however,
encourage States to work together in
designing data collection processes, in
order to reduce or share costs, learn
from one another, and allow greater
comparability across States.
In terms of the use of other specific
indicators (e.g., positive behavioral
interventions), we encourage interested
parties to bring these suggestions
forward to their States in the
stakeholder engagement process
required of all States under § 612.4(c).
As one commenter noted, the current
statutory recognition process for
accreditors is heavily input based, while
the emphasis of the regulations is on
outcomes. Any significant reorientation
of the accreditor recognition process
would require statutory change.
Nonetheless, given the rigor and general
acceptance of the Federal recognition
process, we believe that accreditation
only by a Federally recognized
accreditor be specifically assessed in
§ 612.5(a)(4)(i), rather than accreditors
recognized by outside agencies such as
CHEA. For programs not accredited by
a federally recognized accreditor, States
determine whether or to what degree a
program meets characteristics for the
alternative, § 612.5(a)(4)(ii).
Because the regulation provides for
use of State procedures as an alternative
to accreditation by a specialized accreditor recognized by
the Secretary, nothing in § 612.5(a)(4)
would mandate program accreditation
by CAEP or any other entity. Nor would
the regulation otherwise interfere in
what commenters argue should be a
voluntary, independent process based
on evolving standards of the profession.
Indeed, this provision does not require
any program accreditation at all.
Changes: We have revised
§ 612.5(a)(4)(i) to clarify that the
assessment of whether a program is
accredited by a specialized accreditor
could be fulfilled by assessing the
accreditation of the entity administering
teacher preparation programs, not by
accreditation of the individual programs
themselves.
Characteristics of Teacher Preparation
Programs (34 CFR 612.5(a)(4)(ii))
Comments: Multiple commenters
expressed opposition to this provision,
which would have States report whether
a program lacking specialized
accreditation has, under § 612.5(a)(4)(ii), certain basic program characteristics.
They stated that it is Federal overreach
into areas of State or institutional
control. For example, while commenters
raised the issue in other contexts, one
commenter noted that entrance and exit
qualifications of teacher candidates
have traditionally been the right of the
institution to determine when
considering requirements of State
approval of teacher preparation
programs. Other commenters expressed
concern about Federal involvement in
State and accrediting agency approval of
teacher preparation programs, in which
they stated that the Federal government
should have limited involvement.
Other commenters expressed concern
about the consequences of creating
rigorous teacher candidate entry and
exit qualifications. Some commenters
expressed concerns that this
requirement does not take into account
the unique missions of the institutions
and will have a disproportionate and
negative impact on MSIs, which may
see decreases in eligible teacher
preparation program candidates by
denying entry to candidates who do not
meet entry requirements established by
this provision. These commenters were
concerned that rigorous entrance
requirements could decrease diversity
in the teaching profession.
Commenters also expressed general
opposition to requiring rigorous entry
and exit qualifications because they felt
that the general assurance of entry and
exit requirements did little to provide
transparency or differentiate programs
by program quality. Therefore, the
provisions were unneeded, and only
added to the confusion and bureaucracy
of these requirements.
Other commenters noted that a lack of
clinical experience similar to the
teaching environment in which they
begin their careers results in a struggle
for novice teachers, limiting their ability
to meet the needs of their students in
their early years in the classroom. They
suggested that the regulations include
‘‘teaching placement,’’ for example, or
‘‘produces teacher candidates with
content and pedagogical knowledge and
quality clinical preparation relevant to
their teaching placement, who have met
rigorous teacher candidate entry and
exit qualifications pursuant’’ to increase
the skills and knowledge of teacher
preparation program completers who
are being placed in the classroom as a
teacher.
Discussion: While some commenters
expressed concern with Federal
overreach, as noted in the earlier
discussion of § 612.5(a)(4)(i) these
regulations do not set any requirements
for States' processes for approving teacher preparation
programs; they establish requirements
for State reporting to the Secretary on
teacher preparation programs and how
they determined whether any given
program was low-performing or at-risk
of being low-performing. In addition, a
State may report whether institutions
have fulfilled requirements in
§ 612.5(a)(4) through one of two options:
Accreditation by an accreditor
recognized by the Secretary or,
consistent with § 612.5(a)(4)(ii),
showing that the program produces
teacher candidates (1) with content and
pedagogical knowledge and quality
clinical preparation, and (2) who have
met rigorous exit qualifications
(including, as we observe in response to
the comments summarized immediately
above, by being accredited by an agency
whose standards align with the
elements listed in § 612.5(a)(4)(ii)).
Thus, the regulations do not require that
programs produce teacher candidates
with any Federally prescribed rigorous
exit requirements or quality clinical
preparation.
Rather, as discussed in our response
to public comment in the section on
Specialized Accreditation, States have
the authority to use their own process
to determine whether a program has
these characteristics. We feel that this
authority provides ample flexibility for
State discretion in how to treat this
indicator in assessing overall program
performance and the information about
each program that could help that
program in areas of program design.
Moreover, the basic elements identified
in § 612.5(a)(4)(ii) reflect
recommendations of the non-Federal
negotiators, and we agree with them that
the presence or absence of these
elements should impact the overall level
of a teacher preparation program’s
performance.
The earlier discussion of ‘‘rigorous
entry and exit requirements’’ in our
discussion of public comment on
Definitions addresses the comments
regarding rigorous entry requirements.
We have revised § 612.5(a)(4)(ii)(C)
accordingly to focus solely on rigorous
exit standards. As mentioned in that
previous discussion, the Department
also encourages all States to include
diversity of program graduates as an
indicator in their performance rating
systems, to recognize those programs
that are addressing this critical need in
the teaching workforce.
Ensuring that the program produces
teacher candidates who have met
rigorous exit qualifications alone will
not provide necessary transparency or
differentiation of program quality.
However, having States report data on
the full set of indicators for each
program will provide significant and
useful information, and explain the
basis for a State’s determination that a
particular program is or is not low-performing or at-risk of being low-performing.
We agree with the importance of high
quality clinical experience. However, it
is unrealistic to require programs to
ensure that each candidate’s clinical
experience is directly relevant to his or
her future, as yet undetermined,
teaching placement.
Changes: We have revised
§ 612.5(a)(4)(ii)(C) to require a State to
assess whether the teacher preparation
program produces teacher candidates
who have met rigorous teacher
candidate exit qualifications. We have
removed the proposed requirement that
States assess whether teacher candidates
meet rigorous entry requirements.
Comments: None.
Discussion: Under § 612.5(a)(4) States
must annually report whether a program
is administered by an entity that is
accredited by a specialized accrediting
agency or produces candidates with the
same knowledge, preparation, and
qualifications. Upon review of the
comments and the language of
§ 612.5(a)(4), we realized that the
proposed lead stem to § 612.5(a)(4)(ii),
‘‘consistent with § 612.4(b)(3)(i)(B)’’, is
not needed since proposed § 612.4(b)(3)(i)(B) has been removed.
Changes: We have removed the
phrase ‘‘consistent with
§ 612.4(b)(3)(i)(B)’’ from § 612.5(a)(4)(ii).
Other Indicators of a Teacher’s Effect on
Student Performance (34 CFR 612.5(b))
Comments: Multiple commenters
provided examples of other indicators
that may be predictive of a teacher’s
effect on student performance and
requested the Department to include
them. Commenters stated that a teacher
preparation program (by which we
assume the commenters meant ‘‘State’’)
should be required to report on the
extent to which each program meets
workforce demands in their State or
local area. Commenters argued this
would go further than just reporting job
placement, and inform the public about
how the program works with the local
school systems to prepare qualified
teacher candidates for likely positions.
Other commenters stated that, in
addition to assessments, students
should evaluate their own learning,
reiterating that this would be a more
well-rounded approach to assessing
student success. One commenter
recommended that the diversity of a
teacher preparation program’s students
should be a metric to assess teacher
preparation programs to ensure that
teacher preparation programs have
significant diversity in the teachers who
will be placed in the classroom.
Discussion: We acknowledge that a
State might find that other indicators
beyond those the regulations require,
including those recommended by the
commenters, could be used to provide
additional information on teacher
preparation program performance. The
regulations permit States to use (in
which case they need to report on)
additional indicators of academic
content knowledge and teaching skills
to assess program performance,
including other measures that assess the
effect of novice teachers on student
performance. In addition, as we have
previously noted, States also may apply
and report on other criteria they have
established for identifying which
teacher preparation programs are low-performing or at-risk of being low-performing.
In reviewing commenters’
suggestions, we realized that the term
‘‘predictive’’ in the phrase ‘‘predictive
of a teacher’s effect on student
performance’’ is inaccurate. The
additional measures States may use are
indicators of their academic content
knowledge and teaching skill, rather
than predictors of teacher performance.
We therefore are removing the word
‘‘predictive’’ from the regulations. If a
State uses other indicators of academic
content knowledge and teaching skills,
it must, as we had proposed, apply the
same indicators for all of its teacher
preparation programs to ensure
consistent evaluation of preparation
programs within the State.
Changes: We have removed the word
‘‘predictive’’ from § 612.5(b).
Comments: None.
Discussion: As we addressed in the
discussion of public comments on
Scope and Purpose (§ 612.1), we have
removed the proposed requirement that
in assessing the performance of each
teacher preparation program States
consider student learning outcomes ‘‘in
significant part.’’ In addition, as we
addressed in the discussion of public
comments on Requirements for State
Reporting on Characteristics of Teacher
Preparation Programs (§ 612.5(a)(4)(ii)),
we have removed rigorous entry
requirements from the characteristics of
teacher preparation programs whose
administering entities do not have
accreditation by an agency approved by
the Secretary. Proposed § 612.6(a)(1)
stated that States must use student
learning outcomes in significant part to
identify low-performing or at-risk
programs, and proposed § 612.6(b)
stated that the technical assistance that
a State must provide to low-performing
programs included technical assistance
in the form of information on assessing
the rigor of their entry requirements. We
have removed both phrases from the
final regulations.
Changes: The phrase ‘‘in significant
part’’ has been removed from
§ 612.6(a)(1), and ‘‘entry requirement
and’’ has been removed from § 612.6(b).
What must a State consider in
identifying low-performing teacher
preparation programs or at-risk teacher
preparation programs, and what actions
must a State take with respect to those
identified as low-performing? (34 CFR
612.6)
Comments: Some commenters
supported the requirement in § 612.6(b)
that at a minimum, a State must provide
technical assistance to low-performing
teacher preparation programs in the
State to help them improve their
performance. Commenters were
supportive of targeted technical
assistance because it has the possibility
of strengthening teacher preparation
programs and the proposed
requirements would allow States and
teacher preparation programs to focus
on continuous improvement and
particular areas of strength and need.
Commenters indicated that they were
pleased that the first step for a State
upon identifying a teacher preparation
program as at-risk or low-performing is
providing that program with technical
support, including sharing data from
specific indicators to be used to improve
instruction and clinical practice.
Commenters noted that States can help
bridge the gap between teacher
preparation programs and LEAs by
using that data to create supports for
those teachers whose needs were not
met by their program. Commenters
commended the examples of technical
assistance provided in the regulations.
Some commenters suggested
additional examples of technical
assistance to include in the regulations.
Commenters believed that technical
assistance could include: Training
teachers to serve as clinical faculty or
cooperating teachers using the National
Board for Professional Teaching
Standards; integrating models of
accomplished practice into the
preparation program curriculum; and
assisting preparation programs to
provide richer clinical experiences.
Commenters also suggested including
first-year teacher mentoring programs
and peer networks as potential ways in
which a State could provide technical
assistance to low-performing programs.
One commenter noted that, in a recent
survey of educators, teachers cite
mentor programs in their first year of
teaching (90 percent) and peer networks
(84 percent) as the top ways to improve
teacher training programs.
Commenters recommended that States
have the discretion to determine the
scope of the technical assistance, rather
than requiring that technical assistance
focus only on low-performing programs.
This would allow States to distribute
support as appropriate in an individual
context, and minimize the risk of
missing essential opportunities to
identify best practices from high-performing programs and support those programs that are best positioned
to be increasingly productive and
effective providers. Commenters
suggested that entities who administer
teacher preparation programs be
responsible for seeking and resourcing
improvement for their low-performing
programs.
Some commenters suggested that the
Federal government provide financial
assistance to States to facilitate the
provision of technical assistance to low-performing programs. Commenters
suggested that the Department make
competitive grants available to States to
distribute to low-performing programs
in support of program improvement.
Commenters also suggested that the
Federal government offer meaningful
incentives to help States design, test,
and share approaches to strengthening
weak programs and support research to
assess effective interventions, as it
would be difficult for States to offer the
required technical assistance because
State agencies have little experience and
few staff in this area. In addition,
commenters recommended that a
national training and technical
assistance center be established to build
data capacity, consistency, and quality
among States and teacher preparation
programs to support scalable continuous
improvement and program quality in
educator preparation.
Commenters recommended that, in
addition to a description of the
procedures used to assist low-performing programs as required by
section 207 of the HEA, States should be
required to describe in the SRC the
technical assistance they provide to
low-performing teacher preparation
programs in the last year. Commenters
suggested that this would shift the
information reported from descriptions
of processes to more detailed
information about real technical
assistance efforts, which could inform
technical assistance efforts in other
States.
Commenters suggested adding a
timeframe for States to provide the
technical assistance to low-performing
programs. Commenters suggested a
maximum of three months from the time
that the program is identified as low-performing because, while waiting for
the assistance, and in the early stages of
its implementation, the program will
continue to produce teacher candidates
of lower quality.
Commenters suggested that States
should be required to offer the
assistance of a team of well-recognized
scholars in teacher education and in the
education of diverse students in P–12
schools to assist in the assessment and
redesign of programs that are rated
below effective. Some commenters
noted that States with publicly
supported universities designated as
Historically Black Colleges and
Universities, Hispanic Serving
Institutions, and tribal institutions are
required to file with the Secretary a
supplemental report of equity in
funding and other support to these
institutions. Private and publicly
supported institutions in these
categories often lack the resources to
attract the most recognized scholars in
the field.
Discussion: The Department
appreciates the commenters’ support for
the requirement that States provide
technical assistance to improve the
performance of any teacher preparation
program in its State that has been
identified as low-performing.
We decline to adopt the
recommendations of commenters who
suggested that the regulations require
States to provide specific types of
technical assistance because we seek to
provide States with flexibility to design
technical assistance that is appropriate
for the circumstances of each low-performing program. States have the
discretion to implement technical
assistance in a variety of ways. The
regulations outline the minimum
requirements, and we encourage States
that wish to do more, such as providing
assistance to at-risk or other programs,
to do so. Furthermore, nothing in the
regulations prohibits States from
providing technical assistance to at-risk
programs in addition to low-performing
programs. Similarly, while we
encourage States to provide timely
assistance to low-performing programs,
we decline to prescribe a certain
timeframe so that States have the
flexibility to meet these requirements
according to their capacity. In the SRC,
States are required to provide a
description of the process used to
determine the kind of technical
assistance to provide to low-performing
programs and how such assistance is
administered.
The Department appreciates
comments requesting Federal guidance
and resources to support high-quality
technical assistance. We agree that such
activities could be beneficial. However,
the commenters’ suggestions that the
Department provide financial assistance
to States to facilitate their provision of
technical assistance, and to teacher
preparation programs to support their
improvement, and request for national
technical assistance centers to support
scalable continuous improvement and
to improve program quality, are outside
the scope of this regulation, which is
focused on reporting. The Department
will consider ways to help States
implement this and other provisions of
the regulations, including by facilitating
the sharing of best practices across
States.
Changes: None.
Subpart C—Consequences of
Withdrawal of State Approval or
Financial Support
What are the consequences for a low-performing teacher preparation program
that loses the State’s approval or the
State’s financial support? (34 CFR
612.7(a))
Comments: Multiple commenters
opposed the consequences for a low-performing teacher preparation program
based on their opinion that the loss of
TEACH Grant eligibility will result in
decreased access to higher education for
students. Commenters noted that, as
institutions become unable to award
TEACH Grants to students in low-performing teacher preparation
programs, students attending those
programs would also lose access to
TEACH Grant funds and thereby be
responsible for the additional costs that
the financial aid program normally
would have covered. If low-income
students are required to cover additional
amounts of their tuition, the
commenters asserted, they will be less
likely to continue their education or to
enroll in the first place, if they are
prospective students. The commenters
noted that this would
disproportionately impact low-income
and minority teacher preparation
students and decrease the enrollment
for those populations.
A number of commenters expressed
their concerns about the impacts of
losing financial aid eligibility, and
stated that decreasing financial aid for
prospective teachers would negatively
impact the number of teachers joining
the profession. As costs for higher
education continue to increase and less
financial aid is available, prospective
teacher preparation program students
may decide not to enroll in a teacher
preparation program, and instead
pursue other fields that may offer other
financial incentives to offset the costs
associated with college. The
commenters believed this would result
in fewer teachers entering the field
because fewer students would begin and
complete teacher preparation programs,
thus increasing teacher shortages. Other
commenters were concerned about how
performance results of teacher
preparation programs may impact job
outcomes for students who attended
those programs in the past as their
ability to obtain jobs may be impacted
by the rating of a program they have not
attended recently. The commenters
noted that being rated as low-performing would likely reduce the
ability of a program to recruit, enroll,
and retain students, which would
translate into fewer teachers being
available for teaching positions. Others
stated that there will be a decrease in
the number of students who seek
certification in a high-need subject area
due to the link between TEACH Grant
eligibility and teacher preparation
program metrics. They believe this will
increase teacher shortages in areas that
have a shortage of qualified teachers.
Additional commenters believed that
publishing results attributable to an
individual teacher would raise privacy
concerns and further drive potential
teachers away from the field for fear
that their performance would be made
public.
Some commenters were specifically
concerned about the requirement that
low-performing programs be required to
provide transition support and remedial
services to students enrolled at the time
of termination of State support or
approval. The commenters noted that
low-performing programs are unlikely to
have the resources or capacity to
provide transitional support to students.
Discussion: As an initial matter, we
note that the requirements in § 612.7(a)
are drawn directly from section 207(b)
of the HEA, which provides that a
teacher preparation program from which
the State has withdrawn its approval or
financial support due to the State’s
identification of the program as low-performing may not, among other
things, accept or enroll any student who
receives title IV student aid. Section
207(b) of the HEA and § 612.7(a) do not
concern simply the consequences of a
program being rated as low-performing,
but rather the consequences associated
with a State’s withdrawal of the
approval of a program or the State’s
termination of its financial support
based on such a rating. Similarly,
section 207(b) of the HEA and § 612.7(a)
do not concern a program’s loss of
eligibility to participate in the TEACH
Grant program pursuant to part 686, but
rather the statutory prohibition on the
award of title IV student aid to students
enrolled in such a teacher preparation
program.
We disagree with the commenters that
the loss of TEACH Grant funds will
have a negative impact on affordability
of, and access to, teacher
preparation programs. A program that
loses its eligibility would be required to
provide transitional support, if
necessary, to students enrolled at the
institution at the time of termination of
financial support or withdrawal of
approval to assist students in finding
another teacher preparation program
that is eligible to enroll students
receiving title IV, HEA funds. By
providing transition services to
students, individuals who receive title
IV, HEA funds would be able to find
another program in which to use their
financial aid and continue in a teacher
preparation program in a manner that
will still address college affordability.
We also disagree with the commenters
who stated that low-performing
programs are unlikely to have the
resources to provide transitional
support to students. We believe that an
IHE with a low-performing teacher
preparation program will be offering
other programs that may not be
considered low-performing. As such, an
IHE will have resources to provide
transition services to students affected
by the teacher preparation program
being labeled as low-performing even if
the money does not come directly from
the teacher preparation program.
While teacher preparation program
labels may negatively impact job market
outcomes because low-performing
teacher preparation programs’ ability to
recruit and enroll future cohorts of
students would be negatively impacted
by the rating, we believe these labels
better serve the interests of students
who deserve to know the quality of the
program they may enroll in. As we have
explained, § 612.7 applies only to
programs that lose State approval or
financial support as a result of being
identified by the State as low-performing. It does not apply to every
program that is identified as low-performing. We believe that, while
providing information about the quality
of a program to a prospective student
may impact the student’s enrollment
decision, a student who wishes to
become a teacher will find and enroll in
a program that has not lost State
approval or State financial support. We
believe that providing quality consumer
information to prospective students will
allow them to make informed
enrollment decisions. Students who are
aware that a teacher preparation
program is not approved by the State
may reasonably choose not to enter that
program. Individuals who wish to enter
the teaching field will continue to find
programs that prepare them for the
workforce, while avoiding less effective
programs. By doing so, we believe, the
overall impact to the number of
individuals entering the field will be
minimal. Section 612.4(b) implements
protections and allowances for teacher
preparation programs with a program
size of fewer than 25 students, which
would help to protect against privacy
violations, but does not require sharing
information on individual teacher
effectiveness with the general public.
In addition, we believe that, as section
207(b) of the HEA requires, removing
title IV, HEA program eligibility from
low-performing teacher preparation
programs that lose State approval or
financial support as a result of the State
assessment will encourage individuals
to enroll in more successful teacher
preparation programs. This will keep
more prospective teachers enrolled and
will mitigate any negative impact on
teacher employment rates.
While these regulations specify that
the teacher placement rate and the
teacher retention rate be calculated
separately for high-need schools, no
requirements have been created to track
employment outcomes based on high-need subject areas. We believe that an
emphasis on high-need schools will
help focus on improving student
success across the board for students in
these schools. In addition, the
requirement to report performance at
the individual teacher preparation
program level will likely promote
reporting by high-need subjects as well.
Section 612.7(a) codifies statutory
requirements related to teacher
preparation programs that lose State
approval or State financial support, and
the Department does not have flexibility
to alter the language. This includes the
requirements for providing transitional
services to students enrolled. However,
we believe that many transition services
are already being offered by colleges and
universities, as well as through
community organizations focused on
student transition to higher education.
For example, help identifying potential
colleges, support in completing
admissions and financial aid
applications, disability support services,
remedial education, and career services
are all components of transition
services that most IHEs offer to some
degree to their student body.
The regulations do not dictate how an
institution must assist a student at the
time of termination of financial support
or withdrawal of approval by the
State. Transition
services may include helping a student
transfer to another program at the same
institution that still receives State
funding and State approval, or another
program at another institution. The
transition services offered by the
institution should be in the best interest
of the student and assist the student in
meeting their educational and
occupational goals. However, the
Department believes that teacher
preparation programs may be offering
these services through their staff already
and those services should not stop
because of the consequences of
withdrawal of State approval or
financial support.
Changes: None.
Institutional Requirements for
Institutions Administering a Teacher
Preparation Program That Has Lost State
Approval or Financial Support (34 CFR
612.7(b))
Comments: One commenter believed
that the Department should require
States to notify K–12 school officials in
the instance where a teacher preparation
program student is involved in clinical
practice at the school, noting that the K–
12 school would be impacted by the loss
of State support for the teacher
preparation program.
Discussion: We decline to require
schools and districts to be notified
directly when a teacher preparation
program of a student teacher is assessed
as low-performing. While that
information would be available to the
public, we believe that directly
notifying school officials may unfairly
paint students within that program as
ineffective. A student enrolled in a low-performing teacher preparation program
may be an effective and successful
teacher and we believe that notifying
school officials directly may influence
the school officials to believe the
student teacher would be a poor
performer even though there would be
no evidence about the individual
supporting this assumption.
Additionally, we intend § 612.7(b) to
focus exclusively on the title IV, HEA
consequences to the teacher preparation
program that loses State approval or
financial support and on the students
enrolled in those programs. This
subsection describes the procedure that
a program must undertake to ensure that
students are informed of the loss of
State approval or financial support.
Changes: None.
How does a low-performing teacher
preparation program regain eligibility to
accept or enroll students receiving title
IV, HEA program funds after a loss of
the State’s approval or the State’s
financial support? (34 CFR 612.8(a))
Comments: One commenter noted
that even if a State has reinstated
funds and recognized improved
performance, the program
would have to wait for the Department’s
approval to be fully reinstated. The
commenter stated that this would be
Federal overreach into State jurisdiction
and decision-making. Additionally, the
commenter noted that the regulations
appear to make access to title IV, HEA
funds for an entire institution
contingent on the ratings of teacher
preparation programs.
Another commenter noted that some
programs might not ever regain
authorization to prepare teachers if they
must transfer students to other programs
since there will not be any future
student outcomes associated with the
recent graduates of the low-performing
programs.
Discussion: We decline to adopt the
commenter's suggestion that the
Department not require a low-performing
teacher preparation program that
previously lost its eligibility to accept or
enroll students receiving title IV, HEA
funds to apply to regain that eligibility.
Section 207(b)(4) of the HEA provides
that a teacher preparation program that
loses eligibility to enroll students
receiving title IV, HEA funds may be
reinstated upon demonstration of
improved performance, as determined
by the State. Reinstatement of eligibility
of a low-performing teacher preparation
program would occur if the program
meets two criteria: (1) Improved
performance on the teacher preparation
program performance criteria in § 612.5
as determined by the State; and (2)
reinstatement of the State’s approval or
the State’s financial support, or, if both
were lost, the State’s approval and the
State’s financial support. Section 612.8
operationalizes the process for an
institution to notify the Secretary that
the State has determined the program
has improved its performance
sufficiently to regain the State's approval
or financial support and that the teacher
preparation program should again be permitted
to enroll students receiving title IV aid.
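For readers who find a schematic useful, the following is a minimal illustrative sketch, not part of the regulations, of how the two reinstatement criteria described above combine; the function and argument names are hypothetical.

# Illustrative sketch of the two reinstatement criteria described above;
# not part of the regulations. All names are hypothetical.
def eligibility_reinstated(improved_under_612_5: bool,
                           lost_approval: bool, approval_restored: bool,
                           lost_funding: bool, funding_restored: bool) -> bool:
    """Reinstatement requires (1) improved performance on the Sec. 612.5
    criteria as determined by the State, and (2) reinstatement of
    whichever of State approval or State financial support was lost
    (both, if both were lost)."""
    restored = ((not lost_approval or approval_restored) and
                (not lost_funding or funding_restored))
    return improved_under_612_5 and restored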
We do not propose to tie an entire
institution's eligibility for title IV, HEA
funds to the performance of its teacher
preparation programs. Any loss of title
IV, HEA funds based on these
regulations would apply only to the
affected teacher preparation program
and not to the entire institution.
Because reporting occurs at the program
level, an institution could have both
title IV eligible and non-title IV eligible
teacher preparation programs,
depending on the rating of each
program, and the remaining programs
at the institution would still be eligible
to receive title IV, HEA
funds. We are concerned that our
inclusion of proposed § 612.8(b)(2) may
have led the commenter to believe that
an entire institution would be
prohibited from participating in the title
IV programs as a result of a teacher
preparation program’s loss of approval
or financial support based on low
performance. To avoid such confusion,
we have removed § 612.8(b)(2) from the
final regulations. The institutional
eligibility requirements in part 600
sufficiently describe the requirements
for institutions to participate in the title
IV, HEA programs.
We believe that providing transitional
support to students enrolled at the
institution at the time a State may
terminate financial support or withdraw
approval of a teacher preparation
program will provide appropriate
consumer protections to students. We
disagree with the commenter who stated
that it would be impossible for a
program to improve its performance on
the State assessment because no data on
which the program could be assessed,
such as student learning outcomes,
would be available if the program were
prohibited from enrolling additional
title IV eligible students. Programs
would not be prohibited from enrolling
students to determine future student
outcomes. Programs that have lost State
approval or financial support would be
limited only in their ability to enroll
additional title IV eligible students, not
to enroll all students.
Changes: We have removed
§ 612.8(b)(2), which was related to
institutional eligibility.
Part 686—Teacher Education
Assistance for College and Higher
Education (TEACH) Grant Program
Subpart A—Scope, Purpose, and
General Definitions
Section 686.1
Scope and Purpose
Comments: None.
Discussion: The Higher Education
Opportunity Act of 2008 (HEOA) (Pub.
L. 110–315) amended section
465(a)(2)(A) of the HEA to include
educational service agencies in the
description of the term low-income
school, and added a new section 481(f)
that provides that the term ‘‘educational
service agency’’ has the meaning given
the term in section 9101 of the ESEA.
Also, the ESSA maintained the
definition of the term ‘‘educational
service agency’’, but it now appears in
section 8101 of the ESEA, as amended
by the ESSA. We proposed changes to
the TEACH Grant program regulations
to incorporate the statutory change,
such as replacing the definition of the
term ‘‘school serving low-income
students (low-income school)’’ in
§ 686.2 with the term ‘‘school or
educational service agency serving lowincome students (low-income school).’’
Previously, § 686.1 stated that in
exchange for a TEACH Grant, a student
must agree to serve as a full-time teacher
in a high-need field in a school serving
low-income students. We revise the
section to provide that a student must
teach in a school or educational service
agency serving low-income students.
Changes: We revised § 686.1 to update
the citation in the definition of the term
educational service agency to section
8101 of the ESEA, as amended, and to
use the new term ‘‘school or educational
service agency serving low-income
students (low-income school)’’ in place
of the term ‘‘school serving low-income
students (low-income school).’’
Section 686.2
Definitions
Classification of Instructional Program
(CIP)
Comments: None.
Discussion: In the NPRM, we
proposed to use the CIP to identify
TEACH Grant-eligible STEM programs.
Because, as discussed below, we are no
longer identifying TEACH Grant-eligible
STEM programs, the term CIP is not
used in the final regulations.
Changes: We have removed the
definition of the term CIP from § 686.2.
High-Quality Teacher Preparation
Program Not Provided Through Distance
Education § 686.2
Comments: None.
Discussion: In the NPRM, we proposed a definition for the term ''high-quality teacher preparation program.'' In response to comments, we have added a definition of a ''high-quality teacher preparation program provided through distance education'' in § 686.2. We make a corresponding change to the proposed definition of the term ''high-quality teacher preparation program'' to distinguish a ''high-quality teacher preparation program not provided through distance education'' from a ''high-quality teacher preparation program provided through distance education.''
Furthermore, to ensure that the TEACH Grant program regulations are consistent with the changes made to part 612, we have revised the timelines that we proposed in the definition of the term high-quality teacher preparation program in part 686, which we now incorporate in the terms ''high-quality teacher preparation program not provided through distance education'' and ''high-quality teacher preparation program provided through distance education.'' We have also removed the phrase ''or of higher quality'' from ''effective or of higher quality'' to align the definition of ''high-quality teacher preparation program not provided through distance education'' with the definition of the term ''effective teacher preparation program'' in 34 CFR 612.1(d), which provides that an effective teacher preparation program is a program with a level of performance higher than a low-performing teacher preparation program or an at-risk teacher preparation program. The phrase ''or of higher quality'' was redundant and unnecessary.
The new definition is consistent with changes we made with respect to program-level reporting (including distance education), which are described in the section of the preamble related to § 612.4(a)(1)(i). We note that the new definition of the term ''high-quality teacher preparation program not provided through distance education'' relates to the classification of the program under 34 CFR 612.4(b) made by the State where the program was located, as the proposed definition of the term ''high-quality teacher preparation program'' provided. This is in contrast to the definition of the term ''high-quality teacher preparation program provided through distance education'' discussed later in this document.
Also, the proposed definition
provided that in the 2020–2021 award
year, a program would be ‘‘high-quality’’
only if it was classified as an effective
teacher preparation program in either or
both the April 2019 and/or April 2020
State Report Cards. We have determined
that this provision is unnecessary and
have deleted it. Now, because the first
State Report Cards under the regulations
will be submitted in October 2019, we
have provided that starting with the
2021–2022 award year, a program is
high-quality if it is not classified by the
State to be less than an effective teacher
preparation program based on 34 CFR
612.4(b) in two out of the previous three
years. We note that in the NPRM, the
definition of the term ‘‘high-quality
teacher preparation program’’ contained
an error. The proposed definition
provided that a program would be
considered high-quality if it were
classified as effective or of higher
quality for two out of three years. We
intended the requirement to be that a
program is high-quality if it is not rated
lower than effective in two out of the
previous three years. This is a more
reasonable standard, and allows a
program that has been rated as less than
effective to improve its rating before
becoming ineligible to award TEACH
Grants.
Changes: We have added to § 686.2
the term ‘‘high-quality teacher
preparation program not provided
through distance education’’ and
defined it as a teacher preparation
program at which less than 50 percent
of the program’s required coursework is
offered through distance education that,
starting with the 2021–2022 award year
and subsequent award years, is not
classified by the State to be less than an
effective teacher preparation program,
based on 34 CFR 612.4(b) in two out of
the previous three years or meets the
exception from State reporting of
teacher preparation program
performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
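To make the two-out-of-three-years test in this definition concrete, the following is a minimal illustrative sketch; it is not part of the regulations, the rating labels and function names are hypothetical, and the exceptions under 34 CFR 612.4(b)(3)(ii)(D) and 612.4(b)(5) are omitted.

# Illustrative sketch of the two-out-of-three-years test quoted above;
# not part of the regulations. Rating labels and names are hypothetical,
# and the 34 CFR 612.4(b)(3)(ii)(D) and 612.4(b)(5) exceptions are omitted.
BELOW_EFFECTIVE = {"low-performing", "at-risk"}

def meets_rating_test(previous_three_ratings: list[str]) -> bool:
    """True unless the State classified the program below 'effective'
    in two (or more) of the previous three years."""
    below = sum(1 for r in previous_three_ratings if r in BELOW_EFFECTIVE)
    return below < 2

# Example: one below-effective year out of three keeps the program
# high-quality; two such years would not.
assert meets_rating_test(["effective", "at-risk", "effective"])
assert not meets_rating_test(["at-risk", "low-performing", "effective"])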
High-Quality Teacher Preparation
Program Provided Through Distance
Education § 686.2
Comments: In response to the
Supplemental NPRM, many
commenters stated that it was unfair
that one State’s classification of a
teacher preparation program provided
through distance education as low-performing or at-risk of being low-performing would determine TEACH
Grant eligibility for all students enrolled
in that program who receive TEACH
Grants, even if other States classified the
program as effective. Commenters did
not propose alternative options. One
commenter argued that the
determination of institutional eligibility
to disburse TEACH Grants is meant to
rest squarely with the Department,
separate from determinations relating to
the title II reporting system. Another
commenter suggested that there should
be a single set of performance standards
for TEACH Grants to which all States
agree to hold distance education
programs accountable. Some commenters
felt teacher preparation programs
provided through distance education
might have few students in a State and,
as a result, might become victims of an
unusually unrepresentative sample in a
particular State.
Several commenters stated that it was
unclear how the proposed regulations
would take into account TEACH Grant
eligibility for students enrolled in a
teacher preparation program provided
through distance education that does
not lead to initial certification or if the
program does not receive an evaluation
by a State. Another commenter stated
that the proposed regulations would
effectively impose a requirement for
distance education institutions to adopt
a 50-State authorization compliance
strategy to offer their distance education
teacher licensure programs to students
in all 50 States.
Discussion: We are persuaded by the
commenters that the proposed
regulations were too stringent.
Consequently, we are revising the
proposed definition of ‘‘high-quality
teacher preparation program provided
through distance education’’ such that,
to become ineligible to participate in the
TEACH Grant program, the teacher
preparation program provided through
distance education would need to be
rated as low-performing or at-risk for
two out of three years by the same State.
This revision focuses on the
classification of a teacher preparation
program provided through distance
education as provided by the same State
rather than the classification of a
program by multiple States to which the
commenters objected. Moreover, this is
consistent with the treatment of teacher
preparation programs at brick-and-mortar institutions, which also have to
be classified as low-performing or at-risk for two out of three years by the
same State to become ineligible to
participate in the TEACH Grant
program.
We disagree with the commenter that
the determination of institutional
eligibility to disburse TEACH Grants is
meant to rest squarely with the
Department, separate from
determinations relating to teacher
preparation program performance under
title II of the HEA. The HEA provides
that the Secretary determines which
teacher preparation programs are high-quality, and the Secretary has
reasonably decided to rely, in part, on
the classification of teacher preparation
program performance by States under
title II of the HEA. Further, as the
performance rating of teacher
preparation programs not provided
through distance education could also
be subject to unrepresentative samples
(for example, programs located near a
State border), this concern is not limited
to teacher preparation programs
provided through distance education.
The performance standards related to
title II are left to a State’s discretion;
thus, if States want to work together
create a single set of performance
standards, there is no barrier to them
doing so.
By way of clarification, the HEA and
current regulations provide for TEACH
Grant eligibility for students enrolled in
post-baccalaureate and master’s degree
programs. The eligibility of programs
that do not lead to initial certification is
not based on a title II performance
rating. In addition, if the teacher
preparation program provided through
distance education is not classified by a
State for a given year due to small n-size, students would still be able to
receive TEACH Grants if the program
meets the exception from State reporting
of teacher preparation program
performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
We disagree that the regulations
effectively impose a requirement for
distance education institutions to adopt
a 50-State authorization compliance
strategy to offer their distance education
teacher licensure programs to students
in all 50 States. Rather, our regulations
provide, in part, for reporting on teacher
preparation programs provided through
distance education under the title II
reporting system, with the resulting
performance level classification of the
program forming the basis for that
program's eligibility to disburse TEACH
Grants.
Changes: We have revised the
definition of a high-quality teacher
preparation program provided through
distance education to be a teacher
preparation program at which at least 50
percent of the program’s required
coursework is offered through distance
education and that starting with the
2021–2022 award year and subsequent
award years, is not classified by the
same State to be less than an effective
teacher preparation program based on
34 CFR 612.4(b) in two of the previous
three years or meets the exception from
State reporting of teacher preparation
program performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
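As an illustration of how this definition differs from the one for programs not provided through distance education, the following hedged sketch groups ratings by State; it is not part of the regulations, the data layout is hypothetical, and the reporting exceptions are again omitted.

# Illustrative sketch; not part of the regulations. A distance education
# program (at least 50 percent of required coursework offered through
# distance education) loses high-quality status only if the *same* State
# classified it below "effective" in two of the previous three years.
BELOW_EFFECTIVE = {"low-performing", "at-risk"}

def distance_program_meets_rating_test(ratings_by_state: dict[str, list[str]]) -> bool:
    for ratings in ratings_by_state.values():
        below = sum(1 for r in ratings if r in BELOW_EFFECTIVE)
        if below >= 2:
            return False  # one State alone rated it below effective twice
    return True

# Two below-effective ratings spread across different States do not, by
# themselves, make the program ineligible.
assert distance_program_meets_rating_test(
    {"VA": ["at-risk", "effective", "effective"],
     "OH": ["effective", "low-performing", "effective"]})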
TEACH Grant-Eligible Institution
Comments: Several commenters
disagreed with our proposal to link
TEACH Grant program eligibility to
State ratings of teacher preparation
program performance conducted under
the title II reporting system described in
part 612. Commenters asserted that
State ratings of teacher preparation
programs should not determine TEACH
Grant program eligibility because it is
not a good precedent to withhold
financial aid from qualified students on
the basis of the quality of the program
in which the student is enrolled.
Commenters also expressed concern
that, under part 612, each State may
develop its own criteria for assessing
teacher preparation program quality,
and that this variation between States
will impact teacher preparation
programs’ eligibility for TEACH Grants.
Commenters stated that using different
quality measures to determine student
eligibility for TEACH Grants will be
unfair to students, as programs in
different States will be evaluated using
different criteria.
A commenter whose institution offers only graduate
degree programs and no programs that
lead to initial certification noted that the
HEA provides that current teachers may
be eligible for TEACH Grants to obtain
graduate degrees, and questioned how
those students could obtain TEACH
Grants under the proposed definitions
of the terms ‘‘TEACH Grant-eligible
institution’’ and ‘‘TEACH Grant-eligible
program.’’.
Commenters also expressed concern
that the proposed definition of the term
TEACH Grant-eligible institution will
result in an overall reduction in the
number of institutions that are eligible
to provide TEACH Grants, and that,
because of this reduction, fewer
students will pursue high-need fields
such as special education, or teach in
high-poverty, diverse, urban or rural
communities where student test scores
may be lower. One commenter stated
that it is unfair to punish students by
denying them access to financial aid
when the States they live in and the
institutions they attend may not be able
to supply the data on which the teacher
preparation programs are being
assessed.
Discussion: We believe that creating a
link between institutions with teacher
preparation programs eligible for
TEACH Grants and the ratings of teacher
preparation programs under the title II
reporting system is critical, and will
allow the Secretary to identify which
teacher preparation programs are high-quality. An ''eligible institution,'' as
defined in section 420L(1)(A) of the
HEA, is one that the Secretary
determines ‘‘provides high-quality
teacher preparation and professional
development services, including
extensive clinical experience as part of
pre-service preparation,’’ among other
requirements. Consistent with this
requirement, we have defined the term
‘‘TEACH Grant-eligible program’’ to
include those teacher preparation
programs that a State has determined
provide at least effective teacher
preparation. Under title II of the HEA,
States are required to assess the quality
of teacher preparation programs in the
State and to make a determination as to
whether a program is low-performing or
at-risk of being low-performing. A
teacher preparation program that does
not fall under either one of these
categories is considered an effective
teacher preparation program under
these final regulations. It is appropriate
and reasonable for the Secretary to rely
on a State’s assessment of the quality of
teacher preparation programs in that
State for purposes of determining which
programs are TEACH Grant-eligible
programs.
We agree that States will assess
teacher preparation programs based on
different criteria and measures. The
HEA only requires a State to assess the
quality of teacher preparation in that
State and does not require comparability
between States. That different States
may use different standards is not
necessarily unfair, as it is reasonable for
States to consider specific conditions in
their States when designing their annual
assessments. We believe it is important
that students receiving TEACH Grants
be enrolled in programs that the State
has identified as providing effective
teacher preparation.
We agree that in addition to ensuring
that students wishing to achieve initial
certification to become teachers are
eligible for TEACH Grants, the HEA
provides that a teacher or a retiree from
another occupation with expertise in a
field in which there is a shortage of
teachers, such as mathematics, science,
special education, English language
acquisition, or another high-need field,
or a teacher who is using high-quality
alternative certification routes to
become certified is eligible to receive
TEACH Grants. To ensure that these
eligible students are able to obtain
TEACH Grants, we have modified the
definitions of the terms ''TEACH Grant-eligible institution'' and ''TEACH Grant-eligible program.''
We also acknowledge the possibility
that the overall number of institutions
eligible to award TEACH Grants could
decrease, because a TEACH Grant-eligible institution now must, in most
cases, provide at least one high quality
teacher preparation program, while in
the current regulation, an institution
may be TEACH Grant-eligible if it offers
a baccalaureate degree that, in
combination with other training or
experience, will prepare an individual
to teach in a high-need field and has
entered into an agreement with another
institution to provide courses necessary
for its students to begin a career in
teaching. We note that so long as an
otherwise eligible institution has one
high-quality teacher preparation
program not provided through distance
education or one high-quality program
provided through distance education, it
continues to be a TEACH Grant-eligible
institution. Furthermore, we do not
believe the regulations would
necessarily create fewer incentives for
students to pursue fields such as special
education or to teach in high-poverty,
diverse, or rural communities where test
scores may be lower. TEACH Grants will continue
to be available to students so long as
their teacher preparation programs are
classified as effective teacher
preparation programs by the State
(subject to the exceptions previously
discussed), and we are not aware of any
evidence that programs that prepare
teachers who pursue fields such as
special education or who teach in
communities where test scores are lower
will be classified as at-risk or low-performing teacher preparation
programs on the basis of lower test
scores. We believe that those students
will choose to pursue those fields while
enrolled in high-quality programs. The
larger reason that the number of
institutions providing TEACH Grants
may decrease is that the final
regulations narrow the definition of a
TEACH Grant-eligible institution to
generally those institutions that offer at
least one high-quality teacher
preparation program not provided
through distance education or one high-quality teacher preparation program
provided through distance education at
the baccalaureate or master’s degree
level (that also meets additional
requirements) and institutions that
provide a high-quality teacher
preparation program not provided
through distance education or one high-quality teacher preparation program
provided through distance education
that is a post-baccalaureate program of
study.
We do not agree that student learning
outcomes for any subgroup, including
for teachers who teach students with
disabilities, would necessarily be lower
if properly measured. Further, student
learning outcomes is one of multiple
measures used to determine a rating
and, thereby, TEACH eligibility. So a
single measure, whether student
learning outcomes or another, would
not necessarily lead to the teacher
preparation program being determined
by the State to be low-performing or at-risk of being low-performing and
correspondingly being ineligible for
TEACH Grants. As discussed elsewhere
in this document, States determine the
ways to measure student learning
outcomes that give all teachers a chance
to demonstrate effectiveness regardless
of the composition of their classrooms,
and States may also determine weights
of the criteria used in their State
assessments of teacher preparation
program quality.
We do not agree with the comment
that the definition of the term TEACH
Grant-eligible program will unfairly
punish students who live in States or
attend institutions that fail to comply
with the regulations in part 612 by
failing to supply the data required in
that part. Section 205 of the HEA
requires States and institutions to
submit IRCs and SRCs annually. In
addition, students will have access to
information about a teacher preparation
program’s eligibility before they enroll
so that they may select programs that
are TEACH Grant-eligible. Section
686.3(c) also allows students who are
currently enrolled in a TEACH Grant-eligible program to receive additional
TEACH Grants to complete their
program, even if the program becomes
ineligible to award TEACH Grants to
new students.
For reasons discussed under the
TEACH Grant-eligible program section
of this document, we have made
conforming changes to the definition of
a TEACH Grant-eligible program that are
reflected in the definition of TEACH
Grant-eligible institution where
applicable.
Changes: We have revised the
definition of a TEACH Grant-eligible
institution to provide that, if an
institution provides a program that is
the equivalent of an associate degree as
defined in § 668.8(b)(1) that is
acceptable for full credit toward a
baccalaureate degree in a high-quality
teacher preparation program not
provided through distance education or
one high-quality teacher preparation
program provided through distance
education or provides a master’s degree
program that does not meet the
definition of the terms ‘‘high quality
teacher preparation not provided
through distance education’’ or ‘‘high
quality teacher preparation program that
is provided through distance education’’
because it is not subject to reporting
under 34 CFR part 612, but that
prepares (1) a teacher or a retiree from
another occupation with expertise in a
field in which there is a shortage of
teachers, such as mathematics, science,
special education, English language
acquisition, or another high-need field;
or (2) a teacher who is using high-quality alternative certification routes to
become certified, the institution is
considered a TEACH Grant-eligible
institution.
TEACH Grant-Eligible Program
Comments: A commenter
recommended that the definition of TEACH
Grant-eligible program be amended to
add ‘‘or equivalent,’’ related to the
eligibility of a two-year program so that
the definition would read, ‘‘Provides a
two-year or equivalent program that is
acceptable for full credit toward a
baccalaureate degree in a high-quality
teacher preparation program’’ because
some programs could be less than two
years, but the curriculum covered is the
equivalent of a two-year program.
Discussion: We agree with the
commenter that some programs could be
less than two years, but the curriculum
could cover the equivalent of a two-year
program, and therefore agree that the
provision regarding what constitutes an
eligible two-year program of study
should be revised. However, we base the
revision on already existing regulations
regarding ‘‘eligible program’’ rather than
the commenter’s specific language
recommendations. The regulations for
‘‘eligible program’’ in § 668.8 provide
that an eligible program is an
educational program that is provided by
a participating institution and satisfies
other relevant requirements contained
in the section, including that an eligible
program provided by an institution of
higher education must, in part, lead to
an associate, bachelor's, professional, or
graduate degree or be at least a
two-academic-year program that is
acceptable for full credit toward a
bachelor’s degree. For purposes of
§ 668.8, the Secretary considers an
‘‘equivalent of an associate degree’’ to
be, in part, the successful completion of
at least a two-year program that is
acceptable for full credit toward a
bachelor’s degree and qualifies a student
for admission into the third year of a
bachelor’s degree program. Based on
these existing regulations, we amended
the proposed definition of TEACH
Grant-eligible program to provide that a
program that is the equivalent of an
associate degree as defined in
§ 668.8(b)(1) that is acceptable for full
credit toward a baccalaureate degree in
a high-quality teacher preparation
program is considered to be a TEACH
Grant-eligible program. In addition, as
described in the discussion of the term
‘‘TEACH Grant-eligible institution,’’ we
have made a corresponding change to
the definition of the term ‘‘TEACH
Grant-eligible program’’ to ensure that
programs that prepare graduate degree
students who are eligible to receive
TEACH grants pursuant to section
420N(a)(2)(B) of the HEA are eligible
programs. This change applies to
programs that are not assessed by a State
under title II of the HEA.
Changes: We have revised the
definition of TEACH Grant-eligible
program to provide that a program that
is a two-year program or is the
equivalent of an associate degree as
defined in § 668.8(b)(1) that is
acceptable for full credit toward a
baccalaureate degree in a high quality
teacher preparation program is also
considered to be a TEACH Grant-eligible
program. We have also clarified that a
master’s degree program that does not
meet the definition of the terms ‘‘high
quality teacher preparation not provided
through distance education’’ or ‘‘high
quality teacher preparation program that
is provided through distance education’’
because it is not subject to reporting
under 34 CFR part 612, but that
prepares (1) a teacher or a retiree from
another occupation with expertise in a
field in which there is a shortage of
teachers, such as mathematics, science,
special education, English language
acquisition, or another high-need field;
or (2) a teacher who is using high-quality alternative certification routes to
become certified is a TEACH Grant-eligible program.
TEACH Grant-Eligible STEM Program
Comments: Multiple commenters
stated that the proposed definition of
the term TEACH Grant-eligible STEM
program was not discussed during the
negotiated rulemaking process and
unreasonably creates a separate
standard for TEACH Grant eligibility
without the corresponding reporting
required in the SRC. Commenters
generally stated that all teacher
preparation programs should be held
accountable in a fair and equitable
manner. Commenters further stated that
the Department did not provide any
rationale for excepting STEM programs
from the ratings of teacher preparation
programs described in part 612.
Commenters also noted that the
proposed definition ignores foreign
language, special education, bilingual
education, and reading specialists,
which are identified as high-need fields
in the HEA. Several commenters also
disagreed with the different treatment
provided to STEM programs under the
definition because they believed that
STEM fields were being given extra
allowances with respect to failing
programs and that creating different
standards of program effectiveness for
STEM programs and teacher preparation
programs makes little sense.
Commenters suggested that, instead, the
Department should require that STEM
programs be rated as effective or
exceptional in order for students in
those programs to receive TEACH
Grants.
Commenters also questioned what
criteria the Secretary would use to
determine eligibility, since the Secretary
would be responsible for determining
which STEM programs are TEACH
Grant-eligible. Finally, commenters
emphasized the importance of the
pedagogical aspects of teacher
education.
Discussion: We agree that it is
important that teacher preparation
programs that are considered TEACH
Grant-eligible programs be high-quality
programs, and that the proposed
definition of the term TEACH Grant-eligible STEM program may not achieve
that goal. The regulations in part 612
only apply to teacher preparation
programs, which are defined in that part
generally as programs that lead to an
initial State teacher certification or
licensure in a specific field. Many
STEM programs do not lead to an initial
State teacher certification or licensure,
and hence are not subject to the State
assessments described in part 612 and
section 207 of the HEA. We have
carefully considered the commenters’
concerns, and have decided to remove
our proposed definition of the term
TEACH Grant-eligible STEM program
because it would be difficult to
implement and would result in different
types of programs being held to different
quality standards. We also acknowledge
the importance of the pedagogical
aspects of teacher education. A result of
the removal of this definition will be
that a student must be enrolled in a
high-quality teacher preparation
program as defined in § 686.2(e) to be
eligible for a TEACH Grant, and that few
students participating in STEM
programs will receive TEACH Grants.
Those students may be eligible for
TEACH Grants for post-baccalaureate or
graduate study after completion of their
STEM programs.
Changes: We have removed the
TEACH Grant-eligible STEM program
definition from § 686.2, as well as
references to and uses of that definition
elsewhere in part 686 where this term
appeared.
Section 686.11 Eligibility To Receive a
TEACH Grant
Comments: Some commenters
supported linking TEACH Grant
eligibility to the title II reporting system
for the 2020–2021 title IV award year,
noting that this would prevent programs
that fail to prepare teachers effectively
from remaining TEACH Grant-eligible,
and that linking TEACH Grant program
eligibility to teacher preparation
program quality is an important lever to
bring accountability to programs
equipping teachers to teach in the
highest need schools. Other commenters
were concerned that linking title II
teacher preparation program ratings to
TEACH Grant eligibility will have a
negative impact on recruitment for
teacher preparation programs, will
restrict student access to TEACH Grants,
and will negatively impact college
affordability for many students,
especially for low- and middle-income
students and students of color who may
be disproportionately impacted because
these students typically depend more on
Federal student aid. Commenters were
concerned that limiting aid for these
students, as well as for students in rural
communities or students in special
education programs, would further
increase teacher shortages in these
areas, would slow down progress in
building a culturally and racially
representative educator workforce, and
possibly exacerbate current or pending
teacher shortages across the nation in
general. Many commenters opined that,
because there is no evidence supporting
the use of existing student growth
models for determining institutional
eligibility for the TEACH Grant
program, institutional eligibility for
TEACH Grants and student eligibility
for all title IV Federal student aid in a
teacher preparation program would be
determined based on an invalid and
unreliable rating system. Some
commenters recommended that Federal
student aid be based on student need,
not institutional ratings, which they
asserted lack a sound research base
given the potential unknown
impacts on underrepresented groups.
Others expressed concern that financial
aid offices would experience more
burden and more risk of error in the
student financial aid packaging process
because they would have more
information to review to determine
student eligibility. This would include,
for distance education programs, where
each student lives and which programs
are eligible in which States.
Many commenters stated that the
proposed regulations would grant the
State, rather than the Department of
Education, authority to determine
TEACH Grant eligibility, which is a
delegation of authority that Congress
did not provide the Department, and
that a State’s strict requirements may
make the TEACH Grant program
unusable by institutions, thereby
eliminating TEACH Grant funding for
students at those institutions. It was
recommended that the regulations allow
for professional judgment regarding
TEACH Grant eligibility, that TEACH
Grants mimic Federal Pell Grants in
annual aggregates, and that a link
should be available at studentloans.gov
for TEACH Grant requirements. One
commenter further claimed that the
proposed regulations represent a
profound and unwelcome shift in the
historic relationship between colleges,
States, and the Federal government and
that there is no indication that the HEA
envisions the kind of approach to
institutional and program eligibility for
TEACH Grants proposed in the
regulations. The commenter opined that
substantive changes to the eligibility
requirements should be addressed
through the legislative process, rather
than through regulation. A commenter
noted that a purpose of the proposed
regulations is to deal with deficiencies
in the TEACH Grant program, and thus,
the Department should focus
specifically on issues with the TEACH
Grant program and not connect these to
reporting of the teacher preparation
programs.
Discussion: We appreciate the
comments supporting the linking of
TEACH Grant eligibility to the title II
reporting system for the 2021–2022 title
IV award year. We disagree, however,
with comments suggesting that such a
link will have a negative impact on
recruitment for teacher preparation
programs and restrict student access to
TEACH Grants because this
circumstance would only arise in the
case of programs rated other than
effective, and it is not unreasonable for
students to choose to attend teacher
preparation programs that are effective
over those that are not. While we agree
that low- and middle-income students
and students of color are more likely to
depend on Federal student aid, the
regulations would not affect their
eligibility for Federal student aid as long
as they are enrolled in a TEACH Grant-eligible teacher preparation program at
a TEACH Grant-eligible institution. The
same would be true for students in rural
communities or in special education
programs. Because student eligibility for
Federal student aid would not be
affected in these circumstances, teacher
shortages in these areas also would not
be impacted. In 2011, only 38
institutions were identified by their
States as having a low-performing
teacher preparation program.53 That
evaluation was based on an institution-wide assessment of quality. Under part
612, each individual teacher preparation
program offered by an institution will be
evaluated by the State, and it would be
unlikely for all teacher preparation
programs at an institution to be rated as
low-performing. We believe that
students reliant on Federal student aid
will have sufficient options to enroll in
high-quality teacher preparation
programs under the final regulations.
While we hope that students would use
the ratings of teacher preparation
programs to pick more effective
programs initially, we also provide
under § 686.3 that an otherwise eligible
student who received a TEACH Grant
for enrollment in a TEACH Grant-eligible program is eligible to receive
additional TEACH Grants to complete
that program, even if that program is no
longer considered TEACH Grant-eligible. An otherwise eligible student
who received a TEACH Grant for
enrollment in a program before July 1 of
the year these final regulations become
effective would remain eligible to
receive additional TEACH Grants to
complete the program even if the
program is no longer considered TEACH
Grant-eligible under § 686.2(e).
With respect to comments objecting to
the use of student growth to determine
TEACH Grant eligibility, student growth
is only one of the many indicators that
States use to assess teacher preparation
program quality in part 612, and States
have discretion to determine the weight
assigned to that indicator in their
assessment.
While the new regulations will
require financial aid offices to track and
review additional information with
respect to student eligibility for TEACH
Grants, we do not agree that this would
result in greater risk of incorrect
packaging of financial aid. For an
institution to begin and continue to
participate in any title IV, HEA program,
the institution must demonstrate to the
Secretary that it is capable of
administering that program under the
standards of administrative capability
provided under § 668.16 (Standards of
administrative capability). An
institution that does not meet
administrative capability standards
would not be eligible to disburse any
title IV, HEA funds, including TEACH
Grants. Moreover, institutions have
always had to determine whether a
student seeking a TEACH Grant is
enrolled in a TEACH Grant-eligible
program. The final regulations require
the institution to be aware of whether
any of the teacher preparation programs
at the institution have been rated as
low-performing or at-risk by the State
when identifying which programs it
offers are TEACH Grant-eligible
programs.
53 U.S. Department of Education, Office of
Postsecondary Education (2013). Preparing and
Credentialing the Nation's Teachers: The
Secretary's Ninth Report on Teacher Quality.
Washington, DC. Retrieved from https://
title2.ed.gov/Public/TitleIIReport13.pdf. (Hereafter
referred to as ''Secretary's Ninth Report.'')
We disagree with comments asserting
that the proposed regulations would
grant States, rather than the Department,
authority to determine TEACH Grant
eligibility, which they claimed is a
delegation of authority that Congress
did not authorize. The HEA provides
that an ‘‘eligible institution’’ for
purposes of the TEACH Grant program
is one ‘‘that the Secretary determines
. . . provides high quality teacher
preparation . . . .’’ The Secretary has
determined that States are in the best
position to assess the quality of teacher
preparation programs located in their
States, and it is reasonable for the
Secretary to rely on the results of the
State assessment required by section
207 of the HEA. We believe that it is
appropriate to use the regulatory
process to define how the Secretary
determines that an institution provides
high quality teacher preparation and
that the final regulations reasonably
amend the current requirements so that
they are more meaningful.
We also disagree with commenters
that a State’s strict requirements may
make the TEACH Grant program
unusable by institutions and thereby
eliminate TEACH Grant funding for
students at those institutions. We
believe that States will conduct careful
and reasonable assessments of teacher
preparation programs located in their
States, and we also believe if a State
determines a program is not effective at
providing teacher preparation, students
should not receive TEACH Grants to
attend that program.
Regarding the recommendation that
the regulations allow for professional
judgment regarding TEACH Grant
eligibility, there is no prohibition
regarding the use of professional
judgment for the TEACH Grant program,
provided that all applicable regulatory
requirements are met. With respect to
the comment suggesting that the TEACH
Grant program should mimic the Pell
Grant program in annual aggregates, we
note that, just as the Pell Grant program
has its own annual aggregates, the
TEACH Grant program has its own
statutory annual award limits that must
be adhered to. The HEA provides that an
undergraduate or post-baccalaureate student
may receive up to $4,000 per year, and
§ 686.3(a) provides that an
undergraduate or post-baccalaureate
student may receive the equivalent of
up to four Scheduled Awards during the
period required for completion of the
first undergraduate baccalaureate
program of study and the first post-baccalaureate program of study
combined. For graduate students, the
HEA provides up to $4,000 per year,
and § 686.3(b) stipulates that a graduate
student may receive the equivalent of
up to two Scheduled Awards during the
period required for the completion of
the TEACH Grant-eligible master’s
degree program of study.
Regarding the comment requesting a
link to the TEACH Grant program via
the studentloans.gov Web site, we do
not believe that adding a link to the
studentloans.gov Web site for TEACH
Grants would be helpful; in fact, it
could be confusing. This Web site is
specific to loans, not grants. Only if a
student does not fulfill the Agreement to
Serve is the TEACH Grant converted to
a Direct Unsubsidized Loan. The Web
site already includes a link to the teach-ats.ed.gov Web site, where students can
complete TEACH Grant counseling and
the Agreement to Serve. The
Department does provide information
about the TEACH Grant program on its
studentaid.ed.gov Web site.
We disagree with the comment that
the Department should focus
specifically on issues or deficiencies
with the TEACH Grant program and not
connect any issues or deficiencies to
reporting of teacher preparation
programs under title II. The regulations
are intended to improve the TEACH
Grant program, in part, by
operationalizing the definition of a high-quality teacher preparation program by
connecting the definition to the ratings
of teacher preparation programs under
the title II reporting system. The
regulations are not meant to address
specific TEACH Grant program issues or
program deficiencies.
We decline to adopt the suggestion
that an at-risk teacher preparation
program should be given the
opportunity and support to improve
before any consequences, including
those regarding TEACH Grants, are
imposed. The HEA specifies that
TEACH Grants may only be provided to
high-quality teacher preparation
programs, and we do not believe that a
program identified as being at-risk
should be considered a high-quality
teacher preparation program. With
respect to the comment that institutions
in the specific commenter’s State will
remove themselves from participation in
the TEACH Grant program rather than
pursue high-stakes Federal
requirements, we note that, while we
cannot prevent institutions from ending
their participation in the program, we
believe that institutions understand the
need for providing TEACH Grants to
eligible students and that institutions
will continue to try to meet that need.
Additionally, we note that all
institutions that enroll students
receiving Federal financial assistance
are required to submit an annual IRC
under section 205(a) of the HEA, and
that all States that receive funds under
the HEA must submit an annual SRC.
These provisions apply whether or not
an institution participates in the TEACH
Grant program.
We agree with the commenters who
recommended avoiding specific carve-outs for potential mathematics and
science teachers. As discussed under
the section titled ‘‘TEACH Grant-eligible
STEM program,’’ we have removed the
TEACH Grant-eligible STEM program
definition from § 686.2 and deleted the
term where it appeared elsewhere in
part 686.
Changes: None.
§ 686.42 Discharge of Agreement To
Serve
Comments: None.
Discussion: Section 686.42(b)
describes the procedure we use to
determine a TEACH Grant recipient’s
eligibility for discharge of an agreement
to serve based on the recipient’s total
and permanent disability. We intend
this procedure to mirror the procedure
outlined in § 685.213, which governs
discharge of Direct Loans. We are
making a change to § 686.42(b) to make
the discharge procedures for TEACH
Grants more consistent with the Direct
Loan discharge procedures. Specifically,
§ 685.213(b)(7)(ii)(C) provides that the
Secretary does not require a borrower to
pay interest on a Direct Loan for the
period from the date the loan was
discharged until the date the borrower’s
obligation to repay the loan was
reinstated. This idea was not clearly
stated in § 686.42(b). We have added
new § 686.42(b)(4) to explicitly state
that if the TEACH Grant of a recipient
whose TEACH Grant agreement to serve
is reinstated is later converted to a
Direct Unsubsidized Stafford Loan, the
recipient will not be required to pay
interest that accrued on the TEACH
Grant disbursements from the date the
agreement to serve was discharged until
the date the agreement to serve was
reinstated. Similarly, § 685.213(b)(7)(iii)
describes the information that the
Secretary’s notification to a borrower in
the event of reinstatement of the loan
will include. We have amended
§ 686.42(b)(3) to make the TEACH Grant
regulations more consistent with the
Direct Loan regulations. Specifically, we
removed proposed § 686.42(b)(3)(iii),
which provided that interest accrual
would resume on TEACH Grant
disbursements made prior to the date of
discharge if the agreement was
reinstated.
Changes: We have removed proposed
§ 686.42(b)(3)(iii) and added a new
§ 686.42(b)(4) to more clearly describe
that, if the TEACH Grant of a recipient
whose TEACH Grant agreement to serve
is reinstated is later converted to a
Direct Unsubsidized Stafford Loan, the
recipient will not be required to pay
interest that accrued on the TEACH
Grant disbursements from the date the
agreement to serve was discharged until
the date the agreement to serve was
reinstated. This change also makes the
TEACH Grant regulation related to total
and permanent disability more
consistent with the Direct Loan
discharge procedures.
Executive Orders 12866 and 13563
Regulatory Impact Analysis
Under Executive Order 12866, the
Secretary must determine whether this
regulatory action is ‘‘significant’’ and,
therefore, subject to the requirements of
the Executive order and subject to
review by the Office of Management and
Budget (OMB). Section 3(f) of Executive
Order 12866 defines a ‘‘significant
regulatory action’’ as an action likely to
result in a rule that may—
(1) Have an annual effect on the
economy of $100 million or more, or
adversely affect a sector of the economy,
productivity, competition, jobs, the
environment, public health or safety, or
State, local, or tribal governments or
communities in a material way (also
referred to as an ‘‘economically
significant’’ rule);
(2) Create serious inconsistency or
otherwise interfere with an action taken
or planned by another agency;
(3) Materially alter the budgetary
impacts of entitlement grants, user fees,
or loan programs or the rights and
obligations of recipients thereof; or
(4) Raise novel legal or policy issues
arising out of legal mandates, the
President’s priorities, or the principles
stated in the Executive order.
This final regulatory action is a
significant regulatory action subject to
review by OMB under section 3(f) of
Executive Order 12866.
We have also reviewed these
regulations under Executive Order
13563, which supplements and
explicitly reaffirms the principles,
structures, and definitions governing
regulatory review established in
Executive Order 12866. To the extent
permitted by law, Executive Order
13563 requires that an agency—
(1) Propose or adopt regulations only
on a reasoned determination that their
benefits justify their costs (recognizing
that some benefits and costs are difficult
to quantify);
(2) Tailor its regulations to impose the
least burden on society, consistent with
obtaining regulatory objectives and
taking into account—among other things
and to the extent practicable—the costs
of cumulative regulations;
(3) In choosing among alternative
regulatory approaches, select those
approaches that maximize net benefits
(including potential economic,
environmental, public health and safety,
and other advantages; distributive
impacts; and equity);
(4) To the extent feasible, specify
performance objectives, rather than the
behavior or manner of compliance a
regulated entity must adopt; and
(5) Identify and assess available
alternatives to direct regulation,
including economic incentives—such as
user fees or marketable permits—to
encourage the desired behavior, or
provide information that enables the
public to make choices.
Executive Order 13563 also requires
an agency ‘‘to use the best available
techniques to quantify anticipated
present and future benefits and costs as
accurately as possible.’’ The Office of
Information and Regulatory Affairs of
OMB has emphasized that these
techniques may include ‘‘identifying
changing future compliance costs that
might result from technological
innovation or anticipated behavioral
changes.’’
We are issuing these final regulations
only on a reasoned determination that
their benefits justify their costs. In
choosing among alternative regulatory
approaches, we selected those
approaches that maximize net benefits.
Based on the analysis that follows, the
Department believes that these
regulations are consistent with the
principles in Executive Order 13563.
We also have determined that this
regulatory action does not unduly
interfere with State, local, or tribal
governments in the exercise of their
governmental functions.
In this RIA we discuss the need for
regulatory action, the potential costs
and benefits, net budget impacts,
assumptions, limitations, and data
sources, as well as regulatory
alternatives we considered. Although
the majority of the costs related to
information collection are discussed
within this RIA, elsewhere in this
document under Paperwork Reduction
Act of 1995, we also identify and further
explain burdens specifically associated
with information collection
requirements.
1. Need for Regulatory Action
Recent international assessments of
student achievement have revealed that
students in the United States are
significantly behind students in other
countries in science, reading, and
mathematics.54 Although many factors
influence student achievement, a large
body of research has used value-added
modeling to demonstrate that teacher
quality is the largest in-school factor
affecting student achievement.55 We use
‘‘value-added’’ modeling and related
terms to refer to statistical methods that
use changes in the academic
achievement of students over time to
isolate and estimate the effect of
particular factors, such as family,
school, or teachers, on changes in
student achievement.56 One study
found that the difference between
having a teacher who performed at a
level one standard deviation below the
mean and a teacher who performed at a
level one standard deviation above the
mean was equivalent to student learning
gains of a full year’s worth of
knowledge.57
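To make the value-added approach concrete, one common specification in this literature (shown here only as an illustration; neither the HEA nor these regulations prescribes any particular model) regresses a student's current achievement on prior achievement, observable characteristics, and a teacher effect:

\[ y_{it} = \lambda y_{i,t-1} + X_{it}'\beta + \theta_{j(i,t)} + \varepsilon_{it} \]

where y_{it} is the test score of student i in year t, X_{it} collects student and classroom covariates, and \theta_{j(i,t)} is the estimated effect of the teacher who taught student i in year t. Averaging the estimated teacher effects over a program's recent graduates yields a program-level measure of the kind used in the State studies discussed below.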
A number of factors are associated
with teacher quality, including
academic content knowledge, in-service
training, and years of experience, but
researchers and policymakers have
begun to examine whether student
achievement discrepancies can be
explained by differences in the
preparation their teachers received
before entering the classroom.58 An
influential study on this topic found
that the effectiveness of teachers in
public schools in New York City who
were prepared through different teacher
preparation programs varied in
statistically significant ways, as
measured by the value-added growth of
their students.59

54 Kelly, D., Xie, H., Nord, C.W., Jenkins, F., Chan, J.Y., Kastberg, D. (2013). Performance of U.S. 15-Year-Old Students in Mathematics, Science, and Reading Literacy in an International Context: First Look at PISA 2012 (NCES 2014–024). Retrieved from U.S. Department of Education, National Center for Education Statistics Web site: https://nces.ed.gov/pubs2014/2014024rev.pdf.
55 Sanders, W., Rivers, J.C. (1996). Cumulative and Residual Effects of Teachers on Future Student Academic Achievement. Retrieved from University of Tennessee, Value-Added Research and Assessment Center; Rivkin, S., Hanushek, E., & Kain, J. (2005). Teachers, Schools, and Academic Achievement. Econometrica, 73(2), 417–458; Rockoff, J. (2004). The Impact of Individual Teachers on Student Achievement: Evidence from Panel Data. American Economic Review, 94(2), 247–252.
56 For more information on approaches to value-added modeling, see also: Braun, H. (2005). Using Student Progress to Evaluate Teachers: A Primer on Value-Added Models. Retrieved from https://files.eric.ed.gov/fulltext/ED529977.pdf; Sanders, W.J. (2006). Comparisons Among Various Educational Assessment Value-Added Models, Power of Two—National Value-Added Conference, Battelle for Kids, Columbus, OH. SAS, Inc.
57 E. Hanushek. (1992). The Trade-Off between Child Quantity and Quality. Journal of Political Economy, 100(1), 84–117.
Subsequent studies have examined
the value-added scores of teachers
prepared through different teacher
preparation programs in Missouri,
Louisiana, North Carolina, Tennessee,
and Washington.60 Many of these
studies have found statistically
significant differences between teachers
prepared at different preparation
programs. For example, State officials in
Tennessee and Louisiana have worked
with researchers to examine whether
student achievement could be used to
inform teacher preparation program
accountability. After controlling for
observable differences in students,
researchers in Tennessee found that the
most effective teacher preparation
programs in that State produced
graduates who were two to three times
more likely than other novice teachers
to be in the top quintile of teachers in
a particular subject area, as measured by
increases in the achievement of their
students, with the least-effective
programs producing teachers who were
equally likely to be in the bottom
quintile.61 Analyses based on Louisiana
data on student growth linked to the
programs that prepared students’
teachers found some statistically
significant differences in teacher
effectiveness.62 Although the study's
sample size was small, three teacher
preparation programs produced novice
teachers who appeared, on average, to
be as effective as teachers with at least
two years of experience, based on
growth in student achievement in four
or more content areas.63 A study
analyzing differences between teacher
preparation programs in Washington
based on the value-added scores of their
graduates also found a few statistically
significant differences, which the
authors argued were educationally
meaningful.64 In mathematics, the
average difference between teachers
from the highest performing program
and the lowest performing program was
approximately 1.5 times the difference
in performance between students
eligible for free or reduced-price
lunches and those who are not, while in
reading the average difference was 2.3
times larger.65

58 D. Harris & T. Sass. (2011). Teacher Training, Teacher Quality, and Student Achievement. Journal of Public Economics, 95(7–8), 798–812; D. Aaronson, L. Barrow, & W. Sanders. (2007). Teachers and Student Achievement in the Chicago Public High Schools. Journal of Labor Economics, 25(1), 95–135; D. Boyd, H. Lankford, S. Loeb, J. Rockoff, & Wyckoff, J. (2008). The Narrowing Gap in New York City Teacher Qualifications and Its Implications for Student Achievement in High-Poverty Schools. Journal of Policy Analysis and Management, 27(4), 793–818.
59 D. Boyd, P. Grossman, H. Lankford, S. Loeb, & J. Wyckoff (2009). "Teacher Preparation and Student Achievement." Education Evaluation and Policy Analysis, 31(4): 416–440.
60 Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M. (2015). Teacher Preparation Programs and Teacher Quality: Are There Real Differences Across Programs? Education Finance and Policy, 10(4), 508–534; Campbell, S., Henry, G., Patterson, K., Yi, P. (2011). Teacher Preparation Program Effectiveness Report. Carolina Institute for Public Policy; Goldhaber, D., & Liddle, S. (2013). The Gateway to the Profession: Assessing Teacher Preparation Programs Based on Student Achievement. Economics of Education Review, 34, 29–44.
61 Tennessee Higher Education Commission. Report Card on the Effectiveness of Teacher Training Programs, 2010.
In contrast to these findings, Koedel,
et al. found very small differences in
effectiveness between teachers prepared
at different programs in Missouri.66 The
vast majority of variation in teacher
effectiveness was within programs,
instead of between programs.67
However, the authors noted that the lack
of variation between programs in
Missouri could reflect a lack of
competitive pressure to spur innovation
within traditional teacher preparation
programs.68 A robust evaluation system
that included outcomes could spur
innovation and increase differentiation
between teacher preparation
programs.69
We acknowledge that there is debate
in the research community about the
specifications that should be used when
conducting value-added analyses of the
effectiveness of teachers prepared
through different preparation
programs,70 but also recognize that the
field is moving in the direction of
weighting value-added analyses in
assessments of teacher preparation
program quality.

62 Gansle, K., Noell, G., Knox, R.M., Schafer, M.J. (2010). Value Added Assessment of Teacher Preparation Programs in Louisiana: 2007–2008 to 2009–2010 Overview of 2010–11 Results. Retrieved from Louisiana Board of Regents.
63 Ibid.
64 Goldhaber, D., & Liddle, S. (2013). The Gateway to the Profession: Assessing Teacher Preparation Programs Based on Student Achievement. Economics of Education Review, 34, 29–44.
65 Ibid. 1.5 times the difference between students eligible for free or reduced-price lunch is approximately 12 percent of a standard deviation, while 2.3 times the difference is approximately 19 percent of a standard deviation.
66 Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M. (2015). Teacher Preparation Programs and Teacher Quality: Are There Real Differences Across Programs? Education Finance and Policy, 10(4), 508–534.
67 Ibid.
68 Ibid.
69 Ibid.
Thus, despite the methodological
debate in the research community,
CAEP has developed new standards that
require, among other measures,
evidence that students completing a
teacher preparation program positively
impact student learning.71 The new
standards are currently voluntary for the
more than 900 education preparation
providers who participate in the
education preparation accreditation
system. Participating institutions
account for nearly 60 percent of the
providers of educator preparation in the
United States, and their enrollments
account for nearly two-thirds of newly
prepared teachers. The new CAEP
standards will be required beginning in
2016.72 The standards are an indication
that the effectiveness ratings of teachers
trained through teacher preparation
programs are increasingly being used as
a way to evaluate teacher preparation
program performance. The research on
teacher preparation program
effectiveness is relevant to the
elementary and secondary schools that
rely on teacher preparation programs to
recruit and select talented individuals
and prepare them to become future
teachers. In 2011–2012 (the most recent
year for which data are available),
203,701 individuals completed either a
traditional teacher preparation program
or an alternative route program. The
National Center for Education Statistics
(NCES) projects that by 2020, public and
private schools will need to hire as
many as 362,000 teachers each year due
to teacher retirement and attrition and
increased student enrollment.73 In order
to meet the needs of public and private
schools, States may have to expand
traditional and alternative route
programs to prepare more teachers, find
new ways to recruit and train qualified
individuals, or reduce the need for
novice teachers by reducing attrition or
developing different staffing models.
Better information on the quality of
teacher preparation programs will help
States and LEAs make sound staffing
decisions.

70 For a discussion of issues and considerations related to using school fixed effects models to compare the effectiveness of teachers from different teacher preparation programs who are working in the same school, see Lockwood, J.R., McCaffrey, D., Mihaly, K., Sass, T. (2012). Where You Come From or Where You Go? Distinguishing Between School Quality and the Effectiveness of Teacher Preparation Program Graduates (Working Paper 63). Retrieved from National Center for Analysis of Longitudinal Data in Education Research.
71 CAEP 2013 Accreditation Standards. (2013). Retrieved from https://caepnet.files.wordpress.com/2013/09/final_board_approved1.
72 Teacher Preparation: Ensuring a Quality Teacher in Every Classroom. Hearing before the Senate Committee on Health, Education, Labor and Pensions. 113th Cong. (2014) (Statement by Mary Brabeck).
73 U.S. Department of Education (2015). Table 208.20. Digest of Education Statistics, 2014. Retrieved from National Center for Education Statistics.
Despite research suggesting that the
academic achievement of students
taught by graduates of different teacher
preparation programs may vary
depending on the program their teacher attended,
analyses linking student achievement to
teacher preparation programs have not
been conducted and made available
publicly for teacher preparation
programs in all States. Congress has
recognized the value of assessing and
reporting on the quality of teacher
preparation, and requires States and
IHEs to report detailed information
about the quality of teacher preparation
programs in the State under the HEA.
When reauthorizing the title II reporting
system, members of Congress noted a
goal of having teacher preparation
programs explore ways to assess the
impact of their programs’ graduates on
student academic achievement. In fact,
the report accompanying the House Bill
(H. Rep. 110–500) included the
following statement, ‘‘[i]t is the intent of
the Committee that teacher preparation
programs, both traditional and those
providing alternative routes to State
certification, should strive to increase
the quality of individuals graduating
from their programs with the goal of
exploring ways to assess the impact of
such programs on student’s academic
achievement.’’
Moreover, in roundtable discussions
and negotiated rulemaking sessions held
by the Department, stakeholders
repeatedly expressed concern that the
current title II reporting system provides
little meaningful data on the quality of
teacher preparation programs or the
impact of those programs’ graduates on
student achievement. The recent GAO
report on teacher preparation programs
noted that half or more of the States and
teacher preparation programs surveyed
said the current title II data collection
was not useful to assessing their
programs; and none of the surveyed
school district staff said they used the
data.74

74 GAO at 26.

Currently, States must annually
calculate and report data on more than
400 data elements, and IHEs must report
on more than 150 elements. While some
information requested in the current
reporting system is statutorily required,
other elements—such as whether the
IHE requires a personality test prior to
admission—are not required by statute
and do not provide information that is
particularly useful to the public. Thus,
stakeholders stressed at the negotiated
rulemaking sessions that the current
system is too focused on inputs and that
outcome-based measures would provide
more meaningful information.
Similarly, even some of the
statutorily-required data elements in the
current reporting system do not provide
meaningful information on program
performance and how program
graduates are likely to perform in a
classroom. For example, the HEA
requires IHEs to report both scaled
scores on licensure tests and pass rates
for students who complete their teacher
preparation programs. Yet, research
provides mixed findings on the
relationship between licensure test
scores and teacher effectiveness.75 This
may be because most licensure tests
were designed to measure the
knowledge and skills of prospective
teachers but not necessarily to predict
classroom effectiveness.76 The
predictive value of licensure exams is
further eroded by the significant
variation in State pass/cut scores on
these exams, with many States setting
pass scores at a very low level. The
National Council on Teacher Quality
found that every State except
Massachusetts sets its pass/cut scores on
content assessments for elementary
school teachers below the average score
for all test takers, and most States set
pass/cut scores at the 16th percentile or
lower.77 Further, even with low pass/cut
scores, some States allow teacher
candidates to take licensure exams
multiple times. Some States also permit
IHEs to exclude students who have
completed all program coursework but
have not passed licensure exams when
the IHEs report pass rates on these
exams for individuals who have
completed teacher preparation programs
under the current title II reporting
system. This may explain, in part, why
States and IHEs have reported over the
past three years a consistently high
average pass rate on licensure or
certification exams ranging between 95
and 96 percent for individuals who
completed traditional teacher
preparation programs in the 2009–10
academic year.78

75 Clotfelter, C., Ladd, H., & Vigdor, J. (2010). Teacher Credentials and Student Achievement: Longitudinal Analysis with Student Fixed Effects. Economics of Education Review, 26(6), 673–682; Goldhaber, D. (2007). Everyone's Doing It, But What Does Teacher Testing Tell Us about Teacher Effectiveness? The Journal of Human Resources, 42(4), 765–794; Buddin, R., & Zamarro, G. (2009). Teacher Qualifications and Student Achievement in Urban Elementary Schools. Journal of Urban Economics, 66, 103–115.
76 Goldhaber, D. (2007). Everyone's Doing It, But What Does Teacher Testing Tell Us about Teacher Effectiveness? The Journal of Human Resources, 42(4), 765–794.
77 National Council on Teacher Quality, State Teacher Policy Yearbook, 2011. Washington, DC: National Council on Teacher Quality (2011). For more on licensure tests, see U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service (2010), Recent Trends in Mean Scores and Characteristics of Test-Takers on Praxis II Licensure Tests. Washington, DC: U.S. Department of Education.
Thus, while the current title II
reporting system produces detailed and
voluminous data about teacher
preparation programs, the data do not
convey a clear picture of program
quality as measured by how program
graduates will perform in a classroom.
This lack of meaningful data prevents
school districts, principals, and
prospective teacher candidates from
making informed choices, creating a
market failure due to imperfect
information.
On the demand side, principals and
school districts lack information about
the past performance of teachers from
different teacher preparation programs
and may rely on inaccurate assumptions
about the quality of teacher preparation
programs when recruiting and hiring
novice teachers. An accountability
system that provides information about
how teacher preparation program
graduates are likely to perform in a
classroom and how likely they are to
stay in the classroom will be valuable to
school districts and principals seeking
to efficiently recruit, hire, train, and
retain high-quality educators. Such a
system can help to reduce teacher
attrition, a particularly important
problem because many novice teachers
do not remain in the profession, with
more than a quarter of novice teachers
leaving the teaching profession
altogether within three years of
becoming classroom teachers.79 High
teacher turnover rates are problematic
because research has demonstrated that,
on average, student achievement
increases considerably with more years
of teacher experience in the first three
through five years of teaching.80
78 Secretary’s
Tenth Report.
R. (2003). Is There Really a Teacher
Shortage? Retrieved from University of Washington
Center for the Study of Teaching and Policy Web
site: https://depts.washington.edu/ctpmail/PDFs/
Shortage-RI-09-2003.pdf.
80 Ferguson, R.F. & Ladd, H.F. (1996). How and
why money matters: An analysis of Alabama
schools. In H.F. Ladd (Ed.), Holding schools
accountable: Performance-based education reform
(pp. 265–298). Washington, DC: The Brookings
Institution; Hanushek, E., Kain, J., O’Brien, D., &
Rivkin, S. (2005). The Market for Teacher Quality
(Working Paper no. 11154). Retrieved from National
79 Ingersoll,
E:\FR\FM\31OCR2.SGM
31OCR2
Federal Register / Vol. 81, No. 210 / Monday, October 31, 2016 / Rules and Regulations
sradovich on DSK3GMQ082PROD with RULES2
On the supply side, when considering
which program to attend, prospective
teachers lack comparative information
about the placement rates and
effectiveness of a program’s graduates.
Teacher candidates may enroll in a
program without the benefit of
information on employment rates post-graduation, employer and graduate
feedback on program quality, and, most
importantly, without understanding
how well the program prepared
prospective teachers to be effective in
the classroom. NCES data indicate that
66 percent of certified teachers who
received their bachelor’s degree in 2008
took out loans to finance their
undergraduate education. These
teachers borrowed an average of
$22,905.81 The average base salary for
full-time teachers with a bachelor’s
degree in their first year of teaching in
public elementary and secondary
schools is $38,490.82 Thus, two-thirds of
prospective teacher candidates may
incur debt equivalent to 60 percent of
their starting salary in order to attend
teacher preparation programs without
access to reliable indicators of how well
these programs will prepare them for
classroom teaching or help them find a
teaching position in their chosen field.
A better accountability system with
more meaningful information will
enable prospective teachers to make
more informed choices while also
enabling and encouraging States, IHEs,
and alternative route providers to
monitor and continuously improve the
quality of their teacher preparation
programs.
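Taken together, these two data points imply the debt-to-salary ratio cited above; as a simple check of the arithmetic:

\[ \$22{,}905 \div \$38{,}490 \approx 0.60, \]

that is, average undergraduate borrowing for these teachers is roughly 60 percent of the average first-year base salary.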
The lack of meaningful data also
prevents States from restricting program
credentials to programs with the
demonstrated ability to prepare more
effective teachers, or accurately
identifying low-performing and at-risk
teacher preparation programs and
helping these programs improve. Not
surprisingly, States have not identified
many programs as low-performing or at-
risk based on the data currently
collected. In the latest title II reporting
requirement submissions, twenty-one
States did not classify any teacher
preparation programs as low-performing
or at-risk.83 Of the programs identified
by States as low-performing or at-risk,
28 were based in IHEs that participate
in the Teacher Education Assistance for
College and Higher Education (TEACH)
Grant program. The GAO also found
that some States were not assessing
whether programs in their State were
low performing at all.84 Since the
beginning of Title II, HEA reporting in
2001, 29 States and territories have
never identified a single IHE with an
at-risk or low-performing teacher
preparation program.85 Under the final
regulations, however, every State will
collect and report more meaningful
information about teacher preparation
program performance, which will enable
them to target scarce public funding
more efficiently through direct support
to more effective teacher preparation
programs and State financial aid to
prospective students attending those
programs.

81 National Center for Education Statistics (2009). Baccalaureate and Beyond Longitudinal Study. Washington, DC: U.S. Department of Education.
82 National Center for Education Statistics (2015). Digest of Education Statistics, 2014. Washington, DC: U.S. Department of Education: Table 211.20.
Similarly, under the current title II
reporting system, the Federal
government is unable to ensure that
financial assistance for prospective
teachers is used to help students attend
programs with the best record for
producing effective classroom teachers.
The final regulations help accomplish
this by ensuring that program
performance information is available for
all teacher preparation programs in all
States and by restricting eligibility for
Federal TEACH Grants to programs that
are rated ‘‘effective.’’
Most importantly, elementary and
secondary school students, including
those students in high-need schools and
communities who are
disproportionately taught by recent
teacher preparation program graduates,
will be the ultimate beneficiaries of an
improved teacher preparation program
accountability system.86 Such a system
83 Secretary’s
Tenth Report.
at 17.
85 Secretary’s Tenth Report.
86 Several studies have found that inexperienced
teachers are far more likely to be assigned to highpoverty schools, including Boyd, D., Lankford, H.,
Loeb, S., Rockoff, J., & Wyckoff, J. (2008). The
Narrowing Gap in New York City Teacher
Qualifications and Its Implications for Student
Achievement in High-Poverty Schools. Journal of
Policy Analysis and Management, 27(4), 793–818;
Clotfelter, C., Ladd, H., Vigdor, J., & Wheeler, J.
(2007). High Poverty Schools and the Distribution
of Teachers and Principals. North Carolina Law
Review, 85, 1345–1379; Sass, T., Hannaway, J., Xu,
Z., Figlio, D., & Feng, L. (2010). Value Added of
Teachers in High-Poverty Schools and LowerPoverty Schools (Working Paper No. 52). Retrieved
from National Center for Analysis of Longitudinal
84 GAO
PO 00000
Frm 00095
Fmt 4701
Sfmt 4700
75587
better focuses State and Federal
resources on promising teacher
preparation programs while informing
teacher candidates and potential
employers about high-performing
teacher preparation programs and
enabling States to more effectively
identify and improve low-performing
teacher preparation programs.
Recognizing the benefits of improved
information on teacher preparation
program quality and associated
accountability, several States have
already developed and implemented
systems that map teacher effectiveness
data back to teacher preparation
programs. The regulations help ensure
that all States generate useful data that
are accessible to the public to support
efforts to improve teacher preparation
programs.
Brief Summary of the Regulations
The Department’s plan to improve
teacher preparation has three core
elements: (1) Reduce the reporting
burden on IHEs while encouraging
States to make use of data on teacher
effectiveness to build an effective
teacher preparation accountability
system driven by meaningful indicators
of quality (title II accountability system);
(2) reform targeted financial aid for
students preparing to become teachers
by directing scholarship aid to students
attending higher-performing teacher
preparation programs (TEACH Grants);
and (3) provide more support for IHEs
that prepare high-quality teachers.
The regulations address the first two
elements of this plan. Improving
institutional and State reporting and
State accountability builds on the work
that States like Louisiana and Tennessee
have already started, as well as work
that is underway in States receiving
grants under Phase One or Two of the
Race to the Top Fund.87 All of these
States have, will soon have, or plan to
have statewide systems that track the
academic growth of a teacher’s students
by the teacher preparation program from
which the teacher graduated and, as a
result, will be better able to identify the
teacher preparation programs that are
producing effective teachers and the
policies and programs that need to be
strengthened to scale those effects.
87 The applications and Scopes of Work for States that received a grant under Phase One or Two of the Race to the Top Fund are available online at: https://www2.ed.gov/programs/racetothetop/awards.html.

Consistent with feedback the
Department has received from
stakeholders, under the regulations
States must assess the quality of teacher
preparation programs according to the
following indicators: (1) Student
learning outcomes of students taught by
graduates of teacher preparation
programs (as measured by aggregating
learning outcomes of students taught by
graduates of each teacher preparation
program); (2) job placement and
retention rates of these graduates (based
on the number of program graduates
who are hired into teaching positions
and whether they stay in those
positions); and (3) survey outcomes for
surveys of program graduates and their
employers (based on questions about
whether or not graduates of each teacher
preparation program are prepared to be
effective classroom teachers).
The regulations will help provide
meaningful information on program
quality to prospective teacher
candidates, school districts, States, and
IHEs that administer traditional teacher
preparation programs and alternative
routes to State certification or licensure
programs. The regulations will make
data available that also can inform
academic program selection, program
improvement, and accountability.
During public roundtable discussions
and subsequent negotiated rulemaking
sessions, the Department consulted with
representatives from the teacher
preparation community, States, teacher
preparation program students, teachers,
and other stakeholders about the best
way to produce more meaningful data
on the quality of teacher preparation
programs while also reducing the
reporting burden on States and teacher
preparation programs where possible.
The regulations specify three types of
outcomes States must use to assess
teacher preparation program quality, but
States retain discretion to select the
most appropriate methods to collect and
report these data. In order to give States
and stakeholders sufficient time to
develop these methods, the
requirements of these regulations are
implemented over several years.
2. Discussion of Costs, Benefits, and
Transfers
The Department has analyzed the
costs of complying with the final
regulations. Due to uncertainty about
the current capacity of States in some
relevant areas and the considerable
discretion the regulations will provide
States (e.g., the flexibility States would
have in determining who conducts the
teacher and employer surveys), we
cannot evaluate the costs of
implementing the regulations with
absolute precision. In the NPRM, the
Department estimated that the total
annualized cost of these regulations
would be between $42.0 million and
$42.1 million over ten years. However,
based on public comments received, it
became clear to us that this estimate
created confusion. In particular, a
number of commenters incorrectly
interpreted this estimate as the total cost
of the regulations over a ten year period.
That is not correct. The estimates in the
NPRM captured an annualized cost (i.e.,
between $42.0 million and $42.1
million per year over the ten year
period) rather than a total cost (i.e.,
between $42.0 million and $42.1
million in total over ten years). In
addition, these estimated costs reflected
both startup and ongoing costs, so
affected entities would likely see costs
higher than these estimates in the first
year of implementation and costs lower
than these estimates in subsequent
years. The Department believed that
these assumptions were clearly outlined
for the public in the NPRM; however,
based on the nature of public comments
received, we recognize that additional
explanation is necessary.
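To illustrate the distinction with the Department's own figures (simple arithmetic, not a new estimate): an annualized cost of $42.0 million means a cost on the order of $42.0 million in each year of the ten-year window, or roughly

\[ \$42.0 \ \text{million/year} \times 10 \ \text{years} = \$420 \ \text{million} \]

in total before discounting, not $42.0 million in total over the decade.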
The Department has reviewed the
comments submitted in response to the
NPRM and has revised some
assumptions in response to the
information we received. We discuss
specific public comments, where
relevant, in the appropriate sections
below. In general, we do not discuss
non-substantive comments.
A number of commenters expressed
general concerns regarding the cost
estimates included in the NPRM and
indicated that implementing these
regulations would cost far more than
$42.0 million over ten years. As noted
above, we believe most of these
comments arose from a fundamental
misunderstanding of the estimates
presented in the NPRM. While several
commenters attempted to provide
alternate cost estimates, we note that
many of these estimates were
unreasonably high because they
included costs for activities or
initiatives that are not required by the
regulations. For instance, in one
alternate estimate (submitted jointly by
the California Department of Education,
the California Commission on Teacher
Credentialing, and the California State
Board of Education) cited by a number
of commenters, over 95 percent of the
costs outlined were due to non-required
activities such as dramatically
expanding State standardized
assessments to all grades and subjects or
completing time- and cost-intensive
teacher evaluations of all teachers in the
State in every year. Nonetheless, we
have taken portions of those estimates
into account where appropriate (i.e.,
where the alternate estimates reflect
actual requirements of the final
regulations) in revising our
assumptions.
In addition, some commenters argued
that our initial estimates were too low
because they did not include costs for
activities not directly required by the
regulations. These activities included
making changes in State laws where
those laws prohibited the sharing of
data between State entities responsible
for teacher certification and the State
educational agency. Upon reviewing
these comments, we have declined to
include estimates of these potential
costs. Such costs are difficult to
quantify, as it is unclear how many
States would be affected, how extensive
the needed changes would be, or how
much time and resources would be
required on the part of State legislatures.
Also, we believe that many States
removed potential barriers in order to
receive ESEA flexibility prior to the
passage of ESSA, further minimizing the
potential cost of legislative changes. To
the extent that States do experience
costs associated with these actions, or
other actions not specifically required
by the regulations and therefore not
outlined below (e.g., costs associated
with including more than the minimum
number of participants in the
consultation process described in
§ 612.4(c)), our estimates will not
account for those costs.
We have also updated our estimates
using the most recently available wage
rates from the Bureau of Labor Statistics.
We have also updated our estimates of
the number of teacher preparation
programs and teacher preparation
entities using the most recent data
submitted to the Department in the 2015
title II data collection. While no
commenters specifically addressed
these issues, we believe that these
updates will provide the most
reasonable estimate of costs.
Based on revised assumptions, the
Department estimates that the total
annualized cost of the regulations will
be between $27.5 million and $27.7
million (see the Accounting Statement
section of this document for further
detail). This estimate is significantly
lower than the total annualized cost
estimated in the proposed rule. The
largest driver of this decrease is the
increased flexibility provided to States
under § 612.5(a)(1)(ii), as explained
below. To provide additional context,
we provide estimates in Table 3 for
IHEs, States, and LEAs in Year 1 and
Year 5. These estimates are not
annualized or calculated on a net
present value basis, but instead
represent real dollar estimates.
TABLE 4—ESTIMATED COSTS BY ENTITY TYPE IN YEARS 1 AND 5

                    Year 1         Year 5
IHE ............    $4,800,050     $4,415,930
State ..........    $24,077,040    $16,111,570
LEA ............    $5,859,820     $5,859,820
    Total ......    $34,736,910    $26,387,320

Relative to these costs, the major
benefit of the requirements, taken as a
whole, will be better publicly available
information on the effectiveness of
teacher preparation programs that can
be used by prospective students when
choosing programs to attend; employers
in selecting teacher preparation program
graduates to recruit, train, and hire;
States in making funding decisions; and
teacher preparation programs
themselves in seeking to improve.
The following is a detailed analysis of
the estimated costs of implementing the
specific requirements, including the
costs of complying with paperwork-related
requirements, followed by a
discussion of the anticipated benefits.88
The burden hours of implementing
specific paperwork-related requirements
are also shown in the tables in the
Paperwork Reduction Act section of this
document.

88 Unless otherwise specified, all hourly wage estimates for particular occupation categories were taken from the May 2014 National Occupational Employment and Wage Estimates for Federal, State, and local government published by the Department of Labor's Bureau of Labor Statistics and available online at www.bls.gov/oes/current/999001.htm.

Title II Accountability System (HEA Title II Regulations)

Section 205(a) of the HEA requires
that each IHE that provides a teacher
preparation program leading to State
certification or licensure report on a
statutorily enumerated series of data
elements for the programs it provides.
Section 205(b) of the HEA requires that
each State that receives funds under the
HEA provide to the Secretary and make
widely available to the public
information on the quality of traditional
and alternative route teacher
preparation programs that includes not
less than the statutorily enumerated
series of data elements it provides. The
State must do so in a uniform and
comprehensible manner, conforming
with definitions and methods
established by the Secretary. Section
205(c) of the HEA directs the Secretary
to prescribe regulations to ensure the
validity, reliability, accuracy, and
integrity of the data submitted. Section
206(b) requires that IHEs provide
assurance to the Secretary that their
teacher training programs respond to the
needs of LEAs, be closely linked with
the instructional decisions novice
teachers confront in the classroom, and
prepare candidates to work with diverse
populations and in urban and rural
settings, as applicable. Consistent with
these statutory provisions, the
Department is issuing regulations to
ensure that the data reported by IHEs
and States is accurate. The following
sections provide a detailed examination
of the costs associated with each of the
regulatory provisions.

Institutional Report Card Reporting Requirements

The regulations require that beginning
on April 1, 2018, and annually
thereafter, each IHE that conducts a
traditional teacher preparation program
or alternative route to State certification
or licensure program and enrolls
students receiving title IV, HEA funds,
report to the State on the quality of its
program using an IRC prescribed by the
Secretary.
Under the current IRC, IHEs typically
report at the entity level, rather than the
program level, such that an IHE that
administers multiple teacher
preparation programs typically gathers
data on each of those programs,
aggregates the data, and reports the
required information as a single teacher
preparation entity on a single report
card. By contrast, the regulations
generally require that States report on
program performance at the individual
program level. The Department
originally estimated that the initial
burden for each IHE to adjust its
recordkeeping systems in order to report
the required data separately for each of
its teacher preparation programs would
be four hours per IHE. Numerous
commenters argued that this estimate
was low. Several commenters argued
that initial set-up would take 8 to 12
hours, while others argued that it would
take 20 to 40 hours per IHE. While we
recognize that the amount of time it will
take to initially adjust their record-keeping
systems will vary, we believe
that the estimates in excess of 20 hours
are too high, given that IHEs will only
be adjusting the way in which they
report data, rather than collecting new
data. However, the Department found
arguments in favor of both 8 hours and
12 hours to be compelling and
reasonable. We believe that eight hours
is a reasonable estimate for how long it
will take to complete this process
generally; and for institutions with
greater levels of oversight, review, or
complexity, this process may take
longer. Without additional information
about the specific levels of review and
oversight at individual institutions, we
assume that the amount of time it will
take institutions to complete this work
will be normally distributed between 8
and 12 hours, with a national average of
10 hours per institution. Therefore, the
Department has upwardly revised its
initial estimate of four hours to ten
hours. In the most recent year for which
data are available, 1,490 IHEs submitted
IRCs to the Department, for an estimated
one-time cost of $384,120.89
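For readers tracing the arithmetic, the one-time figure follows from the revised burden estimate and the weighted hourly wage described in footnote 89:

\[ 1{,}490 \ \text{IHEs} \times 10 \ \text{hours} \times \$25.78 \ \text{per hour} \approx \$384{,}120. \]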
One commenter argued that
institutions would have to make costly
updates and upgrades to their existing
information technology (IT) platforms in
order to generate the required new
reports. However, given that institutions
will not be required to generate reports
on any new data elements, but only
disaggregate the data already being
collected by program, and that we
include cost estimates for making the
necessary changes to their existing
systems in order to generate reports in
that way, we do not believe it would be
appropriate to include additional costs
associated with large IT purchases in
this cost estimate.
The Department further estimated that
each of the 1,490 IHEs would need to
spend 78 hours to collect the data
elements required for the IRC for its
teacher preparation programs. Several
commenters argued that it would take
longer than 78 hours to collect the data
elements required for the IRC each year.
The Department reviewed its original
estimates in light of these comments
and the new requirement for IHEs to
identify, in their IRCs, whether each
program met the definition of a teacher
preparation program provided through
distance education. Pursuant to that
review, the Department has increased its
initial estimate to 80 hours, for an
annual cost of $3,072,980.
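This annual figure reflects the same wage assumption applied to the revised 80-hour collection estimate:

\[ 1{,}490 \ \text{IHEs} \times 80 \ \text{hours} \times \$25.78 \ \text{per hour} \approx \$3{,}072{,}980. \]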
We originally estimated that entering
the required information into the
information collection instrument
would take 13.65 hours per entity. We
currently estimate that, on average, it
takes one hour for institutions to enter
the data for the current IRC. The
Department believed that it would take
institutions approximately as long to
complete the report for each program as
it does currently for the entire entity. As
such, the regulations would result in an
additional burden of the time to
complete all individual program level
reports minus the current entity time
burden. In the NPRM, this estimate was
based on an average of 14.65 teacher
preparation programs per entity—22,312
IHE-based programs divided by 1,522
IHEs. Given that entities are already
taking approximately one hour to
complete the report, we estimated the
time burden associated with this
regulation at 13.65 hours (14.65 hours to
complete individual program level
reports minus one hour of current entity
time burden). Based on the most recent
data available, we now estimate an
average of 16.40 teacher preparation
programs per entity—24,430 IHE-based
programs divided by 1,490 IHEs. This
results in a total cost of $591,550 to the
1,490 IHEs. One commenter stated that
it would take a total of 140 hours to
enter the required information into the
information collection instrument.
However, it appears that this estimate is
based on an assumption that it would
require 10 hours of data entry for each
program at an institution. Given the
number of data elements involved and
our understanding of how long
institutions have historically taken to
complete data entry tasks, we believe
this estimate is high, and that our
revised estimate, as described above, is
appropriate.

89 Unless otherwise specified, for paperwork reporting requirements, we use a wage rate of $25.78, which is based on a weighted national average hourly wage for full-time Federal, State and local government workers in office and administrative support (75 percent) and managerial occupations (25 percent), as reported by the Bureau of Labor Statistics in the National Occupational Employment and Wage Estimates, May 2014.
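The $591,550 figure can be reproduced from the estimates above: 16.40 program-level reports at one hour each, less the one hour of existing entity-level burden, leaves 15.40 incremental hours per entity, so

\[ 1{,}490 \ \text{IHEs} \times 15.40 \ \text{hours} \times \$25.78 \ \text{per hour} \approx \$591{,}550. \]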
The regulations also require that each
IHE provide the information reported on
the IRC to the general public by
prominently and promptly posting the
IRC on the IHE’s Web site, and, if
applicable, on the teacher preparation
portion of the Web site. We originally
estimated that each IHE would require
30 minutes to post the IRC. One
commenter stated that this estimate was
reasonable given the tasks involved,
while two commenters argued that this
was an underestimate. One of these
commenters stated that posting data on
the institutional Web site often involved
multiple staff, which was not captured
in the Department’s initial estimate.
Another commenter argued that this
estimate did not take into account time
for data verification, drafting of
summary text to accompany the
document, or ensuring compliance with
the Americans with Disabilities Act
(ADA). Given that institutions will
simply be posting on their Web site the
final IRC that was submitted to the
Department, we assume that the
document has already been reviewed by
all necessary parties and that all
included data have been verified prior
to being submitted to the Department.
As such, the requirement to post the IRC
to the Web site should not incur any
additional levels of review or data
validation. Regarding ADA compliance,
we assume the commenter was referring
to the broad set of statutory
requirements regarding accessibility of
communications by entities receiving
Federal funding. In general, it is our
belief that the vast majority of
institutions, when developing materials
for public dissemination, already ensure
that such materials meet government- and industry-recognized standards for
accessibility. To the extent that they do
not already do so, nothing in the
regulations imposes additional
accessibility requirements beyond those
in the Rehabilitation Act of 1973, as
amended, or the ADA. As such, while
there may be accessibility-related work
associated with the preparation of these
documents that is not already within the
standard procedures of the institution,
such work is not a burden created by the
regulations. Thus, we believe our initial
estimate of 30 minutes is appropriate,
for an annual cumulative cost of
$19,210. The estimated total annual cost
to IHEs to meet the requirements
concerning IRCs would be $3,991,030.
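A similar back-of-the-envelope check, under the same 1,490-IHE and $25.78-per-hour assumptions, reproduces the posting cost:

```python
# Half an hour per IHE to post the final IRC, at the same $25.78 hourly rate.
posting_cost = 0.5 * 1490 * 25.78
print(round(posting_cost, -1))   # 19210.0, matching the published $19,210
```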
We note that several commenters, in
response to the Supplemental NPRM,
argued that institutions would
experience increased compliance costs
given new provisions related to teacher
preparation programs provided through
distance education. However, nothing in
the Supplemental NPRM proposed
changes to institutional burden under
§ 612.3. Under the final regulations, the
only increased burden on IHEs with
respect to teacher preparation programs
provided through distance education is
that they identify whether each of the
teacher preparation programs they offer
meets the definition in § 612.2. We
believe that the additional two hours
estimated for data collection above the
Department’s initial estimate provides
more than enough time for IHEs to meet
this requirement. We do not estimate
additional compliance costs to accrue to
IHEs as a result of provisions in this
regulation related to teacher preparation
programs provided through distance
education.
State Report Card Reporting
Requirements
Section 205(b) of the HEA requires
each State that receives funds under the
HEA to report annually to the Secretary
on the quality of teacher preparation in
the State, both for traditional teacher
preparation programs and for alternative
routes to State certification or licensure
programs, and to make this report
available to the general public. In the
NPRM, the Department estimated that
the 50 States, the District of Columbia,
the Commonwealth of Puerto Rico,
Guam, American Samoa, the United
States Virgin Islands, the
Commonwealth of the Northern Mariana
Islands, and the Freely Associated
States, which include the Republic of
the Marshall Islands, the Federated
States of Micronesia, and the Republic
of Palau would each need 235 hours to
report the data required under the SRC.
In response to the original NPRM, two
commenters argued that this estimate
was too low. Specifically, one
commenter stated that, based on the
amount of time their State has
historically devoted to reporting the
data in the SRC, it would take
approximately 372.5 hours to complete.
We note that not all States will be able
to complete the reporting requirements
in 235 hours and that some States,
particularly those with more complex
systems or more institutions, will take
much longer. We also note that the State
identified by the commenter in
developing the 372.5 hour estimate
meets both of those conditions—it uses
a separate reporting structure to develop
its SRC (one of only two States
nationwide to do so), and has an above-average number of preparation
programs. As such, it is reasonable to
assume that this State would require
more than the nationwide average
amount of time to complete the process.
Another commenter stated that the
Department’s estimates did not take into
account the amount of time and
potential staff resources needed to
prepare and post the information. We
note that there are many other aspects
of preparing and posting the data that
are not reflected in this estimate, such
as collecting, verifying, and validating
the data. We also note that this estimate
does not take into account the time
required to report on student learning
outcomes, employment outcomes, or
survey results. However, all of these
estimates are included elsewhere in
these cost estimates. We believe that,
taken as a whole, all of these various
elements appropriately capture the time
and staff resources necessary to comply
with the SRC reporting requirement.
As proposed in the Supplemental
NPRM, and as described in greater
detail below, in these final regulations,
States will be required to report on
teacher preparation programs offered
through distance education that produce
25 or more certified teachers in their
State. The Department estimates that the
reporting on these additional programs,
in conjunction with the reduction in the
total number of teacher preparation
programs from our initial estimates in
the NPRM, will result in a net increase
in the time necessary to report the data
required in the SRC from the 235 hours
estimated in the NPRM to 243 hours, for
an annual cost of $369,610.
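As a check, the revised SRC figure is consistent with the 59 reporting jurisdictions listed above at the hourly rate used for title II reporting tasks (an editorial verification, with rounding to the nearest $10 assumed):

```python
# SRC reporting burden: 243 hours per entity across the 59 reporting jurisdictions
# (50 States, DC, PR, Guam, AS, USVI, CNMI, RMI, FSM, and Palau).
src_cost = 59 * 243 * 25.78
print(round(src_cost, -1))   # 369610.0, matching the published $369,610
```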
Section 612.4(a)(2) requires that States
post the SRC on the State’s Web site.
Because all States already have at least
one Web site in operation, we originally
estimated that posting the SRC on an
existing Web site would require no more
than half an hour at a cost of $25.78 per
hour. Two commenters suggested that
this estimate was too low. One
commenter argued that the
Department’s initial estimate did not
take into account time to create Web-ready materials or to address technical
errors. In general, the regulations do not
require the SRC to be posted in any
specific format and we believe that it
would take a State minimal time to
create a file that would be compliant
with the regulations by, for example,
creating a PDF containing the SRC. We
were unable to determine from this
comment the specific technical errors
that the commenter was concerned
about, but believe that enough States
will need less than the originally
estimated 30 minutes to post the SRC so
that the overall average will not be
affected if a handful of States encounter
technical issues. Another commenter
estimated that, using its current Web
reporting system, it would take
approximately 450 hours to initially set
up the SRC Web site with a recurring 8
hours annually to update it. However,
we note that the system the commenter
describes is more labor intensive and
includes more data analysis than the
regulations require. While we recognize
the value in States’ actively trying to
make the SRC data more accessible and
useful to the public, we cannot
accurately estimate how many States
will choose to do more than the
regulations require, or what costs they
would encounter to do so. We have
therefore opted to estimate only the time
and costs necessary to comply with the
regulations. As such, we retain our
initial estimate of 30 minutes to post the
SRC. For the 50 States, the District of
Columbia, the Commonwealth of Puerto
Rico, Guam, American Samoa, the
United States Virgin Islands, the
Commonwealth of the Northern Mariana
Islands, and the Freely Associated
States, which include the Republic of the
Marshall Islands, the Federated States of
Micronesia, and the Republic of Palau,
the total annual estimated cost of
meeting this requirement would be
$760.
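The same 59 jurisdictions at half an hour each, under the same wage assumption, yield the published figure:

```python
# Posting the SRC: 30 minutes per jurisdiction at $25.78 per hour.
print(round(59 * 0.5 * 25.78, -1))   # 760.0, matching the published $760
```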
Scope of State Reporting
The costs associated with the
reporting requirements in paragraphs (b)
and (c) of § 612.4 are discussed in the
following paragraphs. The requirements
regarding reporting of a teacher
preparation program’s indicators of
academic content knowledge and
teaching skills do not apply to the
insular areas of American Samoa, Guam,
the Commonwealth of the Northern
Mariana Islands, the U.S. Virgin Islands,
the freely associated States of the
Republic of the Marshall Islands, the
Federated States of Micronesia, and the
Republic of Palau. Due to the small size
of these areas and the limited resources
and capacity in some of them, we
believe that the cost to these insular
areas of collecting and reporting data on
these indicators would not be warranted.
Number of Distance Education Programs
As described in the Supplemental
NPRM (81 FR 18808), the Department
initially estimated that the portions of
this regulation relating to reporting on
teacher preparation programs offered
through distance education would result
in 812 additional reporting instances for
States. A number of commenters
acknowledged the difficulty in arriving
at an accurate estimate of the number of
teacher preparation programs offered
through distance education that would
be subject to reporting under the final
regulation. However, those commenters
also noted that, without a clear
definition from the Department on what
constitutes a teacher preparation
program offered through distance
education, it would be exceptionally
difficult to offer an alternative estimate.
No commenters provided alternate
estimates. In these final regulations, the
Department has adopted a definition of
teacher preparation program offered
through distance education. We believe
that this definition is consistent with
our initial estimation methodology and
have no reason to adjust that estimate at
this time.
Reporting of Information on Teacher
Preparation Program Performance
Under § 612.4(b)(1), a State would be
required to make meaningful
differentiations in teacher preparation
program performance using at least
three performance levels—low-performing teacher preparation
program, at-risk teacher preparation
program, and effective teacher
preparation program—based on the
indicators in § 612.5, including student
learning outcomes and employment
outcomes for teachers in high-need
schools. Because States would have the
discretion to determine the weighting of
these indicators, the Department
assumes that States would consult with
early adopter States or researchers to
determine best practices for making
such determinations and whether an
underlying qualitative basis should exist
for these decisions. The Department
originally estimated that State higher
education authorities responsible for
making State-level classifications of
teacher preparation programs would
require at least 35 hours to discuss
methods for ensuring that meaningful
differentiations are made in their
classifications. This initial estimate also
included determining what it meant for
particular indicators to be included ‘‘in
significant part’’ and what constituted
‘‘satisfactory’’ student learning
outcomes, which are not included in the
final regulations.
A number of commenters stated that
35 hours was an underestimate. Of the
commenters that suggested alternative
estimates, those estimates typically
ranged from 60 to 70 hours (the highest
estimate was 350 hours). Based on these
comments, the Department believes that
its original estimate would not have
provided sufficient time for multiple
staff to meet and discuss teacher
preparation program quality in a
meaningful way. As such, and given
that these staff will be making decisions
regarding a smaller range of issues, the
Department is revising its estimate to 70
hours per State. We believe that this
amount of time would be sufficient for
staff to discuss and make decisions on
these issues in a meaningful and
purposeful way. To estimate the cost per
State, we assume that the State
employee or employees would likely be
in a managerial position (with national
average hourly earnings of $45.58), for
a total one-time cost for each of the 50
States, the District of Columbia, and the
Commonwealth of Puerto Rico of
$165,910.
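The published figure can be reconstructed directly from the revised hours and the managerial wage rate (a sketch; rounding to the nearest $10 is assumed):

```python
# One-time cost of establishing meaningful differentiations: 70 hours per jurisdiction.
jurisdictions = 52        # 50 States, the District of Columbia, and Puerto Rico
cost = 70 * 45.58 * jurisdictions
print(round(cost, -1))    # 165910.0, matching the published $165,910
```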
Fair and Equitable Methods
Section 612.4(c)(1) requires States to
consult with a representative group of
stakeholders to determine the
procedures for assessing and reporting
the performance of each teacher
preparation program in the State. The
regulations specify that these
stakeholders must include, at a
minimum, representatives of leaders
and faculty of traditional teacher
preparation programs and alternative
routes to State certification or licensure
programs; students of teacher
preparation programs; LEA
superintendents; local school board
members; elementary and secondary
school leaders and instructional staff;
elementary and secondary school
students and their parents; IHEs that
serve high proportions of low-income
students or students of color, or English
learners; advocates for English learners
and students with disabilities; officials
of the State’s standards board or other
appropriate standards body; and a
representative of at least one teacher
preparation program provided through
distance education. Because the final
regulations do not prescribe any
particular methods or activities, we
expect that States will implement these
requirements in ways that vary
considerably, depending on their
population and geography and any
applicable State laws concerning public
meetings.
Many commenters stated that their
States would likely adopt methods
different from those outlined below. In
particular, these commenters argued
that their States would include more
than the minimum number of
participants we used for these estimates.
In general, while States may opt to do
more than what is required by the
regulations, for purposes of estimating
the cost, we have based the estimate on
what the regulations require. If States
opt to include more participants or
consult with them more frequently or
for longer periods of time, then the costs
incurred by States and the participants
would be higher.
In order to estimate the cost of
implementing these requirements, we
assume that the average State will
convene at least three meetings with at
least the following representatives from
required categories of stakeholders: One
administrator or faculty member from a
traditional teacher preparation program,
one administrator or faculty member
from an alternative route teacher
preparation program, one student from
a traditional or alternative route teacher
preparation program, one teacher or
other instructional staff, one
representative of a small teacher
preparation program, one LEA
superintendent, one local school board
member, one student in elementary or
secondary school and one of his or her
parents, one administrator or faculty
member from an IHE that serves high
percentages of low-income students or
students of color, one representative of
the interests of English learners, one
representative of the interests of
students with disabilities, one official
from the State’s standards board or other
appropriate standards body, and one
administrator or faculty from a teacher
preparation program provided through
distance education. We note that a
representative of a small teacher
preparation program and a
representative from a teacher
preparation program provided through
distance education were not required
stakeholders in the proposed
regulations, but are included in these
final regulations.
To estimate the cost of participating
in these meetings for the required
categories of stakeholders, we initially
assumed that each meeting would
require four hours of each participant’s
time and used the following national
average hourly wages for full-time State
government workers employed in these
professions: Postsecondary education
administrators, $50.57 (4 stakeholders);
elementary or secondary education
administrators, $50.97 (1 stakeholder);
postsecondary teachers, $45.78 (1
stakeholder); primary, secondary, and
special education school teachers,
$41.66 (1 stakeholder). For the official
from the State’s standards board or other
appropriate standards body, we used the
national average hourly earnings of
$59.32 for chief executives employed by
State governments. For the
representatives of the interests of
students who are English learners and
students with disabilities, we used the
national average hourly earnings of
$62.64 for lawyers in educational
services (including private, State, and
local government schools). For the
opportunity cost to the representatives
of elementary and secondary school
students, we used the Federal minimum
wage of $7.25 per hour and the average
hourly wage for all workers of $22.71.
These wage rates could represent either
the involvement of a parent and a
student at these meetings, or a single
representative from an organization
representing their interests who has an
above-average wage rate (i.e., $29.96).
We used the average hourly wage rate
for all workers ($22.71) for the school
board official. For the student from a
traditional or alternative route teacher
preparation program, we used the 25th
percentile of hourly wage for all workers
of $11.04. We also assumed that at least
two State employees in managerial
positions (with national average hourly
earnings of $45.58) would attend each
meeting, with one budget or policy
analyst to assist them (with national
average hourly earnings of $33.98).90
90 Unless otherwise noted, all wage rates in this section are based on average hourly earnings as reported in the May 2014 National Occupational Employment and Wage Estimates from the Bureau of Labor Statistics, available online at www.bls.gov/oes/current/oessrci.htm. Where hourly wages were unavailable, we estimated hourly wages using average annual wages from this source and the average annual hours worked from the National Compensation Survey, 2010.
A number of commenters stated that
this consultation process would take
longer than the 12 hours in our initial
estimate and that our estimates did not
include time for preparation for the
meetings or for participant travel.
Alternate estimates from commenters
ranged from 56 hours to 3,900 hours.
Based on the comments we received, the
Department believes that both States
and participants may opt to meet for
longer periods of time at each meeting
or more frequently. However, we believe
that many of the estimates from
commenters were overestimates for an
annual process. For example, the 3,900
hour estimate would require a
commitment on the part of participants
totaling 75 hours per week for 52 weeks
per year. We believe this is highly
unrealistic. However, we do recognize
that States and interested parties may
wish to spend a greater amount of time
in the first year to discuss and establish
the initial framework than we initially
estimated. As such, we are increasing
our initial estimate of 12 hours in the
first year to 60 hours. We believe that
this amount of time will provide an
adequate amount of time for discussion
of these important issues. We therefore
estimate the cumulative cost to the 50
States, the District of Columbia, and
Puerto Rico to be $2,385,900.
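The published total is consistent with the wage rates listed above under one reading of the required stakeholder mix. The per-category headcounts below are our own inference, not a breakdown the Department publishes, but under that assumption the arithmetic reproduces the $2,385,900 figure:

```python
# Hourly wage rates are taken from the text; the headcounts per category are
# an editorial inference and may not match the Department's exact allocation.
panel_hourly = (
    5 * 50.57         # postsecondary administrators/faculty (inferred count)
    + 50.97           # LEA superintendent
    + 45.78           # postsecondary teacher
    + 41.66           # K-12 teacher or other instructional staff
    + 59.32           # standards board official
    + 2 * 62.64       # advocates for English learners and students with disabilities
    + (7.25 + 22.71)  # K-12 student and parent
    + 22.71           # local school board member
    + 11.04           # teacher preparation program student
    + 2 * 45.58       # State employees in managerial positions
    + 33.98           # budget or policy analyst
)
first_year_cost = panel_hourly * 60 * 52   # 60 first-year hours, 52 jurisdictions
print(round(first_year_cost, -1))          # 2385900.0, matching the published $2,385,900
```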
We also recognize that, although the
Department initially only estimated this
consultative process occurring once
every five years, States may wish to
have a continuing consultation with
these stakeholders. We believe that this
engagement would take place either
over email or conference call, or with an
on-site meeting. We therefore are adding
an estimated 20 hours per year for the
intervening years for consulting with
stakeholders. We therefore estimate that
these additional consultations with
stakeholders will cumulatively cost the
50 States, the District of Columbia, and
Puerto Rico $690,110.
States would also be required to
report on the State-level rewards or
consequences associated with the
designated performance levels and on
the opportunities they provide for
teacher preparation programs to
challenge the accuracy of their
performance data and classification of
the program. Costs associated with
implementing these requirements are
estimated in the discussion of annual
costs associated with the SRC.
Procedures for Assessing and Reporting
Performance
Under final § 612.4(b)(3), a State
would be required to ensure that teacher
preparation programs in the State are
included on the SRC, but with some
flexibility due to the Department’s
recognition that reporting on teacher
preparation programs particularly
consisting of a small number of
prospective teachers could present
privacy and data validity concerns. See
§ 612.4(b)(5). The Department originally
estimated that each State would need up
to 14 hours to review and analyze
applicable State and Federal privacy
laws and regulations and existing
research on the practices of other States
that set program size thresholds in order
to determine the most appropriate
aggregation level and procedures for its
own teacher preparation program
reporting. Most of the comments the
Department received on this estimate
focused on the comparability of data
across years and stated that this process
would have to be conducted annually in
order to reassess appropriate cut points.
The Department agrees that
comparability could be an issue in
several instances, but is equally
concerned with variability in the data
induced solely by the small size of
programs. As such, we believe
providing States the flexibility to
aggregate data across small programs is
key to ensuring meaningful data for the
public. Upon further review, the
Department also recognized an error in
the NPRM, in which we initially stated
that this review would be a one-time
cost. Contrary to that statement, our
overall estimates in the NPRM included
this cost on an annual basis. This review
will likely take place annually to
determine whether there are any
necessary changes in law, regulation, or
practice that need to be taken into
consideration. As such, we are revising
our statement to clarify that these costs
will be reflected annually. However,
because of the error in the original
description of the burden estimate, this
change does not substantively affect the
underlying calculations.
Two commenters stated that the
Department’s initial estimate seemed
low given the amount of work involved
and three other commenters stated that
the Department’s initial estimates were
adequate. Another commenter stated
that this process would likely take
longer in his State. No commenters
offered alternative estimates. For the
vast majority of States, we continue to
believe that 14 hours is a sufficient
amount of time for staff to review and
analyze the applicable laws and
statutes. However, given the potential
complexity of these issues, as raised by
commenters, we recognize that there
may be additional staff involved and
additional meetings required for
purposes of consultation. In order to
account for these additional burdens
where they may exist, the Department is
increasing its initial estimate to 20
hours. We believe that this will provide
sufficient time for review, analysis, and
discussion of these important issues.
This provides an estimated cost to the
50 States, the District of Columbia, and
the Commonwealth of Puerto Rico of
$51,750, based on the average national
hourly earnings for a lawyer employed
full-time by a State government
($49.76).
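A quick check of this figure, using the State-government lawyer wage cited above:

```python
# Annual privacy and aggregation review: 20 hours per jurisdiction at the lawyer wage rate.
print(round(52 * 20 * 49.76, -1))   # 51750.0, matching the published $51,750
```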
Required Elements of the State Report
Card
For purposes of reporting under
§ 612.4, each State will need to establish
indicators that would be used to assess
the academic content knowledge and
teaching skills of the graduates of
teacher preparation programs within its
jurisdiction. At a minimum, States must
base their assessments on student
learning outcomes, employment
outcomes, survey outcomes, and
whether or not the program is
accredited by a specialized accrediting
agency recognized by the Secretary for
accreditation of professional teacher
education programs, or provides teacher
candidates with content and
pedagogical knowledge, and quality
clinical preparation, and has rigorous
teacher candidate exit qualifications.
States are required to report these
outcomes for teacher preparation
programs within their jurisdiction, with
the only exceptions being for small
programs for which aggregation under
§ 612.4(b)(3)(ii) would not yield the
program size threshold (or for a State
that chooses a lower program size
threshold, would not yield the lower
program size threshold) for that
program, and for any program where
reporting data would lead to conflicts
with Federal or State privacy and
confidentiality laws and regulations.
Student Learning Outcomes
In § 612.5, the Department requires
that States assess the performance of
teacher preparation programs based in
part on data on the aggregate learning
outcomes of students taught by novice
teachers prepared by those programs.
States have the option of calculating
these outcomes using student growth, a
teacher evaluation measure that
includes student growth, another State-determined measure relevant to
calculating student learning outcomes,
or a combination of the three.
Regardless of how they determine
student learning outcomes, States are
required to link these data to novice
teachers and their teacher preparation
programs. In the NPRM, we used
available sources of information to
assess the extent to which States
appeared to already have the capacity to
measure student learning outcomes and
estimated the additional costs States
that did not currently have the capacity
might incur in order to comply with the
regulations. However, in these final
regulations, the Department has
expanded the definition of ‘‘teacher
evaluation measure’’ and provided
States with the discretion to use a State-determined measure relevant to
calculating student learning outcomes,
which they did not have in the
proposed regulations. In our initial
estimates, the Department assumed that
only eight States would experience costs
associated with measuring student
learning outcomes. Of those, the
Department noted that two already had
annual teacher evaluations that
included at least some objective
evidence of student learning. For these
two States, we estimated it would cost
approximately $596,720 to comply with
the proposed regulations. For the six
remaining States, we estimated a cost of
$16,079,390. We note that several
commenters raised concerns about the
specifics of some of our assumptions in
making these estimates, particularly the
amount of time we assumed it would
take to complete the tasks we described.
We outline and respond to those
comments below. However, given the
revised definition of ‘‘teacher evaluation
measure,’’ the additional option for
States to use a State-defined measure
other than student growth or a teacher
evaluation measure, and the measures
that States are already planning to
implement consistent with ESSA, we
believe all States either already have in
place a system for measuring student
learning outcomes or are already
planning to have one in place absent
these regulations. As such, we no longer
believe that States will incur costs
associated with measuring student
learning outcomes solely as a result of
these regulations.
Tested Grades and Subjects
In the NPRM, we assumed that the
States would not need to incur any
additional costs to measure student
growth for tested grades and subjects
and would only need to link these
outcomes to teacher preparation
programs by first linking the students’
teachers to the teacher preparation
program from which they graduated.
The costs of linking student learning
outcomes to teacher preparation
programs are discussed below. Several
commenters stated that assuming no
costs for teachers in tested grades and
subjects was unrealistic because this
estimate was based on assurances
provided by States, rather than on an
assessment of actual State practice. We
recognize the commenters’ point. States
that have made assurances to provide
these student growth data may not
currently be providing this information
to teachers and therefore will still incur
a cost to do so. However, such cost and
burden is not occurring as a result of the
regulations, but as a result of prior
assurances made by the States under
other programs. In general, we do not
include costs herein that arise from
other programs or requirements, but
only those that are newly created by the
final rule. As such, we continue to
estimate no new costs in this area for
States to comply with this final rule.
Non-Tested Grades and Subjects
In the NPRM, we assumed that the
District of Columbia, Puerto Rico, and
the 42 States that had their
requests for flexibility regarding specific
requirements of the ESEA approved,
would not incur additional costs to
comply with the proposed regulations.
This was, in part, because the teacher
evaluation measures that they agreed to
implement as part of the flexibility
would meet the definition of a ‘‘teacher
evaluation measure’’ under the
proposed regulations. Some commenters
expressed doubt that there would be no
additional costs for these States, and
others cited costs associated with
developing new assessments for all
currently non-tested grades and subjects
(totaling as many as 57 new
assessments). We recognize that States
likely incurred costs to implement
statewide comprehensive teacher
evaluations. However, those additional
costs did not accrue to States as a result
of the regulations, but instead as part of
their efforts under flexibility
agreements. Therefore, we do not
include an analysis of costs for States
that received ESEA flexibility herein.
Additionally, as noted previously, the
regulations do not require States to
develop new assessments for all
currently non-tested grades and
subjects. Therefore, we do not include
costs for such efforts in these estimates.
To estimate, in the NPRM, the cost of
measuring student growth for teachers
in non-tested grades and subjects in the
eight States that were not approved for
ESEA flexibility, we divided the States
into two groups—those who had annual
teacher evaluations with at least some
objective evidence of student learning
outcomes and those that did not.
For those States that did not have an
annual teacher evaluation in place, we
estimated that it would take
approximately 6.85 hours of a teacher’s
time and 5.05 hours of an evaluator’s
time to measure student growth using
student learning objectives. Two
commenters stated that these were
underestimates, specifically noting that
certain student outcomes (e.g., in the
arts) are process-oriented and would
likely take longer. We recognize that it
may be more time-intensive to develop
student learning objectives to measure
student growth in some subject areas.
However, the Rhode Island model we
used as a basis for these estimates was
designed to be used across subject areas,
including the arts. Further, we believe
that both teachers and evaluators would
have sufficient expertise in their content
areas that they would be able to
complete the activities outlined in the
Rhode Island guidance in times
approximating our initial estimates. As
such, we continue to believe those
estimates were appropriate for the
average teacher.
In fact, we believe that this estimate
likely overstated the cost to States that
already require annual evaluations of all
novice teachers because many of these
evaluations would already encompass
many of the activities in the framework.
The National Council on Teacher
Quality has reported that two of the
eight States that did not receive ESEA
flexibility required annual evaluations
of all novice teachers and that those
evaluations included at least some
objective evidence of student learning.
In these States, we initially estimated
that teachers and evaluators would need
to spend only a combined three hours
to develop and measure against student
learning objectives for the 4,629 novice
teachers in these States.
Several commenters stated that their
States did not currently have these data,
and others argued that this estimate did
not account for the costs of verifying the
data. We understand that States may not
currently have structures in place to
measure student learning outcomes as
defined in the proposed rules. However,
we believe that the revisions in the final
rule provide sufficient flexibility to
States to ensure that they can meet the
requirements of this section without
incurring additional measurement costs
as a result of compliance with this
regulation. We have included costs for
challenging data elsewhere in these
estimates.
Linking Student Learning Outcomes to
Teacher Preparation Programs
Whether using student scores on State
assessments, teacher evaluation ratings,
or other measures of student growth,
under the regulations States must link
the student learning outcomes data back
to the teacher, and then back to that
teacher’s preparation program. The costs
to States to comply with this
requirement will depend, in part, on the
data and linkages in their statewide
longitudinal data system. Through the
Statewide Longitudinal Data Systems
(SLDS) program, the Department has
awarded $575.7 million in grants to
support data systems that, among other
things, allow States to link student
achievement data to individual teachers
and to postsecondary education
systems. Forty-seven States, the District
of Columbia, and the Commonwealth of
Puerto Rico have already received at
least one grant under this program to
support the development of these data
systems, so we expect that the cost to
these States of linking student learning
outcomes to teacher preparation
programs would be lower than for the
remaining States.
According to information from the
SLDS program in June 2015, nine States
currently link K–12 teacher data,
including data on both teacher/
administrator evaluations and teacher
preparation programs, to K–12 student
data. An additional 11 States and the
District of Columbia are currently in the
process of establishing this linkage, and
ten States and the Commonwealth of
Puerto Rico have plans to add this
linkage to their systems during their
SLDS grant. Based on this information,
it appears that 30 States, the
Commonwealth of Puerto Rico, and the
District of Columbia either already have
the ability to aggregate data on student
achievement of students taught by
program graduates and link those data
back to teacher preparation programs or
have committed to doing so; therefore,
we do not estimate any additional costs
for these States to comply with this
aspect of the regulations. We note that,
based on information from other
Department programs and initiatives, a
larger number of States currently make
these linkages and would therefore
incur no additional costs associated
with the regulations. However, for
purposes of this estimate, we use data
from the SLDS program. As a result,
these estimates are likely overestimates
of the actual costs borne by States to
make these data connections.
During the development of the
regulations, the Department consulted
with experts familiar with the
development of student growth models
and longitudinal data systems. These
experts indicated that the cost of
calculating growth for students taught
by individual teachers and aggregating
these data according to the teacher
preparation program that these teachers
completed would vary among States.
For example, in States in which data on
teacher preparation programs are
housed within different or even
multiple different postsecondary data
systems that are not currently linked to
data systems for elementary through
secondary education students and
teachers, these experts suggested that a
reasonable estimate of the cost of
additional staff or vendor time to link
and analyze the data would be $250,000
per State. For States that already have
data systems that include data from
elementary to postsecondary education
levels, we estimate that the cost of
additional staff or vendor time to
analyze the data would be $100,000.
Since we do not know enough about the
data systems in the remaining 20 States
to determine whether they are likely to
incur the higher or lower estimate of
costs, we averaged the higher and lower
figure. Accordingly, we estimate that the
remaining 20 States will need to incur
an average cost of $175,000 to develop
models to calculate growth for students
taught by individual teachers and then
link these data to teacher preparation
programs for a total cost of $3,500,000.
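The cost logic for the remaining States reduces to a simple average of the two expert estimates (a sketch of the method as stated in the text):

```python
# Linking student learning outcomes to preparation programs for the remaining 20 States.
high_estimate = 250_000   # States with unlinked or multiple postsecondary data systems
low_estimate = 100_000    # States with existing elementary-to-postsecondary data systems
average = (high_estimate + low_estimate) / 2   # 175,000, since the 20 States cannot be classified
print(average * 20)       # 3500000.0, matching the published $3,500,000
```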
Several commenters stated that their
States did not currently have the ability
to make these linkages and their data
systems would have to be updated and
that, even in States that already have
these linkages, there may be required
updates to the system. We recognize
that some States for which we assume
no costs do not yet have the required
functionality in their State data systems
to make the links required under the
regulations. However, as noted
elsewhere, we reasonably rely on the
assurances made by States that they are
already planning on establishing these
links, and are not doing so as a result
of the regulations. As a result, we do not
estimate costs for those States here. In
regards to States that already have
systems with these links in place, we
are not aware of any updates that will
need to be made to any of these systems
solely in order to comply with the
regulations, and therefore estimate no
additional costs to these States.
Employment Outcomes
The final regulations require States to
report employment outcomes, including
data on both the teacher placement rate
and the teacher retention rate, and on
the effectiveness of a teacher
preparation program in preparing,
placing, and supporting novice teachers
consistent with local educational needs.
We have limited information on the
extent to which States currently collect
and maintain data on placement and
retention for individual teachers.
Under § 612.4(b), States are required
to report annually, for each teacher
preparation program, on the teacher
placement rate for traditional teacher
preparation programs, the teacher
placement rate calculated for high-need
schools for all teacher preparation
programs (whether traditional or
alternative route), the teacher retention
rate for all teacher preparation programs
(whether traditional or alternative
route), and the teacher retention rate
calculated for high-need schools for all
teacher preparation programs (whether
traditional or alternative route). States
are not required to report on the teacher
placement rate for alternative route
programs. The Department has defined
the ‘‘teacher placement rate’’ as the
percentage of recent graduates who have
become novice teachers (regardless of
retention) for the grade level, span, and
subject area in which they were
prepared. ‘‘High-need schools’’ is
defined in § 612.2(d) by using the
definition of ‘‘high-need school’’ in
section 200(11) of the HEA. The
regulations will give States discretion to
exclude recent graduates from this
measure if they are teaching in a private
school, teaching in another State,
teaching in a position that does not
require State certification, enrolled in
graduate school, or engaged in military
service.
Section 612.5(a)(2) and the definition
of ‘‘teacher retention rate’’ in § 612.2
require a State to provide data on each
teacher preparation program’s teacher
retention rate, by calculating, for each of
the last three cohorts of novice teachers
preceding the current title II reporting
year, the percentage of those teachers
who have been continuously employed
as teachers of record in each year
between their first year as a novice
teacher and the current reporting year.
For the purposes of this definition, a
cohort of novice teachers is determined
by the first year in which they were
identified as a novice teacher by the
State. ‘‘High-need schools’’ is defined in
§ 612.2 by using the definition of ‘‘high-need school’’ from section 200(11) of the
HEA. The regulations give States
discretion to exclude novice teachers
from this measure if they are teaching in
a private school or another State,
enrolled in graduate school, or serving
in the military. States also have the
discretion to treat this rate differently
for alternative route and traditional
route providers.
In its comments on the Department’s
Notice of Intention to Develop Proposed
Regulations Regarding Teacher
Preparation Reporting Requirements,
the Data Quality Campaign reported that
50 States, the District of Columbia, and
the Commonwealth of Puerto Rico all
collect some certification information
on individual teachers and that a subset
of States collect the following specific
information on teacher preparation or
qualifications that is relevant to the
requirements: Type of teacher
preparation program (42 States),
location of teacher preparation program
(47 States), and year of certification (51
States).91
91 ED’s Notice of Intention to Develop Proposed Regulations Regarding Teacher Preparation Reporting Requirements: DQC Comments to Share Knowledge on States’ Data Capacity. Retrieved from www.dataqualitycampaign.org/files/HEA%20Neg%20Regs%20formatted.pdf.
Data from the SLDS program indicate
that 24 States can currently link data on
individual teachers with their teacher
preparation programs, including
information on their current
certification status and placement. In
addition, seven States are currently in
the process of making these links, and
10 States plan to add this capacity to
their data systems, but have not yet
established the link and process for
doing so. Because these States would
also maintain information on the
certification status and year of
certification of individual teachers, we
assume they would already be able to
calculate the teacher placement and
retention rates for novice teachers but
may incur additional costs to identify
recent graduates who are not employed
in a full-time teaching position within
the State. It should be possible to do this
at minimal cost by matching rosters of
recent graduates from teacher
preparation programs against teachers
employed in full-time teaching
positions who received their initial
certification within the last three years.
Additionally, because States already
maintain the necessary information in
State databases to identify schools as
‘‘high-need,’’ we do not believe there
would be any appreciable additional
cost associated with adding ‘‘high-need’’
flags to any accounting of teacher
retention or placement rates in the State.
Several commenters stated that it was
unrealistic to assume that any States
currently had the information required
under the regulations as the
requirements were new. While we
recognize that States may not have
conducted these specific
data analyses in the past, this does not
mean that their systems are incapable of
doing so. In fact, as outlined above,
information available to the Department
indicates that at least 24 States already
have this capacity and that an
additional 17 are in the process of
developing it or plan to do so.
Therefore, regardless of whether the
specific data analysis itself is new, these
States will not incur additional costs
associated with the final regulations to
establish that functionality.
The remaining 11 States may need to
collect additional information from
teacher preparation programs and LEAs
because they do not appear to be able
to link information on the employment,
certification, and teacher preparation
program for individual teachers. If it is
not possible to establish this link using
existing data systems, States may need
to obtain some or all of this information
from teacher preparation programs or
from the teachers themselves. The
American Association of Colleges for
Teacher Education reported that, in
2012, 495 of 717 institutions (or about
70 percent) had begun tracking their
graduates into job placements. Although
half of those institutions have
successfully obtained placement
information, these efforts suggest that
States may be able to take advantage of
work already underway.92
92 American Association of Colleges for Teacher Education (2013), The Changing Teacher Preparation Profession: A report from AACTE’s Professional Education Data System (PEDS).
A number of commenters stated that
IHEs would experience substantial
burden in obtaining this information
from all graduates. We agree that teacher
preparation programs individually
tracking and contacting their recent
graduates would be highly burdensome
and inefficient. However, in the
regulations, the reporting burden falls
on States, rather than institutions. As
such, we believe it would be
inappropriate to assume data collection
costs and reporting burdens accruing to
institutions.
For each of these 11 States, the
Department originally estimated that
150 hours may be required at the State
level to collect information about novice
teachers employed in full-time teaching
positions (including designing the data
collection instruments, disseminating
them, providing training or other
technical assistance on completing the
instruments, collecting the data, and
checking their accuracy). Several
commenters stated that the
Department’s estimates were too low.
One commenter estimated that this
process would take 350 hours. Another
commenter indicated that his State takes
approximately 100 hours to collect data
on first year teachers and that data
collection on more cohorts would take
more time. Generally, the Department
believes that this sort of data collection
is subject to economies of scale—that for
each additional cohort on which data
are collected in a given year, the average
time and cost associated with each
cohort will decrease. This belief arises
from the fact that many of the costs
associated with such a collection, such
as designing the data request
instruments and disseminating them,
are largely fixed. As such, we do not
think that collecting data on three
cohorts will take three times as long as
collecting data on one. However, we do
recognize that there could be wide
variation across States depending on the
complexity of their systems and the way
in which they opt to collect these data.
For example, a State that sends data
requests to individual LEAs to query
their own data systems will experience
a much higher overall burden with this
provision than one that sends data
requests to a handful of analysts at the
State level who perform a small number
of queries on State databases. Because of
this potentially wide variation in
burden across States, it is difficult to
accurately estimate an average.
However, based on public comment, we
recognize that our initial estimate may
have been too low. We also
believe that States will make every effort
to reduce the burdens associated with
this provision. As such, we are
increasing our estimate to 200 hours,
with an expectation that this may vary
widely across States. Using this
estimate, we calculate a total annual
cost to the 11 States of $112,130, based
on the national average hourly wage for
education administrators of $50.97.
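The revised estimate works out as follows, using the administrator wage cited in the text (rounding to the nearest $10 assumed):

```python
# Collecting employment data on novice teachers in the 11 States without existing linkages.
print(round(11 * 200 * 50.97, -1))   # 112130.0, matching the published $112,130
```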
Teacher Preparation Program
Characteristics
Under § 612.5(a)(4) States are required
to report whether each teacher
preparation program in the State either:
(a) Is accredited by a specialized
accrediting agency recognized by the
Secretary for accreditation of
professional teacher education
programs, or (b) provides teacher
candidates with content and
pedagogical knowledge and quality
clinical preparation, and has rigorous
teacher candidate exit standards. As
discussed in greater detail in the
Paperwork Reduction Act section of this
document, we estimate that the total
cost to the 50 States, the District of
Columbia, and the Commonwealth of
Puerto Rico of providing these
assurances for the estimated 15,335
teacher preparation programs
nationwide that States have
already determined are accredited based
on previous title II reporting
submissions would be $790,670,
assuming that 2 hours were required per
teacher preparation program and using
an estimated hourly wage of $25.78.
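A check of the stated assumptions against the published total:

```python
# Reporting accreditation status: 2 hours per program at $25.78 per hour.
print(round(15_335 * 2 * 25.78, -1))   # 790670.0, matching the published $790,670
```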
Several commenters argued that these
estimates did not accurately reflect the
costs associated with seeking
specialized accreditation. We agree with
this statement. However, the regulations
do not require programs to seek
specialized accreditation. Thus, there
would be no additional costs associated
with this requirement for programs that
are already seeking or have obtained
specialized accreditation. If teacher
preparation programs that do not
currently have specialized accreditation
decide to seek it, they would not be
doing so because of a requirement in
these regulations, and therefore, it
would be inappropriate to include those
costs here.
Survey Outcomes
The Department requires States to
report—disaggregated for each teacher
preparation program—qualitative and
quantitative data from surveys of novice
teachers and their employers in order to
capture their perceptions of whether
novice teachers who were prepared at a
teacher preparation program in that
State possess the skills needed to
succeed in the classroom. The design
and implementation of these surveys
would be determined by the State, but
we provide the following estimates of
costs associated with possible options
for meeting this requirement.
Some States and IHEs currently
survey graduates or recent graduates of
teacher preparation programs.
According to experts consulted by the
Department, depending on the number
of questions and the size of the sample,
some of these surveys have been
administered quite inexpensively.
Oregon conducted a survey of a
stratified random sample of
approximately 50 percent of its teacher
preparation program graduates and
estimated that it cost $5,000 to develop
and administer the survey and $5,000 to
analyze and report the data. Since these
data will be used to assess and publicly
report on the quality of each teacher
preparation program, we expect that the
cost of implementing the proposed
regulations is likely to be higher,
because States may need to survey a
larger sample of teachers and their
employers in order to capture
information on all teacher preparation
programs.
Another potential factor in the cost of
the teacher and employer surveys would
be the number and type of questions.
We have consulted with researchers
experienced in the collection of survey
data, and they have indicated that it is
important to balance the burden on the
respondent with the need to collect
adequate information. In addition to
asking teachers and their employers
whether graduates of particular teacher
preparation programs are adequately
prepared before entering the classroom,
States may also wish to ask about
course-taking and student teaching
experiences, as well as to collect
demographic information on the
respondent, including information on
the school environment in which the
teacher is currently employed. Because
the researchers we consulted stressed
that teachers and their employers are
unlikely to respond to a survey that
requires more than 30 minutes to
complete, we assume that the surveys
would not exceed this length.
Based on our consultation with
experts and previous experience
conducting surveys of teachers through
evaluations of Department programs or
policies, we originally estimated that it
would cost the average State
approximately $25,000 to develop the
survey instruments, including
instructions for the survey recipients.
However, a number of commenters
argued that these development costs
were far too low. Alternate estimates
provided by commenters ranged from
$50,000 per State to $200,000, with the
majority of commenters offering a
$50,000 estimate. As such, the
Department has revised its original
estimate to $50,000. This provides a
total cost to the 50 States, the District of
Columbia, and the Commonwealth of
Puerto Rico of $2,600,000. However, we
recognize that the cost would be lower
for States that identify an existing
instrument that could be adapted or
used for this purpose, potentially
including survey instruments
previously developed by other States.93
93 The experts with whom we consulted did not provide estimates of the number of hours involved in the development of this type of survey. For the estimated burden hours for the Paperwork Reduction Act section, this figure represents 1,179 hours at an average hourly wage rate of $42.40, based on the hourly wage for faculty at a public IHE and statisticians employed by State governments.
If States surveyed all individuals who
completed teacher preparation programs
in the previous year, we estimate that
they would survey 180,744 teachers,
based on the reported number of
individuals completing teacher
preparation programs, both traditional
and alternative route programs, during
the 2013–2014 academic year.
To estimate the cost of administering
these surveys, we consulted researchers
with experience conducting a survey of
all recent graduates of teacher
preparation programs in New York
City.94
94 These cost estimates were based primarily on our consultation with a researcher involved in the development, implementation, and analysis of surveys of teacher preparation program graduates and graduates of alternative certification programs in New York City in 2004 as part of the Teacher Pathways Project. These survey instruments are available online at: www.teacherpolicyresearch.org/TeacherPathwaysProject/Surveys/tabid.
In order to meet the target of a
70 percent response rate for that survey,
the researchers estimated that their cost
per respondent was $100, which
included an incentive for respondents
worth $25. We believe that it is unlikely
that States will provide cash incentives
for respondents to the survey, thus
providing an estimate of $75 per
respondent. However, since the time of
data collection in that survey, there
have been dramatic advances in the
availability and usefulness of online
survey software with a corresponding
decrease in cost. As such, we believe
that the $75 per respondent estimate
may actually provide an extreme upper
bound and may dramatically overestimate the costs associated with
administering any such survey. For
example, several prominent online
survey companies offer survey hosting
services for as little as $300 per year for
unlimited questions and unlimited
respondents. In the NPRM, using that
total cost, and assuming surveys
administered and hosted by the State
and using the number of program
graduates in 2013 (203,701), we
estimated that the cost per respondent
would range from $0.02 to $21.43, with
an average cost per State of $0.97. We
recognize that this estimate would
represent an extreme lower bound and
many States are unlikely to see costs per
respondent that low until the survey is
fully integrated into existing systems.
For example, States may be able to
provide teachers with a mechanism,
such as an online portal, to both verify
their class rosters and complete the
survey. Because teachers would be
motivated to ensure that they were not
evaluated based on the performance of
students they did not teach, requiring
novice teachers to complete the survey
in order to access their class rosters
would increase the response rate for the
survey and allow novice teachers to
select their teacher preparation program
from a pull-down menu, reducing the
amount of time required to link the
survey results to particular programs.
States could also have teacher
preparation programs disseminate the
novice teacher survey with other
information for teacher preparation
program alumni or have LEAs
disseminate the novice teacher survey
during induction or professional
development activities. We believe that,
as States incorporate these surveys into
other structures, data collection costs
will dramatically decline towards the
lower bounds noted above.
The California State School Climate
Survey (CSCS) is one portion of the
larger California School Climate, Health,
& Learning Survey, designed to survey
teachers and staff to address questions
of school climate. While the CSCS is
subsidized by the State of California, it
is also offered to school districts outside
of the State for a fee, ranging from $500
to $1,500 per district, depending on its
enrollment size. Applying this cost
structure to all school districts
nationwide with enrollment (as outlined
in the Department’s Common Core of
Data), we estimated in the NPRM that
costs would range from a low of $0.05
per FTE teacher to $500 per FTE teacher
with an average of $21.29 per FTE.
However, these costs are inflated by
single-school, single-teacher districts,
which are largely either charter schools
or small, rural school districts unlikely
to administer separate surveys. After
removing single-school, single-teacher
districts, the average cost per
respondent decreased to $12.27.
Given the cost savings associated with
online administration of surveys and the
likelihood that States will fold these
surveys into existing structures, we
believe that many of these costs are
likely overestimates of the actual costs
that States will bear in administering
these surveys. However, for purposes of
estimating costs in this context, we use
a rate of $30.33 per respondent, which
represents a cost per respondent at the
85th percentile of the CSCS
administration and well above the
maximum administration cost for
popular consumer survey software. One
commenter stated that the Department’s
initial estimate was appropriate but
also suggested that, to reduce costs
further, a survey could be administered
less than annually, or only a subset of
novice teachers could be surveyed. One
commenter argued that this estimate
was too low and provided an alternate
estimate of aggregate costs for their State
of $300,000 per year. We note, however,
that this commenter’s alternate estimate
was actually a lower cost per
respondent than the Department’s initial
estimate—approximately $25 per
respondent compared to $30.33.
Another commenter argued that
administration of the survey would cost
$100 per respondent. Some commenters
also argued that administering the
survey would require additional staff.
Given the information discussed above
and that public comment was divided
on whether our estimate was too high,
too low, or appropriate, we do not
believe there is adequate reason to
change our initial estimate of $30.33 per
respondent. Undoubtedly, some States
may bear the administration costs by
hiring additional staff while others will
contract with an outside entity for the
administration of the survey. In either
case, we believe our original estimates
to be reasonable. Using that estimate, we
estimate that, if States surveyed a
combined sample of 180,744 teachers
and an equivalent number of
employers,95 with a response rate of 70
percent, the cumulative cost to the 50
States, the District of Columbia, and the
Commonwealth of Puerto Rico of
administering the survey would be
$7,674,760.
If States surveyed all teacher
preparation program graduates and their
employers, assuming that both the
teacher and employer surveys would
take no more than 30 minutes to
complete, that the employers are likely
to be principals or district
administrators, and a response rate of 70
percent of teachers and employers
surveyed, the total estimated burden for
126,521 teachers and their 126,521
employers of completing the surveys
would be $2,635,430 and $3,224,390
respectively, based on the national
average hourly wage of $41.66 for
elementary and secondary public school
teachers and $50.97 for elementary and
secondary school level administrators.
These costs would vary depending on
the extent to which a State determines
that it can measure these outcomes
based on a sample of novice teachers
and their employers. This may depend
on the distribution of novice teachers
prepared by teacher preparation
programs throughout the LEAs and
schools within each State and also on
whether or not some of this information
is available from existing sources such
as surveys of recent graduates
conducted by teacher preparation
programs as part of their accreditation
process.
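Both the administration-cost and the response-burden arithmetic above can be reproduced in a brief sketch (an editorial illustration; every input figure appears in the surrounding text):

    # A sketch reproducing the survey cost arithmetic described above.
    RATE_PER_RESPONDENT = 30.33   # 85th percentile of CSCS administration
    RESPONSE_RATE = 0.70

    # Administration: 180,744 teachers plus an equal number of employers.
    surveyed = 180_744 * 2
    admin_cost = surveyed * RESPONSE_RATE * RATE_PER_RESPONDENT
    print(f"administration: ${admin_cost:,.0f}")
    # Prints roughly $7.67 million, matching the published $7,674,760
    # after rounding.

    # Response burden: 126,521 responding teachers and employers (70
    # percent of those surveyed), 30 minutes each, at the cited wages.
    TEACHER_WAGE, ADMIN_WAGE, HOURS = 41.66, 50.97, 0.5
    teacher_burden = 126_521 * HOURS * TEACHER_WAGE    # ~ $2,635,430
    employer_burden = 126_521 * HOURS * ADMIN_WAGE     # ~ $3,224,390
    print(f"teachers: ${teacher_burden:,.0f}; employers: ${employer_burden:,.0f}")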
One commenter stated that principals
would be unlikely to complete these
surveys unless paid to do so. We
recognize that some administrators may
see these surveys as a burden and may
be less willing to complete these
surveys. However, we believe that States
will likely take this factor into
consideration when designing and
administering these surveys by either
reducing the amount of time necessary
to complete the surveys, providing a
financial incentive to complete them, or
incorporating the surveys into other,
pre-existing instruments that already
require administrator input. Some States
may also simply make completion a
mandatory part of administrators’
duties.
95 We note that, to the extent that multiple novice
teachers are employed in the same school, there
would be fewer employers surveyed than the
estimates outlined above. However, for purposes of
this estimate, we have assumed an equivalent
number of employers. This assumption will result
in an overestimate of actual costs.
Annual Reporting Requirements Related
to State Report Card
As discussed in greater detail in the
Paperwork Reduction Act section of this
document, § 612.4 includes several
requirements for which States must
annually report on the SRC. Using an
estimated hourly wage of $25.78, we
estimate that the total cost for the 50
States, the District of Columbia, and the
Commonwealth of Puerto Rico to report
the following required information in
the SRC would be: Classifications of
teacher preparation programs ($370,280,
based on 0.5 hours per 28,726
programs); assurances of accreditation
($98,830, based on 0.25 hours per
15,335 programs); State’s weighting of
the different indicators in § 612.5 ($340
annually, based on 0.25 hours per
State); State-level rewards and
consequences associated with the
designated performance levels ($670 in
the first year and $130 thereafter, based
on 0.5 hours per State in the first year
and 0.1 hours per State in subsequent
years); method of program aggregation
($130 annually, based on 0.1 hours per
State); and process for challenging data
and program classification ($4,020 in
the first year and $1,550 thereafter,
based on 3 hours per State in the first
year and 6 hours for 10 States in
subsequent years).
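Each of these figures follows the same pattern: hours per unit, times the number of units, times the $25.78 hourly wage. A short sketch (an editorial illustration, with all inputs taken from the text) reproduces the line items:

    # A sketch of the hours-times-wage pattern behind the SRC estimates
    # above; counts, hours, and the wage are those cited in the text.
    WAGE = 25.78
    STATES = 52   # 50 States, the District of Columbia, and Puerto Rico

    items = [
        ("program classifications", 0.5, 28_726),           # ~ $370,280
        ("accreditation assurances", 0.25, 15_335),         # ~ $98,830
        ("indicator weighting", 0.25, STATES),              # ~ $340
        ("rewards and consequences, year 1", 0.5, STATES),  # ~ $670
        ("aggregation method", 0.1, STATES),                # ~ $130
        ("challenge process, year 1", 3.0, STATES),         # ~ $4,020
    ]
    for name, hours, units in items:
        print(f"{name}: ${hours * units * WAGE:,.0f}")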
The Department’s initial estimates
also included costs associated with the
examination of data collection quality
(5.3 hours per State annually), and
recordkeeping and publishing related to
appeal decisions (5.3 hours per State).
However, one commenter stated that the
examination of data quality would take
a high level of scrutiny and would take
more time than was originally estimated
and that our estimate associated with
recordkeeping and publishing was low.
Additionally, several commenters
responded generally to the overall cost
estimates in the NPRM with concerns
about data quality and review. In
response to these general concerns, and
upon further review, the Department
believes that States are likely to engage
in a more robust data quality review
process in response to these regulations.
Furthermore, we believe that the
associated documentation and
recordkeeping estimates may have been
lower than those reasonably expected by
States. As such, the Department has
increased its estimate of the time
required from the original 5.3 hour
estimate to 10 hours in both cases.
These changes result in an estimated
cost of $13,410 for each of the two
components. The sum of these annual
reporting costs would be $495,960 for
the first year and $492,950 in
subsequent years, based on a cumulative
burden of 19,238 hours in the first
year and 19,121 hours in subsequent
years.
In addition, a number of commenters
expressed concern that our estimates
included time and costs associated with
challenging data and program
classification but did not reflect time
and costs associated with allowing
programs to actually review data in the
SRC to ensure that the teachers
attributed to them were actual recent
program graduates. We agree that
program-level review of these data may
be necessary, particularly in the first
few years, in order to ensure valid and
reliable data. As such, we have revised
our cost estimates to include time for
programs to individually review data
reports to ensure their accuracy. We
assume that this review will largely
consist of matching lists of recent
teacher preparation program graduates
with prepopulated lists provided by the
State. Based on the number of program
completers during the 2013–2014
academic year, and the total number of
teacher preparation programs in that
year, we estimate the average program
would review a list of 19 recent
graduates (180,744 program completers
per year, summed over three years and
divided by 27,914 programs). As such, we do not
believe this review will take a
considerable amount of time. However,
to ensure that we estimate sufficient
time for this review, we estimate 1 hour
per program for a total cost for the
27,914 teacher preparation programs of
$719,620.
Under § 612.5, States would also
incur burden to enter the required
aggregated information on student
learning, employment, and survey
outcomes into the information
collection instrument for each teacher
preparation program. Using the
estimated hourly wage rate of $25.78,
we estimate the following cumulative
costs to the 50 States, the District of
Columbia, and Puerto Rico to report on
27,914 teacher preparation programs
and 812 teacher preparation programs
provided through distance education:
Annual reporting on student learning
outcomes ($1,851,390 annually, based
on 2.5 hours per program); annual
reporting of employment outcomes
($2,591,950 annually, based on 3.5
hours per program); and annual
reporting of survey outcomes ($740,560
annually, based on 1 hour per program).
After publication of the NPRM, we
recognized that our initial estimates did
not include costs or burden associated
with States’ reporting data on any other
indicators of academic content
knowledge and teaching skills. To the
extent that States use additional
indicators not required by these
regulations, we believe that they will
choose to use indicators currently in
place for identifying low-performing
teacher preparation programs rather than
instituting new indicators and new data
collection processes. As such, we do not
believe that States will incur any
additional data collection costs.
Additionally, we assume that
transitioning reporting on these
indicators from the entity level to the
program level will result in minimal
costs at the State level that are already
captured elsewhere in these estimates.
As such, we believe the only additional
costs associated with these other
indicators will be in entering the
aggregated information into the
information collection instrument. We
assume that, on average, it will take
States 1 hour per program to enter this
information. States with no or few other
indicators will experience much lower
costs than those estimated here. Those
States that use a large number of other
indicators may experience higher costs
than those estimated here, though we
believe it is unlikely that the data entry
process per program for these other
indicators will exceed this estimate. As
such, we estimate an annual cost to the
50 States, the District of Columbia, and
Puerto Rico of $740,560 to report on
other indicators of academic content
knowledge and teaching skills.
Our estimate of the total annual cost
of reporting these outcome measures on
the SRC related to § 612.5 is $5,924,460,
based on 229,808 hours.
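These totals can be cross-checked with a few lines (an editorial sketch; program counts, per-program hours, and the wage all appear in the text above):

    # A sketch cross-checking the Sec. 612.5 reporting totals above.
    WAGE = 25.78
    PROGRAMS = 27_914 + 812   # campus-based plus distance education

    hours_per_program = {
        "student learning outcomes": 2.5,
        "employment outcomes": 3.5,
        "survey outcomes": 1.0,
        "other indicators": 1.0,
    }
    total_hours = sum(h * PROGRAMS for h in hours_per_program.values())
    print(f"{total_hours:,.0f} hours, ${total_hours * WAGE:,.0f}")
    # Prints 229,808 hours and roughly $5.92 million; the published
    # $5,924,460 sums the individually rounded line items.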
Potential Benefits
The principal benefits related to the
evaluation and classification of teacher
preparation programs under the
regulations are those resulting from the
reporting and public availability of
information on the effectiveness of
teachers prepared by teacher
preparation programs within each State.
The Department believes that the
information collected and reported as a
result of these requirements will
improve the accountability of teacher
preparation programs, both traditional
and alternative route to certification
programs, for preparing teachers who
are equipped to succeed in classroom
settings and help their students reach
their full potential.
Research studies have found
significant and substantial variation in
teaching effectiveness among individual
teachers and some variation has also
been found among graduates of different
teacher preparation programs.96 For
example, Tennessee reports that some
teacher preparation programs
consistently report statistically
significant differences in student
learning outcomes for grades and
subjects covered by State assessments
over multiple years and meaningful
differences in teacher placement and
retention rates.97 Because this variation
in the effectiveness of graduates is not
associated with any particular type of
preparation program, the only way to
determine which programs are
producing more effective teachers is to
link information on the performance of
teachers in the classroom back to their
teacher preparation programs.98 The
regulations do this by requiring States to
link data on student learning outcomes,
employment outcomes, and teacher and
employer survey outcomes back to the
teacher preparation programs, rating
each program based on these data, and
then making that information available
to the public.
The Department recognizes that
simply requiring States to assess the
performance of teacher preparation
programs and report this information to
the public will not produce increases in
student achievement, but it is an
important part of a larger set of policies
and investments designed to attract
talented individuals to the teaching
profession; prepare them for success in
the classroom; and support, reward, and
retain effective teachers. In addition, the
Department believes that, once
information on the performance of
teacher preparation programs is more
readily available, a variety of
stakeholders will become better
consumers of these data, which will
ultimately lead to improved student
achievement by influencing the
behavior of States seeking to provide
technical assistance to low-performing
programs, IHEs engaging in deliberate
self-improvement efforts, prospective
teachers seeking to train at the highest
quality teacher preparation programs,
and employers seeking to hire the most
highly qualified novice teachers.
Louisiana has already adopted some
of the proposed requirements and has
begun to see improvements in teacher
preparation programs. Based on data
96 See, for example: Boyd, D., Grossman, P.,
Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher
Preparation and Student Achievement. Education
Evaluation and Policy Analysis, 31(4), 416–440.
97 See Report Card on the Effectiveness of Teacher
Training Programs, Tennessee 2014 Report Card.
(n.d.). Retrieved from https://www.tn.gov/thec/
article/report-card.
98 Kane, T., Rockoff, J., & Staiger, D. (2008). What
does certification tell us about teacher
effectiveness? Evidence from New York City.
Economics of Education Review, 27(6), 615–631.
suggesting that the English Language
Arts program at the University of
Louisiana at Lafayette was producing
teachers who were less effective than
novice teachers prepared by other
programs, Louisiana
identified the program in 2008 as being
in need of improvement and provided
additional analyses of the qualifications
of the program’s graduates and of the
specific areas where the students taught
by program graduates appeared to be
struggling.99 When data suggested that
students struggled with essay questions,
faculty from the elementary education
program and the liberal arts department
in the university collaborated to
restructure the teacher education
curriculum to include more writing
instruction. Based on 2010–11 data,
student learning outcomes for teachers
prepared by this program are now
comparable to other novice teachers in
the State, and the program is no longer
identified for improvement.100
This is one example, but it suggests
that States can use data on student
learning outcomes for graduates of
teacher preparation programs to help
these programs identify weaknesses and
implement needed reforms in a
reasonable amount of time. As more
information becomes available and if
the data indicate that some programs
produce more effective teachers, LEAs
seeking to hire novice teachers will
prefer to hire teachers from those
programs. All things being equal,
aspiring teachers will elect to pursue
their degrees or certificates at teacher
preparation programs with strong
student learning outcomes, placement
and retention rates, survey outcomes,
and other measures.
TEACH Grants
The final regulations link program
eligibility for participation in the
TEACH Grant program to the State
assessment of program quality under 34
CFR part 612. Under §§ 686.11(a)(1)(iii)
and 686.2(d), to be eligible to receive a
TEACH Grant for a program, an
individual must be enrolled in a high-quality teacher preparation program—
that is, a program that is classified by
the State as effective or higher in either
or both the October 2019 or October
2020 SRC for the 2021–2022 title IV,
HEA award year; or, classified by the
State as effective or higher in two out of
99 Sawchuk, S. (2012). Value Added Concept
Proves Beneficial to Teacher Colleges. Retrieved
from www.edweek.org.
100 Gansle, K., Noell, G., Knox, R.M., & Schafer, M.J.
(2010). Value Added Assessment of Teacher
Preparation Programs in Louisiana: 2007–2008 to
2009–2010 Overview of 2010–11 Results. Retrieved
from Louisiana Board of Regents.
the previous three years, beginning with
the October 2020 SRC, for the 2022–
2023 title IV, HEA award year, under 34
CFR 612.4(b). As noted in the NPRM,
the Department estimates that
approximately 10 percent of TEACH
Grant recipients are not enrolled in
teacher preparation programs, but are
majoring in such subjects as STEM,
foreign languages, and history. Under
the final regulations, in a change from
the NPRM and from the current TEACH
Grant regulations, students would need
to be in an effective teacher preparation
program as defined in § 612.2, but those
who pursue a dual-major that includes
a teacher preparation program would be
eligible for a TEACH Grant.
Additionally, institutions could design
and designate programs that aim to
develop teachers in STEM and other
high-demand teaching fields that
combine subject matter and teacher
preparation courses as TEACH Grant
eligible programs. Therefore, while we
expect some reduction in TEACH Grant
volume as detailed in the Net Budget
Impacts section of this Regulatory
Impact Analysis (RIA), we do expect
that many students interested in
teaching STEM and other key subjects
will still be able to get TEACH Grants
at some point in their postsecondary
education.
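Read as a decision rule, the eligibility condition can be sketched as follows (an editorial illustration, not regulatory text; the rating labels are simplified stand-ins for the State classifications under 34 CFR 612.4(b)):

    # An illustrative sketch of the TEACH Grant eligibility rule
    # described above; not regulatory text.
    EFFECTIVE_OR_HIGHER = {"effective", "exceptional"}

    def eligible_2021_22(oct_2019_rating, oct_2020_rating):
        # Effective or higher in either or both the October 2019 or
        # October 2020 SRC.
        return (oct_2019_rating in EFFECTIVE_OR_HIGHER
                or oct_2020_rating in EFFECTIVE_OR_HIGHER)

    def eligible_2022_23_onward(previous_three_ratings):
        # Effective or higher in two out of the previous three years,
        # beginning with the October 2020 SRC.
        return sum(r in EFFECTIVE_OR_HIGHER
                   for r in previous_three_ratings) >= 2

    print(eligible_2021_22("at-risk", "effective"))                 # True
    print(eligible_2022_23_onward(["effective", "low-performing",
                                   "effective"]))                   # True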
In addition to the referenced benefits
of improved accountability under the
title II reporting system, the Department
believes that the regulations relating to
TEACH Grants will also contribute to
the improvement of teacher preparation
programs. Linking program eligibility
for TEACH Grants to the performance
assessment by the States under the title
II reporting system provides an
additional factor for prospective
students to consider when choosing a
program and an incentive for programs
to achieve a rating of effective or higher.
In order to analyze the possible effects
of the regulations on the number of
programs eligible to participate in the
TEACH Grant program and the amount
of TEACH Grants disbursed, the
Department analyzed data from a variety
of sources. This analysis focused on
teacher preparation programs at IHEs.
This is because, under the HEA,
alternative route programs offered
independently of an IHE are not eligible
to participate in the TEACH Grant
program. For the purpose of analyzing
the effect of the regulations on TEACH
Grants, the Department estimated the
number of teacher preparation programs
based on data from the Integrated
Postsecondary Education Data System
(IPEDS) about program graduates in
education-related majors as defined by
the Category of Instructional Program
(CIP) codes and award levels. For the
purposes of this analysis, ‘‘teacher
preparation programs’’ refers to
programs in the relevant CIP codes that
also have the IPEDS indicator flag for
being a State-approved teacher
education program.
As detailed in the NPRM published
December 3, 2014, in order to estimate
how many programs might be affected
by a loss of TEACH Grant eligibility, the
Department had to estimate how many
programs will be individually evaluated
under the regulations, which encourage
States to report on the performance of
individual programs offered by IHEs
rather than on the aggregated
performance of programs at the
institutional level as currently required.
As before, the Department estimates that
approximately 3,000 programs may be
evaluated at the highest level of
aggregation and approximately 17,000
could be evaluated if reporting is done
at the most disaggregated level. Table 5
summarizes these two possible
approaches to program definition that
represent the opposite ends of the range
of options available to the States. Based
on IPEDS data, approximately 30
percent of programs defined at the six
digit CIP code level have at least 25
novice teachers when aggregated across
three years, so States may add one
additional year to the analysis or
aggregate programs with similar features
to push more programs over the
threshold, pursuant to the regulations.
The actual number of programs at IHEs
reported on will likely fall between
these two points represented by
Approach 1 and Approach 2. The final
regulations define a teacher preparation
program offered through distance
education as a teacher preparation
program at which at least 50 percent of
the program’s required coursework is
offered through distance education and
that, starting with the 2021–2022 award
year and subsequent award years, is not
classified as less than effective, based on
34 CFR 612.4(b), by the same State for
two out of the previous three years or
meets the exception from State reporting
of teacher preparation program
performance under 34 CFR
612.4(b)(3)(ii)(D) or (E). The exact
number of these programs is uncertain,
but in the Supplemental NPRM
concerning teacher preparation
programs offered through distance
education, the Department estimated
that 812 programs would be reported.
Whatever the number of programs, the
TEACH Grant volume associated with
these schools is captured in the amounts
used in our Net Budget Impacts
discussion. In addition, as discussed
earlier in the Analysis of Comments and
Changes section, States will have to
report on alternative certification
teacher preparation programs that are
not housed at IHEs, but they are not
relevant for analysis of the effects on
TEACH Grants because they are
ineligible for title IV, HEA funds and are
not included in Table 5.
TABLE 5—TEACHER PREPARATION PROGRAMS AT IHES AND TEACH GRANT PROGRAM

                                         Approach 1                  Approach 2
                                     Total    TEACH Grant       Total    TEACH Grant
                                              participating             participating

Public Total ...................     2,522        1,795        11,931        8,414
  4-year .......................     2,365        1,786        11,353        8,380
  2-year or less ...............       157            9           578           34
Private Not-for-Profit Total ...     1,879        1,212        12,316        8,175
  4-year .......................     1,878        1,212        12,313        8,175
  2-year or less ...............         1   ..........             3   ..........
Private For-Profit Total .......        67           39           250          132
  4-year .......................        59           39           238          132
  2-year or less ...............         8   ..........            12   ..........
      Total ....................     4,468        3,046        24,497       16,721
Given the number of programs and
their TEACH Grant participation status
as described in Table 5, the Department
examined IPEDS data and the
Department’s budget estimates for 2017
related to TEACH Grants to estimate the
effect of the regulations on TEACH
Grants beginning with the FY 2021
cohort when the regulations would be in
effect. Based on prior reporting, only 37
IHEs (representing an estimated 129
programs) were identified as having a
low-performing or at-risk program in
2010, and twenty-seven States have not
identified any low-performing programs
in twelve years. Given prior
identification of such programs and the
fact that the States would continue to
control the classification of teacher
preparation programs subject to
analysis, the Department does not
expect a large percentage of programs to
be subject to a loss of eligibility for
TEACH Grants. Therefore, the
Department evaluated the effects on the
amount of TEACH Grants disbursed and
the number of recipients on the basis of
the States classifying a range of three
percent, five percent, or eight percent of
programs to be low-performing or at-risk. These results are summarized in
Table 6. Ultimately, the number of
programs affected is subject to the
program definition, rating criteria, and
program classifications adopted by the
individual States, so the distribution of
those effects is not known with
certainty. However, the maximum
effect, whatever the distribution, is
limited by the amount of TEACH Grants
made and the percentage of programs
classified as low-performing and at-risk
that participate in the TEACH Grant
program. In the NPRM, the Department
invited comments about the expected
percentage of programs that will be
found to be low-performing and at-risk.
No specific comments were received, so
the updated numbers based on the
budget estimates for 2017 apply the
same percentages as were used in the
NPRM.
TABLE 6—ESTIMATED EFFECT IN 2021 ON PROGRAMS AND TEACH GRANT AMOUNTS OF DIFFERENT RATES OF INELIGIBILITY
[Percentage of low-performing or at-risk programs]

                                                      3%            5%            8%

Programs:
  Approach 1 ...............................         214           356           570
  Approach 2 ...............................         385           641         1,026
TEACH Grant Recipients .....................       1,061         1,768         2,828
TEACH Grant Amount at Low-Performing
  or At-Risk Programs ......................  $3,127,786    $5,212,977    $8,340,764

The estimated effects presented in Table 6 reflect assumptions about the likelihood of a program being ineligible and do not take into account the size of the program or participation in the TEACH Grant program. The Department had no program-level performance information and treats the programs as equally likely to become ineligible for TEACH Grants. If, in fact, factors such as size or TEACH Grant participation were associated with high or low performance, the number of TEACH Grant recipients and TEACH Grant volume could deviate from these estimates.

Whatever the amount of TEACH Grant volume at programs found to be ineligible, the effect on IHEs will be reduced from the full amounts represented by the estimated effects presented here, as students could elect to enroll in other programs at the same IHE that retain eligibility because they are classified by the State as effective or higher. Another factor that would reduce the effect of the regulations on programs and students is that an otherwise eligible student who received a TEACH Grant for enrollment in a TEACH Grant-eligible program is eligible to receive additional TEACH Grants to complete the program, even if that program loses status as a TEACH Grant-eligible program.
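The recipient and dollar figures in Table 6 scale directly with the assumed ineligibility rate applied to the PB 2017 baseline, as the following sketch shows (an editorial illustration; the 2021 baseline awards and volume are the PB 2017 figures shown in Table 7 below):

    # A sketch reproducing the Table 6 arithmetic.
    BASELINE_AWARDS_2021 = 35_354
    BASELINE_VOLUME_2021 = 104_259_546

    for rate in (0.03, 0.05, 0.08):
        print(f"{rate:.0%}: {BASELINE_AWARDS_2021 * rate:,.0f} recipients, "
              f"${BASELINE_VOLUME_2021 * rate:,.0f}")
    # 3%: 1,061 recipients, $3,127,786
    # 5%: 1,768 recipients, $5,212,977
    # 8%: 2,828 recipients, $8,340,764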
Several commenters expressed
concern that linking TEACH Grant
eligibility to the State’s evaluation of the
program would harm teacher
development from, and availability to,
poor and underserved communities. We
believe that the pilot year that provides
some warning of program performance,
the flexibility for States to develop their
evaluation criteria, and a long history of
programs performing above the at-risk
or low-performing levels will reduce the
possibility of this effect. The
Department continues to expect that
over time a large portion of the TEACH
Grant volume now disbursed to students
at programs that will be categorized as
low-performing or at-risk will be shifted
to programs that remain eligible. The
extent to which this happens will
depend on other factors affecting the
students’ enrollment decisions such as
in-State status, proximity to home or
future employment locations, and the
availability of programs of interest, but
the Department believes that students
will take into account a program’s rating
and the availability of TEACH Grants
when looking for a teacher preparation
program. As discussed in the Net
Budget Impacts section of this RIA, the
Department expects that the reduction
in TEACH Grant volume will taper off
as States identify low-performing and
at-risk programs and those programs are
improved or are no longer eligible for
TEACH Grants. Because existing
recipients will continue to have access
to TEACH Grants, and incoming
students will have notice and be able to
consider the program’s eligibility for
TEACH Grants in making an enrollment
decision, the reduction in TEACH Grant
volume that is classified as a transfer
from students at ineligible programs to
the Federal government will be
significantly reduced from the estimated
range of approximately $3.0 million to
approximately $8.0 million in Table 4
for the initial years the regulations are
in effect. While we have no past
experience with students’ reaction to a
designation of a program as low-performing and loss of TEACH Grant
eligibility, we assume that, to the extent
it is possible, students would choose to
attend a program rated effective or
higher. For IHEs, the effect of the loss
of TEACH Grant funds will depend on
the students’ reaction and how many
choose to enroll in an eligible program
at the same IHE, choose to attend a
different IHE, or make up for the loss of
TEACH Grants by funding their program
from other sources.
The Department does not anticipate
that many programs will lose State
approval or financial support. If this
does occur, IHEs with such programs
would have to notify enrolled and
accepted students immediately, notify
the Department within 30 days, and
disclose such information on their Web
sites or in promotional materials. The
Department estimates that 50 IHEs
would offer programs that lose State
approval or financial support and that it
would take 5.75 hours to make the
necessary notifications and disclosures
at a wage rate of $25.78 for a total cost
of $7,410. Finally, some of the programs
that lose State approval or financial
support may apply to regain eligibility
for title IV, HEA funds upon improved
performance and restoration of State
approval or financial support. The
Department estimates that 10 IHEs with
such programs would apply for restored
eligibility and the process would require
20 hours at a wage rate of $25.78 for a
total cost of $5,160.
3. Regulatory Alternatives Considered
The final regulations were developed
through a negotiated rulemaking process
in which different options were
considered for several provisions.
Among the alternatives the Department
considered were various ways to reduce
the volume of information States and
teacher preparation programs are
required to collect and report under the
existing title II reporting system. One
approach would have been to limit State
reporting to items that are statutorily
required. While this would reduce the
reporting burden, it would not address
the goal of enhancing the quality and
usefulness of the data that are reported.
Alternatively, by focusing the reporting
requirements on student learning
outcomes, employment outcomes, and
teacher and employer survey data, and
also providing States with flexibility in
the specific methods they use to
measure and weigh these outcomes, the
regulations balance the desire to reduce
burden with the need for more
meaningful information.
Additionally, during the negotiated
rulemaking session, some non-Federal
negotiators spoke of the difficulty States
would have developing the survey
instruments, administering the surveys,
and compiling and tabulating the results
for the employer and teacher surveys.
The Department offered to develop and
conduct the surveys to alleviate
additional burden and costs on States,
but the non-Federal negotiators
indicated that they preferred that States
and teacher preparation programs
conduct the surveys.
One alternative considered in carrying
out the statutory directive to direct
TEACH Grants to ‘‘high quality’’
programs was to limit eligibility only to
programs that States classified as
‘‘exceptional’’, positioning the grants
more as a reward for truly outstanding
programs than as an incentive for low-performing and at-risk programs to
improve. In order to prevent a program’s
eligibility from fluctuating year-to-year
based on small changes in evaluation
systems that are being developed and to
keep TEACH Grants available to a wider
pool of students, including those
attending teacher preparation programs
producing satisfactory student learning
outcomes, the Department and most
non-Federal negotiators agreed that
programs rated effective or higher
would be eligible for TEACH Grants.
4. Net Budget Impacts
The final regulations related to the
TEACH Grant program are estimated to
have a net budget impact of $0.49
million in cost reduction over the 2016
to 2026 loan cohorts. These estimates
were developed using the Office of
Management and Budget’s (OMB) Credit
Subsidy Calculator. The OMB calculator
takes projected future cash flows from
the Department’s student loan cost
estimation model and produces
discounted subsidy rates reflecting the
net present value of all future Federal
costs associated with awards made in a
given fiscal year. Values are calculated
using a ‘‘basket of zeros’’ methodology
under which each cash flow is
discounted using the interest rate of a
zero-coupon Treasury bond with the
same maturity as that cash flow. To
ensure comparability across programs,
this methodology is incorporated into
the calculator and used Government-wide to develop estimates of the Federal
cost of credit programs. Accordingly,
the Department believes it is the
appropriate methodology to use in
developing estimates for these
regulations. That said, in developing the
following Accounting Statement, the
Department consulted with OMB on
how to integrate the Department’s
discounting methodology with the
discounting methodology traditionally
used in developing regulatory impact
analyses.
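In a "basket of zeros" calculation, each year's cash flow is discounted at the zero-coupon Treasury rate whose maturity matches that cash flow. A minimal sketch follows; all figures in it are invented for illustration and are not Department estimates:

    # A minimal sketch of "basket of zeros" discounting: each cash flow
    # is discounted at the zero-coupon Treasury rate whose maturity
    # matches that cash flow. All figures below are hypothetical.
    cash_flows = [(1, -1_000.0), (2, 250.0), (3, 400.0), (4, 450.0)]
    zero_rates = {1: 0.012, 2: 0.015, 3: 0.018, 4: 0.020}  # hypothetical

    npv = sum(cf / (1 + zero_rates[year]) ** year
              for year, cf in cash_flows)
    print(f"net present value: {npv:,.2f}")
    # The subsidy rate is this net present value expressed as a share of
    # the amount disbursed, as in the "Subsidy Rate" row of Table 7.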
Absent evidence of the impact of
these regulations on student behavior,
budget cost estimates were based on
behavior as reflected in various
Department data sets and longitudinal
surveys. Program cost estimates were
generated by running projected cash
flows related to the provision through
the Department’s student loan cost
estimation model. TEACH Grant cost
estimates are developed across risk
categories: Freshmen/sophomores at 4-year IHEs, juniors/seniors at 4-year
IHEs, and graduate students. Risk
categories have separate assumptions
based on the historical pattern of the
behavior of borrowers in each
category—for example, the likelihood of
default or the likelihood to use statutory
deferment or discharge benefits.
As discussed in the TEACH Grants
section of the Discussion of Costs,
Benefits, and Transfers section in this
RIA, the regulations could result in a
reduction in TEACH Grant volume.
Under the effective dates and data
collection schedule in the regulations,
that reduction in volume would start
with the 2021 TEACH Grant cohort. The
Department assumes that the effect of
the regulations would be greatest in the
first years they were in effect as the low-performing and at-risk programs are
identified, removed from TEACH Grant
eligibility, and helped to improve or are
replaced by better performing programs.
Therefore, the percent of volume
estimated to be at programs in the low-performing or at-risk categories is
assumed to drop for future cohorts. As
shown in Table 7, the net budget impact
over the 2016–2026 TEACH Grant
cohorts is approximately $0.49 million
in reduced costs.
sradovich on DSK3GMQ082PROD with RULES2
TABLE 7—ESTIMATED BUDGET IMPACT
[TEACH Grant cohorts 2021 through 2026]

                                   2021         2022         2023         2024         2025         2026
PB 2017 TEACH Grant:
  Awards ...................     35,354       36,055       36,770       37,498       38,241       38,999
  Amount ................... 104,259,546  106,326,044  108,433,499  110,582,727  112,774,555  115,009,826
Remaining Volume after Reduction from Change in TEACH Grants for STEM Programs:
  % ........................     92.00%       92.00%       92.00%       92.00%       92.00%       92.00%
  Awards ...................     32,526       33,171       33,828       34,498       35,182       35,879
  Amount ...................  95,918,782   97,819,960   99,758,819  101,736,109  103,752,591  105,809,040
Low Performing and At Risk:
  % ........................      5.00%        3.00%        1.50%        1.00%        0.75%        0.50%
  Awards ...................      1,626          995          507          345          264          179
  Amount ...................   4,795,939    2,934,599    1,496,382    1,017,361      778,144      529,045
Redistributed TEACH Grants:
  % ........................        75%          75%          75%          75%          75%          75%
  Amount ...................   3,596,954    2,200,949    1,122,287      763,021      583,608      396,784
Reduced TEACH Grant Volume:
  % ........................        25%          25%          25%          25%          25%          25%
  Amount ...................   1,198,985      733,650      374,096      254,340      194,536      132,261
Estimated Budget Impact of Policy:
  Subsidy Rate .............     17.00%       17.16%       17.11%       16.49%       16.40%       16.24%
  Baseline Volume .......... 104,259,546  106,326,044  108,433,499  110,582,727  112,774,555  115,009,826
  Revised Volume ........... 103,060,561  105,592,394  108,059,403  110,328,387  112,580,019  114,877,565
  Baseline Cost ............  17,724,123   18,245,549   18,552,972   18,235,092   18,495,027   18,677,596
  Revised Cost .............  17,520,295   18,119,655   18,488,964   18,193,151   18,463,123   18,656,117
  Estimated Cost Reduction .    203,827      125,894       64,008       41,941       31,904       21,479

The estimated budget impact presented in Table 7 is defined against the PB 2017 baseline costs for the TEACH Grant program, and the actual volume of TEACH Grants in 2021 and beyond will vary. The budget impact estimate depends on the assumptions about the percent of TEACH Grant volume at programs that become ineligible and the share of that volume that is redistributed or reduced, as shown in Table 7. Finally, absent evidence of different rates of loan conversion at programs that will be eligible or ineligible for TEACH Grants when the regulations are in place, the Department did not assume a different loan conversion rate as TEACH Grants shifted to programs rated effective or higher. However, given that placement and retention rates are one element of the program evaluation system, the Department does hope that, as students shift to programs rated
effective, more TEACH Grant recipients
will fulfill their service obligations. If
this is the case and their TEACH Grants
do not convert to loans, the students
who do not have to repay the converted
loans will benefit and the expected cost
reductions for the Federal government
may be reduced or reversed because
more of the TEACH Grants will remain
grants and no payment will be made to
the Federal government for these grants.
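The per-cohort mechanics of Table 7 can be reproduced in a few lines (an editorial sketch; all inputs are figures shown in the table):

    # A sketch reproducing the 2021-cohort column of Table 7.
    baseline_volume = 104_259_546
    post_stem_volume = 95_918_782      # 92 percent of baseline
    low_performing_rate = 0.05
    subsidy_rate = 0.17

    at_risk_volume = post_stem_volume * low_performing_rate  # ~ 4,795,939
    reduced = at_risk_volume * 0.25                          # ~ 1,198,985
    redistributed = at_risk_volume * 0.75                    # ~ 3,596,954

    revised_volume = baseline_volume - reduced               # ~ 103,060,561
    cost_reduction = (baseline_volume - revised_volume) * subsidy_rate
    print(f"cost reduction: ${cost_reduction:,.0f}")         # ~ $203,827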
The final regulations also change total
and permanent disability discharge
provisions related to TEACH Grants to
be more consistent with the treatment of
interest accrual for total and permanent
discharges in the Direct Loan program.
This is not expected to have a
significant budget impact.
In addition to the TEACH Grant
provision, the regulations include a
provision that would make a program
ineligible for title IV, HEA funds if the
program was found to be low-
performing and subject to the
withdrawal of the State’s approval or
termination of the State’s financial
support. As noted in the NPRM, the
Department assumes this will happen
rarely and that the title IV, HEA funds
involved would be shifted to other
programs. Therefore, there is no budget
impact associated with this provision.
5. Accounting Statement
As required by OMB Circular A–4
(available at www.whitehouse.gov/sites/
default/files/omb/assets/omb/circulars/
a004/a-4.pdf), in the following table we
have prepared an accounting statement
showing the classification of the
expenditures associated with the
provisions of these final regulations.
This table provides our best estimate of
the changes in annual monetized costs,
benefits, and transfers as a result of the
final regulations.
Category                                                                  Benefits

Better and more publicly available information on the effectiveness
  of teacher preparation programs ................................  Not Quantified
Distribution of TEACH Grants to better performing programs .......  Not Quantified

Category                                                     Costs
                                                               7%             3%

Institutional Report Card (set-up, annual reporting,
  posting on Web site) ................................   $3,734,852     $3,727,459
State Report Card (Statutory requirements: Annual
  reporting, posting on Web site; Regulatory
  requirements: Meaningful differentiation, consulting
  with stakeholders, aggregation of small programs,
  teacher preparation program characteristics, other
  annual reporting costs) .............................   $3,653,206     $3,552,147
Reporting Student Learning Outcomes (develop model to
  link aggregate data on student achievement to teacher
  preparation programs, modifications to student growth
  models for non-tested grades and subjects, and
  measuring student growth) ...........................   $2,317,111     $2,249,746
Reporting Employment Outcomes (placement and retention
  data collection directly from IHEs or LEAs) .........   $2,704,080     $2,704,080
Reporting Survey Results (developing survey
  instruments, annual administration, and response
  costs) ..............................................  $14,621,104    $14,571,062
Reporting Other Indicators ............................     $740,560       $740,560
Identifying TEACH Grant-eligible Institutions .........      $12,570        $12,570

Category                                                   Transfers
                                                               7%             3%

Reduced costs to the Federal government from TEACH
  Grants to prospective students at teacher preparation
  programs found ineligible ...........................    ($60,041)      ($53,681)
6. Final Regulatory Flexibility Analysis
These regulations will affect IHEs that
participate in the title IV, HEA
programs, including TEACH Grants,
alternative certification programs not
housed at IHEs, States, and individual
borrowers. The U.S. Small Business
Administration (SBA) Size Standards
define for-profit IHEs as ‘‘small
businesses’’ if they are independently
owned and operated and not dominant
in their field of operation with total
annual revenue below $7,000,000. The
SBA Size Standards define nonprofit
IHEs as small organizations if they are
independently owned and operated and
not dominant in their field of operation,
or as small entities if they are IHEs
controlled by governmental entities
with populations below 50,000. The
revenues involved in the sector affected
by these regulations, and the
concentration of ownership of IHEs by
private owners or public systems means
that the number of title IV, HEA eligible
IHEs that are small entities would be
limited but for the fact that the
nonprofit entities fit within the
definition of a small organization
regardless of revenue. The potential for
some of the programs offered by entities
subject to the final regulations to lose
eligibility to participate in the title IV,
HEA programs led to the preparation of
this Final Regulatory Flexibility
Analysis.
Description of the Reasons That Action
by the Agency Is Being Considered
The Department has a strong interest
in encouraging the development of
highly trained teachers and ensuring
that today’s children have high quality
and effective teachers in the classroom,
and it seeks to help achieve this goal in
these final regulations. Teacher
preparation programs have operated
without access to meaningful data that
could inform them of the effectiveness
of their graduates who go on to work in
the classroom.
The Department wants to establish a
teacher preparation feedback
mechanism premised upon teacher
effectiveness. Under the final
regulations, an accountability system
would be established that would
identify programs by quality so that
effective teacher preparation programs
could be recognized and rewarded and
low-performing programs could be
supported and improved. Data collected
under the new system will help all
teacher preparation programs make
necessary corrections and continuously
improve, while facilitating States’ efforts
to reshape and reform low-performing
and at-risk programs.
We are issuing these regulations to
better implement the teacher
preparation program accountability and
reporting system under title II of the
HEA and to revise the regulations
implementing the TEACH Grant
program. Our key objective is to revise
Federal reporting requirements, while
reducing institutional burden, as
appropriate. Additionally, we aim to
have State reporting focus on the most
important measures of teacher
preparation program quality while tying
TEACH Grant eligibility to assessments
of program performance under the title
II accountability system. The legal basis
for these regulations is 20 U.S.C. 1022d,
1022f, and 1070g, et seq.
The final regulations related to title II
reporting affect a larger number of
entities, including small entities, than
the smaller number of entities that
could lose TEACH Grant eligibility or
title IV, HEA program eligibility. The
Department has more data on teacher
preparation programs housed at IHEs
than on those independent of IHEs.
Whether evaluated at the aggregated
institutional level or the disaggregated
program level, as described in the
TEACH Grant section of the Discussion
of Costs, Benefits, and Transfers section
in this RIA as Approach 1 and
Approach 2, respectively, State-approved teacher preparation programs
are concentrated in the public and
private not-for-profit sectors. For the
provisions related to the TEACH Grant
program and using the institutional
approach with a threshold of 25 novice
teachers (or a lower threshold at the
discretion of the State), since the IHEs
will be reporting for all their programs,
we estimate that approximately 56.4
percent of teacher preparation programs
are at public IHEs—the vast majority of
which would not be small entities, and
42.1 percent are at private not-for-profit
IHEs. The remaining 1.5 percent are at
private for-profit IHEs and of those with
teacher preparation programs,
approximately 18 percent had reported
FY 2012 total revenues under $7 million
based on IPEDS data and are considered
small entities. Table 8 summarizes the
estimated number of teacher preparation
programs offered at small entities.
TABLE 8—TEACHER PREPARATION PROGRAMS AT SMALL ENTITIES

                                                              % of Total    Programs at
                                      Total      Programs at   programs    TEACH Grant
                                    programs    small entities offered at  participating
                                                              small entities small entities

Public:
  Approach 1 ................         2,522            17           1             14
  Approach 2 ................        11,931            36           0             34
Private Not-for-Profit:
  Approach 1 ................         1,879         1,879         100          1,212
  Approach 2 ................        12,316        12,316         100          8,175
Private For-Profit:
  Approach 1 ................            67            12          18              1
  Approach 2 ................           250            38          15             21

Source: IPEDS
Note: Table includes programs at IHEs only.
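The sector shares cited above follow from the Approach 1 totals in Table 5, as a brief sketch shows (an editorial illustration; counts come from the tables):

    # A sketch of the sector shares cited above; counts are the
    # Approach 1 totals from Table 5.
    sectors = {
        "public": 2_522,
        "private not-for-profit": 1_879,
        "private for-profit": 67,
    }
    total = sum(sectors.values())   # 4,468

    for name, count in sectors.items():
        print(f"{name}: {count / total:.1%}")
    # public: 56.4%; private not-for-profit: 42.1%;
    # private for-profit: 1.5%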
The Department has no indication
that programs at small entities are more
likely to be ineligible for TEACH Grants
or title IV, HEA funds. Since all private
not-for-profit IHEs are considered to be
small because none are dominant in the
field, we would expect about 5 percent
of TEACH Grant volume at teacher
preparation programs at private not-for-profit IHEs to be at ineligible programs.
In AY 2014–15, approximately 43.7
percent of TEACH Grant disbursements
went to private not-for-profit IHEs, and
by applying that to the estimated
TEACH Grant volume in 2021 of
$95,918,782, the Department estimates
that TEACH Grant volume at private
not-for-profit IHEs in 2021 would be
approximately $42.0 million. At the five
percent low-performing or at-risk rate
assumed in the TEACH Grants portion
of the Cost, Benefits, and Transfers
section of the Regulatory Impact
Analysis, TEACH Grant revenues would
be reduced by approximately $2.1
million at programs at private not-for-profit entities in the initial year the
regulations are in effect and a lesser
amount after that. Much of this revenue
could be shifted to eligible programs
within the IHE or the sector, and the
cost to programs would be greatly
reduced by students substituting other
sources of funds for the TEACH Grants.
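A two-line check of the figures in this paragraph (an editorial sketch; the 43.7 percent share and the $95,918,782 estimated 2021 volume both appear above):

    # A sketch of the private not-for-profit volume estimate above.
    nfp_volume = 0.437 * 95_918_782          # ~ $42.0 million
    at_risk_reduction = nfp_volume * 0.05    # ~ $2.1 million
    print(f"${nfp_volume:,.0f}; ${at_risk_reduction:,.0f}")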
In addition to the teacher preparation
programs at IHEs included in Table 8,
approximately 1,281 alternative
certification programs offered outside of
IHEs are subject to the reporting
requirements in the regulations. The
Department assumes that a significant
majority of these programs are offered
by non-profit entities that are not
dominant in the field, so all of the
alternative certification teacher
preparation programs are considered to
be small entities. However, the reporting
burden for these programs falls on the
States. As discussed in the Paperwork
Reduction Act section of this document,
the estimated total paperwork burden
on IHEs would decrease by 66,740
hours. Small entities would benefit from
this relief from the current institutional
reporting requirements.
The final regulations are unlikely to
conflict with or duplicate existing
Federal regulations.
Paperwork Reduction Act of 1995
The Paperwork Reduction Act of 1995
(PRA) does not require you to respond
to a collection of information unless it
displays a valid OMB control number.
We display the valid OMB control
numbers assigned to the collections of
information in these final regulations at
the end of the affected sections of the
regulations.
Sections 612.3, 612.4, 612.5, 612.6,
612.7, 612.8, and 686.2 contain
information collection requirements.
Under the PRA, the Department has
submitted a copy of these sections,
related forms, and Information
Collection Requests (ICRs) to the Office
of Management and Budget (OMB) for
its review.
The OMB control number associated
with the regulations and related forms is
1840–0837. Due to changes described in
the Discussion of Costs, Benefits, and
Transfers section of the RIA, estimated
burdens have been updated below.
Start-Up and Annual Reporting Burden
These regulations implement a
statutory requirement that IHEs and
States establish an information and
accountability system through which
IHEs and States report on the
performance of their teacher preparation
programs. Because parts of the
regulations require IHEs and States to
establish or scale up certain systems and
processes in order to collect information
necessary for annual reporting, IHEs and
States may incur one-time start-up costs
for developing those systems and
processes. The burden associated with
start-up and annual reporting is
reported separately in this statement.
Section 612.3 Reporting Requirements
for the Institutional Report Cards
Section 205(a) of the HEA requires
that each IHE that provides a teacher
preparation program leading to State
certification or licensure report on a
statutorily enumerated series of data
elements for the programs it provides.
The HEOA revised a number of the
reporting requirements for IHEs.
The final regulations under § 612.3(a)
require that, beginning on April 1, 2018,
and annually thereafter, each IHE that
conducts traditional or alternative route
teacher preparation programs leading to
State initial teacher certification or
licensure and that enrolls students
receiving title IV, HEA funds report to
the State on the quality of its programs
using an IRC prescribed by the
Secretary.
Start-Up Burden
Entity-Level and Program-Level
Reporting
Under the current IRC, IHEs typically
report at the entity level rather than the
program level. For example, if an IHE
offers multiple teacher preparation
programs in a range of subject areas (for
example, music education and special
education), that IHE gathers data on
each of those programs, aggregates the
data, and reports the required
information as a single teacher
preparation entity on a single report
card. Under the final regulations and for
the reasons discussed in the NPRM and
the preamble to this final rule, reporting
is now required at the teacher
preparation program level rather than at
the entity level. No additional data must
be gathered as a consequence of this
regulatory requirement; instead, IHEs
will simply report the required data
before, rather than after, aggregation.
As a consequence, IHEs will not be
required to alter appreciably their
systems for data collection. However,
the Department acknowledges that in
order to communicate disaggregated
data, minimal recordkeeping
adjustments may be necessary. The
Department estimates that initial burden
for each IHE to adjust its recordkeeping
systems will be 10 hours per entity. In
the most recent year for which data are
available, 1,490 IHEs reported required
data to the Department through the IRC.
Therefore, the Department estimates
that the one-time total burden for IHEs
to adjust recordkeeping systems will be
14,900 hours (1,490 IHEs multiplied by
10 burden hours per IHE).
Subtotal of Start-Up Burden Under
§ 612.3
The Department believes that IHEs’
experience during prior title II reporting
cycles has provided sufficient
knowledge to ensure that IHEs will not
incur any significant start-up burden,
except for the change from entity-level
to program-level reporting described
above. Therefore, the subtotal of start-up
burden for § 612.3 is 14,900 hours.
Annual Reporting Burden
Changes to the Institutional Report Card
For a number of years IHEs have
gathered, aggregated, and reported data
on teacher preparation program
characteristics, including those required
under the HEOA, to the Department
using the IRC approved under OMB
control number 1840–0837. The
required reporting elements of the IRC
principally concern admissions criteria,
student characteristics, clinical
preparation, numbers of teachers
prepared, accreditation of the program,
and the pass rates and scaled scores of
teacher candidates on State teacher
certification and licensure
examinations.
Given all of the reporting changes
under these final rules as discussed in
the NPRM, the Department estimates
that each IHE will require 66 fewer
burden hours to prepare the revised IRC
annually. The Department estimates that
each IHE will require 146 hours to
complete the current IRC approved by
OMB. There would thus be an annual
burden of 80 hours to complete the
revised IRC (146 hours minus 66 hours
in reduced data collection). The
Department estimates that 1,490 IHEs
would respond to the IRC required
under the regulations, based on
reporting figures from the most recent
year data are available. Therefore,
reporting data using the IRC would
represent a total annual reporting
burden of 119,200 hours (80 hours
multiplied by 1,490 IHEs).
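For illustration only, the following short calculation reproduces the revised-IRC arithmetic above (a reader's sketch based on the estimates in this preamble, not part of the official burden statement):

```python
# Illustrative check of the revised IRC annual reporting burden.
# Figures are taken from the estimates above; not an official calculation.
CURRENT_IRC_HOURS = 146   # hours per IHE under the current IRC
HOURS_REDUCED = 66        # estimated reduction under the revised IRC
IHES = 1_490              # IHEs reporting in the most recent year

revised_hours = CURRENT_IRC_HOURS - HOURS_REDUCED   # 80 hours per IHE
total_annual = revised_hours * IHES                 # 119,200 hours nationwide
assert (revised_hours, total_annual) == (80, 119_200)
```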
Entity-Level and Program-Level
Reporting
As noted in the start-up burden
section of § 612.3, under the current
IRC, IHEs report teacher preparation
program data at the entity level. The
final regulations require that each IHE
report disaggregated data at the teacher
preparation program level. The
Department believes this will not
require any additional data collection or
appreciably alter the time needed to
calculate data reported to the
Department. However, the Department
believes that some additional reporting
burden will exist for IHEs’ electronic
input and submission of disaggregated
data because each IHE typically houses
multiple teacher preparation programs.
Based on the most recent year of data available, the Department estimates that there are 24,430 teacher preparation programs at 1,490 IHEs nationwide (of the 27,914 programs reported in total, 3,484 are alternative route programs not associated with IHEs). Based on these figures, the Department estimates that, on average, each of these IHEs offers 16.40 teacher preparation
programs. Because each IHE already
collects disaggregated IRC data, the
Department estimates it will take each
IHE one additional hour to fill in
existing disaggregated data into the
electronic IRC for each teacher
preparation program it offers. Because
IHEs already have to submit an IRC for
the IHE, we estimate that the added
burden for reporting on a program level
will be 15.40 hours (an average of 16.40
programs at one hour per program,
minus the existing submission of one
IRC for the IHE, or 15.40 hours).
Therefore, each IHE will incur an
average burden increase of 15.40 hours
(1 hour multiplied by an average of
15.40 teacher preparation programs at
each IHE), and there will be an overall
burden increase of 22,946 hours each
year associated with this regulatory
reporting requirement (15.40 multiplied
by 1,490 IHEs).
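The per-IHE arithmetic can likewise be checked with a short illustrative sketch (figures from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the added program-level reporting burden.
PROGRAMS_AT_IHES = 24_430   # teacher preparation programs housed at IHEs
IHES = 1_490

avg_programs = round(PROGRAMS_AT_IHES / IHES, 2)   # 16.40 programs per IHE
added_per_ihe = round(avg_programs - 1, 2)         # 15.40 hours; one entity-level IRC is already filed
total_added = round(added_per_ihe * IHES)          # 22,946 hours nationwide
assert (avg_programs, added_per_ihe, total_added) == (16.4, 15.4, 22_946)
```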
Posting on the Institution’s Web site
The regulations also require that the
IHE provide the information reported on
the IRC to the general public by
prominently and promptly posting the
IRC information on the IHE’s Web site.
Because the Department believes it is
reasonable to assume that an IHE
offering a teacher preparation program
and communicating data related to that
program by electronic means maintains
a Web site, the Department presumes
that posting such information to an
already-existing Web site will represent
a minimal burden increase. The
Department therefore estimates that
IHEs will require 0.5 hours (30 minutes)
to meet this requirement. This would
represent a total burden increase of 745
hours each year for all IHEs (0.5 hours
multiplied by 1,490 IHEs).
Subtotal of Annual Reporting Burden
Under § 612.3
Aggregating the annual burdens
calculated under the preceding sections
results in the following burdens:
Together, all IHEs would incur a total burden of 119,200 hours to complete the revised IRC, 22,946 hours to report program-level data, and 745 hours to post IRC data to their Web sites. This would constitute a total annual burden of 142,891 hours nationwide.
Total Institutional Report Card
Reporting Burden
Aggregating the start-up and annual
burdens calculated under the preceding
sections results in the following
burdens: Together, all IHEs would incur
a total start-up burden under § 612.3 of
14,900 hours and a total annual
reporting burden under § 612.3 of
142,891 hours. This would constitute a total of 157,791 burden hours under § 612.3 nationwide.
The burden estimate for the existing
IRC approved under OMB control
number 1840–0837 was 146 hours for
each IHE with a teacher preparation
program. When the current IRC was
established, the Department estimated
that 1,250 IHEs would provide
information using the electronic
submission of the form for a total
burden of 182,500 hours for all IHEs
(1,250 IHEs multiplied by 146 hours).
Applying these estimates to the current
number of IHEs that are required to
report (1,490) would constitute a burden
of 217,540 hours (1,490 IHEs multiplied
by 146 hours). Based on these estimates,
the revised IRC would constitute a net
burden reduction of 59,749 hours
nationwide (217,540 hours minus
157,791 hours).
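For readers who wish to verify the § 612.3 totals, the following sketch reproduces the arithmetic (illustrative only; all figures come from the estimates above):

```python
# Illustrative check of the total section 612.3 burden and the net reduction.
START_UP = 1_490 * 10             # 14,900 hours of one-time recordkeeping adjustments
ANNUAL = 119_200 + 22_946 + 745   # revised IRC + program-level reporting + Web posting
TOTAL = START_UP + ANNUAL         # 157,791 hours
BASELINE = 1_490 * 146            # 217,540 hours under the existing IRC
assert (TOTAL, BASELINE - TOTAL) == (157_791, 59_749)
```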
Section 612.4 Reporting Requirements
for the State Report Card
Section 205(b) of the HEA requires
that each State that receives funds under
the HEA provide to the Secretary and
make widely available to the public not
less than the statutorily required
specific information on the quality of
traditional and alternative route teacher
preparation programs. The State must
do so in a uniform and comprehensible
manner, conforming with definitions
and methods established by the
Secretary. Section 205(c) of the HEA
directs the Secretary to prescribe
regulations to ensure the validity,
reliability, accuracy, and integrity of the
data submitted. Section 206(b) requires
that IHEs assure the Secretary that their
teacher training programs respond to the
needs of LEAs, be closely linked with
the instructional decisions novice
teachers confront in the classroom, and
prepare candidates to work with diverse
populations and in urban and rural
settings, as applicable.
Implementing the relevant statutory
directives, the regulations under
§ 612.4(a) require that, starting October
1, 2019, and annually thereafter, each
State report on the SRC the quality of all
approved teacher preparation programs
in the State, whether or not they enroll
students receiving Federal assistance
under the HEA, including distance
education programs. This new SRC, to
be implemented in 2019, is an update of
the current SRC. The State must also
make the SRC information widely
available to the general public by
posting the information on the State’s
Web site.
Section 103(20) of the HEA and
§ 612.2(d) of the proposed regulations
define ‘‘State’’ to include nine locations
in addition to the 50 States: The
Commonwealth of Puerto Rico, the
District of Columbia, Guam, American
Samoa, the United States Virgin Islands,
the Commonwealth of the Northern
Mariana Islands, the Freely Associated
States, which include the Republic of
the Marshall Islands, the Federated
States of Micronesia, and the Republic
of Palau. For this reason, all reporting required of States explicitly enumerated under section 205(b) of the HEA (and the related portions of the regulations, specifically §§ 612.4(a) and 612.6(b)) applies to these 59 States. However,
certain additional regulatory
requirements (specifically §§ 612.4(b),
612.4(c), 612.5, and 612.6(a)) only apply
to the 50 States of the Union, the
Commonwealth of Puerto Rico, and the
District of Columbia. The burden
estimates under those portions of this
report apply to those 52 States. For a
full discussion of the reasons for the
application of certain regulatory
provisions to different States, see the
preamble to the NPRM.
Entity-Level and Program-Level
Reporting
As noted in the start-up and annual
burden sections of § 612.3, under the
current information collection process,
data are collected at the entity level, and
the final regulations require data
reporting at the program level. In 2015,
States reported that there were 27,914
teacher preparation programs offered,
including 24,430 at IHEs and 3,484
through alternative route teacher
preparation programs not associated
with IHEs. In addition, as discussed in
the Supplemental NPRM, the
Department estimates that the sections
of these final regulations addressing
teacher preparation programs offered
through distance education will result
in 812 additional reporting instances.
Because the remainder of the data
reporting discussed in this burden
statement is transmitted using the SRC,
for those burden estimates concerning
reporting on the basis of teacher
preparation programs, the Department
uses the estimate of 28,726 teacher
preparation programs (27,914 teacher
preparation programs plus 812 reporting
instances related to teacher preparation
programs offered through distance
education).
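For illustration only, the following sketch reproduces the reporting-instance count used throughout the SRC burden estimates below (figures taken from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the program count used as the SRC reporting basis.
IHE_PROGRAMS = 24_430        # programs at IHEs (2015 SRC data)
ALT_ROUTE_PROGRAMS = 3_484   # alternative route programs not associated with IHEs
DISTANCE_INSTANCES = 812     # added reporting instances for distance education

programs = IHE_PROGRAMS + ALT_ROUTE_PROGRAMS      # 27,914 programs
reporting_basis = programs + DISTANCE_INSTANCES   # 28,726 reporting instances
assert (programs, reporting_basis) == (27_914, 28_726)
```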
Start-Up and Annual Burden Under
§ 612.4(a)
Section 612.4(a) codifies State
reporting requirements expressly
referenced in section 205(b) of the HEA;
the remainder of § 612.4 provides for
reporting consistent with the directives
to the Secretary under sections 205(b)
and (c) and the required assurance
described in section 206(c).
The HEOA revised a number of the
reporting requirements for States. The
requirements of the SRC are more
numerous than those contained in the
IRC, but the reporting elements required
in both are similar in many respects. In
addition, the Department has
successfully integrated reporting to the
extent that data reported by IHEs in the
IRC are pre-populated in the relevant
fields on which the States are required
to report in the SRC. In addition to the
elements discussed in § 612.3 of this
burden statement regarding the IRC,
under the statute a State must also
report on its certification and licensure
requirements and standards, State-wide
pass rates and scaled scores, shortages
of highly qualified teachers, and
information related to low-performing
or at-risk teacher preparation programs
in the State.
The SRC currently in use, approved
under OMB control number 1840–0837,
collects information on these elements.
States have been successfully reporting
information under this collection for
many years. The burden estimate for the
existing SRC was 911 burden hours per
State. In the burden estimate for that
SRC, the Department reported that 59
States were required to report data, the same number required to report under the current requirements.
This represented a total burden of
53,749 hours for all States (59 States
multiplied by 911 hours). This burden
calculation was based on entity-level, rather than program-level, reporting (for a more detailed discussion of the consequences of this issue, see the sections on entity-level and program-level reporting in §§ 612.3 and 612.4). However, because relevant program-level data reported by the IHEs on the
IRC will be pre-populated for States on
the SRC, the burden associated with
program-level reporting under § 612.4(a)
will be minimal. Those elements that
will require additional burden are
discussed in the subsequent paragraphs
of this section.
Elements Changed in the State Report Card

Using the calculations outlined in the NPRM and changes discussed above, the Department estimates that the total reporting burden for each State will be 243 hours (193 hours for the revised SRC plus the additional statutory reporting requirements totaling 50 hours). This would represent a reduction of 668 burden hours for each State to complete the requirements of the SRC, as compared to approved OMB collection 1840–0837 (911 burden hours under the current SRC compared to 243 burden hours under the revised SRC). The total burden for States to report this information would be 14,337 hours (243 hours multiplied by 59 States).

Posting on the State's Web Site

The final regulations also require that the State provide the information reported on the SRC to the general public by prominently and promptly posting the SRC information on the State's Web site. Because the Department believes it is reasonable to assume that each State that communicates data related to its teacher preparation programs by electronic means maintains a Web site, the Department presumes that posting such information to an already-existing Web site represents a minimal burden increase. The Department therefore estimates that States will require 0.5 hours (30 minutes) to meet this requirement. This would represent a total burden increase of 29.5 hours each year for all States (0.5 hours multiplied by 59 States).

Subtotal § 612.4(a) Start-Up and Annual Reporting Burden

As noted in the preceding discussion, there is no start-up burden associated solely with § 612.4(a). Therefore, the aggregate start-up and annual reporting burden associated with reporting elements under § 612.4(a) would be 14,366.5 hours (243 hours multiplied by 59 States plus 0.5 hours for each of the 59 States).

Reporting Required Under § 612.4(b) and § 612.4(c)

The preceding burden discussion of § 612.4 focused on burdens related to the reporting requirements under section 205(b) of the HEA and reflected in 34 CFR 612.4(a). The remaining burden discussion of § 612.4 concerns reporting required under § 612.4(b) and (c).

Start-Up Burden

Meaningful Differentiations

Under § 612.4(b)(1), a State is required to make meaningful differentiations in teacher preparation program performance using at least three performance levels—low-performing teacher preparation program, at-risk teacher preparation program, and effective teacher preparation program—based on the indicators in § 612.5 and including employment outcomes for high-need schools and student learning outcomes.

The Department believes that State higher education authorities responsible for making State-level classifications of teacher preparation programs will require time to make meaningful differentiations in their classifications and determine whether alternative performance levels are warranted. States are required to consult with external stakeholders, review best practices of early adopter States that have more experience in program classification, and seek technical assistance.

States will also have to determine how they will make such classifications. For example, a State may choose to classify all teacher preparation programs on an absolute basis using a cut-off score that weighs the various indicators, or a State may choose to classify teacher preparation programs on a relative basis, electing to classify a certain top percentile as exceptional, the next percentile as effective, and so on. In exercising this discretion, States may choose to consult with various external and internal parties and discuss lessons learned with those States already making such classifications of their teacher preparation programs.

The Department estimates that each State will require 70 hours to make these determinations, and this would constitute a one-time total burden of 3,640 hours (70 hours multiplied by 52 States).

Assurance of Specialized Accreditation
Under § 612.4(b)(3)(i)(A), for each
teacher preparation program, a State
must provide disaggregated data for
each of the indicators identified
pursuant to § 612.5. See the start-up
burden section of § 612.5 for a more
detailed discussion of the burden
associated with gathering the indicator
data required to be reported under this
regulatory section. See the annual
reporting burden section of § 612.4 for a
discussion of the ongoing reporting
burden associated with reporting
disaggregated indicator data under this
regulation. No further burden exists
beyond the burden described in these
two sections.
Under § 612.4(b)(3)(i)(B), a State is
required to provide, for each teacher
preparation program in the State, the
State’s assurance that the teacher
preparation program either: (a) Is
accredited by a specialized agency or (b)
provides teacher candidates with
content and pedagogical knowledge,
quality clinical preparation, and
rigorous teacher exit qualifications. See
the start-up burden section of § 612.5 for
a detailed discussion of the burden
associated with gathering the indicator
data required to be reported under this
regulation. See the annual reporting
burden section of § 612.4 for a
discussion of the ongoing reporting
burden associated with reporting these
assurances. No further burden exists
beyond the burden described in these
two sections.
Indicator Weighting
Under § 612.4(b)(2)(ii), a State must
provide its weighting of the different
indicators in § 612.5 for purposes of
describing the State’s assessment of
program performance. See the start-up
burden section of § 612.4 on stakeholder
consultation for a detailed discussion of
the burden associated with establishing
the weighting of the various indicators
under § 612.5. See the annual reporting
burden section of § 612.4 for a
discussion of the ongoing reporting
burden associated with reporting these
relative weightings. No further burden
exists beyond the burden described in
these two sections.
State-Level Rewards or Consequences
Under § 612.4(b)(2)(iii), a State must
provide the State-level rewards or
consequences associated with the
designated performance levels. See the
start-up burden section of § 612.4 on
stakeholder consultation for a more
detailed discussion of the burden
associated with establishing these
rewards or consequences. See the
annual reporting burden section of
§ 612.4 for a discussion of the ongoing
reporting burden associated with
reporting these rewards or consequences. No
further burden exists beyond the burden
described in these two sections.
Aggregation of Small Programs
Under § 612.4(b)(3), a State must
ensure that all of its teacher preparation
programs in that State are represented
on the SRC. The Department recognized
that many teacher preparation programs
consist of a small number of prospective
teachers and that reporting on these
programs could present privacy and
data validity issues. After discussion
and input from various non-Federal
negotiators during the negotiated
rulemaking process, the Department
elected to set a required reporting
program size threshold of 25. However,
the Department realized that, on the
basis of research examining accuracy
and validity relating to reporting small
program sizes, some States may prefer to
report on programs smaller than 25.
Section 612.4(b)(3)(i) permits States to
report using a lower program size
threshold. In order to determine the
preferred program size threshold for its
programs, a State may review existing
research or the practices of other States
that set program size thresholds to
determine feasibility for its own teacher
preparation program reporting. The
Department estimates that such review
will require 20 hours for each State, and
this would constitute a one-time total
burden of 1,040 hours (20 hours
multiplied by 52 States).
Under § 612.4(b)(3), States must also report on the remaining small programs that do not
meet the program size threshold the
State chooses. States will be able to do
so through a combination of two
possible aggregation methods described
in § 612.4(b)(3)(ii). The preferred
aggregation methodology is to be
determined by the States after
consultation with a group of
stakeholders. For a detailed discussion
of the burden related to this
consultation process, see the start-up
burden section of § 612.4, which
discusses the stakeholder consultation
process. Apart from the burden
discussed in that section, no other
burden is associated with this
requirement.
Stakeholder Consultation

Under § 612.4(c), a State must consult with a representative group of stakeholders to determine the procedures for assessing and reporting the performance of each teacher preparation program in the State. This stakeholder group, composed of a variety of members representing viewpoints and interests affected by these regulations, must provide input on a number of issues within the State's discretion. There are four issues in particular on which the stakeholder group advises the State—

a. The relative weighting of the indicators identified in § 612.5;
b. The preferred method for aggregation of data such that performance data for a maximum number of small programs are reported;
c. The State-level rewards or consequences associated with the designated performance levels; and
d. The appropriate process and opportunity for programs to challenge the accuracy of their performance data and program classification.

The Department believes that this consultative process will require that the group convene at least three times to afford each of the stakeholder representatives multiple opportunities to meet and consult with the constituencies they represent. Further, the Department believes that members of the stakeholder group will require time to review relevant materials and academic literature and advise on the relative strength of each of the performance indicators under § 612.5, as well as any other matters requested by the State.

These stakeholders will also require time to advise whether any of the particular indicators will have more or less predictive value for the teacher preparation programs in their State, given its unique traits. Finally, because some States have already implemented one or more components of the regulatory indicators of program quality, these stakeholders will require time to review these States' experiences in implementing similar systems. The Department estimates that the combination of gathering the stakeholder group multiple times, review of the relevant literature and other States' experiences, and making determinations unique to their particular State will take 900 hours for each State (60 hours per stakeholder multiplied by 15 stakeholders). This would constitute a one-time total of 46,800 hours for all States (900 hours multiplied by 52 States).

Subtotal of Start-Up Burden Under § 612.4(b) and § 612.4(c)
Aggregating the start-up burdens
calculated under the preceding sections
results in the following burdens: All
States would incur a total burden of
3,640 hours to make meaningful
differentiations in program
classifications, 1,040 hours to determine
the State’s aggregation of small
programs, and 46,800 hours to complete
the stakeholder consultation process.
This would constitute a total of 51,480
hours of start-up burden nationwide.
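The components of this subtotal can be verified with the following illustrative sketch (figures from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the section 612.4(b) and (c) start-up burden subtotal.
STATES = 52
differentiations = 70 * STATES    # 3,640 hours for meaningful differentiations
threshold_review = 20 * STATES    # 1,040 hours to review small-program thresholds
consultation = 60 * 15 * STATES   # 46,800 hours (15 stakeholders x 60 hours per State)
assert differentiations + threshold_review + consultation == 51_480
```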
Annual Reporting Burden
Classification of Teacher Preparation
Programs
The bulk of the State burden
associated with assigning programs
among classification levels should be in
gathering and compiling data on the
indicators of program quality that
compose the basis for the classification.
Once a State has made a determination
of how a teacher preparation program
will be classified at a particular
performance level, applying the data
gathered under § 612.5 to this
classification basis is straightforward.
The Department estimates that States
will require 0.5 hours (30 minutes) to
apply already-gathered indicator data to
existing program classification
methodology. The total burden
associated with classification of all
teacher preparation programs using
meaningful differentiations would be
14,363 hours each year (0.5 hours
multiplied by 28,726 teacher
preparation programs).
Disaggregated Data on Each Indicator in
§ 612.5
Under § 612.4(b)(2)(i)(A), States must
report on the indicators of program
performance in § 612.5. For a full
discussion of the burden related to the
reporting of this requirement, see the
annual reporting burden section of
§ 612.5. Apart from the burden
discussed in this section, no other
burden is associated with this
requirement.
Indicator Weighting
Under § 612.4(b)(2)(ii), States must report the relative weight they place on
each of the different indicators
enumerated in § 612.5. The burden
associated with this reporting is
minimal: After the State, in consultation
with a group of stakeholders, has made
the determination about the percentage
weight it will place on each of these
indicators, reporting this information on
the SRC is a simple matter of inputting
a number for each of the indicators.
Under § 612.5, this requires the State, at a minimum, to input weights for eight general indicators of quality.
Note: The eight indicators are—
a. Associated student learning outcome
results;
b. Teacher placement results;
c. Teacher retention results;
d. Teacher placement rate calculated for
high-need school results;
e. Teacher retention rate calculated for
high-need school results;
f. Teacher satisfaction survey results;
g. Employer satisfaction survey results; and
h. Teacher preparation program
characteristics.
This reporting burden will not be
affected by the number of teacher
preparation programs in a State, because
such weighting applies equally to each
program. Although the State has the
discretion to add indicators, the
Department does not believe that
transmission of an additional figure
representing the percentage weighting
assigned to that indicator will constitute
an appreciable burden increase. The
Department therefore estimates that
each State will incur a burden of 0.25
hours (15 minutes) to report the relative
weighting of the regulatory indicators of
program performance. This would
constitute a total burden on States of 13
hours each year (0.25 hours multiplied
by 52 States).
State-Level Rewards or Consequences
Similar to the reporting required
under § 612.4(b)(2)(ii), after a State has
made the requisite determination about
rewards and consequences, reporting
those rewards and consequences
represents a relatively low burden.
States must report this on the SRC during the first year of implementation; the SRC could provide States with a drop-down list representing common rewards or consequences in use by early adopter States, and States can briefly describe those rewards or consequences not represented in the drop-down options. For subsequent years, the SRC could be pre-populated with the prior year's selected rewards and
consequences, such that there will be no
further burden associated with
subsequent year reporting unless the
State altered its rewards and
consequences. For these reasons, the
Department estimates that States will
incur, on average, 0.5 hours (30
minutes) of burden in the first year of
implementation to report the State-level
rewards and consequences, and 0.1
hours (6 minutes) of burden in each
subsequent year. The Department
therefore estimates that the total burden
for the first year of implementation of
this regulatory requirement will be 26
hours (0.5 hours multiplied by 52
States) and 5.2 hours each year
thereafter (0.1 hours multiplied by 52
States).
Stakeholder Consultation
Under § 612.4(b)(4), during the first
year of reporting and every five years
thereafter, States must report on the
procedures they established in
consultation with the group of
stakeholders described under
§ 612.4(c)(1). The burden associated
with the first and third of these four
procedures, the weighting of the
indicators and State-level rewards and
consequences associated with each
performance level, respectively, are
discussed in the preceding paragraphs
of this section.
The second procedure, the method by
which small programs are aggregated, is
a relatively straightforward reporting
procedure on the SRC. Pursuant to
§ 612.4(b)(3)(ii), a State may aggregate small programs using either of two methods or a combination of both: it can aggregate programs that are similar in teacher preparation subject matter, or it can aggregate using prior-year data, including data from multiple prior years. On the SRC, the State simply
indicates the method it uses. The
Department estimates that States will
require 0.5 hours (30 minutes) to enter
these data every fifth year. On an
annualized basis, this would therefore
constitute a total burden of 5.2 hours
(0.5 hours multiplied by 52 States
divided by five to annualize burden for
reporting every fifth year).
The fourth procedure that States must
report under § 612.4(b)(4) is the method
by which teacher preparation programs
in the State are able to challenge the
accuracy of their data and the
classification of their program. First, the
Department believes that States will
incur a paperwork burden each year
from recordkeeping and from publishing decisions on these challenges. Because
the Department believes the instances of
these appeals will be relatively rare, we
estimate that each State will incur 10
hours of burden each year related to
recordkeeping and publishing decisions.
This would constitute an annual
reporting burden of 520 hours (10 hours
multiplied by 52 States).
After States and their stakeholder
groups determine the preferred method
for programs to challenge data, reporting
that information will likely take the
form of narrative responses. This is
because the method for challenging data
may differ greatly from State to State,
and it is difficult for the Department to
predict what methods States will
choose. The Department therefore
estimates that reporting this information
in narrative form during the first year
will constitute a burden of 3 hours for
each State. This would represent a total
reporting burden of 156 hours (3 hours
multiplied by 52 States).
In subsequent reporting cycles, the
Department can examine State
responses and (1) pre-populate this
response for States that have not altered
their method for challenging data or (2)
provide a drop-down list of
representative alternatives. This will
minimize subsequent burden for most
States. The Department therefore
estimates that in subsequent reporting
cycles (every five years under the final
regulations), only 10 States will require
more time to provide additional
narrative responses totaling 3 burden
hours each, with the remaining 42
States incurring a negligible burden.
This represents an annualized reporting burden of 6 hours for each of those 10 States, for a total annualized reporting burden of 60 hours in subsequent years (6 hours multiplied by 10 States).
Under § 612.4(c)(2), each State must
periodically examine the quality of its
data collection and reporting activities
and modify those activities as
appropriate. The Department believes
that this review will be carried out in a
manner similar to the one described for
the initial stakeholder determinations in
the preceding paragraphs: States will
consult with representative groups to
determine their experience with
providing and using the collected data,
and they will consult with data experts
to ensure the validity and reliability of
the data collected. The Department
believes such a review will recur every
three years, on average. Because this
review will take place years after the
State’s initial implementation of the
regulations, the Department further
believes that the State’s review will be
of relatively little burden. This is
because the State’s review will be based
on the State’s own experience with
collecting and reporting data pursuant
to the regulations, and because States
can consult with many other States to
determine best practices. For these
reasons, the Department estimates that
the periodic review and modification of
data collection and reporting will
require 30 hours every three years or an
annualized burden of 10 hours for each
State. This would constitute a total
annualized burden of 520 hours for all
States (10 hours per year multiplied by
52 States).
Subtotal Annual Reporting Burden
Under § 612.4(b) and § 612.4(c)
Aggregating the annual burdens
calculated under the preceding sections
results in the following: All States
would incur a burden of 14,363 hours
to report classifications of teacher
preparation programs, 13 hours to report
State indicator weightings, 26 hours in
the first year and 5.2 hours in
subsequent years to report State-level
rewards and consequences associated
with each performance classification,
5.2 hours to report the method of
program aggregation, 520 hours for
recordkeeping and publishing appeal
decisions, 156 hours the first year and
60 hours in subsequent years to report
the process for challenging data and
program classification, and 520 hours to
report on the examination of data
collection quality. This totals 15,603.2
hours of annual burden in the first year
and 15,486.4 hours of annual burden in
subsequent years nationwide.
Total Reporting Burden Under § 612.4
Aggregating the start-up and annual
burdens calculated under the preceding
sections results in the following
burdens: All States would incur a total
burden under § 612.4(a) of 14,366.5
hours, a start-up burden under
§§ 612.4(b) and 612.4(c) of 51,480 hours,
and an annual burden under §§ 612.4(b)
and 612.4(c) of 15,603.2 hours in the
first year and 15,486.4 hours in
subsequent years. This totals between
81,332.9 and 81,449.7 total burden
hours under § 612.4 nationwide. Based
on the prior estimate of 53,749 hours of
reporting burden on OMB collection
1840–0837, the total burden increase
under § 612.4 is between 27,583.9 hours
and 27,700.7 hours (a range of 81,332.9 to 81,449.7 total burden hours minus the 53,749 hours under the current collection).
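The § 612.4 subtotals and the resulting range can be reproduced with the following illustrative sketch (figures from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the section 612.4 annual subtotals and total range.
first_year = 14_363 + 13 + 26 + 5.2 + 520 + 156 + 520   # 15,603.2 hours
subsequent = 14_363 + 13 + 5.2 + 5.2 + 520 + 60 + 520   # 15,486.4 hours

SUB_612_4A = 14_366.5   # start-up and annual burden under section 612.4(a)
START_UP_BC = 51_480    # start-up burden under sections 612.4(b) and (c)
low = SUB_612_4A + START_UP_BC + subsequent             # 81,332.9 hours
high = SUB_612_4A + START_UP_BC + first_year            # 81,449.7 hours
BASELINE = 59 * 911                                     # 53,749 hours under the current SRC
print(round(low, 1), round(high, 1))                    # total range under section 612.4
print(round(low - BASELINE, 1), round(high - BASELINE, 1))  # net increase: 27,583.9 to 27,700.7
```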
Section 612.5 Indicators a State Must
Use To Report on Teacher Preparation
Program Performance
The final regulations at § 612.5(a)(1)
through (a)(4) identify those indicators
that a State is required to use to assess
the academic content knowledge and
teaching skills of novice teachers from
each of its teacher preparation
programs. Under the regulations, a State
must use the following indicators of
teacher preparation program
performance: (a) Student learning
outcomes, (b) employment outcomes, (c)
survey outcomes, and (d) whether the
program (1) is accredited by a
specialized accrediting agency or (2)
produces teacher candidates with
content and pedagogical knowledge and
quality clinical preparation, who have
met rigorous exit standards. Section
612.5(b) permits a State, at its
discretion, to establish additional
indicators of academic content
knowledge and teaching skills.
Start-Up Burden
Student Learning Outcomes
As described in the Discussion of
Costs, Benefits, and Transfers section of
the RIA, we do not estimate that States
will incur any additional burden
associated with creating systems for
evaluating student learning outcomes.
However, the regulations also require
that States link student growth or
teacher evaluation data back to each
teacher’s preparation programs
consistent with State discretionary
guidelines included in § 612.4.
Currently, few States have such
capacity. However, based on data from
the SLDS program, it appears that 30
States, the District of Columbia, and the
Commonwealth of Puerto Rico either
already have the ability to aggregate data
on student achievement and map back
to teacher preparation programs or have
committed to do so. For these 30 States, the District of Columbia, and the Commonwealth of Puerto Rico, we estimate that no additional burden will be incurred to link student learning outcomes back to teacher preparation programs.
For the remaining 20 States, the Department estimates that they will require 2,940 hours for each State, for a
total burden of 58,800 hours nationwide
(2,940 hours multiplied by 20 States).
Employment Outcomes
Section 612.5(a)(2) requires a State to
provide data on each teacher
preparation program’s teacher
placement rate as well as the teacher
placement rate calculated for high-need
schools. High-need schools are defined
in § 612.2(d) by using the definition of
‘‘high-need school’’ in section 200(11) of
the HEA. The regulations give States
discretion to exclude those novice
teachers or recent graduates from this
measure if they are teaching in a private
school, teaching in another State,
enrolled in graduate school, or engaged
in military service. States also have the
discretion to treat this rate differently
for alternative route and traditional
route providers.
Section 612.5(a)(2) requires a State to
provide data on each teacher
preparation program’s teacher retention
rate and teacher retention rate
calculated for high-need schools. The
regulations give States discretion to
exclude those novice teachers or recent
graduates from this measure if they are
teaching in a private school (or other
school not requiring State certification),
another State, enrolled in graduate
school, or serving in the military. States
also have the discretion to treat this rate
differently for alternative route and
traditional route providers.
As discussed in the NPRM, the
Department believes that only 11 States
will likely incur additional burden in
collecting information about the
employment and retention of recent
graduates of teacher preparation
programs in their jurisdictions. To the
extent that it is not possible to establish
these measures using existing data
systems, States may need to obtain some
or all of this information from teacher
preparation programs or from the
teachers themselves upon requests for
certification and licensure. The
Department estimates that 200 hours
may be required at the State level to
collect information about novice
teachers employed in full-time teaching
positions (including designing the data
request instruments, disseminating
them, providing training or other
technical assistance on completing the
instruments, collecting the data, and
checking their accuracy), which would
amount to a total of 2,200 hours (200
hours multiplied by 11 States).
Survey Outcomes
Section 612.5(a)(3) requires a State to
provide data on each teacher
preparation program’s teacher survey
results. This requires States to report
data from a survey of novice teachers in
their first year of teaching designed to
capture their perceptions of whether the
training that they received was
sufficient to meet classroom and
profession realities.
Section 612.5(a)(3) also requires a
State to provide data on each teacher
preparation program’s employer survey
results. This requires States to report
data from a survey of employers or
supervisors designed to capture their
perceptions of whether the novice
teachers they employ or supervise were
prepared sufficiently to meet classroom
and profession realities.
Some States and IHEs already survey
graduates of their teacher preparation
programs. The sample size and the length of the survey instrument can strongly affect
the potential burden associated with
administering the survey. The
Department has learned that some States
already have experience carrying out
such surveys (for a more detailed
discussion of these and other estimates
in this section, see the Discussion of
Costs, Benefits and Transfers section
regarding student learning outcomes in
the RIA). In order to account for
variance in States’ abilities to conduct
such surveys, the variance in the survey
instruments themselves, and the need to
ensure statistical validity and reliability,
the Department assumes a somewhat
higher burden estimate than States’
initial experiences.
Based on Departmental consultation
with researchers experienced in
carrying out survey research, the
Department assumes that survey
instruments will not require more than
30 minutes to complete. The
Department further assumes that a State
can develop a survey in 1,224 hours.
Assuming that States with experience in
administering surveys will incur a lower
cost, the Department assumes that the total burden incurred nationwide would be at most 63,648 hours (1,224 hours multiplied by 52 States).
Teacher Preparation Program
Characteristics
Under § 612.5(a)(4), States must
report, for each teacher preparation
program in the State whether it: (a) Is
accredited by a specialized accrediting
agency recognized by the Secretary for
accreditation of professional teacher
education programs, or (b) provides
teacher candidates with content and
pedagogical knowledge and quality
clinical preparation, and has rigorous
teacher candidate exit standards.
CAEP, a union of two formerly
independent national accrediting
agencies, the National Council for
Accreditation of Teacher Education
(NCATE) and the Teacher Education
Accreditation Council (TEAC), reports
that currently it has fully accredited
approximately 800 IHEs. The existing
IRC currently requires reporting of
whether each teacher preparation
program is accredited by a specialized
accrediting agency, and if so, which
one. We note that, as of July 1, 2016,
CAEP has not been recognized by the
Secretary for accreditation of teacher
preparation programs. As such,
programs accredited by CAEP would not
qualify under § 612.5(a)(4)(i). However,
as described in the discussion of
comments above, States would be able
to use accreditation by CAEP as an
indicator that the teacher preparation
program meets the requirements of
§ 612.5(a)(4)(ii). In addition, we explain
in the comments above that a State also
could meet the reporting requirements
in § 612.5(a)(4)(ii) by indicating that a
program has been accredited by an
accrediting organization whose
standards cover the program
characteristics identified in that section.
Because section 205(a)(1)(D) of the HEA
requires IHEs to include in their IRCs
the identity of any agency that has
accredited their programs, and the
number of such accrediting agencies is
small, States should readily know
whether these other agencies meet these
standards. For these reasons, the
Department believes that no significant
start-up burden will be associated with
State determinations of specialized
accreditation of teacher preparation
programs for those programs that are
already accredited.
As discussed in the NPRM, the
Department estimates that States will
have to provide information for 15,335
teacher preparation programs
nationwide (11,461 unaccredited
programs at IHEs plus 3,484 programs at
alternative routes not affiliated with an
IHE plus 390 reporting instances for
teacher preparation programs offered
through distance education).
The Department believes that States
will be able to make use of accreditation
guidelines from specialized accrediting
agencies to determine the measures that
will adequately inform them about which of their teacher preparation programs provide teacher candidates with content and pedagogical knowledge and quality clinical preparation and which have rigorous teacher candidate exit qualifications—the indicators contained
in § 612.5(a)(4)(ii). The Department
estimates that States will require 2
hours for each teacher preparation
program to determine whether or not it
can provide such information.
Therefore, the Department estimates
that the total reporting burden to
provide this information would be
30,670 hours (15,335 teacher
preparation programs multiplied by 2
hours).
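The count of programs requiring these determinations can be verified with the following illustrative sketch (figures from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the count of programs lacking specialized accreditation.
UNACCREDITED_IHE_PROGRAMS = 11_461
ALT_ROUTE_PROGRAMS = 3_484
DISTANCE_INSTANCES = 390

programs = UNACCREDITED_IHE_PROGRAMS + ALT_ROUTE_PROGRAMS + DISTANCE_INSTANCES
assert programs == 15_335
assert programs * 2 == 30_670   # 2 hours per program to make the determination
```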
Subtotal of Start-Up Reporting Burden
Under § 612.5
Aggregating the start-up burdens
calculated under the preceding sections
results in the following burdens: All
States would incur a burden of 58,800
hours to link student learning outcome
measures back to each teacher’s
preparation program, 2,200 hours to
measure employment outcomes, 63,648
hours to develop surveys, and 30,670
hours to establish the process to obtain
information related to certain indicators
for teacher preparation programs
without specialized accreditation. This
totals 155,318 hours of start-up burden
nationwide.
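For illustration only, this subtotal can be reproduced as follows (figures from the preceding sections; not part of the official estimates):

```python
# Illustrative check of the section 612.5 start-up burden subtotal.
student_learning = 2_940 * 20   # 20 States still lacking linkage capacity
employment = 200 * 11           # 11 States needing new employment-data collections
surveys = 1_224 * 52            # survey development in all 52 States
accreditation = 15_335 * 2      # indicator determinations for unaccredited programs
assert student_learning + employment + surveys + accreditation == 155_318
```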
Annual Reporting Burden
Under § 612.5(a), States must
transmit, through specific elements on
the SRC, information related to
indicators of academic content
knowledge and teaching skills of novice
teachers for each teacher preparation
program in the State. We discuss the
burden associated with establishing
systems related to gathering these data
in the section discussing start-up
burden associated with § 612.5. The
following section describes the burden
associated with gathering these data and
reporting them to the Department
annually.
Student Learning Outcomes
Under § 612.5(a)(1), States are
required to transmit information related
to student learning outcomes for each
teacher preparation program in the
State. The Department believes that in
order to ensure the validity of the data,
each State will require two hours to
gather and compile data related to the
student learning outcomes of each
teacher preparation program. Much of
the burden related to data collection
will be built into State-established
reporting systems, limiting the burden
related to data collection to technical
support to ensure proper reporting and
to correct data that had been entered incorrectly. States have the discretion to
use student growth measures or teacher
evaluation measures in determining
student learning outcomes. Regardless
of the measure(s) used, the Department
estimates that States will require 0.5
hours (30 minutes) for each teacher
preparation program to convey this
information to the Department through
the SRC. This is because these measures
will be calculated on a quantitative
basis. The combination of gathering and
reporting data related to student
learning outcomes therefore constitutes
a burden of 2.5 hours for each teacher
preparation program, and would
represent a total burden of 71,815 hours
annually (2.5 hours multiplied by
28,726 teacher preparation programs).
Employment Outcomes
Under § 612.5(a)(2), States are
required to transmit information related
to employment outcomes for each
teacher preparation program in the
State. In order to report employment
outcomes to the Department, States
must compile and transmit teacher
placement rate data, teacher placement
rate data calculated for high-need
schools, teacher retention rate data, and
teacher retention rate data for high-need
schools. Similar to the process for
reporting student learning outcome
data, much of the burden related to
gathering data on employment outcomes
is subsumed into the State-established
data systems, which provide
information on whether and where
teachers were employed. The
Department estimates that States will
require 3 hours to gather data both on
teacher placement and teacher retention
for each teacher preparation program in
the State. Reporting these data using the
SRC is relatively straightforward. The
measures are the percentage of teachers
placed and the percentage of teachers
who continued to teach, both generally
and at high-need schools. The
Department therefore estimates that
States will require 0.5 hours (30
minutes) for each teacher preparation
program to convey this information to
the Department through the SRC. The
combination of gathering and reporting
data related to employment outcomes
therefore constitutes a burden of 3.5
hours for each teacher preparation
program and would represent a total
burden of 100,541 hours annually (3.5
hours multiplied by 28,726 teacher
preparation programs).
Survey Outcomes
In addition to the start-up burden
needed to produce a survey, States will
incur annual burdens to administer the
survey. Surveys will include, but will
not be limited to, a teacher survey and
an employer survey, designed to capture
perceptions of whether novice teachers
who are employed as teachers in their
first year of teaching in the State where
the teacher preparation program is
located possess the skills needed to
succeed in the classroom. The burdens
for administering an annual survey will
be borne by the State administering the
survey and the respondents completing
it. For the reasons discussed in the RIA
in this document, the Department
estimates that each respondent will require approximately 0.5 hours (30 minutes) to complete a survey instrument, with a sufficient number of instruments collected to ensure an adequate response rate. The
Department employs an estimate of
253,042 respondents (70 percent of
361,488—the 180,744 completers plus
their 180,744 employers) that will be
required to complete the survey.
Therefore, the Department estimates
that the annual burden to respondents
nationwide would be 126,521 hours
(253,042 respondents multiplied by 0.5
hours per respondent).
With respect to burden incurred by
States to administer the surveys
annually, the Department estimates that
one hour of burden will be incurred for
every respondent to the surveys. This
would constitute an annual burden
nationwide of 253,042 hours (253,042
respondents multiplied by one hour per
respondent).
Under § 612.5(a)(3), after these
surveys are administered, States are
required to report the information using
the SRC. In order to report survey
outcomes to the Department, the
Department estimates that States will
need 0.5 hours to report the quantitative
data related to the survey responses for
each instrument on the SRC,
constituting a total burden of one hour
to report data on both instruments. This
would represent a total burden of 28,726
hours annually (1 hour multiplied by
28,726 teacher preparation programs).
The total burden associated with
administering, completing, and
reporting data on the surveys therefore
constitutes 408,289 hours annually
(126,521 hours plus 253,042 hours plus
28,726 hours).
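The survey-outcome figures can be verified with the following illustrative sketch (figures from the preceding paragraphs; not part of the official estimates):

```python
# Illustrative check of the annual survey-outcome burden.
COMPLETERS = 180_744
pool = 2 * COMPLETERS              # completers plus their employers: 361,488
respondents = round(0.70 * pool)   # 253,042 expected respondents
completion = respondents * 0.5     # 126,521 hours for respondents to complete surveys
administration = respondents * 1   # 253,042 hours for States to administer them
reporting = 1 * 28_726             # 28,726 hours to report results on the SRC
assert respondents == 253_042
assert completion + administration + reporting == 408_289
```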
Teacher Preparation Program
Characteristics
Under § 612.5(a)(4), States are
required to report whether each program
in the State is accredited by a
specialized accrediting agency
recognized by the Secretary, or produces
teacher candidates with content and
pedagogical knowledge, with quality
clinical preparation, and who have met
rigorous teacher candidate exit
qualifications. The Department
estimates that 726 IHEs offering teacher
preparation programs are or will be
accredited by a specialized accrediting
agency (see the start-up burden
discussion for § 612.5 for an explanation
of this figure). Using the IRC, IHEs
already report to States whether teacher
preparation programs have specialized
accreditation. However, as noted in the
start-up burden discussion of § 612.5, as
of July 1, 2016, there are no specialized
accrediting agencies recognized by the
Secretary for teacher preparation
programs. As such, the Department does
not expect any teacher preparation
program to qualify under § 612.5(a)(4)(i).
However, as discussed elsewhere in this
document, States can use accreditation
by CAEP or another entity whose
standards for accreditation cover the
basic program characteristics in
§ 612.5(a)(4)(ii) as evidence that the
teacher preparation program has
satisfied the indicator of program
performance in that provision. Since
IHEs are already reporting whether they
have specialized accreditation in their
IRCs, and this reporting element will be
pre-populated for States on the SRC,
States would simply need to know
whether these accrediting agencies have
standards that examine the program
characteristics in § 612.5(a)(4)(ii).
Therefore, the Department estimates no
additional burden for this reporting
element for programs that have the
requisite accreditation.
Under § 612.5(a)(4)(ii), for those
programs that are not accredited by a
specialized accrediting agency, States
are required to report on certain
indicators in lieu of that accreditation:
Whether the program provides teacher
candidates with content and
pedagogical knowledge and quality
clinical preparation, and has rigorous
teacher candidate exit qualifications.
We assume that such requirements are
already built into State approval of
relevant programs. The Department
estimates that States will require 0.25
hours (15 minutes) to provide to the
Secretary an assurance, in a yes/no
format, of whether each teacher
preparation program in its jurisdiction
not holding a specialized accreditation
from CAEP, NCATE, or TEAC meets
these indicators.
As discussed in the start-up burden
section of § 612.5 which discusses
reporting of teacher preparation
program characteristics, the Department
estimates States will have to provide
such assurances for 15,335 teacher
preparation programs that do not have
specialized accreditation. Therefore, the
Department estimates that the total
burden associated with providing an
assurance that these teacher preparation
programs meet these indicators is 3,834
hours (0.25 hours multiplied by the
15,335 teacher preparation programs
that do not have specialized
accreditation).
Other Indicators
Under § 612.5(b), States may include
additional indicators of academic
content knowledge and teaching skill in
their determination of whether teacher
preparation programs are low-performing. As discussed in the
Discussion of Costs, Benefits, and
Transfers section of the RIA, we do not
assume that States will incur any
additional burden under this section
beyond entering the relevant data into
the information collection instrument.
The Department estimates that the total
reporting burden associated with this
provision will be 28,726 hours (28,726
teacher preparation programs multiplied
by 1 hour).
Subtotal of Annual Reporting Burden
Under § 612.5
Aggregating the annual burdens
calculated under the preceding sections
results in the following burdens: All
States would incur a burden of 71,815
hours to report on student learning
outcome measures for all subjects and
grades, 100,541 hours to report on
employment outcomes, 408,289 hours to
report on survey outcomes, 3,834 hours
to report on teacher preparation
program characteristics, and 28,726
hours to report on other indicators not
required in § 612.5(a)(1)–(4). This totals
613,204.75 hours of annual burden
nationwide.
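An illustrative tally of these components (a Python sketch of ours; using the unrounded 3,833.75-hour program characteristics figure explains why the total ends in .75):

    # Annual reporting burden under § 612.5, in hours.
    components = [71815, 100541, 408289, 3833.75, 28726]
    print(sum(components))  # 613204.75 hours nationwide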
Total Reporting Burden Under § 612.5
Aggregating the start-up and annual
burdens calculated under the preceding
sections results in the following
burdens: All States would incur a start-up burden under § 612.5 of 155,318
hours and an annual burden under
§ 612.5 of 613,204.75 hours. This totals
768,522.75 burden hours under § 612.5
nationwide.
Section 612.6 What Must a State Consider in Identifying Low-Performing Teacher Preparation Programs or At-Risk Programs?
The regulations in § 612.6 require States to use criteria, including, at a minimum, indicators of academic content knowledge and teaching skills from § 612.5, to identify low-performing or at-risk teacher preparation programs. For a full discussion of the burden related to the consideration and selection of the criteria reflected in the indicators described in § 612.5, see the start-up burden section of §§ 612.4(b) and 612.4(c) discussing meaningful differentiations. Apart from that burden discussion, the Department believes States will incur no other burden related to this regulatory provision.
Section 612.7 Consequences for a Low-Performing Teacher Preparation Program That Loses the State's Approval or the State's Financial Support
For any IHE administering a teacher preparation program that has lost State approval or financial support based on being identified as a low-performing teacher preparation program, the regulations under § 612.7 require the IHE to—(a) notify the Secretary of its loss of State approval or financial support within thirty days of such designation; (b) immediately notify each student who is enrolled in or accepted into the low-performing teacher preparation program and who receives funding under title IV, HEA that the IHE is no longer eligible to provide such funding to them; and (c) disclose information on its Web site and promotional materials regarding its loss of State approval or financial support and loss of eligibility for title IV funding.
The Department does not expect that a large percentage of programs will be subject to a loss of title IV eligibility. The Department estimates that approximately 50 programs will lose their State approval or financial support.
For those 50 programs, the Department estimates that it will take each program 15 minutes to notify the Secretary of its loss of eligibility; 5 hours to notify all students who are enrolled in or accepted into the program and who receive funding under title IV of the HEA; and 30 minutes to disclose this information on its Web sites and promotional materials, for a total of 5.75 hours per program. The Department estimates the total burden at 287.5 hours (50 programs multiplied by 5.75 hours).
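As an illustrative check of the per-program arithmetic (a Python sketch of ours):

    # Notification burden for the estimated 50 programs losing State approval
    # or financial support.
    per_program = 0.25 + 5.0 + 0.5  # notify Secretary + notify students + disclose
    print(per_program, 50 * per_program)  # 5.75 hours each; 287.5 hours total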
Section 612.8 Regaining Eligibility To
Accept or Enroll Students Receiving
Title IV, HEA Funds After Loss of State
Approval or Financial Support
The regulations in § 612.8 provide a
process for a low-performing teacher
preparation program that has lost State
approval or financial support to regain
its ability to accept and enroll students
who receive title IV, HEA funds. Under
this process, IHEs will submit an
application and supporting
documentation demonstrating to the
Secretary: (1) Improved performance on
the teacher preparation program
performance criteria reflected in
indicators described in § 612.5 as
determined by the State; and (2)
reinstatement of the State’s approval or
the State’s financial support.
The process by which programs and
institutions apply for title IV eligibility
already accounts for the burden
associated with this provision.
Total Reporting Burden Under Part 612
Aggregating the total burdens calculated under the preceding sections of part 612 results in the following burdens: the total burden incurred under § 612.3 is 157,791 hours; under § 612.4, between 81,332.9 and 81,449.7 hours; under § 612.5, 768,522.75 hours; under § 612.7, 287.5 hours; and under § 612.8, 200 hours. This totals between 1,008,134.15 hours and 1,008,250.95 hours nationwide.
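The endpoints of that range can be checked directly (an illustrative Python sketch of ours; § 612.6 contributes no separate figure because its burden is accounted for elsewhere):

    # Total burden under part 612, in hours.
    fixed = 157791 + 768522.75 + 287.5 + 200
    print(fixed + 81332.9, fixed + 81449.7)  # 1008134.15 and 1008250.95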
Reporting Burden Under Part 686
The changes to part 686 in these
regulations have no measurable effect
on the burden currently identified under OMB Control Numbers 1845–0083 and 1845–0084.
Consistent with the discussions
above, the following chart describes the
sections of the final regulations
involving information collections, the
information being collected, and the
collections the Department has
submitted to the OMB for approval and
public comment under the Paperwork
Reduction Act. In the chart, the
Department labels those estimated
burdens not already associated with an OMB
approval number under a single
prospective designation ‘‘OMB 1840–
0837.’’ This label represents a single
information collection; the different
sections of the regulations are separated
in the table below for clarity and to
appropriately divide the burden hours
associated with each regulatory section.
Please note that the changes in burden
estimated in the chart are based on the
change in burden under the current IRC
OMB control numbers 1840–0837 and
‘‘OMB 1840–0837.’’ The burden
estimate for § 612.3 is based on the most recent data available on the number of IHEs required to report (i.e., 1,522 IHEs under the most recent data rather than the 1,250 IHEs used in prior estimates). For a complete
discussion of the costs associated with
the burden incurred under these
regulations, please see the RIA in this
document, specifically the accounting
statement.
Regulatory section | Information collection | OMB Control No. and estimated change in the burden

612.3 | This section requires IHEs that provide a teacher preparation program leading to State certification or licensure to provide data on teacher preparation program performance to the States. | OMB 1840–0837—The burden will decrease by 64,421 hours.

612.4 | This section requires States that receive funds under the Higher Education Act of 1965, as amended, to report to the Secretary on the quality of teacher preparation in the State, both for traditional teacher preparation programs and for alternative route to State certification and licensure programs. | OMB 1840–0837—The burden will increase by between 27,583.9 hours and 27,700.7 hours.

612.5 | This regulatory section requires States to use certain indicators of teacher preparation performance for purposes of the State report card. | OMB 1840–0837—The burden will increase by 768,522.75 hours.

612.6 | This regulatory section requires States to use criteria, including indicators of academic content knowledge and teaching skills, to identify low-performing or at-risk teacher preparation programs. | OMB 1840–0837—The burden associated with this regulatory provision is accounted for in other portions of this burden statement.

612.7 | The regulations under this section require any IHE administering a teacher preparation program that has lost State approval or financial support based on being identified as a low-performing teacher preparation program to notify the Secretary and students receiving title IV, HEA funds, and to disclose this information on its Web site. | OMB 1840–0837—The burden will increase by 287.5 hours.

612.8 | The regulations in this section provide a process for a low-performing teacher preparation program that lost State approval or financial support to regain its ability to accept and enroll students who receive title IV funds. | OMB 1840–0837—The burden will increase by 200 hours.

Total Change in Burden | | Total increase in burden under part 612 will be between 732,173.15 hours and 732,289.95 hours.
Intergovernmental Review
These programs are subject to the
requirements of Executive Order 12372
and the regulations in 34 CFR part 79.
One of the objectives of the Executive
order is to foster an intergovernmental
partnership and a strengthened
federalism. The Executive order relies
on processes developed by State and
local governments for coordination and
review of proposed Federal financial
assistance.
This document provides early
notification of our specific plans and
actions for these programs.
Assessment of Educational Impact
In the NPRM we requested comments
on whether the proposed regulations
would require transmission of
information that any other agency or
authority of the United States gathers or
makes available.
Based on the response to the NPRM
and on our review, we have determined
that these final regulations do not
require transmission of information that
any other agency or authority of the
United States gathers or makes
available.
Federalism
Executive Order 13132 requires us to
ensure meaningful and timely input by
State and local elected officials in the
development of regulatory policies that
have federalism implications.
‘‘Federalism implications’’ means
substantial direct effects on the States,
on the relationship between the
National Government and the States, or
on the distribution of power and
responsibilities among the various
levels of government.
In the NPRM we identified a specific
section that may have federalism
implications and encouraged State and
local elected officials to review and
provide comments on the proposed
regulations. In the Public Comment
section of this preamble, we discuss any
comments we received on this subject.
Accessible Format: Individuals with
disabilities can obtain this document in
an accessible format (e.g., braille, large
print, audiotape, or compact disc) on
request to the person listed under FOR
FURTHER INFORMATION CONTACT.
Electronic Access to This Document:
The official version of this document is
the document published in the Federal
Register. Free Internet access to the
official edition of the Federal Register
and the Code of Federal Regulations is
available via the Federal Digital System
at: www.gpo.gov/fdsys. At this site you
can view this document, as well as all
other documents of this Department
published in the Federal Register, in
text or Adobe Portable Document
Format (PDF). To use PDF you must
have Adobe Acrobat Reader, which is
available free at the site.
You may also access documents of the
Department published in the Federal
Register by using the article search
feature at: www.federalregister.gov.
Specifically, through the advanced
search feature at this site, you can limit
your search to documents published by
the Department.
List of Subjects in 34 CFR Parts 612 and
686
Administrative practice and
procedure, Aliens, Colleges and
universities, Consumer protection,
Grant programs—education, Loan
programs—education, Reporting and
recordkeeping requirements, Selective
Service System, Student aid, Vocational
education.
Dated: October 11, 2016.
John B. King, Jr.,
Secretary of Education.
For the reasons discussed in the
preamble, the Secretary amends chapter
VI of title 34 of the Code of Federal
Regulations as follows:
■ 1. Part 612 is added to read as follows:
PART 612—TITLE II REPORTING
SYSTEM
Subpart A—Scope, Purpose, and
Definitions
Sec.
612.1 Scope and purpose.
612.2 Definitions.
Subpart B—Reporting Requirements
612.3 What are the regulatory reporting
requirements for the Institutional Report
Card?
612.4 What are the regulatory reporting
requirements for the State Report Card?
612.5 What indicators must a State use to
report on teacher preparation program
performance for purposes of the State
report card?
612.6 What must States consider in
identifying low-performing teacher
preparation programs or at-risk teacher
preparation programs, and what actions
must a State take with respect to those
programs identified as low-performing?
Subpart C—Consequences of Withdrawal of
State Approval or Financial Support
612.7 What are the consequences for a low-performing teacher preparation program
that loses the State’s approval or the
State’s financial support?
612.8 How does a low-performing teacher
preparation program regain eligibility to
accept or enroll students receiving Title
IV, HEA funds after loss of the State’s
approval or the State’s financial support?
Authority: 20 U.S.C. 1022d and 1022f.
Subpart A—Scope, Purpose, and
Definitions
§ 612.1 Scope and purpose.
This part establishes regulations
related to the teacher preparation
program accountability system under
title II of the HEA. This part includes:
(a) Institutional Report Card reporting
requirements.
(b) State Report Card reporting
requirements.
(c) Requirements related to the
indicators States must use to report on
teacher preparation program
performance.
(d) Requirements related to the areas States must consider to identify low-performing teacher preparation programs and at-risk teacher preparation programs and actions States must take with respect to those programs.
(e) The consequences for a low-performing teacher preparation program that loses the State's approval or the State's financial support.
(f) The conditions under which a low-performing teacher preparation program that has lost the State's approval or the State's financial support may regain eligibility to resume accepting and enrolling students who receive title IV, HEA funds.
§ 612.2 Definitions.
(a) The following terms used in this
part are defined in the regulations for
Institutional Eligibility under the HEA,
34 CFR part 600:
Distance education
Secretary
State
Title IV, HEA program
(b) The following term used in this
part is defined in subpart A of the
Student Assistance General Provisions,
34 CFR part 668:
Payment period
(c) The following term used in this
part is defined in 34 CFR 77.1:
Local educational agency (LEA)
(d) Other terms used in this part are
defined as follows:
At-risk teacher preparation program:
A teacher preparation program that is
identified as at-risk of being low-performing by a State based on the
State’s assessment of teacher
preparation program performance under
§ 612.4.
Candidate accepted into the teacher
preparation program: An individual
who has been admitted into a teacher
preparation program but who has not
yet enrolled in any coursework that the
institution has determined to be part of
that teacher preparation program.
Candidate enrolled in the teacher
preparation program: An individual
who has been accepted into a teacher
preparation program and is in the
process of completing coursework but
has not yet completed the teacher
preparation program.
Content and pedagogical knowledge:
An understanding of the central
concepts and structures of the discipline
in which a teacher candidate has been
trained, and how to create effective
learning experiences that make the
discipline accessible and meaningful for
all students, including a distinct set of
instructional skills to address the needs
of English learners and students with
disabilities, in order to assure mastery of
the content by the students, as described
in applicable professional, State, or
institutional standards.
Effective teacher preparation
program: A teacher preparation program
with a level of performance higher than
a low-performing teacher preparation
program or an at-risk teacher
preparation program.
Employer survey: A survey of
employers or supervisors designed to
capture their perceptions of whether the
novice teachers they employ or
supervise who are in their first year of
teaching were effectively prepared.
High-need school: A school that,
based on the most recent data available,
meets one or both of the following:
(i) The school is in the highest
quartile of schools in a ranking of all
schools served by a local educational
agency (LEA), ranked in descending
order by percentage of students from
low-income families enrolled in such
schools, as determined by the LEA
based on one of the following measures
of poverty:
(A) The percentage of students aged 5
through 17 in poverty counted in the
most recent Census data approved by
the Secretary.
(B) The percentage of students eligible
for a free or reduced price school lunch
under the Richard B. Russell National
School Lunch Act [42 U.S.C. 1751 et
seq.].
(C) The percentage of students in
families receiving assistance under the
State program funded under part A of
title IV of the Social Security Act (42
U.S.C. 601 et seq.).
(D) The percentage of students eligible
to receive medical assistance under the
Medicaid program.
(E) A composite of two or more of the
measures described in paragraphs (i)(A)
through (D) of this definition.
(ii) In the case of—
(A) An elementary school, the school
serves students not less than 60 percent
of whom are eligible for a free or
reduced price school lunch under the
Richard B. Russell National School
Lunch Act; or
(B) Any school other than an
elementary school, the school serves
students not less than 45 percent of
whom are eligible for a free or reduced
price school lunch under the Richard B.
Russell National School Lunch Act.
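Purely as an illustration of how the two prongs of this definition can be applied to LEA data, consider the following hypothetical Python sketch; the data layout and field names are ours, not part of the rule:

    # Hypothetical sketch: flag high-need schools within one LEA.
    def high_need_schools(schools):
        # schools: list of dicts with 'name', 'poverty_pct' (the LEA's chosen
        # poverty measure), 'frl_pct' (free or reduced-price lunch share), and
        # 'is_elementary' (bool).
        ranked = sorted(schools, key=lambda s: s["poverty_pct"], reverse=True)
        top_quartile = {s["name"] for s in ranked[:max(1, len(ranked) // 4)]}  # prong (i)
        flagged = set()
        for s in schools:
            floor = 60 if s["is_elementary"] else 45  # prong (ii) lunch thresholds
            if s["name"] in top_quartile or s["frl_pct"] >= floor:
                flagged.add(s["name"])
        return flagged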
Low-performing teacher preparation
program: A teacher preparation program
that is identified as low-performing by
a State based on the State’s assessment
of teacher preparation program
performance under § 612.4.
Novice teacher: A teacher of record in
the first three years of teaching who
teaches elementary or secondary public
school students, which may include, at
a State’s discretion, preschool students.
Quality clinical preparation: Training
that integrates content, pedagogy, and
professional coursework around a core
of pre-service clinical experiences. Such
training must, at a minimum—
(i) Be provided by qualified clinical
instructors, including school and LEA-based personnel, who meet established
qualification requirements and who use
a training standard that is made publicly
available;
(ii) Include multiple clinical or field
experiences, or both, that serve diverse,
rural, or underrepresented student
populations in elementary through
secondary school, including English
learners and students with disabilities,
and that are assessed using a
performance-based protocol to
demonstrate teacher candidate mastery
of content and pedagogy; and
(iii) Require that teacher candidates
use research-based practices, including
observation and analysis of instruction,
collaboration with peers, and effective
use of technology for instructional
purposes.
Recent graduate: An individual whom
a teacher preparation program has
documented as having met all the
requirements of the program in any of
the three title II reporting years
preceding the current reporting year, as
defined in the report cards prepared
under §§ 612.3 and 612.4.
Documentation may take the form of a
degree, institutional certificate, program
credential, transcript, or other written
proof of having met the program’s
requirements. For the purposes of this
definition, a program may not use either
of the following criteria to determine if
an individual has met all the
requirements of the program:
(i) Becoming a teacher of record; or
(ii) Obtaining initial certification or
licensure.
Rigorous teacher candidate exit
qualifications: Qualifications of a
teacher candidate established by a
teacher preparation program prior to the
candidate’s completion of the program
using an assessment of candidate
performance that relies, at a minimum,
on validated professional teaching
standards and measures of the
candidate’s effectiveness in curriculum
planning, instruction of students,
appropriate plans and modifications for
all students, and assessment of student
learning.
Student growth: The change in
student achievement between two or
more points in time, using a student’s
scores on the State’s assessments under
section 1111(b)(2) of the ESEA or other
measures of student learning and
performance, such as student results on
pre-tests and end-of-course tests;
objective performance-based
assessments; student learning
objectives; student performance on
English language proficiency
assessments; and other measures that
are rigorous, comparable across schools,
and consistent with State guidelines.
Teacher evaluation measure: A
teacher’s performance level based on an
LEA’s teacher evaluation system that
differentiates teachers on a regular basis
using at least three performance levels
and multiple valid measures in
assessing teacher performance. For
purposes of this definition, multiple
valid measures must include data on
student growth for all students
(including English learners and students
with disabilities) and other measures of
professional practice (such as
observations based on rigorous teacher
performance standards, teacher
portfolios, and student and parent
surveys).
Teacher of record: A teacher
(including a teacher in a co-teaching
assignment) who has been assigned the
lead responsibility for student learning
in a subject or area.
Teacher placement rate: (i) The
percentage of recent graduates who have
become novice teachers (regardless of
retention) for the grade level, grade
span, and subject area in which they
were prepared.
(ii) At the State’s discretion, the rate
calculated under paragraph (i) of this
definition may exclude one or more of
the following, provided that the State
uses a consistent approach to assess and
report on all of the teacher preparation
programs in the State:
(A) Recent graduates who have taken
teaching positions in another State.
(B) Recent graduates who have taken
teaching positions in private schools.
(C) Recent graduates who have
enrolled in graduate school or entered
military service.
(iii) For a teacher preparation program
provided through distance education, a
State calculates the rate under
paragraph (i) of this definition using the
total number of recent graduates who
have obtained certification or licensure
in the State during the three preceding
title II reporting years as the
denominator.
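To make the calculation concrete, a hypothetical Python sketch follows; the field names are ours, it ignores the grade-span and subject-area breakdown, and it applies only one of the optional exclusions in paragraph (ii):

    # Hypothetical sketch of a teacher placement rate calculation.
    def placement_rate(recent_graduates, excluded=("out_of_state",)):
        # recent_graduates: list of dicts with 'became_novice_teacher' (bool) and
        # 'status' in {'in_state', 'out_of_state', 'private', 'grad_or_military', None}.
        pool = [g for g in recent_graduates if g["status"] not in excluded]
        if not pool:
            return 0.0
        return sum(g["became_novice_teacher"] for g in pool) / len(pool)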
Teacher preparation entity: An
institution of higher education or other
organization that is authorized by the
State to prepare teachers.
Teacher preparation program: A
program, whether traditional or
alternative route, offered by a teacher
preparation entity that leads to initial
State teacher certification or licensure in
a specific field. Where some
participants in the program are in a
traditional route to certification or
licensure in a specific field, and others
are in an alternative route to
certification or licensure in that same
field, the traditional and alternative
route components are considered to be
separate teacher preparation programs.
The term teacher preparation program
includes a teacher preparation program
provided through distance education.
Teacher preparation program
provided through distance education: A
teacher preparation program at which at
least 50 percent of the program’s
required coursework is offered through
distance education.
Teacher retention rate: The
percentage of individuals in a given
cohort of novice teachers who have been
continuously employed as teachers of
record in each year between their first
year as a novice teacher and the current
reporting year.
(i) For the purposes of this definition,
a cohort of novice teachers includes all
teachers who were first identified as a
novice teacher by the State in the same
title II reporting year.
(ii) At the State’s discretion, the
teacher retention rates may exclude one
or more of the following, provided that
the State uses a consistent approach to
assess and report on all teacher
preparation programs in the State:
(A) Novice teachers who have taken
teaching positions in other States.
(B) Novice teachers who have taken
teaching positions in private schools.
(C) Novice teachers who are not
retained specifically and directly due to
budget cuts.
(D) Novice teachers who have
enrolled in graduate school or entered
military service.
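Again purely as an illustration (a hypothetical Python sketch of ours), the "continuously employed in each year" test can be read as follows:

    # Hypothetical sketch of a teacher retention rate for one cohort.
    def retention_rate(cohort, current_year):
        # cohort: list of dicts with 'first_year' (int) and 'years_taught' (set of
        # reporting years in which the teacher was a teacher of record).
        retained = sum(
            1 for t in cohort
            if all(y in t["years_taught"]
                   for y in range(t["first_year"], current_year + 1))
        )
        return retained / len(cohort) if cohort else 0.0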
Teacher survey: A survey
administered to all novice teachers who
are in their first year of teaching that is
designed to capture their perceptions of
whether the preparation that they
received from their teacher preparation
program was effective.
Title II reporting year: A period of
twelve consecutive months, starting
September 1 and ending August 31.
Subpart B—Reporting Requirements
§ 612.3 What are the regulatory reporting
requirements for the Institutional report
card?
Beginning not later than April 30,
2018, and annually thereafter, each
institution of higher education that
conducts a teacher preparation program
and that enrolls students receiving title
IV, HEA program funds—
(a) Must report to the State on the
quality of teacher preparation and other
information consistent with section
205(a) of the HEA, using an institutional
report card that is prescribed by the
Secretary;
(b) Must prominently and promptly
post the institutional report card
information on the institution’s Web site
and, if applicable, on the teacher
preparation program portion of the
institution’s Web site; and
(c) May also provide the institutional
report card information to the general
public in promotional or other materials
it makes available to prospective
students or other individuals.
§ 612.4 What are the regulatory reporting
requirements for the State report card?
(a) General. Beginning not later than
October 31, 2018, and annually
thereafter, each State that receives funds
under the HEA must—
(1) Report to the Secretary, using a
State report card that is prescribed by
the Secretary, on—
(i) The quality of all teacher
preparation programs in the State
consistent with paragraph (b)(3) of this
section, whether or not they enroll
students receiving Federal assistance
under the HEA; and
(ii) All other information consistent
with section 205(b) of the HEA; and
(2) Make the State report card
information widely available to the
general public by posting the State
report card information on the State’s
Web site.
(b) Reporting of information on
teacher preparation program
performance. In the State report card,
beginning not later than October 31,
2019, and annually thereafter, the
State—
(1) Must make meaningful
differentiations in teacher preparation
program performance using at least
three performance levels—low-performing teacher preparation
program, at-risk teacher preparation
program, and effective teacher
preparation program—based on the
indicators in § 612.5.
(2) Must provide—
(i) For each teacher preparation
program, data for each of the indicators
identified in § 612.5 for the most recent
title II reporting year;
(ii) The State’s weighting of the
different indicators in § 612.5 for
purposes of describing the State’s
assessment of program performance;
and
(iii) Any State-level rewards or
consequences associated with the
designated performance levels;
(3) In implementing paragraph (b)(1)
through (2) of this section, except as
provided in paragraphs (b)(3)(ii)(D) and
(b)(5) of this section, must ensure the
performance of all of the State’s teacher
preparation programs is represented in
the State report card by—
(i)(A) Annually reporting on the
performance of each teacher preparation
program that, in a given reporting year,
produces a total of 25 or more recent
graduates who have received initial
certification or licensure from the State
that allows them to serve in the State as
teachers of record for K–12 students
and, at a State’s discretion, preschool
students (i.e., the program size
threshold); or
(B) If a State chooses a program size
threshold of less than 25 (e.g., 15 or 20),
annually reporting on the performance
of each teacher preparation program
that, in a given reporting year, produces
a number of recent graduates, as
described in this paragraph (b)(3)(i), that
meets or exceeds this threshold; and
(ii) For any teacher preparation
program that does not meet the program
size threshold in paragraph (b)(3)(i)(A)
or (B) of this section, annually reporting
on the program’s performance by
aggregating data under paragraph
(b)(3)(ii)(A), (B), or (C) of this section in
order to meet the program size threshold
except as provided in paragraph
(b)(3)(ii)(D) of this section.
(A) The State may report on the
program’s performance by aggregating
data that determine the program’s
performance with data for other teacher
preparation programs that are operated
by the same teacher preparation entity
and are similar to or broader than the
program in content.
(B) The State may report on the
program’s performance by aggregating
data that determine the program’s
performance over multiple years for up
to four years until the program size
threshold is met.
(C) If the State cannot meet the
program size threshold by aggregating
data under paragraph (b)(3)(ii)(A) or (B)
of this section, it may aggregate data
using a combination of the methods
under both of these paragraphs.
(D) The State is not required under
this paragraph (b)(3)(ii) of this section to
report data on a particular teacher
preparation program for a given
reporting year if aggregation under
paragraph (b)(3)(ii) of this section would
not yield the program size threshold for
that program; and
(4) Must report on the procedures
established by the State in consultation
with a group of stakeholders, as
described in paragraph (c)(1) of this
section, and on the State’s examination
of its data collection and reporting, as
described in paragraph (c)(2) of this
section, in the State report card
submitted—
(i) No later than October 31, 2019, and
every four years thereafter; and
(ii) At any other time that the State
makes a substantive change to the
weighting of the indicators or the
procedures for assessing and reporting
the performance of each teacher
preparation program in the State
described in paragraph (c) of this
section.
(5) The State is not required under
this paragraph (b) to report data on a
particular teacher preparation program
if reporting these data would be
inconsistent with Federal or State
privacy and confidentiality laws and
regulations.
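To illustrate one of the aggregation options in paragraph (b)(3)(ii), the multi-year option in paragraph (b)(3)(ii)(B) might operate as in this hypothetical Python sketch (ours, not regulatory text):

    # Hypothetical sketch: aggregate up to four years of recent-graduate counts
    # until the program size threshold (here 25) is met.
    def meets_threshold(yearly_counts, threshold=25, max_years=4):
        total = 0
        for years_used, count in enumerate(yearly_counts[:max_years], start=1):
            total += count
            if total >= threshold:
                return True, years_used
        return False, None

    print(meets_threshold([9, 8, 7, 6]))  # (True, 4): 30 graduates across four years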
(c) Fair and equitable methods—(1)
Consultation. Each State must establish,
in consultation with a representative
group of stakeholders, the procedures
for assessing and reporting the
performance of each teacher preparation
program in the State under this section.
(i) The representative group of
stakeholders must include, at a
minimum, representatives of—
(A) Leaders and faculty of traditional
teacher preparation programs and
alternative routes to State certification
or licensure programs;
(B) Students of teacher preparation
programs;
(C) LEA superintendents;
(D) Small teacher preparation
programs (i.e., programs that produce
fewer than a program size threshold of
25 recent graduates in a given year or
any lower threshold set by a State, as
described in paragraph (b)(3)(i) of this
section);
(E) Local school boards;
(F) Elementary through secondary
school leaders and instructional staff;
(G) Elementary through secondary
school students and their parents;
(H) IHEs that serve high proportions
of low-income students, students of
color, or English learners;
(I) English learners, students with
disabilities, and other underserved
students;
(J) Officials of the State’s standards
board or other appropriate standards
body; and
(K) At least one teacher preparation
program provided through distance
education.
(ii) The procedures for assessing and
reporting the performance of each
teacher preparation program in the State
under this section must, at minimum,
include—
(A) The weighting of the indicators
identified in § 612.5 for establishing
performance levels of teacher
preparation programs as required by this
section;
(B) The method for aggregation of data
pursuant to paragraph (b)(3)(ii) of this
section;
(C) Any State-level rewards or
consequences associated with the
designated performance levels; and
(D) Appropriate opportunities for
programs to challenge the accuracy of
their performance data and
classification of the program.
(2) State examination of data
collection and reporting. Each State
must periodically examine the quality of
the data collection and reporting
activities it conducts pursuant to
paragraph (b) of this section and § 612.5,
and, as appropriate, modify its
procedures for assessing and reporting
the performance of each teacher
preparation program in the State using
the procedures in paragraph (c)(1) of
this section.
(d) Inapplicability to certain insular
areas. Paragraphs (b) and (c) of this
section do not apply to American
Samoa, the Commonwealth of the
Northern Mariana Islands, the freely
associated States of the Republic of the
Marshall Islands, the Federated States of
Micronesia, the Republic of Palau,
Guam, and the United States Virgin
Islands.
§ 612.5 What indicators must a State use
to report on teacher preparation program
performance for purposes of the State
report card?
(a) For purposes of reporting under
§ 612.4, a State must assess, for each
teacher preparation program within its
jurisdiction, indicators of academic
content knowledge and teaching skills
of novice teachers from that program,
including, at a minimum, the following
indicators:
(1) Student learning outcomes.
(i) For each year and each teacher
preparation program in the State, a State
must calculate the aggregate student
learning outcomes of all students taught
by novice teachers.
(ii) For purposes of calculating
student learning outcomes under
paragraph (a)(1)(i) of this section, a State
must use:
(A) Student growth;
(B) A teacher evaluation measure;
(C) Another State-determined measure
that is relevant to calculating student
learning outcomes, including academic
performance, and that meaningfully
differentiates among teachers; or
(D) Any combination of paragraphs
(a)(1)(ii)(A), (B), or (C) of this section.
(iii) At the State’s discretion, in
calculating a teacher preparation
program’s aggregate student learning
outcomes a State may exclude one or
both of the following, provided that the
State uses a consistent approach to
assess and report on all of the teacher
preparation programs in the State—
(A) Student learning outcomes of
students taught by novice teachers who
have taken teaching positions in another
State.
(B) Student learning outcomes of all
students taught by novice teachers who
have taken teaching positions in private
schools.
(2) Employment outcomes.
(i) Except as provided in paragraph
(a)(2)(v) of this section, for each year
and each teacher preparation program in
the State, a State must calculate:
(A) Teacher placement rate;
(B) Teacher placement rate in high-need schools;
(C) Teacher retention rate; and
(D) Teacher retention rate in high-need schools.
(ii) For purposes of reporting the
teacher retention rate and teacher
retention rate in high-need schools
under paragraph (a)(2)(i)(C) and (D) of
this section—
(A) Except as provided in paragraph
(B), the State reports a teacher retention
rate for each of the three cohorts of
novice teachers immediately preceding
the current title II reporting year.
(B)(1) The State is not required to
report a teacher retention rate for any
teacher preparation program in the State
report to be submitted in October 2018.
(2) For the State report to be
submitted in October 2019, the teacher
retention rate must be calculated for the
cohort of novice teachers identified in
the 2017–2018 title II reporting year.
(3) For the State report to be
submitted in October 2020, separate
teacher retention rates must be
calculated for the cohorts of novice
teachers identified in the 2017–2018
and 2018–2019 title II reporting years.
(iii) For the purposes of calculating
employment outcomes under paragraph
(a)(2)(i) of this section, a State may, at
its discretion, assess traditional and
alternative route teacher preparation
programs differently, provided that
differences in assessments and the
reasons for those differences are
transparent and that assessments result
in equivalent levels of accountability
and reporting irrespective of the type of
program.
(iv) For the purposes of the teacher
placement rate under paragraph
(a)(2)(i)(A) and (B) of this section, a
State may, at its discretion, assess
teacher preparation programs provided
through distance education differently
from teacher preparation programs not
provided through distance education,
based on whether the differences in the
way the rate is calculated for teacher
preparation programs provided through
distance education affect employment
outcomes. Differences in assessments
and the reasons for those differences
must be transparent and result in
equivalent levels of accountability and
reporting irrespective of where the
program is physically located.
(v) A State is not required to calculate
a teacher placement rate under
paragraph (a)(2)(i)(A) of this section for
alternative route to certification
programs.
(3) Survey outcomes. (i) For each year
and each teacher preparation program
on which a State must report, a State must collect, through survey instruments,
qualitative and quantitative data
including, but not limited to, a teacher
survey and an employer survey
designed to capture perceptions of
whether novice teachers who are
employed in their first year of teaching
possess the academic content
knowledge and teaching skills needed to
succeed in the classroom.
(ii) At the State’s discretion, in
calculating a teacher preparation
program’s survey outcomes the State
may exclude survey outcomes for all
novice teachers who have taken
teaching positions in private schools
provided that the State uses a consistent
approach to assess and report on all of
the teacher preparation programs in the
State.
(4) Characteristics of teacher
preparation programs. Whether the
program—
(i) Is administered by an entity
accredited by an agency recognized by
the Secretary for accreditation of
professional teacher education
programs; or
(ii) Produces teacher candidates—
(A) With content and pedagogical
knowledge;
(B) With quality clinical preparation;
and
(C) Who have met rigorous teacher
candidate exit qualifications.
(b) At a State’s discretion, the
indicators of academic content
knowledge and teaching skills may
include other indicators of a teacher’s
effect on student performance, such as
student survey results, provided that the
State uses the same indicators for all
teacher preparation programs in the
State.
(c) A State may, at its discretion,
exclude from its reporting under
paragraph (a)(1)–(3) of this section
individuals who have not become novice teachers within three years of becoming recent graduates.
(d) This section does not apply to
American Samoa, the Commonwealth of
the Northern Mariana Islands, the freely
associated states of the Republic of the
Marshall Islands, the Federated States of
Micronesia, the Republic of Palau,
Guam, and the United States Virgin
Islands.
§ 612.6 What must a State consider in
identifying low-performing teacher
preparation programs or at-risk teacher
preparation programs, and what actions
must a State take with respect to those
programs identified as low-performing?
(a)(1) In identifying low-performing or
at-risk teacher preparation programs the
State must use criteria that, at a
minimum, include the indicators of
academic content knowledge and
teaching skills from § 612.5.
(2) Paragraph (a)(1) of this section
does not apply to American Samoa, the
Commonwealth of the Northern Mariana
Islands, the freely associated states of
the Republic of the Marshall Islands, the
Federated States of Micronesia, the
Republic of Palau, Guam, and the
United States Virgin Islands.
(b) At a minimum, a State must
provide technical assistance to low-performing teacher preparation
programs in the State to help them
improve their performance in
accordance with section 207(a) of the
HEA. Technical assistance may include,
but is not limited to: Providing
programs with information on the
specific indicators used to determine
the program’s rating (e.g., specific areas
of weakness in student learning, job
placement and retention, and novice
teacher and employer satisfaction);
assisting programs to address the rigor
of their exit criteria; helping programs
identify specific areas of curriculum or
clinical experiences that correlate with
gaps in graduates’ preparation; helping
identify potential research and other
resources to assist program
improvement (e.g., evidence of other
successful interventions, other
university faculty, other teacher
preparation programs, nonprofits with
expertise in educator preparation and
teacher effectiveness improvement,
accrediting organizations, or higher
education associations); and sharing
best practices from exemplary programs.
Subpart C—Consequences of
Withdrawal of State Approval or
Financial Support
§ 612.7 What are the consequences for a
low-performing teacher preparation
program that loses the State’s approval or
the State’s financial support?
(a) Any teacher preparation program
for which the State has withdrawn the
State’s approval or the State has
terminated the State’s financial support
due to the State’s identification of the
program as a low-performing teacher
preparation program—
(1) Is ineligible for any funding for
professional development activities
awarded by the Department as of the
date that the State withdrew its
approval or terminated its financial
support;
(2) May not include any candidate
accepted into the teacher preparation
program or any candidate enrolled in
the teacher preparation program who
receives aid under title IV, HEA
programs in the institution’s teacher
preparation program as of the date that
the State withdrew its approval or
terminated its financial support; and
(3) Must provide transitional support,
including remedial services, if
necessary, to students enrolled at the
institution at the time of termination of
financial support or withdrawal of
approval for a period of time that is not
less than the period of time a student
continues in the program but no more
than 150 percent of the published
program length.
(b) Any institution administering a
teacher preparation program that has
lost State approval or financial support
based on being identified as a low-performing teacher preparation program
must—
(1) Notify the Secretary of its loss of
the State’s approval or the State’s
financial support due to identification
as low-performing by the State within
30 days of such designation;
(2) Immediately notify each student
who is enrolled in or accepted into the
low-performing teacher preparation
program and who receives title IV, HEA
program funds that, commencing with
the next payment period, the institution
is no longer eligible to provide such
funding to students enrolled in or
accepted into the low-performing
teacher preparation program; and
(3) Disclose on its Web site and in
promotional materials that it makes
available to prospective students that
the teacher preparation program has
been identified as a low-performing
teacher preparation program by any
State and has lost the State’s approval
or the State’s financial support,
including the identity of the State or
States, and that students accepted or
enrolled in the low-performing teacher
preparation program may not receive
title IV, HEA program funds.
§ 612.8 How does a low-performing
teacher preparation program regain
eligibility to accept or enroll students
receiving Title IV, HEA program funds after
loss of the State’s approval or the State’s
financial support?
(a) A low-performing teacher
preparation program that has lost the
State’s approval or the State’s financial
support may regain its ability to accept
and enroll students who receive title IV,
HEA program funds upon
demonstration to the Secretary under
paragraph (b) of this section of—
(1) Improved performance on the
teacher preparation program
performance criteria in § 612.5 as
determined by the State; and
(2) Reinstatement of the State’s
approval or the State’s financial
support, or, if both were lost, the State’s
approval and the State’s financial
support.
(b) To regain eligibility to accept or
enroll students receiving title IV, HEA
funds in a teacher preparation program
that was previously identified by the
State as low-performing and that lost the
State’s approval or the State’s financial
support, the institution that offers the
teacher preparation program must
submit an application to the Secretary
along with supporting documentation
that will enable the Secretary to
determine that the teacher preparation
program has met the requirements
under paragraph (a) of this section.
PART 686—TEACHER EDUCATION
ASSISTANCE FOR COLLEGE AND
HIGHER EDUCATION (TEACH) GRANT
PROGRAM
■ 2. The authority citation for part 686 continues to read as follows:
Authority: 20 U.S.C. 1070g, et seq., unless otherwise noted.
§ 686.1 [Amended]
■ 3. Section 686.1 is amended by removing the words ''school serving low-income students'' and adding, in their place, the words ''school or educational service agency serving low-income students (low-income school)''.
■ 4. Section 686.2 is amended by:
■ A. Redesignating paragraph (d) as
paragraph (e).
■ B. Adding a new paragraph (d).
■ C. In newly redesignated paragraph
(e):
■ i. Redesignating paragraphs (1) and (2)
in the definition of ‘‘Academic year or
its equivalent for elementary and
secondary schools (elementary or
secondary academic year)’’ as
paragraphs (i) and (ii);
■ ii. Adding in alphabetical order the
definition of ‘‘Educational service
agency’’;
■ iii. Redesignating paragraphs (1) through (7) in the definition of ''High-need field'' as paragraphs (i) through (vii), respectively;
■ iv. Adding in alphabetical order definitions of ''High-quality teacher preparation program not provided through distance education'' and ''High-quality teacher preparation program provided through distance education'';
■ v. Redesignating paragraphs (1)
through (3) in the definition of
‘‘Institutional Student Information
Record (ISIR)’’ as paragraphs (i) through
(iii), respectively;
■ vi. Redesignating paragraphs (1) and
(2) as paragraphs (i) and (ii) and
paragraphs (2)(i) and (ii) as paragraphs
(ii)(A) and (B), respectively, in the
definition of ‘‘Numeric equivalent’’;
■ vii. Redesignating paragraphs (1) through (3) in the definition of ''Post-baccalaureate program'' as paragraphs (i) through (iii), respectively;
■ viii. Adding in alphabetical order a
definition for ‘‘School or educational
service agency serving low-income
students (low-income school)’’;
■ ix. Removing the definition of
‘‘School serving low-income students
(low-income school)’’;
■ x. Revising the definitions of ‘‘TEACH
Grant-eligible institution’’ and ‘‘TEACH
Grant-eligible program’’; and
■ xi. Revising the definition of ‘‘Teacher
preparation program.’’
The additions and revisions read as
follows:
§ 686.2 Definitions.
* * * * *
(d) A definition for the following term
used in this part is in Title II Reporting
System, 34 CFR part 612:
Effective teacher preparation program.
(e) * * *
Educational service agency: A
regional public multiservice agency
authorized by State statute to develop,
manage, and provide services or
programs to LEAs, as defined in section
8101 of the Elementary and Secondary
Education Act of 1965, as amended
(ESEA).
* * * * *
High-quality teacher preparation
program not provided through distance
education: A teacher preparation
program at which less than 50 percent
of the program’s required coursework is
offered through distance education; and
(i) Beginning with the 2021–2022
award year, is not classified by the State
to be less than an effective teacher
preparation program based on 34 CFR
612.4(b) in two of the previous three
years; or
(ii) Meets the exception from State
reporting of teacher preparation
program performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
High-quality teacher preparation
program provided through distance
education: A teacher preparation
program at which at least 50 percent of
the program’s required coursework is
offered through distance education; and
(i) Beginning with the 2021–2022
award year, is not classified by the same
State to be less than an effective teacher
preparation program based on 34 CFR
612.4(b) in two of the previous three
years; or
(ii) Meets the exception from State
reporting of teacher preparation
program performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
* * * * *
School or educational service agency
serving low-income students (low-income school): An elementary or
secondary school or educational service
agency that—
(i) Is located within the area served by
the LEA that is eligible for assistance
pursuant to title I of the ESEA;
(ii) Has been determined by the
Secretary to be a school or educational
service agency in which more than 30
percent of the school’s or educational
service agency’s total enrollment is
made up of children who qualify for
services provided under title I of the
ESEA; and
(iii) Is listed in the Department’s
Annual Directory of Designated Low-Income Schools for Teacher
Cancellation Benefits. The Secretary
considers all elementary and secondary
schools and educational service
agencies operated by the Bureau of
Indian Education (BIE) in the
Department of the Interior or operated
on Indian reservations by Indian tribal
groups under contract or grant with the
BIE to qualify as schools or educational
service agencies serving low-income
students.
* * * * *
TEACH Grant-eligible institution: An
eligible institution as defined in 34 CFR
part 600 that meets financial
responsibility standards established in
34 CFR part 668, subpart L, or that
qualifies under an alternative standard
in 34 CFR 668.175 and—
(i) Provides at least one high-quality
teacher preparation program not
provided through distance education or
one high-quality teacher preparation
program provided through distance
education at the baccalaureate or
master’s degree level that also provides
supervision and support services to
teachers, or assists in the provision of
services to teachers, such as—
(A) Identifying and making available
information on effective teaching skills
or strategies;
(B) Identifying and making available
information on effective practices in the
supervision and coaching of novice
teachers; and
(C) Mentoring focused on developing
effective teaching skills and strategies;
(ii) Provides a two-year program that
is acceptable for full credit in a TEACH
Grant-eligible program offered by an
institution described in paragraph (i) of
this definition, as demonstrated by the
institution that provides the two-year
program, or provides a program that is
the equivalent of an associate degree, as
defined in § 668.8(b)(1), that is
acceptable for full credit toward a
baccalaureate degree in a TEACH Grant-eligible program;
(iii) Provides a high-quality teacher
preparation program not provided
through distance education or a high-quality teacher preparation program
provided through distance education
that is a post-baccalaureate program of
study; or
(iv) Provides a master’s degree
program that does not meet the
definition of terms ‘‘high-quality teacher
preparation program not provided
through distance education’’ or ‘‘highquality teacher preparation program that
is provided through distance education’’
because it is not subject to reporting
under 34 CFR part 612, but that
prepares:
(A) A teacher or a retiree from another
occupation with expertise in a field in
which there is a shortage of teachers,
such as mathematics, science, special
education, English language acquisition,
or another high-need field; or
(B) A teacher who is using high-quality alternative certification routes to
become certified.
TEACH Grant-eligible program: (i) An
eligible program, as defined in 34 CFR
668.8, that meets the definition of a
‘‘high-quality teacher preparation
program not provided through distance
education’’ or ‘‘high-quality teacher
preparation program provided through
distance education’’ and that is
designed to prepare an individual to
teach as a highly-qualified teacher in a
high-need field and leads to a
baccalaureate or master’s degree, or is a
post-baccalaureate program of study;
(ii) A program that is a two-year
program or is the equivalent of an
associate degree, as defined in 34 CFR
668.8(b)(1), that is acceptable for full
credit toward a baccalaureate degree in
a TEACH Grant-eligible program; or
(iii) A master’s degree program that
does not meet the definition of the terms
''high-quality teacher preparation program not
provided through distance education’’
or ‘‘high-quality teacher preparation
program that is provided through
distance education’’ because it is not
subject to reporting under 34 CFR part
612, but that prepares:
(A) A teacher or a retiree from another
occupation with expertise in a field in
which there is a shortage of teachers,
such as mathematics, science, special
education, English language acquisition,
or another high-need field; or
(B) A teacher who is using high-quality alternative certification routes to
become certified.
* * * * *
Teacher preparation program: A
course of study, provided by an
institution of higher education, the
completion of which signifies that an
enrollee has met all of the State’s
educational or training requirements for
initial certification or licensure to teach
in the State’s elementary or secondary
schools. A teacher preparation program
may be a traditional program or an
alternative route to certification or
licensure, as defined by the State.
* * * * *
■ 5. Section 686.3 is amended by adding
paragraph (c) to read as follows:
§ 686.3 Duration of student eligibility.
* * * * *
(c) An otherwise eligible student who
received a TEACH Grant for enrollment
in a TEACH Grant-eligible program is
eligible to receive additional TEACH
Grants to complete that program, even if
that program is no longer considered a
TEACH Grant-eligible program, not to
exceed four Scheduled Awards for an
undergraduate or post-baccalaureate
student and up to two Scheduled
Awards for a graduate student.
■ 6. Section 686.11 is amended by:
■ A. Revising paragraph (a)(1)(iii).
■ B. Adding paragraph (d).
The revision and addition read as
follows:
§ 686.11 Eligibility to receive a grant.
(a) * * *
(1) * * *
(iii) Is enrolled in a TEACH Grant-eligible institution in a TEACH Grant-eligible program or is an otherwise
eligible student who received a TEACH
Grant and who is completing a program
under § 686.3(c);
* * * * *
(d) Students who received a total and
permanent disability discharge of a
TEACH Grant agreement to serve or a
title IV, HEA loan. If a student’s
previous TEACH Grant agreement to
serve or title IV, HEA loan was
discharged based on total and
permanent disability, the student is
eligible to receive a TEACH Grant if the
student—
(1) Obtains a certification from a
physician that the student is able to
engage in substantial gainful activity as
defined in 34 CFR 685.102(b);
(2) Signs a statement acknowledging
that neither the new agreement to serve
for the TEACH Grant the student
receives nor any previously discharged
agreement to serve which the grant
recipient is required to fulfill in
accordance with paragraph (d)(3) of this
section can be discharged in the future
on the basis of any impairment present
when the new grant is awarded, unless
that impairment substantially
deteriorates and the grant recipient
applies for and meets the eligibility
requirements for a discharge in
accordance with 34 CFR 685.213; and
(3) In the case of a student who
receives a new TEACH Grant within
three years of the date that any previous
TEACH Grant service obligation or title
IV loan was discharged due to a total
and permanent disability in accordance
with § 686.42(b), 34 CFR
685.213(b)(4)(iii), 34 CFR
674.61(b)(3)(v), or 34 CFR
682.402(c)(3)(iv), acknowledges that he
or she is once again subject to the terms
of the previously discharged TEACH
Grant agreement to serve or resumes
repayment on the previously discharged
loan in accordance with 34 CFR
685.213(b)(7), 674.61(b)(6), or
682.402(c)(6) before receiving the new
grant.
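For illustration only, and not as regulatory text: the re-eligibility conditions in paragraph (d) can be sketched as follows. The class and field names are hypothetical, and the three-year period is approximated in days for simplicity.

# Illustrative sketch of Sec. 686.11(d); hypothetical names, not regulatory text.
from dataclasses import dataclass
from datetime import date

@dataclass
class PriorDischarge:
    discharge_date: date
    physician_certification: bool    # paragraph (d)(1)
    signed_acknowledgment: bool      # paragraph (d)(2)
    reacknowledged_obligation: bool  # paragraph (d)(3)

def eligible_after_tpd_discharge(prior: PriorDischarge,
                                 new_grant_date: date) -> bool:
    # Paragraphs (d)(1) and (d)(2) apply to every applicant.
    if not (prior.physician_certification and prior.signed_acknowledgment):
        return False
    # Paragraph (d)(3) applies only when the new grant is received within
    # three years of the total and permanent disability discharge.
    within_three_years = (new_grant_date - prior.discharge_date).days <= 3 * 365
    return prior.reacknowledged_obligation or not within_three_years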
■ 7. Section 686.12 is amended by:
■ A. In paragraph (b)(2), adding the
words ‘‘low-income’’ before the word
‘‘school’’; and
■ B. Revising paragraph (d).
The revision reads as follows:
§ 686.12 Agreement to serve.
* * * * *
(d) Majoring and serving in a high-need field. In order for a grant recipient’s teaching service in a high-need field listed in the Nationwide List to count toward satisfying the recipient’s service obligation, the high-need field in which he or she prepared to teach must be listed in the Nationwide List for the State in which the grant recipient teaches—
(1) At the time the grant recipient
begins teaching in that field, even if that
field subsequently loses its high-need
designation for that State; or
(2) For teaching service performed on
or after July 1, 2010, at the time the
grant recipient begins teaching in that
field or when the grant recipient signed
the agreement to serve or received the
TEACH Grant, even if that field
subsequently loses its high-need
designation for that State before the
grant recipient begins teaching.
* * * * *
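For illustration only, and not as regulatory text: the timing rule in Sec. 686.12(d) can be sketched as follows, with hypothetical parameter names standing in for determinations made against the Nationwide List.

# Illustrative sketch of Sec. 686.12(d); hypothetical names, not regulatory text.
from datetime import date

JULY_1_2010 = date(2010, 7, 1)

def high_need_service_counts(designated_when_teaching_began: bool,
                             designated_at_agreement_or_receipt: bool,
                             service_date: date) -> bool:
    # Paragraph (d)(1): the field was listed as high-need for the State
    # when the recipient began teaching in it, even if the designation
    # was later lost.
    if designated_when_teaching_began:
        return True
    # Paragraph (d)(2): for service on or after July 1, 2010, designation
    # when the agreement to serve was signed or the grant was received
    # also suffices, even if the designation is lost before teaching begins.
    return service_date >= JULY_1_2010 and designated_at_agreement_or_receipt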
§ 686.32 [Amended]
■ 8. Section 686.32 is amended by:
■ A. In paragraph (a)(3)(iii)(B), adding the words ‘‘or when the grant recipient signed the agreement to serve or received the TEACH Grant’’ after the words ‘‘that field’’; and
■ B. In paragraph (c)(4)(iv)(B), adding the words ‘‘or when the grant recipient signed the agreement to serve or received the TEACH Grant’’ after the words ‘‘that field’’.
§ 686.37 [Amended]
■ 9. Section 686.37(a)(1) is amended by removing the citation ‘‘§§ 686.11’’ and adding in its place the citation ‘‘§§ 686.3(c), 686.11,’’.
■ 10. Section 686.40 is amended by revising paragraphs (b) and (f) to read as follows:
§ 686.40 Documenting the service obligation.
* * * * *
(b) If a grant recipient is performing
full-time teaching service in accordance
with the agreement to serve, or
agreements to serve if more than one
agreement exists, the grant recipient
must, upon completion of each of the
four required elementary or secondary
academic years of teaching service,
provide to the Secretary documentation
of that teaching service on a form
approved by the Secretary and certified
by the chief administrative officer of the
school or educational service agency in
which the grant recipient is teaching.
The documentation must show that the
grant recipient is teaching in a low-income school. If the school or
educational service agency at which the
grant recipient is employed meets the
requirements of a low-income school in
the first year of the grant recipient’s four
elementary or secondary academic years
of teaching and the school or
educational service agency fails to meet
those requirements in subsequent years,
those subsequent years of teaching
qualify for purposes of this section for
that recipient.
* * * * *
(f) A grant recipient who taught in
more than one qualifying school or more
than one qualifying educational service
agency during an elementary or
secondary academic year and
demonstrates that the combined
teaching service was the equivalent of
full-time, as supported by the
certification of one or more of the chief
administrative officers of the schools or
educational service agencies involved,
is considered to have completed one
elementary or secondary academic year
of qualifying teaching.
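For illustration only, and not as regulatory text: the two documentation rules in revised Sec. 686.40 can be sketched as follows; the function names and the fraction-of-full-time representation are hypothetical.

# Illustrative sketch of Sec. 686.40(b) and (f); hypothetical names,
# not regulatory text.
def year_qualifies(low_income_in_first_year: bool,
                   low_income_this_year: bool) -> bool:
    # Paragraph (b): a school or educational service agency that met the
    # low-income requirements in the first of the recipient's four academic
    # years continues to qualify in subsequent years for that recipient.
    return low_income_this_year or low_income_in_first_year

def combined_service_is_one_year(certified_shares):
    # Paragraph (f): teaching at more than one qualifying school or agency
    # counts as one academic year when the certified combined service is
    # the equivalent of full-time (shares are fractions of full-time).
    return sum(certified_shares) >= 1.0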
■ 11. Section 686.42 is amended by
revising paragraph (b) to read as follows:
§ 686.42 Discharge of agreement to serve.
* * * * *
(b) Total and permanent disability. (1)
A grant recipient’s agreement to serve is
discharged if the recipient becomes
totally and permanently disabled, as
defined in 34 CFR 685.102(b), and the
grant recipient applies for and satisfies
the eligibility requirements for a total
and permanent disability discharge in
accordance with 34 CFR 685.213.
(2) If at any time the Secretary
determines that the grant recipient does
not meet the requirements of the three-year period following the discharge as
described in 34 CFR 685.213(b)(7), the
Secretary will notify the grant recipient
that the grant recipient’s obligation to
satisfy the terms of the agreement to
serve is reinstated.
(3) The Secretary’s notification under
paragraph (b)(2) of this section will—
(i) Include the reason or reasons for
reinstatement;
(ii) Provide information on how the
grant recipient may contact the
Secretary if the grant recipient has
questions about the reinstatement or
believes that the agreement to serve was
reinstated based on incorrect
information; and
(iii) Inform the TEACH Grant
recipient that he or she must satisfy the
service obligation within the portion of
the eight-year period that remained after
the date of the discharge.
(4) If the TEACH Grant of a recipient
whose TEACH Grant agreement to serve
is reinstated is later converted to a
Direct Unsubsidized Loan, the recipient
will not be required to pay interest that
accrued on the TEACH Grant
disbursements from the date the
agreement to serve was discharged until
the date the agreement to serve was
reinstated.
* * * * *
■ 12. Section 686.43 is amended by
revising paragraph (a)(1) to read as
follows:
§ 686.43 Obligation to repay the grant.
(a) * * *
(1) The grant recipient, regardless of
enrollment status, requests that the
TEACH Grant be converted into a
Federal Direct Unsubsidized Loan
because he or she has decided not to
teach in a qualified school or
educational service agency, or not to
teach in a high-need field, or for any
other reason;
* * * * *
[FR Doc. 2016–24856 Filed 10–28–16; 8:45 am]
BILLING CODE 4000–01–P
Section 205 of the HEA requires States to report on the
criteria they use to assess whether teacher preparation programs are
low-performing or at-risk of being low-performing, but it is difficult
to identify programs in need of remediation or closure because few of
the reporting requirements ask for information indicative of program
quality. A U.S. Government Accountability Office (GAO) report noted that half the States said current title
II reporting system data were ``slightly useful,'' ``neither useful nor
not useful,'' or ``not useful''; over half the teacher preparation
programs surveyed said the data were not useful in assessing their
programs; and none of the surveyed school district staff said they used
the data.\4\ The Secretary is committed to ensuring that the measures
by which States judge the quality of teacher preparation programs
reflect the true quality of the programs and provide information that
facilitates program improvement and, by extension, improvement in
student achievement.
---------------------------------------------------------------------------
    \4\ See U.S. Government Accountability Office (GAO) (2015). Teacher
Preparation Programs: Education Should Ensure States Identify
Low-Performing Programs and Improve Information-Sharing. GAO-15-598.
Washington, DC, at 26. Retrieved from: https://gao.gov/products/GAO-15-598.
---------------------------------------------------------------------------
The final regulations address shortcomings in the current system by
defining the indicators of quality that a State must use to assess the
performance of its teacher preparation programs, including more
meaningful indicators of program inputs and program outcomes, such as
the ability of the program's graduates to produce gains in student
learning \5\ (understanding that not all students will learn at the
same rate). The final regulations build on current State data systems
and linkages and create a much-needed feedback loop to facilitate
program improvement and provide valuable information to prospective
teachers, potential employers, and the general public.
---------------------------------------------------------------------------
\5\ Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005).
Teachers, schools, and academic achievement. Econometrica, 73(2),
417-458. https://doi.org/10.1111/j.1468-0262.2005.00584.x.
---------------------------------------------------------------------------
The final regulations also link assessments of program performance
under HEA title II to eligibility for the Federal TEACH Grant program.
The TEACH Grant program, authorized by section 420M of the HEA,
provides grants to eligible IHEs, which, in turn, use the funds to
provide grants of up to $4,000 annually to eligible teacher preparation
candidates who agree to serve as full-time teachers in high-need fields
at low-income schools for not less than four academic years within
eight years after completing their courses of study. If a TEACH Grant
recipient fails to complete his or her service obligation, the grant is
converted into a Federal Direct Unsubsidized Stafford Loan that must be
repaid with interest.
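As a rough illustration of the service obligation described above (this sketch is ours, not statutory or regulatory text, and the names are hypothetical):

# Illustrative sketch of the TEACH Grant service obligation; hypothetical
# names, not statutory or regulatory text.
def obligation_status(qualifying_years_taught: int,
                      years_since_completion: int) -> str:
    if qualifying_years_taught >= 4:
        return "service obligation satisfied"
    years_remaining = 8 - years_since_completion
    if years_remaining < 4 - qualifying_years_taught:
        # Four qualifying years can no longer fit within the eight-year
        # window, so the grant converts to a Direct Unsubsidized Loan
        # repaid with interest.
        return "converted to loan"
    return "service window still open"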
Pursuant to section 420L(1)(A) of the HEA, one of the eligibility
requirements for an institution to participate in the TEACH Grant
program is that it must provide high-quality teacher preparation.
However, of the 38 programs identified by States as ``low-performing''
or ``at-risk,'' 22 programs were offered by IHEs participating in the
TEACH Grant program. The final
[[Page 75495]]
regulations limit TEACH Grant eligibility to only those programs that
States have identified as ``effective'' or higher in their assessments
of program performance under HEA title II.
Summary of the Major Provisions of This Regulatory Action
The final regulations--
Establish necessary definitions and requirements for IHEs
and States related to the quality of teacher preparation programs, and
require States to develop measures for assessing teacher preparation
performance.
Establish indicators that States must use to report on
teacher preparation program performance, to help ensure that the
quality of teacher preparation programs is judged on reliable and valid
indicators of program performance.
Establish the areas States must consider in identifying
teacher preparation programs that are low-performing and at-risk of
being low-performing, the actions States must take with respect to
those programs, and the consequences for a low-performing program that
loses State approval or financial support. The final regulations also
establish the conditions under which a program that loses State
approval or financial support may regain its eligibility for title IV,
HEA funding.
Establish a link between the State's classification of a
teacher preparation program's performance under the title II reporting
system and that program's identification as ``high-quality'' for TEACH
Grant eligibility purposes.
Establish provisions that allow TEACH Grant recipients to
satisfy the requirements of their agreement to serve by teaching in a
high-need field that was designated as high-need at the time the grant
was received.
Establish conditions that allow TEACH Grant recipients to
have their service obligations discharged if they are totally and
permanently disabled. The final regulations also establish conditions
under which a student who had a prior service obligation discharged due
to total and permanent disability may receive a new TEACH Grant.
Costs and Benefits
The benefits, costs, and transfers related to the regulations are
discussed in more detail in the Regulatory Impact Analysis (RIA)
section of this document. Significant benefits of the final regulations
include improvements to the HEA title II accountability system that
will enable prospective teachers to make more informed choices about
their enrollment in a teacher preparation program, and will enable
employers of prospective teachers to make more informed hiring
decisions. Further, the final regulations will create incentives for
States and IHEs to monitor and continuously improve the quality of
their teacher preparation programs. Most importantly, the final
regulations will help support elementary and secondary school students
because the changes will lead to better prepared, higher quality
teachers in classrooms, including for students in high-need schools and
communities, who are disproportionately taught by less experienced
teachers.
The net budget impact of the final regulations is approximately
$0.49 million in reduced costs over the TEACH Grant cohorts from 2016
to 2026. We estimate that the total cost annualized over 10 years of
the final regulations is between $27.5 million and $27.7 million (see
the Accounting Statement section of this document).
On December 3, 2014, the Secretary published a notice of proposed
rulemaking (NPRM) for these parts in the Federal Register (79 FR
71820). The final regulations contain changes from the NPRM, which are
fully explained in the Analysis of Comments and Changes section of this
document. Some commenters requested clarification regarding how the
proposed State reporting requirements would affect teacher preparation
programs provided through distance education and TEACH Grant
eligibility for students enrolled in teacher preparation programs
provided through distance education. In response to these comments, on
April 1, 2016, the Department published a supplemental notice of
proposed rulemaking (Supplemental NPRM) in the Federal Register (81 FR
18808) that reopened the public comment period for 30 days solely to
seek comment on those specific issues. The Department specifically
requested public comment on issues related to reporting by States
on teacher preparation programs provided through distance education,
and TEACH Grant eligibility requirements for teacher preparation
programs provided through distance education. The comment period for
the Supplemental NPRM closed on May 2, 2016.
Public Comment: In response to our invitation in the December 3,
2014, NPRM, approximately 4,800 parties submitted comments on the
proposed regulations. In response to our invitation in the Supplemental
NPRM, the Department received 58 comments.
We discuss substantive issues under the sections of the proposed
regulations to which they pertain. Generally, we do not address
technical or other minor changes.
Analysis of Comments and Changes: An analysis of the comments and
of any changes in the regulations since publication of the NPRM and the
Supplemental NPRM follows.
Part 612--Title II Reporting System
Subpart A--Scope, Purpose, and Definitions
Section 612.1 Scope and Purpose
Statutory Authority
Comments: A number of commenters raised concerns about whether the
Department has authority under the HEA to issue these regulations. In
this regard, several commenters asserted that the Department does not
have the statutory authority to require States to include student
learning outcomes, employment outcomes, and survey outcomes among the
indicators of academic content knowledge and teaching skills that would
be included in the State's report card under Sec. 612.5. Commenters
also claimed that the HEA does not authorize the Department to require
States, in identifying low-performing or at-risk teacher preparation
programs, to use those indicators of academic content knowledge and
teaching skills as would be required under Sec. 612.6. These
commenters argued that section 207 of the HEA provides that levels of
performance shall be determined solely by the State, and that the
Department may not provide itself authority to mandate these
requirements through regulations when the HEA does not do so.
Commenters argued that only the State may determine whether to
include student academic achievement data (and by inference our other
proposed indicators of academic content knowledge and teaching skills)
in their assessments of teacher preparation program performance. One
commenter contended that the Department's attempt to ``shoehorn''
student achievement data into the academic content knowledge and
teaching skills of students enrolled in teacher preparation programs
(section 205(b)(1)(F)) would render meaningless the language of section
207(a) that gives the State the authority to establish levels of
performance, and what those levels contain. These commenters argued
that, as a result, the HEA prohibits the Department from requiring
States to use any particular indicators. Other commenters argued that
such State authority also flows from section 205(b)(1)(F) of the HEA,
which provides
[[Page 75496]]
that, in the State Report Card (SRC), the State must include a
description of the method of assessing teacher preparation program
performance. This includes indicators of the academic content knowledge
and teaching skills of the students enrolled in such programs.
Commenters also stated that the Department does not have the
authority to require that a State's criteria for assessing the
performance of any teacher preparation program include the indicators
of academic content knowledge and teaching skills, including, ``in
significant part,'' student learning outcomes and employment outcomes
for high-need schools. See proposed Sec. Sec. 612.6(a)(1) and
612.4(b)(1). Similar concerns were expressed with respect to proposed
Sec. 612.4(b)(2), which provided that a State could determine that a
teacher preparation program was effective (or higher) only if the
program was found to have ``satisfactory or higher'' student learning
outcomes.
Discussion: Before we respond to the comments about specific
regulations and statutory provisions, we think it would be helpful to
outline the statutory framework under which we are issuing these
regulations. Section 205(a) of the HEA requires that each IHE that
provides a teacher preparation program leading to State certification
or licensure and that enrolls students who receive HEA student
financial assistance report on a statutorily enumerated series of data
elements for the programs it provides. Section 205(b) of the HEA
requires each State that receives funds under the HEA to provide to the
Secretary and make widely available to the public information on, among
other things, the quality of traditional and alternative route teacher
preparation programs that includes not less than the statutorily
enumerated series of data elements. The State must do so in a uniform
and comprehensible manner, conforming to definitions and methods
established by the Secretary. Section 205(c) of the HEA directs the
Secretary to prescribe regulations to ensure the validity, reliability,
accuracy, and integrity of the data submitted. Section 206(b) requires
that IHEs provide assurance to the Secretary that their teacher
training programs respond to the needs of LEAs, are closely linked with
the instructional decisions novice teachers confront in the classroom,
and prepare candidates to work with diverse populations and in urban
and rural settings, as applicable. Section 207(a) of the HEA provides
that in order to receive funds under the HEA, a State must conduct an
assessment to identify low-performing teacher preparation programs in
the State, and help those programs through provision of technical
assistance. Section 207(a) further provides that the State's report
identify programs that the State determines to be low-performing or at
risk of being low-performing, and that levels of performance are to be
determined solely by the State.
The proposed regulations, like the final regulations, reflect the
fundamental principle and the statutory requirement that the assessment
of teacher preparation program performance must be conducted by the
State, with criteria the State establishes and levels of differentiated
performance that are determined by the State. Section 205(b)(1)(F) of
the HEA provides that a State must include in its report card a
description of its criteria for assessing the performance of teacher
preparation programs within IHEs in the State and that those criteria
must include indicators of the academic content knowledge and teaching
skills of students enrolled in such programs. Significantly, section
205(b)(1) further provides that the State's report card must conform
with definitions and methods established by the Secretary, and section
205(c) authorizes the Secretary to prescribe regulations to ensure the
reliability, validity, integrity, and accuracy of the data submitted in
the report cards.
Consistent with those statutory provisions, Sec. 612.5 establishes
the indicators States must use to comply with the reporting requirement
in section 205(b)(1)(F), namely by having States include in the report
card their criteria for program assessment and the indicators of
academic content knowledge and teaching skills that they must include
in those criteria. While the term ``teaching skills'' is defined in
section 200(23) of the HEA, the definition is complex and the statute
does not indicate what are appropriate indicators of academic content
knowledge and teaching skills of those who complete teacher preparation
programs. Thus, in Sec. 612.5, we establish reasonable definitions of
these basic, but ambiguous statutory phrases in an admittedly complex
area--how States may reasonably assess the performance of their teacher
preparation programs--so that the conclusions States reach about the
performance of individual programs are valid and reliable in compliance
with the statute. We discuss the reasonableness of the four general
indicators of academic content knowledge and teaching skills that the
Secretary has established in Sec. 612.5 later in this preamble under
the heading What indicators must a State use to report on teacher
preparation program performance for purposes of the State report card?
Ultimately though, section 205(b) clearly permits the Secretary to
establish definitions for the types of information that must be
included in the State report cards, and, in doing so, complements the
Secretary's general authority to define statutory phrases that are
ambiguous or require clarification.
The provisions of Sec. 612.5 are also wholly consistent with
section 207(a) of the HEA. Section 207(a) provides that States
determine the levels of program performance in their assessments of
program performance and discusses the criteria a State ``may'' include
in those levels of performance. However, section 207(a) does not negate
the basic requirement in section 205(b) that States include indicators
of academic content knowledge and teaching skills within their program
assessment criteria or the authority of the Secretary to establish
definitions for report card elements. Moreover, the regulations do not
limit a State's authority to establish, use, and report other criteria
that the State determines are appropriate for generating a valid and
reliable assessment of teacher preparation program performance. Section
612.5(b) of the regulations expressly permits States to supplement the
required indicators with other indicators of a teacher's effect on
student performance, including other indicators of academic content
knowledge and teaching skills, provided that the State uses the same
indicators for all teacher preparation programs in the State. In
addition, working with stakeholders, States are free to determine how
to apply these various criteria and indicators in order to determine,
assess, and report whether a preparation program is low-performing or
at-risk of being low-performing.
We appreciate commenters' concerns regarding the provisions in
Sec. Sec. 612.4(b)(1) and (b)(2) and 612.6(b)(1) regarding weighting
and consideration of certain indicators. Based on consideration of the
public comments and the potential complexity of these requirements, we
have removed these provisions from the final regulations. While we have
taken this action, we continue to believe strongly that providing
significant weight to these indicators when determining a teacher
preparation program's level of performance is very important. The
ability of novice teachers to promote positive student academic growth
should be central to the missions of all teacher preparation programs,
and having those programs focus on producing well-prepared novice
[[Page 75497]]
teachers who work and stay in high-need schools is critical to meeting
the Nation's needs. Therefore, as they develop their measures and
weights for assessing and reporting the performance of each teacher
preparation program in their SRCs, we strongly encourage States, in
consultation with their stakeholders, to give significant weight to
these indicators.
Changes: We have revised Sec. Sec. 612.4(b)(1) and 612.6(a)(1) to
remove the requirement for States to include student learning outcomes
and employment outcomes, ``in significant part,'' in their use of
indicators of academic content knowledge and teaching skills as part of
their criteria for assessing the performance of each teacher
preparation program. We also have revised Sec. 612.4(b)(2) to remove
the requirement that permitted States to determine that a teacher
preparation program was effective (or higher quality) only if the State
found the program to have ``satisfactory or higher'' student learning
outcomes.
Comments: Several commenters objected to the Department's proposal
to establish four performance levels for States' assessment of their
teacher preparation programs. They argued that section 207(a), which
specifically requires States to report those programs found to be
either low-performing or at-risk of being low-performing, establishes
the need for three performance levels (low-performing, at-risk of being
low-performing, and all other programs) and that the Department lacks
authority to require reporting on the four performance levels proposed
in the NPRM, i.e., those programs that are ``low-performing,'' ``at-
risk,'' ``exceptional,'' and everything else. These commenters stated
that these provisions of the HEA give to the States the authority to
determine whether to establish more than three performance levels.
Discussion: Section 205(b) of the HEA provides that State reports
``shall include not less than the following,'' and this provision
authorizes the Secretary to add reporting elements to the State
reports. It was on this basis that we proposed, in Sec. 612.4(b)(1),
to supplement the statutorily required elements to require States, when
making meaningful differentiation in teacher preparation program
performance, to use at least four performance levels, including
exceptional. While we encourage States to identify programs that are
exceptional in order to recognize and celebrate outstanding programs,
and so that prospective teachers and their employers know of them and
others may learn from them, in consideration of comments that urged the
Secretary not to require States to report a fourth performance level
and other comments that expressed concerns about overall implementation
costs, we are not adopting this proposal in the final regulations.
Changes: We have revised Sec. 612.4(b)(1) to remove the
requirement for States to rate their teacher preparation programs using
the category ``exceptional.'' We have also removed the definition of
``exceptional teacher preparation program'' from the Definitions
section in Sec. 612.2.
Comments: Several commenters raised concerns about whether the
provisions of Sec. 612.6 are consistent with section 205(b)(2) of the
HEA, which prohibits the Secretary from creating a national list or
ranking of States, institutions, or schools using the scaled scores
required under section 205. Some of these commenters acknowledged the
usefulness of a system for public information on teacher preparation.
However, the commenters argued that, if these regulations are
implemented, the Federal government would instead be creating a program
rating system in violation of section 205(b)(2).
Commenters also stated that by mandating a system for rating
teacher preparation programs, including the indicators by which teacher
preparation programs must be rated, what a State must consider in
identifying low-performing or at-risk teacher preparation programs, and
the actions a State must take with respect to low-performing programs
(proposed Sec. Sec. 612.4, 612.5, and 612.6), the Federal government
is impinging on the authority of States, which authorize, regulate, and
approve IHEs and their teacher preparation programs.
Discussion: Although section 207(a) of the HEA expressly requires
States to include in their SRCs a list of programs that they have
identified as low-performing or at-risk of being low-performing, the
regulations do not in any other way require States to specify or create
a list or ranking of institutions or programs and the Department has no
intention of requiring States to do so. Nor will the Department be
creating a national list or ranking of States, institutions, or teacher
preparation programs. Thus, there is no conflict with section
205(b)(2).
As we discussed in response to the prior set of comments, these
regulations establish definitions for terms provided in title II of the
HEA in order to help ensure that the State and IHE reporting system
meets its purpose. In authorizing the Secretary to define statutory
terms and establish reporting methods needed to properly implement the
title II reporting system, neither Congress nor the Department is
abrogating State authority to authorize, regulate, and approve IHEs and
their teacher preparation programs. Finally, in response to the
comments that proposed Sec. Sec. 612.4, 612.5, and 612.6 would
impermissibly impinge on the authority of States in terms of actions
they must take with respect to low-performing programs, we note that
the regulations do little more than clarify the sanctions that Congress
requires in section 207(b) of the HEA. Those sanctions address the
circumstances in which students enrolled in a low-performing program
may continue to receive or regain Federal student financial assistance,
and thus the Federal government has a direct interest in the subject.
Changes: None.
Comments: One commenter contended that Federal law provides no
authority to compel LEAs to develop the criteria and implement the
collection and reporting of student learning outcome data, and that
there is little that the commenter's State can do to require LEA
compliance with those reporting requirements.
Discussion: Section 205(b) of the HEA requires all States receiving
HEA funds to provide the information the law identifies ``in a uniform
and comprehensible manner that conforms with the definitions and
methods established by the Secretary.'' These regulations place
responsibility for compliance upon the States, not the LEAs.
Since all LEAs stand to benefit from the success of the new
reporting system through improved transparency and information about
the quality of teacher preparation programs from which they may recruit
and hire new teachers, we assume that all LEAs will want to work with
their States to find manageable ways to implement the regulations.
Moreover, without more information from the commenter, we cannot
address why a particular State would not have the authority to insist
that an LEA provide the State with the information it needs to meet
these reporting requirements.
Changes: None.
Federal-State-Institution Relationship, Generally
Comments: Many commenters stated generally that the proposed
regulations are an example of Federal overreach and represent a
profound and improper shift in the historic relationship among
institutions, States, school districts, accrediting agencies, and the
Federal government in the area
[[Page 75498]]
of teacher preparation and certification. For example, one commenter
stated that the proposal threatens the American tradition of Federal
non-interference with academic judgments, and makes the Department the
national arbiter of what teacher preparation programs should teach, who
they should teach, and how they should teach. Commenters also contended
that the proposed regulations impermissibly interfere with local and
State control and governance by circumventing States' rights delegated
to local school districts and the citizens of those districts to
control the characteristics of quality educators and to determine
program approval.
Discussion: The need for teacher preparation programs to produce
teachers who can adequately and effectively teach to the needs of the
Nation's elementary and secondary school students is national in scope
and self-evident. Congress enacted the HEA title II reporting system as
an important tool to address this need. Our final regulations are
intended to give the public confidence that, as Congress anticipated
when it enacted sections 205(b) and 207 of the HEA, States have
reasonably determined whether teacher preparation programs are, or are
not, meeting the States' expectations for their performance. While the
regulations provide for use of certain minimum indicators and
procedures for determining and reporting program performance, they
provide States with a substantial amount of discretion in how to
measure these indicators, what additional indicators a State may choose
to add, and how to weight and combine these indicators and criteria
into an overall assessment of a teacher preparation program's
performance. Thus, the final regulations are consistent with the
traditional importance of State decision-making in the area of
evaluating educational performance. The public, however, must have
confidence that the procedures and criteria that each State uses to
assess program performance and to report programs as low-performing or
at-risk are reasonable and transparent. Consistent with the statutory
requirement that States report annually to the Secretary and to the
public ``in a uniform and comprehensible manner that conforms to the
definitions and methods established by the Secretary,'' the regulations
aim to help ensure that each State report meets this basic test.
We disagree with comments that allege that the regulations reflect
overreach by the Federal government into the province of States
regarding the approval of teacher preparation programs and the academic
domain of institutions that conduct these programs. The regulations do
not constrain the academic judgments of particular institutions, what
those institutions should teach in their specific programs, which
students should attend those programs, or how those programs should be
conducted. Nor do they dictate which teacher preparation programs
States should approve or should not approve. Rather, by clarifying
limited areas in which sections 205 and 207 of the HEA are unclear, the
regulations implement the statutory mandate that, consistent with
definitions and reporting methods the Secretary establishes, States
assess the quality of the teacher preparation programs in their State,
identify those that are low-performing or at-risk of being low-
performing, and work to improve the performance of those programs.
With the changes we are making in these final regulations, the
system for determining whether a program is low-performing or at-risk
of being low-performing is unarguably a State-determined system.
Specifically, as noted above, in assessing and reporting program
performance, each State is free to (1) adopt and report other measures
of program performance it believes are appropriate, (2) use discretion
in how to measure student learning outcomes, employment outcomes,
survey outcomes, and minimum program characteristics, and (3) determine
for itself how these indicators of academic content knowledge and
teaching skills and other criteria a State may choose to use will
produce a valid and reliable overall assessment of each program's
performance. Thus, the assessment system that each State will use is
developed by the State, and does not compromise the ability of the
State and its stakeholders to determine what is and is not a low-
performing or at-risk teacher preparation program.
Changes: None.
Constitutional Issues
Comments: One commenter stated that the proposed regulations
amounted to a coercive activity that violates the U.S. Constitution's
Spending Clause (i.e., Article I, Section 8, Clause 1 of the U.S.
Constitution). The commenter argued that sections 205 and 207 of the
HEA are grounded in the Spending Clause and Spending Clause
jurisprudence, including cases such as Arlington Cent. Sch. Dist. Bd. of
Educ. v. Murphy, 548 U.S. 291 (2006), which provides that States are
not bound by requirements of which they have no clear notice. In
particular, the commenter asserted that, in examining the text of the
statute in order to decide whether to accept Federal financial
assistance, a State would not have clear notice that it would be
required to commit substantial amounts of funds to develop the
infrastructure required to include student learning outcome data in its
SRC or include student learning outcomes in its evaluation of teacher
preparation programs. Some commenters stated that the proposed
regulations violate the Tenth Amendment to the U.S. Constitution.
Discussion: Congress' authority to enact the provisions in title II
of the HEA governing the State reporting system flows from its
authority to ``provide for the . . . general Welfare of the United
States.'' Article I, Section 8, Clause 1 (commonly referred to as
Congress' ``spending authority''). Under that authority, Congress
authorized the Secretary to implement the provisions of sections 205
through 207. Thus, the regulations do not conflict with Congress'
authority under the Spending Clause. With respect to cases such as
Arlington Cent. Sch. Dist. Bd. of Educ. v. Murphy, States have full notice
of their responsibilities under the reporting system through the
rulemaking process the Department has conducted under the
Administrative Procedure Act and the General Education Provisions Act
to develop these regulations.
We also do not perceive a legitimate Tenth Amendment issue. The
Tenth Amendment provides in pertinent part that powers not delegated to
the Federal government by the Constitution are reserved to the States.
Congress used its spending authority to require institutions that
enroll students who receive Federal student financial assistance in
teacher preparation programs, and States that receive HEA funds, to
submit information as required by the Secretary in their institutional
report cards (IRCs) and SRCs. Thus, the Secretary's authority to define
the ambiguous statutory term ``indicators of academic content knowledge
and teaching skills'' to include the measures the regulations
establish, coupled with the authority States have under section
205(b)(1)(F) of the HEA to establish other criteria with which they
assess program performance, resolves any claim that the assessment of
program performance is a matter left to the States under the Tenth
Amendment.
Changes: None.
Unfunded Mandates
Comments: Some commenters stated that the proposed regulations
would amount to an unfunded mandate, in that they would require States,
institutions
[[Page 75499]]
with teacher preparation programs, and public schools to bear
significant implementation costs, yet offer no Federal funding to cover
them. To pay for this unfunded mandate, several commenters stated that
costs would be passed on to students via tuition increases, decreases
in funding for higher education, or both.
Discussion: These regulations do not constitute an unfunded
mandate. Section 205(b) makes reporting ``in a uniform and
comprehensible manner that conforms with the definitions and methods
established by the Secretary'' a condition of the State's receipt of
HEA funds. And, as we have stated, the regulations implement this
statutory mandate.
Changes: None.
Loss of Eligibility To Enroll Students Who Receive HEA-Funded Student
Financial Aid
Comments: Many commenters stated that the Department lacks
authority to establish Federally defined performance criteria for the
purpose of determining a teacher preparation program's eligibility for
student financial aid under title IV of the HEA. Commenters expressed
concern that the Department is departing from the current model, in
which the Department determines institutional eligibility for title IV
student aid, to a model in which this function would be outsourced to
the States. While some commenters acknowledged that, under the HEA, a
teacher preparation program loses its title IV eligibility if its State
decides to withdraw approval or financial support, commenters asserted
that the HEA does not intend for this State determination to be coupled
with a prescriptive Federal mandate governing how the determination
should be made. A number of commenters also stated that the regulations
would result in a process of determining eligibility for Federal
student aid that will vary by State.
Similarly, some commenters stated that the proposed requirements in
Sec. 612.8(b)(1) for regaining eligibility to enroll students who
receive title IV aid exceed the statutory authority in section
207(b)(4) of the HEA, which provides that a program is reinstated upon
a demonstration of improved performance, as determined by the State.
Commenters expressed concern that the proposed regulations would shift
this responsibility from the State to the Federal government, and
stated that teacher preparation programs could be caught in limbo. They
argued that if a State had already reinstated funding and identified
that a program had improved performance, the program's ability to
enroll students who receive student financial aid would be conditioned
on the Secretary's approval. The commenters contended that policy
changes as significant as these should come from Congress, after
scrutiny and deliberation of a reauthorized HEA.
Discussion: Section 207(b) of the HEA states, in relevant part:
Any teacher preparation program from which the State has withdrawn
the State's approval, or terminated the State's financial support, due
to the low performance of the program based upon the State assessment
described in subsection (a)--
(1) Shall be ineligible for any funding for professional
development activities awarded by the Department;
(2) May not be permitted to accept or enroll any student who
receives aid under title IV in the institution's teacher preparation
program;
(3) Shall provide transitional support, including remedial services
if necessary, for students enrolled at the institution at the time of
termination of financial support or withdrawal of approval; and
(4) Shall be reinstated upon demonstration of improved performance,
as determined by the State.
Sections 612.7 and 612.8 implement this statutory provision through
procedures that mirror existing requirements governing termination and
reinstatement of student financial support under title IV of the HEA.
As noted in the preceding discussion, our regulations do not usurp
State authority to determine how to assess whether a given program is
low-performing, and our requirement that States do so using, among
other things, the indicators of novice teachers' academic content
knowledge and teaching skills identified in Sec. 612.5 is consistent
with title II of the HEA.
Consistent with section 207(a) of the HEA, a State determines a
teacher preparation program's performance level based on the State's
use of those indicators and any other criteria or indicators the State
chooses to use to measure the overall level of the program's
performance. In addition, consistent with section 207(b), the loss of
eligibility to enroll students receiving Federal student financial aid
does not depend upon a Department decision. Rather, the State
determines whether the performance of a particular teacher preparation
program is so poor that it withdraws the State's approval of, or
terminates the State's financial support for, that program. Each State
may use a different decision model to make this determination, as
contemplated by section 207(b).
Commenters' objections to our proposal for how a program subject to
section 207(b) may regain eligibility to enroll students who receive
title IV aid are misplaced. Section 207(b)(4) of the HEA provides that
a program found to be low-performing is reinstated upon the State's
determination that the program has improved, which presumably would
need to include the State's reinstatement of State approval or
financial support, since otherwise the institution would continue to
lose its ability to accept or enroll students who receive title IV aid
in its teacher preparation programs. However, the initial loss of
eligibility to enroll students who receive title IV aid is a
significant event, and we believe that Congress intended that section
207(b)(4) be read and implemented not in isolation, but rather in the
context of the procedures established in 34 CFR 600.20 for
reinstatement of eligibility based on the State's determination of
improved performance.
Changes: None.
Relationship to Department Waivers Under ESEA Flexibility
Comments: A number of commenters stated that the proposed
regulations inappropriately extend the Federal requirements of the
Department's Elementary and Secondary Education Act (ESEA) flexibility
initiative to States that have either chosen not to seek a waiver of
certain ESEA requirements or have applied for a waiver but not received
one. The commenters argued that requiring States to assess all students
in non-tested grades and subjects (i.e., those grades and subjects for
which testing is not required under title I, part A of the ESEA)--a
practice that is currently required only in States with ESEA
flexibility or in States that have chosen to participate in the Race to
the Top program--sets a dangerous precedent.
Discussion: While the regulations are similar to requirements the
Department established for States that received ESEA flexibility or
Race to the Top grants regarding linking data on student growth to
individual teachers of non-tested grades and subjects under ESEA title
I, part A, they are independent of those requirements. While section
4(c) of the Every Student Succeeds Act (ESSA) \6\ ends conditions of
waivers granted under ESEA flexibility on August 1, 2016, States that
received ESEA flexibility or a Race to the Top grant may well have a
head start in
[[Page 75500]]
implementing systems for linking academic growth data for elementary
and secondary school students to individual novice teachers, and then
linking data on these novice teachers to individual teacher preparation
programs. However, we believe that all States have a strong interest
and incentive in finding out whether each of their teacher preparation
programs is meeting the needs of their K-12 students and the
expectations of their parents and the public. We therefore expect that
States will seek to work with other stakeholders to find appropriate
ways to generate the data needed to perform the program assessments
that these regulations implementing section 205 of the HEA require.
---------------------------------------------------------------------------
    \6\ ESSA, which was signed into law in December 2015 (i.e.,
after the NPRM was published), reauthorizes and amends the ESEA.
---------------------------------------------------------------------------
Changes: None.
Consistency With State Law and Practice
Comments: A number of commenters expressed concerns about whether
the proposed regulations were consistent with State law. Some
commenters stated that California law prohibits the kind of data
sharing between the two State agencies, the California Commission on
Teacher Credentialing (CTC) and the California Department of Education
(CDE), that would be needed to implement the proposed regulations.
Specifically, the commenter stated that section 44230.5 of the
California Education Code (CEC) does not allow CTC to release
information on credential holders to any entity other than the type of
credential and employing district. In addition, the commenter noted
that California statutes (sections 44660-44665 of the CEC) authorize
each of the approximately 1,800 districts and charter schools to
independently negotiate and implement teacher evaluations, so there is
no statewide collection of teacher evaluation data. The commenter also
noted that current law prohibits employers from sharing teacher
evaluation data with teacher preparation programs or with the State if
an individual teacher would be identifiable.
Another commenter argued that in various ways the proposed
regulations constitute a Federal overreach with regard to what Missouri
provides in terms of State and local control and governance.
Specifically, the commenter stated that the proposed regulations
circumvent: the rights of Missouri school districts and citizens under
the Missouri constitution to control the characteristics of quality
education; the authority of the Missouri legislative process and the
State Board of Education to determine program quality; State law
(specifically, according to the commenter, Missouri House Bill 1490,
which limits how school districts can share locally held student data
such as student learning outcomes); and the process already underway to
improve teacher preparation in Missouri.
Other commenters expressed concern that our proposal to require
States to use student learning outcomes, employment outcomes, and
survey outcomes, as defined in the proposed regulations, would create
inconsistencies with what they consider to be the more comprehensive
and more nuanced way in which their States assess teacher preparation
program performance and then provide relevant feedback to programs and
the institutions that operate them.
Finally, a number of commenters argued that requirements related to
indicators of academic content knowledge and teaching skills are
unnecessary because there is already an organization, the Council for
the Accreditation of Educator Preparation (CAEP), which requires IHEs
to report information similar to what the regulations require. These
commenters claimed that the reporting of data on indicators of academic
content knowledge and teaching skills related to each individual
program on the SRC may be duplicative and unnecessary.
Discussion: With respect to comments on the CEC, we generally defer
to each State to interpret its own laws. However, assuming that the CTC
will play a role in how California would implement these regulations,
we do not read section 44230.5 of the CEC to prohibit CTC from
releasing information on credential holders to any entity other than
the type of credential and employing district, as the commenters state.
Rather, the provision requires CTC to ``establish a nonpersonally
identifiable educator identification number for each educator to whom
it issues a credential, certificate, permit, or other document
authorizing that individual to provide a service in the public
schools.'' Moreover, while sections 44660 through 44665 of the CEC
authorize each LEA in California to independently negotiate and
implement teacher evaluations, we do not read this to mean that
California is prohibited from collecting data relevant to the student
learning outcomes of novice teachers and link them to the teachers'
preparation program. Commenters did not cite any provision of the CEC
that prohibits LEAs from sharing teacher evaluation data with teacher
preparation programs or the State if it is done without identifying any
individual teachers. We assume that use of the nonpersonally
identifiable educator identification number that section 44230.5 of the
CEC directs CTC to establish would provide one way to accomplish this task. Finally, we
have reviewed the commenters' brief description of the employer surveys
and teacher entry and retention data that California is developing for
use in its assessments of teacher preparation programs. Based on the
comments, and as discussed more fully under the subheading Student
Learning Outcomes, we believe that the final regulations are not
inconsistent with California's approach.
The commenter who referred to Missouri law raised several broad
concerns about purported Federal overreach into areas governed by State
law, but these concerns were very general. We note, however, that in previously
applying for and receiving ESEA flexibility, the Missouri Department of
Elementary and Secondary Education (MDESE) agreed to have LEAs in the
State implement basic changes in their teacher evaluation systems that
would allow them to generate student growth data that would fulfill the
student learning outcomes requirement. In doing so, the MDESE
demonstrated that it was fully able to implement these types of
activities without conflict with State law. Moreover, the regulations
address neither how a State or LEA is to determine the characteristics
of effective educators, nor State procedures and authority for
determining when to approve a teacher preparation program. Nor do the
regulations undermine any State efforts to improve teacher preparation;
in implementing sections 205(b) and 207(a) of the HEA, they simply
require that, in assessing the level of performance of each teacher
preparation program, States examine and report data about the
performance of the novice teachers each program produces.
Finally, we note that, as enacted, House Bill 1490 specifically
directs the Missouri State Board of Education to issue a rule on
gathering student data in the Statewide Longitudinal Data System,
addressing the Board's need to make certain data elements available to
the public. This is the very process the State presumably would use to
gather and report the data that these regulations require. In addition,
we read House Bill 1490 as prohibiting the MDESE, unless otherwise
authorized, from transferring ``personally identifiable student data,''
something that the regulations do not contemplate. Further, we do not
read House Bill 1490 as establishing the kind of limitation on LEAs'
sharing student data with the MDESE that the commenter stresses. House
Bill 1490 also requires the State Board to ensure
[[Page 75501]]
compliance with the Family Educational Rights and Privacy Act (FERPA)
and other laws and policies; see our discussion of comment on FERPA and
State privacy laws under Sec. 612.4(b)(3)(ii)(E).
We are mindful that a number of States have begun their own efforts
to use various methods and procedures to examine how well their teacher
preparation programs are performing. For the title II reporting system,
the HEA provides that State reporting must use such common definitions
and reporting methods as the Secretary determines necessary. While the
regulations require all States to use data on student learning
outcomes, employment outcomes, survey outcomes, and minimum program
characteristics to determine which programs are low-performing or at-
risk of being low-performing, States may, after working with their
stakeholders, also adopt other criteria and indicators. We also know
from the recent GAO report that more than half the States were already
using information on program graduates' effectiveness in their teacher
preparation program approval or renewal processes and at least 10
others planned to do so--data we would expect to align with these
reporting requirements.\7\ Hence, we trust that what States report in
the SRCs will complement their own systems of assessing program
performance.
---------------------------------------------------------------------------
\7\ GAO at 13-14.
---------------------------------------------------------------------------
Finally, with regard to the work of CAEP, we agree that CAEP may
require some institutional reporting that may be similar to the
reporting required under the title II reporting system; however,
reporting information to CAEP does not satisfy the reporting
requirements under title II. Regardless of the information reported to
CAEP, States and institutions still have a statutory obligation to
submit SRCs and IRCs. The CAEP reporting requirements include the
reporting of data associated with student learning outcomes, employment
outcomes, and survey outcomes; however, CAEP standards do not require
the disaggregation of data for individual teacher preparation programs,
and such disaggregation is necessary for title II reporting.
Changes: None.
Cost Implications
Comments: A number of commenters raised concerns about the costs of
implementing the regulations. They stated that the implementation
costs, such as those for the required statewide data systems to be
designed, implemented, and refined in the pilot year, would require
States either to take funds away from other programs or to raise taxes or
fees to comply. The commenters noted that these costs could be passed
on to students via tuition increases or result in decreased State
funding for higher education, and that doing so would create many other
unintended consequences, such as drawing State funding away from hiring
of educators, minority-serving institutions, or future innovation,
reforms, and accountability initiatives. Commenters also stated that
the cost to institutions of implementing the regulations could pull
funding away from earning national accreditation.
Some commenters also expressed concern about the costs to States of
providing technical assistance to teacher preparation programs that
they find to be low-performing, and suggested that those programs could
lose State approval or financial support.
Finally, in view of the challenges in collecting accurate and
meaningful data on teacher preparation program graduates who fan out
across the United States, commenters argued that the Department should
find ways to provide financial resources to States and institutions to
help them gather the kinds of data the regulations will require.
Discussion: The United States has a critical need to ensure that it
is getting a good return on the billions of dollars of public funds it
spends producing novice teachers. The teacher preparation program
reporting system established in title II of the HEA provides an
important tool for understanding whether these programs are making good
on this investment. But the system can only serve its purpose if States
measure and report a program's performance in a variety of ways--in
particular, based on important inputs, such as good clinical education
and support, as well as on important outcomes, such as novice teachers'
success in improving student performance.
The regulations are designed to achieve these goals, while
maintaining State responsibility for deciding how to consider the
indicators of academic content knowledge and teaching skills described
in Sec. 612.5, along with other relevant criteria States choose to
use. We recognize that moving from the current system--in which States,
using criteria of their choosing, identified only 39 programs
nationally in 2011 as low-performing or at-risk of being low-performing
(see the NPRM, 79 FR 71823)--to one in which such determinations are
based on meaningful indicators and criteria of program effectiveness is
not without cost. We understand that States will need to make important
decisions about how to provide for these costs. However, as explained
in the Regulatory Impact Analysis section of this document, we
concluded both that (1) these costs are manageable, regardless of
States' current ability to establish the systems they will need, and
(2) the benefits of a system in which the public has confidence that
program reporting is valid and reliable are worth those costs.
While providing technical assistance to low-performing teacher
preparation programs will entail some costs, Sec. 612.6(b) simply
codifies the statutory requirement Congress established in section
207(a) of the HEA and offers examples of what this technical assistance
could entail. Moreover, we assume that a State would want to provide
such technical assistance rather than have the program continue to be
low-performing and so remain at-risk of losing State support (and
eligibility to enroll students who receive title IV aid).
Finally, commenters requested that we identify funding sources to
help States and IHEs gather the required data on students who, upon
completing their programs, do not stay in the State. We encourage
States to gather and use data on all program graduates regardless of
the State to which they ultimately move. However, given the evident
costs of doing so on an interstate basis, the final regulations permit
States to exclude these students from their calculations of student
learning outcomes, their teacher placement and retention rates, and from
the employer and teacher survey (see the definitions of teacher
placement and retention rate in Sec. 612.2) and provisions governing
student learning outcomes and survey outcomes in Sec. 612.5(a)(1)(iii)
and (a)(3)(ii).
Changes: None.
Section 612.2 Definitions
Content and Pedagogical Knowledge
Comments: Several commenters requested that we revise the
definition of ``content and pedagogical knowledge'' to specifically
refer to a teacher's ability to factor students' cultural, linguistic,
and experiential backgrounds into the design and implementation of
productive learning experiences. The commenters stated that pedagogical
diversity is an important construct in elementary and secondary
education and should be included in this definition.
Additional commenters requested that this definition specifically
refer to knowledge and skills regarding assessment. These commenters
stated that the ability to measure student
[[Page 75502]]
learning outcomes depends upon a teacher's ability to understand the
assessment of such learning, not just upon the conveyance and
explanation of content.
Another commenter recommended that we specifically mention the
distinct set of instructional skills necessary to address the needs of
students who are gifted and talented. This commenter stated that there
is a general lack of awareness of how to identify and support advanced
and gifted learners, and that this lack of awareness has contributed to
concerns about how well the Nation's top students are doing compared to
top students around the world. The commenter also stated that this
disparity could be rectified if teachers were required to address the
specific needs of this group of students.
Multiple commenters requested that we develop data definitions and
metrics related to the definition of ``content and pedagogical
knowledge,'' and then collect related data on a national level. They
stated that such a national reporting system would facilitate
continuous improvement and quality assurance on a systemic level, while
significantly reducing burden on States and programs.
Other commenters recommended that to directly assess for content
knowledge and pedagogy, the definition of the term include rating
graduates of teacher preparation programs based on a portfolio of the
teaching candidates' work over the course of the academic program.
These commenters stated that reviewing a portfolio reflecting a recent
graduate's pedagogical preparation would be more reliable than rating
an individual based on student learning, which cannot be reliably
measured.
Discussion: The proposed definition of ``content and pedagogical
knowledge'' reflected the specific and detailed suggestions of a
consensus of non-Federal negotiators. We believe that the definition is
sufficiently broad to address, in general terms, the key areas of
content and pedagogical knowledge that aspiring teachers should gain in
their teacher preparation programs.
In this regard, we note that the purpose here is not to offer a
comprehensive definition of the term that all States must use, as the
commenters appear to recommend. Rather, it is to provide a general
roadmap for States to use as they work with stakeholders (see Sec.
612.4(c)) to decide how best to determine whether programs that lack
the accreditation referenced in Sec. 612.5(a)(4)(i) will ensure that
students have the requisite content and pedagogical knowledge they will
need as teachers before they complete the programs.
For this reason, we believe that requiring States to use a more
prescriptive definition or to develop common data definitions and
metrics aligned to that definition, as many commenters urged, would
create unnecessary costs and burdens. Similarly, we do not believe that
collecting this kind of data on a national level through the title II
reporting system is worth the significant cost and burden that it would
entail. Instead, we believe that States, working in consultation with
stakeholders, should determine whether their State systems for
evaluating program performance should include the kinds of additions to
the definition of content and pedagogical knowledge that the commenters
recommend.
We also stress that our definition underscores the need for teacher
preparation programs to train teachers to have the content knowledge
and pedagogical skills needed to address the learning needs of all
students. It specifically refers to the need for a teacher to possess
the distinct skills necessary to meet the needs of English learners and
students with disabilities, both because students in these two groups
face particular challenges and require additional support, and to
emphasize the need for programs to train aspiring teachers to teach to
the learning needs of the most vulnerable students they will have in
their classrooms. While the definition's focus on all students plainly
includes students who are gifted and talented, as well as students in
all other subgroups, we do not believe that, for purposes of this title
II reporting system, the definition of ``content and pedagogical
knowledge'' requires similar special reference to those or other student
groups. However, we emphasize again that States are free to adopt many
of the commenters' recommendations. For example, because the definition
refers to ``effective learning experiences that make the discipline
accessible and meaningful for all students,'' States may consider a
teacher's ability to factor students' cultural, linguistic, and
experiential backgrounds into the design and implementation of
productive learning experiences, just as States may include a specific
focus on the learning needs of students who are gifted and talented.
Finally, through this definition we are not mandating a particular
method for assessing the content and pedagogical knowledge of teachers.
As such, under the definition, States may allow teacher preparation
programs to use a portfolio review to assess teachers' acquisition of
content and pedagogical knowledge.
Changes: None.
Employer Survey
Comments: None.
Discussion: The proposed definition of ``survey outcomes''
specified that a State would be required to survey the employers or
supervisors of new teachers who were in their first year of teaching in
the State where their teacher preparation program is located. To avoid
confusion with regard to teacher preparation programs provided through
distance education, in the final regulations we have removed the phrase
``where their teacher preparation program is located'' from the final
definition of ``employer survey.'' In addition to including a
requirement to survey those in their first year of teaching in the
State and their employers in the ``survey outcomes'' provision that we
have moved to Sec. 612.5(a)(3) of the final regulations, we are
including the same clarification in the definitions of ``employer
survey'' and ``teacher survey''. We also changed the term ``new
teacher'' to ``novice teacher'' for the reasons discussed in this
document under the definition of ``novice teacher.''
Changes: We have revised the definition of ``employer survey'' to
clarify that this survey is of employers or supervisors of novice
teachers who are in their first year of teaching.
Employment Outcomes
Comments: None.
Discussion: Upon review of the proposed regulations, we recognized
that the original structure of the regulations could have generated
confusion. We concluded that having a definition for the term
``employment outcomes'' in Sec. 612.2, when that provision largely
serves to operationalize other definitions in the context of Sec.
612.5, was not the clearest way to present these requirements. We
therefore are moving the explanations and requirements of those terms
into the text of Sec. 612.5(a).
Changes: We have removed the proposed definition of ``employment
outcomes'' from Sec. 612.2, and moved the text and requirements from
the proposed definition to Sec. 612.5(a)(2).
Exceptional Teacher Preparation Program
Comments: Many commenters opposed having the regulations define,
and having States identify in their SRCs, ``exceptional teacher
preparation programs'', stating that section 207(a) of the HEA only
gives the Department
[[Page 75503]]
authority to require reporting of three categories of teacher
preparation programs: Low-performing, at-risk of being low-performing,
and teacher preparation programs that are neither low-performing nor
at-risk. A number of commenters noted that some States have used a
designation of exceptional and found that the rating did not indicate
truly exceptional educational quality. They also stated that teacher
preparation programs have used that rating in their marketing
materials, and that it may mislead the public as to the quality of the
program. In addition, commenters noted that, with respect to the
determination of a high-quality teacher preparation program for TEACH
Grant program eligibility, it makes no practical difference whether a
teacher preparation program is rated as effective or exceptional
because eligible students would be able to receive TEACH Grants whether
the programs in which they enroll are effective, exceptional, or some
other classification above effective.
Discussion: Section 207(a) of the HEA requires that a State
identify programs as low-performing or at-risk of being low-performing,
and report those programs in its SRC. However, section 205(b) of the
HEA authorizes the Secretary to require States to include other
information in their SRCs. Therefore, we proposed that States report
which teacher preparation programs they had identified as exceptional
because we believe the public should know which teacher preparation
programs each State has concluded are working very well. We continue to
urge States to identify for the public those teacher preparation
programs that are indeed exceptional. Nonetheless, based on our
consideration of the concerns raised in the comments, and the costs of
reporting using this fourth performance level, we have decided to
remove this requirement from the final regulations. Doing so has no
impact on TEACH Grants because, as commenters noted, an institution's
eligibility to offer TEACH Grants is affected only where a State has
identified a teacher preparation program as low-performing or at-risk.
Despite this change, we continue to encourage States to adopt and
report on this additional performance level.
Changes: We have removed the proposed definition of ``exceptional
teacher preparation program,'' and revised the proposed definition of
``effective teacher preparation program'' under Sec. 612.2 to mean a
teacher preparation program with a level of performance that is higher
than low-performing or at-risk. We have also revised Sec. 612.4(b)(1)
to remove the requirement that an SRC include ``exceptional'' as a
fourth teacher preparation program performance level.
High-Need School
Comments: Multiple commenters requested that States be allowed to
develop and use their own definitions of ``high-need school'' so that
State systems do not need to be modified to comply with the
regulations. These commenters stated that many States had made great
strides in improving the quality of teacher preparation programs, and
that the definition of ``high-need school'' may detract from the
reforms already in place in those States. In addition, the commenters
noted that States are in the best position to define a high-need school
since they can do so with better knowledge of State-specific context.
Some commenters suggested, alternatively, that the Department
include an additional disaggregation requirement for high-need subject
areas. These commenters stated that targeting high-need subject areas
would have a greater connection to employment outcomes than would high-
need schools and, as such, should be tracked as a separate category
when judging the quality of teacher preparation programs.
A number of commenters requested that the definition of high-need
school include schools with low graduation rates. Other commenters
agreed that this definition should be based on poverty, as defined in
section 200(11) of the HEA, but also recommended that a performance
component should be included. Specifically, these commenters suggested
that high schools in which one-third or more of the students do not
graduate on time be designated as high-need schools. Other commenters
recommended including geography as an indicator of a school's need,
arguing that, in their experience, high schools' urbanicity plays a
significant role in determining student success.
Other commenters expressed concerns with using a quartile-based
ranking of all schools to determine which schools are considered high
need. These commenters stated that such an approach may lead to schools
with very different economic conditions being considered high need. For
example, a school in one district might fall into the highest-need quartile
with only 15 percent of students living in poverty while a school in
another district would need to have 75 percent of students living in
poverty to meet the same designation.
Discussion: Our definition of ``high-need school'' mirrors the
definition of that term in section 200(11)(A) of the HEA and, we
believe, provides sufficient breadth and flexibility for all States to
use it to help determine the performance of their teacher preparation
programs. Under the definition, all schools that are in an LEA's
highest quartile of schools ranked by family need based on measures
that include student eligibility for free and reduced price lunch are
deemed high-need schools. (We focus here on this measure of poverty
because we believe that this is the primary measure on which many LEAs
will collect data.) So, too, are schools with high individual family
poverty rates measured by large numbers or percentages of students who
are eligible for free and reduced price lunches. Hence, for purposes of
title II reporting, not only will all schools with sufficiently high
family poverty rates be considered high-need schools, but, regardless
of any individual school's family poverty level, every LEA in the Nation
with four or more schools will have at least one high-need school. The
definition therefore eliminates a novice teacher's LEA preference as a
factor affecting the placement or retention rate in high-need schools,
and thus permits these measures to work well with this definition of
high-need school. This would not necessarily be true if we permitted
States to adopt their own definitions of this term.
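    To illustrate the mechanics of this quartile-based identification,
the following sketch (in Python) ranks an LEA's schools on a single
family-need measure. It is an illustration only, not part of the
regulations: the function name, the data layout, and the use of free or
reduced price lunch eligibility as the sole need measure are all
assumptions made for the example.

# Illustrative sketch only: identify an LEA's high-need schools by
# ranking them on one family-need measure (the share of students
# eligible for free or reduced price lunch). Names are hypothetical.

def highest_need_quartile(schools):
    """Return the schools in the LEA's highest quartile of family need.

    `schools` is a list of (name, frl_share) tuples, where frl_share
    is the fraction of enrolled students eligible for free or
    reduced price lunch.
    """
    ranked = sorted(schools, key=lambda s: s[1], reverse=True)
    # Floor division: an LEA with four or more schools identifies at
    # least one high-need school through this ranking alone.
    quartile_size = len(ranked) // 4
    return ranked[:quartile_size]

lea_schools = [
    ("North Elementary", 0.15),
    ("South Elementary", 0.42),
    ("East Middle", 0.75),
    ("West High", 0.30),
]
print(highest_need_quartile(lea_schools))  # [('East Middle', 0.75)]

    Schools with sufficiently high absolute family poverty would
qualify as high-need separately under the definition, regardless of
where they fall in this ranking.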
We acknowledge the concern expressed by some commenters that the
definition of ``high-need school'' permits schools in different LEAs
(and indeed, depending on the breakdown of an LEA's schools in the
highest quartile based on poverty, in the same LEA as well) that serve
communities with very different levels of poverty all to be considered
high-need. However, for a reporting system that will use placement and
retention rates in high-need schools as factors bearing on the
performance of each teacher preparation program, States may consider
applying significantly greater weight to employment outcomes for novice
teachers who work in LEAs and schools that serve high-poverty areas
than for novice teachers who work in LEAs and schools that serve low-
poverty areas.
Moreover, while we acknowledge that the definition of ``high-need
school'' in section 200(11)(A) of the HEA does not apply to the
statutory provisions requiring the submission of SRCs and IRCs, we
believe that if we use the term in the title II reporting system it is
reasonable that we should give some deference to the definition used
elsewhere in title II of the HEA. For reasons provided above, we
believe the definition can work well for the
[[Page 75504]]
indicators concerning teacher placement and retention rates in high-
need schools.
Furthermore, we disagree with the comments that the definition of
``high-need school'' should include high-need subject areas. As defined
in the regulations, a ``teacher preparation program'' is a program that
leads to an initial State teacher certification or licensure in a
specific field. Thus, the State's assessment of a teacher preparation
program's performance already focuses on a specific subject area,
including those we believe States would generally consider to be high-
need. In addition, maintaining focus on placement of teachers in
schools where students come from families with high actual or relative
poverty levels, and not on the subject areas they teach in those
schools, will help maintain a focus on the success of students who have
fewer opportunities. We therefore do not see the benefit of further
burdening State reporting by separately carrying into the definition of
``high-need school,'' as commenters recommend, factors that focus on
high-need subjects.
We also disagree that the definition of ``high-need school'' should
include an additional criterion of low graduation rates. While we agree
that addressing the needs of schools with low graduation rates is a
major priority, we believe the definition of ``high-need school''
should focus on the poverty level of the area the school serves. The
measure is easy to calculate and understand, and including this
additional component would complicate the data collection and analysis
process for States. However, we believe there is a sufficiently high
correlation between schools in high-poverty areas, which our definition
would deem high-need, and the schools with low graduation rates on
which the commenters desire to have the definition focus. We believe
this correlation means that a large proportion of schools with low
graduation rates would be included in a definition of high-need schools that
focuses on poverty.
Changes: None.
Comments: None.
Discussion: Under paragraphs (i)(B) and (ii) of the definition of
``high-need school'' in the regulations, the identification of a high-
need school may be based, in part, on the percentage of students
enrolled in the school who are eligible for free or reduced price
school lunch under the Richard B. Russell National School Lunch Act.
With the passage of the Healthy, Hunger-Free Kids Act of 2010, the
National School Lunch Program (NSLP) now includes a new universal meal
option, the ``Community Eligibility Provision'' (CEP or Community
Eligibility). CEP reduces burden at the household and local level by
eliminating the need to obtain eligibility data from families through
individual household applications, and permits schools, if they meet
certain criteria, to provide meal service to all students at no charge
to the students or their families. To be eligible to participate in
Community Eligibility, schools must: (1) Have at least 40 percent of
their students qualify for free meals through ``direct certification''
\8\ in the year prior to implementing Community Eligibility; (2) agree
to serve free breakfasts and lunches to all students; and, (3) agree to
cover, with non-Federal funds, any costs of providing free meals to
students above the amounts provided by Federal assistance.
---------------------------------------------------------------------------
\8\ ``Direct certification'' is a process by which schools
identify students as eligible for free meals using data from, among
other sources, the Supplemental Nutrition Assistance Program (SNAP)
or the Temporary Assistance for Needy Families (TANF) program.
---------------------------------------------------------------------------
CEP schools are not permitted to use household applications to
determine a reimbursement percentage from the USDA. Rather, the USDA
determines meal reimbursement for CEP schools based on ``claiming
percentages,'' calculated by multiplying the percentage of students
identified through the direct certification data by a multiplier
established in the Healthy, Hunger-Free Kids Act of 2010 and set in
regulation at 1.6. The 1.6 multiplier provides an estimate of the
number of students that would be eligible for free and reduced-price
meals in CEP schools if the schools determined eligibility through
traditional means, using both direct certification and household
applications. If a State uses NSLP data from CEP schools when
determining whether schools are high-need schools, it should not use
the number of children actually receiving free meals in CEP schools to
determine the percentage of students from low-income families because,
in those schools, some children receiving free meals live in households
that do not meet a definition of low-income. Therefore, States that
wish to use NSLP data for purposes of determining the percentage of
children from low-income families in schools that are participating in
Community Eligibility should use the number of children for whom the
LEA is receiving reimbursement from the USDA (direct certification
total with the 1.6 multiplier), not to exceed 100 percent of children
enrolled. For example, consider a school that participates in
Community Eligibility with an enrollment of 1,000 children. The school
identifies 600 children through direct certification data as eligible
for the NSLP. Multiplying 600 by 1.6 yields 960, so the LEA would
receive reimbursement through the NSLP for meals for 960 children, or
96 percent of students enrolled. In a ranking of schools
in the LEA on the basis of the percentage of students from low-income
families, even though 100 percent of students are receiving free meals
through NSLP, the school would be ranked on the basis of 96 percent of
students from low-income families. The use of claiming percentages for
identifying CEP schools as high-need schools, rather than the number of
students actually receiving free lunch through the NSLP, ensures
comparability, regardless of an individual school's decision regarding
participation in the program.
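    The claiming-percentage arithmetic described in the preceding
example can be expressed compactly. The following sketch (in Python)
reproduces that worked example; the function and variable names are
illustrative assumptions, not terms drawn from the NSLP regulations.

# Illustrative sketch of the CEP claiming-percentage computation
# described above: directly certified students times the 1.6
# multiplier, not to exceed 100 percent of enrollment.

CEP_MULTIPLIER = 1.6  # multiplier established for CEP reimbursement

def cep_low_income_share(direct_cert_count, enrollment):
    """Fraction of enrolled students counted as low-income at a CEP school."""
    reimbursed = min(direct_cert_count * CEP_MULTIPLIER, enrollment)
    return reimbursed / enrollment

# Worked example from the text: 1,000 enrolled, 600 directly certified.
print(cep_low_income_share(600, 1000))  # 0.96, i.e., 96 percent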
Changes: None.
Novice Teacher
Comments: Many commenters expressed concerns about the proposed
definition of ``new teacher.'' These commenters noted that the
definition distinguishes between traditional teacher preparation
programs and alternative route teacher preparation programs. The
commenters argued that, because alternative route teacher preparation
programs place their participants as teachers while they are still
enrolled, these participants will have already established teacher
retention rates by the time they complete their programs. Traditional
program participants, on the other hand, are only placed as teachers
after earning their credential, leaving their programs at a comparative
disadvantage under the indicators of academic content knowledge and
teaching skills. Many of these commenters contended that, as a result,
comparisons between traditional teacher preparation programs and
alternative route teacher preparation programs will be invalid. Others
recommended that the word ``licensure'' be changed to ``professional
licensure'' to alleviate the need for States to compare traditional
teacher preparation programs and alternative route teacher preparation
programs.
A number of commenters claimed that the proposed definition
confused the attainment of certification or licensure with graduation
from a program, which is often a precursor to certification or
licensure. They stated that the proposed definition was not clear
regarding how States would report on recent program completers who are
entering the classroom. Others noted that some
[[Page 75505]]
States allow individuals to be employed as full-time teachers for up to
five years before obtaining licensure. They contended that reporting
all of these categories together would provide misleading statistics on
teacher preparation programs.
Other commenters specifically requested that the definition include
pre-kindergarten teachers (if a State requires postsecondary education
and training for pre-kindergarten teachers), and that pre-kindergarten
teachers be reflected in teacher preparation program assessment.
A number of commenters also recommended that the word ``recent'' be
removed from the definition of ``new teacher'' so that individuals who
take time off between completing their teaching degree and obtaining a
job in a classroom are still considered to be new teachers. They argued
that individuals who take time off to raise a family or who do not
immediately find a full-time teaching position should still be
considered new teachers if they have not already had full-time teaching
experience. Other commenters stated that the term ``new teacher'' may
result in confusion based on State decisions about when an individual
may begin teaching. For example, the commenters stated that in Colorado
teachers may obtain an alternative license and begin teaching before
completing a formal licensure program. As such, new teachers may have
been teaching for up to three years at the point that the proposed
definition would consider them to be a ``new teacher,'' and the
proposed definition therefore may cause confusion among data entry
staff about which individuals should be reported as new teachers. They
recommended that we replace the term ``new teacher'' with the term
``employed completer'' because the latter more clearly reflects that an
individual would need to complete his or her program and have found
employment to be included in the reporting requirements.
Discussion: The intent of the proposed definition of ``new
teacher'' was to capture those individuals who have newly entered the
classroom and become responsible for student outcomes. Upon review of
the public comments, we agree that the proposed definition of ``new
teacher'' is unclear and needs revision.
We understand that many alternative route teacher preparation
programs place their participants as teachers while they are enrolled
in their programs, and many traditional preparation program
participants are only placed after earning their credential.
Furthermore, we agree that direct comparisons between alternative route
and traditional teacher preparation programs could be misleading if
done without a more complete understanding of the inherent differences
between the two types of programs. For example, a recent completer of
an alternative route program may actually have several more years of
teaching experience than a recent graduate of a traditional teacher
preparation program, so apparent differences in their performance may
be based more on the specific teacher's experience than the quality of
the preparation program.
In addition, we agree with commenters that the preparation of
preschool teachers is a critical part of improving early childhood
education, and inclusion of these staff in the assessment of teacher
preparation program quality could provide valuable insights. We
strongly encourage States that require preschool teachers to obtain
either the same level of licensure as elementary school teachers, or a
level of licensure focused on preschool or early childhood education,
to include preschool teachers who teach in public schools in their
assessment of the quality of their teacher preparation programs.
However, we also recognize that preschool licensure and teacher
evaluation requirements vary among States and among settings, and
therefore believe that it is important to leave the determination of
whether and how to include preschool teachers in this measure to the
States. We hope that States will base their determination on what is
most supportive of high-quality early childhood education in their
State.
We also agree with commenters that the proposed term ``new
teacher'' may result in confusion based on State decisions about when
individuals in an alternative route program have the certification they
need to begin teaching, and that, in some cases, these individuals may
have taught for up to three years before the proposed definition would
consider them to be new teachers. We believe, however, that the term
``employed completer'' could be problematic for alternative route
programs because, while their participants are employed, they may not
have yet completed their program.
Likewise, we agree with commenters who expressed concern that our
proposed definition of ``new teacher'' confuses the attainment of
certification or licensure with graduation from a program leading to
recommendation for certification or licensure.
For all of these reasons, we are removing the term and definition
of ``new teacher'' and replacing it with the term ``novice teacher,''
which we are defining as ``a teacher of record in the first three years
of teaching who teaches elementary or secondary public school students,
which may include, at a State's discretion, preschool students.'' We
believe this new term and definition more clearly distinguish between
individuals who have met all the requirements of a teacher preparation
program (recent graduates), and those who have been assigned the lead
responsibility for a student's learning (i.e., a teacher of record as
defined in this document) but who may or may not have completed their
teacher preparation program. In doing so, we also have adopted language
that captures as novice teachers those individuals who are responsible
for student outcomes, because these are the teachers on whom a
program's student learning outcomes should focus. We chose a period of
three years because we believe this is a reasonable timeframe in which
one could consider a teacher to be a novice, and because it is the
length of time for which retention rate data will be collected. In this
regard, the definition of novice teacher continues to include three
cohorts of teachers, but treats the first year of teaching as the first
year as a teacher of record regardless of whether the teacher has
completed a preparation program (as is the case for most traditional
programs) or is still in the process of completing it (as is the case
for alternative route programs).
Finally, we agree with commenters that we should remove the word
``recent'' from the definition, and have made this change. As
commenters suggest, making this change will ensure that individuals who
take time off between completing their teacher preparation program and
obtaining a job in a classroom, or who do not immediately find a full-
time teaching position, are still included in the definition of
``novice teacher.'' Therefore, our definition of ``novice teacher''
does not include the word ``recent''; the term instead clarifies that a
novice teacher is an individual who is responsible for student
outcomes, while still allowing individuals who are recent graduates to
be categorized as novice teachers for three years in order to account
for delays in placement.
Changes: We have removed the term ``new teacher'' and replaced it
with the term ``novice teacher,'' which we define as ``a teacher of
record in the first three years of teaching who teaches elementary or
secondary public school students, which may include, at a State's
discretion, preschool students.'' See the discussion below regarding
the definition of ``teacher of record.''
[[Page 75506]]
Quality Clinical Preparation
Comments: Commenters provided a number of specific suggestions for
revising the proposed definition of ``quality clinical preparation.''
Commenters suggested that the definition include a requirement that
mentor teachers be ``effective.'' While our proposed definition did not
use the term ``mentor teacher,'' we interpret the comments as
pertaining to the language of paragraph (1) of the proposed
definition--the requirement that those LEA-based personnel who provide
training be qualified clinical instructors. Commenters also suggested
that we eliminate the phrase ``at least in part'' when referring to the
training to be provided by qualified clinical instructors, and that we
require the clinical practice to include experience with high-need and
high-ability students, as well as the use of data analysis and
development of classroom management skills.
Other commenters suggested that the definition require multiple
clinical or field experiences, or both, with effective mentor teachers
who (1) address the needs of diverse, rural, or underrepresented
student populations in elementary and secondary schools, including
English learners, students with disabilities, high-need students, and
high-ability students, and (2) assess the clinical experiences using a
performance-based protocol to demonstrate teacher candidates' mastery
of content and pedagogy.
Some commenters suggested that the definition require that teacher
candidates use specific research-based practices in addition to those
currently listed in the definition, including data analysis,
differentiation, and classroom management. The commenters recommended
that all instructors be qualified clinical instructors, and that they
ensure that clinical experiences include working with high-need and
high-ability students because doing so will provide a more robust and
realistic clinical experience.
Commenters further suggested that ``quality clinical preparation''
use a program model similar to that utilized by many alternative route
programs. This model would include significant in-service training and
support as a fundamental and required component, alongside an
accelerated pre-service training program. Another commenter suggested
the inclusion of residency programs in the definition.
Commenters also suggested that the Department adopt, for the title
II reporting system, the definitions of the terms ``clinical
experience'' and ``clinical practice'' used by CAEP so that the
regulatory definitions describe a collaborative relationship between a
teacher preparation program and a school district. Commenters explained
that CAEP defines ``clinical experiences'' as guided, hands-on,
practical applications and demonstrations of professional knowledge of
theory to practice, skills, and dispositions through collaborative and
facilitated learning in field-based assignments, tasks, activities, and
assessments across a variety of settings. Commenters further explained
that CAEP defines ``clinical practice'' as student teaching or
internship opportunities that provide candidates with an intensive and
extensive culminating field-based set of responsibilities, assignments,
tasks, activities, and assessments that demonstrate candidates'
progressive development of the professional knowledge, skills, and
dispositions to be effective educators. Another commenter recommended
that we develop common definitions of data and metrics on quality
clinical preparation.
Discussion: We agree with the commenters that it is important to
ensure that mentor teachers and qualified clinical instructors are
effective. Effective instructors play an important role in ensuring
that students in teacher preparation programs receive the clinical
training they need to become effective educators. However,
we believe that defining the term ``quality clinical preparation'' to
provide that all clinical instructors, whether LEA-based or not, meet
specific established qualification requirements and use a training
standard that is publicly available (as required by paragraph (1) of
our definition) reasonably ensures that students are receiving clinical
training from effective instructors.
We agree with the recommendation to remove the phrase ``at least in
part'' from the definition, so that all training must be provided by
qualified clinical instructors.
We decline to revise the definition to provide that quality
clinical preparation specifically include work with high-need or high-
ability students, using data analysis and differentiation, and
developing classroom management skills. We agree that these are
important elements in developing highly effective educators and could
be an important part of clinical preparation. However, the purpose of
this definition is to highlight general characteristics of quality
clinical instruction that must be reflected in how a State assesses
teacher preparation program performance, rather than provide a
comprehensive list of elements of quality clinical preparation. We
believe that including the additional elements suggested by the
commenters would result in an overly prescriptive definition. We note,
however, that States are free to supplement this definition with
additional criteria for assessing teacher preparation program
performance.
We also decline to revise the definition to provide that quality
clinical preparation be assessed using a performance-based protocol as
a means of demonstrating student mastery of content and pedagogy. While
this is a strong approach that States may choose to take, we are not
revising the definition to prescribe this particular method because we
believe it may in some cases be overly burdensome.
We decline commenters' recommendation to include significant in-
service training and support as a fundamental and required component,
alongside an accelerated pre-service training program. Similarly, we
reject the suggestion to include residency programs in the definition.
Here again, we feel that both of these additional qualifications would
result in a definition that is too prescriptive. Moreover, as noted
above, this definition is meant to highlight general characteristics of
quality clinical instruction that must be reflected in how a State
assesses teacher preparation program performance, rather than to
provide a comprehensive list of elements of quality clinical
preparation.
Furthermore, while we understand why commenters recommended that we
use CAEP's definitions, we do not want to issue an overly prescriptive
definition of what is and is not quality clinical preparation, nor do
we want to endorse any particular organization's approach. Rather, we
are defining a basic indicator of teacher preparation program
performance for programs that do not meet the program accreditation
provision in Sec. 612.5(a)(4)(i). However, States are free to build
the CAEP definitions into their own criteria for assessing teacher
preparation program performance; furthermore, programs may implement
CAEP criteria.
We encourage States and teacher preparation programs to adopt
research-based practices of effective teacher preparation for all
aspects of their program accountability systems. Indeed, we believe the
accountability systems that States establish will help programs and
States to gather more evidence about what aspects of clinical training
and other parts of preparation programs lead to the most successful
teachers. However, we decline to develop more
[[Page 75507]]
precise regulatory definitions of data and metrics on quality clinical
preparation because we feel that these should be determined by the
State in collaboration with IHEs, LEAs, and other stakeholders (see
Sec. 612.4(c)).
Changes: We have revised the definition of ``quality clinical
preparation'' by removing the phrase ``at least in part'' to ensure
that all training is provided by qualified clinical instructors.
Recent Graduate
Comments: Multiple commenters recommended replacing the term
``recent graduate'' with the term ``program completer'' to include
candidates who have met all program requirements, regardless of
enrollment in a traditional teacher preparation program or an
alternative route teacher preparation program. In addition, they
recommended that States be able to determine the criteria that a
candidate must satisfy in order to be considered a program completer.
Other commenters recommended changing the definition of ``recent
graduate'' to limit it to those graduates of teacher preparation
programs who are currently credentialed and practicing teachers. The
commenters stated that this would avoid having programs with completers
who become gainfully employed in a non-education field or enroll in
graduate school being penalized when the State determines the program's
performance.
Discussion: We intended the term ``recent graduate'' to capture
those individuals who have met all the requirements of the teacher
preparation program within the last three title II reporting years. We
recognize that a number of alternative route programs do not use the
term ``graduate'' to refer to individuals who have met those
requirements. However, using the term ``recent graduate'' to encompass
both individuals who complete traditional teacher preparation programs
and those who complete alternative route programs is simpler than
creating a separate term for alternative route participants. Thus, we
continue to believe that the term ``recent graduate,'' as defined,
appropriately captures the relevant population for purposes of the
regulations.
Furthermore, we decline to amend the definition to include only
those individuals who are currently credentialed and practicing
teachers. Doing so would create confusion between this term and
``novice teacher'' (defined elsewhere in this document). The term
``novice teacher'' is designed to capture individuals who are in their
first three years of teaching, whereas the definition of ``recent
graduate'' is designed to capture individuals who have completed a
program, regardless of whether they are teaching. In order to maintain
this distinction, we have retained the prohibitions that currently
exist in the definitions in the title II reporting system against using
recommendation to the State for licensure or becoming a teacher of
record as a condition of being identified as a recent graduate.
We are, however, making slight modifications to the proposed
definition. Specifically, we are removing the reference to being hired
as a full-time teacher and instead using the phrase ``becoming a
teacher of record.'' We do not believe this substantially changes the
meaning of ``recent graduate,'' but it does clarify which newly hired,
full-time teachers are to be captured under the definition.
We decline to provide States with additional flexibility in
establishing other criteria for making a candidate a program completer
because we believe that the revised definition of the term ``recent
graduate'' provides States with sufficient flexibility. We believe that
the additional flexibility suggested by the commenters would result in
definitions that stray from the intent of the regulations.
Some commenters expressed concern that programs would be penalized
if some individuals who have completed them go on to become gainfully
employed in a non-education field or enroll in graduate school. We feel
that it is important for the public and prospective students to know
the degree to which participants in a teacher preparation program do
not become teachers, regardless of whether they become gainfully
employed in a non-education field. However, we think it is reasonable
to allow States flexibility to exclude certain individuals when
determining the teacher placement and retention rates (i.e., those
recent graduates who have taken teaching positions in another State, or
who have enrolled in graduate school or entered military service). For
these reasons, we have not adopted the commenters' recommendation to
limit the definition of ``recent graduate'' to those graduates of
teacher preparation programs who are currently credentialed and
practicing teachers.
Changes: We have revised the definition of ``recent graduate'' to
clarify that a teacher preparation program may not use the criterion
``becoming a teacher of record'' when it determines if an individual
has met all of the program requirements.
Rigorous Teacher Candidate Exit Qualifications
Comments: One commenter recommended that we remove the reference to
entry requirements from the proposed definition of ``rigorous teacher
entry and exit requirements'' because using rigorous entry requirements
to assess teacher preparation program performance could compromise the
mission of minority-serving institutions, which often welcome
disadvantaged students and develop them into profession-ready teachers.
Commenters said that those institutions and others seek, in part, to
identify potential teacher candidates whose backgrounds are similar to
students they may ultimately teach but who, while not meeting purely
grade- or test-based entry requirements, could become well-qualified
teachers through an effective preparation program.
Commenters recommended adding a number of specific items to the
definition of exit qualifications, such as classroom management,
differentiated instructional planning, and an assessment of student
growth over time.
Another commenter suggested amending the definition to include
culturally competent teaching, which the commenter defined as the
ability of educators to teach students intellectual, social, emotional,
and political knowledge by utilizing their diverse cultural knowledge,
prior experiences, linguistic needs, and performance styles. This
commenter stated that culturally competent teaching is an essential
pedagogical skill that teachers must possess. The commenter also
recommended that we include as separate terms and define ``culturally
competent education'' and ``culturally competent leadership''. Finally,
this commenter requested that we develop guidance on culturally and
linguistically appropriate approaches in education.
Discussion: Although overall research findings regarding the effect
of teacher preparation program selectivity on student outcomes are
generally mixed, some research indicates there is a correlation between
admission requirements for teacher preparation programs and the
teaching effectiveness of program graduates.\9\ In addition, under our
proposed definition, States and programs could define ``rigorous entry
requirements'' in many and varied ways, including through evidence of
other skills and characteristics
[[Page 75508]]
determined by programs to correlate with graduates' teaching
effectiveness, such as grit, disposition, or performance-based
assessments relevant to teaching. Nonetheless, we understand that
prospective teachers who themselves come from high-need schools--and
who may therefore bring a strong understanding of the backgrounds of
students they may eventually teach--could be disproportionately
affected by grade-based or test-based entry requirements. Additionally,
because the primary emphasis of the regulations is to ensure that
candidates graduate from teacher preparation programs ready to teach,
we agree that measures of program effectiveness should emphasize
rigorous exit requirements over program entry requirements. Therefore,
we are revising the regulations to require only rigorous exit
standards.
---------------------------------------------------------------------------
\9\ See, for example: Henry, G., & Bastian, K. (2015). Measuring
Up: The National Council on Teacher Quality's Ratings of Teacher
Preparation Programs and Measures of Teacher Performance.
---------------------------------------------------------------------------
In our definition of rigorous exit requirements, we identified four
basic characteristics that we believe all teacher candidates should
possess. Regarding the specific components of rigorous exit
requirements that commenters suggested (such as standards-based and
differentiated planning, classroom management, and cultural
competency), the definition does not preclude States from including
those kinds of elements as rigorous exit requirements. We acknowledge
that these additional characteristics, including cultural competency,
may also be important, but we believe that the inclusion of these
additional characteristics should be left to the discretion of States,
in consultation with their stakeholders. To the extent that they choose
to include them, States would need to develop definitions for each
additional element. We also encourage interested parties to bring these
suggestions forward to their States in the stakeholder engagement
process required of all States in the design of their performance
rating systems (see Sec. 612.4(c)). Given that we are not adding
cultural competency into the definition of rigorous candidate exit
requirements, we are not adding the recommended related definitions or
developing guidance on this topic at this time.
In addition, as we reviewed comments, we realized both that the
phrase ``at a minimum'' was misplaced in the sentence and should refer
not to the use of an assessment but to the use of validated standards
and measures of the candidate's effectiveness, and that the second use
of ``measures of'' in the phrase ``measures of candidate effectiveness
including measures of curriculum planning'' was redundant.
Changes: We have revised the term ``rigorous teacher candidate
entry and exit qualifications'' by removing entry qualifications. We
have also revised the language in Sec. 612.5(a)(4)(ii)(C) accordingly.
In addition, we have moved the phrase ``at a minimum'' from preceding
``assessment of candidate performance'' to preceding ``on validated
professional teaching standards.'' Finally, we have revised the phrase
``measures of candidate effectiveness including measures of curriculum
planning'' to read ``measures of candidate effectiveness in curriculum
planning.''
Student Achievement in Non-Tested Grades and Subjects
Comments: Multiple commenters opposed the definition of the term
``student achievement in non-tested grades and subjects,'' and provided
different recommendations on how the definition should be revised. Some
commenters recommended removing the definition from the regulations
altogether, noting that, for some subjects (such as music, art,
theater, and physical education), there simply are not effective or
valid ways to judge the growth of student achievement by test scores.
Others recommended that student achievement in non-tested grades and
subjects be aligned to State and local standards. These commenters
asserted that alignment with State and local standards will ensure
rigor and consistency for non-tested grades and subjects. A number of
commenters also recommended that teachers who teach in non-tested
subjects should be able to use scores from an already administered test
to count toward their effectiveness rating, a policy that some States
have already implemented to address student achievement in non-tested
subjects.
Discussion: We have adopted the recommendation to remove the
definition of ``student achievement in non-tested grades and
subjects,'' and have moved the substance of this definition to the
definition of ``student growth.'' Upon review of comments regarding
this definition, as well as comments pertaining to student learning
outcomes more generally, we have also altered the requirements in Sec.
612.5(a)(1)(ii) for the calculation of student learning outcomes--
specifically by permitting a State to use another State-determined
measure relevant to calculating student learning outcomes instead of
only student growth or a teacher evaluation measure. We believe that
the increased flexibility resulting from these changes sufficiently
addresses commenter concerns regarding the definition of ``student
achievement in non-tested grades and subjects.'' We also believe it is
important that the regulations permit States to determine an effective
and valid way to measure growth for students in all grades and subjects
not covered by section 1111(b)(2) of the ESEA, as amended by the ESSA,
and that the revisions we have made provide sufficient flexibility for
States to do so.
Under the revised definition of student growth, States must use
measures of student learning and performance, such as students' results
on pre-tests and end-of-course-tests, objective performance-based
assessments, student learning objectives, student performance on
English language proficiency assessments, and other measures of student
achievement that are rigorous, comparable across schools, and
consistent with State requirements. Further, as a number of commenters
recommended that the definition of student achievement in non-tested
grades and subjects include alignment to State and local standards, we
feel that this new definition of student growth, in conjunction with
altered requirements in the calculation of student learning outcomes,
is sufficiently flexible to allow such alignment. Further, a State
could adopt the commenters' recommendations summarized above under the
revised requirements for the calculation of student learning outcomes
and the revised definition of ``student growth.''
We note that the quality of individual teachers is not being
measured by the student learning outcomes indicator. Rather, it will
help measure overall performance of a teacher preparation program
through an examination of student growth in the many grades and
subjects taught by novice teachers that are not part of the State's
assessment system under section 1111(b) of the ESEA, as amended by the
ESSA.
Changes: The definition of student achievement in non-tested grades
and subjects has been removed. The substance of the definition has been
moved to the definition of student growth.
Student Achievement in Tested Grades and Subjects
Comments: A number of commenters opposed the definition of
``student achievement in tested grades and subjects'' because of its
link to ESEA standardized test scores and the definitions used in ESEA
flexibility. Commenters found this objectionable because these sources
are subject to change, which could present complications in future
implementation
[[Page 75509]]
of the regulations. Further, the commenters asserted that standardized
testing and value-added models (VAM) \10\ are not valid or reliable and
should not be used to assess teacher preparation programs.
---------------------------------------------------------------------------
\10\ In various comments, commenters used the phrases ``value-
added modeling,'' ``value-added metrics,'' ``value-added measures,''
``value-added methods,'' ``value-added estimation,'' and ``value-
added analysis.'' For purposes of these comments, we understand the
use of these terms to reflect similar ideas and concepts, so for
ease of presentation of our summary of the comments and our
responses to them, we use the single phrase ``value-added models,''
abbreviated as VAM.
---------------------------------------------------------------------------
Discussion: We have adopted the recommendation to remove the
definition of ``student achievement in tested grades and subjects.''
While we have moved the substance of this definition to the definition
of ``student growth,'' we have also altered the requirements for the
calculation of student learning outcomes upon review of comments
related to this definition and comments pertaining to student learning
outcomes more generally. We believe that the increased flexibility
resulting from these changes sufficiently addresses commenter concerns
regarding the definition of ``student achievement in tested grades and
subjects.'' We believe it is important that the regulations permit
States to determine an effective and valid way to measure growth for
students in grades and subjects covered by section 1111(b)(2) of the
ESEA, as amended by ESSA, and that the revisions we have made provide
sufficient flexibility for States to do so.
While the revised requirement does not necessitate the use of ESEA
standardized test scores, we believe that the use of such scores could
be a valid and reliable measure of student growth and encourage its use
in determining student learning outcomes where appropriate.\11\
---------------------------------------------------------------------------
\11\ See, for example: Chetty, R., Friedman, J., & Rockoff, J.
(2014). Measuring the Impacts of Teachers II: Teacher Value-Added
and Student Outcomes in Adulthood. American Economic Review, 104(9),
2633-2679 (hereafter referred to as ``Chetty et al.'')
---------------------------------------------------------------------------
We now turn to the comments from those who asserted that
maintaining a link between this definition and conditions of waivers
granted to States under ESEA flexibility is problematic. While we
maintain the substance of this definition in the definition of
``student growth,'' in view of section 4(c) of ESSA, which terminates
waivers the Department granted under ESEA flexibility as of August 1,
2016, we have revised the requirements for calculation of student
learning outcomes in Sec. 612.5(a)(1)(ii) to allow States the
flexibility to use ``another State-determined measure relevant to
calculating student learning outcomes.'' We believe that doing so
allows the flexibility recommended by commenters. In addition, as we
have stressed above in the discussion of Federal-State-Institution
Relationship, Generally, under the regulations States have flexibility
in how to weight each of the indicators of academic content knowledge
and teaching skills.
    Finally, the use of value-added measures is not specifically
included in the revised requirements for the
calculation of student learning outcomes, or otherwise required by the
regulations. However, we believe that there is convincing evidence that
value-added scores, based on standardized tests, can be valid and
reliable measures of teacher effectiveness and a teacher's effect on
long-term student outcomes.\12\ See our response to comments regarding
Sec. 612.5(a)(1), which provides an in-depth discussion of the use of
student growth and VAM, and why we firmly believe that our student
learning outcome measure, which references ``student achievement in
tested grades and subjects,'' is valid and reliable.
---------------------------------------------------------------------------
\12\ See, for example: Chetty, et al. at 2633-2679.
---------------------------------------------------------------------------
Changes: The definition of student achievement in tested grades and
subjects has been removed. The substance of the definition has been
moved to the definition of student growth.
Student Growth
Comments: Multiple commenters opposed the proposed definition of
``student growth'' because the definition, which was linked to ESEA
standardized test scores and definitions of terms used for Race to the
Top, would also be linked to VAM, which commenters stated are not valid
or reliable. Additionally, other commenters disagreed with the
suggestion that student growth may be defined as a simple comparison of
achievement between two points in time, which they said downplays the
potential challenges of incorporating such measures into evaluation
systems.
A number of commenters also stated that the definition of ``student
growth'' has created new testing requirements in areas that were
previously not tested. They urged that non-tested grades and subjects
should not be a part of the definition of student growth. By including
them in this definition, the commenters argued, States and school
districts would be required to test students in currently non-tested
areas, which they contended should remain non-tested. Several
commenters also stated that, even as the value of yearly student
testing is being questioned, the regulations would effectively add cost
and burden to States that have not sought ESEA flexibility or received
Race to the Top funds.
Discussion: These regulations define student growth as the change
in student achievement between two or more points in time, using a
student's score on the State's assessments under section 1111(b)(2) of
the ESEA, as amended by ESSA, or other measures of student learning and
performance, such as student results on pre-tests and end-of-course
tests; objective performance-based assessments; student learning
objectives; student performance on English language proficiency
assessments; and other measures that are rigorous, comparable across
schools, and consistent with State guidelines.
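    For illustration only, the core computation in this definition--the
change in a student's achievement between two or more points in
time--can be sketched in a few lines of Python. The function name,
score scale, and data layout below are hypothetical and are not part of
the regulatory text:

    # Illustrative sketch only; field names and scales are hypothetical.
    def student_growth(scores):
        # scores: list of (point_in_time, score) pairs, at least two.
        ordered = sorted(scores)               # order by point in time
        return ordered[-1][1] - ordered[0][1]  # last score minus first

    # Example: a pre-test score of 62 and an end-of-course score of 78
    # represent a growth of 16 points.
    print(student_growth([(1, 62), (2, 78)]))  # prints 16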
Due to the removal of separate definitions of student achievement
in tested grades and subjects and student achievement in non-tested
grades and subjects, and their replacement by one flexible definition
of student growth, we believe we have addressed many concerns raised by
commenters. This definition, for example, no longer requires States to
use ESEA standardized test scores to measure student growth in any
grade or subject, and does not require the use of definitions of terms
used for Race to the Top.
We recognize commenters' assertion that student growth defined as a
comparison of achievement between two points in time downplays the
potential challenges of incorporating such measures into evaluation
systems. However, since the revised definition of student growth and
the revised requirements for calculating student learning outcomes
allow States a large degree of flexibility in how such measures are
applied, we do not believe the revised definition will place a
significant burden on States to implement and incorporate these
concepts into their teacher preparation assessment systems.
We have addressed commenters' recommendation that non-tested grades
and subjects not be a part of the definition of student growth by
removing the definition of student achievement in non-tested grades and
subjects, and providing States with flexibility in how they apply the
definition of student growth, should they choose to use it for
measuring a program's student learning outcomes. However, we continue
to believe that student growth in non-tested grades and subjects can
and should be measured at
[[Page 75510]]
regular intervals. Further, the revisions to the definition address
commenters' concerns that the regulations would effectively add cost
and burden to States that have not sought ESEA flexibility or received
Race to the Top funds.
    Consistent with the definition, in conjunction with the altered
requirements for the calculation of student learning outcomes and the
removal of the separate definitions of student achievement in tested
and non-tested grades and subjects, States have significant flexibility to determine
the methods they use for measuring student growth and the extent to
which it is factored into a teacher preparation program's performance
rating. The Department's revised definition of ``student growth'' is
meant to provide States with more flexibility in response to
commenters. Additionally, if a State chooses to use a method that
controls for additional factors affecting student and teacher
performance, like VAM, the regulations permit it to do so. See our
response to comments in Sec. 612.5(a)(1), which provides an in-depth
discussion of the use of student growth and VAM.
Changes: The definition of student growth has been revised to be
the change in student achievement between two or more points in time,
using a student's scores on the State's assessments under section
1111(b)(2) of the ESEA or other measures of student learning and
performance, such as student results on pre-tests and end-of-course
tests; objective performance-based assessments; student learning
objectives; student performance on English language proficiency
assessments; and other measures that are rigorous, comparable across
schools, and consistent with State guidelines, rather than the change
between two or more points in time in student achievement in tested
grades and subjects and non-tested grades and subjects.
Student Learning Outcomes
Comments: None.
Discussion: Due to many commenters' concerns regarding State
flexibility, the use of ESEA standardized test scores, and the
relationships between our original proposed requirements and those
under ESEA flexibility, we have included a provision in Sec.
612.5(a)(1)(ii)(C) allowing States to use a State-determined measure
relevant to calculating student learning outcomes. This measure may be
used alone, or in combination with student growth and a teacher
evaluation measure, as defined. As with the measure for student growth,
State-determined learning outcomes must be rigorous, comparable across
schools, and consistent with State guidelines. Additionally, such
measures should allow for meaningful differentiation between teachers.
If a State did not select an indicator that allowed for such meaningful
differentiation among teachers, and instead chose an indicator that led
to consistently high results among teachers without reflecting existing
inconsistencies in student learning outcomes--such as average daily
attendance in schools, which is often uniformly quite high even in the
lowest performing schools--the result would be very problematic. This
is because doing so would not allow the State to meaningfully
differentiate among teachers for the purposes of identifying which
teachers, and thus which teacher preparation programs, are making a
positive contribution to improving student learning outcomes.
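    By way of illustration only (the indicator values below are
invented), the differentiation concern can be made concrete: an
indicator that is nearly uniform across teachers, such as average daily
attendance, carries almost no information for distinguishing programs,
while a dispersed indicator does.

    # Hypothetical values for ten teachers; illustrative only.
    import statistics

    attendance = [0.96, 0.97, 0.95, 0.96, 0.97,
                  0.96, 0.95, 0.97, 0.96, 0.96]
    growth = [0.20, 0.80, 0.50, 0.90, 0.10,
              0.60, 0.40, 0.70, 0.30, 0.55]

    # A near-zero standard deviation signals an indicator that cannot
    # meaningfully differentiate among teachers.
    print(round(statistics.stdev(attendance), 3))  # 0.007 -- uniform
    print(round(statistics.stdev(growth), 3))      # 0.259 -- differentiates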
Further, upon review of the proposed regulations, we recognized
that the structure could be confusing. In particular, we were concerned
that having a definition for the term ``student learning outcomes'' in
Sec. 612.2, when it largely serves to operationalize other definitions
in the context of Sec. 612.5, was not the clearest way to present
these requirements. We therefore are moving the explanations and
requirements of this term into the text of Sec. 612.5(a).
Changes: We have altered the requirements in Sec. 612.5(a)(1)(ii)
for calculating ``student learning outcomes'' to provide States with
additional flexibility. We have also removed the proposed definition of
``student learning outcomes'' from Sec. 612.2, and moved the substance
of the text and requirements of the student learning outcomes
definition to Sec. 612.5(a)(1).
Survey Outcomes
Comments: Commenters argued that States need flexibility on the
types of indicators used to evaluate and improve teacher preparation
programs. They suggested that States be required to gather data through
teacher and employer surveys in a teacher's first three years of
teaching, but be afforded the flexibility to determine the content of
the surveys. Commenters added that specific content dictated from the
Federal level would limit innovation in an area where best practices
are still developing.
Some commenters also stated that it is important to follow
graduates through surveys for their first five years of employment,
rather than just their first year of teaching (as proposed in the
regulations) to obtain a rich and well-informed understanding of the
profession over time, as the first five years is a significant period
when teachers decide whether to leave or stay in the profession.
Commenters were concerned about the inclusion of probationary
certificate teachers in surveys of teachers and employers for purposes
of reporting teacher preparation program performance. Commenters noted
that, in Texas, alternate route participants may be issued a
probationary certificate that allows the participants to be employed as
teachers of record for a period of up to three years while they are
completing the requirements for a standard certificate. As a result,
these probationary certificate holders would meet the proposed
definition of ``new teacher'' and, therefore, they and their
supervisors would be asked to respond to surveys that States would use
to determine teacher preparation program performance, even though they
have not completed their programs.
In addition, commenters asked which States are responsible for
surveying teachers from a distance education program and their
employers or supervisors.
Discussion: The regulations do not specify the number or type of
questions to be included in employer or teacher surveys. Rather, we
have left decisions about the content of these surveys to each State.
We also note that, under the regulations, States may survey novice
teachers and their employers for a number of consecutive years, even
though they are only required to survey during the first year of
teaching.
The goal of every teacher preparation program is to effectively
prepare aspiring teachers to step into a classroom and teach all of
their students well. As the regulations are intended to help States
determine whether each teacher preparation program is meeting this
goal, we have decided to focus on novice teachers in their first year
of teaching, regardless of the type of certification the teachers have
or the type of teacher preparation program they attended or are
attending. When a teacher is given primary responsibility for the
learning outcomes of a group of students, the type of program she
attended or is still attending is largely irrelevant--she is expected
to ensure that her students learn. We expect that alternative route
teacher preparation programs are ensuring that the teachers they place
in classrooms prior to completion of their coursework are sufficiently
prepared to ensure student growth in that school year. We recognize
that these teachers, and those who completed traditional teacher
[[Page 75511]]
preparation programs, will grow and develop as teachers in their first
few years in the classroom.
We agree with commenters who suggested that surveying teachers and
their employers about the quality of training in the teachers'
preparation program would provide a richer and more well-informed
understanding of the programs over time. However, we decline to require
that States survey novice teachers and their employers for more than
one year. As an indicator of novice teachers' academic content
knowledge and teaching skills, these surveys are a much more robust
measure of program performance when completed in the first year of
teaching: at that point, the program is still fresh in respondents'
minds, and teachers and employers can best focus on the unique impact
of the program, independent of other factors that may contribute to
teaching quality, such as on-the-job training. However, if
they so choose, States are free to survey novice teachers and their
employers in subsequent years beyond a teacher's first year of
teaching, and consider the survey results in their assessment of
teacher preparation program effectiveness.
For teacher preparation programs provided through distance
education, a State must survey the novice teachers described in the
definition of ``teacher survey'' who have completed such a program and
who teach in that State, as well as the employers of those same
teachers.
Changes: None.
Comments: None.
Discussion: Upon review, we recognized that the structure of the
proposed regulations could be confusing. In particular, we were
concerned that having a definition for the term ``survey outcomes'' in
Sec. 612.2, when it largely serves to operationalize other definitions
in the context of Sec. 612.5, was not the clearest way to present
these requirements. We therefore are removing the definition of
``survey outcomes'' from Sec. 612.2 and moving its explanations and
requirements into Sec. 612.5(a)(3).
Through this change, we are clarifying that the surveys will assess
whether novice teachers possess the academic content knowledge and
teaching skills needed to succeed in the classroom. We do so for
consistency with Sec. 612.5(a), which requires States to assess, for
each teacher preparation program, indicators of academic content
knowledge and teaching skills of novice teachers from that program. We
also have removed the provision that the survey is of teachers in their
first year of teaching in the State where the teacher preparation
program is located, and instead provide that the survey is of teachers in their
first year teaching in the State. This change is designed to be
consistent with new language related to the reporting of teacher
preparation programs provided through distance education, as discussed
later in this document. Finally, we are changing the term ``new
teacher'' to ``novice teacher'' for the reasons discussed under the
definition of ``novice teacher.''
Changes: We have moved the content of the proposed definition of
``survey outcomes'' from Sec. 612.2, with edits for clarity, to Sec.
612.5(a)(3). We have also replaced the term ``new teacher'' with
``novice teacher'' in Sec. 612.5(a)(3).
Teacher Evaluation Measure
Comments: Many commenters noted that the proposed definition of
``teacher evaluation measure'' is based on the definition of ``student
growth.'' Therefore, commenters stated that the definition is based on
VAM, which they argued, citing research, is not valid or reliable for
this purpose.
Discussion: The proposed definition of ``teacher evaluation
measure'' did include a measure of student growth. However, while VAM
reflects a permissible way to examine student growth, neither in the
final definition of teacher evaluation measure nor anywhere else in
these regulations is the use of VAM required. For a more detailed
discussion of the use of VAM, please see the discussion of Sec.
612.5(a)(1).
Changes: None.
Comments: Commenters stated that the proposed definitions of
``teacher evaluation measure'' and ``student growth'' offer value from
a reporting standpoint and should be used when available. Commenters
also noted that it would be useful to understand novice teachers'
impact on student growth and recommended that States be required to
report student growth outcomes separately from teacher evaluation
measures where both are available.
Commenters also noted that not all States may have teacher
evaluation measures that meet the proposed definition because not all
States require student growth to be a significant factor in teacher
evaluations, as required by the proposed definition. Other commenters
suggested that, while student growth or achievement should be listed as
the primary factors in calculating teacher evaluation measures, other
factors such as teacher portfolios and student and teacher surveys
should be included as secondary considerations.
Some commenters felt that any use of student performance to
evaluate effectiveness of teacher instruction needs to include multiple
measures over a period of time (more than one to two years) and take
into consideration the context (socioeconomic, etc.) in which the
instruction occurred.
Discussion: We first stress that the regulations allow States to
use ``teacher evaluation measures'' as one option for student learning
outcomes; use of these measures is not required. States also may use
student growth, another State-determined measure relevant to
calculating student learning outcomes, or a combination of these three
options.
Furthermore, while we agree that reporting on student growth
separately from teacher evaluation measures would likely provide the
public with more information about the performance of novice teachers,
we are committed to providing States the flexibility to develop
performance systems that best meet their specific needs. In addition,
because of the evident cost and burden of disaggregating student growth
data from teacher evaluation measures, we do not believe that the HEA
title II reporting system is the right vehicle for gathering this
information. As a result, we decline to require separate reporting.
States may consider having LEAs incorporate teacher portfolios and
student and teacher surveys into teacher evaluation measures, as the
commenters recommended. In this regard, we note that the definition of
``teacher evaluation measure'' requires use of multiple valid measures,
and we believe that teacher evaluation systems that use such additional
measures of professional practice provide the best information on a
teacher's effectiveness. We also note that, because the definition of
``novice teacher'' encompasses the first three years as a teacher of
record, teacher evaluation measures that include up to three years of
student growth data are acceptable measures of student learning
outcomes under Sec. 612.5(a)(1). In addition, States can control for
different kinds of student and classroom characteristics in ways that
are consistent with our definitions of student learning outcomes and
student growth. See the discussion of Sec. 612.5(a)(2) for further
information on the student learning outcomes indicator.
With regard to the comment that some States lack teacher evaluation
measures that meet the proposed definition because they do not require
student growth to be a significant factor in teacher evaluations, we
previously explained in our discussion of Sec. 612.1 (and do so again
in our discussion of Sec. 612.6) our reasons for removing any
[[Page 75512]]
proposed weightings of indicators from these regulations. Thus we have
removed the phrase ``as a significant factor'' from the definition of
teacher evaluation measure.
Changes: We have removed the words ``as a significant factor'' from
the second sentence of the definition.
Comments: None.
Discussion: In response to the student learning outcomes indicator,
some commenters recommended that States be allowed to use the teacher
evaluation system they have in place. By proposing definitions relevant
to student learning outcomes that align with previous Department
initiatives, our intention was that the teacher evaluation systems of
States that include student growth as a significant factor, especially
those that had been granted ESEA flexibility, would meet the
requirements for student learning outcomes under the regulations. Upon
further review, we determined that revision to the definition of
``teacher evaluation measure'' is necessary to ensure that States are
able to use teacher evaluation measures to collect data for student
learning outcomes if the teacher evaluation measures include student
growth, and in order to ensure that the definition describes the
measure itself, which is then operationalized through a State's
calculation.
We understand that some States and districts that use student
growth in their teacher evaluation systems do not do so for teachers in
their first year, or first several years, of teaching. We are satisfied
that such systems meet the requirements of the regulations so long as
student growth is used as one of the multiple valid measures to assess
teacher performance within the first three years of teaching. To ensure
such systems meet the definition of ``teacher evaluation measure,'' we
are revising the phrase ``in determining each teacher's performance
level'' in the first sentence of the definition so that it reads ``in
determining teacher performance.''
Furthermore, for the reasons included in the discussion of
Sec. Sec. 612.1 and 612.6, we are removing the phrase ``as a
significant factor'' from the definition. In addition, we are removing
the phrase ``of performance levels'' from the second sentence of the
definition, as inclusion of that phrase in the NPRM was an error.
In addition, we have determined that the parenthetical phrase
beginning ``such as'' could be shortened without changing the intent,
which is to provide examples of other measures of professional
practice.
Finally, in response to commenters' desire for additional
flexibility in calculating student learning outcomes, and given the
newly enacted ESSA, under which waivers granted under ESEA flexibility
will terminate as of August 1, 2016, we have revised the regulations so
that States may use any State-determined measure relevant to
calculating student learning outcomes, or a combination of these three
options.
Changes: We have revised the definition of ``teacher evaluation
measure'' by removing the phrase ``By grade span and subject area and
consistent with statewide guidelines, the percentage of new teachers
rated at each performance level under'' and replacing it with ``A
teacher's performance level based on''. We have removed the final
phrase ``determining each teacher's performance level'' and replaced it
with ``assessing teacher performance.'' We have also revised the
parenthetical phrase beginning ``such as'' so that it reads ``such as
observations based on rigorous teacher performance standards, teacher
portfolios, and student and parent surveys.''
Teacher of Record
Comments: Commenters requested that the Department establish a
definition of ``teacher of record,'' but did not provide us with
recommended language.
Discussion: We used the term ``teacher of record'' in the proposed
definition of ``new teacher,'' and have retained it as part of the
definitions of ``novice teacher'' and ``recent graduate.'' We agree
that a definition of ``teacher of record'' will be helpful and will add
clarity to those two definitions.
We are adopting a commonly used definition of ``teacher of record''
that focuses on a teacher or co-teacher who is responsible for student
outcomes and determining a student's proficiency in the grade or
subject being taught.
Changes: We have added to Sec. 612.2 a definition of ``teacher of
record,'' and defined it to mean a teacher (including a teacher in a
co-teaching assignment) who has been assigned the lead responsibility
for student learning in a subject or course section.
Teacher Placement Rate
Comments: Some commenters questioned whether it was beyond the
Department's authority to set detailed expectations for teacher
placement rates. Several commenters expressed concerns about which
individuals would and would not be counted as ``placed'' when
calculating this rate. In this regard, the commenters argued that the
Federal government should not mandate the definitive list of
individuals whom a State may exclude from the placement rate
calculation; rather, they stated that those decisions should be
entirely up to the States.
Discussion: In response to commenters who questioned the
Department's authority to establish detailed expectations for a
program's teacher placement rate, we note that the regulations simply
define the teacher placement rate and how it is to be calculated. The
regulations also generally require that States use it as an indicator
of academic content and teaching skills when assessing a program's
level of performance. And they require this use because we strongly
believe both (1) that a program's teacher placement rate is an
important indicator of academic content knowledge and teaching skills
of recent graduates, and (2) that a rate that is very low, like one
that is very high, is a reasonable indicator of whether the program is
successfully performing one of its basic functions--to produce
individuals who become hired as teachers of record.
The regulations do not, as the commenters state, establish any
detailed expectations of what such a low (or high) teacher placement
rate is or should be. This they leave up to each State, in consultation
with its group of stakeholders as required under Sec. 612.4(c).
We decline to accept commenters' recommendations to allow States to
determine who may be excluded from placement rate calculations beyond
the exclusions the regulations permit in the definition of ``teacher
placement rate.'' Congress has directed that States report their
teacher placement rate data ``in a uniform and comprehensible manner
that conforms to the definitions and methods established by the
Secretary.'' See section 205(a) of the HEA. We believe the groups of
recent graduates that we permit States, at their discretion, to exclude
from these calculations--teachers teaching out of State and in private
schools, and teachers who have enrolled in graduate school or entered
the military--reflect the most common and accepted groups of recent
graduates that States should be able to exclude, either because States
cannot readily track them or because individual decisions to forgo
becoming teachers do not speak to the program's performance.
Commenters did not propose another comparable group whose failure to
become novice teachers should allow a State to exclude them in
calculations of a program's teacher placement rate, and upon review of
the
[[Page 75513]]
comments we have not identified such a group.
We accept that, in discussing this matter with its group of
stakeholders, a State may identify one or more such groups of recent
graduates whose decisions to pass up opportunities to become novice
teachers are also reasonable. However, as we said above, a teacher
placement rate becomes an indicator of a teacher preparation program's
performance when it is unreasonably low, i.e., below a level of
reasonableness the State establishes based on the fact that the program
exists to produce new teachers. We are not aware of any additional
category of recent graduates, beyond those already covered by the
allowable exclusions, that is both sufficiently large and so far
outside the control of the teacher preparation program that its
inclusion would result in an unreasonably low teacher placement rate.
Given this, we believe States do not need the
additional flexibility that the commenters propose.
Changes: None.
Comments: Commenters also expressed concern about participants who
are hired in non-teaching jobs while enrolled and then withdraw from
the program to pursue those jobs, suggesting that these students should
not be counted against the program. Some commenters questioned the
efficacy of teacher placement rates as an indicator of teacher
preparation program performance, given the number of teachers who may
be excluded from the calculation for various reasons (e.g., those who
teach in private schools). Other commenters were more generally
concerned that the discretion granted to States to exclude certain
categories of novice teachers meant that the information available on
teacher preparation programs would not be comparable across States.
Some commenters objected to permitting States to exclude teachers
or recent graduates who take teaching positions out of State, arguing
that, to be useful, placement rate data need to be gathered across
State boundaries as program graduates work in numerous States.
Discussion: We believe that the revised definition of ``recent
graduate,'' as well as the allowable exclusions in the definitions of
both teacher placement and retention rates, not only alleviate obvious
sources of burden, but provide States with sufficient flexibility to
calculate these rates in reasonable ways. Program participants who do
not complete the program do not become recent graduates, and would not
be included in calculations of the teacher placement rate. However, if
the commenters intended to address recent graduates who were employed
in non-teaching positions while in or after completing the program, we
would decline to accept the recommendation to exclude individuals
because we believe that, except for those who become teachers out of
State or in private schools, those who enroll in graduate school, or
those who enter the military (which the regulations permit States to
exclude), it is important to assess teacher preparation programs based
on factors that include their success rates in having recent graduates
hired as teachers of record.
With regard to the efficacy of the teacher placement rate as an
indicator of program performance, we understand that employment
outcomes, including teacher placement rates, are influenced by many
factors, some of which are outside of a program's control. However, we
believe that employment outcomes are, in general, a good reflection of
program performance because they signal a program's ability to produce graduates
whom schools and districts deem to be qualified and seek to hire and
retain. Moreover, abnormally low employment outcomes are an indication
that something about the program is amiss (just as abnormally high
outcomes suggest something is working very well). Further discussion on
this topic can be found under the subheading Employment Outcomes as a
Measure of Performance, Sec. 612.5(a)(2).
While we are sympathetic to the commenters' concern that the
proposed definition of teacher placement rate permits States to
calculate employment outcomes only using data on teachers hired to
teach in public schools, States may not, depending on State law, be
able to require that private schools cooperate in the State data
collection that the regulations require. We do note that, generally,
teacher preparation programs are designed to prepare teachers to meet
the requirements to teach in public schools nationwide, and over 90
percent of teachers in elementary and secondary schools do not work in
private schools.\13\ Additionally, requiring States to collect data on
teachers employed in private schools or out of State, as well as those
who enroll in graduate school or enter the military, would create undue
burden on States. The regulations do not prevent teacher preparation
entities from working with their States to secure data on recent
graduates who are subject to one or more of the permissible State
exclusions and likewise do not prevent the State from using those data in
calculating the program's employment outcomes, including teacher
placement rates.
---------------------------------------------------------------------------
\13\ According to data from the Bureau of Labor Statistics, in
May 2014, of the 3,696,580 individuals employed as preschool,
primary, secondary, and special education school teachers in
elementary and secondary schools nationwide, only 358,770 were
employed in private schools. See www.bls.gov/oes/current/naics4_611100.htm and www.bls.gov/oes/current/611100_5.htm .
---------------------------------------------------------------------------
Similarly, we appreciate commenters' recommendation that the
regulations include placement rate data for those recent graduates who
take teaching positions in a different State. Certainly, many novice
teachers do become teachers of record in States other than those where
their teacher preparation programs are located. We encourage States and
programs to develop interstate data-sharing mechanisms to facilitate
reporting on indicators of program performance to be as comprehensive
and meaningful as possible.
Until States have a ready means of gathering these kinds of data on
an interstate basis, we appreciate that many States may find the costs
and complexities of this data-gathering to be daunting. On the other
hand, we do not view the lack of these data (or the lack of data on
recent graduates teaching in private schools) to undermine the
reasonableness of employment outcomes as indicators of program
performance. As we have explained, it is when employment outcomes are
particularly low that they become indicators of poor performance, and
we are confident that the States, working in consultation with their
stakeholders, can determine an appropriate threshold for teacher
placement and retention rates.
Finally, we understand that the discretion that the regulations
grant to each State to exclude novice teachers who teach in other
States and who work in private schools (and those program graduates who
go on to graduate school or join the military) means that the teacher
placement rates for teacher preparation programs will not be comparable
across States. This is not a major concern. The purpose of the
regulations and the SRC itself is to ensure that each State reports
those programs that have been determined to be low-performing or at-
risk of being low-performing based on reasonable and transparent
criteria. We believe that each State, in consultation with its
stakeholders (see Sec. 612.4(c)), should exercise flexibility to
determine whether to have the teacher placement rate reflect inclusion
of those program graduates identified in paragraph (ii) of the
definition.
[[Page 75514]]
Changes: None.
Comments: Several commenters recommended that a State with a
statewide preschool program that requires early educators to have
postsecondary training and certification and State licensure be
required to include data on early educators in the teacher placement
rate, rather than simply permit such inclusion at the State's
discretion.
Discussion: We strongly encourage States with a statewide preschool
program where early educators are required to obtain State licensure
equivalent to that of elementary school teachers to include these teachers in
their placement data. However, we decline to require States to include
these early educators in calculations of programs' teacher placement
rates because early childhood education centers are often independent
from local districts, or are run by external entities. This would make
it extremely difficult for States to determine a valid and reasonable
placement rate for these teachers.
Changes: None.
Comments: Commenters recommended that teachers who have been hired
in part-time teaching positions be counted as ``placed,'' arguing that
the placement of teachers in part-time teaching positions is not
evidence of a lower quality teacher preparation program.
Discussion: We are persuaded by comments that a teacher may
function in a part-time capacity as a teacher of record in the subject
area and grade level for which the teacher was trained and that, in
those instances, it would not be appropriate to count this part-time
placement against a program's teacher placement rate. As such, we have
removed the requirement that a teacher placement rate be based on the
percentage of recent graduates teaching in full-time positions.
Changes: We have removed the full-time employment requirement from
the definition of ``teacher placement rate.''
Comments: Commenters asked whether a participant attending a
teacher preparation program who is already employed as a teacher by an
LEA prior to graduation would be counted as ``placed'' post-graduation.
Commenters felt that excluding such students may unduly penalize
programs that tailor their recruitment of aspiring teachers to working
adults.
Discussion: We are uncertain whether the commenter is referring to
a teacher who has already received initial certification or licensure
and is enrolled in a graduate degree program or is a participant in an
alternative route to certification program and is working as a teacher
as a condition of participation in the program. As discussed in the
section titled ``Teacher Preparation Program,'' a teacher preparation
program is defined, in part, as a program that prepares an individual
for initial certification or licensure. As a result, it is unlikely
that a working teacher would be participating in such a program. See
the section titled ``Alternative Route Programs'' for a discussion of
the use of teacher placement rate in alternative route programs.
    Changes: None.
Comments: Some commenters recommended that the teacher placement
rate calculation account for regional differences in job availability
and the general competitiveness of the employment market. In addition,
commenters argued that placement rates should also convey whether the
placement is in the area in which the candidate is trained to teach or
out-of-field (i.e., where there is a mismatch between the teacher's
content training and the area of the placement). The commenters
suggested that young teachers may be more likely to get hired in out-
of-field positions because they are among the few willing to take those
jobs. Commenters contended that many teachers from alternative route
programs (including Teach for America) are in out-of-field placements
and should be recognized as such. Commenters also argued that high-need
schools are notoriously staffed by out-of-field teachers, thus, they
recommended that placement rate data account for the congruency of the
placement. The commenters stated this is especially important if the
final regulations include placement rates in high-need schools as an
indicator of program performance.
Discussion: We encourage entities operating teacher preparation
programs to take factors affecting supply and demand, such as regional
differences in job availability and the general competitiveness of the
employment market, into consideration when they design and implement
their programs and work to have their participants placed as teachers.
Nonetheless, we decline to accept the recommendation that the
regulations require that the teacher placement rate calculation account
for these regional differences in job availability and the
competitiveness of the employment market. Doing so would be complex,
and would entail very large costs of cross-tabulating data on teacher
preparation program location, area of residence of the program
graduate, teacher placement data, and a series of employment and job
market indicators. States may certainly choose to account for regional
differences in job availability and the general competitiveness of the
employment market and pursue the additional data collection that such
effort would entail. However, we decline to require it.
As explained in the NPRM, while we acknowledge that teacher
placement rates are affected by some considerations outside of the
program's control, we believe that placement rates are still a valid
indicator of the quality of a teacher preparation program (see the
discussion of employment outcomes under Sec. 612.5(a)(2)).
We understand that teachers may be hired to teach subjects and
areas in which they were not prepared, and that out-of-field placement
is more frequent in high-need schools. However, we maintain the
requirement that the teacher placement rate assess the extent to which
program graduates become novice teachers in the grade-level, grade-
span, and subject area in which they were trained. A high incidence of
out-of-field placement reflects that the teacher preparation program is
not in touch with the hiring needs of likely prospective employers,
thus providing its participants with the academic content knowledge and
teaching skills to teach in the fields that do not match employers'
teaching needs. We also recognize that placing teachers in positions
for which they were not prepared could lead to less effective teaching
and exacerbate the challenges already apparent in high-need schools.
Changes: None.
Comments: Some commenters stated that, while it is appropriate to
exclude the categories of teachers listed in the proposed definition of
``teacher placement rate,'' data on the excluded teachers would still
be valuable to track for purposes of the State's quality rating system.
Commenters proposed requiring States to report the number of teachers
excluded in each category.
Discussion: Like the commenters, we believe that the number of
recent graduates that a State excludes from its calculation of a
program's teacher placement rate could provide useful information to
the program. For reasons expressed above in response to comments,
however, we believe a program's teacher placement rate will be a
reasonable measure of program performance without reliance on the
number of teachers in each category whom a State chooses to exclude
from its calculations. Moreover, we do not believe that the number of
recent graduates who go on to teach in other States or in private
schools, or who enter graduate school or the military, is
[[Page 75515]]
a reflection of a program's quality. Because the purpose of the teacher
placement rate, like all of the regulations' indicators of academic
content knowledge and teaching skills, is to provide information on the
performance of the program, we decline to require that States report
these data in their SRCs. We nonetheless encourage States to consider
obtaining, securing, and publicizing these data as a way to make
information they provide about each program more robust.
Changes: None.
Comments: Commenters stated that it is important to have teacher
placement data beyond the first year following graduation, because
graduates sometimes move among districts in the early years of their
careers. One commenter noted that, in the commenter's State, data are
currently available only for teachers in their first year of teaching,
and that there is an important Federal role in securing these data
beyond this first year.
Discussion: From our review of the comments, we are unclear whether
the commenters intended to refer to a program's teacher retention rate,
because recent graduates who become novice teachers and then
immediately move to another district would be captured by the teacher
retention rate calculation. But because our definition of ``novice
teacher'' includes an initial three-year teaching period, a program's
teacher retention rate would continue to track these teachers in
future years.
In addition, we believe a number of commenters may have
misunderstood how the teacher placement rate is calculated and used.
Specifically, a number of commenters seemed to believe that the teacher
placement rate is only calculated in the first year after program
completion. This is inaccurate. The teacher placement rate is
determined by calculating the percentage of recent graduates who have
become novice teachers, regardless of their retention. As such, the
teacher placement rate captures any recent graduate who works as a
teacher of record in an elementary or secondary public school, which
may include preschool at the State's discretion, within three years of
program completion.
In order to provide additional clarity, we provide the following
example. We examine a theoretical group of graduates from a single
teacher preparation program, as outlined in Table 1. In examining the
example, it is important to understand that a State reports in its SRC
for a given year a program's teacher placement rate based on data from
the second preceding title II reporting year (as the term is defined in
the regulations). Thus, recent graduates in 2018 (in the 2017-2018
title II reporting year) might become novice teachers in 2018-2019. The
State collects these data in time to report them in the SRC to be
submitted in October 2019. Please see the discussion of the timing of
the SRC under Sec. 612.4(a)(1)(i) General State Report Card reporting
and Sec. 612.4(b) Timeline for changes in the reporting timeline from
that proposed in the NPRM.
[[Page 75516]]
[Table 1: Hypothetical teacher placement rate example--graphic omitted]
[[Page 75517]]
[Table 1, continued--graphic omitted]
[[Page 75518]]
In this example, the teacher preparation program has five
individuals who met all of the requirements for program completion in
the 2016-2017 academic year. The State counts these individuals (A, B,
C, D, and E) in the denominator of the placement rate for the program's
recent graduates in each of the State's 2018, 2019, and 2020 SRCs
because they are, or could be, recent graduates who had become novice
teachers in each of the prior title II reporting years. Moreover, in
each of these years, the State would determine how many of these
individuals have become novice teachers. In the 2018 SRC, the State
identifies that A and B have become novice teachers in the prior
reporting year. As such, the State divides the total number of recent
graduates who have become novice teachers (2) by the total number of
recent graduates from 2016-2017 (5). Hence, in the 2018 SRC, this
teacher preparation program has a teacher placement rate of 40 percent.
In the State's 2019 SRC, all individuals who completed the program
in 2017 and those who completed in 2018 (the 2016-2017 and 2017-2018
title II reporting years) meet the definition of recent graduate. In
the 2018-2019 academic year, one additional completer from the 2016-
2017 academic year has become a novice teacher (C), and five (F, G, H,
J, and K) of the six 2017-2018 program completers have become novice
teachers. In this instance, Teacher J is included as a recent graduate
who has become a novice teacher even though Teacher J is not teaching
in the current year. This is because the definition requires inclusion
of all recent graduates who have become novice teachers at any time,
regardless of their retention. Teacher J is counted as a successfully
placed teacher. The fact that Teacher J is no longer still employed as
a teacher is captured in the teacher retention rate, not here. As such,
in the 2019 SRC, the teacher preparation program's teacher placement
rate is 73 percent (eight program completers out of eleven have been
placed).
In the State's 2020 SRC, there are no additional cohorts to add to
the pool of recent graduates in this example although, in reality,
States will be calculating this measure using three rolling cohorts of
program completers each year. In this example, Teacher D has newly
obtained placement as a novice teacher and would therefore be included
in the numerator. As with Teacher J in the prior year's SRC, Teachers G
and K remain in the numerator even though they are no longer teachers
of record because they have been placed as novice teachers previously.
In the 2020 SRC, the teacher preparation program's teacher placement
rate is 82 percent (nine program completers out of eleven have been
placed).
In the 2021 SRC, individuals who completed their teacher
preparation program in the 2016-2017 academic year (A, B, C, D, and E)
are no longer considered recent graduates since they completed their
programs prior to the preceding three title II reporting years (2018,
2019, 2020). As such, the only cohort of recent graduates the State
examines for this hypothetical teacher preparation program is the one
that completed the program in the 2017-2018 academic year (F, G, H, I,
J, and K). In the 2020-2021 academic year, Teacher I is placed as a
novice teacher. Once again, Teachers G and J are included in the
numerator even though they are not currently employed as teachers
because they have previously been placed as novice teachers. The
program's teacher placement rate in the 2021 SRC would be 100 percent.
In the 2022 SRC, this hypothetical teacher preparation program has
no recent graduates, as no one completed the requirements of the
program in any of the three preceding title II reporting years (2019,
2020, or 2021).
It bears restating that recent graduates
who have become novice teachers at any point, such as Teacher J, are
included in the numerator of this calculation, regardless of whether
they were retained as a teacher of record in a subsequent year. As
such, if an individual completed a teacher preparation program in Year
1 and became a novice teacher in Year 2, regardless of whether he or
she is still a novice teacher in Year 3, the individual is considered
to have been successfully placed under this measure. Issues regarding
retention of teachers are captured by the teacher retention rate
measure, and therefore departures from a teaching position have no
negative consequences under the teacher placement rate.
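To make the arithmetic of this example concrete, the following short
Python sketch reproduces the placement rate calculations above. It is
illustrative only and is not part of the regulations or the title II
reporting system; the data structures, function name, and year labels
are our own conventions (a title II reporting year is labeled by its
ending calendar year, so 2017 denotes 2016-2017).

    # Illustrative sketch only; names, labels, and data are hypothetical.
    completion_year = {
        'A': 2017, 'B': 2017, 'C': 2017, 'D': 2017, 'E': 2017,
        'F': 2018, 'G': 2018, 'H': 2018, 'I': 2018, 'J': 2018, 'K': 2018,
    }
    # Reporting year in which each individual first became a novice
    # teacher (E first teaches in 2020-2021, after A-E have aged out
    # of recent-graduate status).
    first_year_teaching = {
        'A': 2018, 'B': 2018, 'C': 2019, 'D': 2020, 'E': 2021,
        'F': 2019, 'G': 2019, 'H': 2019, 'I': 2021, 'J': 2019, 'K': 2019,
    }

    def placement_rate(src_year):
        # Recent graduates completed the program in one of the three
        # preceding title II reporting years.
        recent = [p for p, y in completion_year.items()
                  if src_year - 3 <= y <= src_year - 1]
        if not recent:
            return None  # nothing to report, as in the 2022 SRC
        # A recent graduate counts as placed if he or she became a
        # novice teacher at any point through the reporting year ending
        # August 31 before the October SRC, regardless of retention.
        placed = [p for p in recent if first_year_teaching[p] <= src_year]
        return round(100.0 * len(placed) / len(recent))

    for year in (2018, 2019, 2020, 2021, 2022):
        print(year, placement_rate(year))
    # Prints the rates computed by hand above:
    # 2018 40, 2019 73, 2020 82, 2021 100, 2022 None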
We have adopted these procedures for State reporting of a program's
teacher placement rate in each year's SRC to keep them consistent with
the proposal we presented in the NPRM for reporting teacher placement
rates over a three-year period, in line with the change in the SRC
reporting date, and as simple and straightforward as possible. This led
us to make certain non-substantive changes to the proposed definition
of teacher placement rate so that the definition is clearer and less
verbose. In doing so, we have removed the State's option of excluding
novice teachers who have taken teaching positions that do not require
State certification (paragraph (ii)(C) of the proposed definition)
because it seems superfluous; our definition of teacher preparation
program is one that leads to an initial State teacher certification or
licensure in a specific field.
Changes: We have revised the definition of ``teacher placement
rate'' to include:
(i) The percentage of recent graduates who have become novice
teachers (regardless of retention) for the grade level, span, and
subject area in which they were prepared.
(ii) At the State's discretion, exclusion from the rate calculated
under paragraph (i) of this definition of one or more of the following,
provided that the State uses a consistent approach to assess and report
on all of the teacher preparation programs in the State:
(A) Recent graduates who have taken teaching positions in another
State.
(B) Recent graduates who have taken teaching positions in private
schools.
(C) Recent graduates who have enrolled in graduate school or
entered military service.
Comments: None.
Discussion: The Department recognizes that a State may be unable to
accurately determine the total number of recent graduates in cases
where a teacher preparation program provided through distance education
is offered by a teacher preparation entity that is physically located
in another State. Each institution of higher education conducting a
teacher preparation program is required to submit an IRC, which would
include the total number of recent graduates from each program, to the
State in which it is physically located. If the teacher preparation
entity operates a teacher preparation program provided through distance
education in other States, it is not required to submit an IRC in those
States. As a result, a State with a teacher preparation program
provided through distance education that is operated by an entity
physically located in another State will not have access to information
on the total number of recent graduates from such program. Even if the
State could access the number of recent graduates, recent graduates who
neither reside in nor intend to teach in such State would be captured,
inflating the number of recent graduates and resulting in a teacher
placement rate that is artificially low.
For these reasons, we have determined that it is appropriate to
allow States to use the total number of recent graduates who have
obtained initial certification or licensure in the
[[Page 75519]]
State, rather than the total number of recent graduates, when
calculating teacher placement rates for teacher preparation programs
provided through distance education. We believe that a teacher placement
rate calculated using the number of recent graduates who have obtained
initial certification or licensure is likely more accurate in these
instances than total recent graduates from a multi-state program. Even
so, since fewer recent graduates obtain initial certification or
licensure than the total number of recent graduates, the teacher
placement rate may be artificially high. To address this, we have also
revised the employment outcomes section in Sec. 612.5(a)(2) to allow
States a greater degree of flexibility in calculating and weighting
employment outcomes for teacher preparation programs offered through
distance education.
Changes: We have revised the definition of teacher placement rate
in Sec. 612.2 to allow States to use the total number of recent
graduates who have obtained initial certification or licensure in the
State during the three preceding title II reporting years as the
denominator in their calculation of teacher placement rate for teacher
preparation programs provided through distance education instead of the
total number of recent graduates.
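As a minimal sketch of this alternative calculation (the function and
the counts in the usage line are hypothetical, not drawn from the
regulations), the substitute denominator simply replaces all recent
graduates with those recent graduates who obtained initial
certification or licensure in the State:

    # Hypothetical sketch of the alternative denominator permitted for
    # programs provided through distance education.
    def distance_placement_rate(novice_recent_graduates,
                                certified_recent_graduates):
        # Numerator: recent graduates who have become novice teachers.
        # Denominator: recent graduates who obtained initial State
        # certification or licensure during the three preceding title
        # II reporting years, instead of all recent graduates (which
        # the State may be unable to count for an out-of-State
        # provider).
        return round(100.0 * novice_recent_graduates
                     / certified_recent_graduates)

    print(distance_placement_rate(12, 15))  # 80, using invented counts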
Teacher Preparation Program
Comments: Commenters stated that the regulations are designed for
undergraduate teacher preparation programs rather than graduate
programs, in part because the definition of teacher preparation program
is linked to specific teaching fields. This could result in small
program sizes for post-baccalaureate preparation programs.
Another commenter noted that it offers a number of graduate degree
programs in education that do not lead to initial certification, but
that the programs which institutions and States report on under part
612 are limited to those leading to initial certification.
Other commenters urged that aggregation of data to elementary and
secondary data sets would be more appropriate in States with a
primarily post-baccalaureate teacher preparation model. We understand
that commenters are suggesting that our proposed definition of
``teacher preparation program,'' with its focus on the provision of a
specific license or certificate in a specific field, will give States
whose programs are primarily at the post-baccalaureate level
considerable trouble collecting and reporting data for the required
indicators given their small size. (See generally Sec. 612.4(b)(3).)
Discussion: The definition of teacher preparation program in the
regulations is designed to apply to both undergraduate and graduate
level teacher preparation programs. We do not agree that the definition
is designed to fit teacher preparation programs at one level better
than the other. With regard to the commenters' concerns about greater
applicability to graduate-level programs, while the commenters
identified these as concerns regarding the definition of teacher
preparation program, we understand the issues described to be about
program size, which is addressed in Sec. 612.4(b). As such, these
comments are addressed in the discussion of program size under Sec.
612.4(b)(3)(ii). We do believe that it is important to clarify that a
teacher preparation program for purposes of title II, HEA reporting is
one that leads to initial certification, as has been the case under the
title II reporting system since its inception.
Changes: We have revised the definition of the term ``teacher
preparation program'' to clarify that it is one that leads to initial
State teacher certification or licensure in a specific field.
Comments: Commenters noted that, because teacher preparation
programs in some States confer academic degrees (e.g., Bachelor of Arts
in English) on graduates rather than degrees in education, it would be
impossible to identify graduates of teacher preparation programs and
obtain information on teacher preparation graduates. Additionally, some
commenters were concerned that the definition does not account for
students who transfer between programs or institutions, or distinguish
between students who attended more than one program; it confers all of
the credit or responsibility for these students' academic content
knowledge and teaching skills on the program from which the student
graduates. In the case of alternative route programs, commenters stated
that students may have received academic training from a different
program, which could unfairly either reflect poorly on, or give credit
to, the alternative route program.
Discussion: Under the regulatory definition of the term, a teacher
preparation program, whether alternative route or traditional, must
lead to an initial State teacher certification or licensure in a
specific field. As a result, a program that does not lead to an initial
State teacher certification or licensure in a specific field (e.g., a
Bachelor of Arts in English without some additional education-related
coursework) is not considered a teacher preparation program that is
reported on under title II. For example, a program that provides a
degree in curriculum design, confers a Master of Education, but does
not prepare students for an initial State certification or licensure,
would not qualify as a teacher preparation program under this
definition. However, a program that prepares individuals to be high
school English teachers, including preparing them for an initial State
certification or licensure, but confers no degree would be considered a
teacher preparation program. The specific type of degree granted by the
program (if any) is irrelevant to the definition in these regulations.
Regardless of their structure, all teacher preparation programs are
responsible for ensuring their students are prepared with the academic
content knowledge and teaching skills they need to succeed in the
classroom. Therefore, because the regulatory definition of teacher
preparation program encompasses all programs, regardless of their
structure, that lead to initial State teacher certification or
licensure in a specific field, it is appropriate that States report on
the performance and associated data of each of these programs.
While we understand that students often transfer during their
college careers, we believe that the teacher preparation program
that ultimately determines that a student is prepared for initial
certification or licensure is the one responsible for his or her
performance as a teacher. This is so regardless of whether the student
started in that program or a different one. The same is true for
alternative route programs. Since alternative route programs enroll
individuals who have had careers, work experiences, or academic
training in fields other than education, participants in these programs
have almost by definition had academic training elsewhere. However, we
believe it is fully appropriate to have the alternative route program
assume full responsibility for effective teacher training under the
title II reporting system, as it is the program that determined the
teacher to have sufficient academic content knowledge and teaching
skills to complete the requirements of the program.
Finally, we note that in Sec. 612.5(a)(4), the regulations also
require States to determine whether teacher preparation programs have
rigorous exit requirements. Hence, regardless of student transfers, the
public will know whether the State considers program
[[Page 75520]]
completers to have reached a high standard of preparation.
Changes: None.
Comments: None.
Discussion: In considering the comments we received on alternative
route to certification programs, we realized that our proposed
definition of ``teacher preparation program'' did not address the
circumstance where the program, while leading to an initial teacher
certification or licensure in a specific field, enrolls some students
in a traditional teacher preparation program and other students in an
alternative route to certification program (i.e., hybrid programs).
Like the students enrolled in each of these two programmatic
components, the components themselves are plainly very different.
Principally, one offers instruction to those who will not become
teachers of record until after they graduate and become certified to
teach, while the other offers instruction to those who already are
teachers of record (and have met State requirements to teach while
enrolled in their teacher preparation program), and thereby
supports and complements those individuals' current teaching
experiences. Thus, while each component is ``offered by [the same]
teacher preparation entity'' and ``leads to an initial State teacher
certification or licensure in a specific field,'' this is where the
similarity may end.
We therefore have concluded that our proposed definition of a
teacher preparation program does not fit these hybrid programs. Having
an IHE or the State report composite information for a teacher
preparation program that has both a traditional and alternative route
component does not make sense; reporting in the aggregate will mask
what is happening with or in each component. The clearest and simplest
way to avoid the confusion in reporting that would otherwise result is
to have IHEs and States treat each component of such a hybrid program
as its own teacher preparation program. We have revised the definition
of a ``teacher preparation program'' in Sec. 612.2 to do just that.
While doing so may create more small teacher preparation programs that
require States to aggregate data under Sec. 612.4(b)(3)(ii), this
consequence will be far outweighed by the benefits of cleaner and
clearer information.
Changes: We have revised the definition of a ``teacher preparation
program'' in Sec. 612.2 to clarify that where some participants in the
program are in a traditional route to certification or licensure in a
specific field, and others are in an alternative route to certification
or licensure in that same field, the traditional route component and
the alternative route component are each treated as a separate teacher
preparation program.
Teacher Retention Rate
Comments: Some commenters stated that by requiring reporting on
teacher retention rates, both generally and for high-need schools,
program officials--and their potential applicants--can ascertain if the
programs are aligning themselves with districts' staffing needs.
Other commenters stated that two of the allowable options for
calculating the teacher retention rate would provide useful information
regarding: (1) The percentage of new teachers hired into full-time
teaching positions and serving at least three consecutive years within
five years of being certified or licensed; and (2) the percentage of
new teachers hired full-time and reaching tenure within five years of
being certified. According to commenters, the focus of the third
option, new teachers who were hired and then fired for reasons other
than budget cuts, could be problematic because it overlooks teachers
who voluntarily leave high-need schools, or the profession altogether.
Other commenters recommended removing the definition of teacher
retention rate from the regulations.
Another commenter stated that the teacher retention rate, which we
had proposed to define as any of the three specific rates as selected
by the State, creates the potential for incorrect calculations and
confusion for consumers when teachers have initial certification in
multiple States; however, the commenter did not offer further
information to clarify its meaning. In addition, commenters stated that
the proposed definition allows for new teachers who are not retained
due to market conditions or circumstances particular to the LEA and
beyond the control of teachers or schools to be excluded from
calculation of the retention rate, a standard that allows each school
to determine the criteria for those conditions, which are subject to
interpretation.
Several commenters requested clarification of the definition. Some
asked us to clarify what we meant by tenure. Another commenter asked us
to clarify how to treat teachers on probationary certificates.
Another commenter recommended that the Department amend the teacher
retention rate definition so that it is used to help rate teacher
preparation programs by comparing the program's recent graduates who
demonstrate effectiveness and remain in teaching to those who fail to
achieve high ratings on evaluations. One commenter suggested that
programs track the number of years graduates taught over the course of
five years, regardless of whether or not the years taught were
consecutive. Others suggested shortening the timeframe for reporting on
retention so that the rate would be reported for each of three
consecutive years and, as we understand the comments, would apply to
individuals after they became novice teachers.
Discussion: We agree with the commenters who stated that reporting
on teacher retention rates both generally and for high-need schools
ensures that teacher preparation programs are aligning themselves with
districts' staffing needs.
In response to comments, we have clarified and simplified the
definition of teacher retention rate. We agree with commenters that the
third proposed option, by which one subtracts from 100 percent the
percentage of novice teachers who were hired and fired for reasons
other than budget cuts, is not a true measure of retention because it
excludes those who voluntarily leave the profession. Therefore, we have
removed it as an option for calculating the retention rate. Doing so
also addresses those concerns that the third option allowed for too
much discretion in interpreting when local conditions beyond the
schools' control caused teachers to no longer be retained.
We also agree with commenters that the second proposed option for
calculating the rate, which looked to the percentage of new teachers
receiving tenure within five years, is confusing and does not make
sense when looking at new teachers, which we had proposed to define as
covering a three-year teaching period, as tenure may not be reached
during that timeframe. For these reasons, we also have removed this
option from the definition. Doing so addresses the commenters' concerns
that multiple methods for calculating the rate would create confusion.
We also believe this addresses the comments regarding our use of the
term tenure as potentially causing confusion.
We also note that our proposed definition of teacher retention rate
did not incorporate the concept of certification in the State in which one
teaches. Therefore, we do not believe this definition will cause the
confusion identified by the commenter who was concerned about teachers
who were certified to teach in multiple States.
Additionally, we revised the first option for calculating the
teacher retention rate to clarify that the rate must be calculated
three times for each cohort of novice teachers--after the first,
[[Page 75521]]
second, and third years as a novice teacher. We agree with commenters
who recommended shortening the timeframe for reporting on retention
from three of five years to the first three consecutive years. We made
this change because the definition of recent graduate already builds in
a three-year window to allow for delay in placement, and to simplify
the data collection and reporting requirements associated with this
indicator.
We also agree with the recommendation that States calculate a
program's retention rate based on three consecutive years after
individuals become novice teachers. We believe reporting on each year
for the first three years is a reasonable indicator of academic content
and teaching skills in that it shows how well a program prepares novice
teachers to remain in teaching, and also both promotes greater
transparency and helps employers make more informed hiring decisions.
We note that teacher retention rate is calculated for all novice
teachers, which includes those on probationary certificates. This is
further explained in the discussion of ``Alternative Route Programs''
under Sec. 612.5(a)(2).
We appreciate the suggestions that we should require States to
report a comparison of retention rates of novice teachers based on
their evaluation ratings, but decline to prescribe this measure as
doing so would create costs and complexities that we do not think are
sufficiently necessary in determining a program's broad level of
performance. States that are interested in such information for the
purposes of transparency or accountability are welcome to consider it
as another criterion for assessing program performance or for other
purposes.
BILLING CODE 4000-01-P
[[Page 75522]]
[GRAPHIC] [TIFF OMITTED] TR31OC16.002
[[Page 75523]]
[GRAPHIC] [TIFF OMITTED] TR31OC16.003
[[Page 75524]]
When calculating teacher retention rate, it is important to first
note that the academic year in which an individual met all of the
requirements for program completion is not relevant. In contrast to the
teacher placement rate, the defining concern of a teacher retention
rate calculation is the first year in which an individual becomes a
teacher of record for P-12 public school students. In this example, we
use the same basic information as we did for the teacher placement rate
example. As such, Table 2a recreates Table 1, with calculations for
teacher retention rate instead of the teacher placement rate. However,
because the first year in which an individual becomes a novice teacher
is the basis for the calculations, rather than the year of program
completion, we could rearrange Table 2a in the order in which teachers
first became novice teachers as in Table 2b.
In addition, Table 2b removes data on program completion, and
eliminates both extraneous information before an individual becomes a
novice teacher and employment information after the State is no longer
required to report on these individuals for purposes of the teacher
retention rate.
[[Page 75525]]
[GRAPHIC] [TIFF OMITTED] TR31OC16.004
[[Page 75526]]
[GRAPHIC] [TIFF OMITTED] TR31OC16.005
[[Page 75527]]
In this example, the teacher preparation program has
five individuals who became novice teachers for the first time in the
2017-2018 academic year (Teachers A, B, F, G, and J). For purposes of
this definition, we refer to these individuals as a cohort of novice
teachers. As described below, the State will first calculate a teacher
retention rate for this teacher preparation program in the October 2019
State report card. In that year, the State will determine how many
members of the 2017-2018 cohort of novice teachers have been
continuously employed through the current year. Of Teachers A, B, F, G,
and J, only Teachers A, B, F, and G are still teaching in 2018-2019. As
such, the State calculates a teacher retention rate of 80 percent for
this teacher preparation program for the 2019 State Report Card.
In the October 2020 SRC, the State is required to report on the
2017-2018 cohort and the 2018-2019 cohort. The membership of the 2017-
2018 cohort does not change. From that cohort, Teachers A, B, and F
were employed in both the 2018-2019 academic year and the 2019-2020
academic year. The 2018-2019 cohort consists of Teachers C, H, and K.
Of those, only Teachers C and H are employed as teachers of record in
the 2019-2020 academic year. Therefore, the State reports a teacher
retention rate of 60 percent for the 2017-2018 cohort--because three
teachers (A, B, and F) were continuously employed through the current
year out of the five total teachers (A, B, F, G, and J) in that
cohort--and 67 percent for the 2018-2019 cohort--because two teachers
(C and H) were employed in the current year out of the three total teachers
(C, H, and K) in that cohort.
In the October 2021 SRC, the State will be reporting on three
cohorts of novice teachers for the first time--the 2017-2018 cohort (A,
B, F, G, and J), the 2018-2019 cohort (C, H, and K), and the 2019-2020
cohort (D). Of the 2017-2018 cohort, only Teachers A and F have been
continuously employed as a teacher of record since the 2017-2018
academic year; therefore, the State will report a retention rate of 40
percent for this cohort (two out of five). Of the 2018-2019 cohort,
only Teachers C and H have been continuously employed since the 2018-
2019 academic year. Despite being a teacher of record for the 2020-2021
academic year, Teacher K does not count towards this program's teacher
retention rate because Teacher K was not a teacher of record in the
2019-2020 academic year, and therefore has not been continuously
employed. The State would report a 67 percent retention rate for the
2018-2019 cohort (two out of three). For the 2019-2020 cohort, Teacher
D is still a teacher of record in the current year. As such, the State
reports a teacher retention rate of 100 percent for that cohort.
Beginning with the 2022 SRC, the State no longer reports on the
2017-2018 cohort. Instead, the State reports on the three most recent
cohorts of novice teachers--2018-2019 (C, H, and K), 2019-2020 (D), and
2020-2021 (E and I). Of the members of the 2018-2019 cohort, both
Teachers C and H have been employed as teachers of record in each year
from their first year as teachers of record through the current
reporting year. Teacher K is still not included in the calculation
because he or she was not employed as a teacher of record in the
2019-2020 academic year. Therefore, the State reports a 67 percent
retention rate for this cohort. Of the 2019-2020 cohort, Teacher D has
been employed in each academic year since first becoming a teacher of
record. The State would report a 100 percent retention rate for this
cohort. Teachers E and I, of the 2020-2021 cohort, have also been
retained in the 2021-2022 academic year. As such, the State reports a
teacher retention rate of 100 percent in the 2022 SRC for this cohort.
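The cohort-based arithmetic of this example can likewise be expressed
as a short illustrative Python sketch. Again, nothing here is part of
the regulations: the employment data encode our reading of Tables 2a
and 2b, and reporting years are labeled by their ending calendar year
(2018 denotes 2017-2018).

    # Illustrative sketch only; labels and data are hypothetical.
    # years_taught maps each individual to the title II reporting
    # years in which he or she was employed as a teacher of record.
    years_taught = {
        'A': {2018, 2019, 2020, 2021}, 'B': {2018, 2019, 2020},
        'F': {2018, 2019, 2020, 2021}, 'G': {2018, 2019}, 'J': {2018},
        'C': {2019, 2020, 2021, 2022}, 'H': {2019, 2020, 2021, 2022},
        'K': {2019, 2021},
        'D': {2020, 2021, 2022}, 'E': {2021, 2022}, 'I': {2021, 2022},
    }

    def retention_rates(src_year):
        rates = {}
        # Report on the three cohorts of novice teachers immediately
        # preceding the current title II reporting year.
        for cohort_year in range(src_year - 3, src_year):
            cohort = [p for p, yrs in years_taught.items()
                      if min(yrs) == cohort_year]
            if not cohort:
                continue  # no such cohort exists in this example
            # A cohort member is retained if continuously employed
            # from the first year as a novice teacher through the
            # current reporting year.
            retained = [p for p in cohort
                        if all(y in years_taught[p]
                               for y in range(cohort_year, src_year + 1))]
            rates[cohort_year] = round(100.0 * len(retained) / len(cohort))
        return rates

    for year in (2019, 2020, 2021, 2022):
        print(year, retention_rates(year))

The printed results match the rates calculated by hand above: 80
percent in the 2019 SRC; 60 and 67 percent in 2020; 40, 67, and 100
percent in 2021; and 67, 100, and 100 percent in 2022.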
Changes: We have revised the definition of teacher retention rate
by removing the second and third proposed options for calculating it.
We have replaced the first option with a method for calculating the
percentage of novice teachers who have been continuously employed as
teachers of record in each year between their first year as a novice
teacher and the current reporting year. In doing so, we also clarify
that the teacher retention rate is based on the percentage of novice
teachers in each of the three cohorts of novice teachers immediately
preceding the current title II reporting year.
Comments: None.
Discussion: Upon review of comments, we recognized that the data
necessary to calculate teacher retention rate, as we had proposed to
define this term, will not be available for the October 2018, 2019, and
2020 State reports. We have therefore clarified in Sec.
612.5(a)(2)(ii) the reporting requirements for this indicator for these
initial implementation years. In doing so, we have re-designated
proposed Sec. 612.5(a)(2)(ii), which permits States to assess
traditional and alternative route teacher preparation programs
differently based on whether there are specific components of the
programs' policies or structure that affect employment outcomes, as
Sec. 612.5(a)(2)(iii).
Changes: We have added Sec. 612.5(a)(2)(ii) to clarify that: For
the October 2018 State report, the rate does not apply; for the October
2019 State report, the rate is based on the cohort of novice teachers
identified in the 2017-2018 title II reporting year; for the October 2020
State report, separate rates will be calculated for the cohorts of
novice teachers identified in the 2017-2018 and 2018-2019 title II
reporting years. In addition, we have re-designated proposed Sec.
612.5(a)(2)(ii) as Sec. 612.5(a)(2)(iii).
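Expressed as a simple lookup (a sketch using our own shorthand labels
for title II reporting years, not language from the regulations), the
phase-in works out as follows:

    # Sketch of the retention rate phase-in under Sec. 612.5(a)(2)(ii);
    # keys are SRC years and values are the cohorts of novice teachers
    # for which separate rates are reported.
    retention_phase_in = {
        2018: [],                          # rate does not apply
        2019: ['2017-2018'],               # one cohort
        2020: ['2017-2018', '2018-2019'],  # separate rate per cohort
        2021: ['2017-2018', '2018-2019', '2019-2020'],  # full reporting
    }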
Teacher Survey
Comments: Commenters stated that the proposed definition of teacher
survey was unclear about whether all novice teachers or only a sample
of novice teachers must be surveyed. Commenters also stated that the
proposed definition missed an opportunity to collect meaningful data
about teacher preparation program performance because it would only
require a survey of novice teachers serving in full-time teaching
positions for the grade level, span, and subject area in which they
were prepared, and not all the completers of programs. One commenter
noted that Massachusetts plans to collect survey data from recent
graduates upon completion and novice teachers after a year of
employment.
Some commenters provided recommendations regarding survey content.
These commenters argued that the teacher survey include questions to
determine whether a teacher preparation program succeeded in the
following areas, which, according to the commenters, research shows are
important for preparing teachers to advance student achievement:
producing student learning and raising student achievement for all
students; using data to assess and address student learning challenges
and successes; providing differentiated teaching strategies for
students with varied learning needs, including English learners;
keeping students engaged; managing classroom behavior; and using
technology to improve teaching and increase student learning.
Discussion: While the proposed definition of survey outcomes
provided that States would have to survey all novice teachers in their
first year of teaching in the State where their teacher preparation
program is located, our proposed definition of teacher survey limited
this to those teachers in full-time teaching positions. We agree with
the commenters' explanations for why States should survey all
novice teachers, and not just those who are in full-time teaching
positions. For clarity,
[[Page 75528]]
in addition to including the requirement that ``survey outcomes'' be of
all novice teachers, which we have moved from its own definition in
proposed Sec. 612.2 to Sec. 612.5(a)(3), we have revised the
definition of ``teacher survey'' accordingly. We are also changing the
term ``new teacher'' to ``novice teacher'' for the reasons discussed
under the definition of ``novice teacher.''
However, we believe that requiring States to survey all program
completers would put undue burden on States by requiring them to locate
individuals who have not been hired as teachers. Rather, we believe it
is enough that States ensure that surveys are conducted of all novice
teachers who are in their first year of teaching. We note that this
change provides consistency with the revised definition of employer
survey, which is a survey of employers or supervisors designed to
capture their perceptions of whether the novice teachers they employ or
supervise, who are in their first year of teaching, were effectively
prepared. The goal of a teacher preparation program is to effectively
prepare aspiring teachers to step into a classroom prepared to teach.
As the regulations seek to help States reach reasonable determinations
of whether teacher preparation programs are meeting this goal, the
definition of survey outcomes focuses on novice teachers in their first
year of teaching. We note that the regulations do not prohibit States
from surveying additional individuals or conducting their surveys of
cohorts of teachers over longer periods of time, and we encourage
States to consider doing so. However, considering the costs associated
with further surveys of the same cohorts of novice teachers, we believe
that requiring that these teachers be surveyed once, during their first
year of teaching, provides sufficient information about the basic
issue--how well their program prepared them to teach.
We believe that States, in consultation with their stakeholders
(see Sec. 612.4(c)), are in the best position to determine the content
of the surveys used to evaluate the teacher preparation programs in
their State. Therefore, the regulations do not specify the number or
types of questions to be included in employer or teacher surveys.
Changes: We have revised the definition of ``teacher survey'' to
require States to administer surveys to all novice teachers in their
first year of teaching in the State.
Title II Reporting Year
Comments: None.
Discussion: Since its inception, the title II reporting system has
used the term ``academic year'' to refer to a period of twelve
consecutive months, starting September 1 and ending August 31, during
which States collect and subsequently report data on their annual
report cards. This period of data collection and reporting is familiar
to States, institutions, and the public; however, the proposed
regulations did not contain a definition of this reporting period. In
order to confirm that we do not intend for States to implement the
regulations in a way that changes their longstanding practice of using
that ``academic year'' as the period for their data collection and
reporting, we believe that it is appropriate to add a definition to the
regulations. However, to avoid confusion with the very generic term
academic year, which may mean different things at the teacher
preparation program and LEA levels, we instead use the term ``title II
reporting year.''
Changes: We added the term ``title II reporting year'' under Sec.
612.2, and defined it as a period of twelve consecutive months,
starting September 1 and ending August 31.
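Because the definition fixes the reporting year's boundaries, any
calendar date maps to exactly one title II reporting year. The
following minimal sketch illustrates the mapping; the function name
and the hyphenated label format are our own conventions, not
prescribed by the regulations.

    from datetime import date

    def title_ii_reporting_year(d):
        # A title II reporting year runs from September 1 through
        # August 31 of the following calendar year.
        start = d.year if d.month >= 9 else d.year - 1
        return f"{start}-{start + 1}"

    print(title_ii_reporting_year(date(2018, 10, 31)))  # 2018-2019
    print(title_ii_reporting_year(date(2019, 8, 31)))   # 2018-2019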
Subpart B--Reporting Requirements
Section 612.3 What are the regulatory reporting requirements for the
institutional report card?
Timeline of Reporting Requirements (34 CFR 612.3)
Comments: While there was some support for our proposal to change
the IRC due date from April to October, many commenters stated that the
proposed October 2017 pilot start date for the annual reporting cycle
for the IRC, using data pertaining to an institution's programs and
novice teachers for the 2016-2017 academic year, would be unworkable.
Several commenters therefore strongly recommended that our proposal to
move the due date for the IRC up by six months to October following the
end of the institutions' academic year not be implemented.
Commenters said that the change would make it impossible to collect
reliable data on several factors and on large numbers of recent
students. They stated that it would be impossible to submit a final IRC
by October 1 because students take State licensing assessments, as well
as enter into, drop from, and complete programs through August 31, and
therefore final student data, pass rates for students who took
assessments used for teacher certification or licensure by the State,
and other information would not be available until September or October
of each year. Other commenters indicated that, because most teacher
preparation programs will need to aggregate multiple years of data to
meet the program size threshold for reporting, the October submission
date will unnecessarily rush the production and posting of their
aggregated teacher preparation program data. Some commenters noted that
changing the IRC due date to October (for reporting on students and
programs for the prior academic year) would require a change in the
definition of academic year because, without such a change, the October
reports could not reflect scores on assessment tests that students or
program completers took through August 31. Alternatively, the
proposal would require institutions to prepare and submit supplemental
reports later in the year in order for the reports to fully reflect
information for the prior academic year.
Some commenters also stated that LEAs have limited staffing and
cannot provide assistance to institutions during the summer when data
would be collected, or that because teacher hiring often occurs in
August, an October IRC due date does not provide enough time to collect
reliable employment data.
Discussion: We believe that the NPRM confused many commenters,
leading them to believe that IRC reporting would occur in the October
immediately after the end of the title II academic year on August 31.
Rather, we had intended that the reporting would be on the prior year's
academic year (e.g., the October 1, 2018 IRC would report data on the
2016-2017 academic year). However, as we discuss in our response to
comments on our proposals for the timing of the SRC under Sec.
612.4(a)(1)(i) General State Report Card reporting and Sec. 612.4(b)
Timeline, we have decided to maintain the submission date for the SRC
report in October, and so also maintain the due date for the IRC as
April of the year following the title II reporting year.
Finally, while several commenters opined that an October date for
submission of the IRC did not provide sufficient time for institutions
to receive information from LEAs, we do not believe that the
regulations require LEAs to submit any information to institutions for
purposes of the IRC. We assume that the comments were based on a
misunderstanding surrounding the data to be reported in the IRC. While
our proposed indicators of program performance would require States to
receive and report information from LEAs, institutions would not need
to receive comparable information from
[[Page 75529]]
LEAs in order to prepare and submit their IRCs.
Changes: We have revised Sec. 612.3 to provide that the first IRC
under the regulations, which would cover the 2016-2017 academic year,
is due not later than April 30, 2018.
Institutional Report Card (34 CFR 612.3(a))
Comments: Multiple commenters noted that the proposed regulations
regarding the IRCs do not take into account all of the existing
reporting demands, including not only the title II report, but also
reports for national and regional accrediting bodies. Another commenter
stated that, because feedback loops already exist to improve teacher
preparation programs, there is no need to have a Federal report card on
each teacher preparation program.
On the other hand, some commenters suggested that teacher
preparation programs report the demographics and outcomes of enrolled
teacher candidates by race and ethnicity. Specifically, commenters
suggested reporting the graduation rates, dropout rates, placement
rates for graduates, first-year evaluation scores (if available), and
the percentage of teacher candidates who stay within the teaching
profession for one, three, and five years. Another commenter also
suggested that gender, age, grade-level, and specialized areas of study
be included; and that the data be available for cross-tabulation (a
method of analysis allowing comparison of the relationship between two
variables). One commenter stated that because title II reporting
metrics are geared to evaluate how IHEs provide training, recruitment,
and education to first-time graduates of education programs, the
metrics cannot be applied to alternative route certification programs,
which primarily train career changers who already have a degree and
content knowledge. This commenter argued that attempting to compare the
results of title II metrics from alternative route certification
programs and traditional IHE-based programs will result in untrue
conclusions because the programs' student candidates are so different.
Another commenter suggested that, in order to ensure that States
are able to separately report on the performance of alternative route
preparation programs, IHEs should report whether they have a
partnership agreement with alternative route providers, and identify
the candidates enrolled in each of those programs. The commenter noted
that, while doing so may lead States to identify groups of small
numbers of alternative route program participants, it may eliminate the
possibility that candidates who actually participate in alternative
route programs are identified as graduates of a traditional preparation
program at the same IHE.
Another commenter stated that the variety of program academic
calendars, with their different ``start'' and ``end'' dates in
different months and seasons of the year, created another source of
inaccurate reporting. The commenter explained that, with students
entering a program on different dates, the need to aggregate cohorts
will result in diffuse data that have relatively little meaning since
the cohort will lose its cohesiveness. As such, the commenter stated,
the data reported based on aggregate cohorts should not be used in
assessing or evaluating the impact of programs on participants.
A number of commenters noted what they claimed were inherent flaws
in our proposed IRC. They argued that it has not been tested for
validity, feasibility, or unintended consequences, and therefore should
not be used to judge the quality of teacher preparation programs.
Discussion: In response to comments that would have IHEs report
more information on race, ethnicity, sex, and other characteristics of
their students or graduates, we note that the content of the IRC is mandated by
section 205(a) of the HEA. Section 205(a)(C)(ii) of the HEA provides
the sole information that IHEs must report regarding the
characteristics of their students: ``the number of students in the
program (disaggregated by race, ethnicity, and gender).'' Therefore, we
do not have the authority to waive or change the statutorily prescribed
annual reporting requirements for the IRC.
Regarding the recommendation that institutions report whether their
teacher preparation programs have partnership agreements with
alternative route providers, we note that section 205(a) of the HEA
neither provides for IHEs to include this type of information in their
IRCs nor authorizes the Secretary to add reporting elements to them.
However, if they choose, States could require institutions to report
such data to them for inclusion in the SRCs. We defer to States on
whether they need such information and, if so, the best way to require
IHEs to provide it.
In response to the comment that the IRC is unnecessary because
institutions already have feedback loops for program improvement, we
note that by requiring each institution to make the information in the
IRC available to the general public Congress plainly intends that the
report serve a public interest that goes beyond the private use the
institution may make of the reported data. We thus disagree that the
current feedback loops that IHEs may have for program improvement
satisfy Congress' intent in this regard.
We understand that there are differences between traditional and
alternative route teacher preparation programs and that variability
among programs in each category (including program start and end dates)
exists. However, section 205(a) of the HEA is very clear that an IHE
that conducts either a traditional or alternative route teacher
preparation program must submit an IRC that contains the information
Congress has prescribed. Moreover, we do not agree that the
characteristics of any of these programs, specifically the demographics
of the participants in these programs or whether participants have
already earned an undergraduate degree, would necessarily lead to
inaccurate or confusing reporting of the information Congress requires.
Nor do we believe that the IRC reporting requirements are so geared to
evaluate how IHEs provide training, recruitment, and education to
first-time graduates of education programs that IHEs operating
alternative route programs cannot explain the specifics of their
responses.
We do acknowledge that direct comparisons of traditional and
alternative route programs would potentially be misleading without
additional information. However, this is generally true for comparisons
of all types of programs. For example, a comparison of average cost of
tuition and fees between two institutions could be misleading without
the additional context of the average value of financial aid provided
to each student. The fact that analyzing specific data out of context
could potentially generate confusion does not diminish the value of
reporting the information to the general public that, as we have noted,
Congress requires.
With specific regard to the fact that programs have different
operating schedules, the IRC would have all IHEs report on students
participating in teacher preparation programs during the reporting year
based on their graduation date from the program. This would be true
regardless of the programs' start date or whether the students have
previous education credentials. We also believe the IRC would become
too cumbersome if we tried to tailor the specific reporting
requirements in section 205(a) of the HEA to address and reflect each
individual program start time, or if the regulations created different
reporting structures based on the program start time or the previous
[[Page 75530]]
career or educational background of the program participants.
Furthermore, we see no need for any testing of data reported in the
IRC for validity, feasibility, or unintended consequences. The data
required by these regulations are the data that Congress has specified
in section 205(a) of the HEA. We do not perceive the data elements in
section 205(a) as posing any particular issues of validity. Just as
they would in any congressionally mandated report, we expect all
institutions to report valid data in their IRCs and, if data quality
issues exist we expect institutions will address them so as to meet
their statutory obligations. Further, we have identified no issues with
the feasibility of reporting the required data. While we have worked to
simplify institutional reporting, institutions have previously reported
the same or similar data in their IRCs, albeit at a different level of
aggregation. Finally, we fail to see any unintended consequences that
follow from meeting the statutory reporting requirements. To the extent
that States use the data in the IRC to help assess whether a program is
low-performing or at-risk of being low-performing under section 207(a)
of the HEA, under our regulations this would occur only if, in
consultation with their stakeholders under Sec. 612.4(c), States
decide to use these data for this purpose. If institutions are
concerned about such a use of these data, we encourage them to be
active participants in the consultative process.
Changes: None.
Prominent and Prompt Posting of Institutional Report Card (34 CFR
612.3(b))
Comments: Multiple commenters supported the requirement to have
each IHE post the information in its IRC on its Web site and, if
applicable, on the teacher preparation program's own Web site. Based on
the cost estimates in the NPRM, however, several commenters raised
concerns about the ability of IHEs to do so.
Discussion: We appreciate the commenters' support for our proposal
as an appropriate and efficient way for IHEs to meet their statutory
responsibility to report annually the content of their IRCs to the general
public (see section 205(a)(1) of the HEA).
We discuss the comments regarding concerns about the cost estimates
in the IRC Reporting Requirements section of the Discussion of Costs,
Benefits, and Transfers in this document.
Changes: None.
Availability of Institutional Report Card (34 CFR 612.3(c))
Comments: One commenter recommended that we mandate that each IHE
provide the information contained in its IRC in promotional and other
materials it makes available to prospective students, rather than
leaving it to the discretion of the institution.
Discussion: While we believe that prospective students or
participants of a teacher preparation program need to have ready access
to the information in the institution's IRC, we do not believe that
requiring the IHE to provide this information in its promotional
materials is either reasonable or necessary. We believe that the costs
of doing so would be very large and would likely outweigh the benefits.
For example, many institutions may make large printing orders for
pamphlets, brochures, and other promotional materials that get used
over the course of several years. Requiring the inclusion of IRC
information in those materials would require that institutions both
make these promotional materials longer and print them more often. As
the regulations already mandate that this information be prominently
posted on the institution's Web site, we fail to see a substantial
benefit to prospective students that outweighs the additional cost to
the institution.
However, while not requiring the information to be included in
promotional materials, we encourage IHEs and their teacher preparation
programs to provide it in places that prospective students can easily
find and access. We believe IHEs can find creative ways to go beyond
the regulatory requirements to provide this information to students and
the public without incurring significant costs.
Changes: None.
Section 612.4 What are the regulatory reporting requirements for the
State report card?
General (34 CFR 612.4(a))
Comments: None.
Discussion: As proposed, Sec. 612.4(a) required all States to meet
the annual reporting requirements. For clarity, we have revised this
provision to provide, as does section 205(b) of the HEA, that all
States that receive HEA funds must do so.
Changes: We have revised Sec. 612.4(a) to provide that all States
that receive funds under the HEA must meet the reporting requirements
under these regulations.
General (Timeline) (34 CFR 612.4(a)(1)(i)) and Reporting of Information
on Teacher Preparation Program Performance (Timeline) (34 CFR 612.4(b))
Comments: Many commenters expressed concern with their State's
ability to build data systems and to collect and report the required
data under the proposed timeline.
Commenters noted that the proposed timeline does not allow States
enough time to implement the proposed regulations, and that the
associated logistical challenges impose undue and costly burdens on
States. Commenters noted that States need more time to make decisions
about data collection, involve stakeholders, and to pilot and revise
the data systems--activities that they said cannot be completed in one
year.
Several commenters recommended extending the timeline for
implementation by at least five years. Some commenters suggested
delaying the reporting of program ratings until at least 2021 to give
States more time to create data linkages and validate data. Other
commenters pointed out that their States receive employment and student
learning data from LEAs in the fall or winter, which they said makes
reporting outcomes in their SRCs in October of each year, as we had
proposed, impossible. Still other commenters noted that some data, by
their nature, may not be available to report by October. Another
commenter suggested that institutions should report in October, States
should report outcome data (but not performance designations) in
February, and then the States should report performance designations in
June, effectively creating an additional reporting requirement. To
address the timing problems in the proposed schedule for SRC
submission, other commenters recommended that the Department continue
having States submit their SRCs in October. On the other hand, some
commenters supported or encouraged the Department to maintain the
proposed timelines.
Many commenters stated that no State currently implements the
proposed teacher preparation program rating system. Therefore, to
evaluate effectiveness, or to uncover unintended consequences, these
commenters emphasized the importance of permitting States to develop
and evaluate pilot programs before broader implementation. Some
commenters therefore recommended that the proposed implementation
timeline be delayed until the process had been
[[Page 75531]]
piloted and evaluated for efficiency, while others recommended a
multiyear pilot program.
Discussion: We appreciate the comments supporting the proposed
reporting timeline changes to the SRC. However, in view of the public's
explanation of problems that our proposed reporting schedule could
cause, we are persuaded that the title II reporting cycle should remain
as currently established--with the institutions submitting their IRCs
in April of each year, and States submitting their SRCs the following
October. IHEs and States are familiar with this schedule, and we see
that our proposal to switch the reporting dates, while having the
theoretical advantage of permitting the public to review information
much earlier, was largely unworkable.
Under the final regulations, the initial SRC (a pilot) would be due
October 31, 2018, for the 2016-2017 academic year. The October 2018 due
date provides much more time for submission of the SRC. As we note in
the discussion of comments received on Sec. 612.3(a) (Reporting
Requirements for the IRC), IHEs will continue to report on their
programs, including pass rates for students who took assessments used
for initial certification or licensure by the State in which the
teacher preparation program is located, from the prior academic year,
by April 30 of each year. States therefore will have these data
available for their October 31 reporting. Because the outcome data
States will need to collect to help assess the performance of their
teacher preparation programs (i.e., student learning outcomes,
employment outcomes, and survey outcomes) would be collected on novice
teachers employed by LEAs from the prior school year, these data would
likewise be available in time for the October 31 SRC reporting. Given
this, we believe all States will have enough time by October 31 of each
year to obtain the data they need to submit their SRCs. In addition,
since States are expected to periodically examine the quality of their
data collection and reporting under Sec. 612.4(c)(2), we expect that
States have a process by which to make modifications to their system if
they desire to do so.
By maintaining the current reporting cycle, States will have a year
(2016-2017) to design and implement a system. The 42 States, District
of Columbia, and the Commonwealth of Puerto Rico that were previously
granted ESEA flexibility are therefore well positioned to meet the
requirements of these regulations because they either already have the
systems in place to measure student learning outcomes or have worked to
do so. Moreover, with the flexibility that Sec. 612.5(a)(1)(ii) now
provides for States to measure student learning outcomes using student
growth, a teacher evaluation measure, or another State-determined
measure relevant to calculating student learning outcomes (or any
combination of these three), all States should be able to design and
implement their systems in time to submit their initial reports by
October 31, 2018. Additionally, at least 30 States, the District of
Columbia, and the Commonwealth of Puerto Rico already have the
ability to aggregate data on the achievement of students taught by
recent graduates and link those data back to teacher preparation
programs. Similarly, as discussed below, 30 States already implement
teacher surveys that could be modified to be used in this
accountability system.
Particularly given the added flexibility in Sec. 612.5(a)(1)(ii),
as most States already have or are well on their way to having the
systems required to implement the regulations, we are confident that
the reduced time to prepare before the pilot SRC must be prepared
and submitted will prove manageable. We understand that some
States will not have complete datasets available for all indicators
during initial implementation, and so may need to make adjustments
based on experience during the pilot year. We also stress that the
October 2018 SRC is a pilot report; any State identification of a
program as low-performing or at-risk of being low-performing included
in that report would not have implications either for the program
generally or for that program's eligibility to participate in the TEACH
Grant program. Full SRC reporting begins in October 2019.
In addition, maintaining the SRC reporting date of October 31 also
is important so that those who want to apply for admission to teacher
preparation programs and for receipt of TEACH Grants as early as
January of the year they wish to begin the program know which IHEs have
programs that States have identified in their SRCs as at-risk or low-
performing. Prospective students should have this information as soon
as they can so that they know both the State's assessment of each
program's level of performance and which IHEs lack authority to award
TEACH Grants. See our response to public comment regarding the
definition of a TEACH Grant-eligible institution in Sec. 686.2.
In summary, under our revised reporting cycle, the SRC is due about
five months earlier than in the proposed regulations. However, because
the report due October 31, 2018 is a pilot report, we believe that
States will have sufficient time to complete work establishing their
reporting and related systems to permit submission of all information
in the SRC by the first full reporting date of October 31, 2019. While
we appreciate the comments suggesting that States be able to develop
and evaluate pilot programs before broader implementation, or that the
implementation timeline be delayed until the State process has been
piloted and evaluated for efficiency, we do not believe that adding
more time for States to develop their systems is necessary. Lastly,
maintaining the existing timeline does not affect the timing of
consequences for TEACH Grants for at-risk or low-performing teacher
preparation programs. Under the regulations, the TEACH Grant
consequences would apply for the 2021-2022 award year.
Changes: We have revised Sec. 612.4(a) to provide that State
reports under these final regulations would be due on October 31, 2018.
We also changed the date for SRC reporting to October wherever it
appears in the final regulations.
    Comments: Some commenters expressed concern with States' ability to
implement valid and reliable surveys in the time provided. Commenters
argued that questions of whom to survey, when to survey, and how often
to survey would make this the most challenging performance indicator to
develop, implement, and use in determining a program's performance
level. Commenters stated that an institution's capacity to track
graduates accurately and completely depends heavily on the existence of
sophisticated State data systems that track teacher employment and on
appropriate incentives to ensure high survey response rates, noting
that many States do not have such systems in place and some are just
beginning to implement them. Commenters suggested that the Department
ease the timeline for implementing surveys to reduce their cost and
burden.
Discussion: According to the GAO survey of States, 30 States have
used surveys that assessed principals' and other district personnel's
satisfaction with recent traditional teacher preparation program
graduates when evaluating programs seeking State approval.\14\ We
believe these States can modify these existing survey instruments to
develop teacher and
employer surveys that comply with the regulations without substantial
additional burden. Additionally, States that do not currently use such
surveys may be able to shorten the time period for developing their own
surveys by using whole surveys or individual questions already employed
by other States as a template. States may also choose to shorten the
time required to analyze survey results by focusing on quantitative
survey responses (e.g., score on a Likert scale or number of hours of
training in a specific teaching skill) rather than taking the time to
code and analyze qualitative written responses. However, we note that,
in many instances, qualitative responses may provide important
additional information on program quality. As such, States could opt to
include qualitative questions in their surveys and send the responses
to the applicable teacher preparation programs for their own analysis.
With a far smaller set of responses to analyze, individual programs
would be able to review and respond much more quickly than the State.
However, these are decisions left to the States and their stakeholders
to resolve.
---------------------------------------------------------------------------
\14\ GAO at 13.
---------------------------------------------------------------------------
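    For illustration only, the following minimal sketch (in Python)
shows the quantitative approach described above: averaging Likert-scale
survey items by program. The program identifiers, survey items, and
1-to-5 scale are hypothetical assumptions for the example; the
regulations do not prescribe any particular analysis.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical survey records: (program_id, survey_item, Likert score 1-5).
    responses = [
        ("prog_a", "classroom_management", 4),
        ("prog_a", "classroom_management", 5),
        ("prog_a", "content_knowledge", 3),
        ("prog_b", "classroom_management", 2),
    ]

    # Group scores by (program, item) so each program is scored on each item.
    scores = defaultdict(list)
    for program, item, score in responses:
        scores[(program, item)].append(score)

    # Report the mean score and response count per program and item; qualitative
    # comments would instead be forwarded to the program for its own analysis.
    for (program, item), values in sorted(scores.items()):
        print(f"{program} {item}: mean={mean(values):.2f} (n={len(values)})")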
Changes: None.
Comments: A number of commenters indicated confusion about when
particular aspects of the proposed IRC and SRC are to be reported and
recommended clarification.
Discussion: We agree with the recommendation to clarify the
reporting of cohorts and metrics for reporting years. The chart below
outlines how certain metrics will be reported and the reporting
calendar. We understand that the information reported on the SRC may
differ from the example provided below because initially some data may
be unavailable or incomplete. In these instances, we expect that States
will weight the indicators for which data are unavailable in a way that
is consistent and applies equivalent levels of accountability across
programs (one possible approach is sketched following the table).
                 Table 3--Implementation and Reporting Calendar Example
--------------------------------------------------------------------------------------------------------------
 Year                     2018              2019              2020              2021              2022
--------------------------------------------------------------------------------------------------------------
                                      Institutional Report Card (IRC)
--------------------------------------------------------------------------------------------------------------
 IRC Due Date             April 30, 2018    April 30, 2019    April 30, 2020    April 30, 2021    April 30, 2022
 Pass Rate                Recent graduates  Recent graduates  Recent graduates  Recent graduates  Recent graduates
                          (AY 2016-17)      (AY 2017-18)      (AY 2018-19)      (AY 2019-20)      (AY 2020-21)
--------------------------------------------------------------------------------------------------------------
                                          State Report Card (SRC)
--------------------------------------------------------------------------------------------------------------
 SRC Due Date             October 31, 2018  October 31, 2019  October 31, 2020  October 31, 2021  October 31, 2022
                          (Pilot)
 Placement Rate           C1                C1, C2            C1, C2, C3        C2, C3, C4        C3, C4, C5
 Retention Rate           N/A               C1                C1, C2            C1, C2, C3        C2, C3, C4
 Student Learning
   Outcomes               C1                C1, C2            C1, C2, C3        C2, C3, C4        C3, C4, C5
 Survey Outcomes          C1                C2                C3                C4                C5
--------------------------------------------------------------------------------------------------------------
                                             TEACH Eligibility
--------------------------------------------------------------------------------------------------------------
 TEACH Grant Consequences Not impacted      Not impacted      Impacts 2021-22   Impacts 2022-23   Impacts 2023-24
                                                              Award Year        Award Year        Award Year
--------------------------------------------------------------------------------------------------------------
Academic Year (AY): The title II academic year runs from September 1 to August 31.
Award year: The title IV award year runs from July 1 to June 30.
Note: Data systems are to be designed and implemented during the 2016-17 school year.
C1: Cohort 1, novice teachers whose first year in the classroom is 2017-18.
C2: Cohort 2, novice teachers whose first year in the classroom is 2018-19.
C3: Cohort 3, novice teachers whose first year in the classroom is 2019-20.
C4: Cohort 4, novice teachers whose first year in the classroom is 2020-21.
C5: Cohort 5, novice teachers whose first year in the classroom is 2021-22.
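    For illustration only, a minimal sketch (in Python) of one
consistent way a State might weight indicators when some data are
unavailable, as discussed before Table 3: renormalize the weights over
the indicators a program actually reports. The indicator names and
weights are hypothetical; the regulations leave the weighting scheme to
each State and its stakeholders.

    # Hypothetical indicator weights; these are not prescribed by the regulations.
    WEIGHTS = {"student_learning": 0.4, "employment": 0.3, "survey": 0.3}

    def composite_score(indicator_values):
        """Renormalize the weights over the indicators actually reported so that
        programs with a missing indicator are scored on an equivalent basis."""
        available = {k: v for k, v in indicator_values.items() if v is not None}
        if not available:
            raise ValueError("no indicator data reported")
        total_weight = sum(WEIGHTS[k] for k in available)
        return sum(WEIGHTS[k] * v for k, v in available.items()) / total_weight

    # During the pilot year a program's survey outcomes might be unavailable
    # (None); the program is still scored on the same 0-to-1 scale as others.
    print(composite_score({"student_learning": 0.8, "employment": 0.7, "survey": None}))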
Changes: None.
Comments: To reduce information collection and dissemination burden
on States, a commenter asked that the Department provide a mechanism
for rolling up IRC data into the State data systems.
Discussion: The Department currently provides a system by which all
IHEs may electronically submit their IRC data, and which also
prepopulates the SRC with relevant information from the IRCs. We intend
to continue to provide this system.
Changes: None.
Comments: Some commenters stated that States should be able to
replace the SRC reporting requirements in these regulations with their
own State-defined accountability and improvement systems for teacher
preparation programs.
Discussion: We disagree that States should be able to replace the
SRC reporting requirements with their own State-defined accountability
and improvement systems for teacher preparation programs. Section
205(b) of the HEA requires reporting of the elements in the SRC by any
State that receives HEA funding. The measures included in the
regulations are either specifically required by that provision or are
needed to give reasonable meaning to the statutorily required
indicators of academic content knowledge and teaching skills a State
must use to assess a teacher preparation program's performance.
However, Sec. 612.5(b) specifically permits a State to assess a
program's performance using additional indicators predictive of a
teacher's effect on student performance, provided that it uses the same
indicators for all teacher preparation programs in the State. Following
stakeholder consultation (see Sec. 612.4(c)), States are free to adopt
criteria for assessing program performance beyond those addressed in
the regulations.
Changes: None.
Comments: Some commenters recommended that the Department provide
adequate time for States to examine and address the costs of tracking
student progress and academic
gains for teacher preparation program completers who teach out of
State.
Discussion: Section 612.5(a)(1) has been revised to clarify that
States may exclude data regarding teacher performance, or student
academic progress or growth, for calculating a program's student
learning outcomes for novice teachers who teach out of State (and who
teach in private schools). See also the discussion of comments for
Sec. 612.5(a)(1) (student learning outcomes). To the extent that
States wish to include this information, they can continue to pilot and
analyze data collection quality and methodology for a number of years
before including it in their SRCs.
Changes: None.
Comments: One commenter specifically recommended laddering in the
proposed performance criteria only after norming has occurred. We
interpret this comment to mean that States should have time to collect
data on the required indicators for multiple years on all programs and
use those data to establish specific thresholds for acceptable program
performance on each indicator. This would require a longer timeline
before using the indicators to assess program performance than the
Department had proposed.
    Discussion: We will not require ``laddering in'' the criteria in
Sec. 612.5 only after norming has occurred, as the commenter
suggested, because we believe that States should be able to set
identifiable targets for these criteria without respect to the current
distribution of program performance on an indicator (e.g., a teacher
retention rate of less than 50 percent as an indicator of low
performance). These regulations are not intended to have States
identify any particular percentage of teacher preparation programs as
low-performing or at-risk of being low-performing. Rather, while they
establish indicators that each State will use and report, they leave
the process by which a State determines a teacher preparation program's
overall rating to the discretion of the State and its consultative
group. If States wish to
incorporate norming, norming around specific performance thresholds
could be completed during the pilot year and, over time, performance
thresholds can be adjusted during the periodic examinations of the
evaluation systems that States must conduct.
Changes: None.
Comments: Some commenters noted that having States assess the
performance of teacher preparation programs on a yearly basis seems
likely to drain already limited State and institutional resources.
Discussion: Section 207(a) of the HEA expressly requires States to
provide an ``annual list of low-performing [and at-risk] teacher
preparation programs.'' We believe that Congress intended the State
program assessment requirement itself also to be met annually. While we
have strived to develop a system that keeps costs manageable, we also
believe that the improvement of teacher preparation programs and
consumers' use of information in the SRC on program performance
necessitate both annual reporting and program determinations.
Changes: None.
    Comments: A number of commenters stated that the availability of
student growth and achievement data derived from State assessment
results and district-determined measures is subject to State
legislative requirements and that, if the legislature changes those
requirements, the State assessments given or the times at which they
are administered could be drastically affected. One commenter stated
that, because the
State operates on a biennial budget cycle, it could not request
authority for creating the administrative position the State needs to
comply with the proposed regulations until the 2017-2019 budget cycle.
Discussion: We understand that the availability of data States will
need to calculate student learning outcomes for student achievement in
tested grades and subjects depends to some extent on State legislative
decisions to maintain compatible State assessments subject to section
1111(b)(2) of the ESEA, as amended by the ESSA. But we also assume that
State legislatures will ensure that their States have the means to
comply with this Federal law, as well as the means to permit the State
to calculate student growth based on the definition of ``student
achievement in tested grades and subjects'' in Sec. 612.2. Moreover,
we believe that our decision to revise Sec. 612.5(a)(1)(ii) to include
an option for States to use ``another State-determined measure relevant
to calculating student learning outcomes'' should address the
commenters' concerns.
    In addition, the commenter who raised concerns based on the State's
biennial budget cycle did not provide enough information to permit us
to consider why that cycle necessarily bars the State's compliance with
these regulations.
Changes: None.
Program-Level Reporting (Including Distance Education) (34 CFR
612.4(a)(1)(i))
Comments: Some commenters supported the shift to reporting at the
individual teacher preparation program rather than at the overall
institutional level. A couple of commenters agreed that States should
perform assessments of each program, but be allowed to determine the
most appropriate way to include outcomes in the individual program
determinations, including determining how to roll up outcomes from the
program level to the entity level. Other commenters noted that States
should be required to report outcomes by the overall entity, rather
than by the individual program, because such reporting would increase
the reliability of the measures and would be less confusing to
students. Some commenters expressed concerns that only those programs
that have data demonstrating their graduates' effectiveness in the
public schools in the State where the institution is located would
receive a top rating, and entity-level reporting and rating would
reduce this concern. These commenters suggested that, if States report
by entity, States could report the range in the data across programs in
addition to the median, or report data by quartile, which would make
the differences within an entity transparent while maintaining
appropriate thresholds.
Commenters also stated that there are too many variations in
program size and, as we understand the comment, in the way States
credential their teacher preparation programs to mandate a single
Federal approach to disaggregated program reporting for the entire
Nation.
Discussion: We appreciate the comments supporting the shift to
reporting at the program level. The regulations provide extensive
flexibility to States to determine how to measure and use outcomes in
determining program ratings. If a State wishes to aggregate program
level outcomes to the entity level, it is free to do so, though such
aggregation would not replace the requirements to report at the program
level unless the program (and the method of aggregation) meets the
small-size requirements in Sec. 612.4(b)(3)(ii). Regarding the comment
that reporting at the institutional level is more reliable, we note
that the commenter did not provide any additional context for this
statement, though we assume this statement is based on a generalized
notion that data for the institution as a whole might be more robust
because of the overall institution's much larger number of recent
graduates. While we agree that aggregation at a higher level would
generate more data for each indicator, we believe that the program size
threshold in Sec. 612.4(b)(3) sufficiently addresses this concern
while also ensuring that the general public and prospective students
have access to data that are as specific as
possible to the individual programs operated by the institution.
We fail to understand how defining a teacher preparation program as
we have, in terms of initial State teacher certification or licensure
in a specific field, creates concerns that top ratings would only go to
programs with data showing the effectiveness of graduates working in
public schools in the State. So long as the number of novice teachers
the program produces meets the minimum threshold size addressed in
Sec. 612.4(b)(3) (excluding, at the State's discretion, teachers
teaching out of State and in private schools from determinations of
student learning outcomes and teacher placement and retention rates as
permitted by Sec. 612.5(a)(1) and Sec. 612.2, respectively), we are
satisfied that the reporting of program information will be
sufficiently robust and obviate concerns about data reliability.
Moreover, we disagree with the comments that students would find
reporting of outcomes at the institution level less confusing than
reporting at the teacher preparation program level. We believe students
want information about teacher preparation programs that are specific
to the areas in which they want to teach so they can make important
educational and career decisions, such as whether to enroll in a
specific teacher preparation program. This information would be
presented most clearly at the teacher preparation program level rather
than at the institutional level, where many programs would be collapsed
such that a student would not only lack information about whether a
specific program in which she is interested is low-performing or at
risk of being low-performing, but also be unable to review data
relative to indicators of the program's performance.
We also disagree with the claim that program level reporting as
required under these regulations is inappropriate due to the variation
in program size and structure across and within States. Since the
commenters did not provide an example of how the requirements of these
regulations make program level reporting impossible to implement, we
cannot address these concerns more specifically than to say that,
because the use of indicators of program performance will generate
information unique to each program, we fail to see why variation in
program size and structure undermines these regulations.
Changes: None.
Comments: There were many comments related to the reporting of
information for teacher preparation programs provided through distance
education. Several commenters indicated that the proposed regulations
are unclear on how the reporting process would work for distance
education programs large enough to meet a State's threshold for
inclusion on their report card (see Sec. 612.4(b)(3)), but that lack a
physical presence in the State. Commenters indicated that, under our
proposed regulations, States would need to identify out-of-State
institutions (and their teacher preparation programs) that are serving
individuals within their borders through distance education, and then
collect the data, analyze them, and provide assessments of these
programs operated from other States. Thus, commenters noted, States may need
more authority either through regulatory action or legislation to be
able to collect information from institutions over which they do not
currently have authority.
    Commenters also requested that the Department clarify what would
happen to distance education programs and their currently enrolled
students if multiple States were to assess a single program's
effectiveness and reach differing results. One commenter
suggested a ``home State'' model in which, rather than developing
ratings for each program in each State, all of a provider's distance
education programs would be evaluated by the State in which the
provider, as opposed to the program participants, is physically
located. The commenter argued that this model would increase the
reliability of the measures and decrease student confusion, especially
where comparability of measures between States is concerned. Unless
such a home State model is adopted, the commenter argued, other States
may discriminate against programs physically located and operated in
other States by, as we understand the comment, using the process of
evaluating program performance to create excessive barriers to entry in
order to protect in-State institutions. Another commenter asked that
the proposed regulations provide a specific definition of the term
``distance education.''
Several commenters expressed support for the change to Sec.
612.4(a)(1)(ii) proposed in the Supplemental NPRM, which would require
that reporting on the quality of all teacher preparation programs
provided through distance education in the State be made by using
procedures for reporting that are consistent with Sec. 612.4(b)(4),
but based on whether the program produces at least 25 or fewer than 25
new teachers whom the State certified to teach in a given reporting
year.
While commenters indicated that reporting on hybrid teacher
preparation programs was a complicated issue, commenters did not
provide recommendations specific to two questions regarding hybrid
programs that were posed in the Supplemental NPRM. The first question
asked under what circumstances, for purposes of both reporting and
determining the teacher preparation program's level of overall
performance, a State should use procedures applicable to teacher
education programs offered through distance education and when it
should use procedures for teacher preparation programs provided at
brick-and-mortar institutions. Second, we asked, for a single program,
if one State uses procedures applicable to teacher preparation programs
provided through distance education and another State uses procedures
for teacher preparation programs provided at brick-and-mortar
institutions, what the implications would be, especially for TEACH
Grant eligibility, and how these inconsistencies should be addressed.
    In response to our questions, many commenters indicated that it was
unclear how to determine whether a teacher preparation program should
be classified as a teacher preparation program provided through
distance education for reporting under Sec. 612.4(a)(1)(ii) and asked
for clarification of the circumstances under which a program should be
so classified. One commenter recommended that we define a teacher
preparation program provided through distance education as one in which
the full and complete program can be completed without an enrollee ever
being physically present at the brick-and-mortar institution or any of
its branch offices.
Commenters expressed a number of concerns about reporting. Some
commenters indicated that while the December 3, 2014, NPRM allowed
States to report on programs that produced fewer than 25 new teachers,
it was unclear whether the same permission would be applied to distance
education programs through the Supplemental NPRM. Additionally, a few
commenters thought that, in cases where students apply for
certification in more than one State, the outcomes of a single student
could be reported multiple times by multiple States. Other commenters
felt that if States are expected to evaluate distance education
graduates from other States' programs, the regulations should be
revised to focus on programs that are
tailored to meet other States' requirements. A commenter suggested that
the State in which a distance education program is headquartered should
be responsible for gathering the data reported by the other States in
which the program operates and then, using its own data along with
those other States' data, making the determination as to the program's
performance rating. Doing so would establish one rating for each
distance education program, issued by the State in which it is
headquartered. The commenter stated that this would create a simplified
rating system similar to that used for brick-and-mortar institutions.
Another commenter stated that
the proposed approach would force the States to create a duplicative
and unnecessary second tracking system through their licensure process
for graduates of their own teacher preparation programs provided
through distance education who remain in the State.
    Many commenters voiced concerns related to the identification and
tracking of teacher preparation programs provided through distance
education. Specifically, commenters indicated that, because the method
by which a teacher preparation program is delivered is not noted on
transcripts or otherwise officially recorded on educational
credentials, the receiving State
(the State where the teacher has applied for certification) has no way
to distinguish teacher preparation programs provided through distance
education from brick-and-mortar teacher preparation programs.
Furthermore, receiving States would not be able to readily distinguish
individual teacher preparation programs provided through distance
education from one another.
Finally, a commenter stated that the proposed regulations do not
require States to provide any notice of their rating, and do not
articulate an appeal process to enable institutions to challenge,
inspect, or correct the data and information on the basis of which they
might have received an adverse rating. Commenters also indicated that
teacher preparation programs themselves should receive data on States'
student and program evaluation criteria.
    Discussion: Regarding comments that the regulations need to
describe how teacher preparation programs provided through distance
education should be reported, we intended for a State to
report on these programs operating in that State in the same way it
reports on the State's brick-and-mortar teacher preparation programs.
We appreciate commenters' expressions of support for the change to
the proposed regulations under Sec. 612.4(a)(1)(ii), as proposed in
the Supplemental NPRM, requiring that reporting on the quality of all
teacher preparation programs provided through distance education in the
State be made by using procedures for reporting that are consistent
with proposed Sec. 612.4(b)(4), but based on whether the program
produces at least 25 or fewer than 25 new teachers whom the State
certified to teach in a given reporting year. In considering the
language of proposed Sec. 612.4(a)(1)(ii) and the need for clarity on
the reporting requirements for teacher preparation programs provided
through distance education, we have concluded that the provision would
be simpler if it incorporated by reference the reporting requirements
for those programs in Sec. 612.4(b)(3) of the final regulations.
    While we agree with the commenters who stated that the proposed
regulations were unclear on what constitutes a teacher preparation
program provided through distance education, we decline to accept the
recommendation to define such a program as one in which the full and
complete program can be completed without an enrollee ever being
physically present at the brick-and-mortar institution or any of its
branch offices, because this definition would not include teacher
preparation programs providing significant portions of the program
through distance education. In addition, the proposed definition would
allow a teacher preparation program to easily modify its requirements
so that it would not be considered a teacher preparation program
provided through distance education.
Instead, in order to clarify what constitutes a teacher preparation
program provided through distance education, we are adding the term
``teacher preparation program provided through distance education'' to
Sec. 612.2 and defining it as a teacher preparation program in which
50 percent or more of the program's required coursework is offered
through distance education. The term distance education is defined
under 34 CFR 600.2 to mean education that uses one or more specified
technologies to deliver instruction to students who are separated from
the instructor and to support regular and substantive interaction
between the students and the instructor, either synchronously or
asynchronously. The technologies may include the internet; one-way and
two-way transmissions through open broadcast, closed circuit, cable,
microwave, broadband lines, fiber optics, satellite, or wireless
communications devices; audio conferencing; or video cassettes, DVDs,
and CD-ROMs, if the cassettes, DVDs, or CD-ROMs are used in a course in
conjunction with any of the technologies previously in this definition.
We have incorporated this definition by reference (see Sec. 612.2(a)).
In the Supplemental NPRM, we specifically requested public comment
on how to determine when a program that has both brick-and-mortar and
distance education components should be considered a teacher
preparation program provided through distance education. While we
received no suggestions, we believe that it is reasonable that if 50
percent or more of a teacher preparation program's required coursework
is offered through distance education, it should be considered a
teacher preparation program provided through distance education because
the majority of the program is offered through distance education. This
50 percent threshold is consistent with thresholds used elsewhere in
Departmental regulations, such as those relating to correspondence
courses under 34 CFR 600.7 or treatment of institutional eligibility
for disbursement of title IV HEA funds for additional locations under
34 CFR 600.10(b)(3).
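    For illustration only, a minimal sketch of the 50 percent test,
under the assumption that a program's required coursework can be
expressed in credit hours (the regulations speak only of ``required
coursework,'' so the unit here is hypothetical):

    def is_distance_education_program(distance_credits, total_required_credits):
        """Apply the Sec. 612.2 definition: a program is provided through distance
        education if 50 percent or more of its required coursework is offered
        through distance education."""
        return distance_credits / total_required_credits >= 0.5

    # A program with 18 of its 36 required credits online meets the threshold.
    print(is_distance_education_program(18, 36))  # True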
In addition, we do not agree with the suggestion for a ``home
State'' reporting model, in which all of a provider's distance
education programs would be evaluated by the State in which the
provider is physically located. First, section 205(b) of the HEA
requires States to report on the performance of their teacher
preparation programs. We feel strongly both that, to date, defining the
program at the institutional level has not produced meaningful results,
and that where programs provided through distance education prepare
individuals to teach in different States, those States--and not only
the ``home State''--should assess those programs' performance. In
addition, we believe that each State should, as the law anticipates,
speak for itself about what it concludes is the performance of each
teacher preparation program provided through distance education
operating within its boundaries. Commenters did not provide any
evidence to support their assertion that States would discriminate
against distance learning programs physically located in other States,
nor do we understand how they would do so if, as Sec. 612.4(a)
anticipates, they develop and apply the same set of criteria (taking
into consideration the need to have different employment
outcomes as provided in Sec. 612.4(b)(2) given the nature of these
programs) for assessing the performance of brick-and-mortar programs
and programs provided through distance education.
Regarding reporting concerns, we provide under Sec. 612.4(b)(3)(i)
for annual reporting on the performance of each teacher preparation
program that produces a total of 25 or more recent graduates in a given
reporting year (that is, a program size threshold of 25), or, at the
State's discretion, a lower program size threshold (e.g., 15 or 20).
Thus, States can use a lower threshold than the 25 recent graduates. We
do not agree that in cases where students apply for certification in
more than one State, a single student would necessarily be counted
multiple times. For calculations of the placement rate for a program
provided through distance education, a student who teaches in one State
but holds teaching certification in that State and others would be
included in the denominator of the placement rates calculated by those
other States only if those States chose not to exclude recent graduates
teaching out of State from their calculations.
(The same would be true of graduates of brick-and-mortar programs.) But
those other States would only report and use a placement rate in
assessing the performance of programs provided through distance
education if they have graduates of those programs who are certified in
their States (in which case the program size threshold and aggregation
procedures in Sec. 612.4(b) would apply).
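    For illustration only, the following sketch (in Python) makes the
denominator logic concrete: a State that excludes recent graduates
teaching out of State drops them from both the numerator and the
denominator of its placement rate, so a graduate certified in several
States is counted only where he or she actually teaches. The records
and field names are hypothetical.

    # Hypothetical records of where (if anywhere) each recent graduate teaches.
    graduates = [
        {"id": 1, "teaching_state": "MD"},   # placed in this State
        {"id": 2, "teaching_state": "VA"},   # placed, but out of State
        {"id": 3, "teaching_state": None},   # not placed as a teacher
    ]

    def placement_rate(grads, home_state, exclude_out_of_state=True):
        """Placement rate = placed graduates / graduates counted. Excluding
        out-of-State teachers removes them from numerator and denominator."""
        if exclude_out_of_state:
            grads = [g for g in grads if g["teaching_state"] in (None, home_state)]
        placed = [g for g in grads if g["teaching_state"] is not None]
        return len(placed) / len(grads)

    print(placement_rate(graduates, "MD"))                              # 0.5
    print(placement_rate(graduates, "MD", exclude_out_of_state=False))  # about 0.67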
Further, for the purposes of the teacher placement rate, Sec.
612.5(a)(2)(iv) permits a State, at its discretion, to assess the
teacher placement rate for teacher preparation programs provided
through distance education differently from the teacher placement rate
for other teacher preparation programs based on whether the differences
in the way the rate is calculated for teacher preparation programs
provided through distance education affect employment outcomes.
States that certify at least 25 teachers from a teacher preparation
program provided through distance education do have an interest in that
program and will be reporting on the program as a program in their
States. Moreover, we disagree that States in which distance education
programs are headquartered should round up data from other States,
determine a performance rating, and report it for several reasons. In
addition to placing a higher cost and burden on a particular State,
this methodology would undermine the goal of States having a say in the
quality of the program that is being used to certify teachers in the
State. The State where a teacher preparation program operating in
multiple States is housed is not the only State with an interest in the
program. Finally, we do not believe that the regulations would force
States to create a duplicative and unnecessary second tracking system
because a State is already required to report on teacher preparation
programs in the State.
We agree with commenters' concerns regarding the identification and
tracking of teacher preparation programs provided through distance
education. To address this concern, institutions will be asked to
report which of their teacher preparation programs are teacher
preparation programs provided through distance education in the IRC,
which the institutions provide to the State. The receiving State can
then verify this information during the teacher certification process
for a teacher candidate in the State.
We note that an appeal process regarding a teacher preparation
program's performance is provided for under Sec. 612.4(c). We also
note that teacher preparation programs will have access to data on
States' student and program evaluation criteria because State report
cards are required to be publicly available.
Changes: We are adding the term ``teacher preparation program
provided through distance education'' to Sec. 612.2 and defining it as
a teacher preparation program in which 50 percent or more of the
program's required coursework is offered through distance education. We
are also providing under Sec. 612.4(a)(1)(ii) that States must report
on the quality of all teacher preparation programs provided through
distance education in the State consistent with Sec. 612.4(b)(3).
Making the State Report Card Available on the State's Web Site (34 CFR
612.4(a)(2))
Comments: One commenter supported the proposed change that any data
used by the State to help evaluate program performance should be
published at the indicator level to ensure that programs understand the
areas they need to improve, and to provide additional information to
students about program success. Other commenters stated that posting
SRCs does not lead to constructive student learning or to meeting pre-
service preparation program improvement goals. Many commenters stated
that the method by which States would share information with consumers
to ensure understanding of a teacher preparation program's employment
outcomes or overall rating is not stipulated in the regulations and,
furthermore, that the Department does not specifically require that
this information be shared.
Discussion: We appreciate the comment supporting publication of the
SRC data on the State's Web site. The regulation specifically requires
posting ``the State report card information'' on the Web site, and this
information includes all data that reflect how well a program meets
indicators of academic content knowledge and teaching skills and other criteria
the State uses to assess a program's level of performance, the
program's identified level of performance, and all other information
contained in the SRC.
While posting of the SRC data on the State's Web site may not lead
directly to student learning or teacher preparation program
improvement, it does provide the public with basic information about
the performance of each program and other, broader measures about
teacher preparation in the State. Moreover, making this information
widely available to the general public is a requirement of section
205(b)(1) of the HEA. Posting this information on the State's Web site
is the easiest and least costly way for States to meet this
requirement. We also note that the commenters are mistaken in their
belief that our proposed regulations did not require that information
regarding teacher preparation programs be shared with consumers.
Proposed Sec. 612.4(a)(2) would require States to post on their Web
sites all of the information required to be included in their SRCs, and
these data include the data on each program's student learning
outcomes, employment outcomes, and survey outcomes, and how the data
contribute to the State's overall evaluation of the program's
performance. The final regulations similarly require the State to
include all of these data in the SRC, and Sec. 612.4(a)(2)
specifically requires the State to make the same SRC information it
provides to the Secretary in its SRC widely available to the general
public by posting it on the State's Web site.
Changes: None.
Meaningful Differentiations in Teacher Preparation Program Performance
(34 CFR 612.4(b)(1))
Comments: Multiple commenters expressed general opposition to our
proposal that in the SRC the State make meaningful differentiation of
teacher preparation program performance using at least four performance
levels. These commenters stated that such ratings would not take into
account the uniqueness of each program, such as the program's size,
mission, and diversity,
and therefore would not provide an accurate rating of a program.
Others noted that simply ascribing one of the four proposed
performance levels to a program is not nuanced or sophisticated enough
to fully explain the quality of a teacher preparation program. They
recommended removing the requirement that SEAs provide a single rating
to each program, and allow States instead to publish the results of a
series of performance criteria for each program.
Discussion: As noted under Sec. 612.1, we have withdrawn our
proposal to require States to identify programs that are exceptional.
Therefore, Sec. 612.4(b)(1), like section 207(a) of the HEA, requires
States in their SRCs to identify programs as being low-performing, at-
risk of being low-performing, or effective or better, with any
additional categories established at the State's discretion. This
revised rating requirement mirrors the requirements of section 207(a)
of the HEA for reporting programs that are low-performing or at-risk of
being low-performing (and thus by inference also identifying those
programs that are performing well).
States cannot meet this requirement unless they establish
procedures for using criteria, including indicators of academic content
knowledge and teaching skills (see Sec. 612.4(b)(2)(i)), to determine
which programs are classified in each category. The requirement of
Sec. 612.4(b)(1) that States make meaningful differentiation of
teacher preparation program performance using at least these three
categories simply gives this statutory requirement regulatory
expression. While Sec. 612.4(b)(1) permits States to categorize
teacher preparation programs using more than three levels of
performance if they wish, the HEA cannot be properly implemented
without States making meaningful differentiation among programs based
on their overall performance.
We do not believe that these regulations disregard the uniqueness
of each program's size, mission, or diversity, as they are intended to
provide a minimum set of criteria with which States determine program
performance. They do not prescribe the methods by which programs meet a
State's criteria for program effectiveness.
Changes: We have revised Sec. 612.4(b)(1) by removing the proposed
fourth program performance level, ``exceptional teacher preparation
program,'' from the rating system.
Comments: Commenters, for various reasons, opposed our proposal to
require States, in making meaningful differentiation in program
performance, to consider employment outcomes in high-need schools and
student learning outcomes ``in significant part.'' Some commenters
requested clarification on what ``significant'' means with regard to
weighting employment outcomes for high-need schools and student
learning outcomes in determining meaningful differentiations of teacher
preparation programs. Commenters also noted that including employment
outcomes for high-need schools will add another level of complexity to
an already confusing and challenging process. Some commenters
recommended the Department maintain the focus on teacher placement and
retention rates, but eliminate the incentives to place recent graduates
in high-need schools. They stated that doing so would permit these
indicators to focus on the quality of the program without requiring the
program to focus on placing its students in high-need schools,
something that may not be part of every teacher preparation program's
mission.
Multiple other commenters expressed confusion about whether or not
the regulations incentivize placement in high-need schools by making
such placement a significant part of how States must determine the
rating of a teacher preparation program. Some commenters argued that,
on the one hand, the requirement that States use student learning
outcomes to help assess a program's overall performance could
incentivize teacher preparation programs having teaching candidates
become teachers in schools where students are likely to have higher
test scores. On the other hand, they argued that the proposed
regulations would also assess program performance using, as one
indicator, placement of candidates in high-need schools, an indicator
that commenters stated would work in the opposite direction. These
commenters argued that this could cause confusion and will create
challenges in implementing the regulations by not giving States and
programs a clear sense of which issue is of greater importance--student
learning outcomes or placement of teachers in high-need schools.
Other commenters recommended that the Department set specific
thresholds based on the affluence of the area the school serves. For
example, commenters recommended that a program be required to show that
85 percent of its graduates who work in affluent, high-performing
schools achieve a certain level of student learning outcomes, but that,
to receive the same level of program performance, only 60 percent of
its graduates who work in high-need schools would need to perform at
that same level.
Multiple commenters also opposed the inclusion of student learning
outcomes, employment outcomes, and survey outcomes as indicators of the
performance of teacher preparation programs. These commenters believed
that student learning outcomes are embedded in the concept of
value-added modeling (VAM) found in standardized testing, a concept
they believe constitutes a
methodology that does not accurately represent teacher preparation
program effectiveness.
Discussion: The final regulations require meaningful
differentiation of teacher preparation programs on the basis of
criteria that include employment in high-need schools as an indicator
of program graduates' (or in the case of alternative route programs,
participants') academic content knowledge and teaching skills for
several reasons. First, like much of the education community, we
recognize that the Nation needs more teachers who are better prepared
to teach in high-need schools. We strongly believe that teacher
preparation programs should accept a share of the responsibility for
meeting this challenge. Second, data collected in response to this
indicator should actually help distinguish the distinct missions of
teacher preparation programs. For example, certain schools have
historically focused their programs on recruiting and preparing
teachers to teach in high-need schools--a contribution States and those
institutions may understandably want to recognize. Third, we know that
some indicators may be influenced by graduates' (or in the case of
alternative route programs, participants') placement in high-need
schools (e.g., teacher retention rates tend to be lower in high-need
schools), and States may also want to consider this factor as they
determine how to use the various criteria and indicators of academic
content knowledge and teaching skills to identify an overall level of
program performance.
However, while States retain the authority to determine thresholds
for performance under each indicator, in consultation with their
stakeholder groups (see Sec. 612.4(c)), we encourage States to choose
thresholds purposefully. We believe that all students, regardless of
their race, ethnicity, or socioeconomic status, are capable of
performing at high levels, and that all teacher preparation programs
need to work to ensure that teachers in all schools are capable of
helping them do so. We encourage
States to carefully consider whether differential performance standards
for teachers in high-need schools reflect sufficiently ambitious
targets to ensure that all children have access to a high quality
education.
Similarly, we encourage States to employ measures of student
learning outcomes that are nuanced enough to control for prior student
achievement and observable socio-economic factors so that a teacher's
contribution to student learning is not affected by the affluence of
his or her school. Overall, the concerns stated here would also be
mitigated by use of growth, rather than some indicator of absolute
performance, in the measure of student learning outcomes. But, here
again, we feel strongly that decisions about how and when student
learning outcomes are weighted differently should be left to each State
and its consultation with stakeholders.
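    For illustration only, the following toy sketch (in Python) shows
why a growth measure is less sensitive to school affluence than an
absolute measure: it regresses current-year scores on prior-year scores
and credits each teacher with the average amount by which his or her
students exceed the prediction. The scores and the simple linear model
are assumptions for the example, not a method prescribed by these
regulations.

    import numpy as np

    # Hypothetical student records: (teacher_id, prior_score, current_score).
    records = [
        ("t1", 52.0, 60.0), ("t1", 70.0, 74.0), ("t1", 65.0, 72.0),
        ("t2", 85.0, 86.0), ("t2", 90.0, 88.0), ("t2", 78.0, 80.0),
    ]
    prior = np.array([r[1] for r in records])
    current = np.array([r[2] for r in records])

    # Fit current = a + b * prior, so each student's growth is judged against
    # the score expected from prior achievement, not against an absolute bar.
    b, a = np.polyfit(prior, current, 1)
    residuals = current - (a + b * prior)

    # Average residual per teacher: a positive value means the teacher's
    # students grew more than predicted, whatever the school's affluence.
    for teacher in sorted({r[0] for r in records}):
        idx = [i for i, r in enumerate(records) if r[0] == teacher]
        print(teacher, round(float(residuals[idx].mean()), 2))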
We respond to the commenters' objections to our requirement that
States use student learning outcomes, employment outcomes, and survey
outcomes in their assessment of the performance levels of their teacher
preparation programs in our discussion of comments on these subjects in
Sec. 612.5(a). For reasons we addressed above in the discussion of
Sec. 612.1, while still strongly encouraging States to give
significant weight to these indicators in assessing a program's
performance, we have omitted from the final regulations any requirement
that States consider employment outcomes in high-need schools and
student outcomes ``in significant part.''
Changes: We have revised Sec. 612.4(b)(1) by removing the phrase
``including, in significant part, employment outcomes for high-need
schools and student learning outcomes.''
Comments: Commenters recommended that States and their stakeholders
have the authority to determine how and to what extent outcomes are
included in accountability decisions for teacher preparation programs
in order to mitigate the concerns regarding the validity and
reliability of the student growth indicators. These commenters stated
that we should give more authority to States and LEAs to identify
indicators, and their relative weighting, that would be of the greatest
benefit to their communities. Other commenters also stated that the
proposal to require States to provide meaningful differentiations in
teacher preparation programs may conflict with existing State
structures of accountability, and by giving States increased
flexibility, the Department would avoid inconsistencies with State-
determined levels of quality.
Discussion: Having withdrawn our proposal to require that student
growth and employment outcomes in high-need schools be considered ``in
significant part,'' the final regulations provide States with broad
flexibility in how they weight different indicators of academic content
knowledge and teaching skills in evaluating teacher preparation
programs. While we strongly encourage States to give significant weight
to these important indicators of a teacher preparation program's
performance, we provide each State full authority to determine, in
consultation with its stakeholders, how each of their criteria,
including the required indicators of academic content knowledge and
teaching skills, can be best used to fit the individual needs of its
schools, teachers, and teacher preparation programs.
Changes: None.
Satisfactory or Higher Student Learning Outcomes for Programs
Identified as Effective or Higher (34 CFR 612.4(b)(2))
Comments: Multiple commenters asked us to define the phrase
``satisfactory or higher student learning outcomes,'' asking
specifically what requirements a program would have to meet to be rated
as effective or higher. They also stated that States had insufficient
guidance on how to define programs as ``effective.'' Some commenters
also noted that providing flexibility to States to determine when a
program's student learning outcomes are satisfactory would diminish the
ability to compare teacher preparation programs, and opposed giving
States the flexibility to determine for themselves when a program has
``satisfactory'' student learning outcomes. However, other commenters
disagreed, stating that States should have flexibility to determine
when the teachers trained by a particular teacher preparation program
have students who have achieved satisfactory student learning outcomes
since States would have a better ability to know how individual teacher
preparation programs have helped to meet these States' needs.
Other commenters recommended modifying the regulations so that
States would need to determine programs to have ``above average student
learning outcomes'' in order to rate them in the highest category of
teacher preparation performance. Another commenter suggested that
student learning data be disaggregated by student groups to show hidden
inequities, and that States be required to develop a pilot program to
use subgroup data in their measurement of teacher preparation programs,
such that if the student subgroup performance falls short the program
could not be rated as effective or higher.
Discussion: The Department continues to believe that a teacher
preparation program should not be rated effective if the learning
outcomes of the students taught by its graduates (or, in the case of
alternative route programs, its participants) are not satisfactory. And
we appreciate the comments from those who supported our proposal.
Nonetheless, we are persuaded by the comments from those who urged that
States should have the flexibility to determine how to apply the
criteria and indicators of student academic achievement and learning
needs to determine the performance level of each program, and have
removed this provision from the regulations.
Changes: We have removed Sec. 612.4(b)(2). In addition, we have
renumbered Sec. 612.4(b)(3) through (b)(5) as Sec. 612.4(b)(2)
through (b)(4).
Data for Each Indicator (34 CFR 612.4(b)(2)(i))
Comments: One commenter requested confirmation that the commenter's
State would not be required to report the disaggregated data on student
growth based on assessment test scores for individual teachers, teacher
preparation programs, or entities on the SRC because the educator
effectiveness measure approved for its ESEA flexibility waiver meets
the requirements for student learning outcomes in proposed Sec. Sec.
612.4(b) and 612.5(a)(1) for both tested and non-tested subjects. The
commenter stated that it would be cost prohibitive to submit student
growth information on the SRC separately from reporting on its educator
effectiveness measure under ESEA flexibility. Furthermore, some
commenters were concerned that a State's student privacy laws would
make it difficult to access the disaggregated data as required.
In addition, some commenters opposed our proposed Sec.
612.4(b)(2)(i)(B) requiring each State to include in its SRC an
assurance that a teacher preparation program either is accredited or
produces teachers with content and pedagogical knowledge because of
what they described as the federalization of professional standards.
They indicated that our proposal to offer each State the option of
presenting an assurance that the program is accredited by a specialized
accrediting agency would, at best, make the specialized accreditor an
agent of the Federal government, and at worst, effectively mandate
specialized accreditation by CAEP. The commenters
argued instead that professional accreditation should remain a
voluntary, independent process based on evolving standards of the
profession. Commenters also noted that no definition of specialized
accreditation was proposed and requested that we include a definition
of this term. One commenter recommended that a definition of
specialized accreditation include the criteria that would be used by
the Secretary to recognize an agency for the accreditation of
professional teacher preparation programs, and that one of the criteria
for a specialized agency should be the inclusion of alternative
certification programs as eligible professional teacher preparation
programs.
Discussion: Under Sec. 612.4(b)(2)(i), States may choose to report
student learning outcomes using a teacher evaluation measure that meets
the definition in Sec. 612.2. But if they do so, States still must
report student learning outcomes for each teacher preparation program
in the SRC.
We believe that the costs of this SRC reporting will be manageable
for all States, and have provided a detailed discussion of costs in the
RIA section of this document. For further discussion of reporting on
student learning outcomes, see the discussion in this document of Sec.
612.5(a)(1). We also emphasize that States will report these data in
the aggregate at the teacher preparation program level and not at the
teacher level. Furthermore, while States will need to comply with
applicable Federal and State student privacy laws in the data they
report in their SRC, the commenters have not provided information to
help us understand how our requirements, except as we discuss for Sec.
612.4(b)(3)(ii)(E), are affected by State student privacy laws.
In addition, as we reviewed these comments and the proposed
regulatory language, we realized the word ``disaggregated'' was unclear
with regard to the factors by which the data should be disaggregated,
and redundant with regard to the description of indicators in Sec.
612.5. We have therefore removed this word from Sec. 612.4(b)(2)(i).
Under Sec. 612.5(a)(4) States must annually report whether each
program is administered by an entity that is accredited by a
specialized accrediting agency recognized by the Secretary, or produces
candidates (1) with content and pedagogical knowledge and quality
clinical preparation, and (2) who have met rigorous teacher candidate
exit qualifications. Upon review of the comments and the language of
Sec. 612.5(a)(4), we have determined that proposed Sec.
612.4(b)(3)(i)(B), which would have had States provide an assurance in
their SRCs that each program met the characteristics described in Sec.
612.5(a)(4), is not needed. We address the substantive comments offered
on that provision in our discussion of comments on Sec. 612.5(a)(4).
Finally, in reviewing the public comment, we realized that the
proposed regulations focused only on having States report in their SRCs
the data they would provide for indicators of academic knowledge and
teaching skills that are used to determine the performance level of
each teacher preparation program. This, of course, was because State
use of those indicators was the focus of the proposed regulations. But
we did not mean to suggest that in their SRCs, States would not also
report the data they would use for other indicators and criteria they
establish for identifying each program's level of performance. While
the instructions in section V of the proposed SRCs imply that States
are to report their data for all indicators and criteria they use, we
have revised those instructions to clarify this point.
Changes: We have revised Sec. 612.4(b)(2)(i) by removing the word
``disaggregated.'' We also have removed proposed Sec.
612.4(b)(2)(i)(B) from the regulations.
Weighting of Indicators (34 CFR 612.4(b)(2)(ii))
Comments: Some commenters stated that a formulaic approach, which
they argued was implied by the requirement to establish the weight of
each indicator, will not yield meaningful differentiations among
programs. The commenters recommended that States be allowed to use a
multiple-measures system for assessing the performance of teacher
preparation programs that relies on robust evidence, includes outcomes,
and gives weight to professional judgment. In addition, some commenters
recommended that stakeholders provide input as to how and to what
extent outcomes are included in a teacher preparation program's overall
performance rating.
Several commenters noted that the flexibility our proposed
regulations provide to States to determine the weighting system for use
of criteria and indicators to assess teacher preparation program
performance undermines what the commenters state is the Department's
goal of providing meaningful data to, among other things, facilitate
State-to-State comparisons. The commenters argue that consumers might
incorrectly assume that all States are applying the same metrics to
assess program performance, and so draw incorrect conclusions,
especially about programs located near each other but in
different States. Several commenters also expressed concerns about the
Department's proposal in Sec. 612.5(a)(2) that States be able to weigh
employment outcomes differently for alternative route programs and
traditional teacher preparation programs. The commenters argued that
all teacher preparation programs should be held to the same standards
and levels of accountability.
Commenters also stated that our proposal, by which we understand
the commenters to mean the proposed use of student learning outcomes,
employment outcomes and survey outcomes as indicators of academic
content knowledge and teaching skills of teachers whom programs
prepare, should be adjusted based on the duration of the teachers'
experience. Commenters stated we should do so because information about
newer teachers' training programs should be emphasized over information
about more experienced teachers, for whom data reflecting these
indicators would likely be less useful.
Some commenters asked whether, if a VAM is used to generate
information for indicators of student learning outcomes, the indicators
should be weighted to count gains made by the lower performing third of
the student population more than gains made by the upper third of the
population because it would be harder to increase the former students'
scores. The commenters noted that poorer performing students will have
the ability to improve by greater amounts than those who score higher
on tests.
Several commenters believed that the weighting of the indicators
used to report on teacher preparation program performance is a critical
decision, particularly with respect to the weighting of indicators
specific to high-need schools, and because of this, decisions on
weighting should be determined after data are collected and analyzed.
As an example of why the group of stakeholders should have information
available prior to making weighting decisions, one commenter noted
that, if teacher placement in high-need schools has a relatively low
weight and student growth is negatively associated with the percentage
of economically disadvantaged students enrolled in the school, programs
may game the system by choosing to counsel students to seek employment
in non-high-need schools.
Finally, several commenters stated that the regulations incentivize
programs to place graduates in better
[[Page 75540]]
performing schools, noting that the proposed regulations appeared to
require that student learning outcomes be given the most weight. On the
other hand, the commenters stated that the proposed regulations
incentivize the placement of graduates in high-need schools, and argued
that employment rates in high-need schools would receive the next
highest weight. They argued that this contradiction would lead to
confusion and challenges in implementing the regulations.
Discussion: We have included a summary of these comments here
because they generally address how States should weight the indicators
and criteria used to assess the performance of teacher preparation
programs, and advantages and disadvantages of giving weight to certain
indicators. However, we stress that we did not intend for States to
adopt any particular system of weighting to generate an overall level
of performance for each teacher preparation program from the various
indicators and criteria they would use. Rather, proposed Sec.
612.4(b)(3)(ii), like Sec. 612.4(b)(2)(ii) of the final regulations,
simply directs States to report in their SRCs the weighting they have
given to the various indicators in Sec. 612.5. Thus, we are not
requiring any State to adopt a formulaic approach. And
States may, if they choose, build into their indicators and criteria a
reliance on robust evidence and outcomes, and give weight to
professional judgment.
States plainly need to be able to implement procedures for taking
the data relevant to each of the indicators of academic knowledge and
teaching skills and other criteria they use to assess program
performance, and turning those data into a reported overall level of
program performance. We do not see how States can do this without
assigning some weight to each of the indicators they use. However, the
specific method by which a State does so is left to each State, in
consultation with its stakeholders (see Sec. 612.4(c)), to determine.
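    To illustrate one way such a computation might work, the following
is a minimal, hypothetical sketch in Python. The indicator names,
weights, and cutoff scores are illustrative assumptions only; nothing
in these regulations prescribes them, and each State would define its
own in consultation with its stakeholders.

    # Hypothetical sketch: turning weighted indicator data into an
    # overall performance level. All names, weights, and cutoffs are
    # illustrative assumptions; each State defines its own in
    # consultation with its stakeholders (see Sec. 612.4(c)).

    # State-chosen weights, which the State reports in its SRC.
    WEIGHTS = {
        "student_learning_outcomes": 0.4,
        "employment_outcomes": 0.3,
        "survey_outcomes": 0.3,
    }

    def overall_score(indicator_scores):
        """Weighted combination of normalized indicator scores (0-100)."""
        return sum(WEIGHTS[name] * value
                   for name, value in indicator_scores.items())

    def performance_level(score):
        """Map a combined score to a reported level; cutoffs are hypothetical."""
        if score < 40:
            return "low-performing"
        if score < 55:
            return "at-risk of being low-performing"
        return "effective"

    # Example: one program's normalized indicator data.
    program = {
        "student_learning_outcomes": 62.0,
        "employment_outcomes": 71.5,
        "survey_outcomes": 58.0,
    }
    print(performance_level(overall_score(program)))  # prints "effective"

    Under this kind of scheme, the reported weighting, not any
particular formula, is what the regulations require; a State could
equally incorporate professional judgment or non-numeric criteria
before assigning a final performance level.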
As we addressed in the discussion of Sec. 612.1, we had proposed
in Sec. 612.4(b)(1) that a State's assessment of a program's
performance needed to be based ``in significant part'' on the results
for two indicators, student learning outcomes and employment outcomes
in high-need schools. But as we noted in our discussion of comment on
Sec. Sec. 612.1 and 612.4(b)(1), while strongly encouraging States to
adopt these provisions in their procedures for assessing a program's
performance, we have revised these final regulations to omit that
proposal and any other language requiring that any regulatory indicator
receive special weight.
Furthermore, the flexibility the regulations accord to States to
determine how these factors should be weighed to determine a program's
level of performance extends to the relative weight a State might
accord to factors like a teacher's experience and to student learning
outcomes of teachers in low-performing versus high-performing schools.
It also extends to the weight a State would provide to employment
outcomes for traditional teacher preparation programs and alternative
route teacher preparation programs; after all, these types of programs
are very different in their concept, who they recruit, and when they
work with LEAs to place aspiring teachers as teachers of record. In
addition, State flexibility extends to a State's ability to assess the
overall performance of each teacher preparation program using other
indicators of academic content knowledge and teaching skills beyond
those contained in the regulations. We do not believe that this
flexibility undermines any Departmental goal, or goal that Congress had
in enacting the title II reporting system.
Thus, while a State must report the procedures and weighting of
indicators of academic content knowledge and teaching skills and other
criteria it uses to assess program performance in its SRC, we believe
States should be able to exercise flexibility to determine how they
will identify programs that are low-performing or at-risk of being so.
In establishing these regulations, we stress that our goal is simple:
to ensure that the public--prospective teaching candidates, LEAs that
will employ novice teachers, and State and national policy makers
alike--has confidence that States are reasonably identifying programs
that are and are not working, and understand how States are
distinguishing between the two. The flexibility the regulations
accord to States to determine how to assess a program's level of
performance is fully consistent with this goal. Furthermore, given the
variation we expect to find in State approaches and the different
environments in which each State operates, we reiterate that any State-
to-State comparisons will need to be made only with utmost caution.
As noted above, our discussion of Sec. Sec. 612.1 and 612.4(b)(1)
stressed both (1) our hope that States would adopt our proposals that
student learning outcomes and employment outcomes for high-need schools
be given significant weight, and that to be considered effective a
teacher preparation program would show positive student learning
outcomes, and (2) our decision not to establish these proposals as
State requirements. Thus, we likewise leave to States issues regarding
the incentives that any given weighting might create for the placement
of aspiring teachers and for the programs themselves.
Finally, in reviewing the public comment, we realized that the
proposed regulations focused only on having States report in their SRCs
the weights they would provide to indicators of academic knowledge and
teaching skills used to determine the performance level of each teacher
preparation program. This, of course, was because State use of those
indicators was the focus of the proposed regulations. But we did not
mean to suggest that in their SRCs, States would not also report the
weights they would provide to other indicators and criteria they
establish for identifying each program's level of performance. While
the instructions in section V of the proposed SRCs imply that States
are to report their weighting for all indicators and criteria they use,
we have revised them to clarify this point.
Changes: None.
Reporting the Performance of All Teacher Preparation Programs (34 CFR
612.4(b)(3))
Comments: Commenters stated that a number of non-traditional
teacher preparation program providers will never meet the criteria for
inclusion in annual reports due to their small numbers of students.
Commenters noted that this implies that many of the most exemplary
programs will neither be recognized nor rewarded and may even be harmed
by their omission in reports provided to the media and public.
Commenters expressed concern that this might lead prospective students
and parents to exclude them as viable options, resulting in decreased
program enrollment.
Other commenters asked for more clarity on the various methods for
a program to reach the threshold of 25 new teachers (or other threshold
set by the State). The commenters also stated that a State could design
this threshold to limit the impact on programs. Other commenters noted
that smaller teacher preparation programs may not have the technical
and human resources to collect the data for proposed reporting
requirements, i.e., tracking employment and impact on student learning,
and asked if the goal of these proposed regulations is to encourage
small programs to close or merge with larger ones.
Discussion: The regulations establish minimum requirements for
States to use
[[Page 75541]]
in assessing and reporting the performance of each teacher preparation
program, and are not intended to facilitate the merger or closure of
small programs. The proposed regulations provided States with three
methods of identifying and reporting the performance of teacher
preparation programs that produce fewer than 25 new teachers--or such
lower number as the State might choose--in a given reporting year by
aggregating data to reach the minimum threshold. Under the final
regulations, States could: (1) Combine a teacher preparation program's
performance data with data for other teacher preparation programs that
are operated by the same teacher preparation entity and are similar to
or broader than the program in content; (2) combine data for the same
program over a period of up to four years until the size threshold is
met; or (3) use
a combination of the two methods. Given statistical and privacy issues
that are particular to small programs, we believe that these
aggregation methods will adequately address the desire to have the
performance of all programs, large and small, reported in SRCs. In
addition, while we strongly believe that all teacher preparation
programs should want to gather student learning outcomes, employment
outcomes, and survey results to help them improve their programs,
States, not institutions, ultimately have the responsibility to report
under Sec. 612.4.
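    As a purely illustrative sketch of these aggregation options, the
following Python fragment shows how the two mechanisms might combine
counts to reach the threshold; the data structures, function names, and
example figures are assumptions for illustration and are not prescribed
by the regulations.

    # Hypothetical sketch of the aggregation options in
    # Sec. 612.4(b)(3)(ii). Thresholds, structures, and groupings
    # are illustrative assumptions, not prescribed mechanics.

    N_SIZE = 25  # or such lower threshold as the State may establish

    def aggregate_similar_programs(counts):
        """Option (1): combine a small program's count with counts for
        similar or broader programs operated by the same entity."""
        return sum(counts)

    def aggregate_years(annual_counts, max_years=4):
        """Option (2): combine counts for the same program over up to
        four years, stopping once the size threshold is met."""
        total = 0
        for years_used, count in enumerate(annual_counts[:max_years], start=1):
            total += count
            if total >= N_SIZE:
                return total, years_used
        return total, min(len(annual_counts), max_years)

    # Option (3) is a combination of the two. Example: a program
    # producing 8 recent graduates per year meets the threshold after
    # four years of aggregation.
    total, years = aggregate_years([8, 8, 8, 8])
    print(total, years)  # prints "32 4"
    print(aggregate_similar_programs([12, 9, 7]) >= N_SIZE)  # prints "True"

    If neither mechanism, alone or in combination, reaches the
threshold within four years, the State is not required to report on the
program (see Sec. 612.4(b)(3)(ii)(D) and Sec. 612.4(b)(5)).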
The proposed regulations had focused State reporting and small
program aggregation procedures on the number of new teachers a teacher
preparation program produced. Based on further consideration of these
and other comments, it became clear that the term ``new teacher'' was
problematic in this case as it was in other places. We realized that
this approach would not hold teacher preparation programs accountable
for producing recent graduates who do not become novice teachers.
Because we believe that the fundamental purpose of these programs is to
produce novice teachers, we have concluded that our proposal to have
State reporting of a program's performance depend on the number of new
teachers that the program produces was misplaced.
Therefore, in order to better account for individuals who complete
a teacher preparation program but who do not become novice teachers, we
are requiring a State to report annually on the performance of each
``brick-and-mortar'' teacher preparation program that produces a total
of 25 or more recent graduates (or such lower threshold as the State
may establish). Similarly, aggregation procedures for smaller programs
apply to each teacher preparation program that produces fewer than 25
recent graduates (or such lower threshold as the State may establish).
For teacher preparation programs provided through distance education,
the requirement is the same except that, because a State is not likely
to know the total number of such a program's recent graduates, the
State applies the same threshold number of 25 recent graduates but
counts only those recent graduates who have received an initial
certification or licensure from the State that allows them to serve in
the State as teachers of record for K-12 students.
Changes: We have revised Sec. 612.4(b)(3) to provide that a
State's annual reporting of a teacher preparation program's
performance, and whether it provides this reporting alternatively
through small program aggregation procedures, depends on whether the
program produces a total of 25 or more recent graduates (or such lower
threshold as the State may establish). For programs provided through
distance education, the number of recent graduates counted will be
those who have received an initial certification or licensure from the
State that allows them to serve in the State as teachers of record for
K-12 students.
Annual Performance Reporting of Teacher Preparation Programs
(34 CFR 612.4(b)(3)(i))
Comments: Two commenters stated that differentiated reporting for
large and small teacher preparation programs, coupled with allowing
States to establish what the commenters referred to as ``certain
criteria,'' will lead to invalid comparisons and rankings both within
and among States.
Discussion: The regulations require separate reporting of the
performance of any teacher preparation program that annually produces
25 or more recent graduates. For programs that annually produce fewer
recent graduates, the regulations also establish procedures for data
aggregation that result in reporting on all of the State's teacher
preparation programs (except for those programs that are particularly
small and for which aggregation procedures cannot be applied, or where
the aggregation would be in conflict with State or Federal privacy or
confidentiality laws). Based on concerns expressed during the
negotiated rulemaking sessions, the Department believes that use of an
``n-size'' of 25 (or such smaller number as a State may adopt) and
the means of reporting the performance of smaller programs through the
aggregation procedures address privacy and reliability concerns while
promoting the goal of having States report on the performance of as
many programs as possible. Moreover, we reiterate that the purpose of
these regulations is to identify key indicators that States will use to
assess the level of performance for each program, and to provide
transparency about how each State identifies that level. We are not proposing
any rankings and continue to caution against making comparisons of
programs based on data States report.
Changes: None.
Performance Reporting of Small Teacher Preparation Programs: General
(34 CFR 612.4(b)(3)(ii))
Comments: Commenters stated that the low population in some States
makes privacy of students in elementary and secondary schools, and in
teacher preparation programs, difficult or impossible to assure. The
commenters further stated that aggregating student growth data to the
school level to assure privacy in the title II report would result in
meaningless ratings, because the teachers in the schools more than
likely completed their preparation programs at different institutions.
Several commenters were concerned that our proposals for
aggregating data to be used to annually identify and report the level
of performance of small teacher preparation programs would make year-
by-year comparisons and longitudinal trends difficult to assess in any
meaningful way, since it is very likely that States will use different
aggregation methods institution-by-institution and year-by-year.
Commenters noted that many small rural teacher preparation programs
and programs producing small numbers of teachers who disperse across
the country after program completion do not have the requisite
threshold size of 25. Commenters stated that for these programs, States
may be unable to collect sufficient valid data. The result will be
misinformed high-stakes decision making.
Some commenters proposed that, instead of 25, States be able to
report on programs producing a minimum of 10 new teachers, with
aggregation when that minimum is not met. Other suggested options were
to report whatever data States have or to aggregate data from previous
years to meet the ``n'' size.
One commenter recommended that rankings be initially based on a
relatively few normed criteria common to, and appropriate for, programs
of all sizes and all States, i.e., a common baseline ranking system. The
commenter stated that to do otherwise
[[Page 75542]]
could result in States rushing to the lowest (not highest) common
denominator to protect both quality programs from being unfairly ranked
in comparison with weaker programs in other States, and small premier
programs from unfair comparisons with mediocre larger programs.
Two commenters stated that even though the proposed rules create
several ways in which States may report the performance of teacher
preparation programs that produce fewer than 25 teachers per year, the
feasibility of annual reporting at the program level in some
States would be so limited it would not be meaningful. The commenters
added that regardless of the aggregation strategy, having a minimum
threshold of 25 will protect the confidentiality of completers for
reporting, but requiring annual reporting of programs that produce 25
or more recent graduates per year will omit a significant number of
individual programs from the SRC. Several commenters had similar
concerns and stated that annual reporting of the teacher preparation
program performance would not be feasible for the majority of teacher
preparation programs across the country due to their size or to where
their graduates live. Commenters specifically mentioned that many programs at
Historically Black Colleges and Universities will have small cell sizes
for graduates, which will make statistical conclusions difficult.
Another commenter had concerns about the manner in which individual
personnel data will be protected from public disclosure, while other
commenters supported procedural improvements in the proposed
regulations discussed in the negotiated rulemaking sessions that
addressed student privacy concerns by increasing the reporting
threshold from 10 to 25.
Commenters further expressed concerns that, for some States, the
manual calculation needed to combine programs producing fewer than 25
teachers per year so as to aggregate the number of students up to 25,
and then to report the assessment of program performance and
information on indicators, would not only be excessive but might lead
to significant inconsistencies across entities and from one year to
the next.
Discussion: We first reiterate that we have revised Sec.
612.5(a)(1)(ii) so that States do not need to use student growth,
either by itself or as used in a teacher evaluation measure, for
student learning outcomes when assessing a teacher preparation
program's level of performance. While we encourage them to do so, if,
for reasons the commenters provided or other reasons, they do not want
to do so, States may instead use ``another State-determined measure
relevant to calculating student learning outcomes.''
We do not share commenters' concerns about small elementary and
secondary schools where privacy concerns purportedly require a school-
level calculation of student growth measures rather than calculation of
student growth at the teacher level, or related concerns about student
learning outcomes for an individual teacher not yielding useable
information about a particular teacher preparation program. Student
learning outcomes applicable to a particular teacher preparation
program would not be aggregated at the school level. Whether measured
using student growth, a teacher evaluation measure, or another State-
determined measure relevant to calculating student learning outcomes,
each teacher--whether employed in a large school or a small school--has
some impact on student learning. Under our regulations, these impacts
would be aggregated across all schools (or at least all public schools
in the State in which the program is located) that employ novice
teachers the program had prepared.
For small teacher preparation programs, we believe that a State's
use of the aggregation methods reasonably balances the need for annual
reporting on teacher preparation program performance with the special
challenges of generating a meaningful annual snapshot of program
quality for programs that annually produce few teachers. By permitting
aggregation to the threshold level of similar or broader programs run
by the same teacher preparation entity (paragraph (b)(3)(ii)(A)), over
a period of up to four years (paragraph (b)(3)(ii)(B)), or both
(paragraph (b)(3)(ii)(C)), we are
offering States options for meeting their annual reporting
responsibilities for all programs. However, if aggregation under any of
the methods identified in Sec. 612.4(b)(3)(ii)(A)-(C) would still not
yield the requisite program size threshold of 25 recent graduates or
such lower number that a State establishes, or if reporting such data
would be inconsistent with Federal or State privacy and confidentiality
laws and regulations, Sec. 612.4(b)(3)(ii)(D) and Sec. 612.4(b)(5)
provide that the State would not need to report data on, or identify an
overall performance rating for, that program.
Our regulations give States flexibility to determine, with their
consultative groups, their own ways of assessing a teacher
preparation program's performance. But if a State were to use the
``lowest common denominator'' in evaluating programs, as the commenter
suggested, it would not be meeting the requirement in Sec. 612.4(b)(1)
to identify meaningful differentiation between programs. We continue to
caution against making comparisons of the performance of each teacher
preparation program, or the data for each indicator and criterion a
State uses to determine the overall level of performance, that States
report in their SRCs. Each teacher preparation program is different;
each has a different mission and draws different groups of aspiring
teachers. The purpose of this reporting is to permit the public to
understand which programs a State determines to be low-performing or
at-risk of being low-performing, and the reasons for this
determination. The regulations do not create a national ranking system
for comparing the performance of programs across States. For these
reasons, we do not believe that the regulations provide perverse
incentives for States to lower their standards relative to other
States.
While we appreciate the commenter's recommendation that States be
required to use a set of normed criteria common across programs of all
sizes and all States, section 205(b) of the HEA requires each State
to include in its SRC its criteria for assessing program performance,
including indicators of academic content knowledge and teaching skills.
Therefore, subject only to use of the indicators of academic content
knowledge and teaching skills defined in these regulations, the law
provides that each State determine how to assess a program's
performance and, in doing so, how to weight different criteria and
indicators that bear on the overall assessment of a program's
performance.
We appreciate the commenters' statements about potential challenges
and limitations that the regulations' aggregation procedures pose for
small teacher preparation programs. However, while we agree that a
State's use of these procedures for small programs may produce results
that are less meaningful than those for programs that annually produce
25 or more recent graduates (or such lower threshold as the State
establishes), we believe that they do provide information that is far
more meaningful than the omission of information about performance of
these small programs altogether. We also appreciate commenters'
concerns that for some States, the process of aggregating program data
could entail significant effort. But we assume that data for indicators
for a small program and for other programs of the same teacher preparation entity
would be procured
[[Page 75543]]
electronically, and, therefore, do not believe that aggregation of data
would necessarily need to be performed manually or that the effort
involved would be ``excessive.'' Moreover, the commenters do not
explain why use of the aggregation methods to identify programs that
are low-performing or at-risk of being low-performing should lead to
significant inconsistencies across entities and from one year to the
next, nor do we agree this will be the case.
Like the commenter, we are concerned about protection of individual
personnel data from public disclosure. But we do not see how the
procedures for aggregating data on small programs, under which what the
State reports concerns a combined program that meets the size threshold
of 25 (or such lower size threshold as the State establishes), create
legitimate concerns about such disclosure. And as our proposed
regulations did not contain a size threshold of 10, we do not believe
we need to make edits to address the specific commenters' concerns
regarding our threshold number.
Changes: None.
Aggregating Data for Teacher Preparation Programs Operated by the Same
Entity (34 CFR 612.4(b)(3)(ii)(A))
Comments: One commenter expressed concerns about how our proposed
definition of a teacher preparation program meshed with how States
would report data for and make an overall assessment of the performance
of small teacher preparation programs. The commenter noted that the
proposed regulations define a teacher preparation program as a program
that is ``offered by a teacher preparation entity that leads to a
specific State teacher certification or licensure in a specific
field.'' The commenter inferred that, under this definition, a
``secondary mathematics program'' could instead be reported as a
``secondary program.'' Based
on the proposed regulatory language about aggregation of performance
data among teacher preparation programs that are operated by the same
teacher preparation entity and are similar to or broader than the
program (Sec. 612.4(b)(3)(ii)(A)), the commenter added that it appears
that a State can collapse secondary content areas (e.g., biology,
physics) and call it a ``secondary program.''
Discussion: As explained in our discussion of the prior comments,
we feel that meeting the program size threshold of 25 novice teachers
(or any lower threshold a State establishes) by aggregating performance
data for each of these smaller programs with performance data of
similar or broader programs that the teacher preparation entity
operates (thus, in effect, reporting on a broader-based set of teacher
preparation programs) is an acceptable and reasonable way for a State
to report on the performance of these programs. Depending on program
size, reporting could also be even broader, potentially covering the
entire teacher preparation entity. Indicators of
teacher preparation performance would then be outcomes for all
graduates of the combined set of programs, regardless of what subjects
they teach. A State's use of these aggregation methods balances the
need to annually report on program performance with the special
challenges of generating a meaningful annual snapshot of program
quality for programs that annually produce few novice teachers. We
understand the commenter's concern that these aggregation measures do
not precisely align with the definition of teacher preparation program
and permit, to use the commenter's example, a program that is a
``secondary mathematics program'' to potentially have its performance
reported as a broader ``secondary program.'' But as we noted in our
response to prior comments, if a State does not choose to establish a
lower size threshold that would permit reporting of the secondary
mathematics program, aggregating performance data for that program with
another similar program still provides benefits that far exceed having
the State report no program performance information at all.
TEACH Grant eligibility would not be impacted because either the
State will determine and report the program's performance by
aggregating relevant data on that program with data for other teacher
preparation programs that are operated by the same teacher preparation
entity and are similar to or broader than the program in content, or
the program will meet the exceptions provided in Sec.
612.4(b)(3)(ii)(D) and Sec. 612.4(b)(5).
Changes: None.
Aggregating Data in Performance Reporting (34 CFR 612.4(b)(3)(ii)(B))
Comments: Several commenters stated that aggregating data for any
given teacher preparation program over four years to meet the program
size threshold would result in a significant lack of reliability; some
urged the Department to cap the number of years allowed for aggregating
data at three years. Another commenter raised concerns about reported
data on any given program being affected by program characteristics
that are prone to change significantly in the span of four years (i.e.,
faculty turnover and changes in clinical practice, curriculum, and
assessments). The commenter noted that many States' programs will not
meet the minimum number of program completers, which the commenter
stated our proposed regulations set at ten.
commenter asked the Department to consider a number of aggregation
methods to reach a higher completer count.
Discussion: The proposed regulations did not establish, as a
threshold for reporting performance data and the program's level of
performance, a minimum of ten program completers. Rather, where a
teacher preparation program does not annually produce 25 or more recent
graduates (or such lower threshold as the State may establish),
proposed Sec. 612.4(b)(3)(ii)(B) would permit a State to aggregate its
performance data in any year with performance data for the same program
generated over a period of up to four years. We appreciate that
aggregating data on a program's new teachers over a period of up to
four years is not ideal; as commenters note, program characteristics
may change significantly in the span of four years.
However, given the challenges of having States report on the
performance of small programs, we believe that providing States this
option, as well as options for aggregating data on the program with
similar or broader programs of the same teacher preparation entity
(Sec. Sec. 612.4(b)(3)(ii)(A) and (C)), allows the State to make a
reasonable determination of the program's level of performance. This is
particularly so given that the regulations require that the State
identify only whether a given teacher preparation program is low-
performing or at-risk of being low-performing. We note that States have
the option to aggregate across programs within an entity if, in
consultation with stakeholders, they find that this produces a more
accurate representation of program quality. See Sec.
612.4(b)(3)(ii)(A). We
believe that a State's use of these alternative methods would produce
more reliable and valid measures of quality for each of these smaller
programs and reasonably balance the need annually to report on program
performance with the special challenges of generating a meaningful
annual snapshot of program quality for programs that annually produce
few novice teachers.
The commenters who recommended reducing the maximum time for
aggregating data on the same small program from four years to three did
not explain why the option of having an additional year to report on
very small programs was preferable to omitting a report on program
performance
[[Page 75544]]
altogether if the program was still below the size threshold after
three years. We do not believe that it is preferable. Moreover, if a
State does not want to aggregate performance data for the same small
program over a full four years, the regulations permit it instead to
combine performance data with data for other programs operated by the
same entity that are similar or broader.
Changes: None.
Aggregating Data in Performance Reporting of Small Teacher Preparation
Programs (34 CFR 612.4(b)(3)(ii)(C))
Comments: Commenters noted that while the proposed rule asserts
that States may use their discretion on how to report on the
performance of teacher preparation programs that do not meet the
threshold of 25 novice teachers (or any lower threshold the State
establishes), the State may still be reporting on less than half of its
programs. Commenters noted that if this occurs, the Department's
approach will not serve the purpose of increased accountability of all
programs. Another commenter stated that human judgment would have to be
used to aggregate data across programs or across years in order to meet
the reporting threshold, and this would introduce error in the level of
performance the State assigns to the program in what the commenter
characterizes as a high-stakes accountability system.
Another commenter appears to understand that the government wants
to review larger data fields for analysis and reporting, but stated
that the assumption that data from a program with a smaller ``n'' size
are not worth reporting may dampen innovation and learning at a
sponsoring organization that has a stated goal of producing a limited
number of teachers or that is in a locale needing a limited number of
teachers. The commenter noted that, if a State were to combine
programs, report years, or some other combination to get to 25, the
Federally stated goal of collecting information about each program,
rather than about the overall sponsoring organization, is lost. The
commenter argued that Sec. 612.4(c), which the commenter states
requires that States report on teacher preparation at the individual
program level, appears to contradict the 25-or-more completer rule for
reporting.
Discussion: We expect that, working with their consultative group
(see Sec. 612.4(c)), States will adopt reasonable criteria for
deciding which procedure to use in aggregating performance data for
programs that do not meet the minimum threshold. We also expect that a
key factor in the State's judgment of how to proceed will be how best
to minimize error and confusion in reporting data for indicators of
academic content knowledge and teaching skills and other criteria the
State uses, and the program's overall level of performance. States will
want to produce the most reliable and valid measures of quality for
each of these smaller programs. Finally, while the commenter is correct
that Sec. 612.4(c) requires States to work with a consultative group
on procedures for assessing and reporting the performance of each
teacher preparation program in the State, how the State does so for
small programs is governed by Sec. 612.4(b)(3)(ii).
Changes: None.
No Required State Reporting on Small Teacher Preparation Programs That
Cannot Meet Reporting Options (34 CFR 612.4(b)(4)(ii)(D))
Comments: Some commenters urged the Department not to exempt from
State title II reporting those teacher preparation programs that are so
small they are unable to meet the proposed threshold size requirements
even with the options for small programs we had proposed.
Discussion: If a teacher preparation program produces so few recent
graduates that the State cannot use any of the aggregation methods to
enable reporting of program performance within a four-year period, we
do not believe that use of the regulations' indicators of academic
content knowledge and teaching skills to assess its performance will
produce meaningful results.
Changes: None.
No Required State Reporting Where Inconsistent With Federal and State
Privacy and Confidentiality Laws (34 CFR 612.4(b)(3)(ii)(E))
Comments: Two commenters objected to the proposed regulations
because of concerns that the teacher evaluation data and individual
student data that would be collected and reported would potentially
violate State statutes protecting or sharing elementary and secondary
student performance data and teacher evaluation results with any
outside entity. One commenter expressed general concern about whether
this kind of reporting would violate the privacy rights of teachers,
particularly those who are working in their initial years of teaching.
Another commenter recommended that the proposed regulations include
what the commenter characterized as the exemption in the Family
Educational Rights and Privacy Act (FERPA) (34 CFR 99.31 or 99.35) that
allows for the re-disclosure of student-level data for the purposes of
teacher preparation program accountability. The commenter stressed that
the proposed regulations do not address a restriction in FERPA that
prevents teacher preparation programs from being able to access data
that the States will receive on program performance. The commenter
voiced concern that as a result of this restriction in FERPA, IHEs will
be unable to perform the analyses to determine which components of
their teacher preparation programs are leading to improvements in
student academic growth and which are not, and urged that we include an
exemption in 34 CFR 99.31 or 99.35 to permit the re-disclosure of
student-level data to IHEs for the purposes of promoting teacher
preparation program accountability. From a program improvement
standpoint, the commenter argues that aggregated data are meaningless;
teacher preparation programs need fine-grained, person-specific data
(data at the lowest level possible) that can be linked to student
information housed within the program.
Yet another commenter stated that surveying students (by which we
interpret the comment to mean surveying elementary or secondary school
students) or parents raises general issues involving FERPA.
Discussion: The Department appreciates the concerns raised about
the privacy of information on students and teachers. Proposed Sec.
612.4(b)(4)(ii)(E) provided that a State is not required to report data
on a particular teacher preparation program that does not meet the size
thresholds under Sec. 612.4(b)(4)(ii)(A)-(C) if reporting these data
would be inconsistent with Federal or State privacy and confidentiality
laws and regulations. We had proposed to limit this provision to these
small programs because we did (and do) not believe that, for larger
programs, Federal or State laws would prohibit States or State agencies
from receiving the information they need under our indicators of
academic content knowledge and teaching skills to identify a program's
level of performance. The commenters did not provide the text of any
specific State law to make us think otherwise, and for reasons we
discuss below, we are confident that FERPA does not create such
concerns. Still, in an abundance of caution, we have revised this
provision to clarify that no reporting of data under Sec. 612.4(b) is
needed if such reporting is inconsistent with Federal or State
confidentiality laws. We also have redesignated this provision as Sec.
612.4(b)(5) to clarify that
[[Page 75545]]
it is not limited to reporting of small teacher preparation programs.
States should be aware of any restrictions in reporting because of
State privacy laws that affect students or teachers.
At the Federal level, the final regulations do not amend 34 CFR
part 99, which are the regulations implementing section 444 of the
General Education Provisions Act (GEPA), commonly referred to as FERPA.
FERPA is a Federal law that protects the privacy of personally
identifiable information in students' education records. See 20 U.S.C.
1232g; 34 CFR part 99. FERPA applies to educational agencies and
institutions (elementary and secondary schools, school districts,
colleges and universities) that are recipients of Federal funds under a
program administered by the Department. FERPA prohibits educational
agencies and institutions to which it applies from disclosing
personally identifiable information from students' education records,
without the prior written consent of the parent or eligible student,
unless the disclosure meets an exception to FERPA's general consent
requirement. The term ``education records'' means those records that
are: (1) Directly related to a student; and (2) maintained by an
educational agency or institution or by a party acting for the agency
or institution. Education records would encompass student records that
LEAs maintain and that States will need in order to have the data
needed to apply the regulatory indicators of academic content and
teaching skills to individual teacher preparation programs.
As the commenter implicitly noted, one of the exceptions to FERPA's
general consent requirement permits the disclosure of personally
identifiable information from education records by an educational
agency or institution to authorized representatives of a State
educational authority (as well as to local educational authorities, the
Secretary, the Attorney General of the United States, and the
Comptroller General of the United States) as may be necessary in
connection with the audit, evaluation, or the enforcement of Federal
legal requirements related to Federal or State supported education
programs (termed the ``audit and evaluation exception''). The term
``State and local educational authority'' is not specifically defined
in FERPA. However, we have previously explained in the preamble to
FERPA regulations published in the Federal Register on December 2, 2011
(76 FR 75604, 75606), that the term ``State and local educational
authority'' refers to an SEA, a State postsecondary commission, Bureau
of Indian Education, or any other entity that is responsible for and
authorized under local, State, or Federal law to supervise, plan,
coordinate, advise, audit, or evaluate elementary, secondary, or
postsecondary Federal- or State-supported education programs and
services in the State. Accordingly, an educational agency or
institution, such as an LEA, may disclose personally identifiable
information from students' education records to a State educational
authority that has the authority to access such information for audit,
evaluation, compliance, or enforcement purposes under FERPA.
We understand that all SEAs exercise this authority with regard to
data provided by LEAs, and therefore FERPA permits LEAs to provide to
SEAs the data the State needs to assess the indicators our regulations
require. Whether other State agencies such as those that oversee or
help to administer aspects of higher education programs or State
teacher certification requirements are also State education
authorities, and so may likewise receive such data, depends on State
law. The Department would therefore need to consider State law
(including valid administrative regulations) and the particular
responsibilities of a State agency before providing additional guidance
about whether a particular State entity qualifies as a State
educational authority under FERPA.
The commenter would have us go further, and amend the FERPA
regulations to permit State educational authorities to re-disclose this
personally identifiable information from students' education records to
IHEs or the programs themselves in order to give them the disaggregated
data they need to improve the programs. While we understand the
commenter's objective, we do not have the legal authority to do this.
Finally, in response to other comments, FERPA does not extend
privacy protections to an LEA's records on teachers. Nor do the final
regulations require any reporting of survey results from elementary or
secondary school students or their parents. To the extent that either
is maintained by LEAs, disclosures would be subject to the same
exceptions and limitations under FERPA as records of or related to
students.
Changes: We have revised Sec. 612.4(b)(3)(ii)(E) and have
redesignated it as Sec. 612.4(b)(5) to clarify that where reporting of
data on a particular program would be inconsistent with Federal or
State privacy or confidentiality laws or regulations, the exclusion
from State reporting of these data is not limited to small programs
subject to Sec. 612.4(b)(3)(ii).
Fair and Equitable Methods: Consultation With Stakeholders (34 CFR
612.4(c)(1))
Comments: We received several comments on the proposed list of
stakeholders that each State would be required to include, at a
minimum, in the group with which the State must consult when
establishing the procedures for assessing and reporting the performance
of each teacher preparation program in the State (proposed Sec.
612.4(c)(1)(i)). Some commenters supported the list of stakeholders.
One commenter specifically supported the inclusion of representatives
of institutions serving minority and low-income students.
Some commenters believed that, as the relevant stakeholders will
vary by State, the regulations should not specify any of the
stakeholders that each State must include, leaving the determination of
necessary stakeholders to each State's discretion.
Some commenters suggested that States be required to include
representatives beyond those listed in the proposed rule. In this
regard, commenters stated that representatives of small teacher
preparation programs are needed to help the State to annually revisit
the aggregation of data for programs with fewer novice teachers than
the program size threshold, as would be required under proposed Sec.
612.4(b)(4)(ii). Some commenters recommended adding advocates for low-
income and underserved elementary and secondary school students. Some
commenters also stated that advocates for students of color, including
civil rights organizations, should be required members of the group. In
addition, commenters believed that the regulations should require the
inclusion of a representative of at least one teacher preparation
program provided through distance education, as distance education
programs will have unique concerns.
One commenter recommended adding individuals with expertise in
testing and assessment to the list of stakeholders. This commenter
noted, for example, that there are psychologists who have expertise in
aspects of psychological testing and assessment across the variety of
contexts in which psychological and behavioral tests are administered.
The commenter stated that, when possible, experts such as these who are
vested stakeholders in education should be consulted in an effort to
ensure the procedures for
[[Page 75546]]
assessing teacher preparation programs are appropriate and of high
quality, and that their involvement would help prevent potential
adverse, unintended consequences in these assessments.
Some commenters supported the need for student and parent input
into the process of establishing procedures for evaluating program
performance but questioned the degree to which elementary and secondary
school students and their parents should be expected to provide input
on the effectiveness of teacher preparation programs.
One commenter supported including representatives of school boards,
but recommended adding the word ``local'' before ``school boards'' to
clarify that the phrase ``school boards'' does not simply refer to
State boards of education.
Discussion: We believe that all States must consult with the core
group of individuals and entities that are most involved with, and
affected by, how teachers are prepared to teach. To ensure that this is
done, we have specified this core group of individuals and entities in
the regulations. We agree with the commenters that States should be
required to include, in the group of stakeholders with whom a State
must consult, representatives of small teacher preparation programs
(i.e., programs that produce fewer than the program size threshold of
25 novice teachers in a given year, or any lower threshold set by a
State, as described in Sec. 612.4(b)(3)(ii)). We agree that the participation of
representatives of small programs, as is required by Sec.
612.4(c)(ii)(D), is essential because one of the procedures for
assessing and reporting the performance of each teacher preparation
program that States must develop with stakeholders includes the
aggregation of data for small programs (Sec. 612.4(c)(1)(ii)(B)).
We also agree with commenters that States should be required to
include as stakeholders advocates for underserved students, such as
low-income students and students of color, who are not specifically
advocates for English learners and students with disabilities. Section
612.4(c)(ii)(I) includes these individuals, and they could be, for
example, representatives of civil rights organizations. To best meet
the needs of each State, and to provide room for States to identify
other groups of underserved students, the regulations do not specify
what those additional groups of underserved students must be.
We agree with the recommendation to require States to include a
representative of at least one teacher preparation program provided
through distance education in the group of stakeholders because teacher
preparation programs provided through distance education differ from
brick-and-mortar programs and warrant representation in the stakeholder
group. Under the final regulations, except for the
teacher placement rates, States collect information on those programs
and report their performance on the same basis as brick-and-mortar
programs. See the discussion of comment on Program-Level Reporting
(including distance education) (34 CFR 612.4(a)(1)(i)).
While a State may include individuals with expertise in testing and
assessment in the group of stakeholders, we do not require this because
States alternatively may either wish to consult with such individuals
through other arrangements, or have other means for acquiring
information in this area that they need.
Nonetheless, we encourage States to use their discretion to add
representatives from other groups to ensure that the processes for
developing their procedures and for assessing and reporting program
performance are fair and equitable.
We thank commenters for their support for our inclusion of
representatives of ``elementary through secondary students and their
parents'' in the consultative group. We included them because of the
importance of having teacher preparation programs focus on their
ultimate customers--elementary and secondary school students.
Finally, we agree that the regulation should clarify that the
school board representatives whom a State must include in its
consultative group of stakeholders are those of local school boards.
Similarly, we believe that the regulation should clarify that the
superintendents whom a State must include in the group of stakeholders
are LEA superintendents.
Changes: We have revised Sec. 612.4(c)(1)(i) to clarify that a
State must include representatives of small programs, advocates for
other groups of underserved students, representatives of local school
boards, LEA superintendents, and a representative of at least one
teacher preparation program provided through distance education in the
group with which the State must consult when establishing its
procedures.
Comments: Commenters recommended that States should not be required
to establish consequences (associated with a program's identification
as low-performing or at-risk of being low-performing), as required
under proposed Sec. 612.4(c)(1)(ii)(C), until after the phase-in of
the regulations. Commenters stated that, because errors will be made in
the calculation of data and in determining the weights associated with
specific indicators, States should be required to calculate, analyze,
and publish the data for at least two years before high-stakes
consequences are attached. Commenters believed that this would ensure
initial unintended consequences are identified and addressed before
programs are subject to high-stakes consequences. Commenters also
expressed concerns about the ability of States, under the proposed
timeline for implementation, to implement appropriate opportunities for
programs to challenge the accuracy of their performance data and
classification of their program under proposed Sec.
612.4(c)(1)(ii)(D).
Commenters also stated that the proposed requirement that the
procedures for assessing and reporting the performance of each teacher
preparation program in the State must include State-level rewards and
consequences associated with the designated performance levels is
inappropriate because the HEA does not require States to develop
rewards or consequences associated with the designated performance
levels of teacher preparation programs. Commenters also questioned the
amount of information about the State's fiscal status that States would
have to share with the group of stakeholders establishing the
procedures in order to determine what the rewards should be for
high-performing programs. Commenters noted that rewards are envisioned as financial in
nature, but States operate under tight fiscal constraints. Commenters
believed that States would not want to find themselves in an
environment where rewards could not be distributed yet consequences
(i.e., the retracting of monies) would ensue.
In addition, commenters were concerned about the lack of standards
in the requirement that States implement a process for programs to
challenge the accuracy of their performance data and classification.
Commenters noted that many aspects of the rating system carry the
potential for inaccurate data to be inputted or for data to be
miscalculated. Commenters noted that the proposed regulations do not
address how to ensure a robust and transparent appeals process for
programs to challenge their classification.
Discussion: We believe the implementation schedule for these final
regulations provides sufficient time for States to implement the
regulations, including the time necessary to develop the procedures for
assessing and reporting the performance of each
[[Page 75547]]
teacher preparation program in the State (see the discussion of
comments related to the implementation timeline for the regulations in
General (Timeline) (34 CFR 612.4(a)(1)(i)) and Reporting of Information
on Teacher Preparation Program Performance (Timeline) (34 CFR
612.4(b))). We note that States can use results from the pilot reporting
year, when States are not required to classify program performance, to
adjust their procedures. These adjustments could include the weighting
of indicators, the procedure for program challenges, and other changes
needed to ensure that unintended consequences are identified and
addressed before the consequences have high stakes for programs.
Additionally, under Sec. 612.4(c)(2), a State has the discretion to
determine how frequently it will periodically examine the quality of
the data collection and reporting activities it conducts, and States
may find it beneficial to examine and make changes to their systems
more frequently during the initial implementation stage.
The regulations do not require a State to have State-level rewards
or consequences associated with teacher preparation performance levels.
To the extent that the State does, Sec. 612.4(b)(2)(iii) requires a
State to provide that information in the SRC, and Sec.
612.4(c)(1)(ii)(C) requires the State to include those rewards or
consequences in the procedures for assessing and reporting program
performance it establishes in consultation with a representative group
of stakeholders in accordance with Sec. 612.4(c)(1)(i).
Certainly, whether a State can afford to provide financial rewards
is an essential consideration in the development of any State-level
rewards. We leave it up to each State to determine, in accordance with
any applicable State laws or regulations, the amount of information to
be shared in the development of any State-level rewards or
consequences.
As a part of establishing appropriate opportunities for teacher
preparation programs to challenge the accuracy of their performance
data and program classification, States are responsible for determining
the related procedures and standards, again in consultation with the
required representative group of stakeholders. We expect that these
procedures and standards will afford programs meaningful and timely
opportunities to appeal the accuracy of their performance data and
overall program performance level.
Changes: None.
Fair and Equitable Methods: State Examination of Data Collection and
Reporting (34 CFR 612.4(c)(2))
Comments: Commenters asserted that the proposed requirement for a
State to periodically examine the quality of its data collection and
reporting activities under proposed Sec. 612.4(c)(2) is insufficient.
The commenters contended that data collection and reporting activities
must be routinely and rigorously examined and analyzed to ensure
transparency and accuracy in the data and in the high-stakes results
that flow from the use of the data. According to these commenters,
State data systems are not at this time equipped to fully implement the
regulations, and thus careful scrutiny of the data collection--
especially in the early years of the data systems--is vital to ensure
that data from multiple sources are accurate, and, if they are not,
that modifications are made. Commenters also suggested that there
should be a mechanism to adjust measures when schools close or school
boundaries change as programs with smaller numbers of graduates
concentrated in particular schools could be significantly impacted by
these changes that are outside the control of teacher preparation
programs.
Discussion: The regulations do not specify how often a State must
examine the quality of its data collection and reporting activities and
make any appropriate modifications, requiring only that it be done
``periodically.'' We think that the frequency and extent of this review
is best left to each State, in consultation with its representative
group of stakeholders. We understand, as indicated by commenters, that
many State data systems are not currently ready to fully implement the
regulations, and therefore it is likely that such examinations and
modifications will need to be made more frequently during the
development stage than will be necessary once the systems have been in
place and operating for a while. As States have the discretion to
determine the frequency of their examinations and modifications, they
may establish triggers for examining and, if necessary, modifying their
procedures. This could include developing a mechanism to modify the
procedures in certain situations, such as where school closures and
school boundary changes may inadvertently affect certain teacher
preparation programs.
Changes: None.
Section 612.5 What indicators must a State use to report on teacher
preparation program performance for purposes of the State report card?
Indicators a State Must Use To Report on Teacher Preparation Programs
in the State Report Card (34 CFR 612.5(a))
Comments: Some commenters expressed support for the proposed
indicators, believing they may push States to hold teacher preparation
programs more accountable. Some commenters were generally supportive of
the feedback loop where teacher candidate placement, retention, and
elementary and secondary classroom student achievement results can be
reported back to the programs and published so that the programs can
improve.
In general, many commenters opposed the use of the indicators of
academic content knowledge and teaching skills in the SRC, stating that
these indicators are arbitrary, and that there is no empirical evidence
that connects the indicators to a quality teacher preparation program;
that the proposed indicators have never been tested or evaluated to
determine their workability; and that there is no consensus in research
or among the teaching profession that the proposed performance
indicators combine to accurately represent teacher preparation program
quality. Other commenters opined that there is no evidence that the
indicators selected actually represent program effectiveness, and
further stated that no algorithm would accurately reflect program
effectiveness and be able to connect those variables to a ranking
system. Many commenters expressed concern about the proposed assessment
system, stating that reliability and validity data are lacking. Some
commenters indicated that reporting may not need to be annual since
multi-year data are more reliable.
Commenters also stated that valid conclusions about teacher
preparation program quality cannot be drawn using data with
questionable validity and with confounding factors that cannot be
controlled at the national level to produce a national rating system
for teacher preparation programs. Many other commenters stated that
teacher performance cannot be equated with the performance of the
students they teach and that there are additional factors that impact
teacher preparation program effectiveness that have not been taken into
account by the proposed regulations. We interpret other comments as
expressing concern that use of the outcome indicators would not
necessarily help to ensure that teachers
[[Page 75548]]
are better prepared before entering the classroom.
Commenters stated that there are many potential opportunities for
measurement error in the outcome indicators and therefore the existing
data do not support a large, fully scaled implementation of this
accountability system. Commenters argued that the regulations extend an
untested performance assessment into a high-stakes realm by determining
eligibility for Federal student aid through assessing the effectiveness
of each teacher preparation program. One commenter stated that, in
proposing the regulations, the Department did not consider issues that
increase measurement error, and thus decrease the validity of
inferences that can be made about teacher quality. For example,
students who graduate but do not find a teaching job because they have
chosen to stay in a specific geographic location would essentially
count against a school and its respective ranking. Several commenters
suggested that we pilot the proposed system and assess its outcomes,
using factors that are flexible and contextualized within a narrative,
without high-stakes consequences until any issues in data collection
are worked out.
Discussion: We appreciate commenters' concerns about the validity
and reliability of the individual indicators of academic content
knowledge and teaching skill in the proposed regulations, as well as
the relationship between these indicators and the level of performance
of a teacher preparation program. However, we believe the commenters
misunderstood the point we were making in the preamble to the NPRM
about the basis for the proposed indicators. We were not asserting that
rigorous research studies had necessarily demonstrated the proposed
indicators--and particularly those for student learning outcomes,
employment outcomes, employment outcomes in high-need schools and
survey outcomes--to be valid and reliable. Where we believe that such
research shows one or more of the indicators to be valid and reliable,
we have highlighted those findings in our response to the comment on
that indicator. But our assertion in the preamble to the NPRM was that
use of these indicators would produce information about the
performance-level of each teacher preparation program that, speaking
broadly, is valid and reliable. We certainly did not say that these
indicators were necessarily the only measures that would permit the
State's identification of each program's level of performance to be
appropriate. And in our discussion of public comments we have clarified
that States are free to work with their consultative group (see Sec.
612.4(c)) to establish other measures the State would use as well.
In broad terms, validity here refers to the accuracy of these
indicators in measuring what they are supposed to measure, i.e., that
they collectively work to provide significant information about a
teacher preparation program's level of performance. Again, in broad
terms, reliability here refers to the extent to which these indicators
collectively can be used to assess a program's level of performance and
to yield consistent results.
For reasons we explain below, we believe it is important that
teacher preparation programs produce new teachers who positively impact
student academic success, take jobs as teachers and stay in the
profession at least three years, and feel confident about the training
the programs have provided to them. This is what these three indicators
in our final regulations do--and, by contrast, what is missing from the
criteria that States have to date reported in their SRCs as the basis
for assessing program performance.
We do not believe that State conclusions about the performance
levels of their teacher preparation programs can be valid or reliable
if they focus, as State criteria have done to date, on the inputs a
program offers, any more than an automobile manufacturer's safety and
performance testing can be valid or reliable if it pays no attention
to how the vehicles actually perform on the road.
Our final regulations give States, working with their stakeholders,
the responsibility for establishing procedures for ensuring that use of
these indicators, and such other indicators of academic content
knowledge and teaching skills and other criteria the State may
establish, permits the State to reasonably identify (i.e., with
reasonable validity and reliability) those teacher preparation programs
that are low-performing or at-risk of being low-performing. We
understand that, to do this, they will need to identify and implement
procedures for generating relevant data on how each program reflects
these measures and criteria, and for using those data to assess each
program in terms of its differentiated levels of performance. But we
have no doubt that States can do this in ways that are fair to entities
that are operating good programs while at the same time are fair to
prospective teachers, prospective employers, elementary and secondary
school students and their parents, and the general public--all of whom
rely on States to identify and address problems with low-performing or
at-risk programs.
We further note that by defining novice teacher to include a three-
year teaching period, which applies to the data collected for student
learning outcomes and employment outcomes, the regulations will have
States use data for these indicators of program performance over
multiple years.
Doing so will increase reliability of the overall level of performance
the State assigns to each program in at least two respects. First, it
will decrease the chance that one aberrational year of performance or
any given cohort of program graduates (or program participants in the
case of alternative route teacher preparation programs) has a
disproportionate effect on a program's performance. And second, it will
decrease the chance that the level of performance a State reports for a
program will be invalid or unreliable.
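To illustrate this reliability point concretely, consider the
following minimal sketch (written in Python, with entirely
hypothetical cohort counts that are not drawn from any State's data).
Pooling the numerators and denominators across three cohort years
weights each cohort by its size and damps the effect of a single
aberrational year on an employment-outcome rate:

    # Hypothetical per-cohort data: (employed as teachers, recent graduates)
    cohorts = {
        2015: (40, 50),   # 80 percent placement
        2016: (12, 45),   # roughly 27 percent; an aberrational year
        2017: (38, 48),   # roughly 79 percent placement
    }

    # Single-year rates swing widely because of the 2016 cohort.
    single_year_rates = {year: e / t for year, (e, t) in cohorts.items()}

    # Pooled three-year rate: sum numerators and denominators before
    # dividing, so no one cohort dominates the indicator.
    employed = sum(e for e, _ in cohorts.values())
    total = sum(t for _, t in cohorts.values())
    pooled_rate = employed / total    # 90 / 143, about 0.63

The same pooling logic applies to student learning outcomes computed
over a novice teacher's first three years in the classroom.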
We stress, however, that the student learning outcomes, employment
outcomes, and survey outcomes that the regulations require States to
use as indicators of academic content and teaching skills are not
simply measures that logically are important to assessing a program's
true level of performance. Rather, as we discuss below, we believe that
these measures are also workable, based on research, and reflective of
the direction in which many States and programs are going, even if not
reflecting an outright consensus of all teacher preparation programs.
In this regard, we disagree with the commenters' assertions that
these measures are arbitrary, lack evidence of support, and have not
been tested. The Department's decision to require use of these measures
as indicators of academic content knowledge and teaching skills is
reinforced by the adoption of similar indicators by CAEP,\15\ which
reviews over half of the Nation's teacher preparation programs--and by
the States of North Carolina, Tennessee, Ohio, and Louisiana, which
already report annually on indicators of teacher preparation program
performance based on data from State assessments. The recent GAO report
determined that more than half the States already utilize data on
program completers' effectiveness (such as surveys, placement rates,
and teacher evaluation results) in assessing
[[Page 75549]]
programs, with at least ten more planning to do so.\16\ These measures
also reflect input received from many non-Federal negotiators during
negotiated rulemaking. Taken together, we believe that the adoption of
these measures of academic content knowledge and teaching skills
reflects the direction in which the field is moving, and the current
use of similar indicators by several SEAs demonstrates their
feasibility.
---------------------------------------------------------------------------
\15\ CAEP 2013 Accreditation Standards, Standard 4, Indicator 4.
(2013). Retrieved from https://caepnet.org/standards/introduction.
Amended by the CAEP Board of Directors February 13, 2015.
\16\ GAO at 13-14.
---------------------------------------------------------------------------
We acknowledge that many factors account for the variation in a
teacher's impact on student learning. However, we strongly believe that
a principal function of any teacher preparation program is to train
teachers to promote the academic growth of all students regardless of
their personal and family circumstances, and that the indicators whose
use the regulations prescribe are already being used to help measure
programs' success in doing so. For example, Tennessee employs some of
the outcome measures that the regulations require, and reports that
some teacher preparation programs consistently produce teachers with
statistically significant student learning outcomes over multiple
years.\17\ Delaware also collects and reports data on the performance
and effectiveness of program graduates by student achievement and
reports differentiated student learning outcomes by teacher preparation
program.\18\ Studies of programs in Washington State \19\ and New York
City,\20\ as well as data from the University of North Carolina
system,\21\ also demonstrate that graduates of different teacher
preparation programs show statistically significant differences in
value-added scores. The same kinds of data from Tennessee and North
Carolina show large differences in teacher placement and retention
rates among programs. In Ohio \22\ and North Carolina, survey data also
demonstrate that, on average, graduates of teacher preparation programs
can have large differences in opinions of the quality of their
preparation for the classroom. And a separate study of North Carolina
teacher preparation programs found statistically significant
correlations between programs that collect outcomes data on graduates
and their graduates' value-added scores.\23\ These results reinforce
that teacher preparation programs play an important role in teacher
effectiveness, and so give prospective students and employers important
information about which teacher preparation programs most consistently
produce teachers who can best promote student academic achievement.
---------------------------------------------------------------------------
\17\ See Report Card on the Effectiveness of Teacher Training
Programs, Tennessee 2014 Report Card. (n.d.). Retrieved from
www.tn.gov/thec/article/report-card.
\18\ See 2015 Delaware Educator Preparation Program Reports.
(n.d.). Retrieved June 27, 2016 from www.doe.k12.de.us/domain/398.
\19\ Goldhaber, D., & Liddle, S. (2013). The Gateway to the
Profession: Assessing Teacher Preparation Programs Based on Student
Achievement. Economics of Education Review, 34, 29-44.
\20\ Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff,
J. (2009). Teacher Preparation and Student Achievement. Education
Evaluation and Policy Analysis, 31(4), 416-440.
\21\ See UNC Educator Quality Dashboard. (n.d.). Retrieved from
https://tqdashboard.northcarolina.edu/performance-employment/.
\22\ See, for example: 2013 Educator Preparation Performance
Report Adolescence to Young Adult (7-12) Integrated Mathematics Ohio
State University. Retrieved from https://regents.ohio.gov/educator-accountability/performance-report/2013/OhioStateUniversity/OHSU_IntegratedMathematics.pdf.
\23\ Henry, G., & Bastian, K. (2015). Measuring Up: The National
Council on Teacher Quality's Ratings of Teacher Preparation Programs
and Measures of Teacher Performance.
---------------------------------------------------------------------------
While we acknowledge that some studies of teacher preparation
programs \24\ find very small differences at the program level in
graduates' average effect on student outcomes, we believe that the
examples we have cited above provide a reasonable basis for States' use
of student learning outcomes weighted in ways that they have determined
best reflect the importance of this indicator. In addition, we believe
the data will help programs develop insights into how they can more
consistently generate high-performing graduates.
---------------------------------------------------------------------------
\24\ For example: C. Koedel, E. Parsons, M. Podgursky, & M.
Ehlert (2015). ``Teacher Preparation Programs and Teacher Quality:
Are There Real Differences Across Programs?'' Education Finance and
Policy, 10(4): 508-534; P. von Hippel, L. Bellows, C. Osborne, J.
Arnold Lincove, & N. Mills (2014). ``Teacher Quality Differences
Between Teacher Preparation Programs: How Big? How Reliable? Which
Programs Are Different?'' Retrieved from Social Science Research
Network, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2506935.
---------------------------------------------------------------------------
We have found little research one way or the other that directly
ties the performance of teacher preparation programs to employment
outcomes and survey outcomes. However, we believe that these other
measures--program graduates and alternative route program participants'
employment as teachers, retention in the profession, and perceptions
(with those of their employers) of how well their programs have trained
them for the classroom--strongly complement use of student learning
outcomes in that they help to complete the picture of how well programs
have really trained teachers to take and maintain their teaching
responsibilities.
We understand that research into how best to evaluate both teacher
effectiveness and the quality of teacher preparation programs
continues. To accommodate future developments in research that improve
a State's ability to measure program quality as well as State
perspectives of how the performance of teacher preparation programs
should best be measured, the regulations allow a State to include other
indicators of academic content knowledge and teaching skills that
measure teachers' effects on student performance (see Sec. 612.5(b)).
In addition, given their importance, while we strongly encourage States
to provide significant weight in particular to the student learning
outcomes and retention rate outcomes in high-need schools in their
procedures for assessing program performance, the Department has
eliminated the proposed requirements in Sec. 612.4(b)(1) that States
consider these measures ``in significant part.'' The change confirms
States' ability to determine how to weight each of these indicators to
reflect their own understanding of how best to assess program
performance and address any concerns with measurement error. Moreover,
the regulations offer States a pilot year, corresponding to the 2017-18
reporting year (for data States are to report in SRCs by October 31,
2018), in which to address and correct for any issues with data
collection, measurement error, validity, or reliability in their
reported data.
Use of these indicators by itself, of course, does not ensure that
novice teachers are prepared to enter the classroom. However, we
believe that the regulations, including the requirement for public
reporting on each indicator and criterion a State uses to assess a
program's level of performance, provide strong incentives for teacher
preparation programs to use the feedback from these measures to ensure
that the novice teachers they train are ready to take on their teaching
responsibilities when they enter the classroom.
We continue to stress that the data on program performance that
States report in their SRCs do not create and are not designed to
promote any kind of a national, in-State, or interstate rating system
for teacher preparation programs, and caution the public against using
reported data in this way. Rather, States will use reported data to
evaluate program quality based on the indicators of academic content
knowledge and teaching skills and other criteria of program performance
that they decide to use for this purpose. Of
[[Page 75550]]
course, the Department and the public at large will use the reported
information to gain confidence in State decisions about which programs
are low-performing and at-risk of being low-performing (and are at any
other performance level the State establishes) and the process and data
States use to make these decisions.
Changes: None.
Comments: Commenters stated that it is not feasible to collect and
report student learning outcomes or survey data separately by
credential program for science, technology, engineering, and
mathematics (STEM) programs in a meaningful way when only one science
test is administered, and teacher preparation program graduates teach
two or more science disciplines with job placements in at least two
fields.
Discussion: We interpret these comments to be about teacher
preparation programs that train teachers to teach STEM subjects. We
also interpret these comments to mean that certain conditions--
including the placement or retention of recent graduates in more than
one field, having only one statewide science assessment at the high
school level, and perhaps program size--may complicate State data
collection and reporting on the required indicators for preparation
programs that produce STEM teachers.
The regulations define the term ``teacher of record'' to clarify
that teacher preparation programs will be assessed on the aggregate
outcomes of novice teachers who are assigned the lead responsibility
for a student's learning in the subject area. In this way, although
they may generate more data for the student learning outcomes measure,
novice teachers who are teachers of record for more than one subject
area are treated the same as those who teach in only one subject area.
We do not understand why a science teacher whose district
administers only one examination is in a different position than a
teacher of any other subject. More important, science is not yet a
tested grade or subject under section 1111(b)(2) of the ESEA, as
amended by ESSA. Therefore, for the purposes of generating data on a
program's student learning outcomes, States that use the definition of
``student growth'' in Sec. 612.2 will determine student growth for
teacher preparation programs that train science teachers through use of
measures of student learning and performance that are rigorous,
comparable across schools, and consistent with State guidelines. These
might include student results on pre-tests and end-of-course tests,
objective performance-based assessments, and student learning
objectives.
To the extent that the comments refer to small programs that train
STEM teachers, the commenters did not indicate why our proposed
procedures for reporting data and levels of performance for small
teacher preparation programs did not adequately address their concerns.
For reasons we discussed in response to comments on aggregating and
then reporting data for small teacher preparation programs (Sec.
612.4(b)(3)(ii)), we believe the procedures the regulations establish
for reporting performance of small programs adequately address concerns
about program size.
Changes: None.
Comments: Commenters noted that the transition to new State
assessments may affect reporting on student learning outcomes and
stated that the proposed regulations fail to indicate when and how
States must use the results of State assessments during such a
transition for the purpose of evaluating teacher preparation program
quality.
Discussion: For various reasons, one or more States are often
transitioning to new State assessments, and this is likely to continue
as States implement section 1111(b)(2) of the ESEA, as amended by ESSA.
Therefore, transitioning to new State assessments should not impact a
State's ability to use data from these assessments as a measure of
student learning outcomes, since there are valid statistical methods
for determining student growth even during these periods of transition.
However, how this should occur is best left to each State that is going
through such a transition, just as it is best to leave to each State
whether to use another State-determined measure relevant to calculating
student learning outcomes as permitted by Sec. 612.5(a)(1)(ii)(C)
instead.
Changes: None.
Comments: Commenters recommended that the student learning outcomes
indicator take into account whether a student with disabilities uses
accommodations, and who is providing the accommodation. Another
commenter was especially concerned about special education teachers'
individualized progress monitoring plans created to evaluate a
student's progress on individualized learning outcomes. The commenter
noted that current research cautions against aggregation of student
data gathered with these tools for the purposes of teacher evaluation.
Discussion: Under the regulations, outcome data are reported on
``teachers of record,'' defined as teachers (including a teacher in a
co-teaching assignment) who have been assigned the lead responsibility
for a student's learning in a subject or course section. The teacher of
record for a class that includes students with disabilities who require
accommodations is responsible for the learning of those students, which
may include ensuring the proper accommodations are provided. We decline
to require, as data to be reported as part of the indicator, the number
of students with disabilities requiring special accommodations because
we assume that the LEA will meet its responsibilities to provide needed
accommodations, and out of consideration for the additional reporting
burden the proposal would place on States. However, States are free to
adopt this recommendation if they choose to do so.
In terms of gathering data about the learning outcomes for students
with disabilities, the regulations do not require the teacher of record
to use special education teachers' individualized monitoring plans to
document student learning outcomes but rather expect teachers to
identify, based on the unique needs of the students with disabilities,
the appropriate data source. However, we stress that this issue
highlights the importance of consultation with key stakeholders, like
parents of and advocates for students with disabilities, as States
determine how to calculate their student learning outcomes.
Changes: None.
Comments: Commenters recommended that the regulations establish the
use of other or additional indicators, including the new teacher
performance assessment edTPA, measures suggested by the Higher
Education Task Force on Teacher Preparation, and standardized
observations of teachers in the classroom. Some commenters contended
that a teacher's effectiveness can only be measured by mentor teachers
and university field instructors. Other commenters recommended applying
more weight to some indicators, such as students' evaluations of their
teachers, or increasing emphasis on other indicators, such as teachers'
scores on their licensure tests.
Discussion: We believe that the indicators of academic content
knowledge and teaching skills that the regulations require States to
use in assessing a program's performance (i.e., student learning
outcomes, employment outcomes, survey outcomes, and
[[Page 75551]]
information about basic aspects of the program) are the most important
such indicators in that, by focusing on a few key areas, they provide
direct information about whether the program is meeting its basic
purposes. We decline to require that States use additional or other
indicators like those suggested because we strongly believe they are
less direct measures of academic content knowledge and teaching skills
that would also add significant cost and complexity. However, we note
that if district evaluations of novice teachers use multiple valid
measures in determining performance levels that include, among other
things, data on student growth for all students, they are ``teacher
evaluation measures'' under Sec. 612.2. Therefore, Sec.
612.5(a)(1)(ii) permits the State to use and report the results of
those evaluations as student learning outcomes.
Moreover, under Sec. 612.5(b), in assessing the performance of
each teacher preparation program, a State may use additional indicators
of academic content and teaching skills of its choosing, provided the
State uses a consistent approach for all of its teacher preparation
programs and these additional indicators provide information on how the
graduates produced by the program perform in the classroom. In
consultation with their stakeholder groups, States may wish to use
additional indicators, such as edTPA, teacher classroom observations,
or student survey results, to assess teacher preparation program
performance.
As we addressed in our discussion of comments on Sec.
612.4(b)(2)(ii) (Weighting of Indicators), we encourage States to give
significant weight to student learning outcomes and employment outcomes
in high-need schools. However, we have removed from the final
regulations any requirement that States give special weight to these or
other indicators of academic content knowledge and teaching skills.
Thus, while States must include in their SRCs the weights they give to
each indicator and any other criteria they use to identify a program's
level of performance, each State has full authority to determine the
weighting it gives to each indicator or criterion.
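As an illustration only, since each State determines its own weights,
the normalization of each indicator to a common scale, and the cut
points between performance levels, a State's procedures might combine
weighted indicator scores into one of three performance categories
along the following lines (all values here are hypothetical):

    # Hypothetical State-chosen weights over normalized (0-1) indicators.
    weights = {
        "student_learning_outcomes": 0.40,
        "employment_outcomes": 0.30,
        "survey_outcomes": 0.20,
        "program_characteristics": 0.10,
    }

    def performance_level(scores):
        """Map a program's weighted composite score to a category,
        using hypothetical cut points a State might set."""
        composite = sum(weights[k] * scores[k] for k in weights)
        if composite < 0.40:
            return "low-performing"
        if composite < 0.55:
            return "at-risk of being low-performing"
        return "effective or higher"

    level = performance_level({
        "student_learning_outcomes": 0.55,
        "employment_outcomes": 0.70,
        "survey_outcomes": 0.60,
        "program_characteristics": 1.00,
    })    # composite 0.65, so "effective or higher"

Under the regulations, both the weights and the resulting
classification of each program would be reported in the SRC.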
Changes: None.
Comments: Some commenters expressed concerns that the regulations
permit the exclusion of some program graduates (e.g., those leaving the
State or taking jobs in private schools), thus providing an incomplete
representation of program performance. In particular, commenters
recommended using measures that capture the academic content knowledge
and teaching skills of all recent graduates, such as State licensure
test scores, portfolio assessments, student and parent surveys,
performance on the edTPA, and the rate at which graduates retake
licensure assessments (as opposed to pass rates).
Discussion: While the three outcome-based measures required by the
regulations assess the performance of program graduates who become
novice teachers, the requirement in Sec. 612.5(a)(4) for an indication
of either a program's specialized accreditation or that it provides
certain minimum characteristics examines performance based on multiple
input-based measures that apply to all program participants, including
those who do not become novice teachers. States are not required to
also assess teacher preparation programs on the basis of any of the
additional factors that commenters suggest, i.e., State licensure test
scores, portfolio assessments, student and parent surveys, performance
on the edTPA, and the rate at which graduates retake licensure
assessments. However, we note that IHEs must continue to include
information in their IRCs on the pass rates of a program's students on
assessments required for State certification. Furthermore, in
consultation with their stakeholders, States may choose to use the data
and other factors commenters recommend to help determine a program's
level of performance.
Changes: None.
Comments: One commenter recommended that the Department fund a
comprehensive five-year pilot of a variety of measures for assessing
the range of K-12 student outcomes associated with teacher preparation.
Discussion: Committing funds for research is outside the scope of
the regulations. We note that the Institute of Education Sciences and
other research organizations are conducting research on teacher
preparation programs that the Department believes will inform advances
in the field.
Changes: None.
Comments: Some commenters stated that a teacher preparation
program's cost of attendance and the average starting salary of the
novice teachers produced by the program should be included as mandatory
indicators for program ratings because these two factors, along with
student outcomes, would better allow stakeholders to understand the
costs and benefits of a specific teacher preparation program.
Discussion: Section 205(b)(1)(F) of the HEA requires each State to
identify in its SRC the criteria it is using to identify the
performance of each teacher preparation program within the State,
including its indicators of the academic knowledge and teaching skills
of the program's students. The regulations define these indicators to
include four measures that States must use for this purpose.
While we agree that information that helps prospective students
identify programs that offer a good value is important, the purpose of
sections 205(b)(1)(F) and 207(a) of the HEA, and thus our regulations,
is to have States identify and report on meaningful criteria that they
use to identify a program's level of performance--and specifically
whether the program is low-performing or at-risk of being low-
performing. While we encourage States to find ways to make information
on a program's costs available to the public, we do not believe the
information is sufficiently related to a program's level of performance
to warrant the additional costs of requiring States to report it. For
similar reasons, we decline to add this consumer information to the SRC
as additional data States need to report independent of its use in
assessing the program's level of performance.
Changes: None.
Comments: Multiple commenters stated that the teacher preparation
system in the United States should mirror that of other countries and
broaden the definition of classroom readiness. These commenters stated
that teacher preparation programs should address readiness within a
more holistic, developmental, and collective framework. Others stated
that the teacher preparation system should emphasize experiential and
community service styles of teaching and learning to increase student
engagement.
Discussion: While we appreciate commenters' suggestions that
teacher preparation programs should be evaluated using holistic
measures similar to those used by other countries, we decline to
include these kinds of criteria because we believe that the ability to
influence student growth and achievement is the most direct measure of
academic knowledge and teaching skills. However, the regulations permit
States to include indicators like those recommended by the commenters
in their criteria for assessing program performance.
Changes: None.
Comments: Commenters noted that post-graduation professional
development impacts a teacher's job performance in that there may be a
difference between teachers who continue to learn during their early
[[Page 75552]]
teaching years compared to those who do not, but that the proposed
regulations did not take this factor into account.
Discussion: By requiring the use of data from the first, second,
and third year of teaching, the student learning outcomes measure
captures improvements in the impact of teachers on student learning
made over the first three years of teaching. To the extent that
professional development received in the first three years of teaching
contributes to a teacher's impact on student learning, the student
learning outcomes measure may reflect it.
The commenters may be suggesting that student learning outcomes of
novice teachers are partially the consequence of the professional
development they receive, yet the proposed regulations seem to
attribute student learning outcomes solely to the teacher preparation
program. The preparation that novice teachers receive in their teacher
preparation programs, of course, is not the only factor that influences
student learning outcomes. But for reasons we have stated, the failure
of recent graduates as a whole to demonstrate positive student learning
outcomes is an indicator that something in the teacher preparation
program is not working. We recognize that novice teachers receive
various forms of professional development, but believe that high-
quality teacher preparation programs produce graduates who have the
knowledge and skills they need to earn positive reviews and stay in the
classroom regardless of the type of training they receive on the job.
Changes: None.
Comments: Commenters were concerned that the proposed regulations
would pressure States to rate some programs as low-performing even if
all programs in a State are performing adequately. Commenters noted
that the regulations need to ensure that programs are all rated on
their own merits, rather than ranked against one another--i.e.,
criterion-referenced rather than norm-referenced. The commenters
contended that, otherwise, programs would compete against one another
rather than work together to continually improve the quality of novice
teachers. Commenters stated that such competition could lead to further
isolation of programs rather than fostering the collaboration necessary
for addressing shortages in high-need fields.
Some commenters stated that although there can be differences in
traditional and alternative route programs that make comparison
difficult, political forces that are pro- or anti-alternative route
programs can attempt to make certain types of programs look better or
worse. Further, commenters noted that it will be difficult for the
Department to enforce equivalent levels of accountability and reporting
when differences exist across States' indicators and relative weighting
decisions.
Another commenter recommended that, to provide context, programs
and States should also report raw numbers in addition to rates for
these metrics.
Discussion: We interpret the comment on low-performing programs as
arguing that these regulations might be viewed as requiring a State to
rate a certain number of programs as low-performing regardless of their
performance. Section 207(a) of the HEA requires that States provide in
the SRCs an annual list of low-performing teacher preparation programs
and identify those programs that are at risk of being put on the list
of low-performing programs. While the regulations require States to
establish at least three performance categories (those two and all
other programs, which would therefore be considered effective or
higher), we encourage States also to differentiate between teacher
preparation programs whose performance is satisfactory and those whose
performance is truly exceptional. We believe that recognizing, and
where possible rewarding (see Sec. 612.4(c)(1)(ii)(C)), excellence
will help other programs learn from best practice and facilitate
program improvement of teacher preparation programs and entities.
Actions like these will encourage collaboration, especially in
preparing teachers to succeed in high-need areas.
However, we stress that the Department has no expectation or desire
that a State will designate a certain number or percentage of its
programs as low-performing or at-risk of being low-performing. Rather,
we want States to do what our regulations provide: Assess the level of
performance of each teacher preparation program based on what they
determine to be differentiated levels of performance, and report in the
SRCs (1) the data they secure about each program based on the
indicators and other criteria they use to assess program performance,
(2) the weighting of these data to generate the program's level of
performance, and (3) a list of programs it found to be low-performing
or at-risk of being low-performing. Beyond this, these regulations do
not create, and are not designed to promote, an in-State or inter-State
ranking system, or to rank traditional versus alternative route
programs based on the reported data.
We acknowledge that if they choose, States may employ growth
measures specifically based on a relative distribution of teacher
scores statewide, which could constitute a ``norm-referenced''
indicator. While these statewide scores may not improve on the whole,
an individual teacher preparation program's performance can still show
improvement (or declines) relative to average teacher performance in
the State. The Department notes that programs are evaluated on multiple
measures of program quality and the other required indicators can be
criterion-referenced. For example, a State may set a specific threshold
for retention rate or employer satisfaction that a program must meet to
be rated as effective. Additionally, States may decide to compare any
norm-referenced student learning outcomes, and other indicators, to
those of teachers prepared out of State to determine relative
improvement of teacher preparation programs as a whole.\25\ But whether
or not to take steps like these is purely a State decision.
---------------------------------------------------------------------------
\25\ See, for example, UNC Educator Quality Dashboard. (n.d.).
Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
---------------------------------------------------------------------------
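To make the distinction concrete, the following sketch (hypothetical
thresholds and values throughout) contrasts a criterion-referenced
check, which every program in a State can satisfy at once, with a
norm-referenced check against the statewide average, which some
programs will necessarily fall below:

    # Hypothetical program-level values.
    program_retention_rate = 0.78    # three-year retention of novice teachers
    program_mean_growth = 0.12       # program graduates' mean growth score
    statewide_mean_growth = 0.00     # growth measures are often centered at 0

    # Criterion-referenced: a fixed bar that every program can clear.
    meets_retention_bar = program_retention_rate >= 0.70

    # Norm-referenced: judged relative to the statewide distribution, so
    # some programs sit below the reference point by construction.
    above_state_average = program_mean_growth > statewide_mean_growth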
With respect to the recommendation that report cards include raw
numbers as well as rates attributable to the indicators and other
criteria used to assess program performance, Sec. 612.4(b)(2)(i)
requires the State to report data relative to each indicator identified
in Sec. 612.5. Section V of the instructions for the SRC asks for the
numbers and percentages used in the calculation of the indicators of
academic content knowledge and teaching skills and any other indicators
and criteria a State uses.
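A minimal sketch of such a report entry follows (the field names are
hypothetical and are not those of the SRC instructions); reporting the
counts alongside the rate lets readers judge how much weight a rate
deserves, particularly for small cohorts:

    # One indicator entry carrying both raw counts and the derived rate.
    indicator = {
        "name": "teacher_placement_rate",
        "numerator": 38,      # novice teachers employed
        "denominator": 48,    # recent graduates in the cohort
    }
    indicator["rate"] = indicator["numerator"] / indicator["denominator"]
    # rate is about 0.79, reported together with the counts behind it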
Changes: None.
Comments: Commenters contended that the proposed regulations do not
specifically address the skills enumerated in the definition of
``teaching skills.''
Discussion: The commenters are correct that the regulations do not
specifically address the various ``teaching skills'' identified in the
definition of the term in section 200(23) of the HEA. However, we
strongly believe that they do not need to do so.
The regulations require States to use four indicators of
academic content knowledge and teaching skills--student learning
outcomes, employment outcomes, survey outcomes, and minimum program
characteristics--in assessing the level of a teacher preparation
program's performance under sections 205(b)(1)(F) and 207(a) of the
HEA. In
[[Page 75553]]
establishing these indicators, we are mindful of the definition of
``teaching skills'' in section 200(23) of the HEA, which includes
skills that enable a teacher to increase student learning, achievement,
and the ability to apply knowledge, and to effectively convey and
explain academic subject matter. In both the NPRM and the discussion of
our response to comment on Sec. 612.5(a)(1)-(4), we explain why each
of the four measures is, in fact, a reasonable indicator of whether
teachers have academic content knowledge and teaching skills. We see no
reason the regulations need either to enumerate the definition of
teaching skills in section 200(23) or to expressly tie these indicators
to the statutory definition of one term included in ``academic content
knowledge and teaching skills''.
Changes: None.
Comments: Some commenters stated that the use of a rating system
with associated consequences is a ``test and punish'' accountability
model similar to the K-12 accountability system under the ESEA, as
amended by the No Child Left Behind Act (NCLB). They contended that
such a system limits innovation and growth within academia and denies
the importance of capacity building.
Discussion: We do not believe that the requirements the regulations
establish for the title II reporting system are punitive. The existing
HEA title II reporting framework has not provided useful feedback to
teacher preparation programs, prospective teachers, other stakeholders,
or the public on program performance. Until now, States have identified
few programs deserving of recognition or remediation. This is because
few of the criteria that States have to date reported using to assess
program performance under section 205(b)(1)(F) of the HEA rely on
information that examines program quality from the most critical
perspective--teachers' ability to impact student achievement once they
begin teaching. Given the importance of academic knowledge and teaching
skills, we are confident that the associated indicators in the
regulations will help provide more meaningful information about the
quality of these programs, which will then facilitate self-improvement
and, by extension, production of novice teachers better trained to help
students achieve once they enter the classroom.
Thus, the regulations address shortcomings in the current State
reporting system by defining indicators of academic content knowledge
and teaching skills, focusing on program outcomes that States will use
to assess program performance. The regulations build on current State
systems and create a much-needed feedback loop to facilitate program
improvement and provide valuable information to prospective teachers,
potential employers, the general public, and the programs themselves.
We agree that program innovation and capacity building are worthwhile,
and we believe that what States will report on each program will
encourage these efforts.
Under the regulations, teacher preparation programs whose graduates
(or participants, if they are teachers while being trained in an
alternative route program) do not demonstrate positive student learning
outcomes are not punished, nor are States required to punish programs.
To the extent that proposed Sec. 612.4(b)(2), which would have
permitted a program to be considered effective or higher only if the
teachers it produces demonstrate satisfactory or higher student
learning outcomes, raised concerns about the regulations seeming
punitive, we have removed that provision from the final regulations.
Thus, the regulations echo the requirements of section 207(a) of the
HEA, which requires that States annually identify teacher preparation
programs that are low-performing or that are at-risk of becoming low-
performing, and section 207(b) of the HEA, which prescribes the
consequences for a program from which the State has withdrawn its
approval or terminated its financial support. For a discussion of the
relationship between the State classification of teacher preparation
programs and TEACH Grant eligibility, see Sec. 686.2 regarding a TEACH
Grant-eligible program.
Changes: None.
Comments: None.
Discussion: When we removed the term ``new teacher'' and added the
term ``novice teacher,'' as discussed earlier in this document, it
became unclear for what period of time a State must report data related
to those teachers. To resolve this, we have clarified that a State may,
at its discretion, exclude from reporting those individuals who have
not become novice teachers within three years of becoming a ``recent
graduate,'' as defined in the regulations. We believe that requiring
States to report on individuals who become novice teachers more than
three years after those teachers graduated from a teacher preparation
program is overly burdensome and would not provide an accurate
reflection of teacher preparation program quality.
Changes: We have added Sec. 612.5(c) to clarify that States may
exclude from reporting under Sec. 612.5(a)(1)-(3) individuals who have
not become novice teachers within three years of becoming recent
graduates.
Student Learning Outcomes (34 CFR 612.5(a)(1))
Growth, VAM, and Other Methodological Concerns
Comments: Many commenters argued that the proposed definition of
``student learning outcomes'' invites States to use VAM to judge
teachers and teacher preparation programs. Those commenters argued that
because the efficacy of VAM is not established, the definition of
``student learning outcomes'' is not solidly grounded in research.
Discussion: For those States that choose to do so, the final
regulations permit States to use any measures of student growth for
novice teachers that meet the definitions in Sec. 612.2 in reporting
on a program's student learning outcomes. Their options include a
simple comparison of student scores on assessments between two points
in time for grades and subjects covered by section 1111(b)(2) of the
ESEA, as amended by ESSA, a range of options measuring student learning
and performance for non-tested grades and subjects (which can also be
used to supplement scores for tested grades and subjects), or more
complex statistical measures, like student growth percentiles (SGPs) or
VAM that control for observable student characteristics. A detailed
discussion of the use of VAM as a specific growth measure follows
below; the discussion addresses the use of VAM in student learning
outcomes, should States choose to use it. However, we also note that
the requirement for States to assess teacher preparation programs
based, in part, on student learning outcomes also allows States that
choose not to use student growth to use a teacher evaluation measure or
another State-determined measure relevant to calculating student
learning outcomes. Nothing in the final regulations requires the use of
VAM over other methodologies for calculating student growth,
specifically, or student learning outcomes, more broadly.
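For readers unfamiliar with these options, the following sketch uses
simulated data to contrast a simple two-point growth comparison with a
regression-based, VAM-style measure that controls for an observable
student characteristic. It is illustrative only; operational VAM
specifications are considerably richer than a single covariate, and
nothing here reflects any State's actual model:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    pretest = rng.normal(50.0, 10.0, n)
    covariate = rng.integers(0, 2, n)   # hypothetical student characteristic
    posttest = 5.0 + 0.9 * pretest - 2.0 * covariate + rng.normal(0.0, 5.0, n)

    # Option 1: simple comparison between two points in time (mean gain).
    simple_growth = float(np.mean(posttest - pretest))

    # Option 2: VAM-style residual. Regress posttest on pretest and the
    # observable characteristic; a teacher's "value added" is the mean
    # residual of that teacher's students.
    X = np.column_stack([np.ones(n), pretest, covariate])
    coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    residuals = posttest - X @ coef
    value_added = float(np.mean(residuals[:30]))   # one teacher's 30 students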
These comments also led us to see potential confusion in the
proposed definitions of student learning outcomes and student growth.
In reviewing the proposed regulations, we recognized that the original
structure of the definition of ``student learning outcomes'' could
cause confusion. We are concerned that having a definition for the
term, which was intended only to operationalize the other definitions
in the context of Sec. 612.5, was not the
[[Page 75554]]
clearest way to present the requirements. To clarify how student
learning outcomes are considered under the regulations, we have removed
the definition of ``student learning outcomes'' from Sec. 612.2, and
revised Sec. 612.5(a)(1) to incorporate, and operationalize, that
definition.
Changes: We have removed the definition of ``student learning
outcomes'' and revised Sec. 612.5(a)(1) to incorporate key aspects of
that proposed definition. In addition, we have provided States with the
option to determine student learning outcomes using another State-
determined measure relevant to calculating student learning outcomes.
Comments: Many commenters stated that the proposed student learning
outcomes would not adequately serve as an indicator of academic content
knowledge and teaching skills for the purpose of assessing teacher
preparation program performance. Commenters also contended that tests
only measure the ability to memorize and that several kinds of
intelligence and ways of learning cannot be measured by testing.
In general, commenters questioned the Department's basis for the
use of student learning outcomes as one measure of teacher preparation
program performance, citing research to support their claim that the
method of measuring student learning outcomes as proposed in the
regulations is neither valid nor reliable, and that there is no
evidence to support the idea that student outcomes are related to the
quality of the teacher preparation program attended by the teacher.
Commenters further expressed concerns about the emphasis on linking
children's test scores on mandated standardized tests to student
learning outcomes. Commenters also stated that teacher preparation
programs are responsible for only a small portion of the variation in
teacher quality.
Commenters proposed that aggregate teacher evaluation results be
the only measure of student learning outcomes so long as the State
teacher evaluations do not overly rely on results from standardized
tests. Commenters stated that in at least one State, teacher
evaluations cannot be used as part of teacher licensure decisions or to
reappoint teachers due to the subjective nature of the evaluations.
Some commenters argued that student growth cannot be defined as a
simple comparison of achievement between two points in time.
One commenter, who stated that the proposed regulatory approach is
thorough and aligned with current trends in evaluation, also expressed
concern that K-12 student performance (achievement) data are generally
a snapshot in time, typically the result of one standardized test, that
does not identify growth over time, the context of the test taking, or
other variables that impact student learning.
Commenters further cited research that concluded that student
achievement in the classroom is not a valid predictor of whether the
teacher's preparation program was high quality and asserted that other
professions do not use data in such a simplistic way.
Another commenter stated that local teacher evaluation instruments
vary significantly across towns and States.
Another commenter stated that student performance data reported in
the aggregate and by subgroups to determine trends and areas for
improvement is acceptable but should not be used to label or categorize
a school system, school, or classroom teacher.
Discussion: As discussed above, in the final regulations we have
removed the requirement that States consider student growth ``in
significant part,'' in their procedures for annually assessing teacher
preparation program performance. Therefore, while we encourage States
to use student growth as their measure of student learning outcomes and
to adopt such a weighting of student learning outcomes on their own,
our regulations give States broad flexibility to decide how to weight
student learning outcomes in consultation with stakeholders (see Sec.
612.4(c)), with the aim of it being a sound and reasonable indicator of
teacher preparation program performance. Similarly, we decline
commenters' suggestions to restrict the measure of student learning
outcomes to only aggregated teacher evaluation results, in order to
maintain that flexibility. With our decision to permit States to use
their own State-determined measure relevant to calculating student
learning outcomes rather than student growth or a teacher evaluation
measure, we have provided even more State flexibility in calculating
student learning outcomes than commenters had requested.
As we have previously stated, we intend the use of all indicators
of academic content knowledge and teaching skills to produce
information about the performance level of each teacher preparation
program that, speaking broadly, is valid and reliable. It is clear from
the comments we received that there is not an outright consensus on
using student learning outcomes to help measure teacher preparation
program performance; however, we strongly believe that a program's
ability to prepare teachers who can positively influence student
academic achievement is both an indicator of their academic content
knowledge and teaching skills, and a critical measure for assessing a
teacher preparation program's performance. Student learning outcomes
therefore belong among multiple measures States must use. We continue
to highlight growth as a particularly appropriate way to measure a
teacher's effect on student learning because it takes a student's prior
achievement into account, gives a teacher an opportunity to demonstrate
success regardless of the student characteristics of the class, and
therefore reflects the contribution of the teacher to student learning.
Even where student growth is not used, producing teachers who can make
a positive contribution to student learning should be a fundamental
objective of any teacher preparation program and the reason why it
should work to provide prospective teachers with academic content and
teaching skills. Hence, student learning outcomes, as we define them in
the regulations, associated with each teacher preparation program are
an important part of an assessment of any program's performance.
States therefore need to collect data on student learning
outcomes--through either student growth that examines the change in
student achievement in both tested and non-tested grades and subjects,
a teacher evaluation measure as defined in the regulations, or another
State-determined measure relevant to calculating student learning
outcomes--and then link these data to the teacher preparation program
that produced (or in the case of an alternative route program, is
producing) these teachers.
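By way of illustration only, the following minimal sketch (written in
Python, using invented records and hypothetical field names such as
program_id; nothing here is prescribed by the regulations) shows one
way a State data system might link novice teachers' aggregate student
learning outcomes to the preparation programs that produced them:

    # Invented records: each novice teacher carries an aggregate student
    # learning outcome score and the identifier of the preparation
    # program that produced (or is producing) the teacher.
    teacher_records = [
        {"teacher_id": "t1", "program_id": "prog_A", "outcome": 0.125},
        {"teacher_id": "t2", "program_id": "prog_A", "outcome": -0.25},
        {"teacher_id": "t3", "program_id": "prog_B", "outcome": 0.5},
        {"teacher_id": "t4", "program_id": "prog_B", "outcome": 0.25},
    ]

    def aggregate_by_program(records):
        """Link each teacher's outcome to a program and average within programs."""
        by_program = {}
        for r in records:
            by_program.setdefault(r["program_id"], []).append(r["outcome"])
        return {p: sum(v) / len(v) for p, v in by_program.items()}

    print(aggregate_by_program(teacher_records))
    # {'prog_A': -0.0625, 'prog_B': 0.375}

Whatever measure a State selects, the essential steps are the same:
attach an outcome to each novice teacher and roll the outcomes up to
the program level.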
In so doing, States may, if they wish, choose to use statistical
measures of growth, like VAM or student growth percentiles, that
control for student demographics that are typically associated with
student achievement. There are multiple examples of the use of similar
student learning outcomes in existing research and State reporting.
Tennessee, for example, reports that some teacher preparation programs
consistently exhibit statistically significant differences in student
learning outcomes over multiple years, indicating that scores are
reliable from one year to the next.\26\ Studies from Washington State
\27\ and New York
[[Page 75555]]
City \28\ also find statistically significant differences in the
student learning outcomes of teachers from different teacher
preparation programs as does the University of North Carolina in how it
assesses its own teacher preparation programs.\29\ Moreover, a
teacher's effect on student growth is commonly used in education
research and evaluation studies conducted by the Institute of Education
Sciences as a valid measure of the effectiveness of other aspects of
teacher training, like induction or professional development.\30\
---------------------------------------------------------------------------
\26\ See Report Card on the Effectiveness of Teacher Training
Programs, Tennessee 2014 Report Card. (n.d.). Retrieved from
www.tn.gov/thec/article/report-card.
\27\ D. Goldhaber & S. Liddle (2013). ``The Gateway to the
Profession: Assessing Teacher Preparation Programs Based on Student
Achievement.'' Economics of Education Review, 34: 29-44.
\28\ D. Boyd, P. Grossman, H. Lankford, S. Loeb, & J. Wyckoff.
(2009). Teacher Preparation and Student Achievement. Education
Evaluation and Policy Analysis, 31(4), 416-440.
\29\ See UNC Educator Quality Dashboard. (n.d.). Retrieved from
https://tqdashboard.northcarolina.edu/performance-employment/.
\30\ See for example, S. Glazerman, E. Isenberg, S. Dolfin, M.
Bleeker, A. Johnson, M. Grider & M. Jacobus. (2010). Impacts of
comprehensive teacher induction: Final results from a randomized
controlled study (NCEE 2010-4027). Washington, DC: National Center
for Education Evaluation and Regional Assistance, Institute of
Education Sciences, U.S. Department of Education.
---------------------------------------------------------------------------
While some studies of teacher preparation programs \31\ in other
States have not found statistically significant differences at the
preparation program level in graduates' effects on student outcomes, we
believe that there are enough examples of statistically significant
differences in program performance on student learning outcomes to
justify their inclusion in the SRC. In addition, because even these
studies show a wide range of individual teacher effectiveness within a
program, using these data can provide new insights that can help
programs to produce more consistently high-performing graduates.
---------------------------------------------------------------------------
\31\ Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M.
(2015). Teacher Preparation Programs and Teacher Quality: Are There
Real Differences Across Programs? Education Finance and Policy,
10(4), 508-534.
---------------------------------------------------------------------------
Moreover, looking at the related issue of educator evaluations,
there is debate about the level of reliability and validity of the
individual elements used in different teacher evaluation systems.
However, there is evidence that student growth can be a useful and
effective component in teacher evaluation systems. For example, a study
found that dismissal threats and financial incentives based partially
upon growth scores positively influenced teacher
performance.\32\ \33\ In addition, there is evidence that
combining multiple measures, including student growth, into an overall
evaluation result for a teacher can produce a more valid and reliable
result than any one measure alone.\34\ For these reasons, this
regulation and Sec. 612.5(b) continue to give States the option of
using teacher evaluation systems based on multiple measures that
include student growth to satisfy the student learning outcomes
requirement.
---------------------------------------------------------------------------
\32\ Dee, T., & Wyckoff, J. (2015). Incentives, Selection, and
Teacher Performance: Evidence from IMPACT. Journal of Policy
Analysis and Management, 34(2), 267-297. doi:10.3386/w19529.
\33\ Henry, G., & Bastian, K. (2015). Measuring Up: The National
Council on Teacher Quality's Ratings of Teacher Preparation Programs
and Measures of Teacher Performance.
\34\ Mihaly, K., McCaffrey, D., Staiger, D., & Lockwood, J.
(2013, January 8). A Composite Estimator of Effective Teaching.
---------------------------------------------------------------------------
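To illustrate the kind of composite the studies describe--with
invented component names and weights, since the regulations prescribe
neither--a weighted combination of multiple measures might be computed
as in the following Python sketch:

    # Illustrative composite: combine component scores (each on a 0-100
    # scale) into one rating using State-chosen weights. The weights and
    # component names are invented, not drawn from the regulations.
    weights = {"observation": 0.50, "student_growth": 0.25, "survey": 0.25}

    def composite_score(components, weights):
        """Weighted average of the component scores."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(weights[k] * components[k] for k in weights)

    teacher = {"observation": 78.0, "student_growth": 62.0, "survey": 70.0}
    print(composite_score(teacher, weights))  # 72.0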
Teacher preparation programs may well only account for some of the
variation in student learning outcomes. However, this does not absolve
programs from being accountable for the extent to which their graduates
positively impact student achievement. Thus, while the regulations are
not intended to address the entire scope of student achievement or all
factors that contribute to student learning outcomes, the regulations
focus on student learning outcomes as an indicator of whether or not
the program is performing properly. In doing so, we expect that a
greater focus on student learning outcomes will give States and teacher
preparation programs the benefit of some basic data about where their
work to provide all students with academic content knowledge and
teaching skills needs to improve.
Changes: None.
Comments: Other commenters stated that there are many additional
factors that can impact student learning outcomes that were not taken
into account in the proposed regulations; that teacher evaluation is
incomplete without taking into account the context in which teachers
work on a daily basis; and that VAM only account for some contextual
factors. Commenters stated that any proposed policies to directly link
student test scores to teacher evaluation and teacher preparation
programs must recognize that schools and classrooms are situated in a
broader socioeconomic context.
Commenters pointed out that not all graduates from a specific
institution or program will be teaching in similar school contexts and
that many factors influencing student achievement cannot be controlled
for between testing intervals. Commenters also cited other contributing
factors to test results that are not in a teacher's control, including
poverty and poverty-related stress; inadequate access to health care;
food insecurity; the student's development, family, home life, and
community; the student's background knowledge; the available resources
in the school district and classroom; school leadership, school
curriculum, students not taking testing situations seriously; and
school working conditions. Commenters also noted that students are not
randomly placed into classrooms or schools, and are often grouped by
socioeconomic class and subject to linguistic segregation, which
influences test results.
Discussion: Many commenters described unmeasured or poorly measured
student and classroom characteristics that might bias the measurement
of student outcomes and noted that students are not randomly assigned
to teachers. These are valid concerns and many of the factors stated
are correlated with student performance.
However, teacher preparation programs should prepare novice
teachers to be effective and successful in all classroom environments,
including in high-need schools. It is for this reason, as well as to
encourage States to highlight successes in these areas, that we include
as indicators of academic content knowledge and teaching skills,
placement and retention rates in high-need schools.
In addition, States and school districts can control for different
kinds of student and classroom characteristics in the ways in which
they determine student learning outcomes (and student growth). States
can, for example, control for school level characteristics like the
concentration of low-income students in the school and in doing so
compare teachers who teach in similar schools. Evidence cited below
that student growth, as measured by well-designed statistical models,
captures the causal effects of teachers on their students also suggests
that measures of student growth can successfully mitigate much of
potential bias, and supports the conclusion that non-random sorting of
students into classrooms does not cause substantial bias in student
learning outcomes. We stress, however, that the decision to use such
controls and other statistical measures to control for student and
school characteristics in calculating student learning outcomes is up
to States in consultation with their stakeholder groups.
Changes: None.
Comments: Commenters contended that although the proposed
regulations offer States the option of using a teacher evaluation
measure in lieu of, or in addition to, a student growth measure, this
option does not provide a real alternative because it also requires
that
[[Page 75556]]
the three performance levels in the teacher evaluation measure include,
as a significant factor, data on student growth, and student growth
relies on student test scores. Also, while the regulations provide that
evaluations need not rely on VAM, commenters suggested that VAM will
drive teacher effectiveness determinations because student learning is
assessed either through student growth (which includes the use of VAM)
or teacher evaluation (which is based in large part on student growth),
so there really is no realistic option besides VAM. Commenters also
stated that VAM requirements in Race to the Top and ESEA flexibility,
along with State-level legislative action, create a context in which
districts are compelled to use VAM.
A large number of commenters stated that research points to the
challenges and ineffectiveness of using VAM to evaluate both teachers
and teacher preparation programs, and asserted that the data collected
will be neither meaningful nor useful. Commenters also stated that use
of VAM for decision-making in education has been discredited by leading
academic and professional organizations such as the American
Statistical Association (ASA),\35\ the American Educational Research
Association, and the National Academy of Education.\36\ \37\
Commenters provided research in support of their arguments, asserting
in particular ASA's contention that VAM do not meet professional
standards for validity and reliability when applied to teacher
preparation programs. Commenters voiced concerns that VAM typically
measure correlation and not causation, often citing the ASA's
assertions. Commenters also contended that student outcomes have not
been shown to be correlated with, much less predictive of, good
teaching; VAM scores and rankings can change substantially when a
different model or test is used, and variation among teachers accounts
for a small part of the variation in student test scores. One commenter
stated that student learning outcomes are not data but target skills
and therefore the Department incorrectly defined ``student learning
outcomes.'' We interpret this comment to mean that the tests that may
form the basis of student growth measures assess only certain skills
rather than longer-term student outcomes.
---------------------------------------------------------------------------
\35\ American Statistical Association. (2014). ASA Statement on
Using Value-Added Models for Educational Assessment: www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf.
\36\ American Educational Research Association (AERA) and National
Academy of Education. (2011). Getting teacher evaluation right: A
brief for policymakers. Washington, DC: AERA.
\37\ Feuer, M. J., Floden, R. E., Chudowsky, N., & Ahn, J.
(2013). Evaluation of Teacher Preparation Programs: Purposes,
Methods, and Policy Options. Washington, DC: National Academy of
Education.
---------------------------------------------------------------------------
Many commenters also noted that value-added models of student
achievement are developed and normed to test student achievement, not
to evaluate educators, so using these models to evaluate educators is
invalid because the tests have not been validated for that purpose.
Commenters further noted that value-added models of student achievement
tied to individual teachers should not be used for high-stakes,
individual-level decisions or comparisons across highly dissimilar
schools or student populations.
Commenters stated that in psychometric terms, VAM are not reliable.
They contended that it is a well-established principle that reliability
is a necessary but not sufficient condition for validity. If judgments
about a teacher preparation program vary based on the method of
estimating value-added scores, inferences made about programs cannot be
trusted.
Others noted Edward Haertel's \38\ conclusion that no statistical
manipulation can assure fair comparisons of teachers working in very
different schools, with very different students, under very different
conditions. Commenters also noted Bruce Baker's conclusions that even a
20 percent weight to VAM scores can skew results too much. Thus,
according to the commenters, though the proposed regulations permit
States to define what is ``significant'' for the purposes of using
student learning outcomes ``in significant part,'' unreliable and
invalid VAM scores end up with at least a 20 percent weight in teacher
evaluations.
---------------------------------------------------------------------------
\38\ Haertel, E. (2013). Reliability and Validity of Inferences
about Teachers Based on Student Test Scores. The 14th William H.
Angoff Memorial Lecture, March 22. Princeton, NJ: Educational
Testing Service. Retrieved from www.ets.org/Media/Research/pdf/PICANG14.pdf.
---------------------------------------------------------------------------
Discussion: The proposed definition of teacher evaluation measure
in Sec. 612.2 did provide that student growth be considered in
significant part, but we have removed that aspect of the definition of
teacher evaluation measure from the final regulations. Moreover, we
agree that use of such an evaluation system may have been required, for
example, in order for a State to receive ESEA flexibility, and States
may still choose to consider student growth in significant part in a
teacher evaluation measure. However, not only are States not required
to include growth ``in significant part'' in a teacher evaluation
measure used for student learning outcomes, but Sec. 612.5(a)(1)(ii)
clarifies that States may choose to measure student learning outcomes
without using student growth at all.
On the use of VAM specifically, we reiterate that the regulations
permit multiple ways of measuring student learning outcomes without use
of VAM; if they use student growth, States are not required to use VAM.
We note also that use of VAM was not a requirement of Race to the Top,
nor was it a requirement of ESEA flexibility, although many States that
received Race to the Top funds or ESEA flexibility committed to using
statistical models of student growth based on test scores. We also
stress that in the context of these regulations, a State that chooses
to use VAM and other statistical measures of student growth would use
them to help assess the performance of teacher preparation programs as
a whole. Neither the proposed nor final regulations address, as many
commenters stated, how or whether a State or district might use the
results of a statistical model for individual teachers' evaluations and
any resulting personnel actions.
Many States and districts currently use a variety of statistical
methods in teacher, principal, and school evaluation, as well as in
State accountability systems. VAM are one such way of measuring student
learning outcomes that are used by many States and districts for these
accountability purposes. While we stress that the regulations do not
require or anticipate the use of VAM to calculate student learning
outcomes or teacher evaluation measures, we offer the following summary
of VAM in view of the significant amount of comments the Department
received on the subject.
VAM are statistical methodologies developed by researchers to
estimate a teacher's unique contribution to growth in student
achievement, and are used in teacher evaluation and evaluation of
teacher preparation programs. Several experimental and quasi-
experimental studies conducted in a variety of districts have found
that VAM scores can measure the causal impact teachers have on student
learning.\39\ There is also
[[Page 75557]]
strong evidence that VAM measure more than a teacher's ability to
improve test scores; a recent paper found that teachers with higher VAM
scores improved long term student outcomes such as earnings and college
enrollment.\40\ While tests often measure specific skills, these long-
term effects show that measures of student growth are, in fact,
measuring a teacher's effect on student outcomes rather than simple,
rote memorization, test preparation on certain target skills, or a
teacher's performance based solely on one specific student test. VAM
have also been shown to consistently measure teacher quality over time
and across different kinds of schools. A well-executed, randomized
controlled trial found that, after the second year, elementary school
students taught by teachers with high VAM scores who were induced to
transfer to low-performing schools had higher reading and mathematics
scores than students taught by comparison teachers in the same kinds of
schools.\41\
---------------------------------------------------------------------------
\39\ For example: Kane, T., & Staiger, D. (2008). Estimating
teacher impacts on student achievement: An experimental evaluation.
doi:10.3386/w14607; Kane, T., McCaffrey, D., Miller, T., & Staiger,
D. (2013). Have We Identified Effective Teachers? Validating
Measures of Effective Teaching Using Random Assignment; Bacher-
Hicks, A., Kane, T., & Staiger, D. (2014). Validating Teacher Effect
Estimates Using Changes in Teacher Assignment in Los Angeles
(Working Paper No. 20657). Retrieved from National Bureau of
Economic Research Web site: www.nber.org/papers/w20657; Chetty,
et al. at 2633-2679 and 2593-2632.
\40\ Chetty, et al. at 2633-2679.
\41\ Glazerman, S., Protik, A., Teh, B., Bruch, J., & Max, J.
(2013). Transfer incentives for high-performing teachers: Final
results from a multisite randomized experiment (NCEE 2014-4003).
Washington, DC: National Center for Education Evaluation and
Regional Assistance, Institute of Education Sciences, U.S.
Department of Education. https://files.eric.ed.gov/fulltext/ED544269.pdf.
---------------------------------------------------------------------------
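For readers unfamiliar with the mechanics, the following deliberately
simplified sketch (in Python, on invented data) conveys the general
idea behind such models: regress current achievement on prior
achievement and a student characteristic, then average each teacher's
residuals. It is not any State's actual model, and operational VAM are
far more elaborate:

    import numpy as np

    # Simulate invented data for four teachers.
    rng = np.random.default_rng(0)
    n = 200
    prior = rng.normal(50, 10, n)                  # prior-year achievement
    poverty = rng.integers(0, 2, n)                # 1 = low-income student
    teacher = rng.integers(0, 4, n)                # teacher ids 0-3
    true_effect = np.array([2.0, 0.0, -1.0, 3.0])  # simulated teacher effects
    current = (5 + 0.9 * prior - 2 * poverty
               + true_effect[teacher] + rng.normal(0, 3, n))

    # Regress current scores on prior scores and the poverty indicator.
    X = np.column_stack([np.ones(n), prior, poverty])
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    residuals = current - X @ beta

    # A teacher's mean residual serves as a crude value-added estimate.
    for t in range(4):
        print(t, round(float(residuals[teacher == t].mean()), 2))

Controlling for prior achievement and student characteristics in this
way is what distinguishes a growth-based estimate from a simple
snapshot of achievement levels.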
The Department therefore disagrees with commenters who state that
the efficacy of VAM is not grounded in sound research. We believe that
VAM is commonly used as a component in many teacher evaluation systems
precisely because the method minimizes the influence of observable
factors independent of the teacher that might affect student
achievement growth, like student poverty levels and prior levels of
achievement.
Several commenters raised important points to consider with using
VAM for teacher evaluation. Many pointed to the April 8, 2014, ``ASA
Statement on Using Value-Added Models for Educational Assessment,''
cited in the summary of comments, which makes several reasonable
recommendations regarding the use of VAM, including its endorsement of
wise use of data, statistical models, and designed experiments for
improving the quality of education. We believe that the definitions of
``student learning outcomes'' and ``student growth'' in the
regulations are fully compatible with valid and reliable ways of
including VAM to assess the impact of teachers on student academic
growth. Therefore, States that choose to use VAM to generate student
learning outcomes would have the means to do what the ASA study
recommends: Use data and statistical models to improve the quality of
their teacher preparation programs. The ASA also wisely cautions that
VAM are complex statistical models that necessitate high levels of
statistical expertise to develop and run, and that reported results
should include estimates of the model's precision. These specific
recommendations are entirely
consistent with the regulations, and we encourage States to follow them
when using VAM.
We disagree, however, with the ASA and commenters' assertions that
VAM typically measure correlation, not causation, and that VAM do not
measure teacher contributions toward other student outcomes. These
assertions contradict the evidence cited above that VAM do measure the
causal effects of teachers on student achievement, and that teachers
with high VAM scores also improve long-term student outcomes.
The implication of the various studies we cited in this section is
clear; not only can VAM identify teachers who improve short- and long-
term student outcomes, but VAM can play a substantial role in
effective, useful teacher evaluation systems.
However, as we have said, States do not need to use VAM to generate
student learning outcomes. Working with their stakeholders States can,
if they choose, establish other means of reporting a teacher
preparation program's ``student learning outcomes'' that meet the basic
standard in Sec. 612.5(a)(1).
Changes: None.
Comments: Two commenters suggested that the United States
Government Accountability Office (GAO) do an analysis and suggest
alternatives to VAM.
Discussion: The Secretary of Education has no authority to direct
GAO's work, so these comments are outside the Department's authority,
and the scope of the regulations.
Changes: None.
Comments: Several commenters opined that it is not fair to measure
new teachers in the manner proposed in the regulations because it takes
new teachers three to five years to become good at their craft. Other
commenters mentioned that value-added scores cannot be generated until
at least two years after a teacher candidate has graduated.
Discussion: We recognize the importance of experience in a
teacher's development. However, while teachers can be expected to
improve in effectiveness throughout their first few years in the
classroom, under Sec. 612.5(a)(1) a State is not using student
learning outcomes to measure or predict the future or long-term
performance of any individual teacher. It is using student learning
outcomes to measure the performance of the teacher preparation program
that the novice teacher completed--performance that, in part, should be
measured in terms of a novice teacher's ability to achieve positive
student learning outcomes in the first year the teacher begins to
teach.
We note, however, that there is strong evidence that early career
performance is a significant predictor of future performance. Two
studies have found that growth scores in the first two years of a
teacher's career, as measured by VAM, better predict future performance
than measured teacher characteristics that are generally available to
districts, such as a teacher's pathway into teaching, available
credentialing scores and SAT scores, and competitiveness of
undergraduate institution.\42\ Given that early career performance is a
good predictor of future performance, it is reasonable to use early
career results of the graduates of teacher preparation programs as an
indicator of the performance of those programs. These studies also
demonstrate that VAM scores can be calculated for first-year teachers.
---------------------------------------------------------------------------
\42\ Atteberry, A., Loeb, S., & Wyckoff, J. (2015). Do first
impressions matter? Improvement in early career teacher
effectiveness. American Educational Research Association (AERA)
Open; Goldhaber, D., & Hansen, M. (2010). Assessing the Potential
of Using Value-Added Estimates of Teacher Job Performance for Making
Tenure Decisions. Working Paper 31. National Center for Analysis of
Longitudinal Data in Education Research.
---------------------------------------------------------------------------
Moreover, even if States choose not to use VAM results as student
growth measures, the function of teacher preparation programs is to
train teachers to be ready to teach when they enter the classroom. We
believe student learning outcomes should be measured early in a
teacher's career, when the impact of their preparation is likely to be
the strongest. However, while we urge States to give significant weight
to their student outcome measures across the board, the regulations
leave to each State how to weight the indicators of academic content
knowledge and teaching skills for novice teachers in their first and
other years of teaching.
Changes: None.
Differences Between Accountability and Improvement
Comments: Commenters stated that the Department is confusing
accountability with improvement by requiring both data on, and
accountability for, programs. Several commenters
[[Page 75558]]
remarked that VAM will not guarantee continuous program improvement.
Discussion: The regulations require States to use the indicators of
academic content knowledge and teaching skills identified in Sec.
612.5(a), which may include VAM if a State chooses, to determine the
performance level of each teacher preparation program, to report the
data generated for each program, and to provide a list of which
programs the State considers to be low-performing or at-risk of being
low-performing. In addition, reporting the data the State uses to
measure student learning outcomes will help States, IHEs, and other
entities with teacher preparation programs to determine where their
program graduates (or program participants in the case of alternative
route to teaching programs) are or are not succeeding in increasing
student achievement. No information available to those operating
teacher preparation programs, whether from VAM or another source, can,
on its own, ensure the programs' continuous improvement. However, those
operating teacher preparation programs can use data on a program's
student learning outcomes--along with data from employment outcomes,
survey outcomes, and characteristics of the program--to identify key
areas for improvement and focus their efforts. In addition, the
availability of these data will provide States with key information in
deciding what technical assistance to provide to these programs.
Changes: None.
Consistency
Comments: One commenter noted that the lack of consistency in
assessments at the State level--which we understand to mean assessments
of students across LEAs within the same State--will make the
regulations almost impossible to operationalize. Another commenter
noted that the comparisons will be invalid, unreliable, and inherently
biased in favor of providers that enjoy State sponsorship and are most
likely to receive favorable treatment under a State-sponsored
assessment schema (which we understand to mean ``scheme''). Until there
is a common State assessment--which we understand to mean a common
assessment of students across States--the commenter argued that any
evaluation of teachers using student progress and growth will be
variable at best.
Discussion: We first note that, regardless of the assessments a
State uses to calculate student learning outcomes, the definition of
student growth in Sec. 612.2 requires that such assessments be
comparable across schools and consistent with State policies. While
comparability across LEAs is not an issue for assessments administered
pursuant to section 1111(b)(2) of the ESEA--which are administered
statewide--other assessments used by the State for purposes of
calculating student growth may not be identical, but they are required
to be comparable. As such, we do not
believe that LEA-to-LEA or school-to-school variation in the particular
assessments that are administered should inherently bias the
calculation of student learning outcomes across teacher preparation
programs.
Regarding comparability across States in the assessments
administered to students, nothing in this regulation requires such
comparability, and we believe such a requirement would infringe upon
the discretion States have historically been provided under the ESEA in
determining State standards, assessments, and curricula.
We understand the other comment to question the validity of
comparisons of teacher preparation program ratings, as reported in the
SRC. We continue to stress that the data regarding program performance
reported in the SRCs and required by the regulations do not create, or
intend to promote, any in-State or inter-State ranking system. Rather,
we anticipate that States will use reported data to evaluate program
performance based on State-specific weighting.
Changes: None.
Special Populations and Untested Subjects
Comments: Two commenters stated that VAMs will have an unfair
impact on special education programs. Another commenter stated that for
certain subjects, such as music education, it is difficult for students
to demonstrate growth.
One commenter stated that there are validity issues with using
tests to measure the skills of deaf children since standardized tests
are based on hearing norms and may not be applicable to deaf children.
Another commenter noted that deaf and hard-of-hearing K-12 students
almost always fall below expected grade level standards, impacting
student growth and, as a result, teacher preparation program ratings
under our proposed regulations. In a similar vein, one commenter
expressed concern that teacher preparation programs that prepare
teachers of English learners may be unfairly branded as low-performing
or at-risk because the students are forced to conform to tests that are
neither valid nor reliable for them.
Discussion: The Department is very sensitive to the different
teaching and learning experiences associated with students with
disabilities (including deaf and hard-of-hearing students) and English
learners, and encourages States to use student learning outcome
measures that allow teachers to demonstrate positive impact on student
learning outcomes regardless of the prior achievement or other
characteristics of students in their classroom. Where States use the
results of assessments or other tests for student learning outcomes,
such measures must also take into account the appropriate testing
accommodations provided to students, which allow them to demonstrate
content mastery rather than reflecting specific disabilities or
language barriers.
We expect that these measures of student learning outcomes and
other indicators used in State systems under this regulation will be
developed in consultation with key stakeholders (see Sec. 612.4(c)),
and be based on measures of achievement that conform to student
learning outcomes as described in Sec. 612.5(a)(1)(ii).
Changes: None.
Comments: Several commenters cited a study \43\ stating unintended
consequences associated with the high-stakes use of VAM, which emerged
through teachers' responses. Commenters stated that the study revealed,
among other things, that teachers felt heightened pressure and
competition. This reduced morale and collaboration, and encouraged
cheating or teaching to the test.
---------------------------------------------------------------------------
\43\ Collins, C. (2014). Houston, we have a problem: Teachers
find no value in the SAS education value-added assessment system
(EVAAS[supreg]), Education Policy Analysis Archives, 22(98).
---------------------------------------------------------------------------
Some commenters stated that, by in effect telling teacher
preparation programs that their graduates should engage in behaviors
that lift their students' test scores, the regulations' likely main
effect will be classrooms more directly committed to test preparation
(and to what the psychometric community calls score inflation) than to
the advancement of a comprehensive education.
Discussion: The Department is sensitive to issues of pressure on
teachers to artificially raise student assessment scores, and
perceptions of some teachers that this emphasis on testing reduces
teacher morale and collaboration. However, States and LEAs have
responsibility to ensure that test data are monitored for cheating and
other forms of manipulation, and we have no reason to believe that the
regulations will increase these incidents. With regard to reducing
teacher morale and collaboration, value-
[[Page 75559]]
added scores are typically calculated statewide for all teachers in a
common grade and subject. Because teachers are compared to all
similarly situated teachers statewide, it is very unlikely that a
teacher could affect her own score by refusing to collaborate with
other teachers in a single school. We encourage teachers to collaborate
across grades, subjects, and schools to improve their practice, but
also stress that the regulations use student learning outcomes only to
help assess the performance of teacher preparation programs. Under the
regulations, where a State does not use student growth or teacher
evaluation data already gathered for purposes of an LEA educator
evaluation, data related to student learning outcomes are used only to
help assess the quality of teacher preparation programs, and not the
quality of individual teachers.
Changes: None.
Comments: Commenters were concerned that the regulations will not
benefit high-need schools and communities because the indicator for
student learning outcomes creates a disincentive for programs to place
teachers in high-need schools and certain high-need fields, such as
English as a Second Language. In particular, commenters expressed
concern about the requirements that student learning outcomes be given
significant weight and that a program have satisfactory or higher
student learning outcomes in order to be considered effective.
Commenters expressed particular concern in these areas with regard to
Historically Black Colleges and Universities and other programs whose
graduates, the commenters stated, are more likely to work in high-need
schools.
Commenters opined that, to avoid unfavorable outcomes, teacher
preparation programs will seek to place their graduates in higher-
performing schools. Rather than encouraging stronger partnerships,
commenters expressed concern that programs will abandon efforts to
place graduates in low-performing schools. Others were concerned that
teachers will self-select out of high-need schools, and a few
commenters noted that high-performing schools will continue to have the
most resources while teacher shortages in high-need schools, such as
those in Native American communities, will be exacerbated.
Some commenters stated that it was unfair to assess a teacher
preparation program based on, as we interpret the comment, the student
learning outcomes of the novice teachers produced by the program
because the students taught by novice teachers may also receive
instruction from other teachers who may have more than three years of
experience teaching.
Discussion: As we have already noted, under the final regulations,
States are not required to apply special weight to any of the
indicators of academic content knowledge and teaching skills. Because
of their special importance to the purpose of teacher preparation
programs, we strongly encourage, but do not require, States to include
employment outcomes for high-need schools and student learning outcomes
in significant part when assessing teacher preparation program
performance. We also encourage, but do not require, States to identify
the quality of a teacher preparation program as effective or higher if
the State determined that the program's graduates produce student
learning outcomes that are satisfactory or higher.
For the purposes of the regulations, student learning outcomes may
be calculated using student growth. Because growth measures the change
in student achievement between two or more points in time, the prior
achievement of students is taken into account. Teacher preparation
programs may thus be assessed, in part, based on their recent
graduates' efforts to increase student growth, not on whether the
teachers' classrooms contained students who started as high or low
achieving. For this reason, teachers--regardless of the academic
achievement level of the students they teach--have the same opportunity
to positively impact student growth. Likewise, teacher preparation
programs that place students in high-need schools have the same
opportunity to achieve satisfactory or higher student learning
outcomes. These regulations take into account the commenters' concerns
related to teacher equity, as placement and retention rates in
high-need schools are required metrics.
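To illustrate with invented numbers how growth takes prior achievement
into account, a simple gain score credits the teacher of an initially
low-achieving class and the teacher of an initially high-achieving
class equally when their students improve by the same amount (actual
State growth models may be far more sophisticated):

    # Two invented classrooms: one starts low-achieving, one
    # high-achieving. A simple gain score (spring minus fall) credits
    # each teacher with the same growth despite different levels.
    class_low = {"fall": [20, 25, 30], "spring": [30, 35, 40]}
    class_high = {"fall": [70, 75, 80], "spring": [80, 85, 90]}

    def mean_gain(classroom):
        gains = [s - f for f, s in zip(classroom["fall"], classroom["spring"])]
        return sum(gains) / len(gains)

    print(mean_gain(class_low), mean_gain(class_high))  # 10.0 10.0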
We recognize that many factors influence student achievement.
Commenters who note that students taught by novice teachers may also
receive instruction from other teachers who may have more than three
years of experience teaching cite but one factor. But the objective in
having States use student growth as an indicator of the performance of
a teacher preparation program is not to finely calculate how novice
teachers impact student growth. As we have said, it is, rather, to have
the State determine whether a program's student learning outcomes are
so far from the mark as to be an indicator of poor program performance.
For these reasons, we disagree with commenters that the student
learning outcomes measure will discourage preparation programs and
teachers from serving high-need schools. We therefore decline to make
changes to the regulations.
Changes: None.
Comments: Commenters expressed concern with labeling programs as
low-performing if student data are not made available about such
programs. The commenters stated that this may lead to identifying high-
quality programs as low-performing. They were also concerned about
transparency, and noted that it would be unfair to label any program
without actual information on how that label was earned.
Discussion: We interpret the commenters' concern to be that States
may not be able to report on student learning outcomes for particular
teacher preparation programs because districts do not provide data on
student learning outcomes, and yet still identify programs as
low-performing. In response, we clarify that the State is responsible for
securing the information needed to report on each program's student
learning outcomes. Given the public interest in program performance and
the interest of school districts in having better information about the
programs in which prospective employees have received their training,
we are confident that each State can influence its school districts to
get maximum cooperation in providing needed data.
Alternatively, to the extent that the commenter was referring to
difficulties obtaining data for student learning outcomes (or other
indicators of academic content knowledge and teaching skills) because of the
small size of the teacher preparation programs, Sec. 612.4(b)(3)(ii)
provides different options for aggregation of data so the State can
provide these programs with appropriate performance ratings. In this
case, except for teacher preparation programs that are so small that
even these aggregation methods will not permit the State to identify a
performance level (see Sec. 612.4(b)(3)(ii)(D) and Sec. 612.4(b)(5)),
all programs will have data on student learning outcomes with which to
determine the program's level of performance.
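As a purely illustrative sketch of one such aggregation option--the
minimum count of 25 below is invented, not drawn from the
regulations--a State might pool a small program's most recent annual
cohorts until a minimum size is reached:

    # Pool a small program's annual cohorts, most recent first, until
    # the combined count reaches a minimum size (threshold illustrative).
    MIN_N = 25
    cohorts = {2019: 9, 2020: 11, 2021: 8}  # novice teachers per year

    def pooled_years(cohorts, min_n):
        """Return the most recent years whose combined count meets min_n."""
        years, total = [], 0
        for year in sorted(cohorts, reverse=True):
            years.append(year)
            total += cohorts[year]
            if total >= min_n:
                break
        return sorted(years), total

    print(pooled_years(cohorts, MIN_N))  # ([2019, 2020, 2021], 28)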
Changes: None.
State and Local Concerns
Comments: Several commenters expressed concerns about their
specific State laws regarding data collection as they affect data
needed for student learning outcomes. Other commenters noted that some
States have specific laws preventing aggregated student
[[Page 75560]]
achievement data from being reported for individual teachers. One
commenter said that its State did not require annual teacher
evaluations. Some commenters indicated that State standards should be
nationally coordinated.
One commenter asked the Department to confirm that the commenter's
State's ESEA flexibility waiver would meet the student learning outcome
requirements for both tested and non-tested grades and subjects, and if
so, given the difficulty and cost, whether the State would still be
required to report disaggregated data on student growth in assessment
test scores for individual teachers, programs, or entities in the SRC.
Commenters also noted that LEAs could be especially burdened, with no
corresponding State or Federal authority to compel LEA compliance. A
commenter stated that in one city most teachers have 20 to 40 percent
of their evaluations based on tests in subjects they do not teach.
Commenters urged that States be given flexibility in determining
the components of data collection and reporting systems with minimal
common elements; anything more prescriptive would, as commenters
indicated, ultimately delay the State's ability to make valid and
reliable determinations of teacher preparation program quality. Some
commenters stated that States
should be required to use student learning outcomes as a factor in
performance designations, but allow each State to determine how best to
incorporate these outcomes into accountability systems.
Commenters noted that a plan for creating or implementing a measure
of student achievement in content areas for which States do not have
valid statewide achievement data was not proposed, nor was a plan
proposed to pilot or fund such standardized measures.
Discussion: We agree and understand that some States may have to
make changes (including legislative, regulatory, budgetary, etc.) in
order to comply with the regulations. We have allowed time for these
activities to take place, if necessary, by providing time for data
system set-up and piloting before full State reporting is required as
of October 31, 2019. We note that Sec. 612.4(b)(4)(ii)(E) of the
proposed regulations and Sec. 612.4(b)(5) of the final regulations
expressly exempt reporting of data where doing so would violate Federal
or State privacy laws or regulations. We also provide in Sec.
612.4(c)(2) that States must periodically examine the quality of the
data collection and make adjustments as necessary. So if problems
arise, States need to work on ways to resolve them.
Regarding the suggestion that State standards for student learning
outcomes should be nationally coordinated, States are free to
coordinate. But how each State assesses a program's performance is a
State decision; the HEA does not otherwise provide for such national
coordination.
With respect to the comment asking whether a State's ESEA
flexibility waiver would meet the student learning outcomes requirement
for both tested and non-tested grades and subjects, this issue is
likely no longer relevant since the enactment of the ESSA made ESEA
flexibility waivers null and void on August 1, 2016. However, in
response to the commenter's question, so long as the State is
implementing the evaluation systems as they committed to do in order to
receive ESEA flexibility, the data it uses for student learning
outcomes would most likely represent an acceptable way, among other
ways, to comply with the title II reporting requirements.
We understand the comment, that LEAs would be especially burdened
with no corresponding State or Federal authority to compel LEA
compliance, to refer to LEA financial costs. It is unclear that LEAs
would be so burdened. We believe that our cost estimates, as revised to
respond to public comment, are accurate. Therefore, we also believe
that States, LEAs, and IHEs will be able to meet their responsibilities under
this reporting system without need for new funding sources. We discuss
authorities related to LEA compliance in the discussion under Sec.
612.1.
Regarding specific reporting recommendations for State flexibility
in the use of student learning outcomes, States must use the indicators of
academic content knowledge and teaching skills identified in Sec.
612.5(a). However, States otherwise determine for themselves how to use
these indicators and other indicators and criteria they may establish
to assess a program's performance. In identifying the performance level
of each program, States also determine the weighting of all indicators
and criteria they use to assess program performance.
Finally, we understand that all States are working to implement
their responsibilities to provide results of student assessments for
grades and subjects in which assessments are required under section
1111(b)(2) of the ESEA, as amended by ESSA. With respect to the comment
that the Department did not propose a plan for creating or implementing
a measure of student achievement in content areas for which States do
not have valid statewide achievement data, the regulations give States
substantial flexibility in how they measure student achievement.
Moreover, we do not agree that time to pilot such new assessments or
growth calculations, or more Federal funding in this area, is needed.
Changes: None.
Permitted Exclusions From Calculation of Student Learning Outcomes
Comments: None.
Discussion: In proposing use of student learning outcomes for
assessing a teacher preparation program's performance, we had intended
that States be able, in their discretion, to exclude student learning
outcomes associated with recent graduates who take teaching positions
out of State or in private schools--just as the proposed regulations
would have permitted States to do in calculating employment outcomes.
Our discussion of costs associated with implementation of student
learning outcomes in the NPRM (79 FR 71879) noted that the proposed
regulations permitted the exclusion for teachers teaching out of State.
And respectful of the autonomy accorded to private schools, we never
intended that States be required to obtain data on student learning
outcomes regarding recent graduates teaching in those schools.
However, upon review of the definitions of the terms ``student
achievement in non-tested grades and subjects,'' ``student achievement
in tested grades and subjects,'' and ``teacher evaluation measure'' in
proposed Sec. 612.2, we realized that these definitions did not
clearly authorize States to exclude student learning outcomes
associated with these teachers from their calculation of a teacher
preparation program's aggregate student learning outcomes. Therefore,
we have revised Sec. 612.5(a)(1) to include authority for the State to
exclude data on student learning outcomes for students of novice
teachers teaching out of State or in private schools from its
calculation of a teacher preparation program's student learning
outcomes. In doing so, as with the definitions of teacher placement
rate and teacher retention rate, we have included in the regulations a
requirement that the State use a consistent approach with regard to
omitting or using these data in assessing and reporting on all teacher
preparation programs.
Changes: We have revised section 612.5(a)(1) to provide that in
calculating a teacher preparation program's aggregate student learning
outcomes, at its discretion a State may exclude student learning
outcomes of students taught by novice teachers teaching out
[[Page 75561]]
of State or in private schools, or both, provided that the State uses a
consistent approach to assess and report on all of the teacher
preparation programs in the State.
Employment Outcomes (34 CFR 612.5(a)(2))
Measures of Employment Outcomes
Comments: Many commenters suggested revisions to the definition of
``employment outcomes.'' Some commenters mentioned that the four
measures included in the definition (placement rates, high-need school
placement rates, retention rates, and high-need school retention rates)
are not appropriate measures of a program's success in preparing
teachers. One commenter recommended that high-need school placement
rates not be included as a required program measure, and that instead
the Department allow States to use it at their discretion. Other
commenters recommended including placement and retention data for
preschool teachers in States where, for the statewide preschool
program, postsecondary training and certification are required and the
State licenses those educators.
Discussion: For several reasons, we disagree with commenters that
the employment outcome measures are inappropriate measures of teacher
preparation program quality. The goals of any teacher preparation
program should be to provide prospective teachers with the skills and
knowledge needed to pursue a teaching career, remain successfully
employed as a teacher, and in doing so produce teachers who meet the
needs of LEAs and their students. Therefore, the rate at which a
program's graduates become and remain employed as teachers is a
critical indicator of program quality.
In addition, programs that persistently produce teachers who fail
to find jobs, or, once teaching, fail to remain employed as teachers,
may well not be providing the level of academic content knowledge and
teaching skills that novice teachers need to succeed in the classroom.
Working with their stakeholders (see Sec. 612.4(c)), each State will
determine the point at which the reported employment outcomes for a
program go from the acceptable to the unacceptable, the latter
indicating a problem with the quality of the program. We fully believe
that these outcomes reflect another reasonable way to define an
indicator of academic content knowledge and teaching skills, and that
unacceptable employment outcomes show something is wrong with the
quality of preparation the teaching candidates have received.
Further, we believe that given the need for teacher preparation
programs to produce teachers who are prepared to address the needs of
students in high-need schools, it is reasonable and appropriate that
indicators of academic content and teaching skills used to help assess
a program's performance focus particular attention on teachers in those
schools. Therefore, we do not believe that States should have the
option of whether to include teacher placement rates (and teacher
retention rates) for high-need schools in their SRCs.
We agree with commenters that, in States where postsecondary
training and certification are required for preschool teachers and the
State licenses those teachers, data on their placement and retention
should be reported.
information. However, we decline to require that they do so because
pre-kindergarten licensure and teacher evaluation requirements vary
significantly between States and among settings, and given these State
and local differences in approach we believe that it is important to
leave the determination of whether and how to include preschool
teachers in this measure to the States.
Changes: None.
Teacher Placement Rate
Comments: One commenter recommended that the teacher placement rate
account for ``congruency,'' which we interpret to mean whether novice
teachers are teaching in the grade level, grade span, and subject area
in which they were prepared. The commenter noted that teacher
preparation programs that are placing teachers in out-of-field
positions are not aligning with districts' staffing needs. In addition,
we understand the commenter was noting that procedures LEAs use for
filling vacancies with teachers from alternative route programs need to
acknowledge the congruency issue and build in a mechanism to remediate
it.
Discussion: We agree that teachers should be placed in a position
for which they have content knowledge and are prepared. For this
reason, the proposed and final regulations define ``teacher placement
rate'' as the percentage of recent graduates who have become novice
teachers (regardless of retention) for the grade level, grade span, and
subject area in which they were prepared, except, as discussed in the
section titled ``Alternative Route Programs,'' we have revised the
regulations to provide that a State is not required to calculate a
teacher placement rate for alternative route to certification programs.
While we do not agree that teacher preparation programs typically place
teachers in their teaching positions, programs that do not work to
ensure that novice teachers obtain employment as teachers in a grade
level, span, or subject area that is the same as that for which they
were prepared will likely fare relatively poorly on the placement rate
measure.
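The following minimal sketch (in Python, on invented records)
illustrates a placement-rate calculation of this kind, under which a
recent graduate counts as placed only when teaching in the field for
which he or she was prepared; the regulations do not prescribe this
particular formula:

    # Invented records: a graduate counts as placed only if teaching in
    # the field for which he or she was prepared.
    graduates = [
        {"prepared": "math", "teaching": "math"},
        {"prepared": "biology", "teaching": "biology"},
        {"prepared": "english", "teaching": None},       # not teaching
        {"prepared": "math", "teaching": "science"},     # out of field
    ]

    def placement_rate(grads):
        placed = sum(1 for g in grads
                     if g["teaching"] is not None
                     and g["teaching"] == g["prepared"])
        return 100.0 * placed / len(grads)

    print(placement_rate(graduates))  # 50.0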
We disagree with the commenter's suggestion that alternative route
program participants are teaching in out-of-field positions. Employment
as a teacher is generally a prerequisite to entry into alternative
route programs, and the alternative route program participants are
being prepared for an initial certification or licensure in the field
in which they are teaching. We do not know of evidence to suggest that
most participants in alternative route programs become teachers of
record without first having demonstrated adequate subject-matter
content knowledge in the subjects they teach.
Nonetheless, traditional route programs and alternative route
programs recruit from different groups of prospective teachers and have
different characteristics. It is for this reason that, both in our
proposed and final regulations, States are permitted to assess the
employment outcomes of traditional route programs versus alternative
route programs differently, provided that the different assessments
result in equivalent standards of accountability and reporting.
Changes: None.
Teacher Retention Rate
Comments: Many commenters expressed concern that the teacher
retention rate measure does not consider other factors that influence
retention, including induction programs, the support novice teachers
receive in the classroom, and the districts' resources. Other
commenters suggested requiring each State to demand from its accredited
programs a 65 percent retention rate after five years.
Some commenters also expressed concern about how the retention rate
measure will be used to assess performance during the first few years
of implementation. They stated that it would be unfair to rate teacher
preparation programs without complete information on retention rates.
Discussion: We acknowledge that retention rates are affected by
factors outside the teacher preparation program's control. However, we
believe that a teacher retention rate that is extraordinarily low, just
as one that is
[[Page 75562]]
extraordinarily high, is an important indicator of the degree to which
a teacher preparation program adequately prepares teachers to teach in
the schools that hire them and thus is a useful and appropriate
indicator of academic content knowledge and teaching skills that the
State would use to assess the program's performance. The regulations
leave to the States, in consultation with their stakeholders (see Sec.
612.4(c)), the determination about how they calculate and then weight a
program's retention rate. While we agree that programs should strive
for high retention rates, and encourage States to set rigorous
performance goals for their programs, we do not believe that the
Department should set a specific desired rate for this indicator.
Rather, we believe the States are best suited to determine how to
implement and weight this measure. However, we retain the proposal to
have the retention rate apply over the first three years of teaching
both because we believe that having novice teachers remain in teaching
for the first three years is key, and because having States continue to
generate data five years out as the commenter recommended is
unnecessarily burdensome.
We understand that, during the initial years of implementation,
States will not have complete data on retention. We expect that States
will weigh indicators for which data are unavailable during these
initial implementation years in a way that is consistent and applies
equivalent levels of accountability across programs. For further
discussion of the reporting cycle and implementation timeline, see
Sec. 612.4(a). We also note that, as we explain in our response to
comments on the definition of ``teacher retention rate'', under the
final regulations States will report on teachers who remain in the
profession in the first three consecutive years after placement.
Changes: None.
Comments: Commenters expressed concern that the categories of
teachers who can be excluded from the ``teacher placement rate''
calculation are different from those who can be excluded from the
``teacher retention rate'' calculation. Commenters believed this could
unfairly affect the rating of teacher preparation programs.
Discussion: We agree that differences in the categories of teachers
who can be excluded from the ``teacher placement rate'' calculation and
the ``teacher retention rate'' calculation should not result in an
inaccurate portrayal of teacher preparation program performance on
these measures. Under the proposed regulations, the categories of
teachers who could be excluded from these calculations would have been
the same with two exceptions: Novice teachers who are not retained
specifically and directly due to budget cuts may be excluded from the
calculation of teacher retention rate only, as may recent graduates who
have taken teaching positions that do not require State certification.
A teacher placement rate captures whether a recent graduate has ever
become a novice teacher and therefore is reliant on initial placement
as a teacher of record. Retention in a teaching position has no bearing
on this initial placement, and therefore allowing States to exclude
teachers from the placement rate who were not retained due to budget
cuts would not be appropriate. Therefore, the option to exclude this
category of teachers from the retention rate calculation does not
create inconsistencies between these measures.
However, permitting States to exclude from the teacher placement
rate calculation, but not from the teacher retention rate calculation,
recent graduates who have taken teaching positions that do not require
State certification could create inconsistencies between the measures.
Moreover, upon further review, we believe permitting the exclusion of
this category of teachers from either calculation runs contrary to the
purpose of the regulations, which is to assess the performance of
programs that lead to an initial State teacher certification or
licensure in a specific field. For these reasons, the option to exclude
this category of teachers has been removed from the definition of
``teacher placement rate'' in the final regulations (see Sec. 612.2).
With this change, the differences between the categories of teachers
that can be excluded from teacher placement rate and teacher retention
rate will not unfairly impact the outcomes of these measures, so long
as the State uses a consistent approach to assess and report on all
programs in the State.
Changes: None.
Comments: Commenters stated that the teacher retention rate
measure would reflect poorly on special education teachers, who have a
high turnover rate, and on the programs that prepare them. They argued
that, in response to the regulations, some institutions will reduce or
eliminate their special education preparation programs rather than risk
low ratings.
Discussion: Novice special education teachers have chosen their
area of specialization, and their teacher preparation programs trained
them consistent with State requirements. The percentage of these
teachers, like teachers trained in other areas, who leave their area of
specialization within their first three years of teaching, or leave
teaching completely, is too high on an aggregated national basis.
We acknowledge that special education teachers face particular
challenges, and that like other teachers, there are a variety of
reasons--some dealing with the demands of their specialty, and some
dealing with a desire for other responsibilities, or personal factors--
for novice special education teachers to decide to move to other
professional areas. For example, some teachers with special education
training, after initial employment, may choose to work in regular
education classrooms, where many children with disabilities are taught
consistent with the least restrictive environment provisions of the
Individuals with Disabilities Education Act. Their specialized training
can be of great benefit in the regular education setting.
Under our regulations, States will determine how to apply the
teacher retention indicator, and so determine in consultation with
their stakeholders (see Sec. 612.4(c)) what levels of retention would
be so unreasonably low (or so unexpectedly high) as to reflect on the
quality of the teacher preparation program. We believe this State
flexibility will incorporate consideration of the programmatic quality
of special education teacher preparation and the general circumstances
of employment of these teachers. Special education teachers are
teachers first and foremost, and we do not believe the programs that
train special education teachers should be exempted from the State's
overall calculations of their teacher retention rates. Demand for
teachers trained in special education is expected to remain high, and
given the flexibility States have to determine what is a reasonable
retention rate for novice special education teachers, we do not believe
that this indicator of program quality will result in a reduction of
special education preparation programs.
Changes: None.
Placement in High-Need Schools
Comments: Many commenters noted that incentivizing the placement of
novice teachers in high-need schools contradicts the ESEA requirement
that States work against congregating novice teachers in high-need
schools. The ``Excellent Educators for All'' \44\ initiative asks
States to work to ensure
[[Page 75563]]
that high-need schools obtain and retain more experienced teachers.
Commenters believed States would be challenged to meet the
contradictory goals of the mandated rating system and the Department's
other initiatives.
---------------------------------------------------------------------------
\44\ Equitable Access to Excellent Educators: State Plans to
Ensure Equitable Access to Excellent Educators. (2014). Retrieved
from https://www2.ed.gov/programs/titleiparta/resources.html.
---------------------------------------------------------------------------
Discussion: The required use of teacher placement and retention
rates (i.e., our employment rate outcomes) is intended to provide data
that confirm the extent to which those whom a teacher preparation
program prepares go on to become novice teachers and remain in teaching
for at least three years. Moreover, placement rates overall are
particularly important, in that they provide a baseline context for
evaluating a program's retention rates. Our employment outcomes include
similar measures that focus on high-need schools because of the special
responsibility of programs to meet the needs of those schools until
such time as SEAs and LEAs truly have implemented their
responsibilities under sections 1111(g)(1)(B) and 1112(b)(2) of the ESEA, as
amended by ESSA, (corresponding to similar requirements in sections
1111(b)(8)(C) and 1112(c)(1)(L) of the ESEA, as previously amended by
NCLB) to take actions to ensure that low-income children and children
of color are not taught at higher rates than other children by
inexperienced, unqualified, or out-of-field teachers.
The Department required all States to submit State Plans to Ensure
Equitable Access to Excellent Educators (Educator Equity Plans) to
address this requirement, and we look forward to the time when
employment outcomes that focus on high-need schools are unnecessary.
However, it is much too early to remove employment indicators that
focus on high-need schools. For this reason, we decline to accept the
commenters' recommendation that we do so because of concern that these
reporting requirements are inconsistent with those under the ESEA.
We add that, just as States will establish the weights to these
outcomes in assessing the level of program performance, States also may
adjust their expectations for placement and retention rates for high-
need schools in order to support successful implementation of their
State plans.
Changes: None.
Comments: Many commenters expressed concern about placing novice
teachers in high-need schools without additional support systems.
Several other commenters stated that the proposed regulations would add
to the problem of chronic turnover of the least experienced teachers in
high-need schools.
Discussion: We agree that high-need schools face special
challenges, and that teachers who are placed in high-need schools need
to be prepared for those challenges so that they have a positive impact
on the achievement and growth of their students. By requiring
transparency in reporting of employment outcomes through disaggregated
information about high-need schools, we hope that preparation programs
and high-need schools and districts will work together to ensure novice
teachers have the academic content knowledge and teaching skills they
need when placed as well as the supports they need to stay in high-need
schools.
We disagree with commenters that the regulations will lead to
higher turnover rates. By requiring reporting on placement and retention
rates by program, we believe that employers will be better able to
identify programs with strong track records for preparing novice
teachers who stay, and succeed, in high-need schools. This information
will help employers make informed hiring decisions and may ultimately
help districts reduce teacher turnover rates.
Changes: None.
State Flexibility To Define and Incorporate Measures
Comments: Commenters suggested that States be able to define the
specific employment information they are collecting, as well as the
process for collecting it, so that they can use the systems they
already have in place. Other commenters suggested that the Department
require that States use employment outcomes as a factor in performance
designations, but allow each State to determine how best to incorporate
these outcomes into accountability systems.
Several commenters suggested additional indicators that could be
used to report on employment outcomes. Specifically, commenters
suggested that programs should report the demographics and outcomes of
enrolled teacher candidates by race and ethnicity (graduation rate,
dropout rates, placement rates for graduates, first-year evaluation
scores (if available), and the percentage of teacher candidates who
stay within the teaching profession for one, three, and five years).
Also, commenters suggested that the Department include the use of
readily available financial data when reporting employment outcomes.
Another commenter suggested that the Department collect information on
how many teachers from each teacher preparation program attain an
exemplary rating through the statewide evaluation systems. Finally, one
commenter suggested counting the number of times schools hire graduates
from the same teacher preparation program.
Discussion: As with the other indicators, States have flexibility
to determine how the employment outcome measures will be implemented
and used to assess the performance of teacher preparation programs. If
a State wants to adopt the recommendations in how it implements
data collection on placement and retention rates, it certainly may do
so. But we are mindful of the additional costs associated with
calculating these employment measures for each teacher preparation
program that would come from adopting commenters' recommendations to
disaggregate their employment measures by category of teachers or to
include the other categories of data they recommend.
We do not believe that further disaggregation of data as
recommended will produce a sufficiently useful indicator of teacher
preparation program performance to justify a requirement that all
States implement one or more of these recommendations. We therefore
decline to adopt them. We also do not believe additional indicators are
necessary to assess the academic content knowledge and teaching skills
of the novice teachers from each teacher preparation program, though,
consistent with Sec. 612.5(b), States are free to adopt them if they
choose to do so.
Changes: None.
Employment Outcomes as a Measure of Program Performance
Comments: Commenters suggested that States be expected to report
data on teacher placement, without being required to use the data in
making annual program performance designations.
Several commenters noted that school districts often handle their
own decisions about hiring and placement of new school teachers, which
severely limits institutions' ability to place teachers in schools.
Many commenters advised against using employment data in assessments of
teacher preparation programs. Some stated that these data would fail to
recognize the importance of teacher preparation program students'
variable career paths and potential for employment in teaching-related
fields. In their view, narrowly defining teacher preparation program
quality in terms of a limited conception of employment for graduates is
misguided and unnecessarily damaging.
Other commenters argued that the assumption underlying this
proposed
[[Page 75564]]
measure of a relationship between program quality and teacher turnover
is not supported by research, especially in high-need schools. They
stated that there are too many variables that impact teacher hiring,
placement, and retention to effectively connect these outcomes to the
quality of teacher preparation programs. Examples provided include: The
economy and budget cuts, layoffs that poor school districts are likely
to implement, State politics, the unavailability of a position in a
given content area, personal choices (e.g., having a family), better paying
positions, out of State positions, private school positions, military
installations and military spouses, few opportunities for advancement,
and geographic hiring patterns (e.g., rural versus urban hiring
patterns). Some commenters also stated that edTPA, which they described
as an exam that is similar to a bar exam for teaching, would be a much
more direct, valid measure of a graduate's skills.
Discussion: We acknowledge that there are factors outside of a
program's control that influence teacher placement rates and teacher
retention rates. As commenters note, teacher preparation program
graduates (or alternative route program participants if a State chooses
to look at them rather than program graduates) may decide to enter or
leave the profession due to family considerations, working conditions
at their school, or other reasons that do not necessarily reflect upon
the quality of their teacher preparation program or the level of
content knowledge and teaching skills of the program's graduates.
In applying these employment outcome measures, it would be absurd
to assume that States will treat a rate that is below 100 percent as a
poor reflection on the quality of the teacher preparation program.
Rather, in applying these measures States may determine what placement
rates and retention rates would be so low (or so high, if they choose
to identify exceptionally performing programs) as to speak to the
quality of the program itself.
However, while factors like those commenters identify affect
employment outcomes, we believe that the primary goal of teacher
preparation programs should be to produce graduates who successfully
become classroom teachers and stay in teaching at least several years.
We believe that high placement and retention rates are indicators that
a teacher preparation program's graduates (or an alternative route
program's participants if a State chooses to look at them rather than
program graduates) have the requisite content knowledge and teaching
skills to demonstrate sufficient competency to find a job, earn
positive reviews, and choose to stay in the profession. This view is
shared by States like North Carolina, Louisiana, and Tennessee, as well
as CAEP, which require reporting on similar outcomes for teacher
preparation programs.
Commenters accurately point out that teachers in low-performing
schools with high concentrations of students of color have
significantly higher rates of turnover. Research from New York State
confirms this finding, but also shows that first-year teachers who
leave a school are, on average, significantly less effective than those
who stay.\45\ This finding, along with other similar findings,\46\
indicates that teacher retention and teaching skills are positively
associated with one another. Another study found that, when choosing
among teachers who apply to transfer schools, schools tend to choose
the teachers with greater impact on student outcomes,\47\ suggesting
that hiring decisions are also indications of teacher skills and
content knowledge. Research studies \48\ and available State data \49\
on teacher preparation programs' placement and retention rates also show
that there can be large differences in employment outcomes across
programs within a State. While these rates are no doubt influenced by
many factors, the Department believes that they are in part a
reflection of the quality of the program, because they signal a
program's ability to produce graduates that schools and districts deem
to be qualified.
---------------------------------------------------------------------------
\45\ Boyd, D., Grossman, P., Lankford, H., & Loeb, S. (2008).
Who Leaves? Teacher Attrition and Student Achievement (Working
Paper No. 14022). Retrieved from National Bureau of Economic
Research.
\46\ Goldhaber, D., Gross, P., & Player, D. (2007). Are public
schools really losing their ``best''? Assessing the career
transitions of teachers and their implications for the quality of
the teacher workforce (Working Paper No. 12).
\47\ Boyd, D., Lankford, H., Loeb, S., Ronfeldt, M., & Wyckoff,
J. (2011). The role of teacher quality in retention and hiring:
Using applications to transfer to uncover preferences of teachers
and schools. Journal of Policy Analysis and Management, 30(1), 88-
110.
\48\ Kane, T., Rockoff, J., & Staiger, D. (2008). What does
certification tell us about teacher effectiveness? Evidence from New
York City. Economics of Education Review, 27(6), 615-631.
\49\ See, for example, information on these indicators reported
by Tennessee and North Carolina: Report Card on the Effectiveness of
Teacher Training Programs, Tennessee 2014 Report Card. (n.d.).
Retrieved November 30, 2015, from www.tn.gov/thec/Divisions/AcademicAffairs/rttt/report_card/2014/report_card/14report_card.shtml; UNC Educator Quality Dashboard. (n.d.).
Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
---------------------------------------------------------------------------
The use of employment outcomes as indicators of the performance of
a teacher preparation program also reflects the relationship between
teacher retention rates and student outcomes. At the school level, high
teacher turnover can have multiple negative effects on student
learning. When a teacher leaves a school, it is more likely that the
vacancy will be filled by a less-experienced and, on average, less-
effective teacher, which will lower the achievement of students in the
school. In addition to this effect on the composition of a school's
teacher workforce, the findings of Ronfeldt, et al. suggest that
disruption from teacher turnover has an additional negative effect on
the school as a whole, in part, by lowering the effectiveness of the
teachers who remain in the school.\50\
---------------------------------------------------------------------------
\50\ Ronfeldt, M., Loeb, S., & Wyckoff, J. (2013). How Teacher
Turnover Harms Student Achievement. American Educational Research
Journal, 50(1), 4-36.
---------------------------------------------------------------------------
Thus, we believe that employment outcomes, taken together, serve
not only as reasonable indicators of academic content knowledge and
teaching skill, but also as potentially important incentives for
programs and States to focus on a program's ability to produce
graduates with the skills and preparation to teach for many years.
Placement rates overall and in high-need schools specifically, are
particularly important, in that they provide a baseline context for
evaluating a program's retention rates. In an extreme example, a
program may have 100 graduates, but if only one graduate actually
secures employment as a teacher, and that teacher continues to teach,
the program would have a retention rate of 100 percent. Plainly, such a retention
rate does not provide a meaningful or complete assessment of the
program's impact on teacher retention rate, and thus on this indicator
of program quality. Similarly, two programs may each produce 100
teachers, but one program only places teachers in high-need schools,
while the other places no teachers in high-need schools. Even if the
programs produced graduates of the exact same quality, the program that
serves high-need schools would be likely to have lower retention rates,
due to the challenges described in comments and above.
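The arithmetic behind the first of these examples can be made explicit. The following sketch, using the invented counts from the example, shows why a retention rate is interpretable only against the placement rate that provides its baseline.

    # Counts from the extreme example above: 100 graduates, one of whom
    # becomes and remains a teacher.
    graduates = 100
    placed = 1          # only one graduate ever becomes a novice teacher
    still_teaching = 1  # and that one teacher stays in the classroom

    placement_rate = 100.0 * placed / graduates       # 1.0 percent
    retention_rate = 100.0 * still_teaching / placed  # 100.0 percent

    # The 100 percent retention rate is meaningless without the 1 percent
    # placement rate that puts it in context.
    print(placement_rate, retention_rate)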
Finally, we reiterate that States have flexibility to determine how
employment outcomes should be weighted, so that they may match their
metrics to their individual needs and conditions. In regard to using
other available measures of teaching ability and academic content
knowledge, like edTPA, we believe that, taken together, outcome-based
measures that we require
[[Page 75565]]
(student learning outcomes, employment outcomes, and survey outcomes)
are the most direct measures of academic content knowledge and teaching
skills. Placement and retention rates reflect the experiences of
a program's recent graduates and novice teachers over the course of three
to six years (depending on when recent graduates become novice
teachers), which cannot be captured by other measures. We acknowledge
that States may wish to include additional indicators, such as student
survey results, to assess teacher preparation program performance.
Section 612.5(b) permits States to do so. However, we decline to
require that States use additional or other indicators like those
suggested in place of employment outcomes, because we strongly believe
they are less direct measures of academic content knowledge and
teaching skills.
Changes: None.
Validity and Reliability
Comments: Several commenters indicated that the teacher retention
data that States would need to collect for each program do not meet the
standards for being valid or reliable. They stated that data on program
graduates will be incomplete because States can exclude teachers who
move across State lines, teach in private schools or in positions which
do not require certification, or who join the military or go to
graduate school. Commenters further expressed concern over the numerous
requests for additional data regarding persistence, academic
achievement, and job placement that are currently beyond the reach of
most educator preparation programs.
Discussion: As we have previously stated, we intend the use of all
indicators of academic content knowledge and teaching skill to produce
information about the performance level of each teacher preparation
program that, speaking broadly, is valid and reliable. See, generally,
our discussion of the issue in response to public comment on Indicators
a State Must Use to Report on Teacher Preparation Programs in the State
Report Card (34 CFR 612.5(a)).
It is clear from the comments we received that there is not an
outright consensus on using employment outcomes to measure teacher
preparation programs; however, we strongly believe that the inclusion
of employment outcomes with other measures contributes to States'
abilities to make valid and reliable decisions about program
performance. Under the regulations, States will work with their
stakeholders (see Sec. 612.4(c)) to establish methods for evaluating
the quality of data related to a program's outcome measures, and all
other indicators, to ensure that the reported data are fair and
equitable. As we discussed in the NPRM, in doing so, the State should
use this process to ensure the reliability, validity, integrity, and
accuracy of all data reported about the performance of teacher
preparation programs. We recognize the burden that reporting on
employment outcomes may place on individual programs, and for this
reason, we suggest, but do not require, that States examine their
capacity, within their longitudinal data systems, to track employment
outcomes because we believe this will reduce costs for IHEs and
increase efficiency of data collection.
We recognize that program graduates may not end up teaching in the
same State as their teacher preparation program for a variety of
reasons and suggest, but do not require, that States create inter-State
partnerships to better track employment outcomes of program completers
as well as agreements that allow them to track military service,
graduate school enrollment, and employment as a teacher in a private
school. But we do not believe that the exclusion of these recent
graduates, or those who go on to teach in private schools, jeopardizes
reasonable use of this indicator of teacher preparation program
performance. As noted previously, we have revised the regulations so
that States may not exclude recent graduates employed in positions
which do not require certification from their calculations of
employment outcomes. Working with their stakeholders (see Sec.
612.4(c)), States will be able to determine how best to apply the
retention rate data that they have.
Finally, we understand that many teacher preparation programs do
not currently collect data on factors like job placement, how long
their graduates who become teachers stay in the profession, and the
gains in academic achievement that are associated with their graduates.
However, collecting this information is not beyond those programs'
capacity. Moreover, the regulations make the State responsible for
ensuring that data needed for each indicator to assess program
performance are secured and used. How they will do so would be a
subject for State discussion with its consultative group.
Changes: None.
Data Collection and Reporting Concerns
Comments: Commenters recommended that placement-rate data be
collected beyond the first year after graduation and across State
boundaries. Another commenter noted that a State would need to know
which ``novice teachers'' or ``recent graduates'' who attended teacher
preparation programs in their State are not actually teaching in their
State, and it is unclear how a State would be able to get this
information. Several commenters further stated that States would need
information about program graduates who teach in private schools that
is not publically available and may violate privacy laws to obtain.
Commenters were concerned about how often data will be updated by
the Department. They stated that, due to teachers changing schools mid-
year, data will be outdated and not helpful to the consumer. Several
commenters suggested that a national database would need to be in place
for accurate data collection so institutions would be able to track
graduates across State boundaries. Two commenters noted that it will be
difficult to follow graduates over several years and collect accurate
data to address all of the areas relevant to a program's retention
rate, and that therefore reported rates would reflect a great deal of
missing data.
Another commenter suggested that the Department provide support for
the development and implementation of data systems that will allow
States to safely and securely share employment, placement, and
retention data.
Discussion: We note first that, due to the definition of the terms
``teacher placement rate'' and ``recent graduate'' (see Sec. 612.2),
placement rate data are collected on individuals who have met the
requirements of a program in any of the three title II reporting years
preceding the current reporting year.
In order to decrease the costs associated with calculating teacher
placement and teacher retention rates and to better focus the data
collection, our proposed and final definitions of teacher placement
rate and teacher retention rate in Sec. 612.2 permit States to exclude
certain categories of novice teachers from their calculations for their
teacher preparation programs, provided that each State uses a
consistent approach to assess and report on all of the teacher
preparation programs in the State. As we have already noted, these
categories include teachers who teach in other States, teach in private
schools, are not retained specifically and directly due to budget cuts,
or join the military or enroll in graduate school. While we encourage
States to work to capture these data to make the placement and
retention rates for each program as robust as possible, we understand
that
[[Page 75566]]
current practicalities may affect their ability to do so for one or
more of these categories of teachers. But we strongly believe that,
except in rare circumstances, States will have enough data on
employment outcomes for each program, based on the numbers of recent
graduates who take teaching positions in the State, to use as an
indicator of the program's performance.
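To illustrate how these permitted exclusions interact with the two rates, the following sketch filters excludable categories out of each calculation. The category labels are hypothetical, and the treatment of budget-cut non-retention reflects the point, discussed earlier, that it may be excluded from the retention rate calculation only.

    # Hypothetical sketch of applying the permitted exclusions before
    # computing a rate; category labels are invented for illustration.

    EXCLUDABLE = {
        "teaches_out_of_state",
        "teaches_private_school",
        "military_service",
        "graduate_school",
        "budget_cut_nonretention",  # excludable from the retention rate only
    }

    def eligible(teachers, rate):
        # Return the teachers who count toward the given rate ("placement"
        # or "retention") after removing permitted exclusion categories.
        kept = []
        for t in teachers:
            categories = set(t.get("categories", []))
            if rate == "placement":
                # Budget-cut non-retention is not a permitted exclusion
                # for the placement rate calculation.
                categories.discard("budget_cut_nonretention")
            if not (categories & EXCLUDABLE):
                kept.append(t)
        return kept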
To address confidentiality concerns, Sec. 612.4(b)(5) expressly
exempts reporting of data where doing so would violate Federal or State
privacy laws or regulations.
The regulations do not require States to submit documentation with
the SRCs that supports their data collections; they only must submit
the ultimate calculation for each program's indicator (and its
weighting). However, States may not omit program graduates (or
participants in alternative route programs if a State chooses to look
at participants rather than program graduates) from any of the
calculations of employment or survey outcomes indicators without being
able to verify that these individuals are in the groups that the
regulations permit States to omit.
Some commenters recommended that the Department maintain a national
database, while others seemed to think that we plan to maintain such a
database. States must submit their SRCs to the Department annually, and
the Department intends to make these reports and the data they include,
like SRCs that States annually submitted in prior years, publicly
available. The Department has no other plans for activities relevant to
a national database.
Commenters were concerned about difficulties in following graduates
for the three-year period proposed in the NPRM. As discussed in
response to comment on the ``teacher retention rate'' definition in
Sec. 612.2, we have modified the definition of ``teacher retention
rate'' so that States will be reporting on the first three years a
teacher is in the classroom rather than three out of the first five
years. We believe this change addresses the commenters' concerns.
As we interpret the comment, one commenter suggested we provide
support for more robust data systems so that States have access to the
employment data of teachers who move to other States. We have technical
assistance resources dedicated to helping States collect and use
longitudinal data, including the Statewide Longitudinal Data System's
Education Data Technical Assistance Program and the Privacy Technical
Assistance Center, which focuses on the privacy and security of student
data. We will look into whether these resources may be able to help
address this matter.
Changes: None.
Alternative Route Programs
Comments: Commenters stated that the calculation of placement and
retention rates for alternative route teacher preparation programs
should be different from those for traditional route teacher
preparation programs. Others asked that the regulations ensure the use
of multiple measures by States in assessing traditional and alternative
route programs. Many commenters stated that the proposed regulations
give advantages to alternative route programs, as programs that train
teachers on the job get significant advantages by being allowed to
count all of their participants as employed while they are still
learning to teach, virtually ensuring a very high placement rate for
those programs. Other commenters suggested that the common starting
point for both alternative and traditional route programs should be the
point at which a candidate has the opportunity to become a teacher of
record.
As an alternative, commenters suggested that the Department alter
the definition of ``new teacher'' so that both traditional and
alternative route teacher candidates start on equal ground. For
example, the definition might include ``after all coursework is
completed,'' ``at the point a teacher is placed in the classroom,'' or
``at the moment a teacher becomes a teacher of record.'' Commenters
recommended that teacher retention rate should be more in line with
CAEP standards, which do not differentiate accountability for alternate
and traditional route teacher preparation programs.
Many commenters were concerned about the ability of States to
weight employment outcomes differently for alternative and traditional
route programs, thus creating unfair comparisons among States or
programs in different States while providing the illusion of fair
comparisons by using the same metrics. One commenter was concerned
about a teacher preparation program's ability to place candidates in
fields where a degree in a specific discipline is needed, as those jobs
will go to those with the discipline degree and not to those with a
teacher preparation program degree, thus giving teachers from alternative route
programs an advantage. Others stated that demographics may impact
whether a student enrolls in a traditional or an alternative route
program, so comparing the two types of programs in any way is not
appropriate.
Discussion: We agree that employment outcomes could vary based
solely on the type, rather than the quality, of a teacher preparation
program. While there is great variability both among traditional route
programs and among alternative route programs, those two types of
programs have characteristics that are generally very different from
each other. We agree with commenters that, due to the fundamental
characteristics of alternative certification programs (in particular
the likelihood that all participants will be employed as teachers of
record while completing coursework), the reporting of teacher placement
rate data of individuals who participated in such programs will
inevitably result in a 100 percent placement rate. However, creation of a
different methodology for calculating the teacher placement rate solely
for alternative route programs would be unnecessarily complex and
potentially confusing for States as they implement these regulations
and for the public as they examine the data. Accordingly, we have
removed the requirement that States report and assess the teacher
placement rate of alternative route programs from the final
regulations. States may, at their discretion, continue to include
teacher placement rate for alternative certification programs in their
reporting system if they determine that this information is meaningful
and deserves weight. However, they are not required to do so by these
final regulations.
For reasons discussed in the Meaningful Differentiations in Teacher
Preparation Program Performance section of this preamble, we have not
removed the requirement that States report the teacher placement rate
in high-need schools for alternative route programs. If a teacher is
employed as a teacher of record in a high-need school prior to program
completion, that teacher will be considered to have been placed when
the State calculates and reports a teacher placement rate for high-need
schools. Unlike teacher placement rate generally, the teacher placement
rate in high-need schools can be used to meaningfully differentiate
between programs of varying quality.
Recognizing both that (a) the differences in the characteristics of
traditional and alternative route programs may create differences
between teacher placement rate in high-need schools and (b) our removal
of the requirement to include teacher placement rate for alternative
certification programs creates a different number of required
indicators for Employment Outcomes between the two
[[Page 75567]]
program types, we have revised Sec. 612.5(a)(2) to clarify that (1) in
their overall assessment of program performance, States may assess
employment outcomes for these programs differently, and (2) States may
do so provided that differences in assessments and the reasons for
those differences are transparent and that assessments result in
equivalent levels of accountability and reporting irrespective of the
type of program.
We believe States are best suited to analyze their traditional and
alternative route programs and determine how best to apply employment
outcomes to assess the overall performance of these programs. As such,
to further promote transparency and fair treatment, we have revised
section V of the SRC to require each State to describe its rationale
for treating the employment outcomes differently where it has not
chosen to add a measure of placement rate for alternative route
programs and does in fact have different bases for accountability.
We also believe that, as we had proposed, States should apply
equivalent standards of accountability in how they treat employment
outcomes for traditional programs and alternative route programs, and
suggest a few approaches States might consider for achieving such
equivalency.
For example, a State might devise a system with five areas in which
a teacher preparation program must have satisfactory outcomes in order
to be considered not low-performing or at-risk of being low-performing.
For the employment outcomes measure (and leaving aside the need for
employment outcomes for high-need schools), a State might determine
that traditional route programs must have a teacher placement rate of
at least 80 percent and a second-year teacher retention rate of at
least 70 percent to be considered as having satisfactory employment
outcomes. The State may, in consultation with stakeholders, determine
that a second-year retention rate of 85 percent for alternative
certification programs results in an equivalent level of accountability
for those programs, given that almost all participants in such programs
in the State are placed and retained for some period of time during
their program.
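Expressed as a simple decision rule, and using only the illustrative thresholds from this example, such a scheme might look like the following hypothetical sketch.

    # Hypothetical encoding of the threshold example above; the 80, 70,
    # and 85 percent figures come from the illustration in the text.

    def satisfactory_employment(route, placement_rate=0.0, retention_rate_yr2=0.0):
        if route == "traditional":
            return placement_rate >= 80.0 and retention_rate_yr2 >= 70.0
        # Alternative route: placement is near-universal by design, so a
        # higher retention bar yields an equivalent level of accountability.
        return retention_rate_yr2 >= 85.0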
As another example, a State might establish a numerical scale
wherein the employment outcomes for all teacher preparation programs in
the State account for 20 percent. A State might then determine that
teacher placement (overall and at high-need schools) and teacher
retention (overall and at high-need schools) outcomes are weighted
equally, say at 10 percent each, for all traditional route programs,
but weight the placement rate in high-need schools at 10 percent and
retention rate (overall and at high-need schools) at 10 percent for
alternative route programs.
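This weighting approach can likewise be sketched in code. The 20 percent employment-outcomes share and the 10 percent splits come from the example above; the function and rate names are hypothetical.

    # Hypothetical sketch of the weighting example above. Rates are
    # fractions in [0, 1]; employment outcomes account for at most 0.20
    # of a program's overall score.

    WEIGHTS = {
        # Placement (overall and high-need) and retention (overall and
        # high-need) weighted equally at 10 percent each.
        "traditional": {"placement": 0.10, "retention": 0.10},
        # No overall placement rate is required for alternative route
        # programs; high-need placement and retention carry the weight.
        "alternative": {"placement_high_need": 0.10, "retention": 0.10},
    }

    def employment_score(rates, route):
        return sum(weight * rates[name] for name, weight in WEIGHTS[route].items())

    print(employment_score({"placement": 0.85, "retention": 0.75}, "traditional"))
    # ~0.16 of a possible 0.20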
We also recognize that some alternative route programs are
specifically designed to recruit high-quality participants who may be
committed to teach only for a few years. Many also recruit participants
who in college had academic majors in fields similar to what they will
teach. Since a significant aspect of our indicators of academic content
knowledge and teaching skills focuses on the success of novice teachers
regardless of the nature of their teacher preparation program, we do
not believe we should establish a one-size-fits-all rule here. Rather,
we think that States are in a better position to determine how the
employment outcomes should best be used to help assess the performance
of alternative route and traditional route programs.
We agree that use of multiple measures of program performance is
important. We reiterate that the regulations require that, in reporting
the performance of all programs, both traditional and alternative
route, States must use the four indicators of academic content
knowledge and teaching skills the regulations identify in Sec.
612.5(a), including employment outcomes--the teacher placement rate
(excepting the requirement here for alternative route programs),
teacher placement rate in high-need schools, teacher retention rate,
and teacher retention rate in high-need schools--in addition to any
indicators of academic content knowledge and teaching skills and other
criteria they may establish on their own.
However, we do not know of any inherent differences between
traditional route programs and alternative route programs that should
require different treatment of the other required indicators--student
learning outcomes, survey outcomes, and the basic characteristics of
the program addressed in Sec. 612.5(a)(4). Nor do we see any reason
why any differences in the type of individuals that traditional route
programs and alternative route programs enroll should mean that the
program's student learning outcomes should be assessed differently.
Finally, while some commenters argued about the relative advantage
of alternative route or traditional route programs in reporting on
employment outcomes, we reiterate that neither the regulations nor the
SRCs pit programs against each other. Each State determines what
teacher preparation programs are and are not low-performing or at-risk
of being low-performing (as well as in any other category of
performance it may establish). Each State then reports the data that
reflect the indicators and criteria used to make this determination,
and identifies those programs that are low-performing or at-risk of
being low-performing. Of course, any differences in how employment
outcomes are applied to traditional route and alternative route
programs would need to result in equivalent levels of accountability
and reporting (see Sec. 612.5(a)(2)(B)). But the issue for each State
is identifying each program's level of performance relative to the
level of expectations the State established--not relative to levels of
performance or results for indicators or criteria that apply to other
programs.
Changes: We have revised Sec. 612.5(a)(2)(iii) to clarify that in
their overall assessment of program performance, States may assess
employment outcomes for traditional route programs and programs
provided through alternative routes differently, provided that doing
so results in equivalent levels of accountability.
We have also added a new Sec. 612.5(a)(2)(v) to provide that a
State is not required to calculate a teacher placement rate under
paragraph (a)(2)(i)(A) of that section for alternative route to
certification programs.
Teacher Preparation Programs Provided Through Distance Education
Comments: None.
Discussion: In reviewing the proposed regulations, we recognized
that, as with alternative route programs, teacher preparation programs
provided through distance education may pose unique challenges to
States in calculating employment outcomes under Sec. 612.5(a)(2).
Specifically, because such programs may operate across State lines, an
individual State may be unable to accurately determine the total number
of recent graduates from any given program, and only a subset of that
total would, in theory, be preparing to teach in that State. For
example, a teacher preparation entity may be physically located in
State A and operate a teacher preparation program provided through
distance education in both State A and State B. While the teacher
preparation entity is required to submit an IRC to State A, which would
include the total number of recent graduates from their program, only a
subset of that total number would be residing in or preparing to teach
in State A. Therefore, when State A calculates the teacher placement
rate for that program, it
[[Page 75568]]
would generate an artificially low rate. In addition, State B would
face the same issue even if it had ready access to the total number of
recent graduates (which it would not, as the program would not be
required to submit an IRC to State B). Any teacher placement rate that
State B attempts to calculate for this, or any other, teacher
preparation program provided through distance education would be
artificially low as recent graduates who did not reside in State B, did
not enroll in a teacher preparation program in State B, and never
intended to seek initial certification or licensure in State B would be
included in the denominator of the teacher placement rate calculation.
Recognizing these types of issues, the Department has determined
that it is appropriate to create an alternative method for States to
calculate employment outcomes for teacher preparation programs provided
through distance education. Specifically, we have revised the
definition of teacher placement rate to allow States, in calculating
teacher placement rate for teacher preparation programs provided
through distance education, to use the total number of recent graduates
who have obtained initial certification or licensure in the State
during the three preceding title II reporting years as the denominator
in their calculation instead of the total number of recent graduates.
Additionally, we believe it is appropriate to give States greater
flexibility in assessing these outcomes, and have added a new Sec.
612.5(a)(2)(iv) which allows States to assess teacher placement rates
differently for teacher preparation programs provided through distance
education provided that the differences in assessment are transparent
and result in similar levels of accountability for all teacher
preparation programs.
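As a purely numerical illustration of this alternative denominator, consider the following sketch; the counts and function are hypothetical.

    # Hypothetical sketch of the two denominators for the teacher
    # placement rate; counts are invented for illustration.

    def placement_rate(placed, recent_graduates, certified_in_state=None,
                       distance_education=False):
        # For distance-education programs, States may instead use the
        # number of recent graduates who obtained initial certification
        # or licensure in the State during the three preceding title II
        # reporting years.
        denominator = certified_in_state if distance_education else recent_graduates
        return 100.0 * placed / denominator

    # A distance-education program with 500 recent graduates nationwide,
    # 60 of whom obtained initial certification in the State, 45 of whom
    # were placed as novice teachers in the State:
    print(placement_rate(45, 500))                  # 9.0, artificially low
    print(placement_rate(45, 500, certified_in_state=60,
                         distance_education=True))  # 75.0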
Changes: We have added Sec. 612.5(a)(2)(iv), which allows States
to assess teacher placement rates differently for teacher preparation
programs provided through distance education so long as the differences
in assessment are transparent and result in similar levels of
accountability.
Survey Outcomes (34 CFR 612.5(a)(3))
Comments: Several commenters agreed that there is value in using
surveys of teacher preparation program graduates and the administrators
who employ and supervise them to evaluate the programs, with some
commenters noting that such surveys are already in place. Some
commenters expressed concerns about the use of survey data as part of a
rating system with high-stakes consequences for teacher preparation
programs. Some commenters felt that States should have discretion about
how or even whether to incorporate survey outcomes into an
accountability system. Other commenters suggested making surveys one of
a number of options that States could elect to include in their systems
for evaluating the quality of teacher preparation programs. Still other
commenters felt that, because surveys are currently in place for the
evaluation of teacher preparation programs (for example, through State,
accrediting agency, and institutional requirements), Federal
regulations requiring the use of survey outcomes for this purpose would
be either duplicative or add unnecessary burden if they differ from
what currently exists. One commenter stated that Massachusetts is
currently building valid and reliable surveys of novice teachers,
recent graduates, employers, and supervising practitioners on educator
preparation, and this work exceeds the expectation of the proposed
rules. However, the commenter also was concerned about the reliability,
validity, and feasibility of using survey outcomes as an independent
measure for assessing teacher preparation program performance. The
commenter felt that the proposed regulations do not specify how States
would report survey results in a way that captures both qualitative and
quantitative data. The commenter expressed doubt that aggregating
survey data into a single data point for reporting purposes would
convey valuable information, and stated that doing so would diminish
the usefulness of the survey data and could lead to distorted
conclusions.
In addition, commenters recommended allowing institutions
themselves to conduct and report annual survey data for teacher
graduates and employers, noting that a number of institutions currently
conduct well-honed, rigorous surveys of teacher preparation program
graduates and their employers. Commenters were concerned with the
addition of a uniform State-level survey for assessing teacher
preparation programs, stating that it is not possible to obtain high
individual response rates for two surveys addressing the same area.
Commenters contended that, as a result, the extensive longitudinal
survey databases established by some of the best teacher education
programs in the Nation will be at-risk, resulting in the potential loss
of the baseline data, the annual data, and the continuous improvement
systems associated with these surveys despite years of investment in
them and substantial demonstrated benefits.
Some commenters noted that it is hard to predict how reliable the
teacher and employer surveys required by the regulations would be as an
indicator of teacher preparation program quality, since the proposed
regulations do not specify how these surveys would be developed or
whether they would be the same across the State or States. In addition,
the commenters noted that it is hard to predict how reliable the
surveys may be in capturing teacher and employer perceptions of how
adequately prepared teachers are since these surveys do not exist in
most places and would have to be created. Commenters also stated that
survey data will need to be standardized for all of a State's
institutions, which will likely result in a significant cost to States.
Some commenters stated that, in lieu of surveys, States should be
allowed to create preparation program-school system partnerships that
provide for joint design and administration of the preparation program.
They claimed when local school systems and preparation programs jointly
design and oversee the preparation program, surveys are unnecessary
because the partnership creates one preparation program entity that is
responsible for the quality of preparation and satisfaction of district
and school leaders.
Discussion: As we stressed in the NPRM, many new teachers report
entering the profession feeling unprepared for classroom realities.
Since teacher preparation programs have responsibility for preparing
teachers for these classroom realities, we believe that asking novice
teachers whether they feel prepared to teach, and asking those who
supervise them whether they feel those novice teachers are prepared to
teach, generate results that are necessary components in any State's
process of assessing the level of a teacher preparation program's
performance. Moreover, while not all States have experience
employing surveys to determine program effectiveness, we believe that
their use for this purpose has been well established. As noted in the
NPRM, two major national organizations focused on teacher preparation
and others in the higher education world are now incorporating this
kind of survey data as an indicator of program quality (see 79 FR
71840).
We share the belief of these organizations that a novice teacher's
perception, and that of his or her employer, of the teacher's readiness
and capability during the first year of teaching are key indicators of
that individual's academic knowledge and teaching skills as well as
whether his or her preparation program is training teachers
[[Page 75569]]
well. In addition, aside from wanting to ensure that what States report
about each program's level of performance is reasonable, a major
byproduct of the regulations is that they can ensure that States have
accurate information on the quality of teacher preparation programs so
that they and the programs can make improvements where needed and
recognize excellence where it exists.
Regarding commenters' concerns about the validity and reliability of
the use of survey results to help assess program performance, we first
reference our general discussion of the issue in response to public
comment on Indicators a State Must Use to Report on Teacher Preparation
Programs in the State Report Card (34 CFR 612.5(a)).
Beyond this, it plainly is important that States develop procedures
to enable teachers' and employers' perceptions to be appropriately used
and have the desired impacts, and at the same time to enable States to
use survey results in ways that treat all programs fairly. To do so, we
strongly encourage States to standardize their use of surveys so that
they seek common information from similarly situated novice teachers
and their employers. We are confident that, in
consultation with key stakeholders as provided for in Sec.
612.4(c)(1), States will be able to develop a standardized, unbiased,
and reliable set of survey questions, or ensure that IHE surveys meet
the same standard. This goal would be very difficult to achieve,
however, if States relied on existing surveys (unless modified
appropriately) whose questions vary in content and thus solicit
different information and responses. Of course, it is likely that many
strong surveys already exist and are in use, and we encourage States to
consider using such an existing survey so long as it comports with
Sec. 612.5(a)(3). Where a State finds an existing survey of novice
teachers and their employers to be adequate, using it will avoid the
cost and time of preparing another, and to the extent possible, prevent
the need for teachers and employers to complete more than one survey,
which commenters reasonably would like to avoid. Concerns about the
cost and burden of implementing teacher and employer surveys are
discussed further with the next set of comments on this section.
We note that States have the discretion to determine how they will
publicly post the results of surveys and how they will aggregate the
results associated with teachers from each program for use as an
indicator of that program's performance. We encourage States to report
survey results disaggregated by question (as is done, for example, by
Ohio \51\ and North Carolina \52\), as we believe this information
would be particularly useful for prospective teachers in evaluating the
strengths of different teacher preparation programs. At some point,
however, States must identify any programs that are low-performing or
at-risk of being low-performing, and to accomplish this they will need
to aggregate quantitative and qualitative survey responses in some way,
in a method developed in consultation with key stakeholders as provided
for in Sec. 612.4(c)(1).
---------------------------------------------------------------------------
\51\ See, for example: 2013 Educator Preparation Performance
Report Adolescence to Young Adult (7-12) Integrated Mathematics Ohio
State University. (2013). Retrieved from https://regents.ohio.gov/educator-accountability/performance-report/2013/OhioStateUniversity/OHSU_IntegratedMathematics.pdf.
\52\ See UNC Educator Quality Dashboard. Retrieved from https://tqdashboard.northcarolina.edu/performance-employment/.
---------------------------------------------------------------------------
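To illustrate the aggregation step in concrete terms, the sketch below
(in Python) shows one way per-question ratings could be reported
disaggregated by question and also collapsed into a single
program-level indicator. It is purely illustrative: the question names,
the 1-to-5 rating scale, and the weights are hypothetical choices a
State would make with its stakeholders, not anything the regulations
prescribe.

    # Illustrative sketch only; nothing here is prescribed by the
    # regulations. Question names, the 1-5 scale, and the weights are
    # hypothetical stakeholder choices.
    from statistics import mean

    def aggregate_program_score(responses, weights):
        """Collapse per-question survey ratings into one program score.

        responses: list of dicts mapping question id -> rating (1-5),
                   one dict per completed survey
        weights:   dict mapping question id -> relative weight
        """
        per_question = {}
        for q in weights:
            ratings = [r[q] for r in responses if q in r]
            per_question[q] = mean(ratings) if ratings else None
        answered = {q: w for q, w in weights.items()
                    if per_question[q] is not None}
        overall = (sum(per_question[q] * w for q, w in answered.items())
                   / sum(answered.values()))
        return per_question, overall

    # Three completed novice-teacher surveys for one hypothetical program.
    surveys = [
        {"preparedness": 4, "classroom_mgmt": 3, "content_knowledge": 5},
        {"preparedness": 5, "classroom_mgmt": 4, "content_knowledge": 4},
        {"preparedness": 3, "classroom_mgmt": 3, "content_knowledge": 4},
    ]
    by_question, overall = aggregate_program_score(
        surveys,
        {"preparedness": 2, "classroom_mgmt": 1, "content_knowledge": 2})
    print(by_question)        # per-question means, for disaggregated posting
    print(round(overall, 2))  # single indicator of program performance

The per-question means correspond to the disaggregated reporting we
encourage above, while the weighted overall figure corresponds to the
kind of aggregation States will need when identifying low-performing
and at-risk programs.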
Like those who commented, we believe that partnerships between
teacher preparation programs and local school systems have great value
in easing the transition to the classroom of the individuals whom
teacher preparation programs train and in improving a novice teacher's
overall effectiveness. However, these partnerships cannot replace
survey results as an indicator of the program's performance.
Changes: None.
Comments: Commenters suggested that the Department consider options
for reducing the cost and burden of implementation, such as clarifying
that States would not have to survey 100 percent of novice teachers or
permitting States to conduct surveys less frequently than every year.
Commenters stated that, if used as expected for comparability
purposes, the survey would likely need to be designed by and conducted
through a third-party agency with professional credentials in survey
design and survey administration. They stated that sampling errors and
various forms of bias can easily skew survey results and the survey
would need to be managed by a professional third-party group, which
would likely be a significant cost to States.
One commenter recommended that a national training and technical
assistance center be established to build data capacity, consistency,
and quality among States and educator preparation providers to support
scalable continuous improvement and program quality in teacher
preparation. In support of this recommendation, the commenter, an
accreditor of education preparation providers, stated that, based on
its analysis of its first annual collection of outcome data from
education preparation providers, and its follow-up survey of education
preparation providers, the availability of survey outcomes data differs
by survey type. The commenter noted that while 714 teacher preparation
program providers reported that they have access to completer survey
data, 250 providers reported that they did not. In addition, the
commenter noted that providers cited many challenges in reporting
employment status, including limitations of State data systems and
programs whose completers take positions across the nation or
internationally.
Discussion: To obtain the most comprehensive feedback possible, it
is important for States to survey all novice teachers who are employed
as teachers in their first year of teaching, as well as their
employers. Feedback from novice teachers is one indicator of how
successfully a preparation program imparts knowledge of content and
academic skills, and survey results drawn from only a sample may
introduce unnecessary opportunities for error as well as increased cost
and burden.
There is no established n-size at which a sample is guaranteed to
be representative; rather, statistical calculations must be made to
verify that the sample is representative of the characteristics of
program completers or participants. While drawing a larger sample often
increases the likelihood that it will be representative, we believe
that for nearly all programs, a representative sample will not be
substantially smaller than the total population of completers.
Therefore, we do not believe that there is a meaningful advantage to
undertaking the analysis required to draw a representative sample.
Furthermore, we believe that any potential advantage does not outweigh
the potential for error that could be introduced by States or programs
that unwittingly draw a biased sample, or report that their sample is
representative, when in fact it is not. As with student learning
outcomes and employment outcomes, we have clarified in Sec.
612.5(a)(3)(ii) that a State may exclude from its calculations of a
program's survey outcomes those survey outcomes for all novice teachers
who have taken teaching positions in private schools so long as the
State uses a consistent approach to assess and report on all of the
teacher preparation programs in the State.
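As a simplified illustration of the representativeness analysis
discussed above, a State or program sampling completers would, at a
minimum, need to compare the sample's makeup to the full population's
on known characteristics before trusting the sample. The sketch below
does this with a simple share-by-share comparison; a real analysis
would use a formal statistical test (for example, a chi-square
goodness-of-fit test), and the tolerance shown is an arbitrary
placeholder, not a regulatory threshold.

    # Simplified, hypothetical check: does the sample's composition
    # track the population's on one characteristic? A rigorous analysis
    # would use a formal test; the 5-point tolerance is a placeholder.
    from collections import Counter

    def looks_representative(population, sample, characteristic,
                             tolerance=0.05):
        pop_counts = Counter(p[characteristic] for p in population)
        samp_counts = Counter(s[characteristic] for s in sample)
        for category, count in pop_counts.items():
            pop_share = count / len(population)
            samp_share = samp_counts.get(category, 0) / len(sample)
            if abs(pop_share - samp_share) > tolerance:
                return False  # the sample is skewed on this category
        return True

Running such a check for every relevant characteristic, for every
program, every year, is precisely the analytic burden that leads us to
conclude that surveying all novice teachers is the simpler and safer
course.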
We note that in ensuring that the required surveys are reasonable
and appropriate, States have some control
over the cost of, and the time necessary for, implementing the surveys.
Through consultation with their stakeholders (see Sec. 612.4(c)), they
determine the number and type of questions in each survey, and the
method of dissemination and collection. However, we believe that it is
important that teacher and employer surveys be conducted at least
annually. Section 207(a) of the HEA requires that States annually
identify teacher preparation programs that are low-performing or at-
risk of being low-performing. To implement this requirement, we
strongly believe that States need to use data from the year being
evaluated to identify those programs. If data from past years were used
for annual evaluations, it would impede low-performing programs from
seeing the benefits of their improvement, tend to enable deteriorating
programs to rest on their laurels, and prevent prospective teachers and
employers and the public at large from seeing the program's actual
level of performance. Moreover, because the regulations require these
surveys only of novice teachers in their first year of teaching, the
commenter's proposal to collect survey outcomes less than annually
would mean that entire cohorts of graduates would not be providing
their assessment of the quality of their preparation program.
In considering the comment, we realized that while we estimated
costs of reporting all indicators of academic content knowledge and
teaching skills, including survey outcomes, on an annual basis, the
regulations did not adequately clarify the need to collect and report
data related to each indicator annually. Therefore, we have revised
Sec. 612.4(b)(2)(i) to require that data for each indicator be
provided annually for the most recent title II reporting year.
Further discussion regarding the cost and burden of implementing
teacher and employer surveys can be found in the Discussion of Costs,
Benefits, and Transfers in the RIA section of this document.
The regulations do not prescribe any particular method for
obtaining the completed surveys, and States may certainly work with
their teacher preparation programs and teacher preparation entities to
implement effective ways to obtain survey results. Beyond this, we
expect that States will seek and employ the assistance that they need
to develop, implement, and manage teacher and employer surveys as they
see fit. We expect that States will ensure the validity and reliability
of survey outcomes--including how to address responder bias--when they
establish their procedures for assessing and reporting the performance
of each teacher preparation program with a representative group of
stakeholders, as is required under Sec. 612.4(c)(1)(i). The
regulations do not specify the process States must use to develop,
implement, or manage their teacher and employer surveys, so whether
they choose to use third-party entities to help them do so is up to
them.
Finally, we believe it is important for the Department to work with
States and teacher preparation programs across the nation to improve
those programs, and we look forward to engaging in continuing dialogue
about how this can be done and what the appropriate role of the
Department should be. However, the commenters' request for a national
training and technical assistance center to support scalable continuous
improvement and to improve program quality is outside the scope of this
regulation--which is focused on the States' use of indicators of
academic content knowledge and teaching skills in their processes of
identifying those programs that are low-performing, or at-risk of being
low-performing, and other matters related to reporting under the title
II reporting system.
Changes: We have added Sec. 612.5(a)(3)(ii) to clarify that a
State may exclude from its calculations of a program's survey outcomes
those for novice teachers who take teaching positions in private
schools, so long as the State uses a consistent approach to assess and
report on all of the teacher preparation programs in the State. In
addition, we have revised Sec. 612.4(b)(2)(i) to provide that data for
each of the indicators identified in Sec. 612.5 must be provided
annually for the most recent title II reporting year.
Comments: Commenters also expressed specific concerns about
response bias on surveys, such as the belief that teacher surveys often
end up providing information about the personal likes or dislikes of
the respondent that can be attributed to issues not related to program
effectiveness. Commenters stated that surveys can be useful tools for
the evaluation of programs and methods, but believed the use of surveys
in a ratings scheme is highly problematic given how susceptible they
are to what some commenters referred to as ``political manipulation.''
In addition, commenters stated that surveys of employer satisfaction
may be substantially biased by the relationship of school principals to
the teacher preparation program. Commenters felt that principals who
are graduates of programs at specific institutions are likely to have a
positive bias toward teachers they hire from those institutions.
Commenters also believed that teacher preparation programs unaffiliated
with the educational leadership at the school will be disadvantaged by
comparison.
Commenters also felt that two of our suggestions in the NPRM to
ensure completion of surveys--that States consider using commercially
available survey software or that teachers be required to complete a
survey before they can access their class rosters--raise tremendous
questions about the security of student data and the sharing of
identifying information with commercial entities.
Discussion: We expect that States will ensure the validity and
reliability of survey outcomes, including how to address responder bias
and avoid ``political manipulation'' and like problems when they
establish their procedures for assessing and reporting the performance
of each teacher preparation program with a representative group of
stakeholders, as is required under Sec. 612.4(c)(1)(i).
While it may be true that responder bias could impact any survey
data, we expect that the variety and number of responses from novice
teachers employed at different schools and within different school
districts will ensure that such bias will not substantially affect
overall survey results.
There is no reason student data should ever be captured in any
survey results, even if commercially available software is used or
teachers are required to complete a survey before they can access and
verify their class rosters. Commenters did not identify any particular
concerns related to State or Federal privacy laws, and we do not
understand what they might be. That being said, we fully expect States
will design their survey procedures in keeping with requirements of any
applicable privacy laws.
Changes: None.
Comments: Some commenters expressed concerns with the effect that a
low response rate would have on the use of survey data as an indicator
of teacher preparation program quality. Commenters noted that obtaining
responses to teacher and employer surveys can be quite burdensome due
to the difficulty in tracking graduates and identifying their
employers. Moreover, commenters stated that obtaining their responses
is frequently unsuccessful. Some commenters noted that, even with
aggressive follow-up, it would be difficult to obtain a sufficient
number of responses to warrant using them in high-stakes decision
making about program quality. Some commenters felt
that the regulations should offer alternatives or otherwise address
what happens if an institution is unable to secure sufficient survey
responses.
One commenter shared that, since 2007, the Illinois Association of
Deans of Public Colleges of Education has conducted graduate surveys of
new teachers from the twelve Illinois public universities by mailing
surveys to new teachers and their employers. The response rate for new
teachers has been extremely low (44.2 percent for the 2012 survey and
22.6 percent for the 2013 survey). The supervisor response has been
higher, but still insufficient, according to the commenter, for the
purpose of rating programs (65.3 percent for the 2012 survey and 40.5
percent for the 2013 survey). In addition, the commenter stated that
some data from these surveys indicate differences in the responses
provided by new teachers and their supervisors. The commenter felt that
the low response rate is compounded when trying to find matched pairs
of teachers and supervisors. Using results from an institution's new
teacher survey data, the commenter was only able to identify 29 out of
104 possible matched pairs in 2012 and 11 out of 106 possible matched
pairs in 2013.
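For reference, the matched-pair arithmetic the commenter describes
works out as follows (a sketch using only the figures reported in the
comment):

    # Matched-pair rates computed from the Illinois commenter's figures.
    cohorts = {
        2012: {"possible_pairs": 104, "matched_pairs": 29},
        2013: {"possible_pairs": 106, "matched_pairs": 11},
    }
    for year, c in sorted(cohorts.items()):
        rate = c["matched_pairs"] / c["possible_pairs"]
        print(f"{year}: {rate:.1%} of possible teacher-supervisor "
              f"pairs matched")
    # 2012: 27.9% of possible teacher-supervisor pairs matched
    # 2013: 10.4% of possible teacher-supervisor pairs matched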
One commenter from an IHE stated that the institution's return rate
on graduate surveys over the past 24 years has been 10 to 24 percent,
which the commenter stated is in line with national response rates.
While the institution's last survey of 50 school principals had a 50
percent return rate, the commenter noted that her institution surveys
only those school divisions that it knows regularly hire its graduates,
because it has no source from which it can obtain actual employment
information for all graduates. According to the commenter, a statewide
process that better ensures that all school administrators provide
feedback would be very helpful, but could also be very burdensome for
the schools.
Another commenter noted that the response rate from the
institution's graduates increased significantly when the questionnaire
went out via email, rather than through the United States Postal
Service; however, the response rate from school district administrators
remained dismal, no matter what format was used--mail, email, Facebook,
Instagram, SurveyMonkey, etc. One commenter added that falling back on
having teachers complete surveys during the school day, which would
impose yet again on instructional time, was not a good way to address
low response rates.
Commenters saw an important Federal role in accurately tracking program
graduates across State boundaries.
Discussion: We agree that low response rates can affect the
validity and reliability of survey outcomes as an indicator of program
performance. While we are not sure why States would necessarily need
matched pairs of surveys from novice teachers and their employers, as
long as they achieve what the State and its consultative group
determine to be a sufficient response rate, we expect that States will
work to develop procedures that promote adequate response rates through
their consultation with stakeholders, as required under Sec.
612.4(c)(1)(i). We also expect that States will use survey data
received for the initial pilot reporting year (2017-2018), when States
are not required to identify program performance, to adjust their
procedures, address insufficient response rates, and address other
issues affecting validity and reliability of survey results. We also
note that since States, working with their stakeholders, may determine
how to weight the various indicators and criteria they use to come up
with a program's overall level of performance, they also have the means
to address survey response rates that they deem too low to provide any
meaningful indicator of program quality.
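To make this flexibility concrete, the sketch below shows one
hypothetical way a State, in consultation with its stakeholders, might
weight indicators and discount a survey indicator whose response rate
falls below a floor the State itself sets. Every number in it (the
weights, the 30 percent floor, and the cut scores and rating labels) is
invented for illustration; the regulations leave all such choices to
States.

    # Hypothetical weighting scheme; nothing here is prescribed by the
    # regulations. Weights, the response-rate floor, cut scores, and
    # rating labels are invented stakeholder choices.
    def overall_performance(indicators, weights, survey_response_rate,
                            floor=0.30):
        """indicators and weights are dicts keyed by indicator name;
        indicator scores are on a 0-100 scale."""
        usable = dict(weights)
        if survey_response_rate < floor:
            # Too few responses to be a meaningful indicator; drop it
            # and renormalize the remaining weights.
            usable.pop("survey_outcomes", None)
        total = sum(usable.values())
        score = sum(indicators[k] * w for k, w in usable.items()) / total
        if score >= 70:
            return "effective"
        return "at-risk" if score >= 50 else "low-performing"

    print(overall_performance(
        {"student_learning": 72, "employment": 65, "survey_outcomes": 40},
        {"student_learning": 0.5, "employment": 0.3, "survey_outcomes": 0.2},
        survey_response_rate=0.18))
    # Survey dropped: score = (72*0.5 + 65*0.3) / 0.8 = 69.4 -> "at-risk"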
We believe that States can increase their response rate by
incorporating the surveys into other structures, for example, having
LEAs disseminate the survey at various points throughout teachers'
induction period. Surveys may also be made part of required end-of-year
closeout activities for teachers and their supervisors. As the
regulations require States to survey only those teachers who are
teaching in public schools and the public school employees who employ
them (see the discussion of the definition of a novice teacher under
Sec. 612.2(d)), we believe that approaches such as these will enable
States to achieve reasonably high response rates and, thus, valid
survey results.
Finally, before the Department would consider working to develop a
system, like one the commenter suggested, for tracking program
graduates across State boundaries, we would want to consult with
States, IHEs and other stakeholders.
Changes: None.
Specialized Accreditation (34 CFR 612.5(a)(4)(i))
Comments: Commenters were both supportive of and opposed to the
proposed provision regarding specialized accreditation. Some commenters
noted that CAEP, the new specialized accreditor for teacher preparation
programs, is not an accreditor currently recognized by the Department,
which creates the possibility that there would be no federally-
recognized specialized accreditor for teacher preparation programs.
Commenters believed that the inclusion of this metric is premature
without an organization, which the Secretary recognizes, that can
confer accreditation on these programs. Other commenters argued that
this provision inserts the Federal government into the State program
approval process by mandating specific requirements that a State must
consider when approving teacher preparation programs within its
jurisdiction. They further stated that, although the Department
references CAEP and its standards for what they referred to as a
justification for some of the mandated indicators, CAEP does not
accredit at the program level. They noted that, in fact, no accreditor
provides accreditation specifically to individual teacher preparation
programs; CAEP does so only to entities that offer these programs.
Commenters raised an additional concern that the Department is
seeking to implicitly mandate national accreditation, which would
result in increased costs; and that the proposed regulations set a
disturbing precedent by effectively mandating specialized accreditation
as a requirement for demonstrating program quality. Some commenters
were concerned that with CAEP as the only national accreditor for
teacher preparation, variety of and access to national accreditation
would be limited and controlled.
Other commenters expressed concern that our proposal to offer each
State the option of presenting an assurance that the program is
accredited by a specialized accrediting agency would, at best, make the
specialized accreditor an agent of the Federal government, and at
worst, effectively mandate specialized accreditation by CAEP. The
commenters argued instead that professional accreditation should remain
a voluntary, independent process based on evolving standards of the
profession.
Some commenters asked that the requirement for State reporting on
accreditation or program characteristics in Sec. 612.5(a)(4)(i) and
(ii) be removed because these are duplicative of existing State efforts
with no clear benefit to understanding whether a teacher preparation
program can effectively prepare candidates for classroom success, and
because the proposed regulations are redundant to work being done for
State and national
accreditation. Other commenters recommended that States not be
required to adhere to one national system because, absent a floor set
for compliance purposes, States may build better accreditation systems.
One
commenter proposed that, as an alternative to program accreditation,
States be allowed to include other indicators predictive of a teacher's
effect on student performance, such as evidence of the effective use of
positive behavioral interventions and supports on the basis of the
aggregate number of suspensions and expulsions written by educators
from each teacher preparation program.
Some commenters argued that stronger standards are essential to
improving teacher preparation programs, and that some gradation in the
ratings of how well preparation programs are doing would provide useful
information to the prospective candidates, hiring districts, and
teacher preparation programs that the IRCs and SRCs are intended to
inform.
They noted that as long as CAEP continued with these accreditation
levels, rather than lumping them all together under a high-level
assurance, indicators of these levels should be reflected in the rating
system. They also stated that where States do not require
accreditation, States should attempt to assess the level at which
programs are meeting the additional criteria.
Some commenters argued that accreditation alone is sufficient to
hold teacher preparation programs accountable. Other commenters stated
their agreement that active participation in professional accreditation
should be recognized as an indicator of program quality. One commenter
supported the alignment between the proposed regulations and CAEP's
annual outcomes-based reporting measures, but was concerned that the
regulations as proposed would spawn 50 separate State reporting
systems, data definitions, and processes for quality assurance. The
commenter supported incentivizing accreditation and holding all teacher
preparation programs to the same standards and reporting requirements,
and stated that CAEP's new accreditation process would achieve the
goals of the proposed rules on a national level, while removing burden
from the States. The commenter expressed concern about the requirement
that the Secretary recognize the specialized accrediting agency, and
the statement in the preamble of the NPRM that alternative route
programs are often not eligible for specialized accreditation.
The commenter also indicated that the current input- and
compliance-based requirements within the Department's recognition
process for accreditors run counter to the overarching goal of
providing meaningful data and feedback loops for continuous
improvement.
commenter noted that CAEP was launched to bring all teacher preparation
programs, whether alternative, higher education based, or online-based,
into the fold of accreditation. The commenter recommended that
specialized accrediting agencies recognized by the Council for Higher
Education Accreditation (CHEA) should be allowed to serve as a State
indicator for program quality.
Commenters also noted that no definition of specialized
accreditation was proposed, and requested that we include a definition
of this term. One commenter recommended that a definition of
specialized accreditation include the criteria that the Secretary would
use to recognize an agency for the accreditation of professional
teacher preparation programs, and that one of the criteria for a
specialized agency should be the inclusion of alternative certification
programs as eligible professional teacher preparation programs.
Discussion: First, it is important to note that these regulations
do not set requirements for States' teacher preparation program
approval processes. The regulations establish requirements for States'
reporting to the Secretary on teacher preparation programs in their
States, and specifically their identification of programs determined to
be low-performing or at-risk of being low-performing, and the basis for
those determinations.
Also, upon review of the comments, we realized that imprecise
wording in the proposed regulations likely led to misunderstanding of
their intent regarding program-level accreditation. Our intent was
simple: where a State can certify that the entity offering the teacher
preparation program has been accredited by a teacher preparation
program accreditor recognized by the Secretary, the State may rely on
that accreditation to demonstrate that the program produces teacher
candidates with the basic qualifications identified in Sec.
612.5(a)(4)(ii), rather than having to report separately on those
qualifications. The proposed regulations would not have required
separate accreditation of each individual program offered by an entity,
but we have revised Sec. 612.5(a)(4)(i) to better reflect this intent.
In response to the concern about whether an entity that administers an
alternative route program can receive such accreditation, the entity
can apply for CAEP accreditation, as one of the commenters noted.
As summarized above, commenters presented opposing views of the
role in the regulations of national accreditation through an accreditor
recognized by the Secretary: Opinions that the inclusion of national
accreditation in the regulations represented an unauthorized mandate
for accreditation on the one hand, and an implication that
accreditation alone was sufficient, thus making other options or
further indicators unnecessary, on the other. Similarly, some
commenters argued that the regulations require too much standardization
across States (through either accreditation or a consistent set of
broad indicators), while others argued that the regulations either
allow too much variability among States (leading to lack of
comparability) or encourage the duplicative effort of creating over 50
separate systems.
In the final regulations we seek to balance these concerns. States
are to assess whether a program either has Federally recognized
accreditation (Sec. 612.5(a)(4)(i)) or produces teacher candidates
with certain characteristics (Sec. 612.5(a)(4)(ii)). Allowing States
to report and assess whether their teacher preparation programs have
specialized accreditation or produce teacher candidates with specific
characteristics is not a mandate that a program fulfill either option,
and it may eliminate or reduce duplication of effort by the State. If a
State has an existing process to assess the program characteristics in
Sec. 612.5(a)(4)(ii), it can use that process rather than report on
whether a program has specialized accreditation; conversely, if a State
would like simply to use accreditation by an agency that evaluates the
factors in Sec. 612.5(a)(4)(ii) (whether federally recognized or not)
to fulfill this requirement, it may choose to do so. We believe these
factors do relate to preparation of effective teachers, which is
reflected in standards and expectations developed by the field,
including the CAEP standards. And since accreditation remains a
voluntary process, we cannot rely on it alone for transparency and
accountability across all programs.
We now address the commenters' statement that there may be no
federally recognized accreditor for educator preparation entities. If
there is none, and a State would like to use accreditation by an agency
whose standards align with the elements listed
in Sec. 612.5(a)(4)(ii) (whether federally recognized or not) to
fulfill the requirements in Sec. 612.5(a)(4)(ii), it may do so. In
fact, many States have worked or are working with CAEP on partnerships
to align standards, data collection, and processes.
As we summarized above, some commenters requested that we include a
definition of specialized accreditation, and that it include criteria
the Secretary would use to recognize an agency for accreditation of
teacher preparation programs, and that one of the criteria should be
inclusion of alternative certification programs as eligible programs.
While we appreciate these comments, we believe they are outside the
scope of the proposed and final regulations.
Finally, because teacher preparation program oversight authority
lies with the States, we do not intend for the regulations to require a
single approach--via accreditation or otherwise--for all States to use
in assessing the characteristics of teacher preparation programs. We
do, however, encourage States to work together in designing data
collection processes, in order to reduce or share costs, learn from one
another, and allow greater comparability across States.
In terms of the use of other specific indicators (e.g., positive
behavioral interventions), we encourage interested parties to bring
these suggestions forward to their States in the stakeholder engagement
process required of all States under Sec. 612.4(c).
As one commenter noted, the current statutory recognition process
for accreditors is heavily input based, while the emphasis of the
regulations is on outcomes. Any significant reorientation of the
accreditor recognition process would require statutory change.
Nonetheless, given the rigor and general acceptance of the Federal
recognition process, we believe that only accreditation by a federally
recognized accreditor should be specifically assessed under Sec.
612.5(a)(4)(i), rather than accreditation by accreditors recognized by
outside agencies such as CHEA.
For programs not accredited by a federally recognized accreditor,
States determine whether or to what degree a program meets
characteristics for the alternative, Sec. 612.5(a)(4)(ii).
Because the regulation provides for the use of State procedures as
an alternative to accreditation by a specialized accreditor recognized
by the Secretary, nothing in Sec. 612.5(a)(4) would mandate program
accreditation by
CAEP or any other entity. Nor would the regulation otherwise interfere
in what commenters argue should be a voluntary, independent process
based on evolving standards of the profession. Indeed, this provision
does not require any program accreditation at all.
Changes: We have revised Sec. 612.5(a)(4)(i) to clarify that the
assessment of whether a program is accredited by a specialized
accreditor could be fulfilled by assessing the accreditation of the
entity administering teacher preparation programs, not by accreditation
of the individual programs themselves.
Characteristics of Teacher Preparation Programs (34 CFR
612.5(a)(4)(ii))
Comments: Multiple commenters expressed opposition to this
provision, under which States would report whether a program lacking
specialized accreditation has the basic program characteristics
identified in Sec. 612.5(a)(4)(ii). They stated that it is Federal
overreach
into areas of State or institutional control. For example, while
commenters raised the issue in other contexts, one commenter noted that
entrance and exit qualifications of teacher candidates have
traditionally been the right of the institution to determine when
considering requirements of State approval of teacher preparation
programs. Other commenters expressed concern about Federal involvement
in State and accrediting agency approval of teacher preparation
programs, in which they stated that the Federal government should have
limited involvement.
Other commenters expressed concern about the consequences of
creating rigorous teacher candidate entry and exit qualifications. Some
commenters expressed concerns that this requirement does not take into
account the unique missions of the institutions and will have a
disproportionate and negative impact on MSIs, which may see decreases
in eligible teacher preparation program candidates by denying entry to
candidates who do not meet entry requirements established by this
provision. These commenters were concerned that rigorous entrance
requirements could decrease diversity in the teaching profession.
Commenters also expressed general opposition to requiring rigorous
entry and exit qualifications because they felt that a general
assurance of entry and exit requirements did little to provide
transparency or to differentiate programs by quality. In their view,
the provisions therefore were unneeded and only added confusion and
bureaucracy to these requirements.
Other commenters noted that a lack of clinical experience similar
to the teaching environment in which they begin their careers results
in a struggle for novice teachers, limiting their ability to meet the
needs of their students in their early years in the classroom. They
suggested that the regulations include ``teaching placement,'' for
example, or ``produces teacher candidates with content and pedagogical
knowledge and quality clinical preparation relevant to their teaching
placement, who have met rigorous teacher candidate entry and exit
qualifications pursuant'' to increase the skills and knowledge of
teacher preparation program completers who are being placed in the
classroom as a teacher.
Discussion: While some commenters expressed concern about Federal
overreach, as noted in the earlier discussion of Sec. 612.5(a)(4)(i),
these regulations do not set any requirements for States' processes for
approving teacher preparation programs; they establish requirements for
State reporting to the Secretary on teacher preparation programs,
including how the State determined whether any given program was
low-performing or at-risk of being low-performing. In addition, a
State may report whether institutions have fulfilled requirements in
Sec. 612.5(a)(4) through one of two options: Accreditation by an
accreditor recognized by the Secretary or, consistent with Sec.
612.5(a)(4)(ii), showing that the program produces teacher candidates
(1) with content and pedagogical knowledge and quality clinical
preparation, and (2) who have met rigorous exit qualifications
(including, as we observe in response to the comments summarized
immediately above, by being accredited by an agency whose standards
align with the elements listed in Sec. 612.5(a)(4)(ii)). Thus, the
regulations do not require that programs produce teacher candidates
with any Federally prescribed rigorous exit requirements or quality
clinical preparation.
Rather, as discussed in our response to public comment in the
section on Specialized Accreditation, States have the authority to use
their own process to determine whether a program has these
characteristics. We feel that this authority provides ample flexibility
for State discretion in how to treat this indicator in assessing
overall program performance and the information about each program that
could help that program in areas of program design. Moreover, the basic
elements identified in Sec. 612.5(a)(4)(ii) reflect recommendations of
the non-Federal negotiators, and we agree with them that the presence
or absence of these elements should impact the overall level of a
teacher preparation program's performance.
The earlier discussion of ``rigorous entry and exit requirements''
in our
discussion of public comment on Definitions addresses the comments
regarding rigorous entry requirements. We have revised Sec.
612.5(a)(4)(ii)(C) accordingly to focus solely on rigorous exit
standards. As mentioned in that previous discussion, the Department
also encourages all States to include diversity of program graduates as
an indicator in their performance rating systems, to recognize those
programs that are addressing this critical need in the teaching
workforce.
Ensuring that the program produces teacher candidates who have met
rigorous exit qualifications alone will not provide necessary
transparency or differentiation of program quality. However, having
States report data on the full set of indicators for each program will
provide significant and useful information, and explain the basis for a
State's determination that a particular program is or is not low-
performing or at-risk of being low-performing.
We agree with the importance of high quality clinical experience.
However, it is unrealistic to require programs to ensure that each
candidate's clinical experience is directly relevant to his or her
future, as yet undetermined, teaching placement.
Changes: We have revised Sec. 612.5(a)(4)(ii)(C) to require a
State to assess whether the teacher preparation program produces
teacher candidates who have met rigorous teacher candidate exit
qualifications. We have removed the proposed requirement that States
assess whether teacher candidates meet rigorous entry requirements.
Comments: None.
Discussion: Under Sec. 612.5(a)(4) States must annually report
whether a program is administered by an entity that is accredited by a
specialized accrediting agency or produces candidates with the same
knowledge, preparation, and qualifications. Upon review of the comments
and the language of Sec. 612.5(a)(4), we realized that the proposed
lead stem to Sec. 612.5(a)(4)(ii), ``consistent with Sec.
612.4(b)(3)(i)(B)'', is not needed since the proposed latter provision
has been removed.
Changes: We have removed the phrase ``consistent with Sec.
612.4(b)(3)(i)(B)'' from Sec. 612.5(a)(4)(ii).
Other Indicators of a Teacher's Effect on Student Performance (34 CFR
612.5(b))
Comments: Multiple commenters provided examples of other indicators
that may be predictive of a teacher's effect on student performance and
requested that the Department include them. Commenters stated that a
teacher preparation program (by which we assume the commenters meant
``State'') should be required to report on the extent to which each
program meets workforce demands in their State or local area.
Commenters argued this would go further than just reporting job
placement, and inform the public about how the program works with the
local school systems to prepare qualified teacher candidates for likely
positions. Other commenters stated that, in addition to assessments,
students should evaluate their own learning, reiterating that this
would be a more well-rounded approach to assessing student success. One
commenter recommended that the diversity of a teacher preparation
program's students should be a metric to assess teacher preparation
programs to ensure that teacher preparation programs have significant
diversity in the teachers who will be placed in the classroom.
Discussion: We acknowledge that a State might find that other
indicators, beyond those the regulations require, including those
recommended by the commenters, could be used to provide additional
information on teacher preparation program performance. The regulations
permit States to use (in which case they need to report on) additional
indicators of academic content knowledge and teaching skills to assess
program performance, including other measures that assess the effect of
novice teachers on student performance. In addition, as we have
previously noted, States also may apply and report on other criteria
they have established for identifying which teacher preparation
programs are low-performing or at-risk of being low-performing.
In reviewing commenters' suggestions, we realized that the term
``predictive'' in the phrase ``predictive of a teacher's effect on
student performance'' is inaccurate. The additional measures States may
use are indicators of teachers' academic content knowledge and teaching
skills, rather than predictors of teacher performance.
We therefore are removing the word ``predictive'' from the
regulations. If a State uses other indicators of academic content
knowledge and teaching skills, it must, as we had proposed, apply the
same indicators for all of its teacher preparation programs to ensure
consistent evaluation of preparation programs within the State.
Changes: We have removed the word ``predictive'' from Sec.
612.5(b).
Comments: None.
Discussion: As we addressed in the discussion of public comments on
Scope and Purpose (Sec. 612.1), we have removed the proposed
requirement that in assessing the performance of each teacher
preparation program States consider student learning outcomes ``in
significant part.'' In addition, as we addressed in the discussion of
public comments on Requirements for State Reporting on Characteristics
of Teacher Preparation Programs (Sec. 612.5(a)(4)(ii)), we have
removed rigorous entry requirements from the characteristics of teacher
preparation programs whose administering entities do not have
accreditation by an agency approved by the Secretary. Proposed Sec.
612.6(a)(1) stated that States must use student learning outcomes in
significant part to identify low-performing or at-risk programs, and
proposed Sec. 612.6(b) stated that the technical assistance that a
State must provide to low-performing programs included technical
assistance in the form of information on assessing the rigor of their
entry requirements. We have removed both phrases from the final
regulations.
Changes: The phrase ``in significant part'' has been removed from
Sec. 612.6(a)(1), and ``entry requirement and'' has been removed from
Sec. 612.6(b).
What must a State consider in identifying low-performing teacher
preparation programs or at-risk teacher preparation programs, and what
actions must a State take with respect to those identified as low-
performing? (34 CFR 612.6)
Comments: Some commenters supported the requirement in Sec.
612.6(b) that at a minimum, a State must provide technical assistance
to low-performing teacher preparation programs in the State to help
them improve their performance. Commenters were supportive of targeted
technical assistance because it has the possibility of strengthening
teacher preparation programs and the proposed requirements would allow
States and teacher preparation programs to focus on continuous
improvement and particular areas of strength and need. Commenters
indicated that they were pleased that the first step for a State upon
identifying a teacher preparation program as at-risk or low-performing
is providing that program with technical support, including sharing
data from specific indicators to be used to improve instruction and
clinical practice. Commenters noted that States can help bridge the gap
between teacher preparation programs and LEAs by using that data to
create supports for those teachers whose needs were not met by their
program. Commenters commended the examples of technical assistance
provided in the regulations.
Some commenters suggested additional examples of technical
assistance to include in the regulations. Commenters believed that
technical assistance could include: Training teachers to serve as
clinical faculty or cooperating teachers using the National Board for
Professional Teaching Standards; integrating models of accomplished
practice into the preparation program curriculum; and assisting
preparation programs to provide richer clinical experiences. Commenters
also suggested including first-year teacher mentoring programs and peer
networks as potential ways in which a State could provide technical
assistance to low-performing programs. One commenter noted that, in a
recent survey of educators, teachers cite mentor programs in their
first year of teaching (90 percent) and peer networks (84 percent) as
the top ways to improve teacher training programs.
Commenters recommended that States have the discretion to determine
the scope of technical assistance, rather than being required to focus
technical assistance only on low-performing programs. This would allow
States to distribute support as appropriate in an individual context
and minimize the risk of missing essential opportunities to identify
best practices from high-performing programs and to support those
programs that are best positioned to be increasingly productive and
effective providers. Commenters suggested that entities that administer
teacher preparation programs be responsible for seeking and resourcing
improvement for their low-performing programs.
Some commenters suggested that the Federal government provide
financial assistance to States to facilitate the provision of technical
assistance to low-performing programs. Commenters suggested that the
Department make competitive grants available to States to distribute to
low-performing programs in support of program improvement. Commenters
also suggested that the Federal government offer meaningful incentives
to help States design, test, and share approaches to strengthening weak
programs and support research to assess effective interventions, as it
would be difficult for States to offer the required technical
assistance because State agencies have little experience and few staff
in this area. In addition, commenters recommended that a national
training and technical assistance center be established to build data
capacity, consistency, and quality among States and teacher preparation
programs to support scalable continuous improvement and program quality
in educator preparation.
Commenters recommended that, in addition to a description of the
procedures used to assist low-performing programs as required by
section 207 of the HEA, States should be required to describe in the
SRC the technical assistance they provided to low-performing teacher
preparation programs in the previous year. Commenters suggested that
this
would shift the information reported from descriptions of processes to
more detailed information about real technical assistance efforts,
which could inform technical assistance efforts in other States.
Commenters suggested adding a timeframe for States to provide the
technical assistance to low-performing programs. Commenters suggested a
maximum of three months from the time that the program is identified as
low-performing because, while waiting for the assistance, and in the
early stages of its implementation, the program will continue to
produce teacher candidates of lower quality.
Commenters suggested that States should be required to offer the
assistance of a team of well-recognized scholars in teacher education
and in the education of diverse students in P-12 schools to assist in
the assessment and redesign of programs that are rated below effective.
Some commenters noted that States with publicly supported
universities designated as Historically Black Colleges and
Universities, Hispanic Serving Institutions, and tribal institutions
are required to file with the Secretary a supplemental report of equity
in funding and other support to these institutions. Private and
publicly supported institutions in these categories often lack the
resources to attract the most recognized scholars in the field.
Discussion: The Department appreciates the commenters' support for
the requirement that States provide technical assistance to improve the
performance of any teacher preparation program in its State that has
been identified as low-performing.
We decline to adopt the recommendations of commenters who suggested
that the regulations require States to provide specific types of
technical assistance because we seek to provide States with flexibility
to design technical assistance that is appropriate for the
circumstances of each low-performing program. States have the
discretion to implement technical assistance in a variety of ways. The
regulations outline the minimum requirements, and we encourage States
that wish to do more, such as providing assistance to at-risk or other
programs, to do so. Furthermore, nothing in the regulations prohibits
States from providing technical assistance to at-risk programs in
addition to low-performing programs. Similarly, while we encourage
States to provide timely assistance to low-performing programs, we
decline to prescribe a certain timeframe so that States have the
flexibility to meet these requirements according to their capacity. In
the SRC, States are required to provide a description of the process
used to determine the kind of technical assistance to provide to low-
performing programs and how such assistance is administered.
The Department appreciates comments requesting Federal guidance and
resources to support high-quality technical assistance. We agree that
such activities could be beneficial. However, the commenters'
suggestions that the Department provide financial assistance to States
to facilitate their provision of technical assistance, and to teacher
preparation programs to support their improvement, and request for
national technical assistance centers to support scalable continuous
improvement and to improve program quality, are outside the scope of
this regulation, which is focused on reporting. The Department will
consider ways to help States implement this and other provisions of the
regulations, including by facilitating the sharing of best practices
across States.
Changes: None.
Subpart C--Consequences of Withdrawal of State Approval or Financial
Support
What are the consequences for a low-performing teacher preparation
program that loses the State's approval or the State's financial
support? (34 CFR 612.7(a))
Comments: Multiple commenters opposed the consequences for a low-
performing teacher preparation program based on their opinion that the
loss of TEACH Grant eligibility will result in decreased access to
higher education for students. Commenters noted that, as institutions
become unable to award TEACH Grants to students in low-performing
teacher preparation programs, students attending those programs would
also lose access to TEACH Grant funds and thereby be responsible for
the additional costs that the financial aid program normally would have
covered. If low-income students are required to cover additional
amounts of their tuition, the commenters asserted, they will be less
likely to continue their education or to enroll in the first place, if
they are prospective students. The commenters noted that this would
disproportionately impact low-income and minority teacher preparation
students and decrease the enrollment for those populations.
A number of commenters expressed their concerns about the impacts
of losing financial aid eligibility, and stated that decreasing
financial aid for prospective teachers would negatively impact the
number of teachers joining the profession. As costs for higher
education continue to increase and less financial aid is available,
prospective teacher preparation program students may decide not to
enroll in a teacher preparation program, and instead pursue other
fields that may offer other financial incentives to offset the costs
associated with college. The commenters believed this would result in
fewer teachers entering the field because fewer students would begin
and complete teacher preparation programs, thus increasing teacher
shortages. Other commenters were concerned about how performance
results of teacher preparation programs may impact job outcomes for
students who attended those programs in the past as their ability to
obtain jobs may be impacted by the rating of a program they have not
attended recently. The commenters noted that being rated as low-
performing would likely reduce the ability of a program to recruit,
enroll, and retain students, which would translate into fewer teachers
being available for teaching positions. Others stated that there will
be a decrease in the number of students who seek certification in a
high-need subject area due to the link between TEACH Grant eligibility
and teacher preparation program metrics. They believe this will
increase teacher shortages in areas that already lack qualified
teachers. Additional commenters believed that reporting results tied to
an individual teacher would raise privacy concerns and further drive
potential teachers away from the field for fear that their performance
would be made public.
Some commenters were specifically concerned about the requirement
that low-performing programs be required to provide transition support
and remedial services to students enrolled at the time of termination
of State support or approval. The commenters noted that low-performing
programs are unlikely to have the resources or capacity to provide
transitional support to students.
Discussion: As an initial matter, we note that the requirements in
Sec. 612.7(a) are drawn directly from section 207(b) of the HEA, which
provides that a teacher preparation program from which the State has
withdrawn its approval or financial support due to the State's
identification of the program as low-performing may not, among other
things, accept or enroll any student who receives title IV student aid.
Section 207(b) of the HEA and Sec. 612.7(a) do not concern simply the
consequences of a program being rated as low-performing, but rather the
consequences associated with a State's withdrawal of the approval of a
program or the State's termination of its financial support based on
such a rating. Similarly, section 207(b) of the HEA and Sec. 612.7(a)
do not concern a program's loss of eligibility to participate in the
TEACH Grant program pursuant to part 686, but rather the statutory
prohibition on the award of title IV student aid to students enrolled
in such a teacher preparation program.
We disagree with the commenters that the loss of TEACH Grant funds
will have a negative impact on the affordability of, and access to,
teacher preparation programs. A program that loses its eligibility
would be required to provide transitional support, if necessary, to
students enrolled at the institution at the time of termination of
financial support or withdrawal of approval to assist students in
finding another teacher preparation program that is eligible to enroll
students receiving title IV, HEA funds. By providing transition
services to students, individuals who receive title IV, HEA funds would
be able to find another program in which to use their financial aid and
continue in a teacher preparation program in a manner that will still
address college affordability. We also disagree with the commenters who
stated that low-performing programs are unlikely to have the resources
to provide transitional support to students. We believe that an IHE
with a low-performing teacher preparation program will be offering
other programs that may not be considered low-performing. As such, an
IHE will have resources to provide transition services to students
affected by the teacher preparation program being labeled as low-
performing even if the money does not come directly from the teacher
preparation program.
While performance labels may hurt job market outcomes, because a
low-performing rating would impair a program's ability to recruit and
enroll future cohorts of students, we believe these labels better serve
the interests of students, who deserve to know the quality of the
program they may enroll in. As we have explained, Sec. 612.7 applies
only to programs that lose State approval or financial support as a
result of being identified by the State as low-performing. It does not
apply to every program that is identified as low-performing. We believe
that, while providing information about the quality of a program to a
prospective student may impact the student's enrollment decision, a
student who wishes to become a teacher will find and enroll in a
program that has not lost State approval or State financial support. We
believe that providing quality consumer information to prospective
students will allow them to make informed enrollment decisions.
Students who are aware that a teacher preparation program is not
approved by the State may reasonably choose not to enter that program.
Individuals who wish to enter the teaching field will continue to find
programs that prepare them for the workforce, while avoiding less
effective programs. As a result, we believe, the overall impact on the
number of individuals entering the field will be minimal. Section
612.4(b) implements protections and allowances for teacher preparation
programs with a program size of fewer than 25 students, which would
help to protect against privacy violations, but does not require
sharing information on individual teacher effectiveness with the
general public.
In addition, we believe that, as section 207(b) of the HEA
requires, removing title IV, HEA program eligibility from low-
performing teacher preparation programs that lose State approval or
financial support as a result of the State assessment will encourage
individuals to enroll in more successful teacher preparation programs.
This will keep more prospective teachers enrolled and will mitigate any
negative impact on teacher employment rates.
While these regulations specify that the teacher placement rate and
the teacher retention rate be calculated separately for high-need
schools, no requirements have been created to track employment outcomes
based on high-need subject areas. We believe that an emphasis on high-
need schools will help focus on improving student success across the
board for students in these schools. In addition, the requirement to
report performance at the individual teacher preparation program level
will likely promote reporting by high-need subjects as well.
Section 612.7(a) codifies statutory requirements related to teacher
preparation programs that lose State approval or State financial
support, and the Department does not have flexibility to alter the
language. This includes the requirements for providing transitional
services to students enrolled. However, we believe that many transition
services are already being offered by colleges and universities, as
well as through community organizations focused on student transition
to higher education. For example, help identifying potential colleges,
support in completing admissions and financial aid applications,
disability support services, remedial education, and career services
are all components of transition services that most IHEs offer to some
degree to their student body.
The regulations do not dictate how an institution must assist a
student at the time of termination of financial support or withdrawal
of approval by the State. Transition services may include
helping a student transfer to another program at the same institution
that still receives State funding and State approval, or another
program at another institution. The transition services offered by the
institution should be in the best interest of the student and assist
the student in meeting their educational and occupational goals.
However, the Department believes that teacher preparation programs may
be offering these services through their staff already and those
services should not stop because of the consequences of withdrawal of
State approval or financial support.
Changes: None.
Institutional Requirements for Institutions Administering a Teacher
Preparation Program That Has Lost State Approval or Financial Support
(34 CFR 612.7(b))
Comments: One commenter believed that the Department should require
States to notify K-12 school officials in the instance where a teacher
preparation program student is involved in clinical practice at the
school, noting that the K-12 school would be impacted by the loss of
State support for the teacher preparation program.
Discussion: We decline to require schools and districts to be
notified directly when a teacher preparation program of a student
teacher is assessed as low-performing. While that information would be
available to the public, we believe that directly notifying school
officials may unfairly paint students within that program as
ineffective. A student enrolled in a low-performing teacher preparation
program may be an effective and successful teacher and we believe that
notifying school officials directly may influence the school officials
to believe the student teacher would be a poor performer even though
there would be no evidence about the individual supporting this
assumption.
Additionally, we intend Sec. 612.7(b) to focus exclusively on the
title IV, HEA consequences to the teacher preparation program that
loses State approval or financial support and on the students enrolled
in those programs. This subsection describes the procedure that a
program must undertake to ensure that students are informed of the loss
of State approval or financial support.
Changes: None.
How does a low-performing teacher preparation program regain
eligibility to accept or enroll students receiving title IV, HEA
program funds after a loss of the State's approval or the State's
financial support? (34 CFR 612.8(a))
Comments: One commenter noted that even if a State has reinstated
funding and recognized improved performance, the
program would have to wait for the Department's approval to be fully
reinstated. The commenter stated that this would be Federal overreach
into State jurisdiction and decision-making. Additionally, the
commenter noted that the regulations appear to make access to title IV,
HEA funds for an entire institution contingent on the ratings of
teacher preparation programs.
Another commenter noted that some programs might not ever regain
authorization to prepare teachers if they must transfer students to
other programs since there will not be any future student outcomes
associated with the recent graduates of the low-performing programs.
Discussion: We decline to adopt the commenter's suggestion that the
Department not require a low-performing teacher preparation program
that has previously lost its eligibility to accept or enroll students
receiving title IV, HEA funds to apply to regain that eligibility.
Section 207(b)(4) of the
HEA provides that a teacher preparation program that loses eligibility
to enroll students receiving title IV, HEA funds may be reinstated upon
demonstration of improved performance, as determined by the State.
Reinstatement of eligibility of a low-performing teacher preparation
program would occur if the program meets two criteria: (1) Improved
performance on the teacher preparation program performance criteria in
Sec. 612.5 as determined by the State; and (2) reinstatement of the
State's approval or the State's financial support, or, if both were
lost, the State's approval and the State's financial support. Section
612.8 operationalizes the process for an institution to notify the
Secretary that the State has determined the program has improved its
performance sufficiently to regain the State's approval or financial
support and that the teacher preparation program should again be
permitted to enroll students receiving title IV aid.
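Expressed schematically, the reinstatement test under Sec. 612.8(a) is
a simple conjunction. The following sketch is purely illustrative; the
function and parameter names are ours, and the regulations, not this
sketch, control:

```python
# Illustrative sketch of the Sec. 612.8(a) reinstatement test.
# All names are hypothetical; this is not regulatory text.
def may_regain_title_iv_eligibility(improved_per_state: bool,
                                    lost_approval: bool,
                                    approval_reinstated: bool,
                                    lost_funding: bool,
                                    funding_reinstated: bool) -> bool:
    """Criterion 1: improved performance on the Sec. 612.5 criteria,
    as determined by the State. Criterion 2: reinstatement of whatever
    was lost -- State approval, State financial support, or both."""
    if not improved_per_state:
        return False
    if lost_approval and not approval_reinstated:
        return False
    if lost_funding and not funding_reinstated:
        return False
    return True
```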
We do not propose to tie an entire institution's eligibility for
title IV, HEA funds to the performance of its teacher preparation
program. Any loss of title IV, HEA funds based on these regulations
would apply only to the institution's teacher preparation program and
not to the entire institution. Therefore, an institution would be able
to offer both title IV eligible and non-title IV eligible programs. In
addition, based on the reporting by program, an institution could have
both title IV eligible and ineligible teacher preparation programs
based on the rating of each program. The remaining
programs at the institution would still be eligible to receive title
IV, HEA funds. We are concerned that our inclusion of proposed Sec.
612.8(b)(2) may have led the commenter to believe that an entire
institution would be prohibited from participating in the title IV
programs as a result of a teacher preparation program's loss of
approval or financial support based on low performance. To avoid such
confusion, we have removed Sec. 612.8(b)(2) from the final
regulations. The institutional eligibility requirements in part 600
sufficiently describe the requirements for institutions to participate
in the title IV, HEA programs.
We believe that providing transitional support to students enrolled
at the institution at the time a State terminates financial support or
withdraws approval of a teacher preparation program will provide
appropriate consumer protections to students. We disagree with the
commenter who stated that it would be impossible for a program to
improve its performance on the State assessment because no data, such
as student learning outcomes, would be available on which the program
could be assessed if it were prohibited from enrolling additional
title IV eligible students.
Programs would not be prohibited from enrolling students to determine
future student outcomes. Programs that have lost State approval or
financial support would be limited only in their ability to enroll
additional title IV eligible students, not to enroll all students.
[[Page 75578]]
Changes: We have removed Sec. 612.8(b)(2), which was related to
institutional eligibility.
Part 686--Teacher Education Assistance for College and Higher Education
(TEACH) Grant Program
Subpart A--Scope, Purpose, and General Definitions
Section 686.1 Scope and Purpose
Comments: None.
Discussion: The Higher Education Opportunity Act of 2008 (HEOA)
(Pub. L. 110-315) amended section 465(a)(2)(A) of the HEA to include
educational service agencies in the description of the term low-income
school, and added a new section 481(f) that provides that the term
``educational service agency'' has the meaning given the term in
section 9101 of the ESEA. Also, the ESSA maintained the definition of
the term ``educational service agency'', but it now appears in section
8101 of the ESEA, as amended by the ESSA. We proposed changes to the
TEACH Grant program regulations to incorporate the statutory change,
such as replacing the definition of the term ``school serving low-
income students (low-income school)'' in Sec. 686.2 with the term
``school or educational service agency serving low-income students
(low-income school).'' Previously, Sec. 686.1 stated that in exchange
for a TEACH Grant, a student must agree to serve as a full-time teacher
in a high-need field in a school serving low-income students. We revise
the section to provide that a student must teach in a school or
educational service agency serving low-income students.
Changes: We revised Sec. 686.1 to update the citation in the
definition of the term educational service agency to section 8101 of
the ESEA, as amended, and to use the new term ``school or educational
service agency serving low-income students (low-income school)'' in
place of the term ``school serving low-income students (low-income
school).''
Section 686.2 Definitions
Classification of Instructional Program (CIP)
Comments: None.
Discussion: In the NPRM, we proposed to use the CIP to identify
TEACH Grant-eligible STEM programs. Because, as discussed below, we are
no longer identifying TEACH Grant-eligible STEM programs, the term CIP
is not used in the final regulations.
Changes: We have removed the definition of the term CIP from Sec.
686.2.
High-Quality Teacher Preparation Program Not Provided Through Distance
Education Sec. 686.2
Comments: None.
Discussion: In the NPRM, we proposed a definition for the term
``high-quality teacher preparation program.'' In response to comments,
we have added a definition of a ``high-quality teacher preparation
program provided through distance education'' in Sec. 686.2. We make a
corresponding change to the proposed definition of the term ``high-
quality teacher preparation program'' to distinguish a ``high-quality
teacher preparation program not provided through distance education''
from a ``high-quality teacher preparation program provided through
distance education.''
Furthermore, to ensure that the TEACH Grant program regulations are
consistent with the changes made to part 612, we have revised the
timelines that we proposed in the definition of the term ``high-
quality teacher preparation program'' in part 686, which we now
incorporate in the terms ``high-quality teacher preparation program
not provided through distance education'' and ``high-quality teacher
preparation program provided through distance education.'' We have
also removed the phrase ``or of
higher quality'' from ``effective or of higher quality'' to align the
definition of ``high-quality teacher preparation program not provided
through distance education'' with the definition of the term
``effective teacher preparation program'' in 34 CFR 612.1(d), which
provides that an effective teacher preparation program is a program
with a level of performance higher than a low-performing teacher
preparation or an at-risk teacher preparation program. The phrase ``or
of higher quality'' was redundant and unnecessary.
The new definition is consistent with changes we made with respect to
program-level reporting (including distance education), which are
described in the section of the preamble related to Sec.
612.4(a)(1)(i). We note that the new definition of the term ``high
quality teacher preparation program not provided through distance
education'' relates to the classification of the program under 34 CFR
612.4(b) made by the State where the program was located, as the
proposed definition of the term ``high-quality teacher preparation
program'' provided. This is in contrast to the definition of the term
``high-quality teacher preparation program provided through distance
education'' discussed later in this document.
Also, the proposed definition provided that in the 2020-2021 award
year, a program would be ``high-quality'' only if it was classified as
an effective teacher preparation program in either or both of the
April 2019 and April 2020 State Report Cards. We have determined that
this
provision is unnecessary and have deleted it. Now, because the first
State Report Cards under the regulations will be submitted in October
2019, we have provided that starting with the 2021-2022 award year, a
program is high-quality if it is not classified by the State to be less
than an effective teacher preparation program based on 34 CFR 612.4(b)
in two out of the previous three years. We note that in the NPRM, the
definition of the term ``high-quality teacher preparation program''
contained an error. The proposed definition provided that a program
would be considered high-quality if it were classified as effective or
of higher quality for two out of three years. We intended the
requirement to be that a program is high-quality if it is not rated at
a rating lower than effective for two out of three years. This is a
more reasonable standard, and allows a program that has been rated as
less than effective to improve its rating before becoming ineligible to
award TEACH Grants.
Changes: We have added to Sec. 686.2 the term ``high-quality
teacher preparation program not provided through distance education''
and defined it as a teacher preparation program at which less than 50
percent of the program's required coursework is offered through
distance education and that, starting with the 2021-2022 award year
and subsequent award years, is not classified by the State to be less
than an effective teacher preparation program, based on 34 CFR
612.4(b), in
two out of the previous three years or meets the exception from State
reporting of teacher preparation program performance under 34 CFR
612.4(b)(3)(ii)(D) or 34 CFR 612.4(b)(5).
High-Quality Teacher Preparation Program Provided Through Distance
Education Sec. 686.2
Comments: In response to the Supplemental NPRM, many commenters
stated that it was unfair that one State's classification of a teacher
preparation program provided through distance education as low-
performing or at-risk of being low-performing would determine TEACH
Grant eligibility for all students enrolled in that program who receive
TEACH Grants, even if other States classified the program as effective.
Commenters did not propose alternative options. One
[[Page 75579]]
commenter argued that the determination of institutional eligibility to
disburse TEACH Grants is meant to rest squarely with the Department,
separate from determinations relating to the title II reporting system.
Another commenter suggested that there should be a single set of
performance standards, agreed to by all States, to which distance
education programs would be held accountable for TEACH Grant
purposes. Some commenters felt
teacher preparation programs provided through distance education might
have few students in a State and, as a result, might become victims of
an unusually unrepresentative sample in a particular State.
Several commenters stated that it was unclear how the proposed
regulations would take into account TEACH Grant eligibility for
students enrolled in a teacher preparation program provided through
distance education that does not lead to initial certification or if
the program does not receive an evaluation by a State. Another
commenter stated that the proposed regulations would effectively impose
a requirement for distance education institutions to adopt a 50-State
authorization compliance strategy to offer their distance education
teacher licensure programs to students in all 50 States.
Discussion: We are persuaded by the commenters that the proposed
regulations were too stringent. Consequently, we are revising the
proposed definition of ``high-quality teacher preparation program
provided through distance education'' such that, to become ineligible
to participate in the TEACH Grant program, the teacher preparation
program provided through distance education would need to be rated as
low-performing or at-risk for two out of three years by the same State.
This revision focuses on the classification of a teacher preparation
program provided through distance education as provided by the same
State rather than the classification of a program by multiple States to
which the commenters objected. Moreover, this is consistent with the
treatment of teacher preparation programs at brick-and-mortar
institutions, which also have to be classified as low-performing or at-
risk for two out of three years by the same State to become ineligible
to participate in the TEACH Grant program.
We disagree with the commenter that the determination of
institutional eligibility to disburse TEACH Grants is meant to rest
squarely with the Department, separate from determinations relating to
teacher preparation program performance under title II of the HEA. The
HEA provides that the Secretary determines which teacher preparation
programs are high-quality, and the Secretary has reasonably decided to
rely, in part, on the classification of teacher preparation program
performance by States under title II of the HEA. Further, as the
performance rating of teacher preparation programs not provided through
distance education could also be subject to unrepresentative samples
(for example, programs located near a State border), this concern is
not limited to teacher preparation programs provided through distance
education.
The performance standards related to title II are left to a State's
discretion; thus, if States want to work together to create a single
set of performance standards, there is no barrier to their doing so.
By way of clarification, the HEA and current regulations provide
for TEACH Grant eligibility for students enrolled in post-baccalaureate
and master's degree programs. The eligibility of programs that do not
lead to initial certification is not based on a title II performance
rating. In addition, if the teacher preparation program provided
through distance education is not classified by a State for a given
year due to small n-size, students would still be able to receive TEACH
Grants if the program meets the exception from State reporting of
teacher preparation program performance under 34 CFR 612.4(b)(3)(ii)(D)
or 34 CFR 612.4(b)(5). We disagree that the regulations effectively
impose a requirement for distance education institutions to adopt a 50-
State authorization compliance strategy to offer their distance
education teacher licensure programs to students in all 50 States.
Rather, our regulations provide, in part, for reporting on teacher
preparation programs provided through distance education under the
title II reporting system, with the resulting performance level
classification of the program forming the basis for that program's
eligibility to disburse TEACH Grants.
Changes: We have revised the definition of a high-quality teacher
preparation program provided through distance education to be a teacher
preparation program at which at least 50 percent of the program's
required coursework is offered through distance education and that,
starting with the 2021-2022 award year and subsequent award years, is
not classified by the same State to be less than an effective teacher
preparation program based on 34 CFR 612.4(b) in two of the previous
three years or meets the exception from State reporting of teacher
preparation program performance under 34 CFR 612.4(b)(3)(ii)(D) or 34
CFR 612.4(b)(5).
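Both the distance education and non-distance education definitions
turn on the same counting rule: a program fails the ``high-quality''
test only if a single State classifies it below effective in two of
the previous three years. The sketch below is a hypothetical
illustration of that rule; the names and rating labels are ours, and
the small-n reporting exceptions under 34 CFR 612.4(b)(3)(ii)(D) and
612.4(b)(5) are omitted:

```python
# Hypothetical illustration of the two-of-three-years rule; omits the
# exceptions from State reporting under 34 CFR 612.4(b)(3)(ii)(D) and (b)(5).
BELOW_EFFECTIVE = {"at-risk", "low-performing"}

def high_quality(ratings_by_state: dict[str, list[str]]) -> bool:
    """Return False only if some single State rated the program below
    effective in two of the previous three years. A program not
    provided through distance education has one entry (the State where
    it is located); a distance education program may be rated by many
    States, each evaluated separately."""
    for ratings in ratings_by_state.values():
        last_three = ratings[-3:]
        if sum(r in BELOW_EFFECTIVE for r in last_three) >= 2:
            return False
    return True

# One State rating the program below effective twice makes it ineligible...
print(high_quality({"TN": ["at-risk", "effective", "low-performing"]}))  # False
# ...but scattered below-effective ratings across different States do not.
print(high_quality({"TN": ["effective", "at-risk", "effective"],
                    "NC": ["effective", "effective", "low-performing"]}))  # True
```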
TEACH Grant-Eligible Institution
Comments: Several commenters disagreed with our proposal to link
TEACH Grant program eligibility to State ratings of teacher preparation
program performance conducted under the title II reporting system
described in part 612. Commenters asserted that State ratings of
teacher preparation programs should not determine TEACH Grant program
eligibility because it is not a good precedent to withhold financial
aid from qualified students on the basis of the quality of the program
in which the student is enrolled. Commenters also expressed concern
that, under part 612, each State may develop its own criteria for
assessing teacher preparation program quality, and that this variation
between States will impact teacher preparation programs' eligibility
for TEACH Grants. Commenters stated that using different quality
measures to determine student eligibility for TEACH Grants will be
unfair to students, as programs in different States will be evaluated
using different criteria.
A commenter that offers only graduate degree programs and no
programs that lead to initial certification noted that the HEA provides
that current teachers may be eligible for TEACH Grants to obtain
graduate degrees, and questioned how those students could obtain TEACH
Grants under the proposed definitions of the terms ``TEACH Grant-
eligible institution'' and ``TEACH Grant-eligible program.''
Commenters also expressed concern that the proposed definition of
the term TEACH Grant-eligible institution will result in an overall
reduction in the number of institutions that are eligible to provide
TEACH Grants, and that, because of this reduction, fewer students will
pursue high-need fields such as special education, or teach in high-
poverty, diverse, urban or rural communities where student test scores
may be lower. One commenter stated that it is unfair to punish students
by denying them access to financial aid when the States they live in
and the institutions they attend may not be able to supply the data on
which the teacher preparation programs are being assessed.
Discussion: We believe that creating a link between institutions
with teacher preparation programs eligible for TEACH Grants and the
ratings of teacher preparation programs under the title II reporting
system is critical, and will allow the Secretary to identify which
teacher preparation programs are high-
[[Page 75580]]
quality. An ``eligible institution,'' as defined in section 420L(1)(A)
of the HEA, is one that the Secretary determines ``provides high-
quality teacher preparation and professional development services,
including extensive clinical experience as part of pre-service
preparation,'' among other requirements. Consistent with this
requirement, we have defined the term ``TEACH Grant-eligible program''
to include those teacher preparation programs that a State has
determined provide at least effective teacher preparation. Under title
II of the HEA, States are required to assess the quality of teacher
preparation programs in the State and to make a determination as to
whether a program is low-performing or at-risk of being low-performing.
A teacher preparation program that does not fall under either one of
these categories is considered an effective teacher preparation program
under these final regulations. It is appropriate and reasonable for the
Secretary to rely on a State's assessment of the quality of teacher
preparation programs in that State for purposes of determining which
programs are TEACH Grant-eligible programs.
We agree that States will assess teacher preparation programs based
on different criteria and measures. The HEA only requires a State to
assess the quality of teacher preparation in that State and does not
require comparability between States. That different States may use
different standards is not necessarily unfair, as it is reasonable for
States to consider specific conditions in their States when designing
their annual assessments. We believe it is important that students
receiving TEACH Grants be enrolled in programs that the State has
identified as providing effective teacher preparation.
We agree that in addition to ensuring that students wishing to
achieve initial certification to become teachers are eligible for TEACH
Grants, the HEA provides that a teacher or a retiree from another
occupation with expertise in a field in which there is a shortage of
teachers, such as mathematics, science, special education, English
language acquisition, or another high-need field, or a teacher who is
using high-quality alternative certification routes to become certified
is eligible to receive TEACH Grants. To ensure that these eligible
students are able to obtain TEACH Grants, we have modified the
definitions of the terms ``TEACH Grant-eligible institution'' and
``TEACH Grant-eligible program.''
We also acknowledge the possibility that the overall number of
institutions eligible to award TEACH Grants could decrease, because a
TEACH Grant-eligible institution now must, in most cases, provide at
least one high-quality teacher preparation program, while in the
current regulation, an institution may be TEACH Grant-eligible if it
offers a baccalaureate degree that, in combination with other training
or experience, will prepare an individual to teach in a high-need field
and has entered into an agreement with another institution to provide
courses necessary for its students to begin a career in teaching. We
note that so long as an otherwise eligible institution has one high-
quality teacher preparation program not provided through distance
education or one high-quality program provided through distance
education, it continues to be a TEACH Grant-eligible institution.
Furthermore, we do not believe that the regulations would necessarily
create fewer incentives for students to pursue fields such as special
education or to teach in high-poverty, diverse, or rural communities
where test scores may be lower. TEACH Grants will continue to be
available to
students so long as their teacher preparation programs are classified
as effective teacher preparation programs by the State (subject to the
exceptions previously discussed), and we are not aware of any evidence
that programs that prepare teachers who pursue fields such as special
education or who teach in communities where test scores are lower will
be classified as at-risk or low-performing teacher preparation programs
on the basis of lower test scores. We believe that those students will
choose to pursue those fields while enrolled in high-quality programs.
The larger reason that the number of institutions providing TEACH
Grants may decrease is that the final regulations narrow the definition
of a TEACH Grant-eligible institution to generally those institutions
that offer at least one high-quality teacher preparation program not
provided through distance education or one high-quality teacher
preparation program provided through distance education at the
baccalaureate or master's degree level (that also meets additional
requirements) and institutions that provide a high-quality teacher
preparation program not provided through distance education or one
high-quality teacher preparation program provided through distance
education that is a post-baccalaureate program of study.
We do not agree that student learning outcomes for any subgroup,
including for teachers who teach students with disabilities, would
necessarily be lower if properly measured. Further, student learning
outcomes are only one of multiple measures used to determine a rating
and, thereby, TEACH Grant eligibility. Thus, a single measure, whether
student
learning outcomes or another, would not necessarily lead to the teacher
preparation program being determined by the State to be low-performing
or at-risk of being low-performing and correspondingly being ineligible
for TEACH Grants. As discussed elsewhere in this document, States
determine the ways to measure student learning outcomes that give all
teachers a chance to demonstrate effectiveness regardless of the
composition of their classrooms, and States may also determine weights
of the criteria used in their State assessments of teacher preparation
program quality.
We do not agree with the comment that the definition of the term
TEACH Grant-eligible program will unfairly punish students who live in
States or attend institutions that fail to comply with the regulations
in part 612 by failing to supply the data required in that part.
Section 205 of the HEA requires States and institutions to submit IRCs
and SRCs annually. In addition, students will have access to
information about a teacher preparation program's eligibility before
they enroll so that they may select programs that are TEACH Grant-
eligible. Section 686.3(c) also allows students who are currently
enrolled in a TEACH Grant-eligible program to receive additional TEACH
Grants to complete their program, even if the program becomes
ineligible to award TEACH Grants to new students.
For reasons discussed under the TEACH Grant-eligible program
section of this document, we have made conforming changes to the
definition of a TEACH Grant-eligible program that are reflected in the
definition of TEACH Grant-eligible institution where applicable.
Changes: We have revised the definition of a TEACH Grant-eligible
institution to provide that an institution is considered a TEACH
Grant-eligible institution if it either (1) provides a program that is
the equivalent of an associate degree as defined in Sec. 668.8(b)(1)
and that is acceptable for full credit toward a baccalaureate degree
in a high-quality teacher preparation program not provided through
distance education or a high-quality teacher preparation program
provided through distance education; or (2) provides a master's degree
program that does not meet the definition of the terms ``high-quality
teacher preparation program not provided through distance education''
or ``high-quality teacher preparation program
[[Page 75581]]
provided through distance education'' because it is not subject to
reporting under 34 CFR part 612, but that prepares (a) a teacher or a
retiree from another occupation with expertise in a field in which
there is a shortage of teachers, such as mathematics, science, special
education, English language acquisition, or another high-need field;
or (b) a teacher who is using high-quality alternative certification
routes to become certified.
TEACH Grant-Eligible Program
Comments: A commenter recommended the definition of TEACH Grant-
eligible program be amended to add ``or equivalent,'' related to the
eligibility of a two-year program so that the definition would read,
``Provides a two-year or equivalent program that is acceptable for full
credit toward a baccalaureate degree in a high-quality teacher
preparation program'' because some programs could be less than two
years, but the curriculum covered is the equivalent of a two-year
program.
Discussion: We agree with the commenter that some programs could be
less than two years, but the curriculum could cover the equivalent of a
two-year program, and therefore agree that the provision regarding what
constitutes an eligible two-year program of study should be revised.
However, we base the revision on already existing regulations regarding
``eligible program'' rather than the commenter's specific language
recommendations. The regulations for ``eligible program'' in Sec.
668.8 provide that an eligible program is an educational program that
is provided by a participating institution and satisfies other relevant
requirements contained in the section, including that an eligible
program provided by an institution of higher education must, in part,
lead to an associate, bachelor's, professional, or graduate degree or be
at least a two academic-year program that is acceptable for full credit
toward a bachelor's degree. For purposes of Sec. 668.8, the Secretary
considers an ``equivalent of an associate degree'' to be, in part, the
successful completion of at least a two-year program that is acceptable
for full credit toward a bachelor's degree and qualifies a student for
admission into the third year of a bachelor's degree program. Based on
these existing regulations, we amended the proposed definition of TEACH
Grant-eligible program to provide that a program that is the equivalent
of an associate degree as defined in Sec. 668.8(b)(1) that is
acceptable for full credit toward a baccalaureate degree in a high-
quality teacher preparation program is considered to be a TEACH Grant-
eligible program. In addition, as described in the discussion of the
term ``TEACH Grant-eligible institution,'' we have made a corresponding
change to the definition of the term ``TEACH Grant-eligible program''
to ensure that programs that prepare graduate degree students who are
eligible to receive TEACH Grants pursuant to section 420N(a)(2)(B) of
the HEA are eligible programs. This change applies to programs that are
not assessed by a State under title II of the HEA.
Changes: We have revised the definition of TEACH Grant-eligible
program to provide that a program that is a two-year program or is the
equivalent of an associate degree as defined in Sec. 668.8(b)(1) that
is acceptable for full credit toward a baccalaureate degree in a high
quality teacher preparation program is also considered to be a TEACH
Grant-eligible program. We have also clarified that a master's degree
program that does not meet the definition of the terms ``high-quality
teacher preparation program not provided through distance education''
or ``high-quality teacher preparation program provided through distance
education'' because it is not subject to reporting under 34 CFR part
612, but that prepares (1) a teacher or a retiree from another
occupation with expertise in a field in which there is a shortage of
teachers, such as mathematics, science, special education, English
language acquisition, or another high-need field; or (2) a teacher who
is using high-quality alternative certification routes to become
certified is a TEACH Grant-eligible program.
TEACH Grant-Eligible STEM Program
Comments: Multiple commenters stated that the proposed definition
of the term TEACH Grant-eligible STEM program was not discussed during
the negotiated rulemaking process and unreasonably creates a separate
standard for TEACH Grant eligibility without the corresponding
reporting required in the SRC. Commenters generally stated that all
teacher preparation programs should be held accountable in a fair and
equitable manner. Commenters further stated that the Department did not
provide any rationale for excepting STEM programs from the ratings of
teacher preparation programs described in part 612. Commenters also
noted that the proposed definition ignores foreign language, special
education, bilingual education, and reading specialists, which are
identified as high-need fields in the HEA. Several commenters also
disagreed with the different treatment provided to STEM programs under
the definition because they believed that STEM fields were being given
extra allowances with respect to failing programs and that creating
different standards of program effectiveness for STEM programs and
teacher preparation programs makes little sense. Commenters suggested
that, instead, the Department should require STEM programs to be
rated as effective or exceptional in order for students in those
programs to receive TEACH Grants.
Commenters also questioned what criteria the Secretary would use to
determine eligibility, since the Secretary would be responsible for
determining which STEM programs are TEACH Grant-eligible. Finally,
commenters emphasized the importance of the pedagogical aspects of
teacher education.
Discussion: We agree that it is important that teacher preparation
programs that are considered TEACH Grant-eligible programs be high-
quality programs, and that the proposed definition of the term TEACH
Grant-eligible STEM program may not achieve that goal. The regulations
in part 612 only apply to teacher preparation programs, which are
defined in that part generally as programs that lead to an initial
State teacher certification or licensure in a specific field. Many STEM
programs do not lead to an initial State teacher certification or
licensure, and hence are not subject to the State assessments described
in part 612 and section 207 of the HEA. We have carefully considered
the commenters' concerns, and have decided to remove our proposed
definition of the term TEACH Grant-eligible STEM program because it
would be difficult to implement and would result in different types of
programs being held to different quality standards. We also acknowledge
the importance of the pedagogical aspects of teacher education. A
result of the removal of this definition will be that a student must be
enrolled in a high-quality teacher preparation program as defined in
Sec. 686.2(e) to be eligible for a TEACH Grant, and that few students
participating in STEM programs will receive TEACH Grants. Those
students may be eligible for TEACH Grants for post-baccalaureate or
graduate study after completion of their STEM programs.
Changes: We have removed the TEACH Grant-eligible STEM program
definition from Sec. 686.2, as well as references to and uses of that
definition elsewhere in part 686 where this term appeared.
[[Page 75582]]
Section 686.11 Eligibility To Receive a TEACH Grant
Comments: Some commenters supported linking TEACH Grant eligibility
to the title II reporting system for the 2020-2021 title IV award year,
noting that this would prevent programs that fail to prepare teachers
effectively from remaining TEACH Grant-eligible, and that linking TEACH
Grant program eligibility to teacher preparation program quality is an
important lever to bring accountability to programs equipping teachers
to teach in the highest need schools. Other commenters were concerned
that linking title II teacher preparation program ratings to TEACH
Grant eligibility will have a negative impact on recruitment for
teacher preparation programs, will restrict student access to TEACH
Grants, and will negatively impact college affordability for many
students, especially for low- and middle-income students and students
of color who may be disproportionately impacted because these students
typically depend more on Federal student aid. Commenters were concerned
that limiting aid for these students, as well as for students in rural
communities or students in special education programs, would further
increase teacher shortages in these areas, would slow down progress in
building a culturally and racially representative educator workforce,
and possibly exacerbate current or pending teacher shortages across the
nation in general. Many commenters opined that, because there is no
evidence supporting the use of existing student growth models for
determining institutional eligibility for the TEACH Grant program,
institutional eligibility for TEACH Grants and student eligibility for
all title IV Federal student aid in a teacher preparation program would
be determined based on an invalid and unreliable rating system. Some
commenters recommended that Federal student aid be based on student
need, not on institutional ratings, which they asserted lack a sound
research base because of the potential unknown impacts on
underrepresented groups. Others expressed concern that financial aid
offices would experience more burden and more risk of error in the
student financial aid packaging process because they would have more
information to review to determine student eligibility. This would
include, for distance education programs, where each student lives and
which programs are eligible in which States.
Many commenters stated that the proposed regulations would grant
the State, rather than the Department of Education, authority to
determine TEACH Grant eligibility, which is a delegation of authority
that Congress did not provide the Department, and that a State's strict
requirements may make the TEACH Grant program unusable by institutions,
thereby eliminating TEACH Grant funding from students at those
institutions. It was recommended that the regulations allow for
professional judgment regarding TEACH Grant eligibility, that TEACH
Grants mimic Federal Pell Grants in annual aggregates, and that a link
should be available at studentloans.gov for TEACH Grant requirements.
One commenter further claimed that the proposed regulations represent a
profound and unwelcome shift in the historic relationship between
colleges, States, and the Federal government and that there is no
indication that the HEA envisions the kind of approach to institutional
and program eligibility for TEACH Grants proposed in the regulations.
The commenter opined that substantive changes to the eligibility
requirements should be addressed through the legislative process,
rather than through regulation. A commenter noted that a purpose of the
proposed regulations is to deal with deficiencies in the TEACH Grant
program, and thus, the Department should focus specifically on issues
with the TEACH Grant program and not connect these to reporting of the
teacher preparation programs.
Discussion: We appreciate the comments supporting the linking of
TEACH Grant eligibility to the title II reporting system for the 2021-
2022 title IV award year. We disagree, however, with comments
suggesting that such a link will have a negative impact on recruitment
for teacher preparation programs and restrict student access to TEACH
Grants because this circumstance would only arise in the case of
programs rated other than effective, and it is not unreasonable for
students to choose to attend teacher preparation programs that are
effective over those that are not. While we agree that low- and middle-
income students and students of color are more likely to depend on
Federal student aid, the regulations would not affect their eligibility
for Federal student aid as long as they are enrolled in a TEACH Grant-
eligible teacher preparation program at a TEACH Grant-eligible
institution. The same would be true for students in rural communities
or in special education programs. Because student eligibility for
Federal student aid would not be affected in these circumstances,
teacher shortages in these areas also would not be impacted. In 2011,
only 38 institutions were identified by their States as having a low-
performing teacher preparation program.\53\ That evaluation was based
on an institution-wide assessment of quality. Under part 612, each
individual teacher preparation program offered by an institution will
be evaluated by the State, and it would be unlikely for all teacher
preparation programs at an institution to be rated as low-performing.
We believe that students reliant on Federal student aid will have
sufficient options to enroll in high-quality teacher preparation
programs under the final regulations. While we hope that students would
use the ratings of teacher preparation programs to pick more effective
programs initially, we also provide under Sec. 686.3 that an otherwise
eligible student who received a TEACH Grant for enrollment in a TEACH
Grant-eligible program is eligible to receive additional TEACH Grants
to complete that program, even if that program is no longer considered
TEACH Grant-eligible. An otherwise eligible student who received a
TEACH Grant for enrollment in a program before July 1 of the year these
final regulations become effective would remain eligible to receive
additional TEACH Grants to complete the program even if the program is
no longer considered TEACH Grant-eligible under Sec. 686.2(e).
---------------------------------------------------------------------------
\53\ U.S. Department of Education, Office of Postsecondary
Education (2013). Preparing and Credentialing the Nation's Teachers:
The Secretary's Ninth Report on Teacher Quality. Washington, DC.
Retrieved from https://title2.ed.gov/Public/TitleIIReport13.pdf.
(Hereafter referred to as ``Secretary's Ninth Report.'')
---------------------------------------------------------------------------
With respect to comments objecting to the use of student growth to
determine TEACH Grant eligibility, student growth is only one of the
many indicators that States use to assess teacher preparation program
quality in part 612, and States have discretion to determine the weight
assigned to that indicator in their assessment.
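Because each State selects both its indicators and their weights, the
same program data can yield different ratings in different States. A
minimal sketch of such a weighted assessment, with entirely invented
indicator names, scores, and weights, is:

```python
# Invented indicators, scores (0-100), and weights; under part 612 each
# State chooses its own indicators and weighting scheme.
def composite_score(indicators: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of indicator scores."""
    total = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in weights) / total

scores = {"student_growth": 62.0, "placement_rate": 81.0, "surveys": 74.0}
# A State that weights student growth lightly rates this program higher...
print(composite_score(scores, {"student_growth": 0.2, "placement_rate": 0.4,
                               "surveys": 0.4}))  # 74.4
# ...than a State that weights it heavily.
print(composite_score(scores, {"student_growth": 0.6, "placement_rate": 0.2,
                               "surveys": 0.2}))  # 68.2
```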
While the new regulations will require financial aid offices to
track and review additional information with respect to student
eligibility for TEACH Grants, we do not agree that this would result in
greater risk of incorrect packaging of financial aid. For an
institution to begin and continue to participate in any title IV, HEA
program, the institution must demonstrate to the Secretary that it is
capable of administering that program under the standards of
administrative capability provided under Sec. 668.16 (Standards of
administrative capability). An institution that does not meet
administrative capability standards
[[Page 75583]]
would not be eligible to disburse any title IV, HEA funds, including
TEACH Grants. Moreover, institutions have always had to determine
whether a student seeking a TEACH Grant is enrolled in a TEACH Grant-
eligible program. The final regulations require the institution to be
aware of whether any of the teacher preparation programs at the
institution have been rated as low-performing or at-risk by the State
when identifying which programs that it offers are TEACH Grant-eligible
programs.
We disagree with comments asserting that the proposed regulations
would grant States, rather than the Department, authority to determine
TEACH Grant eligibility, which they claimed is a delegation of
authority that Congress did not authorize. The HEA provides that an
``eligible institution'' for purposes of the TEACH Grant program is one
``that the Secretary determines . . . provides high quality teacher
preparation . . . .'' The Secretary has determined that States are in
the best position to assess the quality of teacher preparation programs
located in their States, and it is reasonable for the Secretary to rely
on the results of the State assessment required by section 207 of the
HEA. We believe that it is appropriate to use the regulatory process to
define how the Secretary determines that an institution provides high
quality teacher preparation and that the final regulations reasonably
amend the current requirements so that they are more meaningful.
We also disagree with commenters that a State's strict requirements
may make the TEACH Grant program unusable by institutions and thereby
eliminate TEACH Grant funding for students at those institutions. We
believe that States will conduct careful and reasonable assessments of
teacher preparation programs located in their States, and we also
believe if a State determines a program is not effective at providing
teacher preparation, students should not receive TEACH Grants to attend
that program.
Regarding the recommendation that the regulations allow for
professional judgment regarding TEACH Grant eligibility, there is no
prohibition regarding the use of professional judgment for the TEACH
Grant program, provided that all applicable regulatory requirements are
met. With respect to the comment suggesting that the TEACH Grant
program should mimic the Pell Grant program in annual aggregates, we
note that, just as the Pell Grant program has its own annual
aggregates, the TEACH Grant program has its own statutory annual award
limits that must be adhered to. The HEA provides that an undergraduate
or post-graduate student may receive up to $4,000 per year, and Sec.
686.3(a) provides that an undergraduate or post-baccalaureate student
may receive the equivalent of up to four Scheduled Awards during the
period required for completion of the first undergraduate baccalaureate
program of study and the first post-baccalaureate program of study
combined. For graduate students, the HEA provides up to $4,000 per
year, and Sec. 686.3(b) stipulates that a graduate student may receive
the equivalent of up to two Scheduled Awards during the period required
for the completion of the TEACH Grant-eligible master's degree program
of study.
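Put as arithmetic, the annual and Scheduled Award limits described
above imply the following aggregate maximums (the variable names are
ours; this sketch ignores proration for less-than-full-time
enrollment):

```python
# Aggregate TEACH Grant maximums implied by the limits described above.
ANNUAL_LIMIT = 4_000  # dollars per year under the HEA

MAX_SCHEDULED_AWARDS = {
    "undergraduate/post-baccalaureate": 4,  # Sec. 686.3(a)
    "graduate": 2,                          # Sec. 686.3(b)
}

for level, n_awards in MAX_SCHEDULED_AWARDS.items():
    print(f"{level}: up to ${ANNUAL_LIMIT * n_awards:,} in total")
# undergraduate/post-baccalaureate: up to $16,000 in total
# graduate: up to $8,000 in total
```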
Regarding the comment requesting a link to the TEACH Grant program
via the studentloans.gov Web site, we do not believe that adding a link
to the studentloans.gov Web site for TEACH Grants would be helpful, and
could in fact be confusing. This Web site is specific to loans, not
grants. Only if a student does not fulfill the Agreement to Serve is
the TEACH Grant converted to a Direct Unsubsidized Loan. The Web site
already includes a link to the teach-ats.ed.gov Web site, where
students can complete TEACH Grant counseling and the Agreement to
Serve. The Department does provide information about the TEACH Grant
program on its studentaid.ed.gov Web site.
We disagree with the comment that the Department should focus
specifically on issues or deficiencies with the TEACH Grant program and
not connect any issues or deficiencies to reporting of teacher
preparation programs under title II. The regulations are intended to
improve the TEACH Grant program, in part, by operationalizing the
definition of a high-quality teacher preparation program by connecting
the definition to the ratings of teacher preparation programs under the
title II reporting system. The regulations are not meant to address
specific TEACH Grant program issues or program deficiencies.
We decline to adopt the suggestion that an at-risk teacher
preparation program should be given the opportunity and support to
improve before any consequences, including those regarding TEACH
Grants, are imposed. The HEA specifies that TEACH Grants may only be
provided to high-quality teacher preparation programs, and we do not
believe that a program identified as being at-risk should be considered
a high-quality teacher preparation program. With respect to the comment
that institutions in the specific commenter's State will remove
themselves from participation in the TEACH Grant program rather than
pursue high-stakes Federal requirements, we note that, while we cannot
prevent institutions from ending their participation in the program, we
believe that institutions understand the need for providing TEACH
Grants to eligible students and that institutions will continue to try
to meet that need. Additionally, we note that all institutions that
enroll students receiving Federal financial assistance are required to
submit an annual IRC under section 205(a) of the HEA, and that all
States that receive funds under the HEA must submit an annual SRC.
These provisions apply whether or not an institution participates in
the TEACH Grant program.
We agree with the commenters who recommended avoiding specific
carve-outs for potential mathematics and science teachers. As discussed
under the section titled ``TEACH Grant-eligible STEM program,'' we have
removed the TEACH Grant-eligible STEM program definition from Sec.
686.2 and deleted the term where it appeared elsewhere in part 686.
Changes: None.
Sec. 686.42 Discharge of Agreement To Serve
Comments: None.
Discussion: Section 686.42(b) describes the procedure we use to
determine a TEACH Grant recipient's eligibility for discharge of an
agreement to serve based on the recipient's total and permanent
disability. We intend this procedure to mirror the procedure outlined
in Sec. 685.213, which governs discharge of Direct Loans. We are making
a change to Sec. 686.42(b) to make the discharge procedures for TEACH
Grants more consistent with the Direct Loan discharge procedures.
Specifically, Sec. 685.213(b)(7)(ii)(C) provides that the Secretary
does not require a borrower to pay interest on a Direct Loan for the
period from the date the loan was discharged until the date the
borrower's obligation to repay the loan was reinstated. This idea was
not clearly stated in Sec. 686.42(b). We have added new Sec.
686.42(b)(4) to explicitly state that if the TEACH Grant of a recipient
whose TEACH Grant agreement to serve is reinstated is later converted
to a Direct Unsubsidized Stafford Loan, the recipient will not be
required to pay interest that accrued on the TEACH Grant disbursements
from the date the agreement to serve was discharged until the date the
agreement to serve was reinstated. Similarly, Sec. 685.213(b)(7)(iii)
describes the information that the Secretary's notification to a
borrower in
[[Page 75584]]
the event of reinstatement of the loan will include. We have amended
Sec. 686.42(b)(3) to make the TEACH Grant regulations more consistent
with the Direct Loan regulations. Specifically, we removed proposed
Sec. 686.42(b)(3)(iii), which provided that interest accrual would
resume on TEACH Grant disbursements made prior to the date of discharge
if the agreement was reinstated.
Changes: We have removed proposed Sec. 686.42(b)(3)(iii) and added
a new Sec. 686.42(b)(4) to more clearly describe that, if the TEACH
Grant of a recipient whose TEACH Grant agreement to serve is reinstated
is later converted to a Direct Unsubsidized Stafford Loan, the
recipient will not be required to pay interest that accrued on the
TEACH Grant disbursements from the date the agreement to serve was
discharged until the date the agreement to serve was reinstated. This
change also makes the TEACH Grant regulation related to total and
permanent disability more consistent with the Direct Loan discharge
procedures.
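The practical effect of new Sec. 686.42(b)(4) can be stated as simple
interest arithmetic: if a reinstated recipient's TEACH Grant later
converts to a loan, the days between discharge and reinstatement
contribute no interest. The sketch below assumes simple daily accrual
and uses invented dates, amounts, and rates; actual servicing
calculations may differ:

```python
# Illustrative only: simple daily accrual with the discharge-to-
# reinstatement window excluded per new Sec. 686.42(b)(4).
from datetime import date

def interest_at_conversion(principal: float, annual_rate: float,
                           disbursed: date, converted: date,
                           discharged: date, reinstated: date) -> float:
    total_days = (converted - disbursed).days
    excluded_days = (reinstated - discharged).days  # no interest accrues here
    return principal * annual_rate * (total_days - excluded_days) / 365

print(round(interest_at_conversion(
    4000, 0.05,
    disbursed=date(2018, 1, 1), converted=date(2021, 1, 1),
    discharged=date(2019, 1, 1), reinstated=date(2020, 1, 1)), 2))  # 400.55
```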
Executive Orders 12866 and 13563
Regulatory Impact Analysis
Under Executive Order 12866, the Secretary must determine whether
this regulatory action is ``significant'' and, therefore, subject to
the requirements of the Executive order and subject to review by the
Office of Management and Budget (OMB). Section 3(f) of Executive Order
12866 defines a ``significant regulatory action'' as an action likely
to result in a rule that may--
(1) Have an annual effect on the economy of $100 million or more,
or adversely affect a sector of the economy, productivity, competition,
jobs, the environment, public health or safety, or State, local, or
tribal governments or communities in a material way (also referred to
as an ``economically significant'' rule);
(2) Create serious inconsistency or otherwise interfere with an
action taken or planned by another agency;
(3) Materially alter the budgetary impacts of entitlement grants,
user fees, or loan programs or the rights and obligations of recipients
thereof; or
(4) Raise novel legal or policy issues arising out of legal
mandates, the President's priorities, or the principles stated in the
Executive order.
This final regulatory action is a significant regulatory action
subject to review by OMB under section 3(f) of Executive Order 12866.
We have also reviewed these regulations under Executive Order
13563, which supplements and explicitly reaffirms the principles,
structures, and definitions governing regulatory review established in
Executive Order 12866. To the extent permitted by law, Executive Order
13563 requires that an agency--
(1) Propose or adopt regulations only on a reasoned determination
that their benefits justify their costs (recognizing that some benefits
and costs are difficult to quantify);
(2) Tailor its regulations to impose the least burden on society,
consistent with obtaining regulatory objectives and taking into
account--among other things and to the extent practicable--the costs of
cumulative regulations;
(3) In choosing among alternative regulatory approaches, select
those approaches that maximize net benefits (including potential
economic, environmental, public health and safety, and other
advantages; distributive impacts; and equity);
(4) To the extent feasible, specify performance objectives, rather
than the behavior or manner of compliance a regulated entity must
adopt; and
(5) Identify and assess available alternatives to direct
regulation, including economic incentives--such as user fees or
marketable permits--to encourage the desired behavior, or provide
information that enables the public to make choices.
Executive Order 13563 also requires an agency ``to use the best
available techniques to quantify anticipated present and future
benefits and costs as accurately as possible.'' The Office of
Information and Regulatory Affairs of OMB has emphasized that these
techniques may include ``identifying changing future compliance costs
that might result from technological innovation or anticipated
behavioral changes.''
We are issuing these final regulations only on a reasoned
determination that their benefits justify their costs. In choosing
among alternative regulatory approaches, we selected those approaches
that maximize net benefits. Based on the analysis that follows, the
Department believes that these regulations are consistent with the
principles in Executive Order 13563.
We also have determined that this regulatory action does not unduly
interfere with State, local, or tribal governments in the exercise of
their governmental functions.
In this RIA we discuss the need for regulatory action, the
potential costs and benefits, net budget impacts, assumptions,
limitations, and data sources, as well as regulatory alternatives we
considered. Although the majority of the costs related to information
collection are discussed within this RIA, elsewhere in this document
under Paperwork Reduction Act of 1995, we also identify and further
explain burdens specifically associated with information collection
requirements.
1. Need for Regulatory Action
Recent international assessments of student achievement have
revealed that students in the United States are significantly behind
students in other countries in science, reading, and mathematics.\54\
Although many factors influence student achievement, a large body of
research has used value-added modeling to demonstrate that teacher
quality is the largest in-school factor affecting student
achievement.\55\ We use ``value-added'' modeling and related terms to
refer to statistical methods that use changes in the academic
achievement of students over time to isolate and estimate the effect of
particular factors, such as family, school, or teachers, on changes in
student achievement.\56\ One study found that the difference between
having a teacher who performed at a level one standard deviation below
the mean and a teacher who performed at a level one standard deviation
above the mean was equivalent to student learning gains of a full
year's worth of knowledge.\57\
---------------------------------------------------------------------------
\54\ Kelly, D., Xie, H., Nord, C.W., Jenkins, F., Chan, J.Y.,
Kastberg, D. (2013). Performance of U.S. 15-Year-Old Students in
Mathematics, Science, and Reading Literacy in an International
Context: First Look at PISA 2012 (NCES 2014-024). Retrieved from
U.S. Department of Education, National Center for Education
Statistics Web site: https://nces.ed.gov/pubs2014/2014024rev.pdf.
\55\ Sanders, W., Rivers, J.C. (1996). Cumulative and Residual
Effects of Teachers on Future Student Academic Achievement.
Retrieved from University of Tennessee, Value-Added Research and
Assessment Center; Rivkin, S., Hanushek, E., & Kain, J. (2005).
Teachers, Schools, and Academic Achievement. Econometrica, 73(2), 417-458;
Rockoff, J. (2004). The Impact of Individual Teachers on Student
Achievement: Evidence from Panel Data. American Economic Review,
94(2), 247-252.
\56\ For more information on approaches to value-added modeling,
see also: Braun, H. (2005). Using Student Progress to Evaluate
Teachers: A Primer on Value-Added Models. Retrieved from https://files.eric.ed.gov/fulltext/ED529977.pdf; Sanders, W.J. (2006).
Comparisons Among Various Educational Assessment Value-Added Models,
Power of Two--National Value-Added Conference, Battelle for Kids,
Columbus, OH. SAS, Inc.
\57\ E. Hanushek. (1992). The Trade-Off between Child Quantity
and Quality. Journal of Political Economy, 100(1), 84-117.
---------------------------------------------------------------------------
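For readers less familiar with value-added modeling, the following is a
minimal illustrative specification of the general form these studies
estimate. The notation is ours; it is a sketch of the approach, not the
model used in any study cited above.

```latex
% A_{it}: achievement of student i in year t; j(i,t): student i's teacher in year t.
\[
  A_{it} = \lambda A_{i,t-1} + X_{it}\beta + \theta_{j(i,t)} + \varepsilon_{it}
\]
% Conditioning on prior achievement A_{i,t-1} and observed characteristics X_{it}
% lets the teacher effect \theta capture growth rather than achievement level.
% Program-level analyses then compare the estimated \theta values of the novice
% teachers that each preparation program produced.
```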
A number of factors are associated with teacher quality, including
academic content knowledge, in-service training, and years of
experience, but researchers and policymakers have begun to examine
whether student achievement discrepancies can be
[[Page 75585]]
explained by differences in the preparation their teachers received
before entering the classroom.\58\ An influential study on this topic
found that the effectiveness of teachers in New York City public
schools who were prepared through different teacher preparation
programs varied in statistically significant ways, as measured by
value-added estimates of their students' growth.\59\
---------------------------------------------------------------------------
\58\ D. Harris & T. Sass. (2011). Teacher Training, Teacher
Quality, and Student Achievement. Journal of Public Economics, 95(7-
8), 798-812; D. Aaronson, L. Barrow, & W. Sanders. (2007). Teachers
and Student Achievement in the Chicago Public High Schools. Journal
of Labor Economics, 25(1), 95-135; D. Boyd, H. Lankford, S. Loeb, J.
Rockoff, & Wyckoff, J. (2008). The Narrowing Gap in New York City
Teacher Qualifications and Its Implications for Student Achievement
in High-Poverty Schools. Journal of Policy Analysis and Management,
27(4), 793-818.
\59\ D. Boyd, P. Grossman, H. Lankford, S. Loeb, & J. Wyckoff
(2009). ``Teacher Preparation and Student Achievement.'' Education
Evaluation and Policy Analysis, 31(4): 416-440.
---------------------------------------------------------------------------
Subsequent studies have examined the value-added scores of teachers
prepared through different teacher preparation programs in Missouri,
Louisiana, North Carolina, Tennessee, and Washington.\60\ Many of these
studies have found statistically significant differences between
teachers prepared at different preparation programs. For example, State
officials in Tennessee and Louisiana have worked with researchers to
examine whether student achievement could be used to inform teacher
preparation program accountability. After controlling for observable
differences in students, researchers in Tennessee found that the most
effective teacher preparation programs in that State produced graduates
who were two to three times more likely than other novice teachers to
be in the top quintile of teachers in a particular subject area, as
measured by increases in the achievement of their students, with the
least-effective programs producing teachers who were equally likely to
be in the bottom quintile.\61\ Analyses based on Louisiana data on
student growth linked to the programs that prepared students' teachers
found some statistically significant differences in teacher
effectiveness.\62\ Although the study's sample size was small, three
teacher preparation programs produced novice teachers who appeared, on
average, to be as effective as teachers with at least two years of
experience, based on growth in student achievement in four or more
content areas.\63\ A study analyzing differences between teacher
preparation programs in Washington based on the value-added scores of
their graduates also found a few statistically significant differences,
which the authors argued were educationally meaningful.\64\ In
mathematics, the average difference between teachers from the highest
performing program and the lowest performing program was approximately
1.5 times the difference in performance between students eligible for
free or reduced-price lunch and those who are not; in reading, the
average difference was 2.3 times that gap.\65\
---------------------------------------------------------------------------
\60\ Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M.
(2015). Teacher Preparation Programs and Teacher Quality: Are There
Real Differences Across Programs? Education Finance and Policy,
10(4), 508-534.; Campbell, S., Henry, G., Patterson, K., Yi, P.
(2011). Teacher Preparation Program Effectiveness Report. Carolina
Institute for Public Policy; Goldhaber, D., & Liddle, S. (2013). The
Gateway to the Profession: Assessing Teacher Preparation Programs
Based on Student Achievement. Economics of Education Review, 34, 29-
44.
\61\ Tennessee Higher Education Commission. Report Card on the
Effectiveness of Teacher Training Programs, 2010.
\62\ Gansle, K., Noell, G., Knox, R.M., Schafer, M.J. (2010).
Value Added Assessment of Teacher Preparation Programs in Louisiana:
2007-2008 TO 2009-2010 Overview of 2010-11 Results. Retrieved from
Louisiana Board of Regents.
\63\ Ibid.
\64\ Goldhaber, D., & Liddle, S. (2013). The Gateway to the
Profession: Assessing Teacher Preparation Programs Based on Student
Achievement. Economics of Education Review, 34, 29-44.
\65\ Ibid. 1.5 times the difference between students eligible
for free or reduced price lunch is approximately 12 percent of a
standard deviation, while 2.3 times the difference is approximately
19 percent of a standard deviation.
---------------------------------------------------------------------------
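To put the figures in footnote 65 in context, the income-based
achievement gap they imply can be recovered by simple division; the
back-calculation below is ours, not a figure reported in the study.

```latex
\[
  \frac{0.12\,\sigma}{1.5} = 0.08\,\sigma \;(\text{mathematics})
  \qquad\text{and}\qquad
  2.3 \times 0.08\,\sigma \approx 0.19\,\sigma \;(\text{reading}),
\]
% i.e., the gap between students eligible and not eligible for free or
% reduced-price lunch is roughly 8 percent of a standard deviation.
```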
In contrast to these findings, Koedel et al. found very small
differences in effectiveness between teachers prepared at different
programs in Missouri.\66\ The vast majority of variation in teacher
effectiveness was within programs, instead of between programs.\67\
However, the authors noted that the lack of variation between programs
in Missouri could reflect a lack of competitive pressure to spur
innovation within traditional teacher preparation programs.\68\ A
robust evaluation system that included outcomes could spur innovation
and increase differentiation between teacher preparation programs.\69\
---------------------------------------------------------------------------
\66\ Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M.
(2015). Teacher Preparation Programs and Teacher Quality: Are There
Real Differences Across Programs? Education Finance and Policy,
10(4), 508-534.
\67\ Ibid.
\68\ Ibid.
\69\ Ibid.
---------------------------------------------------------------------------
We acknowledge that there is debate in the research community about
the specifications that should be used when conducting value-added
analyses of the effectiveness of teachers prepared through different
preparation programs,\70\ but also recognize that the field is moving
in the direction of weighting value-added analyses in assessments of
teacher preparation program quality.
---------------------------------------------------------------------------
\70\ For a discussion of issues and considerations related to
using school fixed effects models to compare the effectiveness of
teachers from different teacher preparation programs who are working
in the same school, see Lockwood, J.R., McCaffrey, D., Mihaly, K.,
Sass, T. (2012). Where You Come From or Where You Go? Distinguishing
Between School Quality and the Effectiveness of Teacher Preparation
Program Graduates. (Working Paper 63). Retrieved from National
Center for Analysis of Longitudinal Data in Education Research.
---------------------------------------------------------------------------
Thus, despite the methodological debate in the research community,
CAEP has developed new standards that require, among other measures,
evidence that students completing a teacher preparation program
positively impact student learning.\71\ The new standards are currently
voluntary for the more than 900 education preparation providers who
participate in the education preparation accreditation system.
Participating institutions account for nearly 60 percent of the
providers of educator preparation in the United States, and their
enrollments account for nearly two-thirds of newly prepared teachers.
The new CAEP standards will be required beginning in 2016.\72\ The
standards are an indication that the effectiveness ratings of teachers
trained through teacher preparation programs are increasingly being
used as a way to evaluate teacher preparation program performance. The
research on teacher preparation program effectiveness is relevant to
the elementary and secondary schools that rely on teacher preparation
programs to recruit and select talented individuals and prepare them to
become future teachers. In 2011-2012 (the most recent year for which
data are available), 203,701 individuals completed either a traditional
teacher preparation program or an alternative route program. The
National Center for Education Statistics (NCES) projects that by 2020,
public and private schools will need to hire as many as 362,000
teachers each year due to teacher retirement and attrition and
increased student enrollment.\73\ In order to meet the needs of public
and private schools, States may have to expand traditional and
alternative route
[[Page 75586]]
programs to prepare more teachers, find new ways to recruit and train
qualified individuals, or reduce the need for novice teachers by
reducing attrition or developing different staffing models. Better
information on the quality of teacher preparation programs will help
States and LEAs make sound staffing decisions.
---------------------------------------------------------------------------
\71\ CAEP 2013 Accreditation Standards.(2013). Retrieved from
https://caepnet.files.wordpress.com/2013/09/final_board_approved1.
\72\ Teacher Preparation: Ensuring a Quality Teacher in Every
Classroom. Hearing before the Senate Committee on Health, Education,
Labor and Pensions, 113th Cong. (2014) (Statement by Mary Brabeck).
\73\ U.S. Department of Education (2015). Table 208.20. Digest
of Education Statistics, 2014. Retrieved from National Center for
Education Statistics.
---------------------------------------------------------------------------
Despite research suggesting that the academic achievement of
students taught by graduates of different teacher preparation programs
may vary with the program their teacher attended, analyses linking
student achievement to teacher preparation programs have not been
conducted and made publicly available for teacher preparation programs
in all States. Congress has recognized the value of assessing and
reporting on the quality of teacher preparation, and requires States
and IHEs to report detailed information about the quality of teacher
preparation programs in the State under the HEA. When reauthorizing the
title II reporting system, members of Congress noted a goal of having
teacher preparation programs explore ways to assess the impact of their
programs' graduates on student academic achievement. In fact, the
report accompanying the House Bill (H. Rep. 110-500) included the
following statement, ``[i]t is the intent of the Committee that teacher
preparation programs, both traditional and those providing alternative
routes to State certification, should strive to increase the quality of
individuals graduating from their programs with the goal of exploring
ways to assess the impact of such programs on student's academic
achievement.''
Moreover, in roundtable discussions and negotiated rulemaking
sessions held by the Department, stakeholders repeatedly expressed
concern that the current title II reporting system provides little
meaningful data on the quality of teacher preparation programs or the
impact of those programs' graduates on student achievement. The recent
GAO report on teacher preparation programs noted that half or more of
the States and teacher preparation programs surveyed said the current
title II data collection was not useful to assessing their programs;
and none of the surveyed school district staff said they used the
data.\74\
---------------------------------------------------------------------------
\74\ GAO at 26.
---------------------------------------------------------------------------
Currently, States must annually calculate and report data on more
than 400 data elements, and IHEs must report on more than 150 elements.
While some information requested in the current reporting system is
statutorily required, other elements--such as whether the IHE requires
a personality test prior to admission--are not required by statute and
do not provide information that is particularly useful to the public.
Thus, stakeholders stressed at the negotiated rulemaking sessions that
the current system is too focused on inputs and that outcome-based
measures would provide more meaningful information.
Similarly, even some of the statutorily required data elements in
the current reporting system do not provide meaningful information on
program performance and how program graduates are likely to perform in
a classroom. For example, the HEA requires IHEs to report both scaled
scores on licensure tests and pass rates for students who complete
their teacher preparation programs. Yet, research provides mixed
findings on the relationship between licensure test scores and teacher
effectiveness.\75\ This may be because most licensure tests were
designed to measure the knowledge and skills of prospective teachers
but not necessarily to predict classroom effectiveness.\76\ The
predictive value of licensure exams is further eroded by the
significant variation in State pass/cut scores on these exams, with
many States setting pass scores at a very low level. The National
Council on Teacher Quality found that every State except Massachusetts
sets its pass/cut scores on content assessments for elementary school
teachers below the average score for all test takers, and most States
set pass/cut scores at the 16th percentile or lower.\77\ Further, even
with low pass/cut scores, some States allow teacher candidates to take
licensure exams multiple times. Some States also permit IHEs to exclude
students who have completed all program coursework but have not passed
licensure exams when the IHEs report pass rates on these exams for
individuals who have completed teacher preparation programs under the
current title II reporting system. This may explain, in part, why
States and IHEs have, over the past three years, reported consistently
high average pass rates of 95 to 96 percent on licensure or
certification exams for individuals who completed traditional teacher
preparation programs in the 2009-10 academic year.\78\
---------------------------------------------------------------------------
\75\ Clotfelter, C., Ladd, H., & Vigdor, J. (2010). Teacher
Credentials and Student Achievement: Longitudinal Analysis with
Student Fixed Effects. Economics of Education Review, 26(6), 673-
682; Goldhaber, D. (2007). Everyone's Doing It, But What Does
Teacher Testing Tell Us about Teacher Effectiveness? The Journal of
Human Resources, 42(4), 765-794; Buddin, R., & Zamarro, G. (2009).
Teacher Qualifications and Student Achievement in Urban Elementary
Schools. Journal of Urban Economics, 66, 103-115.
\76\ Goldhaber, D. (2007). Everyone's Doing It, But What Does
Teacher Testing Tell Us about Teacher Effectiveness? The Journal of
Human Resources, 42(4), 765-794.
\77\ National Council on Teacher Quality, State Teacher Policy
Yearbook, 2011. Washington, DC: National Council on Teacher Quality
(2011). For more on licensure tests, see U.S. Department of
Education, Office of Planning, Evaluation, and Policy Development,
Policy and Program Studies Service (2010), Recent Trends in Mean
Scores and Characteristics of Test-Takers on Praxis II Licensure
Tests. Washington, DC: U.S. Department of Education.
\78\ Secretary's Tenth Report.
---------------------------------------------------------------------------
Thus, while the current title II reporting system produces detailed
and voluminous data about teacher preparation programs, the data do not
convey a clear picture of program quality as measured by how program
graduates will perform in a classroom. This lack of meaningful data
prevents school districts, principals, and prospective teacher
candidates from making informed choices, creating a market failure due
to imperfect information.
On the demand side, principals and school districts lack
information about the past performance of teachers from different
teacher preparation programs and may rely on inaccurate assumptions
about the quality of teacher preparation programs when recruiting and
hiring novice teachers. An accountability system that provides
information about how teacher preparation program graduates are likely
to perform in a classroom and how likely they are to stay in the
classroom will be valuable to school districts and principals seeking
to efficiently recruit, hire, train, and retain high-quality educators.
Such a system can help to reduce teacher attrition, a particularly
important problem given that more than a quarter of novice teachers
leave the teaching profession altogether within three years of becoming
classroom
teachers.\79\ High teacher turnover rates are problematic because
research has demonstrated that, on average, student achievement
increases considerably with more years of teacher experience in the
first three through five years of teaching.\80\
---------------------------------------------------------------------------
\79\ Ingersoll, R. (2003). Is There Really a Teacher Shortage?
Retrieved from University of Washington Center for the Study of
Teaching and Policy Web site: https://depts.washington.edu/ctpmail/PDFs/Shortage-RI-09-2003.pdf.
\80\ Ferguson, R.F. & Ladd, H.F. (1996). How and why money
matters: An analysis of Alabama schools. In H.F. Ladd (Ed.), Holding
schools accountable: Performance-based education reform (pp. 265-
298). Washington, DC: The Brookings Institution; Hanushek, E., Kain,
J., O'Brien, D., & Rivkin, S. (2005). The Market for Teacher Quality
(Working Paper no. 11154). Retrieved from National Bureau for
Economic Research Web site: www.nber.org/papers/w11154; Gordon, R.,
Kane, T., Staiger, D. (2006). Identifying Effective Teachers Using
Performance on the Job; Clotfelter, C., Ladd, H., & Vigdor, J.
(2007). How and Why Do Teacher Credentials Matter for Student
Achievement? (Working Paper No. 2). Retrieved from National Center
for Analysis of Longitudinal Data in Education Research; Kane, T.,
Rockoff, J., Staiger, D. (2008). What does certification tell us
about teacher effectiveness? Evidence from New York City. Economics
of Education Review, 27(6), 615-631.
---------------------------------------------------------------------------
[[Page 75587]]
On the supply side, when considering which program to attend,
prospective teachers lack comparative information about the placement
rates and effectiveness of a program's graduates. Teacher candidates
may enroll in a program without the benefit of information on
post-graduation employment rates or on employer and graduate feedback
about program quality, and, most importantly, without understanding how
well the program prepares its graduates to be effective in the
classroom. NCES data indicate that 66 percent of certified teachers who
received their bachelor's degree in 2008 took out loans to finance
their undergraduate education. These teachers borrowed an average of
$22,905.\81\ The average base salary for full-time teachers with a
bachelor's degree in their first year of teaching in public elementary
and secondary schools is $38,490.\82\ Thus, two-thirds of prospective
teacher candidates may incur debt equivalent to 60 percent of their
starting salary in order to attend teacher preparation programs without
access to reliable indicators of how well these programs will prepare
them for classroom teaching or help them find a teaching position in
their chosen field. A better accountability system with more meaningful
information will enable prospective teachers to make more informed
choices while also enabling and encouraging States, IHEs, and
alternative route providers to monitor and continuously improve the
quality of their teacher preparation programs.
---------------------------------------------------------------------------
\81\ National Center for Education Statistics (2009).
Baccalaureate and Beyond Longitudinal Study. Washington, DC: U.S.
Department of Education.
\82\ National Center for Education Statistics (2015). Digest of
Education Statistics, 2014. Washington, DC: U.S. Department of
Education (2015): Table 211.20.
---------------------------------------------------------------------------
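The 60 percent figure above follows directly from the two NCES amounts
just cited. The sketch below simply reproduces that arithmetic; it uses
no inputs beyond the numbers quoted in the text.

```python
# Debt-to-starting-salary ratio implied by the NCES figures quoted above.
average_debt = 22_905     # mean undergraduate borrowing, 2008 bachelor's recipients
starting_salary = 38_490  # average first-year base salary, public school teachers

print(f"Debt is {average_debt / starting_salary:.0%} of starting salary")
# -> Debt is 60% of starting salary
```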
The lack of meaningful data also prevents States from restricting
program credentials to programs with the demonstrated ability to
prepare more effective teachers, or accurately identifying low-
performing and at-risk teacher preparation programs and helping these
programs improve. Not surprisingly, States have not identified many
programs as low-performing or at-risk based on the data currently
collected. In the latest title II reporting requirement submissions,
twenty-one States did not classify any teacher preparation programs as
low-performing or at-risk.\83\ Of the programs identified by States as
low-performing or at-risk, 28 were based in IHEs that participate in
the Teacher Education Assistance for College and Higher Education
(TEACH) Grant program. The GAO also found that some States were not
assessing whether programs in their State were low performing at
all.\84\ Since the beginning of title II, HEA reporting in 2001, 29
States and territories have never identified a single IHE with an at-
risk or low-performing teacher preparation program.\85\ Under the final
regulations, however, every State will collect and report more
meaningful information about teacher preparation program performance,
which will enable States to target scarce public funding more
efficiently through direct support to more effective teacher
preparation programs and State financial aid to prospective students
attending those programs.
---------------------------------------------------------------------------
\83\ Secretary's Tenth Report.
\84\ GAO at 17.
\85\ Secretary's Tenth Report.
---------------------------------------------------------------------------
Similarly, under the current title II reporting system, the Federal
government is unable to ensure that financial assistance for
prospective teachers is used to help students attend programs with the
best record for producing effective classroom teachers. The final
regulations help accomplish this by ensuring that program performance
information is available for all teacher preparation programs in all
States and by restricting eligibility for Federal TEACH Grants to
programs that are rated ``effective.''
Most importantly, elementary and secondary school students,
including those students in high-need schools and communities who are
disproportionately taught by recent teacher preparation program
graduates, will be the ultimate beneficiaries of an improved teacher
preparation program accountability system.\86\ Such a system better
focuses State and Federal resources on promising teacher preparation
programs while informing teacher candidates and potential employers
about high-performing teacher preparation programs and enabling States
to more effectively identify and improve low-performing teacher
preparation programs.
---------------------------------------------------------------------------
\86\ Several studies have found that inexperienced teachers are
far more likely to be assigned to high-poverty schools, including
Boyd, D., Lankford, H., Loeb, S., Rockoff, J., & Wyckoff, J. (2008).
The Narrowing Gap in New York City Teacher Qualifications and Its
Implications for Student Achievement in High-Poverty Schools.
Journal of Policy Analysis and Management, 27(4), 793-818;
Clotfelter, C., Ladd, H., Vigdor, J., & Wheeler, J. (2007). High
Poverty Schools and the Distribution of Teachers and Principals.
North Carolina Law Review, 85, 1345-1379; Sass, T., Hannaway, J.,
Xu, Z., Figlio, D., & Feng, L. (2010). Value Added of Teachers in
High-Poverty Schools and Lower-Poverty Schools (Working Paper No.
52). Retrieved from National Center for Analysis of Longitudinal
Data in Education Research at www.coweninstitute.com/wp-content/uploads/2011/01/1001469-calder-working-paper-52-1.pdf.
---------------------------------------------------------------------------
Recognizing the benefits of improved information on teacher
preparation program quality and associated accountability, several
States have already developed and implemented systems that map teacher
effectiveness data back to teacher preparation programs. The
regulations help ensure that all States generate useful data that are
accessible to the public to support efforts to improve teacher
preparation programs.
Brief Summary of the Regulations
The Department's plan to improve teacher preparation has three core
elements: (1) Reduce the reporting burden on IHEs while encouraging
States to make use of data on teacher effectiveness to build an
effective teacher preparation accountability system driven by
meaningful indicators of quality (title II accountability system); (2)
reform targeted financial aid for students preparing to become teachers
by directing scholarship aid to students attending higher-performing
teacher preparation programs (TEACH Grants); and (3) provide more
support for IHEs that prepare high-quality teachers.
The regulations address the first two elements of this plan.
Improving institutional and State reporting and State accountability
builds on the work that States like Louisiana and Tennessee have
already started, as well as work that is underway in States receiving
grants under Phase One or Two of the Race to the Top Fund.\87\ All of
these States have, will soon have, or plan to have statewide systems
that track the academic growth of a teacher's students by the teacher
preparation program from which the teacher graduated and, as a result,
will be better able to identify the teacher preparation programs that
are producing effective teachers and the policies and programs that
need to be strengthened to scale those effects.
---------------------------------------------------------------------------
\87\ The applications and Scopes of Work for States that
received a grant under Phase One or Two of the Race to the Top Fund
are available online at: https://www2.ed.gov/programs/racetothetop/awards.html.
---------------------------------------------------------------------------
Consistent with feedback the Department has received from
stakeholders, under the regulations
[[Page 75588]]
States must assess the quality of teacher preparation programs
according to the following indicators: (1) Student learning outcomes of
students taught by graduates of teacher preparation programs (as
measured by aggregating learning outcomes of students taught by
graduates of each teacher preparation program); (2) job placement and
retention rates of these graduates (based on the number of program
graduates who are hired into teaching positions and whether they stay
in those positions); and (3) survey outcomes for surveys of program
graduates and their employers (based on questions about whether or not
graduates of each teacher preparation program are prepared to be
effective classroom teachers).
The regulations will help provide meaningful information on program
quality to prospective teacher candidates, school districts, States,
and IHEs that administer traditional teacher preparation programs and
alternative routes to State certification or licensure programs. The
regulations will make data available that also can inform academic
program selection, program improvement, and accountability.
During public roundtable discussions and subsequent negotiated
rulemaking sessions, the Department consulted with representatives from
the teacher preparation community, States, teacher preparation program
students, teachers, and other stakeholders about the best way to
produce more meaningful data on the quality of teacher preparation
programs while also reducing the reporting burden on States and teacher
preparation programs where possible. The regulations specify three
types of outcomes States must use to assess teacher preparation program
quality, but States retain discretion to select the most appropriate
methods to collect and report these data. In order to give States and
stakeholders sufficient time to develop these methods, the requirements
of these regulations are implemented over several years.
2. Discussion of Costs, Benefits, and Transfers
The Department has analyzed the costs of complying with the final
regulations. Due to uncertainty about the current capacity of States in
some relevant areas and the considerable discretion the regulations
will provide States (e.g., the flexibility States would have in
determining who conducts the teacher and employer surveys), we cannot
evaluate the costs of implementing the regulations with absolute
precision. In the NPRM, the Department estimated that the total
annualized cost of these regulations would be between $42.0 million and
$42.1 million over ten years. However, based on public comments
received, it became clear to us that this estimate created confusion.
In particular, a number of commenters incorrectly interpreted this
estimate as the total cost of the regulations over a ten-year period.
The estimates in the NPRM captured an annualized
cost (i.e., between $42.0 million and $42.1 million per year over the
ten year period) rather than a total cost (i.e., between $42.0 million
and $42.1 million in total over ten years). In addition, these
estimated costs reflected both startup and ongoing costs, so affected
entities would likely see costs higher than these estimates in the
first year of implementation and costs lower than these estimates in
subsequent years. The Department believed that these assumptions were
clearly outlined for the public in the NPRM; however, based on the
nature of public comments received, we recognize that additional
explanation is necessary.
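To illustrate the distinction the Department is drawing, the sketch
below annualizes a hypothetical ten-year cost stream at a 7 percent
discount rate (one of the rates OMB conventionally uses). The numbers
are invented for illustration and are not the Department's estimated
cost stream.

```python
# Annualized vs. total cost, with hypothetical numbers: annualizing converts an
# uneven stream of yearly costs into the constant yearly amount that has the
# same present value over the same period.
costs = [60.0] + [38.0] * 9  # hypothetical: high start-up year, lower ongoing years
r = 0.07                     # discount rate

present_value = sum(c / (1 + r) ** t for t, c in enumerate(costs, start=1))
annuity_factor = sum(1 / (1 + r) ** t for t in range(1, len(costs) + 1))
annualized = present_value / annuity_factor

print(f"total = {sum(costs):.0f}, annualized = {annualized:.1f} per year")
# -> total = 402, annualized = 40.9 per year: the two figures differ by
#    roughly a factor of ten, which is the confusion described above.
```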
The Department has reviewed the comments submitted in response to
the NPRM and has revised some assumptions in response to the
information we received. We discuss specific public comments, where
relevant, in the appropriate sections below. In general, we do not
discuss non-substantive comments.
A number of commenters expressed general concerns regarding the
cost estimates included in the NPRM and indicated that implementing
these regulations would cost far more than $42.0 million over ten
years. As noted above, we believe most of these comments arose from a
fundamental misunderstanding of the estimates presented in the NPRM.
While several commenters attempted to provide alternate cost estimates,
we note that many of these estimates were unreasonably high because
they included costs for activities or initiatives that are not required
by the regulations. For instance, in one alternate estimate (submitted
jointly by the California Department of Education, the California
Commission on Teacher Credentialing, and the California State Board of
Education) cited by a number of commenters, over 95 percent of the
costs outlined were due to non-required activities such as dramatically
expanding State standardized assessments to all grades and subjects or
completing time- and cost-intensive teacher evaluations of all teachers
in the State in every year. Nonetheless, we have taken portions of
those estimates into account where appropriate (i.e., where the
alternate estimates reflect actual requirements of the final
regulations) in revising our assumptions.
In addition, some commenters argued that our initial estimates were
too low because they did not include costs for activities not directly
required by the regulations. These activities included making changes
in State laws where those laws prohibited the sharing of data between
State entities responsible for teacher certification and the State
educational agency. Upon reviewing these comments, we have declined to
include estimates of these potential costs. Such costs are difficult to
quantify, as it is unclear how many States would be affected, how
extensive the needed changes would be, or how much time and resources
would be required on the part of State legislatures. Also, we believe
that many States removed potential barriers in order to receive ESEA
flexibility prior to the passage of ESSA, further minimizing the
potential cost of legislative changes. To the extent that States do
experience costs associated with these actions, or other actions not
specifically required by the regulations and therefore not outlined
below (e.g., costs associated with including more than the minimum
number of participants in the consultation process described in Sec.
612.4(c)), our estimates will not account for those costs.
We have also updated our estimates using the most recently
available wage rates from the Bureau of Labor Statistics. We have also
updated our estimates of the number of teacher preparation programs and
teacher preparation entities using the most recent data submitted to
the Department in the 2015 title II data collection. While no
commenters specifically addressed these issues, we believe that these
updates will provide the most reasonable estimate of costs.
Based on revised assumptions, the Department estimates that the
total annualized cost of the regulations will be between $27.5 million
and $27.7 million (see the Accounting Statement section of this
document for further detail). This estimate is significantly lower than
the total annualized cost estimated in the proposed rule. The largest
driver of this decrease is the increased flexibility provided to States
under Sec. 612.5(a)(1)(ii), as explained below. To provide additional
context, we provide estimates in Table 4 for IHEs, States, and LEAs in
Year 1 and Year 5. These estimates are not annualized or calculated on
a net present value basis, but instead represent real dollar estimates.
[[Page 75589]]
        Table 4--Estimated Costs by Entity Type in Years 1 and 5
------------------------------------------------------------------------
                                                 Year 1          Year 5
------------------------------------------------------------------------
IHE.......................................   $4,800,050      $4,415,930
State.....................................  $24,077,040     $16,111,570
LEA.......................................   $5,859,820      $5,859,820
------------------------------------------------------------------------
    Total.................................  $34,736,910     $26,387,320
------------------------------------------------------------------------
Relative to these costs, the major benefit of the requirements,
taken as a whole, will be better publicly available information on the
effectiveness of teacher preparation programs that can be used by
prospective students when choosing programs to attend; employers in
selecting teacher preparation program graduates to recruit, train, and
hire; States in making funding decisions; and teacher preparation
programs themselves in seeking to improve.
The following is a detailed analysis of the estimated costs of
implementing the specific requirements, including the costs of
complying with paperwork-related requirements, followed by a discussion
of the anticipated benefits.\88\ The burden hours of implementing
specific paperwork-related requirements are also shown in the tables in
the Paperwork Reduction Act section of this document.
---------------------------------------------------------------------------
\88\ Unless otherwise specified, all hourly wage estimates for
particular occupation categories were taken from the May 2014
National Occupational Employment and Wage Estimates for Federal,
State, and local government published by the Department of Labor's
Bureau of Labor Statistics and available online at www.bls.gov/oes/current/999001.htm.
---------------------------------------------------------------------------
Title II Accountability System (HEA Title II Regulations)
Section 205(a) of the HEA requires that each IHE that provides a
teacher preparation program leading to State certification or licensure
report on a statutorily enumerated series of data elements for the
programs it provides. Section 205(b) of the HEA requires that each
State that receives funds under the HEA provide to the Secretary and
make widely available to the public information on the quality of
traditional and alternative route teacher preparation programs that
includes not less than the statutorily enumerated series of data
elements it provides. The State must do so in a uniform and
comprehensible manner, conforming with definitions and methods
established by the Secretary. Section 205(c) of the HEA directs the
Secretary to prescribe regulations to ensure the validity, reliability,
accuracy, and integrity of the data submitted. Section 206(b) requires
that IHEs provide assurance to the Secretary that their teacher
training programs respond to the needs of LEAs, be closely linked with
the instructional decisions novice teachers confront in the classroom,
and prepare candidates to work with diverse populations and in urban
and rural settings, as applicable. Consistent with these statutory
provisions, the Department is issuing regulations to ensure that the
data reported by IHEs and States are accurate. The following sections
provide a detailed examination of the costs associated with each of the
regulatory provisions.
Institutional Report Card Reporting Requirements
The regulations require that beginning on April 1, 2018, and
annually thereafter, each IHE that conducts a traditional teacher
preparation program or alternative route to State certification or
licensure program and enrolls students receiving title IV, HEA funds,
report to the State on the quality of its program using an IRC
prescribed by the Secretary.
Under the current IRC, IHEs typically report at the entity level,
rather than the program level, such that an IHE that administers
multiple teacher preparation programs typically gathers data on each of
those programs, aggregates the data, and reports the required
information as a single teacher preparation entity on a single report
card. By contrast, the regulations generally require that States report
on program performance at the individual program level. The Department
originally estimated that the initial burden for each IHE to adjust its
recordkeeping systems in order to report the required data separately
for each of its teacher preparation programs would be four hours per
IHE. Numerous commenters argued that this estimate was low. Several
commenters argued that initial set-up would take 8 to 12 hours, while
others argued that it would take 20 to 40 hours per IHE. While we
recognize that the amount of time it will take to initially adjust
their record-keeping systems will vary, we believe that the estimates
in excess of 20 hours are too high, given that IHEs will only be
adjusting the way in which they report data, rather than collecting new
data. However, the Department found arguments in favor of both 8 hours
and 12 hours to be compelling and reasonable. We believe that eight
hours is a reasonable estimate for how long it will take to complete
this process generally; and for institutions with greater levels of
oversight, review, or complexity, this process may take longer. Without
additional information about the specific levels of review and
oversight at individual institutions, we assume that the amount of time
it will take institutions to complete this work will be normally
distributed between 8 and 12 hours, with a national average of 10 hours
per institution. Therefore, the Department has upwardly revised its
initial estimate of four hours to ten hours. In the most recent year
for which data are available, 1,490 IHEs submitted IRCs to the
Department, for an estimated one-time cost of $384,120.\89\
---------------------------------------------------------------------------
\89\ Unless otherwise specified, for paperwork reporting
requirements, we use a wage rate of $25.78, which is based on a
weighted national average hourly wage for full-time Federal, State
and local government workers in office and administrative support
(75 percent) and managerial occupations (25 percent), as reported by
the Bureau of Labor Statistics in the National Occupational
Employment and Wage Estimates, May 2014.
---------------------------------------------------------------------------
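As a check on the one-time figure above, the arithmetic can be
reproduced from the inputs already stated in the text and the wage rate
in footnote 89; a brief sketch:

```python
# One-time cost of adjusting IHE recordkeeping systems, from the stated inputs.
ihes = 1_490        # IHEs that submitted IRCs in the most recent year of data
setup_hours = 10    # revised national-average setup estimate per IHE
wage = 25.78        # blended hourly wage rate from footnote 89

print(f"${ihes * setup_hours * wage:,.0f}")
# -> $384,122, which the text rounds to $384,120
```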
One commenter argued that institutions would have to make costly
updates and upgrades to their existing information technology (IT)
platforms in order to generate the required new reports. However, given
that institutions will not be required to generate reports on any new
data elements, but only disaggregate the data already being collected
by program, and that we include cost estimates for making the necessary
changes to their existing systems in order to generate reports in that
way, we do not believe it would be appropriate to include additional
costs associated with large IT purchases in this cost estimate.
The Department further estimated that each of the 1,490 IHEs would
need to spend 78 hours to collect the data elements required for the
IRC for its teacher preparation programs. Several commenters argued
that it would take longer than 78 hours to collect the data elements
required for the IRC each year. The Department reviewed its original
estimates in light of these comments and the new requirement for IHEs
to identify, in their IRCs, whether each program met the definition of
a teacher preparation program provided through distance education.
Pursuant to that review, the Department has increased its initial
estimate to 80 hours, for an annual cost of $3,072,980.
We originally estimated that entering the required information into
the information collection instrument would take 13.65 hours per
entity. We currently estimate that, on average, it takes one hour for
institutions to enter the data for the current IRC. The Department
believed that it would take institutions approximately as long to
complete the report for each program as it does currently for the
entire entity. As such, the regulations would result in an additional
burden of the time to complete all individual program level
[[Page 75590]]
reports minus the current entity time burden. In the NPRM, this
estimate was based on an average of 14.65 teacher preparation programs
per entity--22,312 IHE-based programs divided by 1,522 IHEs. Given that
entities are already taking approximately one hour to complete the
report, we estimated the time burden associated with this regulation at
13.65 hours (14.65 hours to complete individual program level reports
minus one hour of current entity time burden). Based on the most recent
data available, we now estimate an average of 16.40 teacher preparation
programs per entity--24,430 IHE-based programs divided by 1,490 IHEs--
for an additional burden of 15.40 hours per entity after subtracting
the one hour of current entity time burden. This results in a total
cost of $591,550 to the 1,490 IHEs. One
commenter stated that it would take a total of 140 hours to enter the
required information into the information collection instrument.
However, it appears that this estimate is based on an assumption that
it would require 10 hours of data entry for each program at an
institution. Given the number of data elements involved and our
understanding of how long institutions have historically taken to
complete data entry tasks, we believe this estimate is high, and that
our revised estimate, as described above, is appropriate.
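The annual data-collection and data-entry figures discussed above
follow from the stated inputs. The sketch below reproduces them, using
the rounded 16.40 programs-per-entity average as the text does:

```python
# Annual IRC data-collection and data-entry costs, from the stated inputs.
ihes, wage = 1_490, 25.78
programs_per_entity = round(24_430 / ihes, 2)  # 16.40 programs per IHE, as rounded above

collection = ihes * 80 * wage                  # 80 hours of data collection per IHE
extra_entry = ihes * (programs_per_entity - 1) * wage  # 15.40 extra data-entry hours per IHE

print(f"collection ${collection:,.0f}, entry ${extra_entry:,.0f}")
# -> collection $3,072,976 and entry $591,548, which the text rounds to
#    $3,072,980 and $591,550 respectively
```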
The regulations also require that each IHE provide the information
reported on the IRC to the general public by prominently and promptly
posting the IRC on the IHE's Web site, and, if applicable, on the
teacher preparation portion of the Web site. We originally estimated
that each IHE would require 30 minutes to post the IRC. One commenter
stated that this estimate was reasonable given the tasks involved,
while two commenters argued that this was an underestimate. One of
these commenters stated that posting data on the institutional Web site
often involved multiple staff, which was not captured in the
Department's initial estimate. Another commenter argued that this
estimate did not take into account time for data verification, drafting
of summary text to accompany the document, or ensuring compliance with
the Americans with Disabilities Act (ADA). Given that institutions will
simply be posting on their Web site the final IRC that was submitted to
the Department, we assume that the document has already been reviewed
by all necessary parties and that all included data have been verified
prior to being submitted to the Department. As such, the requirement to
post the IRC to the Web site should not incur any additional levels of
review or data validation. Regarding ADA compliance, we assume the
commenter was referring to the broad set of statutory requirements
regarding accessibility of communications by entities receiving Federal
funding. In general, it is our belief that the vast majority of
institutions, when developing materials for public dissemination,
already ensure that such materials meet government- and industry-
recognized standards for accessibility. To the extent that they do not
already do so, nothing in the regulations imposes additional
accessibility requirements beyond those in the Rehabilitation Act of
1973, as amended, or the ADA. As such, while there may be
accessibility-related work associated with the preparation of these
documents that is not already within the standard procedures of the
institution, such work is not a burden created by the regulations.
Thus, we believe our initial estimate of 30 minutes is appropriate, for
an annual cumulative cost of $19,210. The estimated total annual cost
to IHEs to meet the requirements concerning IRCs would be $3,991,030.
We note that several commenters, in response to the Supplemental
NPRM, argued that institutions would experience increased compliance
costs given new provisions related to teacher preparation programs
provided through distance education. However, nothing in the
Supplemental NPRM proposed changes to institutional burden under Sec.
612.3. Under the final regulations, the only increased burden on IHEs
with respect to teacher preparation programs provided through distance
education is that they identify whether each of the teacher preparation
programs they offer meets the definition in Sec. 612.2. We believe that
the additional two hours estimated for data collection above the
Department's initial estimate provides more than enough time for IHEs
to meet this requirement. We do not estimate additional compliance
costs to accrue to IHEs as a result of provisions in this regulation
related to teacher preparation programs provided through distance
education.
State Report Card Reporting Requirements
Section 205(b) of the HEA requires each State that receives funds
under the HEA to report annually to the Secretary on the quality of
teacher preparation in the State, both for traditional teacher
preparation programs and for alternative routes to State certification
or licensure programs, and to make this report available to the general
public. In the NPRM, the Department estimated that the 50 States, the
District of Columbia, the Commonwealth of Puerto Rico, Guam, American
Samoa, the United States Virgin Islands, the Commonwealth of the
Northern Mariana Islands, and the Freely Associated States, which
include the Republic of the Marshall Islands, the Federated States of
Micronesia, and the Republic of Palau, would each need 235 hours to
report the data required under the SRC.
In response to the original NPRM, two commenters argued that this
estimate was too low. Specifically, one commenter stated that, based on
the amount of time their State has historically devoted to reporting
the data in the SRC, it would take approximately 372.5 hours to
complete. We note that not all States will be able to complete the
reporting requirements in 235 hours and that some States, particularly
those with more complex systems or more institutions, will take much
longer. We also note that the State identified by the commenter in
developing the 372.5 hour estimate meets both of those conditions--it
uses a separate reporting structure to develop its SRC (one of only two
States nationwide to do so), and has an above-average number of
preparation programs. As such, it is reasonable to assume that this
State would require more than the nationwide average amount of time to
complete the process. Another commenter stated that the Department's
estimates did not take into account the amount of time and potential
staff resources needed to prepare and post the information. We note
that there are many other aspects of preparing and posting the data
that are not reflected in this estimate, such as collecting, verifying,
and validating the data. We also note that this estimate does not take
into account the time required to report on student learning outcomes,
employment outcomes, or survey results. However, all of these estimates
are included elsewhere in these cost estimates. We believe that, taken
as a whole, all of these various elements appropriately capture the
time and staff resources necessary to comply with the SRC reporting
requirement.
As proposed in the Supplemental NPRM, and as described in greater
detail below, in these final regulations, States will be required to
report on teacher preparation programs offered through distance
education that produce 25 or more certified teachers in their State.
The Department estimates that the reporting on these additional
programs, in conjunction with the reduction in the total number of
teacher preparation programs from our initial estimates in the NPRM,
will result in a net increase in the time necessary to report the data
required in the SRC from the 235 hours
[[Page 75591]]
estimated in the NPRM to 243 hours, for an annual cost of $369,610.
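This annual figure follows from applying the revised 243-hour estimate
to the 59 reporting jurisdictions listed above at the hourly rate used
throughout this analysis; a brief sketch:

```python
# Annual SRC reporting cost across the 59 reporting jurisdictions named above.
jurisdictions = 59   # 50 States, DC, PR, Guam, AS, USVI, CNMI, RMI, FSM, and Palau
hours, wage = 243, 25.78

print(f"${jurisdictions * hours * wage:,.0f}")
# -> $369,608, which the text rounds to $369,610
```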
Section 612.4(a)(2) requires that States post the SRC on the
State's Web site. Because all States already have at least one Web site
in operation, we originally estimated that posting the SRC on an
existing Web site would require no more than half an hour at a cost of
$25.78 per hour. Two commenters suggested that this estimate was too
low. One commenter argued that the Department's initial estimate did
not take into account time to create Web-ready materials or to address
technical errors. In general, the regulations do not require the SRC to
be posted in any specific format and we believe that it would take a
State minimal time to create a file that would be compliant with the
regulations by, for example, creating a PDF containing the SRC. We were
unable to determine from this comment the specific technical errors
that the commenter was concerned about, but believe that enough States
will need less than the originally estimated 30 minutes to post the SRC
so that the overall average will not be affected if a handful of States
encounter technical issues. Another commenter estimated that, using its
current Web reporting system, it would take approximately 450 hours to
initially set up the SRC Web site with a recurring 8 hours annually to
update it. However, we note that the system the commenter describes is
more labor intensive and includes more data analysis than the
regulations require. While we recognize the value in States' actively
trying to make the SRC data more accessible and useful to the public,
we cannot accurately estimate how many States will choose to do more
than the regulations require, or what costs they would encounter to do
so. We have therefore opted to estimate only the time and costs
necessary to comply with the regulations. As such, we retain our
initial estimate of 30 minutes to post the SRC. For the 50 States, the
District of Columbia, the Commonwealth of Puerto Rico, Guam, American
Samoa, the United States Virgin Islands, the Commonwealth of the
Northern Mariana Islands, and the Freely Associated States, which
include the Republic of the Marshall Islands, the Federated States of
Micronesia, and the Republic of Palau, the total annual estimated cost
of meeting this requirement would be $760.
Scope of State Reporting
The costs associated with the reporting requirements in paragraphs
(b) and (c) of Sec. 612.4 are discussed in the following paragraphs.
The requirements regarding reporting of a teacher preparation program's
indicators of academic content knowledge and teaching skills do not
apply to the insular areas of American Samoa, Guam, the Commonwealth of
the Northern Mariana Islands, the U.S. Virgin Islands, the freely
associated States of the Republic of the Marshall Islands, the
Federated States of Micronesia, and the Republic of Palau. Due to their
size and limited resources and capacity in some of these areas, we
believe that the cost to these insular areas of collecting and
reporting data on these indicators would not be warranted.
Number of Distance Education Programs
As described in the Supplemental NPRM (81 FR 18808), the Department
initially estimated that the portions of this regulation relating to
reporting on teacher preparation programs offered through distance
education would result in 812 additional reporting instances for
States. A number of commenters acknowledged the difficulty in arriving
at an accurate estimate of the number of teacher preparation programs
offered through distance education that would be subject to reporting
under the final regulation. However, those commenters also noted that,
without a clear definition from the Department on what constitutes a
teacher preparation program offered through distance education, it
would be exceptionally difficult to offer an alternative estimate. No
commenters provided alternate estimates. In these final regulations,
the Department has adopted a definition of teacher preparation program
offered through distance education. We believe that this definition is
consistent with our initial estimation methodology and have no reason
to adjust that estimate at this time.
Reporting of Information on Teacher Preparation Program Performance
Under Sec. 612.4(b)(1), a State would be required to make
meaningful differentiations in teacher preparation program performance
using at least three performance levels--low-performing teacher
preparation program, at-risk teacher preparation program, and effective
teacher preparation program--based on the indicators in Sec. 612.5,
including student learning outcomes and employment outcomes for
teachers in high-need schools. Because States would have the discretion
to determine the weighting of these indicators, the Department assumes
that States would consult with early adopter States or researchers to
determine best practices for making such determinations and whether an
underlying qualitative basis should exist for these decisions. The
Department originally estimated that State higher education authorities
responsible for making State-level classifications of teacher
preparation programs would require at least 35 hours to discuss methods
for ensuring that meaningful differentiations are made in their
classifications. This initial estimate also included time for
determining what it meant for particular indicators to be included ``in
significant part'' and what constituted ``satisfactory'' student
learning outcomes, terms that do not appear in the final regulations.
A number of commenters stated that 35 hours was an underestimate.
Of the commenters that suggested alternative estimates, those estimates
typically ranged from 60 to 70 hours (the highest estimate was 350
hours). Based on these comments, the Department believes that its
original estimate would not have provided sufficient time for multiple
staff to meet and discuss teacher preparation program quality in a
meaningful way. As such, and given that these staff will be making
decisions regarding a smaller range of issues, the Department is
revising its estimate to 70 hours per State. We believe that this
amount of time would be sufficient for staff to discuss and make
decisions on these issues in a meaningful and purposeful way. To
estimate the cost per State, we assume that the State employee or
employees would likely be in a managerial position (with national
average hourly earnings of $45.58), for a total one-time cost for the
50 States, the District of Columbia, and the Commonwealth of Puerto
Rico of $165,910.
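    As a check on this figure, the sketch below multiplies the revised
hour estimate by the managerial wage across the 52 reporting
jurisdictions; the rounding convention is our assumption.

    # Illustrative check of the $165,910 one-time cost.
    jurisdictions = 52       # 50 States + DC + Puerto Rico
    hours_each = 70          # revised estimate per State
    managerial_wage = 45.58  # national average hourly earnings
    print(round(jurisdictions * hours_each * managerial_wage, -1))  # 165,910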
Fair and Equitable Methods
Section 612.4(c)(1) requires States to consult with a
representative group of stakeholders to determine the procedures for
assessing and reporting the performance of each teacher preparation
program in the State. The regulations specify that these stakeholders
must include, at a minimum, representatives of leaders and faculty of
traditional teacher preparation programs and alternative routes to
State certification or licensure programs; students of teacher
preparation programs; LEA superintendents; local school board members;
elementary and secondary school leaders and instructional staff;
elementary and secondary school students and their parents; IHEs that
serve high proportions of low-income students, students of color, or
English learners; advocates for English learners and students with
disabilities; officials
[[Page 75592]]
of the State's standards board or other appropriate standards body; and
a representative of at least one teacher preparation program provided
through distance education. Because the final regulations do not
prescribe any particular methods or activities, we expect that States
will implement these requirements in ways that vary considerably,
depending on their population and geography and any applicable State
laws concerning public meetings.
Many commenters stated that their States would likely adopt methods
different from those outlined below. In particular, these commenters
argued that their States would include more than the minimum number of
participants we used for these estimates. In general, while States may
opt to do more than what is required by the regulations, for purposes
of estimating the cost, we have based the estimate on what the
regulations require. If States opt to include more participants or
consult with them more frequently or for longer periods of time, then
the costs incurred by States and the participants would be higher.
In order to estimate the cost of implementing these requirements,
we assume that the average State will convene at least three meetings
with at least the following representatives from required categories of
stakeholders: One administrator or faculty member from a traditional
teacher preparation program, one administrator or faculty member from
an alternative route teacher preparation program, one student from a
traditional or alternative route teacher preparation program, one
teacher or other instructional staff, one representative of a small
teacher preparation program, one LEA superintendent, one local school
board member, one student in elementary or secondary school and one of
his or her parents, one administrator or faculty member from an IHE
that serves high percentages of low-income students or students of
color, one representative of the interests of English learners, one
representative of the interests of students with disabilities, one
official from the State's standards board or other appropriate
standards body, and one administrator or faculty member from a teacher
preparation program provided through distance education. We note that a
representative of a small teacher preparation program and a
representative from a teacher preparation program provided through
distance education were not required stakeholders in the proposed
regulations, but are included in these final regulations.
To estimate the cost of participating in these meetings for the
required categories of stakeholders, we initially assumed that each
meeting would require four hours of each participant's time and used
the following national average hourly wages for full-time State
government workers employed in these professions: Postsecondary
education administrators, $50.57 (4 stakeholders); elementary or
secondary education administrators, $50.97 (1 stakeholder);
postsecondary teachers, $45.78 (1 stakeholder); primary, secondary, and
special education school teachers, $41.66 (1 stakeholder). For the
official from the State's standards board or other appropriate
standards body, we used the national average hourly earnings of $59.32
for chief executives employed by State governments. For the
representatives of the interests of students who are English learners
and students with disabilities, we used the national average hourly
earnings of $62.64 for lawyers in educational services (including
private, State, and local government schools). For the opportunity cost
to the representatives of elementary and secondary school students, we
used the Federal minimum wage of $7.25 per hour and the average hourly
wage for all workers of $22.71. These wage rates could represent either
the involvement of a parent and a student at these meetings, or a
single representative from an organization representing their interests
who has an above-average wage rate (i.e., $29.96). We used the average
hourly wage rate for all workers ($22.71) for the school board
official. For the student from a traditional or alternative route
teacher preparation program, we used the 25th percentile of hourly wage
for all workers of $11.04. We also assumed that at least two State
employees in managerial positions (with national average hourly
earnings of $45.58) would attend each meeting, with one budget or
policy analyst to assist them (with national average hourly earnings of
$33.98).\90\
---------------------------------------------------------------------------
\90\ Unless otherwise noted, all wage rates in this section are
based on average hourly earnings as reported in the May 2014
National Occupational Employment and Wage Estimates from the Bureau
of Labor Statistics, available online at www.bls.gov/oes/current/oessrci.htm. Where hourly wages were unavailable, we estimated
hourly wages using average annual wages from this source and the
average annual hours worked from the National Compensation Survey,
2010.
---------------------------------------------------------------------------
A number of commenters stated that this consultation process would
take longer than the 12 hours in our initial estimate and that our
estimates did not include time for preparation for the meetings or for
participant travel. Alternate estimates from commenters ranged from 56
hours to 3,900 hours. Based on the comments we received, the Department
believes that both States and participants may opt to meet for longer
periods of time at each meeting or more frequently. However, we believe
that many of the estimates from commenters were overestimates for an
annual process. For example, the 3,900 hour estimate would require a
commitment on the part of participants totaling 75 hours per week for
52 weeks per year. We believe this is highly unrealistic. However, we
do recognize that States and interested parties may wish to spend a
greater amount of time in the first year to discuss and establish the
initial framework than we initially estimated. As such, we are
increasing our initial estimate of 12 hours in the first year to 60
hours, which we believe will allow adequate time for discussion of
these important issues. We therefore
estimate the cumulative cost to the 50 States, the District of
Columbia, and Puerto Rico to be $2,385,900.
We also recognize that, although the Department initially assumed
this consultative process would occur only once every five years,
States may wish to consult with these stakeholders on a continuing
basis. We believe that this engagement would take place over email, by
conference call, or at an on-site meeting. We are therefore adding an
estimated 20 hours per year of stakeholder consultation for the
intervening years, and we estimate that these additional consultations
will cumulatively cost the 50 States, the District of Columbia, and
Puerto Rico $690,110.
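    Because the participant mix behind these cumulative figures is not
itemized as a single formula, the sketch below simply backs out the
blended hourly cost of the full meeting group that each published total
implies; the interpretation of the lower intervening-year rate is our
inference, not a statement in the rule.

    # Implied blended hourly rates from the published consultation totals.
    jurisdictions = 52                                # 50 States + DC + Puerto Rico
    first_year = 2_385_900 / (jurisdictions * 60)     # ~764.71 per meeting-hour
    intervening = 690_110 / (jurisdictions * 20)      # ~663.57 per meeting-hour
    print(round(first_year, 2), round(intervening, 2))
    # The lower intervening-year rate suggests a smaller assumed group for
    # the ongoing email, call, or on-site consultations.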
States would also be required to report on the State-level rewards
or consequences associated with the designated performance levels and
on the opportunities they provide for teacher preparation programs to
challenge the accuracy of their performance data and classification of
the program. Costs associated with implementing these requirements are
estimated in the discussion of annual costs associated with the SRC.
Procedures for Assessing and Reporting Performance
Under final Sec. 612.4(b)(3), a State would be required to ensure
that teacher preparation programs in the State are included on the SRC,
but with some flexibility, in recognition that reporting on teacher
preparation programs that consist of a small number of prospective
teachers could present privacy and data validity concerns. See Sec.
612.4(b)(5). The Department originally
[[Page 75593]]
estimated that each State would need up to 14 hours to review and
analyze applicable State and Federal privacy laws and regulations and
existing research on the practices of other States that set program
size thresholds in order to determine the most appropriate aggregation
level and procedures for its own teacher preparation program reporting.
Most of the comments the Department received on this estimate focused
on the comparability of data across years and stated that this process
would have to be conducted annually in order to reassess appropriate
cut points. The Department agrees that comparability could be an issue
in several instances, but is equally concerned with variability in the
data induced solely by the small size of programs. As such, we believe
providing States the flexibility to aggregate data across small
programs is key to ensuring meaningful data for the public. Upon
further review, the Department also recognized an error in the NPRM, in
which we initially stated that this review would be a one-time cost.
Contrary to that statement, our overall estimates in the NPRM included
this cost on an annual basis. This review will likely take place
annually to determine whether there are any necessary changes in law,
regulation, or practice that need to be taken into consideration. As
such, we are revising our statement to clarify that these costs will be
reflected annually. Because the error was limited to the description of
the burden estimate, this change does not substantively affect the
underlying calculations.
Two commenters stated that the Department's initial estimate seemed
low given the amount of work involved and three other commenters stated
that the Department's initial estimates were adequate. Another
commenter stated that this process would likely take longer in his
State. No commenters offered alternative estimates. For the vast
majority of States, we continue to believe that 14 hours is a
sufficient amount of time for staff to review and analyze the
applicable laws and regulations. However, given the potential complexity
of these issues, as raised by commenters, we recognize that there may
be additional staff involved and additional meetings required for
purposes of consultation. In order to account for these additional
burdens where they may exist, the Department is increasing its initial
estimate to 20 hours. We believe that this will provide sufficient time
for review, analysis, and discussion of these important issues. This
provides an estimated cost to the 50 States, the District of Columbia,
and the Commonwealth of Puerto Rico of $51,750, based on the average
national hourly earnings for a lawyer employed full-time by a State
government ($49.76).
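    The arithmetic behind the $51,750 figure is reproduced below; the
rounding convention is our assumption.

    # Illustrative check of the $51,750 estimate for legal review.
    jurisdictions = 52   # 50 States + DC + Puerto Rico
    hours_each = 20      # revised from the original 14-hour estimate
    lawyer_wage = 49.76  # State-government lawyer, national average
    print(round(jurisdictions * hours_each * lawyer_wage, -1))  # 51,750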
Required Elements of the State Report Card
For purposes of reporting under Sec. 612.4, each State will need
to establish indicators that would be used to assess the academic
content knowledge and teaching skills of the graduates of teacher
preparation programs within its jurisdiction. At a minimum, States must
base their assessments on student learning outcomes, employment
outcomes, survey outcomes, and whether or not the program is accredited
by a specialized accrediting agency recognized by the Secretary for
accreditation of professional teacher education programs, or provides
teacher candidates with content and pedagogical knowledge and quality
clinical preparation and has rigorous teacher candidate exit
qualifications.
States are required to report these outcomes for teacher
preparation programs within their jurisdiction, with the only
exceptions being for small programs for which aggregation under Sec.
612.4(b)(3)(ii) would not yield the program size threshold (or for a
State that chooses a lower program size threshold, would not yield the
lower program size threshold) for that program, and for any program
where reporting data would lead to conflicts with Federal or State
privacy and confidentiality laws and regulations.
Student Learning Outcomes
In Sec. 612.5, the Department requires that States assess the
performance of teacher preparation programs based in part on data on
the aggregate learning outcomes of students taught by novice teachers
prepared by those programs. States have the option of calculating these
outcomes using student growth, a teacher evaluation measure that
includes student growth, another State-determined measure relevant to
calculating student learning outcomes, or a combination of the three.
Regardless of how they determine student learning outcomes, States are
required to link these data to novice teachers and their teacher
preparation programs. In the NPRM, we used available sources of
information to assess the extent to which States appeared to already
have the capacity to measure student learning outcomes and estimated
the additional costs States that did not currently have the capacity
might incur in order to comply with the regulations. However, in these
final regulations, the Department has expanded the definition of
``teacher evaluation measure'' and provided States with the discretion
to use a State-determined measure relevant to calculating student
learning outcomes, which they did not have in the proposed regulations.
In our initial estimates, the Department assumed that only eight States
would experience costs associated with measuring student learning
outcomes. Of those, the Department noted that two already had annual
teacher evaluations that included at least some objective evidence of
student learning. For these two States, we estimated it would cost
approximately $596,720 to comply with the proposed regulations. For the
six remaining States, we estimated a cost of $16,079,390. We note that
several commenters raised concerns about the specifics of some of our
assumptions in making these estimates, particularly the amount of time
we assumed it would take to complete the tasks we described. We outline
and respond to those comments below. However, given the revised
definition of ``teacher evaluation measure,'' the additional option for
States to use a State-defined measure other than student growth or a
teacher evaluation measure, and the measures that States are already
planning to implement consistent with ESSA, we believe all States
either already have in place a system for measuring student learning
outcomes or are already planning to have one in place absent these
regulations. As such, we no longer believe that States will incur costs
associated with measuring student learning outcomes solely as a result
of these regulations.
Tested Grades and Subjects
In the NPRM, we assumed that the States would not need to incur any
additional costs to measure student growth for tested grades and
subjects and would only need to link these outcomes to teacher
preparation programs by first linking the students' teachers to the
teacher preparation program from which they graduated. The costs of
linking student learning outcomes to teacher preparation programs are
discussed below. Several commenters stated that assuming no costs for
teachers in tested grades and subjects was unrealistic because this
estimate was based on assurances provided by States, rather than on an
assessment of actual State practice. We recognize the commenters'
point. States that have made assurances to provide these student growth
data may not currently be providing this information
[[Page 75594]]
to teachers and therefore will still incur a cost to do so. However,
such cost and burden is not occurring as a result of the regulations,
but as a result of prior assurances made by the States under other
programs. In general, we do not include costs herein that arise from
other programs or requirements, but only those that are newly created
by the final rule. As such, we continue to estimate no new costs in
this area for States to comply with this final rule.
Non-Tested Grades and Subjects
In the NPRM, we assumed that the District of Columbia, Puerto Rico,
and the 42 States that had their requests for flexibility regarding
specific requirements of the ESEA approved would not incur additional
costs to comply with the proposed regulations. This was, in
part, because the teacher evaluation measures that they agreed to
implement as part of the flexibility would meet the definition of a
``teacher evaluation measure'' under the proposed regulations. Some
commenters expressed doubt that there would be no additional costs for
these States, and others cited costs associated with developing new
assessments for all currently non-tested grades and subjects (totaling
as many as 57 new assessments). We recognize that States likely
incurred costs to implement statewide comprehensive teacher
evaluations. However, those additional costs did not accrue to States
as a result of the regulations, but instead as part of their efforts
under flexibility agreements. Therefore, we do not include an analysis
of costs for States that received ESEA flexibility herein.
Additionally, as noted previously, the regulations do not require
States to develop new assessments for all currently non-tested grades
and subjects. Therefore, we do not include costs for such efforts in
these estimates.
To estimate, in the NPRM, the cost of measuring student growth for
teachers in non-tested grades and subjects in the eight States that
were not approved for ESEA flexibility, we divided the States into two
groups--those who had annual teacher evaluations with at least some
objective evidence of student learning outcomes and those that did not.
For those States that did not have an annual teacher evaluation in
place, we estimated that it would take approximately 6.85 hours of a
teacher's time and 5.05 hours of an evaluator's time to measure student
growth using student learning objectives. Two commenters stated that
these were underestimates, specifically noting that certain student
outcomes (e.g., in the arts) are process-oriented and would likely take
longer. We recognize that it may be more time-intensive to develop
student learning objectives to measure student growth in some subject
areas. However, the Rhode Island model we used as a basis for these
estimates was designed to be used across subject areas, including the
arts. Further, we believe that both teachers and evaluators would have
sufficient expertise in their content areas that they would be able to
complete the activities outlined in the Rhode Island guidance in times
approximating our initial estimates. As such, we continue to believe
those estimates were appropriate for the average teacher.
In fact, we believe that this estimate likely overstated the cost
to States that already require annual evaluations of all novice
teachers because many of these evaluations would already encompass many
of the activities in the framework. The National Council on Teacher
Quality has reported that two of the eight States that did not receive
ESEA flexibility required annual evaluations of all novice teachers and
that those evaluations included at least some objective evidence of
student learning. For the 4,629 novice teachers in these two States, we
initially estimated that teachers and evaluators would need to spend
only a combined three hours to develop and measure against student
learning objectives.
Several commenters stated that their States did not currently have
these data, and others argued that this estimate did not account for
the costs of verifying the data. We understand that States may not
currently have structures in place to measure student learning outcomes
as defined in the proposed rules. However, we believe that the
revisions in the final rule provide sufficient flexibility to States to
ensure that they can meet the requirements of this section without
incurring additional measurement costs as a result of compliance with
this regulation. We have included costs for challenging data elsewhere
in these estimates.
Linking Student Learning Outcomes to Teacher Preparation Programs
Whether using student scores on State assessments, teacher
evaluation ratings, or other measures of student growth, under the
regulations States must link the student learning outcomes data back to
the teacher, and then back to that teacher's preparation program. The
costs to States to comply with this requirement will depend, in part,
on the data and linkages in their statewide longitudinal data system.
Through the Statewide Longitudinal Data Systems (SLDS) program, the
Department has awarded $575.7 million in grants to support data systems
that, among other things, allow States to link student achievement data
to individual teachers and to postsecondary education systems. Forty-
seven States, the District of Columbia, and the Commonwealth of Puerto
Rico have already received at least one grant under this program to
support the development of these data systems, so we expect that the
cost to these States of linking student learning outcomes to teacher
preparation programs would be lower than for the remaining States.
According to information from the SLDS program in June 2015, nine
States currently link K-12 teacher data, including data on both teacher/
administrator evaluations and teacher preparation programs, to K-12
student data. An additional 11 States and the District of Columbia are
currently in the process of establishing this linkage, and ten States
and the Commonwealth of Puerto Rico have plans to add this linkage to
their systems during their SLDS grant. Based on this information, it
appears that 30 States, the Commonwealth of Puerto Rico, and the
District of Columbia either already have the ability to aggregate data
on student achievement of students taught by program graduates and link
those data back to teacher preparation programs or have committed to
doing so; therefore, we do not estimate any additional costs for these
States to comply with this aspect of the regulations. We note that,
based on information from other Department programs and initiatives, a
larger number of States currently make these linkages and would
therefore incur no additional costs associated with the regulations.
However, for purposes of this estimate, we use data from the SLDS
program. As a result, these estimates are likely overestimates of the
actual costs borne by States to make these data connections.
During the development of the regulations, the Department consulted
with experts familiar with the development of student growth models and
longitudinal data systems. These experts indicated that the cost of
calculating growth for students taught by individual teachers and
aggregating these data according to the teacher preparation program
that these teachers completed would vary among States. For example, in
States in which data on teacher preparation programs are housed within
separate, or even multiple, postsecondary data systems that are not
currently linked to data systems for elementary through secondary
education students and teachers, these experts suggested that a
[[Page 75595]]
reasonable estimate of the cost of additional staff or vendor time to
link and analyze the data would be $250,000 per State. For States that
already have data systems that include data from elementary to
postsecondary education levels, we estimate that the cost of additional
staff or vendor time to analyze the data would be $100,000. Since we do
not know enough about the data systems in the remaining 20 States to
determine whether they are likely to incur the higher or lower estimate
of costs, we averaged the higher and lower figures. Accordingly, we
estimate that the remaining 20 States will need to incur an average
cost of $175,000 to develop models to calculate growth for students
taught by individual teachers and then link these data to teacher
preparation programs for a total cost of $3,500,000.
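    The averaging step described above works out as follows; this simply
restates the figures already given as arithmetic.

    # Averaged linkage cost for the 20 States with unknown data systems.
    high, low = 250_000, 100_000  # unlinked vs. already-linked systems
    average = (high + low) / 2    # 175,000 per State
    print(20 * average)           # 3,500,000 total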
Several commenters stated that their States did not currently have
the ability to make these linkages and their data systems would have to
be updated and that, even in States that already have these linkages,
there may be required updates to the system. We recognize that some
States for which we assume no costs do not yet have the required
functionality in their State data systems to make the links required
under the regulations. However, as noted elsewhere, we reasonably rely
on the assurances made by States that they are already planning on
establishing these links, and are not doing so as a result of the
regulations. As a result, we do not estimate costs for those States
here. In regards to States that already have systems with these links
in place, we are not aware of any updates that will need to be made to
any of these systems solely in order to comply with the regulations,
and therefore estimate no additional costs to these States.
Employment Outcomes
The final regulations require States to report employment outcomes,
including data on both the teacher placement rate and the teacher
retention rate, and on the effectiveness of a teacher preparation
program in preparing, placing, and supporting novice teachers
consistent with local educational needs. We have limited information on
the extent to which States currently collect and maintain data on
placement and retention for individual teachers.
Under Sec. 612.4(b), States are required to report annually, for
each teacher preparation program, on the teacher placement rate for
traditional teacher preparation programs, the teacher placement rate
calculated for high-need schools for all teacher preparation programs
(whether traditional or alternative route), the teacher retention rate
for all teacher preparation programs (whether traditional or
alternative route), and the teacher retention rate calculated for high-
need schools for all teacher preparation programs (whether traditional
or alternative route). States are not required to report on the teacher
placement rate for alternative route programs. The Department has
defined the ``teacher placement rate'' as the percentage of recent
graduates who have become novice teachers (regardless of retention) for
the grade level, span, and subject area in which they were prepared.
``High-need schools'' is defined in Sec. 612.2(d) by using the
definition of ``high-need school'' in section 200(11) of the HEA. The
regulations will give States discretion to exclude recent graduates
from this measure if they are teaching in a private school, teaching in
another State, teaching in a position that does not require State
certification, enrolled in graduate school, or engaged in military
service.
Section 612.5(a)(2) and the definition of ``teacher retention
rate'' in Sec. 612.2 require a State to provide data on each teacher
preparation program's teacher retention rate, by calculating, for each
of the last three cohorts of novice teachers preceding the current
title II reporting year, the percentage of those teachers who have been
continuously employed as teachers of record in each year between their
first year as a novice teacher and the current reporting year. For the
purposes of this definition, a cohort of novice teachers is determined
by the first year in which they were identified as a novice teacher by
the State. ``High-need schools'' is defined in Sec. 612.2 by using the
definition of ``high-need school'' from section 200(11) of the HEA. The
regulations give States discretion to exclude novice teachers from this
measure if they are teaching in a private school or another State,
enrolled in graduate school, or serving in the military. States also
have the discretion to treat this rate differently for alternative
route and traditional route providers.
In its comments on the Department's Notice of Intention to Develop
Proposed Regulations Regarding Teacher Preparation Reporting
Requirements, the Data Quality Campaign reported that 50 States, the
District of Columbia, and the Commonwealth of Puerto Rico all collect
some certification information on individual teachers and that a subset
of States collect the following specific information on teacher
preparation or qualifications that is relevant to the requirements:
Type of teacher preparation program (42 States), location of teacher
preparation program (47 States), and year of certification (51
States).\91\
---------------------------------------------------------------------------
\91\ ED's Notice of Intention to Develop Proposed Regulations
Regarding Teacher Preparation Reporting Requirements: DQC Comments
to Share Knowledge on States' Data Capacity. Retrieved from
www.dataqualitycampaign.org/files/HEA%20Neg%20Regs%20formatted.pdf.
---------------------------------------------------------------------------
Data from the SLDS program indicate that 24 States can currently
link data on individual teachers with their teacher preparation
programs, including information on their current certification status
and placement. In addition, seven States are currently in the process
of making these links, and 10 States plan to add this capacity to their
data systems, but have not yet established the link and process for
doing so. Because these States would also maintain information on the
certification status and year of certification of individual teachers,
we assume they would already be able to calculate the teacher placement
and retention rates for novice teachers but may incur additional costs
to identify recent graduates who are not employed in a full-time
teaching position within the State. It should be possible to do this at
minimal cost by matching rosters of recent graduates from teacher
preparation programs against teachers employed in full-time teaching
positions who received their initial certification within the last
three years. Additionally, because States already maintain the
necessary information in State databases to identify schools as ``high-
need,'' we do not believe there would be any appreciable additional
cost associated with adding ``high-need'' flags to any accounting of
teacher retention or placement rates in the State.
Several commenters stated that it was unrealistic to assume that
any States currently had the information required under the regulations
as the requirements were new. While we recognize that States may not
have conducted these specific data analyses in the past, this does not
mean that their systems are incapable of doing so. In
fact, as outlined above, information available to the Department
indicates that at least 24 States already have this capacity and that
an additional 17 are in the process of developing it or plan to do so.
Therefore, regardless of whether the specific data analysis itself is
new, these States will not incur additional costs associated with the
final regulations to establish that functionality.
The remaining 11 States may need to collect additional information
from teacher preparation programs and LEAs because they do not appear
to be able to link information on the employment,
[[Page 75596]]
certification, and teacher preparation program for individual teachers.
If it is not possible to establish this link using existing data
systems, States may need to obtain some or all of this information from
teacher preparation programs or from the teachers themselves. The
American Association of Colleges for Teacher Education reported that,
in 2012, 495 of 717 institutions (or about 70 percent) had begun
tracking their graduates into job placements. Although only half of
those institutions have successfully obtained placement information,
these efforts suggest that States may be able to take advantage of work
already underway.\92\
---------------------------------------------------------------------------
\92\ American Association of Colleges for Teacher Education
(2013), The Changing Teacher Preparation Profession: A report from
AACTE's Professional Education Data System (PEDS).
---------------------------------------------------------------------------
A number of commenters stated that IHEs would experience
substantial burden in obtaining this information from all graduates. We
agree that teacher preparation programs individually tracking and
contacting their recent graduates would be highly burdensome and
inefficient. However, in the regulations, the reporting burden falls on
States, rather than institutions. As such, we believe it would be
inappropriate to assume data collection costs and reporting burdens
accruing to institutions.
For each of these 11 States, the Department originally estimated
that 150 hours may be required at the State level to collect
information about novice teachers employed in full-time teaching
positions (including designing the data collection instruments,
disseminating them, providing training or other technical assistance on
completing the instruments, collecting the data, and checking their
accuracy). Several commenters stated that the Department's estimates
were too low. One commenter estimated that this process would take 350
hours. Another commenter indicated that his State takes approximately
100 hours to collect data on first year teachers and that data
collection on more cohorts would take more time. Generally, the
Department believes that this sort of data collection is subject to
economies of scale--that for each additional cohort on which data are
collected in a given year, the average time and cost associated with
each cohort will decrease. This belief arises from the fact that many
of the costs associated with such a collection, such as designing the
data request instruments and disseminating them, are largely fixed. As
such, we do not think that collecting data on three cohorts will take
three times as long as collecting data on one. However, we do recognize
that there could be wide variation across States depending on the
complexity of their systems and the way in which they opt to collect
these data. For example, a State that sends data requests to individual
LEAs to query their own data systems will experience a much higher
overall burden with this provision than one that sends data requests to
a handful of analysts at the State level who perform a small number of
queries on State databases. Because of this potentially wide variation
in burden across States, it is difficult to accurately estimate an
average. However, based on public comment, we recognize that our
initial estimate may have been too low. However, we also believe that
States will make every effort to reduce the burdens associated with
this provision. As such, we are increasing our estimate to 200 hours,
with an expectation that this may vary widely across States. Using this
estimate, we calculate a total annual cost to the 11 States of
$112,130, based on the national average hourly wage for education
administrators of $50.97.
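    The revised estimate translates into the published total as follows;
the rounding convention is our assumption.

    # Illustrative check of the $112,130 annual collection cost.
    states = 11         # States without existing linkage capacity
    hours_each = 200    # revised from the original 150-hour estimate
    admin_wage = 50.97  # education administrators, national average
    print(round(states * hours_each * admin_wage, -1))  # 112,130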
Teacher Preparation Program Characteristics
Under Sec. 612.5(a)(4) States are required to report whether each
teacher preparation program in the State either: (a) Is accredited by a
specialized accrediting agency recognized by the Secretary for
accreditation of professional teacher education programs, or (b)
provides teacher candidates with content and pedagogical knowledge and
quality clinical preparation, and has rigorous teacher candidate exit
standards. As discussed in greater detail in the Paperwork Reduction
Act section of this document, we estimate that the total cost to the 50
States, the District of Columbia, and the Commonwealth of Puerto Rico
of providing these assurances for the estimated 15,335 teacher
preparation programs nationwide that States have already determined to
be accredited, based on previous title II reporting submissions, would
be $790,670, assuming that 2 hours were required per
teacher preparation program and using an estimated hourly wage of
$25.78. Several commenters argued that these estimates did not
accurately reflect the costs associated with seeking specialized
accreditation. We agree with this statement. However, the regulations
do not require programs to seek specialized accreditation. Thus, there
would be no additional costs associated with this requirement for
programs that are already seeking or have obtained specialized
accreditation. If teacher preparation programs that do not currently
have specialized accreditation decide to seek it, they would not be
doing so because of a requirement in these regulations, and therefore,
it would be inappropriate to include those costs here.
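    The $790,670 figure is the product of the stated inputs:

    # Illustrative check of the accreditation-assurance reporting cost.
    programs = 15_335  # programs already determined to be accredited
    hours_each = 2
    wage = 25.78
    print(round(programs * hours_each * wage, -1))  # 790,670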
Survey Outcomes
The Department requires States to report--disaggregated for each
teacher preparation program--qualitative and quantitative data from
surveys of novice teachers and their employers in order to capture
their perceptions of whether novice teachers who were prepared at a
teacher preparation program in that State possess the skills needed to
succeed in the classroom. The design and implementation of these
surveys would be determined by the State, but we provide the following
estimates of costs associated with possible options for meeting this
requirement.
Some States and IHEs currently survey graduates or recent graduates
of teacher preparation programs. According to experts consulted by the
Department, depending on the number of questions and the size of the
sample, some of these surveys have been administered quite
inexpensively. Oregon conducted a survey of a stratified random sample
of approximately 50 percent of its teacher preparation program
graduates and estimated that it cost $5,000 to develop and administer
the survey and $5,000 to analyze and report the data. Since these data
will be used to assess and publicly report on the quality of each
teacher preparation program, we expect that the cost of implementing
these final regulations is likely to be higher, because States may
need to survey a larger sample of teachers and their employers in order
to capture information on all teacher preparation programs.
Another potential factor in the cost of the teacher and employer
surveys would be the number and type of questions. We have consulted
with researchers experienced in the collection of survey data, and they
have indicated that it is important to balance the burden on the
respondent with the need to collect adequate information. In addition
to asking teachers and their employers whether graduates of particular
teacher preparation programs are adequately prepared before entering
the classroom, States may also wish to ask about course-taking and
student teaching experiences, as well as to collect demographic
information on the respondent, including information on the school
environment in which the
[[Page 75597]]
teacher is currently employed. Because the researchers we consulted
stressed that teachers and their employers are unlikely to respond to a
survey that requires more than 30 minutes to complete, we assume that
the surveys would not exceed this length.
Based on our consultation with experts and previous experience
conducting surveys of teachers through evaluations of Department
programs or policies, we originally estimated that it would cost the
average State approximately $25,000 to develop the survey instruments,
including instructions for the survey recipients. However, a number of
commenters argued that these development costs were far too low.
Alternate estimates provided by commenters ranged from $50,000 per
State to $200,000, with the majority of commenters offering a $50,000
estimate. As such, the Department has revised its original estimate to
$50,000. This provides a total cost to the 50 States, the District of
Columbia, and the Commonwealth of Puerto Rico of $2,600,000. However,
we recognize that the cost would be lower for States that identify an
existing instrument that could be adapted or used for this purpose,
potentially including survey instruments previously developed by other
States.\93\ If States surveyed all individuals who completed teacher
preparation programs in the previous year, we estimate that they would
survey 180,744 teachers, based on the reported number of individuals
completing teacher preparation programs, both traditional and
alternative route programs, during the 2013-2014 academic year.
---------------------------------------------------------------------------
\93\ The experts with whom we consulted did not provide
estimates of the number of hours involved in the development of this
type of survey. For the estimated burden hours for the Paperwork
Reduction Act section, this figure represents 1,179 hours at an
average hourly wage rate of $42.40, based on the hourly wage for
faculty at a public IHE and statisticians employed by State
governments.
---------------------------------------------------------------------------
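    The revised development estimate scales to the published total as
follows; the footnoted burden-hour figure provides a rough consistency
check on the per-State amount.

    # Illustrative check of the $2,600,000 survey-development cost.
    jurisdictions = 52              # 50 States + DC + Puerto Rico
    development_per_state = 50_000  # revised from the original $25,000
    print(jurisdictions * development_per_state)  # 2,600,000
    # Consistency check against footnote 93: 1,179 hours at $42.40/hour
    print(round(1_179 * 42.40))                   # ~49,990 per State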
To estimate the cost of administering these surveys, we consulted
researchers with experience conducting a survey of all recent graduates
of teacher preparation programs in New York City.\94\ In order to meet
the target of a 70 percent response rate for that survey, the
researchers estimated that their cost per respondent was $100, which
included an incentive for respondents worth $25. We believe that it is
unlikely that States will provide cash incentives for respondents to
the survey, thus providing an estimate of $75 per respondent. However,
since the time of data collection in that survey, there have been
dramatic advances in the availability and usefulness of online survey
software with a corresponding decrease in cost. As such, we believe
that the $75 per respondent estimate may actually provide an extreme
upper bound and may dramatically over-estimate the costs associated
with administering any such survey. For example, several prominent
online survey companies offer survey hosting services for as little as
$300 per year for unlimited questions and unlimited respondents. In the
NPRM, using that total cost, assuming surveys administered and hosted
by the State, and using the number of program graduates in 2013
(203,701), we estimated that the cost per respondent would range from
$0.02 to $21.43, with an average cost per State of $0.97. We recognize
that this estimate would represent an extreme lower bound and many
States are unlikely to see costs per respondent that low until the
survey is fully integrated into existing systems. For example, States
may be able to provide teachers with a mechanism, such as an online
portal, to both verify their class rosters and complete the survey.
Because teachers would be motivated to ensure that they were not
evaluated based on the performance of students they did not teach,
requiring novice teachers to complete the survey in order to access
their class rosters would increase the response rate for the survey and
allow novice teachers to select their teacher preparation program from
a pull-down menu, reducing the amount of time required to link the
survey results to particular programs. States could also have teacher
preparation programs disseminate the novice teacher survey with other
information for teacher preparation program alumni or have LEAs
disseminate the novice teacher survey during induction or professional
development activities. We believe that, as States incorporate these
surveys into other structures, data collection costs will dramatically
decline towards the lower bounds noted above.
---------------------------------------------------------------------------
\94\ These cost estimates were based primarily on our
consultation with a researcher involved in the development,
implementation, and analysis of surveys of teacher preparation
program graduates and graduates of alternative certification
programs in New York City in 2004 as part of the Teacher Pathways
Project. These survey instruments are available online at:
www.teacherpolicyresearch.org/TeacherPathwaysProject/Surveys/tabid.
---------------------------------------------------------------------------
The California State School Climate Survey (CSCS) is one portion of
the larger California School Climate, Health, & Learning Survey,
designed to survey teachers and staff to address questions of school
climate. While the CSCS is subsidized by the State of California, it is
also offered to school districts outside of the State for a fee,
ranging from $500 to $1,500 per district, depending on its enrollment
size. Applying this cost structure to all school districts nationwide
with reported enrollment (as outlined in the Department's Common Core of Data),
we estimated in the NPRM that costs would range from a low of $0.05 per
FTE teacher to $500 per FTE teacher with an average of $21.29 per FTE.
However, these costs are inflated by single-school, single-teacher
districts, which are largely either charter schools or small, rural
school districts unlikely to administer separate surveys. When removing
single-school, single-teacher districts, the average cost per
respondent decreased to $12.27.
Given the cost savings associated with online administration of
surveys and the likelihood that States will fold these surveys into
existing structures, we believe that many of these costs are likely
over-estimates of the actual costs that States will bear in
administering these surveys. However, for purposes of estimating costs
in this context, we use a rate of $30.33 per respondent, which
represents a cost per respondent at the 85th percentile of the CSCS
administration and well above the maximum administration cost for
popular consumer survey software. One commenter stated that the
Department's initial estimate was appropriate but also suggested that,
to reduce costs further, a survey could be administered less frequently
than annually or that only a subset of novice teachers could be surveyed. One
commenter argued that this estimate was too low and provided an
alternate estimate of aggregate costs for their State of $300,000 per
year. We note, however, that this commenter's alternate estimate was
actually a lower cost per respondent than the Department's initial
estimate--approximately $25 per respondent compared to $30.33. Another
commenter argued that administration of the survey would cost $100 per
respondent. Some commenters also argued that administering the survey
would require additional staff. Given the information discussed above
and that public comment was divided on whether our estimate was too
high, too low, or appropriate, we do not believe there is adequate
reason to change our initial estimate of $30.33 per respondent.
Undoubtedly, some States may bear the administration costs by hiring
additional staff while others will contract with an outside entity for
the administration of the survey. In either case, we believe our
original estimates to be reasonable. Using that estimate, we estimate
that, if States surveyed a combined sample of 180,744 teachers and an
equivalent number of
[[Page 75598]]
employers,\95\ with a response rate of 70 percent, the cumulative cost
to the 50 States, the District of Columbia, and the Commonwealth of
Puerto Rico of administering the survey would be $7,674,760.
---------------------------------------------------------------------------
\95\ We note that, to the extent that multiple novice teachers
are employed in the same school, there would be fewer employers
surveyed than the estimates outlined above. However, for purposes of
this estimate, we have assumed an equivalent number of employers.
This assumption will result in an overestimate of actual costs.
---------------------------------------------------------------------------
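    The $7,674,760 administration estimate follows from the stated
inputs; rounding the respondent count to a whole number before applying
the per-respondent cost is our assumption.

    # Illustrative check of the survey-administration cost.
    teachers = 180_744
    employers = 180_744          # assumed equal to teachers (see footnote 95)
    response_rate = 0.70
    cost_per_respondent = 30.33  # 85th percentile of CSCS administration costs
    respondents = round((teachers + employers) * response_rate)  # 253,042
    print(round(respondents * cost_per_respondent, -1))          # 7,674,760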
If States surveyed all teacher preparation program graduates and
their employers, assuming that both the teacher and employer surveys
would take no more than 30 minutes to complete, that the employers are
likely to be principals or district administrators, and that the
response rate would be 70 percent of teachers and employers surveyed, the total estimated
burden for 126,521 teachers and their 126,521 employers of completing
the surveys would be $2,635,430 and $3,224,390 respectively, based on
the national average hourly wage of $41.66 for elementary and secondary
public school teachers and $50.97 for elementary and secondary school
level administrators. These costs would vary depending on the extent to
which a State determines that it can measure these outcomes based on a
sample of novice teachers and their employers. This may depend on the
distribution of novice teachers prepared by teacher preparation
programs throughout the LEAs and schools within each State and also on
whether or not some of this information is available from existing
sources such as surveys of recent graduates conducted by teacher
preparation programs as part of their accreditation process.
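    The respondent burden figures follow from the stated wages and the
30-minute survey length; note that 126,521 is 70 percent of the 180,744
completers.

    # Illustrative check of respondent opportunity costs.
    respondents = round(180_744 * 0.70)  # 126,521 teachers (and employers)
    survey_hours = 0.5                   # 30-minute survey
    print(round(respondents * survey_hours * 41.66, -1))  # 2,635,430 (teachers)
    print(round(respondents * survey_hours * 50.97, -1))  # 3,224,390 (employers)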
One commenter stated that principals would be unlikely to complete
these surveys unless paid to do so. We recognize that some
administrators may see these surveys as a burden and may be less
willing to complete these surveys. However, we believe that States will
likely take this factor into consideration when designing and
administering these surveys by either reducing the amount of time
necessary to complete the surveys, providing a financial incentive to
complete them, or incorporating the surveys into other, pre-existing
instruments that already require administrator input. Some States may
also simply make completion a mandatory part of administrators' duties.
Annual Reporting Requirements Related to State Report Card
As discussed in greater detail in the Paperwork Reduction Act
section of this document, Sec. 612.4 includes several requirements for
which States must annually report on the SRC. Using an estimated hourly
wage of $25.78, we estimate that the total cost for the 50 States, the
District of Columbia, and the Commonwealth of Puerto Rico to report the
following required information in the SRC would be: Classifications of
teacher preparation programs ($370,280, based on 0.5 hours for each of
28,726 programs); assurances of accreditation ($98,830, based on 0.25
hours for each of 15,335 programs); the State's weighting of the different indicators in
Sec. 612.5 ($340 annually, based on 0.25 hours per State); State-level
rewards and consequences associated with the designated performance
levels ($670 in the first year and $130 thereafter, based on 0.5 hours
per State in the first year and 0.1 hours per State in subsequent
years); method of program aggregation ($130 annually, based on 0.1
hours per State); and process for challenging data and program
classification ($4,020 in the first year and $1,550 thereafter, based
on 3 hours per State in the first year and 6 hours for 10 States in
subsequent years).
The Department's initial estimates also included costs associated
with the examination of data collection quality (5.3 hours per State
annually), and recordkeeping and publishing related to appeal decisions
(5.3 hours per State). However, one commenter stated that the
examination of data quality would take a high level of scrutiny and
would take more time than was originally estimated and that our
estimate associated with recordkeeping and publishing was low.
Additionally, several commenters responded generally to the overall
cost estimates in the NPRM with concerns about data quality and review.
In response to these general concerns, and upon further review, the
Department believes that States are likely to engage in a more robust
data quality review process in response to these regulations.
Furthermore, we believe that the associated documentation and
recordkeeping estimates may have been lower than those reasonably
expected by States. As such, the Department has increased its estimate
of the time required from the original 5.3-hour estimate to 10 hours in
both cases. These changes result in an estimated cost of $13,410 for
each of the two components. The sum of these annual reporting costs
would be $495,960 for the first year and $492,950 in subsequent years,
based on cumulative burden of 19,238 hours in the first year and 19,121
hours in subsequent years.
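    Each line item above can be reproduced from the $25.78 hourly wage,
as sketched below; the published first-year and subsequent-year totals
reflect the Department's aggregate burden-hour tallies rather than a
sum of these individually rounded items.

    # Per-item checks on the SRC annual reporting costs.
    wage, states = 25.78, 52
    print(round(0.5 * 28_726 * wage, -1))   # 370,280 classifications
    print(round(0.25 * 15_335 * wage, -1))  # 98,830 accreditation assurances
    print(round(0.25 * states * wage, -1))  # 340 indicator weighting
    print(round(0.5 * states * wage, -1))   # 670 rewards/consequences, year 1
    print(round(0.1 * states * wage, -1))   # 130 rewards (later years) and aggregation
    print(round(3 * states * wage, -1))     # 4,020 challenge process, year 1
    print(round(6 * 10 * wage, -1))         # 1,550 challenge process, later years
    print(round(10 * states * wage, -1))    # 13,410 data quality; same for recordkeeping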
In addition, a number of commenters expressed concern that our
estimates included time and costs associated with challenging data and
program classification but did not reflect time and costs associated
with allowing programs to actually review data in the SRC to ensure
that the teachers attributed to them were actual recent program
graduates. We agree that program-level review of these data may be
necessary, particularly in the first few years, in order to ensure
valid and reliable data. As such, we have revised our cost estimates to
include time for programs to individually review data reports to ensure
their accuracy. We assume that this review will largely consist of
matching lists of recent teacher preparation program graduates with
prepopulated lists provided by the State. Based on the number of
program completers during the 2013-2014 academic year, and the total
number of teacher preparation programs in that year, we estimate the
average program would review a list of 19 recent graduates (180,744
program completers per year, summed over three cohort years and divided
by 27,914 programs). As such, we do not believe this review will take a
considerable amount of time. However, to ensure that we estimate
sufficient time for this review, we estimate 1 hour per program for a
total cost for the 27,914 teacher preparation programs of $719,620.
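    The review workload and its cost follow from the stated counts:

    # Illustrative check of the program-level data review estimate.
    programs = 27_914
    completers = 180_744                     # per year, over three cohort years
    print(round(completers * 3 / programs))  # ~19 graduates per program list
    print(round(programs * 1 * 25.78, -1))   # 719,620 at 1 hour per program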
Under Sec. 612.5, States would also incur burden to enter the
required aggregated information on student learning, employment, and
survey outcomes into the information collection instrument for each
teacher preparation program. Using the estimated hourly wage rate of
$25.78, we estimate the following cumulative costs to the 50 States,
the District of Columbia, and Puerto Rico to report on 27,914 teacher
preparation programs and 812 teacher preparation programs provided
through distance education: Annual reporting on student learning
outcomes ($1,851,390 annually, based on 2.5 hours per program); annual
reporting of employment outcomes ($2,591,950 annually, based on 3.5
hours per program); and annual reporting of survey outcomes ($740,560
annually, based on 1 hour per program).
After publication of the NPRM, we recognized that our initial
estimates did not include costs or burden associated with States'
reporting data on any other indicators of academic content knowledge
and teaching skills. To the
[[Page 75599]]
extent that States use additional indicators not required by these
regulations, we believe that they will choose to use indicators
currently in place for identifying low-performing teacher preparation
programs rather than instituting new indicators and new data collection
processes. As such, we do not believe that States will incur any
additional data collection costs. Additionally, we assume that
transitioning reporting on these indicators from the entity level to
the program level will result in minimal costs at the State level that
are already captured elsewhere in these estimates. As such, we believe
the only additional costs associated with these other indicators will
be in entering the aggregated information into the information
collection instrument. We assume that, on average, it will take States
1 hour per program to enter this information. States with no or few
other indicators will experience much lower costs than those estimated
here. Those States that use a large number of other indicators may
experience higher costs than those estimated here, though we believe it
is unlikely that the data entry process per program for these other
indicators will exceed this estimate. As such, we estimate an annual
cost to the 50 States, the District of Columbia, and Puerto Rico of
$740,560 to report on other indicators of academic content knowledge
and teaching skills.
Our estimate of the total annual cost of reporting these outcome
measures on the SRC related to Sec. 612.5 is $5,924,460, based on
229,808 hours.
Potential Benefits
The principal benefits related to the evaluation and classification
of teacher preparation programs under the regulations are those
resulting from the reporting and public availability of information on
the effectiveness of teachers prepared by teacher preparation programs
within each State. The Department believes that the information
collected and reported as a result of these requirements will improve
the accountability of teacher preparation programs, both traditional
and alternative route to certification programs, for preparing teachers
who are equipped to succeed in classroom settings and help their
students reach their full potential.
Research studies have found significant and substantial variation
in teaching effectiveness among individual teachers and some variation
has also been found among graduates of different teacher preparation
programs.\96\ For example, Tennessee reports that some teacher
preparation programs consistently show statistically significant
differences in student learning outcomes for grades and subjects
covered by State assessments over multiple years and meaningful
differences in teacher placement and retention rates.\97\ Because this
variation in the effectiveness of graduates is not associated with any
particular type of preparation program, the only way to determine which
programs are producing more effective teachers is to link information
on the performance of teachers in the classroom back to their teacher
preparation programs.\98\ The regulations do this by requiring States
to link data on student learning outcomes, employment outcomes, and
teacher and employer survey outcomes back to the teacher preparation
programs, rating each program based on these data, and then making that
information available to the public.
---------------------------------------------------------------------------
\96\ See, for example: Boyd, D., Grossman, P., Lankford, H.,
Loeb, S., & Wyckoff, J. (2009). Teacher Preparation and Student
Achievement. Educational Evaluation and Policy Analysis, 31(4), 416-
440.
\97\ See Report Card on the Effectiveness of Teacher Training
Programs, Tennessee 2014 Report Card. (n.d.). Retrieved from https://www.tn.gov/thec/article/report-card.
\98\ Kane, T., Rockoff, J., & Staiger, D. (2008). What does
certification tell us about teacher effectiveness? Evidence from New
York City. Economics of Education Review, 27(6), 615-631.
---------------------------------------------------------------------------
The Department recognizes that simply requiring States to assess
the performance of teacher preparation programs and report this
information to the public will not produce increases in student
achievement, but it is an important part of a larger set of policies
and investments designed to attract talented individuals to the
teaching profession; prepare them for success in the classroom; and
support, reward, and retain effective teachers. In addition, the
Department believes that, once information on the performance of
teacher preparation programs is more readily available, a variety of
stakeholders will become better consumers of these data, which will
ultimately lead to improved student achievement by influencing the
behavior of States seeking to provide technical assistance to low-
performing programs, IHEs engaging in deliberate self-improvement
efforts, prospective teachers seeking to train at the highest quality
teacher preparation programs, and employers seeking to hire the most
highly qualified novice teachers.
Louisiana has already adopted some of the proposed requirements and
has begun to see improvements in teacher preparation programs. Based on
data suggesting that the English Language Arts program at the
University of Louisiana at Lafayette was producing teachers who were
less effective than novice teachers prepared by other programs,
Louisiana identified the program in 2008 as being in need of
improvement and provided additional analyses of the qualifications of
the program's graduates and of the specific areas where students
taught by those graduates appeared to be struggling.\99\ When data
suggested that students struggled with essay questions, faculty from
the elementary education program and the liberal arts department in the
university collaborated to restructure the teacher education curriculum
to include more writing instruction. Based on 2010-11 data, student
learning outcomes for teachers prepared by this program are now
comparable to other novice teachers in the State, and the program is no
longer identified for improvement.\100\
---------------------------------------------------------------------------
\99\ Sawchuk, S. (2012). Value Added Concept Proves Beneficial
to Teacher Colleges. Retrieved from www.edweek.org.
\100\ Gansle, K., Noell, G., Knox, R.M., & Schafer, M.J. (2010).
Value Added Assessment of Teacher Preparation Programs in Louisiana:
2007-2008 to 2009-2010 Overview of 2010-11 Results. Retrieved from
Louisiana Board of Regents.
---------------------------------------------------------------------------
This is one example, but it suggests that States can use data on
student learning outcomes for graduates of teacher preparation programs
to help these programs identify weaknesses and implement needed reforms
in a reasonable amount of time. As more information becomes available
and if the data indicate that some programs produce more effective
teachers, LEAs seeking to hire novice teachers will prefer to hire
teachers from those programs. All things being equal, aspiring teachers
will elect to pursue their degrees or certificates at teacher
preparation programs with strong student learning outcomes, placement
and retention rates, survey outcomes, and other measures.
TEACH Grants
The final regulations link program eligibility for participation in
the TEACH Grant program to the State assessment of program quality
under 34 CFR part 612. Under Sec. Sec. 686.11(a)(1)(iii) and 686.2(d),
to be eligible to receive a TEACH Grant for a program, an individual
must be enrolled in a high-quality teacher preparation program--that
is, a program that is classified by the State as effective or higher in
either or both of the October 2019 and October 2020 SRCs for the
2021-2022 title IV, HEA award year; or classified by the State as
effective or higher in two out of
[[Page 75600]]
the previous three years, beginning with the October 2020 SRC, for the
2022-2023 title IV, HEA award year, under 34 CFR 612.4(b). As noted in
the NPRM, the Department estimates that approximately 10 percent of
TEACH Grant recipients are not enrolled in teacher preparation
programs, but are majoring in such subjects as STEM, foreign languages,
and history. Under the final regulations, in a change from the NPRM and
from the current TEACH Grant regulations, students would need to be in
an effective teacher preparation program as defined in Sec. 612.2, but
those who pursue a dual-major that includes a teacher preparation
program would be eligible for a TEACH Grant. Additionally, institutions
could design and designate programs that aim to develop teachers in
STEM and other high-demand teaching fields that combine subject matter
and teacher preparation courses as TEACH Grant eligible programs.
Therefore, while we expect some reduction in TEACH Grant volume as
detailed in the Net Budget Impacts section of this Regulatory Impact
Analysis (RIA), we do expect that many students interested in teaching
STEM and other key subjects will still be able to get TEACH Grants at
some point in their postsecondary education.
In addition to the referenced benefits of improved accountability
under the title II reporting system, the Department believes that the
regulations relating to TEACH Grants will also contribute to the
improvement of teacher preparation programs. Linking program
eligibility for TEACH Grants to the performance assessment by the
States under the title II reporting system provides an additional
factor for prospective students to consider when choosing a program and
an incentive for programs to achieve a rating of effective or higher.
In order to analyze the possible effects of the regulations on the
number of programs eligible to participate in the TEACH Grant program
and the amount of TEACH Grants disbursed, the Department analyzed data
from a variety of sources. This analysis focused on teacher preparation
programs at IHEs. This is because, under the HEA, alternative route
programs offered independently of an IHE are not eligible to
participate in the TEACH Grant program. For the purpose of analyzing
the effect of the regulations on TEACH Grants, the Department estimated
the number of teacher preparation programs based on data from the
Integrated Postsecondary Education Data System (IPEDS) about program
graduates in education-related majors as defined by the Classification
of Instructional Programs (CIP) codes and award levels. For the purposes of
this analysis, ``teacher preparation programs'' refers to programs in
the relevant CIP codes that also have the IPEDS indicator flag for
being a State-approved teacher education program.
As detailed in the NPRM published December 3, 2014, in order to
estimate how many programs might be affected by a loss of TEACH Grant
eligibility, the Department had to estimate how many programs will be
individually evaluated under the regulations, which encourage States to
report on the performance of individual programs offered by IHEs rather
than on the aggregated performance of programs at the institutional
level as currently required. As before, the Department estimates that
approximately 3,000 programs may be evaluated at the highest level of
aggregation and approximately 17,000 could be evaluated if reporting is
done at the most disaggregated level. Table 5 summarizes these two
possible approaches to program definition, which represent the opposite
ends of the range of options available to the States. Based on IPEDS
data, approximately 30 percent of programs defined at the six-digit CIP
code level have at least 25 novice teachers when aggregated across
three years, so States may add one additional year to the analysis or
aggregate programs with similar features to push more programs over the
threshold, pursuant to the regulations. The actual number of programs
at IHEs reported on will likely fall between these two points
represented by Approach 1 and Approach 2. The final regulations define
a teacher preparation program offered through distance education as a
teacher preparation program at which at least 50 percent of the
program's required coursework is offered through distance education and
that, starting with the 2021-2022 award year and subsequent award years,
is not classified as less than effective, based on 34 CFR 612.4(b), by
the same State for two out of the previous three years or meets the
exception from State reporting of teacher preparation program
performance under 34 CFR 612.4(b)(3)(ii)(D) or (E). The exact number of
these programs is uncertain, but in the Supplemental NPRM concerning
teacher preparation programs offered through distance education, the
Department estimated that 812 programs would be reported. Whatever the
number of programs, the TEACH Grant volume associated with these
programs is captured in the amounts used in our Net Budget Impacts
discussion. In addition, as discussed earlier in the Analysis of
Comments and Changes section, States will have to report on alternative
certification teacher preparation programs that are not housed at IHEs,
but they are not relevant for analysis of the effects on TEACH Grants
because they are ineligible for title IV, HEA funds and are not
included in Table 5.
Table 5--Teacher Preparation Programs at IHEs and TEACH Grant Program
----------------------------------------------------------------------------------------------------------------
                                            Approach 1                       Approach 2
                                 -------------------------------   -------------------------------
                                                TEACH Grant                        TEACH Grant
                                      Total    participating           Total     participating
----------------------------------------------------------------------------------------------------------------
Public Total........................ 2,522 1,795 11,931 8,414
4-year.............................. 2,365 1,786 11,353 8,380
2-year or less...................... 157 9 578 34
Private Not-for-Profit Total........ 1,879 1,212 12,316 8,175
4-year.............................. 1,878 1,212 12,313 8,175
2-year or less...................... 1 ................. 3 .................
Private For-Profit Total............ 67 39 250 132
4-year.............................. 59 39 238 132
2-year or less...................... 8 ................. 12 .................
---------------------------------------------------------------------------
Total........................... 4,468 3,046 24,497 16,721
----------------------------------------------------------------------------------------------------------------
[[Page 75601]]
Given the number of programs and their TEACH Grant participation
status as described in Table 5, the Department examined IPEDS data and
the Department's budget estimates for 2017 related to TEACH Grants to
estimate the effect of the regulations on TEACH Grants beginning with
the FY 2021 cohort when the regulations would be in effect. Based on
prior reporting, only 37 IHEs (representing an estimated 129 programs)
were identified as having a low-performing or at-risk program in 2010,
and 27 States have not identified any low-performing programs in 12
years. Given prior identification of such programs and the
fact that the States would continue to control the classification of
teacher preparation programs subject to analysis, the Department does
not expect a large percentage of programs to be subject to a loss of
eligibility for TEACH Grants. Therefore, the Department evaluated the
effects on the amount of TEACH Grants disbursed and the number of
recipients on the basis of the States classifying a range of three
percent, five percent, or eight percent of programs to be low-
performing or at-risk. These results are summarized in Table 6.
Ultimately, the number of programs affected is subject to the program
definition, rating criteria, and program classifications adopted by the
individual States, so the distribution of those effects is not known
with certainty. However, the maximum effect, whatever the distribution,
is limited by the amount of TEACH Grants made and the percentage of
programs classified as low-performing and at-risk that participate in
the TEACH Grant program. In the NPRM, the Department invited comments
about the expected percentage of programs that will be found to be low-
performing and at-risk. No specific comments were received, so the
updated numbers based on the budget estimates for 2017 apply the same
percentages as were used in the NPRM.
Table 6--Estimated Effect in 2021 on Programs and TEACH Grant Amounts of Different Rates of Ineligibility
----------------------------------------------------------------------------------------------------------------
3% 5% 8%
----------------------------------------------------------------------------------------------------------------
[Percentage of low-performing or at-risk programs]
----------------------------------------------------------------------------------------------------------------
Programs:
Approach 1......................................... 214 356 570
Approach 2......................................... 385 641 1,026
TEACH Grant Recipients............................. 1,061 1,768 2,828
TEACH Grant Amount at Low-Performing or At-Risk $3,127,786 $5,212,977 $8,340,764
programs..........................................
----------------------------------------------------------------------------------------------------------------
The estimated effects presented in Table 6 reflect assumptions
about the likelihood of a program being ineligible and do not take into
account the size of the program or participation in the TEACH Grant
program. The Department had no program-level performance information
and treats the programs as equally likely to become ineligible for
TEACH Grants. If, in fact, factors such as size or TEACH Grant
participation were associated with high or low performance, the number
of TEACH Grant recipients and TEACH Grant volume could deviate from
these estimates.
Whatever the amount of TEACH Grant volume at programs found to be
ineligible, the effect on IHEs will be reduced from the full amounts
represented by the estimated effects presented here as students could
elect to enroll in other programs at the same IHE that retain
eligibility because they are classified by the State as effective or
higher. Another factor that would reduce the effect of the regulations
on programs and students is that an otherwise eligible student who
received a TEACH Grant for enrollment in a TEACH Grant-eligible program
is eligible to receive additional TEACH Grants to complete the program,
even if that program loses status as a TEACH Grant-eligible program.
Several commenters expressed concern that linking TEACH Grant
eligibility to the State's evaluation of the program would harm teacher
development from, and availability to, poor and underserved
communities. We believe that the pilot year, which provides advance
warning of program performance; the flexibility for States to develop
their own evaluation criteria; and the long history of programs
performing above the at-risk or low-performing levels will reduce the
possibility of this effect. The Department continues to expect that over time a large
portion of the TEACH Grant volume now disbursed to students at programs
that will be categorized as low-performing or at-risk will be shifted
to programs that remain eligible. The extent to which this happens will
depend on other factors affecting the students' enrollment decisions
such as in-State status, proximity to home or future employment
locations, and the availability of programs of interest, but the
Department believes that students will take into account a program's
rating and the availability of TEACH Grants when looking for a teacher
preparation program. As discussed in the Net Budget Impacts section of
this RIA, the Department expects that the reduction in TEACH Grant
volume will taper off as States identify low-performing and at-risk
programs and those programs are improved or are no longer eligible for
TEACH Grants. Because existing recipients will continue to have access
to TEACH Grants, and incoming students will have notice and be able to
consider the program's eligibility for TEACH Grants in making an
enrollment decision, the reduction in TEACH Grant volume that is
classified as a transfer from students at ineligible programs to the
Federal government will be significantly reduced from the estimated
range of approximately $3.0 million to approximately $8.0 million in
Table 6 for the initial years the regulations are in effect. While we
have no past experience with students' reaction to a designation of a
program as low-performing and loss of TEACH Grant eligibility, we
assume that, to the extent it is possible, students would choose to
attend a program rated effective or higher. For IHEs, the effect of the
loss of TEACH Grant funds will depend on the students' reaction and how
many choose to enroll in an eligible program at the same IHE, choose to
attend a different IHE, or make up for the loss of TEACH Grants by
funding their program from other sources.
The Department does not anticipate that many programs will lose
State approval or financial support. If this does occur, IHEs with such
programs would have to notify enrolled and accepted students
immediately, notify the Department within 30 days, and
[[Page 75602]]
disclose such information on their Web sites or in promotional materials. The
Department estimates that 50 IHEs would offer programs that lose State
approval or financial support and that it would take 5.75 hours to make
the necessary notifications and disclosures at a wage rate of $25.78
for a total cost of $7,410. Finally, some of the programs that lose
State approval or financial support may apply to regain eligibility for
title IV, HEA funds upon improved performance and restoration of State
approval or financial support. The Department estimates that 10 IHEs
with such programs would apply for restored eligibility and the process
would require 20 hours at a wage rate of $25.78 for a total cost of
$5,160.
3. Regulatory Alternatives Considered
The final regulations were developed through a negotiated
rulemaking process in which different options were considered for
several provisions. Among the alternatives the Department considered
were various ways to reduce the volume of information States and
teacher preparation programs are required to collect and report under
the existing title II reporting system. One approach would have been to
limit State reporting to items that are statutorily required. While
this would reduce the reporting burden, it would not address the goal
of enhancing the quality and usefulness of the data that are reported.
Alternatively, by focusing the reporting requirements on student
learning outcomes, employment outcomes, and teacher and employer survey
data, and also providing States with flexibility in the specific
methods they use to measure and weigh these outcomes, the regulations
balance the desire to reduce burden with the need for more meaningful
information.
Additionally, during the negotiated rulemaking session, some non-
Federal negotiators spoke of the difficulty States would have
developing the survey instruments, administering the surveys, and
compiling and tabulating the results for the employer and teacher
surveys. The Department offered to develop and conduct the surveys to
alleviate additional burden and costs on States, but the non-Federal
negotiators indicated that they preferred that States and teacher
preparation programs conduct the surveys.
One alternative considered in carrying out the statutory directive
to direct TEACH Grants to ``high quality'' programs was to limit
eligibility only to programs that States classified as ``exceptional'',
positioning the grants more as a reward for truly outstanding programs
than as an incentive for low-performing and at-risk programs to
improve. In order to prevent a program's eligibility from fluctuating
year-to-year based on small changes in evaluation systems that are
being developed and to keep TEACH Grants available to a wider pool of
students, including those attending teacher preparation programs
producing satisfactory student learning outcomes, the Department and
most non-Federal negotiators agreed that programs rated effective or
higher would be eligible for TEACH Grants.
4. Net Budget Impacts
The final regulations related to the TEACH Grant program are
estimated to have a net budget impact of $0.49 million in cost
reduction over the 2016 to 2026 loan cohorts. These estimates were
developed using the Office of Management and Budget's (OMB) Credit
Subsidy Calculator. The OMB calculator takes projected future cash
flows from the Department's student loan cost estimation model and
produces discounted subsidy rates reflecting the net present value of
all future Federal costs associated with awards made in a given fiscal
year. Values are calculated using a ``basket of zeros'' methodology
under which each cash flow is discounted using the interest rate of a
zero-coupon Treasury bond with the same maturity as that cash flow. To
ensure comparability across programs, this methodology is incorporated
into the calculator and used Government-wide to develop estimates of
the Federal cost of credit programs. Accordingly, the Department
believes it is the appropriate methodology to use in developing
estimates for these regulations. That said, in developing the following
Accounting Statement, the Department consulted with OMB on how to
integrate the Department's discounting methodology with the discounting
methodology traditionally used in developing regulatory impact
analyses.
Absent evidence of the impact of these regulations on student
behavior, budget cost estimates were based on behavior as reflected in
various Department data sets and longitudinal surveys. Program cost
estimates were generated by running projected cash flows related to the
provision through the Department's student loan cost estimation model.
TEACH Grant cost estimates are developed across risk categories:
Freshmen/sophomores at 4-year IHEs, juniors/seniors at 4-year IHEs, and
graduate students. Risk categories have separate assumptions based on
the historical pattern of the behavior of borrowers in each category--
for example, the likelihood of default or the likelihood to use
statutory deferment or discharge benefits.
As discussed in the TEACH Grants section of the Discussion of
Costs, Benefits, and Transfers section in this RIA, the regulations
could result in a reduction in TEACH Grant volume. Under the effective
dates and data collection schedule in the regulations, that reduction
in volume would start with the 2021 TEACH Grant cohort. The Department
assumes that the effect of the regulations would be greatest in the
first years they were in effect as the low-performing and at-risk
programs are identified, removed from TEACH Grant eligibility, and
helped to improve or are replaced by better performing programs.
Therefore, the percent of volume estimated to be at programs in the
low-performing or at-risk categories is assumed to drop for future
cohorts. As shown in Table 7, the net budget impact over the 2016-2026
TEACH Grant cohorts is approximately $0.49 million in reduced costs.
Table 7--Estimated Budget Impact
----------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------
PB 2017 TEACH Grant:
Awards.................. 35,354 36,055 36,770 37,498 38,241 38,999
Amount.................. 104,259,546 106,326,044 108,433,499 110,582,727 112,774,555 115,009,826
Remaining Volume after
Reduction from Change in
TEACH Grants for STEM
Programs:
%....................... 92.00% 92.00% 92.00% 92.00% 92.00% 92.00%
Awards.................. 32,526 33,171 33,828 34,498 35,182 35,879
Amount.................. 95,918,782 97,819,960 99,758,819 101,736,109 103,752,591 105,809,040
Low Performing and At Risk:
%....................... 5.00% 3.00% 1.50% 1.00% 0.75% 0.50%
Awards.................. 1,626 995 507 345 264 179
Amount.................. 4,795,939 2,934,599 1,496,382 1,017,361 778,144 529,045
Redistributed TEACH Grants:
%....................... 75% 75% 75% 75% 75% 75%
Amount.................. 3,596,954 2,200,949 1,122,287 763,021 583,608 396,784
Reduced TEACH Grant Volume:
%....................... 25% 25% 25% 25% 25% 25%
Amount.................. 1,198,985 733,650 374,096 254,340 194,536 132,261
Estimated Budget Impact of
Policy:
Subsidy Rate............ 17.00% 17.16% 17.11% 16.49% 16.40% 16.24%
Baseline Volume......... 104,259,546 106,326,044 108,433,499 110,582,727 112,774,555 115,009,826
Revised Volume.......... 103,060,561 105,592,394 108,059,403 110,328,387 112,580,019 114,877,565
Baseline Cost........... 17,724,123 18,245,549 18,552,972 18,235,092 18,495,027 18,677,596
Revised Cost............ 17,520,295 18,119,655 18,488,964 18,193,151 18,463,123 18,656,117
Estimated Cost Reduction 203,827 125,894 64,008 41,941 31,904 21,479
----------------------------------------------------------------------------------------------------------------
The estimated budget impact presented in Table 7 is defined against
the PB 2017 baseline costs for the TEACH Grant program, and the actual
volume of TEACH Grants in 2021 and beyond will vary. The budget impact
estimate depends on the assumptions about the percent of TEACH Grant
volume at programs that become ineligible and the share of that volume
that is redistributed or reduced as shown in Table 7. Finally, absent
evidence of different rates of loan conversion at programs that will be
eligible or ineligible for TEACH Grants when the proposed regulations
are in place, the Department did not assume a different loan conversion
rate as TEACH Grants shifted to programs rated effective or higher.
However, given that placement and retention rates are one element of
the program evaluation system, the Department does hope that, as
students shift to programs rated effective, more TEACH Grant recipients
will fulfill their service obligations. If this is the case and their
TEACH Grants do not convert to loans, the students who do not have to
repay the converted loans will benefit and the expected cost reductions
for the Federal government may be reduced or reversed because more of
the TEACH Grants will remain grants and no payment will be made to the
Federal government for these grants. The final regulations also change
total and permanent disability discharge provisions related to TEACH
Grants to be more consistent with the treatment of interest accrual for
total and permanent discharges in the Direct Loan program. This is not
expected to have a significant budget impact.
In addition to the TEACH Grant provision, the regulations include a
provision that would make a program ineligible for title IV, HEA funds
if the program was found to be low-performing and subject to the
withdrawal of the State's approval or termination of the State's
financial support. As noted in the NPRM, the Department assumes this
will happen rarely and that the title IV, HEA funds involved would be
shifted to other programs. Therefore, there is no budget impact
associated with this provision.
5. Accounting Statement
As required by OMB Circular A-4 (available at www.whitehouse.gov/sites/default/files/omb/assets/omb/circulars/a004/a-4.pdf), in the
following table we have prepared an accounting statement showing the
classification of the expenditures associated with the provisions of
these final regulations. This table provides our best estimate of the
changes in annual monetized costs, benefits, and transfers as a result
of the final regulations.
------------------------------------------------------------------------
------------------------------------------------------------------------
Category Benefits
------------------------------------------------------------------------
Better and more publicly available
information on the effectiveness of
teacher preparation programs........... Not Quantified
------------------------------------------------------------------------
Distribution of TEACH Grants to better
performing programs.................... Not Quantified
------------------------------------------------------------------------
Category Costs
------------------------------------------------------------------------
                                         7% discount rate   3% discount rate
------------------------------------------------------------------------
Institutional Report Card (set-up, $3,734,852 $3,727,459
annual reporting, posting on Web site).
------------------------------------------------------------------------
State Report Card (Statutory $3,653,206 $3,552,147
requirements: Annual reporting, posting
on Web site; Regulatory requirements:
Meaningful differentiation, consulting
with stakeholders, aggregation of small
programs, teacher preparation program
characteristics, other annual reporting
costs).................................
------------------------------------------------------------------------
Reporting Student Learning Outcomes $2,317,111 $2,249,746
(develop model to link aggregate data
on student achievement to teacher
preparation programs, modifications to
student growth models for non-tested
grades and subjects, and measuring
student growth)........................
------------------------------------------------------------------------
Reporting Employment Outcomes (placement $2,704,080 $2,704,080
and retention data collection directly
from IHEs or LEAs).....................
------------------------------------------------------------------------
Reporting Survey Results (developing $14,621,104 $14,571,062
survey instruments, annual
administration, and response costs)....
------------------------------------------------------------------------
Reporting Other Indicators.............. $740,560 $740,560
------------------------------------------------------------------------
Identifying TEACH Grant-eligible $12,570 $12,570
Institutions...........................
------------------------------------------------------------------------
Category Transfers
------------------------------------------------------------------------
Reduced costs to the Federal government ($60,041) ($53,681)
from TEACH Grants to prospective
students at teacher preparation
programs found ineligible..............
------------------------------------------------------------------------
[[Page 75604]]
6. Final Regulatory Flexibility Analysis
These regulations will affect IHEs that participate in the title
IV, HEA programs (including TEACH Grants), alternative certification
programs not housed at IHEs, States, and individual borrowers. The U.S.
Small Business Administration (SBA) Size Standards define for-profit
IHEs as ``small businesses'' if they are independently owned and
operated and not dominant in their field of operation with total annual
revenue below $7,000,000. The SBA Size Standards define nonprofit IHEs
as small organizations if they are independently owned and operated and
not dominant in their field of operation, or as small entities if they
are IHEs controlled by governmental entities with populations below
50,000. The revenues involved in the sector affected by these
regulations, and the concentration of ownership of IHEs by private
owners or public systems, mean that the number of title IV, HEA
eligible IHEs that are small entities would be limited but for the fact
that the nonprofit entities fit within the definition of a small
organization regardless of revenue. The potential for some of the
programs offered by entities subject to the final regulations to lose
eligibility to participate in the title IV, HEA programs led to the
preparation of this Final Regulatory Flexibility Analysis.
Description of the Reasons That Action by the Agency Is Being
Considered
The Department has a strong interest in encouraging the development
of highly trained teachers and ensuring that today's children have
high-quality and effective teachers in the classroom, and it seeks to
help achieve this goal in these final regulations. Teacher preparation
programs have operated without access to meaningful data that could
inform them of the effectiveness of their graduates who go on to teach
in classroom settings.
The Department wants to establish a teacher preparation feedback
mechanism premised upon teacher effectiveness. Under the final
regulations, an accountability system would be established that would
identify programs by quality so that effective teacher preparation
programs could be recognized and rewarded and low-performing programs
could be supported and improved. Data collected under the new system
will help all teacher preparation programs make necessary corrections
and continuously improve, while facilitating States' efforts to reshape
and reform low-performing and at-risk programs.
We are issuing these regulations to better implement the teacher
preparation program accountability and reporting system under title II
of the HEA and to revise the regulations implementing the TEACH Grant
program. Our key objective is to revise Federal reporting requirements,
while reducing institutional burden, as appropriate. Additionally, we
aim to have State reporting focus on the most important measures of
teacher preparation program quality while tying TEACH Grant eligibility
to assessments of program performance under the title II accountability
system. The legal basis for these regulations is 20 U.S.C. 1022d,
1022f, and 1070g, et seq.
The final regulations related to title II reporting affect a larger
number of entities, including small entities, than the smaller number
of entities that could lose TEACH Grant eligibility or title IV, HEA
program eligibility. The Department has more data on teacher
preparation programs housed at IHEs than on those independent of IHEs.
Whether evaluated at the aggregated institutional level or the
disaggregated program level, as described in the TEACH Grant section of
the Discussion of Costs, Benefits, and Transfers section in this RIA as
Approach 1 and Approach 2, respectively, State-approved teacher
preparation programs are concentrated in the public and private not-
for-profit sectors. For the provisions related to the TEACH Grant
program and using the institutional approach with a threshold of 25
novice teachers (or a lower threshold at the discretion of the State),
since the IHEs will be reporting for all their programs, we estimate
that approximately 56.4 percent of teacher preparation programs are at
public IHEs (the vast majority of which would not be small entities)
and 42.1 percent are at private not-for-profit IHEs. The remaining 1.5
percent are at private for-profit IHEs and of those with teacher
preparation programs, approximately 18 percent had reported FY 2012
total revenues under $7 million based on IPEDS data and are considered
small entities. Table 8 summarizes the estimated number of teacher
preparation programs offered at small entities.
Table 8--Teacher Preparation Programs at Small Entities
----------------------------------------------------------------------------------------------------------------
% of Total Programs at
Total Programs at programs TEACH Grant
programs small entities offered at participating
small entities small entities
----------------------------------------------------------------------------------------------------------------
Public:
Approach 1.................................. 2,522 17 1 14
Approach 2.................................. 11,931 36 0 34
Private Not-for-Profit:
Approach 1.................................. 1,879 1,879 100 1,212
Approach 2.................................. 12,316 12,316 100 8,175
Private For-Profit:
Approach 1.................................. 67 12 18 1
Approach 2.................................. 250 38 15 21
----------------------------------------------------------------------------------------------------------------
Source: IPEDS
Note: Table includes programs at IHEs only.
The Department has no indication that programs at small entities
are more likely to be ineligible for TEACH Grants or title IV, HEA
funds. Since all private not-for-profit IHEs are considered to be small
because none are dominant in the field, we would expect about 5 percent
of TEACH Grant volume at teacher preparation programs at private not-
for-profit IHEs to be at ineligible programs. In AY 2014-15,
approximately 43.7 percent of TEACH Grant disbursements went to private
not-for-profit IHEs, and by applying that to the estimated TEACH Grant
volume in 2021 of $95,918,782, the Department estimates that TEACH
Grant volume at private not-for-profit IHEs in 2021 would be
approximately $42.0 million. At the five percent low-performing or at-
risk rate assumed in the TEACH Grants portion
[[Page 75605]]
of the Cost, Benefits, and Transfers section of the Regulatory Impact
Analysis, TEACH Grant revenues would be reduced by approximately $2.1
million at programs at private not-for-profit entities in the initial
year the regulations are in effect and a lesser amount after that. Much
of this revenue could be shifted to eligible programs within the IHE or
the sector, and the cost to programs would be greatly reduced by
students substituting other sources of funds for the TEACH Grants.
In addition to the teacher preparation programs at IHEs included in
Table 8, approximately 1,281 alternative certification programs offered
outside of IHEs are subject to the reporting requirements in the
regulations. The Department assumes that a significant majority of
these programs are offered by non-profit entities that are not dominant
in the field, so all of the alternative certification teacher
preparation programs are considered to be small entities. However, the
reporting burden for these programs falls on the States. As discussed
in the Paperwork Reduction Act section of this document, the estimated
total paperwork burden on IHEs would decrease by 66,740 hours. Small
entities would benefit from this relief from the current institutional
reporting requirements.
The final regulations are unlikely to conflict with or duplicate
existing Federal regulations.
Paperwork Reduction Act of 1995
The Paperwork Reduction Act of 1995 (PRA) does not require you to
respond to a collection of information unless it displays a valid OMB
control number. We display the valid OMB control numbers assigned to
the collections of information in these final regulations at the end of
the affected sections of the regulations.
Sections 612.3, 612.4, 612.5, 612.6, 612.7, 612.8, and 686.2
contain information collection requirements. Under the PRA, the
Department has submitted a copy of these sections, related forms, and
Information Collection Requests (ICRs) to the Office of Management and
Budget (OMB) for its review.
The OMB control number associated with the regulations and related
forms is 1840-0837. Due to changes described in the Discussion of
Costs, Benefits, and Transfers section of the RIA, estimated burdens
have been updated below.
Start-Up and Annual Reporting Burden
These regulations implement a statutory requirement that IHEs and
States establish an information and accountability system through which
IHEs and States report on the performance of their teacher preparation
programs. Because parts of the regulations require IHEs and States to
establish or scale up certain systems and processes in order to collect
information necessary for annual reporting, IHEs and States may incur
one-time start-up costs for developing those systems and processes. The
burden associated with start-up and annual reporting is reported
separately in this statement.
Section 612.3 Reporting Requirements for the Institutional Report Cards
Section 205(a) of the HEA requires that each IHE that provides a
teacher preparation program leading to State certification or licensure
report on a statutorily enumerated series of data elements for the
programs it provides. The HEOA revised a number of the reporting
requirements for IHEs.
The final regulations under Sec. 612.3(a) require that, beginning
on April 1, 2018, and annually thereafter, each IHE that conducts
traditional or alternative route teacher preparation programs leading
to State initial teacher certification or licensure and that enrolls
students receiving title IV, HEA funds report to the State on the
quality of its programs using an IRC prescribed by the Secretary.
Start-Up Burden
Entity-Level and Program-Level Reporting
Under the current IRC, IHEs typically report at the entity level
rather than the program level. For example, if an IHE offers multiple
teacher preparation programs in a range of subject areas (for example,
music education and special education), that IHE gathers data on each
of those programs, aggregates the data, and reports the required
information as a single teacher preparation entity on a single report
card. Under the final regulations and for the reasons discussed in the
NPRM and the preamble to this final rule, reporting is now required at
the teacher preparation program level rather than at the entity level.
No additional data must be gathered as a consequence of this regulatory
requirement; instead, IHEs will simply report the required data before,
rather than after, aggregation.
As a consequence, IHEs will not be required to alter appreciably
their systems for data collection. However, the Department acknowledges
that in order to communicate disaggregated data, minimal recordkeeping
adjustments may be necessary. The Department estimates that initial
burden for each IHE to adjust its recordkeeping systems will be 10
hours per entity. In the most recent year for which data are available,
1,490 IHEs reported required data to the Department through the IRC.
Therefore, the Department estimates that the one-time total burden for
IHEs to adjust recordkeeping systems will be 14,900 hours (1,490 IHEs
multiplied by 10 burden hours per IHE).
Subtotal of Start-Up Burden Under Sec. 612.3
The Department believes that IHEs' experience during prior title II
reporting cycles has provided sufficient knowledge to ensure that IHEs
will not incur any significant start-up burden, except for the change
from entity-level to program-level reporting described above.
Therefore, the subtotal of start-up burden for Sec. 612.3 is 14,900
hours.
Annual Reporting Burden
Changes to the Institutional Report Card
For a number of years IHEs have gathered, aggregated, and reported
data on teacher preparation program characteristics, including those
required under the HEOA, to the Department using the IRC approved under
OMB control number 1840-0837. The required reporting elements of the
IRC principally concern admissions criteria, student characteristics,
clinical preparation, numbers of teachers prepared, accreditation of
the program, and the pass rates and scaled scores of teacher candidates
on State teacher certification and licensure examinations.
Given all of the reporting changes under these final rules as
discussed in the NPRM, the Department estimates that each IHE will
require 66 fewer burden hours to prepare the revised IRC annually. The
Department estimates that each IHE will require 146 hours to complete
the current IRC approved by OMB. There would thus be an annual burden
of 80 hours to complete the revised IRC (146 hours minus 66 hours in
reduced data collection). The Department estimates that 1,490 IHEs
would respond to the IRC required under the regulations, based on
reporting figures from the most recent year for which data are available.
Therefore, reporting data using the IRC would represent a total annual
reporting burden of 119,200 hours (80 hours multiplied by 1,490 IHEs).
Entity-Level and Program-Level Reporting
As noted in the start-up burden section of Sec. 612.3, under the
current IRC, IHEs report teacher preparation program data at the entity
level. The final regulations require that each IHE
[[Page 75606]]
report disaggregated data at the teacher preparation program level. The
Department believes this will not require any additional data
collection or appreciably alter the time needed to calculate data
reported to the Department. However, the Department believes that some
additional reporting burden will exist for IHEs' electronic input and
submission of disaggregated data because each IHE typically houses
multiple teacher preparation programs.
Based on the most recent year of data available, the Department
estimates that there are 27,914 teacher preparation programs
nationwide, of which 24,430 are offered at 1,490 IHEs. Based on these
figures, the Department estimates that, on average, each of these IHEs
offers 16.40 teacher preparation
programs. Because each IHE already collects disaggregated IRC data, the
Department estimates it will take each IHE one additional hour to fill
in existing disaggregated data into the electronic IRC for each teacher
preparation program it offers. Because IHEs already have to submit an
IRC for the IHE, we estimate that the added burden for reporting at the
program level will be 15.40 hours per IHE (an average of 16.40 programs
at one hour per program, minus the existing submission of one IRC for
the IHE). Therefore, there will be an overall burden increase of 22,946
hours each year associated with this regulatory reporting requirement
(15.40 hours multiplied by 1,490 IHEs).
Posting on the Institution's Web site
The regulations also require that the IHE provide the information
reported on the IRC to the general public by prominently and promptly
posting the IRC information on the IHE's Web site. Because the
Department believes it is reasonable to assume that an IHE offering a
teacher preparation program and communicating data related to that
program by electronic means maintains a Web site, the Department
presumes that posting such information to an already-existing Web site
will represent a minimal burden increase. The Department therefore
estimates that IHEs will require 0.5 hours (30 minutes) to meet this
requirement. This would represent a total burden increase of 745 hours
each year for all IHEs (0.5 hours multiplied by 1,490 IHEs).
Subtotal of Annual Reporting Burden Under Sec. 612.3
Aggregating the annual burdens calculated under the preceding
sections results in the following burdens: Together, all IHEs would
incur a total burden of 119,200 hours to complete the revised IRC,
22,946 hours to report program-level data, and 745 hours to post IRC
data to their Web sites. This would constitute a total annual burden of
142,891 hours nationwide.
Total Institutional Report Card Reporting Burden
Aggregating the start-up and annual burdens calculated under the
preceding sections results in the following burdens: Together, all IHEs
would incur a total start-up burden under Sec. 612.3 of 14,900 hours
and a total annual reporting burden under Sec. 612.3 of 142,891 hours.
This would constitute a total burden of 157,791 total burden hours
under Sec. 612.3 nationwide.
The burden estimate for the existing IRC approved under OMB control
number 1840-0837 was 146 hours for each IHE with a teacher preparation
program. When the current IRC was established, the Department estimated
that 1,250 IHEs would provide information using the electronic
submission of the form for a total burden of 182,500 hours for all IHEs
(1,250 IHEs multiplied by 146 hours). Applying these estimates to the
current number of IHEs that are required to report (1,490) would
constitute a burden of 217,540 hours (1,490 IHEs multiplied by 146
hours). Based on these estimates, the revised IRC would constitute a
net burden reduction of 59,749 hours nationwide (217,540 hours minus
157,791 hours).
Section 612.4 Reporting Requirements for the State Report Card
Section 205(b) of the HEA requires that each State that receives
funds under the HEA provide to the Secretary and make widely available
to the public not less than the statutorily required specific
information on the quality of traditional and alternative route teacher
preparation programs. The State must do so in a uniform and
comprehensible manner, conforming with definitions and methods
established by the Secretary. Section 205(c) of the HEA directs the
Secretary to prescribe regulations to ensure the validity, reliability,
accuracy, and integrity of the data submitted. Section 206(b) requires
that IHEs assure the Secretary that their teacher training programs
respond to the needs of LEAs, be closely linked with the instructional
decisions novice teachers confront in the classroom, and prepare
candidates to work with diverse populations and in urban and rural
settings, as applicable.
Implementing the relevant statutory directives, the regulations
under Sec. 612.4(a) require that, starting October 1, 2019, and
annually thereafter, each State report on the SRC the quality of all
approved teacher preparation programs in the State, whether or not they
enroll students receiving Federal assistance under the HEA, including
distance education programs. This new SRC, to be implemented in 2019,
is an update of the current SRC. The State must also make the SRC
information widely available to the general public by posting the
information on the State's Web site.
Section 103(20) of the HEA and Sec. 612.2(d) of these final
regulations define ``State'' to include nine locations in addition to
the 50 States: The Commonwealth of Puerto Rico, the District of
Columbia, Guam, American Samoa, the United States Virgin Islands, the
Commonwealth of the Northern Mariana Islands, and the Freely Associated
States, which include the Republic of the Marshall Islands, the
Federated States of Micronesia, and the Republic of Palau. For this
reason, all reporting required of States explicitly enumerated under
section 205(b) of the HEA (and the related portions of the regulations,
specifically Sec. Sec. 612.4(a) and 612.6(b)) applies to these 59
States. However, certain additional regulatory requirements
(specifically Sec. Sec. 612.4(b), 612.4(c), 612.5, and 612.6(a)) only
apply to the 50 States of the Union, the Commonwealth of Puerto Rico,
and the District of Columbia. The burden estimates under those portions
of this report apply to those 52 States. For a full discussion of the
reasons for the application of certain regulatory provisions to
different States, see the preamble to the NPRM.
Entity-Level and Program-Level Reporting
As noted in the start-up and annual burden sections of Sec. 612.3,
under the current information collection process, data are collected at
the entity level, and the final regulations require data reporting at
the program level. In 2015, States reported that there were 27,914
teacher preparation programs offered, including 24,430 at IHEs and
3,484 through alternative route teacher preparation programs not
associated with IHEs. In addition, as discussed in the Supplemental
NPRM, the Department estimates that the sections of these final
regulations addressing teacher preparation programs offered through
distance education will result in 812 additional reporting instances.
[[Page 75607]]
Because the remainder of the data reporting discussed in this burden
statement is transmitted using the SRC, for those burden estimates
concerning reporting on the basis of teacher preparation programs, the
Department uses the estimate of 28,726 teacher preparation programs
(27,914 teacher preparation programs plus 812 reporting instances
related to teacher preparation programs offered through distance
education).
Start-Up and Annual Burden Under Sec. 612.4(a)
Section 612.4(a) codifies State reporting requirements expressly
referenced in section 205(b) of the HEA; the remainder of Sec. 612.4
provides for reporting consistent with the directives to the Secretary
under sections 205(b) and (c) and the required assurance described in
section 206(c).
The HEOA revised a number of the reporting requirements for States.
The requirements of the SRC are more numerous than those contained in
the IRC, but the reporting elements required in both are similar in
many respects. In addition, the Department has successfully integrated
reporting to the extent that data reported by IHEs in the IRC are pre-
populated in the relevant fields on which the States are required to
report in the SRC. In addition to the elements discussed in Sec. 612.3
of this burden statement regarding the IRC, under the statute a State
must also report on its certification and licensure requirements and
standards, State-wide pass rates and scaled scores, shortages of highly
qualified teachers, and information related to low-performing or at-
risk teacher preparation programs in the State.
The SRC currently in use, approved under OMB control number 1840-
0837, collects information on these elements. States have been
successfully reporting information under this collection for many
years. The burden estimate for the existing SRC was 911 burden hours
per State. In the burden estimate for that SRC, the Department reported
that 59 States were required to report data, equivalent to the current
requirements. This represented a total burden of 53,749 hours for all
States (59 States multiplied by 911 hours). This burden calculation was
made on entity-level, rather than program-level, reporting (for a more
detailed discussion of the consequences of this issue, see the sections
on entity-level and program-level reporting in Sec. Sec. 612.3 and
612.4). However, because relevant program-level data reported by the
IHEs on the IRC will be pre-populated for States on the SRC, the burden
associated with program-level reporting under Sec. 612.4(a) will be
minimal. Those elements that will require additional burden are
discussed in the subsequent paragraphs of this section.
Elements Changed in the State Report Card
Using the calculations outlined in the NPRM and changes discussed
above, the Department estimates that the total reporting burden for
each State will be 243 hours (193 hours for the revised SRC plus the
additional statutory reporting requirements totaling 50 hours). This
would represent a reduction of 668 burden hours for each State to
complete the requirements of the SRC, as compared to approved OMB
collection 1840-0837 (911 burden hours under the current SRC compared
to 243 burden hours under the revised SRC). The total burden for States
to report this information would be 14,337 hours (243 hours multiplied
by 59 States).
Posting on the State's Web Site
The final regulations also require that the State provide the
information reported on the SRC to the general public by prominently
and promptly posting the SRC information on the State's Web site.
Because the Department believes it is reasonable to assume that each
State that communicates data related to its teacher preparation
programs by electronic means maintains a Web site, the Department
presumes that posting such information to an already-existing Web site
represents a minimal burden increase. The Department therefore
estimates that States will require 0.5 hours (30 minutes) to meet this
requirement. This would represent a total burden increase of 29.5 hours
each year for all States (0.5 hours multiplied by 59 States).
Subtotal Sec. 612.4(a) Start-Up and Annual Reporting Burden
As noted in the preceding discussion, there is no start-up burden
associated solely with Sec. 612.4(a). Therefore, the aggregate start-
up and annual reporting burden associated with reporting elements under
Sec. 612.4(a) would be 14,366.5 hours (243 hours multiplied by 59
States plus 0.5 hours for each of the 59 States).
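As an illustrative cross-check of the Sec. 612.4(a) arithmetic (again
a sketch only, not part of the regulatory text, with hypothetical
variable names):

    # Cross-check the Sec. 612.4(a) figures stated above.
    states = 59                    # States reporting under section 205(b) of the HEA
    current_src_hours = 911        # per-State burden under the existing SRC
    revised_src_hours = 193 + 50   # revised SRC plus added statutory elements: 243

    reduction_per_state = current_src_hours - revised_src_hours  # 668 hours
    src_reporting_total = revised_src_hours * states             # 14,337 hours
    web_posting_total = 0.5 * states                             # 29.5 hours
    subtotal_612_4a = src_reporting_total + web_posting_total    # 14,366.5 hours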
Reporting Required Under Sec. 612.4(b) and Sec. 612.4(c)
The preceding burden discussion of Sec. 612.4 focused on burdens
related to the reporting requirements under section 205(b) of the HEA
and reflected in 34 CFR 612.4(a). The remaining burden discussion of
Sec. 612.4 concerns reporting required under Sec. 612.4(b) and (c).
Start-Up Burden
Meaningful Differentiations
Under Sec. 612.4(b)(1), a State is required to make meaningful
differentiations in teacher preparation program performance using at
least three performance levels--low-performing teacher preparation
program, at-risk teacher preparation program, and effective teacher
preparation program--based on the indicators in Sec. 612.5 and
including employment outcomes for high-need schools and student
learning outcomes.
The Department believes that State higher education authorities
responsible for making State-level classifications of teacher
preparation programs will require time to make meaningful
differentiations in their classifications and determine whether
alternative performance levels are warranted. States are required to
consult with external stakeholders, review best practices by early
adopter States that have more experience in program classification, and
seek technical assistance.
States will also have to determine how they will make such
classifications. For example, a State may choose to classify all
teacher preparation programs on an absolute basis using a cut-off score
that weighs the various indicators, or a State may choose to classify
teacher preparation programs on a relative basis, electing to classify
a certain top percentile as exceptional, the next percentile as
effective, and so on. In exercising this discretion, States may choose
to consult with various external and internal parties and discuss
lessons learned with those States already making such classifications
of their teacher preparation programs.
The Department estimates that each State will require 70 hours to
make these determinations, and this would constitute a one-time total
burden of 3,640 hours (70 hours multiplied by 52 States).
Disaggregated Data on Each Indicator in Sec. 612.5
Under Sec. 612.4(b)(3)(i)(A), for each teacher preparation
program, a State must provide disaggregated data for each of the
indicators identified pursuant to Sec. 612.5. See the start-up burden
section of Sec. 612.5 for a more detailed discussion of the burden
associated with gathering the indicator data required to be reported
under this regulatory section. See the annual reporting burden section
of Sec. 612.4 for a discussion of the ongoing reporting burden
associated with reporting
[[Page 75608]]
disaggregated indicator data under this regulation. No further burden
exists beyond the burden described in these two sections.
Assurance of Specialized Accreditation
Under Sec. 612.4(b)(3)(i)(B), a State is required to provide, for
each teacher preparation program in the State, the State's assurance
that the teacher preparation program either: (a) Is accredited by a
specialized agency or (b) provides teacher candidates with content and
pedagogical knowledge, quality clinical preparation, and rigorous
teacher exit qualifications. See the start-up burden section of Sec.
612.5 for a detailed discussion of the burden associated with gathering
the indicator data required to be reported under this regulation. See
the annual reporting burden section of Sec. 612.4 for a discussion of
the ongoing reporting burden associated with reporting these
assurances. No further burden exists beyond the burden described in
these two sections.
Indicator Weighting
Under Sec. 612.4(b)(2)(ii), a State must provide its weighting of
the different indicators in Sec. 612.5 for purposes of describing the
State's assessment of program performance. See the start-up burden
section of Sec. 612.4 on stakeholder consultation for a detailed
discussion of the burden associated with establishing the weighting of
the various indicators under Sec. 612.5. See the annual reporting
burden section of Sec. 612.4 for a discussion of the ongoing reporting
burden associated with reporting these relative weightings. No further
burden exists beyond the burden described in these two sections.
State-Level Rewards or Consequences
Under Sec. 612.4(b)(2)(iii), a State must provide the State-level
rewards or consequences associated with the designated performance
levels. See the start-up burden section of Sec. 612.4 on stakeholder
consultation for a more detailed discussion of the burden associated
with establishing these rewards or consequences. See the annual
reporting burden section of Sec. 612.4 for a discussion of the ongoing
reporting burden associated with reporting these relative weightings.
No further burden exists beyond the burden described in these two
sections.
Aggregation of Small Programs
Under Sec. 612.4(b)(3), a State must ensure that all of its
teacher preparation programs in that State are represented on the SRC.
The Department recognized that many teacher preparation programs
consist of a small number of prospective teachers and that reporting on
these programs could present privacy and data validity issues. After
discussion and input from various non-Federal negotiators during the
negotiated rulemaking process, the Department elected to set a required
reporting program size threshold of 25. However, the Department
acknowledged that, on the basis of research examining the accuracy and
validity of reporting on small programs, some States may prefer to
report on programs smaller than 25. Section 612.4(b)(3)(i) permits
States to report using a lower program size threshold. In order to
determine the preferred program size threshold for its programs, a
State may review existing research or the practices of other States
that set program size thresholds to determine feasibility for its own
teacher preparation program reporting. The Department estimates that
such review will require 20 hours for each State, and this would
constitute a one-time total burden of 1,040 hours (20 hours multiplied
by 52 States).
Under Sec. 612.4(b)(3), States must also report on the remaining
small programs that do not meet the program size threshold the State
chooses. States will be able to do so through
a combination of two possible aggregation methods described in Sec.
612.4(b)(3)(ii). The preferred aggregation methodology is to be
determined by the States after consultation with a group of
stakeholders. For a detailed discussion of the burden related to this
consultation process, see the start-up burden section of Sec. 612.4,
which discusses the stakeholder consultation process. Apart from the
burden discussed in that section, no other burden is associated with
this requirement.
Stakeholder Consultation
Under Sec. 612.4(c), a State must consult with a representative
group of stakeholders to determine the procedures for assessing and
reporting the performance of each teacher preparation program in the
State. This stakeholder group, composed of a variety of members
representing viewpoints and interests affected by these regulations,
must provide input on a number of issues concerning the State's
discretion. There are four issues in particular on which the
stakeholder group advises the State--
a. The relative weighting of the indicators identified in Sec.
612.5;
b. The preferred method for aggregation of data such that
performance data for a maximum number of small programs are reported;
c. The State-level rewards or consequences associated with the
designated performance levels; and
d. The appropriate process and opportunity for programs to
challenge the accuracy of their performance data and program
classification.
The Department believes that this consultative process will require
that the group convene at least three times to afford each of the
stakeholder representatives multiple opportunities to meet and consult
with the constituencies they represent. Further, the Department
believes that members of the stakeholder group will require time to
review relevant materials and academic literature and advise on the
relative strength of each of the performance indicators under Sec.
612.5, as well as any other matters requested by the State.
These stakeholders will also require time to advise whether any of
the particular indicators will have more or less predictive value for
the teacher preparation programs in their State, given its unique
traits. Finally, because some States have already implemented one or
more components of the regulatory indicators of program quality, these
stakeholders will require time to review these States' experiences in
implementing similar systems. The Department estimates that the
combination of gathering the stakeholder group multiple times, review
of the relevant literature and other States' experiences, and making
determinations unique to their particular State will take 900 hours for
each State (60 hours per stakeholder multiplied by 15 stakeholders).
This would constitute a one-time total of 46,800 hours for all States
(900 hours multiplied by 52 States).
Subtotal of Start-Up Burden Under Sec. 612.4(b) and Sec. 612.4(c)
Aggregating the start-up burdens calculated under the preceding
sections results in the following burdens: All States would incur a
total burden of 3,640 hours to make meaningful differentiations in
program classifications, 1,040 hours to determine the State's
aggregation of small programs, and 46,800 hours to complete the
stakeholder consultation process. This would constitute a total of
51,480 hours of start-up burden nationwide.
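The start-up subtotal can be cross-checked the same way (an
illustrative sketch with hypothetical names, not part of the
regulatory text):

    # Cross-check the start-up subtotal under Sec. 612.4(b) and (c).
    states = 52                          # the 50 States plus Puerto Rico and DC
    differentiations = 70 * states       # meaningful differentiations: 3,640 hours
    size_threshold_review = 20 * states  # small-program threshold review: 1,040 hours
    consultation = 60 * 15 * states      # 15 stakeholders at 60 hours each: 46,800 hours

    startup_subtotal = differentiations + size_threshold_review + consultation  # 51,480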
Annual Reporting Burden
Classification of Teacher Preparation Programs
The bulk of the State burden associated with assigning programs
among classification levels should be in
[[Page 75609]]
gathering and compiling data on the indicators of program quality that
compose the basis for the classification. Once a State has made a
determination of how a teacher preparation program will be classified
at a particular performance level, applying the data gathered under
Sec. 612.5 to this classification basis is straightforward. The
Department estimates that States will require 0.5 hours (30 minutes) to
apply already-gathered indicator data to existing program
classification methodology. The total burden associated with
classification of all teacher preparation programs using meaningful
differentiations would be 14,363 hours each year (0.5 hours multiplied
by 28,726 teacher preparation programs).
Disaggregated Data on Each Indicator in Sec. 612.5
Under Sec. 612.4(b)(2)(i)(A), States must report on the indicators
of program performance in Sec. 612.5. For a full discussion of the
burden related to the reporting of this requirement, see the annual
reporting burden section of Sec. 612.5. Apart from the burden
discussed in this section, no other burden is associated with this
requirement.
Indicator Weighting
Under Sec. 612.4(b)(2)(ii), States must report the relative weight
it places on each of the different indicators enumerated in Sec.
612.5. The burden associated with this reporting is minimal: After the
State, in consultation with a group of stakeholders, has made the
determination about the percentage weight it will place on each of
these indicators, reporting this information on the SRC is a simple
matter of inputting a number for each of the indicators. Under Sec.
612.5, this minimally requires the State to input eight general
indicators of quality.
Note: The eight indicators are--
a. Associated student learning outcome results;
b. Teacher placement results;
c. Teacher retention results;
d. Teacher placement rate calculated for high-need schools;
e. Teacher retention rate calculated for high-need schools;
f. Teacher satisfaction survey results;
g. Employer satisfaction survey results; and
h. Teacher preparation program characteristics.
This reporting burden will not be affected by the number of teacher
preparation programs in a State, because such weighting applies equally
to each program. Although the State has the discretion to add
indicators, the Department does not believe that transmission of an
additional figure representing the percentage weighting assigned to
that indicator will constitute an appreciable burden increase. The
Department therefore estimates that each State will incur a burden of
0.25 hours (15 minutes) to report the relative weighting of the
regulatory indicators of program performance. This would constitute a
total burden on States of 13 hours each year (0.25 hours multiplied by
52 States).
State-Level Rewards or Consequences
Similar to the reporting required under Sec. 612.4(b)(2)(ii),
after a State has made the requisite determination about rewards and
consequences, reporting those rewards and consequences represents a
relatively low burden. States must report this on the SRC during the
first year of implementation; the SRC could provide States with a drop-
down list representing common rewards or consequences in use by early
adopter States, and States can briefly describe those rewards or
consequences not represented in the drop-down options. For subsequent
years, the SRC could be pre-populated with the prior-year's selected
rewards and consequences, such that there will be no further burden
associated with subsequent year reporting unless the State altered its
rewards and consequences. For these reasons, the Department estimates
that States will incur, on average, 0.5 hours (30 minutes) of burden in
the first year of implementation to report the State-level rewards and
consequences, and 0.1 hours (6 minutes) of burden in each subsequent
year. The Department therefore estimates that the total burden for the
first year of implementation of this regulatory requirement will be 26
hours (0.5 hours multiplied by 52 States) and 5.2 hours each year
thereafter (0.1 hours multiplied by 52 States).
Stakeholder Consultation
Under Sec. 612.4(b)(4), during the first year of reporting and
every five years thereafter, States must report on the procedures they
established in consultation with the group of stakeholders described
under Sec. 612.4(c)(1). The burden associated with the first and third
of these four procedures (the weighting of the indicators and the State-
level rewards and consequences associated with each performance level,
respectively) is discussed in the preceding paragraphs of this
section.
The second procedure, the method by which small programs are
aggregated, is a relatively straightforward reporting procedure on the
SRC. Pursuant to Sec. 612.4(b)(3)(ii), a State may aggregate small
programs in one of two ways or a combination of both: It can aggregate
programs that are similar in teacher preparation subject matter, or it
can aggregate using prior-year data, including data from multiple prior
years. On the SRC, the State simply indicates the
method it uses. The Department estimates that States will require 0.5
hours (30 minutes) to enter these data every fifth year. On an
annualized basis, this would therefore constitute a total burden of 5.2
hours (0.5 hours multiplied by 52 States divided by five to annualize
burden for reporting every fifth year).
The fourth procedure that States must report under Sec.
612.4(b)(4) is the method by which teacher preparation programs in the
State are able to challenge the accuracy of their data and the
classification of their program. First, the Department believes that
States will incur a paperwork burden each year from recordkeeping and
publishing decisions of these challenges. Because the Department
believes the instances of these appeals will be relatively rare, we
estimate that each State will incur 10 hours of burden each year
related to recordkeeping and publishing decisions. This would
constitute an annual reporting burden of 520 hours (10 hours multiplied
by 52 States).
After States and their stakeholder groups determine the preferred
method for programs to challenge data, reporting that information will
likely take the form of narrative responses. This is because the method
for challenging data may differ greatly from State to State, and it is
difficult for the Department to predict what methods States will
choose. The Department therefore estimates that reporting this
information in narrative form during the first year will constitute a
burden of 3 hours for each State. This would represent a total
reporting burden of 156 hours (3 hours multiplied by 52 States).
In subsequent reporting cycles, the Department can examine State
responses and (1) pre-populate this response for States that have not
altered their method for challenging data or (2) provide a drop-down
list of representative alternatives. This will minimize subsequent
burden for most States. The Department therefore estimates that in
subsequent reporting cycles (every five years under the final
regulations), only 10 States will require more time to provide
additional narrative responses totaling 3 burden
[[Page 75610]]
hours each, with the remaining 42 States incurring a negligible burden.
This represents an annualized reporting burden of 6 hours nationwide
for subsequent years (3 hours multiplied by 10 States, divided by 5
years to annualize the burden of reporting every fifth year).
Under Sec. 612.4(c)(2), each State must periodically examine the
quality of its data collection and reporting activities and modify
those activities as appropriate. The Department believes that this
review will be carried out in a manner similar to the one described for
the initial stakeholder determinations in the preceding paragraphs:
States will consult with representative groups to determine their
experience with providing and using the collected data, and they will
consult with data experts to ensure the validity and reliability of the
data collected. The Department believes such a review will recur every
three years, on average. Because this review will take place years
after the State's initial implementation of the regulations, the
Department further believes that the State's review will be of
relatively little burden. This is because the State's review will be
based on the State's own experience with collecting and reporting data
pursuant to the regulations, and because States can consult with many
other States to determine best practices. For these reasons, the
Department estimates that the periodic review and modification of data
collection and reporting will require 30 hours every three years or an
annualized burden of 10 hours for each State. This would constitute a
total annualized burden of 520 hours for all States (10 hours per year
multiplied by 52 States).
Subtotal Annual Reporting Burden Under Sec. 612.4(b) and Sec.
612.4(c)
Aggregating the annual burdens calculated under the preceding
sections results in the following: All States would incur a burden of
14,363 hours to report classifications of teacher preparation programs,
13 hours to report State indicator weightings, 26 hours in the first
year and 5.2 hours in subsequent years to report State-level rewards
and consequences associated with each performance classification, 5.2
hours to report the method of program aggregation, 520 hours for
recordkeeping and publishing appeal decisions, 156 hours the first year
and 6 hours in subsequent years to report the process for challenging
data and program classification, and 520 hours to report on the
examination of data collection quality. This totals 15,603.2 hours of
annual burden in the first year and 15,432.4 hours of annual burden in
subsequent years nationwide.
Total Reporting Burden Under Sec. 612.4
Aggregating the start-up and annual burdens calculated under the
preceding sections results in the following burdens: All States would
incur a total burden under Sec. 612.4(a) of 14,366.5 hours, a start-up
burden under Sec. Sec. 612.4(b) and 612.4(c) of 51,480 hours, and an
annual burden under Sec. Sec. 612.4(b) and 612.4(c) of 15,603.2 hours
in the first year and 15,432.4 hours in subsequent years. This totals
between 81,278.9 and 81,449.7 total burden hours under Sec. 612.4
nationwide. Based on the prior estimate of 53,749 hours of reporting
burden on OMB collection 1840-0837, the total burden increase under
Sec. 612.4 is between 27,529.9 hours and 27,700.7 hours (a range of
81,278.9 to 81,449.7 total burden hours minus 53,749 hours).
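An illustrative sketch of the full Sec. 612.4 aggregation follows (not
part of the regulatory text; the component figures are those stated in
the preceding subtotals):

    # Cross-check the total burden range under Sec. 612.4.
    subtotal_612_4a = 14366.5   # annual burden under Sec. 612.4(a)
    startup_b_and_c = 51480     # start-up burden under Sec. 612.4(b) and (c)
    annual_first_year = 14363 + 13 + 26 + 5.2 + 520 + 156 + 520  # 15,603.2 hours
    annual_later_years = 14363 + 13 + 5.2 + 5.2 + 520 + 6 + 520  # 15,432.4 hours

    total_first = subtotal_612_4a + startup_b_and_c + annual_first_year   # 81,449.7
    total_later = subtotal_612_4a + startup_b_and_c + annual_later_years  # 81,278.9
    prior_estimate = 53749      # hours under OMB collection 1840-0837
    increase = (total_later - prior_estimate, total_first - prior_estimate)
    # approximately (27,529.9, 27,700.7) hours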
Section 612.5 Indicators a State Must Use To Report on Teacher
Preparation Program Performance
The final regulations at Sec. 612.5(a)(1) through (a)(4) identify
those indicators that a State is required to use to assess the academic
content knowledge and teaching skills of novice teachers from each of
its teacher preparation programs. Under the regulations, a State must
use the following indicators of teacher preparation program
performance: (a) Student learning outcomes, (b) employment outcomes,
(c) survey outcomes, and (d) whether the program (1) is accredited by a
specialized accrediting agency or (2) produces teacher candidates with
content and pedagogical knowledge and quality clinical preparation, who
have met rigorous exit standards. Section 612.5(b) permits a State, at
its discretion, to establish additional indicators of academic content
knowledge and teaching skills.
Start-Up Burden
Student Learning Outcomes
As described in the Discussion of Costs, Benefits, and Transfers
section of the RIA, we do not estimate that States will incur any
additional burden associated with creating systems for evaluating
student learning outcomes. However, the regulations also require that
States link student growth or teacher evaluation data back to each
teacher's preparation program consistent with State discretionary
guidelines included in Sec. 612.4. Currently, few States have such
capacity. However, based on data from the Statewide Longitudinal Data
Systems (SLDS) program, it appears that
30 States, the District of Columbia, and the Commonwealth of Puerto
Rico either already have the ability to aggregate data on student
achievement and map back to teacher preparation programs or have
committed to do so. For these 30 States, the District of Columbia, and
the Commonwealth of Puerto Rico we estimate that no additional costs
will be needed to link student learning outcomes back to teacher
preparation programs.
For the remaining 20 States, the Department estimates that each will
require 2,940 hours, for a total burden of 58,800 hours nationwide
(2,940 hours multiplied by 20 States).
Employment Outcomes
Section 612.5(a)(2) requires a State to provide data on each
teacher preparation program's teacher placement rate as well as the
teacher placement rate calculated for high-need schools. High-need
schools are defined in Sec. 612.2(d) by using the definition of
``high-need school'' in section 200(11) of the HEA. The regulations
give States discretion to exclude those novice teachers or recent
graduates from this measure if they are teaching in a private school,
teaching in another State, enrolled in graduate school, or engaged in
military service. States also have the discretion to treat this rate
differently for alternative route and traditional route providers.
Section 612.5(a)(2) requires a State to provide data on each
teacher preparation program's teacher retention rate and teacher
retention rate calculated for high-need schools. The regulations give
States discretion to exclude those novice teachers or recent graduates
from this measure if they are teaching in a private school (or other
school not requiring State certification), teaching in another State,
enrolled in graduate school, or serving in the military. States also have the
discretion to treat this rate differently for alternative route and
traditional route providers.
As discussed in the NPRM, the Department believes that only 11
States will likely incur additional burden in collecting information
about the employment and retention of recent graduates of teacher
preparation programs in their jurisdictions. To the extent that it is not
possible to establish these measures using existing data systems,
States may need to obtain some or all of this information from teacher
preparation programs or from the teachers themselves upon requests for
certification and licensure. The Department estimates that 200 hours
may be required at the State level to collect information about novice
[[Page 75611]]
teachers employed in full-time teaching positions (including designing
the data request instruments, disseminating them, providing training or
other technical assistance on completing the instruments, collecting
the data, and checking their accuracy), which would amount to a total
of 2,200 hours (200 hours multiplied by 11 States).
Survey Outcomes
Section 612.5(a)(3) requires a State to provide data on each
teacher preparation program's teacher survey results. This requires
States to report data from a survey of novice teachers in their first
year of teaching designed to capture their perceptions of whether the
training that they received was sufficient to meet classroom and
profession realities.
Section 612.5(a)(3) also requires a State to provide data on each
teacher preparation program's employer survey results. This requires
States to report data from a survey of employers or supervisors
designed to capture their perceptions of whether the novice teachers
they employ or supervise were prepared sufficiently to meet classroom
and profession realities.
Some States and IHEs already survey graduates of their teacher
preparation programs. The sample size and the length of the survey
instrument can strongly affect the potential burden associated with administering
the survey. The Department has learned that some States already have
experience carrying out such surveys (for a more detailed discussion of
these and other estimates in this section, see the Discussion of Costs,
Benefits and Transfers section regarding student learning outcomes in
the RIA). In order to account for variance in States' abilities to
conduct such surveys, the variance in the survey instruments
themselves, and the need to ensure statistical validity and
reliability, the Department assumes a somewhat higher burden estimate
than States' initial experiences alone would suggest.
Based on Departmental consultation with researchers experienced in
carrying out survey research, the Department assumes that survey
instruments will not require more than 30 minutes to complete. The
Department further assumes that a State can develop a survey in 1,224
hours. Assuming that States with experience in administering surveys
will incur a lower cost, the Department assumes that the total burden
incurred nationwide would maximally be 63,648 hours (1,224 hours
multiplied by 52 States).
Teacher Preparation Program Characteristics
Under Sec. 612.5(a)(4), States must report, for each teacher
preparation program in the State whether it: (a) Is accredited by a
specialized accrediting agency recognized by the Secretary for
accreditation of professional teacher education programs, or (b)
provides teacher candidates with content and pedagogical knowledge and
quality clinical preparation, and has rigorous teacher candidate exit
standards.
CAEP, a union of two formerly independent national accrediting
agencies, the National Council for Accreditation of Teacher Education
(NCATE) and the Teacher Education Accreditation Council (TEAC), reports
that currently it has fully accredited approximately 800 IHEs. The
existing IRC currently requires reporting of whether each teacher
preparation program is accredited by a specialized accrediting agency,
and if so, which one. We note that, as of July 1, 2016, CAEP has not
been recognized by the Secretary for accreditation of teacher
preparation programs. As such, programs accredited by CAEP would not
qualify under Sec. 612.5(a)(4)(i). However, as described in the
discussion of comments above, States would be able to use accreditation
by CAEP as an indicator that the teacher preparation program meets the
requirements of Sec. 612.5(a)(4)(ii). In addition, we explain in the
comments above that a State also could meet the reporting requirements
in Sec. 612.5(a)(4)(ii) by indicating that a program has been
accredited by an accrediting organization whose standards cover the
program characteristics identified in that section. Because section
205(a)(1)(D) of the HEA requires IHEs to include in their IRCs the
identity of any agency that has accredited their programs, and the
number of such accrediting agencies is small, States should readily
know whether these other agencies meet these standards. For these
reasons, the Department believes that no significant start-up burden
will be associated with State determinations of specialized
accreditation of teacher preparation programs for those programs that
are already accredited.
As discussed in the NPRM, the Department estimates that States will
have to provide information for 15,335 teacher preparation programs
nationwide (11,461 unaccredited programs at IHEs plus 3,484 programs at
alternative routes not affiliated with an IHE plus 390 reporting
instances for teacher preparation programs offered through distance
education).
The Department believes that States will be able to make use of
accreditation guidelines from specialized accrediting agencies to
determine the measures that will adequately inform them about which of
its teacher preparation programs provide teacher candidates with
content and pedagogical knowledge, quality clinical preparation, and
have rigorous teacher candidate exit qualifications--the indicators
contained in Sec. 612.5(a)(4)(ii). The Department estimates that
States will require 2 hours for each teacher preparation program to
determine whether or not it can provide such information. Therefore,
the Department estimates that the total reporting burden to provide
this information would be 30,670 hours (15,335 teacher preparation
programs multiplied by 2 hours).
Subtotal of Start-Up Reporting Burden Under Sec. 612.5
Aggregating the start-up burdens calculated under the preceding
sections results in the following burdens: All States would incur a
burden of 58,800 hours to link student learning outcome measures back
to each teacher's preparation program, 2,200 hours to measure
employment outcomes, 63,648 hours to develop surveys, and 30,670 hours
to establish the process to obtain information related to certain
indicators for teacher preparation programs without specialized
accreditation. This totals 155,318 hours of start-up burden nationwide.
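As with the earlier subtotals, the Sec. 612.5 start-up figure can be
reproduced with a short illustrative calculation (not part of the
regulatory text; names are invented for the example):

    # Cross-check the start-up subtotal under Sec. 612.5.
    outcome_linkage = 2940 * 20     # linking learning outcomes in 20 States: 58,800
    employment_data = 200 * 11      # employment data collection in 11 States: 2,200
    survey_development = 1224 * 52  # survey development in 52 States: 63,648
    characteristics = 2 * 15335     # program characteristic determinations: 30,670

    startup_612_5 = (outcome_linkage + employment_data
                     + survey_development + characteristics)  # 155,318 hours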
Annual Reporting Burden
Under Sec. 612.5(a), States must transmit, through specific
elements on the SRC, information related to indicators of academic
content knowledge and teaching skills of novice teachers for each
teacher preparation program in the State. We discuss the burden
associated with establishing systems related to gathering these data in
the section discussing start-up burden associated with Sec. 612.5. The
following section describes the burden associated with gathering these
data and reporting them to the Department annually.
Student Learning Outcomes
Under Sec. 612.5(a)(1), States are required to transmit
information related to student learning outcomes for each teacher
preparation program in the State. The Department believes that in order
to ensure the validity of the data, each State will require two hours
to gather and compile data related to the student learning outcomes of
each teacher preparation program. Much of
[[Page 75612]]
the burden related to data collection will be built into State-
established reporting systems, limiting the remaining burden to
technical support to ensure proper reporting and to correct data that
was entered incorrectly. States have the
discretion to use student growth measures or teacher evaluation
measures in determining student learning outcomes. Regardless of the
measure(s) used, the Department estimates that States will require 0.5
hours (30 minutes) for each teacher preparation program to convey this
information to the Department through the SRC. This is because these
measures will be calculated on a quantitative basis. The combination of
gathering and reporting data related to student learning outcomes
therefore constitutes a burden of 2.5 hours for each teacher
preparation program, and would represent a total burden of 71,815 hours
annually (2.5 hours multiplied by 28,726 teacher preparation programs).
Employment Outcomes
Under Sec. 612.5(a)(2), States are required to transmit
information related to employment outcomes for each teacher preparation
program in the State. In order to report employment outcomes to the
Department, States must compile and transmit teacher placement rate
data, teacher placement rate data calculated for high-need schools,
teacher retention rate data, and teacher retention rate data for high-
need schools. Similar to the process for reporting student learning
outcome data, much of the burden related to gathering data on
employment outcomes is subsumed into the State-established data
systems, which provide information on whether and where teachers were
employed. The Department estimates that States will require 3 hours to
gather data both on teacher placement and teacher retention for each
teacher preparation program in the State. Reporting these data using
the SRC is relatively straightforward. The measures are the percentage
of teachers placed and the percentage of teachers who continued to
teach, both generally and at high-need schools. The Department
therefore estimates that States will require 0.5 hours (30 minutes) for
each teacher preparation program to convey this information to the
Department through the SRC. The combination of gathering and reporting
data related to employment outcomes therefore constitutes a burden of
3.5 hours for each teacher preparation program and would represent a
total burden of 100,541 hours annually (3.5 hours multiplied by 28,726
teacher preparation programs).
Survey Outcomes
In addition to the start-up burden needed to produce a survey,
States will incur annual burdens to administer the survey. Surveys will
include, but will not be limited to, a teacher survey and an employer
survey, designed to capture perceptions of whether novice teachers who
are employed as teachers in their first year of teaching in the State
where the teacher preparation program is located possess the skills
needed to succeed in the classroom. The burdens for administering an
annual survey will be borne by the State administering the survey and
the respondents completing it. For the reasons discussed in the RIA in
this document, the Department estimates that States will require
approximately 0.5 hours (30 minutes) per respondent to collect a
sufficient number of survey instruments to ensure an adequate response
rate. The Department employs an estimate of 253,042 respondents (70
percent of 361,488--the 180,744 completers plus their 180,744
employers) that will be required to complete the survey. Therefore, the
Department estimates that the annual burden to respondents nationwide
would be 126,521 hours (253,042 respondents multiplied by 0.5 hours per
respondent).
With respect to burden incurred by States to administer the surveys
annually, the Department estimates that one hour of burden will be
incurred for every respondent to the surveys. This would constitute an
annual burden nationwide of 253,042 hours (253,042 respondents
multiplied by one hour per respondent).
Under Sec. 612.5(a)(3), after these surveys are administered,
States are required to report the information using the SRC. In order
to report survey outcomes to the Department, the Department estimates
that States will need 0.5 hours to report the quantitative data related
to the survey responses for each instrument on the SRC, constituting a
total burden of one hour to report data on both instruments. This would
represent a total burden of 28,726 hours annually (1 hour multiplied by
28,726 teacher preparation programs). The total burden associated with
administering, completing, and reporting data on the surveys therefore
constitutes 408,289 hours annually (126,521 hours plus 253,042 hours
plus 28,726 hours).
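The survey-outcome arithmetic can be cross-checked as follows (an
illustrative sketch only, not part of the regulatory text):

    # Cross-check the annual survey-outcome burden.
    completers = 180744
    respondent_pool = completers * 2             # completers plus employers: 361,488
    respondents = round(0.70 * respondent_pool)  # approximately 253,042
    completion_hours = respondents * 0.5         # 126,521 hours for respondents
    administration_hours = respondents * 1       # 253,042 hours for States
    src_reporting_hours = 28726 * 1              # 28,726 hours to report on the SRC

    survey_total = completion_hours + administration_hours + src_reporting_hours
    # 408,289 hours annually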
Teacher Preparation Program Characteristics
Under Sec. 612.5(a)(4), States are required to report whether each
program in the State is accredited by a specialized accrediting agency
recognized by the Secretary, or produces teacher candidates with
content and pedagogical knowledge, with quality clinical preparation,
and who have met rigorous teacher candidate exit qualifications. The
Department estimates that 726 IHEs offering teacher preparation
programs are or will be accredited by a specialized accrediting agency
(see the start-up burden discussion for Sec. 612.5 for an explanation
of this figure). Using the IRC, IHEs already report to States whether
teacher preparation programs have specialized accreditation. However,
as noted in the start-up burden discussion of Sec. 612.5, as of July
1, 2016, there are no specialized accrediting agencies recognized by
the Secretary for teacher preparation programs. As such, the Department
does not expect any teacher preparation program to qualify under Sec.
612.5(a)(4)(i). However, as discussed elsewhere in this document,
States can use accreditation by CAEP or another entity whose standards
for accreditation cover the basic program characteristics in Sec.
612.5(a)(4)(ii) as evidence that the teacher preparation program has
satisfied the indicator of program performance in that provision. Since
IHEs are already reporting whether they have specialized accreditation
in their IRCs, and this reporting element will be pre-populated for
States on the SRC, States would simply need to know whether these
accrediting agencies have standards that examine the program
characteristics in Sec. 612.5(a)(4)(ii). Therefore, the Department
estimates no additional burden for this reporting element for programs
that have the requisite accreditation.
Under Sec. 612.5(a)(4)(ii), for those programs that are not
accredited by a specialized accrediting agency, States are required to
report on certain indicators in lieu of that accreditation: Whether the
program provides teacher candidates with content and pedagogical
knowledge and quality clinical preparation, and has rigorous teacher
candidate exit qualifications. We assume that such requirements are
already built into State approval of relevant programs. The Department
estimates that States will require 0.25 hours (15 minutes) to provide
to the Secretary an assurance, in a yes/no format, whether each teacher
preparation program in its jurisdiction not holding a specialized
accreditation from CAEP, NCATE, or TEAC meets these indicators.
As discussed in the start-up burden section of Sec. 612.5 which
discusses reporting of teacher preparation program characteristics, the
Department
[[Page 75613]]
estimates States will have to provide such assurances for 15,335
teacher preparation programs that do not have specialized
accreditation. Therefore, the Department estimates that the total
burden associated with providing an assurance that these teacher
preparation programs meet these indicators is 3,833.75 hours (0.25 hours
multiplied by the 15,335 teacher preparation programs that do not have
specialized accreditation).
Other Indicators
Under Sec. 612.5(b), States may include additional indicators of
academic content knowledge and teaching skill in their determination of
whether teacher preparation programs are low-performing. As discussed
in the Discussion of Costs, Benefits, and Transfers section of the RIA,
we do not assume that States will incur any additional burden under
this section beyond entering the relevant data into the information
collection instrument. The Department estimates that the total
reporting burden associated with this provision will be 28,726 hours
(28,726 teacher preparation programs multiplied by 1 hour).
Subtotal of Annual Reporting Burden Under Sec. 612.5
Aggregating the annual burdens calculated under the preceding
sections results in the following burdens: All States would incur a
burden of 71,815 hours to report on student learning outcome measures
for all subjects and grades, 100,541 hours to report on employment
outcomes, 408,289 hours to report on survey outcomes, 3,833.75 hours
report on teacher preparation program characteristics, and 28,726 hours
to report on other indicators not required in Sec. 612.5(a)(1)-(4).
This totals 613,204.75 hours of annual burden nationwide.
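The annual subtotal can likewise be reproduced (illustrative sketch,
hypothetical names, not part of the regulatory text):

    # Cross-check the annual reporting subtotal under Sec. 612.5.
    programs = 28726
    learning_outcomes = 2.5 * programs    # 71,815 hours
    employment_outcomes = 3.5 * programs  # 100,541 hours
    survey_outcomes = 408289              # computed in the survey discussion above
    characteristics = 0.25 * 15335        # 3,833.75 hours
    other_indicators = 1 * programs       # 28,726 hours

    annual_612_5 = (learning_outcomes + employment_outcomes + survey_outcomes
                    + characteristics + other_indicators)  # 613,204.75 hours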
Total Reporting Burden Under Sec. 612.5
Aggregating the start-up and annual burdens calculated under the
preceding sections results in the following burdens: All States would
incur a start-up burden under Sec. 612.5 of 155,318 hours and an
annual burden under Sec. 612.5 of 613,204.75 hours. This totals
768,522.75 burden hours under Sec. 612.5 nationwide.
Section 612.6 What Must a State Consider in Identifying Low-Performing
Teacher Preparation Programs or At-Risk Programs?
The regulations in Sec. 612.6 require States to use criteria,
including, at a minimum, indicators of academic content knowledge and
teaching skills from Sec. 612.5, to identify low-performing or at-risk
teacher preparation programs.
For a full discussion of the burden related to the consideration
and selection of the criteria reflected in the indicators described in
Sec. 612.5, see the start-up burden section of Sec. Sec. 612.4(b) and
612.4(c) discussing meaningful differentiations. Apart from that burden
discussion, the Department believes States will incur no other burden
related to this regulatory provision.
Section 612.7 Consequences for a Low-Performing Teacher Preparation
Program That Loses the State's Approval or the State's Financial
Support
For any IHE administering a teacher preparation program that has
lost State approval or financial support based on being identified as a
low-performing teacher preparation program, the regulations under Sec.
612.7 require the IHE to--(a) notify the Secretary of its loss of State
approval or financial support within thirty days of such designation;
(b) immediately notify each student who is enrolled in or accepted into
the low-performing teacher preparation program and who receives funding
under title IV, HEA that the IHE is no longer eligible to provide such
funding to them; and (c) disclose information on its Web site and
promotional materials regarding its loss of State approval or financial
support and loss of eligibility for title IV funding.
The Department does not expect that a large percentage of programs
will be subject to a loss of title IV eligibility. The Department
estimates that approximately 50 programs will lose their State approval
or financial support.
For those 50 programs, the Department estimates that it will take
each program 15 minutes to notify the Secretary of its loss of
eligibility; 5 hours to notify all students who are enrolled in or
accepted into the program and who receive funding under title IV of the
HEA; and 30 minutes to disclose this information on its Web sites and
promotional materials, for a total of 5.75 hours per program. The
Department estimates the total burden at 287.5 hours (50 programs
multiplied by 5.75 hours).
Section 612.8 Regaining Eligibility To Accept or Enroll Students
Receiving Title IV, HEA Funds After Loss of State Approval or Financial
Support
The regulations in Sec. 612.8 provide a process for a low-
performing teacher preparation program that has lost State approval or
financial support to regain its ability to accept and enroll students
who receive title IV, HEA funds. Under this process, IHEs will submit
an application and supporting documentation demonstrating to the
Secretary: (1) Improved performance on the teacher preparation program
performance criteria reflected in indicators described in Sec. 612.5
as determined by the State; and (2) reinstatement of the State's
approval or the State's financial support.
The process by which programs and institutions apply for title IV
eligibility already accounts for the burden associated with this
provision.
Total Reporting Burden Under Part 612
Aggregating the total burdens calculated under the preceding
sections of part 612 results in the following burdens: Total burden
hours incurred under Sec. 612.3 are 157,791 hours, under Sec. 612.4 are
between 81,278.9 hours and 81,449.7 hours, under Sec. 612.5 are
768,522.75 hours, under Sec. 612.7 are 287.5 hours, and under Sec.
612.8 are 200 hours. This totals between 1,008,080.15 hours and
1,008,250.95 hours nationwide.
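Finally, the nationwide total can be cross-checked with the same kind
of illustrative sketch (not part of the regulatory text):

    # Cross-check the nationwide total under part 612.
    hours = {"612.3": 157791, "612.5": 768522.75, "612.7": 287.5, "612.8": 200}
    sec_612_4_range = (81278.9, 81449.7)

    common = sum(hours.values())
    low = common + sec_612_4_range[0]   # 1,008,080.15 hours
    high = common + sec_612_4_range[1]  # 1,008,250.95 hours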
Reporting Burden Under Part 686
The changes to part 686 in these regulations have no measurable
effect on the burden currently identified in the OMB Control Numbers
1845-0083 and 1845-0084.
Consistent with the discussions above, the following chart
describes the sections of the final regulations involving information
collections, the information being collected, and the collections the
Department has submitted to the OMB for approval and public comment
under the Paperwork Reduction Act. In the chart, the Department labels
those estimated burdens not already associated an OMB approval number
under a single prospective designation ``OMB 1840-0837.'' This label
represents a single information collection; the different sections of
the regulations are separated in the table below for clarity and to
appropriately divide the burden hours associated with each regulatory
section.
Please note that the changes in burden estimated in the chart are
based on the change in burden under the current OMB control number
1840-0837 and the prospective designation ``OMB 1840-0837.'' The burden
estimate for Sec. 612.3 is based on the most recent data available for
the number of IHEs that are required to report (i.e., 1,522 IHEs rather
than the 1,250 IHEs used in prior estimates). For a complete
discussion of the costs associated with the burden incurred under these
regulations, please see the RIA in this document, specifically the
accounting statement.
[[Page 75614]]
------------------------------------------------------------------------
OMB Control No. and
Regulatory section Information estimated change in
collection the burden
------------------------------------------------------------------------
612.3...................... This section requires OMB 1840-0837--The
IHEs that provide a burden will
teacher preparation decrease by 64,421
program leading to hours.
State certification
or licensure to
provide data on
teacher preparation
program performance
to the States.
612.4...................... This section requires OMB 1840-0837--The
States that receive burden will
funds under the increase by between
Higher Education Act 27,529.9 and
of 1965, as amended, 27,700.7 hours.
to report to the
Secretary on the
quality of teacher
preparation in the
State, both for
traditional teacher
preparation programs
and for alternative
route to State
certification and
licensure programs.
612.5...................... This regulatory OMB 1840-0837--The
section requires burden will
States to use increase by
certain indicators 768,522.75 hours.
of teacher
preparation
performance for
purposes of the
State report card.
612.6...................... This regulatory OMB 1840-0837--The
section requires burden associated
States to use with this
criteria, including regulatory
indicators of provision is
academic content accounted for in
knowledge and other portions of
teaching skills, to this burden
identify low- statement.
performing or at-
risk teacher
preparation programs.
612.7...................... The regulations under OMB 1840-0837--The
this section require burden will
any IHE increase by 287.5
administering a hours.
teacher preparation
program that has
lost State approval
or financial support
based on being
identified as a low-
performing teacher
preparation program
to notify the
Secretary and
students receiving
title IV, HEA funds,
and to disclose this
information on its
Web site.
612.8...................... The regulations in OMB 1840-0837--The
this section provide burden will
a process for a low- increase by 200
performing teacher hours.
preparation program
that lost State
approval or
financial support to
regain its ability
to accept and enroll
students who receive
title IV funds.
Total Change in Burden. ..................... Total increase in
burden under part
612 will be between
732,119.15 hours
and 732,289.95
hours.
------------------------------------------------------------------------
Intergovernmental Review
These programs are subject to the requirements of Executive Order
12372 and the regulations in 34 CFR part 79. One of the objectives of
the Executive order is to foster an intergovernmental partnership and a
strengthened federalism. The Executive order relies on processes
developed by State and local governments for coordination and review of
proposed Federal financial assistance.
This document provides early notification of our specific plans and
actions for these programs.
Assessment of Educational Impact
In the NPRM we requested comments on whether the proposed
regulations would require transmission of information that any other
agency or authority of the United States gathers or makes available.
Based on the response to the NPRM and on our review, we have
determined that these final regulations do not require transmission of
information that any other agency or authority of the United States
gathers or makes available.
Federalism
Executive Order 13132 requires us to ensure meaningful and timely
input by State and local elected officials in the development of
regulatory policies that have federalism implications. ``Federalism
implications'' means substantial direct effects on the States, on the
relationship between the National Government and the States, or on the
distribution of power and responsibilities among the various levels of
government.
In the NPRM we identified a specific section that may have
federalism implications and encouraged State and local elected
officials to review and provide comments on the proposed regulations.
In the Public Comment section of this preamble, we discuss any comments
we received on this subject.
Accessible Format: Individuals with disabilities can obtain this
document in an accessible format (e.g., braille, large print,
audiotape, or compact disc) on request to the person listed under FOR
FURTHER INFORMATION CONTACT.
Electronic Access to This Document: The official version of this
document is the document published in the Federal Register. Free
Internet access to the official edition of the Federal Register and the
Code of Federal Regulations is available via the Federal Digital System
at: www.gpo.gov/fdsys. At this site you can view this document, as well
as all other documents of this Department published in the Federal
Register, in text or Adobe Portable Document Format (PDF). To use PDF
you must have Adobe Acrobat Reader, which is available free at the
site.
You may also access documents of the Department published in the
Federal Register by using the article search feature at:
www.federalregister.gov. Specifically, through the advanced search
feature at this site, you can limit your search to documents published
by the Department.
List of Subjects in 34 CFR Parts 612 and 686
Administrative practice and procedure, Aliens, Colleges and
universities, Consumer protection, Grant programs--education, Loan
programs--education, Reporting and recordkeeping requirements,
Selective Service System, Student aid, Vocational education.
Dated: October 11, 2016.
John B. King, Jr.,
Secretary of Education.
For the reasons discussed in the preamble, the Secretary amends
chapter VI of title 34 of the Code of Federal Regulations as follows:
0
1. Part 612 is added to read as follows:
PART 612--TITLE II REPORTING SYSTEM
Subpart A--Scope, Purpose, and Definitions
Sec.
[[Page 75615]]
612.1 Scope and purpose.
612.2 Definitions.
Subpart B--Reporting Requirements
612.3 What are the regulatory reporting requirements for the
Institutional Report Card?
612.4 What are the regulatory reporting requirements for the State
Report Card?
612.5 What indicators must a State use to report on teacher
preparation program performance for purposes of the State report
card?
612.6 What must States consider in identifying low-performing
teacher preparation programs or at-risk teacher preparation
programs, and what actions must a State take with respect to those
programs identified as low-performing?
Subpart C--Consequences of Withdrawal of State Approval or Financial
Support
612.7 What are the consequences for a low-performing teacher
preparation program that loses the State's approval or the State's
financial support?
612.8 How does a low-performing teacher preparation program regain
eligibility to accept or enroll students receiving Title IV, HEA
funds after loss of the State's approval or the State's financial
support?
Authority: 20 U.S.C. 1022d and 1022f.
Subpart A--Scope, Purpose, and Definitions
Sec. 612.1 Scope and purpose.
This part establishes regulations related to the teacher
preparation program accountability system under title II of the HEA.
This part includes:
(a) Institutional Report Card reporting requirements.
(b) State Report Card reporting requirements.
(c) Requirements related to the indicators States must use to
report on teacher preparation program performance.
(d) Requirements related to the areas States must consider to
identify low-performing teacher preparation programs and at-risk
teacher preparation programs and actions States must take with respect
to those programs.
(e) The consequences for a low-performing teacher preparation
program that loses the State's approval or the State's financial
support.
(f) The conditions under which a low-performing teacher preparation
program that has lost the State's approval or the State's financial
support may regain eligibility to resume accepting and enrolling
students who receive title IV, HEA funds.
Sec. 612.2 Definitions.
(a) The following terms used in this part are defined in the
regulations for Institutional Eligibility under the HEA, 34 CFR part
600:
Distance education
Secretary
State
Title IV, HEA program
(b) The following term used in this part is defined in subpart A of
the Student Assistance General Provisions, 34 CFR part 668:
Payment period
(c) The following term used in this part is defined in 34 CFR 77.1:
Local educational agency (LEA)
(d) Other terms used in this part are defined as follows:
At-risk teacher preparation program: A teacher preparation program
that is identified as at-risk of being low-performing by a State based
on the State's assessment of teacher preparation program performance
under Sec. 612.4.
Candidate accepted into the teacher preparation program: An
individual who has been admitted into a teacher preparation program but
who has not yet enrolled in any coursework that the institution has
determined to be part of that teacher preparation program.
Candidate enrolled in the teacher preparation program: An
individual who has been accepted into a teacher preparation program and
is in the process of completing coursework but has not yet completed
the teacher preparation program.
Content and pedagogical knowledge: An understanding of the central
concepts and structures of the discipline in which a teacher candidate
has been trained, and how to create effective learning experiences that
make the discipline accessible and meaningful for all students,
including a distinct set of instructional skills to address the needs
of English learners and students with disabilities, in order to assure
mastery of the content by the students, as described in applicable
professional, State, or institutional standards.
Effective teacher preparation program: A teacher preparation
program with a level of performance higher than a low-performing
teacher preparation program or an at-risk teacher preparation program.
Employer survey: A survey of employers or supervisors designed to
capture their perceptions of whether the first-year novice teachers
they employ or supervise were effectively prepared.
High-need school: A school that, based on the most recent data
available, meets one or both of the following:
(i) The school is in the highest quartile of schools in a ranking
of all schools served by a local educational agency (LEA), ranked in
descending order by percentage of students from low-income families
enrolled in such schools, as determined by the LEA based on one of the
following measures of poverty:
(A) The percentage of students aged 5 through 17 in poverty counted
in the most recent Census data approved by the Secretary.
(B) The percentage of students eligible for a free or reduced price
school lunch under the Richard B. Russell National School Lunch Act [42
U.S.C. 1751 et seq.].
(C) The percentage of students in families receiving assistance
under the State program funded under part A of title IV of the Social
Security Act (42 U.S.C. 601 et seq.).
(D) The percentage of students eligible to receive medical
assistance under the Medicaid program.
(E) A composite of two or more of the measures described in
paragraphs (i)(A) through (D) of this definition.
(ii) In the case of--
(A) An elementary school, the school serves students not less than
60 percent of whom are eligible for a free or reduced price school
lunch under the Richard B. Russell National School Lunch Act; or
(B) Any school other than an elementary school, the school serves
students not less than 45 percent of whom are eligible for a free or
reduced price school lunch under the Richard B. Russell National School
Lunch Act.
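For illustration only; the following editorial sketch is not part of the regulatory text. The high-need school definition above reduces to two alternative numeric tests: membership in the highest quartile of an LEA's poverty ranking, or a fixed free or reduced price lunch threshold (60 percent for elementary schools, 45 percent otherwise). The Python sketch below assumes a hypothetical per-school record in which the LEA has already chosen a single poverty measure; all field names are invented for the example, and the quartile cutoff shown is one simple reading of "highest quartile."

    # Editorial illustration; not part of the regulatory text.
    def top_quartile(schools):
        """Schools in the highest quartile of the LEA's ranking, ranked in
        descending order by percentage of students from low-income families
        (test (i) of the definition)."""
        ranked = sorted(schools, key=lambda s: s["poverty_pct"], reverse=True)
        cutoff = max(1, len(ranked) // 4)  # a simple quartile cutoff (assumption)
        return ranked[:cutoff]

    def meets_lunch_test(school):
        """Test (ii): 60 percent lunch eligibility for an elementary school,
        45 percent for any other school."""
        return school["lunch_pct"] >= (60.0 if school["is_elementary"] else 45.0)

    def is_high_need(school, lea_schools):
        """A school is high-need if it meets either test."""
        return school in top_quartile(lea_schools) or meets_lunch_test(school)

    lea = [
        {"name": "A", "poverty_pct": 72.0, "lunch_pct": 80.0, "is_elementary": True},
        {"name": "B", "poverty_pct": 35.0, "lunch_pct": 40.0, "is_elementary": False},
        {"name": "C", "poverty_pct": 55.0, "lunch_pct": 58.0, "is_elementary": True},
        {"name": "D", "poverty_pct": 20.0, "lunch_pct": 25.0, "is_elementary": False},
    ]
    print([s["name"] for s in lea if is_high_need(s, lea)])  # ['A']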
Low-performing teacher preparation program: A teacher preparation
program that is identified as low-performing by a State based on the
State's assessment of teacher preparation program performance under
Sec. 612.4.
Novice teacher: A teacher of record in the first three years of
teaching who teaches elementary or secondary public school students,
which may include, at a State's discretion, preschool students.
Quality clinical preparation: Training that integrates content,
pedagogy, and professional coursework around a core of pre-service
clinical experiences. Such training must, at a minimum--
(i) Be provided by qualified clinical instructors, including school
and LEA-based personnel, who meet established qualification
requirements and who use a training standard that is made publicly
available;
(ii) Include multiple clinical or field experiences, or both, that
serve diverse, rural, or underrepresented student populations in
elementary through secondary school, including English learners and
students with disabilities, and that are assessed using a performance-
based protocol to
demonstrate teacher candidate mastery of content and pedagogy; and
(iii) Require that teacher candidates use research-based practices,
including observation and analysis of instruction, collaboration with
peers, and effective use of technology for instructional purposes.
Recent graduate: An individual whom a teacher preparation program
has documented as having met all the requirements of the program in any
of the three title II reporting years preceding the current reporting
year, as defined in the report cards prepared under Sec. Sec. 612.3
and 612.4. Documentation may take the form of a degree, institutional
certificate, program credential, transcript, or other written proof of
having met the program's requirements. For the purposes of this
definition, a program may not use either of the following criteria to
determine if an individual has met all the requirements of the program:
(i) Becoming a teacher of record; or
(ii) Obtaining initial certification or licensure.
Rigorous teacher candidate exit qualifications: Qualifications of a
teacher candidate established by a teacher preparation program prior to
the candidate's completion of the program using an assessment of
candidate performance that relies, at a minimum, on validated
professional teaching standards and measures of the candidate's
effectiveness in curriculum planning, instruction of students,
appropriate plans and modifications for all students, and assessment of
student learning.
Student growth: The change in student achievement between two or
more points in time, using a student's scores on the State's
assessments under section 1111(b)(2) of the ESEA or other measures of
student learning and performance, such as student results on pre-tests
and end-of-course tests; objective performance-based assessments;
student learning objectives; student performance on English language
proficiency assessments; and other measures that are rigorous,
comparable across schools, and consistent with State guidelines.
Teacher evaluation measure: A teacher's performance level based on
an LEA's teacher evaluation system that differentiates teachers on a
regular basis using at least three performance levels and multiple
valid measures in assessing teacher performance. For purposes of this
definition, multiple valid measures must include data on student growth
for all students (including English learners and students with
disabilities) and other measures of professional practice (such as
observations based on rigorous teacher performance standards, teacher
portfolios, and student and parent surveys).
Teacher of record: A teacher (including a teacher in a co-teaching
assignment) who has been assigned the lead responsibility for student
learning in a subject or area.
Teacher placement rate: (i) The percentage of recent graduates who
have become novice teachers (regardless of retention) for the grade
level, grade span, and subject area in which they were prepared.
(ii) At the State's discretion, the rate calculated under paragraph
(i) of this definition may exclude one or more of the following,
provided that the State uses a consistent approach to assess and report
on all of the teacher preparation programs in the State:
(A) Recent graduates who have taken teaching positions in another
State.
(B) Recent graduates who have taken teaching positions in private
schools.
(C) Recent graduates who have enrolled in graduate school or
entered military service.
(iii) For a teacher preparation program provided through distance
education, a State calculates the rate under paragraph (i) of this
definition using the total number of recent graduates who have obtained
certification or licensure in the State during the three preceding
title II reporting years as the denominator.
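For illustration only; the following editorial sketch is not part of the regulatory text. The teacher placement rate in paragraph (i) of the definition is a percentage computed over a pool of recent graduates, and paragraph (ii) lets a State narrow that pool with specific exclusions. A minimal Python sketch, with hypothetical field names:

    # Editorial illustration; not part of the regulatory text.
    def teacher_placement_rate(recent_graduates,
                               exclude_out_of_state=False,
                               exclude_private_schools=False,
                               exclude_grad_school_or_military=False):
        """Percentage of recent graduates who became novice teachers in the
        grade level, grade span, and subject area for which they were
        prepared (regardless of retention)."""
        pool = [g for g in recent_graduates
                if not (exclude_out_of_state and g.get("out_of_state"))
                and not (exclude_private_schools and g.get("private_school"))
                and not (exclude_grad_school_or_military and g.get("grad_or_military"))]
        if not pool:
            return None  # nothing to report for this program
        placed = sum(1 for g in pool if g.get("novice_teacher_in_field"))
        return 100.0 * placed / len(pool)

    grads = [
        {"novice_teacher_in_field": True},
        {"novice_teacher_in_field": False, "grad_or_military": True},
        {"novice_teacher_in_field": False},
        {"novice_teacher_in_field": True},
    ]
    print(teacher_placement_rate(grads))                                     # 50.0
    print(teacher_placement_rate(grads, exclude_grad_school_or_military=True))  # 66.66...

Whatever exclusions a State elects, paragraph (ii) requires that they be applied consistently across every teacher preparation program the State reports on.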
Teacher preparation entity: An institution of higher education or
other organization that is authorized by the State to prepare teachers.
Teacher preparation program: A program, whether traditional or
alternative route, offered by a teacher preparation entity that leads
to initial State teacher certification or licensure in a specific
field. Where some participants in the program are in a traditional
route to certification or licensure in a specific field, and others are
in an alternative route to certification or licensure in that same
field, the traditional and alternative route components are considered
to be separate teacher preparation programs. The term teacher
preparation program includes a teacher preparation program provided
through distance education.
Teacher preparation program provided through distance education: A
teacher preparation program at which at least 50 percent of the
program's required coursework is offered through distance education.
Teacher retention rate: The percentage of individuals in a given
cohort of novice teachers who have been continuously employed as
teachers of record in each year between their first year as a novice
teacher and the current reporting year.
(i) For the purposes of this definition, a cohort of novice
teachers includes all teachers who were first identified as a novice
teacher by the State in the same title II reporting year.
(ii) At the State's discretion, the teacher retention rates may
exclude one or more of the following, provided that the State uses a
consistent approach to assess and report on all teacher preparation
programs in the State:
(A) Novice teachers who have taken teaching positions in other
States.
(B) Novice teachers who have taken teaching positions in private
schools.
(C) Novice teachers who are not retained specifically and directly
due to budget cuts.
(D) Novice teachers who have enrolled in graduate school or entered
military service.
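For illustration only; the following editorial sketch is not part of the regulatory text. The teacher retention rate asks whether each member of a novice teacher cohort was continuously employed as a teacher of record in every year from the first year through the current reporting year. A minimal Python sketch, using a hypothetical data layout that maps each teacher to the reporting years in which that teacher served as a teacher of record:

    # Editorial illustration; not part of the regulatory text.
    def teacher_retention_rate(cohort, first_year, current_year):
        """cohort maps a teacher id to the set of reporting years in which
        the teacher was employed as a teacher of record."""
        required_years = set(range(first_year, current_year + 1))
        retained = sum(1 for years in cohort.values() if required_years <= years)
        return 100.0 * retained / len(cohort)

    # Cohort first identified as novice teachers in the 2017-2018 reporting
    # year (represented here as 2017), evaluated for the 2019 reporting year.
    cohort = {
        "t1": {2017, 2018, 2019},  # continuously employed: retained
        "t2": {2017, 2019},        # gap in 2018: not retained
        "t3": {2017, 2018},        # left before 2019: not retained
    }
    print(teacher_retention_rate(cohort, 2017, 2019))  # 33.33...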
Teacher survey: A survey administered to all novice teachers who
are in their first year of teaching that is designed to capture their
perceptions of whether the preparation that they received from their
teacher preparation program was effective.
Title II reporting year: A period of twelve consecutive months,
starting September 1 and ending August 31.
Subpart B--Reporting Requirements
Sec. 612.3 What are the regulatory reporting requirements for the
Institutional Report Card?
Beginning not later than April 30, 2018, and annually thereafter,
each institution of higher education that conducts a teacher
preparation program and that enrolls students receiving title IV, HEA
program funds--
(a) Must report to the State on the quality of teacher preparation
and other information consistent with section 205(a) of the HEA, using
an institutional report card that is prescribed by the Secretary;
(b) Must prominently and promptly post the institutional report
card information on the institution's Web site and, if applicable, on
the teacher preparation program portion of the institution's Web site;
and
(c) May also provide the institutional report card information to
the general public in promotional or other materials it makes available
to prospective students or other individuals.
Sec. 612.4 What are the regulatory reporting requirements for the
State Report Card?
(a) General. Beginning not later than October 31, 2018, and
annually
thereafter, each State that receives funds under the HEA must--
(1) Report to the Secretary, using a State report card that is
prescribed by the Secretary, on--
(i) The quality of all teacher preparation programs in the State
consistent with paragraph (b)(3) of this section, whether or not they
enroll students receiving Federal assistance under the HEA; and
(ii) All other information consistent with section 205(b) of the
HEA; and
(2) Make the State report card information widely available to the
general public by posting the State report card information on the
State's Web site.
(b) Reporting of information on teacher preparation program
performance. In the State report card, beginning not later than October
31, 2019, and annually thereafter, the State--
(1) Must make meaningful differentiations in teacher preparation
program performance using at least three performance levels--low-
performing teacher preparation program, at-risk teacher preparation
program, and effective teacher preparation program--based on the
indicators in Sec. 612.5;
(2) Must provide--
(i) For each teacher preparation program, data for each of the
indicators identified in Sec. 612.5 for the most recent title II
reporting year;
(ii) The State's weighting of the different indicators in Sec.
612.5 for purposes of describing the State's assessment of program
performance; and
(iii) Any State-level rewards or consequences associated with the
designated performance levels;
(3) In implementing paragraphs (b)(1) and (2) of this section,
except as provided in paragraphs (b)(3)(ii)(D) and (b)(5) of this
section, must ensure that the performance of all of the State's teacher
preparation programs is represented in the State report card by--
(i)(A) Annually reporting on the performance of each teacher
preparation program that, in a given reporting year, produces a total
of 25 or more recent graduates who have received initial certification
or licensure from the State that allows them to serve in the State as
teachers of record for K-12 students and, at a State's discretion,
preschool students (i.e., the program size threshold); or
(B) If a State chooses a program size threshold of less than 25
(e.g., 15 or 20), annually reporting on the performance of each teacher
preparation program that, in a given reporting year, produces an amount
of recent graduates, as described in this paragraph (b)(3)(i), that
meets or exceeds this threshold; and
(ii) For any teacher preparation program that does not meet the
program size threshold in paragraph (b)(3)(i)(A) or (B) of this
section, annually reporting on the program's performance by aggregating
data under paragraph (b)(3)(ii)(A), (B), or (C) of this section in
order to meet the program size threshold except as provided in
paragraph (b)(3)(ii)(D) of this section.
(A) The State may report on the program's performance by
aggregating data that determine the program's performance with data for
other teacher preparation programs that are operated by the same
teacher preparation entity and are similar to or broader than the
program in content.
(B) The State may report on the program's performance by
aggregating data that determine the program's performance over multiple
years for up to four years until the program size threshold is met.
(C) If the State cannot meet the program size threshold by
aggregating data under paragraph (b)(3)(ii)(A) or (B) of this section,
it may aggregate data using a combination of the methods under both of
these paragraphs.
(D) The State is not required under paragraph (b)(3)(ii) of this
section to report data on a particular teacher preparation program
for a given reporting year if aggregation under paragraph (b)(3)(ii) of
this section would not yield the program size threshold for that
program; and
(4) Must report on the procedures established by the State in
consultation with a group of stakeholders, as described in paragraph
(c)(1) of this section, and on the State's examination of its data
collection and reporting, as described in paragraph (c)(2) of this
section, in the State report card submitted--
(i) No later than October 31, 2019, and every four years
thereafter; and
(ii) At any other time that the State makes a substantive change to
the weighting of the indicators or the procedures for assessing and
reporting the performance of each teacher preparation program in the
State described in paragraph (c) of this section.
(5) The State is not required under this paragraph (b) to report
data on a particular teacher preparation program if reporting these
data would be inconsistent with Federal or State privacy and
confidentiality laws and regulations.
(c) Fair and equitable methods--(1) Consultation. Each State must
establish, in consultation with a representative group of stakeholders,
the procedures for assessing and reporting the performance of each
teacher preparation program in the State under this section.
(i) The representative group of stakeholders must include, at a
minimum, representatives of--
(A) Leaders and faculty of traditional teacher preparation programs
and alternative routes to State certification or licensure programs;
(B) Students of teacher preparation programs;
(C) LEA superintendents;
(D) Small teacher preparation programs (i.e., programs that produce
fewer than a program size threshold of 25 recent graduates in a given
year or any lower threshold set by a State, as described in paragraph
(b)(3)(i) of this section);
(E) Local school boards;
(F) Elementary through secondary school leaders and instructional
staff;
(G) Elementary through secondary school students and their parents;
(H) IHEs that serve high proportions of low-income students,
students of color, or English learners;
(I) English learners, students with disabilities, and other
underserved students;
(J) Officials of the State's standards board or other appropriate
standards body; and
(K) At least one teacher preparation program provided through
distance education.
(ii) The procedures for assessing and reporting the performance of
each teacher preparation program in the State under this section must,
at minimum, include--
(A) The weighting of the indicators identified in Sec. 612.5 for
establishing performance levels of teacher preparation programs as
required by this section;
(B) The method for aggregation of data pursuant to paragraph
(b)(3)(ii) of this section;
(C) Any State-level rewards or consequences associated with the
designated performance levels; and
(D) Appropriate opportunities for programs to challenge the
accuracy of their performance data and classification of the program.
(2) State examination of data collection and reporting. Each State
must periodically examine the quality of the data collection and
reporting activities it conducts pursuant to paragraph (b) of this
section and Sec. 612.5, and, as appropriate, modify its procedures for
assessing and reporting the performance of each teacher preparation
program in the State using
the procedures in paragraph (c)(1) of this section.
(d) Inapplicability to certain insular areas. Paragraphs (b) and
(c) of this section do not apply to American Samoa, the Commonwealth of
the Northern Mariana Islands, the freely associated States of the
Republic of the Marshall Islands, the Federated States of Micronesia,
the Republic of Palau, Guam, and the United States Virgin Islands.
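For illustration only; the following editorial sketch is not part of the regulatory text. The program size threshold logic in Sec. 612.4(b)(3) can be read as a cascade: report a program on its own if it produces enough recent graduates; otherwise aggregate with similar programs from the same entity, then over up to four years, then by combining both methods; if none of that reaches the threshold, no report is required for that year. A minimal Python sketch with hypothetical counts:

    # Editorial illustration; not part of the regulatory text.
    THRESHOLD = 25  # a State may choose a lower threshold, e.g., 15 or 20

    def reporting_basis(own_count, similar_program_counts, prior_year_counts):
        """Return a label describing how the program can be reported."""
        if own_count >= THRESHOLD:
            return "report individually"
        # (b)(3)(ii)(A): aggregate with similar or broader programs of the
        # same teacher preparation entity
        if own_count + sum(similar_program_counts) >= THRESHOLD:
            return "aggregate across similar programs"
        # (b)(3)(ii)(B): aggregate over multiple years, up to four in total
        for extra_years in (1, 2, 3):
            if own_count + sum(prior_year_counts[:extra_years]) >= THRESHOLD:
                return f"aggregate over {extra_years + 1} years"
        # (b)(3)(ii)(C): combine both methods
        if own_count + sum(similar_program_counts) + sum(prior_year_counts[:3]) >= THRESHOLD:
            return "aggregate across programs and years"
        # (b)(3)(ii)(D): aggregation cannot reach the threshold
        return "no report required this year"

    print(reporting_basis(own_count=12, similar_program_counts=[8, 9],
                          prior_year_counts=[5, 4, 3]))  # aggregate across similar programs
    print(reporting_basis(own_count=6, similar_program_counts=[2],
                          prior_year_counts=[4, 3, 2]))  # no report required this year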
Sec. 612.5 What indicators must a State use to report on teacher
preparation program performance for purposes of the State report card?
(a) For purposes of reporting under Sec. 612.4, a State must
assess, for each teacher preparation program within its jurisdiction,
indicators of academic content knowledge and teaching skills of novice
teachers from that program, including, at a minimum, the following
indicators:
(1) Student learning outcomes.
(i) For each year and each teacher preparation program in the
State, a State must calculate the aggregate student learning outcomes
of all students taught by novice teachers.
(ii) For purposes of calculating student learning outcomes under
paragraph (a)(1)(i) of this section, a State must use:
(A) Student growth;
(B) A teacher evaluation measure;
(C) Another State-determined measure that is relevant to
calculating student learning outcomes, including academic performance,
and that meaningfully differentiates among teachers; or
(D) Any combination of paragraphs (a)(1)(ii)(A), (B), or (C) of
this section.
(iii) At the State's discretion, in calculating a teacher
preparation program's aggregate student learning outcomes a State may
exclude one or both of the following, provided that the State uses a
consistent approach to assess and report on all of the teacher
preparation programs in the State--
(A) Student learning outcomes of students taught by novice teachers
who have taken teaching positions in another State.
(B) Student learning outcomes of all students taught by novice
teachers who have taken teaching positions in private schools.
(2) Employment outcomes.
(i) Except as provided in paragraph (a)(2)(v) of this section, for
each year and each teacher preparation program in the State, a State
must calculate:
(A) Teacher placement rate;
(B) Teacher placement rate in high-need schools;
(C) Teacher retention rate; and
(D) Teacher retention rate in high-need schools.
(ii) For purposes of reporting the teacher retention rate and
teacher retention rate in high-need schools under paragraph
(a)(2)(i)(C) and (D) of this section--
(A) Except as provided in paragraph (a)(2)(ii)(B) of this section, the
State reports a
teacher retention rate for each of the three cohorts of novice teachers
immediately preceding the current title II reporting year.
(B)(1) The State is not required to report a teacher retention rate
for any teacher preparation program in the State report to be submitted
in October 2018.
(2) For the State report to be submitted in October 2019, the
teacher retention rate must be calculated for the cohort of novice
teachers identified in the 2017-2018 title II reporting year.
(3) For the State report to be submitted in October 2020, separate
teacher retention rates must be calculated for the cohorts of novice
teachers identified in the 2017-2018 and 2018-2019 title II reporting
years.
(iii) For the purposes of calculating employment outcomes under
paragraph (a)(2)(i) of this section, a State may, at its discretion,
assess traditional and alternative route teacher preparation programs
differently, provided that differences in assessments and the reasons
for those differences are transparent and that assessments result in
equivalent levels of accountability and reporting irrespective of the
type of program.
(iv) For the purposes of the teacher placement rate under paragraph
(a)(2)(i)(A) and (B) of this section, a State may, at its discretion,
assess teacher preparation programs provided through distance education
differently from teacher preparation programs not provided through
distance education, based on whether the differences in the way the
rate is calculated for teacher preparation programs provided through
distance education affect employment outcomes. Differences in
assessments and the reasons for those differences must be transparent
and result in equivalent levels of accountability and reporting
irrespective of where the program is physically located.
(v) A State is not required to calculate a teacher placement rate
under paragraph (a)(2)(i)(A) of this section for alternative route to
certification programs.
(3) Survey outcomes. (i) For each year and each teacher preparation
program on which a State must report, a State must collect through
survey instruments qualitative and quantitative data, including, but not
limited to, a teacher survey and an employer survey designed to capture
perceptions of whether novice teachers who are employed in their first
year of teaching possess the academic content knowledge and teaching
skills needed to succeed in the classroom.
(ii) At the State's discretion, in calculating a teacher
preparation program's survey outcomes the State may exclude survey
outcomes for all novice teachers who have taken teaching positions in
private schools, provided that the State uses a consistent approach to
assess and report on all of the teacher preparation programs in the
State.
(4) Characteristics of teacher preparation programs. Whether the
program--
(i) Is administered by an entity accredited by an agency recognized
by the Secretary for accreditation of professional teacher education
programs; or
(ii) Produces teacher candidates--
(A) With content and pedagogical knowledge;
(B) With quality clinical preparation; and
(C) Who have met rigorous teacher candidate exit qualifications.
(b) At a State's discretion, the indicators of academic content
knowledge and teaching skills may include other indicators of a
teacher's effect on student performance, such as student survey
results, provided that the State uses the same indicators for all
teacher preparation programs in the State.
(c) A State may, at its discretion, exclude from its reporting
under paragraphs (a)(1) through (3) of this section individuals who have
not become novice teachers within three years of becoming recent graduates.
(d) This section does not apply to American Samoa, the Commonwealth
of the Northern Mariana Islands, the freely associated states of the
Republic of the Marshall Islands, the Federated States of Micronesia,
the Republic of Palau, Guam, and the United States Virgin Islands.
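For illustration only; the following editorial sketch is not part of the regulatory text. The retention rate phase-in under paragraph (a)(2)(ii) of this section works out to a simple schedule: no retention rate in the October 2018 report, one cohort (2017-2018) in 2019, two cohorts in 2020, and the full three preceding cohorts from 2021 onward. A minimal Python sketch of that schedule, labeling each cohort by the reporting year in which it was first identified:

    # Editorial illustration; not part of the regulatory text.
    FIRST_COHORT_YEAR = 2017  # the cohort identified in the 2017-2018 reporting year

    def cohorts_to_report(report_year):
        """Cohort labels for the October report of report_year: up to the
        three cohorts preceding the current title II reporting year."""
        if report_year <= 2018:
            return []  # no retention rate required in the October 2018 report
        first = max(FIRST_COHORT_YEAR, report_year - 4)
        return [f"{y}-{y + 1}" for y in range(first, report_year - 1)]

    for year in (2018, 2019, 2020, 2021, 2022):
        print(year, cohorts_to_report(year))
    # 2018 []
    # 2019 ['2017-2018']
    # 2020 ['2017-2018', '2018-2019']
    # 2021 ['2017-2018', '2018-2019', '2019-2020']
    # 2022 ['2018-2019', '2019-2020', '2020-2021']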
Sec. 612.6 What must States consider in identifying low-performing
teacher preparation programs or at-risk teacher preparation programs,
and what actions must a State take with respect to those programs
identified as low-performing?
(a)(1) In identifying low-performing or at-risk teacher preparation
programs, the State must use criteria that, at a minimum, include the
indicators of academic content knowledge and teaching skills from Sec.
612.5.
(2) Paragraph (a)(1) of this section does not apply to American
Samoa, the
Commonwealth of the Northern Mariana Islands, the freely associated
states of the Republic of the Marshall Islands, the Federated States of
Micronesia, the Republic of Palau, Guam, and the United States Virgin
Islands.
(b) At a minimum, a State must provide technical assistance to low-
performing teacher preparation programs in the State to help them
improve their performance in accordance with section 207(a) of the HEA.
Technical assistance may include, but is not limited to: Providing
programs with information on the specific indicators used to determine
the program's rating (e.g., specific areas of weakness in student
learning, job placement and retention, and novice teacher and employer
satisfaction); assisting programs to address the rigor of their exit
criteria; helping programs identify specific areas of curriculum or
clinical experiences that correlate with gaps in graduates'
preparation; helping identify potential research and other resources to
assist program improvement (e.g., evidence of other successful
interventions, other university faculty, other teacher preparation
programs, nonprofits with expertise in educator preparation and teacher
effectiveness improvement, accrediting organizations, or higher
education associations); and sharing best practices from exemplary
programs.
Subpart C--Consequences of Withdrawal of State Approval or
Financial Support
Sec. 612.7 What are the consequences for a low-performing teacher
preparation program that loses the State's approval or the State's
financial support?
(a) Any teacher preparation program for which the State has
withdrawn the State's approval or the State has terminated the State's
financial support due to the State's identification of the program as a
low-performing teacher preparation program--
(1) Is ineligible for any funding for professional development
activities awarded by the Department as of the date that the State
withdrew its approval or terminated its financial support;
(2) May not include any candidate accepted into the teacher
preparation program or any candidate enrolled in the teacher
preparation program who receives aid under title IV, HEA programs in
the institution's teacher preparation program as of the date that the
State withdrew its approval or terminated its financial support; and
(3) Must provide transitional support, including remedial services,
if necessary, to students enrolled at the institution at the time of
termination of financial support or withdrawal of approval for a period
of time that is not less than the period of time a student continues in
the program but no more than 150 percent of the published program
length.
(b) Any institution administering a teacher preparation program
that has lost State approval or financial support based on being
identified as a low-performing teacher preparation program must--
(1) Notify the Secretary of its loss of the State's approval or the
State's financial support due to identification as low-performing by
the State within 30 days of such designation;
(2) Immediately notify each student who is enrolled in or accepted
into the low-performing teacher preparation program and who receives
title IV, HEA program funds that, commencing with the next payment
period, the institution is no longer eligible to provide such funding
to students enrolled in or accepted into the low-performing teacher
preparation program; and
(3) Disclose on its Web site and in promotional materials that it
makes available to prospective students that the teacher preparation
program has been identified as a low-performing teacher preparation
program by any State and has lost the State's approval or the State's
financial support, including the identity of the State or States, and
that students accepted or enrolled in the low-performing teacher
preparation program may not receive title IV, HEA program funds.
Sec. 612.8 How does a low-performing teacher preparation program
regain eligibility to accept or enroll students receiving Title IV, HEA
program funds after loss of the State's approval or the State's
financial support?
(a) A low-performing teacher preparation program that has lost the
State's approval or the State's financial support may regain its
ability to accept and enroll students who receive title IV, HEA program
funds upon demonstration to the Secretary under paragraph (b) of this
section of--
(1) Improved performance on the teacher preparation program
performance criteria in Sec. 612.5 as determined by the State; and
(2) Reinstatement of the State's approval or the State's financial
support, or, if both were lost, the State's approval and the State's
financial support.
(b) To regain eligibility to accept or enroll students receiving
title IV, HEA funds in a teacher preparation program that was
previously identified by the State as low-performing and that lost the
State's approval or the State's financial support, the institution that
offers the teacher preparation program must submit an application to
the Secretary along with supporting documentation that will enable the
Secretary to determine that the teacher preparation program has met the
requirements under paragraph (a) of this section.
PART 686--TEACHER EDUCATION ASSISTANCE FOR COLLEGE AND HIGHER
EDUCATION (TEACH) GRANT PROGRAM
■ 2. The authority citation for part 686 continues to read as follows:
Authority: 20 U.S.C. 1070g, et seq., unless otherwise noted.
Sec. 686.1 [Amended]
■ 3. Section 686.1 is amended by removing the words ``school serving low-
income students'' and adding, in their place, the words ``school or
educational service agency serving low-income students (low-income
school)''.
■ 4. Section 686.2 is amended by:
■ A. Redesignating paragraph (d) as paragraph (e).
■ B. Adding a new paragraph (d).
■ C. In newly redesignated paragraph (e):
■ i. Redesignating paragraphs (1) and (2) in the definition of ``Academic
year or its equivalent for elementary and secondary schools (elementary
or secondary academic year)'' as paragraphs (i) and (ii);
■ ii. Adding in alphabetical order the definition of ``Educational
service agency'';
■ iii. Redesignating paragraphs (1) through (7) in the definition of
``High-need field'' as paragraphs (i) through (vii), respectively;
■ iv. Adding in alphabetical order definitions of ``High-quality teacher
preparation program not provided through distance education'' and
``High-quality teacher preparation program provided through distance
education'';
■ v. Redesignating paragraphs (1) through (3) in the definition of
``Institutional Student Information Record (ISIR)'' as paragraphs (i)
through (iii), respectively;
■ vi. Redesignating paragraphs (1) and (2) as paragraphs (i) and (ii) and
paragraphs (2)(i) and (ii) as paragraphs (ii)(A) and (B), respectively,
in the definition of ``Numeric equivalent'';
■ vii. Redesignating paragraphs (1) through (3) in the definition of
``Post-baccalaureate program'' as paragraphs (i) through (iii),
respectively;
■ viii. Adding in alphabetical order a definition for ``School or
educational service agency serving low-income students (low-income
school)'';
■ ix. Removing the definition of ``School serving low-income students
(low-income school)'';
■ x. Revising the definitions of ``TEACH Grant-eligible institution'' and
``TEACH Grant-eligible program''; and
■ xi. Revising the definition of ``Teacher preparation program.''
The additions and revisions read as follows:
Sec. 686.2 Definitions.
* * * * *
(d) The following term used in this part is defined in the Title II
Reporting System regulations, 34 CFR part 612:
Effective teacher preparation program.
(e) * * *
Educational service agency: A regional public multiservice agency
authorized by State statute to develop, manage, and provide services or
programs to LEAs, as defined in section 8101 of the Elementary and
Secondary Education Act of 1965, as amended (ESEA).
* * * * *
High-quality teacher preparation program not provided through
distance education: A teacher preparation program at which less than 50
percent of the program's required coursework is offered through
distance education; and
(i) Beginning with the 2021-2022 award year, is not classified by
the State to be less than an effective teacher preparation program
based on 34 CFR 612.4(b) in two of the previous three years; or
(ii) Meets the exception from State reporting of teacher
preparation program performance under 34 CFR 612.4(b)(3)(ii)(D) or 34
CFR 612.4(b)(5).
High-quality teacher preparation program provided through distance
education: A teacher preparation program at which at least 50 percent
of the program's required coursework is offered through distance
education; and
(i) Beginning with the 2021-2022 award year, is not classified by
the same State to be less than an effective teacher preparation program
based on 34 CFR 612.4(b) in two of the previous three years; or
(ii) Meets the exception from State reporting of teacher
preparation program performance under 34 CFR 612.4(b)(3)(ii)(D) or 34
CFR 612.4(b)(5).
* * * * *
School or educational service agency serving low-income students
(low-income school): An elementary or secondary school or educational
service agency that--
(i) Is located within the area served by the LEA that is eligible
for assistance pursuant to title I of the ESEA;
(ii) Has been determined by the Secretary to be a school or
educational service agency in which more than 30 percent of the
school's or educational service agency's total enrollment is made up of
children who qualify for services provided under title I of the ESEA;
and
(iii) Is listed in the Department's Annual Directory of Designated
Low-Income Schools for Teacher Cancellation Benefits. The Secretary
considers all elementary and secondary schools and educational service
agencies operated by the Bureau of Indian Education (BIE) in the
Department of the Interior or operated on Indian reservations by Indian
tribal groups under contract or grant with the BIE to qualify as
schools or educational service agencies serving low-income students.
* * * * *
TEACH Grant-eligible institution: An eligible institution as
defined in 34 CFR part 600 that meets financial responsibility
standards established in 34 CFR part 668, subpart L, or that qualifies
under an alternative standard in 34 CFR 668.175 and--
(i) Provides at least one high-quality teacher preparation program
not provided through distance education or one high-quality teacher
preparation program provided through distance education at the
baccalaureate or master's degree level that also provides supervision
and support services to teachers, or assists in the provision of
services to teachers, such as--
(A) Identifying and making available information on effective
teaching skills or strategies;
(B) Identifying and making available information on effective
practices in the supervision and coaching of novice teachers; and
(C) Mentoring focused on developing effective teaching skills and
strategies;
(ii) Provides a two-year program that is acceptable for full credit
in a TEACH Grant-eligible program offered by an institution described
in paragraph (i) of this definition, as demonstrated by the institution
that provides the two-year program, or provides a program that is the
equivalent of an associate degree, as defined in 34 CFR 668.8(b)(1),
that is acceptable for full credit toward a baccalaureate degree in a
TEACH Grant-eligible program;
(iii) Provides a high-quality teacher preparation program not
provided through distance education or a high-quality teacher
preparation program provided through distance education that is a post-
baccalaureate program of study; or
(iv) Provides a master's degree program that does not meet the
definition of the terms ``high-quality teacher preparation program not
provided through distance education'' or ``high-quality teacher
preparation program provided through distance education''
because it is not subject to reporting under 34 CFR part 612, but that
prepares:
(A) A teacher or a retiree from another occupation with expertise
in a field in which there is a shortage of teachers, such as
mathematics, science, special education, English language acquisition,
or another high-need field; or
(B) A teacher who is using high-quality alternative certification
routes to become certified.
TEACH Grant-eligible program: (i) An eligible program, as defined
in 34 CFR 668.8, that meets the definition of a ``high-quality teacher
preparation program not provided through distance education'' or
``high-quality teacher preparation program provided through distance
education'' and that is designed to prepare an individual to teach as a
highly-qualified teacher in a high-need field and leads to a
baccalaureate or master's degree, or is a post-baccalaureate program of
study;
(ii) A program that is a two-year program or is the equivalent of
an associate degree, as defined in 34 CFR 668.8(b)(1), that is
acceptable for full credit toward a baccalaureate degree in a TEACH
Grant-eligible program; or
(iii) A master's degree program that does not meet the definition
of the terms ``high-quality teacher preparation program not provided
through distance education'' or ``high-quality teacher preparation
program provided through distance education'' because it is not subject to
reporting under 34 CFR part 612, but that prepares:
(A) A teacher or a retiree from another occupation with expertise
in a field in which there is a shortage of teachers, such as
mathematics, science, special education, English language acquisition,
or another high-need field; or
(B) A teacher who is using high-quality alternative certification
routes to become certified.
* * * * *
Teacher preparation program: A course of study, provided by an
institution of higher education, the completion of which signifies that
an enrollee has met all of the State's educational or training
requirements for initial certification or licensure to teach in the
State's elementary or secondary
schools. A teacher preparation program may be a traditional program or
an alternative route to certification or licensure, as defined by the
State.
* * * * *
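For illustration only; the following editorial sketch is not part of the regulatory text. Under the definitions added to Sec. 686.2(e), the share of required coursework offered through distance education (the 50 percent line) decides which high-quality definition applies, while, beginning with the 2021-2022 award year, a program fails both definitions if the State classified it below effective in two of the previous three years and no reporting exception applies. A minimal Python sketch with hypothetical inputs:

    # Editorial illustration; not part of the regulatory text.
    def teach_quality_category(distance_pct, award_year, last_three_classifications,
                               reporting_exception=False):
        """Return the definition the program satisfies, or None if it fails
        the performance condition. award_year 2021 stands for the 2021-2022
        award year."""
        if award_year >= 2021 and not reporting_exception:
            below = sum(1 for c in last_three_classifications
                        if c in ("low-performing", "at-risk"))
            if below >= 2:
                return None  # not high-quality under either definition
        if distance_pct >= 50:
            return "high-quality teacher preparation program provided through distance education"
        return "high-quality teacher preparation program not provided through distance education"

    print(teach_quality_category(30, 2022, ["effective", "at-risk", "effective"]))
    print(teach_quality_category(60, 2022, ["at-risk", "low-performing", "effective"]))  # None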
■ 5. Section 686.3 is amended by adding paragraph (c) to read as follows:
Sec. 686.3 Duration of student eligibility.
* * * * *
(c) An otherwise eligible student who received a TEACH Grant for
enrollment in a TEACH Grant-eligible program is eligible to receive
additional TEACH Grants to complete that program, even if that program
is no longer considered a TEACH Grant-eligible program, not to exceed
four Scheduled Awards for an undergraduate or post-baccalaureate
student and up to two Scheduled Awards for a graduate student.
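For illustration only; the following editorial sketch is not part of the regulatory text. The cap in paragraph (c) is a simple count; a hypothetical Python sketch of checking remaining TEACH Grant eligibility under Sec. 686.3:

    # Editorial illustration; not part of the regulatory text.
    def remaining_scheduled_awards(level, awards_received):
        """Scheduled Awards still available under the Sec. 686.3 caps: four
        for an undergraduate or post-baccalaureate student, two for a
        graduate student."""
        cap = 2 if level == "graduate" else 4
        return max(0, cap - awards_received)

    print(remaining_scheduled_awards("undergraduate", 3))  # 1
    print(remaining_scheduled_awards("graduate", 2))       # 0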
■ 6. Section 686.11 is amended by:
■ A. Revising paragraph (a)(1)(iii).
■ B. Adding paragraph (d).
The revision and addition read as follows:
Sec. 686.11 Eligibility to receive a grant.
(a) * * *
(1) * * *
(iii) Is enrolled in a TEACH Grant-eligible institution in a TEACH
Grant-eligible program or is an otherwise eligible student who received
a TEACH Grant and who is completing a program under Sec. 686.3(c);
* * * * *
(d) Students who received a total and permanent disability
discharge of a TEACH Grant agreement to serve or a title IV, HEA loan.
If a student's previous TEACH Grant agreement to serve or title IV, HEA
loan was discharged based on total and permanent disability, the
student is eligible to receive a TEACH Grant if the student--
(1) Obtains a certification from a physician that the student is
able to engage in substantial gainful activity as defined in 34 CFR
685.102(b);
(2) Signs a statement acknowledging that neither the new agreement
to serve for the TEACH Grant the student receives nor any previously
discharged agreement to serve which the grant recipient is required to
fulfill in accordance with paragraph (d)(3) of this section can be
discharged in the future on the basis of any impairment present when
the new grant is awarded, unless that impairment substantially
deteriorates and the grant recipient applies for and meets the
eligibility requirements for a discharge in accordance with 34 CFR
685.213; and
(3) In the case of a student who receives a new TEACH Grant within
three years of the date that any previous TEACH Grant service
obligation or title IV loan was discharged due to a total and permanent
disability in accordance with Sec. 686.42(b), 34 CFR
685.213(b)(4)(iii), 34 CFR 674.61(b)(3)(v), or 34 CFR
682.402(c)(3)(iv), acknowledges that he or she is once again subject to
the terms of the previously discharged TEACH Grant agreement to serve
or resumes repayment on the previously discharged loan in accordance
with 34 CFR 685.213(b)(7), 674.61(b)(6), or 682.402(c)(6) before
receiving the new grant.
■ 7. Section 686.12 is amended by:
■ A. In paragraph (b)(2), adding the words ``low-income'' before the word
``school''; and
■ B. Revising paragraph (d).
The revision reads as follows:
Sec. 686.12 Agreement to serve.
* * * * *
(d) Majoring and serving in a high-need field. In order for a grant
recipient's teaching service in a high-need field listed in the
Nationwide List to count toward satisfying the recipient's service
obligation, the high-need field in which he or she prepared to teach
must be listed in the Nationwide List for the State in which the grant
recipient teaches--
(1) At the time the grant recipient begins teaching in that field,
even if that field subsequently loses its high-need designation for
that State; or
(2) For teaching service performed on or after July 1, 2010, at the
time the grant recipient begins teaching in that field or when the
grant recipient signed the agreement to serve or received the TEACH
Grant, even if that field subsequently loses its high-need designation
for that State before the grant recipient begins teaching.
* * * * *
Sec. 686.32 [Amended]
■ 8. Section 686.32 is amended by:
■ A. In paragraph (a)(3)(iii)(B), adding the words ``or when the grant
recipient signed the agreement to serve or received the TEACH Grant''
after the words ``that field''; and
■ B. In paragraph (c)(4)(iv)(B), adding the words ``or when the grant
recipient signed the agreement to serve or received the TEACH Grant''
after the words ``that field''.
Sec. 686.37 [Amended]
■ 9. Section 686.37(a)(1) is amended by removing the citation
``Sec. 686.11'' and adding in its place the citation ``Sec. Sec.
686.3(c), 686.11,''.
■ 10. Section 686.40 is amended by revising paragraphs (b) and (f) to
read as follows:
Sec. 686.40 Documenting the service obligation.
* * * * *
(b) If a grant recipient is performing full-time teaching service
in accordance with the agreement to serve, or agreements to serve if
more than one agreement exists, the grant recipient must, upon
completion of each of the four required elementary or secondary
academic years of teaching service, provide to the Secretary
documentation of that teaching service on a form approved by the
Secretary and certified by the chief administrative officer of the
school or educational service agency in which the grant recipient is
teaching. The documentation must show that the grant recipient is
teaching in a low-income school. If the school or educational service
agency at which the grant recipient is employed meets the requirements
of a low-income school in the first year of the grant recipient's four
elementary or secondary academic years of teaching and the school or
educational service agency fails to meet those requirements in
subsequent years, those subsequent years of teaching qualify for
purposes of this section for that recipient.
* * * * *
(f) A grant recipient who taught in more than one qualifying school
or more than one qualifying educational service agency during an
elementary or secondary academic year and demonstrates that the
combined teaching service was the equivalent of full-time, as supported
by the certification of one or more of the chief administrative
officers of the schools or educational service agencies involved, is
considered to have completed one elementary or secondary academic year
of qualifying teaching.
■ 11. Section 686.42 is amended by revising paragraph (b) to read as
follows:
Sec. 686.42 Discharge of agreement to serve.
* * * * *
(b) Total and permanent disability. (1) A grant recipient's
agreement to serve is discharged if the recipient becomes totally and
permanently disabled, as defined in 34 CFR 685.102(b), and the grant
recipient applies for and satisfies the eligibility requirements for a
total and permanent disability discharge in accordance with 34 CFR
685.213.
(2) If at any time the Secretary determines that the grant
recipient does not meet the requirements of the three-year period
following the discharge as described in 34 CFR 685.213(b)(7), the
Secretary will notify the grant recipient that the grant recipient's
obligation to satisfy the terms of the agreement to serve is
reinstated.
(3) The Secretary's notification under paragraph (b)(2) of this
section will--
(i) Include the reason or reasons for reinstatement;
(ii) Provide information on how the grant recipient may contact the
Secretary if the grant recipient has questions about the reinstatement
or believes that the agreement to serve was reinstated based on
incorrect information; and
(iii) Inform the TEACH Grant recipient that he or she must satisfy
the service obligation within the portion of the eight-year period that
remained after the date of the discharge.
(4) If the TEACH Grant of a recipient whose TEACH Grant agreement
to serve is reinstated is later converted to a Direct Unsubsidized
Loan, the recipient will not be required to pay interest that accrued
on the TEACH Grant disbursements from the date the agreement to serve
was discharged until the date the agreement to serve was reinstated.
* * * * *
■ 12. Section 686.43 is amended by revising paragraph (a)(1) to read as
follows:
Sec. 686.43 Obligation to repay the grant.
(a) * * *
(1) The grant recipient, regardless of enrollment status, requests
that the TEACH Grant be converted into a Federal Direct Unsubsidized
Loan because he or she has decided not to teach in a qualified school
or educational service agency, or not to teach in a high-need field, or
for any other reason;
* * * * *
[FR Doc. 2016-24856 Filed 10-28-16; 8:45 am]
BILLING CODE 4000-01-P