FEDERAL COMMUNICATIONS COMMISSION
47 CFR Part 54
[WC Docket No. 10–90; FCC 19–104]
Connect America Fund
AGENCY: Federal Communications Commission.
ACTION: Final rule.
SUMMARY: In this document, the Federal
Communications Commission
(Commission) reviews performance
measures established by the Wireline
Competition Bureau (WCB), the
Wireless Telecommunications Bureau,
and the Office of Engineering and
Technology (collectively the Bureaus)
for recipients of Connect America Fund
(CAF) high-cost universal service
support to ensure that those standards
strike the right balance between
ensuring effective use of universal
service funds and granting the
flexibility providers need given the
practicalities of network deployment in
varied circumstances.
DATES: Effective January 8, 2020.
FOR FURTHER INFORMATION CONTACT:
Suzanne Yelen, Wireline Competition
Bureau, (202) 418–7400 or TTY: (202)
418–0484.
SUPPLEMENTARY INFORMATION: This is a
summary of the Commission’s Order on
Reconsideration in WC Docket No. 10–
90; FCC 19–104, adopted on October 25,
2019 and released on October 31, 2019.
The full text of this document is
available for public inspection during
regular business hours in the FCC
Reference Center, Room CY–A257, 445
12th Street SW, Washington, DC 20554
or at the following internet address:
https://docs.fcc.gov/public/attachments/FCC-19-104A1.pdf
I. Introduction
1. The Commission has long
recognized that ‘‘[a]ll Americans
[should] have access to broadband that
is capable of enabling the kinds of key
applications that drive the
Commission’s efforts to achieve
universal broadband, including
education (e.g., distance/online
learning), health care (e.g., remote
health monitoring), and person-to-person communications (e.g., Voice over
internet Protocol (VoIP) or online video
chat with loved ones serving overseas).’’
To that end, the Commission has
invested significant Universal Service
Fund support for the deployment of
broadband-capable networks in high
cost, rural areas.
2. But only fast and responsive
networks will allow Americans to fully
realize the benefits of connectivity. That
is why the Commission requires
recipients of universal service support
in high cost areas to deploy broadband
networks capable of meeting minimum
service standards. These standards
protect taxpayers’ investment and
ensure that carriers receiving this
support deploy networks that meet the
performance standards they promised to
deliver to rural consumers. At the same
time, the Commission recognizes that
each carrier faces unique circumstances,
and that one set of prescriptive rules
may not make sense for every one of
them. To accommodate this practical
reality, the Commission’s rules provide
flexibility, taking into account the
operational, technical, and size
differences among providers when
establishing minimum standards, to
ensure that even the smallest rural
carriers can meet testing requirements
without facing excessive burdens.
3. In the Order on Reconsideration,
the Commission reviews performance
measures established by the Bureaus for
recipients of CAF high-cost universal
service support to ensure that those
standards strike the right balance
between ensuring effective use of
universal service funds and granting
the flexibility providers need given the
practicalities of network deployment in
varied circumstances. Several petitions
for reconsideration and applications for
review of the Performance Measures
Order, 83 FR 42052, August 20, 2018,
propose changes to these performance
measures. Here, the Commission rejects
the proposed changes where it finds that
the Bureaus’ approach strikes the right
balance. Where the Commission finds
that the Bureaus’ approach does not—
for example, where it concludes that
greater flexibility is warranted than was
offered under the Bureaus’ original
methodology—the Commission adjusts
its rules accordingly. Finally, the
Commission clarifies the Bureaus’
approach where doing so will help
resolve stakeholder confusion.
II. Discussion
4. In the Order on Reconsideration,
the Commission reexamines each of the
described performance measure
requirements in this document. As a
result, the Commission adopts several
modifications. The Commission believes
these changes will alleviate concerns
expressed by carriers by increasing the
time for carriers to meet certain
deadlines and further minimizing the
costs associated with compliance, yet
still ensure that carriers meet their
performance obligations. In short, the
refinements to the Bureaus’ approach
adopted in the Performance Measures
Order will further the overarching goal
of the Performance Measures Order;
namely, to ensure that carriers deliver
broadband services with the speed and
latency required while providing
flexibility to enable carriers of all sizes
to choose how to conduct the required
performance testing in the manner most
appropriate for each individual carrier.
5. Under the Performance Measures
Order, all high-cost support recipients
serving fixed locations must perform
speed and latency tests from the
customer premises of an active
subscriber to a remote test server located
at or reached by passing through an
FCC-designated internet Exchange Point
(IXP). In the USF/ICC Transformation
Order, 76 FR 73830, November 29, 2011,
the Commission decided that speed and
latency should be measured on each
eligible telecommunications carrier’s
(ETC) access network from the end-user interface to the nearest internet
access point, i.e., the internet gateway,
which is the closest peering point
between the broadband provider and the
public internet for a given consumer
connection. Subsequently, in the CAF
Phase II Price Cap Service Obligation
Order, 78 FR 70881, November 27, 2013,
WCB stated that latency should be
tested to an IXP, defined as occurring in
any of ten different U.S. locations,
almost all of which are locations used
in the MBA program because they are
geographically distributed major peering
locations. The Bureaus expanded the
list to permit testing to six additional
metropolitan areas to ensure that most
mainland U.S. locations are within 300
miles of an FCC-designated IXP and that
all are within approximately 500 air
miles of one. Further, the Bureaus
permitted providers to use any FCC-
designated IXP for testing purposes,
rather than limiting testing to the
provider’s nearest IXP. Providers
serving non-contiguous areas greater
than 500 air miles from an FCC-designated IXP were also permitted to
conduct testing between the customer
premises and the point at which traffic
is aggregated for transport to the
continental U.S.
6. The Commission agrees with the
Bureaus that the speed and latency of
networks of carriers receiving support
through the various high-cost support
mechanisms should be tested between
the customer premises of an active
subscriber and an FCC-designated IXP.
This approach is consistent with the
Commission’s determination in the
USF/ICC Transformation Order that
‘‘actual speed and latency [must] be
measured on each ETC’s access network
from the end-user interface to the
nearest internet access point.’’
Measuring the performance of a
consumer’s connection to an IXP better
reflects the performance that a carrier’s
customers experience. As the
Commission observed when it first
adopted performance measures for CAF
Phase II model-based support recipients,
‘‘[t]esting . . . on only a portion of the
network connecting a consumer to the
internet core will not show whether that
customer is able to enjoy high-quality
real-time applications because it is
network performance from the
customer’s location to the destination
that determines the quality of the
service from the customer’s
perspective.’’
7. The Commission therefore
disagrees with those commenters
arguing that it should require testing
over a shorter span. For example, NTCA
seeks modification of the testing
requirements to account for
performance only on ‘‘portions of the
network owned by the USF recipient
and the next-tier ISP from which that
USF recipient procures capacity
directly.’’ NTCA argues that requiring
testing to an FCC-designated IXP
imposes liability on a carrier for
conditions beyond its control and
violates the Act by applying obligations
to parts of the network that are not
supported by USF funding.
Alternatively, NTCA requests that the
Commission provide a ‘‘safe harbor’’ to
protect a carrier from off-network issues
that affect its test measurements. WTA
similarly contends that testing to an
FCC-designated IXP makes carriers
responsible for portions of the
connection over which they have no
control. WTA instead proposes a two-tiered framework consisting of a
network-only test for purposes of high-
cost compliance and customer-to-IXP
testing to respond to customer
complaints, with unresolved network-only problems being subject to non-compliance support reductions. Finally,
Vantage Point seeks clarity on the
initiation point for performance testing
within the customer premises, and
contends that the endpoint for testing
should be at or reached by passing
through a carrier’s next tier ISP.
8. The Commission disagrees with
petitioners that testing to an FCC-designated IXP, rather than the edge of
a carrier’s network, makes a carrier
responsible for network elements it does
not control, and the Commission rejects
testing only on a carrier’s own network
as inadequate. As the Bureaus
explained, carriers—even smaller
ones—do have some influence and
control over the type and quality of
internet transport they purchase. The
Commission expects a carrier to
purchase transport of a sufficient quality
that enables it to provide the requisite
level of service expected by consumers
and required by the Commission’s rules.
However, in the event a carrier fails to
meet its performance obligations
because the only transport available
would demonstrably degrade the
measured performance of the carrier’s
network, the carrier can seek a waiver
of the performance measures
requirements. The Commission is
similarly unpersuaded by WTA’s two-tiered testing proposal. Adopting WTA’s
proposal to conduct its required tests
over only half of the full testing span
would only provide the Commission
with insight into the customer
experience on half of the network
between the customer and the IXP.
Given that the Commission’s aim is to
ensure that customers are able to enjoy
high-quality real-time applications, it
declines to adopt WTA’s proposed
approach.
9. Finally, the Commission provides
additional clarity on both the initiation
point and endpoint for testing. As the
Commission has noted in this
document, one of the chief purposes for
implementing performance
requirements is to ensure that customers
are receiving the expected levels of
service that carriers have committed to
providing. Testing from any place other
than the customer side of any carrier
network equipment used in providing a
customer’s connection may skew the
testing results and not provide an
accurate reflection of the customer’s
broadband experience. As Vantage Point
notes, testing in this manner would
make it ‘‘difficult to ensure that the test
was being performed on the network
path actually used by the customer.’’
Thus, the Commission clarifies that
testing should be conducted from the
customer side of any network
equipment that is being used.
10. Definition of FCC-designated
internet Exchange Point. Given the
Commission’s commitment to testing
the performance of connections between
consumers and FCC-designated IXPs, it
also takes this opportunity to clarify
which facilities qualify as FCC-designated IXPs for purposes of
performance testing.
11. USTelecom, ITTA, and WISPA
request clarification that ETCs are
permitted to use ‘‘the nearest internet
access point,’’ as specified in the USF/
ICC Transformation Order, which may
not necessarily be a location specified in
the Performance Measures Order. They
also seek clarification that ETCs may
test to servers that are within the
provider’s own network (i.e., on-net
servers). In subsequent filings, the
petitioners suggest that there should be
a criteria-based approach to defining the
testing endpoint. Specifically, they
propose that testing occur ‘‘from the
end-user interface to the first public
internet gateway in the path of the CAF-supported customer that connects
through a transitive internet
Autonomous System,’’ (ASN) and ‘‘that
the Commission establish a safe harbor
where the transitive internet AS which
the gateway hosts includes one or more
router(s) that advertise(s) [ASN]
organizations that are listed on the
Center for Applied internet Data
Analysis (CAIDA) ‘AS Organization
Rank List.’ ’’ The petitioners propose
that testing occurring through a ‘‘safe
harbor’’ ASN ‘‘would be considered
valid without further inquiry.’’
12. The Commission concludes that
the Performance Measures Order’s
designation of certain metropolitan
areas as qualifying IXPs is too
ambiguous. It is not clear where the
boundaries of a designated IXP
metropolitan area begin and end. Thus,
drawing on the petitioners’ proposal,
the Commission now provides a revised
definition of FCC-designated IXP that is
more specific and better designed to
account for the way internet traffic is
routed. For testing purposes, the
Commission defines an FCC-designated
IXP as any building, facility, or location
housing a public internet gateway that
has an active interface to a qualifying
ASN. Such a building, facility, or
location could be either within the
provider’s own network or outside of it.
The Commission uses the term
‘‘qualifying ASN’’ to ensure that the
ASN can properly be considered a
connection to the public internet. The
Commission notes that in the USF/ICC
Transformation Order, it found that the
internet gateway is the ‘‘peering point
between the broadband provider and the
public internet’’ and that public internet
content is ‘‘hosted by multiple service
providers, content providers and other
entities in a geographically diverse
(worldwide) manner.’’ The criteria the
Commission uses to determine FCC-designated IXPs are designed to ensure
that the peering point is sufficiently
robust such that it can be considered a
connection to the public internet and
not simply another intervening
connection point. The Commission
designates 44 major North American
ASNs using CAIDA’s ranking of
Autonomous Systems and other
publicly available resources as ‘‘safe
harbors.’’ The Commission directs the
Bureaus to update this list of ASNs
periodically using the CAIDA ranking of
ASNs, PeeringDB, and other publicly
available resources. Providers may test
to a test server located at or reached by
passing through any building, facility,
or location housing a public internet
gateway that has an active interface to
one of these qualifying ASNs or may
petition the Bureaus to add additional
ASNs to the list. The Bureaus will
determine whether any ASN included
in a carrier petition is sufficiently
similar to qualifying ASNs that it should
be added to the list of qualifying ASNs.
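For illustration only, the following sketch shows how a provider might check, against a locally maintained copy of the Bureaus' safe-harbor list, whether the path to a candidate test server reaches a qualifying ASN. The ASN entries shown are hypothetical placeholders, and the AS path itself would come from the provider's own routing data or traceroute and WHOIS tooling; the authoritative list is the one published and updated by the Bureaus.

```python
# Illustrative sketch only: checks whether the AS path from a subscriber to a
# candidate test server traverses one of the FCC-designated safe-harbor ASNs.
# The ASNs below are hypothetical placeholders, not the Bureaus' actual list.

QUALIFYING_ASNS = {7018, 3356, 174}  # hypothetical entries from the safe-harbor list

def path_reaches_qualifying_asn(as_path: list[int]) -> bool:
    """True if any AS hop on the way to the test server is a qualifying ASN."""
    return any(asn in QUALIFYING_ASNS for asn in as_path)

# Example: a made-up AS path from the subscriber's gateway to the test server.
candidate_path = [64512, 64513, 3356]
if path_reaches_qualifying_asn(candidate_path):
    print("Test server is located at or reached through an FCC-designated IXP.")
else:
    print("Path does not reach a qualifying ASN; choose a different test server.")
```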
13. The Bureaus also established a
daily testing period for speed and
latency tests, requiring carriers to
conduct tests between 6:00 p.m. and
12:00 a.m. local time, including
weekends. The testing window the
Bureaus adopted reflects a slight
expansion of the testing window used
for the MBA. The Bureaus reasoned that
MBA data indicated a peak period of
internet usage every evening but noted
that they would revisit this requirement
periodically ‘‘to determine whether
peak internet usage times have changed
substantially.’’
14. Petitioners and commenters urge
the Commission to reconsider the daily
test period requirement to account for
the usage patterns of rural consumers, as
well as the conditions and
characteristics of rural areas. WTA notes
that the MBA data cited by the Bureaus
likely reflect the usage patterns of urban
consumers, rather than consumers in
rural areas that ‘‘are typically making
personal and business use of their
household internet connections
throughout the day.’’ WTA contends
that there is likely to be increased
congestion on rural networks during the
time period adopted by the Bureaus,
potentially resulting in an inaccurate or
unrepresentative testing of the carrier’s
service. WTA also argues that
mandating testing during evening hours
and weekends requires rural carriers to
adjust their regular daytime schedule,
creating staffing and financial hardships
and potentially preventing them from
responding to other customer service
issues. ITTA supports this point, noting
that ‘‘evening and weekend test hours
require RLECs to re-schedule one or
more technicians from their regular
daytime maintenance and installation
duties and pay them premium or
overtime wages.’’ ITTA also challenges
the expansion of the daily test period
from 7–11 p.m. to 6 p.m.–12
a.m., and requests flexibility as to the
specific hours that testing may be
conducted.
15. The Commission declines to
revisit the daily testing period at this
time. WTA provides no data to support
its claim that rural consumers are more
active users of broadband service during
daytime hours than urban consumers.
Moreover, the Commission’s review of
MBA data from more rural areas
indicates that these areas have similar
peak periods to urban areas. As the
Commission has stated many times, a
primary goal for universal service is to
ensure that customers in rural areas
receive the same level of service as
those in urban areas. By establishing the
same testing window for urban and
rural areas, the Commission can confirm
that consumers in rural areas are not
receiving substandard service as
compared to consumers in urban areas
during the same time periods.
Additionally, WTA’s concern that
testing during the peak period may
degrade a consumer’s broadband
experience is unfounded. As the
Commission previously observed, the
small amount of data required for speed
testing will have no noticeable effect on
network congestion. The Commission
reminds carriers that it provides them
the flexibility to choose whether to
stagger their tests over the course of the
testing period, so long as they do not
violate any other testing requirements.
16. The Commission also disagrees
with WTA and ITTA that the current
daily testing period will require rural
carriers to devote additional personnel
hours to implement the Commission’s
performance testing requirements. Once
the testing regime is implemented and
carriers have installed the necessary
technology and software to test the
speed and latency of their networks on
a routine basis, the Commission does
not anticipate that extensive staffing
will be required to monitor the testing
process. Because the technological
testing options that the Commission has
allowed carriers to use are all relatively
automated, carriers should not have to
adjust schedules to ensure staffing
during evenings and weekends.
Additionally, the Commission notes that
the Bureaus expanded the testing period
from 7–11 p.m. to 6 p.m.–12
a.m. based on several comments from
parties that requested a longer testing
period. Adding one additional hour on
both the front and back end of the
testing period allows a carrier’s testing
to capture the ramp up and ramp down
periods before and after peak time,
providing a more accurate picture of
whether customers are receiving the
required level of service. The
Commission also reminds parties that
the Bureaus committed to revisiting
periodically the daily testing window to
ensure that the established hours
continue to reflect the usage habits of
consumers.
17. The Bureaus required a specified
number of speed tests during each
testing window. In particular, the
Performance Measures Order required a
minimum of one download test and one
upload test per testing hour at each
subscriber test location. Providers were
required to start separate download and
upload speed tests at the beginning of
each test hour window, and, after
deferring a test due to cross-talk (e.g.,
traffic to and from the consumer’s
location that could impact performance
testing), providers were required to
reevaluate whether the consumer load
exceeds the cross-talk threshold every
minute until the speed test can be run
or the one-hour test window ends.
18. In their Petition for
Reconsideration, USTelecom, ITTA, and
WISPA request clarification that
recipients are afforded flexibility in
commencing hourly tests. They argue
that ‘‘[i]t is not clear from the
Performance Measures Order . . .
whether ‘the beginning’ of a test hour
window requires a recipient to
commence testing at the top of the hour,
or whether testing must commence for
all test subscribers at exactly the same
time.’’ The petitioners state that carriers
should only be required to complete the
test within the hour, and they should be
able to retry tests as frequently as their
systems allow until a successful test is
administered, rather than retrying
deferred tests every minute. Noting that
‘‘there should be no practical difference
as to whether testing occurs at the top,
middle, or closer to [the] end of a testing
window,’’ NTCA, NRECA, and UTC
support the petitioners’ request that
‘‘the Commission reconsider the
discrete and specific times at which
testing is to be conducted within each
hour.’’ Vantage Point likewise proposes
that the Commission permit carriers to
distribute speed tests within testing
hours in a way that minimizes network
impact; otherwise, Vantage Point
asserts, requiring all speed testing to
start at the beginning of each hour
would significantly burden test servers
such that test results would not be
representative of customers’ normal
experience.
19. The Commission clarifies that
providers do not have to begin speed
tests at the beginning of each test hour,
as petitioners suggest. In particular, the
Commission agrees with Vantage Point
that providing greater flexibility in this
regard will further minimize the impact
of any potential burden on the test
servers during speed testing. However,
to ensure that there is enough data on
carriers’ speed performance, providers
must still conduct and report at least
one download test and one upload
speed test per testing hour at each
subscriber test location, with one
exception. A carrier that begins
attempting speed tests within the first
fifteen minutes of a testing hour, and
repeatedly retries and defers the test at
one-minute intervals due to consumer
load meeting the adopted cross-talk
thresholds (i.e., 64 Kbps for download
tests or 32 Kbps for upload tests), may
report that no test was successfully
completed during the test hour because
of cross-talk. A provider that does not
attempt a speed test within the first 15
minutes of the hour and/or chooses to
retry tests in greater than one-minute
intervals must, however, conduct and
report a successful speed test for the
testing hour regardless of cross-talk.
Although this approach continues to
differ slightly from MBA practice, the
Commission believes that it minimizes
the possibility of network congestion at
the beginning of the testing hour while
ensuring that it will have access to
sufficient testing data.
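For illustration, the following is a minimal sketch of the clarified hourly speed-test logic: attempt a test, defer and retry at one-minute intervals while consumer load meets the cross-talk thresholds (64 Kbps download, 32 Kbps upload), and report an untested hour because of cross-talk only if attempts began within the first 15 minutes of the hour. The measure_consumer_load_kbps and run_speed_test hooks are hypothetical placeholders for a provider's own test client.

```python
import time

CROSS_TALK_KBPS = {"download": 64, "upload": 32}  # deferral thresholds from the Order
RETRY_INTERVAL_SEC = 60                           # reevaluate consumer load every minute
FIRST_ATTEMPT_DEADLINE_SEC = 15 * 60              # attempts must begin in first 15 minutes

def run_hourly_speed_test(direction, hour_start,
                          measure_consumer_load_kbps, run_speed_test):
    """Attempt one speed test for a single test hour, deferring on cross-talk."""
    first_attempt = time.time()
    while time.time() < hour_start + 3600:
        if measure_consumer_load_kbps(direction) < CROSS_TALK_KBPS[direction]:
            # Load is below the cross-talk threshold: run and record the test.
            return {"status": "completed", "result": run_speed_test(direction)}
        time.sleep(RETRY_INTERVAL_SEC)  # defer and retry at one-minute intervals
    # The hour ended with every attempt deferred. Only a carrier that began
    # attempting within the first 15 minutes may report the hour as untested
    # because of cross-talk; otherwise a successful test was still required.
    if first_attempt - hour_start <= FIRST_ATTEMPT_DEADLINE_SEC:
        return {"status": "no_test_cross_talk"}
    return {"status": "noncompliant_missing_test"}
```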
20. The Performance Measures Order
established specific test intervals within
the daily test period for latency testing,
requiring carriers to conduct ‘‘a
minimum of one discrete test per
minute, i.e., 60 tests per hour, for each
of the testing hours, at each subscriber
test location, with the results of each
discrete test recorded separately.’’
Recognizing that cross-talk could
negatively affect the test results, the
Bureaus provided flexibility for carriers
to postpone a latency test in the event
that the consumer load exceeded 64
Kbps downstream and to reevaluate the
consumer load before attempting the
next test.
21. Several parties express concern
with these requirements and request
reconsideration of the latency testing
framework. USTelecom, ITTA, and
WISPA jointly contend that the Bureaus
failed to provide adequate notice for the
frequency of latency testing and did not
justify departing from the MBA practice
of combining speed and latency testing
under a unified framework. These
parties further argue that requiring
latency testing once per minute will be
administratively burdensome for
carriers by preventing them from
combining the instructions for testing
into a single process and potentially
overloading and disrupting some testing
methods. Instead, USTelecom, ITTA,
and WISPA propose that the number of
latency tests should be reduced to
match the frequency of speed testing.
Midcontinent also supports aligning the
frequency of speed and latency testing
requirements.
22. AT&T contends that testing once
per minute ‘‘is unnecessary and
arbitrary and capricious’’ and likewise
argues that the Commission should
permit carriers to test latency only once
per hour. AT&T supports its proposal by
providing internal data purporting to
demonstrate no material difference
between testing latency once per minute
versus testing once per hour. As a result,
AT&T proposes that the Commission
require a minimum of one latency test
per hour, but provide flexibility to allow
carriers to test more frequently if they
desire. ITTA concurs with AT&T’s
proposed approach.
23. Conversely, NTCA, NRECA, and
UTC support the latency testing
framework adopted by the Bureaus.
These parties observe that aligning the
frequency of speed and latency tests
would ‘‘risk undermining the
Commission’s statutory mandate to
ensure reasonably comparable services
in rural and urban areas’’ because speed
does not require as frequent testing as
latency in order to demonstrate
compliance. In response, USTelecom,
ITTA, and WISPA again argue that the
Bureaus failed to adequately address the
Administrative Procedure Act’s notice
obligations or present any legal or
factual basis for requiring substantially
more latency tests than speed tests.
24. The Commission declines to
revise the determination of the Bureaus
that carriers must conduct latency
testing once per minute. Regarding
parties’ procedural arguments, the
Commission notes that, in the two
Public Notices seeking comment on the
performance measures, the Bureaus
specifically explained that adopting the
Measuring Broadband America (MBA)
testing was under consideration. Indeed,
many of the performance testing
requirements were derived from or
influenced by the Commission’s
experience with MBA testing. As such,
parties had ample notice that the testing
regime adopted by the Bureaus, which
is a less burdensome variation of the
MBA testing, was a potential option.
Any argument to the contrary is
unfounded.
25. Complaints that the frequency of
latency testing will affect network
performance also are speculative. The
latency testing frequency framework
ultimately adopted by the Bureaus is
substantially less extensive than the
MBA program testing. For example,
MBA testing sends approximately 2,000
User Datagram Protocol (UDP) packets
per hour, and these 2,000 individual
results are summarized as a single
reporting record that reflects all 2,000
tests. By contrast, the Bureaus adopted
testing of 60 UDP packets per hour,
approximately 3% of the typical MBA
load. The more intensive MBA test
frequency has not been found to pose
any technical or other difficulties, so
there is no reason to believe that the
vastly lower frequency of latency testing
adopted by the Bureaus will cause
concerns. Requiring 60 UDP packets per
hour rather than 2,000 balances the
need for sufficient testing while
minimizing the burden of testing on
carriers.
26. The Commission also agrees with
the Bureaus that the disparity in testing
frequency between speed and latency
reflects the different type of testing
necessary to determine whether carriers
are meeting the required benchmarks.
The purpose of speed testing is to
determine if the network is properly
provisioned to furnish the required
speed and whether the network
provides sufficient throughput to handle
uploads and downloads at particular
speeds and times. Because of the burden
that such testing puts on a carrier’s
network, the Bureaus adopted the
minimum number of tests necessary to
ensure that consumers are receiving
broadband service at required speed
levels. On the other hand, latency
testing indicates whether there is
sufficient capacity in the network to
handle the level of traffic, which is of
particular importance when the network
is experiencing high traffic load. In this
respect, latency is similar to a pulse rate
and can vary substantially as a result of
several factors. Even if all these factors
are unknown, frequently monitoring
latency determines the ability of the
network to handle various
circumstances and factors that are
affecting it. As NTCA, NRECA, and UTC
explain:
[T]here is logic in a protocol that tests for
latency more frequently than speed. The
impact of latency is measured in and
discernible by milliseconds: the frequency of
testing aims to illuminate whether variables
that perforate performance are present. In
contrast, speed contemplates a steadier
aspect of the network facility, and therefore
does not require as frequent testing to
demonstrate compliance. Therefore, in as
much as latency-sensitive services and
applications (including but not limited to
voice) are affected by millisecond variables,
NTCA, NRECA and UTC urge the
Commission to maintain its rigorous
standards for latency testing.
And, in any event, conducting more
tests for latency is to the carrier’s
benefit, because of the variability of
latency and resulting greater likelihood
that outlier failures will not affect the
overall rate.
27. The Commission appreciates
AT&T’s willingness to share its internal
data and analysis. However, AT&T’s
data reflect only the capabilities of its
own network and consisted of a very
small sample set—18 customers for one
peak period in one instance and
‘‘almost’’ 100 subscribers for one peak
period in the other. The Commission
also notes that even AT&T’s data
demonstrated a substantial variation
between testing once per hour and once
per minute. For example, in its testing,
AT&T found that, for customers served
by varying technologies, per-minute
latency testing showed 1.17% of tests
above 100 ms, while once-per-hour
testing showed 3.04% of tests above
100 ms. A difference of nearly 2
percentage points, when the latency
standard allows only 5%, is substantial.
28. Analysis undertaken by
Commission staff confirms the
importance of more frequent testing to
account for the variability associated
with latency. Commission staff
compared the conclusions that AT&T
drew from its data (conclusions
supported by ITTA) with what the much
larger MBA data set demonstrates. This analysis indicates
that the risk of false positives and false
negatives (i.e., sample test results
indicate that a carrier fails, when given
overall network performance, it should
have passed, or that a carrier passes,
when given overall network
performance, it should have failed)
varies significantly based on the number
of measurements per hour. Because the
Commission’s performance standard for
latency requires 95% of the latency
measurements to be less than or equal
to 100 ms, a carrier would fail the
standard if more than 5% of its latency
measurements are greater than 100 ms.
In general, staff’s analysis found that a
greater number of measurements
reduces the impact of data outliers and
makes false positives and false negatives
less likely. For example, a single 200 ms
data outlier among a sample of 10
latency measurements that otherwise
are all under 100 ms would result in the
carrier’s failing to meet the 95%
threshold (i.e., only 9 out of 10 or 90%
of the measurements would be at or
under 100 ms). However, a single data
outlier of 200 ms in a sample of 100
latency measurements would not, in the
absence of at least five other
measurements exceeding 100 ms, cause
the carrier to fail (i.e., 99 out of 100 or
99% of the measurements would be at
or under 100 ms).
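The arithmetic in the example above can be checked directly; the following sketch simply applies the 95% at-or-below-100-ms standard to the two hypothetical samples described, one outlier of 200 ms among 10 measurements and one among 100.

```python
# A worked check of the outlier arithmetic above: one 200 ms outlier fails a
# 10-measurement sample against the 95% / 100 ms standard but not a
# 100-measurement sample. Illustrative only.

LATENCY_LIMIT_MS = 100
PASS_FRACTION = 0.95

def passes_latency_standard(measurements_ms):
    """True if at least 95% of measurements are at or below 100 ms."""
    share_ok = sum(m <= LATENCY_LIMIT_MS for m in measurements_ms) / len(measurements_ms)
    return share_ok >= PASS_FRACTION

sample_10 = [50] * 9 + [200]     # 9 of 10 (90%) at or under 100 ms -> fails
sample_100 = [50] * 99 + [200]   # 99 of 100 (99%) at or under 100 ms -> passes

print(passes_latency_standard(sample_10))    # False
print(passes_latency_standard(sample_100))   # True
```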
29. Additionally, staff analysis of MBA
data indicated that the distribution of
latency among carriers varies widely
even within the same minute. This
means that latency varies significantly
depending upon the traffic on the
network at any given time and does not
vary in the same way for each carrier or
even within each day for each carrier.
Because of the countless number of
distributions observed among carriers
reflected by the MBA data, the
Commission concludes that a smaller
number of observations would not yield
reliable testing results. Thus, more
testing provides the Commission with
greater ability to detect bad performance
in cases where a carrier’s latency is
consistently high. In other words, since
the likelihood of failing or passing the
Commission’s latency standard
depends, to some degree, on random
noise, the more measurements taken by
a carrier, the less likely that random
factors would cause it to fail the
standard.
30. The following figure
demonstrates staff’s analysis of the
estimated probability of failure and
associated risk of false positive or false
negative results with different numbers
of measurements from a range of latency
distributions observed in the MBA data.
Each box (bar) represents the estimated
probability of failure for a given latency
distribution. The difference in the
probability of failure between N number
of measurements and N=2000 is the
estimated risk of a false positive (the test
result indicates that a carrier fails when
it should have passed) and a false
negative (the test result indicates that a
carrier passes when it should have
failed). As demonstrated, there is a
much higher risk of a false positive or
false negative under AT&T’s proposed
once per hour latency measurement as
compared to a moderate risk from 60
measurements per hour.
[Figure ER09DE19.001: Estimated probability of failure, by latency distribution, for different numbers of latency measurements per hour.]
Thus, staff’s analysis shows that,
given the high variability of latency, one
of two things would occur if the
Commission required only one
measurement per hour: either a few
extreme measurements would cause a
carrier to fail the standard when, in fact,
it should pass given its overall
performance, or the Commission would
be unable to capture consistent poor
performance by a carrier that should fail
based on the overall performance of its
network. As a result, a moderate-risk
approach of 60 measurements per hour
strikes a balance between the burden of
testing on carriers and the risk of failure
by carriers caused by uncertainty.
31. Finally, the Commission notes
that some parties may misunderstand
what exactly constitutes a latency test
for purposes of the performance
measures. Specifically, USTelecom
states that, ‘‘[t]esting every minute may
also overload some testing methods and
cause testing to be disrupted,’’ implying
that a carrier must start and stop a
latency test every minute within a test-hour. While the Commission does not
believe this interpretation is consistent
with the intent of the Performance
Measures Order, it provides greater
clarity here on what is considered a
sufficient latency test to assuage
concerns about the number of latency
tests per hour. As the Bureaus described
in the Performance Measures Order, a
‘‘test’’ constitutes a ‘‘single, discrete
observation or measurement of speed or
latency.’’ While carriers may choose to
continuously start and stop latency
testing every minute and record the
specific result, the Commission clarifies
that there is no requirement to conduct
latency testing in this manner. Instead,
carriers may continuously run the
latency testing software over the course
of a test-hour and record an observation
or measurement every minute of that
test-hour. If a carrier transmits one
packet at a time for a one-minute
measurement, the carrier should report
the result of that packet as one
observation. However, some
applications, such as ping, commonly
send three packets and only report
summarized results for the minimum,
mean, and maximum packet round trip
time and not individual packet round
trip time. If this is the case, the carrier
should report the mean as the result of
this observation. If the carrier sends
more than one packet and the testing
application allows for individual round
trip time results to be reported for each
packet, then the carrier must report all
individual measurements for each
packet. Such an approach plainly fits
within the definition of ‘‘test’’ adopted
by the Bureaus in the Performance
Measures Order and does not require
constant starting and stopping of the
latency testing software. In sum, carriers
have the flexibility to choose how to
conduct their latency testing, so long as
one separate, discrete observation or
measurement is recorded each minute of
the specific test-hour.
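For illustration, the following sketch records one latency observation per minute in the manner clarified above: if the testing tool reports individual packet round-trip times, each is reported; if, like a ping-style utility sending three packets, it reports only summarized results, the mean is reported as that minute's observation. The result formats shown are assumptions about a provider's tooling.

```python
# A minimal sketch of recording one latency observation per minute of a test
# hour. The probe result is assumed to be either a list of per-packet
# round-trip times or a ping-style summary containing only min/mean/max.

def record_minute_observation(probe_result):
    """Return the value(s) to report for one minute's latency measurement."""
    if isinstance(probe_result, list):
        # The tool reports individual packet RTTs: report every one of them.
        return probe_result
    # The tool reports only summarized min/mean/max (e.g., a three-packet
    # ping-style probe): report the mean as the minute's observation.
    return [probe_result["mean_ms"]]

# Example: accumulating observations over the minutes of a test hour.
hourly_results = []
for minute_result in ([12.3, 11.8, 13.1],
                      {"min_ms": 11.0, "mean_ms": 12.5, "max_ms": 14.2}):
    hourly_results.extend(record_minute_observation(minute_result))
print(hourly_results)  # [12.3, 11.8, 13.1, 12.5]
```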
32. The Bureaus required that carriers
test a maximum of 50 subscriber
locations per required service tier
offering per state, depending on the
number of subscribers a carrier has in a
state, randomly selected every two
years. The Performance Measures Order
included scaled requirements
permitting smaller carriers (i.e., carriers
with fewer than 500 subscribers in a
state and particular service tier) to test
10% of the total subscribers in the state
and service tier, except for the smallest
carriers (i.e., carriers with 50 or fewer
subscribers), which must test five
subscriber locations. The Bureaus also
recognized that, in certain situations, a
carrier serving 50 or fewer subscribers
in a state and service tier may not be
able to test even five active subscribers;
the Bureaus permitted such carriers to
test a random sample of existing, non-CAF-supported active subscriber
locations within the same state and
service tier to satisfy the testing
requirement. In situations where a
subscriber at a test location stops
subscribing to the service provider
within 12 months after the location was
selected, the Bureaus required that the
carrier test another randomly selected
active subscriber location. Finally, the
Bureaus explained that carriers may use
inducements to encourage subscribers to
participate in testing, which may be
particularly useful in cases where
support is tied to a particular
performance level for the network, but
the provider does not have enough
subscribers to higher performance
service tiers to test to comply with the
testing sample sizes.
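For illustration, a minimal sketch of the scaled sample-size rule described above follows. The rounding of the 10% tier and the treatment of a carrier with exactly 500 subscribers are assumptions, since the Order does not spell them out, and the separate relief for carriers that cannot find even five testable locations is addressed later in this document.

```python
# Sketch of the scaled sample-size rule, per state and service tier.
# Rounding and boundary handling are assumptions; see the lead-in above.
import math

def required_test_locations(subscribers_in_state_and_tier: int) -> int:
    if subscribers_in_state_and_tier >= 500:
        return 50  # largest carriers test 50 subscriber locations
    if subscribers_in_state_and_tier > 50:
        # smaller carriers test 10% of subscribers in the state and tier
        return math.ceil(0.10 * subscribers_in_state_and_tier)
    # the smallest carriers test five locations (or all they have, if fewer)
    return min(5, subscribers_in_state_and_tier)

print(required_test_locations(2000))  # 50
print(required_test_locations(320))   # 32
print(required_test_locations(40))    # 5
```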
33. Petitioners and applicants raise
various concerns regarding the required
number of subscriber test locations.
Micronesian Telecommunications
Corporation (MTC), for example, argues
that it and similar carriers that may have
fewer than 50 subscribers in a particular
state and speed service tier will be
unable to comply with the test locations
requirement. MTC claims that it will be
difficult to find even five customers to
test, particularly in higher service tiers.
Asking that the Commission ‘‘provide a
safety valve’’ for similar small carriers,
MTC proposes that such a provider
should ‘‘test no more than 10 percent of
its customers in any given service tier,
with a minimum of one test customer
per service tier with customers.’’ NTCA
argues that testing 10% of subscribers
may be excessive; instead, NTCA
proposes that carriers should test the
lesser of 50 locations per state or 5% of
active subscribers. Further, NTCA
argues that carriers should not be
required to upgrade the speed or
customer premises equipment for
individual locations even temporarily to
conduct speed tests. WTA suggests that,
at least for rural carriers, the number of
test locations should be much lower
than adopted in the Performance
Measures Order. Smaller carriers must
test larger percentages of their
customers compared to larger carriers;
accordingly, WTA argues, the
Commission should permit testing of
just 10–15 locations or 2–3% of
subscribers in each CAF-required
service tier.
34. NTCA, as well as USTelecom,
ITTA, and WISPA, also ask that the
Commission clarify that carriers may
use the same locations for testing both
speed and latency. USTelecom, ITTA,
and WISPA explain that, if carriers must
conduct speed and latency testing at
different locations, the number of
subscribers that must be tested would be
unnecessarily doubled, which ‘‘would
be particularly troublesome for smaller
recipients, many of whom will be
drawing test locations from a small
group of subscribers.’’ Similarly, the
petitioners explain, the requirement
regarding the number of test locations
should be clarified to be exactly the
same for both speed and latency. These
clarification proposals drew broad
support from commenters. For example,
comments submitted jointly by NTCA,
NRECA, and UTC assert that the
clarifications would help providers
‘‘avoid unnecessary costs and excessive
administrative burden,’’ while
Midcontinent Communications notes
that using ‘‘the same panelists for speed
and latency testing for CAF purposes
would align with [its] internal testing
practices.’’
35. A few parties offer suggestions
regarding the parameters for the random
selection process. In particular, WTA
asks that locations should be tested for
five years, instead of two years, before
a new random sample of test locations
is chosen. WTA also proposes that twice
the required random number of testing
locations be provided to carriers so that
carriers can replace locations where
residents refuse to participate or have
incompatible CPE. Frontier, in an ex
parte filing, proposes that carriers be
allowed to test only new customer
locations; it argues that installing the
necessary testing equipment at older
locations requires more time than is
available with the adopted testing
schedule.
36. The Commission declines to
modify the adopted sample sizes for
testing speed and latency. To minimize
the burdens of testing, the Bureaus have
used a ‘‘trip-wire’’ approach in
determining the required sample sizes.
In other words, the adopted sample
sizes produce estimates with a high
margin of error but can show where
further inquiry may be helpful; the
Commission’s target estimation
precision is a 90% confidence level
with an 11.5% margin of error. For the
largest carriers, i.e., those with over 500
subscribers in a given state and speed
service tier, this requires a sample size
of 50 subscriber locations. For the
smallest carriers, the Bureaus adopted
small sample sizes that result in less
precision, with the margin of error
reaching 34.9%, to reduce the testing
burden on smaller providers. Reducing
the sample sizes for smaller carriers
even more would further reduce the
resulting estimation precision—making
the test data even less likely to be
representative of the actual speed and
latency consumers experience on CAF-supported networks. The Commission
therefore does not modify the required
numbers of subscriber locations carriers
must test.
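As a rough cross-check of the precision figures cited above, the following sketch computes the normal-approximation margin of error for a proportion at a 90% confidence level (worst case p = 0.5). It approximately reproduces the 11.5% figure for a 50-location sample; the Bureaus' exact values for the smallest samples (e.g., 34.9%) may reflect refinements, such as finite-population corrections, that are not modeled here.

```python
# Back-of-the-envelope margin-of-error check; illustrative only.
import math

Z_90 = 1.645  # two-sided 90% confidence

def margin_of_error(sample_size: int, p: float = 0.5) -> float:
    """Normal-approximation margin of error for an estimated proportion."""
    return Z_90 * math.sqrt(p * (1 - p) / sample_size)

for n in (50, 10, 5):
    print(f"n={n:3d}: margin of error = {margin_of_error(n):.1%}")
# n= 50: margin of error = 11.6%
# n= 10: margin of error = 26.0%
# n=  5: margin of error = 36.8%
```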
37. Nonetheless, the Commission
recognizes that a few carriers facing
unique circumstances may find it
extraordinarily difficult to find a
sufficient number of subscriber
locations to test. Although the
Commission declines to modify the
adopted sample sizes, the Commission
appreciates that special circumstances
occasionally demand exceptions to a
general rule. The Commission’s rules
may be waived for good cause shown.
38. For carriers that cannot find even
five CAF-supported locations to test, the
Commission also reconsiders the
Bureaus’ decision to permit testing of
non-CAF-supported active subscriber
locations within the same state and
service tier. Testing and reporting speed
and latency for non-CAF-supported
locations adds unnecessary complexity
to the Commission’s requirements.
Accordingly, the Commission requires
that any non-compliant carrier testing
fewer than five CAF-supported
subscriber locations because more are
not available be subject to verification
that more customers are not available,
rather than requiring that all carriers
testing fewer than five CAF-supported subscriber locations find non-CAF-supported locations to test.
39. Additionally, the Commission
recognizes that, as several parties have
noted, obtaining customer consent for
testing which requires placement of
testing equipment on customer premises
may prove difficult. The Commission
believes that its revised testing
implementation schedule (discussed in
the following) will help alleviate this
concern, particularly for smaller
carriers. Numerous vendors are
developing software solutions that will
allow providers to test the service at
customer locations without requiring
any additional hardware at the
customer’s premises. Further, the
Commission directs WCB to publish
information on the Commission’s
website explaining the nature and
purpose of the required testing—to
ensure that carriers are living up to the
obligations associated with CAF
support—and urging the public’s
participation. The Commission expects
that providing such information in an
easy-to-understand format will help
alleviate subscribers’ potential concerns.
Moreover, the Commission emphasizes
that no customer proprietary network
information is involved in the required
testing or reporting, other than
information for which the carrier likely
would already have obtained customer
consent; carriers routinely perform
network testing of speed and latency
and the performance measures testing
the Commission is requiring is of a
similar nature.
40. The Commission agrees with
comments recommending that the same
sample sizes adopted for speed should
also apply to latency, and that the same
subscriber locations should be used for
both speed and latency tests. As some
parties have noted, requiring testing of
two separate sets of subscriber locations
for speed and latency, rather than the
same group of locations for both, is
unnecessarily burdensome. By requiring
speed and latency tests at the same
subscriber locations, the Commission
reduces the amount of equipment,
coordination, and effort that may
otherwise be involved in setting up
testing. Therefore, carriers will test all of
the locations in the random sample for
both speed and latency. The
Commission notes that because it is
adopting different implementation dates
for testing of different broadband
deployment programs, a carrier will
receive a separate random sample of
testing locations for each program for
which it must do performance testing.
In the Performance Measures Order, the
Bureaus stated that, ‘‘[a] carrier with
2,000 customers subscribed to 10/1
Mbps in one state through CAF Phase II
funding and 500 rural broadband
experiment (RBE) customers subscribed
to 10/1 Mbps in the same state, and no
other high-cost support with
deployment obligations, must test a total
of 50 locations in that state for the 10/
1 Mbps service tier.’’ But because CAF
Phase II and RBE have different
implementation dates for testing, the
carrier in this example must test 50
locations for its CAF Phase II obligations
and 50 locations for its RBE obligations.
Similarly, because the Commission now
requires carriers to use the same sample
for both speed and latency, it
reconsiders the requirement that carriers
replace latency testing locations that are
no longer actively subscribed after 12
months with another actively
subscribed location. The Bureaus did
not make clear if this provision applied
to both speed and latency test locations.
To avoid confusion, the Commission
clarifies that the same replacement
requirements should apply to both
speed and latency. Therefore, the
Commission now requires that carriers
replace non-actively subscribed
locations with another actively
subscribed location by the next calendar
quarter testing. Although the
Commission does not believe it is
necessary for carriers to obtain a random
list of twice the number of required
testing locations at the outset, carriers
should be able to obtain additional
randomly selected subscriber locations
as necessary for these kinds of
situations.
41. The Commission reconsiders the
Bureaus’ requirement that carriers meet
and test to their CAF obligation speed(s)
regardless of whether their subscribers
purchase internet service offerings with
speeds matching the CAF-required
speeds for those CAF-eligible locations.
Specifically, in situations where
subscribers purchase internet service
offerings with speeds lower than the
CAF-required speeds for those locations,
carriers are not required to upgrade
individual subscriber locations to
conduct speed testing unless there are
no other available subscriber locations
at the CAF-required speeds within the
same state or relevant service area. The
Commission recognizes that there may
be significant burdens associated with
upgrading an individual location,
particularly when physically replacing
equipment at the customer premises is
necessary. Some carriers may still find
it necessary to upgrade individual
subscriber locations, at least
temporarily, to conduct speed testing.
The Commission does not believe that
requiring temporary upgrades of service
of testing locations in these instances
will discourage bidding in future
auctions. Carriers participating in
auctions should be prepared to provide
the required speeds at all of the
locations in the relevant service area
and should anticipate that over time
more and more customers in the service
area will be purchasing the higher-speed
offerings.
42. Finally, the Commission rejects
proposals to require testing only of
newly deployed subscriber locations
and to maintain the same sample for
more than two years. If the Commission
were to permit testing of only new
locations, carriers’ speed and latency
test data would not reflect their
previous CAF-supported deployments,
for which carriers also have ongoing
speed and latency obligations.
Moreover, although the Bureaus
adopted the Performance Measures
Order in 2018, carriers have been
certifying that their CAF-supported
deployments meet the relevant speed
and latency obligations for several years.
Requiring testing of older locations
should not prove a problem for carriers
that have been certifying that their
deployments properly satisfy their CAF
obligations. In any case, further
shrinking the required sample to
include only more recent deployments
would compromise the effectiveness of
the ‘‘trip-wire’’ sample; the Commission
would not be able to identify potential
problems with many older CAF-supported deployments. Maintaining
the same sample beyond two years
would present the opposite problem. By
excluding newer deployments, the
Commission’s understanding of carriers’
networks would be outdated; the
Bureaus’ decision to require testing a
different set of subscriber locations
every two years struck the correct
balance between overburdening carriers
and maintaining a current, relevant
sample for testing.
43. The Bureaus required quarterly
testing for speed and latency. In
particular, to capture any seasonal
effects and differing conditions
throughout the year that can affect a
carrier’s broadband performance, the
Bureaus required carriers subject to the
performance measures to conduct one
week of speed and latency testing in
each quarter of the calendar year.
44. WTA argues that spreading testing
across the year imposes a substantial
burden, particularly on rural carriers,
without producing more accurate
information than a single week of
testing. WTA also contends that
obtaining consent from customers to
allow testing for four weeks a year ‘‘is
going to be extremely difficult and
likely to become a customer relations
nightmare.’’ Instead, WTA argues that
testing for a single week in late spring
or early fall would be more
representative of typical internet usage.
WTA cites these claimed difficulties as
a reason for reducing the number of
weeks of annual testing, reducing the
numbers of locations to be tested,
allowing more flexible selection of
customer locations, and using the test
locations for longer periods.
45. The Commission declines to
adjust the quarterly testing requirement
as proposed by WTA. As the Bureaus
acknowledged when they adopted the
quarterly requirement, different
conditions exist throughout the year
that can affect service quality, including
changes in foliage, weather, and
customer usage patterns, school
schedules, holiday shopping, increased
or decreased customer use because of
travel and sporting events, and business
cycles. The goal of the testing
requirements is to ensure that
consumers across the country
experience consistent, quality
broadband service throughout the year,
not at only one defined point during the
year. Additionally, the Commission
believes WTA’s concerns regarding
customer consent are unfounded. The
Commission expects that once the
requisite technology and software to
conduct the required testing has been
installed, testing the performance of the
network for one week per quarter will
not impose any additional significant
burden on carriers or customers.
Moreover, the tests themselves use so
little bandwidth that the Commission
does not believe customers will even
notice that testing is occurring. Indeed,
as the Bureaus explained, quarterly
testing ‘‘strikes a better balance of
accounting for seasonal changes in
broadband usage and minimizing the
burden on consumers who may
participate in testing.’’
46. The Commission confirms that
carriers may use any of the three
methodologies outlined in the
Performance Measures Order to
demonstrate their compliance with
network performance requirements. The
Commission has previously determined
that it should provide carriers subject to
performance testing with flexibility in
determining the best means of
conducting tests. In 2013, WCB had
determined that price cap carriers
generally may use ‘‘existing network
management systems, ping tests, or
other commonly available network
measurement tools,’’ as well as results
from the MBA program, to demonstrate
compliance with latency obligations
associated with CAF Phase II model-based support. Thus, the Bureaus
concluded that ETCs subject to fixed
broadband performance obligations
would be permitted to conduct testing
by employing either: (1) MBA testing
infrastructure (MBA testing), (2) existing
network management systems and tools
(off-the-shelf testing), or (3) providerdeveloped self-testing configurations
(provider-developed self-testing or selftesting). The Bureaus reasoned that the
flexibility afforded by three different
options offered ‘‘a cost-effective method
for conducting testing for providers of
different sizes and technological
sophistication.’’
47. NTCA requests clarification about
language in the Performance Measures
Order stating that ‘‘MBA testing must
occur in areas and for the locations
supported by CAF, e.g., in CAF Phase II
eligible areas for price cap carriers and
for specific built-out locations for RBE,
Alternative Connect America Cost
Model (A–CAM), and legacy rate-of-return support recipients.’’ NTCA
contends that this language refers to
previously-promulgated MBA testing
requirements and that the Commission
should clarify that ETCs subject to fixed
broadband performance obligations
should be permitted to use any of three
testing options outlined by the Bureaus.
48. The language highlighted by
NTCA applies only to carriers choosing
the MBA testing option; the Bureaus set
out additional, separate requirements
for carriers choosing to use off-the-shelf
or provider-developed testing options.
As the Performance Measures Order
explained, in the event that a carrier
opts to use the MBA testing
methodology to collect performance
data, it must ensure boxes are placed at
the appropriate randomly selected
locations in the CAF-funded areas, as
required for the CAF testing program. If,
on the other hand, a carrier opts for
either off-the-shelf testing tools or its
own self-testing, it must use the testing
procedures specific to its chosen methodology.
49. To achieve full compliance with
the latency and speed standards, the
Performance Measures Order required
that 95% of latency measurements
during testing windows fall below 100
ms round-trip time, and that 80% of
speed measurements be at 80% of the
required network speed. Based on the
standard adopted by the Commission in
2011, WCB used ITU calculations and
reported core latencies in the
contiguous United States in 2013 to
determine that a latency of 100 ms or
below was appropriate for real-time
applications like VoIP. WCB thus
required price cap carriers receiving
CAF Phase II model-based support to
test and certify that 95% of testing hour
latency measurements are at or below
100 ms (the latency standard). Later,
WCB sought comment on extending the
same testing methodologies to other
high-cost support recipients serving
fixed locations, and in multiple orders,
the Commission extended the same
latency standard to RBE participants,
rate-of-return carriers electing the
voluntary path to model support, CAF
Phase II competitive bidders not
submitting high-latency bids, and
Alaska Plan carriers.
50. The Bureaus ultimately reaffirmed
and further extended the latency
standard to all high-cost support
recipients serving fixed locations,
except those carriers submitting high-latency bids in the CAF Phase II
auction. In doing so, the Bureaus noted
that the data on round-trip latency in
the United States had not markedly
changed since the CAF Phase II Price
Cap Service Obligation Order, and that
no parties challenged the Commission’s
reasoning for the existing 100 ms
standard. More recently, the Bureaus
refreshed the record, seeking comment
on USTelecom’s proposal that certifying
‘‘full’’ compliance means that 95 to
100% of all of an ETC’s measurements
during the test period meet the required
speed. The Bureaus then adopted a
standard requiring that 80% of a
carrier’s download and upload
measurements be at or above 80% of the
CAF-required speed (i.e., an 80/80
standard). The Bureaus explained that
this speed standard best meets the
Commission’s statutory requirement to
ensure that high-cost-supported
broadband deployments provide service reasonably comparable to that
available in urban areas. The Bureaus also noted that they would exclude from
certification calculations speed measurements above a certain threshold
to ensure that outlying observations do not unreasonably affect results.
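For illustration only (the Order itself prescribes no software), the two benchmarks described above can be checked roughly as follows; the function and parameter names are assumptions of this sketch, not terms defined by the Commission.

    def meets_latency_standard(latency_ms, threshold_ms=100.0, required_share=0.95):
        # At least 95% of round-trip latency measurements must be at or below 100 ms.
        passing = sum(1 for m in latency_ms if m <= threshold_ms)
        return passing / len(latency_ms) >= required_share

    def meets_speed_standard(speeds_mbps, required_mbps, speed_floor=0.80, required_share=0.80):
        # The 80/80 standard: at least 80% of measurements must reach at least
        # 80% of the CAF-required speed.
        passing = sum(1 for m in speeds_mbps if m >= speed_floor * required_mbps)
        return passing / len(speeds_mbps) >= required_share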
51. In their Petition, USTelecom,
ITTA, and WISPA complain that
‘‘[t]here is . . . a significant disparity in
compliance thresholds for speed and
latency,’’ and ask that the Bureaus
require ETCs’ latency measurements to
meet 175 ms at least 95% of the time.
The petitioners argue that, before
accepting CAF Phase II model-based
support, carriers could not have fully
understood whether the latency
standard adopted in 2013 was
appropriate, apparently because it was
adopted ‘‘almost two full years before
price cap carriers accepted CAF Phase II
support,’’ and other ‘‘reasonable’’
requirements were adopted later.
Further, the petitioners argue, the same
ITU analysis that WCB relied on in 2013
to adopt the latency standard ‘‘found
that consumers continue to be ‘satisfied’
with speech quality at a one-way
mouth-to-ear latency of 275 ms or a
provider round-trip latency of 175 ms,’’
so ‘‘treating a latency result that is even
one millisecond above 100 ms as a
violation . . . penaliz[es] recipients for
providing users with voice quality with
which they are fully satisfied.’’
Changing the standard to require latency
measurements of 175 ms or better 95%
of the time, petitioners assert, would
better align the latency standard with
the speed standard, which is designed
to ensure that high-cost-supported
broadband deployments are reasonably
comparable to those in urban areas.
52. NTCA, NRECA, and UTC oppose
the petitioners’ request to ‘‘align’’ the
latency standard with the speed
standard. Defending the 95% threshold
adopted by the Bureaus, these parties
explain that low latency is necessary to
support achieving a ‘‘reasonably
comparable’’ level of service, and the
95% compliance benchmark for latency
is a ‘‘reasonable’’ standard for that.
Moreover, speeds may vary up to 20%
because of ‘‘networking protocols,
interference and other variances that
affect all providers and whose
accommodation is technology neutral,’’
but such factors do not affect latency.
Thus, they say, the record supports the
adopted latency standard.
53. Multiple parties seek clarifications
regarding implementation of the 80/80
speed standard adopted in the
Performance Measures Order. In
particular, carriers expressed concern
that compliance will be measured
against advertised speeds, rather than
the speeds carriers are obligated to
provide in exchange for CAF support. In
addition, USTelecom, ITTA, and
WISPA, among others, challenge the
Bureaus’ finding that speed test results
greater than 150% of advertised speeds
are likely invalid and ask that the
Bureaus reconsider automatically
excluding those measurements from
compliance calculations. Instead,
Vantage Point suggests, the Commission
should consider excluding data points
beyond a defined number of standard
deviations, rather than setting a 150%
cutoff for measurements.
54. The Commission declines to
modify the longstanding latency
standard requiring that 95% of round-trip measurements be at or below 100
ms. As petitioners acknowledge, the
standard was initially adopted in 2013,
before carriers accepted CAF Phase II
model-based support. Petitioners claim
that, as a result, ‘‘no future recipient
could have been expected to assess the
appropriateness of this prematurely
adopted requirement,’’ but, in fact,
carriers accepted CAF Phase II support
conditioned on the requirement that
they certify to the adopted latency
standard. In other words, carriers
assessed the appropriateness of the
standard and decided that they would
be able to certify meeting the standard—
or, at the very least, accepted that they
would risk losing CAF Phase II support
if they were unable to meet the
standard. Moreover, no parties sought
reconsideration when the standard was
originally adopted, and the Commission
later extended the same standard to
other high-cost support recipients in the
years following.
55. The Commission also notes that
latency is fundamentally different from
speed and therefore requires a different
standard to ensure that CAF-supported
broadband internet service is reasonably
comparable to service in urban areas.
The 100 ms standard, which is more
lenient than the 60 ms standard
originally proposed, ensures that
subscribers of CAF-supported internet
service can use real-time applications
like VoIP. If the Commission were to
require 95% of latency measurements to
be only 175 ms or lower, it would be
relaxing the standard considerably—
permitting CAF-supported internet
service to have 75% higher latency than
permitted by the existing standard
adopted by the Commission. Further,
lowering the existing standard would
not decrease burdens on carriers and
provide ‘‘a more efficient compliance
and enforcement process,’’ as the
petitioners suggest. The carriers need
only to conduct tests, which can be
automated, and provide the data;
Universal Service Administrative
Company (USAC) will complete the
necessary calculations to determine
compliance. To the extent that parties
argue that the 100 ms standard is overly
strict and that consumers may be
satisfied with higher latencies, that
standard was adopted in prior
Commission orders and thus is not
properly addressed in this proceeding,
which is to determine the appropriate
methodology for measuring whether
high-cost support recipients’ networks
meet established performance levels.
56. The Commission clarifies,
however, that carriers are not required
to provide speeds beyond what they are
already obligated to deploy as a
condition of their receipt of high-cost
support. Thus, for a location where a
carrier is obligated to provide 10/1
Mbps service, the Commission only
requires testing to ensure that the
location provides 10/1 Mbps service,
even if the customer there has ordered
and is receiving 25/3 Mbps service.
57. Regarding the trimming of data in
calculating compliance with the speed
standard, the Commission reconsiders
the Bureaus’ decision to exclude from
compliance calculations any speed test
results with values over 150% of the
advertised speed for the location.
Instead of trimming the data at the
outset as the Bureaus had required, the
Commission directs the Bureaus to
study data collected from carriers’ pre-testing and testing and determine how
best to implement a more sophisticated
procedure using multiple statistical
analyses to exclude outlying data points
from the test results. The Commission
anticipates that the Bureaus will
develop such a procedure for USAC to
implement for each carrier’s test results
in each speed tier in each state or study
area and may involve determining
whether multiple methods (e.g., the
interquartile range, median absolute
deviation, Cook’s distance, Isolation
Forest, or extreme value analysis) flag a
particular data point as an anomaly.
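As a rough sketch of the kind of multi-method outlier screen contemplated above (and only that: the Bureaus have not yet adopted a procedure), a data point might be excluded only when more than one simple test flags it; the interquartile-range and median-absolute-deviation tests and thresholds below are illustrative assumptions.

    import statistics

    def iqr_flag(value, data, k=1.5):
        # Flags values outside [Q1 - k*IQR, Q3 + k*IQR].
        q1, _, q3 = statistics.quantiles(data, n=4)
        iqr = q3 - q1
        return value < q1 - k * iqr or value > q3 + k * iqr

    def mad_flag(value, data, k=3.0):
        # Flags values more than k scaled median absolute deviations from the median.
        med = statistics.median(data)
        mad = statistics.median([abs(x - med) for x in data])
        return mad > 0 and abs(value - med) / (1.4826 * mad) > k

    def is_anomaly(value, data, min_votes=2):
        # Excludes a measurement only when multiple methods agree it is an outlier.
        return iqr_flag(value, data) + mad_flag(value, data) >= min_votes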
58. The Performance Measures Order
also established a framework of support
reductions that carriers would face in
the event that their performance testing
did not demonstrate compliance with
speed and latency standards to which
each carrier is subject. The Bureaus
considered numerous approaches to
address non-compliance with the
required speed and latency standards.
They adopted a ‘‘four-level framework
that sets forth particular obligations and
automatic triggers based on an ETC’s
degree of compliance with the
Commission’s latency, speed, and, if
applicable, MOS testing standards in
each state and high-cost support
program.’’ Under this scheme,
compliance for each standard is
separately determined, with the
percentage of a carrier’s measurements
meeting the relevant standard divided
by the required percentage of
measurements to be in full compliance.
The Bureaus noted that the framework
‘‘appropriately encourages carriers to
come into full compliance and offer, in
areas requiring high-cost support,
broadband service meeting standards
consistent with what consumers
typically experience.’’
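Read literally, the per-standard compliance percentage described above could be computed along the following lines; this is a simplified sketch, and the function name and capping at 100% are assumptions of the sketch rather than rules adopted by the Bureaus.

    def compliance_percentage(measurements, meets_standard, required_share):
        # Share of measurements meeting the standard, divided by the share
        # required for full compliance (0.95 for latency, 0.80 for speed).
        passing_share = sum(1 for m in measurements if meets_standard(m)) / len(measurements)
        return min(passing_share / required_share, 1.0)

    # Example: 3 of 4 latency samples at or below 100 ms against the 95% benchmark.
    ratio = compliance_percentage([45, 60, 120, 80], lambda ms: ms <= 100, 0.95)  # ~0.79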
59. Broadly, the Commission’s goal in
establishing a performance testing
regime is to ensure that consumers
receive broadband at the speed and
latency to which carriers have
committed, and for which they are
receiving support. The Commission’s
compliance regime is designed to
encourage them to provide high quality
broadband, not to punish carriers for
failing to perform. That is why the
Bureaus adopted an interim schedule
for withholding support for failing to
meet the required performance, but to
return such support as the carrier comes
into compliance. This is consistent with
the Commission’s approach to
construction of network facilities, i.e.
support is withheld if carriers do not
meet their build-out milestones, but as
the carrier improves its performance,
withheld support is returned. There is
no correlation in either case between the
interim percentages of support withheld
and the total per-location support;
rather, these interim withholdings are
designed solely to encourage the carrier
to meet its obligations and ensure that
progress is continuing. The Commission
notes that carriers have their entire
support term to improve their networks
and come into compliance. Even at the
end of the support term, the
Commission’s rules provide for a one-year period before any support is
permanently withheld, during which
the carrier can show that it has fixed the
problems with its network. Further, as
explained in the following, the
Commission adds an opportunity for
carriers to request a larger, statistically
valid sample if the carrier believes that
the small sample size is the cause of the
failure to perform. The Commission
therefore anticipates few instances of
non-compliance with the Commission’s
performance measures.
60. Several parties urge the
Commission to adjust the adopted
framework for non-compliance.
USTelecom, ITTA, and WISPA jointly
argue that non-compliance with the
speed and latency requirements is
subject to support withholding under
the established framework that is ‘‘more
severe[] than non-compliance with
build-out milestones.’’ For example,
they observe that a carrier with a
compliance gap of less than six percent
would lose 5% of its high-cost support,
while only being subject to quarterly
reporting obligations for missing its
required build out by up to 14.9%.
USTelecom, ITTA, and WISPA instead
propose mirroring the precedent
established for the deployment
milestone framework, with non-compliance with the speed and latency
requirements of 5% or less resulting
only in a quarterly reporting obligation
and non-compliance of 5% to 15%
resulting in 5% of funding being
withheld. Additionally, they request
clarification that a carrier not complying
with both its performance measurement
requirements and deployment
requirements will be subject only to a
reduction in support equal to the greater
of the two amounts, rather than the
combined percentage of the two
amounts. AT&T concurs with
petitioners that support reductions for
failing to comply with performance
standards should not be more serious
than failure to deploy. NTCA, NRECA,
and UTC jointly contend that ‘‘non-compliance (especially if relatively
minor in degree) should impose upon
the provider the burden of proof to
demonstrate a justifiable reason for non-compliance and an avenue toward
remediation; it should not eliminate
automatically support upon which the
provider relies for deployment and
operation.’’ WTA proposes that rural
carriers not in full compliance be given
a six-month grace period ‘‘to locate and
correct the problem without reduction
or withholding of the monthly high-cost
support needed to finance the repair,
upgrade and operation of [their]
networks.’’ WTA also reiterates that
rural local exchange carriers (LECs)
should not lose high-cost support due to
the shortcomings of facilities or
circumstances over which they have no
control and are not able to repair or
upgrade. Finally, Peñasco Valley
Telephone Cooperative argues that a
100% success requirement for full
compliance does not take into account
factors outside the carrier’s control and
instead proposes a high percentage
benchmark, but less than 100%, to
account for these variables.
61. Except as discussed in the
following, the Commission generally
declines to revise the compliance and
certification frameworks adopted by the
Bureaus. The Commission disagrees that
the consequences for failure to meet its
performance measures are greater than
those for failure to meet deployment
obligations. As opposed to the
deployment obligations that many
parties use for comparison, the speed
and latency standards adopted by the
Bureaus include a margin for error and
do not require carriers to meet the
established standards in every instance.
For example, carriers are required to
meet the 100 ms standard for latency
only 95% of the time, rather than 100%
as suggested by some parties. Similarly,
the Commission allows carriers to be in
compliance with its speed standards if
they provide 80% of the required speed
80% of the time. Moreover, the
Commission establishes pre-testing
periods in which no support reductions
for failing to meet standards will occur
to allow carriers to adjust to the new
regime. This opportunity for pre-testing
will ensure that carriers are familiar
with the required testing and how to
properly measure the speed and latency
of their networks. Because carriers will
be aware of which locations are being
tested, they will be able to monitor their
networks prior to beginning the required
testing to make sure the network is
performing properly. Further, once a
location is certified in USAC’s High
Cost Universal Broadband (HUBB)
portal, the carrier has certified that it
meets the required standards, so the
performance of the network should not
be a surprise to the carrier.
62. Some parties have expressed
concern about the performance
requirements and the non-compliance
support reductions. For example,
USTelecom, ITTA, and WISPA argue
that certain aspects of the compliance
framework ‘‘penalize non-compliance
with broadband speed and latency
requirements more severely than non-compliance with build-out milestones.’’
They also assert that the compliance
framework ‘‘is too stringent and could
impede—rather than advance—
broadband deployment in rural CAF-supported areas.’’ The Commission
disagrees. As a condition of receiving
high-cost support, carriers must commit
not only to building out broadband-capable networks to a certain number of
locations, but also to providing those
locations with a specific, defined level
of service. Building infrastructure is
insufficient to meet a carrier’s obligation
if the customers do not receive the
required level of service. If a carrier fails
to meet its deployment requirements, it
will face certain support reductions, and
if it likewise fails to meet its
performance requirements for locations
to which it claims it has deployed, it has
failed to fully fulfill its obligations. The
compliance framework established by
the Bureaus is essential to ensuring that
consumers are receiving the appropriate
level of service that the carrier has
committed to provide.
63. The Commission emphasizes that
at the conclusion of a carrier’s build-out
term, any failure to meet the speed and
latency requirements is a failure to
deploy because the carrier is not
delivering the service it has committed
to deliver. A failure to comply with all
performance measure requirements will
result in the Commission determining
that the carrier has not fully satisfied its
broadband deployment obligations at
the end of its build-out term and
subjecting the carrier to the appropriate
broadband deployment non-compliance
support reductions. The Commission
does not consider a carrier to have
completed deployment of a universal
service funded broadband-capable
network simply by entering the required
number of locations to which it has
built into the HUBB; customers at those
locations also must be able to receive
service at the specific speed and latency
to which the carrier has committed.
Simply put, consumers must receive the
required level of service before a
network can be considered to have been
fully deployed. Otherwise, a carrier
would not be meeting the conditions on
which it receives support to deploy
broadband.
64. Several parties argue that there is
insufficient notice for clarifying that
‘‘any failure to meet the speed and
latency requirements will be considered
a failure to deploy.’’ The Commission
disagrees. When establishing the CAF in
2011, the Commission noted that it
‘‘will require recipients of funding to
test their broadband networks for
compliance with speed and latency
metrics,’’ and each recipient of high-cost
support with defined build-out
obligations must deploy broadband
service with available speeds as
required by the Commission. Indeed,
the Commission found that verifiable
test results would allow the
Commission ‘‘to ensure that ETCs that
receive universal service funding are
providing at least the minimum
broadband speeds, and thereby using
support for its intended purpose as
required by section 254(e)’’; if the
support is not used to provide the
required level of service, it is not being
used for its intended purpose under
section 254(e). Carriers do not receive
high-cost support to just install any
network; they must deploy a broadband-capable network actually meeting the
required speed and latency metrics.
Indeed, section 54.320(d)(1) of the
Commission’s rules provides that ‘‘[f]or
purposes of determining whether a
default has occurred, a carrier must be
offering service meeting the requisite
performance obligations.’’
65. The Commission uses the testing
data to determine the level of
compliance for the carrier’s network, as
defined by the Bureaus in the
Performance Measures Order. Thus, at
the end of a carrier’s build-out term, if
a carrier has deployed to 100% of its
required locations, but its overall
performance compliance percentage is
90%, USAC will recover the percentage
of the carrier’s support equal to 1.89
times the average amount of support per
location received in the state for that
carrier over the term of support for the
relevant performance non-compliance
percentage (i.e., 10%), plus 10 percent
of the carrier’s total relevant high-cost
support over the support term for that
state. Similarly, if a carrier deploys to
only 90% of the locations to which it is
required to build, and of those locations,
the performance compliance percentage
is 90%, the carrier will be required to
forfeit support equal to 1.89 times the
average amount of support per location
received in the state for that carrier over
the term of support for both the 10% of
locations lacking deployment and an
additional 9% of locations (reflecting a
non-compliance percentage of 10% for
the 90% deployed locations), plus 10
percent of the carrier’s total relevant
high-cost support over the support term
for that state. However, carriers are
permitted up to one year to address any
shortcomings in their deployment
obligations, including ensuring that
their performance measurements are
100% in compliance, before these
support reductions will take effect.
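The arithmetic of the two examples above can be summarized in a simplified sketch like the following; the function and variable names are illustrative, and the actual recovery is calculated by USAC under § 54.320(d) and its verification procedures.

    def end_of_term_recovery(total_locations, deployed_share, perf_compliance,
                             avg_support_per_location, total_support):
        # Locations lacking deployment plus deployed locations counted as
        # non-compliant with the performance measures are each charged at 1.89
        # times the carrier's average per-location support in the state, plus
        # 10% of the carrier's total relevant high-cost support for that state.
        undeployed = (1 - deployed_share) * total_locations
        non_compliant = deployed_share * (1 - perf_compliance) * total_locations
        return (1.89 * avg_support_per_location * (undeployed + non_compliant)
                + 0.10 * total_support)

Under this sketch, 100% deployment with a 90% performance compliance percentage charges the location term for 10% of locations, while 90% deployment with 90% compliance charges it for the 10% lacking deployment plus an additional 9%, matching the examples above.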
66. To provide certainty to carriers
and to take into account that carriers
may be in compliance with performance
obligations during their testing periods,
but for whatever reason may not be in
compliance at the end of the support
term, the Commission more narrowly
tailors its end-of-term non-compliance
provisions to recognize past
compliance. Accordingly, the
Commission will withhold support
where a carrier is unable to demonstrate
compliance at the end of the support
term only for the amount of time since
the carrier’s network performance was
last fully compliant. Specifically, the
Commission modifies the support
recovery required by section 54.320(d)
that is related to compliance with
performance measures by multiplying it
by the percentage of time since a carrier
was last able to show full compliance
with required performance testing
requirements prior to the end of the
support term on a quarterly basis. For
example, if a carrier’s failure to meet
end-of-term performance measures
under section 54.320(d) resulted in it
having to repay support associated with
10% of locations to which it was
obligated to deploy (and not including
any support related to a failure to build
and install the network as determined
by USAC verifications) and the carrier’s
performance testing had not been in
compliance with the Commission’s
requirements for the 15 preceding
quarters of testing, out of a total of 20
annual quarters in which it received
support, the amount of support to be
recovered would be multiplied by 15⁄20
or 3⁄4. If a carrier was not in compliance
with the Commission’s performance
measures for 5 quarters of testing but
comes into compliance before or during
end-of-term testing, USAC will not
recover any support. However, because
carriers have an affirmative duty to
demonstrate compliance with network
performance measures—as they have
with respect to physical build-out
milestones—a carrier that has never
been in compliance with performance
testing requirements at any time during
the testing period will have the
appropriate amount of support withheld
at the end of the support term for the
entire term. The Commission believes
that this approach more narrowly ties
the non-compliance consequences to the
period of time in which a carrier fails
to comply with performance
requirements.
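A minimal sketch of that proration, using a hypothetical base recovery amount solely for illustration:

    def prorated_recovery(base_recovery, quarters_out_of_compliance, total_quarters):
        # The section 54.320(d) recovery tied to performance measures is multiplied
        # by the share of the support term since the carrier was last fully compliant.
        return base_recovery * quarters_out_of_compliance / total_quarters

    # Example from the Order: 15 non-compliant quarters out of 20 yields a 3/4 multiplier.
    reduced = prorated_recovery(100_000, 15, 20)  # 75,000.0 (base amount is hypothetical)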
67. In response to commenters’
concerns regarding the fairness of
potentially reducing carriers’ support
amounts for both lack of deployment
and non-compliance with speed and
latency standards, the Commission
clarifies that at the end of the support
term when USAC has performed the
calculation to determine the total lack of
deployment based on the numbers of
locations to which the carrier has built
out facilities and the number of
locations that are in compliance with
the performance measures, USAC will
ensure that the total amount of support
withheld from the carrier because of
failure to meet deployment milestones
and performance requirements does not
exceed the requirements of
§ 54.320(d)(2). To facilitate this
calculation, the Commission reconsiders
the decision allowing carriers to recover
only the support withheld for non-compliance for 12 months or less. When
a non-compliant carrier comes into a
higher level of compliance, USAC will
now return the withheld support up to
an amount reflecting the difference
between the levels’ required
withholding. By returning all the
support USAC may have withheld from
a carrier for non-compliance, the non-compliance framework will continue to
provide an incentive to carriers to return
to full compliance with the speed and
latency standards.
68. Finally, the Commission provides
additional flexibility at the conclusion
of a carrier’s build-out term for any
carrier that has failed to meet its
performance requirements and believes
that its failure to do so is the result of
a small sample size. As noted in this
document, to minimize the burdens of
testing, the Bureaus have used a ‘‘tripwire’’ approach in determining the
required sample sizes; while these
sample sizes are useful for
demonstrating where further inquiry
may be helpful, they are subject to a
high margin of error. Thus, if at the end
of its term, a carrier is shown not to
have met its deployment obligations due
to a failure in meeting the speed and
latency requirements, the carrier can
submit a request to the Bureaus for an
increased size of random samples that
will produce an estimate with a margin
of error of 5% or less and conduct
further testing during the additional 12-month period provided in section
54.320(d)(2) to show that the carrier is
in compliance with the Commission’s
performance requirements. If, after this
further testing, the carrier is able to
demonstrate that it fully complies with
the required speed and latency
benchmarks, then the carrier will be
considered to have met the deployment
obligations.
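The Order does not specify how the enlarged sample is to be sized, but as background the standard worst-case proportion-estimate formula implies roughly 385 test locations for a 5% margin of error at about 95% confidence; the sketch below shows only that reference calculation, not a Commission-adopted rule.

    import math

    def sample_size_for_margin(margin=0.05, z=1.96, p=0.5):
        # Worst-case (p = 0.5) sample size for estimating a compliance proportion
        # within the given margin of error at ~95% confidence; ignores any finite-
        # population correction that a small study area might allow.
        return math.ceil(z * z * p * (1 - p) / (margin * margin))

    print(sample_size_for_margin())  # 385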
69. The Commission is persuaded by
the record here to modify the specific
schedule to commence speed and
latency tests established in the
Performance Measures Order. The
Performance Measures Order
established a deadline of July 1, 2020 for
carriers subject to the Performance
Measures Order to report the results of
testing, with an accompanying
certification, for the third and fourth
quarters of 2019. The Commission now
adopts a modified approach to enable
better individualization to the specific
circumstances of a given provider.
70. The Commission concludes that it
is appropriate under the circumstances
to modify the scheduled start of
performance testing to link speed and
latency testing to the deployment
obligations for carriers receiving support
from each of the various high-cost
support mechanisms. The Commission
believes this solution best balances its
responsibility to ensure that consumers
are receiving the promised levels of
service in a timely manner with the
ability of all carriers to undertake the
required performance testing. This
approach also allows larger price cap
carriers that are further along in their
deployments and are more able, at this
point, to begin testing to do so without
additional delay. Moreover, the rolling
testing schedule the Commission adopts
will be less administratively
burdensome for Commission staff by
allowing for more individualized review
and evaluation of testing results over
time. Pushing back testing will have the
added benefit of allowing additional
time for the marketplace to further
develop solutions for carriers to
undertake the required testing.
71. The Commission also implements
a pre-testing period that will occur prior
to the commencement of each carrier’s
testing start date. As with the testing
period, this pre-testing period will be
aligned with a carrier’s deployment
obligations for the specific high-cost
mechanism under which it receives
support and will require the filing of
data regarding pre-testing results. Pre-testing will require carriers to conduct
testing according to the Commission’s
requirements using a USAC-determined
random sample of subscribers, and
results must be submitted to USAC
within one week of the end of each
quarter (i.e., by April 7 for the first
quarter, July 7 for the second quarter,
etc.).
72. However, no support reductions
will be assessed during the pre-testing
period, as long as carriers actually
undertake the pre-testing and report
their results. Carriers that fail to conduct
pre-testing and submit results in a
timely fashion will be considered to be
at Level 1 non-compliance. The random
sample for pre-testing can be used by
the carrier for a total of two years,
meaning that carriers will need to obtain
a new random sample after two years of
pre-testing/testing. Thus, for example, if
a carrier does one year of pre-testing and
then one year of testing, it will need to
obtain a new random sample prior to
beginning the second year of testing.
While there will be no support
reductions during the pre-testing period
(as long as the carrier undertakes the
testing and reports results), the filing
will allow Commission staff to evaluate
the pre-testing data and determine if any
adjustments to the testing regime are
needed to ensure that the testing period
is successful. In addition, pre-testing
will give carriers an opportunity to see
how their networks and testing software
and hardware perform and make any
changes necessary. The Commission
directs the Bureaus to amend the
performance measures as appropriate
based on the information learned and
experience gained from the pre-testing
period.
73. Several industry associations
support the approach the Commission
adopts to tie speed and latency testing
to a carrier’s deployment obligations for
the specific high-cost program under
which it receives support. Specifically,
ITTA, USTelecom, and WISPA advocate
aligning a carrier’s performance
obligations with its deployment
obligations, as well as designating the
first two quarters of testing as
‘‘transitional and not subject to non-compliance measures for any
performance deficiencies’’ to allow
carriers to become familiar with the
testing process. In addition, both NTCA
and WTA support linking testing
obligations to deployment obligations
and allowing carriers to have a period
of advanced testing before the mandated
testing period. The Commission agrees
with those commenters suggesting that
a period to ‘‘test the testing’’ will help
ensure that all carriers become familiar
with testing methodologies and
equipment, as well as prevent or reduce
future administrative issues with the
testing process.
74. Accordingly, the Commission
adopts the schedule in the following for
pre-testing and testing obligations
specific to the carriers receiving high-cost universal service support:
SCHEDULE FOR PRE-TESTING AND TESTING

Program                                         Pre-testing start date    Testing start date
CAF Phase II (Price-cap carrier funding) ....... January 1, 2020 ......... July 1, 2020.
RBE ............................................ January 1, 2021 ......... January 1, 2022.
Alaska Plan .................................... January 1, 2021 ......... January 1, 2022.
A–CAM I ........................................ January 1, 2021 ......... January 1, 2022.
A–CAM I Revised ................................ January 1, 2021 ......... January 1, 2022.
A–CAM II ....................................... January 1, 2022 ......... January 1, 2023.
Legacy Rate of Return .......................... January 1, 2022 ......... January 1, 2023.
CAF II Auction ................................. January 1, 2022 ......... January 1, 2023.
New NY Broadband Program ....................... January 1, 2022 ......... January 1, 2023.
75. Because the Commission
establishes pre-testing and testing
periods to coincide with a carrier’s
specific deployment obligations under
its respective high-cost mechanism,
recipients of CAF Phase II model-based
support will be the first to undertake the
pre-testing period on January 1, 2020.
These carriers are required to build out
to 80% of their supported locations by
December 31, 2019. Recipients of CAF
Phase II model-based support are
primarily larger carriers that are better
positioned to begin testing sooner due to
the availability of testing equipment and
solutions already in the marketplace for
these carriers. During the six-month pre-testing period, these carriers will be
required to test the speed and latency of
their networks for a weeklong period
once per quarter (first and second
quarters of 2020) and submit the results
to the Commission within one week of
the end of each quarter of pre-testing.
The testing period for CAF Phase II
model-based support recipients will
commence on July 1, 2020, with speed
and latency tests occurring for weeklong
periods in both the third and fourth
quarters of 2020 and results of that
testing submitted by July 2021.
76. RBE support recipients, as well as
rate-of-return carriers receiving model-based support under both the A–CAM I
and the revised A–CAM I, will follow a
similar, but slightly extended schedule.
The pre-testing period for these carriers
will commence on January 1, 2021 and
will last one full year to ensure that the
predominantly smaller carriers
receiving support under these
mechanisms have adequate time to
implement and test their technology and
software solutions to meet the
Commission’s performance testing
requirements. The Commission believes
that a longer pre-testing period than the
one it adopts for CAF Phase II model-based support recipients is warranted to
ensure that any concerns or issues with
the testing process are addressed prior
to these carriers being subject to support
reductions. During this one-year pre-testing period, this group of carriers will
be required to test the speed and latency
of their networks quarterly for a
weeklong period and submit the results
to the Commission within one week of
the end of each quarter of pre-testing.
The testing period for these carriers will
begin on January 1, 2022, and results
will be submitted to the Commission by
July 2023.
77. The Commission also adopts a
one-year pre-testing period for
recipients of support from the CAF
Phase II auction and A–CAM II, as well
as legacy rate-of-return support
recipients. However, the Commission
delays commencement of the pre-testing
period for these carriers to account for
certain timing considerations. For
example, the Commission is in the
process of authorizing CAF Phase II
auction winners to receive support, and
recently authorized rate-of-return
carriers electing the A–CAM II offer to
receive support. Additionally, to
increase administrative efficiency, the
Commission put legacy rate-of-return
carriers on the same schedule as A–
CAM II support recipients in light of the
fact that their deployment requirements
started at approximately the same time.
Thus, to allow time for carriers
receiving support under these
mechanisms not only to be authorized,
but also to deploy in a timely manner,
the Commission institutes a one-year
pre-testing period beginning January 1,
2022. The required testing period for
these carriers will commence on January
1, 2023. The Commission anticipates
that these support recipients will have
deployed to at least 40% of their
required locations by the end of 2022.
These carriers will be subject to the
same testing and reporting
requirements, for both pre-testing and
testing, as the other categories of carriers
described in this document, except that
these carriers will have a one-year pre-
test period rather than a six-month pre-test period.
78. The Commission disagrees with
those petitioners urging it to adopt a
blanket delay of implementation of the
testing requirements. NTCA contends
that the equipment necessary for the
most cost-effective method of testing is
not yet fully developed or widely
available, particularly in rural markets.
NTCA instead proposes that any
obligations be suspended or waived
until a later time—at least 12 months—
following the widespread availability of
modems with built-in testing capability
to the rural market. WTA agrees that the
necessary testing equipment is
unavailable at this time and thus
proposes that the Commission postpone
testing for rural LECs for at least two
years. WTA also proposes to delay
support reductions for non-compliance
to coincide with build-out milestones.
WISPA, ITTA, and NTCA support
proposals to postpone testing for a time
in order to permit equipment to become
more available and affordable.
79. The Commission is not convinced
that a blanket delay for all carriers
subject to its performance measure
requirements is necessary. As
petitioners and commenters observe,
large carriers and carriers serving more
urban markets are differently situated
than smaller carriers serving more rural
communities, and these carriers may
already be positioned to begin testing.
Though a minor delay for all carriers is
warranted to allow USAC time to
develop and implement specific IT
solutions, additional time beyond that
for the marketplace to develop technical
solutions is necessary only for a certain
subset of carriers. As WTA observes,
‘‘Whiteboxes for MBA testing are being
used by large carriers, but thus far [its
members] have generally been unable to
obtain Whitebox pricing estimates for
their likely levels of demand.’’
Similarly, NTCA explains that larger
carriers are able to purchase modems
and routers at scale or can develop their
own proprietary devices, but smaller
carriers oftentimes must purchase ‘‘off
the rack’’ technology solutions and may
have already deployed equipment that
cannot be easily retrofitted to
accommodate performance testing.
80. The Commission agrees that a one-size-fits-all approach does not reflect the
realities of the marketplace. However,
the tiered implementation schedule the
Commission adopts strikes a better
balance between the interests of carriers
in cost-effectively testing their
networks’ performance and its need to
ensure that those networks are
performing at the level promised. The
Commission further notes that WCB has
already announced a delay in the
requirement to begin testing and
reporting of speed and latency results
until the first quarter of 2020.
81. Given the changes to the testing
framework the Commission adopts, it
likewise declines WTA’s suggestion to
delay support reductions for non-compliant carriers until they are given
an opportunity to address any
deficiencies in their networks. The pre-testing period the Commission adopts
will provide carriers with ample
opportunity to identify any issues
within their network infrastructure that
may impact testing results and to rectify
those problems prior to undertaking the
required testing. As a result, carriers
should have minimal, if any,
technological or software challenges
that prevent them from meeting the
Commission’s performance
requirements and would require an
opportunity to cure. Moreover, because
carriers will be testing only those
locations that the carrier has certified
are deployed with the requisite speed,
the Commission does not see a
compelling reason to delay support
reductions for non-compliance.
82. The Commission likewise declines
to further delay testing and reporting
obligations for Alaska Communications
Systems (ACS). Because carriers serving
certain non-contiguous areas of the
United States face different operating
conditions and challenges from those
faced by carriers in the contiguous 48
states, the Commission concluded that it
was appropriate to adopt tailored
service obligations for each non-contiguous carrier that elected to
continue to receive frozen support
amounts for Phase II in lieu of the offer
of model-based support. For ACS, the
Commission adopted a 10-year term of
support to provide a minimum of 10/1
Mbps broadband service with a
round-trip provider network latency
requirement of 100 ms or less to a
minimum of 31,571 locations.
83. ITTA, USTelecom, and WISPA
propose that testing and reporting
obligations for ACS be delayed for one
year from the date on which they begin
for other CAF Phase II model-based
support recipients. These parties
contend that ACS should be given more
time because it is still in the process of
planning its CAF II deployment and has
not identified or reported the specific
customer locations that it intends to
serve. ITTA, USTelecom, and WISPA
also argue that additional time also is
necessary for ACS to identify one or
more suitable points at which traffic can
be aggregated for transport to the
continental U.S.
84. Because the Commission is
instituting a pre-testing period and
delaying the start of the required testing
period for CAF Phase II model-based
support recipients until July 1, 2020, the
Commission anticipates that ACS will
have had ample time to finalize
deployment plans and identify a
suitable aggregation point or points.
Thus, the Commission is unconvinced
by the argument advanced by ITTA,
USTelecom, and WISPA that these
issues warrant further delay for ACS.
Moreover, the Commission notes that
ACS already has passed its first
deployment milestone and certified to
locations in the HUBB. Thus, ACS
should be fully prepared to commence
testing on the same schedule as other
CAF Phase II support recipients.
85. NTCA requests clarification that
the Performance Measures Order
applies only to high-cost recipients with
mandatory build-out obligations.
Though some Alaskan rate-of-return
carriers are subject to defined build-out
obligations, NTCA observes that if a
carrier has ‘‘no mandated build-out
obligation, there is neither a clear speed
threshold to which a carrier can be
required to test nor a specified number
of locations at which the test can be
conducted.’’ NTCA argues that
additional proper notice-and-comment
rulemaking procedures would be
needed to subject carriers without
mandatory build-out obligations to any
required performance measures.
86. Absent any specific deployment
requirements, the Commission lacks a
standard for determining whether a
carrier’s deployment meets the required
performance measures. As a result,
consistent with NTCA’s request, the
Commission clarifies that only carriers
subject to defined build-out
requirements are required to test the
speed and latency of their networks in
accord with Commission rules. Alaskan
rate-of-return carriers that have
committed to maintaining existing
service levels therefore are not subject to
the performance measures adopted by
the Bureaus and modified herein.
87. Alaskan rate-of-return carriers that
have committed to defined build-out
obligations, however, must conduct
speed and latency testing of their
networks. That said, the Commission
recognizes that many of these carriers
lack the ability to obtain terrestrial
backhaul such as fiber, microwave, or
other technologies and instead must rely
exclusively on satellite backhaul.
Consistent with the standards the
Commission adopted for high-latency
service providers in the CAF Phase II
auction, it requires Alaska Plan carriers
using satellite or satellite backhaul to
certify that 95% or more of all testing
hour measurements of network round
trip latency are at or below 750 ms for
any locations using satellite technology.
The Commission also reaffirms that
these carriers must certify annually that
no terrestrial backhaul options exist,
and that they are unable to satisfy the
standard performance measures due to
the limited functionality of the available
satellite backhaul facilities. To the
extent that new terrestrial backhaul
facilities are constructed, or existing
facilities improve sufficiently to meet
the public interest obligations, the
Commission has required funding
recipients to meet the standard
performance measures within twelve
months of the new backhaul facilities
becoming commercially available.
III. Procedural Matters
88. Paperwork Reduction Act. This
document contains new information
collection requirements subject to the
Paperwork Reduction Act of 1995
(PRA), Public Law 104–13. It will be
submitted to the Office of Management
and Budget (OMB) for review under
Section 3507(d) of the PRA. OMB, the
general public, and other Federal
agencies will be invited to comment on
the new information collection
requirements contained in this
proceeding. In addition, the
Commission notes that pursuant to the
Small Business Paperwork Relief Act of
2002, Public Law 107–198, see 44 U.S.C.
3506(c)(4), the Commission previously
sought specific comment on how it
might further reduce the information
collection burden for small business
concerns with fewer than 25 employees.
89. Congressional Review Act. The
Commission has determined, and the
Administrator of the Office of
Information and Regulatory Affairs,
Office of Management and Budget,
concurs that these rules are non-major
under the Congressional Review Act, 5
U.S.C. 804(2). The Commission will
send a copy of this Order on
Reconsideration to Congress and the
Government Accountability Office
pursuant to 5 U.S.C. 801(a)(1)(A).
90. As required by the Regulatory
Flexibility Act of 1980 (RFA), as
amended, an Initial Regulatory
Flexibility Analysis (IRFA) was
incorporated in the USF/ICC
Transformation FNPRM, 76 FR 78384,
December 16, 2011. The Commission
sought written public comment on the
proposals in the USF/ICC
Transformation FNPRM, including
comment on the IRFA. The Bureaus
included a Final Regulatory Flexibility
Analysis (FRFA) in connection with the
Performance Measures Order. This
Supplemental Final Regulatory
Flexibility Analysis (Supplemental
FRFA) supplements the FRFA in the
Performance Measures Order to reflect
the actions taken in the Order on
Reconsideration and conforms to the
RFA.
91. The Order on Reconsideration
addresses issues raised by parties in
petitions for reconsideration and
applications for review of the
Performance Measures Order. In the
Performance Measures Order, the
Bureaus established how recipients of
CAF support must test their broadband
networks for compliance with speed
and latency metrics and certify and
report those results. In doing so, the
Bureaus adopted a flexible framework to
minimize the burden on small entities—
for example, by permitting carriers to
choose from one of three methodologies
to conduct the required testing.
92. The Order on Reconsideration
affirms certain key components of the
Performance Measures Order while
making several modifications to the
requirements. Specifically, in the Order,
the Commission maintains the choice
between three testing methodologies for
carriers to conduct required testing; ties
the implementation of speed and latency testing to a carrier’s deployment
obligations for the specific high-cost program under which it receives
support; adopts a pre-testing regime to give both carriers and the Commission
the opportunity to ensure that carriers are familiar with the testing regime and
minimize any administrative issues; maintains the previously adopted testing
sample sizes but clarifies that carriers must use the same locations for testing
both latency and speed; adopts a revised definition of FCC-designated Internet
Exchange Point (IXP); confirms that endpoints for testing are from the
customer’s side of any network being used to an FCC-designated IXP;
maintains the existing daily testing time period and quarterly testing
requirement; allows further flexibility for the timing of speed tests but maintains
the same frequency of latency testing; and reaffirms the compliance standards
and associated support reductions for
non-compliance.
93. There were no comments raised
that specifically addressed how
broadband service should be measured,
as presented in the USF/ICC
Transformation FNPRM IRFA.
Nonetheless, the Commission has
considered the potential impact of the
rules proposed in the IRFA on small
entities and reduced the compliance
burden for all small entities in order to
reduce the economic impact of the rules
enacted herein on such entities.
94. The RFA directs agencies to
provide a description of, and where
feasible, an estimate of the number of
small entities that may be affected by
the proposed rules, if adopted. The RFA
generally defines the term ‘‘small
entity’’ as having the same meaning as
the terms ‘‘small business,’’ ‘‘small
organization,’’ and ‘‘small governmental
jurisdiction.’’ In addition, the term
‘‘small business’’ has the same meaning
as the term ‘‘small-business concern’’
under the Small Business Act. A ‘‘small-business concern’’ is one which: (1) Is
independently owned and operated; (2)
is not dominant in its field of operation;
and (3) satisfies any additional criteria
established by the Small Business
Administration (SBA).
95. As noted in this document, the
Performance Measures Order included a
FRFA. In that analysis, the Bureaus
described in detail the small entities
that might be significantly affected.
Accordingly, in this FRFA, the
Commission hereby incorporates by
reference the descriptions and estimates
of the number of small entities from the
previous FRFA in the Performance
Measures Order.
96. The Commission expects the
amended requirements in the Order on
Reconsideration will not impose any
new or additional reporting or
recordkeeping or other compliance
obligations on small entities and, as
described in the following, will reduce
their costs.
97. The RFA requires an agency to
describe any significant alternatives that
it has considered in reaching its
proposed approach, which may include
(among others) the following four
alternatives: (1) The establishment of
differing compliance or reporting
requirements or timetables that take into
account the resources available to small
entities; (2) the clarification,
consolidation, or simplification of
compliance or reporting requirements
under the rule for small entities; (3) the
use of performance, rather than design,
standards; and (4) an exemption from
coverage of the rule, or any part thereof,
for small entities.
98. The Commission has taken further
steps which will minimize the
economic impact on small entities. In
the Order on Reconsideration, the
Commission adopts a delayed schedule
providing for a period of ‘‘pre-testing’’
for all carriers and later start dates for
carriers that do not receive CAF Phase
II model-based support. Thus, CAF
Phase II model-based support recipients,
which include only large carriers, must
begin pre-testing and testing in 2020,
whereas legacy rate-of-return carriers,
many of which are smaller entities,
must begin pre-testing in 2022 and
testing in 2023, and small carriers
receiving A–CAM I model support do
not begin pre-testing until 2021 and
testing in 2022. Pre-testing will give
carriers time to correct any issues with
their networks or with their testing
infrastructure without being subject to
support reductions, and the delayed
schedule for non-CAF Phase II carriers
will permit smaller entities even more
time to prepare to meet the
Commission’s testing requirements.
99. The Commission also now permits
greater flexibility for carriers to conduct
speed tests within an hour. In the Order
on Reconsideration, the Commission
clarifies that carriers may not
necessarily start testing speed at the
very beginning of each test hour.
Instead, a carrier must simply report a
successful speed test for each hour,
except a carrier that begins attempting a
speed test within the first 15 minutes of
an hour and checks for cross-talk in one-minute intervals (using the cross-talk
thresholds of 64 Kbps for download and
32 Kbps for upload) may record that no
test was successful during that test hour.
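A hedged sketch of that hourly rule, with illustrative names only (the Order does not prescribe an implementation):

    CROSS_TALK_DOWN_KBPS = 64   # download threshold for detecting subscriber cross-talk
    CROSS_TALK_UP_KBPS = 32     # upload threshold

    def hourly_speed_test(attempt_start_minute, minute_traffic_kbps, run_test):
        # minute_traffic_kbps: per-minute (download, upload) subscriber traffic in Kbps
        # observed while waiting; run_test() performs one speed measurement.
        if attempt_start_minute <= 15:
            for down, up in minute_traffic_kbps:
                if down < CROSS_TALK_DOWN_KBPS and up < CROSS_TALK_UP_KBPS:
                    return run_test()   # line quiet enough: run the test this hour
            return None                 # cross-talk persisted: no successful test recorded
        return run_test()               # otherwise a successful test must be reported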
100. Finally, the Commission clarifies
that carriers may use the same
subscriber locations for testing both
speed and latency, halving the potential
burdens for carriers that may have
otherwise believed it necessary to test
separate subscriber locations for speed
and latency. This clarification is most
significant for the smallest carriers,
which may use less automated means of
testing than larger carriers.
IV. Ordering Clauses
Accordingly, it is ordered that,
pursuant to the authority contained in
sections 1–4, 5, 201–206, 214, 218–220,
251, 252, 254, 256, 303(r), 332, 403, and
405 of the Communications Act of 1934,
as amended, and section 706 of the
Telecommunications Act of 1996, 47
U.S.C. 151–155, 201–206, 214, 218–220,
251, 252, 254, 256, 303(r), 403, and 405,
the Order on Reconsideration is
adopted, effective thirty (30) days after
publication of the text or summary
thereof in the Federal Register, except
for paragraphs 15, 16, 19, 22, 23, 26, 31
through 38, 43 through 49, 52, 53, 64,
and 75 through 91, which contain new
or modified information collection
requirements and will not be effective
until approved by the Office of
Management and Budget. The
Commission will publish a document in
the Federal Register announcing the
effective date for those sections not yet
effective. It is the Commission’s
intention in adopting these rules that if
any of the rules that the Commission
retains, modifies, or adopts in this
document, or the application thereof to
any person or circumstance, are held to
be unlawful, the remaining portions of
the rules not deemed unlawful, and the
application of such rules to other
persons or circumstances, shall remain
in effect to the fullest extent permitted
by law.
101. It is further ordered that,
pursuant to the authority contained in
section 405 of the Communications Act
of 1934, as amended, 47 U.S.C. 405, and
§§ 0.331 and 1.429 of the Commission’s
rules, 47 CFR 0.331 and 47 CFR 1.429,
the Petition for Reconsideration and
Clarification filed by USTELECOM—
THE BROADBAND ASSOCIATION,
ITTA—THE VOICE OF AMERICA’S
BROADBAND PROVIDERS, and the
WIRELESS INTERNET SERVICE
PROVIDERS ASSOCIATION on
September 19, 2018 is granted in part
and denied in part to the extent
described herein, and the Petition for
Partial Reconsideration filed by
MICRONESIAN
TELECOMMUNICATIONS
CORPORATION on September 19, 2018
is denied.
102. It is further ordered that,
pursuant to the authority contained in
section 5(c)(5) of the Communications Act of
1934, as amended, 47 U.S.C. 155(c)(5),
and § 1.115(g) of the Commission’s
rules, 47 CFR 1.115(g), the Application
for Review and Request for Clarification
filed by NTCA—THE RURAL
BROADBAND ASSOCIATION on
September 19, 2018 and the Application
for Review filed by WTA—
ADVOCATES FOR BROADBAND on
September 19, 2018, are granted in part
and denied in part to the extent
described herein.
List of Subjects in 47 CFR Part 54
Communications common carriers,
Health facilities, Infants and children,
internet, Libraries, Reporting and
recordkeeping requirements, Schools,
Telecommunications, Telephone.
Federal Communications Commission.
Marlene Dortch,
Secretary.
Final Rules
For the reasons discussed in the
preamble, the Federal Communications
Commission amends 47 CFR part 54 as
follows:
PART 54—UNIVERSAL SERVICE
■ 1. The authority citation for part 54 continues to read as follows:
Authority: 47 U.S.C. 151, 154(i), 155, 201,
205, 214, 219, 220, 254, 303(r), 403, and
1302, unless otherwise noted.
■ 2. Amend § 54.320 by revising paragraphs (d)(1)(ii) and (iii), the first sentence of paragraph (d)(1)(iv)(A), and paragraph (d)(2) to read as follows:
§ 54.320 Compliance and recordkeeping
for the high-cost program.
* * * * *
(d) * * *
(1) * * *
* * * * *
(ii) Tier 2. If an eligible
telecommunications carrier has a
compliance gap of at least 15 percent
but less than 25 percent of the number
of locations that the eligible
telecommunications carrier is required
to have built out to or, in the case of
Alaska Plan mobile-carrier participants,
population covered by the specified
technology, middle mile, and speed of
service in the carrier’s approved
performance plan, by the interim
milestone, USAC will withhold 15
percent of the eligible
telecommunications carrier’s monthly
support for that support area and the
eligible telecommunications carrier will
be required to file quarterly reports.
Once the eligible telecommunications
carrier has reported that it has reduced
the compliance gap to less than 15
percent of the required number of
locations (or population, if applicable)
for that interim milestone for that
support area, the Wireline Competition
Bureau or Wireless Telecommunications
Bureau will issue a letter to that effect,
USAC will stop withholding support,
and the eligible telecommunications
carrier will receive all of the support
that had been withheld. The eligible
telecommunications carrier will then
move to Tier 1 status.
(iii) Tier 3. If an eligible
telecommunications carrier has a
compliance gap of at least 25 percent
but less than 50 percent of the number
of locations that the eligible
telecommunications carrier is required
to have built out to by the interim
milestone, or, in the case of Alaska Plan
mobile-carrier participants, population
covered by the specified technology,
middle mile, and speed of service in the
carrier’s approved performance plan,
USAC will withhold 25 percent of the
eligible telecommunications carrier’s
monthly support for that support area
and the eligible telecommunications
carrier will be required to file quarterly
reports. Once the eligible
telecommunications carrier has reported
that it has reduced the compliance gap
to less than 25 percent of the required
number of locations (or population, if
applicable) for that interim milestone
for that support area, the Wireline
Competition Bureau or Wireless
Telecommunications Bureau will issue
a letter to that effect, and the eligible telecommunications carrier will move to Tier 2 status.
(iv) * * *
(A) USAC will withhold 50 percent of
the eligible telecommunications
carrier’s monthly support for that
support area, and the eligible
telecommunications carrier will be
required to file quarterly reports. * * *
* * * * *
(2) Final milestone. Upon notification
that the eligible telecommunications
carrier has not met a final milestone, the
eligible telecommunications carrier will
have twelve months from the date of the
final milestone deadline to come into
full compliance with this milestone. If
the eligible telecommunications carrier
does not report that it has come into full
compliance with this milestone within
twelve months, the Wireline
Competition Bureau—or Wireless
Telecommunications Bureau in the case
of mobile carrier participants—will
issue a letter to this effect. In the case
of Alaska Plan mobile carrier
participants, USAC will then recover
the percentage of support that is equal
to 1.89 times the average amount of
support per location received by that
carrier over the support term for the
relevant percentage of population. For
other recipients of high-cost support,
USAC will then recover the percentage
of support that is equal to 1.89 times the
average amount of support per location
received in the support area for that
carrier over the term of support for the
relevant number of locations plus 10
percent of the eligible
telecommunications carrier’s total
relevant high-cost support over the
support term for that support area.
Where a recipient is unable to
demonstrate compliance with a final
performance testing milestone, USAC
will recover the percentage of support
that is equal to 1.89 times the average
amount of support per location received
in the support area for the relevant
number of locations for that carrier plus
10 percent of the eligible
telecommunications carrier’s total
relevant high-cost support over the
support term for that support area, the
total of which will then be multiplied
by the percentage of time since the
carrier was last able to demonstrate
compliance based on performance
testing, on a quarterly basis. In the event
that a recipient fails to meet a final
milestone both for build-out and
performance compliance, USAC will
recover the total of the percentage of
support that is equal to 1.89 times the
average amount of support per location
received by that carrier over the support
term for the relevant number of
locations to which the carrier failed to
build out; the percentage of support that
is equal to 1.89 times the average
amount of support per location received
in the support area for the relevant
number of locations for that carrier
multiplied by the percentage of time
since the carrier was last able to
demonstrate compliance based on
performance testing; and 10 percent of
the eligible telecommunications
carrier’s total relevant high-cost support
over the support term for that support
area.
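For illustration only, the sketch below restates two of the computations in § 54.320(d) as revised above: the interim-milestone withholding tiers in paragraph (d)(1) and the build-out support recovery in paragraph (d)(2). The function names and simplified inputs are assumptions, Tier 1 handling is omitted because paragraph (d)(1)(i) is not reproduced in this document, and actual determinations are made by USAC under the rule text, not by any such script.

```python
# Hypothetical sketch of two computations in Sec. 54.320(d); names and
# simplified inputs are assumptions, and the rule text above controls.

def withholding_percentage(compliance_gap: float) -> int:
    """Map an interim-milestone compliance gap (fraction of required locations
    or, for Alaska Plan mobile carriers, population not yet reached) to the
    monthly support withholding described in paragraph (d)(1)."""
    if compliance_gap >= 0.50:
        return 50  # Tier 4 (paragraph (d)(1)(iv)(A))
    if compliance_gap >= 0.25:
        return 25  # Tier 3
    if compliance_gap >= 0.15:
        return 15  # Tier 2
    return 0       # Tier 1 treatment is set out in (d)(1)(i), not reproduced here

def final_buildout_recovery(avg_support_per_location: float,
                            unbuilt_locations: int,
                            total_relevant_support: float) -> float:
    """Recovery under paragraph (d)(2) for a missed final build-out milestone:
    1.89 x average support per location for the relevant number of locations,
    plus 10 percent of total relevant high-cost support for the support area."""
    return (1.89 * avg_support_per_location * unbuilt_locations
            + 0.10 * total_relevant_support)

# Example with assumed figures: $1,500 average support per location, 40 unbuilt
# locations, and $2,000,000 of total relevant support.
print(withholding_percentage(0.30))                      # 25
print(final_buildout_recovery(1500.0, 40, 2_000_000.0))  # 313400.0
```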
[FR Doc. 2019–26448 Filed 12–6–19; 8:45 am]
BILLING CODE 6712–01–P
DEPARTMENT OF COMMERCE
50 CFR Part 622
[Docket No. 191202–0098]
RIN 0648–BI98
Fisheries of the Caribbean, Gulf of Mexico, and South Atlantic; Snapper-Grouper Fishery of the South Atlantic Region; Amendment 42
AGENCY: National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce.
ACTION: Final rule.
SUMMARY: NMFS implements management measures described in Amendment 42 to the Fishery Management Plan (FMP) for the Snapper-Grouper Fishery of the South Atlantic Region (Amendment 42), as prepared and submitted by the South Atlantic Fishery Management Council (South Atlantic Council). This final rule adds three new devices to the Federal regulations as options for fishermen with Federal commercial or charter vessel/headboat permits for South Atlantic snapper-grouper to meet existing requirements for sea turtle release gear, and updates the regulations to simplify and clarify the requirements for other sea turtle release gear. This final rule also modifies the FMP framework procedure to allow for future changes to release gear and handling requirements for sea turtles and other protected resources. The purpose of this final rule is to allow the use of new devices to safely handle and release incidentally captured sea turtles, clarify existing requirements, and streamline the process for making changes to the release devices and handling procedures for sea turtles and other protected species.
DATES: This final rule is effective on January 8, 2020. The incorporation by reference of certain publications listed in this final rule is approved by the Director of the Federal Register as of January 8, 2020.
ADDRESSES: Electronic copies of Amendment 42 may be obtained at www.regulations.gov or from the Southeast Regional Office website at https://www.fisheries.noaa.gov/action/amendment-42-modifications-sea-turtle-release-gear-and-framework-procedure-snapper-grouper. Amendment 42 includes a fishery impact statement, a regulatory impact review, and a Regulatory Flexibility Act (RFA) analysis.
FOR FURTHER INFORMATION CONTACT: Frank Helies, NMFS Southeast Regional Office, telephone: 727–824–5305; email: frank.helies@noaa.gov.
SUPPLEMENTARY INFORMATION: NMFS and
the South Atlantic Council manage the
snapper-grouper fishery under the FMP.
The FMP was prepared by the South
Atlantic Council and is implemented by
NMFS through regulations at 50 CFR
part 622 under the authority of the
Magnuson-Stevens Fishery
Conservation and Management Act
(Magnuson-Stevens Act) (16 U.S.C. 1801
et seq.).
On June 13, 2019, NMFS published
the notice of availability for
Amendment 42 in the Federal Register
and requested public comment (84 FR
27576). On September 17, 2019, NMFS
published a proposed rule for
Amendment 42 in the Federal Register
and requested public comment (84 FR
48890). On September 5, 2019, the
Secretary of Commerce approved
Amendment 42 under section 304(a)(3)
of the Magnuson-Stevens Act.
Amendment 42 and the proposed rule
outline the rationale for the actions
contained in this final rule. A summary
of the management measures described
in Amendment 42 and implemented by
this final rule is provided below.
Management Measures Contained in
This Final Rule
This final rule adds three new sea
turtle handling and release devices to
the Federal regulations, clarifies the
requirements for other required gear,
and modifies the FMP framework
procedure to include future changes to
release gear and handling requirements
for sea turtles and other protected
resources.
New Sea Turtle Release Gear
For vessels with Federal commercial
and charter vessel/headboat permits for
South Atlantic snapper-grouper, this
final rule adds three new devices to the
Federal regulations that have been
approved for use by NMFS’ Southeast
Fisheries Science Center (SEFSC) to
safely handle and release sea turtles,
and provide more options for fishermen
to fulfill existing requirements. Details
for these new devices can be found in
Amendment 42, the proposed rule, and
the 2019 NMFS Technical
Memorandum titled, ‘‘Careful Release
Protocols for Sea Turtle Release with
Minimal Injury’’ (Release Protocols),
which is published by the SEFSC.
Complete construction specifications for
all SEFSC-approved handling and
release devices are included in the 2019
NMFS SEFSC Technical Memorandum
titled, ‘‘Design Standards and
Equipment for Careful Release of Sea
Turtles Caught in Hook-and-Line
Fisheries’’. Both documents are
available at https://www.fisheries.noaa.gov/southeast/endangered-species-conservation/sea-turtle-and-smalltooth-sawfish-release-gear-protocols. NMFS expects the new
release devices in this final rule will
increase flexibility for fishermen and
regulatory compliance within the
snapper-grouper fishery, which may
result in positive benefits to sea turtles.
Two of the new sea turtle handling
devices are a collapsible hoop net and
a sea turtle hoist (net). Both of these
devices are more compact versions of
the approved long-handled dip net, and
could be used for bringing an
incidentally captured sea turtle on
board the fishing vessel to remove
fishing gear from the sea turtle. For the
collapsible hoop net, the net portion is
attached to hoops made of flexible
stainless steel cable; when the
collapsible hoop net is folded over on
itself for storage, its size reduces to
[Federal Register Volume 84, Number 236 (Monday, December 9, 2019)]
[Rules and Regulations]
[Pages 67220-67236]
From the Federal Register Online via the Government Publishing Office [www.gpo.gov]
[FR Doc No: 2019-26448]
=======================================================================
-----------------------------------------------------------------------
FEDERAL COMMUNICATIONS COMMISSION
47 CFR Part 54
[WC Docket No. 10-90; FCC 19-104]
Connect America Fund
AGENCY: Federal Communications Commission.
ACTION: Final rule.
-----------------------------------------------------------------------
SUMMARY: In this document, the Federal Communications Commission
(Commission) reviews performance measures established by the Wireline
Competition Bureau (WCB), the Wireless Telecommunications Bureau, and
the Office of Engineering and Technology (collectively the Bureaus) for
recipients of Connect America Fund (CAF) high-cost universal service
support to ensure that those standards strike the right balance between
ensuring effective use of universal service funds while granting the
flexibility providers need given the practicalities of network
deployment in varied circumstances.
DATES: Effective January 8, 2020.
FOR FURTHER INFORMATION CONTACT: Suzanne Yelen, Wireline Competition
Bureau, (202) 418-7400 or TTY: (202) 418-0484.
SUPPLEMENTARY INFORMATION: This is a summary of the Commission's Order
on Reconsideration in WC Docket No. 10-90; FCC 19-104, adopted on
October 25, 2019 and released on October 31, 2019. The full text of
this document is available for public inspection during regular
business hours in the FCC Reference Center, Room CY-A257, 445 12th
Street SW, Washington, DC 20554 or at the following internet address:
https://docs.fcc.gov/public/attachments/FCC-19-104A1.pdf
I. Introduction
1. The Commission has long recognized that ``[a]ll Americans
[should] have access to broadband that is capable of enabling the kinds
of key applications that drive the Commission's efforts to achieve
universal broadband, including education (e.g., distance/online
learning), health care (e.g., remote health monitoring), and person-to-
person communications (e.g., Voice over internet Protocol (VoIP) or
online video chat with loved ones serving overseas).'' To that end, the
Commission has invested significant Universal Service Fund support for
the deployment of broadband-capable networks in high cost, rural areas.
2. But only fast and responsive networks will allow Americans to
fully realize the benefits of connectivity. That is why the Commission
requires recipients of universal service support in high cost areas to
deploy broadband networks capable of meeting minimum service standards.
These standards protect taxpayers' investment and ensure that carriers
receiving this support deploy networks that meet the performance
standards they promised to deliver to rural consumers. At the same
time, the Commission recognizes that each carrier faces unique
circumstances, and that one set of prescriptive rules may not make
sense for every one of them. To accommodate this practical reality, the
Commission's rules provide flexibility, taking into account the
operational, technical, and size differences among providers when
establishing minimum standards, to ensure that even the smallest rural
carriers can meet testing requirements without facing excessive
burdens.
3. In the Order on Reconsideration, the Commission reviews
performance measures established by the Bureaus for recipients of CAF
high-cost universal service support to ensure that those standards
strike the right balance between ensuring effective use of universal
service funds while granting the flexibility providers need given the
practicalities of network deployment in varied circumstances. Several
petitions for reconsideration and applications for review of the
Performance Measures Order, 83 FR 42052, August 20, 2018, propose
changes to these performance measures. Here, the Commission rejects the
proposed changes where it finds that the Bureaus' approach strikes the
right balance. Where the Commission finds that the Bureaus' approach
does not--for example, where it concludes that greater flexibility is
warranted than was offered under the Bureaus' original methodology--the
Commission adjusts its rules accordingly. Finally, the Commission
clarifies the Bureaus' approach where doing so will help resolve
stakeholder confusion.
II. Discussion
4. In the Order on Reconsideration, the Commission reexamines each
of the described performance measure requirements in this document. As
a result, the Commission adopts several modifications. The Commission
believes these changes will alleviate concerns expressed by carriers by
increasing the time for carriers to meet certain deadlines and further
minimizing the costs associated with compliance, yet still ensure that
carriers meet their performance obligations. In short, the refinements
to the Bureaus' approach adopted in the Performance Measures Order will
further the overarching goal of the Performance Measures Order; namely,
to ensure that carriers deliver broadband services with the speed and
latency required while providing flexibility to enable carriers of all
sizes to choose how to conduct the required performance testing in the
manner most appropriate for each individual carrier.
5. Under the Performance Measures Order, all high-cost support
recipients serving fixed locations must perform speed and latency tests
from the customer premises of an active subscriber to a remote test
server located at or reached by passing through an FCC-designated
internet Exchange Point (IXP). In the USF/ICC Transformation Order, 76
FR 73830, November 29, 2011, the Commission decided that speed and
latency should be measured on each eligible telecommunications carrier's (ETC's) access network from the end-user interface to the nearest
internet access point, i.e., the internet gateway, which is the closest
peering point between the broadband provider and the public internet
for a given consumer connection. Subsequently, in the CAF Phase II
Price Cap Service Obligation Order, 78 FR 70881, November 27, 2013, WCB
stated that latency should be tested to an IXP, defined as occurring in
any of ten different U.S. locations, almost all of which are locations
used in the MBA program because they are geographically distributed
major peering locations. The Bureaus expanded the list to permit
testing to six additional metropolitan areas to ensure that most
mainland U.S. locations are within 300 miles of an FCC-designated IXP
and that all are within approximately 500 air miles of one. Further,
the Bureaus permitted providers to use any FCC-
[[Page 67221]]
designated IXP for testing purposes, rather than limiting testing to
the provider's nearest IXP. Providers serving non-contiguous areas
greater than 500 air miles from an FCC-designated IXP were also
permitted to conduct testing between the customer premises and the
point at which traffic is aggregated for transport to the continental
U.S.
6. The Commission agrees with the Bureaus that the speed and
latency of networks of carriers receiving support through the various
high-cost support mechanisms should be tested between the customer
premise of an active subscriber and an FCC-designated IXP. This
approach is consistent with the Commission's determination in the USF/
ICC Transformation Order that ``actual speed and latency [must] be
measured on each ETC's access network from the end-user interface to the
nearest internet access point.'' Measuring the performance of a
consumer's connection to an IXP better reflects the performance that a
carrier's customers experience. As the Commission observed when it
first adopted performance measures for CAF Phase II model-based support
recipients, ``[t]esting . . . on only a portion of the network
connecting a consumer to the internet core will not show whether that
customer is able to enjoy high-quality real-time applications because
it is network performance from the customer's location to the
destination that determines the quality of the service from the
customer's perspective.''
7. The Commission therefore disagrees with those commenters arguing
that it should require testing over a shorter span. For example, NTCA
seeks modification of the testing requirements to account for
performance only on ``portions of the network owned by the USF
recipient and the next-tier ISP from which that USF recipient procures
capacity directly.'' NTCA argues that requiring testing to an FCC-
designated IXP imposes liability on a carrier for conditions beyond its
control and violates the Act by applying obligations to parts of the
network that are not supported by USF funding. Alternatively, NTCA
requests that the Commission provide a ``safe harbor'' to protect a
carrier from off-network issues that affect its test measurements. WTA
similarly contends that testing to an FCC-designated IXP makes carriers
responsible for portions of the connection over which they have no
control. WTA instead proposes a two-tiered framework consisting of a
network-only test for purposes of high-cost compliance and customer-to-
IXP testing to respond to customer complaints, with unresolved network-
only problems being subject to non-compliance support reductions.
Finally, Vantage Point seeks clarity on the initiation point for
performance testing within the customer premises, and contends that the
endpoint for testing should be at or reached by passing through a
carrier's next tier ISP.
8. The Commission disagrees with petitioners that testing to an
FCC-designated IXP, rather than the edge of a carrier's network, makes
a carrier responsible for network elements it does not control, and the
Commission rejects testing only on a carrier's own network as
inadequate. As the Bureaus explained, carriers--even smaller ones--do
have some influence and control over the type and quality of internet
transport they purchase. The Commission expects a carrier to purchase
transport of a sufficient quality that enables it to provide the
requisite level of service expected by consumers and required by the
Commission's rules. However, in the event a carrier fails to meet its
performance obligations because the only transport available would
demonstrably degrade the measured performance of the carrier's network,
the carrier can seek a waiver of the performance measures requirements.
The Commission is similarly unpersuaded by WTA's two-tiered testing
proposal. Adopting WTA's proposal to conduct its required tests over
only half of the full testing span would only provide the Commission
with insight into the customer experience on half of the network
between the customer and the IXP. Given that the Commission's aim is to
ensure that customers are able to enjoy high-quality real-time
applications, it declines to adopt WTA's proposed approach.
9. Finally, the Commission provides additional clarity on both the
initiation point and endpoint for testing. As the Commission has noted
in this document, one of the chief purposes for implementing
performance requirements is to ensure that customers are receiving the
expected levels of service that carriers have committed to providing.
Testing from any place other than the customer side of any carrier
network equipment used in providing a customer's connection may skew
the testing results and not provide an accurate reflection of the
customer's broadband experience. As Vantage Point notes, testing in
this manner would make it ``difficult to ensure that the test was being
performed on the network path actually used by the customer.'' Thus,
the Commission clarifies that testing should be conducted from the
customer side of any network equipment that is being used.
10. Definition of FCC-designated internet Exchange Point. Given the
Commission's commitment to testing the performance of connections
between consumers and FCC-designated IXPs, it also takes this
opportunity to clarify which facilities qualify as FCC-designated IXPs
for purposes of performance testing.
11. USTelecom, ITTA, and WISPA request clarification that ETCs are
permitted to use ``the nearest internet access point,'' as specified in
the USF/ICC Transformation Order, which may not necessarily be a
location specified in the Performance Measures Order. They also seek
clarification that ETCs may test to servers that are within the
provider's own network (i.e., on-net servers). In subsequent filings,
the petitioners suggest that there should be a criteria-based approach
to defining the testing endpoint. Specifically, they propose that
testing occur ``from the end-user interface to the first public
internet gateway in the path of the CAF-supported customer that
connects through a transitive internet Autonomous System,'' (ASN) and
``that the Commission establish a safe harbor where the transitive
internet AS which the gateway hosts includes one or more router(s) that
advertise(s) [ASN] organizations that are listed on the Center for
Applied internet Data Analysis (CAIDA) `AS Organization Rank List.' ''
The petitioners propose that testing occurring through a ``safe
harbor'' ASN ``would be considered valid without further inquiry.''
12. The Commission concludes that the Performance Measures Order's
designation of certain metropolitan areas as qualifying IXPs is too
ambiguous. It is not clear where the boundaries of a designated IXP
metropolitan area begin and end. Thus, drawing on the petitioners'
proposal, the Commission now provides a revised definition of FCC-
designated IXP that is more specific and better designed to account for
the way internet traffic is routed. For testing purposes, the
Commission defines an FCC-designated IXP as any building, facility, or
location housing a public internet gateway that has an active interface
to a qualifying ASN. Such a building, facility, or location could be
either within the provider's own network or outside of it. The
Commission uses the term ``qualifying ASN'' to ensure that the ASN can
properly be considered a connection to the public internet. The
Commission notes that in the USF/ICC
[[Page 67222]]
Transformation Order, it finds that the internet gateway is the
``peering point between the broadband provider and the public
internet'' and that public internet content is ``hosted by multiple
service providers, content providers and other entities in a
geographically diverse (worldwide) manner.'' The criteria the
Commission uses to determine FCC-designated IXPs are designed to ensure
that the peering point is sufficiently robust such that it can be
considered a connection to the public internet and not simply another
intervening connection point. The Commission designates 44 major North
American ASNs using CAIDA's ranking of Autonomous Systems and other
publicly available resources as ``safe harbors.'' The Commission
directs the Bureaus to update this list of ASNs periodically using the
CAIDA ranking of ASNs, PeeringDB, and other publicly available
resources. Providers may test to a test server located at or reached by
passing through any building, facility, or location housing a public
internet gateway that has an active interface to one of these
qualifying ASNs or may petition the Bureaus to add additional ASNs to
the list. The Bureaus will determine whether any ASN included in a
carrier petition is sufficiently similar to qualifying ASNs that it
should be added to the list of qualifying ASNs.
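As a minimal sketch of how the revised definition could be applied in a carrier's test tooling, the snippet below checks a gateway's autonomous system number against a set of qualifying ASNs. The ASN values shown are documentation placeholders, not the Commission's list of 44 qualifying ASNs, and the function name is an assumption.

```python
# Minimal sketch of validating a public internet gateway against the list of
# qualifying ("safe harbor") ASNs. The ASN values are placeholders from the
# range reserved for documentation, not the Bureaus' actual list.

QUALIFYING_ASNS = {64496, 64497, 64498}  # placeholder values only

def is_fcc_designated_ixp(gateway_asn: int, qualifying_asns=QUALIFYING_ASNS) -> bool:
    """A building, facility, or location housing a public internet gateway
    qualifies for testing purposes if the gateway has an active interface to a
    qualifying ASN."""
    return gateway_asn in qualifying_asns

print(is_fcc_designated_ixp(64496))  # True for a placeholder qualifying ASN
print(is_fcc_designated_ixp(65000))  # False
```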
13. The Bureaus also established a daily testing period for speed
and latency tests, requiring carriers to conduct tests between 6:00
p.m. and 12:00 a.m. local time, including weekends. The testing window
the Bureaus adopted reflects a slight expansion of the testing window
used for the MBA. The Bureaus reasoned that MBA data indicated a peak
period of internet usage every evening but noted that they would
revisit this requirement periodically ``to determine whether peak
internet usage times have changed substantially.''
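A carrier's test scheduler simply needs to confirm that each test falls inside this window. The check below is a sketch under the assumption that timestamps are already expressed in the subscriber's local time; the function name is illustrative.

```python
# Sketch of a daily testing window check (6:00 p.m. to 12:00 a.m. local time,
# every day of the week). Assumes the timestamp is already in local time.

from datetime import datetime

def in_daily_testing_window(local_time: datetime) -> bool:
    """Return True if the local timestamp falls between 18:00 and midnight."""
    return local_time.hour >= 18

print(in_daily_testing_window(datetime(2020, 1, 8, 19, 30)))  # True
print(in_daily_testing_window(datetime(2020, 1, 8, 12, 0)))   # False
```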
14. Petitioners and commenters urge the Commission to reconsider
the daily test period requirement to account for the usage patterns of
rural consumers, as well as the conditions and characteristics of rural
areas. WTA notes that the MBA data cited by the Bureaus likely reflect
the usage patterns of urban consumers, rather than consumers in rural
areas that ``are typically making personal and business use of their
household internet connections throughout the day.'' WTA contends that
there is likely to be increased congestion on rural networks during the
time period adopted by the Bureaus, potentially resulting in an
inaccurate or unrepresentative testing of the carrier's service. WTA
also argues that mandating testing during evening hours and weekends
requires rural carriers to adjust their regular daytime schedule,
creating staffing and financial hardships and potentially preventing
them from responding to other customer service issues. ITTA supports
this point, noting that ``evening and weekend test hours require RLECs
to re-schedule one or more technicians from their regular daytime
maintenance and installation duties and pay them premium or overtime
wages.'' ITTA also challenges the expansion of the daily test period
from 7 p.m. to 11 p.m. to 6 p.m. to 12 a.m., and requests flexibility
as to the specific hours that testing may be conducted.
15. The Commission declines to revisit the daily testing period at
this time. WTA provides no data to support its claim that rural
consumers are more active users of broadband service during daytime
hours than urban consumers. Moreover, the Commission's review of MBA
data from more rural areas indicates that these areas have similar peak
periods to urban areas. As the Commission has stated many times, a
primary goal for universal service is to ensure that customers in rural
areas receive the same level of service as those in urban areas. By
establishing the same testing window for urban and rural areas, the
Commission can confirm that consumers in rural areas are not receiving
substandard service as compared to consumers in urban areas during the
same time periods. Additionally, WTA's concern that testing during the
peak period may degrade a consumer's broadband experience is unfounded.
As the Commission previously observed, the small amount of data
required for speed testing will have no noticeable effect on network
congestion. The Commission reminds carriers that it provides them the
flexibility to choose whether to stagger their tests over the course of
the testing period, so long as they do not violate any other testing
requirements.
16. The Commission also disagrees with WTA and ITTA that the
current daily testing period will require rural carriers to devote
additional personnel hours to implement the Commission's performance
testing requirements. Once the testing regime is implemented and
carriers have installed the necessary technology and software to test
the speed and latency of their networks on a routine basis, the
Commission does not anticipate that extensive staffing will be required
to monitor the testing process. Because the technological testing
options that the Commission has allowed carriers to use are all
relatively automated, carriers should not have to adjust schedules to
ensure staffing during evenings and weekends. Additionally, the
Commission notes that the Bureaus expanded the testing period from 7
p.m. to 11 p.m. to 6 p.m. to 12 a.m. based on several comments from
parties that requested a longer testing period. Adding one additional
hour on both the front and back end of the testing period allows a
carrier's testing to capture the ramp up and ramp down periods before
and after peak time, providing a more accurate picture of whether
customers are receiving the required level of service. The Commission
also reminds parties that the Bureaus committed to revisiting
periodically the daily testing window to ensure that the established
hours continue to reflect the usage habits of consumers.
17. The Bureaus required a specified number of speed tests during
each testing window. In particular, the Performance Measures Order
required a minimum of one download test and one upload test per testing
hour at each subscriber test location. Providers were required to start
separate download and upload speed tests at the beginning of each test
hour window, and, after deferring a test due to cross-talk (e.g.,
traffic to and from the consumer's location that could impact
performance testing), providers were required to reevaluate whether the
consumer load exceeded the cross-talk threshold every minute until the
speed test can be run or the one-hour test window ends.
18. In their Petition for Reconsideration, USTelecom, ITTA, and
WISPA request clarification that recipients are afforded flexibility in
commencing hourly tests. They argue that ``[i]t is not clear from the
Performance Measures Order . . . whether `the beginning' of a test hour
window requires a recipient to commence testing at the top of the hour,
or whether testing must commence for all test subscribers at exactly
the same time.'' The petitioners state that carriers should only be
required to complete the test within the hour, and they should be able
to retry tests as frequently as their systems allow until a successful
test is administered, rather than retrying deferred tests every minute.
Noting that ``there should be no practical difference as to whether
testing occurs at the top, middle, or closer to [the] end of a testing
window,'' NTCA, NRECA, and UTC support the petitioners' request that
``the Commission reconsider the discrete and specific times at which
testing is to be conducted within each hour.'' Vantage Point likewise
proposes that the Commission permit carriers to distribute speed tests
within testing
[[Page 67223]]
hours in a way that minimizes network impact; otherwise, Vantage Point
asserts, requiring all speed testing to start at the beginning of each
hour would significantly burden test servers such that test results
would not be representative of customers' normal experience.
19. The Commission clarifies that providers do not have to begin
speed tests at the beginning of each test hour, as petitioners suggest.
In particular, the Commission agrees with Vantage Point that providing
greater flexibility in this regard will further minimize the impact of
any potential burden on the test servers during speed testing. However,
to ensure that there is enough data on carriers' speed performance,
providers must still conduct and report at least one download test and
one upload speed test per testing hour at each subscriber test
location, with one exception. A carrier that begins attempting speed
tests within the first fifteen minutes of a testing hour, and
repeatedly retries and defers the test at one-minute intervals due to
consumer load meeting the adopted cross-talk thresholds (i.e., 64 Kbps
for download tests or 32 Kbps for upload tests), may report that no
test was successfully completed during the test hour because of cross-
talk. A provider that does not attempt a speed test within the first 15
minutes of the hour and/or chooses to retry tests in greater than one-
minute intervals must, however, conduct and report a successful speed
test for the testing hour regardless of cross-talk. Although this
approach continues to differ slightly from MBA practice, the Commission
believes that it minimizes the possibility of network congestion at the
beginning of the testing hour while ensuring that it will have access
to sufficient testing data.
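The hourly speed-test logic described in this paragraph can be sketched as follows. The measurement callables (consumer_load_kbps and run_speed_test) are assumed to be supplied by the carrier's own test platform; they are not an FCC-provided interface, and the return format is an assumption for illustration.

```python
# Hypothetical sketch of the hourly speed-test deferral logic described above.
# The measurement callables are assumed to exist on the carrier's platform.

CROSS_TALK_KBPS = {"download": 64, "upload": 32}

def attempt_hourly_speed_test(direction, consumer_load_kbps, run_speed_test,
                              first_attempt_minute, retry_interval_minutes=1):
    """Attempt one speed test in a test hour, deferring while consumer load
    meets the cross-talk threshold. A test hour with no successful test may be
    reported as blocked by cross-talk only if attempts began within the first
    15 minutes of the hour and were retried at one-minute intervals."""
    minute = first_attempt_minute
    while minute < 60:
        if consumer_load_kbps(direction) < CROSS_TALK_KBPS[direction]:
            return {"status": "success", "minute": minute,
                    "measured_mbps": run_speed_test(direction)}
        minute += retry_interval_minutes
    cross_talk_exception = (first_attempt_minute < 15
                            and retry_interval_minutes == 1)
    return {"status": "cross_talk" if cross_talk_exception else "test_required",
            "minute": None, "measured_mbps": None}

# Example usage with stubbed measurement callables.
result = attempt_hourly_speed_test(
    "download",
    consumer_load_kbps=lambda direction: 10.0,  # below the 64 Kbps threshold
    run_speed_test=lambda direction: 27.3,      # hypothetical measured Mbps
    first_attempt_minute=5)
print(result)  # {'status': 'success', 'minute': 5, 'measured_mbps': 27.3}
```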
20. The Performance Measures Order established specific test
intervals within the daily test period for latency testing, requiring
carriers to conduct ``a minimum of one discrete test per minute, i.e.,
60 tests per hour, for each of the testing hours, at each subscriber
test location, with the results of each discrete test recorded
separately.'' Recognizing that cross-talk could negatively affect the
test results, the Bureaus provided flexibility for carriers to postpone
a latency test in the event that the consumer load exceeded 64 Kbps
downstream and to reevaluate the consumer load before attempting the
next test.
21. Several parties express concern with these requirements and
request reconsideration of the latency testing framework. USTelecom,
ITTA, and WISPA jointly contend that the Bureaus failed to provide
adequate notice for the frequency of latency testing and did not
justify departing from the MBA practice of combining speed and latency
testing under a unified framework. These parties further argue that
requiring latency testing once per minute will be administratively
burdensome for carriers by preventing them from combining the
instructions for testing into a single process and potentially
overloading and disrupting some testing methods. Instead, USTelecom,
ITTA, and WISPA propose that the number of latency tests should be
reduced to match the frequency of speed testing. Midcontinent also
supports aligning the frequency of speed and latency testing
requirements.
22. AT&T contends that testing once per minute ``is unnecessary and
arbitrary and capricious'' and likewise argues that the Commission
should permit carriers to test latency only once per hour. AT&T
supports its proposal by providing internal data purporting to
demonstrate no material difference between testing latency once per
minute versus testing once per hour. As a result, AT&T proposes that
the Commission require a minimum of one latency test per hour, but
provide flexibility to allow carriers to test more frequently if they
desire. ITTA concurs with AT&T's proposed approach.
23. Conversely, NTCA, NRECA, and UTC support the latency testing
framework adopted by the Bureaus. These parties observe that aligning
the frequency of speed and latency tests would ``risk undermining the
Commission's statutory mandate to ensure reasonably comparable services
in rural and urban areas'' because speed does not require as frequent
testing as latency in order to demonstrate compliance. In response,
USTelecom, ITTA, and WISPA again argue that the Bureaus failed to
adequately address the Administrative Procedure Act's notice
obligations or present any legal or factual basis for requiring
substantially more latency tests than speed tests.
24. The Commission declines to revise the determination of the
Bureaus that carriers must conduct latency testing once per minute.
Regarding parties' procedural arguments, the Commission notes that, in
the two Public Notices seeking comment on the performance measures, the
Bureaus specifically explained that adopting the Measuring Broadband
America (MBA) testing was under consideration. Indeed, many of the
performance testing requirements were derived from or influenced by the
Commission's experience with MBA testing. As such, parties had ample
notice that the testing regime adopted by the Bureaus, which is a less
burdensome variation of the MBA testing, was a potential option. Any
argument to the contrary is unfounded.
25. Complaints that the frequency of latency testing will affect
network performance also are speculative. The latency testing frequency
framework ultimately adopted by the Bureaus is substantially less
extensive than the MBA program testing. For example, MBA testing sends approximately 2,000 User Datagram Protocol (UDP) packets per hour (i.e., 2,000 latency tests per hour), and those individual results are summarized as a single reporting record. Conversely, the Bureaus adopted testing of 60 UDP packets per hour, approximately 3% of the typical MBA load. The
more intensive MBA test frequency has not been found to pose any
technical or other difficulties, so there is no reason to believe that
the vastly lower frequency of latency testing adopted by the Bureaus
will cause concerns. Requiring 60 UDP packets per hour rather than
2,000 balances the need for sufficient testing while minimizing the
burden of testing on carriers.
26. The Commission also agrees with the Bureaus that the disparity
in testing frequency between speed and latency reflects the different
type of testing necessary to determine whether carriers are meeting the
required benchmarks. The purpose of speed testing is to determine if
the network is properly provisioned to furnish the required speed and
whether the network provides sufficient throughput to handle uploads
and downloads at particular speeds and times. Because of the burden
that such testing puts on a carrier's network, the Bureaus adopted the
minimum number of tests necessary to ensure that consumers are
receiving broadband service at required speed levels. On the other
hand, latency testing indicates whether there is sufficient capacity in
the network to handle the level of traffic, which is of particular
importance when the network is experiencing high traffic load. In this
respect, latency is similar to a pulse rate and can vary substantially
as a result of several factors. Even if all these factors are unknown, frequent monitoring of latency reveals the network's ability to handle the various circumstances and factors affecting it. As
NTCA, NRECA, and UTC explain:
[T]here is logic in a protocol that tests for latency more
frequently than speed. The
[[Page 67224]]
impact of latency is measured in and discernible by milliseconds:
the frequency of testing aims to illuminate whether variables that
perforate performance are present. In contrast, speed contemplates a
steadier aspect of the network facility, and therefore does not
require as frequent testing to demonstrate compliance. Therefore, in
as much as latency-sensitive services and applications (including
but not limited to voice) are affected by millisecond variables,
NTCA, NRECA and UTC urge the Commission to maintain its rigorous
standards for latency testing.
And, in any event, conducting more tests for latency is to the carrier's benefit: because latency is variable, a larger number of measurements makes it less likely that outlier failures will affect the overall compliance rate.
27. The Commission appreciates AT&T's willingness to share its
internal data and analysis. However, AT&T's data reflect only the
capabilities of its own network and consisted of a very small sample
set--18 customers for one peak period in one instance and ``almost''
100 subscribers for one peak period in the other. The Commission also
notes that even AT&T's data demonstrated a substantial variation
between testing once per hour and once per minute. For example, in its testing of customers served by varying technologies, AT&T found that per-minute latency testing showed 1.17% of tests exceeding 100 ms, while once-per-hour testing showed 3.04% of tests exceeding 100 ms. A difference of nearly 2 percentage points is substantial when the latency standard allows no more than 5% of measurements above 100 ms.
28. Analysis undertaken by Commission staff confirms the importance
of more frequent testing to account for the variability associated with
latency. Commission staff compared the conclusions that AT&T--and
supported by ITTA--drew from its data to what the much larger MBA data
demonstrate. This analysis indicates that the risk of false positives
and false negatives (i.e., sample test results indicate that a carrier
fails, when given overall network performance, it should have passed,
or that a carrier passes, when given overall network performance, it
should have failed) varies significantly based on the number of
measurements per hour. Because the Commission's performance standard
for latency requires 95% of the latency measurements to be less than or
equal to 100 ms, a carrier would fail the standard if more than 5% of
its latency measurements are greater than 100 ms. In general, staff's
analysis found that a greater number of measurements reduces the impact
of data outliers and makes false positives and false negatives less
likely. For example, a single 200 ms data outlier among a sample of 10
latency measurements that otherwise are all under 100 ms would result
in the carrier's failing to meet the 95% threshold (i.e., only 9 out of
10 or 90% of the measurements would be at or under 100 ms). However, a
single data outlier of 200 ms in a sample of 100 latency measurements
would not, in the absence of at least five other measurements exceeding
100 ms, cause the carrier to fail (i.e., 99 out of 100 or 99% of the
measurements would be at or under 100 ms).
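The outlier arithmetic in this example is easy to verify directly. The short check below applies the 95 percent / 100 ms standard to the two hypothetical samples described above; the helper name is an assumption.

```python
# Worked check of the example above: one 200 ms outlier among 10 measurements
# fails the 95%/100 ms standard, while one outlier among 100 measurements passes.

def meets_latency_standard(measurements_ms, threshold_ms=100, required_fraction=0.95):
    compliant = sum(1 for m in measurements_ms if m <= threshold_ms)
    return compliant / len(measurements_ms) >= required_fraction

sample_10 = [50] * 9 + [200]     # 9/10 = 90% compliant
sample_100 = [50] * 99 + [200]   # 99/100 = 99% compliant
print(meets_latency_standard(sample_10))   # False (fails)
print(meets_latency_standard(sample_100))  # True (passes)
```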
29. Additionally, staff analysis of MBA data indicated that the
distribution of latency among carriers varies widely even within the
same minute. This means that latency varies significantly depending
upon the traffic on the network at any given time and does not vary in
the same way for each carrier or even within each day for each carrier.
Because of the countless number of distributions observed among
carriers reflected by the MBA data, the Commission concludes that a
smaller number of observations would not yield reliable testing
results. Thus, more testing provides the Commission with greater
ability to detect bad performance in cases where a carrier's latency is
consistently high. In other words, since the likelihood of failing or
passing the Commission's latency standard depends, to some degree, on
random noise, the more measurements taken by a carrier, the less likely
that random factors would cause it to fail the standard.
30. The figure in the following demonstrates staff's analysis of
the estimated probability of failure and associated risk of false
positive or false negative results with different numbers of
measurements from a range of latency distributions observed in the MBA
data. Each box (bar) represents the estimated probability of failure
for a given latency distribution. The difference in the probability of
failure between N number of measurements and N=2000 is the estimated
risk of a false positive (the test result indicates that a carrier
fails when it should have passed) and a false negative (the test result
indicates that a carrier passes when it should have failed). As
demonstrated, there is a much higher risk of a false positive or false
negative under AT&T's proposed once per hour latency measurement as
compared to a moderate risk from 60 measurements per hour.
[GRAPHIC] [TIFF OMITTED] TR09DE19.001
[[Page 67225]]
Thus, staff's analysis shows that, given the high variability of
latency, one of two things would occur if the Commission required only
one measurement per hour: either a few extreme measurements would cause
a carrier to fail the standard when, in fact, it should pass given its
overall performance, or the Commission would be unable to capture
consistent poor performance by a carrier that should fail based on the
overall performance of its network. As a result, a moderate-risk
approach of 60 measurements per hour strikes a balance between the
burden of testing on carriers and the risk of failure by carriers
caused by uncertainty.
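The staff analysis itself is not reproduced here, but the qualitative point can be illustrated with a rough Monte Carlo sketch under assumed conditions: each latency measurement is treated as independently exceeding 100 ms with a fixed underlying probability, which is a simplification of the MBA distributions the staff used. The parameters, trial counts, and function name are all illustrative assumptions.

```python
# Rough Monte Carlo sketch (not the staff analysis) of how the number of
# measurements per hour affects the chance that a sampled test fails the
# 95%/100 ms standard, under an assumed independent-exceedance model.

import random

def estimated_failure_probability(n_measurements, true_exceedance_rate, trials=2000):
    """Probability that more than 5% of n sampled measurements exceed 100 ms
    when each measurement independently exceeds 100 ms at the given rate."""
    failures = 0
    for _ in range(trials):
        exceedances = sum(random.random() < true_exceedance_rate
                          for _ in range(n_measurements))
        if exceedances / n_measurements > 0.05:
            failures += 1
    return failures / trials

# A carrier whose true exceedance rate is 3% should pass the standard; one at
# 7% should fail. With a single measurement per hour, the 7% carrier is almost
# never caught, while 2,000 measurements nearly always classify both correctly.
for true_rate, label in ((0.03, "should pass"), (0.07, "should fail")):
    for n in (1, 60, 2000):
        p = estimated_failure_probability(n, true_rate)
        print(f"{label}, true rate {true_rate:.0%}, n={n}: P(sample fails) ~ {p:.2f}")
```

Because 3 percent sits close to the 5 percent cutoff, the 60-measurement case still misclassifies occasionally in this toy model; the point is only that a single measurement per hour gives almost no power to detect a carrier that should fail.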
31. Finally, the Commission notes that some parties may
misunderstand what exactly constitutes a latency test for purposes of
the performance measures. Specifically, USTelecom states that,
``[t]esting every minute may also overload some testing methods and
cause testing to be disrupted,'' implying that a carrier must start and
stop a latency test every minute within a test-hour. While the
Commission does not believe this interpretation is consistent with the
intent of the Performance Measures Order, it provides greater clarity
here on what is considered a sufficient latency test to assuage
concerns about the number of latency tests per hour. As the Bureaus
described in the Performance Measures Order, a ``test'' constitutes a
``single, discrete observation or measurement of speed or latency.''
While carriers may choose to continuously start and stop latency
testing every minute and record the specific result, the Commission
clarifies that there is no requirement to conduct latency testing in
this manner. Instead, carriers may continuously run the latency testing
software over the course of a test-hour and record an observation or
measurement every minute of that test-hour. If a carrier transmits one
packet at a time for a one-minute measurement, the carrier should
report the result of that packet as one observation. However, some
applications, such as ping, commonly send three packets and only report
summarized results for the minimum, mean, and maximum packet round trip
time and not individual packet round trip time. If this is the case,
the carrier should report the mean as the result of this observation.
If the carrier sends more than one packet and the testing application
allows for individual round trip time results to be reported for each
packet, then the carrier must report all individual measurements for
each packet. Such an approach plainly fits within the definition of
``test'' adopted by the Bureaus in the Performance Measures Order and
does not require constant starting and stopping of the latency testing
software. In sum, carriers have the flexibility to choose how to
conduct their latency testing, so long as one separate, discrete
observation or measurement is recorded each minute of the specific
test-hour.
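A sketch of how a carrier might turn per-minute tool output into reportable observations, consistent with this clarification, is below. The record format, argument names, and helper are assumptions; the key point is that summarized ping-style output contributes its mean, while individually reported round-trip times are all reported.

```python
# Sketch of converting one minute of latency-tool output into the values to
# report for that observation, per the clarification above. Argument names and
# the return format are assumptions.

def reportable_measurements(individual_rtts_ms=None, min_mean_max_ms=None):
    """Return the value(s) to report for a one-minute latency observation."""
    if individual_rtts_ms:                # tool reports each packet's RTT
        return list(individual_rtts_ms)   # report every individual measurement
    if min_mean_max_ms:                   # ping-style summarized output
        _minimum, mean, _maximum = min_mean_max_ms
        return [mean]                     # report the mean as the observation
    return []                             # no result recorded for this minute

print(reportable_measurements(individual_rtts_ms=[31.2, 29.8, 30.5]))  # [31.2, 29.8, 30.5]
print(reportable_measurements(min_mean_max_ms=(28.9, 30.1, 31.4)))     # [30.1]
```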
32. The Bureaus required that carriers test a maximum of 50
subscriber locations per required service tier offering per state,
depending on the number of subscribers a carrier has in a state,
randomly selected every two years. The Performance Measures Order
included scaled requirements permitting smaller carriers (i.e.,
carriers with fewer than 500 subscribers in a state and particular
service tier) to test 10% of the total subscribers in the state and
service tier, except for the smallest carriers (i.e., carriers with 50
or fewer subscribers), which must test five subscriber locations. The
Bureaus also recognized that, in certain situations, a carrier serving
50 or fewer subscribers in a state and service tier may not be able to
test even five active subscribers; the Bureaus permitted such carriers
to test a random sample of existing, non-CAF-supported active
subscriber locations within the same state and service tier to satisfy
the testing requirement. In situations where a subscriber at a test
location stops subscribing to the service provider within 12 months
after the location was selected, the Bureaus required that the carrier
test another randomly selected active subscriber location. Finally, the
Bureaus explained that carriers may use inducements to encourage
subscribers to participate in testing, which may be particularly useful
in cases where support is tied to a particular performance level for
the network, but the provider does not have enough subscribers to
higher performance service tiers to test to comply with the testing
sample sizes.
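The tiered sample sizes described in this paragraph reduce to a short function. The rounding rule for the 10 percent category is an assumption (rounded up) and the function name is illustrative; the availability caveats for the smallest carriers discussed elsewhere in this document are not modeled.

```python
# Sketch of the per-state, per-service-tier test sample sizes described above.
# The ceiling rounding for the 10 percent category is an assumption.

import math

def required_test_locations(subscribers_in_state_and_tier: int) -> int:
    if subscribers_in_state_and_tier >= 500:
        return 50
    if subscribers_in_state_and_tier > 50:
        return math.ceil(subscribers_in_state_and_tier / 10)  # 10 percent
    return 5  # 50 or fewer subscribers

for subscribers in (2000, 320, 40):
    print(subscribers, required_test_locations(subscribers))  # 50, 32, 5
```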
33. Petitioners and applicants raise various concerns regarding the
required number of subscriber test locations. Micronesian
Telecommunications Corporation (MTC), for example, argues that it and
similar carriers that may have fewer than 50 subscribers in a
particular state and speed service tier will be unable to comply with
the test locations requirement. MTC claims that it will be difficult to
find even five customers to test, particularly in higher service tiers.
Asking that the Commission ``provide a safety valve'' for similar small
carriers, MTC proposes that such a provider should ``test no more than
10 percent of its customers in any given service tier, with a minimum
of one test customer per service tier with customers.'' NTCA argues
that testing 10% of subscribers may be excessive; instead, NTCA
proposes that carriers should test the lesser of 50 locations per state
or 5% of active subscribers. Further, NTCA argues that carriers should
not be required to upgrade the speed or customer premises equipment for
individual locations even temporarily to conduct speed tests. WTA
suggests that, at least for rural carriers, the number of test
locations should be much lower than adopted in the Performance Measures
Order. Smaller carriers must test larger percentages of their customers
compared to larger carriers; accordingly, WTA argues, the Commission
should permit testing of just 10-15 locations or 2-3% of subscribers in
each CAF-required service tier.
34. NTCA, as well as USTelecom, ITTA, and WISPA, also ask that the
Commission clarify that carriers may use the same locations for testing
both speed and latency. USTelecom, ITTA, and WISPA explain that, if
carriers must conduct speed and latency testing at different locations,
the number of subscribers that must be tested would be unnecessarily
doubled, which ``would be particularly troublesome for smaller
recipients, many of whom will be drawing test locations from a small
group of subscribers.'' Similarly, the petitioners explain, the
requirement regarding the number of test locations should be clarified
to be exactly the same for both speed and latency. These clarification
proposals drew broad support from commenters. For example, comments
submitted jointly by NTCA, NRECA, and UTC assert that the
clarifications would help providers ``avoid unnecessary costs and
excessive administrative burden,'' while Midcontinent Communications
notes that using ``the same panelists for speed and latency testing for
CAF purposes would align with [its] internal testing practices.''
35. A few parties offer suggestions regarding the parameters for
the random selection process. In particular, WTA asks that locations
should be tested for five years, instead of two years, before a new
random sample of test locations is chosen. WTA also proposes that twice
the required random number of testing locations be provided to carriers
so that carriers can replace locations where residents refuse to
participate or have incompatible CPE. Frontier, in an ex parte filing,
proposes that carriers be allowed to test only new customer locations;
it argues that installing the necessary testing equipment at older
locations requires more time than is
[[Page 67226]]
available with the adopted testing schedule.
36. The Commission declines to modify the adopted sample sizes for
testing speed and latency. To minimize the burdens of testing, the
Bureaus have used a ``trip-wire'' approach in determining the required
sample sizes. In other words, the adopted sample sizes produce
estimates with a high margin of error but can show where further
inquiry may be helpful; the Commission's target estimation precision is
a 90% confidence level with an 11.5% margin of error. For the largest
carriers, i.e., those with over 500 subscribers in a given state and
speed service tier, this requires a sample size of 50 subscriber
locations. For the smallest carriers, the Bureaus adopted small sample
sizes that result in less precision, with the margin of error reaching
34.9%, to reduce the testing burden on smaller providers. Reducing the
sample sizes for smaller carriers even more would further reduce the
resulting estimation precision--making the test data even less likely
to be representative of the actual speed and latency consumers
experience on CAF-supported networks. The Commission therefore does not
modify the required numbers of subscriber locations carriers must test.
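The sample sizes and precision targets above follow from the standard margin-of-error calculation for an estimated proportion. The short Python sketch below is only illustrative and is not the Bureaus' published methodology; the function name, the worst-case assumption of p = 0.5, and the optional finite population correction are assumptions added here.

```python
import math

def margin_of_error(n, N=None, confidence_z=1.645, p=0.5):
    """Two-sided margin of error for an estimated proportion.

    n            -- sample size (number of tested subscriber locations)
    N            -- subscriber population in the state/tier (None = treat as very large)
    confidence_z -- z-score; 1.645 corresponds to a 90% confidence level
    p            -- assumed proportion; 0.5 is the worst case
    """
    moe = confidence_z * math.sqrt(p * (1 - p) / n)
    if N is not None and N > n:
        moe *= math.sqrt((N - n) / (N - 1))  # finite population correction
    return moe

# A sample of 50 locations yields roughly an 11.6% margin of error at 90% confidence,
# consistent with the order's stated 11.5% target for the largest carriers.
print(round(margin_of_error(50) * 100, 1))
```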
37. Nonetheless, the Commission recognizes that a few carriers
facing unique circumstances may find it extraordinarily difficult to
find a sufficient number of subscriber locations to test. Although the
Commission declines to modify the adopted sample sizes, the Commission
appreciates that special circumstances occasionally demand exceptions
to a general rule. The Commission's rules may be waived for good cause
shown.
38. For carriers that cannot find even five CAF-supported locations
to test, the Commission also reconsiders the Bureaus' decision to
permit testing of non-CAF-supported active subscriber locations within
the same state and service tier. Testing and reporting speed and
latency for non-CAF-supported locations adds unnecessary complexity to
the Commission's requirements. Accordingly, rather than requiring that
all carriers testing fewer than five CAF-supported subscriber locations
find non-CAF-supported locations to test, the Commission provides that
any non-compliant carrier testing fewer than five such locations
because more are not available will instead be subject to verification
that more customers are not available.
39. Additionally, the Commission recognizes that, as several
parties have noted, obtaining customer consent for testing which
requires placement of testing equipment on customer premises may prove
difficult. The Commission believes that its revised testing
implementation schedule (discussed in the following) will help
alleviate this concern, particularly for smaller carriers. Numerous
vendors are developing software solutions that will allow providers to
test the service at customer locations without requiring any additional
hardware at the customer's premises. Further, the Commission directs
WCB to publish information on the Commission's website explaining the
nature and purpose of the required testing--to ensure that carriers are
living up to the obligations associated with CAF support--and urging
the public's participation. The Commission expects that providing such
information in an easy-to-understand format will help alleviate
subscribers' potential concerns. Moreover, the Commission emphasizes
that no customer proprietary network information is involved in the
required testing or reporting, other than information for which the
carrier likely would already have obtained customer consent; carriers
routinely perform network testing of speed and latency, and the
performance measures testing the Commission requires is similar in
nature.
40. The Commission agrees with comments recommending that the same
sample sizes adopted for speed should also apply to latency, and that
the same subscriber locations should be used for both speed and latency
tests. As some parties have noted, requiring testing of two separate
sets of subscriber locations for speed and latency, rather than the
same group of locations for both, is unnecessarily burdensome. By
requiring speed and latency tests at the same subscriber locations, the
Commission reduces the amount of equipment, coordination, and effort
that may otherwise be involved in setting up testing. Therefore,
carriers will test all of the locations in the random sample for both
speed and latency. The Commission notes that because it is adopting
different implementation dates for testing of different broadband
deployment programs, a carrier will receive a separate random sample of
testing locations for each program for which it must do performance
testing. In the Performance Measures Order, the Bureaus stated that,
``[a] carrier with 2,000 customers subscribed to 10/1 Mbps in one state
through CAF Phase II funding and 500 rural broadband experiment (RBE)
customers subscribed to 10/1 Mbps in the same state, and no other high-
cost support with deployment obligations, must test a total of 50
locations in that state for the 10/1 Mbps service tier.'' But because
CAF Phase II and RBE have different implementation dates for testing,
the carrier in this example must test 50 locations for its CAF Phase II
obligations and 50 locations for its RBE obligations. Similarly,
because the Commission now requires carriers to use the same sample for
both speed and latency, it reconsiders the requirement that carriers
replace latency testing locations that are no longer actively
subscribed after 12 months with another actively subscribed location.
The Bureaus did not make clear if this provision applied to both speed
and latency test locations. To avoid confusion, the Commission
clarifies that the same replacement requirements should apply to both
speed and latency. Therefore, the Commission now requires that carriers
replace non-actively subscribed locations with another actively
subscribed location by the next calendar quarter testing. Although the
Commission does not believe it is necessary for carriers to obtain a
random list of twice the number of required testing locations at the
outset, carriers should be able to obtain additional randomly selected
subscriber locations as necessary for these kinds of situations.
41. The Commission reconsiders the Bureaus' requirement that
carriers meet and test to their CAF obligation speed(s) regardless of
whether their subscribers purchase internet service offerings with
speeds matching the CAF-required speeds for those CAF-eligible
locations. Specifically, in situations where subscribers purchase
internet service offerings with speeds lower than the CAF-required
speeds for those locations, carriers are not required to upgrade
individual subscriber locations to conduct speed testing unless there
are no other available subscriber locations at the CAF-required speeds
within the same state or relevant service area. The Commission
recognizes that there may be significant burdens associated with
upgrading an individual location, particularly when physically
replacing equipment at the customer premises is necessary. Some
carriers may still find it necessary to upgrade individual subscriber
locations, at least temporarily, to conduct speed testing. The
Commission does not believe that requiring temporary upgrades of
service at testing locations in these instances will discourage bidding
in future auctions. Carriers participating in auctions should be
prepared to provide
[[Page 67227]]
the required speeds at all of the locations in the relevant service
area and should anticipate that over time more and more customers in
the service area will be purchasing the higher-speed offerings.
42. Finally, the Commission rejects proposals to require testing
only of newly deployed subscriber locations and to maintain the same
sample for more than two years. If the Commission were to permit
testing of only new locations, carriers' speed and latency test data
would not reflect their previous CAF-supported deployments, for which
carriers also have ongoing speed and latency obligations. Moreover,
although the Bureaus adopted the Performance Measures Order in 2018,
carriers have been certifying that their CAF-supported deployments meet
the relevant speed and latency obligations for several years. Requiring
testing of older locations should not prove a problem for carriers that
have been certifying that their deployments properly satisfy their CAF
obligations. In any case, further shrinking the required sample to
include only more recent deployments would compromise the effectiveness
of the ``trip-wire'' sample; the Commission would not be able to
identify potential problems with many older CAF-supported deployments.
Maintaining the same sample beyond two years would present the opposite
problem. By excluding newer deployments, the Commission's understanding
of carriers' networks would be outdated; the Bureaus' decision to
require testing a different set of subscriber locations every two years
struck the correct balance between overburdening carriers and
maintaining a current, relevant sample for testing.
43. The Bureaus required quarterly testing for speed and latency.
In particular, to capture any seasonal effects and differing conditions
throughout the year that can affect a carrier's broadband performance,
the Bureaus required carriers subject to the performance measures to
conduct one week of speed and latency testing in each quarter of the
calendar year.
44. WTA argues that spreading testing across the year imposes a
substantial burden, particularly on rural carriers, without producing
more accurate information than a single week of testing. WTA also
contends that obtaining consent from customers to allow testing for
four weeks a year ``is going to be extremely difficult and likely to
become a customer relations nightmare.'' Instead, WTA argues that
testing for a single week in late spring or early fall would be more
representative of typical internet usage. WTA cites these claimed
difficulties as a reason for reducing the number of weeks of annual
testing, reducing the numbers of locations to be tested, allowing more
flexible selection of customer locations, and using the test locations
for longer periods.
45. The Commission declines to adjust the quarterly testing
requirement as proposed by WTA. As the Bureaus acknowledged when they
adopted the quarterly requirement, different conditions exist
throughout the year that can affect service quality, including changes
in foliage, weather, and customer usage patterns, school schedules,
holiday shopping, increased or decreased customer use because of travel
and sporting events, and business cycles. The goal of the testing
requirements is to ensure that consumers across the country experience
consistent, quality broadband service throughout the year, not at only
one defined point during the year. Additionally, the Commission
believes WTA's concerns regarding customer consent are unfounded. The
Commission expects that once the requisite technology and software to
conduct the required testing has been installed, testing the
performance of the network for one week per quarter will not impose any
additional significant burden on carriers or customers. Moreover, the
tests themselves use so little bandwidth that the Commission does not
believe customers will even notice that testing is occurring. Indeed,
as the Bureaus explained, quarterly testing ``strikes a better balance
of accounting for seasonal changes in broadband usage and minimizing
the burden on consumers who may participate in testing.''
46. The Commission confirms that carriers may use any of the three
methodologies outlined in the Performance Measures Order to demonstrate
their compliance with network performance requirements. The Commission
has previously determined that it should provide carriers subject to
performance testing with flexibility in determining the best means of
conducting tests. In 2013, WCB had determined that price cap carriers
generally may use ``existing network management systems, ping tests, or
other commonly available network measurement tools,'' as well as
results from the MBA program, to demonstrate compliance with latency
obligations associated with CAF Phase II model-based support. Thus, the
Bureaus concluded that ETCs subject to fixed broadband performance
obligations would be permitted to conduct testing by employing either:
(1) MBA testing infrastructure (MBA testing), (2) existing network
management systems and tools (off-the-shelf testing), or (3) provider-
developed self-testing configurations (provider-developed self-testing
or self-testing). The Bureaus reasoned that the flexibility afforded by
three different options offered ``a cost-effective method for
conducting testing for providers of different sizes and technological
sophistication.''
47. NTCA requests clarification about language in the Performance
Measures Order stating that ``MBA testing must occur in areas and for
the locations supported by CAF, e.g., in CAF Phase II eligible areas
for price cap carriers and for specific built-out locations for RBE,
Alternative Connect America Cost Model (A-CAM), and legacy rate-of-
return support recipients.'' NTCA contends that this language refers to
previously-promulgated MBA testing requirements and that the Commission
should clarify that ETCs subject to fixed broadband performance
obligations should be permitted to use any of three testing options
outlined by the Bureaus.
48. The language highlighted by NTCA applies only to carriers
choosing the MBA testing option; the Bureaus set out additional,
separate requirements for carriers choosing to use off-the-shelf or
provider-developed testing options. As the Performance Measures Order
explained, in the event that a carrier opts to use the MBA testing
methodology to collect performance data, it must ensure boxes are
placed at the appropriate randomly selected locations in the CAF-funded
areas, as required for the CAF testing program. If, on the other hand,
a carrier opts for either off-the-shelf testing tools or its own self-
testing, it must use the testing procedures specific to its chosen
methodology.
49. To achieve full compliance with the latency and speed
standards, the Performance Measures Order required that 95% of latency
measurements during testing windows be at or below 100 ms round-trip
time, and that 80% of speed measurements be at or above 80% of the required network
speed. Based on the standard adopted by the Commission in 2011, WCB
used ITU calculations and reported core latencies in the contiguous
United States in 2013 to determine that a latency of 100 ms or below
was appropriate for real-time applications like VoIP. WCB thus required
price cap carriers receiving CAF Phase II model-based support to test
and certify that 95% of latency measurements during testing hours are at or
below 100 ms (the latency standard). Later, WCB sought comment on
extending the
[[Page 67228]]
same testing methodologies to other high-cost support recipients
serving fixed locations, and in multiple orders, the Commission
extended the same latency standard to RBE participants, rate-of-return
carriers electing the voluntary path to model support, CAF Phase II
competitive bidders not submitting high-latency bids, and Alaska Plan
carriers.
50. The Bureaus ultimately reaffirmed and further extended the
latency standard to all high-cost support recipients serving fixed
locations, except those carriers submitting high-latency bids in the
CAF Phase II auction. In doing so, the Bureaus noted that the data on
round-trip latency in the United States had not markedly changed since
the CAF Phase II Price Cap Service Obligation Order, and that no
parties challenged the Commission's reasoning for the existing 100 ms
standard. More recently, the Bureaus refreshed the record, seeking
comment on USTelecom's proposal that certifying ``full'' compliance
means that 95 to 100% of all of an ETC's measurements during the test
period meet the required speed. The Bureaus then adopted a standard
requiring that 80% of a carrier's download and upload measurements be
at or above 80% of the CAF-required speed (i.e., an 80/80 standard).
The Bureaus explained that this speed standard best meets the
Commission's statutory requirement to ensure that high-cost-supported
broadband deployments provide service reasonably comparable to that
available in urban areas. The Bureaus also noted that they would
exclude from certification calculations speed measurements
above a certain threshold to ensure that outlying observations do not
unreasonably affect results.
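For reference, the two full-compliance benchmarks described above can be expressed directly as pass/fail checks. The sketch below is a simplified illustration that assumes clean lists of measurements; the function names and the thresholds expressed as parameters are added here for clarity and do not appear in the order.

```python
def meets_latency_standard(latency_ms, threshold_ms=100.0, required_fraction=0.95):
    """At least 95% of round-trip latency measurements must be at or below 100 ms."""
    passing = sum(1 for x in latency_ms if x <= threshold_ms)
    return passing / len(latency_ms) >= required_fraction

def meets_speed_standard(measured_mbps, required_mbps, required_fraction=0.80, speed_ratio=0.80):
    """The 80/80 standard: at least 80% of measurements at or above 80% of the CAF-required speed."""
    passing = sum(1 for x in measured_mbps if x >= speed_ratio * required_mbps)
    return passing / len(measured_mbps) >= required_fraction
```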
51. In their Petition, USTelecom, ITTA, and WISPA complain that
``[t]here is . . . a significant disparity in compliance thresholds for
speed and latency,'' and ask that the Bureaus require ETCs' latency
measurements to meet 175 ms at least 95% of the time. The petitioners
argue that, before accepting CAF Phase II model-based support, carriers
could not have fully understood whether the latency standard adopted in
2013 was appropriate, apparently because it was adopted ``almost two
full years before price cap carriers accepted CAF Phase II support,''
and other ``reasonable'' requirements were adopted later. Further, the
petitioners argue, the same ITU analysis that WCB relied on in 2013 to
adopt the latency standard ``found that consumers continue to be
`satisfied' with speech quality at a one-way mouth-to-ear latency of
275 ms or a provider round-trip latency of 175 ms,'' so ``treating a
latency result that is even one millisecond above 100 ms as a violation
. . . penaliz[es] recipients for providing users with voice quality
with which they are fully satisfied.'' Changing the standard to require
latency measurements of 175 ms or better 95% of the time, petitioners
assert, would better align the latency standard with the speed
standard, which is designed to ensure that high-cost-supported
broadband deployments are reasonably comparable to those in urban
areas.
52. NTCA, NRECA, and UTC oppose the petitioners' request to
``align'' the latency standard with the speed standard. Defending the
95% threshold adopted by the Bureaus, these parties explain that low
latency is necessary to support achieving a ``reasonably comparable''
level of service, and the 95% compliance benchmark for latency is a
``reasonable'' standard for that. Moreover, speeds may vary up to 20%
because of ``networking protocols, interference and other variances
that affect all providers and whose accommodation is technology
neutral,'' but such factors do not affect latency. Thus, they say, the
record supports the adopted latency standard.
53. Multiple parties seek clarifications regarding implementation
of the 80/80 speed standard adopted in the Performance Measures Order.
In particular, carriers expressed concern that compliance will be
measured against advertised speeds, rather than the speeds carriers are
obligated to provide in exchange for CAF support. In addition,
USTelecom, ITTA, and WISPA, among others, challenge the Bureaus'
finding that speed test results greater than 150% of advertised speeds
are likely invalid and ask that the Bureaus reconsider automatically
excluding those measurements from compliance calculations. Instead,
Vantage Point suggests, the Commission should consider excluding data
points beyond a defined number of standard deviations, rather than
setting a 150% cutoff for measurements.
54. The Commission declines to modify the longstanding latency
standard requiring that 95% of round-trip measurements be at or below
100 ms. As petitioners acknowledge, the standard was initially adopted
in 2013, before carriers accepted CAF Phase II model-based support.
Petitioners claim that, as a result, ``no future recipient could have
been expected to assess the appropriateness of this prematurely adopted
requirement,'' but, in fact, carriers accepted CAF Phase II support
conditioned on the requirement that they certify to the adopted latency
standard. In other words, carriers assessed the appropriateness of the
standard and decided that they would be able to certify meeting the
standard--or, at the very least, accepted that they would risk losing
CAF Phase II support if they were unable to meet the standard.
Moreover, no parties sought reconsideration when the standard was
originally adopted, and the Commission later extended the same standard
to other high-cost support recipients in the years following.
55. The Commission also notes that latency is fundamentally
different from speed and therefore requires a different standard to
ensure that CAF-supported broadband internet service is reasonably
comparable to service in urban areas. The 100 ms standard, which is
more lenient than the 60 ms standard originally proposed, ensures that
subscribers of CAF-supported internet service can use real-time
applications like VoIP. If the Commission were to require 95% of
latency measurements to be only 175 ms or lower, it would be relaxing
the standard considerably--permitting CAF-supported internet service to
have 75% higher latency than permitted by the existing standard adopted
by the Commission. Further, lowering the existing standard would not
decrease burdens on carriers and provide ``a more efficient compliance
and enforcement process,'' as the petitioners suggest. The carriers
need only to conduct tests, which can be automated, and provide the
data; Universal Service Administrative Company (USAC) will complete the
necessary calculations to determine compliance. To the extent that
parties argue that the 100 ms standard is overly strict and that
consumers may be satisfied with higher latencies, that standard was
adopted in prior Commission orders and thus is not properly addressed
in this proceeding, which is to determine the appropriate methodology
for measuring whether high-cost support recipients' networks meet
established performance levels.
56. The Commission clarifies, however, that carriers are not
required to provide speeds beyond what they are already obligated to
deploy as a condition of their receipt of high-cost support. Thus, for
a location where a carrier is obligated to provide 10/1 Mbps service,
the Commission only requires testing to ensure that the location
provides 10/1 Mbps service, even if the customer there has ordered and
is receiving 25/3 Mbps service.
57. Regarding the trimming of data in calculating compliance with
the speed standard, the Commission reconsiders the Bureaus' decision to
exclude from
[[Page 67229]]
compliance calculations any speed test results with values over 150% of
the advertised speed for the location. Instead of trimming the data at
the outset as the Bureaus had required, the Commission directs the
Bureaus to study data collected from carriers' pre-testing and testing
and determine how best to implement a more sophisticated procedure
using multiple statistical analyses to exclude outlying data points
from the test results. The Commission anticipates that the Bureaus will
develop such a procedure for USAC to implement for each carrier's test
results in each speed tier in each state or study area and may involve
determining whether multiple methods (e.g., the interquartile range,
median absolute deviation, Cook's distance, Isolation Forest, or
extreme value analysis) flag a particular data point as an anomaly.
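The order leaves the precise outlier procedure to the Bureaus. As a purely illustrative sketch, the following shows how two of the named methods, the interquartile range and the median absolute deviation, might jointly flag an anomalous speed measurement; the cutoffs used (1.5 times the IQR, a modified z-score of 3.5) are common statistical defaults assumed here, not values adopted by the Commission.

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q = statistics.quantiles(values, n=4)
    q1, q3 = q[0], q[2]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x < lo or x > hi for x in values]

def mad_outliers(values, cutoff=3.5):
    """Flag points whose modified z-score (based on the median absolute deviation) exceeds a cutoff."""
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values)
    if mad == 0:
        return [False] * len(values)
    return [abs(0.6745 * (x - med) / mad) > cutoff for x in values]

def flagged_by_multiple_methods(values):
    """Treat a measurement as an anomaly only if more than one method flags it."""
    return [a and b for a, b in zip(iqr_outliers(values), mad_outliers(values))]
```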
58. The Performance Measures Order also established a framework of
support reductions that carriers would face in the event that their
performance testing did not demonstrate compliance with speed and
latency standards to which each carrier is subject. The Bureaus
considered numerous approaches to address non-compliance with the
required speed and latency standards. They adopted a ``four-level
framework that sets forth particular obligations and automatic triggers
based on an ETC's degree of compliance with the Commission's latency,
speed, and, if applicable, MOS testing standards in each state and
high-cost support program.'' Under this scheme, compliance for each
standard is separately determined, with the percentage of a carrier's
measurements meeting the relevant standard divided by the required
percentage of measurements to be in full compliance. The Bureaus noted
that the framework ``appropriately encourages carriers to come into
full compliance and offer, in areas requiring high-cost support,
broadband service meeting standards consistent with what consumers
typically experience.''
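Under this framework, the degree of compliance for each standard is, as described above, the observed passing percentage divided by the required percentage. A minimal sketch of that ratio, with illustrative names, follows; the mapping of compliance percentages to the four non-compliance levels is set out in the Performance Measures Order and is not reproduced here.

```python
def degree_of_compliance(passing_measurements, total_measurements, required_fraction):
    """Observed passing fraction divided by the required fraction, capped at 100%."""
    observed = passing_measurements / total_measurements
    return min(observed / required_fraction, 1.0) * 100.0

# Example: if 90% of latency measurements meet the 100 ms standard, where 95% is required,
# the carrier's latency compliance is 90 / 95, or about 94.7%.
```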
59. Broadly, the Commission's goal in establishing a performance
testing regime is to ensure that consumers receive broadband at the
speed and latency to which carriers have committed, and for which they
are receiving support. The Commission's compliance regime is designed
to encourage carriers to provide high-quality broadband, not to punish
them for failing to perform. That is why the Bureaus adopted an
interim schedule for withholding support for failing to meet the
required performance, but to return such support as the carrier comes
into compliance. This is consistent with the Commission's approach to
construction of network facilities, i.e., support is withheld if
carriers do not meet their build-out milestones, but as the carrier
improves its performance, withheld support is returned. There is no
correlation in either case between the interim percentages of support
withheld and the total per-location support; rather, these interim
withholdings are designed solely to encourage the carrier to meet its
obligations and ensure that progress is continuing. The Commission
notes that carriers have their entire support term to improve their
networks and come into compliance. Even at the end of the support term,
the Commission's rules provide for a one-year period before any support
is permanently withheld, during which the carrier can show that it has
fixed the problems with its network. Further, as explained in the
following, the Commission adds an opportunity for carriers to request a
larger, statistically valid sample if the carrier believes that the
small sample size is the cause of the failure to perform. The
Commission therefore anticipates few instances of non-compliance with
the Commission's performance measures.
60. Several parties urge the Commission to adjust the adopted
framework for non-compliance. USTelecom, ITTA, and WISPA jointly argue
that non-compliance with the speed and latency requirements is subject
to support withholding under the established framework that is ``more
severe[] than non-compliance with build-out milestones.'' For example,
they observe that a carrier with a compliance gap of less than six
percent would lose 5% of its high-cost support, while only being
subject to quarterly reporting obligations for missing its required
build out by up to 14.9%. USTelecom, ITTA, and WISPA instead propose
mirroring the precedent established for the deployment milestone
framework, with non-compliance with the speed and latency requirements
of 5% or less resulting only in a quarterly reporting obligation and
non-compliance of 5% to 15% resulting in 5% of funding being withheld.
Additionally, they request clarification that a carrier not complying
with both its performance measurement requirements and deployment
requirements will be subject only to a reduction in support equal to
the greater of the two amounts, rather than the combined percentage of
the two amounts. AT&T concurs with petitioners that support reductions
for failing to comply with performance standards should not be more
serious than failure to deploy. NTCA, NRECA, and UTC jointly contend
that ``non-compliance (especially if relatively minor in degree) should
impose upon the provider the burden of proof to demonstrate a
justifiable reason for non-compliance and an avenue toward remediation;
it should not eliminate automatically support upon which the provider
relies for deployment and operation.'' WTA proposes that rural carriers
not in full compliance be given a six-month grace period ``to locate
and correct the problem without reduction or withholding of the monthly
high-cost support needed to finance the repair, upgrade and operation
of [their] networks.'' WTA also reiterates that rural local exchange
carriers (LECs) should not lose high-cost support due to the
shortcomings of facilities or circumstances over which they have no
control and are not able to repair or upgrade. Finally, Peñasco
Valley Telephone Cooperative argues that a 100% success requirement for
full compliance does not take into account factors outside the
carrier's control and instead proposes a high percentage benchmark, but
less than 100%, to account for these variables.
61. Except as discussed in the following, the Commission generally
declines to revise the compliance and certification frameworks adopted
by the Bureaus. The Commission disagrees that the consequences for
failure to meet its performance measures are greater than that for
failure to meet deployment obligations. As opposed to the deployment
obligations that many parties use for comparison, the speed and latency
standards adopted by the Bureaus include a margin for error and do not
require carriers to meet the established standards in every instance.
For example, carriers are required to meet the 100 ms standard for
latency only 95% of the time, rather than 100% as suggested by some
parties. Similarly, the Commission allows carriers to be in compliance
with its speed standards if they provide 80% of the required speed 80%
of the time. Moreover, the Commission establishes pre-testing periods
in which no support reductions for failing to meet standards will occur
to allow carriers to adjust to the new regime. This opportunity for
pre-testing will ensure that carriers are familiar with the required
testing and how to properly measure the speed and latency of their
networks. Because carriers will be aware of which locations are being
tested, they will be able to monitor their networks prior to beginning
the required testing to make sure the network is performing properly.
Further, once a
[[Page 67230]]
location is certified in USAC's High Cost Universal Broadband (HUBB)
portal, the carrier has certified that it meets the required standards,
so the performance of the network should not be a surprise to the
carrier.
62. Some parties have expressed concern about the performance
requirements and the non-compliance support reductions. For example,
USTelecom, ITTA, and WISPA argue that certain aspects of the compliance
framework ``penalize non-compliance with broadband speed and latency
requirements more severely than non-compliance with build-out
milestones.'' They also assert that the compliance framework ``is
too stringent and could impede--rather than advance--broadband
deployment in rural CAF-supported areas.'' The Commission disagrees. As
a condition of receiving high-cost support, carriers must commit not
only to building out broadband-capable networks to a certain number of
locations, but also to providing those locations with a specific,
defined level of service. Building infrastructure is insufficient to
meet a carrier's obligation if the customers do not receive the
required level of service. If a carrier fails to meet its deployment
requirements, it will face certain support reductions, and if it
likewise fails to meet its performance requirements for locations to
which it claims it has deployed, it has failed to fully fulfill its
obligations. The compliance framework established by the Bureaus is
essential to ensuring that consumers are receiving the appropriate
level of service that the carrier has committed to provide.
63. The Commission emphasizes that at the conclusion of a carrier's
build-out term, any failure to meet the speed and latency requirements
is a failure to deploy because the carrier is not delivering the
service it has committed to deliver. A failure to comply with all
performance measure requirements will result in the Commission
determining that the carrier has not fully satisfied its broadband
deployment obligations at the end of its build-out term and subjecting
the carrier to the appropriate broadband deployment non-compliance
support reductions. The Commission does not consider a carrier to have
completed deployment of a universal service funded broadband-capable
network simply by entering the required number of locations to which it
has built into the HUBB; customers at those locations also must be able
to receive service at the specific speed and latency to which the
carrier has committed. Simply put, consumers must receive the required
level of service before a network can be considered to have been fully
deployed. Otherwise, a carrier would not be meeting the conditions on
which it receives support to deploy broadband.
64. Several parties argue that there is insufficient notice for
clarifying that ``any failure to meet the speed and latency
requirements will be considered a failure to deploy.'' The Commission
disagrees. When establishing the CAF in 2011, the Commission noted that
it ``will require recipients of funding to test their broadband
networks for compliance with speed and latency metrics,'' and each
recipient of high-cost support with defined build-out obligations must
deploy broadband service with available speeds as required by the
Commission. Indeed, the Commission found that verifiable test results
would allow the Commission ``to ensure that ETCs that receive universal
service funding are providing at least the minimum broadband speeds,
and thereby using support for its intended purpose as required by
section 254(e)''; if the support is not used to provide the required
level of service, it is not being used for its intended purpose under
section 254(e). Carriers do not receive high-cost support to just
install any network; they must deploy a broadband-capable network
actually meeting the required speed and latency metrics. Indeed,
section 54.320(d)(1) of the Commission's rules provides that ``[f]or
purposes of determining whether a default has occurred, a carrier must
be offering service meeting the requisite performance obligations.''
65. The Commission uses the testing data to determine the level of
compliance for the carrier's network, as defined by the Bureaus in the
Performance Measures Order. Thus, at the end of a carrier's build-out
term, if a carrier has deployed to 100% of its required locations, but
its overall performance compliance percentage is 90%, USAC will recover
support equal to 1.89 times the average amount of support per location
received in the state by that carrier over the term of support,
multiplied by the number of locations corresponding to the performance
non-compliance percentage (i.e., 10% of locations), plus 10 percent of
the carrier's total relevant
high-cost support over the support term for that state. Similarly, if a
carrier deploys to only 90% of the locations to which it is required to
build, and of those locations, the performance compliance percentage is
90%, the carrier will be required to forfeit support equal to 1.89
times the average amount of support per location received in the state
for that carrier over the term of support for both the 10% of locations
lacking deployment and an additional 9% of locations (reflecting a non-
compliance percentage of 10% for the 90% deployed locations), plus 10
percent of the carrier's total relevant high-cost support over the
support term for that state. However, carriers are permitted up to one
year to address any shortcomings in their deployment obligations,
including ensuring that their performance measurements are 100% in
compliance, before these support reductions will take effect.
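As an arithmetic illustration of the two examples above (and only an illustration; the authoritative formula is in section 54.320(d)(2) and the Performance Measures Order), the calculation might be sketched as follows. The parameter names are assumptions made here.

```python
def end_of_term_recovery(total_support, required_locations, avg_support_per_location,
                         deployed_fraction, performance_compliance_fraction):
    """Support recovery at the end of the build-out term, per the examples in paragraph 65."""
    undeployed_share = 1.0 - deployed_fraction
    # the performance shortfall applies only to the locations actually deployed
    performance_share = deployed_fraction * (1.0 - performance_compliance_fraction)
    noncompliant_locations = (undeployed_share + performance_share) * required_locations
    return 1.89 * avg_support_per_location * noncompliant_locations + 0.10 * total_support

# Second example: 90% of locations deployed and 90% performance compliance at those locations,
# so 10% + 9% = 19% of required locations drive the 1.89-times-per-location recovery,
# plus 10 percent of the carrier's total relevant high-cost support.
```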
66. To provide certainty to carriers and to take into account that
carriers may be in compliance with performance obligations during their
testing periods, but for whatever reason may not be in compliance at
the end of the support term, the Commission more narrowly tailors its
end-of-term non-compliance provisions to recognize past compliance.
Accordingly, the Commission will withhold support where a carrier is
unable to demonstrate compliance at the end of the support term only
for the amount of time since the carrier's network performance was last
fully compliant. Specifically, the Commission modifies the support
recovery required by section 54.320(d) that is related to compliance
with performance measures by multiplying it by the percentage of time
since a carrier was last able to show full compliance with required
performance testing requirements prior to the end of the support term
on a quarterly basis. For example, if a carrier's failure to meet end-
of-term performance measures under section 54.320(d) resulted in it
having to repay support associated with 10% of locations to which it
was obligated to deploy (and not including any support related to a
failure to build and install the network as determined by USAC
verifications) and the carrier's performance testing had not been in
compliance with the Commission's requirements for the 15 preceding
quarters of testing, out of a total of 20 quarters in which it
received support, the amount of support to be recovered would be
multiplied by 15/20, or 3/4. If a carrier was not in compliance with
the Commission's performance measures for 5 quarters of testing but
comes into compliance before or during end-of-term testing, USAC will
not recover any support. However, because carriers have an affirmative
duty to demonstrate compliance with network performance measures--as
they have with respect to physical build-out milestones--a carrier that
has never been in compliance with performance testing requirements at
any time during the testing period will have the appropriate amount of
support withheld
[[Page 67231]]
at the end of the support term for the entire term. The Commission
believes that this approach more narrowly ties the non-compliance
consequences to the period of time in which a carrier fails to comply
with performance requirements.
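The pro-rating described above reduces only the performance-related portion of the recovery. A minimal sketch of the quarter-based scaling, using the order's 15-of-20-quarters example, is shown below; the treatment of a carrier that returns to compliance before or during end-of-term testing is taken directly from the text, and the function and parameter names are assumed here.

```python
from fractions import Fraction

def prorated_performance_recovery(base_recovery, quarters_out_of_compliance, total_quarters,
                                  compliant_at_end_of_term=False):
    """Scale the performance-related recovery by the share of the support term since last full compliance."""
    if compliant_at_end_of_term:
        return 0  # no recovery if the carrier comes into compliance before or during end-of-term testing
    return base_recovery * Fraction(quarters_out_of_compliance, total_quarters)

# Example from the order: 15 of 20 quarters out of compliance scales the recovery by 15/20, or 3/4.
```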
67. In response to commenters' concerns regarding the fairness of
potentially reducing carriers' support amounts for both lack of
deployment and non-compliance with speed and latency standards, the
Commission clarifies that at the end of the support term when USAC has
performed the calculation to determine the total lack of deployment
based on the numbers of locations to which the carrier has built out
facilities and the number of locations that are in compliance with the
performance measures, USAC will ensure that the total amount of support
withheld from the carrier because of failure to meet deployment
milestones and performance requirements does not exceed the
requirements of Sec. 54.320(d)(2). To facilitate this calculation, the
Commission reconsiders the decision allowing carriers to recover only
the support withheld for non-compliance for 12 months or less. When a
non-compliant carrier comes into a higher level of compliance, USAC
will now return the withheld support up to an amount reflecting the
difference between the levels' required withholding. By returning all
the support USAC may have withheld from a carrier for non-compliance,
the non-compliance framework will continue to provide an incentive to
carriers to return to full compliance with the speed and latency
standards.
68. Finally, the Commission provides additional flexibility at the
conclusion of a carrier's build-out term for any carrier that has
failed to meet its performance requirements and believes that its
failure to do so is the result of a small sample size. As noted in this
document, to minimize the burdens of testing, the Bureaus have used a
``trip-wire'' approach in determining the required sample sizes; while
these sample sizes are useful for demonstrating where further inquiry
may be helpful, they are subject to a high margin of error. Thus, if at
the end of its term, a carrier is shown not to have met its deployment
obligations due to a failure in meeting the speed and latency
requirements, the carrier can submit a request to the Bureaus for an
increased size of random samples that will produce an estimate with a
margin of error of 5% or less and conduct further testing during the
additional 12-month period provided in section 54.320(d)(2) to show
that the carrier is in compliance with the Commission's performance
requirements. If, after this further testing, the carrier is able to
demonstrate that it fully complies with the required speed and latency
benchmarks, then the carrier will be considered to have met the
deployment obligations.
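For a sense of scale (and only as an illustrative calculation under the same worst-case proportion assumption used earlier, not a figure stated in the order), the sample size needed to reach a 5% margin of error at a 90% confidence level can be approximated as follows.

```python
import math

def sample_size_for_moe(target_moe, population=None, z=1.645, p=0.5):
    """Smallest sample whose worst-case margin of error is at or below target_moe."""
    n = (z ** 2) * p * (1 - p) / (target_moe ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return math.ceil(n)

# Roughly 271 test locations for a very large subscriber pool; fewer with the
# finite population correction, e.g. about 176 when the pool has 500 subscribers.
print(sample_size_for_moe(0.05), sample_size_for_moe(0.05, population=500))
```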
69. The Commission is persuaded by the record here to modify the
specific schedule to commence speed and latency tests established in
the Performance Measures Order. The Performance Measures Order
established a deadline of July 1, 2020 for carriers subject to the
Performance Measures Order to report the results of testing, with an
accompanying certification, for the third and fourth quarters of 2019.
The Commission now adopts a modified approach to enable better
individualization to the specific circumstances of a given provider.
70. The Commission concludes that it is appropriate under the
circumstances to modify the scheduled start of performance testing to
link speed and latency testing to the deployment obligations for
carriers receiving support from each of the various high-cost support
mechanisms. The Commission believes this solution best balances its
responsibility to ensure that consumers are receiving the promised
levels of service in a timely manner with the ability of all carriers
to undertake the required performance testing. This approach also
allows larger price cap carriers that are further along in their
deployments and are more able, at this point, to begin testing to do so
without additional delay. Moreover, the rolling testing schedule the
Commission adopts will be less administratively burdensome for
Commission staff by allowing for more individualized review and
evaluation of testing results over time. Pushing back testing will have
the added benefit of allowing additional time for the marketplace to
further develop solutions for carriers to undertake the required
testing.
71. The Commission also implements a pre-testing period that will
occur prior to each carrier's testing start date.
As with the testing period, this pre-testing period will be aligned
with a carrier's deployment obligations for the specific high-cost
mechanism under which it receives support and will require the filing
of data regarding pre-testing results. Pre-testing will require
carriers to conduct testing according to the Commission's requirements
using a USAC-determined random sample of subscribers, and results must
be submitted to USAC within one week of the end of each quarter (i.e.,
by April 7 for the first quarter, July 7 for the second quarter, etc.).
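The quarterly submission deadline can be computed mechanically; the small sketch below simply adds seven days to the last day of each calendar quarter and is included only as a convenience, with an assumed function name.

```python
import datetime

def pretesting_report_deadline(year, quarter):
    """Results are due within one week of the end of each calendar quarter (e.g., April 7 for Q1)."""
    last_month = quarter * 3
    if last_month == 12:
        quarter_end = datetime.date(year, 12, 31)
    else:
        quarter_end = datetime.date(year, last_month + 1, 1) - datetime.timedelta(days=1)
    return quarter_end + datetime.timedelta(days=7)

# pretesting_report_deadline(2020, 1) -> datetime.date(2020, 4, 7)
```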
72. However, no support reductions will be assessed during the pre-
testing period, as long as carriers actually undertake the pre-testing
and report their results. Carriers that fail to conduct pre-testing and
submit results in a timely fashion will be considered to be at Level 1
non-compliance. The random sample for pre-testing can be used by the
carrier for a total of two years, meaning that carriers will need to
obtain a new random sample after two years of pre-testing/testing.
Thus, for example, if a carrier does one year of pre-testing and then
one year of testing, it will need to obtain a new random sample prior
to beginning the second year of testing. While there will be no support
reductions during the pre-testing period (as long as the carrier
undertakes the testing and reports results), the filing will allow
Commission staff to evaluate the pre-testing data and determine if any
adjustments to the testing regime are needed to ensure that the testing
period is successful. In addition, pre-testing will give carriers an
opportunity to see how their networks and testing software and hardware
perform and make any changes necessary. The Commission directs the
Bureaus to amend the performance measures as appropriate based on the
information learned and experience gained from the pre-testing period.
73. Several industry associations support the approach the
Commission adopts to tie speed and latency testing to a carrier's
deployment obligations for the specific high-cost program under which
it receives support. Specifically, ITTA, USTelecom, and WISPA advocate
aligning a carrier's performance obligations with its deployment
obligations, as well as designating the first two quarters of testing
as ``transitional and not subject to non-compliance measures for any
performance deficiencies'' to allow carriers to become familiar with
the testing process. In addition, both NTCA and WTA support linking
testing obligations to deployment obligations and allowing carriers to
have a period of advanced testing before the mandated testing period.
The Commission agrees with those commenters suggesting that a period to
``test the testing'' will help ensure that all carriers become familiar
with testing methodologies and equipment, as well as prevent or reduce
future administrative issues with the testing process.
74. Accordingly, the Commission adopts the schedule in the
following for pre-testing and testing obligations
[[Page 67232]]
specific to the carriers receiving high-cost universal service support:
Schedule for Pre-Testing and Testing
----------------------------------------------------------------------------------------------------------------
Program Pre-testing start date Testing start date
----------------------------------------------------------------------------------------------------------------
CAF Phase II (Price-cap carrier January 1, 2020.................... July 1, 2020.
funding).
RBE................................... January 1, 2021.................... January 1, 2022.
Alaska Plan........................... January 1, 2021.................... January 1, 2022.
A-CAM I............................... January 1, 2021.................... January 1, 2022.
A-CAM I Revised....................... January 1, 2021.................... January 1, 2022.
A-CAM II.............................. January 1, 2022.................... January 1, 2023.
Legacy Rate of Return................. January 1, 2022.................... January 1, 2023.
CAF II Auction........................ January 1, 2022.................... January 1, 2023.
New NY Broadband Program.............. January 1, 2022.................... January 1, 2023.
----------------------------------------------------------------------------------------------------------------
75. Because the Commission establishes pre-testing and testing
periods to coincide with a carrier's specific deployment obligations
under its respective high-cost mechanism, recipients of CAF Phase II
model-based support will be the first to undertake the pre-testing
period on January 1, 2020. These carriers are required to build out to
80% of their supported locations by December 31, 2019. Recipients of
CAF Phase II model-based support are primarily larger carriers that are
better positioned to begin testing sooner due to the availability of
testing equipment and solutions already in the marketplace for these
carriers. During the six-month pre-testing period, these carriers will
be required to test the speed and latency of their networks for a
weeklong period once per quarter (first and second quarters of 2020)
and submit the results to the Commission within one week of the end of
each quarter of pre-testing. The testing period for CAF Phase II model-
based support recipients will commence on July 1, 2020, with speed and
latency tests occurring for weeklong periods in both the third and
fourth quarters of 2020 and results of that testing submitted by July
2021.
76. RBE support recipients, as well as rate-of-return carriers
receiving model-based support under both the A-CAM I and the revised A-
CAM I, will follow a similar, but slightly extended schedule. The pre-
testing period for these carriers will commence on January 1, 2021 and
will last one full year to ensure that the predominantly smaller
carriers receiving support under these mechanisms have adequate time to
implement and test their technology and software solutions to meet the
Commission's performance testing requirements. The Commission believes
that a longer pre-testing period than the one it adopts for CAF Phase
II model-based support recipients is warranted to ensure that any
concerns or issues with the testing process are addressed prior to
these carriers being subject to support reductions. During this one-
year pre-testing period, this group of carriers will be required to
test the speed and latency of their networks quarterly for a weeklong
period and submit the results to the Commission within one week of the
end of each quarter of pre-testing. The testing period for these
carriers will begin on January 1, 2022, and results will be submitted
to the Commission by July 2023.
77. The Commission also adopts a one-year pre-testing period for
recipients of support from the CAF Phase II auction and A-CAM II, as
well as legacy rate-of-return support recipients. However, the
Commission delays commencement of the pre-testing period for these
carriers to account for certain timing considerations. For example, the
Commission is in the process of authorizing CAF Phase II auction
winners to receive support, and recently authorized rate-of-return
carriers electing the A-CAM II offer to receive support. Additionally,
to increase administrative efficiency, the Commission put legacy rate-
of-return carriers on the same schedule as A-CAM II support recipients
in light of the fact that their deployment requirements started at
approximately the same time. Thus, to allow time for carriers receiving
support under these mechanisms not only to be authorized, but also to
deploy in a timely manner, the Commission institutes a one-year pre-
testing period beginning January 1, 2022. The required testing period
for these carriers will commence on January 1, 2023. The Commission
anticipates that these support recipients will have deployed to at
least 40% of their required locations by the end of 2022. These
carriers will be subject to the same testing and reporting
requirements, for both pre-testing and testing, as the other categories
of carriers described in this document, except that these carriers will
have a one-year pre-test period rather than a six-month pre-test
period.
78. The Commission disagrees with those petitioners urging it to
adopt a blanket delay of implementation of the testing requirements.
NTCA contends that the equipment necessary for the most cost-effective
method of testing is not yet fully developed or widely available,
particularly in rural markets. NTCA instead proposes that any
obligations be suspended or waived until a later time--at least 12
months--following the widespread availability of modems with built-in
testing capability to the rural market. WTA agrees that the necessary
testing equipment is unavailable at this time and thus proposes that
the Commission postpone testing for rural LECs for at least two years.
WTA also proposes to delay support reductions for non-compliance to
coincide with build-out milestones. WISPA, ITTA, and NTCA support
proposals to postpone testing for a time in order to permit equipment
to become more available and affordable.
79. The Commission is not convinced that a blanket delay for all
carriers subject to its performance measure requirements is necessary.
As petitioners and commenters observe, large carriers and carriers
serving more urban markets are differently situated than smaller
carriers serving more rural communities, and these carriers may already
be positioned to begin testing. Though a minor delay for all carriers
is warranted to allow USAC time to develop and implement specific IT
solutions, additional time beyond that for the marketplace to develop
technical solutions is necessary only for a certain subset of carriers.
As WTA observes, ``Whiteboxes for MBA testing are being used by large
carriers, but thus far [its members] have generally been unable to
obtain Whitebox pricing estimates for their likely levels of demand.''
Similarly, NTCA explains that larger carriers are able to purchase
modems
[[Page 67233]]
and routers at scale or can develop their own proprietary devices, but
smaller carriers oftentimes must purchase ``off the rack'' technology
solutions and may have already deployed equipment that cannot be easily
retrofitted to accommodate performance testing.
80. The Commission agrees that a one-size-fits-all approach does
not reflect the realities of the marketplace. However, the tiered
implementation schedule the Commission adopts strikes a better balance
between the interests of carriers in cost-effectively testing their
networks' performance and its need to ensure that those networks are
performing at the level promised. The Commission further notes that WCB
has already announced a delay in the requirement to begin testing and
reporting of speed and latency results until the first quarter of 2020.
81. Given the changes to the testing framework the Commission
adopts, it likewise declines WTA's suggestion to delay support
reductions for non-compliant carriers until they are given an
opportunity to address any deficiencies in their networks. The pre-
testing period the Commission adopts will provide carriers with ample
opportunity to identify any issues within their network infrastructure
that may impact testing results and to rectify those problems prior to
undertaking the required testing. As a result, carriers should have
minimal, if any, technological or software challenges that prevent them
from meeting the Commission's performance requirements and would
require an opportunity to cure. Moreover, because carriers will be
testing only those locations that the carrier has certified are
deployed with the requisite speed, the Commission does not see a
compelling reason to delay support reductions for non-compliance.
82. The Commission likewise declines to further delay testing and
reporting obligations for Alaska Communications Systems (ACS). Because
carriers serving certain non-contiguous areas of the United States face
different operating conditions and challenges from those faced by
carriers in the contiguous 48 states, the Commission concluded that it
was appropriate to adopt tailored service obligations for each non-
contiguous carrier that elected to continue to receive frozen support
amounts for Phase II in lieu of the offer of model-based support. For
ACS, the Commission adopted a 10-year term of support to provide a
minimum of 10/1 Mbps broadband service with a roundtrip provider
network latency requirement of 100 ms or less to a minimum of 31,571
locations.
83. ITTA, USTelecom, and WISPA propose that testing and reporting
obligations for ACS be delayed for one year from the date on which they
begin for other CAF Phase II model-based support recipients. These
parties contend that ACS should be given more time because it is still
in the process of planning its CAF II deployment and has not identified
or reported the specific customer locations that it intends to serve.
ITTA, USTelecom, and WISPA also argue that additional time is
necessary for ACS to identify one or more suitable points at which
traffic can be aggregated for transport to the continental U.S.
84. Because the Commission is instituting a pre-testing period and
delaying the start of the required testing period for CAF Phase II
model-based support recipients until July 1, 2020, the Commission
anticipates that ACS will have had ample time to finalize deployment
plans and identify a suitable aggregation point or points. Thus, the
Commission is unconvinced by the argument advanced by ITTA, USTelecom,
and WISPA that these issues warrant further delay for ACS. Moreover,
the Commission notes that ACS already has passed its first deployment
milestone and certified to locations in the HUBB. Thus, ACS should be
fully prepared to commence testing on the same schedule as other CAF
Phase II support recipients.
85. NTCA requests clarification that the Performance Measures Order
applies only to high-cost recipients with mandatory build-out
obligations. Though some Alaskan rate-of-return carriers are subject to
defined build-out obligations, NTCA observes that if a carrier has ``no
mandated build-out obligation, there is neither a clear speed threshold
to which a carrier can be required to test nor a specified number of
locations at which the test can be conducted.'' NTCA argues that
additional proper notice-and-comment rulemaking procedures would be
needed to subject carriers without mandatory build-out obligations to
any required performance measures.
86. Absent any specific deployment requirements, the Commission
lacks a standard for determining whether a carrier's deployment meets
the required performance measures. As a result, consistent with NTCA's
request, the Commission clarifies that only carriers subject to defined
build-out requirements are required to test the speed and latency of
their networks in accord with Commission rules. Alaskan rate-of-return
carriers that have committed to maintaining existing service levels
therefore are not subject to the performance measures adopted by the
Bureaus and modified herein.
87. Alaskan rate-of-return carriers that have committed to defined
build-out obligations, however, must conduct speed and latency testing
of their networks. That said, the Commission recognizes that many of
these carriers lack the ability to obtain terrestrial backhaul such as
fiber, microwave, or other technologies and instead must rely
exclusively on satellite backhaul. Consistent with the standards the
Commission adopted for high-latency service providers in the CAF Phase
II auction, it requires Alaska Plan carriers relying on satellite
backhaul to certify that 95% or more of all testing-hour measurements
of network round-trip latency are at or below 750 ms for any locations
served using satellite technology. The Commission also reaffirms
that these carriers must certify annually that no terrestrial backhaul
options exist, and that they are unable to satisfy the standard
performance measures due to the limited functionality of the available
satellite backhaul facilities. To the extent that new terrestrial
backhaul facilities are constructed, or existing facilities improve
sufficiently to meet the public interest obligations, the Commission
has required funding recipients to meet the standard performance
measures within twelve months of the new backhaul facilities becoming
commercially available.
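For illustration only--this example is not part of the Commission's rules--the certification standard described above reduces to a simple share calculation. The following minimal Python sketch uses a hypothetical function name and invented sample measurements.

# Minimal sketch (not rule text): do the testing-hour round-trip latency
# measurements for satellite-served locations meet the 95%/750 ms standard?
# Function name, parameters, and sample values are hypothetical.
def meets_satellite_latency_standard(latency_ms, threshold_ms=750.0,
                                     required_share=0.95):
    if not latency_ms:
        return False
    at_or_below = sum(1 for m in latency_ms if m <= threshold_ms)
    return at_or_below / len(latency_ms) >= required_share

# Example: 96 of 100 hourly measurements at or below 750 ms -> certifiable.
print(meets_satellite_latency_standard([700.0] * 96 + [900.0] * 4))  # True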
III. Procedural Matters
88. Paperwork Reduction Act. This document contains new information
collection requirements subject to the Paperwork Reduction Act of 1995
(PRA), Public Law 104-13. It will be submitted to the Office of
Management and Budget (OMB) for review under Section 3507(d) of the
PRA. OMB, the general public, and other Federal agencies will be
invited to comment on the new information collection requirements
contained in this proceeding. In addition, the Commission notes that
pursuant to the Small Business Paperwork Relief Act of 2002, Public Law
107-198, see 44 U.S.C. 3506(c)(4), the Commission previously sought
specific comment on how it might further reduce the information
collection burden for small business concerns with fewer than 25
employees.
89. Congressional Review Act. The Commission has determined, and
the Administrator of the Office of Information and Regulatory Affairs,
Office of Management and Budget, concurs, that these rules are non-major
under the Congressional Review Act, 5 U.S.C. 804(2). The Commission
will send a copy of this Order on
Reconsideration to Congress and the Government Accountability Office
pursuant to 5 U.S.C. 801(a)(1)(A).
90. As required by the Regulatory Flexibility Act of 1980 (RFA), as
amended, an Initial Regulatory Flexibility Analysis (IRFA) was
incorporated in the USF/ICC Transformation FNPRM, 76 FR 78384, December
16, 2011. The Commission sought written public comment on the proposals
in the USF/ICC Transformation FNPRM, including comment on the IRFA. The
Bureaus included a Final Regulatory Flexibility Analysis (FRFA) in
connection with the Performance Measures Order. This Supplemental Final
Regulatory Flexibility Analysis (Supplemental FRFA) supplements the
FRFA in the Performance Measures Order to reflect the actions taken in
the Order on Reconsideration and conforms to the RFA.
91. The Order on Reconsideration addresses issues raised by parties
in petitions for reconsideration and applications for review of the
Performance Measures Order. In the Performance Measures Order, the
Bureaus established how recipients of CAF support must test their
broadband networks for compliance with speed and latency metrics and
certify and report those results. In doing so, the Bureaus adopted a
flexible framework to minimize the burden on small entities--for
example, by permitting carriers to choose one of three
methodologies to conduct the required testing.
92. The Order on Reconsideration affirms certain key components of
the Performance Measures Order while making several modifications to
the requirements. Specifically, in the Order, the Commission maintains
the choice among three testing methodologies for carriers to conduct
required testing; ties the implementation of speed and latency testing
to a carrier's deployment obligations for the specific high-cost
program under which it receives support; adopts a pre-testing regime to
give both carriers and the Commission the opportunity to ensure that
carriers are familiar with the testing regime and to minimize any
administrative issues; maintains the previously adopted testing sample
sizes but clarifies that carriers must use the same locations for
testing both latency and speed; adopts a revised definition of FCC-
designated Internet Exchange Point (IXP); confirms that the end-points
for testing run from the customer's side of any network being used to
an FCC-designated IXP; maintains the existing daily testing time period
and quarterly testing requirement; allows further flexibility in the
timing of speed tests while maintaining the same frequency of latency
testing; and reaffirms the compliance standards and associated support
reductions for non-compliance.
93. No comments were filed that specifically addressed how broadband
service should be measured, as presented in the USF/ICC Transformation
FNPRM IRFA. Nonetheless, the Commission has considered the potential
impact of the rules proposed in the IRFA on small entities and has
reduced the compliance burden for all small entities in order to lessen
the economic impact of the rules adopted herein on such entities.
94. The RFA directs agencies to provide a description of, and where
feasible, an estimate of the number of small entities that may be
affected by the proposed rules, if adopted. The RFA generally defines
the term ``small entity'' as having the same meaning as the terms
``small business,'' ``small organization,'' and ``small governmental
jurisdiction.'' In addition, the term ``small business'' has the same
meaning as the term ``small-business concern'' under the Small Business
Act. A ``small-business concern'' is one which: (1) Is independently
owned and operated; (2) is not dominant in its field of operation; and
(3) satisfies any additional criteria established by the Small Business
Administration (SBA).
95. As noted in this document, the Performance Measures Order
included an FRFA. In that analysis, the Bureaus described in detail the
small entities that might be significantly affected. Accordingly, in
this FRFA, the Commission hereby incorporates by reference the
descriptions and estimates of the number of small entities from the
previous FRFA in the Performance Measures Order.
96. The Commission expects that the amended requirements in the Order
on Reconsideration will not impose any new or additional reporting,
recordkeeping, or other compliance obligations on small entities and,
as described below, will reduce their costs.
97. The RFA requires an agency to describe any significant
alternatives that it has considered in reaching its proposed approach,
which may include (among others) the following four alternatives: (1)
The establishment of differing compliance or reporting requirements or
timetables that take into account the resources available to small
entities; (2) the clarification, consolidation, or simplification of
compliance or reporting requirements under the rule for small entities;
(3) the use of performance, rather than design, standards; and (4) an
exemption from coverage of the rule, or any part thereof, for small
entities.
98. The Commission has taken further steps which will minimize the
economic impact on small entities. In the Order on Reconsideration, the
Commission adopts a delayed schedule providing for a period of ``pre-
testing'' for all carriers and later start dates for carriers that do
not receive CAF Phase II model-based support. Thus, CAF Phase II model-
based support recipients, which include only large carriers, must begin
pre-testing and testing in 2020, whereas legacy rate-of-return
carriers, many of which are smaller entities, must begin pre-testing in
2022 and testing in 2023, and small carriers receiving A-CAM I model
support do not begin pre-testing until 2021 and testing until 2022. Pre-
testing will give carriers time to correct any issues with their
networks or with their testing infrastructure without being subject to
support reductions, and the delayed schedule for non-CAF Phase II
carriers will permit smaller entities even more time to prepare to meet
the Commission's testing requirements.
99. The Commission also now permits greater flexibility for
carriers to conduct speed tests within an hour. In the Order on
Reconsideration, the Commission clarifies that carriers need not start
speed testing at the very beginning of each test hour. Instead, a
carrier must simply report a successful speed test for each hour;
however, a carrier that begins attempting a speed test within the first
15 minutes of an hour and checks for cross-talk at one-minute intervals
(using the cross-talk thresholds of 64 Kbps for download and 32 Kbps
for upload) may record that no test was successful during that test
hour.
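A minimal sketch of one reading of this timing flexibility appears below. It is illustrative only; the function and variable names are invented, and it assumes that consumer traffic above either cross-talk threshold in every one-minute check is what prevents a valid test.

# Illustrative sketch only (not the Commission's required implementation).
# attempt_start_minute: minute of the hour when the carrier first attempts
# a test; crosstalk_kbps: per-minute (download, upload) consumer traffic.
def classify_test_hour(attempt_start_minute, crosstalk_kbps):
    if attempt_start_minute > 15:
        # Attempt began too late to qualify for the exception described above.
        return "successful test still required for this hour"
    blocked_every_minute = all(down > 64 or up > 32
                               for down, up in crosstalk_kbps)
    if blocked_every_minute:
        return "carrier may record that no test was successful this hour"
    return "run and report the speed test"

# Example: heavy subscriber traffic in each one-minute check.
print(classify_test_hour(5, [(512, 128)] * 10))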
100. Finally, the Commission clarifies that carriers may use the
same subscriber locations for testing both speed and latency, halving
the potential burdens for carriers that may have otherwise believed it
necessary to test separate subscriber locations for speed and latency.
This clarification is most significant for the smallest carriers, which
may use less automated means of testing than larger carriers.
IV. Ordering Clauses
Accordingly, it is ordered that, pursuant to the authority
contained in sections 1-4, 5, 201-206, 214, 218-220, 251, 252, 254,
256, 303(r), 332, 403, and 405 of the Communications Act of 1934, as
amended, and section 706 of the Telecommunications Act of 1996, 47
U.S.C. 151-155, 201-206, 214, 218-220, 251, 252, 254, 256, 303(r), 332,
403, 405, and 1302, the Order on Reconsideration is
adopted, effective thirty (30) days after publication of the text or
summary thereof in the Federal Register, except for paragraphs 15, 16,
19, 22, 23, 26, 31 through 38, 43 through 49, 52, 53, 64, and 75
through 91, which contain new or modified information collection
requirements and will not be effective until approved by the Office
of Management and Budget. The Commission will publish a document in the
Federal Register announcing the effective date for those sections not
yet effective. It is the Commission's intention in adopting these rules
that if any of the rules that the Commission retains, modifies, or
adopts in this document, or the application thereof to any person or
circumstance, are held to be unlawful, the remaining portions of the
rules not deemed unlawful, and the application of such rules to other
persons or circumstances, shall remain in effect to the fullest extent
permitted by law.
101. It is further ordered that, pursuant to the authority
contained in section 405 of the Communications Act of 1934, as amended,
47 U.S.C. 405, and Sec. Sec. 0.331 and 1.429 of the Commission's
rules, 47 CFR 0.331 and 47 CFR 1.429, the Petition for Reconsideration
and Clarification filed by USTELECOM--THE BROADBAND ASSOCIATION, ITTA--
THE VOICE OF AMERICA'S BROADBAND PROVIDERS, and the WIRELESS INTERNET
SERVICE PROVIDERS ASSOCIATION on September 19, 2018 is granted in part
and denied in part to the extent described herein, and the Petition for
Partial Reconsideration filed by MICRONESIAN TELECOMMUNICATIONS
CORPORATION on September 19, 2018 is denied.
102. It is further ordered that, pursuant to the authority
contained in section 5(c)(5) of the Communications Act of 1934, as amended, 47
U.S.C. 155(c)(5), and Sec. 1.115(g) of the Commission's rules, 47 CFR
1.115(g), the Application for Review and Request for Clarification
filed by NTCA--THE RURAL BROADBAND ASSOCIATION on September 19, 2018
and the Application for Review filed by WTA--ADVOCATES FOR BROADBAND on
September 19, 2018, are granted in part and denied in part to the
extent described herein.
List of Subjects in 47 CFR Part 54
Communications common carriers, Health facilities, Infants and
children, internet, Libraries, Reporting and recordkeeping
requirements, Schools, Telecommunications, Telephone.
Federal Communications Commission.
Marlene Dortch,
Secretary.
Final Rules
For the reasons discussed in the preamble, the Federal
Communications Commission amends 47 CFR part 54 as follows:
PART 54--UNIVERSAL SERVICE
■ 1. The authority citation for part 54 continues to read as follows:
Authority: 47 U.S.C. 151, 154(i), 155, 201, 205, 214, 219, 220,
254, 303(r), 403, and 1302, unless otherwise noted.
■ 2. Amend Sec. 54.320 by revising paragraphs (d)(1)(ii) and (iii), the
first sentence of paragraph (d)(1)(iv)(A) and paragraph (d)(2) to read
as follows:
Sec. 54.320 Compliance and recordkeeping for the high-cost program.
* * * * *
(d) * * *
(1) * * *
* * * * *
(ii) Tier 2. If an eligible telecommunications carrier has a
compliance gap of at least 15 percent but less than 25 percent of the
number of locations that the eligible telecommunications carrier is
required to have built out to or, in the case of Alaska Plan mobile-
carrier participants, population covered by the specified technology,
middle mile, and speed of service in the carrier's approved performance
plan, by the interim milestone, USAC will withhold 15 percent of the
eligible telecommunications carrier's monthly support for that support
area and the eligible telecommunications carrier will be required to
file quarterly reports. Once the eligible telecommunications carrier
has reported that it has reduced the compliance gap to less than 15
percent of the required number of locations (or population, if
applicable) for that interim milestone for that support area, the
Wireline Competition Bureau or Wireless Telecommunications Bureau will
issue a letter to that effect, USAC will stop withholding support, and
the eligible telecommunications carrier will receive all of the support
that had been withheld. The eligible telecommunications carrier will
then move to Tier 1 status.
(iii) Tier 3. If an eligible telecommunications carrier has a
compliance gap of at least 25 percent but less than 50 percent of the
number of locations that the eligible telecommunications carrier is
required to have built out to by the interim milestone, or, in the case
of Alaska Plan mobile-carrier participants, population covered by the
specified technology, middle mile, and speed of service in the
carrier's approved performance plan, USAC will withhold 25 percent of
the eligible telecommunications carrier's monthly support for that
support area and the eligible telecommunications carrier will be
required to file quarterly reports. Once the eligible
telecommunications carrier has reported that it has reduced the
compliance gap to less than 25 percent of the required number of
locations (or population, if applicable) for that interim milestone for
that support area, the Wireline Competition Bureau or Wireless
Telecommunications Bureau will issue a letter to that effect, and the
eligible telecommunications carrier will move to Tier 2 status.
(iv) * * *
(A) USAC will withhold 50 percent of the eligible
telecommunications carrier's monthly support for that support area, and
the eligible telecommunications carrier will be required to file
quarterly reports. * * *
* * * * *
(2) Final milestone. Upon notification that the eligible
telecommunications carrier has not met a final milestone, the eligible
telecommunications carrier will have twelve months from the date of the
final milestone deadline to come into full compliance with this
milestone. If the eligible telecommunications carrier does not report
that it has come into full compliance with this milestone within twelve
months, the Wireline Competition Bureau--or Wireless Telecommunications
Bureau in the case of mobile carrier participants--will issue a letter
to this effect. In the case of Alaska Plan mobile carrier participants,
USAC will then recover the percentage of support that is equal to 1.89
times the average amount of support per location received by that
carrier over the support term for the relevant percentage of
population. For other recipients of high-cost support, USAC will then
recover the percentage of support that is equal to 1.89 times the
average amount of support per location received in the support area for
that carrier over the term of support for the relevant number of
locations plus 10 percent of the eligible telecommunications carrier's
total relevant high-cost support over the support term for that support
area. Where a recipient is unable to demonstrate compliance with a
final performance testing milestone, USAC will recover the percentage
of support
that is equal to 1.89 times the average amount of support per location
received in the support area for the relevant number of locations for
that carrier plus 10 percent of the eligible telecommunications
carrier's total relevant high-cost support over the support term for
that support area, the total of which will then be multiplied by the
percentage of time since the carrier was last able to demonstrate
compliance based on performance testing, on a quarterly basis. In the
event that a recipient fails to meet a final milestone both for build-
out and performance compliance, USAC will recover the total of the
percentage of support that is equal to 1.89 times the average amount of
support per location received by that carrier over the support term for
the relevant number of locations to which the carrier failed to build
out; the percentage of support that is equal to 1.89 times the average
amount of support per location received in the support area for the
relevant number of locations for that carrier multiplied by the
percentage of time since the carrier was last able to demonstrate
compliance based on performance testing; and 10 percent of the eligible
telecommunications carrier's total relevant high-cost support over the
support term for that support area.
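For readers working through the recovery arithmetic in paragraph (d)(2), the following sketch shows one straightforward reading of the build-out, performance, and combined recovery calculations for a recipient other than an Alaska Plan mobile-carrier participant. It is illustrative only, is not part of the amended rule text, and uses invented figures and names.

# Illustrative arithmetic only; not part of Sec. 54.320. Inputs are invented.
def buildout_recovery(avg_support_per_location, locations_not_built,
                      total_support):
    # 1.89 x average per-location support x locations not built out, plus
    # 10% of total relevant high-cost support for the support area.
    return (1.89 * avg_support_per_location * locations_not_built
            + 0.10 * total_support)

def performance_recovery(avg_support_per_location, relevant_locations,
                         total_support, share_of_time_noncompliant):
    # Same base amount, multiplied by the percentage of time since the
    # carrier last demonstrated compliance through performance testing
    # (assessed on a quarterly basis).
    base = (1.89 * avg_support_per_location * relevant_locations
            + 0.10 * total_support)
    return base * share_of_time_noncompliant

def combined_recovery(avg_support_per_location, locations_not_built,
                      relevant_locations, total_support,
                      share_of_time_noncompliant):
    # Build-out component + time-scaled performance component + 10% of
    # total relevant high-cost support.
    return (1.89 * avg_support_per_location * locations_not_built
            + 1.89 * avg_support_per_location * relevant_locations
            * share_of_time_noncompliant
            + 0.10 * total_support)

# Example with invented figures: $500 average support per location,
# 200 locations not built out, $1,000,000 total relevant support.
print(buildout_recovery(500.0, 200, 1_000_000.0))  # 289000.0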
[FR Doc. 2019-26448 Filed 12-6-19; 8:45 am]
BILLING CODE 6712-01-P