Medicare Program; CY 2018 Updates to the Quality Payment Program; and Quality Payment Program: Extreme and Uncontrollable Circumstance Policy for the Transition Year, 53568-54229 [2017-24067]
Agencies
[Federal Register Volume 82, Number 220 (Thursday, November 16, 2017)]
[Rules and Regulations]
[Pages 53568-54229]
From the Federal Register Online via the Government Publishing Office [www.gpo.gov]
[FR Doc No: 2017-24067]
Vol. 82
Thursday,
No. 220
November 16, 2017
Part II
Book 2 of 3 Books
Pages 53567-54230
Department of Health and Human Services
-----------------------------------------------------------------------
Centers for Medicare & Medicaid Services
-----------------------------------------------------------------------
42 CFR Part 414
Medicare Program; CY 2018 Updates to the Quality Payment Program; and
Quality Payment Program: Extreme and Uncontrollable Circumstance Policy
for the Transition Year; Rule
-----------------------------------------------------------------------
DEPARTMENT OF HEALTH AND HUMAN SERVICES
Centers for Medicare & Medicaid Services
42 CFR Part 414
[CMS-5522-FC and IFC]
RIN 0938-AT13
Medicare Program; CY 2018 Updates to the Quality Payment Program;
and Quality Payment Program: Extreme and Uncontrollable Circumstance
Policy for the Transition Year
AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS.
ACTION: Final rule with comment period and interim final rule with
comment period.
-----------------------------------------------------------------------
SUMMARY: The Medicare Access and CHIP Reauthorization Act of 2015
(MACRA) established the Quality Payment Program for eligible
clinicians. Under the Quality Payment Program, eligible clinicians can
participate via one of two tracks: Advanced Alternative Payment Models
(APMs); or the Merit-based Incentive Payment System (MIPS). We began
implementing the Quality Payment Program through rulemaking for
calendar year (CY) 2017. This final rule with comment period provides
updates for the second and future years of the Quality Payment Program.
In addition, we are issuing an interim final rule with comment
period (IFC) that addresses extreme and uncontrollable circumstances
that MIPS eligible clinicians may face as a result of widespread
catastrophic events affecting a region or locale in CY 2017, such as
Hurricanes Harvey, Irma, and Maria.
DATES:
Effective date: The provisions of this final rule with comment
period and interim final rule with comment period are effective on
January 1, 2018.
Comment date: To be assured consideration, comments must be
received at one of the addresses provided below, no later than 5 p.m.
on January 1, 2018.
ADDRESSES: Please refer to file code CMS-5522-FC when commenting on
issues in the final rule with comment period, and to file code
CMS-5522-IFC when commenting on issues in the interim final rule with
comment period. Because of staff and resource limitations, we cannot
accept comments by facsimile (FAX) transmission. You may submit
comments in one of four ways (please choose only one of the ways
listed):
1. Electronically. You may submit electronic comments on this
regulation to https://www.regulations.gov. Follow the ``Submit a
comment'' instructions.
2. By regular mail. You may mail written comments to the following
address ONLY: Centers for Medicare & Medicaid Services, Department of
Health and Human Services, Attention: CMS-5522-FC or CMS-5522-IFC (as
appropriate), P.O. Box 8016, Baltimore, MD 21244-8016.
Please allow sufficient time for mailed comments to be received
before the close of the comment period.
3. By express or overnight mail. You may send written comments to
the following address ONLY: Centers for Medicare & Medicaid Services,
Department of Health and Human Services, Attention: CMS-5522-FC or CMS-
5522-IFC (as appropriate), Mail Stop C4-26-05, 7500 Security Boulevard,
Baltimore, MD 21244-1850.
4. By hand or courier. Alternatively, you may deliver (by hand or
courier) your written comments ONLY to the following addresses prior to
the close of the comment period:
a. For delivery in Washington, DC-- Centers for Medicare & Medicaid
Services, Department of Health and Human Services, Room 445-G, Hubert
H. Humphrey Building, 200 Independence Avenue SW., Washington, DC
20201.
(Because access to the interior of the Hubert H. Humphrey Building
is not readily available to persons without Federal government
identification, commenters are encouraged to leave their comments in
the CMS drop slots located in the main lobby of the building. A stamp-
in clock is available for persons wishing to retain a proof of filing
by stamping in and retaining an extra copy of the comments being
filed.)
b. For delivery in Baltimore, MD-- Centers for Medicare & Medicaid
Services, Department of Health and Human Services, 7500 Security
Boulevard, Baltimore, MD 21244-1850.
If you intend to deliver your comments to the Baltimore address,
call telephone number (410) 786-7195 in advance to schedule your
arrival with one of our staff members. Comments erroneously mailed to
the addresses indicated as appropriate for hand or courier delivery may
be delayed and received after the comment period.
For information on viewing public comments, see the beginning of
the SUPPLEMENTARY INFORMATION section.
FOR FURTHER INFORMATION CONTACT:
Molly MacHarris, (410) 786-4461, for inquiries related to MIPS.
Benjamin Chin, (410) 786-0679, for inquiries related to APMs.
SUPPLEMENTARY INFORMATION:
Table of Contents
I. Executive Summary and Background
A. Overview
B. Quality Payment Program Strategic Objectives
C. One Quality Payment Program
D. Summary of the Major Provisions
1. Quality Payment Program Year 2
2. Small Practices
3. Summary of Major Provisions for Advanced Alternative Payment
Models (Advanced APMs)
4. Summary of Major Provisions for the Merit-Based Incentive
Payment System (MIPS)
E. Payment Adjustments
F. Benefits and Costs of the Final Rule With Comment Period
G. Automatic Extreme and Uncontrollable Circumstance Policy
Interim Final Rule With Comment Period
H. Stakeholder Input
II. Summary of the Provisions of the Proposed Regulations, and
Analysis of and Responses to Public Comments
A. Introduction
B. Definitions
C. MIPS Program Details
1. MIPS Eligible Clinicians
2. Exclusions
3. Group Reporting
4. Virtual Groups
5. MIPS Performance Period
6. MIPS Category Measures and Activities
7. MIPS Final Score Methodology
8. MIPS Payment Adjustments
9. Review and Correction of MIPS Final Score
10. Third Party Data Submission
11. Public Reporting on Physician Compare
D. Overview of the APM Incentive
1. Overview
2. Terms and Definitions
3. Regulation Text Changes
4. Advanced APMs
5. Qualifying APM Participant (QP) and Partial QP Determinations
6. All-Payer Combination Option
7. Physician-Focused Payment Models (PFPMs)
III. Quality Payment Program: Extreme and Uncontrollable
Circumstances Policy for the Transition Year Interim Final Rule With
Comment Period
A. Background
B. Changes to the Extreme and Uncontrollable Circumstances
Policies for the MIPS Transition Year
C. Changes to the Final Score and Policies for Redistributing
the Performance Category Weights for the Transition Year
D. Changes to the APM Scoring Standard for MIPS Eligible
Clinicians in MIPS APMs for the Transition Year
E. Waiver of Proposed Rulemaking for Provisions Related to
Extreme and Uncontrollable Circumstances
IV. Collection of Information Requirements
A. Wage Estimates
B. Framework for Understanding the Burden of MIPS Data
Submission
C. ICR Regarding Burden for Virtual Group Election (Sec.
414.1315)
D. ICR Regarding Burden for Election of Facility-Based
Measurement (Sec. 414.1380(e))
E. ICRs Regarding Burden for Third-Party Reporting (Sec.
414.1400)
F. ICRs Regarding the Quality Performance Category (Sec. Sec.
414.1330 and 414.1335)
G. ICRs Regarding Burden Estimate for Advancing Care Information
Data (Sec. 414.1375)
H. ICR Regarding Burden for Improvement Activities Submission
(Sec. 414.1355)
I. ICR Regarding Burden for Nomination of Improvement Activities
(Sec. 414.1360)
J. ICRs Regarding Burden for Cost (Sec. 414.1350)
K. ICR Regarding Partial QP Elections (Sec. 414.1430)
L. ICRs Regarding Other Payer Advanced APM Determinations:
Payer-Initiated Process (Sec. 414.1440) and Medicaid Specific
Eligible Clinician Initiated Process (Sec. 414.1445)
M. ICRs Regarding Burden for Voluntary Participants To Elect Opt
Out of Performance Data Display on Physician Compare (Sec.
414.1395)
N. Summary of Annual Burden Estimates
O. Submission of PRA-Related Comments
P. Collection of Information Requirements for the Interim Final
Rule With Comment Period: Medicare Program; Quality Payment Program:
Extreme and Uncontrollable Circumstances Policy for the Transition
Year
V. Response to Comments
VI. Regulatory Impact Analysis
A. Statement of Need
B. Overall Impact
C. Changes in Medicare Payments
D. Impact on Beneficiaries
E. Regulatory Review Costs
F. Accounting Statement
G. Regulatory Impact Statement for Interim Final Rule With
Comment Period: Medicare Program; Quality Payment Program: Extreme
and Uncontrollable Circumstance Policy for the Transition Year
Acronyms
Because of the many terms to which we refer by acronym in this
rule, we are listing the acronyms used and their corresponding meanings
in alphabetical order below:
ABC™ Achievable Benchmark of Care
ACO Accountable Care Organization
API Application Programming Interface
APM Alternative Payment Model
APRN Advanced Practice Registered Nurse
ASC Ambulatory Surgical Center
ASPE HHS' Office of the Assistant Secretary for Planning and
Evaluation
BPCI Bundled Payments for Care Improvement
CAH Critical Access Hospital
CAHPS Consumer Assessment of Healthcare Providers and Systems
CBSA Core Based Statistical Area
CEHRT Certified EHR Technology
CFR Code of Federal Regulations
CHIP Children's Health Insurance Program
CJR Comprehensive Care for Joint Replacement
COI Collection of Information
CPR Customary, Prevailing, and Reasonable
CPS Composite Performance Score
CPT Current Procedural Terminology
CQM Clinical Quality Measure
CY Calendar Year
eCQM Electronic Clinical Quality Measure
ED Emergency Department
EHR Electronic Health Record
EP Eligible Professional
ESRD End-Stage Renal Disease
FFS Fee-for-Service
FR Federal Register
FQHC Federally Qualified Health Center
GAO Government Accountability Office
HCC Hierarchical Condition Category
HIE Health Information Exchange
HIPAA Health Insurance Portability and Accountability Act of 1996
HITECH Health Information Technology for Economic and Clinical
Health
HPSA Health Professional Shortage Area
HHS Department of Health & Human Services
HRSA Health Resources and Services Administration
IHS Indian Health Service
IT Information Technology
LDO Large Dialysis Organization
MACRA Medicare Access and CHIP Reauthorization Act of 2015
MEI Medicare Economic Index
MIPAA Medicare Improvements for Patients and Providers Act of 2008
MIPS Merit-based Incentive Payment System
MLR Minimum Loss Rate
MSPB Medicare Spending per Beneficiary
MSR Minimum Savings Rate
MUA Medically Underserved Area
NPI National Provider Identifier
OCM Oncology Care Model
ONC Office of the National Coordinator for Health Information
Technology
PECOS Medicare Provider Enrollment, Chain, and Ownership System
PFPMs Physician-Focused Payment Models
PFS Physician Fee Schedule
PHI Protected Health Information
PHS Public Health Service
PQRS Physician Quality Reporting System
PTAC Physician-Focused Payment Model Technical Advisory Committee
QCDR Qualified Clinical Data Registry
QP Qualifying APM Participant
QRDA Quality Reporting Document Architecture
QRUR Quality and Resource Use Reports
RBRVS Resource-Based Relative Value Scale
RFI Request for Information
RHC Rural Health Clinic
RIA Regulatory Impact Analysis
RVU Relative Value Unit
SGR Sustainable Growth Rate
TCPI Transforming Clinical Practice Initiative
TIN Tax Identification Number
VBP Value-Based Purchasing
VM Value-Based Payment Modifier
VPS Volume Performance Standard
I. Executive Summary and Background
A. Overview
This final rule with comment period makes payment and policy
changes to the Quality Payment Program. The Medicare Access and CHIP
Reauthorization Act of 2015 (MACRA) (Pub. L. 114-10, enacted April 16,
2015) amended Title XVIII of the Social Security Act (the Act) to
repeal the Medicare sustainable growth rate (SGR) formula, to
reauthorize the Children's Health Insurance Program (CHIP), and to
strengthen Medicare access by improving physician and other clinician
payments and making other improvements. The MACRA advances a forward-
looking, coordinated framework for clinicians to successfully take part
in the Quality Payment Program that rewards value and outcomes in one
of two ways:
Advanced Alternative Payment Models (Advanced APMs).
Merit-based Incentive Payment System (MIPS).
Our goal is to support patients and clinicians in making their own
decisions about health care using data-driven insights, increasingly
aligned and meaningful quality measures, and innovative technology. To
implement this vision, the Quality Payment Program emphasizes high-
value care and patient outcomes while minimizing burden on eligible
clinicians. The Quality Payment Program is also designed to be
flexible, transparent, and structured to improve over time with input
from clinicians, patients, and other stakeholders.
In today's health care system, we often pay doctors and other
clinicians based on the number of services they perform rather than
patient health outcomes. The good work that clinicians do is not
limited to conducting tests or writing prescriptions, but also includes taking
the time to have a conversation with a patient about test results,
being available to a patient through telehealth or expanded hours,
coordinating medicine and treatments to avoid confusion or errors, and
developing care plans.
The Quality Payment Program takes a comprehensive approach to
payment by basing consideration of quality on a set of evidence-based
measures that were primarily developed by clinicians, thus encouraging
improvement in clinical practice, supported by advances in technology
that allow for the easy exchange of information. The Quality
Payment Program also offers special incentives for those participating
in certain innovative models of care that
provide an alternative to fee-for-service payment.
We have sought and will continue to seek feedback from the health
care community through various public avenues such as rulemaking,
listening sessions and stakeholder engagement. We understand that
technology, infrastructure, physician support systems, and clinical
practices will change over the next few years, and we are committed to
refining our policies for the Quality Payment Program with those factors
in mind.
We are aware of the diversity among clinician practices in their
experience with quality-based payments and expect the Quality Payment
Program to evolve over multiple years. The groundwork has been laid for
expansion toward an innovative, patient-centered health system that is
both outcome focused and resource effective, one that leverages health
information technology to support clinicians and patients and builds
collaboration across care settings. The Quality Payment Program:
(1) Supports care improvement by focusing on better outcomes for
patients and on preserving independent clinical practice; (2)
promotes the adoption of APMs that align incentives for high-quality,
low-cost care across healthcare stakeholders; and (3) advances existing
delivery system reform efforts, including ensuring a smooth transition
to a healthcare system that promotes high-value, efficient care through
unification of CMS legacy programs.
In the Merit-based Incentive Payment System (MIPS) and Alternative
Payment Model (APM) Incentive under the Physician Fee Schedule, and
Criteria for Physician-Focused Payment Models final rule with comment
period (81 FR 77008, November 4, 2016), referred to as the ``CY 2017
Quality Payment Program final rule,'' we established incentives for
participation in Advanced APMs, supporting the goals of transitioning
from fee-for-service (FFS) payments to payments for quality and value.
The CY 2017 Quality Payment Program final rule included definitions and
processes to determine Qualifying APM Participants (QPs) in Advanced
APMs. The CY 2017 Quality Payment Program final rule also established
the criteria for use by the Physician-Focused Payment Model Technical
Advisory Committee (PTAC) in making comments and recommendations to the
Secretary on proposals for physician-focused payment models (PFPMs).
The CY 2017 Quality Payment Program final rule also established
policies to implement MIPS, which consolidated certain aspects of the
Physician Quality Reporting System (PQRS), the Physician Value-based
Payment Modifier (VM), and the Medicare Electronic Health Record (EHR)
Incentive Program for Eligible Professionals (EPs) and made CY 2017 the
transition year for clinicians under the Quality Payment Program. As
prescribed by MACRA, MIPS focuses on the following: (1) Quality--
including a set of evidence-based, specialty-specific standards; (2)
cost; (3) practice-based improvement activities; and (4) use of
certified electronic health record (EHR) technology (CEHRT) to support
interoperability and advanced quality objectives in a single, cohesive
program that avoids redundancies.
This CY 2018 final rule with comment period continues to build and
improve upon our transition year policies, as well as address elements
of MACRA that were not included in the first year of the program,
including virtual groups, facility-based measurement (beginning with
the CY 2019 performance period), and improvement scoring. This final rule
with comment period implements policies for ``Quality Payment Program
Year 2,'' some of which will continue into subsequent years of the
Quality Payment Program.
We have also included an interim final rule with comment period to
establish an automatic extreme and uncontrollable circumstance policy
for the 2017 MIPS performance period that recognizes that recent
hurricanes (Harvey, Irma, and Maria) and other natural disasters can effectively
impede a MIPS eligible clinician's ability to participate in MIPS.
B. Quality Payment Program Strategic Objectives
After extensive outreach to clinicians, patients, and other
stakeholders, we created 7 strategic objectives to drive continued
progress and improvement. These objectives help guide our final
policies and future rulemaking in order to design, implement, and
advance a Quality Payment Program that aims to improve health outcomes,
promote efficiency, minimize burden of participation, and provide
fairness and transparency in operations.
These strategic objectives are as follows: (1) To improve
beneficiary outcomes and engage patients through patient-centered
Advanced APM and MIPS policies; (2) to enhance clinician experience
through flexible and transparent program design and interactions with
easy-to-use program tools; (3) to increase the availability and
adoption of robust Advanced APMs; (4) to promote program understanding
and maximize participation through customized communication, education,
outreach and support that meet the needs of the diversity of physician
practices and patients, especially the unique needs of small practices;
(5) to improve data and information sharing on program performance to
provide accurate, timely, and actionable feedback to clinicians and
other stakeholders; (6) to deliver IT systems capabilities that meet
the needs of users for data submission, reporting, and improvement and
are seamless, efficient, and valuable on the front end and back end; and (7)
to ensure operational excellence in program implementation and ongoing
development, and to design the program in a manner that allows smaller
independent and rural practices to be successful. More information on
these objectives and the Quality Payment Program can be found at
qpp.cms.gov.
Stakeholder feedback is the hallmark of the Quality Payment
Program. We solicited and reviewed nearly 1,300 comments and had over
100,000 physicians and other stakeholders attend our outreach sessions
to help inform our policies for Quality Payment Program Year 2. We have
set ambitious yet achievable goals for those clinicians interested in
APMs, as they are a vital part of bending the Medicare cost curve by
encouraging the delivery of high-quality, low-cost care. To allow this
program to work for all stakeholders, we further recognize that we must
provide ongoing education, support, and technical assistance so that
clinicians can understand program requirements, use available tools to
enhance their practices, and improve quality and progress toward
participation in APMs if that is the best choice for their practice.
Finally, we understand that we must achieve excellence in program
management, focusing on customer needs while also promoting problem-
solving, teamwork, and leadership to provide continuous improvements in
the Quality Payment Program.
C. One Quality Payment Program
Clinicians have told us that they do not separate their patient
care into domains, and that the Quality Payment Program needs to
reflect typical clinical workflows in order to achieve its goal of
better patient care. Advanced APMs, the focus of one pathway of the
Quality Payment Program, contribute to better care and smarter spending
by allowing physicians and other clinicians to deliver coordinated,
customized, high-value care to their patients in a streamlined and
cost-effective manner. Within MIPS, the second pathway of the Quality
Payment Program, we believe that integration into typical clinical
workflows can best be accomplished by making connections across the
four statutory pillars of the MIPS incentive structure. Those four
pillars are: (1) Quality; (2) clinical practice improvement activities
(referred to as ``improvement activities''); (3) meaningful use of
CEHRT (referred to as ``advancing care information''); and (4) resource
use (referred to as ``cost'').
Although there are two separate pathways within the Quality Payment
Program, Advanced APMs and MIPS both contribute toward the goal of
seamless integration of the Quality Payment Program into clinical
practice workflows. Advanced APMs promote this seamless integration by
way of payment methodology and design that incentivize care
coordination. The MIPS builds the capacity of eligible clinicians
across the four pillars of MIPS to prepare them for participation in
APMs in later years of the Quality Payment Program. Indeed, the bedrock
of the Quality Payment Program is high-value, patient-centered care,
informed by useful feedback, in a continuous cycle of improvement. The
principal way that MIPS measures quality of care is through a set of
clinical quality measures (CQMs) from which MIPS eligible clinicians
can select. The CQMs are evidence-based, and the vast majority are
created or supported by clinicians. Over time, the portfolio of quality
measures will grow and develop, driving towards outcomes that are of
the greatest importance to patients and clinicians and away from
process, or ``check the box'' type measures.
Through MIPS, we have the opportunity to measure clinical and
patient outcomes, not only through evidence-based quality measures, but
also by accounting for activities that clinicians and patients
themselves identify: Namely, practice-driven quality improvement. MIPS
also requires us to assess whether CEHRT is used in a meaningful way;
based on significant feedback, this area was simplified to support
the exchange of patient information, engagement of patients in their
own care through technology, and the way technology specifically
supports the quality goals selected by the practice. And lastly, MIPS
requires us to measure the cost of services provided through the cost
performance category, which will contribute to a MIPS eligible
clinician's final score beginning in the second year of the MIPS.
We realize the Quality Payment Program is a big change. In this
final rule with comment period, we continue the slow ramp-up of the
Quality Payment Program by establishing special policies for MIPS Year
2 aimed at encouraging successful participation in the program while
reducing burden, reducing the number of clinicians required to
participate, and preparing clinicians for the CY 2019 performance
period (CY 2021 payment year). Our hope is for the program to evolve to
the point where all the clinical activities captured in MIPS across the
four performance categories reflect the single, unified goal of quality
improvement.
D. Summary of the Major Provisions
1. Quality Payment Program Year 2
We believe the second year of the Quality Payment Program should
build upon the foundation that has been established, which provides a
trajectory for clinicians toward value-based care. A second year to ramp up
the program will continue to help build upon the iterative learning and
development of year 1 in preparation for a robust program in year 3.
2. Small Practices
The support of small, independent practices remains an important
thematic objective for the implementation of the Quality Payment
Program and is expected to be carried throughout future rulemaking.
Many small practices did not have to participate in MIPS during the
transition year due to the low-volume threshold, which was set for the
CY 2017 performance period at less than or equal to $30,000 in Medicare
Part B allowed charges or less than or equal to 100 Medicare Part B
patients. We have heard feedback that many small practices still face
challenges in their ability to participate in the program. We are
implementing additional flexibilities for Year 2 including:
Implementing the virtual groups provisions; increasing the low-volume
threshold to less than or equal to $90,000 in Medicare Part B allowed
charges or less than or equal to 200 Medicare Part B patients; adding a
significant hardship exception from the advancing care information
performance category for MIPS eligible clinicians in small practices;
providing 3 points even if small practices submit quality measures
below data completeness standards; and providing bonus points that are
added to the final scores of MIPS eligible clinicians who are in small
practices. We believe that these additional flexibilities and reduction
in barriers will further enhance the ability of small practices to
participate successfully in the Quality Payment Program.
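For illustration only, the following Python sketch restates the Year 2
low-volume threshold arithmetic described above; the function and input
names are hypothetical and do not reflect how CMS operationally
identifies excluded clinicians from claims data.

# Hypothetical sketch of the Year 2 low-volume threshold test described above.
# A clinician or group at or below EITHER limit does not exceed the threshold
# and is excluded from MIPS for the year.

LOW_VOLUME_DOLLARS_YEAR_2 = 90_000   # Medicare Part B allowed charges
LOW_VOLUME_PATIENTS_YEAR_2 = 200     # Medicare Part B patients

def is_excluded_by_low_volume(part_b_allowed_charges: float,
                              part_b_patient_count: int) -> bool:
    """Return True if the clinician or group falls at or below the Year 2
    low-volume threshold and is therefore excluded from MIPS."""
    return (part_b_allowed_charges <= LOW_VOLUME_DOLLARS_YEAR_2
            or part_b_patient_count <= LOW_VOLUME_PATIENTS_YEAR_2)

# Example: $75,000 in allowed charges and 250 patients is excluded, because
# the dollar amount is at or below the $90,000 limit.
print(is_excluded_by_low_volume(75_000, 250))  # True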
In keeping with the objectives to provide education about the
Quality Payment Program and maximize participation, and as mandated by
the statute, $100 million in funding over a period of 5 years was
provided for technical assistance, offering guidance and assistance to
MIPS eligible clinicians in small practices through contracts with
regional health collaboratives and others. Guidance and
assistance on the MIPS performance categories or the transition to APM
participation will be available to MIPS eligible clinicians in
practices of 15 or fewer clinicians with priority given to practices
located in rural areas or medically underserved areas (MUAs), and
practices with low MIPS final scores. More information on the technical
assistance support available to small practices can be found at https://qpp.cms.gov/docs/QPP_Support_for_Small_Practices.pdf.
We have also performed an updated regulatory impact analysis,
accounting for flexibilities, many of which are continuing into the
Quality Payment Program Year 2, that have been created to ease the
burden for small and solo practices.
3. Summary of Major Provisions for Advanced Alternative Payment Models
(Advanced APMs)
a. Overview
APMs represent an important step forward in our efforts to move our
healthcare system from volume-based to value-based care. Our existing
APM policies provide opportunities that support state flexibility,
local leadership, regulatory relief, and innovative approaches to
improve quality, accessibility, and affordability.
APMs that meet the criteria to be Advanced APMs provide the pathway
through which eligible clinicians, many of whom would otherwise
fall under the MIPS, can become Qualifying APM Participants (QPs),
thereby earning incentives for their Advanced APM participation. In the
CY 2017 Quality Payment Program final rule, we estimated that 70,000 to
120,000 eligible clinicians would be QPs for payment year 2019 based on
Advanced APM participation in performance year 2017 (81 FR 77516). With
new Advanced APMs expected to be available for participation in 2018,
including the Medicare ACO Track 1 Plus (1+) Model, and the addition of
new participants for some current Advanced APMs, such as the Next
Generation ACO Model and Comprehensive Primary Care Plus (CPC+) Model,
we anticipate higher numbers of QPs in subsequent years of the program.
We currently estimate that approximately 185,000 to 250,000
eligible clinicians may become QPs for payment year 2020 based on
Advanced APM participation in performance year 2018.
b. Advanced APMs
In the CY 2017 Quality Payment Program final rule, to be considered
an Advanced APM, we finalized that an APM must meet all three of the
following criteria, as required under section 1833(z)(3)(D) of the Act:
(1) The APM must require participants to use CEHRT; (2) The APM must
provide for payment for covered professional services based on quality
measures comparable to those in the quality performance category under
MIPS; and (3) The APM must either require that participating APM
Entities bear risk for monetary losses of a more than nominal amount
under the APM, or be a Medical Home Model expanded under section
1115A(c) of the Act (81 FR 77408).
We are maintaining the generally applicable revenue-based nominal
amount standard at 8 percent for QP Performance Periods 2019 and 2020.
We are exempting participants in Round 1 of the CPC+ Model as of
January 1, 2017 from the 50 eligible clinician limit as proposed. We
are also finalizing a more gradual ramp-up in percentages of revenue
for the Medical Home Model nominal amount standard over the next
several years.
c. Qualifying APM Participant (QP) and Partial QP Determinations
QPs are eligible clinicians in an Advanced APM who have met a
threshold percentage of their patients or payments through an Advanced
APM or, beginning in performance year 2019, attain QP status through
the All-Payer Combination Option. Eligible clinicians who are QPs for a
year are excluded from the MIPS reporting requirements and payment
adjustment for the year, and receive a 5 percent APM Incentive Payment
for each year from 2019 through 2024. The statute sets
thresholds for the level of participation in Advanced APMs required for
an eligible clinician to become a QP for a year.
We are finalizing that for Advanced APMs that start or end during
the QP Performance Period and operate continuously for a minimum of 60
days during the QP Performance Period for the year, we are making QP
determinations using payment or patient data only for the dates that
APM Entities were able to participate in the Advanced APM per the terms
of the Advanced APM, not for the full QP Performance Period.
Eligible clinicians who participate in Advanced APMs but do not
meet the QP or Partial QP thresholds are subject to MIPS reporting
requirements and payment adjustments unless they are otherwise excluded
from MIPS.
d. All-Payer Combination Option
The All-Payer Combination Option, which uses a calculation based on
an eligible clinician's participation in both Advanced APMs and Other
Payer Advanced APMs to make QP determinations, is applicable beginning
in performance year 2019. To become a QP through the All-Payer
Combination Option, an eligible clinician must participate in an
Advanced APM with CMS as well as an Other Payer Advanced APM. We
determine whether other payer arrangements are Other Payer Advanced
APMs based on information submitted to us by eligible clinicians, APM
Entities, and in some cases by payers, including states and Medicare
Advantage Organizations. In addition, the eligible clinician or the APM
Entity must submit information to CMS so that we can determine whether
the eligible clinician meets the requisite QP threshold of
participation.
To be an Other Payer Advanced APM, as set forth in section
1833(z)(2)(B)(ii) and (C)(ii) of the Act and implemented in the CY 2017
Quality Payment Program final rule, a payment arrangement with a payer
(for example, payment arrangements authorized under Title XIX, Medicare
Health Plan payment arrangements, and payment arrangements in CMS
Multi-Payer Models) must meet all three of the following criteria: (1)
CEHRT is used; (2) the payment arrangement must require the use of
quality measures comparable to those in the quality performance
category under MIPS; and (3) the payment arrangement must either
require the APM Entities to bear more than nominal financial risk if
actual aggregate expenditures exceed expected aggregate expenditures,
or be a Medicaid Medical Home Model that meets criteria comparable to
Medical Home Models expanded under section 1115A(c) of the Act.
In this final rule with comment period, we are finalizing policies
that provide more detail about how the All-Payer Combination Option
will operate. We are finalizing that an other payer arrangement would
meet the generally applicable revenue-based nominal amount standard we
proposed if, under the terms of the other payer arrangement, the total
amount that an APM Entity potentially owes the payer or foregoes is
equal to at least: For the 2019 and 2020 QP Performance Periods, 8
percent of the total combined revenues from the payer of providers and
suppliers in participating APM Entities only for arrangements that are
expressly defined in terms of revenue. We are also finalizing a more
gradual ramp-up in percentages of revenue for the Medicaid Medical Home
Model nominal amount standard over the next several years.
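As an illustration only, the sketch below restates the revenue-based
nominal amount arithmetic described above for the 2019 and 2020 QP
Performance Periods; the aggregate inputs and names are hypothetical
and do not represent the actual determination process.

# Illustrative check of the generally applicable revenue-based nominal amount
# standard described above (8 percent for the 2019 and 2020 QP Performance
# Periods). The aggregates passed in are hypothetical.

REVENUE_BASED_NOMINAL_AMOUNT = 0.08  # 8 percent

def meets_revenue_based_standard(total_potential_risk: float,
                                 total_payer_revenue: float) -> bool:
    """Return True if the amount the APM Entity potentially owes the payer
    or foregoes is at least 8 percent of the total combined revenues from
    the payer to providers and suppliers in participating APM Entities."""
    return total_potential_risk >= REVENUE_BASED_NOMINAL_AMOUNT * total_payer_revenue

# Example: $90,000 of potential risk against $1,000,000 of payer revenue
# meets the 8 percent standard.
print(meets_revenue_based_standard(90_000, 1_000_000))  # True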
We are finalizing the Payer Initiated and Eligible Clinician Other
Payer Advanced APM determination processes to allow payers, APM
Entities, or eligible clinicians to request that we determine whether
other payer arrangements meet the Other Payer Advanced APM criteria. We
have also finalized requirements pertaining to the submission of
information.
We are finalizing certain modifications to how we calculate
Threshold Scores and make QP determinations under the All-Payer
Combination Option. We are retaining the QP Performance Period for the
All-Payer Combination Option from January 1 through August 31 of each
year as finalized in the CY 2017 Quality Payment Program final rule.
e. Physician-Focused Payment Models (PFPMs)
The PTAC is an 11-member federal advisory committee that is an
important avenue for the creation of innovative payment models. The
PTAC is charged with reviewing stakeholders' proposed PFPMs, and making
comments and recommendations to the Secretary regarding whether they
meet the PFPM criteria established by the Secretary through rulemaking
in the CY 2017 Quality Payment Program final rule. The Secretary is
required to review the comments and recommendations submitted by the
PTAC and post a detailed response to these recommendations on the CMS
Web site.
We sought comments on broadening the definition of PFPM to include
payment arrangements that involve Medicaid or the Children's Health
Insurance Program (CHIP) as a payer even if Medicare is not included as
a payer. We are maintaining the current definition of a PFPM to include
only payment arrangements with Medicare as a payer. We believe this
definition retains focus on APMs and Advanced APMs, which would be
proposals that the Secretary has more direct authority to implement,
and maintains consistency for PTAC's review as it continues to refine
its processes. In addition, we sought comment on the
Secretary's criteria and stakeholders' needs in developing PFPM
proposals aimed at meeting the criteria.
4. Summary of Major Provisions for the Merit-Based Incentive Payment
System (MIPS)
For Quality Payment Program Year 2, which is the second year of MIPS
and includes the 2018 performance period and the 2020 MIPS payment
year, we are finalizing the major provisions summarized below.
a. Quality
We previously finalized that the quality performance category would
comprise 60 percent of the final score for the transition year and 50
percent of the final score for the 2020 MIPS payment year (81 FR
77100). While we proposed to maintain a 60 percent weight for the
quality performance category for the 2020 MIPS payment year, we are not
finalizing this proposal and will be keeping our previously finalized
policy to weight the quality performance category at 50 percent for the
2020 MIPS payment year. We are also finalizing that for purposes of the
2021 MIPS payment year, the performance period for the quality and cost
performance categories is CY 2019 (January 1, 2019 through December 31,
2019). We note that we had previously finalized that for the purposes
of the 2020 MIPS payment year the performance period for the quality
and cost performance categories is CY 2018 (January 1, 2018 through
December 31, 2018). We did not make proposals to modify this time frame
in the CY 2018 Quality Payment Program proposed rule and are therefore
unable to modify this performance period.
Quality measures are selected annually through a call for quality
measures under consideration, with a final list of quality measures
being published in the Federal Register by November 1 of each year. We
are finalizing for the CAHPS for MIPS survey for the Quality Payment
Program Year 2 and future years that the survey administration period
will, at a minimum, span over 8 weeks and, at a maximum, 17 weeks and
will end no later than February 28th following the applicable
performance period. In addition, we are finalizing for the Quality
Payment Program Year 2 and future years to remove two Summary Survey
Modules (SSMs), specifically, ``Helping You to Take Medication as
Directed'' and ``Between Visit Communication'' from the CAHPS for MIPS
survey.
For the 2018 MIPS performance period, we previously finalized that
the data completeness threshold would increase to 60 percent for data
submitted on quality measures using QCDRs, qualified registries,
EHR, or Medicare Part B claims. While we proposed to maintain a 50
percent data completeness threshold for the 2018 MIPS performance
period, we are not finalizing this proposal and will be keeping our
previously finalized data completeness threshold of 60 percent for data
submitted on quality measures using QCDRs, qualified registries, EHR,
or Medicare Part B claims for the 2018 MIPS performance period. We also
proposed to set the data completeness threshold for the 2021 MIPS
payment year (2019 performance period) at 60 percent for data submitted
on quality measures using QCDRs, qualified registries, EHR, or Medicare
Part B claims. We are also finalizing this proposal. We anticipate that
as MIPS eligible clinicians gain experience with the MIPS we will
propose to further increase these thresholds over time.
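For illustration only, the data completeness threshold is a simple
ratio test; the sketch below applies the 60 percent threshold to
hypothetical counts and is not part of the regulatory text.

# Illustrative data completeness check for the 2018 MIPS performance period:
# at least 60 percent of the eligible instances for a quality measure must be
# reported (QCDR, qualified registry, EHR, or Medicare Part B claims).

DATA_COMPLETENESS_THRESHOLD = 0.60

def meets_data_completeness(reported_instances: int,
                            eligible_instances: int) -> bool:
    """Return True if the reported share of eligible instances meets the
    60 percent data completeness threshold."""
    if eligible_instances == 0:
        return False
    return reported_instances / eligible_instances >= DATA_COMPLETENESS_THRESHOLD

print(meets_data_completeness(130, 200))  # True: 65 percent reported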
b. Improvement Activities
Improvement activities are those that improve clinical practice or
care delivery and that, when effectively executed, are likely to result
in improved outcomes. We believe improvement activities support broad
aims within healthcare delivery, including care coordination,
beneficiary engagement, population management, and health equity. For
the 2020 MIPS payment year, we previously finalized that the
improvement activities performance category would comprise 15 percent
of the final score (81 FR 77179). There are no changes in improvement
activities scoring for Quality Payment Program Year 2 (2018 MIPS
performance period) as discussed in section II.C.7.a.(5) of this final
rule with comment period. However, in this final rule, we are
finalizing our proposal to no longer require self-identifications for
non-patient facing MIPS eligible clinicians, small practices, practices
located in rural areas or geographic HPSAs, or any combination thereof,
beginning with the 2018 MIPS performance period and for future years.
We are finalizing that for Quality Payment Program Year 2 and
future years (2018 MIPS performance period and future years), MIPS
eligible clinicians or groups must submit data on improvement
activities in one of the following manners: Via qualified registries,
EHR submission mechanisms, QCDR, CMS Web Interface, or attestation; and
that for activities that are performed for at least a continuous 90
days during the performance period, MIPS eligible clinicians must
submit a yes response for activities within the Improvement Activities
Inventory.
In this final rule with comment period, we are finalizing updates
to the Improvement Activities Inventory. Specifically, as discussed in
the appendices (Tables F and G) of this final rule with comment period,
we are finalizing 21 new improvement activities (some with
modification) and changes to 27 previously adopted improvement
activities (some with modification and including 1 removal) for the
Quality Payment Program Year 2 and future years (2018 MIPS performance
period and future years) Improvement Activities Inventory. These
activities were recommended by clinicians, patients and other
stakeholders interested in advancing quality improvement and
innovations in healthcare. We will continue to seek new improvement
activities as the program evolves. Additionally, we are finalizing
several policies related to submission of improvement activities. In
particular, we are formalizing the annual call for activities process
for Quality Payment Program Year 3 and future years. We are finalizing
with modification, for the Quality Payment Program Year 3 and future
years, that stakeholders should apply one or more of the criteria when
submitting improvement activities in response to the Annual Call for
Activities. In addition to the criteria listed in the proposed rule for
nominating new improvement activities for the Annual Call for
Activities policy, we are modifying and expanding the proposed criteria
list to also include: (1) Improvement activities that focus on
meaningful actions from the person and family's point of view, and (2)
improvement activities that support the patient's family or personal
caregiver. In addition, we are finalizing to: (1) Accept submissions
for prospective improvement activities at any time during the
performance period for the Annual Call for Activities and create an
Improvement Activities Under Review (IAUR) list; (2) only consider
prospective activities submitted by March 1 for inclusion in the
Improvement Activities Inventory for the performance periods occurring
in the following calendar year; and (3) add new improvement activities
and subcategories through notice-and-comment rulemaking in future years
of the Quality Payment Program.
Additionally, we are finalizing that for purposes of the 2021 MIPS
payment year, the performance period for the improvement activities
performance category is a minimum of a continuous 90-day period within
CY 2019, up to and including the full CY 2019 (January 1, 2019 through
December 31, 2019).
In this final rule with comment period, we are also expanding our
definition of how we will recognize an individual MIPS eligible
clinician or group as being a certified patient-centered medical home
or comparable specialty practice. We are finalizing our proposal, with
clarification, that at least 50 percent of the practice sites within
the TIN must be recognized as a patient-centered medical home or
comparable specialty practice to receive full credit as a certified or
recognized patient-centered medical home or comparable specialty
practice for the 2020 MIPS payment year and future years. We are
clarifying that a practice site is the physical location where
services are delivered. We proposed in section II.C.6.e.(3)(b) of the
proposed rule (82 FR 30054) that eligible clinicians in practices that
have been randomized to the control group in the CPC+ model would also
receive full credit as a Medical Home Model. We are not finalizing this
proposal, however, because CMMI has not randomized any practices into a
control group in CPC+ Round 2.
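For illustration only, the 50 percent practice-site test described
above reduces to a simple proportion; the sketch below uses
hypothetical counts of recognized sites within a TIN.

# Illustrative test of the finalized patient-centered medical home policy:
# at least 50 percent of the practice sites within the TIN must be recognized
# to receive full improvement activities credit.

def tin_gets_full_pcmh_credit(recognized_sites: int, total_sites: int) -> bool:
    """Return True if at least half of the TIN's practice sites are recognized
    as a patient-centered medical home or comparable specialty practice."""
    if total_sites == 0:
        return False
    return recognized_sites / total_sites >= 0.5

print(tin_gets_full_pcmh_credit(2, 4))  # True: exactly 50 percent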
We are also finalizing changes to the study, including modifying
the name to the ``CMS Study on Burdens Associated with Reporting
Quality Measures,'' increasing the sample size for 2018, and updating
requirements.
Furthermore, in recognition of improvement activities as supporting
the central mission of a unified Quality Payment Program, we are
finalizing in section II.C.6.e.(3)(a) of this final rule with comment
period to continue to designate activities in the Improvement
Activities Inventory that will also qualify for the advancing care
information bonus score. This is consistent with our desire to
recognize that CEHRT is often deployed to improve care in ways that our
programs should recognize.
c. Advancing Care Information
For the Quality Payment Program Year 2, the advancing care
information performance category is 25 percent of the final score.
However, if a MIPS eligible clinician is participating in a MIPS APM
the advancing care information performance category may be 30 percent
or 75 percent of the final score depending on the availability of APM
quality data for reporting. We are finalizing that for purposes of the
2021 MIPS payment year, the performance period for the advancing care
information performance category is a minimum of a continuous 90-day
period within CY 2019, up to and including the full CY 2019 (January 1,
2019 through December 31, 2019).
Objectives and measures in the advancing care information
performance category focus on the secure exchange of health information
and the use of CEHRT to support patient engagement and improved
healthcare quality. While we continue to recommend that physicians and
clinicians migrate to the implementation and use of EHR technology
certified to the 2015 Edition so they may take advantage of improved
functionalities, including care coordination and technical advancements
such as application programming interfaces, or APIs, we recognize that
some practices may have challenges in adopting new certified health IT.
Therefore, we are finalizing that MIPS eligible clinicians may continue
to use EHR technology certified to the 2014 Edition for the performance
period in CY 2018. Clinicians may also choose to use the 2015 Edition
CEHRT or a combination of the two. Clinicians will earn a bonus for
using only 2015 Edition CEHRT in 2018.
For the 2018 performance period, MIPS eligible clinicians will have
the option to report the Advancing Care Information Transition
Objectives and Measures using 2014 Edition CEHRT, 2015 Edition CEHRT,
or a combination of 2014 and 2015 Edition CEHRT, as long as the EHR
technology they possess can support the objectives and measures to
which they plan to attest. Similarly, MIPS eligible clinicians will
have the option to attest to the Advancing Care Information Objectives
and Measures using 2015 Edition CEHRT or a combination of 2014 and 2015
Edition CEHRT, as long as their EHR technology can support the
objectives and measures to which they plan to attest.
We are finalizing exclusions for the e-Prescribing and Health
Information Exchange Objectives beginning with the 2017 performance
period. We are also finalizing that eligible clinicians can earn 10
percentage points in their performance score for reporting to any
single public health agency or clinical data registry to meet any of
the measures associated with the Public Health and Clinical Data
Registry Reporting objective (or any of the measures associated with
the Public Health Reporting Objective of the 2018 Advancing Care
Information Transition Objectives and Measures, for clinicians who
choose to report on those measures), and we will award an additional 5
percentage point bonus for reporting to more than one. We are
implementing several provisions of the 21st Century Cures Act (Pub. L.
114-255, enacted on December 13, 2016) pertaining to hospital-based
MIPS eligible clinicians, ambulatory surgical center-based MIPS
eligible clinicians, MIPS eligible clinicians using decertified EHR
technology, and significant hardship exceptions under the MIPS. We are
also finalizing a significant hardship exception for MIPS eligible
clinicians in small practices. For clinicians requesting a reweighting
of the advancing care information performance category, we are changing
the deadline for submission of this application to December 31 of the
performance period. Lastly, we are finalizing additional improvement
activities that are eligible for a 10 percent bonus under the advancing
care information performance category if they are completed using
CEHRT.
d. Cost
We previously finalized that the cost performance category would
comprise zero percent of the final score for the transition year and 10
percent of the final score for the 2020 MIPS payment year (81 FR
77165). For the 2020 MIPS payment year, we proposed to change the
weight of the cost performance category from 10 percent to zero percent
(82 FR 30047). However, we are finalizing a 10 percent weight for the
cost performance category in the final score for the 2020 MIPS payment year in
order to ease the transition to a 30 percent weight for the cost
performance category in the 2021 MIPS payment year. For the 2018 MIPS
performance period, we are adopting the total per capita costs for all
attributed beneficiaries measure and the Medicare Spending per
Beneficiary (MSPB) measure that were adopted for the 2017 MIPS
performance period, and we will not use the 10 episode-based measures
that were adopted for the 2017 MIPS performance period. Although data
on the episode-based measures has been made available to clinicians in
the past, we are in the process of developing new episode-based
measures with significant clinician input and believe it would be more
prudent to introduce these new measures over time. We will continue to
offer performance feedback on episode-based measures prior to potential
inclusion of these measures in MIPS to increase clinician familiarity
with the concept as well as specific episode-based measures.
Specifically, we are providing feedback on these new episode-based cost
measures for informational purposes only. We intend to provide
performance feedback on the MSPB and total per capita cost measures by
July 1, 2018, consistent with section 1848(q)(12) of the Act. In
addition, we intend to offer feedback on newly developed episode-based
cost measures in 2018 as well.
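Taken together with the previously finalized weights for the 2020 MIPS
payment year (quality 50 percent, improvement activities 15 percent,
and advancing care information 25 percent), the 10 percent cost weight
completes the final score weighting. For illustration only, the sketch
below shows the weighted sum with hypothetical category scores and does
not model the reweighting rules that apply when a category cannot be
scored.

# Illustrative final score weighting for the 2020 MIPS payment year, using the
# weights finalized or retained in this rule: quality 50 percent, cost
# 10 percent, improvement activities 15 percent, and advancing care
# information 25 percent. Category scores are hypothetical 0-100 values.

WEIGHTS_2020_PAYMENT_YEAR = {
    "quality": 0.50,
    "cost": 0.10,
    "improvement_activities": 0.15,
    "advancing_care_information": 0.25,
}

def final_score(category_scores: dict) -> float:
    """Weighted sum of the four MIPS performance category scores."""
    return sum(WEIGHTS_2020_PAYMENT_YEAR[category] * score
               for category, score in category_scores.items())

example = {"quality": 80, "cost": 60, "improvement_activities": 100,
           "advancing_care_information": 90}
print(final_score(example))  # 40 + 6 + 15 + 22.5 = 83.5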
e. Submission Mechanisms
We are finalizing additional flexibility for submitting data
through multiple submission mechanisms. For operational reasons, and
to allow additional time to communicate how this policy intersects with
our measure applicability policies, this policy will not be implemented
for the 2018 performance period but will be implemented instead for the
2019 performance period of the Quality Payment Program. Individual MIPS
eligible clinicians or groups will be able to submit measures and
activities, as available and applicable, via as many mechanisms as
necessary to meet the requirements of the quality, improvement
activities, or advancing care information performance categories for
the 2019 performance period. This option will provide clinicians the
ability to select the measures most meaningful to them, regardless of
the submission mechanism.
Also, given stakeholder concerns regarding CMS' multiple
submission mechanism policy, we want to clarify that under the
validation process for Year 3, MIPS eligible clinicians who submit via
claims or registry submission only or a combination of claims and
registry submissions would not be required to submit measures through
other mechanisms to meet the quality performance category criteria;
rather, it is an option available to MIPS eligible clinicians which may
increase their quality performance category score. We expect that MIPS
eligible clinicians would choose the submission mechanism that would
give them 6 measures to report. Our intention is to offer multiple
submission mechanisms to increase flexibility for MIPS individual
clinicians and groups. We are not requiring that MIPS individual
clinicians and groups submit via additional submission mechanisms;
however, through this policy the option would be available for those
that have applicable measures and/or activities available to them.
f. Virtual Groups
Virtual groups are a new way to participate in MIPS starting with
the 2018 MIPS performance period. For the 2018 performance period,
clinicians can participate in MIPS as an individual, as a group, as an
APM Entity in a MIPS APM, or as a virtual group.
For the implementation of virtual groups as a participation option
under MIPS, we are establishing the following policies. We are defining
a virtual group as a combination of two or more TINs assigned to one or
more solo practitioners or one or more groups consisting of 10 or fewer
eligible clinicians that elect to form a virtual group for a
performance period for a year. In order for solo practitioners or such
groups to be eligible to join a virtual group, the solo practitioners
and the groups would need to exceed the low-volume threshold. A solo
practitioner or a group that does not exceed the low-volume threshold
could not participate in a virtual group, and it is not permissible
under the statute to apply the low-volume threshold at the virtual
group level. Also, we are finalizing our virtual group policies to
clearly delineate which group-related policies also apply to virtual
groups and which policies apply only to virtual groups.
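For illustration only, the virtual group eligibility conditions
described above can be restated as a small check; the inputs are
hypothetical and assume the low-volume threshold determination has
already been made for the TIN.

# Illustrative eligibility test for joining a virtual group: the TIN must be a
# solo practitioner or a group of 10 or fewer eligible clinicians, and it must
# exceed the low-volume threshold on its own, since the threshold cannot be
# applied at the virtual group level.

def may_join_virtual_group(clinician_count: int,
                           exceeds_low_volume_threshold: bool) -> bool:
    """Return True if a solo practitioner or group of 10 or fewer eligible
    clinicians that exceeds the low-volume threshold may elect to join a
    virtual group."""
    return clinician_count <= 10 and exceeds_low_volume_threshold

print(may_join_virtual_group(8, True))   # True
print(may_join_virtual_group(12, True))  # False: more than 10 clinicians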
Virtual groups are required to make an election to participate in
MIPS as a virtual group prior to the start of an applicable performance
period. We are also finalizing a two-stage virtual group election
process for the applicable 2018 and 2019 performance periods. The first
stage is the optional eligibility stage, but for practices that do not
choose to participate in stage 1 of the election process, we will make
an eligibility determination during stage 2 of the election process.
The second stage is the virtual group formation stage. We are also
finalizing that virtual groups must have a formal written agreement
among each party of a virtual group. The election deadline will be
December 31.
To provide support and reduce burden, we intend to make technical
assistance (TA) available, to the extent feasible and appropriate, to
support clinicians who choose to come together as a virtual group for
the first 2 years of virtual group implementation applicable to the
2018 and 2019 performance years. Clinicians already receiving technical
assistance may continue to do so for virtual groups support; otherwise,
the Quality Payment Program Service Center is available to assist and connect
virtual groups with a technical assistance representative. For year 2,
we believe that we have created an election process that is simple and
straightforward. For Quality Payment Program Year 3, we intend to
provide an electronic election process, if technically feasible.
Virtual groups are required to meet the requirements for each
performance category and are responsible for aggregating data for their
measures and activities across the virtual group, for example, across
their TINs. In future years, we intend to examine how we define
``group'' under MIPS with respect to flexibility in composition and
reporting.
g. MIPS APMs
MIPS eligible clinicians who participate in MIPS APMs are scored
using the APM scoring standard instead of the generally applicable MIPS
scoring standard. For the 2018 performance period, we are finalizing
modifications to the quality performance category reporting
requirements and scoring for MIPS eligible clinicians in MIPS APMs, and
other modifications to the APM scoring standard. For purposes of the
APM scoring standard, we are adding a fourth snapshot date that would
be used only to identify eligible clinicians in APM Entity groups
participating in those MIPS APMs that require full TIN participation.
This snapshot date will not be used to make QP determinations. Along
with the other APM Entity groups, these APM Entity groups would be used
for the purposes of reporting and scoring under the APM scoring
standard described in the CY 2017 Quality Payment Program final rule
(81 FR 77246).
h. Facility-Based Measurement
We solicited comments on implementing facility-based measurement
for the 2018 MIPS performance period and future performance periods to
add more flexibility for clinicians to be assessed in the context of
the facilities at which they work. We described facility-based measures
policies related to applicable measures, applicability to facility-
based measurement, group participation, and facility attribution. For
clinicians whose primary professional responsibilities are in a
healthcare facility, we presented a method to assess performance in the
quality and cost performance categories of MIPS based on the
performance of that facility in another value-based purchasing program.
After much consideration, we are finalizing our proposal to allow
clinicians to use facility-based measurement in year 3 (2019) of the
Quality Payment Program. We will use 2018 to help clinicians better
understand the opportunity and to ensure operational readiness to offer
facility-based measurement.
i. Scoring
In the transition year of the Quality Payment Program, we finalized
a unified scoring system to determine a final score across the 4
performance categories (81 FR 77273 through 77276). For the 2018 MIPS
performance period, we will build on the scoring methodology we
finalized for the transition year, focusing on encouraging
[[Page 53576]]
MIPS eligible clinicians to meet data completeness requirements.
For quality performance category scoring, we are finalizing the
extension of some of the transition year policies to the 2018 MIPS
performance period and also finalizing several modifications to
existing policy. Quality measures that can be scored against a
benchmark, meet data completeness standards, and meet the minimum
case size requirements will continue to receive between 3 and 10 points
as measure achievement points. Measures that do not have a benchmark or
do not meet the case minimum requirement will continue to receive 3 points.
For quality data submitted via EHR, QCDR, or qualified registry, we
are lowering the number of points available for measures that do not
meet the data completeness criteria to 1 point, except for a measure
submitted by a small practice, to which we will continue to assign 3
points.
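For illustration only, the measure-level point assignments described above can be sketched in Python as follows; this sketch is not part of the regulatory text, and the function name, its inputs, and the benchmark decile-to-points mapping are hypothetical assumptions.

    # Hypothetical sketch of the measure-level achievement point rules
    # summarized above; not a description of any CMS scoring system.
    def quality_measure_points(has_benchmark, meets_case_minimum,
                               meets_data_completeness, benchmark_points,
                               small_practice):
        """Return achievement points for a single quality measure."""
        if not meets_data_completeness:
            # Measures failing data completeness receive 1 point, except
            # measures submitted by a small practice, which receive 3 points.
            return 3.0 if small_practice else 1.0
        if not has_benchmark or not meets_case_minimum:
            # No benchmark, or case minimum not met: 3 points.
            return 3.0
        # Otherwise the measure is scored against its benchmark,
        # receiving between 3 and 10 achievement points.
        return min(max(benchmark_points, 3.0), 10.0)

    # Example: a benchmarked measure assumed to earn 6.4 points from its decile.
    print(quality_measure_points(True, True, True, 6.4, small_practice=False))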
We are finalizing a timeline to identify and propose to remove
topped out quality measures through future rulemaking. We are
evaluating additional considerations needed to maintain measures for
important aspects of care, such as patient safety and high reliability,
and will address this in future rulemaking. We are finalizing a policy
of applying a scoring cap to identified topped out measures with
measure benchmarks that have been topped out for at least 2 consecutive
years; however, based on feedback, we will award up to 7 points for
topped out measures rather than the 6 points originally proposed. We
are finalizing the special scoring policy for the 6 measures identified
for the 2018 performance period with a 7-point scoring cap.
We are also excluding CMS Web Interface measures from topped out
scoring, but we will continue to monitor differences between CMS Web
Interface and other submission options. We intend to address CAHPS
through future rulemaking.
Beginning with the 2018 MIPS performance period, we are finalizing
that improvement will be scored at the performance category level for the
quality performance category, but we will monitor this approach and
revisit it as needed through future rulemaking. We are finalizing that
improvement will be scored at the measure level for the cost
performance category.
For the 2018 MIPS performance period, the quality, improvement
activities, cost and advancing care information performance category
scores will be given weight in the final score, or be reweighted if a
performance category score is not available.
We are also finalizing small practice and complex patient bonuses
only for the 2020 MIPS payment year. The small practice bonus of 5
points will be applied to the final score for MIPS eligible clinicians
in groups, virtual groups, or APM Entities that have 15 or fewer
clinicians and that submit data on at least one performance category in
the 2018 performance period. We will also apply a complex patient bonus
capped at 5 points using the dual eligibility ratio and average HCC
risk score. We increased the complex patient bonus from the 3 points
proposed, in part to align with the small practice bonus. The final
score will be compared against the MIPS performance threshold of 15
points for the 2020 MIPS payment year, a modest increase from 3 points
in the transition year. A 15-point final score equal to the performance
threshold can be achieved via multiple pathways and continues the
gradual transition into MIPS. The additional performance threshold for
exceptional performance will remain at 70 points, the same as for the
transition year.
We are finalizing a policy of applying the MIPS payment adjustment
to the Medicare paid amount.
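For illustration only, the interaction of the small practice bonus, the complex patient bonus, and the 15-point performance threshold described above can be sketched as follows; the Python function, its inputs, and the assumption that the weighted category points are already on a 0-100 scale are hypothetical and are not part of the regulatory text.

    # Hedged sketch of how the bonuses and thresholds described above combine.
    def final_score_with_bonuses(weighted_category_points, small_practice,
                                 complex_patient_bonus):
        """Return a final score on an assumed 0-100 scale for the 2020 payment year."""
        score = weighted_category_points
        if small_practice:
            score += 5.0                          # small practice bonus of 5 points
        score += min(complex_patient_bonus, 5.0)  # complex patient bonus capped at 5 points
        return min(score, 100.0)

    PERFORMANCE_THRESHOLD = 15.0              # 2020 MIPS payment year
    EXCEPTIONAL_PERFORMANCE_THRESHOLD = 70.0  # additional performance threshold

    score = final_score_with_bonuses(12.0, small_practice=True,
                                     complex_patient_bonus=2.3)
    print(score, score >= PERFORMANCE_THRESHOLD)  # 19.3 True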
j. Performance Feedback
We proposed and are finalizing the policy to provide Quality
Payment Program performance feedback to eligible clinicians and groups.
Initially, we will provide performance feedback on an annual basis. In
future years, we aim to provide performance feedback on a more frequent
basis, which is in line with clinician requests for timely, actionable
feedback that they can use to improve care.
k. Third Party Intermediaries
In the CY 2017 Quality Payment Program final rule (81 FR 77362), we
finalized that qualified registries, QCDRs, health IT vendors, and CMS-
approved survey vendors will have the ability to act as intermediaries
on behalf of individual MIPS eligible clinicians and groups for
submission of data to CMS across the quality, improvement activities,
and advancing care information performance categories.
Regarding QCDRs and qualified registries, we are finalizing our
proposal to eliminate the self-nomination submission method of email
and require that QCDRs and qualified registries submit their self-
nomination applications via a web-based tool for future program years
beginning with the 2018 performance period. Beginning with the 2019
performance period, we are finalizing the use of a simplified self-
nomination process for previously approved QCDRs and qualified
registries in good standing.
In addition, regarding the information a QCDR specifically must provide
to us at the time of self-nomination, we are making a number of
clarifications, finalizing that the term ``QCDR measures'' will replace
the existing term ``non-MIPS measures'', and seeking public input on
requiring full development and testing of QCDR measures by the time of submission.
We have also made a few clarifications to existing criteria as they
pertain to qualified registries.
We are not making any changes to the requirements for health IT
vendors that obtain data from CEHRT. Regarding CMS-approved survey vendors, we
are finalizing that, for Quality Payment Program Year 2 and for
future years, the vendor application deadline will be January 31st of
the applicable performance year or a later date specified by CMS.
Lastly, based on comments we received on the 10-year record retention
period and our interest in reducing financial and time burdens under
this program and maintaining consistent policies across the program, we are
modifying our proposal for third parties from a 10-year to a 6-year
retention period. Therefore, we are finalizing that entities must
retain all data submitted to us for purposes of MIPS for 6 years from
the end of the MIPS performance period.
l. Public Reporting
As discussed in section II.C.11. of this final rule with comment
period, we proposed and are finalizing public reporting of certain
eligible clinician and group Quality Payment Program information,
including MIPS and APM data in an easily understandable format as
required under the MACRA.
m. Eligibility and Exclusion Provisions of the MIPS Program
We are modifying the definition of a non-patient facing MIPS
eligible clinician to apply to virtual groups. In addition, we are
finalizing our proposal to specify that groups considered to be non-
patient facing (more than 75 percent of the NPIs billing under the
group's TIN meet the definition of a non-patient facing individual MIPS
eligible clinician) during the non-patient facing determination period
would automatically have their advancing care information performance
category reweighted to zero.
[[Page 53577]]
Additionally, we are finalizing our proposal to increase the low-
volume threshold to less than or equal to $90,000 in Medicare Part B
allowed charges or 200 or fewer Part-B enrolled Medicare beneficiaries
to further decrease burden on MIPS eligible clinicians that practice in
rural areas or are part of a small practice or are solo practitioners.
We are not finalizing our proposal to provide clinicians the ability to
opt-in to MIPS if they meet or exceed one, but not all, of the low-
volume threshold determinations, including as defined by dollar amount,
beneficiary count or, if established, items and services. We intend to
revisit this policy in future rulemaking and are seeking comment on
methods to implement this policy in a low burden manner.
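For illustration only, the low-volume threshold exclusion described above can be expressed as the following Python sketch; the function and variable names are hypothetical, and the sketch does not reflect how determination-period data are actually assembled.

    # A clinician or group is excluded from MIPS unless it exceeds BOTH limits.
    def exceeds_low_volume_threshold(part_b_allowed_charges, part_b_beneficiaries):
        return part_b_allowed_charges > 90_000 and part_b_beneficiaries > 200

    # $75,000 in allowed charges and 250 beneficiaries: the threshold is not
    # exceeded, so the clinician would be excluded from MIPS.
    print(exceeds_low_volume_threshold(75_000, 250))  # False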
E. Payment Adjustments
For the 2020 payment year, based on Advanced APM participation in
the 2018 performance period, we estimated that approximately 185,000 to
250,000 clinicians will become QPs, and therefore, be excluded from the
MIPS reporting requirements and payment adjustment, and qualify for a
lump sum APM incentive payment equal to 5 percent of their estimated
aggregate payment amounts for covered professional services in the
preceding year. We estimate that the total lump sum APM incentive
payments will be between approximately $675 million and $900 million
for the 2020 Quality Payment Program payment year. This expected growth
in QPs between the first and second year of the program is due in part
to the reopening of CPC+ and the Next Generation ACO Model for 2018, and to
the Medicare ACO Track 1+ Model, which is projected to have a large number of
participants, with a large majority reaching QP status.
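For illustration only, the lump sum APM incentive payment described above equals 5 percent of a QP's estimated aggregate payment amounts for covered professional services in the preceding year; in the following sketch, the dollar figure is a hypothetical example and not an estimate from this rule.

    APM_INCENTIVE_RATE = 0.05  # 5 percent lump sum APM incentive payment

    estimated_aggregate_payments = 400_000.00  # hypothetical preceding-year amount
    incentive_payment = APM_INCENTIVE_RATE * estimated_aggregate_payments
    print(f"${incentive_payment:,.2f}")        # $20,000.00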
Under the policies in this final rule with comment period, and for
purposes of the Regulatory Impact Analysis, we estimate that
approximately 622,000 eligible clinicians will be subject to MIPS
reporting requirements and payment adjustments in the 2018 MIPS
performance period. However, this number may vary depending on the
number of eligible clinicians excluded from MIPS based on their status
as QPs or Partial QPs. After restricting the population to eligible
clinician types who are not newly enrolled, we expect the increase in
the low-volume threshold to exclude 540,000 clinicians who
do not exceed that threshold. In the 2020 MIPS payment year,
MIPS payment adjustments will be applied based on MIPS eligible
clinicians' performance on specified measures and activities within
four integrated performance categories.
Assuming that 90 percent of MIPS eligible clinicians of all
practice sizes participate in MIPS, we estimate that MIPS payment
adjustments will be approximately equally distributed between negative
MIPS payment adjustments of $118 million and positive MIPS payment
adjustments of $118 million to MIPS eligible clinicians, as required by
the statute to ensure budget neutrality. Positive MIPS payment
adjustments will also include up to an additional $500 million for
exceptional performance to MIPS eligible clinicians whose final score
meets or exceeds the additional performance threshold of 70 points.
These MIPS payment adjustments are expected to drive quality
improvement in the provision of MIPS eligible clinicians' care to
Medicare beneficiaries and to all patients in the health care system.
However, the distribution will change based on the final population of
MIPS eligible clinicians for CY 2020 and the distribution of scores
under the program. We believe that starting with these modest initial
MIPS payment adjustments is in the long-term best interest of
maximizing participation and starting the Quality Payment Program off
on the right foot, even if it limits the magnitude of MIPS positive
adjustments during the 2018 MIPS performance period. The increased
availability of Advanced APM opportunities, including through Medical
Home models, also provides earlier avenues to earn APM incentive
payments for those eligible clinicians who choose to participate.
F. Benefits and Costs of the Final Rule With Comment Period
We quantify several costs associated with this rule. We estimate
that this final rule with comment period will result in approximately
$694 million in collection of information-related burden. We estimate
that the incremental collection of information-related burden
associated with this final rule with comment period is a reduction of
approximately $13.9 million relative to the estimated burden of
continuing the policies of the CY 2017 Quality Payment Program final rule,
which is $708 million. We also estimate regulatory review costs of $2.2
million for this final rule with comment period. We estimate that
federal expenditures will include $118 million in revenue neutral
payment adjustments and $500 million for exceptional performance
payments. Additional federal expenditures include approximately $675-
$900 million in APM incentive payments to QPs.
G. Automatic Extreme and Uncontrollable Circumstance Policy Interim
Final Rule With Comment Period
In order to account for Hurricanes Harvey, Irma, and Maria and
other disasters that have occurred or might occur during the 2017 MIPS
performance period, we are establishing in an interim final rule with
comment period an automatic extreme and uncontrollable circumstance
policy for the quality, improvement activities, and advancing care
information performance categories for the 2017 MIPS performance
period. We believe the automatic extreme and uncontrollable
circumstance policy will reduce clinician burden during a catastrophic
time and will also align with Medicare policies in other programs such
as the Hospital IQR Program. Under this policy, we will apply the
extreme and uncontrollable circumstance policies for the MIPS
performance categories to individual MIPS eligible clinicians for the
2017 MIPS performance period without requiring a MIPS eligible
clinician to submit an application when we determine a triggering
event, such as a hurricane, has occurred and the clinician is in an
affected area. We will automatically weight the quality, improvement
activities, and advancing care information performance categories at
zero percent of the final score, resulting in a final score equal to
the performance threshold, unless the MIPS eligible clinician submits
MIPS data, which we would then score on a performance-category-by-
performance-category basis, like all other MIPS eligible clinicians. We
are not making any changes to the APM scoring standard policies that
apply in 2017 for participants in MIPS APMs. We are waiving notice and
comment and adopting this policy on an interim final basis due to the
urgency of providing relief for MIPS eligible clinicians impacted by
recent natural disasters during the 2017 MIPS performance period.
H. Stakeholder Input
In developing this final rule with comment period, we sought
feedback from stakeholders and the public throughout the process,
including in the CY 2018 Quality Payment Program proposed rule, CY 2017
Quality Payment Program final rule with comment period, listening
sessions, webinars, and other listening venues. We received a high
degree of interest
[[Page 53578]]
from a broad spectrum of stakeholders. We thank our many commenters and
acknowledge their valued input throughout the rulemaking process. We
summarize and respond to comments on our proposals in the appropriate
sections of this final rule with comment period, though we are not able
to address all comments or all issues that all commenters raised due to
the volume of comments and feedback. Specifically, due to the volume of
comments, we have not summarized feedback from commenters on items on
which we solicited feedback for future rulemaking purposes. However, in
general, commenters continue to be supportive as we continue
implementation of the Quality Payment Program and maintain optimism as
we move from FFS Medicare payment towards a payment structure focused
on the quality and value of care. Public support for our proposed
approach and policies in the proposed rule, many of which were finalized,
focused on the potential for improving the quality of care delivered to
beneficiaries and increasing value to the public, while rewarding
eligible clinicians for their efforts. Additionally, we note that we
received a number of comments from stakeholders regarding the
application of MIPS to certain Part B drugs. Additional guidance on the
applicability of MIPS to Part B drugs can be found on our Web site at
qpp.cms.gov.
We thank stakeholders again for their responses throughout our
process, in various venues, including comments on the Request for
Information Regarding Implementation of the Merit-based Incentive
Payment System, Promotion of Alternative Payment Models, and Incentive
Payments for Participation in Eligible Alternative Payment Models
(herein referred to as the MIPS and APMs RFI) (80 FR 59102 through
59113) and the CY 2017 Quality Payment Program final rule (81 FR 77008
through 77831). We intend to continue open communication with
stakeholders, including consultation with tribes and tribal officials,
on an ongoing basis as we develop the Quality Payment Program in future
years.
We will continue to offer help so clinicians can be successful in
the program and make informed decisions about how to participate. You
can find out more about the help that's available at qpp.cms.gov, which
has many free and customized resources, or by calling 1-866-288-8292.
As with the policy decisions, stakeholder feedback is essential to the
development of educational resources as well. We look forward to your
feedback on existing resources and on the need for new ones.
II. Provisions of the Proposed Regulations, and Analysis of and
Responses to Comments
The following is a summary of the proposed provisions in the
``Medicare Program; CY 2018 Updates to the Quality Payment Program''
proposed rule (82 FR 30010 through 30500) (hereinafter referred to as the ``CY
2018 Quality Payment Program proposed rule''). In this section, we also
provide summaries of the public comments and our responses.
A. Introduction
The Quality Payment Program, authorized by the Medicare Access and
CHIP Reauthorization Act of 2015 (MACRA), is a new approach for
reforming care across the health care delivery system for eligible
clinicians. Under the Quality Payment Program, eligible clinicians can
participate via one of two pathways: Advanced Alternative Payment
Models (APMs); or the Merit-based Incentive Payment System (MIPS). We
began implementing the Quality Payment Program through rulemaking for
calendar year (CY) 2017. This rule provides updates for the second and
future years of the Quality Payment Program.
B. Definitions
At Sec. 414.1305, subpart O, we define the following terms:
Ambulatory Surgical Center (ASC)-based MIPS eligible
clinician.
CMS Multi-Payer Model.
Facility-based MIPS eligible clinician.
Full TIN APM.
Improvement Scoring.
Other MIPS APM.
Solo practitioner.
Virtual group.
We revise the definitions of the following terms:
Affiliated practitioner.
APM Entity.
Attributed beneficiary.
Certified Electronic Health Record Technology (CEHRT).
Final Score.
Hospital-based MIPS eligible clinician.
Low-volume threshold.
Medicaid APM.
Non-patient facing MIPS eligible clinician.
Other Payer Advanced APM.
Rural areas.
Small practice.
We remove the following terms:
Advanced APM Entity.
These terms and definitions are discussed in detail in relevant
sections of this final rule with comment period.
C. MIPS Program Details
1. MIPS Eligible Clinicians
a. Definition of a MIPS Eligible Clinician
In the CY 2017 Quality Payment Program final rule (81 FR 77040
through 77041), we defined at Sec. 414.1305 a MIPS eligible clinician,
as identified by a unique billing TIN and NPI combination used to
assess performance, as any of the following (excluding those identified
at Sec. 414.1310(b)): A physician (as defined in section 1861(r) of
the Act), a physician assistant, nurse practitioner, and clinical nurse
specialist (as such terms are defined in section 1861(aa)(5) of the
Act), a certified registered nurse anesthetist (as defined in section
1861(bb)(2) of the Act), and a group that includes such clinicians. We
established at Sec. 414.1310(b) and (c) that the following are
excluded from this definition per the statutory exclusions defined in
section 1848(q)(1)(C)(ii) and (v) of the Act: (1) QPs; (2) Partial QPs
who choose not to report on applicable measures and activities that are
required to be reported under MIPS for any given performance period in
a year; (3) low-volume threshold eligible clinicians; and (4) new
Medicare-enrolled eligible clinicians. In accordance with sections
1848(q)(1)(A) and (q)(1)(C)(vi) of the Act, we established at Sec.
414.1310(b)(2) that eligible clinicians (as defined at Sec. 414.1305)
who are not MIPS eligible clinicians have the option to voluntarily
report measures and activities for MIPS. Additionally, we established
at Sec. 414.1310(d) that in no case will a MIPS payment adjustment
apply to the items and services furnished during a year by eligible
clinicians who are not MIPS eligible clinicians, as described in Sec.
414.1310(b) and (c), including those who voluntarily report on
applicable measures and activities specified under MIPS.
In the CY 2017 Quality Payment Program final rule (81 FR 77340), we
noted that the MIPS payment adjustment applies only to the amount
otherwise paid under Part B with respect to items and services
furnished by a MIPS eligible clinician during a year, in which we will
apply the MIPS payment adjustment at the TIN/NPI level. We have
received requests for additional clarifications on which specific Part
B services are subject to the MIPS payment adjustment, as well as which
Part B services are included for eligibility determinations. We note
that
[[Page 53579]]
when Part B items or services are furnished by suppliers that are also
MIPS eligible clinicians, there may be circumstances in which it is not
operationally feasible for us to attribute those items or services to a
MIPS eligible clinician at an NPI level in order to include them for
purposes of applying the MIPS payment adjustment or making eligibility
determinations.
To further clarify, there are circumstances that involve Part B
prescription drugs and durable medical equipment (DME) where the
supplier may also be a MIPS eligible clinician. In the case of a MIPS
eligible clinician who furnishes a Part B covered item or service, such
as prescribing Part B drugs that are dispensed, administered, and
billed by a supplier that is a MIPS eligible clinician, or ordering DME
that is administered and billed by a supplier that is a MIPS eligible
clinician, it is not operationally feasible for us at this time to
associate those billed allowed charges with a MIPS eligible clinician
at an NPI level in order to include them for purposes of applying the
MIPS payment adjustment or making eligibility determinations. To the
extent that it is not operationally feasible for us to do so, such
items or services would not be included for purposes of applying the
MIPS payment adjustment or making eligibility determinations. However,
for those billed Medicare Part B allowed charges that we are able to
associate with a MIPS eligible clinician at an NPI level, such items
and services would be included for purposes of applying the MIPS
payment adjustment or making eligibility determinations.
b. Groups
As discussed in the CY 2017 Quality Payment Program final rule (81
FR 77088 through 77831), we indicated that we will assess performance
either for individual MIPS eligible clinicians or for groups. We
defined a group at Sec. 414.1305 as a single Taxpayer Identification
Number (TIN) with two or more eligible clinicians (including at least
one MIPS eligible clinician), as identified by their individual NPI,
who have reassigned their Medicare billing rights to the TIN. We
recognize that MIPS eligible clinicians participating in MIPS may be
part of a TIN that has one portion of its NPIs participating in MIPS
according to the generally applicable scoring criteria while the
remaining portion of its NPIs is participating in a MIPS APM or an
Advanced APM according to the MIPS APM scoring standard. In the CY 2017
Quality Payment Program final rule (81 FR 77058), we noted that except
for groups containing APM participants, we are not permitting groups to
``split'' TINs if they choose to participate in MIPS as a group. Thus,
we would like to clarify that we consider a group to be either an
entire single TIN or a portion of a TIN that: (1) Is participating in
MIPS according to the generally applicable scoring criteria while the
remaining portion of the TIN is participating in a MIPS APM or an
Advanced APM according to the MIPS APM scoring standard; and (2)
chooses to participate in MIPS at the group level. We also defined an
APM Entity group at Sec. 414.1305 as a group of eligible clinicians
participating in an APM Entity, as identified by a combination of the
APM identifier, APM Entity identifier, TIN, and NPI for each
participating eligible clinician.
c. Small Practices
In the CY 2017 Quality Payment Program final rule (81 FR 77188), we
defined the term small practices at Sec. 414.1305 as practices
consisting of 15 or fewer clinicians and solo practitioners. However,
it has come to our attention that there is inconsistency between the
proposed definition of a solo practitioner discussed in section
II.C.4.b. of this final rule with comment period and the established
definition of a small practice. Therefore, to resolve this
inconsistency and ensure greater consistency with established MIPS
terminology, we are modifying the definition of a small practice at
Sec. 414.1305 to mean a practice consisting of 15 or fewer eligible
clinicians. This modification is not intended to substantively change
the definition of a small practice. In section II.C.4.d. of this final
rule with comment period, we discuss how small practice status would
apply to virtual groups. Also, in the CY 2017 Quality Payment Program final rule,
we noted that we would not make an eligibility determination regarding
the size of small practices, but indicated that small practices would
attest to the size of their group practice (81 FR 77057). However, we
have since realized that our system needs to account for small practice
size in advance of a performance period for operational purposes
relating to assessing and scoring the improvement activities
performance category, determining hardship exceptions for small
practices, calculating the small practice bonus for the final score,
and identifying small practices eligible for technical assistance. As a
result, we believe it is critical to modify the way in which small
practice size would be determined. To make eligibility determinations
regarding the size of small practices for performance periods occurring
in 2018 and future years, we proposed that we would determine the size
of small practices as described in this section of the final rule with
comment period (82 FR 30020). As noted in the CY 2017 Quality Payment
Program final rule, the size of a group (including a small practice)
would be determined before exclusions are applied (81 FR 77057). We
note that group size determinations are based on the number of NPIs
associated with a TIN, which would include eligible clinicians (NPIs)
who may be excluded from MIPS participation and do not meet the
definition of a MIPS eligible clinician.
To make eligibility determinations regarding the size of small
practices for performance periods occurring in 2018 and future years,
we proposed that we would determine the size of small practices by
utilizing claims data (82 FR 30020). For purposes of this section, we
are coining the term ``small practice size determination period'' to
mean a 12-month assessment period, which consists of an analysis of
claims data that spans from the last 4 months of a calendar year 2
years prior to the performance period followed by the first 8 months of
the next calendar year and includes a 30-day claims run out. This would
allow us to inform small practices of their status near the beginning
of the performance period as it pertains to eligibility relating to
technical assistance, applicable improvement activities criteria, the
proposed hardship exception for small practices under the advancing
care information performance category, and the proposed small practice
bonus for the final score.
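For illustration only, the small practice size determination period described above (the last 4 months of the calendar year 2 years prior to the performance period, followed by the first 8 months of the next calendar year, with a 30-day claims run out) can be sketched in Python as follows; the function name and return format are hypothetical and not part of the regulatory text.

    from datetime import date, timedelta

    def small_practice_determination_window(performance_year):
        """Return (start, end, claims_run_out_end) for the 12-month assessment."""
        start = date(performance_year - 2, 9, 1)       # last 4 months of CY - 2
        end = date(performance_year - 1, 8, 31)        # first 8 months of CY - 1
        claims_run_out_end = end + timedelta(days=30)  # 30-day claims run out
        return start, end, claims_run_out_end

    # For the 2018 performance period: September 1, 2016 through August 31, 2017,
    # with claims run out through September 30, 2017.
    print(small_practice_determination_window(2018))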
Thus, for purposes of performance periods occurring in 2018 and the
2020 MIPS payment year, we would identify small practices based on 12
months of data spanning September 1, 2016 through August 31, 2017. We
would not change an eligibility determination regarding the size of a
small practice once the determination is made for a given performance
period and MIPS payment year. We recognize that there may be
circumstances in which the small practice size determinations made do
not reflect the real-time size of such practices. We considered two
options that could address such potential discrepancies. One option
would include an expansion of the proposed small practice size
determination period to 24 months with two 12-month segments of data
analysis (before and during the performance period), in which we would
conduct a second analysis of claims data during the performance period.
Such an expanded
[[Page 53580]]
determination period may better capture the real-time size of small
practices, but determinations made during the performance period
prevent our system from being able to account for the assessment and
scoring of the improvement activities performance category and
identification of small practices eligible for technical assistance
prior to the performance period. Specifically, our system needs to
capture small practice determinations in advance of the performance
period in order for the system to reflect the applicable requirements
for the improvement activities performance category and when a small
practice bonus would be applied. A second option would include an
attestation component, in which a small practice that was not
identified as a small practice during the small practice size
determination period would be able to attest to the size of their group
practice prior to the performance period. However, this second option
would require us to develop several operational improvements, such as a
manual process or system that would provide an attestation mechanism
for small practices, and a verification process to ensure that only
small practices are identified as eligible for technical assistance.
Since individual MIPS eligible clinicians and groups are not required
to register to participate in MIPS (except for groups utilizing the CMS
Web Interface for the Quality Payment Program or administering the
CAHPS for MIPS survey), requiring small practices to attest to the size
of their group practice prior to the performance period could increase
burden on individual MIPS eligible clinicians and groups that are not
already utilizing the CMS Web Interface for the Quality Payment Program
or administering the CAHPS for MIPS survey. We solicited public comment
on the proposal regarding how we would determine small practice size.
The following is a summary of the public comments received on the
``Small Practices'' proposal and our responses:
Comment: Several commenters supported using historical claims data
to make a small practice size determination. One commenter also noted
support for the definition of a small practice using the number of NPIs
associated with a TIN.
Response: We are finalizing that we will utilize a 12-month
assessment period, which consists of an analysis of claims data that
spans from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and includes a 30-day claims run out for the small practice size
determination.
Comment: Several commenters supported the proposal to notify small
practices of their status near the beginning of the performance period
so that practices can plan accordingly.
Response: We are finalizing that we will utilize a 12-month
assessment period, which consists of an analysis of claims data that
spans from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and includes a 30-day claims run out for the small practice size
determination. We anticipate providing MIPS eligible clinicians with
their small practice size determination by Spring 2018, for the
applicable 2018 performance period.
Comment: Several commenters recommended that practices be allowed
to attest to the size of their practice if they are not identified during
the small practice size determination period. Specifically, a few
commenters expressed concern that utilizing claims data will result in
practices learning of their small practice status too close to the
start of the performance period. A few commenters recommended that we
should rely on attestation alone, and expressed concern that claims
data will not provide a reliable, real-time determination of practice
size. Another commenter specifically recommended that practices be
required to attest 180 days before the close of the performance period
so that practices can accurately predict their status. One commenter
recommended that we validate practice size for groups attesting as
small using recent claims data. One commenter recommended utilizing a
claims determination process as well as attestation, and using
whichever method yields a smaller practice size.
Response: Regarding the various commenters that provided different
methods for validating practice size, including: Attesting as small
using recent claims data; utilizing a 180-day attestation period; or
utilizing a claims determination process as well as attestation, we
have considered various approaches and have determined that the most
straightforward approach which provides the lowest burden to MIPS
eligible clinicians is the utilization of claims data. By utilizing
claims data, we can apply the status of a small practice accurately
without requiring clinicians to take a separate action and attest to
being a small practice. Therefore, we are finalizing that we will
utilize a 12-month assessment period, which consists of an analysis of
claims data that spans from the last 4 months of a calendar year 2
years prior to the performance period followed by the first 8 months of
the next calendar year and includes a 30-day claims run out for the
small practice size determination. We anticipate providing MIPS
eligible clinicians with their small practice size determination by
Spring 2018, for the applicable 2018 performance period.
As discussed in the CY 2018 Quality Payment Program proposed rule
(82 FR 30020), there are operational barriers with allowing groups to
attest to their size. Specifically, since individual MIPS eligible
clinicians and groups are not required to register to participate in
MIPS (except for groups utilizing the CMS Web Interface for the Quality
Payment Program or administering the CAHPS for MIPS survey), requiring
small practices to attest to the size of their group practice prior to
the performance period could increase burden on individual MIPS
eligible clinicians and groups. In addition, attestation would require
us to develop several operational improvements, such as a manual
process or system that would provide an attestation mechanism for small
practices, and a verification process to ensure that only small
practices are identified as eligible for technical assistance. We
believe utilizing claims data will support most eligibility
determinations because we consider it a reliable source of how a MIPS
eligible clinician or group interacts with Medicare.
Comment: One commenter expressed concern that using performance
period data or an attestation portal as a second step in the small
practice identification process does not provide practices with
adequate advanced notice of their practice size determination and could
limit their ability to access small practice support services.
Response: We are finalizing that we will utilize a 12-month
assessment period, which consists of an analysis of claims data that
spans from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and includes a 30-day claims run out for the small practice size
determination. This modification of the claims run out period
from 60 days to 30 days allows us to create the determination file from
claims data and communicate determinations more quickly. In addition,
using the 30-day claims run out allows us to inform small practices of
their determination as soon as technically possible, as it pertains to
eligibility relating to technical assistance, applicable improvement
[[Page 53581]]
activities criteria, the proposed hardship exception for small
practices under the advancing care information performance category,
and the proposed small practice bonus for the final score. As a result,
we do not believe clinicians' ability to access small practice support
services will be limited.
Comment: A few commenters recommended that we should not allow
practices to attest that they are small practices. Specifically, one
commenter expressed concern that practices may mistakenly expect to be
identified as small based on their number of MIPS eligible clinicians
and attest incorrectly.
Response: We acknowledge and agree with the commenters' concern. We
have considered various approaches and have determined that the most
straightforward and best representation of small practice size
determination is the utilization of claims data. Therefore, we are
finalizing that we will utilize a 12-month assessment period, which
consists of an analysis of claims data that spans from the last 4
months of a calendar year 2 years prior to the performance period
followed by the first 8 months of the next calendar year and includes a
30-day claims run out for the small practice size determination.
Comment: Several commenters did not support the previously
finalized definition of small practices as practices consisting of 15
or fewer clinicians and solo practitioners. One commenter recommended
that we modify the definition of small practices to include those that
are similar in challenges and structure, but that may include more than
15 clinicians. The commenter noted that several small practices may be
loosely tied together under the same TIN but may function as small
practices without the benefit of shared organizational and
administrative resources. The commenter recommended that we assess the
number of clinicians at a physical practice site to determine small
practice status and ability to join a virtual group. Several commenters
believed that we should define small practices based on the number of
MIPS eligible clinicians, not eligible clinicians. A few commenters
supported defining small practices based on the number of full-time
equivalent employees, arguing that rural practices and practices in HPSAs
use different staffing arrangements to fully staff their practices.
Response: Section 1848(q)(2)(B)(iii) of the Act defines small
practices as consisting of 15 or fewer professionals. We previously
defined small practices at Sec. 414.1305 as practices consisting of 15
or fewer clinicians and solo practitioners in order to include both
MIPS eligible clinicians and eligible clinicians, such as those in
APMs. As discussed above, we are modifying the definition of a small
practice at Sec. 414.1305 to mean a practice consisting of 15 or fewer
eligible clinicians. This modification is not intended to substantively
change the definition of a small practice. In response to the
suggestions that we assess the number of clinicians at a physical
practice site to determine small practice status, or make the small
practice assessment based on the number of full-time equivalent
employees, we acknowledge that some practices may be structured in this
manner; however, we do not currently have a reliable method of making a
determination that does not require a separate action from such
practices, such as attestation or submission of supporting
documentation to verify these statuses. Rather, we believe the approach
of simply counting the NPIs (clinicians) that are associated with a TIN
provides a simple method for all stakeholders to understand.
Final Action: After consideration of the public comments, we are
finalizing that we will utilize a 12-month assessment period, which
consists of an analysis of claims data that spans from the last 4
months of a calendar year 2 years prior to the performance period
followed by the first 8 months of the next calendar year and includes a
30-day claims run out for the small practice size determination. In
addition, as discussed above, we are modifying the definition of a
small practice at Sec. 414.1305 to mean a practice consisting of 15 or
fewer eligible clinicians. This modification is not intended to
substantively change the definition of a small practice. Finally, we
refer readers to section II.C.4.b. of this final rule with comment
period for a discussion of the definition of a solo practitioner.
d. Rural Area and Health Professional Shortage Area Practices
In the CY 2017 Quality Payment Program final rule, we defined rural
areas at Sec. 414.1305 as clinicians in ZIP codes designated as rural,
using the most recent Health Resources and Services Administration
(HRSA) Area Health Resource File data set available; and Health
Professional Shortage Areas (HPSAs) at Sec. 414.1305 as areas
designated under section 332(a)(1)(A) of the Public Health Service Act.
For technical accuracy purposes, we proposed to remove the language
``clinicians in,'' as clinicians are not technically part of a ZIP code,
and to modify the definition of rural areas at Sec. 414.1305 to mean ZIP
codes designated as rural, using the most recent Health Resources and
Services Administration (HRSA) Area Health Resource File data set
available.
We recognize that there are cases in which an individual MIPS
eligible clinician (including a solo practitioner) or a group may have
multiple practice sites associated with its TIN and as a result, it is
critical for us to outline the application of rural area and HPSA
practice designations to such practices. For performance periods
occurring in 2017, we consider an individual MIPS eligible clinician or
a group with at least one practice site under its TIN in a ZIP code
designated as a rural area or HPSA to be a rural area or HPSA practice.
For performance periods occurring in 2018 and future years, we believe
that a higher threshold than one practice within a TIN is necessary to
designate an individual MIPS eligible clinician, a group, or a virtual
group as a rural or HPSA practice. We recognize that the establishment
of a higher threshold starting in 2018 would more appropriately
identify groups and virtual groups with multiple practices under a
group's TIN or TINs that are part of a virtual group as rural or HPSA
practices and ensure that groups and virtual groups are assessed and
scored according to requirements that are applicable and appropriate.
We note that in the CY 2017 Quality Payment Program final rule (81 FR
77048 through 77049), we defined a non-patient facing MIPS eligible
clinician at Sec. 414.1305 as including a group provided that more
than 75 percent of the NPIs billing under the group's TIN meet the
definition of a non-patient facing individual MIPS eligible clinician
during the non-patient facing determination period. We refer readers to
section II.C.1.e. of this final rule with comment period for our policy
to modify the definition of a non-patient facing MIPS eligible
clinician. We believe that using a similar threshold for applying the
rural and HPSA designation to an individual MIPS eligible clinician, a
group, or virtual group with multiple practices under its TIN or TINs
within a virtual group will add consistency for such practices across
MIPS as it pertains to groups and virtual groups obtaining such
statuses. We also believe that establishing a 75 percent threshold
renders an adequate representation of a group or virtual group where a
significant portion of a group or a virtual group is identified as
having such status. Therefore, for performance periods occurring in
2018 and future years, we proposed that an individual
[[Page 53582]]
MIPS eligible clinician, a group, or a virtual group with multiple
practices under its TIN or TINs within a virtual group would be
designated as a rural or HPSA practice if more than 75 percent of NPIs
billing under the individual MIPS eligible clinician or group's TIN or
within a virtual group, as applicable, are designated in a ZIP code as
a rural area or HPSA (82 FR 30020 through 30021).
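For illustration only, the more-than-75-percent test described above can be sketched as follows; the input (a mapping from each NPI billing under the TIN, or within the virtual group, to whether its ZIP code is designated as a rural area or HPSA) is a hypothetical data structure, not a description of CMS systems.

    def is_rural_or_hpsa_practice(npi_designations):
        """True if more than 75 percent of NPIs are in rural or HPSA ZIP codes."""
        if not npi_designations:
            return False
        designated = sum(1 for in_area in npi_designations.values() if in_area)
        return designated / len(npi_designations) > 0.75

    # 4 of 5 NPIs (80 percent) are in rural ZIP codes, so the practice would be
    # designated as a rural practice; exactly 75 percent would not qualify.
    print(is_rural_or_hpsa_practice(
        {"NPI1": True, "NPI2": True, "NPI3": True, "NPI4": True, "NPI5": False}))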
The following is a summary of the public comments received on the
``Rural Area and Health Professional Shortage Area Practices''
proposals and our responses:
Comment: Several commenters supported the proposals to modify the
definition of rural areas as ZIP codes designated as rural and to
designate a group as a rural or HPSA practice when more than 75 percent
of NPIs billing under the individual MIPS eligible clinician or group's
TIN or within a virtual group, as applicable, are designated in a ZIP
code as a rural area or HPSA.
Another commenter recommended that we conduct further analysis on those
clinicians who thought they qualified as a rural area or HPSA practice
but did not meet the 75 percent threshold.
Response: We are finalizing the definition of rural areas at
Sec. 414.1305 as ZIP codes designated as rural, using the most recent
Health Resources and Services Administration (HRSA) Area Health
Resource File data set available. In addition, we are finalizing that,
for performance periods occurring in 2018 and future years, an
individual MIPS eligible clinician, a group, or a virtual group with
multiple practices under its TIN or TINs within a virtual group would
be designated as a rural or HPSA practice if more than 75 percent of
NPIs billing under the individual MIPS eligible clinician or group's
TIN or within a virtual group, as applicable, are designated in a ZIP
code as a rural area or HPSA. In regard to the suggestion that we
conduct further analysis on those clinicians who thought they qualified
as a rural area or HPSA practice but did not meet the 75 percent
threshold, we would encourage those stakeholders to contact our Quality
Payment Program Service Center which may be reached at 1-866-288-8292
(TTY 1-877-715-6222), available Monday through Friday, 8:00 a.m.-8:00
p.m. Eastern Time or via email at QPP@cms.hhs.gov.
Comment: One commenter recommended we further analyze the
characteristics of practices currently defined as rural or HPSA to
identify practices that may be inappropriately classified.
Response: We believe that establishing a 75 percent threshold more
appropriately identifies groups and virtual groups with multiple
practices under a group's TIN or TINs that are part of a virtual group
as rural or HPSA practices and ensures that groups and virtual groups
are assessed and scored according to requirements that are applicable
and appropriate. We will take the suggestions for further analysis on
the characteristics of practices currently defined as rural or HPSA to
identify practices that may be inappropriately classified into
consideration in future rulemaking as necessary.
Comment: Several commenters did not support the proposed definition
of rural areas and did not support the proposed group definition of
rural and HPSA practice. One commenter did not support the use of ZIP
codes as a reliable indicator of rural status as some clinicians have
multiple sites inside and outside of rural areas. A few commenters
recommended that we not adopt the policy that a group be considered
rural if more than 75 percent of NPIs billing under the TIN are
designated in a ZIP code as rural or HPSA because it would overly limit
the number of rural group practices. Of these commenters, two
recommended using 50 percent as a threshold, and one commenter
recommended a gradual transition using the 2017 threshold for the 2018
MIPS performance period and thresholds of 25 percent, 50 percent, and
75 percent in performance periods occurring in 2019, 2020, and 2021,
respectively. A few commenters believed that expanding the number of
clinicians in rural or HPSA groups would hamper the ability of those
practices to participate fully in the transition to value-based care
and increase disparities between urban and rural care. One commenter
stated that the status of rural or HPSA should be assigned to an
individual but not be assigned to a group.
Response: We are finalizing that an individual MIPS eligible
clinician, a group, or a virtual group with multiple practices under
its TIN or TINs within a virtual group would be designated as a rural
or HPSA practice if more than 75 percent of NPIs billing under the
individual MIPS eligible clinician or group's TIN or within a virtual
group, as applicable, are designated in a ZIP code as a rural area or
HPSA. We do not believe establishing a 75 percent threshold would
overly limit the number of rural group practices, nor hamper their
ability to participate fully in the transition to value-based care, or
increase disparities between urban and rural care. In response to the
various threshold recommendations, we believe that the 75 percent
threshold provides adequate representation of the group, and it also
aligns with our definition of a non-patient facing group, which
provides consistency across the program. We believe rural and HPSA
status should be assigned to groups because we believe those clinicians
that are in a rural or HPSA area and choose to participate in MIPS as
part of a group, should receive the benefit of those statuses,
regardless of their chosen participation mechanism. In regard to the
commenter who did not support the use of ZIP codes as a reliable
indicator of rural status due to clinicians practicing at multiple
sites, we disagree. We believe that utilizing ZIP codes designated as
rural is an appropriate indicator of rural status. We further note that
if a clinician practices at multiple sites that have different TINs,
each TIN would have a separate rural analysis applied for that
particular site (TIN).
Final Action: After consideration of the public comments, we are
finalizing the definition of rural areas at Sec. 414.1305 as ZIP codes
designated as rural, using the most recent Health Resources and
Services Administration (HRSA) Area Health Resource File data set
available. In addition, we are finalizing that, for performance periods
occurring in 2018 and future years, an individual MIPS eligible
clinician, a group, or a virtual group with multiple practices under
its TIN or TINs within a virtual group would be designated as a rural
or HPSA practice if more than 75 percent of NPIs billing under the
individual MIPS eligible clinician or group's TIN or within a virtual
group, as applicable, are designated in a ZIP code as a rural area or
HPSA.
e. Non-Patient Facing MIPS Eligible Clinicians
Section 1848(q)(2)(C)(iv) of the Act requires the Secretary, in
specifying measures and activities for a performance category, to give
consideration to the circumstances of professional types (or
subcategories of those types determined by practice characteristics)
who typically furnish services that do not involve face-to-face
interaction with a patient. To the extent feasible and appropriate, the
Secretary may take those circumstances into account and apply
alternative measures or activities that fulfill the goals of the
applicable performance category to such non-patient facing MIPS
eligible clinicians. In carrying out these provisions, we are required
to consult with non-patient facing MIPS eligible clinicians.
In addition, section 1848(q)(5)(F) of the Act allows the Secretary
to re-weight
[[Page 53583]]
MIPS performance categories if there are not sufficient measures and
activities applicable and available to each type of MIPS eligible
clinician. We assume many non-patient facing MIPS eligible clinicians
will not have sufficient measures and activities applicable and
available to report under the performance categories under MIPS. We
refer readers to section II.C.6.f. of this final rule with comment
period for the discussion regarding how we address performance category
weighting for MIPS eligible clinicians for whom no measures or
activities are applicable and available in a given performance
category.
In the CY 2017 Quality Payment Program final rule (81 FR 77048
through 77049), we defined a non-patient facing MIPS eligible clinician
for MIPS at Sec. 414.1305 as an individual MIPS eligible clinician
that bills 100 or fewer patient-facing encounters (including Medicare
telehealth services defined in section 1834(m) of the Act) during the
non-patient facing determination period, and a group provided that more
than 75 percent of the NPIs billing under the group's TIN meet the
definition of a non-patient facing individual MIPS eligible clinician
during the non-patient facing determination period. In order to account
for the formation of virtual groups starting in the 2018 performance
year and how non-patient facing determinations would apply to virtual
groups, we need to modify the definition of a non-patient facing MIPS
eligible clinician. Therefore, for performance periods occurring in
2018 and future years, we proposed to modify the definition of a non-
patient facing MIPS eligible clinician at Sec. 414.1305 to mean an
individual MIPS eligible clinician that bills 100 or fewer patient-
facing encounters (including Medicare telehealth services defined in
section 1834(m) of the Act) during the non-patient facing determination
period, and a group or virtual group provided that more than 75 percent
of the NPIs billing under the group's TIN or within a virtual group, as
applicable, meet the definition of a non-patient facing individual MIPS
eligible clinician during the non-patient facing determination period
(82 FR 30021).
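For illustration only, the individual and group-level non-patient facing definitions described above can be sketched as follows; the function names, the example encounter counts, and the per-NPI input structure are hypothetical and not part of the regulatory text.

    def individual_is_non_patient_facing(patient_facing_encounters):
        """100 or fewer patient-facing encounters during the determination period."""
        return patient_facing_encounters <= 100

    def group_is_non_patient_facing(encounters_by_npi):
        """More than 75 percent of NPIs in the group (or virtual group) must meet
        the individual non-patient facing definition."""
        if not encounters_by_npi:
            return False
        qualifying = sum(1 for count in encounters_by_npi.values()
                         if individual_is_non_patient_facing(count))
        return qualifying / len(encounters_by_npi) > 0.75

    # 4 of 5 NPIs (80 percent) bill 100 or fewer patient-facing encounters,
    # so the group would be considered non-patient facing.
    print(group_is_non_patient_facing(
        {"NPI1": 20, "NPI2": 35, "NPI3": 90, "NPI4": 12, "NPI5": 600}))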
We considered a patient-facing encounter to be an instance in which
the individual MIPS eligible clinician or group billed for items and
services furnished such as general office visits, outpatient visits,
and procedure codes under the PFS. We published the list of patient-
facing encounter codes for performance periods occurring in 2017 at
qpp.cms.gov/resources/education. We intend to publish the list of
patient-facing encounter codes for performance periods occurring in
2018 at qpp.cms.gov by the end of 2017. The list of patient-facing
encounter codes is used to determine the non-patient facing status of
MIPS eligible clinicians.
The list of patient-facing encounter codes includes two general
categories of codes: Evaluation and Management (E&M) codes; and
Surgical and Procedural codes. E&M codes capture clinician-patient
encounters that occur in a variety of care settings, including office
or other outpatient settings, hospital inpatient settings, emergency
departments, and nursing facilities, in which clinicians utilize
information provided by patients regarding history, present illness,
and symptoms to determine the type of assessments to conduct.
Assessments are conducted on the affected body area(s) or organ
system(s) for clinicians to make medical decisions that establish a
diagnosis or select a management option(s).
Surgical and Procedural codes capture clinician-patient encounters
that involve procedures, surgeries, and other medical services
conducted by clinicians to treat medical conditions. In the case of
many of these services, evaluation and management work is included in
the payment for the single code instead of separately reported.
Patient-facing encounter codes from both of these categories describe
direct services furnished by eligible clinicians with impact on patient
safety, quality of care, and health outcomes.
For purposes of the non-patient facing policies under MIPS, the
utilization of E&M codes and Surgical and Procedural codes allows for
accurate identification of patient-facing encounters, and thus,
accurate eligibility determinations regarding non-patient facing
status. As a result, MIPS eligible clinicians considered non-patient
facing are able to prepare to meet requirements applicable to non-
patient facing MIPS eligible clinicians. We proposed to continue
applying these policies for purposes of the 2020 MIPS payment year and
future years (82 FR 30021).
As described in the CY 2017 Quality Payment Program final rule, we
established the non-patient facing determination period for purposes of
identifying non-patient facing MIPS eligible clinicians in advance of
the performance period and during the performance period using
historical and performance period claims data. This eligibility
determination process allows us to begin identifying non-patient facing
MIPS eligible clinicians prior to or shortly after the start of the
performance period. The non-patient facing determination period is a
24-month assessment period, which includes a two-segment analysis of
claims data regarding patient-facing encounters during an initial 12-
month period prior to the performance period followed by another 12-
month period during the performance period. The initial 12-month
segment of the non-patient facing determination period spans from the
last 4 months of a calendar year 2 years prior to the performance
period followed by the first 8 months of the next calendar year and
includes a 60-day claims run out, which allows us to inform individual
MIPS eligible clinicians and groups of their non-patient facing status
during the month (December) prior to the start of the performance
period. The second 12-month segment of the non-patient facing
determination period spans from the last 4 months of a calendar year 1
year prior to the performance period followed by the first 8 months of
the performance period in the next calendar year and includes a 60-day
claims run out, which will allow us to inform additional individual
MIPS eligible clinicians and groups of their non-patient status during
the performance period.
However, based on our analysis of data from the initial segment of
the non-patient facing determination period for performance periods
occurring in 2017 (that is, data spanning from September 1, 2015 to
August 31, 2016), we found that it may not be necessary to include a
60-day claims run out since we could achieve a similar outcome for such
eligibility determinations by utilizing a 30-day claims run out. In our
comparison of data analysis results utilizing a 60-day claims run out
versus a 30-day claims run out, there was a 1 percent decrease in data
completeness (see Table 1 for data completeness regarding comparative
analysis of a 60-day and 30-day claims run out). The small decrease in
data completeness would not negatively impact individual MIPS eligible
clinicians or groups regarding non-patient facing determinations. We
believe that a 30-day claims run out would allow us to complete the
analysis and provide such determinations in a more timely manner.
[[Page 53584]]
   Table 1--Percentages of Data Completeness for 60-Day and 30-Day Claims
                                  Run Out
------------------------------------------------------------------------
                                           30-Day claims    60-Day claims
              Incurred year                   run out *        run out *
------------------------------------------------------------------------
2015....................................        97.1%            98.4%
------------------------------------------------------------------------
* Note: Completion rates are estimated and averaged at aggregated
  service categories and may not be applicable to subsets of these
  totals. For example, completion rates can vary by clinician due to
  claim processing practices, service mix, and post-payment review
  activity. Completion rates vary across subsections of a calendar year;
  later portions of a given calendar year will be less complete than
  earlier ones. Completion rates also vary due to variance in claim
  loading patterns driven by technical, seasonal, policy, and
  legislative factors. Completion rates are a function of the incurred
  date used to process claims, and these factors will need to be updated
  if claims are processed on a claim from-date or other methodology.
For performance periods occurring in 2018 and future years, we
proposed a modification to the non-patient facing determination period,
in which the initial 12-month segment of the non-patient facing
determination period would span from the last 4 months of a calendar
year 2 years prior to the performance period followed by the first 8
months of the next calendar year and include a 30-day claims run out;
and the second 12-month segment of the non-patient facing determination
period would span from the last 4 months of a calendar year 1 year
prior to the performance period followed by the first 8 months of the
performance period in the next calendar year and include a 30-day
claims run out (82 FR 30022). The proposal would only change the
duration of the claims run out, not the 12-month timeframes used for
the first and second segments of data analysis.
For purposes of the 2020 MIPS payment year, we would initially
identify individual MIPS eligible clinicians and groups who are
considered non-patient facing MIPS eligible clinicians based on 12
months of data starting from September 1, 2016, to August 31, 2017. To
account for the identification of additional individual MIPS eligible
clinicians and groups that may qualify as non-patient facing during
performance periods occurring in 2018, we would conduct another
eligibility determination analysis based on 12 months of data starting
from September 1, 2017, to August 31, 2018.
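    The segment boundaries described above follow a fixed pattern
relative to the performance year. The following Python sketch is
provided for illustration only and is not part of the regulation; it
computes the two 12-month segments of the determination period and the
end of the finalized 30-day claims run out for a given performance
year, and the function name and 30-day default are the only assumptions
beyond the dates stated above.

    from datetime import date, timedelta

    def determination_period_segments(performance_year, run_out_days=30):
        # Illustrative only. First segment: September 1 of the calendar
        # year 2 years prior through August 31 of the following year.
        # Second segment: September 1 of the calendar year 1 year prior
        # through August 31 of the performance year. Each segment is
        # followed by a claims run out (finalized at 30 days).
        first = (date(performance_year - 2, 9, 1),
                 date(performance_year - 1, 8, 31))
        second = (date(performance_year - 1, 9, 1),
                  date(performance_year, 8, 31))
        return [{"segment": seg,
                 "claims_run_out_ends": seg[1] + timedelta(days=run_out_days)}
                for seg in (first, second)]

    # For the 2018 performance period (2020 MIPS payment year), this
    # yields September 1, 2016-August 31, 2017 and September 1, 2017-
    # August 31, 2018.
    for segment in determination_period_segments(2018):
        print(segment)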
Similarly, for future years, we would conduct an initial
eligibility determination analysis based on 12 months of data
(consisting of the last 4 months of the calendar year 2 years prior to
the performance period and the first 8 months of the calendar year
prior to the performance period) to determine the non-patient facing
status of individual MIPS eligible clinicians and groups, and conduct
another eligibility determination analysis based on 12 months of data
(consisting of the last 4 months of the calendar year prior to the
performance period and the first 8 months of the performance period) to
determine the non-patient facing status of additional individual MIPS
eligible clinicians and groups. We would not change the non-patient
facing status of any individual MIPS eligible clinician or group
identified as non-patient facing during the first eligibility
determination analysis based on the second eligibility determination
analysis. Thus, an individual MIPS eligible clinician or group that is
identified as non-patient facing during the first eligibility
determination analysis would continue to be considered non-patient
facing for the duration of the performance period and MIPS payment year
regardless of the results of the second eligibility determination
analysis. We would conduct the second eligibility determination
analysis to account for the identification of additional, previously
unidentified individual MIPS eligible clinicians and groups that are
considered non-patient facing.
Additionally, in the CY 2017 Quality Payment Program final rule (81
FR 77241), we established a policy regarding the re-weighting of the
advancing care information performance category for non-patient facing
MIPS eligible clinicians. Specifically, MIPS eligible clinicians who
are considered to be non-patient facing will have their advancing care
information performance category automatically reweighted to zero (81
FR 77241). For groups that are considered to be non-patient facing
(that is, more than 75 percent of the NPIs billing under the group's
TIN meet the definition of a non-patient facing individual MIPS
eligible clinician) during the non-patient facing determination period,
we are finalizing in section II.C.7.b.(3) of this final rule with
comment period to automatically reweight their advancing care
information performance category to zero. We proposed to continue
applying these policies for purposes of the 2020 MIPS payment year and
future years.
The following is a summary of the public comments received on the
``Non-Patient Facing MIPS Eligible Clinicians'' proposals and our
responses:
Comment: Several commenters supported the policy to define non-
patient facing clinicians as individual eligible clinicians billing 100
or fewer encounters, and groups or virtual groups as non-patient facing
if more than 75 percent of the eligible clinicians billing under the
group meet the individual clinician definition. One
commenter appreciated the flexibility we are demonstrating in
considering the use of telehealth. Another commenter recommended we
implement the same thresholds for rural and HPSA practices.
Response: We are finalizing for performance periods occurring in
2018 and future years that at Sec. 414.1305 non-patient facing MIPS
eligible clinician means an individual MIPS eligible clinician that
bills 100 or fewer patient-facing encounters (including Medicare
telehealth services defined in section 1834(m) of the Act) during the
non-patient facing determination period, and a group or virtual group
provided that more than 75 percent of the NPIs billing under the
group's TIN or within a virtual group, as applicable, meet the
definition of a non-patient facing individual MIPS eligible clinician
during the non-patient facing determination period.
Comment: Several commenters did not support the proposed definition
of non-patient facing as an individual MIPS eligible clinician that
bills 100 or fewer patient-facing encounters during the non-patient
facing determination period, and a group provided that more than 75
percent of the NPIs billing under the group's TIN meet the definition
of a non-patient facing individual MIPS eligible clinician during the
non-patient facing determination period. One commenter recommended that
the definition of a non-patient facing clinician be defined at the
individual clinician level and not be applied at a group level. Another
commenter did not support applying the non-patient facing definition to
pathologists using PECOS, but rather believed all pathologists should
be automatically identified as non-patient facing.
Response: We do not agree with the commenters who did not support
the proposed definition of a non-patient facing MIPS eligible clinician
at the individual or group level. We weighed several options when
considering the appropriate definition of non-patient facing MIPS
eligible clinicians and believe we have established an appropriate
threshold that provides the most appropriate representation of a non-
patient facing MIPS eligible clinician. The definition of a non-patient
facing MIPS eligible clinician is based on a methodology that would
allow us to more accurately identify MIPS eligible clinicians who are
non-patient facing by applying a threshold to recognize that a MIPS
eligible clinician who furnishes almost exclusively non-
[[Page 53585]]
patient facing services should be treated as a non-patient facing MIPS
eligible clinician despite furnishing a small number of patient-facing
services. This approach also allows us to determine if an individual
clinician or a group of clinicians is non-patient facing. We believe
that having the determination of non-patient facing available at the
individual and group level provides further flexibilities for MIPS
eligible clinicians on the options available to them for participation
within the program. Our methodology used to identify non-patient facing
MIPS eligible clinicians included a quantitative, comparative analysis
of claims and HCPCS code data. We refer commenters to the CY 2017
Quality Payment Program final rule (81 FR 77041 through 77049) for a
full discussion of the logic used to determine which clinicians are
eligible to be non-patient facing MIPS eligible clinicians. We agree and intend to provide
the non-patient facing determination prior to the performance period
following the non-patient facing determination period as discussed in
section II.C.1.e. of this final rule with comment period. Regarding the
comment disagreeing with applying the non-patient facing definition to
pathologists using PECOS, we note that we are not utilizing PECOS for
the non-patient facing determination; rather, we utilize Part B claims
data.
Comment: Two commenters recommended that we release all patient-
facing codes through formal notice-and-comment rulemaking rather than
subregulatory guidance.
Response: In the CY 2018 Quality Payment Program proposed rule (82
FR 30021), we noted that we consider a patient-facing encounter to be
an instance in which the individual MIPS eligible clinician or group
billed for items and services furnished such as general office visits,
outpatient visits, and procedure codes under the PFS, and we described
in detail two general categories of codes included in this list of
codes, specifically, E&M codes and Surgical and Procedural codes, and
our rationale for including these codes, which we proposed to continue
applying for purposes of the 2020 MIPS payment year and future years.
Therefore, we do not believe it is necessary to specify each individual
code in notice-and-comment rulemaking. Moreover, we are unable to
provide the patient-facing codes through notice-and-comment rulemaking
because the final list of Current Procedural Terminology (CPT) codes
used to determine patient-facing encounters is often not available in
conjunction with the proposed and final rulemaking timelines. However,
we intend to publish the patient-facing codes as
close to when the final rule with comment period is issued as possible
and prior to the start of the performance period. We will adopt any
changes to this policy through future rulemaking as necessary.
Comment: Several commenters supported the proposed policy on
determination periods. The commenters agreed with the proposed policy
to use 2 determination periods. A few commenters recommended that we
notify MIPS eligible clinicians and groups prior to the start of the
performance period by either including such information in the MIPS
eligibility notifications sent to eligible clinicians or responding to
MIPS eligible clinician or group requests for information. Two
commenters recommended that we allow an appeal process or attestation
by MIPS eligible clinicians for the non-patient facing designation.
Response: We agree with the commenters regarding the non-patient
facing determination period and that MIPS eligible clinicians should be
notified prior to the performance period regarding their eligibility
status. In the CY 2017 Quality Payment Program final rule (81 FR 77043
through 77048), we established the non-patient facing determination
period for purposes of identifying non-patient facing MIPS eligible
clinicians in advance of the performance period and during the
performance period using historical and performance period claims data.
In addition, we would like to note that MIPS eligible clinicians may
access the Quality Payment Program Web site at www.qpp.cms.gov and
check if they are required to submit data to MIPS by entering their NPI
into the online tool. In response to the comment regarding appeals for
non-patient facing status, if a MIPS eligible clinician disagrees with
the non-patient facing determination, we note that clinicians can
contact the Quality Payment Program Service Center which may be reached
at 1-866-288-8292 (TTY 1-877-715-6222), available Monday through
Friday, 8:00 a.m.-8:00 p.m. Eastern Time or via email at
QPP@cms.hhs.gov. If an error in the non-patient facing determination is
discovered, we will update the MIPS eligible clinician's status
accordingly.
Final Action: After consideration of the public comments, we are
finalizing for performance periods occurring in 2018 and future years
that at Sec. 414.1305 non-patient facing MIPS eligible clinician means
an individual MIPS eligible clinician that bills 100 or fewer patient-
facing encounters (including Medicare telehealth services defined in
section 1834(m) of the Act) during the non-patient facing determination
period, and a group or virtual group provided that more than 75 percent
of the NPIs billing under the group's TIN or within a virtual group, as
applicable, meet the definition of a non-patient facing individual MIPS
eligible clinician during the non-patient facing determination period.
In addition, we are finalizing that for performance periods occurring
in 2018 and future years that for purposes of non-patient facing MIPS
eligible clinicians, we will utilize E&M codes and Surgical and
Procedural codes for accurate identification of patient-facing
encounters, and thus, accurate eligibility determinations regarding
non-patient facing status. Further, we are finalizing that a patient-
facing encounter is considered to be an instance in which the
individual MIPS eligible clinician or group billed for items and
services furnished such as general office visits, outpatient visits,
and procedure codes under the PFS. Finally, we are finalizing that, for
performance periods occurring in 2018 and future years, the initial
12-month segment of the non-patient facing determination period will
span from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and include a 30-day claims run out, and the second 12-month
segment of the non-patient facing determination period will span from
the last 4 months of a calendar year 1 year prior to the performance
period followed by the first 8 months of the performance period in the
next calendar year and include a 30-day claims run out.
f. MIPS Eligible Clinicians Who Practice in Critical Access Hospitals
Billing Under Method II (Method II CAHs)
In the CY 2017 Quality Payment Program final rule (81 FR 77049), we
noted that for MIPS eligible clinicians who practice in CAHs that bill
under Method I (Method I CAHs), the MIPS payment adjustment would apply
to payments made for items and services billed by those MIPS eligible
clinicians, but it would not apply to the facility payment to the CAH
itself. For MIPS eligible clinicians who practice in Method II CAHs and
have not assigned their billing rights to the CAH, the MIPS payment
adjustment would apply in the same manner as for MIPS eligible
clinicians who bill for items and services in Method I CAHs. As
established in the CY 2017 Quality Payment Program final rule (81 FR
77051), the MIPS payment adjustment will apply to Method II CAH
payments
[[Page 53586]]
under section 1834(g)(2)(B) of the Act when MIPS eligible clinicians
who practice in Method II CAHs have assigned their billing rights to
the CAH.
We refer readers to the CY 2017 Quality Payment Program final rule
(81 FR 77049 through 77051) for our discussion of MIPS eligible
clinicians who practice in Method II CAHs.
g. MIPS Eligible Clinicians Who Practice in Rural Health Clinics (RHCs)
or Federally Qualified Health Centers (FQHCs)
As established in the CY 2017 Quality Payment Program final rule
(81 FR 77051 through 77053), services furnished by an eligible
clinician under the RHC or FQHC methodology will not be subject to the
MIPS payment adjustments. As noted, these eligible clinicians have the
option to voluntarily report on applicable measures and activities for
MIPS, in which case the data received will not be used to assess their
performance for the purpose of the MIPS payment adjustment.
We refer readers to the CY 2017 Quality Payment Program final rule
(81 FR 77051 through 77053) for our discussion of MIPS eligible
clinicians who practice in RHCs or FQHCs.
h. MIPS Eligible Clinicians Who Practice in Ambulatory Surgical Centers
(ASCs), Home Health Agencies (HHAs), Hospice, and Hospital Outpatient
Departments (HOPDs)
Section 1848(q)(6)(E) of the Act provides that the MIPS payment
adjustment is applied to the amount otherwise paid under Part B with
respect to the items and services furnished by a MIPS eligible
clinician during a year. Some eligible clinicians may not receive MIPS
payment adjustments due to their billing methodologies. If a MIPS
eligible clinician furnishes items and services in an ASC, HHA,
Hospice, and/or HOPD and the facility bills for those items and
services (including prescription drugs) under the facility's all-
inclusive payment methodology or prospective payment system
methodology, the MIPS adjustment would not apply to the facility
payment itself. However, if a MIPS eligible clinician furnishes other
items and services in an ASC, HHA, Hospice, and/or HOPD and bills for
those items and services separately, such as under the PFS, the MIPS
adjustment would apply to payments made for such items and services.
Such items and services would also be considered for purposes of
applying the low-volume threshold. Therefore, we proposed that services
furnished by an eligible clinician that are payable under the ASC, HHA,
Hospice, or HOPD methodology would not be subject to the MIPS payment
adjustments (82 FR 30023). However, these eligible clinicians have the
option to voluntarily report on applicable measures and activities for
MIPS, in which case the data received would not be used to assess their
performance for the purpose of the MIPS payment adjustment. We note
that eligible clinicians who bill under both the PFS and one of these
other billing methodologies (ASC, HHA, Hospice, and/or HOPD) may be
required to participate in MIPS if they exceed the low-volume threshold
and are otherwise eligible clinicians; in such case, the data reported
would be used to determine their MIPS payment adjustment.
The following is a summary of the public comments received on the
``MIPS Eligible Clinicians Who Practice in ASCs, HHAs, HOPDs'' proposal
and our responses:
Comment: A few commenters agreed with the proposal that services
furnished by an eligible clinician that are payable under the ASC, HHA,
Hospice, or Outpatient payment methodology would not be subject to the
MIPS payment adjustments.
Response: We appreciate the commenters' support. We are finalizing
that services furnished by an eligible clinician that are payable under
the ASC, HHA, Hospice, or HOPD methodology will not be subject to the
MIPS payment adjustments and that such data will not be utilized for
MIPS eligibility purposes.
Final Action: After consideration of the public comments, we are
finalizing that services furnished by an eligible clinician that are
payable under the ASC, HHA, Hospice, or HOPD methodology will not be
subject to the MIPS payment adjustments and that such data will not be
utilized for MIPS eligibility purposes, as proposed.
i. MIPS Eligible Clinician Identifiers
As described in the CY 2017 Quality Payment Program final rule (81
FR 77057), we established the use of multiple identifiers that allow
MIPS eligible clinicians to be measured as an individual or
collectively through a group's performance and that the same identifier
be used for all four performance categories. While we have multiple
identifiers for participation and performance, we established the use
of a single identifier, TIN/NPI, for applying the MIPS payment
adjustment, regardless of how the MIPS eligible clinician is assessed.
(1) Individual Identifiers
As established in the CY 2017 Quality Payment Program final rule
(81 FR 77058), we established at Sec. 414.1305 the use of a unique
billing TIN and NPI combination as the identifier to assess the
performance of an individual MIPS eligible clinician. Each unique
TIN/NPI combination is considered a different
MIPS eligible clinician, and MIPS performance is assessed separately
for each TIN under which an individual bills.
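    As an illustration of how the TIN/NPI identifier operates (provided
for clarity only and not part of the regulation), the same NPI billing
under two different TINs is treated as two distinct MIPS eligible
clinicians; the identifier values in the sketch below are hypothetical.

    from collections import namedtuple

    # Illustrative only; the TIN and NPI values are hypothetical.
    MipsIdentifier = namedtuple("MipsIdentifier", ["tin", "npi"])

    npi = "1111111111"
    # The same NPI billing under two different TINs produces two distinct
    # TIN/NPI identifiers, each assessed separately under MIPS.
    identifiers = {MipsIdentifier("123456789", npi),
                   MipsIdentifier("987654321", npi)}
    print(len(identifiers))  # 2 -- two separate MIPS assessments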
(2) Group Identifiers for Performance
As established in the CY 2017 Quality Payment Program final rule
(81 FR 77059), we codified the definition of a group at Sec. 414.1305
to mean a group that consists of a single TIN with two or more eligible
clinicians (including at least one MIPS eligible clinician), as
identified by their individual NPI, who have reassigned their billing
rights to the TIN.
(3) APM Entity Group Identifiers for Performance
As described in the CY 2017 Quality Payment Program final rule (81
FR 77060), we established that each eligible clinician who is a
participant of an APM Entity is identified by a unique APM participant
identifier. The unique APM participant identifier is a combination of
four identifiers: (1) APM Identifier (established by CMS; for example,
XXXXXX); (2) APM Entity identifier (established under the APM by CMS;
for example, AA00001111); (3) TIN(s) (9 numeric characters; for
example, XXXXXXXXX); (4) EP NPI (10 numeric characters; for example,
1111111111). We codified the definition of an APM Entity group at Sec.
414.1305 to mean a group of eligible clinicians participating in an APM
Entity, as identified by a combination of the APM identifier, APM
Entity identifier, TIN, and NPI for each participating eligible
clinician.
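    For illustration only (not part of the regulation), the unique APM
participant identifier can be represented as a simple record combining
the four components listed above; the values shown below follow the
example formats in the preceding paragraph and are placeholders.

    from collections import namedtuple

    # Illustrative only; all values are placeholders in the formats
    # described above.
    ApmParticipantId = namedtuple("ApmParticipantId",
                                  ["apm_identifier",
                                   "apm_entity_identifier",
                                   "tin", "npi"])

    participant = ApmParticipantId(
        apm_identifier="XXXXXX",             # established by CMS
        apm_entity_identifier="AA00001111",  # established under the APM
        tin="XXXXXXXXX",                     # 9 numeric characters
        npi="1111111111")                    # 10 numeric characters
    print(participant)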
2. Exclusions
a. New Medicare-Enrolled Eligible Clinician
As established in the CY 2017 Quality Payment Program final rule
(81 FR 77061 through 77062), we defined a new Medicare-enrolled
eligible clinician at Sec. 414.1305 as a professional who first
becomes a Medicare-enrolled eligible clinician within PECOS during the
performance period for a year and who has not previously submitted
claims under Medicare, either as an individual, as an entity, or as
part of a clinician group, or under a different billing number or tax
identifier.
Additionally, we established
[[Page 53587]]
at Sec. 414.1310(c) that these eligible clinicians will not be treated
as a MIPS eligible clinician until the subsequent year and the
performance period for such subsequent year. We established at Sec.
414.1310(d) that in no case would a MIPS payment adjustment apply to
the items and services furnished during a year by new Medicare-enrolled
eligible clinicians for the applicable performance period.
We used the term ``new Medicare-enrolled eligible clinician
determination period'' to refer to the 12 months of a calendar year
applicable to the performance period. During the new Medicare-enrolled
eligible clinician determination period, we conduct eligibility
determinations on a quarterly basis to the extent that is technically
feasible to identify new Medicare-enrolled eligible clinicians that
would be excluded from the requirement to participate in MIPS for the
applicable performance period.
b. Qualifying APM Participant (QP) and Partial Qualifying APM
Participant (Partial QP)
In the CY 2017 Quality Payment Program final rule (81 FR 77062), we
established at Sec. 414.1305 that a QP (as defined at Sec. 414.1305)
is not a MIPS eligible clinician, and therefore, is excluded from MIPS.
Also, we established that a Partial QP (as defined at Sec. 414.1305)
who does not report on applicable measures and activities that are
required to be reported under MIPS for any given performance period in
a year is not a MIPS eligible clinician, and therefore, is excluded
from MIPS.
c. Low-Volume Threshold
Section 1848(q)(1)(C)(ii)(III) of the Act provides that the
definition of a MIPS eligible clinician does not include eligible
clinicians who are below the low-volume threshold selected by the
Secretary under section 1848(q)(1)(C)(iv) of the Act for a given year.
Section 1848(q)(1)(C)(iv) of the Act requires the Secretary to select a
low-volume threshold to apply for the purposes of this exclusion which
may include one or more of the following: (1) The minimum number, as
determined by the Secretary, of Part B-enrolled individuals who are
treated by the eligible clinician for a particular performance period;
(2) the minimum number, as determined by the Secretary, of items and
services furnished to Part B-enrolled individuals by the eligible
clinician for a particular performance period; and (3) the minimum
amount, as determined by the Secretary, of allowed charges billed by
the eligible clinician for a particular performance period.
In the CY 2017 Quality Payment Program final rule (81 FR 77069
through 77070), we defined MIPS eligible clinicians or groups who do
not exceed the low-volume threshold at Sec. 414.1305 as an individual
MIPS eligible clinician or group who, during the low-volume threshold
determination period, has Medicare Part B allowed charges less than or
equal to $30,000 or provides care for 100 or fewer Part B-enrolled
Medicare beneficiaries. We established at Sec. 414.1310(b) that for a
year, eligible clinicians who do not exceed the low-volume threshold
(as defined at Sec. 414.1305) are excluded from MIPS for the
performance period for a given calendar year.
In the CY 2017 Quality Payment Program final rule (81 FR 77069
through 77070), we defined the low-volume threshold determination
period to mean a 24-month assessment period, which includes a two-
segment analysis of claims data during an initial 12-month period prior
to the performance period followed by another 12-month period during
the performance period. The initial 12-month segment of the low-volume
threshold determination period spans from the last 4 months of a
calendar year 2 years prior to the performance period followed by the
first 8 months of the next calendar year and includes a 60-day claims
run out, which allows us to inform eligible clinicians and groups of
their low-volume status during the month (December) prior to the start
of the performance period. The second 12-month segment of the low-
volume threshold determination period spans from the last 4 months of a
calendar year 1 year prior to the performance period followed by the
first 8 months of the performance period in the next calendar year and
includes a 60-day claims run out, which allows us to inform additional
eligible clinicians and groups of their low-volume status during the
performance period.
We recognize that individual MIPS eligible clinicians and groups
that are small practices or practicing in designated rural areas face
unique dynamics and challenges such as fiscal limitations and workforce
shortages, but serve as a critical access point for care and provide a
safety net for vulnerable populations. Claims data shows that
approximately 15 percent of individual MIPS eligible clinicians (TIN/
NPIs) are considered to be practicing in rural areas after applying all
exclusions. Also, we have heard from stakeholders that MIPS eligible
clinicians practicing in small practices and designated rural areas
tend to have a patient population with a higher proportion of older
adults, as well as higher rates of poor health outcomes, co-
morbidities, chronic conditions, and other social risk factors, which
can result in the costs of providing care and services being
significantly higher compared to non-rural areas. We also have heard
from many solo practitioners and small practices that still face
challenges and additional resource burden in participating in the MIPS.
In the CY 2017 Quality Payment Program final rule, we did not
establish an adjustment for social risk factors in assessing and
scoring performance. In response to the CY 2017 Quality Payment Program
final rule, we received public comments indicating that individual MIPS
eligible clinicians and groups practicing in designated rural areas
would be negatively impacted and at a disadvantage if assessment and
scoring methodology did not adjust for social risk factors.
Additionally, commenters expressed concern that such individual MIPS
eligible clinicians and groups may be disproportionately more
susceptible to lower performance scores across all performance
categories and negative MIPS payment adjustments, and as a result,
such outcomes may further strain already limited fiscal resources,
exacerbate workforce shortages, and negatively impact access to care (reduction
and/or elimination of available services).
After the consideration of stakeholder feedback, we proposed to
modify the low-volume threshold policy established in the CY 2017
Quality Payment Program final rule (82 FR 30024). We stated that we
believe that increasing the dollar amount and beneficiary count of the
low-volume threshold would further reduce the number of eligible
clinicians that are required to participate in the MIPS, which would
reduce the burden on individual MIPS eligible clinicians and groups
practicing in small practices and designated rural areas. Based on our
analysis of claims data, we found that increasing the low-volume
threshold to exclude individual eligible clinicians or groups that have
Medicare Part B allowed charges less than or equal to $90,000 or that
provide care for 200 or fewer Part B-enrolled Medicare beneficiaries
will exclude approximately 134,000 additional clinicians from MIPS from
the approximately 700,000 clinicians that would have been eligible
based on the low-volume threshold that was finalized in the CY 2017
Quality Payment Program final rule. Almost half of the additionally
excluded clinicians are in small practices, and approximately 17
percent are clinicians from practices in designated rural areas.
Applying this
[[Page 53588]]
criterion decreases the percentage of the MIPS eligible clinicians that
come from small practices. For example, prior to any exclusions,
clinicians in small practices represent 35 percent of all clinicians
billing Part B services. After applying the eligibility criteria for
the CY 2017 Quality Payment Program final rule, MIPS eligible
clinicians in small practices represent approximately 27 percent of the
clinicians eligible for MIPS; however, with the increased low-volume
threshold, approximately 22 percent of the clinicians eligible for MIPS
are from small practices. In our analysis, the proposed changes to the
low-volume threshold showed little impact on MIPS eligible clinicians
from practices in designated rural areas. MIPS eligible clinicians from
practices in designated rural areas account for 15 to 16 percent of the
total MIPS eligible clinician population. We note that, due to data
limitations, we assessed rural status based on the status of individual
TIN/NPI and did not model any group definition for practices in
designated rural areas.
We believe that increasing the number of such individual eligible
clinicians and groups excluded from MIPS participation would reduce
burden and mitigate, to the extent feasible, the issue surrounding
confounding variables impacting performance under the MIPS. Therefore,
beginning with the 2018 MIPS performance period, we proposed to
increase the low-volume threshold. Specifically, at Sec. 414.1305, we
proposed to define an individual MIPS eligible clinician or group who
does not exceed the low-volume threshold as an individual MIPS eligible
clinician or group who, during the low-volume threshold determination
period, has Medicare Part B allowed charges less than or equal to
$90,000 or provides care for 200 or fewer Part B-enrolled Medicare
beneficiaries. This would mean that approximately 37 percent of
individual eligible clinicians and groups would be eligible for MIPS
based on the low-volume threshold exclusion (and the other exclusions).
However, approximately 65 percent of Medicare payments would still be
captured under MIPS as compared to 72.2 percent of Medicare payments
under the CY 2017 Quality Payment Program final rule.
We recognize that increasing the dollar amount and beneficiary
count of the low-volume threshold would increase the number of
individual eligible clinicians and groups excluded from MIPS. We
assessed various levels of increases and found that $90,000 as the
dollar amount and 200 as the beneficiary count balances the need to
account for individual eligible clinicians and groups who face
additional participation burden while not excluding a significant
portion of the clinician population.
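    Restated as a simple test over determination-period data (for
illustration only; not part of the regulation), an individual eligible
clinician or group is excluded under the finalized low-volume threshold
unless it exceeds both the dollar amount and the beneficiary count; the
function name and example figures in the Python sketch below are
hypothetical.

    LOW_VOLUME_ALLOWED_CHARGES = 90_000   # Medicare Part B allowed charges
    LOW_VOLUME_BENEFICIARY_COUNT = 200    # Part B-enrolled beneficiaries

    def exceeds_low_volume_threshold(allowed_charges, beneficiary_count):
        # Illustrative only. A clinician or group exceeds the low-volume
        # threshold (and so is not excluded on that basis) only if it
        # exceeds BOTH the allowed-charges amount and the beneficiary
        # count during the determination period.
        return (allowed_charges > LOW_VOLUME_ALLOWED_CHARGES
                and beneficiary_count > LOW_VOLUME_BENEFICIARY_COUNT)

    # Hypothetical figures:
    print(exceeds_low_volume_threshold(85_000, 250))   # False -> excluded
    print(exceeds_low_volume_threshold(120_000, 250))  # True  -> not excluded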
Eligible clinicians who do not exceed the low-volume threshold (as
defined at Sec. 414.1305) are excluded from MIPS for the performance
period with respect to a year. The low-volume threshold also applies to
eligible clinicians who practice in APMs under the APM scoring standard
at the APM Entity level, in which APM Entities do not exceed the low-
volume threshold. In such cases, the eligible clinicians participating
in the MIPS APM Entity would be excluded from the MIPS requirements for
the applicable performance period and not subject to a MIPS payment
adjustment for the applicable year. Such an exclusion would not affect
an APM Entity's QP determination if the APM Entity is an Advanced APM.
In the CY 2017 Quality Payment Program final rule, we established
the low-volume threshold determination period to refer to the timeframe
used to assess claims data for making eligibility determinations for
the low-volume threshold exclusion (81 FR 77069 through 77070). We
defined the low-volume threshold determination period to mean a 24-
month assessment period, which includes a two-segment analysis of
claims data during an initial 12-month period prior to the performance
period followed by another 12-month period during the performance
period. Based on our analysis of data from the initial segment of the
low-volume threshold determination period for performance periods
occurring in 2017 (that is, data spanning from September 1, 2015 to
August 31, 2016), we found that it may not be necessary to include a
60-day claims run out since we could achieve a similar outcome for such
eligibility determinations by utilizing a 30-day claims run out.
In our comparison of data analysis results utilizing a 60-day
claims run out versus a 30-day claims run out, there was a 1 percent
decrease in data completeness. The small decrease in data completeness
would not substantially impact individual MIPS eligible clinicians or
groups regarding low-volume threshold determinations. We believe that a
30-day claims run out would allow us to complete the analysis and
provide such determinations in a more timely manner. For performance
periods occurring in 2018 and future years, we proposed a modification
to the low-volume threshold determination period, in which the initial
12-month segment of the low-volume threshold determination period would
span from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and include a 30-day claims run out; and the second 12-month
segment of the low-volume threshold determination period would span
from the last 4 months of a calendar year 1 year prior to the
performance period followed by the first 8 months of the performance
period in the next calendar year and include a 30-day claims run out
(82 FR 30025). We stated that the proposal would only change the
duration of the claims run out, not the 12-month timeframes used for
the first and second segments of data analysis.
For purposes of the 2020 MIPS payment year, we would initially
identify individual eligible clinicians and groups that do not exceed
the low-volume threshold based on 12 months of data starting from
September 1, 2016 to August 31, 2017. To account for the identification
of additional individual eligible clinicians and groups that do not
exceed the low-volume threshold during performance periods occurring in
2018, we would conduct another eligibility determination analysis based
on 12 months of data starting from September 1, 2017 to August 31,
2018. We would not change the low-volume status of any individual
eligible clinician or group identified as not exceeding the low-volume
threshold during the first eligibility determination analysis based on
the second eligibility determination analysis. Thus, an individual
eligible clinician or group that is identified as not exceeding the
low-volume threshold during the first eligibility determination
analysis would continue to be excluded from MIPS for the duration of
the performance period regardless of the results of the second
eligibility determination analysis. We established our policy to
include two eligibility determination analyses in order to prevent any
potential confusion for an individual eligible clinician or group as to
whether or not to participate in MIPS; such a policy also makes it
clear from the outset which individual eligible clinicians and groups
would be required to participate in MIPS. We would conduct the
second eligibility determination analysis to account for the
identification of additional, previously unidentified individual
eligible clinicians and groups who do not exceed the low-volume
threshold. We note that low-volume threshold determinations are made at
the individual and group level, and not at the virtual group level.
As noted above, section 1848(q)(1)(C)(iv) of the Act requires the
[[Page 53589]]
Secretary to select a low-volume threshold to apply for the purposes of
this exclusion which may include one or more of the following: (1) The
minimum number, as determined by the Secretary, of Part B-enrolled
individuals who are treated by the eligible clinician for a particular
performance period; (2) the minimum number, as determined by the
Secretary, of items and services furnished to Part B-enrolled
individuals by the eligible clinician for a particular performance
period; and (3) the minimum amount, as determined by the Secretary, of
allowed charges billed by the eligible clinician for a particular
performance period. We have established a low-volume threshold that
accounts for the minimum number of Part B-enrolled individuals who are
treated by an eligible clinician and that accounts for the minimum
amount of allowed charges billed by an eligible clinician. We did not
make proposals specific to a minimum number of items and services
furnished to Part B-enrolled individuals by an eligible clinician.
In order to expand the ways in which claims data could be analyzed
for purposes of determining a more comprehensive assessment of the low-
volume threshold, we have assessed the option of establishing a low-
volume threshold for items and services furnished to Part B-enrolled
individuals by an eligible clinician. We have considered defining items
and services by using the number of patient encounters or procedures
associated with a clinician. Defining items and services by patient
encounters would assess each patient per visit or encounter with the
eligible clinician. We believe that defining items and services by
using the number of patient encounters or procedures is a simple and
straightforward approach for stakeholders to understand. However, we
are concerned that using this unit of analysis could incentivize
clinicians to focus on volume of services rather than the value of
services provided to patients. Defining items and services by procedure
would tie a specific clinical procedure furnished to a patient to a
clinician. We solicited public comment on the methods of defining items
and services furnished by clinicians described in this paragraph above
and alternate methods of defining items and services (82 FR 30025
through 30026).
For the individual eligible clinicians and groups that would be
excluded from MIPS participation as a result of an increased low-volume
threshold, we believe that in future years it would be beneficial to
provide, to the extent feasible, such individual eligible clinicians
and groups with the option to opt-in to MIPS participation if they
might otherwise be excluded under the low-volume threshold, such as
where they only meet one of the threshold determinations (including a
third determination based on Part B items and services, if
established). For example, if a clinician meets the low-volume
threshold of $90,000 in allowed charges, but does not meet the
threshold of 200 patients or, if established, the threshold pertaining
to Part B items and services, we believe the clinician should, to the
extent feasible, have the opportunity to choose whether or not to
participate in the MIPS and be subject to MIPS payment adjustments. We
recognize that this choice would present additional complexity to
clinicians in understanding all of their available options and may
impose additional burden on clinicians by requiring them to notify us
of their decision. Because of these concerns and our desire to
establish options in a way that is a low-burden and user-focused
experience for all MIPS eligible clinicians, we would not be able to
offer this additional flexibility until performance periods occurring
in 2019. Therefore, as a means of expanding options for clinicians and
offering them the ability to participate in MIPS if they otherwise
would not be included, for the purposes of the 2021 MIPS payment year,
we proposed to provide clinicians the ability to opt-in to the MIPS if
they meet or exceed one, but not all, of the low-volume threshold
determinations, including as defined by dollar amount, beneficiary
count or, if established, items and services (82 FR 30026).
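    The proposed opt-in test can be summarized as meeting or exceeding
at least one, but not all, of the applicable low-volume threshold
determinations. The following sketch is provided for illustration only
and is not part of the regulation; the third determination (items and
services) is represented as an optional input because it has not been
established.

    def opt_in_eligible(exceeds_charges, exceeds_beneficiaries,
                        exceeds_items_and_services=None):
        # Illustrative only. Under the proposal for the 2021 MIPS payment
        # year, a clinician who meets or exceeds one, but not all, of the
        # low-volume threshold determinations could opt in to MIPS. The
        # items-and-services determination applies only if established.
        determinations = [exceeds_charges, exceeds_beneficiaries]
        if exceeds_items_and_services is not None:
            determinations.append(exceeds_items_and_services)
        return any(determinations) and not all(determinations)

    # A clinician with more than $90,000 in allowed charges but 200 or
    # fewer Part B-enrolled beneficiaries exceeds one determination but
    # not all of them:
    print(opt_in_eligible(exceeds_charges=True,
                          exceeds_beneficiaries=False))  # True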
We note that there may be additional considerations we should
address for scenarios in which an individual eligible clinician or a
group does not exceed the low-volume threshold and opts-in to
participate in MIPS. We therefore sought comment on any additional
considerations we should address when establishing this opt-in policy.
Additionally, we note that there is the potential with this opt-in
policy for there to be an impact on our ability to create quality
benchmarks that meet our sample size requirements. For example, if
particularly small practices or solo practitioners with low Part B
beneficiary volumes opt-in, such clinicians may lack sufficient sample
size to be scored on many quality measures, especially measures that do
not apply to all of a MIPS eligible clinician's patients. We therefore
sought comment on how to address any potential impact on our ability to
create quality benchmarks that meet our sample size requirements (82 FR
30026).
The following is a summary of the public comments received on the
``Low-Volume Threshold'' proposals and our responses:
Comment: Many commenters supported raising the low-volume threshold
to exclude an individual MIPS eligible clinician or group who, during
the low-volume threshold determination period, has Medicare Part B
allowed charges less than or equal to $90,000 or provides care for 200
or fewer Part B-enrolled Medicare beneficiaries. Several commenters
further suggested that we retroactively apply the threshold to the 2017
MIPS performance period because changing the low-volume threshold for
the 2018 MIPS performance period would create confusion, complicate
operational and strategic planning for eligible clinicians, and create
inefficiencies for clinicians. One commenter noted that we had not yet
issued the required second round of reports notifying MIPS eligible
clinicians whether they are below the low-volume threshold, so it would
be technically feasible to implement the lower threshold before the end
of the CY 2017 reporting period. A few commenters supported the
proposal but recommended that we maintain the current, lower low-volume
threshold for at least 2, 3, or more years to allow for planning and
investment by clinicians in the program.
Response: We appreciate the support from commenters who supported
raising the low-volume threshold. We are finalizing our proposal to
define at Sec. 414.1305 an individual eligible clinician or group that
does not exceed the low-volume threshold as an individual eligible
clinician or group that, during the low-volume threshold determination
period, has Medicare Part B allowed charges less than or equal to
$90,000 or provides care for 200 or fewer Part B-enrolled Medicare
beneficiaries. We do not believe that we have the flexibility to
retroactively apply the revised low-volume threshold to the 2017 MIPS
performance period. We are aware that by finalizing this
policy, some MIPS eligible clinicians who were eligible to participate
in MIPS for Year 1 will be excluded for Year 2. However, we would like
to note that those MIPS eligible clinicians may still participate in
Year 1. Finally, we agree with the commenter that there are benefits of
maintaining the same low-volume threshold for several years and will
take this into consideration in future years.
Comment: Several commenters did not support the proposed low-volume
threshold because the commenters believed the low-volume threshold
[[Page 53590]]
should be raised further to exclude more clinicians. Several of those
commenters specifically recommended that we set the threshold no lower
than $100,000 in Medicare Part B charges and to only apply to practices
with 10 or fewer eligible clinicians.
Response: We disagree with the commenters regarding raising the
low-volume threshold further. Based on our data analysis, applying the
proposed criterion decreases the percentage of MIPS eligible clinicians
that come from small practices. We note that, based on our updated data
models, the revised low-volume threshold will exclude approximately
123,000 additional clinicians from MIPS out of the approximately
744,000 clinicians that would have been eligible under the low-volume
threshold that was finalized in the CY 2017 Quality Payment Program
final rule. We believe that if we were to raise the low-volume
threshold further, we may deprive medium-sized practices that wish to
participate of the opportunity to receive an upward adjustment and
would have fewer clinicians engaged in value-based care.
We believe the finalized low-volume threshold strikes the appropriate
balance with the need to account for individual MIPS eligible
clinicians and groups who face additional participation burden while
not excluding a significant portion of the clinician population. We are
finalizing the low-volume threshold to exclude an individual eligible
clinician or group that, during the low-volume threshold determination
period, has Medicare Part B allowed charges less than or equal to
$90,000 or provides care for 200 or fewer Part B-enrolled Medicare
beneficiaries.
Comment: Many commenters did not support raising the low-volume
threshold for the 2018 MIPS performance period because they believed it
would be unfair to clinicians who were already participating or planned
to participate in MIPS in future years. The commenters noted that
clinicians may have already invested in MIPS participation. Many
commenters did not support the proposed low-volume threshold because
they believed that raising the low-volume threshold would reduce
payment and incentives for excluded clinicians to participate in value-
based care, which would create additional quality and reimbursement
disparities for the beneficiaries seen by the excluded clinicians,
creating a 2-tiered system of clinicians and related beneficiaries that
are participating in value-based care. The commenters noted that
raising the low-volume threshold would signal to the industry that we
are not focused on transitioning to value-based payment and care. A few
commenters expressed concern that raising the low-volume threshold
would create further disparities in quality between urban and rural
clinicians based on the reduced incentives for rural clinicians to
participate in value-based purchasing programs. One of these commenters
strongly recommended that we study the impact on the rural health
industry prior to implementing the increased low-volume threshold. Many
commenters noted that excluding more clinicians would risk dismantling
the EHR infrastructure that has developed over recent years as
additional practices opt-out of participation in programs designed to
increase adoption and use of EHRs, wasting the billions of dollars we
have invested to date in EHRs. The commenters believed that reduction
in use of EHRs will affect participating clinicians as well by
hampering connectivity and information sharing between excluded
clinicians and participating clinicians. Some commenters also stated
that decreased investment in EHRs by excluded clinicians will drive
greater disparities in care quality between clinicians who are engaged
in value-based purchasing and those who are not. One commenter strongly
recommended that we delay implementation of the proposed low-volume
threshold. Another commenter recommended that, rather than exclude
clinicians from MIPS, we should allow clinicians to continue the pick-
your-pace approach and continue participating in MIPS.
Response: We acknowledge there will be MIPS eligible clinicians who
were eligible for Year 1 of MIPS that are no longer eligible for Year 2
of MIPS. However, from our analyses, the MIPS eligible clinicians
affected are mainly smaller practices and practices in rural areas,
many of which have raised concerns regarding their ability to
participate in MIPS. We want to encourage all clinicians to participate
in value-based care within the MIPS; however, we have continued to hear
from practices that challenges to participation in the Quality Payment
Program still exist. Therefore, we believe it is appropriate to raise
the low-volume threshold to not require these practices to participate
in the program. However, we will review the impacts of this policy to
determine if it should remain. We do not believe that raising the low-
volume threshold will cause quality disparities between urban and rural
practices. With the increased low-volume threshold, additional
practices will not be required to participate in the Quality Payment
Program; however, we still encourage all clinicians to provide high-
value care to their patients. The goal of raising the low-volume
threshold is to reduce burden on small practices, and we do not believe
it will create a 2-tiered system. We appreciate the suggestion to study
the impact on the rural health industry before finalizing this policy.
We do not believe a study is necessary prior to finalizing this policy;
rather, we believe that there is sufficient evidence from stakeholder
feedback to reflect the value of increasing the low-volume threshold at
this time. We do not agree that this policy would risk dismantling the
EHR infrastructure. We believe that the low-volume threshold in Year 2
provides MIPS eligible clinicians and groups, particularly those in
smaller practices and rural areas, that do not exceed the low-volume
threshold with additional time to further invest in their EHR
infrastructure to gain experience in implementing and utilizing an EHR
infrastructure to meet their needs and prepare for their potential
participation in MIPS in future years while not being subject to the
possibility of a negative payment adjustment. We believe that
clinicians and patients benefit from the utilization and capabilities
of an EHR infrastructure and would continue to utilize this technology.
In addition, we do not believe we should delay implementation of this
policy as it reduces the burden on individual MIPS eligible clinicians
and those in small practices and in some rural areas. The intention of
the Year 1 pick-your-pace policies were to set the foundation for MIPS
to support long-term, high quality patient care through feedback by
lowering the barriers to participation. Year 2 continues this
transition as we are providing a gradual ramp-up of the program and of
the performance thresholds. For the low-volume threshold, we are
finalizing our proposal to increase the threshold, which excludes more
eligible clinicians from MIPS. Specifically, we are finalizing our
proposal to exclude an individual eligible clinician or group that,
during the low-volume threshold determination period, has Medicare Part
B allowed charges less than or equal to $90,000 or provides care for
200 or fewer Part B-enrolled Medicare beneficiaries.
Comment: Many commenters did not support the proposed low-volume
threshold because it is based on the amount of Medicare billings from
clinicians or number of beneficiaries. Instead, the commenters offered
[[Page 53591]]
recommendations for alternative ways of applying the low-volume
threshold. Many commenters recommended that we exclude all practices
with 15 or fewer clinicians. Several commenters recommended redefining
the low-volume threshold so that it would mirror the policy for non-
patient facing eligible clinicians by excluding a group from MIPS if 75
percent or more of its eligible clinicians individually fall below the
low-volume threshold or if the group's average Medicare allowed charges
or Medicare patient population falls below the threshold. The
commenters noted that this would align status determinations across the
Quality Payment Program and reduce complexity and burden. One commenter
recommended excluding: Practices with less than $100,000 per clinician
in Medicare charges not including Part B drug costs; practices with 10
or fewer clinicians; and rural clinicians practicing in an area with
fewer than 100 clinicians per 100,000 population. The commenter further
encouraged us to consider excluding specialists who practice in ZIP
codes or other geographic areas with low per capita numbers of
clinicians in their specialty per population. One commenter recommended
that we establish 2 different low-volume thresholds for primary care
and specialty care clinicians. Another commenter recommended using a
percentage of Medicare charges to total charges and a percentage of
Medicare patients to total patients as opposed to the use of claims and
patients. One commenter noted that the low-volume threshold's inclusion
of beneficiaries creates an incentive for clinicians to turn away
Medicare beneficiaries in order to fall below the low-volume threshold.
Another commenter recommended that we exclude all clinicians who have
elected to have non-participation status for Medicare. As an
alternative to raising the low-volume threshold, one commenter
recommended that we reduce the reporting requirement for small
practices or for those practices falling between the previous
thresholds of $30,000 and 100 beneficiaries and the new thresholds of
$90,000 and 200 beneficiaries. Several
commenters specifically did not support that a group could meet the
low-volume threshold based on services provided by a small percentage
of the clinicians in the group. A few commenters recommended that we
exclude individuals who do not meet the low-volume threshold, even if
the group practice otherwise met the low-volume threshold.
Response: We note that some of the suggestions provided are not
compliant with the statute, specifically the suggestions to base the
low-volume threshold exclusion on practice size, practice location, and
specialty characteristics. We note that section 1848(q)(1)(C)(iv) of
the Act requires the Secretary to select a low-volume threshold to
apply for the purposes of this exclusion which may include one or more
of the following: (1) The minimum number, as determined by the
Secretary, of Part B-enrolled individuals who are treated by the
eligible clinician for a particular performance period; (2) the minimum
number, as determined by the Secretary, of items and services furnished
to Part B-enrolled individuals by the eligible clinician for a
particular performance period; and (3) the minimum amount, as
determined by the Secretary, of allowed charges billed by the eligible
clinician for a particular performance period. We do not believe the
statute provides discretion in establishing exclusions other than the
three exclusions specified above. Additionally, regarding the
commenter's suggestion to use a percentage of Medicare charges to total
charges and a percentage of Medicare patients to total patients, as
opposed to a minimum number of claims and patients, we will take this
suggestion under consideration for future rulemaking. In regard to the
commenter's suggestion to exclude from MIPS all clinicians that have
non-participation status within Medicare, we note that these clinicians
may still fall within the definition of a MIPS eligible clinician at
Sec. 414.1305. However, as provided in Sec. 414.1310(d), in no case
will a MIPS payment adjustment apply to the items and services
furnished during a year by clinicians who are not MIPS eligible
clinicians.
We note that the low-volume threshold is different from the other
exclusions in that it is not determined solely based on the individual
NPI status; rather, it is based on both the TIN/NPI (to determine an exclusion
at the individual level) and TIN (to determine an exclusion at the
group level) status. In regard to group-level reporting, the group, as
a whole, is assessed to determine if the group (TIN) exceeds the low-
volume threshold. Thus, eligible clinicians (TIN/NPI) who do not exceed
the low-volume threshold at the individual reporting level and would
otherwise be excluded from MIPS participation at the individual level,
would be required to participate in MIPS at the group level if such
eligible clinicians are part of a group reporting at the group level
that exceeds the low-volume threshold. In the CY 2017 Quality Payment
Program final rule (81 FR 77071), we considered aligning how MIPS
exclusions would be applied at the group level. We recognized that
alignment would provide a uniform application across exclusions and
offer simplicity, but we also believed that it is critical to ensure
that there are opportunities encouraging coordination, teamwork, and
shared responsibility within groups. In order to encourage
coordination, teamwork, and shared responsibility at the group level,
we finalized that we would assess the low-volume threshold so that all
clinicians within the group have the same status: all clinicians
collectively exceed the low-volume threshold or they do not exceed the
low-volume threshold. We appreciate the other concerns and
recommendations provided by the commenters. We received a range of
suggestions and considered the various options. We are finalizing our
proposal to exclude an individual MIPS eligible clinician or group
that, during the low-volume threshold determination period, has
Medicare Part B allowed charges less than or equal to $90,000 or
provides care for 200 or fewer Part B-enrolled Medicare beneficiaries.
In this final rule with comment period, we are requesting additional
comments regarding the application of low-volume threshold at the group
level.
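To illustrate how the low-volume threshold status described above
follows the level of participation, the following sketch (not part of
the rule text) assumes status flags have already been computed at the
TIN/NPI and TIN levels; all names are hypothetical:

    # Hypothetical sketch: the low-volume status that applies to a clinician
    # depends on whether the practice reports at the individual or group level.
    def excluded_under_low_volume_threshold(reporting_level,
                                            tin_npi_below_threshold,
                                            tin_below_threshold):
        if reporting_level == "individual":
            return tin_npi_below_threshold   # assessed as TIN/NPI
        if reporting_level == "group":
            return tin_below_threshold       # the whole TIN shares one status
        raise ValueError("unknown reporting level")

    # A clinician who is individually below the threshold still participates
    # when the group reports at the group level and the TIN exceeds the threshold.
    print(excluded_under_low_volume_threshold("individual", True, False))  # True
    print(excluded_under_low_volume_threshold("group", True, False))       # False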
Comment: Many commenters supported the proposed policy to provide
clinicians the ability to opt-in to the MIPS if they meet or exceed
one, but not all, of the low-volume threshold determinations, including
as defined by dollar amount, beneficiary count, or, if established,
items and services beginning with the 2019 MIPS performance period.
Other commenters supported applying the opt-in based on the Medicare
Part B charges criterion, but not the Medicare beneficiary criterion.
Several commenters supported the proposal to allow opt-in but requested
that the policy be retroactively applied to the 2017 MIPS performance
period. A few commenters supported the proposed opt-in option but
recommended that we establish separate performance benchmarks for
excluded individuals or groups that opt-in. Other commenters
recommended that we shield opt-in clinicians so that they can avoid a
negative payment adjustment or other disadvantages of participation.
Response: We appreciate the support of the proposed policy to
provide clinicians the ability to opt-in to the MIPS if they meet or
exceed one, but not all, of the low-volume threshold
[[Page 53592]]
determinations, including as defined by dollar amount, beneficiary
count, or, if established, items and services beginning with the 2019
MIPS performance period. However, we are not finalizing this proposal
for the 2019 MIPS performance period. We are concerned that we will not
be able to operationalize this policy, as currently proposed, in a
manner that imposes a low burden on MIPS eligible clinicians.
Specifically, our goal is to
implement a process whereby a clinician can be made aware of their low-
volume threshold status and make an informed decision on whether they
will participate in MIPS or not. We believe it is critical to implement
a process that provides the least burden to clinicians in communicating
this decision to us. Therefore, in this final rule with comment period,
we are seeking additional comments on the best approach of implementing
a low-volume threshold opt-in policy. We plan to revisit this policy in
the 2018 notice-and-comment rulemaking cycle. This additional time and
the additional public comments will give us the opportunity to explore
how best to implement this policy and to perform additional analyses.
We do not agree that we should allow any MIPS eligible clinician that
meets the low-volume threshold exclusion under any criterion to opt-in
to MIPS, as doing so may impact our ability to create quality
performance benchmarks that meet our sample size requirements. For
example, if particularly small practices or solo practitioners with low
Part B beneficiary volumes opt-in, such clinicians may lack sufficient sample
size to be scored on many quality measures, especially measures that do
not apply to all of a MIPS eligible clinician's patients. In addition,
we do not believe MIPS eligible clinicians who opt-in should have
different performance benchmarks or be shielded from a negative payment
adjustment. If the MIPS eligible clinician decides to opt-in, then they
are committing to participating in the entire program, which would
include being assessed on the same criteria as other MIPS eligible
clinicians.
Comment: A few commenters opposed the proposed policy to provide
clinicians the ability to opt-in to the MIPS if they meet or exceed
one, but not all, of the low-volume threshold determinations, including
as defined by dollar amount, beneficiary count, or, if established,
items and services beginning with the 2019 MIPS performance period. One
commenter believed that an opt-in policy would complicate the program's
ability to accurately evaluate clinician performance, which may result
in unequal outcomes based on clinician participation at the individual-
or group-level and specialty types. The commenter recommended that we
fully evaluate the effect of the opt-in policy prior to implementing
any changes.
Response: We agree with the commenters' concerns and acknowledge
that allowing an opt-in option may present additional complexity and
could inadvertently create a model where only high-performers opt-in.
Therefore, we are not finalizing this proposal for the 2019 MIPS
performance period. Rather, we are seeking further comment on the best
approach to implementing the low-volume opt-in policy. This additional
time will give us the opportunity to perform additional analyses. We
intend to revisit this policy in the 2018 notice-and-comment rulemaking
cycle.
Comment: Several commenters supported the current low-volume
threshold assessment period and proposal to use a 30-day claims run
out. One commenter agreed with retaining the low-volume threshold
status if triggered during the first 12-month determination period
regardless of the status resulting from the second 12-month
determination period. Another commenter did not support the use of a
determination period for low-volume threshold that is outside of the
performance period and believed that only data overlapping the
performance period should be used to determine low-volume threshold
status.
Response: We appreciate the commenters' support of the low-volume
threshold determination period and the proposed use of a 30-day claims
run out. We believe that it is beneficial for MIPS eligible clinicians
to know whether they are excluded under the low-volume threshold prior
to the start of the performance period. In order to identify these MIPS
eligible clinicians prior to the start of the performance period, we
must use historical data that is outside of the performance period. We
refer commenters to the CY 2017 Quality Payment Program final rule (81
FR 77069 through 77070) for a full discussion of this policy.
Final Action: After consideration of the public comments, we are
finalizing our proposal to define at Sec. 414.1305 an individual
eligible clinician or group that does not exceed the low-volume
threshold as an individual eligible clinician or group that, during the
low-volume threshold determination period, has Medicare Part B allowed
charges less than or equal to $90,000 or provides care for 200 or fewer
Part B-enrolled Medicare beneficiaries. In addition, for performance
periods occurring in 2018 and future years, we are finalizing a
modification to the low-volume threshold determination period, in which
the initial 12-month segment of the low-volume threshold determination
period would span from the last 4 months of a calendar year 2 years
prior to the performance period followed by the first 8 months of the
next calendar year and include a 30-day claims run out; and the second
12-month segment of the low-volume threshold determination period would
span from the last 4 months of a calendar year, 1 year prior to the
performance period followed by the first 8 months of the performance
period in the next calendar year and include a 30-day claims run out.
In addition, in this final rule with comment period, we are seeking
further comment on the best approach to implementing a low-volume
threshold opt-in policy. We welcome suggestions on ways to implement
the low-volume threshold opt-in that does not add additional burden to
clinicians. We also are interested in receiving feedback on ways to
mitigate our concern that only high-performers will choose to opt-in.
We also are soliciting comment on whether our current application of
the low-volume threshold to groups is still appropriate. We refer
readers to the CY 2017 Quality Payment Program final rule (81 FR 77062
through 77070) for a discussion on how the low-volume threshold is
currently applied to groups.
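For illustration only (not part of the rule text), the two
determination-period segments finalized above can be computed for a
given performance year as follows; the helper below is hypothetical and
uses the September-through-August spans described above:

    # Hypothetical sketch of the two 12-month determination-period segments.
    from datetime import date, timedelta

    def low_volume_determination_segments(performance_year):
        first = (date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31))
        second = (date(performance_year - 1, 9, 1), date(performance_year, 8, 31))
        run_out = timedelta(days=30)  # 30-day claims run out after each segment
        return [{"start": s, "end": e, "claims_run_out_through": e + run_out}
                for (s, e) in (first, second)]

    # For the 2018 performance period: September 1, 2016 through August 31,
    # 2017 and September 1, 2017 through August 31, 2018, each followed by a
    # 30-day claims run out.
    for segment in low_volume_determination_segments(2018):
        print(segment)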
3. Group Reporting
a. Background
As discussed in the CY 2017 Quality Payment Program final rule, we
established the following requirements for groups (81 FR 77072):
• Individual eligible clinicians and individual MIPS
eligible clinicians will have their performance assessed as a group as
part of a single TIN associated with two or more eligible clinicians
(including at least one MIPS eligible clinician), as identified by an
NPI, who have reassigned their Medicare billing rights to the TIN (at
Sec. 414.1310(e)(1)).
• A group must meet the definition of a group at all times
during the performance period for the MIPS payment year in order to
have its performance assessed as a group (at Sec. 414.1310(e)(2)).
• Individual eligible clinicians and individual MIPS
eligible clinicians within a group must aggregate their performance
data across the TIN to have their performance assessed as a group (at
Sec. 414.1310(e)(3)).
[[Page 53593]]
• A group that elects to have its performance assessed as a
group will be assessed as a group across all four MIPS performance
categories (at Sec. 414.1310(e)(4)).
We stated in the CY 2017 Quality Payment Program final rule that
groups attest to their group size for purposes of using the CMS Web
Interface or identifying as a small practice (81 FR 77057). In section
II.C.1.c. of this final rule with comment period, we are finalizing our
proposal to modify the way in which we determine small practice size by
establishing a process under which CMS would utilize claims data to
make small practice size determinations. In addition, in section
II.C.4.e. of this final rule with comment period, we are finalizing our
proposal to establish a policy under which CMS would utilize claims
data to determine group size for groups of 10 or fewer eligible
clinicians seeking to form or join a virtual group.
As noted in the CY 2017 Quality Payment Program final rule, group
size would be determined before exclusions are applied (81 FR 77057).
We note that group size determinations are based on the number of NPIs
associated with a TIN, which would include individual eligible
clinicians (NPIs) who may be excluded from MIPS participation and do
not meet the definition of a MIPS eligible clinician.
b. Registration
As discussed in the CY 2017 Quality Payment Program final rule (81
FR 77072 through 77073), we established the following policies:
• A group must adhere to an election process established and
required by CMS (Sec. 414.1310(e)(5)), which includes:
++ Groups will not be required to register to have their
performance assessed as a group except for groups submitting data on
performance measures via participation in the CMS Web Interface or
groups electing to report the CAHPS for MIPS survey for the quality
performance category. For all other data submission mechanisms, groups
must work with appropriate third party intermediaries as necessary to
ensure the data submitted clearly indicates that the data represent a
group submission rather than an individual submission.
++ In order for groups to elect participation via the CMS Web
Interface or administration of the CAHPS for MIPS survey, such groups
must register by June 30 of the applicable performance period (that is,
June 30, 2018, for performance periods occurring in 2018). We note that
groups participating in APMs that require APM Entities to report using
the CMS Web Interface are not required to register for the CMS Web
Interface or administer the CAHPS for MIPS survey separately from the
APM.
When groups submit data utilizing third party intermediaries, such
as a qualified registry, QCDR, or EHR, we are able to obtain group
information from the third party intermediary and discern whether the
data submitted represents group submission or individual submission
once the data are submitted.
In the CY 2017 Quality Payment Program final rule (81 FR 77072
through 77073), we discussed the implementation of a voluntary
registration process if technically feasible. Since the publication of
the CY 2017 Quality Payment Program final rule, we have determined that
it is not technically feasible to develop and build a voluntary
registration process. Until further notice, we are not implementing a
voluntary registration process.
Also, in the CY 2017 Quality Payment Program final rule (81 FR
77075), we expressed our commitment to pursue the active engagement of
stakeholders throughout the process of establishing and implementing
virtual groups. Please refer to the CY 2018 Quality Payment Program
proposed rule (82 FR 30027) for a full discussion of the public
comments and additional stakeholder feedback we received in response to
the CY 2017 Quality Payment Program final rule and additional
stakeholder feedback gathered through hosting several virtual group
listening sessions and convening user groups.
As discussed in the CY 2018 Quality Payment Program proposed rule
(82 FR 30027), one of the overarching themes we have heard is that we
make an option available to groups that would allow a portion of a
group to report as a separate subgroup on measures and activities that
are more applicable to the subgroup and be assessed and scored
accordingly based on the performance of the subgroup. In future
rulemaking, we intend to explore the feasibility of establishing group-
related policies that would permit participation in MIPS at a subgroup
level and create such functionality through a new identifier.
Therefore, we solicited public comment on the ways in which
participation in MIPS at the subgroup level could be established. In
addition, in this final rule with comment period, we are seeking
comment on additional ways to define a group, not solely based on a
TIN. For example, a group could be redefined to allow practice sites to
be reflected and/or to allow specialties within a TIN to create groups.
We received several comments on subgroup level policies and will
take them into consideration for future rulemaking.
4. Virtual Groups
a. Background
There are generally three ways to participate in MIPS: (1)
Individual-level reporting; (2) group-level reporting; and (3) virtual
group-level reporting. In the CY 2018 Quality Payment Program proposed
rule (82 FR 30027 through 30034), we proposed to establish requirements
for MIPS participation at the virtual group level.
Section 1848(q)(5)(I) of the Act provides for the use of voluntary
virtual groups for certain assessment purposes, including the election
of certain practices to be a virtual group and the requirements for the
election process. Section 1848(q)(5)(I)(i) of the Act provides that
MIPS eligible clinicians electing to be a virtual group must: (1) Have
their performance assessed for the quality and cost performance
categories in a manner that applies the combined performance of all the
MIPS eligible clinicians in the virtual group to each MIPS eligible
clinician in the virtual group for the applicable performance period;
and (2) be scored for the quality and cost performance categories based
on such assessment for the applicable performance period. Section
1848(q)(5)(I)(ii) of the Act requires the Secretary to establish and
implement, in accordance with section 1848(q)(5)(I)(iii) of the Act, a
process that allows an individual MIPS eligible clinician or a group
consisting of not more than 10 MIPS eligible clinicians to elect, for a
performance period, to be a virtual group with at least one other such
individual MIPS eligible clinician or group. Virtual groups may be
based on appropriate classifications of providers, such as by
geographic areas or by provider specialties defined by nationally
recognized specialty boards of certification or equivalent
certification boards.
Section 1848(q)(5)(I)(iii) of the Act provides that the virtual
group election process must include the following requirements: (1) An
individual MIPS eligible clinician or group electing to be in a virtual
group must make their election prior to the start of the applicable
performance period and cannot change their election during the
performance period; (2) an individual MIPS eligible clinician or group
may elect to be in no more than one virtual group for a performance
period, and, in the case of a group, the election applies to all MIPS
eligible clinicians in the
[[Page 53594]]
group; (3) a virtual group is a combination of TINs; (4) requirements
providing for formal written agreements among individual MIPS eligible
clinicians and groups electing to be a virtual group; and (5) such
other requirements as the Secretary determines appropriate.
b. Definition of a Virtual Group
(1) Generally
As noted above, section 1848(q)(5)(I)(ii) of the Act requires the
Secretary to establish and implement, in accordance with section
1848(q)(5)(I)(iii) of the Act, a process that allows an individual MIPS
eligible clinician or group consisting of not more than 10 MIPS
eligible clinicians to elect, for a performance period, to be a virtual
group with at least one other such individual MIPS eligible clinician
or group. Given that section 1848(q)(5)(I)(iii)(III) of the Act
provides that a virtual group is a combination of TINs, we interpreted
the references to an ``individual'' MIPS eligible clinician in section
1848(q)(5)(I)(ii) of the Act to mean a solo practitioner, which, for
purposes of section 1848(q)(5)(I) of the Act, we proposed to define as
a MIPS eligible clinician (as defined at Sec. 414.1305) who bills
under a TIN with no other NPIs billing under such TIN (82 FR 30027).
Also, we recognized that a group (TIN) may include not only NPIs
who meet the definition of a MIPS eligible clinician, but also NPIs who
do not meet the definition of a MIPS eligible clinician at Sec.
414.1305 or who are excluded from the definition of a MIPS eligible
clinician under Sec. 414.1310(b) or (c). Thus, we interpreted the
references to a group ``consisting of not more than 10'' MIPS eligible
clinicians in section 1848(q)(5)(I)(ii) of the Act to mean a group with
10 or fewer eligible clinicians (as such terms are defined at Sec.
414.1305) (82 FR 30027). Under Sec. 414.1310(d), the MIPS payment
adjustment would apply only to NPIs in the virtual group who meet the
definition of a MIPS eligible clinician at Sec. 414.1305 and who are
not excluded from the definition of a MIPS eligible clinician under
Sec. 414.1310(b) or (c). We noted that groups must include at least
one MIPS eligible clinician in order to meet the definition of a group
at Sec. 414.1305 and thus be eligible to form or join a virtual group.
We proposed to define a virtual group at Sec. 414.1305 as a
combination of two or more TINs composed of a solo practitioner (that
is, a MIPS eligible clinician (as defined at Sec. 414.1305) who bills
under a TIN with no other NPIs billing under such TIN) or a group with
10 or fewer eligible clinicians (as such terms are defined at Sec.
414.1305) under the TIN that elects to form a virtual group with at
least one other such solo practitioner or group for a performance
period for a year (82 FR 30027 through 30028).
With regard to the low-volume threshold, we recognized that such
determinations are made at the individual and group level, but not at
the virtual group level (82 FR 30031). For example, if an individual
MIPS eligible clinician is part of a practice that is participating in
MIPS (that is, reporting) at the individual level, then the low-volume
threshold determination is made at the individual level. Whereas, if an
individual MIPS eligible clinician is part of a practice that is
participating in MIPS (that is, reporting) at the group level, then the
low-volume threshold determination is made at the group level and would
be applicable to such MIPS eligible clinician regardless of the low-
volume threshold determination made at the individual level. Similarly,
if a solo practitioner or a group with 10 or fewer eligible clinicians
seeks to participate in MIPS (that is, report) at the virtual group
level, then the low-volume threshold determination made at the
individual or group level, respectively, would be applicable to such
solo practitioner or group. Thus, solo practitioners or groups with 10
or fewer eligible clinicians that are determined not to exceed the low-
volume threshold at the individual or group level, respectively, would
not be eligible to participate in MIPS as an individual, group, or
virtual group, as applicable.
Given that a virtual group must be a combination of TINs, we
recognized that the composition of a virtual group could include, for
example, one solo practitioner (NPI) who is practicing under multiple
TINs (TIN A and TIN B), in which the solo practitioner would be able to
form a virtual group with himself or herself based on each TIN
assigned to the solo practitioner (TIN A/NPI and TIN B/NPI) (82 FR
30032). As discussed in section II.C.4.b.(3) of this final rule with
comment period, we did not propose to establish a limit on the number
of TINs that may form a virtual group at this time.
Lastly, we noted that qualification as a virtual group for purposes
of MIPS does not change the application of the physician self-referral
law to a financial relationship between a physician and an entity
furnishing designated health services, nor does it change the need for
such a financial relationship to comply with the physician self-
referral law (82 FR 30028).
We refer readers to section II.C.4.b.(3) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
(2) Application to Groups Containing Participants in a MIPS APM or an
Advanced APM
Additionally, we recognized that there are circumstances in which a
TIN may have one portion of its NPIs participating under the generally
applicable MIPS scoring criteria while the remaining portion of NPIs
under the TIN is participating in a MIPS APM or an Advanced APM under
the MIPS APM scoring standard (82 FR 30028). To clarify, for all
groups, including those containing participants in a MIPS APM or an
Advanced APM, the group's performance assessment will be based on the
performance of the entire TIN. Generally, for groups other than those
containing participants in a MIPS APM or an Advanced APM, each MIPS
eligible clinician under the TIN (TIN/NPI) receives a MIPS adjustment
based on the entire group's performance assessment (entire TIN). For
groups containing participants in a MIPS APM or an Advanced APM, only
the portion of the TIN that is being scored for MIPS according to the
generally applicable scoring criteria (TIN/NPI) receives a MIPS
adjustment based on the entire group's performance assessment (entire
TIN). The remaining portion of the TIN that is being scored according
to the APM scoring standard (TIN/NPI) receives a MIPS adjustment based
on that standard. We noted that such participants may be excluded from
MIPS if they achieve QP or Partial QP status. For more information, we
refer readers to the CY 2017 Quality Payment Program final rule (81 FR
77058, 77330 through 77331).
We proposed to apply a similar policy to groups, including those
containing participants in a MIPS APM or an Advanced APM, that are
participating in MIPS as part of a virtual group (82 FR 30028).
Specifically, for groups other than those containing participants in a
MIPS APM or an Advanced APM, each MIPS eligible clinician under the TIN
(TIN/NPI) would receive a MIPS adjustment based on the virtual group's
combined performance assessment (combination of TINs). For groups
containing participants in a MIPS APM or an Advanced APM, only the
portion of the TIN that is being scored for MIPS according to the
generally applicable scoring criteria (TIN/NPI) would receive a MIPS
adjustment based on the virtual
[[Page 53595]]
group's combined performance assessment (combination of TINs). As
discussed in section II.C.6.g. of this final rule with comment period,
we proposed to use waiver authority to ensure that the remaining
portion of the TIN that is being scored according to the APM scoring
standard (TIN/NPI) would receive a MIPS adjustment based on that
standard. We noted that such participants may be excluded from MIPS if
they achieve QP or Partial QP status.
We refer readers to section II.C.4.b.(3) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
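For illustration only (not part of the rule text), the proposed routing
of the MIPS payment adjustment for clinicians in a TIN that joins a
virtual group can be sketched as follows; the inputs are hypothetical
status flags:

    # Hypothetical sketch: which assessment drives a clinician's MIPS payment
    # adjustment when the clinician's TIN participates in a virtual group.
    def adjustment_basis(scored_under_apm_scoring_standard, qp_or_partial_qp):
        if qp_or_partial_qp:
            return "may be excluded from MIPS (QP or Partial QP status)"
        if scored_under_apm_scoring_standard:
            return "APM scoring standard"
        return "virtual group combined performance (combination of TINs)"

    print(adjustment_basis(False, False))  # virtual group combined performance
    print(adjustment_basis(True, False))   # APM scoring standard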
(3) Appropriate Classifications
As noted above, the statute provides the Secretary with discretion
to establish appropriate classifications regarding the composition of
virtual groups, such as by geographic area or by specialty. We
recognized that virtual groups would each have unique characteristics
and varying patient populations. However, we believe it is important
for virtual groups to have the flexibility to determine their own
composition at this time, and, as a result, we did not propose to
establish any such classifications regarding virtual group composition
(82 FR 30028).
We further noted that the statute does not limit the number of TINs
that may form a virtual group, and we did not propose to establish such
a limit at this time (82 FR 30028). We did consider proposing to
establish such a limit, such as 50 or 100 participants. In particular,
we were concerned that virtual groups of too substantial a size (for
example, 10 percent of all MIPS eligible clinicians in a given
specialty or sub-specialty) may make it difficult to compare
performance between and among clinicians. We believe that limiting the
number of virtual group participants could eventually assist virtual
groups as they aggregate their performance data across the virtual
group. However, we believe that as we initially implement virtual
groups, it is important for virtual groups to have the flexibility to
determine their own size, and thus, the better approach is not to place
such a limit on virtual group size. We will monitor the ways in which
solo practitioners and groups with 10 or fewer eligible clinicians form
virtual groups and may propose to establish appropriate classifications
regarding virtual group composition or a limit on the number of TINs
that may form a virtual group in future rulemaking as necessary.
We solicited public comment on these proposals, as well as our
approach of not establishing appropriate classifications (such as by
geographic area or by specialty) regarding virtual group composition or
a limit on the number of TINs that may form a virtual group at this
time.
We noted that we received public comments in response to the CY
2017 Quality Payment Program final rule and additional stakeholder
feedback by hosting several virtual group listening sessions and
convening user groups (82 FR 30028). We refer readers to the CY 2018
Quality Payment Program proposed rule (82 FR 30027) for a summary of
these comments and our response.
The following is a summary of the public comments received
regarding our proposals, as well as our approach of not establishing
appropriate classifications (such as by geographic area or by
specialty) regarding virtual group composition or a limit on the number
of TINs that may form a virtual group at this time.
Comment: A majority of commenters supported the concept of virtual
groups, as defined, as a participation option available under MIPS.
Response: We appreciate the support from the commenters.
Comment: Several commenters did not support virtual groups being
limited to groups consisting of not more than 10 eligible clinicians
and requested that CMS expand virtual group participation to groups
with more than 10 eligible clinicians.
Response: As noted above, we interpreted the references to a group
``consisting of not more than 10'' MIPS eligible clinicians in section
1848(q)(5)(I)(ii) of the Act to mean a group with 10 or fewer eligible
clinicians (as such terms are defined at Sec. 414.1305) (82 FR 30027).
We do not have discretion to expand virtual group participation to
groups with more than 10 MIPS eligible clinicians.
Comment: One commenter recommended that CMS seek a technical
amendment to section 1848(q)(5)(I) of the Act to replace the group
eligibility threshold of 10 or fewer MIPS eligible clinicians with a
patient population requirement of at least 5,000 to improve the
validity of the reporting of virtual groups.
Response: We appreciate the feedback from the commenter and will
take the commenter's recommendation into consideration.
Comment: A few commenters recommended that CMS allow a large,
multispecialty group under one TIN to split into clinically relevant
reporting groups, or allow multiple TINs within a health care delivery
system to report as a virtual group.
Response: In the CY 2017 Quality Payment Program final rule (81 FR
77058), we noted that except for groups containing APM participants, we
do not permit groups to ``split'' TINs if they choose to participate in
MIPS as a group. As we considered the option of permitting groups to
split TINs, we identified several issues that would make it challenging
and cumbersome to implement a split TIN option, such as the
administrative burden of groups having to monitor and track which NPIs
are reporting under which portion of a split TIN, and the
identification of appropriate criteria to be used for determining the
ways in which groups would be able to split TINs (for example, based on
specialty, practice site, location, health IT systems, or other
factors). However, we recognize that there are certain advantages to
allowing TINs to split, such as those identified by the commenters. We
intend to
explore the option of permitting groups to split TINs, and any changes
would be proposed in future rulemaking. Thus, we consider a group to
mean an entire single TIN that elects to participate in MIPS at the
group or virtual group level. However, for multiple TINs that are
within a health care delivery system, such TINs would be able to form a
virtual group provided that each TIN has 10 or fewer eligible
clinicians.
Comment: A significant portion of commenters expressed concern
regarding the ineligibility of virtual group participation for solo
practitioners and groups that do not exceed the low-volume threshold.
The commenters noted that such solo practitioners and groups would not
be able to benefit from participating as part of a virtual group and
noted that the purpose of virtual group formation was to provide such
solo practitioners and groups, which are otherwise unable to
participate on their own, with an opportunity to join with other such
entities and collectively become eligible to participate in MIPS as
part of a virtual group. A few commenters recommended that the low-
volume threshold be conducted at the virtual group level.
Response: In regard to stakeholder concerns pertaining to the low-
volume threshold eligibility determinations made at the individual and
group level that would prevent certain solo practitioners and groups
from being eligible to form a virtual group, we believe there are
statutory constraints that do not allow us to establish a low-
[[Page 53596]]
volume threshold at the virtual group level. The statute includes
specific references to ``MIPS eligible clinicians'' throughout the
virtual group provisions, and we believe that such references were
intended to limit virtual group participation to ``MIPS eligible
clinicians'', that is, eligible clinicians who meet the definition of a
MIPS eligible clinician and are not excluded under the low-volume
threshold or any other statutory exclusion. As a result, we do not
believe we are able to establish a low-volume threshold at the virtual
group level because a solo practitioner or group would need to be
considered eligible to participate in MIPS to form or join a virtual
group.
Comment: Many commenters supported the flexibility provided for
virtual group composition, such as to not have parameters pertaining to
geographic area, specialty, size, or other factors, while other
commenters had concerns that such flexibility could circumvent bona
fide clinical reasons for collaboration, incentivize practice
consolidation, and cause an increase in costs without improving quality
and health outcomes.
Response: We appreciate the support from the commenters regarding
the flexibility we are providing to virtual groups pertaining to
composition. In regard to concerns from other commenters regarding such
flexibility, we note that TINs vary in size, clinician composition,
patient population, resources, technological capabilities, geographic
area, and other characteristics, and may join or form virtual groups
for various reasons, and we do not want to inhibit virtual group
formation due to parameters. At this juncture of virtual group
implementation, we believe that virtual groups should have the
flexibility to determine their composition and size, and thus we do not
want to limit the ways in which virtual groups are composed. However,
we encourage TINs within virtual groups to assess means for promoting
and enhancing the coordination of care and improving the quality of
care and health outcomes. We will monitor the ways in which solo
practitioners and groups with 10 or fewer eligible clinicians form
virtual groups and may propose to establish appropriate classifications
regarding virtual group composition or a limit on the number of TINs
that may form a virtual group in future rulemaking as necessary.
Comment: One commenter requested that CMS continue to examine the
formation and implementation of virtual groups, ensuring equity and
taking into account variability in patient case-mix and practice needs.
Response: We appreciate the feedback from the commenter and will
take the commenter's recommendation into consideration.
Comment: One commenter indicated that the Quality Payment Program
encourages eligible clinicians to aggregate data, share financial risk,
and work together as virtual groups, which promotes joint
accountability and creates delivery systems that are better able to
improve the cost, quality, and experience of care. As a result, the
commenter recommended that CMS issue detailed guidance and develop
tools, resources, technical assistance, and other materials for
guidance as to how clinicians can form virtual groups.
Response: We appreciate the feedback from the commenter and note
that we intend to publish a virtual group toolkit that provides
information pertaining to requirements and outlines the steps a virtual
group would pursue during the election process, which can be accessed
on the CMS Web site at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html.
Comment: A few commenters recommended that only MIPS eligible
clinicians be considered as part of a virtual group as written in the
statute. The commenters indicated that CMS continues to include all
eligible clinicians versus only MIPS eligible clinicians in the count
to determine TIN size and requested that CMS instead rely on the ``not
more than 10 MIPS eligible clinicians'' language in the statute, which
would allow more groups to take advantage of the virtual group
reporting option and focus more directly on the number of clinicians
who are participating in and contributing to MIPS rather than
clinicians who are excluded.
Response: We note that our proposed definition of a virtual group
reflects the statutory premise of virtual group participation
pertaining to MIPS eligible clinicians. In the CY 2017 final rule (81
FR 77539), we define a MIPS eligible clinician (identified by a unique
billing TIN and NPI combination used to assess performance) at Sec.
414.1305 to mean any of the following (excluding those identified at
Sec. 414.1310(b)): (1) A physician as defined in section 1861(r) of
the Act; (2) a physician assistant, a nurse practitioner, and clinical
nurse specialist as such terms are defined in section 1861(aa)(5) of
the Act; (3) a certified registered nurse anesthetist as defined in
section 1861(bb)(2) of the Act; and (4) a group that includes such
clinicians. The definition of a MIPS eligible clinician includes a
group and we define a group at Sec. 414.1305 to mean a single TIN with
two or more eligible clinicians (including at least one MIPS eligible
clinician), as identified by their individual NPI, who have reassigned
their billing rights to the TIN. Since a group is included under the
definition of a MIPS eligible clinician, which would include two or
more eligible clinicians (including at least one MIPS eligible
clinician), our definition of a virtual group is consistent with
statute.
In regard to determining TIN size for purposes of virtual group
eligibility, we count each NPI associated with a TIN in order to
determine whether or not a TIN exceeds the threshold of 10 NPIs, which
is an approach that we believe provides continuity over time if the
definition of a MIPS eligible clinician is expanded in future years
under section 1848(q)(1)(C)(i)(II) of the Act to include other eligible
clinicians. We considered an alternative approach for determining TIN
size, which would determine TIN size for virtual group eligibility
based on NPIs who are MIPS eligible clinicians. However, when we
conducted a comparative assessment of such an alternative approach
against both the current definition of a MIPS eligible clinician (as
defined at Sec. 414.1305) and a potential expanded definition of a MIPS
eligible clinician, we found that the alternative approach could create
confusion as to which factors determine virtual group eligibility and
could cause the pool of virtual group eligible TINs to be significantly
reduced if the definition of a MIPS eligible clinician were expanded,
which may impact a larger portion of
virtual groups that intend to participate in MIPS as a virtual group
for consecutive performance periods. Such impact would be the result of
the current definition of a MIPS eligible clinician being narrower than
the potential expanded definition of a MIPS eligible clinician. For
example, under the approach recommended by the commenters, a TIN with a total of 15 NPIs
(10 MIPS eligible clinicians and 5 eligible clinicians) would not
exceed the threshold of 10 MIPS eligible clinicians and would be
eligible to participate in MIPS as a virtual group for the 2018
performance period; however, if the definition of a MIPS eligible
clinician were expanded through rulemaking for the 2019 performance
period, such TIN, with no change in TIN size (15 NPIs), would exceed
the threshold of 10 MIPS eligible clinicians if 1 or more of the 5
eligible clinicians met the expanded definition
[[Page 53597]]
of a MIPS eligible clinician and would no longer be eligible to
participate in MIPS as part of a virtual group. We did not pursue such
an approach given that it did not align with our objective of
establishing virtual group eligibility policies that are simple to
understand and provide continuity.
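For illustration only (not part of the rule text), the finalized
approach of counting every NPI billing under a TIN, rather than only
MIPS eligible clinicians, when testing virtual group eligibility can be
sketched as follows; names are hypothetical:

    # Hypothetical sketch: TIN size for virtual group eligibility counts all
    # NPIs under the TIN, including eligible clinicians excluded from MIPS.
    def tin_eligible_for_virtual_group(npis_under_tin):
        return len(npis_under_tin) <= 10

    # Under this approach, the TIN in the example above (15 NPIs: 10 MIPS
    # eligible clinicians plus 5 other eligible clinicians) exceeds the limit,
    # and the count does not shift if the MIPS eligible clinician definition
    # is later expanded.
    print(tin_eligible_for_virtual_group([f"NPI-{i:04d}" for i in range(15)]))  # False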
Final Action: After consideration of the public comments received,
we are finalizing with modification our proposal to define a solo
practitioner at Sec. 414.1305 as a practice consisting of one eligible
clinician (who is also a MIPS eligible clinician). We are also
finalizing with modification our proposal to define a virtual group at
Sec. 414.1305 as a combination of two or more TINs assigned to one or
more solo practitioners or one or more groups consisting of 10 or fewer
eligible clinicians, or both, that elect to form a virtual group for a
performance period for a year. We are modifying the definition (i) to
remove the redundant phrases ``with at least one other such solo
practitioner or group'' and unnecessary parenthetical cross references;
(ii) to accurately characterize TINs as being ``assigned to'' (rather
than ``composed of'') a solo practitioner or group; and (iii) to
clearly indicate that a virtual group can be composed of ``one or
more'' solo practitioners or groups of 10 or fewer eligible clinicians.
We note that we are modifying our proposed definitions for greater
clarity and consistency with established MIPS terminology.
We are also finalizing our proposal that for groups (TINs) that
participate in MIPS as part of a virtual group and do not contain
participants in a MIPS APM or an Advanced APM, each MIPS eligible
clinician under the TIN (each TIN/NPI) will receive a MIPS payment
adjustment based on the virtual group's combined performance assessment
(combination of TINs). For groups (TINs) that participate in MIPS as
part of a virtual group and contain participants in a MIPS APM or an
Advanced APM, only the portion of the TIN that is being scored for MIPS
according to the generally applicable scoring criteria will receive a
MIPS adjustment based on the virtual group's combined performance
assessment (combination of TINs). As discussed in section II.C.6.g. of
this final rule with comment period, the remaining portion of the TIN
that is being scored according to the APM scoring standard will receive
a MIPS payment adjustment based on that standard. We note that such
participants may be excluded from MIPS if they achieve QP or Partial QP
status.
At this juncture, we are not establishing additional
classifications (such as by geographic area or by specialty) regarding
virtual group composition or a limit on the number of TINs that may
form a virtual group.
c. Virtual Group Identifier for Performance
To ensure that we have accurately captured all of the MIPS eligible
clinicians participating in a virtual group, we proposed that each MIPS
eligible clinician who is part of a virtual group would be identified
by a unique virtual group participant identifier (82 FR 30028 through
30029). The unique virtual group participant identifier would be a
combination of three identifiers: (1) Virtual group identifier
(established by CMS; for example, XXXXXX); (2) TIN (9 numeric
characters; for example, XXXXXXXXX); and (3) NPI (10 numeric
characters; for example, 1111111111). For example, a virtual group
participant identifier could be VG-XXXXXX, TIN-XXXXXXXXX, NPI-
1111111111. We solicited public comment on this proposal.
The following is a summary of the public comments received
regarding our proposal.
Comment: A majority of commenters expressed support for our
proposal.
Response: We appreciate the support from the commenters.
Comment: One commenter indicated that a virtual group identifier
would lead to administrative simplification and more accurate
identification of MIPS eligible clinicians caring for Medicare
beneficiaries, which could be used in recognizing and eliminating
redundancies in the payer system.
Response: We appreciate the support from the commenter. We believe
that our proposed virtual group identifier will accurately identify
each MIPS eligible clinician participating in a virtual group and be
easily implemented by virtual groups.
Comment: One commenter thanked CMS for not requiring virtual groups
to form new TINs, which would add to the administrative burden for
entities electing to become virtual groups, while another commenter
requested clarification regarding whether or not members of a virtual
group would need to submit a Reassignment of Benefits Form (CMS-855R)
to the MAC and reassign their billing rights to the elected virtual
group.
Response: We note that a virtual group is recognized as an official
collective entity for reporting purposes, but is not a distinct legal
entity for billing purposes. As a result, a virtual group does not need
to establish a new TIN for purposes of participation in MIPS, nor does
any eligible clinician in the virtual group need to reassign their
billing rights to a new or different TIN.
Comment: A few commenters indicated that EHR developers need to
know the specifications for the virtual group identifier as soon as
technically feasible in order for such specifications to be included in
their development efforts and implemented early in 2018. One commenter
indicated that qualified registries submit data at the TIN level for
group reporting and that individual NPI data is effectively obscured,
and requested clarification regarding the type of information qualified
registries would report for virtual groups, such as the virtual group
identifier alone (VG-XXXXXX) or the combination of all three
identifiers (VG-XXXXXX, TIN-XXXXXXXXX, NPI-1111111111).
Response: For virtual groups that are determined to have met the
virtual group formation criteria and are approved to participate in
MIPS as official virtual groups, we will notify the officially
designated virtual group representatives of their official virtual
group status and issue a virtual group identifier. We intend to notify
virtual groups of their official status as close to the start of the
performance period as technically feasible. Virtual groups will need to
provide their virtual group identifiers to the third party
intermediaries that will be submitting their performance data, such as
qualified registries, QCDRs, and/or EHRs. Qualified registries, QCDRs,
and EHRs will include the virtual group identifier alone (VG-XXXXXX) in
the file submissions. For virtual groups that elect to participate in
MIPS via the CMS Web Interface or administer the CAHPS for MIPS survey,
they will register via the CMS Web Interface and include the virtual
group identifier alone (VG-XXXXXX) during registration. We intend to
update submission specifications prior to the start of the applicable
submission period.
Comment: One commenter expressed concerns regarding the burden of
using a virtual group identifier and the added administrative
complexity to the claims process of using layered identifiers and
modifiers. The commenter requested that CMS simplify the reporting
process for MIPS eligible clinicians, groups, and virtual groups rather
than increase the administrative burden.
Response: We appreciate the feedback from the commenter. We do not
believe that the virtual group identifier would be burdensome for
virtual groups to implement. We believe that our proposed virtual group
identifier is the most appropriate and simple approach,
[[Page 53598]]
which will allow for the accurate identification of each MIPS eligible
clinician participating in a virtual group and be easily implemented by
virtual groups.
Final Action: After consideration of the public comments received,
we are finalizing our proposal that each MIPS eligible clinician who is
part of a virtual group will be identified by a unique virtual group
participant identifier, which will be a combination of three
identifiers: (1) Virtual group identifier (established by CMS; for
example, XXXXXX); (2) TIN (9 numeric characters; for example,
XXXXXXXXX); and (3) NPI (10 numeric characters; for example,
1111111111). For example, a virtual group participant identifier could
be VG-XXXXXX, TIN-XXXXXXXXX, NPI-1111111111.
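For illustration only (not part of the rule text), the finalized
identifier can be assembled from its three components as in the
hypothetical helper below; CMS establishes the virtual group identifier
itself:

    # Hypothetical sketch: building a virtual group participant identifier
    # from the CMS-issued virtual group identifier, the TIN, and the NPI.
    def virtual_group_participant_identifier(vg_id, tin, npi):
        assert len(tin) == 9 and tin.isdigit(), "TIN is 9 numeric characters"
        assert len(npi) == 10 and npi.isdigit(), "NPI is 10 numeric characters"
        return f"VG-{vg_id}, TIN-{tin}, NPI-{npi}"

    # Example with placeholder values only.
    print(virtual_group_participant_identifier("XXXXXX", "123456789", "1111111111"))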
d. Application of Group-Related Policies to Virtual Groups
(1) Generally
In the CY 2017 Quality Payment Program final rule (81 FR 77070
through 77072), we finalized various requirements for groups under MIPS
at Sec. 414.1310(e), under which groups electing to report at the
group level are assessed and scored across the TIN for all four
performance categories. In the CY 2018 Quality Payment Program proposed
rule (82 FR 30029), we proposed to apply our previously finalized and
proposed group-related policies to virtual groups, unless otherwise
specified. We recognized that there are instances in which we may need
to clarify or modify the application of certain previously finalized or
proposed group-related policies to virtual groups, such as the
definition of a non-patient facing MIPS eligible clinician; small
practice, rural area and HPSA designations; and groups that contain
participants in a MIPS APM or an Advanced APM (see section II.C.4.b. of
this final rule with comment period). More generally, such policies may
include, but are not limited to, those that require a calculation of
the number of NPIs across a TIN (given that a virtual group is a
combination of TINs), the application of any virtual group
participant's status or designation to the entire virtual group, and
the applicability and availability of certain measures and activities
to any virtual group participant and to the entire virtual group.
We refer readers to section II.C.4.d.(5) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
(2) Application of Non-Patient Facing Status to Virtual Groups
With regard to the applicability of the non-patient facing MIPS
eligible clinician-related policies to virtual groups, in the CY 2017
Quality Payment Program final rule (81 FR 77048 through 77049), we
defined the term non-patient facing MIPS eligible clinician at Sec.
414.1305 as an individual MIPS eligible clinician that bills 100 or
fewer patient facing encounters (including Medicare telehealth services
defined in section 1834(m) of the Act) during the non-patient facing
determination period, and a group provided that more than 75 percent of
the NPIs billing under the group's TIN meet the definition of a non-
patient facing individual MIPS eligible clinician during the non-
patient facing determination period. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30021, 30029), we proposed to modify the
definition of a non-patient facing MIPS eligible clinician to include
clinicians in a virtual group, provided that more than 75 percent of
the NPIs billing under the virtual group's TINs meet the definition of
a non-patient facing individual MIPS eligible clinician during the non-
patient facing determination period. We noted that other policies
previously established and proposed in the proposed rule for non-
patient facing groups would apply to virtual groups (82 FR 30029). For
example, as discussed in section II.C.1.e. of this final rule with
comment period, virtual groups determined to be non-patient facing
would have their advancing care information performance category
automatically reweighted to zero.
We refer readers to section II.C.4.d.(5) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
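For illustration only (not part of the rule text), the more-than-75-
percent test described above for a virtual group's non-patient facing
status can be sketched as follows; the inputs are hypothetical:

    # Hypothetical sketch: a virtual group is non-patient facing if more than
    # 75 percent of the NPIs billing under its TINs individually meet the
    # non-patient facing definition during the determination period.
    def virtual_group_is_non_patient_facing(npi_is_non_patient_facing):
        share = sum(npi_is_non_patient_facing) / len(npi_is_non_patient_facing)
        return share > 0.75

    # 8 of 10 NPIs (80 percent) -> the virtual group is non-patient facing and
    # its advancing care information category would be reweighted to zero.
    print(virtual_group_is_non_patient_facing([True] * 8 + [False] * 2))  # True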
(3) Application of Small Practice Status to Virtual Groups
With regard to the application of small practice status to virtual
groups, in the CY 2017 Quality Payment Program final rule (81 FR
77188), we defined the term small practices at Sec. 414.1305 as
practices consisting of 15 or fewer clinicians and solo practitioners.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30019,
30029), we proposed that a virtual group would be identified as a small
practice if the virtual group does not have 16 or more eligible
clinicians. In addition, we proposed for performance periods occurring
in 2018 and future years to identify small practices by utilizing
claims data; for performance periods occurring in 2018, we would
identify small practices based on 12 months of data starting from
September 1, 2016 to August 31, 2017 (82 FR 30019 through 30020). We
refer readers to section II.C.1.c. of this final rule with comment
period for the discussion of our proposal to identify small practices
by utilizing claims data. We refer readers to section II.C.4.d.(3) of
this final rule with comment period for the discussion regarding how
small practice status would apply to virtual groups for scoring under
MIPS.
We refer readers to section II.C.4.d.(5) of this final rule with
comment period for a summary of the public comments we received on our
proposal to apply small practice status to virtual groups and our
responses.
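For illustration only (not part of the rule text), the proposed small
practice test for virtual groups reduces to a simple count; the helper
below is hypothetical:

    # Hypothetical sketch: a virtual group is identified as a small practice
    # if it does not have 16 or more eligible clinicians (that is, 15 or fewer).
    def virtual_group_is_small_practice(eligible_clinician_count):
        return eligible_clinician_count < 16

    print(virtual_group_is_small_practice(15))  # True
    print(virtual_group_is_small_practice(16))  # False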
(4) Application of Rural Area and HPSA Practice Status to Virtual
Groups
In the CY 2018 Quality Payment Program proposed rule (82 FR 30020
through 30021), we proposed to determine rural area and HPSA practice
designations at the individual, group, and virtual group level.
Specifically, for performance periods occurring in 2018 and future
years, we proposed that an individual MIPS eligible clinician, a group,
or a virtual group with multiple practices under its TIN or TINs within
a virtual group would be designated as a rural area or HPSA practice if
more than 75 percent of NPIs billing under the individual MIPS eligible
clinician or group's TIN or within a virtual group, as applicable, are
designated in a ZIP code as a rural area or HPSA. We noted that other
policies previously established and proposed in the proposed rule for
rural area and HPSA groups would apply to virtual groups (82 FR 30029).
We note that in section II.C.7.b.(1)(b) of this final rule with comment
period, we describe our scoring proposals for practices that are in a
rural area.
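The rural area and HPSA designation described above can likewise be restated as a simple threshold check. The following Python sketch is illustrative only and is not part of this final rule; the ZIP code designations and the mapping of NPIs to ZIP codes are hypothetical assumptions.

# Illustrative sketch only; not part of the regulation. The designated ZIP
# codes and the NPI-to-ZIP mapping are hypothetical.

RURAL_OR_HPSA_ZIPS = {"59001", "67554", "82443"}  # hypothetical designations

def is_rural_or_hpsa_practice(zip_by_npi: dict[str, str]) -> bool:
    # The entity (individual, group, or virtual group) is designated as a
    # rural area or HPSA practice when more than 75 percent of its billing
    # NPIs are in a ZIP code designated as a rural area or HPSA.
    if not zip_by_npi:
        return False
    designated = sum(1 for zip_code in zip_by_npi.values()
                     if zip_code in RURAL_OR_HPSA_ZIPS)
    return designated / len(zip_by_npi) > 0.75

# Example: 4 of 5 NPIs (80 percent) bill from designated ZIP codes.
example = {"NPI-1": "59001", "NPI-2": "59001", "NPI-3": "67554",
           "NPI-4": "82443", "NPI-5": "10001"}
print(is_rural_or_hpsa_practice(example))  # True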
We refer readers to section II.C.4.d.(5) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
(5) Applicability and Availability of Measures and Activities to
Virtual Groups
As noted above, we proposed to apply our previously finalized and
proposed group-related policies to virtual groups, unless otherwise
specified (82 FR 30029). In particular, we recognized that the measures
and activities applicable and available to groups would also be
applicable and available to virtual groups. Virtual groups would be
required to meet the reporting
[[Page 53599]]
requirements for each measure and activity, and the virtual group would
be responsible for ensuring that their measure and activity data are
aggregated across the virtual group (for example, across their TINs).
We noted that other previously finalized and proposed group-related
policies pertaining to the four performance categories would apply to
virtual groups.
The following is a summary of the public comments received
regarding our proposals.
Comment: Many commenters supported our proposal to generally apply
MIPS group-related policies to virtual groups, unless otherwise
specified. The commenters indicated that such alignment would ease
undue administrative and reporting burden.
Response: We appreciate the support from the commenters.
Comment: Several commenters supported our proposal to modify the
definition of a non-patient facing MIPS eligible clinician to include
clinicians in a virtual group provided that more than 75 percent of the
NPIs billing under the virtual group's TINs meet the definition of a
non-patient facing individual MIPS eligible clinician.
Response: We appreciate the support from the commenters.
Comment: One commenter expressed support for our proposal that a
virtual group would be identified as a small practice if the virtual
group does not have 16 or more eligible clinicians, while another
commenter expressed support for our proposal that a virtual group would
be designated as a rural area or HPSA practice at the virtual group
level if more than 75 percent of the NPIs billing under the virtual
group's TINs are in a ZIP code designated as a rural area or HPSA.
Response: We appreciate the support from the commenters regarding
our proposals.
Comment: Several commenters did not support our proposal that a
virtual group would be identified as a small practice if the virtual
group does not have 16 or more eligible clinicians. The commenters
expressed concerns that the benefits of forming a virtual group could
be outweighed by the loss of the proposed small practice bonus points
for virtual groups with more than 15 eligible clinicians, and that the
elimination of small practice bonus points for such virtual groups
would undermine the establishment of small practice policies afforded
to such entities in statute. The commenters indicated that the
formation of virtual groups would involve substantial administrative
burdens for small practices, and that each TIN within a virtual group
would otherwise qualify as a small practice and should not lose the
accommodations to which they would otherwise be entitled. The
commenters suggested that any virtual group, regardless of size, be
considered a small practice. The commenters further stated that small
practices that just slightly exceed the low-volume threshold may have
the most challenges and difficulty succeeding in the Quality Payment
Program.
Response: We note that virtual groups with 15 or fewer eligible
clinicians will continue to be considered a small practice as a
collective entity. The small practice status is applied based on the
collective entity as a whole and not based on the small practice status
of each TIN within a virtual group. If a virtual group has 16 or more
eligible clinicians, it would not be considered to have a small
practice status as a collective whole. We believe that our approach is
consistent with statute and not unfair to small practices that are a
part of virtual groups with 16 or more eligible clinicians. Section
1848(q)(2)(B)(iii) of the Act specifically refers to small practices of
15 or fewer clinicians, and we do not believe it is appropriate to
apply such designation to a virtual group as a collective single entity
when a virtual group has 16 or more eligible clinicians. We encourage
small practices to weigh the benefit of the special provisions specific
to small practices against the benefits of virtual group participation
when considering whether to form a virtual group that has 16 or more
eligible clinicians. We refer readers to section II.C.7.b.(1)(c) of
this final rule with comment for the discussion regarding the scoring
of small practices. We want to ensure that small practices have the
ability to determine the most appropriate means for participating in
MIPS, whether that be as individuals, as a group, or as part of a
virtual group. The formation of virtual groups provides for a comprehensive
measurement of performance, shared responsibility, and an opportunity
to effectively and efficiently coordinate resources to achieve
requirements under each performance category. A small practice may
elect to join a virtual group in order to potentially increase its
performance under MIPS or elect to participate in MIPS as a group and
take advantage of other flexibilities and benefits afforded to small
practices. We note that if a virtual group has 16 or more eligible
clinicians, it will not be considered a small practice.
Comment: A few commenters did not support our proposal that a
virtual group would be designated as a rural area or HPSA practice at
the virtual group level if more than 75 percent of the NPIs billing
under the virtual group's TINs are in a ZIP code designated as a rural
area or HPSA. The commenters requested that CMS reduce the threshold
pertaining to rural area and HPSA practice status for virtual groups
and recommended instead that a virtual group be designated as a rural
area or HPSA practice at the virtual group level if more than 50
percent of the NPIs billing under the virtual group's TINs are in a ZIP
code designated as a rural area or HPSA.
Response: We disagree with the recommendation from the commenters.
In order for a virtual group to be designated as a rural area or HPSA
practice, we believe that a significant portion of a virtual group's
NPIs would need to be in a ZIP code designated as a rural area or HPSA.
Our proposal provides a balance between requiring more than half of a
virtual group's NPIs to have such designations and requiring all NPIs
within a virtual group to have such designations. Also, our proposed
threshold pertaining to rural area and HPSA practice status for virtual
groups aligns with other group-related and virtual group policies,
which creates continuity among policies and makes virtual group
implementation easier for TINs forming virtual groups.
Comment: One commenter urged CMS to eliminate the all-cause
hospital readmission measure from the quality performance category
score for virtual groups with 16 or more eligible clinicians. The
commenter noted that virtual groups would be newly formed and unlikely
to have the same infrastructure and care coordination functionality
that established groups under a single TIN may have in place, and that
factoring the all-cause hospital readmission measure into their score
would be inappropriate.
Response: We recognize that small practices, including solo
practitioners, would not be assessed on the all-cause hospital
readmission measure as individual TINs. However, we believe that the
all-cause hospital readmission measure is an appropriate measure, when
applicable, to assess performance under the quality performance
category of virtual groups with 16 or more eligible clinicians that
meet the minimum case volume of 200 cases. For virtual groups that do not meet
the minimum case volume of 200, the all-cause hospital readmission
measure would not be scored. Also, we believe that our approach for
assessing performance based on the all-cause hospital readmission
measure for virtual groups with 16 or more eligible clinicians is
appropriate because it reflects the same
[[Page 53600]]
policy for groups, which was developed to reduce burden (the measure is
based on administrative claims data and does not require a separate
submission of data) and to ensure that we do not unfairly penalize MIPS
eligible clinicians or groups that did not have adequate time to
prepare to succeed in the program, while still rewarding high
performers.
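The applicability conditions described in this response reduce to two checks. The following Python sketch is illustrative only and is not part of this final rule; the inputs are hypothetical.

# Illustrative sketch only; not part of the regulation. Shows the two
# conditions under which the all-cause hospital readmission measure is scored
# for a virtual group: 16 or more eligible clinicians and at least the minimum
# case volume of 200 cases.

def readmission_measure_is_scored(eligible_clinicians: int, attributed_cases: int) -> bool:
    return eligible_clinicians >= 16 and attributed_cases >= 200

print(readmission_measure_is_scored(20, 450))  # True: measure is scored
print(readmission_measure_is_scored(20, 150))  # False: below the 200-case minimum
print(readmission_measure_is_scored(12, 450))  # False: fewer than 16 eligible clinicians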
Comment: One commenter supported our proposal to generally apply
our group-related policies to virtual groups, specifically with regard
to the improvement activities performance category requirements, under
which groups and virtual groups would receive credit for an improvement
activity as long as one NPI under the group's TIN or virtual group's
TINs performs an improvement activity for a continuous 90-day period.
Response: We appreciate the support from the commenter.
Comment: One commenter requested clarification regarding how the
proposed group-related policy that at least 50 percent of the practice
sites within a TIN must be certified or recognized as a patient-
centered medical home or comparable specialty practice in order to
receive full credit in the improvement activities performance category
applies to virtual groups. Another commenter recommended that a virtual
group receive full credit for the improvement activities performance
category if at least 50 percent of its eligible clinicians are
certified or recognized as a patient-centered medical home or
comparable specialty practice.
Response: As discussed in section II.C.7.a.(5)(c) of this final
rule with comment period, in order for a group to receive full credit
as a certified or recognized patient-centered medical home or
comparable specialty practice under the improvement activities
performance category, at least 50 percent of the practice sites within
the TIN must be recognized as a patient-centered medical home or
comparable specialty practice. In order for a virtual group to receive
full credit as a certified or recognized patient-centered medical home
or comparable specialty practice under the improvement activities
performance category, at least 50 percent of the practice sites within
the TINs that are part of a virtual group must be certified or
recognized as a patient-centered medical home or comparable specialty
practice.
Comment: One commenter requested that CMS clarify how a virtual
group would be expected to meet the advancing care information
performance category requirements and whether all TINs within a virtual
group would be required to have certified EHR technology.
Response: In general and unless stated otherwise, for purposes of
the advancing care information performance category, the policies
pertaining to groups will apply to virtual groups. We refer readers to
section II.C.6.f. of this final rule with comment period for more
information on the generally applicable policies for the advancing care
information performance category.
We note that as with virtual group reporting for the other MIPS
performance categories, to report as a virtual group, the virtual group
will need to aggregate data for all of the individual MIPS eligible
clinicians within the virtual group for which its TINs have data in
CEHRT. For solo practitioners and groups that choose to report as a
virtual group, performance on the advancing care information
performance category objectives and measures will be reported and
evaluated at the virtual group level. The virtual group will submit the
data that its TINs have captured in CEHRT and exclude data collected
from a non-certified EHR system. While we do not expect that every MIPS
eligible clinician in a virtual group will have access to CEHRT, or
that every measure will apply to every clinician in the virtual group,
only those data contained in CEHRT should be reported for the advancing
care information performance category.
For example, the virtual group calculation of the numerators and
denominators for each measure must reflect all of the data from the
individual MIPS eligible clinicians (unless a clinician can be
excluded) that have been captured in CEHRT for the given advancing care
information performance category measure. If the groups (not including
solo practitioners) that are part of a virtual group have CEHRT that is
capable of supporting group level reporting, the virtual group would
submit the aggregated data across the TINs produced by the CEHRT. If a
group (TIN) that is part of a virtual group does not have CEHRT that is
capable of supporting group level reporting, such group would aggregate
the data by adding together the numerators and denominators for each
MIPS eligible clinician within the group for whom the group has data
captured in CEHRT. If an individual MIPS eligible clinician meets the
criteria to exclude a measure, their data can be excluded from the
calculation of that particular measure only.
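The aggregation described in the preceding paragraph amounts to summing numerators and denominators across the clinicians for whom data are captured in CEHRT, leaving out clinicians who meet a measure's exclusion criteria. The following Python sketch is illustrative only and is not part of this final rule; the data structures and example values are hypothetical.

# Illustrative sketch only; not part of the regulation. Aggregates numerators
# and denominators for one advancing care information measure across the TINs
# in a virtual group: only data captured in CEHRT is included, and clinicians
# who meet the measure's exclusion criteria are left out of that measure only.

from dataclasses import dataclass

@dataclass
class ClinicianMeasureData:
    npi: str
    numerator: int
    denominator: int
    captured_in_cehrt: bool = True   # data must come from CEHRT to count
    excluded: bool = False           # clinician meets the measure's exclusion

def aggregate_measure(virtual_group: dict[str, list[ClinicianMeasureData]]) -> tuple[int, int]:
    # Sum numerators and denominators across every TIN in the virtual group.
    num = den = 0
    for tin, clinicians in virtual_group.items():
        for clinician in clinicians:
            if clinician.excluded or not clinician.captured_in_cehrt:
                continue
            num += clinician.numerator
            den += clinician.denominator
    return num, den

example = {
    "TIN-A": [ClinicianMeasureData("NPI-1", 30, 40),
              ClinicianMeasureData("NPI-2", 10, 25, excluded=True)],
    "TIN-B": [ClinicianMeasureData("NPI-3", 55, 60)],
}
print(aggregate_measure(example))  # (85, 100): NPI-2 is excluded from this measure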
We recognize that it can be difficult to identify unique patients
across a virtual group for the purposes of aggregating data on the
advancing care information performance category measures, particularly
when TINs within a virtual group may be using multiple CEHRT systems.
For the 2018 performance period, TINs within virtual groups may be
using systems which are certified to different CEHRT Editions. We
consider ``unique patients'' to be individual patients treated by a TIN
within a virtual group who would typically be counted as one patient in
the denominator of an advancing care information performance category
measure. This patient may see multiple MIPS eligible clinicians within
a TIN that is part of a virtual group, or may see MIPS eligible
clinicians at multiple practice sites of a TIN that is part of a
virtual group. When aggregating performance on advancing care
information measures for virtual group level reporting, we do not
require that a virtual group determine that a patient seen by one MIPS
eligible clinician (or at one location in the case of TINs working with
multiple CEHRT systems) is not also seen by another MIPS eligible
clinician in the TIN that is part of the virtual group or captured in a
different CEHRT system. Virtual groups are provided with some
flexibility as to the method for counting unique patients in the
denominators to accommodate such scenarios where aggregation may be
hindered by system capabilities across multiple CEHRT platforms. We
refer readers to section II.C.6.f.(4) of this final rule with comment
for the discussion regarding certification requirements.
Comment: One commenter requested that CMS require that a majority
of eligible clinicians within a virtual group participate in activities
to which the virtual group attests in the improvement activities and
advancing care information performance categories in order for the
virtual group to receive credit for those activities.
Response: We note that a virtual group would need to meet the
group-related requirements under each performance category. For the
improvement activities performance category, a virtual group would meet
the reporting requirements if at least one NPI within the virtual group
completed an improvement activity for a minimum of a continuous 90-day
period within CY 2018. In regard to the advancing care information
performance category, a virtual group would need to fulfill the
required base score measures for a minimum of 90 days in order to earn
points for the advancing care information performance category.
Additionally, virtual groups are able to submit performance score
measures and bonus score measures in order to
[[Page 53601]]
increase the number of points earned under the advancing care
information performance category.
Comment: A few commenters requested that virtual groups have the
same flexibility afforded to groups regarding the ability to report on
different measures and utilize multiple submission mechanisms under
each performance category.
Response: We note that virtual groups will have the same
flexibility as groups to report on measures and activities that are
applicable and available to them. As discussed in section II.C.6.a.(1)
of this final rule with comment period, the submission mechanisms
available to groups under each performance category will also be
available to virtual groups. Similarly, virtual groups will also have
the same option as groups to utilize multiple submission mechanisms,
but only one submission mechanism per performance category for the 2018
performance period. However, starting with the 2019 performance period,
groups and virtual groups will be able to utilize multiple submission
mechanisms for each performance category.
Comment: A few commenters recommended that CMS establish
performance feedback for virtual groups and each TIN within a virtual
group that includes complete performance data for each performance
category. One commenter requested that CMS provide instructions
regarding the appeal and audit process for virtual groups and TINs
within a virtual group.
Response: We note that performance feedback for virtual groups will
be similar to feedback reports for groups, which are based on the
performance of the entire group for each performance category. We note
that virtual groups are required to aggregate their data across the
virtual group, and will be assessed and scored at the virtual group
level. Each TIN within the virtual group will receive feedback on their
performance based on participation in MIPS as a virtual group, in which
each TIN under the virtual group will have the same performance
feedback applicable to the four performance categories. At this
juncture, it is not technically feasible nor do we believe it is
appropriate for us to de-aggregate data at the virtual group level and
reassess performance data at the TIN or TIN/NPI level without requiring
TINs and/or TIN/NPIs to submit data separately. We refer readers to
section II.C.9.a. of this final rule with comment period for the
discussion pertaining to performance feedback.
Moreover, we note that virtual groups will have an opportunity to
request a targeted review of their MIPS payment adjustment factor(s)
for a performance period. In regard to an audit process, virtual groups
would be subject to the MIPS data validation and auditing requirements
as described in section II.C.9.c. of this final rule with comment
period.
Final Action: After consideration of public comments received, we
are finalizing our proposal to apply our previously finalized and
proposed group-related policies to virtual groups, unless otherwise
specified.
We are also finalizing our proposal to modify the definition of a
non-patient facing MIPS eligible clinician at Sec. 414.1305 to include
a virtual group, provided that more than 75 percent of the NPIs billing
under the virtual group's TINs meet the definition of a non-patient
facing individual MIPS eligible clinician during the non-patient facing
determination period. Other previously finalized and proposed policies
related to non-patient facing MIPS eligible clinicians would apply to
such virtual groups.
We are also finalizing our proposal that a virtual group will be
considered a small practice if a virtual group consists of 15 or fewer
eligible clinicians. Other previously finalized and proposed policies
related to small practices would apply to such virtual groups.
We are also finalizing our proposal that a virtual group will be
designated as a rural area or HPSA practice if more than 75 percent of
the NPIs billing under the virtual group's TINs are in a ZIP code
designated as a rural area or HPSA. Other previously finalized and
proposed policies related to rural area or HPSA practices would apply
to such virtual groups.
In response to public comments, we are also finalizing that a
virtual group will be considered a certified or recognized patient-
centered medical home or comparable specialty practice under Sec.
414.1380(b)(3)(iv) if at least 50 percent of the practice sites within
the TINs are certified or recognized as a patient-centered medical home
or comparable specialty practice.
e. Virtual Group Election Process
(1) Generally
As noted in section II.C.4.a. of this final rule with comment
period, section 1848(q)(5)(I)(iii)(I) and (II) of the Act provides that
the virtual group election process must include certain requirements,
including that: (1) An individual MIPS eligible clinician or group
electing to be in a virtual group must make their election prior to the
start of the applicable performance period and cannot change their
election during the performance period; and (2) an individual MIPS
eligible clinician or group may elect to be in no more than one virtual
group for a performance period, and, in the case of a group, the
election applies to all MIPS eligible clinicians in the group.
Accordingly, we proposed to codify at Sec. 414.1315(a) that a solo
practitioner (as defined at Sec. 414.1305) or group consisting of 10
or fewer eligible clinicians (as such terms are defined at Sec.
414.1305) electing to be in a virtual group must make their election
prior to the start of the applicable performance period and cannot
change their election during the performance period (82 FR 30029
through 30030). Virtual group participants may elect to be in no more
than one virtual group for a performance period, and, in the case of a
group, the election applies to all MIPS eligible clinicians in the
group.
We noted that in the case of a TIN within a virtual group being
acquired or merged with another TIN, or no longer operating as a TIN
(for example, a group practice closes), during a performance period,
such solo practitioner's or group's performance data would continue to
be attributed to the virtual group (82 FR 30032). The remaining parties
to the virtual group would continue to be part of the virtual group
even if only one solo practitioner or group remains. We consider a TIN
that is acquired or merged with another TIN, or no longer operating as
a TIN (for example, a group practice closes), to mean a TIN that no
longer exists or operates under the auspices of such TIN during a
performance period.
In order to provide support and reduce burden, we intend to make
technical assistance (TA) available, to the extent feasible and
appropriate, to support clinicians who choose to come together as a
virtual group. Clinicians can access the TA infrastructure and
resources that they may already be utilizing. For Quality Payment
Program year 3, we intend to provide an electronic election process if
technically feasible. We proposed that clinicians who do not elect to
contact their designated TA representative would still have the option
of contacting the Quality Payment Program Service Center to obtain
information pertaining to virtual groups (82 FR 30030).
We refer readers to section II.C.4.e.(3) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
[[Page 53602]]
(2) Virtual Group Election Deadline
For performance periods occurring in 2018 and future years, we proposed
to establish a virtual group election period (82 FR 30030).
Specifically, we proposed to codify at Sec. 414.1315(a) that a solo
practitioner (as defined at Sec. 414.1305) or group consisting of 10
or fewer eligible clinicians (as such terms are defined at Sec.
414.1305) electing to be in a virtual group must make their election by
December 1 of the calendar year preceding the applicable performance
period. A virtual group representative would be required to make the
election, on behalf of the members of a virtual group, regarding the
formation of a virtual group for the applicable performance period, by
the election deadline. For example, a virtual group representative
would need to make an election, on behalf of the members of a virtual
group, by December 1, 2017 for the members of the virtual group to
participate in MIPS as a virtual group during the CY 2018 performance
period. We intend to publish the beginning date of the virtual group
election period applicable to performance periods occurring in 2018 and
future years in subregulatory guidance.
We refer readers to section II.C.4.e.(3) of this final rule with
comment period for a summary of the public comments we received on
these proposals and our responses.
(3) Virtual Group Eligibility Determinations and Formation
We proposed to codify at Sec. 414.1315(c) a two-stage virtual
group election process, stage 1 of which is optional, for performance
periods occurring in 2018 and 2019 (82 FR 30030 through 30032). Stage 1
pertains to virtual group eligibility determinations, and stage 2
pertains to virtual group formation. We noted that activity involved in
stage 1 is not required, but is a resource available to solo practitioners
and groups with 10 or fewer eligible clinicians. Solo practitioners and
groups that engage in stage 1 and are determined eligible for virtual
group participation would proceed to stage 2; otherwise, solo
practitioners and groups that do not engage in any activity during
stage 1 would begin the election process at stage 2. Engaging in stage
1 would provide solo practitioners and groups with the option to
confirm whether or not they are eligible to join or form a virtual
group before going to the lengths of executing formal written
agreements, submitting a formal election registration, allocating
resources for virtual group implementation, and other related
activities; whereas, by engaging directly in stage 2 as an initial
step, solo practitioners and groups might conduct all such efforts only
to have their election registration rejected with no recourse or
remaining time to amend and resubmit.
In stage 1, solo practitioners and groups with 10 or fewer eligible
clinicians interested in forming or joining a virtual group would have
the option to contact their designated TA representative in order to
obtain information pertaining to virtual groups and/or determine
whether or not they are eligible, as it relates to the practice size
requirement of a solo practitioner or a group of 10 or fewer eligible
clinicians, to participate in MIPS as a virtual group (Sec.
414.1315(c)(1)(i)). During stage 1 of the virtual group election
process, we would determine whether or not a TIN is eligible to form or
join a virtual group. In order for a solo practitioner to be eligible
to form or join a virtual group, the solo practitioner would need to
meet the definition of a solo practitioner at Sec. 414.1305 and not be
excluded from MIPS under Sec. 414.1310(b) or (c). In order for a group
to be eligible to form or join a virtual group, a group would need to
meet the definition of a group at Sec. 414.1305, have a TIN size that
does not exceed 10 eligible clinicians, and not be excluded from MIPS
under Sec. 414.1310(b) or (c). For purposes of determining TIN size
for virtual group participation eligibility, we coined the term
``virtual group eligibility determination period'' and defined it to
mean an analysis of claims data during an assessment period of up to 5
months that would begin on July 1 and end as late as November 30 of the
calendar year prior to the applicable performance period and includes a
30-day claims run out.
To capture a real-time representation of TIN size, we proposed to
analyze up to 5 months of claims data on a rolling basis, in which
virtual group eligibility determinations for each TIN would be updated
and made available monthly (82 FR 30030). We noted that an eligibility
determination regarding TIN size is based on a relative point in time
within the 5-month virtual group eligibility determination period, and
not made at the end of such 5-month determination period.
If at any time a TIN is determined to be eligible to participate in
MIPS as part of a virtual group, the TIN would retain that status for
the duration of the election period and the applicable performance
period. TINs could determine their status by contacting their
designated TA representative; otherwise, the TIN's status would be
determined at the time that the TIN's virtual group election is
submitted. For example, if a group contacted their designated TA
representative on October 20, 2017, the claims data analysis would
include the months of July through September of 2017, and, if
determined not to exceed 10 eligible clinicians, the TIN's size would
be determined at such time, and the TIN's eligibility status would be
retained for the duration of the election period and the CY 2018
performance period. If another group contacted their designated TA
representative on November 20, 2017, the claims data analysis would
include the months of July through October of 2017, and, if determined
not to exceed 10 eligible clinicians, the TIN's size would be
determined at such time, and the TIN's eligibility status would be
retained for the duration of the election period and the CY 2018
performance period.
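The rolling claims window illustrated by the two examples above can be sketched as follows. This Python sketch is illustrative only and is not part of this final rule; it simplifies the 30-day claims run out and assumes the inquiry is made during the eligibility determination period in the calendar year preceding the performance period.

# Illustrative sketch only; not part of the regulation. The claims analysis
# runs from July 1 of the year preceding the performance period through the
# last complete calendar month before the inquiry date; the 30-day claims run
# out and the underlying data sources are simplified away.

from datetime import date, timedelta

def eligibility_window(inquiry: date) -> tuple[date, date]:
    start = date(inquiry.year, 7, 1)
    # End of the last complete month before the inquiry date.
    end = date(inquiry.year, inquiry.month, 1) - timedelta(days=1)
    return start, end

def tin_size_eligible(npi_count: int) -> bool:
    # A TIN is eligible on size grounds with 10 or fewer clinicians (NPIs)
    # associated with the TIN; a solo practitioner's TIN has exactly one.
    return npi_count <= 10

print(eligibility_window(date(2017, 10, 20)))  # July 1 through September 30, 2017
print(eligibility_window(date(2017, 11, 20)))  # July 1 through October 31, 2017
print(tin_size_eligible(8))                    # True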
We believe such a virtual group determination period process
provides a relative representation of real-time TIN size for purposes
of virtual group eligibility and allows solo practitioners and groups
to know their real-time eligibility status immediately and plan
accordingly for virtual group implementation. It is anticipated that
starting in September of each calendar year prior to the applicable
performance period, solo practitioners and groups would be able to
contact their designated TA representative and inquire about virtual
group participation eligibility. We noted that TIN size determinations
are based on the number of NPIs associated with a TIN, which would
include clinicians (NPIs) who do not meet the definition of a MIPS
eligible clinician at Sec. 414.1305 or who are excluded from MIPS
under Sec. 414.1310(b) or (c).
For groups that do not choose to participate in stage 1 of the
election process (that is, the group does not request an eligibility
determination), we will make an eligibility determination during stage
2 of the election process. If a group began the election process at
stage 2 and if its TIN size is determined not to exceed 10 eligible
clinicians and not excluded based on the low-volume threshold exclusion
at the group level, the group is determined eligible to participate in
MIPS as part of a virtual group, and such virtual group eligibility
determination status would be retained for the duration of the election
period and applicable performance period. Stage 2 pertains to virtual
group formation. For stage 2, we proposed the following (82 FR
30031):
[[Page 53603]]
• TINs comprising a virtual group must establish a written
formal agreement between each member of a virtual group prior to an
election (Sec. 414.1315(c)(2)(i)).
• On behalf of a virtual group, the official designated
virtual group representative must submit an election by December 1 of
the calendar year prior to the start of the applicable performance
period (Sec. 414.1315(c)(2)(ii)). Such election will occur via email
to the Quality Payment Program Service Center using the following email
address for the 2018 and 2019 performance periods:
MIPS_VirtualGroups@cms.hhs.gov.
• The submission of a virtual group election must include,
at a minimum, information pertaining to each TIN and NPI associated
with the virtual group and contact information for the virtual group
representative (Sec. 414.1315(c)(2)(iii)). A virtual group
representative would submit the following type of information: Each TIN
associated with the virtual group; each NPI associated with a TIN that
is part of the virtual group; name of the virtual group representative;
affiliation of the virtual group representative to the virtual group;
contact information for the virtual group representative; and
confirmation through acknowledgment that a formal written agreement has
been established between each member of the virtual group (solo
practitioner or group) prior to election and each eligible clinician in
the virtual group is aware of participating in MIPS as a virtual group
for an applicable performance period. (An illustrative sketch of these
election data elements appears after this list.) Each party to the
virtual group agreement must retain a copy of the virtual group's
written agreement.
We noted that the virtual group agreement is subject to the MIPS data
validation and auditing requirements as described in section II.C.9.c.
of this final rule with comment period.
• Once an election is made, the virtual group representative must
contact their designated CMS contact, at least one time prior to the
start of an applicable submission period, to update any election
information that changed during the applicable performance period
(Sec. 414.1315(c)(2)(iv)). Virtual groups will use the Quality Payment
Program Service Center as their designated CMS contact; however, we
will define this further in subregulatory guidance.
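For illustration only, the minimum election information listed above could be organized as a simple data record such as the following Python sketch; it is not part of this final rule, does not describe any CMS system, and every field name and value is hypothetical.

# Illustrative sketch only; not part of the regulation or of any CMS system.
# Collects, as a plain data structure, the minimum election information
# described above for a submission to MIPS_VirtualGroups@cms.hhs.gov.

from dataclasses import dataclass

@dataclass
class VirtualGroupElection:
    tins: list[str]                       # each TIN associated with the virtual group
    npis_by_tin: dict[str, list[str]]     # each NPI associated with each TIN
    representative_name: str
    representative_affiliation: str       # affiliation to the virtual group
    representative_contact: str
    written_agreement_acknowledged: bool  # formal written agreement executed
    clinicians_notified: bool             # clinicians aware of participation

    def is_complete(self) -> bool:
        # Basic completeness check before submission.
        return (bool(self.tins)
                and all(self.npis_by_tin.get(tin) for tin in self.tins)
                and self.written_agreement_acknowledged
                and self.clinicians_notified)

election = VirtualGroupElection(
    tins=["TIN-A", "TIN-B"],
    npis_by_tin={"TIN-A": ["NPI-1", "NPI-2"], "TIN-B": ["NPI-3"]},
    representative_name="Jane Doe",
    representative_affiliation="Practice administrator, TIN-A",
    representative_contact="jane.doe@example.com",
    written_agreement_acknowledged=True,
    clinicians_notified=True,
)
print(election.is_complete())  # True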
For stage 2 of the election process, we would review all submitted
election information; confirm whether or not each TIN within a virtual
group is eligible to participate in MIPS as part of a virtual group;
identify the NPIs within each TIN participating in a virtual group that
are excluded from MIPS in order to ensure that such NPIs would not
receive a MIPS payment adjustment or, when applicable and when
information is available, would receive a payment adjustment based on a
MIPS APM scoring standard; calculate the low-volume threshold at the
individual and group levels in order to determine whether or not a solo
practitioner or group is eligible to participate in MIPS as part of a
virtual group; and notify virtual groups as to whether or not they are
considered official virtual groups for the applicable performance
period. For virtual groups that are determined to have met the virtual
group formation criteria and identified as an official virtual group
participating in MIPS for an applicable performance period, we would
contact the official designated virtual group representative via email
notifying the virtual group of its official virtual group status and
issuing a virtual group identifier for performance (as described in
section II.C.4.c. of this final rule with comment period) that would
accompany the virtual group's submission of performance data during the
submission period.
As we engaged in various discussions with stakeholders during the
rulemaking process through listening sessions and user groups,
stakeholders indicated that many solo practitioners and small groups
have limited resources and technical capacities, which may make it
difficult for the entities to form virtual groups without sufficient
time and technical assistance. Depending on the resources and technical
capacities of the entities, stakeholders conveyed that it may take
entities 3 to 18 months to prepare to participate in MIPS as a virtual
group. The majority of stakeholders indicated that virtual groups would
need at least 6 to 12 months prior to the start of the CY 2018
performance period to form virtual groups, prepare health IT systems,
and train staff to be ready for the implementation of virtual group
related activities by January 1, 2018.
We recognized that for the first year of virtual group formation
and implementation prior to the start of the CY 2018 performance
period, the timeframe for virtual groups to make an election by
registering would be relatively short, particularly from the date we
issue the publication of a final rule toward the end of the 2017
calendar year. To provide solo practitioners and groups with 10 or
fewer eligible clinicians with additional time to assemble and
coordinate resources, and form a virtual group prior to the start of
the CY 2018 performance period, we provided virtual groups with an
opportunity to make an election prior to the publication of our final
rule. On October 11, 2017, the election period began and we issued
information pertaining to the start date of the election process via
subregulatory guidance, which can be accessed on the CMS Web site at
https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html.
As discussed in section II.C.4.e. of this final rule with comment
period, we are extending the virtual group election period. Virtual
groups would have from October 11, 2017 to December 31, 2017 to make an
election for the 2018 performance year. However, any MIPS eligible
clinicians applying to form a virtual group that does not meet all
finalized virtual group requirements would not be permitted to
participate in MIPS as a virtual group.
As previously noted, solo practitioners and groups participating in
a virtual group would have the size of their TIN determined for
eligibility purposes. We recognized that the size of a TIN may
fluctuate during a performance period with eligible clinicians and/or
MIPS eligible clinicians joining or leaving a group. For solo
practitioners and groups that are determined eligible to form or join a
virtual group based on the one-time determination per applicable
performance period, any new eligible clinicians or MIPS eligible
clinicians that join the TIN during the performance period would
participate in MIPS as part of the virtual group. In such cases, we
recognized that a solo practitioner or group may exceed 1 eligible
clinician or 10 eligible clinicians, as applicable, associated with its
TIN during an applicable performance period, but such solo practitioner
or group would have been determined eligible to form or join a virtual
group given that the TIN did not have more than 1 eligible clinician or
10 eligible clinicians, as applicable, associated with its TIN at the
time of election. As previously noted, if any of the information
provided during the election period changes during an applicable
performance period, the virtual group representative would need to
contact the Quality Payment Program Service Center at least one time
prior to the start of the applicable submission period to update the
virtual group's information (for example, to include new NPIs who
joined a TIN that is part of the virtual group).
[[Page 53604]]
Virtual groups must re-register before each performance period.
The following is a summary of the public comments received
regarding our proposed election process for virtual groups.
Comment: Generally, all commenters expressed support for the
technical assistance infrastructure and two-stage election process.
Response: We appreciate the support from commenters.
Comment: A majority of commenters expressed concern regarding the
election deadline of December 1, while several commenters recommended
that an election deadline be established during the performance period
in order for virtual groups to have adequate time to
prepare for the implementation of virtual groups, including the
establishment and execution of formal written agreements and
coordination within virtual groups to address issues pertaining to
interoperability, measure selection, data collection and aggregation,
measure specifications, workflows, resources, and other related items.
A few commenters recommended an election deadline of June 30 to align
with the election deadline for groups and virtual groups to register to
use the CMS Web Interface and/or administer the CAHPS for MIPS survey.
Response: We appreciate the feedback from commenters regarding the
election deadline of December 1 and note that section
1848(q)(5)(I)(iii)(I) of the Act provides that the virtual group
election process must require an individual MIPS eligible clinician or
group electing to be in a virtual group to make their election prior to
the start of the applicable performance period. Given that the CY
performance period for the quality and cost performance categories
begins on January 1, a solo practitioner or group electing to be in a
virtual group would need to make their election prior to January 1. As
a result, we are modifying our proposed election deadline by extending
it to December 31 of the calendar year preceding the applicable
performance period. We note that our proposed election deadline of
December 1 was intended to allow us to notify virtual groups of their
official status prior to the start of the performance period. With the
modification we are finalizing for the election deadline of December
31, it is not operationally feasible for us to notify virtual groups of
their official virtual group status prior to the start of the
performance period. However, we intend to notify virtual groups of
their official status as close to the start of the performance period
as technically feasible.
Comment: A few commenters indicated that solo practitioners and
groups should have the option of leaving a virtual group during the
performance period, or that a virtual group should be allowed to remove
a solo practitioner or group for non-compliance or low performance.
Response: We note that the statute specifies that a virtual group
election cannot be changed during the performance period, and such
election would remain for the duration of the performance period.
Comment: A few commenters requested that CMS allow virtual group
agreements to be executed during the performance period in order to
provide the virtual group parties with time to establish goals and
objectives, build relationships with each other, and identify
additional agreement provisions that may be necessary to include in
order to meet program requirements.
Response: We note that section 1848(q)(5)(I)(iii)(I) and (IV) of
the Act provides that the virtual group election process must require
an individual MIPS eligible clinician or group electing to be in a
virtual group to make their election prior to the start of the
applicable performance period, and include requirements providing for
formal written agreements among individual MIPS eligible clinicians and
groups electing to be a virtual group. Thus, we are not authorized to
establish an agreement deadline during the performance period. However,
we note that the parties to a virtual group agreement would not be
precluded from amending their agreement during the performance period,
which enables them to incorporate any additional agreement provisions
that they later identify as necessary. A virtual group representative
would notify CMS of the implementation and execution of an amended
virtual group agreement.
Final Action: After consideration of the public comments received,
we are finalizing the following policies. We are codifying at Sec.
414.1315(a) that a solo practitioner or a group of 10 or fewer eligible
clinicians must make their election to participate in MIPS as a virtual
group prior to the start of the applicable performance period and
cannot change their election during the performance period; and
codifying at Sec. 414.1315(c) a two-stage virtual group election
process, stage 1 of which is optional, for the applicable 2018 and 2019
performance periods. We are finalizing a modification to our proposed
election period deadline by codifying at Sec. 414.1315(b) that,
beginning with performance periods occurring in 2018, a solo
practitioner or group of 10 or fewer eligible clinicians electing to
be in a virtual group must make their election by December 31 of the
calendar year preceding the applicable performance period.
f. Virtual Group Agreements
As noted in section II.C.4.a. of this final rule with comment
period, section 1848(q)(5)(I)(iii)(IV) of the Act provides that the
virtual group election process must provide for formal written
agreements among individual MIPS eligible clinicians (solo
practitioners) and groups electing to be a virtual group. We proposed
that each virtual group member (that is, each solo practitioner or
group) would be required to execute formal written agreements with each
other virtual group member to ensure that requirements and expectations
of participation in MIPS are clearly articulated, understood, and
agreed upon (82 FR 30032 through 30033). We noted that a virtual group
may not include a solo practitioner or group as part of the virtual
group unless an authorized person of the TIN has executed a formal
written agreement. During the election process and submission of a
virtual group election, a designated virtual group representative would
be required to confirm through acknowledgement that an agreement is in
place between each member of the virtual group. An agreement would be
executed for at least one performance period. If an NPI joins or leaves
a TIN, or a change is made to a TIN that impacts the agreement itself,
such as a legal business name change, during the applicable performance
period, a virtual group would be required to update the agreement to
reflect such changes and submit changes to CMS via the Quality Payment
Program Service Center.
We proposed, at Sec. 414.1315(c)(3), that a formal written
agreement between each member of a virtual group must include the
following elements:
• Expressly state that the only parties to the agreement are the
TINs and NPIs of the virtual group (at Sec. 414.1315(c)(3)(i)). For
example, the agreement may not be between a virtual group and another
entity, such as an independent practice association (IPA) or management
company that in turn has an agreement with one or more TINs within the
virtual group. Similarly, virtual groups should not use existing
contracts between TINs that include third parties.
• Be executed on behalf of the TINs and the NPIs by
individuals who are authorized to bind the TINs and the
[[Page 53605]]
NPIs, respectively (at Sec. 414.1315(c)(3)(ii)).
• Expressly require each member of the virtual group
(including each NPI under each TIN) to agree to participate in MIPS as
a virtual group and comply with the requirements of the MIPS and all
other applicable laws and regulations (including, but not limited to,
federal criminal law, False Claims Act, anti-kickback statute, civil
monetary penalties law, the Health Insurance Portability and
Accountability Act of 1996, and physician self-referral law) (at Sec.
414.1315(c)(3)(iii)).
• Require each TIN within a virtual group to notify all NPIs
associated with the TIN of their participation in the MIPS as a virtual
group (at Sec. 414.1315(c)(3)(iv)).
• Set forth the NPI's rights and obligations in, and
representation by, the virtual group, including without limitation, the
reporting requirements and how participation in MIPS as a virtual group
affects the ability of the NPI to participate in the MIPS outside of
the virtual group (at Sec. 414.1315(c)(3)(v)).
• Describe how the opportunity to receive payment
adjustments will encourage each member of the virtual group (including
each NPI under each TIN) to adhere to quality assurance and improvement
(at Sec. 414.1315(c)(3)(vi)).
• Require each member of the virtual group to update its
Medicare enrollment information, including the addition and deletion of
NPIs billing through a TIN that is part of a virtual group, on a timely
basis in accordance with Medicare program requirements and to notify
the virtual group of any such changes within 30 days after the change
(at Sec. 414.1315(c)(3)(vii)).
• Be for a term of at least one performance period as
specified in the formal written agreement (at Sec.
414.1315(c)(3)(viii)).
• Require completion of a close-out process upon termination
or expiration of the agreement that requires the TIN (group part of the
virtual group) or NPI (solo practitioner part of the virtual group) to
furnish, in accordance with applicable privacy and security laws, all
data necessary in order for the virtual group to aggregate its data
across the virtual group (at Sec. 414.1315(c)(3)(ix)).
On August 18, 2017, we published a 30-day Federal Register notice
(82 FR 39440) announcing our formal submission of the information
collection request (ICR) for the virtual group election process to OMB,
which included a model formal written agreement, and informing the
public on its additional opportunity to review the ICR and submit
comments by September 18, 2017. OMB approved the ICR on September 27,
2017 (OMB control number 0938-1343). The model formal written agreement
is not required, but serves as a template that virtual groups could
utilize in establishing an agreement with each member of a virtual
group. Such agreement template will be made available via subregulatory
guidance. Each prospective virtual group member should consult their
own legal and other appropriate counsel as necessary in establishing
the agreement.
We want to ensure that all eligible clinicians who bill through the
TINs that are components of a virtual group are aware of their
participation in a virtual group. We want to implement an approach that
balances the need to ensure that all eligible clinicians in a group are
aware of their participation in a virtual group against the
minimization of administrative burden.
We solicited public comment on these proposals and on approaches
for virtual groups to ensure that all eligible clinicians in a group
are aware of their participation in a virtual group.
The following is a summary of the public comments received
regarding our proposal to require a formal written agreement between each
member of a virtual group.
Comment: Several commenters expressed support for the proposed
provisions that virtual groups would need to include as part of the
formal written agreement establishing a virtual group.
Response: We appreciate the support from commenters.
Comment: A few commenters expressed concern regarding the burden
associated with the agreements required for virtual group
implementation and execution. One commenter indicated that the formal
written agreement process, while essential to allow for data capture,
poses administrative burden and other complexities when utilizing
multiple submission mechanisms.
Response: We note that section 1848(q)(5)(I)(iii)(IV) of the Act
provides that the virtual group election process must provide for
``formal written agreements among MIPS eligible professionals'' (that
is, individual MIPS eligible clinicians and groups) that elect to be a
virtual group. As such, we do not believe that our proposal to require
a written agreement governing the virtual group is excessively
burdensome. However, although we believe the agreements should identify
each eligible clinician billing under the TIN of a practice within the
virtual group, we have concluded that it would be unnecessarily
burdensome to require each such eligible clinician to be a party to the
virtual group agreement. In addition, we agree that it is unnecessarily
burdensome to require each solo practitioner or group that wishes to be
part of a virtual group to have a separate agreement with every other
solo practitioner or group that wishes to be part of the same virtual
group. We do not believe the statute compels such a requirement; a
single agreement among all solo practitioners and groups forming a
virtual group is sufficient to implement the statutory requirement.
Accordingly, we have revised the regulation text at Sec.
414.1315(c)(3) to clarify that the parties to a formal written virtual
group agreement must be only the groups and solo practitioners (as
identified by name of party, TIN, and NPI) that compose the virtual
group. We note that we are modifying our proposals for greater clarity.
We recognize that our proposals regarding virtual group agreements
as well as other virtual group matters used the term ``member of a
virtual group'' inconsistently. In some places, we used the term to
refer only to the components of the virtual group (that is, the solo
practitioners and groups that can form a virtual group), while in other
places we used the term to mean both the components of the virtual
group and the eligible clinicians billing through a TIN that is a
component of the virtual group. We believe that some of the perceived
burden of the requirement for a virtual group agreement was due to the
ambiguous use of this terminology. Wherever possible, we modified our
proposals to ensure that they appropriately distinguish between the
components of a virtual group and the eligible clinicians billing
through a TIN that is a component of a virtual group.
Comment: One commenter expressed support for the proposed agreement
provision that would require the parties to a virtual group agreement
to be only solo practitioners and groups (not third parties), while
another commenter did not support such provision and indicated that
many small practices have joined IPAs to provide centralized support
for quality improvement training, health technology support, reporting,
and analytics needed for success under payment reform programs such as
the Quality Payment Program. The commenter also indicated that IPAs
could serve as the administrator of a virtual group by collecting and
submitting data on behalf of the virtual group and requested that CMS
eliminate the requirement for all members of a virtual group to execute
a single joint agreement and expand the allowable
[[Page 53606]]
scope of the agreements by permitting IPAs to sign a virtual group
agreement with each member of a virtual group.
Response: For purposes of participation in MIPS as a virtual group,
we note that eligible clinicians within a virtual group are
collectively assessed and scored across each performance category based
on applicable measures and activities that pertain to the performance
of all TINs and NPIs within a virtual group. Each TIN and NPI within a
virtual group has an integral role in improving quality of care and
health outcomes, and increasing care coordination. As such, we believe
it is appropriate to prohibit third parties from becoming parties to a
virtual group agreement. However, we note that virtual groups are not
precluded from utilizing, or executing separate agreements with, third
parties to provide support for virtual group implementation.
Comment: To minimize the administrative burden, one commenter
suggested that CMS not require all agreement requirements to be met in
freestanding agreements. The commenter noted that the agreement could
be an addendum to existing contracts to eliminate the need to draft an
independent agreement, unless necessary.
Response: We consider an ``existing'' contract to mean a contract
that was established and executed prior to the formation of a virtual
group. Depending on the parties to an existing contract, freestanding
virtual group agreements may not be necessary. For example, if an
existing contract was established between two or more TINs prior to the
formation of a virtual group and such TINs formed a virtual group among
themselves, the required provisions of a virtual group agreement could
be included in the existing contract as an addendum as long as the
parties to the existing contract include each TIN within the virtual
group and all other requirements are satisfied prior to the applicable
performance period. However, if the existing contract is with a third
party intermediary or does not include each TIN within the virtual
group, the virtual group agreement could not be effectuated as an
addendum to the existing contract.
We recognize that including virtual group agreement provisions as
an addendum to an existing contract may reduce administrative burden
and in certain circumstances such an addendum can be incorporated to an
existing contract. However, we do believe it is critical that the
inclusion of such provisions as an addendum does not limit or restrict
the responsibility of each party to collectively meet the program
requirements under MIPS. We reiterate that the statute requires formal
written agreements to be between each solo practitioner and group forming
the virtual group. Individuals billing under the TIN of a party to a
virtual group are collectively assessed and scored across each
performance category based on applicable measures and activities that
pertain to the performance of all TINs and NPIs within a virtual group.
Each TIN and NPI within a virtual group has an integral role in
improving quality of care and health outcomes, and increasing care
coordination. As such, we believe it is appropriate to require
agreements to only be between solo practitioners and groups and not
include third parties. However, we note that virtual groups are not
precluded from utilizing, or executing separate agreements with, third
parties to provide support for virtual group implementation.
Comment: One commenter requested that CMS clarify the parameters
surrounding the proposed agreement provision that requires agreements
to be executed on behalf of the TINs and the NPIs by individuals who
are authorized to bind the TINs and the NPIs, and how CMS would
evaluate the criterion in such provision when reviewing written
agreements.
Response: If a solo practitioner (or his or her professional
corporation) is a party to a virtual group agreement, the solo
practitioner could execute the agreement individually or on behalf of
his or her professional corporation. We recognize that groups (TINs)
have varying administrative and operational infrastructures. In
general, one or more officers, agents, or other authorized individuals
of a group would have the authority to legally bind the group. The
parties to a virtual group agreement should ensure that the agreement
is executed only by appropriately authorized individuals.
Comment: One commenter expressed support for the proposed agreement
provision that would require NPIs billing under a TIN in a virtual
group to agree to participate in MIPS as a virtual group, and urged CMS
to notify, by a means of direct communication, each NPI regarding his
or her participation in MIPS as part of a virtual group prior to the
performance period.
Response: We appreciate the support from the commenter. We believe
that it is critical for each eligible clinician in a virtual group to
be aware of his or her participation in MIPS as part of a virtual
group. Based on our experience under the Medicare Shared Savings
Program, we found that NPIs continued to be unaware of their
participation in a Medicare Shared Savings Program ACO regardless of
the ACO's obligation to notify each NPI via direct communication. We
considered directly notifying all NPIs regarding their participation in
MIPS as part of a virtual group, but based on our experience under the
Medicare Shared Savings Program, we do not believe that such action
would be an effective way of ensuring that each NPI is aware of his or
her TIN being part of a virtual group. We believe that communication
within a TIN is imperative and the crux of ensuring that each NPI is
aware of his or her participation in MIPS as part of a virtual group.
As part of the virtual group election process, we will notify each
virtual group representative regarding the official status of the
virtual group. We will also require each TIN within a virtual group to
notify all NPIs associated with the TIN of their participation in the
MIPS as a virtual group.
Comment: One commenter expressed support for one of the proposed
agreement provisions that would set forth the NPI's rights and
obligations in, and representation by, the virtual group. As part of
the process for establishing an agreement, the commenter, as well as
other commenters, requested that CMS allow virtual groups to discuss
with all participants in the virtual group the ways in which the
virtual group would meet the requirements for each performance
category, the type of submission mechanism(s) the virtual group intends
to utilize, the timelines for aggregating data across the TINs within
the virtual group and for data submission, and the assessment and
scoring of performance and application of the MIPS payment adjustment.
Another commenter requested that the agreements include other elements
such as requiring participation in improvement activities, use of EHR,
and data sharing workflows, and suggested that CMS provide guidance on
specific efficiencies and improvement goals that a virtual group could
support and encourage virtual groups to create a plan for achieving
those goals as a virtual group. A commenter suggested that the model
agreement include provisions related to a mutual interest in quality
performance, shared responsibility in decision making, a meaningful way
to effectively use data to drive performance, and a mechanism to share
best practices within the virtual group. Another commenter requested
that CMS develop a checklist for interested TINs to assist them in
understanding the
[[Page 53607]]
requirements pertaining to a virtual group agreement.
Response: For the successful implementation of virtual groups, we
believe that it is critical for everyone participating in a virtual
group (including the individuals billing under the TIN of a group) to
understand their rights and obligations in a virtual group. We believe
that virtual groups should have the flexibility to identify additional
requirements that would facilitate and guide a virtual group as it
works to achieve its goals and meet program requirements. We note that
the model agreement serves as a template that virtual groups could
utilize in establishing a virtual group agreement, and could include
other elements that would meet the needs of the virtual group to ensure
that all TINs and NPIs within a virtual group are collectively and
collaboratively working together. We encourage the parties to a virtual
group agreement to actively engage in discussions with eligible
clinicians to develop a strategic plan, identify resources and needs,
and establish processes, workflows, and other tools as they prepare for
virtual group reporting. To support the efforts of solo practitioners
and groups with 10 or fewer eligible clinicians in virtual group
implementation, we intend to publish a virtual group toolkit that
provides information pertaining to requirements and outlines the steps
a virtual group would pursue during the election process.
Comment: One commenter requested that the agreement be a 1-year
term and renewable thereafter.
Response: We note that an agreement will need to be executed for at
least one performance period. However, with virtual groups being
required to be assessed and scored across all four performance
categories, and the quality and cost performance categories having a
calendar year performance period (at Sec. 414.1320), we clarify that a
virtual group agreement would need to be executed for at least a 1-year
term. Virtual groups have the flexibility to establish a new agreement
or renew the execution of an existing agreement for a subsequent
applicable performance period.
Comment: One commenter requested that the virtual group agreements
clearly specify the repercussions of an eligible clinician or group
within a virtual group who fails to report as part of the virtual
group.
Response: We believe that the proposed provisions of a virtual
group agreement provide a foundation that sets forth the
responsibilities and obligations of each party for a performance
period. Virtual groups have the flexibility to include other elements
in an agreement. Each virtual group will be unique, and as a result, we
encourage virtual groups to establish and execute an agreement that
guides how a virtual group would meet its goals and objectives, and
program requirements. Some virtual groups may elect to include a
provision that outlines the implications of a solo practitioner or
group failing to meet the elements of an agreement. We will also
require such agreements to describe how the opportunity to receive
payment adjustments will encourage each member of the virtual group
(and each NPI under each TIN in the virtual group) to adhere to quality
assurance and improvement.
Comment: One commenter recommended that virtual group agreements
contain similar elements used in agreements by the private sector,
which would address factors pertaining to health IT and administrative
and operationalization components such as: Requiring the establishment
of a plan for integrating each virtual group component's health IT (for
example, EHRs, patient registries, and practice management systems),
including a timeline to work with health IT vendors on such
integration, if applicable; requiring each component of a virtual group to
serve a common patient population and provide a list of hospitals and/
or facilities with which they have an affiliation and a list of
counties in which they would be active; and determining how a virtual
group would be staffed and governed by identifying staff allocations to
organizational leadership, clinical leadership, practice consultants,
and IT resources.
Response: We recognize that different sectors may have established
agreements with various elements to facilitate and assure attainment of
program goals and objectives, which may serve as a useful tool to
virtual groups. We encourage virtual groups to assess whether or not
their agreement should include other elements in addition to our
proposed agreement provisions. Virtual groups have the flexibility to
identify other elements that would be critical to include in an
agreement specific to their particular virtual group. We believe it is
essential to continue to provide virtual groups with the flexibility to
establish agreements that will most appropriately reflect the unique
characteristics of a virtual group.
Also, we note that different TINs, particularly small practices,
may have access to different resources, which makes it difficult to
identify specific requirements pertaining to the inclusion of
administration and operationalization of health IT components in a
virtual group agreement that would be universally applicable to any
virtual group composition, while maintaining the flexibility and
discretion afforded to virtual groups in establishing additional
elements for their agreements that meet the needs of virtual groups. We
recognize that each TIN within a virtual group will need to coordinate
within the virtual group to address issues pertaining to
interoperability, data collection, measure specifications, workflows,
resources, and other related items, and believe that a virtual group is
the most appropriate entity to determine how it will prepare,
implement, and execute the functions of the virtual group to meet the
requirements for each performance category. We believe that our
proposed agreement elements provide a critical foundation for virtual
group implementation, which establishes a clear responsibility and
obligation of each NPI to the virtual group for the duration of an
applicable performance period.
Comment: Many commenters expressed concern regarding the timeframe
virtual groups would have to make an election and establish agreements.
The commenters indicated that the election period is very restrictive
and does not provide interested solo practitioners and groups with
sufficient time to meet and execute the required elements of an
agreement and work through all of the necessary details in forming and
implementing a virtual group. The commenters also noted that
contractual agreements between NPIs and TINs often take several months,
at least, to negotiate and finalize. A few commenters indicated that
interested solo practitioners and groups would not have adequate time
to make informed decisions regarding virtual group participation. The
commenters noted that it would be helpful to have the virtual group
agreement template available for review and comment in advance. One
commenter indicated that the lack of virtual group requirements at this
early stage of the Quality Payment Program causes a lack of clarity and
stability for eligible clinicians and/or groups interested in forming
virtual groups.
Response: In order to provide support and reduce burden, we intend
to make TA available, to the extent feasible and appropriate, to
support clinicians who choose to come together as a virtual group.
Clinicians can access the TA infrastructure and resources that they
[[Page 53608]]
may already be utilizing. In section II.C.4.e. of this final rule with
comment period, we establish a two-stage virtual group election
process, stage 1 of which is optional, for performance periods
occurring in 2018 and 2019 (82 FR 30030 through 30032). Stage 1
pertains to virtual group eligibility determinations, and stage 2
pertains to virtual group formation. During stage 1, solo practitioners
and groups have the option to contact their designated TA
representative in order to obtain information pertaining to virtual
groups and/or determine whether or not they are eligible, as it relates
to the practice size requirement. Clinicians who do not elect to
contact their designated TA representative would still have the option
of contacting the Quality Payment Program Service Center to obtain
information pertaining to virtual groups.
We recognize that the election period, including the timeframe
virtual groups would have to establish and implement the virtual group
agreement, and the timeline for establishing virtual group policies in
this final rule with comment period are short and impose certain
potential barriers to virtual group formation and limitations for the
first year of virtual group implementation that we are not able to
eliminate due to statutory constraints, such as the requirement for
virtual groups to make an election prior to an applicable performance
period. In order to mitigate some of the challenges, we developed a
model agreement to serve as a template that virtual groups could
utilize as they prepare for virtual group implementation, which can be
accessed on the CMS Web site at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html, and we are finalizing a
modification to the election period deadline by extending it to
December 31. In this final rule with comment period,
we are establishing virtual group policies for the 2018 and 2019
performance periods. Solo practitioners and groups with 10 or fewer
eligible clinicians that are not able to form virtual groups for the
2018 performance period should have sufficient time to prepare and
implement requirements applicable to virtual groups for the 2019
performance period.
Comment: A majority of commenters indicated that virtual group
formation involves preparing health IT systems, training staff to be
ready for implementation, sharing and aggregating data, and
coordinating workflows. The commenters expressed concern that while
such steps are necessary to ensure the success of virtual groups, such
steps could raise issues regarding compliance with certain fraud and
abuse laws, particularly the physician self-referral law (section 1877
of the Act) and the anti-kickback statute (section 1128B(b) of the
Act). The commenters requested that CMS assess the potential risks
virtual groups may have under the physician self-referral law and
whether or not a regulatory exception would be necessary to
successfully implement and maximize the advantages of the virtual group
option. One commenter noted that parties to a virtual group agreement
may want to enter into financial arrangements with each other to
maximize the benefit of the virtual group (for example, pay for one
party to organize and submit all measures on behalf of all the virtual
group parties) and that such an arrangement may result in some eligible
clinicians being unable to refer patients to other participants in the
virtual group without running afoul of the physician self-referral law,
unless CMS established an exception for virtual groups. A few
commenters requested that the Secretary exercise prosecutorial
discretion by not enforcing the anti-kickback statute and the physician
self-referral law for activities involving the development and
operation of a virtual group.
Many commenters expressed concerns regarding the lack of
information and clarity pertaining to the interaction between virtual
groups and the physician self-referral law, anti-kickback statute, and
antitrust law. The commenters requested that CMS clarify the program
integrity obligations of virtual groups, issue safe harbors, and
publish guidance outlining how the physician self-referral law, anti-
kickback statute, and antitrust law apply to virtual groups. The
commenters asserted that this was needed in order for solo
practitioners and groups to maintain safeguards against fraud and abuse
while soliciting partners to form a virtual group and working toward
common MIPS goals.
Response: Nothing in this final rule with comment period changes
the application of the physician self-referral law, anti-kickback
statute, or anti-trust laws. We note that a ``group practice'' as
defined for purposes of the physician self-referral law is separate and
distinct from a ``virtual group'' as defined in this final rule. A
virtual group may, but is not required, to include a ``group practice''
as defined for purposes of the physician self-referral law. Whether an
entity that is assigned a TIN and is included in a virtual group should
be a ``group practice'' (as defined for purposes of the physician self-
referral law) is a separate legal issue that is not governed by this
final rule with comment period. We recognize that a virtual group may
include multiple clinician practices and that the clinicians in one
practice may refer patients for services that will be furnished by
other practices in the virtual group. However, we believe that the
virtual group arrangement can be structured in a manner that both
complies with an existing physician self-referral law exception and
does not violate the anti-kickback statute. We note that the issuance
of guidance, exceptions, or safe harbors regarding the physician self-
referral law or the anti-kickback statute is beyond the scope of this
rulemaking, and MACRA does not authorize the Secretary to waive any
fraud and abuse laws for MIPS. Finally, HHS is not authorized to
interpret or provide guidance regarding the anti-trust laws.
Comment: Several commenters supported the development of a model
agreement. One commenter indicated that the model agreement lacked the
details necessary to enable virtual groups to cover all required
criteria and urged CMS to supply a template that is inclusive of needed
detail and instructions.
Response: We appreciate the support from commenters. In regard to
the model agreement, we established such a template in order to reduce
the burden of virtual groups having to develop an agreement. On August
18, 2017, we published a 30-day Federal Register notice (82 FR 39440)
announcing our formal submission of the ICR for the virtual group
election process to OMB, which included a model formal written
agreement, and informing the public on its additional opportunity to
review the information collection request and submit comments by
September 18, 2017. OMB approved the ICR on September 27, 2017 (OMB
control number 0938-1343). The utilization of our model agreement is
not required, but serves as a tool that can be utilized by virtual
groups. Each prospective party to a virtual group agreement should
consult their own legal and other appropriate counsel as necessary in
establishing the agreement. We note that the received comments
pertaining to the content of the model agreement are out of scope for
this final rule with comment period.
Final Action: After consideration of public comments received, we
are finalizing with modification our proposal at Sec. 414.1315(c)(3)
regarding virtual group agreements. This final rule
[[Page 53609]]
with comment period requires a formal written agreement between each
solo practitioner and group that composes a virtual group; the revised
regulation text makes it clear that the formal written virtual group
agreement must identify, but need not include as parties to the
agreement, all eligible clinicians who bill under the TINs that are
components of the virtual group. The requirement to execute a formal
written virtual group agreement ensures that requirements and
expectations of participation in MIPS are clearly articulated,
understood, and agreed upon. We are finalizing our proposal that a
virtual group agreement must be executed on behalf of a party to the
agreement by an individual who is authorized to bind the party. For
greater clarity, we are finalizing with modification our proposals at
Sec. 414.1315(c)(3) that a formal written agreement between each
member of a virtual group must include the following elements:
Identifies the parties to the agreement by name of party,
TIN, and NPI, and includes as parties to the agreement only the groups
and solo practitioners that compose the virtual group (at Sec.
414.1315(c)(3)(i)).
Is executed on behalf of each party by an individual who
is authorized to bind the party (at Sec. 414.1315(c)(3)(ii)).
Expressly requires each member of the virtual group (and
each NPI under each TIN in the virtual group) to participate in MIPS as
a virtual group and comply with the requirements of the MIPS and all
other applicable laws and regulations (including, but not limited to,
federal criminal law, False Claims Act, anti-kickback statute, civil
monetary penalties law, the Health Insurance Portability and
Accountability Act of 1996, and physician self-referral law) (at Sec.
414.1315(c)(3)(iii)).
Identifies each NPI under each TIN in the virtual group
and requires each TIN within a virtual group to notify all NPIs
associated with the TIN of their participation in the MIPS as a virtual
group (at Sec. 414.1315(c)(3)(iv)).
Sets forth the NPI's rights and obligations in, and
representation by, the virtual group, including without limitation, the
reporting requirements and how participation in MIPS as a virtual group
affects the ability of the NPI to participate in the MIPS outside of
the virtual group (at Sec. 414.1315(c)(3)(v)).
Describes how the opportunity to receive payment
adjustments will encourage each member of the virtual group (and each
NPI under each TIN in the virtual group) to adhere to quality assurance
and improvement (at Sec. 414.1315(c)(3)(vi)).
Requires each party to the agreement to update its
Medicare enrollment information, including the addition and deletion of
NPIs billing through its TIN, on a timely basis in accordance with
Medicare program requirements and to notify the virtual group of any
such changes within 30 days after the change (at Sec.
414.1315(c)(3)(vii)).
Is for a term of at least one performance period as
specified in the formal written agreement (at Sec.
414.1315(c)(3)(viii)).
Requires completion of a close-out process upon
termination or expiration of the agreement that requires each party to
the virtual group agreement to furnish, in accordance with applicable
privacy and security laws, all data necessary in order for the virtual
group to aggregate its data across the virtual group (at Sec.
414.1315(c)(3)(ix)).
During the election process and submission of a virtual group
election, a designated virtual group representative will be required to
confirm through acknowledgement that an agreement is in place between
all solo practitioners and groups that compose the virtual group. An
agreement will be executed for at least one performance period. If an
NPI joins or leaves a TIN, or a change is made to a TIN that impacts
the agreement itself, such as a legal business name change, during the
applicable performance period, a virtual group will be required to
update the agreement to reflect such changes and submit changes to CMS
via the Quality Payment Program Service Center.
g. Virtual Group Reporting Requirements
As discussed in section II.C.4.d. of this final rule with comment
period, we believe virtual groups should generally be treated under the
MIPS as groups. Therefore, for MIPS eligible clinicians participating
at the virtual group level, we proposed at Sec. 414.1315(d) the
following requirements (82 FR 30033):
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level would have their performance assessed as a virtual
group (at Sec. 414.1315(d)(1)).
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level would need to meet the definition of a virtual
group at all times during the performance period for the MIPS payment
year (at Sec. 414.1315(d)(2)).
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level must aggregate their performance data across
multiple TINs in order for their performance to be assessed as a
virtual group (at Sec. 414.1315(d)(3)).
MIPS eligible clinicians that elect to participate in MIPS
at the virtual group level would have their performance assessed at the
virtual group level across all four MIPS performance categories (at
Sec. 414.1315(d)(4)).
Virtual groups would need to adhere to an election process
established and required by CMS (at Sec. 414.1315(d)(5)).
The following is a summary of the public comments received
regarding our proposed virtual group reporting requirements.
Comment: Many commenters generally supported our proposed reporting
requirements for virtual groups.
Response: We appreciate the support from the commenters.
Comment: One commenter expressed support of our proposed virtual
group reporting requirements and indicated that a majority of
practicing vascular surgeons are part of private practices, including
groups of 10 or fewer eligible clinicians, and would benefit from
participating in MIPS as part of a virtual group. The commenter noted
that the implementation of virtual groups would ease burdens on small
practices and eligible clinicians by allowing them to report data
together for each performance category, and be assessed and scored as a
virtual group. Another commenter supported our proposal that allows
small practices to aggregate their data at the virtual group level,
which would allow them to have a larger denominator to spread risk and
mitigate the impact of adverse outlier situations.
Response: We appreciate the support from the commenters regarding our
proposed virtual group reporting requirements.
Comment: One commenter indicated that the reporting of performance
data for all NPIs under a TIN participating in a virtual group,
particularly non-MIPS eligible clinicians who are excluded from MIPS
participation, would be a regulatory burden to virtual groups.
Response: We do not believe that requiring virtual groups to report
on data for all NPIs under a TIN participating in a virtual group would
be burdensome to virtual groups. Based on previous feedback from
stakeholders regarding group reporting under PQRS, we believe that it
would be more burdensome for virtual groups to
[[Page 53610]]
determine which clinicians are MIPS eligible versus not MIPS eligible
and remove performance data for non-MIPS eligible clinicians when
reporting as a virtual group. While entire TINs participate in a
virtual group, including each NPI under a TIN, and are assessed and
scored collectively as a virtual group, we note that only NPIs that
meet the definition of a MIPS eligible clinician would be subject to a
MIPS payment adjustment.
Comment: A majority of commenters did not support our proposal to
require all eligible clinicians who are part of a TIN participating in
MIPS at the virtual group level to aggregate their performance data
across multiple TINs in order for their performance to be assessed and
scored as a virtual group. The commenters expressed concerns that it
would be burdensome for rural and small practices and prohibitive for
virtual groups to perform data aggregation and requested that CMS
aggregate data for virtual groups. The commenters indicated that the
requirement for virtual groups to aggregate data across the virtual
group could be a potential barrier for virtual group participation and
would be unlikely to occur without error. One commenter requested that
CMS further define data aggregation and clarify whether or not
individual reports from each NPI within a virtual group could simply be
added together for all NPIs in the virtual group or if each NPI's data
could be pulled from each TIN's QRDA file.
Response: We appreciate the feedback from the commenters and
recognize that data aggregation across multiple TINs within a virtual
group may pose varying challenges. At this juncture, it is not
technically feasible for us to aggregate the data for virtual groups,
but we will consider such an option in future years. In order to support
implementation of virtual groups as a participation option under MIPS,
we intend to issue subregulatory guidance pertaining to data
aggregation for virtual groups.
Comment: A few commenters recommended that for the first year of
virtual group implementation, CMS hold virtual groups and registries
that support virtual groups harmless from penalties if they encounter
technical challenges related to data aggregation. The commenters noted
that the potential penalty for technical challenges in data aggregation
is a severe 5 percent for TINs that are already operating on small
margins and expressed concerns that registries supporting virtual group
reporting would be opening themselves to potential disqualification for
the aforementioned challenges in data aggregation.
Response: We appreciate the feedback from commenters and note that
the statute requires virtual groups to be assessed and scored, and
subject to a MIPS payment adjustment, as a result of TINs participating
in a virtual group under MIPS. The statute does not authorize us to
establish additional exclusions that are not otherwise identified in
statute. If a virtual group encounters technical challenges regarding
data aggregation and is not able to report on measures and activities
via QCDRs, qualified registries, or EHRs, it would have the
option of reporting via the CMS Web Interface (for virtual groups of 25
or more eligible clinicians), a CMS-approved survey vendor for the
CAHPS for MIPS survey, and administrative claims (if applicable) for
the quality and cost performance categories, and via attestation for
the improvement activities and advancing care information performance
categories. The administrative claims submission mechanism does not
require virtual groups to submit data for purposes of the quality and
cost performance categories but the calculation of performance data is
conducted by CMS.
We note that the measure reporting requirements applicable to
groups are also generally applicable to virtual groups. However, we
note that the requirements for calculating measures and activities when
reporting via QCDRs, qualified registries, EHRs, and attestation differ
in their application to virtual groups. Specifically, these
requirements apply cumulatively across all TINs in a virtual group.
Thus, virtual groups will aggregate data for each NPI under each TIN
within the virtual group by adding together the numerators and
denominators and then cumulatively collating them to report one measure ratio
at the virtual group level. Moreover, if each MIPS eligible clinician
within a virtual group faces a significant hardship or has EHR
technology that has been decertified, the virtual group can apply for
an exception to have its advancing care information performance
category reweighted. If such exception application is approved, the
virtual group's advancing care information performance category is
reweighted to zero percent and applied to the quality performance
category increasing the quality performance weight from 50 percent to
75 percent.
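As an illustration only, and not a statement of policy in this final
rule with comment period, the following sketch (in Python) shows the
cumulative aggregation described above using hypothetical per-TIN
numerator and denominator counts for a single measure; the TIN
identifiers, the counts, and the assumption that an advancing care
information exception has been approved are all illustrative.

    # Illustrative sketch only; hypothetical per-TIN counts for one measure.
    # Each entry is (TIN identifier, numerator, denominator).
    per_tin_counts = [
        ("TIN-A", 45, 60),    # hypothetical solo practitioner
        ("TIN-B", 120, 150),  # hypothetical small group
        ("TIN-C", 30, 40),    # hypothetical small group
    ]

    # Aggregate cumulatively across all TINs in the virtual group: add the
    # numerators together, add the denominators together, and report one
    # measure ratio at the virtual group level.
    total_numerator = sum(n for _, n, _ in per_tin_counts)      # 195
    total_denominator = sum(d for _, _, d in per_tin_counts)    # 250
    virtual_group_ratio = total_numerator / total_denominator   # 0.78

    # If an approved exception reweights the advancing care information
    # performance category to zero, its weight is applied to the quality
    # performance category (50 percent to 75 percent, as described above).
    quality_weight, aci_weight = 0.50, 0.25
    exception_approved = True  # hypothetical
    if exception_approved:
        quality_weight += aci_weight  # 0.75
        aci_weight = 0.0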
Additionally, the data submission criteria applicable to groups are
also generally applicable to virtual groups. However, we note that data
completeness and sampling requirements for the CMS Web Interface and
CAHPS for MIPS survey differ in their application to virtual groups.
Specifically, data completeness for virtual groups applies cumulatively
across all TINs in a virtual group. Thus, we note that there may be a
case when a virtual group has one TIN that falls below the 60 percent
data completeness threshold, which is an acceptable case as long as the
virtual group cumulatively exceeds such threshold. In regard to the CMS
Web Interface and CAHPS for MIPS survey, sampling requirements pertain
to Medicare Part B patients with respect to all TINs in a virtual
group, where the sampling methodology would be conducted for each TIN
within the virtual group and then cumulatively aggregated across the
virtual group. A virtual group would need to meet the beneficiary
sampling threshold cumulatively as a virtual group.
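Similarly, as an illustration only, the following sketch (in Python,
with hypothetical reporting counts) shows how the 60 percent data
completeness threshold described above applies cumulatively: an
individual TIN may fall below the threshold so long as the virtual
group exceeds it in the aggregate.

    # Illustrative sketch only; hypothetical counts of reported versus
    # reportable instances for one measure, per TIN in a virtual group.
    reported = {"TIN-A": 40, "TIN-B": 130}
    reportable = {"TIN-A": 80, "TIN-B": 160}

    DATA_COMPLETENESS_THRESHOLD = 0.60

    # Per-TIN completeness: TIN-A alone is 50 percent, below the threshold.
    per_tin = {tin: reported[tin] / reportable[tin] for tin in reported}

    # Data completeness applies cumulatively across all TINs in the virtual
    # group: 170 / 240, or roughly 71 percent, which exceeds 60 percent.
    cumulative = sum(reported.values()) / sum(reportable.values())
    meets_threshold = cumulative >= DATA_COMPLETENESS_THRESHOLD  # True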
Comment: A few commenters urged CMS to set clear expectations as to
how virtual groups should submit data across performance categories and
from multiple systems while ensuring their information is aggregated
and reported correctly to maximize the virtual group's final score and
requested that CMS provide clarity regarding virtual group reporting.
One commenter indicated that virtual group reporting can be completed
through QCDRs, in which multiple eligible clinicians in a virtual group
could report to one place on the quality of care furnished to their
respective patients. The commenter noted that the commitments from CMS
and ONC regarding interoperability and electronic data sharing would
continue to further the feasibility of virtual group reporting through
EHRs in the future. However, a few commenters requested clarification
regarding how data can and should be submitted for virtual groups, and
whether or not QCDRs and other clinical outcomes data registries would
be able to assist virtual groups by sharing in the responsibility for
aggregating data. The commenters noted that the aggregation of data
across various TINs and health IT systems may be logistically difficult
and complex, as groups and health IT systems have different ways of
collecting and storing data and stated that data aggregation across
various systems for measures and activities under each performance
category may not be possible if qualified registries do not have the
option to assist virtual groups.
Response: We appreciate the feedback from commenters and recognize
that commenters seek clarification regarding submission requirements
for third party intermediaries such as QCDRs, qualified registries, and
EHRs. We note that third
[[Page 53611]]
party intermediaries would need to meet the same requirements
established at Sec.  414.1400 and the form and manner requirements for each submission
mechanism when submitting data on behalf of virtual groups. We intend
to issue subregulatory guidance for virtual groups and third party
intermediaries pertaining to data aggregation and the collection and
submission of data.
Comment: One commenter requested clarification regarding the
submission of data for virtual groups via EHRs. The commenter indicated
that while groups may already be familiar with the reporting of quality
measures via EHRs, the addition of the improvement activities and
advancing care information performance categories adds a new level of
complexity. Also, the commenter requested clarification regarding
whether or not CMS has an established mechanism that would accept
multiple QRDA III submissions for a single virtual group pertaining to
the improvement activities and advancing care information performance
categories. The commenter indicated that standards do not exist to
combine files pertaining to the improvement activities and advancing
care information performance categories from disparate vendors and
requested clarification regarding whether or not combined files would
be needed for virtual groups and for CMS to issue guidance to vendors
at least 18 months in advance regarding development and implementation.
Response: We appreciate the feedback from the commenter and note
that we intend to issue additional subregulatory guidance for third
party intermediaries pertaining to the collection and submission of
data for all performance categories. In regard to the submission of
multiple QRDA III files, our system is not built to allow for the
submission of multiple QRDA III files. Groups and virtual groups are
required to submit one QRDA III file for each performance category.
Given that virtual groups are required to aggregate their data at the
virtual group level and submit one file of data per performance category,
there may be circumstances that would require a virtual group to
combine its files in order to meet the submission requirements.
However, it should be noted that all other measures and activities
requirements would also need to be met in order for virtual groups to
meet reporting and submission requirements.
Comment: One commenter requested that CMS allow QCDRs and other
clinical outcomes data registries to support virtual groups in
aggregating measures and activities for reporting.
Response: We note that virtual groups are not precluded from
utilizing third party intermediaries such as QCDRs and qualified
registries to support virtual groups in meeting virtual group reporting
requirements. We intend to issue subregulatory guidance for virtual
groups and third party intermediaries pertaining to data aggregation
and the collection and submission of data.
Comment: A few commenters expressed concern that the submission
mechanisms available to virtual groups involve multiple layers of legal
and operational complexity. The commenters indicated that certain
registries have internal data governance standards, including patient
safety organization requirements, that they must follow when
contracting with single TIN participants, such that legal agreements
made between solo practitioners and small groups within a virtual group
may complicate the registries' ability to comply with those
requirements. The commenters recommended that CMS provide guidance to
registries on how to handle data sharing among virtual groups with
respect to patient safety organization requirements. One commenter
expressed concern regarding how registries would be able to meet
virtual group requirements to report a sufficient number of measures
given that some registries may have made a variety of measures
available for individual eligible clinicians to report, but may need to
increase the available measures to report in order to support virtual
group reporting. The commenter requested that CMS provide guidance
regarding the expectations for registries supporting virtual group
reporting, particularly when considering the role of specialty
registries and the quality performance category.
Response: We recognize that certain registries may have internal
governance standards complicating how they would support virtual
groups, but note that by definition, a virtual group is a combination
of TINs. We appreciate the feedback from commenters and note that we
intend to issue additional subregulatory guidance for third party
intermediaries such as qualified registries.
Comment: One commenter expressed concern regarding how quality data
would be collected, aggregated and displayed for solo practitioners and
groups composing the virtual group. The commenter requested
clarification regarding whether or not solo practitioners and groups
composing the virtual group would be allowed to view the quality data
of other solo practitioners and groups in the virtual group. Also, the
commenter indicated that it is not clear what responsibility a
qualified registry would have, if any, to verify if a virtual group
reporting through a registry has all the appropriate legal agreements
in place prior to their participation in the registry.
Response: We appreciate the commenter expressing such concern and
note that we intend to issue subregulatory guidance for virtual groups
and third party intermediaries pertaining to data aggregation and the
collection and submission of data. We note that the measure reporting
requirements applicable to groups are also generally applicable to
virtual groups. However, we note that the requirements for calculating
measures and activities when reporting via QCDRs, qualified registries,
EHRs, and attestation differ in their application to virtual groups.
Specifically, these requirements apply cumulatively across all TINs in
a virtual group. Thus, virtual groups will aggregate data for each NPI
under each TIN within the virtual group by adding together the
numerators and denominators and then cumulatively collating them to report one
measure ratio at the virtual group level. Moreover, if each MIPS
eligible clinician within a virtual group faces a significant hardship
or has EHR technology that has been decertified, the virtual group can
apply for an exception to have its advancing care information
performance category reweighted. If such exception application is
approved, the virtual group's advancing care information performance
category is reweighted to zero percent and applied to the quality
performance category increasing the quality performance weight from 50
percent to 75 percent.
Additionally, the data submission criteria applicable to groups are
also generally applicable to virtual groups. However, we note that data
completeness and sampling requirements for the CMS Web Interface and
CAHPS for MIPS survey differ in their application to virtual groups.
Specifically, data completeness for virtual groups applies cumulatively
across all TINs in a virtual group. Thus, we note that there may be a
case when a virtual group has one TIN that falls below the 60 percent
data completeness threshold, which is an acceptable case as long as the
virtual group cumulatively exceeds such threshold. In regard to the CMS
Web Interface and CAHPS for MIPS survey, sampling requirements pertain
to Medicare Part B patients with respect to all TINs in a virtual
group, where the sampling methodology would be conducted for
[[Page 53612]]
each TIN within the virtual group and then cumulatively aggregated
across the virtual group. A virtual group would need to meet the
beneficiary sampling threshold cumulatively as a virtual group. In
regard to the comment requesting clarification on whether or not solo
practitioners and groups composing a virtual group would be allowed to
view quality data of other solo practitioners and groups in the virtual
group, we note that virtual groups have the flexibility to determine
if, how, and when solo practitioners and groups in the virtual group
would be able to view quality data and/or data pertaining to the other
three performance categories, in which such permissibility could be
established as a provision under the virtual group agreement. Moreover,
the establishment and execution of a virtual group agreement is the
responsibility of the parties electing to participate in MIPS as part
of a virtual group. Health IT vendors or third party intermediaries are
not required to verify that each virtual group has established and
executed a prior virtual group agreement.
Comment: One commenter indicated that there would be added
technical challenges for a virtual group representative when submitting
on behalf of their virtual group given that he or she may face errors
or warnings during submission and, due to the possibility that
individual files could come from various EHR vendors, that
representative would not have authority or the ability to work directly
with another TIN's vendor.
Response: We note that virtual groups have the flexibility to
determine how they would complete reporting under MIPS. We believe that
virtual groups would need to address operational elements to ensure
that they would meet the reporting requirements for each performance
category. Virtual groups are able to utilize the same multiple
submission mechanisms that are available to groups. For the 2018
performance period, groups and virtual groups can utilize multiple
submission mechanisms, but only use one submission mechanism per
performance category. Starting with the 2019 performance period, groups
and virtual groups will be able to utilize multiple submission
mechanisms for each performance category.
Comment: One commenter recommended that the virtual group
infrastructure be defined and tested prior to implementation and noted
that virtual group implementation does not appear to be ready for CY
2018. Another commenter suggested that the virtual group reporting
option have a transition year for the CY 2018 and CY 2019 performance
periods in order for solo practitioners and groups to become familiar
with implementing the virtual group reporting option as well as the
election process and executing agreements. The commenter requested that
virtual groups have the ``pick your pace'' options that were
established for the CY 2017 performance period for the CY 2018
performance period in order to test the virtual group option, whereby
the virtual group would only need to report one quality measure or one
improvement activity to avoid a negative MIPS payment adjustment.
Response: We note that it is not permissible for virtual groups to
meet the requirements established for the 2017 performance period given
that such requirements are not applicable to the 2018 performance
period. Moreover, the ``pick your pace'' options were based on the
lower performance threshold established for the CY 2017 performance
period. As discussed in section II.C.8.c. of this final rule with
comment period, we are finalizing a higher performance threshold for
the CY 2018 performance period, and the statute requires the
establishment of one performance threshold for a performance period,
which is the same for all MIPS eligible clinicians regardless of how or
when they participate in MIPS. Year 2 requirements for virtual groups
are defined throughout this final rule with comment period.
Comment: One commenter requested that CMS require virtual groups to
report a plan prior to the start of the performance period regarding
how members of the virtual group (solo practitioners and groups) would
share data internally, including how they would identify the measures
that the virtual group would report, and share NPI-level performance
data on those measures with each other during the performance period to
facilitate performance improvement.
Response: We appreciate the commenter recommending requirements for
virtual groups, but disagree with the recommendation that would require
virtual groups to submit a report to us prior to the start of the
performance period outlining how the virtual group would share data
internally, how the virtual group would identify the measures and
activities to report, and share NPI-level performance data on those
measures with each other during the performance period to facilitate
performance improvement. We believe that the submission of such report
prior to the start of the performance period would increase
administrative burden for virtual groups. However, we encourage virtual
groups to actively engage in discussions with their members to develop a
strategic plan, select measures and activities to report, identify
resources and needs, and establish processes, workflows, and other
tools as they prepare for virtual group reporting. Virtual groups have
the flexibility to identify other elements, in addition to our proposed
agreement provisions, that would be critical to include in an agreement
specific to their particular virtual group. We believe that virtual
groups should have the flexibility to identify additional requirements
that would facilitate and guide a virtual group as it works to achieve
its goals and meet program requirements.
Comment: One commenter recommended that CMS require all eligible
clinicians within a virtual group to report on the same measure set.
The commenter indicated that unifying measures would allow CMS to
aggregate numerators and denominators more easily when calculating
performance against measures.
Response: Virtual groups that report via the CMS Web Interface
would report on all measures within the CMS Web Interface. Virtual
groups that report via other submission mechanisms would report on the
same 6 measures for the quality performance category. We
encourage virtual groups to assess the types of measures and measure
sets to report to ensure that they would meet the reporting
requirements for the applicable performance categories.
Comment: One commenter recommended that CMS develop a web-based
portal that would streamline reporting requirements for virtual groups.
For example, CMS could model, to the extent possible and appropriate, a
virtual group web-based portal on the CMS Web Interface. The
availability of a web-based portal would relieve a substantial burden
for solo practitioners and small groups who do not have the same level
of resources as larger groups to purchase and maintain the
infrastructure necessary for MIPS reporting. Moreover, the commenter
indicated that a single reporting portal would ease data collection
burden on CMS, enabling the Agency to collect and pull data from a
single source under a single submission mechanism rather than engaging
in a more cumbersome process that could require multiple data
collection and submission mechanisms.
Response: We have developed a web-based portal submission system
that streamlines and simplifies the
[[Page 53613]]
submission of data at the individual, group, and virtual group level,
including the utilization of multiple submission mechanisms (one
submission mechanism per performance category), for each performance
category. We will be issuing guidance at qpp.cms.gov pertaining to the
utilization and functionality of such portal.
Comment: Several commenters requested that CMS clarify whether or
not data should be de-duplicated for virtual group reporting. The
commenters indicated that TINs already have an issue of not being able
to de-duplicate patient data across different health IT systems/
multiple EHRs. The commenters indicated that virtual groups need clear
guidelines regarding how to achieve accurate reporting and suggested
that CMS may want to consider delaying implementation of the virtual
group reporting option until all related logistics issues and solutions
are identified.
Response: We interpret the commenters' reference to ``de-
duplicate'' to mean the identification of unique patients across a
virtual group. We recognize that it may be difficult to identify unique
patients across a virtual group for the purposes of aggregating
performance on the advancing care information measures, particularly
when a virtual group is using multiple CEHRT systems. For 2018, virtual
groups may be using systems which are certified to different CEHRT
editions, further adding to this challenge. We consider ``unique
patients'' to be individual patients treated by a TIN within a virtual
group who would typically be counted as one patient in the denominator
of an advancing care information measure. This patient may see multiple
MIPS eligible clinicians within a TIN that is part of a virtual group,
or may see MIPS eligible clinicians at multiple practice sites of a TIN
that is part of a virtual group. When aggregating performance on
advancing care information measures for virtual group level reporting,
we do not require that a virtual group determine that a patient seen by
one MIPS eligible clinician (or at one location in the case of TINs
working with multiple CEHRT systems) is not also seen by another MIPS
eligible clinician in the TIN that is part of the virtual group or
captured in a different CEHRT system.
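As an illustration only of the counting convention described above
(with hypothetical TIN identifiers and counts), the following sketch
(in Python) shows that the virtual group denominator is the sum of each
TIN's own unique patient count, with no cross-TIN or cross-CEHRT
de-duplication required.

    # Illustrative sketch only; hypothetical unique patient counts per TIN,
    # as captured by each TIN's own CEHRT for an advancing care
    # information measure.
    unique_patients_by_tin = {"TIN-A": 500, "TIN-B": 300}

    # A patient seen by multiple MIPS eligible clinicians (or at multiple
    # practice sites) of a single TIN is counted once within that TIN.
    # Across TINs, no de-duplication is required: the virtual group
    # denominator is simply the sum of the per-TIN counts, even if some
    # patients were seen by more than one TIN in the virtual group.
    virtual_group_denominator = sum(unique_patients_by_tin.values())  # 800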
In regard to the suggestion provided by the commenter regarding the
delay of the implementation of virtual groups, we are not able to
further postpone the implementation of virtual groups. We recognize
that there are various elements and factors that virtual groups would
need to address prior to the execution of virtual groups. Also, we
recognize that certain solo practitioners and groups may not be ready
to form virtual groups for the 2018 performance period.
Comment: One commenter expressed concern regarding how a health IT
vendor would support a virtual group regardless of submission
mechanism, CEHRT, registry, and/or billing claims. The commenter
indicated that having multiple health IT vendors and products to
support within a single virtual group would complicate the ability to
aggregate data for a final score, affect the productivity of the health
IT vendor in its effort to support the virtual groups, and increase
coding and billing errors.
Response: We note that virtual groups may elect to utilize health
IT vendors and/or third party intermediaries for the collection and
submission of data on behalf of virtual groups. As discussed in section
II.C.6.a.(1) of this final rule with comment period, the submission
mechanisms available to groups under each performance category will
also be available to virtual groups. Similarly, virtual groups will
also have the same option as groups to utilize multiple submission
mechanisms, but only one submission mechanism per performance category
for the 2018 performance period. However, starting with the 2019
performance period, groups and virtual groups will be able to utilize
multiple submission mechanisms for each performance category. We
believe that our policies pertaining to the availability and
utilization of multiple submission mechanisms increases flexibility and
reduces burden. However, we recognize that data aggregation at the
virtual group level may pose varying challenges.
We note that the measure reporting requirements applicable to
groups are also generally applicable to virtual groups. However, we
note that the requirements for calculating measures and activities when
reporting via QCDRs, qualified registries, EHRs, and attestation differ
in their application to virtual groups. Specifically, these
requirements apply cumulatively across all TINs in a virtual group.
Thus, virtual groups will aggregate data for each NPI under each TIN
within the virtual group by adding together the numerators and
denominators and then cumulatively collating them to report one measure ratio
at the virtual group level. Moreover, if each MIPS eligible clinician
within a virtual group faces a significant hardship or has EHR
technology that has been decertified, the virtual group can apply for
an exception to have its advancing care information performance
category reweighted. If such exception application is approved, the
virtual group's advancing care information performance category is
reweighted to zero percent and applied to the quality performance
category increasing the quality performance weight from 50 percent to
75 percent.
Additionally, the data submission criteria applicable to groups are
also generally applicable to virtual groups. However, we note that data
completeness and sampling requirements for the CMS Web Interface and
CAHPS for MIPS survey differ in their application to virtual groups.
Specifically, data completeness for virtual groups applies cumulatively
across all TINs in a virtual group. Thus, we note that there may be a
case when a virtual group has one TIN that falls below the 60 percent
data completeness threshold, which is an acceptable case as long as the
virtual group cumulatively exceeds such threshold. In regard to the CMS
Web Interface and CAHPS for MIPS survey, sampling requirements pertain
to Medicare Part B patients with respect to all TINs in a virtual
group, where the sampling methodology would be conducted for each TIN
within the virtual group and then cumulatively aggregated across the
virtual group. A virtual group would need to meet the beneficiary
sampling threshold cumulatively as a virtual group.
Final Action: After consideration of the public comments received,
we are finalizing the following virtual group reporting requirements:
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level will have their performance assessed as a virtual
group at Sec. 414.1315(d)(1).
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level will need to meet the definition of a virtual group
at all times during the performance period for the MIPS payment year
(at Sec. 414.1315(d)(2)).
Individual eligible clinicians and individual MIPS
eligible clinicians who are part of a TIN participating in MIPS at the
virtual group level must aggregate their performance data across
multiple TINs in order for their performance to be assessed as a
virtual group (at Sec. 414.1315(d)(3)).
MIPS eligible clinicians that elect to participate in MIPS
at the virtual group level will have their performance assessed at the
virtual group level across all four MIPS performance categories (at
Sec. 414.1315(d)(4)).
[[Page 53614]]
Virtual groups will need to adhere to an election process
established and required by CMS (at Sec. 414.1315(d)(5)).
h. Virtual Group Assessment and Scoring
As noted in section II.C.4.a. of this final rule with comment
period, section 1848(q)(5)(I)(i) of the Act provides that MIPS eligible
clinicians electing to be a virtual group must: (1) Have their
performance assessed for the quality and cost performance categories in
a manner that applies the combined performance of all the MIPS eligible
clinicians in the virtual group to each MIPS eligible clinician in the
virtual group for the applicable performance period; and (2) be scored
for the quality and cost performance categories based on such
assessment for the applicable performance period. We believe it is
critical for virtual groups to be assessed and scored at the virtual
group level for all performance categories, as it eliminates the burden
of virtual group components having to report as a virtual group and
separately outside of a virtual group. Additionally, we believe that
the assessment and scoring at the virtual group level provides for a
comprehensive measurement of performance, shared responsibility, and an
opportunity to effectively and efficiently coordinate resources to also
achieve performance under the improvement activities and the advancing
care information performance categories. Therefore, we proposed at
Sec. 414.1315(d)(4) that virtual groups would be assessed and scored
across all four MIPS performance categories at the virtual group level
for a performance period for a year (82 FR 30033 through 30034).
In the CY 2017 Quality Payment Program final rule (81 FR 77319
through 77329), we established the MIPS final score methodology at
Sec. 414.1380, which would apply to virtual groups. We refer readers
to sections II.C.4.h. and II.C.6.g. of this final rule with comment
period for scoring policies that would apply to virtual groups.
As noted in section II.C.4.g. of this final rule with comment
period, we proposed to allow solo practitioners and groups with 10 or
fewer eligible clinicians that have elected to be part of a virtual
group to have their performance measured and aggregated at the virtual
group level across all four performance categories; however, we would
apply payment adjustments at the individual TIN/NPI level. Each TIN/NPI
would receive a final score based on the virtual group performance, but
the payment adjustment would still be applied at the TIN/NPI level. We
would assign the virtual group score to all TIN/NPIs billing under a
TIN in the virtual group during the performance period.
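    For illustration only, the following sketch shows how a single
virtual group final score would be assigned to every TIN/NPI in the
virtual group while the payment adjustment is applied at the
individual TIN/NPI level. The roster, score, and simplified adjustment
function are hypothetical placeholders; actual adjustments depend on
the performance threshold and budget neutrality requirements described
elsewhere in this final rule with comment period.

virtual_group_final_score = 82.5          # hypothetical final score (0-100 scale)
roster = [("TIN-A", "NPI-1"), ("TIN-A", "NPI-2"), ("TIN-B", "NPI-3")]

ILLUSTRATIVE_PERFORMANCE_THRESHOLD = 15   # placeholder value for this sketch only

def payment_adjustment_direction(final_score):
    # Simplified stand-in: the actual MIPS adjustment factor is computed by
    # CMS against the performance threshold and budget-neutrality rules.
    if final_score > ILLUSTRATIVE_PERFORMANCE_THRESHOLD:
        return "positive adjustment"
    if final_score == ILLUSTRATIVE_PERFORMANCE_THRESHOLD:
        return "neutral adjustment"
    return "negative adjustment"

# Every TIN/NPI billing under a TIN in the virtual group receives the same
# virtual group final score ...
final_scores = {tin_npi: virtual_group_final_score for tin_npi in roster}
# ... but the payment adjustment is applied separately at each TIN/NPI.
adjustments = {tin_npi: payment_adjustment_direction(score)
               for tin_npi, score in final_scores.items()}
print(adjustments)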
During the performance period, we recognized that NPIs in a TIN
that has joined a virtual group may also be participants in an APM. The
TIN, as part of the virtual group, would be required to submit
performance data for all eligible clinicians associated with the TIN,
including those participating in APMs, to ensure that all eligible
clinicians associated with the TIN are being measured under MIPS.
APMs seek to deliver better care at lower cost and to test new ways
of paying for care and measuring and assessing performance. In the CY
2017 Quality Payment Program final rule, we established policies to
address the concerns we expressed regarding the application of
certain MIPS policies to MIPS eligible clinicians in MIPS APMs (81 FR
77246 through 77269). In the CY 2018 Quality Payment Program proposed
rule, we reiterated those concerns and proposed additional policies for
the APM scoring standard (82 FR 30080 through 30091). We believe it is
important to consistently apply the APM scoring standard under MIPS for
eligible clinicians participating in MIPS APMs in order to avoid
potential misalignments between the evaluation of performance under the
terms of the MIPS APM and evaluation of performance on measures and
activities under MIPS, and to preserve the integrity of the initiatives
we are testing. Therefore, we believe it is necessary to waive the
requirement to only use the virtual group scores under section
1848(q)(5)(I)(i)(II) of the Act, and instead to apply the score under
the APM scoring standard for eligible clinicians in virtual groups who
are also in an APM Entity participating in an APM.
Specifically, for participants in MIPS APMs, we proposed to use our
authority under section 1115A(d)(1) of the Act for MIPS APMs authorized
under section 1115A of the Act, and under section 1899(f) of the Act
for the Shared Savings Program, to waive the requirement under section
1848(q)(5)(I)(i)(II) of the Act that requires performance category
scores from virtual group reporting to be used to generate the final
score upon which the MIPS payment adjustment is based for all TIN/NPIs
in the virtual group. Instead, we would use the score assigned to the
MIPS eligible clinician based on the applicable APM Entity score to
determine MIPS payment adjustments for all MIPS eligible clinicians
that are part of an APM Entity participating in a MIPS APM, in
accordance with Sec. 414.1370, instead of determining MIPS payment
adjustments for these MIPS eligible clinicians using the final score of
their virtual group.
We noted that MIPS eligible clinicians who are participants in both
a virtual group and a MIPS APM would be assessed under MIPS as part of
the virtual group and under the APM scoring standard as part of an APM
Entity group, but would receive their payment adjustment based only on
the APM Entity score. In the case of an eligible clinician
participating in both a virtual group and an Advanced APM who has
achieved QP status, the clinician would be assessed under MIPS as part
of the virtual group, but would still be excluded from the MIPS payment
adjustment as a result of his or her QP status. We refer readers to
section II.C.6.g. of this final rule with comment period for further
discussion regarding the waiver.
The following is a summary of the public comments received
regarding our proposals.
Comment: Many commenters supported our proposals regarding the
assessment and scoring of virtual group performance and the application
of the MIPS payment adjustment to MIPS eligible clinicians based on the
virtual group's final score.
Response: We appreciate the support from the commenters.
Comment: One commenter supported our proposal to assess and score
virtual groups at the virtual group level and indicated that such an
approach would provide comprehensive measurement, shared responsibility
and coordination of resources, and reduce burden. Another commenter
expressed support for requiring the aggregation of data across the TINs
within a virtual group, including the performance data of APM
participants, to assess the performance of a virtual group given that
it would be difficult for TINs to separate and exclude data for some
NPIs. One commenter supported our proposal to utilize waiver authority,
which allows MIPS eligible clinicians within a virtual group to receive
their MIPS payment adjustment based on the virtual group score while
allowing APM participants who are also a part of a virtual group to
receive their MIPS payment adjustment based on their APM Entity score
under the APM scoring standard.
Response: We appreciate the support from the commenters regarding
our proposals.
Comment: One commenter requested clarification regarding whether or
not the MIPS payment adjustment would only apply to MIPS eligible
clinicians within a virtual group.
[[Page 53615]]
Response: We note that each eligible clinician in a virtual group
will receive a virtual group score that is reflective of the combined
performance of a virtual group; however, only MIPS eligible clinicians
will receive a MIPS payment adjustment based on the virtual group final
score. In the case of an eligible clinician participating in both a
virtual group and an Advanced APM who has achieved QP status, such
eligible clinician will be assessed under MIPS as part of the virtual
group, but will still be excluded from the MIPS payment adjustment as a
result of his or her QP status. Conversely, in the case of an eligible
clinician participating in both a virtual group and an Advanced APM who
has achieved Partial QP status, it is recognized that such eligible
clinician would be excluded from the MIPS payment adjustment unless
such eligible clinician elects to report under MIPS. We note that
affirmatively agreeing to participate in MIPS as part of a virtual
group prior to the start of the applicable performance period would
constitute an explicit election to report under MIPS. Thus, eligible
clinicians who participate in a virtual group and achieve Partial QP
status would remain subject to the MIPS payment adjustment due to their
election to report under MIPS. New Medicare-enrolled eligible
clinicians and clinician types not included in the definition of a MIPS
eligible clinician who are associated with a TIN that is part of a
virtual group would receive a virtual group score, but would not
receive a MIPS payment adjustment. MIPS eligible clinicians who are
participants in both a virtual group and a MIPS APM will be assessed
under MIPS as part of the virtual group and under the APM scoring
standard as part of an APM Entity group, but will receive their payment
adjustment based only on the APM Entity score.
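    For illustration only, the decision logic described in this
response can be sketched as follows; the function and field names are
hypothetical, and the sketch omits details such as the Partial QP
election made outside of a virtual group.

def adjustment_basis(clinician):
    """Describe how (or whether) the MIPS payment adjustment applies to a
    clinician whose TIN participates in a virtual group."""
    if clinician.get("qp_status") == "QP":
        # QPs are excluded from the MIPS payment adjustment entirely.
        return "excluded from MIPS payment adjustment (QP)"
    if not clinician.get("mips_eligible", False):
        # New Medicare-enrolled clinicians and clinician types outside the
        # MIPS eligible clinician definition receive a virtual group score only.
        return "virtual group score assigned; no MIPS payment adjustment"
    if clinician.get("in_mips_apm", False):
        # Under the waiver, MIPS APM participants are adjusted on the APM
        # Entity score rather than the virtual group final score.
        return "MIPS payment adjustment based on APM Entity score"
    # Otherwise (including Partial QPs, whose virtual group participation
    # constitutes an election to report under MIPS), the adjustment is based
    # on the virtual group final score.
    return "MIPS payment adjustment based on virtual group final score"

print(adjustment_basis({"qp_status": "none", "mips_eligible": True, "in_mips_apm": True}))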
Comment: In order to increase virtual group participation and
incentivize solo practitioners and groups (including rural and small
practices) to form virtual groups and move toward joint accountability,
many commenters recommended that CMS provide bonus points to TINs that
elect to form virtual groups given that virtual groups would face
administrative and operational challenges, such as identifying reliable
partners, aggregating and sharing data, and coordinating workflow
across multiple TINs and NPIs. One commenter recommended that CMS
consider granting virtual groups (of any size) special reporting and/or
scoring accommodations similar to the previously finalized and proposed
policies for small practices (for example, attesting to only one to two
improvement activities) in order to account for the short timeframe (a
few months) TINs have to form and implement virtual groups in
preparation for the CY 2018 performance period.
Response: We appreciate the recommendations from commenters. We
believe that the ability for solo practitioners and groups to form and/
or join virtual groups is an advantage and provides flexibility. We
note that virtual groups are generally able to take advantage of and
benefit from all scoring incentives and bonuses that are currently
provided under MIPS. We will take into consideration the development of
additional incentives, and any changes would be proposed in future
rulemaking.
Comment: One commenter requested that CMS consider scoring virtual
groups by weighting each individual group category score by the number
of clinicians. The commenter indicated that the requirement to
consolidate scoring for each performance category would limit the
ability of TINs to take advantage of the virtual group option,
particularly with regard to the advancing care information performance
category, where the use of different EHR vendors may make finding
viable partners difficult and preclude easy reporting. Another
commenter indicated that our proposal to require virtual groups to be
scored across all performance categories may cause unintended
consequences, such as virtual groups being dissuaded from admitting
TINs that only have EHR technology certified to the 2014 Edition in order
for virtual groups' advancing care information performance category
scores not to be impacted.
Response: We believe it is important for TINs participating in MIPS
as part of a virtual group to be assessed and scored at the virtual
group level across each performance category. We believe it provides
continuity in assessment and allows virtual groups to share and
coordinate resources pertaining to each performance category. We
recognize that there may be challenges pertaining to aligning EHR
technology and the ways in which EHR technology captures data, but
believe that virtual groups have the opportunity to coordinate and
identify means to align elements of EHR technology that would benefit
the virtual group. In order for virtual groups to accurately have their
performance assessed and scored as a collective entity and identify
areas to improve care coordination, quality of care, and health
outcomes, we believe that each eligible clinician in a virtual group
should be assessed and scored across all four performance categories at
the virtual group level.
Comment: One commenter suggested that CMS explore the development
of a test to determine, in advance, if a virtual group would have
sufficient numbers for valid measurement.
Response: We interpret the commenter's reference to ``sufficient
numbers for valid measurement'' to mean sufficient numerator and
denominator data to enable the data to accurately reflect the virtual
group's performance on specific measures and activities. As virtual
groups are implemented, we will take this recommendation into
consideration.
Comment: One commenter expressed concern that virtual groups would
have the ability to skew benchmark scoring standards to the
disadvantage of MIPS eligible clinicians who choose not to participate
in MIPS as part of a virtual group.
Response: We disagree with the commenter and do not believe that
virtual groups would skew benchmark scoring standards to the
disadvantage of MIPS eligible clinicians participating in MIPS at the
individual or group level as a result of how benchmarks are calculated,
which is based on the composite of available data for all MIPS eligible
clinicians. MIPS eligible clinicians that are participating in MIPS as
part of a virtual group would already be eligible and able to
participate in MIPS at the individual or group level; therefore, the
benchmark scoring standards would not be skewed regardless of such MIPS
eligible clinicians participating in MIPS at the individual, group, or
virtual group level. Also, we believe that solo practitioners and
groups with 10 or fewer eligible clinicians that form virtual groups
would increase their performance by joining together.
Comment: One commenter urged CMS to address risk adjustment
mechanisms for virtual groups and develop methodologies to account for
the unique nature of virtual groups and noted that appropriate risk
adjustment is critical for virtual groups because of the heterogeneous
make-up of virtual groups (for example, geographic and specialty
diversity).
Response: We appreciate the recommendation from the commenter.
Under the Improving Medicare Post-Acute Transformation (IMPACT) Act of
2014, the Office of the Assistant Secretary for Planning and Evaluation
(ASPE) has been conducting studies on the issue of risk adjustment for
sociodemographic factors on quality measures and cost, as well as other
strategies for including social
[[Page 53616]]
determinants of health status evaluation in CMS programs. We will
closely examine the ASPE studies when they are available and
incorporate findings as feasible and appropriate through future
rulemaking. Also, we will monitor outcomes of beneficiaries with social
risk factors, as well as the performance of the MIPS eligible
clinicians who care for them to assess for potential unintended
consequences such as penalties for factors outside the control of
clinicians.
Comment: One commenter requested clarification regarding how
compliance would be implemented for the quality and improvement
activities performance categories at the virtual group level and
whether or not a virtual group would be able to achieve the highest
possible score for the improvement activities performance category if
only one NPI within the virtual group meets the requirements regardless
of the total number of NPIs participating in the virtual group. Also,
the commenter requested clarification regarding whether or not a
virtual group would meet the requirements under the quality performance
category if the virtual group included a TIN that reported a specialty
measures set that is not applicable to other eligible clinicians in the
virtual group.
Response: As discussed in section II.C.4.d. of this final rule with
comment period, we are generally applying our previously finalized and
proposed group policies to virtual groups, unless specified. Thus, in
order for virtual groups to meet the requirements for the quality and
improvement activities performance categories, they would need to meet
the same requirements established for groups and meet virtual group
reporting requirements. Virtual groups will have their performance
assessed and scored for the quality and improvement activities
performance categories based on submitting the minimum number of measures
and activities. Generally, virtual groups reporting quality measures
are required to select at least 6 measures, one of which must be an
outcome measure or, if an outcome measure is not available, a high
priority measure, to collectively report for the CY 2018 performance
period. Virtual groups are encouraged to select the quality measures
that are most appropriate to the TINs and NPIs within their virtual
group and patient population.
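    As an illustrative sketch of the selection requirement described
above (the measure records are hypothetical, and the specialty measure
set exception discussed later in this section is omitted):

def selection_meets_requirement(measures, outcome_available=True):
    """measures: list of dicts with 'id', 'is_outcome', and 'is_high_priority'."""
    if len(measures) < 6:
        return False
    if outcome_available:
        return any(m["is_outcome"] for m in measures)
    # If no outcome measure is available, a high priority measure may be used.
    return any(m["is_outcome"] or m["is_high_priority"] for m in measures)

sample_selection = [
    {"id": f"Q{i}", "is_outcome": (i == 1), "is_high_priority": False}
    for i in range(1, 7)
]
print(selection_meets_requirement(sample_selection))   # True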
For the 2018 performance period, virtual groups submitting data on
quality measures using QCDRs, qualified registries, or via EHR must
report on at least 60 percent of the virtual group's patients that meet
the measure's denominator criteria, regardless of payer for the
performance period. We expect to receive quality data for both Medicare
and non-Medicare patients under these submission mechanisms. Virtual
groups submitting quality measures data using the CMS Web Interface or
a CMS-approved survey vendor to report the CAHPS for MIPS survey must
meet the data submission requirements on the sample of the Medicare
Part B patients CMS provides. We note that the measure reporting
requirements applicable to groups are also generally applicable to
virtual groups. However, we note that the requirements for calculating
measures and activities when reporting via QCDRs, qualified registries,
EHRs, and attestation differ in their application to virtual groups.
Specifically, these requirements apply cumulatively across all TINs in
a virtual group. Thus, virtual groups will aggregate data for each NPI
under each TIN within the virtual group by adding together the
numerators and denominators across all TINs to report one measure
ratio at the virtual group level. Moreover, if each MIPS
eligible clinician within a virtual group faces a significant hardship
or has EHR technology that has been decertified, the virtual group can
apply for an exception to have its advancing care information
performance category reweighted. If such exception application is
approved, the virtual group's advancing care information performance
category is reweighted to zero percent, and that weight is reallocated
to the quality performance category, increasing the quality
performance category weight from 50 percent to 75 percent.
Additionally, the data submission criteria applicable to groups are
also generally applicable to virtual groups. However, we note that data
completeness and sampling requirements for the CMS Web Interface and
CAHPS for MIPS survey differ in their application to virtual groups.
Specifically, data completeness for virtual groups applies cumulatively
across all TINs in a virtual group. Thus, we note that there may be a
case when a virtual group has one TIN that falls below the 60 percent
data completeness threshold, which is an acceptable case as long as the
virtual group cumulatively exceeds such threshold. In regard to the CMS
Web Interface and CAHPS for MIPS survey, sampling requirements pertain
to Medicare Part B patients with respect to all TINs in a virtual
group, where the sampling methodology would be conducted for each TIN
within the virtual group and then cumulatively aggregated across the
virtual group. A virtual group would need to meet the beneficiary
sampling threshold cumulatively as a virtual group.
In regard to performance under the improvement activities
performance category, we clarified in the CY 2017 Quality Payment
Program final rule (81 FR 77181) that if one MIPS eligible clinician
(NPI) in a group completed an improvement activity, the entire group
(TIN) would receive credit for that activity. In addition, we specified
that all MIPS eligible clinicians reporting as a group would receive
the same score for the improvement activities performance category if
at least one clinician within the group is performing the activity for
a continuous 90 days in the performance period. As discussed in section
II.C.4.d. of this final rule with comment period, we are finalizing our
proposal to generally apply our previously finalized and proposed group
policies to virtual groups, unless otherwise specified. Thus, if one
MIPS eligible clinician (NPI) in a virtual group completed an
improvement activity, the entire virtual group would receive credit for
that activity and receive the same score for the improvement activities
performance category if at least one clinician within the virtual group
is performing the activity for a minimum of a continuous 90-day period
in CY 2018. In order for virtual groups to achieve full credit under
the improvement activities performance category for the 2018
performance period, they would need to submit four medium-weighted or
two high-weighted activities that were conducted for a minimum of a continuous
90-day period in CY 2018. Virtual groups that are considered to be non-
patient facing or small practices, or designated as rural or HPSA
practices will receive full credit by submitting one high-weighted
improvement activity or two medium-weighted improvement activities that
were conducted for a minimum of a continuous 90-day period in CY 2018.
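    For illustration only, the full-credit conditions described above
can be expressed as a simple count-based check. The activity lists are
hypothetical, and each submitted activity is assumed to have been
performed by at least one clinician in the virtual group for a
continuous 90-day period in CY 2018.

def has_full_ia_credit(activities, special_status=False):
    """activities: list of 'high' / 'medium' weights.
    special_status: True for non-patient facing, small practice, rural, or
    HPSA-designated virtual groups."""
    high = activities.count("high")
    medium = activities.count("medium")
    if special_status:
        # One high-weighted OR two medium-weighted activities.
        return high >= 1 or medium >= 2
    # Two high-weighted OR four medium-weighted activities (or an equivalent
    # mix, such as one high-weighted plus two medium-weighted).
    return (high * 2 + medium) >= 4

print(has_full_ia_credit(["high", "medium", "medium"]))      # True
print(has_full_ia_credit(["medium"], special_status=True))   # False -- one medium-weighted activity is not enough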
In regard to compliance with quality and improvement activities
performance category requirements, virtual groups would meet the same
performance category requirements applicable to groups. In section
II.C.4.g. of this final rule with comment period, we outline virtual
group reporting requirements. Virtual groups are required to adhere to
the requirements established for each performance category. Performance
data submitted to CMS on behalf of virtual groups must meet form and
manner requirements for each submission mechanism.
[[Page 53617]]
Final Action: After consideration of the public comments received,
we are finalizing the following proposals. Solo practitioners and
groups with 10 or fewer eligible clinicians that have elected to be
part of a virtual group will have their performance measured and
aggregated at the virtual group level across all four performance
categories. We will apply payment adjustments at the individual TIN/NPI
level. Each TIN/NPI will receive a final score based on the virtual
group performance, but the payment adjustment would still be applied at
the TIN/NPI level. We will assign the virtual group score to all TIN/
NPIs billing under a TIN in the virtual group during the performance
period.
For participants in MIPS APMs, we will use our authority under
section 1115A(d)(1) of the Act for MIPS APMs authorized under section
1115A of the Act, and under section 1899(f) of the Act for the Shared
Savings Program, to waive the requirement under section
1848(q)(5)(I)(i)(II) of the Act that
requires performance category scores from virtual group reporting to be
used to generate the final score upon which the MIPS payment adjustment
is based for all TIN/NPIs in the virtual group. We will use the score
assigned to the MIPS eligible clinician based on the applicable APM
Entity score to determine MIPS payment adjustments for all MIPS
eligible clinicians that are part of an APM Entity participating in a
MIPS APM, in accordance with Sec. 414.1370, instead of determining
MIPS payment adjustments for these MIPS eligible clinicians using the
final score of their virtual group.
5. MIPS Performance Period
In the CY 2017 Quality Payment Program final rule (81 FR 77085), we
finalized at Sec. 414.1320(b)(1) that for purposes of the 2020 MIPS
payment year, the performance period for the quality and cost
performance categories is CY 2018 (January 1, 2018 through December 31,
2018). We finalized at Sec. 414.1320(b)(2) that for purposes of the
2020 MIPS payment year, the performance period for the improvement
activities and advancing care information performance categories is a
minimum of a continuous 90-day period within CY 2018, up to and
including the full CY 2018 (January 1, 2018, through December 31,
2018). We did not propose any changes to these policies.
We also finalized at Sec. 414.1325(f)(2) that for Medicare Part B
claims, data must be submitted on claims with dates of service during
the performance period that must be processed no later than 60 days
following the close of the performance period. In this final rule with
comment period, we are finalizing three policies (small practice size
determination, non-patient facing determination, and low-volume
threshold determination) that utilize a 30-day claims run out. We refer
readers to sections II.C.1.c., II.C.1.e., and II.C.2.c. of this final
rule with comment period for details on these three policies. Lastly,
we finalized that individual MIPS eligible clinicians or groups who
report less than 12 months of data (due to family leave, etc.) are
required to report all performance data available from the applicable
performance period (for example, CY 2018 or a minimum of a continuous
90-day period within CY 2018).
We proposed at Sec. 414.1320(c)(1) that for purposes of the 2021
MIPS payment year and future years, the performance period for the
quality and cost performance categories would be the full calendar year
(January 1 through December 31) that occurs 2 years prior to the
applicable payment year. For example, for the 2021 MIPS payment year,
the performance period would be CY 2019 (January 1, 2019 through
December 31, 2019), and for the 2022 MIPS payment year, the performance
period would be CY 2020 (January 1, 2020 through December 31, 2020).
We proposed at Sec. 414.1320(d)(1) that for purposes of the 2021
MIPS payment year, the performance period for the improvement
activities and advancing care information performance categories would
be a minimum of a continuous 90-day period within CY 2019, up to and
including the full CY 2019 (January 1, 2019 through December 31, 2019).
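    For illustration only, the relationship between the MIPS payment
year and the performance periods described above can be summarized as
follows. The sketch reflects the 2020 and 2021 MIPS payment years
addressed in this final rule with comment period; its extension to
later years mirrors the proposal rather than a finalized policy.

def performance_periods(mips_payment_year):
    # Quality and cost use the full calendar year that occurs 2 years prior
    # to the payment year; improvement activities and advancing care
    # information use a minimum continuous 90-day period within that year.
    performance_cy = mips_payment_year - 2
    return {
        "quality": f"CY {performance_cy} (full calendar year)",
        "cost": f"CY {performance_cy} (full calendar year)",
        "improvement_activities": f"minimum continuous 90 days within CY {performance_cy}",
        "advancing_care_information": f"minimum continuous 90 days within CY {performance_cy}",
    }

print(performance_periods(2020))   # performance period CY 2018
print(performance_periods(2021))   # performance period CY 2019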
The following is a summary of the public comments received on the
``MIPS Performance Period'' proposals and our responses:
Comment: Many commenters did not support our proposal that
beginning with the 2021 MIPS payment year, the performance period for
the quality and cost performance categories would be the full calendar
year that occurs 2 years prior to the applicable payment year. The
commenters believed that MIPS eligible clinicians are not prepared to
move from ``pick your pace'' flexibility to a full calendar year
performance period and that the proposal would create significant
administrative burden and confusion for MIPS eligible clinicians. A few
commenters noted that a full calendar year of data does not necessarily
improve the validity of the data. Many commenters recommended that CMS
continue ``pick your pace'' flexibility with respect to the performance
period, while several commenters expressed an interest in CMS allowing
clinicians to choose the length of their performance period. One
commenter recommended that CMS provide bonus points to clinicians who
report for a performance period that is longer than 90 days. A few
commenters recommended that CMS analyze the quality and cost
performance data to determine the appropriate length of the performance
period, taking into consideration whether there are any unintended
consequences for practices of a particular size or specialty. One
commenter suggested that CMS work with physicians to develop options
and a specific plan to provide accommodations where possible, such as
providing clinicians multiple different performance periods to choose
from. A few commenters noted that a 90-day performance period may
eliminate issues for clinicians that either switch or update their EHR
system during the performance period. Furthermore, a few commenters
noted that since the QCDR self-nominations are not due until November
1, 2017, CMS would need to review and approve QCDR measures within less
than 2 months, for clinicians to have QCDR measures to report at the
start of the CY 2018 performance period. One commenter noted that a 90-
day performance period is preferable as clinicians will need time to
update their systems and train staff after QCDR measures have been
approved.
Response: We understand the commenters' concerns. However, we
believe that it would not be in the best interest of MIPS eligible
clinicians to have less than a full calendar year performance period
for the quality and cost performance categories beginning with the 2021
MIPS payment year, as we previously finalized at Sec. 414.1320(b)(1) a
full calendar year performance period for the quality and cost
performance categories for the 2020 MIPS payment year, which will occur
during CY 2018. By finalizing a full calendar year performance period
for the quality and cost performance categories for the 2021 MIPS
payment year, we are maintaining consistency with the performance
period for the 2020 MIPS payment year. We believe this will be less
burdensome and confusing for MIPS eligible clinicians. We also would
like to note that a longer performance period for the quality and cost
performance categories will likely include more patient encounters,
which will increase the denominator of the quality and cost measures.
Statistically, larger sample sizes provide more accurate and actionable
information. Additionally, the longer performance
[[Page 53618]]
period (a year) is consistent with how many of the measures used in our
program were designed to be reported and performed, such as Quality
#303 (Cataracts: Improvement in Patient's Visual Function within 90
Days Following Cataract Surgery) and Quality #304 (Cataracts: Patient
Satisfaction within 90 Days Following Cataract Surgery). Finally, some
of the measures do not allow for a 90-day performance period (such as
those looking at complications after certain surgeries or improvement
in certain conditions after treatment). In regard to the
recommendation of providing bonus points to MIPS eligible clinicians
that report for a performance period longer than 90 days, we believe a
more appropriate incentive is for MIPS eligible clinicians to report
for the full year so that they have the ability to improve their
performance based on a larger sample size. We also understand
the commenters' preference for a 90-day performance period, so that
there is adequate time to update systems and train staff. We agree that
adequate time is needed to update systems and workflows and to train staff.
However, we note that the quality measures are finalized as part of
this final rule, and the specifications are published on our Web site
by no later than December 31 prior to the performance period. While we
strongly encourage all clinicians to review the current performance
period's measure specifications, we note that the overwhelming majority
of MIPS quality measures are maintained year over year with only minor
code set updates. Further, for quality, we have a 60 percent data
completeness threshold, which provides a buffer for clinicians if they
are not able to implement their selected measures immediately at the
start of the performance period. Finally, we would like to clarify that
many registries, QCDRs, and EHRs have the ability to accept historical
data so that once the EHR system is switched or updated, the MIPS
eligible clinician can report their information. With regard to the
suggestion that we work with physicians to develop options and a
specific plan to provide accommodations where possible, such as
providing clinicians multiple different performance periods to choose
from, we will consider this suggestion for future rulemaking as
necessary.
Comment: While we did not propose any changes to the previously
finalized performance periods for the 2020 MIPS payment year, many
commenters did not support a full calendar year performance period for
the quality performance category for the 2020 MIPS payment year. The
commenters noted that MIPS eligible clinicians are not prepared to move
from ``pick your pace'' flexibility to a full calendar year performance
period and that this policy will create significant administrative
burden and confusion for MIPS eligible clinicians.
Response: We understand the commenters' concerns in regard to the
full calendar year MIPS performance period for the quality performance
category for the 2020 MIPS payment year. We would like to note that the
MIPS performance period for the 2020 MIPS payment year was finalized in
the CY 2017 Quality Payment Program final rule, and we made no new
proposals for the MIPS performance period for the 2020 MIPS payment
year. Therefore, we are unable to modify the MIPS performance period
for the quality performance category for the 2020 MIPS payment year.
Comment: Several commenters supported the proposal to increase the
performance period for the 2021 MIPS payment year and future payment
years to 12 months occurring 2 years prior because the longer
performance period provides a more accurate picture of eligible
clinicians' performance. A few commenters noted that their support was
contingent on CMS approving 2018 QCDR measure specifications by
December 1, 2017. One commenter noted that a 90-day performance period
is insufficient to thoroughly assess performance. One commenter noted
that the full year will ensure continuity in the quality of care
delivered to beneficiaries. One commenter noted that a TIN
participating in Track 1 of the Shared Savings Program is automatically
required to report for the full year, so requiring all MIPS eligible
clinicians to participate for a full year would be fairer now that
scores are reflected on Physician Compare.
Response: We thank the commenters for their support. We would also
like to note that in the CY 2017 Quality Payment Program final rule (81
FR 77158), we stated that we would post the approved QCDR measures
through the qualified posting by no later than January 1, 2018.
Comment: A few commenters did not support the proposed performance
periods because the quality and cost performance categories would not
be aligned with the improvement activities and advancing care
information performance categories. The commenters believed it would be
confusing to clinicians. One commenter recommended that all performance
categories have a 12-month performance period.
Response: We understand the commenters' concerns that the proposed
performance periods for quality and cost would not be consistent with
the improvement activities and advancing care information performance
categories. For the improvement activities performance category, a
minimum of a continuous 90-day performance period provides MIPS
eligible clinicians more flexibility as some improvement activities may
be ongoing, while others may be episodic. For the advancing care
information performance category, a minimum of a continuous 90-day
period performance period provides MIPS eligible clinicians more
flexibility and time to adopt and implement 2015 Edition CEHRT. As for
the quality and cost performance categories, we believe that a full
calendar year performance period is most appropriate. Additionally,
submitting only 90 days of performance data may create challenges for
specific measures. Finally, with respect to the cost performance
category, we would like to note that no data submission is required, as
this performance category is calculated utilizing Part B claim data.
Comment: Many commenters supported the proposed 90-day performance
period for the improvement activities and advancing care information
performance categories. A few commenters requested that CMS adopt a 90-
day performance period for the improvement activities and advancing
care information performance categories for the 2022 MIPS payment year
and future years.
Response: We thank the commenters for their support and will
consider the commenters' recommendation for future rulemaking.
Comment: A few commenters did not support the length of time
between the proposed performance period and the applicable payment year
because the commenters believed it would not allow practices time to
make necessary adjustments before the next performance period begins.
One commenter recommended that, as the program matures, one
consideration for shortening this timeframe could be a quarterly
rolling annual performance period with a 3- to 6-month validation
period prior to any payment adjustment. Another commenter recommended
that we consider staggered performance periods; for example, payment
adjustments for 2021 would ideally be based on a performance period
running from July 1, 2019 through June 30, 2020.
Response: We understand the commenters' concerns regarding the
length of time between the proposed
[[Page 53619]]
performance period and the applicable payment year and appreciate the
commenters' suggestions for shortening this timeframe. While a
shortened timeframe between performance period and payment year may be
desirable, there are operational challenges with this approach that we
do not anticipate can be resolved in the near future. Specifically, we
need to allow time for the post submission processes of calculating
MIPS eligible clinicians' final scores, establishing budget neutrality,
issuing the payment adjustment factors, and allowing for a targeted
review period to occur prior to the application of the MIPS payment
adjustment to MIPS eligible clinicians' claims. However, we are
continuing to look for opportunities to shorten the timeframe between
the end of the performance period and when payment adjustments are
applied.
Comment: One commenter recommended a 2-year performance period for
clinicians who have patient volume insufficient for statistical
analysis so that the clinician has a sufficient sample size to analyze.
Response: We thank the commenter for their suggestion and will
consider it for future rulemaking. We would like to note that in this
final rule with comment period, we are only finalizing the performance
period for the 2021 MIPS payment year, not future years, so that we can
continue to monitor and assess whether changes to the performance
period through future rulemaking would be beneficial.
Comment: One commenter encouraged CMS to implement the MIPS program
as soon as possible. This commenter noted that a transition period
could discourage eligible clinicians from participating in the program.
Response: We appreciate the commenter's recommendation to implement
the MIPS program as soon as possible; however, we disagree that a
transition period will discourage participation. We believe that a
transition period will reduce barriers to participation that existed
in the legacy programs.
Final Action: After consideration of the public comments, we are
finalizing at Sec. 414.1320(c)(1) that for purposes of the 2021 MIPS
payment year, the performance period for the quality and cost
performance categories is CY 2019 (January 1, 2019 through December 31,
2019). We are not finalizing the proposed performance period for the
quality and cost performance categories for purposes of the 2022 MIPS
payment year and future years. We are also redesignating proposed Sec.
414.1320(d)(1) and finalizing at Sec. 414.1320(c)(2) that for purposes
of the 2021 MIPS payment year, the performance period for the advancing
care information and improvement activities performance categories is a
minimum of a continuous 90-day period within CY 2019, up to and
including the full CY 2019 (January 1, 2019 through December 31, 2019).
6. MIPS Performance Category Measures and Activities
a. Performance Category Measures and Reporting
(1) Submission Mechanisms
We finalized in the CY 2017 Quality Payment Program final rule (81
FR 77094) at Sec. 414.1325(a) that individual MIPS eligible clinicians
and groups must submit measures and activities, as applicable, for the
quality, improvement activities, and advancing care information
performance categories. For the cost performance category, we finalized
that each individual MIPS eligible clinician's and group's cost
performance would be calculated using administrative claims data. As a
result, individual MIPS eligible clinicians and groups are not required
to submit any additional information for the cost performance category.
We finalized in the CY 2017 Quality Payment Program final rule (81 FR
77094 through 77095) multiple data submission mechanisms for MIPS,
which provide individual MIPS eligible clinicians and groups with the
flexibility to submit their MIPS measures and activities in a manner
that best accommodates the characteristics of their practice, as
indicated in Tables 2 and 3. Table 2 summarizes the data submission
mechanisms for individual MIPS eligible clinicians that we finalized at
Sec. 414.1325(b) and (e). Table 3 summarizes the data submission
mechanisms for groups that are not reporting through an APM that we
finalized at Sec. 414.1325(c) and (e).
Table 2--Data Submission Mechanisms for MIPS Eligible Clinicians
Reporting Individually
[TIN/NPI]
------------------------------------------------------------------------
Performance category/submission Individual reporting data submission
combinations accepted mechanisms
------------------------------------------------------------------------
Quality........................... Claims.
QCDR.
Qualified registry.
EHR.
Cost.............................. Administrative claims.\1\
Advancing Care Information........ Attestation.
QCDR.
Qualified registry.
EHR.
Improvement Activities............ Attestation.
QCDR.
Qualified registry.
EHR.
------------------------------------------------------------------------
Table 3--Data Submission Mechanisms for MIPS Eligible Clinicians
Reporting as Groups
[TIN]
------------------------------------------------------------------------
Performance category/submission Group reporting data submission
combinations accepted mechanisms
------------------------------------------------------------------------
Quality........................... QCDR.
Qualified registry.
EHR.
[[Page 53620]]
CMS Web Interface (groups of 25 or
more).
CMS-approved survey vendor for CAHPS
for MIPS (must be reported in
conjunction with another data
submission mechanism).
and
Administrative claims (for all-cause
hospital readmission measure; no
submission required).
Cost.............................. Administrative claims.\1\
Advancing Care Information........ Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25 or
more).
Improvement Activities............ Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25 or
more).
------------------------------------------------------------------------
We finalized at Sec. 414.1325(d) that individual MIPS eligible
clinicians and groups may elect to submit information via multiple
mechanisms; however, they must use the same identifier for all
performance categories, and they may only use one submission mechanism
per performance category.
---------------------------------------------------------------------------
\1\ Requires no separate data submission to CMS: Measures are
calculated based on data available from MIPS eligible clinicians'
billings on Medicare Part B claims. Note: Claims differ from
administrative claims as they require MIPS eligible clinicians to
append certain billing codes to denominator eligible claims to
indicate the required quality action or exclusion occurred.
---------------------------------------------------------------------------
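    For illustration only, the previously finalized constraint at Sec.
414.1325(d) for the CY 2018 performance period can be sketched as a
simple check; the submission plans shown are hypothetical.

def one_mechanism_per_category(plan):
    """plan: dict mapping performance category -> list of mechanisms used."""
    return all(len(set(mechanisms)) <= 1 for mechanisms in plan.values())

plan_2018 = {
    "quality": ["qualified registry"],
    "advancing care information": ["EHR"],
    "improvement activities": ["attestation"],
}
print(one_mechanism_per_category(plan_2018))           # True -- different mechanisms across categories are allowed

plan_invalid_2018 = {"quality": ["qualified registry", "EHR"]}
print(one_mechanism_per_category(plan_invalid_2018))   # False -- multiple mechanisms per category are allowed only
                                                       # beginning with the CY 2019 performance period, as finalized below.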
We proposed to revise Sec. 414.1325(d) for purposes of the 2020
MIPS payment year and future years, beginning with performance periods
occurring in 2018, to allow individual MIPS eligible clinicians and
groups to submit data on measures and activities, as applicable and
available, via multiple data submission mechanisms for a single
performance category (specifically, the quality, improvement
activities, or advancing care information performance category) (82 FR
30035). Under this proposal, individual MIPS eligible clinicians and
groups that have fewer than the required number of measures and
activities applicable and available under one submission mechanism
could submit data on additional measures and activities via one or more
additional submission mechanisms, as necessary, to receive a potential
maximum number of points under a performance category.
If an individual MIPS eligible clinician or group submits the same
measure through two different mechanisms, each submission would be
calculated and scored separately. We do not have the ability to
aggregate data on the same measure across submission mechanisms. We
would only count the submission that gives the clinician the higher
score, thereby avoiding the double count. We refer readers to section
II.C.7.a.(2) of this final rule with comment period, which further
outlines how we proposed to score measures and activities regardless of
submission mechanism.
We believe that this flexible approach would help individual MIPS
eligible clinicians and groups with reporting, as it provides more
options for the submission of data for the applicable performance
categories. We believe that by providing this flexibility, we would be
allowing MIPS eligible clinicians to choose the measures and activities
that are most meaningful to them, regardless of the submission
mechanism. We are aware that this proposal for increased flexibility in
data submission mechanisms may increase complexity and in some
instances necessitate additional costs for clinicians, as they may need
to establish relationships with additional data submission mechanism
vendors in order to report additional measures and/or activities for
any given performance category. We clarified that the requirements for
the performance categories remain the same, regardless of the number of
submission mechanisms used. It is also important to note that for the
improvement activities and advancing care information performance
categories, using multiple data submission mechanisms may limit
our ability to provide real-time feedback. While we strive to provide
flexibility to individual MIPS eligible clinicians and groups, we noted
that our goal within the MIPS program is to minimize complexity and
administrative burden to individual MIPS eligible clinicians and
groups.
As discussed in section II.C.4 of this final rule with comment
period, we proposed to generally apply our previously finalized and
proposed group policies to virtual groups. With respect to data
submission mechanisms, we proposed that virtual groups would be able to
use a different submission mechanism for each performance category, and
would be able to utilize multiple submission mechanisms for the quality
performance category, beginning with performance periods occurring in
2018 (82 FR 30036). However, virtual groups would be required to
utilize the same submission mechanism for the improvement activities
and the advancing care information performance categories.
For those MIPS eligible clinicians participating in a MIPS APM, who
are on an APM Participant List on at least one of the three snapshot
dates as finalized in the CY 2017 Quality Payment Program Final Rule
(81 FR 77444 through 77445), or for MIPS eligible clinicians
participating in a full TIN MIPS APM, who are on an APM Participant
List on at least one of the four snapshot dates as discussed in section
II.C.6.g.(2) of this final rule with comment period, the APM scoring
standard applies. We refer readers to Sec. 414.1370 and the CY 2017
Quality Payment Program final rule (81 FR 77246), which describes how
MIPS eligible clinicians participating in APM entities submit data to
MIPS in the form and manner required, including separate approaches to
the quality and cost performance categories applicable to MIPS APMs. We
did not propose any changes to how APM entities in MIPS APMs and their
participating MIPS eligible clinicians submit data to MIPS.
[[Page 53621]]
The following is a summary of the public comments received on the
``Performance Category Measures and Reporting: Submission Mechanisms''
proposal and our responses:
Comment: Many commenters supported the proposal to allow MIPS
eligible clinicians and groups to submit measures and activities via
multiple submission mechanisms. Several commenters noted it will help
ease reporting and administrative burden. Several commenters also noted
it will provide greater flexibility, including increasing the number of
measures available. Several commenters stated it will allow clinicians
to report the measures that are most meaningful and applicable to them.
Several commenters also stated it will help MIPS eligible clinicians
and groups successfully report required measures and meet MIPS
reporting requirements. A few commenters specifically supported the
policy to allow reporting of quality measures across multiple data
submission mechanisms because 6 clinically-applicable quality measures
may not always be available using one submission mechanism; it will
provide clinicians who belong to multi-specialty groups more ease in
reporting quality measures they may already be reporting to qualified
vendors, versus forcing different specialties to find a common
reporting platform that causes much more administrative and often
financial burden; it will allow greater flexibility in measure
selection and will particularly benefit specialists who may want to
report one or two eCQMs but will need to use a registry to report the
rest of their measure set; and it is especially helpful for those who
want to report via EHR to the extent possible even though not all
measures can be submitted via that mechanism. One commenter asked if
specialists who would have used a specialty measure set would be
required to use multiple submission methods to meet the 6-measure
requirement.
Response: We appreciate the commenters' support for our proposal.
Due to operational feasibility concerns, we are not finalizing this
proposal beginning with the CY 2018 performance period as proposed, but
instead beginning with the CY 2019 performance period. Moreover, we are
not requiring that individual MIPS eligible clinicians and groups submit via
additional submission mechanisms; however, through this proposal the
option would be available for those that have applicable measures and/
or activities available to them. As discussed in section
II.C.7.a.(2)(e) of this final rule with comment period, beginning with
the CY 2019 performance period, we will apply our validation process to
determine if other measures are available and applicable only with
respect to the data submission mechanism(s) that a MIPS eligible
clinician utilizes for the quality performance category for a
performance period. With regard to a specialty measure set, specialists
who report on a specialty measure set are only required to report on
the measures within that set, even if it is less than the required 6
measures. If the specialty set includes measures that are available
through multiple submission mechanisms, then through this policy,
beginning with the 2019 performance period, the option to report
additional measures would be available for those that have applicable
measures and/or activities available to them, which may potentially
increase their score, but they are not required to utilize multiple
submission methods to meet the 6 measure requirement. In addition, for
MIPS eligible clinicians reporting on a specialty measure set via
claims or registry, we will apply our validation process to determine
if other measures are available and applicable within the specialty
measure set only with respect to the data submission mechanism(s) that
a MIPS eligible clinician utilizes for the quality performance category
for a performance period.
Comment: A few commenters stated this proposal will allow MIPS
eligible clinicians to determine which method is most appropriate for
the different MIPS categories. Several commenters noted it will
encourage MIPS participation. Many commenters stated it will encourage
the reporting of measures through new submission methods such as QCDRs
and EHRs. A few commenters stated it will reduce burden on clinicians
and EHR vendors by allowing large groups that report under different
EHRs to report using multiple EHRs.
Response: In the CY 2017 Quality Payment Program final rule, we
finalized that for the quality performance category, an individual MIPS
eligible clinician or group that submits data on quality measures via
EHR, QCDR, qualified registry, claims, or a CMS-approved survey vendor
for the CAHPS for MIPS survey will be assigned measure achievement
points for 6 measures (1 outcome, or if an outcome measure is not
available, another high priority measure and the next 5 highest scoring
measures) as available and applicable, and will receive applicable
measure bonus points for all measures submitted that meet the bonus
criteria (81 FR 77282 through 77301). Consistent with this policy, we
would like to clarify that for performance periods beginning in 2019,
if a MIPS eligible clinician or group reports for the quality
performance category by using multiple instances of the same data
submission mechanism (for example, multiple EHRs) then all the
submissions would be scored, and the 6 quality measures with the
highest performance (that is, the greatest number of measure
achievement points) would be utilized for the quality performance
category score. As noted above, if an individual MIPS eligible
clinician or group submits the same measure through two different
mechanisms, each submission would be calculated and scored separately.
We do not have the ability to aggregate data on the same measure across
multiple submission mechanisms. We would only count the submission that
gives the clinician the higher score, thereby avoiding the double
count. For example, if a MIPS eligible clinician submits performance
data for Quality Measure 236, Controlling High Blood Pressure, using a
registry and also through an EHR, these two submissions would be scored
separately, and we would apply the submission with the higher score
towards the quality performance score. We would not aggregate the score
of the registry and EHR submission of the same measure. This approach
decreases the likelihood of cumulative overcounting in the event that
the submissions may have time or patient overlaps that may not be
readily identifiable.
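    For illustration only, the scoring approach described in this
response can be sketched as follows. The submissions and point values
are hypothetical; the sketch simply keeps the higher-scoring
submission of any duplicated measure and sums the 6 highest-scoring
measures.

def quality_achievement_points(submissions, top_n=6):
    """submissions: list of (measure_id, mechanism, achievement_points)."""
    best_per_measure = {}
    for measure_id, _mechanism, points in submissions:
        # Keep only the higher-scoring submission of the same measure;
        # scores are never aggregated across mechanisms.
        if points > best_per_measure.get(measure_id, -1):
            best_per_measure[measure_id] = points
    return sum(sorted(best_per_measure.values(), reverse=True)[:top_n])

submissions = [
    ("Q236", "registry", 7),   # Controlling High Blood Pressure via registry
    ("Q236", "EHR", 5),        # same measure via EHR -- lower score is not counted
    ("Q001", "registry", 10),
    ("Q110", "EHR", 8),
]
print(quality_achievement_points(submissions))   # 25 (7 + 10 + 8)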
Comment: One commenter supported the proposal that virtual groups would be able
to use multiple submission mechanisms for quality reporting but would
have to use the same submission mechanism for the improvement
activities and advancing care information performance categories. A few
commenters suggested that both groups and virtual groups have the same
submission requirements. Another commenter suggested that we reconsider
multiple submission mechanisms due to the complexity it will place on
clinicians.
Response: We are not finalizing our proposal that virtual groups
would be required to utilize the same submission mechanism for the
improvement activities and the advancing care information performance
categories because we believe that virtual groups should have the same
reporting capabilities as groups. Thus, groups and virtual groups have
the same submission requirements, which, for the CY 2018 performance
period, allow the use of a different submission mechanism for each
performance category, with the caveat that only one submission
mechanism may be used per
performance category. Starting
[[Page 53622]]
with the CY 2019 performance period, groups and virtual groups will be
able to utilize multiple submission mechanisms for each performance
category. As noted above, due to operational feasibility concerns, we
are not finalizing this proposal beginning with the CY 2018 performance
period as proposed, but instead beginning with the CY 2019 performance
period.
Comment: A few commenters stated this proposal would help
clinicians and groups receive the maximum number of points available.
One commenter noted it will ease the path for small and rural practice
clinicians to participate in MIPS. One commenter stated it will support
reporting the highest quality data available. One commenter noted it
may allow clinicians to complete more activities. One commenter noted
it will provide EHR and registry vendors flexibility in submitting data
on behalf of their customers. One commenter stated that while it may
add some burdens to reporting quality measures because MIPS eligible
clinicians will be required to report on 6 quality measures instead of
only the number available via a given submission mechanism, they
believe it will ultimately drive adoption of more robust
measures based on clinical data and outcomes.
Response: We note that under this policy, individual MIPS eligible
clinicians and groups are not required to, but may use multiple data
submission mechanisms to report on six quality measures in order to
potentially achieve the maximum score for the quality performance
category beginning with the 2019 performance period. Individual MIPS
eligible clinicians and groups could report on additional measures and/
or activities using multiple data submission mechanisms for the
Quality, Advancing Care Information, and Improvement Activities
performance categories should applicable measures and/or activities be
available to them. We agree that this policy provides small and rural
practice clinicians with additional flexibility to participate in MIPS
by not limiting them to the use of one submission mechanism per
performance category. We believe that MIPS eligible clinicians and
groups should select and report on measures that provide meaningful
measurement within the scope of their practice that should include a
focus on more outcomes-based measurement.
Comment: One commenter who supported the proposal expressed concern
that the flexibility may create more complexity and confusion, as well
as burden on CMS. Another commenter stated that there could be some
burden in requiring clinicians to use multiple submission mechanisms if
they have fewer than the required number of measures and activities
applicable and available under one submission mechanism, because the
requirements for the performance categories remain the same regardless
of the number of submission mechanisms used. A commenter
expressed concern with making multiple submissions part of the measure
validation process for the review of whether 6 measures are available
for reporting.
Response: We appreciate the commenters' support for our proposal.
Due to operational feasibility concerns, we are not finalizing this
proposal beginning with the CY 2018 performance period as proposed, but
instead beginning with the CY 2019 performance period. Moreover, we are
not requiring that MIPS individual clinicians and groups submit via
additional submission mechanisms; however, through this proposal the
option would be available for those that have applicable measures and/
or activities available to them. As discussed in section
II.C.7.a.(2)(e) of this final rule with comment period, beginning with
the CY 2019 performance period, we will apply our validation process to
determine if other measures are available and applicable only with
respect to the data submission mechanism(s) that a MIPS eligible
clinician utilizes for the quality performance category for a
performance period. With regard to a specialty measure set, specialists
who report on a specialty measure set are only required to report on
the measures within that set, even if it contains fewer than the required 6
measures. If the specialty set includes measures that are available
through multiple submission mechanisms, then through this policy,
beginning with the 2019 performance period, the option to report
additional measures would be available for those that have applicable
measures and/or activities available to them, which may potentially
increase their score, but they are not required to utilize multiple
submission methods to meet the 6 measure requirement. In addition, for
MIPS eligible clinicians reporting on a specialty measure set via
claims or registry, we will apply our validation process to determine
if other measures are available and applicable within the specialty
measure set only with respect to the data submission mechanism(s) that
a MIPS eligible clinician utilizes for the quality performance category
for a performance period.
Comment: A few commenters offered additional recommendations
including: That CMS eventually require a MIPS eligible clinician or
group to submit all data on measures and activities across a single
data submission mechanism of their choosing to ensure that reliable,
trustworthy, comparative data can be extracted from the MIPS eligible
clinician and/or group's MIPS performance information and to alleviate
the resource intensity associated with retaining all data across the
multiple submission mechanisms for auditing purposes; and that claims-
only reporting for the quality performance category be phased-out due
to difficulty with clinically abstracting meaningful quality data.
Response: We thank the commenters for their recommendations
regarding using a single data submission mechanism and phasing out
claims-only reporting for the quality performance category, and will
take their recommendations into consideration for future rulemaking. We
refer readers to section II.C.9.c of this final rule with comment
period for a discussion of our data validation and auditing policies.
Comment: Commenters requested that CMS continue to look for ways to
increase flexibility in the Quality Payment Program and believed the
best way to ensure participating clinicians can meet the requirements
of each performance category is to increase the number of meaningful
measures available. For clinicians who do not want to manage multiple
submission mechanisms an alternative solution would be for each
specialty within a group to create their own TINs and report as
subgroups, because the commenter stated that allowing all MIPS eligible
groups to report unique sets of measures via a single mechanism or
multiple mechanisms promotes the ability for all clinicians to have a
meaningful impact on overall MIPS performance, although the commenter
recognized that this subgroup approach could create challenges with the
current MIPS group scoring methodology.
Response: We agree that reporting on quality measures should be
meaningful for clinicians, and note that measures are taken into
consideration on an annual basis prior to rulemaking, and we encourage
stakeholders to communicate their concerns regarding gaps in measure
development to measure stewards. We thank commenters for their
suggestions regarding an alternative approach to submission mechanisms.
We would like to clarify that each newly created TIN would be
considered a new group, and as discussed in the CY 2018 Quality Payment
Program proposed rule (82 FR 30027), we intend to explore the
[[Page 53623]]
feasibility of establishing group-related policies that would permit
participation in MIPS at a subgroup level through future rulemaking. We
refer readers to section II.C.3. of this final rule with comment period
for additional information regarding group reporting.
Comment: Commenters suggested that CMS ensure that entire specialty
specific measure sets can be reported through a single submission
mechanism of their choice, specifically expressing concern for the
measures within the radiation oncology subspecialty measure set.
Response: We would like to note that a majority of the measures in
the specialty measure sets are available through registry reporting
and, specific to the commenters' concern, that all of the measures
within the radiation oncology subspecialty measure set are available
through registry reporting. A majority of the quality measures in the
MIPS program are not owned by CMS, but rather are developed and
maintained by third party measure stewards. As a part of measure
development and maintenance, measure stewards conduct feasibility
testing of adding a new submission mechanism as a reporting option for
their measure. We will share this recommendation with the measure
stewards for future consideration.
Comment: One commenter suggested that CMS retroactively provide
similar flexibility for the CY 2017 MIPS performance period.
Response: For operational and feasibility reasons, we believe that
it would not be possible to retroactively allow MIPS individual
eligible clinicians and groups to submit data through multiple
submission mechanisms for the CY 2017 MIPS performance period.
Comment: Some commenters suggested that CMS not overly rely on
claims-based measures to drive quality improvement and scoring in
future program years, that CMS develop a transition plan toward only
accepting data from electronic systems that have demonstrated abilities
to produce valid measurement, such as those EHRs that have achieved
NCQA eMeasure Certification; and that CMS create educational programs
to help clinicians and groups understand the multiple submission
option. A few commenters recommended making more quality measures
available under each of the submission mechanisms so MIPS eligible
clinicians have sufficient measures within a single submission
mechanism. One commenter stated it would inadvertently advantage large
practices that may be better equipped to track measures. One commenter
asked for clarification to distinguish between the scenarios where a
clinician is required to submit under both EHR and registry because
their EHR is not certified for enough measures and when a clinician is
required to submit under both EHR and registry because CMS has not
created enough electronic measures for the clinician's specialty.
Response: We appreciate the suggestions, and will take them into
consideration for future rulemaking. As indicated in the CY 2017
Quality Payment Program final rule (81 FR 77090), we intend to reduce
the number of claims-based measures in the future as more measures are
available through health IT mechanisms that produce valid measurement
such as registries, QCDRs, and health IT vendors. We plan to
continuously work with MIPS eligible clinicians and other stakeholders
to continue to improve the submission mechanisms available for MIPS. We
agree that there is value to EHR based reporting; however, we recognize
that there are relatively fewer measures available via EHR reporting
and we generally want to retain solutions that are low burden unless
and until we identify viable alternatives. As indicated in the quality
measures appendices in this final rule with comment period, we are
finalizing 54 out of the 275 quality measures available through EHR
reporting for the CY 2018 performance period. MIPS eligible clinicians
should evaluate the options available to them and choose which
available submission mechanism and measures they believe will provide
meaningful measurement for their scope of practice. We intend to
provide stakeholders with additional education with regards to the use
of multiple submission mechanisms by the implementation of this policy
for the CY 2019 performance period. We plan to continuously work with
MIPS eligible clinicians and other stakeholders to continue to improve
the submission mechanisms available for MIPS. It is not our intent to
provide larger practices an advantage over smaller practices; rather,
our intention is to provide all MIPS eligible clinicians and groups the
opportunity to submit data on measures that are available and
applicable to their scope of practice. We are not requiring that MIPS
individual clinicians and groups submit via additional submission
mechanisms; however, through this proposal the option would be
available for those that have applicable measures and/or activities
available to them. As discussed in section II.C.7.a.(2)(e) of this
final rule with comment period, beginning with the CY 2019 performance
period, we will apply our validation process to determine if other
measures are available and applicable only with respect to the data
submission mechanism(s) that a MIPS eligible clinician utilizes for the
quality performance category for a performance period. With regard to a
specialty measure set, specialists who report on a specialty measure
set are only required to report on the measures within that set, even
if it contains fewer than the required 6 measures. If the specialty set
includes measures that are available through multiple submission
mechanisms, then through this policy, beginning with the 2019
performance period, the option to report additional measures would be
available for those that have applicable measures and/or activities
available to them, which may potentially increase their score, but they
are not required to utilize multiple submission methods to meet the 6
measure requirement. In addition, for MIPS eligible clinicians
reporting on a specialty measure set via claims or registry, we will
apply our validation process to determine if other measures are
available and applicable within the specialty measure set only with
respect to the data submission mechanism(s) that a MIPS eligible
clinician utilizes for the quality performance category for a
performance period.
Comment: Several commenters recommended that CMS make multiple
submission mechanisms optional only. A few commenters expressed concern
that a requirement to report via multiple mechanisms to meet the
required 6 measures in the quality performance category would increase
burden on MIPS eligible clinicians and groups that are unable to meet
the minimum requirement using one submission mechanism. A few
commenters stated that MIPS eligible clinicians and groups should not
be required to contract with vendors and pay to report data on
additional quality measures that are not reportable through their
preferred method or be penalized for failing to report additional
measures via a second submission mechanism and that CMS should only
review the measures available to a clinician or group given their
chosen submission mechanism--claims, registry, EHR or QCDR--to
determine if they could have reported on additional measures. A few
commenters recommended that CMS only offer multiple submission
mechanisms as an option that could earn a clinician bonus points to
recognize investment in an additional submission mechanism. One
commenter
[[Page 53624]]
recommended that reporting using more than one submission mechanism be
required for a given performance period only if the MIPS eligible
clinician or group already has an additional submission mechanism in
place that could be utilized to submit additional measures.
Response: We are not requiring that MIPS individual clinicians and
groups submit via additional submission mechanisms; however, through
this proposal the option would be available for those that have
applicable measures and/or activities available to them. As discussed
in section II.C.7.a.(2)(e) of this final rule with comment period,
beginning with the CY 2019 performance period, we will apply our
validation process to determine if other measures are available and
applicable only with respect to the data submission mechanism(s) that a
MIPS eligible clinician utilizes for the quality performance category
for a performance period. With regard to a specialty measure set,
specialists who report on a specialty measure set are only required to
report on the measures within that set, even if it contains fewer than
the required 6 measures. If the specialty set includes measures that are
available through multiple submission mechanisms, then through this
policy, beginning with the 2019 performance period, the option to
report additional measures would be available for those that have
applicable measures and/or activities available to them, which may
potentially increase their score, but they are not required to utilize
multiple submission methods to meet the 6 measure requirement. In
addition, for MIPS eligible clinicians reporting on a specialty measure
set via claims or registry, we will apply our validation process to
determine if other measures are available and applicable within the
specialty measure set only with respect to the data submission
mechanism(s) that a MIPS eligible clinician utilizes for the quality
performance category for a performance period.
Comment: Many commenters did not support our proposal to allow
submission of measures via multiple submission mechanisms or expressed
concerns with the proposal. Several commenters expressed concern that
it would add burden, confusion, and complexity for MIPS eligible
clinicians and groups, as well as vendors, possibly requiring them to
track measures across mechanisms based on varying benchmarks and to
review measures and tools to determine if there are additional
applicable measures.
Response: We understand the commenters' concerns with regard to
burden and complexity around the use of multiple submission mechanisms.
We are not requiring that MIPS individual clinicians and groups submit
via additional submission mechanisms; however, through this proposal
the option would be available for those that have applicable measures
and/or activities available to them. As discussed in section
II.C.7.a.(2)(e) of this final rule with comment period, beginning with
the CY 2019 performance period, we will apply our validation process to
determine if other measures are available and applicable only with
respect to the data submission mechanism(s) that a MIPS eligible
clinician utilizes for the quality performance category for a
performance period. With regard to a specialty measure set, specialists
who report on a specialty measure set are only required to report on
the measures within that set, even if it contains fewer than the required 6
measures. If the specialty set includes measures that are available
through multiple submission mechanisms, then through this policy,
beginning with the 2019 performance period, the option to report
additional measures would be available for those that have applicable
measures and/or activities available to them, which may potentially
increase their score, but they are not required to utilize multiple
submission methods to meet the 6 measure requirement. In addition, for
MIPS eligible clinicians reporting on a specialty measure set via
claims or registry, we will apply our validation process to determine
if other measures are available and applicable within the specialty
measure set only with respect to the data submission mechanism(s) that
a MIPS eligible clinician utilizes for the quality performance category
for a performance period.
Comment: A few commenters expressed concern that this policy could
substantially increase costs and burden for MIPS eligible clinicians,
as it may require a MIPS eligible clinician or group practice to
purchase an additional data submission mechanism in order to report 6
measures, and another commenter expressed concern for financial impact
on small and solo practices. A few commenters stated that it would
increase costs to vendors, which would be passed on to customers and
patients. One commenter expressed concern regarding decreased
productivity, and increased opportunity for coding errors. A few
commenters expressed concern that they may be required to report on
measures that are potentially not clinically relevant. One commenter
noted that requiring the clinician to use multiple submission
mechanisms would penalize them for something out of their control,
specifically development of specialty-specific eCQMs, noting that even
with software certified to all 64 eCQMs, fewer than 6 have a positive
denominator. A few commenters expressed concern with how this proposal
would interact with the measure validation process to determine whether
a clinician could have reported additional measures, specifically
expressing concern that it would require eligible clinicians to look
across multiple mechanisms to fulfill the 6-measure requirement and
that MIPS eligible clinicians should not be held accountable to meet
more measures or look across submission mechanisms, and potentially
invest in multiple mechanisms, because CMS is making additional
submission mechanisms available.
Response: We are not requiring that MIPS individual clinicians and
groups submit via additional submission mechanisms; however, through
this proposal the option would be available for those that have
applicable measures and/or activities available to them. As discussed
in section II.C.7.a.(2)(e) of this final rule with comment period,
beginning with the CY 2019 performance period, we will apply our
validation process to determine if other measures are available and
applicable only with respect to the data submission mechanism(s) that a
MIPS eligible clinician utilizes for the quality performance category
for a performance period. With regard to a specialty measure set,
specialists who report on a specialty measure set are only required to
report on the measures within that set, even if it contains fewer than the
required 6 measures. If the specialty set includes measures that are
available through multiple submission mechanisms, then through this
policy, beginning with the 2019 performance period, the option to
report additional measures would be available for those that have
applicable measures and/or activities available to them, which may
potentially increase their score, but they are not required to utilize
multiple submission methods to meet the 6 measure requirement. In
addition, for MIPS eligible clinicians reporting on a specialty measure
set via claims or registry, we will apply our validation process to
determine if other measures are available and applicable within the
specialty measure set only with respect to the data submission
mechanism(s) that a MIPS eligible clinician utilizes for the quality
[[Page 53625]]
performance category for a performance period.
Comment: One commenter recommended that CMS withhold the option for
submission through multiple mechanisms in the quality category for
future implementation, or until CMS has become comfortable with the
data received in year 1 of the program.
Response: We agree with the commenter and due to operational
feasibility concerns, we have determined that this proposal will be
implemented beginning with the CY 2019 performance period. By the time
this proposal is implemented for the CY 2019 performance period, we
will have greater familiarity with the way data is submitted to
CMS based on submissions from the CY 2017 performance period.
Comment: One commenter asked that CMS confirm that a MIPS eligible
clinician would be allowed to submit data using multiple QCDRs under
the same TIN/NPI or TIN because allowing submission via multiple QCDRs
in single TIN could serve as a pathway forward for greater specialist
participation within multispecialty groups.
Response: A MIPS individual eligible clinician or group would be
able to submit data using multiple QCDRs if they are able to find
measures supported by other QCDRs that would provide meaningful
measurement for the clinicians, and those measures are applicable.
Consistent with the policy finalized in the CY 2017 Quality Payment
Program final rule (81 FR 77282 through 77301), we would like to
clarify that beginning with the CY 2019 performance period, if a MIPS
eligible clinician or group reports for the quality performance
category by using multiple instances of the same submission mechanism
(for example, multiple QCDRs), then all the submissions would be
scored, and the 6 quality measures with the highest performance (that
is, the greatest number of measure achievement points) would be
utilized for the quality performance category score. As noted above, if
an individual MIPS eligible clinician or group submits the same measure
through two different submission mechanisms, each submission would be
calculated and scored separately. We do not have the ability to
aggregate data on the same measure across submission mechanisms.
Similarly, data completeness cannot be combined for the same measure
that is reported through multiple submission mechanisms, but data
completeness would need to be achieved for each measure and associated
submission mechanism.
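As a rough illustration of the data completeness point above (hypothetical names; not CMS code), completeness is checked for each measure and submission mechanism pairing on its own rather than on pooled data; the threshold value shown below is an assumption for the example and is not specified in this passage.

```python
# Minimal sketch (not CMS code): data completeness is assessed per
# measure/mechanism pair, never by combining data across mechanisms.
def meets_completeness(reported_patients, eligible_patients, threshold=0.5):
    """True if this measure, for this one submission mechanism, was
    reported on at least `threshold` of the eligible patients."""
    if eligible_patients == 0:
        return False
    return reported_patients / eligible_patients >= threshold

# The same measure sent through two mechanisms is checked twice, once per
# mechanism; the two submissions are not pooled to reach the threshold.
registry_ok = meets_completeness(reported_patients=40, eligible_patients=100)
ehr_ok = meets_completeness(reported_patients=30, eligible_patients=100)
print(registry_ok, ehr_ok)  # False, False -- 40% and 30% are not combined into 70%
```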
Comment: One commenter requested clarification on how the data
completeness will be determined if reporting the same quality measures
via multiple submission mechanisms, for example, if a clinician
utilized two submission mechanisms to report the same measure, would 50
percent data completeness need to be achieved for each submission
mechanism or for the combined data submitted. Another commenter asked
how CMS will take into consideration data that is submitted using the
same submission mechanism, but using two different products or
services, specifically data submitted from two different certified EHRs
in a single performance period when clinicians switch EHRs mid-
performance year.
Response: In the CY 2017 Quality Payment Program final rule, we
finalized that for the quality performance category, an individual MIPS
eligible clinician or group that submits data on quality measures via
EHR, QCDR, qualified registry, claims, or a CMS-approved survey vendor
for the CAHPS for MIPS survey will be assigned measure achievement
points for 6 measures (1 outcome, or if an outcome measure is not
available, another high priority measure and the next 5 highest scoring
measures) as available and applicable, and will receive applicable
measure bonus points for all measures submitted that meet the bonus
criteria (81 FR 77282 through 77301). Consistent with this policy, we
would like to clarify that for performance periods beginning in 2019,
if a MIPS eligible clinician or group reports for the quality
performance category by using multiple instances of the same data
submission mechanism (for example, multiple EHRs) then all the
submissions would be scored, and the 6 quality measures with the
highest performance (that is, the greatest number of measure
achievement points) would be utilized for the quality performance
category score. As noted above, if an individual MIPS eligible
clinician or group submits the same measure through two different
mechanisms, each submission would be calculated and scored separately.
We do not have the ability to aggregate data on the same measure across
multiple submission mechanisms. We would only count the submission that
gives the clinician the higher score, thereby avoiding the double
count. For example, if a MIPS eligible clinician submits performance
data for Quality Measure 236, Controlling High Blood Pressure, using a
registry and also through an EHR, these two submissions would be scored
separately, and we would apply the submission with the higher score
towards the quality performance score; we would not aggregate the
score of the registry and EHR submission of the same measure. This
approach decreases the likelihood of cumulative overcounting in the
event that the submissions may have time or patient overlaps that may
not be readily identifiable.
Final Action: After consideration of the public comments received,
we are finalizing our proposal at Sec. 414.1325(d) with modification.
Specifically, due to operational reasons, and to allow for additional
time to communicate how this policy intersects with our measure
applicability policies, we are not finalizing this policy for the CY
2018 performance period. For the CY 2018 performance period, we intend
to continue implementing the submission mechanisms policies as
finalized in the CY 2017 Quality Payment Program final rule (81 FR
77094) that individual MIPS eligible clinicians and groups may elect to
submit information via multiple submission mechanisms; however, they
must use one submission mechanism per performance category. We are,
however, finalizing our proposal beginning with the CY 2019 performance
period. Thus, for purposes of the 2021 MIPS payment year and future
years, beginning with performance periods occurring in 2019, individual
MIPS eligible clinicians, groups, and virtual groups may submit data on
measures and activities, as applicable, via multiple data submission
mechanisms for a single performance category (specifically, the
quality, improvement activities, or advancing care information
performance category). Individual MIPS eligible clinicians and groups
that have fewer than the required number of measures and activities
applicable and available under one submission mechanism may submit data
on additional measures and activities via one or more additional
submission mechanisms, as necessary, provided that such measures and
activities are applicable and available to them.
We are finalizing our proposal with modification. Specifically, we
are not finalizing our proposal for the CY 2018 performance period, and
our previously finalized policies continue to apply for the CY 2018
performance period. Thus, for the CY 2018 performance period, virtual
groups may elect to submit information via multiple submission
mechanisms; however, they must use the same identifier for all performance
categories, and they may only use one submission mechanism per
performance
[[Page 53626]]
category. We are, however, finalizing our proposal beginning with the
CY 2019 performance period. Thus, beginning with the CY 2019
performance period, virtual groups will be able to use multiple
submission mechanisms for each performance category.
(2) Submission Deadlines
In the CY 2017 Quality Payment Program final rule (81 FR 77097), we
finalized submission deadlines by which all associated data for all
performance categories must be submitted for the submission mechanisms
described in this rule.
As specified at Sec. 414.1325(f)(1), the data submission deadline
for the qualified registry, QCDR, EHR, and attestation submission
mechanisms is March 31 following the close of the performance period.
The submission period will begin prior to January 2 following the close
of the performance period, if technically feasible. For example, for
performance periods occurring in 2018, the data submission period will
occur prior to January 2, 2019, if technically feasible, through March
31, 2019. If it is not technically feasible to allow the submission
period to begin prior to January 2 following the close of the
performance period, the submission period will occur from January 2
through March 31 following the close of the performance period. In any
case, the final deadline will remain March 31, 2019.
At Sec. 414.1325(f)(2), we specified that for the Medicare Part B
claims submission mechanism, data must be submitted on claims with
dates of service during the performance period that must be processed
no later than 60 days following the close of the performance period.
Lastly, for the CMS Web Interface submission mechanism, at Sec.
414.1325(f)(3), we specified that the data must be submitted during an
8-week period following the close of the performance period that will
begin no earlier than January 2, and end no later than March 31. For
example, the CMS Web Interface submission period could span an 8-week
timeframe beginning January 16 and ending March 13. The specific
deadline during this timeframe will be published on the CMS Web site.
We did not propose any changes to the submission deadlines in the CY
2018 Quality Payment Program proposed rule.
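The deadline rules summarized above can be illustrated with the following hedged sketch for a 2018 performance period; the dates, function names, and example Web Interface window are assumptions drawn from the text and are not an official timeline.

```python
# Rough sketch (not CMS tooling) of the submission deadlines described above.
from datetime import date, timedelta

PERFORMANCE_PERIOD_END = date(2018, 12, 31)

def registry_qcdr_ehr_attestation_deadline(period_end=PERFORMANCE_PERIOD_END):
    # March 31 following the close of the performance period.
    return date(period_end.year + 1, 3, 31)

def claims_processing_deadline(period_end=PERFORMANCE_PERIOD_END):
    # Claims with dates of service in the performance period must be
    # processed no later than 60 days after the close of the period.
    return period_end + timedelta(days=60)

def web_interface_window(open_date):
    # An 8-week window that begins no earlier than January 2 and ends no
    # later than March 31 following the performance period.
    close_date = open_date + timedelta(weeks=8)
    assert open_date >= date(open_date.year, 1, 2)
    assert close_date <= date(open_date.year, 3, 31)
    return open_date, close_date

if __name__ == "__main__":
    print(registry_qcdr_ehr_attestation_deadline())  # 2019-03-31
    print(claims_processing_deadline())              # 2019-03-01
    print(web_interface_window(date(2019, 1, 16)))   # (2019-01-16, 2019-03-13)
```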
b. Quality Performance Criteria
(1) Background
Sections 1848(q)(1)(A)(i) and (ii) of the Act require the Secretary
to develop a methodology for assessing the total performance of each
MIPS eligible clinician according to performance standards and, using
that methodology, to provide for a final score for each MIPS eligible
clinician. Section 1848(q)(2)(A)(i) of the Act requires us to use the
quality performance category in determining each MIPS eligible
clinician's final score, and section 1848(q)(2)(B)(i) of the Act
describes the measures and activities that must be specified under the
quality performance category.
The statute does not specify the number of quality measures on
which a MIPS eligible clinician must report, nor does it specify the
amount or type of information that a MIPS eligible clinician must
report on each quality measure. However, section 1848(q)(2)(C)(i) of
the Act requires the Secretary, as feasible, to emphasize the
application of outcomes-based measures.
Section 1848(q)(1)(E) of the Act requires the Secretary to
encourage the use of QCDRs, and section 1848(q)(5)(B)(ii)(I) of the Act
requires the Secretary to encourage the use of CEHRT and QCDRs for
reporting measures under the quality performance category under the
final score methodology, but the statute does not limit the Secretary's
discretion to establish other reporting mechanisms.
Section 1848(q)(2)(C)(iv) of the Act generally requires the
Secretary to give consideration to the circumstances of non-patient
facing MIPS eligible clinicians and allows the Secretary, to the extent
feasible and appropriate, to apply alternative measures or activities
to such clinicians.
As discussed in the CY 2017 Quality Payment Program final rule (81
FR 77098 through 77099), we finalized MIPS quality criteria that focus
on measures that are important to beneficiaries and maintain some of
the flexibility from PQRS, while addressing several of the comments we
received in response to the CY 2017 Quality Payment Program proposed
rule and the MIPS and APMs RFI.
To encourage meaningful measurement, we finalized allowing
individual MIPS eligible clinicians and groups the flexibility to
determine the most meaningful measures and data submission mechanisms
for their practice.
To simplify the reporting criteria, we aligned the
submission criteria for several of the data submission mechanisms.
To reduce administrative burden and focus on measures that
matter, we lowered the required number of the measures for several of
the data submission mechanisms, yet still required that certain types
of measures, particularly outcome measures, be reported.
To create alignment with other payers and reduce burden on
MIPS eligible clinicians, we incorporated measures that align with
other national payers.
To create a more comprehensive picture of a practice's
performance, we also finalized the use of all-payer data where
possible.
As beneficiary health is always our top priority, we finalized
criteria to continue encouraging the reporting of certain measures such
as outcome, appropriate use, patient safety, efficiency, care
coordination, or patient experience measures. However, as discussed in
the CY 2017 Quality Payment Program final rule (81 FR 77098), we
removed the requirement for measures to span across multiple domains of
the NQS. While we do not require that MIPS eligible clinicians select
measures across multiple domains, we encourage them to do so.
(2) Contribution to Final Score
For MIPS payment year 2019, the quality performance category will
account for 60 percent of the final score, subject to the Secretary's
authority to assign different scoring weights under section
1848(q)(5)(F) of the Act. Section 1848(q)(5)(E)(i)(I)(aa) of the Act
states that the quality performance category will account for 30
percent of the final score for MIPS. However, section
1848(q)(5)(E)(i)(I)(bb) of the Act stipulates that for the first and
second years for which MIPS applies to payments, the percentage of the
final score applicable for the quality performance category will be
increased so that the total percentage points of the increase equals
the total number of percentage points by which the percentage applied
for the cost performance category is less than 30 percent. Section
1848(q)(5)(E)(i)(II)(bb) of the Act requires that, for the transition
year for which MIPS applies to payments, not more than 10 percent of
the final score shall be based on the cost performance category.
Furthermore, section 1848(q)(5)(E)(i)(II)(bb) of the Act states that,
for the second year for which MIPS applies to payments, not more than
15 percent of the final score shall be based on the cost performance
category.
In the CY 2017 Quality Payment Program final rule (81 FR 77100), we
finalized at Sec. 414.1330(b) that, for MIPS payment years 2019 and
2020, 60 percent and 50 percent, respectively, of
[[Page 53627]]
the MIPS final score will be based on the quality performance category.
For the third and future years, 30 percent of the MIPS final score will
be based on the quality performance category.
As discussed in section II.C.6.d. of this final rule with comment
period, we are not finalizing our proposal to weight the cost
performance category at zero percent for the second MIPS payment year
(2020) and are instead retaining the previously finalized cost
performance category weight of 10 percent for that year. In accordance
with section 1848(q)(5)(E)(i)(I)(bb) of the Act, for the first 2 years,
the percentage of the MIPS final score that would otherwise be based on
the quality performance category (that is, 30 percent) must be
increased by the same number of percentage points by which the
percentage based on the cost performance category is less than 30
percent. We proposed to modify Sec. 414.1330(b)(2) to reweight the
percentage of the MIPS final score based on the quality performance
category for MIPS payment year 2020 as may be necessary to account for
any reweighting of the cost performance category, if finalized (82 FR
30037). Thus, since we are not finalizing our proposal to reweight the
cost performance category to zero percent for MIPS payment year 2020,
we are not finalizing our proposal to modify Sec. 414.1330(b)(2), as
the performance in the quality performance category currently comprises
50 percent of a MIPS eligible clinician's final score for MIPS payment
year 2020, and no reweighting is necessary to account for the
previously finalized cost performance category weight. We refer readers
to section II.C.6.d. of this final rule with comment period for more
information on the cost performance category.
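The reweighting arithmetic described above amounts to adding to the statutory 30 percent quality weight the percentage points by which the cost weight falls below 30 percent; the short sketch below (illustrative only, with hypothetical names) reproduces that calculation for the finalized weights.

```python
# Illustrative arithmetic only (not CMS code): quality picks up the
# percentage points by which the cost category weight is below 30 percent
# for the first and second MIPS payment years.
def quality_weight(cost_weight_pct, statutory_quality_pct=30, statutory_cost_pct=30):
    return statutory_quality_pct + (statutory_cost_pct - cost_weight_pct)

print(quality_weight(0))   # 60 -- transition year (2019 MIPS payment year)
print(quality_weight(10))  # 50 -- 2020 MIPS payment year, as finalized here
print(quality_weight(30))  # 30 -- 2021 MIPS payment year and beyond
```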
Section 1848(q)(5)(B)(i) of the Act requires the Secretary to treat
any MIPS eligible clinician who fails to report on a required measure
or activity as achieving the lowest potential score applicable to the
measure or activity. Specifically, under our finalized scoring
policies, an individual MIPS eligible clinician or group that reports
on all required measures and activities could potentially obtain the
highest score possible within the performance category, assuming they
perform well on the measures and activities they report. An individual
MIPS eligible clinician or group who does not submit data on a required
measure or activity would receive a zero score for the unreported items
in the performance category (in accordance with section
1848(q)(5)(B)(i) of the Act). The individual MIPS eligible clinician or
group could still obtain a relatively good score by performing very
well on the remaining items, but a zero score would prevent the
individual MIPS eligible clinician or group from obtaining the highest
possible score within the performance category.
The following is a summary of the public comments received on the
``Contribution to Final Score'' proposal and our responses:
Comment: Many commenters supported the policy to weight the quality
performance category at 60 percent of the final score for the 2020 MIPS
payment year. One commenter expressed appreciation for the proposal
because it maintains consistency within the program, which facilitates
easier implementation for upcoming years.
Response: We appreciate the commenters' support. However, as noted
above, we are not finalizing our proposal at Sec. 414.1330(b)(2) to
provide that performance in the quality performance category will
comprise 60 percent of a MIPS eligible clinician's final score for MIPS
payment year 2020. We believe that keeping our current policy to
weight the quality performance category at 50 percent and the cost
performance category at 10 percent will help ease the transition so
that MIPS eligible clinicians can understand how they will be scored in
future years under MIPS generally and the cost performance category in
particular, as the cost performance category will be weighted at 30
percent beginning with MIPS payment year 2021.
Comment: One commenter did not support the policy to weight the
quality performance category at 60 percent of the final score for the
2020 MIPS payment year. Instead, the commenter recommended that CMS
retain the previously finalized weighting for the quality performance
category of 50 percent for the 2020 MIPS payment year. The commenter
explained that since the 2021 MIPS payment year will require a cost
performance category weighting of 30 percent, they recommended that CMS
not retreat from progressing toward that amount in the intervening
year.
Response: We appreciate the commenter's recommendation and note
that we are not finalizing the cost performance category weighting at
zero percent toward the final score for the 2020 MIPS payment year.
Further, the percentage of the MIPS final score based on the quality
performance category for MIPS payment year 2020 will be 50 percent in
accordance with section 1848(q)(5)(E)(i)(I)(bb) of the Act. We refer
readers to section II.C.6.d. of this final rule with comment period for
more information on the cost performance category.
Comment: One commenter requested clarification on the policy to
weight the quality performance category at 60 percent of the final
score for the 2020 MIPS payment year instead of 50 percent. The
commenter requested clarification as to which performance category the
10 percent difference would apply if the quality performance category
was weighted at 50 percent in the 2020 MIPS payment year.
Response: As previously noted in this final rule with comment
period, for the 2020 MIPS payment year, the quality performance
category will be weighted at 50 percent. The 10 percent difference will
be applied to the cost performance category.
Comment: A few commenters urged CMS to reconsider the proposal to
weight the quality performance category at 60 percent of the final
score for the 2020 MIPS payment year for non-patient facing MIPS
eligible clinicians. One commenter noted that the quality performance
category accounts for 85 percent of the total score for pathologists,
and placing this much weight on the quality performance category puts
pathologists at an unfair disadvantage given the lack of reportable
measures. The commenter recommended that the improvement activity
performance category be weighted more heavily at a 50 percent weight
and that the quality performance category receive a 50 percent weight.
Another commenter indicated that it was not possible for non-patient
facing MIPS eligible clinicians to achieve a score higher than 40
percent in the quality performance category, given a lack of measures
and given that those measures that are applicable are only worth 3
points. While this score allows them to avoid a penalty, the commenter
noted it precludes them from achieving a bonus. Thus, the commenter
recommended that CMS reweight the quality performance category for non-
patient facing MIPS eligible clinicians so that they can receive a
score of 70 percent or higher. This would give non-patient facing MIPS
eligible clinicians motivation for improvement as well as encourage
them to continue to participate in the Quality Payment Program should
it become voluntary.
Response: As previously noted in this final rule with comment
period, we are not finalizing our proposal to reweight the quality
performance to 60 percent of the final score or the cost performance
category to zero percent of the final score for the 2020 MIPS payment
year.
[[Page 53628]]
Therefore, we are keeping our previously finalized policy to weight the
quality performance category at 50 percent and the cost performance
category at 10 percent for the 2020 MIPS payment year. It is important
to note that for the 2021 MIPS payment year that the quality
performance category will be 30 percent of the final score, and the
cost performance category will be 30 percent of the final score as
required by statute. We cannot weight the improvement activities
performance category more heavily as suggested because section
1848(q)(5)(E)(i)(III) of the Act specifies that the improvement
activities performance category will account for 15 percent of the
final score and was codified as such at Sec. 414.1355. Regarding the
comment on applicable measures being worth fewer points, we note that
non-patient facing MIPS eligible clinicians may report on a specialty-
specific measure set (which may have fewer than the required six
measures) or may report through a QCDR that can report QCDR measures in
order to earn the full points in the quality performance category.
Final Action: After consideration of the public comments, we are
not finalizing our proposal at Sec. 414.1330(b)(2) to provide that
performance in the quality performance category will comprise 60
percent of a MIPS eligible clinician's final score for MIPS payment
year 2020. Rather, we will maintain our previously finalized
policy at Sec. 414.1330(b)(2) to provide that the performance in the
quality performance category will comprise 50 percent of a MIPS
eligible clinician's final score for MIPS payment year 2020.
(3) Quality Data Submission Criteria
(a) Submission Criteria
(i) Submission Criteria for Quality Measures Excluding Groups Reporting
via the CMS Web Interface and the CAHPS for MIPS Survey
In the CY 2017 Quality Payment Program final rule (81 FR 77114), we
finalized at Sec. 414.1335(a)(1) that individual MIPS eligible
clinicians submitting data via claims and individual MIPS eligible
clinicians and groups submitting data via all mechanisms (excluding the
CMS Web Interface and the CAHPS for MIPS survey) are required to meet
the following submission criteria. For the applicable period during the
performance period, the individual MIPS eligible clinician or group
will report at least six measures, including at least one outcome
measure. If an applicable outcome measure is not available, the
individual MIPS eligible clinician or group will be required to report
one other high priority measure (appropriate use, patient safety,
efficiency, patient experience, and care coordination measures) in lieu
of an outcome measure. If fewer than six measures apply to the
individual MIPS eligible clinician or group, then the individual MIPS
eligible clinician or group would be required to report on each measure
that is applicable. We defined ``applicable'' to mean measures relevant
to a particular MIPS eligible clinician's services or care rendered. As
discussed in section II.C.7.a.(2) of this final rule with comment
period, we will only make determinations as to whether a sufficient
number of measures are applicable for claims-based and registry
submission mechanisms; we will not make this determination for EHR and
QCDR submission mechanisms, for example.
Alternatively, the individual MIPS eligible clinician or group will
report one specialty measure set, or the measure set defined at the
subspecialty level, if applicable. If the measure set contains fewer
than six measures, MIPS eligible clinicians will be required to report
all available measures within the set. If the measure set contains six
or more measures, MIPS eligible clinicians will be required to report
at least six measures within the set. Regardless of the number of
measures that are contained in the measure set, MIPS eligible
clinicians reporting on a measure set will be required to report at
least one outcome measure or, if no outcome measures are available in
the measure set, the MIPS eligible clinician will report another high
priority measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) within the measure set in
lieu of an outcome measure. MIPS eligible clinicians may choose to
report measures in addition to those contained in the specialty measure
set and will not be penalized for doing so, provided that such MIPS
eligible clinicians follow all requirements discussed here.
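As an informal illustration of the submission criteria described above (not CMS validation logic; the record layout and names are hypothetical), a check might look like the following sketch.

```python
# Minimal sketch (not CMS code) of the submission criteria: at least six
# measures including one outcome measure, or another high priority measure
# if no applicable outcome measure is available; if fewer than six measures
# apply, report every applicable one.
from dataclasses import dataclass

HIGH_PRIORITY_TYPES = {
    "outcome", "appropriate use", "patient safety",
    "efficiency", "patient experience", "care coordination",
}

@dataclass
class Measure:
    measure_id: str
    measure_type: str  # e.g. "outcome", "process", "patient safety"

def meets_submission_criteria(reported, applicable_count, outcome_available=True):
    """reported: list of Measure; applicable_count: how many measures apply
    to the clinician or group for the performance period."""
    required = min(6, applicable_count)
    if len(reported) < required:
        return False
    has_outcome = any(m.measure_type == "outcome" for m in reported)
    has_high_priority = any(m.measure_type in HIGH_PRIORITY_TYPES for m in reported)
    # One outcome measure is required; if no applicable outcome measure is
    # available, another high priority measure is reported in its place.
    return has_outcome if outcome_available else has_high_priority

example = [
    Measure("236", "outcome"),
    Measure("001", "process"),
    Measure("110", "process"),
    Measure("130", "process"),
    Measure("226", "process"),
    Measure("317", "process"),
]
print(meets_submission_criteria(example, applicable_count=10))  # True
```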
In accordance with Sec. 414.1335(a)(1)(ii), individual MIPS
eligible clinicians and groups will select their measures from either
the set of all MIPS measures listed or referenced in Table A of the
Appendix in this final rule with comment period or one of the specialty
measure sets listed in Table B of the Appendix in this final rule with
comment period. We note that some specialty measure sets include
measures grouped by subspecialty; in these cases, the measure set is
defined at the subspecialty level. Previously finalized quality
measures may be found in the CY 2017 Quality Payment Program final rule
(81 FR 77558 through 77816).
We also finalized the definition of a high priority measure at
Sec. 414.1305 to mean an outcome, appropriate use, patient safety,
efficiency, patient experience, or care coordination quality measure.
Except as discussed in section II.C.6.b.(3)(a) of this final rule with
comment period with regard to the CMS Web Interface and the CAHPS for
MIPS survey, we did not propose any changes to the submission criteria
or definitions established for measures in the proposed rule.
In the CY 2017 Quality Payment Program final rule (81 FR 77114), we
solicited comments regarding adding a requirement to our finalized
policy that patient-facing MIPS eligible clinicians would be required
to report at least one cross-cutting measure in addition to the high
priority measure requirement for further consideration for the Quality
Payment Program Year 2 and future years. For clarification, we consider
a cross-cutting measure to be any measure that is broadly applicable
across multiple clinical settings and individual MIPS eligible
clinicians or groups within a variety of specialties. We specifically
requested feedback on how we could construct a cross-cutting measure
requirement that would be most meaningful to MIPS eligible clinicians
from different specialties and that would have the greatest impact on
improving the health of populations. We refer readers to the CY 2018
Quality Payment Program proposed rule (82 FR 30038 through 30039) for a
full discussion of the comments received and responses provided.
Except as discussed in section II.C.6.b.(3)(a)(iii) of this final
rule with comment period with regard to the CAHPS for MIPS survey, we
did not propose any changes to the submission criteria for quality
measures. We solicited additional feedback on meaningful ways to
incorporate cross-cutting measurement into MIPS and the Quality Payment
Program generally. We received several comments regarding incorporating
cross-cutting measurements into the Quality Payment Program and will
take them into consideration in future rulemaking.
(ii) Submission Criteria for Quality Measures for Groups Reporting via
the CMS Web Interface
In the CY 2017 Quality Payment Program final rule (81 FR 77116), we
finalized at Sec. 414.1335(a)(2) the following criteria for the
submission of data on quality measures by registered groups of 25 or
more eligible clinicians
[[Page 53629]]
who want to report via the CMS Web Interface. For the applicable 12-
month performance period, the group would be required to report on all
measures included in the CMS Web Interface completely, accurately, and
timely by populating data fields for the first 248 consecutively ranked
and assigned Medicare beneficiaries in the order in which they appear
in the group's sample for each module or measure. If the sample of
eligible assigned beneficiaries is less than 248, then the group would
report on 100 percent of assigned beneficiaries. A group would be
required to report on at least one measure for which there is Medicare
patient data. Groups reporting via the CMS Web Interface are required
to report on all of the measures in the set. Any measures not reported
would be considered zero performance for that measure in our scoring
algorithm. Specifically, we proposed to revise Sec. 414.1335(a)(2) to
clarify that the CMS Web Interface criteria applies only to groups of
25 or more eligible clinicians (82 FR 30039). As previously finalized
at Sec. 414.1335(a)(2)(i), groups using the CMS Web Interface must
report on all measures included in the CMS Web Interface and report on
the first 248 consecutively ranked beneficiaries in the sample for each
measure or module.
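The CMS Web Interface sampling rule described above can be sketched as follows; this is illustrative only, and the names are hypothetical.

```python
# Simple illustration (not CMS code): report on the first 248 consecutively
# ranked and assigned beneficiaries for each measure or module, or on
# 100 percent of assigned beneficiaries when the sample is smaller than 248.
def web_interface_reporting_set(ranked_beneficiaries, sample_cap=248):
    """ranked_beneficiaries: beneficiaries in the order they appear in the
    group's sample for a measure or module."""
    if len(ranked_beneficiaries) < sample_cap:
        return list(ranked_beneficiaries)           # report 100 percent
    return list(ranked_beneficiaries[:sample_cap])  # first 248 consecutively ranked

print(len(web_interface_reporting_set(range(1000))))  # 248
print(len(web_interface_reporting_set(range(200))))   # 200
```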
In the CY 2017 Quality Payment Program final rule (81 FR 77116), we
finalized to continue to align the 2019 CMS Web Interface beneficiary
assignment methodology with the attribution methodology for two of the
measures that were formerly in the VM: the acute and chronic composite
measures of Agency for Healthcare Research and Quality (AHRQ)
Prevention Quality Indicators (PQIs) discussed in the CY 2017 Quality
Payment Program proposed rule (81 FR 28192) and total per capita cost
for all attributed beneficiaries discussed in the CY 2017 Quality
Payment Program proposed rule (81 FR 28196). When establishing MIPS, we
also finalized a modified attribution process to update the definition
of primary care services and to adapt the attribution to different
identifiers used in MIPS. These changes are discussed in the CY 2017
Quality Payment Program proposed rule (81 FR 28196).
We clarify that the attribution methodology for the CMS Web
Interface implemented under MIPS is similar to the attribution
methodology implemented under the Physician Quality Reporting System
(PQRS) Group Practice Reporting Option (GPRO) Web Interface, which
utilizes a two-step attribution process to associate beneficiaries with
TINs during the period in which performance is assessed. The process
attributes a beneficiary to a TIN that bills the plurality of primary
care services for that beneficiary. In order to conduct attribution for
the CMS Web Interface, we utilize retrospective assignment to identify
beneficiaries eligible for sampling and identify the beneficiary claims
that will be utilized for the calculations of cost. Beneficiary
assignment for groups is based on a 10-month period (between January
and October) and determined retrospectively after the month of October
for the applicable performance period. We note that it is not
operationally feasible for us to utilize a period longer than 10
months to assess claims data for beneficiary assignment for a
performance period.
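The plurality step of the attribution process described above can be illustrated with the following sketch; the claim structure and the use of allowed charges to measure plurality are assumptions for the example, not a statement of the actual attribution specification.

```python
# Hedged sketch (not the actual attribution algorithm): a beneficiary is
# attributed to the TIN that billed the plurality of his or her primary care
# services during the 10-month assignment window (January through October).
from collections import Counter

def attribute_beneficiary(primary_care_claims):
    """primary_care_claims: list of (tin, allowed_charges) for one
    beneficiary's primary care services in the assignment window.
    Returns the TIN with the plurality of primary care services."""
    if not primary_care_claims:
        return None
    totals = Counter()
    for tin, allowed in primary_care_claims:
        totals[tin] += allowed
    return totals.most_common(1)[0][0]

claims = [("TIN-A", 120.0), ("TIN-B", 80.0), ("TIN-A", 40.0)]
print(attribute_beneficiary(claims))  # TIN-A
```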
Lastly, we note that groups reporting via the CMS Web Interface may
also report the CAHPS for MIPS survey and receive bonus points for
submitting that measure. We did not propose any changes to the
submission criteria for quality measures for groups reporting via the
CMS Web Interface in the proposed rule.
The following is a summary of the public comments received on the
``Submission Criteria for Quality Measures for Groups Reporting via the
CMS Web Interface'' proposal and our responses:
Comment: One commenter suggested that CMS allow groups with fewer
than 25 eligible clinicians (such as 2 or more eligible clinicians in a
group) to use CMS Web Interface reporting. The commenter was concerned
that the Quality Payment Program is more limiting than PQRS with regard
to available submission mechanisms.
Response: The CMS Web Interface has been limited to groups of 25 or
more eligible clinicians because smaller groups have not been able to
meet the data submission requirements on the sample of the Medicare
Part B patients we provide. We would like to clarify that we have made
available the same submission mechanisms for the Quality Payment
Program that were available for PQRS. In addition, we are finalizing
our proposal to revise Sec. 414.1325(d) for purposes of the 2021 MIPS
payment year and future years to allow individual MIPS eligible
clinicians and groups to submit measures and activities, as applicable,
via as many submission mechanisms as necessary to meet the requirements
of the quality, improvement activities, or advancing care information
performance categories. We refer readers to section II.C.6.a.(1) of
this final rule with comment period for more information on submission
mechanisms.
Final Action: After consideration of the public comments, we are
finalizing our proposal at Sec. 414.1335(a)(2) to clarify that the CMS
Web Interface criteria applies only to groups of 25 or more eligible
clinicians. As previously finalized at Sec. 414.1335(a)(2)(i), the
group must report on the first 248 consecutively ranked beneficiaries
in the sample for each measure or module.
(iii) Performance Criteria for Quality Measures for Groups Electing To
Report Consumer Assessment of Healthcare Providers and Systems (CAHPS)
for MIPS Survey
In the CY 2017 Quality Payment Program final rule (81 FR 77100), we
finalized at Sec. 414.1335(a)(3) the following criteria for the
submission of data on the CAHPS for MIPS survey by registered groups
via CMS-approved survey vendor: For the applicable 12-month performance
period, a group that wishes to voluntarily elect to participate in the
CAHPS for MIPS survey measure must use a survey vendor that is approved
by CMS for a particular performance period to transmit survey measures
data to CMS. The CAHPS for MIPS survey counts for one measure towards
the MIPS quality performance category and, as a patient experience
measure, also fulfills the requirement to report at least one high
priority measure in the absence of an applicable outcome measure. In
addition, groups that elect this data submission mechanism must select
an additional group data submission mechanism (that is, qualified
registries, QCDRs, EHR, etc.) in order to meet the data submission
criteria for the MIPS quality performance category. The CAHPS for MIPS
survey will count as one patient experience measure, and the group will
be required to submit at least five other measures through one other
data submission mechanism. A group may report any five measures within
MIPS plus the CAHPS for MIPS survey to achieve the six measures
threshold. We did not propose any changes to the performance criteria
for quality measures for groups electing to report the CAHPS for MIPS
survey in the proposed rule.
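As a hypothetical illustration of the measure-count arithmetic
described above (the function and its inputs are ours, not part of the
rule), the CAHPS for MIPS survey contributes one patient experience
measure, so a group electing it must submit at least five additional
measures through one other data submission mechanism to reach the
six-measure threshold.

    def meets_six_measure_threshold(elected_cahps: bool, other_measures: int) -> bool:
        """Check whether a group electing the CAHPS for MIPS survey meets
        the six-measure quality data submission criteria."""
        REQUIRED_MEASURES = 6
        cahps_credit = 1 if elected_cahps else 0  # CAHPS counts as one measure only
        return cahps_credit + other_measures >= REQUIRED_MEASURES

    # CAHPS plus five measures submitted through another mechanism qualifies.
    assert meets_six_measure_threshold(True, 5)
    # CAHPS plus only four other measures does not.
    assert not meets_six_measure_threshold(True, 4)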
In the CY 2017 Quality Payment Program final rule (see 81 FR
77120), we finalized retaining the CAHPS for MIPS survey administration
period that was utilized for PQRS, which ran from November to February. However,
this survey administration period has become operationally problematic
for the administration of MIPS. In order to compute scoring, we must
have the
[[Page 53630]]
CAHPS for MIPS survey data earlier than the current survey
administration period deadline allows. Therefore, we proposed for the
Quality Payment Program Year 2 and future years that the survey
administration period would, at a minimum, span over 8 weeks and would
end no later than February 28th following the applicable performance
period (82 FR 30040). In addition, we proposed to further specify the
start and end timeframes of the survey administration period through
our normal communication channels.
In addition, as discussed in the CY 2017 Quality Payment Program
final rule (81 FR 77116), we anticipated exploring the possibility of
updating the CAHPS for MIPS survey under MIPS, specifically not
finalizing all of the proposed Summary Survey Measures (SSMs). The
CAHPS for MIPS survey currently consists of the core CAHPS Clinician &
Group (CG-CAHPS) Survey developed by the Agency for Healthcare Research
and Quality (AHRQ), plus additional survey questions to meet CMS's
program needs. We proposed for the Quality Payment Program Year 2 and
future years to remove two SSMs, specifically, ``Helping You to Take
Medication as Directed'' and ``Between Visit Communication'' from the
CAHPS for MIPS survey (82 FR 30040). We proposed to remove the SSM
entitled ``Helping You to Take Medication as Directed'' due to low
reliability. In 2014 and 2015, the majority of groups had very low
reliability on this SSM. Furthermore, based on analyses conducted of
SSMs in an attempt to improve their reliability, removing questions
from this SSM did not result in any improvements in reliability. The
SSM, ``Helping You to Take Medication as Directed,'' has also never
been a scored measure with the Medicare Shared Savings Program CAHPS
for Accountable Care Organizations (ACOs) Survey. We refer readers to
the CY 2014 Physician Fee Schedule final rule for a discussion on the
CAHPS for ACOs survey scoring (79 FR 67909 through 67910) and measure
tables (79 FR 67916 through 67917). The SSM entitled ``Between Visit
Communication'' currently contains only one question. This question
could also be considered related to the SSMs entitled ``Care
Coordination'' and ``Courteous and Helpful Office Staff,'' but it does
not directly overlap with any of the questions under those SSMs.
However, we proposed to remove this SSM in order to maintain
consistency with the Medicare Shared Savings Program, which utilizes
the CAHPS for ACOs Survey. The SSM entitled ``Between Visit
Communication'' has never been
a scored measure with the Medicare Shared Savings Program CAHPS for
ACOs Survey. We refer readers to section II.C.6.g. for the discussion
of the CAHPS for ACOs survey.
In addition to public comments we received, we also took into
consideration analysis we conducted before finalizing this provision.
Specifically, we reviewed the findings of the CAHPS for ACOs survey
pilot, which was administered from November 2016 through February 2017.
The CAHPS for ACOs survey pilot utilized a survey instrument which did
not contain the two SSMs that we proposed for removal from the CAHPS
for MIPS survey. For more information on the other SSMs within the
CAHPS for MIPS survey, please see the explanation of the CAHPS for PQRS
survey in the CY 2016 PFS final rule with comment period (80 FR 71142
through 71143).
Table 4--Summary Survey Measures (SSMs) Included in the CAHPS for MIPS
Survey
------------------------------------------------------------------------
-------------------------------------------------------------------------
Getting Timely Care, Appointments, and Information.
How Well Providers Communicate.
Patient's Rating of Provider.
Access to Specialists.
Health Promotion and Education.
Shared Decision-Making.
Health Status and Functional Status.
Courteous and Helpful Office Staff.
Care Coordination.
Stewardship of Patient Resources.
------------------------------------------------------------------------
We sought comment on expanding the patient experience data
available for the CAHPS for MIPS survey (82 FR 30040 through 30041).
Currently, the CAHPS for MIPS survey is available for groups to report
under the MIPS. The patient experience survey data that is available on
Physician Compare is highly valued by patients and their caregivers as
they evaluate their health care options. However, in user testing of
the Physician Compare Web site, patients and caregivers regularly
request more information from patients like them, in their own words,
including narrative reviews of individual clinicians and groups on the
Web site. AHRQ is
fielding a beta version of the CAHPS Patient Narrative Elicitation
Protocol (https://www.ahrq.gov/cahps/surveys-guidance/item-sets/elicitation/). This includes five open-ended questions
designed to be added to the CG CAHPS survey, after which the CAHPS for
MIPS survey is modeled. These five questions have been developed and
tested in order to capture patient narratives in a scientifically
grounded and rigorous way, setting them apart from other patient
narratives collected by various health systems and patient rating
sites. More scientifically rigorous patient narrative data would not
only greatly benefit patients in their healthcare decisions, but
would also greatly aid individual MIPS eligible clinicians and groups
as they assess how their patients experience care. We sought comment on
adding these five open-ended questions to the CAHPS for MIPS survey in
future rulemaking. Beta testing is an ongoing process, and we
anticipate reviewing the results of that testing in collaboration with
AHRQ before proposing changes to the CAHPS for MIPS survey.
We are requiring, where possible, all-payer data for all reporting
mechanisms, yet certain reporting mechanisms are limited to Medicare
Part B data. Specifically, the CAHPS for MIPS survey currently relies
on sampling protocols based on Medicare Part B billing; therefore, only
Medicare Part B beneficiaries are sampled through that methodology. We
requested comments on ways to modify the methodology to assign and
sample patients using data from other payers for reporting mechanisms
that are currently limited to Medicare Part B data (82 FR 30041). In
particular, we sought comment on the ability of groups to provide
information on the patients to whom they provide care during a calendar
year, whether it would be possible to identify a list of patients seen
by individual clinicians in the group, and what type of patient contact
information groups would be able to provide. Further, we sought comment
on the challenges groups may anticipate in trying to provide this type
of information, especially for vulnerable beneficiary populations, such
as those lacking stable housing. We also sought comment on EHR vendors'
ability to provide information on the patients who receive care from
their client groups.
The following is a summary of the public comments received on the
``Performance Criteria for Quality Measures for Groups Electing to
Report the CAHPS for MIPS Survey'' proposals and our responses:
Comment: A few commenters supported removing the 2 SSMs, ``Helping
You to Take Medication as Directed'' and ``Between Visit
Communication'' from CAHPS for MIPS for the 2018 MIPS performance
period and future MIPS performance periods. The commenters recommended
that CMS communicate all changes made to the CAHPS for MIPS survey well
in advance of the annual registration
[[Page 53631]]
deadline. While supportive of CMS' proposal to remove these 2 SSMs, one
commenter urged CMS to replace the ``Helping You to Take Medication as
Directed'' module with a reliable way to measure patient experience for
patients as part of understanding their medications. Finally, one
commenter urged CMS to make the survey even shorter, stating that it is
still significantly too long to gain a large enough adoption rate among
patients and needs to be reduced further to increase completion rates.
Response: We thank the commenters for their support and will make
every effort to continue to communicate changes to the CAHPS for MIPS
survey. We also appreciate the commenters' suggestion to replace the
``Helping You to Take Medication as Directed'' SSM with a reliable way
to measure patients' understanding of their medications, as well as the
suggestion to reduce the number of questions in the CAHPS for MIPS
survey, and will consider these suggestions for future years of the
CAHPS for MIPS survey. We are finalizing for the Quality Payment
Program Year 2 and future years to remove two SSMs, specifically,
``Helping You to Take Medication as Directed'' and ``Between Visit
Communication'' from the CAHPS for MIPS survey.
Comment: Several commenters did not support the proposal to remove
the 2 SSMs without alternative domains or better patient experience or
patient-reported outcomes measures to replace them and urged us to
leave these SSMs in the survey at this time. Commenters noted that
although the ``Between Visit Communication'' measure is related to 2
other SSMs (``Care Coordination'' and ``Courteous and Helpful Office
Staff''), these measures do not entirely overlap, and poor
communication between visits can have serious consequences. The
commenters also expressed concern that the ``Helping You to Take
Medication as Directed'' SSM is needed to continue to capture safe and
appropriate medication use as a domain of the CAHPS for MIPS survey.
One commenter expressed concern that removal of the SSM is premature
and encouraged us to improve this SSM instead of removing it entirely,
urging us to retain the SSM and capture this information within both
the CAHPS for MIPS and the CAHPS for ACOs surveys if necessary. Another
commenter recommended that CMS keep the current CAHPS format which they
noted provides important feedback on key areas such as timely
appointments, easy access to information, and good communication with
healthcare providers.
Response: We acknowledge the commenters' concerns with respect to
removing the ``Between Visit Communication'' and ``Helping You to Take
Medication as Directed'' SSMs. We would like to note that the Shared
Savings Program pilot tested a revised CAHPS survey that did not
include these two SSMs, and we have reviewed the results of that
survey. Results from the pilot study suggest that administration of the
shortened version of the survey (that is, the pilot survey) is likely
to result in improvements in overall response rates. Findings show that
the response rate to the pilot survey was 3.4 percentage points higher
than the response rate to the Reporting Year (RY) 2016 CAHPS for ACOs
survey among ACOs participating in the pilot study. Increases in
response rates tended to be larger among ACOs that had lower response
rates in the prior year. In addition, after accounting for survey
questions that were removed from the pilot survey, the average survey
responses for ACOs who participated in the pilot study were mostly
similar across the two survey versions (pilot and RY 2016). Based on
results of the piloted CAHPS survey, we recommend the removal of the
two SSMs ``Between Visit Communication'' and ``Helping You to Take
Medications as Directed''. Further, the SSM, ``Between Visit
Communication,'' currently contains only one question. This question
could also be considered related to the SSMs entitled ``Care
Coordination'' and ``Courteous and Helpful Office Staff,'' but it does
not directly overlap with any of the questions under those SSMs. As for the
SSM, ``Helping You to Take Medication as Directed,'' this SSM has had
low reliability. However, we will continue to look at ways to further
improve the CAHPS for MIPS survey including exploring new questions and
domains of patient experience. We are finalizing for the Quality
Payment Program Year 2 and future years to remove two SSMs,
specifically, ``Helping You to Take Medication as Directed'' and
``Between Visit Communication'' from the CAHPS for MIPS survey.
Comment: A few commenters supported the proposal to reduce the
minimum fielding period for CAHPS for MIPS from 4 months to 2 months in
the 2018 MIPS performance period to allow CMS to have adequate time to
collect the data needed to administer the MIPS program. One commenter
urged CMS to explore additional ways to improve the survey in terms of
the survey administration time frame, frequency of results, and the
length of the survey and its administration, which is often well after
the patient's visit.
Response: We plan to consider additional ways to improve the survey
in regards to the timeframe for administering the survey, frequency of
the results, as well as the survey instrument and its administration.
We are finalizing that for the Quality Payment Program Year 2 and
future years the survey administration period would span over a minimum
of 8 weeks to a maximum of 17 weeks and would end no later than
February 28th following the applicable performance period. In addition,
we are finalizing to further specify the start and end timeframes of
the survey administration period through our normal communication
channels.
Comment: A few commenters did not support the proposal to change
the minimum fielding period for CAHPS for MIPS, expressing concern that
2 months of data is inadequate for a meaningful assessment of the
patient experience. One commenter expressed concern that the cost to
engage a survey vendor for a relatively short period and for
potentially low returns may limit the value of participation,
especially if the cost is in addition to costs for the mechanisms to
support the other 5 quality measures. Commenters encouraged CMS to
field the CAHPS for MIPS survey for at least 10 to 14 weeks--or to
select 12 weeks in alignment with existing CAHPS guidelines--in order
to improve the patient response rate and avoid unintentionally
excluding patients who have a more difficult time responding within the
shortened response period.
Response: We appreciate the commenters' concern that 2 months of
data is inadequate for a meaningful assessment of patient experience
and the recommendation to field the CAHPS for MIPS survey for at least
10 to 14 weeks. We would like to clarify that the proposal was for the
survey administration, at a minimum, to span over 8 weeks. We believe
that an 8-week minimum is adequate for the meaningful assessment of the
patient experience because it provides sufficient time for the
beneficiaries to respond to the survey. With respect to the 2018 CAHPS
for MIPS survey, we anticipate that the survey administration period
will be longer than the minimum 8 weeks and note that we will specify
the start and end timeframes of the survey administration period
through our normal communication channels. Further, this policy will
allow us the flexibility to adjust the survey administration period to
meet future operational needs, as well
[[Page 53632]]
as any newly identified adjustments to the survey administration period
that would result in improvements, such as response rates. We are
finalizing that for the Quality Payment Program Year 2 and future years
the survey administration period would span over a minimum of 8 weeks
to a maximum of 17 weeks and end no later than February 28th following
the applicable performance period. We refer readers to section
II.C.6.a. of this final rule with comment period for more information
on submission mechanisms.
Final Action: After consideration of the public comments, we are
finalizing that for the Quality Payment Program Year 2 and future years
the survey administration period would span over a minimum of 8 weeks
to a maximum of 17 weeks and would end no later than February 28th
following the applicable performance period. In addition, we are
finalizing to further specify the start and end timeframes of the
survey administration period through our normal communication channels.
Further, we are finalizing for the Quality Payment Program Year 2 and
future years to remove two SSMs, specifically, ``Helping You to Take
Medication as Directed'' and ``Between Visit Communication'' from the
CAHPS for MIPS survey.
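The finalized survey administration window can be expressed as a
simple check. The sketch below is illustrative only (the helper and
its inputs are hypothetical, not CMS operational code), assuming the
minimum of 8 weeks, the maximum of 17 weeks, and the February 28th end
date described above.

    from datetime import date

    def valid_survey_window(start: date, end: date, performance_year: int) -> bool:
        """True when a CAHPS for MIPS survey administration period spans at
        least 8 and at most 17 weeks and ends no later than February 28th
        of the year following the applicable performance period."""
        weeks = (end - start).days / 7
        deadline = date(performance_year + 1, 2, 28)
        return 8 <= weeks <= 17 and end <= deadline

    # Example: a window of roughly 15 weeks ending before February 28th qualifies.
    assert valid_survey_window(date(2018, 11, 1), date(2019, 2, 15), 2018)
    # A window ending in March of the following year would not.
    assert not valid_survey_window(date(2018, 12, 1), date(2019, 3, 15), 2018)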
(b) Data Completeness Criteria
In the CY 2017 Quality Payment Program final rule (81 FR 77125), we
finalized data completeness criteria for the transition year and MIPS
payment year 2020. We finalized at Sec. 414.1340 the following data
completeness criteria for performance periods occurring in 2017:
• Individual MIPS eligible clinicians or groups submitting
data on quality measures using QCDRs, qualified registries, or via EHR
must report on at least 50 percent of the individual MIPS eligible
clinician or group's patients that meet the measure's denominator
criteria, regardless of payer, for the performance period. In other
words, for these submission mechanisms, we expect to receive quality
data for both Medicare and non-Medicare patients. For the transition
year, MIPS eligible clinicians whose measures fall below the data
completeness threshold of 50 percent would receive 3 points for
submitting the measure.
• Individual MIPS eligible clinicians submitting data on
quality measures data using Medicare Part B claims, would report on at
least 50 percent of the Medicare Part B patients seen during the
performance period to which the measure applies. For the transition
year, MIPS eligible clinicians whose measures fall below the data
completeness threshold of 50 percent would receive 3 points for
submitting the measure.
• Groups submitting quality measures data using the CMS Web
Interface or a CMS-approved survey vendor to report the CAHPS for MIPS
survey must meet the data submission requirements on the sample of the
Medicare Part B patients that CMS provides.
In addition, we finalized an increased data completeness threshold
of 60 percent for MIPS for performance periods occurring in 2018 for
data submitted on quality measures using QCDRs, qualified registries,
via EHR, or Medicare Part B claims. We noted that we anticipate we will
propose to increase these thresholds for data submitted on quality
measures using QCDRs, qualified registries, via EHR, or Medicare Part B
claims for performance periods occurring in 2019 and onward.
We proposed to modify the previously established data completeness
criteria for MIPS payment year 2020 (82 FR 30041 through 30042).
Specifically, we proposed to provide an additional year for individual
MIPS eligible clinicians and groups to gain experience with MIPS before
increasing the data completeness thresholds for data submitted on
quality measures using QCDRs, qualified registries, via EHR, or
Medicare Part B claims. We noted concerns about the unintended
consequences of accelerating the data completeness threshold so
quickly, which may jeopardize MIPS eligible clinicians' ability to
participate and perform well under the MIPS, particularly those
clinicians who are least experienced with MIPS quality measure data
submission. We wanted to ensure that an appropriate yet achievable
level of data completeness is applied to all MIPS eligible clinicians.
We continue to believe it is important to incorporate higher data
completeness thresholds in future years to ensure a more accurate
assessment of a MIPS eligible clinician's performance on quality
measures and to avoid any selection bias. Therefore, we proposed a 60
percent data completeness threshold for MIPS payment year 2021. We
strongly encouraged all MIPS eligible clinicians to perform the quality
actions associated with the quality measures on their patients. The
data submitted for each measure is expected to be representative of the
individual MIPS eligible clinician's or group's overall performance for
that measure. The data completeness threshold of less than 100 percent
is intended to reduce burden and accommodate operational issues that
may arise during data collection during the initial years of the
program. We provided this notice to MIPS eligible clinicians so that
they can take the necessary steps to prepare for higher data
completeness thresholds in future years.
Therefore, we proposed to revise the data completeness criteria for
the quality performance category at Sec. 414.1340(a)(2) to provide
that MIPS eligible clinicians and groups submitting quality measures
data using the QCDR, qualified registry, or EHR submission mechanism
must submit data on at least 50 percent of the individual MIPS eligible
clinician's or group's patients that meet the measure's denominator
criteria, regardless of payer, for MIPS payment year 2020. We also
proposed to revise the data completeness criteria for the quality
performance category at Sec. 414.1340(b)(2) to provide that MIPS
eligible clinicians and groups submitting quality measures data using
Medicare Part B claims, must submit data on at least 50 percent of the
applicable Medicare Part B patients seen during the performance period
to which the measure applies for MIPS payment year 2020. We further
proposed at Sec. 414.1340(a)(3), that MIPS eligible clinicians and
groups submitting quality measures data using the QCDR, qualified
registry, or EHR submission mechanism must submit data on at least 60
percent of the individual MIPS eligible clinician or group's patients
that meet the measure's denominator criteria, regardless of payer, for
MIPS payment year 2021. We also proposed at Sec. 414.1340(b)(3), that
MIPS eligible clinicians and groups submitting quality measures data
using Medicare Part B claims, must submit data on at least 60 percent
of the applicable Medicare Part B patients seen during the performance
period to which the measure applies for MIPS payment year 2021. We
noted that we anticipate for future MIPS payment years we will propose
to increase the data completeness threshold for data submitted using
QCDRs, qualified registries, EHR submission mechanisms, or Medicare
Part B claims. As MIPS eligible clinicians gain experience with the
MIPS, we would propose to steadily increase these thresholds for future
years through rulemaking. In addition, we sought comment on what data
completeness threshold should be established for future years.
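To illustrate the data completeness arithmetic discussed above, the
following is a hypothetical sketch (the helper is ours, not CMS
scoring code); the default threshold reflects the 60 percent level
retained for the 2020 MIPS payment year and finalized for the 2021
MIPS payment year later in this section.

    def meets_data_completeness(reported: int, eligible: int, threshold: float = 0.60) -> bool:
        """True when the share of eligible instances (patients meeting the
        measure's denominator criteria, regardless of payer for the QCDR,
        qualified registry, or EHR mechanisms) that was actually reported
        meets or exceeds the applicable data completeness threshold."""
        if eligible == 0:
            return False
        return reported / eligible >= threshold

    # Reporting 130 of 200 eligible instances (65 percent) meets a 60 percent threshold.
    assert meets_data_completeness(130, 200)
    # Reporting 100 of 200 (50 percent) does not.
    assert not meets_data_completeness(100, 200)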
In the CY 2017 Quality Payment Program final rule (81 FR 77125
through 77126), we finalized our approach of including all-payer data
for the QCDR, qualified registry, and EHR submission mechanisms because
we believed this approach provides a more complete picture of each MIPS
eligible clinician's
[[Page 53633]]
scope of practice and provides more access to data about specialties
and subspecialties not currently captured in PQRS. In addition, data
submitted by clinicians who utilize the QCDR, qualified registry, or
EHR data submission methods must contain a minimum of one quality
measure for at least one Medicare patient. We did not propose any changes to these
policies. As noted in the CY 2017 Quality Payment Program final rule,
those MIPS eligible clinicians who fall below the data completeness
thresholds will receive 3 points for the specific measures that fall
below the data completeness threshold in the transition year of MIPS
only. For the Quality Payment Program Year 2, we proposed that MIPS
eligible clinicians would receive 1 point for measures that fall below
the data completeness threshold, with an exception for small practices,
which would still receive 3 points for measures that fail data
completeness. We refer readers to section II.C.6.b.(3) of this final
rule with comment period for our finalized policies on instances when
MIPS eligible clinicians' measures fall below the data completeness
threshold.
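The scoring consequence described above can be summarized in a short
sketch (illustrative only; the function and year values are ours, and
the finalized scoring policies are discussed in the section referenced
above): measures that fall below the data completeness threshold
receive 3 points in the transition year, and 1 point in Quality
Payment Program Year 2 unless the submitter is a small practice, which
still receives 3 points.

    def points_for_incomplete_measure(program_year: int, small_practice: bool) -> int:
        """Illustrative points awarded for a submitted measure that falls
        below the data completeness threshold: 3 points in the transition
        year (2017); 1 point in Year 2 (2018), except that small practices
        still receive 3 points."""
        if program_year == 2017:
            return 3
        return 3 if small_practice else 1

    assert points_for_incomplete_measure(2017, small_practice=False) == 3
    assert points_for_incomplete_measure(2018, small_practice=False) == 1
    assert points_for_incomplete_measure(2018, small_practice=True) == 3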
The following is a summary of the public comments received on the
``Data Completeness Criteria'' proposals and our responses:
Comment: Several commenters expressed support for our proposal to
increase the data completeness threshold to 60 percent for the 2021
MIPS payment year.
Response: We appreciate the commenters' support and are finalizing
this proposal.
Comment: Several commenters urged CMS to not finalize an increase
in the data completeness threshold for the 2021 MIPS payment year or
future payment years. Commenters noted that constant changing in
reporting requirements creates administrative challenges for eligible
clinicians and their staff. Other commenters observed that a higher
threshold of data completeness requires a significant amount of
technical and administrative coordination which can take several months
to properly validate, both for MIPS eligible clinicians in larger
practices and those in small and rural practices.
Response: We understand the commenters' concerns but believe it is
important to incorporate higher thresholds to ensure a more accurate
assessment of a MIPS eligible clinician's performance on the quality
measures and to avoid any selection bias. Therefore, we are not
finalizing our proposal to decrease the data completeness threshold to
50 percent for the 2020 MIPS payment year and are instead retaining the
previously finalized data completeness threshold of 60 percent for that
year. In addition, we are finalizing our proposal to increase the data
completeness threshold to 60 percent for MIPS payment year 2021.
Comment: Many commenters supported the proposal to apply the data
completeness criteria that was previously finalized for the CY 2017
performance period to the CY 2018 performance period because they
believed that it would help create stability within the quality
performance category, would enable MIPS eligible clinicians and groups
to gain additional experience reporting on quality measures and make
improvements, and would enhance the ability of MIPS eligible clinicians
and groups to perform well in the program. Several commenters noted
that taking a slower approach to increasing the data completeness
criteria is the best way to ensure reliable and accurate data is
submitted so that CMS has a complete and accurate reflection of MIPS
eligible clinician performance.
Response: While we understand the commenters' desire to take a more
gradual approach, we must balance this with the need to ensure that we
have a complete and accurate reflection of MIPS eligible clinician
performance. As such, we are not finalizing our proposal to decrease
the data completeness threshold to 50 percent for the 2020 MIPS payment
year and are instead retaining the previously finalized data
completeness threshold of 60 percent for that year. In addition, we are
finalizing our proposal to increase the data completeness threshold to
60 percent for MIPS payment year 2021.
Comment: A few commenters did not support our proposal to delay
moving to a higher data completeness threshold until the 2019 MIPS
performance period and 2021 MIPS payment year, expressing concern that
a delay would encourage MIPS eligible clinicians and groups to avoid
the selection of population-based measures that would more easily meet
any higher completeness requirements that we might set; would
negatively impact the ability of high performers to receive a
substantial payment increase in the 2020 MIPS payment year; and would
not prepare MIPS eligible clinicians and groups for a more rigorous
program in future years. A few commenters suggested that 50 percent of
available data is insufficient and that a larger patient sample
provides a more reliable and valid representation of true performance
and will better support clinician groups in internal benchmarking for
quality improvement. One commenter noted that a delay would continue to
create a misalignment between the MIPS and Advanced APM tracks. One
commenter disagreed with the 50 percent threshold itself, expressing
concern that this standard may motivate MIPS eligible clinicians and
groups to ``cherry pick'' the cases that make up the denominator for
reporting. This commenter suggested that for any reporting mechanism
for which a MIPS eligible clinician could attest to a formal, auditable
representative sampling, we should exempt the MIPS eligible clinician
from the data completeness standard.
Response: We agree that a larger sample reduces the likelihood of
selection bias and provides a more reliable and valid representation of
true performance. As a result, we are not finalizing our proposal to
decrease the data completeness threshold to 50 percent for the 2020
MIPS payment year and are instead retaining the previously finalized
data completeness threshold of 60 percent for that year. In addition,
we are finalizing our proposal to increase the data completeness
threshold to 60 percent for MIPS payment year 2021.
Final Action: After consideration of the public comments, we are
not finalizing our proposal regarding the data completeness criteria
for MIPS payment year 2020. Instead, we are retaining our previously
finalized requirements at:
• Sec. 414.1340(a)(2) that MIPS eligible clinicians and
groups submitting quality measures data using the QCDR, qualified
registry, or EHR submission mechanism must submit data on at least 60
percent of the MIPS eligible clinician's or group's patients that meet
the measure's denominator criteria, regardless of payer, for MIPS
payment year 2020; and
• Sec. 414.1340(b)(2) that MIPS eligible clinicians
submitting quality measures data using Medicare Part B claims must
submit data on at least 60 percent of the applicable Medicare Part B
patients seen during the performance period to which the measure
applies for MIPS payment year 2020.
We are, however, finalizing our proposal regarding the data
completeness criteria for MIPS payment year 2021. Specifically, we are
finalizing at:
• Sec. 414.1340(a)(3) that MIPS eligible clinicians and
groups submitting quality measures data using the QCDR, qualified
registry, or EHR submission mechanism must submit data on at least 60
percent of the MIPS eligible clinician's or group's patients that meet
the measure's denominator criteria,
[[Page 53634]]
regardless of payer, for MIPS payment year 2021; and
• Sec. 414.1340(b)(3) that MIPS eligible clinicians
submitting quality measures data using Medicare Part B claims must
submit data on at least 60 percent of the applicable Medicare Part B
patients seen during the performance period to which the measure
applies for MIPS payment year 2021.
(c) Summary of Data Submission Criteria
Table 5 reflects our final quality data submission criteria for
MIPS payment years 2020 and 2021 via Medicare Part B claims, QCDR,
qualified registry, EHR, CMS Web Interface, and the CAHPS for MIPS
survey. It is important to note that while we finalized at Sec.
414.1325(d) in the CY 2017 Quality Payment Program final rule that
individual MIPS eligible clinicians and groups may only use one
submission mechanism per performance category, in section II.C.6.a.(1)
of this final rule with comment period, we are finalizing to revise
Sec. 414.1325(d) for purposes of the 2021 MIPS payment year and future
years to allow individual MIPS eligible clinicians and groups to submit
measures and activities, as applicable, via as many submission
mechanisms as necessary to meet the requirements of the quality,
improvement activities, or advancing care information performance
categories. We refer readers to section II.C.6.a.(1) of this final rule
with comment period for further discussion of this policy.
TABLE 5--Summary of Final Quality Data Submission Criteria for MIPS Payment Years 2020 and 2021 via Part B
Claims, QCDR, Qualified Registry, EHR, CMS Web Interface, and the CAHPS for MIPS Survey
----------------------------------------------------------------------------------------------------------------
Performance period: Jan 1-Dec 31.
Clinician type: Individual MIPS eligible clinicians.
Submission mechanism: Part B claims.
Submission criteria: Report at least six measures including one outcome measure, or if an outcome measure is not
available report another high priority measure; if less than six measures apply then report on each measure that
is applicable. Individual MIPS eligible clinicians would have to select their measures from either the set of all
MIPS measures listed or referenced, or one of the specialty measure sets listed in, the applicable final rule.
Data completeness: 60 percent of the individual MIPS eligible clinician's Medicare Part B patients for the
performance period.
----------------------------------------------------------------------------------------------------------------
Performance period: Jan 1-Dec 31.
Clinician type: Individual MIPS eligible clinicians, groups.
Submission mechanism: QCDR, qualified registry, and EHR.
Submission criteria: Report at least six measures including one outcome measure, or if an outcome measure is not
available report another high priority measure; if less than six measures apply then report on each measure that
is applicable. Individual MIPS eligible clinicians or groups would have to select their measures from either the
set of all MIPS measures listed or referenced, or one of the specialty measure sets listed in, the applicable
final rule.
Data completeness: 60 percent of the individual MIPS eligible clinician's or group's patients across all payers
for the performance period.
----------------------------------------------------------------------------------------------------------------
Performance period: Jan 1-Dec 31.
Clinician type: Groups.
Submission mechanism: CMS Web Interface.
Submission criteria: Report on all measures included in the CMS Web Interface; AND populate data fields for the
first 248 consecutively ranked and assigned Medicare beneficiaries in the order in which they appear in the
group's sample for each module/measure. If the pool of eligible assigned beneficiaries is less than 248, then the
group would report on 100 percent of assigned beneficiaries.
Data completeness: Sampling requirements for the group's Medicare Part B patients.
----------------------------------------------------------------------------------------------------------------
Performance period: Jan 1-Dec 31.
Clinician type: Groups.
Submission mechanism: CAHPS for MIPS survey.
Submission criteria: CMS-approved survey vendor would need to be paired with another reporting mechanism to
ensure the minimum number of measures is reported. CAHPS for MIPS survey would fulfill the requirement for one
patient experience measure towards the MIPS quality data submission criteria. CAHPS for MIPS survey would only
count for one measure under the quality performance category.
Data completeness: Sampling requirements for the group's Medicare Part B patients.
----------------------------------------------------------------------------------------------------------------
We note that the measure reporting requirements applicable to
groups are also generally applicable to virtual groups. However, we
note that the requirements for calculating measures and activities when
reporting via QCDRs, qualified registries, EHRs, and attestation differ
in their application to virtual groups. Specifically, these
requirements apply cumulatively across all TINs in a virtual group.
Thus, virtual groups will aggregate data for each NPI under each TIN
within the virtual group by adding together the numerators and
denominators and will report one cumulative measure ratio at the
virtual group level, as sketched below. Moreover, if each MIPS eligible clinician
within a virtual group faces a significant hardship or has EHR
technology that has been decertified, the virtual group can apply for
an exception to have its advancing care information performance
category reweighted. If such exception application is approved, the
virtual group's advancing care information performance category is
reweighted to zero percent and applied to the quality performance
category, increasing the quality performance category weight from 50
percent to 75 percent.
Additionally, the data submission criteria applicable to groups are
also generally applicable to virtual groups. However, we note that data
completeness and sampling requirements for the CMS Web Interface and
CAHPS for MIPS survey differ in their application to virtual groups.
Specifically, data completeness for virtual groups applies cumulatively
across all TINs in a virtual group. Thus, we note that there may be a
case when a virtual group has one TIN that falls below the 60 percent
data completeness threshold, which is an acceptable case as long as the
virtual group cumulatively exceeds such threshold. In regard to the CMS
Web Interface and CAHPS for MIPS survey, sampling requirements pertain
to Medicare Part B patients with respect to all TINs in a virtual
group, where the sampling methodology would be conducted for
[[Page 53635]]
each TIN within the virtual group and then cumulatively aggregated
across the virtual group. A virtual group would need to meet the
beneficiary sampling threshold cumulatively as a virtual group.
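Because data completeness applies cumulatively across all TINs in a
virtual group, a single TIN may fall below the 60 percent threshold so
long as the virtual group as a whole meets it. The following is a
hypothetical sketch (names and inputs are ours, not CMS code) of that
cumulative check.

    from typing import Dict, Tuple

    def virtual_group_meets_completeness(
        per_tin: Dict[str, Tuple[int, int]], threshold: float = 0.60
    ) -> bool:
        """per_tin maps TIN -> (reported instances, eligible instances);
        data completeness is evaluated on the cumulative totals."""
        reported = sum(r for r, _ in per_tin.values())
        eligible = sum(e for _, e in per_tin.values())
        return eligible > 0 and reported / eligible >= threshold

    # TIN-A alone is at 40 percent, but the virtual group is at 70 percent (210/300),
    # so the virtual group meets the 60 percent threshold.
    assert virtual_group_meets_completeness({"TIN-A": (40, 100), "TIN-B": (170, 200)})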
(4) Application of Quality Measures to Non-Patient Facing MIPS Eligible
Clinicians
In the CY 2017 Quality Payment Program final rule (81 FR 77127), we
finalized at Sec. 414.1335 that non-patient facing MIPS eligible
clinicians would be required to meet the applicable submission criteria
that apply for all MIPS eligible clinicians for the quality performance
category. We did not propose any changes to this policy in the proposed
rule.
(5) Application of Facility-Based Measures
Section 1848(q)(2)(C)(ii) of the Act provides that the Secretary
may use measures used for payment systems other than for physicians,
such as measures used for inpatient hospitals, for purposes of the
quality and cost performance categories. However, the Secretary may not
use measures for hospital outpatient departments, except in the case of
items and services furnished by emergency physicians, radiologists, and
anesthesiologists. We refer readers to section II.C.7.a.(4) of this
final rule with comment period for a full discussion of the finalized
policies regarding the application of facility-based measures.
(6) Global and Population-Based Measures
In the CY 2017 Quality Payment Program final rule (81 FR 77136), we
did not finalize all of our proposals on global and population-based
measures as part of the quality score. Specifically, we did not
finalize our proposal to use the acute and chronic composite measures
of the AHRQ Prevention Quality Indicators (PQIs). We agreed with
commenters that additional enhancements, including the addition of risk
adjustment, needed to be made to these measures prior to inclusion in
MIPS. We did, however, calculate these measures at the TIN level, and
provided the measure data through the QRURs released in September 2016,
and this data can be used by MIPS eligible clinicians for informational
purposes.
We did finalize the all-cause hospital readmissions (ACR) measure
from the VM Program as part of the annual list of quality measures for
the MIPS quality performance category. We finalized this measure with
the following modifications. We did not apply the ACR measure to solo
practices or small groups (groups of 15 or fewer clinicians). We did
apply the ACR measure to groups of 16 or more clinicians that meet the
case volume requirement of 200 cases. A group will be scored on the ACR
measure even if it did not submit any quality measures, provided it
submitted data in other performance categories. Otherwise, the group
will not be scored on the readmission measure if it did not submit data
in any of the performance categories. In our
transition year policies, the readmission measure alone would not
produce a neutral to positive MIPS payment adjustment since in order to
achieve a neutral to positive MIPS payment adjustment, an individual
MIPS eligible clinician or group must submit information in one of the
three performance categories as discussed in the CY 2017 Quality
Payment Program final rule (81 FR 77329). However, for MIPS eligible
clinicians who did not meet the minimum case requirements, the ACR
measure was not applicable. In the CY 2018 Quality Payment Program
proposed rule, we did not propose to remove this measure from the list
of quality measures for the MIPS quality performance category. Nor did
we propose any changes for the ACR measure in the proposed rule. As
discussed in section II.C.4.d. of this final rule with comment period,
we are finalizing our proposal to generally apply our finalized group
policies to virtual groups.
c. Selection of MIPS Quality Measures for Individual MIPS Eligible
Clinicians and Groups Under the Annual List of Quality Measures
Available for MIPS Assessment
(1) Background and Policies for the Call for Measures and Measure
Selection Process
Under section 1848(q)(2)(D)(i) of the Act, the Secretary, through
notice and comment rulemaking, must establish an annual list of MIPS
quality measures from which MIPS eligible clinicians may choose for
purposes of assessment for a performance period. The annual list of
MIPS quality measures must be published in the Federal Register no
later than November 1 of the year prior to the first day of a
performance period. Updates to the annual list of MIPS quality measures
must be published in the Federal Register no later than November 1 of
the year prior to the first day of each subsequent performance period.
Updates may include the addition of new MIPS quality measures,
substantive changes to MIPS quality measures, and removal of MIPS
quality measures. We refer readers to the CY 2018 Quality Payment
Program proposed rule (82 FR 30043 and 30044) for additional
information regarding eCQM reporting and the Measure Development Plan
that serves as a strategic framework for the future of clinician
quality measure development to support MIPS and APMs. We encourage
stakeholders to develop additional quality measures for MIPS that would
address the gaps.
Under section 1848(q)(2)(D)(ii) of the Act, the Secretary must
solicit a ``Call for Quality Measures Under Consideration'' each year.
Specifically, the Secretary must request that eligible clinician
organizations and other relevant stakeholders identify and submit
quality measures to be considered for selection in the annual list of
MIPS quality measures, as well as updates to the measures. Under
section 1848(q)(2)(D)(ii) of the Act, eligible clinician organizations
are professional organizations as defined by nationally recognized
specialty boards of certification or equivalent certification boards.
However, we do not believe there need to be any special restrictions
on the type or make-up of the organizations that submit measures for
consideration through the call for measures. Any such restriction would
limit the type of quality measures and the scope and utility of the
quality measures that may be considered for inclusion under the MIPS.
As we described previously in the CY 2017 Quality Payment Program
final rule (81 FR 77137), we will accept quality measures submissions
at any time, but only measures submitted during the timeframe provided
by us through the pre-rulemaking process of each year will be
considered for inclusion in the annual list of MIPS quality measures
for the performance period beginning 2 years after the measure is
submitted. This process is consistent with the pre-rulemaking process
and the annual call for measures, which are further described at
(https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/Pre-Rule-Making.html).
Submission of potential quality measures, regardless of whether
they were previously published in a proposed rule or endorsed by an
entity with a contract under section 1890(a) of the Act, which is
currently the National Quality Forum, is encouraged. The annual Call
for Measures process allows eligible clinician organizations and other
relevant stakeholder organizations to identify and submit quality
measures for consideration. Presumably, stakeholders would not submit
[[Page 53636]]
measures for consideration unless they believe that the measure is
applicable to clinicians and can be reliably and validly measured at
the individual clinician level. The NQF-convened Measure Application
Partnership (MAP) provides an additional opportunity for stakeholders
to provide input on whether or not they believe the measures are
applicable to clinicians as well as feasible, scientifically
acceptable, and reliable and valid at the clinician level. Furthermore,
we must go through notice and comment rulemaking to establish the
annual list of quality measures, which gives stakeholders an additional
opportunity to review the measures and provide input on whether or not
they believe the measures are applicable to clinicians, as well as
feasible, scientifically acceptable, and reliable and valid at the
clinician level. Additionally, we are required by statute to submit new
measures for publication in an applicable specialty-appropriate,
peer-reviewed journal.
As previously noted, we encourage the submission of potential
quality measures regardless of whether such measures were previously
published in a proposed rule or endorsed by an entity with a contract
under section 1890(a) of the Act. However, we proposed to request that
stakeholders apply the following considerations when submitting quality
measures for possible inclusion in MIPS:
• Measures that are not duplicative of an existing or proposed measure.
• Measures that are beyond the measure concept phase of development and
have started testing, at a minimum, with strong encouragement and
preference for measures that have completed or are near completion of
reliability and validity testing.
• Measures that include a data submission method beyond claims-based
data submission.
• Measures that are outcome-based rather than clinical process measures.
• Measures that address patient safety and adverse events.
• Measures that identify appropriate use of diagnosis and therapeutics.
• Measures that address the domain for care coordination.
• Measures that address the domain for patient and caregiver experience.
• Measures that address efficiency, cost, and resource use.
• Measures that address significant variation in performance.
We will apply these considerations when considering quality
measures for possible inclusion in MIPS.
In addition, we note that we are likely to reject measures that do
not provide substantial evidence of variation in performance; for
example, if a measure developer submits data showing a small variation
in performance among a group already composed of high performers, such
evidence would not be substantial enough to assure us that sufficient
variation in performance exists. We also noted that we are likely to
reject measures that are not outcome-based measures, unless: (1) There
is substantial documented and peer reviewed evidence that the clinical
process measured varies directly with the outcome of interest; and (2)
it is not possible to measure the outcome of interest in a reasonable
timeframe.
We also noted that retired measures that were in one of CMS's
previous quality programs, such as the Physician Quality Reporting
System (PQRS) program, will likely be rejected if proposed for
inclusion. This includes measures that were retired due to being topped
out, as defined below. For example, measures may be retired because
they attained topped out status due to high performance, or because of
a change in the evidence supporting their use.
In the CY 2017 Quality Payment Program final rule (81 FR 77153), we
established that we will categorize measures into the six NQS domains
(patient safety, person- and caregiver-centered experience and
outcomes, communication and care coordination, effective clinical care,
community/population health, and efficiency and cost reduction). We
intend to submit future MIPS quality measures to the NQF-convened
Measure Application Partnership (MAP), as appropriate, and we intend
to consider the MAP's recommendations as part of the comprehensive
assessment of each measure considered for inclusion under MIPS.
In the CY 2017 Quality Payment Program final rule (81 FR 77155), we
established that we use the Call for Quality Measures process as a
forum to gather from measure developers, measure owners, and measure
stewards the information necessary to draft the journal articles for
submission. The submission of this information does not preclude us from
conducting our own research using Medicare claims data, Medicare survey
results, and other data sources that we possess. We submit new measures
for publication in applicable specialty-appropriate, peer-reviewed
journals before including such measures in the final annual list of
quality measures.
In the CY 2017 Quality Payment Program final rule (81 FR 77158), we
established at Sec. 414.1330(a)(2) that for purposes of assessing
performance of MIPS eligible clinicians in the quality performance
category, we use quality measures developed by QCDRs. In the
circumstances where a QCDR wants to use a QCDR measure for inclusion in
the MIPS program for reporting, those measures go through a CMS
approval process during the QCDR self-nomination period. We also
established that we post the quality measures for use by QCDRs by no
later than January 1 for performance periods occurring in 2018 and
future years.
Previously finalized MIPS quality measures can be found in the CY
2017 Quality Payment Program final rule (81 FR 77558 through 77675).
Updates may include the addition of proposed new MIPS quality measures,
including measures selected 2 years ago during the Call for Measures
process. The new MIPS quality measures proposed for inclusion in MIPS
for the 2018 performance period and future years were found in Table A
of the CY 2018 Quality Payment Program proposed rule (82 FR 30261
through 30270). The proposed new and modified MIPS specialty sets for
the 2018 performance period and future years were listed in Table B of
the CY 2018 Quality Payment Program proposed rule (82 FR 30271 through
30454), and included existing measures that were proposed with
modifications, new measures, and measures finalized in the CY 2017
Quality Payment Program final rule. We noted that the modifications
made to the specialty sets may include the removal of certain quality
measures that were previously finalized. The specialty measure sets
should be used as a guide for eligible clinicians to choose measures
applicable to their specialty. To clarify, some of the MIPS specialty
sets have further defined subspecialty sets, each of which is
effectively a separate specialty set. In instances where an individual
MIPS eligible clinician or group reports on a specialty or subspecialty
set, if the set has fewer than six measures, the clinician or group is
only required to report the measures in that set. MIPS eligible clinicians are not required to
report on the specialty measure sets, but they are suggested measures
for specific specialties. Throughout measure utilization, measure
maintenance should be a continuous process done by the measure owners,
to include environmental scans of scientific literature about the
measure. New information gathered during this ongoing review may
trigger an ad hoc review. Please note that these specialty specific
measure sets are not all inclusive of every specialty or subspecialty.
On January 25, 2017, we announced that we would be accepting
[[Page 53637]]
recommendations for potential new specialty measure sets for year 2 of
MIPS under the Quality Payment Program. These recommendations were
based on the MIPS quality measures finalized in the CY 2017 Quality
Payment Program final rule, and include recommendations to add or
remove the current MIPS quality measures from the specialty measure
sets. The current specialty measure sets can be found on the Quality
Payment Program Web site at https://qpp.cms.gov/measures/quality. All
specialty measure sets submitted for consideration were assessed to
ensure that they met the needs of the Quality Payment Program.
As a result, we proposed (82 FR 30045) to add new quality measures
to MIPS (Table A in the proposed rule (82 FR 30261 through 30270)),
revise the specialty measure sets in MIPS (Table B in the proposed rule
(82 FR 30271 through 30454)), remove specific MIPS quality measures
only from specialty sets (Table C.1 in the proposed rule (82 FR 30455
through 30462)), and remove specific MIPS quality measures from the
MIPS program for the 2018 performance period (Table C.2 in the proposed
rule (82 FR 30463 through 30465)). In addition, we proposed to remove
cross-cutting measures from most of the specialty sets.
Specialty groups and societies reported that cross-cutting measures may
or may not be relevant to their practices, contingent on the eligible
clinicians or groups. We chose to retain the cross-cutting measures in
the Family Practice, Internal Medicine, and Pediatrics specialty sets
because they are frequently used in these practices. The proposed 2017
cross-cutting measures (81 FR 28447 through 28449) were compiled and
placed in a separate table for eligible clinicians to elect whether to
use them for reporting. To clarify, the cross-cutting measures are intended
to provide clinicians with a list of measures that are broadly
applicable to all clinicians regardless of the clinician's specialty.
Even though clinicians are not required to report on cross-cutting
measures, the table is provided as a reference for clinicians who are
looking for additional measures to report outside their specialty. We continue to consider
cross-cutting measures to be an important part of our quality measure
programs, and seek comment on ways to incorporate cross-cutting
measures into MIPS in the future. The Table of Cross-Cutting Measures
can be found in Table D of the Appendix in the CY 2018 Quality Payment
Program proposed rule (82 FR 30466 through 30467).
For MIPS quality measures that are undergoing substantive changes,
we proposed to identify measures including, but not limited to measures
that have had measure specification, measure title, and domain changes.
MIPS quality measures with proposed substantive changes can be found at
Table E of the Appendix in the CY 2018 Quality Payment Program proposed
rule (82 FR 30468 through 30478).
The measures that would be used for the APM scoring standard and
our authority for waiving certain measure requirements are described in
section II.C.6.g.(3)(b)(ii) of this final rule with comment period, and
the measures that would be used to calculate a quality score for the
APM scoring standard are proposed in Tables 14, 15, and 16 of the CY
2018 Quality Payment Program proposed rule (82 FR 30091 through 30095).
We also sought comment on whether there are any MIPS quality
measures that commenters believe should be classified in a different
NQS domain than what is being proposed, or that should be classified as
a different measure type (for example, process vs. outcome) than what
we proposed (82 FR 30045). We did not receive any public comments in
response to this solicitation.
The following is a summary of the public comments received on the
``Background and Policies for the Call for Measures and Measure
Selection Process'' proposals and our responses:
Comment: A few commenters supported the proposal to remove cross-
cutting measures from most specialty sets. One commenter agreed that
cross-cutting measures may or may not be relevant to certain practices.
Response: We appreciate the commenters' support.
Comment: One commenter recommended that CMS retain cross-cutting
measures in specialty sets with fewer than six measures because the
commenter believed it would allow parity in quality measure reporting
across all clinicians and provide incentives for all specialties to
develop quality measures.
Response: We did not retain the cross-cutting measures in all the
specialty sets, including those sets with fewer than six measures,
because we believe that cross-cutting measures are not necessarily
reflective of all specialty groups' scope of practice. One goal
of the MIPS program is to ensure that meaningful measurement occurs,
and CMS chose to retain the cross cutting measures in Family Practice,
Internal Medicine, and Pediatrics specialty sets because they are
frequently used in these practices. The cross-cutting measures are
intended to provide clinicians with a list of measures that are broadly
applicable to all clinicians regardless of the clinician's specialty.
Even though MIPS eligible clinicians are not required to report on
cross-cutting measures, they are provided as a reference to clinicians
who are looking for additional measures to report outside their
specialty.
Final Action: After consideration of the public comments received,
we refer readers to the appendix of this final rule with comment period
for the finalized list of new quality measures available for reporting
in MIPS for the 2018 performance period and future years (Table A); the
finalized specialty measure sets available for reporting in MIPS for
the 2018 performance period and future years (Table B); the MIPS
quality measures removed only from specialty sets for the 2018
performance period and future years (Table C.1); the MIPS quality
measures removed from the MIPS program for the 2018 performance period
and future years (Table C.2); the cross-cutting measures available for
the 2018 MIPS performance period and future years (Table D); and the
MIPS quality measures finalized with substantive changes for the 2018
performance period and future years (Table E).
(2) Topped Out Measures
As defined in the CY 2017 Quality Payment Program final rule (81
FR 77136), a measure may be considered topped out if measure
performance is so high and unvarying that meaningful distinctions and
improvement in performance can no longer be made. Topped out measures
could have a disproportionate impact on the scores for certain MIPS
eligible clinicians, and provide little room for improvement for the
majority of MIPS eligible clinicians. We refer readers to section
II.C.7.a.(2)(c) of this final rule with comment period for additional
information regarding the scoring of topped out measures.
We proposed a 3-year timeline to identify and propose to remove
(through future rulemaking) topped out measures (82 FR 30046); however,
we would like to clarify that the proposed timeline is more accurately
described as a 4-year timeline. After a measure has been identified as
topped out for 3 consecutive years, we may propose to remove the
measure through notice-and-comment rulemaking for the 4th year.
Therefore, in the 4th year, if finalized through rulemaking, the
measure would be removed and would no longer be available for reporting
during the performance period. This proposal would provide a path
toward removing topped out measures over time, and will
[[Page 53638]]
apply to the MIPS quality measures. QCDR measures that are consistently
identified as topped out according to the same timeline as proposed
below would not be approved for use in year 4 during the QCDR self-
nomination review process. These identified QCDR measures would not be
removed through the notice-and-comment rulemaking process described
below.
We proposed to phase in this policy starting with a select set of
six highly topped out measures identified in section II.C.7.a.(2)(c) of
this final rule with comment period. We also proposed to phase in
special scoring for measures identified as topped out in the published
benchmarks for 2 consecutive performance periods, starting with the
select set of highly topped out measures for the 2018 MIPS performance
period. An example illustrating the proposed timeline for the removal
and special scoring of topped out measures, as it would be applied to
the select set of highly topped out measures identified in section
II.C.7.a.(2)(c) of this final rule with comment period, is as follows:
Year 1: Measures are identified as topped out in the
benchmarks published for the 2017 MIPS performance period. The 2017
benchmarks are posted on the Quality Payment Program Web site: https://qpp.cms.gov/resources/education.
Year 2: Measures are identified as topped out in the
benchmarks published for the 2018 MIPS performance period. We refer
readers to section II.C.7.a.(2)(c) of this final rule with comment
period for additional information regarding the scoring of topped out
measures.
Year 3: Measures are identified as topped out in the
benchmarks published for the 2019 MIPS performance period. The measures
identified as topped out in the benchmarks published for the 2019 MIPS
performance period and the previous two consecutive performance periods
would continue to have special scoring applied for the 2019 MIPS
performance period and would be considered, through notice-and-comment
rulemaking, for removal for the 2020 MIPS performance period.
Year 4: Topped out measures that are finalized for removal
are no longer available for reporting. For example, if the measures in
the set of highly topped out measures are identified as topped out for
the 2017, 2018, and 2019 MIPS performance periods and are subsequently
finalized for removal, they will not be available on the list of
measures for the 2020 MIPS performance period and future years.
For all other measures, the timeline would apply starting with the
benchmarks for the 2018 MIPS performance period. Thus, the first year
any other topped out measure could be proposed for removal would be in
rulemaking for the 2021 MIPS performance period, based on the
benchmarks being topped out in the 2018, 2019, and 2020 MIPS
performance periods. If the measure benchmark is not topped out during
one of the 3 MIPS performance periods, then the lifecycle would stop
and start again at year 1 the next time the measure benchmark is topped
out.
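For illustration only, the lifecycle described above can be expressed procedurally. The following sketch is not part of the regulatory text; the function name and input format are hypothetical, and the policy is defined solely by the preceding discussion. The sketch takes a list of performance periods whose published benchmarks identified a measure as topped out and returns the earliest performance period for which removal could be proposed (the 4th year), applying the reset described above when the run of topped out benchmarks is broken.

    # Illustrative sketch only; not part of the regulatory text. The input is a
    # hypothetical list of performance periods (years) whose published benchmarks
    # identified a measure as topped out.
    def first_removal_year(topped_out_years):
        """Return the earliest performance period for which removal could be
        proposed (the 4th year), or None if 3 consecutive topped out benchmarks
        have not yet occurred. The count resets when a year is skipped."""
        streak = 0
        previous = None
        for year in sorted(topped_out_years):
            streak = streak + 1 if previous == year - 1 else 1
            if streak == 3:
                return year + 1  # removal, if finalized through rulemaking
            previous = year
        return None

    # Example: a measure topped out in the 2018, 2019, and 2020 benchmarks could
    # first be proposed for removal for the 2021 MIPS performance period.
    print(first_removal_year([2018, 2019, 2020]))  # 2021
    print(first_removal_year([2018, 2020, 2021]))  # None (lifecycle reset in 2019)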
We sought comment on the proposed timeline; specifically, regarding
the number of years before a topped out measure is identified and
considered for removal, and under what circumstances we should remove
topped out measures once they reach that point (82 FR 30046). We also
noted that if for some reason a measure is topped out for only one
submission mechanism's benchmark, then we would remove that measure
from that submission mechanism, but not from the other submission
mechanisms available for submitting that measure. The
comments we received and our responses are discussed further below.
We also sought comment on whether Summary Survey Measures (SSMs),
if topped out, should be considered for removal from
the Consumer Assessment of Healthcare Providers and Systems (CAHPS) for
MIPS Clinician or Group Survey measure due to high, unvarying
performance within the SSM, or whether there is another alternative
policy that could be applied for topped out SSMs within the CAHPS for
MIPS Clinician or Group Survey measure (82 FR 30046). We received a
comment on this item and appreciate the input received. As this was a
request for comment only, we will take the feedback provided into
consideration for future rulemaking.
In the CY 2017 Quality Payment Program final rule, we stated that
we do not believe it would be appropriate to remove topped out measures
from the CMS Web Interface for the Quality Payment Program because the
CMS Web Interface measures are used in MIPS and in APMs, such as the
Shared Savings Program. Removing topped out measures from the CMS Web
Interface would not be appropriate because we have aligned policies
where possible, with the Shared Savings Program, such as using the
Shared Savings Program benchmarks for the CMS Web Interface measures
(81 FR 77285). In the CY 2017 Quality Payment Program final rule, we
also finalized that MIPS eligible clinicians reporting via the CMS Web
Interface must report all measures included in the CMS Web Interface
(81 FR 77116). Thus, if a CMS Web Interface measure is topped out, the
CMS Web Interface reporter cannot select other measures. We refer
readers to section II.C.7.a.(2) of this final rule with comment period
for information on scoring policies with regards to topped out measures
from the CMS Web Interface for the Quality Payment Program. We did not
propose to include CMS Web Interface measures in our proposal on
removing topped out measures.
The following is a summary of the public comments received on the
``Topped Out Measures'' proposals and our responses:
Comment: Many commenters supported the proposed timeline for
identification and removal of topped out measures because the process
relies on multiple years of data and the lifecycle permits enough time
to avoid disadvantaging certain clinicians who may report these
measures. The commenters supported the lifecycle over multiple years to
find a trend in high performance, providing time for consideration of
replacement measures to sustain the focus on clinical areas where
improvement opportunities exist. A few commenters supported the
timeline and encouraged CMS to develop a more comprehensive approach to
identifying topped out measures, to ensure that voluntary reporting on
a menu of quality measures does not allow eligible clinicians to
`cherry pick' measures. One commenter supported not applying the topped
out measure policies to measures in the CMS Web Interface to align with
measures used in APMs such as the Shared Savings Program for the CMS
Web Interface submission mechanism for the Quality Payment Program.
Response: We agree that by identifying and removing topped out
measures, we may greatly reduce eligible clinicians' ability to
``cherry pick'' measures. We believe that the benchmarks will help us
identify those measures that meet our definition of topped out
measures.
Comment: Many commenters did not support removal of the measures,
because they noted: Benchmarks published for the 2017 performance
period were not derived from MIPS reported data; criteria to identify
topped out measures did not include consideration of important clinical
considerations including patient safety and the ability to accurately
measure and motivate high quality care; and removal of measures may
disproportionately impact one submission mechanism or clinicians
[[Page 53639]]
based on medical specialty, practice size or regional variation.
Several commenters indicated that identification of topped out measures
is challenging because measures are voluntarily selected with limited
reporting on each measure, and thus benchmarks may appear to be topped
out when in fact there is still room for improvement. A few
commenters cautioned against removing measures because this may lead to
``back sliding'' due to a shift in resources from support of current
practices yielding high performance to new practices to support a new
measure. Several commenters indicated that the criteria for selection
of topped out measures should be expanded to consider the clinical
importance of measures, and a few commenters recommended the
identification of measures that are essential for high quality care
such as patient safety, public health or patient experience that should
never be removed from the list of measures. Many commenters voiced
concern over the potential number of measures that may be topped out,
which they believed would leave eligible clinicians, particularly
specialists, with few relevant measures to submit. Many commenters
recommended only removing topped out measures if there are adequate
replacement measures added to the measures list. A few commenters
indicated that topped out measures could be incorporated into composite
measures reflecting multiple, important aspects of care. A few
commenters recommended that prior to the removal of a measure, CMS
evaluate the topped out measure across submission mechanisms to
determine if the measure is harmonized across submission mechanisms.
Response: The benchmarks for the 2017 performance period are
derived from the measure's historical performance data which helps us
trend the measure's anticipated performance in the future. Measures are
considered topped out if measure performance is so high and unvarying
that meaningful differences and improvement in performance can no
longer be seen. Retaining topped out measures could have a
disproportionate impact on the scores for certain MIPS eligible
clinicians. We note that measures must be identified as topped out for
3 consecutive years (in MIPS) before they are proposed for removal in
the 4th year through notice-and-comment rulemaking. As a
part of the topped out measure timeline, we will take into
consideration other factors such as clinical relevance and the
availability of other relevant specialty measures prior to deciding
whether or not to remove the measure from the program. Through the Call
for Measures process and annual approval of QCDR measures, we
anticipate that MIPS eligible clinicians and groups will have measures
that provide meaningful measurement and are reflective of their current
scope of practice. We believe that through the annual Call for Measures
and QCDR self-nomination processes additional quality measures will be
developed and implemented in the program, that will provide eligible
clinicians and groups with a continuously growing selection of measures
to choose from that will allow for meaningful measurement. We recognize
that there are certain types of high value measures such as patient
safety and patient experience, but we disagree that such measures
should be designated as never to be removed from the list of available
quality measures. We thank the commenters for their suggestion to
remove topped out measures if there are adequate replacement measures
added to the measures list, and we will take this into consideration,
while encouraging measure stewards to submit measures to us through the
Call for Measures process. We would like to note that this policy
creates a standard timeline for us to consider which measures are
topped out and may need to be removed. Each removal would need to be
proposed and finalized through rulemaking, and we would have the
discretion to retain any particular measure that, after consideration
of public comments and other factors, may be determined to be
inappropriate for removal.
Comment: Several commenters did not support the removal of topped
out measures from QCDR submissions because commenters believed this
would reduce the ability of specialists to develop and strengthen new
measures. A few commenters believed that not including QCDR measures in
the topped out measure policy would ensure that eligible clinicians,
including anesthesia clinicians, have measures of merit during the
transition to full implementation of MIPS. One commenter urged CMS not
to remove QCDR topped out measures but rather allow topped out measures
as controls for new and developing measures by which true statistical
validity and reliability can be assessed. One commenter voiced concern
over potential removal of QCDR topped out measures without going
through the notice-and-comment rulemaking process. One commenter
indicated that EHR measures used by QCDRs are less likely to be topped
out because QCDRs led by specialty societies have significant expertise
in quality measure development, measurement, and implementation, and
are uniquely poised to develop and test meaningful measures. The
commenter indicated that specialty registries can continue to monitor
vital topped out measures, even if the measures are removed from MIPS
reporting. A few commenters noted that many topped out process measures
are important to monitor and to provide feedback to clinicians because
less than very high performance is concerning and should be flagged.
Response: We disagree that the removal of topped out QCDR measures
would reduce the ability of specialists to develop and strengthen new
measures. Rather, we believe that QCDRs can develop QCDR measures that
would address areas in which there is a known performance gap and in
which there is need for improvement. We also disagree that the removal
of QCDR measures should occur through the notice-and-comment rulemaking
process, as QCDR measures are not approved for use in the program
through rulemaking. We refer readers to section 1848(q)(2)(D)(vi) of
the Act, which expressly provides that QCDR measures are not subject to
the notice-and-comment rulemaking requirements described in section
1848(q)(2)(D)(i) of the Act that apply to other MIPS measures, and that
the Secretary is only required to publish the list of QCDR measures on
the CMS Web site. We appreciate the QCDRs' expertise in their areas of
specialty, but as previously indicated, we will utilize benchmarks for
all submission mechanisms to appropriately identify measures as topped
out, and will consider performance in all submission mechanisms before
indicating that a given measure is topped out. QCDR measures that meet
the definition of a topped out measure should also be removed from MIPS
on a similar timeline. We understand the
importance of monitoring high performance among clinicians, but we also
believe that topped out QCDR measures may inadvertently penalize
clinicians who are considered high performers when they are compared to
other high performer clinicians, as described in the CY 2017 Quality
Payment Program final rule (81 FR 77286). For example, a clinician who
performs at the 90th percentile, when compared to another high
performing clinician who scored in the 98th percentile, could
potentially receive a lower score based on the cohort in
[[Page 53640]]
which they are compared. QCDR measures, their performance data, and
clinical relevance are reviewed extensively as QCDRs self-nominate and
submit their QCDR measures for consideration on an annual basis. We
agree that specialty registries can continue to monitor their data
submission of topped out measures for purposes of monitoring
performance and improvement, even after the measures are removed from
MIPS. Additional data provided by QCDRs, or discussions about their
QCDR measures, are taken into consideration during the review process.
Comment: A few commenters encouraged CMS to have a transparent
process using multiple communication processes to indicate which
measures are topped out and which measures will have the scoring cap to
ensure MIPS eligible clinicians have the necessary time to alter their
reporting under the quality performance category before topped-out
measures are finalized for removal. Some commenters recommended that
CMS provide detailed information on the measures considered to be
topped-out, including the number and type of clinicians or groups
reporting the measure each year, the number and type of clinicians or
groups consistently reporting the measure, the range of performance
scores and any statistical testing information. Other commenters
suggested that CMS announce the status of a topped out measure in a
draft proposed rule with at least a 45-day comment period. One
commenter urged CMS to announce topped out measures at a consistent
time each year.
Response: We intend to indicate which measures are topped out
through the benchmarks that will be published on the Quality Payment
Program Web site annually, as feasible prior to the beginning of each
performance period. We intend to consider, and as appropriate, propose
removal of topped out measures in future notice-and-comment rulemaking
in accordance with the proposed timeline. We thank commenters for their
suggestions as to what information should be available on measures
considered topped out and will provide additional data elements, as
technically feasible and appropriate.
Comment: A few commenters did not support the proposed lifecycle
and made suggestions regarding the delay of the initiation of the
lifecycle or extension to the timeline, to allow more time to adjust
and continue to demonstrate improvement over time within MIPS. A few
commenters recommended lengthening the lifecycle by 1 year, allowing
the measure to be scored for 2 years after the measure is identified as
topped out. The commenters indicated this will support MIPS eligible
clinicians in incorporating appropriate measures into EHR systems and
updating clinical practice. Several commenters recommended a delay in
the start of the lifecycle to allow benchmarks to be developed from
MIPS data and a more representative sample, while giving time for MIPS
eligible clinicians to experience the program. One commenter requested
a delay in the initiation of the lifecycle for measures without a
benchmark to allow additional submissions in future years which may
lead to the development of benchmarks.
Response: We note that the topped out measure lifecycle is built
on a 4-year timeline, which would be triggered when measures are
identified as topped out through the benchmarks. We believe the 4-
year timeline would provide MIPS eligible clinicians, groups, and
third-party intermediaries with a sufficient amount of time to adjust
to the removal of identified topped out measures. Topped out measures
are identified through the benchmarks, and cannot be identified as
topped out until the benchmark is established. We would like to note
that this policy creates a standard timeline for us to consider which
measures are topped out and may need to be removed. Each removal would
need to be proposed and finalized through rulemaking, and we would have
the discretion to retain any particular measure that, after
consideration of public comments and other factors, may be determined to
be inappropriate for removal. We believe that the 4-year timeline will
provide MIPS eligible clinicians with sufficient time to incorporate
measures into their EHR systems and to update their clinical practice.
Comment: A few commenters did not support the proposed topped out
measure removal timeline, noting that the proposal would delay the
retirement of topped out measures and selection and use of different
quality measures. One commenter believed that allowing performance to
be supported by the selection of topped out measures will not provide
sufficient incentive for eligible clinicians to select the more
challenging and difficult measures available.
Response: We believe that the topped out measure timeline reflects
a sufficient amount of time in which we are able to clearly distinguish
topped out measures through their performance in the benchmarks. The
timing will allow us to take into consideration any variances in the
benchmarks, and provide sufficient timing to request public comment on
the proposed removal of topped out measures. There are a variety of
quality and QCDR measures to choose from in the MIPS program, and we
encourage MIPS individual eligible clinicians and groups to select
measures that provide meaningful measurement for them.
Final Action: After consideration of the public comments received,
and because topped out measures may provide little room for improvement
for the majority of MIPS eligible clinicians and may have a
disproportionate impact on the scores for certain MIPS eligible
clinicians, we are finalizing our proposed 4-year timeline to identify
topped out measures, after which we may propose to remove the measures
through future rulemaking. After a measure has been
identified as topped out for 3 consecutive years, we may propose to
remove the measure through notice and comment rulemaking for the 4th
year. Therefore, in the 4th year, if finalized through rulemaking, the
measure would be removed and would no longer be available for reporting
during the performance period. This policy provides a path toward
removing topped out MIPS quality measures over time. QCDR measures that
consistently are identified as topped out according to the same
timeline would not be approved for use in year 4 during the QCDR self-
nomination review process. Removal of these QCDR measures would not go
through the notice-and-comment rulemaking process as MIPS quality
measures would.
(3) Non-Outcome Measures
In the CY 2017 Quality Payment Program final rule, we sought
comment on whether we should remove non-outcome measures for which
performance cannot reliably be scored against a benchmark (for example,
measures that do not have 20 reporters with 20 cases that meet the data
completeness standard) for 3 years in a row (81 FR 77288).
Based on the need for CMS to further assess this issue, we did not
propose to remove non-outcome measures. However, we sought comment on
what the best timeline would be for removing both non-outcome and
outcome measures that cannot be reliably scored against a benchmark for
3 years (82 FR 30047). We received a number of comments on this item and
appreciate the input received. As this was a request for comment only,
we will take the feedback provided into consideration for future
rulemaking.
[[Page 53641]]
(4) Quality Measures Determined To Be Outcome Measures
Under the MIPS, individual MIPS eligible clinicians are generally
required to submit at least one outcome measure, or, if no outcome
measure is available, one high priority measure. As such, our
determinations as to whether a measure is an outcome measure are of
importance to stakeholders. We did not make any proposals on how
quality measures are determined to be outcome measures, and refer
readers to the CY 2018 Quality Payment Program proposed rule (82 FR
30047) for the criteria utilized in determining if a measure is
considered an outcome measure. We sought comment on the criteria and
process outlined in the proposed rule on how we designate outcome
measures (82 FR 30047). We received a number of comments on this item
and appreciate the input received. As this was a request for comment
only, we will take the feedback provided into consideration for future
rulemaking.
d. Cost Performance Category
(1) Background
(a) General Overview
Measuring cost is an integral part of measuring value as part of
MIPS. In implementing the cost performance category for the transition
year (2017 MIPS performance period/2019 MIPS payment year), we started
with measures that had been used in previous programs (mainly the VM)
but noted our intent to move towards episode-based measurement as soon
as possible, consistent with the statute and the feedback from the
clinician community. Specifically, we adopted 2 measures that had been
used in the VM: The total per capita costs for all attributed
beneficiaries measure (referred to as the total per capita cost
measure); and the MSPB measure (81 FR 77166 through 77168). We also
adopted 10 episode-based measures that had previously been included in
the Supplemental Quality and Resource Use Reports (sQRURs) (81 FR 77171
through 77174).
At Sec. 414.1325(e), we finalized that all measures used under the
cost performance category would be derived from Medicare administrative
claims data and, thus, participation would not require additional data
submission. We finalized a reliability threshold of 0.4 for measures in
the cost performance category (81 FR 77170). We also finalized a case
minimum of 35 for the MSPB measure (81 FR 77171) and 20 for the total
per capita cost measure (81 FR 77170) and each of the 10 episode-based
measures (81 FR 77175) in the cost performance category to ensure the
reliability threshold is met.
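For illustration only, the case minimums finalized above can be expressed as a simple check. The sketch below is not part of the regulatory text; the measure keys and case counts are hypothetical, and it shows only that a measure is scored for a clinician when the attributed cases meet or exceed the applicable case minimum.

    # Illustrative sketch only; measure keys and case counts are hypothetical.
    # Case minimums reflect those finalized above: 35 for the MSPB measure and 20
    # for the total per capita cost measure and each episode-based measure.
    CASE_MINIMUMS = {"MSPB": 35, "total_per_capita_cost": 20}
    DEFAULT_EPISODE_MINIMUM = 20

    def scorable_measures(attributed_cases):
        """Return the measures for which the attributed case count meets the
        case minimum, so that the reliability threshold can be maintained."""
        return [measure for measure, cases in attributed_cases.items()
                if cases >= CASE_MINIMUMS.get(measure, DEFAULT_EPISODE_MINIMUM)]

    # Example: only the MSPB measure meets its case minimum here.
    print(scorable_measures({"MSPB": 40, "total_per_capita_cost": 12}))  # ['MSPB']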
For the transition year, we finalized a policy to weight the cost
performance category at zero percent of the final score in order to
give clinicians more opportunity to understand the attribution and
scoring methodologies and gain more familiarity with the measures
through performance feedback so that clinicians may take action to
improve their performance (81 FR 77165 through 77166). In the CY 2017
Quality Payment Program final rule, we finalized a cost performance
category weight of 10 percent for the 2020 MIPS payment year (81 FR
77165). For the 2021 MIPS payment year and beyond, the cost performance
category will have a weight of 30 percent of the final score as
required by section 1848(q)(5)(E)(i)(II)(aa) of the Act.
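For illustration only, the schedule of weights described above translates into the following contribution of the cost performance category to the 100-point MIPS final score; the category score used below is hypothetical and is not drawn from any actual performance data.

    # Illustrative arithmetic only; the cost performance category score of 80
    # points (out of 100) is hypothetical. Weights reflect the discussion above:
    # zero percent for the 2019 MIPS payment year, 10 percent for 2020, and 30
    # percent for 2021 and beyond.
    COST_CATEGORY_WEIGHT = {2019: 0.00, 2020: 0.10, 2021: 0.30}
    cost_category_score = 80.0  # hypothetical, on a 0-100 scale

    for payment_year, weight in sorted(COST_CATEGORY_WEIGHT.items()):
        contribution = cost_category_score * weight
        print(payment_year, contribution)  # 2019: 0.0, 2020: 8.0, 2021: 24.0 points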
For descriptions of the statutory basis and our existing policies
for the cost performance category, we refer readers to the CY 2017
Quality Payment Program final rule (81 FR 77162 through 77177).
As finalized at Sec. 414.1370(g)(2), the cost performance category
is weighted at zero percent for MIPS eligible clinicians scored under
the MIPS APM scoring standard because many MIPS APMs incorporate cost
measurement in other ways. For more on the APM scoring standard, see
II.C.6.g. of this final rule with comment period.
(2) Weighting in the Final Score
We proposed at Sec. 414.1350(b)(2) to change the weight of the
cost performance category from 10 percent to zero percent for the 2020
MIPS payment year. We noted that we continue to have concerns about the
level of familiarity with and understanding of cost measures among
clinicians. We noted that we could use the additional year where the
cost performance category would not affect the final score to increase
understanding of the measures so that clinicians would be more
comfortable with their role in reducing costs for their patients. In
addition, we could use this additional year to develop and refine
episode-based cost measures, which are cost measures that are focused
on clinical conditions or procedures. We intend to propose in future
rulemaking policies to adopt episode-based measures currently in
development.
Although we believed that reducing this weight could be appropriate
given the level of understanding of the measures and the scoring
standards, we noted that section 1848(q)(5)(E)(i)(II)(aa) of the Act
requires the cost performance category to be assigned a weight of 30
percent of the MIPS final score beginning in the 2021 MIPS payment
year. We recognized that assigning a zero percent weight to the cost
performance category for the 2020 MIPS payment year may not provide a
smooth enough transition for integrating cost measures into MIPS and
may not provide enough encouragement to clinicians to review their
performance on cost measures. Therefore, we sought comment on keeping
the weight of the cost performance category at 10 percent for the 2020
MIPS payment year (82 FR 30048).
We invited public comments on this proposal of a zero percent
weighting for the cost performance category and the alternative option
of a 10 percent weighting for the cost performance category for the
2020 MIPS payment year (82 FR 30048).
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters supported our alternative option to keep
the weight of the cost performance category at 10 percent for the 2020
MIPS payment year, as we previously finalized in the CY 2017 Quality
Payment Program final rule. The commenters expressed concern that the
statutorily mandated 30 percent weight of the cost performance category
in the 2021 MIPS payment year would be too steep an increase from zero
percent, and MIPS eligible clinicians would be unprepared. Some
commenters indicated that they believed that cost measures are
intrinsic measures of value and that clinicians can demonstrate value
through lower costs. One commenter recommended that the cost
performance category be weighted at 15 percent for the 2020 MIPS
payment year.
Response: We share the commenters' concerns about the increase in
the weight of the cost performance category from zero percent in the
2020 MIPS payment year to 30 percent in the 2021 MIPS payment year,
which is statutorily required. We agree with the commenters that cost
measures are an important component of value, and that weighting the
cost category at 10 percent will help to provide a smoother transition
for clinicians by giving them more time to experience cost measurement
with the cost category having a lower relative weight of 10 percent.
Furthermore, moving forward with a lower relative weight in
anticipation of the requirement to go to 30 percent in the 2021 MIPS
payment year will allow more time for the development of episode-based
cost measures, which are being developed with substantial
[[Page 53642]]
clinician input. We are therefore adopting our alternative option to
maintain the 10 percent weight for the cost performance category for
the 2020 MIPS payment year, as we finalized in the CY 2017 Quality
Payment Program final rule (81 FR 77165).
Comment: Many commenters supported our proposal to weight the cost
performance category at zero percent of the final score for the 2020
MIPS payment year. The commenters stated that MIPS eligible clinicians
are still gaining familiarity with the scoring methodology and the cost
measures and would appreciate additional time to review feedback
reports. Some commenters supported the proposal because episode-based
measures were not yet included and therefore many clinicians would not
be measured in the cost performance category. Some commenters suggested
that CMS use the additional time to continue to improve risk
adjustment, attribution, and other components of cost measures.
Response: We will continue to work to make clinicians more familiar
with the measures and continue to refine the measures. However, we are
concerned that not assigning any weight to the cost performance
category when the weight is required to be at 30 percent in the third
MIPS payment year will result in too dramatic a transition in a single
year. We also agree with commenters that new episode-based cost
measures will be an important part of the cost category, and intend to
make future proposals about implementing episode-based measures as soon
as they are developed.
Comment: Several commenters stated that although the statute
requires the cost performance category to be weighted at 30 percent of
the final score in the third MIPS payment year, we should use
flexibility in the statute to weight the cost performance category at
zero percent or a percentage lower than 30 percent for the third MIPS
payment year and for additional years in the future either by
determining that there are no applicable measures in the cost
performance category or using broader flexibility to reweight the
performance categories. These commenters supported the zero percent
weight for the 2020 MIPS payment year but believed that the cost
performance category should not count towards the final score until
clinicians have gained more experience with this category, episode-
based measures are more developed, and risk adjustment models are more
robust.
Response: While we understand the concerns of commenters, section
1848(q)(5)(E) of the Act requires the cost performance category to be
weighted at 30 percent of the final score beginning in the third MIPS
payment year. We do not believe the statute affords us flexibility to
adjust this prescribed weight, unless we determine there are not
sufficient cost measures applicable and available to MIPS eligible
clinicians under section 1848(q)(5)(F) of the Act. We believe that a
clinician's influence on the costs borne by both patients and the
Medicare program is an important component of measuring value as
envisioned by the creation of the MIPS program. In addition, because of
our concerns about the dramatic transition between the cost performance
category being weighted at zero percent for a year and 30 percent for
the next year, we are adopting our alternative to maintain the 10
percent weight for the cost performance category for the 2020 MIPS
payment year. We continue to work with clinicians to better understand
the cost measures as they prepare for the category to be weighted at 30
percent of the final score. We are seeking extensive input from
clinicians on the development of episode-based measures and technical
updates to existing measures in addition to providing feedback reports
so that clinicians can better understand the measures.
Comment: Several commenters recommended that the cost performance
category be weighted at 10 percent in the 2020 MIPS payment year only
for those clinicians who volunteer to be measured on cost. Other
commenters expressed their support for a zero percent weighting but
requested that clinicians be given information on how they would have
scored under cost measurement.
Response: We do not have the statutory authority to score cost
measures on a voluntary basis under MIPS. Because the MIPS cost
measures are calculated based on Medicare claims data and do not
require additional reporting by clinicians, we are able to provide
outreach and model scoring scenarios without clinicians volunteering to
complete any actions. We are planning to provide feedback on both
individual measures as well as the cost performance category to
increase understanding and familiarity going into future years.
Comment: Many commenters requested that CMS provide extensive
feedback on cost measures and the cost performance category score to
ensure that clinicians are best positioned for the cost performance
category to be weighted at 30 percent of the final score for the 2021
MIPS payment year.
Response: We discuss in section II.C.9.a of this final rule with
comment period our plans to provide performance feedback, including on
cost measures. As noted there, we will also be providing information on
newly developed episode-based measures which may become a part of the
MIPS cost performance category in future years.
Comment: A few commenters recommended that the cost performance
category be weighted at zero percent for certain specialties or types
of clinicians for an indefinite period of time because not enough
measures are available for them. One commenter suggested that if at
least one episode-based measure cannot be calculated for a clinician or
group that they not be scored in the cost performance category.
Response: We recognize that not every clinician will have cost
measures attributed to them in the initial years of MIPS and therefore
may not receive a cost performance category score. However, we do not
believe that it is appropriate to exclude certain clinicians from cost
measurement on the basis of their specialty if they are attributed a
sufficient number of cases to meet the case minimum for the cost
measure. We did not propose any episode-based measures for the 2018
MIPS performance period. We address MIPS cost performance category
scoring policies in section II.C.7.a.(3) of this final rule with
comment period, but we did not propose any changes related to the
minimum number of measures required to receive a cost performance
category score. A MIPS eligible clinician must be attributed a
sufficient number of cases for at least one cost measure, and that cost
measure must have a benchmark, in order for the clinician to receive a
cost performance category score (81 FR 77322 through 77323).
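For illustration only, the scoring condition referenced above (81 FR 77322 through 77323) can be summarized as follows; the function name and inputs are hypothetical and the sketch is not part of the regulatory text.

    # Illustrative sketch only; inputs are hypothetical. A clinician receives a
    # cost performance category score only if at least one cost measure both
    # meets its case minimum and has an established benchmark.
    def receives_cost_category_score(measures):
        """measures: iterable of (attributed_cases, case_minimum, has_benchmark)."""
        return any(cases >= minimum and has_benchmark
                   for cases, minimum, has_benchmark in measures)

    print(receives_cost_category_score([(40, 35, True), (12, 20, True)]))  # True
    print(receives_cost_category_score([(50, 35, False)]))  # False: no benchmark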
Comment: One commenter recommended that small practices (defined as
15 or fewer clinicians) not have the cost performance category
contribute to the weight of their final score, at least until more
valid and reliable measures are developed.
Response: While we have a strong commitment to ensuring that small
practices are able to participate in MIPS, we do not have the statutory
authority to exempt small practices from the cost performance category.
We have offered additional flexibility for small practices in a number
of areas, including a small practice bonus that will be added to the
final score for the 2020 MIPS payment year (see section II.C.7.b.(1)(c)
of this final rule with comment period). Many of these policies are
intended to recognize the different level of administrative or other
support a small practice might have in comparison to a larger entity.
Because the MIPS cost
[[Page 53643]]
measures do not require reporting of data by clinicians other than the
usual submission of claims, there is no additional administrative
burden associated with being a small practice in the cost performance
category. Furthermore, it is possible that some small practices will
not have any cost measures applicable and available to them because
they may not meet the case minimums for any of the cost measures. Other
small practices may have a considerable volume of patients and wish to
be rewarded for their commitment to reducing the cost of care.
Comment: A few commenters recommended that the cost performance
category be weighted at a percentage higher than zero percent but lower
than 10 percent so that the cost performance category would have a
limited contribution to the final score.
Response: We are adopting our alternative of maintaining the cost
performance category weight at 10 percent of the final score for the
2020 MIPS payment year. We are doing so because we are concerned about
the dramatic transition between a zero percent weight and the 30
percent weight mandated for the 2021 MIPS payment year. We did receive
many comments in favor of the 10 percent weight and do not believe that
a weight below 10 percent will provide an easier transition to the 30
percent weight for the 2021 MIPS payment year.
Comment: Some commenters expressed general concern about our
approach to measuring the cost performance category. Some suggested
that cost measures should not be included if there are not quality
measures for the same group of patients. A few commenters suggested
that cost measures should only consider services that were personally
provided or ordered by a clinician.
Response: We have designed the Quality Payment Program to be
flexible and allow clinicians to select quality measures that reflect
their practice. We expect that most clinicians and groups will select
measures based on the types of patients they typically see. Because the
measures for the cost performance category are calculated based on
Medicare claims submitted, we believe they will also reflect a
clinician's practice. While we are finalizing cost measures that do not
directly correspond to quality measures, we note that each performance
category is weighted and combined to determine the final score. In that
sense, we believe that we are measuring value by rewarding performance
in quality while keeping down costs. We also believe that clinicians
can influence the cost of services that they do not personally perform
by improving care management with other clinicians and avoiding
unnecessary services.
Final Action: After consideration of the public comments, we are
not finalizing our proposal to weight the cost performance category at
zero percent of the final score for the 2020 MIPS payment year. We are
instead adopting our alternative option to maintain the weight of the
cost performance category at 10 percent of the final score for the 2020
MIPS payment year as we finalized in the CY 2017 Quality Payment
Program final rule (81 FR 77165).
(3) Cost Criteria
(a) Measures Proposed for the MIPS Cost Performance Category
(i) Background
Under Sec. 414.1350(a), we specify cost measures for a performance
period to assess the performance of MIPS eligible clinicians on the
cost performance category. For the 2017 MIPS performance period, we
will utilize 12 cost measures that are derived from Medicare
administrative claims data. Two of these measures, the MSPB measure and
total per capita cost measure, have been used in the VM (81 FR 77166
through 77168), and the remaining 10 are episode-based measures that
were included in the sQRURs in 2014 and 2015 (81 FR 77171 through
77174).
Section 1848(r) of the Act specifies a series of steps and
activities for the Secretary to undertake to involve the physician,
practitioner, and other stakeholder communities in enhancing the
infrastructure for cost measurement, including for purposes of MIPS,
which we summarized in detail in the CY 2018 Quality Payment Program
proposed rule (82 FR 30048).
(ii) Total Per Capita Cost and MSPB Measures
For the 2018 MIPS performance period and future performance
periods, we proposed to include in the cost performance category the
total per capita cost measure and the MSPB measure as finalized for the
2017 MIPS performance period (82 FR 30048 through 30049). We referred
readers to the description of these measures in the CY 2017 Quality
Payment Program final rule (81 FR 77164 through 77171). We proposed to
include the total per capita cost measure because it is a global
measure of all Medicare Part A and Part B costs during the performance
period. MIPS eligible clinicians are familiar with the total per capita
cost measure because the measure has been used in the VM since the 2015
payment adjustment period and performance feedback has been provided
through the annual QRUR since 2013 for a subset of groups that had 20
or more eligible professionals) and to all groups in the annual QRUR
since 2014 and mid-year QRUR since 2015. We proposed to use the MSPB
measure because many MIPS eligible clinicians will be familiar with the
measure from the VM, where it has been included since the 2016 payment
adjustment period and in annual QRUR since 2014 and the mid-year QRUR
since 2015, or its hospital-specified version, which has been a part of
the Hospital VBP Program since 2015. In addition to familiarity, these
two measures cover a large number of patients and provide an important
measurement of clinician contribution to the overall population that a
clinician encounters.
We did not propose any changes to the methodologies for payment
standardization, risk adjustment, and specialty adjustment for these
measures and refer readers to the CY 2017 Quality Payment Program final
rule (81 FR 77164 through 77171) for more information about these
methodologies.
We noted that we will continue to evaluate cost measures that are
included in MIPS on a regular basis and anticipate that measures could
be added or removed, subject to rulemaking under applicable law, as
measure development continues. We will also maintain the measures that
are used in the cost performance category by updating specifications,
risk adjustment, and attribution as appropriate. We anticipate
including a list of cost measures for a given performance period in
annual rulemaking.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters opposed the inclusion of the total per
capita cost measure and the MSPB measure as cost measures for the 2018
MIPS performance period and future performance periods. Commenters
expressed concern that these measures did not differentiate between
services or circumstances that clinicians could control from those that
they could not. The commenters stated that the MSPB measure had been
developed for the hospital setting and had not been endorsed for use
for clinician accountability by the NQF. The commenters stated that the
total per capita cost measure had not been endorsed by the NQF. Some
commenters recommended that these
[[Page 53644]]
measures be eliminated when episode-based measures are made part of the
program because episode-based measures are more focused on certain
conditions.
Response: Both the total per capita cost and MSPB measures were
included in the QRURs and used in the VM for many years before the
implementation of MIPS. These two measures cover a large number of
patients and provide an important measurement of clinician contribution
to the overall population that a clinician encounters. As with all of
the cost measures that we have developed, we continue to refine these
measures for improvement. If we find that episode-based measures would
be an appropriate replacement for both of these measures, we would
address that issue in future rulemaking. At this time, we believe that
the total per capita and MSPB measures are tested and reliable for
Medicare populations and are therefore the best measures available for
the cost performance category. We are concurrently developing new
episode-based cost measures with substantial clinician input, which we
will consider proposing in future rulemaking.
Comment: Several commenters supported our proposal to include the
total per capita cost measure and MSPB measure as cost measures for the
2018 MIPS performance period. These commenters stated that these
measures had been used in the legacy VM and would be applicable to many
clinicians.
Response: We appreciate the commenters for their support.
Comment: A few commenters recommended that Part B drugs be excluded
from the cost measures because Part D drugs are excluded. They
suggested that including Part B drugs is unfair because it would
penalize clinicians for prescribing or providing appropriate care.
Response: We believe that clinicians play a key role in prescribing
drugs for their patients and that the costs associated with drugs can
be a significant contributor to the overall cost of caring for a
patient. We do not believe it would be appropriate to remove the cost
of Medicare Part B drugs from the cost measures, when other services
that are ordered but not performed by clinicians, such as laboratory
tests or diagnostic imaging, are included. Clinicians play a similar
role in prescribing Part D drugs, and Part D drugs can also be a
significant contributor to the overall cost of care. However, there are
technical challenges that would need to be addressed to integrate Part
D drug costs. Section 1848(q)(2)(B)(ii) of the Act requires CMS, to the
extent feasible and applicable, to account for the cost of drugs under
Medicare Part D as part of cost measurement under MIPS, and we will
continue to explore the addition of this data in cost measures.
Final Action: After consideration of the public comments, we are
finalizing our proposal to include the total per capita cost and MSPB
measures in the cost performance category for the 2018 MIPS performance
period and future performance periods.
(iii) Episode-Based Measures
Episode-based measures differ from the total per capita cost
measure and MSPB measure because their specifications only include
services that are related to the episode of care for a clinical
condition or procedure (as defined by procedure and diagnosis codes),
as opposed to including all services that are provided to a patient
over a given period of time. For the 2018 MIPS performance period, we
did not propose to include in the cost performance category the 10
episode-based measures that we adopted for the 2017 MIPS performance
period in the CY 2017 Quality Payment Program final rule (81 FR 77171
through 77174). We instead will work to develop new episode-based
measures, with significant clinician input, for future performance
periods.
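For illustration only, the distinction described above can be sketched as follows; the trigger code, related service codes, and claim records are hypothetical and are not drawn from any draft or finalized measure specification.

    # Illustrative sketch only; the codes and claims are hypothetical. Unlike the
    # total per capita cost and MSPB measures, an episode-based measure counts
    # only services related to the episode of care opened by a trigger code.
    EPISODE_TRIGGER_CODE = "HYPOTHETICAL-TRIGGER"
    RELATED_SERVICE_CODES = {"HYPOTHETICAL-RELATED-1", "HYPOTHETICAL-RELATED-2"}

    def episode_cost(claims):
        """claims: list of dicts with 'code' and 'allowed_amount' keys, already
        limited to the episode window opened by the trigger code."""
        return sum(claim["allowed_amount"] for claim in claims
                   if claim["code"] in RELATED_SERVICE_CODES
                   or claim["code"] == EPISODE_TRIGGER_CODE)

    # Unrelated services within the window are excluded from the episode cost.
    print(episode_cost([
        {"code": "HYPOTHETICAL-TRIGGER", "allowed_amount": 1000.0},
        {"code": "HYPOTHETICAL-RELATED-1", "allowed_amount": 250.0},
        {"code": "UNRELATED-SERVICE", "allowed_amount": 400.0},
    ]))  # 1250.0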
We received extensive comments on our proposal to include 41 of
these episode-based measures for the 2017 MIPS performance period,
which we responded to in the CY 2017 Quality Payment Program final rule
(81 FR 77171 through 77174). We also received additional comments after
publication of that final rule with comment period about the decision
to include 10 episode-based measures for the 2017 MIPS performance
period. Although comments were generally in favor of the inclusion of
episode-based measures in the future, there was also overwhelming
stakeholder interest in more clinician involvement in the development
of these episode-based measures as required by section 1848(r)(2) of
the Act. Although there was an opportunity for clinician involvement in
the development of some of the episode-based measures included for the
2017 MIPS performance period, it was not as extensive as the process we
are currently using to develop episode-based measures. We believe that
the new episode-based measures, which we intend to propose in future
rulemaking to include in the cost performance category for the 2019
MIPS performance period, will be substantially improved by more
extensive stakeholder feedback and involvement in the process.
A draft list of care episodes and patient condition groups that
could become episode-based measures used in the Quality Payment
Program, along with trigger codes that would indicate the beginning of
the episode, was posted for comment in December 2016 (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Episode-Based-Cost-Measure-Development-for-the-Quality-Payment-Program.pdf). This
material was informed by engagement with clinicians from over 50
clinician specialty societies through a Clinical Committee formed to
participate in cost measure development. Subsequently, Clinical
Subcommittees have been formed to provide input from a diverse array of
clinicians on identifying conditions and procedures for episode groups.
For the first set of episode-based cost measures being developed, the
Clinical Subcommittees have nearly 150 clinicians affiliated with
nearly 100 national specialty societies, recommending which services or
claims would be counted in episode costs. This will ensure that cost
measures in development are directly informed by a substantial number
of clinicians and members of specialty societies.
In addition, a technical expert panel has met to provide oversight
and guidance for our development of episode-based cost measures. The
technical expert panel has offered recommendations for defining an
episode group, assigning costs to the group, attributing episode groups
to clinicians, risk adjusting episodes, and aligning cost and quality.
This expert feedback has been built into the current cost measure
development process.
As this process continues, we are continuing to seek input from
clinicians. We believe that episode-based measures will benefit from
this comprehensive approach to development. In addition, because it is
possible that the new episode-based measures under development could
address similar conditions as those in the episode-based measures
finalized for the 2017 MIPS performance period, we believe that it
would be better to focus attention on the new episode-based measures,
so that clinicians would not receive feedback or scores from two
measures for the same patient condition or procedure. We will endeavor
to have as many episode-based measures available as possible for the
2019 MIPS performance period but will continue to develop measures for
potential consideration in the more distant future.
Although we did not propose to include any episode-based measures
in
[[Page 53645]]
calculating the cost performance category score for the 2020 MIPS
payment year, we noted that we do plan to continue to provide
confidential performance feedback to clinicians on their performance on
episode-based measures developed under the processes required by
section 1848(r)(2) of the Act as appropriate in order to increase
familiarity with the concept of episode-based measurement as well as
the specific episodes that could be included in determining the cost
performance category score in the future. We recently provided an
initial opportunity for clinicians to review their performance based on
the new episode-based measures, as the measures are developed and as
the information is available. We note that this feedback will be
specific to the new episode-based measures that are developed under the
process described above and may be presented in a different format than
MIPS eligible clinicians' performance feedback as described in section
II.C.9.a. of this final rule with comment period. However, our
intention is to align the feedback as much as possible to ensure
clinicians receive opportunities to review their performance on
potential new episode-based measures for the cost performance category
prior to the 2019 MIPS performance period. We are concerned that
continuing to provide feedback on the older episode-based measures
along with feedback on new episode-based measures will be confusing and
a poor use of resources. Because we are focusing on development of new
episode-based measures, our feedback on episode-based measures that
were previously developed will discontinue after 2017, as these
measures would no longer be maintained or reflect changes in diagnostic
and procedural coding. We intend to provide feedback on newly developed
episode-based measures as they become available in a new format around
summer 2018. We noted that the feedback provided in the summer of 2018
will go to those MIPS eligible clinicians for whom we are able to
calculate the episode-based measures, which means it is possible that a
clinician may not receive feedback on episode-based measures in both
the fall of 2017 and the summer of 2018. We believe that receiving
feedback on the new episode-based measures will support clinicians in
their readiness for the 2019 MIPS performance period.
As previously finalized in the CY 2017 Quality Payment
Program final rule (81 FR 77173), the 10 episode-based measures (which
we did not propose for the 2018 MIPS performance period) will be used
for determining the cost performance category score for the 2019 MIPS
payment year in conjunction with the MSPB measure and the total per
capita cost measure, although the cost performance category score will
be weighted at zero percent in that year.
The following is a summary of the public comments received and our
responses:
Comment: Many commenters supported our decision not to propose for
the 2018 MIPS performance period the 10 episode-based measures that
will be used for the 2017 MIPS performance period. These commenters
stated that they supported the focus on the development of new episode-
based measures that are currently being developed under section
1848(r)(2) of the Act and agreed that there would be confusion if
multiple versions of episode-based measures existed.
Response: We appreciate the commenters for their support.
Comment: Many commenters expressed support for episode-based
measurement but concern about our stated plan to introduce new episode-
based measures to be used in the cost performance category beginning in
next year's proposed rule. Many commenters expressed support for the
process that had prioritized clinician involvement but were concerned
that the measures would not be able to be tested or understood by
clinicians prior to their introduction in the MIPS program. Some
commenters recommended that episode-based measures be made available
for feedback for at least a year before contributing to the cost
performance category percent score. Some commenters recommended that
cost measures not be included unless they were endorsed by the NQF or
recommended by the MAP.
Response: As part of our episode-based measure development, we are
completing an extensive outreach initiative in the fall of 2017 to
share performance information with many clinicians on the newly
developed episode-based measures as part of field testing, a part of
measure development. We believe these efforts go beyond the typical
testing associated with many performance measures and should reveal
any issues that were not apparent during development, a process that
also included many clinician experts. We did not make any specific
proposals related
to the inclusion of episode-based measures in future years, but this
development work is intended to develop measures that could be used in
the MIPS cost performance category. All measures that will be included
in the program would be included in a future proposed rule, and we
would discuss the assessment and testing of the measures at the time of
their proposal. Although CMS is conducting a rigorous process to ensure
that any new measure is thoroughly reviewed before implementation, we
believe it is in the interest of MIPS participants, particularly
certain specialists, to have access to new episode-based measures. We
will consider opportunities to submit measures that have been or may
be adopted for the cost performance category for NQF endorsement and
for MAP review in the future.
Comment: A few commenters recommended specific clinical topics for
episode-based measures, including oncology care, chronic care, care of
the frail elderly, and rare disease. One commenter recommended that CMS
consider a measure that focuses on adherence to clinical pathways,
rather than using costs of care, because clinical pathways would
differentiate care that is appropriate from care that is not.
Response: We appreciate the suggestions for future development of
episode-based measures and other measures. We will continue to endeavor
to develop measures that capture the cost of care for as many different
types of patients and clinicians as possible. We would also review
potential different methodologies as appropriate.
Comment: Several commenters recommended that episode-based measures
be developed so that all drug costs are considered in the same manner,
rather than Medicare Part B drugs being included and Medicare Part D
drugs being excluded. These commenters cited the clinical
interchangeability of these types of drugs and expressed concern that
they would be considered differently in determining cost measures.
Response: Section 1848(q)(2)(B)(ii) of the Act requires CMS, to the
extent feasible and applicable, to account for the cost of drugs under
Medicare Part D as part of cost measurement under MIPS. As stated in
the CY 2017 Quality Payment Program final rule, we will continue to
explore methods to add Part D drug costs into cost measures in the
future. We believe that Part D drugs are a significant contributor to
costs for both patients and the Medicare program and should be measured
when technically feasible. Episode-based measures may include Part B
drug costs if clinically appropriate, as we believe these are an
important component of health spending.
Comment: One commenter requested that CMS develop a process to
allow stakeholders to develop their own cost measures, rather than only
relying on
[[Page 53646]]
the CMS episode-based measure development. This commenter suggested
that this would allow for more leadership from relevant specialties of
medicine.
Response: Although we continue to develop episode-based measures,
we are open to considering other types of measures for use in the cost
performance category. If an episode-based measure or cost measure were
to be created by an external stakeholder, we may consider it for
inclusion in the program using the same criteria that we have used to
develop and refine other cost measures.
Comment: A few commenters opposed the decision to not propose the
inclusion of the 10 episode-based measures as cost performance category
measures for the 2018 MIPS performance period and for future
performance periods. These commenters suggested that clinicians would
benefit from having these measures as part of their score even as new
episode-based measures are developed.
Response: Many of the 10 episode-based measures that are included
for the 2019 MIPS payment year have similar topics to those in the new
list of episode-based measures we are currently developing. We believe
that continuing to use these measures would create confusion.
Furthermore, we want to potentially include episode-based cost measures
that have significant clinician input, which is a cornerstone of the
new episode-based cost measures currently being developed.
Comment: A few commenters recommended that, in addition to episode-
based measures, we include condition-specific total per capita cost
measures that were used in the VM.
Response: We are currently focusing on the development of episode-
based measures. We continue to believe that the total per capita cost
measure we have adopted is inclusive of the four condition-specific
total per capita cost measures that have been used under the VM
(chronic obstructive pulmonary disease, congestive heart failure,
coronary artery disease, and diabetes mellitus).
Final Action: After consideration of the public comments, for the
2018 MIPS performance period, we will not include in the cost
performance category the 10 episode-based measures that we adopted for
the 2017 performance period, and we do not anticipate proposing to
include these measures in future performance periods. We will continue
to work on development and outreach for new episode-based measures,
such as those that are undergoing field testing in October 2017, and
may propose to include them in MIPS as appropriate in future
rulemaking.
(iv) Attribution
In the CY 2017 Quality Payment Program final rule, we changed the
list of primary care services that had been used to determine
attribution for the total per capita cost measure by adding
transitional care management (CPT codes 99495 and 99496) codes and a
chronic care management code (CPT code 99490) (81 FR 77169). In the CY
2017 Physician Fee Schedule final rule, we changed the payment status
for two existing CPT codes (CPT codes 99487 and 99489) that could be
used to describe care management from B (bundled) to A (active), meaning
that the services would be paid under the Physician Fee Schedule (81 FR
80349). The services described by these codes are substantially similar
to those described by the chronic care management code that we added to
the list of primary care services beginning with the 2017 performance
period. We therefore proposed to add CPT codes 99487 and 99489, both
describing complex chronic care management, to the list of primary care
services used to attribute patients under the total per capita cost
measure (82 FR 30050).
We did not propose any changes to the attribution methods for the
MSPB measure and referred readers to the CY 2017 Quality Payment
Program final rule (81 FR 77168 through 77169) for more information.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Some commenters supported the proposal to add CPT codes
99487 and 99489 to the list of primary care services used to attribute
patients under the total per capita cost measure, noting the similarity
of these codes to other codes defined as primary care services for this
purpose.
Response: We thank the commenters for their support.
Comment: Some commenters expressed concern that cost measures were
being attributed to clinicians before patient relationship codes were
being reported by clinicians. Some commenters recommended that cost
measures not be used before the patient relationship codes are
implemented and studied.
Response: To facilitate the attribution of patients and episodes to
one or more clinicians, section 1848(r)(3) of the Act requires the
development of patient relationship categories and codes that define
and distinguish the relationship and responsibility of a physician or
applicable practitioner with a patient at the time of furnishing an
item or service. In the CY 2018 Physician Fee Schedule proposed rule
(82 FR 34129), we proposed to use certain HCPCS modifiers as the
patient relationship codes. Section 1848(r)(4) of the Act requires
claims submitted for items and services furnished by a physician or
applicable practitioner on or after January 1, 2018, shall, as
determined appropriate by the Secretary, include the applicable patient
relationship code, in addition to other information. We proposed (82 FR
34129) that for at least an initial period while clinicians gain
familiarity, reporting the HCPCS modifiers on claims would be
voluntary, and the use and selection of the modifiers would not be a
condition of payment. The statute requires us to include the cost
performance category in the MIPS program, and thus, we cannot delay the
use of cost measures in MIPS until after the patient relationship codes
have been implemented, as recommended by the commenters. However, we
may consider future changes to our attribution methods for cost
measures based on the patient relationship codes that will be reported
on claims.
Comment: One commenter recommended that services provided in a
nursing facility (POS 32) not be included for purposes of attribution
under the total per capita cost measure. One commenter recommended that
services provided in a skilled nursing facility (POS 31) continue
to be excluded for purposes of attribution under the total per capita
cost measure.
Response: Patients in a skilled nursing facility (SNF) (POS 31)
require more frequent practitioner visits--often from 1 to 3 times a
week. In contrast, patients in nursing facilities (NFs) (POS 32) are
almost always permanent residents, generally receive their primary
care services in the facility for the duration of their life, and are
usually seen only every 30 to 60 days unless medical necessity
dictates otherwise. We believe this
distinction is important enough to treat these sites of service
differently in terms of attribution for the total per capita cost
measure. Services provided in POS 31 are not included in the definition
of primary care services used for the total per capita cost measure,
but services provided in POS 32 are. We will continue to evaluate
attribution methods as part of measure development and maintenance.
Comment: One commenter opposed the addition of complex chronic care
management codes because palliative
[[Page 53647]]
care physicians often bill for the services, but serve in a consulting
role as opposed to serving as a primary care clinician.
Response: We believe that the attribution model that assigns
patients on the basis of a plurality of services would not assign
patients for the purposes of the total per capita cost measure on the
basis of a single visit, unless the patient had not otherwise seen a
primary care clinician during the year. We believe these codes are
consistent with other services typically provided by primary care
clinicians.
Comment: Many commenters expressed general concerns about
attribution methods, stating that they were not well understood, were
not properly tested, and were unfair. These commenters encouraged CMS
to improve or perfect attribution methods.
Response: We will continue to work to improve attribution methods
as we develop the measures and methods that are part of the cost
performance category. We do not use a single attribution method--
instead the attribution method is linked to a measure and attempts to
best identify the clinician who may have influenced the spending for a
patient, whether it be all spending in a year or a narrower set of
spending in a defined period. As we continue our work to develop
episode-based measures and refine the two cost measures included for
the 2018 MIPS performance period, we will work to explain the
methodology for attribution and how it works in relation to the measure
and the scoring methodology.
Final Action: After consideration of the public comments, we are
finalizing our proposal to add CPT codes 99487 and 99489 to the list of
primary care services used to attribute patients under the total per
capita cost measure.
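
    For readers who find a concrete illustration helpful, the following
minimal Python sketch shows how a plurality-of-allowed-charges
attribution check along the lines described in this section might look.
It is not the official measure specification: the primary care service
code set shown is deliberately incomplete, the field and function names
are hypothetical, and the actual attribution methodology includes
additional steps.

    # Illustrative only; the official measure specifications define the
    # complete primary care service code set and attribution methodology.
    PRIMARY_CARE_CODES = {
        "99490",           # chronic care management (added for 2017)
        "99495", "99496",  # transitional care management (added for 2017)
        "99487", "99489",  # complex chronic care management (added in this final rule)
    }
    EXCLUDED_POS = {"31"}  # SNF services excluded; NF (POS 32) services remain included

    def attribute_beneficiary(claim_lines):
        """Return the TIN with the plurality of primary care allowed charges
        for one beneficiary, or None if no qualifying services exist.
        claim_lines: iterable of dicts with keys 'tin', 'hcpcs', 'pos', 'allowed'.
        """
        totals = {}
        for line in claim_lines:
            if line["hcpcs"] in PRIMARY_CARE_CODES and line["pos"] not in EXCLUDED_POS:
                totals[line["tin"]] = totals.get(line["tin"], 0.0) + line["allowed"]
        return max(totals, key=totals.get) if totals else None
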
(v) Reliability
In the CY 2017 Quality Payment Program final rule (81 FR 77169
through 77170), we finalized a reliability threshold of 0.4 for
measures in the cost performance category. Reliability is an important
evaluation for cost measures to ensure that differences in performance
are not the result of random variation. In the proposed rule, we
provided a summary of the importance of reliability in measurement and
how high reliability must be balanced with other goals, such as
measuring where there is significant variation and ensuring that cost
measurement is not limited to large groups with large case volume (82
FR 30050). Although we did not propose any adjustments to our
reliability policies, we did receive a number of comments on issues
related to reliability which we will consider as part of future
rulemaking. We will continue to evaluate reliability as we develop new
measures to ensure that they meet an appropriate standard.
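
    For illustration only, the following Python sketch shows one common
signal-to-noise formulation of measure reliability and how the 0.4
threshold could be applied. It is not necessarily the exact computation
used for the cost measures, and the variance components in the example
are hypothetical values.

    RELIABILITY_THRESHOLD = 0.4  # threshold finalized for the cost performance category

    def measure_reliability(between_var, within_var, case_count):
        """One common signal-to-noise formulation: the share of observed
        variation attributable to true differences between clinicians rather
        than to random variation; reliability rises with case volume."""
        return between_var / (between_var + within_var / case_count)

    # Hypothetical variance components for a clinician with 20 attributed cases.
    r = measure_reliability(between_var=0.09, within_var=1.44, case_count=20)
    meets_threshold = r >= RELIABILITY_THRESHOLD  # r is about 0.56, so True here
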
(b) Attribution for Individuals and Groups
We did not propose any changes for how we attribute cost measures
to individual and group reporters. We refer readers to the CY 2017
Quality Payment Program final rule for more information (81 FR 77175
through 77176). Although we did not propose any adjustments to our
attribution policies, we did receive a number of comments on issues
related to attribution which we will consider as part of future
rulemaking.
(c) Incorporation of Cost Measures With SES or Risk Adjustment
Both measures proposed for inclusion in the cost performance
category for the 2018 MIPS performance period are risk adjusted at the
measure level. Although the risk adjustment of the 2 measures is not
identical, in both cases it is used to recognize the higher risk
associated with demographic factors (such as age) or certain clinical
conditions. We recognize that the risks accounted for with this
adjustment are not the only potential attributes that could lead to a
higher cost patient. Stakeholders have pointed to many other factors
such as income level, race, and geography that they believe contribute
to increased costs. These issues and our plans for attempting to
address them are discussed at length in section II.C.7.b.(1)(a) of this
final rule with comment period. While we did not propose any changes to
address risk adjustment for cost measures in this rule, we continue to
believe that this is an important issue and it will be considered
carefully in the development of future cost measures and for the
overall cost performance category. Although we did not propose any
adjustments to our policies on incorporating cost measures with SES or
risk adjustment, we did receive a number of comments which we will
consider as part of future rulemaking.
(d) Incorporation of Cost Measures With ICD-10 Impacts
In the CY 2018 Quality Payment Program proposed rule (82 FR 30098),
we discussed our proposal to assess performance on any measures
impacted by ICD-10 updates based only on the first 9 months of the 12-
month performance period. Because the total per capita cost and MSPB
measures include costs from all Medicare Part A and B services,
regardless of the specific ICD-10 codes that are used on claims, and do
not assign patients based on ICD-10, we do not anticipate that any
measures for the cost performance category would be affected by this
ICD-10 issue during the 2018 MIPS performance period. However, as we
continue our plans to expand cost measures to incorporate episode-based
measures, ICD-10 changes could become important. Episode-based measures
may be opened (triggered) by and may assign services based on ICD-10
codes. Therefore, a change to ICD-10 coding could have a significant
effect on an episode-based measure. Changes to ICD-10 codes will be
incorporated into the measure specifications on a regular basis through
the measure maintenance process. Please refer to section
II.C.7.a.(1)(c) of this final rule with comment period for a summary of
the comments and our response on this issue.
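
    As a simplified illustration of why ICD-10 maintenance matters for
episode-based measures, the following Python sketch shows a hypothetical
trigger-code check. The diagnosis codes and trigger logic are examples
only and are not drawn from any actual episode-based measure
specification.

    # Hypothetical trigger-code set for a single illustrative episode group.
    TRIGGER_CODES_V1 = {"I21.09", "I21.11"}          # before an ICD-10 update
    TRIGGER_CODES_V2 = TRIGGER_CODES_V1 | {"I21.9"}  # after maintenance adds a new code

    def opens_episode(diagnosis_codes, trigger_codes):
        """An episode is opened (triggered) if any diagnosis code on the claim
        appears in the measure's trigger-code set."""
        return any(code in trigger_codes for code in diagnosis_codes)

    claim_dx = ["I21.9"]                       # a code introduced by the ICD-10 update
    opens_episode(claim_dx, TRIGGER_CODES_V1)  # False: episode missed without maintenance
    opens_episode(claim_dx, TRIGGER_CODES_V2)  # True: captured once specifications are updated
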
(e) Application of Measures to Non-Patient Facing MIPS Eligible
Clinicians
We did not propose changes to the policy we finalized in the CY
2017 Quality Payment Program final rule (81 FR 77176) that we will
attribute cost measures to non-patient facing MIPS eligible clinicians
who have sufficient case volume, in accordance with the attribution
methodology. Although we did not propose any adjustments to our
policies for attributing cost measures to non-patient facing MIPS
eligible clinicians with sufficient case volume, we did receive a few
comments, which we will consider as part of future rulemaking.
Section 1848(q)(2)(C)(iv) of the Act requires the Secretary to
consider the circumstances of professional types who typically furnish
services without patient facing interaction (non-patient facing) when
determining the application of measures and activities. In addition,
this section allows the Secretary to apply alternative measures or
activities to non-patient facing MIPS eligible clinicians that fulfill
the goals of a performance category. Section 1848(q)(5)(F) of the Act
allows the Secretary to re-weight MIPS performance categories if there
are not sufficient measures and activities applicable and available to
each type of MIPS eligible clinician involved.
We believe that non-patient facing clinicians are an integral part
of the care team and that their services do contribute to overall
costs, but at this time we believe it is better to focus on the
development of a comprehensive system of episode-based measures that focus
[[Page 53648]]
on the role of patient-facing clinicians. Accordingly, for the 2018
MIPS performance period, we did not propose alternative cost measures
for non-patient facing MIPS eligible clinicians or groups. This means
that non-patient facing MIPS eligible clinicians or groups are unlikely
to be attributed any cost measures that are generally attributed to
clinicians who have patient-facing encounters. Therefore,
we anticipate that, similar to MIPS eligible clinicians or groups that
do not meet the required case minimums for any cost measures, many non-
patient facing MIPS eligible clinicians may not have sufficient cost
measures applicable and available to them and would not be scored on
the cost performance category under MIPS.
We will continue to explore methods to incorporate non-patient
facing clinicians into the cost performance category in the future.
(f) Facility-Based Measurement as It Relates to the Cost Performance
Category
In the CY 2018 Quality Payment Program proposed rule (82 FR 30123),
we discussed our proposal to implement section 1848(q)(2)(C)(ii) of the
Act by assessing clinicians who meet certain requirements and elect
participation based on the performance of their associated hospital in
the Hospital VBP Program. We refer readers to section II.C.7.a.(4) of
this final rule with comment period for full details on the final
policies related to facility-based measurement, including the measures
and how the measures are scored, for the cost performance category.
e. Improvement Activity Criteria
(1) Background
Section 1848(q)(2)(C)(v)(III) of the Act defines an improvement
activity as an activity that relevant eligible clinician organizations
and other relevant stakeholders identify as improving clinical practice
or care delivery, and that the Secretary determines, when effectively
executed, is likely to result in improved outcomes. Section
1848(q)(2)(B)(iii) of the Act requires the Secretary to specify
improvement activities under subcategories for the performance period,
which must include at least the subcategories specified in section
1848(q)(2)(B)(iii)(I) through (VI) of the Act, and in doing so to give
consideration to the circumstances of small practices, and practices
located in rural areas and geographic health professional shortage
areas (HPSAs).
Section 1848(q)(2)(C)(iv) of the Act generally requires the
Secretary to give consideration to the circumstances of non-patient
facing individual MIPS eligible clinicians or groups and allows the
Secretary, to the extent feasible and appropriate, to apply alternative
measures and activities to such individual MIPS eligible clinicians and
groups.
Section 1848(q)(2)(C)(v) of the Act required the Secretary to use a
request for information (RFI) to solicit recommendations from
stakeholders to identify improvement activities and specify criteria
for such improvement activities, and provides that the Secretary may
contract with entities to assist in identifying activities, specifying
criteria for the activities, and determining whether individual MIPS
eligible clinicians or groups meet the criteria set. For a detailed
discussion of the feedback received from the MIPS and APMs RFI, see the
CY 2017 Quality Payment Program final rule (81 FR 77177).
In the CY 2017 Quality Payment Program final rule (81 FR 77178), we
defined improvement activities at Sec. 414.1305 as an activity that
relevant MIPS eligible clinicians, organizations and other relevant
stakeholders identify as improving clinical practice or care delivery
and that the Secretary determines, when effectively executed, is likely
to result in improved outcomes.
In the CY 2017 Quality Payment Program final rule (81 FR 77199), we
solicited comments on activities that would advance the usage of health
IT to support improvement activities for future consideration. Please
refer to the CY 2018 Quality Payment Program proposed rule (82 FR
30052) for a full discussion of the public comments we received in
response to the CY 2017 Quality Payment Program final rule and our
responses provided on activities that would advance the usage of health
IT to support improvement activities.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30052),
we sought comment on how we might provide flexibility for MIPS eligible
clinicians to effectively demonstrate improvement through health IT
usage while also measuring such improvement for future consideration.
We received many comments on this topic and will take them into
consideration for future rulemaking.
(2) Contribution to the Final Score
(i) Patient-Centered Medical Home
In the CY 2017 Quality Payment Program final rule (81 FR 77179
through 77180), we finalized at Sec. 414.1355 that the improvement
activities performance category would account for 15 percent of the
final score. We also finalized at Sec. 414.1380(b)(3)(iv) criteria for
recognition as a certified patient-centered medical home or comparable
specialty practice. Since then, it has come to our attention that the
common terminology utilized in the general medical community for
``certified'' patient-centered medical home is ``recognized'' patient-
centered medical home.
Therefore, in order to provide clarity, in the CY 2018 Quality
Payment Program proposed rule (82 FR 30052), we proposed that the term
``recognized'' be accepted as equivalent to the term ``certified'' when
referring to the requirements for a patient-centered medical home to
receive full credit for the improvement activities performance category
for MIPS. Specifically, we proposed to revise Sec. 414.1380(b)(3)(iv)
to provide that a MIPS eligible clinician or group in a practice that
is certified or recognized as a patient-centered medical home or
comparable specialty practice, as determined by the Secretary, receives
full credit for performance on the improvement activities performance
category. A practice is certified or recognized as a patient-centered
medical home if it meets any of the criteria specified under Sec.
414.1380(b)(3)(iv).
We invited public comment on this proposal.
Comment: A few commenters supported the proposed expansion of the
patient-centered medical home definition, to include both medical homes
that are ``certified'' and those that are ``recognized.'' These
commenters noted that inclusion of both terms aligns with the
terminology used by various organizations and states that have patient-
centered medical home programs that may be eligible for full credit in
the improvement activities performance category.
Response: We thank the commenters for their support.
Comment: One commenter stated that in the CY 2017 Quality Payment
Program final rule, the documented recognition as a patient-centered
medical home from an accredited body combined with continual
improvements was listed as already receiving credit in the improvement
activity performance category.
Response: We believe the commenter is referring to our discussion
in the CY 2017 Quality Payment Program final rule (81 FR 77179 through
77180), where we finalized at Sec. 414.1380 an expanded definition of
what is acceptable for recognition as a certified
[[Page 53649]]
patient-centered medical home or comparable specialty practice. We
recognized a MIPS eligible clinician or group as being a certified
patient-centered medical home or comparable specialty practice if they
have achieved certification or accreditation as such from a national
program, or they have achieved certification or accreditation as such
from a regional or state program, private payer or other body that
certifies 500 or more practices for patient-centered medical
home accreditation or comparable specialty practice certification. In
the CY 2018 Quality Payment Program proposed rule, we did not propose
any substantive changes to that definition. However, for the sake of
clarity we proposed that we will accept the designation of
``recognized'' as equivalent to the designation of ``certified'' when
referring to the requirements for a patient-centered medical home or
comparable specialty practice to receive full credit for the
improvement activities performance category for MIPS and also to update
Sec. 414.1380(b)(3)(iv) to reflect this. Our intention behind this
proposal was to reflect common terminology utilized in the general
medical community--that ``certified'' patient-centered medical home is
equivalent to ``recognized'' patient-centered medical home. A practice
is certified or recognized as a patient-centered medical home if it
meets any of the criteria specified under Sec. 414.1380(b)(3)(iv).
Comment: Several commenters provided comments that were not related
to our proposal to accept the designation of ``recognized'' as
equivalent to the designation of ``certified'' when referring to the
requirements for a patient-centered medical home or comparable
specialty practice to receive full credit for the improvement
activities performance category for MIPS. They are summarized here.
Some commenters recommended that CMS consider other models as patient-
centered medical homes for full credit in this performance category.
These commenters suggested that CMS consider awarding full credit to those MIPS
eligible clinicians and groups participating in models such as a
Patient Centered Medical Neighborhood (PCMN), participation in a
Certified Community Behavioral Health Clinic (CCBHC) or a Medicaid
Section 2703 Health Home, or Blue Distinction[supreg] Total Care. Other
commenters suggested that CMS establish a policy to offer full auto-
credit to any practice that achieves National Committee for Quality
Assurance (NCQA) recognition by December 31st of a given performance
year, since NCQA requires that practices seeking patient-centered
medical home and patient-centered specialty practice (PCSP) recognition
perform the appropriate activities for a minimum of 90 days. Further,
these commenters recommended that this policy should extend to any
other approved patient-centered medical home programs that use a 90-day
look-back period.
Response: We acknowledge the commenters' suggestions that we
consider additional models as patient-centered medical homes for full
credit in this performance category. In the CY 2017 Quality Payment
Program final rule (81 FR 77180), we previously stated that we
recognize a MIPS eligible clinician or group as being a certified
patient-centered medical home or comparable specialty practice if they
have achieved certification or accreditation as such from a national
program, or they have achieved certification or accreditation as such
from a regional or state program, private payer or other body that
certifies 500 or more practices for patient-centered medical
home accreditation or comparable specialty practice certification. We
went on to state that examples of nationally recognized accredited
patient-centered medical homes are: (1) The Accreditation Association
for Ambulatory Health Care; (2) the National Committee for Quality
Assurance (NCQA) Patient-Centered Medical Home; (3) The Joint
Commission Designation; or (4) the Utilization Review Accreditation
Commission (URAC) (81 FR 77180). We finalized that the criteria for
being a nationally recognized accredited patient-centered medical home
are that it must be national in scope and must have evidence of being
used by a large number of medical organizations as the model for their
patient-centered medical home (81 FR 77180). We also stated that we
will provide full credit for the improvement activities
performance category for a MIPS eligible clinician or group that has
received certification or accreditation as a patient-centered medical
home or comparable specialty practice from a national program or from a
regional or state program, private payer, or other body that
administers patient-centered medical home accreditation and certifies
500 or more practices for patient-centered medical home accreditation
or comparable specialty practice certification (81 FR 77180). We note,
however, that in the CY 2018 Quality Payment Program proposed rule we
did not propose any changes to the definition of what is acceptable for
recognition as a certified patient-centered medical home or comparable
specialty practice that we finalized in the CY 2017 Quality Payment
Program final rule (81 FR 77180) and codified under Sec.
414.1380(b)(3)(iv). Without more information, we cannot determine
whether the suggested entities fall within our
previously established definition above. Furthermore, while we are not
considering any changes to this definition and criteria for the CY 2018
performance period, we may consider commenters' suggestions as we craft
policy for future rulemaking. Moreover, we would like to make clear
that credit is not automatically granted; MIPS eligible clinicians and
groups must attest in order to receive the credit (81 FR 77181), which
is codified at Sec. 414.1360.
Final Action: After consideration of the public comments received,
we are finalizing, as proposed, our proposals: (1) That the term
``recognized'' be accepted as equivalent to the term ``certified'' when
referring to the requirements for a patient-centered medical home to
receive full credit for the improvement activities performance category
for MIPS; and (2) to update Sec. 414.1380(b)(3)(iv) to reflect this
change.
(ii) Weighting of Improvement Activities
As previously explained in the CY 2017 Quality Payment Program
final rule (81 FR 77194), we believe that high weighting should be used
for activities that directly address areas with the greatest impact on
beneficiary care, safety, health, and well-being. In the CY 2017
Quality Payment Program final rule (81 FR 77198), we requested
commenters' specific suggestions for additional activities or
activities that may merit additional points beyond the ``high'' level
for future consideration.
Comment: Several commenters urged CMS to increase the overall
number of high-weighted activities in this performance category. Some
commenters recommended additional criteria for designating high-
weighted activities, such as an improvement activity's impact on
population health, medication adherence, and shared decision-making
tools, and encouraged CMS to be more transparent in our weighting
decisions. Several commenters recommended that CMS weight registry-
related activities as high, and suggested that we award individual MIPS
eligible clinicians and groups in APMs full credit in this performance
category. The commenters also offered many recommendations for changing
[[Page 53650]]
current medium-weighted activities to high and offered many specific
suggestions for new high-weighted improvement activities.
Response: After review and consideration of comments in the CY 2017
Quality Payment Program final rule, while we did not propose changes to
our approach for weighting improvement activities in the CY 2018
Quality Payment Program proposed rule (82 FR 30052), we will take the
additional criteria suggested by commenters for designating high-
weighted activities into consideration in future rulemaking. We did,
however, propose new high-weighted as well as new medium-weighted
activities in Table F in the Appendix of the proposed rule. We refer
readers to Table F in the Appendix of this final rule with comment
period, where we are finalizing new activities, and Table G in the same
Appendix where we are finalizing changes to existing improvement
activities.
For MIPS eligible clinicians participating in MIPS APMs, in the CY
2017 Quality Payment Program final rule (81 FR 77185) we finalized a
policy to reduce reporting burden through the APM scoring standard for
this performance category to recognize improvement activities work
performed through participation in MIPS APMs. This policy is codified
at Sec. 414.1370(g)(3), and we refer readers to the CY 2017 Quality
Payment Program final rule for further details on reporting and scoring
this performance category under the APM Scoring Standard (81 FR 77259
through 77260). In the CY 2018 Quality Payment Program proposed rule,
we did not propose any changes to these policies.
We received many comments on this topic and will take them into
consideration for future rulemaking.
(3) Improvement Activities Data Submission Criteria
(a) Submission Mechanisms
(i) Generally
In the CY 2017 Quality Payment Program final rule (81 FR 77180), we
discussed that for the transition year of MIPS, we would allow for
submission of data for the improvement activities performance category
using the qualified registry, EHR, QCDR, CMS Web Interface, and
attestation data submission mechanisms, in each case through
attestation of the activities performed.
Specifically, in the CY 2017 Quality Payment Program final rule (81 FR
77180), we finalized a policy that regardless of the data submission
method, with the exception of MIPS eligible clinicians in MIPS APMs,
all individual MIPS eligible clinicians or groups must select
activities from the Improvement Activities Inventory. In addition, we
codified at Sec. 414.1360, that for the transition year of MIPS, all
individual MIPS eligible clinicians or groups, or third party
intermediaries such as health IT vendors, QCDRs and qualified
registries that submit on behalf of an individual MIPS eligible
clinician or group, must designate a ``yes'' response for activities on
the Improvement Activities Inventory. We also codified at Sec.
414.1360 that in the case where an individual MIPS eligible clinician
or group is using a health IT vendor, QCDR, or qualified registry for
their data submission, the individual MIPS eligible clinician or group
will validate the improvement activities that were performed, and the
health IT vendor, QCDR, or qualified registry would submit on their
behalf.
We would like to maintain stability in the Quality Payment Program
and continue these policies into future years. In the CY 2018 Quality
Payment Program proposed rule (82 FR 30053), we proposed to update Sec.
414.1360 for the transition year of MIPS and future years, to reflect
that all individual MIPS eligible clinicians or groups, or third party
intermediaries such as health IT vendors, QCDRs and qualified
registries that submit on behalf of an individual MIPS eligible
clinician or group, must designate a ``yes'' response for activities on
the Improvement Activities Inventory. We note that these are the same
requirements as previously codified for the transition year;
requirements for the transition year remain unchanged. We merely
proposed to extend the same policies for future years.
In addition, in the case where an individual MIPS eligible
clinician or group is using a health IT vendor, QCDR, or qualified
registry for their data submission, we proposed that the MIPS eligible
clinician or group will certify all improvement activities were
performed and the health IT vendor, QCDR, or qualified registry would
submit on their behalf (82 FR 30053). In summary, we proposed to
continue our previously established policies for future years and to
generally apply our group policies to virtual groups. Furthermore, we
refer readers to the CY 2018 Quality Payment Program proposed rule at
(82 FR 30029) and section II.C.4.d. of this final rule, where we are
finalizing to generally apply our group policies to virtual groups.
While we previously codified at Sec. 414.1325(d) in the CY 2017
Quality Payment Program final rule that individual MIPS eligible
clinicians and groups may only use one submission mechanism per
performance category, (81 FR 77275), in section II.C.6.a.(1) of this
final rule with comment period, we are finalizing our proposal, with
modification, to revise Sec. 414.1325(d) for purposes of the 2021 MIPS
payment year and future years to allow individual MIPS eligible
clinicians and groups to submit measures and activities, as applicable,
via as many submission mechanisms as necessary to meet the requirements
of the quality, improvement activities, or advancing care information
performance categories. We refer readers to section II.C.6.a.(1) of
this final rule with comment period for discussion of this proposal as
finalized.
We also included a designation column in the Improvement Activities
Inventory at Table H in the Appendix of the CY 2017 Quality Payment
Program final rule (81 FR 77817) that indicated which activities
qualified for the advancing care information bonus codified at Sec.
414.1380. In future updates to the Improvement Activities Inventory, we
intend to continue to indicate which activities qualify for the
advancing care information performance category bonus.
We invited public comment on our proposals.
Comment: A few commenters expressed support for our proposal that
for the transition year of MIPS and future years, all individual MIPS
eligible clinicians or groups, or third party intermediaries such as
health IT vendors, QCDRs, and qualified registries that submit on
behalf of an individual MIPS eligible clinician or group, must
designate a ``yes'' response for activities on the Improvement
Activities Inventory, and that where an individual MIPS eligible
clinician or group is using a health IT vendor, QCDR, or qualified
registry for their data submission, the MIPS eligible clinician or
group must certify all improvement activities were performed and the
health IT vendor, QCDR, or qualified registry would submit on their
behalf.
Response: We thank the commenters for their support. We realize the
way the proposal was worded may have caused some potential confusion.
Therefore, we are clarifying here that our proposal merely extends the
same requirements, as previously codified for the transition year, to
future years; requirements for the transition year remain unchanged.
Comment: One commenter urged CMS to refer to registries more
broadly, rather than using the term ``QCDR,'' noting that many
qualified registries are in use by clinicians, even though these may
not have received official QCDR
[[Page 53651]]
status for one reason or another. Another commenter requested that CMS
consider allowing other third parties that may be collecting
information that is indicative of completion of an improvement activity
to submit data to CMS, suggesting that for example, an organization
that awards continuing medical education (CME) credits that qualify as
improvement activities could submit a list of MIPS eligible clinicians
who received qualifying CME credit directly to CMS.
Response: We note that the terms ``qualified registry'' and
``QCDR'' are defined terms for the purposes of MIPS, as codified at
Sec. 414.1400. We refer readers to section II.C.10. of this final rule
with comment period for a detailed discussion of third party
intermediaries. While we recognize that there are other registries that
are not considered MIPS qualified registries or QCDRs, those registries
have the option to become a MIPS QCDR using the process finalized in
the CY 2017 Quality Payment Program final rule (81 FR 77365), or
qualify as a MIPS registry using the process finalized at 81 FR 77383
in that same final rule. If an organization becomes a MIPS qualified
registry or QCDR, then it could submit MIPS data to us.
Final Action: After consideration of the public comments we
received, we are finalizing our proposals, with clarification, to
continue our previously established policies for future years.
Specifically: (1) For purposes of MIPS Year 2 and future years, MIPS
eligible clinicians or groups must submit data on MIPS improvement
activities in one of the following manners: Via qualified registries,
EHR submission mechanisms, QCDRs, the CMS Web Interface, or
attestation. Our proposal language may have caused some confusion, because
it included the transition year; however, we are clarifying here that
policies were previously established for that year and remain
unchanged. We are also finalizing, as proposed: (2) For activities that
are performed for at least a continuous 90 days during the performance
period, MIPS eligible clinicians must submit a yes response for
activities within the Improvement Activities Inventory; and (3) that
Sec. 414.1360 will be updated to reflect these changes.
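
    For illustration only, the following Python sketch shows what a
simple attestation record and a 90-day continuity check might look
like. The record is a hypothetical structure, not the actual submission
format used by any of the data submission mechanisms listed above, and
the activity identifier shown is an invented example.

    from datetime import date

    # Hypothetical attestation record (not an actual CMS submission format).
    attestation = {
        "activity_id": "IA_EXAMPLE_1",   # an activity from the Improvement Activities Inventory
        "response": "yes",               # the "yes" designation that is submitted
        "start_date": date(2018, 1, 1),
        "end_date": date(2018, 4, 15),
    }

    def meets_minimum_period(record, minimum_days=90):
        """Check that the activity was performed for at least a continuous 90
        days during the performance period before a 'yes' response is submitted."""
        days_performed = (record["end_date"] - record["start_date"]).days + 1
        return record["response"] == "yes" and days_performed >= minimum_days
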
(ii) Group Reporting
In the CY 2017 Quality Payment Program final rule (81 FR 77181), we
clarified that if one MIPS eligible clinician (NPI) in a group
completed an improvement activity, the entire group (TIN) would receive
credit for that activity. In addition, we specified that all MIPS
eligible clinicians reporting as a group would receive the same score
for the improvement activities performance category if at least one
clinician within the group is performing the activity for a continuous
90 days in the performance period. We refer readers to section
II.C.4.d. of this final rule with comment period, where we are
finalizing to generally apply our group policies to virtual groups. We
did not propose any changes to our group reporting policies in the
proposed rule. However, in the CY 2018 Quality Payment Program proposed
rule (82 FR 30053), we requested comment for future consideration on
whether we should establish a minimum threshold (for example, 50
percent) of the clinicians (NPIs) that must complete an improvement
activity in order for the entire group (TIN) to receive credit in the
improvement activities performance category in future years. In
addition, we requested comments for future consideration on recommended
minimum threshold percentages and whether we should establish different
thresholds based on the size of the group. In the proposed rule (82 FR
30053), we noted that we are concerned that while establishing any
specific threshold for the percentage of NPIs in a TIN that must
participate in an improvement activity for credit will incentivize some
groups to move closer to the threshold, it may have the unintended
consequence of incentivizing groups who are exceeding the threshold to
gravitate back toward the threshold. Therefore, we requested comments
for future consideration on how to set this threshold while maintaining
the goal of promoting greater participation in an improvement activity.
Additionally, we noted in the CY 2017 Quality Payment Program final
rule (81 FR 77197) that we intended, in future years, to score the
improvement activities performance category based on performance and
improvement, rather than simple attestation. In the CY 2018 Quality
Payment Program proposed rule (82 FR 30053), we sought comment on how
we could measure performance and improvement for future consideration;
we were especially interested in ways to measure performance without
imposing additional burden on eligible clinicians, such as by using
data captured in eligible clinicians' daily work.
We received many comments on these topics and will take them into
consideration as we craft future policies.
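
    As a simple illustration of the group credit policy described above,
and of the 50 percent NPI threshold on which we requested comment
(which is not finalized policy), the following Python sketch may be
helpful; the function name and parameters are hypothetical.

    def group_receives_credit(npis_completing, npis_in_tin, threshold=None):
        """Finalized policy: the TIN receives credit if at least one NPI in the
        group performed the activity for a continuous 90 days. The 'threshold'
        argument models the 50 percent option raised only as a request for
        comment; it is not finalized policy."""
        if threshold is None:
            return npis_completing >= 1
        return (npis_completing / npis_in_tin) >= threshold

    group_receives_credit(1, 40)                  # True under the finalized policy
    group_receives_credit(15, 40, threshold=0.5)  # False under the illustrative 50 percent option
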
(b) Submission Criteria
(i) Background
In the CY 2017 Quality Payment Program final rule (81 FR 77185), we
finalized at Sec. 414.1380 to set the improvement activities
submission criteria under MIPS, to achieve the highest potential score,
at two high-weighted improvement activities or four medium-weighted
improvement activities, or some combination of high and medium-weighted
improvement activities. While the minimum reporting period for one
improvement activity is 90 days, the maximum frequency with which an
improvement activity may be reported would be once during the 12-month
performance period (81 FR 77185). In addition, we refer readers to
section II.C.4.d. of this final rule with comment period, where we are
finalizing to generally apply group policies to virtual groups.
In the CY 2017 Quality Payment Program final rule (81 FR 77185), we
established exceptions to the above for: Small practices; practices
located in rural areas; practices located in geographic HPSAs; non-
patient facing individual MIPS eligible clinicians or groups; and
individual MIPS eligible clinicians and groups that participate in a
MIPS APM or a patient-centered medical home submitting in MIPS.
Specifically, for individual MIPS eligible clinicians and groups that
are small practices, practices located in rural areas or geographic
HPSAs, or non-patient facing individual MIPS eligible clinicians or
groups, to achieve the highest score, one high-weighted or two medium-
weighted improvement activities are required (81 FR 77185). For these
individual MIPS eligible clinicians and groups, in order to achieve
one-half of the highest score, one medium-weighted improvement activity
is required (81 FR 77185).
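
    For illustration only, the following Python sketch expresses the
submission criteria described above in relative terms, with a
high-weighted activity counting twice as much as a medium-weighted one.
The unit values are an assumption chosen solely to reproduce the stated
criteria and are not the official point scale.

    WEIGHT = {"high": 2, "medium": 1}  # illustrative units, not official points

    def ia_category_fraction(activities, special_status=False):
        """activities: list of 'high'/'medium' weightings attested for the period.
        special_status: small practice, practice located in a rural area or
        geographic HPSA, or non-patient facing, for which one high-weighted or
        two medium-weighted activities earn the highest score."""
        target = 2 if special_status else 4   # 2 high or 4 medium => full credit
        total = sum(WEIGHT[a] for a in activities)
        return min(total / target, 1.0)       # fraction of the highest potential score

    ia_category_fraction(["medium", "medium", "high"])     # 1.0 (full credit)
    ia_category_fraction(["medium"], special_status=True)  # 0.5 (one-half of the highest score)
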
In the CY 2017 Quality Payment Program final rule (81 FR 77185), we
finalized that under the APM scoring standard, all clinicians
identified on the Participation List of an APM receive at least one-
half of the highest score applicable to the MIPS APM. To develop the
improvement activities score assigned to each MIPS APM, we compare the
requirements of the specific MIPS APM with the list of activities in
the Improvement Activities Inventory and score those activities in the
same manner that they are otherwise scored for MIPS eligible clinicians
(81 FR 77185). If by our assessment the MIPS APM does not receive the
maximum improvement activities performance category score, then the APM
entity can submit additional
[[Page 53652]]
improvement activities (81 FR 77185). All other individual MIPS
eligible clinicians or groups that we identify as participating in APMs
that are not MIPS APMs will need to select additional improvement
activities to achieve the improvement activities highest score (81 FR
77185). We did not propose any changes to these policies; we refer
readers to section II.C.6.g. of this final rule with comment period for
further discussion of the APM scoring standard.
We received many comments on this topic and will take them into
consideration for future rulemaking.
(ii) Patient-Centered Medical Homes or Comparable Specialty Practices
In the CY 2017 Quality Payment Program final rule (81 FR 77185), we
finalized at Sec. 414.1380(b)(3)(iv) to provide full credit for the
improvement activities performance category, as required by law, for an
individual MIPS eligible clinician or group that has received
certification or accreditation as a patient-centered medical home or
comparable specialty practice from a national program or from a
regional or state program, private payer or other body that administers
patient-centered medical home accreditation and certifies 500 or more
practices for patient-centered medical home accreditation or comparable
specialty practice certification, or for an individual MIPS eligible
clinician or group that is a participant in a medical home model. We
noted in the CY 2017 Quality Payment Program final rule (81 FR 77178)
that practices may receive this designation at a practice level and
that TINs may be comprised of both undesignated practices and
designated practices. We finalized at Sec. 414.1380(b)(3)(viii) that
to receive full credit as a certified patient-centered medical home or
comparable specialty practice, a TIN that is reporting must include at
least one practice site which is a certified patient-centered medical
home or comparable specialty practice (81 FR 77178). We also indicated
that we would continue to have more stringent requirements in future
years, and would lay the groundwork for expansion towards continuous
improvement over time (81 FR 77189).
We received many comments on the CY 2017 Quality Payment Program
final rule regarding our transition year policy that only one practice
site within a TIN needs to be certified as a patient-centered medical
home for the entire TIN to receive full credit in the improvement
activities performance category. While several commenters supported our
transition year policy, others disagreed and suggested moving to a
more stringent requirement in future years while still offering some
flexibility. We refer readers to the CY 2017 Quality Payment Program
final rule (81 FR 77180 through 77182) for the details of those
comments and our responses. In response to these comments, in the CY
2018 Quality Payment Program proposed rule (82 FR 30054), we proposed
to revise Sec. 414.1380(b)(3)(x) to provide that for the 2020 MIPS
payment year and future years, to receive full credit as a certified or
recognized patient-centered medical home or comparable specialty
practice, at least 50 percent of the practice sites within the TIN must
be recognized as a patient-centered medical home or comparable
specialty practice. This is an increase to the previously established
requirement codified at Sec. 414.1380(b)(3) in the CY 2017 Quality
Payment Program final rule (81 FR 77178) that only one practice site
within a TIN needs to be certified as a patient-centered medical home.
We chose not to propose to require that every site be certified,
because that could potentially be overly restrictive given that some
sites within a TIN may be in the process of being certified as patient-
centered medical homes. We believe a 50 percent threshold is
achievable, and is supported by a study of physician-owned primary care
groups in a recent Annals of Family Medicine article (Casalino, et al.,
2016) https://www.annfammed.org/content/14/1/16.full. For nearly all
groups in this study (sampled with variation in size and geographic
area), at least 50 percent of the practice sites within the group had a
medical home designation.\2\ If the group is unable to meet the 50
percent threshold, then the individual MIPS eligible clinician may
choose to receive full credit as a certified patient-centered medical
home or comparable specialty practice by reporting as an individual for
all performance categories. In addition, we refer readers to section
II.C.4.d. of this final rule with comment period, where we are
finalizing to generally apply our group policies to virtual groups.
Further, in the proposed rule, we welcomed suggestions on an
appropriate threshold for the number of NPIs within the TIN that must
be recognized as a certified patient-centered medical home or
comparable specialty practice to receive full credit in the improvement
activities performance category.
---------------------------------------------------------------------------
\2\ Casalino LP, Chen MA, Staub CT, Press MJ, Mendelsohn JL,
Lynch JT, Miranda Y. Large independent primary care medical groups.
Ann Fam Med. 2016;14(1):16-25.
---------------------------------------------------------------------------
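
    As a purely illustrative aid, the following Python sketch applies the
50 percent practice-site threshold proposed above; the function name and
inputs are hypothetical, and the sketch does not capture the
individual-reporting alternative discussed above.

    def tin_receives_full_ia_credit(recognized_sites, total_sites, threshold=0.5):
        """Proposed for the 2020 MIPS payment year and future years: at least
        50 percent of the practice sites within the TIN must be recognized as
        a patient-centered medical home or comparable specialty practice."""
        return total_sites > 0 and (recognized_sites / total_sites) >= threshold

    tin_receives_full_ia_credit(1, 5)  # False: only 20 percent of sites are recognized
    tin_receives_full_ia_credit(3, 5)  # True: 60 percent of sites are recognized
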
In the CY 2018 Quality Payment Program proposed rule (82 FR 30054
through 30055), we invited public comments on our proposals to revise Sec.
414.1380(b)(3)(x) to provide that for the 2020 MIPS payment year and
future years, to receive full credit as a certified or recognized
patient-centered medical home or comparable specialty practice, at
least 50 percent of the practice sites within the TIN must be
recognized as a patient-centered medical home or comparable specialty
practice. However, we are correcting here that we intended to add Sec.
414.1380(b)(3)(x) as a new provision, not revise it. This was an
inadvertent typographical error. The following is a summary of the
public comments received and our responses.
Comment: Several commenters supported CMS's proposal to raise the
threshold to 50 percent for the number of practice sites that must be
recognized within a TIN to receive full credit in this performance
category as a patient-centered medical home or comparable specialty practice.
These commenters noted that the proposal strikes an appropriate balance
between requiring a TIN to show substantial accomplishment before
receiving full credit in this performance category and acknowledging
that it may be infeasible for every practice site within a TIN to
achieve this recognition. One commenter urged CMS to accept data feeds
from accrediting bodies so that we can move to requiring 100 percent of
practice sites within a TIN to achieve recognition in order to receive
full credit in this performance category.
Response: We appreciate the support and agree that establishing a
50 percent threshold strikes an appropriate balance. In addition, we
appreciate the comment regarding accepting data feeds from accrediting
bodies and will explore this idea, including whether it is technically
feasible, as we craft future policy.
Comment: Another commenter supported the proposal, but suggested
that for TINs that include both primary care and specialty sites, it
would be most logical to define the threshold as 50 percent of the
primary care sites, so that the denominator for the threshold would be
the number of primary care sites only.
Response: It is important to note that our criteria for patient-
centered medical homes include specialty sites as well, not just
primary care. We do not believe it is appropriate to restrict patient-
centered medical home designation to primary care sites only. Based on
a survey of patient-centered medical home accrediting organizations
named in the Annals of Family Medicine article
[[Page 53653]]
(Casalino, et al., 2016) cited above in our proposal, only one
specifically requires that practices be primary care, and another
offers specialty-specific patient-centered medical home recognition.
Therefore, it is reasonable to assume that specialty practices could
attain patient-centered medical home recognition through multiple
accrediting organizations along with their primary care sites if they
choose to. Overall, we believe that setting the patient-centered
medical home group threshold at 50 percent of the group is achievable
and it is our goal to encourage TINs to have more practice sites
undergo transformation.
Comment: Several commenters opposed CMS's proposal to raise the
threshold to 50 percent for the number of practice sites that must be
recognized within a TIN to receive full credit in this performance
category as a patient-centered medical home or comparable specialty practice,
expressing concern that it is a significant change from the first year
of the program, and urging CMS to continue to provide flexibility in
this area or gradually increase the threshold in future years.
Several commenters expressed concern that the proposed threshold is
premature and may interfere with their ability to report participation
in an improvement activity that may be unique to their specialty group
or discourage participation by some clinicians in the medical home
models altogether.
Other commenters expressed concern that CMS's proposed policy fails
to account for the effort and investment required to achieve this
designation, and fails to account for how the work of those sites that
do achieve such recognition impacts specialty clinics within a TIN by
ensuring coordinated primary care for patients whom those practice
sites refer for specialized care that cannot be managed by primary
care alone. Some commenters recommended that CMS consider
alternatives to our proposal. A few commenters recommended a threshold
of 2 or more practice sites, or beginning with a lower threshold, such
as 33 percent. One commenter suggested that CMS instead consider
awarding prorated credit for the entire TIN that is in proportion to
the percentage of the TIN that is a patient-centered medical home or
comparable specialty. For example, for one practice site that is a
patient-centered medical home out of five sites under the same TIN,
this practice would receive 20 percent of the 100 percent credit for
the performance category score, and eligible clinicians in the other
four sites within the TIN would need to demonstrate other improvement
activities. Although MIPS eligible clinicians may choose to receive
full credit by reporting as an individual clinician, one commenter
noted that this is not a reasonable alternative due to the complexity
and burden required to do so as part of a large multi-specialty group.
This commenter suggested that before proceeding with this policy, CMS
should determine an alternative that allows a portion of a group under
one TIN to report as a separate subgroup on measures and activities
that are more applicable to that subgroup. This commenter suggested
that alternatively, CMS could incorporate thresholds into improvement
activities, consider a variety of thresholds (for example, clinicians
participating, target population included, entire practice included,
etc.) and adjust the thresholds based on the type of improvement
activity. Another commenter suggested that CMS equate this threshold
with the credit received, giving the example that if 70 percent of NPIs
within a TIN are performing an improvement activity, then that group
should get 70 percent credit toward that improvement activity score.
Commenters suggested that this addresses the possibility of a decline
in further improvement once the set threshold is achieved.
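For illustrative purposes only (this is not regulatory text or CMS policy), the following minimal Python sketch restates the arithmetic of the two commenter-suggested alternatives summarized above; the function names and data structures are hypothetical.

# Hypothetical illustration of the commenter-suggested alternatives
# summarized above; these do NOT reflect the finalized CMS policy.

def prorated_tin_credit(recognized_sites, total_sites):
    """Alternative 1: prorate improvement activities credit to the share
    of practice sites in the TIN recognized as a patient-centered medical
    home (for example, 1 of 5 sites yields 20 percent of full credit)."""
    if total_sites <= 0:
        raise ValueError("TIN must contain at least one practice site")
    return recognized_sites / total_sites

def npi_proportional_credit(participating_npis, total_npis):
    """Alternative 2: credit toward a single improvement activity equals
    the share of NPIs in the TIN performing it (for example, 70 percent
    of NPIs yields 70 percent credit toward that activity)."""
    if total_npis <= 0:
        raise ValueError("TIN must contain at least one NPI")
    return participating_npis / total_npis

# Worked examples drawn from the comment summary above.
assert prorated_tin_credit(1, 5) == 0.20
assert npi_proportional_credit(70, 100) == 0.70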
Response: We disagree with the commenters who oppose increasing the
threshold to 50 percent of the group practice sites to receive patient-
centered medical home designation. Currently, only one practice site in
a TIN with multiple practice sites is required for full credit as a
patient-centered medical home. We recognized that the transition year
was the first time MIPS eligible clinicians or groups would be measured
on the quality improvement work on a national scale. Therefore, we
approached the improvement activities performance category with these
principles in mind along with the overarching principle for the MIPS
program that we are building a process that will have increasingly more
stringent requirements over time. We noted in the CY 2017 Quality
Payment Program final rule (81 FR 77188 through 77189) that the
baseline requirements would continue to become more stringent in future
years, and that we were laying the groundwork
for expansion towards continuous improvement over time. We recognized
that quality improvement is a critical aspect of improving the health
of individuals and the health care delivery system overall. We have
provided great flexibility during the transition year and believe it is
time to increase this threshold to encourage TINs to increase their
number of patient-centered medical homes. We do not believe that only
one MIPS eligible clinician should represent the entire group going
forward; and accordingly, do not believe one MIPS eligible clinician's
patient-centered medical home status should qualify the entire group to
receive the maximum improvement activity performance category score (15
points) toward their final score beyond the transition year. In
addition, as discussed in our proposal above, we determined that a 50
percent threshold would be appropriate, because we believe it is an
achievable goal and it is supported by a study of physician-owned
primary care groups in a recent Annals of Family Medicine article, in
which nearly all of the groups sampled (which varied in size and
geographic area) had 50 percent or more of the practice sites within
the group recognized as an NCQA patient-centered medical home.
In response to the commenters who believed that the proposed
threshold may interfere with their ability to report participation in
an improvement activity that may be unique to their specialty group, we
note that if those specialty groups decided to use patient-centered
medical home recognition as their credit for the improvement activities
performance category, they would not be able to report on improvement
activities unique to their specialties. We refer commenters
to our proposal above where we state that if the group is unable to
meet the 50 percent threshold, then the individual MIPS eligible
clinicians may choose to receive full credit as a recognized or
certified patient-centered medical home, or comparable specialty
practice, as an individual for all performance categories (82 FR
30054). To emphasize this point, specialty clinicians could either be
recognized as a comparable specialty practice under the patient-
centered medical home designation, which would reflect the specialty
care they provide, or they could report on improvement activities that
may be unique to their specialty group as individual reporters.
Therefore, we do not believe that setting the patient-centered medical
home threshold at 50 percent would interfere with a group's ability to
report other specialty-specific improvement activities. We believe that
the suggestion that we use a threshold of 2 or more practice sites,
instead of 50 percent, might discourage large medical groups from
investing in patient-centered medical home transformation more broadly,
because many large
[[Page 53654]]
medical groups may have ten practice sites, and having only 2 sites
recognized as patient-centered medical homes would mean a very
different investment for a practice with 3 sites than it would for a
practice with 30 sites. We also disagree that a threshold of 33 percent
is appropriate, because the literature (Casalino, et al., 2016), as
cited in our proposal, demonstrated that a 50 percent target is
achievable and we have seen a subset of large, multi-specialty medical
groups from across the country that have already surpassed this target.
We also believe that finalizing a proportion lower than 50 percent of
practice sites would unfairly discredit practices that have greater
integration and required a significant investment. In addition, using a
pro-rated approach as suggested by a commenter brings significant added
complexity and burden that we do not believe would be outweighed by the
benefits.
Comment: Several commenters indicated that large multispecialty
practices, such as academic medical centers, have a large number of
specialists; therefore, it is unlikely that 50 percent of the practice
sites under their TIN would be recognized as medical homes. Commenters
cautioned that excluding these medical homes from getting credit while
other practices get full credit is likely to discourage practice
locations from seeking this designation.
Response: Regarding commenters who suggested that large
multispecialty practices, such as academic medical centers with a large
number of specialists would be unlikely to have 50 percent of the
practice sites under their TIN recognized as medical homes, we want to
raise the threshold to encourage greater transition such that there is
a meaningful investment in transforming their practice sites. Having 50
percent of their sites being recognized as patient-centered medical
homes represents a significant investment toward practice
transformation that is achievable and supported by the literature. As
cited in our proposal (Casalino, et al., 2016), studies have
demonstrated that a 50 percent target is achievable and we have seen a
subset of large, multi-specialty medical groups from across the country
that have already surpassed this target. In addition, if an academic
medical center has numerous sites, and only one is a patient-centered
medical home, we do not believe that represents the same degree of
investment in practice transformation as a TIN with 50 percent or more
of the practice sites being recognized medical homes, because
having only 2 sites recognized as patient-centered medical homes would
mean a very different investment for a practice with 3 sites than it
would for a practice with 30 sites.
Comment: Another commenter suggested that CMS use the CMS Study on
Burden Associated with Quality Reporting that is discussed in section
II.C.6.e.(9) of this final rule with comment period to solicit input
from stakeholders about how to assess thresholds of participation,
score practices on performance, and assess improvement.
Response: As discussed in the CY 2017 Quality Payment Program final
rule (81 FR 77195), the goals of the CMS Study on Burden Associated
with Quality Reporting are to determine whether there will be improved
outcomes, reduced burden in reporting, and enhancements in clinical
care by selected MIPS eligible clinicians desiring:
A more data-driven approach to quality measurement.
Measure selection unconstrained by a CEHRT program or
system.
Improving the quality of data submitted to CMS.
Enabling CMS to get data more frequently and provide feedback
more often.
We do not believe the CMS Study on Burden Associated with Quality
Reporting is the appropriate vehicle to assess thresholds of
participation, score practices on performance, and assess improvement.
We will, however, take these comments into consideration as we craft
future policies.
Comment: One commenter requested that CMS more clearly define the
term ``practice'' used in the CY 2018 Quality Payment Program proposed rule (82
FR 30054) and clarify, for example, whether ``practice'' means a
physical location where services are delivered or the administrative
address, among other things, and urged CMS to define this term in a way
that includes as many individual MIPS eligible clinicians as possible.
Response: In the CY 2018 Quality Payment Program proposed rule (82
FR 30054), we proposed to revise Sec. 414.1380(b)(3)(x) to provide
that for the 2020 MIPS payment year and future years, to receive full
credit as a certified or recognized patient-centered medical home or
comparable specialty practice, at least 50 percent of the practice
sites within the TIN must be recognized as a patient-centered medical
home or comparable specialty practice. However, we note again that we
intended to add Sec. 414.1380(b)(3)(x) as a new provision, not revise
it. This was an inadvertent typographical error. We interpret the
commenter to be referring to our use of the term ``practice sites'' and we agree
with the commenter that defining this term will reduce ambiguity. In
response, we are clarifying in this final rule that a practice site is
the physical location where services are delivered. We are
operationalizing this definition by using the practice address field
within the Provider Enrollment, Chain and Ownership System (PECOS). We
believe this definition is generally acceptable in the medical
community as a whole, because physical practice locations are a common
way for primary care to be organized.
Comment: One commenter stated that the intent of MACRA was to give
all practices recognized as a patient-centered medical home or
comparable specialty full credit in the improvement activities
performance category and that the proposed threshold is not consistent
with the intent of Congress.
Response: We disagree with the commenter that our proposal is
contrary to the intent of Congress. Section 1848(q)(5)(C)(i) of the Act
specifies that a MIPS eligible clinician or group that is certified or
recognized as a patient-centered medical home or comparable specialty
practice, as determined by the Secretary, must be given the highest
potential score for the improvement activities performance category for
the performance period. We believe the statute gives the Secretary
discretion to determine what qualifies as a certified or recognized
patient-centered medical home or comparable specialty practice. We have
provided the utmost flexibility by allowing any undesignated practices
to receive full credit simply by virtue of being in a TIN with one
designated practice. As discussed in the CY 2017 Quality Payment
Program final rule (81 FR 30054) for the transition year, practices
may receive a patient-centered medical home designation at a practice
level, and individual TINs may be composed of both
undesignated practices and practices that have received a designation
as a patient-centered medical home (for example, only one practice site
has received patient-centered medical home designation in a TIN that
includes five practice sites). In addition, we finalized an expanded
definition of what is acceptable for recognition as a certified
patient-centered medical home or comparable specialty practice (81 FR
77180). We refer readers to Sec. 414.1380(b)(3)(iv) for details. We
recognized a MIPS eligible clinician or group as being a certified
patient-centered medical home or comparable specialty practice if they
have achieved certification or accreditation as such from a national
program, or they have achieved certification or accreditation as
[[Page 53655]]
such from a regional or state program, private payer or other body that
certifies at least 500 or more practices for patient-centered medical
home accreditation or comparable specialty practice certification (81
FR 77180). Examples of nationally recognized accredited patient-
centered medical homes are: (1) The Accreditation Association for
Ambulatory Health Care; (2) the National Committee for Quality
Assurance (NCQA) Patient-Centered Medical Home; (3) The Joint
Commission Designation; or (4) the Utilization Review Accreditation
Commission (URAC) (81 FR 77180). We finalized that the criteria for
being a nationally recognized accredited patient-centered medical home
are that it must be national in scope and must have evidence of being
used by a large number of medical organizations as the model for their
patient-centered medical home (81 FR 77180). We also provided full
credit for the improvement activities performance category for a MIPS
eligible clinician or group that has received certification or
accreditation as a patient-centered medical home or comparable
specialty practice from a national program or from a regional or state
program, private payer or other body that administers patient-centered
medical home accreditation and certifies 500 or more practices for
patient-centered medical home accreditation or comparable specialty
practice certification (81 FR 77180). Once a MIPS eligible clinician or
group is certified or recognized as a patient-centered medical home or
comparable specialty practice, those clinicians or groups are given the
full improvement activities score (for example, 40 points) (81 FR
77180). This policy specifically applies to MIPS eligible groups;
individual MIPS eligible clinicians may still attest that their
practice is part of a patient-centered medical home or comparable
specialty practice as established for the transition year in the CY
2017 Quality Payment Program final rule (81 FR 77189).
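For illustrative purposes only (not regulatory text), the following minimal Python sketch expresses the recognition-source criterion described above, under which a certifying body qualifies if it is a national program or a regional or state program, private payer, or other body that certifies 500 or more practices (81 FR 77180); the function name and parameters are hypothetical.

def recognition_source_qualifies(is_national_program, practices_certified=0):
    """Return True when a certifying body can confer patient-centered
    medical home or comparable specialty practice recognition for full
    improvement activities credit: either a national program, or a
    regional/state program, private payer, or other body certifying at
    least 500 practices (81 FR 77180)."""
    return is_national_program or practices_certified >= 500

# Examples: a national program qualifies; a regional program qualifies
# only if it certifies 500 or more practices.
assert recognition_source_qualifies(True)
assert recognition_source_qualifies(False, practices_certified=650)
assert not recognition_source_qualifies(False, practices_certified=120)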
Final Action: After consideration of the public comments received,
we are finalizing our proposals with clarification. Specifically, we
are finalizing that for the 2020 MIPS payment year and future years, to
receive full credit as a certified or recognized patient-centered
medical home or comparable specialty practice, at least 50 percent of
the practice sites within the TIN must be recognized as a patient-
centered medical home or comparable specialty practice. We are
clarifying that a practice site is the physical location where
services are delivered. We are also finalizing the addition of Sec.
414.1380(b)(3)(x) to reflect these changes.
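For illustrative purposes only (not regulatory text), the following minimal Python sketch applies the finalized 50 percent practice-site threshold; the practice address stands in for the PECOS practice address field used to identify a practice site, and the data structure and function name are hypothetical.

def tin_receives_full_credit(sites):
    """Return True when at least 50 percent of the practice sites within
    a TIN are recognized as a patient-centered medical home or comparable
    specialty practice, the test finalized for the 2020 MIPS payment year
    and future years. Each element of ``sites`` is a tuple of
    (practice_address, is_recognized), where the practice address stands
    in for the PECOS practice address field identifying the physical
    location where services are delivered."""
    sites = list(sites)
    if not sites:
        return False
    recognized = sum(1 for _, is_recognized in sites if is_recognized)
    return recognized / len(sites) >= 0.5

# Example: 3 of 5 practice sites recognized, so the TIN would receive
# full improvement activities credit under the finalized threshold.
example_tin = [
    ("100 Main St", True),
    ("200 Oak Ave", True),
    ("300 Elm Rd", True),
    ("400 Pine Ct", False),
    ("500 Lake Dr", False),
]
assert tin_receives_full_credit(example_tin)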
(A) CPC+
In the CY 2018 Quality Payment Program proposed rule (82 FR 30054)
we stated that we have determined that the Comprehensive Primary Care
Plus (CPC+) APM design satisfies the requirements to be designated as a
medical home model, as defined in Sec. 414.1305. Therefore, under the
policy finalized at 81 FR 77178, which states that a practice will be
recognized as a patient-centered medical home if it is a nationally
recognized accredited patient-centered medical home, a Medicaid Medical
Home Model, or a Medical Home Model, the CPC+ APM is also a certified
or recognized patient-centered
medical home for purposes of the improvement activities performance
category. We have also determined that the CPC+ APM meets the criteria
to be an Advanced APM. We refer readers to https://qpp.cms.gov/docs/QPP_Advanced_APMs_in_2017.pdf for more information. Participating CPC+
practices in the Model must adopt, at a minimum, the certified health
IT needed to meet the certified EHR technology (CEHRT) definition at
Sec. 414.1305. In addition, participating CPC+ practices receive
payments for covered professional services based on quality measures
comparable to those used in the quality performance category of MIPS,
and they bear more than a nominal amount of financial risk for monetary
losses as described at Sec. 414.1415.
We recognized the possibility that certain practices that applied
to participate in Round 2 of the CPC+ APM, but were not chosen to
participate in the model, could potentially be randomized into a CPC+
control group. The control group practices would meet all of the same
eligibility requirements as the CPC+ participating practices (also
known as the ``intervention group'') but the control group would not
``participate'' in the APM (for example, undertake the CPC+ care
delivery activities such as providing 24/7 clinician access, or empanel
attributed Medicare beneficiaries) or receive any of the CPC+ payments.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30015), we
discussed that we believe MIPS eligible clinicians, who are
participating in the CPC+ APM, whether actively in the intervention
group or as part of the control group, should therefore receive full
credit for the improvement activities performance category.
Accordingly, in the CY 2018 Quality Payment Program proposed rule
(82 FR 30054 through 30055), we proposed that MIPS eligible clinicians
in practices randomized to the control group in the CPC+ APM would
receive full credit as a medical home model, and therefore, a certified
patient-centered medical home, for the improvement activities
performance category. In other words, MIPS eligible clinicians who
attest that they are in practices that have been randomized to the
control group in the CPC+ APM would receive full credit for the
improvement activities performance category for each performance period
in which they are on the Practitioner Roster, the official list of
eligible clinicians participating in a practice in the CPC+ control
group (82 FR 30054 through 30055).
We invited public comment on our proposal. The following is a
summary of public comments received on this proposal and our response.
Comment: Several commenters stated that they support recognizing
CPC+ control group participants as participating in a medical home
model and receiving full credit for the improvement activities
performance category. The commenters noted that the focus of these
practices should be on developing a balance of primary care- and
specialty care-focused high-weighted improvement activities. One
commenter stated that the requirements of CPC+ are such that there
would be a guarantee that participants are carrying out true practice
improvement activities focused on patient-centered care.
Response: We thank commenters for their support. As an update, CMS
has not randomized any practices that will begin participation in CPC+
in 2018 into a control group. Because we have not randomized any
practices into a control group in CPC+ Round 2, we are not finalizing
our proposal.
Final Action: After consideration of the public comments we
received and developments in the CPC+ Model, we are not finalizing our
proposal as discussed above.
(c) Required Period of Time for Performing an Activity
In the CY 2017 Quality Payment Program final rule (81 FR 77186), we
specified at Sec. 414.1360 that MIPS eligible clinicians or groups
must perform improvement activities for at least 90 consecutive days
during the performance period for improvement activities performance
category credit. Activities, where applicable, may be continuing (that
is, could have started prior to the performance period and are
continuing) or be adopted in the
[[Page 53656]]
performance period as long as an activity is being performed for at
least 90 days during the performance period. In the CY 2018 Quality
Payment Program proposed rule (82 FR 30055), we did not propose any
changes to the required period of time for performing an activity for
the improvement activities performance category.
We also refer readers to section II.C.4.d. of this final rule with
comment period, where we are finalizing to generally apply our group
policies to virtual groups.
We received many comments on this topic and will take them into
consideration for future rulemaking.
(4) Application of Improvement Activities to Non-Patient Facing
Individual MIPS Eligible Clinicians and Groups
In the CY 2017 Quality Payment Program final rule (81 FR 77187), we
specified at Sec. 414.1380(b)(3)(vii) that for non-patient facing
individual MIPS eligible clinicians or groups, to achieve the highest
score one high-weighted or two medium-weighted improvement activities
are required. For these individual MIPS eligible clinicians and groups,
in order to achieve one-half of the highest score, one medium-weighted
improvement activity is required (81 FR 77187). In the CY 2018 Quality
Payment Program proposed rule (82 FR 30055), we did not propose any
changes to the application of improvement activities to non-patient
facing individual MIPS eligible clinicians and groups for the
improvement activities performance category.
We received a few comments on this topic and will take them into
consideration for future rulemaking.
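For illustrative purposes only (not regulatory text), the following minimal Python sketch is one reading of the scoring rule restated above for non-patient facing individual MIPS eligible clinicians and groups: one high-weighted or two medium-weighted activities earn the highest score, and one medium-weighted activity earns one-half of the highest score. The unit-counting approach and function name are illustrative assumptions, not the scoring methodology codified in regulation.

def non_patient_facing_ia_fraction(high_weighted, medium_weighted):
    """Fraction of the highest improvement activities score earned by a
    non-patient facing MIPS eligible clinician or group: one high-weighted
    or two medium-weighted activities yield the full score; one
    medium-weighted activity yields one-half of the full score."""
    # Illustrative reading: count a high-weighted activity as two units
    # and a medium-weighted activity as one unit; two units reach the
    # highest score, and credit is capped at the full score.
    units = 2 * high_weighted + medium_weighted
    return min(units / 2, 1.0)

assert non_patient_facing_ia_fraction(1, 0) == 1.0  # one high-weighted
assert non_patient_facing_ia_fraction(0, 2) == 1.0  # two medium-weighted
assert non_patient_facing_ia_fraction(0, 1) == 0.5  # one medium-weighted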
(5) Special Consideration for Small, Rural, or Health Professional
Shortage Areas Practices
In the CY 2017 Quality Payment Program final rule (81 FR 77188), we
finalized at Sec. 414.1380(b)(3)(vii) that one high-weighted or two
medium-weighted improvement activities are required for individual MIPS
eligible clinicians and groups that are small practices or located in
rural areas, or geographic HPSAs, to achieve full credit. In addition,
we specified at Sec. 414.1305 that a rural area means ZIP codes
designated as rural, using the most recent HRSA Area Health Resource
File data set available (81 FR 77012). Lastly, in the CY 2017 Quality
Payment Program final rule (81 FR 77539 through 77540), we codified the
following definitions at Sec. 414.1305: (1) Small practices is defined
to mean practices consisting of 15 or fewer eligible clinicians; and (2)
Health Professional Shortage Areas (HPSA) refers to areas as designated
under section 332(a)(1)(A) of the Public Health Service Act. In the CY
2018 Quality Payment Program proposed rule (82 FR 30055), we did not
propose any changes to the special consideration for small, rural, or
health professional shortage areas practices for the improvement
activities performance category.
We received many comments on this topic and will take them into
consideration for future rulemaking.
(6) Improvement Activities Subcategories
In the CY 2017 Quality Payment Program final rule (81 FR 77190), we
finalized at Sec. 414.1365 that the improvement activities performance
category will include the subcategories of activities provided at
section 1848(q)(2)(B)(iii) of the Act. In addition, we finalized (81 FR
77190) at Sec. 414.1365 the following additional subcategories:
Achieving Health Equity; Integrated Behavioral and Mental Health; and
Emergency Preparedness and Response. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30055), we did not propose any changes to
the improvement activities subcategories for the improvement activities
performance category.
We received a few comments on this topic and will take them into
consideration for future rulemaking.
(7) Improvement Activities Inventory
We refer readers to Table H in the Appendix of the CY 2017 Quality
Payment Program final rule (81 FR 77817) for our previously finalized
Improvement Activities Inventory for the transition year of MIPS and
future years. In this final rule with comment period, we are finalizing
updates to the Improvement Activities Inventory, formalizing the
process for adding new improvement activities to the Improvement
Activities Inventory, and finalizing the criteria for nominating new
improvement activities. These are discussed in detail below.
(a) Annual Call for Activities Process for Adding New Activities
(i) Transition Year
As discussed in the CY 2017 Quality Payment Program final rule (81
FR 77190), for the transition year of MIPS, we implemented the initial
Improvement Activities Inventory and took several steps to ensure it
was inclusive of activities in line with statutory and program
requirements. Prior to selecting the improvement activities, we
conducted background research. We interviewed high performing
organizations of all sizes, conducted an environmental scan to identify
existing models, activities, or measures that met all or part of the
improvement activities performance category requirements, including the
patient-centered medical homes, the Transforming Clinical Practice
Initiative (TCPI), CAHPS surveys, and AHRQ's Patient Safety
Organizations (81 FR 77190). In addition, we reviewed comments from the
CY 2016 PFS final rule with comment period (80 FR 70886), and those
received in response to the MIPS and APMs RFI published in the October
1, 2015 Federal Register (80 FR 59102, 59106 through 59107) regarding
the improvement activities performance category. The Improvement
Activities Inventory finalized in the CY 2017 Quality Payment Program
final rule (81 FR 77817 through 77831) in Table H of the Appendix, for
the transition year and future years, was compiled as a result of the
stakeholder input, an environmental scan, the MIPS and APMs RFI
comments, and subsequent working sessions with AHRQ and ONC and
additional communications with CDC, SAMHSA, and HRSA.
(ii) Year 2
For the Quality Payment Program Year 2, we provided an informal
process for submitting new improvement activities for potential
inclusion in the comprehensive Improvement Activities Inventory for the
Quality Payment Program Year 2 and future years through subregulatory
guidance (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/Annual-Call-for-Measures-and-Activities-for-MIPS_Overview-Factsheet.pdf). As part of this informal
process, we solicited and received input from various MIPS eligible
clinicians and organizations suggesting possible new activities via a
nomination form that was posted on the CMS Web site at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/CallForMeasures.html. These nominations were
vetted by an internal CMS review panel that conferred with other
federal partners. New activities or modifications to existing
activities were proposed in the CY 2018 Quality Payment Program
proposed rule (82 FR 30479 through 30500) Improvement Activities
Inventory in Tables F and G of the Appendix for Year 2 of the MIPS
program. We refer readers to the CY 2017 Quality Payment Program final
rule (81 FR 77817 through 77831) in
[[Page 53657]]
Table H of the Appendix, for the transition year and future years and
Tables F and G of the Appendix of this final rule with comment period
for our finalized Improvement Activities Inventory for Year 2 and
future years of MIPS.
(iii) Year 3 and Future Years
For the Quality Payment Program Year 3 and future years, in the CY
2018 Quality Payment Program proposed rule (82 FR 30055), we proposed
to formalize an Annual Call for Activities process for adding possible
new activities or providing modifications to the current activities in
the Improvement Activities Inventory. We believe this is a way to
engage eligible clinician organizations and other relevant
stakeholders, including beneficiaries, in the identification and
submission of improvement activities for consideration. Specifically,
we proposed that individual MIPS eligible clinicians or groups and
other relevant stakeholders may recommend activities for potential
inclusion in the Improvement Activities Inventory via a nomination form
similar to the one utilized in the Year 2 of MIPS informal Annual
Call for Activities, found on the Quality Payment Program Web site at
www.qpp.cms.gov, as discussed above. As part of this formalized
process, individual MIPS eligible clinicians, groups, and other
relevant stakeholders would be able to nominate new improvement
activities that we may consider adding to the Improvement Activities
Inventory. Individual MIPS eligible clinicians and groups and relevant
stakeholders would be required to provide an explanation, via the
nomination form, of how the improvement activity meets all the relevant
criteria proposed in the CY 2018 Quality Payment Program proposed rule
(82 FR 30055 through 30056) and finalized below in section
II.C.6.e.(7)(b) of this final rule with comment period.
We refer readers to the Improvement Activities Inventory in Tables
F and G of the Appendix of this final rule with comment period where we
are finalizing new activities and changes to existing activities, some
with modification. We invited public comment on our proposal to
formalize the Annual Call for Activities process for the Quality
Payment Program Year 3 and future years.
Comment: Several commenters supported the proposal to formalize an
Annual Call for Activities process to facilitate the solicitation of
new improvement activities. The commenters supported CMS's criteria for
improvement activities that continue to follow the priorities of the
National Quality Strategy. In addition, other commenters noted that the
addition of new improvement activities creates more opportunity for
clinicians to select a suite of activities that further a particular
improvement goal, rather than choosing several discrete activities,
which together may not move the practice toward transformation. Several
commenters supported the inclusion of improvement activities that
demonstrate the delivery of patient and family centered care.
Response: We thank the commenters for their support.
Comment: One commenter suggested that quality and practice
transformation standards such as the National Committee for Quality
Assurance (NCQA) patient-centered medical home recognition should be
the basis for reporting and validating improvement activities.
Response: We are considering the standards for reporting and
validation, given that each national accrediting organization has
different reporting requirements and utilizes different standards to
evaluate their respective patient-centered medical home recognition
programs. Although the AHRQ patient-centered medical home definition
(https://pcmh.ahrq.gov/page/defining-pcmh) identified national
accrediting organizations' patient-centered medical home models which
align around recognized functions, their standards and activities for
evaluation of each element may be different. Likewise, the data
collected and maintained by each accrediting organization may be
updated during different time frames and assess or evaluate performance
elements using different methodologies, presenting challenges to
standardizing validation that would need to be addressed through
further research. We are evaluating the technical feasibility of having
an external entity report and potentially validate improvement
activities in the future. We refer readers to the Quality Payment
Program Web site Resource Library at https://qpp.cms.gov/ (MIPS Data
Validation Criteria), which provides the current expected data
validation information.
Comment: Some commenters also encouraged CMS to ensure that
accepted improvement activities are aligned with measures for the other
performance categories.
Response: We agree that it is important to create a program in
which the performance categories are aligned as much as possible. We
will continue to identify those improvement activities that are also
eligible for the advancing care information performance category in the
Improvement Activities Inventory. We encourage stakeholders to submit
new improvement activities and modifications to existing improvement
activities that help to align performance categories through the Annual
Call for Activities.
Comment: Commenters also encouraged CMS to ensure that the process
is transparent, urged CMS to continue to be flexible and include as
many proposed improvement activities on the final list as possible, and
urged CMS to create more explicit inclusion criteria, which would
further streamline the process of hospitals identifying the broader
activity to which the discrete activity belongs. A few commenters
expressed concern that improvement activities that were submitted were
not accepted and urged CMS to be more transparent in the manner in
which they make decisions about: (1) Which improvement activities are
included and not included in the inventory, and (2) the weighting of
improvement activities. In addition, they urged CMS to provide
additional rationale to submitters when their recommended improvement
activities are not accepted and engage specialists and non-specialists
equally to select improvement activities for inclusion in the
Inventory.
Response: As we have developed the Improvement Activity Inventory,
we have strived to be flexible and have accepted as many improvement
activities as possible that are appropriate. As we work to further
develop the Annual Call for Activities process, we intend to be as
transparent as feasible. In the CY 2017 Quality Payment final rule (81
FR 77190), we discussed some guidelines by which improvement activities
are selected based on a set of criteria. We note that the Annual Call
for Activities that was held in Year 2, for improvement activities that
will be applicable for the 2018 performance period, was an informal
process. We formally proposed criteria in the CY 2018 QPP proposed rule
(82 FR 30055 through 30056) and are finalizing them in section
II.C.6.e.(7)(b) of this final rule with comment period. We refer
readers to section II.C.6.e.(7)(b) of this final rule with comment
period.
We will take the commenters' feedback into consideration as we work
to refine the Annual Call for Activities process for future years.
Comment: Another commenter recommended that CMS prioritize
additional modifications to existing improvement activities and adopt
new
[[Page 53658]]
improvement activities on a more gradual basis.
Response: We do not disagree that we should prioritize
modifications to the current improvement activities, but we believe new
and modified improvement activities are both valuable. We must
balance burden with including a sufficient number and variety of
improvement activities in the Inventory so that all MIPS eligible
clinicians and groups have relevant activities to select. However, we
are mindful of adopting new activities gradually; the Improvement
Activities Inventory has not grown by more than 20 percent for the 2018
performance period.
Comment: Several commenters supported the addition of improvement
activities for hospitals, but recommended that CMS work with partners,
clinicians, researchers, and other stakeholders to develop a broad set
of activities to fill existing gaps in the program. Some commenters
expressed concern that the Inventory is too heavily focused on primary
care and urged us to work closely with specialty societies to solicit
and develop additional improvement activities.
Response: We consistently engage a variety of groups within
different specialties via webinars and listening sessions to get
improvement activity feedback. We do not agree that the Improvement
Activities Inventory is primary care focused as there are many
activities specialists may perform. As discussed in the CY 2017 Quality
Payment Program final rule (81 FR 77190), we wanted to create a broad
list of activities that can be used by multiple practice types to
demonstrate improvement activities and activities that may lend
themselves to being measured for improvement in future years. We took
several steps to ensure the initial improvement activities inventory is
inclusive of activities in line with the statutory language. We had
numerous interviews with high-performing organizations of all sizes and
conducted an environmental scan to identify existing models,
activities, or measures that met all or part of the improvement
activities performance category requirements. We also encourage specialties to
submit new improvement activities and modifications to existing
improvement activities through the Annual Call for Activities.
Comment: Several commenters noted that it would make more sense to
reorganize and augment the Improvement Activities Inventory to align
explicitly with the requirements in MIPS, and for APMs. Several
commenters believed that improvement activities should be developed and
added that would support a practice's capacity to analyze its own
quality data and be prepared to share downside risk in order to
participate in an APM. The commenters encouraged CMS to align the
thresholds and reporting requirements across performance categories for
any of these overlapping activities, in order to reduce burden.
Response: Section 1848(q)(2)(B)(iii) of the Act specified
subcategories of improvement activities. However, we are working to
ensure that improvement activities align across the performance
categories and must balance burden with including a sufficient number
and variety of improvement activities in the Inventory so that all MIPS
eligible clinicians and groups have relevant activities to select, in
particular clinicians who do not participate in APMs, as we do not
want the Inventory to be exclusive to any one group. We encourage
stakeholders to submit new improvement activities and modifications to
existing improvement activities through the Annual Call for Activities.
Comment: One commenter encouraged CMS to specify improvement
activities for which a participant can use application programming
interfaces (APIs) to receive another advancing care information bonus
point. The commenter noted that doing so would further incentivize
clinicians to utilize the API functionality for health information
sharing with beneficiaries as part of patient engagement and care
coordination activities.
Response: We will take the commenter's suggestions for specifying
improvement activities that are eligible for a bonus in the advancing
care information performance category into consideration in future
rulemaking. We note that in the CY 2017 Quality Payment Program final
rule (81 FR 77182), we finalized a policy to allow MIPS eligible
clinicians to achieve a bonus in the advancing care information
performance category when they use functions included in CEHRT to
complete eligible activities from the improvement activities inventory,
and codified at Sec. 414.1380 that we would provide a designation
indicating which activities qualify for the advancing care information
bonus finalized. In addition, we refer readers to section II.E.5.g. of
this final rule with comment period for details on how improvement
activities using CEHRT relate to the objectives and measures of the
advancing care information and improvement activities performance
categories. We acknowledge the commenter's additional suggestions and
note that in addition to those functions included under the CEHRT
definition, we advised in the CY 2017 Quality Payment Program final
rule (81 FR 77199) that we may consider including additional ONC
certified health IT capabilities as part of activities within the
improvement activities inventory in future years. However, we are not
making any changes to our current approach for allowing MIPS eligible
clinicians to achieve a bonus in the advancing care information performance
category in this final rule with comment period.
Comment: Some commenters encouraged CMS to consider the role of
digital technologies in improving care and including related activities
as part of continual improvement activities for future consideration.
Some commenters supported the inclusion of telehealth-related
improvement activities in the Inventory and suggested that these be
high-weighted activities. A few commenters recommended that improvement
activities that are registry-focused be assigned a high-weight, or
alternatively, that CMS allow eligible clinicians who participate in a
registry and meet certain basic requirements to receive the maximum
score in this performance category.
Response: We acknowledge commenters' suggestions for considering
digital technologies and telehealth in improvement activities and their
weighting in the Improvement Activities Inventory. We note that we have
reserved high weighting for activities that directly address areas with
the greatest impact on beneficiary care, safety, health, and well-
being, as explained in the CY 2017 Quality Payment Program final rule
(81 FR 77194). We are not making any changes to this approach in this
final rule with comment period; however, we will take these commenters'
suggestions into consideration for future rulemaking. We also encourage
stakeholders to submit activities for consideration during our formal
Annual Call for Activities finalized in this final rule with comment
period, as suggestions regarding improvement activity type, and content
will be taken into consideration as part of that process. Finally, we
do not agree that clinicians who participate in a registry and meet
certain basic requirements should receive the maximum improvement
activities score at this time, as we do not have sufficient data to
determine what basic requirements might be sufficient to merit full
credit, or what impact such an approach would have across MIPS eligible
clinicians and groups.
Comment: One commenter noted that some of the proposed activities
exclude
[[Page 53659]]
clinicians who are not physicians from participation, and advised us to
be mindful of this going forward.
Response: We believe that this comprehensive Improvement Activities
Inventory includes a broad range of activities that can be used by
multiple clinician and practice types to demonstrate improvement
activities and activities that may lend themselves to being measured
for improvement in future years. We will take this concern into
consideration, however, as we craft future policy, and we encourage
stakeholders to submit new activities or suggestions for modifications
to existing activities for consideration during our Annual Call for
Activities.
Comment: One commenter urged CMS to accept input from stakeholders
regarding the weighting of several activities already included in the
inventory that are resource intensive and currently have a medium
weighting, and reconsider the weighting of these activities.
Response: We refer the commenter and readers to Tables F and G in
the Appendices of this final rule with comment period where we received
public input on the weighting of a number of existing improvement
activities. We previously stated that we believe high weighting should
be used for activities that directly address areas with the greatest
impact on beneficiary care, safety, health, and well-being, as
explained in the CY 2017 Quality Payment Program final rule (81 FR
77194). While we are not making any changes to this approach in this
final rule with comment period, we will take the commenter's
suggestions into consideration for future rulemaking. We also encourage
the commenter to submit new improvement activities, or recommendations for
modifications to existing activities (including weighting) to us for
consideration during the Annual Call for Activities.
Comment: Several commenters proposed additional concepts for
improvement activities for the Quality Payment Program Year 2,
including improvement activities that address participation in self-
assessment or ongoing learning activities; improving access to care;
engaging patients and families in practice governance; using telehealth
for patient interactions; and collaborating and data-sharing with
Regional Health Improvement Networks. Several comments were received
requesting various new improvement activities for inclusion in the
Improvement Activities Inventory.
Response: We thank the commenters for their suggestions. While the
informal process for nominating improvement activities for MIPS
Year 2 is now closed, we encourage stakeholders to submit new
improvement activities and modifications to existing improvement
activities through the upcoming Annual Call for Activities.
Final Action: After consideration of the public comments we
received, we are finalizing our proposal, as proposed, to formalize the
Annual Call for Activities process for Quality Payment Program Year 3
and future years, as discussed in this final rule with comment period.
(b) Criteria for Nominating New Improvement Activities for the Annual
Call for Activities
In the CY 2017 Quality Payment final rule (81 FR 77190), we
discussed guidelines for the selection of improvement activities. In
the CY 2018 Quality Payment Program proposed rule (82 FR 30055 and
30485), we formally proposed that for the Quality Payment Program Year
3 and future years, stakeholders would apply one or more of the
following criteria when submitting improvement activities in response
to the proposed formal Annual Call for Activities. We intend to also
use these criteria in selecting improvement activities for inclusion in
the program.
Relevance to an existing improvement activities
subcategory (or a proposed new subcategory);
Importance of an activity toward achieving improved
beneficiary health outcome;
Importance of an activity that could lead to improvement
in practice to reduce health care disparities;
Aligned with patient-centered medical homes;
Activities that may be considered for an advancing care
information bonus;
Representative of activities that multiple individual MIPS
eligible clinicians or groups could perform (for example, primary care,
specialty care);
Feasible to implement, recognizing importance in
minimizing burden, especially for small practices, practices in rural
areas, or in areas designated as geographic HPSAs by HRSA;
Evidence supports that an activity has a high probability
of contributing to improved beneficiary health outcomes; or
CMS is able to validate the activity.
We also noted in our proposal that in future rulemaking, activities
that overlap with other performance categories may be proposed to be
included if such activities support the key goals of the program.
We invited public comment on our proposal. The following is a
summary of the public comments received on the ``Criteria for
Nominating New Improvement Activities for the Annual Call for
Activities'' proposals and our responses.
Comment: Several commenters provided suggested additional selection
criteria: (1) Improvement activities should focus on meaningful
activities from the person and family's point of view, not structural
processes that do not improve clinical care; and (2) there should be
consideration for adding new activities that focus on identifying and
supporting the patient's family or personal caregiver. A few commenters
requested that CMS expand the definition of eligible activities to
include ``actions that reduce barriers to care,'' and to include
interpretation and transportation services explicitly.
Response: We acknowledge commenters' suggestions for additional
criteria, and in response to these comments, we are expanding the
proposed criteria to also include: (1) Improvement activities that
focus on meaningful actions from the person and family's point of view;
and (2) improvement activities that support the patient's family or
personal caregiver. We believe these are appropriate to add, because
they closely align with one of our MIPS strategic goals, to use a
patient-centered approach to program development that leads to better,
smarter, and healthier care.
In addition, we currently include several activities in the
Improvement Activities Inventory that address barriers to care, such as
IA_CC_16, Primary Care Physician and Behavioral Health Bilateral
Electronic Exchange of Information for Shared Patients, which rewards
primary care and behavioral health practices using the same electronic
health record system for shared patients or for exchanging information
bilaterally, and IA_PM_18, Provide Clinical-Community Linkages, which
rewards MIPS eligible clinicians engaging community health workers to
provide a comprehensive link to community resources through family-
based services focusing on success in health, education, and self-
sufficiency. However, we will consider criteria that address ``actions
that reduce barriers to care'' and those that identify interpretation
and transportation services as we craft future policies.
Final Action: After consideration of the public comments received,
we are finalizing with modification, for the Quality Payment Program
Year 3 and future years, that stakeholders should apply one or more of
the criteria when
[[Page 53660]]
submitting improvement activities in response to the Annual Call for
Activities. In addition to the criteria listed in the proposed rule for
nominating new improvement activities for the Annual Call for
Activities policy, we are modifying and expanding the proposed criteria
list to also include: (1) Improvement activities that focus on
meaningful actions from the person and family's point of view, and (2)
improvement activities that support the patient's family or personal
caregiver. The finalized list of criteria for submitting improvement
activities in response to the Annual Call for Activities is as follows:
Relevance to an existing improvement activities
subcategory (or a proposed new subcategory);
Importance of an activity toward achieving improved
beneficiary health outcome;
Importance of an activity that could lead to improvement
in practice to reduce health care disparities;
Aligned with patient-centered medical homes;
Focus on meaningful actions from the person and family's
point of view;
Support the patient's family or personal caregiver;
Activities that may be considered for an advancing care
information bonus;
Representative of activities that multiple individual MIPS
eligible clinicians or groups could perform (for example, primary care,
specialty care);
Feasible to implement, recognizing importance in
minimizing burden, especially for small practices, practices in rural
areas, or in areas designated as geographic HPSAs by HRSA;
Evidence supports that an activity has a high probability
of contributing to improved beneficiary health outcomes; or
CMS is able to validate the activity.
(c) Submission Timeline for Nominating New Improvement Activities for
the Annual Call for Activities
For the Quality Payment Program Year 2, we
provided an informal process for submitting new improvement activities
for potential inclusion in the comprehensive Improvement Activities
Inventory for the Quality Payment Program Year 2 and future years
through subregulatory guidance (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/Annual-Call-for-Measures-and-Activities-for-MIPS_Overview-Factsheet.pdf). As part
of this informal process, we solicited and received input from various
MIPS eligible clinicians and organizations suggesting possible new
activities via a nomination form that was posted on the CMS Web site at
https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/CallForMeasures.html.
During the informal process, we accepted nominations from February 16
through February 28, 2017.
It is our intention that the nomination and acceptance process for
improvement activities, to the best extent possible, parallel the
Annual Call for Measures process that is already conducted for MIPS
quality measures. We refer readers to the CY 2017 Quality Payment
Program final rule (81 FR 77147 through 77153) and section II.C.6.c.(1)
of this final rule with comment period for more information. Therefore,
aligned with this process, in the CY 2018 Quality Payment Program
proposed rule (82 FR 30056), we proposed to accept submissions for
prospective improvement activities at any time during the performance
period for the Annual Call for Activities and create an Improvement
Activities Under Review (IAUR) list which will be displayed on a CMS
Web site. For example, the CY 2019 performance period spans January 1,
2019 through December 31, 2019; therefore, the submission period for CY
2019 prospective improvement activities would be March 2, 2017 through
March 1, 2018. When a submission is received after the March 1
deadline, it will be included in the next performance period
activities cycle. We will consider the IAUR list when we make decisions on
which improvement activities to include in a future Improvement
Activities Inventory. We will analyze the IAUR list while considering
the criteria for inclusion of improvement activities as finalized in
section II.C.6.e.(7)(b) of this final rule with comment period.
In addition, in the CY 2018 Quality Payment Program proposed rule
(82 FR 30056), we proposed that for the formal Annual Call for
Activities, only activities submitted by March 1 would be considered
for inclusion in the Improvement Activities Inventory for the
performance periods occurring in the following calendar year. In other
words, we will accept improvement activities at any time throughout the
year, however, we will only consider those improvement activities that
are received by the March 1 deadline for the following performance
period. This proposal was slightly different than the Call for Measures
timeline. The Annual Call for Measures requires a 2-year implementation
timeline, because the measures being considered for inclusion in MIPS
undergo the pre-rulemaking process with review by the Measures
Application Partnership (MAP) (81 FR 77153). In order to decrease the
timeframe for activity implementation, we did not propose that
improvement activities also undergo MAP review. Our intention is that
while we will accept improvement activities at any time throughout
the year, we will close the Annual Call for Activities submissions by
March 1 before the applicable performance period, which would enable us
to propose the new improvement activities for adoption in the same
year's rulemaking cycle for implementation in the following year. When
a submission is received after the March 1 deadline, it
will be included in the next performance period activities
cycle. For example, an improvement activity submitted to the IAUR prior
to March 1, 2018 could be considered for performance periods beginning
in 2019. If an improvement activity is submitted on April 1,
2018, then the submission could be considered for performance periods
beginning in 2020.
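For illustrative purposes only (not regulatory text), the following minimal Python sketch captures the March 1 cutoff described above, treating a submission made on or before March 1 of a given year as eligible for performance periods beginning the following calendar year and a later submission as rolling to the next activities cycle; the function name and the inclusive treatment of March 1 are assumptions.

from datetime import date

def first_eligible_performance_year(submitted):
    """Map an Annual Call for Activities submission date to the first
    calendar year of performance periods for which the activity could be
    considered: on or before March 1 of year Y, performance periods
    beginning in Y + 1; after March 1 of year Y, performance periods
    beginning in Y + 2 (the next activities cycle)."""
    cutoff = date(submitted.year, 3, 1)
    return submitted.year + 1 if submitted <= cutoff else submitted.year + 2

# Worked examples consistent with the discussion above.
assert first_eligible_performance_year(date(2018, 2, 15)) == 2019
assert first_eligible_performance_year(date(2018, 4, 1)) == 2020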
In addition, in the CY 2018 Quality Payment Program proposed rule
(82 FR 30056), we proposed that we would add new improvement activities
to the inventory through notice-and-comment rulemaking.
We invited public comment on our proposals.
Final Action: We did not receive any public comments on these
proposals. Therefore, we are finalizing our proposals, as proposed, to:
(1) Accept submissions for prospective improvement activities at any
time during the performance period for the Annual Call for Activities
and create an Improvement Activities under Review (IAUR) list; (2) only
consider prospective activities submitted by March 1 for inclusion in
the Improvement Activities Inventory for the performance periods
occurring in the following calendar year; and (3) add new improvement
activities to the inventory through notice-and-comment rulemaking.
(8) Removal of Improvement Activities
In future years, we anticipate developing a process and
establishing criteria for identifying activities for removal from the
Improvement Activities Inventory through the Annual Call for Activities
process. We anticipate proposing these requirements in the future
through notice-and-comment rulemaking. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30056), we invited public comments on what
criteria should be
[[Page 53661]]
used to identify improvement activities for removal from the
Improvement Activities Inventory.
We received a few comments on this topic and will take them into
consideration for future rulemaking.
(9) Approach for Adding New Subcategories
In the CY 2017 Quality Payment Program final rule (81 FR 77197), we
finalized the following criteria for adding a new subcategory to the
improvement activities performance category:
The new subcategory represents an area that could
highlight improved beneficiary health outcomes, patient engagement and
safety based on evidence.
The new subcategory has a designated number of activities
that meet the criteria for an improvement activity and cannot be
classified under the existing subcategories.
Newly identified subcategories would contribute to
improvement in patient care practices or improvement in performance on
quality measures and cost performance categories.
In the CY 2018 Quality Payment Program proposed rule at (82 FR
30056), while we did not propose any changes to the approach for adding
new subcategories for the improvement activities performance category,
we did propose that in future years of the Quality Payment Program, we
would add new improvement activities subcategories through notice-and-
comment rulemaking. We did not receive any comments on this proposal
and are finalizing, as proposed, that in future years of the Quality
Payment Program, we will add new improvement activities subcategories
through notice-and-comment rulemaking.
In the CY 2018 Quality Payment Program proposed rule at (82 FR
30056), we also sought comments on new improvement activities
subcategories for future consideration. In particular, in the CY 2017
Quality Payment Program final rule (81 FR 77194), several stakeholders
suggested that a separate subcategory for improvement activities
specifically related to health IT would make it easier for MIPS
eligible clinicians and vendors to understand and earn points toward
their final score through the use of health IT. Such a health IT
subcategory could include only improvement activities that are
specifically related to the advancing care information performance
category measures and allow MIPS eligible clinicians to earn credit in
the improvement activities performance category, while receiving a
bonus in the advancing care information performance category as well.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30056), we
sought suggestions on how a health IT subcategory within the
improvement activities performance category could be structured to
provide MIPS eligible clinicians with flexible opportunities to gain
experience in using CEHRT and other health IT to improve their
practice.
We received many comments on this topic and will take these into
consideration for future rulemaking.
(10) CMS Study on Burdens Associated With Reporting Quality Measures
(a) Background
In the CY 2017 Quality Payment Program final rule (81 FR 77195), we
finalized specifics regarding the CMS Study on Improvement Activities
and Measurement including the study purpose, study participation credit
and requirements, and the study procedure. In the CY 2018 Quality
Payment Program proposed rule (82 FR 30056), we proposed to modify the
name of the study to the ``CMS Study on Burdens Associated with
Reporting Quality Measures'' to more accurately reflect the purpose of
the study. The study assesses clinician burden and data submission
errors associated with the collection and submission of clinician
quality measures for MIPS, enrolling groups of different sizes and
individuals in both rural and non-rural settings and also different
specialties. We previously finalized that study participants receive
full credit in the improvement activities performance category if they
successfully elect, participate, and submit data to CMS for the full
calendar year (81 FR 77196). In the CY 2017 final rule (81 FR 77195
through 77197), we requested comments on the study and received
generally supportive feedback for the study.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30056
through 30057), we did not propose any changes to the study purpose.
However, we proposed changes to the study participation credit and
requirements for sample size, how the study sample is categorized into
groups, and the study procedures for the frequency of surveys, focus
groups, and quality data submission. These proposals are discussed in
more detail below.
(b) Sample Size
In addition to performing descriptive statistics to compare the
trends in errors and burden between study years 2017 and 2018 as
previously finalized (81 FR 77196), we would like to perform a more
rigorous statistical analysis with the 2018 data, which will require a
larger sample size. Therefore, in the CY 2018 Quality Payment Program
proposed rule (82 FR 30056), we proposed increasing the sample size for
CY 2018 and beyond to provide the minimum sample needed to obtain a
statistically significant result with adequate power; that is, we
proposed increasing the number of study participants to the minimum
needed to draw meaningful and valid conclusions from the study. This is
described in more detail below. The sample size, as finalized in the CY
2017 Quality Payment Program final rule (81 FR 77196), for performance
periods occurring in CY 2017 consisted of 42 MIPS groups, categorized
by MIPS criteria into the following seven categories:
10 urban individual or groups of <3 eligible clinicians.
10 rural individual or groups of <3 eligible clinicians.
10 groups of 3-8 eligible clinicians.
5 groups of 8-20 eligible clinicians.
3 groups of 20-100 eligible clinicians.
2 groups of 100 or greater eligible clinicians.
2 specialty groups.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30057),
we proposed to increase the sample size for the performance periods
occurring in CY 2018 to a minimum of:
20 urban individual or groups of <3 eligible clinicians--
(broken down into 10 individuals and 10 groups).
20 rural individual or groups of <3 eligible clinicians--
(broken down into 10 individuals and 10 groups).
10 groups of 3-8 eligible clinicians.
10 groups of 8-20 eligible clinicians.
10 groups of 20-100 eligible clinicians.
10 groups of 100 or greater eligible clinicians.
6 groups of >20 eligible clinicians reporting as
individuals--(broken down into 3 urban and 3 rural).
6 specialty groups--(broken down into 3 reporting
individually and 3 reporting as a group).
Up to 10 non-MIPS eligible clinicians reporting as a group
or individual (any number of individuals and any group size).
(c) Study Procedures
(i) Frequency of Survey and Focus Group
In the CY 2018 Quality Payment Program proposed rule (82 FR 30057),
we also proposed changes to the study procedures. In the CY 2017
Quality
[[Page 53662]]
Payment Program final rule (81 FR 77196), we finalized that for the
transition year of MIPS, study participants were required to attend a
monthly focus group to share lessons learned in submitting quality data
along with providing survey feedback to monitor effectiveness. However,
an individual MIPS eligible clinician or group who chooses to report
all 6 measures within a period of 90 days may not need to be a part of
all of the focus groups and survey sessions after their first focus
group and survey following the measurement data submission (81 FR
77196). This is because they may have nothing new to contribute in
terms of discussion of errors or clinician burdens (81 FR 77196). This
also applied to MIPS eligible clinicians that submitted only three MIPS
measures within the performance period, if all three measures were
submitted within the 90-day period or in one submission (81 FR 77196).
We finalized that
all study participants would participate in surveys and focus group
meetings at least once after each measures data submission (81 FR
77196). For those who elect to report data for a 90-day period, we made
further engagement optional (81 FR 77196).
To prevent study participants from incurring significantly more
administrative work than other MIPS eligible clinicians who are not
participating in the study, in the CY 2018 Quality Payment Program
proposed rule (82 FR 30057), we proposed that, for Quality Payment
Program Year 2 and future years, study participants would be required
to attend up to four monthly surveys and focus group sessions
throughout the year, and certain study participants would be able to
attend less frequently. In other words, study participants would be
required to attend a maximum of four surveys and focus group sessions
throughout the year.
(ii) Data Submission
We previously required study measurement data to be collected at
baseline and then at every 3 months (quarterly basis) afterwards for
the duration of the calendar year (81 FR 77196). We also finalized a
minimum requirement of three MIPS quality measures, four times within
the year (81 FR 77196). We stated that we believe this is inconsistent
with clinicians reporting a full year's data, as we believe some study
participants may choose to submit data for all measures at one time, or
alternatively, may choose to submit data up to six times during the 1-
year period (82 FR 30057).
As a result, in the CY 2018 Quality Payment Program proposed rule
(82 FR 30057), we proposed, for the Quality Payment Program Year 2 and
future years, to offer study participants flexibility in their
submissions such that they could submit all their quality measures data
at once, as allowed in the MIPS program, and participate in study
surveys and focus groups, while still earning improvement activities
credit.
We invited public comments on our proposals.
Comment: A few commenters supported the proposal to examine the
challenges and costs of reporting quality measures by expanding the
study and aptly renaming it the ``CMS Study on Burdens Associated with
Reporting Quality Measures.''
Response: We thank the commenters for their support.
Comment: A few commenters encouraged CMS to automate the process to
reduce administrative burden and publicly share the survey results.
Response: We note the suggestion to automate the process (that is,
the measure data submission process) to reduce administrative burden,
and will take this into consideration as we craft future policies. We
also note that the current study survey and focus group have open ended
questions that ask study participants about their recommendations and
opinions on the ease and complexities of technology on the process.
Furthermore, we intend to make the results of the study public
immediately after the end of the study year CY 2018 (summer 2019) to
all study participants, relevant stakeholders, and on the CMS Web site.
Comment: A few commenters suggested that CMS consider additional
study inclusion factors, such as: Practices and clinicians located in
both urban and rural HPSAs and clinicians who serve a high proportion
of low-income patients and patients of color.
Response: The study selection criteria already include, but are not
limited to, rural, urban, and geographical location (based on
demographic characteristics) (81 FR 77195). For study years CY 2017 and
CY 2018, we are able to analyze and compare clinicians who serve in
both rural and urban HPSAs, based on participants' ZIP codes collected
during recruitment. Additionally, we will consider for future
rulemaking further efforts to include proportionate HPSAs and minority
patients in the recruitment and screening of the study participants.
Final Action: After consideration of the public comments we
received, we are finalizing our proposals for CY 2018 and beyond as
proposed, to: (1) Modify the name of the study to the ``CMS Study on
Burdens Associated with Reporting Quality Measures;'' (2) increase the
sample size for CY 2018 and beyond as discussed previously in this
final rule with comment period; (3) require study participants to
attend up to four monthly surveys and focus group sessions throughout
the year; and (4) for the Quality Payment Program Year 2 and
future years, offer study participants flexibility in their submissions
such that they can submit all their quality measures data at once and
participate in study surveys and focus groups while still earning
improvement activities credit.
It must be noted that although the aforementioned activities
constitute an information collection request as defined in the
implementing regulations of the Paperwork Reduction Act (PRA) of 1995
(5 CFR 1320), the associated burden is exempt from application of the
Paperwork Reduction Act. Specifically, section 1848(s)(7) of the Act,
as added by section 102 of the MACRA (Pub. L. 114-10) states that
Chapter 35 of title 44, United States Code, shall not apply to the
collection of information for the development of quality measures. Our
goals for new measures are to develop high quality, low cost measures
that are meaningful, easily understandable and operable, and that
reliably and validly measure what they purport to measure. This study
shall inform us of the root causes of the data collection and data
submission burdens and challenges that hinder clinicians' accurate and
timely quality measurement activities. In addition, this study will
inform us of the characteristic attributes that our new measures must
possess to accurately capture and measure the priorities and gaps MACRA
aims for, as described in the Quality Measures Development Plan.\2\
This study, therefore, serves as the initial stage of developing new
measures and also adapting existing measures. We believe that
understanding clinicians' challenges and skepticisms, and especially
understanding the factors that undermine the optimal functioning and
effectiveness of quality measures, are requisites of developing
measures that not only measure what they purport to measure but also
are user friendly and understandable for frontline clinicians--our main
stakeholders in measure development. This will lead to the creation of
practice-derived, tested measures that reduce burden and create a
culture of continuous improvement in measure development.
[[Page 53663]]
f. Advancing Care Information Performance Category
(1) Background
Section 1848(q)(2)(A) of the Act includes the meaningful use of
CEHRT as a performance category under the MIPS. We refer to this
performance category as the advancing care information performance
category, and it is reported by MIPS eligible clinicians as part of the
overall MIPS program. As required by sections 1848(q)(2) and (5) of the
Act, the four performance categories of the MIPS shall be used in
determining the MIPS final score for each MIPS eligible clinician. In
general, MIPS eligible clinicians will be evaluated under all four of
the MIPS performance categories, including the advancing care
information performance category.
(2) Scoring
Section 1848(q)(5)(E)(i)(IV) of the Act states that 25 percent of
the MIPS final score shall be based on performance for the advancing
care information performance category. We established at Sec.
414.1380(b)(4) that the score for the advancing care information
performance category would be comprised of a base score, performance
score, and potential bonus points for reporting on certain measures and
activities. For further explanation of our scoring policies for the
advancing care information performance category, we refer readers to 81
FR 77216-77227.
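    As an illustrative summary only (this sketch is not regulatory
text; the function name and the assumption that the category score is
capped at 100 percent are inferred from the CY 2017 scoring policies
cited above), the composition of the advancing care information
performance category score can be sketched as follows:

    def advancing_care_information_score(base_score: float,
                                         performance_score: float,
                                         bonus_points: float) -> float:
        # Category score = base score + performance score + bonus
        # points, capped at 100 percent of the category (assumed cap
        # for illustration; see 81 FR 77216 through 77227).
        return min(base_score + performance_score + bonus_points, 100.0)

    # Under section 1848(q)(5)(E)(i)(IV) of the Act, this category
    # accounts for 25 percent of the MIPS final score.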
(a) Base Score
For the CY 2018 performance period, we did not propose any changes
to the base score methodology as established in the CY 2017 Quality
Payment Program final rule (81 FR 77217-77223).
(b) Performance Score
In the CY 2017 Quality Payment Program final rule (81 FR 77223
through 77226), we finalized that MIPS eligible clinicians can earn 10
percentage points in the performance score for meeting the Immunization
Registry Reporting Measure. We proposed to modify our policy for
scoring the Public Health and Clinical Data Registry Reporting
Objective beginning with the performance period in CY 2018 because we
have learned that there are areas of the country where immunization
registries are not available, and we did not intend to disadvantage
MIPS eligible clinicians practicing in those areas. We proposed that if
a
MIPS eligible clinician fulfills the Immunization Registry Reporting
Measure, the MIPS eligible clinician would earn 10 percentage points in
the performance score. If a MIPS eligible clinician cannot fulfill the
Immunization Registry Reporting Measure, we proposed that the MIPS
eligible clinician could earn 5 percentage points in the performance
score for each public health agency or clinical data registry to which
the clinician reports for the following measures, up to a maximum of 10
percentage points: Syndromic Surveillance Reporting; Electronic Case
Reporting; Public Health Registry Reporting; and Clinical Data Registry
Reporting. A MIPS eligible clinician who chooses to report to more than
one public health agency or clinical data registry may receive credit
in the performance score for the submission to more than one agency or
registry; however, the MIPS eligible clinician would not earn more than
a total of 10 percentage points for such reporting.
We further proposed similar flexibility for MIPS eligible
clinicians who choose to report the measures specified for the Public
Health Reporting Objective of the 2018 Advancing Care Information
Transition Objective and Measure set. (In section II.C.6.f.(6)(b) of
the proposed rule, we proposed to allow MIPS eligible clinicians to
report using the 2018 Advancing Care Information Transition Objectives
and Measures in 2018.) We proposed that if a MIPS eligible clinician
fulfills the Immunization Registry Reporting Measure, the MIPS eligible
clinician would earn 10 percentage points in the performance score. If
a MIPS eligible clinician cannot fulfill the Immunization Registry
Reporting Measure, we proposed that the MIPS eligible clinician could
earn 5 percentage points in the performance score for each public
health agency or specialized registry to which the clinician reports
for the following measures, up to a maximum of 10 percentage points:
Syndromic Surveillance Reporting; Specialized Registry Reporting. A
MIPS eligible clinician who chooses to report to more than one
specialized registry or public health agency to submit syndromic
surveillance data may earn 5 percentage points in the performance score
for reporting to each one, up to a maximum of 10 percentage points.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters supported the flexibility that we proposed to
complete the objective and earn points in the performance score. Other
commenters stated that they appreciate the options for earning a
performance score. Most commenters appreciated our intent not to
disadvantage those clinicians without access to immunization
registries; however, they stated our proposal to award 5 percentage
points for reporting to each additional public health agency or
clinical data registry minimized the value of reporting to other types
of public health information beyond immunization information.
Commenters suggested that we award 10 percentage points in the
performance score for reporting to a single agency or registry.
Response: We appreciate the suggestion and did not intend to
disadvantage MIPS eligible clinicians who lack access to immunization
registries or do not administer immunizations. The Public Health and
Clinical Data Registry Reporting Objective focuses on the importance of
the ongoing lines of communication that should exist between MIPS
eligible clinicians and public health agencies and clinical data
registries, and we want to encourage reporting to them. These
registries play an important part in monitoring the health status of
patients and some, for example syndromic surveillance registries, help
in the early detection of outbreaks, which is critical to public health
overall. For these reasons, we are finalizing our proposal with
modification so that a MIPS eligible clinician may earn 10 percentage
points in the performance score for reporting to any single public
health agency or clinical data registry, regardless of whether an
immunization registry is available to the clinician.
Comment: One commenter requested that we clarify that the exclusion
for Immunization Registry Reporting is still available to those who do
not routinely provide vaccinations.
Response: No exclusion is available for the Immunization Registry
Reporting Measure for the advancing care information performance
category. The Immunization Registry Reporting Measure is part of the
performance score in which clinicians can select which measures they
wish to report. If they do not provide vaccinations, then they would
not report on the Immunization Registry Reporting Measure. The final
policy we are adopting should provide flexibility for clinicians who do
not administer immunizations by allowing them to earn performance score
points for public health reporting that is not related to
immunizations.
Final Action: After consideration of the public comments and for
the reasons noted above, we are finalizing our proposal with
modification. Rather than awarding 5 percentage points in the
performance score for each public health agency or clinical data
registry that a MIPS eligible clinician reports to (for a maximum of 10
percentage
[[Page 53664]]
points), we are finalizing that a MIPS eligible clinician may earn 10
percentage points in the performance score for reporting to any single
public health agency or clinical data registry to meet any of the
measures associated with the Public Health and Clinical Data Registry
Reporting Objective (or any of the measures associated with the Public
Health Reporting Objective of the 2018 Advancing Care Information
Transition Objectives and Measures, for clinicians who choose to report
on those measures), regardless of whether an immunization registry is
available to the clinician. A MIPS eligible clinician can earn only 10
percentage points in the performance score under this policy, no matter
how many agencies or registries they report to. This policy will apply
beginning with the 2018 performance period.
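    For illustration only (hypothetical function name; not regulatory
text), the finalized performance score policy for this objective
reduces to the following rule:

    def public_health_performance_points(registries_reported_to: int) -> int:
        # A MIPS eligible clinician earns 10 percentage points in the
        # performance score for reporting to any single public health
        # agency or clinical data registry, and no more than 10 points
        # total regardless of how many agencies or registries they
        # report to.
        return 10 if registries_reported_to >= 1 else 0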
(c) Bonus Score
In the CY 2017 Quality Payment Program final rule (81 FR 77220
through 77226), for the Public Health and Clinical Data Registry
Reporting Objective and the Public Health Reporting Objective, we
finalized that MIPS eligible clinicians who report to one or more
public health agencies or clinical data registries beyond the
Immunization Registry Reporting Measure will earn a bonus score of 5
percentage points in the advancing care information performance
category. Based on our proposals discussed above to allow MIPS eligible
clinicians who cannot fulfill the Immunization Registry Reporting
Measure to earn additional points in the performance score, we proposed
to modify this policy so that MIPS eligible clinicians cannot earn
points in both the performance score and bonus score for reporting to
the same public health agency or clinical data registry. We proposed to
modify our policy beginning with the performance period in CY 2018. We
proposed that a MIPS eligible clinician may earn the bonus score of 5
percentage points for reporting to at least one additional public
health agency or clinical data registry that is different from the
agency/agencies or registry/registries to which the MIPS eligible
clinician reports to earn a performance score. A MIPS eligible
clinician would not receive credit under both the performance score and
bonus score for reporting to the same agency or registry.
We proposed that for the Advancing Care Information Objectives and
Measures, a bonus of 5 percentage points would be awarded if the MIPS
eligible clinician reports ``yes'' for any one of the following
measures associated with the Public Health and Clinical Data Registry
Reporting Objective: Syndromic Surveillance Reporting; Electronic Case
Reporting; Public Health Registry Reporting; or Clinical Data Registry
Reporting. We proposed that for the 2018 Advancing Care Information
Transition Objectives and Measures, a bonus of 5 percentage points
would be
awarded if the MIPS eligible clinician reports ``yes'' for any one of
the following measures associated with the Public Health Reporting
Objective: Syndromic Surveillance Reporting or Specialized Registry
Reporting. We proposed that to earn the bonus score, the MIPS eligible
clinician must be in active engagement with one or more additional
public health agencies or clinical data registries that is/are
different from the agency or registry that they identified to earn a
performance score.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters supported the awarding of bonus points for
reporting to an additional agency or registry. A commenter supported
providing bonus points for registry reporting rather than mandating
registry reporting.
Response: We appreciate the support for our proposal.
Comment: One commenter requested that we make it explicitly clear
that MIPS eligible clinicians cannot earn a bonus score for reporting
to the same agency or registry that they identified for the purposes of
earning a performance score.
Response: Under our final policy discussed above, MIPS eligible
clinicians may report to a single public health agency or clinical data
registry and earn 10 percentage points in the performance score.
Reporting to a different public health agency or clinical data registry
may earn the MIPS eligible clinician 5 percentage points in the bonus
score. In order to earn the bonus score, the MIPS eligible clinician
must be in active engagement with a different public health agency or
clinical data registry than the one to which they reported to earn the
10 percentage points for the performance score. We expect to engage in
education and outreach efforts to ensure MIPS eligible clinicians are
aware of the policies adopted in this final rule with comment period
including the policy for earning bonus points for the advancing care
information performance category.
Comment: One commenter supported closing the loophole so that a
MIPS eligible clinician cannot receive double credit under both the
performance score and bonus score for reporting to the same agency or
registry.
Response: We appreciate the support for our proposal. As we
proposed, a MIPS eligible clinician cannot receive credit under both
the performance score and bonus score for reporting to the same public
health agency or registry.
Final Action: After consideration of the public comments that we
received, we are adopting our proposal as proposed and updating the
regulation text at Sec. 414.1380(b)(4)(C)(1).
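    For illustration only (hypothetical function and parameter names;
not regulatory text), the finalized bonus score policy can be sketched
as follows:

    def public_health_bonus_points(performance_registry: str,
                                   other_registries: set) -> int:
        # A 5 percentage point bonus is awarded for active engagement
        # with at least one public health agency or clinical data
        # registry that differs from the one identified for the
        # performance score; no credit is given under both the
        # performance score and bonus score for the same registry.
        return 5 if (other_registries - {performance_registry}) else 0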
(d) Improvement Activities Bonus Score under the Advancing Care
Information Performance Category
In the CY 2017 Quality Payment Program final rule (81 FR 77202), we
discussed our approach to the measurement of the use of health IT to
allow MIPS eligible clinicians and groups the flexibility to implement
health IT in a way that supports their clinical needs. Toward that end,
we adopted a policy to award a bonus score to MIPS eligible clinicians
who use CEHRT to complete certain activities in the improvement
activities performance category based on our belief that the use of
CEHRT in carrying out these activities could further the outcomes of
clinical practice improvement.
We adopted a final policy to award a 10 percent bonus for the
advancing care information performance category if a MIPS eligible
clinician attests to completing at least one of the improvement
activities we have specified using CEHRT (81 FR 77209). We referred
readers to Table 8 in the CY 2017 Quality Payment Program final rule
(81 FR 77202-77209) for a list of the improvement activities eligible
for the advancing care information performance category bonus. We
proposed to expand this policy beginning with the CY 2018 performance
period by identifying additional improvement activities in Table 6 (82
FR 30060-30063) that would be eligible for the advancing care
information performance category bonus score if they are completed
using CEHRT functionality.
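    For illustration only (hypothetical function name; not regulatory
text), the bonus described above amounts to a simple attestation check:

    def cehrt_improvement_activity_bonus(attested: bool) -> int:
        # A 10 percent bonus is added to the advancing care information
        # performance category score if the MIPS eligible clinician
        # attests to completing at least one improvement activity
        # listed in Table 6 using CEHRT.
        return 10 if attested else 0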
The following is a summary of the public comments received on this
proposal and our responses:
Comment: One commenter supported rewarding clinicians who used
CEHRT to perform improvement activities. Other commenters appreciated
the proposed additions to the list of improvement activities using
CEHRT that would be eligible for the bonus.
Response: We appreciate the support as we continue to believe that
offering this bonus will encourage MIPS eligible clinicians to use
CEHRT not only to
[[Page 53665]]
document patient care, but also to improve their clinical practices by
using CEHRT in a meaningful manner that supports clinical practice
improvement. We refer readers to Table 6 which lists all improvement
activities eligible for the advancing care information performance
category improvement activity bonus in 2018.
Comment: One commenter recommended that MIPS eligible clinicians
and groups that attest to completing one or more of the improvement
activities using CEHRT should not only earn improvement activity credit
but also automatically earn the base score for the advancing care
information performance category, amounting to 50 percent of the
advancing care information performance category score.
Response: We appreciate the commenter's input and continue to be
interested in options for incentivizing clinicians to use CEHRT in the
completion of improvement activities. We will take this comment under
consideration for future rulemaking on this topic.
Final Action: After consideration of the public comments, we are
finalizing with modifications the list of improvement activities shown
in Table 6 that will be eligible for the advancing care information
performance category bonus score beginning with the 2018 performance
period if they are completed using CEHRT. We refer readers to Table F:
New Improvement Activities for the Quality Payment Program Year 2 and
Future Years and Table G: Improvement Activities with Changes for the
Quality Payment Program Year 2 and Future Years for more information on
modifications to the Improvement Activities that were proposed.
Table 6--Improvement Activities Eligible for the Advancing Care Information Performance Category Bonus Beginning
With the 2018 Performance Period
----------------------------------------------------------------------------------------------------------------
Improvement
Improvement activity activity Related advancing
performance category Activity name Activity performance care information
subcategory category weight measure(s) *
----------------------------------------------------------------------------------------------------------------
Expanded Practice Access...... Provide 24/7 Provide 24/7 access to Medium............ Provide Patient
access to MIPS eligible Access.
eligible clinicians, groups, Secure Messaging.
clinicians or or care teams for Send A Summary of
groups who have advice about urgent Care.
real-time access and emergent care Request/Accept
to patient's (for example, MIPS Summary of Care.
medical record. eligible clinician
and care team access
to CEHRT, cross-
coverage with access
to CEHRT, or protocol-
driven nurse line
with access to CEHRT)
that could include
one or more of the
following:
Expanded
hours in evenings and
weekends with access
to the patient
medical record (for
example, coordinate
with small practices
to provide alternate
hour office visits
and urgent care);
Use of
alternatives to
increase access to
care team by MIPS
eligible clinicians
and groups, such as e-
visits, phone visits,
group visits, home
visits and alternate
locations (for
example, senior
centers and assisted
living centers); and/
or
Provision of
same-day or next-day
access to a
consistent MIPS
eligible clinician,
group or care team
when needed for
urgent care or
transition
management.
Patient Safety and Practice Communication of A MIPS eligible Medium............ Secure Messaging.
Assessment. Unscheduled clinician providing Send A Summary of
Visit for unscheduled care Care.
Adverse Drug (such as an emergency Request/Accept
Event and Nature room, urgent care, or Summary of Care.
of Event. other unplanned
encounter) attests
that, for greater
than 75 percent of
case visits that
result from a
clinically
significant adverse
drug event, the MIPS
eligible clinician
transmits
information,
including through the
use of CEHRT to the
patient's primary
care clinician
regarding both the
unscheduled visit and
the nature of the
adverse drug event
within 48 hours. A
clinically
significant adverse
event is defined as a
medication-related
harm or injury such
as side-effects,
supratherapeutic
effects, allergic
reactions, laboratory
abnormalities, or
medication errors
requiring urgent/
emergent evaluation,
treatment, or
hospitalization.
Patient Safety and Practice Consulting AUC Clinicians attest that High.............. Clinical Decision
Assessment. using clinical they are consulting Support (CEHRT
decision support specified applicable function only).
when ordering AUC through a
advanced qualified clinical
diagnostic decision support
imaging. mechanism for all
applicable imaging
services furnished in
an applicable
setting, paid for
under an applicable
payment system, and
ordered on or after
January 1, 2018. This
activity is for
clinicians that are
early adopters of the
Medicare AUC program
(2018 performance
year) and for
clinicians that begin
the Medicare AUC
program in future
years as specified in
our regulation at
Sec. 414.94. The
AUC program is
required under
section 218 of the
Protecting Access to
Medicare Act of 2014.
Qualified mechanisms
will be able to
provide a report to
the ordering
clinician that can be
used to assess
patterns of image-
ordering and improve
upon those patterns
to ensure that
patients are
receiving the most
appropriate imaging
for their individual
condition.
Patient Safety and Practice Cost Display for Implementation of a Medium............ Clinical Decision
Assessment. Laboratory and cost display for Support (CEHRT
Radiographic laboratory and function only).
Orders. radiographic orders,
such as costs that
can be obtained
through the Medicare
clinical laboratory
fee schedule.
[[Page 53666]]
Population Management......... Glycemic For at-risk outpatient Medium............ Patient-Specific
Screening Medicare Education.
Services. beneficiaries, Patient Generated
individual MIPS Health Data or
eligible clinicians Data from Non-
and groups must clinical
attest to Settings.
implementation of
systematic preventive
approaches in
clinical practice for
at least 60 percent
for the 2018
performance period
and 75 percent in
future years, of
CEHRT with
documentation of
screening patients
for abnormal blood
glucose according to
current U.S.
Preventive Services
Task Force (USPSTF)
and/or American
Diabetes Association
(ADA) guidelines.
Population Management......... Glycemic For outpatient High.............. Patient Generated
management Medicare Health Data.
services. beneficiaries with Clinical
diabetes and who are Information
prescribed Reconciliation.
antidiabetic agents Clinical Decision
(for example, Support, CCDS,
insulin, Family Health
sulfonylureas), MIPS History (CEHRT
eligible clinicians functions only).
and groups must
attest to having:
For the first
performance period,
at least 60 percent
of medical records
with documentation of
an individualized
glycemic treatment
goal that:
[cir] Takes into
account patient-
specific factors,
including, at least
(1) age, (2)
comorbidities, and
(3) risk for
hypoglycemia, and.
[cir] Is reassessed at
least annually.
The performance
threshold will
increase to 75
percent for the
second performance
period and onward.
Clinicians would
attest that, 60
percent for first
year, or 75 percent
for the second year,
of their medical
records that document
individualized
glycemic treatment
represent patients
who are being treated
for at least 90 days
during the
performance period.
Population Management......... Glycemic For at-risk outpatient Medium............ Patient-Specific
Referring Medicare Education.
Services. beneficiaries, Patient Generated
individual MIPS Health Data or
eligible clinicians Data from Non-
and groups must clinical
attest to Settings.
implementation of
systematic preventive
approaches in
clinical practice for
at least 60 percent
for the CY 2018
performance period
and 75 percent in
future years, of
CEHRT with
documentation of
referring eligible
patients with
prediabetes to a CDC-
recognized diabetes
prevention program
operating under the
framework of the
National Diabetes
Prevention Program.
Population Management......... Anticoagulant Individual MIPS High.............. Provide Patient
management eligible clinicians Access.
improvements. and groups who Patient-Specific
prescribe oral Education.
Vitamin K antagonist View, Download,
therapy (warfarin) Transmit.
must attest that, for Secure Messaging.
60 percent of Patient Generated
practice patients in Health Data or
the transition year Data from Non-
and 75 percent of Clinical
practice patients in Setting.
Quality Payment Send a Summary of
Program Year 2 and Care.
future years, their Request/Accept
ambulatory care Summary of Care.
patients receiving Clinical
warfarin are being Information
managed by one or Reconciliation
more of the following Exchange.
improvement Clinical Decision
activities;. Support (CEHRT
Patients are Function Only).
being managed by an
anticoagulant
management service,
that involves
systematic and
coordinated care,
incorporating
comprehensive patient
education, systematic
prothrombin time (PT-
INR) testing,
tracking, follow-up,
and patient
communication of
results and dosing
decisions;.
Patients are
being managed
according to
validated electronic
decision support and
clinical management
tools that involve
systematic and
coordinated care,
incorporating
comprehensive patient
education, systematic
PT-INR testing,
tracking, follow-up,
and patient
communication of
results and dosing
decisions;.
For rural or
remote patients,
patients are managed
using remote
monitoring or
telehealth options
that involve
systematic and
coordinated care,
incorporating
comprehensive patient
education, systematic
PT-INR testing,
tracking, follow-up,
and patient
communication of
results and dosing
decisions; and/or
For patients
who demonstrate
motivation,
competency, and
adherence, patients
are managed using
either a patient self-
testing (PST) or
patient-self-
management (PSM)
program.
[[Page 53667]]
Population Management......... Provide Clinical- Engaging community Medium............ Provide Patient
Community health workers to Access.
Linkages. provide a Patient-Specific
comprehensive link to Education.
community resources Patient-Generated
through family-based Health Data.
services focusing on
success in health,
education, and self-
sufficiency. This
activity supports
individual MIPS
eligible clinicians
or groups that
coordinate with
primary care and
other clinicians,
engage and support
patients, use of
CEHRT, and employ
quality measurement
and improvement
processes. An example
of this community
based program is the
NCQA Patient-Centered
Connected Care (PCCC)
Recognition Program
or other such
programs that meet
these criteria.
Population Management......... Advance Care Implementation of Medium............ Patient-Specific
Planning. practices/processes Education.
to develop advance Patient-Generated
care planning that Health Data.
includes: Documenting
the advance care plan
or living will within
CEHRT, educating
clinicians about
advance care planning
motivating them to
address advance care
planning needs of
their patients, and
how these needs can
translate into
quality improvement,
educating clinicians
on approaches and
barriers to talking
to patients about end-
of-life and
palliative care needs
and ways to manage
its documentation, as
well as informing
clinicians of the
healthcare policy
side of advance care
planning.
Population Management......... Chronic care and Proactively manage Medium............ Provide Patient
preventative chronic and Access.
care management preventive care for Patient-Specific
for empanelled empanelled patients Education.
patients. that could include View, Download,
one or more of the Transmit.
following: Secure Messaging.
Provide Patient Generated
patients annually Health Data or
with an opportunity Data from Non-
for development and/ Clinical
or adjustment of an Setting.
individualized plan Send A Summary of
of care as Care.
appropriate to age Request/Accept
and health status, Summary of Care.
including health risk Clinical
appraisal; gender, Information
age and condition- Reconciliation.
specific preventive Clinical Decision
care services; plan Support, Family
of care for chronic Health History
conditions; and (CEHRT functions
advance care only).
planning;.
Use condition-
specific pathways for
care of chronic
conditions (for
example,
hypertension,
diabetes, depression,
asthma and heart
failure) with
evidence-based
protocols to guide
treatment to target;.
Use pre-visit
planning to optimize
preventive care and
team management of
patients with chronic
conditions;.
Use panel
support tools
(registry
functionality) to
identify services
due;.
Use reminders
and outreach (for
example, phone calls,
emails, postcards,
patient portals and
community health
workers where
available) to alert
and educate patients
about services due;
and/or
Routine
medication
reconciliation
Population Management......... Implementation of Provide longitudinal Medium............ Provide Patient
methodologies care management to Access.
for improvements patients at high risk Patient-Specific
in longitudinal for adverse health Education.
care management outcome or harm that Patient Generated
for high risk could include one or Health Data or
patients. more of the Data from Non-
following: clinical
Use a Settings.
consistent method to Send A Summary of
assign and adjust Care.
global risk status Request/Accept
for all empaneled Summary of Care.
patients to allow Clinical
risk stratification information
into actionable risk reconciliation.
cohorts. Monitor the Clinical Decision
risk-stratification Support, CCDS,
method and refine as Family Health
necessary to improve History, Patient
accuracy of risk List (CEHRT
status functions only).
identification;.
Use a
personalized plan of
care for patients at
high risk for adverse
health outcome or
harm, integrating
patient goals, values
and priorities; and/
or.
Use on-site
practice-based or
shared care managers
to proactively
monitor and
coordinate care for
the highest risk
cohort of patients..
Population Management......... Implementation of Provide episodic care Medium............ Send A Summary of
episodic care management, including Care.
management management across Request/Accept
practice. transitions and Summary of Care.
referrals that could Clinical
include one or more Information
of the following: Reconciliation.
Routine and
timely follow-up to
hospitalizations, ED
visits and stays in
other institutional
settings, including
symptom and disease
management, and
medication
reconciliation and
management; and/or.
Managing care
intensively through
new diagnoses,
injuries and
exacerbations of
illness..
[[Page 53668]]
Population Management......... Implementation of Manage medications to Medium............ Clinical
medication maximize efficiency, Information
management effectiveness and Reconciliation.
practice safety that could Clinical Decision
improvements. include one or more Support,
of the following: Computerized
Reconcile and Physician Order
coordinate Entry Electronic
medications and Prescribing
provide medication (CEHRT functions
management across only).
transitions of care
settings and eligible
clinicians or groups;.
Integrate a
pharmacist into the
care team; and/or.
Conduct
periodic, structured
medication reviews..
Achieving Health Equity....... Promote use of Demonstrate High.............. Public Health
patient-reported performance of Registry
outcome tools. activities for Reporting.
employing patient- Clinical Data
reported outcome Registry
(PRO) tools and Reporting.
corresponding Patient-Generated
collection of PRO Health Data.
data (e.g., use of
PQH-2 or PHQ-9 and
PROMIS instruments)
such as patient
reported Wound
Quality of Life
(QoL), patient
reported Wound
Outcome, and patient
reported Nutritional
Screening.
Care Coordination............. Practice Develop pathways to Medium............ Send a Summary of
Improvements neighborhood/ Care.
that Engage community-based Request/Accept
Community resources to support Summary of Care.
Resources to patient health goals Patient-Generated
Support Patient that could include Health Data.
Health Goals. one or more of the
following:
Maintain
formal (referral)
links to community-
based chronic disease
self-management
support programs,
exercise programs and
other wellness
resources with the
potential for
bidirectional flow of
information and
provide a guide to
available community
resources..
Including
through the use of
tools that facilitate
electronic
communication between
settings;.
Screen
patients for health-
harming legal needs;
Screen and
assess patients for
social needs using
tools that are CEHRT
enabled and that
include to any extent
standards-based,
coded question/field
for the capture of
data as is feasible
and available as part
of such tool; and/or
................. Provide a
guide to available
community resources.
Care Coordination............. Primary Care The primary care and Medium............ Send a Summary of
Physician and behavioral health Care.
Behavioral practices use the Request/Accept
Health Bilateral same CEHRT system for Summary of Care.
Electronic shared patients or
Exchange of have an established
Information for bidirectional flow of
Shared Patients. primary care and
behavioral health
records.
Care Coordination............. PSH Care Participation in a Medium............ Send a Summary of
Coordination. Perioperative Care.
Surgical Home (PSH) Request/Accept
that provides a Summary of Care.
patient-centered, Clinical
physician-led, Information
interdisciplinary, Reconciliation.
and team-based system Health
of coordinated Information
patient care, which Exchange.
coordinates care from
pre-procedure
assessment through
the acute care
episode, recovery,
and post-acute care.
This activity allows
for reporting of
strategies and
processes related to
care coordination of
patients receiving
surgical or
procedural care
within a PSH. The
clinician must
perform one or more
of the following care
coordination
activities:
Coordinate
with care managers/
navigators in
preoperative clinic
to plan and
implementation
comprehensive post
discharge plan of
care;
Deploy
perioperative clinic
and care processes to
reduce post-operative
visits to emergency
rooms;.
Implement
evidence-informed
practices and
standardize care
across the entire
spectrum of surgical
patients; or.
Implement
processes to ensure
effective
communications and
education of
patients' post-
discharge
instructions.
Care Coordination............. Implementation of Performance of regular Medium............ Send A Summary of
use of practices that Care.
specialist include providing Request/Accept
reports back to specialist reports Summary of Care.
referring back to the referring Clinical
clinician or MIPS eligible Information
group to close clinician or group to Reconciliation.
referral loop. close the referral
loop or where the
referring MIPS
eligible clinician or
group initiates
regular inquiries to
specialist for
specialist reports
which could be
documented or noted
in the CEHRT.
Care Coordination............. Implementation of Implementation of Medium............ Secure Messaging.
documentation practices/processes, Send A Summary of
improvements for including a Care.
developing discussion on care, Request/Accept
regular to develop regularly Summary of Care.
individual care updated individual Clinical
plans. care plans for at- Information
risk patients that Reconciliation.
are shared with the
beneficiary or
caregiver(s).
Individual care plans
should include
consideration of a
patient's goals and
priorities, as well
as desired outcomes
of care.
[[Page 53669]]
Care Coordination............. Implementation of Implementation of Medium............ Provide Patient
practices/ practices/processes Access (formerly
processes for to develop regularly Patient Access).
developing updated individual View, Download,
regular care plans for at- Transmit.
individual care risk patients that Secure Messaging.
plans. are shared with the Patient Generated
beneficiary or Health Data or
caregiver(s). Data from Non-
Clinical
Setting.
Care Coordination............. Practice Ensure that there is Medium............ Send A Summary of
improvements for bilateral exchange of Care.
bilateral necessary patient Request/Accept
exchange of information to guide Summary of Care.
patient patient care, such as Clinical
information. Open Notes, that Information
could include one or Reconciliation.
more of the
following:
Participate
in a Health
Information Exchange
if available; and/or.
Use
structured referral
notes.
Beneficiary Engagement........ Engage Patients Engage patients and High.............. Patient-Generated
and Families to families to guide Health Data.
Guide improvement in the Provide Patient
Improvement in system of care by Access.
the System of leveraging digital View, Download,
Care. tools for ongoing or Transmit.
guidance and
assessments outside
the encounter,
including the
collection and use of
patient data for
return-to-work and
patient quality of
life improvement.
Platforms and devices
that collect patient-
generated health data
(PGHD) must do so
with an active
feedback loop, either
providing PGHD in
real or near-real
time to the care
team, or generating
clinically endorsed
real or near-real
time automated
feedback to the
patient. Includes
patient reported
outcomes (PROs).
Examples include
patient engagement
and outcomes tracking
platforms, cellular
or web-enabled bi-
directional systems,
and other devices
that transmit
clinically valid
objective and
subjective data back
to care teams.
Because many consumer-
grade devices capture
PGHD (for example,
wellness devices),
platforms or devices
eligible for this
improvement activity
must be, at a
minimum, endorsed and
offered clinically by
care teams to
patients to
automatically send
ongoing guidance (one
way). Platforms and
devices that
additionally collect
PGHD must do so with
an active feedback
loop, either
providing PGHD in
real or near-real
time to the care
team, or generating
clinically endorsed
real or near-real
time automated
feedback to the
patient (e.g.
automated patient-
facing instructions
based on glucometer
readings). Therefore,
unlike passive
platforms or devices
that may collect but
do not transmit PGHD
in real or near-real
time to clinical care
teams, active devices
and platforms can
inform the patient or
the clinical care
team in a timely
manner of important
parameters regarding
a patient's status,
adherence,
comprehension, and
indicators of
clinical concern.
Beneficiary Engagement........ Use of CEHRT to In support of Medium............ Provide Patient
capture patient improving patient Access.
reported access, performing Patient-Specific
outcomes. additional activities Education.
that enable capture Care Coordination
of patient reported through Patient
outcomes (for Engagement.
example, home blood
pressure, blood
glucose logs, food
diaries, at-risk
health factors such
as tobacco or alcohol
use, etc.) or patient
activation measures
through use of CEHRT,
containing this data
in a separate queue
for clinician
recognition and
review.
Beneficiary Engagement........ Engagement of Access to an enhanced Medium............ Provide Patient
patients through patient portal that Access.
implementation. provides up to date Patient-Specific
information related Education.
to relevant chronic
disease health or
blood pressure
control, and includes
interactive features
allowing patients to
enter health
information and/or
enables bidirectional
communication about
medication changes
and adherence.
Beneficiary Engagement........ Engagement of Engage patients, Medium............ Provide Patient
patients, family family and caregivers Access.
and caregivers in developing a plan Patient-specific
in developing a of care and Education.
plan of care. prioritizing their View, Download,
goals for action, Transmit
documented in the (Patient
CEHRT. Action).
Secure Messaging.
Patient Safety and Practice... Use of decision Use decision support Medium............ Clinical Decision
support and and protocols to Support (CEHRT
standardized manage workflow in function only).
treatment the team to meet
protocols. patient needs.
[[Page 53670]]
Achieving Health Equity....... Promote Use of Demonstrate Medium............ Patient Generated
Patient-Reported performance of Health Data or
Outcome Tools. activities for Data from a Non-
employing patient- Clinical
reported outcome Setting.
(PRO) tools and Public Health and
corresponding Clinical Data
collection of PRO Registry
data such as the use Reporting.
of PQH-2 or PHQ-9,
PROMIS instruments,
patient reported
Wound Quality of Life
(QoL), patient
reported Wound
Outcome, and patient
reported Nutritional
Screening.
Behavioral and Mental Health.. Implementation of Offer integrated High.............. Provide Patient
integrated behavioral health Access.
Patient Centered services to support Patient-Specific
Behavioral patients with Education.
Health (PCBH) behavioral health View, Download,
model. needs, dementia, and Transmit.
poorly controlled Secure Messaging.
chronic conditions. Patient Generated
The services could Health Data or
include one or more Data from Non-
of the following:. Clinical
Use evidence- Setting.
based treatment Care coordination
protocols and through Patient
treatment to goal Engagement.
where appropriate;. Send A Summary of
Use evidence- Care.
based screening and
case finding
strategies to
identify individuals
at risk and in need
of services;.
Ensure
regular communication
and coordinated
workflows between
eligible clinicians
in primary care and
behavioral health;.
Conduct
regular case reviews
for at-risk or
unstable patients and
those who are not
responding to
treatment;.
Use of a
registry or certified
health information
technology
functionality to
support active care
management and
outreach to patients
in treatment; and/or
Integrate Request/Accept
behavioral health and Summary of Care.
medical care plans
and facilitate
integration through
co-location of
services when
feasible; and/or
Participate
in the National
Partnership to
Improve Dementia Care
Initiative, which
promotes a
multidimensional
approach that
includes public
reporting, state-
based coalitions,
research, training,
and revised surveyor
guidance.
Behavioral and Mental Health.. Electronic Health Enhancements to CEHRT Medium............ Patient Generated
Record to capture additional Health Data or
Enhancements for data on behavioral Data from Non-
BH data capture. health (BH) Clinical
populations and use Setting.
that data for Send A Summary of
additional decision- Care.
making purposes (for Request/Accept
example, capture of Summary of Care.
additional BH data Clinical
results in additional Information
depression screening Reconciliation.
for at-risk patient
not previously
identified).
----------------------------------------------------------------------------------------------------------------
(3) Performance Periods for the Advancing Care Information Performance
Category
In the CY 2017 Quality Payment Program final rule (81 FR 77210
through 77211), we established a performance period for the advancing
care information performance category to align with the overall MIPS
performance period of one full year to ensure all four performance
categories are measured and scored based on the same period of time. We
stated that for the first and second performance periods of MIPS (CYs
2017 and 2018), we will accept a minimum of 90 consecutive days of data
and
encourage MIPS eligible clinicians to report data for the full year
performance period. We proposed the same policy for the advancing care
information performance category for the performance period in CY 2019,
Quality Payment Program Year 3, and would accept a minimum of 90
consecutive days of data in CY 2019.
Comment: Commenters supported the continuation of a performance
period of a minimum of 90 consecutive days of data in CY 2019. Some
stated that maintaining the 90-day performance period for the first 3
years of MIPS is important to add stability to reporting on the
performance category. One commenter requested that we maintain the 90-
day performance period for the advancing care information performance
category in perpetuity, as a shorter period enables physicians to adopt
innovative uses of technology and permits flexibility to test new
health IT solutions.
Response: While we believe a 90-day performance period is
appropriate for advancing care information for the 2017, 2018 and 2019
performance periods, we believe it is premature to establish the
performance periods for any additional years at this time. We will
consider creating a 90-day performance period for 2020 and beyond and
may address this issue in future rulemaking.
Comment: A few commenters expressed their disappointment that we
proposed another 90-day performance period for 2019 and urged us to
move to full calendar year reporting as soon as possible. They stated
that patients and families should be able to experience the benefits of
health IT--getting questions answered through secure email, or having
summary of care records incorporated into new providers' health records
--any day of the year, rather than a particular three-month period.
Response: Although we proposed that the MIPS performance period for
the advancing care information performance category would be a minimum
of 90 consecutive days, MIPS eligible clinicians have the option to
submit data for longer periods up to a full calendar year. Furthermore,
we believe
[[Page 53671]]
that once the functionality in 2015 Edition CEHRT is implemented, MIPS
eligible clinicians will use it throughout the year rather than only
during the minimum performance period.
Final Action: After consideration of the public comments, we are
adopting our policy as proposed. We will accept a minimum of 90
consecutive days of data in CY 2019 and are revising Sec.
414.1320(d)(1).
(4) Certification Requirements
In the CY 2017 Quality Payment Program final rule (81 FR 77211
through 77213), we outlined the requirements for MIPS eligible
clinicians using CEHRT during the CY 2017 performance period for the
advancing care information performance category as it relates to the
objectives and measures they select to report, and also outlined
requirements for the CY 2018 performance period. We additionally
adopted a definition of CEHRT at Sec. 414.1305 for MIPS eligible
clinicians that is based on the definition that applies in the EHR
Incentive Programs under Sec. 495.4.
For the CY 2017 performance period, we adopted a policy by which
MIPS eligible clinicians may use EHR technology certified to either the
2014 or 2015 Edition certification criteria, or a combination of the
two. For the CY 2018 performance period, we previously stated that MIPS
eligible clinicians must use EHR technology certified to the 2015
Edition to meet the objectives and measures specified for the advancing
care information performance category. We received a significant number
of comments on the CY 2017 Quality Payment Program final rule and feedback from
stakeholders through meetings and listening sessions requesting that we
extend the use of 2014 Edition CEHRT beyond CY 2017 into CY 2018. Many
commenters expressed concern over the lack of products certified to the
2015 Edition. Other commenters stated that switching from the 2014
Edition to the 2015 Edition requires a large amount of time and
planning and if it is rushed there is a potential risk to patient
health. Also, our experience with the transition from EHR technology
certified to the 2011 Edition to EHR technology certified to the 2014
Edition made us aware of the many issues associated with the
adoption of EHR technology certified to a new Edition. These include
the time that will be necessary to effectively deploy EHR technology
certified to the 2015 Edition standards and certification criteria and
to make the necessary patient safety, staff training, and workflow
investments to be prepared to report for the advancing care information
performance category for 2018. Thus, we proposed that MIPS eligible
clinicians may use EHR technology certified to either the 2014 or 2015
Edition certification criteria, or a combination of the two for the CY
2018 performance period. We proposed to amend Sec. 414.1305 to reflect
this change.
We further noted that, to encourage new participants to adopt
certified health IT and to incentivize participants to upgrade their
technology to 2015 Edition products, which better support
interoperability across the care continuum, we proposed to offer a
bonus of 10 percentage points under the advancing care information
performance category for MIPS eligible clinicians who report the
Advancing Care Information Objectives and Measures for the performance
period in CY 2018 using only 2015 Edition CEHRT. We proposed to amend
Sec. 414.1380(b)(4)(C)(3) to reflect this change. We proposed this one-
time bonus for CY 2018 to support and recognize MIPS eligible
clinicians and groups that invest in implementing certified EHR
technology in their practice. We sought comment on the proposed bonus,
the proposed amount of percentage points for the bonus, and whether the
bonus should be limited to new participants in MIPS and small
practices.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Most commenters supported our proposal to allow the use of
2014 Edition or 2015 Edition CEHRT or a combination for the 2018
performance period. One stated that this flexibility gives clinicians
more time to fully evaluate their EHR optimization in a
meaningful way that ensures EHR systems are in place, tested thoroughly
and operating as intended. Many stated that an additional transition
year would be very helpful in allowing physicians to plan for the
required upgrades, which can be costly and time-consuming. A few
commenters supported our proposal because they believed it would delay
the requirement to report on the Advancing Care Information Objectives
and Measures derived from meaningful use Stage 3 in 2018.
Response: We thank commenters for their feedback and support of
CEHRT flexibility in 2018. We hope that allowing MIPS eligible
clinicians to use 2014 Edition or 2015 Edition CEHRT or a combination
of the two in 2018 will allow for a smooth transition to 2015 Edition
CEHRT.
Comment: One commenter requested that we affirm that 2015 Edition
CEHRT will be required for the 2019 performance period so that they can
plan accordingly.
Response: Under our current policy as reflected in Sec. 414.1305,
2015 Edition CEHRT will be required in the performance period in 2019.
We believe that there are many benefits for MIPS eligible clinicians
and their patients from implementing 2015 Edition CEHRT. These
include enabling health information exchange through new and enhanced
certification criteria standards, as well as through implementation
specifications for interoperability. The 2015 Edition also incorporates
changes that are designed to spur innovation and provide more choices
to health care providers and patients for the exchange of electronic
health information, including new Application Programming Interface
(API) certification criteria.
Comment: Several commenters stated their disappointment with the
proposed delayed transition to 2015 Edition CEHRT as they believe that
it includes significant patient-facing technologies and new
functionalities to support patient engagement and improve
interoperability.
Response: While we understand the concern, we believe that it is
important to provide MIPS eligible clinicians with flexibility and more
time to adopt and implement 2015 Edition CEHRT. We recognize there is
burden associated with the development and deployment of each new
version of CEHRT, which may be labor intensive and expensive for
clinicians, so we believe the additional time will be welcomed. In
addition, if MIPS eligible clinicians are ready to report using the
2015 Edition, we
encourage them to do so.
Comment: Most commenters supported our proposal to award a 10
percentage point bonus for using 2015 Edition CEHRT exclusively in
2018.
Response: We appreciate the support for this proposal and believe
the bonus will incentivize MIPS eligible clinicians to work to
implement 2015 Edition CEHRT by 2018.
Comment: Some commenters suggested that we offer a bonus to those
MIPS eligible clinicians who use a combination of 2014 Edition and 2015
Edition CEHRT in 2018. Others suggested that the bonus be available not
only in 2018 but also in 2019. Other commenters requested that the
bonus be available to all MIPS eligible clinicians regardless of
whether they are new to the MIPS program or not. Other commenters
believed that CMS has struck an appropriate balance of minimizing the
regulatory burden on clinicians not yet prepared to transition to 2015
Edition CEHRT, while
[[Page 53672]]
incentivizing those clinicians who have transitioned to 2015 Edition
CEHRT.
Response: We believe it is appropriate to award the bonus only to
those MIPS eligible clinicians who are able to use only the 2015
Edition of CEHRT in 2018. We understand that it is challenging to
transition from the 2014 Edition to the 2015 Edition of CEHRT and wish
to incentivize clinicians to complete the transition. We believe this
bonus will help to incentivize participants to continue the process of
upgrading from 2014 Edition to 2015 Edition, especially small practices
where the investment in updated workflows and implementation may
present unique challenges. We agree with commenters that the bonus
should be available to all MIPS eligible clinicians who use 2015
Edition CEHRT exclusively in 2018. In addition, we intend this bonus to
support and recognize the efforts to report on the Advancing Care
Information Measures using EHR technology certified to the 2015
Edition, which include more robust measures using updated standards and
functions which support interoperability.
Comment: Several commenters requested that the bonus points
available for using 2015 Edition CEHRT be raised from 10 percentage
points to 20 percentage points because it would provide a stronger
incentive for clinicians to upgrade to and implement 2015 Edition CEHRT
for use in 2018, thereby expanding the availability of 2015 Edition
enhancements, such as the new open APIs. A few commenters recommended
that we provide a bonus of 15 percentage points. Commenters expressed
concern that adding only 10 percentage points to the score for the use
of 2015 Edition CEHRT is too trivial an incentive and would do little
to offset the work participants must do to prepare to participate in
the Quality Payment Program.
Response: We disagree and believe that a 10 percentage point bonus
provides an adequate incentive for MIPS eligible clinicians to use 2015
Edition CEHRT exclusively for a minimum of a 90-day performance period
in 2018. Additionally, the addition of this 10 percentage point bonus
would bring the total bonus points available under the advancing care
information performance category to 25 percentage points. We remind
readers that a MIPS eligible clinician may earn a maximum score of 165
percentage points (including the 2015 Edition CEHRT bonus) for the
advancing care information performance category, but all scores of 100
percentage points and higher will receive full credit for the category
(25 percentage points) in the final score.
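To illustrate the arithmetic described in this response, the following is a minimal sketch in Python (this is not CMS scoring code, and the component values shown are hypothetical) of how the 2015 Edition CEHRT bonus interacts with the 100 percentage point cap and the 25 percent category weight:

def aci_category_score(base_score, performance_score, bonus_score):
    # base_score: 0 or 50; performance_score: 0 to 90; bonus_score: up to 25
    # (5% registry bonus + 10% improvement activities bonus + 10% 2015 Edition bonus).
    total = base_score + performance_score + bonus_score  # may reach 165
    return min(total, 100)  # scores of 100 or higher receive full credit


def final_score_contribution(category_score, category_weight=25):
    # Convert the capped category score into points in the MIPS final score.
    return category_score / 100 * category_weight


# A clinician earning 50 + 70 + 25 = 145 is capped at 100, which yields the
# full 25 points for the category in the MIPS final score.
print(final_score_contribution(aci_category_score(50, 70, 25)))  # 25.0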
Comment: One commenter did not support bonus points for clinicians
that adopt 2015 Edition CEHRT in 2018 because they do not agree that
2015 Edition CEHRT enhances a physician's ability to provide higher
quality care. Another commenter indicated that providing bonus points
may disadvantage clinicians who have prior experience with CEHRT but
are unable to fully implement the 2015 Edition due to vendor issues
beyond their control.
Response: While we appreciate these concerns, we believe that the
availability of the bonus is appropriate and we wish to incentivize
clinicians to complete the transition to 2015 Edition CEHRT in 2018.
Final Action: After consideration of the public comments, we are
adopting our proposal as proposed to allow the use of 2014 Edition or
2015 Edition CEHRT, or a combination of the two Editions, for the
performance period in 2018. We will offer a one-time bonus of 10
percentage points under the advancing care information performance
category for MIPS eligible clinicians who report the Advancing Care
Information Objectives and Measures for the performance period in CY
2018 using only 2015 Edition CEHRT. We will not limit the bonus to new
participants. We are revising Sec. Sec. 414.1305 and 414.1380(b)(4) of
the regulation text to reflect this policy.
(5) Scoring Methodology Considerations
Section 1848(q)(5)(E)(i)(IV) of the Act states that 25 percent of
the MIPS final score shall be based on performance for the advancing
care information performance category. Further, section
1848(q)(5)(E)(ii) of the Act provides that in any year in which the
Secretary estimates that the proportion of eligible professionals (as
defined in section 1848(o)(5) of the Act) who are meaningful EHR users
(as determined under section 1848(o)(2) of the Act) is 75 percent or
greater, the Secretary may reduce the applicable percentage weight of
the advancing care information performance category in the MIPS final
score, but not below 15 percent, and increase the weightings of the
other performance categories such that the total percentage points of
the increase equals the total percentage points of the reduction. We
noted that section 1848(o)(5) of the Act defines an eligible
professional as a physician, as defined in section 1861(r) of the Act.
In the CY 2017 Quality Payment Program final rule (81 FR 77226 through 77227),
we established a final policy, for purposes of applying section
1848(q)(5)(E)(ii) of the Act, to estimate the proportion of physicians
as defined in section 1861(r) of the Act who are meaningful EHR users,
as those physician MIPS eligible clinicians who earn an advancing care
information performance category score of at least 75 percent for a
performance period. We established that we will base this estimation on
data from the relevant performance period, if we have sufficient data
available from that period. We stated that we will not include in the
estimation physicians for whom the advancing care information
performance category is weighted at zero percent under section
1848(q)(5)(F) of the Act, which we relied on in the CY 2017 Quality
Payment Program final rule (81 FR 77226 through 77227) to establish
policies under which we would weight the advancing care information
performance category at zero percent of the final score. In addition,
we proposed not to include in the estimation physicians for whom the
advancing care information performance category would be weighted at
zero percent under the proposal in section II.C.6.f.(7) of the proposed
rule to implement certain provisions of the 21st Century Cures Act
(that is, physicians who are determined hospital-based or ambulatory
surgical center-based, or who are granted an exception based on
significant hardship or decertified EHR technology).
We stated that we were considering modifications to the policy we
established in last year's rulemaking to base our estimation of
physicians who are meaningful EHR users for a MIPS payment year (for
example, 2019) on data from the relevant performance period (for
example, 2017). We stated our concern that if in future rulemaking we
decide to propose to change the weight of the advancing care
information performance category based on our estimation, such a change
may cause confusion to MIPS eligible clinicians who are adjusting to
the MIPS program and believe this performance category will make up 25
percent of the final score for the 2019 MIPS payment year. We noted the
earliest we would be able to make our estimation based on 2017 data and
propose in future rulemaking to change the weight of the advancing care
information performance category for the 2019 MIPS payment year would
be in mid-2018, as the deadline for data submission is March 31, 2018.
We requested public comments on whether this timeframe is sufficient,
or whether a more extended timeframe would be preferable. We proposed
to modify our existing policy such that we would base our estimation
[[Page 53673]]
of physicians who are meaningful EHR users for a MIPS payment year on
data from the performance period that occurs 4 years before the MIPS
payment year.
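As an illustration only, the following Python sketch reflects our reading of the estimation policy and the proposed look-back period described above; the data structure, field names, and sample values are assumptions made for the example and are not CMS code:

def performance_period_for(payment_year, lookback_years=4):
    # Proposed policy: the estimate for a MIPS payment year is based on data
    # from the performance period 4 years earlier (for example, 2017 data
    # would inform any reweighting for the 2021 MIPS payment year).
    return payment_year - lookback_years


def meaningful_ehr_user_proportion(physicians):
    # 'physicians' is a hypothetical list of records with an advancing care
    # information score (0-100) and a flag marking a zero percent category
    # weight; zero-weighted physicians are excluded from the estimation.
    counted = [p for p in physicians if not p["aci_weighted_zero"]]
    if not counted:
        return 0.0
    meaningful = [p for p in counted if p["aci_score"] >= 75]
    return len(meaningful) / len(counted)


sample = [{"aci_score": 82, "aci_weighted_zero": False},
          {"aci_score": 60, "aci_weighted_zero": False},
          {"aci_score": 95, "aci_weighted_zero": True}]  # excluded from estimate
print(performance_period_for(2021))                     # 2017
print(meaningful_ehr_user_proportion(sample) >= 0.75)   # False for this sample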
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Some commenters supported our proposal because they
believed it was appropriate to allow additional time to ensure that
clinicians are aware of the percentage weighting of each MIPS
performance category. A few commenters stated that more certainty and
advance notice will offer MIPS eligible clinicians more time to prepare
and focus resources on areas of most significance.
Response: We thank commenters for their support and agree that
additional time to determine whether the advancing care information
category weight should be reduced is necessary. Once we make a
determination we will communicate such a change to the percentage
weighting of each of the MIPS performance categories to MIPS eligible
clinicians.
Comment: Some commenters stated that the timeline that we proposed
was not sufficient. Commenters stated that a proposal to change the
weight would not be finalized until late in 2018 and would be applied
to the 2019 MIPS performance period/2021 payment year which does not
give clinicians sufficient time to be educated and respond to the
changes in the category weights. Other commenters stated that any
change to the MIPS program, specifically the possibility of a change in
the weight of the advancing care information performance category, will
have a domino effect on various aspects of the program, as well as
within the health IT space. Based on updated category weight
allocation, which would change the overall computation of a MIPS score,
health IT developers will need to make adjustments to their products
and software accordingly. In turn, those products must be implemented
by clinicians. For product lifecycles, there needs to be at least 12
months' notice given for all parties to adequately plan and execute
these changes. The proposed timeline that CMS has outlined (for
performance period 2017/payment year 2019, the earliest feedback would
be in mid-2018 that would in turn affect the weight of performance
period 2019/payment year 2021) would allow for notification of less
than a year, and is therefore not sufficient.
Response: While we understand these concerns, we previously
finalized our policy to base our estimation of physicians who are
meaningful EHR users for a MIPS payment year on data from the relevant
performance period for the MIPS payment year. For example, for the 2019
MIPS payment year, the performance period is two years prior to the
payment year, in 2017. We proposed to extend the look-back period to
the performance period that occurs 4 years before the MIPS payment
year, which would give additional time for MIPS eligible clinicians and
health IT developers to adjust to the new weighting prior to the start
of the actual performance period for the MIPS payment year. We continue
to believe that this timeframe is sufficient.
Comment: Some commenters expressed concern with the proposal to
base CMS' estimation of meaningful EHR users on data from the
performance period that occurs 4 years before the MIPS payment year.
According to the commenters, the 4-year look-back period is
unreasonably long given the rapid pace of technology, especially given
continued delays in adopting 2015 Edition technology. Commenters
encouraged CMS to shorten this look-back period. Prematurely reducing
the advancing care information performance category's weight could
impair progress towards robust, person-centered uses of health IT.
Response: While we appreciate these concerns, we also believe that
it is important to give MIPS eligible clinicians sufficient notice
before we change the weighting of a category so that they can plan
appropriately. We note that the earliest data we can use to calculate
the proportion of physicians who are meaningful EHR users will be the
data from the 2017 performance period, which will not be available
until 2018. Under our current policy, 2018 is the earliest we would be
able to propose in rulemaking to reduce and redistribute the weight of
the advancing care information performance category for the 2019 MIPS
payment year, based on the proportion of physicians who were meaningful
EHR users during the performance period in 2017. As previously stated,
we believe it is important for MIPS eligible clinicians to be aware of
this reweighting prior to the relevant performance period during which
they would be measured for the MIPS payment year, which is why we
believe the proposed 4-year timeline is more appropriate.
Comment: Some commenters recommended keeping the advancing care
information performance category as 25 percent of the MIPS final score
in years in which 75 percent or more of physicians are meaningful EHR
users. Other commenters recommended that if the weight of the advancing
care information performance category is reduced, it should not all be
redistributed to the quality category. Many commenters suggested that
it be redistributed to quality and improvement activities performance
categories particularly for physicians for whom there are not the
required number of meaningful quality measures.
Response: We appreciate this input. We intend to make our decision
about whether to change the performance category weight based on data
from the performance period that is 4 years prior to the MIPS payment
year. We have not yet proposed a new weight for the advancing care
information performance category or to which category or categories the
points would be distributed.
Comment: A few commenters stated that it is too early to consider
reweighting a category before any data has been received or analyzed.
When reweighting is implemented, they urged CMS to ensure that
clinicians are informed of the reweighting prior to the performance
period. Changing the weight of a performance category retrospectively
would add confusion to an already complex program.
Response: We have not yet proposed to reduce the weight of the
performance category. We are simply establishing the timeframe for when
we would decide whether to reduce and redistribute the weight.
Comment: Several commenters suggested that any proposed changes to
the weight of the advancing care information performance category
resulting from an assessment of the proportion of clinicians who are
meaningful EHR users, for example, those who achieve an advancing care
information performance category score of at least 75 percent, should
be based on at least two to three MIPS performance periods worth of
data to ensure an accurate baseline.
Response: We agree that this decision should be made with
consideration for the reliability and validity of data; however, we
disagree that it would require multiple performance periods to obtain
the necessary data to make this determination. We also note that we are
not proposing to base this decision on any particular year at this
point in time; we are only addressing the timeframe relationship
between when the data is reported and when the reweighting would take
place. For example, should the data show that 75 percent or more
physicians are considered meaningful users based on data submitted for
the 2017 performance period, we would propose to reweight the advancing
care information performance category weight in 2021 instead of 2019.
We
[[Page 53674]]
believe this policy would allow adequate time for MIPS eligible
clinicians, EHR vendors, and other stakeholders to adjust to the new
scoring structure prior to submitting data for the affected payment
year.
Final Action: Based on the public comments and for the reasons
discussed in the proposed rule, we are adopting our proposal as
proposed. Our ability to implement this policy will be dependent on the
availability of data from the performance period that occurs 4 years
before the MIPS payment year.
(6) Objectives and Measures
(a) Advancing Care Information Objectives and Measures Specifications
We proposed to maintain for the CY 2018 performance period the
Advancing Care Information Objectives and Measures as finalized in the
CY 2017 Quality Payment Program final rule (81 FR 77227 through 77229).
We proposed the following modifications to certain objectives and
measures.
Provide Patient Access Measure: For at least one unique patient
seen by the MIPS eligible clinician: (1) The patient (or the patient-
authorized representative) is provided timely access to view online,
download, and transmit his or her health information; and (2) The MIPS
eligible clinician ensures the patient's health information is
available for the patient (or patient-authorized representative) to
access using any application of their choice that is configured to meet
the technical specifications of the Application Programming Interface
(API) in the MIPS eligible clinician's CEHRT.
Proposed definition of timely: Beginning with the 2018 performance
period, we proposed to define ``timely'' as within 4 business days of
the information being available to the MIPS eligible clinician. This
definition of timely is the same as we adopted under the EHR Incentive
Programs (80 FR 62815).
Proposed change to the View, Download, Transmit (VDT) Measure:
During the performance period, at least one unique patient (or patient-
authorized representatives) seen by the MIPS eligible clinician
actively engages with the EHR made accessible by the MIPS eligible
clinician by either (1) viewing, downloading or transmitting to a third
party their health information; or (2) accessing their health
information through the use of an API that can be used by applications
chosen by the patient and configured to the API in the MIPS eligible
clinician's CEHRT; or (3) a combination of (1) and (2). We proposed
this change because we erroneously described the actions in the measure
(viewing, downloading or transmitting; or accessing through an API) as
being taken by the MIPS eligible clinician rather than the patient or
the patient-authorized representatives. We proposed this change would
apply beginning with the performance period in 2017.
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinician into their EHR using the functions of
CEHRT.
Proposed change to the Objective: The MIPS eligible clinician
provides a summary of care record when transitioning or referring their
patient to another setting of care, receives or retrieves a summary of
care record upon the receipt of a transition or referral or upon the
first patient encounter with a new patient, and incorporates summary of
care information from other health care providers into their EHR using
the functions of CEHRT.
We inadvertently used the term ``health care clinician'' and
proposed to replace it with the more appropriate term ``health care
provider''. We proposed this change would apply beginning with the
performance period in 2017.
Send a Summary of Care Measure: For at least one transition of care
or referral, the MIPS eligible clinician that transitions or refers
their patient to another setting of care or health care clinician (1)
creates a summary of care record using CEHRT; and (2) electronically
exchanges the summary of care record.
Proposed Change to the Send a Summary of Care Measure: For at least
one transition of care or referral, the MIPS eligible clinician that
transitions or refers their patient to another setting of care or
health care provider (1) creates a summary of care record using CEHRT;
and (2) electronically exchanges the summary of care record.
We inadvertently used the term ``health care clinician'' and
proposed to replace it with the more appropriate term ``health care
provider''. We proposed this change would apply beginning with the 2017
performance period.
Syndromic Surveillance Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
syndromic surveillance data from a non-urgent care ambulatory setting
where the jurisdiction accepts syndromic data from such settings and
the standards are clearly defined.
Proposed change to the Syndromic Surveillance Reporting Measure:
The MIPS eligible clinician is in active engagement with a public
health agency to submit syndromic surveillance data.
We proposed this change because we inadvertently finalized the
measure description that we had proposed for Stage 3 of the EHR
Incentive Program (80 FR 62866) and not the measure description that we
finalized (80 FR 62970). We are modifying the proposed change so that
it aligns with the measure description finalized for Stage 3 by
adding the phrase ``from an urgent care setting'' to the end of the
measure description.
In the proposed rule, we noted that we have split the Specialized
Registry Reporting Measure that we adopted under the 2017 Advancing
Care Information Transition Objectives and Measures into two separate
measures, Public Health Registry Reporting and Clinical Data Registry
Reporting, to better define the registries available for reporting. We
proposed to allow MIPS eligible clinicians and groups to continue to
count active engagement in electronic public health reporting with
specialized registries. We proposed to allow these registries to be
counted for purposes of reporting the Public Health Registry Reporting
Measure or the Clinical Data Registry Reporting Measure beginning with
the 2018 performance period. A MIPS eligible clinician may count a
specialized registry if the MIPS eligible clinician achieved the phase
of active engagement as described under ``active engagement option 3:
production'' in the 2015 EHR Incentive Programs final rule with comment
period (80 FR 62862 through 62865), meaning the clinician has completed
testing and validation of the electronic submission and is
electronically submitting production data to the public health agency
or clinical data registry.
(b) 2017 and 2018 Advancing Care Information Transition Objectives and
Measures Specifications
In the CY 2017 Quality Payment Program final rule (81 FR 77229
through 77237), we finalized the 2017 Advancing Care Information
Transition Objectives and Measures for MIPS eligible clinicians using
EHR technology certified to the 2014 Edition. Because we proposed in
section II.C.6.f.(4) of the
[[Page 53675]]
proposed rule to continue to allow the use of EHR technology certified
to the 2014 Edition in the 2018 performance period, we also proposed to
allow MIPS eligible clinicians to report the 2017 Advancing Care
Information Transition Objectives and Measures in 2018. We proposed to
make several modifications identified and described below to the 2017
Advancing Care Information Transition Objectives and Measures for the
advancing care information performance category of MIPS for the 2017
and 2018 performance periods.
Objective: Patient Electronic Access.
Objective: The MIPS eligible clinician provides patients (or
patient-authorized representative) with timely electronic access to
their health information and patient-specific education.
Proposed Change to the Objective.
We proposed to modify this objective beginning with the 2017
performance period by removing the word ``electronic'' from the
description of timely access as it was erroneously included in the
final rule (81 FR 77228).
Objective: Patient-Specific Education.
Objective: The MIPS eligible clinician provides patients (or
patient authorized representative) with timely electronic access to
their health information and patient-specific education.
Proposed Change to the Objective: The MIPS eligible clinician uses
clinically relevant information from CEHRT to identify patient-specific
educational resources and provide those resources to the patient. We
inadvertently finalized the description of the Patient Electronic
Access Objective for the Patient-Specific Education Objective, so that
the Patient-Specific Education Objective had the wrong description. We
proposed to correct this error by adopting the description of the
Patient-Specific Education Objective adopted under modified Stage 2 in
the 2015 EHR Incentive Programs final rule (80 FR 62809 and 80 FR
62815). We proposed this change would apply beginning with the
performance period in 2017.
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinicians into their EHR using the functions of
CEHRT.
Proposed change to the Objective: The MIPS eligible clinician
provides a summary of care record when transitioning or referring their
patient to another setting of care, receives or retrieves a summary of
care record upon the receipt of a transition or referral or upon the
first patient encounter with a new patient, and incorporates summary of
care information from other health care providers into their EHR using
the functions of CEHRT.
We inadvertently used the term ``health care clinician'' and
proposed to replace it with the more appropriate term ``health care
provider''. We proposed this change would apply beginning with the
performance period in 2017.
Health Information Exchange Measure: The MIPS eligible clinician
that transitions or refers their patient to another setting of care or
health care clinician (1) uses CEHRT to create a summary of care
record; and (2) electronically transmits such summary to a receiving
health care clinician for at least one transition of care or referral.
Proposed change to the measure: The MIPS eligible clinician that
transitions or refers their patient to another setting of care or
health care provider (1) uses CEHRT to create a summary of care record;
and (2) electronically transmits such summary to a receiving health
care provider for at least one transition of care or referral.
We inadvertently used the term ``health care clinician'' and
proposed to replace it with the more appropriate term ``health care
provider''. We proposed this change would apply beginning with the
performance period in 2017.
Denominator: Number of transitions of care and referrals during the
performance period for which the EP was the transferring or referring
health care clinician.
Proposed change to the denominator: Number of transitions of care
and referrals during the performance period for which the MIPS eligible
clinician was the transferring or referring health care provider. This
change reflects the change proposed to the Health Information Exchange
Measure replacing ``health care clinician'' with ``health care
provider.'' We also inadvertently referred to the EP in the description
and are replacing ``EP'' with ``MIPS eligible clinician.'' We proposed
this change would apply beginning with the performance period in 2017.
Objective: Medication Reconciliation.
Proposed Objective: We proposed to add a description of the
Medication Reconciliation Objective beginning with the CY 2017
performance period, which we inadvertently omitted from the CY 2017
Quality Payment Program proposed and final rules, as follows:
Proposed Objective: The MIPS eligible clinician who receives a
patient from another setting of care or provider of care or believes an
encounter is relevant performs medication reconciliation. This
description aligns with the objective adopted for Modified Stage 2 at
80 FR 62811.
Medication Reconciliation Measure: The MIPS eligible clinician
performs medication reconciliation for at least one transition of care
in which the patient is transitioned into the care of the MIPS eligible
clinician.
Numerator: The number of transitions of care or referrals
in the denominator where the following three clinical information
reconciliations were performed: Medication list, Medication allergy
list, and current problem list.
Proposed Modification to the Numerator.
Proposed Numerator: The number of transitions of care or referrals
in the denominator where medication reconciliation was performed.
We proposed to modify the numerator by removing medication list,
medication allergy list, and current problem list. These three criteria
were adopted for Stage 3 (80 FR 62862) but not for Modified Stage 2 (80
FR 62811). We proposed this change would apply beginning with the
performance period in 2017.
The following is a summary of the public comments received on the
``Advancing Care Information Objectives and Measures'' and the ``2017
and 2018 Advancing Care Information Transition Objectives and
Measures'' proposals and our responses:
Comment: Several commenters were confused by our proposal related
to specialized registries and active engagement option 3, production,
believing that the only way to receive credit for the Public Health
Agency and Clinical Data Registry Reporting Objective is through the
production option.
Response: MIPS eligible clinicians may fulfill the Public Health
Agency and Clinical Data Registry Reporting Objective or the Public
Health Reporting Objective through any of the active engagement options
as described at 80 FR 62818-62819: completed registration to submit
data; testing and validation; or production. Our proposal pertained to
MIPS eligible clinicians who choose to use option 3, production, for
specialized registries.
Comment: Several commenters supported the proposed definition of
timely for the Patient Electronic Access
[[Page 53676]]
Measure. One stated that the proposed definition supports practice
workflows where patient information may become available prior to a
weekend or holiday. This proposal would allow the necessary time for an
eligible clinician to review and ensure accurate information is made
available to patients.
Response: We appreciate the support for our proposal. We sought to
give MIPS eligible clinicians sufficient time to make information
available. We specified 4 business days so as not to include holidays
and weekends.
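For illustration, a short Python sketch follows that computes a 4-business-day window under the simplifying assumption that business days are Monday through Friday (a complete implementation would also skip Federal holidays); the dates used are hypothetical:

from datetime import date, timedelta

def timely_access_deadline(available_on, business_days=4):
    # Count forward 'business_days' weekdays from the date the information
    # became available to the MIPS eligible clinician.
    d, remaining = available_on, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:        # Monday=0 ... Friday=4; skip Saturday/Sunday
            remaining -= 1
    return d

# Information available on a Friday: the intervening weekend does not count,
# so the 4-business-day deadline falls on the following Thursday.
print(timely_access_deadline(date(2018, 1, 5)))  # 2018-01-05 is a Friday -> 2018-01-11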
Comment: In the interest of reducing administrative burden, a
commenter encouraged the alignment of the definition of ``timely'' for
the ``Provide Patient Access Measure'' in the Medicaid EHR Incentive
Program and the Quality Payment Program. For both programs, they
supported defining ``timely'' as follows: Providing access to health
information within 4 business days of the information being available
to the MIPS eligible clinician, as opposed to the 48-hour standard in
Stage 3 of the Medicaid EHR Incentive Program.
Response: We understand that there are two different definitions of
timely. We proposed 4 business days for MIPS because we believe it
provides an adequate timeframe for a new program and the clinicians who
may not have previously participated in the Medicare and Medicaid EHR
Incentive Programs. We may consider aligning the Medicaid EHR Incentive
Program definition in the future.
Comment: One commenter disagreed with our proposal related to the
Patient Electronic Access Objective and suggested that the definition
of timely access under the Health Insurance Portability and
Accountability Act (HIPAA) is appropriate. The commenter stated that
under the HIPAA Privacy Rule, a covered entity must act on an
individual's request for access no later than 30 calendar days after
receipt of the request.
Response: We disagree and believe that 4 business days will provide
MIPS eligible clinicians with an adequate amount of time to provide
their patients with electronic access to their health information. We
further note that the HIPAA timeframe relates to an individual's
request for their information and the Patient Electronic Access Measure
relates to information being made available regardless of whether a
request is made.
Comment: One commenter cautioned CMS of the unintended consequences
related to the proposed definition of providing ``timely'' access for
patients or their authorized representatives. The commenter stated that
the proposed definition of timely (4 business days) may result in the
inability of clinicians to achieve the base score, and thus, any
advancing care information performance category score.
Response: While we appreciate this concern, we believe that by
establishing the definition of timely as 4 business days, MIPS eligible
clinicians should have a sufficient amount of time to fulfill the
Patient Electronic Access Measure. We also note that a MIPS eligible
clinician need only provide timely access for one patient to achieve
the base score for the
advancing care information performance category.
Comment: Most commenters supported the proposed modifications to
the Advancing Care Information Objectives and Measures as reasonable
and appropriate. Another commenter stated that until an overhaul of the
advancing care information performance category is undertaken, they
support the modifications as proposed and urged CMS to finalize them as
described.
Response: We thank commenters for their support of the proposed
modifications and agree that these modifications should be finalized.
Comment: Some commenters suggested that we clarify that MIPS
eligible clinicians may report either the Advancing Care Information
Objectives and Measures or the Advancing Care Information Transition
Objectives and Measures using 2015 Edition or 2014 Edition CEHRT.
Response: For the 2018 performance period, MIPS eligible clinicians
will have the option to report the Advancing Care Information
Transition Objectives and Measures using 2014 Edition CEHRT, 2015
Edition CEHRT, or a combination of 2014 and 2015 Edition CEHRT, as long
as the EHR technology they possess can support the objectives and
measures to which they plan to attest. Similarly, MIPS eligible
clinicians will have the option to attest to the Advancing Care
Information Objectives and Measures using 2015 Edition CEHRT or a
combination of 2014 and 2015 Edition CEHRT, as long as their EHR
technology can support the objectives and measures to which they plan
to attest.
Final Action: After considering the public comments that we
received, we are finalizing our proposals as proposed with one
modification to the description of the Syndromic Surveillance Reporting
Measure: The MIPS eligible clinician is in active engagement with a
public health agency to submit syndromic surveillance data from an
urgent care setting.
Table 7--2018 Performance Period Advancing Care Information Performance Category Scoring Methodology Advancing
Care Information Objectives and Measures
----------------------------------------------------------------------------------------------------------------
Columns: 2018 advancing care information objective; 2018 advancing care information measure; Required/not required for base score (50%); Performance score (up to 90%); Reporting requirement
----------------------------------------------------------------------------------------------------------------
Protect Patient Health Security Risk Required.......... 0................. Yes/No Statement.
Information. Analysis.
Electronic Prescribing.......... e-Prescribing **.. Required.......... 0................. Numerator/
Denominator.
Patient Electronic Access....... Provide Patient Required.......... Up to 10%......... Numerator/
Access. Denominator.
Patient-Specific Not Required...... Up to 10%......... Numerator/
Education. Denominator.
Coordination of Care Through View, Download, or Not Required...... Up to 10%......... Numerator/
Patient Engagement. Transmit (VDT). Denominator.
Secure Messaging.. Not Required...... Up to 10%......... Numerator/
Denominator.
Patient-Generated Not Required...... Up to 10%......... Numerator/
Health Data. Denominator.
Health Information Exchange..... Send a Summary of Required.......... Up to 10%......... Numerator/
Care **. Denominator.
Request/Accept Required.......... Up to 10%......... Numerator/
Summary of Care Denominator.
**.
Clinical Not Required...... Up to 10%......... Numerator/
Information Denominator.
Reconciliation.
Public Health and Clinical Data Immunization Not Required...... 0 or 10% *........ Yes/No Statement.
Registry Reporting. Registry
Reporting.
Syndromic Not Required...... 0 or 10%*......... Yes/No Statement.
Surveillance
Reporting.
Electronic Case Not Required...... 0 or 10%*......... Yes/No Statement.
Reporting.
Public Health Not Required...... 0 or 10%*......... Yes/No Statement.
Registry
Reporting.
Clinical Data Not Required...... 0 or 10%*......... Yes/No Statement.
Registry
Reporting.
----------------------------------------------------------------------------------------------------------------
[[Page 53677]]
Bonus (up to 25%)
----------------------------------------------------------------------------------------------------------------
Report to one or more additional public health agencies or clinical data 5% bonus.......... Yes/No Statement.
 registries beyond the one identified for the performance score.
Report improvement activities using CEHRT............................... 10% bonus......... Yes/No Statement.
Report using only 2015 Edition CEHRT.................................... 10% bonus......... Based on measures
 submitted.
----------------------------------------------------------------------------------------------------------------
* A MIPS eligible clinician may earn 10 percent for each public health agency or clinical data registry to which
the clinician reports, up to a maximum of 10 percent under the performance score.
** Exclusions are available for these measures.
Table 8--2018 Performance Period Advancing Care Information Performance Category Scoring Methodology for 2018
Advancing Care Information Transition Objectives and Measures
----------------------------------------------------------------------------------------------------------------
Columns: 2018 advancing care information transition objective; 2018 advancing care information transition measure; Required/not required for base score (50%); Performance score (up to 90%); Reporting requirement
----------------------------------------------------------------------------------------------------------------
Protect Patient Health Security Risk Required.......... 0................. Yes/No Statement.
Information. Analysis.
Electronic Prescribing.......... E-Prescribing**... Required.......... 0................. Numerator/
Denominator.
Patient Electronic Access....... Provide Patient Required.......... Up to 20%......... Numerator/
Access. Denominator.
View, Download, or Not Required...... Up to 10%......... Numerator/
Transmit (VDT). Denominator.
Patient-Specific Education...... Patient-Specific Not Required...... Up to 10%......... Numerator/
Education. Denominator.
Secure Messaging................ Secure Messaging.. Not Required...... Up to 10%......... Numerator/
Denominator.
Health Information Exchange..... Health Required.......... Up to 20%......... Numerator/
Information** Denominator.
Exchange.
Medication Reconciliation....... Medication Not Required...... Up to 10%......... Numerator/
Reconciliation. Denominator.
Public Health Reporting......... Immunization Not Required...... 0 or 10%*......... Yes/No Statement.
Registry
Reporting.
Syndromic Not Required...... 0 or 10% *........ Yes/No Statement.
Surveillance
Reporting.
Specialized Not Required...... 0 or 10% *........ Yes/No Statement.
Registry
Reporting.
----------------------------------------------------------------------------------------------------------------
Bonus up to 15%
----------------------------------------------------------------------------------------------------------------
Report to one or more additional public health agencies or clinical data 5% bonus.......... Yes/No Statement.
registries beyond the one identified for the performance score.
Report improvement activities using CEHRT............................... 10% bonus......... Yes/No Statement.
----------------------------------------------------------------------------------------------------------------
* A MIPS eligible clinician may earn 10% for each public health agency or clinical data registry to which the
clinician reports, up to a maximum of 10% under the performance score.
** Exclusions are available for these measures.
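To illustrate how the structure of Tables 7 and 8 could be tallied, the following Python sketch uses hypothetical numerator/denominator counts; for simplicity it assumes straight proportional credit up to each measure's maximum, rather than the precise per-measure conversion set out in the scoring methodology:

def measure_points(numerator, denominator, max_points=10):
    # Proportional credit toward a measure's maximum percentage points.
    if denominator == 0:
        return 0.0
    return min(numerator / denominator, 1.0) * max_points


def performance_score(measures, cap=90):
    # 'measures' is a hypothetical list of (numerator, denominator, max_points)
    # tuples; the performance score is capped at 90 percentage points.
    return min(sum(measure_points(n, d, m) for n, d, m in measures), cap)


example = [(450, 500, 10),   # Provide Patient Access (Table 7, up to 10%)
           (120, 500, 10),   # View, Download, or Transmit (VDT)
           (300, 400, 20)]   # a 20% measure under Table 8
print(performance_score(example))  # 9.0 + 2.4 + 15.0 = 26.4

The base score (50 percent) and any applicable bonus percentages from the tables above would then be added to this performance score, subject to the cap on the total category score.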
To help readers identify the CEHRT requirements for the Advancing Care
Information Objectives and Measures, we are including Table 9, which
lists the 2015 Edition and 2014 Edition certification criteria required
to meet the objectives and measures.
Table 9--Advancing Care Information and Advancing Care Information Transition Objectives and Measures and
Certification Criteria for 2014 and 2015 Editions
----------------------------------------------------------------------------------------------------------------
Objective Measure 2015 Edition 2014 Edition
----------------------------------------------------------------------------------------------------------------
Objective: Protect Patient Health Information. Measure: Security Risk Analysis.
    2015 Edition: The requirements are a part of CEHRT specific to each certification criterion.
    2014 Edition: The requirements are included in the Base EHR Definition.
Objective: Electronic Prescribing. Measure: e-Prescribing.
    2015 Edition: Sec. 170.315(b)(3) (Electronic Prescribing); Sec. 170.315(a)(10) (Drug-Formulary and Preferred Drug List checks).
    2014 Edition: Sec. 170.314(b)(3) (Electronic Prescribing); Sec. 170.314(a)(10) (Drug-Formulary and Preferred Drug List checks).
[[Page 53678]]
Objective: Patient Electronic Access. Measure: Provide Patient Access.
    2015 Edition: Sec. 170.315(e)(1) (View, Download, and Transmit to 3rd Party); Sec. 170.315(g)(7) (Application Access--Patient Selection); Sec. 170.315(g)(8) (Application Access--Data Category Request); Sec. 170.315(g)(9) (Application Access--All Data Request). The three criteria combined are the ``API'' certification criteria.
    2014 Edition: Sec. 170.314(e)(1) (View, Download, and Transmit to 3rd Party).
Objective: Patient Electronic Access/Patient-Specific Education. Measure: Patient-Specific Education.
    2015 Edition: Sec. 170.315(a)(13) (Patient-specific Education Resources).
    2014 Edition: Sec. 170.314(a)(13) (Patient-specific Education Resources).
Objective: Coordination of Care Through Patient Engagement/Patient Electronic Access. Measure: View, Download, or Transmit (VDT).
    2015 Edition: Sec. 170.315(e)(1) (View, Download, and Transmit to 3rd Party); Sec. 170.315(g)(7) (Application Access--Patient Selection); Sec. 170.315(g)(8) (Application Access--Data Category Request); Sec. 170.315(g)(9) (Application Access--All Data Request). The three criteria combined are the ``API'' certification criteria.
    2014 Edition: Sec. 170.314(e)(1) (View, Download, and Transmit to 3rd Party).
Objective: Coordination of Care Through Patient Engagement. Measure: Secure Messaging.
    2015 Edition: Sec. 170.315(e)(2) (Secure Messaging).
    2014 Edition: Sec. 170.314(e)(3) (Secure Messaging).
Objective: Coordination of Care Through Patient Engagement. Measure: Patient-Generated Health Data.
    2015 Edition: Sec. 170.315(e)(3) (Patient Health Information Capture). Supports meeting the measure, but is NOT required to be used to meet the measure. The certification criterion is part of the CEHRT definition beginning in 2018.
    2014 Edition: N/A.
Objective: Health Information Exchange. Measure: Send a Summary of Care.
    2015 Edition: Sec. 170.315(b)(1) (Transitions of Care).
    2014 Edition: Sec. 170.314(b)(2) (Transitions of Care--Create and Transmit Transition of Care/Referral Summaries) or Sec. 170.314(b)(8) (Optional--Transitions of Care).
Objective: Health Information Exchange. Measure: Request/Accept Summary of Care.
    2015 Edition: Sec. 170.315(b)(1) (Transitions of Care).
    2014 Edition: Sec. 170.314(b)(1) (Transitions of Care--Receive, Display and Incorporate Transition of Care/Referral Summaries) or Sec. 170.314(b)(8) (Optional--Transitions of Care).
Objective: Health Information Exchange. Measure: Clinical Information Reconciliation.
    2015 Edition: Sec. 170.315(b)(2) (Clinical Information Reconciliation and Incorporation).
    2014 Edition: Sec. 170.314(b)(4) (Clinical Information Reconciliation) or Sec. 170.314(b)(9) (Optional--Clinical Information Reconciliation and Incorporation).
Objective: Health Information Exchange. Measure: Health Information Exchange.
    2015 Edition: N/A.
    2014 Edition: Sec. 170.314(b)(2) (Transitions of Care--Create and Transmit Transition of Care/Referral Summaries) or Sec. 170.314(b)(8) (Optional--Transitions of Care).
Objective: Medication Reconciliation. Measure: Medication Reconciliation.
    2015 Edition: N/A.
    2014 Edition: Sec. 170.314(b)(4) (Clinical Information Reconciliation) or Sec. 170.314(b)(9) (Optional--Clinical Information Reconciliation and Incorporation).
Objective: Public Health and Clinical Data Registry Reporting/Public Health Reporting. Measure: Immunization Registry Reporting.
    2015 Edition: Sec. 170.315(f)(1) (Transmission to Immunization Registries).
    2014 Edition: N/A.
Objective: Public Health and Clinical Data Registry Reporting/Public Health Reporting. Measure: Syndromic Surveillance Reporting.
    2015 Edition: Sec. 170.315(f)(2) (Transmission to Public Health Agencies--Syndromic Surveillance), Urgent Care Setting Only.
    2014 Edition: Sec. 170.314(f)(3) (Transmission to Public Health Agencies--Syndromic Surveillance) or Sec. 170.314(f)(7) (Optional--Ambulatory Setting Only--Transmission to Public Health Agencies--Syndromic Surveillance).
Objective: Public Health and Clinical Data Registry Reporting. Measure: Electronic Case Reporting.
    2015 Edition: Sec. 170.315(f)(5) (Transmission to Public Health Agencies--Electronic Case Reporting).
    2014 Edition: N/A.
Objective: Public Health and Clinical Data Registry Reporting. Measure: Public Health Registry Reporting.
    2015 Edition: EPs may choose one or more of the following: Sec. 170.315(f)(4) (Transmission to Cancer Registries); Sec. 170.315(f)(7) (Transmission to Public Health Agencies--Health Care Surveys).
    2014 Edition: Sec. 170.314(f)(5) (Optional--Ambulatory Setting Only--Cancer Case Information) and Sec. 170.314(f)(6) (Optional--Ambulatory Setting Only--Transmission to Cancer Registries).
Objective: Public Health and Clinical Data Registry Reporting. Measure: Clinical Data Registry Reporting.
    2015 Edition: No 2015 Edition health IT certification criteria at this time.
    2014 Edition: N/A.
[[Page 53679]]
Objective: Public Health Reporting. Measure: Specialized Registry Reporting.
    2015 Edition: N/A.
    2014 Edition: Sec. 170.314(f)(5) (Optional--Ambulatory Setting Only--Cancer Case Information) and Sec. 170.314(f)(6) (Optional--Ambulatory Setting Only--Transmission to Cancer Registries).
----------------------------------------------------------------------------------------------------------------
(c) Exclusions
We proposed to add exclusions to the measures associated with the
Health Information Exchange and Electronic Prescribing Objectives
required for the base score, as described below. We proposed these
exclusions would apply beginning with the CY 2017 performance period.
Proposed Exclusion for the E-Prescribing Objective and Measure
Advancing Care Information Objective and Measure
Objective: Electronic Prescribing.
Objective: Generate and transmit permissible prescriptions
electronically.
E-Prescribing Measure: At least one permissible prescription
written by the MIPS eligible clinician is queried for a drug formulary
and transmitted electronically using CEHRT.
Proposed Exclusion: Any MIPS eligible clinician who writes fewer
than 100 permissible prescriptions during the performance period.
2017 and 2018 Advancing Care Information Transition Objective and
Measure
Objective: Electronic Prescribing.
Objective: MIPS eligible clinicians must generate and transmit
permissible prescriptions electronically.
E-Prescribing Measure: At least one permissible prescription
written by the MIPS eligible clinician is queried for a drug formulary
and transmitted electronically using CEHRT.
Proposed Exclusion: Any MIPS eligible clinician who writes fewer
than 100 permissible prescriptions during the performance period.
Proposed Exclusion for the Health Information Exchange Objective and
Measures Advancing Care Information Objective and Measures
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinician into their EHR using the functions of
CEHRT.
Send a Summary of Care Measure: For at least one transition of care
or referral, the MIPS eligible clinician that transitions or refers
their patient to another setting of care or health care clinician (1)
creates a summary of care record using CEHRT; and (2) electronically
exchanges the summary of care record.
We note that we finalized our proposal to replace ``health care
clinician'' with ``health care provider'' in the objective and measure.
Proposed Exclusion: Any MIPS eligible clinician who transfers a
patient to another setting or refers a patient fewer than 100 times
during the performance period.
Request/Accept Summary of Care Measure: For at least one transition
of care or referral received or patient encounter in which the MIPS
eligible clinician has never before encountered the patient, the MIPS
eligible clinician receives or retrieves and incorporates into the
patient's record an electronic summary of care document.
Proposed Exclusion: Any MIPS eligible clinician who receives
transitions of care or referrals or has patient encounters in which the
MIPS eligible clinician has never before encountered the patient fewer
than 100 times during the performance period.
2017 and 2018 Advancing Care Information Transition Objective and
Measures
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinicians into their EHR using the functions of
CEHRT.
Health Information Exchange Measure: The MIPS eligible clinician
that transitions or refers their patient to another setting of care or
health care clinician (1) uses CEHRT to create a summary of care
record; and (2) electronically transmits such summary to a receiving
health care clinician for at least one transition of care or referral.
We note that we finalized our proposal to replace ``health care
clinician'' with ``health care provider'' in the objective and measure.
Proposed Exclusion: Any MIPS eligible clinician who transfers a
patient to another setting or refers a patient fewer than 100 times
during the performance period.
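To illustrate how the proposed thresholds operate, the following sketch
(Python; the function names and integer inputs are hypothetical
illustrations, not part of this rule) shows one way the ``fewer than
100'' tests described above could be evaluated for an individual MIPS
eligible clinician during a performance period.

    # Illustrative sketch only -- hypothetical function names; the "fewer
    # than 100" thresholds come from the proposed exclusions described above.
    EXCLUSION_THRESHOLD = 100

    def qualifies_for_eprescribing_exclusion(permissible_prescriptions: int) -> bool:
        # Exclusion applies if the clinician wrote fewer than 100 permissible
        # prescriptions during the performance period.
        return permissible_prescriptions < EXCLUSION_THRESHOLD

    def qualifies_for_send_summary_exclusion(transfers_and_referrals: int) -> bool:
        # Exclusion applies if the clinician transferred or referred patients
        # fewer than 100 times during the performance period.
        return transfers_and_referrals < EXCLUSION_THRESHOLD

    def qualifies_for_request_accept_exclusion(received_transitions_or_new_encounters: int) -> bool:
        # Exclusion applies if transitions or referrals received plus
        # first-time patient encounters number fewer than 100 during the
        # performance period.
        return received_transitions_or_new_encounters < EXCLUSION_THRESHOLD
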
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters overwhelmingly supported the addition of
exclusions for the Electronic Prescribing, Health Information Exchange,
Send a Summary of Care, and Request/Accept Summary of Care Measures.
Response: We appreciate the support of the proposed modifications,
and for the reasons discussed in the proposed rule, agree that it is
appropriate to establish these exclusions.
Comment: One commenter supported establishing exclusions but
recommended that the thresholds be set at fewer than 200 instead of
fewer than 100 as proposed.
Response: We disagree. We proposed the exclusions because these
measures may be outside a MIPS eligible clinician's licensing authority
or outside their scope of practice. By claiming the exclusion, the MIPS
eligible clinician is indicating that the measure is inapplicable to
them, because they have few patients or an insufficient number of actions
that would allow calculation of the measure. We proposed the fewer than
100 threshold to align with the exclusions for these measures that were
established for the Medicare and Medicaid EHR Incentive Programs. We
believe that the threshold of fewer than 100 will enable MIPS eligible
clinicians
[[Page 53680]]
who do not prescribe, transfer, or refer patients, or who rarely do so, to
claim the exclusion(s) and still fulfill the base score of the
advancing care information performance category. We believe a threshold
of 200 is too high, and believe that a MIPS eligible clinician who is
prescribing, transferring or referring more than 100 times during the
performance period is taking the actions described in the measures
often enough to be able to report on the measures for at least one
patient to fulfill the base score requirement.
Comment: One commenter was pleased to see CMS's intent to establish
an exclusion for the e-Prescribing Measure. They stated that as doctors
of chiropractic are statutorily prohibited in most states from
prescribing medication, this measure created a great deal of concern
over the last year that doctors of chiropractic would be adversely
affected by not reporting this measure.
Response: We did not intend to disadvantage chiropractors or other
types of clinicians who may be prohibited by law from prescribing
medication. We are establishing this exclusion for the e-Prescribing
Measure beginning with the 2017 performance period.
Comment: One commenter requested that if a MIPS eligible clinician
claims an exclusion for the base score for the Health Information
Exchange Measure, they should also be able to claim an exclusion for
the performance score for this measure so their total advancing care
information points are not adversely affected.
Response: We disagree. MIPS eligible clinicians have many options
to earn performance score points. If a measure is not applicable to a
clinician, they have the flexibility to select other performance score
measures on which to report.
Comment: One commenter asked if these exclusions are available if
reporting as a group.
Response: Yes, MIPS eligible clinicians may claim the exclusion if
they are reporting as a group. In the CY 2017 Quality Payment Program
final rule (81 FR 77215), we stated that the group will need to
aggregate data for all the individual MIPS eligible clinicians within
the group for whom they have data in CEHRT, and if an individual MIPS
eligible clinician meets the criteria to exclude a measure, their data
can be excluded from the calculation of that particular measure only.
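As a rough illustration of the aggregation approach restated above, the
sketch below (Python; the data layout and function name are hypothetical
assumptions) aggregates group-level totals for a single measure while
omitting, from that measure only, any clinician who claims the
applicable exclusion.

    # Illustrative sketch only -- hypothetical data layout. A clinician who
    # claims the exclusion for a measure is left out of that measure's group
    # calculation only; all other clinicians' CEHRT data remain aggregated
    # across the group.
    def aggregate_group_measure(clinicians: list, measure_id: str) -> tuple:
        numerator = 0
        denominator = 0
        for clinician in clinicians:
            if measure_id in clinician.get("claimed_exclusions", set()):
                continue  # excluded from this particular measure only
            data = clinician.get("measure_data", {}).get(measure_id)
            if data is not None:
                numerator += data["numerator"]
                denominator += data["denominator"]
        return numerator, denominator
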
Comment: One commenter questioned whether clinicians who qualify to
exclude these measures will be allowed to report on the measures. The
commenter encouraged CMS to consider allowing these clinicians to
exclude or to attest to the measures as they stated that both measures
are key objectives in the advancing care information performance
category and also stated that it will be beneficial to encourage
clinicians to attest to both measures, even if they qualify to exclude
them.
Response: MIPS eligible clinicians may claim these exclusions if
they qualify, although they do not have to claim the exclusions and may
report on the measures if they choose to do so.
Comment: One commenter requested that, for the Request/Accept
Summary of Care Measure, the exclusion be more closely tied to the
logic for the denominator of that measure, so that the exclusion is
specified in terms of new patients for whom a summary of care is
available.
Response: While we understand this concern, we disagree that the
exclusion should be limited to new patients. While we believe the
exclusion should include instances where the MIPS eligible clinician
has never before encountered the patient, we do not want to limit it to
just those instances.
Final Action: After consideration of the public comments, we are
finalizing these proposals as proposed. We note that the exclusions
apply beginning with the 2017 performance period.
(7) Additional Considerations
(a) 21st Century Cures Act
As we noted in the CY 2017 Quality Payment Program final rule (81
FR 77238), section 101(b)(1)(A) of the MACRA amended section
1848(a)(7)(A) of the Act to sunset the meaningful use payment
adjustment at the end of CY 2018. Section 1848(a)(7) of the Act
includes certain statutory exceptions to the meaningful use payment
adjustment under section 1848(a)(7)(A) of the Act. Specifically,
section 1848(a)(7)(D) of the Act exempts hospital-based EPs from the
application of the payment adjustment under section 1848(a)(7)(A) of
the Act. In addition, section 1848(a)(7)(B) of the Act provides that
the Secretary may, on a case-by-case basis, exempt an EP from the
application of the payment adjustment under section 1848(a)(7)(A) of
the Act if the Secretary determines, subject to annual renewal, that
compliance with the requirement for being a meaningful EHR user would
result in a significant hardship, such as in the case of an EP who
practices in a rural area without sufficient internet access. The last
sentence of section 1848(a)(7)(B) of the Act also provides that in no
case may an exemption be granted under subparagraph (B) for more than 5
years. The MACRA did not maintain these statutory exceptions for the
advancing care information performance category of the MIPS. Thus, we
had previously stated that the provisions under sections 1848(a)(7)(B)
and (D) of the Act are limited to the meaningful use payment adjustment
under section 1848(a)(7)(A) of the Act and do not apply in the context
of the MIPS.
Following the publication of the CY 2017 Quality Payment Program
final rule, the 21st Century Cures Act (Pub. L. 114-255) was enacted on
December 13, 2016. Section 4002(b)(1)(B) of the 21st Century Cures Act
amended section 1848(o)(2)(D) of the Act to state that the provisions
of sections 1848(a)(7)(B) and (D) of the Act shall apply to assessments
of MIPS eligible clinicians under section 1848(q) of the Act with
respect to the performance category described in subsection
(q)(2)(A)(iv) (the advancing care information performance category) in
an appropriate manner which may be similar to the manner in which such
provisions apply with respect to the meaningful use payment adjustment
made under section 1848(a)(7)(A) of the Act. As a result of this
legislative change, we believe that the general exceptions described
under sections 1848(a)(7)(B) and (D) of the Act are applicable under
the MIPS program. We included the proposals described below to
implement these provisions as applied to assessments of MIPS eligible
clinicians under section 1848(q) of the Act with respect to the
advancing care information performance category.
(i) MIPS Eligible Clinicians Facing a Significant Hardship
In the CY 2017 Quality Payment Program final rule (81 FR 77240
through 77243), we recognized that there may not be sufficient measures
applicable and available under the advancing care information
performance category to MIPS eligible clinicians facing a significant
hardship, such as those who lack sufficient internet connectivity, face
extreme and uncontrollable circumstances, lack control over the
availability of CEHRT, or do not have face-to-face interactions with
patients. We relied on section 1848(q)(5)(F) of the Act to establish a
final policy to assign a zero percent weighting to the advancing care
information performance category in the final score if there are not
sufficient measures and activities applicable and available to MIPS
eligible clinicians within the categories of significant hardship noted
above (81 FR 77243). Additionally, under the final policy (81 FR
77243), we did not impose a limitation on the total number of MIPS
payment years for which the advancing
[[Page 53681]]
care information performance category could be weighted at zero
percent, in contrast with the 5-year limitation on significant hardship
exceptions under the Medicare EHR Incentive Program as required by
section 1848(a)(7)(B) of the Act.
We did not propose substantive changes to this policy; however, as
a result of the changes in the law made by the 21st Century Cures Act
discussed above, we will not rely on section 1848(q)(5)(F) of the Act
and instead proposed to use the authority in the last sentence of
section 1848(o)(2)(D) of the Act for significant hardship exceptions
under the advancing care information performance category under MIPS.
Section 1848(o)(2)(D) of the Act, as amended by section 4002(b)(1)(B)
of the 21st Century Cures Act, states in part that the provisions of
section 1848(a)(7)(B) of the Act shall apply to assessments of MIPS
eligible clinicians with respect to the advancing care information
performance category in an appropriate manner which may be similar to
the manner in which such provisions apply with respect to the payment
adjustment made under section 1848(a)(7)(A) of the Act. We would assign
a zero percent weighting to the advancing care information performance
category in the MIPS final score for a MIPS payment year for MIPS
eligible clinicians who successfully demonstrate a significant hardship
through the application process. We would use the same categories of
significant hardship and application process as established in the CY
2017 Quality Payment Program final rule (81 FR 77240-77243). We would
automatically reweight the advancing care information performance
category to zero percent for a MIPS eligible clinician who lacks face-
to-face patient interaction and is classified as a non-patient facing
MIPS eligible clinician without requiring an application. If a MIPS
eligible clinician submits an application for a significant hardship
exception or is classified as a non-patient facing MIPS eligible
clinician, but also reports on the measures specified for the advancing
care information performance category, they would be scored on the
advancing care information performance category like all other MIPS
eligible clinicians, and the category would be given the weighting
prescribed by section 1848(q)(5)(E) of the Act regardless of the MIPS
eligible clinician's score.
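The reweighting approach described above can be summarized informally
as follows (Python; the function, its inputs, and the placeholder
weight are assumptions for illustration and are not a statement of the
statutory weighting).

    # Illustrative sketch only -- hypothetical names. A clinician who reports
    # the category's measures is scored and weighted like any other MIPS
    # eligible clinician; otherwise an approved significant hardship exception
    # (or non-patient facing status) reweights the category to zero percent.
    STANDARD_ACI_WEIGHT = 0.25  # placeholder for the section 1848(q)(5)(E) weighting

    def aci_category_weight(hardship_exception_approved: bool,
                            non_patient_facing: bool,
                            reported_aci_measures: bool) -> float:
        if reported_aci_measures:
            return STANDARD_ACI_WEIGHT
        if hardship_exception_approved or non_patient_facing:
            return 0.0
        return STANDARD_ACI_WEIGHT
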
As required under section 1848(a)(7)(B) of the Act, eligible
professionals were not granted significant hardship exceptions for the
payment adjustments under the Medicare EHR Incentive Program for more
than 5 years. We proposed not to apply the 5-year limitation under
section 1848(a)(7)(B) of the Act to significant hardship exceptions for
the advancing care information performance category under MIPS.
We solicited comments on the proposed use of the authority provided
in the 21st Century Cures Act in section 1848(o)(2)(D) of the Act as it
relates to application of significant hardship exceptions under MIPS
and the proposal not to apply a 5-year limit to such exceptions.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters supported our proposal not to apply the 5-
year limit for significant hardship exceptions. Some commenters stated
that the 5-year limit was arbitrary and should be eliminated. Other
commenters stated that the issue causing the hardship may not be
rectified within a 5-year period, and thus, could create undue burdens
on the clinicians in the future. Assigning a zero percent weighting to
the advancing care information performance category for those who
successfully demonstrate a significant hardship through the application
process would provide significant relief.
Response: We thank commenters for their support and agree it is
possible a clinician could experience a hardship for more than 5 years.
Comment: One commenter suggested that under the EHR Incentive
Program, a significant hardship exception would apply even if a health
care provider attested to meaningful use. They requested that we not
penalize eligible clinicians who choose to submit data, and to apply
the exception if they qualify.
Response: We disagree. Under the EHR Incentive Program, if a health
care provider submits a request for or is otherwise granted a
significant hardship exception, and also successfully attests to
meaningful use, we would consider that provider to be a meaningful EHR
user based on its attestation and thus would not apply the exception.
Under MIPS, we continue to believe that this approach is warranted. If
a MIPS eligible clinician chooses to submit data for the advancing care
information performance category, they will be scored. As we explained
in the CY 2017 Quality Payment Program final rule (81 FR 77241), we
believe there may not be sufficient advancing care information measures
applicable and available to MIPS eligible clinicians who experience a
significant hardship, such as insufficient internet connectivity,
extreme and uncontrollable circumstances, lack of control over the
availability of CEHRT, and lack of face-to-face patient interaction. We
believe that the submission of data indicates that there are sufficient
measures applicable and available for the MIPS eligible clinician, and
therefore, the significant hardship exception is not necessary.
Comment: One commenter recommended that CMS outline more specific
criteria for hardship exceptions because allowing too many exceptions
could hinder adoption of the changes required to create a more
efficient, value focused health care system. They suggested that
hardship exceptions only be available for unusual and unique clinician
circumstances.
Response: While we understand the concern expressed in this
comment, we decline to adopt narrower criteria for significant hardship
exceptions at this time. We understand that the transition to MIPS has
created challenges for MIPS eligible clinicians, and we believe the
significant hardship exception policy we proposed would encourage more
clinicians to participate successfully in the other performance
categories of MIPS.
Comment: One commenter questioned the proposed requirement to
reapply for a hardship exception on an annual basis and recommended
that exceptions should be granted for 2 years.
Response: We disagree and believe it is appropriate to limit a
hardship exception to 1 year. We want to encourage MIPS eligible
clinicians to adopt and use CEHRT and allowing multi-year exceptions
would not accomplish that goal. We believe that granting hardship
exceptions for 1 year at a time will enable clinicians to work harder
to successfully participate in the advancing care information
performance category while knowing that there may be the possibility of
receiving a significant hardship exception if it is needed and they
qualify. Furthermore, we have created a streamlined mechanism for the
submission of Quality Payment Program Hardship Exception Applications.
Applications that are submitted are reviewed on a rolling basis.
Comment: One commenter expressed concern about occupational
therapists participating in MIPS as they were never eligible for the
EHR Incentive Program. They stated that many clinicians in solo or very
small therapy practices cannot afford the expense of purchasing an EHR
documentation system. For this reason, the commenter requested that in
CY 2018 and future
[[Page 53682]]
years CMS grant them exceptions. Further, they recommended that CMS
dedicate staff to engage therapists in an effort to provide consistent
and targeted education regarding CEHRT requirements, applicable
electronic measures, and other new criteria so they may be successful
under the advancing care information performance category.
Response: We appreciate this comment and point out that under
section 1848(q)(1)(C)(i)(II) of the Act, additional eligible clinicians
such as occupational therapists could be considered MIPS eligible
clinicians starting in the third year of the program. If we decide to
add additional clinician types to the definition of a MIPS eligible
clinician, it would be proposed and finalized through notice and
comment rulemaking. We would support these clinicians and help them to
become successful program participants.
Comment: One commenter expressed concern over the proposal to not
apply the 5-year limit to significant hardship exceptions. They stated
that although it is important to acknowledge circumstances outside of a
clinician's control, it does not seem necessary to grant these hardship
exceptions in perpetuity.
Response: While we appreciate this comment, we disagree. We believe
that a variety of circumstances may arise, and the application of the
5-year limit could unfairly disadvantage MIPS eligible clinicians whose
circumstances warrant a hardship exception. For example, a MIPS
eligible clinician may lack control over the availability of CEHRT and
apply annually for and receive a hardship exception for 5 years. If
their practice is later significantly affected by a natural disaster,
such as a hurricane, they would be unable to receive a hardship
exception due to the 5-year limit, even though they would otherwise
qualify for the exception.
Comment: Commenters recommended adding additional hardship
exception categories such as those eligible for Social Security
benefits, those who have changed specialty taxonomy, those who practice
in Tribal health care facilities and those who are solo practitioners.
Response: While we appreciate the suggestions, we are declining to
adopt these suggestions at this time. We will monitor performance on
the advancing care information performance category to determine if
additional hardship exception categories are appropriate. As we have
previously stated, we do not believe that it is appropriate to reweight
this category solely on the basis of a MIPS eligible clinician's age or
Social Security status, and believe that while other factors such as
the lack of access to CEHRT or unforeseen environmental circumstances
may constitute a significant hardship, the age of a MIPS eligible
clinician alone or the preference to not obtain CEHRT does not. We note
that solo practitioners would be included in the small practice
significant hardship exception that we proposed at 82 FR 30076, so a
separate hardship exception category for them is unnecessary.
Final Action: After consideration of the comments we received, we
are finalizing our policy as proposed.
(ii) Significant Hardship Exception for MIPS Eligible Clinicians in
Small Practices
Section 1848(q)(2)(B)(iii) of the Act requires the Secretary to
give consideration to the circumstances of small practices (consisting
of 15 or fewer professionals) and practices located in rural areas and
geographic HPSAs in establishing improvement activities under MIPS. In
the CY 2017 Quality Payment Program final rule (81 FR 77187 through
77188), we finalized that for MIPS eligible clinicians and groups that
are in small practices or located in rural areas, or geographic health
professional shortage areas (HPSAs), to achieve full credit under the
improvement activities category, one high-weighted or two medium-
weighted improvement activities are required.
While there is no corresponding statutory provision for the
advancing care information performance category, we believe that
special consideration should also be available for MIPS eligible
clinicians in small practices. We proposed a significant hardship
exception for the advancing care information performance category for
MIPS eligible clinicians who are in small practices, under the
authority in section 1848(o)(2)(D) of the Act, as amended by section
4002(b)(1)(B) of the 21st Century Cures Act (see discussion of the
statutory authority for significant hardship exceptions in section
II.C.6.f.(7)(ii) of the proposed rule). We proposed that this hardship
exception would be available to MIPS eligible clinicians in small
practices as defined under Sec. 414.1305. We proposed in section
II.C.1.e. of the proposed rule, that CMS would make eligibility
determinations regarding the size of small practices for performance
periods occurring in 2018 and future years. We proposed to reweight the
advancing care information performance category to zero percent of the
MIPS final score for MIPS eligible clinicians who qualify for this
hardship exception. We proposed this exception would be available
beginning with the 2018 performance period and 2020 MIPS payment year.
We proposed a MIPS eligible clinician seeking to qualify for this
exception would submit an application in the form and manner specified
by us by December 31st of the performance period or a later date
specified by us. We also proposed MIPS eligible clinicians seeking this
exception must demonstrate in the application that there are
overwhelming barriers that prevent the MIPS eligible clinician from
complying with the requirements for the advancing care information
performance category. In accordance with section 1848(a)(7)(B) of the
Act, the exception would be subject to annual renewal. Under the
proposal in section II.C.6.f.(7)(a) of the proposed rule, the 5-year
limitation under section 1848(a)(7)(B) of the Act would not apply to
this significant hardship exception for MIPS eligible clinicians in
small practices.
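As a minimal illustration of the eligibility elements described above,
the sketch below (Python; the function names are hypothetical, and the
15-clinician cutoff reflects the small practice size referenced above)
checks whether a practice falls within the small practice definition
and whether an application is timely; it does not attempt to capture
the required demonstration of overwhelming barriers.

    # Illustrative sketch only -- hypothetical names; the size cutoff reflects
    # the small practice definition referenced above, and CMS may specify an
    # application deadline later than December 31 of the performance period.
    from datetime import date

    SMALL_PRACTICE_MAX_CLINICIANS = 15

    def may_seek_small_practice_exception(practice_size: int,
                                          application_date: date,
                                          performance_year: int) -> bool:
        is_small_practice = practice_size <= SMALL_PRACTICE_MAX_CLINICIANS
        timely = application_date <= date(performance_year, 12, 31)
        return is_small_practice and timely
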
While we would be making this significant hardship exception
available to small practices in particular, we are considering whether
other categories or types of clinicians might similarly require an
exception. We solicited comment on what those categories or types are,
why such an exception is required, and any data available to support
the necessity of the exception. We noted that supporting data would be
particularly helpful to our consideration of whether any additional
exceptions would be appropriate.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters supported and appreciated the significant
hardship exception that we proposed for MIPS eligible clinicians in
small practices. Many commenters stated that there are a number of
administrative and financial barriers that small practices would be
required to negotiate in order to be successful in the advancing care
information performance category.
Response: We appreciate this support and believe it is appropriate
to provide a significant hardship exception for MIPS eligible
clinicians in small practices, in part due to the barriers identified
by the commenters.
Comment: One commenter disagreed with our proposal to establish a
significant hardship exception for small practices. They stated that
while there are challenges that clinicians in small practices face in
implementing HIT, well-implemented HIT can add to a practice's capacity
to deliver high quality care. For a practice with limited support
staff, HIT can make it easier for clinicians to communicate with their
patients, know in real time about the
[[Page 53683]]
care their patients are receiving at other practices, and actively
manage the population health of their entire patient panel. They
recommended that CMS help and encourage small practices to adopt and
meaningfully use HIT, rather than sending the message that HIT is a
``significant hardship'' that small practices should consider avoiding.
Response: While we agree that the use of HIT has many benefits and
ideally all MIPS eligible clinicians would utilize CEHRT, we understand
it may not be feasible at this time for all practices. We hope that
over time more and more practices will realize the benefits of CEHRT
and interoperability with other clinicians and successfully adopt and
utilize CEHRT. We do offer no-cost technical assistance to small
practices through the Small, Underserved, and Rural Support initiative.
To find your local Small, Underserved, and Rural Support organization
please review the Technical Assistance Resource Guide on qpp.cms.gov,
or use the search feature on the ``Small Practices'' Web page.
Comment: Some commenters recommended other significant hardship
exceptions such as for MIPS eligible clinicians practicing in medically
underserved areas or MIPS eligible clinicians caring for a medically
underserved population.
Response: We are adopting several policies in this final rule with
comment period that will reduce its impact on small and solo practices,
including the creation of a hardship exception for MIPS eligible
clinicians in small practices. We will be monitoring participation in
MIPS and in the advancing care information performance category to
determine if it is appropriate to establish additional hardship
exceptions for clinicians in medically underserved areas and those who
serve underserved populations. Further, this final rule with comment
period's provisions are designed to encourage participation,
incentivize continuous improvement, and move participants on a glide
path to improved health care delivery in the Quality Payment Program.
Comment: One commenter applauded CMS for proposing a hardship
exception for small practices and requested that CMS provide more
assistance to small practices that are willing to try to integrate
information technology. They cited the invaluable assistance provided
by the Regional Extension Centers for the Medicare and Medicaid EHR
Incentive Programs.
Response: We do offer no-cost technical assistance to small
practices through the Small, Underserved, and Rural Support initiative.
This initiative is comprised of 11 professional and experienced
organizations who are ready to help clinicians in small practices and
rural areas prepare for and participate in the Quality Payment Program.
We try to ensure that priority is given to small practices in rural
locations, health professional shortage areas, and medically
underserved areas. The organizations within the Small, Underserved, and
Rural Support initiative can help clinicians determine if they are
included in the program, choose whether they will participate
individually or as a part of a group, determine their data submission
method, identify appropriate measures and activities, and much more. To
find your local Small, Underserved, and Rural Support organization
please review the Technical Assistance Resource Guide on qpp.cms.gov,
or use the search feature on the ``Small Practices'' Web page.
Comment: Commenters questioned the requirement that MIPS eligible
clinicians must demonstrate that there are overwhelming barriers that
prevent them from complying with the requirements of the advancing care
information performance category. They believe that such a requirement
is not clear or concise, and detracts from program goals.
Response: We understand these concerns; however, we believe that
adopting and implementing CEHRT may not be a significant hardship for
some small practices. For small practices experiencing a significant
hardship, we proposed that they demonstrate, through their application,
there are overwhelming barriers to complying with the requirements of
the advancing care information performance category. We do not
anticipate any additional burden associated with this requirement as we
do not intend to require documentation of the overwhelming barriers.
While we sincerely hope that MIPS eligible clinicians will be able to
successfully report for the advancing care information performance
category, we understand that small practices do have challenges that
would benefit from added flexibility and time to adopt CEHRT.
Comment: One commenter recommended expanding the definition of
small practice so that it is not limited to practices with 15 or fewer
clinicians. Another suggested a threshold of 18 clinicians.
Response: While we understand the concern that the proposed
definition could be under-inclusive, we are not modifying our proposal.
We believe it is more important to reduce burden by having one
definition of small practice for the MIPS program and choose to align
the definition for purposes of this significant hardship exception with
the definition under Sec. 414.1305.
Comment: Some commenters stated that practices located in rural
areas often experience many of the same barriers as small practices
such as financial limitations and workforce shortages. The effects of
these challenges are magnified because clinicians in rural areas serve
as critical access points for care and often provide a safety net for
vulnerable populations. Commenters stated that CMS includes both small
practices and practices located in rural areas in many of its policies
proposed to reduce burden, including the low-volume threshold and
flexibility under the improvement activities category, but neglected to
include practices located in rural areas in its hardship exception
proposal for advancing care information. Commenters believed this was
an oversight and urged CMS to create a hardship exception for
clinicians that practice in rural areas. Others requested that CMS
modify the proposed advancing care information hardship exception so
that it applies to both small practices and practices located in rural
areas. They also requested that CMS make this an automatic exemption so
as not to add to the burden of clinicians in these practices by
requiring them to demonstrate ``overwhelming barriers'' to compliance.
To recognize more advanced practices, the commenter suggested that CMS
could offer an opt-in that would allow small and rural practices that
believe they are prepared to participate in the advancing care
information performance category to do so.
Response: We disagree that the hardship exception should be
``automatic'' for small practices because we believe many small
practices will be able to successfully report on the advancing care
information performance category. For those small practices that wish to
apply for this significant hardship exception, we have simplified the
application process for hardship exceptions under MIPS as compared with
the process available for the Medicare EHR Incentive Program. We will
be monitoring participation in MIPS and in the advancing care
information performance category to determine if it is appropriate to
establish an additional hardship exception for clinicians practicing in
rural areas in future rulemaking.
Final Action: After consideration of the comments that we received,
we are adopting our policy as proposed.
[[Page 53684]]
(iii) Hospital-Based MIPS Eligible Clinicians
In the CY 2017 Quality Payment Program final rule (81 FR 77238
through 77240), we defined a hospital-based MIPS eligible clinician
under Sec. 414.1305 as a MIPS eligible clinician who furnishes 75
percent or more of his or her covered professional services in sites of
service identified by the Place of Service (POS) codes used in the
HIPAA standard transaction as an inpatient hospital (POS 21), on-campus
outpatient hospital (POS 22), or emergency room (POS 23) setting, based
on claims for a period prior to the performance period as specified by
CMS. We discussed our assumption that MIPS eligible clinicians who are
determined hospital-based do not have sufficient advancing care
information measures applicable to them, and we established a policy to
reweight the advancing care information performance category to zero
percent of the MIPS final score for the MIPS payment year in accordance
with section 1848(q)(5)(F) of the Act (81 FR 77240).
We did not propose substantive changes to this policy; however, as
a result of the changes in the law made by the 21st Century Cures Act
discussed above, we will not rely on section 1848(q)(5)(F) of the Act
and instead proposed to use the authority in the last sentence of
section 1848(o)(2)(D) of the Act for exceptions for hospital-based MIPS
eligible clinicians under the advancing care information performance
category. Section 1848(o)(2)(D) of the Act, as amended by section
4002(b)(1)(B) of the 21st Century Cures Act, states in part that the
provisions of section 1848(a)(7)(D) of the Act shall apply to
assessments of MIPS eligible clinicians with respect to the advancing
care information performance category in an appropriate manner which
may be similar to the manner in which such provisions apply with
respect to the payment adjustment made under section 1848(a)(7)(A) of
the Act. We would assign a zero percent weighting to the advancing care
information performance category in the MIPS final score for a MIPS
payment year for hospital-based MIPS eligible clinicians as previously
defined. A hospital-based MIPS eligible clinician would have the option
to report the advancing care information measures for the performance
period for the MIPS payment year for which they are determined
hospital-based. However, if a MIPS eligible clinician who is determined
hospital-based chooses to report on the advancing care information
measures, they would be scored on the advancing care information
performance category like all other MIPS eligible clinicians, and the
category would be given the weighting prescribed by section
1848(q)(5)(E) of the Act regardless of their score.
We proposed to amend Sec. 414.1380(c)(1) and (2) of the regulation
text to reflect this proposal.
We requested comments on the proposed use of the authority provided
in the 21st Century Cures Act in section 1848(o)(2)(D) of the Act as it
relates to hospital-based MIPS eligible clinicians.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: One commenter recommended that this policy be effective as
soon as possible.
Response: We note that this policy would apply beginning with the
first year of the Quality Payment Program, the 2017 performance period.
We did not propose substantive changes to our existing policy for
hospital-based MIPS eligible clinicians; rather, we proposed to rely on
different statutory authority for the policy.
Comment: The majority of commenters supported our proposal. Many
stated that there are insufficient measures applicable and available to
hospital-based MIPS eligible clinicians for the advancing care
information performance category of MIPS.
Response: We appreciate the support of commenters. We continue to
believe that hospital-based MIPS eligible clinicians may not have
control over the decisions that the hospital makes regarding the use of
health IT and CEHRT. These MIPS eligible clinicians therefore may have
no control over the type of CEHRT available, the way that the
technology is implemented and used, or whether the hospital continually
invests in the technology to ensure it is compliant with ONC
certification criteria.
Comment: Commenters urged us to be transparent and give MIPS
eligible clinicians timely notice well in advance of the start of the
performance period whether or not they are hospital-based and therefore
not required to participate in the advancing care information
performance category.
Response: We agree. We want to inform MIPS eligible clinicians as
soon as possible of their hospital-based status. Unfortunately, in this
first year of the Quality Payment Program, we were unable to provide
this information as soon as we had hoped. It became available in August
2017, but for future performance periods it is expected that the
information will be available sooner.
Comment: Commenters stated that under the current hospital-based
group definition, if less than 100 percent of the clinicians in a group
are considered hospital-based, then the group is expected to submit
advancing care information performance category data for the portion of
clinicians who are not hospital-based, even if that is only a small
percentage. Commenters stated they believe the intent of the group
reporting option is to ease the administrative burden of reporting on
behalf of an entire group. Commenters also stated it is unreasonable to
expect a group, where the majority of clinicians are hospital-based, to
parse out the minority of clinicians who are not hospital-based and to
report their advancing care information performance category data to
CMS.
They suggested that CMS adopt a policy whereby if the simple
majority of the group's clinicians meet the definition of hospital-
based, as individuals, then the group as a whole would be exempt from
the advancing care information performance category.
Response: We disagree and note that the group would not be expected
to parse out any data, but would instead report the aggregated data of
the entire group (hospital-based MIPS eligible clinicians included),
thus, there would be no additional burden to prepare the data for
reporting. We direct readers to the discussion of Scoring for MIPS
Eligible Clinicians in Groups in section II.6.f(c)(7) of this final
rule with comment period.
Final Action: After consideration of the comments we received, we
are finalizing our policy as proposed. We will amend Sec.
414.1380(c)(1) and (2) of the regulation text to reflect this policy.
(iv) Ambulatory Surgical Center (ASC)--Based MIPS Eligible Clinicians
Section 16003 of the 21st Century Cures Act amended section
1848(a)(7)(D) of the Act to provide that no payment adjustment may be
made under section 1848(a)(7)(A) of the Act for 2017 and 2018 in the
case of an eligible professional who furnishes substantially all of his
or her covered professional services in an ambulatory surgical center
(ASC). Section 1848(a)(7)(D)(iii) of the Act provides that
determinations of whether an eligible professional is ASC-based may be
made based on the site of service as defined by the Secretary or an
attestation, but shall be made without regard to any employment or
billing arrangement between the eligible professional and any other
supplier or provider of services. Section 1848(a)(7)(D)(iv) of the Act
provides that
[[Page 53685]]
the ASC-based exception shall no longer apply as of the first year that
begins more than 3 years after the date on which the Secretary
determines, through notice and comment rulemaking, that CEHRT
applicable to the ASC setting is available.
Under section 1848(o)(2)(D) of the Act, as amended by section
4002(b)(1)(B) of the 21st Century Cures Act, the ASC-based provisions
of section 1848(a)(7)(D) of the Act shall apply to assessments of MIPS
eligible clinicians under section 1848(q) of the Act with respect to
the advancing care information performance category in an appropriate
manner which may be similar to the manner in which such provisions
apply with respect to the payment adjustment made under section
1848(a)(7)(A) of the Act. We believe the proposals for ASC-based MIPS
eligible clinicians are an appropriate application of the provisions of
section 1848(a)(7)(D) of the Act to MIPS eligible clinicians. Under the
Medicare EHR Incentive Program an approved hardship exception exempted
an EP from the payment adjustment. We believe that weighting the
advancing care information performance category to zero percent is
similar in effect to an exemption from the requirements of that
performance category.
To align with our hospital-based MIPS eligible clinician policy, we
proposed to define at Sec. 414.1305 an ASC-based MIPS eligible
clinician as a MIPS eligible clinician who furnishes 75 percent or more
of his or her covered professional services in sites of service
identified by the Place of Service (POS) code 24 used in the HIPAA
standard transaction based on claims for a period prior to the
performance period as specified by us. We requested comments on this
proposal and solicited comments as to whether other POS codes should be
used to identify a MIPS eligible clinician's ASC-based status or if an
alternative methodology should be used. We noted that the ASC-based
determination will be made independent of the hospital-based
determination.
To determine a MIPS eligible clinician's ASC-based status, we
proposed to use claims with dates of service between September 1 of the
calendar year 2 years preceding the performance period through August
31 of the calendar year preceding the performance period, but in the
event it is not operationally feasible to use claims from this time
period, we would use a 12-month period as close as practicable to this
time period. We proposed this timeline to allow us to notify MIPS
eligible clinicians of their ASC-based status prior to the start of the
performance period and to align with the hospital-based MIPS eligible
clinician determination period. For the 2019 MIPS payment year, we
would not be able to notify MIPS eligible clinicians of their ASC-based
status until after the final rule with comment period is published,
which we anticipate would be later in 2017. We expect that we would
provide this notification through QPP.cms.gov.
For MIPS eligible clinicians who we determine are ASC-based, we
proposed to assign a zero percent weighting to the advancing care
information performance category in the MIPS final score for the MIPS
payment year. However, if a MIPS eligible clinician who is determined
ASC-based chooses to report on the Advancing Care Information Measures
or the Advancing Care Information Transition Measures, if applicable,
for the performance period for the MIPS payment year for which they are
determined ASC-based, we proposed they would be scored on the advancing
care information performance category like all other MIPS eligible
clinicians, and the performance category would be given the weighting
prescribed by section 1848(q)(5)(E) of the Act regardless of their
advancing care information performance category score.
We proposed these ASC-based policies would apply beginning with the
2017 performance period/2019 MIPS payment year.
We proposed to amend Sec. 414.1380(c)(1) and (2) of the regulation
text to reflect these proposals.
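For illustration only, the following sketch (Python; the claim record
layout, field names, and the use of claim-line counts as a proxy for
covered professional services are simplifying assumptions) outlines the
proposed ASC-based determination and its claims look-back window.

    # Illustrative sketch only -- claim records are represented as simple
    # dictionaries, and claim-line counts stand in for covered professional
    # services. The look-back window runs from September 1 of the calendar
    # year two years before the performance period through August 31 of the
    # preceding calendar year.
    from datetime import date

    def asc_determination_window(performance_year: int):
        return date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31)

    def is_asc_based(claims: list, performance_year: int, threshold: float = 0.75) -> bool:
        start, end = asc_determination_window(performance_year)
        in_window = [c for c in claims if start <= c["date_of_service"] <= end]
        if not in_window:
            return False
        asc_lines = sum(1 for c in in_window if c["pos_code"] == "24")
        # ASC-based if 75 percent or more of services were furnished in POS 24.
        return asc_lines / len(in_window) >= threshold

For example, for a 2018 performance period the sketch's window would run
from September 1, 2016 through August 31, 2017.
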
The following is a summary of the public comments received on these
proposals and our responses:
Comment: One commenter requested that the ASC-based MIPS eligible
clinician determination be added to the hospital-based determination so
that we would make a determination based on the sum of services
performed in an ASC, inpatient hospital, emergency room and on-campus
outpatient hospital as they believe that application is in line with
congressional intent.
Response: We disagree. We proposed that the ASC-based MIPS eligible
clinician determination be made separately from the hospital-based
determination because section 1848(a)(7)(D) of the Act, as amended by
section 16003 of the 21st Century Cures Act, distinguishes between
hospital-based and ASC-based clinicians, and continue to believe this
approach is most consistent with the statute. However, we note that the
commenter incorrectly described our hospital-based policy by stating we
determine a clinician's status based on one setting. To determine if a
MIPS eligible clinician is hospital-based, we currently consider the
percentage of covered professional services furnished in POS codes 21,
22, and 23 collectively and not separately.
Comment: One commenter urged CMS to allow ASC-based MIPS eligible
clinicians the ability to apply for a significant hardship exception to
reweight their advancing care information performance category score
even if their ASC-based status changes subsequent to the deadline to
apply for the significant hardship exception. The commenter stated that
these clinicians likely would not have control over the CEHRT in their
practice and should have their advancing care information performance
category score reweighted to zero.
Response: We note that we will make the determination about whether
a MIPS eligible clinician is ASC-based by looking at claims with dates of
service between September 1 of the calendar year 2 years preceding the
performance period through August 31 of the calendar year preceding the
performance period. It is our intent to make determinations prior to
the close of the submission period for significant hardship exceptions.
Final Action: After consideration of the comments we received, we
are finalizing our policy as proposed. We are amending Sec. 414.1305
and Sec. 414.1380(c)(1) and (2) to reflect this policy.
(v) Exception for MIPS Eligible Clinicians Using Decertified EHR
Technology
Section 4002(b)(1)(A) of the 21st Century Cures Act amended section
1848(a)(7)(B) of the Act to provide that the Secretary shall exempt an
eligible professional from the application of the payment adjustment
under section 1848(a)(7)(A) of the Act with respect to a year, subject
to annual renewal, if the Secretary determines that compliance with the
requirement for being a meaningful EHR user is not possible because the
CEHRT used by such professional has been decertified under ONC's Health
IT Certification Program. Section 1848(o)(2)(D) of the Act, as amended
by section 4002(b)(1)(B) of the 21st Century Cures Act, states in part
that the provisions of section 1848(a)(7)(B) of the Act shall apply to
assessments of MIPS eligible clinicians with respect to the advancing
care information performance category in an appropriate manner which
may be similar to the manner in which such provisions apply with
respect to the
[[Page 53686]]
payment adjustment made under section 1848(a)(7)(A) of the Act.
We proposed that a MIPS eligible clinician may demonstrate through
an application process that reporting on the measures specified for the
advancing care information performance category is not possible because
the CEHRT used by the MIPS eligible clinician has been decertified
under ONC's Health IT Certification Program. We proposed that if the
MIPS eligible clinician's demonstration is successful and an exception
is granted, we would assign a zero percent weighting to the advancing
care information performance category in the MIPS final score for the
MIPS payment year. In accordance with section 1848(a)(7)(B) of the Act,
the exception would be subject to annual renewal, and in no case may a
MIPS eligible clinician be granted an exception for more than 5 years.
We proposed this exception would be available beginning with the CY
2018 performance period and the 2020 MIPS payment year.
We proposed that a MIPS eligible clinician may qualify for this
exception if their CEHRT was decertified either during the performance
period for the MIPS payment year or during the calendar year preceding
the performance period for the MIPS payment year. In addition, we
proposed that the MIPS eligible clinician must demonstrate in their
application and through supporting documentation if available that the
MIPS eligible clinician made a good faith effort to adopt and implement
another CEHRT in advance of the performance period. We proposed a MIPS
eligible clinician seeking to qualify for this exception would submit
an application in the form and manner specified by us by December 31st
of the performance period, or a later date specified by us.
We proposed to amend Sec. 414.1380(c)(1) and (2) of the regulation
text to reflect these proposals.
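As an informal illustration of the eligibility window described above,
the sketch below (Python; the function and field names are
hypothetical, and the performance period is treated as the full
calendar year for simplicity) checks whether a decertification falls
within the proposed time frame and whether an application is timely.

    # Illustrative sketch only -- hypothetical names; the performance period
    # is treated as the full calendar year for simplicity, and CMS may specify
    # an application deadline later than December 31.
    from datetime import date

    def may_apply_for_decertification_exception(decertified_on: date,
                                                application_date: date,
                                                performance_year: int) -> bool:
        window_start = date(performance_year - 1, 1, 1)   # preceding calendar year
        window_end = date(performance_year, 12, 31)       # end of performance period
        in_window = window_start <= decertified_on <= window_end
        timely = application_date <= date(performance_year, 12, 31)
        return in_window and timely
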
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters supported the creation of a hardship exception
for clinicians whose EHR becomes decertified. One commenter stated that
the proposal was a sensible approach that supports clinicians who
encounter serious issues with EHR technology that are outside their
control.
Response: We appreciate the support of commenters for this
proposal.
Comment: One commenter recommended that clinicians be held harmless
in an automated fashion if their CEHRT becomes decertified. The
commenter expressed concern that the terms ``made a good faith effort''
and ``through supporting documentation'' are vague and requested
further guidance.
Response: While we understand the concern, MIPS eligible clinicians
frequently change EHR vendors, and we would not know that they are
using a product that has been decertified unless they notified CMS. We
have a fairly simple system through which MIPS eligible clinicians may
apply for an exception. Documentation does not need to be submitted
with the application, but MIPS eligible clinicians should retain
documentation that supports their request for an exception based on
decertified EHR technology.
Comment: One commenter recommended that we communicate the
availability of this hardship exception for clinicians who learn that
their CEHRT does not conform to the ONC certification requirements.
Response: We plan to add this decertification exception category to
the Quality Payment Program Hardship Exception Application on
qpp.cms.gov.
Comment: One commenter urged CMS to use at least a 2-year exemption
period and allow clinicians to seek additional time if necessary before
they are subject to the advancing care information performance category
reporting requirements. A few commenters stated it was more appropriate
to allow a 3-year exemption period because of the time necessary to
acquire a new system, move data, redesign workflows and train clinical
and administrative staffs.
Response: All exceptions for the advancing care information
performance category are approved for 1 year only, and the exception
application would be subject to annual renewal. We stated that a MIPS
eligible clinician may qualify for this exception if their CEHRT was
decertified either during the performance period for the MIPS payment
year or during the calendar year preceding the performance period for
the MIPS payment year. If the transition to a new CEHRT takes much
longer than expected for reasons beyond the clinician's control, they
could potentially apply for a significant hardship exception based on
extreme and uncontrollable circumstances.
Comment: Several commenters recommended expanding this proposal to
include CEHRT that has its certification suspended. Commenters
indicated a suspension would occur only when ONC identifies that CEHRT
poses a ``potential risk to public health or safety''.
Response: While we understand these concerns, section 1848(a)(7)(B)
of the Act, as amended by section 4002(b)(1)(A) of the 21st Century
Cures Act, provides authority for an exception in the event of
decertification, not suspension of certification.
Final Action: After consideration of the comments we received, we
are finalizing our policy as proposed. We are amending Sec.
414.1380(c)(1) and (2) of the regulation text to reflect this policy.
(b) Hospital-Based MIPS Eligible Clinicians
In the CY 2017 Quality Payment Program final rule (81 FR 77238
through 77240), we defined a hospital-based MIPS eligible clinician as a
MIPS eligible clinician who furnishes 75 percent or more of his or her
covered professional services in sites of service identified by the
Place of Service (POS) codes used in the HIPAA standard transaction as
an inpatient hospital (POS 21), on-campus outpatient hospital (POS 22),
or emergency room (POS 23) setting, based on claims for a period prior
to the performance period as specified by CMS.
We proposed to modify our policy to include covered professional
services furnished by MIPS eligible clinicians in an off-campus-
outpatient hospital (POS 19) in the definition of hospital-based MIPS
eligible clinician. POS 19 was developed in 2015 in order to capture
the numerous physicians who are paid for a portion of their services
in an ``off campus-outpatient hospital'' versus an on-campus outpatient
hospital (POS 22). We also believe that these MIPS eligible clinicians
would not typically have control of the development and maintenance of
their EHR systems, just like those who bill using POS 22. We proposed
to add POS 19 to our existing definition of a hospital-based MIPS
eligible clinician beginning with the performance period in 2018.
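For illustration, the following sketch (Python; the claim record layout
and the use of claim-line counts as a proxy for covered professional
services are simplifying assumptions) shows the hospital-based
determination with POS 19 added to the existing POS 21, 22, and 23
codes, which are considered collectively.

    # Illustrative sketch only -- claim records are simple dictionaries and
    # claim-line counts stand in for covered professional services.
    HOSPITAL_POS_CODES = {"19", "21", "22", "23"}  # off-campus outpatient,
                                                   # inpatient, on-campus
                                                   # outpatient, emergency room

    def is_hospital_based(claims: list, threshold: float = 0.75) -> bool:
        if not claims:
            return False
        hospital_lines = sum(1 for c in claims if c["pos_code"] in HOSPITAL_POS_CODES)
        # Hospital-based if 75 percent or more of services were furnished in
        # the listed POS codes, counted across all of them together.
        return hospital_lines / len(claims) >= threshold
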
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Commenters expressed their support for the addition of
off-campus-outpatient hospital (POS 19) to the definition of hospital-
based.
Response: We appreciate the support and believe it is appropriate
to add off-campus-outpatient hospital (POS 19) in the definition of
hospital-based MIPS eligible clinician because this setting is similar
to the on-campus outpatient hospital setting in that the MIPS eligible
clinicians lack control over CEHRT.
Comment: A few commenters urged CMS to automatically reweight the
advancing care information performance category for clinicians who
predominantly practice in settings such as Comprehensive Inpatient
[[Page 53687]]
Rehabilitation Facility (IRF; POS 61) and Skilled Nursing Facility
(SNF; POS 31) as clinicians who practice in these settings will
struggle to meet advancing care information requirements much like
inpatient hospital-based eligible clinicians. For example, they may not
have control over the decisions that the facilities make regarding the
use of health IT and CEHRT, and requirements under the Protect Patient
Health Information Objective to conduct a security risk analysis would
rely on the actions of the facilities, rather than the actions of the
MIPS eligible clinicians.
Response: We thank the commenters for bringing these settings to
our attention, and although we did not include them in our proposals,
we will monitor MIPS participation of clinicians who practice in these
settings to determine if they are able to meet the requirements of the
advancing care information performance category.
Final Action: After consideration of the public comments, we are
adopting our proposal as proposed.
(c) Scoring for MIPS Eligible Clinicians in Groups
In any of the situations described in the sections above, we would
assign a zero percent weighting to the advancing care information
performance category in the MIPS final score for the MIPS payment year
if the MIPS eligible clinician meets certain specified requirements for
this weighting. We noted that these MIPS eligible clinicians may choose
to submit Advancing Care Information Measures or the Advancing Care
Information Transition Measures, if applicable; however, if they choose
to report, they will be scored on the advancing care information
performance category like all other MIPS eligible clinicians and the
performance category will be given the weighting prescribed by section
1848(q)(5)(E) of the Act regardless of their advancing care information
performance category score. This policy includes MIPS eligible
clinicians choosing to report as part of a group or part of a virtual
group.
Groups as defined at Sec. 414.1310(e)(1) are required to aggregate
their performance data across the TIN in order for their performance to
be assessed as a group (81 FR 77058). Additionally, groups that elect
to have their performance assessed as a group will be assessed as a
group across all four MIPS performance categories. By reporting as part
of a group, MIPS eligible clinicians are subscribing to the data
reporting and scoring requirements of the group. We noted that the data
submission criteria for groups reporting advancing care information
performance category described in the CY 2017 Quality Payment Program
final rule (81 FR 77215) state that group data should be aggregated for
all MIPS eligible clinicians within the group. This includes those MIPS
eligible clinicians who may qualify for a zero percent weighting of the
advancing care information performance category due to the
circumstances as described above, such as a significant hardship or
other type of exception, hospital-based or ASC-based status, or certain
types of non-physician practitioners (NPs, PAs, CNSs, and CRNAs). If
these MIPS eligible clinicians report as part of a group or virtual
group, they will be scored on the advancing care information
performance category like all other MIPS eligible clinicians and the
performance category will be given the weighting prescribed by section
1848(q)(5)(E) of the Act regardless of the group's advancing care
information performance category score.
The following is a summary of the public comments received and our
responses:
Comment: One commenter urged CMS not to finalize this policy and
instead to reweight the advancing care information category to zero
percent for any group or virtual group in which the majority of
individual clinicians would be exempt from scoring in that category.
Another commenter suggested that groups should have the option to
include or to not include data from non-patient facing and hospital-
based MIPS eligible clinicians in their aggregated advancing care
information performance category data.
Response: We did not propose any changes to our policy related to
MIPS eligible clinicians in groups. We were simply restating the policy
finalized for groups reporting data for the advancing care information
performance category as described in the CY 2017 Quality Payment
Program final rule (81 FR 77215) that group data should be aggregated
for all MIPS eligible clinicians within the group. This includes those
MIPS eligible clinicians who may qualify for a zero percent weighting
of the advancing care information performance category based on a
significant hardship or other type of exception, hospital-based or ASC-
based status, or certain types of non-physician practitioners (NPs,
PAs, CNSs, and CRNAs). Our policy is that 100 percent of the MIPS eligible
clinicians in the group must qualify for a zero percent weighting in
order for the advancing care information performance category to be
reweighted in the final score.
Comment: One commenter requested clarification as to how the
advancing care information performance category of MIPS applies to
group reporting. Specifically, the commenter stated that CMS'
regulations and guidance are unclear as to whether it is permissible
for a MIPS eligible clinician who participates in group reporting to
not utilize CEHRT without disqualifying the entire group from
attempting to report successfully on the advancing care information
performance category. Another commenter asked if groups are able to
report their advancing care information data by aggregating data for
the entire TIN and including a denominator value only for the patients
who were seen in a location with the use of CEHRT, or if the whole
group would receive a zero for the advancing care information
performance category because not all MIPS eligible clinicians in the
group use CEHRT.
Response: In the CY 2017 Quality Payment Program final rule (81 FR
77215), we stated that the group will need to aggregate data for all
the individual MIPS eligible clinicians within the group for whom they
have data in CEHRT. The group should submit the data that they have in
CEHRT and exclude data collected in a non-certified EHR system.
While we do not expect that every MIPS eligible clinician in the group
will have access to CEHRT, or that every measure will apply to every
clinician in the group, only those data contained in CEHRT should be
reported for the advancing care information performance category.
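As an illustrative sketch only (the field names are hypothetical and
not drawn from this rule), the aggregation approach described above
could be represented as follows, with only data captured in CEHRT
contributing to the group's numerators and denominators:

    def aggregate_group_measure(clinician_measure_data):
        # clinician_measure_data: one entry per MIPS eligible clinician in the TIN,
        # each a dict with 'in_cehrt' (bool), 'numerator', and 'denominator'
        numerator = sum(d["numerator"] for d in clinician_measure_data if d["in_cehrt"])
        denominator = sum(d["denominator"] for d in clinician_measure_data if d["in_cehrt"])
        return numerator, denominator  # data outside CEHRT is excluded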
We will take these comments into consideration and may address the
issues raised in future rulemaking.
(d) Timeline for Submission of Reweighting Applications
In the CY 2017 Quality Payment Program final rule (81 FR 77240-
77243), we established the timeline for the submission of applications
to reweight the advancing care information performance category in the
MIPS final score to align with the data submission timeline for MIPS.
We established that all applications for reweighting the advancing care
information performance category be submitted by the MIPS eligible
clinician or designated group representative in the form and manner
specified by us. All applications may be submitted on a rolling basis,
but must be received by us no later than the close of the submission
period for the relevant performance period, or a later date specified
by us. An application would need to be submitted annually to be
considered for reweighting each year.
[[Page 53688]]
The Quality Payment Program Exception Application will be used to
apply for the following exceptions: Insufficient Internet Connectivity;
Extreme and Uncontrollable Circumstances; Lack of Control over the
Availability of CEHRT; Decertification of CEHRT; and Small Practice.
We proposed to change the submission deadline for the application
as we believe that aligning the data submission deadline with the
reweighting application deadline could disadvantage MIPS eligible
clinicians. We proposed to change the submission deadline for the CY
2017 performance period to December 31, 2017, or a later date specified
by us. We believe this change would help MIPS eligible clinicians by
allowing them to learn whether their application is approved prior to
the data submission deadline for the CY 2017 performance period, March
31, 2018. We noted that if a MIPS eligible clinician submits data for
the advancing care information performance category after an
application has been submitted, the data would be scored, the
application would be considered voided and the advancing care
information performance category would not be reweighted.
We further proposed that the submission deadline for the 2018
performance period will be December 31, 2018, or a later date as
specified by us. We believe this would help MIPS eligible clinicians by
allowing them to learn whether their application is approved prior to
the data submission deadline for the CY 2018 performance period, March
31, 2019.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: One commenter requested that MIPS eligible clinicians be
able to submit applications throughout the performance period and did
not support the change to the application deadline. Another commenter
suggested that MIPS eligible clinicians should be able to submit their
applications throughout the performance period and receive a timely
response from CMS.
Response: We agree that MIPS eligible clinicians should be able to
submit applications throughout the performance period. Under our
proposal, MIPS eligible clinicians could submit applications anytime
during CY 2017, which is the performance period. We believe that if the
application submission period is open during the data submission
period, MIPS eligible clinicians may not know whether their application
is approved prior to the data submission period, and thus we proposed
to change the application submission deadline to align with the end of
the performance period.
Comment: Commenters supported the change in the application
submission deadline because it will reduce the reporting burden for
those who are approved for a hardship exception.
Response: We appreciate the support for our proposal.
Comment: One commenter recommended that the submission deadline for
hardship exceptions be changed to July 31, 2018, as they stated that the
inclusion of the phrase ``or at a later date specified by us''
indicates that CMS acknowledges that the December 31 deadline may not
be appropriate.
Response: We disagree. We believe that it is appropriate to align
the submission period for hardship exception applications with the
performance period so that MIPS eligible clinicians will know whether
their application was approved prior to the MIPS data submission
deadline.
Final Action: After consideration of the public comments, we are
adopting our policy as proposed. We note that the submission of Quality
Payment Program Hardship Exception Applications began during the 2017
performance period (in August 2017) and will close at the end of
calendar year 2017.
g. APM Scoring Standard for MIPS Eligible Clinicians in MIPS APMs
(1) Overview
Under section 1848(q)(1)(C)(ii)(I) of the Act, Qualifying APM
Participants (QPs) are not MIPS eligible clinicians and are thus
excluded from MIPS reporting requirements and payment adjustments.
Similarly, under section 1848(q)(1)(C)(ii)(II) of the Act, Partial
Qualifying APM Participants (Partial QPs) are also not MIPS eligible
clinicians unless they opt to report and be scored under MIPS. All
other MIPS eligible clinicians, including those participating in MIPS
APMs, are subject to MIPS reporting requirements and payment
adjustments unless they are excluded on another basis such as being
newly enrolled in Medicare or not exceeding the low volume threshold.
In the CY 2017 Quality Payment Program final rule, we finalized the
APM scoring standard, which is designed to reduce reporting burden for
participants in certain APMs by minimizing the need for them to make
duplicative data submissions for both MIPS and their respective APMs
(81 FR 77246 through 77269, 77543). We also sought to ensure that MIPS
eligible clinicians in APM Entities that participate in certain types
of APMs that assess their participants on quality and cost are assessed
as consistently as possible across MIPS and their respective APMs.
Given that many APMs already assess their participants on cost and
quality of care and require engagement in certain improvement
activities, we believe that without the APM scoring standard,
misalignments could be quite common between the evaluation of
performance under the terms of the APM and evaluation of performance on
measures and activities under MIPS.
In the CY 2017 Quality Payment Program final rule, we identified
the types of APMs for which the APM scoring standard would apply as
MIPS APMs (81 FR 77249). We finalized at Sec. 414.1370(b) that to be a
MIPS APM, an APM must satisfy the following criteria: (1) APM Entities
participate in the APM under an agreement with CMS or by law or
regulation; (2) the APM requires that APM Entities include at least one
MIPS eligible clinician on a Participation List; (3) the APM bases
payment incentives on performance (either at the APM Entity or eligible
clinician level) on cost/utilization and quality measures; and (4) the
APM is not either a new APM for which the first performance period
begins after the first day of the MIPS performance period for the year
or an APM in the final year of operation for which the APM scoring
standard is impracticable. We specified that we will post the list of
MIPS APMs prior to the first day of the MIPS performance period for
each year, though we expect that any new models would likely be
announced 2 or more months before the start of the performance period
for a year (81 FR 77250). We finalized in our regulations at Sec.
414.1370(b) that for a new APM to be a MIPS APM, its first performance
period must start on or before the first day of the MIPS performance
period. A list of MIPS APMs is available at www.qpp.cms.gov.
We also note that while it is possible to be both a MIPS APM and an
Advanced APM, the criteria for the two are distinct, and a
determination that an APM is an Advanced APM does not guarantee that it
will be a MIPS APM and vice versa. We refer eligible clinicians to our
Web site, qpp.cms.gov, for more information about participating in MIPS
APMs and Advanced APMs.
We established in the regulations at Sec. 414.1370(c) that the
MIPS performance period under Sec. 414.1320 of our regulations applies
for the APM scoring standard.
We finalized that under Sec. 414.1370(f) of our
regulations, for the APM scoring standard, MIPS eligible clinicians
will be scored at the APM
[[Page 53689]]
Entity group level, and each MIPS eligible clinician will receive the
APM Entity group's final score. The MIPS payment adjustment is applied
at the TIN/NPI level for each of the MIPS eligible clinicians in the
APM Entity. The MIPS final score comprises the four MIPS
performance category scores, as described in our regulation at Sec.
414.1370(g): Quality, cost, improvement activities, and advancing care
information. Both the Medicare Shared Savings Program and Next
Generation ACO Model are MIPS APMs for the 2017 performance period. For
these two MIPS APMs, in accordance with our regulations at Sec.
414.1370(h), the MIPS performance category scores are weighted as
follows: Quality at 50 percent; cost at zero percent; improvement
activities at 20 percent; and advancing care information at 30 percent
of the final score. For all other MIPS APMs for the 2017 performance
period, quality and cost are each weighted at zero percent, improvement
activities at 25 percent, and advancing care information at 75 percent
of the final score.
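For illustration only, the arithmetic of these 2017 performance
category weights can be expressed as follows; the category scores shown
are hypothetical and the sketch is not part of the scoring regulations:

    WEB_INTERFACE_WEIGHTS = {"quality": 0.50, "cost": 0.00,
                             "improvement_activities": 0.20,
                             "advancing_care_information": 0.30}
    OTHER_MIPS_APM_WEIGHTS_2017 = {"quality": 0.00, "cost": 0.00,
                                   "improvement_activities": 0.25,
                                   "advancing_care_information": 0.75}

    def final_score(category_scores, weights):
        # category_scores: performance category scores on a 0-100 scale
        return sum(category_scores[name] * weight for name, weight in weights.items())

    # Example (hypothetical scores): quality 80, improvement activities 100,
    # advancing care information 90, cost not scored
    example = {"quality": 80, "cost": 0, "improvement_activities": 100,
               "advancing_care_information": 90}
    # final_score(example, WEB_INTERFACE_WEIGHTS) evaluates to 87.0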
In the CY 2018 Quality Payment Program proposed rule, for the APM
scoring standard, we proposed to: Amend Sec. 414.1370(e) to add an APM
Entity group assessment date for MIPS eligible clinicians in full TIN
APMs; continue to weight the cost performance category at zero and, in
alignment with that proposal, to not take improvement into account for
the cost performance category for 2020 and subsequent years; add the
CAHPS for ACOs survey to the list of Shared Savings Program and Next
Generation ACO Model quality measures that are used to calculate the
MIPS APM quality performance category score beginning with the 2018
performance period; define ``Other MIPS APMs'' under Sec. 414.1305;
establish a separate MIPS APM list of quality measures for each Other
MIPS APM and use that list for scoring in the quality performance
category; calculate the MIPS quality performance category score for
Other MIPS APMs using the APM-specific quality measures and score only
those measures that are tied to payment as described under the terms of
the APM, are available for scoring near the close of the MIPS
submission period, have a minimum of 20 cases available for reporting
and have an available benchmark; and add scoring for quality
improvement to the MIPS APM quality performance category for all MIPS
APMs beginning in 2018. We also proposed a quality scoring methodology
for Other MIPS APMs beginning in the 2018 MIPS performance period and
described the scoring methodology for quality improvement for Other
MIPS APMs. We proposed to align the MIPS performance category weights
for Other MIPS APMs with those used for Web Interface reporter APMs
under the APM scoring standard; and proposed policies to address
situations where a MIPS eligible clinician qualifies for reweighting to
zero percent in the advancing care information performance category. We
also proposed, beginning with the 2018 performance period, to provide
MIPS eligible clinicians scored using the APM scoring standard with
performance feedback as specified under section 1848(q)(12) of the Act
for the quality, advancing care information, and improvement activities
performance categories to the extent data are available (82 FR 30080
through 30091). We discuss these final policies in this section of this
final rule with comment period.
In reviewing these proposals, we reminded readers that the APM
scoring standard is built upon regular MIPS but provides for special
policies to address the unique circumstances of MIPS eligible
clinicians who are in APM Entities participating in MIPS APMs. For the
cost, improvement activities, and advancing care information
performance categories, unless a separate policy has been established
or is being proposed for the APM scoring standard, the generally
applicable MIPS policies would be applicable to the APM scoring
standard. Additionally, unless we included a proposal to adopt a unique
policy for the APM scoring standard, we proposed to adopt the same
generally applicable MIPS policies proposed elsewhere in the CY 2018
Quality Payment Program proposed rule and would treat the APM Entity
group as the group for purposes of MIPS. For the quality performance
category, however, the APM scoring standard we proposed is presented as
a separate, unique standard, and therefore, generally applicable MIPS
policies would not be applied to the quality performance category under
the APM scoring standard unless specifically stated. We sought comment
on whether there may be potential conflicts or inconsistencies between
the generally applicable MIPS policies and those proposed under the APM
scoring standard, particularly where these could impact our goals to
reduce duplicative and potentially incongruous reporting requirements
and performance evaluations that could undermine our ability to test or
evaluate MIPS APMs, or whether certain generally applicable MIPS
policies should be made explicitly applicable to the APM scoring
standard.
(2) Assessment Dates for Inclusion of MIPS Eligible Clinicians in APM
Entity Groups Under the APM Scoring Standard
In the CY 2017 Quality Payment Program final rule, we specified in
our regulations at Sec. 414.1370(e) that the APM Entity group for
purposes of scoring under the APM scoring standard is determined in the
manner prescribed at Sec. 414.1425(b)(1), which provides that eligible
clinicians who are on a Participation List on at least one of three
dates (March 31, June 30, and August 31) would be considered part of
the APM Entity group. Under these regulations, MIPS eligible clinicians
who are not on a Participation List on one of these three assessment
dates are not scored under the APM scoring standard. Instead, they
would need to submit data to MIPS through one of the MIPS data
submission mechanisms and their performance would be assessed either as
individual MIPS eligible clinicians or as a group according to the
generally applicable MIPS criteria.
We stated that we will continue to use the three assessment dates
of March 31, June 30, and August 31 to identify MIPS eligible
clinicians who are on an APM Entity's Participation List and determine
the APM Entity group that is used for purposes of the APM scoring
standard (82 FR 30081). In addition, beginning in the 2018 performance
period, we proposed to add a fourth assessment date of December 31 to
identify those MIPS eligible clinicians who participate in a full TIN
APM. We proposed to define full TIN APM at Sec. 414.1305 to mean an
APM where participation is determined at the TIN level, and all
eligible clinicians who have assigned their billing rights to a
participating TIN are therefore participating in the APM. An example of
a full TIN APM is the Shared Savings Program, which requires all
individuals and entities that have reassigned their right to receive
Medicare payment to the TIN of an ACO participant to participate in the
ACO and comply with the requirements of the Shared Savings Program.
If an eligible clinician elects to reassign their billing rights to
a TIN participating in a full TIN APM, the eligible clinician is
necessarily participating in the full TIN APM. We proposed to add this
fourth date of December 31 only for MIPS eligible clinicians in a full
TIN APM, and only for purposes of applying the APM scoring standard. We
did not propose to use this additional assessment date of December 31
for purposes of QP
[[Page 53690]]
determinations. Therefore, we proposed to amend Sec. 414.1370(e) to
identify the four assessment dates that would be used to identify the
APM Entity group for purposes of the APM scoring standard, and to
specify that the December 31 date would be used only to identify MIPS
eligible clinicians on the APM Entity's Participation List for a MIPS
APM that is a full TIN APM in order to add them to the APM Entity group
that is scored under the APM scoring standard.
We proposed to use this fourth assessment date of December 31 to
extend the APM scoring standard to only those MIPS eligible clinicians
participating in MIPS APMs that are full TIN APMs, ensuring that a MIPS
eligible clinician who joins the full TIN APM between August 31 and
December 31 in the performance period would be scored under the APM
scoring standard. We considered proposing to use the fourth assessment
date more broadly for all MIPS APMs. However, we noted that we believe
that approach would have allowed MIPS eligible clinicians to
inappropriately leverage the fourth assessment date to avoid reporting
and scoring under the generally applicable MIPS scoring standard when
they were part of the MIPS APM for only a very limited portion of the
performance period. That is, for MIPS APMs that allow split TIN
participation, we were concerned that it would be possible for MIPS
eligible clinicians to briefly join a MIPS APM principally in order to
benefit from the APM scoring standard, despite having limited
opportunity to contribute to the APM Entity's performance in the MIPS
APM. In contrast, we believe MIPS eligible clinicians would be less
likely to join a full TIN APM principally to avail themselves of the
APM scoring standard, since doing so would require either that the
entire TIN join the MIPS APM or the administratively burdensome act of
the MIPS eligible clinician reassigning their billing rights to the TIN
of an entity participating in the full TIN APM.
We will continue to use only the three dates of March 31, June 30,
and August 31 to determine, based on Participation Lists, the MIPS
eligible clinicians who participate in MIPS APMs that are not full TIN
APMs. We sought comment on the proposed addition of the fourth date of
December 31 to assess Participation Lists to identify MIPS eligible
clinicians who participate in MIPS APMs that are full TIN APMs for
purposes of the APM scoring standard.
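The assessment-date logic proposed above can be sketched as follows
for illustration; the participation list lookup is a hypothetical data
structure, not a description of CMS systems:

    STANDARD_SNAPSHOT_DATES = ("03-31", "06-30", "08-31")
    FULL_TIN_SNAPSHOT_DATE = "12-31"  # applies only to full TIN APMs, and only
                                      # for purposes of the APM scoring standard

    def in_apm_entity_group(clinician_id, participation_list_by_date, is_full_tin_apm):
        dates = STANDARD_SNAPSHOT_DATES
        if is_full_tin_apm:
            dates = dates + (FULL_TIN_SNAPSHOT_DATE,)
        return any(clinician_id in participation_list_by_date.get(date, set())
                   for date in dates)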
The following is a summary of the public comments received on this
proposal and our responses:
Comment: CMS received many comments in support of our proposal to
add the December 31 snapshot date for full TIN APMs.
Response: We thank commenters for their support of our proposal.
Comment: Several commenters recommended that CMS make the fourth
snapshot date policy retroactive so that it would apply for the 2017
performance period.
Response: We understand commenters' desire for the proposed policy
to be applied retroactively to the 2017 performance period. However, in
consideration of the requirement that we give the public notice of
proposed changes and an opportunity to comment on them, we refrain from
making retroactive regulatory changes unless there is specific and
articulable authority and a reason why it is necessary and appropriate
to do so; and we do not believe we have those in this situation.
Comment: A few commenters requested a switch to 90-day assessment
periods to determine participation in MIPS APMs.
Response: We believe it is simplest to generally align the snapshot
dates used for purposes of the APM scoring standard for MIPS APMs with
those used to identify eligible clinicians in Advanced APMs for
purposes of QP determinations.
We anticipate that the current snapshot policy will identify the
vast majority of MIPS eligible clinicians in MIPS APMs at the first
snapshot, and then at subsequent snapshot dates will add those MIPS
eligible clinicians who join a MIPS APM later in the year but still
have a significant opportunity to contribute to the APM Entity's
performance in the MIPS APM. As such, we believe this policy would more
appropriately identify MIPS eligible clinicians for purposes of
applying the APM scoring standard.
Comment: A few commenters suggested adding the fourth snapshot date
of December 31 for QP determinations as well.
Response: We reiterate that we did not propose to add a fourth
snapshot date of December 31 for QP determinations and we will not
adopt a policy to do so in this rulemaking. We believe that the
snapshot policy that we finalized in the CY 2017 Quality Payment
Program final rule will allow for us to make QP determinations such
that eligible clinicians, including those who fail to become QPs and
who may need to report to MIPS, would know their QP status in advance
of the end of the MIPS reporting period.
Comment: A few commenters requested that CMS provide more
information as to whether an eligible clinician is counted as a
participant in a MIPS APM so that they know whether or not they are
required to report to MIPS.
Response: In the CY 2017 Quality Payment Program final rule, we
stated that it is important to ensure that the appropriate parties are
properly notified of their status for purposes of MIPS. We also stated
that we would provide additional information on the format of such
notifications and the data we will include as part of our public
communications following the issuance of that final rule (81 FR 77450).
Comment: Numerous comments recommended that CMS extend the fourth
snapshot date to all MIPS APM participants.
Response: In addition to avoiding duplicative or potentially
inconsistent reporting requirements or incentives between MIPS and a
MIPS APM, the APM scoring standard is intended to reduce the reporting
burden of participants in MIPS APMs who have focused their practice
transformation activities in the preceding performance period on the
requirements of participation in the APM. As such, we believe it is
appropriate to ensure that the APM scoring standard applies only for
those who are genuinely committed to participation in MIPS APMs. By
limiting applicability of the APM scoring standard to eligible
clinicians who are on a MIPS APM's Participation List on one of the
first three snapshot dates, we hope to minimize any potential
opportunity for certain MIPS eligible clinicians to take inappropriate
advantage of the APM scoring standard. Full TIN APMs, however, require
that all individuals and entities billing through a TIN agree to
participate in the APM in which the TIN is a participant. As a result,
to avail themselves of the APM scoring standard, clinicians or entities
would have to undergo the additional burden of joining a different
billing TIN before starting their participation in the APM. Therefore,
we believe that the risk of a MIPS eligible clinician inappropriately
leveraging the APM scoring standard by joining an APM late in the year
is significantly diminished in full TIN APMs, and we are comfortable
allowing for the use of the fourth snapshot date at the end of the
performance period to identify the eligible clinicians participating in
these APMs. We will continue to monitor this issue and may consider in
future rulemaking whether there are other APM designs for which using a
fourth
[[Page 53691]]
snapshot date would also be appropriate.
Final Action: After considering public comments, we are finalizing
our proposal to define full TIN APM at Sec. 414.1305 to mean an APM
where participation is determined at the TIN level, and all eligible
clinicians who have assigned their billing rights to a participating
TIN are therefore participating in the APM. We are also finalizing our
proposal to add a fourth date of December 31 only for MIPS eligible
clinicians in a full TIN APM only for purposes of applying the APM
scoring standard and we are finalizing our proposal to amend Sec.
414.1370(e) to identify the four assessment dates that would be used to
identify the APM Entity group for purposes of the APM scoring standard.
We are also finalizing our proposal to specify that the December 31
date would be used only to identify MIPS eligible clinicians on the APM
Entity's Participation List for a MIPS APM that is a full TIN APM in
order to add them to the APM Entity group that is scored under the APM
scoring standard.
(3) Calculating MIPS APM Performance Category Scores
In the CY 2017 Quality Payment Program final rule, we established a
scoring standard for MIPS eligible clinicians participating in MIPS
APMs to reduce participant reporting burden by reducing the need for
MIPS eligible clinicians participating in these types of APMs to make
duplicative data submissions for both MIPS and their respective APMs
(81 FR 77246 through 77271). In accordance with section
1848(q)(1)(D)(i) of the Act, we finalized a policy under which we
assess the performance of a group of MIPS eligible clinicians in an APM
Entity that participates in one or more MIPS APMs based on their
collective performance as an APM Entity group, as defined in
regulations at Sec. 414.1305.
In addition to reducing reporting burden, we sought to ensure that
MIPS eligible clinicians in MIPS APMs are not assessed in multiple ways
on the same performance activities. Depending on the terms of the
particular MIPS APM, we believe that misalignments could be common
between the evaluation of performance on quality and cost under MIPS
versus under the terms of the APM. We continue to believe that
requiring MIPS eligible clinicians in MIPS APMs to submit data, be
scored on measures, and be subject to payment adjustments that are not
aligned between MIPS and a MIPS APM could potentially undermine the
validity of testing or performance evaluation under the APM. We also
believe imposition of MIPS reporting requirements would result in
reporting activity that provides little or no added value to the
assessment of MIPS eligible clinicians, and could confuse these
eligible clinicians as to which CMS incentives should take priority
over others in designing and implementing care improvement activities.
(a) Cost Performance Category
In the CY 2017 Quality Payment Program final rule, for MIPS
eligible clinicians participating in MIPS APMs, we used our authority
to waive certain requirements under the statute to reduce the scoring
weight for the cost performance category to zero (81 FR 77258, 77262,
and 77266). For MIPS APMs authorized under section 1115A of the Act
using our authority under section 1115A(d)(1) of the Act, we waived the
requirement under section 1848(q)(5)(E)(i)(II) of the Act, which
specifies the scoring weight for the cost performance category. Having
reduced the cost performance category weight to zero, we further used
our authority under section 1115A(d)(1) of the Act to waive the
requirements under sections 1848(q)(2)(B)(ii) and 1848(q)(2)(A)(ii) of
the Act to specify and use, respectively, cost measures in calculating
the MIPS final score for MIPS eligible clinicians participating in MIPS
APMs (81 FR 77261-77262; 81 FR 77265 through 77266). Similarly, for
MIPS eligible clinicians participating in the Shared Savings Program,
we used our authority under section 1899(f) of the Act to waive the
same requirements of section 1848 of the Act for the MIPS cost
performance category (81 FR 77257 through 77258). We finalized this
policy because: (1) APM Entity groups are already subject to cost and
utilization performance assessment under the MIPS APMs; (2) MIPS APMs
usually measure cost in terms of total cost of care, which is a broader
accountability standard that inherently encompasses the purpose of the
claims-based measures that have relatively narrow clinical scopes, and
MIPS APMs that do not measure cost in terms of total cost of care may
depart entirely from MIPS measures; and (3) the beneficiary attribution
methodologies differ for measuring cost under APMs and MIPS, leading to
an unpredictable degree of overlap (for eligible clinicians and for
CMS) between the sets of beneficiaries for which eligible clinicians
would be responsible that would vary based on the unique APM Entity
characteristics such as which and how many eligible clinicians comprise
an APM Entity group. We believe that with an APM Entity's finite
resources for engaging in efforts to improve quality and lower costs
for a specified beneficiary population, measurement of the population
identified through the APM must take priority in order to ensure that
the goals and the model evaluation associated with the APM are as clear
and free of confounding factors as possible. The potential for
different, conflicting results across APMs and MIPS assessments may
create uncertainty for MIPS eligible clinicians who are attempting to
strategically transform their respective practices and succeed under
the terms of the APM.
We sought comment on our proposal to continue to waive the
weighting of the cost performance category for the 2020 payment year
forward (82 FR 30082). The following is a summary of the public
comments we received on this proposal and our responses:
Comment: Several commenters supported our proposal.
Response: We thank commenters for their support of our proposal.
Comment: One commenter requested that the cost performance category
be scored for MIPS APMs to further incentivize efficient and
appropriate care delivery.
Response: We agree that encouraging efficient and appropriate care
delivery is an important aim of MIPS. We also believe that the MIPS
APMs specifically are working toward achieving this goal in diverse and
innovative ways by basing participants' Medicare payment on their
performance on cost/utilization and quality measures. Because
participants in MIPS APMs are already scored and have payment based on
their performance on cost/utilization measures under their respective
APM, we continue to believe scoring of the cost performance category is
unnecessary and could create conflicting incentives for participants in
MIPS APMs if their MIPS payment adjustment is also based in part on
their performance in the MIPS cost performance category.
Comment: One commenter supported the reweighting of the cost
performance category to zero, but requested that CMS provide
performance feedback in the cost performance category to MIPS eligible
clinicians in MIPS APMs.
Response: We understand commenters' desire to have information on
MIPS eligible clinicians' performance under the MIPS cost performance
category, even if they are not scored on cost under the APM scoring
standard. However, each MIPS APM's payment design is unique and
distinct from MIPS. Comparing participants in these APMs to other
[[Page 53692]]
MIPS eligible clinicians may lead to results that are misleading or not
meaningful. Because we are unable to provide accurate and meaningful
feedback for the MIPS cost performance category for those MIPS eligible
clinicians scored under the APM scoring standard, we will not be
including it in the performance feedback. However, MIPS APMs may
provide some level of feedback to their participants on cost/
utilization measure performance.
Comment: One commenter suggested that CMS should score the cost
performance category for MIPS APMs, but that CMS should first find a
way to align episodes between MIPS and episode-based APMs.
Response: Currently there is no way for us to align MIPS cost
measures with episodes as defined within certain MIPS APMs because each
of those MIPS APMs uniquely identifies the period of time over which
performance is assessed and scored. For example, the Oncology Care
Model has semi-annual performance periods, with 6-month episodes
beginning on any day within a semi-annual performance period; initial
reconciliation occurs after a performance period ends using a two month
claims runout from the final day of the performance period, and
subsequent ``true-up'' reconciliations capture additional claims runout
at 8 and 14 months after the performance period ends. This MIPS APM
uses unique methods of defining when performance is measured and scores
calculated; attempting to align a model like the Oncology Care Model
with other MIPS APMs or MIPS would require large-scale modifications to
one or more initiatives that would undermine their design. Further,
changing a MIPS APM's performance period could be burdensome and
disruptive for health care providers participating in MIPS APMs. For
these reasons, we do not believe it is advisable to change an APM's
episode timing or performance periods for MIPS purposes.
Because of the innovative and unique nature of each MIPS APM and
the need to maintain flexibility in designing and implementing them, we
do not believe it would be appropriate to attempt to conform
programmatic elements of APMs to MIPS.
Final Action: After considering public comments, we are finalizing
the proposal to continue to use our authority under sections
1115A(d)(1) and 1899(f) of the Act to waive the requirements of the
statute to reweight the cost performance category to zero for MIPS APMs
for the 2020 payment year and subsequent payment years as proposed. We
are codifying this policy at Sec. 414.1370(g)(2).
(i) Measuring Improvement in the Cost Performance Category
In setting performance standards with respect to measures and
activities in each MIPS performance category, section 1848(q)(3)(B) of
the Act requires us to consider historical performance standards,
improvement, and the opportunity for continued improvement. Section
1848(q)(5)(D)(i)(I) of the Act requires us to introduce the measurement
of improvement into performance scores in the cost performance category
for MIPS eligible clinicians for the 2020 MIPS Payment Year if data
sufficient to measure improvement are available. Section
1848(q)(5)(D)(i)(II) of the Act permits us to take into account
improvement in the case of performance scores in other performance
categories. Given that we have in effect waivers of the scoring weight
for the cost performance category, and of the requirement to specify
and use cost measures in calculating the MIPS final score for MIPS
eligible clinicians participating in MIPS APMs, and for the same
reasons that we initially waived those requirements, we proposed to use
our authority under section 1115A(d)(1) of the Act for MIPS APMs
authorized under section 1115A of the Act and under section 1899(f) of
the Act for the Medicare Shared Savings Program to waive the
requirement under section 1848(q)(5)(D)(i)(I) of the Act to take
improvement into account for performance scores in the cost performance
category beginning with the 2018 MIPS performance period (82 FR 30082).
We sought comment on this proposal. The following is a summary of
the public comments we received and our response:
Comment: Several commenters supported CMS's proposal to waive cost
improvement scoring as participants in MIPS APMs are already scored and
reimbursed on their performance on cost/utilization measures under the
terms of the MIPS APM.
Response: We thank the commenters for their support of our
proposal.
Final Action: After considering public comments, we are finalizing
the policy to waive cost improvement scoring as proposed.
(b) Quality Performance Category
(i) Web Interface Reporters: Shared Savings Program and Next Generation
ACO Model
(A) Quality Measures
We finalized in the CY 2017 Quality Payment Program final rule that
under the APM scoring standard, participants in the Shared Savings
Program and Next Generation ACO Model would be assessed for the
purposes of generating a MIPS APM quality performance category score
based exclusively on quality measures submitted using the CMS Web
Interface (81 FR 77256, 77261). We also recognized that ACOs in both
the Shared Savings Program and Next Generation ACO Model are required
to use the CMS Web Interface to submit data on quality measures, and
that the measures they are required to report for 2017 were also MIPS
measures for 2017. We finalized a policy to use quality measures and
data submitted by the participant ACOs using the CMS Web Interface and
MIPS benchmarks for these measures to score quality for MIPS eligible
clinicians in these MIPS APMs at the APM Entity level (81 FR 77256,
77261). For these MIPS APMs, which we refer to as CMS Web Interface
reporters going forward, we established that quality performance data
that are not submitted using the Web Interface, for example, the CAHPS
for ACOs survey and claims-based measures, will not be included in the
MIPS APM quality performance category score for 2017. We also
established a policy, codified at Sec. 414.1370(f)(1), to allow Shared
Savings Program participant TINs to report quality on behalf of the TIN
in the event that the ACO Entity fails to report quality on behalf of
the APM Entity group, and for purposes of scoring under the APM scoring
standard to treat such participant TINs as unique APM Entities.
(aa) Addition of New Measures
In the CY 2018 Quality Payment Program proposed rule (82 FR 30082
through 30083), we proposed to score the CAHPS for ACOs survey in
addition to the CMS Web Interface measures that are used to calculate
the MIPS APM quality performance category score for participants in the
Shared Savings Program and Next Generation ACO Model, beginning in the
2018 performance period. The CAHPS for ACOs survey is already required
in the Shared Savings Program and Next Generation ACO Model, and
including the CAHPS for ACOs survey would better align the measures on
which
[[Page 53693]]
participants in these MIPS APMs are assessed under the APM scoring
standard with the measures used to assess participants' quality
performance under the APM.
We did not initially propose to include the CAHPS for ACOs survey
as part of the MIPS APM quality performance category scoring for the
Shared Savings Program and Next Generation ACO Model because we
believed that the CAHPS for ACOs survey would not be collected and
scored in time to produce a MIPS quality performance category score.
However, operational efficiencies have recently been introduced that
have made it possible to score the CAHPS for ACOs survey on the same
timeline as the CAHPS for MIPS survey. Under the proposal, the CAHPS
for ACOs survey would be added to the total number of quality
performance category measures available for scoring in these MIPS APMs.
While the CAHPS for ACOs survey is new to the APM scoring standard,
the CG-CAHPS survey upon which it is based is also the basis for the
CAHPS for MIPS survey, which was included on the MIPS final list for
the 2017 performance period. For further discussion of the CAHPS for
ACOs survey, and the way it will be scored, we refer readers to section
II.C.7.a.(2)(b) of this final rule with comment period, which describes
the CAHPS for MIPS survey and the scoring method that will be used for
MIPS and the APM scoring standard in the 2018 performance period.
We noted that although each question in the CAHPS for ACOs survey
can also be found in the CAHPS for MIPS survey, the CAHPS for ACOs
survey will have one fewer survey question in the Summary Survey
Measure entitled ``Between Visit Communication'', which has never been
a scored measure in the CAHPS for ACOs survey used in the Shared
Savings Program or Next Generation ACO Model, and which we believe to
be inappropriate to score for MIPS, as it is not scored under the terms
of the APM.
Table 10--Web Interface Reporters: Shared Savings Program and Next
Generation ACO Model New Measure

Measure name: CAHPS for ACOs.
NQF/quality number (if applicable): Not Applicable.
National quality strategy domain: Patient/Caregiver Experience.
Measure description: Consumer Assessment of Healthcare Providers and
Systems (CAHPS) surveys for Accountable Care Organizations (ACOs) in
the Medicare Shared Savings Program and Next Generation ACO Model ask
consumers about their experiences with health care. The CAHPS for ACOs
survey is collected from a sample of beneficiaries who get the majority
of their care from participants in the ACO, and the questions address
care received from a named clinician within the ACO. Survey measures
include: Getting Timely Care, Appointments, and Information; How Well
Your Providers Communicate; Patients' Rating of Provider; Access to
Specialists; Health Promotion and Education; Shared Decision Making;
Health Status/Functional Status; and Stewardship of Patient Resources.
Primary measure steward: Agency for Healthcare Research and Quality
(AHRQ).
We sought comment on this proposal. The following is a summary of
the public comments we received and our responses:
Comment: Several commenters supported the proposal.
Response: We thank commenters for their support of our proposal.
Comment: One commenter expressed confusion as to whether the CAHPS
for ACOs survey would be required under the APM scoring standard, or
whether it is an optional measure.
Response: Under the APM scoring standard, it is our goal to score
all measures required under the terms of the APM. As part of the
quality performance requirements for the Shared Savings Program and the
Next Generation ACO Model, all participating ACOs are already required
to report all of the measures included in the CAHPS for ACOs survey.
Because we are able to score the CAHPS for ACOs survey, and it is
mandatory under these APMs, it will be scored under the APM scoring
standard for participants in those APMs. Notwithstanding the fact that
the CAHPS for ACOs survey is mandatory for Shared Savings Program and
Next Generation ACO participants, we note that MIPS eligible clinicians
participating in ACOs will nonetheless be eligible to receive bonus
points for reporting this measure, which is classified as a high-
priority measure, beginning in 2018.
Comment: Some commenters objected to the subjective nature of the
CAHPS for ACOs survey and the difficulty in acting on responses to
improve quality.
Response: Under both the Shared Savings Program and Next Generation
ACO Model, ACOs are required to report on all of the measures included
in the CAHPS for ACOs survey, and the results of the survey are used to
assess the quality performance of the ACO. Because the CAHPS for ACOs
survey is required under the terms of those initiatives, we believe it
is appropriate to use the CAHPS for ACOs survey for purposes of the APM
scoring standard.
Final Action: After considering public comments, we are finalizing
the policy to score the CAHPS for ACOs survey in addition to the CMS
Web Interface measures that are used to calculate the MIPS APM quality
performance category score for participants in the Shared Savings
Program and Next Generation ACO Model, beginning in the 2018
performance period, as proposed, in accordance with Sec.
414.1370(g)(1)(i)(A).
(B) Calculating Quality Scores
We refer readers to section II.C.7.a.(2) of this final rule with
comment period for a summary of our final policies related to
calculating the MIPS quality performance category percent score for
MIPS eligible clinicians, including APM Entity groups, reporting
through the CMS Web Interface, and a discussion of our proposed and
final changes to those policies for the 2018 performance period. The
changes we are finalizing in section II.C.7.a.(2) of this final rule
with comment period apply in the same
[[Page 53694]]
manner under the APM scoring standard for Web Interface reporters
except as otherwise noted in this section of this final rule with
comment period.
However, we proposed not to subject Web Interface reporters to a
3-point floor because we do not believe it is necessary to apply this
transition year policy to MIPS eligible clinicians participating in
previously established MIPS APMs (82 FR 30083). We sought comment on
this proposal.
Final Action: We received no public comments on this proposal. We
are finalizing this policy as proposed at Sec. 414.1370(g)(1)(i)(A).
(C) Incentives To Report High Priority Measures
In the CY 2017 Quality Payment Program final rule, we finalized
that for CMS Web Interface reporters, we will apply bonus points based
on the finalized set of measures reportable through the Web Interface
(81 FR 77291 through 77294). We will assign two bonus points for
reporting each outcome measure (after the first required outcome
measure) and for each scored patient experience measure. We will also
assign one bonus point for reporting each other high priority measure.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30083),
we noted that in addition to the measures required by the APM to be
submitted through the CMS Web Interface, APM Entities in the Medicare
Shared Savings Program and Next Generation ACO Model must also report
the CAHPS for ACOs survey, and we proposed that, beginning with the
2020 payment year, participants in the APM Entities may receive
bonus points under the APM scoring standard for submitting that
measure. Participants in MIPS APMs, like all MIPS eligible clinicians,
are also subject to the 10 percent cap on bonus points for reporting
high priority measures. APM Entities reporting through the CMS Web
Interface will only receive bonus points if they submit a high priority
measure with a performance rate that is greater than zero, and provided
that the measure meets the case minimum requirements.
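For illustration only, the bonus point rules summarized above might be
sketched as follows; the measure attributes and the use of the quality
performance category denominator as the base for the 10 percent cap
are simplifying assumptions rather than regulatory text:

    def high_priority_bonus_points(measures, quality_denominator):
        # measures: dicts with 'type' ('outcome', 'patient_experience', or 'other'),
        # 'high_priority' (bool), 'performance_rate', and 'meets_case_minimum'
        bonus = 0
        outcome_measures_counted = 0
        for m in measures:
            if m["performance_rate"] <= 0 or not m["meets_case_minimum"]:
                continue  # no bonus without a rate above zero and the case minimum
            if m["type"] == "outcome":
                outcome_measures_counted += 1
                if outcome_measures_counted > 1:  # first required outcome measure earns no bonus
                    bonus += 2
            elif m["type"] == "patient_experience":
                bonus += 2
            elif m["high_priority"]:
                bonus += 1
        return min(bonus, 0.10 * quality_denominator)  # 10 percent cap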
Final Action: We received no public comment on this proposal. We
are finalizing this policy as proposed at Sec. 414.1370(g)(1)(i)(C).
(D) Scoring Quality Improvement
Beginning in the 2018 performance period, section
1848(q)(5)(D)(i)(I) of the Act requires us to score improvement for the
MIPS quality performance category for MIPS eligible clinicians,
including those participating in MIPS APMs, if data sufficient to
measure quality improvement are available. We proposed to calculate the
quality improvement score using the methodology described in the CY
2018 Quality Payment Program proposed rule for scoring quality
improvement for MIPS eligible clinicians submitting quality measures
via the CMS Web Interface (82 FR 30113, 30116-30119) and finalized at
II.C.7.a.(2). In the CY 2018 Quality Payment Program proposed rule, we
stated that we believe aligning the scoring methodology used for all
Web Interface submissions will minimize confusion among MIPS eligible
clinicians receiving a MIPS score, including those participating in
MIPS APMs (82 FR 30083).
The following is a summary of the public comments we received on
this proposal and our response:
Comment: Several commenters expressed support for our proposal to
make quality improvement points available beginning in the 2018
performance period.
Response: We thank commenters for their support of our quality
improvement scoring proposal.
Final Action: We are finalizing this policy as proposed at Sec.
414.1370(g)(1)(i)(B).
(E) Total Quality Performance Category Score for Web Interface
Reporters
In the CY 2018 Quality Payment Program proposed rule (82 FR 30083
through 30084), we proposed to calculate the total quality percent
score for APM Entities in APMs that require Web Interface reporting
according to the methodology for scoring MIPS eligible clinicians
reporting on quality through the CMS Web Interface described in section
II.C.7.a.(2) of this final rule with comment period.
We sought comment on our proposed quality performance category
scoring methodology for Web Interface reporters. The following is a
summary of the public comments we received and our responses:
Comment: One commenter expressed confusion as to how quality
improvement scoring will be calculated at the APM Entity level for MIPS
eligible clinicians in MIPS APMs.
Response: APM Entity groups, like other MIPS groups, will receive
quality improvement scores for any year following a year in which one
or more members of the APM Entity group was subject to MIPS and
received a quality score. If the APM Entity group existed in the
previous performance period and received a quality score, we will use
that score for the purpose of calculating quality improvement points.
If the APM Entity group did not exist or receive a quality score but
some of its participant TIN/NPIs received quality scores in the
previous performance period, the mean of those scores would be applied
to the APM Entity group for the purpose of calculating quality
improvement points. If the APM Entity group did not exist or receive a
quality score and none of its participant MIPS eligible clinicians
received quality scores in the previous performance period, no quality
improvement points will be awarded.
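The baseline selection described in this response can be illustrated
with the following sketch; the inputs are hypothetical and the sketch
is not a statement of the scoring methodology beyond what is described
above:

    def quality_improvement_baseline(prior_entity_score, prior_tin_npi_scores):
        # prior_entity_score: the APM Entity group's quality score from the previous
        # performance period, or None if the group did not exist or was not scored
        # prior_tin_npi_scores: prior-year quality scores for participant TIN/NPIs
        if prior_entity_score is not None:
            return prior_entity_score
        if prior_tin_npi_scores:
            return sum(prior_tin_npi_scores) / len(prior_tin_npi_scores)
        return None  # no baseline, so no quality improvement points are awarded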
Final Action: After considering the public comments, we are
finalizing the policy for calculating the total quality performance
category score for Web Interface reporters as proposed at Sec.
414.1370(g)(1)(i)(C).
(ii) Other MIPS APMs
In the CY 2018 Quality Payment Program proposed rule (82 FR 30084),
we proposed to define the term Other MIPS APM at Sec. 414.1305 of our
regulations as a MIPS APM that does not require reporting through the
Web Interface. We proposed to add this definition as we believe it will
be useful in discussing our policies for the APM scoring standard. For
the 2018 MIPS performance period, Other MIPS APMs will include the
Comprehensive ESRD Care Model, the Comprehensive Primary Care Plus
Model (CPC+), and the Oncology Care Model.
We sought comment on this proposal. The following is a summary of
the public comments we received on this proposal and our responses:
Comment: One commenter objected to the use of the term Other MIPS
APM and found the distinction between MIPS APMs and Other MIPS APMs
confusing.
Response: We clarify that MIPS APMs are those APMs that meet the
four criteria of: (1) The APM Entities participate in the APM under an
agreement with CMS or by law or regulation, (2) the APM requires that
APM Entities include at least one MIPS eligible clinician on a
Participation List, (3) the APM bases payment incentives on performance
(either at the APM Entity or clinician level) on cost/utilization and
quality measures, and (4) the APM is not either a new APM for which the
first performance period begins after the first day of the MIPS
performance period for the year or an APM in the final year of
operation for which the APM scoring standard is impracticable. Web
Interface reporters are a subset of MIPS APMs where the terms of the
APM require participating APM Entities to report quality data using the
Web Interface. Other MIPS APMs include all MIPS APMs that are
[[Page 53695]]
not Web Interface reporters and are also a subset of MIPS APMs.
Comment: Some commenters expressed confusion as to how Other MIPS
APM policies will impact Next Generation ACOs.
Response: The APM scoring standard applies for MIPS eligible
clinicians in MIPS APMs. In discussing policies for the APM scoring
standard, we developed terminology to describe two subcategories of
MIPS APMs: Web Interface reporters (currently the Shared Savings
Program and Next Generation ACO Model), and Other MIPS APMs (all other
MIPS APMs that are not Web Interface reporters). Policies for Other
MIPS APMs will apply only to MIPS APMs that do not require reporting
through the Web Interface. For the 2018 performance period, only the
Shared Savings Program and Next Generation ACO Model are Web Interface
reporter APMs and therefore are not Other MIPS APMs.
Final Action: After considering public comments, we are finalizing
the policy as proposed by defining the term Other MIPS APM at Sec.
414.1305.
(A) Quality Measures
In the CY 2017 Quality Payment Program final rule, we explained
that current MIPS APMs have requirements regarding the number of
quality measures and measure specifications as well as the measure
reporting method(s) and frequency of reporting, and have an established
mechanism for submission of these measures to us within the structure
of the specific MIPS APM. We explained that operational considerations
and constraints interfered with our ability to use the quality measure
data from some MIPS APMs for the purpose of satisfying MIPS data
submission requirements for the quality performance category for the
first performance period. We concluded that there was insufficient time
to adequately implement changes to the current MIPS APM quality measure
data collection timelines and infrastructure in the first performance
period to conduct a smooth hand-off to the MIPS system that would
enable use of quality measure data from these MIPS APMs to satisfy the
MIPS quality performance category requirements in the first MIPS
performance period (81 FR 77264). Out of concern that subjecting MIPS
eligible clinicians who participate in these MIPS APMs to multiple,
potentially duplicative or inconsistent performance assessments could
undermine the validity of testing or performance evaluation under the
MIPS APMs; and that there was insufficient time to make adjustments in
operationally complex systems and processes related to the alignment,
submission and collection of APM quality measures for purposes of MIPS,
we used our authority under section 1115A(d)(1) of the Act to waive
certain requirements of section 1848(q) of the Act for APMs authorized
under section 1115A of the Act.
In the CY 2017 Quality Payment Program final rule, we finalized
that for the first MIPS performance period only, for MIPS eligible
clinicians participating in APM Entities in Other MIPS APMs, the weight
for the quality performance category is zero (81 FR 77268). To avoid
risking adverse operational or program evaluation consequences for MIPS
APMs while we worked toward incorporating MIPS APM quality measures
into scoring for future performance periods, we used the authority
provided by section 1115A(d)(1) of the Act to waive the quality
performance category weight required under section 1848(q)(5)(E)(i)(I)
of the Act for Other MIPS APMs, all of which are currently authorized
under section 1115A of the Act, and we indicated that with the
reduction of the quality performance category weight to zero, it was
unnecessary to establish for these MIPS APMs a final list of quality
measures as required under section 1848(q)(2)(D) of the Act or to
specify and use quality measures in determining the MIPS final score
for these MIPS eligible clinicians. As such, we further waived the
requirements under sections 1848(q)(2)(D), 1848(q)(2)(B)(i), and
1848(q)(2)(A)(i) of the Act to establish a final list of quality
measures (using certain criteria and processes); and to specify and
use, respectively, quality measures in calculating the MIPS final score
for the first MIPS performance period.
In the CY 2017 Quality Payment Program final rule, we anticipated
that beginning with the second MIPS performance period, the APM quality
measure data submitted to us during the MIPS performance period would
be used to derive a MIPS quality performance score for APM Entities in
all MIPS APMs. We also anticipated that it may be necessary to propose
policies and waivers of requirements of the statute, such as section
1848(q)(2)(D) of the Act, to enable the use of non-MIPS quality
measures in the quality performance category score. We anticipated that
by the second MIPS performance period we would have had sufficient time
to resolve operational constraints related to use of separate quality
measure systems and to adjust quality measure data submission
timelines. Accordingly, we stated our intention, in future rulemaking,
to use our section 1115A(d)(1) waiver authority to establish that the
quality measures and data that are used to evaluate performance for APM
Entities in Other MIPS APMs would be used to calculate a MIPS quality
performance score under the APM scoring standard.
We have since designed the means to overcome the operational
constraints that prevented us from scoring quality for these MIPS APMs
under the APM scoring standard in the first MIPS performance period,
and in the CY 2018 Quality Payment Program proposed rule we proposed to
adopt quality measures for use under the APM scoring standard, and to
begin collecting MIPS APM quality measure performance data to generate
a MIPS quality performance category score for APM Entities
participating in Other MIPS APMs beginning with the 2018 MIPS
performance period.
We sought comment on this proposal. The following is a summary of
the public comments we received on this proposal and our responses:
Comment: Some commenters expressed support for our proposal.
Response: We thank commenters for their support of our proposal.
Final Action: After considering public comments, we are finalizing
our proposal to use quality measure performance data reported for
purposes of the Other MIPS APM to generate a MIPS quality performance
category score for APM Entities participating in Other MIPS APMs
beginning with the 2018 MIPS performance period as proposed. We are
codifying this policy at Sec. 414.1370(g)(1)(ii)(A).
(aa) APM Measures for MIPS
In the CY 2017 Quality Payment Program final rule, we explained the
concerns that led us to express our intent to use the quality measures
and data that apply in MIPS APMs for purposes of the APM scoring
standard, including concerns about the application of multiple,
potentially duplicative or inconsistent performance assessments that
could negatively impact our ability to evaluate MIPS APMs (81 FR
77246). Additionally, the quality and cost/utilization measures that
are used to calculate performance-based payments in MIPS APMs may vary
from one MIPS APM to another. Factors such as the type and quantity of
measures required, the MIPS APM's particular measure specifications,
how frequently the measures must be reported, and the mechanisms used
to collect or submit the measures all add to the diversity in the
quality and cost/utilization measures used to evaluate performance
among MIPS APMs. Given
[[Page 53696]]
these concerns and the differences between and among the quality
measures used to evaluate performance within MIPS APMs as opposed to
those used more generally under MIPS, in the CY 2018 Quality Payment
Program proposed rule, we proposed to use our authority under section
1115A(d)(1) of the Act to waive requirements under section
1848(q)(2)(D) of the Act, which requires the Secretary to use certain
criteria and processes to establish an annual MIPS final list of
quality measures from which all MIPS eligible clinicians may choose
measures for purposes of assessment, and instead to establish a MIPS
APM quality measure list for purposes of the APM scoring standard (82
FR 30084). The MIPS APM quality measure list would be adopted as the
final list of MIPS quality measures under the APM scoring standard for
Other MIPS APMs, and would reflect the quality measures that are used
to evaluate performance on quality within each Other MIPS APM.
The MIPS APM measure list we proposed in Table 13 of the CY 2018
Quality Payment Program proposed rule would define distinct measure
sets for participants in each MIPS APM for purposes of the APM scoring
standard, based on the measures that are used by the APM, and for which
data will be collected by the close of the MIPS submission period. The
measure sets on the MIPS APM measure list would represent all possible
measures which may contribute to an APM Entity's MIPS score for the
MIPS quality performance category, and may include measures that are
the same as or similar to those used by MIPS. However, measures may
ultimately not be used for scoring if a measure's data becomes
inappropriate or unavailable for scoring; for example, if a measure's
clinical guidelines are changed or the measure is otherwise modified by
the APM during the performance period, the data collected during that
performance period would not be uniform, and as such may be rendered
unusable for purposes of the APM scoring standard (82 FR 30091).
We sought comment on this proposal. The following is a summary of
comments we received on this proposal and our responses:
Comment: Several commenters expressed support for the proposed
requirements for APM quality measures under the APM scoring standard.
Response: We thank commenters for their support for our proposal to
use quality measures that are used within Other MIPS APMs for purposes
of scoring under the APM scoring standard.
Comment: One commenter voiced concern that CMS would add measures
to the APM scoring standard measure set outside of notice-and-comment
rulemaking in order to include all measures used by a MIPS APM.
Response: We clarify that we will not be scoring performance for
any measures not included on the MIPS APM quality measure list included
in each year's proposed rule. Any measures that are added to an Other
MIPS APM's measure set after the proposed rule has been published will
not be scored under the APM scoring standard until they have been
proposed and adopted through notice-and-comment rulemaking in the
following year.
Comment: One commenter expressed concern that the time it takes to
develop and implement new measures upon which eligible clinicians are
scored may cause delays in the adoption of new innovative technologies
with value that cannot easily be captured by current measure types,
particularly among MIPS APM participants.
Response: We do not believe any policies under the APM scoring
standard would diminish any existing incentives for the adoption of new
technologies, or affect the flexibility for APMs to set their own
incentives for the adoption of such technologies. Furthermore, we
encourage the public to respond to our annual call for measures, as
described at section II.C.6.c.(1) of this final rule with comment
period, to help ensure that appropriate measures are quickly adopted by
appropriate programs.
Comment: Several commenters requested more information about the
way measures and their benchmarks are decided on for purposes of APMs.
Response: For questions about APM measures and their benchmarks, we
refer readers to https://qpp.cms.gov/apms/overview or the CMS Measure
Development Plan: Supporting the Transition to the Quality Payment
Program 2017 Annual Report, which is available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/2017-CMS-MDP-Annual-Report.pdf. We
did not address or propose to modify policies relating to the
development and adoption of measures or benchmarks for purposes of
individual MIPS APMs.
Comment: One commenter requested that CMS provide more information
about reporting requirements for quality measures under the APM scoring
standard for Other MIPS APMs.
Response: Under the APM scoring standard, APM Entities will not be
required to do any additional reporting on quality measures beyond what
they are already required to report as part of their participation in
the MIPS APM. Therefore, whatever specific reporting mechanisms and
measures are required by each MIPS APM will also be used for purposes
of MIPS under the APM scoring standard.
Final Action: After considering public comments, we are finalizing
our proposal to establish a MIPS APM quality measure list for purposes
of the APM scoring standard at Sec. 414.1370(g)(1)(ii)(A).
(B) Measure Requirements for Other MIPS APMs
Because the quality measure sets for each Other MIPS APM are
unique, in the CY 2018 Quality Payment Program proposed rule, we
proposed to calculate the MIPS quality performance category score using
Other MIPS APM-specific quality measures (82 FR 30085). For purposes of
the APM scoring standard, we proposed to score only measures that: (1)
Are tied to payment as described under the terms of the APM, (2) are
available for scoring near the close of the MIPS submission period, (3)
have a minimum of 20 cases available for reporting, and (4) have an
available benchmark. We discuss our policies for each of these
requirements for Other MIPS APM quality measures below.
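To illustrate how these four criteria operate together, the following
sketch--which is illustrative only and not part of the regulatory
text--filters a hypothetical Other MIPS APM measure set down to the
measures eligible for scoring; the data structure and field names are
assumptions made for the example.

# Illustrative sketch only; the data structure and field names are
# hypothetical and are not defined in this final rule.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ApmMeasure:
    name: str
    tied_to_payment: bool        # (1) tied to payment under the terms of the APM
    available_for_scoring: bool  # (2) data available near the close of the MIPS submission period
    case_count: int              # (3) must have a minimum of 20 cases
    benchmark: Optional[float]   # (4) an APM or MIPS benchmark must be available

def scorable_measures(measures: List[ApmMeasure]) -> List[ApmMeasure]:
    """Return only the measures that satisfy all four scoring criteria."""
    return [m for m in measures
            if m.tied_to_payment
            and m.available_for_scoring
            and m.case_count >= 20
            and m.benchmark is not None]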
We sought comment on this proposal. The following is a summary of
the comments we have received and our responses:
Comment: Some commenters recommended that CMS exclude three
additional categories of Other MIPS APM measures from scoring under the
APM scoring standard: pay-for-reporting measures, utilization measures,
and measures that are new within an APM.
Response: We clarify that pay-for-reporting measures will not be
scored because they do not have an available benchmark. As such, we do
not believe it is necessary to explicitly state that we will exclude
these measures. Measures that are new within an APM are often pay-for-
reporting in their first year in order to give time for APM
participants to gain experience with the measure and to establish a
benchmark; in cases where new measures are immediately pay-for-performance,
we believe it would be appropriate to score them because
the measures themselves would be used under the terms of the APM, and
to the extent possible it is our goal to align scoring under the APM
scoring standard with scoring under the terms of the APM. Tying payment
to cost/utilization is a requirement of all MIPS APMs; cost/
[[Page 53697]]
utilization measures tied to payment are often used by the APMs to meet
this requirement. Because we will not score MIPS APMs on cost, we
believe that cost/utilization measures should be scored under the APM
scoring standard as a means of incentivizing performance within MIPS
APMs in this MIPS priority area.
Comment: One commenter voiced concern that the new APM scoring
standard requirements would be burdensome or confusing for APM Entities
during the first performance period, and suggested that CMS should
allow APM Entities to report according to general MIPS quality
reporting requirements for the APM scoring standard.
Response: We thank commenters for their concern; however, there
will be no added reporting burden for participants in MIPS APMs because
we will be using the quality measure performance data that the APM
Entity is already submitting as part of the requirements for
participation in the MIPS APM.
Final Action: After considering public comments, we are finalizing
as proposed the policy to only score measures that (1) are tied to
payment as described under the terms of the APM, (2) are available for
scoring near the close of the MIPS submission period, (3) have a
minimum of 20 cases available for reporting, and (4) have an available
benchmark at Sec. 414.1370(g)(1)(ii)(A)(1) through (4).
(aa) Tied to Payment
As discussed in the CY 2018 Quality Payment Program proposed rule
(82 FR 30085), for purposes of the APM scoring standard, we proposed to
consider a measure to be tied to payment if an APM Entity group will
receive a payment adjustment or other incentive payment under the terms
of the APM, based on the APM Entity's performance on the measure.
We sought comment on this proposal. We did not receive any comments
on this proposal.
Final Action: We are finalizing the policy as proposed.
(bb) Available for Scoring
Some MIPS APM quality measure results are not available until late
in the calendar year subsequent to the MIPS performance period, which
would prevent us from including them in the MIPS APM quality
performance category score under the APM scoring standard due to the
larger programmatic timelines for providing MIPS eligible clinician
performance feedback by July and issuing budget-neutral MIPS payment
adjustments. Consequently, we proposed to only use the MIPS APM quality
measure data that are submitted by the close of the MIPS submission
period and are available for scoring in time for inclusion to calculate
a MIPS quality performance category score (82 FR 30085). Measures are
to be submitted according to requirements under the terms of the APM;
the measure data will then be aggregated and prepared for submission to
MIPS for the purpose of creating a MIPS quality performance category
score.
We believe using the Other MIPS APMs' quality measure data that
have been submitted no later than the close of the MIPS submission
period and have been processed and made available to MIPS for scoring
in time to calculate a MIPS quality performance category score is
consistent with our intent to decrease duplicative reporting for MIPS
eligible clinicians who would otherwise need to report quality measures
to both MIPS and their APM. Going forward, these are the measures to
which we are referring in our proposal to limit scoring to measures
that are available near the close of the MIPS submission period.
We sought comment on this proposal. We did not receive any comments
on this proposal.
Final Action: We are finalizing the policy as proposed.
(cc) 20 Case Minimum
We also believe that a 20 case minimum, in alignment with the one
finalized generally under MIPS in the CY 2017 Quality Payment Program
final rule, is necessary to ensure the reliability of the measure data
submitted, as we explained in the CY 2017 Quality Payment Program final
rule (81 FR 77288).
We proposed that when an APM Entity reports a quality measure that
includes less than 20 cases, under the APM scoring standard, the
measure would receive a null score for that measure's achievement
points, and the measure would be removed from both the numerator and
the denominator of the MIPS quality performance category percentage. We
proposed to apply this policy under the APM scoring standard.
We sought comment on this proposal. The following is a summary of
the comments we received on this proposal and our responses:
Comment: One commenter expressed concern as to how failure to
report the 20 case minimum would impact an APM Entity's score.
Response: We reiterate that entities that do not reach the 20 case
minimum for a particular measure will not be penalized for not
reporting the measure, but instead will receive a null score for that
particular measure, which will be removed from the numerator and
denominator when calculating the total quality score, and will
therefore have neither a positive nor negative impact on the APM
Entity's overall quality performance category score.
Final Action: After considering public comments, we are finalizing
the proposal to establish a 20 case minimum for quality measures under
the APM scoring standard without change.
(dd) Available Benchmark
An APM Entity's achievement score on each quality measure would be
calculated in part by comparing the APM Entity's performance on the
measure with a benchmark performance score. Therefore, we would need
all scored measures to have a benchmark available by the time that the
MIPS quality performance category score is calculated in order to make
that comparison.
We proposed that, for the APM scoring standard, the benchmark score
used for a quality measure would be the benchmark used in the MIPS APM
for calculation of the performance based payments, where such a
benchmark is available. If the MIPS APM does not produce a benchmark
score for a reportable measure that is included on the MIPS APM
measures list, we would use the benchmark score for the measure that is
used for the MIPS quality performance category generally (outside of
the APM scoring standard) for that MIPS performance period, provided
the measure specifications for the measure are the same under both the
MIPS final list and the APM measures list. If neither the APM nor MIPS
has a benchmark available for a reported measure, the APM Entity that
reported that measure would receive a null score for that measure's
achievement points, and the measure would be removed from both the
numerator and the denominator of the quality performance category
percentage.
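As an illustration of this benchmark hierarchy, the following sketch
(illustrative only; the function and argument names are assumptions)
returns the benchmark that would be used for a reported measure, or no
benchmark at all, in which case the measure receives a null score.

# Illustrative sketch of the benchmark hierarchy described above; names
# are hypothetical.
def select_benchmark(apm_benchmark, mips_benchmark, specs_match):
    """Return the benchmark to use for a reported measure, or None.

    apm_benchmark  -- the benchmark used by the MIPS APM, if any
    mips_benchmark -- the generally applicable MIPS benchmark, if any
    specs_match    -- True when the measure specifications are the same
                      under the MIPS final list and the APM measures list
    """
    if apm_benchmark is not None:
        return apm_benchmark
    if mips_benchmark is not None and specs_match:
        return mips_benchmark
    # No benchmark: the measure receives a null score and is removed from
    # both the numerator and the denominator.
    return None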
We sought comment on this proposal. The following is a summary of
the comments we received on this proposal and our responses:
Comment: One commenter objected to CMS's proposal to remove
measures from scoring for which no benchmark is available, and instead
suggested that we assign three points for the measure in alignment with
general MIPS policy.
Response: We thank the commenter for the suggestion. In general, we
have attempted to create consistency between the MIPS and the APM
scoring standard when practicable. However, in this case doing so would
be inappropriate
[[Page 53698]]
because APM participants are required to report on all APM measures
used in the MIPS APM, whereas eligible clinicians reporting under
general MIPS are given the opportunity to choose six of the measures
from the MIPS measure set. We believe that it would be unfair to
require APM Entities to report on measures for which they are unable to
achieve a score above three, which could significantly impact their
total quality performance category score. For example, if an APM
requires that five measures be reported, and an APM Entity received 10
points on 4 measures, but did not meet the 20 case minimum on the fifth
measure, the formula for calculating the quality score would be [(10 +
10 + 10 + 10)/4] = 10 (or, 100 percent), rather than [(10 + 10 + 10 +
10 + 3)/5] = 8.6 (or, 86 percent).
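The arithmetic in this example can be restated as follows (illustrative
only; the point values are hypothetical):

# Illustrative restatement of the example above; values are hypothetical.
scored = [10, 10, 10, 10]   # four measures that met all scoring criteria

# Finalized approach: the fifth measure receives a null score and is
# removed from both the numerator and the denominator.
null_removed = sum(scored) / len(scored)              # (10+10+10+10)/4 = 10.0
# Commenter's alternative: assign three points to the unscorable measure.
with_floor = (sum(scored) + 3) / (len(scored) + 1)    # (10+10+10+10+3)/5 = 8.6

# Expressed as a percentage of the 10 points available per measure:
print(null_removed * 10, with_floor * 10)             # 100.0 86.0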
Comment: One commenter expressed concern that by limiting the use
of measures to those for which a benchmark is available, CMS may be
overly restrictive of new models in their first years of operation, and
that CMS should instead use any measure for which a benchmark could be
calculated based on the current performance year.
Response: To score quality for Other MIPS APMs, we will use any
available benchmark that is being used by the APM. In the event an APM
does not have a benchmark for a given measure, we will use MIPS
benchmarks for that measure if available. With this level of benchmark
flexibility, we do not anticipate lack of benchmarks will eliminate a
significant number of measures from the APM scoring standard quality
calculations.
Comment: Several commenters objected to scoring Web Interface
reporters based on the same benchmarks and decile distribution as those
in Other MIPS APMs reporting through other mechanisms.
Response: Under the APM scoring standard, Web Interface reporters
will be scored on a scale unique to Web Interface reporters and based
on benchmarks generated through the Web Interface. Other MIPS APMs do
not use the Web Interface and therefore measures will be scored under
the APM scoring standard's Other MIPS APM scoring methodology using a
decile distribution unique to each Other MIPS APM.
Comment: One commenter objected to the use of MIPS measure
benchmarks in the absence of APM measure benchmarks, despite the
comparability of the measures, because the effect may be to compare the
performance of a specific specialty against that of the general health
care provider population.
Response: While the use of MIPS benchmarks in the absence of an APM
benchmark would mean comparing the performance of APM participants to
the performance of the general MIPS population in order to create the
measure score, for measures that are used by MIPS as well as MIPS APMs,
there would not be an advantage or disadvantage given to APM
participants relative to the general population of MIPS eligible
clinicians and their scores would simply reflect how they would have
performed under general MIPS scoring.
Final Action: After considering public comments, we are finalizing
the policy with respect to the use of quality performance benchmarks as
proposed.
(C) Calculating the Quality Performance Category Percent Score
Eligible clinicians who participate in Other MIPS APMs are subject
to specific quality measure reporting requirements within these APMs.
To best align with APM design and objectives, in the CY 2018 Quality
Payment Program proposed rule, we proposed that the minimum number of
required measures to be reported for the APM scoring standard would be
the minimum number of quality measures that are required by the MIPS
APM and are collected and available in time to be included in the
calculation for the APM Entity score under the APM scoring standard.
For example, if an Other MIPS APM requires participating APM Entities
to report nine of 14 quality measures and the APM Entity fails to
submit them by the APM's submission deadline, then for the purposes of
calculating an APM Entity quality performance category score, the APM
Entity would receive a zero for those measures. An APM Entity that does
not submit any APM quality measures by the submission deadline would
receive a zero for its MIPS APM quality performance category percent
score for the MIPS performance period.
We also proposed that if an APM Entity submits some, but not all of
the measures required by the MIPS APM by the close of the MIPS
submission period, the APM Entity would receive points for the measures
that were submitted, but would receive a score of zero for each
remaining measure between the number of measures reported and the
number of measures required by the APM that were available for scoring.
For example, if an APM Entity in the above hypothetical MIPS APM
submits quality performance data on three of the APM's measures,
instead of the required nine, the APM Entity would receive quality
points in the APM scoring standard quality performance category percent
score for the three measures it submitted, but would receive zero
points for each of the six remaining measures that were required under
the terms of the MIPS APM. On the other hand, if an APM Entity reports
on more than the minimum number of measures required to be reported
under the MIPS APM and the measures meet the other criteria for
scoring, only the measures with the highest scores, up to the number of
measures required to be reported under the MIPS APM, would be counted;
however, any bonus points earned by reporting on measures beyond the
minimum number of required measures would be awarded.
If a measure is reported but fails to meet the 20 case minimum or
does not have a benchmark available, there would be a null score for
that measure, and it would be removed from both the numerator and the
denominator, so as not to negatively affect the APM Entity's quality
performance category score.
We proposed to assign bonus points for reporting high priority
measures or measures with end-to-end CEHRT reporting as described for
general MIPS scoring in the CY 2017 Quality Payment Program final rule
(81 FR 77297 through 77299).
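The following sketch combines the rules described above for partial
reporting and over-reporting. It is illustrative only; the function and
variable names are assumptions, null-scored measures are assumed to have
already been removed, and bonus points are handled separately.

# Illustrative sketch only; names are hypothetical, null-scored measures
# are assumed to have been removed already, and bonus points are omitted.
def achievement_points(reported_scores, required_count):
    """Return the achievement scores that count toward the quality
    performance category under the APM scoring standard.

    reported_scores -- achievement points for each scorable measure the
                       APM Entity submitted
    required_count  -- minimum number of measures required by the MIPS APM
    """
    # Only the highest-scoring measures count, up to the number required.
    counted = sorted(reported_scores, reverse=True)[:required_count]
    # Each unreported required measure contributes zero achievement points.
    missing = max(0, required_count - len(counted))
    return counted + [0] * missing

# The APM requires nine measures; the APM Entity submits only three.
print(achievement_points([7, 9, 6], 9))    # [9, 7, 6, 0, 0, 0, 0, 0, 0]
# The APM Entity submits eleven scorable measures; only the top nine count.
print(achievement_points([10, 9, 9, 8, 8, 7, 7, 6, 5, 4, 3], 9))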
We sought comment on these proposals. The following is a summary of
the public comments we received and our response:
Comment: Some commenters recommended that CMS require APM Entities
to report on the same number of measures required under regular MIPS:
Six.
Response: We thank commenters for this input. However, we note that
under the APM scoring standard, eligible clinicians participating in
MIPS APMs are not required to report any additional measures for
purposes of MIPS scoring beyond those reported under their MIPS APM,
and they will only be scored on the minimum number of measures required
by the APM. The purpose of this policy is to help align incentives
between the APMs and the Quality Payment Program, and not to emphasize
performance in one over the other. Given this, it would not be
appropriate to set a minimum number of measures independent of the
requirements of the APM.
Final Action: After considering public comments, we are finalizing
the policy for calculating the quality performance category score as
proposed at Sec. 414.1370(g)(1)(ii)(A).
[[Page 53699]]
(aa) Quality Measure Benchmarks
An APM Entity's MIPS APM quality score will be calculated by
comparing the APM Entity's performance on a given measure with a
benchmark performance score. We proposed that the benchmark score used
for a quality measure would be the benchmark used by the MIPS APM for
calculation of the performance based payments within the APM, if
possible, in order to best align the measure performance outcomes
between MIPS APMs and MIPS generally. If the MIPS APM does not produce
a benchmark score for a reportable measure that will be available at
the close of the MIPS submission period, the benchmark score for the
measure that is used for the MIPS quality performance category
generally for that performance period would be used, provided the
measure specifications are the same for both. If neither the APM nor
MIPS has a benchmark available for a reported measure, the APM Entity
that reported that measure will receive a null score for that measure's
achievement points, and the measure will be removed from both the
numerator and the denominator of the quality performance category
percentage.
We proposed that for measures that are pay-for-reporting or which
do not measure performance on a continuum of performance, we will
consider these measures to be lacking a benchmark and they will be
treated as such. For example, if a MIPS APM only requires that an APM
Entity must surpass a threshold and does not measure APM Entities on
performance beyond surpassing a threshold, we would not consider such a
measure to measure performance on a continuum.
We proposed to score quality measure performance under the APM
scoring standard using a percentile distribution, separated by decile
categories, as described in the finalized MIPS quality scoring
methodology (81 FR 77282 through 77284). For each benchmark, we will
calculate the decile breaks for measure performance and assign points
based on the benchmark decile range into which the APM Entity's measure
performance falls.
We proposed to use a graduated points-assignment approach, where a
measure is assigned a continuum of points out to one decimal place,
based on its place in the decile. For example, as shown in Table 11, a
raw score of 55 percent would fall within the sixth decile of 41.0
percent to 61.9 percent and would receive between 6.0 and 6.9 points.
Table 11--Benchmark Decile Distribution
------------------------------------------------------------------------
Sample quality Graduated points
Sample benchmark decile measure (percent) (with no floor)
------------------------------------------------------------------------
Example Benchmark Decile 1........ 0-9.9 1.0-1.9
Example Benchmark Decile 2........ 10.0-17.9 2.0-2.9
Example Benchmark Decile 3........ 18.0-22.9 3.0-3.9
Example Benchmark Decile 4........ 23.0-35.9 4.0-4.9
Example Benchmark Decile 5........ 36.0-40.9 5.0-5.9
Example Benchmark Decile 6........ 41.0-61.9 6.0-6.9
Example Benchmark Decile 7........ 62.0-68.9 7.0-7.9
Example Benchmark Decile 8........ 69.0-78.9 8.0-8.9
Example Benchmark Decile 9........ 79.0-84.9 9.0-9.9
Example Benchmark Decile 10....... 85.0-100 10.0
------------------------------------------------------------------------
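To illustrate the graduated points-assignment approach shown in Table
11, the following sketch maps a raw measure score to points using a
simple linear interpolation within the sample deciles; this is one
possible mapping consistent with the example above, not a statement of
the precise calculation used for scoring.

# Illustrative sketch only; the sample decile bounds are taken from Table
# 11 and the linear interpolation is an assumption for illustration.
SAMPLE_DECILES = [
    (0.0, 9.9), (10.0, 17.9), (18.0, 22.9), (23.0, 35.9), (36.0, 40.9),
    (41.0, 61.9), (62.0, 68.9), (69.0, 78.9), (79.0, 84.9), (85.0, 100.0),
]

def graduated_points(raw_score_pct):
    """Map a raw measure score (percent) to graduated points, with no floor."""
    for decile, (low, high) in enumerate(SAMPLE_DECILES, start=1):
        if low <= raw_score_pct <= high:
            if decile == 10:
                return 10.0                  # the top decile is capped at 10.0 points
            fraction = (raw_score_pct - low) / (high - low)
            return round(decile + 0.9 * fraction, 1)
    raise ValueError("score is outside the sample benchmark distribution")

print(graduated_points(55.0))   # sixth decile; approximately 6.6 points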
We sought comment on our proposal. The following is a summary of
the public comments received on this proposal and our response:
Comment: One commenter voiced concern that comparing performance of
MIPS APM participants against the performance of eligible clinicians
assessed under regular MIPS would skew benchmarks and give MIPS APM
participants an unfair advantage in calculating a MIPS score.
Response: In circumstances where an APM does not have a benchmark
available for a measure, but a MIPS benchmark is available, we proposed
to use the MIPS benchmark to create a measure score. These benchmark
scores reflect performance of eligible clinicians in the MIPS program,
against whom MIPS APM participants will ultimately be compared. As
such, we do not believe the use of these benchmarks is inappropriate in
this context.
Final Action: After considering public comments, we are finalizing
the policy for applying quality measure benchmarks to calculate an APM
Entity's MIPS quality measure score as proposed.
(bb) Assigning Quality Measure Points Based on Achievement
For the APM scoring standard quality performance category, we
proposed that each APM Entity that reports on quality measures would
receive between 1 and 10 achievement points for each measure reported
that can be reliably scored against a benchmark, up to the number of
measures that are required to be reported by the APM (82 FR 30086
through 30087). Because measures that lack benchmarks or 20 reported
cases are removed from the numerator and denominator of the quality
performance category percentage, it is unnecessary to include a point-
floor for scoring of Other MIPS APMs. Similarly, because the quality
measures reported by the MIPS APM for MIPS eligible clinicians under
the APM scoring standard are required to be submitted to the APM under
the terms of the APM, and the MIPS eligible clinicians do not select
their APM measures, there will be no cap on topped out measures for
MIPS APM participants being scored under the APM scoring standard,
which differs from the policy for other MIPS eligible clinicians
proposed in the CY 2018 Quality Payment Program proposed rule (82 FR
30101 through 30103).
Beginning in the 2018 MIPS performance period, we proposed that APM
Entities in MIPS APMs, like other MIPS eligible clinicians, would be
eligible to receive bonus points for the MIPS quality performance
category for reporting on high priority measures or measures submitted
via CEHRT (for example, end-to-end submission) (82 FR 30109). For each
Other MIPS APM, we proposed to identify whether any of their available
measures meets the criteria to receive a bonus, and add the bonus
points to the quality achievement points. Further, we proposed that the
total number of awarded bonus points may not exceed 10 percent of the
APM Entity's total available achievement points for the MIPS quality
performance category score.
[[Page 53700]]
To generate the APM Entity's quality performance category
percentage, achievement points would be added to any applicable bonus
points, and then divided by the total number of available achievement
points, with a cap of 100 percent. For more detail on the MIPS quality
performance category percentage score calculation, we refer readers to
section II.C.7.a.(2)(j) of this final rule with comment period.
Under the APM scoring standard for Other MIPS APMs, the number of
available achievement points would be the number of measures required
under the terms of the APM and available for scoring multiplied by ten.
If, however, an APM Entity reports on a required measure that fails the
20 case minimum requirement, or for which there is no available
benchmark for that performance period, the measure would receive a null
score and all points from that measure would be removed from both the
numerator and the denominator.
For example, if an APM Entity reports on four out of four measures
required to be reported by the MIPS APM, and receives an achievement
score of five on each and no bonus points, the APM Entity's quality
performance category percentage would be [(5 points x 4 measures) + 0
bonus points]/(4 measures x 10 max available points), or 50 percent.
If, however, one of those measures failed the 20 case minimum
requirement or had no benchmark available, that measure would have a
null value and would be removed from both the numerator and denominator
to create a quality performance category percentage of [(5 points x 3
measures) + 0 bonus points]/(3 measures x 10 max available points), or
50 percent.
If an APM Entity fails to meet the 20 case minimum on all available
APM measures, that APM Entity would have its quality performance
category score reweighted to zero, as described below.
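These examples can be restated in the following sketch (illustrative
only; the function name and inputs are assumptions), which removes
null-scored measures from both the numerator and the denominator:

# Illustrative restatement of the examples above; names are hypothetical.
def quality_category_percent(measure_scores, bonus_points=0.0):
    """measure_scores holds the achievement points for each measure
    required and available for scoring, with None for a measure that
    received a null score (no benchmark, or fewer than 20 cases)."""
    scored = [s for s in measure_scores if s is not None]
    if not scored:
        # No scorable measures: the quality performance category is
        # reweighted to zero, as described above.
        return None
    available = len(scored) * 10
    return min(100.0, 100.0 * (sum(scored) + bonus_points) / available)

print(quality_category_percent([5, 5, 5, 5]))      # 50.0 percent
print(quality_category_percent([5, 5, 5, None]))   # still 50.0 percent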
We sought comment on these proposals. The following is a summary of
the public comments we received and our response:
Comment: One commenter was concerned about the complexity of the
program and suggested that the addition of new scoring opportunities
could become too burdensome for CMS to administer effectively.
Response: The APM scoring standard was designed to simplify
administration of MIPS for both eligible clinicians in certain types of
APMs and for CMS. We believe that we are prepared to effectively
administer MIPS, including the APM scoring standard. However, as we
gain additional experience in implementing MIPS, we will continue to
monitor for opportunities to minimize complexity and reduce burden for
all parties.
Final Action: After considering public comments, we are finalizing
the policy on assigning quality measure points for achievement as
proposed.
(D) Quality Improvement Scoring
Beginning in the 2018 MIPS performance period, we proposed to score
improvement, as well as achievement in the quality performance category
(82 FR 30087).
For the APM scoring standard, we proposed that the quality
improvement percentage points would be awarded based on the following
formula:
Quality Improvement Score = (Absolute Improvement/Previous Year Quality
Performance Category Percent Score Prior to Bonus Points)/10
We sought comment on this proposal. The following is a summary of
the public comments we received on this proposal and our responses:
Comment: Several commenters expressed support for this proposal.
Response: We thank commenters for their support for this proposal.
Comment: One commenter suggested that rather than declining to
score quality improvement for the MIPS eligible clinicians who had a
quality performance category score of 0 in the previous performance
year, or who did not participate in MIPS in the previous performance
year, CMS should instead assign a minimum quality performance category
score of 1 for purposes of calculating an improvement score.
Response: We thank the commenter for the suggestion. While
assigning a quality score of 1 in years in which no quality score is
available would enable us to assign a quality improvement score, it
would also have the effect of giving all first year participants higher
quality improvement scores that do not necessarily reflect improvement.
Instead, with this policy we seek to encourage early and consistent
participation in MIPS by requiring two years of consecutive
participation before the quality improvement score can be applied.
We note that we inadvertently described the formula in error in the
APM scoring standard section, but provided a cross-reference to the
discussion and the correct formula under the general MIPS scoring
standard (82 FR 30096). We are correcting the error in this final rule
with comment period to clarify and resolve the inconsistency by
changing the quality improvement score calculation to the following:
Quality Improvement Score = (Absolute Improvement/Previous Year Quality
Performance Category Percent Score Prior to Bonus Points) * 10
Final Action: After considering public comments, we are finalizing
the proposed quality improvement score calculation with the corrected
formula at Sec. 414.1370(g)(1)(ii)(B).
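For illustration only, the corrected quality improvement score formula
can be expressed as follows; the treatment of score declines and any
maximum improvement score follow the generally applicable MIPS
improvement scoring policy and are not shown here.

# Illustrative sketch of the corrected formula; inputs are quality
# performance category percent scores prior to bonus points (0-100).
def quality_improvement_score(prior_year_percent, current_year_percent):
    if prior_year_percent <= 0:
        # No quality improvement points are awarded when there is no
        # prior-year quality score, as described in the response above.
        return 0.0
    absolute_improvement = current_year_percent - prior_year_percent
    return (absolute_improvement / prior_year_percent) * 10

print(quality_improvement_score(50.0, 60.0))   # (10/50) * 10 = 2.0 percentage points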
(E) Calculating Total Quality Performance Category Score
We proposed that the APM Entity's total quality performance
category score would be equal to [(achievement points + bonus points)/
total available achievement points] + quality improvement score. The
APM Entity's total quality performance category score may not exceed
100 percent. We sought comment on the proposed quality performance
category scoring methodology for APM Entities participating in Other
MIPS APMs. The following is a summary of the public comments we
received on this proposal and our response:
Comment: Several commenters expressed support for this proposed
methodology.
Response: We thank commenters for their support of this proposal.
Final Action: After considering public comments, we are finalizing
the policy for calculating the total quality performance category score
as proposed at Sec. 414.1370(g)(1)(ii)(C).
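The finalized total quality performance category score can be
illustrated with the following sketch (illustrative only; all values
are hypothetical and expressed on a 0 to 100 scale):

# Illustrative sketch of the total quality performance category score.
def total_quality_score(achievement_points, bonus_points,
                        total_available_points, improvement_score):
    base = 100.0 * (achievement_points + bonus_points) / total_available_points
    return min(100.0, base + improvement_score)   # capped at 100 percent

# 32 of 40 available achievement points, 2 bonus points, and a quality
# improvement score of 2.0 percentage points.
print(total_quality_score(32, 2, 40, 2.0))   # 87.0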
(c) Improvement Activities Performance Category
As we finalized in the CY 2017 Quality Payment Program final rule,
for all MIPS APMs we will assign the same improvement activities score
to each APM Entity based on the activities involved in participation in
a MIPS APM (81 FR 77262 through 77266). As described in the CY 2017
Quality Payment Program final rule, APM Entities will receive a minimum
of one half of the total possible points (81 FR 77254). This policy is
in accordance with section 1848(q)(5)(C)(ii) of the Act. In the event
that the assigned score does not represent the maximum improvement
activities score, the APM Entity group will have the opportunity to
report additional improvement activities to add points to the APM
Entity level score as described in section II.C.6.e. of this final rule
with comment period. We note that in the 2017 performance year, we
determined that the improvement activities involved in participation in
all MIPS APMs satisfied the requirements for participating APM Entities
to receive the maximum score of 100 percent in this performance
category. In the 2018 Quality Payment
[[Page 53701]]
Program proposed rule, we did not propose any changes to this policy
for the 2018 MIPS performance period (82 FR 30087). We have made some
clarifying edits to Sec. 414.1370(g)(3)(i).
(d) Advancing Care Information Performance Category
In the CY 2017 Quality Payment Program final rule, we finalized our
policy to attribute one score to each MIPS eligible clinician in an APM
Entity group by looking for both individual and group TIN level data
submitted for a MIPS eligible clinician and using the highest available
score (81 FR 77268). We will then use these scores to create an APM
Entity's score based on the average of the highest scores available for
all MIPS eligible clinicians in the APM Entity group. If an individual
or TIN did not report on the advancing care information performance
category, the individual or TIN will contribute a zero to the APM
Entity's aggregate score. Each MIPS eligible clinician in an APM Entity
group will receive one score, weighted equally with the scores of every
other MIPS eligible clinician in the APM Entity group, and we will use
these scores to calculate a single APM Entity-level advancing care
information performance category score.
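The attribution and averaging described above can be illustrated as
follows (illustrative only; the data structure is an assumption made
for the example):

# Illustrative sketch of the APM Entity-level advancing care information
# score described above; the data structure is hypothetical.
def aci_entity_score(clinician_scores):
    """clinician_scores maps each MIPS eligible clinician (TIN/NPI) to an
    (individual_score, group_tin_score) pair; either element may be None
    if that level of data was not reported."""
    per_clinician = []
    for individual, group in clinician_scores.values():
        reported = [s for s in (individual, group) if s is not None]
        # Use the highest available score; contribute zero if neither the
        # individual nor the group TIN reported the category.
        per_clinician.append(max(reported) if reported else 0.0)
    return sum(per_clinician) / len(per_clinician)

scores = {
    "TIN1/NPI1": (80.0, 70.0),
    "TIN1/NPI2": (None, 70.0),
    "TIN2/NPI3": (None, None),
}
print(round(aci_entity_score(scores), 1))   # (80 + 70 + 0) / 3 = 50.0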
(i) Special Circumstances
As we explained in the CY 2017 Quality Payment Program final rule,
under generally applicable MIPS scoring policy, we will assign a weight
of zero percent to the advancing care information performance category
in the final score for MIPS eligible clinicians who meet specific
criteria: Hospital-based MIPS eligible clinicians, MIPS eligible
clinicians who are facing a significant hardship, and certain types of
non-physician practitioners (NPs, PAs, CRNAs, CNSs) who are MIPS
eligible clinicians (81 FR 77238 through 77245). In the CY 2018 Quality
Payment Program proposed rule, we proposed to include in this weighting
policy ASC-based MIPS eligible clinicians and MIPS eligible clinicians
who are using decertified EHR technology (82 FR 30077 through 30078).
In the CY 2018 Quality Payment Program proposed rule, we proposed
that under the APM scoring standard, if a MIPS eligible clinician who
qualifies for a zero percent weighting of the advancing care
information performance category in the final score is part of a TIN
reporting at the group level that includes one or more MIPS eligible
clinicians who do not qualify for a zero percent weighting, we would
not apply the zero percent weighting to the qualifying MIPS eligible
clinician, and the TIN would still report on behalf of the entire
group, although the TIN would not need to report data for the
qualifying MIPS eligible clinician. All MIPS eligible clinicians in the
TIN who are participants in the MIPS APM would count towards the TIN's
weight when calculating an aggregated APM Entity score for the
advancing care information performance category.
If, however, the MIPS eligible clinician is a solo practitioner and
qualifies for a zero percent weighting, or if all MIPS eligible
clinicians in a TIN qualify for the zero percent weighting, the TIN
would not be required to report on the advancing care information
performance category, and if the TIN chooses not to report that TIN
would be assigned a weight of 0 when calculating the APM Entity's
advancing care information performance category score.
If advancing care information data are reported by one or more TIN/
NPIs in an APM Entity, an advancing care information performance
category score will be calculated for, and will be applicable to, all
MIPS eligible clinicians in the APM Entity group. If all MIPS eligible
clinicians in all TINs in an APM Entity group qualify for a zero
percent weighting of the advancing care information performance
category, or in the case of a solo practitioner who comprises an entire
APM Entity and qualifies for zero percent weighting, the advancing care
information performance category would be weighted at zero percent of
the final score, and the advancing care information performance
category's weight would be redistributed to the quality performance
category.
We sought comment on this proposal. The following is a summary of
the public comments we received and our responses:
Comment: Some commenters supported the proposed policy to allow
participants in MIPS APMs to also apply for advancing care information
hardship exceptions like eligible clinicians participating under
regular MIPS rules.
Response: We thank commenters for their support for this policy.
Comment: A few commenters suggested that the advancing care
information hardship exception policy for all MIPS APMs should be
uniform and that MIPS eligible clinicians who qualify for an exception,
and who bill through the TIN of an ACO participant in a Shared Savings
Program ACO should not be counted when weighting the TIN for purposes
of calculating the APM Entity advancing care information performance
category score. Some commenters also expressed confusion as to how TIN-
level advancing care information data are to be reported for the Shared
Savings Program if a MIPS eligible clinician in an ACO participant TIN
receives an exception or joins the TIN at various times in the year.
Response: We thank commenters for bringing our attention to this
issue. We proposed that under the APM scoring standard, if a MIPS
eligible clinician is a solo practitioner and qualifies for a zero
percent weighting in the advancing care information performance
category, or if all MIPS eligible clinicians in a TIN qualify for the
zero percent weighting, the TIN would not be required to report on the
advancing care information performance category, and if the TIN chooses
not to report that TIN would be assigned a weight of 0 when calculating
the APM Entity's advancing care information performance category score.
If the MIPS eligible clinician would have reported the advancing care
information performance category as an individual and therefore
contributed to the APM Entity's advancing care information score at the
individual level but qualifies for a zero percent weighting of the
advancing care information performance category, the individual will be
removed from the numerator and denominator when calculating the APM
Entity's advancing care information performance category score, thereby
contributing a null value. If a MIPS eligible clinician qualifies for a
zero percent weighting of the advancing care information performance
category as described in II.C.6.f.(6) of this final rule with comment
period as an individual, but receives an advancing care information
score as part of a group, we will use that group score for that
eligible clinician when calculating the APM Entity's advancing care
information performance category score. We note that group level
advancing care information reporting is not negatively affected by the
failure of a single individual to report because it is based only on
average reported performance within the group, not the average reported
performance of all eligible clinicians in the group--those who do not
report are not factored into the denominator. If, however, all MIPS
eligible clinicians in a TIN qualify for a zero percent weighting of
the advancing care information performance category, the entire TIN
will be removed from the numerator and denominator, and therefore
contribute a null value when calculating the APM Entity score. If all
participant NPIs and TINs in an APM Entity qualify for a zero
percent weighting of the advancing care
[[Page 53702]]
information performance category and do not report, we will reweight
the entire advancing care information performance category to zero
percent of the final score for the APM Entity as described in Table 13.
Final Action: After considering public comments, we are finalizing
the policy for scoring the advancing care information performance
category under the APM scoring standard as proposed at Sec.
414.1370(g)(4)(iii).
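As an illustration of the exception handling described in the response
above, the following sketch (illustrative only; the field names are
assumptions) extends the earlier advancing care information example to
remove exempted clinicians as null values:

# Illustrative sketch only; field names are hypothetical.
def aci_entity_score_with_exceptions(clinicians):
    """clinicians is a list of dicts with keys 'exempt' (qualifies for a
    zero percent weighting), 'individual', and 'group' (reported scores,
    or None)."""
    contributions = []
    for c in clinicians:
        if c["exempt"] and c["group"] is not None:
            contributions.append(c["group"])   # exempt, but a group score is available
        elif c["exempt"]:
            continue   # null value: removed from numerator and denominator
        else:
            reported = [s for s in (c["individual"], c["group"]) if s is not None]
            contributions.append(max(reported) if reported else 0.0)
    if not contributions:
        # All qualify for zero percent weighting and none report: the
        # category is reweighted to zero percent of the final score.
        return None
    return sum(contributions) / len(contributions)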
(4) Calculating Total APM Entity Score
(a) Performance Category Weighting
In the 2018 Quality Payment Program proposed rule, we proposed to
continue to use our authority to waive sections 1848(q)(2)(B)(ii) and
1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost
measures, and to maintain the cost performance category weight of zero
under the APM scoring standard for the 2018 performance period and
subsequent MIPS performance periods (82 FR 30082). Because the cost
performance category would be reweighted to zero, that weight would
need to be redistributed to other performance categories. We proposed
to use our authority under section 1115A(d)(1) to waive requirements
under sections 1848(q)(5)(E)(i)(I)(bb), 1848(q)(5)(E)(i)(III) and
1848(q)(5)(E)(i)(IV) of the Act that prescribe the weights,
respectively, for the quality, improvement activities, and advancing
care information performance categories (82 FR 30088). We proposed to
weight the quality performance category score to 50 percent, the
improvement activities performance category to 20 percent, and the
advancing care information performance category to 30 percent of the
final score for all APM Entities in Other MIPS APMs. We proposed these
weights to align the Other MIPS APM performance category weights with
those assigned to the Web Interface reporters, which we adopted as
explained in the CY 2017 Quality Payment Program final rule (81 FR
77262-77263). We believe it is appropriate to align the performance
category weights for APM Entities in MIPS APMs that require reporting
through the Web Interface with those in Other MIPS APMs. By aligning
the performance category weights among all MIPS APMs, we would create
greater scoring parity among the MIPS eligible clinicians in MIPS APMs
who are being scored under the APM scoring standard.
We sought comment on this proposal. The following is a summary of
the comments and our responses:
Comment: Some commenters supported CMS's proposal to weight the
quality performance category to better align the APM scoring standard
with regular MIPS scoring.
Response: We thank commenters for their support of our proposal.
Comment: Several commenters suggested delaying assigning a weight
to the quality performance category until MIPS eligible clinicians and
participants in APMs have had time to adjust to using these measures,
as well as to ensure that the measures have been fully vetted.
Response: We appreciate commenters' concerns that it may take time
for MIPS eligible clinicians to adjust to this new program. However, we
will be entering our second year of the Quality Payment Program in 2018
after a transition year in which quality was not scored for Other MIPS
APMs. Furthermore, while we acknowledge that the Quality Payment
Program is still a relatively new program, APM participants are already
required to report the relevant quality measures as part of their
participation in a MIPS APM, in order to receive performance-based
payments. By repurposing these quality measures, we will enable MIPS
APM participants to avoid duplicative reporting and inconsistent
incentives between MIPS and APM requirements.
Comment: Some commenters suggested reweighting the performance
categories under the APM scoring standard to align with the weights
used under regular MIPS in 2018.
Response: While it is our intention to align the policies under the
APM scoring standard with the generally applicable MIPS policies to the
greatest extent possible, in this instance we believe alignment is
inappropriate. As previously noted, under the APM scoring standard, we
proposed to use our authority to waive the establishment of measures
and scoring in the cost performance category in all performance periods
(82 FR 30082), and we are finalizing that proposal in this final rule
with comment period. As a result, we will not align with the generally
applicable MIPS performance category weighting.
Comment: One commenter was confused by Table 12 of the proposed
rule (82 FR 30088), which indicates that all MIPS APMs will have the
same performance category weighting, as well as the same reweighting
standards in the event that a MIPS eligible clinician or TIN is
exempted from the advancing care information performance category, or
an APM Entity has no quality measures available to create a quality
performance category score.
Response: All MIPS APMs will have their performance categories
weighted in the same way. For information about performance category
weighting for MIPS APMs under normal circumstances, we refer readers to
Table 12. For information about performance category weighting in
special circumstances, such as when a performance category is
reweighted to zero, we refer readers to Table 13.
Final Action: After considering public comments, we are finalizing
the performance category weighting as proposed at Sec. 414.1370(h).
These final weights are summarized in Table 12.
Table 12--APM Scoring Standard Performance Category Weights--Beginning With the 2018 Performance Period
----------------------------------------------------------------------------------------------------------------
Performance
MIPS performance category APM entity submission Performance category score category
requirement weight (%)
----------------------------------------------------------------------------------------------------------------
Quality............................. The APM Entity will be CMS will assign the same 50
required to submit quality quality category
measures to CMS as required performance score to each
by the MIPS APM. Measures TIN/NPI in an APM Entity
available at the close of group based on the APM
the MIPS submission period Entity's total quality
will be used to calculate score, derived from
the MIPS quality available APM quality
performance category score. measures.
If the APM Entity does not
submit any APM required
measures by the MIPS
submission deadline, the
APM Entity will be assigned
a zero.
[[Page 53703]]
Cost................................ The APM Entity group will Not Applicable.............. 0
not be assessed on cost
under the APM scoring
standard.
Improvement Activities.............. MIPS eligible clinicians are CMS will assign the same 20
not required to report improvement activities
improvement activities score to each APM Entity
data; if the CMS-assigned based on the activities
improvement activities involved in participation
score is below the maximum in the MIPS APM. APM
improvement activities Entities will receive a
score, APM Entities will minimum of one half of the
have the opportunity to total possible points. In
submit additional the event that the assigned
improvement activities to score does not represent
raise the APM Entity the maximum improvement
improvement activity score. activities score, the APM
Entity will have the
opportunity to report
additional improvement
activities to add points to
the APM Entity level score.
Advancing Care Information.......... Each MIPS eligible clinician CMS will attribute the same 30
in the APM Entity group is score to each MIPS eligible
required to report clinician in the APM Entity
advancing care information group. This score will be
to MIPS through either the highest score
group TIN or individual attributable to the TIN/NPI
reporting. combination of each MIPS
eligible clinician, which
may be derived from either
group or individual
reporting. The scores
attributed to each MIPS
eligible clinician will be
averaged for a single APM
Entity score.
----------------------------------------------------------------------------------------------------------------
There could be instances where an Other MIPS APM has no measures
available to score for the quality performance category for a MIPS
performance period; for example, it is possible that none of the Other
MIPS APM's measures would be available for calculating a quality
performance category score by or shortly after the close of the MIPS
submission period because the measures were removed due to changes in
clinical practice guidelines. In addition, as we explained in the CY
2018 Quality Payment Program proposed rule, the MIPS eligible
clinicians in an APM Entity may qualify for a zero percent weighting
for the advancing care information performance category (82 FR 30087
through 30088). In such instances, under the APM scoring standard, we
proposed to reweight the affected performance category to zero, in
accordance with section 1848(q)(5)(F) of the Act (82 FR 30088 through
30089).
If the quality performance category is reweighted to zero, we
proposed to reweight the improvement activities and advancing care
information performance categories to 25 and 75 percent, respectively.
If the advancing care information performance category is reweighted to
zero, the quality performance category weight would be increased to 80
percent.
We sought comment on these proposals. The following is a summary of
the comments on these proposals and our responses:
Comment: A few commenters suggested that in the event an entire
performance category is unable to be scored, its weight should be
evenly distributed among the remaining performance categories rather
than shifted entirely to either quality or advancing care information.
Response: In a situation where either the quality or advancing care
information performance categories have been reweighted to zero, we do
not believe it is appropriate to give any more weight to the
improvement activities performance category because under the APM
scoring standard this performance category score is automatically
assigned, rather than reported and scored like the advancing care
information and quality performance categories.
Comment: Some commenters suggested that in the event a second
performance category (in addition to cost) is weighted to zero, the APM
Entity or all APM Entities in the APM should receive a neutral score
for that performance period.
Response: Our proposed policy of scoring an APM Entity for which
two or more performance categories are available to be scored is
consistent with policies finalized in the CY 2017 Quality Payment
Program final rule, and scoring policies under regular MIPS. We also
continue to believe that it is appropriate to use the available
performance category scores in order to encourage the continued
performance and reporting of measures in any performance category that
is available, including the advancing care information performance
category, which is not necessarily required under the terms of MIPS
APMs. Having data for as many performance categories as possible is
also important for purposes of calculating improvement scoring in
future years and in helping to calculate MIPS payment adjustments.
Final Action: After considering public comments, we are finalizing
our proposals. If the quality performance category is reweighted to
zero, we will reweight the improvement activities and advancing care
information performance categories to 25 and 75 percent, respectively.
If the advancing care information performance category is reweighted to
zero, the quality performance category weight will be increased to 80
percent and the improvement activities performance category will remain
at 20 percent. The final policies are summarized in Table 13. We are
finalizing this policy at Sec. 414.1370(h)(5).
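For illustration only, the following Python sketch (not part of the rule; the
function name and category labels are our own) returns the finalized APM
scoring standard category weights, including the two reweighting cases
summarized in Table 13.

    def apm_scoring_standard_weights(quality_scorable=True, aci_scorable=True):
        """Return APM scoring standard performance category weights (percent).

        Base weights are 50/0/20/30 for quality, cost, improvement activities,
        and advancing care information (ACI). If quality is reweighted to zero,
        improvement activities and ACI become 25 and 75; if ACI is reweighted
        to zero, quality becomes 80 and improvement activities stays at 20."""
        if quality_scorable and aci_scorable:
            return {"quality": 50, "cost": 0, "improvement_activities": 20,
                    "advancing_care_information": 30}
        if not quality_scorable and aci_scorable:
            return {"quality": 0, "cost": 0, "improvement_activities": 25,
                    "advancing_care_information": 75}
        if quality_scorable and not aci_scorable:
            return {"quality": 80, "cost": 0, "improvement_activities": 20,
                    "advancing_care_information": 0}
        # The policy described here does not address both categories being unscorable.
        raise ValueError("Reweighting when both quality and ACI are unscorable "
                         "is not addressed by this policy.")

    print(apm_scoring_standard_weights(quality_scorable=False))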
[[Page 53704]]
Table 13--APM Scoring Standard Performance Category Weights for MIPS APMs With Performance Categories Weighted to 0--Beginning With the 2018 Performance Period

Quality (weight if quality is reweighted to 0: 0 percent; weight if advancing care information is reweighted to 0: 80 percent)
    APM Entity submission requirement: The APM Entity will not be assessed on quality
    under MIPS if no quality data are available at the close of the MIPS submission
    period. The APM Entity will submit quality measures to CMS as required by the
    Other MIPS APM.
    Performance category score: CMS will assign the same quality category performance
    score to each TIN/NPI in an APM Entity group based on the APM Entity's total
    quality score, derived from available APM quality measures.

Cost (weight if quality is reweighted to 0: 0 percent; weight if advancing care information is reweighted to 0: 0 percent)
    APM Entity submission requirement: The APM Entity group will not be assessed on
    cost under the APM scoring standard.
    Performance category score: Not Applicable.

Improvement Activities (weight if quality is reweighted to 0: 25 percent; weight if advancing care information is reweighted to 0: 20 percent)
    APM Entity submission requirement: MIPS eligible clinicians are not required to
    report improvement activities data unless the CMS-assigned improvement activities
    score is below the maximum improvement activities score.
    Performance category score: CMS will assign the same improvement activities score
    to each APM Entity group based on the activities involved in participation in the
    MIPS APM. APM Entities will receive a minimum of one half of the total possible
    points. In the event that the assigned score does not represent the maximum
    improvement activities score, the APM Entity will have the opportunity to report
    additional improvement activities to add points to the APM Entity level score.

Advancing Care Information (weight if quality is reweighted to 0: 75 percent; weight if advancing care information is reweighted to 0: 0 percent)
    APM Entity submission requirement: Each MIPS eligible clinician in the APM Entity
    group reports advancing care information to MIPS through either group TIN or
    individual reporting.
    Performance category score: CMS will attribute the same score to each MIPS
    eligible clinician in the APM Entity group. This score will be the highest score
    attributable to the TIN/NPI combination of each MIPS eligible clinician, which may
    be derived from either group or individual reporting. The scores attributed to
    each MIPS eligible clinician will be averaged for a single APM Entity score. For
    participants in the Shared Savings Program, advancing care information will be
    scored at the TIN level. A TIN will be exempt from reporting only if all MIPS
    eligible clinicians billing through the TIN qualify for a zero percent weighting
    of the advancing care information performance category.
(b) Risk Factor Score
Section 1848(q)(1)(G) of the Act requires us to consider risk
factors in our scoring methodology. Specifically, that section provides
that the Secretary, on an ongoing basis, shall, as the Secretary
determines appropriate and based on individuals' health status and
other risk factors, assess appropriate adjustments to quality measures,
cost measures, and other measures used under MIPS and assess and
implement appropriate adjustments to payment adjustments, final scores,
scores for performance categories, or scores for measures or activities
under the MIPS.
We did not create a separate methodology to adjust for patient
risk factors for purposes of the APM scoring standard. However, we
refer readers to section II.C.7.b.(1)(b) of this final rule with
comment period for a description of the complex patient bonus and its
application to APM Entities.
(c) Small Practice Bonus
We believe an adjustment for eligible clinicians in small practices
(referred to herein as the small practice bonus) is appropriate to
recognize barriers faced by small practices, such as unique challenges
related to financial and other resources, environmental factors, and
access to health information technology, and to incentivize eligible
clinicians in small practices to participate in the Quality Payment
Program and to overcome any performance discrepancy due to practice
size.
We refer readers to section II.C.7.b.(1)(c) of this final rule with
comment period for a discussion of the small practice adjustment and
its application to APM Entities.
(d) Final Score Methodology
In the CY 2017 Quality Payment Program final rule, we finalized the
methodology for calculating a final score of 0-100 based on the four
performance categories (81 FR 77320). We refer readers to section
II.C.7.b. of this final rule for a discussion of the changes we are
making to the final score methodology. We are codifying our policy
pertaining to the calculation of the total APM Entity score at Sec.
414.1370(i).
(5) MIPS APM Performance Feedback
In the CY 2017 Quality Payment Program final rule, we finalized
that all MIPS eligible clinicians scored under
[[Page 53705]]
the APM scoring standard will receive performance feedback as specified
under section 1848(q)(12) of the Act on the quality and cost
performance categories to the extent applicable, based on data
collected in the September 2016 QRUR, unless they did not have data
included in the September 2016 QRUR. Those eligible clinicians without
data included in the September 2016 QRUR will not receive any
performance feedback until performance data are available for feedback
(81 FR 77270).
Beginning with the 2018 performance period, we proposed that MIPS
eligible clinicians whose MIPS payment adjustment is based on their
score under the APM scoring standard will receive performance feedback
as specified under section 1848(q)(12) of the Act for the quality,
advancing care information, and improvement activities performance
categories to the extent data are available for the MIPS performance
period (82 FR 30089 through 30090). Further, we proposed that in cases
where performance data are not available for a MIPS APM performance
category or the MIPS APM performance category has been weighted to zero
for that performance period, we would not provide performance feedback
on that MIPS performance category.
We believe that with an APM Entity's finite resources for engaging
in efforts to improve quality and lower costs for a specified
beneficiary population, the incentives of the APM must take priority
over those offered by MIPS in order to ensure that the goals and
evaluation associated with the APM are as clear and free of confounding
factors as possible. The potential for different, conflicting messages
in performance feedback provided by the APMs and that provided by MIPS
may create uncertainty for MIPS eligible clinicians who are attempting
to strategically transform their respective practices and succeed under
the terms of the APM. Accordingly, under sections 1115A(d)(1) and
1899(f) of the Act, for all MIPS performance periods we proposed to
waive--for MIPS eligible clinicians participating in MIPS APMs
authorized under sections 1115A and 1899 of the Act, respectively--the
requirement under section 1848(q)(12)(A)(i)(I) of the Act to provide
performance feedback for the cost performance category.
We requested comment on these proposals to waive requirements for
performance feedback on the cost performance category beginning in the
2018 performance year, and for the other performance categories in
years for which the weight for those categories has been reweighted to
zero.
The following is a summary of the public comments we received and
our responses:
Comment: Several commenters supported our proposal.
Response: We thank commenters for their support of this policy.
Final Action: After considering public comments, we are finalizing
the policy on performance feedback for MIPS eligible clinicians
participating in MIPS APMs as proposed.
(6) Summary of Final Policies
In summary, we are finalizing the following policies in this
section:
We are finalizing our proposal to define full TIN APM at
Sec. 414.1305 and to amend Sec. 414.1370(e) to identify the four
assessment dates that would be used to identify the APM Entity group
for full TIN APMs for purposes of the APM scoring standard, and to
specify that the December 31 date will be used only to identify MIPS
eligible clinicians on the APM Entity's Participation List for a MIPS
APM that is a full TIN APM in order to add them to the APM Entity group
that is scored under the APM scoring standard. We will use this fourth
assessment date of December 31 only to extend the APM scoring standard
to those MIPS eligible clinicians participating in MIPS APMs that are
full TIN APMs, ensuring that a MIPS eligible clinician who joins a full
TIN APM late in the performance period would be scored under the APM
scoring standard.
We are finalizing our proposal to continue to weight the
cost performance category under the APM scoring standard for Web
Interface reporters at zero percent beginning with the 2020 MIPS
payment year.
We are finalizing our proposal not to take improvement
into account for performance scores in the cost performance category
for Web Interface reporters beginning with the 2020 MIPS payment year
to align with our final policy of weighting the cost performance
category at zero percent.
We are finalizing our proposal to score the CAHPS for ACOs
survey in addition to the Web Interface measures to calculate the MIPS
APM quality performance category score for Web Interface reporters
(including MIPS eligible clinicians participating in the Shared Savings
Program and Next Generation ACO Model), beginning in the 2018
performance period at Sec. 414.1370(e).
We are finalizing our proposal that, beginning with the
2018 performance period, MIPS eligible clinicians in Web Interface
reporters may receive bonus points under the APM scoring standard for
submitting the CAHPS for ACOs survey.
We are finalizing our proposal to calculate the quality
improvement score for MIPS eligible clinicians submitting quality
measures via the Web Interface using the methodology described in
section II.C.7.a.(2) of this final rule with comment period at Sec.
414.1370(g)(1)(i)(B).
We are finalizing our proposal to calculate the total
quality percent score for MIPS eligible clinicians using the Web
Interface according to the methodology described in section
II.C.7.a.(2) of this final rule with comment period at Sec.
414.1370(g)(1)(i)(C).
We are finalizing our proposal to establish a separate
MIPS APM measure list of quality measures for each Other MIPS APM,
which will be the quality measure list used for purposes of the APM
scoring standard for that Other MIPS APM.
We are finalizing our proposals to calculate the MIPS
quality performance category score for Other MIPS APMs using MIPS APM-
specific quality measures. For purposes of the APM scoring standard, we
will score only measures that: (1) Are tied to payment as described
under the terms of the APM, (2) are available for scoring near the
close of the MIPS submission period, (3) have a minimum of 20 cases
available for reporting, and (4) have an available benchmark at Sec.
414.1370(g)(1)(ii)(A)(1) through (4).
We are finalizing our proposal to only use the MIPS APM
quality measure data that are submitted by the close of the MIPS
submission period and are available for scoring in time for inclusion
to calculate a MIPS quality performance category score.
We are finalizing our proposal that, for the APM scoring
standard, the benchmark score used for a quality measure would be the
benchmark used in the MIPS APM for calculation of the performance based
payments, where such a benchmark is available. If the APM does not
produce a benchmark score for a reportable measure that is included on
the APM measures list, we will use the benchmark score for the measure
that is used for the MIPS quality performance category generally for
that performance period, provided the measure specifications for the
measure are the same under both the MIPS final list and the MIPS APM
measures list.
We are finalizing our proposal that the minimum number of
quality measures required to be reported for the
[[Page 53706]]
APM scoring standard would be the minimum number of quality measures
that are required within the MIPS APM and are collected and available
in time to be included in the calculation for the APM Entity score
under the APM scoring standard. We are also finalizing our proposal
that if an APM Entity submits some, but not all, of the measures
required by the MIPS APM by the close of the MIPS submission period,
the APM Entity would receive points for the measures that were
submitted, but would receive a score of zero for each remaining measure
between the number of measures reported and the number of measures
required by the APM that were available for scoring.
We are finalizing our proposal that the benchmark score
used for a quality measure would be the benchmark used by the MIPS APM
for calculation of the performance based payments within the APM, if
possible, in order to best align the measure performance outcomes
between the Quality Payment Program and the APM. We are finalizing our
proposal that for measures that are pay-for-reporting or that do not
measure performance on a continuum of performance, we will consider
these measures to be lacking a benchmark and treat them as such.
We are finalizing our proposal to score quality measure
performance under the APM scoring standard using a percentile
distribution, separated by decile categories, as described in the
finalized MIPS quality scoring methodology. We are also finalizing our
proposal to use a graduated points-assignment approach, where a measure
is assigned a continuum of points out to one decimal place, based on
its place in the decile.
We are finalizing our proposal that each APM Entity that
reports on quality measures would receive between 1 and 10 achievement
points for each measure reported that can be reliably scored against a
benchmark, up to the number of measures that are required to be
reported by the APM.
We are finalizing our proposal that MIPS eligible
clinicians in APM Entities participating in MIPS APMs, like other MIPS
eligible clinicians, would be eligible to receive bonus points for the
MIPS quality performance category for reporting on high priority
measures or measures submitted via CEHRT. For each Other MIPS APM, we
are finalizing our proposal to identify whether any of the available
measures meets the criteria to receive a bonus, and add the bonus
points to the quality achievement points.
We are finalizing our proposal to score improvement as
well as achievement in the quality performance category beginning in
the 2018 performance period at Sec. 414.1370(g)(1)(ii)(B). For the APM
scoring standard, the improvement percentage points will be awarded
based on the following formula:
Quality Improvement Score = (Absolute Improvement/Previous Year Quality
Performance Category Percent Score Prior to Bonus Points) * 10
We are finalizing our proposal that the APM Entity's total
quality performance category score would be equal to [(achievement
points + bonus points)/total available achievement points] + quality
improvement score; an illustrative calculation follows this summary. We
are codifying this policy at Sec. 414.1370(g)(1)(ii)(C).
We are finalizing at Sec. 414.1370(g)(4)(iii) our
proposal for scoring the advancing care information performance
category when a MIPS eligible clinician qualifies for a zero percent
weighting of the advancing care information performance category. This
policy will apply beginning with the 2019 payment year.
We are finalizing our proposal to maintain the cost
performance category weight of zero for all MIPS APMs under the APM
scoring standard for the 2020 MIPS payment year and subsequent MIPS
payment years. Because the cost performance category will be reweighted
to zero, that weight will be redistributed to other performance
categories. We are finalizing our policy to align the Other MIPS APM
performance category weights with those for Web Interface reporters and
weight the quality performance category to 50 percent, the improvement
activities performance category to 20 percent, and the advancing care
information performance category to 30 percent of the APM Entity final
score for all MIPS APMs at Sec. 414.1370(h)(1) through (4).
We are finalizing our proposal to reweight the quality
performance category to zero under the APM scoring standard in
instances where none of a MIPS APM's measures will be available for
calculating a quality performance category score by or shortly after
the close of the MIPS submission period, for example, due to changes in
clinical practice guidelines. In addition, we are finalizing that the
advancing care information performance category will be reweighted to
zero where the MIPS eligible clinicians in an APM Entity qualify for a
zero percent weighting for that performance category.
We are finalizing our proposal that MIPS eligible
clinicians whose MIPS payment adjustment is based on their score under
the APM scoring standard will receive performance feedback as specified
under section 1848(q)(12) of the Act for the quality, advancing care
information, and improvement activities performance categories to the
extent data are available for the MIPS performance period. Further, we
are finalizing our proposal that in cases where the MIPS APM
performance category has been weighted to zero for a performance
period, we will not provide performance feedback on that MIPS
performance category. We are also finalizing our proposal to waive,
using our authority under sections 1115A(d)(1) and 1899(f) of the Act,
the requirement under section 1848(q)(12)(A)(i)(I) of the Act to
provide performance feedback for the cost performance category for MIPS
eligible clinicians participating in MIPS APMs authorized under
sections 1115A and 1899 of the Act, respectively; this waiver will be
applicable in all years, regardless of the availability of cost
performance data for MIPS eligible clinicians participating in these
MIPS APMs.
We are finalizing, as discussed in section II.C.4.b. of this final rule
with comment period, our policy to apply the APM scoring standard score
when calculating the MIPS payment adjustment for MIPS eligible clinicians
participating in MIPS APMs, even for those whose TINs are participating
in a virtual group. We are codifying this policy at Sec.
414.1370(f)(2).
We are not finalizing our proposal to amend Sec.
414.1370(b)(4)(i).
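To make the quality scoring mechanics summarized above more concrete, the
following Python sketch is offered for illustration only; it is not part of the
regulation, the benchmark deciles, measure rates, and scores used in the example
are hypothetical, and the actual MIPS benchmarking, bonus, and improvement rules
contain additional detail described elsewhere in this final rule with comment
period. The sketch assigns graduated, decile-based achievement points, computes
the quality improvement score from the formula above, and combines them into a
total quality performance category percent score.

    import bisect

    def measure_achievement_points(performance_rate, decile_breaks):
        """Assign 1-10 achievement points for one measure using a graduated,
        decile-based approach: the whole number is the decile reached and the
        fraction (to one decimal place) reflects position within that decile.
        decile_breaks: nine ascending benchmark values; a rate at or above the
        last value earns the full 10 points. Illustrative only."""
        decile = bisect.bisect_right(decile_breaks, performance_rate) + 1  # 1-10
        if decile >= 10:
            return 10.0
        lower = decile_breaks[decile - 2] if decile > 1 else 0.0
        upper = decile_breaks[decile - 1]
        frac = (performance_rate - lower) / (upper - lower) if upper > lower else 0.0
        return round(decile + 0.9 * frac, 1)

    def quality_improvement_score(current_pct, previous_pct):
        """Quality Improvement Score = (absolute improvement / previous year
        quality performance category percent score prior to bonus points) * 10.
        Floored at zero and capped at 10 percentage points here for illustration."""
        if previous_pct <= 0:
            return 0.0
        raw = (current_pct - previous_pct) / previous_pct * 10.0
        return max(0.0, min(10.0, raw))

    def total_quality_category_percent(achievement_pts, bonus_pts,
                                       total_available_pts, improvement_pts):
        """[(achievement points + bonus points) / total available achievement
        points], expressed as a percent, plus the improvement percentage points,
        capped at 100 percent (cap assumed here for illustration)."""
        base = (achievement_pts + bonus_pts) / total_available_pts * 100.0
        return min(100.0, base + improvement_pts)

    # Hypothetical example: an APM Entity reports four required measures.
    breaks = [20, 30, 40, 50, 60, 70, 80, 90, 95]   # hypothetical benchmark deciles
    rates = [55, 72, 91, 96]                        # hypothetical measure performance rates
    achievement = sum(measure_achievement_points(r, breaks) for r in rates)
    improvement = quality_improvement_score(current_pct=68.0, previous_pct=60.0)
    print(total_quality_category_percent(achievement, bonus_pts=2.0,
                                         total_available_pts=40.0,
                                         improvement_pts=improvement))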
(7) Measure Sets
We sought comment on Tables 14, 15, and 16 in the CY 2018 Quality
Payment Program proposed rule (82 FR 30091 through 30095), which
outlined the measures being introduced for notice and comment, and
would serve as the measure set used by participants in the identified
MIPS APMs in order to create a MIPS score under the APM scoring
standard, as described in section II.C.6.g.(3)(b)(ii) of this final
rule with comment period.
The following is a summary of the public comments we received and
our responses:
Comment: One commenter requested that CMS include more immunization
measures for all MIPS APMs, specifically the Comprehensive ESRD Care
and Comprehensive Primary Care Plus (CPC+) APMs:
NQF #0041/PQRS #110: Influenza Immunization in the ESRD
Population
NQF #0043/PQRS #111: Pneumococcal Vaccination Status
For the Comprehensive Primary Care Plus (CPC+) APM:26
[[Page 53707]]
++ NQF #0041/PQRS #110: Influenza Immunization
++ NQF #0043/PQRS #111: Pneumonia Vaccination Status for Older Adults
Response: The MIPS APM measures list is comprised of only measures
that are already in effect under the terms of the MIPS APMs. The
addition of measures to MIPS APMs is a function of each APM's model
design and objectives and is determined by the terms of each MIPS APM.
Comment: One commenter objected to the use of CMS 166v6 (MIPS ID
312) Use of Imaging Studies for Low Back Pain for the CPC+ Model. The
commenter suggested the requirements of the measure to be overly
general, and that the measure may cause the intervention to be
inappropriately applied because exclusion criteria were too limited and
not all indications for the intervention were included in the measure
requirements. The commenter further objected to the benchmark for the
measure, which requires 100 percent performance to achieve a perfect
score.
Response: The MIPS APM measure list is comprised of measures that
are used under the terms of each MIPS APM. Because this measure has
been removed from the CPC+ measure list for the 2018 performance year,
we will also be removing it from the 2018 MIPS APM measure list for APM
Entities participating in the CPC+ Model.
Comment: One commenter objected to the use of CMS 156v5 (MIPS ID
238) Use of High-Risk Medications in the Elderly (inverse metric) for
the CPC+ Model. The commenter stated that the benchmark thresholds are
effectively unattainable and therefore may reduce the incentive for
clinicians to strive for performance beyond the minimum 50th
percentile; further, the commenter stated that with an 80th percentile
benchmark of 0.01 percent, clinicians do not believe that the benchmark
is valid or appropriate for the best interests of their varied patient
population.
Response: The MIPS APM measure list is comprised of measures that
are used under the terms of each MIPS APM. Because this measure has
been removed from the Comprehensive Primary Care Plus measure list for
the 2018 performance year, we will also be removing it from the 2018
MIPS APM measure list for APM Entities participating in CPC+.
Comment: One commenter objected to the use of CMS 131v5 (MIPS ID
117) Diabetes: Eye Exam for CPC+ because the benchmarks for the measure
may be appropriate for ophthalmologists or optometrists, but the 80th
percentile decile of 99.99 percent is inappropriate for primary care
providers like those participating in CPC+.
Response: CPC+ uses the MIPS benchmarks for electronic clinical
quality measures (eCQMs). These benchmarks are based on data reported
to CMS by all clinicians--primary care and specialists. The 2017
benchmarks were based on data submitted in 2015 to the Physician
Quality Reporting System. The percentile standards mentioned in the
comment are MIPS percentile standards.
Final Action: After considering public comments, we are finalizing
the MIPS APM measure sets as follows in Tables 14, 15, and 16. We note
that some proposed measures have been removed from these MIPS APM
measure lists because the measures have been removed from use under the
terms of the individual APM, and therefore in order to maintain
alignment between the APM scoring standard and the APMs, we have also
removed these measures from the MIPS APM measures list. We have also
updated the below measure list to reflect updates to measure
descriptions provided by the measure stewards, as well as corrected
National Quality Strategy Domains in alignment with the most recent
MIPS APM measure lists.
Table 14--MIPS APM Measures List--Oncology Care Model

Measure: Risk-adjusted proportion of patients with all-cause hospital admissions within the 6-month episode.
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of OCM-attributed FFS beneficiaries who had an acute-care
    hospital stay during the measurement period.
    Primary measure steward: Not Applicable.

Measure: Risk-adjusted proportion of patients with all-cause ED visits or observation stays that did not result in a hospital admission within the 6-month episode.
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of OCM-attributed FFS beneficiaries who had an ER visit
    that did not result in a hospital stay during the measurement period.
    Primary measure steward: Not Applicable.

Measure: Proportion of patients who died who were admitted to hospice for 3 days or more.
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of OCM-attributed FFS beneficiaries who died and spent at
    least 3 days in hospice during the measurement time period.
    Primary measure steward: Not Applicable.

Measure: Oncology: Medical and Radiation--Pain Intensity Quantified.
    NQF/Quality No.: 0384/143. National Quality Strategy domain: Person and Caregiver Centered Experience.
    Description: Percentage of patient visits, regardless of patient age, with a
    diagnosis of cancer currently receiving chemotherapy or radiation therapy in which
    pain intensity is quantified.
    Primary measure steward: Physician Consortium for Performance Improvement Foundation (PCPI).

Measure: Oncology: Medical and Radiation--Plan of Care for Pain.
    NQF/Quality No.: 0383/144. National Quality Strategy domain: Person and Caregiver Centered Experience.
    Description: Percentage of visits for patients, regardless of age, with a
    diagnosis of cancer currently receiving chemotherapy or radiation therapy who
    report having pain with a documented plan of care to address pain.
    Primary measure steward: American Society of Clinical Oncology.

Measure: Preventive Care and Screening: Screening for Depression and Follow-Up Plan.
    NQF/Quality No.: 0418/134. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 12 and older screened for depression on
    the date of the encounter using an age appropriate standardized depression
    screening tool and, if positive, a follow-up plan is documented on the date of the
    positive screen.
    Primary measure steward: Centers for Medicare & Medicaid Services.

Measure: Patient-Reported Experience of Care.
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Person and Caregiver Centered Experience.
    Description: Summary/survey measures may include: overall measure of patient
    experience; exchanging information with patients; access; shared decision making;
    enabling self-management; affective communication.
    Primary measure steward: Not Applicable.
[[Page 53708]]
Measure: Prostate Cancer: Adjuvant Hormonal Therapy for High or Very High Risk Prostate Cancer.
    NQF/Quality No.: 0390/104. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients, regardless of age, with a diagnosis of
    prostate cancer at high or very high risk of recurrence receiving external beam
    radiotherapy to the prostate who were prescribed adjuvant hormonal therapy (GnRH
    [gonadotropin releasing hormone] agonist or antagonist).
    Primary measure steward: American Urological Association Education and Research.

Measure: Adjuvant chemotherapy is recommended or administered within 4 months (120 days) of diagnosis to patients under the age of 80 with AJCC III (lymph node positive) colon cancer.
    NQF/Quality No.: 0223. National Quality Strategy domain: Communication and Care Coordination.
    Description: Percentage of patients under the age of 80 with AJCC III (lymph node
    positive) colon cancer for whom adjuvant chemotherapy is recommended and not
    received or administered within 4 months (120 days) of diagnosis.
    Primary measure steward: Commission on Cancer, American College of Surgeons.

Measure: Combination chemotherapy is recommended or administered within 4 months (120 days) of diagnosis for women under 70 with AJCC T1cN0M0, or Stage IB-III hormone receptor negative breast cancer.
    NQF/Quality No.: 0559. National Quality Strategy domain: Communication and Care Coordination.
    Description: Percentage of female patients, age >18 at diagnosis, who have their
    first diagnosis of breast cancer (epithelial malignancy), at AJCC stage T1cN0M0
    (tumor greater than 1 cm), or Stage IB-III, whose primary tumor is progesterone
    and estrogen receptor negative, recommended for multiagent chemotherapy
    (recommended or administered) within 4 months (120 days) of diagnosis.
    Primary measure steward: Commission on Cancer, American College of Surgeons.

Measure: Trastuzumab administered to patients with AJCC stage I (T1c)-III and human epidermal growth factor receptor 2 (HER2) positive breast cancer who receive adjuvant chemotherapy.
    NQF/Quality No.: 1858/450. National Quality Strategy domain: Efficiency and Cost Reduction.
    Description: Proportion of female patients (aged 18 years and older) with AJCC
    stage I (T1c)-III, human epidermal growth factor receptor 2 (HER2) positive breast
    cancer receiving adjuvant chemotherapy.
    Primary measure steward: American Society of Clinical Oncology.

Measure: Breast Cancer: Hormonal Therapy for Stage I (T1b)-IIIC Estrogen Receptor/Progesterone Receptor (ER/PR) Positive Breast Cancer.
    NQF/Quality No.: 0387. National Quality Strategy domain: Communication and Care Coordination.
    Description: Percentage of female patients aged 18 years and older with Stage I
    (T1b) through IIIC, ER or PR positive breast cancer who were prescribed tamoxifen
    or aromatase inhibitor (AI) during the 12-month reporting period.
    Primary measure steward: AMA-convened Physician Consortium for Performance Improvement.

Measure: Documentation of Current Medications in the Medical Record.
    NQF/Quality No.: 0419/130. National Quality Strategy domain: Patient Safety.
    Description: Percentage of visits for patients aged 18 years and older for which
    the eligible clinician attests to documenting a list of current medications using
    all immediate resources available on the date of the encounter. This list must
    include ALL known prescriptions, over-the-counters, herbals, and
    vitamin/mineral/dietary supplements AND must contain the medications' name,
    dosage, frequency and route of administration.
    Primary measure steward: Centers for Medicare & Medicaid Services.
Table 15--MIPS APM Measures List--Comprehensive ESRD Care

Measure: ESCO Standardized Mortality Ratio.
    NQF/Quality No.: 0369/154. National Quality Strategy domain: Patient Safety.
    Description: This measure is calculated as a ratio but can also be expressed as a rate.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Falls: Screening, Risk Assessment and Plan of Care to Prevent Future Falls.
    NQF/Quality No.: 0101/154. National Quality Strategy domain: Communication and Care Coordination.
    Description: (A) Screening for Future Fall Risk: Patients who were screened for
    future fall risk at least once within 12 months. (B) Multifactorial Falls Risk
    Assessment: Patients at risk of future fall who had a multifactorial risk
    assessment for falls completed within 12 months. (C) Plan of Care to Prevent
    Future Falls: Patients at risk of future fall with a plan of care for falls
    prevention documented within 12 months.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Advance Care Plan.
    NQF/Quality No.: 0326/47. National Quality Strategy domain: Patient Safety.
    Description: Percentage of patients aged 65 years and older who have an advance
    care plan or surrogate decision maker documented in the medical record or
    documentation in the medical record that an advance care plan was discussed but
    the patient did not wish or was not able to name a surrogate decision maker or
    provide an advance care plan.
    Primary measure steward: National Committee for Quality Assurance.
[[Page 53709]]
Measure: ICH-CAHPS: Nephrologists' Communication and Caring.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Summary/survey measures may include: getting timely care,
    appointments, and information; how well providers communicate; patients' rating of
    provider; access to specialists; health promotion and education; shared
    decision-making; health status and functional status; courteous and helpful office
    staff; care coordination; between visit communication; helping you to take
    medications as directed; and stewardship of patient resources.
    Primary measure steward: Agency for Healthcare Research and Quality.

Measure: ICH-CAHPS: Rating of Dialysis Center.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Comparison of services and quality of care that dialysis facilities
    provide from the perspective of ESRD patients receiving in-center hemodialysis
    care. Patients will assess their dialysis providers, including nephrologists and
    medical and non-medical staff, the quality of dialysis care they receive, and
    information sharing about their disease.
    Primary measure steward: Not Applicable.

Measure: ICH-CAHPS: Quality of Dialysis Center Care and Operations.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Comparison of services and quality of care that dialysis facilities
    provide from the perspective of ESRD patients receiving in-center hemodialysis
    care. Patients will assess their dialysis providers, including nephrologists and
    medical and non-medical staff, the quality of dialysis care they receive, and
    information sharing about their disease.
    Primary measure steward: Agency for Healthcare Research and Quality.

Measure: ICH-CAHPS: Providing Information to Patients.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Comparison of services and quality of care that dialysis facilities
    provide from the perspective of ESRD patients receiving in-center hemodialysis
    care. Patients will assess their dialysis providers, including nephrologists and
    medical and non-medical staff, the quality of dialysis care they receive, and
    information sharing about their disease.
    Primary measure steward: Agency for Healthcare Research and Quality.

Measure: ICH-CAHPS: Rating of Kidney Doctors.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Comparison of services and quality of care that dialysis facilities
    provide from the perspective of ESRD patients receiving in-center hemodialysis
    care. Patients will assess their dialysis providers, including nephrologists and
    medical and non-medical staff, the quality of dialysis care they receive, and
    information sharing about their disease.
    Primary measure steward: Agency for Healthcare Research and Quality.

Measure: ICH-CAHPS: Rating of Dialysis Center Staff.
    NQF/Quality No.: 0258. National Quality Strategy domain: Person and Caregiver Centered Experience and Outcome.
    Description: Comparison of services and quality of care that dialysis facilities
    provide from the perspective of ESRD patients receiving in-center hemodialysis
    care. Patients will assess their dialysis providers, including nephrologists and
    medical and non-medical staff, the quality of dialysis care they receive, and
    information sharing about their disease.
    Primary measure steward: Agency for Healthcare Research and Quality.

Measure: Medication Reconciliation Post Discharge.
    NQF/Quality No.: 0554. National Quality Strategy domain: Communication and Care Coordination.
    Description: The percentage of discharges from any inpatient facility (e.g.,
    hospital, skilled nursing facility, or rehabilitation facility) for patients 18
    years of age and older seen within 30 days following the discharge in the office
    by the physician, prescribing practitioner, registered nurse, or clinical
    pharmacist providing on-going care for whom the discharge medication list was
    reconciled with the current medication list in the outpatient medical record. This
    measure is reported as three rates stratified by age group: Reporting Criteria 1:
    18-64 years of age; Reporting Criteria 2: 65 years and older; Total Rate: all
    patients 18 years of age and older.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Diabetes Care: Eye Exam.
    NQF/Quality No.: 0055/117. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18-75 years of age with diabetes who had a
    retinal or dilated eye exam by an eye care professional during the measurement
    period or a negative retinal exam (no evidence of retinopathy) in the 12 months
    prior to the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Diabetes Care: Foot Exam.
    NQF/Quality No.: 0056/163. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18-75 years of age with diabetes (type 1 and
    type 2) who received a foot exam (visual inspection and sensory exam with
    monofilament and a pulse exam) during the previous measurement year.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Influenza Immunization for the ESRD Population.
    NQF/Quality No.: 0041/110, 0226. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 6 months and older seen for a visit
    between July 1 and March 31 who received an influenza immunization OR who reported
    previous receipt of an influenza immunization.
    Primary measure steward: Kidney Care Quality Alliance (KCQA).

Measure: Pneumococcal Vaccination Status.
    NQF/Quality No.: 0043/111. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients 65 years of age and older who have ever
    received a pneumococcal vaccine.
    Primary measure steward: National Committee for Quality Assurance.
[[Page 53710]]
Measure: Screening for Clinical Depression and Follow-Up Plan.
    NQF/Quality No.: 0418/134. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 12 and older screened for depression on
    the date of the encounter using an age appropriate standardized depression
    screening tool AND if positive, a follow-up plan is documented on the date of the
    positive screen.
    Primary measure steward: Centers for Medicare and Medicaid Services.

Measure: Tobacco Use: Screening and Cessation Intervention.
    NQF/Quality No.: 0028/226. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 18 years and older who were screened for
    tobacco use one or more times within 24 months AND who received cessation
    counseling intervention if identified as a tobacco user.
    Primary measure steward: Physician Consortium for Performance Improvement Foundation (PCPI).
Table 16--MIPS APM Measures List--Comprehensive Primary Care Plus (CPC+)

Measure: Controlling High Blood Pressure.
    NQF/Quality No.: 0018/236. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18-85 years of age who had a diagnosis of
    hypertension and whose blood pressure was adequately controlled (<140/90 mmHg)
    during the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Diabetes: Eye Exam.
    NQF/Quality No.: 0055/117. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18-75 years of age with diabetes who had a
    retinal or dilated eye exam by an eye care professional during the measurement
    period or a negative retinal exam (no evidence of retinopathy) in the 12 months
    prior to the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Diabetes: Hemoglobin A1c (HbA1c) Poor Control (>9%).
    NQF/Quality No.: 0059/001. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18-75 years of age with diabetes who had
    hemoglobin A1c >9.0% during the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Use of High-Risk Medications in the Elderly.
    NQF/Quality No.: 0022/238. National Quality Strategy domain: Patient Safety.
    Description: Percentage of patients 66 years of age and older who were ordered
    high-risk medications. Two rates are reported: a. Percentage of patients who were
    ordered at least one high risk medication. b. Percentage of patients who were
    ordered at least two different high risk medications.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Dementia: Cognitive Assessment.
    NQF/Quality No.: 2872/281. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients, regardless of age, with a diagnosis of
    dementia for whom an assessment of cognition is performed and the results reviewed
    at least once within a 12-month period.
    Primary measure steward: Physician Consortium for Performance Improvement Foundation (PCPI).

Measure: Falls: Screening for Future Fall Risk.
    NQF/Quality No.: 0101/318. National Quality Strategy domain: Patient Safety.
    Description: (A) Screening for Future Fall Risk: Patients who were screened for
    future fall risk at least once within 12 months. (B) Multifactorial Falls Risk
    Assessment: Patients at risk of future fall who had a multifactorial risk
    assessment for falls completed within 12 months. (C) Plan of Care to Prevent
    Future Falls: Patients at risk of future fall with a plan of care for falls
    prevention documented within 12 months.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Initiation and Engagement of Alcohol and Other Drug Dependence Treatment.
    NQF/Quality No.: 0004/305. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 13 years of age and older with a new episode
    of alcohol and other drug (AOD) dependence who received the following. Two rates
    are reported: a. Percentage of patients who initiated treatment within 14 days of
    the diagnosis. b. Percentage of patients who initiated treatment and who had two
    or more additional services with an AOD diagnosis within 30 days of the initiation
    visit.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Closing the Referral Loop: Receipt of Specialist Report.
    NQF/Quality No.: Not Applicable/374. National Quality Strategy domain: Communication and Care Coordination.
    Description: Percentage of patients with referrals, regardless of age, for which
    the referring provider receives a report from the provider to whom the patient was
    referred.
    Primary measure steward: Centers for Medicare and Medicaid Services.

Measure: Cervical Cancer Screening.
    NQF/Quality No.: 0032/309. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of women 21-64 years of age, who were screened for
    cervical cancer using either of the following criteria: women age 21-64 who had
    cervical cytology performed every 3 years; women age 30-64 who had cervical
    cytology/human papillomavirus (HPV) co-testing performed every 5 years.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Colorectal Cancer Screening.
    NQF/Quality No.: 0034/113. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients, 50-75 years of age who had appropriate
    screening for colorectal cancer.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Preventive Care and Screening: Tobacco Use: Screening and Cessation Intervention.
    NQF/Quality No.: 0028/226. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 18 years and older who were screened for
    tobacco use one or more times within 24 months and who received cessation
    counseling intervention if identified as a tobacco user.
    Primary measure steward: Physician Consortium for Performance Improvement Foundation (PCPI).

Measure: Breast Cancer Screening.
    NQF/Quality No.: 2372/112. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of women 50-74 years of age who had a mammogram to screen
    for breast cancer.
    Primary measure steward: National Committee for Quality Assurance.
[[Page 53711]]
Measure: Preventive Care and Screening: Influenza Immunization.
    NQF/Quality No.: 0041/110. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 6 months and older seen for a visit
    between October 1 and March 31 who received an influenza immunization OR who
    reported previous receipt of an influenza immunization.
    Primary measure steward: Physician Consortium for Performance Improvement Foundation (PCPI).

Measure: Pneumonia Vaccination Status for Older Adults.
    NQF/Quality No.: 0043/111. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients 65 years of age and older who have ever
    received a pneumococcal vaccine.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Diabetes: Medical Attention for Nephropathy.
    NQF/Quality No.: 0062/119. National Quality Strategy domain: Effective Clinical Care.
    Description: The percentage of patients 18-75 years of age with diabetes who had a
    nephropathy screening test or evidence of nephropathy during the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Ischemic Vascular Disease (IVD): Use of Aspirin or Another Antiplatelet.
    NQF/Quality No.: 0068/204. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of patients 18 years of age and older who were diagnosed
    with acute myocardial infarction (AMI), coronary artery bypass graft (CABG) or
    percutaneous coronary interventions (PCI) in the 12 months prior to the
    measurement period, or who had an active diagnosis of ischemic vascular disease
    (IVD) during the measurement period, and who had documentation of use of aspirin
    or another antiplatelet during the measurement period.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Preventive Care and Screening: Screening for Depression and Follow-Up Plan.
    NQF/Quality No.: 0418/134. National Quality Strategy domain: Community/Population Health.
    Description: Percentage of patients aged 12 years and older screened for
    depression on the date of the encounter using an age appropriate standardized
    depression screening tool AND if positive, a follow-up plan is documented on the
    date of the positive screen.
    Primary measure steward: Centers for Medicare & Medicaid Services.

Measure: Statin Therapy for the Prevention and Treatment of Cardiovascular Disease.
    NQF/Quality No.: Not Applicable/438. National Quality Strategy domain: Effective Clinical Care.
    Description: Percentage of the following patients--all considered at high risk of
    cardiovascular events--who were prescribed or were on statin therapy during the
    measurement period: (1) adults aged >=21 years who were previously diagnosed with
    or currently have an active diagnosis of clinical atherosclerotic cardiovascular
    disease (ASCVD); OR (2) adults aged >=21 years who have ever had a fasting or
    direct low-density lipoprotein cholesterol (LDL-C) level >=190 mg/dL or were
    previously diagnosed with or currently have an active diagnosis of familial or
    pure hypercholesterolemia; OR (3) adults aged 40-75 years with a diagnosis of
    diabetes with a fasting or direct LDL-C level of 70-189 mg/dL.
    Primary measure steward: Centers for Medicare & Medicaid Services.

Measure: Inpatient Hospital Utilization (IHU).
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Not Applicable.
    Description: For members 18 years of age and older, the risk-adjusted ratio of
    observed to expected acute inpatient discharges during the measurement year
    reported by Surgery, Medicine, and Total.
    Primary measure steward: National Committee for Quality Assurance.

Measure: Emergency Department Utilization (EDU).
    NQF/Quality No.: Not Applicable. National Quality Strategy domain: Not Applicable.
    Description: For members 18 years of age and older, the risk-adjusted ratio of
    observed to expected emergency department (ED) visits during the measurement year.
    Primary measure steward: National Committee for Quality Assurance.

Measure: CAHPS.
    NQF/Quality No.: CPC+ specific; different than CAHPS for MIPS. National Quality Strategy domain: Person and Caregiver-Centered Experience and Outcome.
    Description: CG-CAHPS Survey 3.0.
    Primary measure steward: Agency for Healthcare Research and Quality.
7. MIPS Final Score Methodology
For the 2020 MIPS payment year, we intend to build on the scoring
methodology we finalized for the transition year, which allows for
accountability and alignment across the performance categories and
minimizes burden on MIPS eligible clinicians, while continuing to
prepare MIPS eligible clinicians for the performance threshold required
for the 2021 MIPS payment year. Our rationale for our scoring
methodology continues to be grounded in the understanding that the MIPS
scoring system has many components and numerous moving parts.
As we continue to move forward in implementing the MIPS program, we
strive to balance the statutory requirements and programmatic goals
with the ease of use, stability, and meaningfulness for MIPS eligible
clinicians, while also emphasizing simplicity and scoring that is
understandable for MIPS eligible clinicians. We proposed refinements to
the performance standards, the methodology for determining a score for
each of the four performance categories (the ``performance category
score''), and the methodology for determining a final score based on
the performance category scores (82 FR 30140 through 30142).
We intended to continue the transition of MIPS by proposing the
following policies:
Continuation of many transition year scoring policies in
the quality performance category, with an adjustment to the number of
achievement points available for measures that fail to meet the data
completeness criteria, to encourage MIPS eligible clinicians to meet
data completeness while providing an exception for small practices;
An improvement scoring methodology that rewards MIPS
eligible clinicians who improve their performance in the quality and
cost performance categories;
A new scoring option for the quality and cost performance
categories that allows facility-based MIPS eligible clinicians to be
scored based on their facility's performance;
Special considerations for MIPS eligible clinicians in
small practices or those who care for complex patients; and
Policies that allow multiple pathways for MIPS eligible
clinicians to receive a neutral to positive MIPS payment adjustment.
[[Page 53712]]
We noted that these sets of proposed policies will help clinicians
smoothly transition from the transition year to the 2021 MIPS payment
year, for which the performance threshold (which represents the final
score that would earn a neutral MIPS adjustment) would be either the
mean or median (as selected by the Secretary) of the MIPS final scores
for all MIPS eligible clinicians from a previous period specified by
the Secretary.
Unless otherwise noted, for purposes of this section II.C.7. of the
final rule with comment period on scoring, the term ``MIPS eligible
clinician'' will refer to MIPS eligible clinicians that submit data and
are scored at either the individual- or group-level, including virtual
groups, but will not refer to MIPS eligible clinicians who elect
facility-based scoring. The scoring rules for facility-based
measurement are discussed in section II.C.7.a.(4) of this final rule
with comment period. We also note that the APM scoring standard applies
to APM Entities in MIPS APMs, and those policies take precedence where
applicable; however, where those policies do not apply, scoring for
MIPS eligible clinicians as described in section II.C.7. of this final
rule with comment period will apply. We refer readers to section
II.C.6.g. of this final rule with comment period for additional
information about the APM scoring standard.
a. Converting Measures and Activities Into Performance Category Scores
(1) Policies That Apply Across Multiple Performance Categories
The policies for scoring the 4 performance categories are described
in detail in section II.C.7.a. of this final rule with comment period.
However, as the 4 performance categories collectively create a single
MIPS final score, there are several policies that apply across
categories, which we discuss in section II.C.7.a.(1) of this final rule
with comment period.
(a) Performance Standards
In accordance with section 1848(q)(3) of the Act, in the CY 2017
Quality Payment Program final rule, we finalized performance standards
for the four performance categories. We refer readers to the CY 2017
Quality Payment Program final rule for a description of the performance
standards against which measures and activities in the four performance
categories are scored (81 FR 77271 through 77272).
As discussed in the proposed rule (82 FR 30096 through 30098), we
proposed to add an improvement scoring standard to the quality and cost
performance categories starting with the 2020 MIPS payment year.
(b) Policies Related to Scoring Improvement
(i) Background
In accordance with section 1848(q)(5)(D)(i) of the Act, beginning
with the 2020 MIPS payment year, if data sufficient to measure
improvement are available, the final score methodology shall take into
account improvement of the MIPS eligible clinician in calculating the
performance score for the quality and cost performance categories and
may take into account improvement for the improvement activities and
advancing care information performance categories. In addition, section
1848(q)(3)(B) of the Act provides that the Secretary, in establishing
performance standards for measures and activities for the MIPS
performance categories, shall consider: Historical performance
standards; improvement; and the opportunity for continued improvement.
Section 1848(q)(5)(D)(ii) of the Act also provides that achievement may
be weighted higher than improvement.
In the CY 2017 Quality Payment Program final rule, we summarized
public comments received on the proposed rule regarding potential ways
to incorporate improvement into the scoring methodology moving forward,
including approaches based on methodologies used in the Hospital VBP
Program, the Shared Savings Program, and Medicare Advantage 5-star
Ratings Program (81 FR 77306 through 77308). We did not finalize a
policy at that time on this topic and indicated we would take comments
into account in developing a proposal for future rulemaking.
When considering the applicability of these programs to MIPS, we
looked at the approach that was used to measure improvement for each of
the programs and how improvement was incorporated into the overall
scoring system. An approach that focuses on measure-level comparison
enables a more granular assessment of improvement because performance
on a specific measure can be considered and compared from year to year.
All options that we considered last year use a standard set of measures
that do not provide for choice of measures to assess performance;
therefore, they are better structured to compare changes in performance
based on the same measure from year to year. The aforementioned
programs do not use a category-level approach; however, we believe that
a category-level approach would provide a broader perspective,
particularly in the absence of a standard set of measures, because it
would allow for a more flexible approach that enables MIPS eligible
clinicians to select measures and data submission mechanisms that can
change from year to year and be more appropriate to their practice in a
given year.
We believe that both approaches are viable options for measuring
improvement. Accordingly, we noted that we believe that an appropriate
approach for measuring improvement for the quality performance category
and the cost performance category should consider the unique
characteristics of each performance category rather than necessarily
applying a uniform approach across both performance categories. For the
quality performance category, clinicians are offered a variety of
different measures which can be submitted by different mechanisms,
rather than a standard set of measures or a single data submission
mechanism. For the cost performance category, however, clinicians are
scored on the same set of cost measures to the extent each measure is
applicable and available to them; clinicians cannot choose which cost
measures they will be scored on. In addition, all of the cost measures
are derived from administrative claims data with no additional
submission required by the clinician.
When considering the applicability of these programs to MIPS, we
also considered how scoring improvement is incorporated into the
overall scoring system, including when only achievement or improvement
is incorporated into a final score or when improvement and achievement
are both incorporated into a final score.
We refer readers to the proposed rule (82 FR 30096 through 30098)
where we considered how we might adapt for MIPS the various approaches
used for scoring improvement under the Hospital VBP Program, Medicare
Shared Savings Program, and Medicare Advantage 5-Star rating.
We proposed two different approaches for scoring improvement from
year to year. We proposed to measure improvement at the performance
category level for the quality performance category score (82 FR 30113
through 30114) and refer readers to section II.C.7.a.(2)(i) of this
final rule with comment period for a summary of the comments we
received and our responses. Because clinicians can elect the submission
mechanisms and quality measures that are most
[[Page 53713]]
meaningful to their practice, and these choices can change from year to
year, we want a flexible methodology that allows for improvement
scoring even when the quality measures change. This is particularly
important as we encourage MIPS eligible clinicians to move away from
topped out measures and toward more outcome measures. We do not want
the flexibility that is offered to MIPS eligible clinicians in the
quality performance category to limit clinicians' ability to move
towards outcome measures, or limit our ability to measure improvement.
Our final policies for taking improvement into account as part of the
quality performance category score are addressed in detail in sections
II.C.7.a.(2)(i) and II.C.7.a.(2)(j) of this final rule with comment
period.
We noted our belief that there is reason to adopt a different
methodology for scoring improvement for the cost performance category
from that used for the quality performance category. In contrast to the
quality performance category, for the cost performance category, MIPS
eligible clinicians do not have a choice in measures or submission
mechanisms; rather, all MIPS eligible clinicians are assessed on all
measures based on the availability and applicability of the measure to
their practice, and all measures are derived from administrative claims
data. Therefore, for the cost performance category, we proposed to
measure improvement at the measure level (82 FR 30121). We also noted
that we are statutorily required to measure improvement for the cost
performance category beginning with the second MIPS payment year if
data sufficient to measure improvement is available. Because we had
proposed to weight the cost performance category at zero percent for
the 2018 MIPS performance period/2020 MIPS payment year (82 FR 30047
through 30048), the improvement score for the cost performance category
would not have affected the MIPS final score for the 2018 MIPS
performance period/2020 MIPS payment year and would have been for
informational purposes only. However, as discussed in section
II.C.6.d.(2) of this final rule with comment period, we are adopting
our alternative option to maintain a 10 percent weight for the cost
performance category, and therefore, the cost improvement score will be
reflected in the cost performance category percent score and the final
score for the 2018 MIPS performance period/2020 MIPS payment year.
We did not propose to score improvement in the improvement
activities performance category or the advancing care information
performance category at this time, though we may address improvement
scoring for these performance categories in future rulemaking.
We proposed to amend Sec. 414.1380(a)(1)(i) to add that
improvement scoring is available for the quality performance category
and for the cost performance category at Sec. 414.1380(a)(1)(ii)
beginning with the 2020 MIPS payment year.
We solicited public comment on our proposals to score improvement
for the quality and cost performance categories starting with the 2020
MIPS payment year.
The following is a summary of the public comments received on our
proposals and our responses:
Comment: Several commenters supported CMS's proposal to score
improvement for the quality and cost performance categories to
recognize and reward improvement as well as achievement. A few
commenters supported separate approaches for scoring improvement for
the cost and quality performance categories because of the specific
characteristics of each category. A few commenters recommended that CMS
review the impact of these proposals after the first year of
implementation and refine them as necessary to ensure that they achieve
the intended goal of rewarding improvement and not penalizing high
performers. One commenter supported the incorporation of improvement
scoring because measuring changes in year-to-year performance could
create strong incentives for clinicians to further improve upon the
quality and value of care. One commenter believed that clinicians who
make large gains in their performance could be rewarded and
incentivized toward continuous quality improvement, even for the
highest performers. One commenter believed that scoring improvement
would help solo, small, and rural practices with administrative
challenges by providing incentives and offsetting the upfront costs of
MIPS participation. One commenter believed that scoring improvement could
boost the success of MIPS because it would improve the quality of care
patients receive, reduce the inefficient use of care, increase the
value of care provided, focus on an enhanced reward system relative to
quality and cost as the primary measurements of care efficiency, and
increase clinicians' incentive to drive improvements in the performance
categories. Finally, one commenter recommended continued transparency
for the improvement scoring methodology and calculations because
clinicians should be able to understand their progress in improving
outcomes.
Response: We thank commenters for their support for scoring
improvement for the quality and cost performance categories starting
with the 2020 MIPS payment year. We will implement improvement scoring
beginning with the 2020 MIPS payment year. We intend to evaluate the
implementation of improvement scoring for the quality and cost
performance categories to determine how the policies we establish in
this final rule with comment period are affecting MIPS eligible
clinicians, including high-performing clinicians. We intend to
implement improvement scoring in a transparent manner and we will
address any changes in improvement scoring through future rulemaking.
Please refer to sections II.C.7.a.(2)(i) and II.C.7.a.(3)(a) of this
final rule with comment period for details for our proposals, comments,
and final policies related to the implementation of improvement scoring
for the quality and cost performance categories, respectively.
Comment: Several commenters did not support scoring improvement for
the quality and cost performance categories because they believed it
would add a layer of complexity for clinicians participating in the
MIPS program. Several commenters did not support 2 separate methods for
improvement scoring for quality and cost because this approach would
lead to further complexity in the MIPS program. Several commenters
recommended that CMS delay implementation of improvement scoring and
that CMS seek feedback from stakeholders and analyze data further
because making adjustments after implementation may be difficult.
Commenters also believed that there had not been sufficient discussion
with stakeholders about the challenges for certain specialties, sites
of service, and other participants; that clinicians need more time to
understand the reporting requirements; and that the program's measures
should be stable prior to implementing improvement scoring.
Response: We acknowledge the commenters' concerns with the
complexity that scoring improvement adds to the calculation of the MIPS
quality performance category score. We also acknowledge the commenters'
concerns related to the challenges of improvement scoring for specific
types of clinician practices, the amount of time to understand the
reporting requirements, and the stability of the program's measures.
Section 1848(q)(5)(D)(i) of the Act requires us to take into account
improvement when
[[Page 53714]]
calculating the quality and cost performance category scores beginning
with the 2020 MIPS payment year if data sufficient to measure
improvement is available. We intend to develop additional educational
materials to help explain improvement scoring. We also intend to
monitor implementation of improvement scoring for the quality and cost
performance categories and will address any changes to improvement
scoring through future rulemaking. Please refer to sections
II.C.7.a.(2)(i)(ii) and II.C.7.a.(3)(a)(i) of this final rule with
comment period for more information about our proposal and discussion
related to data sufficiency for the quality and cost performance
categories. We continue to believe that the separate methodologies for
the quality and cost performance categories are warranted given the
unique characteristics of each performance category.
Comment: One commenter recommended that, in future rulemaking, CMS
advance an improvement score proposal for the improvement activities
and advancing care information performance categories that aligns with
the proposal set forth for the quality performance category to measure
improvement at the category level.
Response: We will address improvement scoring for the improvement
activities and advancing care information performance categories, and
alignment as appropriate, in future rulemaking.
Final Action: After consideration of the public comments, we are
finalizing our proposal to amend Sec. 414.1380(a)(1)(i) and Sec.
414.1380(a)(1)(ii) to add that improvement scoring is available for the
quality performance category and the cost performance category
beginning with the 2020 MIPS payment year.
(ii) Data Sufficiency Standard To Measure Improvement
Section 1848(q)(5)(D)(i) of the Act requires us to measure
improvement for the quality and cost performance categories of MIPS if
data sufficient to measure improvement are available, which we
interpret to mean that we would measure improvement when we can
identify data from a current performance period that can be compared to
data from a prior performance period or data that compares performance
from year to year. We proposed that we would measure improvement for
the quality performance category when data is available because there
is a performance category score for the prior performance period (82 FR
30114 through 30116). We also proposed that we would measure
improvement for the cost performance category when data is available,
which is when there is sufficient case volume to provide measurable
data on measures in subsequent years with the same identifier (82 FR
30121). We refer readers to sections II.C.7.a.(2)(i)(ii) and
II.C.7.a.(3)(a)(i) of this final rule with comment period for details
on these proposals, the comments received and our responses, and final
policies.
(c) Scoring Flexibility for ICD-10 Measure Specification Changes During
the Performance Period
The quality and cost performance categories rely on measures that
use detailed measure specifications that include ICD-10-CM/PCS (``ICD-
10'') code sets. We annually issue new ICD-10 coding updates, which are
effective from October 1 through September 30 (https://www.cms.gov/Medicare/Coding/ICD10/ICD10OmbudsmanandICD10CoordinationCenterICC.html). As part of this
update, codes are added as well as removed from the ICD-10 code set.
To provide scoring flexibility for MIPS eligible clinicians and
groups for measures impacted by ICD-10 coding changes in the final
quarter of the Quality Payment Program performance period--which may
render the measures no longer comparable to the historical benchmark--
we proposed in the CY 2018 Quality Payment Program proposed rule (82 FR
30098) to codify at Sec. 414.1380(b)(1)(xviii) that we will
assess performance on measures considered significantly impacted by
ICD-10 updates based only on the first 9 months of the 12-month
performance period (for example, January 1, 2018 through September 30,
2018, for the 2018 MIPS performance period). As discussed in the CY
2018 Quality Payment Program proposed rule (82 FR 30098), we believed
it would be appropriate to assess performance for significantly
impacted measures based on the first 9 months of the performance
period, rather than the full 12 months because the indicated
performance for the last quarter could be affected by the coding
changes rather than actual differences in performance. We noted that
performance on measures that are not significantly impacted by changes
to ICD-10 codes would continue to be assessed on the full 12-month
performance period (January 1 through December 31) (82 FR 30098).
In the CY 2018 Quality Payment Program proposed rule (82 FR 30098),
we noted that any measure that relies on an ICD-10 code which is added,
modified, or removed, such as in the measure numerator, denominator,
exclusions, or exceptions, could have an impact on the indicated
performance on the measure, although the impact may not always be
significant. In the CY 2018 Quality Payment Program proposed rule, we
proposed an annual review process to analyze the measures that have a
code impact and assess the subset of measures significantly impacted by
ICD-10 coding changes during the performance period (82 FR 30098).
Depending on the data available, we anticipated that our determination
as to whether a measure is significantly impacted by ICD-10 coding
changes would include these factors: A more than 10 percent change in
codes in the measure numerator, denominator, exclusions, and
exceptions; clinical guideline changes or new products or procedures
reflected in ICD-10 code changes; and feedback on a measure received
from measure developers and stewards (82 FR 30098). In the CY 2018
Quality Payment Program proposed rule, we considered an approach where
we would consider any change in ICD-10 coding to impact performance on
a measure and thus only rely on the first 9 months of the 12-month
performance period for such measures; however, we believed such an
approach would be too broad and truncate measurement for too many
measures where performance may not be significantly affected (82 FR
30098). We believed that our proposed approach would ensure the
measures on which individual MIPS eligible clinicians and groups will
have their performance assessed are accurate for the performance period
and are consistent with the benchmark set for the performance period
(82 FR 30098).
We proposed to publish on the CMS Web site which measures are
significantly impacted by ICD-10 coding changes and would require the
9-month assessment (82 FR 30098). We proposed to publish this
information by October 1st of the performance period if technically
feasible, but by no later than the beginning of the data submission
period, which is January 2, 2019 for the 2018 MIPS performance period
(82 FR 30098).
We requested comment on the proposal to address ICD-10 measures
specification changes during the performance period by relying on the
first 9 months of the 12-month performance period (82 FR 30098). We
also requested comment on potential alternate approaches to address
measures that are significantly impacted due to ICD-10 changes during
the performance period, including the factors we might use to determine
[[Page 53715]]
whether a measure is significantly impacted (82 FR 30098).
The following is a summary of the public comments received on these
proposals and our responses:
Comment: A few commenters had suggestions to improve the proposed
ICD-10 annual review process. Some commenters suggested that CMS
develop a centralized process for soliciting feedback on measures that
may be significantly impacted by ICD-10 coding changes and facilitate
discussions between measure developers, stewards, clinicians, and
vendors who would be implementing the changes to resolve ICD-10 coding
changes as quickly as possible. A few commenters also recommended that
CMS address significant changes as a result of ICD-10 changes through
notice and comment rulemaking.
Response: We will take commenters' suggestions for a centralized
process for soliciting feedback on significantly impacted measures and
facilitating discussions into consideration as part of our annual
review process. We are finalizing our proposal to assess measure
performance based only on the first 9 months of the 12-month
performance period when we determine that a measure is significantly
impacted by ICD-10 coding changes. Measures impacted by ICD-10 coding
changes will be identified during the performance period. We are unable
to address each individual ICD-10 code change through rulemaking in
advance as the code changes are identified during the performance
period and take effect on October 1 of the performance period. However,
any changes to this policy, including our process for identifying
significantly impacted measures, and any substantive changes to quality
or cost measures will be addressed through future rulemaking. We are
also finalizing our proposal to publish which measures are
significantly impacted by ICD-10 coding changes by October 1st of the
performance period if technically feasible, but by no later than the
beginning of the data submission period, which is January 2, 2019 for
the 2018 MIPS performance period. We will also provide further
information through subregulatory guidance.
Comment: Several commenters were supportive of the proposal to
score measures that are considered significantly impacted by ICD-10
updates based on only the first 9 months of the performance period to
align with annual ICD-10 updates.
Response: We thank commenters for their support. We are finalizing
our proposal to assess measure performance based only on the first 9
months of the 12-month performance period when we determine measures
are significantly impacted by ICD-10 coding changes. Our determination
as to whether a measure is significantly impacted by ICD-10 coding
changes will consider one or more of the following factors: A more than
10 percent change in codes in the measure numerator, denominator,
exclusions, and exceptions; guideline changes or new products or
procedures reflected in ICD-10 code changes; and feedback on a measure
received from measure developers and stewards.
Comment: Several commenters expressed concern about our proposed
approach to scoring flexibility based on the impact of the ICD-10
update cycle on measures and suggested alternatives. The commenters
noted that because it is possible for ICD-10 updates to occur in April
as well as in October, CMS's proposal does not solve the issue of ICD-
10 updates that occur midway through the performance period. A few
commenters recommended that CMS give full credit for reporting that
would be applied in future years, or alternatively, that CMS consider
novel approaches that developers, stewards, and implementers may have
for accounting for ICD-10 changes--for example, releasing new measure
guidance or suspending the use of certain new ICD-10 codes until the
following performance period.
Response: While the list of ICD-10 codes is available prior to
October 1, all ICD-10 changes for Medicare Part A and Part B become
effective on October 1. As discussed further below, we are finalizing our
proposal to measure the first 9 months of the performance period when
we determine measures are significantly impacted by ICD-10 updates. We
believe this approach ensures our assessment of performance will only
be based on measures with ICD-10 codes and measure specifications that
are consistent with the historical benchmark, when available, that we
set out for the performance period and that MIPS eligible clinicians
and groups rely on as they plan for the performance period. While we
acknowledge the other approaches recommended by commenters, such as
providing full credit for reporting that would be applied in future
years, we believe the approach we are finalizing allows comparisons to
historical benchmarks, which use similar codes, and ensures the measure
specifications are accurate for the time period being measured. As some
commenters suggested, the input of stakeholders in this process is
valuable, and we will consider the input of developers, stewards and
implementers as part of the annual review process. More information
will be available through subregulatory guidance.
Comment: Several commenters expressed concern that for certain
measures, truncated reporting would not be appropriate due to their
measure logic, and the measures would be negatively impacted by a
shorter reporting window since it can take a full year to capture the
data needed to successfully report these measures. One commenter
expressed concern that, because certain standards used by registries to
support measure reporting do not include timing information, such as
the QRDA III standard, it is unclear how MIPS eligible clinicians would
be able to submit only 9 months of data. This commenter urged CMS to,
instead, adjust the value sets to account for the updates and have
those changes apply to the entire performance year, with no change to
full-year measure submission. This commenter suggested that this would
allow clinicians to take immediate advantage of critical updates to
value sets without having to wait until the next performance year. The
commenters also noted that the very short timeline between the
discovery and announcement of the error and the end of the submission
period would place an unreasonable burden on MIPS eligible clinicians
to revise and revalidate their submissions. One commenter also noted
that CMS's approach adds complexity because clinicians and groups would
have to track which measures require a full year of reporting and which
require only 9 months.
Response: We acknowledge the commenters' concerns that scoring
based on only 9 months of data raises issues with assessing MIPS
eligible clinicians and groups on less than a full year of data,
particularly for some measures. We also acknowledge that certain
standards used by registries to support measure reporting do not
include timing information. In response to these concerns, we note that
where, as part of our annual review process, we determine that scoring a
significantly impacted measure based on only 9 months of data is
inappropriate due to the measure logic or other factors, we will
communicate with MIPS eligible clinicians and groups and interested
parties and provide information to them through subregulatory guidance.
However, we expect that these instances would be rare based on our
experience.
We also acknowledge the concerns raised by commenters about the
burden MIPS eligible clinicians may face to revise and revalidate their
submissions,
[[Page 53716]]
and clarify that CMS will monitor ICD-10 updates and coding changes
that significantly impact measures during the performance period to
track which measures require a full year of reporting and which require
only 9 months, and we will also provide this information to MIPS
eligible clinicians, groups, and other interested parties, including
registries, through subregulatory guidance. We acknowledge the
commenter's suggestion that, alternatively, we adjust the value sets to
account for the updates and have those changes apply to the entire
performance year, with no change to full-year measure submission;
however, this approach is not operationally feasible for us to
implement.
Comment: One commenter did not support the approach that CMS
considered but rejected, whereby CMS would consider any change in ICD-
10 coding to impact performance on a measure, and thus, rely on the
first 9 months of the 12-month performance period for such measures.
Response: We acknowledge that such an approach would be too broad
and truncate measurement for too many measures where performance may
not be significantly affected and have rejected this approach.
Comment: Several commenters supported the proposal to publish the
measures significantly impacted by ICD-10 coding changes that would
require the 9-month assessment, and agreed with the factors listed to
consider in determining whether a measure is significantly impacted by
an ICD-10 coding update.
Response: We thank the commenters for their support. We are
finalizing as proposed our proposal to publish the list of measures
requiring a 9-month assessment process on the CMS Web site by October
1st of the performance period if technically feasible, but by no later
than the beginning of the data submission period. For example, for the
2018 performance period, data submissions will begin on January 2,
2019.
Comment: Several commenters requested that CMS provide information
about significantly impacted measures as soon as possible. The
commenters suggested that CMS issue the impacted ICD-10 list well
before October 1 to allow clinicians and groups to appropriately prepare
for the upcoming submission cycle. A few commenters stated that
clinicians and groups will need a 30- to 60-day window of lead time,
such as publication by November 1 or December 1.
Response: We acknowledge that it would be useful to identify
significantly impacted measures as early as possible and will take
commenters' concerns into consideration to identify and publish
information on impacted measures as soon as it is technically feasible
for us to do so. We are finalizing that we will publish this
information by October 1st of the performance period if technically
feasible, but by no later than the beginning of the data submission
period, which is January 2, 2019 for the 2018 MIPS performance period.
We believe this timeline provides us the time needed to review whether
ICD-10 code changes are significant, as well as to review other
guideline changes.
Comment: Some commenters recommended that CMS include how the
annual ICD-10 update on October 1 may impact quality measures,
performance scores, and benchmarks for the last quarter of the year
because this is vital information needed for clinicians and groups to
make informed decisions about the performance period best suited for
their practice, including those who may want to choose the last 90 days
of 2018 as their performance period for the advancing care information
and the improvement activities performance categories, which have 90-
day performance periods.
Response: We will consider commenters' suggestions about the
information to include regarding significantly impacted measures as we
prepare our publication of significantly impacted measures. We will
monitor measure changes as they occur and rely on stakeholder input; we
will also provide subregulatory communication and guidance to
stakeholders as to how changes we determine to be significant may
impact the quality measures, performance scores, and benchmarks. We do
not believe this policy will affect the improvement activities and
advancing care information performance categories because those
performance categories have a 90-day reporting period.
Final Action: After consideration of the public comments, we are
finalizing as proposed our policy to provide scoring flexibility for
ICD-10 measure specification changes during the performance period. We
are finalizing that we will establish an annual review process to
analyze the measures that have a code impact and assess the subset of
measures significantly impacted by ICD-10 coding changes during the
performance period. Depending on the data available, our determination
as to whether a measure is significantly impacted by ICD-10 coding
changes will include one or more of these factors: A more than 10 percent
change in codes in the measure numerator, denominator, exclusions, and
exceptions; clinical guideline changes or new products or procedures
reflected in ICD-10 code changes; and feedback on a measure received
from measure developers and stewards. Beginning with the 2018 MIPS
performance period, measures we determine to be significantly impacted
by ICD-10 updates will be assessed based only on the first 9 months of
the 12-month performance period. Lastly, we are finalizing as proposed
that we will publish the list of measures requiring a 9-month
assessment process on the CMS Web site by October 1st of the
performance period if technically feasible, but by no later than the
beginning of the data submission period, which is January 2, 2019 for
the 2018 MIPS performance period, as discussed in section II.C.6.a.(2)
of this final rule with comment period. We are codifying these policies
for the quality performance category at Sec. 414.1380(b)(1)(xviii) in
this final rule with comment period. As discussed in section
II.C.6.d.(3)(d) of this final rule with comment period, we will apply a
similar approach for measures in the cost performance category,
although we do not anticipate that the cost measures for the 2018 MIPS
performance period (total per capita cost measure and the MSPB) would
be significantly affected by ICD-10 changes.
As we finalize this policy for measures significantly impacted by
ICD-10 code changes, we are also concerned about instances where
clinical guideline changes or other changes to a measure that occur
during the performance period may significantly impact a measure and
render the measure no longer comparable to the historical benchmark. As
such, we seek comment in this final rule with comment period regarding
whether we should apply similar scoring flexibility to such measures.
(2) Scoring the Quality Performance Category for Data Submission via
Claims, EHR, Third Party Data Submission Options, CMS Web Interface,
and Administrative Claims
Many comments submitted in response to the CY 2017 Quality Payment
Program final rule requested additional clarification on our finalized
scoring methodology for the 2019 MIPS payment year. To provide further
clarity to MIPS eligible clinicians about the transition year scoring
policies, before describing our proposed scoring policies for the 2020
MIPS payment year, we provided a summary of the scoring policies
finalized in the CY 2017 Quality Payment Program final rule
[[Page 53717]]
along with examples of how they apply under several scenarios (82 FR
30098 through 30100).
In the CY 2017 Quality Payment Program final rule (81 FR 77286
through 77287), we finalized that the quality performance category
would be scored by assigning achievement points to each submitted
measure, which we refer to in this section of the proposed rule as
``measure achievement points.'' In the CY 2018 Quality Payment Program
proposed rule (82 FR 30098 through 30099), we proposed to amend various
paragraphs in Sec. 414.1380(b)(1) to use this term in place of
``achievement points.'' MIPS eligible clinicians can also earn bonus
points for certain measures (81 FR 77293 through 77294; 81 FR 77297
through 77299), which we referred to as ``measure bonus points,'' and
we proposed to amend Sec. 414.1380(b)(1)(xiii) (which we proposed to
redesignate as Sec. 414.1380(b)(1)(xiv) \3\), Sec.
414.1380(b)(1)(xiv) (which we proposed to redesignate as Sec.
414.1380(b)(1)(xv)), and Sec. 414.1380(b)(1)(xv) (which we proposed to
redesignate as Sec. 414.1380(b)(1)(xvii)) to use this term in place of
``bonus points.'' The measure achievement points assigned to each
measure would be added to any measure bonus points and then divided
by the total possible points (Sec. 414.1380(b)(1)(xv), which we
proposed to redesignate as Sec. 414.1380(b)(1)(xvii)). We referred to
the total possible points as ``total available measure achievement
points,'' and we proposed to amend Sec. 414.1380(b)(1)(xv) to use this
term in place of ``total possible points.'' We also proposed to amend
these terms in Sec. 414.1380(b)(1)(xiii)(D) (which we proposed to
redesignate as Sec. 414.1380(b)(1)(xiv)(D)), and Sec.
414.1380(b)(1)(xiv) (which we proposed to redesignate as Sec.
414.1380(b)(1)(xv)).
---------------------------------------------------------------------------
\3\ In section II.C.7.a.(2)(c) of this final rule with comment
period, we are finalizing a new provision to be codified at Sec.
414.1380(b)(1)(xiii), and in section II.C.7.a.(2)(i) of this final
rule with comment period, we are finalizing a new provision to be
codified at Sec. 414.1380(b)(1)(xvi). As a result, we are
finalizing as well that the remaining paragraphs be redesignated in
order following the new provisions.
---------------------------------------------------------------------------
This resulting quality performance category score is a fraction
from zero to 1, which can be formatted as a percent; therefore, for
this section, we will present the quality performance category score as
a percent and refer to it as ``quality performance category percent
score.'' We also proposed to amend Sec. 414.1380(b)(1)(xv) (which we
proposed to redesignate as Sec. 414.1380(b)(1)(xvii)) to use this term
in place of ``quality performance category score.'' Thus, the formula
for the quality performance category percent score that we will use in
this section is as follows:
(Total measure achievement points + total measure bonus points)/total
available measure achievement points = quality performance category
percent score.
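For illustration only, the formula above could be computed along the lines of the following sketch. The point values are hypothetical, and the cap at 100 percent is an assumption for this sketch rather than a restatement of the policies in this section.

```python
def quality_percent_score(measure_achievement_points, measure_bonus_points,
                          total_available_achievement_points):
    """Illustrative sketch of the formula above: total measure achievement
    points plus total measure bonus points, divided by total available
    measure achievement points, expressed as a percent."""
    raw = sum(measure_achievement_points) + sum(measure_bonus_points)
    percent = 100.0 * raw / total_available_achievement_points
    return min(percent, 100.0)  # cap at 100 percent (assumption for this sketch)

# Hypothetical example: six scored measures, two measure bonus points,
# and 60 total available measure achievement points.
print(quality_percent_score([7.5, 10.0, 6.2, 8.4, 9.1, 5.0], [1.0, 1.0], 60))
```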
This is a summary of the public comments received on the changes to
the regulatory text and our responses:
Comment: One commenter expressed the belief that continued
changes in terminology not based on statute serve to confuse
clinicians and further complicate the Quality Payment Program.
Response: The amendments to the regulatory text are meant to
clarify our terminology to make the Quality Payment Program easier to
understand. The changes to terminology are not intended to create
confusion but to respond to feedback on the need for more clarity and
meaningful terms than those used in the Act. These amendments to
regulation text are not changes to the program policy.
Final Action: After consideration of public comments, we are
finalizing the proposed clarifications and redesignations in Sec.
414.1380(b)(1) related to measure achievement points and the quality
performance category score.
In the CY 2017 Quality Payment Program final rule, we finalized
that for the quality performance category, an individual MIPS eligible
clinician or group that submits data on quality measures via EHR, QCDR,
qualified registry, claims, or a CMS-approved survey vendor for the
CAHPS for MIPS survey will be assigned measure achievement points for 6
measures (1 outcome or, if an outcome measure is not available, other
high priority measure and the next 5 highest scoring measures) as
available and applicable, and will receive applicable measure bonus
points for all measures submitted that meet the bonus criteria (81 FR
77282 through 77301).
In addition, for groups of 16 or more clinicians who meet the case
minimum of 200, we will also automatically score the administrative
claims-based all-cause hospital readmission measure as a seventh
measure (81 FR 77287). For individual MIPS eligible clinicians and
groups for whom the readmission measure does not apply, the denominator
is generally 60 (10 available measure achievement points multiplied by
6 available measures). For groups for whom the readmission measure
applies, the denominator is generally 70 points.
If we determined that a MIPS eligible clinician has fewer than 6
measures available and applicable, we will score only the number of
measures that are available and adjust the denominator accordingly to
the total available measure achievement points (81 FR 77291). A
description of the validation process to determine measure availability
is provided in the proposed rule (82 FR 30108 through 30109).
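For illustration only, the total available measure achievement points described above could be computed along the lines of the following sketch (the function and variable names are hypothetical):

```python
def total_available_achievement_points(num_available_measures, readmission_applies):
    """Illustrative sketch: 10 available achievement points for each of up to
    6 scored measures, plus 10 more when the administrative claims-based
    all-cause hospital readmission measure is automatically scored."""
    total = 10 * min(num_available_measures, 6)
    if readmission_applies:
        total += 10
    return total

print(total_available_achievement_points(6, False))  # 60
print(total_available_achievement_points(6, True))   # 70
print(total_available_achievement_points(4, False))  # 40 (denominator adjusted)
```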
For the 2019 MIPS payment year, a MIPS eligible clinician that
submits quality measure data via claims, EHR, or third party data
submission options (that is, QCDR, qualified registry, EHR, or CMS-
approved survey vendor for the CAHPS for MIPS survey), can earn between
3 and 10 measure achievement points for quality measures submitted for
the performance period of greater than or equal to 90 continuous days
during CY 2017. A MIPS eligible clinician can earn measure bonus points
(subject to a cap) if they submit additional high priority measures
with a performance rate that is greater than zero, and that meet the
case minimum and data completeness requirements, or submit a measure
using an end-to-end electronic pathway. An individual MIPS eligible
clinician that has 6 or more quality measures available and applicable
will have 60 total available measure achievement points. An example was
provided in Table 17 of the proposed rule (82 FR 30099). We noted that
in the CY 2017 Quality Payment Program proposed rule, we proposed that
bonus points would be available for high priority measures that are not
scored (not included in the top 6 measures for the quality performance
category score) as long as the measure has the required case minimum,
data completeness, and has a performance rate greater than zero,
because we believed these qualities would allow us to include the
measure in future benchmark development (81 FR 28255). Although we
received public comments on this policy, responded to those comments,
and reiterated this proposal in the CY 2017 Quality Payment Program
final rule (81 FR 77292), we would like to clarify that our policy is
to assign measure bonus points for high priority measures, even if the
measure's achievement points are not included in the total measure
achievement points for calculating the quality performance category
percent score, as long as the measure has the required case minimum,
data completeness, and has a performance rate greater than zero, and
[[Page 53718]]
that this applies beginning with the transition year.
We proposed to amend Sec. 414.1380(b)(1)(xiii)(A) (which we
proposed to redesignate as Sec. 414.1380(b)(1)(xiv)(A)) to state that
measure bonus points may be included in the calculation of the quality
performance category percent score regardless of whether the measure is
included in the calculation of the total measure achievement points. We
also proposed a technical correction to the second sentence of that
paragraph to state that to qualify for high priority measure bonus
points, each measure must be reported with sufficient case volume to
meet the required case minimum, meet the required data completeness
criteria, and not have a zero percent performance rate.
We did not receive any public comments on this proposal.
Final Action: We are finalizing as proposed the amendments and
technical corrections to Sec. 414.1380(b)(1) related to high priority
measure bonus points.
In the CY 2017 Quality Payment Program final rule, we also
finalized scoring policies specific to groups of 25 or more that submit
their quality performance measures using the CMS Web Interface (81 FR
77278 through 77306). Although we did not propose to change the basic
scoring system that we finalized in the CY 2017 Quality Payment Program
final rule for the 2020 MIPS payment year, we noted that we proposed
several modifications to scoring the quality performance category,
including adjusting scoring for measures that do not meet the data
completeness criteria, adding a method for scoring measures submitted
via multiple mechanisms, adding a method for scoring selected topped
out measures, and adding a method for scoring improvement (82 FR
30100). We also noted that we proposed an additional option for
facility-based scoring for the quality performance category (82 FR
30123 through 30132). Further description of these proposals and
finalized policies are discussed below.
(a) Quality Measure Benchmarks
We did not propose to change the policies on benchmarking finalized
in the CY 2017 Quality Payment Program final rule and codified at
paragraphs (b)(1)(i) through (iii) of Sec. 414.1380; however, we
proposed a technical correction to paragraphs (i) and (ii) to clarify
that measure benchmark data are separated into decile categories based
on percentile distribution, and that, other than using performance
period data, performance period benchmarks are created in the same
manner as historical benchmarks using decile categories based on a
percentile distribution, and that each benchmark must have a minimum of
20 individual clinicians or groups who reported on the measure, met
the data completeness requirement and case minimum criteria, and had
performance greater than zero. We referred readers to the
discussion at 81 FR 77282 for more details on that policy.
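For illustration only, the minimum-reporter criterion described above could be checked along the lines of the following sketch; the field names are hypothetical and do not reflect our actual benchmark calculation systems.

```python
def benchmark_can_be_created(submissions, min_reporters=20):
    """Illustrative sketch of the minimum-reporter criterion: a benchmark
    requires at least 20 individual clinicians or groups who reported the
    measure, met the data completeness requirement and case minimum
    criteria, and had performance greater than zero."""
    qualifying = [
        s for s in submissions
        if s["meets_data_completeness"]
        and s["meets_case_minimum"]
        and s["performance_rate"] > 0
    ]
    return len(qualifying) >= min_reporters

# Hypothetical submissions for one measure and one submission mechanism.
submissions = [
    {"meets_data_completeness": True, "meets_case_minimum": True,
     "performance_rate": 0.82}
    for _ in range(25)
]
print(benchmark_can_be_created(submissions))  # True
```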
We received no public comments on the proposed technical correction
to Sec. 414.1380.
Final Action: We are finalizing the technical corrections to Sec.
414.1380(b)(1)(i) through (iii) related to the measure benchmark data
as proposed.
We noted that the proposal to increase the low-volume threshold
could have an impact on our MIPS benchmarks because we include MIPS
eligible clinicians and comparable APMs that meet our benchmark
criteria in our measure benchmarks, and the proposal could reduce the
number of individual eligible clinicians and groups that meet the
definition of a MIPS eligible clinician and contribute to our
benchmarks (82 FR 30101).
Therefore, we sought feedback on whether we should broaden the criteria
for creating our MIPS benchmarks to include PQRS and any data from
MIPS, including voluntary reporters, that meet our benchmark
performance, case minimum and data completeness criteria when creating
our benchmarks.
We thank commenters for their responses on whether we should
broaden the criteria for creating our MIPS benchmarks. We will consider
them in future rulemaking.
In the CY 2017 Quality Payment Program final rule, we did not
stratify benchmarks by practice characteristics, such as practice size,
because we did not believe there was a compelling rationale for such an
approach, and we believed that stratifying could have unintended
negative consequences for the stability of the benchmarks, equity
across practices, and quality of care for beneficiaries (81 FR 77282).
We noted that we do create separate benchmarks for each of the
following submission mechanisms: EHR submission options; QCDR and
qualified registry submission options; claims submission options; CMS
Web Interface submission options; CMS-approved survey vendor for CAHPS
for MIPS submission options; and administrative claims submission
options (for measures derived from claims data, such as the all-cause
hospital readmission measure) (81 FR 77282). We refer readers to the CY
2018 Quality Payment Program proposed rule for a summary of the
comments we received (82 FR 30101).
We did not propose to change our policies related to stratifying
benchmarks by practice size for the 2020 MIPS payment year. For many
measures, the benchmarks may not need stratification as they are only
meaningful to certain specialties and only expected to be submitted by
those certain specialists. We further clarified that in the majority of
instances our current benchmarking approach only compares like
clinicians to like clinicians. We noted that we continue to believe
that stratifying by practice size could have unintended negative
consequences for the stability of the benchmarks, equity across
practices, and quality of care for beneficiaries. However, we sought
comment on methods by which we could stratify benchmarks, while
maintaining reliability and stability of the benchmarks, to use in
developing future rulemaking for future performance and payment years.
Specifically, we sought comment on methods for stratifying benchmarks
by specialty or by place of service. We also requested comment on
specific criteria to consider for stratifying measures, such as how we
should stratify submissions by multi-specialty practices or by
practices that operate in multiple places of service.
We thank commenters for their suggestions on stratifying benchmarks
and measures for creating MIPS benchmarks. We will consider them in
future rulemaking.
When we were developing the quality measure benchmarks, we were
guided by the principles we used when developing the MIPS unified
scoring system (81 FR 28249 through 28250). We sought a system that
enables MIPS eligible clinicians, beneficiaries, and stakeholders to
understand what is required for a strong performance in MIPS while
being consistent with statutory requirements. We also wanted the
methodology to be as simple as possible while providing flexibility
for the variety of practice types. Now that we have gone through 1 year
of the program, we are asking for comments on how we can improve our
quality measure benchmarking methodology. In the CY 2018 Quality
Payment Program proposed rule, we requested comments on how we can
specifically improve our benchmarking methodology (82 FR 30100 through
30101). For this final rule with comment period, we are requesting
comments on whether our methodology has been successful in achieving
the goals we aimed to achieve, and, if not,
[[Page 53719]]
what other ways or approaches we could use that are in line with
principles we discussed above.
(b) Assigning Points Based on Achievement
In the CY 2017 Quality Payment Program final rule, we finalized at
Sec. 414.1380(b)(1) that a MIPS quality measure must have a measure
benchmark to be scored based on performance. MIPS quality measures that
do not have a benchmark (for example, because fewer than 20 MIPS
eligible clinicians or groups submitted data that met our criteria to
create a reliable benchmark) will not be scored based on performance
(81 FR 77286). In the CY 2018 Quality Payment Program proposed rule (82
FR 30101), we did not propose any changes to this policy. We did,
however, propose a technical correction to the regulatory text at Sec.
414.1380(b)(1) to delete the term ``MIPS'' before ``quality measure''
in the third sentence of that paragraph and to delete the term MIPS
before ``quality measures'' in the fourth sentence of that paragraph
because this policy applies to all quality measures, including the
measures finalized for the MIPS program and the quality measures
submitted through a QCDR that have been approved for MIPS.
We also did not propose to change the policies to score quality
measure performance using a percentile distribution, separated by
decile categories and assign partial points based on the percentile
distribution finalized in the CY 2017 Quality Payment Program final
rule and codified at paragraphs (b)(1)(ix), (x), and (xi) of Sec.
414.1380; however, we proposed a technical correction to paragraph (ix)
to clarify that measures are scored against measure benchmarks. We
referred readers to the discussion at 81 FR 77286 for more details on
those policies.
In Table 19 of the proposed rule (82 FR 3010), we provided an
example of assigning points for performance based on benchmarks using a
percentile distribution, separated by decile categories. We noted that
for quality measures for which baseline period data is available, we
will publish the numerical baseline period benchmarks with deciles
prior to the start of the performance period (or as soon as possible
thereafter) (see 81 FR 77282). For quality measures for which there is
no comparable data from the baseline period, we will publish the
numerical performance period benchmarks after the end of the
performance period (81 FR 77282). We will also publish further
explanation of how we calculate partial points at qpp.cms.gov.
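For illustration only, assigning measure achievement points against a decile-based benchmark, with the 3-point floor discussed below, could be sketched as follows. The linear interpolation used for partial points and the representation of the benchmark as the values opening deciles 3 through 10 are assumptions for this sketch; the published benchmarks and the explanation of partial points at qpp.cms.gov remain the authoritative description.

```python
def measure_achievement_points(performance_rate, decile_breaks):
    """Illustrative sketch of decile-based scoring with a 3-point floor.
    decile_breaks is assumed to hold the benchmark values that open deciles
    3 through 10 (eight values, lowest to highest). Partial points are
    interpolated linearly within a decile, which is an assumption for this
    sketch rather than the published methodology."""
    if performance_rate < decile_breaks[0]:
        return 3.0  # floor for measures that can be reliably scored
    for i in range(len(decile_breaks) - 1):
        low, high = decile_breaks[i], decile_breaks[i + 1]
        if performance_rate < high:
            fraction = (performance_rate - low) / (high - low) if high > low else 0.0
            return 3.0 + i + fraction
    return 10.0  # at or above the start of the 10th decile

# Hypothetical benchmark deciles for a measure reported as a rate from 0 to 100.
breaks = [42.0, 55.0, 63.0, 71.0, 78.0, 84.0, 90.0, 96.0]
print(measure_achievement_points(74.5, breaks))  # 6.5 (within the 6th decile)
```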
We did not receive public comments on this proposal.
Final Action: We are finalizing as proposed the technical
corrections to Sec. 414.1380(b)(1) related to quality measures.
(i) Floor for Scored Quality Measures
For the 2017 MIPS performance period, we also finalized at Sec.
414.1380(b)(1) a global 3-point floor for each scored quality measure,
as well as for the hospital readmission measure (if applicable), such
that MIPS eligible clinicians would receive between 3 and 10 measure
achievement points for each submitted measure that can be reliably
scored against a benchmark, which requires meeting the case minimum and
data completeness requirements (81 FR 77286 through 77287). For
measures with a benchmark based on the performance period, rather than
on the baseline period, we stated that we would continue to assign
between 3 and 10 measure achievement points for performance years after
the first transition year because it would help to ensure that the MIPS
eligible clinicians are protected from a poor performance score that
they would not be able to anticipate (81 FR 77282; 81 FR 77287). For
measures with benchmarks based on the baseline period, we stated the 3-
point floor was for the transition year and that we would revisit the
3-point floor in future years (81 FR 77286 through 77287).
We note, for clarification purposes, that we stated in the CY 2018
Quality Payment Program proposed rule (82 FR 30102) that measures
without a benchmark based on the baseline period would be assigned
between 3 and 10 measure achievement points for performance years after
the first transition year. However, we wanted to clarify that only
measures without a benchmark based on the baseline period that later
have a benchmark based on the performance period would be assigned
between 3 and 10 measure achievement points for performance years after
the first transition year. Measures without a benchmark based on the
baseline or performance period would receive 3 points.
For the 2018 MIPS performance period, we proposed to again apply a
3-point floor for each measure that can be reliably scored against a
benchmark based on the baseline period, and to amend Sec.
414.1380(b)(1) accordingly. We noted that we proposed to score measures
in the CMS Web Interface for the Quality Payment Program for which
performance is below the 30th percentile (82 FR 30113). We will revisit
the 3-point floor for such measures again in future rulemaking.
We invited public comment on this proposal to again apply this 3-
point floor for quality measures that can be reliably scored against a
baseline benchmark in the 2018 MIPS performance period.
The following is a summary of the public comments received on the
floor for quality measures proposal and our responses:
Comment: Several commenters supported maintaining the 3-point floor
for measures that can be reliably scored against a benchmark. A few
commenters supported the policy because the 3-point floor maintains
stability and rewards participation in the Quality Payment Program. One
commenter indicated that this will allow time for eligible clinicians
and groups to receive feedback on performance and incorporate changes
into clinical practice.
Response: We thank commenters for their support of the 3-point
floor for the 2018 MIPS performance period and will finalize this
policy as proposed.
Comment: One commenter recommended that CMS lower the 3-
point floor for measures reliably scored against a baseline benchmark,
citing the need to move past transition year policies in the second
year.
Response: The 3-point floor in the 2018 performance period affords
MIPS eligible clinicians the ability to continue to successfully
transition into the program and provides stability and consistency in
the Quality Payment Program. We will revisit this policy in future
years.
Final Action: After consideration of public comments, we are
finalizing the proposal to again apply the 3-point floor for quality
measures that can be reliably scored against a baseline benchmark in
the 2018 MIPS performance period and to amend Sec. 414.1380(b)(1)
accordingly.
(ii) Additional Policies for the CAHPS for MIPS Measure Score
In the CY 2017 Quality Payment Program final rule, we finalized a
policy for the CAHPS for MIPS measure, such that each Summary Survey
Measure (SSM) will have an individual benchmark, that we will score
each SSM individually and compare it against the benchmark to establish
the number of points, and the CAHPS score will be the average number of
points across SSMs (81 FR 77284).
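For illustration only, averaging points across SSMs could be computed along the lines of the following sketch; the SSM names and point values shown are hypothetical, and each SSM's points are assumed to have already been assigned against its individual benchmark.

```python
def cahps_for_mips_score(ssm_points):
    """Illustrative sketch: the CAHPS for MIPS measure score is the average
    of the points assigned to each scored Summary Survey Measure (SSM).
    ssm_points maps an SSM name to the points already assigned against that
    SSM's individual benchmark; SSMs that are not scored are simply omitted."""
    return sum(ssm_points.values()) / len(ssm_points)

# Hypothetical points for a handful of scored SSMs.
ssm_points = {
    "Getting Timely Care, Appointments, and Information": 7.2,
    "How Well Providers Communicate": 8.9,
    "Care Coordination": 7.5,
}
print(round(cahps_for_mips_score(ssm_points), 1))  # 7.9
```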
As described in the CY 2018 Quality Payment Program proposed rule
(82 FR 30102), we proposed to remove two SSMs from the CAHPS for MIPS
survey, which would result in the collection of
[[Page 53720]]
10 SSMs in the CAHPS for MIPS survey. Eight of those 10 SSMs have had
high reliability for scoring in prior years, or reliability is expected
to improve for the revised version of the measure, and they also
represent elements of patient experience for which we can measure the
effect one practice has compared to other practices participating in
MIPS. The ``Health Status and Functional Status'' SSM, however,
assesses underlying characteristics of a group's patient population
characteristics and is less of a reflection of patient experience of
care with the group. Moreover, to the extent that health and functional
status reflects experience with the practice, case-mix adjustment is
not sufficient to separate how much of the score is due to patient
experience versus due to aspects of the underlying health of patients.
The ``Access to Specialists'' SSM has low reliability; historically it
has had small sample sizes, and therefore, the majority of groups do
not achieve adequate reliability, which means there is limited ability
to distinguish between practices' performance.
For these reasons, we proposed not to score the ``Health Status and
Functional Status'' SSM and the ``Access to Specialists'' SSM beginning
with the 2018 MIPS performance period. Despite not being suitable for
scoring, both SSMs provide important information about patient care.
Qualitative work suggests that ``Access to Specialists'' is a critical
issue for Medicare FFS beneficiaries. The survey is also a useful tool
for assessing beneficiaries' self-reported health status and functional
status, even if this measure is not used for scoring practices' care
experiences. Therefore, we believed that continued collection of the
data for these two SSMs is appropriate even though we do not propose to
score them.
Other than these two SSMs, we proposed to score the remaining 8
SSMs because they have had high reliability for scoring in prior years,
or reliability is expected to improve for the revised version of the
measure, and they also represent elements of patient experience for
which we can measure the effect one practice has compared to other
practices participating in MIPS.
We invited comment on our proposal not to score the ``Health Status
and Functional Status'' and ``Access to Specialists'' SSMs beginning
with the 2018 MIPS performance period.
The following is a summary of the public comments received on our
proposal to not score ``Health Status and Functional Status'' and
``Access to Specialists'' SSMs beginning with the 2018 MIPS performance
period:
Comment: One commenter supported CMS's proposal not to score the
two SSMs in the CAHPS for MIPS measure, agreeing that CMS should only
use the 8 SSMs with high reliability to calculate the CAHPS for MIPS
score.
Response: We appreciate commenters' support to not score the
``Health Status and Functional Status'' and ``Access to Specialists''
SSMs beginning with the 2018 MIPS performance period.
Comment: Several commenters did not support the proposal to not
score the ``Health Status and Functional Status'' SSM and argued that
the functional status SSM provides valuable insights and connects to
health outcomes in a meaningful way.
Response: We agree with commenters on the value of the ``Health
Status and Functional Status'' SSM and will continue to collect data on
both the ``Health Status and Functional Status'' and ``Access to
Specialists'' SSMs even though we will no longer score them. Our
concern is that the ``Health Status and Functional Status'' SSM is not
a reliable indicator of patient care and experience for scoring
purposes. As we described above, the ``Health Status and Functional
Status'' SSM reflects more of the characteristics of a group's patient
population than patient experience and does not allow for an adequate
assessment of how much the score is a result of patient experience or
aspects of the underlying health of patients. Additionally, the
``Access to Specialists'' SSM has historically had small sample sizes,
making it highly unreliable, and thus we do not believe it is
appropriate to include it in scoring.
Final Action: After consideration of public comments, we are
finalizing the proposal to not score the ``Health Status and Functional
Status'' and ``Access to Specialists'' SSMs beginning with the 2018
MIPS performance period, as proposed. We noted in the CY 2018 Quality
Payment Program proposed rule that we proposed to add the CAHPS for
ACOs survey as an available measure for calculating the MIPS APM score
for the Shared Savings Program and Next Generation ACO Model (82 FR
30082 through 30083). We refer readers participating in ACOs to section
II.C.6.g.(3)(b) of this final rule with comment period for the
discussion of the CAHPS for ACOs scoring methodology.
Table 17 summarizes the newly finalized SSMs included in the CAHPS
for MIPS survey and illustrates application of our policy to score only
8 measures.
            Table 17--Newly Finalized SSM for CAHPS for MIPS Scoring
----------------------------------------------------------------------------------------------------------------
                                                        Newly finalized for            Newly finalized for
            Summary survey measure                      inclusion in the CAHPS         inclusion in CAHPS
                                                        for MIPS survey?               for MIPS scoring?
----------------------------------------------------------------------------------------------------------------
Getting Timely Care, Appointments, and Information      Yes                            Yes
How Well Providers Communicate                          Yes                            Yes
Patient's Rating of Provider                            Yes                            Yes
Health Promotion & Education                            Yes                            Yes
Shared Decision Making                                  Yes                            Yes
Stewardship of Patient Resources                        Yes                            Yes
Courteous and Helpful Office Staff                      Yes                            Yes
Care Coordination                                       Yes                            Yes
Health Status and Functional Status                     Yes                            No
Access to Specialists                                   Yes                            No
----------------------------------------------------------------------------------------------------------------
[[Page 53721]]
(c) Identifying and Assigning Measure Achievement Points for Topped Out
Measures
Section 1848(q)(3)(B) of the Act requires that, in establishing
performance standards with respect to measures and activities, we
consider, among other things, the opportunity for continued
improvement. We finalized in the CY 2017 Quality Payment Program final
rule that we would identify topped out process measures as those with a
median performance rate of 95 percent or higher (81 FR 77286). For non-
process measures we finalized a topped out definition similar to the
definition used in the Hospital VBP Program: The truncated coefficient
of variation is less than 0.10 and the 75th and 90th percentiles are
within 2 standard errors (81 FR 77286). When a measure is topped out, a
large majority of clinicians submitting the measure performs at or very
near the top of the distribution; therefore, there is little or no room
for the majority of MIPS eligible clinicians who submit the measure to
improve. We understand that every measure we have identified as topped
out may offer room for improvement for some MIPS eligible clinicians;
however, we believe asking clinicians to submit measures that we have
identified as topped out, and on which they already excel, is an
unnecessary burden that does not add value or improve beneficiary
outcomes.
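For readers who want to see the two definitions side by side, the
following minimal sketch (Python, for illustration only; it is not CMS's
benchmarking software, and the function names and example statistics are
assumptions supplied for the example) expresses the topped out tests
described above.

    # Illustrative sketch only; the benchmark statistics are assumed inputs, and
    # this is not the methodology code used to produce published benchmarks.
    from statistics import median

    def is_topped_out_process(performance_rates):
        """Process measures: topped out if the median performance rate is >= 95 percent."""
        return median(performance_rates) >= 95.0

    def is_topped_out_nonprocess(truncated_cv, p75, p90, standard_error):
        """Non-process measures (Hospital VBP-style test): topped out if the truncated
        coefficient of variation is < 0.10 and the 75th and 90th percentiles are
        within 2 standard errors of each other."""
        return truncated_cv < 0.10 and abs(p90 - p75) <= 2 * standard_error

    # Example: a benchmark where most submitters perform at or near 100 percent.
    rates = [100.0, 100.0, 99.0, 98.5, 97.0, 96.0, 94.0, 90.0]
    print(is_topped_out_process(rates))                       # True (median 97.75 >= 95)
    print(is_topped_out_nonprocess(0.05, 98.0, 99.0, 0.6))    # True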
Based on 2015 historic benchmark data,\4\ approximately 45 percent
of the quality measure benchmarks currently meet the definition of
topped out, with some submission mechanisms having a higher percent of
topped out measures than others. Approximately 70 percent of claims
measures are topped out, 10 percent of EHR measures are topped out, and
45 percent of registry/QCDR measures are topped out.
---------------------------------------------------------------------------
\4\ The topped out determination is calculated on historic
performance data and the percentage of topped out measures may
change when evaluated for the most applicable annual period.
---------------------------------------------------------------------------
In the CY 2017 Quality Payment Program final rule, we finalized
that for the 2019 MIPS payment year, we would score topped out quality
measures in the same manner as other measures (81 FR 77286). We
finalized that we would not modify the benchmark methodology for topped
out measures for the first year that the measure has been identified as
topped out, but that we would modify the benchmark methodology for
topped out measures beginning with the 2020 MIPS payment year, provided
that it is the second year the measure has been identified as topped
out. In the CY 2018 Quality Payment Program proposed rule (82 FR 30103
through 30106), we proposed a phased in approach to apply special
scoring to topped out measures, beginning with the 2018 MIPS
performance period (2020 MIPS payment year), rather than modifying the
benchmark methodology for topped out measures as indicated in the CY
2017 Quality Payment Program final rule. We also provided a summary of
comments received in response to the CY 2017 Quality Payment Program
final rule (82 FR 30103 through 30104) on how topped out measures
should be scored provided that it is the second year the measure has
been identified as topped out.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30045
through 30047), we proposed a lifecycle for topped out measures by
which, after a measure benchmark is identified as topped out in the
published benchmark for 2 years, in the third consecutive year it is
identified as topped out it will be considered for removal through
notice-and-comment rulemaking or the QCDR approval process and may be
removed from the benchmark list in the fourth year, subject to the
phased in approach described in section II.C.6.c.(2) of this final rule
with comment period.
We also stated in the CY 2017 Quality Payment Program final rule
that we do not believe it would be appropriate to remove topped out
measures from the CMS Web Interface for the Quality Payment Program
because the CMS Web Interface measures are used in MIPS and in APMs
such as the Shared Savings Program and because we have aligned
policies, where possible, with the Shared Savings Program, such as
using the Shared Savings Program benchmarks for the CMS Web Interface
measures (81 FR 77285). In the CY 2017 Quality Payment Program final
rule, we also finalized that MIPS eligible clinicians submitting via
the CMS Web Interface must submit all measures included in the CMS Web
Interface (81 FR 77116). Thus, if a CMS Web Interface measure is topped
out, the CMS Web Interface submitter cannot select other measures.
Because of the lack of ability to select measures, we did not propose
to apply the proposed special scoring adjustment to topped out measures
for CMS Web Interface for the Quality Payment Program (82 FR 30106).
Additionally, because the Shared Savings Program incorporates a
methodology for measures with high performance into the benchmark, we
noted that we do not believe capping benchmarks from the CMS Web
Interface for the Quality Payment Program is appropriate. We finalized
in the CY 2017 Quality Payment Program final rule at Sec.
414.1380(b)(1)(ii)(A) to use benchmarks from the corresponding
reporting year of the Shared Savings Program. The Shared Savings
Program adjusts some benchmarks to a flat percentage when the 60th
percentile is equal to or greater than 80.00 percent for individual
measures (78 FR 74759 through 74763), and, for other measures,
benchmarks are set using flat percentages when the 90th percentile for
a measure is equal to or greater than 95.00 percent (79 FR 67925).
Thus, we did not propose to apply the topped out measure cap to
measures in the CMS Web Interface for the Quality Payment Program.
Starting with the 2019 MIPS performance period, we proposed to
apply the special topped out scoring method to all topped out measures,
provided it is the second (or more) consecutive year the measure is
identified as topped out (82 FR 30103 through 30106). We sought comment
on our proposal to apply special topped out scoring to all topped out
measures, provided it is the second (or more) consecutive year the
measure is identified as topped out. We also sought comment on the
proposal not to apply the topped out measure cap to measures in the CMS
Web Interface for the Quality Payment Program. We refer readers to
section II.C.6.c.(2) of this final rule with comment period for a
summary of the comments we received and our responses.
As part of the lifecycle for topped out measures, we also proposed
a method to phase in special scoring for topped out measure benchmarks
starting with the 2018 MIPS performance period, provided that is the
second consecutive year the measure benchmark is identified as topped
out in the benchmarks published for the performance period (82 FR 30103
through 30106). This special scoring would not apply to measures in the
CMS Web Interface, as explained later in this section. The phased-in
approach described in this section represents our first step in
methodically implementing special scoring for topped out measures.
We did not propose to remove topped out measures for the 2018 MIPS
performance period because we recognize that there are currently a
large number of topped out measures and removing them may impact the
ability of some MIPS eligible clinicians to submit 6 measures and may
impact some specialties more than others. We noted, however, that we
proposed a timeline for removing topped out measures in future years
(82 FR 30046). We believe this provides MIPS eligible
[[Page 53722]]
clinicians the ability to anticipate and plan for the removal of
specific topped out measures, while providing measure developers time
to develop new measures.
We noted that because we create a separate benchmark for each
submission mechanism available for a measure, a benchmark for one
submission mechanism for the measure may be identified as topped out
while another submission mechanism's benchmark may not be topped out.
The topped out designation and special scoring apply only to the
specific benchmark that is topped out, not necessarily every benchmark
for a measure. For example, the benchmark for the claims submission
mechanism may be topped out for a measure, but the benchmark for the
EHR submission mechanisms for that same measure may not be topped out.
In this case, the topped out scoring would only apply to measures
submitted via the claims submission mechanism, which has the topped out
benchmark. We also described that only the submission mechanism that is
topped out for the measure would be removed (82 FR 30104).
We proposed to cap the score of topped out measures at 6 measure
achievement points. We proposed a 6-point cap for multiple reasons.
First, we noted that we believe applying a cap to the current method of
scoring a measure against a benchmark is a simple approach that can
easily be predicted by clinicians. Second, the cap will create
incentives for clinicians to submit other measures for which they can
improve and earn future improvement points. Third, considering the
proposed topped out measure lifecycle, we believed this cap would only
be used for a few years and the simplicity of a cap on the current
benchmarks would outweigh more complicated approaches to scoring, such
as cluster-based options or applying a cap on benchmarks based on flat
percentages (see 82 FR 30103 through 30104). The rationale for a 6-
point cap is that 6 points is the median score for any measure: it
represents the start of the 6th decile of performance and marks the
boundary between the bottom 5 deciles and the top 5 deciles.
We believed the proposed capped scoring methodology would
incentivize MIPS eligible clinicians to begin submitting non-topped out
measures without performing below the median score. This methodology
also would not impact scoring for those MIPS eligible clinicians that
do not perform near the top of the measure, and therefore, have
significant room to improve on the measure. We noted that we may also
consider lowering the cap below 6 points in future years, especially if
we remove the 3-point floor for performance in future years.
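To make the proposed mechanics concrete, the short sketch below maps a
performance rate to decile-based measure achievement points and then
applies a cap. The decile breakpoints, function name, and whole-point
assignment are simplified assumptions for illustration (MIPS scoring
awards partial points within a decile); they are not the regulatory
scoring methodology.

    # Simplified illustration of capping a decile-based achievement score.
    from bisect import bisect_right

    def achievement_points(performance, decile_breaks, floor=3, cap=None):
        """Map a performance rate to 3-10 points using assumed benchmark decile
        breakpoints (the value at which each of deciles 4 through 10 begins),
        then apply an optional topped out cap."""
        points = max(floor, 3 + bisect_right(decile_breaks, performance))
        return min(points, cap) if cap is not None else points

    # Hypothetical breakpoints for deciles 4 through 10 of a benchmark.
    breaks = [42.0, 55.0, 63.0, 71.0, 80.0, 88.0, 95.0]
    print(achievement_points(85.0, breaks))           # 8 points, no cap applied
    print(achievement_points(99.0, breaks, cap=6))    # capped at 6 under the proposal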
Although we proposed a new methodology for assigning measure
achievement points for topped out measures, we did not propose to
change the policy for awarding measure bonus points for topped out
measures. Topped out measures will still be eligible for measure bonus
points if they meet the required criteria. We refer readers to sections
II.C.7.a.(2)(f) and II.C.7.a.(2)(g) of this final rule with comment
period for more information about measure bonus points.
While we believe it is important to score topped out measures
differently because they could have a disproportionate impact on the
scores for certain MIPS eligible clinicians and topped out measures
provide little room for improvement for the majority of MIPS eligible
clinicians who submit them, we also recognize that numerous measure
benchmarks are currently identified as topped out, and special scoring
for topped out measures could impact some specialties more than others.
Therefore, we considered ways to phase in special scoring for topped
out measures in a way that will begin to apply special scoring, but
would not overwhelm any one specialty and would also provide additional
time to evaluate the impact of topped out measures before implementing
it for all topped out measures, while also beginning to encourage
submission of measures that are not topped out.
We believe the best way to accomplish this is by applying special
topped out scoring to a select number of measures for the 2018 MIPS
performance period and to then apply the special topped out scoring to
all topped out measures for the 2019 MIPS performance period, provided
it is the second consecutive year the measure is topped out. We believe
this approach allows us time to further evaluate the impact of topped
out measures and allows for a methodical way to phase in topped out
scoring.
We identified measures we believe should be scored with the special
topped out scoring for the 2018 performance period by using the
following set of criteria, which are only intended as a way to phase in
our topped-out measure policy for selected measures and are not
necessarily intended to be criteria for use in future policies:
Measure is topped out and there is no difference in
performance from decile 3 through decile 10. We applied this
limitation because, based on historical data, there is no room for
improvement for over 80 percent of MIPS eligible clinicians that
reported on these measures.
Process measures only because we want to continue to
encourage reporting on high priority outcome measures, and the small
subset of structure measures was confined to only three specialties.
MIPS measures only (which does not include measures that
can only be reported through a QCDR) given that QCDR measures go
through a separate process for approval and because we want to
encourage use of QCDRs required by section 1848(q)(1)(E) of the Act.
Measure is topped out for all mechanisms by which the
measure can be submitted. Because we create a separate benchmark for
each submission mechanism available for a measure, a benchmark for one
submission mechanism for the measure may be identified as topped out
while another submission mechanism's benchmark may not be topped out.
For example, the benchmark for the claims submission mechanism may be
topped out for a measure, but the benchmark for the EHR submission
mechanisms for that same measure may not be topped out. We decided to
limit our criteria to only measures that were topped out for all
mechanisms for simplicity and to avoid confusion about what scoring is
applied to a measure.
Measure is in a specialty set with at least 10 measures,
because otherwise 2 measures in the pathology specialty set, which has
only 8 measures total, would have been included.
Applying these criteria resulted in the 6 measures as listed in
Table 18.
[[Page 53723]]
Table 18--Proposed Topped Out Measures for Special Scoring for the 2018 MIPS Performance Period
----------------------------------------------------------------------------------------------------------------
Topped out for all
Measure name Measure ID Measure type submission Specialty set
mechanisms
----------------------------------------------------------------------------------------------------------------
Perioperative Care: Selection 21 Process............. Yes................ General Surgery,
of Prophylactic Antibiotic-- Orthopedic Surgery,
First OR Second Generation Otolaryngology,
Cephalosporin. Thoracic Surgery,
Plastic Surgery.
Melanoma: Overutilization of 224 Process............. Yes................ Dermatology.
Imaging Studies in Melanoma.
Perioperative Care: Venous 23 Process............. Yes................ General Surgery,
Thromboembolism (VTE) Orthopedic Surgery,
Prophylaxis (When Indicated Otolaryngology,
in ALL Patients). Thoracic Surgery,
Plastic Surgery.
Image Confirmation of 262 Process............. Yes................ n/a.
Successful Excision of Image-
Localized Breast Lesion.
Optimizing Patient Exposure to 359 Process............. Yes................ Diagnostic Radiology.
Ionizing Radiation:
Utilization of a Standardized
Nomenclature for Computerized
Tomography (CT) Imaging
Description.
Chronic Obstructive Pulmonary 52 Process............. Yes................ n/a.
Disease (COPD): Inhaled
Bronchodilator Therapy.
----------------------------------------------------------------------------------------------------------------
We proposed to apply the special topped out scoring method to only
the 6 measures in Table 18 for the 2018 MIPS performance period,
provided they are again identified as topped out in the benchmarks for
the 2018 MIPS performance period. If these measures are not identified
as topped out in the benchmarks published for the 2018 MIPS performance
period, they will not be scored differently because they would not be
topped out for a second consecutive year.
Finally, we proposed to add a new paragraph at Sec.
414.1380(b)(1)(xiii) to codify our proposal for the lifecycle for
removing topped out measures. We also proposed to add at Sec.
414.1380(b)(1)(xiii)(A) that for the 2018 MIPS performance period, the
6 measures identified in Table 18 will receive a maximum of 6 measure
achievement points, provided that the measure benchmarks are identified
as topped out again in the benchmarks published for the 2018 MIPS
performance period. We also proposed to add at Sec.
414.1380(b)(1)(xiii)(B) that beginning with the 2019 MIPS performance
period, measure benchmarks, except for measures in the CMS Web
Interface, that are identified as topped out for 2 or more
consecutive years will receive a maximum of 6 measure achievement
points in the second consecutive year it is identified as topped out,
and beyond.
We requested comments on our proposal to score topped out measures
differently by applying a 6-point cap, provided it is the second
consecutive year the measure is identified as topped out. Specifically,
we sought feedback on whether 6 points is the appropriate cap or
whether we should consider another value. We also sought comment on our
proposal to apply special topped out scoring only to the 6 measures
identified in Table 18 for the 2018 MIPS performance period. We also
sought comment on our proposal to amend the regulatory text to align
with these proposed policies.
The following is a summary of the public comments received on the
proposal to cap topped out measures at 6-points and our responses:
Comment: Many commenters supported the proposal to cap topped out
measures at 6 points because commenters believed the proposal is a
simplified approach to assigning points to topped out measures, aligns
with certain state programs, and will encourage reporting of other
measures that are more meaningful.
Response: We have been persuaded by other commenters that this
adjustment is too abrupt a change for the 2018 MIPS performance period;
as described below, we still intend to apply a scoring cap, but we are
modifying the proposal.
Comment: Several commenters recommended a more gradual reduction of
points because they believed the 6-point cap did not acknowledge high
performance and was too steep a drop in points, given that measure
benchmarks are not based on MIPS data, that selection of measures from
a menu may result in only high performers submitting topped out
measures (which could still provide opportunities for improvement), and
that submitters have limited measure alternatives for reporting.
Commenters recommended higher point values, ranging from 7 to 8 points,
when capping topped out
measures. A few commenters recommended 7 points for topped out
measures, excluding outcome measures and cross cutting measures, which
commenters believed should be allowed a maximum point value, to prevent
penalizing eligible clinicians with limited options for measures when
reporting topped out measures due to limitations largely outside of
their control. One commenter recommended that topped out measures be
capped at 8 points in year 2 of the designation as a topped out measure
and 6 points in year 3 of the designation, providing a more gradual
reduction of points for reporting topped out measures. A few commenters
recommended capping topped out measures at 7.5 points, representing the
lowest possible points associated with the upper quartile of
performance. The commenters believed that 7.5 points would align with
CMS's definition of a topped-out measure, namely that the truncated
coefficient of variation is less than 0.10 and the 75th and 90th
percentiles are within two standard errors (a test of whether the range
of scores in the upper quartile is statistically meaningful). The
commenters believed this would still discourage clinicians from
continuing to report topped out measures because the scores would be
capped and the measures are ineligible for improvement points. The
commenters believed the higher scores are more aligned with points
assigned for upper quartile performance, acknowledging high
performance.
Response: We acknowledge the commenters' concerns with the 6-point
cap and recommendations to increase the point value to acknowledge
performance and allow a more gradual reduction in the achievement score
of topped out measures. The benchmarks for the 2017 performance period
are derived from the measure's historical
[[Page 53724]]
performance data, which would be reflective of the measure's anticipated
performance in the future. We believe our policy approach of using a
cap set at 6 points (which represents the median score in our
benchmark) and phasing in this cap by only applying it to selected
measures for the 2018 MIPS performance period (see section II.C.6.c.(2)
in this final rule with comment period) provides a simple policy
solution and addresses many concerns about a disproportionate number of
topped out measures affecting clinicians for the 2018 performance
period. However, we do understand that a significant number of measures
may qualify for this scoring cap starting in the 2019 MIPS performance
period/2021 MIPS payment year, which could affect scores for MIPS
eligible clinicians with a limited choice of measures. Therefore, we
have been persuaded by the comments to increase the scoring cap to 7
points. We chose 7 points for several reasons. First, for simplicity in
the scoring system, we believe we should have a single whole-number
cap for all topped out measures that are subject to the cap. We believe
it would be easier for clinicians to understand a cap of 7 points than
a policy which uses partial points or a system that gradually decreases
points the longer a measure is topped out. One additional component in
assuring consistency in scoring is to apply the scoring cap to all
identified topped out measures, including outcome and cross-cutting
measures. Second, 7 points is higher than the median, so this cap
provides credit for good performance. Finally, the 7-point cap would
mitigate to some degree the scoring concerns for clinicians who have a
large number of topped out measures, while still providing incentives
to all eligible clinicians to submit measures that are not topped out.
We will monitor the scoring cap and address any changes through future
rulemaking.
Comment: One commenter supported the 6-point cap but recommended
maintaining measures at a reduced point cap rather than removing
measures that might have a critical position in clinical care pathways.
Response: We acknowledge concerns about maintaining measures that
support clinical care pathways, but we believe that considering the
removal of topped out measures beginning with the 2019 MIPS performance
period, subject to removal criteria, is more appropriate than
maintaining an indefinite cap on scoring. Measures considered for
removal would need to go through notice and comment rulemaking, and
commenters would have the opportunity to provide feedback on the
measure before we finalize a measure removal.
Comment: One commenter indicated that a cap of 6 points or a
similar median score would be appropriate if performance improvement
could not be statistically measured.
Response: We continue to believe that capping the score is a
reasonable and simple way to score topped out measures, although we
have been persuaded by commenters that the 6-point cap is potentially
too large a change, and, as described above, we are modifying the
proposal slightly. While we will still cap the score, we will cap it at
7 points rather than 6. The commenter did not provide additional detail
on how improvement should be statistically measured; however, in
section II.C.7.a.(2)(i)(v) of this final rule with comment period, we seek
comments on how to adjust our scoring policies and meet our policy
goals and would welcome additional discussion on how this approach
could be implemented in MIPS.
Comment: Many commenters did not support a scoring cap for topped
out measures because commenters believed that CMS should be rewarding
an eligible clinician's ability to achieve and maintain high
performance, that all MIPS measures should be scored in the same way to
minimize scoring complexity, and that many topped out measures may
still represent an opportunity to encourage quality improvement. In
addition, some commenters believed that a scoring cap should not be
applied because topped out measures reflect performance of only high
performers who selected the measures and therefore should not be
considered topped out for all MIPS eligible clinicians.
Response: We disagree with the commenters that a scoring cap is
inappropriate. We believe that MIPS eligible clinicians generally
should have the flexibility to select measures most relevant to their
practice, but a trade-off of this flexibility is that not all MIPS
eligible clinicians are reporting the same measure. As MIPS is a
performance-based program, we do not believe that topped out measures
should be scored the same way as other measures that can demonstrate
achievement by showing variation in performance and room for
improvement. In particular, it is difficult to assess whether the lack
of variation in performance is truly due to lack of variation among all
clinicians or if it is just due to lack of variation among the
clinicians who are submitting the measures. If it is the latter and
more clinicians report the measure, and we have a more robust benchmark
and see variation in performance, then we would no longer classify the
measure as topped out. Thus, while we agree that some measures
initially identified as topped out may later show room for improvement
if additional MIPS eligible clinicians report the measures, we note
that we have not seen any variation in performance on the 6 measures
proposed for the scoring cap applied in the 2018 MIPS performance period.
We continue to see a cap as a simple approach that is predictable for
clinicians and creates incentives to select other non-topped out
measures. However, we understand the concerns of the commenters;
therefore, we are finalizing an increase to the cap from 6 measure
achievement points to 7 measure achievement points, which does
acknowledge better than average performance on measure achievement. We
believe the scoring cap would only be used for a few years because we
anticipate that topped out measures generally will be removed after 3
years through future rulemaking, and the simplicity of the cap on
current benchmarks makes this scoring approach an attractive
alternative to more complex scoring schemes. We will continue to
consider if additional factors need to be taken into account as we
score topped out measures, and if needed, we will make further
proposals in future rulemaking. In addition, we expect to incorporate
new measures into MIPS. We refer readers to section II.C.6.c.(1) of
this final rule with comment period for further discussion on measure
development.
Comment: A few commenters indicated that capping the score of
topped out measures may result in a gap in the achievement points
available for quality measures for some MIPS eligible clinicians. A few
commenters believed that some high-volume services may have only topped
out measures, which would limit the achievement points available to
MIPS eligible clinicians providing care to those patients.
Additionally, a few commenters believed that MIPS eligible clinicians
do not have control over the measures available in the program and
should be able to report, and potentially receive full achievement
points, on measures that are relevant to their patient population and
scope of practice.
Response: We acknowledge the concern about the availability of
measures that are eligible for the full 10 measure achievement points.
The majority of quality measures used in
[[Page 53725]]
MIPS are developed by external stakeholders, and as discussed in
section II.C.6.c.(1) of this final rule with comment period, we intend
to work with developers using the Measure Development Plan as a
strategic framework to add new measures into MIPS. While these measures
are being developed and refined, we have multiple policies to mitigate
the impact of topped out measures. For the 2018 MIPS performance
period, we are only finalizing 6 topped out measures to which the
scoring cap will apply. In the 2019 MIPS performance period, MIPS
eligible clinicians will be able to submit quality data using more than
one submission mechanism, which may increase the availability of non-
topped out measures for some MIPS eligible clinicians. Finally, as
discussed in section II.C.6.c.(2) of this final rule with comment
period, we will consider the impact of the topped out measure lifecycle
on certain clinicians in future rulemaking and refine our policies if
needed.
Comment: A few commenters indicated that a scoring cap should not
be used for QCDR measures because the measures are relatively new and
QCDRs have only recently begun collecting performance data. The
commenters noted that QCDRs should be allowed to promote the submission
of QCDR measures for several years to understand performance trends
before identifying topped out measures. One commenter requested a delay
in the implementation of scoring caps for QCDR submissions because the
commenter believed that the additional time would increase submission
rates and support specialty MIPS eligible clinicians who are new to
quality reporting.
Response: We believe that the scoring cap should be applied to
measures submitted through all submission mechanisms. Although the
scoring cap applies to measures submitted via QCDR (including both MIPS
measures and QCDR measures), the topped out measures identified for the
scoring cap for the 2018 MIPS performance period do not include any
QCDR measures. Therefore, there is no immediate impact on the
submission of additional data on QCDR measures, which potentially
reduces the likelihood that any QCDR measures would be considered
topped out in future years. Therefore, we do not believe it is
necessary to delay implementation of scoring caps for QCDR submissions.
We will monitor how the application of the scoring cap affects measure
selection and propose any changes in future rulemaking.
Comment: One commenter did not support the scoring cap because it
might penalize clinicians and groups who have invested in EHR
technology and workflow design required to perform well on topped out
measures.
Response: We want to encourage the continued use of EHR
technology for quality improvement. We continue to evaluate methods to
encourage the use of EHR technology, including the end-to-end
electronic reporting bonus available to measures submitted through an
EHR submission mechanism, regardless of whether they are topped out. We
note that topped out measures are identified through benchmarks for
each submission mechanism, and EHR measures have a lower proportion of
topped out measures compared to other submission mechanisms, which
limits the impact of any topped out special scoring policy. We will
monitor how the application of the scoring cap affects measure
selection during the topped out lifecycle.
Comment: A few commenters recommended that clinicians caring for
American Indian and Alaska Native patients be excluded from the scoring
cap for topped out measures because the commenters believed that if
clinicians are performing well on a quality measure, they should be
awarded maximum points.
Response: To our knowledge, there are no measures in MIPS that are
specific to this population; however, there is a large variety of
measures overall for selection. We do not believe it would be
appropriate to exclude clinicians caring for American Indian and Alaska
Native patients at this time because the policy for topped out measures
encourages selection of non-topped out measures that have an
opportunity for improvement of value and beneficiary outcomes,
including this specific population. For clinicians in small practices
caring for American Indian and Alaska Native patients, there are
flexibilities built into MIPS, including the low-volume threshold
affecting eligibility for MIPS and, for those participating, bonus
points applied when calculating the final score.
Comment: Several commenters had specific recommendations to amend
CMS's proposed topped out measure scoring. One commenter recommended
eliminating the cap or awarding full points over an expanded timeline
to phase out topped out measures over a 6-year period or longer. One
commenter urged CMS to not apply a scoring cap for clinical specialists
with limited numbers of measures to report that are not topped out. One
commenter indicated that scoring should be restructured to limit the
number of topped out measures that would receive the capped score or
provide a bonus structure to add extra points for reporting on capped
topped out measures to ensure that eligible clinicians are not
penalized for submission of topped out measures. One commenter
recommended that CMS consider the ABC methodology to evaluate variation
in performance when identifying topped out measures for the scoring
cap.
Response: We believe that topped out measures should not be scored
the same way as other measures that show variation and room for
improvement, and that a measure cap is an appropriate approach that
does not add complexity to scoring. We believe the scoring cap should
be applied to all topped out measures submitted and that, although
bonus points are available for topped out measures that are additional
high priority measures and/or submitted via end-to-end reporting, bonus
points specifically for topped out measures would not be appropriate
and would not provide incentives for eligible clinicians to submit
measures that are not
topped out. We appreciate the commenters' suggestions, which we may
consider for future rulemaking; however, we believe that the lifecycle
finalized at section II.C.6.c.(2) of this final rule with comment
period provides sufficient notice to stakeholders, including measure
developers, to create alternative measures if needed, and we believe
that the scoring policy should be applied consistently across clinical
specialties and submission mechanisms regardless of the number of
topped out measures available and submitted. In the interim, we believe
a cap of 7 points addresses some stakeholder concerns while providing a
simple way to score topped out measures. We will continue to evaluate
application of the scoring policy during the topped out lifecycle.
The following is a summary of the public comments received on the
proposal to not apply the topped out measure scoring cap to measures in
the CMS Web Interface and our responses:
Comment: A few commenters did not support CMS's proposal to exclude
CMS Web Interface measures from the special scoring policy and
encouraged CMS to develop an approach to apply the topped out policy to
the CMS Web Interface measures. One commenter believed that allowing
clinicians reporting through the CMS Web Interface to earn full points
for reporting topped out measures would unfairly advantage the final
score for these reporters compared to other clinicians. The commenter
recommended that CMS
[[Page 53726]]
work with developers on more meaningful measures for the CMS Web
Interface.
Response: We continue to believe that it would be inappropriate to
cap scoring for topped out measures from the CMS Web Interface, which
we believe provides meaningful measures for MIPS eligible clinicians.
The CMS Web Interface measures are used in MIPS and APMs such as the
Shared Savings Program, and we have aligned policies where possible,
including using the Shared Savings Program benchmarks for the CMS Web
Interface measures. In addition, the lack of ability to select measures
would mean that applying topped out scoring would create a disadvantage
for CMS Web Interface submitters as they would not have the ability to
choose alternative measures. We would be interested in working with a
broad stakeholder group and developers as appropriate to identify
additional measures that should be in CMS Web Interface. We will
continue to coordinate with the Shared Savings Program to discuss any
future modifications to scoring or modifications to measures in the CMS
Web Interface.
The following is a summary of the public comments received on the
proposal to apply topped out measure policy for the 6 selected measures
in Table 18 for the 2018 MIPS performance period and our responses:
Comment: A few commenters supported CMS's proposed special scoring
for the 6 topped out measures in the 2018 MIPS performance period.
Response: We appreciate the commenters' support.
Comment: A few commenters did not support the special scoring and
potential future removal of the 6 topped out measures proposed for
special scoring, citing specific measures that are important to
specialists and included in specialty measurement sets. One commenter
recommended the inclusion of a replacement measure if capping the score
for the measure Chronic Obstructive Pulmonary Disease (measure 52),
because the commenter believed without replacement there will be a gap
in the program. A few commenters did not support including the
Perioperative Care Measure: Selection of Prophylactic Antibiotic
(measure 21) because commenters believed it is an important measure to
identify patient outcomes and is crucial to maintain high quality care.
One commenter did not support the inclusion of the Perioperative Care
measures (measures 21 and 23) in the list of topped out measures
because the commenter believed that the measures are important to
support patient safety. One commenter recommended that CMS consult
stakeholders and literature on the importance of measures for patient
safety and clinical significance, and consider developing a
perioperative composite measure rather than capping and eventually
removing the two perioperative care measures. The commenter believed a
replacement process, including development of composite measures, would
ensure that sufficient measures are available and appropriate to
specialties and potentially demonstrate variation in performance to
identify opportunities for improvement.
Response: We acknowledge the concerns of commenters regarding the
availability of measures. We note that the finalized policy is to cap
the score of 6 specific measures for the CY 2018 performance period,
and that any potential removal of measures would occur only through
future rulemaking, which would be proposed after consulting appropriate
literature and include stakeholder feedback. We refer readers to
section II.C.6.c.(2) of this final rule with comment period for
additional discussion on the lifecycle of topped out measures,
including the consideration of criteria for the identification and
potential removal of measures. In terms of scoring, we do not believe
that a MIPS eligible clinician electing to report topped out measures
should be able to receive the same maximum score as MIPS eligible
clinicians electing to report other measures. Therefore, we believe
that scoring of the 6 identified topped out measures, including
measures 52, 21 and 23 discussed above, should be capped. In terms of
recommendations to replace measures and develop composite measures, we
will consider these recommendations for future rulemaking. As discussed
in section II.C.6.c.(1) of this final rule with comment period, we
intend to work with developers using the Measure Development Plan as a
strategic framework to add new measures into MIPS. We will share the
commenters' recommendations regarding the need for new measures and a
composite measure. We encourage stakeholders to develop and submit
measures and composite measures for consideration.
We also sought comment on other possible options for scoring topped
out measures that would meet our policy goals to encourage clinicians
to begin to submit measures that are not topped out while also
providing stability for MIPS eligible clinicians. We specifically
sought comment on whether the proposed policy to cap the score of
topped out measures beginning with the 2019 MIPS performance period
should apply to SSMs in the CAHPS for MIPS survey measure or whether
there is another alternative policy that could be applied for the CAHPS
for MIPS survey measure due to high, unvarying performance within the
SSM. We noted that we would like to encourage groups to report the
CAHPS for MIPS survey as it incorporates beneficiary feedback.
We thank the commenters for their feedback and will take their
suggestions into consideration in future rulemaking.
We did not receive any public comments specific to our proposal to
change the regulatory text at Sec. 414.1380(b)(1)(xiii)(A) and Sec.
414.1380(b)(1)(xiii)(B).
Final Action: After consideration of all comments, we are
finalizing with modifications the proposed policy to apply the special
scoring cap to topped out measures. Specifically, we are finalizing a
scoring cap of 7 points, rather than the proposed 6 points. We are
finalizing a 7-point cap for multiple reasons. First, we believe
applying the special scoring cap is a simple approach that can be
easily predicted by clinicians. Second, the cap will create incentives
for clinicians to submit other measures for which they can improve and
earn future improvement points. Third, the rationale for the point
value is that 7 points is slightly higher than the median score for any
measure and will address the near-term concerns that clinicians have
about the lack of additional, non-topped out measures for submission
and still provide an above median award for good performance. In
addition, we are finalizing our proposed policy that we will not apply
the topped out measure cap to measures in the CMS Web Interface for the
Quality Payment Program. We also appreciate the input and suggestions
on the best way to proceed with topped out SSMs in the CAHPS for MIPS
survey measures, and we will take it into consideration in future
rulemaking. Additionally, we are finalizing our proposal to apply the
special scoring policy to the 6 selected measures in Table 18 for
the 2018 MIPS performance period and 2020 MIPS payment year.
Finally, we are finalizing the proposed regulatory text changes
with some modifications to reflect the other policies we are
finalizing. We are finalizing amendments to Sec.
414.1380(b)(1)(xiii)(A) to read that, for the 2020 MIPS payment year,
the 6 measures identified in Table 18 will receive a maximum of 7
measure achievement points, provided that for the applicable submission
mechanisms the measure benchmarks are identified as topped out again in
the benchmarks
[[Page 53727]]
published for the 2018 MIPS performance period. We will also amend
Sec. 414.1380(b)(1)(xiii)(B) to read that, beginning with the 2021
MIPS payment year, measure benchmarks, except for measures in the CMS
Web Interface, that are identified as topped out for 2 or more
consecutive years will receive a maximum of 7 measure achievement
points in the second consecutive year it is identified as topped out,
and beyond. We will continue to consider if additional factors need to
be taken into account as we score topped out measures, and if needed,
we will make further proposals in future rulemaking.
Together the finalized policies for phasing in capped scoring and
removing topped out measures are intended to provide an incentive for
MIPS eligible clinicians to begin to submit measures that are not
topped out while also providing stability by allowing MIPS eligible
clinicians who have few alternative measures to continue to receive
standard scoring for most topped out measures for an additional year,
and not perform below the median score for those 6 measures that
receive special scoring. It also provides MIPS eligible clinicians the
ability to anticipate and plan for the removal of specific topped out
measures, while providing measure developers time to develop new
measures. Below is an illustration of the lifecycle for scoring and
removing topped out measures based on our newly finalized policies (a
brief illustrative sketch follows the list):
Year 1: Measures are identified as topped out, which in
this example would be in the benchmarks published for the 2017 MIPS
performance period. The 2017 benchmarks are posted on the Quality
Payment Program Web site: https://qpp.cms.gov/resources/education.
Year 2: Measures are identified as topped out in the
benchmarks published for the 2018 MIPS performance period. Measures
identified in Table 18 have special scoring applied, provided they are
identified as topped out for the 2018 MIPS performance period, meaning
it is the second consecutive year they are identified as topped out.
Year 3: Measures are identified as topped out in the
benchmarks published for the 2019 MIPS performance period. The measures
identified as topped out in the benchmarks published for the 2019 MIPS
performance period and the previous two consecutive performance periods
would continue to have special scoring applied for the 2019 MIPS
performance period and would be considered, through notice-and-comment
rulemaking, for removal for the 2020 MIPS performance period.
Year 4: Topped out measures that are finalized for removal
are no longer available for reporting. For example, the measures
identified as topped out for the 2017, 2018 and 2019 MIPS performance
periods, if subsequently finalized for removal, will not be available
on the list of measures for the 2020 MIPS performance period and future
years. For all other measures, the timeline would apply starting with
the benchmarks for the 2018 MIPS performance period. Thus, the first
year any topped out measure other than those identified in Table 18
could be proposed for removal would be in rulemaking for the 2021 MIPS
performance period, based on the benchmarks being topped out in the
2018, 2019, and 2020 MIPS performance periods. If the measure benchmark
is not topped out for three consecutive MIPS performance periods, then
the lifecycle would stop and start again at year 1 the next time the
measure benchmark is topped out.
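The sketch below restates this lifecycle as a simple lookup keyed by the
number of consecutive published benchmarks in which a (non-CMS Web
Interface) measure benchmark is identified as topped out. The function
name and status labels are illustrative assumptions; any removal in year
4 would still depend on notice-and-comment rulemaking or the QCDR
approval process as described above.

    # Illustrative restatement of the finalized topped out measure lifecycle.
    def lifecycle_status(consecutive_years_topped_out):
        if consecutive_years_topped_out <= 0:
            return "standard scoring; the lifecycle resets"
        if consecutive_years_topped_out == 1:
            return "year 1: identified as topped out; standard scoring still applies"
        if consecutive_years_topped_out == 2:
            return ("year 2: special scoring applies (7 measure achievement point cap); "
                    "for the 2018 performance period this applies only to the Table 18 measures")
        if consecutive_years_topped_out == 3:
            return ("year 3: capped scoring continues; measure considered for removal through "
                    "notice-and-comment rulemaking or the QCDR approval process")
        return "year 4 and beyond: measure may be removed from the measure list if finalized"

    for years in range(0, 5):
        print(years, "->", lifecycle_status(years))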
An example of applying the proposed scoring cap compared to scoring
applied for the 2017 MIPS performance period is provided in Table 19.
Table 19--Scoring for Topped Out Measures * Starting in the CY 2018 MIPS Performance Period Compared to the Transition Year Scoring
--------------------------------------------------------------------------------------------------------------------------------------------------------
                                      Measure 1        Measure 2        Measure 3        Measure 4           Measure 5           Measure 6           Quality category
 Scoring policy                       (topped out)     (topped out)     (topped out)     (topped out)        (not topped out)    (not topped out)    percent score *
--------------------------------------------------------------------------------------------------------------------------------------------------------
2017 MIPS performance period          10 measure       10 measure       10 measure       4 measure           10 measure          5 measure           49/60 = 81.67
 scoring.                             achievement      achievement      achievement      achievement         achievement         achievement
                                      points.          points.          points.          points (did not     points.             points.
                                                                                         get max score).
Capped scoring applied............    7 measure        7 measure        7 measure        4 measure           10 measure          5 measure           40/60 = 66.67
                                      achievement      achievement      achievement      achievement         achievement         achievement
                                      points.          points.          points.          points.             points.             points.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Notes: Topped out measures are scored with a 7-point measure achievement point cap. The cap does not impact the score if the MIPS eligible clinician's
 score for the measure is below the cap. It is still possible to earn maximum measure achievement points on the non-topped out measures.
--------------------------------------------------------------------------------------------------------------------------------------------------------
* This example would only apply to the 6 measures identified in Table 18 for the CY 2018 MIPS Performance Period. This example also excludes bonus
  points and improvement scoring proposed in the proposed rule (82 FR 30113 through 30114).
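The following sketch reproduces the arithmetic shown in Table 19,
applying the finalized 7-point cap to the topped out measures and
dividing the summed achievement points by the 60 available points; the
function name is an assumption for illustration, and bonus and
improvement points are excluded just as they are in the table.

    # Reproduces the arithmetic shown in Table 19 (bonus and improvement points excluded).
    def quality_category_percent(points_by_measure, topped_out_flags, cap=7, total_possible=60):
        earned = sum(min(p, cap) if topped else p
                     for p, topped in zip(points_by_measure, topped_out_flags))
        return 100.0 * earned / total_possible

    uncapped = [10, 10, 10, 4, 10, 5]
    topped = [True, True, True, True, False, False]
    print(round(quality_category_percent(uncapped, [False] * 6), 2))   # 81.67 (2017 scoring)
    print(round(quality_category_percent(uncapped, topped), 2))        # 66.67 (capped scoring)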
(d) Case Minimum Requirements and Measure Reliability and Validity
To help ensure reliable measurement, in the CY 2017 Quality Payment
Program final rule (81 FR 77288), we finalized a 20-case minimum for
all quality measures except the all-cause hospital readmission measure.
For the all-cause hospital readmission measure, we finalized in the CY
2017 Quality Payment Program final rule a 200-case minimum and
finalized to apply the all-cause hospital readmission measure only to
groups of 16 or more clinicians that meet the 200-case minimum
requirement (81 FR 77288). We did not propose any changes to these
policies.
For the 2019 MIPS payment year, we finalized in the CY 2017 Quality
Payment Program final rule that if the measure is submitted but is
unable to be scored because it does not meet the required case minimum,
does not have a benchmark, or does not meet the data completeness
requirement, the measure would receive a score of 3 points (81 FR 77288
through 77289). We identified two classes of measures for the
transition year. Class 1 \5\ measures are measures that can be scored
based on performance because they have a benchmark, meet the case
minimum requirement, and meet the data completeness standard. We
finalized that Class 1 measures would receive 3 to 10 points based on
performance compared to the benchmark (81 FR 77289). Class 2 measures
are measures that cannot be scored based on performance because they do
not have a benchmark, do not have at least 20 cases, or the submitted
measure does not meet data completeness criteria. We
[[Page 53728]]
finalized that Class 2 measures, which do not include measures
submitted with the CMS Web Interface or administrative claims-based
measures, receive 3 points (81 FR 77289).
---------------------------------------------------------------------------
\5\ References to ``Classes'' of measures in this section
II.C.7.a.(2)(d) of this final rule with comment period are intended
only to characterize the measures for ease of discussion.
---------------------------------------------------------------------------
We proposed to maintain the policy to assign 3 points for measures
that are submitted but do not meet the required case minimum or do not
have a benchmark for the 2020 MIPS payment year and amend Sec.
414.1380(b)(1)(vii) accordingly (82 FR 30106 through 30108). We also
proposed a change to the policy for scoring measures that do not meet
the data completeness requirement for the 2020 MIPS payment year.
To encourage complete reporting, we proposed that in the 2020 MIPS
payment year, measures that do not meet data completeness standards
will receive 1 point instead of the 3 points that were awarded in the
2019 MIPS payment year. We proposed lowering the point floor to 1 for
measures that do not meet data completeness standards for several
reasons. First, we want to encourage complete reporting because data
completeness is needed to reliably measure quality. Second, unlike case
minimum and availability of a benchmark, data completeness is within
the direct control of the MIPS eligible clinician. In the future, we
intend that measures that do not meet the completeness criteria will
receive zero points; however, we believe that during the second year of
transitioning to MIPS, clinicians should continue to receive at least 1
measure achievement point for any submitted measure, even if the
measure does not meet the data completeness standards.
We are concerned, however, that data completeness may be harder to
achieve for small practices. Small practices tend to have small case
volume, and missing one or two cases could cause the MIPS eligible
clinician to miss the data completeness standard as each case may
represent multiple percentage points for data completeness. For
example, for a small practice with only 20 cases for a measure, each
case is worth 5 percentage points, and if they miss reporting just 11
or more cases, they would fail to meet the data completeness threshold,
whereas for a practice with 200 cases, each case is worth 0.5
percentage points towards data completeness, and the practice would
have to miss more than 100 cases to fail to meet the data completeness
criteria. Applying 1 point for missing data completeness based on
missing a relatively small number of cases could disadvantage small
practices, which may have additional burdens for reporting in MIPS,
although we also recognize that failing to report on 10 or more
patients is undesirable. In addition, we know that many small practices
may have less experience with submitting quality performance category
data and may not yet have systems in place to ensure they can meet the
data completeness criteria. Thus, we proposed an exception to the
proposed policy for measures submitted by small practices, as defined
in Sec. 414.1305. We proposed that these clinicians would continue to
receive 3 points for measures that do not meet data completeness.
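The per-case arithmetic described above can be made concrete with the
brief sketch below; the function names are assumptions for illustration,
and the applicable data completeness threshold is supplied as a
parameter rather than restated here.

    # Each eligible case carries 100 / (eligible cases) percentage points toward data
    # completeness, so a small denominator magnifies every missed case.
    def completeness_percent(reported_cases, eligible_cases):
        return 100.0 * reported_cases / eligible_cases

    def meets_completeness(reported_cases, eligible_cases, threshold_percent):
        return completeness_percent(reported_cases, eligible_cases) >= threshold_percent

    print(100.0 / 20)                       # 5.0 percentage points per case (20 eligible cases)
    print(100.0 / 200)                      # 0.5 percentage points per case (200 eligible cases)
    print(completeness_percent(11, 20))     # 55.0 percent of eligible cases reported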
Therefore, we proposed to revise Class 2 measures to include only
measures that cannot be scored based on performance because they do not
have a benchmark or do not have at least 20 cases. We also proposed to
create Class 3 measures, which are measures that do not meet the data
completeness requirement. We proposed that the revised Class 2 measure
would continue to receive 3 points. The proposed Class 3 measures would
receive 1 point, except if the measure is submitted by a small practice
in which case the Class 3 measure would receive 3 points. However,
consistent with the policy finalized in the CY 2017 Quality Payment
Program final rule, these policies for Class 2 and Class 3 measures
would not apply to measures submitted with the CMS Web Interface or
administrative claims-based measures. A summary of the proposals is
provided in Table 20.
  Table 20--Quality Performance Category: Scoring Measures Based on Performance

Class 1
    Description (2017 MIPS transition year): Measures that can be
scored based on performance. Measures that were submitted or calculated
that met the following criteria: (1) the measure has a benchmark; (2)
has at least 20 cases; and (3) meets the data completeness standard
(generally 60 percent).
    Scoring rules (2017 MIPS transition year): 3 to 10 points based on
performance compared to the benchmark.
    Description (2018 MIPS performance period): Same as transition
year.
    Scoring rules (2018 MIPS performance period): Same as transition
year; 3 to 10 points based on performance compared to the benchmark.

Class 2
    Description (2017 MIPS transition year): Measures that cannot be
scored based on performance. Measures that were submitted, but fail to
meet one of the Class 1 criteria. The measure either (1) does not have
a benchmark, (2) does not have at least 20 cases, or (3) does not meet
data completeness criteria.
    Scoring rules (2017 MIPS transition year): 3 points. This Class 2
measure policy does not apply to CMS Web Interface measures and
administrative claims-based measures.
    Description (2018 MIPS performance period): Measures that were
submitted and meet data completeness, but do not have both of the
following: (1) a benchmark; and (2) at least 20 cases.
    Scoring rules (2018 MIPS performance period): 3 points. This Class
2 measure policy would not apply to CMS Web Interface measures and
administrative claims-based measures.

[[Page 53729]]

Class 3
    Description (2017 MIPS transition year): n/a.
    Scoring rules (2017 MIPS transition year): n/a.
    Description (2018 MIPS performance period): Measures that were
submitted, but do not meet data completeness criteria, regardless of
whether they have a benchmark or meet the case minimum.
    Scoring rules (2018 MIPS performance period): 1 point, except for
small practices, which would receive 3 points. This Class 3 measure
policy would not apply to CMS Web Interface measures and administrative
claims-based measures.
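    For illustration only, the following minimal sketch (hypothetical
Python; it is not part of the regulation text or of any CMS system)
restates the data completeness check and the Class 1, 2, and 3 point
floors summarized in Table 20 for the 2018 MIPS performance period. The
function names, parameters, and example values are illustrative
assumptions; CMS Web Interface measures and administrative claims-based
measures are outside this logic.

def meets_data_completeness(reported_cases: int,
                            eligible_cases: int,
                            threshold: float = 0.60) -> bool:
    # Data completeness is generally 60 percent of eligible cases for
    # the 2018 MIPS performance period.
    return reported_cases >= threshold * eligible_cases


def measure_floor(has_benchmark: bool, case_count: int,
                  complete: bool, small_practice: bool) -> str:
    if not complete:
        # Class 3: fails data completeness, regardless of benchmark or
        # case minimum; small practices retain the 3-point floor.
        return "Class 3: 3 points" if small_practice else "Class 3: 1 point"
    if has_benchmark and case_count >= 20:
        # Class 1: 3 to 10 points based on performance against the benchmark.
        return "Class 1: 3 to 10 points"
    # Class 2: meets data completeness but lacks a benchmark or the
    # 20-case minimum.
    return "Class 2: 3 points"


# With a 60 percent threshold, a 20-case denominator fails if fewer
# than 12 cases are reported; a 200-case denominator fails only if
# fewer than 120 are reported.
print(meets_data_completeness(11, 20))    # False
print(meets_data_completeness(150, 200))  # True
print(measure_floor(has_benchmark=False, case_count=35,
                    complete=True, small_practice=False))  # Class 2: 3 points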
The following is a summary of the public comments received on our
proposal for measures that do not meet the case minimum requirement or
do not have a benchmark (Class 2 measures) and our responses:
Comment: A few commenters supported maintaining the policy to
assign 3 points to measures that are submitted but do not have a
benchmark or meet the case minimum.
    Response: We appreciate commenters' support for the policy to assign
3 points to Class 2 measures.
Comment: Several commenters recommended that more than 3 points
should be assigned to measures without a benchmark, citing the need to
encourage MIPS eligible clinicians to report new measures and ensure
adequate representation of quality of care provided. A few commenters
proposed assigning a null value, maximum points, 6 points, or bonus
points to these measures. A few commenters suggested measures with no
benchmarks should have an opportunity to earn a maximum or near maximum
score or at least 5 or 6 points. One commenter encouraged CMS to align
general MIPS and the APM scoring standard and suggested that measures
that do not have a benchmark or meet the case minimum should not be
removed from the numerator and denominator of the quality performance
category percent score, regardless of whether general MIPS or the APM
scoring standard applies.
Response: We recognize stakeholders' concerns regarding the
assignment of 3 points to measures without a benchmark. However,
assigning more than 3 points, a null value, or bonus points increases
the likelihood of potential gaming because new measures and other
measures without a benchmark based on the baseline period may still be
scored based on performance and receive between 3 and 10 measure
achievement points if the measure has a benchmark based on the
performance period (Class 1 measures). Therefore, if we were to use any
of the suggested approaches, it would be more advantageous for a MIPS
eligible clinician that submits measures without a benchmark because
points for those measures would be higher than the floor for Class 1
measures. For those measures without a benchmark based on the baseline
period or the performance period (Class 2 measures), we selected 3
points because we did not want to provide more credit for reporting a
measure that cannot be reliably scored against a benchmark than for
measures for which we can measure performance against a benchmark.
Providing null values would reduce the 60-point potential of the final
quality score and may provide incentives for MIPS eligible clinicians
to submit mostly measures without a benchmark that cannot be reliably
scored, rather than encouraging the use of measures that can reliably
measure performance, provide meaningful distinctions in performance,
and offer improvement opportunities. We refer readers to section
II.C.6.g.(3)(b) of this final rule with comment period for further
discussion on the quality performance category for MIPS APMs including
the use of the null value score for measures that do not have a
benchmark or meet the case minimum. We will continue to monitor the
impact of the policy as we gain experience with MIPS and evaluate
whether we need to revisit these approaches in future rulemaking.
Final Action: After consideration of public comments, we are
finalizing our proposal to maintain the policy to assign 3 points for
measures that are submitted but do not meet the required case minimum
or do not have a benchmark for the 2020 MIPS payment year and amend
Sec. 414.1380(b)(1)(vii) accordingly.
We proposed to amend Sec. 414.1380(b)(1)(vii) to assign 3 points
for measures that do not meet the case minimum or do not have a
benchmark in the 2020 MIPS payment year, and to assign 1 point for
measures that do not meet data completeness requirements, unless the
measure is submitted by a small practice, in which case it would
receive 3 points (82 FR 30108).
We invited comment on our proposal to assign 1 point to measures
that do not meet data completeness criteria, with an exception for
measures submitted by small practices.
The following is a summary of the public comments received on the
data completeness proposal and our responses:
Comment: Several commenters supported lowering the 3-point floor to
1 point for measures that do not meet the data completeness criteria
because it would encourage MIPS eligible clinicians to report more
complete performance data, in turn supporting the robust measurement
underlying the Quality Payment Program's overall assessments, bonuses,
and penalties. One commenter suggested adjusting the points assigned to
small practices in future years to align with large practices and
incentivize small practices to collect quality measure data
effectively. One commenter only supported awarding 1 point if the
performance period for the quality performance category was not longer
than 90 days.
Response: We thank commenters for their support for lowering the
points assigned to measures that do not meet the data completeness
criteria from 3 points to 1 point. We will continue to revisit ways to
improve this policy in future years. Additionally, we believe that it
would not be in the best interest of MIPS eligible clinicians to have
less than a full calendar year performance period for the quality
performance category. We refer readers to section II.C.5. of this final
rule with comment period for further discussion on the MIPS performance
period.
Comment: Several commenters supported CMS's proposed policy to
assign 3 points to small practices who submit measures that do not meet
the data completeness requirement. A few commenters requested that CMS
extend the 3-point floor for small practices past the 2018 MIPS
performance period or include other types of clinicians or
[[Page 53730]]
groups that may need a similar exception. One commenter who supported
the small practice exception also expressed concern that it would not
be enough to overcome the disparities between small and rural practices
and large and urban practices.
Response: We thank commenters for their support for the small
practice exception. We will consider ways to improve this approach in
future years, including assessing the exception's impact on any
disparities between various types of practices. Next year we will
revisit whether to extend the small practice exception beyond the 2018
MIPS performance period. At this time, we do not believe it is
appropriate to extend this exception beyond small practices because we
believe the policy supports our program goals for complete and accurate
reporting to support meaningful efforts to improve the quality of care
patients receive.
Comment: Many commenters did not support assigning 1 point to
measures that do not meet the data completeness criteria in the 2018
MIPS performance period and wanted to maintain the policy to assign 3
points. A few commenters cited the difficulty many practices face in
meeting the data completeness threshold, including disparate IT
systems, the dearth of germane specialty quality measures that can be
reported electronically, and limited numbers of cases. One commenter
wanted long term stability in the program reporting requirements,
citing the administrative burden caused by changes. A few commenters
requested that the 3-point floor remain until there is data to show
that sufficient numbers of MIPS eligible clinicians are able to meet
the criteria to warrant a reduction in points or at least until 2015
CEHRT is required. Another commenter stated that having a different
floor for small practices creates a disparity in scoring and further
complicates the Quality Payment Program. One commenter suggested that
CMS continue to assign 3 points if the MIPS eligible clinician makes a
substantive effort to submit data, or provide a sliding scale for MIPS
eligible clinicians who make a good faith effort to achieve data
completeness thresholds, thus rewarding physicians for submitting data
in a timely manner.
Response: We believe assigning 1 point to measures that do not meet
the data completeness criteria reflects our goals for MIPS eligible
clinicians' performance under the Quality Payment Program. We also
believe that data completeness is something that is within a
clinician's control, and without the data completeness requirement
clinicians would be able to receive 3 measure achievement points for
submitting just one case. While that was appropriate for year 1, it is
less appropriate as we transition into year 2 and future years. As
discussed in section II.C.6.b.(3)(b) of this final rule with comment
period, we are not finalizing our proposal to lower the data
completeness threshold to 50 percent for the 2018 performance period;
rather, it will remain at 60 percent for the 2018 performance period as
finalized in the CY 2017 Quality Payment Program final rule with
comment period. While we acknowledge stakeholders' concerns and
suggestions for delaying the implementation of the 1-point policy, we
believe this policy supports our program goals for complete and
accurate reporting that reflects meaningful efforts to improve the
quality of care patients receive. We will continue to consider ways to
improve the policy to achieve this aim as we work to stabilize and
simplify the program's reporting requirements.
Comment: One commenter suggested that measures that do not meet
data completeness should receive zero points instead of 1 point because
it adds less complexity to the Quality Payment Program.
Response: At this time, we believe assigning 1 point is appropriate
for the second transition year, as it is a step towards meeting our
goal for a more accurate assessment of a MIPS eligible clinician's
performance on the quality measures. We will continue to monitor the
impact of the policy and evaluate whether we need to apply more
rigorous standards in future rulemaking.
Final Action: After consideration of public comments, we are
finalizing the policy to assign 1 point to measures that do not meet
data completeness criteria, with an exception for measures submitted by
small practices, which will receive 3 points, and amend Sec.
414.1380(b)(1)(vii) accordingly.
We did not propose to change the methodology we use to score
measures submitted via the CMS Web Interface that do not meet the case
minimum, do not have a benchmark, or do not meet the data completeness
requirement finalized in the CY 2017 Quality Payment Program final rule
and codified at paragraph (b)(1)(viii) of Sec. 414.1380. We referred
readers to the discussion at 81 FR 77288 for more details on our
previously finalized policy. However, we noted that as described in the
proposed rule (82 FR 30113), we proposed to add that CMS Web Interface
measures with a benchmark that are redesignated from pay for
performance to pay for reporting by the Shared Savings Program will not
be scored. We refer readers to the discussion at section
II.C.7.a.(2)(h)(ii) of this final rule with comment period for public
comments related to changes in CMS Web Interface scoring.
We also did not propose any changes to the policy to not include
administrative claims measures in the quality performance category
percent score if the case minimum is not met or if the measure does not
have a benchmark finalized in the CY 2017 Quality Payment Program final
rule and codified at paragraph (b)(1)(viii) of Sec. 414.1380. We
referred readers to the discussion at 81 FR 77288 for more details on
that policy.
    To clarify the exclusion of measures submitted via the CMS Web
Interface and measures based on administrative claims from the policy
changes proposed to be codified at paragraph (b)(1)(vii), as discussed
previously, we proposed to amend paragraph (b)(1)(vii) to make it
subject to paragraph (b)(1)(viii), which codifies the exclusion.
We did not receive public comments on this proposal.
Final Action: We are finalizing as proposed the technical
corrections to Sec. 414.1380(b)(1)(vii) related to the exclusion of
measures submitted via the CMS Web Interface. We refer readers to the
discussion at section II.C.7.a.(2)(h)(ii) of this final rule with
comment period for public comments related to changes in CMS Web
Interface scoring.
(e) Scoring for MIPS Eligible Clinicians That Do Not Meet Quality
Performance Category Criteria
In the CY 2017 Quality Payment Program final rule, we finalized
that MIPS eligible clinicians who fail to submit a measure that is
required to satisfy the quality performance category submission
criteria would receive zero points for that measure (81 FR 77291). We
did not propose any changes to the policy to assign zero points for
failing to submit a measure that is required in this proposed rule.
We would like to emphasize that MIPS eligible clinicians that fail
to submit any measures under the quality performance category will
receive a zero score for this category. All MIPS eligible clinicians
are required to submit measures under the quality performance category
unless there are no measures that are applicable and available or
because of extreme and uncontrollable circumstances. For further
discussion on extreme and uncontrollable circumstances, see sections
II.C.7.b.(3)(c) and III.B of this final rule with comment period.
[[Page 53731]]
In the CY 2017 Quality Payment Program final rule, we also
finalized implementation of a validation process for claims and
registry submissions to validate whether MIPS eligible clinicians have
6 applicable and available measures, whether an outcome measure is
available or whether another high priority measure is available if an
outcome measure is not available (81 FR 77290 through 77291).
We did not propose any changes to the process for validating
whether MIPS eligible clinicians that submit measures via claims and
registry submissions have measures available and applicable. We stated
in the CY 2017 Quality Payment Program final rule (81 FR 77290) that we
did not intend to establish a validation process for QCDRs because we
expect that MIPS eligible clinicians that enroll in QCDRs will have
sufficient meaningful measures to meet the quality performance category
criteria (81 FR 77290 through 77291). We did not propose any changes to
this policy.
We also stated that if a MIPS eligible clinician did not have 6
measures relevant within their EHR to meet the full specialty set
requirements or meet the requirement to submit 6 measures, the MIPS
eligible clinician should select a different submission mechanism to
meet the quality performance category requirements and should work with
their EHR vendors to incorporate applicable measures as feasible (81 FR
77290 through 77291). Under our proposals discussed in section
II.C.6.a.(1) of this final rule with comment period to allow measures
to be submitted and scored via multiple mechanisms within a performance
category, we anticipated that MIPS eligible clinicians that submit
fewer than 6 measures via EHR will have sufficient additional measures
available via a combination of submission mechanisms to submit the
measures required to meet the quality performance category criteria.
For example, the MIPS eligible clinician could submit 2 measures via
EHR and supplement that with 4 measures via QCDR or registry.
Therefore, given the proposal to score multiple mechanisms, if a
MIPS eligible clinician submits any quality measures via EHR or QCDR,
we would not conduct a validation process because we expect these MIPS
eligible clinicians to have sufficient measures available to meet the
quality performance category requirements. As discussed in section
II.C.6.a.(1) of this final rule with comment period, we are not
finalizing the proposal to score multiple mechanisms beginning with the
CY 2018 performance period as proposed, but instead beginning with the
CY 2019 performance period.
Given our proposal to score measures submitted via multiple
mechanisms (see 82 FR 30110 through 30113), we proposed to validate the
availability and applicability of measures only if a MIPS eligible
clinician submits via claims submission options only, registry
submission options only, or a combination of claims and registry
submission options. In these cases, we proposed that we will apply the
validation process to determine if other measures are available and
applicable broadly across claims and registry submission options. We
will not check if there are measures available via EHR or QCDR
submission options for these reporters. We noted that groups cannot
report via claims, and therefore groups and virtual groups will only
have validation applied across registries. We would validate the
availability and applicability of a measure through a clinically
related measure analysis based on patient type, procedure, or clinical
action associated with the measure specifications. For us to recognize
fewer than 6 measures, an individual MIPS eligible clinician must
submit exclusively using claims or qualified registries or a
combination of the two, and a group or virtual group must submit
exclusively using qualified registries. Given the proposal in the
proposed rule (82 FR 30110 through 30113) to permit scoring measures
submitted via multiple mechanisms, validation will be conducted first
by applying the clinically related measure analysis for the individual
measure, and then, to the extent technically feasible, validation will
be applied to check for measures available via both claims and
registries.
We would like to clarify that we expect that MIPS eligible
clinicians would choose a single submission mechanism that would allow
them to report 6 measures. Multiple submission mechanisms give MIPS
eligible clinicians additional flexibility in reporting the 6 measures,
and we do not require using multiple submission mechanisms for
reporting quality measures.
    The following is a summary of the public comments received on our
proposal to apply validation only if measures are submitted via claims
and/or registry submission options, and our responses:
Comment: Several commenters were concerned about the impact of
multiple submission mechanisms on the validation process for claims and
registry submissions. A few commenters recommended that the validation
process should be limited to a single submission mechanism. Several
commenters were concerned that the process may determine that a MIPS
eligible clinician who reports via claims should have also reported via
a qualified registry to reach six measures, adding an administrative
and financial burden for MIPS eligible clinicians. A few commenters also
recommended that in cases of claims reporting, CMS limit validation to
measures applicable to claims reporting only or develop a process to
determine in advance of the reporting year which quality measures are
likely applicable to each MIPS eligible clinician and only hold them
accountable for these relevant measures. A few commenters requested
clarification on the validation process and how it would be implemented
for measures submitted via claims and registries in light of the
proposal to use multiple submission mechanisms.
    Response: As mentioned in section II.C.6.a.(1) of this final rule with
comment period, we are finalizing the policy for scoring measures
submitted via multiple mechanisms beginning with year 3 to allow
additional time to communicate how this policy intersects with our
measure applicability policies. To align with that policy, we are
finalizing our validation proposal with modification beginning with
year 3 and, for the year 2 validation process, will continue to apply
the year 1 validation process, which is limited to a single submission
mechanism. Also, given commenters' concerns regarding the impact of
multiple submission mechanisms on the validation process for claims and
registry submissions, we are modifying our validation proposal to
provide that we will validate the availability and applicability of
quality measures only with respect to the data submission mechanism(s)
that a MIPS eligible clinician utilizes for the quality performance
category for a performance period. We will not apply the validation
process to any data submission mechanism that the MIPS eligible
clinician does not utilize for the quality performance category for the
performance period. Thus, MIPS eligible clinicians who submit quality
data via claims only would be validated against claims measures only,
and MIPS eligible clinicians who submit quality data via registry only
would be validated against registry measures only. MIPS eligible
clinicians who, beginning with year 3, elect to submit quality data via
claims and registry would be validated against both claims and registry
measures; however, they would not be validated against measures
submitted via other
[[Page 53732]]
data submission mechanisms. Thus, under the modified validation
process, MIPS eligible clinicians who submit via claims or registry
submission only or a combination of claims and registry submissions
would not be required to submit measures through multiple mechanisms to
meet the quality performance category criteria; rather, utilizing
multiple submission mechanisms is an option available to MIPS eligible
clinicians beginning with year 3, which may increase their quality
performance category score, but may also affect the scope of measures
against which they will be validated or whether they qualify for the
validation process. We expect that MIPS eligible clinicians would
choose a single submission mechanism that would allow them to report 6
measures. Our intention is to offer multiple submission mechanisms to
increase flexibility for MIPS individual clinicians and groups. We are
not requiring that MIPS individual clinicians and groups submit via
multiple submission mechanisms; however, beginning with year 3, the
option would be available for those that have applicable measures and/
or activities available to them.
Comment: A few commenters recommended a validation process for
measures submitted via EHR and QCDR reporting mechanisms to ensure that
eligible clinicians who select an EHR or QCDR mechanism to report are
not unfairly disadvantaged. The commenters believed that such
clinicians may not have 6 relevant measures to report; therefore, a
lack of a validation process for QCDR and EHR reporting disincentivizes
QCDR- and EHR-based submission of quality measures.
Response: As we mentioned in the proposed rule (82 FR 30108 through
30109), we expect that MIPS eligible clinicians that enroll in QCDRs
should have sufficient measures to report and that those who submit via
EHR and do not have a sufficient number of measures within their EHR
should select a different submission mechanism to meet the quality
performance category requirements and should work with their EHR
vendors to incorporate applicable measures as feasible. We recognize
this may be a disadvantage for MIPS eligible clinicians who submit via
EHR in year 2; however, beginning in the 2019 MIPS performance period,
MIPS eligible clinicians that submit fewer than 6 measures via EHR will
have sufficient additional measures available via a combination of
submission mechanisms to meet the 6-measure reporting requirement. We
strongly encourage MIPS eligible clinicians to select the submission
mechanism that has 6 measures available and applicable to their
specialty and practice type. The multiple submission mechanism policy
will help in situations where MIPS eligible clinicians who do not have
6 measures available via QCDR or EHR can report via QCDR or EHR and
supplement with measures from other mechanisms.
Final Action: After consideration of public comments, we are
finalizing our validation proposal with modification beginning with
year 3 (CY 2019 performance period and 2021 MIPS payment year). For
year 2 (CY 2018 performance period and 2020 MIPS payment year), we will
continue to apply the year 1 validation process. As discussed above, we
are modifying our validation proposal to provide that we will validate
the availability and applicability of quality measures only with
respect to the data submission mechanism(s) that a MIPS eligible
clinician utilizes for the quality performance category for a
performance period. We will not apply the validation process to any
data submission mechanism that the MIPS eligible clinician does not
utilize for the quality performance category for the performance
period. We seek comment on how to modify the validation process for
year 3 when we have multiple submission mechanisms.
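    For illustration only, the following sketch (hypothetical Python;
the names and structure are assumptions, not CMS specifications)
captures the scope of the modified validation process described above:
validation applies only to claims and qualified registry submissions,
only against the mechanisms actually utilized, and not at all if any
EHR or QCDR submission is made; groups cannot report via claims, and
the claims-plus-registry combination applies beginning with year 3.

# Mechanisms subject to the clinically related measure validation.
VALIDATED_MECHANISMS = {"claims", "registry"}


def validation_scope(mechanisms_used: set, is_group: bool) -> set:
    """Return the mechanisms against which measure availability and
    applicability would be validated; an empty set means the validation
    process does not apply (for example, any EHR or QCDR submission)."""
    if mechanisms_used - VALIDATED_MECHANISMS:
        return set()
    if is_group and "claims" in mechanisms_used:
        raise ValueError("groups and virtual groups cannot report via claims")
    # Validate only against the mechanisms actually utilized.
    return mechanisms_used & VALIDATED_MECHANISMS


print(validation_scope({"claims"}, is_group=False))              # {'claims'}
print(validation_scope({"claims", "registry"}, is_group=False))  # both (year 3)
print(validation_scope({"registry", "ehr"}, is_group=False))     # set()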
In the CY 2018 Quality Payment Program proposed rule (82 FR 30109),
we recognized that in extremely rare instances there may be a MIPS
eligible clinician who may not have available and applicable quality
measures. For example, a subspecialist who focuses on a very targeted
clinical area may not have any measures available. However, in many
cases, the clinician may be part of a broader group or would have the
ability to select some of the cross-cutting measures that are
available. Given the wide array of submission options, including QCDRs
which have the flexibility to develop additional measures, we believe
this scenario should be extremely rare. If we are not able to score the
quality performance category, we may reweight their score according to
the reweighting policies described in section II.C.7.b.(3)(b) and
II.C.7.b.(3)(d) of this final rule with comment period. We noted that
we anticipate this will be a rare circumstance given our proposals to
allow measures to be submitted and scored via multiple mechanisms
within a performance category and to allow facility-based measurement
for the quality performance category.
(f) Incentives To Report High Priority Measures
In the CY 2017 Quality Payment Program final rule, we finalized
that we would award 2 bonus points for each outcome or patient
experience measure and 1 bonus point for each additional high priority
measure that is reported in addition to the 1 high priority measure
that is already required to be reported under the quality performance
category submission criteria, provided the measure has a performance
rate greater than zero, and the measure meets the case minimum and data
completeness requirements (81 FR 77293). High priority measures were
defined as outcome, appropriate use, patient safety, efficiency,
patient experience and care coordination measures. We also finalized
that we will apply measure bonus points for the CMS Web Interface for
the Quality Payment Program based on the finalized set of measures
reportable through that submission mechanism (81 FR 77293). We noted
that in addition to the 14 required measures, CMS Web Interface
reporters may also report the CAHPS for MIPS survey and receive measure
bonus points for submitting that measure. We did not propose any
changes to these policies for awarding measure bonus points for
reporting high priority measures in the proposed rule.
In the CY 2017 Quality Payment Program final rule, we finalized a
cap on high priority measure bonus points at 10 percent of the
denominator (total possible measure achievement points the MIPS
eligible clinician could receive in the quality performance category)
of the quality performance category for the first 2 years of MIPS (81
FR 77294). We did not propose any changes to the cap on measure bonus
points for reporting high priority measures, which is codified at Sec.
414.1380(b)(1)(xiv)(D),\6\ in the proposed rule.
---------------------------------------------------------------------------
\6\ Redesignated from Sec. 414.1380(b)(1)(xiii)(D).
---------------------------------------------------------------------------
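    For illustration only, the following sketch (hypothetical Python;
names and example values are assumptions) shows the arithmetic of the
high priority measure bonus retained from the CY 2017 final rule: 2
bonus points for each additional outcome or patient experience measure,
1 bonus point for each other additional high priority measure, capped
at 10 percent of the total possible measure achievement points, and
counting only measures with a performance rate greater than zero that
meet the case minimum and data completeness requirements.

def high_priority_bonus(extra_outcome_or_experience: int,
                        extra_other_high_priority: int,
                        total_possible_achievement: float = 60.0) -> float:
    # The one high priority measure already required under the
    # submission criteria does not earn bonus points.
    raw = 2 * extra_outcome_or_experience + 1 * extra_other_high_priority
    cap = 0.10 * total_possible_achievement  # 6 points when 60 are possible
    return min(raw, cap)


# One extra outcome measure plus two extra other high priority measures.
print(high_priority_bonus(1, 2))  # 4.0, below the 6-point cap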
(g) Incentives To Use CEHRT To Support Quality Performance Category
Submissions
Section 1848(q)(5)(B)(ii) of the Act outlines specific scoring
rules to encourage the use of CEHRT under the quality performance
category. For more of the statutory background and description of the
proposed and finalized policies, we referred readers to the CY 2017
Quality Payment Program final rule (81 FR 77294 through 77299).
In the CY 2017 Quality Payment Program final rule at Sec.
414.1380(b)(1)(xiv), we codified that 1 bonus point is available for
each quality
[[Page 53733]]
measure submitted with end-to-end electronic reporting, under certain
criteria described below (81 FR 77297). We also finalized a policy
capping the number of bonus points available for electronic end-to-end
reporting at 10 percent of the denominator of the quality performance
category percent score, for the first 2 years of the program (81 FR
77297). We also finalized that the CEHRT bonus would be available to
all submission mechanisms except claims submissions. Specifically, MIPS
eligible clinicians who report via qualified registries, QCDRs, EHR
submission mechanisms, or the CMS Web Interface for the Quality Payment
Program, in a manner that meets the end-to-end reporting requirements,
may receive 1 bonus point for each reported measure with a cap (81 FR
77297).
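    For illustration only, a parallel sketch (hypothetical Python;
names are assumptions) of the end-to-end electronic reporting bonus
described above: 1 bonus point per measure reported via qualified
registry, QCDR, EHR, or CMS Web Interface in a manner that meets the
end-to-end requirements, no bonus for claims submissions, and a cap of
10 percent of the quality performance category denominator.

# Submission mechanisms eligible for the CEHRT end-to-end bonus.
END_TO_END_ELIGIBLE = {"registry", "qcdr", "ehr", "web_interface"}


def cehrt_bonus(end_to_end_measure_counts: dict,
                total_possible_achievement: float = 60.0) -> float:
    """end_to_end_measure_counts maps a submission mechanism to the
    number of measures reported with end-to-end electronic reporting."""
    eligible = sum(count for mechanism, count in end_to_end_measure_counts.items()
                   if mechanism in END_TO_END_ELIGIBLE)
    return min(eligible, 0.10 * total_possible_achievement)


print(cehrt_bonus({"ehr": 5, "claims": 2}))  # 5.0; claims measures earn no bonus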
We did not propose changes to these policies related to bonus
points for using CEHRT for end-to-end reporting in the proposed rule.
However, we sought comment on the use of health IT in quality
measurement and how HHS can encourage the use of certified EHR
technology in quality measurement as established in the statute (82 FR
30109 through 30110).
We thank commenters for their response on the use of health IT in
quality measurement and we will consider them in future rulemaking.
(h) Calculating Total Measure Achievement and Measure Bonus Points
In the proposed rule (82 FR 30113 through 30120), we proposed a new
methodology to reward improvement based on achievement, from 1 year to
another, which requires modifying the calculation of the quality
performance category percent score. In the proposed rule (82 FR 30110
through 30113), we summarized the policies for calculating the total
measure achievement points and total measure bonus points, prior to
scoring improvement and the final quality performance category percent
score. We noted that, in this section, we will refer to policies
finalized in the CY 2017 Quality Payment Program final rule that apply
to the quality performance category score, which is referred to as the
quality performance category percent score in this proposed rule. We
also proposed some refinements to address the ability for
MIPS eligible clinicians to submit quality data via multiple submission
mechanisms.
(i) Calculating Total Measure Achievement and Measure Bonus Points for
Non-CMS Web Interface Reporters
In the CY 2017 Quality Payment Program final rule (81 FR 77300), we
finalized that if a MIPS eligible clinician elects to report more than
the minimum number of measures to meet the MIPS quality performance
category criteria, then we will only include the scores for the
measures with the highest number of assigned points, once the first
outcome measure is scored, or if an outcome measure is not available,
once another high priority measure is scored. We did not propose any
changes to the policy to score the measures with the highest number of
assigned points in this proposed rule; however, we proposed refinements
to account for measures being submitted across multiple submission
mechanisms.
In the CY 2017 Quality Payment Program final rule, we sought
comment on whether to score measures submitted across multiple
submission mechanisms (81 FR 77275) and on what approach we should use
to combine the scores for quality measures from multiple submission
mechanisms into a single aggregate score for the quality performance
category (81 FR 77275). We summarized the comments that were received
in the proposed rule (82 FR 30110).
We proposed, beginning with the 2018 MIPS performance period, a
method to score quality measures if a MIPS eligible clinician submits
measures via more than one of the following submission mechanisms:
Claims, qualified registry, EHR or QCDR submission options. We noted
that we believe that allowing MIPS eligible clinicians to be scored
across these data submission mechanisms in the quality performance
category will provide additional options for MIPS eligible clinicians
to report the measures required to meet the quality performance
category criteria, and encourage MIPS eligible clinicians to begin
using electronic submission mechanisms, even if they may not have 6
measures to report via a single electronic submission mechanism alone.
We noted that we also continue to score the CMS-approved survey vendor
for CAHPS for MIPS submission options in conjunction with other
submission mechanisms (81 FR 77275) as noted in Table 21.
We proposed to score measures across multiple mechanisms using the
following rules:
As with the rest of MIPS, we will only score measures
within a single identifier. For example, as codified in Sec.
414.1310(e), eligible clinicians and MIPS eligible clinicians within a
group aggregate their performance data across the TIN in order for
their performance to be assessed as a group. Therefore, measures can
only be scored across multiple mechanisms if reported by the same
individual MIPS eligible clinician, group, virtual group or APM Entity,
as described in Table 21.
We did not propose to aggregate measure results across
different submitters to create a single score for an individual measure
(for example, we are not going to aggregate scores from different TINs
within a virtual group TIN to create a single virtual group score for
the measures; rather, virtual groups must perform that aggregation
across TINs prior to data submission to CMS). Virtual groups are
treated like other groups and must report all of their measures at the
virtual group level, for the measures to be scored. Data completeness
and all the other criteria will be evaluated at the virtual group
level. Then the same rules apply for selecting which measures are used
for scoring. In other words, if a virtual group representative submits
some measures via a qualified registry and other measures via EHR, but
an individual TIN within the virtual group also submits measures, we
will only use the scores from the measures that were submitted at the
virtual group level, because the TIN submission does not use the
virtual group identifier. This is consistent with our other scoring
principles, where, for virtual groups, all quality measures are scored
at the virtual group level.
Separately, as also described in Table 21, because CMS Web
Interface and facility-based measurement each have a comprehensive set
of measures that meet the proposed MIPS submission requirements, we did
not propose to combine CMS Web Interface measures or facility-based
measurement with other group submission mechanisms (other than CAHPS
for MIPS, which can be submitted in conjunction with the CMS Web
Interface). We refer readers to section II.C.7.a.(2)(h)(ii) of this
final rule with comment period for discussion of calculating the total
measure achievement and measure bonus points for CMS Web Interface
reporters. We refer readers to section II.C.7.a.(4) of the final rule
with comment period for a description of our policies on facility-based
measurement. We list these submission mechanisms in Table 21, to
illustrate that CMS Web Interface submissions and facility-based
measurement cannot be combined with other submission options, except
that the CAHPS for MIPS survey can be combined with CMS Web Interface,
as described in the proposed rule (82 FR 30113).
[[Page 53734]]
 Table 21--Scoring Allowed Across Multiple Mechanisms by Submission Mechanism
         [Determined by MIPS identifier and submission mechanism] \1\

    Individual eligible clinician reporting via claims, EHR, QCDR, and
registry submission options: Can combine claims, EHR, QCDR, and
registry.
    Group reporting via EHR, QCDR, registry, and the CAHPS for MIPS
survey: Can combine EHR, QCDR, registry, and the CAHPS for MIPS survey.
    Virtual group reporting via EHR, QCDR, registry, and the CAHPS for
MIPS survey: Can combine EHR, QCDR, registry, and the CAHPS for MIPS
survey.
    Group reporting via CMS Web Interface: Cannot be combined with
other submission mechanisms, except for the CAHPS for MIPS survey.
    Virtual group reporting via CMS Web Interface: Cannot be combined
with other submission mechanisms, except for the CAHPS for MIPS survey.
    Individual or group reporting facility-based measures: Cannot be
combined with other submission mechanisms.
    MIPS APMs reporting Web Interface or other quality measures: MIPS
APMs are subject to separate scoring standards and cannot be combined
with other submission mechanisms.

\1\ The all-cause readmission measure is not submitted and applies to
  all groups of 16 or more clinicians who meet the case minimum of 200.
If a MIPS eligible clinician submits the same measure via
2 different submission mechanisms, we will score each mechanism by
which the measure is submitted for achievement and take the highest
measure achievement points of the 2 mechanisms.
Measure bonus points for high priority measures would be
added for all measures submitted via all the different submission
mechanisms available, even if more than 6 measures are submitted, but
high priority measure bonus points are only available once for each
unique measure (as noted by the measure number) that meets the criteria
for earning the bonus point. For example, if a MIPS eligible clinician
submits 8 measures--6 process and 2 outcome--and both outcome measures
meet the criteria for a high priority bonus (meeting the required data
completeness and case minimum requirements and having a performance
rate greater than zero), the outcome measure with the highest measure
achievement points
would be scored as the required outcome measure and then the measures
with the next 5 highest measure achievement points will contribute to
the final quality score. This could include the second outcome measure
but does not have to. Even if the measure achievement points for the
second outcome measure are not part of the quality performance category
percent score, measure bonus points would still be available for
submitting a second outcome measure and meeting the requirement for the
high priority measure bonus points. The rationale for providing measure
bonus points for measures that do not contribute measure achievement
points to the quality performance category percent score is that it
would help create better benchmarks for outcome and other high priority
measures by encouraging clinicians to report them even if they may not
have high performance on the measure. We also want to encourage MIPS
eligible clinicians to submit to us all of their available MIPS data,
not only the data that they or their intermediary deem to be their best
data. We believe it will be in the best interest of all MIPS eligible
clinicians that we determine which measures will result in the
clinician receiving the highest MIPS score. If the same measure is
submitted through multiple submission mechanisms, we would apply the
bonus points only once to the measure. We proposed to amend Sec.
414.1380(b)(1)(xiv) (as redesignated from Sec. 414.1380(b)(1)(xiii))
to add paragraph (b)(1)(xiv)(E) that if the same high priority measure
is submitted via two or more submission mechanisms, as determined using
the measure ID, the measure will receive high priority measure bonus
points only once for the measure. The total measure bonus points for
high-priority measures would still be capped at 10 percent of the total
possible measure achievement points.
Measure bonus points that are available for the use of
end-to-end electronic reporting would be calculated for all submitted
measures across all submission mechanisms, including measures that
cannot be reliably scored against a benchmark. If the same measure is
submitted through multiple submission mechanisms, then we would apply
the bonus points only once to the measure. For example, if the same
measure is submitted using end-to-end reporting via both a QCDR and EHR
reporting mechanism, the measure would only get a measure bonus point
one time. We proposed to amend Sec. 414.1380(b)(1)(xv) (as
redesignated) to add that if the same measure is submitted via two or
more submission mechanisms, as determined using the measure ID, the
measure will receive measure bonus points only once for the measure.
The total measure bonus points for end-to-end electronic reporting
would still be capped at 10 percent of the total available measure
achievement points.
Although we provided a policy to account for scoring in those
circumstances when the same measure is submitted via multiple
mechanisms, we anticipated that this would be a rare circumstance, and we do
not encourage clinicians to submit the same measure via multiple
mechanisms. Table 22 illustrates how we would assign total measure
achievement points and total measure bonus points across multiple
submission mechanisms under the proposal. In this example, a MIPS
eligible clinician elects to submit quality data via 3 submission
mechanisms: 3 measures via registry, 4 measures via claims, and 5
measures via EHR. The 3 registry measures are also submitted via claims
(as noted by the same measure letter in this example). The EHR measures
do not overlap with either the registry or claims measures. In this
example, we assign measure achievement and bonus points for each
measure. If the same measure (as determined by measure ID) is
submitted, then we use the highest achievement points for that measure.
For the bonus points, we assess which of the outcome measures meets the
outcome measure requirement and then we identify any other unique
measures that qualify for the high priority bonus. We also identify the
unique measures that qualify for end-to-end electronic reporting bonus.
[[Page 53735]]
  Table 22--Example of Assigning Total Measure Achievement and Bonus Points
   for an Individual MIPS Eligible Clinician That Submits Measures Across
                      Multiple Submission Mechanisms

Registry:
    Measure A (Outcome): Measure achievement points: 7.1. Scored: 7.1
(outcome measure with highest achievement points). High priority
measure bonus points: none (required outcome measure does not receive
bonus points).
    Measure B: Measure achievement points: 6.2 (points not considered
because it is lower than the 8.2 points for the same claims measure).
    Measure C (high priority patient safety measure that meets
requirements for additional bonus points): Measure achievement points:
5.1 (points not considered because it is lower than the 6.0 points for
the same claims measure). High priority measure bonus points: 1.

Claims:
    Measure A (Outcome): Measure achievement points: 4.1 (points not
considered because it is lower than the 7.1 points for the same measure
submitted via a registry). High priority measure bonus points: none
(the registry submission of the same measure satisfies the requirement
for an outcome measure).
    Measure B: Measure achievement points: 8.2. Scored: 8.2.
    Measure C (high priority patient safety measure that meets
requirements for additional bonus points): Measure achievement points:
6.0. Scored: 6.0. High priority measure bonus points: none (bonus
applied to the registry measure).
    Measure D (outcome measure, <50% of data submitted): Measure
achievement points: 1.0.\2\ High priority measure bonus points: none
(below data completeness).

EHR (using end-to-end reporting that meets CEHRT bonus point criteria):
    Measure E: Measure achievement points: 5.1. Scored: 5.1. CEHRT
measure bonus points: 1.
    Measure F: Measure achievement points: 5.0. Scored: 5.0. CEHRT
measure bonus points: 1.
    Measure G: Measure achievement points: 4.1 (not among the 6 scored
measures). CEHRT measure bonus points: 1.
    Measure H: Measure achievement points: 4.2. Scored: 4.2. CEHRT
measure bonus points: 1.
    Measure I (high priority patient safety measure that is below case
minimum): Measure achievement points: 3.0 (not among the 6 scored
measures). High priority measure bonus points: none (below case
minimum). CEHRT measure bonus points: 1.

Totals: 6 scored measures: 35.6 measure achievement points. High
priority measure bonus points: 1 (below 10 percent cap \1\). CEHRT
measure bonus points: 5 (below 10 percent cap).

Quality Performance Category Percent Score Prior to Improvement
Scoring: (35.6 + 1 + 5)/60 = 69.33%.

\1\ In this example the cap would be 6 points, which is 10 percent of
  the total available measure achievement points of 60.
\2\ This table in the CY 2018 Quality Payment Program proposed rule (82
  FR 30112) inadvertently indicated that this would contribute 1 point
  to the quality performance category percent score for being one of
  the 6 measures submitted. In the example, more than 6 measures were
  submitted and the 6 with the highest scores would be used; therefore,
  Measure D would not contribute points to the final score.
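    For illustration only, the following sketch (hypothetical Python;
the de-duplication and top-6 selection shown are an assumption about
how the rules above combine, not CMS's implementation) re-computes the
Table 22 example: the same measure submitted via two mechanisms keeps
only its highest achievement points, the 6 highest-scoring measures
(including the required outcome measure, which here is among them)
contribute achievement points, and each bonus type is capped at 10
percent of the 60 available achievement points.

# (measure_id, mechanism, achievement_points), restating Table 22.
submissions = [
    ("A", "registry", 7.1), ("B", "registry", 6.2), ("C", "registry", 5.1),
    ("A", "claims", 4.1), ("B", "claims", 8.2), ("C", "claims", 6.0),
    ("D", "claims", 1.0),
    ("E", "ehr", 5.1), ("F", "ehr", 5.0), ("G", "ehr", 4.1),
    ("H", "ehr", 4.2), ("I", "ehr", 3.0),
]

# Same measure via two mechanisms: keep the higher achievement points.
best = {}
for measure_id, mechanism, points in submissions:
    if measure_id not in best or points > best[measure_id]:
        best[measure_id] = points

# Score the 6 measures with the highest achievement points; in general
# the highest-scored outcome (or other high priority) measure is
# included first, which Measure A (7.1) satisfies here.
achievement = sum(sorted(best.values(), reverse=True)[:6])  # 35.6

high_priority_bonus = 1    # Measure C, counted once despite two submissions
cehrt_bonus = 5            # Measures E through I, end-to-end EHR reporting
cap = 0.10 * 60            # each bonus type capped at 6 points

score = (achievement + min(high_priority_bonus, cap)
         + min(cehrt_bonus, cap)) / 60
print(f"{score:.2%}")      # 69.33%, matching Table 22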
We proposed to amend Sec. 414.1380(b)(1)(xii) to add paragraph (A)
to state that if a MIPS eligible clinician submits measures via claims,
qualified registry, EHR, or QCDR submission options, and submits more
than the required number of measures, they are scored on the required
measures with the highest assigned measure achievement points. MIPS
eligible clinicians that report a measure via more than 1 submission
mechanism can be scored on only 1 submission mechanism, which will be
the submission mechanism with the highest measure achievement points.
Groups that submit via these submission mechanisms may also submit and
be scored on CMS-approved survey vendor for CAHPS for MIPS submission
mechanisms.
We invited comments on the proposal to calculate the total measure
achievement points by using the measures with the 6 highest measure
achievement points across multiple submission mechanisms. We invited
comments on the proposal that if the same measure is submitted via 2 or
more mechanisms, we will only take the one with the highest measure
achievement points. We invited comments on the proposal to assign high
priority measure bonus points to all measures, with performance greater
than zero, that meet case minimums, and that meet data completeness
requirements, regardless of submission mechanism and to assign measure
bonus points for each unique measure submitted using end-to-end
electronic reporting. We invited comments on the proposal that if the
same measure is submitted using 2 different mechanisms, the measure
will receive measure bonus points once.
We did not propose any changes to our policy that if a MIPS
eligible clinician does not have any scored measures, then a quality
performance category percent score will not be calculated as finalized
in the CY 2017 Quality Payment Program final rule at 81 FR 77300. We
referred readers to the discussion at 81 FR 77299 through 77300 for
more details on that policy. We noted in the proposed rule (82 FR 30108
through 30109) that we anticipate that it will be only in rare cases
that a MIPS eligible clinician does not have any scored measures and a
quality performance category percent score cannot be calculated.
    The following is a summary of the public comments received on the
[[Page 53736]]
proposals for calculating total achievement and bonus points when using
multiple submission mechanisms, and our responses:
    Comment: A few commenters supported the policy to assign high
priority measure bonus points to all measures with performance greater
than zero that meet the case minimum and data completeness
requirements, and to assign measure bonus points for each unique
measure submitted using end-to-end electronic reporting. One commenter
expressed support for bonus points, agreeing with CMS that this would
aid future benchmark development. Another commenter stated that the
policy offers the best opportunities for eligible clinicians to perform
well and maximize the bonus points offered.
Response: As discussed in section II.C.6.a.(1) of this final rule
with comment period, we are not finalizing the proposal to score
multiple mechanisms beginning with the CY 2018 performance period as
proposed, but instead beginning with the CY 2019 performance period. To
align with that policy, we are not finalizing for the CY 2018
performance period the policy for calculating total achievement points
and bonus points when using multiple submission mechanisms, but we are
finalizing it for the CY 2019 performance period and future years. We will
continue to review this policy in future rulemaking.
Comment: Several commenters supported taking the highest measure
achievement point if the same measure is submitted via 2 or more
mechanisms. A few commenters stated that this offers MIPS eligible
clinicians the best opportunity to perform well and eliminates the risk
that a MIPS eligible clinician will be penalized for reporting the same
measure via multiple mechanisms. Another commenter remarked that CMS is
providing a necessary transition to more robust submission mechanism
reporting. One commenter who supported the policy also requested that
CMS include in provider feedback reports eligible clinicians' scores on
all measures reported via multiple submission mechanisms to help MIPS
eligible clinicians select submission mechanisms in future reporting
periods.
Response: As noted above, we are finalizing the implementation of
this policy for the 2019 MIPS performance period and future years to
align with the multiple submission mechanisms policy and so that
stakeholders can more fully understand the impact of multiple
submissions on the measure applicability policies. We will consider
ways to provide more information on the impact of the policy on quality
measure scoring, including through provider feedback reports. We refer
readers to section II.C.6.a.(1) of this final rule with comment period
for more discussion on the delay.
Comment: A few commenters did not support the scoring policy for
measures submitted through multiple submission mechanisms. One
commenter cited uncertainty in the administration of the policy and
recommended it not be instituted until CMS can demonstrate the ability
to receive data and send feedback in a timely and accurate manner.
Another commenter requested that CMS re-evaluate its scoring policies
for the affected MIPS eligible clinicians who do not have the
opportunity to achieve bonus points or take advantage of this policy
due to measure scarcity. One commenter also expressed concerns that
potential cross-over measures (that is, measures that can be reported
through multiple submission mechanisms) limit the ability to aggregate
the data and shared concerns regarding how MIPS eligible clinicians
would track their progress across multiple platforms. One commenter was
concerned about the difficulty in calculating a quality performance
category score via multiple submission mechanisms. Another expressed
concern about how the same measure submitted through two different
submission mechanisms during different timeframes would be calculated
and scored. The commenter stated that calculating a score for half of
the year using one submission mechanism would not be fair, given that
the MIPS eligible clinician reported for the entire year, and that it
would be important to rectify this as longer reporting durations become
mandatory. One
commenter who supported the policy expressed concern about the number
of MIPS eligible clinicians who would need to submit via multiple
mechanisms because of the limited number of specialty measure sets that
can be reported electronically.
    Response: We understand the commenters' concerns with regard to
burden and complexity around the use of multiple submission mechanisms.
We believe that allowing MIPS eligible clinicians to be scored across
multiple submission mechanisms will provide additional options for MIPS
eligible clinicians to meet the quality performance category
requirement, thus maximizing their ability to achieve the highest
possible score and encouraging them to use electronic reporting. We
would like to clarify that for performance periods beginning in 2019,
if a MIPS eligible clinician or group reports for the quality
performance category by using multiple EHRs, then all the submissions
would be scored and the quality measures with the highest performance
would be utilized for the quality performance category score. If the
same measure is reported through multiple submission mechanisms for the
same performance period, then each submission would be scored, and only
the highest scored submission would be applied. We would not aggregate
multiple submissions of the same measure towards the quality
performance category score. However, we do not anticipate clinicians
will want to submit the same measure through multiple submission
mechanisms. As discussed in section II.C.6.a.(1) of this final rule
with comment period, we are not finalizing the proposed multiple data
submission mechanisms policy beginning with the CY 2018 performance
period as proposed, but instead beginning with the CY 2019 performance
period. The capabilities will be in place for us to administer the
policy for the CY 2019 performance period. We do not believe that MIPS
eligible clinicians who have a scarcity of measures will be
disadvantaged, because the validation process discussed in section
II.C.7.a.(2)(e) of this final rule with comment period would adjust the
scoring for lack of measures. We refer readers to section II.C.6.a.(1)
of this final rule with comment period for further discussion on the
multiple submission mechanism policy, including specialists who report
on a specialty set or do not have 6 measures to report. Over the next
year, we intend to work with and educate stakeholders regarding this
change and make them aware of any potential advantages or disadvantages
of this policy and discuss how MIPS eligible clinicians who participate
in this policy can receive feedback.
Final Action: After consideration of public comments, we are
finalizing our proposal for calculating the total measure achievement
and bonus points when using multiple submission mechanisms for year 3,
to align with the multiple submission mechanisms policy, which is also
being finalized for year 3, and we are amending Sec.
414.1380(b)(1)(xii) accordingly.
(ii) Calculating Total Measure Achievement and Measure Bonus Points for
CMS Web Interface Reporters
In the CY 2017 Quality Payment Program final rule, we finalized
that CMS Web Interface reporters are required to report 14 measures: 13
individual measures and a 2-component measure for diabetes (81 FR
77302 through 77305). We noted that for
[[Page 53737]]
the transition year, 3 measures did not have a benchmark in the Shared
Savings Program. Therefore, for the transition year, CMS Web Interface
reporters are scored on 11 of the total 14 required measures, provided
that they report all 14 required measures.
In the CY 2017 Quality Payment Program final rule, we finalized a
global floor of 3 points for all CMS Web Interface measures submitted
in the transition year, even for measures with a zero percent
performance rate, provided that these measures have met the data
completeness criteria, have a benchmark, and meet the case minimum
requirements (81 FR 77305). Therefore, measures with performance below
the 30th
percentile will be assigned a value of 3 points during the transition
year to be consistent with the floor established for other measures and
because the Shared Savings Program does not publish benchmarks below
the 30th percentile (81 FR 77305). We stated that we will reassess
scoring for measures below the 30th percentile in future years.
We proposed to continue to assign 3 points for measures with
performance below the 30th percentile, provided the measure meets data
completeness, has a benchmark, and meets the case minimum requirements
for the 2018 MIPS performance year; we made this proposal in order to
continue to align with the 3-point floor for other measures and because
the Shared Savings Program does not publish benchmarks with values
below the 30th percentile (82 FR 30113). We will reassess this policy
again next year through rulemaking.
We did not propose any changes to our previously finalized policy
to exclude from scoring CMS Web Interface measures that are submitted
but that do not meet the case minimum requirement or that lack a
benchmark, or to our policy that measures that are not submitted and
measures submitted below the data completeness requirements will
receive a zero score (81 FR 77305). However, to further increase
alignment with the Shared Savings Program, we proposed to also exclude
CMS Web Interface measures from scoring if the measure is redesignated
from pay for performance to pay for reporting for all Shared Savings
Program ACOs, although we will recognize the measure was submitted.
While the Shared Savings Program designates measures that are pay for
performance in advance of the reporting year, the Shared Savings
Program may redesignate a measure as pay for reporting under certain
circumstances (see 42 CFR 425.502(a)(5)). Therefore, we proposed to
amend Sec. 414.1380(b)(1)(viii) to add that CMS Web Interface measures
that have a measure benchmark but are redesignated as pay for reporting
for all Shared Savings Program ACOs by the Shared Savings Program will
not be scored, as long as the data completeness requirement is met.
We invited comment on our proposal to not score CMS Web Interface
measures redesignated as pay for reporting by the Shared Savings
Program.
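For illustration only, the following sketch (hypothetical field names;
not CMS's scoring software) summarizes the CMS Web Interface
measure-scoring rules described above: measures that are not submitted
or fall below data completeness receive zero; submitted measures
without a benchmark or below the case minimum, and measures
redesignated as pay for reporting for all Shared Savings Program ACOs,
are excluded from scoring; all other scored measures receive at least 3
points.

    # Illustrative decision logic for one required CMS Web Interface measure.
    # The input is a dict with hypothetical keys describing the submission.
    def score_web_interface_measure(m):
        if not m["submitted"] or not m["meets_data_completeness"]:
            return 0.0                  # zero score
        if not m["has_benchmark"] or not m["meets_case_minimum"]:
            return None                 # excluded from scoring
        if m["pay_for_reporting_for_all_acos"]:
            return None                 # excluded, though recognized as submitted
        # Benchmark-based points (placeholder); because the Shared Savings
        # Program does not publish benchmarks below the 30th percentile, a
        # 3-point floor applies, consistent with the floor for other measures.
        return max(m["benchmark_points"], 3.0)

    example = {"submitted": True, "meets_data_completeness": True,
               "has_benchmark": True, "meets_case_minimum": True,
               "pay_for_reporting_for_all_acos": False, "benchmark_points": 2.0}
    print(score_web_interface_measure(example))   # 3.0 (floor applies)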
We also noted that, while we did not state this explicitly in the CY
2017 Quality Payment Program final rule, groups that choose to report
quality measures via the CMS Web Interface may, in addition to the 14
required measures, also submit the CAHPS for MIPS survey in the quality
performance category (81 FR 77094 through 77095; 81 FR 77292). If they
do so, they can receive bonus points for submitting this high priority
measure and will be scored on it as an additional measure. Therefore,
we proposed to amend Sec. 414.1380(b)(1)(xii) to add paragraph (B) to
state that groups that submit measures via the CMS Web Interface may
also submit, and be scored on, the CAHPS for MIPS survey via the
CMS-approved survey vendor submission mechanism.
In addition, groups of 16 or more eligible clinicians that meet the
case minimum for administrative claims measures will automatically be
scored on the all-cause hospital readmission measure and have that
measure score included in their quality category performance percent
score.
We did not propose any changes to calculating the total measure
achievement points and measure bonus points for CMS Web Interface
measures in the proposed rule, although we proposed to add improvement
to the quality performance category percent score for such submissions
(as well as other submission mechanisms) in the proposed rule (82 FR
30119 through 30120).
The following is a summary of the public comments received on the
scoring for CMS Web Interface proposal and our responses:
Comment: A few commenters supported not scoring CMS Web Interface
measures re-designated as pay for reporting by the Shared Savings
Program, citing the need for alignment across programs and consistency
with the goals of the Quality Payment Program. One commenter requested
that CMS clarify which pay for reporting measures will be excluded from
MIPS Quality Performance category scoring.
Response: We appreciate commenters' support for our proposal to not
score CMS Web Interface measures redesignated as pay for reporting by
the Shared Savings Program. We will communicate with registered CMS Web
Interface participants about the re-designation, and the changes will be
posted on a CMS Web site.
Final Action: After consideration of public comments, we are
finalizing our proposal to not score CMS Web Interface measures
redesignated as pay for reporting by the Shared Savings Program and to
amend Sec. 414.1380(b)(1)(viii) accordingly.
(i) Scoring Improvement for the MIPS Quality Performance Category
Percent Score
(i) Calculating Improvement at the Quality Performance Category Level
In the CY 2017 Quality Payment Program final rule, we noted that we
consider achievement to mean how a MIPS eligible clinician performs
relative to performance standards, and improvement to mean how a MIPS
eligible clinician performs compared to the MIPS eligible clinician's
own previous performance on measures and activities in the performance
category (81 FR 77274). We also solicited public comments in the CY
2017 Quality Payment Program proposed rule on potential ways to
incorporate improvement in the scoring methodology. In the CY 2018
Quality Payment Program proposed rule (82 FR 30096 through 30098), we
explained why we believe that the options set forth in the CY 2017
Quality Payment Program proposed rule, including the Hospital VBP
Program, the Shared Savings Program, and Medicare Advantage 5-star
Ratings Program, were not fully translatable to MIPS. Beginning with
the 2018 MIPS performance period, we proposed to score improvement, as
well as achievement, at the quality performance category level when data
is sufficient (82 FR 30113 through 30114). We believe that scoring
improvement at the performance category level, rather than measuring
improvement at the measure level, for the quality performance category
would allow improvement to be available to the broadest number of MIPS
eligible clinicians because we are connecting performance to previous
MIPS quality performance as a whole rather than changes in performance
for individual measures. Just as we believe it is important for a MIPS
eligible clinician to have the flexibility to choose measures that are
meaningful to their practice, we want them to be able to adopt new
measures without concern about losing the ability to be measured
[[Page 53738]]
on improvement. In addition, we encouraged MIPS eligible clinicians to
select more outcome measures and to move away from topped out measures.
We did not want to remove the opportunity to score improvement from
those who select different measures between performance periods for the
quality performance category; therefore, we proposed to measure
improvement at the category level, which can be calculated with
different measures.
We proposed at Sec. 414.1380(b)(1)(xvi)(E) to define an
improvement percent score to mean the score that represents improvement
for the purposes of calculating the quality performance category
percent score. We also proposed at Sec. 414.1380(b)(1)(xvi)(C) that an
improvement percent score would be assessed at the quality performance
category level and included in the calculation of the quality
performance category percent score. When we evaluated different
improvement scoring options, we saw two general methods for
incorporating improvement. One method measures both achievement and
improvement and takes the higher of the two scores for each measure
that is compared. The Hospital VBP Program incorporates such a
methodology. The second method is to calculate an achievement score and
then add an improvement score if improvement is measured. The Shared
Savings Program utilizes a similar methodology for measuring
improvement. For the quality performance category, we proposed to
calculate improvement at the category level and believe adding
improvement to an existing achievement percent score would be the most
straightforward and simple way to incorporate improvement. For the
purpose of improvement scoring methodology, the term ``quality
performance category achievement percent score'' means the total
measure achievement points divided by the total possible available
measure achievement points, without consideration of bonus points or
improvement adjustments, as discussed in the proposed rule (82 FR
30116 through 30117).
Consistent with bonuses available in the quality performance
category, we proposed at Sec. 414.1380(b)(1)(xvi)(B) that the
improvement percent score may not total more than 10 percentage points.
We invited public comments on these proposals.
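For illustration only, the following sketch (hypothetical names; the
full calculation, including bonus points, is set out at 82 FR 30116
through 30117) shows the two quantities defined above: the quality
performance category achievement percent score as total measure
achievement points over total possible measure achievement points, and
an improvement percent score capped at 10 percentage points.

    # Achievement percent score: achievement points only, with no bonus
    # points or improvement adjustments.
    def achievement_percent_score(total_achievement_points, total_possible_points):
        return 100.0 * total_achievement_points / total_possible_points

    # The improvement percent score may not total more than 10 percentage points.
    def capped_improvement_percent_score(raw_improvement_percent_score):
        return min(raw_improvement_percent_score, 10.0)

    # Example: 45 of 60 achievement points is a 75 percent achievement score;
    # a computed improvement of 12 percentage points is capped at 10.
    print(achievement_percent_score(45, 60))        # 75.0
    print(capped_improvement_percent_score(12.0))   # 10.0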
The following is a summary of the public comments received on the
proposal for scoring improvement at the quality performance category
level and our responses:
Comment: Many commenters supported improvement scoring at the
category level for the quality performance category score and adding
the improvement percent score to the quality performance category
percent score. Many commenters noted it provides a bonus incentive,
rather than a penalty, for MIPS eligible clinicians. Many commenters
supported flexibility in measure choice because clinicians could choose
the most clinically relevant measures; the approach is less complicated
than others; clinicians are adjusting to the MIPS program; and it would
encourage clinicians to adopt more difficult measures. A few commenters
believed the approach would incentivize clinician progress toward
achieving quality outcomes and care efficiency because it would
encourage clinicians to move away from reporting topped out measures
and begin reporting new, more meaningful quality measures. One
commenter noted it was administratively burdensome to report new
measures; therefore, clinicians would only change measures if they are
relevant and the category scoring allows them the flexibility to change
their measures. One commenter supported the approach because it
recognizes and encourages higher standards of quality among all
clinicians and increases opportunities for providers to succeed despite
challenges associated with serving patients with high social risk
factors. One commenter believed that the proposal would encourage
smaller practices to participate in the MIPS program.
Response: We thank the commenters for their support, and we are
finalizing these policies as proposed.
Comment: A few commenters supported category level improvement
scoring, but suggested that CMS should monitor for the frequency of
clinicians switching measures, which could potentially warrant
consideration of alternative approaches, and whether category level
improvement scoring was needed in the future.
Response: We intend to monitor the MIPS scoring methodology,
including frequency of clinicians switching measures and the need for
category level improvement scoring, as the program transitions. We will
address any changes of improvement scoring through future rulemaking.
Comment: A few commenters supported the proposed capping of
improvement points at no more than 10 percentage points as proposed at
Sec. 414.1380(b)(1)(xvi)(B). One commenter supported the proposed 10
percentage points available in the quality performance category because
it would encourage clinician participation and offset negative payment
adjustments for clinicians acclimating to pay for performance programs.
Response: We thank the commenters for their support.
Comment: One commenter requested an increase in the number of bonus
points available for improvement scoring.
Response: We believe that capping the improvement percent score at
10 percentage points is consistent with the bonuses available in the
quality performance category and appropriate for rewarding year-to-year
improvement in the quality performance category.
Comment: One commenter did not support the proposed bonus of 10
percentage points because it is excessive and would penalize
consistently high-performing practices.
Response: We disagree with the characterization of the proposed
bonus points for improvement as excessive. Ten percentage points is
consistent with other bonuses in the quality performance category and
therefore simpler to describe and understand. Additionally, we believe
it is a sufficient incentive for both high and low performers,
appropriately provides a larger incentive for low performers to
improve, and will have the greatest impact on improving quality for
beneficiaries.
Comment: Many commenters did not support measuring quality
improvement at the performance category level because it could lead to
inadvertently rewarding eligible clinicians who have not improved, but
rather selected different measures, and instead recommended the
adoption of a measure level approach, which they viewed as more
precise. Commenters
noted that this would align with the cost performance category. A few
commenters recommended that, should CMS implement a measure level
approach, improvement could only be assessed on any measure that meets
the data completeness threshold and is reported year over year. One
commenter suggested that CMS restrict improvement to MIPS eligible
clinicians and groups that report on at least half of the measures
reported in the prior MIPS performance period during the current MIPS
performance period. One commenter suggested restricting improvement to
the first few years that a measure is used because this would
incentivize lower performers to invest time and resources to improve.
Response: We appreciate the commenters' concerns with measuring
improvement at the performance category level and support for an
[[Page 53739]]
alternative approach to measure improvement at the measure level for
the quality performance category. We believe that, particularly in the
early years of MIPS implementation, providing clinicians the
flexibility to choose the measures for the quality performance
category, rather than a more restrictive approach that would limit the
choice of measures, will enable them to select measures that are most
appropriate for their practice from 1 year to the next, will encourage
participation in the MIPS program, and will incentivize clinicians to
invest in improving their quality of care delivery. We believe that
restricting improvement scoring to measures which meet data
completeness and MIPS eligible clinicians who reported one or more
measures over multiple years would unduly limit the availability of
this incentive, particularly for those who are transitioning away from
topped out measures. As described in section II.C.6.d. of this final
rule with comment period, the cost performance category does not allow
for the selection of measures, so we believe it is appropriate for the
quality performance category to have a different methodology.
We do not believe our improvement scoring methodology will drive
clinicians to select different measures in order to earn a higher
improvement score, nor do we anticipate that clinicians will make
investments to change and excel at quality measures solely to earn a
higher improvement score. As noted in section II.C.7.a.(1)(b)(i) of
this final rule with comment period, we intend to evaluate the
implementation of improvement scoring for the quality and cost
performance categories to determine how the policies we establish in
this final rule with comment period are affecting MIPS eligible
clinicians.
Comment: Many commenters recommended a delay in implementing
improvement scoring because they believed that CMS should focus on
quality reporting and assessment; seek feedback and experience; and
develop more robust, valid, and accurate sets of measures. They were
also concerned with the consistency of quality measure benchmarks.
Response: We appreciate the commenters' concerns related to the
validity and accuracy of current measure sets and the consistency of
quality measure benchmarks. We also recognize commenters' recommendation
of a delay in the implementation of improvement scoring to allow for a
focus on quality reporting and assessment. Section 1848(q)(5)(D)(i) of
the Act requires improvement to be taken into account for the quality
performance category and the cost performance category if data
sufficient to measure improvement is available. We do not believe the
concerns noted by the commenters make the data insufficient or
unavailable. Please see section II.C.7.a.(2)(i)(ii) of this final rule
with comment period for a discussion of why we believe that data is
sufficient to measure improvement in the quality performance category
and a delay is not warranted.
Comment: Several commenters believed the proposed approach adds
complexity because it would be difficult to communicate to clinicians,
is not straightforward and transparent, and clinicians would not
understand how they are being rewarded for improvement. A few
commenters believed that the proposed approach is too new and too
complex to ensure that quality improvement is being measured validly
and reliably.
Response: While improvement is an additional factor to be
considered in the MIPS quality performance category score calculation,
we are required to take improvement into account for the quality
performance category and the cost performance category if data
sufficient to measure improvement is available. Given the flexibility
in quality measurement, we wanted to have improvement scoring be
broadly available to MIPS eligible clinicians. We intend to develop
additional educational materials to explain the improvement scoring. We
believe that encouraging continued improvement in clinician quality
performance will raise the quality of care delivered and benefit the
health outcomes of beneficiaries.
Comment: Several commenters expressed concerns about the impact of
topped out measures because they potentially confound the accurate
measurement of improvement. These commenters believed that clinicians
should not be penalized for changes in quality reporting that are out
of their control such as elimination of measures or the learning curve
for reporting new measures; believed that specialists could have
difficulty demonstrating improvement; and recommended that improvement
focus on outcome measures.
Response: We understand the commenters' concerns about the impact
of topped out measures and measure availability and appreciate the
recommendation to focus on outcome measures. We do not believe that
topped out measures and the identification and removal of topped out
measures will significantly impact the accurate measurement of
improvement because there will be sufficient measure choice and
flexibility for clinicians to choose measures that represent areas for
performance category level improvement. In addition, we believe that
measuring improvement at the performance category level will encourage
movement away from topped out measures toward new high priority
measures that may have additional measure bonus points. We also believe
that improvement should be made broadly available to clinicians to
encourage MIPS participation, and therefore, do not support restricting
improvement scoring to clinicians that submit a specific number of
outcomes measures or only outcome measures.
Comment: Several commenters suggested the adoption of alternative
approaches to implementing improvement at the performance category
level, such as the Hospital VBP Program, because with the proposed
category level scoring MIPS eligible clinicians could achieve a higher
performance by changing the measures they choose, whereas with the
alternate approaches, the stability in the measures reported could
foster greater improvement in those areas, and this approach would
provide a clearer picture of the change in the quality of care over
time. A few commenters suggested that CMS develop an alternative
approach that does not put already-high-performing physicians or groups
at a disadvantage compared to lower-performing practices and that
builds on the existing benchmark structure. One commenter recommended
that CMS test each of the proposed methodologies in clinician practices
before introducing them in MIPS. One commenter recommended that CMS
seek a method that is straightforward and transparent.
Response: As we described in the CY 2018 Quality Payment Program
proposed rule (82 FR 30096 through 30098), we do not believe the
Hospital VBP Program approach would be appropriate for MIPS because it
does not reward points for achievement in the same manner and would
require significant changes to the scoring methodology for the quality
performance category. We continue to believe that flexibility for
clinicians to select meaningful measures is appropriate for MIPS,
especially for the quality performance category. The Hospital VBP
Program methodology, which relies on consistent measures from year to
year in order to track improvement, would limit our ability to measure
improvement in MIPS. As noted above, we do not anticipate that
clinicians would change measures
[[Page 53740]]
solely for the purposes of improvement scoring. We do not expect that
there will be a disadvantage for high performing clinicians, as they
would already have a high performance score and remain potentially
eligible for improvement. We believe that improvement scoring will
provide relatively larger incentives for lower performers who raise
their performance level at a greater rate, but we anticipate this will
benefit quality for beneficiaries. We believe the proposed approach is
transparent and provides clarity on a clinician's performance from 1 year
to the next.
Comment: A few commenters requested clarity regarding how the
improvement score would be calculated for clinicians who are changing
CEHRT systems or adopting a CEHRT system.
Response: Improvement scoring would not be directly impacted by
clinicians changing or adopting a CEHRT system because it would be
calculated at the performance category level.
Comment: One commenter believed that scoring for improvement was
unnecessary because a clinician's improvement is reflected in their
final score, which can be compared to the previous year's final score
with a higher score, potentially resulting in the clinician receiving a
higher payment adjustment.
Response: We are accounting for improvement for the quality
performance category as required under section 1848(q)(5)(D) of the
Act. The commenter is correct that clinicians who qualify for
improvement scoring would have a higher quality performance category
achievement percent score; however, we believe it is appropriate to
provide an improvement adjustment on top of that score to create
incentives for continuous improvement.
Comment: One commenter believed that the nature of different
organizations' practices, including region, payer mix, specialty, and
mode of practice, may well require adjusted treatment of reported
scores to ensure that a valid measure of improvement is being
calculated.
Response: The performance category level approach is based on
improvement in a MIPS eligible clinician's performance from the current
MIPS performance period compared to a comparable score from the
previous MIPS performance period. Because we are making the comparison
to the MIPS eligible clinician and not to other organizations or
practices, we do not see the need to adjust improvement scores in
consideration of these factors.
Comment: One commenter believed the proposed approach would be
difficult to communicate to clinicians and would obscure a clinician's
overall progress. One commenter believed that the lag time between
performance and feedback does not allow adequate time to implement
actionable changes that drive improvement.
Response: We believe that improvement scoring, while adding a layer
of complexity to MIPS scoring overall, is an easy-to-understand
approach that will provide important insight into clinician performance
from 1 year to the next. As discussed in section II.C.9.a.(1) of this
final rule with comment period, we continue to work on ways to improve
performance feedback.
Comment: One commenter recommended that improvement should be
defined more broadly to encourage participants to report new aspects of
the MIPS program, participate in pilots, use registries, or other tools
that CMS seeks to promote. One commenter recommended a phased-in
approach, such as with a pilot test with a limited number of
clinicians.
Response: We do not believe that reporting or participation by
itself meets a requirement for improvement for purposes of the quality
performance category. In addition, our MIPS quality performance
category scoring policies already include bonuses to promote the use of
high priority measures and end-to-end electronic reporting. We believe
that a phased-in approach or pilot study would limit the availability
of improvement scoring, especially to clinicians who may best benefit
from improvement scoring.
Final Action: As a result of the public comments, we are finalizing
as proposed at Sec. 414.1380(b)(1)(xvi)(E) to define an improvement
percent score to mean the score that represents improvement for the
purposes of calculating the quality performance category percent score.
We are also finalizing as proposed at Sec. 414.1380(b)(1)(xvi)(C)
that an improvement percent score would be assessed at the quality
performance category level and included in the calculation of the
quality performance category percent score. We are also finalizing as
proposed at Sec. 414.1380(b)(1)(xvi)(B) that the improvement percent
score may not total more than 10 percentage points.
(ii) Data Sufficiency Standard to Measure Improvement for Quality
Performance Category
Section 1848(q)(5)(D)(i) of the Act stipulates that beginning with
the second year to which the MIPS applies, if data sufficient to
measure improvement is available, then we shall measure improvement for
the quality performance category. Measuring improvement requires a
direct comparison of data from one Quality Payment Program year to
another. Starting with the 2020 MIPS payment year, we proposed that a
MIPS eligible clinician's data would be sufficient to score improvement
in the quality performance category if the MIPS eligible clinician had
a comparable quality performance category achievement percent score for
the MIPS performance period immediately prior to the current MIPS
performance period (82 FR 30114); we explain below how we will
identify ``comparable'' quality performance category achievement
percent scores. We noted that we believe that this
approach would allow improvement to be broadly available to MIPS
eligible clinicians and encourage continued participation in the MIPS
program. Moreover, this approach would encourage MIPS eligible
clinicians to focus on efforts to improve the quality of care
delivered. We noted that, by measuring improvement based only on the
overall quality performance category achievement percent score, some
MIPS eligible clinicians and groups may generate an improvement score
simply by switching to measures on which they perform more highly,
rather than actually improving at the same measures. We will monitor
how frequently improvement is due to actual improvement versus
potentially perceived improvement by switching measures and will
address through future rulemaking, as needed. We also solicited comment
on whether we should require some level of year to year consistency
when scoring improvement.
We proposed that ``comparability'' of quality performance category
achievement percent scores would be established by looking first at the
submitter of the data. As discussed in more detail in the proposed rule
(82 FR 30113 through 30114), we are comparing results at the
category, rather than the performance measure level because we believe
that the performance category score from 1 year is comparable to the
performance category score from the prior performance period, even if
the measures in the performance category have changed from year to
year.
We proposed to compare results from an identifier when we receive
submissions with that same identifier (either TIN/NPI for individual,
or TIN for group, APM entity, or virtual group identifier) for two
consecutive performance periods. However, if we do
[[Page 53741]]
not have the same identifier for 2 consecutive performance periods, we
proposed a methodology to create a comparable performance category
score that can be used for improvement measurement. Just as we did not
want to remove the opportunity to earn an improvement score from those
who elect new measures between performance periods for the quality
performance category, we also did not want to restrict improvement for
those MIPS eligible clinicians who elect to participate in MIPS using a
different identifier.
There are times when submissions from a particular individual
clinician or group of clinicians use different identifiers between 2
years. For example, a group of 20 MIPS eligible clinicians could choose
to submit as a group (using their TIN identifier) for the current
performance period. If the group also submitted as a group for the
previous year's performance period, we would simply compare the group
scores associated with the previous performance period to the current
performance period (following the methodology explained in the proposed
rule (82 FR 30116 through 30117)). However, if the group members had
previously elected to submit to MIPS as individual clinicians, we would
not have a group score at the TIN level from the previous performance
period to which to compare the current performance period.
In circumstances where we do not have the same identifier for 2
consecutive performance periods, we proposed to identify a comparable
score for individual submissions or calculate a comparable score for
group, virtual group, and APM entity submissions. For individual
submissions, if we do not have a quality performance category
achievement percent score for the same individual identifier in the
immediately prior period, then we proposed to apply the hierarchy logic
that is described in section II.C.8.a.(2) of the proposed rule (82 FR
30146 through 30147) to identify the quality performance category
achievement score associated with the final score that would be applied
to the TIN/NPI for payment purposes. For example, if there is no
historical score for the TIN/NPI, but there is a TIN score (because in
the previous period the TIN submitted as a group), then we would use
the quality performance category achievement percent score associated
with the TIN's prior performance. If the NPI had changed TINs and there
was no historical score for the same TIN/NPI, then we would take the
highest prior score associated with the NPI.
When we do not have a comparable TIN group, virtual group, or APM
Entity score, we proposed to calculate a score based on the individual
TIN/NPIs in the practice for the current performance period. For
example, in a group of 20 clinicians that previously participated in
MIPS as individuals, but now want to participate as a group, we would
not have a comparable TIN score to use for scoring improvement. We
believe, however, that it is still important to provide MIPS eligible
clinicians the improvement points they have earned. Similarly, in cases
where a group of clinicians previously participated in MIPS as
individuals, but now participates as a new TIN, or a new virtual group,
or a new APM Entity submitting data in the performance period, we would
not have a comparable TIN, virtual group, or APM Entity score to use
for scoring improvement. Therefore, we proposed to calculate a score by
taking the average of the individual quality performance category
achievement scores for the MIPS eligible clinicians that were in the
group for the current performance period. If we have more than one
quality performance category achievement percent score for the same
individual identifier in the immediately prior period, then we proposed
to apply the hierarchy logic that is described in section II.C.8.a.(2)
of the proposed rule (82 FR 30146 through 30147) to identify the
quality performance category score associated with the final score that
would be applied to the TIN/NPI for payment purposes. We would exclude
any TIN/NPIs that did not have a final score because they were not
eligible for MIPS. We would include quality performance category
achievement percent scores of zero in the average.
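For illustration only, the following sketch (hypothetical inputs; the
actual hierarchy logic is described in section II.C.8.a.(2) of the
proposed rule) shows how a comparable prior-period score would be
chosen under the proposal: for an individual, the prior score for the
same TIN/NPI if one exists, otherwise the prior score associated with
the final score used for payment (for example, the highest prior score
associated with the NPI); for a group, virtual group, or APM entity
with no prior score under that identifier, the average of the prior
individual achievement percent scores of the clinicians in the current
group, excluding clinicians with no final score and including scores of
zero.

    # Comparable prior-period score for an individual submission (simplified
    # stand-in for the hierarchy logic in section II.C.8.a.(2)).
    def comparable_individual_score(prior_tin_npi_scores, prior_npi_scores):
        if prior_tin_npi_scores:
            return max(prior_tin_npi_scores)
        if prior_npi_scores:
            return max(prior_npi_scores)     # e.g., the NPI changed TINs
        return None                          # no comparable score available

    # Comparable prior-period score for a group, virtual group, or APM entity
    # submission with no prior score under the same identifier.
    def comparable_group_score(prior_individual_scores):
        usable = [s for s in prior_individual_scores if s is not None]
        return sum(usable) / len(usable) if usable else None

    # Example: individuals with prior achievement percent scores of 40, 0, and
    # 80 (plus one clinician with no prior final score) yield (40+0+80)/3 = 40.
    print(comparable_group_score([40.0, 0.0, 80.0, None]))   # 40.0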
There are instances where we would not be able to measure
improvement due to lack of sufficient data. For example, if the MIPS
eligible clinicians did not participate in MIPS in the previous
performance period because they were not eligible for MIPS, we could
not calculate improvement because we would not have a previous quality
performance category achievement percent score.
Table 26 in the proposed rule (82 FR 30115) summarized the
different cases when a group or individual would be eligible for
improvement scoring under the proposal which we have replicated for
convenience in Table 23.
Table 23--Proposed Eligibility for Improvement Scoring Examples

Scenario: No change in identifier.
  Current MIPS performance period identifier: Individual (TIN A/NPI 1).
  Prior MIPS performance period identifier (with score greater than zero): Individual (TIN A/NPI 1).
  Eligible for improvement scoring: Yes.
  Data comparability: Current individual score is compared to individual score from prior performance period.

Scenario: No change in identifier.
  Current MIPS performance period identifier: Group (TIN A).
  Prior MIPS performance period identifier (with score greater than zero): Group (TIN A).
  Eligible for improvement scoring: Yes.
  Data comparability: Current group score is compared to group score from prior performance period.

Scenario: Individual is with same group, but selects to submit as an individual whereas previously the group submitted as a group.
  Current MIPS performance period identifier: Individual (TIN A/NPI 1).
  Prior MIPS performance period identifier (with score greater than zero): Group (TIN A).
  Eligible for improvement scoring: Yes.
  Data comparability: Current individual score is compared to the group score associated with the TIN/NPI from the prior performance period.

Scenario: Individual changes practices, but submitted to MIPS previously as an individual.
  Current MIPS performance period identifier: Individual (TIN B/NPI).
  Prior MIPS performance period identifier (with score greater than zero): Individual (TIN A/NPI).
  Eligible for improvement scoring: Yes.
  Data comparability: Current individual score is compared to the individual score from the prior performance period.

Scenario: Individual changes practices and has multiple scores in prior performance period.
  Current MIPS performance period identifier: Individual (TIN C/NPI).
  Prior MIPS performance period identifier (with score greater than zero): Group (TIN A/NPI); Individual (TIN B/NPI).
  Eligible for improvement scoring: Yes.
  Data comparability: Current individual score is compared to highest score from the prior performance period.

[[Page 53742]]

Scenario: Group does not have a previous group score from prior performance period.
  Current MIPS performance period identifier: Group (TIN A).
  Prior MIPS performance period identifier (with score greater than zero): Individual scores (TIN A/NPI 1, TIN A/NPI 2, TIN A/NPI 3, etc.).
  Eligible for improvement scoring: Yes.
  Data comparability: The current group score is compared to the average of the scores from the prior performance period of individuals who comprise the current group.

Scenario: Virtual group does not have previous group score from prior performance period.
  Current MIPS performance period identifier: Virtual Group (Virtual Group Identifier A) (assume the virtual group has 2 TINs with 2 clinicians).
  Prior MIPS performance period identifier (with score greater than zero): Individuals (TIN A/NPI 1, TIN A/NPI 2, TIN B/NPI 1, TIN B/NPI 2).
  Eligible for improvement scoring: Yes.
  Data comparability: The current group score is compared to the average of the scores from the prior performance period of individuals who comprise the current group.

Scenario: Individual has score from prior performance period as part of an APM Entity.
  Current MIPS performance period identifier: Individual (TIN A/NPI 1).
  Prior MIPS performance period identifier (with score greater than zero): APM Entity (APM Entity Identifier).
  Eligible for improvement scoring: Yes.
  Data comparability: Current individual score is compared to the score of the APM entity from the prior performance period.

Scenario: Individual does not have a quality performance category achievement score for the prior performance period.
  Current MIPS performance period identifier: Individual (TIN A/NPI 1).
  Prior MIPS performance period identifier (with score greater than zero): Individual was not eligible for MIPS and did not voluntarily submit any quality measures to MIPS.
  Eligible for improvement scoring: No.
  Data comparability: The individual quality performance category score is missing for the prior performance period and is not eligible for improvement scoring.
We proposed at Sec. 414.1380(b)(1)(xvi)(A) to state that
improvement scoring is available when the data sufficiency standard is
met, which means when data are available and a MIPS eligible clinician
or group has a quality performance category achievement percent score
for the previous performance period. We also proposed at Sec.
414.1380(b)(1)(xvi)(A)(1) that data must be comparable to meet the
requirement of data sufficiency, which means that the quality
performance category achievement percent score is available for the
current performance period and the previous performance period and,
therefore, quality performance category achievement percent scores can
be compared. We also proposed at Sec. 414.1380(b)(1)(xvi)(A)(2) that
quality performance category achievement percent scores are comparable
when submissions are received from the same identifier for two
consecutive performance periods. We also proposed an exception at Sec.
414.1380(b)(1)(xvi)(A)(3) that if the identifier is not the same for 2
consecutive performance periods, then for individual submissions, the
comparable quality performance category achievement percent score is
the quality performance category achievement percent score associated
with the final score from the prior performance period that will be
used for payment. For group, virtual group, and APM entity submissions,
the comparable quality performance category achievement percent score
is the average of the quality performance category achievement percent
score associated with the final score from the prior performance period
that will be used for payment for each of the individuals in the group.
As noted above, the proposals were designed to offer improvement
scoring to all MIPS eligible clinicians with sufficient data in the
prior MIPS performance period. We invited public comments on our
proposals as they relate to data sufficiency for improvement scoring.
We also sought comment on an alternative to the proposal: whether
we should restrict improvement to those who submit quality performance
data using the same identifier for two consecutive MIPS performance
periods. We believed this option would be simpler to apply, communicate
and understand than our proposal is, but this alternative could have
the unintended consequence of not allowing improvement scoring for
certain MIPS eligible clinicians, groups, virtual groups and APM
entities.
The following is a summary of the public comments received on our
proposals related to data sufficiency for improvement scoring and our
responses:
Comment: Many commenters believed there was not sufficient data to
score improvement because MIPS data has not yet been collected, the
data in pick-your-pace approach for the 2017 MIPS performance period
may not be representative, and some practices may not understand that
they must fully participate in the quality category in order to receive
an improvement score.
Response: We disagree that there is not enough data collected to
meet the data sufficiency standard. As required by section
1848(q)(5)(D) of the Act, improvement must be incorporated into the
MIPS scoring methodology for the 2018 MIPS performance period if data
sufficient to measure improvement is available. By the end of the 2018
MIPS performance period, we will have collected data for the 2017 MIPS
performance period. While this data may have limitations due to the
``pick your pace'' transition, clinicians will have a quality
performance category achievement percent score which is sufficient to
measure quality. As discussed in section II.C.7.a.(2)(i)(ii) of this
final rule with comment period, we do not expect the availability of
sufficient data for year 2 to be similarly limited.
Comment: Several commenters supported the proposal for a comparable
identifier because this approach would not penalize clinicians changing
jobs, changes in group composition, or new elections to report as a
group. One commenter believed this approach would support the
establishment of virtual groups. One commenter believed limitations to
the same identifier would restrict those eligible for improvement.
Response: We thank the commenters for their support. We agree that
this approach provides flexibility for clinicians to allow for changes
in their practice that could include establishing and reporting as part
of a virtual group.
Comment: A few commenters recommended restricting improvement to
those who submit quality performance data using the same identifier for
two consecutive MIPS performance periods as it is a simpler
[[Page 53743]]
approach and easier to understand. One commenter supported restricting
improvement to those MIPS eligible clinicians who use the same
identifier and same mechanism of reporting for two consecutive
performance periods. One commenter requested clarification on the
impact on group practices when the entity was participating in an APM
entity in the prior performance period. One commenter believed that
tracking different identifiers would require physicians to factor in
additional considerations when they are just trying to learn the
program, such as the requirement that MIPS eligible clinicians must
fully participate in the quality performance category in order to
receive an improvement score. One commenter expressed concerns that the
requirement for data from two consecutive performance periods for
specific clinicians may reward stable and high performing practices and
clinicians while struggling practices with high turnover rates may fall
further behind. One commenter believed that tracking clinician scores
from previous years would increase the overhead costs for Qualified
Registries and QCDRs.
Response: Within MIPS, we must balance complexity with flexibility.
We believe improvement scoring should be available to the broadest
number of eligible clinicians to incentivize increases in the quality
performance category scores and have the greatest impact on increasing
quality of care. Thus, we have provided for the use of a comparable
identifier when we do not have the same identifier from 1 year to
another. Table 23 summarizes different cases when a group or individual
would be eligible for improvement scoring under our proposal, including
when we do not have identical identifiers. For a MIPS eligible
clinician reporting as an individual in the current performance period
who reported as part of an APM entity in the previous performance
period, we would use the score of the APM entity as a point of
comparison with the MIPS eligible clinician's score in the current
performance period to determine eligibility for improvement.
Improvement scoring can only increase a quality performance category
score, not decrease it. The burden to track and calculate this score
will not impact external stakeholders and should not impact clinician
decisions on how to submit data, as our systems will help do the
tracking of clinician scores from previous years.
Final Action: As a result of the public comments, we are finalizing
as proposed at Sec. 414.1380(b)(1)(xvi)(A) that improvement scoring is
available when the data sufficiency standard is met, which means when
data are available and a MIPS eligible clinician has a quality
performance category achievement percent score for the previous
performance period and the current performance period. We are also
finalizing as proposed at Sec. 414.1380(b)(1)(xvi)(A)(1) that data
must be comparable to meet the requirement of data sufficiency, which
means a quality performance category achievement percent score is
available for the current and previous performance periods and quality
performance category achievement percent scores can be compared. We are
also finalizing as proposed at Sec. 414.1380(b)(1)(xvi)(A)(2) that the
quality performance category achievement percent scores are comparable
when submissions are received from the same identifier for two
consecutive performance periods. We are also finalizing as proposed at
Sec. 414.1380(b)(1)(xvi)(A)(3) that if the identifier is not the same
for 2 consecutive performance periods, then for individual submissions,
the comparable quality performance category achievement percent score
is the highest available quality performance category achievement
percent score associated with the final score from the prior
performance period that will be used for payment for the individual.
For group, virtual group, and APM Entity submissions, the comparable
quality performance category achievement percent score is the average
of the quality performance category achievement percent score
associated with the final score from the prior performance period that
will be used for payment for each of the individuals in the group.
(iii) Additional Requirement for Full Participation To Measure
Improvement for Quality Performance Category
To receive a quality performance category improvement percent score
greater than zero, we also proposed that MIPS eligible clinicians must
fully participate, which we proposed in Sec. 414.1380(b)(1)(xvi)(F) to
mean compliance with Sec. 414.1330 and Sec. 414.1340, in the current
performance year (82 FR 30116). Compliance with those referenced
regulations entails the submission of all required measures, including
meeting data completeness, for the quality performance category for the
current performance period. For example, for MIPS eligible clinicians
submitting via QCDR, full participation would generally mean submitting
6 measures including 1 outcome measure if an outcome measure is
available or 1 high priority measure if an outcome measure is not
available, and meeting the 60 percent data completeness criteria for
each of the 6 measures.
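For illustration only, the following sketch (hypothetical fields; the
governing requirements are those of Sec. 414.1330 and Sec. 414.1340)
checks full participation for the general case described above: 6
measures, including 1 outcome measure if one is available or 1 high
priority measure otherwise, each meeting the 60 percent data
completeness criterion.

    # Simplified full-participation check for a QCDR submission.
    def fully_participates(measures, outcome_measure_available=True):
        if len(measures) < 6:
            return False
        if any(m["data_completeness"] < 0.60 for m in measures):
            return False
        if outcome_measure_available:
            return any(m["is_outcome"] for m in measures)
        return any(m["is_high_priority"] for m in measures)

    example_measures = [{"data_completeness": 0.75, "is_outcome": i == 0,
                         "is_high_priority": i == 0} for i in range(6)]
    print(fully_participates(example_measures))   # True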
We believe that improvement is most meaningful and valid when we
have a full set of quality measures. A comparison of data resulting
from full participation of a MIPS eligible clinician from 1 year to
another enables a more accurate assessment of improvement because the
performance being compared is based on the applicable and available
measures for the performance periods and not from changes in
participation. While we did not require full participation for both
performance periods, requiring full participation for the current
performance period means that any future improvement scores for a
clinician or group would be derived solely from changes in performance
and not because the clinician or group submitted more measures. We
proposed at Sec. 414.1380(b)(1)(xvi)(C)(5) that the quality
improvement percent score is zero if the clinician did not fully
participate in the quality performance category for the current
performance period.
Because we want to award improvement for net increases in
performance and not just improved participation in MIPS, we want to
measure improvement above a floor for the 2018 MIPS performance period,
to account for our transition year policies. We considered that MIPS
eligible clinicians who chose the ``test'' option of the ``pick your
pace'' approach for the transition year may not have submitted all the
required measures and, as a result, may have a relatively low quality
performance category achievement score for the 2017 MIPS performance
period. Due to the transition year policy to award at least 3 measure
achievement points for any submitted measure via claims, EHR, QCDR,
qualified registry, and CMS-approved survey vendor for CAHPS for MIPS,
and the 3-point floor for the all-cause readmission measure (if the
measure applies), a MIPS eligible clinician that submitted some data
via these mechanisms on the required number of measures would
automatically have a quality performance category achievement score of
at least 30 percent because they would receive at least 3 of 10
possible measure achievement points for each required measure. For
example, if a solo practitioner submitted 6 measures and received 3
points for each measure, then the solo practitioner would have 18
measure achievement points out of a possible 60 total possible measure
achievement points (3 measure achievement points x 6 measures). The
[[Page 53744]]
quality performance category achievement percent score is 18/60 which
equals 30 percent. For groups with 16 or more clinicians that submitted
6 measures and received 3 measure achievement points for each submitted
measure as well as the all-cause hospital readmission measure, the
group would have 21 measure achievement points out of 70 total possible
measure achievement points or a quality performance category
achievement percent score of 21/70 which equals 30 percent (3 measure
achievement points x 7 measures). For the CMS Web Interface submission
option, MIPS eligible clinicians that fully participate by submitting
and meeting data completeness for all measures, would also be able to
achieve a quality performance category achievement percent score of at
least 30 percent, as each scored measure would receive 3 measure
achievement points out of 10 possible measure achievement points.
Therefore, we proposed at Sec. 414.1380(b)(1)(xvi)(C)(4) that if a
MIPS eligible clinician has a previous year quality performance
category score less than or equal to 30 percent, we would compare 2018
performance to an assumed 2017 quality performance category achievement
percent score of 30 percent. In effect, for the MIPS 2018 performance
period, improvement would be measured only if the clinician's 2018
quality performance category achievement percent score for the quality
performance category exceeds 30 percent. We believe this approach
appropriately recognizes the participation of MIPS eligible clinicians
who participated in the transition year and accounts for MIPS eligible
clinicians who participated minimally and may otherwise be awarded for
an increase in participation rather than an increase in achievement
performance.
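For illustration only, the following sketch reproduces the arithmetic
above and the proposed baseline at Sec. 414.1380(b)(1)(xvi)(C)(4):
with the 3-point floor, minimal full reporting yields an achievement
percent score of 30 percent (18/60 for a solo practitioner, 21/70 for a
group also scored on the readmission measure), so 2018 performance is
compared against the higher of the 2017 achievement percent score and
30 percent.

    def achievement_percent_score(points, possible_points):
        return 100.0 * points / possible_points

    # Transition-year floors: 3 of 10 possible points per scored measure.
    solo_floor = achievement_percent_score(3 * 6, 10 * 6)     # 18/60 = 30.0
    group_floor = achievement_percent_score(3 * 7, 10 * 7)    # 21/70 = 30.0

    # Proposed baseline for measuring 2018 improvement.
    def improvement_baseline(prior_year_percent_score):
        return max(prior_year_percent_score, 30.0)

    # A clinician whose 2017 achievement percent score was 12 percent is
    # compared against an assumed 30 percent; improvement is measured only if
    # the 2018 achievement percent score exceeds that baseline.
    print(solo_floor, group_floor, improvement_baseline(12.0))   # 30.0 30.0 30.0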
We invited public comment on these proposals.
The following is a summary of the public comments received on our
proposal for full participation related to improvement scoring and our
responses:
Comment: Several commenters supported the requirement for full
participation in the performance period.
Response: We appreciate your support of our proposal.
Comment: One commenter requested the expansion of the eligibility
criteria to include those clinicians that were unable to report
complete data in the previous year because moving from incomplete data
to complete data in 1 year is a significant achievement and should be
recognized by CMS. One commenter believed the requirement of full
participation will add complexity when clinicians are trying to learn
the program and may not understand that they must fully participate in
the quality performance category in order to receive an improvement
score. One commenter recommended a separate improvement calculation or
bonus for clinicians who do not meet full participation for the early
years of MIPS performance because incentivizing incremental increases
in performance in the early years will be an important way to encourage
clinicians and groups to participate in MIPS without adding too much
administrative burden in a single year.
Response: We understand that adding in the full participation
requirement adds a layer of complexity; however, MIPS is required to
measure performance, not participation. We note that full participation
would generally mean submitting 6 measures, including 1 outcome measure
if available, or 1 high priority measure if an outcome measure is not
available, and meeting the data completeness criteria for each of the 6
measures; for eligible clinicians who do not have 6 measures available
or applicable, full participation is met by submitting the measures
that are available and applicable and the data completeness requirement
for those submitted measures. We do not believe increased participation
is sufficient to warrant receiving an improvement score, and we
believe that we need the full participation requirement to ensure that
we are capturing data that can be used to measure performance.
Comment: One commenter suggested that the regulatory language be
changed to: Sec. 414.1380(b)(1)(xvi)(A) Improvement scoring is
available when the data sufficiency standard is met, which means when
data are available and a MIPS eligible clinician fully participated in
the previous performance period. This references the definition of
``fully participate'' given in Sec. 414.1380(b)(1)(xvi)(F): For the
purpose of improvement scoring methodology, the term ``fully
participate'' means the MIPS eligible clinician met all requirements in
Sec. Sec. 414.1330 and 414.1340.
Response: We disagree that the clinicians should need to have fully
participated in the prior period in order to have sufficient data to
measure improvement. We believe our proposal to require full
participation in the current performance period and not necessarily the
prior performance period creates an additional incentive to fully
participate in the current performance period in order to be eligible
for improvement scoring and have data available for future performance
measurement. In addition, our proposal to measure improvement above 30
percent helps to ensure that we are measuring true changes in
performance and not just changes in the level of participation.
Comment: A few commenters supported the implementation of
additional requirements for improvement scoring, including the
requirement of participation during the transition year at a level to
achieve a quality category achievement percent score of at least 30
percent. A few commenters suggested that improvement bonuses only be
available to those who fully participate in both the current and the
previous year to close this loophole. One commenter suggested that 30
percent should be the minimum improvement score percentage floor
because it would encourage continued participation by clinicians,
including specialists.
Response: As noted earlier, we disagree that the clinicians should
need to have fully participated in the prior period in order to be
measured for improvement. We think our proposal to require full
participation in the current performance period and not necessarily the
prior performance period creates an additional incentive to fully
participate in the current performance period in order to be eligible
for improvement scoring and have data available for future performance
measurement. We also believe improvement scoring should be available to
the broadest number of eligible clinicians to incentivize increases in
the quality performance category scores and have the greatest impact on
increasing quality of care. In addition, our proposal to measure
improvement above 30 percent helps to ensure that we are measuring true
changes in performance and not just changes in participation.
Final Action: As a result of the public comments, we are finalizing
as proposed that MIPS eligible clinicians must fully participate, which
we proposed in Sec. 414.1380(b)(1)(xvi)(F) to mean compliance with
Sec. 414.1330 and Sec. 414.1340, in the current performance year. We
are also finalizing as proposed at Sec. 414.1380(b)(1)(xvi)(C)(5) that
the quality improvement percent score is zero if the clinician did not
fully participate in the quality performance category for the current
performance period. We are also finalizing as proposed at Sec.
414.1380(b)(1)(xvi)(C)(4) that if a MIPS eligible clinician has a
previous year quality performance category score less than or equal to
30 percent, we would compare 2018
[[Page 53745]]
performance to an assumed 2017 quality performance category achievement
percent score of 30 percent.
(iv) Measuring Improvement Based on Changes in Achievement
To calculate improvement with a focus on quality performance, we
proposed to focus on improvement based on achievement performance and
would not consider measure bonus points in our improvement algorithm
(82 FR 30116 through 30117). Bonus points may be awarded for reasons
not directly related to performance such as the use of end-to-end
electronic reporting. We believe that improvement points should be
awarded based on improvement related to achievement. Accordingly, we
proposed to use an individual MIPS eligible clinician's or group's
total measure achievement points from the prior MIPS performance period
without the bonus points the individual MIPS eligible clinician or
group may have received, to calculate improvement. Therefore, to
measure improvement at the quality performance category level, we will
use the quality performance category achievement percent score
excluding measure bonus points (and any improvement score) for the
applicable years. We proposed at Sec. 414.1380(b)(1)(xvi)(D) to call
this score, which is based on achievement only, the ``quality
performance category achievement percent score'' which is calculated
using the following formula:
Quality performance category achievement percent score = total measure
achievement points/total available measure achievement points.
The current MIPS performance period quality performance category
achievement percent score is compared to the previous performance
period quality performance category achievement percent score. If the
current score is higher, the MIPS eligible clinician may qualify for an
improvement percent score to be added into the quality performance
category percent score for the current performance year. Table 27 of
the proposed rule (82 FR 30117) illustrated how the quality performance
category achievement percent score is calculated.
We proposed to amend the regulatory text at Sec.
414.1380(b)(1)(xvi) to state that improvement scoring is available to
MIPS eligible clinicians and groups that demonstrate improvement in
performance in the current MIPS performance period compared to the
performance in the previous MIPS performance period, based on
achievement. Bonus points or improvement percent score adjustments made
to the category score in the prior or current performance period are
not taken into account when determining whether an improvement has
occurred or the size of any improvement percent score.
We invited public comment on our proposal to award improvement
based on changes in the quality performance category achievement
percent score.
The following is a summary of the public comments received on our
proposal to measure improvement based on changes in achievement in the
quality performance category and our responses:
Comment: A few commenters agreed with not including bonus points in
the calculation.
Response: We appreciate the support of our proposal.
Comment: One commenter recommended including the bonus for end-to-
end electronic reporting in the calculation for the improvement percent
score. One commenter suggested counting bonus points for scenarios
where additional outcome or high priority measures are reported because
the bonus point does have a stronger tie to performance and will help
provide incentives for eligible clinicians to move toward reporting of
more outcome measures in the future.
Response: We appreciate the suggestions to incorporate bonus
points into improvement scoring, but MIPS already includes bonuses that
reward end-to-end electronic reporting and the reporting of high
priority measures. We do not believe it would be appropriate to reward
changes in the quality performance category score that are due to these
bonuses.
Final Action: As a result of the public comments, we are finalizing
as proposed to amend the regulatory text at Sec. 414.1380(b)(1)(xvi)
to state that improvement scoring is available to MIPS eligible
clinicians that demonstrate improvement in performance in the current
MIPS performance period compared to the performance in the previous
MIPS performance period, based on measure achievement points.
We are also finalizing as proposed to call the score at Sec.
414.1380(b)(1)(xvi)(D), which is based on achievement only, the
``quality performance category achievement percent score,'' which is
calculated using the formula as proposed.
(v) Improvement Scoring Methodology for the Quality Performance
Category
We noted that we believe the improvement scoring methodology we
proposed for the quality performance category recognizes the rate of
increase in quality performance category scores of MIPS eligible
clinicians from one performance period to another, so that a higher
rate of improvement results in a higher improvement percent score. We
believe this is particularly true for clinicians with lower
performance, who will be incentivized to begin improving because they
have the opportunity to improve significantly and achieve a higher
improvement percent score.
We proposed to award an ``improvement percent score'' based on the
following formula:
Improvement percent score = (increase in quality performance category
achievement percent score from prior performance period to current
performance period/prior performance period quality performance
category achievement percent score) * 10 percent.
In the proposed rule (82 FR 30117), we provided an example of how
to score the improvement percent score. We noted that we believe that
this improvement scoring methodology provides an easily explained and
applied approach that is consistent for all MIPS eligible clinicians.
Additionally, it provides additional incentives for MIPS eligible
clinicians who are lower performers to improve performance. We believe
that providing larger incentives for MIPS eligible clinicians with
lower quality performance category scores to improve will not only
increase the quality performance category scores but also will have the
greatest impact on improving quality for beneficiaries.
We also proposed that the improvement percent score cannot be
negative (that is, lower than zero percentage points). The improvement
percent score would be zero for those who do not have sufficient data
or who are not eligible under our proposal for improvement points. For
example, a MIPS eligible clinician would not be eligible for
improvement if the clinician was not eligible for MIPS in the prior
performance period and did not have a quality performance category
achievement percent score. We also proposed to cap the size of the
improvement award at 10 percentage points, which we believe
appropriately rewards improvement and does not outweigh percentage
points available through achievement. In effect, 10 percentage points
under our proposed formula would represent 100 percent improvement--or
doubling of achievement measure points--over the immediately preceding
period. For the
[[Page 53746]]
reasons stated, we anticipated that this amount would encourage
participation by individual MIPS eligible clinicians and groups and
would provide appropriate recognition and reward for the largest
increases in performance improvement.
Table 28 of the proposed rule (82 FR 30118), and included in Table
24, illustrated examples of the proposed improvement percent scoring
methodology, which is based on rate of increase in quality performance
category achievement percent scores. We also considered an alternative
to measuring the rate of improvement. The alternative would use band
levels to determine the improvement points for MIPS eligible clinicians
who qualify for improvement points. Under the band level methodology, a
MIPS eligible clinician's improvement would first be determined from
the change in the quality performance category achievement percent
score from 1 year to the next, in the same manner as under the rate of
improvement methodology. However, under the band level methodology, an
improvement percent score would then be assigned by awarding a portion
(50, 75, or 100 percent) of the improvement in achievement, based on
the clinician's performance category achievement percent score for the
prior performance period. Bands would be set for category achievement
percent scores, with increases from lower category achievement scores
earning a larger portion (percentage) of the improvement points. Under
this alternative, improvement percentage points would be awarded to
MIPS eligible clinicians whose category scores improved across years,
according to the applicable band level, up to a maximum of 10 percent
of the total score. In Table 29 of the proposed rule (82 FR 30118), we
illustrated the band levels we considered as part of this alternative.
Table 30 of the proposed rule (82 FR 30119) illustrated examples of the
improvement scoring methodology based on band levels. Generally, this
methodology would generate a higher improvement percent score for
clinicians; however, we noted that we believe the policy we proposed
would provide a score that better represents true improvement at the
performance category level, rather than comparing simple increases in
performance category scores.
Table 24--Improvement Scoring Examples Based on Rate of Increase in Quality Performance Category Achievement
Percent Scores
----------------------------------------------------------------------------------------------------------------
                                      Year 1            Year 2
                                    achievement       achievement     Increase in     Rate of        Improvement
                                      percent           percent       achievement    improvement    percent score
                                       score             score
----------------------------------------------------------------------------------------------------------------
Individual Eligible Clinician #1     5% \1\             50%              20%         20%/30% =      0.67 * 10% =
 (Pick Your Pace Test Option).                                                        0.67.          6.7%. No cap
                                                                                                      needed.
Individual Eligible Clinician #2     60%                66%               6%          6%/60% =      0.10 * 10% =
                                                                                      0.10.          1.0%. No cap
                                                                                                      needed.
Individual Eligible Clinician #3     90%                93%               3%          3%/90% =      0.033 * 10% =
                                                                                      0.033.         0.3%. No cap
                                                                                                      needed.
Individual Eligible Clinician #4     30%                70%              40%         40%/30% =      1.33 * 10% =
                                                                                      1.33.          13.3%. Apply
                                                                                                      cap at 10%.
----------------------------------------------------------------------------------------------------------------
\1\ Because the year 1 score of 5% is below 30%, a 30% score (the lowest score a clinician can achieve with
complete reporting in year 1) is substituted and improvement is measured above 30%.
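For illustration only, the following Python sketch reproduces the
calculations shown in Table 24, including the substitution of 30
percent for prior-year scores at or below 30 percent and the 10
percentage point cap; the function and variable names are illustrative
and assume full participation in the current performance period.

    def improvement_percent_score(prior_score, current_score):
        """Improvement percent score, in percentage points, based on the
        rate of increase in the quality performance category achievement
        percent score. Scores are percentages (for example, 60 for 60%).
        Assumes full participation in the current performance period."""
        # A prior-year score at or below 30 percent is replaced with an
        # assumed 30 percent achievement percent score.
        baseline = max(prior_score, 30.0)
        rate_of_improvement = (current_score - baseline) / baseline
        score = rate_of_improvement * 10.0  # multiply by 10 percent
        # The score cannot be negative and is capped at 10 points.
        return min(max(score, 0.0), 10.0)

    # Table 24 examples:
    print(improvement_percent_score(5, 50))   # 6.7 (rounded)
    print(improvement_percent_score(60, 66))  # 1.0
    print(improvement_percent_score(90, 93))  # 0.3 (rounded)
    print(improvement_percent_score(30, 70))  # 10.0 (cap applied)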
In addition, we considered another alternative that would adopt the
improvement scoring methodology of the Shared Savings Program \7\ for
CMS Web Interface submissions in the quality performance category, but
decided to not adopt this approach. Under the Shared Savings Program
approach, eligible clinicians and groups that submit through the CMS
Web Interface would have been required to submit on the same set of
quality measures, and we would have awarded improvement for all
eligible clinicians or groups who submitted complete data in the prior
performance period. Because Shared Savings Program ACOs and Next
Generation ACOs report using the CMS Web Interface, using the same
improvement scoring approach would align MIPS with those programs. We
believed it could be beneficial to align improvement scoring between
the programs because it would align incentives for clinicians who
participate in the Shared Savings Program or other ACO initiatives.
The Shared Savings Program approach would test
each measure for statistically significant improvement or statistically
significant decline. We would sum the number of measures with a
statistically significant improvement and subtract the number of
measures with a statistically significant decline to determine the Net
Improvement. We would next divide the Net Improvement in each domain by
the number of eligible measures in the domain to calculate the
Improvement Score. We would cap the number of possible improvement
percentage points at 10.
---------------------------------------------------------------------------
\7\ For additional information on the Shared Savings Program's
scoring methodology, we refer readers to the Quality Measurement
Methodology and Resources, September 2016, Version 1 and the
Medicare Shared Savings Program Quality Measure Benchmarks for the
2016 and 2017 Reporting Years (available at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Downloads/MSSP-QM-Benchmarks-2016.pdf).
---------------------------------------------------------------------------
We considered the Shared Savings Program methodology because it
would promote alignment with ACOs. We ultimately decided not to adopt
this scoring methodology because we believe having a single performance
category level approach for all quality performance category scores
encourages uniformity in our approach to improvement scoring and
simplifies the scoring rules for MIPS eligible clinicians. It also
allows us greater flexibility to compare performance scores across the
diverse submission mechanisms, which makes improvement scoring more
broadly available to eligible clinicians and groups that elect
different ways of participating in MIPS.
We proposed to add regulatory text at Sec.
414.1380(b)(1)(xvi)(C)(3) to state that an improvement percent score
cannot be negative (that is, lower than zero percentage points). We
also proposed to add regulatory text at Sec. 414.1380(b)(1)(xvi)(C)(1)
to state that improvement scoring is awarded based on the rate of
increase in the quality performance category achievement percent score
of individual MIPS eligible clinicians or groups from the current MIPS
performance period compared to the score in the year immediately prior
to the current MIPS
[[Page 53747]]
performance period. We also proposed to add regulatory text at Sec.
414.1380(b)(1)(xvi)(C)(2) to state that an improvement percent score is
calculated by dividing the increase in the quality performance category
achievement percent score of an individual MIPS eligible clinician or
group, which is calculated by comparing the quality performance
category achievement percent score for the current MIPS performance period
to the quality performance category achievement percent score from the
MIPS performance period in the year immediately prior to the current
MIPS performance period, by the prior performance period quality
performance category achievement percent score, and multiplying by 10
percent.
We invited public comments on our proposal to calculate improvement
scoring using a methodology that awards improvement points based on the
rate of improvement and, alternatively, on rewarding improvement at the
band level or using the Shared Savings Program approach for CMS Web
Interface submissions.
The following is a summary of the public comments received on our
proposal for the methodology to calculate improvement of the quality
performance category and our response:
Comment: Many commenters supported the proposed rate of increase
for measuring improvement. Several commenters believed the rate of
improvement approach is fairer and easier to understand than
improvement at the band level. One commenter believed that the proposed methodology
more accurately captures improvement levels than the band level
methodology and would provide more equivalent scoring because the band
level methodology provides less opportunity to improve scores for high
performers. One commenter believed the band level and the Shared
Savings Program approach for CMS Web Interface submissions are too
complex. One commenter believed the rate of improvement appropriately
incentivizes lower performers to improve their performance. One
commenter believed the proposed approach redresses inadvertent biases
that would otherwise disadvantage smaller specialty, rural, and other
professionals.
Response: We appreciate the commenters' support for our proposal.
Comment: One commenter suggested adjustments in future years based
on an analysis of the impact of the current proposal on practices and
their ability to ramp up to full reporting.
Response: We will monitor the impact of the rate of increase
methodology on the quality performance category scores and address any
changes through future rulemaking.
Comment: One commenter believed that the rate of improvement mainly
benefits lower performers who show improvement. One commenter believed
that the proposed methodology disadvantages clinicians who are already
performing well in the program. One commenter believed that this
approach might discourage high-performing practices from continuing to
invest in their practices to achieve small, incremental, yet vital
improvements in quality. One commenter requested an alternative
approach that would more equitably incentivize clinicians at all stages
of practice transformation to value-based care and to continuously
improve their performance.
Response: While we understand the commenter's concerns, we believe
the improvement methodology provides an adequate incentive and award
for improvement in performance for high performers and low performers
and encourages movement toward value-based care. Improvement is
available to all clinicians, although initially the opportunity to
improve is greater for those who are low performers or who have not
participated before. We believe that increasing the scores for those
who raise their performance level at a greater rate will have the
greatest impact on quality for beneficiaries.
Comment: A few commenters supported the band methodology over the
proposed methodology for calculating improvement because clinicians
with high performance who demonstrate even modest improvement should
benefit from improvement scoring.
Response: We agree that the band methodology is a viable approach;
however, given the general support for our proposal, we are finalizing
our proposal of using rate of increase in achievement.
Final Action: As a result of the public comments, we are finalizing
as proposed to base the improvement percent score on the rate of
increase in achievement methodology. We are finalizing as proposed to
add requirements at Sec. 414.1380(b)(1)(xvi)(C)(3) to state that an
improvement percent score cannot be negative (that is, lower than zero
percentage points). We also are finalizing as proposed to add a
requirement at Sec. 414.1380(b)(1)(xvi)(C)(1) to state that
improvement scoring is awarded based on the rate of increase in the
quality performance category achievement percent score of individual
MIPS eligible clinicians or groups from the previous performance period to the
current performance period. We also are finalizing as proposed to add
requirements at Sec. 414.1380(b)(1)(xvi)(C)(2) to state that an
improvement percent score is calculated by dividing the increase in the
quality performance category achievement percent score of an individual
MIPS eligible clinician or group, which is calculated by comparing the
quality performance category achievement percent score from the prior
performance period to the current performance period, by the prior
performance period's quality performance category achievement percent
score, and multiplying by 10 percent.
(j) Calculating the Quality Performance Category Percent Score
Including Improvement
In the CY 2017 Quality Payment Program final rule, we finalized at
Sec. 414.1380(b)(1)(xv) that the quality performance category score is
the sum of all points assigned for the measures required for the
quality performance category criteria plus bonus points, divided by the
sum of total possible points (81 FR 77300). Using the terminology
proposed in section II.C.7.a.(2) of the proposed rule (82 FR 30098
through 30099), this formula can be represented as:
Quality performance category percent score = (total measure achievement
points + measure bonus points)/total available measure achievement
points.
We proposed to incorporate the improvement percent score, which was
proposed in section II.C.7.a.(2)(i)(i) of the proposed rule (82 FR
30113 through 30114), into the quality performance category percent
score. We proposed to amend Sec. 414.1380(b)(1)(xv) (redesignated as
Sec. 414.1380(b)(1)(xvii)) to add the improvement percent score (as
calculated pursuant to proposed paragraph (b)(1)(xvi)(A) through (F))
to the quality performance score. We also proposed to amend Sec.
414.1380(b)(1)(xv) (redesignated as Sec. 414.1380(b)(1)(xvii)) to
amend the text that states the quality performance category percent
score cannot exceed the total possible points for the quality
performance category to clarify that the total possible points for the
quality performance category cannot exceed 100 percentage points. Thus,
the calculation for the proposed quality performance category percent
score including improvement, can be summarized in the following
formula:
[[Page 53748]]
Quality performance category percent score = ([total measure
achievement points + measure bonus points]/total available measure
achievement points) + improvement percent score, not to exceed 100
percent.
This same formula and logic will be applied for both CMS Web
Interface and non-CMS Web Interface reporters.
Table 31 of the proposed rule (82 FR 30120) illustrated an example
of calculating the quality performance category percent score including
improvement for a non-CMS Web Interface reporter. We noted that the
quality performance category percent score is then multiplied by the
performance category weight for calculating the points towards the
final score.
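For illustration only, the following Python sketch applies this formula
and the weighting step described above; the variable names and example
values, including the 50 percent quality weight, are illustrative
assumptions rather than terms defined in this rule.

    def quality_category_percent_score(achievement_points, bonus_points,
                                       available_points,
                                       improvement_percent_score):
        """Quality performance category percent score, including the
        improvement percent score, capped at 100 percent."""
        base = 100.0 * (achievement_points + bonus_points) / available_points
        return min(base + improvement_percent_score, 100.0)

    # Illustrative example: 48 achievement points plus 4 bonus points out
    # of 60 available, with a 5 percentage point improvement percent score.
    category_percent = quality_category_percent_score(48, 4, 60, 5.0)

    # Points toward the final score: the category percent score multiplied
    # by the quality performance category weight (50 percent used here
    # only as an illustration).
    print(category_percent, category_percent * 0.50)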
We invited public comment on this overall methodology and formula
for calculating the quality performance category percent score.
The following is a summary of the public comments received on the
``Calculating the Quality Performance Category Percent Score Including
Improvement'' proposals and our responses:
Comment: Several commenters supported the quality category
improvement scoring formula.
Response: We appreciate the commenters' support of our proposal.
Final Action: As a result of the public comments, we are finalizing
as proposed to incorporate the improvement percent score, which was
proposed in section II.C.7.a.(2)(i)(i) of the proposed rule (see 82 FR
30113 through 30114), into the quality performance category percent
score. We are also finalizing as proposed to amend Sec.
414.1380(b)(1)(xv) (redesignated as Sec. 414.1380(b)(1)(xvii)) to add
the improvement percent score (as calculated pursuant to proposed
paragraph (b)(1)(xvi)(A) through (F)) to the quality performance score.
We are also finalizing as proposed to amend Sec. 414.1380(b)(1)(xv)
(redesignated as Sec. 414.1380(b)(1)(xvii)) to amend the text that
states the quality performance category percent score cannot exceed the
total possible points for the quality performance category to clarify
that the total possible points for the quality performance category
cannot exceed 100 percentage points.
(3) Scoring the Cost Performance Category
We score the cost performance category using a methodology that is
generally consistent with the methodology used for the quality
performance category. In the CY 2017 Quality Payment Program final rule
(81 FR 77309), we codified at Sec. 414.1380(b)(2) that a MIPS eligible
clinician receives 1 to 10 achievement points for each cost measure
attributed to the MIPS eligible clinician based on the MIPS eligible
clinician's performance compared to the measure benchmark. We establish
a single benchmark for each cost measure and base those benchmarks on
the performance period (81 FR 77309). Because we base the benchmarks on
the performance period, we will not be able to publish the actual
numerical benchmarks in advance of the performance period (81 FR
77309). We develop a benchmark for a cost measure only if at least 20
groups (for those MIPS eligible clinicians participating in MIPS as a
group practice) or TIN/NPI combinations (for those MIPS eligible
clinicians participating in MIPS as an individual) can be attributed
the case minimum for the measure (81 FR 77309). If a benchmark is not
developed, the cost measure is not scored or included in the
performance category (81 FR 77309). For each set of benchmarks, we
calculate the decile breaks based on cost measure performance during
the performance period and assign 1 to 10 achievement points for each
measure based on the benchmark decile range within which the MIPS
eligible clinician's performance on the measure falls (81 FR 77309 through
77310). We also codified at Sec. 414.1380(b)(2)(iii) that a MIPS
eligible clinician's cost performance category score is the equally-
weighted average of all scored cost measures (81 FR 77311).
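For illustration only, the following Python sketch shows the general
shape of this decile-based scoring; it assumes that lower cost
indicates better performance and omits details, such as partial
achievement points within a decile, that are not restated here.

    import numpy as np

    def cost_measure_achievement_points(clinician_cost, all_costs):
        """Assign 1 to 10 achievement points for a cost measure based on
        the performance-period benchmark decile into which the clinician's
        cost falls (lower cost assumed to be better performance)."""
        # Decile breaks computed from performance-period data.
        breaks = np.percentile(all_costs, np.arange(10, 100, 10))
        decile_index = int(np.searchsorted(breaks, clinician_cost,
                                           side="right"))
        return 10 - decile_index  # lowest-cost decile earns 10 points

    def cost_category_achievement_percent(measure_points):
        """Equally weighted average of scored cost measures, expressed as
        achievement points earned over achievement points available."""
        return 100.0 * sum(measure_points) / (10.0 * len(measure_points))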
In the CY 2017 Quality Payment Program final rule (81 FR 77311), we
adopted a final policy to not calculate a cost performance category
score if a MIPS eligible clinician or group is not attributed any cost
measures because the MIPS eligible clinician or group has not met the
case minimum requirements for any of the cost measures or a benchmark
has not been created for any of the cost measures that would otherwise
be attributed to the clinician or group. We inadvertently failed to
include this policy in the regulation text and proposed to codify it
under Sec. 414.1380(b)(2)(v) (82 FR 30120).
For more of the statutory background and descriptions of our
current policies for the cost performance category, we refer readers to
the CY 2017 Quality Payment Program final rule (81 FR 77308 through
77311).
In the CY 2018 Quality Payment Program proposed rule (82 FR 30098),
we proposed to add improvement scoring to the cost performance category
scoring methodology starting with the 2020 MIPS payment year. We did
not propose any changes to the methodology for scoring achievement in
the cost performance category for the 2020 MIPS payment year other than
the method used for facility-based measurement described in the CY 2018
Quality Payment Program proposed rule (82 FR 30128 through 30132). We
proposed a change in terminology to refer to the ``cost performance
category percent score'' in order to be consistent with the terminology
used in the quality performance category (82 FR 30120). We proposed to
revise Sec. 414.1380(b)(2)(iii) to provide that a MIPS eligible
clinician's cost performance category percent score is the sum of the
following, not to exceed 100 percent: The total number of achievement
points earned by the MIPS eligible clinician divided by the total
number of available achievement points (which can be expressed as a
percentage); and the cost improvement score (82 FR 30121). This
terminology change to refer to the score as a percentage is consistent
with the proposed change in the CY 2018 Quality Payment Program
proposed rule (82 FR 30099) for the quality performance category. We
discussed the proposals for improvement scoring in the cost performance
category in the CY 2018 Quality Payment Program proposed rule (82 FR
30121).
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Several commenters supported the addition of improvement
scoring in the cost performance category, noting that it was consistent
with the statutory requirements.
Response: We thank the commenters for their support.
Comment: Several commenters opposed the proposal to add improvement
scoring in the cost performance category for the 2020 MIPS payment
year. Many of these commenters expressed concern with the measures used
in the cost performance category, suggesting that the measures were not
well suited to determine achievement and therefore not suitable to
determine improvement. A few commenters recommended that clinicians be
given more time to understand cost measures before assessing
improvement. A few commenters expressed concern that this increased the
complexity of scoring in the cost performance category. A few
commenters suggested that improvement scoring should not be added until
episode-based measures were included as cost measures.
Response: We understand the concerns with adding improvement
scoring to the cost performance
[[Page 53749]]
category. We have recognized that clinicians still need time to better
understand cost measures, as well as our method of scoring them. Under
our proposed methodology for scoring improvement, only two cost
measures would be eligible for improvement scoring for the 2018 MIPS
performance period. Many clinicians will not be scored on those two
cost measures because they will not meet the case minimums for either
of those measures due to the nature of their specialty or practice.
However, section 1848(q)(5)(D)(i)(I) of the Act compels us to take
improvement into account in addition to achievement in scoring the cost
performance category beginning with the second year of MIPS, if data
sufficient to measure improvement is available.
Final Action: After consideration of the public comments, we are
finalizing our proposal to add improvement scoring to the cost
performance category scoring methodology starting with the 2020 MIPS
payment year. We are finalizing our proposal to change the terminology
to refer to a cost performance category percent score and to make
corresponding changes to the regulation text at Sec.
414.1380(b)(2)(iii). We are also finalizing our proposal to add
regulatory text at Sec. 414.1380(b)(2)(v) reflecting our previously
finalized policy not to calculate a cost performance category score if
a MIPS eligible clinician or group is not attributed any cost measures
because the MIPS eligible clinician or group has not met the case
minimum requirements for any of the cost measures or a benchmark has
not been created for any of the cost measures that would otherwise be
attributed to the clinician or group.
(a) Measuring Improvement
(i) Calculating Improvement at the Cost Measure Level
In the CY 2018 Quality Payment Program proposed rule (82 FR XXX),
we proposed to make available to MIPS eligible clinicians and groups a
method of measuring improvement in the quality and cost performance
categories. In the CY 2018 Quality Payment Program proposed rule (82 FR
30113 through 30114), for the quality performance category, we proposed
to assess improvement on the basis of the score at the performance
category level. For the cost performance category, similar to the
quality performance category, we proposed at Sec. 414.1380(b)(2)(iv)
that improvement scoring is available to MIPS eligible clinicians and
groups that demonstrate improvement in performance in the current MIPS
performance period compared to their performance in the immediately
preceding MIPS performance period (for example, demonstrating
improvement in the 2018 MIPS performance period over the 2017 MIPS
performance period).
In the CY 2018 Quality Payment Program proposed rule (82 FR 30113
through 30114), we noted the various challenges associated with
attempting to measure improvement in the quality performance category
at the measure level, given the many opportunities available to
clinicians to select which measures to report. We noted that these
challenges are not present in the cost performance category and
explained our reasons for believing that there are advantages to
measuring cost improvement at the measure level. Therefore, we proposed
at Sec. 414.1380(b)(2)(iv)(A) to measure cost improvement at
the measure level for the cost performance category.
In the CY 2018 Quality Payment Program proposed rule, we described
our reasons for believing that we would have data sufficient to measure
improvement when we can measure performance in the current performance
period compared to the prior performance period. Due to the differences
in our proposals for measuring improvement for the quality and cost
performance categories, such as measuring improvement at the measure
level versus the performance category level, we proposed a different
data sufficiency standard for the cost performance category than for
the quality performance category. First, for data sufficient to measure
improvement to be available for the cost performance category, we
proposed that the same cost measure(s) would need to be specified for
the cost performance category for 2 consecutive performance periods (82
FR 30121). For the 2020 MIPS payment year, only 2 cost measures, the
MSPB measure and the total per capita cost measure, would be eligible
for improvement scoring under this proposal. For a measure to be scored
in either performance period, a MIPS eligible clinician would need to
have a sufficient number of attributed cases to meet or exceed the case
minimum for the measure.
In addition, we proposed that a clinician would have to report for
MIPS using the same identifier (TIN/NPI combination for individuals,
TIN for groups, or virtual group identifiers for virtual groups) and be
scored on the same measure(s) for 2 consecutive performance periods (82
FR 30121). Because we wanted to encourage action on the part of
clinicians in reviewing and understanding their contribution to patient
costs, we believed that improvement should be evaluated only when there
is a consistent identifier.
Therefore, for the cost performance category, we proposed at Sec.
414.1380(b)(2)(iv)(B) that we would calculate a cost improvement score
only when data sufficient to measure improvement is available (82 FR
30121). We proposed that sufficient data would be available when a MIPS
eligible clinician participates in MIPS using the same identifier in 2
consecutive performance periods and is scored on the same cost
measure(s) for 2 consecutive performance periods (for example, in the
2017 MIPS performance period and the 2018 MIPS performance period) (82
FR 30121). If the cost improvement score cannot be calculated because
sufficient data is not available, we proposed to assign a cost
improvement score of zero percentage points (82 FR 30121). While the
total available cost improvement score would be limited at first
because only 2 cost measures would be included in both the first and
second performance periods of the program (total per capita cost and
MSPB), more opportunities for improvement scoring would be available in
the future as additional cost measures, including episode-based
measures, are added in future rulemaking. MIPS eligible clinicians
would be able to review their performance feedback and make
improvements compared to the score in their previous feedback.
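As a non-authoritative sketch of the data sufficiency conditions
described above, the following Python function indicates whether a
cost improvement score can be calculated for a measure; the parameter
names are illustrative.

    def cost_improvement_data_sufficient(prior_identifier, current_identifier,
                                         scored_in_prior_period,
                                         scored_in_current_period):
        """Sufficient data exists only when the clinician or group reports
        under the same identifier (TIN/NPI, TIN, or virtual group
        identifier) and is scored on the same cost measure in 2
        consecutive performance periods; otherwise the cost improvement
        score is zero percentage points."""
        return (prior_identifier == current_identifier
                and scored_in_prior_period
                and scored_in_current_period)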
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Several commenters supported our proposal to evaluate
improvement for the cost performance category at the measure level
because the measures are likely to remain consistent over time and this
approach may enable clinicians to target process improvements on a
specific measure, resulting in improved performance.
Response: We thank the commenters for their support.
Comment: A few commenters suggested that data sufficient to measure
improvement in the cost performance category would not be available,
and therefore we are not required to consider improvement in
determining the cost performance category score. These commenters
suggested that sufficient data would not be available because we did
not propose to retain for the 2020 MIPS payment year the episode-based
measures used for the 2019 MIPS payment year and that new
[[Page 53750]]
episode-based measures could be added for the 2021 MIPS payment year.
One commenter suggested that sufficient data would not be available to
measure improvement for specialist clinicians because episode-based
measures would not be available for all clinicians.
Response: We continue to believe that data sufficient to measure
improvement will be available under our proposed methodology for
measuring improvement for the cost performance category. Under our
proposal, we would measure improvement only when a MIPS eligible
clinician participates in MIPS using the same identifier in two
consecutive performance periods and is scored on the same cost
measure(s) for two consecutive performance periods. This same policy
would apply as we continue to implement our plan to introduce new
episode-based measures in future years of the program. We note that measures
would not be eligible for improvement scoring in the first year they
are adopted for MIPS, as we would have no way of assessing how a
clinician might have improved on a measure that was not previously
included in the program.
Comment: Several commenters supported our proposal to measure
improvement in the cost performance category if a clinician is scored
on the same measure and with the same group or individual identifier in
subsequent years.
Response: We thank the commenters for their support.
Comment: A few commenters opposed our approach to measuring
improvement in the cost performance category at the measure level,
suggesting that the approach proposed for the quality performance
category would be simpler and better understood by clinicians, and
having more than one method of evaluating improvement is confusing.
Response: We strive to maintain consistency and simplicity in the
Quality Payment Program to the greatest extent possible. However, we
continue to believe that the methods for measuring achievement in the
quality and cost performance categories are different enough to warrant
a different approach for measuring improvement. Most importantly,
clinicians are not given the opportunity to select the measures in the
cost performance category, as they are in the quality performance
category, so there should be greater consistency in the measures on
which clinicians are assessed from year to year. One benefit to scoring
improvement at the cost measure level is that clinicians who wish to
take action to improve their performance can focus on a particular
measure as opposed to an overall category score that may represent
multiple measures.
Comment: One commenter opposed our proposal to measure improvement
only when a clinician participates in MIPS using the same identifier
(TIN/NPI combination for individuals, TIN for groups, or virtual group
identifiers for virtual groups) for two consecutive performance
periods. This commenter suggested that some clinicians work in multiple
practices in a year and that this requirement would limit their
opportunity to receive an improvement score.
Response: We wish to encourage action on the part of clinicians in
reviewing and understanding their contribution to patient costs. We
believe an approach that evaluates improvement only for those who
report using the same identifier in consecutive years is more likely to
reward targeted improvement by the clinician or group. A clinician who
reported as part of a group in 1 year and as an individual in another
year would be likely to have a different patient population and other
factors that could affect the improvement score. In the case of
clinicians who work at more than one practice (and bill under more than
one TIN) in a given year and continue at those practices in future
years, they could be scored on their improvement if they continue to
participate in MIPS using the same practice identifier from year to
year.
Final Action: After consideration of the public comments, we are
finalizing all of these proposals related to measuring improvement in
the cost performance category at the measure level.
(ii) Improvement Scoring Methodology
In the CY 2018 Quality Payment Program proposed rule (82 FR 30096
through 30097), we discussed a number of different programs and how
they measure improvement at the category or measure level as part of
their scoring systems. For the quality performance category, we
proposed to compare the overall rate of achievement on all the
underlying measures and to measure a rate of overall improvement in
order to calculate an improvement percent score. The improvement
percent score is then added after taking into account measure
achievement points and measure bonus points, as described in proposed
Sec. 414.1380(b)(1)(xvii). In
reviewing the methodologies that are specified in the CY 2018 Quality
Payment Program proposed rule that include consideration of improvement
at the measure level, we noted that the methodology used in the Shared
Savings Program would best reward achievement and improvement for the
cost performance category because this program includes measures for
clinicians, the methodology is straightforward, and it only recognizes
significant improvement (82 FR 30122). We proposed to quantify
improvement in the cost performance category by comparing the number of
cost measures with significant improvement in performance and the
number of cost measures with significant declines in performance (82 FR
30122). We proposed at Sec. 414.1380(b)(2)(iv)(C) to determine the
cost improvement score by subtracting the number of cost measures with
significant declines from the number of cost measures with significant
improvement, and then dividing the result by the number of cost
measures for which the MIPS eligible clinician or group was scored in
both performance periods, and then multiplying the result by the
maximum cost improvement score (82 FR 30122). For the 2020 MIPS payment
year, improvement scoring would be possible for the total per capita
cost measure and the MSPB measure as those 2 measures would be
available for 2 consecutive performance periods under the proposals in
the CY 2018 Quality Payment Program proposed rule (82 FR 30122). As in
our proposed quality improvement methodology, we proposed at Sec.
414.1380(b)(2)(iv)(D) that the cost improvement score could not be
lower than zero, and therefore, could only be positive (82 FR 30122).
We proposed to determine whether there was a significant
improvement or decline in performance between the two performance
periods by applying a common standard statistical test, a t-test, as is
used in the Shared Savings Program (79 FR 67930 through 67931, 82 FR
30122). We also welcomed public comments on whether we should consider
instead adopting an improvement scoring methodology that measures
improvement in the cost performance category the same way we proposed
to do in the quality performance category; that is, using the rate of
improvement and without requiring statistical significance, which was
discussed in the CY 2018 Quality Payment Program proposed rule (82 FR
30113 through 30114).
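For illustration only, the following Python sketch pairs a two-sample
t-test (used here as a stand-in for the Shared Savings Program test,
whose exact specification is not restated in this rule) with the
proposed cost improvement score formula; the function names, the Welch
variant of the test, and the 0.05 threshold are illustrative
assumptions.

    from scipy import stats

    def classify_measure_change(prior_costs, current_costs, alpha=0.05):
        """Classify a cost measure as a significant improvement, a
        significant decline, or neither, by comparing episode- or
        beneficiary-level costs across 2 performance periods (lower cost
        assumed to be better performance)."""
        result = stats.ttest_ind(current_costs, prior_costs, equal_var=False)
        if result.pvalue >= alpha:
            return "no significant change"
        return "improvement" if result.statistic < 0 else "decline"

    def cost_improvement_score(changes, max_improvement_score=1.0):
        """(Measures with significant improvement minus measures with
        significant decline) divided by the number of measures scored in
        both periods, times the maximum cost improvement score; the
        result cannot be negative."""
        improved = sum(c == "improvement" for c in changes)
        declined = sum(c == "decline" for c in changes)
        score = (improved - declined) / len(changes) * max_improvement_score
        return max(score, 0.0)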
Section 1848(q)(5)(D)(ii) of the Act specifies that the Secretary
may assign a higher scoring weight under subparagraph (F) with respect
to the achievement of a MIPS eligible clinician than with respect to
any improvement
[[Page 53751]]
of such clinician with respect to a measure, activity, or category
described in paragraph (2). We noted that we believe that there are
many opportunities for clinicians to actively work on improving their
performance on cost measures, through more active care management or
reductions in certain services. However, we recognize that most
clinicians are still learning about their opportunities in cost
measurement. We noted that we aim to continue to educate clinicians
about opportunities in cost measurement and continue to develop
opportunities for robust feedback and measures that better recognize
the role of clinicians. Since MIPS is still in its beginning years and
we understand that clinicians are working hard to understand how we
measure costs for purposes of the cost performance category, as well as
how we score their performance in all other aspects of the program, we
believe improvement scoring in the cost performance category should be
limited to avoid creating additional confusion. Based on these
considerations, we proposed in the CY 2018 Quality Payment Program
proposed rule to weight the cost performance category at zero percent
for the 2018 MIPS performance period/2020 MIPS payment year (82 FR
30122). With the entire cost performance category proposed to be
weighted at zero percent, we noted that the focus of clinicians should
be on achievement as opposed to improvement, and therefore, we proposed
at Sec. 414.1380(b)(2)(iv)(E) that although improvement would be
measured according to the method described above, the maximum cost
improvement score for the 2020 MIPS payment year would be zero
percentage points (82 FR 30122). Section 1848(q)(5)(D)(ii) of the Act
provides discretion for the Secretary to assign a higher scoring weight
under subparagraph (F), which refers to section 1848(q)(5)(F) of the
Act, with respect to achievement than with respect to improvement.
Section 1848(q)(5)(F) of the Act provides that if there are not sufficient
measures and activities applicable and available to each type of MIPS
eligible clinician, the Secretary shall assign different scoring
weights (including a weight of zero) for measures, activities, and/or
performance categories. When read together, we interpreted sections
1848(q)(5)(D)(ii) and 1848(q)(5)(F) of the Act to provide discretion to
the Secretary to assign a scoring weight of zero for improvement on the
measures specified for the cost performance category. Under the
improvement scoring methodology we proposed, we believe a maximum cost
improvement score of zero would be effectively the same as a scoring
weight of zero. Under this proposal, the cost improvement score would
not contribute to the cost performance category percent score
calculated for the 2020 MIPS payment year.
In the CY 2018 Quality Payment Program proposed rule, we considered
an alternative to make no changes to the previously finalized weight of
10 percent for the cost performance category for the 2020 MIPS payment
year. We proposed that if we maintain a weight of 10 percent for the
cost performance category for the 2020 MIPS payment year, the maximum
cost improvement score available in the cost performance category would
be 1 percentage point out of 100 percentage points available for the
cost performance category percent score (82 FR 30122). If a clinician
were measured on only one measure consistently from one performance
period to the next and met the requirements for improvement, the
clinician would receive one improvement percentage point in the cost
performance category percent score. If a clinician were measured on 2
measures consistently, improved significantly on one, and did not show
significant improvement on the other (as measured by the t-test method
described above), the clinician would receive 0.5 improvement
percentage points.
We invited comments on these proposals, as well as alternative ways
to measure changes in statistical significance for the cost measure.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: A few commenters supported our proposed methodology to
determine cost improvement score on the basis of a statistical test at
the measure level.
Response: We thank the commenters for their support.
Comment: A few commenters expressed concern that our proposed
method of determining cost improvement would be unfair once multiple
episode-based measures are included in the cost performance category
because it would be difficult to demonstrate improvement on all
measures. A few commenters suggested that clinicians receive credit for
improvement in each cost measure but that declines in performance not
be considered as part of their improvement score.
Response: Because there will be some variability in the number of
cost measures that are attributed to clinicians and groups,
particularly if more measures are added in future years of the program,
we do not believe that we can award additional credit for improvement
for each measure without considering the total number of cost measures
that are scored for an individual or group. Doing so could provide an
advantage to an individual or group with more measures than others. We
also believe that recognizing significant declines reduces the chance
of rewarding random variation from year to year.
We recognize that some clinicians will not have cost measures
available and applicable during the 2018 MIPS performance period and,
therefore, will be unable to demonstrate improvement in either the 2018
or 2019 MIPS performance periods. However, we wish to reward clinicians
who do achieve improvement and who are measured using the same
identifier on the same measure in consecutive years. We will evaluate
changes to the maximum cost improvement score for future years in
future rulemaking.
Comment: A few commenters supported the proposal for the maximum
cost improvement score to be zero percentage points for the 2020 MIPS
payment year because we had also proposed to set the weight for the
cost performance category at zero percent of the final MIPS score for
that same period.
Response: We thank the commenters for their support. However, as
discussed in section II.C.6.d.(2) of this final rule with comment
period, we are not finalizing our proposal to weight the cost
performance category at zero percent for the 2020 MIPS payment year.
Instead, we are adopting the alternative option to maintain a 10
percent weight for the cost performance category. We proposed that if
we maintain a weight of 10 percent for the cost performance category
for the 2020 MIPS payment year, the maximum cost improvement score
available in the cost performance category would be 1 percentage point
out of 100 percentage points available for the cost performance
category percent score. We believe that we should set a maximum cost
improvement score that is higher than zero and are finalizing the
maximum score at 1 percentage point as proposed.
Final Action: After consideration of the public comments, we are
finalizing all of our proposals related to the improvement scoring
methodology for the cost performance category, with the exception of
our proposal to set the maximum cost improvement score at 0 percentage
points for the 2020 MIPS payment year. Because we are finalizing the
alternative option to weight the cost
[[Page 53752]]
performance category at 10 percent of the final score for the 2020 MIPS
payment year (see II.C.6.d.(2) of this final rule with comment period),
we are adopting at Sec. 414.1380(b)(2)(iv)(E) our alternative of a
maximum cost improvement score of 1 percentage point out of 100
percentage points available for the cost performance category.
(b) Calculating the Cost Performance Category Percent Score With
Achievement and Improvement
For the cost performance category, we proposed to evaluate
improvement at the measure level, unlike the quality performance
category where we proposed to evaluate improvement at the performance
category level. For both the quality performance category and the cost
performance category, we proposed to add improvement to an existing
category percent score. We noted that we believe this is the most
straightforward and simple way to incorporate improvement. It is also
consistent with other Medicare programs that reward improvement.
As noted in the CY 2018 Quality Payment Program proposed rule, we
proposed a change in terminology to express the cost performance
category percent score as a percentage (82 FR 30123). We proposed to
revise Sec. 414.1380(b)(2)(iii) to provide that a MIPS eligible
clinician's cost performance category percent score is the sum of the
following, not to exceed 100 percent: The total number of achievement
points earned by the MIPS eligible clinician divided by the total
number of available achievement points (which can be expressed as a
percentage); and the cost improvement score (82 FR 30123). With these
two proposed changes, the formula would be:
(Cost Achievement Points/Available Cost Achievement Points) + (Cost
Improvement Score) = (Cost Performance Category Percent Score).
We provided an example of cost performance category scores with the
determination of improvement and decline in Table 32 of the proposed
rule (82 FR 30123). We invited public comments on these proposals.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: One commenter supported the proposed formula to calculate
the cost performance category percent score with achievement and
improvement.
Response: We thank the commenter for the support.
Final Action: After consideration of the public comments, we are
finalizing the method of calculating the cost performance category
percent score as proposed.
In Table 25, we provide an example of cost performance category
percent scores along with the determination of improvement or decline.
This example is for group reporting where the group is measured on both
the total per capita cost measure and the MSPB measure for 2
consecutive performance periods.
Table 25--Example of Assessing Achievement and Improvement in the Cost Performance Category
----------------------------------------------------------------------------------------------------------------
                                       Measure         Total possible     Significant           Significant
                                     achievement          measure       improvement from       decline from
            Measure                 points earned       achievement     prior performance    prior performance
                                    by the group           points            period               period
----------------------------------------------------------------------------------------------------------------
Total per Capita Cost Measure....        8.2                 10          Yes................  No.
MSPB Measure.....................        6.4                 10          No.................  No.
                                    ------------------------------------------------------------------------
    Total........................       14.6                 20          N/A...............  N/A.
----------------------------------------------------------------------------------------------------------------
In this example, there are 20 total possible measure achievement
points and 14.6 measure achievement points earned by the group, and the
group improved on one measure but not the other, with both measures
being scored in each performance period. The first part of the formula,
(Cost Achievement Points/Available Cost Achievement Points), is
14.6/20, which equals 0.730 and can be expressed as 73.0 percent. The
cost improvement score is determined as follows: ((1 measure with
significant improvement - 0 measures with significant decline)/2
measures) * 1 percentage point = 0.5 percentage points. Under the
formula, the cost performance category percent score is therefore
(14.6/20, or 73.0 percent) + 0.5 percent = 73.5 percent. To determine
how many points the cost performance category contributes to the final
score, we multiply the performance category percent score (73.5
percent) by the weight of the cost performance category (10 percent of
the final score) and by 100. The group would have 73.5 percent x 10
percent x 100 = 7.35 points contributed towards the final score for the
cost performance category.
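For illustration only, the arithmetic in this example can be reproduced
with the short Python sketch below; the values are taken from Table 25
and the 10 percent cost performance category weight finalized for the
2020 MIPS payment year, and the variable names are illustrative.

    # Table 25: two cost measures scored in both performance periods.
    achievement_points_earned = 8.2 + 6.4      # 14.6
    achievement_points_available = 10 + 10     # 20
    significant_improvements = 1               # total per capita cost measure
    significant_declines = 0
    measures_scored_in_both_periods = 2
    max_cost_improvement_score = 1.0           # percentage points

    achievement_percent = (100.0 * achievement_points_earned
                           / achievement_points_available)       # 73.0
    cost_improvement = ((significant_improvements - significant_declines)
                        / measures_scored_in_both_periods
                        * max_cost_improvement_score)            # 0.5
    cost_category_percent_score = achievement_percent + cost_improvement  # 73.5

    # Contribution to the final score at the 10 percent category weight.
    print(cost_category_percent_score * 0.10)                    # 7.35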
(4) Facility-Based Measures Scoring Option for the 2020 MIPS Payment
Year for the Quality and Cost Performance Categories
(a) Background
Section 1848(q)(2)(C)(ii) of the Act provides that the Secretary
may use measures used for payment systems other than for physicians,
such as measures for inpatient hospitals, for purposes of the quality
and cost performance categories. However, the Secretary may not use
measures for hospital outpatient departments, except in the case of
items and services furnished by emergency physicians, radiologists, and
anesthesiologists. In the MIPS and APMs RFI (80 FR 59108), we sought
comment on how we could best use this authority. We refer readers to
the CY 2017 Quality Payment Program final rule (81 FR 77127) for a
summary of these comments.
As noted in the CY 2017 Quality Payment Program proposed rule (81
FR 28192), we considered an option for facility-based MIPS eligible
clinicians to elect to use their institution's performance rates as a
proxy for the MIPS eligible clinician's quality score. However, we did
not propose an option for the transition year of MIPS because there
were several operational considerations that we believed needed to be
addressed before this option could be implemented. At that time, we
requested comments on the following issues: (1) Whether we should
attribute a facility's performance to a MIPS eligible clinician for
purposes of the quality and cost performance categories and under what
conditions such attribution would be appropriate and representative of
the MIPS eligible clinician's performance; (2) possible
[[Page 53753]]
criteria for attributing a facility's performance to a MIPS eligible
clinician for purposes of the quality and cost performance categories;
(3) the specific measures and settings for which we can use the
facility's quality and cost data as a proxy for the MIPS eligible
clinician's quality and cost performance categories; and (4) if
attribution should be automatic or if an individual MIPS eligible
clinician or group should elect for it to be done and choose the
facilities through a registration process. We summarized the comments
on these questions in the proposed rule (82 FR 30123 through 30124).
(b) Facility-Based Measurement
We believe that facility-based measurement is intended to reduce
reporting burden on facility-based MIPS eligible clinicians by
leveraging existing quality data sources and value-based purchasing
experiences and aligning incentives between facilities and the MIPS
eligible clinicians who provide services there. In addition, we believe
that facility-based MIPS eligible clinicians contribute substantively
to their respective facilities' performance on facility-based measures
of quality and cost, and that their performance may be better reflected
by their facilities' performance on such measures.
We proposed to limit facility-based measurement to the inpatient
hospital setting in the first year for a number of reasons, including that
there is a more diverse group of clinicians (and specialty types)
providing services in an inpatient setting compared to other settings
and that the Hospital Value-Based Purchasing (VBP) Program adjusts
payment in connection with both increases and decreases in performance.
The Hospital VBP Program is large and mature (82 FR 30124). We also
proposed to use measures only from a pay-for-performance program, not
a pay-for-reporting program, and to limit the measures for
facility-based measurement to those used in the Hospital VBP Program
(82 FR 30124) because that program compares facilities on a series of
different measures intended to capture the breadth of care furnished in
the facility.
We also considered program timing when determining what Hospital
VBP Program year to use for facility-based measurement for the 2020
MIPS payment year. Quality measurement for the FY 2019 Hospital VBP
Program's performance period will be concluded by December 31, 2017 (we
refer readers to the finalized FY 2019 performance periods in the FY
2018 Inpatient Prospective Payment System/Long-Term Care Hospital
Prospective Payment System final rule (82 FR 38259 through 38260)), and
the Hospital VBP Program scoring reports (referred to as the Percentage
Payment Summary Reports) will be provided to participating hospitals
not later than 60 days prior to the beginning of FY 2019, pursuant to
the Hospital VBP Program's statutory requirement at section 1886(o)(8)
of the Act. We discuss eligibility for facility-based measurement in
the CY 2018 Quality Payment Program proposed rule (82 FR 30125 through
30126), and we noted that the determination of the applicable hospital
will be made on the basis of a period that overlaps with the applicable
Hospital VBP Program performance period. Although Hospital VBP Program
measures have different measurement periods, the FY 2019 measures all
overlap from January to June in 2017, which also overlaps with our
first 12-month period to determine MIPS eligibility for purposes of the
CY 2018 performance period and 2020 MIPS payment year.
We believe that MIPS eligible clinicians electing the facility-
based measurement option under MIPS should be able to consider as much
information as possible when making that decision, including how their
attributed hospital performed in the Hospital VBP Program because an
individual clinician is a part of the clinical team in the hospital,
rather than the sole clinician responsible for care as tracked by
quality measures. Therefore, we concluded that we should be as
transparent as possible with MIPS eligible clinicians about their
potential facility-based scores before they begin data submission for
the MIPS performance period since this policy option is intended to
minimize reporting burdens on clinicians that are already participating
in quality improvement efforts through other CMS programs. We expect
that MIPS eligible clinicians that would consider facility-based
scoring would generally be aware of their hospital's performance on its
quality measures, but believe that providing this information directly
to clinicians ensures that such clinicians are fully aware of the
implications of their scoring elections under MIPS. However, we noted
that this policy could conceivably place non-facility-based MIPS
eligible clinicians at a competitive disadvantage since they would not
have any means by which to ascertain their MIPS measure scores in
advance. We viewed that compromise as a necessity to maximize
transparency, and we requested comment on whether this notification in
advance of the conclusion of the MIPS performance period is
appropriate, or if we should consider notifying facility-based
clinicians later in the MIPS performance period or even after its
conclusion.
The performance periods proposed in the CY 2018 Quality Payment
Program proposed rule (82 FR 30034) for the 2020 MIPS payment year
occur in part in 2018, with data submission for most mechanisms
starting in January 2019. To provide potential facility-based scores to
clinicians by the time the data submission period for the 2018 MIPS
performance period begins (assuming that timeframe is operationally
feasible), we noted that we believe that the FY 2019 Hospital VBP
Program, including the corresponding performance periods, is the most
appropriate program year to use for purposes of facility-based
measurement under the quality and cost performance categories for the
2020 MIPS payment year. However, we noted also that Hospital VBP
Program performance periods can run for periods as long as 36 months,
and for some FY 2019 Hospital VBP Program measures, the performance
period begins in 2014. We requested comment on whether concerns about
these lengthy performance periods should outweigh our desire to include
all Hospital VBP Program measures, as discussed further below (82 FR
30125). We proposed at Sec. 414.1380(e)(6)(iii) that the performance
period for facility-based measurement is the performance period for the
measures adopted under the value-based purchasing program of the
facility for the year specified (82 FR 30125).
We considered whether we should include the entire set of Hospital
VBP Program measures for purposes of facility-based measurement under
MIPS or attempt to differentiate those which may be more influenced by
clinicians' contribution to quality performance than others. However,
we believe that clinicians have a broad and important role as part of
the healthcare team at a hospital and that attempting to differentiate
certain measures undermines the team-based approach of facility-based
measurement. We proposed at Sec. 414.1380(e)(6)(i) that the quality
and cost measures are those adopted under the value-based purchasing
program of the facility for the year specified (82 FR 30125).
We proposed for the 2020 MIPS payment year to include all the
measures adopted for the FY 2019 Hospital VBP Program on the MIPS list
of quality measures and cost measures (82 FR 30125). Under the
proposal, we considered the FY 2019 Hospital VBP Program measures to
meet the definition
[[Page 53754]]
of additional system-based measures provided in section
1848(q)(2)(C)(ii) of the Act, and we proposed at Sec.
414.1380(e)(1)(i) that facility-based measures available for the 2018
MIPS performance period are the measures adopted for the FY 2019
Hospital VBP Program year authorized by section 1886(o) of the Act and
codified in our regulations at Sec. Sec. 412.160 through 412.167 (82
FR 30125). Measures in the FY 2019 Hospital VBP Program have different
performance periods as noted in Table 33 of the CY 2018 Quality Payment
Program proposed rule.
We requested comments on these proposals. We also requested
comments on what other programs, if any, we should consider including
for purposes of facility-based measurement under MIPS in future program
years (82 FR 30125).
The following is a summary of the public comments received on the
``Facility-Based Measurement'' proposals and our responses:
Comment: Many commenters supported our proposal to offer the
opportunity for facility-based measurement for purposes of determining
the quality and cost performance category scores for the 2020 MIPS
payment year. These commenters noted their longstanding interest in
such an opportunity and stated that it would reduce burden and align
incentives between facilities and clinicians who provide a substantial
amount of services there.
Response: We thank the commenters for their support. We agree that
facility-based measurement is important and a step forward in alignment
of incentives between clinicians and facilities. As we discuss below,
we believe that it is prudent to delay implementation of facility-based
measurement for an additional year so that clinicians can better
understand the opportunity and so that we can ensure that we are
operationally ready to support this measurement option.
Comment: A few commenters expressed general support for the idea of
facility-based measurement but concern that they did not have enough
information or preparation to adequately understand the proposal. These
commenters recommended that CMS develop a 1-year pilot program and
inform clinicians more about their status.
Response: We acknowledge the commenters' concerns. In order to
increase understanding of the policy, better educate clinicians on
eligibility and applicability of the program, and ensure CMS's
operational readiness to offer this measurement option to clinicians,
we plan to delay implementation of this policy by 1 year.
Comment: Many commenters recommended that eligibility for facility-
based measurement be expanded to include a wide range of facilities.
Commenters recommended that facility-based measurement be extended in
the future to include inpatient rehabilitation facilities, skilled
nursing facilities, hospice programs, critical access hospitals,
hospital outpatient departments, and ambulatory surgical centers.
Response: As we stated in the proposed rule, we believe that
clinicians play an important role in many facilities and programs that
include quality reporting elements and value-based purchasing programs.
Because we believe that the program used for the inpatient hospital is
the largest and among the most established value-based purchasing
programs, we have proposed, and are finalizing, that clinicians
practicing in the inpatient hospital will be eligible for facility-
based measurement. As discussed in more detail below, this final rule
will be applicable beginning with the 2019 MIPS performance period and
2021 MIPS payment year. However, in the future we will consider
opportunities to expand the program to other facilities, based on the
status of the facility value-based purchasing program, the
applicability of measures, and the ability to appropriately attribute a
clinician to a facility. Any new settings for facility-based
measurement would be proposed in future rulemaking.
Comment: A few commenters expressed concern that the facility-based
measurement would not be applicable to certain clinicians who are not
MIPS eligible because they are excluded by statute or bill under a
facility provider identification number. These commenters suggested
that we develop options to allow for these clinicians to participate in
facility-based measurement.
Response: MIPS eligibility is discussed in section II.C.1 of this
final rule with comment period. Eligibility for MIPS must be
established at the individual or group level in order for facility-
based measurement to also be applicable. We do not believe we have the
authority to determine MIPS eligibility through facility-based
measurement. We note that certain clinicians practice primarily in an
FQHC or CAH but bill for some items and services under Part B. Those
clinicians, even though they typically bill for services through an
FQHC or CAH, could be eligible for MIPS on the basis of their other
billing.
Comment: Several commenters supported our plan to inform clinicians
about their eligibility for facility-based measurement and which
hospital their score would be based on during the MIPS performance
period. A few commenters recommended that facilities be informed of
which clinicians could have their scores based on that facility as
well. These commenters expressed that informing clinicians and
hospitals of their status would allow clinicians to make the best
decisions for MIPS participation for their practices and increase
alignment between facilities and clinicians.
Response: We agree with the commenters and intend to provide as
much information as possible as early as possible to clinicians about
their eligibility and the hospital performance upon which a MIPS
eligible clinician's score would be based. We will work to provide
information about facility-based measurement eligibility and facility
attribution to clinicians in 2018, if technically feasible. We will
investigate whether it would be technically feasible and appropriate to
distribute information to attributed facilities about the clinicians
that could elect attribution of facility performance measures for
purposes of the MIPS program.
Comment: One commenter opposed our plan to notify clinicians about
their facility-based status before the close of the MIPS performance
period because, in the commenter's view, this choice should be made
earlier rather than used to make up for a failure to report through
another mechanism.
Response: Although we understand the commenter's concerns, we
disagree that an earlier deadline would be beneficial. We also need to
balance informing clinicians of their eligibility with giving them an
opportunity to consider their options.
Comment: A few commenters requested clarification on whether the
facility-based measurement would apply to the advancing care
information or improvement activity performance categories.
Response: Clinicians that participate in facility-based measurement
will have their scores in the quality and cost performance categories
determined on the basis of the performance of that facility. However,
we did not propose that those scored under facility-based measurement
would have different requirements for the advancing care information or
improvement activities performance categories. Clinicians or groups
would still be scored based on their own performance (not a facility's
performance) on those performance
[[Page 53755]]
categories unless other exclusions apply. In addition, section
1848(q)(2)(C)(ii) of the Act states that we may use measures used for a
payment system other than that used for physicians for the purposes of
the quality and cost performance categories, but does not address the
advancing care information and improvement activities performance
categories.
Comment: A few commenters expressed concern that offering facility-
based measurement could distract from other quality improvement
efforts, such as those that use registries or QCDRs. One commenter
expressed concern that offering facility-based measurement could
disadvantage those who are not offered the opportunity to participate
in facility-based measurement.
Response: One of our primary goals in structuring the Quality
Payment Program is to allow clinicians as much flexibility as possible.
We view the option of facility-based measurement as advancing that
goal. As noted in the 2018 Quality Payment Program proposed rule (82 FR
30124), we have heard concerns that clinicians who work in certain
facilities would be more accurately measured in the context of those
facilities and that separately identifying and reporting quality
measures could distract from the broader quality mission of the
facility while adding administrative burden on clinicians. We agree
with that statement. For those clinicians who may meet our definition
of facility-based and find that the measurement does not reflect their
practice, there are other opportunities to submit quality measures
data. For those for which facility-based measurement is not available,
we continue to work to offer as much flexibility in measurement as
possible. We have very clearly heard that facility-based measurement
should not be mandatory and have made it an option for those who are
eligible.
Comment: A few commenters recommended that rather than developing a
new system of assessing facility-based clinicians based on the
performance of a facility that those clinicians instead be made exempt
from the MIPS program.
Response: MIPS eligibility is determined based on the requirements
of section 1848(q)(2)(C) of the Act and discussed in section II.C.1 of
this final rule with comment period. We do not believe we have the
authority to exempt clinicians that are otherwise eligible for MIPS
from the Quality Payment Program based on their eligibility for
facility-based measurement.
Final Action: After consideration of the public comments, we are
finalizing our proposals on the general availability of facility-based
measurement with the modification that facility-based measurement will
not be available for clinicians until the 2019 MIPS performance period/
2021 MIPS payment year. We are finalizing regulation text at Sec.
414.1380(e) that provides that for payment in the 2021 MIPS payment
year and subsequent years, a MIPS eligible clinician or group may elect
to be scored in the quality and cost performance categories using
facility-based measures. We discuss the measures used to determine
facility-based measurement in section II.C.7.a.(4)(f) of this final
rule with comment period, but are finalizing our proposal at Sec.
414.1380(e)(6)(i) that the quality and cost measures are those adopted
under the value-based purchasing program of the facility for the year
specified, and our proposal at Sec. 414.1380(e)(6)(iii) that the
performance period for facility-based measurement is the performance
period for the measures adopted under the value-based purchasing
program of the facility for the year specified (82 FR 30125).
We appreciate the broad support for the implementation of facility-
based measurement and the general support for many of the policies that
are outlined below.
However, we are concerned that we might not have the operational
ability to inform these clinicians soon enough during the MIPS
performance period in 2018 for them to know that they could select
facility-based measurement as opposed to another method. We also
believe that the comments reflect some lack of understanding of how
elements of the policy might apply to clinicians that may qualify for
facility-based measurement. We plan to use this additional year for
outreach and, if technically feasible, informing clinicians if they
would have met the requirements for facility-based measurement based on
the finalized policy and what their scoring might have been based on an
attributed hospital. We believe this additional year of outreach will
best prepare clinicians to make decisions about participating in
facility-based measurement. As discussed in section II.C.7.a.(4)(c) of
this final rule with comment period, the use of facility-based
measurement will be available for individual clinicians and groups.
Therefore, we are finalizing the introductory text at Sec. 414.1380(e)
with a minor change to refer to ``a MIPS eligible clinician or group''
in place of ``MIPS eligible clinicians'' in the proposed text. We
discuss the election in section II.C.7.a.(4)(e) of this final rule with
comment period.
(c) Facility-Based Measurement Applicability
(i) General
The percentage of professional time a clinician spends working in a
hospital varies considerably. Some clinicians may provide services in
the hospital regularly, but also treat patients extensively in an
outpatient office or another environment. Other clinicians may practice
exclusively within a hospital. Recognizing the various levels of
presence of different clinicians within a hospital environment, we
proposed to limit the potential applicability of facility-based
measurement to those MIPS eligible clinicians with a significant
presence in the hospital.
In the CY 2017 Quality Payment Program final rule (81 FR 77238
through 77240), we adopted a definition of ``hospital-based MIPS
eligible clinician'' under Sec. 414.1305 for purposes of the advancing
care information performance category. Section 414.1305 defines a
hospital-based MIPS eligible clinician as a MIPS eligible clinician who
furnishes 75 percent or more of his or her covered professional
services in sites of service identified by the POS codes used in the
HIPAA standard transaction as an inpatient hospital, on-campus
outpatient hospital, or emergency room setting, based on Medicare Part
B claims for a period prior to the performance period as specified by
CMS. We considered whether we should simply use this definition to
determine eligibility for facility-based measurement under MIPS.
However, we expressed concern that this definition could include many
clinicians that have limited or no presence in the inpatient hospital
setting. We discuss the differences between the approach of defining
hospital-based clinicians for the purposes of the advancing care
information category and defining facility-based measurement in the CY
2018 Quality Payment Program proposed rule (82 FR 30125 through 30126).
The measures used in the Hospital VBP Program are focused on care
provided in the inpatient setting. We noted that we do not believe it
is appropriate for a MIPS eligible clinician to use a hospital's
Hospital VBP Program performance for MIPS scoring if the clinician did
not provide services in the inpatient setting or in the emergency
department, through which many inpatients arrive at the inpatient
setting.
We stated our belief that establishing a definition for purposes of
facility-based measurement that is different from the hospital-based
definition used
[[Page 53756]]
for the advancing care information category is necessary to implement
this option. We also noted that, since we were seeking comments on
other programs to consider including for purposes of facility-based
measurement in future years, we believed that establishing a separate
definition that could be expanded as needed for this purpose is
appropriate. We proposed at Sec. 414.1380(e)(2) that a MIPS eligible
clinician is eligible for facility-based measurement under MIPS if they
are determined facility-based as an individual (82 FR 30126) or as a
part of a group (82 FR 30126).
(ii) Facility-Based Measurement by Individual Clinicians
Based on those background considerations, we proposed at Sec.
414.1380(e)(2)(i) that a MIPS eligible clinician is considered
facility-based as an individual if the MIPS eligible clinician
furnishes 75 percent or more of their covered professional services (as
defined in section 1848(k)(3)(A) of the Act) in sites of service
identified by the POS codes used in the HIPAA standard transaction as
an inpatient hospital, as identified by POS code 21, or an emergency
room, as identified by POS code 23, based on claims for a period prior
to the performance period as specified by CMS (82 FR 30126). We
understand that the services of some clinicians who practice solely in
the hospital are billed using place of service codes such as code 22,
reflecting an on-campus outpatient hospital for patients who are in
observation status. Because there are limits on the length of time a
Medicare patient may be seen under observation status, we noted that we
believe that these clinicians would still furnish 75 percent or more of
their covered professional services using POS code 21, but sought
comment on whether a lower or higher threshold of inpatient services
would be appropriate. We did not propose to include POS code 22 in
determining whether a clinician is facility-based because many
clinicians who bill for services using this POS code may work on a
hospital campus but in a capacity that has little to do with the
inpatient care in the hospital. In contrast, we noted our belief that
those who provide services in the emergency room or the inpatient
hospital clearly contribute to patient care that is captured as part of
the Hospital VBP Program because many patients who are admitted are
admitted through the emergency room. We sought comments on whether POS
22 should be included in determining if a clinician is facility-based
and how we might distinguish those clinicians who contribute to
inpatient care from those who do not. The inclusion of any POS code in
our definition is pending technical feasibility to link a clinician to
a facility under the method described in section II.C.7.b.(4)(d) of the
CY 2018 Quality Payment Program proposed rule (82 FR 30126 through
30127).
We noted that this more limited definition would mean that a
clinician who is determined to be facility-based likely would also be
determined to be hospital-based for purposes of the advancing care
information performance category, because the proposed definition of
facility-based is narrower than the hospital-based definition
established for that purpose. We proposed to identify clinicians as
facility-based (and thus eligible to elect facility-based measurement)
through an evaluation of covered professional services between
September 1 of the calendar year 2 years preceding the performance
period through August 31 of the calendar year preceding the performance
period with a 30-day claims run out. For example, for the 2020 MIPS
payment year, where we have adopted a performance period of CY 2018 for
the quality and cost performance categories, we would use the data
available at the end of October 2017 to determine whether a MIPS
eligible clinician is considered facility-based under our proposed
definition. At that time, those data would include Medicare Part B
claims with dates of service between September 1, 2016 and August 31,
2017. If it is not operationally feasible to use claims from this exact
time period, we noted that we would use a 12-month period as close as
practicable to September 1 of the calendar year 2 years preceding the
performance period and August 31 of the calendar year preceding the
performance period. This determination would allow clinicians to be
made aware of their eligibility for facility-based measurement near the
beginning of the MIPS performance period.
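    To make the mechanics of this proposed determination concrete, the
Python sketch below applies the 75 percent threshold for POS codes 21
and 23 over the September 1 through August 31 determination window
described above. The claim record fields, helper names, and the
simplified counting of one service per claim line are illustrative
assumptions and are not part of the regulation; the 30-day claims run
out is not modeled.

    from datetime import date

    FACILITY_POS_CODES = {"21", "23"}   # inpatient hospital; emergency room
    THRESHOLD = 0.75                    # proposed 75 percent threshold

    def determination_window(performance_year):
        # September 1 of the calendar year 2 years preceding the performance
        # period through August 31 of the calendar year preceding it.
        return date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31)

    def is_facility_based_individual(claim_lines, performance_year):
        # claim_lines: hypothetical records with "service_date" and "pos" fields.
        start, end = determination_window(performance_year)
        in_window = [c for c in claim_lines if start <= c["service_date"] <= end]
        if not in_window:
            return False
        facility = sum(1 for c in in_window if c["pos"] in FACILITY_POS_CODES)
        return facility / len(in_window) >= THRESHOLD

    # Example for a CY 2018 performance period: the window runs from
    # September 1, 2016 through August 31, 2017.
    lines = [
        {"service_date": date(2016, 10, 3), "pos": "21"},
        {"service_date": date(2017, 1, 12), "pos": "21"},
        {"service_date": date(2017, 2, 14), "pos": "23"},
        {"service_date": date(2017, 5, 9), "pos": "11"},   # office visit
    ]
    print(is_facility_based_individual(lines, 2018))        # True (3 of 4 = 75%)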
We also recognized that in addition to the variation in the
percentage of time a clinician is present in the hospital, there is
also great variability in the types of services that clinicians
perform. We considered whether certain clinicians should be identified
as eligible for this facility-based measurement option based on
characteristics in addition to their percentage of covered professional
services furnished in the inpatient hospital or emergency room setting,
such as by requiring a certain specialty such as hospital medicine or
by limiting eligibility to those who served in patient-facing roles.
However, we noted our belief that all MIPS eligible clinicians with a
significant presence in the facility play a role in the overall
performance of a facility, and therefore, did not propose to further
limit this option based on characteristics other than the percentage of
covered professional services furnished in an inpatient hospital or
emergency room setting.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Several commenters supported our proposal that a MIPS
eligible clinician is considered facility-based as an individual if the
MIPS eligible clinician furnishes 75 percent or more of their covered
professional services in sites of service identified by the POS codes
used in the HIPAA standard transaction as an inpatient hospital, as
identified by POS code 21, or an emergency room, as identified by POS
code 23.
Response: We appreciate the commenters' support. We are finalizing
this policy as proposed, but as discussed below, we intend to continue
analyzing refinements to facility-based eligibility for potential
future rulemaking or to inform our interpretation of this final rule.
Comment: Many commenters did not support our proposal that a MIPS
eligible clinician is considered facility-based as an individual if the
MIPS eligible clinician furnishes 75 percent or more of their covered
professional services in sites of service identified by the POS codes
used in the HIPAA standard transaction as an inpatient hospital, as
identified by POS code 21, or an emergency room, as identified by POS
code 23. Many of these commenters recommended that POS code 22, used
for on-campus outpatient hospitals, be added to the POS codes used to
determine applicability of facility-based measurement. They noted that
this place of service code is used for observation services, which they
stated are often indistinguishable from inpatient services because they
are typically furnished in the same physical space and to similar
patients. They indicated that many clinicians who provide exclusively
hospital services may not meet this definition due to the preponderance
of observation services that they bill. Some commenters
recommended that the facility-based definition be aligned with the
hospital-based definition used in the advancing care information
performance category, and therefore include POS code 19, which was
proposed to be added for determination of hospital-based eligibility in
addition to POS code 22 and the codes we did
[[Page 53757]]
propose for inclusion. These commenters noted that aligning definitions
would simplify understanding of the program. A few commenters suggested
the addition of POS codes 51 (inpatient psychiatric facility) and POS
codes 52 (psychiatric facility partial hospital).
Response: We remain concerned that including codes for outpatient
hospital services could extend eligibility for facility-based
measurement to clinicians who contribute little or nothing to a
hospital's performance in the Hospital VBP Program. We recognize that observation
services are similar to services provided in the inpatient hospital
setting in many cases. However, there are many services, such as
outpatient clinic visits, which include patients who may never visit
the hospital in question as inpatients. We are finalizing our proposal
for eligibility; however, we intend to further study the impact of
including outpatient services on eligibility for facility-based
clinicians and to determine if there is another method to distinguish
observation services from other outpatient services. As noted above, we
are finalizing our proposal, but with a delay in the implementation of
facility-based measurement until the 2019 MIPS performance period/2021
MIPS payment year. This will provide additional time for analysis and
outreach to clinicians. We hope that this outreach will help to inform
clinicians about the applicability of facility-based measurement. We
will make future changes to the applicability of facility-based
measurement in the context of that outreach and additional analysis.
Any changes would be proposed in future rulemaking. We are specifically
seeking comments on ways to identify clinicians who have a significant
presence within the inpatient setting and address the concerns that we
have noted above.
Comment: Several commenters recommended that we adopt a threshold
lower than 75 percent of services with particular place of service
codes because some clinicians who work primarily or exclusively in a
hospital might not meet our proposed definition. Some of these
commenters recommended that clinicians be eligible if at least a
majority of their services were provided with an eligible place of
service code.
Response: Because the 75 percent threshold is used in our
determination of hospital-based eligible clinicians in the advancing
care information performance category, we believe that a similar
threshold would be appropriate to use in the determination of
applicability of facility-based measurement. On an individual basis,
all clinicians who qualify for facility-based measurement would also
qualify as hospital-based under the advancing care information
category. If we were to adopt a lower threshold for facility-based
measurement, this would no longer be the case. We believe that it is to
the benefit of clinicians to know that even though the two definitions
are not perfectly aligned, they have similar parameters and that
qualifying for one (facility-based) would generally mean qualifying for
the other (hospital-based). However, a clinician may qualify to be
hospital-based but not qualify to be facility-based. If technically
feasible, we will use 2018 as an opportunity to offer information to
clinicians on their eligibility and applicability of facility-based
measurement. While we are finalizing our proposal, we will continue to
examine the 75 percent threshold to determine whether maintaining this
consistency is necessary, and we will propose changes in future
rulemaking if analysis suggests that the threshold presents a
significant barrier.
Comment: A few commenters suggested that our proposal to determine
facility-based measurement status not be limited to a review of place
of service codes. One commenter suggested that we review the specialty
of a clinician to determine if the clinician is a hospitalist. Another
commenter suggested that facility-based measurement should be limited
to clinicians for whom the Hospital VBP Program measures are related to
their clinical area.
Response: As we noted in the proposed rule, we considered whether
to further limit facility-based measurement on characteristics such as
specialty. However, we believe that there are clinicians other than
those who are identified with the hospitalist specialty code who
significantly contribute to the quality of care in the facility
setting. We do not typically use a specialty code to determine special
status in MIPS. In addition, the hospitalist specialty code was only
established in 2017 so many clinicians who practice hospital medicine
are not currently identified by that specialty code. We have not
identified another way to establish a strong connection between a
facility and a clinician at this time, but we will continue our
analysis and welcome comments on this issue.
Final Action: After consideration of the public comments, we are
finalizing our proposals codified at Sec. 414.1380(e)(2) for the
determination of eligibility for facility-based measurement as an
individual. We note that facility-based measurement will not be
available until the 2019 MIPS performance period/2021 MIPS payment year
so clinicians will not be eligible until that time. We understand that
there are concerns that some clinicians who practice primarily or
exclusively in hospitals will not be eligible for facility-based
measurement, particularly due to the complicating factor of observation
services. We will use the next year to further examine this issue and
determine if changes in eligibility should be proposed in future
rulemaking. We are also finalizing technical and grammatical changes to
the introductory text at paragraph (e)(2).
(iii) Facility-Based Measurement Group Participation
We proposed at Sec. 414.1380(e)(2) that a MIPS eligible clinician
is eligible for facility-based measurement under MIPS if they are
determined facility-based as part of a group (82 FR 30126). We proposed
at Sec. 414.1380(e)(2)(ii) that a facility-based group is a group in
which 75 percent or more of the MIPS eligible clinician NPIs billing
under the group's TIN are eligible for facility-based measurement as
individuals as defined in Sec. 414.1380(e)(2)(i) (82 FR 30126). We
also considered an alternative proposal in which a facility-based group
would be a group where the TIN overall furnishes 75 percent or more of
its covered professional services (as defined in section 1848(k)(3)(A)
of the Act) in sites of service identified by the POS codes used in the
HIPAA standard transaction as an inpatient hospital, as identified by
POS code 21, or the emergency room, as identified by POS code 23, based
on claims for a period prior to the performance period as specified by
CMS (82 FR 30126). Groups would be determined to be facility-based
through an evaluation of covered professional services between
September 1 of the calendar year 2 years preceding the performance
period through August 31 of the calendar year preceding the performance
period with a 30 day claims run out period (or if not operationally
feasible to use claims from this exact time period, a 12-month period
as close as practicable to September 1 of the calendar year 2 years
preceding the performance period and August 31 of the calendar year
preceding the performance period).
We requested comments on our proposal and alternative proposal.
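    As a rough illustration of the primary proposal (not the
alternative), the Python sketch below checks whether 75 percent or more
of the MIPS eligible clinician NPIs billing under a group's TIN are
facility-based as individuals. The mapping of NPIs to individual
determinations is a hypothetical input for illustration only.

    GROUP_THRESHOLD = 0.75   # proposed 75 percent-of-NPIs standard

    def is_facility_based_group(npi_is_facility_based):
        # npi_is_facility_based: hypothetical mapping of each MIPS eligible
        # clinician NPI billing under the group's TIN to that NPI's individual
        # facility-based determination (True or False).
        if not npi_is_facility_based:
            return False
        share = sum(npi_is_facility_based.values()) / len(npi_is_facility_based)
        return share >= GROUP_THRESHOLD

    # Example: 3 of the 4 NPIs billing under the TIN are facility-based
    # individuals (75 percent), so the group would qualify under the proposal.
    print(is_facility_based_group({
        "1111111111": True,
        "2222222222": True,
        "3333333333": True,
        "4444444444": False,
    }))   # True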
The following is a summary of the public comments received on the
``Facility-Based Measurement Group Participation'' proposals and our
responses:
Comment: Several commenters recommended that groups be eligible for
facility-based measurement if they meet
[[Page 53758]]
the requirements of either our proposal (75 percent or more of the MIPS
eligible clinician NPIs billing under the group's TIN are eligible for
facility-based measurement as individuals) or our alternative proposal
(TIN overall furnishes 75 percent or more of its covered professional
services in sites of service identified by the POS codes used to
determine individual eligibility). These commenters noted that this
would increase the number of groups eligible for this opportunity.
Response: We understand the interest in providing multiple methods
of eligibility but believe that establishing multiple methods increases
complexity. In this case, we do not believe that the interests of
flexibility outweigh those of simplicity, given that facility-based
measurement will be available only for groups that are primarily
composed of those who provide services in the hospital setting. We are
finalizing our proposal that a facility-based group is one in which 75
percent or more of the MIPS eligible clinician NPIs billing
under the group's TIN are eligible for facility-based measurement as
individuals. We are finalizing regulation text at Sec.
414.1380(e)(2)(ii) to codify this standard for determining that a
clinician group is a facility-based group and are making minor
revisions to the regulatory text to match the proposed policy by
explicitly referencing clinician NPIs billing under the group's TIN. In
2018, we will provide more information to clinicians and groups on
their eligibility for facility-based measurement and hope that sharing
this information will help to provide more clarity. We will revisit
this standard for identifying when a clinician group is a facility-
based group eligible for facility-based measurement in future
rulemaking if changes are needed.
Comment: A few commenters supported our alternative proposal in
which a facility-based group would be a group where the TIN overall
furnishes 75 percent or more of its covered professional services (as
defined in section 1848(k)(3)(A) of the Act) in sites of service
identified by the POS codes used to establish individual eligibility
for facility-based measurement. These commenters expressed concern that
CMS would be unable to properly identify all of the clinicians in the
group and therefore unable to properly make this determination. One of
the commenters suggested that it was easier to determine eligibility at
the TIN level.
Response: We agree that our proposed alternative approach of using
all the claims submitted by a group is one way to calculate eligibility
for facility-based measurement, but we also believe that our proposed
approach would appropriately identify groups that should be eligible
for facility-based measurement. We are able to identify through claims
data all the individual NPIs that bill under a group TIN. In addition,
we have several MIPS group status indicators that are determined by 75
percent or more of the MIPS eligible clinician NPIs billing
under the group's TIN meeting a certain designation. By finalizing our
policy as proposed, we are aligning with those other group policies.
For example, as discussed in section II.C.1.e. of this final rule with
comment period, a group is determined to be non-patient facing provided
that more than 75 percent of the NPIs billing under the group's TIN
meet the definition of a non-patient facing individual MIPS eligible
clinician during the non-patient facing determination period. As
discussed in section II.C.1.d. of this final rule with comment period,
we use a similar threshold to determine which groups should have a
rural or HPSA designation. However, as we perform outreach in 2018, we
hope that we can clarify and address any concerns related to our
ability to identify the clinicians that are associated with a
particular practice and would be considered for facility-based
measurement. If needed, we will revisit this policy through future
rulemaking.
Comment: One commenter recommended that groups of clinicians within
a TIN be eligible as a facility-based group rather than requiring the
entire group to be scored based on facility-based measurement.
Response: Because of the scoring approach that we are adopting for
facility-based measurement (discussed in section II.C.7.a.(4) of this
final rule with comment period), a group is scored for the quality and
cost performance categories either on the basis of facility-based
measurement or through another method. We are unable to establish a
group reporting mechanism that applies to only a portion of a TIN. This score will
be combined with scores on the improvement activity and advancing care
information performance categories. Please refer to section II.C.3. of
this final rule with comment period for additional information about
reporting for groups.
Comment: One commenter recommended that a group be eligible for
facility-based measurement if more than 50 percent of the MIPS eligible
clinicians met the requirements of facility-based measurement.
Response: We believe that the 75 percent threshold better
establishes that a group is primarily one that focuses on hospital
care. It aligns with our proposal to identify non-patient facing groups
and ensures that the majority of clinicians in the group are involved in
care that may be related to the measures used in facility-based measurement.
As we develop outreach in 2018, we aim to inform clinicians and groups
about what their facility-based measurement eligibility would have been
had we finalized these policies for application to the 2020 MIPS
payment year; we hope this will clarify the application of this rule.
Comment: One commenter recommended that there not be an option to
establish a facility-based group, because, as proposed, a group could
include many clinicians who do not practice in the facility setting.
Response: We believe that the establishment of an opportunity for a
group to be eligible for facility-based measurement is consistent with
our general approach to group measurement in MIPS. A large group may
include some clinicians who focus on the patients associated with
submitted quality measures and others who focus on a different
population. However, under group-based reporting in MIPS, all members
of the group receive the same score. Facility-based measurement will
only be available to those groups with a significant connection to the
hospital (as measured by the settings of services for which claims are
paid) and we believe only those groups that believe the hospital scores
reflect their performance will elect the option. We believe that
limiting the facility-based measurement to individuals would make the
option less tenable and less consistent with our overall approach to
MIPS, which is intended to provide flexibility to participate as a
group or as an individual to the greatest extent possible. We also
noted that facility-based measurement applies only to the quality and
cost performance categories; groups and individuals must separately
consider their participation in the advancing care information and
improvement activities performance categories. We believe that groups
will select the quality measures that they believe are most applicable
to reflecting the overall quality of the group.
Final Action: After consideration of the public comments, we are
finalizing our proposal for determining which groups are facility-based
groups in regulation text at Sec. 414.1380(e)(2)(ii). We note that
facility-based measurement will not be available until the 2019 MIPS
performance period/2021 MIPS payment year so a facility-based group
will not exist before that time. As
[[Page 53759]]
noted earlier, we are delaying the implementation of facility-based
measurement until the 2019 MIPS performance period to ensure clinician
understanding and operational readiness. We will propose any changes to
this definition in future rulemaking.
(d) Facility Attribution for Facility-Based Measurement
Many MIPS eligible clinicians provide services at more than one
hospital, so we need a method to identify which hospital's scores
should be associated with each MIPS eligible clinician that elects
facility-based measurement under this option. We considered whether a
clinician should be required to identify for us the hospital with which
the clinician is affiliated, but believe that such a requirement would
add unnecessary administrative burden in a process that we believe was
intended to reduce burden. We also considered whether we could combine
scores from multiple hospitals, but noted our belief that such a
combination would reduce the alignment between a single hospital and a
clinician or group and could be confusing for participants. We further
noted that we believed we must establish a reasonable threshold for a
MIPS eligible clinician's participation in clinical care at a given
facility to allow that MIPS eligible clinician to be scored using that
facility's measures. We noted that we do not believe it to be
appropriate to allow MIPS eligible clinicians to claim credit for
facilities' measures if the MIPS eligible clinician does not
participate meaningfully in the care provided at the facility.
Therefore, we proposed at Sec. 414.1380(e)(5) that MIPS eligible
clinicians who elect facility-based measurement would receive scores
derived from the value-based purchasing score (using the methodology
described in section II.B.7.b.4. of the CY 2018 Quality Payment Program
proposed rule (82 FR 30128 through 30129)) for the facility at which
they provided services for the most Medicare beneficiaries during the
period of September 1 of the calendar year 2 years preceding the
performance period through August 31 of the calendar year preceding the
performance period with a 30-day claims run out (82 FR 30127). This
period for identifying the facility whose performance will be
attributed to a facility-based clinician (or group) is the same as the
time period for services we will use to determine if a clinician (or
group) is eligible for facility-based measurement; this time period
also overlaps with parts of the performance period for the applicable
Hospital VBP Program measures. We proposed that for the first year, the
value-based purchasing score for the facility would be the FY 2019
Hospital VBP Program's Total Performance Score. In cases in which there
was an equal number of Medicare beneficiaries treated at more than one
facility, we proposed to use the value-based purchasing score from the
facility with the highest score (82 FR 30127).
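    As one way to express the proposed attribution rule in concrete
terms, the Python sketch below links the clinician (or group) to the
hospital where they furnished services to the most Medicare
beneficiaries during the determination window, breaking ties in favor
of the hospital with the higher FY 2019 Hospital VBP Program Total
Performance Score. The CCN identifiers and input structures are
hypothetical and for illustration only.

    def attribute_facility(beneficiary_counts, vbp_total_performance_score):
        # beneficiary_counts: hypothetical mapping of hospital CCN to the number
        #   of Medicare beneficiaries the clinician served there in the window.
        # vbp_total_performance_score: hypothetical mapping of hospital CCN to
        #   its FY 2019 Hospital VBP Program Total Performance Score.
        if not beneficiary_counts:
            return None
        # Ranking by (beneficiary count, Total Performance Score) sends ties to
        # the facility with the higher score, as proposed.
        return max(beneficiary_counts,
                   key=lambda ccn: (beneficiary_counts[ccn],
                                    vbp_total_performance_score.get(ccn, 0.0)))

    # Example: equal beneficiary counts at two hospitals, so the hospital with
    # the higher Total Performance Score is attributed.
    print(attribute_facility({"010001": 40, "020001": 40},
                             {"010001": 37.5, "020001": 52.0}))   # "020001"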
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Several commenters supported our proposal that MIPS
eligible clinicians that elect facility-based measurement would receive
scores derived from the value-based purchasing score for the facility
at which they provided services for the most Medicare beneficiaries
during the period of September 1 of the calendar year 2 years preceding
the performance period through August 31 of the calendar year preceding
the performance period.
Response: We thank the commenters for their support.
Comment: A few commenters opposed the proposed time period used to
identify the facility whose performance would determine the MIPS
quality and cost scores, noting that the clinician may have moved on to
another facility by the time of the MIPS performance period. One
commenter suggested that, because of this issue, clinicians be given
the opportunity to identify the hospital upon which their scores should
be based.
Response: We recognize that clinicians may move from one facility
to another over time and a specific clinician may see a majority of his
or her patients at one facility during 1 year but at a different
facility in later years. However, our proposal to use the September
through August period beginning 2 calendar years before the MIPS
performance period begins for attribution of facility performance
matched our proposed timeframe for claims used to determine whether a
clinician (or group) is facility-based. This time period overlaps with
parts of the performance period for the applicable Hospital VBP Program
measures. If these timelines did not overlap, it would increase the
likelihood that we determine a clinician met the requirements for
facility- based measurement but did not have a hospital from which we
could attribute performance. As noted in the proposed rule, we
considered whether a clinician should be required to identify the
hospital on which their scores should be based, but concluded that was
more likely to be a burden on the clinician. We were also (and continue
to be) concerned that permitting the clinician or group to choose could
result in a clinician or group selecting a hospital at which they did
not provide care, either inadvertently due to selection error or
fraudulently.
Comment: A few commenters noted that the proposed method of
attributing clinicians to a facility did not specify how attribution
would be determined for a facility-based group. A few commenters
suggested that, in this situation, CMS use the score of the attributed
hospital with the highest Hospital VBP Program score among the
hospitals attributed to individual clinicians in the group.
Response: Although we did not specifically address the issue of how
facility-based groups would be assigned to a facility (for purposes of
attributing facility performance to the group) in the preamble of the
CY 2018 Quality Payment Program proposed rule, our proposed regulation
at Sec. 414.1380(e)(5) applied the same standard to individuals and
groups. Although we believe that this provided sufficient notice of the
policy, we will plan to address this issue as part of the next Quality
Payment Program rulemaking cycle. We encourage all interested parties
to review that proposal when it is issued and submit comments.
Comment: A few commenters expressed concern that our proposed
approach for facility attribution would not reflect the quality of care
for clinicians that practice at multiple facilities. These commenters
suggested that CMS consider future changes to the methodology to
accommodate multiple facilities, such as using a weighted average of
the facility scores.
Response: We have designed the facility-based measurement option to
align incentives between clinicians and facilities. Therefore, the
intention is for a clinician or a group that spends significant time in
a facility to be supporting the efforts to improve the score of that
particular facility, particularly because we believe a desire to
improve scores drives high quality care for patients. While we
recognize that clinicians do practice in multiple facilities, we are
concerned that developing a composite score based on the performance of
multiple facilities would reduce that alignment by diffusing focus from
a single facility and complicating scoring.
Final Action: After consideration of the public comments, we are
finalizing our proposal for clinicians in facility-based measurement to
receive scores derived from the value-based purchasing score for the
facility at which they provided services for the
[[Page 53760]]
most Medicare beneficiaries during the period of September 1 of the
calendar year 2 years preceding the performance period through August
31 of the calendar year preceding the performance period with a 30-day
claims run out. We are not finalizing regulation text associated with
this specific policy (that is, identifying the period of the claims
data used) as we consider implementation of this policy. We note that
facility-based measurement will not be available until the 2019 MIPS
performance period/2021 MIPS payment year so clinicians will not be
assigned to a facility for attribution of the facility's performance
before that time. We will address the issue of attribution for
facility-based groups in future rulemaking.
(e) Election of Facility-Based Measurement
We proposed at Sec. 414.1380(e)(3) that individual MIPS eligible
clinicians or groups who wish to have their quality and cost
performance category scores determined based on a facility's
performance must elect to do so through an attestation. We proposed
that those clinicians or groups who are eligible for and wish to elect
facility-based measurement would be required to submit their election
during the data submission period as determined at Sec. 414.1325(f)
through the attestation submission mechanism established for the
improvement activities and advancing care information performance
categories. (82 FR 30127). We further proposed that, if technically
feasible, we would let the MIPS eligible clinician know that they were
eligible for facility-based measurement prior to the submission period,
so that MIPS eligible clinicians would be informed if this option is
available to them.
We also considered an alternative approach of not requiring an
election process but instead automatically applying facility-based
measurement to MIPS eligible clinicians and groups who are eligible for
facility-based measurement, if technically feasible. Under this
approach, we would calculate a MIPS eligible clinician's facility-based
measurement score based on the hospital's (as identified using the
process described in section II.C.7.a.(4)(d) of the CY 2018 Quality
Payment Program proposed rule (82 FR 30126 through 30127)) performance
using the methodology described in section II.C.7.a.(4)(f) of the CY
2018 Quality Payment Program proposed rule (82 FR 30128 through 30132),
and automatically use that facility-based measurement score for the
quality and cost performance category scores if the facility-based
measurement score is higher than the quality and cost performance
category scores as determined based on data submitted by the MIPS
eligible clinician or group through any available reporting mechanism.
This facility-based measurement score would be calculated even if an
individual MIPS eligible clinician or group did not submit any data for
the quality performance category. We explained how this alternative
approach might work in the CY 2018 Quality Payment Program proposed
rule in connection with choosing the time period of the hospital
performance in the Hospital VBP Program (82 FR 30127). We noted our
concern that a method that does not require active selection may result
in MIPS eligible clinicians being scored on measures at a facility and
being unaware that such scoring is taking place. We also expressed
concern that such a method could provide an advantage to those
facility-based clinicians who do not submit quality measures in
comparison to those who work in other environments. We also noted that
this option may not be technically feasible for us to implement for the
2018 MIPS performance period.
We invited comments on this proposal and alternate proposal.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters supported our proposal to require
clinicians or groups to opt in through a voluntary election process in
order to participate in facility-based measurement. These commenters
noted that clinicians should be given the opportunity to determine if
the quality of a hospital reflected the quality of the clinician or if
they would be better represented using a different submission
mechanism.
Response: We appreciate the support of the commenters. As noted
below, we are not finalizing the attestation mechanism aspect of our
proposal or our alternative, but we will revisit through future
rulemaking how to permit an individual clinician or group to elect
facility-based measurement.
Comment: A few commenters supported our alternative approach of not
requiring an election process but instead automatically applying
facility-based measurement to MIPS eligible clinicians and groups who
are eligible for facility-based measurement unless they opt out. These
commenters noted that this would reduce administrative burden and that
some clinicians who would be eligible might otherwise fail to opt in.
Response: We appreciate the interest in minimizing administrative
burden for clinicians. We will aim to minimize administrative burden on
clinicians and groups for whichever method would be used for
determination of facility-based measurement.
Comment: A few commenters expressed concern about the details of
the opt-in process. One commenter expressed concern that an
administrator would be unable to opt in on behalf of clinicians in a
group. One commenter recommended that third party intermediaries be
able to receive information on facility-based measurement through an
API framework.
Response: As described further above, we are not implementing
facility-based measurement until the 2019 MIPS performance period. We
will use the additional year to better explain to clinicians how the
facility-based measurement will work under the regulatory provisions we
are finalizing at Sec. 414.1380(e), including the determination of
when a clinician or group is facility-based and thus able to elect to
use facility measurement, the time period for making that
determination, and the use of the facility's Hospital VBP Program
performance to score the clinician or group.
Final Action: After consideration of the public comments, we are
not finalizing either our proposal or our alternative option for how an
individual clinician or group will elect to use and be identified as
using facility-based measurement for the MIPS program. Because we are
not offering facility-based measurement until the 2019 MIPS performance
period, we do not need to finalize either of these for the 2018 MIPS
performance period. We will use the additional time to examine the
attestation process we proposed and the alternative opt-out process. We
intend to work with stakeholders to identify a procedure that best
balances administrative burden and clinician choice for proposal in
next year's proposed rule. We are not finalizing our proposed
regulatory text at Sec. 414.1380(e)(3), but will reserve that section
for our future proposals.
In light of our interest in reducing burden, we do prefer an option
that would not require a clinician or practice to notify CMS through
attestation or another method. We therefore seek comment on whether a
process by which a clinician or group would be automatically assigned a
score under facility-based measurement but be notified and given the
opportunity to
[[Page 53761]]
opt out of facility-based measurement would be appropriate.
(f) Facility-Based Measures
For FY 2019, the Hospital VBP Program has adopted 12 quality and
efficiency measures. The Hospital VBP Program currently includes 4
domains: Person and community engagement, clinical care, safety, and
efficiency and cost reduction. These domains align with many MIPS high
priority measures (outcome, appropriate use, patient safety,
efficiency, patient experience, and care coordination measures) in the
quality performance category and the efficiency and cost reduction
domain closely aligns with our cost performance category. We believe
this set of measures covering 4 domains and composed primarily of
measures that would be considered high priority under the MIPS quality
performance category captures a broad picture of hospital-based care.
Additionally, the Hospital VBP Program has adopted several measures of
clinical outcomes in the form of 30-day mortality measures, and
clinical outcomes are a high-priority topic for MIPS. The Hospital VBP
Program includes several measures in a safety domain, which meets our
definition of patient safety measures as high-priority. Therefore, we
proposed that facility-based individual MIPS eligible clinicians or
groups that are attributed to a hospital would be scored on all the
measures on which the hospital is scored for the Hospital VBP Program
via the Hospital VBP Program's Total Performance Score scoring
methodology (82 FR 30127).
The Hospital VBP Program's FY 2019 measures, and their associated
performance periods, were reproduced in Table 33 in the proposed rule
(82 FR 30128). Here, we are including, in Table 26, a list of the
finalized FY 2019 Hospital VBP Program Measures.
Table 26--FY 2019 Hospital VBP Program Measures
----------------------------------------------------------------------------------------------------------------
Short name Domain/measure name NQF No. Performance period
----------------------------------------------------------------------------------------------------------------
Person and Community Engagement Domain
----------------------------------------------------------------------------------------------------------------
HCAHPS........................... Hospital Consumer 0166 CY 2017.
Assessment of (0228)
Healthcare Providers
and Systems (HCAHPS)
(including Care
Transition Measure).
----------------------------------------------------------------------------------------------------------------
Clinical Care Domain
----------------------------------------------------------------------------------------------------------------
MORT-30-AMI...................... Hospital 30-Day, All- 0230 July 1, 2014-June 30, 2017.
Cause, Risk-
Standardized Mortality
Rate (RSMR) Following
Acute Myocardial
Infarction (AMI)
Hospitalization.
MORT-30-HF....................... Hospital 30-Day, All- 0229 July 1, 2014-June 30, 2017.
Cause, Risk-
Standardized Mortality
Rate (RSMR) Following
Heart Failure (HF)
Hospitalization.
MORT-30-PN....................... Hospital 30-Day, All- 0468 July 1, 2014-June 30, 2017.
Cause, Risk-
Standardized Mortality
Rate (RSMR) Following
Pneumonia
Hospitalization.
THA/TKA.......................... Hospital-Level Risk- 1550 January 1, 2015-June 30, 2017.
Standardized
Complication Rate
(RSCR) Following
Elective Primary Total
Hip Arthroplasty (THA)
and/or Total Knee
Arthroplasty (TKA).
----------------------------------------------------------------------------------------------------------------
Safety Domain
----------------------------------------------------------------------------------------------------------------
CAUTI............................ National Healthcare 0138 CY 2017.
Safety Network (NHSN)
Catheter-Associated
Urinary Tract Infection
(CAUTI) Outcome Measure.
CLABSI........................... National Healthcare 0139 CY 2017.
Safety Network (NHSN)
Central Line-Associated
Bloodstream Infection
(CLABSI) Outcome
Measure.
Colon and Abdominal Hysterectomy American College of 0753 CY 2017.
SSI. Surgeons--Centers for
Disease Control and
Prevention (ACS-CDC)
Harmonized Procedure
Specific Surgical Site
Infection (SSI) Outcome
Measure.
MRSA Bacteremia.................. National Healthcare 1716 CY 2017.
Safety Network (NHSN)
Facility-wide Inpatient
Hospital-onset
Methicillin-resistant
Staphylococcus aureus
(MRSA) Bacteremia
Outcome Measure.
CDI.............................. National Healthcare 1717 CY 2017.
Safety Network (NHSN)
Facility-wide Inpatient
Hospital-onset
Clostridium difficile
Infection (CDI) Outcome
Measure.
PC-01............................ Elective Delivery....... 0469 CY 2017.
----------------------------------------------------------------------------------------------------------------
Efficiency and Cost Reduction Domain
----------------------------------------------------------------------------------------------------------------
MSPB............................. Payment-Standardized 2158 CY 2017.
Medicare Spending Per
Beneficiary (MSPB).
----------------------------------------------------------------------------------------------------------------
We noted that the Patient Safety Composite Measure (PSI-90) was
proposed for removal beginning with the FY 2019 measure set in the FY
2018 Hospital Inpatient Prospective Payment Systems for Acute Care
Hospitals and the Long Term Care Hospital Prospective Payment System
(IPPS/LTCH PPS) proposed rule (82 FR 19970) due to issues with
calculating the measure score and that we would remove the measure from
the list of those adopted for facility-based measurement in the MIPS
program if that proposal was finalized. The proposal to remove the PSI-
90 measure was finalized in the FY 2018 IPPS/LTCH PPS final rule (82 FR
38244).
We proposed at Sec. 414.1380(e)(4) that there are no data
submission requirements for the facility-based measures used to assess
performance in the quality and cost performance categories, other than
electing the option through attestation as proposed in the CY 2018
Quality Payment Program proposed rule (82 FR 30128).
The following is a summary of the public comments received on the
``Facility-Based Measures'' proposals and our responses:
Comment: Several commenters supported our proposal to adopt all
measures and performances from the FY 2019 Hospital VBP Program for the
purposes of facility-based measurement
[[Page 53762]]
in the MIPS program for the 2018 MIPS performance period/2020 MIPS
payment year because those measures represented the total performance
of the hospital and were well known by clinicians.
Response: We thank the commenters for their support. Because we are
delaying the implementation of facility-based measurement until the 2019
MIPS performance period, these measures will not be available for the
2018 MIPS performance period. We intend to propose in next year's
rulemaking the facility measures that will be used for purposes of the
2019 MIPS performance period.
Comment: Several commenters recommended that clinicians be able to
select measures from the Hospital VBP Program and the Hospital
Inpatient Quality Reporting (IQR) Program in order to better identify
those that they noted were relevant to their practice. These commenters
indicated that using all measures from the Hospital VBP Program was not
necessarily representative of the individual clinician's quality.
Response: We have a policy goal to align incentives between
clinicians and facilities through facility-based measurement. We
believe that any effort to measure clinicians on a subset of measures
rather than the entire measure set reduces that alignment. In addition,
we believe that a measure selection process would introduce unnecessary
administrative burden. If clinicians do not believe that the measures
that are included for that facility measurement program are
appropriate, there are opportunities to participate in MIPS that offer
more flexibility in measure selection other than the use of facility-
based measurement.
Comment: A few commenters recommended that instead of using
measures that are part of the Hospital VBP Program or other pay-for-
reporting or pay-for-performance programs, we use measures from
registries or other sources. These measures might reflect the
performance of an entire facility but would be more closely tied to the
activities of a particular clinician.
Response: Section 1848(q)(2)(C)(ii) of the Act provides that the
Secretary may use measures used for payment systems other than for
physicians, such as measures for inpatient hospitals, for purposes of
the quality and cost performance categories. Based on this statutory
authority and because we want to align incentives between clinicians
and hospitals, we have elected to use measures that are developed and
implemented into other programs, as opposed to other new measures that
reflect a facility's performance. We note that there may be
opportunities for clinicians to participate in MIPS using qualified
registries or QCDRs that measure quality for services that may be
provided in a facility setting, without being measured in facility-
based measurement.
Comment: Several commenters expressed concern that the performance
periods for the measures that we proposed for inclusion for facility-
based measurement did not align with the performance period used for
other measures and requested that the performance periods be aligned.
Response: We recognize that the performance periods adopted for the
measures under the Hospital VBP Program differ from the performance
period for MIPS measures. As we have discussed with respect to the
Hospital VBP Program (such as in the FY 2013 IPPS/LTCH PPS final rule,
77 FR 53594), we take several considerations into account when adopting
performance periods for the Hospital VBP Program, including previously-
adopted performance periods under the Hospital VBP Program, the
possible duration of the performance period, and the reliability of the
data that we collect. We also consider the statutory requirement that
hospitals be notified of their Total Performance Scores and payment
adjustments no later than 60 days prior to the fiscal year involved, as
well as the time necessary for quality measures submission and Total
Performance Score computations.
When developing our facility-based measurement policy under MIPS,
we also took into account our belief that aligning incentives and
informing clinicians about their opportunity to participate in MIPS
outweighs the interest in aligning the performance period between the
Hospital VBP Program and MIPS. We believe that we must encourage
participation in MIPS, and we view the facility-based measurement
option as one policy that enables us to encourage that participation.
We will consider ways to align performance periods between
the Hospital VBP Program and the Quality Payment Program in the future.
Comment: A few commenters opposed the inclusion of the PSI-90
measure as a measure to be used for facility-based measurement. Others
expressed concern about the inclusion of measures that are part of
future years of the Hospital VBP Program, such as condition-specific
episode-based payment measures.
Response: In the FY 2018 IPPS/LTCH PPS final rule (82 FR 38244), we
finalized our proposal to remove the PSI-90 measure from the FY 2019
Hospital VBP Program measure set. We noted in the proposed rule that if
this measure was to be removed from that measure set, we would also
remove it from the measures set used for facility-based measurement. We
will consider issues of measures included in future years of other
programs in future rulemaking for the Quality Payment Program.
Comment: Several commenters opposed the inclusion of the MSPB
measure from the FY 2019 Hospital VBP Program as a measure for
facility-based measurement. These commenters noted that we had also
proposed to weight the cost performance category at zero percent, so
clinicians in facility-based measurement would be disadvantaged by
including this similar measure.
Response: As noted earlier in this section, we will not offer the
opportunity to participate in facility-based measurement for the 2020
MIPS payment year. When facility-based measurement is offered beginning
with the 2021 MIPS payment year, the cost performance category will be
equally weighted to the quality performance category. The MSPB measure
is part of the overall Hospital VBP Program score and reflects an
important measure of the overall value of care in that environment. Our
scoring methodology is intended to translate the overall score of value
in the Hospital VBP Program to a measure of value in the MIPS quality
and cost performance categories. Section II.C.7.a.(4)(g) of this final
rule with comment period discusses the scoring for facility-based
measurement.
Final Action: After consideration of the public comments, we are
not finalizing our proposal that the facility-based measures available
for the 2018 MIPS performance period are the measures adopted for the
FY 2019 Hospital VBP Program. We are also not finalizing our proposal
that for the 2020 MIPS payment year facility-based individual MIPS
eligible clinicians or groups that are attributed to a hospital would
be scored on all the measures on which the hospital is scored for the
Hospital VBP Program via the Hospital VBP Program's Total Performance
Score methodology. (As discussed in section II.C.7.a.(4)(g) of this
final rule with comment period, we are finalizing a facility-based
measurement scoring standard, but not the specific instance of using FY
2019 Hospital VBP Program Total Performance Score methodology.)
We believe that the policy approach of using all measures from a
value-based purchasing program is appropriate. However, we are not
adopting these
[[Page 53763]]
proposals because we are not implementing facility-based measurement
for the 2018 MIPS performance period/2020 MIPS payment year and as such
cannot finalize any measures or scoring under this program for that
performance period and payment year for the purpose of facility-based
measurement in MIPS. We intend to propose measures that would be
available for facility-based measurement for the 2019 MIPS performance
period/2021 MIPS payment year in future rulemaking. As noted in section
II.C.7.a.(4)(a) of this final rule with comment period, we are adopting
at Sec. 414.1380(e)(6)(i) that quality and cost measures for which
facility-based measurement will be available are those adopted under
the value-based purchasing program of the facility for the year
specified and at Sec. 414.1380(e)(6)(iii) that the performance period
for facility-based measurement is the performance period for the
measures adopted under the value-based purchasing program of the
facility program for the year specified. These provisions refer to the
general parameters of our method of facility-based measurement.
Specific programs and years would be addressed in future rulemaking.
We are finalizing our proposal at Sec. 414.1380(e)(4) with
modification to state that there are no data submission requirements
for clinicians for the facility-based measures used to assess
performance in the quality and cost performance categories. Because we
have not finalized a method of electing facility-based measurement in
Sec. 414.1380(e)(3), we are deleting the phrase ``other than electing
the option through attestation as described in paragraph (e)(3) of this
section''. In addition, we are revising the text to clarify that the
lack of data submission requirements is for individual clinicians and
groups of clinicians, rather than a statement about the submission by
facilities for the facility performance program.
(g) Scoring Facility-Based Measurement
(i) Hospital VBP Program Scoring
We believe that the Hospital VBP Program represents the most
appropriate value-based purchasing program with which to begin
implementation of the facility-based measurement option under MIPS. We
offered a summary of the Hospital VBP Program scoring and compared it
to MIPS scoring in the CY 2018 Quality Payment Program proposed rule
(82 FR 30128 through 30129).
(ii) Applying Hospital VBP Program Scoring to the MIPS Quality and Cost
Performance Categories
We summarized in the proposed rule (82 FR 30129) what we considered
prior to proposing at Sec. 414.1380(e) that facility-based scoring be
available for cost and quality performance categories. We considered
several methods to incorporate facility-based measures into scoring for
the 2020 MIPS payment year, including selecting hospitals' measure
scores, domain scores, and the Hospital VBP Program Total Performance
Scores to form the basis for the cost and quality performance category
scores for individual MIPS eligible clinicians and groups that are
eligible to participate in facility-based measurement. We proposed the
option that we believed provided the fairest comparison between
performance in the 2 programs and would best allow us to expand the
opportunity to other programs in the future.
Unlike MIPS, the Hospital VBP Program does not have performance
categories. There are instead four domains of measures. We considered
whether we should try to identify certain domains or measures that were
more closely aligned with those identified in the quality performance
category or the cost performance category. We also considered whether
we should limit the application of facility-based measurement to the
quality performance category and calculate the cost performance
category score as we do for other clinicians. However, we believe that
value-based purchasing programs are generally constructed to assess an
overall picture of the care provided by the facility, taking into
account both the costs and the quality of care provided. Given our
focus on alignment between quality and cost, we also do not believe it
is appropriate to measure quality on one unit (a hospital) and cost on
another (such as an individual clinician or TIN). Therefore, we
proposed at Sec. 414.1380(e) that facility-based scoring is available
for the quality and cost performance categories and that the facility-
based measurement scoring standard is the MIPS scoring methodology
applicable for those who meet facility-based eligibility requirements
and who elect facility-based measurement.
The following is a summary of the public comments received on
``Applying Hospital VBP Program Scoring to the MIPS Quality and Cost
Performance Categories'' proposals and our responses:
Comment: Several commenters supported our proposed methodology of
applying Hospital VBP Program scoring to the MIPS quality and cost
performance categories.
Response: We thank the commenters for their support.
Final Action: After consideration of the public comments, we are
finalizing our proposed methodology applying Hospital VBP Program
scoring to MIPS quality and cost performance categories with
modifications. As noted, we are delaying the implementation of
facility-based measurement by 1 year in order to increase clinician
understanding and operational readiness to offer the program. As such,
we are finalizing the introductory regulation text at Sec.
414.1380(e)(1) (that the facility-based measurement scoring standard is
the MIPS scoring methodology applicable for MIPS eligible clinicians
identified as meeting the requirements in paragraph (e)(2) and (3) of
this section) but are not finalizing the text proposed for paragraphs
(e)(1)(A) and (B) that would specifically identify use of the FY 2019
Hospital VBP Program for this purpose. We will address this issue in
future rulemaking to identify the specifics of the Hospital VBP Program
performance and scoring to be used for facility-based measurement in
MIPS.
(iii) Benchmarking Facility-Based Measures
Measures in the MIPS quality performance category are benchmarked
to historical performance on the basis of performance during the 12-
month calendar year that is 2 years prior to the performance period for
the MIPS payment year. If a historical benchmark cannot be established,
a benchmark is calculated during the performance period. In the cost
performance category, benchmarks are established during the performance
period because changes in payment policies year to year can make it
challenging to compare performance on cost measures from year to year.
Although we proposed a different performance period for MIPS eligible
clinicians in facility-based measurement, the baseline period used for
creating MIPS benchmarks is generally consistent with this approach. We
noted that the Hospital VBP Program uses measures for the same fiscal
year even if those measures do not have the same performance period
length, but the baseline period closes well before the performance
period. The MSPB is benchmarked in a manner that is similar to measures
in the MIPS cost performance category. The MSPB only uses a historical
baseline period for improvement scoring and bases its achievement
threshold and benchmark solely on the performance period (81 FR
[[Page 53764]]
57002). We proposed at Sec. 414.1380(e)(6)(ii) that the benchmarks for
facility-based measurement are those that are adopted under the value-
based purchasing program of the facility for the year specified (82 FR
30130).
Final Action: We did not receive any comments specifically on the
``Benchmarking Facility-Based Measures'' proposals, and we are
finalizing the policy as proposed in Sec. 414.1380(e)(6)(ii). While we
are not making facility-based measurement available until the 2019 MIPS
performance period/2021 MIPS payment year (and are therefore not
finalizing use of the FY 2019 Hospital VBP Program measurement), we are
finalizing that benchmarks are those adopted under the value-based
purchasing program of the facility program for the year specified. We
will identify the particular value-based purchasing program in future
rulemaking but would routinely use the benchmarks associated with that
program.
(iv) Assigning MIPS Performance Category Scores Based on Hospital VBP
Performance
Performance measurement in the Hospital VBP Program and MIPS is
quite different in part due to the design and the maturity of the
programs. The Hospital VBP Program only assigns achievement points to a
hospital for its performance on a measure if the hospital's performance
during the performance period meets or exceeds the median of hospital
performance on that measure during the applicable baseline period (or
in the case of the MSPB measure, if the hospital's performance during
the performance period meets or exceeds the median of hospital
performance during that period), whereas MIPS assigns achievement
points to all measures that meet the required data completeness and
case minimums. In addition, the Hospital VBP Program has removed many
process measures and topped out measures since its first program year
(FY 2013), while both process and topped out measures are available in
MIPS. With respect to the FY 2017 program year, for example, the median
Total Performance Score for a hospital in the Hospital VBP Program was
33.88 out of 100 possible points. If we were to simply assign the
Hospital VBP Program Total Performance Score for a hospital to a
clinician, the performance of those MIPS eligible clinicians electing
facility-based measurement would likely be lower than most who
participated in the MIPS program, particularly in the quality
performance category.
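For illustration only, the following sketch (written in Python, with
placeholder names and simplified inputs that are not part of either
program's actual scoring formulas) contrasts how the two programs gate
achievement points as described above: the Hospital VBP Program awards
achievement points on a measure only when performance-period performance
meets or exceeds the baseline-period median, while MIPS assigns
achievement points to any measure meeting data completeness and the case
minimum.

    # Conceptual sketch only; point scales and helper names are illustrative
    # placeholders, not the actual Hospital VBP Program or MIPS formulas.

    def vbp_awards_achievement_points(performance, baseline_median):
        """Hospital VBP style: a measure earns achievement points only if the
        performance-period rate meets or exceeds the baseline-period median
        (the achievement threshold)."""
        return performance >= baseline_median

    def mips_awards_achievement_points(meets_data_completeness, meets_case_minimum):
        """MIPS style: any measure meeting data completeness and the case
        minimum is assigned achievement points, regardless of where it falls
        relative to a historical median."""
        return meets_data_completeness and meets_case_minimum

    # A hospital measure just below the baseline median earns nothing under the
    # VBP-style gate, while a MIPS measure with complete data still earns points.
    print(vbp_awards_achievement_points(performance=0.48, baseline_median=0.50))  # False
    print(mips_awards_achievement_points(True, True))                             # True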
We noted our belief that we should recognize relative
performance within the facility programs in a way that reflects their
different designs. Therefore, we proposed at Sec. 414.1380(e)(6)(iv) that the
quality performance category score for facility-based measurement is
reached by determining the percentile performance of the facility
determined in the value-based purchasing program for the specified year
as described under Sec. 414.1380(e)(5) and awarding a score associated
with that same percentile performance in the MIPS quality performance
category score for those clinicians who are not scored using facility-
based measurement (82 FR 30130). We also proposed at Sec.
414.1380(e)(6)(v) that the cost performance category score for
facility-based measurement is established by determining the percentile
performance of the facility determined in the value-based purchasing
program for the specified year as described in Sec. 414.1380(e)(5) and
awarding the number of points associated with that same percentile
performance in the MIPS cost performance category score for those
clinicians who are not scored using facility-based measurement (82 FR
30130). (In the context of our proposal, this year would have been the
FY 2019 year for the Hospital VBP program, as we proposed in section
II.C.7.a.(4)(e) to use that as the attributed performance for MIPS
eligible clinicians and groups that elected facility-based
measurement.) For example, if the median Hospital VBP Program Total
Performance Score was 35 out of 100 possible points and the median
quality performance category percent score in MIPS was 75 percent and
the median cost performance category score was 50 percent, then a
clinician or group that is evaluated based on a hospital that received
a Hospital VBP Program Total Performance Score of 35 points would
receive a score of 75 percent for the quality performance category and
50 percent for the cost performance category. The percentile
distribution for both the Hospital VBP Program and MIPS would be based
on the distribution during the applicable performance periods for each
of the programs and not on a previous benchmark year.
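As a rough illustration of this percentile translation, the following
Python sketch uses small, made-up score distributions (the data, function
names, and interpolation choice are illustrative assumptions, not part of
the finalized methodology) to reproduce the example above: a hospital at
the median Total Performance Score of 35 maps to the median MIPS quality
score of 75 percent and the median cost score of 50 percent.

    import numpy as np

    def percentile_rank(value, distribution):
        """Midpoint percentile rank (0-100) of `value` within `distribution`."""
        d = np.sort(np.asarray(distribution, dtype=float))
        below = np.searchsorted(d, value, side="left")         # scores strictly below
        at_or_below = np.searchsorted(d, value, side="right")  # scores at or below
        return 100.0 * (below + at_or_below) / (2 * len(d))

    def score_at_percentile(rank, distribution):
        """Score at the given percentile of a distribution of MIPS category scores."""
        return float(np.percentile(np.asarray(distribution, dtype=float), rank))

    # Illustrative distributions drawn from the applicable performance periods.
    vbp_total_performance_scores = [20, 28, 33, 35, 42, 55, 70]  # hospitals
    mips_quality_scores = [40, 55, 65, 75, 80, 90, 95]           # non-facility-based clinicians (percent)
    mips_cost_scores = [20, 35, 45, 50, 60, 70, 85]              # non-facility-based clinicians (percent)

    hospital_tps = 35  # Total Performance Score of the attributed hospital
    rank = percentile_rank(hospital_tps, vbp_total_performance_scores)
    print(rank,
          score_at_percentile(rank, mips_quality_scores),
          score_at_percentile(rank, mips_cost_scores))
    # -> 50.0 75.0 50.0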
We noted in the proposed rule our belief that the proposal offers a
fairer comparison of the performance among participants in MIPS and the
Hospital VBP Program compared to other options we considered and
provides an objective means to normalize differences in measured
performance between the programs. In addition, we noted that this
method will make it simpler to apply the concept of facility-based
measurement to additional programs in the future.
The following is a summary of the public comments received on the
``Assigning MIPS Performance Category Scores based on Hospital VBP
Performance'' proposals and our responses:
Comment: Several commenters supported our proposed approach to
translate performance in the Hospital VBP Program into MIPS quality and
cost performance category scores using a percentile distribution.
Response: We thank the commenters for their support.
Final Action: After consideration of the public comments, we are
finalizing our proposal to determine the percentile performance of the
facility determined for the specified year and awarding a score
associated with that same percentile performance in the MIPS quality
performance category score and MIPS cost performance category score for
those clinicians who are not scored using facility-based measurement,
but are not finalizing use of the FY 2019 Hospital VBP Program
measurement and scoring. We are modifying the regulatory text at Sec.
414.1380(e) to clarify that this determination will be based on the
year the claims are drawn from in Sec. 414.1380(e)(2). We note that
facility-based measurement will not be available until the 2019 MIPS
performance period/2021 MIPS payment year so clinicians will not be
scored through facility-based measurement until that time.
(v) Scoring Improvement for Facility-Based Measurement
The Hospital VBP Program includes a methodology for recognizing
improvement on individual measures which is then incorporated into the
total performance score for each participating hospital. A hospital's
performance on a measure is compared to a national benchmark, as well
as its own performance from a corresponding baseline period.
We proposed to consider improvement in the quality and cost
performance categories. In the CY 2018 Quality Payment Program proposed
rule (82 FR 30113), we proposed to measure improvement in the quality
performance category based on improved achievement for the performance
category percent score and award improvement even if, under certain
circumstances, a clinician moves from one identifier to another from 1
year to the next. For those who may be measured under facility-based
[[Page 53765]]
measurement, improvement is already captured in the scoring method used
by the Hospital VBP Program, so we do not believe it is appropriate to
separately measure improvement using the proposed MIPS methodology for
clinicians and groups that elect facility-based measurement. Although
the improvement methodology is not identical in the Hospital VBP
Program compared to our MIPS proposal, improvement is reflected in the
underlying Hospital VBP Program measurement because a hospital that
demonstrated improvement in the individual measures would in turn
receive points under the Hospital VBP Program methodology if the
improvement score is higher than its achievement score. In addition,
improvement is already captured in the distribution of MIPS performance
scores that is used to translate Hospital VBP Program Total Performance
Score into a MIPS quality performance category score. Therefore, we did
not propose any additional improvement scoring for facility-based
measurement for either the quality or cost performance category.
Because we indicated our intention to allow clinicians the
flexibility to elect facility-based measurement on an annual basis, we
noted that some clinicians may be measured through facility-based
measurement in 1 year and through another MIPS method in the next. We
sought comment on how to assess improvement for those that switch from
facility-based scoring to another MIPS method in a later year. We
requested comment on whether it is appropriate to include measurement
of improvement in the MIPS quality performance category for MIPS
eligible clinicians and groups that use facility-based measures given
that the Hospital VBP Program already takes improvement into account in
its scoring methodology (82 FR 30130).
In the CY 2018 Quality Payment Program proposed rule, we discussed
our proposal to measure improvement in the cost performance category at
the measure level (82 FR 30121). We proposed that clinicians under
facility-based measurement would not be eligible for a cost improvement
score in the cost performance category (82 FR 30130). As in the quality
performance category, we believe that a clinician participating in
facility-based measurement in subsequent years would already have
improvement recognized as part of the Hospital VBP Program methodology
and therefore should not be given additional credit. In addition,
because we proposed to limit measurement of improvement to those MIPS
eligible clinicians that participate in MIPS using the same identifier
and are scored on the same cost measure(s) in 2 consecutive performance
periods, those MIPS eligible clinicians who elect facility-based
measurement would not be eligible for a cost improvement score in the
cost performance category under the proposed methodology because they
would not be scored on the same cost measure(s) for 2 consecutive
performance periods.
The following is a summary of the public comments received on the
``Scoring Improvement for Facility-Based Measurement'' proposals and
our responses:
Comment: One commenter supported our proposal to not assess
improvement for participants in facility-based measurement.
Response: We appreciate the support of the commenter.
Final Action: After consideration of the public comments, we are
finalizing our proposal that a clinician or group participating in
facility-based measurement would not be given the opportunity to earn
improvement points based on prior performance in the MIPS quality and
cost performance categories. We did not propose and are not finalizing
regulation text on this aspect of facility-based measurement because we
believe it is unnecessary.
(vi) Bonus Points for Facility-Based Measurement
MIPS eligible clinicians that report on quality measures are
eligible for bonus points for the reporting of additional outcome and
high priority measures beyond the one that is required. Two bonus
points are awarded for each additional outcome or patient experience
measure, and one bonus point is awarded for each additional other high
priority measure. These bonus points are intended to encourage the use
of measures that are more impactful on patients and better reflect the
overall goals of the MIPS program. Many of the measures in the Hospital
VBP Program meet the criteria that we have adopted for high-priority
measures. We support measurement that shifts clinicians' focus away from
clinical process measures; however, the proposed scoring method
described above is based on a percentile distribution of scores within
the quality and cost performance categories that already accounts for
bonus points. For this reason, we did not propose to calculate
additional high priority bonus points for facility-based measurement.
We noted that clinicians have an additional opportunity to receive
bonus points in the quality performance category score for using end-
to-end electronic submission of quality measures. The Hospital VBP
Program does not capture whether or not measures are reported using
end-to-end electronic reporting; however, our proposed facility-based
scoring method described above is based on a percentile distribution of
scores within the quality and cost performance categories. Because the
MIPS quality performance category scores already account for bonus
points, including end-to-end electronic reporting, when we translate
the Total Performance Score, the overall effect of end-to-end
electronic reporting would be captured in the translated score. For
this reason, we did not propose to calculate additional end-to-end
electronic reporting bonus points for facility-based measurement.
The following is a summary of the public comments received on the
``Bonus Points for Facility-Based Measurement'' proposals and our
responses:
Comment: A few commenters supported our proposal to not calculate
bonus points for additional high priority or end-to-end electronic
reporting of measures.
Response: We thank the commenters for their support of the
proposal.
Comment: A few commenters opposed our proposal to not calculate
bonus points for additional high priority measures or end-to-end
electronic reporting. One of the commenters noted the similarity of
facility-based measurement to the CMS Web Interface because there was
no opportunity to select measures in either method, and noted that
those who submitted via web interface did receive bonus points for both
additional high priority measures and end-to-end electronic reporting.
Response: Because our scoring approach to facility-based
measurement is based on a translation of the facility's performance
under the Hospital VBP Program scoring methodology to the MIPS quality
and cost performance categories, we do not believe it is appropriate to
add bonus points based on measure selection. The CMS Web Interface
method determines performance on individual measures and is scored in
the same way as other MIPS submission mechanisms, with a few
exceptions.
Final Action: After consideration of the public comments, we are
finalizing our proposal to not award bonus points for additional high
priority and end-to-end electronic reporting for clinicians scored
under facility-based measurement. We did not propose and are not
finalizing regulation text on this
[[Page 53766]]
aspect of facility-based measurement because we believe it is
unnecessary.
(vii) Special Rules for Facility-Based Measurement
Some hospitals do not receive a Total Performance Score in a given
year in the Hospital VBP Program, whether due to insufficient quality
measure data, failure to meet requirements under the Hospital IQR
Program, or other reasons. In these cases, we would be unable to
calculate a facility-based score based on the hospital's performance,
and facility-based clinicians would be required to participate in MIPS
via another method. Most hospitals that do not receive a Total
Performance Score in the Hospital VBP Program are those that are routinely excluded,
such as hospitals in Maryland. In such cases, facility-based clinicians
would know well in advance that the hospital would not receive a Total
Performance Score, and that they would need to participate in MIPS
through another method. However, we noted that we are concerned that
some facility-based clinicians may provide services in hospitals which
they expect will receive a Total Performance Score but do not due to
various rare circumstances such as natural disasters. In the CY 2018
Quality Payment Program proposed rule (82 FR 30142 through 30143) we
proposed a process for requesting a reweighting assessment for the
quality, cost and improvement activities performance categories due to
extreme and uncontrollable circumstances, such as natural disasters. We
proposed that MIPS eligible clinicians who are facility-based and
affected by extreme and uncontrollable circumstances, such as natural
disasters, may apply for reweighting (82 FR 30131).
In addition, we noted that hospitals may submit correction requests
to their Total Performance Scores calculated under the Hospital VBP
Program, and may also appeal the calculations of their Total
Performance Scores, subject to Hospital VBP Program requirements
established in prior rulemaking. Our proposal was to use the final
Hospital VBP Program Total Performance Score for the facility-based
measurement option under MIPS. In the event that a hospital obtains a
successful correction or appeal of its Total Performance Score, we
would update MIPS eligible clinicians' quality and cost performance
category scores accordingly, as long as the update could be made prior
to the application of the MIPS payment adjustment for the relevant MIPS
payment year.
Additionally, although we wish to tie the hospital and clinician
performance as closely together as possible for purposes of the
facility-based scoring policy, we do not wish to disadvantage those
clinicians and groups that select this measurement method. In the CY
2018 Quality Payment Program proposed rule, we proposed to retain a
policy equivalent to the 3-point floor for all measures with complete
data in the quality performance category scored against a benchmark in
the 2020 MIPS payment year (82 FR 30131). However, the Hospital VBP
Program does not have a corresponding scoring floor. Therefore, we
proposed to adopt a floor on the Hospital VBP Program Total Performance
Score for purposes of facility-based measurement under MIPS so that any
score in the quality performance category, once translated into the
percentile distribution described above, that would result in a score
of below 30 percent would be reset to a score of 30 percent in the
quality performance category (82 FR 30131). We believe that this
adjustment is important to maintain consistency with our other
policies. There is no similar floor established for measures in the
cost performance category under MIPS, so we did not propose any floor
for the cost performance category for facility-based measurement.
Some MIPS eligible clinicians who select facility-based measurement
could have sufficient numbers of attributed patients to meet the case
minimums for the cost measures established under MIPS. Although there
is no additional data reporting for cost measures, we believe that, to
facilitate the relationship between cost and quality measures, both
should be evaluated for the same population, as opposed to
comparing a hospital population with a population attributed to an
individual clinician or group. In addition, we believe that including
additional cost measures in the cost performance category score for
MIPS eligible clinicians who elect facility-based measurement would
reduce the alignment of incentives between the hospital and the
clinician. Thus, we proposed at Sec. 414.1380(e)(6)(v)(A) that MIPS
eligible clinicians who elect facility-based measurement would not be
scored on other cost measures specified for the cost performance
category, even if they meet the case minimum for a cost measure (82 FR
30131).
If a clinician or a group elects facility-based measurement but
also submits quality data through another MIPS mechanism, we proposed
to use the higher of the two scores for the quality performance
category and base the score of the cost performance category on the
same method (that is, if the facility-based quality performance
category score is higher, facility-based measurement is used for
quality and cost) (82 FR 30131). Since this policy may result in a
higher final score, it may provide facility-based clinicians with a
substantial incentive to elect facility-based measurement, whether or
not the clinician believes such measures are the most accurate or
useful measures of that clinician's performance. Therefore, this policy
may create an advantage for facility-based clinicians over non-
facility-based clinicians, since non-facility-based clinicians would
not have the opportunity to use the higher of two scores. Accordingly, we
sought comment on whether this proposal to use the higher score is the
best approach to score the performance of facility-based clinicians in
comparison to their non-facility-based peers (82 FR 30131).
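The interaction of these proposed special rules can be summarized in a
short Python sketch. It is illustrative only; the names, score scale, and
the ordering of the floor relative to the comparison are assumptions
rather than finalized program logic.

    QUALITY_FLOOR_PERCENT = 30.0  # proposed floor on the translated quality score

    def facility_based_scores(translated_quality, translated_cost,
                              other_quality=None, other_cost=None):
        """Return quality/cost performance category scores (percent) for a
        clinician or group that elected facility-based measurement.

        If quality data were also submitted through another MIPS mechanism,
        the method with the higher quality score is used, and the cost score
        is taken from that same method. No other cost measures are scored
        under the facility-based method, even if case minimums are met.
        """
        quality = max(translated_quality, QUALITY_FLOOR_PERCENT)  # no floor for cost
        if other_quality is not None and other_quality > quality:
            return {"method": "other submission", "quality": other_quality, "cost": other_cost}
        return {"method": "facility-based", "quality": quality, "cost": translated_cost}

    # Example: a translated facility quality score of 22 percent is floored at
    # 30 percent, but a 68 percent score from another submission mechanism is
    # higher, so that method supplies both the quality and cost scores.
    print(facility_based_scores(22.0, 40.0, other_quality=68.0, other_cost=55.0))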
The following is a summary of the public comments received on the
``Special Rules for Facility-Based Measurement'' proposals and our
responses:
Comment: Several commenters supported our proposal that, if a
clinician or a group elects facility-based measurement but also submits
quality data through another MIPS mechanism, we use the higher of the
two scores for the quality performance category and base the score of
the cost performance category on the same method.
Response: We thank the commenters for their support.
Comment: A few commenters stated that giving the higher score of
the facility-based measurement or another submission was an unfair
advantage for facility-based clinicians. Some of these commenters
recommended that those who elect facility-based measurement always be
scored on it, regardless of whether another mechanism for submitting
quality measures was used.
Response: We believe that this policy to use the higher of two
available performance scores is consistent with our other approaches to
scoring where we have the opportunity to assess performance based on
two different methods. If another clinician were to submit through two
different methods of MIPS reporting, we would base the score of that
clinician on the submission that resulted in the highest score.
Comment: One commenter supported our proposed 30 percent floor for
the quality performance category for participants in facility-based
measurement, noting that it was equitable to other clinicians with
complete data submission in the quality performance category.
Response: We thank the commenter for the support.
[[Page 53767]]
Comment: Several commenters opposed our proposal to establish a
floor of 30 percent for the quality performance category for clinicians
and groups participating in facility-based measurement. A few
commenters noted that a score of 30 percent in the quality category is
equal to 18 points (assuming a quality performance category weight of
60 percent), which is higher than the 15-point performance threshold
that CMS has proposed. These commenters suggested that such a floor was
unfair to other clinicians who would be required to submit data in
order to receive a score higher than the performance threshold.
Response: We continue to believe that this policy is consistent
with the score that might be received for a clinician who submitted
data that meet data completeness on six measures through another
mechanism. Measures scored in the Hospital VBP Program have to meet the
criteria required for submission through the Hospital IQR Program;
therefore, we do not believe it would be appropriate to allow a
clinician to receive a lower score based on the selection of this
measurement option. We will continue to evaluate this floor in the
context of scoring policies that are established in the quality
performance category for other methods of participating in MIPS. We
also note that this option is not being finalized for the 2018 MIPS
performance period, so concerns about the minimum score being higher
than the performance threshold for the 2018 MIPS performance period are
no longer relevant at this time. We will consider comments on this
topic in future rulemaking.
Comment: One commenter recommended that if a clinician or group
participates in facility-based measurement and submits through another
mechanism that we use the highest combined quality and cost performance
category score as opposed to the method which would have the highest
quality performance category score.
Response: Because many of the clinicians who qualify for facility-
based measurement would also qualify for an exemption from the
advancing care information performance category, their quality
performance category will carry more weight than the cost performance
category. Because of the possibility of reweighting of this category
for many clinicians who would use facility-based measurement, we
believe it is too complex to use the higher combined score. We believe
that using the option with the higher quality score is simpler and more
appropriate.
Final Action: After consideration of the public comments, we are
finalizing our proposals that clinicians or groups that elect facility-
based measurement but also submit quality data through another MIPS
mechanism would be measured on the method that results in the higher
quality score and to establish a 30 percent floor for the quality
performance category for those who participate in facility-based
measurement. We are finalizing all other special rules discussed in
this section as well. We note that facility-based measurement will not
be available until the 2019 MIPS performance period/2021 MIPS payment
year so these special rules will not apply until that time.
(5) Scoring the Improvement Activities Performance Category
Section 1848(q)(5)(C) of the Act specifies scoring rules for the
improvement activities performance category. For more of the statutory
background and description of the proposed and finalized policies, we
refer readers to the CY 2017 Quality Payment Program final rule (81 FR
77311 through 77319). We have also codified certain requirements for
the improvement activities performance category at Sec.
414.1380(b)(3). Based on these criteria, we finalized at Sec.
414.1380(b)(3) in the CY 2017 Quality Payment Program final rule the
scoring methodology for this category, which assigns points based on
certified patient-centered medical home participation or comparable
specialty practice participation, APM participation, and the
improvement activities reported by the MIPS eligible clinician (81 FR
77312). A MIPS eligible clinician's performance will be evaluated by
comparing the reported improvement activities to the highest possible
score (40 points). In the CY 2018 Quality Payment Program proposed rule
(82 FR 30132), we did not propose any changes to the scoring of the
improvement activities performance category.
(a) Assigning Points to Reported Improvement Activities
We assign points for each reported improvement activity within 2
categories: Medium-weighted; and high-weighted activities. Generally,
each medium-weighted activity is worth 10 points toward the total
category score of 40 points, and each high-weighted activity is worth
20 points toward the total category score of 40 points. These points
are doubled for small practices, practices in rural areas, practices
located in geographic HPSAs, and non-patient facing MIPS eligible
clinicians. We refer readers to Sec. 414.1380(b)(3) and the CY 2017
Quality Payment Program final rule (81 FR 78312) for further detail on
improvement activities scoring.
Activities will be weighted as high based on the extent to which
they align with activities that support the certified patient-centered
medical home, since that is consistent with the standard under section
1848(q)(5)(C)(i) of the Act for achieving the highest potential score
for the improvement activities performance category, as well as with
our priorities for transforming clinical practice (81 FR 77311).
Additionally, activities that require performance of multiple actions,
such as participation in the Transforming Clinical Practice Initiative
(TCPI), participation in a MIPS eligible clinician's state Medicaid
program, or an activity identified as a public health priority (such as
emphasis on anticoagulation management or utilization of prescription
drug monitoring programs) are justifiably weighted as high (81 FR 77311
through 77312).
We refer readers to Table 26 of the CY 2017 Quality Payment Program
final rule for a summary of the previously finalized improvement
activities that are weighted as high (81 FR 77312 through 77313), and
to Table H of the same final rule, for a list of all the previously
finalized improvement activities, both medium- and high-weighted (81 FR
77817 through 77831). We also refer readers to Table F and Table G in
the appendices of the proposed rule for our proposed additions and
changes to the Improvement Activities Inventory for Quality Payment
Program Year 2 and future years (82 FR 30479 and 82 FR 30486
respectively). In this final rule with comment period, we are
finalizing the proposed new activities and changes to previously
adopted activities, some with modification, and refer readers to the
tables in the appendices of this final rule with comment period for
details. Consistent with our unified scoring system principles, we
finalized in the CY 2017 Quality Payment Program final rule that MIPS
eligible clinicians will know in advance how many potential points they
could receive for each improvement activity (81 FR 77311 through
77319).
(b) Improvement Activities Performance Category Highest Potential Score
At Sec. 414.1380(b)(3), we finalized that we will require a total
of 40 points to receive the highest score for the improvement
activities performance category (81 FR 77315). For more of the
statutory background and description of the proposed and finalized
policies, we
[[Page 53768]]
refer readers to the CY 2017 Quality Payment Program final rule (81 FR
77314 through 77315).
For small practices, practices in rural areas or geographic HPSAs,
and non-patient facing MIPS eligible clinicians, the weight for any
activity selected is doubled so that these practices and eligible
clinicians only need to select one high- or two medium-weighted
activities to achieve the highest score of 40 points (81 FR 77312).
In accordance with section 1848(q)(5)(C)(ii) of the Act, we
codified at Sec. 414.1380(b)(3)(ix) that individual MIPS eligible
clinicians or groups who are participating in an APM (as defined in
section 1833(z)(3)(C) of the Act) for a performance period will
automatically earn at least one half of the highest potential score for
the improvement activities performance category for the performance
period (81 FR 30132). In addition, MIPS eligible clinicians that are
participating in MIPS APMs are assigned an improvement activity score,
which may be higher than one half of the highest potential score (81 FR
30132). This assignment is based on the extent to which the
requirements of the specific model meet the list of activities in the
Improvement Activities Inventory (81 FR 30132). For a further
description of improvement activities and the APM scoring standard for
MIPS, we refer readers to the CY 2017 Quality Payment Program final
rule (81 FR 77246). For all other individual MIPS eligible clinicians
or groups, we refer readers to the scoring requirements for individual
MIPS eligible clinicians and groups in the CY 2017 Quality Payment
Program final rule (81 FR 77270). An individual MIPS eligible clinician
or group is not required to perform activities in each improvement
activities subcategory or participate in an APM to achieve the highest
potential score in accordance with section 1848(q)(5)(C)(iii) of the
Act (81 FR 77178).
In the CY 2017 Quality Payment Program final rule, we also
finalized that individual MIPS eligible clinicians and groups that
successfully participate and submit data to fulfill the requirements
for the CMS Study on Improvement Activities and Measurement will
receive the highest score for the improvement activities performance
category (81 FR 77315). We refer readers to the CY 2018 Quality Payment
Program proposed rule (82 FR 30056) and section II.C.6.e.(10) of this
final rule with comment period for further detail on this study.
(c) Points for Certified Patient-Centered Medical Home or Comparable
Specialty Practice
Section 1848(q)(5)(C)(i) of the Act specifies that a MIPS eligible
clinician who is in a practice that is certified as a patient-centered
medical home or comparable specialty practice for a performance period,
as determined by the Secretary, must be given the highest potential
score for the improvement activities performance category for the
performance period. Accordingly, at Sec. 414.1380(b)(3)(iv), we
specified that a MIPS eligible clinician who is in a practice that is
certified as a patient-centered medical home, including a Medicaid
Medical Home, Medical Home Model, or comparable specialty practice,
will receive the highest potential score for the improvement activities
performance category (81 FR 77196 through 77180).
In the CY 2018 Quality Payment Program proposed rule, we did not
propose any changes specifically to the scoring of the patient-centered
medical home or comparable specialty practice; however, we did propose
a change to how groups qualify for this activity (82 FR 30054). We
refer readers to section II.C.6.e.(2)(a) of this final rule with
comment period for more details.
(d) Calculating the Improvement Activities Performance Category Score
(i) Generally
In the CY 2017 Quality Payment Program final rule (81 FR 77318), we
finalized that individual MIPS eligible clinicians and groups must earn
a total of 40 points to receive the highest score for the improvement
activities performance category. To determine the improvement
activities performance category score, we sum the points for all of a
MIPS eligible clinician's reported activities and divide by the
improvement activities performance category highest potential score of
40. A perfect score will be 40 points divided by 40 possible points,
which equals 100 percent. If MIPS eligible clinicians have more than 40
improvement activities points, we will cap the resulting improvement
activities performance category score at 100 percent (81 FR 77318). For
example, if more than four medium-weighted activities are selected, the
total points that can be achieved is still 40 points
(81 FR 77318). As stated at 81 FR 77318, the following scoring
applies to MIPS eligible clinicians generally (who are not a non-
patient facing clinician, a small practice, a practice located in a
rural area, or a practice in a geographic HPSA):
Reporting of one medium-weighted activity will result in
10 points which is one-fourth of the highest score.
Reporting of two medium-weighted activities will result in
20 points which is one-half of the highest score.
Reporting of three medium-weighted activities will result
in 30 points which is three-fourths of the highest score.
Reporting of four medium-weighted activities will result
in 40 points which is the highest score.
Reporting of one high-weighted activity will result in 20
points which is one-half of the highest score.
Reporting of two high-weighted activities will result in
40 points which is the highest score.
Reporting of a combination of medium-weighted and high-
weighted activities where the total number of points achieved is
calculated based on the number of activities selected and the weighting
assigned to that activity (number of medium-weighted activities
selected x 10 points + number of high-weighted activities selected x 20
points) (81 FR 78318).
In the CY 2018 Quality Payment Program proposed rule (82 FR 30133),
we did not propose any changes to how we will calculate the improvement
activities performance category score.
(ii) Small Practices, Practices Located in Rural Areas or Geographic
HPSAs, and Non-Patient Facing MIPS Eligible Clinicians
Section 1848(q)(2)(B)(iii) of the Act requires the Secretary to
give consideration to the circumstances of small practices and
practices located in rural areas and in geographic HPSAs (as designated
under section 332(a)(1)(A) of the PHS Act) in defining activities.
Section 1848(q)(2)(C)(iv) of the Act also requires the Secretary to
give consideration to non-patient facing MIPS eligible clinicians.
Further, section 1848(q)(5)(F) of the Act allows the Secretary to
assign different scoring weights for measures, activities, and
performance categories, if there are not sufficient measures and
activities applicable and available to each type of eligible clinician.
Accordingly, in the CY 2017 Quality Payment Program final rule (81
FR 77318), we finalized that the following scoring applies to MIPS
eligible clinicians who are a non-patient facing MIPS eligible
clinician, a small practice (consisting of 15 or fewer professionals),
a practice located in a rural area, or a practice in a geographic HPSA,
or any combination thereof:
[[Page 53769]]
Reporting of one medium-weighted activity will result in
20 points or one-half of the highest score.
Reporting of two medium-weighted activities will result in
40 points or the highest score.
Reporting of one high-weighted activity will result in 40
points or the highest score.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30133),
we did not propose any changes to our policy to give consideration to
the circumstances of small practices and practices located in rural
areas and in geographic HPSAs.
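For illustration only, the scoring arithmetic described in sections (i) and (ii) above can be sketched as follows; the Python code below is not part of this final rule or the regulation text, and the function name and parameters are hypothetical.

```python
# Illustrative sketch of the improvement activities scoring arithmetic
# (81 FR 77318): medium-weighted activities are worth 10 points and
# high-weighted activities 20 points (doubled to 20 and 40 points for
# non-patient facing clinicians, small practices, and practices located
# in rural areas or geographic HPSAs); points are capped at the 40-point
# maximum and reported as a percentage of that maximum.

def improvement_activities_score(num_medium, num_high, special_status=False):
    """Return the improvement activities performance category score (percent).

    special_status: True for a non-patient facing MIPS eligible clinician,
    a small practice (15 or fewer professionals), a practice located in a
    rural area, or a practice in a geographic HPSA.
    """
    medium_weight, high_weight = (20, 40) if special_status else (10, 20)
    points = min(num_medium * medium_weight + num_high * high_weight, 40)
    return 100.0 * points / 40

# Examples drawn from the scoring rules above:
assert improvement_activities_score(2, 0) == 50.0    # two medium, general
assert improvement_activities_score(0, 2) == 100.0   # two high, general
assert improvement_activities_score(0, 1, special_status=True) == 100.0
```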
(iii) Advancing Care Information Performance Category Bonus
We finalized in the CY 2017 Quality Payment Program final rule that
certain activities in the improvement activities performance category
will also qualify for a bonus under the advancing care information
performance category (81 FR 77318). This bonus is applied under the
advancing care information performance category and not under the
improvement activities performance category (81 FR 77318). For more
information about our finalized improvement activities scoring policies
and for several sample scoring charts, we refer readers to the CY 2017
Quality Payment Program final rule (81 FR 77318 through 77319). In the
CY 2018 Quality Payment Program proposed rule (82 FR 30059), we did not
propose any changes to this policy and refer readers to section
II.C.6.f.(2)(d) of this final rule with comment period for more details
in the advancing care information performance discussion.
(iv) MIPS APMs
Finally, in the CY 2017 Quality Payment Program final rule (81 FR
77319), we codified at Sec. 414.1380(b)(3)(ix) that MIPS eligible
clinicians participating in APMs that are not certified patient-
centered medical homes will automatically earn a minimum score of one-
half of the highest potential score for the performance category, as
required by section 1848(q)(5)(C)(ii) of the Act. For any other MIPS
eligible clinician who does not report at least one activity, including
a MIPS eligible clinician who does not identify to us that they are
participating in a certified patient-centered medical home or
comparable specialty practice, we will calculate a score of zero points
(81 FR 77319). In the CY 2018 Quality Payment Program proposed rule (82
FR 30132), we did not propose any changes to this policy.
(e) Self-Identification Policy for MIPS Eligible Clinicians
In the CY 2017 Quality Payment Program final rule (81 FR 77319), we
established that individual MIPS eligible clinicians or groups
participating in APMs would not be required to self-identify as
participating in an APM, but that all MIPS eligible clinicians would be
required to self-identify if they were part of a certified patient-
centered medical home or comparable specialty practice, a non-patient
facing MIPS eligible clinician, a small practice, a practice located in
a rural area, or a practice in a geographic HPSA, or any combination
thereof, and that we would validate these self-identifications as
appropriate.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30133),
we did not propose any changes to this policy for certified patient-
centered medical homes or comparable specialty practices. MIPS eligible
clinicians that are part of a recognized or certified patient-centered
medical home or comparable specialty practice are still required to
self-identify for the 2018
MIPS performance period, and we will validate these self-
identifications as appropriate.
For the criteria for recognition as a recognized or certified
patient-centered medical home or comparable specialty practice, we
refer readers to the CY 2017 Quality Payment Program final rule (81 FR
77179 through 77180) and section II.C.6.e.(2)(a) of this final rule with
comment period.
However, in the CY 2018 Quality Payment Program proposed rule (82
FR 30133), we proposed to no longer require these self-identifications
for non-patient facing MIPS eligible clinicians, small practices, or
practices located in rural areas or geographic HPSAs beginning with the
2018 MIPS performance period, because it is technically feasible for us
to identify these MIPS eligible clinicians during attestation for the
performance of improvement activities following the performance period.
We define these MIPS eligible clinicians in the CY 2017 Quality Payment
Program final rule (81 FR 77540).
The following is a summary of the public comments received on the
``Self-Identification Policy for MIPS Eligible Clinicians'' proposals
and our responses.
Comment: Several commenters supported our proposal to remove the
requirement for MIPS eligible clinicians that are non-patient facing, a
small practice, a practice located in a rural area, or a practice in a
geographic HPSA, or any combination thereof to self-identify, stating
that this will lower the burden of reporting. One commenter urged us to
also consider ways of eliminating the self-identification requirements
for MIPS eligible clinicians who participate in patient-centered
medical homes or comparable specialty practices, for example, by
requiring the certification and recognition organizations to submit to
CMS lists of the MIPS eligible clinicians or groups that meet their
standards and are certified/recognized, similar to how participation
lists are utilized to determine the participants in certain APMs.
Response: We thank commenters for their support and suggestions. We
are attempting to eliminate burden where possible and will continue to
explore technically feasible ways to reduce burden. We will consider
the suggestion to also eliminate the need for MIPS eligible clinicians
who participate in patient-centered medical homes or comparable
specialty practices to self-identify as we craft future policy.
Final Action: After consideration of the public comments received,
we are finalizing our proposal, as proposed, to no longer require these
self-identifications for non-patient facing MIPS eligible clinicians,
small practices, practices located in rural areas or geographic HPSAs,
or any combination thereof, beginning with the 2018 MIPS performance
period and for future years.
(6) Scoring the Advancing Care Information Performance Category
In the CY 2018 Quality Payment Program proposed rule, we referred
readers to section II.C.6. of the proposed rule (82 FR 30057 through
30080), where we discussed scoring the advancing care information
performance category. We refer readers to section II.C.6.f. of this
final rule with comment period for finalized policies related to
scoring the advancing care information performance category.
b. Calculating the Final Score
For a description of the statutory basis and our policies for
calculating the final score for MIPS eligible clinicians, we refer
readers to the discussion in the CY 2017 Quality Payment Program final
rule (81 FR 77319 through 77329) and Sec. 414.1380. In the proposed
rule, we proposed to add a complex patient scoring bonus (82 FR 30135
through 82 FR 30139) and add a small practice bonus to the final score
(82 FR 30139 through 82 FR 30140). In addition, we reviewed the final
score calculation for the 2020 MIPS payment year (82 FR
[[Page 53770]]
30140) and proposed refinements to the reweighting policies (82 FR
30141 through 82 FR 30146).
(1) Accounting for Risk Factors
Section 1848(q)(1)(G) of the Act requires us to consider risk
factors in our scoring methodology. Specifically, that section provides
that the Secretary, on an ongoing basis, shall, as the Secretary
determines appropriate and based on individuals' health status and
other risk factors, assess appropriate adjustments to quality measures,
cost measures, and other measures used under MIPS and assess and
implement appropriate adjustments to payment adjustments, final scores,
scores for performance categories, or scores for measures or activities
under the MIPS. In doing this, the Secretary is required to take into
account the relevant studies conducted under section 2(d) of the
Improving Medicare Post-Acute Care Transformation (IMPACT) Act of 2014
and, as appropriate, other information, including information collected
before completion of such studies and recommendations.
In this section, we summarize our efforts related to social risk
and the relevant studies conducted under section 2(d) of the IMPACT Act
of 2014. We also finalize some short-term adjustments to address
patient complexity.
(a) Considerations for Social Risk
We understand that social risk factors such as income, education,
race and ethnicity, employment, disability, community resources, and
social support (certain factors of which are also sometimes referred to
as socioeconomic status (SES) factors or socio-demographic status (SDS)
factors) play a major role in health. One of our core objectives is to
improve beneficiary outcomes, including reducing health disparities,
and we want to ensure that all beneficiaries, including those with
social risk factors, receive high quality care. In addition, we seek to
ensure that the quality of care furnished by providers and suppliers is
assessed as fairly as possible under our programs while ensuring that
beneficiaries have adequate access to excellent care.
We have been reviewing reports prepared by the Office of the
Assistant Secretary for Planning and Evaluation (ASPE) and the National
Academies of Sciences, Engineering, and Medicine on the issue of
accounting for social risk factors in CMS's value-based purchasing and
quality reporting programs, and considering options on how to address
the issue in these programs. On December 21, 2016, ASPE submitted the
first of several Reports to Congress on a study it was required to
conduct under section 2(d) of the IMPACT Act of 2014. The first study
analyzed the effects of certain social risk factors in Medicare
beneficiaries on quality measures and measures of resource use used in
one or more of nine Medicare value-based purchasing programs.\8\ The
report also included considerations for strategies to account for
social risk factors in these programs. A second report due October 2019
will expand on these initial analyses, supplemented with non-Medicare
datasets to measure social risk factors. In a January 10, 2017 report
released by the National Academies of Sciences, Engineering, and
Medicine, that body provided various potential methods for accounting
for social risk factors, including stratified public reporting.\9\
---------------------------------------------------------------------------
\8\ Office of the Assistant Secretary for Planning and
Evaluation. 2016. Report to Congress: Social Risk Factors and
Performance Under Medicare's Value-Based Purchasing Programs.
Available at https://aspe.hhs.gov/pdf-report/report-congress-social-risk-factors-and-performance-under-medicares-value-based-purchasing-programs.
\9\ National Academies of Sciences, Engineering, and Medicine.
2017. Accounting for social risk factors in Medicare payment.
Washington, DC: The National Academies Press.
---------------------------------------------------------------------------
In addition, the National Quality Forum (NQF) has concluded its
initial trial on risk adjustment for quality measures. Based on the
findings from the initial trial, NQF will continue its work to evaluate
the impact of social risk factor adjustment on intermediate outcome and
outcome measures for an additional 3 years. The extension of this work
will allow NQF to determine further how to effectively account for
social risk factors through risk adjustment and other strategies in
quality measurement.
As we continue to consider the analyses and recommendations from
these and any future reports, we are continuing to work with
stakeholders in this process. As we have previously communicated, we
are concerned about holding providers to different standards for the
outcomes of their patients with social risk factors because we do not
want to mask potential disparities or minimize incentives to improve
the outcomes for disadvantaged populations. Keeping this concern in
mind, while we sought input on this topic previously, we requested
public comment on whether we should account for social risk factors in
the MIPS, and if so, what method or combination of methods would be
most appropriate for accounting for social risk factors in the MIPS.
Examples of methods include: Adjustment of MIPS eligible clinician
scores (for example, stratifying the scores of MIPS eligible clinicians
based on the proportion of their patients who are dual eligible);
confidential reporting of stratified measure rates to MIPS eligible
clinicians; public reporting of stratified measure results; risk
adjustment of a particular measure as appropriate based on data and
evidence; and redesigning payment incentives (for instance, rewarding
improvement for clinicians caring for patients with social risk factors
or incentivizing clinicians to achieve health equity). We requested
comments on whether any of these methods should be considered, and if
so, which of these methods or combination of methods would best account
for social risk factors in MIPS, if any.
In addition, we requested public comment on which social risk
factors might be most appropriate for stratifying measure scores and/or
potential risk adjustment of a particular measure. Examples of social
risk factors include, but are not limited to the following: Dual
eligibility/low-income subsidy; race and ethnicity; and geographic area
of residence. We also requested comment on which of these factors,
including current data sources where this information would be
available, could be used alone or in combination, and whether other
data should be collected to better capture the effects of social risk.
We noted that we will take commenters' input into consideration as we
continue to assess the appropriateness and feasibility of accounting
for social risk factors in MIPS. We noted that any such changes would
be proposed through future notice and comment rulemaking.
We look forward to working with stakeholders as we consider the
issue of accounting for social risk factors and reducing health
disparities in CMS programs. Of note, implementing any of the above
methods would be taken into consideration in the context of how this
and other CMS programs operate (for example, data submission methods,
availability of data, statistical considerations relating to
reliability of data calculations, among others); we also welcome
comment on operational considerations. CMS is committed to ensuring
that its beneficiaries have access to and receive excellent care, and
that the quality of care furnished by providers and suppliers is
assessed fairly in CMS programs.
In response to our requests for comments described previously in
this final rule with comment period, many commenters provided feedback
on addressing social risk. As we have previously stated, we are
concerned
[[Page 53771]]
about holding providers to different standards for the outcomes of
their patients with social risk factors, because we do not want to mask
potential disparities. We believe that the path forward should
incentivize improvements in health outcomes for disadvantaged
populations while ensuring that beneficiaries have adequate access to
excellent care. We thank commenters for this important feedback and
will continue to consider options to account for social risk factors
that would allow us to view disparities and potentially incentivize
improvement in care for patients and beneficiaries. We will consider
the comments we received in preparation for future rulemaking.
(b) Complex Patient Bonus
While we work with stakeholders on these issues as we have
described, under the authority within section 1848(q)(1)(G) of the Act,
which allows us to assess and implement appropriate adjustments to
payment adjustments, MIPS final scores, scores for performance
categories, or scores for measures or activities under MIPS, we
proposed to implement a short-term strategy for the Quality Payment
Program to address the impact patient complexity may have on final
scores (82 FR 30135 through 82 FR 30139). The overall goal when
considering a bonus for complex patients is two-fold: (1) To protect
access to care for complex patients and provide them with excellent
care; and (2) to avoid placing MIPS eligible clinicians who care for
complex patients at a potential disadvantage while we review the
completed studies and research to address the underlying issues. We
used the term ``patient complexity'' to take into account a multitude
of factors that describe and have an impact on patient health outcomes;
such factors include the health status and medical conditions of
patients, as well as social risk factors. We believe that as the number
and intensity of these factors increase for a single patient, the
patient may require more services, more clinician focus, and more
resources in order to achieve health outcomes that are similar to those
who have fewer factors. In developing the policy for the complex
patient bonus, we assessed whether there was a MIPS performance
discrepancy by patient complexity using two well-established indicators
in the Medicare program. The proposal was intended to address any
discrepancy, without masking performance. Because this bonus is
intended to be a short-term strategy, we proposed the bonus only for
the 2018 MIPS performance period (2020 MIPS payment year) and noted we
will assess on an annual basis whether to continue the bonus and how
the bonus should be structured (82 FR 30135 through 30139).
When considering approaches for a complex patient bonus, we
reviewed evidence to identify how indicators of patient complexity have
an impact on performance under MIPS as well as availability of data to
implement the bonus. We estimated the impact on performance using our
proposed scoring model, described in more detail in the regulatory
impact analysis of the CY 2018 Quality Payment Program proposed rule
(82 FR 30235 through 30238) that uses historical PQRS data to simulate
scores for MIPS eligible clinicians including estimates for the
quality, advancing care information, and improvement activities
performance categories, and the small practice bonus (82 FR 30149
through 30150). These estimates reflect scoring proposals with the cost
performance category weight at zero percent. We identified two
potential indicators for complexity: Medical complexity as measured
through Hierarchical Condition Category (HCC) risk scores and social
risk as measured through the proportion of patients with dual eligible
status. We identified these indicators because they are common
indicators of patient complexity in the Medicare program and the data
is readily available. Please refer to the CY 2018 Quality Payment
Program proposed rule for a detailed discussion of our analysis of both
indicators that informed our proposal (82 FR 30135 through 30138).
We proposed at Sec. 414.1380(c)(3) to add a complex patient bonus
to the final score for the 2020 MIPS payment year for MIPS eligible
clinicians that submit data for at least one performance category (82
FR 30138). We proposed at Sec. 414.1380(c)(3)(i) to calculate an
average HCC risk score, using the model adopted under section 1853 of
the Act for Medicare Advantage risk adjustment purposes, for each MIPS
eligible clinician or group, and to use that average HCC risk score as
the complex patient bonus. We proposed to calculate the average HCC
risk score for a MIPS eligible clinician or group by averaging HCC risk
scores for beneficiaries cared for by the MIPS eligible clinician or
clinicians in the group during the second 12-month segment of the
eligibility period, which spans from the last 4 months of a calendar
year 1 year prior to the performance period followed by the first 8
months of the performance period in the next calendar year (September
1, 2017 to August 31, 2018 for the 2018 MIPS performance period). We
proposed the second 12-month segment of the eligibility period to align
with other MIPS policies and to ensure we have sufficient time to
determine the necessary calculations. The second 12-month segment
overlaps by 8 months with the MIPS performance period, which means
that many of the patients in our complex patient bonus would have been
cared for by the clinician, group, virtual group, or APM Entity during
the MIPS performance period.
HCC risk scores for beneficiaries would be calculated based on the
calendar year immediately prior to the performance period. For the 2018
MIPS performance period, the HCC risk scores would be calculated based
on beneficiary services from the 2017 calendar year. We proposed this
approach because CMS uses prior year diagnoses to set Medicare
Advantage rates prospectively every year and has employed this approach
in the VM (77 FR 69317 through 69318). Additionally, this approach
mitigates the risk of ``upcoding'' to get higher expected costs, which
could happen if concurrent risk adjustments were incorporated. We noted
that we realized using the 2017 calendar year to assess beneficiary HCC
risk scores overlaps by 4 months with the 12-month data period to
identify beneficiaries (which is September 1, 2017 to August 31, 2018
for the 2018 MIPS performance period); however, we annually calculate
the beneficiary HCC risk score and use it for multiple purposes (like
the Physician and Other Supplier Public Use File).
For MIPS APMs and virtual groups, we proposed at Sec.
414.1380(c)(3)(ii) to use the beneficiary weighted average HCC risk
score for all MIPS eligible clinicians, and if technically feasible,
TINs for models and virtual groups which rely on complete TIN
participation, within the APM Entity or virtual group, respectively, as
the complex patient bonus. We would calculate the weighted average by
taking the sum of each individual clinician's (or TIN's as appropriate)
average HCC risk score multiplied by the number of unique beneficiaries
cared for by the clinician and then divide by the sum of the
beneficiaries cared for by each individual clinician (or TIN as
appropriate) in the APM Entity or virtual group.
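For illustration only, the beneficiary-weighted average described in the preceding paragraph can be sketched as follows; the Python code below is not part of this final rule, and the function name, input structure, and example figures are hypothetical.

```python
# Illustrative sketch of the beneficiary-weighted average HCC risk score
# for an APM Entity or virtual group: each clinician's (or TIN's) average
# HCC risk score is weighted by the number of unique beneficiaries cared
# for by that clinician (or TIN), and the weighted sum is divided by the
# total number of beneficiaries across the entity.

def entity_weighted_hcc(components):
    """components: list of (average_hcc_risk_score, unique_beneficiary_count)
    tuples, one per clinician (or TIN, as appropriate)."""
    total_beneficiaries = sum(count for _, count in components)
    weighted_sum = sum(score * count for score, count in components)
    return weighted_sum / total_beneficiaries

# Hypothetical example: two clinicians in a virtual group.
print(entity_weighted_hcc([(1.8, 300), (1.2, 100)]))  # 1.65
```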
We proposed at Sec. 414.1380(c)(3)(iii) that the complex patient
bonus cannot exceed 3 points. We divided clinicians and groups into
quartiles based on average HCC risk score and percentage of patients
who are dual eligible. A cap of 3 points was selected because the
differences in performance we observed
[[Page 53772]]
in simulated scores (based on our proposed scoring methodology) between
the first and fourth quartiles of average HCC risk scores were
approximately 4 points for individuals and approximately 5 points for
groups. The 95th percentile of average HCC risk score values for
individual clinicians was 2.91, which we rounded to 3 for simplicity.
Although we considered using a higher cap to reflect the differences in
performance above 4 points, we believed that 3 points was appropriate
in order to not mask poor performance and because we estimated that
most MIPS eligible clinicians would have an average HCC risk score
below 3 points.
We expressed our belief that applying this bonus to the final score
is appropriate because caring for complex and vulnerable patients can
affect all aspects of a practice and not just specific performance
categories. It may also create a small incentive to provide access to
complex patients. We considered whether we should apply a set number of
points to those in a specific quartile (for example, for the highest
risk quartile only), but did not want to restrict the bonus to only
certain MIPS eligible clinicians. Rather than assign points based on
quartile, we believed that adding the average HCC risk score directly
to the final score would achieve our goal of accounting for patient
complexity without masking low performance while providing a modest
effect on the final score.
Finally, we proposed that the MIPS eligible clinician, group,
virtual group, or APM Entity must submit data on at least one measure
or activity in a performance category during the performance period to
receive the complex patient bonus. Under this proposal, MIPS eligible
clinicians would not need to meet submission requirements for the
quality performance category in order to receive the bonus (they could
instead submit improvement activities or advancing care information
measures only or submit fewer than the required number of measures for
the quality performance category).
Based on our data analysis using our proposed scoring model with
the cost performance category weighted at zero percent, we estimated
that this bonus on average would range from 1.16 points in the first
quartile of MIPS eligible clinicians when ranked by average HCC risk
scores to 2.49 points in the fourth quartile for individual reporters
submitting 6 or more measures, and 1.26 points in the first quartile to
2.23 points in the fourth quartile for group reporters. For example, a
MIPS eligible clinician with a final score of 55.11 and an average HCC
risk score of 2.01 would receive a final score of 57.12. We proposed
(82 FR 30140) to modify the final score calculation formula so that if
the result of the calculation is greater than 100 points, then the
final score would be capped at 100 points.
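For illustration only, the proposed calculation described above can be sketched as follows; the Python code below is not part of this final rule, reflects the proposal rather than the finalized policy, and uses hypothetical function names.

```python
# Illustrative sketch of the proposed complex patient bonus: the average
# HCC risk score, capped at 3 points, would be added to the final score,
# and the resulting final score would be capped at 100 points.

def proposed_complex_patient_bonus(average_hcc_risk_score):
    return min(average_hcc_risk_score, 3.0)

def final_score_with_bonus(final_score, bonus):
    return min(final_score + bonus, 100.0)

# Worked example from the preamble: a final score of 55.11 and an average
# HCC risk score of 2.01 yield a final score of 57.12.
bonus = proposed_complex_patient_bonus(2.01)
print(round(final_score_with_bonus(55.11, bonus), 2))  # 57.12
```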
We also sought comment on an alternative complex patient bonus
methodology, similarly for the 2020 MIPS payment year only (82 FR
30139). Under the alternative, we would apply a complex patient bonus
based on a ratio of patients who are dual eligible because dual
eligible status is a common indicator of social risk for which we
currently have data available. We expressed our belief that the
advantage of this option is its relative simplicity and that it creates
a direct incentive to care for dual eligible patients, who are often
medically complex and have concurrent social risk factors. In addition,
whereas the HCC risk scores rely on the diagnoses a beneficiary
receives which could be impacted by variations in coding practices
among clinicians, the dual eligibility ratio is not impacted by
variations in coding practices. For this alternative option, we would
calculate a dual eligible ratio (including both full and partial
Medicaid beneficiaries) for each MIPS eligible clinician based on the
proportion of unique patients who have dual eligible status seen by the
MIPS eligible clinician among all unique patients seen during the
second 12-month segment of the eligibility period, which spans from the
last 4 months of a calendar year 1 year prior to the performance period
followed by the first 8 months of the performance period. For MIPS APMs
and virtual groups, we would use the average dual eligible patient
ratio for all MIPS eligible clinicians, and if technically feasible,
TINs for models and virtual groups which rely on complete TIN
participation, within the APM Entity or virtual group, respectively.
Under this alternative option, we would identify dual eligible
status (numerator of the ratio) using data on dual-eligibility status
sourced from the state Medicare Modernization Act (MMA) files, which
are files each state submits to CMS with monthly Medicaid eligibility
information. We would use dual-eligibility status data from the state
MMA files because it is the best available data for identifying dual
eligible beneficiaries. Under this alternative option, we would include
both full-benefit and partial benefit beneficiaries in the dual
eligible ratio, and an individual would be counted as a dual patient if
they were identified as a full-benefit or partial-benefit dual patient
in the state MMA files at the conclusion of the second 12-month segment
of the eligibility determination period.
We proposed to define the proportion of full-benefit or partial-
benefit dual eligible beneficiaries as the proportion of dual eligible
patients among all unique Medicare patients seen by the MIPS eligible
clinician or group during the second 12-month segment of the
eligibility period which spans from the last 4 months of a calendar
year prior to the performance period followed by the first 8 months of
the performance period in the next calendar year (September 1, 2017 to
August 31, 2018 for the 2018 MIPS performance period), to identify MIPS
eligible clinicians for calculation of the complex patient bonus. This
date range aligns with the second low-volume threshold determination
and also represents care provided during the performance period.
We proposed to multiply the dual eligible ratio by 5 points to
calculate a complex patient bonus for each MIPS eligible clinician. For
example, a MIPS eligible clinician who sees 400 patients with dual
eligible status out of 1,000 total Medicare patients seen during the
second 12-month segment of the eligibility period would have a complex
patient ratio of 0.4, which would be multiplied by 5 points for a
complex patient bonus of 2 points toward the final score. We believe
this approach would be simple to explain and would be available to all
clinicians who care for dual eligible beneficiaries. We also believed a
complex patient bonus ranging from 1 to 5 points (with most MIPS
eligible clinicians receiving a bonus between 1 and 3 points) would be
appropriate because, in our analysis, we estimated differences in
performance between the 1st and 4th quartiles of dual eligible ratios
to be approximately 3 points for individuals and approximately 6 points
for groups. A bonus of less than 5 points would help to mitigate the
impact of caring for patients with social risk factors while not
masking poor performance. Using this approach, we estimated that the
bonus would range from 0.45 (first dual quartile) to 2.42 (fourth dual
quartile) for individual reporters, and from 0.63 (first dual quartile)
to 2.19 (fourth dual quartile) for group reporters. Under this
alternative option, we would also include the complex patient bonus in
the calculation of the final score. We proposed that if the result of
the calculation is greater than 100 points, then the final score would
be capped at 100 points (82 FR 30140). We sought comments on our
proposed bonus for
[[Page 53773]]
complex patients based on average HCC risk scores, and our alternative
option using a ratio of dual eligible patients in lieu of average HCC
risk scores. We reiterated that the complex patient bonus is intended
to be a short-term solution, which we plan to revisit on an annual
basis, to incentivize clinicians to care for patients with medical
complexity. We noted that we may consider alternate adjustments in
future years after methods that more fully account for patient
complexity in MIPS have been developed. We also requested comments on
alternative methods to construct a complex patient bonus.
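For illustration only, the alternative dual eligibility option described above can be sketched as follows; the Python code below is not part of this final rule, reflects the alternative considered rather than the finalized policy, and uses a hypothetical function name.

```python
# Illustrative sketch of the alternative option: the proportion of unique
# Medicare patients who are full-benefit or partial-benefit dual eligible
# (as identified in the state MMA files) is multiplied by 5 points.

def dual_eligible_bonus(dual_eligible_patients, total_medicare_patients):
    ratio = dual_eligible_patients / total_medicare_patients
    return ratio * 5.0

# Worked example from the preamble: 400 dual eligible patients out of
# 1,000 total Medicare patients gives a ratio of 0.4 and a bonus of 2 points.
print(dual_eligible_bonus(400, 1000))  # 2.0
```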
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Several commenters requested that CMS base the complex
patient bonus on both HCC risk scores and dual eligibility. The
commenters expressed the belief that both of these indicators capture
important aspects of patient risk, and together provide a more complete
picture. The commenters suggested that CMS apply a complex patient
bonus if a MIPS eligible clinician meets a defined threshold for either
HCC risk scores or proportion of patients who are dual eligible. One
commenter requested that CMS provide separate bonuses based on HCC risk
scores and the proportion of patients who are dual eligible.
Response: We appreciate these comments and have decided to finalize
a modified complex patient bonus, which will be added to the final
score, equal to the sum of the average HCC risk score and the
proportion of dual eligible beneficiaries multiplied by 5 points,
subject to a 5-point cap. We believe combining these two indicators is appropriate
because, while these two indicators are correlated (with a correlation
coefficient of 0.487 based on our updated model, which includes 2016
PQRS data and the cost performance category weighted at 10 percent of
the final score), they are not interchangeable. We believe adding these
two indicators together recognizes the strengths of both approaches, as
well as the limitations of either approach in fully accounting for
patient complexity. We believe including both indicators will account
for MIPS eligible clinicians who see medically complex patients but do
not see many patients who are dual eligible, as well as MIPS eligible
clinicians who see dual eligible patients but do not see many medically
complex patients as defined by HCC risk scores.
As discussed later in this section, a 5-point cap was requested by
several commenters. While we did not want to mask poor performance, we
believe raising the cap to 5 points could be supported by the data and
would align with the small practice bonus. Using our proposed scoring
model, we observed a decrease in simulated scores of approximately 4
points (for individuals who report 6 or more quality measures) and
approximately 5 points (for groups) from the top quartile to the bottom
quartile for the average patient HCC risk score. Our updated model
showed a similar distribution. We believe that the 3-point cap we
proposed was justified in order to not mask poor performance, given the
differences in quartile scores, and we believe a cap of 5 points could
also be supported. Using our updated scoring model (described in the
regulatory impact analysis section VI.D of this final rule with comment
period), we estimate a decrease in simulated scores of 5.4 points (for
individuals who report 6 or more quality measures) and 4.5 points (for
groups) from the top quartile to the bottom quartile for the average
patient HCC risk scores and 4.8 points (for individuals who report 6 or
more quality measures) and 4.5 points (for groups) from the top
quartile to the bottom quartile for the dual eligible ratio. Therefore,
we believe a cap for the complex patient bonus of 5 points is supported
by this updated data with slightly higher differences in performance
based on HCC risk scores and dual eligibility.
We do not believe that adopting a threshold is appropriate at this
time because it would add additional complexity. In addition, a
threshold would likely create an artificial ``cliff,'' where MIPS
eligible clinicians just above the threshold would receive a bonus
while those just below the threshold would not, even though the
differences in patient populations between these two groups may be very
minimal. We also believe that separate bonuses for complex patients (as
opposed to a single combined score) may add unnecessary confusion to
MIPS eligible clinicians.
Comment: Many commenters supported the complex patient bonus but
requested that CMS increase the maximum number of points for the bonus.
Most of these commenters supported a cap of 5 points. The commenters
expressed the belief that MIPS eligible clinicians should have the
opportunity to receive as many points for the complex patient bonus as
they receive for the small practice bonus because commenters believe
patient complexity can have as much of an impact on performance as
practice size. The commenters furthermore believe that a bonus of only
1 to 3 points would have only a modest impact on the final score. A few
commenters requested that CMS adopt the same cap for HCC risk scores
that is used in the Next Generation ACO Model or Shared Savings
Program, where the HCC risk score cannot increase by more than 3
percent.
Response: We acknowledge the commenters' concern that a 3-point cap
would have only a modest impact on the final score and would not be
aligned with the small practice bonus. For the reasons described
earlier, we believe a 5-point cap for the complex patient bonus is
justified and are finalizing it for the 2020 MIPS payment year. We
think the 5 points will have slightly more impact on the final score
(while not masking poor performance) and is justified by the data. In
response to comments to adopt the cap used in the Next Generation ACO
Model or Shared Savings Program, we note that we are not currently
measuring increases in HCC risk scores over time but will evaluate any
impacts on diagnosis coding should the complex patient bonus continue.
Comment: Many commenters supported CMS's proposal to apply a
complex patient bonus to the final score based on HCC risk scores. The
commenters agreed that the complex patient bonus will help address the
resources needed to treat complex patients, without masking clinician
performance. Furthermore, the commenters believe that the complex
patient bonus will help protect access to care and offset incentives to
avoid treating the sickest patients. The commenters supported HCC risk
scores as a valid proxy for medical complexity, believing that they are
familiar to stakeholders.
Response: We appreciate the support of commenters for the proposed
complex patient bonus for the 2020 MIPS payment year. We continue to
believe the HCC risk score is a valid complex patient indicator and will
be incorporating it into the complex patient bonus along with the
dual eligible ratio, for the reasons described earlier. As we stated in
the CY 2018 Quality Payment Program proposed rule, we intend to monitor
the effect of the complex patient bonus and revisit future adjustments
or the continued need for an extension of the bonus through rulemaking.
Comment: Several commenters supported CMS's proposal to apply a
complex patient bonus but expressed the belief that the bonus,
particularly when combined with other bonuses at the performance
category level and at the final score level, creates confusion for MIPS
eligible clinicians.
[[Page 53774]]
Commenters urged CMS to align our approaches across these various
bonuses as much as possible to enhance stability and predictability for
MIPS eligible clinicians. Several commenters requested that CMS extend
the complex patient bonus to future years, in order to increase
stability in the Quality Payment Program and to help MIPS eligible
clinicians better predict which bonuses they will receive. The
commenters expressed the belief that modifying bonus points each year
will add complexity to the program and increase confusion for MIPS
eligible clinicians.
Response: We acknowledge the need for simplicity and predictability
in our MIPS scoring policies. For the reasons described earlier in this
final rule with comment period, we are modifying the complex patient bonus
to incorporate dual eligibility and HCC risk scores with a 5-point cap
to better align with the small practice bonus. We also agree with
commenters that, to the extent possible, we should try to maintain
stability over time in our approach to account for social risk in order
to minimize confusion and complexity for MIPS eligible clinicians.
However, as we note earlier in this final rule with comment period, we
intend this complex patient bonus as a short-term solution to account
for risk factors in MIPS as we continue to evaluate ongoing research in
this area as well as review available data to support various
approaches to accounting for risk factors. We plan to review results of
implementation of the complex patient bonus in the 2020 MIPS payment
year, as well as available reports, and as appropriate, update our
approach to accounting for risk factors.
Comment: A few commenters expressed support for CMS's alternative
approach to calculate a complex patient bonus based on proportion of
patients who are dual eligible. The commenters supported dual
eligibility as a proxy for social risk factors, an approach that is
currently used in the Medicare Advantage star ratings methodology. The
commenters stated that dual eligible patients are a high-cost, high-
risk population, and, therefore, the proportion of patients who are
dual eligible is an appropriate indicator of social risk. The
commenters further expressed concern with limitations in using HCC risk
scores, which they believe are subject to variations in coding
practices.
Response: We thank the commenters for their support. We agree that
dual eligibility is an appropriate indicator, and for the reasons
explained previously, we are including a dual eligibility ratio in the
calculation of the complex patient bonus.
Comment: Some commenters who supported CMS's proposal to apply a
complex patient bonus based on HCC risk scores pointed out some
limitations in using HCC risk scores for this purpose for our
consideration as we consider alternate methods in future years. For
example, some commenters expressed the belief that HCC risk scores are
subject to differences in coding, rather than being completely tied to
patient complexity. A few commenters stated that HCC risk scores are of
limited value due to the inadequacy of coding systems. For example, a
few commenters noted that inadequate coding exists for behavioral
health conditions, oncology, pediatrics, and rare diseases. Further,
the commenters expressed the belief that, even though HCC risk scores
include dual eligibility as one component, they do not adequately
capture social determinants of health. Some commenters further pointed
out that in the VM program, clinicians who cared for patients with high
HCC risk scores were more likely to receive negative payment
adjustments. A few commenters urged CMS to identify appropriate
adjustment mechanisms for quality measures in addition to the complex
patient bonus.
Response: We understand that HCC risk scores have some limitations,
particularly in that HCC values depend on coding to capture
medical complexity, and coding may not capture all of a patient's
medical conditions. However, we are unaware of other options that are
readily available that would be a more complete index of a patient's
medical complexity. We have decided to pair the HCC risk score with the
proportion of dual eligible patients to create a more complete complex
patient indicator than can be captured using HCC risk scores alone. We
note that the complex patient bonus would be available to all MIPS
eligible clinicians who submit data on at least one measure or activity
in a performance category, unlike in the VM program, which limits the
bonus for caring for high-risk beneficiaries to clinicians who qualify
for upward adjustments. We will evaluate additional options in future
years in order to better account for social risk factors while
minimizing unintended consequences.
Comment: One commenter urged CMS to look to POS codes (POS 31 SNF;
POS 32 NF) to provide further granularity in assessing the complexity
of clinicians' patient populations given the complexity of populations
in these settings.
Response: We thank the commenter for this suggestion. We will take into
consideration better accounting for the complexity of patients in these
facility settings in future rulemaking.
Comment: One commenter requested that the complex patient bonus be
determined based on new patient relationship codes, with a field to
document patient complexity.
Response: Thank you for this comment. We do not believe it is
feasible to include patient complexity with patient relationship codes
at this time, but we will take it into consideration in future
rulemaking.
Comment: A few commenters suggested that CMS offer education to
MIPS eligible clinicians on appropriate coding practices to enhance the
validity of HCC risk scores.
Response: Our intent in adopting a methodology for the complex
patient bonus based on HCC risk scores is to capture differences in
patient complexity, rather than differences in clinician coding
practices. We are aware that variations in coding practices may impact
HCC risk scores, but are unaware of other readily available indicators
that would better capture medical complexity. We intend to provide
guidance to MIPS eligible clinicians on calculation of the complex
patient bonus. As described earlier, we are also incorporating
the proportion of dual eligible beneficiaries into the complex patient
bonus and plan to develop appropriate educational materials.
Comment: Several commenters did not support the proposed complex
patient bonus for the 2020 MIPS payment year. Several commenters
expressed the belief that the proposed approach is too complex, while
others stated that because CMS has not yet identified an ideal method
to adjust for patient complexity, CMS should delay any bonus at this
time. A few commenters expressed the belief that implementing a complex
patient bonus that CMS plans to modify in future years will add
unnecessary confusion for MIPS eligible clinicians. A few commenters
stated that all of the various bonuses available under MIPS add a great
deal of complexity and uncertainty to the program. One commenter stated
that HCC risk scores tend to be lower for rural practices, citing
MedPAC's 2012 report on rural providers.
Response: While we work with stakeholders to identify a more
comprehensive, long-term approach to account for social risk factors,
we continue to believe a short-term strategy for the Quality Payment
Program based on data we have available to us is appropriate, despite
its limitations, to
[[Page 53775]]
address the impact of patient complexity. We are also finalizing a
revised complex patient bonus based on HCC risk scores and dual
eligibility with a 5-point cap for reasons described earlier. We intend
to identify additional ways we can minimize complexity in our approach
to accounting for social risk factors in future rulemaking. We also
intend to monitor for any disparities in HCC risk scores based on
whether a practice is located in a rural area, but in the meantime, we
are also incorporating a dual eligibility component to the complex
patient bonus which we believe will mitigate some concerns about basing
the complex patient bonus on HCC risk scores alone.
Comment: Several commenters did not support the use of dual
eligibility for calculating a complex patient bonus. For example,
several commenters expressed their belief that dual eligibility is not
a good proxy for social risk factors. The commenters pointed out that
Medicaid eligibility varies by state, particularly based on recent
trends in Medicaid expansion. A few commenters stated that HCC risk
scores are a more familiar concept to MIPS eligible clinicians than
dual eligibility.
Response: We continue to believe that dual eligibility is a valid
proxy for social risk factors which impacts performance in MIPS, which
has been used to account for social risk in other CMS programs, such as
Medicare Advantage star ratings. We note that HCC risk scores include
dual eligibility as one factor; however, we acknowledge that these two
indicators are not interchangeable (correlation coefficient of 0.487)
as HCC risk scores also include other aspects of patient complexity
(such as medical diagnoses). We are aware that dual eligibility may
vary by state, and we plan to continue to monitor alternative
approaches to accounting for social risk in the future.
Comment: One commenter requested that CMS use data from the
performance period rather than prospective data, because the
prospective approach does not account for new diagnoses.
Response: As we discussed above, Medicare Advantage uses prior year
diagnoses to set rates prospectively every year and we have employed
this approach in the VM (77 FR 69317 through 69318). While using data
from a prior period may not capture any new diagnoses for a patient, it
also mitigates the risk of ``upcoding'' which could happen if
concurrent risk adjustments were incorporated.
Comment: One commenter requested that CMS provide information on
the number of MIPS eligible clinicians who would be eligible for a
complex patient bonus under each of the two options, as well as the
overlap between the two.
Response: Under both options, all MIPS eligible clinicians would
receive a complex patient bonus as long as they submit data on at least
one measure or activity in a performance category; however, those with
higher complexity would receive a higher bonus score. Based on our
updated analysis, we estimate the median complex patient bonus would be
just under 3 points (2.97). Additional information can be found in
Table 27 of this final rule with comment period.
Comment: Several commenters suggested additional risk factors to
consider for bonuses. Several commenters requested that CMS incorporate
a bonus for MIPS eligible clinicians who care for American Indian/
Alaska Native patients because these patients tend to be more complex,
with a greater disease burden, and because clinicians caring for these
patients tend to have decreased resources. The commenters also
requested that CMS provide bonus points based on frailty, Adverse
Childhood Events (ACE), social risk factors, and other factors not
currently captured in the HCC risk score methodology. Several
commenters offered suggestions for future enhancements of the complex
patient bonus to ensure that it achieves the goals CMS has outlined
while reducing confusion and complexity for MIPS eligible clinicians
wherever possible. The commenters acknowledged that the proposed
approach has several limitations that must be addressed over time. The
commenters urged CMS to use the first year of the complex patient bonus
to monitor the impact of the complex patient bonus, receive feedback
from stakeholders, and explore more appropriate methods of accounting
for patient complexity, while continuing to monitor reports released by
NQF, ASPE, and others. The commenters also requested that CMS identify
ways to better account for certain patient populations, such as
patients with rare diseases.
Response: We appreciate these comments suggesting ways that we can
continue to enhance the complex patient bonus. We intend to explore
additional risk factors, as appropriate, as we consider approaches to
account for social risk in the future in the Quality Payment Program.
As noted in the CY 2018 Quality Payment Program proposed rule (82 FR
30135), our goals for the complex patient bonus are (1) to protect
access to care for complex patients and provide them with excellent
care; and (2) to avoid placing MIPS eligible clinicians who care for
complex patients at a potential disadvantage while we review the
completed studies and research to address the underlying issues.
Keeping these goals in mind, we also recognize the value of maintaining
stability wherever possible to reduce clinician confusion. Therefore,
we intend to take into consideration feedback we have received as we
consider approaches to account for social risk in future years that
minimize confusion and complexity in the Quality Payment Program.
Comment: One commenter suggested that CMS provide an exclusion to
the Quality Payment Program for clinicians who treat complex patients
because such an exclusion would reduce unnecessary risks and
uncertainties for MIPS eligible clinicians and impact treatment access.
Response: We do not have statutory authority to exclude MIPS
eligible clinicians based on patient complexity.
Final Action: After consideration of the public comments, we are
finalizing our proposal with modification for the 2020 MIPS payment
year. We are finalizing at Sec. 414.1380(c)(3) a complex patient bonus
for MIPS eligible clinicians, groups, APM Entities, and virtual groups
that submit data for at least one MIPS performance category during the
applicable performance period, which will be added to the final score.
We are finalizing at Sec. 414.1380(c)(3)(i) to calculate the complex
patient bonus for MIPS eligible clinicians and groups by adding the
average HCC risk score to the dual eligible ratio, based on full
benefit and partial benefit dual eligible beneficiaries, multiplied by
5. We are finalizing at Sec. 414.1380(c)(3)(ii) to calculate the
complex patient bonus for APM Entities and virtual groups by adding the
beneficiary weighted average HCC risk score for all MIPS eligible
clinicians, and if technically feasible, TINs for models and virtual
groups which rely on complete TIN participation, within the APM Entity
or virtual group, respectively, to the average dual eligible ratio for
all MIPS eligible clinicians, and if technically feasible, TINs for
models and virtual groups which rely on complete TIN participation,
within the APM Entity or virtual group, respectively, multiplied by 5.
We will calculate the average HCC risk score and dual eligible ratio as
described in the proposed rule (82 FR 30138 through 30139). We are
finalizing at Sec. 414.1380(c)(3)(iii) that the complex patient bonus
cannot exceed 5 points.
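For illustration only, the finalized calculation can be sketched as follows; the Python code below is not part of this final rule or the regulation text, and the function name and example figures are hypothetical.

```python
# Illustrative sketch of the finalized complex patient bonus for the 2020
# MIPS payment year: the average HCC risk score plus the dual eligible
# ratio multiplied by 5 points, capped at 5 points. The bonus applies only
# if data are submitted for at least one MIPS performance category.

def finalized_complex_patient_bonus(average_hcc_risk_score,
                                    dual_eligible_ratio,
                                    submitted_any_category=True):
    if not submitted_any_category:
        return 0.0
    bonus = average_hcc_risk_score + dual_eligible_ratio * 5.0
    return min(bonus, 5.0)

# Hypothetical example: an average HCC risk score of 1.8 and a dual
# eligible ratio of 0.4 yield a complex patient bonus of 3.8 points.
print(finalized_complex_patient_bonus(1.8, 0.4))  # 3.8
```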
[[Page 53776]]
Using our scoring model, we estimate that the average complex
patient bonus will range from 2.52 in the first HCC quartile to 3.72 in
the highest HCC quartile for all MIPS eligible clinicians. Table 27
includes the distribution for the complex patient bonus under our final
policy, along with bonuses based on the proposed approach (based on HCC
risk scores only) and the alternate approach (based on dual eligible
ratio).
Table 27--Estimated Complex Patient Bonus for Finalized, Proposed, and Alternate Approach
----------------------------------------------------------------------------------------------------------------
                                                                    HCC bonus *      Dual bonus **     HCC + dual
                                                                     (2018 QPP        (2018 QPP        bonus (final
                                                                   proposed rule)    proposed rule       policy)
                                                                                       alternate
                                                                                       proposal)
----------------------------------------------------------------------------------------------------------------
HCC Quartile
----------------------------------------------------------------------------------------------------------------
Quartile 1--Lowest Average HCC Score............................ 1.26 1.33 2.52
Quartile 2...................................................... 1.51 1.56 3.06
Quartile 3...................................................... 1.63 1.81 3.43
Quartile 4--Highest Average HCC Score........................... 1.86 1.97 3.72
----------------------------------------------------------------------------------------------------------------
Dual Eligible Quartile
----------------------------------------------------------------------------------------------------------------
Quartile 1--Low Proportion of Dual Eligible..................... 1.42 0.84 2.19
Quartile 2...................................................... 1.54 1.36 2.85
Quartile 3...................................................... 1.68 2.01 3.68
Quartile 4--Highest Proportion of Dual Eligible................. 1.64 2.46 4.02
----------------------------------------------------------------------------------------------------------------
* Includes a 3-point cap.
** Calculated as dual eligible ratio times 5.
(c) Small Practice Bonus for the 2020 MIPS Payment Year
Eligible clinicians and groups who work in small practices are a
crucial part of the health care system. The Quality Payment Program
provides options designed to make it easier for these MIPS eligible
clinicians and groups to report on performance and quality and
participate in advanced alternative payment models for incentives. We
have heard directly from clinicians in small practices that they face
unique challenges related to financial and other resources,
environmental factors, and access to health information technology. We
heard from many commenters that the Quality Payment Program gives an
advantage to large organizations because such organizations have more
resources invested in the infrastructure required to track and report
measures to MIPS. We also observed that, based on our scoring model,
which is described in the regulatory impact analysis in the CY 2018
Quality Payment Program proposed rule (82 FR 30233 through 30241),
practices with more than 100 clinicians may perform better in the
Quality Payment Program, on average, compared to smaller practices. We
believe this trend is due primarily to two factors: participation rates
and submission mechanism. Based on the most recent PQRS data available,
practices with 100 or more MIPS eligible clinicians have participated
in the PQRS at a higher rate than small practices (99.4 percent
compared to 69.7 percent, respectively). As we indicate in our
regulatory impact analysis in the CY 2018 Quality Payment Program
proposed rule (82 FR 30233 through 30241), we believe participation
rates based only on historic 2015 quality data submitted under PQRS
significantly underestimate the expected participation in MIPS
particularly for small practices. Therefore, we have modeled the
regulatory impact analysis using minimum participation assumptions of
80 percent and 90 percent participation for each practice size category
(1-15 clinicians, 16-24 clinicians, 25-99 clinicians, and 100 or more
clinicians). However, even with these enhanced participation
assumptions, MIPS eligible clinicians in small practices would have
lower participation in MIPS than MIPS eligible clinicians in larger
practices have had in PQRS, as 80 or 90 percent participation is still
much lower than the 99.4 percent PQRS participation for MIPS eligible
clinicians in practices with 100 or more clinicians.
In addition, the most recent PQRS data (from CY 2016) indicates
practices with 100 or more MIPS eligible clinicians are more likely to
report as a group, rather than individually, which reduces burden to
individuals within those practices due to the unified nature of group
reporting. Specifically, 62.1 percent of practices with 100 or more
MIPS eligible clinicians have reported via CMS Web Interface (either
through the Shared Savings Program or as a group practice) compared to
22.4 percent of small practices (the CMS Web Interface reporting
mechanism is only available to small practices participating in the
Shared Savings Program or Next Generation ACO Model).\10\
---------------------------------------------------------------------------
\10\ Groups must have at least 25 clinicians to participate in
Web Interface.
---------------------------------------------------------------------------
These two factors have financial implications based on the MIPS
scoring model described in the CY 2018 Quality Payment Program proposed
rule (82 FR 30233 through 30241). Looking at the combined impact
performance, we observed consistent trends for small practices in
various scenarios. The combined impact of performance measurement looks
at the aggregate net percent change (the combined impact of MIPS
negative and positive adjustments). The MIPS payment adjustment is
connected to the final score because final scores below the performance
threshold receive a negative MIPS payment adjustment and final scores
above the performance threshold receive a positive MIPS payment
adjustment. In analyzing the combined impact performance, we see MIPS
eligible clinicians in small practices consistently have a lower
combined impact performance than larger practices based on actual
historical data and after we apply the 80 and 90 percent participation
assumptions.
Due to these challenges, we proposed an adjustment to the final
score for MIPS eligible clinicians in small practices (referred to
herein as the ``small practice bonus'') to recognize these barriers and
to incentivize MIPS
[[Page 53777]]
eligible clinicians in small practices to participate in the Quality
Payment Program and to overcome any performance discrepancy due to
practice size (82 FR 30139 through 30140). To receive the small
practice bonus, we proposed that the MIPS eligible clinician must
participate in the program by submitting data on at least one
performance category in the 2018 MIPS performance period. Therefore,
MIPS eligible clinicians would not need to meet submission requirements
for the quality performance category in order to receive the bonus
(they could instead submit improvement activities or advancing care
information measures only or submit fewer than the required number of
measures for the quality performance category). Additionally, we
proposed that group practices, virtual groups, or APM Entities that
consist of a total of 15 or fewer clinicians may receive the small
practice bonus.
We proposed at Sec. 414.1380(c)(4) to add a small practice bonus
of 5 points to the final score for MIPS eligible clinicians who
participate in MIPS for the 2018 MIPS performance period and are in
small practices, virtual groups, or APM Entities with 15 or fewer
clinicians (the entire virtual group or APM Entity combined must
include 15 or fewer clinicians to qualify for the bonus). We proposed
in the CY 2018 Quality Payment Program proposed rule that if the result
of the calculation is greater than 100 points, then the final score
would be capped at 100 points (82 FR 30140). This bonus is intended to
be a short-term strategy to help small practices transition to MIPS;
therefore, we proposed the bonus only for the 2018 MIPS performance
period (2020 MIPS payment year) and will assess on an annual basis
whether to continue the bonus and how the bonus should be structured.
We invited public comment on our proposal to apply a small practice
bonus for the 2020 MIPS payment year.
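As an illustration only (a sketch under the proposal described above, not CMS source code), the proposed bonus can be expressed as adding 5 points to the final score of a qualifying practice, virtual group, or APM Entity of 15 or fewer clinicians that submits data on at least one performance category, with the resulting final score capped at 100 points. The function and variable names are assumptions for the example.

SMALL_PRACTICE_BONUS = 5.0
FINAL_SCORE_CAP = 100.0

def apply_small_practice_bonus(final_score: float,
                               clinician_count: int,
                               submitted_any_category: bool) -> float:
    """Add the proposed 5-point small practice bonus and cap the result at 100."""
    if clinician_count <= 15 and submitted_any_category:
        final_score += SMALL_PRACTICE_BONUS
    return min(final_score, FINAL_SCORE_CAP)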
We also considered applying a bonus for MIPS eligible clinicians
that practice in either a small practice or a rural area. However,
because we saw, on average, less than a 1-point difference between
scores for MIPS eligible clinicians who practice in rural areas and
those who do not, we did not propose a bonus for those who practice in
a rural area, but we plan to continue to monitor the Quality Payment
Program's impacts on the performance of those who practice in rural areas. We
also sought comment on the application of a rural bonus in the future,
including available evidence demonstrating differences in clinician
performance based on rural status. If we implement a bonus for
practices located in rural areas, we would use the definition for rural
specified in section II.C.1.d. of this final rule with comment period
for individuals and groups (including virtual groups).
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters supported the proposed small practice
bonus for the 2020 MIPS payment year. The commenters expressed the
belief that this bonus will help address the particular challenges that
small practices experience in participating in MIPS, including the
resources needed to create an infrastructure to meet MIPS reporting
requirements. Several commenters believe that the small practice bonus
will help to encourage small practices to participate in MIPS. A few
commenters supported the application of the small practice bonus to
group practices, virtual groups, and APM Entities.
Response: We thank commenters for their support. We agree that the
small practice bonus will help to alleviate the impact of some of the
particular challenges that small practices experience in participating
in MIPS on performance, and believe that the bonus will help
incentivize these practices to participate in MIPS.
Comment: A few commenters requested that CMS extend the small
practice bonus to future years of MIPS to maintain stability. One
commenter requested that CMS reevaluate the small practice bonus in
future years to ensure that it is sufficient to overcome any
discrepancies due to practice size.
Response: We are finalizing the small practice bonus for the 2020
MIPS payment year only. We intend to continue to evaluate options to
address challenges small practices face to participate in MIPS in
future rulemaking, including continuation of the small practice bonus,
as appropriate.
Comment: A few commenters expressed the belief that some practices
may not meet the definition of a small practice due to the use of part-
time, temporary staff that will cause them to exceed the 15-clinician
threshold. One commenter suggested that CMS revise the definition of a
small practice to include full-time employees only. One commenter
requested that CMS expand the small practice bonus to practices of 16
to 24 clinicians.
Response: We thank the commenters for alerting us to this potential
limitation in our definition of a small practice, and we will monitor
the impact of part-time and temporary staff to determine whether we
should propose changes to the small practice bonus in future
rulemaking. However, we also believe it is important to maintain
consistency within the Quality Payment Program, so we intend to align
this bonus with our definition of small practices under Sec. 414.1305.
In addition, we have not seen the same discrepancies in simulated MIPS
final scores among practices of 16-24 clinicians that we have observed
for practices of 15 or fewer clinicians.
Comment: One commenter suggested that CMS reduce the small practice
bonus to 3 points, instead of 5 points. The commenter expressed the
belief that the small practice bonus represented too great a proportion
of the performance threshold (5 points of the proposed 15-point
performance threshold which represents 30 percent of the points).
Response: We believe a bonus of 5 points is appropriate to
acknowledge the challenges small practices face in participating in
MIPS, and to help them achieve the performance threshold finalized at
section II.C.8.c. of this final rule with comment period at 15 points
for the 2020 MIPS payment year, as this bonus represents one-third of
the total points needed to meet or exceed the performance threshold and
receive a neutral or positive payment adjustment. With a small practice
bonus of 5 points, small practices could achieve this performance
threshold by reporting 3 quality measures or 1 quality measure and 1
medium weighted improvement activity.\11\
---------------------------------------------------------------------------
\11\ Assuming the small practice did not submit data for the
advancing care information performance category and applied for the
hardship exception and had the advancing care information
performance category weight redistributed to the quality performance
category, the small practice would have a final score with 75
percent weight from the quality performance category score, 15
percent from improvement activities, and 10 percent from cost. With
the proposed scoring for small practices, submitting one measure one
time would provide at least 3 measure achievement points out of 60
total available measure points. With 75 percent quality performance
category weight, each quality measure would be worth at least 3.75
points towards the final score. ((3/60) x 75% x 100 = 3.75 points).
For improvement activities, each medium weighted activity is worth
20 out of 40 possible points, which translates to 7.5 points toward the
final score ((20/40) x 15% x 100 = 7.5 points). The final score would
be at least 3.75 points for quality + 7.5 points for improvement
activities + a 5 point small practice bonus, which equals 16.25 points
without considering cost or the complex patient bonus.
---------------------------------------------------------------------------
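The arithmetic in footnote 11 can be reproduced as follows (a sketch only, assuming the reweighted category weights described in the footnote: 75 percent quality, 15 percent improvement activities, and 10 percent cost).

quality_points = (3 / 60) * 0.75 * 100   # 3.75 points toward the final score
ia_points = (20 / 40) * 0.15 * 100       # 7.5 points toward the final score
small_practice_bonus = 5.0

minimum_final_score = quality_points + ia_points + small_practice_bonus
print(minimum_final_score)  # 16.25 points, before cost or the complex patient bonus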
Comment: One commenter suggested that CMS require small practices
to report on at least 2 performance categories in order to receive the
small practice bonus.
Response: We continue to believe that it is appropriate to require
MIPS eligible clinicians to report on only one performance category in
order to receive the small practice bonus because we
[[Page 53778]]
want to encourage small practices to participate in MIPS and we are
still in a transition phase. We may reconsider the need for the bonus
or augment requirements for small practices to receive the small
practice bonus in future rulemaking.
Comment: Some commenters did not support the small practice bonus
as proposed. Some commenters expressed the belief that CMS should
instead focus on providing technical assistance to small and rural
practices who may struggle to meet MIPS reporting requirements. One
commenter suggested that CMS calculate different performance thresholds
based on practice size. A few commenters expressed the belief that the
small practice bonus, along with additional available bonuses, may make
it difficult for other MIPS eligible clinicians to succeed in MIPS and
earn a positive adjustment. One commenter expressed the belief that
small practices can be competitive in MIPS by participating in a
virtual group or reporting quality measures above the minimum number.
Another commenter expressed the belief that the small practice bonus is
not sufficient to overcome the disparities small practices face to
succeed in MIPS.
Response: We intend to explore other approaches to account for the
impact of practice size on MIPS performance in future rulemaking as
well as monitor for any unintended consequences of the bonus in the
MIPS program, including impact on MIPS eligible clinicians who are not
in small practices. We are not able to create different performance
thresholds based on practice size because we believe section
1848(q)(6)(D) of the Act requires us to establish one performance
threshold applicable to all MIPS eligible clinicians for a year. We
believe that technical support is critical for the success of small
practices in reporting for MIPS, but we also believe that a bonus is
appropriate at this time due to the discrepancies in performance we
observed for clinicians in small practices as compared with clinicians
in practices with 100 or more clinicians. We have launched the Small,
Underserved, and Rural Support initiative, a 5-year program, to provide
technical support to MIPS eligible clinicians in small practices. The
program provides assistance to practices in selecting and reporting on
quality measures, education and outreach, and support for optimizing
health information technology.
Comment: Several commenters requested that CMS implement a similar
bonus for rural practices. The commenters noted that not all rural
practices meet the definition of a small practice, but these practices
face unique challenges in meeting MIPS reporting requirements. For
example, the commenters expressed the belief that rural practices face
particular challenges in adopting health information technology.
Commenters further noted that rural practices lack resources to help
achieve high performance on quality measures. One commenter expressed
the belief that relying on data from preceding programs such as PQRS
and the VM to estimate the impact of rural status on performance may
provide an incomplete picture due to low participation rates for rural
practices in these legacy programs. One commenter expressed the belief
that a rural practice bonus may signal that quality standards for
patients in rural areas do not need to be as high as those for patients
in non-rural areas.
Response: As we discussed in the CY 2018 Quality Payment Program
proposed rule (82 FR 30140), we observed that performance for rural
MIPS eligible clinicians is very similar to performance for non-rural
MIPS eligible clinicians once we account for practice size, so we do not
believe a bonus for MIPS eligible clinicians practicing in a rural
setting is appropriate at this time. We acknowledge that legacy program
data may not provide a complete picture of MIPS participation rates for
practices located in rural areas. We will continue to monitor impacts
of rural status on performance in the MIPS program and if warranted,
propose adjustments through future rulemaking.
Final Action: After consideration of the public comments, we are
finalizing at Sec. 414.1380(c)(4) our proposal to add a small practice
bonus of 5 points to the final score for MIPS eligible clinicians,
groups, APM Entities, and virtual groups that meet the definition of a
small practice as defined at Sec. 414.1305 and submit data on at least
one performance category in the 2018 performance period.
We seek comment on approaches to better align final score and
performance category level bonuses for simplicity in future rulemaking.
(2) Final Score Calculation
We proposed a formula for the final score calculation for MIPS
eligible clinicians, groups, virtual groups, and APM Entities at Sec.
414.1380(c), which includes the proposed complex patient and small
practice bonuses. We also proposed to revise the policy finalized in
the CY 2017 Quality Payment Program final rule to assign MIPS eligible
clinicians with only 1 scored performance category a final score that
is equal to the performance threshold (81 FR 77326 through 77328) (we
noted that we inadvertently failed to codify this policy in Sec.
414.1380(c)). We proposed this revision to the policy to account for
our proposal in the CY 2018 Quality Payment Program proposed rule (82
FR 30144 through 30146) for extreme and uncontrollable circumstances
which, if finalized, could result in a scenario where a MIPS eligible
clinician is not scored on any performance categories. To reflect this
proposal, we proposed to add to Sec. 414.1380(c) that a MIPS eligible
clinician with fewer than 2 performance category scores would receive a
final score equal to the performance threshold.
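For illustration only, the proposed calculation at Sec. 414.1380(c) can be sketched as follows (not CMS source code; category scores and weights are expressed as fractions from 0 to 1, and the function signature and names are assumptions for the example): the weighted sum of the performance category scores is multiplied by 100, the complex patient and small practice bonuses are added, the total is capped at 100 points, and a MIPS eligible clinician with fewer than 2 performance category scores receives a final score equal to the performance threshold.

def final_score(category_scores: dict,      # category name -> percent score (0.0-1.0)
                category_weights: dict,     # category name -> weight (0.0-1.0)
                complex_patient_bonus: float,
                small_practice_bonus: float,
                performance_threshold: float) -> float:
    """Illustrative sketch of the proposed final score calculation."""
    scored = [c for c, w in category_weights.items()
              if w > 0 and c in category_scores]
    if len(scored) < 2:
        # Fewer than 2 scored performance categories: the final score
        # equals the performance threshold.
        return performance_threshold
    weighted = sum(category_scores[c] * category_weights[c] for c in scored) * 100
    return min(weighted + complex_patient_bonus + small_practice_bonus, 100.0)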
With the proposed addition of the complex patient and small
practice bonuses, we also proposed to strike the following phrase from
the final score definition at Sec. 414.1305: ``The final score is the
sum of each of the products of each performance category score and each
performance category's assigned weight, multiplied by 100.'' We
believed this portion of the definition would be incorrect and
redundant of the proposed revised regulation at Sec. 414.1380(c).
We requested public comment on the proposed final score methodology
and associated revisions to regulation text.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: One commenter supported the proposal for MIPS eligible
clinicians who are scored on fewer than two performance categories to
receive a final score equal to the performance threshold.
Response: We thank the commenter for their support of our proposal.
Comment: Several commenters expressed the belief that calculation
of the final score is overly confusing for MIPS eligible clinicians. A
few commenters suggested that CMS modify our scoring methodology so
that performance category points are equal to points in the final
score. This would mean that, for example, the advancing care
information performance category total possible points would be 25
points which would be equal to the generally applicable weighting for
the advancing care information performance category of 25 points.
Response: Simplification in scoring is a core goal of the MIPS
program so that MIPS eligible clinicians can easily understand how the
final score is calculated. In determining scoring policies for the MIPS
program, we kept this goal in mind whenever possible. The weighting of
performance categories can vary for different MIPS eligible
[[Page 53779]]
clinicians, such as when there are not sufficient measures or
activities applicable and available to a clinician, and the performance
categories are reweighted in their final score. For example, non-
patient facing MIPS eligible clinicians can qualify for reweighting of
the advancing care information performance category. If a non-patient
facing MIPS eligible clinician does not submit advancing care
information data, then the advancing care information performance
category weight would be redistributed to the quality performance
category, and the non-patient facing MIPS eligible clinician would have
a 75 percent weighting for quality instead of 50 percent (see Table 28 of
this final rule with comment period for the different potential
redistribution combinations). Therefore, it is not possible to have a
single scoring system that generates the exact number of points toward
the final score. Instead, we have created a system where a clinician
receives a performance category score and then that score is multiplied
by the weight assigned to the performance category. We intend to
continue to explore approaches to simplify MIPS scoring in future
rulemaking.
In the meantime, we seek comment on approaches to display scores
and provide feedback to MIPS eligible clinicians in a way that MIPS
eligible clinicians can easily understand how their scores are
calculated, including how performance category scores are translated to
a final score. We also seek comment on how to simplify the scoring
system while still recognizing differences in clinician practices.
Final Action: After consideration of public comments, we are
finalizing the revisions to Sec. 414.1380(c) and Sec. 414.1305 as
proposed.
(3) Final Score Performance Category Weights
(a) General Weights
Section 1848(q)(5)(E)(i) of the Act specifies weights for the
performance categories included in the MIPS final score: In general, 30
percent for the quality performance category, 30 percent for the cost
performance category, 25 percent for the advancing care information
performance category, and 15 percent for the improvement activities
performance category. However, that section also specifies different
weightings for the quality and cost performance categories for the
first and second years for which the MIPS applies to payments. Section
1848(q)(5)(E)(i)(II)(bb) of the Act specifies that for the transition
year, not more than 10 percent of the final score will be based on the
cost performance category, and for the 2020 MIPS payment year, not more
than 15 percent will be based on the cost performance category. Under
section 1848(q)(5)(E)(i)(I)(bb) of the Act, the weight of the quality
performance category for each of the first 2 years will increase by the
difference of 30 percent minus the weight specified for the cost
performance category for the year.
In the CY 2017 Quality Payment Program final rule, we established
the weights of the cost performance category as 10 percent of the final
score (81 FR 77166) and the quality performance category as 50 percent
of the final score (81 FR 77100) for the 2020 MIPS payment year. While
we proposed in the CY 2018 Quality Payment Program proposed rule (82 FR
30047 through 30048) to change the weight of the cost performance
category to zero percent and to change the weight of the quality
performance category to 60 percent for the 2020 MIPS payment year, we
are finalizing a weight of 10 percent for cost for the 2020 MIPS
payment year, so the quality performance category weight will be 50
percent (82 FR 30037 through 30038). We refer readers to sections
II.C.6.b. and II.C.6.d. of this final rule with comment period for
further information on the final policies related to the weight of the
quality and cost performance categories, including our rationale for
our weighting for each category.
As specified in section 1848(q)(5)(E)(i) of the Act, the weights
for the other performance categories are 25 percent for the advancing
care information performance category and 15 percent for the
improvement activities performance category. Section 1848(q)(5)(E)(ii)
of the Act provides that in any year in which the Secretary estimates
that the proportion of eligible professionals (as defined in section
1848(o)(5) of the Act) who are meaningful EHR users (as determined in
section 1848(o)(2) of the Act) is 75 percent or greater, the Secretary
may reduce the applicable percentage weight of the advancing care
information performance category in the final score, but not below 15
percent. For more on our policies concerning section 1848(q)(5)(E)(ii)
of the Act and a review of our proposal for reweighting the advancing
care information performance category in the event that the proportion
of MIPS eligible clinicians who are meaningful EHR users is 75 percent
or greater starting with the 2019 MIPS performance period, we refer
readers to section II.C.6.f.(5) of this final rule with comment period.
Table 28 summarizes the weights specified for each performance
category.
Table 28--Weights by MIPS Performance Category
----------------------------------------------------------------------------------------------------------------
                                                                      Transition      2020 MIPS        2021 MIPS
                          Performance category                         year (%)     payment year    payment year and
                                                                                          (%)          beyond (%)
----------------------------------------------------------------------------------------------------------------
Quality......................................................... 60 50 30
Cost............................................................ 0 10 30
Improvement Activities.......................................... 15 15 15
Advancing Care Information*..................................... 25 25 25
----------------------------------------------------------------------------------------------------------------
* As described in section II.C.6.f.(5) of this final rule with comment period, the weight for advancing care
information could decrease (not below 15 percent) starting with the 2021 MIPS payment year if the Secretary
estimates that the proportion of physicians who are meaningful EHR users is 75 percent or greater.
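Table 28 can also be read as the following small data structure (shown for illustration only, not CMS source code; weights are expressed as fractions of the final score).

PERFORMANCE_CATEGORY_WEIGHTS = {
    "transition_year": {"quality": 0.60, "cost": 0.00,
                        "improvement_activities": 0.15,
                        "advancing_care_information": 0.25},
    "2020_payment_year": {"quality": 0.50, "cost": 0.10,
                          "improvement_activities": 0.15,
                          "advancing_care_information": 0.25},
    "2021_payment_year_and_beyond": {"quality": 0.30, "cost": 0.30,
                                     "improvement_activities": 0.15,
                                     "advancing_care_information": 0.25},
}

# Each year's weights sum to 100 percent of the final score.
assert all(abs(sum(weights.values()) - 1.0) < 1e-9
           for weights in PERFORMANCE_CATEGORY_WEIGHTS.values())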
(b) Flexibility for Weighting Performance Categories
Under section 1848(q)(5)(F) of the Act, if there are not sufficient
measures and activities applicable and available to each type of MIPS
eligible clinician involved, the Secretary shall assign different
scoring weights (including a weight of zero) for each performance
category based on the extent to which the category is applicable and
for each measure and activity based on the extent to which the measure
or activity is applicable and available to the type of MIPS eligible
clinician involved. For the 2020 MIPS payment year, we proposed to
assign a scoring weight of zero percent to a performance category
[[Page 53780]]
and redistribute its weight to the other performance categories in the
following scenarios.
For the quality performance category, we proposed that having
sufficient measures applicable and available means that we can
calculate a quality performance category percent score for the MIPS
eligible clinician because at least one quality measure is applicable
and available to the MIPS eligible clinician. Based on the volume of
measures available to MIPS eligible clinicians via the multiple
submission mechanisms, we stated that we generally believe there will
be at least 1 quality measure applicable and available to every MIPS
eligible clinician. If we receive no quality performance category
submission from a MIPS eligible clinician, the MIPS eligible clinician
generally will receive a performance category score of zero (or
slightly above zero if the all-cause hospital readmission measure
applies because the clinician submits data for a performance category
other than the quality performance category).\12\ However, as described
in the CY 2018 Quality Payment Program proposed rule (82 FR 30108
through 30109), there may be rare instances that we believe could
affect only a very limited subset of MIPS eligible clinicians (as well
as groups and virtual groups) that may have no quality measures
available and applicable and for whom we receive no quality performance
category submission (and for whom the all-cause hospital readmission
measure does not apply). In those instances, we would not be able to
calculate a quality performance category percent score.
---------------------------------------------------------------------------
\12\ As discussed in the CY 2017 Quality Payment Program final
rule (81 FR 77300), groups of 16 or more eligible clinicians that
meet the applicable case minimum requirement are automatically
scored on the all-cause readmission measure, even if they do not
submit any other data under the quality performance category,
provided that they submit data under one of the other performance
categories. If such groups do not submit data under any performance
category, the readmission measure is not scored.
---------------------------------------------------------------------------
The proposed quality performance category scoring policies for the
2020 MIPS payment year continue many of the special scoring policies
from the transition year which would enable us to determine a quality
performance category percent score whenever a MIPS eligible clinician
has submitted at least 1 quality measure. In addition, MIPS eligible
clinicians that do not submit quality measures when they have them
available and applicable would receive a quality performance category
percent score of zero percent. It is only in the rare scenarios when we
determine that a MIPS eligible clinician does not have any relevant
quality measures available to report or the MIPS eligible clinician is
approved for reweighting the quality performance category based on
extreme and uncontrollable circumstances as proposed in the CY 2018
Quality Payment Program proposed rule (82 FR 30142 through 30144), that
we would reweight the quality performance category.
For the cost performance category, we stated that we continue to
believe that having sufficient measures applicable and available means
that we can reliably calculate a score for the cost measures that
adequately captures and reflects the performance of a MIPS eligible
clinician, and that MIPS eligible clinicians who are not attributed
enough cases to be reliably measured should not be scored for the cost
performance category (82 FR 30142). We established a policy in the CY
2017 Quality Payment Program final rule that if a MIPS eligible
clinician is not attributed enough cases for a measure (in other words,
has not met the required case minimum for the measure), or if a measure
does not have a benchmark, then the measure will not be scored for that
clinician (81 FR 77323). If we do not score any cost measures for a
MIPS eligible clinician in accordance with this policy, then the
clinician would not receive a cost performance category percent score.
Because we proposed in the CY 2018 Quality Payment Program proposed
rule to set the weight of the cost performance category to zero percent
of the final score for the 2020 MIPS payment year, we did not propose
to redistribute the weight of the cost performance category to any
other performance categories for the 2020 MIPS payment year. In the
event we did not finalize this proposal, we proposed to redistribute
the weight of the cost performance category as described in the CY 2018
Quality Payment Program proposed rule (82 FR 30144 through 30146).
For the improvement activities performance category, we stated the
belief that all MIPS eligible clinicians will have sufficient
activities applicable and available; however, as discussed in the CY
2018 Quality Payment Program proposed rule (82 FR 30142 through 30144),
we believe there are limited extreme and uncontrollable circumstances,
such as natural disasters, where a clinician is unable to report
improvement activities. Barring these circumstances, we did not propose
any changes that would affect our ability to calculate an improvement
activities performance category score.
We refer readers to the CY 2018 Quality Payment Program proposed
rule (82 FR 30075 through 30079) for a detailed discussion of our
proposals and policies under which we would not score the advancing
care information performance category and would assign a weight of zero
percent to that category for a MIPS eligible clinician.
We invited public comment on our interpretation of sufficient
measures available and applicable in the performance categories.
Final Action: We did not receive any comments. We are finalizing
our proposed policies for our interpretation of measures available and
applicable for the quality, cost, and improvement activities
performance categories for the 2020 MIPS payment year.
(c) Extreme and Uncontrollable Circumstances
In the CY 2017 Quality Payment Program final rule (81 FR 77241
through 77243), we discussed our belief that extreme and uncontrollable
circumstances, such as a natural disaster in which an EHR or practice
location is destroyed, can happen at any time and are outside a MIPS
eligible clinician's control. We stated that if a MIPS eligible
clinician's CEHRT is unavailable as a result of such circumstances,
then the measures specified for the advancing care information
performance category may not be available for the MIPS eligible
clinician to report. We established a policy allowing a MIPS eligible
clinician affected by extreme and uncontrollable circumstances to
submit an application to us to be considered for reweighting of the
advancing care information performance category under section
1848(q)(5)(F) of the Act. Although we proposed (82 FR 30075 through
30078) to use the authority in the last sentence of section
1848(o)(2)(D) of the Act, as amended by section 4002(b)(1)(B) of the
21st Century Cures Act, as the authority for this policy, rather than
section 1848(q)(5)(F) of the Act, we continue to believe that extreme
and uncontrollable circumstances could affect the availability of a
MIPS eligible clinician's CEHRT and the measures specified for the
advancing care information performance category.
While we had not adopted a similar reweighting policy for the other
performance categories in the transition year, we stated that we
believe a similar reweighting policy may be appropriate for the
quality, cost, and improvement activities performance categories
beginning with the 2020 MIPS payment year (82 FR 30142). For these
performance categories, we proposed to define ``extreme and
uncontrollable circumstances'' as rare (that is, highly
[[Page 53781]]
unlikely to occur in a given year) events entirely outside the control
of the clinician and of the facility in which the clinician practices
that cause the MIPS eligible clinician to not be able to collect
information that the clinician would submit for a performance category
or to submit information that would be used to score a performance
category for an extended period of time (for example, 3 months could be
considered an extended period of time with regard to information a
clinician would collect for the quality performance category). For
example, a tornado or fire destroying the only facility in which a
clinician practices likely would be considered an ``extreme and
uncontrollable circumstance;'' however, neither the inability to renew
a lease--even a long or extended lease--nor a facility being found not
compliant with federal, state, or local building codes or other
requirements would be considered ``extreme and uncontrollable
circumstances''. We proposed that we would review both the
circumstances and the timing to assess the availability and
applicability of measures and activities independently for each
performance category. For example, in 2018 the performance period for
improvement activities is only 90 days, whereas it is 12 months for the
quality performance category, so an issue lasting 3 months may have
more impact on the availability of measures for the quality performance
category than for the improvement activities performance category,
because the MIPS eligible clinician, conceivably, could participate in
improvement activities for a different 90-day period.
We stated that we believe that extreme and uncontrollable
circumstances, such as natural disasters, may affect a clinician's
ability to access or submit quality measures via all submission
mechanisms (effectively rendering the measures unavailable to the
clinician), as well as the availability of numerous improvement
activities. In addition, damage to a facility where care is provided
due to a natural disaster, such as a hurricane, could result in the
practice management and clinical systems that are used for the
collection or submission of data being down, thus impacting a
clinician's ability to submit necessary information via Qualified
Registry, QCDR, CMS Web Interface, or claims. This policy would not
include issues that third-party intermediaries, such as EHRs, Qualified
Registries, or QCDRs, might have submitting information to MIPS on
behalf of a MIPS eligible clinician. Instead, this policy is geared
towards events, such as natural disasters, that affect the MIPS
eligible clinician's ability to submit data to the third-party
intermediary, which in turn, could affect the ability of the clinician
(or the third-party intermediary acting on their behalf) to
successfully submit measures and activities to MIPS.
We also proposed to use this policy for measures which we derive
from claims data, such as the all-cause hospital readmission measure
and the cost measures. Other programs, such as the Hospital VBP
Program, allow hospitals to submit exception applications when ``a
hospital is able to continue to report data on measures . . . but can
demonstrate that its Hospital VBP Program measure rates are negatively
impacted as a result of a natural disaster or other extraordinary
circumstance and, as a result, the hospital receives a lower value-
based incentive payment'' (78 FR 50705). For the Hospital VBP Program,
we ``interpret[ed] the minimum numbers of cases and measures
requirement in the Act to enable us to not score . . . all applicable
quality measure data from a performance period and, thus, exclude the
hospital from the Hospital VBP Program for a fiscal year during which
the hospital has experienced a disaster or other extraordinary
circumstance'' (78 FR 50705). Hospitals that request and are granted an
exception are exempted from the Program entirely for the applicable
year.
For the 2020 MIPS payment year, we would score quality measures and
assign points even for those clinicians who do not meet the case
minimums for the quality measures they submit. However, we established
a policy not to score a cost measure unless a MIPS eligible clinician
has met the required case minimum for the measure (81 FR 77323), and
not to score administrative claims measures, such as the all-cause
hospital readmission measure, if they cannot be reliably scored against
a benchmark (81 FR 77288 through 77289). Even if the required case
minimums have been met and we are able to reliably calculate scores for
the measures that are derived from claims, we believe a MIPS eligible
clinician's performance on those measures could be adversely impacted
by a natural disaster or other extraordinary circumstance, similar to
the issues we identified for the Hospital VBP Program. For example, the
claims data used to calculate the cost measures or the all-cause
hospital readmission measure could be significantly affected if a
natural disaster caused wide-spread injury or health problems for the
community, which could not have been prevented by high-value
healthcare. In such cases, we believe that the measures are available
to the clinician, but are likely not applicable, because the extreme
and uncontrollable circumstance has disrupted practice and measurement
processes. Therefore, we believed an approach similar to that in the
Hospital VBP Program (78 FR 50705) is warranted under MIPS, and we
proposed that we would exempt a MIPS eligible clinician from all
quality and cost measures calculated from administrative claims data if
the clinician is granted an exception for the respective performance
categories based on extreme and uncontrollable circumstances.
Beginning with the 2020 MIPS payment year, we proposed that we
would reweight the quality, cost, and/or improvement activities
performance categories if a MIPS eligible clinician, group, or virtual
group's request for a reweighting assessment based on extreme and
uncontrollable circumstances is granted. We proposed that MIPS eligible
clinicians could request a reweighting assessment if they believe
extreme and uncontrollable circumstances affect the availability and
applicability of measures for the quality, cost, and improvement
activities performance categories. To the extent possible, we noted we
would seek to align the requirements for submitting a reweighting
assessment for extreme and uncontrollable circumstances with the
requirements for requesting a significant hardship exception for the
advancing care information performance category. For example, we
proposed to adopt the same deadline (December 31, 2018 for the 2018
MIPS performance period) for submission of a reweighting assessment
(see 82 FR 30075 through 30078), and we encouraged the requests to be
submitted on a rolling basis. We proposed the reweighting assessment
must include the nature of the extreme and uncontrollable circumstance,
including the type of event, date of the event, and length of time over
which the event took place, performance categories impacted, and other
pertinent details that impacted the ability to report on measures or
activities to be considered for reweighting of the quality, cost, or
improvement activities performance categories (for example, information
detailing how exactly the event impacted availability and applicability
of measures). We stated that if we finalize the policy to allow
reweighting based on extreme and uncontrollable circumstances beginning
with the 2020 MIPS payment year, we would specify the form and manner
in which these reweighting applications must be
[[Page 53782]]
submitted outside of the rulemaking process after the final rule is
published.
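For illustration only, the information a reweighting application must include, as described above, could be organized as follows (the field names and structure are assumptions for the example; as noted, the actual form and manner of submission will be specified outside of the rulemaking process).

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ReweightingApplication:
    event_type: str                 # for example, "hurricane" or "fire"
    event_date: date                # date of the event
    event_duration_days: int        # length of time over which the event took place
    categories_impacted: List[str]  # "quality", "cost", and/or "improvement_activities"
    impact_details: str = ""        # how the event affected availability and applicability of measures
    submission_deadline: date = date(2018, 12, 31)  # proposed deadline for the 2018 MIPS performance period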
For virtual groups, we proposed to request that virtual groups
submit a reweighting assessment for extreme and uncontrollable
circumstances similar to groups, and we would evaluate whether
sufficient measures and activities are applicable and available to the
majority of TINs in the virtual group. We proposed that a majority of
TINs in the virtual group would need to be impacted before we grant an
exception. We still found it important to measure the performance of
virtual group members unaffected by an extreme and uncontrollable
circumstance even if some of the virtual group's TINs are affected.
We also sought comment on what additional factors we should
consider for virtual groups. We proposed that the reweighting
assessment due to extreme and uncontrollable circumstances for the
quality, cost, and improvement activities would not be available to APM
Entities in the APM scoring standard for the following reasons. First,
all MIPS eligible clinicians scored under the APM scoring standard will
automatically receive an improvement activities category score based on
the terms of their participation in a MIPS APM and need not report
anything for this performance category. Second, the cost performance
category has no weight under the APM scoring standard. Finally, for the
quality performance category, each MIPS APM has its own rules related
to quality measures and we believe any decisions related to
availability and applicability of measures should reside within the
model. As noted in the CY 2018 Quality Payment Program proposed rule
(82 FR 30087 through 30088), APM entities in MIPS APMs would be able to
request reweighting of the advancing care information performance
category.
We noted that if we finalize these proposals for reweighting the
quality, cost, and improvement activities performance categories based
on extreme and uncontrollable circumstances, then it would be possible
that one or more of these performance categories would not be scored
and would be weighted at zero percent of the final score for a MIPS
eligible clinician. We proposed to assign a final score equal to the
performance threshold if fewer than 2 performance categories are scored
for a MIPS eligible clinician. This is consistent with our policy
finalized in the CY 2017 Quality Payment Program final rule that
because the final score is a composite score, we believe the intention
of section 1848(q)(5) of the Act is for MIPS eligible clinicians to be
scored based on multiple performance categories (81 FR 77326 through
77328).
We requested comment on our extreme and uncontrollable
circumstances proposals. We also sought comment on the types of the
extreme and uncontrollable circumstances we should consider for this
policy given the general parameters we describe in this section.
The following is a summary of the public comments received on these
proposals and our responses:
Comment: Many commenters supported our proposal to reweight the
performance categories based on extreme and uncontrollable
circumstances. These commenters stated that MIPS eligible clinicians
who experience extreme and uncontrollable events are already
significantly burdened and should not be subject to MIPS reporting
requirements. A few commenters stated that claims data could be
impacted by extreme and uncontrollable events.
Response: We thank commenters for their support of our proposed
policy to reweight the performance categories in the event of extreme
and uncontrollable circumstances.
Comment: One commenter requested that CMS modify our proposal to
allow MIPS eligible clinicians who are eligible for an improvement
score to receive the improvement score points, believing that this will
provide recognition of improvement.
Response: Because MIPS eligible clinicians would not report or
receive a score for the quality and cost performance categories if
those categories are reweighted based on extreme and uncontrollable
circumstances, we would not have data sufficient to measure improvement
for the current or future performance periods. We refer readers to
sections II.C.7.a.(2)(i) and II.C.7.a.(3)(a) of this final rule with
comment period for a summary of our policies related to data
sufficiency. We believe it is important to measure improvement for as
many MIPS eligible clinicians as possible, and we seek comment on ways
we can modify our improvement scoring policies to account for
clinicians who have been affected by extreme and uncontrollable
circumstances. For example, in cases where sufficient data from the
prior performance period are not available to measure improvement due
to extreme and uncontrollable circumstances, should we use data from 2
years prior to the performance period if such data are available?
Comment: A few commenters suggested additional types of events to
include in the definition of extreme and uncontrollable circumstances.
A few commenters requested that CMS include extreme and uncontrollable
events caused by a third-party intermediary submitting information to
CMS on behalf of a MIPS eligible clinician. In addition, a few
commenters requested that CMS include physician illness and maternity
leave in the definition of extreme and uncontrollable events.
Response: We continue to believe it is appropriate to maintain a
narrow definition of extreme and uncontrollable circumstances for the
quality, cost, and improvement activities performance categories. For
third-party intermediaries, we believe it more appropriate to monitor
the third-party issues and take additional action if needed in the
future rather than address it through the extreme and uncontrollable
circumstances policy here at this time. We refer readers to section
II.C.10. in this final rule with comment period for additional
information on third party vendors. We believe many clinicians affected
by illness or who are on maternity leave would be excluded from MIPS
due to not exceeding the low-volume threshold; however, we will review
each application on a case-by-case basis and determine whether
reweighting is warranted based on the circumstances described and
information provided.
Comment: One commenter requested that CMS allow flexibility in our
process for reviewing reweighting applications because they believe
certain events may impact certain MIPS eligible clinicians more than
others.
Response: We intend for the review process to be flexible and take
into consideration various factors, including the duration, type, and
severity of the circumstances. We agree with commenters that additional
flexibility is appropriate, especially for virtual groups because we
have finalized the virtual group reporting option to support MIPS
eligible clinicians who may have a difficult time reporting in MIPS
individually. We believe that there may be cases where less than a
majority of the TINs in a virtual group are impacted by an extreme and
uncontrollable event, but reweighting is still appropriate. For
example, there may be one TIN in the virtual group which is impacted by
an extreme and uncontrollable event; however, that TIN may be the one
coordinating data collection and submission for the entire virtual
group. Conversely, we believe there may be cases where more than a
majority of the TINs in a virtual group are impacted by an extreme and
uncontrollable event, but reweighting
[[Page 53783]]
may still not be appropriate. One example may be when the TINs impacted
by the event experienced it, but the event did not impede data
collection. As a result, we are not finalizing the proposal that a
majority of TINs in the virtual group would need to be impacted by
extreme and uncontrollable circumstances in order for the virtual group
to qualify for reweighting.
Final Action: After consideration of the public comments, we are
finalizing the proposed policies for reweighting the quality, cost, and
improvement activities performance categories based on extreme and
uncontrollable circumstances, beginning with the 2018 performance
period/2020 MIPS payment year with one minor exception. We are not
finalizing the proposal that a virtual group submitting a reweighting
application must have a majority of its TINs impacted by extreme and
uncontrollable circumstances in order for the virtual group to qualify
for reweighting, but instead we will review each virtual group
application on a case-by-case basis and make a determination based on
the information provided on the practices impacted and nature of the
event. As we noted in the CY 2018 Quality Payment Program proposed rule
(82 FR 30143), we will specify the form and manner in which the
reweighting applications must be submitted outside of the rulemaking
process after this final rule with comment period is published. We also
invite public comment on alternatives to these policies, such as using
a shortened performance period, which may allow us to measure
performance, rather than reweighting the performance categories to zero
percent.
These policies for reweighting the quality, cost, and improvement
activities performance categories based on extreme and uncontrollable
circumstances will apply beginning with the 2018 MIPS performance
period/2020 MIPS payment year. We recognize, however, that MIPS
eligible clinicians have been affected by the recent hurricanes Harvey,
Irma, and Maria, which affected large regions of the United States in
August and September of 2017. We are adopting interim final policies
for the 2017 performance period/2019 MIPS payment year for MIPS
eligible clinicians who have been affected by these hurricanes and
other natural disasters and refer readers to the interim final rule
with comment period in section III.B.
(d) Redistributing Performance Category Weights
In the CY 2017 Quality Payment Program final rule, we codified at
Sec. 414.1380(c)(2) that we will assign different scoring weights for
the performance categories if we determine there are not sufficient
measures and activities applicable and available to MIPS eligible
clinicians (81 FR 77327). We also finalized a policy to assign MIPS
eligible clinicians with only one scored performance category a final
score that is equal to the performance threshold, which means the
clinician would receive a MIPS payment adjustment factor of zero
percent for the year (81 FR 77326 through 77328). We proposed in the CY
2018 Quality Payment Program proposed rule (82 FR 30140) to refine this
policy such that a MIPS eligible clinician with fewer than 2
performance category scores would receive a final score equal to the
performance threshold. This refinement is to account for the proposal
in the CY 2018 Quality Payment Program proposed rule (82 FR 30142
through 30144) for extreme and uncontrollable circumstances, which
could result in a scenario where a MIPS eligible clinician is not
scored on any performance categories. We referred readers to the CY
2017 Quality Payment Program final rule for a description of our
policies for redistributing the weights of the performance categories
(81 FR 77325 through 77329). For the 2020 MIPS payment year, we
proposed to redistribute the weights of the performance categories in a
manner that is similar to the transition year. However, we also
proposed new scoring policies to incorporate our proposals for extreme
and uncontrollable circumstances.
In the CY 2018 Quality Payment Program proposed rule, (82 FR 30075
through 30078) we proposed to use the authority in the last sentence of
section 1848(o)(2)(D) of the Act, as amended by section 4002(b)(1)(B)
of the 21st Century Cures Act, as the authority for certain policies
under which we would assign a scoring weight of zero percent for the
advancing care information performance category, and to amend Sec.
414.1380(c)(2) to reflect the proposals. We did not, however, propose
substantive changes to the policy established in the CY 2017 Quality
Payment Program final rule to redistribute the weight of the advancing
care information performance category to the other performance
categories for the transition year (81 FR 77325 through 77329).
For the 2020 MIPS payment year, if we assign a weight of zero
percent for the advancing care information performance category for a
MIPS eligible clinician, we proposed (82 FR 30144) to continue our
policy from the transition year and redistribute the weight of the
advancing care information performance category to the quality
performance category (assuming the quality performance category does
not qualify for reweighting). We believe redistributing the weight of
the advancing care information performance category to the quality
performance category (rather than redistributing to both the quality
and improvement activities performance categories) is appropriate
because MIPS eligible clinicians have more experience reporting quality
measures through the PQRS program, and measurement in this performance
category is more mature.
We noted in the CY 2018 Quality Payment Program proposed rule (82
FR 30144) that if we do not finalize our proposal to weight the cost
performance category at zero percent (which means the weight of the
cost performance category is greater than zero percent), then we would
not redistribute the weight of any other performance categories to the
cost performance category. We believed this would be consistent with
our policy of introducing cost measurement in a deliberate fashion and
recognition that clinicians are more familiar with other elements of
MIPS. In the rare and unlikely scenario where a MIPS eligible clinician
qualifies for reweighting of the quality performance category percent
score (because there are not sufficient quality measures applicable and
available to the clinician or the clinician is facing extreme and
uncontrollable circumstances), is eligible to have the advancing care
information performance category reweighted to zero, and has sufficient
cost measures applicable and available to have a cost performance
category percent score that is not reweighted, then we would
redistribute the weight of the quality and advancing care information
performance categories to the improvement activities performance
category and would not redistribute the weight to the cost performance
category. We also
proposed that if we finalize the cost performance category weight at
zero percent for the 2020 MIPS payment year, then we would set the
final score at the performance threshold because the final score would
be based on the improvement activities performance category which would
not be a
[[Page 53784]]
composite of 2 or more performance category scores.
For the 2020 MIPS payment year, we proposed to redistribute the
weight of the cost performance category to the quality performance
category if we did not finalize the proposal to set the cost
performance category at a zero percent weight, and if a MIPS eligible
clinician does not receive a cost performance category percent score
because there are not sufficient cost measures applicable and available
to the clinician or the clinician is facing extreme and uncontrollable
circumstances. In the rare scenarios where a MIPS eligible clinician
does not receive a quality performance category percent score because
there are not sufficient quality measures applicable and available to
the clinician or the clinician is facing extreme and uncontrollable
circumstances, we proposed to redistribute the weight of the cost
performance category equally to the remaining performance categories
that are not reweighted.
In the rare event a MIPS eligible clinician is not scored on at
least one measure in the quality performance category because there are
not sufficient measures applicable and available or the clinician is
facing extreme and uncontrollable circumstances, we proposed for the
2020 MIPS payment year to continue our policy from the transition year
and redistribute the 60 percent weight of the quality performance
category so that the performance category weights are 50 percent for
the advancing care information performance category and 50 percent for
the improvement activities performance category (assuming these
performance categories do not qualify for reweighting). While
clinicians have more experience reporting advancing care information
measures, we believe equal weighting to both the improvement activities
and advancing care information performance categories is appropriate
for simplicity. Additionally, in the absence of quality measures, we
believe increasing the relative weight of the improvement activities
performance category is appropriate because both the improvement
activities and advancing care information performance categories have
elements of quality and care improvement which are important to
emphasize. If the cost performance category has available and
applicable measures and its weight is not zero, but either the
improvement activities or the advancing care information performance
category is reweighted to zero percent, then we proposed to
redistribute the weight of the quality performance category to the
remaining performance category that is not weighted at zero percent. We
would not redistribute the weight to the cost performance category.
We believe that all MIPS eligible clinicians will have sufficient
improvement activities applicable and available. It is possible that a
MIPS eligible clinician might face extreme and uncontrollable
circumstances that render the improvement activities not applicable or
available to the clinician; however, in that scenario, we believe it is
likely that the measures specified for the other performance categories
also would not be applicable or available to the clinician based on the
circumstances. In the rare event that the improvement activities
performance category would qualify for reweighting based on extreme and
uncontrollable circumstances, and the other performance categories
would not also qualify for reweighting, we proposed to redistribute the
improvement activities performance category weight to the quality
performance category consistent with the redistribution policies for
the cost and advancing care information performance categories. We
noted in the CY 2018 Quality Payment Program proposed rule (82 FR
30145) that, if the cost performance category has available and
applicable measures and its weight is not finalized at zero percent,
and the quality performance category is reweighted to zero percent, we
would redistribute the weight of the improvement activities performance
category to the advancing care information performance category. Table
38 in the CY 2018 Quality
Payment Program proposed rule summarized the potential reweighting
scenarios based on our proposals for the 2020 MIPS payment year should
the cost performance category be weighted at zero percent (82 FR
30145).
We also considered an alternative approach for the 2020 MIPS
payment year to redistribute the weight of the advancing care
information performance category to the quality and improvement
activities performance categories, to minimize the impact of the
quality performance category on the final score. For this approach, we
proposed to redistribute 15 percent to the quality performance category
(60 percent + 15 percent = 75 percent) and 10 percent to the
improvement activities performance category (15 percent + 10 percent =
25 percent). We considered redistributing the weight of the advancing
care information performance category equally to the quality and
improvement activities performance categories. However, for simplicity,
we wanted to redistribute the weights in increments of 5 points.
Because MIPS eligible clinicians have more experience reporting quality
measures and because these measures are more mature, under this
alternative option, we would redistribute slightly more to the quality
performance category (15 percent vs. 10 percent). If the cost
performance category has available and applicable measures and its
weight is not finalized at zero percent, and the quality performance
category is reweighted to zero percent, then we would redistribute the
weight of the advancing care information performance category to the
improvement activities performance category. This alternative approach,
which assumed a cost performance category weight of zero percent, was
detailed in Table 39 of the CY 2018 Quality Payment Program proposed
rule (82 FR 30146).
We invited comments on our proposal for reweighting the performance
categories for the 2020 MIPS payment year and our alternative option
for reweighting the advancing care information performance category.
The following is a summary of the public comments received and our
responses:
Comment: Several commenters supported CMS's proposed reweighting
policies for the 2020 MIPS payment year. Commenters noted that CMS's
reweighting policies would alleviate burdens for small and rural
practices. Some commenters expressed the belief that reweighting to the
quality performance category was appropriate because it is the category
with which MIPS eligible clinicians are most familiar.
Response: We thank commenters for their support of our proposed
reweighting policies. We are finalizing our reweighting policies as
proposed for the 2020 MIPS payment year, with the exception of the
policies that assume the cost performance category will be weighted at
zero percent in the final score as proposed, because we have decided to
finalize the cost performance category weight at 10 percent in section
II.C.6.d.(2) of this final rule with comment period. We agree that
quality is the performance category with which MIPS eligible clinicians
are most familiar (compared with the improvement activities performance
category). The commenters did not specify how this policy would benefit
small and rural practices, but we agree that collectively our policies
for MIPS aim to minimize burden for these practices.
[[Page 53785]]
Comment: Several commenters were supportive of CMS's alternative
approach to reweight the advancing care information performance
category to the quality and improvement activities performance
categories, in order to not place undue emphasis on the quality
performance category. A few commenters suggested that, in cases where a
MIPS eligible clinician's advancing care information performance
category is reweighted to quality, CMS provide a 50 percent base score
for the quality performance category to better align with scoring for
the advancing care information performance category and to not unfairly
penalize these MIPS eligible clinicians.
Response: We continue to believe that redistributing the advancing
care information weight to quality is appropriate because of the
experience MIPS eligible clinicians have reporting on quality measures
under other CMS programs. We appreciate these comments and will take
them into consideration in future rulemaking, when MIPS eligible
clinicians have more experience reporting on the improvement activities
performance category.
Comment: One commenter requested that CMS not redistribute the cost
performance category weight in future years for non-patient facing MIPS
eligible clinicians who do not have sufficient cost measures.
Response: We appreciate the feedback and will take it into
consideration in future rulemaking. We note that in section
II.C.6.d.(2) of this final rule with comment period, we finalized that
the cost performance category weight for the 2018 MIPS performance
period and the 2020 MIPS payment year is 10 percent. As a result, if
there are not sufficient cost measures applicable and available to a
MIPS eligible clinician, we are finalizing the proposal to redistribute
the cost performance category weight to the quality performance
category, or if a MIPS eligible clinician does not receive a quality
performance category percent score because there are not sufficient
quality measures applicable and available to the clinician, to
redistribute the cost performance category weight equally to the
remaining performance categories that are not reweighted.
Final Action: After consideration of public comments, we are
finalizing our proposals for redistributing the performance category
weights for the 2020 MIPS payment year, with the exception of the
proposals that assume the cost performance category will be weighted at
zero percent in the final score as proposed, because in section
II.C.6.d.(2) of this final rule with comment period, we finalized that
the cost performance category weight for the 2018 MIPS performance
period and the 2020 MIPS payment year is 10 percent. Table 29
summarizes the final reweighting policies for the 2018 MIPS performance
period and 2020 MIPS payment year.
        Table 29--Performance Category Redistribution Policies for the 2020 MIPS Payment Year
----------------------------------------------------------------------------------------------------------------
                                                                               Improvement     Advancing care
              Reweighting scenario                    Quality (%)   Cost (%)  activities (%)   information (%)
----------------------------------------------------------------------------------------------------------------
                                          No Reweighting Needed
----------------------------------------------------------------------------------------------------------------
--Scores for all four performance categories........       50          10           15               25
----------------------------------------------------------------------------------------------------------------
                                   Reweight One Performance Category
----------------------------------------------------------------------------------------------------------------
--No Cost...........................................       60           0           15               25
--No Advancing Care Information.....................       75          10           15                0
--No Quality........................................        0          10           45               45
--No Improvement Activities.........................       65          10            0               25
----------------------------------------------------------------------------------------------------------------
                                   Reweight Two Performance Categories
----------------------------------------------------------------------------------------------------------------
--No Cost and no Advancing Care Information.........       85           0           15                0
--No Cost and no Quality............................        0           0           50               50
--No Cost and no Improvement Activities.............       75           0            0               25
--No Advancing Care Information and no Quality......        0          10           90                0
--No Advancing Care Information and no Improvement
 Activities.........................................       90          10            0                0
--No Quality and no Improvement Activities..........        0          10            0               90
----------------------------------------------------------------------------------------------------------------
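As an illustrative sketch only, and not part of this final rule, the redistribution policies summarized in Table 29 could be represented in software as a lookup keyed by the set of performance categories reweighted to zero percent. The Python names below (FINAL_WEIGHTS_2020, category_weights, and the abbreviated category keys) are hypothetical and chosen only for this example; each entry simply restates a row of Table 29, with weights expressed as percentages of the final score.

    # Finalized weights for the 2018 MIPS performance period/2020 MIPS payment
    # year from Table 29, keyed by the set of categories reweighted to zero.
    # Each row sums to 100.
    FINAL_WEIGHTS_2020 = {
        frozenset():                    {"quality": 50, "cost": 10, "ia": 15, "aci": 25},
        frozenset({"cost"}):            {"quality": 60, "cost": 0,  "ia": 15, "aci": 25},
        frozenset({"aci"}):             {"quality": 75, "cost": 10, "ia": 15, "aci": 0},
        frozenset({"quality"}):         {"quality": 0,  "cost": 10, "ia": 45, "aci": 45},
        frozenset({"ia"}):              {"quality": 65, "cost": 10, "ia": 0,  "aci": 25},
        frozenset({"cost", "aci"}):     {"quality": 85, "cost": 0,  "ia": 15, "aci": 0},
        frozenset({"cost", "quality"}): {"quality": 0,  "cost": 0,  "ia": 50, "aci": 50},
        frozenset({"cost", "ia"}):      {"quality": 75, "cost": 0,  "ia": 0,  "aci": 25},
        frozenset({"aci", "quality"}):  {"quality": 0,  "cost": 10, "ia": 90, "aci": 0},
        frozenset({"aci", "ia"}):       {"quality": 90, "cost": 10, "ia": 0,  "aci": 0},
        frozenset({"quality", "ia"}):   {"quality": 0,  "cost": 10, "ia": 0,  "aci": 90},
    }

    def category_weights(reweighted_to_zero):
        """Return the redistributed weights for the given reweighting scenario."""
        return FINAL_WEIGHTS_2020[frozenset(reweighted_to_zero)]

    # Example: a clinician with no applicable and available cost measures.
    assert category_weights({"cost"}) == {"quality": 60, "cost": 0, "ia": 15, "aci": 25}

Treating the redistribution as a lookup on the full scenario, rather than as an incremental formula, mirrors the structure of Table 29 and avoids ambiguity where the redistributed weights are not simple equal splits.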
8. MIPS Payment Adjustments
a. Payment Adjustment Identifier and Final Score Used in Payment
Adjustment Calculation
(1) Payment Adjustment Identifier
For purposes of applying the MIPS payment adjustment under section
1848(q)(6)(E) of the Act, we finalized a policy in the CY 2017 Quality
Payment Program final rule to use a single identifier, TIN/NPI, for all
MIPS eligible clinicians, regardless of whether the TIN/NPI was
measured as an individual, group or APM Entity group (81 FR 77329
through 77330). In other words, a TIN/NPI may receive a final score
based on individual, group, or APM Entity group performance, but the
MIPS payment adjustment would be applied at the TIN/NPI level.
We did not propose any changes to the MIPS payment adjustment
identifier.
(2) Final Score Used in Payment Adjustment Calculation
In the CY 2017 Quality Payment Program final rule (81 FR 77330 through
77332), we finalized a policy to use a TIN/NPI's performance from the
performance period associated with the MIPS payment adjustment. We also
proposed the following policies, and, although we received public
comments on them and responded to those comments, we inadvertently
failed to state that we were finalizing these policies, although it was
our intention to do so. Thus, we clarify that the following final
policies apply beginning with the transition year. For groups
submitting data using the TIN identifier, we will apply the group final
score to all the TIN/NPI combinations that bill under that TIN during
the performance period. For individual MIPS eligible clinicians
submitting data using TIN/NPI, we will
[[Page 53786]]
use the final score associated with the TIN/NPI that is used during the
performance period. For MIPS eligible clinicians in MIPS APMs, we will
assign the APM Entity group's final score to all the APM Entity
Participant Identifiers that are associated with the APM Entity. For
MIPS eligible clinicians that participate in APMs for which the APM
scoring standard does not apply, we will assign a final score using
either the individual or group data submission assignments.
In the case where a MIPS eligible clinician starts working in a new
practice or otherwise establishes a new TIN that did not exist during
the performance period, there would be no corresponding historical
performance information or final score for the new TIN/NPI. In cases
where there is no final score associated with a TIN/NPI from the
performance period, we will use the NPI's performance for the TIN(s)
the NPI was billing under during the performance period. If the MIPS
eligible clinician has only one final score associated with the NPI
from the performance period, then we will use that final score. In the
event that an NPI bills under multiple TINs in the performance period
and bills under a new TIN in the MIPS payment year, we finalized a
policy of taking the highest final score associated with that NPI in
the performance period (81 FR 77332).
In some cases, a TIN/NPI could have more than one final score
associated with it from the performance period, if the MIPS eligible
clinician submitted duplicative data sets. In this situation, the MIPS
eligible clinician has not changed practices; rather, for example, a
MIPS eligible clinician has a final score for an APM Entity and a final
score for a group TIN. If a MIPS eligible clinician has multiple final
scores, the following hierarchy will apply. If a MIPS eligible
clinician is a participant in a MIPS APM, then the APM Entity final score
would be used instead of any other final score. If a MIPS eligible
clinician has more than one APM Entity final score, we will apply the
highest APM Entity final score to the MIPS eligible clinician. If a
MIPS eligible clinician reports as a group and as an individual and not
as an APM Entity, we will calculate a final score for the group and
individual identifier and use the highest final score for the TIN/NPI
(81 FR 77332).
For a further description of our policies, we referred readers to
the CY 2017 Quality Payment Program final rule (81 FR 77330 through
77332).
In addition to the above policies from the CY 2017 Quality Payment
Program final rule, beginning with the 2020 MIPS payment year, we
proposed to modify the policies to address the addition of virtual
groups. Section 1848(q)(5)(I)(i) of the Act provides that MIPS eligible
clinicians electing to be a virtual group must: (1) Have their
performance assessed for the quality and cost performance categories in
a manner that applies the combined performance of all the MIPS eligible
clinicians in the virtual group to each MIPS eligible clinician in the
virtual group for the applicable performance period; and (2) be scored
for the quality and cost performance categories based on such
assessment. Therefore, when identifying a final score for payment
adjustments, we must prioritize a virtual group final score over other
final scores such as individual and group scores. Because we also wish
to encourage movement towards APMs, we will prioritize using the APM
Entity final score over any other score for a TIN/NPI, including a TIN/
NPI that is in a virtual group. If a TIN/NPI is in both a virtual group
and a MIPS APM, we proposed to use the waiver authority for Innovation
Center models under section 1115A(d)(1) of the Act and the Shared
Savings Program waiver authority under section 1899(f) of the Act to
waive section 1848(q)(5)(I)(i)(I) and (II) of the Act so that we could
use the APM Entity final score instead of the virtual group final score
for a TIN/NPI. As discussed in the CY 2018 Quality Payment Program
proposed rule (82 FR 30033 through 30034), the use of waiver authority
is to avoid creating competing incentives between MIPS and the APM. We
want MIPS eligible clinicians to focus on the requirements of the APM
to ensure that the models produce valid results that are not confounded
by the incentives created by MIPS.
We also proposed to modify our hierarchy to state that if a MIPS
eligible clinician is not in an APM Entity and is in a virtual group,
the MIPS eligible clinician would receive the virtual group final score
over any other final score. Our policies remain unchanged for TIN/NPIs
who are not in an APM Entity or virtual group. Tables 40 and 41 in the
CY 2018 Quality Payment Program proposed rule summarized the final and
proposed policies (82 FR 30147).
We will only apply the associated final score to clinicians or
groups who are not otherwise excluded from MIPS. We invited public
comment on our proposals.
The following is a summary of the public comments received and our
responses:
Comment: A few commenters supported the prioritization of the APM
Entity final score over any virtual group scores for the TIN/NPI and
agreed that this prioritization will help encourage eligible clinicians
to move towards APMs.
Response: We thank the commenters for their support.
Comment: One commenter did not support the prioritization of the
APM Entity final score and suggested that a group practice should have
the option to report both as a group and through an APM Entity, and the
final score should be the higher of the two scores. One commenter
believes that APM Entities may receive lower scores for certain
performance categories, such as the advancing care information
performance category, compared to their group.
Response: We believe it is important to align MIPS with APMs and
believe prioritizing APM Entity scores over other scores creates that
alignment. We want MIPS eligible clinicians to be able to focus on the
requirements and redesign required in the APM.
Comment: One commenter requested additional clarity on how payment
adjustments will be applied when a MIPS eligible clinician bills under
more than one TIN/NPI combination. One commenter expressed concern with
the approach of applying the payment adjustment at the TIN/NPI level
because of the potential complexities from MIPS eligible clinicians
changing practices.
Response: MIPS payment adjustments will be determined for each TIN/
NPI combination. We will use only one final score for a TIN/NPI for
purposes of determining the MIPS payment adjustment that will be
applied to that TIN/NPI. If a MIPS eligible clinician bills under more
than one TIN, that MIPS eligible clinician will receive a separate MIPS
payment adjustment for each TIN/NPI combination. In addition, since we
allow each MIPS eligible clinician to decide how they want to report--
individually, through a group, or through an APM Entity as a MIPS APM
participant--we cannot control the number of submissions that one TIN/
NPI may have for a performance period. To address scenarios where we
have multiple submissions for one TIN/NPI, we have established the
policies described earlier in this section to articulate the hierarchy
of which final score we will use to determine the MIPS payment
adjustment for a TIN/NPI.
Final Action: After consideration of the public comments received,
we are finalizing our policies as proposed.
Tables 30 and 31 illustrate the final policies for determining
which final score will be used when more than one final score is
associated with a TIN/NPI (Table 30) and the final policies that apply
if there is no final score
[[Page 53787]]
associated with a TIN/NPI from the performance period, such as when a
MIPS eligible clinician starts working in a new practice or otherwise
establishes a new TIN (Table 31).
Table 30--Hierarchy for Final Score When More Than One Final Score Is
Associated With a TIN/NPI
------------------------------------------------------------------------
                 Example                   Final score used to determine
                                                payment adjustments
------------------------------------------------------------------------
TIN/NPI has more than one APM Entity      The highest of the APM Entity
 final score.                              final scores.
TIN/NPI has an APM Entity final score     APM Entity final score.
 and also has an individual score.
TIN/NPI has an APM Entity final score     APM Entity final score.
 that is not a virtual group score and
 also has a group final score.
TIN/NPI has an APM Entity final score     APM Entity final score.
 and also has a virtual group score.
TIN/NPI has a virtual group score and     Virtual group score.
 an individual final score.
TIN/NPI has a group final score and an    The highest of the group or
 individual final score, but no APM        individual final score.
 Entity final score and is not in a
 virtual group.
------------------------------------------------------------------------
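Purely as an illustration of the hierarchy in Table 30, and not as part of this final rule, the selection can be expressed as an ordered check; the function name and score variables below are hypothetical and are used only for this sketch.

    def select_final_score(apm_entity_scores, virtual_group_score,
                           group_score, individual_score):
        """Pick the final score used for a TIN/NPI's payment adjustment."""
        # An APM Entity final score takes precedence over all other scores;
        # if there is more than one, the highest is used.
        if apm_entity_scores:
            return max(apm_entity_scores)
        # Next in the hierarchy is a virtual group final score.
        if virtual_group_score is not None:
            return virtual_group_score
        # Otherwise, use the highest of the group or individual final score.
        remaining = [s for s in (group_score, individual_score) if s is not None]
        return max(remaining) if remaining else None

    # Example: a TIN/NPI with a group score of 70 and an individual score of 80,
    # no APM Entity score, and no virtual group score, receives 80.
    assert select_final_score([], None, 70, 80) == 80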
Table 31--No Final Score Associated With a TIN/NPI
----------------------------------------------------------------------------------------------------------------
  MIPS eligible clinician     Performance period      TIN/NPI billing in MIPS      Final score used to determine
         (NPI 1)                 final score           payment year (yes/no)            payment adjustments
----------------------------------------------------------------------------------------------------------------
TIN A/NPI 1...............  90..................  Yes (NPI 1 is still billing    90 (Final score for TIN A/NPI 1
                                                   under TIN A in the MIPS        from the performance period)
                                                   payment year).
TIN B/NPI 1...............  70..................  No (NPI 1 has left TIN B and   n/a (no claims are billed under
                                                   no longer bills under TIN B    TIN B/NPI 1)
                                                   in the MIPS payment year).
TIN C/NPI 1...............  n/a (NPI 1 was not    Yes (NPI 1 has joined TIN C    90 (No final score for TIN C/NPI
                             part of TIN C         and is billing under TIN C     1, so use the highest final
                             during the            in the MIPS payment year).     score associated with NPI 1
                             performance                                           from the performance period)
                             period).
----------------------------------------------------------------------------------------------------------------
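The following sketch, offered only as an illustration and not as part of this final rule, shows the fallback summarized in Table 31: when a TIN/NPI has no final score from the performance period, the highest final score associated with the NPI from the performance period is applied. The function name and dictionary below are hypothetical.

    def payment_year_score(tin, npi, performance_period_scores):
        """performance_period_scores maps (tin, npi) -> final score from the
        performance period."""
        if (tin, npi) in performance_period_scores:
            # The TIN/NPI existed in the performance period; use its own score.
            return performance_period_scores[(tin, npi)]
        # New TIN for this NPI: fall back to the highest final score the NPI
        # earned under any TIN during the performance period.
        npi_scores = [score for (t, n), score in performance_period_scores.items()
                      if n == npi]
        return max(npi_scores) if npi_scores else None

    # Table 31 example: NPI 1 earned 90 under TIN A and 70 under TIN B during
    # the performance period, then joins TIN C in the MIPS payment year.
    scores = {("TIN A", "NPI 1"): 90, ("TIN B", "NPI 1"): 70}
    assert payment_year_score("TIN C", "NPI 1", scores) == 90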
b. MIPS Payment Adjustment Factors
For a description of the statutory background and further
description of our policies, we refer readers to the CY 2017 Quality
Payment Program final rule (81 FR 77332 through 77333).
Although we did not propose any changes to these policies, nor did
we request public comments, we did receive comments on this topic,
which we will consider in preparation for future rulemaking.
c. Establishing the Performance Threshold
Under section 1848(q)(6)(D)(i) of the Act, for each year of the
MIPS, the Secretary shall compute a performance threshold with respect
to which the final scores of MIPS eligible clinicians are compared for
purposes of determining the MIPS payment adjustment factors under
section 1848(q)(6)(A) of the Act for a year. The performance threshold
for a year must be either the mean or median (as selected by the
Secretary, and which may be reassessed every 3 years) of the final
scores for all MIPS eligible clinicians for a prior period specified by
the Secretary. Section 1848(q)(6)(D)(iii) of the Act outlines a special
rule for the initial 2 years of MIPS, which requires the Secretary,
prior to the performance period for such years, to establish a
performance threshold for purposes of determining the MIPS payment
adjustment factors under section 1848(q)(6)(A) of the Act and an
additional performance threshold for purposes of determining the
additional MIPS payment adjustment factors under section 1848(q)(6)(C)
of the Act, each of which shall be based on a period prior to the
performance period and take into account data available for performance
on measures and activities that may be used under the performance
categories and other factors determined appropriate by the Secretary.
We codified the term performance threshold at Sec. 414.1305 as the
numerical threshold for a MIPS payment year against which the final
scores of MIPS eligible clinicians are compared to determine the MIPS
payment adjustment factors. We codified at Sec. 414.1405(b) that a
performance threshold will be specified for each MIPS payment year. We
referred readers to the CY 2017 Quality Payment Program final rule for
further discussion of the performance threshold (81 FR 77333 through
77338). In accordance with the special rule set forth in section
1848(q)(6)(D)(iii) of the Act, we finalized a performance threshold of
3 points for the transition year (81 FR 77334 through 77338). We
inadvertently failed to codify the performance threshold for the 2019
MIPS payment year in the CY 2017 Quality Payment Program final rule,
although it was our intention to do so. Thus, we now codify the
performance threshold of 3 points for the 2019 MIPS payment year at
Sec. 414.1405(b)(4).
Our goal was to encourage participation and provide an opportunity
for MIPS eligible clinicians to become familiar with the MIPS program.
We determined that it would have been inappropriate to set a
performance threshold that would result in downward adjustments to
payments for many clinicians who may not have had time to prepare
adequately to succeed under MIPS. By providing a pathway for many
clinicians to succeed under MIPS, we believed that we would encourage
early participation in the program, which may enable more robust and
thorough engagement with the program over time. We set the performance
threshold at a low number to provide MIPS eligible clinicians an
opportunity to achieve a minimum level of success under the program,
while gaining experience with reporting on the measures and activities
and becoming familiar with other program
[[Page 53788]]
policies and requirements. We believed if we set the threshold too
high, using a new formula that is unfamiliar and confusing to
clinicians, many could be discouraged from participating in the first
year of the program, which may lead to lower participation rates in
future years. Additionally, we believed a lower performance threshold
was particularly important to reduce the initial burden for MIPS
eligible clinicians in small or solo practices. We believed that active
participation of MIPS eligible clinicians in MIPS will improve the
overall quality, cost, and care coordination of services provided to
Medicare beneficiaries. In accordance with section 1848(q)(6)(D)(iii)
of the Act, we took into account available data regarding performance
on measures and activities, as well as other factors we determined
appropriate. We refer readers to 81 FR 77333 through 77338 for details
of our analysis. We also stated our intent to increase the performance
threshold in the 2020 MIPS payment year, and that, beginning in the
2021 MIPS payment year, we will use the mean or median final score from
a prior period as required by section 1848(q)(6)(D)(i) of the Act (81
FR 77338).
For the 2020 MIPS payment year, we again wanted to use the
flexibility provided in section 1848(q)(6)(D)(iii) of the Act to help transition
MIPS eligible clinicians to the 2021 MIPS payment year, when the
performance threshold will be the mean or median of the final scores
for all MIPS eligible clinicians from a prior period. We wanted to
encourage continued participation and the collection of meaningful data
by MIPS eligible clinicians. A higher performance threshold would help
MIPS eligible clinicians strive to achieve more complete reporting and
better performance and prepare MIPS eligible clinicians for the 2021
MIPS payment year. However, a performance threshold set too high could
also create a performance barrier, particularly for MIPS eligible
clinicians who did not previously participate in PQRS or the EHR
Incentive Programs. We have heard from stakeholders requesting that we
continue a low performance threshold and from stakeholders requesting
that we ramp up the performance threshold to help MIPS eligible
clinicians prepare for the 2021 MIPS payment year and to meaningfully
incentivize higher performance. Given our desire to provide a
meaningful ramp between the transition year's 3-point performance
threshold and the 2021 MIPS payment year performance threshold using
the mean or median of the final scores for all MIPS eligible clinicians
for a prior period, we proposed to set the performance threshold at 15
points for the 2020 MIPS payment year (82 FR 30147 through 30149).
We proposed a performance threshold of 15 points because it
represents a meaningful increase, compared to 3 points in the
transition year, while maintaining flexibility for MIPS eligible
clinicians in the pathways available to achieve this performance
threshold. We refer readers to the CY 2018 Quality Payment Program
proposed rule (82 FR 30148) for examples of how clinicians could meet
or exceed a performance threshold of 15 points based on our proposed
policies.
We believed the proposed performance threshold would mitigate
concerns from MIPS eligible clinicians about participating in the
program for the second year. However, we remained concerned that moving
from a performance threshold of 15 points for the 2020 MIPS payment
year to a performance threshold of the mean or median of the final
scores for all MIPS eligible clinicians for a prior period for the 2021
MIPS payment year may be a steep jump.
By the 2021 MIPS payment year, MIPS eligible clinicians would
likely need to submit most of the required information and perform well
on the measures and activities to receive a positive MIPS payment
adjustment. Therefore, we also sought comment on setting the
performance threshold either lower or higher than the proposed 15
points for the 2020 MIPS payment year. A performance threshold lower
than the proposed 15 points for the 2020 MIPS payment year presents the
potential for a significant increase in the final score a MIPS eligible
clinician must earn to meet the performance threshold in the 2021 MIPS
payment year, as well as providing for a potentially smaller total
amount of negative MIPS payment adjustments upon which the total amount
of the positive MIPS payment adjustments would depend due to the budget
neutrality requirement under section 1848(q)(6)(F)(ii) of the Act. A
performance threshold higher than the proposed 15 points would increase
the final score required to receive a neutral MIPS payment adjustment,
which may be particularly challenging for small practices, even with
the proposed addition of the small practice bonus. A higher performance
threshold would also allow for potentially higher positive MIPS payment
adjustments for those who exceed the performance threshold.
We considered an alternative of setting a performance threshold of
6 points, which could be met by submitting 2 quality measures with
required data completeness or one high-weighted improvement activity.
While this lower performance threshold may provide a sharp increase to
the required performance threshold in the 2021 MIPS payment year (the
mean or median of the final scores for all MIPS eligible clinicians for
a prior period), it would continue to reward clinicians for
participation in MIPS as they transition into the program.
We also considered an alternative of setting the performance
threshold at 33 points, which would require full participation both in
improvement activities and in the quality performance category (either
for a small group or for a large group that meets data completeness
standards) to meet the performance threshold. Such a threshold would
make the step to the required mean or median performance threshold in
the 2021 MIPS payment year less steep but could present further
challenges to clinicians who have not previously participated in legacy
quality reporting programs.
As required by section 1848(q)(6)(D)(iii) of the Act, for the
purposes of determining the performance threshold, we considered data
available for performance on measures and activities that may be used
under the MIPS performance categories. We refer readers to the CY 2018
Quality Payment Program proposed rule (82 FR 30147 through 30149) for a
discussion of the data we considered.
We invited public comments on the proposal to set the performance
threshold at 15 points, and also sought comment on setting the
performance threshold at the alternative of 6 points or at 33 points
for the 2020 MIPS payment year. We also sought public comments on
principles and considerations for setting the performance threshold
beginning with the 2021 MIPS payment year, which will be the mean or
median of the final scores for all MIPS eligible clinicians from a
prior period.
The following is a summary of the public comments received on our
proposals for the performance threshold and our responses:
Comment: Many commenters supported the performance threshold of 15
points because it will provide an incremental increase over the 3-point
performance threshold from the transition year; provide a helpful
ramping up of performance standards; encourage more participation in
MIPS; prepare clinicians to focus on the delivery of high quality care
to help them eventually advance toward APM
[[Page 53789]]
participation; and represent a meaningful increase in the performance
threshold while maintaining flexibility for clinicians to achieve the
threshold in multiple ways. One commenter recommended a performance
threshold higher than 5 points.
Response: We thank the commenters for their support. We are
finalizing the performance threshold at 15 points. Please refer to
section II.C.8.g.(2) of this final rule with comment period for
additional details on multiple ways clinicians and groups can meet or
exceed the performance threshold.
Comment: Many commenters supported a lower performance threshold,
without making a specific numerical recommendation, because they
believe that the increase to 15 points would impose the burden of
additional requirements on MIPS eligible clinicians, and that a lower
threshold would encourage clinician participation, provide flexibility
for clinicians to meet the performance threshold, and allow clinicians,
particularly gastroenterologists, to become more familiar with MIPS and
more successful. One commenter encouraged CMS to
maintain as low a performance threshold as possible for 2018 since the
second year of MIPS is still considered a transition year, and the
commenter indicated many clinicians are still expected to be at various
levels of readiness and comfort with the program. One commenter
believes that the lower performance threshold would allow CRNAs and
other MIPS eligible clinicians to gain greater familiarity with QCDR
measure reporting and improvement activities.
Response: We acknowledge the concerns expressed by many commenters.
We recognize that the 2020 MIPS payment year is still a transition year
for MIPS, and we believe the proposed performance threshold of 15
points modestly increases the threshold from the transition year, while
encouraging increased engagement and participation in the MIPS program
and preparing clinicians for additional participation requirements in
the 2021 MIPS payment year. We note that this performance
threshold would allow for many options for a MIPS eligible clinician to
succeed under MIPS. For example, submitting the maximum number of
improvement activities could qualify for a final score of 15 points
because the improvement activities performance category is worth 15 percent
of the final score. The performance threshold could also be met by full
participation in the quality performance category--by submitting all
required measures with the necessary data completeness, MIPS eligible
clinicians would earn a quality performance category percent score of
at least 30 percent (which is at least 3 measure achievement points out
of 10 measure points for each required measure). If the quality
performance category is weighted at 50 percent, then the quality
performance category would contribute 30 percent x 50 percent x 100,
which equals 15 points toward the final score and meets the performance
threshold. Finally, a MIPS eligible clinician could achieve a final
score of 15 points through an advancing care information performance
category score of 60 percent or higher (60 percent advancing care
information performance category score x 25 percent performance
category weight x 100 equals 15 points towards the final score). Please
refer to section II.C.8.g.(2) of this final rule with comment period
for additional details on ways to meet or exceed the performance
threshold.
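To make the arithmetic in the preceding response explicit, the following sketch (illustrative only and not part of this final rule; the function name is hypothetical) computes the contribution of a single performance category to the 0 to 100 point final score as the category percent score multiplied by the category weight, multiplied by 100.

    def category_points(category_percent_score, category_weight):
        """Points a single performance category contributes to the final score."""
        return category_percent_score * category_weight * 100

    # Quality example: a 30 percent quality percent score at a 50 percent weight
    # contributes 15 points toward the final score.
    assert round(category_points(0.30, 0.50), 6) == 15.0
    # Advancing care information example: a 60 percent category score at a
    # 25 percent weight also contributes 15 points.
    assert round(category_points(0.60, 0.25), 6) == 15.0
    # Improvement activities: full credit (100 percent) at a 15 percent weight
    # contributes 15 points.
    assert round(category_points(1.00, 0.15), 6) == 15.0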
Comment: A few commenters stated that setting a lower performance
threshold is especially important because stakeholders do not have data
from the first performance period and are unsure how well clinicians
understand MIPS requirements and whether clinicians are ready for a
more challenging program. The commenters expressed their belief that
CMS's current program estimates are overly optimistic and may be
inflated. A few commenters suggested that CMS delay implementing a
significant increase in the performance threshold until a complete
analysis of the 2017 data is performed because that would be consistent
with efforts to ensure a smooth transition in the 2018 performance
period.
Response: We appreciate the commenters' concerns with the proposed
performance threshold and their request for a delay in increasing the
performance threshold until we have more information about how
clinicians are performing under MIPS. However, beginning with the 2021
MIPS payment year, section 1848(q)(6)(D)(i) of the Act requires the
performance threshold to be either the mean or median of the final
scores for all MIPS eligible clinicians for a prior period, which could
result in a significant increase in the performance threshold in the
2021 MIPS payment year. We believe that setting the performance
threshold at 15 points for the 2020 MIPS payment year is appropriate
because it encourages increased participation and prepares clinicians
for the additional participation requirements to meet or exceed the
increased performance threshold that is statutorily required in the
2021 MIPS payment year. We also do not believe that increasing the
performance threshold to 15 points is a significant increase; rather,
it is a moderate step that provides an opportunity for clinicians to
gain experience with all MIPS performance categories before the
performance threshold changes in the 2021 MIPS payment year, when a
clinician will likely need to participate more fully and perform well
on multiple performance categories to earn a score high enough to
receive a positive adjustment. We have based our regulatory impact
analysis estimates on the best available data and two sets of
participation assumptions; we do not believe our participation
assumptions are overly inflated or inaccurate based on the data
available. We refer readers to the CY 2018 Quality Payment Program
proposed rule (82 FR 30147 through 30149) for details on the data
considered. While we anticipate we will have more accurate program
information after the first year of MIPS, we do not believe it is
appropriate to have a performance threshold below 15 points as our
program estimates do not impact the statutory requirement to set the
performance threshold at either the mean or median of the final scores
for all MIPS eligible clinicians for a prior period starting in the
2021 MIPS payment year.
Comment: A few commenters believe a performance threshold of 15
points is excessively steep because clinicians will no longer be able
to report on only one measure to avoid a negative payment adjustment
and because some clinicians may not be ready to submit enough data to
reach the proposed performance threshold of 15 points. One commenter
recommended only a minimal increase (something less than the proposed
15 points) in the performance threshold because of concern with drastic
fluctuations in performance threshold numbers. One commenter
recommended that CMS simplify and clarify performance scoring through
future regulation to allow clinicians to better assess the scoring and
weighting of each performance category because any increases in the
performance threshold make it more difficult for clinicians to combine
reporting on measures and activities to avoid a negative payment
adjustment.
Response: We disagree with the characterization that a performance
threshold of 15 points is excessively steep. We believe a performance
threshold of 15 points is an incremental increase over the 3-point
performance threshold from the transition year and will provide a
modest increase in what
[[Page 53790]]
clinicians need to do to succeed in MIPS. As discussed earlier in this
section, there are many ways a clinician can earn a final score of 15
points from reporting for just a single performance category. We also
believe this provides an opportunity for clinicians to gain experience
with all MIPS performance categories before the performance threshold
changes in the 2021 MIPS payment year, and a clinician will likely need
to perform well on multiple performance categories to earn a score high
enough to receive a positive payment adjustment. We will continue to
address any changes to the MIPS program in future rulemaking.
Comment: One commenter did not support the increase from 3 points
in the 2019 MIPS payment year to 15 points for the 2020 MIPS payment
year because of the impact on clinicians integrating CEHRT into their
practices.
Response: We do not believe CEHRT integration will impact the
ability of MIPS eligible clinicians to meet or exceed the performance
threshold because in section II.C.6.f.(4) of this final rule with
comment period, we adopted a policy to allow the use of 2014 Edition or
2015 Edition CEHRT, or a combination of the two Editions, for the
performance period in 2018. A clinician can also meet a performance
threshold of 15 points without participating in the advancing care
information performance category.
Comment: Several commenters recommended CMS maintain the
performance threshold at 3 points because the 2020 MIPS payment year is
a transition year, MIPS is complex, and CMS should continue to offer an
``on-ramp'' for clinicians to transition and integrate into MIPS. One
commenter stated that an increase could harm MIPS eligible clinicians'
ability to provide the care that patients need. One commenter believes
that 15 points would be too steep an increase at this early juncture in
the MIPS program. One commenter stated that clinicians are still trying
to understand the program requirements and invest in submission
mechanisms that make the most sense for their practice. One commenter
recommended that the performance threshold remain at 3 points until
MIPS eligible clinician participation can be assessed so that impact on
small practices could be evaluated. One commenter believes that the
current threshold of 3 points would reward clinicians who are
implementing quality measures in their practices while also encouraging
those who are reluctant to do so.
Response: We do not believe that maintaining the performance
threshold at 3 points for the 2020 MIPS payment year appropriately
encourages clinicians to actively participate in MIPS. We believe a
meaningful increase to a performance threshold of 15 points maintains
appropriate flexibility for clinicians to meet or exceed the threshold,
while requiring increased participation over the level of engagement
required to meet or exceed the 3-point threshold used in the transition
year. We also believe the increased participation better prepares
clinicians to succeed under MIPS in future years and will improve the
overall quality, cost, and care coordination of services to Medicare
beneficiaries. We are also mindful of the impact of meeting additional
requirements on small practices and have added a small practice bonus
as discussed in section II.C.7.b.(1)(c) of this final rule with comment
period, which may help them meet the performance threshold.
Additionally, we have modified our quality performance category scoring
policy, which allows small practices to receive a minimum of 3 measure
achievement points for every measure submitted, even if the measure
does not meet the data completeness criteria.
Comment: Several commenters recommended a performance threshold of
6 points, rather than the proposed 15 points, because it would relieve
some of the burden of increased participation from the transition year,
particularly for solo practitioners and small group practices, and
would encourage participation by providing clinicians with the opportunity
to avoid a negative MIPS payment adjustment by submitting a minimal
amount of data. A few commenters stated that lowering the threshold to
6 points would be appropriate for another transition year, keep the
program stable, and minimize the potential of penalizing clinicians who
are still learning about the program and care for the most vulnerable
patients in our country. A few commenters acknowledged CMS's concerns
that setting a lower performance threshold in the 2018 MIPS performance
period could lead to a jump in the performance threshold for the 2019
MIPS performance period, when CMS is required to use either the mean or
median final score from a prior period. However, the commenters
believe that setting a lower performance threshold in 2018 would lead
to a lower performance threshold in the future because many clinicians
would be aiming to meet the lower performance threshold of 6 points,
which would lower the mean or median final score for 2018. A few
commenters supported a performance threshold at 6 points to be
implemented along with provisions, such as additional bonus points,
that protect clinicians and groups whose final scores are below the
performance threshold due to performance category reweighting. One
commenter believes 6 points would be a more modest performance
threshold which would enable practices to upgrade their EHR software
and more effectively track measures and improvement activities and
comply with interoperability expectations. One commenter urged CMS to
consider the impact of the level of participation that would be
required to meet a performance threshold of 6 points in the MIPS
program.
Response: We believe that increasing the performance threshold to 6
points for the 2020 MIPS payment year would not adequately encourage
increased clinician participation in MIPS and would not prepare
clinicians for the additional participation requirements in the 2021
MIPS payment year in order to avoid a negative adjustment. We recognize
the challenges unique to clinicians in solo and small group practices
participating in MIPS, but note that solo and small group practices
must also meet the additional participation requirements in the 2021
MIPS payment year, and refer readers to section II.C.7.b.(1)(c) of this
final rule with comment period for the provisions related to the small
practice bonus for the 2020 MIPS payment year. We also do not agree
that setting a performance threshold at 6 points for the 2018 MIPS
performance period will preclude a significant increase in the
performance threshold for the 2019 MIPS performance period because
performance data does not support that the mean or median of clinician
scores for a particular performance period is limited to a number at or
near the performance threshold. We refer readers to the CY 2018 Quality
Payment Program proposed rule (82 FR 30147 through 30149) for a
discussion of the data we considered. Finally, we believe that the
15-point performance threshold is attainable even for those who have a
performance category score reweighted. We refer readers to section
II.C.8.g.(2) for scoring examples where the advancing care information
performance category is reweighted and yet MIPS eligible clinicians are
able to receive a final score higher than 15 points.
Comment: A few commenters recommended a performance threshold
between 8 and 13 points. One commenter supported a performance
threshold between 8 and 10 points to lessen the increase from the 2017
[[Page 53791]]
performance period and to have less of an impact on small practices.
One commenter recommended that CMS set the performance threshold at 7
to 10 points because of the lower expected participation rate of small
practices. One commenter recommended that the performance threshold be
increased by no more than 7 to 10 points in any given year because any
more is too much of an increase to implement in a year. One commenter
encouraged CMS to consider a longer transition period and suggested
that 10 points would be an appropriate performance threshold because it
would enable growth over the 2019 MIPS payment year, but with a less
steep climb than the proposed 15 points.
Response: We appreciate the suggestions for a range of increases in
the performance threshold from 7 points to 13 points. We also
appreciate the concerns expressed by many commenters about clinicians
needing more clarity around MIPS program requirements and additional
time to prepare to participate in MIPS and meet program requirements.
We agree that setting the performance threshold for the 2018 MIPS
performance period significantly higher than the performance threshold
for the 2017 MIPS performance period would be inappropriate because
many clinicians need time to become familiar with the program policies
and requirements and gain experience with increased participation under
the MIPS program. However, we believe that clinicians should be
prepared to meet the additional requirements for meeting, or exceeding,
the significantly increased performance threshold statutorily required
in the 2021 MIPS payment year. As such, we believe that the performance
threshold of 15 points will encourage increased participation and
adequately prepare clinicians for these additional participation
requirements in the 2021 MIPS payment year. Additionally, we refer
readers to section II.C.7.b.(1)(c) of this final rule with comment
period, where we finalize the small practice bonus for the 2020 MIPS
payment year, which may help clinicians in small practices meet the
performance threshold of 15 points.
Comment: Many commenters supported a higher performance threshold
with no specific numerical recommendation because the additional
increase would encourage participation in multiple performance
categories, appropriately focus clinicians on quality and improvement
activities that are critical steps in moving towards value-based care,
and would make a higher performance threshold for the 2021 MIPS payment
year less steep. One commenter recommended setting the performance
threshold closer to the cumulative number of points a clinician would
earn for minimum participation across all MIPS performance categories
to incentivize clinicians who are almost ready for full participation
to make the necessary practice changes and investments.
Response: We understand the perspective expressed by some
commenters that a higher performance threshold would better prepare
clinicians for the expected increase in the performance threshold for
the 2021 MIPS payment year and would encourage increased clinician
participation in the MIPS program and the movement toward value-based
care. While we acknowledge these advantages to setting a higher
performance threshold for the 2020 MIPS payment year, we also believe
that we should provide MIPS eligible clinicians with a smooth
transition to the second year of the program to encourage continued
participation. We believe that a performance threshold of 15 points is
a sufficient increase over the 2017 MIPS performance period that would
encourage continued clinician participation with increased engagement,
whereas a higher performance threshold may discourage clinicians from
participating in MIPS, which in the long run would not improve quality
of care for beneficiaries. We appreciate the suggestion to set the
performance threshold at a number that would encourage minimum
participation in all of the performance categories; however, we believe
that the additional performance threshold, which we are establishing at
70 points as discussed in section II.C.8.d. of this final rule with
comment period, will provide an incentive for reporting on all of the
performance categories.
Comment: A few commenters expressed concerns that the proposed
performance threshold would limit the opportunity for MIPS eligible
clinicians performing above average to earn up to a 5 percent positive
payment adjustment in 2020 because of the proposals to expand
exclusions from reporting and make more bonus points available.
Response: We acknowledge that setting the performance threshold at
a low number may limit the maximum payment adjustment amount that high
performers could receive, due to the budget neutrality requirement in
the statute, but we believe that this is warranted in a transition year
to encourage clinician participation in MIPS.
Comment: A few commenters supported the alternative of 33 points
because they believe it is attainable, would better prepare clinicians
for the steep increase expected for the 2021 MIPS payment year, would
send the message to clinicians that focusing on quality and improvement
activities is a critical step in moving towards value-based care, would
reward high-performing clinicians who have invested in performance
improvement, and would result in higher positive MIPS payment
adjustments for MIPS eligible clinicians who exceed the performance
threshold, thereby incentivizing higher performance. One commenter
supported a performance threshold of 33 points because if a clinician
who had a neutral adjustment in the VM program and had successfully
demonstrated meaningful use under the EHR Incentive Program delivered
the same performance under MIPS, then the clinician could expect to
receive a final score of 53 points. This commenter believes that this
``status quo'' performance threshold of 53 points, which is
significantly higher than either the proposed 15-point or the
alternative 33-point threshold, supported a performance threshold of 33
points.
One commenter supported a performance threshold of 33 points
because it would require participation in both the improvement
activities and quality performance categories to avoid a negative
adjustment. One commenter supported a 33-point performance threshold
because the combined effect of the proposed changes for 2018, including
the performance threshold, the low-volume threshold, small practice
bonus, and EHR certification requirements, would reduce the opportunity
for high-performing MIPS eligible clinicians to earn a reasonable
increase to their Medicare payments in the 2020 MIPS payment year. One
commenter recommended that, for those practices for which a 33-point
performance threshold may present a challenge because they have not
participated in the legacy Medicare programs, CMS assist them through
the existing Transforming Clinical Practice Initiative (TCPI), which
would help clinicians identify and report quality measures under the
MIPS quality performance category.
Response: We appreciate the commenters' feedback regarding the
alternative of 33 points. We believe the proposed performance threshold
of 15 points is appropriate for the 2020 MIPS payment year because it
represents a meaningful increase compared to 3 points in the transition
year, while maintaining multiple pathways for MIPS eligible clinicians
to meet or exceed the performance threshold. We want to
encourage clinician
[[Page 53792]]
participation and believe that setting a performance threshold too high
for the 2020 MIPS payment year could create a performance barrier,
particularly for clinicians that have not previously participated in
PQRS or the EHR Incentive Programs. We want to encourage MIPS eligible
clinicians to participate because that will provide better data for us
to measure performance and ultimately help drive the delivery of value-
based, quality health care. In the long run, we would prefer the
negative MIPS payment adjustments to be caused by poor performance
rather than non-participation. Because the statute requires the MIPS
payment adjustments to be budget neutral, a performance threshold of 15
points could lower the potential positive MIPS payment adjustment for
high performers compared to a higher performance threshold. However, we
believe the trade-off to encourage participation is warranted in the
second transition year. We agree that technical assistance can help
practices understand MIPS and transform care and have set up the Small,
Underserved, and Rural Support initiative, a 5-year program, to provide
technical support to MIPS eligible clinicians in small practices. The
program provides assistance to practices in selecting and reporting on
quality measures, education and outreach, and support for optimizing
health IT.
Comment: A few commenters suggested a higher performance threshold
of at least 30 points and up to 45 points. One commenter supported a
threshold of at least 30 points because this would better prepare
clinicians for the likely higher performance threshold for the 2019
MIPS performance period and would be fair for groups that have invested
time and resources preparing for the MIPS program. One commenter
recommended a performance threshold of approximately 40 to 45 points
because that would incentivize clinicians to familiarize themselves
with the reporting requirements and accelerate initial improvement
efforts to ensure higher performance in future program years. One
commenter recommended a performance threshold of 42.5 points because
that would be closer to the cumulative number of points a clinician
would earn for minimum participation across all MIPS performance
categories, ensure that eligible clinicians participate in the quality
performance category to avoid a negative payment adjustment, and would
encourage clinicians to gain experience in each performance category
and familiarize themselves with the program's reporting requirements so
that they can better focus on performance in future program years.
Response: We appreciate the commenters' suggestions for alternative
higher performance thresholds of 30 points, 42.5 points, and a number
between 40 and 45 points. However, we believe that setting the
performance threshold too high could discourage clinician participation
now, which may lead to lower participation in future years.
Accordingly, we believe clinicians should have an opportunity to become
more familiar with the MIPS program and gain experience with reporting
on measures and activities for the different MIPS performance
categories with only a modest increase in the MIPS performance
threshold from the 2017 MIPS performance period to the 2018 MIPS
performance period. We believe that a performance threshold of 15
points does not preclude clinicians from participating in multiple
performance categories and that clinicians can and should participate
in all performance categories. We are encouraged that clinicians are
investing the time and resources to perform well in MIPS and expect
that this will benefit them through a positive MIPS payment adjustment
and an additional MIPS payment adjustment (for those with a final score
equal to or greater than 70 points, as discussed in section II.C.8.d.
of this final rule with comment period). While having a lower
performance threshold may limit the amount of the positive payment
adjustment, we believe the trade-off to encourage participation is
warranted in the second transition year.
Comment: One commenter recommended setting the performance
threshold at a level that would require eligible clinicians to
participate in at least 2 performance categories to avoid a negative
payment adjustment, including the quality performance category, because
this would incentivize clinicians to familiarize themselves with all
the reporting requirements in the program, particularly the quality
performance category, so that they can focus on performance improvement
in future program years. One commenter suggested that CMS consider
alternative approaches to setting the performance threshold that would
reduce the burden on small practices and clinicians and groups
practicing in rural and underserved areas by establishing different
performance thresholds for specific groups.
Response: We appreciate the commenters' suggestions for alternative
approaches to setting the performance threshold. We believe the
proposed performance threshold of 15 points provides a pathway to
success for many clinicians in the MIPS program through increased
participation, and we do not want to add complexity by establishing
different performance thresholds for specific groups or by placing
additional requirements to submit data for multiple performance
categories. We believe that requiring MIPS
eligible clinicians to submit on more than one performance category to
meet the performance threshold to avoid a negative payment adjustment
could be a barrier to participation, particularly for clinicians
gaining experience with reporting on the measures and activities and
becoming familiar with program policies and requirements. However, we
also believe that a performance threshold of 15 points does not
preclude clinicians from participating in multiple performance
categories and that clinicians can and should participate in all
performance categories. In addition, the scoring policies in the MIPS
program take into account the needs of small practices and the impact
on clinicians serving complex patients; however, the statute requires a
single performance threshold for all MIPS eligible clinicians. Please
refer to sections II.C.7.b.(1)(b) and II.C.7.b.(1)(c) of this final
rule with comment period for a discussion of these policies.
Comment: Several commenters offered input on the 2021 MIPS payment
year requirement that the performance threshold be either the median or
mean of the final scores for a prior period and other suggested
modifications to the performance threshold.
Response: We thank the commenters for their input, and although we
did not propose or request comments on the performance threshold for
the 2021 MIPS payment year, we will take these comments into
consideration in future rulemaking.
Final Action: After consideration of the public comments, we are
finalizing the performance threshold for the 2020 MIPS payment year as
proposed at 15 points. We are codifying the performance threshold for
the 2020 MIPS payment year at Sec. 414.1405(b)(5).
d. Additional Performance Threshold for Exceptional Performance
Section 1848(q)(6)(D)(ii) of the Act requires the Secretary to
compute, for each year of the MIPS, an additional performance threshold
for purposes of determining the additional MIPS payment adjustment
factors for exceptional performance under paragraph (C). For each such
year, the Secretary shall apply either of the
[[Page 53793]]
following methods for computing the additional performance threshold:
(1) The threshold shall be the score that is equal to the 25th
percentile of the range of possible final scores above the performance
threshold determined under section 1848(q)(6)(D)(i) of the Act; or (2)
the threshold shall be the score that is equal to the 25th percentile
of the actual final scores for MIPS eligible clinicians with final
scores at or above the performance threshold for the prior period
described in section 1848(q)(6)(D)(i) of the Act.
We codified at Sec. 414.1305 the definition of additional
performance threshold as the numerical threshold for a MIPS payment
year against which the final scores of MIPS eligible clinicians are
compared to determine the additional MIPS payment adjustment factors
for exceptional performance. We also codified at Sec. 414.1405(d) that
an additional performance threshold will be specified for each of the
MIPS payment years 2019 through 2024. We referred readers to the CY
2017 Quality Payment Program final rule for further discussion of the
additional performance threshold (81 FR 77338 through 77339). We
inadvertently failed to codify the additional performance threshold for
the 2019 MIPS payment year in the CY 2017 Quality Payment Program final
rule, although it was our intention to do so. Thus, we now codify the
additional performance threshold for the 2019 MIPS payment year at
Sec. 414.1405(d)(3).
Based on the special rule for the initial 2 years of MIPS in
section 1848(q)(6)(D)(iii) of the Act, for the transition year, we
decoupled the additional performance threshold from the performance
threshold and established the additional performance threshold at 70
points. We selected a 70-point numerical value for the additional
performance threshold, in part, because it would require a MIPS
eligible clinician to submit data for and perform well on more than one
performance category (except in the event the advancing care
information performance category is reweighted to zero percent and the
weight is redistributed to the quality performance category making the
quality performance category worth 85 percent of the final score).
Under section 1848(q)(6)(C) of the Act, a MIPS eligible clinician with
a final score at or above the additional performance threshold will
receive an additional MIPS payment adjustment factor and may share in
the $500,000,000 of funding available for the year under section
1848(q)(6)(F)(iv) of the Act. We believed these additional incentives
should only be available to those clinicians with very high performance
on the MIPS measures and activities. We took into account the data
available and the modeling described in section II.E.7.c.(1) of the CY
2017 Quality Payment Program final rule in selecting the additional
performance threshold for the transition year (81 FR 77338 through
77339).
As we discussed in the CY 2018 Quality Payment Program proposed
rule (82 FR 30147 through 30149), we relied on the special rule under
section 1848(q)(6)(D)(iii) of the Act to establish the performance
threshold at 15 points for the 2020 MIPS payment year. We proposed to again
decouple the additional performance threshold from the performance
threshold. Because we do not have actual MIPS final scores for a prior
performance period, if we do not decouple the additional performance
threshold from the performance threshold, then we would have to set the
additional performance threshold at the 25th percentile of possible
final scores above the performance threshold. With a performance
threshold set at 15 points, the range of possible final scores above
the performance threshold runs from just above 15 points up to 100
points. The 25th percentile of that range is 36.25 points, which is
barely more than one third of the
possible 100 points in the MIPS final score. We do not believe it would
be appropriate to lower the additional performance threshold to 36.25
points, as we do not believe a final score of 36.25 points demonstrates
exceptional performance by a MIPS eligible clinician. We believe these
additional incentives should only be available to those clinicians with
very high performance on the MIPS measures and activities. Therefore,
we relied on the special rule under section 1848(q)(6)(D)(iii) of the
Act to propose the additional performance threshold at 70 points for
the 2020 MIPS payment year, which is higher than the 25th percentile of
the range of the possible final scores above the performance threshold.
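For illustration only, the 25th percentile calculation described above
can be reproduced as a short calculation. The sketch below is not part
of the rule; the function name is hypothetical, and the inputs (a
15-point performance threshold and a 100-point maximum final score) are
taken from this section.

    # Hypothetical sketch of the 25th percentile of the range of possible
    # final scores above the performance threshold:
    # threshold + 0.25 x (maximum score - threshold).
    def percentile_25_above_threshold(performance_threshold, max_score=100.0):
        return performance_threshold + 0.25 * (max_score - performance_threshold)

    print(percentile_25_above_threshold(15.0))  # 36.25 points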
We took into account the data available and the modeling described
in the CY 2018 Quality Payment Program proposed rule (82 FR 30147
through 30148) to estimate final scores for the 2020 MIPS payment year.
We believed 70 points is appropriate because it requires a MIPS
eligible clinician to submit data for and perform well on more than one
performance category (except in the event the advancing care
information measures are not applicable and available to a MIPS
eligible clinician). Generally, under our proposals, a MIPS eligible
clinician could receive a maximum score of 60 points for the quality
performance category, which is below the 70-point additional
performance threshold. In addition, 70 points is at a high enough level
that MIPS eligible clinicians must submit data for the quality
performance category to achieve this target. For example, if a MIPS
eligible clinician gets a perfect score for the improvement activities
and advancing care information performance categories, but does not
submit quality measures data, then the MIPS eligible clinician would
only receive 40 points (0 points for quality + 15 points for
improvement activities + 25 points for advancing care information),
which is below the additional performance threshold. We believed an
additional performance threshold of 70 points would maintain the
incentive for excellent performance while keeping the focus on quality
performance. Finally, we noted that we believed keeping the additional
performance threshold at 70 points maintains consistency with the 2019
MIPS payment year which helps to simplify the overall MIPS framework.
We invited public comment on the proposals. We also sought feedback
on whether we should raise the additional performance threshold to a
higher number which would in many instances require the use of an EHR
for those to whom the advancing care information performance category
requirements would apply. In addition, a higher additional performance
threshold would incentivize better performance and would also allow
MIPS eligible clinicians to receive a higher additional MIPS payment
adjustment.
We also sought public comment on which method we should use to
compute the additional performance threshold beginning with the 2021
MIPS payment year. Section 1848(q)(6)(D)(ii) of the Act requires the
additional performance threshold to be the score that is equal to the
25th percentile of the range of possible final scores above the
performance threshold for the year, or the score that is equal to the
25th percentile of the actual final scores for MIPS eligible clinicians
with final scores at or above the performance threshold for the prior
period described in section 1848(q)(6)(D)(i) of the Act.
The following is a summary of the public comments received and our
responses:
Comment: Many commenters supported the proposal to keep the
additional performance threshold at 70 points for the 2018 MIPS
performance period because this number is high enough to necessitate
what could be construed as ``exceptional performance'' and low enough
to be reasonably
[[Page 53794]]
attainable; is sufficient to drive improvement and reward those with
high performance; is close to full participation in addition to
requiring good performance in the quality and advancing care
information categories; avoids shifting program requirements; rewards
those who submit data on multiple MIPS performance categories; and is
more appropriate than raising the bar after just 1 year.
Response: We thank the commenters for their support. We are
finalizing the additional performance threshold at 70 points.
Comment: Several commenters recommended an additional performance
threshold higher than the proposed 70 points because they believe a
higher threshold is merited and that establishing an additional
performance threshold that allows 4 out of 5 participants to qualify as
``exceptional'' performers would dilute the impact of these important
incentives and potentially reduce clinician motivation to improve
performance, particularly for those clinicians who expended resources
and effort preparing to be successful in MIPS in 2017 and 2018.
A few commenters supported raising the additional performance
threshold for the 2018 MIPS performance period/2020 MIPS payment year
to 75 points because this would allow for a potentially larger
additional MIPS payment adjustment for qualifying clinicians compared
to an additional performance threshold of 70 points. The increase would
allow those MIPS eligible clinicians who expended significant effort
and resources to perform at higher levels and earn a higher incentive
for their achievement; would account for improvements in technology and
processes; would align with a proposed increase in the performance
threshold; and would better prepare the MIPS eligible clinician
community for the statutory requirements for the 2021 MIPS payment year
additional performance threshold.
One commenter supported an additional performance threshold of 80
points because it would be possible for a MIPS eligible clinician to
exceed 70 points without reporting on the advancing care information
measures.
Response: We appreciate the commenters' suggestions for higher
additional performance thresholds in general and the specific
recommendations of 75 points and 80 points. We applaud MIPS eligible
clinicians that have invested in performing well in MIPS. We want to
reward exceptional performance, yet also have an achievable additional
performance threshold that encourages clinicians to participate more
fully. We disagree with the characterization that the proposal of 70
points will reduce clinician motivation to perform because there is no
certainty about the number of clinicians who will qualify for the
additional MIPS payment adjustment and the impact of this number on the
size of the additional MIPS payment adjustment. We also believe that
keeping the additional performance threshold the same as in the 2017
MIPS performance period will encourage continued participation from
clinicians who have experience with and understand what is required to
meet and exceed the 70-point threshold.
Comment: Several commenters offered input on the 2021 MIPS payment
year statutory requirements for the additional performance threshold
and other suggested modifications to the additional performance
threshold.
Response: We thank the commenters for their input, and although we
did not propose or request comments on the additional performance
threshold for the 2021 MIPS payment year, we will take these comments
into consideration in future rulemaking.
Final Action: After consideration of the public comments, we are
finalizing our proposal to set the additional performance threshold at
70 points for the 2020 MIPS payment year. We are codifying the
additional performance threshold for the 2020 MIPS payment year in this
final rule at Sec. 414.1405(d)(4).
e. Scaling/Budget Neutrality
We codified at Sec. 414.1405(b)(3) that a scaling factor not to
exceed 3.0 may be applied to positive MIPS payment adjustment factors
to ensure budget neutrality such that the estimated increase in
aggregate allowed charges resulting from the application of the
positive MIPS payment adjustment factors for the MIPS payment year
equals the estimated decrease in aggregate allowed charges resulting
from the application of negative MIPS payment adjustment factors for
the MIPS payment year. We referred readers to the CY 2017 Quality
Payment Program final rule for further discussion of budget neutrality
(81 FR 77339).
We did not propose any changes to the scaling and budget neutrality
requirements as they are applied to MIPS payment adjustment factors in
this proposed rule.
f. Additional Adjustment Factors
We referred readers to the CY 2017 Quality Payment Program final
rule for further discussion of the additional MIPS payment adjustment
factor (81 FR 77339 through 77340). We did not propose any changes to
the methodology for determining the additional MIPS payment adjustment
factors.
g. Application of the MIPS Payment Adjustment Factors
(1) Application to the Medicare Paid Amount
Section 1848(q)(6)(E) of the Act provides that, for items and
services furnished by a MIPS eligible clinician during a year
(beginning with 2019), the amount otherwise paid under Part B with
respect to such items and services and such MIPS eligible clinician for
such year shall be multiplied by 1 plus the sum of the MIPS payment
adjustment factor determined under section 1848(q)(6)(A) of the Act
divided by 100 and, as applicable, the additional MIPS payment
adjustment factor determined under section 1848(q)(6)(C) of the Act
divided by 100.
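For illustration only, the statutory formula described above can be
expressed as a short calculation. The sketch below is not part of the
rule; the function name and the example dollar amount and percentages
are hypothetical, chosen only to show how the two adjustment factors
combine.

    # Hypothetical sketch of the formula in section 1848(q)(6)(E) of the Act:
    # paid amount x (1 + adjustment factor/100 + additional adjustment factor/100).
    def apply_mips_adjustment(paid_amount, adjustment_pct, additional_adjustment_pct=0.0):
        return paid_amount * (1 + adjustment_pct / 100 + additional_adjustment_pct / 100)

    # Example with hypothetical values: a $100.00 Medicare paid amount, a 0.5
    # percent MIPS payment adjustment factor, and a 1.0 percent additional MIPS
    # payment adjustment factor yield a payment of $101.50.
    print(apply_mips_adjustment(100.00, 0.5, 1.0))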
We codified at Sec. 414.1405(e) the application of the MIPS
payment adjustment factors. For each MIPS payment year, the MIPS
payment adjustment factor, and if applicable the additional MIPS
payment adjustment factor, are applied to Medicare Part B payments for
items and services furnished by the MIPS eligible clinician during the
year.
We proposed to apply the MIPS payment adjustment factor, and if
applicable, the additional MIPS payment adjustment factor, to the
Medicare paid amount for items and services paid under Part B and
furnished by the MIPS eligible clinician during the year. This proposal
is consistent with the approach taken for the value-based payment
modifier (77 FR 69308 through 69310) and would mean that beneficiary
cost-sharing and coinsurance amounts would not be affected by the
application of the MIPS payment adjustment factor and the additional
MIPS payment adjustment factor. The MIPS payment adjustment applies
only to the amount otherwise paid under Part B for items and services
furnished by a MIPS eligible clinician during a year. Please refer to
the CY 2017 Quality Payment Program final rule at 81 FR 77340 and the
CY 2018 Quality Payment Program proposed rule at 82 FR 30019 and
section II.C.1.a. of this final rule with comment period for further
discussion and our proposals regarding which Part B covered items and
services would be subject to the MIPS payment adjustment.
The following is a summary of the public comments received on these
proposals and our responses:
[[Page 53795]]
Comment: A few commenters supported the proposal to apply the
adjustment to the Medicare paid amount because it would not affect the
Medicare beneficiary deductible and coinsurance amounts.
Response: We thank the commenters for their support.
Comment: One commenter did not support the proposal and recommended
that the MIPS payment adjustment apply to the full fee schedule amount.
The commenter questioned the statutory authority for the proposal and
expressed a belief that section 1848(q)(6)(E) of the Act applies the
adjustment to the full fee schedule amount. In addition, the commenter
stated that the proposal would not accomplish its objective as savings
would be passed on to the supplemental insurance industry and not to
beneficiaries.
Response: We disagree with the commenter's interpretation of the
statute. We assume the commenter is referring to the Medicare Physician
Fee Schedule. Section 1848(q)(6)(E) of the Act requires us to apply
the adjustment to ``the amount otherwise paid under this part,'' which
we interpret to refer to Medicare Part B payments, with respect to
items and services furnished by a MIPS eligible clinician. We believe
the language of this section gives us discretion to apply the
adjustment to the Medicare paid amount as we proposed. We also disagree
with the characterization that the proposal would not accomplish its
objective because the MIPS program is focused on rewarding value and
outcomes for MIPS eligible clinicians and is intended to improve the
overall quality, cost, and care coordination of services provided to
Medicare beneficiaries.
Comment: One commenter requested guidance on how the MIPS payment
adjustment will be applied for non-participating clinicians.
Specifically, the commenter expressed concerns about the administrative
burden of maintaining a separate fee schedule for MIPS eligible
clinicians and non-participating clinicians. The commenter also
requested guidance regarding whether the MIPS adjustment is used in
calculating the Medicare limiting charge amount for non-participating
clinicians, whether we will provide the annual Medicare Physician Fee
Schedule with the relating limiting charge amount for both MIPS
eligible clinicians as well as non-participating clinicians, and
whether there are different limiting charge amounts for MIPS eligible
clinicians receiving the MIPS payment adjustment and for clinicians not
subject to MIPS.
Response: We appreciate the commenter's questions and note that
although we did not address these issues in the proposed rule, we
intend to address them in rulemaking next year.
Final Action: After consideration of the public comments, we are
finalizing our proposal to apply the MIPS payment adjustment factor,
and if applicable, the additional MIPS payment adjustment factor, to
the Medicare paid amount for items and services paid under Part B and
furnished by the MIPS eligible clinician during the year. We refer
readers to section II.C.1.a. of this final rule with comment period,
where we discuss the items and services to which the MIPS payment
adjustment could be applied under Part B.
(2) Example of Adjustment Factors
In the CY 2018 Quality Payment Program proposed rule (82 FR 30152)
we provided a figure and several tables as illustrative examples of how
various final scores would be converted to an adjustment factor, and
potentially an additional adjustment factor, using the statutory
formula and based on proposed policies. We repeat these examples using
our final policies. In Figure A, the performance threshold is 15
points. The applicable percentage is 5 percent for 2020. The adjustment
factor is determined on a linear sliding scale from zero to 100, with
zero being the lowest negative applicable percentage (negative 5
percent for the 2020 MIPS payment year), and 100 being the highest
positive applicable percentage. However, there are two modifications to
this linear sliding scale. First, there is an exception for a final
score between zero and one-fourth of the performance threshold (zero
and 3.75 points based on the performance threshold of 15 points for the
2020 MIPS payment year). All MIPS eligible clinicians with a final
score in this range would receive the lowest negative applicable
percentage (negative 5 percent for the 2020 MIPS payment year). Second,
the linear sliding scale line for the positive MIPS adjustment factor
is adjusted by the scaling factor, which cannot be higher than 3.0 (as
discussed in section II.C.8.e. of this final rule with comment and in
the CY 2018 Quality Payment Program proposed rule at 82 FR 30150). If
the scaling factor is greater than zero and less than or equal to 1.0,
then the adjustment factor for a final score of 100 would be less than
or equal to 5 percent. If the scaling factor is above 1.0, but less
than or equal to 3.0, then the adjustment factor for a final score of
100 would be higher than 5 percent. Only those MIPS eligible clinicians
with a final score equal to 15 points (which is the performance
threshold in this example) would receive a neutral MIPS payment
adjustment. Because the performance threshold is 15 points, we
anticipate that the scaling factor would be less than 1.0 and the
payment adjustment for MIPS eligible clinicians with a final score of
100 points would be less than 5 percent.
Figure A illustrates an example of the slope of the line for the
linear adjustments. In this example, the scaling factor for the
adjustment factor is 0.06 which is much lower than 1.0. In this
example, MIPS eligible clinicians with a final score equal to 100 would
have an adjustment factor of 0.3 percent (5 percent x 0.06).
The additional performance threshold is 70 points. The additional
adjustment factor starts at 0.5 percent at the additional performance
threshold and increases on a linear sliding scale up to 10 percent.
This linear sliding scale line is also multiplied by a scaling factor
that is greater than zero and less than or equal to 1.0. The scaling
factor will be determined so that the estimated aggregate increase in
payments associated with the application of the additional adjustment
factors is equal to $500,000,000. In Figure A, the example scaling
factor for the additional adjustment factor is 0.175. Therefore, MIPS
eligible clinicians with a final score of 100 would have an additional
adjustment factor of 1.75 percent (10 percent x 0.175). The total
adjustment for a MIPS eligible clinician with a final score equal to
100 would be 1 + 0.0030 + 0.0175 = 1.0205, for a total positive MIPS
payment adjustment of 2.05 percent.
BILLING CODE 4120-01-P
[[Page 53796]]
[GRAPHIC] [TIFF OMITTED] TR16NO17.000
BILLING CODE 4120-01-C
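For readers who want to trace the arithmetic in Figure A, the following
sketch reproduces the sliding-scale calculations described above. It is
illustrative only and is not part of the rule; the function names are
hypothetical, and the thresholds (15 and 70 points), applicable
percentage (5 percent), and example scaling factors (0.06 and 0.175)
are the values used in this example.

    # Hypothetical sketch of the linear sliding scales illustrated in Figure A.
    def mips_adjustment_pct(final_score, performance_threshold=15.0,
                            applicable_pct=5.0, scaling_factor=0.06):
        if final_score <= performance_threshold / 4:
            # scores from zero to one-fourth of the threshold receive the
            # lowest negative applicable percentage
            return -applicable_pct
        if final_score < performance_threshold:
            # negative adjustment: 0 percent at the threshold, sliding toward
            # the lowest negative applicable percentage as the score nears zero
            return -applicable_pct * (1.0 - final_score / performance_threshold)
        # positive adjustment: 0 percent at the threshold, rising to the
        # applicable percentage at 100 points, multiplied by the
        # budget-neutrality scaling factor (which cannot exceed 3.0)
        return applicable_pct * scaling_factor * (
            (final_score - performance_threshold) / (100.0 - performance_threshold))

    def additional_adjustment_pct(final_score, additional_threshold=70.0,
                                  additional_scaling_factor=0.175):
        if final_score < additional_threshold:
            return 0.0
        # starts at 0.5 percent at the additional performance threshold and
        # rises linearly to 10 percent at 100 points, multiplied by a scaling
        # factor that is not greater than 1.0
        slope = (10.0 - 0.5) / (100.0 - additional_threshold)
        return additional_scaling_factor * (0.5 + slope * (final_score - additional_threshold))

    # A final score of 100 yields 0.3 percent + 1.75 percent = 2.05 percent.
    print(mips_adjustment_pct(100.0) + additional_adjustment_pct(100.0))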
The final MIPS payment adjustments will be determined by the
distribution of final scores across MIPS eligible clinicians and the
performance threshold. More MIPS eligible clinicians above the
performance threshold means the scaling factors would decrease because
more MIPS eligible clinicians receive a positive MIPS payment
adjustment. More MIPS eligible clinicians below the performance
threshold means the scaling factors would increase because more MIPS
eligible clinicians would have negative MIPS payment adjustments and
relatively fewer MIPS eligible clinicians would receive positive MIPS
payment adjustments.
Table 32 illustrates the changes in payment adjustments from the
transition year to the 2020 MIPS payment year based on the final
policies, as well as the increase in the applicable percent required by
section 1848(q)(6)(B) of the Act.
Table 32--Illustration of Point System and Associated Adjustments
Comparison Between Transition Year and the 2020 MIPS Payment Year
------------------------------------------------------------------------
Transition year
  Final score 0.0-0.75: Negative 4 percent.
  Final score 0.76-2.99: Negative MIPS payment adjustment greater than
negative 4 percent and less than 0 percent on a linear sliding scale.
  Final score 3.00: 0 percent adjustment.
  Final score 3.01-69.99: Positive MIPS payment adjustment greater than
0 percent on a linear sliding scale. The linear sliding scale ranges
from 0 to 4 percent for scores from 3.00 to 100.00. This sliding scale
is multiplied by a scaling factor greater than zero but not exceeding
3.0 to preserve budget neutrality.
  Final score 70.00-100: Positive MIPS payment adjustment greater than
0 percent on a linear sliding scale (as described for final scores of
3.01-69.99); PLUS an additional MIPS payment adjustment for exceptional
performance. The additional MIPS payment adjustment starts at 0.5
percent and increases on a linear sliding scale. The linear sliding
scale ranges from 0.5 to 10 percent for scores from 70.00 to 100.00.
This sliding scale is multiplied by a scaling factor not greater than
1.0 in order to proportionately distribute the available funds for
exceptional performance.

[[Page 53797]]

2020 MIPS payment year
  Final score 0.0-3.75: Negative 5 percent.
  Final score 3.76-14.99: Negative MIPS payment adjustment greater than
negative 5 percent and less than 0 percent on a linear sliding scale.
  Final score 15.00: 0 percent adjustment.
  Final score 15.01-69.99: Positive MIPS payment adjustment greater
than 0 percent on a linear sliding scale. The linear sliding scale
ranges from 0 to 5 percent for scores from 15.00 to 100.00. This
sliding scale is multiplied by a scaling factor greater than zero but
not exceeding 3.0 to preserve budget neutrality.
  Final score 70.00-100: Positive MIPS payment adjustment greater than
0 percent on a linear sliding scale (as described for final scores of
15.01-69.99); PLUS an additional MIPS payment adjustment for
exceptional performance. The additional MIPS payment adjustment starts
at 0.5 percent and increases on a linear sliding scale. The linear
sliding scale ranges from 0.5 to 10 percent for scores from 70.00 to
100.00. This sliding scale is multiplied by a scaling factor not
greater than 1.0 in order to proportionately distribute the available
funds for exceptional performance.
------------------------------------------------------------------------
In the CY 2018 Quality Payment Program proposed rule, we provided a
few examples for the 2020 MIPS payment year to demonstrate scenarios in
which MIPS eligible clinicians can achieve a final score at or above
the performance threshold of 15 points. We note a calculation error was
included in Example 3. Because the MIPS eligible clinician did not
submit advancing care information, the quality performance category
should have been 85 percent to reflect reweighting, while the advancing
care information performance category should have been zero percent.
Earned points (column D) should have been 42.5 for quality to reflect
reweighting and the final score should have been listed as 51.5.
We have provided updated examples below for the 2020 MIPS payment
year to demonstrate scenarios in which MIPS eligible clinicians can
achieve a final score at or above the performance threshold of 15
points based on our final policies.
Example 1: MIPS Eligible Clinician in Small Practice Submits 1 Quality
Measure and 1 Improvement Activity
In the example illustrated in Table 33, a MIPS eligible clinician
in a small practice reporting individually meets the performance
threshold by reporting one quality measure one time via claims and one
medium-weight improvement activity. The practice does not submit data
for the advancing care information performance category, but does
submit a significant hardship exception application which is approved;
therefore, the weight for the advancing care information performance
category is reweighted to the quality performance category due to final
reweighting policies discussed in section II.C.7.b.(3) of this final
rule with comment period (82 FR 30141 through 30146). We also assumed
the small practice has a cost performance category percent score of 50
percent. Finally, we assumed a complex patient bonus of 3 points which
represents the average HCC risk score for the beneficiaries seen by the
MIPS eligible clinician as well as the proportion of Medicare
beneficiaries that are dual eligible. There are several special scoring
rules which affect MIPS eligible clinicians in a small practice:
• 3 measure achievement points for each quality measure even
if the measure does not meet data completeness standards. We refer
readers to section II.C.7.a.(2)(d) of this final rule with comment
period for discussion of this policy. Therefore, a quality measure
submitted one time would receive 3 points. Because the measure is
submitted via claims, it does not qualify for the end-to-end electronic
reporting bonus, nor would it qualify for the high-priority bonus
because it is the only measure submitted. Because the MIPS eligible
clinician does not meet full participation requirements, the MIPS
eligible clinician does not qualify for improvement scoring. We refer
readers to section II.C.7.a.(2)(i)(iii) of this final rule with comment
period for a discussion on full participation requirements. Therefore,
the quality performance category percent score is (3 measure
achievement points + zero measure bonus points)/60 total available
measure points, plus a zero improvement percent score, which equals 5
percent.
• The advancing care information performance category weight
is redistributed to the quality performance category so that the
quality performance category score is worth 75 percent of the final
score. We refer readers to section II.C.7.b.(3)(d) of this final rule
with comment period for a discussion of this policy.
• MIPS eligible clinicians in small practices qualify for
special scoring for improvement activities so a medium weighted
activity is worth 20 points out of a total 40 possible points for the
improvement activities performance category. We refer readers to
section II.C.6.e.(5) of this final rule with comment period for a
discussion of this policy.
• MIPS eligible clinicians in small practices qualify for
the 5-point small practice bonus which is applied to the
[[Page 53798]]
final score. We refer readers to section II.C.7.b.(1)(c) of this final
rule with comment period for a discussion of this policy.
This MIPS eligible clinician exceeds the performance threshold of
15 points (but does not exceed the additional performance threshold).
This score is summarized in Table 33.
Table 33--Scoring Example 1, MIPS Eligible Clinician in a Small Practice
----------------------------------------------------------------------------------------------------------------
Performance category [A]               Performance score [B]          Category weight [C]           Earned points [D] = [B] * [C] * 100
----------------------------------------------------------------------------------------------------------------
Quality..............................  5%                             75%                           3.75
Cost.................................  50%                            10%                           5.0
Improvement Activities...............  50% (20 out of 40 points)      15%                           7.5
Advancing Care Information...........  N/A                            0% (reweighted to quality)    0
Subtotal (Before Bonuses)............  .............................  ............................  16.25
Complex Patient Bonus................  .............................  ............................  3
Small Practice Bonus.................  .............................  ............................  5
Final Score (not to exceed 100)......  .............................  ............................  24.25
----------------------------------------------------------------------------------------------------------------
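As an illustration only, the following sketch reproduces the arithmetic
behind Table 33. It is not part of the rule; the variable names are
hypothetical, and the category percent scores, weights, and bonuses are
the values used in this example.

    # Hypothetical sketch of the Example 1 final score calculation in Table 33.
    # Quality percent score: (3 achievement points + 0 bonus points) / 60
    # possible points, plus a zero improvement percent score, which is 5 percent.
    quality_pct = (3 + 0) / 60 * 100
    category_scores = {                       # performance category percent scores
        "quality": quality_pct,               # 5 percent
        "cost": 50.0,
        "improvement_activities": 50.0,       # 20 of 40 points under small practice scoring
        "advancing_care_information": 0.0,    # reweighted to quality
    }
    weights = {                               # category weights after reweighting
        "quality": 0.75,
        "cost": 0.10,
        "improvement_activities": 0.15,
        "advancing_care_information": 0.0,
    }
    subtotal = sum(category_scores[c] * weights[c] for c in weights)   # 16.25
    complex_patient_bonus = 3.0
    small_practice_bonus = 5.0
    final_score = min(100.0, subtotal + complex_patient_bonus + small_practice_bonus)
    print(final_score)                        # 24.25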
Example 2: Group Submission Not in a Small Practice
In the example illustrated in Table 34, a MIPS eligible clinician
in a medium size practice participating in MIPS as a group receives
performance category scores of 75 percent for the quality performance
category, 50 percent for the cost performance category, and 100 percent
for the advancing care information and improvement activities
performance categories. There are many paths for a practice to receive
a 75 percent score in the quality performance category, so for
simplicity we are assuming the score has been calculated at this
amount. The final score is calculated to be 85.5, and both the
performance threshold of 15 and the additional performance threshold of
70 are exceeded. Again, for simplicity, we assume a complex patient
bonus of 3 points. In this example, the group practice does not qualify
for any special scoring, yet is able to exceed the additional
performance threshold and will receive the additional MIPS payment
adjustment.
Table 34--Scoring Example 2, MIPS Eligible Clinician in a Medium Practice
----------------------------------------------------------------------------------------------------------------
Performance category [A]               Performance score [B]          Category weight [C]           Earned points [D] = [B] * [C] * 100
----------------------------------------------------------------------------------------------------------------
Quality..............................  75%                            50%                           37.5
Cost.................................  50%                            10%                           5
Improvement Activities...............  100% (40 out of 40 points)     15%                           15
Advancing Care Information...........  100%                           25%                           25
Subtotal (Before Bonuses)............  .............................  ............................  82.5
Complex Patient Bonus................  .............................  ............................  3
Small Practice Bonus.................  .............................  ............................  0
Final Score (not to exceed 100)......  .............................  ............................  85.5
----------------------------------------------------------------------------------------------------------------
Example 3: Non-Patient Facing MIPS Eligible Clinician
In the example illustrated in Table 35, an individual MIPS eligible
clinician that is non-patient facing and not in a small practice
receives performance category scores of 50 percent for the quality
performance category, 50 percent for the cost performance category, and
50 percent for 1 medium-weighted improvement activity. Again, there are
many paths for a practice to receive a 50 percent score in the quality
performance category, so for simplicity we are assuming the score has
been calculated. Because the MIPS eligible clinician is non-patient
facing, they qualify for special scoring for improvement activities and
receive 20 points (out of 40 possible points) for the medium weighted
activity. Also, this individual did not submit advancing care
information measures and qualifies for the automatic reweighting of the
advancing care information performance category to the quality
performance category. Again, for simplicity, we assume a complex
patient bonus of 3 points. The MIPS eligible clinician is not in a
small practice so does not qualify for the small practice bonus.
In this example, the final score is 53 and the performance
threshold of 15 is exceeded while the additional performance threshold
of 70 is not.
Table 35--Scoring Example 3, Non-Patient Facing MIPS Eligible Clinician
----------------------------------------------------------------------------------------------------------------
Performance category [A]               Performance score [B]          Category weight [C]           Earned points [D] = [B] * [C] * 100
----------------------------------------------------------------------------------------------------------------
Quality..............................  50%                            75%                           37.5
Cost.................................  50%                            10%                           5
Improvement Activities...............  50% (20 out of 40 points for   15%                           7.5
                                       1 medium weight activity)
[[Page 53799]]
Advancing Care Information...........  0%                             0% (reweighted to quality)    0
Subtotal (Before Bonuses)............  .............................  ............................  50
Complex Patient Bonus................  .............................  ............................  3
Small Practice Bonus.................  .............................  ............................  0
Final Score (not to exceed 100)......  .............................  ............................  53
----------------------------------------------------------------------------------------------------------------
We note that these examples are not intended to be exhaustive of
the types of participants or of the opportunities for reaching and
exceeding the performance threshold.
9. Review and Correction of MIPS Final Score
a. Feedback and Information To Improve Performance
(1) Performance Feedback
As we have stated previously in the CY 2017 Quality Payment Program
final rule (81 FR 77345), we will continue to engage in user research
with front-line clinicians to ensure we are providing the performance
feedback data in a user-friendly format, and that we are including the
data most relevant to clinicians. Any suggestions from user research
would be considered as we develop the systems needed for performance
feedback, which would occur outside of the rulemaking process.
Over the past year, we have conducted numerous user research
sessions to determine what the community most needs in performance
feedback. In summary, we have found the users want the following:
(1) To know as soon as possible how I am performing based on my
submitted data so that I have confidence that I performed the way I
thought I would.
(2) To be able to quickly understand how and why my payments will
be adjusted so that I can understand how my business will be impacted.
(3) To be able to quickly understand how I can improve my
performance so that I can increase my payment in future program years.
(4) To know how I am performing over time so I can improve the care
I am providing patients in my practice.
(5) To know how my performance compares to my peers.
Based on that research, we have already begun development of real-
time feedback on data submission and scoring where technically feasible
(some scoring requires all clinician data be submitted, and therefore,
cannot occur until the end of the submission period). By ``real-time''
feedback, we mean instantaneous receipt recognition; for example, when
a clinician submits their data via our Web site or a third party
submits data via our Application Program Interface (API), they will
know immediately if their submission was successful.
We will continue to provide information for stakeholders who wish
to participate in user research via our education and communication
channels. Suggestions can also be sent via the ``Contact Us''
information on qpp.cms.gov. However, we noted that suggestions provided
through this channel would not be considered as comments on the
proposed rule.
(a) MIPS Eligible Clinicians
Under section 1848(q)(12)(A)(i) of the Act, we are at a minimum
required to provide MIPS eligible clinicians with timely (such as
quarterly) confidential feedback on their performance under the quality
and cost performance categories beginning July 1, 2017, and we have
discretion to provide such feedback regarding the improvement
activities and advancing care information performance categories.
We proposed to provide, beginning July 1, 2018, performance
feedback to MIPS eligible clinicians and groups for the quality and
cost performance categories for the 2017 performance period, and if
technically feasible, for the improvement activities and advancing care
information performance categories. We proposed to provide this
performance feedback at least annually and, as technically feasible,
we would provide it more frequently, such as quarterly. If we are able
to provide it more frequently, we would communicate the expected
frequency to our stakeholders via our education and outreach
communication channels.
Based on public comments summarized and responded to in the CY 2017
Quality Payment Program final rule (81 FR 77347), we also proposed that
the measures and activities specified for the CY 2017 performance
period (for all four MIPS performance categories), along with the final
score, would be included in the performance feedback provided on or
about July 1, 2018.
For cost measures, since we can measure performance using any 12-
month period of prior claims data, we requested comment on whether it
would be helpful to provide more frequent feedback on the cost
performance category using rolling 12-month periods or quarterly
snapshots of the most recent 12-month period; how frequent that
feedback should be; and the format in which we should make it available
to clinicians and groups. In addition, as described in sections
II.C.6.b. and II.C.6.d. of the proposed rule, we stated in the proposed
rule our intent to provide cost performance feedback in the fall of
2017 and the summer of 2018 on new episode-based cost measures that are
currently under development by CMS. With regard to the format of
feedback on cost measures, we noted how we are considering utilizing
the parts of the Quality and Resource Use Reports (QRURs) that user
testing has revealed beneficial while making the overall look and feel
usable to clinicians. We requested comment on whether that format is
appropriate or if other formats or revisions to that format should be
used to provide performance feedback on cost measures.
The following is a summary of the public comments received on the
``MIPS Eligible Clinicians'' proposals and our responses:
Comment: Many commenters asked for more timely feedback. Some
commenters expressed concern that the data in existing reports may be
2 or more years out of date and that more recent feedback is
needed to improve quality and change behaviors. Several commenters
noted the need for real time feedback in order to be actionable. Many
commenters noted feedback reports should be available quarterly, semi-
annually, or more frequently than annually. A few commenters noted
receiving feedback in mid-2018 would be too late to make the necessary
adjustments to ensure success in the following MIPS performance period
and requested mid-year performance reports for the start of the
performance period. One commenter stated that CMS should hold itself
accountable for annual reports to be
[[Page 53800]]
available no later than the following August, but CMS should aim for
having them available no later than July. Half-year performance reports should
be available no later than the following March, but CMS should aim for
January. The commenter stated that if CMS is unable to provide timely
reports, then clinicians should be exempt from MIPS.
One commenter noted that it is important that CMS reduce the amount
of time between the performance period and performance feedback from
CMS to allow practices time to make necessary adjustments before the
next reporting period begins. The commenter also requested that
feedback to clinicians should be delivered by CMS to clinicians
beginning no later than April 1, 2019. A few commenters requested
technology and system upgrades, so that CMS could improve the way
performance information is disseminated to physicians and practices,
such as dashboards or reports on demand. One commenter requested
quarterly information to ensure the accuracy of the information,
especially as CMS has proposed posting MIPS performance scores on the
Physician Compare Web site.
Another commenter encouraged CMS to release the reports as early as
possible, at minimum following the MACRA recommendation that data be
available on a quarterly basis, so that clinicians are not well into
the next reporting cycle before they learn of their MIPS results and
performance and can institute workflow changes to ensure success under
MIPS. One commenter expressed that measure-based feedback is helpful as
eligible clinicians determine performance improvement plans and select
measures for future performance periods. One commenter believes that
patient-level data is helpful to eligible clinicians as they determine
areas in which additional resources can be allocated.
Response: Under section 1848(q)(12)(A)(i) of the Act, we are at a
minimum required to provide MIPS eligible clinicians with timely (such
as quarterly) confidential feedback on their performance under the
quality and cost performance categories beginning July 1, 2017, and we
have discretion to provide such feedback regarding the improvement
activities and advancing care information performance categories. We
are finalizing our policy as proposed to provide performance feedback
annually on the quality and cost performance categories, and as
technically feasible the improvement activities and advancing care
information performance categories. As we have indicated previously,
our goal is to provide even more timely feedback under MIPS as the
program evolves, and we are continuing to work with stakeholders as we
build performance feedback to incorporate technology to improve the
usability of performance feedback. We do note that there are a number
of challenges with providing feedback more frequently than annually,
namely that for the MIPS performance period, we can only provide
feedback on performance as often as data are reported to us; for MIPS,
this will be an annual basis for all quality submission mechanisms
except for claims and administrative claims. Once data are available on
a more frequent basis, we can continue exploring a path to provide
performance feedback more frequently, such as quarterly. The inability
to provide feedback more frequently than annually is not a reason for
clinicians to be exempted from the Quality Payment
Program, and by statute there is no authority to create such
exemptions. For eligible clinicians and groups who use a third party
intermediary to report data, we expect those intermediaries to provide
additional performance feedback on top of what CMS is providing through
the annual performance feedback. Lastly, we are working with
stakeholders on an API alpha where registries, and other third party
intermediaries as technically feasible, are currently testing real-time
feedback capabilities with the intermediary directly sharing the
feedback with the eligible clinician or group. Finally, we refer readers
to section II.C.5. of this final rule with comment period for more
information on the MIPS performance period.
Comment: One commenter requested CMS include the advancing care
information and improvement activities categories in the report as well
in order for eligible clinicians to familiarize themselves with the
program and scoring and to enable them to make decisions to support
their success under MIPS.
Response: We agree that all four performance categories are
beneficial to include in performance feedback, and are working to
incorporate these data into the July 1, 2018 performance feedback, as
technically feasible. We will continue to work with stakeholders on the
best way to include all four performance categories in performance
feedback.
Comment: One commenter suggested that one person should be able to
obtain feedback for an entire TIN or even group of TINs at once because
seeking out a report on each NPI is not sustainable for larger
organizations.
Response: We are continuing to evaluate ways to make the data in
performance feedback more easily accessible to practice managers who
manage large numbers of clinicians (TINs).
Comment: One commenter noted that cost category elements could be
submitted to other entities using APIs.
Response: We note that the cost category measures for MIPS require
no submission, and are entirely claims based. Therefore, identifying
additional submission mechanisms for cost category data appears
unnecessary.
Comment: Several commenters suggested feedback on cost measures and
cost performance category as it relates to performance feedback. Some
commenters agreed it would be helpful to provide more frequent and
actionable feedback on the cost performance category using rolling 12-
month periods or quarterly snapshots of the most recent 12-month
period. One commenter asked for the agency to do this in a transparent
manner. A few commenters asked for information on cost performance to
include cost metrics related to episodes of care and comparative data.
Another commenter expressed concerns that there is limited
experience with episode groupers and urged CMS to test and evaluate its
episode grouper methodology and ensure that its application will not
result in unintended consequences, such as stinting on needed care as a
way to ensure that costs within the defined episode are contained.
Another commenter noted that issues such as identifying the correct
length of the episode window and assigning services to the episode
(like rehabilitation therapy or imaging, etc.) each take hours to
resolve and asked that CMS think critically about the MIPS timeline
needed to build out every episode of care in the Medicare population.
That commenter further requested that CMS provide stakeholders with a
time-table for developing Medicare Cost measure episodes as well as a
list of future episodes under consideration and that once developed,
the proposed details of the new episode-based cost measures should be
subject to notice and comment rulemaking in a future proposed rule.
Finally, the commenter noted that while this performance category
relies solely on administrative claims data, critical resource use
related questions like attribution and risk adjustment for medically
complex patients still need solutions that can only be answered with
additional time and through CMS collaboration with the relevant
professions.
[[Page 53801]]
Response: We will take this into consideration as we continue to
build the mechanisms and formats for performance feedback for the
Quality Payment Program. Additionally, we are actively developing new
episode-based cost measures, which includes field testing the measures
that will share such information with clinicians. We will continue to
engage in user research with front-line clinicians and other
stakeholders to ensure we are providing the performance feedback data
in a user-friendly format, and that we are including the data most
relevant to clinicians. In particular, we have held a Technical Expert
Panel focused on risk adjustment for episode-based cost measures, which
has informed the development of potential new episode-based cost
measures. Any new cost measures would be proposed through rulemaking.
In terms of clinician involvement, the cost measure development
contractor has brought together nearly 150 clinicians affiliated with
nearly 100 specialty societies to both recommend which new cost
measures to build first and to review and make recommendations for
every step of cost measure development including which claims to
include and risk adjustment.
Comment: One commenter suggested including the following
information in standardized feedback reports as fields: (1) Indications
for individual or group classification for non-patient facing, small
group practice, and rural area and health professional shortage area;
(2) indications for performance category reweighting and special
scoring considerations; (3) for the quality performance category,
including the title of the quality measure submitted, measure type, the
total points that can be achieved based on the benchmark, whether data
completeness has been met for the quality measure, decile level
achieved, measure achievement points, bonus points awarded, and
performance score; and (4) for the improvement activities category,
including the title of the improvement activity submitted, weighting of
the improvement activity, total points that can be earned, special
scoring applied; and points earned for the measure performance score.
Another commenter recommended including the following information in
standardized feedback reports: (1) Eligibility status for both eligible
clinicians and group practices; (2) for a group practice especially, a
defined and updated list of NPIs for which the group is responsible
when reporting at the group TIN level; (3) submission status, tracking
files submitted and whether or not the submission was successful; and
(4) scoring feedback on claims-based universal population quality
measures and cost measures, which is often not available to eligible
clinicians and groups until after the performance period.
Response: We agree with commenters about continually improving the
usability of performance feedback, and will continue doing stakeholder
outreach with the goal that the template for performance feedback will
be available in a usable and user-friendly format. We intend to
consider different options--including all the comments submitted on the
proposed rule--before the performance feedback is displayed in a web-
based application to MIPS eligible clinicians.
Final Action: As a result of the public comments, we are finalizing
these policies as proposed. Specifically, on an annual basis, beginning
July 1, 2018, performance feedback will be provided to MIPS eligible
clinicians and groups for the quality and cost performance categories
for the 2017 performance period, and if technically feasible, for the
improvement activities and advancing care information performance
categories.
We also solicited comment only on how often cost data should be
provided in performance feedback under the Quality Payment Program, as
well as which data fields in the QRUR would be useful to include in
Quality Payment Program performance feedback.
We received a number of comments on this item and appreciate the
input received. As this was a request for comment only, we will take
the feedback provided into consideration for the future as we continue
to build performance feedback.
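For illustration only, and not as part of the regulatory text, the
following Python sketch shows one hypothetical way the kinds of data
fields commenters suggested above could be organized in a per-clinician
feedback record. All field names and values are assumptions for the
example and do not reflect any finalized CMS template.

from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityMeasureFeedback:
    title: str                  # title of the quality measure submitted
    measure_type: str           # for example, process or outcome
    max_points: float           # total points achievable based on the benchmark
    data_completeness_met: bool
    decile_achieved: int
    achievement_points: float
    bonus_points: float

@dataclass
class FeedbackReport:
    tin: str   # group Taxpayer Identification Number
    npi: str   # clinician National Provider Identifier
    special_statuses: List[str] = field(default_factory=list)       # e.g., small practice, rural, HPSA
    reweighted_categories: List[str] = field(default_factory=list)  # categories subject to reweighting
    quality_measures: List[QualityMeasureFeedback] = field(default_factory=list)

# Example usage with invented values.
report = FeedbackReport(
    tin="000000000",
    npi="0000000000",
    special_statuses=["small practice"],
    quality_measures=[QualityMeasureFeedback(
        title="Example measure", measure_type="process", max_points=10.0,
        data_completeness_met=True, decile_achieved=7,
        achievement_points=7.0, bonus_points=1.0)],
)
print(report.quality_measures[0].achievement_points)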
(b) MIPS APMs
We proposed that MIPS eligible clinicians who participate in MIPS
APMs would receive performance feedback in 2018 and future years of the
Quality Payment Program, as technically feasible. We referred readers
to section II.C.6.g.(5) of the proposed rule for additional information
related to the proposal. A summary of comments on those proposals can
be found in section II.C.6.g.(5) of this final rule with comment
period.
(c) Voluntary Clinician and Group Reporting
As noted in the CY 2017 Quality Payment Program final rule (81 FR
77071), eligible clinicians who are not included in the definition of a
MIPS eligible clinician during the first 2 years of MIPS (or any
subsequent year) may voluntarily report on measures and activities
under MIPS, but will not be subject to the payment adjustment. In the
CY 2017 Quality Payment Program final rule (81 FR 77346), we summarized
public comments requesting that eligible clinicians who are not
required, but who voluntarily report on measures and activities under
MIPS, should receive the same access to performance feedback as MIPS
eligible clinicians; there, we indicated that we would take the
comments into consideration in the future development of performance
feedback. We proposed to furnish performance feedback to eligible
clinicians and groups that do not meet the definition of a MIPS
eligible clinician but voluntarily report on measures and activities
under MIPS. We proposed that this would begin with data collected in
performance period 2017, and would be available beginning July 1, 2018.
Based on user and market research, we believe that making this
information available would provide value in numerous ways. First, it
would help clinicians who are excluded from MIPS in the 2017
performance period, but who may be considered MIPS eligible clinicians
in future years, to prepare for participation in the Quality Payment
Program when there are payment consequences associated with
participation. Second, it would give all clinicians equal access to the
CMS claims and benchmarking data available in performance feedback. And
third, it would allow clinicians who may be interested in participating
in an APM to make a more informed decision.
The following is a summary of the public comments received on the
``Voluntary Clinician and Group Reporting'' proposals and our
responses:
Comment: A few commenters supported providing feedback reports to
clinicians who do not meet the definition of MIPS eligible clinician,
but voluntarily report measures and activities to MIPS, beginning July
1, 2018, and containing information on data submitted in the 2017
performance period, because MIPS provides a valuable introduction to
value-based payment for clinicians that may not have previously
encountered it, which will help them better understand the program and
prepare for successful participation in the future if they become MIPS
eligible.
Response: We agree this data will be useful and are finalizing this
proposal.
Final Action: As a result of the public comments, we are finalizing
this policy as proposed. Specifically, starting with data collected in
the 2017 performance period and beginning
[[Page 53802]]
July 1, 2018, we will furnish performance feedback to eligible
clinicians and groups that do not meet the definition of a MIPS
eligible clinician but voluntarily report on measures and activities
under MIPS.
(2) Mechanisms
Under section 1848(q)(12)(A)(ii) of the Act, the Secretary may use
one or more mechanisms to make performance feedback available, which
may include use of a web-based portal or other mechanisms determined
appropriate by the Secretary. For the quality performance category,
described in section 1848(q)(2)(A)(i) of the Act, the feedback shall,
to the extent an eligible clinician chooses to participate in a data
registry for purposes of MIPS (including registries under sections
1848(k) and (m) of the Act), be provided based on performance on
quality measures reported through the use of such registries. For any
other performance category (that is, cost, improvement activities, or
advancing care information), the Secretary shall encourage provision of
feedback through qualified clinical data registries (QCDRs) as
described in section 1848(m)(3)(E) of the Act.
As previously stated in the CY 2017 Quality Payment Program final
rule (81 FR 77347 through 77349), we will use a CMS-designated system
as the mechanism for making performance feedback available, which we
expect will be a web-based application. We expect to use a new and
improved format for the next performance feedback, anticipated to be
released around July 1, 2018. It will be provided via the Quality
Payment Program Web site (qpp.cms.gov), and we intend to leverage
additional mechanisms, such as health IT vendors, registries, and QCDRs
to help disseminate data and information contained in the performance
feedback to eligible clinicians, where applicable.
We also sought comment on how health IT, either in the form of an
EHR or as a supplemental module, could better support the feedback
related to participation in the Quality Payment Program and quality
improvement in general. Specifically--
Are there specific health IT functionalities that could
contribute significantly to quality improvement?
Are there specific health IT functionalities that could be
part of a certified EHR technology or made available as optional health
IT modules in order to support the feedback loop related to Quality
Payment Program participation or participation in other HHS reporting
programs?
In what other ways can health IT support clinicians
seeking to leverage quality data reports to inform clinical improvement
efforts? For example, are there existing or emerging tools or resources
that could leverage an API to provide timely feedback on quality
improvement activities?
Are there opportunities to expand existing tracking and
reporting for use by clinicians, for example expanding the feedback
loop for patient engagement tools to support remote monitoring of
patient status and access to education materials?
We welcomed public comment on these questions.
We also noted in the proposed rule that we intend to continue to
leverage third party intermediaries as a mechanism to provide
performance feedback (82 FR 30155 through 30156). In the CY 2017
Quality Payment Program final rule (81 FR 77367 through 77386) we
finalized that at least 4 times per year, qualified registries and
QCDRs will provide feedback on all of the MIPS performance categories
that the qualified registry or QCDR reports to us (improvement
activities, advancing care information, and/or quality performance
category). The feedback should be given to the individual MIPS eligible
clinician or group (if participating as a group) at the individual
participant level or group level, as applicable, for which the
qualified registry or QCDR reports. The qualified registry or QCDR is
only required to provide feedback based on the MIPS eligible
clinician's data that is available at the time the performance feedback
is generated. In regard to third party intermediaries, we also noted we
would look to propose ``real time'' feedback as soon as it is
technically feasible.
We also noted in the proposed rule (82 FR 30156) that, per the
policies finalized in the CY 2017 Quality Payment Program final rule
(81 FR 77367 through 77386), we require qualified registries and QCDRs,
as well as encourage other third party intermediaries (such as health
IT vendors that submit data to us on behalf of a MIPS eligible
clinician or group), to provide performance feedback to individual MIPS
eligible clinicians and groups via the third party intermediary with
which they are already working. We also noted that we understand that
performance feedback is valuable to individual clinicians and groups,
and seek feedback from third party intermediaries on when ``real-time''
feedback could be provided.
As discussed in the proposed rule (see 82 FR 30156), we plan to
continue to work with third party intermediaries as we continue to
develop the mechanisms for performance feedback, to see where we may be
able to develop and implement efficiencies for the Quality Payment
Program. We are exploring options with an API, which could allow
authenticated third party intermediaries to access the same data that
we use to provide confidential feedback to the individual clinicians
and groups on whose behalf the third party intermediary reports for
purposes of MIPS, in accordance with applicable law, including, but not
limited to, the HIPAA Privacy and Security Rules. Our goal is to enable
individual clinicians and groups to more easily access their feedback
via the mechanisms and relationships they already have established. We
referred readers to section II.C.10. of the proposed rule for
additional information on Third Party Data Submission.
We solicited comment only on mechanisms used for performance
feedback but did not propose any specific policy.
We received a number of comments on this item and appreciate the
input received. As this was a request for comment only, we will take
the feedback provided into consideration for the future as we continue
to build performance feedback.
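For illustration only, and not as a description of any actual CMS
interface, the following Python sketch shows the general pattern by
which an authenticated third party intermediary might retrieve the
feedback data it is authorized to see through an API. The endpoint URL,
token handling, and response shape are assumptions.

import requests

# Hypothetical endpoint; not a real CMS URL.
HYPOTHETICAL_ENDPOINT = "https://example.invalid/feedback/v1/clinicians/{npi}"

def fetch_feedback(npi: str, access_token: str) -> dict:
    """Retrieve confidential feedback for one clinician the intermediary reports for."""
    response = requests.get(
        HYPOTHETICAL_ENDPOINT.format(npi=npi),
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()  # surface authorization or availability errors
    return response.json()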
(3) Receipt of Information
Section 1848(q)(12)(A)(v) of the Act states that the Secretary may
use the mechanisms established under section 1848(q)(12)(A)(ii) of the
Act to receive information from professionals. This allows for expanded
use of the feedback mechanism to not only provide feedback on
performance to MIPS eligible clinicians, but to also receive
information from professionals.
In the CY 2017 Quality Payment Program final rule (81 FR 77350), we
discussed that we intended to explore the possibility of adding this
feature to the CMS-designated system, such as a portal, in future years
under MIPS. Although we did not make any specific proposals at this
time, we sought comment on the features that could be developed for the
expanded use of the feedback mechanism. This could be a feature where
eligible clinicians and groups can send their feedback (for example, if
they are experiencing issues accessing their data, technical questions
about their data, etc.) to us through the Quality Payment Program
Service Center or the Quality Payment Program Web site. We noted that
we appreciate that eligible clinicians and groups may have questions
regarding the Quality Payment Program information contained in their
performance feedback. To assist
[[Page 53803]]
eligible clinicians and groups, we intend to utilize existing
resources, such as a helpdesk, or offer technical assistance to help
address questions with the goal of linking these resource features to
the Quality Payment Program Web site and Service Center.
We solicited comment only on the receipt of information on features
that could be developed for the expanded use of the feedback mechanism
but did not propose any specific policy.
We received a number of comments on this item and appreciate the
input received. As this was a request for comment only, we will take
the feedback provided into consideration for the future as we continue
to build performance feedback. As a reminder, we have already
established a single helpdesk to address all questions related to the
Quality Payment Program. Please visit our Web site at qpp.cms.gov for
more information.
(4) Additional Information--Type of Information
Section 1848(q)(12)(B)(i) of the Act states that beginning July 1,
2018, the Secretary shall make available to MIPS eligible clinicians
information about the items and services for which payment is made
under Title XVIII that are furnished to individuals who are patients of
MIPS eligible clinicians by other suppliers and providers of services.
This information may be made available through mechanisms determined
appropriate by the Secretary, such as the CMS-designated system that
would also provide performance feedback. Section 1848(q)(12)(B)(ii) of
the Act specifies that the type of information provided may include the
name of such providers, the types of items and services furnished, and
the dates that items and services were furnished. Historical data
regarding the total, and components of, allowed charges (and other
figures as determined appropriate by the Secretary) may also be
provided.
We proposed, beginning with the performance feedback provided
around July 1, 2018, to make available to MIPS eligible clinicians and
eligible clinicians information about the items and services for which
payment is made under Title XVIII that are furnished to individuals who
are patients of MIPS eligible clinicians and eligible clinicians by
other suppliers and providers of services. We proposed to include as
many of the following data elements as technically feasible: The name
of such suppliers and providers of services; the types of items and
services furnished and received; the dollar amount of services provided
and received; and the dates that items and services were furnished. We
proposed that the additional information would include historical data
regarding the total, and components of, allowed charges (and other
figures as determined appropriate). We proposed that this information
be provided at the aggregate level, with the exception of data on items
and services, for which we could consider providing patient-level data
if clinicians find that level of detail useful, although we noted it
may contain personally identifiable information and protected
health information. We proposed that the date range for making this
information available would be based on what is most helpful to
clinicians, which could include the most recent data we have available,
which as technically feasible would be from the previous 3 to 12-month
period. We proposed to make this information available via the Quality
Payment Program Web site, and as technically feasible, as part of the
performance feedback. Finally, because data on items and services
furnished is generally kept confidential, we proposed that access would
be provided only after secure credentials are obtained.
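For illustration only, and not as part of the regulatory text, the
following Python sketch aggregates invented claim lines into the kind
of aggregate-level items-and-services summary described above
(supplier, type of service, allowed charges, and date range). The
records and field names are hypothetical.

from collections import defaultdict

claim_lines = [
    {"supplier": "Supplier A", "service_type": "imaging", "allowed_charge": 120.0, "date": "2017-03-02"},
    {"supplier": "Supplier A", "service_type": "imaging", "allowed_charge": 95.0,  "date": "2017-06-17"},
    {"supplier": "Supplier B", "service_type": "lab",     "allowed_charge": 40.0,  "date": "2017-04-09"},
]

# Aggregate by supplier and type of service.
summary = defaultdict(lambda: {"total_allowed": 0.0, "count": 0, "first": None, "last": None})
for line in claim_lines:
    key = (line["supplier"], line["service_type"])
    entry = summary[key]
    entry["total_allowed"] += line["allowed_charge"]
    entry["count"] += 1
    entry["first"] = min(filter(None, [entry["first"], line["date"]]))  # earliest date of service
    entry["last"] = max(filter(None, [entry["last"], line["date"]]))    # latest date of service

for (supplier, service_type), totals in summary.items():
    print(supplier, service_type, totals)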
The following is a summary of the public comments received on the
``Additional Information--Type of Information'' proposals and our
responses:
Comment: One commenter supported providing additional information
about the items and services for which payment is made under Title
XVIII. The commenter urged CMS to make the information more robust by
identifying alternatives to the items or services provided that would
have been more cost effective to the patient while still delivering the
same quality of care.
Two commenters provided recommendations for the type of additional
information to include in performance feedback. One commenter
recommended that CMS create machine-readable APIs for the feedback
mechanism so that vendors could then interpret ``raw data,'' thereby
enabling them to develop visualization and processing tools to better
understand this data. The commenter believes that providing all data
and allowing community tools to filter out irrelevant data would
provide more useful insights to MIPS eligible clinicians. Another
commenter suggested inclusion of information about which patients are
attributed to particular clinicians, which other clinicians have
partnered in that care, and the care directly attributed to the
clinician. The commenter believes that inclusion of this information
would better balance the power between CMS to audit and potentially
recover money with the opportunity for an eligible clinician to seek an
informal review. Furthermore, the commenter observed that current
feedback reports lack key details for understanding the methodologies
used to arrive at the benchmarks and other calculations and encouraged
CMS to generate a summary report of all measures across the MIPS
domains per specialty and TIN size, including the ``success'' of each
measure assessed.
Response: We appreciate the feedback provided and will consider
these ideas as we continue to build performance feedback. We are
continuing to work with registries, QCDRs, and health IT vendors to
test new APIs and plan to continue to develop new APIs as the Quality
Payment Program progresses. We also continue to evaluate what
additional information and type of information as required by section
1848(q)(12)(B)(i) of the Act would be useful to clinicians and groups,
and are currently working with stakeholders to establish what to
include in performance feedback.
Final Action: As a result of the public comments, we are finalizing
these policies as proposed. Section 1848(q)(12)(B)(i) of the Act states
that beginning July 1, 2018, the Secretary shall make available to MIPS
eligible clinicians information about the items and services for which
payment is made under Title XVIII that are furnished to individuals who
are patients of MIPS eligible clinicians by other suppliers and
providers of services.
(5) Performance Feedback Template
In the proposed rule (82 FR 30157), we noted our intent to do as
much of the development of the template for performance feedback as we
can by working with the stakeholder community in a transparent
manner. We stated our belief that this will encourage stakeholder
commentary and make sure the result is the best possible format(s) for
feedback.
To continue with our collaborative goal of working with the
stakeholder community, we sought comment on the structure, format,
content (for example, detailed goals, data fields, and elements) that
would be useful for MIPS eligible clinicians and groups to include in
performance feedback, including the data on items and services
furnished, as discussed above. Additionally, we understand the term
``performance feedback'' may not be a meaningful phrase to communicate
to clinicians or groups the scope of the data. Therefore, we sought
comment on a more suitable term than ``performance feedback.'' User
[[Page 53804]]
testing to date has provided some considerations for a name in the
Quality Payment Program, such as Progress Notes, Reports, Feedback,
Performance Feedback, or Performance Reports.
Any suggestions on the template to be used for performance feedback
or what to call ``performance feedback'' can be submitted to the
Quality Payment Program Web site at qpp.cms.gov.
We received a number of comments on this item and appreciate the
input received. As this was a request for comment only and we did not
make a proposal, we will take the feedback provided into consideration
for the future as we continue to build performance feedback. We intend
to do as much of the development of the template for performance
feedback as we can by working with the stakeholder community in a
transparent manner. We invite clinicians and groups that have ideas
they want to share, or that would like to participate in user testing,
to email partnership@cms.hhs.gov. We think this will both
encourage stakeholder commentary and make sure we end up with the best
possible format(s) for feedback. We intend for this performance
feedback to be available in the new format on the 2017 performance
period by summer 2018, after the 2017 reporting closes.
b. Targeted Review
In the CY 2017 Quality Payment Program final rule (81 FR 77546), we
finalized at Sec. 414.1385 that MIPS eligible clinicians or groups may
request a targeted review of the calculation of the MIPS payment
adjustment factor under section 1848(q)(6)(A) of the Act and, as
applicable, the calculation of the additional MIPS payment adjustment
factor under section 1848(q)(6)(C) of the Act applicable to such MIPS
eligible clinician or group for a year. We noted MIPS eligible
clinicians who are scored under the APM scoring standard described in
section II.C.6.g. of the proposed rule may request this targeted
review. Although we did not propose any changes to the targeted review
process, we provided information on the process that was finalized in
the CY 2017 Quality Payment Program final rule (81 FR 77353 through
77358).
(1) MIPS eligible clinicians and groups have a 60-day period to
submit a request for targeted review, which begins on the day we make
available the MIPS payment adjustment factor, and if applicable the
additional MIPS payment adjustment factor, for the MIPS payment year
and ends on September 30 of the year prior to the MIPS payment year or
a later date specified by us.
(2) We will respond to each request for targeted review timely
submitted and determine whether a targeted review is warranted.
Examples under which a MIPS eligible clinician or group may wish to
request a targeted review include, but are not limited to:
The MIPS eligible clinician or group believes that
measures or activities submitted to us during the submission period and
used in the calculations of the final score and determination of the
adjustment factors have calculation errors or data quality issues.
These submissions could be with or without the assistance of a third
party intermediary; or
The MIPS eligible clinician or group believes that there
are certain errors made by us, such as performance category scores were
wrongly assigned to the MIPS eligible clinician or group (for example,
the MIPS eligible clinician or group should have been subject to the
low-volume threshold exclusion, or a MIPS eligible clinician should not
have received a performance category score).
(3) The MIPS eligible clinician or group may include additional
information in support of their request for targeted review at the time
the request is submitted. If we request additional information from the
MIPS eligible clinician or group, it must be provided and received by
us within 30 days of the request. Non-responsiveness to the request for
additional information may result in the closure of the targeted review
request, although the MIPS eligible clinician or group may submit
another request for targeted review before the deadline.
(4) Decisions based on the targeted review are final, and there is
no further review or appeal.
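For illustration only, the following Python sketch computes the
targeted review request window described in paragraph (1) above: a
nominal 60-day period that opens when we make the MIPS payment
adjustment factor available and closes on September 30 of the year
prior to the MIPS payment year, or a later date we specify. The dates
used are examples.

from datetime import date
from typing import Optional, Tuple

def review_window(release_date: date, payment_year: int,
                  extended_close: Optional[date] = None) -> Tuple[date, date]:
    """Return (open, close) dates for the targeted review request window."""
    close = date(payment_year - 1, 9, 30)   # September 30 of the year prior to the payment year
    if extended_close is not None and extended_close > close:
        close = extended_close              # or a later date specified by CMS
    return release_date, close

# Example: feedback released July 1, 2018 for the 2019 MIPS payment year.
opens, closes = review_window(date(2018, 7, 1), 2019)
print(opens <= date(2018, 8, 15) <= closes)  # True: this example request falls within the window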
c. Data Validation and Auditing
In the CY 2017 Quality Payment Program final rule (81 FR 77546
through 77547), we finalized at Sec. 414.1390(a) that we will
selectively audit MIPS eligible clinicians and groups on a yearly
basis. If a MIPS eligible clinician or group is selected for audit, the
MIPS eligible clinician or group will be required to do the following
in accordance with applicable law and timelines we establish:
(1) Comply with data sharing requests, providing all data as
requested by us or our designated entity. All data must be shared with
us or our designated entity within 45 days of the data sharing request,
or an alternate timeframe that is agreed to by us and the MIPS eligible
clinician or group. Data will be submitted via email, facsimile, or an
electronic method via a secure Web site maintained by us.
(2) Provide substantive, primary source documents as requested.
These documents may include: Copies of claims, medical records for
applicable patients, or other resources used in the data calculations
for MIPS measures, objectives, and activities. Primary source
documentation also may include verification of records for Medicare and
non-Medicare beneficiaries where applicable. We did not propose any
changes to the requirements in Sec. 414.1390(a).
We indicated in the CY 2017 Quality Payment Program final rule that
all MIPS eligible clinicians and groups that submit data to us
electronically must attest to the best of their knowledge that the data
submitted to us is accurate and complete (81 FR 77362). We also
indicated in the final rule that attestation requirements would be part
of the submission process (81 FR 77360). We neglected to codify this
requirement in regulation text of the CY 2017 Quality Payment Program
final rule. Additionally, after further consideration since the final
rule, the requirement is more in the nature of a certification, rather
than an attestation. Thus, we proposed to revise Sec. 414.1390 to add
a new paragraph (b) that requires all MIPS eligible clinicians and
groups that submit data and information to CMS for purposes of MIPS to
certify to the best of their knowledge that the data submitted to CMS
is true, accurate, and complete. We also proposed that the
certification by the MIPS eligible clinician or group must accompany
the submission.
We also indicated in the CY 2017 Quality Payment Program final rule
that if a MIPS eligible clinician or group is found to have submitted
inaccurate data for MIPS, we would reopen and revise the determination
in accordance with the rules set forth at Sec. Sec. 405.980 through
405.984 (81 FR 77362). We neglected to codify this policy in regulation
text of the CY 2017 Quality Payment Program final rule and further, we
did not include Sec. 405.986, which is also an applicable rule in our
reopening policy. We also finalized our approach to recoup incorrect
payments from the MIPS eligible clinician by the amount of any debts
owed to us by the MIPS eligible clinician and likewise, we would recoup
any payments from the group by the amount of any debts owed to us by
the group. Thus, we proposed to revise Sec. 414.1390 to add a new
paragraph (c) that states we may reopen and revise a MIPS payment
determination in accordance with the rules set forth at Sec. Sec.
405.980 through 405.986.
[[Page 53805]]
In the CY 2017 Quality Payment Program final rule, we also
indicated that MIPS eligible clinicians and groups should retain copies
of medical records, charts, reports and any electronic data utilized
for reporting under MIPS for up to 10 years after the conclusion of the
performance period (81 FR 77360). We neglected to codify this policy in
regulation text of the CY 2017 Quality Payment Program final rule.
Thus, we proposed to revise Sec. 414.1390 to add a new paragraph (d)
that states that all MIPS eligible clinicians or groups that submit
data and information to CMS for purposes of MIPS must retain such data
and information for a period of 10 years from the end of the MIPS
performance period.
Finally, we indicated in the CY 2017 Quality Payment Program final
rule, that, in addition to recouping any incorrect payments, we intend
to use data validation and audits as an educational opportunity for
MIPS eligible clinicians and groups and we note that this process will
continue to include education and support for MIPS eligible clinicians
and groups selected for an audit.
The following is a summary of the public comments received on the
``Data Validation And Auditing'' proposals and our responses:
Comment: One commenter supported CMS' proposals regarding data
validation and auditing requirements.
Response: We thank the commenter for their support.
Comment: Several commenters expressed concern regarding CMS's
proposal to codify the requirement that eligible clinicians and groups
must retain data utilized for reporting under MIPS for a period of 10
years from the end of the MIPS performance period. A few commenters
noted the 10-year retention requirement is excessive, and will create
undue financial and time burden for eligible clinicians associated with
managing, storing, and retrieving data for audit. Some of these
commenters also noted the 10-year retention requirement is inconsistent
with data retention requirements for other CMS programs, such as the
EHR Incentive Program, the record retention requirements for non-
Quality Payment Program Part B payments, the rules governing CMS's
Recovery Audit Contractors, and state laws on medical records
retention. As a result, using a 10-year retention period would create
multiple disparate data retention requirements for eligible clinicians
participating in MIPS. A few commenters also asserted using the outer
limit of False Claims Act liability as the data retention requirement
for MIPS is inappropriate because the False Claims Act relates to
instances where a party knowingly files a false claim for payment, and
therefore, is an unduly burdensome and inappropriate baseline for data
retention policies in a quality program. A few commenters therefore
recommended CMS reduce the record retention policy to 3 years, as the
commenters stated it is comparable to rules for CMS' Recovery Audit
Contractors and it would allow eligible clinicians to retain the
performance year data that would be used for payment adjustments.
Other commenters recommended using a 6-year retention period,
stating that it would be similar to the requirements under the EHR
Incentive Program. Two commenters specifically recommended adopting a
5-year retention period, with one commenter noting state law record
retention rules which use a 5 to 7-year record retention time period.
Response: We appreciate the commenters' concerns and suggestions to
reduce the record retention period. We understand concerns regarding
the financial and time burdens associated with retaining data and
information. Therefore, we are modifying our proposed record retention
policy at Sec. 414.1390(d) to require all MIPS eligible clinicians and
groups that submit data and information to CMS for purposes of MIPS to
retain such data and information for a period of 6 years from the end
of the performance period. We believe our final 6-year record retention
requirement reduces burden and cost on MIPS eligible clinicians and
groups and is consistent with HIPAA record retention requirements and
other Medicare program requirements.
Comment: Several commenters requested additional guidance regarding
the specific data eligible clinicians and groups must retain for
auditing purposes and who should be responsible for retaining this
data. A few commenters urged CMS to further specify the data retention
required for auditing purposes prior to the beginning of the
performance period so eligible clinicians and groups have adequate
notice of what is expected of and required from them. One commenter
specifically requested additional information regarding what evidence
an eligible clinician should retain to support attestations, and
encouraged CMS to provide eligible clinicians additional education
regarding their data retention responsibilities. Another commenter
requested clarification on whether the data retention requirements
apply to third-party entities who submit data to CMS on behalf of
eligible clinicians.
Response: MIPS eligible clinicians and groups are responsible for
retaining data. Please note, in the CY 2017 Quality Payment Program
final rule, we required at Sec. 414.1390(a)(1) that MIPS eligible
clinicians and groups must provide all data as requested by CMS or its
designated entity, and at Sec. 414.1390(a)(2) that MIPS eligible
clinicians and groups must provide substantive, primary source
documents as requested. Such documents may include: Copies of claims,
medical records for applicable patients, or other resources used in the
data calculations for MIPS measures, objectives, and activities; and
verification of records for Medicare and non-Medicare beneficiaries
where applicable. We will continue providing clarification through
subregulatory guidance. We also encourage MIPS eligible clinicians and
groups to review the current guidance available on the Quality Payment
Program Web site at https://qpp.cms.gov/docs/QPP_MIPS_Data_Validation_Criteria.zip.
Additionally, we note that the certification policy we finalized in
the CY 2017 Quality Payment Program final rule (81 FR 77362) and
proposed to codify at Sec. 414.1390(b) in the CY 2017 Quality Payment
Program proposed rule (82 FR 30254), requires MIPS eligible clinicians
and groups to certify to the best of their knowledge that the data
submitted to CMS is true, accurate, and complete. Thus, the evidence
needed to support such an assertion would be the types of data and
information we would request under Sec. 414.1390. Finally, we refer
readers to section II.C.10.g. of this final rule with comment period
where we discuss the record retention policy for third party
intermediaries. This policy is found at Sec. 414.1400(j)(2), which we
proposed to update in the CY 2018 Quality Payment Program proposed rule
(82 FR 30255).
Comment: A few commenters provided specific recommendations
regarding CMS' auditing process for MIPS data, focusing on the need for
a process that is not excessively burdensome for eligible clinicians
and provides sufficient time to respond to auditing requests in light
of eligible clinicians' patient care obligations and resource
availability. One commenter specifically recommended CMS establish an
ombudsman for the sole purpose of monitoring and responding to eligible
clinicians' complaints and concerns regarding the burden associated
with audits. Another commenter recommended CMS develop a process to
protect eligible clinicians' rights and offer recourse for eligible
clinicians in instances where there are
[[Page 53806]]
issues with third-party intermediaries retaining data on their behalf.
A third commenter requested CMS establish a fair and transparent
auditing process with clear documentation requirements and data
validation criteria for each MIPS category in order to lower the
likelihood of misinterpretation by eligible clinicians and groups.
Response: We believe the audit process established is reasonable
and is no more burdensome than other existing Medicare audit processes,
which similarly require providers and suppliers to furnish
documentation to support the accuracy of previous statements made to
CMS. We do not believe an ombudsman is necessary, but we will closely
monitor concerns from MIPS eligible clinicians and groups regarding
audit burdens and third-party intermediary issues. We believe that MIPS
eligible clinicians and groups should incorporate appropriate
protections into their agreements with third party intermediaries.
Additionally, in regard to establishing a fair and transparent
auditing process, we refer readers to our response above and reiterate
that we believe Sec. 414.1390(a) sets forth what must be retained for
purposes of an audit. We will continue providing clarification through
subregulatory guidance.
Final Action: After consideration of the public comments, we are
finalizing our proposal as proposed to add a new paragraph (b) to Sec.
414.1390 that requires all MIPS eligible clinicians and groups that
submit data and information to CMS for purposes of MIPS to certify to
the best of their knowledge that the data submitted to CMS is true,
accurate, and complete. Further, we finalize that the certification by
the MIPS eligible clinician or group must accompany the submission and
be made at the time of the submission. We are also finalizing with
clarification our proposal to revise Sec. 414.1390 to add a new
paragraph (c). Specifically, we are clarifying that we may reopen and
revise a MIPS payment adjustment rather than a payment determination.
Thus, we are finalizing our proposal to add a new paragraph (c) at
Sec. 414.1390 that states we may reopen and revise a MIPS payment
adjustment in accordance with the rules set forth at Sec. Sec. 405.980
through 405.986. Finally, we are finalizing our proposal with
modification to revise Sec. 414.1390 to add a new paragraph (d)
stating that all MIPS eligible clinicians and groups that submit data
and information to CMS for purposes of MIPS must retain such data and
information for 6 years from the end of the MIPS performance period.
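For illustration only, the following Python sketch computes the end of
the 6-year retention period finalized at Sec. 414.1390(d) from the
last day of a performance period; the regulation text, not this
sketch, governs.

from datetime import date

RETENTION_YEARS = 6  # finalized retention period at Sec. 414.1390(d)

def retention_end(performance_period_end: date) -> date:
    """Return the date through which submitted data and information must be retained."""
    return performance_period_end.replace(year=performance_period_end.year + RETENTION_YEARS)

# Example: data from the 2017 performance period (ending December 31, 2017).
print(retention_end(date(2017, 12, 31)))  # 2023-12-31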
10. Third Party Data Submission
a. Generally
Flexible reporting options will provide eligible clinicians with
choices that accommodate different practices and make measurement
meaningful. We believe that allowing eligible clinicians to participate
in MIPS through the use of third party intermediaries that will collect
or submit data on their behalf will help us accomplish our goal of
implementing a flexible program. We strongly encourage all third party
intermediaries to work with their MIPS eligible clinicians to ensure
the data submitted are representative of the individual MIPS eligible
clinician's or group's overall performance for that measure or
activity.
We use the term third party intermediary to refer to a qualified
registry, a QCDR, a health IT vendor or other third party that obtains
data from a MIPS eligible clinician's Certified Electronic Health
Record Technology, or a CMS-approved survey vendor. We refer readers to
the CY 2017 Quality
Payment Program final rule (81 FR 77363) and Sec. 414.1400 of the CFR
for our previously established policies regarding third party
intermediaries.
(1) Expansion to Virtual Groups
In the CY 2018 Quality Payment Program proposed rule (82 FR 30158),
we proposed to allow third party intermediaries to also submit on
behalf of virtual groups. We proposed to revise Sec. 414.1400(a)(1) to
state that MIPS data may be submitted by third party intermediaries on
behalf of an individual MIPS eligible clinician, group, or virtual
group. We also refer readers to section II.C.4. of this final rule with
comment period for a detailed discussion about virtual groups.
(2) Certification
Additionally, we believe it is important that the MIPS data
submitted by third party intermediaries is true, accurate, and
complete. To that end, in the CY 2018 Quality Payment Program proposed
rule (82 FR 30158), we proposed to add a requirement at Sec.
414.1400(a)(5) stating that all data submitted to CMS by a third party
intermediary on behalf of a MIPS eligible clinician, group or virtual
group must be certified by the third party intermediary to the best of
its knowledge as true, accurate, and complete. We also proposed that
this certification accompany the submission and be made at the time of
the submission. We solicited comments on these proposals.
Below is a summary of the public comments received on our proposals
to: (1) Allow third party intermediaries to also submit on behalf of
virtual groups; (2) require that all data submitted to us by a third
party intermediary on behalf of a MIPS eligible clinician, group, or
virtual group must be certified by the third party intermediary to the
best of its knowledge as true, accurate, and complete; and (3) require
that this certification occur at the time of the submission and
accompany the submission.
Comment: One commenter supported the proposal to permit third-party
intermediaries to submit data on behalf of not only individual eligible
clinicians and groups, but also on behalf of virtual groups.
Response: We thank the commenter for their support.
Comment: One commenter supported the proposal that third-party
intermediaries submitting data on behalf of a MIPS eligible clinician,
group, or virtual group must certify and attest that the data are true,
accurate, and complete at the time of submission but recommended that
CMS define ``true, accurate, and complete.'' One commenter expressed
concern that the term ``true, accurate, and complete'' is too vague,
particularly the ``complete'' element; that because third parties act
as intermediaries and are not the original source of the data, it is
not reasonable to request that they certify to those criteria; and
that because MIPS eligible clinicians themselves are the source of some
of the information, the data being attested to would be burdensome to
verify. The commenter recommended studying data quality, possibly
through a task force made up of all stakeholders engaged in this
process because they suspect that standard checks of the quality of
information retention from source to intermediary could be devised, but
it should be done in a deliberative manner that leaves all constituents
comfortable with the recommended process and requirements. One
commenter also recommended that the certification requirement occur not
with each individual submission, but rather on a registry level; and
that individual practices or registries should not be punished if the
attestation is found to be incorrect.
Response: We appreciate the commenter's recommendations. The
``true, accurate, and complete'' standard is used throughout the
Medicare program and is commonly used in Medicare certifications. We do
not believe that it is ambiguous or vague. Additionally, we understand
that third
[[Page 53807]]
party intermediaries may not always be the original source of data. In
the CY 2017 Quality Payment Program final rule with comment period (81
FR 77388), we clarified that MIPS eligible clinicians are ultimately
responsible for the data that is submitted by their third party
intermediary, and we expect that MIPS eligible clinicians and groups will
ultimately hold their third party intermediary accountable for accurate
reporting. However, we also expect third party intermediaries to
develop processes to ensure that the data and information they submit
to us on behalf of MIPS eligible clinicians, groups, and virtual groups
is true, accurate, and complete. We rely on the third party
intermediaries to address these issues in their arrangements and
agreements with other entities, including MIPS eligible clinicians,
groups, and virtual groups. We thank the commenter for their
recommendation to develop a task force and will take this into
consideration as we develop future policy. Additionally, we thank the
commenter for their recommendation that the certification requirement
be at the registry level. However, we are clarifying that the third
party intermediary must certify each submission and that the
certification must be for each MIPS eligible clinician, group, and
virtual group on whose behalf it is submitting data to us. Finally, a
third party intermediary that knowingly submits false data to the
government, whether the third party intermediary was the original
source of the data or not, would be subject to penalty under federal
law.
Comment: One commenter did not support the proposal that all data
must be certified by the third party intermediary because the commenter
believed the current regulations and attestations are adequate and
broadening the attestation without providing guidance to third party
intermediaries on how they can confidently make such an expansive
assertion is not reasonable.
Response: This certification requirement is not duplicative of
current requirements, nor is it an expansive assertion. Rather, it is a
change made for consistency across the program, so that all data
submissions are certified by the entity that submits them. We do not
believe the
certification requirement at Sec. 414.1400(a)(5) is an expansive
assertion because the third party intermediary is certifying to the
best of their knowledge that the data it submits is true, accurate, and
complete. The certification we are requiring at Sec. 414.1400(a)(5) is
imposed upon a third party intermediary and the data it submits to CMS
on behalf of an individual MIPS eligible clinician, group, or virtual
group, while the certification requirement we finalized in the CY 2017
QPP Final Rule (81 FR 77362) is imposed upon MIPS eligible clinicians
and groups that submit data and information to CMS for purposes of
MIPS. We believe it is important that all MIPS data submitted to CMS,
whether it is submitted by MIPS eligible clinicians, groups, virtual
groups, or a third party intermediary on behalf of MIPS eligible
clinicians, groups, or virtual groups, be certified as true, accurate,
and complete. Thus, we believe both certifications are necessary.
Moreover, we do not believe additional guidance is needed; we believe
that this requirement provides sufficient guidance for third party
intermediaries to execute this certification requirement in a manner
that is feasible in their business operations while remaining compliant
with requirements of the policy. We also refer readers to our response
to the previous comment for additional discussion.
Final Action: After consideration of the public comments received,
we are finalizing our proposals, as proposed: (1) To revise Sec.
414.1400(a)(1) to include virtual groups; and (2) that we will require
at Sec. 414.1400(a)(5) that all data submitted to CMS by a third party
intermediary on behalf of a MIPS eligible clinician, group or virtual
group must be certified by the third party intermediary to the best of
its knowledge as true, accurate, and complete; and require that this
certification occur at the time of the submission and accompany the
submission.
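For illustration only, and not as a CMS-defined format, the following
Python sketch shows one hypothetical way a third party intermediary
might attach the required certification to each submission it makes on
behalf of a MIPS eligible clinician, group, or virtual group, so that
the certification accompanies the submission and is made at the time
of submission. The payload shape and wording are assumptions.

from datetime import datetime, timezone

def build_submission(entity_type: str, entity_id: str, measures: dict) -> dict:
    """Bundle measure data with a certification made at the time of submission."""
    assert entity_type in {"individual", "group", "virtual_group"}
    return {
        "entity_type": entity_type,
        "entity_id": entity_id,
        "measures": measures,
        "certification": {
            "statement": ("Certified to the best of the submitter's knowledge as "
                          "true, accurate, and complete."),
            "certified_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example usage with invented identifiers and values.
submission = build_submission("group", "TIN-000000000", {"quality": {"measure_001": 0.82}})
print(submission["certification"]["statement"])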
b. Qualified Clinical Data Registries (QCDRs)
In the CY 2017 Quality Payment Program final rule (81 FR 77364), we
finalized the definition and capabilities of a QCDR. We also finalized
a requirement that QCDRs provide certain other information (described
below) at the time of
self-nomination. As previously established, if an entity becomes
qualified as a QCDR, it will need to sign a statement confirming this
information is correct prior to our listing it on our Web site (81 FR
77363). Once we post the QCDR on our Web site, including the services
offered by the QCDR, we will require the QCDR to support these services
and measures for its clients as a condition of the entity's
participation as a QCDR in MIPS (81 FR 77366). Failure to do so will
preclude the QCDR from participation in MIPS in the subsequent year (81
FR 77366). In the CY 2018 Quality Payment Program proposed rule (82 FR
30159), we did not propose any changes to the definition or the
capabilities of a QCDR. However, we did propose changes to the self-
nomination process and the QCDR measure nomenclature. Additionally, we
proposed a policy in which a QCDR may support an existing QCDR measure
that is owned by another QCDR. The details of these proposals are
discussed in more detail below.
(1) Establishment of an Entity Seeking To Qualify as a QCDR
In the CY 2017 Quality Payment Program final rule (81 FR 77365), we
finalized the criteria to establish an entity seeking to qualify as a
QCDR. In the CY 2018 Quality Payment Program proposed rule (82 FR
30159), we did not propose any changes to the criteria and refer
readers to the CY 2017 Quality Payment Program final rule for the
criteria to qualify as a QCDR.
(2) Self-Nomination Process
(a) Generally
In the CY 2017 Quality Payment Program final rule (81 FR 77365
through 77366), we finalized procedures and requirements for QCDRs to
self-nominate. Additional details regarding self-nomination
requirements for both the self-nomination form and the QCDR measure
specification criteria and requirements can be found in the QCDR fact
sheet and the self-nomination user guide, that are posted in the
resource library of the Quality Payment Program Web site at https://qpp.cms.gov/about/resource-library.
In the CY 2017 Quality Payment Program final rule (81 FR 77365
through 77366), we finalized the self-nomination period for the 2017
performance period to begin on November 15, 2016, and end on January
15, 2017. We also finalized for future years of the program, beginning
with the 2018 performance period, for the self-nomination period to
begin on September 1 of the year prior to the applicable performance
period until November 1 of the same year. As an example, the self-
nomination period for the 2018 performance period will begin on
September 1, 2017, and will end on November 1, 2017. We believe an
annual self-nomination process is the best process to ensure accurate
information is conveyed to MIPS individual eligible clinicians and
groups and accurate data is submitted for MIPS. Having qualified as a
QCDR in a prior year does not automatically qualify the entity to
participate in MIPS as a QCDR in subsequent performance periods (82 FR
30159). As discussed in the CY 2017 Quality Payment Program final rule
(81 FR 77365), a QCDR may choose not to continue participation in the
program in
[[Page 53808]]
future years, or may be precluded from participation in a future year
due to multiple data or submission errors. QCDRs may also want to
update or change the measures, services or performance categories they
intend to support (82 FR 30159). Thus, we require that QCDRs self-
nominate each year, and note that the prior performance of the QCDR
(when applicable) is taken into consideration in approval of their
self-nomination for subsequent years (82 FR 30159). In this final rule,
we are establishing a simplified self-nomination process for existing
QCDRs in good standing, beginning with the CY 2019 performance period.
(b) Simplified Self-Nomination Process for Existing QCDRs in MIPS, That
Are in Good Standing
We do understand that some QCDRs may not have any changes to the
measure and/or activity inventory they offer to their clients, and
intend to participate in MIPS for many years. In the CY 2018
Quality Payment Program proposed rule (82 FR 30159), we proposed,
beginning with the 2019 performance period, a simplified self-
nomination process to reduce the burden of self-nomination for those
existing QCDRs that have previously participated in MIPS and are in
good standing (not on probation or disqualified, as described below),
and to allow for sufficient time for us to review data submissions and
make determinations. Our proposals to simplify the process for QCDRs in
good standing with no changes, minimal changes, and those with
substantive changes are discussed below.
(i) Existing QCDRs in Good Standing With No Changes
In the CY 2018 Quality Payment Program proposed rule (82 FR 30159),
we proposed, beginning with the 2019 performance period, a simplified
process by which existing QCDRs in good standing may continue their
participation in MIPS by attesting that the QCDR's previously
approved: Data validation plan, services offered, cost associated with
using the QCDR, measures, activities, and performance categories
supported in the previous year's performance period of MIPS have no
changes and will be used for the upcoming performance period.
Specifically, existing QCDRs in good standing with no changes may
attest during the self-nomination period, between September 1 and
November 1, that they have no changes to their approved self-nomination
application from the previous year of MIPS. By attesting that all
aspects of their approved application from the previous year have not
changed, these existing QCDRs in good standing would be spending less
time completing the entire self-nomination form, as was previously
required on a yearly basis.
(ii) Existing QCDRs in Good Standing With Minimal Changes
Beginning with the 2019 performance period, existing QCDRs in good
standing that would like to make minimal changes to their previously
approved self-nomination application from the previous year, may submit
these changes, and attest to no other changes from their previously
approved QCDR application, for CMS review during the self-nomination
period, from September 1 to November 1. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30159), we proposed that minimal changes
include: Limited changes to their performance categories; adding or
removing MIPS quality measures; and adding or updating existing
services offered and/or the cost associated with using the QCDR.
(iii) Existing QCDRs in Good Standing With Substantive Changes
In the CY 2018 Quality Payment Program proposed rule (82 FR 30159)
we stated that existing QCDRs in good standing, may also submit for CMS
review and approval, substantive changes to measure specifications for
existing QCDR measures that were approved the previous year, or submit
new QCDR measures for CMS review and approval without having to
complete the entire self-nomination application process, which is
required to be completed by a new QCDR. By attesting that certain
aspects of their approved application from the previous year have not
changed, existing QCDRs in good standing would be spending less time
completing the entire self-nomination form, as was previously required
on a yearly basis. We proposed such a simplified process to reduce
the burden of self-nomination for those existing QCDRs who have
previously participated in MIPS, and are in good standing (not on
probation or disqualified, as described later in this section) and to
allow for sufficient time for us to review data submissions and to make
determinations on the standing of the QCDRs. We note that substantive
changes to existing QCDR measure specifications or any new QCDR
measures would have to be submitted for CMS review and approval by the
close of the self-nomination period. This proposed process will allow
existing QCDRs in good standing to avoid completing the entire
application annually, as is required in the existing process, and in
alignment with the existing timeline. We requested comments on this
proposal.
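For illustration only, the following Python sketch summarizes, as
simple program logic, the self-nomination pathways described above for
existing QCDRs in good standing beginning with the 2019 performance
period; the categories mirror the prose, and the function itself is
hypothetical.

def self_nomination_pathway(in_good_standing: bool, change_level: str) -> str:
    """Return the pathway an existing QCDR would use based on its changes."""
    if not in_good_standing:
        return "full self-nomination application required"
    if change_level == "none":
        return "attest to no changes during the September 1 to November 1 self-nomination period"
    if change_level == "minimal":
        return "submit minimal changes and attest that nothing else has changed"
    if change_level == "substantive":
        return "submit changed or new QCDR measures for CMS review by the close of self-nomination"
    raise ValueError("change_level must be 'none', 'minimal', or 'substantive'")

# Example usage.
print(self_nomination_pathway(True, "minimal"))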
(A) Multi-Year Approval of QCDRs
In the development of the above policy, we had also reviewed the
possibility of offering a multi-year approval, in which QCDRs would be
approved for a continuous 2-year increment of time. However, we are
concerned that utilizing a multi-year approval process would restrict
QCDRs by having them support the same fixed services they had for the
first year, and would not provide QCDRs with the flexibility to add or
remove services, measures, and/or activities based on their QCDR
capabilities for the upcoming program year. Furthermore, under a multi-
year approval process, QCDRs would not be able to make changes to their
organizational structure (as described above), and such a process would
also create complications in our process for placing QCDRs that perform
poorly (during the first year) on probation or disqualifying them (as
described below). Moreover, a multi-year approval process would not
take into consideration potential changes in criteria or requirements
of participation for QCDRs, that may occur as the MIPS program develops
through future program years. For the reasons stated above, we believe
an annual self-nomination period is appropriate. We understand that
stakeholders are interested in a multi-year approval process; for that
reason, we intend to revisit the topic once we have gained additional
experience with the self-nomination process under MIPS. We seek comment
from stakeholders as to how they believe our aforementioned concerns
with multi-year approvals of QCDRs can be resolved.
(B) Web-Based Submission of Self-Nomination Forms
In the CY 2018 Quality Payment Program proposed rule (82 FR 30159),
for the 2018 performance period and beyond, we proposed that self-nomination information must be submitted via a web-based tool and that the email method of submission be eliminated. We noted that we will provide further information about the web-based tool at www.qpp.cms.gov.
Below is a summary of the public comments received on the following proposals: (1) A simplified self-nomination process for existing QCDRs in good standing; and (2) the elimination of email as a self-nomination submission method.
[[Page 53809]]
Comment: Many commenters supported the proposal allowing a
simplified self-nomination process for QCDRs in good standing to
continue their participation in MIPS by attesting to certain
information, for reasons including: It would require less time to complete the self-nomination application; it would minimize confusion and miscommunication with CMS; it should enable CMS to dedicate more time to the review of QCDR measures; it would help reduce reporting burden for QCDRs; it would encourage the use and development of QCDRs; it would allow more time to be spent developing new measures; and decreasing application burden would enhance QCDRs' ability to assist clients with quality improvement and data submission and may create efficiency by encouraging registries to make many measure changes at one time rather than submitting small changes annually.
Response: We thank commenters for their support regarding a
simplified self-nomination process for existing QCDRs in good standing
beginning with the 2019 performance period.
Comment: One commenter recommended simplifying the QCDR self-
nomination process by de-coupling the QCDR self-nomination and measure
selection processes. One commenter recommended that CMS make an exception to this proposal to allow a QCDR to replace a measure if it deems necessary. One commenter recommended that the simplified self-nomination process could be used even if QCDR measures change, because they believed that QCDR measure changes should be considered independent of the self-nomination process. One commenter recommended that CMS consider self-populating
self-nomination forms based on the previous application. One commenter
recommended that CMS consider a system under which QCDRs only need to
reapply if they make substantial organizational or operational changes.
Response: We recognize that the existing process and our proposals
could use additional clarification. The existing process for QCDR self-
nomination and QCDR measure approval is a two-part process. The QCDR
self-nomination form requires contact information, services, costs
associated with using the QCDR, performance categories supported, MIPS quality measures, and a data validation plan to be considered for the next performance period (82 FR 30159). Currently, QCDRs may also submit
for consideration QCDR measures (previously referred to as non-MIPS
measures (81 FR 77375) and separate from MIPS quality measures) as a
part of their self-nomination, which are reviewed and approved or
rejected separately from the self-nomination form. MIPS quality
measures have gone through extensive review prior to rulemaking and are
approved or rejected for inclusion in the program through the rule-
making cycle as discussed in the CY 2018 Quality Payment Program
proposed rule (82 FR 30043-30045). QCDR measures, on the other hand,
are reviewed for consideration under a different timeline and must be
reviewed to the extent needed to determine whether they are appropriate for inclusion in the program, align with the goals of the MIPS program, and provide meaningful measurement to eligible clinicians and groups (82 FR 30160 through 30161). Details regarding self-nomination
requirements for both the self-nomination form and the QCDR measure
specification criteria and requirements can be found in the QCDR fact
sheet and the self-nomination user guide, which are posted in the resource library of the Quality Payment Program Web site at https://qpp.cms.gov/about/resource-library.
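To illustrate the two-part structure described above, the following is a minimal, hypothetical sketch in Python of how the information in a QCDR submission could be organized. The class and field names (for example, data_validation_plan) are illustrative assumptions only and do not correspond to any CMS-defined schema, form, or system.

    # Hypothetical sketch of the two-part QCDR self-nomination submission:
    # Part 1 is the self-nomination form; Part 2 is the set of QCDR measures,
    # which are reviewed and approved or rejected separately from the form.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SelfNominationForm:
        contact_information: str
        services: List[str]
        costs: str
        performance_categories: List[str]    # performance categories supported
        mips_quality_measures: List[str]     # MIPS quality measures supported
        data_validation_plan: str

    @dataclass
    class QcdrMeasure:
        title: str
        description: str

    @dataclass
    class QcdrSubmission:
        form: SelfNominationForm
        qcdr_measures: List[QcdrMeasure] = field(default_factory=list)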
Under our proposals, we are clarifying that beginning with the 2019 performance period, any previously approved QCDR
in good standing (meaning, those that are not on probation or
disqualified) that wishes to self nominate using the simplified process
can attest, in whole or in part, that their previously approved form is
still accurate and applicable. Specifically, under this process, QCDRs
with no changes can attest that their previously submitted QCDR self-
nomination form in its entirety remains the same. Similarly, previously
approved QCDRs in good standing that wish to self nominate using the
simplified process and have minimal changes can attest to aspects of
their previously submitted form that remain the same, but would
additionally be required to update the self-nomination form to reflect
any minimal changes for CMS review. As stated in our proposal above,
minimal changes include, but are not limited to: Limited changes to
performance categories, adding or removing MIPS quality measures, and
adding or updating existing services and/or cost information.
Furthermore, under our proposal, we are also clarifying that any
previously approved QCDR in good standing that wishes to self nominate
using the simplified process and has substantive changes may submit
those substantive changes while attesting that the remainder of their
application remains the same from the previous year. We are clarifying
here that substantive changes include, but are not limited to: Updates
to existing (approved) QCDR measure specifications, new QCDR measures
for consideration, changes in the QCDR's data validation plan, or
changes in the QCDR's organizational structure (for example, if a
regional health collaborative or specialty society wishes to partner
with a different data submission platform vendor that would support the
submission aspect of the QCDR). This process mirrors that for minimal
changes. For example, if a previously approved QCDR in good standing
would like to submit changes only to its QCDR measures, the QCDR can
attest that there are no changes to their self-nomination form, and
provide the updated QCDR measures specifications for CMS review and
approval. We are also clarifying that the information required to be
submitted for any changes would be the same as that required under the
normal self-nomination process. We refer readers to the CY 2017 Quality
Payment Program final rule (81 FR 77366 through 77367), as well as (81
FR 77374 through 77375) and Sec. 414.1400(f). For example, if a QCDR
would like to include one new QCDR measure, it would be required to
submit: Descriptions and narrative specifications for each measure, the
name or title of the QCDR measure, NQF number (if NQF endorsed),
descriptions of the denominator, numerator, denominator exceptions
(when applicable), denominator exclusions (when applicable), risk
adjustment variables (when applicable), and risk adjustment algorithms.
We expect that the measure will address a gap in care, and prefer
outcome or high priority measures. Documentation or ``check box''
measures are discouraged. Measures that already have a very high performance rate, or that address extremely rare gaps in care, are unlikely to be approved for inclusion (81 FR 77374 through 77375).
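Purely as an illustration of the elements listed above, the following hypothetical Python sketch collects the information required for one new QCDR measure. The field names are assumptions chosen for readability, not CMS-defined identifiers; the rule text above, not this sketch, defines what must be submitted.

    # Hypothetical record of the information required for one new QCDR measure.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class QcdrMeasureSpecification:
        title: str                            # name or title of the QCDR measure
        description: str                      # description and narrative specification
        numerator: str
        denominator: str
        nqf_number: Optional[str] = None      # only if NQF endorsed
        denominator_exceptions: Optional[str] = None            # when applicable
        denominator_exclusions: Optional[str] = None            # when applicable
        risk_adjustment_variables: Optional[List[str]] = None   # when applicable
        risk_adjustment_algorithm: Optional[str] = None         # when applicable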
We disagree with the recommendation that QCDRs only need to reapply
if they make substantial organizational or operational changes. This
would not take into consideration potential changes in criteria or
requirements of participation for QCDRs, as the MIPS program develops
through future program years. Furthermore, we believe self-nomination
should occur on an annual basis to account for QCDRs that may perform
poorly and thereby need to be placed on probation or precluded from
participation the following year. We thank the commenter for their
[[Page 53810]]
suggestion of automatically populating self-nomination forms based on the
previous year's application; we are looking into the technical
capabilities of the system, and will make any potential updates based
on the feasibility of the system for future self-nomination form
updates.
Comment: One commenter recommended that in light of Congressional
intent to encourage the use of QCDRs, CMS should not only simplify the
process for re-approval of QCDRs, but should also consider
substantially streamlining and simplifying the process for approval of
new QCDRs. The commenter suggested including a revision or elimination of any requirement that the Scientific Registry of Transplant Recipients (SRTR) must report quality performance on an individual level in order to be approved as a QCDR.
Response: As indicated in the CY 2017 Quality Payment Program final
rule (81 FR 77363), the Secretary encourages the use of QCDRs in
carrying out MIPS. We believe the current self-nomination application
for new QCDRs is comprehensive and collects information needed to make
a determination as to whether the entity has or has not met the
requirements and criteria of participation as a QCDR under MIPS. We
believe that this simplified self-nomination policy will help us
streamline the existing self-nomination process. As a requirement,
QCDRs must support reporting under the quality performance category at
the individual and/or group level (81 FR 77368).
Comment: A few commenters expressed concern with the current QCDR
self-nomination process for reasons including inconsistent feedback,
impractical timelines, and a lack of rationale for rejected measures. A
few commenters recommended that QCDR self-nomination application and
materials should be updated to outline all of the information needed to
determine QCDR status to avoid delays and misunderstandings. A few
commenters recommended that CMS develop a standardized process for
reviewing QCDR measures, including structured timeframes for an initial
review period, an appeals process, and a final review. One commenter
also recommended mechanisms to ensure transparency and predictability, such as assigning a coordinator for each QCDR and creating an official database containing decisions on measures to ensure there are no conflicting messages.
Response: We refer readers to our previous response in this final
rule with comment period, in which we clarify that the existing process
for QCDR self-nomination and QCDR measure approval is a two-part
process. To reiterate, the QCDR self-nomination form requires contact
information, services, costs associated with using the QCDR,
performance categories supported, MIPS quality measures, and a data validation plan to be considered for the next performance period (82 FR
30159). Currently, QCDRs may also submit for consideration QCDR
measures (previously referred to as non-MIPS measures (81 FR 77375) and
separate from MIPS quality measures) as a part of their self-
nomination, which are reviewed and approved or rejected separately from
the self-nomination form. MIPS quality measures have gone through
extensive review prior to rulemaking and are approved or rejected for
inclusion in the program through the rulemaking cycle as discussed in
the CY 2018 Quality Payment Program proposed rule (82 FR 30043-30045).
QCDR measures, on the other hand, are reviewed for consideration under
a different timeline and must be reviewed to the extent needed to
determine whether they are appropriate for inclusion in the program, align with the goals of the MIPS program, and provide meaningful measurement to eligible clinicians and groups (82 FR 30160 through
30161). Details regarding self-nomination requirements for both the
self-nomination form and the QCDR measure specification criteria and
requirements can be found in the QCDR fact sheet and the self-
nomination user guide, which are posted in the resource library of the
Quality Payment Program Web site at https://qpp.cms.gov/about/resource-library.
We understand the commenters' concerns, but would like to note that we
have been working to implement process improvements and develop
additional standardization for the 2018 performance period self-
nomination and QCDR measure review, in which consistent feedback is
communicated to vendors, additional time is given to vendors to respond
to requests for information, and more detailed rationales are provided
for rejected QCDR measures. Furthermore, through our review, we intend
to communicate the timeframe in which a decision re-examination can be
requested should we reject QCDR measures. In order to improve
predictability and avoid delays or misunderstandings, we have made
updates to the self-nomination form to outline all of the information
needed during the review process. We refer readers to: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html where additional self-nomination guidance can be found.
Furthermore, we intend to assign specific personnel to communicate
self-nomination and QCDR decisions as appropriate and will continue to
use our internal decision tracker to track all decisions made on QCDRs
and their QCDR measures, as we did during the review of 2017 self-
nominations and QCDR measures. We appreciate that commenters provided
recommendations to standardize a process and timeframe for self-
nomination review and will take them into consideration for future
policies. We are currently working through such efforts to standardize
the process and timelines to the best of our ability.
Comment: One commenter appreciated the adjusted timeline that ends
on November 1, but expressed concern about the feasibility of this
timeline because the commenter believed QCDRs will have less than a
full year's worth of data to evaluate when making decisions about
whether to retire or modify existing measures for the upcoming year.
The commenter requested that CMS adopt a multi-year measure approval
process, such as 5 years, to allow QCDRs to adjust or retire a QCDR measure from year to year, as long as they request such changes by CMS's self-nomination deadline, so that QCDRs would not be expected to invest time and resources in defending their measures from year to year and could instead shift their focus to more meaningful analytics to help improve patient care. A few commenters that supported the simplified
process also expressed concerns about the timelines for the self-
nomination process and ability to update or change information given
that they will not have a full year of data by the deadline.
Response: We appreciate the commenters' support regarding the change in the self-nomination period timeline. We understand that there is concern around the timeline, as QCDRs believe they will have less than a full year's worth of data to evaluate prior to making decisions about whether to retire or modify an existing QCDR measure before the next self-nomination period. We acknowledge that the timeframe may, in some instances, limit the QCDR's ability to make a determination about changing or retiring their QCDR measure; however, we heard overwhelmingly from stakeholders in the CY 2017 Quality Payment Program final rule that having the ability to make their quality measure selections
[[Page 53811]]
prior to the beginning of the performance period is critical. CMS will
review the measure, data submissions and performance data (as
available) to ensure that the measure is appropriate for inclusion. We
will also take into consideration whether or not the measure is topped
out, reflects current clinical guidelines and is not considered
standard of care prior to making a final determination on the QCDR
measure. We refer readers to Sec. 414.1400(f) for additional
information. Furthermore, as stated in our proposal, we are concerned
that utilizing a multi-year approval process would restrict QCDRs by
having them support the same fixed services they had for the first
year, and would not provide QCDRs with the flexibility to add or remove
services, measures, and/or activities based on their QCDR capabilities
for the upcoming program year. Moreover, a multi-year approval process
would not take into consideration potential changes in criteria or
requirements of participation for QCDRs. Furthermore, our concerns with multi-year approval of QCDR measures stem from the possibility of changes in clinical guidance, which may include the addition, removal, or update of a class of medications, procedures, diagnosis codes (for example, ICD-10 code updates), or treatment methodologies. Thus, at this time, we believe an annual self-nomination and QCDR measure approval process is most appropriate. We will, however, take such recommendations into consideration for future years as we gain further experience with QCDRs and QCDR measures under MIPS.
Comment: One commenter supported the registry self-nomination
period deadline of November 1, 2017, because they believed that, with this deadline, CMS will be able to better accommodate registries, other
third-party intermediaries, and eligible clinicians and groups as they
implement and prepare for MIPS reporting each year. The commenter
recommended that CMS approve 2018 QCDR measure specifications by
December 1, 2017, to allow for adequate time to prepare for the 2018
performance year.
Response: We interpret the comment to refer to QCDRs and thank the
commenter for its support. We intend to have QCDR measure specification
approvals and/or rejections completed by early December.
Comment: A few commenters supported CMS's proposal that beginning
with the 2018 performance period, self-nomination information must be
submitted via a web-based tool, rather than email, because they noted it would simplify the process, reduce the time to complete self-nomination applications, and minimize confusion and miscommunication with CMS.
Response: We thank the commenters for their support and believe the
web-based tool will help reduce burden and confusion.
Comment: A few commenters recommended revising the application
process to eliminate the duplication when applying as both a QCDR and
qualified registry. One commenter also recommended eliminating
duplicate applications for Qualified Registries and QCDRs because they
believed a majority of the qualified registry questions were also on
the QCDR application, and therefore, the application could be
duplicated within the web portal.
Response: We believe having distinct applications for QCDRs and
Qualified Registries is important because each specifies the particular measures, performance categories, and services offered by the entity.
Though we understand that application questions are the same under the
QCDR and Qualified Registries self-nomination applications, we note
that QCDRs have additional capabilities as compared to Qualified
Registries. QCDRs have the capability to develop and submit for
consideration up to 30 QCDR measures for CMS review and approval.
Furthermore, as defined in the CY 2017 Quality Payment Program final
rule (81 FR 77366) for QCDR measures, if the measure is risk adjusted,
the QCDR is required to provide us with details on its risk adjustment methodology (risk adjustment variables and applicable calculation formula) at the time of self-nomination. However, we will
take these comments into consideration as we develop future policy.
Final Action: After consideration of the public comments received,
we are finalizing our proposals with clarification. Specifically, at Sec. 414.1400(b), we are finalizing our proposal, beginning with the
2019 performance period, that previously approved QCDRs in good
standing (that are not on probation or disqualified) that wish to self
nominate using the simplified process can attest, in whole or in part,
that their previously approved form is still accurate and applicable.
We are clarifying our proposals by elaborating on what would be
required for previously approved QCDRs in good standing that wish to
self-nominate and have minimal or substantive changes. For clarity, we are restating our finalized proposals with clarifications here:
Beginning with the 2019 performance period, previously approved
QCDRs in good standing (meaning, those that are not on probation or
disqualified) that wish to self-nominate using the simplified process
can attest, in whole or in part, that their previously approved form is
still accurate and applicable. Specifically, under this process, QCDRs
with no changes can attest that their previously submitted QCDR self-
nomination form in its entirety remains the same. Similarly, previously
approved QCDRs in good standing that wish to self nominate using the
simplified process and have minimal changes can attest to aspects of
their previously submitted form that remain the same, but would
additionally be required to outline any minimal changes for CMS review
and approval. Minimal changes include, but are not limited to: Limited
changes to performance categories, adding or removing MIPS quality
measures, and adding or updating existing services and/or cost
information. Furthermore, a previously approved QCDR in good standing that wishes to self-nominate using the simplified process and has
substantive changes may submit those substantive changes while
attesting that the remainder of their application remains the same from
the previous year. Substantive changes include, but are not limited to:
Updates to existing (approved) QCDR measure specifications, new QCDR
measures for consideration, changes in the QCDR's data validation plan,
or changes in the QCDR's organizational structure (for example, if a
regional health collaborative or specialty society wishes to partner
with a different data submission platform vendor that would support the
submission aspect of the QCDR). We are also clarifying that the
information required to be submitted for any changes would be the same
as that required under the normal self-nomination process as discussed
previously in this final rule with comment period.
Furthermore, we are finalizing our proposal, as proposed, that beginning with the 2018 performance period, self-nomination applications must be submitted via a web-based tool and that the email method of submission will be eliminated.
For the 2018 performance period and future performance periods, we
are finalizing the following proposal: That self-nomination submissions
will occur via a web-based tool rather than email; and for the 2019
performance period and future performance periods, we are finalizing
the availability of a simplified
[[Page 53812]]
self-nomination process for existing QCDRs in good standing.
(3) Information Required at the Time of Self-Nomination
In the CY 2017 Quality Payment Program final rule (81 FR 77366
through 77367), we finalized the information a QCDR must provide to us
at the time of self-nomination. In the CY 2018 Quality Payment Program
proposed rule (82 FR 30159 through 30160), we proposed to replace the
term ``non-MIPS measures'' with ``QCDR measures'' for future program
years, beginning with the 2018 performance period. We noted that
although we proposed a change in the term referring to such measures,
we did not propose any other changes to the information a QCDR must
provide to us at the time of self-nomination under the process
finalized in the CY 2017 Quality Payment Program final rule. We
referred readers to the CY 2017 Quality Payment Program final rule for
specific information requirements. However, we refer readers to section II.C.10.b.(5)(b) of this final rule with comment period, where we are modifying our proposal such that, as a part of the self-nomination review process for 2018 and future years, we will assign QCDR measure IDs to
approved QCDR measures, and the same measure ID must be used by any
other QCDRs that have received permission to also report the measure.
We also note that information required under the newly finalized
simplified process is discussed in the previous section of this final
rule with comment period. Additionally, as finalized in section
II.C.10.b.(2)(b)(iii) of this final rule with comment period, we will
only accept self-nomination applications through the web-based tool and
will provide additional guidance as to what information needs to be
submitted for QCDR measure specifications through the 2018 Self-
Nomination User Guide that will be posted on our Web site.
The following is a summary of the public comments received on the
``Information Required at the Time of Self-Nomination'' proposal and
our response:
Comment: A few commenters supported the proposal that the term
``QCDR measures'' replace the term ``non-MIPS measures'' for reasons
including a belief that the term ``non-MIPS'' has caused confusion and
created the impression that the measures are not eligible to be
reported under the MIPS program when in fact they are.
Response: We thank the commenters for their support.
Comment: One commenter did not support the proposal to replace the
term ``non-MIPS measure'' with ``QCDR measure'' noting that there is
likely to be greater understanding and familiarity with the current
terminology of ``non-MIPS measures.'' The commenter recommended that,
instead, these measures could be referred to as ``non-MIPS (QCDR
defined, specialty-specific) measures,'' ``non-MIPS (QCDR-specific)
measures,'' or ``non-MIPS (QCDR-defined) measures'' to promote clarity
for clinicians.
Response: Although we understand the commenter's perspective that there is greater familiarity with the current terminology of ``non-MIPS measures'', we believe that the term may lead clinicians and groups new to MIPS to believe that the measures are not in the MIPS program, and they may then choose not to report on measures developed by QCDRs. ``QCDR measures'' will clearly convey that the measure is owned by a QCDR and avoid any misinterpretation that the measures are not reportable under MIPS. The term ``QCDR measure'', previously referred to as ``non-MIPS measure'', is used to identify measures that are developed by QCDRs. The term is used to distinguish them from the quality measures that are in the MIPS program that have been reviewed and approved through the rulemaking cycle. The term is used in the
QCDR self-nomination form and in the QCDR qualified posting to identify
which QCDR developed measures have been approved for use in the
upcoming performance period.
Final Action: After consideration of the public comments, we are
finalizing, as proposed, our proposal to replace the term ``non-MIPS
measures'' with ``QCDR measures'' for future program years, beginning
with the 2018 performance period. We have also updated the regulation
text to reflect this change, and refer readers to Sec. 414.1400(e) for
the updated language.
For the 2018 performance period and future performance periods, we
are finalizing the following proposal: That the term ``QCDR measures''
will replace the term ``non-MIPS measures''.
(4) QCDR Criteria for Data Submission
In the CY 2017 Quality Payment Program final rule (81 FR 77367
through 77374), we finalized that a QCDR must perform specific
functions to meet the criteria for data submission. While we did not
propose any changes to the criteria for data submission in the CY 2018
Quality Payment Program proposed rule (82 FR 30160), we clarified the
criteria for QCDR data submission. For data submissions, QCDRs:
• Must have in place mechanisms for transparency of data elements and specifications, risk models, and measures. That is, we expect that the QCDR measures and their data elements (that is, specifications) comprising these measures be listed on the QCDR's Web site, unless the measure is a MIPS measure, in which case the specifications will be posted by us. QCDR measure specifications should be provided at a level of detail that is comparable to what is posted by us on the CMS Web site for MIPS quality measure specifications.
• Approved QCDRs may post the MIPS quality measure specifications on their Web site, if they so choose. If the MIPS quality measure specifications are posted by the QCDRs, they must be replicated exactly as the MIPS quality measure specifications posted on the CMS Web site.
• Must enter into and maintain with its participating MIPS eligible clinicians an appropriate Business Associate Agreement that complies with the HIPAA Privacy and Security Rules, and ensure that the Business Associate Agreement provides for the QCDR's receipt of patient-specific data from an individual MIPS eligible clinician or group, as well as the QCDR's disclosure of quality measure results and numerator and denominator data or patient-specific data on Medicare and non-Medicare beneficiaries on behalf of MIPS eligible clinicians and groups.
• Must provide timely feedback, at least 4 times a year, on all of the MIPS performance categories that the QCDR will report to us. We refer readers to section II.C.9.a. of the CY 2018 Quality Payment Program proposed rule for additional information on third party intermediaries and performance feedback.
• For purposes of distributing performance feedback to MIPS eligible clinicians, we encourage QCDRs to assist MIPS eligible clinicians in the update of their email addresses in CMS systems--including PECOS and the Identity and Access System--so that they have access to feedback as it becomes available on www.qpp.cms.gov, and to have documentation from the MIPS eligible clinician authorizing the release of his or her email address.
As noted in the CY 2017 Quality Payment Program final rule (81 FR
77370), we will on a case-by-case basis allow QCDRs and qualified
registries to request review and approval for additional MIPS measures
throughout the performance period. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30160), we clarified and explained that
this flexibility would
[[Page 53813]]
only apply for MIPS measures; QCDRs will not be able to request
additions of any new QCDR measures throughout the performance period.
Furthermore, QCDRs will not be able to retire any measures they are
approved for during the performance period (82 FR 30160). Should a QCDR
encounter an issue regarding the safety or change in evidence for a
measure during the performance period, it must inform CMS by email of
said issue and indicate whether it will or will not be reporting on the
measure; we will review measure issues on a case-by-case basis (82 FR
30160). Any measures QCDRs wish to retire would need to be retained
until the next annual self-nomination process and applicable
performance period (82 FR 30160).
(5) QCDR Measure Specifications Criteria
In the CY 2017 Quality Payment Program final rule (81 FR 77374
through 77375), we specified at Sec. 414.1400(f) the specific QCDR measure specifications criteria that the QCDR must provide. We generally
intend to apply a process similar to the one used for MIPS measures to
QCDR measures that have been identified as topped out. In the CY 2018
Quality Payment Program proposed rule (82 FR 30160), we did not propose
any changes to the QCDR measure specifications criteria and refer
readers to the CY 2017 Quality Payment Program final rule (81 FR 77374
through 77375) for the specification requirements a QCDR must submit
for each measure, activity, or objective the QCDR intends to submit to
CMS. Though we did not make proposals around the QCDR measure
specifications themselves, in the CY 2018 Quality Payment Program
proposed rule, (82 FR 30160) we did make a number of clarifications
around alignment with the measures development plan, previously retired
measures, and the public posting of the QCDR measure specifications.
Additionally, we proposed to allow QCDR vendors to seek permission from
another QCDR to use an existing approved QCDR measure. Lastly, we
sought comment from stakeholders around requiring QCDRs to fully
develop and test their QCDR measures by the time of self-nomination.
These are discussed in more detail below.
(a) Clarifications to Previously Established Policies
In the CY 2017 Quality Payment Program final rule (81 FR 77375), we
finalized that we will consider all QCDR (non-MIPS) measures submitted by the QCDR, but that the measures must address a gap in care, and that outcome or other high priority measures are preferred. In the CY 2018
Quality Payment Program proposed rule (82 FR 30160), we clarified that
we encourage alignment with our Measures Development Plan.\13\
---------------------------------------------------------------------------
\13\ The CMS Quality Measures Development Plan: Supporting the
Transition to The Quality Payment Program 2017 Annual Report,
available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/2017-CMS-MDP-Annual-Report.pdf.
---------------------------------------------------------------------------
In the CY 2017 Quality Payment Program final rule (81 FR 77375), we
finalized that measures that have very high performance rates already
or address extremely rare gaps in care (thereby allowing for little or
no quality distinction between MIPS eligible clinicians) are also
unlikely to be approved for inclusion. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30160), we also clarified that we will
likely not approve retired measures that were previously in one of
CMS's quality programs, such as the Physician Quality Reporting System
(PQRS) program, if proposed as QCDR measures. This includes measures that were retired due to being topped out (as defined in section II.C.6.c.(2) of the CY 2018 Quality Payment Program proposed rule) because of high performance, or measures retired due to a change in the evidence supporting the use of the measure.
Lastly, in the CY 2017 Quality Payment Program final rule (81 FR
77375), we finalized that the QCDR must publicly post the measure
specifications (no later than 15 days following our approval of these
measure specifications) for each QCDR measure it intends to submit for
MIPS. In the CY 2018 Quality Payment Program proposed rule (82 FR
30160), we clarified that 15 days refers to 15 calendar days, not
business days. The QCDR must publicly post the measure specifications
no later than 15 calendar days following our approval of these measure specifications for each QCDR measure it intends to submit for MIPS. It
is important for QCDRs to post their QCDR measure specifications on
their Web site in a timely manner, so that the specifications are
readily available for MIPS eligible clinicians and groups to access and
review in determining which QCDR measures they intend to report on for
the performance period.
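Because the posting window is counted in calendar days rather than business days, the deadline follows directly from the approval date, as in this small Python sketch; the approval date shown is an arbitrary example, not an actual approval date.

    # Posting deadline = approval date + 15 calendar days (not business days).
    from datetime import date, timedelta

    approval_date = date(2018, 1, 10)                 # illustrative approval date
    posting_deadline = approval_date + timedelta(days=15)
    print(posting_deadline)                           # 2018-01-25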
(b) QCDRs Seeking Permission From Another QCDR To Use an Existing,
Approved QCDR Measure
In the CY 2018 Quality Payment Program proposed rule (82 FR 30160),
beginning with the 2018 performance period and for future program
years, we proposed that QCDR vendors may seek permission from another
QCDR to use an existing measure that is owned by the other QCDR. If a
QCDR would like to report on an existing QCDR measure that is owned by another QCDR, they must have permission from the QCDR that owns the measure to use the measure for the performance period.
Permission must be granted at the time of self-nomination, so that the
QCDR that is using the measure can include the proof of permission for
CMS review and approval for the measure to be used in the performance
period. The QCDR measure owner (QCDR vendor) would still own and
maintain the QCDR measure, but would allow other approved QCDRs to
utilize their QCDR measure with proper notification. We noted that the
proposal would help to harmonize clinically similar measures and limit
the use of measures that only slightly differ from one another. We invited
comments on this proposal.
(c) Full Development and Testing of QCDR Measures by Self-Nomination
In the CY 2018 Quality Payment Program proposed rule (82 FR 30160),
we sought comment for future rulemaking, on requiring QCDRs that
develop and report on QCDR measures, to fully develop and test (that
is, conduct reliability and validity testing) their QCDR measures, by
the time of submission of the new measure during the self-nomination
process. We received a number of comments on this item and appreciate
the input received. As this was a request for comment only, we will
take the feedback provided into consideration for possible inclusion in
future rulemaking.
The following is a summary of the public comments received on the
``QCDR Measure Specifications Criteria'' proposals and our responses:
Comment: Several commenters supported the proposal to allow QCDRs
to seek permission to use another QCDR's measures for reasons
including that developing and testing measures is a costly process; the
measure steward has the resources and clinical guidance to ensure
appropriate use for consistency that will assist with reporting; it is
intended to harmonize measures; it could allow similar types of
clinicians to report the same measure regardless of their TIN
structure; allowing the same measures to be collected by the QCDR
registries for different specialties at the same time would give CMS
and the physician community a more complete picture regarding the
quality of care being
[[Page 53814]]
provided to Medicare beneficiaries; and it will reduce the
proliferation of similar measures that may be duplicative. One
commenter also sought clarity as to the mechanism the Agency will use
to identify ``shared'' measures and recommended that CMS do the following to increase clarity, harmonization, and transparency: (1) Require that if the specifications are not identical to
the original QCDR's measure(s), the borrowing QCDR must identify,
provide a rationale, and make public any changes made to the measure
specifications; (2) require the original measure steward/owner be
identified in the borrowing QCDR's list of measures; and (3) establish
some system of identification (that is, tags or numbers similar to MIPS
measures) so it is clear when one measure is used in multiple QCDRs.
Response: We thank the commenters for their recommendations.
Similar to how we identify the stewards of MIPS quality measures, we
agree that it is important to identify when QCDR measures are owned by
another QCDR. In response, we are modifying our proposal, such that as
a part of the self-nomination review process for the 2018 performance
period and future years, we will assign QCDR measure IDs once the QCDR
measure has been approved, and the same measure ID must be used by any
other QCDRs that have received permission to also report the measure.
If a QCDR measure has been assigned a measure ID from a previous
performance period, the secondary QCDR must use the previously assigned
measure ID and identify the QCDR that the measure belongs to as a part
of their self-nomination application. As stated in our proposal above,
permission must be granted at the time of self-nomination, so that the
borrowing QCDR using the measure can include proof of permission in
their application. Additionally, as finalized in section
II.C.10.b.(2)(b)(iii) of this final rule with comment period, we will
only accept self-nomination applications through the Web-based tool and
will provide additional guidance as to what information needs to be
submitted for QCDR measure specifications through the 2018 Self-
Nomination User Guide that will be posted on our Web site. To be clear,
if a QCDR is requesting permission to use another QCDR's measure, the
borrowing QCDR must use the exact measure specification as provided by
the QCDR measure owner. We expect that if a QCDR measure owner
implements a change to their QCDR measure, and the change is approved
by us during the QCDR measure review process (as outlined previously in
this final rule with comment period), secondary QCDRs borrowing the
QCDR measure must use the updated specifications.
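The following short, hypothetical Python sketch illustrates the conditions described above for a borrowed QCDR measure: reuse of the previously assigned measure ID, use of the owner's exact specification, and an attestation of permission. The function name and dictionary keys are assumptions for illustration and do not reflect any actual CMS system or interface.

    # Hypothetical check for a borrowed QCDR measure: the borrowing QCDR must
    # reuse the owner's assigned measure ID, use the identical specification,
    # and attest that written permission was received.
    def borrowed_measure_is_consistent(owner_measure: dict, borrowed_measure: dict) -> bool:
        same_id = owner_measure["measure_id"] == borrowed_measure["measure_id"]
        identical_spec = owner_measure["specification"] == borrowed_measure["specification"]
        has_permission = borrowed_measure.get("permission_attested", False)
        return same_id and identical_spec and has_permission

    # Example usage with illustrative values.
    owner = {"measure_id": "QCDR001", "specification": "specification text, version 2"}
    borrower = {"measure_id": "QCDR001", "specification": "specification text, version 2",
                "permission_attested": True}
    assert borrowed_measure_is_consistent(owner, borrower)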
Comment: One commenter requested that CMS clarify what form proof
of permission must take to satisfy the requirements of the self-
nomination application process. Another commenter recommended that CMS
reconsider the proposal to require that a QCDR must, by the time of
self-nomination, have permission from the QCDR that owns the measure
to use the measure for the performance period and rather
provide the ability to add reportable measures throughout the year.
Response: As a clarification to the proposal, for the 2018 self-
nomination period and for future performance periods, the self-
nomination form that is available through the Web-based tool will include two additional fields: One that asks whether the QCDR measure is owned by another QCDR, and another that asks the secondary QCDR to attest that it has received written permission to use another QCDR's measure. We leave these agreements and their details to
QCDRs to determine. We may request that the secondary QCDR provide
proof that permission was received in instances where we seek further
verification. As stated in our proposal, permission must be established
by the QCDR at the time of self-nomination.
Final Action: After consideration of the public comments received,
we are finalizing, with modification, that beginning with the 2018
performance period and for future program years, QCDRs can report on an
existing QCDR measure that is owned by another QCDR. In response to
comments, we are modifying our proposal to also include that we will
assign QCDR measure IDs after the QCDR measure has been approved, and
the same measure ID must be used by other QCDRs that have received
permission to also report the measure. Furthermore, the self-nomination
form that is available via the Web-based tool will be modified to
include a field that will request QCDR measure IDs if the measure has
been previously approved and assigned a MIPS QCDR measure ID.
We are also clarifying and updating at Sec. 414.1400(f)(3) that the QCDR must publicly post the measure specifications no later than 15 calendar days, not business days, following our approval of these measure specifications for each approved QCDR measure.
For the 2018 performance period and future performance periods, we are finalizing the following proposal: That QCDRs may report on QCDR measures owned by another QCDR with the appropriate permissions; and we clarify that QCDRs must publicly post QCDR measure specifications no later than 15 calendar days following our approval of the measure specifications.
(6) Identifying QCDR Quality Measures
In the CY 2017 Quality Payment Program final rule (81 FR 77375
through 77377), we finalized the definition and types of QCDR quality
measures for purposes of QCDRs submitting data for the MIPS quality
performance category. In the CY 2018 Quality Payment Program proposed
rule (82 FR 30160), we did not propose any changes to the criteria on
how to identify QCDR quality measures. However, in the proposed rule,
we clarified that QCDRs are not limited to reporting on QCDR measures,
and may also report on MIPS measures as indicated in section
II.C.10.b.(4) of this final rule with comment period, the QCDR data
submission criteria section.
As the MIPS program progresses in its implementation, we are
interested in elevating the standards by which QCDR measures are
selected and approved for use. We are interested in further aligning
our QCDR measure criteria with that used under the Call for Quality
Measures process, as is described in the CY 2017 Quality Payment
Program final rule (81 FR 77151). We seek comment in this final rule
with comment period, on whether the standards used for selecting and
approving QCDR measures should align more closely with the standards
used for the Call for Quality Measures process for consideration in
future rulemaking.
(7) Collaboration of Entities To Become a QCDR
In the CY 2017 Quality Payment Program final rule (81 FR 77377), we
finalized policy on the collaboration of entities to become a QCDR. In
the CY 2018 Quality Payment Program proposed rule (82 FR 30161), we did
not propose any changes to this policy.
c. Health IT Vendors That Obtain Data From MIPS Eligible Clinicians'
Certified EHR Technology (CEHRT)
In the CY 2017 Quality Payment Program final rule (81 FR 77382), we
finalized definitions and criteria around health IT vendors that obtain
data from MIPS eligible clinicians' CEHRT. We note that a health IT vendor that serves as a third party intermediary to collect or submit data on behalf of MIPS eligible clinicians may or may not also be a
[[Page 53815]]
``health IT developer.'' We refer readers to the CY 2018 Quality
Payment Program proposed rule (82 FR 30161) for additional information
regarding health IT vendors. Throughout this rule, we use the term
``health IT vendor'' to refer to entities that support the health IT
requirements of a clinician participating in the Quality Payment
Program.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30161),
we did not propose any changes to this policy.
However, we sought comment for future rulemaking regarding the
potential shift to seeking alternatives which might fully replace the
QRDA III format in the Quality Payment Program in future program years.
We received a number of comments on this item and appreciate the input
received. As this was a request for comment only, we will take the
feedback provided into consideration for possible inclusion in future
rulemaking.
d. Qualified Registries
In the CY 2017 Quality Payment Program final rule (81 FR 77382
through 77386), we finalized the definition and capability of qualified
registries. As previously established, if an entity becomes qualified
as a qualified registry, they will need to sign a statement confirming
that this information is correct prior to listing it on our Web site
(81 FR 77383). Once we post the qualified registry on our Web site,
including the services offered by the qualified registry, we will
require the qualified registry to support these services and measures
for its clients as a condition of the entity's participation as a
qualified registry in MIPS (81 FR 77383). Failure to do so will
preclude the qualified registry from participation in MIPS in the
subsequent year (81 FR 77383). In the CY 2018 Quality Payment Program
proposed rule (82 FR 30161), we did not propose any changes to the
definition or the capabilities of qualified registries. However, we did
propose changes to the self-nomination process for the 2019 performance
period. This is discussed in detail below.
(1) Establishment of an Entity Seeking To Qualify as a Registry
In the CY 2017 Quality Payment Program final rule (81 FR 77383), we
finalized the requirements for the establishment of an entity seeking
to qualify as a registry. In the CY 2018 Quality Payment Program
proposed rule (82 FR 30161), we did not propose any changes to the criteria regarding the establishment of an entity seeking to qualify as a registry.
(2) Self-Nomination Process
(a) Generally
In the CY 2017 Quality Payment Program final rule (81 FR 77383
through 77384), we finalized procedures and requirements for qualified
registries to self-nominate. Additional details regarding self-
nomination requirements for the self-nomination form can be found in
the qualified registry fact sheet and the self-nomination user guide,
which are posted in the resource library of the Quality Payment Program
Web site at https://qpp.cms.gov/about/resource-library.
For the 2018 performance period, and for future years of the
program, we finalized in the CY 2017 Quality Payment Program final rule
(81 FR 77383) and Sec. 414.1400(g) a self-nomination period for
qualified registries from September 1 of the year prior to the
applicable performance period, until November 1 of the same year. For
example, for the 2018 performance period, the self-nomination period
would begin on September 1, 2017, and end on November 1, 2017.
Entities that desire to qualify as a qualified registry for purposes of
MIPS for a given performance period will need to provide all requested
information to us at the time of self-nomination and would need to
self-nominate for that performance period (81 FR 77383). Having
previously qualified as a qualified registry does not automatically
qualify the entity to participate in subsequent MIPS performance
periods (81 FR 77383). Furthermore, prior performance of the qualified
registry (when applicable) will be taken into consideration in approval
of their self-nomination. For example, a qualified registry may choose
not to continue participation in the program in future years, or the
qualified registry may be precluded from participation in a future
year, due to multiple data or submission errors as noted in section
II.C.10.f. of this final rule with comment period. As such, we believe
an annual self-nomination process is the best process to ensure
accurate information is conveyed to MIPS eligible clinicians and
accurate data is submitted to MIPS. In this final rule with comment
period, we are establishing a simplified process for existing qualified
registries in good standing.
(b) Simplified Self-Nomination Process for Existing Qualified
Registries in MIPS, That Are in Good Standing
We do understand that some qualified registries may not have any
changes to the measures and/or activity inventory they offer to their
clients and intend to participate in MIPS for many years. In the CY
2018 Quality Payment Program proposed rule (82 FR 30161), we proposed,
beginning with the 2019 performance period, a simplified process, to
reduce the burden of self-nomination for those existing qualified
registries that have previously participated in MIPS and are in good
standing (not on probation or disqualified, as described below), and to
allow for sufficient time for us to review data submissions and make
determinations. Our proposals to simplify the process for existing
qualified registries in good standing with no changes, minimal changes,
and those with substantive changes are discussed below.
(i) Existing Qualified Registries in Good Standing, With No Changes
In the CY 2018 Quality Payment Program proposed rule (82 FR 30161),
we proposed, beginning with the 2019 performance period, a simplified
process for which existing qualified registries in good standing may
continue their participation in MIPS, by attesting that the qualified
registry's previously approved: Data validation plan, cost to use the
qualified registry, measures, activities, services, and performance
categories used in the previous year's performance period of MIPS have
no changes and will be used for the upcoming performance period.
Specifically, existing qualified registries in good standing with no
changes may attest during the self-nomination period, between September
1 and November 1, that they have no changes to their approved self-
nomination application from the previous year of
MIPS. By attesting that all aspects of their approved application from
the previous year have not changed, these existing qualified registries
in good standing would be spending less time completing the entire
self-nomination form, as was previously required on a yearly basis.
(ii) Existing Qualified Registries in Good Standing With Minimal
Changes
Beginning with the 2019 performance period, existing qualified
registries in good standing that would like to make minimal changes to
their previously approved self-nomination application from the previous
year may submit these changes, and attest to no other changes from
their previously approved qualified registry application, for CMS
review during the self-nomination period, from September 1 to November
1. In the CY 2018 Quality Payment Program proposed rule (82 FR 30161),
we proposed that minimal changes
[[Page 53816]]
include: Limited changes to their supported performance categories,
adding or removing MIPS quality measures, adding or updating existing
services and/or the costs to use the registry.
(iii) Existing Qualified Registries in Good Standing With Substantive
Changes
In the CY 2018 Quality Payment Program proposed rule (82 FR 30161),
we inadvertently left out language in the preamble that explained our
proposed updates to Sec. 414.1400(g), which were included in the
proposed amendments to 42 CFR chapter IV at 82 FR 30255, and stated
that:
For the 2018 performance period and future years of the program,
the qualified registry must self-nominate from September 1 of the prior
year until November 1 of the prior year. Entities that desire to
qualify as a qualified registry for a given performance period must
self-nominate and provide all information requested by CMS at the time
of self-nomination. Having qualified as a qualified registry does not
automatically qualify the entity to participate in subsequent MIPS
performance periods. Beginning with the 2019 performance period,
existing qualified registries that are in good standing may attest that
certain aspects of their previous year's approved self-nomination have
not changed and will be used for the upcoming performance period. CMS
may allow existing qualified registries in good standing to submit
minimal or substantive changes to their previously approved self-
nomination form from the previous year, during the annual self-
nomination period, for CMS review and approval without having to
complete the entire qualified registry self-nomination application
process.
This language mirrors that proposed for QCDRs (82 FR 30255) and
finalized above in section II.C.10.b. of this final rule with comment
period. Our intention was to parallel the simplified self-nomination
process available to QCDRs in good standing beginning with the 2019
performance period, including for substantive changes, such that
Qualified Registries could do the same. The update to Sec.
414.1400(g), as included in the proposed rule, allows a qualified
registry to also submit substantive changes, in addition to minimal
changes, through the simplified self-nomination process. Therefore, we
are clarifying here in this final rule with comment period that
beginning with the 2019 performance period, CMS may allow existing
qualified registries in good standing to submit substantive changes, in
addition to minimal changes as discussed in the section above, to their
previously approved self-nomination form from the previous year, during
the annual self-nomination period. We are also clarifying that
substantive changes may include, but are not limited to: Updates to a
qualified registry's data validation plan, or a change in the qualified
registry's organizational structure that would impact any aspect of the
qualified registry. We are also clarifying that the information
required to be submitted for any changes would be the same as that
required under the normal self-nomination process as previously
finalized. We refer readers to the CY 2017 Quality Payment Program
final rule (81 FR 77383 through 77384), where we finalized the
information a qualified registry must provide to us at the time of self-nomination, as well as to 82 FR 30162 and Sec. 414.1400(g).
(c) Multi-Year Approval of Qualified Registries
In the development of the above proposal, we had also reviewed the
possibility of offering a multi-year approval, in which qualified
registries would be approved for a 2-year increment of time. However,
we are concerned that utilizing a multi-year approval process would
restrict qualified registries by having them support the same fixed
services they had for the first year, and would not provide qualified
registries with the flexibility to add or remove services, measures,
and/or activities based on their qualified registry's capabilities for
the upcoming year. Furthermore, under a multi-year approval process, qualified registries would not be able to make changes to their organizational structure (as noted above), and such a process would also create complications in our process for placing qualified registries that perform poorly (during the first year) on probation or disqualifying them (as described below). Moreover, a multi-year approval process would not take into consideration potential changes in criteria or requirements of participation for qualified registries that may occur as the MIPS program develops through future program years. For the reasons stated above, we believe an annual self-nomination period is appropriate. We understand that stakeholders are interested in a multi-year approval process; for that reason, we intend to revisit the topic once we have gained additional experience with the self-nomination
process under MIPS. We seek comment from stakeholders as to how they
believe our aforementioned concerns with multi-year approvals of
qualified registries can be resolved.
(d) Web-Based Submission of Self-Nomination Forms
In the CY 2018 Quality Payment Program proposed rule (82 FR 30162),
for the 2018 performance period and beyond, we proposed that self-
nomination information must be submitted via a Web-based tool, and to
eliminate the submission method of email. We noted that we would provide further information about the web-based tool at www.qpp.cms.gov.
We invited public comment on: (1) Our proposals regarding a
simplified self-nomination process beginning with the 2019 performance
period for previously approved qualified registries in good standing;
(2) multi-year approvals; and (3) our proposal to submit self-
nomination information via a web-based tool. The following is a summary
of the public comments received on the ``Self-Nomination Period''
proposals and our responses:
Comment: A few commenters supported the proposal to allow a
simplified self-nomination process for qualified registries in good
standing for reasons including a belief it would be more efficient.
Response: We thank the commenters for their support.
Comment: A few commenters supported the proposal that, beginning with the 2018 performance period, self-nomination information for a qualified registry must be submitted via a Web-based tool rather than email, because they believed it would simplify the process.
Response: We thank the commenters for their support.
Comment: One commenter requested clarification on the proposal for
simplification of the self-nomination process for qualified registries,
specifically to confirm: (1) For 2018, the only proposed change from
2017 is that the self-nomination submission process will be via a web-
based tool rather than email; and (2) it is not until 2019 that the
self-nomination submission process will be replaced with the
attestation for existing qualified registries.
Response: For the 2018 performance period, the only change proposed
is that self-nomination submission will occur via a web-based tool
rather than email. The simplified self-nomination process would be
available for qualified registries in good standing beginning with the
2019 performance period. In addition, in order to align with the QCDR process finalized above, and because we recognize that the existing process and our proposals could benefit from additional clarification, we are providing further detail here. The qualified registry self-
[[Page 53817]]
nomination form requires contact information, services, costs associated with using the qualified registry, performance categories supported, MIPS quality measures, and a data validation plan in order for the entity to be considered for the next performance period (81 FR 77383 through 77384).
Details regarding self-nomination requirements can be found in the
qualified registry fact sheet and the self-nomination user guide, which
are posted in the resource library of the Quality Payment Program Web
site at https://qpp.cms.gov/about/resource-library.
Under our proposals, we are clarifying that beginning with the 2019
performance period, any previously approved qualified registry in good
standing (meaning, those that are not on probation or disqualified)
that wishes to self nominate using the simplified process can attest,
in whole or in part, that their previously approved form is still
accurate and applicable. Specifically, under this process, qualified
registries with no changes can attest that their previously submitted
qualified registry self-nomination form in its entirety remains the
same. Similarly, previously approved qualified registries in good
standing that wish to self nominate using the simplified process and
have minimal changes can attest to aspects of their previously
submitted form that remain the same, but would additionally be required
to outline any minimal changes for our review and approval through the
self-nomination review process. Additional instructions regarding the
completion of this simplified self-nomination form will be available on
our Web site prior to the start of the self-nomination process for the
2019 performance period. As stated in our proposal above, minimal
changes include, but are not limited to: Limited changes to performance
categories, adding or removing MIPS quality measures, and adding or
updating existing services and/or cost information.
Furthermore, we are also clarifying that any previously approved
qualified registry in good standing that wishes to self nominate using
the simplified process can submit substantive changes while attesting
that the remainder of their application remains the same from the
previous year. Substantive changes include, but are not limited to:
Updates to the qualified registry's data validation plan, or a change
in the qualified registry's organization structure that would impact
any aspect of the qualified registry. For example, if a previously approved qualified registry in good standing would like to submit changes only to its MIPS quality measures, the qualified registry can attest that the remainder of its self-nomination form is unchanged and provide updated MIPS quality measures information for CMS review and approval. We are also clarifying that the information required to be
submitted for any changes would be the same as that required under the
normal self-nomination process. We refer readers to the CY 2017 Quality
Payment Program final rule (81 FR 77383 through 77384), where we
finalized the information a qualified registry must provide to us at
the time of self-nomination, as well as to (82 FR 30162) and Sec. 414.1400(g).
Comment: One commenter supported the simplified self-nomination
process available for QCDRs and qualified registries, specifically that existing QCDRs and qualified registries in good standing may also make
substantive or minimal changes to their approved self-nomination
application from the previous year of MIPS that would be submitted
during the self-nomination period for CMS review and approval.
Response: We thank the commenter for their support. As clarified
above, in the CY 2018 Quality Payment Program proposed rule, we
inadvertently left out language in the preamble that explained our
proposed updates to Sec. 414.1400(g), which were included in the
proposed rule at 82 FR 30255. The update to Sec. 414.1400(g) would
allow a qualified registry to also submit substantive changes, in
addition to minor changes, through the simplified self-nomination
process. We refer readers to our clarification for existing Qualified
Registries in good standing with substantive changes as discussed
above.
Comment: One commenter recommended that CMS allow qualified
registries to report existing QCDR measures, using the same approval
process that QCDRs would use.
Response: Currently, qualified registries are limited to reporting MIPS quality measures that exist in the program, as described
in the CY 2017 Quality Payment Program final rule (81 FR 77384). Should
an entity wish to report on existing, approved QCDR measures, it should consider self-nominating as a QCDR. However, we will take the
commenter's feedback into consideration as we develop future policies.
Final Action: After consideration of the public comments received,
we are finalizing our proposals with clarifications. Specifically, at Sec. 414.1400(g), we are finalizing our proposal, beginning with
the 2019 performance period, that previously approved qualified
registries in good standing (that are not on probation or disqualified)
that wish to self nominate using the simplified process can attest, in
whole or in part, that their previously approved form is still accurate
and applicable. We are clarifying our proposals by elaborating on what
would be required for previously approved qualified registries in good
standing that wish to self-nominate and have changes. For clarity, we are restating our finalized proposals with clarifications here:
Beginning with the 2019 performance period, any previously approved
qualified registry in good standing (meaning, those that are not on
probation or disqualified) that wishes to self nominate using the
simplified process can attest, in whole or in part, that their
previously approved form is still accurate and applicable.
Specifically, under this process, qualified registries with no changes
can attest that their previously submitted qualified registry self-
nomination form in its entirety remains the same. Similarly, previously
approved qualified registries in good standing that wish to self
nominate using the simplified process and have minimal changes can
attest to aspects of their previously submitted form that remain the
same, but would additionally be required to update and describe any
minimal changes in their self-nomination application for our review and
approval. Minimal changes include, but are not limited to: limited
changes to performance categories, adding or removing MIPS quality
measures, and adding or updating existing services and/or cost
information.
We are also clarifying that any previously approved qualified
registry in good standing that wishes to self nominate using the
simplified process and has substantive changes may submit those
substantive changes while attesting that the remainder of their
application remains the same from the previous year. Substantive
changes include, but are not limited to: Changes in the qualified registry's data validation plan, or changes in the qualified registry's organization structure that would impact any aspect of the qualified registry. We
are clarifying that the information required to be submitted for any
changes would be the same as that required under the normal self-
nomination process as previously finalized. Therefore, we are
finalizing at Sec. 414.1400(g) the following: for the 2018 performance
period and future years of the program, the qualified registry must
self-nominate from September 1 of the prior year until
[[Page 53818]]
November 1 of the prior year. Entities that desire to qualify as a
qualified registry for a given performance period must self-nominate
and provide all information requested by us at the time of self-
nomination. Having qualified as a qualified registry does not
automatically qualify the entity to participate in subsequent MIPS
performance periods. Beginning with the 2019 performance period,
existing qualified registries that are in good standing may attest that
certain aspects of their previous year's approved self-nomination have
not changed and will be used for the upcoming performance period. We
may allow existing qualified registries in good standing to submit
minimal or substantive changes to their previously approved self-
nomination form from the previous year, during the annual self-
nomination period, for our review and approval without having to
complete the entire qualified registry self-nomination application
process.
We are also finalizing, as proposed, that for the 2018 performance
period and beyond: (1) Self-nomination information must be submitted
via a web-based tool, and (2) we are eliminating the submission method
of email. We will provide further information on the web-based tool at
www.qpp.cms.gov.
(3) Information Required at the Time of Self-Nomination
We finalized in the CY 2017 Quality Payment Program final rule (81
FR 77384) that a qualified registry must provide specific information
to us at the time of self-nomination. In the CY 2018 Quality Payment
Program proposed rule (82 FR 30162), we did not propose any changes to
this policy.
(4) Qualified Registry Criteria for Data Submission
In the CY 2017 Quality Payment Program final rule (81 FR 77386), we
finalized the criteria for qualified registry data submission. In the
CY 2018 Quality Payment Program proposed rule (82 FR 30162), we did not
propose any changes to this policy. Although no changes were proposed, we made two clarifications to the existing criteria:
In the CY 2017 Quality Payment Program final rule (81 FR 77385), we specified that qualified registries must enter into and maintain with their participating MIPS eligible clinicians an appropriate Business Associate agreement.
The Business Associate agreement should provide for the qualified
registry's receipt of patient-specific data from an individual MIPS
eligible clinician or group; as well as the qualified registry's
disclosure of quality measure results and numerator and denominator
data or patient specific data on Medicare and non-Medicare
beneficiaries on behalf of individual MIPS eligible clinicians and
groups. As stated in the CY 2018 Quality Payment Program proposed rule
(82 FR 30162), we are clarifying that the Business Associate agreement
must comply with the HIPAA Privacy and Security Rules.
We had finalized in the CY 2017 Quality Payment Program
final rule (81 FR 77384) that timely feedback be provided at least four
times a year, on all of the MIPS performance categories that the
qualified registry will report to us. We are clarifying that readers
should refer to section II.C.9.a. of this rule for additional
information on third party intermediaries and performance feedback.
We refer readers to the CY 2017 Quality Payment Program final rule
(81 FR 77370) for additional information on allowing qualified registries the ability to request CMS approval to support additional MIPS quality measures.
e. CMS-Approved Survey Vendors
In the CY 2017 Quality Payment Program final rule (81 FR 77386), we
finalized the definition, criteria, required forms, and vendor business
requirements needed to participate in MIPS as a survey vendor. In the
CY 2018 QPP proposed rule (82 FR 30162), we did not propose changes to
those policies. However, in the CY 2016 PFS rule (80 FR 71143) we heard
from some groups that it would be useful to have a final list of CMS-
approved survey vendors to inform their decision on whether or not to
participate in the CAHPS for MIPS survey. Therefore, in the proposed
rule, we proposed to change the survey vendor application deadline in order to display a final list of CMS-approved survey vendors in a timely manner.
This is discussed in more detail below.
(1) Updated Survey Vendor Application Deadline
In the CY 2017 Quality Payment Program final rule (81 FR 77386), we
finalized a survey vendor application deadline of April 30th of the
performance period. We also finalized that survey vendors would be
required to undergo training, to meet our standards on how to
administer the survey, and submit a quality assurance plan (81 FR
77386). In the CY 2018 Quality Payment Program proposed rule (82 FR
30162-30163), we noted that the current CAHPS for MIPS survey timeframe
from the 2017 performance period conflicts with the timeframe in which
groups can elect to participate in the CAHPS for MIPS survey. We would like to clarify that the 2017 performance period survey vendor application deadline of April 30th conflicts with the timeframe in which groups can elect to participate in the CAHPS for MIPS survey, which runs from April 1st to June 30th. In order to
provide a final list of CMS-approved survey vendors earlier in the
timeframe during which groups can elect to participate in the CAHPS for
MIPS survey, an earlier vendor application deadline would be necessary.
This could be accomplished through a rolling application period ending by the end of the first quarter, during which vendors would be able to submit an application. Such a rolling application period would allow us to adjust the application deadline beyond January 31st on a year-to-year basis, based on program needs. However,
in addition to submitting a vendor application, vendors must also
complete vendor training and submit a Quality Assurance Plan and we
need to allow sufficient time for these requirements as well.
Therefore, in the CY 2018 Quality Payment Program proposed rule (82
FR 30162 through 30163), we proposed for the 2018 performance period of
the Quality Payment Program and future years that the vendor
application deadline would be January 31st of the applicable
performance year or a later date specified by CMS. The proposal would
allow us to adjust the application deadline beyond January 31st on a year-to-year basis, based on program needs. We would notify vendors of
the application deadline to become a CMS-approved survey vendor through
additional communications and postings on the Quality Payment Program
Web site, qpp.cms.gov. We requested comments on this proposal and other
alternatives that would allow us to provide a final list of CMS-
approved survey vendors early in the timeframe during which groups can
elect to participate in the CAHPS for MIPS survey.
We did not receive any comments related to this proposal.
Final Action: We are finalizing our policy as proposed; therefore, beginning with the 2018 performance period and for future years, the vendor application deadline will be January 31st of the applicable performance year or a later date specified by CMS, as discussed in this final rule with comment period. Accordingly, we are finalizing at Sec.
414.1400(i), the following: Vendors are required to undergo the CMS
approval
[[Page 53819]]
process for each year in which the survey vendor seeks to transmit
survey measures data to CMS. Applicants must adhere to any deadlines
specified by CMS.
f. Probation and Disqualification of a Third Party Intermediary
At Sec. 414.1400(k), we finalized the process for placing third
party intermediaries on probation and for disqualifying such entities
for failure to meet certain standards established by us (81 FR 77386).
Specifically, we finalized that if at any time we determine that a
party intermediary (that is, a QCDR, health IT vendor, qualified
registry, or CMS-approved survey vendor) has not met all of the
applicable criteria for qualification, we may place the third party
intermediary on probation for the current performance period or the
following performance period, as applicable (81 FR 77389). We refer
readers to the CY 2018 Quality Payment Program proposed rule (82 FR 30163) for additional information regarding the probation and
disqualification process.
In the CY 2017 Quality Payment Program final rule with comment (81
FR 77388), we stated that MIPS eligible clinicians are ultimately responsible for the data that are submitted by their third party intermediaries and that we expect MIPS eligible clinicians and groups to hold their third party intermediaries accountable for accurate reporting. We also stated that we would consider, from the perspective of MIPS eligible clinicians and groups, cases of vendors leaving the marketplace during the performance period on a case-by-case basis (81 FR 77388), but that we would not consider cases arising prior to the performance period. Furthermore, we stated that we would need proof
that the MIPS eligible clinician had an agreement in place with the
vendor at the time of their withdrawal from the marketplace.
While we did not propose any changes to the process of probation
and disqualification of a third party intermediary in the CY 2018
Quality Payment Program proposed rule (82 FR 30163), we received a
number of comments on this item and appreciate the input received.
g. Auditing of Third Party Intermediaries Submitting MIPS Data
In the CY 2017 Quality Payment Program final rule (81 FR 77389), we
finalized at Sec. 414.1400(j) that any third party intermediary (that
is, a QCDR, health IT vendor, qualified registry, or CMS-approved
survey vendor) must comply with the following procedures as a condition
of their qualification and approval to participate in MIPS as a third
party intermediary:
(1) The entity must make available to us the contact information of
each MIPS eligible clinician or group on behalf of whom it submits
data. The contact information will include, at a minimum, the MIPS
eligible clinician or group's practice phone number, address, and if
available, email;
(2) The entity must retain all data submitted to us for MIPS for a
minimum of 10 years; and
(3) For the purposes of auditing, we may request any records or
data retained for the purposes of MIPS for up to 6 years and 3 months.
In the CY 2018 Quality Payment Program proposed rule (82 FR 30163),
we proposed to update Sec. 414.1400(j)(2) from stating that the entity
must retain all data submitted to us for MIPS for a minimum of 10 years
to state that the entity must retain all data submitted to us for
purposes of MIPS for a minimum of 10 years from the end of the MIPS
performance period.
We invited public comment on our proposal, but did not receive any.
Final Action: We are finalizing our proposal with modification. We
are modifying the record retention provision at Sec. 414.1400(j)(2) to
align with the record retention provisions elsewhere in this final rule
with comment period. We refer readers to section II.C.9.c. of this
final rule with comment period where we discuss and respond to public
comments we received on our proposal to add a new paragraph (d) at
Sec. 414.1390, codifying our record retention policy for MIPS eligible
clinicians and groups. Based on the comments we received on the 10-year record retention period at Sec. 414.1390(d) regarding the time and financial burden of managing, storing, and retrieving data and information, and given our interest in reducing financial and time burdens under this program and in maintaining consistent policies across this program, we are modifying our proposed 10-year retention requirement at Sec. 414.1400(j)(2) to a 6-year retention requirement.
Similarly, we finalized in the CY 2017 Quality Payment Program
final rule (81 FR 77389-77390) at Sec. 414.1400(j)(3) that for the
purposes of auditing, we may request any records or data retained for
the purposes of MIPS for up to 6 years and 3 months. While we did not
propose any changes or updates to this policy in the CY 2018 QPP
proposed rule, based on our modifications to Sec. 414.1390(d) and
Sec. 414.1400(j)(2), as discussed previously in this final rule with
comment period, we are also updating Sec. 414.1400(j)(3) to reflect
these same changes and allow us to request any records or data retained
for the purposes of MIPS for up to 6 years from the end of the MIPS
performance period. We believe this change will promote consistent and
cohesive policies across this program.
11. Public Reporting on Physician Compare
This section contains the approach for public reporting on
Physician Compare for year 2 of the Quality Payment Program (2018 data
available for public reporting in late 2019) and future years,
including MIPS, APMs, and other information as required by the MACRA
and building on the MACRA public reporting policies previously
finalized (81 FR 77390 through 77399).
Physician Compare draws its operating authority from section
10331(a)(1) of the Affordable Care Act. As required by section
10331(a)(1) of the Affordable Care Act, by January 1, 2011, we
developed a Physician Compare Internet Web site with information on
physicians enrolled in the Medicare program under section 1866(j) of
the Act, as well as information on other EPs who participate in the
PQRS under section 1848 of the Act. More information about Physician
Compare can be accessed on the Physician Compare Initiative Web site at
https://www.cms.gov/medicare/quality-initiatives-patient-assessment-instruments/physician-compare-initiative/.
The first phase of Physician Compare was launched on December 30,
2010 (https://www.medicare.gov/physiciancompare). Since the initial
launch, Physician Compare has been continually improved, and more
information has been added. In December 2016, the site underwent a
complete user-informed, evidence-based redesign to further enhance
usability and functionality on both desktop computers and mobile
devices and to begin to prepare the site for the inclusion of more data
as required by the MACRA.
Consistent with section 10331(a)(2) of the Affordable Care Act, Physician Compare initiated a phased approach to publicly reporting performance scores that provide comparable information on quality and patient experience measures for reporting periods beginning January 1, 2012. The first set of quality measures was publicly reported on Physician Compare in February 2014. A complete history of public
reporting on Physician Compare is detailed in the CY 2016 PFS final
rule (80 FR 71117 through 71122). The Physician Compare Initiative Web
site at https://www.cms.gov/medicare/
[[Page 53820]]
quality-initiatives-patient-assessment-instruments/physician-compare-
initiative/ is regularly updated to provide information about what is
currently available on the Web site.
As finalized in the CY 2015 and CY 2016 PFS final rules (79 FR
67547 and 80 FR 70885, respectively), Physician Compare will continue
to expand public reporting. This expansion includes publicly reporting both individual eligible professional (now referred to as eligible clinician) and group-level QCDR measures starting with 2016 data available for public reporting in late 2017, as well as the inclusion, in late 2017, of a 5-star rating based on a benchmark using 2016 data (80 FR 71125 and 71129), among other additions.
This expansion will continue under the MACRA. Sections
1848(q)(9)(A) and (D) of the Act facilitate the continuation of our
phased approach to public reporting by requiring the Secretary to make
available on the Physician Compare Web site, in an easily
understandable format, individual MIPS eligible clinician and group
performance information, including:
The MIPS eligible clinician's final score;
The MIPS eligible clinician's performance under each MIPS
performance category (quality, cost, improvement activities, and
advancing care information);