Federal Register / Vol. 80, No. 49 / Friday, March 13, 2015 / Notices
No other changes have been made in
either the membership or planned
activity of the group research project.
Membership in this group research
project remains open, and Pistoia
Alliance, Inc. intends to file additional
written notifications disclosing all
changes in membership.
On May 28, 2009, Pistoia Alliance,
Inc. filed its original notification
pursuant to Section 6(a) of the Act. The
Department of Justice published a notice
in the Federal Register pursuant to
Section 6(b) of the Act on July 15, 2009
(74 FR 34364).
The last notification was filed with
the Department on November 20, 2014.
A notice was published in the Federal
Register pursuant to Section 6(b) of the
Act on December 31, 2014 (79 FR
78908).
Patricia A. Brink,
Director of Civil Enforcement, Antitrust
Division.
[FR Doc. 2015–05853 Filed 3–12–15; 8:45 am]
BILLING CODE 4410–11–P
DEPARTMENT OF JUSTICE
Antitrust Division
Notice Pursuant to the National
Cooperative Research and Production
Act of 1993; National Armaments
Consortium
Notice is hereby given that, on
February 13, 2015, pursuant to Section
6(a) of the National Cooperative
Research and Production Act of 1993,
15 U.S.C. 4301 et seq. (‘‘the Act’’),
National Armaments Consortium
(‘‘NAC’’) has filed written notifications
simultaneously with the Attorney
General and the Federal Trade
Commission disclosing changes in its
membership. The notifications were
filed for the purpose of extending the
Act’s provisions limiting the recovery of
antitrust plaintiffs to actual damages
under specified circumstances.
Specifically, AEgis Technologies Group,
Inc., Huntsville, AL; Aerojet Ordnance
Tennessee, Jonesborough, TN; AGM
Container Controls, Inc., Tucson, AZ;
Anyar, Inc., Fort Walton Beach, FL;
BANC3, Inc., Princeton, NJ; Chesapeake
Testing Services, Inc., Belcamp, MD;
DRS Sustainment Systems, Inc., Saint
Louis, MO; Ellwood National Forge
Company, Irvine, IN; Fastcom Supply
Corporation, Franklin, NJ; Group W,
Fairfax, VA; Hydrosoft International,
Livermore, CA; Kord Technologies, Inc.,
Huntsville, AL; Michigan Research
Institute, Ann Arbor, MI; Prime
Photonics, LC, Blacksburg, VA; Sabre
Global Services, Wharton, NJ; SCHOTT
North America, Southbridge, VA; Scot
Forge Company, Spring Grove, IL;
Teamvantage Molding LLC, Forest Lake,
MN; Technical Professional Services,
Inc., Wayland, MI; TELEGRID
Technologies, Inc., Livingston, NJ;
TimkenSteel Corporation, Canton, OH;
and Universal Propulsion Company,
Inc., Fairfield, CA, have been added as
parties to this venture.
The following members have
withdrawn as parties to this venture:
Bulova Technologies Group, Inc.,
Tampa, FL; Colt Defense, Hartford, CT;
Decatur Mold Tool & Engineering, Inc.,
North Vernon, IN; DRS ICAS, LLC,
Buffalo, NY; Ervine Industries, Inc., Ann
Arbor, MI; Fibertek, Inc., Herndon, VA;
Matrix Systems, Inc., Ashland, VA;
Metal Storm, Herndon, VA; Microcosm,
Inc., Hawthorne, CA; NI Industries, Inc.,
Riverbank, CA; Olin Corporation—
Winchester Division, East Alton, IL; Otis
Products, Inc., Lyons Falls, NY; Parsons
Government Services, Pasadena, CA;
Polaris Sensor Technologies, Inc.,
Huntsville, AL; Quantum Technology
Consultants, Inc., Franklin Park, NJ;
Solidica, Inc., Ann Arbor, MI; The
Timken Company, Canton, OH; and
UTRON, Manassas, VA.
No other changes have been made in
either the membership or planned
activity of the group research project.
Membership in this group research
project remains open, and NAC intends
to file additional written notifications
disclosing all changes in membership.
On May 2, 2000, NAC filed its original
notification pursuant to Section 6(a) of
the Act. The Department of Justice
published a notice in the Federal
Register pursuant to Section 6(b) of the
Act on June 30, 2000 (65 FR 40693).
The last notification was filed with
the Department on August 18, 2014. A
notice was published in the Federal
Register pursuant to Section 6(b) of the
Act on September 17, 2014 (79 FR
55830).
Patricia A. Brink,
Director of Civil Enforcement, Antitrust
Division.
[FR Doc. 2015–05835 Filed 3–12–15; 8:45 am]
BILLING CODE 4410–11–P
LIBRARY OF CONGRESS
Copyright Royalty Board
[Docket No. 2008–1 CRB CD 98–99 (Phase II)]
Distribution of 1998 and 1999 Cable Royalty Funds
AGENCY: Copyright Royalty Board, Library of Congress.
ACTION: Final distribution determination.
SUMMARY: The Copyright Royalty Judges announce the final Phase II distribution of cable royalty funds for the year 1999. The Judges issued their initial determination in December 2014 and received no motions for rehearing.
DATES: Effective date: March 13, 2015.
ADDRESSES: The final distribution order is also published on the agency’s Web site at www.loc.gov/crb and on the Federal eRulemaking Portal at www.regulations.gov.
FOR FURTHER INFORMATION CONTACT: Richard Strasser, Senior Attorney, or Kim Whittle, Attorney Advisor, (202) 707–7658 or crb@loc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
In this proceeding, the Copyright
Royalty Judges (Judges) determine the
final distribution of royalty funds
deposited by cable system operators
(CSOs) for the right to retransmit
television programming carried on
distant over-the-air broadcast signals
during calendar year 1999.1 Participants
have received prior partial distributions
of the 1999 cable royalty funds.2 The
remaining funds at issue are those
allocated to the Devotional Claimants
category.3 Two participants are
pursuing distribution from the
Devotional Claimants funds for 1999:
Worldwide Subsidy Group LLC dba
Independent Producers Group (IPG) and
the ‘‘Settling Devotional Claimants’’
(SDC).4 The Judges conducted three and
1 Although this proceeding consolidates royalty
years 1998 and 1999, all claims to 1998 royalties
have been resolved, and the funds have been
distributed. IPG’s appeal of the order approving
distribution of 1998 royalties was dismissed for lack
of jurisdiction. Ind. Producers Group v. Librarian of
Congress, 759 F.3d 100 (D.C. Cir. 2014).
2 The 1999 cable royalty deposits equaled
approximately $118.8 million at the outset. The
Judges authorized partial distributions that the
Copyright Licensing Office made on October 31,
2001, March 27, 2003, April 19, 2007, June 7, 2007,
and February 28, 2013. Authorized distributions
equaled in the aggregate approximately $126.9
million, including accrued interest, leaving a
balance available for distribution of $827,842.
3 See infra note 18, and accompanying text. The
Devotional Claimants category has been defined by
agreements of the Phase I participants as
‘‘Syndicated programs of a primarily religious
theme, not limited to those produced by or for
religious institutions.’’
4 The Settling Devotional Claimants are: The Christian Broadcasting Network, Inc., Coral Ridge Ministries Media, Inc., Crystal Cathedral Ministries, Inc., In Touch Ministries, Inc., and Oral Roberts Evangelistic Association, Inc. The SDC previously reached a confidential settlement with devotional program claimants represented by the National Association of Broadcasters, Liberty Broadcasting Network, Inc., and Family Worship Center Church, Inc. The programming giving rise to the latter groups’ claims is not included in the Judges’ analysis in this proceeding.
a half days of hearings. After
considering written evidence and oral
testimony, the Judges determine that the
SDC should receive 71.3% and IPG
should receive 28.7% of the 1999 fund
allocated to the Devotional Claimants
category.5
II. Background
A. Statement of Facts
In the present proceeding, IPG
represents the interests of four entities 6
owning copyrights in 10 distinct
programs. The SDC represent five
entities 7 owning copyrights in 20
distinct programs. CSOs remotely
retransmitted IPG-claimed titles 11,041
times and the SDC-claimed titles 6,684
times during 1999. See IPG PFF at 6;
SDC PFF at 1–2.
B. Statement of the Case
On January 30, 2008, the Judges
commenced a proceeding to determine
the Phase II distribution of 1998 and
1999 royalties deposited by CSOs for the
cable statutory license.8 Beginning in
July 2008, the Judges stayed the
proceeding pending the outcome of
California state court litigation initiated
by IPG regarding the validity and
interpretation of settlement agreements
5 From prior partial distributions, the SDC have
received over $693,000. The SDC alone is
responsible to make adjustments, if any, to comply
with the conclusions of this Determination and to
comply with confidential settlements it reached
with former participants.
6 IPG represents Benny Hinn Ministries, Creflo A.
Dollar Ministries, Eagle Mountain International
Church aka Kenneth Copeland Ministries, and Life
Outreach International.
7 See supra, n.4.
8 See 73 FR 5596 (Jan. 30, 2008). Before the
effective date of the Copyright Royalty and
Distribution Reform Act of 2004, Public Law 108–
419, 118 Stat. 2341 (Nov. 30, 2004), the Copyright
Office, with oversight by the Librarian of Congress,
managed distribution of cable retransmission
royalties. To resolve controversies regarding the
appropriate distribution of royalties, the Librarian
would order appointment of a Copyright Arbitration
Royalty Panel (CARP). In 2001, the Library of
Congress initiated Phase I proceedings to determine
distribution of, inter alia, royalties for distant
retransmission by cable in 1999 of broadcast
television programming. In November 2003, all
claimants to funds in the 1998 Devotional
Claimants category reached agreement regarding
distribution of those funds and the Register of
Copyrights ordered final distribution of 1998
Devotional Claimants royalties by order dated
November 19, 2003. By Order dated April 3, 2007,
the Register finalized the Phase I allocation of
uncontroverted funds for 1998 and 1999 cable
retransmissions among the claimant categories.
After enactment of the current statute, the Register
terminated the CARP proceeding relating to the
1998–99 funds. See 72 FR 45071 (Aug. 10, 2007).
The Judges have managed all subsequent
proceedings relating to the 1998 and 1999 cable
royalties fund.
by and between IPG, the Motion Picture
Association of America as
representative of certain program
suppliers (MPAA), and the Librarian of
Congress.9 In a September 2012 filing,
IPG acknowledged that the California
proceedings had been resolved in favor
of MPAA. See Opposition of
Independent Producers Group to Motion
for Final Distribution of 1998 and 1999
Cable Royalty Funds at 4, Docket No.
2008–1 CRB CD 98–99 (September 5,
2012). IPG continued to assert claims to
1999 royalties allocated to the
Devotional Claimants category. In July
2013, the Judges issued an order
establishing the schedule and order of
proceedings for the present matter.10
On May 5 and 6, 2014, the Judges
held a Preliminary Hearing to adjudicate
disputes regarding the validity of claims
asserted by each party. At the
conclusion of the Preliminary Hearing,
the Judges dismissed two claims
asserted by IPG. See Ruling and Order
Regarding Claims (June 18, 2014).
Beginning September 2, 2014, the
Judges presided over three and a half
days of hearings at which IPG presented
two witnesses and the SDC presented
three live witnesses and designated
testimony of seven witnesses from prior
proceedings.11 The Judges admitted 35
paper and electronic exhibits into
evidence. On September 23, 2014, the
parties filed their proposed findings of
fact and conclusions of law.
III. IPG’s Motion in Limine
A. Issues Presented
On August 26, 2014, IPG filed with
the Judges a motion in limine (Motion)
to exclude the SDC’s Nielsen Household
Devotional Viewing Report sponsored
by SDC witness, Alan Whitt.12 IPG
contends that the SDC failed to include
in its exhibit list foundational data for
the methodology used in the report and
9 Order Granting Motions to Stay, Docket No.
2008–1 CRB CD 98–99 (July 23, 2008). The Judges
granted eight continuations of the original stay
order, entering the last continuation order on July
27, 2012. See Eighth Order Continuing Stay of
Proceedings (July 27, 2012).
10 Order Setting Deadline for Filing Written Direct
Statements, Announcing Discovery Period, and
Requiring Settlement Conference (Jul 25, 2013).
11 Because of the delay of the present proceeding
occasioned by outside litigation, the Judges
concluded their determination of distributions of
cable retransmission royalties for the period 2000
to 2003, inclusive, before completing the instant
proceeding regarding 1999 funds. The participants
in the 2000–03 proceeding presented many of the
same issues relevant to the present proceeding;
thus, one of the ‘‘prior proceedings’’ from which
participants could designate testimony is a
proceeding involving funds deposited after the
relevant period at issue in the present proceeding.
12 The SDC, whose witness introduced the report,
sometimes refer to it as the Household Viewing
Hours Report or ‘‘HHVH Report.’’
failed to produce all foundational data
and electronic files underlying the
report. Motion at 1. IPG requests that the
Judges strike any evidence relying on or
referring to the report that the SDC
presented.
The SDC oppose IPG’s request,
arguing, among other things, that IPG
has failed to present any competent
evidence that the purportedly missing
data either were in the SDC’s custody,
possession, or control, or were not
publicly available. SDC Opposition at 1
(September 2, 2014). The SDC also
contend that IPG’s motion in limine
merely attempts to revisit issues the
Judges resolved in their May 2, 2014,
Order Denying IPG’s Motion to Strike
Portions of SDC Written Direct
Statement (‘‘May 2, 2014, Order’’). The
SDC contend that IPG has presented no
new evidence that would justify
revisiting that decision. SDC Opposition
at 3 & n.1.
Moreover, the SDC contend that IPG’s
arguments go to the weight rather than
to the admissibility of the proffered
report. SDC Opposition at 2, citing U.S.
v. H & R Block, Inc., 831 F.Supp.2d 27,
34 (D.D.C. 2011) (denying motion in
limine ‘‘because [defendant’s proffered]
survey [was] not so unreliable as to be
deemed inadmissible.’’) and Graves v.
D.C., 850 F.Supp.2d 6, 13 (D.D.C. 2011)
(‘‘[M]otions in limine are designed to
address discrete evidentiary issues
related to trial and are not a vehicle for
resolving factual disputes or testing the
sufficiency of the plaintiff’s evidence.’’).
Finally, the SDC argue that even if the
Judges were inclined to believe that the
unavailability of data underlying the
proffered report was relevant in an
admissibility determination, this fact
would not warrant a prehearing
exclusion of the evidence. According to
the SDC, the facts and data underlying
an expert’s opinion need not be
admissible for the opinion or reference
to be admitted if the facts or data are of
a type reasonably relied upon by experts
in the field. SDC Opposition at 4, citing
Rule 703 of the Federal Rules of
Evidence. On this point, the SDC refer
to the written direct testimony of the
SDC’s witness, John S. Sanders, who,
according to the SDC, determined that
‘‘Mr. Whitt’s report is sufficiently
reliable to render his opinion
concerning the relative market value of
the SDC and IPG programs.’’ SDC
Opposition at 5.
The Judges heard oral argument on
the Motion on September 2, 2014, and
deferred ruling until the end of the
proceeding. For the reasons discussed
below, the Judges deny the Motion and
admit the proffered report.
B. The Judges’ May 2, 2014, Discovery
Order
The dispute between IPG and the SDC
began with a discovery request from IPG
in which it requested from the SDC
‘‘evidentiary support for a report by the
SDC’s expert witness, Mr. Whitt, setting
forth viewership levels for Devotional
programming.’’ See May 2, 2014, Order
at 1. In the motion to compel discovery
that gave rise to the Judges’ May 2, 2014,
Order, IPG sought an order striking Mr.
Whitt’s report and the SDC’s reliance on
that report. According to IPG, the SDC
failed to meet its discovery obligations
by failing to provide electronic files or
computer codes that Mr. Whitt
purportedly used to (1) merge
viewership data sets compiled by
Tribune Media Services and the Nielsen
Company and (2) cull claimed
devotional titles from numerous
program titles in the merged data sets
(referred to in the May 2, 2014, Order
as ‘‘Merger Information’’). Id. at 3.
The Judges determined that the
discovery dispute could not be resolved
without an evidentiary hearing, and
scheduled one for April 8, 2014. During
the hearing, the Judges heard testimony
from the SDC’s witnesses, Mr. Whitt and
Dr. Erkan Erdem, as well as from Dr.
Laura Robinson, who testified for IPG.
Mr. Whitt testified that he did not
have access to the files and codes he
had used that contained the Merger
Information because he had done most
of the work in question when he was
employed with an independent
company that was a contractor for
MPAA. Mr. Whitt completed his MPAA
assignment several years prior to the
current proceeding. 04/08/14 Tr. at 105
(Whitt). Therefore, according to Mr.
Whitt, neither Mr. Whitt nor the SDC
could provide the requested information
to IPG. Id. at 121–22. Dr. Robinson
testified that, based on discovery that
the SDC provided, she was unable to
replicate the results that Mr. Whitt had
reached, although she admitted that she
could have merged the Tribune and
Nielsen data sets. Id. at 35–6, 66
(Robinson). Finally, Dr. Erdem testified
that, based on discovery the SDC had
provided to IPG, and certain other
publicly available information, Dr.
Erdem was able to closely approximate,
although not duplicate, Mr. Whitt’s
results. Id. at 162 (Erdem).
The Judges found that nothing in the
record allowed them to conclude that
SDC violated its duties under the
applicable procedural rule governing
discovery by not producing the Merger
Information. See May 2, 2014, Order at
9. The Judges further concluded that the
SDC’s discovery responses were
sufficient for IPG to ‘‘test’’ the process
Mr. Whitt used in compiling the report.
The Judges noted that the purpose of an
earlier discovery order addressing IPG’s
discovery request was to
allow IPG sufficient discovery to allow it to
confirm either that Mr. Whitt had performed
his work correctly . . . or that Mr. Whitt had
performed his work incorrectly or
inaccurately. In that latter case, IPG would be
able to: (a) file a Written Rebuttal Statement
contradicting Mr. Whitt’s work and/or (b)
cross-examine Mr. Whitt at the hearing on
the merits regarding claimed errors or
inaccuracies in his work.
Id. (emphasis in original).
The Judges concluded that they
‘‘would not—and did not—assert that
discovery regarding expert testimony
must result in a consensus between
adverse participants as to the
correctness of the result (or the amount)
calculated by the expert.’’ Id. at 11.
Specifically, the Judges concluded
with the discovery [the SDC provided to IPG,
Dr. Robinson] could test Mr. Whitt’s
computational process by producing her own
merger of the Tribune Data and the Nielsen
Data. However, Dr. Robinson also testified
that her merger and the concomitant results
might differ from (i.e., falsify) rather than
replicate Mr. Whitt’s results. Likewise, [Dr.
Erdem] produced a merger of the Tribune
Data and the Nielsen Data that was quite
proximate to Mr. Whitt’s results, albeit not a
complete replication. Thus, it is clear that
Mr. Whitt’s computational processes can be
tested and subject to meaningful cross-examination and rebuttal.
Id. Based on this conclusion the Judges
denied IPG’s motion to strike portions of
the SDC’s written direct statement on
grounds that the SDC violated its
discovery obligations.
In its discovery motion, IPG also
asked the Judges to strike any reliance
on or reference to the distant rating
study presented by the SDC as
inadmissible. See [IPG] Motion to Strike
Portions of [SDC] Direct Statement at
10–11 (February 20, 2014). The Judges
declined to consider these issues at that
stage of the proceeding, reasoning that:
[a]n order regarding these issues would
essentially constitute a premature in limine
ruling based on SDC’s non-production of the
Merger Information in discovery. Given that
SDC introduced new testimony and new
exhibits at the April 8, 2014, discovery
hearing, the Judges decline to rule without a
formal motion in limine, addressing these
issues in the context of the new hearing
exhibits and the hearing testimony, should
IPG decide to renew these arguments.
May 2, 2014, Order at 11. IPG filed that
motion in limine on August 26, 2014,
viz., the Motion at issue here.
C. Substance of IPG’s Motion
In the present Motion, IPG asserts that
‘‘Merger Information existed and was
not produced to IPG, including sweeps
period data, a sweeps period algorithm,
a file that prepared the Tribune data for
merger, a process to reconcile Nielsen
and Tribune data, and another ‘quality
control process’ performed by Mr.
Whitt.’’ Motion at 2. IPG further asserts
that ‘‘SDC’s witness [Dr. Erdem]
approximated Mr. Whitt’s results only
after utilization of data and information
that had not been produced to IPG, and
that the SDC’s attempted replication of
the Merger Information occurred
months after both the discovery
deadline and the deadline for filing
amended direct statements.’’ Id.
According to IPG, the ‘‘SDC neither
produced the original Merger
Information, nor attempted to replicate
it until March 28, 2014, all the while
knowing the evidentiary requirements
for the introduction of the study. . . .’’
Id. at 3.
IPG continues:
Alan Whitt asserts that his analysis relied,
inter alia, (i) on a sample of television
stations selected by Marsha Kessler [an
MPAA witness in past cable distribution
proceedings, including the 2000–2003
proceeding and the Phase I proceeding for
the instant royalty year], and (ii) household
diaries of distant program viewing for those
programs from Nielsen’s six ‘‘sweep’’
months. [Yet, l]iterally no information or data
regarding the station sampling process exists,
nor information or data that explains the
methodological processes utilized in
connection with the produced Nielsen data.
Id. at 3 (internal quotations omitted).
IPG asserts that:
stations selected by Ms. Kessler for inclusion
in the 1999 MPAA/Nielsen study were
altogether different than those appearing in
data produced by the SDC. . . . [Therefore,]
Mr. Whitt’s statement that the SDC-produced
data was derived from a sample of stations
selected by Marsha Kessler is simply
inaccurate or, at minimum, without
evidentiary foundation [but] IPG has been
denied any ability to investigate that
determination because of the SDC’s failure to
produce underlying documents
substantiating such assertion.
Id. at 4.
IPG further asserts that, in prior
proceedings, Nielsen and the MPAA
have used a wide variety of sampling
methodologies and methods of data
collection. IPG contends that with
respect to the Nielsen data produced by
the SDC in the current proceeding,
however, the SDC provided none of
those methodological details.
Consequently, IPG asserts that it has ‘‘no
means of determining the method by
which the stations on which the Whitt
analysis relies were selected, and no
means to determine what Nielsen data
was collected, how it was collected, the
limitations on the data, the scope and
meaning of the data, the possible
alternatives that were employed, etc.’’
Id. at 5–6. As a result, IPG requests that
the Judges strike any evidence relying
on or referring to Mr. Whitt’s HHVH
report. Id. at 8.
D. Judges’ Analysis and Ruling on the
Motion
Much of IPG’s Motion rehashes
discovery issues that the Judges
addressed fully in the May 2, 2014,
Order. The Judges will not revisit those
discovery-related issues. The Judges
now consider only whether to grant or
deny IPG’s Motion, which requests that
the Judges preclude the SDC from
relying on or referring to the HHVH
report on grounds of admissibility.
IPG’s arguments for excluding the
HHVH report are that the SDC failed to:
(1) Retain or produce to IPG input data
from the HHVH report, (2) produce
information relating to the sampling
processes that were followed for the
selection of stations included as part of
the Whitt analysis, and (3) produce the
methodological processes followed by
Nielsen in the creation of the Nielsen
data that were referred to in the HHVH
report. See Motion at 7–8.
At oral argument on the motion, IPG’s
counsel contended that even if the SDC
did not have the underlying documents
that IPG sought, the SDC was required
to create such documents and produce
them to IPG. 09/02/14 Tr. at 14–15. As
a preliminary matter, the Judges view
this argument as yet another attempt by
IPG to resurrect its complaint that the
SDC failed to meet its discovery
obligations. The Judges already
addressed this issue in the May 2, 2014,
Order.13
IPG also asserts that the SDC’s failure
to create a document in response to
IPG’s discovery requests somehow
13 Even if the Judges had not addressed the issue
in the May 2, 2014, Order, they would nonetheless
reject IPG’s assertion that the SDC was obligated to
create documents to comply with a discovery
request. The Judges have consistently held that
‘‘[t]he limited discovery permitted in proceedings
before the Judges should permit the parties to test
admissible evidence, but not create an extensive
burden of time and expense.’’ Order Granting In
Part and Denying In Part the Motion of
SoundExchange to Compel XM Satellite Radio Inc.,
Sirius Satellite Radio Inc., and Music Choice to
Produce Surveys and Supporting Documents,
Docket No. 2006–1 CRB DSTRA (May 15, 2007). In
the May 2, 2014, Order, the Judges ruled that the
SDC had provided IPG with sufficient discovery to
enable IPG to test the HHVH report. IPG points to
no provision in the CRB rules that requires a party
to create documents in response to a discovery
request. The Judges see no reason in this instance
to impose such a requirement by order.
violated a statutory provision dealing
with written direct statements. At the
hearing, IPG’s counsel contended that
the SDC ‘‘never put this information or
alluded to it or referenced or
incorporated it by reference in an
Amended Written Direct Statement.
Therefore, for the record, it does not
exist. It is not before [the Judges]. And
as such, the SDC study is hopelessly
missing a piece, and therefore, it should
not be heard. It should be excluded.’’
09/02/14 Tr. at 15 (Att’y Boydston).
The requirement to file written direct
statements is codified in section
803(b)(6)(C) of the Copyright Act. That
section circuitously requires the Judges
to issue regulations that require the
parties to file written direct statements
and written rebuttal statements by a
date specified by the Judges. 17 U.S.C.
803(b)(6)(C)(i). The statutory provision
does not address the content of written
direct statements. Moreover, the
regulation the Judges promulgated
under that provision does not impose
the content requirements that IPG
suggests.14 Therefore, the Judges reject
IPG’s assertion that the SDC violated the
statutory provisions dealing with the
filing of written direct statements. The
HHVH report was properly before the
Judges.15 On balance, the Judges find
that the SDC’s written direct statement
was adequate to satisfy the requirements
14 The rule states: ‘‘[t]he written direct statement
shall include all testimony, including each
witness’s background and qualifications, along with
all the exhibits.’’ 37 CFR 351.4(b). The SDC’s
written direct statement included Mr. Whitt’s
testimony as well as that of Mr. Sanders. The SDC
included in its rebuttal statement the testimony of
Dr. Erdem. The SDC’s written direct statement may
not have been exquisitely complete. Indeed, the
SDC’s counsel concedes that Mr. Whitt’s written
testimony did not describe a ‘‘quality control’’
process that he conducted to eliminate duplicative
entries and to fix errors in program titles. 09/02/14
Tr. at 37 (Att’y MacLean). The SDC contends,
however, that Mr. Whitt’s process resulted in the
elimination of a handful of program titles, none of
which was claimed by either party in this
proceeding. Id. at 37–8. The Judges find no
persuasive evidence in the record to contradict the
SDC’s contention, rendering the SDC’s omission
harmless. Moreover, the Judges note that the dates
for Nielsen sweeps weeks, used by Dr. Erdem in his
analysis to replicate Mr. Whitt’s report, either were
produced to IPG or were otherwise publicly
available. See 04/08/14 Tr. at 23–24, 204 (Att’y
MacLean). The SDC satisfied its discovery
obligations with respect to this information.
15 IPG raised similar objections in the 2000–03
distribution proceeding. Docket No. 2008–2 CRB CD
2000–03. In that proceeding, the Judges excluded
Mr. Whitt’s testimony, which relied on data similar
to that which the SDC proffer in the current
proceeding. The Judges’ decision not to consider
Mr. Whitt’s testimony in the 2000–2003 proceeding,
however, was based on the SDC’s failure to provide
Mr. Whitt’s testimony until its rebuttal case, three
weeks before the hearing. In that context, the Judges
found the SDC’s delay ‘‘deprived IPG of the
opportunity to review the work undertaken by Mr.
Whitt.’’ 78 FR 64984, 65004 (Oct. 30, 2013).
of the Act and applicable rules. IPG’s
complaints about the completeness or
persuasiveness of that testimony go to
the weight rather than the admissibility
of the testimony.
IPG also objects to the purported lack
of clarity surrounding the way in which
the television stations analyzed in the
HHVH report were selected. Mr. Whitt
stated in his written direct testimony
that the television stations he studied in
the report were based on a list of
stations compiled by Ms. Kessler. Ex.
SDC–D–001 at 3. IPG, evidently
assuming that the list referred to by Mr.
Whitt was the list of stations that was
attached to Ms. Kessler’s written direct
testimony in Phase I of this
proceeding,16 contends that it compared
the selection of stations in the Whitt
HHVH report with Ms. Kessler’s list and
found that the two do not correspond.
IPG states that of Mr. Whitt’s 72
stations, only half of them can be found
in Ms. Kessler’s list. 09/02/14 Tr. at 16.
IPG contends that it does not know
where the other 36 stations that Mr.
Whitt studied came from. Id.
The SDC reply that the Kessler list
that IPG compared with the Whitt list
was not the basis for the HHVH report.
The SDC represent that the Kessler list
of stations that Mr. Whitt used for his
report was based on the Nielsen data
that Ms. Kessler ordered for the study
that she prepared for the 2000–03
proceeding. 09/02/14 Tr. at 28–30 (Att’y
MacLean). Mr. Whitt addressed this
issue in his testimony in the April
hearing on IPG’s discovery motion. 04/
08/14 Tr. at 113–15 (Whitt). That being
said, the SDC are unsure how Ms.
Kessler determined what Nielsen data to
order. 09/04/14 Tr. at 29 (Att’y
MacLean). Nevertheless, the SDC’s
witness, Mr. Sanders, testified that the
list upon which the Whitt report was
compiled was ‘‘sufficiently
representative for the purpose that it is
being put forth.’’ Id. at 31. The SDC
further assert, based on an analysis by
Dr. Erdem, that the SDC’s Nielsen
sample, which was based on the Nielsen
information that was ordered by Ms.
Kessler, ‘‘does not have a bias in terms
of coverage of quarter-hours of IPG
versus SDC programs. Or, if it does have
a bias, the same bias is in all of the data
that IPG is using as well, whatever bias
there is.’’ Id. at 32. Finally, the SDC
state that they ‘‘had absolutely nothing
to do with choosing this [station]
sample—it was chosen years before we
ever purchased it from MPAA—there
16 Ms. Kessler’s written direct testimony was
included in the prior testimony designated by the
SDC for consideration in this proceeding under 37
CFR 351.4(b)(2).
was absolutely zero incentive for
everybody to intentional [sic] bias the
data in any way.’’ Id. at 33.
For purposes of ruling on the Motion,
the Judges do not examine the weight,
if any, they might place on the proffered
evidence. Rather, the Judges must
examine whether the SDC offered the
evidence in a manner that was
consistent with the applicable rules for
offering this type of evidence.
The Judges’ procedural rules address
evidence in proceedings before the
Judges. Rule 351.10(a) addresses
admissibility of evidence. Under the
rule, evidence that is relevant and not
unduly repetitious or privileged is
admissible. Proponents must
authenticate or identify written
testimony and exhibits for them to be
admissible. See 37 CFR 351.10(a). The
admissibility requirements of
authentication or identification are
satisfied by evidence sufficient to
support a finding that the matter in
question is what its proponent claims.
Id.
IPG does not contend that the SDC
violated any provision of Rule 351.10(a);
that is, that the Whitt report is
irrelevant, unduly repetitious, or
privileged. Rather, IPG focuses on Rule
351.10(e). That provision of the rule
provides that if studies or analyses are
offered in evidence, they must state
clearly ‘‘the study plan, the principles
and methods underlying the study, all
relevant assumptions, all variables
considered in the analysis, the
techniques of data collection, the
techniques of estimation and testing,
and the results of the study’s actual
estimates and tests.’’ 37 CFR 351.10(e).
This information must be presented in
a ‘‘format commonly accepted within
the relevant field of expertise implicated
by the study.’’ Id. Facts and judgments
upon which conclusions are based must
be ‘‘stated clearly, together with any
alternative courses of action that were
considered.’’ Id. The party offering the
study into evidence must retain
summaries and tabulations of input data
and the input data themselves. Id.
IPG asserts that by not explaining
precisely how the Whitt report was
created, the SDC failed to provide an
adequate foundation for the report. In
considering whether there was an
adequate foundation for admitting the
Whitt report into evidence, the Judges
must consider not only the exhibit that
contains the report but also any written
or live testimony offered to explain how
the exhibit was created. In his written
direct statement, Mr. Whitt included the
household viewing report that he had
prepared and discussed the sources of
the data and a description of how he
prepared the report. In the April 8,
2014, hearing on IPG’s motion to strike
portions of the SDC’s written direct
statement, Mr. Whitt provided
additional details about how he created the
report, including the sources of the data,
the processes he followed to merge
Nielsen and Tribune data files, and the
‘‘quality control’’ process he used to
eliminate erroneous program titles.
IPG’s counsel and the Judges had
ample opportunity to question Mr.
Whitt on all elements of the report.
After noting IPG’s objection, the Judges
admitted provisionally Mr. Whitt’s
written testimony during the hearing on
September 3, 2014. 09/03/14 Tr. at 416.
IPG’s counsel then had another
opportunity to cross-examine Mr. Whitt
on the processes he used to construct
the report. On both occasions, Mr. Whitt
was open and forthright about how he
prepared the report, including the
manner in which he used a list of
stations based on a set of Nielsen data
ordered by Ms. Kessler for MPAA in a
separate proceeding. See, e.g., 09/03/14
Tr. at 422 (Whitt) (‘‘I just accepted
whatever stations they sent me.’’). Mr.
Whitt made no efforts to gloss over the
potential weaknesses in the preparation
of the report. Indeed, the SDC’s counsel
correctly identified Mr. Whitt as more
akin to a fact witness than an expert
witness. 09/02/14 Tr. at 35 (Whitt).
In the end, the Judges are satisfied
that the SDC provided an adequate
foundation for the admission of Mr.
Whitt’s written direct statement into the
record. That is not to say that there are
not issues with respect to how the
HHVH report was created. The SDC
concede as much. See 09/02/14 Tr. at 24
(Att’y MacLean) (‘‘[I]t is not that any of
the specific problems that the parties
raised were invalid or that they
shouldn’t be raised. . . .’’). Not the
least of these issues is the fact that the
Whitt report relies on a list of stations
selected according to criteria that were
seemingly unknown even to Ms. Kessler,
who purportedly selected the stations.
These issues go to the weight, not to the
admissibility, of the report. For the
foregoing reasons, the Judges DENY
IPG’s Motion and admit Exhibit SDC–D–
001 (Written Direct Testimony of Whitt
with Exhibits) for all purposes in this
proceeding.
IV. Applicable Law and Precedent
Twice each year, CSOs deposit with
the Copyright Office royalties accrued
for the retransmission of over-the-air
television programming outside the
originating station’s local broadcast
area. The amount of fee deposits is
statutory. See 17 U.S.C. 111(d)(1). Every
July, copyright owners file claims for
the funds on deposit for the preceding
calendar year’s retransmissions. On
motion of a claimant or sua sponte, the
Judges publish notice of the
commencement of proceedings to
distribute those royalty funds.
By convention, claimants and
claimants’ representatives begin each
proceeding with an allocation process
that has come to be called ‘‘Phase I.’’ 17
Traditionally, the claimants divide
themselves into eight Phase I categories
based upon the nature of the programs
in which they claim copyright.18 If the
participants do not agree to an
allocation of deposited royalties among
the Phase I categories, they submit their
controversy to the Judges for
adjudication. Once the allocation is
decided, the claimants in each category
seek distribution. If the claimants
within each category do not agree to the
distribution scheme among themselves,
the Judges adjudicate disputes and make
a determination of the appropriate
distribution among claimants within
each category. This process has become
known as ‘‘Phase II’’ of the distribution
proceeding.
A. The Relevant Statutory Language
The Copyright Act (Act) does not
mandate (or even suggest) a formula for
royalty distribution.19 As the
Librarian 20 has stated:
Section 111 does not prescribe the
standards or guidelines for distributing
royalties collected from cable operators
under the statutory license. Instead, Congress decided to let the Copyright Royalty Tribunal ‘‘consider all pertinent data and considerations presented by the claimants’’ in determining how to divide the royalties.
Distribution of 1993, 1994, 1995, 1996 and 1997 Cable Royalty Funds, Order, in Docket No. 2000–2 CARP CD 93–97, 66 FR 66433, 66444 (Dec. 26, 2001) (quoting H.R. Rep. No. 1476, at 97 (1976)) (1993–1997 Librarian Order).21
The Act does require, however, that the Judges act in accordance with prior determinations and interpretations of the Copyright Royalty Tribunal, the Librarian, the Register of Copyrights (Register), Copyright Arbitration Royalty Panels, to the extent that precedent is consistent with the Register’s opinions on questions of copyright law, and decisions of the Court of Appeals relating to distribution proceedings. See 17 U.S.C. 803(a)(1).
B. The Economic Standard: ‘‘Relative Market Value’’
Determining the proper distribution of cable royalties among claimants requires a determination of the ‘‘relative marketplace value’’ of the respective claimants’ programs. See, e.g., Program Suppliers v. Librarian of Congress, 409 F.3d 395, 401 (D.C. Cir. 2005); 1993–1997 Librarian Order, 66 FR at 66445. The Judges defined ‘‘relative marketplace value’’ in detail in a previous Determination. See Determination of the Distribution of the 2000–03 Cable Royalty Funds, Docket No. 2008–2, CRB CD 2000–2003, 78 FR 64984, 64985–6 nn. 8 and 9 (October 30, 2013) (2000–03 Determination). In the present Determination, the Judges adopt and restate the ‘‘relative market value’’ standard they described in the 2000–03 Determination, and provide further detail consistent with that standard, including detail presented through the expert economic testimony in the present proceeding.
To assess relative marketplace value, the Judges previously have looked to hypothetical, simulated, or analogous markets, as Congress has imposed the compulsory license regime in lieu of an unfettered free market for cable retransmission of broadcast television programs. 2000–03 Determination, 78 FR at 64986; see also 1993–97 Librarian Order, 66 FR at 66445; 1987 Music Determination, 55 FR at 11993. Consistent with precedent, in the current proceeding the Judges look to the evidence presented by the parties, if any, to identify the parameters of a hypothetical market that would exist but for the compulsory license regime.22
17 The Copyright Royalty Tribunal (CRT), a predecessor to the CRB, began bifurcation of the distribution proceedings to mitigate what it perceived to be an unwieldy process. See 1979 Cable Royalty Distribution Determination, 47 FR 9879 (Mar. 8, 1982). Bifurcation of distribution proceedings is not mandated by statute or regulation, but is acknowledged in the Judges’ current regulations at 37 CFR 351.1(b)(2).
18 The program categories are: Program Suppliers (syndicated programming and movies); Joint Sports Claimants (live college and professional team sports); Commercial Television (programs produced by local commercial TV stations); Public Broadcasting; Devotional Claimants; and Canadian Claimants. Two additional categories represent non-TV interests: Music Claimants (copyright owners of musical works carried on broadcast TV signals); and National Public Radio (copyright owners of all non-music content broadcast on NPR stations).
19 Section 111(d)(4) of the Act merely provides that, in the event of a controversy concerning the distribution of royalties, ‘‘the Copyright Royalty Judges shall, pursuant to Chapter 8 of [title 17], conduct a proceeding to determine the distribution of royalty fees.’’
20 The Librarian was responsible for administering the Copyright Arbitration Royalty Panel (CARP) process for distributing cable royalties from 1993, when Congress abolished the CRT, a predecessor adjudicative body, until 2005, when Congress established the Copyright Royalty Judges program. The Librarian had the obligation of reviewing CARP decisions and, on recommendation of the Register, adopting, modifying, or rejecting them.
21 The 1993–1997 Librarian Order was vacated as
moot after the parties settled their appeals.
Distribution of 1993, 1994, 1995, 1996 and 1997
Cable Royalty Funds, Notice of termination of
proceeding, Docket No. 2000–2 CARP CD 93–97, 69
FR 23821 (Apr. 30, 2004). The settlement and
vacatur of the 1993–1997 Librarian Order did not
disturb the reasoning articulated therein. Id. at
23822.
As explained in the 2000–03
Determination, to construct the
hypothetical market, it is important at
the outset to appreciate the reason for
the statutory license and the
concomitant distribution proceedings.
Statutory licenses substitute for free
market negotiations because of a
perceived intractable ‘‘market failure’’
inherent in the licensing of copyrights—
particularly the assumed prohibitively
high ‘‘transaction costs’’ of negotiating a
multitude of bilateral contracts between
potential sellers and buyers. See, e.g., R.
Picker, Copyright as Entry Policy: The
Case of Digital Distribution, 47 Antitrust
Bull. 423, 464 (2002) (‘‘The modern
structure of . . . validating or conferring
rights in copyright holders yet coupling
those rights with statutory licenses has
the virtue of mitigating the exercise of
monopoly power and minimizing the
transaction costs of negotiations.’’); S.
Willard, A New Method of Calculating
Copyright Liability for Cable
Rebroadcasting of Distant Television
Signals, 94 Yale L.J. 1512, 1519 (1985)
(‘‘One important reason for compulsory
licensing . . . was to avoid the
‘prohibitive’ transaction costs of
negotiating rebroadcast consent.’’); S.
Besen, W. Manning & B. Mitchell,
Copyright Liability for Cable Television:
Compulsory Licensing and the Coase
Theorem, 21 J.L. & Econ. 67, 87 (1978)
(‘‘Compulsory licensing . . . has lower
negotiating costs than a system based on
full copyright liability. . . .’’). Thus,
the hypothetical market that the Judges
must construct must be a market that
would be unencumbered by either
transaction costs or the restrictions
imposed by the statutory license.
22 ‘‘Simulations
aim at imitating an economically
relevant real or possible system by creating societies
of artificial agents and . . . institutional
structure. . . .’’ A. Lehtinen and J. Kuorikoski,
Computing the Perfect Model: Why Do Economists
Shun Simulations?, 74 Phil. of Sci. 304, 307 (2007)
(emphasis in original). However, the parties to this
proceeding did not proffer evidence of any
simulations. Further, the parties did not provide
evidence or testimony from sellers/licensors and
buyers/licensees in ‘‘analogous’’ markets, such as
perhaps the markets for cable programming or
syndication rights (nor the results of any surveys of
such market participants) that the Judges might use
as benchmarks to establish a distribution
methodology in the present proceeding. The SDC
did provide, however, evidence of ratings from the
local markets in which the SDC and IPG programs
aired.
1. ‘‘Relative’’ Market Value
The Judges begin, as they did in the
2000–03 Determination, parsing the
phrase ‘‘relative market value’’ by first
considering the import of the word
‘‘relative.’’ The word ‘‘relative’’ denotes
that the value of any retransmitted
program is to be determined in relation
to the value of all other programs within
the bounds of the respective Phase I
category definitions, and thus can be
expressed as a percentage of total
‘‘market value.’’
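Stated symbolically (an illustrative restatement only; the notation below is not drawn from the statute or from prior determinations), if V_k denotes the market value of the programs claimed by participant k within a Phase I category, then the relative market value, i.e., that participant's share of the category fund, is

```latex
% Illustrative notation; V_k and s_k do not appear in the record.
s_k \;=\; \frac{V_k}{\sum_{j} V_j}, \qquad \sum_{k} s_k \;=\; 1
```

Because the SDC and IPG are the only remaining claimants to the 1999 Devotional Claimants fund, their shares sum to one; the determination announced in this proceeding fixes them at 71.3% and 28.7%, respectively.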
2. Relative ‘‘Market Value’’
In turn, ‘‘market value’’ is
traditionally stated in decisional and
administrative law more fully as ‘‘fair
market value.’’ The Supreme Court has
stated the traditional definition of ‘‘fair
market value’’ as ‘‘the price at which the
property would change hands between
a willing buyer and a willing seller,
neither being under any compulsion to
buy or sell and both having reasonable
knowledge of relevant facts.’’ U.S. v.
Cartwright, 411 U.S. 546, 551 (1973). It
is necessary to further define the various
terms that comprise the foregoing
definition of relative market value.
a. The Hypothetical ‘‘Willing Sellers’’
(the Copyright Owners)
Copyright Owners seek to maximize
profit from licensing their programs for
retransmission by CSOs. Copyright
Owners’ marginal costs are low and
approach zero. Most of the costs
incurred in creating the work are sunk,
fixed costs. Even so, Copyright Owners
seek to maximize the revenue they
receive from CSOs. Given the minimal
marginal costs, Copyright Owners, as
the hypothetical willing sellers, will
always have an incentive to sell at some
positive price, but will likely engage in
bargaining whereby a Copyright Owner
might threaten to deny the license
unless the CSO offers the Copyright
Owner’s (undisclosed) reservation price.
See Besen, et al, supra, at 81.
b. The Hypothetical ‘‘Willing Buyers’’
(the CSOs)
For CSOs, the economics are less
straightforward. CSO revenues are
derived from the sale of cable bundles
(commonly described as ‘‘packages’’ or
‘‘tiers’’) to subscribers, i.e., the ultimate
consumers. In turn, many variables
affect the number of consumers that
subscribe to a particular CSO’s service,
including the retransmitted broadcasts
that the CSO includes as part of its
subscription package.23
23 The compulsory license regime requires CSOs to license a station’s signal in its entirety, 17 U.S.C. 111(d)(1)(B), and to retransmit the programs (including advertisements) without alteration. 17 U.S.C. 111(c)(3). Therefore, retransmitting CSOs cannot sell advertising on retransmitted broadcast channels in the actual market under the compulsory license regime. However, in the hypothetical market, where the limiting provisions of the compulsory license regime would not apply, retransmitting CSOs arguably could sell local replacement advertising, which would render viewership an important metric of relative market value. However, this point was not presented by either of the parties in the present proceeding.
To CSOs, the programs offered by the
Copyright Owners are inputs—factors of
production—utilized to create the
products that the CSOs sell to their
customers, viz., the various subscription
bundles of cable channels. In a
hypothetical program market, CSOs
would buy the rights to retransmit
programs as they would purchase any
factor of production, up to the level at
which that ‘‘factor price’’ equals the
‘‘Marginal Revenue Product’’ (MRP) of
that program. In simple terms, a CSO in
a competitive factor market would only
pay for a license to retransmit a program
if the revenue the CSO could earn on
the next (marginal) sale of the final
product were at least equal to that
price.24 In practical terms, why would a
CSO pay $50,000 to retransmit a
program that the CSO estimates would
add only $40,000 to the CSO’s
subscriber revenue? See Besen, et al.,
supra, at 80 (‘‘To the cable system the
value of carrying the signal is equal to
the revenue from the extra subscribers
that the programming will attract and
any higher subscriber fees it can charge
less the additional costs of importing
the program.’’).25
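The marginal reasoning above reduces to a simple decision rule. The following is an illustrative restatement (the symbols p_i, MRP_i, and ΔR_i are not drawn from the record): a profit-maximizing CSO in the hypothetical market licenses program i only if the license fee does not exceed the additional subscriber revenue the program is expected to generate.

```latex
% Illustrative licensing rule for a hypothetical profit-maximizing CSO
\text{license program } i \iff p_i \;\le\; MRP_i \;=\; \Delta R_i
```

Applied to the example above, ΔR_i = $40,000 while p_i = $50,000, so the hypothetical CSO would decline the license.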
c. ‘‘Neither Being Under Any
Compulsion to Buy or Sell’’
In the actual (i.e., non-hypothetical)
market, terrestrial broadcast stations
create the program lineup, which is only
available for purchase by CSOs as a pre-bundled signal. The CSOs cannot
selectively license for retransmission
some programs broadcast on the
retransmitted station and decline to
license others; rather, the signal must be
purchased in toto. 17 U.S.C. 111(d)(1)(B)
(statutory license royalty computed
based on number of ‘‘distant signal
equivalents’’).
Is this required bundling a form of
‘‘compulsion’’ upon CSOs? In the actual
24 A focus on marginal costs and benefits is not
only efficient for the hypothetical buyers and
sellers, but also for the consuming public: ‘‘Optimal
program diversity will result if cable operators and
the public they serve pay to copyright owners the
marginal value derived from viewing syndicated
programming.’’ Willard, supra, at 1518.
25 If the CSO, as a program licensee, had some
degree of monopsony power in the factor market,
it could pay less than a price equal to MRP, but still
would pay to license programs in a quantity at
which MRP would equal the marginal cost to
license an additional program.
market, they are compelled to take every
program pre-bundled on the
retransmitted distant station, despite the
fact that the various pre-bundled
programs would each add different
monetary value (or zero value) in the
form of new subscriber volume,
subscriber retention, or higher
subscription fees. Indeed, some
programs on the retransmitted station
may have so few viewers that CSOs—if
they had the right—would decide not to
purchase such low viewership programs
but for the requirements of the
compulsory license regime.
Further, certain programs may have
more substantial viewership, but that
viewership might merely duplicate
viewership of another program that
generates the same sub-set of
subscribers. To restate the example
offered in the 2000–03 Determination,
the viewers of reruns of the situation
comedy ‘‘Bewitched’’ may all be the
same as the viewers of reruns of ‘‘I
Dream of Jeannie,’’ a similar
supernatural-themed situation comedy.
However, ‘‘Bewitched’’ may have fewer
viewers than ‘‘I Dream of Jeannie.’’
In the hypothetical market in which
the compulsory licensing regime did not
exist, a rational profit-maximizing CSO
that had already paid for a license to
retransmit ‘‘I Dream of Jeannie’’ would
not also pay for ‘‘Bewitched’’ in this
hypothetical marketplace, because it
fails to add marginal subscriber revenue
for the CSO. Rather, the rational CSO
would seek to license and retransmit a
show that marginally increased
subscriber revenue (or volume, if market
share was more important than profit
maximization), even if that program had
lower total viewership than
‘‘Bewitched.’’
Alternatively stated, why should CSOs
in the hypothetical market be compelled
to pay for a program based on its higher
viewership, even though it adds less
value than another show with lower
viewership? Simply put, the
hypothetical, rational profit-maximizing
CSOs would not pay Copyright Owners
based solely on levels of viewership.
Rather, the hypothetical CSOs would (1)
utilize viewership principally as a tool
to estimate how the addition of any
given program might change the CSO’s
subscriber revenue, (2) attempt to factor
in the economics of various bundles;
and (3) pay for a program license (or
eschew purchasing that license) based
on that analysis.
Thus, the Judges consider the
hypothetical market to be free of the
compulsion that arises from the pre-bundling that exists in the actual
market.
On the other side of the coin, are the
sellers, i.e., the Copyright Owners,
under any ‘‘compulsion’’ to sell? In the
actual market, one in which the
terrestrial station signal is acquired in a
single specific bundle by a CSO, the
answer appears to be yes, there is
‘‘compulsion.’’ Copyright Owners
cannot carve out their respective
programs and seek to maximize their
values to CSOs independent of the
prepackaged station bundles in which
they exist.
Of course, in the ‘‘hypothetical
market’’ that the Judges are charged
with constructing, it would be
inappropriate not to acknowledge the
inherent bundling that would occur.
That is, the bundling decision is a
‘‘feature’’ rather than a ‘‘bug’’ in even a
hypothetical market for distant
retransmissions in which the statutory
license framework does not exist. Thus,
while Copyright Owners could offer to
supply their respective programs at
given prices, the equilibrium market
price at which supply and demand
would intersect would reflect the CSOs’
demand schedules, which are based in
part upon the fact that the buyers, i.e.,
the CSOs, would pay only a price that
is equal to (or less than) the MRP of that
program in a bundle to be purchased by
subscribers.
3. The Optimal Economic Approach to
Determining ‘‘Relative Market Value’’
In the present proceeding, the Judges
considered the general interrelationship
among bundling, subscribership, and
viewership, and their impact on
‘‘relative market value,’’ in more detail
than in prior proceedings. Specifically,
the Judges inquired as to whether the
parties’ experts had considered utilizing
a method of valuation known as the
‘‘Shapley value’’ methodology 26 to
determine their respective allocations.
Broadly stated, ‘‘the Shapley value
gives each player his ‘average marginal
contribution to the players that precede
him,’ where averages are taken with
respect to all potential orders of the
players.’’ U. Rothblum, Combinatorial
Representations of the Shapley Value
Based on Average Relative Payoffs, in
The Shapley Value: Essays in Honor of
Lloyd S. Shapley 121 (A. Roth ed. 1988)
(hereinafter, ‘‘Roth’’) (quoting Shapley,
supra). A Shapley valuation in the
26 See L. Shapley, A Value for n-person Games,
in H. Kuhn and A. Tucker, Contributions to the
Theory of Games (1953). A definition and an
example of a Shapley valuation are set forth in the
text, infra. For the statistical formula for a Shapley
value, see 9/8/14 Tr. at 1075–79 (Erdem); see also
SDC Proposed Findings of Fact (PFF) ¶ 64.
present context is best understood through the following example: 27
• Assume there is only one CSO (C), and there are two program owners (P1 and P2) with programs available for retransmission.
In a hypothetical market, the Shapley model defines the values of C, P1, and P2 under all of the possible orderings of arrival of the three entities to negotiations and at each point of arrival. For C, P1, and P2, there are the following 6 (that is 3 factorial, or 3!) possible orderings by which each arrives in the market:
(1) C, P1, P2
(2) C, P2, P1
(3) P1, C, P2
(4) P1, P2, C
(5) P2, C, P1
(6) P2, P1, C
• Assume the following.
(a) An entity (C, P1, or P2)—alone in
the market—generates $0 in
retransmission value regardless of who
that player is (because a cable system
without programming or a program
without a CSO will not be viewed and
thus has no value);
(b) regardless of the order in which
the respective owners of P1 and P2
arrive in the market to attempt to license
their respective programs, both of their
respective programs generate $0 in
retransmission value without a CSO
(because programs without a CSO
cannot be retransmitted and therefore
provide no value);
(c) if C is present, it generates $6 by
retransmitting P1 alone and $5 by
retransmitting P2 alone;
(d) if all three players are present,
then the retransmission of P1 and P2 by
C generates an assumed synergistic
value of $12.
• The Shapley value of P1 in each of
the six possible orderings is thus:
$6 in ordering (1) (because P1 increases
the value from $0 to $6);
$7 in ordering (2) (because P1 increases
the value from $5 to the synergistic
$12);
$0 in ordering (3) (because P1 adds no
value when it arrives first to the
market);
$0 in ordering (4) (because P1 adds no
value when it arrives first to the
market);
$7 in ordering (5) (because P1 increases
the value from $5 to the synergistic
$12); and
$0 in ordering (6) (because P1 does not
add value if there is no CSO in the
market).
The Shapley value of P1 is the average value of P1 over all possible arrival sequences: ($6 + $7 + $0 + $0 + $7 + $0) ÷ 6 ≈ $3.33.
By a similar calculation, the Shapley
value of P2 is $2.83. (Similarly, the
Shapley value of C, the CSO, is $5.83.)
The sum of the values each provides is
approximately $12, which equals the
synergistic business value generated
when all three entities are present in the
market.
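For illustration only (and not as part of the evidentiary record), the following short sketch shows how the Shapley values in this example can be computed programmatically: it enumerates the 3! arrival orders and averages each entity's marginal contribution, using only the coalition values assumed in items (a) through (d) above.

# Illustrative sketch: Shapley values for the three-player example above.
from itertools import permutations

def value(coalition):
    # Coalition values taken from assumptions (a)-(d) in the text.
    members = frozenset(coalition)
    if members == {"C", "P1", "P2"}:
        return 12.0   # synergistic value when all three entities are present
    if members == {"C", "P1"}:
        return 6.0    # C retransmitting P1 alone
    if members == {"C", "P2"}:
        return 5.0    # C retransmitting P2 alone
    return 0.0        # any entity alone, or P1 and P2 without a CSO

players = ["C", "P1", "P2"]
orders = list(permutations(players))
shapley = {p: 0.0 for p in players}
for order in orders:
    arrived = []
    for p in order:
        before = value(arrived)
        arrived.append(p)
        shapley[p] += value(arrived) - before   # marginal contribution
shapley = {p: total / len(orders) for p, total in shapley.items()}
print(shapley)   # C: 5.83, P1: 3.33, P2: 2.83; the three values sum to 12

The printed values match the figures in the text: $5.83 for the CSO, $3.33 for P1, and $2.83 for P2.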
Shapley valuations constitute ‘‘the
unique efficient solution’’ because they
‘‘valu[e] each player[’s] direct marginal
contribution to [a] grand coalition.’’ S.
Hart and A. Mas-Colell, ‘‘The Potential
of the Shapley Value,’’ in Roth, supra,
at 127–28. The Shapley value analysis
not only enriches the development of
the relative market value standard, but
it also would allow the Judges in this
proceeding to carry out their statutory
mandate to distribute the deposited
royalties by comparing the parties’
respective valuation methodologies to
that optimal standard, to determine
which of their methodologies more
closely reflects the optimal hypothetical
market.
To summarize, as in the 2000–03
Determination, the Judges will apply in
this Determination a hypothetical
market that contains the following
participants and elements: (1) The
hypothetical sellers are the owners of
the copyrighted programs; (2) the
hypothetical buyers are the CSOs that
acquire the programs as part of their
hypothetical bundles of programs; and
(3) the requirement of an absence of
compulsion dictates that the terrestrial
stations’ initial bundling of programs
does not affect the marginal profit-maximizing decisions of the
hypothetical buyers and sellers.28
V. Description and Analysis of the
Parties’ Proposals for Distribution
27 This example is inspired by a similar example
set forth by Professor Richard Watt, Managing
Editor of the Review of Economic Research on
Copyright Issues and a past president of The Society
for Economic Research on Copyright Issues. See R.
Watt, Fair Copyright Remuneration: The Case of
Music Radio, 7 Rev. of Econ. Res. on Copyright
Issues 21, 25–26 (2010).
28 The construction of the hypothetical market is
of particular importance in this proceeding. As
explained infra, IPG mistakenly argues that the
preexisting bundling of programs on the
retransmitted stations in the actual market renders
ratings irrelevant to a CSO that must purchase and
retransmit the actual bundle in toto. IPG confuses
the actual market with the hypothetical market the
Judges are obligated to construct. The actual market
is distorted by the existence of the compulsory
statutory license, and the Judges are required to
determine the values of the copyrighted programs
by hypothesizing an unregulated market in which
such statutory compulsion does not exist.
29 The SDC designated the following testimony
from the 1998–99 Phase I Proceeding (Distribution
of 1998 and 1999 Cable Royalty Funds, Docket No.
2001–8 CARP CD 98–99) from the following
witnesses: (a) Marsha Kessler (a retired MPAA vice
president, responsible for retransmission royalties):
June 2, 2003 (pp. 6347–6454); June 3, 2003 (pp.
6456–6613); July 14, 2003 (pp. 9478–9491); and July
15, 2003 (pp. 9724–9753); (b) Paul Lindstrom (a
Nielsen employee): June 9, 2003 (7175–7445); and
(c) Paul Donato (a Nielsen employee) June 9, 2003
(pp. 7445–7520). From the 2000–2003 Phase II
Proceeding (In the Matter of Distribution of 2000,
2001, 2002, and 2003 Cable Royalty Funds, Docket
No. 2008–2 CRB CD 2000–2003), the SDC
designated testimony from the following witnesses:
(a) Ms. Kessler: June 3, 2013 (pp. 101–218); (b) Paul Lindstrom: June 3, 2013 (pp. 280–324); and June 4, 2013 (pp. 368–433); (c) Dr. William Brown: June 6, 2013 (pp. 1364–1420); (d) Jonda Martin: June 3, 2013 (pp. 219–236); (e) Kelvin Patterson: June 3, 2013 (pp. 237–280); and (f) Mr. Whitt: June 6, 2013 (pp. 1346–1363).
A. The SDC Methodology
1. The Details of the SDC Methodology
The SDC’s calculation of relative
market value (SDC Methodology) is
based upon the analyses of two expert
witnesses who testified on behalf of the
SDC in their direct case and upon
certain designated testimony from prior
proceedings.29 The first live witness
upon whom the SDC relied was Mr.
Whitt, a systems analyst, programmer
and database analyst, who had worked
for a company he founded, IT
Processing LLC (IT Processing). 9/3/14
Tr. at 418 (Whitt).30 Mr. Whitt had
formed IT Processing to engage in
‘‘massive data projects’’ that required
‘‘millions of unique items of data to be
accurately and efficiently entered and
30 The SDC proffered Mr. Whitt’s testimony from
a prior hearing in this proceeding (discussed in Part
III, supra) conducted on April 8, 2014, in lieu of
eliciting his testimony during the September
hearing. IPG consented to this procedure (subject to
its foundational challenge as set forth in its Motion
in Limine discussed supra) and the Judges
incorporated by reference Mr. Whitt’s April 8, 2014,
testimony as part of the present record. 9/3/14 Tr.
at 413–15. IPG also cross-examined Mr. Whitt
during the September 2014 hearings, and the SDC
then conducted redirect examination of Mr. Whitt.
analyzed.’’ Whitt WDT at 2; Ex. SDC–D–
001 at 2.
Mr. Whitt’s work on behalf of the SDC
was derivative of earlier work he had
undertaken on behalf of MPAA. More
particularly, Mr. Whitt had been
engaged by MPAA ‘‘to process large data
files consisting of cable and satellite
copyright programming and viewing
associated with claims filed with the
Copyright Royalty Arbitration Panels
. . . and [the] Copyright Royalty
Board.’’ Id. at 3.
According to Mr. Whitt, he was
contacted by the SDC in 2006 to assist
in preparing their case in this
proceeding. 4/8/14 Tr. at 106 (Whitt).
The SDC engaged Mr. Whitt to utilize
his prior work and data from his MPAA
assignment to prepare the HHVH Report
for 1999, relating to the retransmission
of certain Devotional programming on
broadcast television stations that were
distantly retransmitted to other markets.
Whitt WDT at 3 and Ex. 1 thereto; Ex.
SDC–D–001 at 3 and Ex. 1 thereto; 4/8/
14 Tr. at 106 (Whitt).
Mr. Whitt’s 1999 HHVH Report was
based on the following three data sources:
(1) Programs on a sample of television
stations whose signals were distantly
transmitted on cable that Mr. Whitt
believed Ms. Kessler, a former employee
of the MPAA, chose based on whether
the signals were ‘‘distant’’ for cable
copyright purposes;
(2) distant program viewing data from
Nielsen, presented on a quarter-hour
basis, for programs from Nielsen’s six
‘‘sweeps’’ months of diary data (January,
February, May, July, October and
November) (Nielsen Data); 31 and
31 Nielsen ratings estimate the number of homes
tuned to a program based upon a sample of
television households selected from all television
households. The findings within the sample are
‘‘projected’’ to national totals. Although there was
no evidence or testimony regarding how Nielsen
conducted its data collection for sweeps weeks in
1999, Mr. Lindstrom described the general process
in his testimony in the 2000–03 proceeding, which
the SDC designated in this proceeding. In that
regard, Mr. Lindstrom testified that diary data is
collected in Nielsen’s diary markets during
November, February, May, July, and in some cases
October and March, which are also known as the
‘‘sweeps’’ ratings periods (Nielsen Diary Data).
(Diaries are paper booklets in which each person in
the household manually records viewing
information.) Nielsen mails seven-day diaries to
homes randomly selected by Nielsen to keep a tally
of when each television in the household was on,
what it was tuned to, and who in the household was
watching. Over the course of a four-week sweeps
period, Nielsen mails diaries to a new panel of
randomly selected homes each week. At the end of
each sweeps period, all of the viewing data from the
individual weeks are aggregated into Nielsen’s
database. Each sweeps period yielded a sample of
approximately 100,000—aggregating to 400,000
households over the course of a year. 2000–03
Determination, 78 FR at 6993; 6/3/13 Tr. at 290,
296–98, 312 (Lindstrom) (SDC Designated
Testimony).
(3) program data from Tribune Media
Services (‘‘TMS’’) (including station,
date, time, title and program category)
(TMS Data).
Id. at 3.
Mr. Whitt then matched the Nielsen
Data with the TMS Data in order to
merge the Nielsen Data for reported
quarter-hour segments with the titles of
the programs and other program
information in the TMS Data. Id. at 4;
4/8/14 Tr. at 108 (Whitt).32 In addition,
Mr. Whitt identified what he described
as ‘‘character strings’’ from program
titles (44 in total) that he discretionally
determined were devotional in nature
but had not been captured in the
merging of the Nielsen Data and the
TMS Data. Id. at 4–6. Mr. Whitt also
used his discretion to delete certain
programs that he concluded were not in
fact devotional, although their titles
initially suggested that they were
devotional in nature. 4/8/14 Tr. at 126–
28 (Whitt).
Mr. Whitt completed his analysis by
‘‘aggregat[ing] by title and station
summing the adjusted household
viewing hours from [the] Nielsen
[data].’’ Whitt WDT at 6; Ex. SDC–D–
001 at 3. Thus, Mr. Whitt was able to
identify the potentially compensable
broadcasts of the programs claimed by
SDC and IPG that aired on the sample
stations. Whitt WDT at 3; Ex. SDC–D–
001 at 3.
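The overall shape of this processing (merging the quarter-hour Nielsen Data with the TMS Data, flagging additional Devotional titles by character strings, removing titles judged not to be devotional, and aggregating household viewing hours by title and station) can be sketched roughly as follows. The file names, column names, and search strings in the sketch are hypothetical placeholders and are not drawn from the record.

# Rough sketch of the described workflow; all names below are placeholders.
import pandas as pd

nielsen = pd.read_csv("nielsen.csv")   # quarter-hour household viewing data
tms = pd.read_csv("tms.csv")           # station, date, time, title, category

# Step 1: merge quarter-hour viewing records with program titles/categories.
merged = nielsen.merge(tms, on=["station", "date", "quarter_hour"], how="left")

# Step 2: keep programs categorized as Devotional, plus titles matching a
# list of character strings (44 such strings in the analysis described above).
search_strings = ["GOSPEL", "WORSHIP", "MINISTRIES"]   # placeholders only
title_match = merged["title"].str.upper().str.contains("|".join(search_strings), na=False)
devotional = merged[merged["category"].eq("Devotional") | title_match]

# Step 3: drop titles determined, on review, not to be devotional in nature.
excluded_titles = ["EXAMPLE NON-DEVOTIONAL TITLE"]      # placeholder only
devotional = devotional[~devotional["title"].isin(excluded_titles)]

# Step 4: aggregate household viewing hours by title and station.
hhvh = devotional.groupby(["title", "station"], as_index=False)["hh_viewing_hours"].sum()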
The SDC also presented John Sanders
as an expert ‘‘to make a fair
determination of the relative market
values of particular devotional
television programs claimed by the
parties’’ using Mr. Whitt’s report. Ex.
SDC–D–002 at 2. Mr. Sanders
previously had ‘‘actively participated in
the appraisal of more than 3,000
communications and media
businesses,’’ and his work has focused
on, inter alia, ‘‘the television and cable
industries and the appraisal of . . .
subscribership-based assets . . . .’’ Id. at
3. In the course of that work, since 1982,
Mr. Sanders has frequently engaged in
the valuation of television programs for
both buyers and sellers, and the
valuation of cable systems, in
connection with market transactions (as
contrasted with valuations as an expert
witness). 9/3/14 Tr. at 461–62 (Sanders).
Accordingly, and without objection, Mr.
Sanders was qualified as an expert in
the valuation of media assets, including
32 More precisely, Mr. Whitt had performed this
merger on behalf of the MPAA. Then, after being
retained by the SDC, he derived his 1999 HHVH
Report for Devotional programming by narrowing
that prior work on behalf of the MPAA to isolate
the Devotional programming. 4/8/14 Tr. at 108–10
(Whitt).
television programs. 9/3/14 Tr. at 463–
64.
Mr. Sanders testified that if he were
representing a buyer or a seller of a
license to retransmit a program into a
distant market, the first step in his
analysis of value would be to ‘‘measure
the audience that is being generated by
the various programs in question . . . .’’
9/3/14 Tr. at 476–79 (Sanders). Mr.
Sanders testified that the reason for this
initial emphasis on audience viewership
is as follows:
[I]n terms of a cable system, the objective is
to have categories of programming that will
attract subscribers. But, within those
categories, to have individual program titles
that viewers will actually be interested in
watching. And those that show greater
evidence of viewership will obviously attract
more subscribers and, [as a] consequence,
would have greater value.
9/3/14 Tr. at 478–79 (Sanders).
Accordingly, Mr. Sanders based his
relative valuation estimate primarily on
Mr. Whitt’s 1999 HHVH Report. Sanders
WDT at 4; Ex. SDC–D–002 at 4. He
relied on that measure of viewing for the
following reasons:
To allocate reasonably the available funds
between [the] SDC and IPG in this
proceeding, it is my opinion that audience
measurements relying on surveys conducted
by Nielsen Media Research are the best
available tools to allocate shares. . . .
. . .
Within the category of devotional
programming, all of the programs claimed by
[the] SDC and IPG appear to be directed
predominantly to a Christian audience, and
can therefore be thought of as homogeneous
in terms of the subscriber base to which they
are likely to appeal. Where programs are
homogeneous, the most salient factor to
distinguish them in terms of subscribership
is the size of the audience. A religious
program with a larger audience is more likely
to attract and retain more subscribers or [sic]
the [CSO], and is therefore of proportionately
higher value.
Sanders WDT at 5–6; Ex. SDC–D–002 at
5–6. To ascertain the size of a program’s
audience, Mr. Sanders relied upon
Nielsen ratings because he understood
such ratings to be ‘‘the currency of the
broadcast and cable industry, and . . .
generally regarded as the most reliable
available measure of audience size.’’ Id.
As Mr. Sanders elaborated in his oral
testimony:
Ultimately, the valuation will be based
upon the benefit that it brings to the holder
of the programming. And most commonly,
the measurement of that value is based upon
the audience that that programming is able to
generate. . . . Nielsen audience measurement
data . . . is the most ubiquitous and
authoritative source of audience
measurement data in the broadcasting and
cable fields.
9/3/14 Tr. at 465–66 (Sanders).
Accordingly, Mr. Sanders added the
household viewing hours for the
distantly retransmitted compensable
programming for each party. This
calculation totaled 1,237,396 viewing
hours for the SDC and 280,063 for IPG.
Sanders WDT at 9; Ex. SDC–D–002 at 9;
see also id. at Appendix E. In percentage
terms, SDC-compensable programming
accounted for 81.5% of the devotional
viewing of the two parties’ programs,
and IPG-compensable programming
accounted for 18.5%.
Based on his analysis, Mr. Sanders
calculated the viewership (and
distribution) shares of the SDC and IPG
programming as follows.
SDC: 81.5%
IPG: 18.5%
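These percentages follow directly from the summed household viewing hours reported above; a minimal arithmetic check:

# Check of the share arithmetic reported above.
sdc_hours = 1_237_396
ipg_hours = 280_063
total = sdc_hours + ipg_hours
print(round(100 * sdc_hours / total, 1))   # 81.5
print(round(100 * ipg_hours / total, 1))   # 18.5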
Mr. Sanders was unable to provide
any confidence interval for these
allocations, given that the statistical
bases for the analysis were not random
in nature. However, Mr. Sanders
testified that he was able to confirm the
overall ‘‘reasonableness’’ of his analysis
by comparing the results with an
analysis of local Nielsen viewing data
for the same IPG and SDC programs in
the February 1999 sweeps period. Mr.
Sanders testified that he believed the
Nielsen analysis was performed through
a random sampling of viewers and
constituted the ‘‘granular’’ or ‘‘niche’’
type of report that Mr. Sanders
understood to be necessary in order to
rely with greater certainty on the results
of the analysis. 9/3/14 Tr. at 512
(Sanders).
That analysis revealed the following
distribution of viewers:
SDC: 71.3%
IPG: 28.7%
Mr. Sanders also noted that there was a
‘‘correlation coefficient for the HHVH
shares relative to the Nielsen shares [of]
approximately 0.75, which ‘‘signifies
that 75% of the variance between HHVH
results for different programs is
connected with the variance between
local ratings for those programs.’’
Sanders WDT at 10; Ex. SDC–D–002 at
10. The Judges understand Mr.
Sanders’s testimony to mean that the
‘‘connected’’ or correlated nature of the
two sets of viewership data
demonstrates that each data set is a form
of confirmation as to the reasonableness
of the other data set.
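For illustration only, a correlation of this kind between two sets of program-level shares could be computed as follows; the share vectors shown are hypothetical placeholders rather than figures from the record.

# Illustration only: a Pearson correlation between two sets of program shares.
from statistics import correlation   # available in Python 3.10 and later

hhvh_shares = [0.40, 0.25, 0.15, 0.12, 0.08]    # hypothetical HHVH-based shares
local_shares = [0.35, 0.30, 0.14, 0.13, 0.08]   # hypothetical local-ratings shares
r = correlation(hhvh_shares, local_shares)
print(round(r, 2))   # values near 1 indicate the two measures move together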
Indeed, Mr. Sanders testified that, in
his expert opinion, this 71.3%:28.7%
ratio should be ‘‘characterized as a
reasonableness check’’ on his analysis.
9/3/14 Tr. at 501. See also 9/3/14 Tr. at
510 (Sanders) (restating his
‘‘reasonableness’’ conclusion). Mr.
Sanders emphasized the importance of
his ‘‘reasonableness check,’’ stating that
the ‘‘body of data’’ that led to a
71.3%:28.7% distribution ‘‘is very
relevant and, in my opinion, should not
be ignored.’’ 9/3/14 Tr. at 503 (Sanders)
(emphasis added). In that regard, Mr.
Sanders further noted that, had his
primary analysis resulted in a
71.3%:28.7% distribution and, had his
‘‘reasonableness’’ check resulted in an
81.5%:18.5% distribution, he would
have proposed the 71.3%:28.7%
distribution. 9/3/14 Tr. at 509–10
(Sanders).
2. Evaluation of the SDC Methodology
IPG sets forth several criticisms of the
SDC Methodology. First, IPG claims that
the SDC Methodology incorrectly
assumes that household viewing
constitutes an appropriate measure of
relative market value. Assuming
arguendo viewership can be a basis for
value, IPG asserts, second, that the SDC
did not provide a sufficient evidentiary
foundation for the Nielsen Data and,
therefore, for the 1999 HHVH Report.
Third, again assuming, arguendo, that
viewership is probative of value IPG
argues that the incidence of ‘‘zero
viewing’’ sample points in the Nielsen
Data utilized to create the 1999 HHVH
Report invalidates the Nielsen Data as a
reliable source of viewership
information. Fourth, IPG asserts again
assuming, arguendo, the probative
nature of viewership, that the SDC
could have provided better data to
support the SDC Methodology. Fifth,
IPG argues that the SDC’s own
reasonableness test demonstrates that
IPG programming has a significantly
higher value than the 18.5% allocation
proposed by the SDC.
a. Viewership Is an Acceptable
‘‘Second-Best’’ Measure of Value, Even
Though It Is Not the Optimal Metric
IPG opposes a relative market value
assessment based solely on viewership
because: (1) A CSO primarily benefits
from attracting subscribers rather than
viewers; (2) retransmitting a program
with more viewers will not necessarily
increase aggregate subscribership for a
CSO; and (3) retransmitting a program
with fewer viewers might increase a
CSO’s aggregate subscribership.
Robinson WRT at 8.
The Judges agree that a relative
market value assessment based solely on
viewership is less than optimal. In
reaching this conclusion, the Judges
refer to their earlier discussion of the
Shapley valuation approach. In the
present context, the Judges believe that
the optimal approach to determining
relative market value would have been
to compare the SDC programs with
those of IPG using Shapley or Shapley-
approximate valuations. Such an
approach was not possible on the record
before the Judges in the current
proceeding because of the nonexistence, unavailability, or, from the
parties’ perspective, prohibitive
development cost of the necessary
evidence upon which such a
comparison could be made.
The SDC’s expert economic witness,
Dr. Erkan Erdem, agreed that, in theory,
a Shapley valuation would be a more
precise way to measure relative value in
this proceeding. 9/8/14 Tr. at 1084
(Erdem). However, as Dr. Erdem noted,
there was no evidence in the record (or
apparently otherwise available) by
which one could calculate the Shapley
values in this proceeding. Tr. 9/8/14 at
1084–85 (Erdem).33 Indeed, no expert
attempted to utilize a Shapley
methodology to determine relative
market value of the SDC and IPG
programs.
Dr. Erdem did acknowledge, however,
that, as an alternative, a CSO could
utilize the general principles of a
Shapley valuation to rank ordinally the
shows available for retransmission in a
hypothetical market, and thus create
heuristic Shapley values. 9/8/14 Tr. at
1100–01 (Erdem). Such a ranking by
CSOs in the present case could have
served as a basis for benchmarking the
‘‘relative marketplace values.’’ However,
neither of the parties proffered a witness
who had experience in creating a roster
of television programs.
Thus, the Judges have no evidence or
testimony by which to establish the
relative marketplace values of the SDC
and IPG programs in the optimal
theoretical manner or in a manner that
uses ‘‘Shapley-approximate’’ values.
This evidentiary constraint places the
Judges in a ‘‘second best’’ situation. In
that situation, it is not necessarily
optimal to attempt to satisfy other
efficient conditions, because to do so
would further worsen the already suboptimal situation. See R.G. Lipsey and
K. Lancaster, The General Theory of
Second Best, 24 Rev. Econ. Stud. 11
(1956). Colloquially stated, the theory of
the second best may generally be
defined as ‘‘not letting the perfect be the
enemy of the good.’’ When the parties
have not proffered evidence or
testimony to permit Shapley-type
valuations, it would not be efficient also
to reject valuations based
predominantly on viewing data.
33 Professor Watt recognized this practical
problem. See Watt, supra note 27, at 27 (‘‘The
Shapley model provides a reasonable working
solution for regulators . . . . However, it does
suffer from a particularly pressing problem—that of
data availability.’’).
To reject viewing-centric valuations
would require the Judges instead either
to adopt a less probative or seriously
deficient methodology,34 or figuratively
to throw up their hands and refuse to
make any allocation or distribution.35
The Judges will not compound the
problem of the absence of the most
theoretically probative evidence by
rejecting the SDC’s viewer-centric
valuations, notwithstanding the
limitations in using those valuations. 36
The Judges’ decision to issue a
determination based on the extant
evidence, rather than to reject all
evidence because it is less than optimal,
is consistent with D.C. Circuit
precedent. Specifically, the D.C. Circuit
has held that, in making distributions
under Section 111 of the Copyright Act,
mathematical precision is not required.
See National Ass’n of Broadcasters v.
Librarian of Congress, 146 F.3d 907,
34 The only other methodology presented in this
proceeding is the IPG Methodology. For the reasons
set forth infra, the Judges have concluded that the
IPG Methodology is seriously deficient and far less
probative than the SDC Methodology. Thus, even if a Shapley-type valuation had not been subject to analysis or consideration by the Judges, they would have given far more weight to the viewer-centric SDC Methodology than the seriously deficient IPG Methodology.
35 The Judges had considered whether they could
decline to make any distribution determination in
light of the imperfections of the parties’ evidence,
and asked counsel for the parties to provide
guidance as to that alternative. Cf. Final
Determination, 1993–1997 Cable Proceedings
(Phase II), 66 FR 66433, 66454 (Dec. 26, 2001) (the
Librarian accepted the Register’s recommendation
to reject a CARP Report distributing royalties
because ‘‘the record . . . is insufficient on which
to base a distribution determination.’’). Both
counsel in the present proceeding urged the Judges
not to render a determination that declined to
distribute the royalties if a determination that made
an allocation could be based upon adequate
evidence and would not be arbitrary or capricious.
9/8/14 Tr. at 1172–75 (counsel for the SDC);
9/8/14 Tr. at 1176–79 (counsel for IPG). The Judges
are confident that this Determination satisfies those
standards.
36 The limitations might inure to IPG’s benefit. As
Dr. Erdem explained, when there is an overlap in
viewership between programs, a purely viewership-based valuation, such as that proffered by the SDC,
might understate the relative value of programs
with higher viewership (i.e., the SDC programs) and
overstate the IPG distribution percentage compared
to a Shapley valuation. 9/8/14 Tr. at 1082–83
(Erdem). The Judges also note that in the
hypothetical market, several CSOs might be
competing for the right to retransmit programs.
Thus, to use a prior example, if a CSO has
purchased a license to retransmit I Dream of
Jeannie, rather than Bewitched, because the former
has more viewers and its viewers overlap
significantly with the latter’s viewers, a competing
retransmitter might then find the total viewership
for Bewitched so valuable (given that the
retransmission rights to I Dream of Jeannie were no
longer available) that Bewitched is that competing
retransmitter's first choice even under a Shapley-type valuation. Therefore, in a competitive market,
absolute viewership would be particularly
probative of program value.
929, 932 (D.C. Cir. 1998); Nat’l Cable
Television Ass’n v. Copyright Royalty
Tribunal, 724 F.2d 176, 187 (D.C. Cir.
1983). Rather, the Judges may render a
determination premised on ‘‘the only’’
evidence presented by the parties,
notwithstanding that ‘‘the character of
the evidence presented’’ may fall short
of more precise evidence that the parties
did not or could not present. See Nat’l
Cable Television, 724 F.2d at 187.
Applying a viewership-based model
of valuation in deciding distribution
allocations also is consistent with
Library precedent. Specifically, in an
analogous context in a Phase I
proceeding, the Librarian held that a
measure of ‘‘relative market value’’
could be made by reliance on
viewership information when a more
optimal valuation tool was not
available. Distribution of 1998 and 1999
Cable Royalty Funds, Docket No. 2001–
8 CARP CD 98–99, 69 FR 3606, 3614
(January 26, 2004) (noting that survey
evidence may be superior to viewing
evidence but, in the absence of that
superior evidence, viewing information
can properly be relied upon by the
factfinder in a distribution proceeding).
IPG’s own witness acknowledges the
importance of viewership data generally
in assessing the value of programming.
In her oral testimony, Dr. Robinson
conceded that viewership is an
important metric in the determination of
relative market value. 9/2/14 Tr. at 175;
9/4/14 Tr. at 784. (Robinson).
Additionally, Dr. Robinson
acknowledged that viewership is
important to a CSO in order to retain
subscribers, 9/4/14 Tr. at 777–78
(Robinson), confirming the common
sense idea that subscribers would not
continue to subscribe if they did not
watch the offered programming.
The Judges are also confident that,
generally, Nielsen-derived viewership
data presents a useful measurement of
actual viewership. They base this
conclusion on, among other things, the
fact that the television industry relies on
Nielsen data for a wide range of
business decisions. The SDC’s expert
industry witness, Mr. Sanders, testified
that those in the television industry
consider viewership data, as compiled
by Nielsen, to be the best and most
comprehensive measure of viewership.
9/3/14 Tr. at 480–81 (Sanders). Mr.
Sanders acknowledged that the Nielsen
Data are not perfect, but that their status
as the best and most comprehensive
measure of viewership has caused the
television industry to utilize Nielsen
data as a ‘‘convention’’ for ‘‘economic
decision makers.’’ Id. IPG did not
present any evidence to rebut either of
these points.
If the Judges were to discount the
Nielsen Data in this proceeding simply
on the basis that Nielsen data are
imperfect, the Judges would in essence
be substituting their own opinion of the
Nielsen yardstick for the collective
opinion of the ‘‘economic decision
makers’’ in the market. The Judges will
not engage in such substitution; it is
their job to develop a hypothetical
market by eliminating the impact of the
compulsory licensing regime—but
otherwise to hew as closely as is
reasonably appropriate to the conduct,
performance, customs and standards of
the actual market.
Despite the Judges’ conclusion that
viewership is a type of metric that the
Judges may consider, the Judges must
consider whether the particular
viewership analysis undertaken by the
SDC contains imperfections, as noted by
IPG, or otherwise. See, e.g., 1987
Devotional Determination, 55 FR at
5650; 1986 Determination, 54 FR at
16153–54 (noting that viewing
measurements might not be perfect and
must be appropriately adjusted if
claimants are able to prove that their
programs have not been measured
properly or may be significantly
undermeasured). Accordingly, the
Judges must analyze the SDC’s
particular viewership evidence and
address the issues raised by IPG in that
regard.
b. The Evidentiary Foundation for the
SDC Methodology
(1) ‘‘Replication’’ and ‘‘Testing’’ of the
SDC’s HHVH Report
The SDC’s viewership evidence
consisted largely of the HHVH Report
presented by SDC’s witness, Mr. Whitt.
IPG asserts that the SDC did not provide
sufficient underlying data to allow IPG’s
expert, Dr. Robinson, to test the
accuracy of the SDC’s HHVH Report for
1999. 9/4/14 Tr. at 755–56, 765–68
(Robinson). However, the Judges
disagree with IPG’s assertion, based
upon Dr. Robinson’s own testimony.
Specifically, Dr. Robinson testified that
she indeed ‘‘merged the underlying data
and ran the search terms for devotional
programming [and] reached
substantially the same results [as the
SDC] in all material respects.’’ Id. at
850–61 (Robinson). In her prior
testimony on IPG’s Motion to Strike, Dr.
Robinson had presaged her subsequent
successful replication of the HHVH
Report by admitting that she was able to
merge the Nielsen Data and the TMS
Data, run Mr. Whitt’s search terms and
test the accuracy of his results. 4/8/14
Tr. 68–69 (Robinson); Order Denying
Motion to Strike at 6. Based on Dr.
Robinson’s testimony, the Judges
conclude that the HHVH Report was
replicable and that the results were
capable of being tested. As a result, the
Judges conclude that the report should
carry at least some weight in assessing
the relative market value of the SDC and
IPG programs.
(2) Issues Regarding the Kessler Sample
IPG also criticizes the HHVH Report
because the SDC (1) did not produce a
witness with ‘‘firsthand knowledge of
the method or basis for the station
sample selection’’ used to create the
Kessler sampling of stations, (2)
presented no evidence directly
establishing that Ms. Kessler selected
the stations appearing in the Nielsen
Data, and (3) presented ‘‘[n]o
information or data regarding the station
sampling process.’’ See IPG PFF at 26–
29.
There is some validity to IPG’s
criticisms. The SDC did not call Ms.
Kessler as a witness to explain how she
selected her 1999 sample of stations.
Further, Mr. Whitt acknowledged that
he had not participated in the selection
of the Kessler Sample of stations, so he
had no knowledge of the method by
which those stations were selected. 4/8/
14 Tr. at 112 (Whitt). The extent of Mr.
Whitt’s knowledge in this regard was
limited to his recollection that ‘‘the
MPAA conducted a detailed study of
what stations to select[,] . . . and then
I was given a list of those stations[,] and
then that’s what I used to combine the
two files. . . . So, all the Nielsen
stations should have represented the
complete list of the Kessler stations.’’
4/8/14 Tr. at 113 (Whitt); 9/3/14 Tr. at
444 (Whitt).37
Further, the SDC’s expert witness, Mr.
Sanders, admitted that the Kessler
Sample and, derivatively, the HHVH
Report and his own report are subject to
valid criticism because the Kessler
Sample—upon which both reports
rely—was created by a non-random
selection of stations. 9/3/14 Tr. at 496
(Sanders).38
IPG properly takes the SDC to task for
relying on only a small portion (72 of
800, or 9%) of distantly retransmitted
37 Although the SDC provided an example of such
a Kessler Sample to IPG in discovery (from the
Phase I 1999 proceeding), the SDC did not represent
that this earlier sample constituted the sample used
to select the stations identified in the Nielsen Data.
See 4/8/14 Tr. at 229 (SDC counsel ‘‘stipulat[ing]’’
that ‘‘Ms. Kessler’s list from Phase I is not the list
of stations that was ordered from Nielsen’’).
38 In the 2000–03 proceeding, Ms. Kessler
testified that her sampling was not (and was not
intended to be) a random sample. See 6/3/13 Tr. at
122–25 (Kessler).
stations.39 See 9/4/14 Tr. at 626
(Sanders) (confirming that Mr. Whitt’s
analysis covered 72 stations.) 40 As IPG
noted, in 1986 the CRT found that a study of 18.8% of 622 total stations was not sufficiently large to be ''perfectly projected to other stations. . . .'' IPG
PFF at 50 (emphasis added) (citing 1983
Cable Royalty Distribution Proceeding,
Docket No. CRT 84–1 83CD, 51 FR
12792, 12794 (April 15, 1986)).41
The Judges acknowledge that the
Kessler Sample was non-random.42 That
being said, the manner in which the
sample was chosen will influence the
weight the Judges place on the station
sample, and by extension, on the HHVH
Report. For example, the presence of a
clear bias either in favor of or against a
particular participant in the current
proceeding would render the report all
but useless. Therefore, for the Judges to
give any weight to the SDC
Methodology, the Judges must analyze
the origination of and the purposes for
creating the Kessler Sample.
The SDC argues that the Judges can
and should rely on the Kessler Sample
notwithstanding the aforementioned
39 All things being equal, the larger the sample
size, the more likely it is that the sample will be
representative of the population the sample
purports to measure. Although sampling 100% of
the population is ideal, it is typically not cost
effective or practicable to sample an entire
population. The smaller the sample size, however,
the greater the margin of error. See H. Zeisel, . . .
And Then There Were None: The Diminution of the
Federal Jury, 38 U. Chi. L. Rev. 710, 718 (1971).
40 IPG also asserts that there is an inconsistency
between the number of stations (123) in the Kessler
Sample and the number of stations (72) in the
sample analyzed by Mr. Whitt. See IPG Proposed
Findings of Fact at 28. That claimed inconsistency
is a red herring, however, because the sample that
IPG claims may be the ‘‘Kessler Sample’’ was a
Phase I sample she had selected—one that the SDC
acknowledged was not the sample from which Mr.
Whitt identified stations with Devotional
programming in this Phase II proceeding. See, e.g.,
4/8/14 Tr. at 113–15, 229.
41 Dr. Robinson also pointed out that the Kessler
Sample’s apparent exclusion of Canadian stations
suggests that the sample was unrepresentative. By
comparison, Dr. Robinson’s own station selection
contains only a single Canadian station on which
programs claimed in this proceeding were
broadcast; that station broadcasted both an IPG
program and an SDC program. 9/8/14 Tr. 1092
(Erdem). The Judges find no persuasive evidence in
the record that the exclusion of Canadian stations
from the HHVH Report materially affects the results
as to either side in this case. Therefore, the Judges
conclude that the probative value of the HHVH
Report is not diminished by the absence of
Canadian stations. Accord Distribution of the 2000–
03 Cable Royalty Funds, 78 FR at 64998 (‘‘The
Judges conclude that, while the exclusion of the
Canadian stations was an error, it did not have a
significant effect on the relative shares computed by
MPAA’’).
42 Given this analysis, it is perhaps inaccurate to
continue referring to the SDC’s sample of stations
as the ‘‘Kessler Sample.’’ However, because the
parties have identified the sample in this manner,
for ease of reference the Judges have continued with
that short-hand identifier in this Determination.
defects. Mr. Sanders opined that the
non-random nature of the Kessler
sample, and its uncertain genesis, do
not pose a problem because:
• The Kessler Sample ‘‘employs
viewing results from the most distantly
retransmitted broadcast stations as
reported by Form 3 cable systems.’’ 43
• Although the Kessler Sample is
non-random, it is ‘‘close to a census,’’
because ‘‘the most important and
relevant titles [of] the principal
programs of all SDC- and IPG-represented claimants appear in the
survey.’’ (Emphasis added).44
• The Kessler Sample comprises
many of the regions identified by
Nielsen as ‘‘Designated Market Areas
(DMAs),’’ 45 and the first 10 stations in
the Kessler Sample covered
approximately 30–40% of the
population of the country, thereby
covering some of the largest stations.
• There is no evidence ‘‘to suggest
that the sample was chosen to benefit or
prejudice either party in this proceeding
[and] . . . it is neutral on that score.’’
Sanders WDT at 2; Ex. SDC–D–002 at 7;
9/4/14 Tr. at 627 (Sanders). Mr. Whitt
likewise defended the use of the Kessler
Sample, observing that ‘‘it appeared that
the stations were national,
geographically scattered around the
country[, a]nd they included several
large stations, but also a few small
stations.’’ 9/3/14 Tr. at 420 (Whitt).
Under cross-examination, however,
Mr. Sanders did acknowledge that many
large metropolitan areas were not
represented in the Kessler Sample of
stations. He noted the ‘‘possibility’’ that
there was no measurable viewing of the
SDC and IPG programs in those areas or
that the programs were not
retransmitted in those areas. 9/4/14 Tr.
at 631–33 (Sanders). Of course, those
speculative ‘‘possibilities’’ are precisely
the sort of concerns that a truly random
sample would address objectively. The
non-random nature of the Kessler
Sample leaves unanswered the question
43 ‘‘Form 3 cable systems’’ are cable systems
whose semiannual gross receipts for secondary
transmissions total $527,600 or more, and are thus
required to file statements of account on Copyright
Office form SA3. See 37 CFR 201.17(d)(2)(ii).
44 When questioned by the Judges, Mr. Sanders
acknowledged that he would have no basis for also
asserting that the ‘‘Kessler Sample’’ approximates a
‘‘census’’ of all retransmitted stations or of all
broadcasts of IPG and SDC programs. 9/4/14 Tr. at
637–39 (Sanders). Moreover, IPG takes the SDC to
task for relying on a sample of only 123 stations
(about 17.5%) of the approximately 700 stations
distantly retransmitted by Form 3 cable systems.
45 The term ‘‘DMA’’ is used by Nielsen to identify
an exclusive geographic area of counties in which
the home market television stations hold a
dominance of total hours viewed. See
www.nielsenmedia.com/glossary/terms/D/ (last
visited December 3, 2014).
of why those metropolitan areas were
not represented.
Mr. Sanders concluded that, on
balance, he could nonetheless give some
weight to this non-random selection of
stations. 9/3/14 Tr. at 498–500. It is
noteworthy that IPG’s expert, Dr.
Robinson, likewise acknowledged that
even a non-random sample can be
representative and therefore probative of
facts concerning an entire population. 9/
3/14 Tr. at 234–35 (Robinson). In fact,
Dr. Robinson testified that the results of
her own non-random sample were
representative of the population she was
measuring (subscriber fees paid to
CSOs) because, ‘‘as a practical matter
. . . in terms of understanding the
population that we care about, if we
have the majority of the data, then at
least we know the truth for the majority
of the data. . . .’’ 9/2/14 Tr. at 156
(Robinson).
Non-random (a.k.a. ‘‘nonprobability’’)
sampling, although inferior to random
sampling, can be of some limited use.
As explained in a treatise on the subject:
[N]onprobability samples cannot depend
upon the rationale of probability theory. At
least with a probabilistic sample, you know
the odds or probability that you have
represented the population well. You can
estimate the confidence intervals for the
statistic. With nonprobability samples, you
may or may not represent the population
well . . . . In general, researchers prefer
probabilistic or random sampling methods
over nonprobabilistic ones, and consider
them to be more accurate and rigorous.
However, in some circumstances in applied
social research there may be circumstances
where it is not feasible, practical or
theoretically sensible to do random sampling.
W. Trochim and J. Donnelly, Research
Methods, The Concise Knowledge Base
at 41 (2005) (emphasis added).
In the present case, ‘‘feasibility’’ was
certainly a constraint because, as Mr.
Sanders explained, it was cost-prohibitive for the SDC to invest
additional money into the development
of evidence. The costliness of
undertaking random sampling can
render an analysis unfeasible. As one
survey organization has noted, ‘‘costs
are important and must be considered in
a practical sense’’ and therefore a
‘‘broader framework’’ is needed to
assess the results of nonrandom
sampling in terms of ‘‘fitness for
purpose.’’ Rep. of the Am. Ass’n of Pub.
Opinion Res. Task Force on
NonProbability Sampling at 96 (2013).
To summarize, had the HHVH Report
been based on a random sample of
stations, it would have been more
probative. Nevertheless, the Kessler
Sample was not prepared in
anticipation of the current proceeding
and contained no discernible bias either
in favor of or against the programs that
are at issue in this proceeding. Cost is
a reasonable factor for the parties to
consider in preparing evidence for a
proceeding and, given the relatively
modest amount of royalties involved in
the current proceeding, it likely would
not have been cost effective for the SDC
to conduct an entirely new study based
on a random sample of stations, even
assuming that one could have been
prepared so long after the royalty year
at issue. Therefore, the Judges find that
the Kessler Sample is sufficiently robust
to allow the Judges to afford some
weight to the SDC Methodology while
remaining mindful of its deficiencies.
(3) Imperfections in the Nielsen Data
Mr. Sanders acknowledged that the
particular Nielsen Data utilized to
prepare the 1999 HHVH Report was not
as granular as he would have preferred.
Specifically, Mr. Sanders explained that
the 1999 HHVH Report was imperfect
because it was based upon a ‘‘very, very
thin slice’’ of the broader broadcasting
or programming field. 9/3/14 Tr. at 519.
When such an extremely narrow ‘‘slice’’
of the market is the subject of the
analysis, according to Mr. Sanders, it is
preferable to obtain a ‘‘niche’’ Nielsen
report that focuses on the narrow market
that is the subject of the study. 9/3/14
Tr. at 514–15 (Sanders). In this
particular case, Mr. Sanders
acknowledged therefore that, because
‘‘it is distant signal viewing that is the
actual focus of the project, [this] would
be an example where a customized
report would be done.’’ 9/3/14 Tr. at 485
(Sanders) (emphasis added).
Furthermore, the SDC did not disclose
the margins of error or the levels of
confidence associated with the data
underlying the HHVH Report. Without
this information, the Judges cannot
assess the reliability of any statistical
sample. The Judges infer that, had the
SDC possessed such information, or if
such information underscored the
reliability of the Nielsen data, the SDC
would have produced it. Further, in the
2000–03 proceeding, Paul Lindstrom,
one of the two Nielsen witnesses whose
prior testimony the SDC designated for
consideration in this proceeding,
acknowledged that the samples used by Nielsen to measure distant retransmissions are relatively small, and therefore do not measure viewership as accurately as a larger sample. Accordingly, Mr. Lindstrom
acknowledged that ‘‘[t]he relative error
on any given quarter-hour for any given
station . . . would be very high,’’ 6/3/
13 Tr. at 303 (Lindstrom). Despite these
shortcomings, the SDC relied upon Mr.
Whitt’s HHVH Report, in lieu of
investing in a ‘‘niche’’ Nielsen report, 9/
3/14 Tr. at 514 (Sanders),46 and without
providing information regarding the
levels of confidence and margins of
error associated with the HHVH Report
upon which it has relied.
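For context only, the margin of error referenced here is conventionally computed from the sample size underlying an estimated proportion. The following minimal sketch assumes, purely for illustration, a simple random sample; because Nielsen diary samples are not simple random samples, the true uncertainty would ordinarily be larger, and the figures used below are hypothetical.

# Minimal sketch: approximate 95% margin of error for an estimated
# viewing proportion, assuming (for illustration only) simple random sampling.
import math

def margin_of_error(p_hat, n, z=1.96):
    # Half-width of an approximate 95% confidence interval for a proportion.
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical numbers: a 2% viewing share estimated from 500 in-sample homes.
print(round(margin_of_error(0.02, 500), 4))   # about 0.0123, i.e., +/- 1.2 points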
In an attempt to minimize the impact
of the thinness of this slice of data, Mr.
Sanders shifted the focus,
distinguishing ‘‘fully informed’’ market
participants from ‘‘all-knowing’’
participants. In his opinion, willing
sellers and willing buyers in the
marketplace for television program
copyright licenses would consider
themselves ‘‘fully informed’’ if they had
access merely to the information upon
which he relied, even if they lacked the
more granular data of a special ‘‘niche’’
Nielsen report of distant viewing of the
devotional programming at issue. 9/3/14
Tr. 474–75 (Sanders). As Mr. Sanders
added, ‘‘fully informed’’ in the context
of the licensing of television programs
simply means having adequate knowledge of
the relevant facts and circumstances to the
issue or the proposed transaction at
hand. . . . I don’t think in any engagement
I’ve ever been involved in . . . we have had
all the information we would like to have.
Typically, a valuation exercise is
endeavoring to reach a conclusion based
upon the information that is available.
9/3/14 Tr. 474–75 (Sanders).
Additionally, in economic terms, Mr.
Sanders’s testimony is consistent with
the concept of ‘‘bounded rationality.’’
Willing buyers and willing sellers in
any market 47 are unlikely to have
complete information regarding all of
the variables that could contribute to the
setting of a market price. It would be
humanly impossible to calculate all the
relevant economic variables, and it
would be economically inefficient to
expend the time sufficient to make such
calculations even if they were possible.
Thus, economists recognize that willing
buyers and willing sellers are bounded
by the ‘‘external constraint[ ] . . . [of]
the cost of searching for information in
46 Mr. Sanders had informed the SDC that any
attempt to obtain superior data would have been
cost-prohibitive, i.e., subjecting the SDC to
‘‘hundreds of thousands of dollars of additional
costs,’’ for an amount at stake of ‘‘somewhere north
of a million dollars,’’ and that the SDC agreed not
to invest additional sums to acquire more data. 9/
3/14 Tr. at 469–72 (Sanders). He also speculated
that it might have been impossible to acquire better
data, but the anticipated expense apparently
foreclosed any attempts to learn if superior data
could be acquired or developed. Id. In any event,
Mr. Sanders conceded on cross-examination that he
never attempted to contact anyone at MPAA (or
apparently anyone else) to determine if better data
could be acquired. 9/3/14 Tr. at 591–92 (Sanders).
47 The Judges note that the economic experts for
willing buyers and willing sellers likewise are
subject to inevitable constraints.
the world . . . [and they] attempt to
make optimal choices given the
demands of the world leading to the
notion of optimization under
constraints.’’ G. Gigerenzer, Is the Mind
Irrational or Ecologically Rational? in F.
Parisi & V. Smith, The Law and
Economics of Irrational Behavior at 38
(2005). Thus, ‘‘[t]he focus on the
constraints in the world has allowed
economists to equate bounded
rationality with optimization under
constraints.’’ Id. at 40.
Finally, IPG leveled a broad criticism
of the SDC Methodology, asserting that
it is ‘‘the product of several degrees of
projection.’’ Robinson AWDT at 7 n.10.
That is, the SDC derived its royalty
distribution by analyzing the viewership
of a few sampled individual airings
projected over the population of a
Nielsen Designated Market Area during
‘‘sweeps’’ weeks, and then projected
over the entire year, for only a relatively
small (nonrandom) set of stations
projected to represent all retransmitted
stations. Id. The Judges recognize the
validity of this criticism. However, the
nature of viewership-type estimates is to
engage in such sampling and
extrapolation. Thus, the SDC
Methodology may be compromised, but
it is not subject to outright
disqualification.
(4) The Incidence of Zero Viewing
IPG criticizes the SDC Methodology
because it is based on what IPG
characterizes as a ‘‘disproportionately
large number of '0' entries [i.e., zero
viewing sampling points] in the Nielsen
data for distant viewing.’’ IPG PFF at 38.
More particularly, IPG notes that the
Nielsen Data include a recorded ‘‘0’’ for
72% of all quarter-hours of broadcasts
measured by the 1999 Nielsen Data, and
recorded a ''0'' for 91.2% of all quarter-hours of devotional broadcasts. Id.
Zero viewing sampling points
represent the quarter-hour sampling
points at which no sample households
recorded that they were viewing that
station. See 2000–03 Determination, 78
FR at 64995. IPG criticized the
incidence of zero viewing sampling
points in the 2000–03 proceeding, and
the Judges addressed the issue in their
Determination in that proceeding.
[T]he Judges agree with Mr. Lindstrom that
these ‘‘zero viewing’’ sampling points can be
considered important elements of
information, rather than defects in the
process. As Mr. Lindstrom testified, when
doing sampling of counts within a
population, it is not unusual for a large
number of zeros to be recorded, 6/4/13 Tr. at
391–93, 410 (Lindstrom), and those ‘‘zero
viewing’’ sample points must be aggregated
with the non-zero viewing points. 6/3/13 Tr.
at 323 (Lindstrom).
. . . .
[A]s Mr. Lindstrom testified, distantly
retransmitted stations typically have very
small levels of viewership in a television
market fragmented (even in the 2000–2003
period) among a plethora of available
stations. 6/4/13 Tr. at 393 (Lindstrom). Thus,
it would be expected, not anomalous, for
Nielsen to record some zero viewing for any
given quarter-hour period within the diary
sampling (sweeps) period.
Id.48
In the present proceeding, Mr.
Sanders offered the following practical
reasons why zero viewing would be
recorded for these retransmitted
programs: (1) There is much less
viewing of out-of-market signals, (2) the
lion’s share of viewing in any market is
going to be viewing of the local stations,
(3) stations within a market tend to have
a long legacy and a history in the
market, (4) stations within a market
have preferred dial positions, and (5)
local television stations devote
incredible resources to promoting
themselves. 9/4/14 Tr. at 681–83
(Sanders). This testimony was not
rebutted by any IPG witness.
Despite these seemingly reasonable
and credible explanations of ‘‘zero
viewing’’ sampling points, the probative
force of these ‘‘zero viewing’’ data
points, as a general matter, is not
without doubt. As the Judges also noted
in the 2000–03 Determination regarding
Nielsen sampling:
The sample size is not sufficient to
estimate low levels of viewership as
accurately as a larger sample. Mr. Lindstrom
acknowledged that ‘‘[t]he relative error on
any given quarter-hour for any given station
. . . would be very high,’’ 6/3/13 Tr. at 303
(Lindstrom).
Furthermore, Mr. Lindstrom acknowledged
that he had not produced the margins of error
or the levels of confidence associated with
the Nielsen viewership data, despite the fact
that such information could be produced. 6/
3/13 Tr. at 391–93, 410 (Lindstrom). Without
this information, the reliability of any
statistical sample cannot be assessed. (The
Judges infer that, had such information
underscored the reliability of the Nielsen
data, it would have been produced by
MPAA.)
78 FR at 64995. The Judges note that the
evidence in the present proceeding does
not resolve these issues regarding
sample size, margins of error and levels
of confidence.
Nonetheless, the Judges concluded in
the 2000–03 Determination that
‘‘viewership as measured after the airing
of the retransmitted programs is a
reasonable, though imperfect proxy for
the viewership-based value of those
48 The SDC designated Mr. Lindstrom’s testimony
in the 2000–03 cable distribution proceeding for
consideration in this present proceeding.
programs.’’ Id. at 64995. IPG has not
provided record evidence or testimony
in this proceeding that would persuade
the Judges to depart from the conclusion
reached in the 2000–03 Determination.
In light of the reasonable and credible
explanations offered by the SDC for the
‘‘zero viewing’’ sampling points, and the
absence of any persuasive evidence or
testimony to the contrary, the Judges
again find and conclude that the
incidence of such zero viewing points
does not invalidate a viewership-based
valuation study such as utilized in the
SDC Methodology.
IPG did introduce in this proceeding
evidence that it did not introduce in the
2000–03 proceeding regarding the
incidence of ‘‘zero viewing’’ sample
points for individual programs (rather
than for the aggregate of quarter-hours).
Compare 2000–03 Determination, 78 FR
at 64995 (finding that IPG had failed to
introduce evidence that the Nielsen data
revealed particular programs with ‘‘zero
viewing’’) with Ex. IPG–R–011
(analyzing zero viewing by title). As the
Judges noted in the 2000–03
Determination, the distinction between
‘‘zero viewing’’ overall and ‘‘zero
viewing’’ for individual programs or
titles is important because ‘‘under the
hypothetical market construct, royalties
would accrue on a program-by-program
basis to individual copyright owners,
not to the distantly retransmitted
stations.’’ 2000–03 Determination, 78 FR
at 64995. However, an analysis of the
evidence upon which IPG relied does
not support its assertion that ‘‘zero
viewing’’ for individual programs was
particularly pervasive among the SDC or
IPG programs, or that the incidences of
‘‘zero-viewing’’ that did occur were
disproportionately harmful to IPG.
First, the incidence of ‘‘zero viewing’’
for individual, retransmitted SDC and
IPG programs was no more than 15.8%,
according to IPG’s own economics
expert witness, Dr. Robinson. See Ex.
IPG–R–011. This 15.8% figure
represented only three of the 19
programs believed to be at issue in this
proceeding or, alternatively stated, 16 of
the 19 programs (84.2%) did not have
‘‘zero viewing’’ throughout the
sample.49
49 Before submitting her final recommendation, Dr. Robinson amended her program count to conform to the Judges’ rulings and to capture data that she (apparently inadvertently) omitted in her first analysis. See notes 6, 7, supra, and accompanying text; note 57, infra, and accompanying text.
Second, of the three programs with ‘‘zero viewing’’ throughout the sample, two were SDC programs (‘‘700 Club Super Sunday’’ and ‘‘James Kennedy’’), whereas only one of the three programs (‘‘Creflo A. Dollar Jr. Weekly’’) was an IPG program. See Ex. IPG–R–013.
Further, the IPG program was
retransmitted only three times and
represented less than one-tenth of one
percent (.097%) of both the total
quarter-hours and the number of
retransmitted broadcasts of IPG
programs at issue in this proceeding. Id.
Similarly, the two SDC retransmitted
programs with ‘‘zero viewing’’
throughout the sample represented a de
minimis percent of the SDC’s total
devotional programming at issue in this
proceeding (for ‘‘700 Club Super
Sunday,’’ four retransmitted broadcasts,
representing less than .25% of the total
SDC quarter-hours and programs
retransmitted and, for ‘‘James
Kennedy,’’ approximately 1% of total
SDC quarter-hours and programs
retransmitted). Id. Moreover, the
copyrights for all three of the above-identified programs with supposed zero
viewing throughout the sample were
owned by respective claimants who also
owned the copyrights for programs with
virtually identical or similar names, viz.,
‘‘Creflo A. Dollar,’’ ‘‘700 Club,’’ and
‘‘James Kennedy’’, none of which had
zero viewing sample points for all
retransmitted broadcasts of their
programs. Id. Based on these facts, Dr.
Robinson acknowledged at the hearing
that, in her view, she ‘‘would not say
that for the IPG and the SDC titles that
we have any that we have 100 percent
zero viewing.’’ 9/4/14 Tr. at 827–28
(Robinson) (emphasis added).50
For all of the foregoing reasons, the
Judges find and conclude that there was
not persuasive or sufficient evidence of
‘‘zero viewing’’ for individual SDC and
IPG programs to invalidate any reliance
on the SDC Methodology.51
3. Viewership as an Ex Ante or Ex Post
Measure of Value
IPG asserts that viewership and
ratings cannot form a measure of
relative market value because the extent
of viewership and the ratings measuring
viewership are not available until after
the programs have been retransmitted.
Thus, IPG argues, the hypothetical
willing buyer and willing seller could
not utilize this viewership data ex ante
to negotiate a license. Galaz AWDT at 9;
Ex. IPG–D–001 at 9.
Although IPG’s premise is literally
correct, it does not preclude the use of
such viewership data to estimate the
value of the hypothetical licenses. As
Mr. Sanders testified, this problem can
be overcome—and indeed is overcome
in the industry—by the use of a ‘‘make
good’’ provision in the contracts
between program copyright owners and
licensees. That is, program copyright
licenses in the television industry are
established based upon an ex ante
prediction of viewership as measured by
ratings. If the ex post ratings reveal that
the program’s measured viewership was
less than predicted and set forth in the
license agreement, the licensor must
provide compensatory value to the
licensee. 9/4/14 Tr. at 685–95
(Sanders).52 In this manner, such a
rational measure of viewership can also
be expressly incorporated into the
bargain in the hypothetical market
constructed by the Judges.53
The Judges also agree with Mr.
Sanders that the programs within the
Devotional Claimants category on the
surface appear to be more homogeneous
inter se than they are in comparison
with programs in either the Sports
Programming or the Program Suppliers’
claimant categories. Sanders WDT at 6.
This relative homogeneity suggests that
a rational CSO would not be as
concerned with whether different
programs would attract different
audience segments (compared with
more heterogeneous programming) and
therefore the CSO would rely to a
greater extent on absolute viewership
levels.
For these reasons, the record
testimony supports the conclusion that
viewership data is a useful metric in
determining relative market value, in the absence of optimal data that would permit a precise or an estimated Shapley value.54 Accordingly, the Judges reject IPG’s argument that household viewing cannot constitute a measure of value in this proceeding.
50 IPG attempts to deflect attention from the paucity of the relevant evidence regarding the programs at issue in this proceeding by noting a higher incidence of ‘‘zero viewing’’ for programs in other categories, such as Alfred Hitchcock Presents and Today’s Homeowner. Ex. IPG–R–012; IPG PFF at 44. However, data sample points in other categories of programming are not relevant because they do not address the issues relating to the Devotional category and, further, there is no evidence to place such data in an appropriate context.
51 IPG notes that the SDC could have improved its analysis to attempt to attribute value to the distant ‘‘zero viewing’’ data points, as performed by experts in prior proceedings. Although such improvements might have permitted the Judges to give more weight to the HHVH Report, the absence of such improvements did not invalidate the HHVH Report.
52 Dr. Robinson was unfamiliar with the industry’s use of a ‘‘make good’’ provision as a tool to account for viewership levels. 9/3/14 Tr. at 270 (Robinson).
53 The Judges anticipated the existence of such ‘‘post-viewing adjustments’’ in their 2013 determination. See 2000–03 Determination, supra, at 64995, n.48 (‘‘Since it is a hypothetical market we are constructing, it also would not be unreasonable to hypothesize that the CSO and the Copyright Owner might negotiate a license that would contain a provision adjusting the value of the license, post-viewing, to reflect actual viewership. . . . In that regard, the Judges refer to one of the preconditions for relative market value—reasonable knowledge of relevant facts. Actual viewership would be a ‘relevant fact’ that could be applied if post-viewing adjustments to the license fees were hypothetically utilized by the bargaining parties.’’).
54 Interestingly, Dr. Erdem explained that, as between two programs with overlapping viewership, the program with higher viewership would have a greater proportionate Shapley value than the less viewed program; the difference would be even greater than the difference between the two programs based strictly on relative viewership. 9/8/14 Tr. at 1082–83 (Erdem). Given the relative homogeneity of devotional programming (compared to the apparent relative heterogeneity between and among other Phase II category programs), viewership overlaps between and among the SDC and IPG programs are likely. Therefore, because the SDC programs had higher overall ratings than IPG programs and because the SDC Methodology is based solely on ratings, the SDC’s percentage distribution (if accurately measured) could in fact understate the SDC percentage and overstate the IPG percentage, compared to percentages based on potential Shapley values. See supra note 36, and accompanying text.
IPG notes, though, that even assuming
arguendo the SDC’s viewership analysis
is probative of value, the SDC’s own
‘‘reasonableness’’ check demonstrates a
significant disparity between the results
derived from the HHVH Report
(81.5%:18.5% in favor of the SDC) and
the results from the ‘‘reasonableness’’
check of local viewing for the SDC and
IPG programs at issue in this proceeding
(71.3%:28.7% in favor of the SDC). The
Judges agree with IPG that this is an
important disparity, suggesting that IPG
may well be entitled to a larger
distribution than indicated by the SDC’s
HHVH Report. Because of the
importance of this point, the Judges
discuss its significance in their analysis
set forth in Part VI, infra, synthesizing
and reconciling the parties’ positions.
B. The IPG Methodology
1. The Details of the IPG Methodology
IPG proffered its distribution
methodology (the IPG Methodology)
through its expert witness, Dr. Laura
Robinson, whom the Judges qualified to
testify as an expert in economics, data
analysis, and valuation. 9/2/14 Tr. at 87
(Robinson).55 Through her application of the IPG Methodology, Dr. Robinson set forth her opinion of the relative market value of the retransmitted broadcasts of the compensable copyrighted program titles represented by IPG and the SDC and estimated the share attributable to both parties. Robinson AWDT at 14, 25; Ex. IPG–D–001 at 14, 25.
55 IPG initially asked the Judges to qualify Dr. Robinson as a testifying expert ‘‘regarding the value of the programming issue in this matter for IPG and for the SDC,’’ or, as alternatively stated by IPG’s counsel, as an expert ‘‘valuing the relative value of these programs to these royalties.’’ 9/2/14 Tr. at 73–74, 80. However, SDC’s counsel objected, and the Judges then qualified Dr. Robinson as an expert in the areas of knowledge listed in the text, supra. IPG’s counsel did not renew his request that Dr. Robinson be qualified as an expert in the areas set forth in this footnote. Even if Dr. Robinson had been qualified as an expert in the areas originally identified by IPG, that would not have made any difference in the Judges’ findings and conclusions in this determination.
Consistent with the conclusions of the
Judges in this and other determinations,
Dr. Robinson identified the ‘‘willing
sellers’’ in the hypothetical market to be
the owners of the copyrights to the
programs subject to retransmission and
the ‘‘willing buyers’’ to be the CSOs that
would acquire the license to retransmit
the program. 9/2/14 Tr. at 92
(Robinson). However, Dr. Robinson
defined the hypothetical marketplace in
a manner different from that of the
Judges in this proceeding and in the
2000–03 Determination. Dr. Robinson
defined the hypothetical marketplace as
equivalent to the actual marketplace in
which the CSO is required to acquire
the retransmitted programs in the same
bundle as created by the station that the
CSO retransmits. See, e.g., 9/4/14 Tr. at
782 (Robinson) (‘‘[I]t is certainly the
case that when a cable system operator
is actually making the decision about
whether or not to retransmit a broadcast,
that comes within their decision
whether or not to retransmit the station,
which is a little bit at odds with this
whole notion of a hypothetical
negotiation over an individual
broadcast. . . . They don’t have the
choice to broadcast a particular
program.’’).56
56 In the hypothetical marketplace the terrestrial stations’ initial bundling of programs does not affect the marginal profit-maximizing decisions of the hypothetical buyers and sellers.
Dr. Robinson identified the following
‘‘obtainable data’’ that she claimed to
comprise ‘‘various indicia of value of
the retransmitted broadcasts’’:
• The length of the retransmitted
broadcasts.
• The time of day of the retransmitted
broadcasts.
• The fees paid by CSOs to retransmit
the stations carrying the broadcasts.
• The number of persons distantly
subscribing to the station broadcasting
the IPG-claimed program.
Robinson AWDT at 17; Ex. IPG–D–001
at 17.
Dr. Robinson relied upon four sets of
data. First, she utilized data from the
Cable Data Corporation (CDC). This data
included information on more than
2,700 cable systems regarding:
• The stations transmitted by each
CSO.
• The distant retransmission fees paid
by each CSO.
• The number of distant subscribers
to each CSO.
For each station distantly
retransmitted by these CSOs, the CDC
data also included:
• The number of CSOs retransmitting
each station.
• The number of distant subscribers
to the CSOs retransmitting the station.
• The average number of distant
subscribers to the CSOs retransmitting
the station.
• The distant retransmission fees paid
by the CSOs to retransmit each station.
• The average distant retransmission
fees paid by the CSOs to retransmit each
station.
Id. at 21.
Second, Dr. Robinson relied on TMS
Data (the same source as that relied
upon by the SDC). The TMS data
provided the following information for
the IPG and the SDC programs
represented in this proceeding:
• The date and time each broadcast
was aired.
• The station call sign.
• The program length in minutes.
• The program type (e.g., Devotional).
• The program title.
Id. at 21–22.
Third, Dr. Robinson relied upon the
following information from Nielsen:
• Data reporting 1997 viewing,
segregable according to time period of
the measured broadcast.
• Reports reflecting the long-run
stability of day-part (time period)
viewing patterns.
Id. at 22.
Applying this data, Dr. Robinson
made several computations and
observations, as summarized in Tables 1
and 2 below:
TABLE 1—DATA ON IPG AND NON-IPG CLAIMED TITLES 1999

                                                                          IPG         SDC
Number of distantly retransmitted broadcasts of claimed titles .......   12,017      6,558
Number of hours of distantly retransmitted broadcasts of claimed
  titles .............................................................    6,010      5,856
Number of quarter-hours of distantly retransmitted broadcasts of
  claimed titles .....................................................   24,040     23,423
TABLE 2—RELATIVE MARKET VALUE

                                                                          IPG         SDC
                                                                       (percent)   (percent)
Hours of claimed distantly retransmitted broadcasts ..................      51          49
Time of day of distantly retransmitted broadcasts ....................      46          54
Fees Paid by CSOs distantly retransmitting devotional broadcasts .....     >50         <50
Number of distant subscribers to CSOs distantly retransmitting
  devotional broadcasts ..............................................      51          49
Dr. Robinson stressed repeatedly that
the Judges should not consider the
above measures of value individually.
Rather, she testified that the Judges
should consider the several approaches
as a whole, with any weakness in one
approach offset by the other approaches
that do not suffer from that weakness.
See, e.g., 9/3/14 Tr. at 243, 326, 329, 403
(Robinson); 9/4/14 Tr. at 775
(Robinson). Dr. Robinson also testified
that this approach was an important
method of analysis because her multiple
valuation methods all tended toward a
similar result—approximately a 50:50
distribution—despite any weaknesses or
limitations in any one method. See 9/2/
14 Tr. at 90 (Robinson) (‘‘In summary,
I looked at four different measures of
value, of the relative value. And the IPG
versus SDC are roughly equal.’’); 9/3/14
Tr. at 245 (Robinson) (‘‘since everything
came out roughly equal, all the
indicators pointed to a roughly 50/50
split.’’). Based upon these calculated
percentages, Dr. Robinson concluded
that the proper allocation of royalties
should be in a range from 54%:46%
favoring the SDC to 51%:49% favoring
IPG. Id. at 25.
With regard to the particular factors
Dr. Robinson applied, she noted that her
first measurement—of total broadcast
time—was essentially identical for both
the IPG and the SDC programs when
measured by quarter-hour segments. 9/
2/14 Tr. at 90–91 (Robinson). Second,
with regard to her ‘‘time of day’’
analysis, Dr. Robinson testified that
‘‘certain times of days are associated
with different amounts of viewership
[a]nd everything else equal, it would be
reasonable to think that higher
viewership might be associated with a
higher value.’’ 9/2/14 Tr. at 93
(Robinson). Dr. Robinson concluded
that this time-of-day measurement, like
the first measurement (total broadcast
time) revealed a ‘‘roughly similar’’ value
measurement for the IPG programs and
the SDC programs. 9/2/14 Tr. at 94
(Robinson).
With regard to the third factor—the
fees paid by the CSOs to distantly
retransmit the broadcasts—Dr. Robinson
found that ‘‘on average, IPG broadcast
quarter hours are shown on stations that
are retransmitted by CSOs who pay
relatively more in distant retransmission
fees than do the CSOs who retransmit
the stations with the SDC broadcasts.’’
Ex. IPG–D–001 at 31. From this metric,
Dr. Robinson concluded ‘‘the IPG
broadcasts have more value than the
[SDC] broadcasts.’’ Id. at 32.
Finally, with regard to her fourth
factor—the number of subscribers to the
cable systems—Dr. Robinson found that
when considering the average number of
subscribers to the cable systems on
which the IPG and the SDC programs
are retransmitted, ‘‘the IPG distantly
retransmitted broadcasts are
retransmitted by CSOs on stations with
approximately 6% more distant
subscribers than [the SDC] distantly
retransmitted broadcasts.’’ Id. at 33.
Based upon this final metric, Dr.
Robinson opined: ‘‘To the extent the
value of the broadcast relates to the
number of distant subscribers to the
CSOs retransmitting the station, this
metric indicates that IPG-distantly-retransmitted broadcasts have more
value than [the SDC]-distantly-retransmitted broadcasts.’’ Id. at 34.
Dr. Robinson corrected her analyses
before and during the hearing to reflect
changes in the program titles that she
could allocate to IPG and to the SDC.
First, she removed from her analyses the
several IPG programs that the Judges
had concluded at the preliminary claims
hearing were not properly subject to
representation by IPG. 9/2/14 Tr. at 146
(Robinson). Second, Dr. Robinson added
several program titles that were properly
subject to representation by the SDC but
had not been included in her original
analyses. 9/2/14 Tr. 181–84 (Robinson).
See also 9/8/14 Tr. at 1016 (Robinson)
(confirming that she made these
program inclusions and exclusions in
her amended analysis). With these
adjustments, Dr. Robinson modified her
conclusions as set forth on Table 3
below:
TABLE 3

                                                                   IPG      Non-IPG      Total
                                                                (percent)  (percent)   (percent)
Hours of claimed distantly retransmitted broadcasts .........       48         52         100
Time of day of distantly retransmitted broadcasts ...........       46         54         100
Fees paid by CSOs distantly retransmitting devotional
  broadcasts .................................................      41         59         100
Number of distant subscribers to CSOs distantly
  retransmitting devotional broadcasts .......................      52         48         100

Ex. IPG–D–013.
Dr. Robinson acknowledged that the
data available to her was incomplete, in
that she did not have information
regarding all of the fees, cable systems
and stations that retransmitted the
programs of IPG and the SDC. Moreover,
she acknowledged that the sample of
CSOs and, derivatively, the sample of
stations retransmitted by those CSOs,
were not random samples. Accordingly,
Dr. Robinson undertook what she
described as a ‘‘sensitivity analysis’’ to
adjust for the missing data. Robinson
AWDT at 34–36; Ex. IPG–D–001 at 34–
36.
Specifically, Dr. Robinson noted that
she did not have data regarding 29% of
the total fees paid by all the CSOs that
distantly retransmit stations. Rather, she
had information from CSOs who in the
aggregate had paid only 71% of the total
fees paid in 1999 to distantly retransmit
stations. Dr. Robinson acknowledged
that she also lacked full information or
a random sampling of CSOs and of
stations (in addition to her lack of full
information or a random sampling of the
fees paid by CSOs to distantly
retransmit stations). However, Dr.
Robinson did not attempt to adjust her
original results to compensate for the
missing information or the fact that the
data set was not random.
Accordingly, in her ‘‘sensitivity
analysis,’’ Dr. Robinson adjusted all of
her metrics by assuming that she was
missing 29% of the data in all of her
valuation data categories (even though
only one of her metrics was calculated
based on fees). By this ‘‘sensitivity
analysis,’’ Dr. Robinson first calculated
how her allocations would change if all
of the assumed missing 29% of fees paid
by CSOs to distantly retransmit stations
were allocated (in each of the categories
in Table 3) to IPG and, conversely, how
her allocations would change if all of
the assumed missing 29% of such fees
instead was allocated (in each of the
categories in Table 3) to the SDC. Id. Dr.
Robinson initially applied this
sensitivity analysis to her original
allocations and, subsequently (at the
request of the Judges), applied this
sensitivity analysis to her adjusted
analyses that took into account the (1)
removal of certain IPG programs that
had been eliminated by the Judges in
the preliminary hearing and (2) addition
of certain SDC programs that Dr.
Robinson had overlooked in her initial
report. Ex. IPG–R–16 (revised). The
application of this ‘‘sensitivity analysis’’
to Dr. Robinson’s adjusted analyses 57
resulted in the proposed allocations set
forth on Table 4 below:
57 Because Dr. Robinson’s adjusted analyses
supersede her original analyses (they admittedly
included IPG programs that should have been
excluded and omitted SDC programs that should
have been included), the Judges choose not to
clutter this determination with the details of those
now irrelevant calculations.
TABLE 4

                                                      IPG high    IPG low   Non-IPG high  Non-IPG low
                                                      (percent)  (percent)    (percent)    (percent)
Hours of claimed distantly retransmitted
  broadcasts ......................................       63         34           66           37
Time of day of distantly retransmitted
  broadcasts ......................................       62         33           67           38
Fees paid by CSOs distantly retransmitting
  devotional broadcasts ...........................       58         29           71           42
Number of distant subscribers to CSOs distantly
  retransmitting devotional broadcasts ............       66         37           63           34

Ex. IPG–D–014.
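As reconstructed above, the Table 4 figures follow from the Table 3 shares if, as the Judges describe, each share is multiplied by 71% (that party's ‘‘low’’ case) and the assumed missing 29% is then credited entirely to one party or the other (that party's ‘‘high’’ case). A minimal sketch of that arithmetic, rounded to whole percentages as in the table (the formula is an inference from the description of the sensitivity analysis, not code used by either party):

```python
# Table 3 shares (percent), keyed by Dr. Robinson's four metrics: (IPG, Non-IPG).
table3 = {
    "Hours": (48, 52),
    "Time of day": (46, 54),
    "Fees paid": (41, 59),
    "Distant subscribers": (52, 48),
}

for metric, (ipg, non_ipg) in table3.items():
    ipg_low = 0.71 * ipg             # IPG receives no credit for the missing 29%
    ipg_high = ipg_low + 29          # the missing 29% credited entirely to IPG
    non_ipg_low = 0.71 * non_ipg
    non_ipg_high = non_ipg_low + 29
    print(metric, round(ipg_high), round(ipg_low),
          round(non_ipg_high), round(non_ipg_low))
```

Each row reproduces the corresponding Table 4 entries, for example the 29% (IPG low) and 71% (Non-IPG high) figures for the fees-paid metric discussed below.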
2. Evaluation of the IPG Methodology
The SDC have raised the following
specific criticisms of the IPG
Methodology. First, the SDC critiqued
each of the four purported measures of
value presented by Dr. Robinson. See
SDC PFF at ¶¶ 10–13 (regarding
volume); ¶¶ 14–17 (regarding time of
day); ¶¶ 18–24 (regarding fee
generation); and ¶¶ 26–27 (regarding
subscribership). Second, the SDC noted
that the sensitivity analysis undertaken
by Dr. Robinson revealed that SDC
programming had an eighteen
percentage point higher value than IPG
programming. SDC PFF at ¶ 25. Before
undertaking an analysis of the specific
elements of the IPG Methodology or the
SDC’s critiques thereof, it is important
to consider several
overarching defects in the approach
undertaken by IPG.
a. General Deficiencies in the IPG
Methodology
It bears repeating that a fundamental
problem with the IPG Methodology is
that it is based on a decision by Dr.
Robinson to presume the existence of
the compulsion arising from the pre-bundled status of the retransmitted
programs as it existed in the actual
compulsory-license market, rather than
the compulsion-free hypothetical fair
market consistently applied by the
Judges. See, e.g., Tr. 9/4/14 at 781–82
(Robinson) (quoted supra); see also 9/2/
14 Tr. at 175–76 (Robinson)
(acknowledging that the IPG
Methodology does not address the
relationship between value and
bundling).
A second problem with the IPG Methodology is that, although it ostensibly is intended to eschew viewership as a primary measure of program value, IPG’s Methodology implicitly uses indicia of viewership to measure program value. In particular, IPG’s Methodology considers and values programs based on their indirect contribution to viewership: The duration of a program serves as an indicium of value (a program of relatively longer duration would be more valuable because of its viewership over a longer period), as does the time of day a program is aired (there are more viewers at some times of day than others), and the number of subscribers (potential viewers) to CSOs retransmitting the program. Simply put, IPG’s Methodology is not true to its own critique of valuing programs based on viewership. Thus, the IPG Methodology fails to address either the initial necessity of considering absolute viewership or the subsequent necessity of undertaking a Shapley type of measurement or estimation in order to create a ‘‘bundle’’ of programs.
The Judges also find that Dr. Robinson did not truly undertake her own independent inquiry and develop her own methodology, because she worked solely with the data IPG, through Mr. Galaz, provided her. See 9/2/14 Tr. at 110–11 (Robinson); IPG PFF at 11. The type of data that Mr. Galaz supplied to Dr. Robinson was the same type he utilized in the 2000–03 proceeding, when he presented his own methodology on behalf of IPG. Mr. Galaz’s response to a question from the Judges confirmed this point:
Q. [I]n constructing the methodology that you relied on in the 2000–2003 proceeding, you used certain data from the CDC, from Tribune, or whatever it was called at the time, and so forth. Is that—were those types of data essentially the same as the types of data that were provided to Dr. Robinson for purposes of this proceeding?
Mr. Galaz: I would say it was essentially the same.
9/8/14 Tr. at 997 (Galaz).
It is not surprising, therefore, that Dr. Robinson conditioned her analysis and conclusions by noting that she was only able to express an opinion as to relative market value ‘‘given the data that are available in this matter.’’ Ex. IPG–D–001 at 20. In fact, Dr. Robinson premised her analysis on the fact that it was based upon the limited data available to her. See, e.g., 9/2/14 Tr. at 111 (Robinson) (‘‘I looked at the data, looked at what I could do with them, and this is what I could do.’’).
Indeed, Mr. Galaz’s methodology in the 2000–03 proceeding and Dr. Robinson’s methodology in the present proceeding overlap. Compare 2000–03 Determination, 78 FR at 64998 (‘‘The weight that IPG accorded to any given compensable broadcast was the product of (x) a ‘Station Weight Factor’ [based on subscriber or fee levels], (y) a ‘Time Period Weight Factor,’ and (z) the duration of the broadcast . . . .’’) with Robinson AWDT at 28; Ex. IPG–D–001 at 28 (‘‘[T]he indicia of the economic value of the retransmitted broadcasts . . . are: The length of the retransmitted broadcasts, the time of day of the retransmitted broadcast, the fees paid by cable system operators to retransmit the stations carrying the devotional broadcasts, and the number of persons distantly subscribing [to] the stations broadcasting the devotional programs.’’).
Dr. Robinson clearly was
straitjacketed in attempting to devise an
appropriate methodology by the limited
data she received from Mr. Galaz. In this
regard, it is important to note that Mr.
Galaz is not an economist, statistician,
econometrician or an expert in the field
of valuation of television programs or
other media assets, and that he therefore
had no particular expertise that would
permit him to select or approve the use
of appropriate data, especially when
that selection dictated the construction
of a methodology to establish ‘‘relative
market value’’ in a distribution
proceeding.58 The Judges therefore
conclude that the overall IPG Methodology carries no more weight than IPG’s methodology did in the 2000–03 proceeding. See 2000–03 Determination, 78 FR at 65002 (while IPG Methodology ‘‘cannot be applied to establish the basis for an allocation’’ it can be used to adjust ‘‘marginally’’ an allocation derived from other evidence).
58 See 2000–03 Determination, 78 FR at 65000. By contrast, the SDC’s expert witness, Mr. Sanders, was qualified as ‘‘an expert in the valuation of media assets, including television programs.’’ 9/3/14 Tr. at 463–64, and, in that capacity, he testified that the broadcast industry relied on Nielsen viewing data as the ‘‘best and most comprehensive’’ basis for valuing programs, 9/3/14 Tr. at 480–81 (Sanders). Thus, Mr. Sanders was qualified to testify as to the actual commercial use of a viewership-based valuation methodology. Mr. Galaz, on the other hand, was not qualified to testify as to the appropriateness of the data he selected for use in the IPG Methodology and, it should be noted, neither he nor Dr. Robinson testified that the factors relied upon in the IPG Methodology had ever been relied upon commercially. See Tr. 9/3/14 at 348–49 (Robinson).
Not only did Mr. Galaz lack the expertise to approve or select the type of data necessary to construct a persuasive methodology, his credibility has been seriously compromised by his prior fraud and criminal conviction arising from his misrepresentations in prior distribution proceedings. See 78 FR at 6500 (‘‘Mr. Galaz was previously convicted and incarcerated for fraud in the context of copyright royalty proceedings—a fraud that caused financial injury to MPAA. In connection with that fraud, Mr. Galaz also admittedly lied in a cable distribution proceeding much like the instant proceeding. Mr. Galaz’s fraud conviction and prior false testimony compromises his credibility.’’) Further, in the present case, the Judges carefully observed that Mr. Galaz testified that ‘‘what we gave to Dr. Robinson was everything that we had in our possession that we thought might affect . . .’’ before catching himself and stating instead ‘‘or would—I should say with which [s]he could work.’’ 9/8/14 Tr. at 996 (Galaz) (emphasis added). In any event, the Judges recognize that even a party that does not have such a checkered history has an inherent self-interest in selecting the types of data for use by its expert that is inconsistent with the independence of the expert in identifying his or her own categories of data.
Finally, IPG contends that the
purpose of the IPG Methodology is to
compensate every claimant, even if
there is no evidence of viewership of the
claimant’s program. See Galaz AWDT at
8; Ex. IPG–D–001 at 8. The Judges find
no basis for that purpose to guide the
methodology. Even if viewership as a
metric for determining royalties
theoretically would be subject to
adjustment to establish or estimate a
Shapley valuation, there is certainly no
basis to allow for compensation of a
program in the absence of any evidence
of viewership.
b. Specific Deficiencies in the IPG
Methodology
In addition to the foregoing
overarching criticisms of the IPG
Methodology, the Judges note the
following more particular deficiencies
in that methodology.
As a preliminary matter, Dr. Robinson
acknowledged that IPG’s sample of
stations had not been selected in a
statistically random manner. 9/2/14 Tr.
at 155 (Robinson). Thus, the sample
upon which Dr. Robinson relied
suffered from the same infirmity as the
Kessler Sample relied upon in part by
the SDC. Moreover, each prong of the
IPG Methodology raised its own
concerns.
(1) Broadcast Hours
Dr. Robinson acknowledged that the
number of hours of broadcasts is not
actually a measure of value; rather it is
a measure of volume. 9/3/14 Tr. at 243–
51 (Robinson). ‘‘Volume’’ fails to
capture the key measure of whether
anyone is actually viewing the
retransmitted program. See 9/3/14 Tr. at 247 (Robinson); SDC PFF ¶ 12. Further, ‘‘volume,’’ i.e., number of hours of air time, does not even reflect how many subscribers have access to the programs. 9/8/14 Tr. at 1085–86 (Erdem).
(2) Time of Day of Retransmitted
Broadcasts
IPG’s second measure of value
compares the time of day viewership of
IPG and SDC programs. Using 1997
Nielsen sweeps data produced by the
MPAA in a previous proceeding, Dr.
Robinson estimates the average number
of total television viewers for each
quarter-hour when IPG or SDC programs
were broadcast according to the Tribune
Data analyzed by Dr. Robinson. 9/3/14
Tr. at 254–55 (Robinson).
Dr. Robinson’s time-of-day measure
does not measure the value of the
individual programs that are
retransmitted. The proper measure of
value for such individual programs,
when considering ratings, would hold
the time of day constant, and then
consider relative ratings within the
fixed time periods. To do otherwise—as
Dr. Robinson acknowledged—absurdly
would be to give equal value to the
Super Bowl and any program broadcast
at the same time. 9/3/14 Tr. at 264
(Robinson).
Further, Dr. Robinson’s analysis does
not show, as she asserted, that the SDC
and IPG programs are broadcast at times
of day that have approximately equal
viewership. Rather, her time-of-day
analysis pointed to a 54%:46%
distribution in favor of the SDC.59
Finally, IPG utilized 1997 data to
estimate the level of viewing throughout
the broadcast day, rather than data that
was contemporaneous with the 1999
royalty distribution period at issue in
this proceeding. 9/3/14 Tr. at 229, 255
(Robinson).60
59 When Dr. Robinson adjusted for the proper addition of SDC programs and deletion of IPG programs, and then applied her sensitivity analysis, she changed this allocation to 67%:33% in favor of SDC (not giving IPG any credit for the assumed 29% of the data it declined to obtain).
60 Mr. Galaz asserted that information subsequently published by Nielsen confirmed that ‘‘there had been virtually no change’’ in day-part viewing between 1997 and 1999. 9/8/14 Tr. at 984 (Galaz). However, IPG presented no evidence to support that assertion.
(3) Fees Paid
Dr. Robinson’s third metric is derived from an analysis of fees paid by CSOs per broadcast station. That is, several CSOs might pay royalty fees to retransmit the same over-the-air station. Dr. Robinson testified that stations generating relatively greater fees could be presumed to have higher value programs in their respective station
bundles. 9/3/14 Tr. at 406–07
(Robinson). To measure this factor, Dr.
Robinson combined CDC data on royalty
fees the CSOs paid (on a per-station
basis) and TMS data on broadcast hours
by station in order to compare the fees
paid for retransmission of stations
carrying SDC and IPG programs. 9/3/14
Tr. at 229, 271 (Robinson).
In Phase I of this proceeding, the
Librarian adopted the use of a fees-paid
metric for value, where that measure
appeared to be the best alternative
valuation approach. See Distribution of
1998 and 1999 Cable Royalty Funds, 69
FR 3606, 3609 (January 24, 2004). The
use of a fee-based attempt at valuation
is particularly problematic, however, for
a niche area such as devotional
programming, which constitutes only a
small fraction of total station
broadcasting. See 9/8/14 Tr. at 1087–88
(Erdem). Because of the tenuous nature
of this approach to valuation, a royalty
allocation based on a fees-paid metric
might serve as, at best, a ‘‘ceiling’’ on a
distribution in favor of the party
proposing that approach. See
Distribution of the 2004 and 2005 Cable
Royalty Funds, 75 FR 57063, 57073
(September 17, 2010). That being said,
when Dr. Robinson adjusted her fees-paid based valuation by applying her
sensitivity analysis, she calculated a
value ratio of 71%:29% in favor of the
SDC. As the SDC noted, this appears to
be ‘‘a fact that Dr. Robinson had tried
hard to obscure.’’ SDC PFF ¶ 25.
(4) Subscribership Levels
Dr. Robinson’s final metric measures
the average number of distant
subscribers per cable system
retransmitting IPG programming versus
SDC programming. 9/3/14 Tr. at 311–12
(Robinson). This metric measures
average subscribers per cable system,
without taking into account the number
of cable systems retransmitting a station.
Therefore, this metric is of no assistance
in measuring the total number of distant
subscribers even receiving a program, let
alone the number of distant subscribers
who watch the program.
As Dr. Erdem demonstrated—and as
Dr. Robinson admitted—this
subscribership metric can actually
increase when a program is eliminated,
if the program had been retransmitted
by a cable system with lower than
average numbers of subscribers. Erdem
WRT at 8–9 (Redacted); Ex. SDC–R–001
at 8–9 (Redacted); 9/3/14 Tr. at 331–45
(Robinson). Indeed, this metric actually
increased in favor of IPG after the
dismissal of two of IPG’s claimants—
Feed the Children and Adventist Media
Center. Ex. SDC–R–001 at 7–10; 9/3/14
Tr. at 329–30 (Robinson). Simply put,
when a purported measure of program
value can move inversely to the
addition or subtraction of a claimant,
the measure is, at best, of minimal
assistance in determining relative
market value.
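To see why an average-subscribers metric can move this way, consider a purely hypothetical illustration (the figures are not from the record): dropping a claimant whose program ran on a below-average system raises the remaining programs' average.

```python
# Hypothetical distant-subscriber counts for the systems carrying a party's
# programs. Removing the program carried on the small system *raises* the
# party's average-subscribers metric even though a claimant was dismissed.
subscribers = {"Program A": 100_000, "Program B": 90_000, "Program C": 20_000}

before = sum(subscribers.values()) / len(subscribers)   # 70,000.0
del subscribers["Program C"]
after = sum(subscribers.values()) / len(subscribers)    # 95,000.0

print(before, after)
```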
Dr. Robinson suggests that the Judges
nonetheless should rely on her opinion
as to relative market value because all
of her alternative measures resulted in
similar proportionate valuations. 9/2/14
Tr. at 102–03 (Robinson) (‘‘[B]y coming
at this with four different metrics . . .
the fact that the estimates all came out
quite similarly gives me some comfort
that the numbers are reasonable.’’); see
also id. at 170 (Robinson) (emphasizing
that she was ‘‘looking at all of these
factors in combination’’). However, if
four measures of value are individually
untenable or of minimal value, they do
not necessarily possess a synergism
among them that increases their
collective probative value.
VI. Judges’ Determination of
Distribution
A. The Judges’ Distribution of Royalties
Is Within the Zone of Reasonableness
As the foregoing analysis describes,
the evidence submitted by the two
parties is problematic. First, the optimal
measure or approximation of relative
value in a distribution proceeding—the
Shapley valuation method—was neither
applied nor approximated by either
party. Second, the methodologies
proposed by both parties have
significant deficiencies.
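Neither party computed Shapley values, and the record contains no data from which to do so. Purely to illustrate the concept the Judges reference (with hypothetical coalition worths, not record data), a program's Shapley value is its average marginal contribution over all orders in which programs could be added to the retransmitted bundle:

```python
from itertools import permutations

def shapley_values(players, worth):
    """Average marginal contribution of each player over all join orders.
    `worth` maps a frozenset coalition to its (hypothetical) value."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            totals[p] += worth[coalition | {p}] - worth[coalition]
            coalition = coalition | {p}
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical worths for two programs with overlapping audiences: the bundle
# is worth less than the sum of the parts, so the more highly viewed program X
# captures a proportionately larger Shapley share (50:30) than raw viewership
# alone (60:40) would suggest, consistent with the point made in note 54.
worth = {frozenset(): 0, frozenset({"X"}): 60,
         frozenset({"Y"}): 40, frozenset({"X", "Y"}): 80}
print(shapley_values(["X", "Y"], worth))  # {'X': 50.0, 'Y': 30.0}
```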
As between the parties’ competing
methodologies, however, the Judges
conclude that the approach proffered by
the SDC is superior to that proffered by
IPG. The SDC Methodology, consistent
with measures of value in the television
industry, relies on viewership to
estimate relative market value. The
Judges conclude that in constructing a
hypothetical market to measure the
relative market values of distantly
retransmitted programs, viewership
would be a fundamental metric used to
apply a Shapley valuation model.
Therefore, a methodology that uses
viewership as an indicium of program
value is reasonable, appropriate, and
consistent with recent precedent in
distribution proceedings.
IPG’s expert, Dr. Robinson, agreed
that viewership is relevant to the
determination of program value. IPG’s
own methodology uses viewership as a
valuation proxy, although it does so in
a much less direct and transparent way
than does the SDC Methodology.
Further, the SDC presented unrebutted
testimony that estimating relative
VerDate Sep<11>2014
19:27 Mar 12, 2015
Jkt 235001
market value based on viewership data
alone when considering homogeneous
programming, such as the Devotional
Claimants category, might actually
understate the value of the more highly
viewed programs vis-à-vis a Shapley
valuation of the same programs. Because
the SDC programs had higher ratings,
the Judges conclude that the SDC
Methodology, ceteris paribus, may well
tend to understate the SDC share of the
royalties in this proceeding.
By contrast, the IPG Methodology is
reliant on data that does not focus on
the property right the Judges must
value—the license to retransmit
individual programs in a hypothetical
market that is unaffected by the
statutory license. Moreover, the IPG
Methodology fails to value the
retransmitted programs in the
hypothetical market as applied by the
Judges in this and prior proceedings.
Rather, IPG has assumed tacitly that the
valuation of the individual programs
has been compromised by the
preexisting bundling of the programs in
the actual market, and therefore all
programs must be subject to common
measurements, based on broadcast
hours, time of day, subscriber fees, and
subscriber levels. The Judges conclude,
as they did in the 2000–03
Determination, that this failure to value
programs individually is erroneous.
Accordingly, at best, as stated in the
2000–03 Determination, the IPG
Methodology can serve as no more than
a ‘‘crude approximation’’ of value that
may have some ‘‘marginal’’ impact on
the determination of relative market
value. See 2000–03 Determination, 78
FR at 65002.
The Judges’ preference for the
valuation concept of the SDC
Methodology does not mean that the
Judges find the SDC’s application of that
concept to be free of problems or
unimpeachably persuasive in its own
right. The application of the
theoretically acceptable SDC
Methodology is inconsistent as regards
its probative value.
The Judges’ task in this and every
distribution determination is to
establish a distribution that falls within
a ‘‘zone of reasonableness.’’ See
Asociacion de Compositores y Editores
de Musica Latino Americana v.
Copyright Royalty Tribunal, 854 F.2d
10, 12 (2d Cir. 1988); Christian
Broadcasting Network, Inc. v. Copyright
Royalty Tribunal, 720 F.2d 1295, 1304
(D.C. Cir. 1983). Based on the entirety of
the Judges’ analysis in this
determination, the Judges find that the
SDC’s proposed royalty distribution of
81.5%:18.5% in favor of the SDC can
serve only as a guidepost for an upper
bound of such a zone of reasonableness.
The Judges decline to adopt the
81.5%:18.5% split as the distribution in
this proceeding, however, because the
Judges conclude that the several defects
in the application of the SDC
Methodology render the 81.5%:18.5%
split too uncertain. That is, the defects
in the application of the SDC
Methodology require the Judges to
examine the record for a basis to
establish a distribution that
acknowledges both the merits and the
imperfections in the SDC Methodology.
To that end, the Judges look to the
alternative confirmatory measure of
relative market value utilized by Mr.
Sanders in his report and testimony.
More particularly, the Judges look to his
analysis of the viewership data for the
SDC and IPG programs in the local
market, one that served as an
‘‘analogous’’ market by which to
estimate the distribution of royalties in
this proceeding. The allocation of
royalties suggested by that confirmatory
analysis was a 71.3%:28.7%
distribution in favor of the SDC.
On behalf of the SDC, Mr. Sanders
testified that this analogous body of data
‘‘is potentially very relevant and should
not, in my opinion, be ignored.’’ 9/3/14
Tr. at 503 (Sanders) (emphasis added).
The Judges agree. That distribution ratio
arises from the Nielsen local viewership
ratings over a three-month period in
1999 and covers all of the programs
represented in this proceeding.
Importantly, that approach does not
suffer from the uncertainty created by
the selection and use of the Kessler
Sample of stations, nor any of the other
serious potential or actual deficiencies
in the application of the SDC
Methodology, as discussed in this
determination.
There was no sufficiently probative
evidence in the record for the Judges to
establish a lower bound to a zone of
reasonableness. That being said, it is
noteworthy that even under IPG’s
Methodology the relative market
valuations of the SDC and IPG programs
would be no more favorable to IPG than
roughly a 50/50 split. Under at least two
prongs of IPG’s Methodology, Dr.
Robinson acknowledged that an
adjusted allocation would likely be
closer to a 67/33 split (based on time of
day of retransmitted broadcasts) or a 71/
29 split (based on fees paid) in SDC’s
favor.
Further, as IPG correctly argued, the
71.3%:28.7% distribution is
significantly different (to the benefit of
IPG) compared with the uncertain
results derived by the SDC
Methodology. Given that the
81.5%:18.5% allocation derived by the
SDC Methodology represents a guidepost to the upper bound of a zone of reasonableness, the ‘‘very relevant’’ (to use Mr. Sanders’s characterization) 71.3%:28.7% distribution has the added virtue of serving as a rough proxy 61 for the need to reflect the imperfections in the application of the SDC Methodology.
Accordingly, the Judges find and conclude that a distribution ratio of 71.3%:28.7% in favor of the SDC lies within the zone of reasonableness.
61 As noted supra, the Judges may rely on the evidence presented by the parties to make a distribution within the zone of reasonableness, and, in so doing, mathematical precision is not required. See Nat’l Ass’n of Broadcasters, 140 F.3d at 929; Nat’l Cable Television Ass’n, 724 F.2d at 182.
B. The Judges’ Distribution is Consistent With a Valuation Derived From an Application of the IPG Methodology
The Judges also note a consensus between this 71.3%:28.7% distribution and the least deficient of IPG’s proposed valuations—the ‘‘fees-paid’’ valuation. More particularly, Dr. Robinson made ‘‘sensitivity’’ adjustments to all her values to account for the incompleteness of her data. However, her only adjustment was to multiply all her alternative value measures by 71% to adjust for the 29% of fees paid that her data set did not include. The Judges find and conclude that Dr. Robinson could adjust only her fees-paid valuation approach in this manner because the ‘‘missing 29%’’ only pertained to that data set. In the other categories, Dr. Robinson (to put it colloquially) was subtracting apples from oranges.
When Dr. Robinson made her adjustment in the fees-paid category (and properly accounted for all programs), she changed her valuation and distribution estimate to 71%:29% in favor of the SDC. See Table 4 supra.62 Moreover, Dr. Robinson testified that her sensitivity analysis resulted in values that she would characterize as within an economic ‘‘zone of reasonableness.’’ 9/2/14 Tr. at 158 (Robinson) (emphasis added).
62 The fact that Dr. Robinson’s adjustment was based on multiplying her allocations by 71% (to account for the missing 29%) and that the adjustment led to a recommended distribution to the SDC of 71% is only coincidental.
Thus, not only do the Judges independently find that a 71.3%:28.7% distribution in favor of the SDC proximately adjusts the distribution within the zone of reasonableness, there is also a virtual overlap between what can properly be characterized as the worst case distribution scenarios that the parties’ own experts respectively acknowledge to be ‘‘very relevant’’ and falling within a ‘‘zone of reasonableness.’’ 63 Accordingly, given that IPG’s expert witness testified explicitly that a 71%:29% distribution in favor of the SDC was within the ‘‘zone of reasonableness’’ and that the SDC’s expert witness testified explicitly that a 71.3%:28.7% distribution in favor of the SDC was ‘‘reasonable’’ and ‘‘should not . . . be ignored,’’ such a distribution is also consonant with the parties’ understanding of a reasonable allocation.
63 The Judges’ acknowledgement that IPG’s worst-case scenario (arising out of its fees-paid approach) overlaps with the SDC’s worst case scenario constitutes the extent to which the Judges credit the IPG Methodology.
C. The Judges’ Distribution Is Consistent With the Parties’ Economic Decisions Regarding the Development and Presentation of Evidence
The parties admittedly proffered their respective worst-case scenarios because each had chosen not to obtain data that are more precise—because each party deemed the cost of acquiring additional data to be too high relative to the marginal change in royalties that might result from such additional data (and perhaps the overall royalties that remain in dispute in the current proceeding). The parties’ independent yet identical decisions in this regard underscore the Judges’ reliance on the parties’ worst-case scenarios in establishing relative market value. When a party acts, or fails to act, to cause evidentiary uncertainty as to the quantum of relief, the party that created the uncertainty cannot benefit from its own decision in that regard. As one commentary notes:
Factual uncertainty resulting from missing evidence is a salient feature of every litigated case. Absolute certainty is unattainable. Judicial decisions thus always involve risk of error. This risk cannot be totally eliminated. However, it is sought to be minimized by increasing the amount of probative evidence that needs to be considered by the triers of fact. Missing evidence should therefore be perceived as a damaging factor.
A. Porat and A. Stein, Liability for Uncertainty: Making Evidential Damage Actionable, 18 Cardozo L. Rev. 1891, 1893 (1997) (emphasis added).
Alternatively stated, the SDC and IPG have failed to satisfy their respective evidentiary burdens to obtain anything above the minimum values indicated by their evidence, by failing to obtain random samples, full surveys, the testimony of television programmers, or other more probative evidence or testimony to support their respective arguments for a higher percentage distribution.
Although the SDC and IPG each had
an incentive to procure and proffer
additional evidence, that incentive
existed only if the additional evidence
would have advanced the offering
party’s net economic position. As the
parties acknowledged at the hearing, the
amount at stake simply did not justify
their investment in the discovery,
development, and presentation of
additional evidence.64 When a party
makes the choice to forego the expense
of producing more precise evidence,
that party has implicitly acknowledged
that the value of any additional
evidence is less than the cost of its
procurement. As Judge Richard Posner
has noted: ‘‘The law cannot force the
parties to search more than the case is
worth to them merely because the
additional search would confer a social
benefit.’’ R. Posner, An Economic
Approach to Evidence, 51 Stan. L. Rev.
1477, 1491 (1999).
64 As noted previously, IPG criticized the SDC Methodology for failing to utilize better data. That criticism applies equally to both parties and reflects their respective decisions not to invest additional resources to obtain more evidence. See supra notes 46–47 and accompanying text.
VII. Conclusion
Although there is a virtual overlap
between the worst-case scenarios of
both parties, the Judges adopt the SDC’s
distribution proposal, in light of the
more fundamental deficiencies in the
IPG Methodology. Accordingly, based
on the analysis set forth in this
Determination, the Judges conclude that
the distribution at issue in this
proceeding shall be:
SDC: 71.3%
IPG: 28.7%
This Final Determination determines
the distribution of the cable royalty
funds allocated to the Devotional
Claimants category for the year 1999,
including accrued interest. The Register
of Copyrights may review the Judges’
final determination for legal error in
resolving a material issue of substantive
copyright law. The Librarian shall cause
the Judges’ final determination, and any
correction thereto by the Register, to be
published in the Federal Register no
later than the conclusion of the
Register’s 60-day review period.
January 14, 2015.
SO ORDERED.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
David R. Strickler,
United States Copyright Royalty Judge.
Jesse M. Feder,
United States Copyright Royalty Judge.
Dated: January 14, 2015.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
Approved by:
James H. Billington,
Librarian of Congress.
[FR Doc. 2015–05777 Filed 3–12–15; 8:45 am]
BILLING CODE 1410–72–P
NATIONAL AERONAUTICS AND
SPACE ADMINISTRATION
[Notice: 15–008]
NASA Advisory Council; Aeronautics
Committee; Meeting
AGENCY: National Aeronautics and Space Administration.
ACTION: Notice of meeting.
SUMMARY: In accordance with the Federal Advisory Committee Act, Public Law 92–463, as amended, the National Aeronautics and Space Administration announces a meeting of the Aeronautics Committee of the NASA Advisory Council (NAC). This Committee reports to the NAC. The meeting will be held for the purpose of soliciting, from the aeronautics community and other persons, research and technical information relevant to program planning.
DATES: Thursday, March 26, 2015, 9:00 a.m. to 5:00 p.m., Local Time.
ADDRESSES: NASA Headquarters, Room 6E40, 300 E Street SW., Washington, DC 20546.
FOR FURTHER INFORMATION CONTACT: Ms. Susan L. Minor, Executive Secretary for the NAC Aeronautics Committee, NASA Headquarters, Washington, DC 20546, (202) 358–0566, or susan.l.minor@nasa.gov.
SUPPLEMENTARY INFORMATION: The meeting will be open to the public up to the capacity of the room. Any person interested in participating in the meeting by WebEx and telephone should contact Ms. Susan L. Minor at (202) 358–0566 for the web link, toll-free number and passcode. The agenda for the meeting includes the following topics:
• NAC Aeronautics Committee Work Plan.
• NASA Aeronautics Budget Discussion.
• Safety Program Reorganization Implementation.
• Aeronautics Research Mission Directorate Investment Strategy.
• Innovation in Commercial Supersonic Aircraft Thrust Overview.
Attendees will be requested to sign a register and to comply with NASA security requirements, including the presentation of a valid picture ID to Security before access to NASA Headquarters. Foreign nationals attending this meeting will be required to provide a copy of their passport and visa in addition to providing the following information no less than 10 working days prior to the meeting: Full name; gender; date/place of birth; citizenship; visa information (number, type, expiration date); passport information (number, country, expiration date); employer/affiliation information (name of institution, address, country, telephone); title/position of attendee; and home address to Susan Minor, NAC Aeronautics Committee Executive Secretary, fax (202) 358–4060. U.S. citizens and Permanent Residents (green card holders) are requested to submit their name and affiliation 3 working days prior to the meeting to Susan Minor at (202) 358–0566. It is imperative that this meeting be held on this date to accommodate the scheduling priorities of the key participants.
Harmony R. Myers,
Acting Advisory Committee Management Officer, National Aeronautics and Space Administration.
[FR Doc. 2015–05702 Filed 3–12–15; 8:45 am]
BILLING CODE 7510–13–P
NATIONAL AERONAUTICS AND
SPACE ADMINISTRATION
[Notice: 15–009]
NASA Advisory Council; Institutional
Committee; Meeting
National Aeronautics and
Space Administration.
ACTION: Notice of meeting.
AGENCY:
In accordance with the
Federal Advisory Committee Act, Public
Law 92–463, as amended, the National
Aeronautics and Space Administration
announces a meeting of the Institutional
Committee of the NASA Advisory
Council (NAC). This Committee reports
to the NAC.
DATES: Thursday, March 26, 2015, 9:00
a.m. to 5:00 p.m., Local Time, and
Friday, March 27, 2015, 9:00 a.m. to
4:00 p.m., Local Time.
ADDRESSES: NASA Headquarters, Room
9H40 [Program Review Center (PRC)],
300 E Street SW., Washington, DC
20546.
SUMMARY:
Mr.
Todd Mullins, Executive Secretary for
the NAC Institutional Committee, NASA
Headquarters, Washington, DC 20546,
FOR FURTHER INFORMATION CONTACT:
PO 00000
Frm 00123
Fmt 4703
Sfmt 9990
(202) 58–3831, or todd.mullins@
nasa.gov.
The
meeting will be open to the public up
to the seating capacity of the room. This
meeting is also available telephonically
and by WebEx. You must use a touch
tone phone to participate in this
meeting. Any interested person may dial
the toll free access number 844–467–
6272 or toll access number 720–259–
6462, and then the numeric participant
passcode: 180093 followed by the #
sign. To join via WebEx on March 26,
the link is https://nasa.webex.com/, the
meeting number is 990 778 028 and the
password is Meeting2015! (Password is
case sensitive.) To join via WebEx on
March 27, the link is https://
nasa.webex.com/, the meeting number
is 999 775 359 and the password is
Meeting2015! (Password is case
sensitive.) Note: If dialing in, please
‘‘mute’’ your telephone. The agenda for
the meeting will include the following:
—NASA Human Capital Culture
Strategy
—NASA Leadership Development
Programs
—NASA Export Control Program
—NASA Space Act Agreements Process
Attendees will be requested to sign a
register and to comply with NASA
Headquarters security requirements,
including the presentation of a valid
picture ID before receiving access to
NASA Headquarters. Foreign nationals
attending this meeting will be required
to provide a copy of their passport and
visa in addition to providing the
following information no less than 10
working days prior to the meeting: Full
name; gender; date/place of birth;
citizenship; passport information
(number, country, expiration date); visa
information (number, type, expiration
date); employer/affiliation information
(name of institution, address, country,
telephone); title/position of attendee. To
expedite admittance, attendees with
U.S. citizenship and Permanent
Residents (green card holders) can
provide full name and citizenship status
3 working days in advance by
contacting Ms. Mary Dunn, via email at
mdunn@nasa.gov or by telephone at
202–358–2789. It is imperative that the
meeting be held on this date to
accommodate the scheduling priorities
of the key participants.
Harmony R. Myers,
Acting Advisory Committee Management
Officer, National Aeronautics and Space
Administration.
[FR Doc. 2015–05768 Filed 3–12–15; 8:45 am]
BILLING CODE 7510–13–P
[Federal Register Volume 80, Number 49 (Friday, March 13, 2015)]
[Notices]
[Pages 13423-13444]
From the Federal Register Online via the Government Printing Office [www.gpo.gov]
[FR Doc No: 2015-05777]
=======================================================================
-----------------------------------------------------------------------
LIBRARY OF CONGRESS
Copyright Royalty Board
[Docket No. 2008-1 CRB CD 98-99 (Phase II)]
Distribution of 1998 and 1999 Cable Royalty Funds
AGENCY: Copyright Royalty Board, Library of Congress.
ACTION: Final distribution determination.
-----------------------------------------------------------------------
SUMMARY: The Copyright Royalty Judges announce the final Phase II
distribution of cable royalty funds for the year 1999. The Judges
issued their initial determination in December 2014 and received no
motions for rehearing.
DATES: Effective date: March 13, 2015.
ADDRESSES: The final distribution order is also published on the
agency's Web site at www.loc.gov/crb and on the Federal eRulemaking
Portal at www.regulations.gov.
FOR FURTHER INFORMATION CONTACT: Richard Strasser, Senior Attorney, or
Kim Whittle, Attorney Advisor, (202) 707-7658 or crb@loc.gov.
SUPPLEMENTARY INFORMATION:
I. Introduction
In this proceeding, the Copyright Royalty Judges (Judges) determine
the final distribution of royalty funds deposited by cable system
operators (CSOs) for the right to retransmit television programming
carried on distant over-the-air broadcast signals during calendar year
1999.\1\ Participants have received prior partial distributions of the
1999 cable royalty funds.\2\ The remaining funds at issue are those
allocated to the Devotional Claimants category.\3\ Two participants are
pursuing distribution from the Devotional Claimants funds for 1999:
Worldwide Subsidy Group LLC dba Independent Producers Group (IPG) and
the ``Settling Devotional Claimants'' (SDC).\4\ The Judges conducted
three and
[[Page 13424]]
a half days of hearings. After considering written evidence and oral
testimony, the Judges determine that the SDC should receive 71.3% and
IPG should receive 28.7% of the 1999 fund allocated to the Devotional
Claimants category.\5\
---------------------------------------------------------------------------
\1\ Although this proceeding consolidates royalty years 1998 and
1999, all claims to 1998 royalties have been resolved, and the funds
have been distributed. IPG's appeal of the order approving
distribution of 1998 royalties was dismissed for lack of
jurisdiction. Ind. Producers Group v. Librarian of Congress, 759
F.3d 100 (D.C. Cir. 2014).
\2\ The 1999 cable royalty deposits equaled approximately $118.8
million at the outset. The Judges authorized partial distributions
that the Copyright Licensing Office made on October 31, 2001, March
27, 2003, April 19, 2007, June 7, 2007, and February 28, 2013.
Authorized distributions equaled in the aggregate approximately
$126.9 million, including accrued interest, leaving a balance
available for distribution of $827,842.
\3\ See infra note 18, and accompanying text. The Devotional
Claimants category has been defined by agreements of the Phase I
participants as ``Syndicated programs of a primarily religious
theme, not limited to those produced by or for religious
institutions.''
\4\ The Settling Devotional Claimants are: The Christian
Broadcasting Network, Inc., Coral Ridge Ministries Media, Inc.,
Crystal Cathedral Ministries, Inc., In Touch Ministries, Inc., and
Oral Roberts Evangelistic Association, Inc. The SDC previously
reached a confidential settlement with devotional program claimants
represented by the National Association of Broadcasters, Liberty
Broadcasting Network, Inc., and Family Worship Center Church, Inc.
The programming giving rise to the latter groups' claims is not
included in the Judges' analysis in this proceeding.
\5\ From prior partial distributions, the SDC have received over
$693,000. The SDC alone is responsible to make adjustments, if any,
to comply with the conclusions of this Determination and to comply
with confidential settlements it reached with former participants.
---------------------------------------------------------------------------
II. Background
A. Statement of Facts
In the present proceeding, IPG represents the interests of four
entities \6\ owning copyrights in 10 distinct programs. The SDC
represent five entities \7\ owning copyrights in 20 distinct programs.
CSOs remotely retransmitted IPG-claimed titles 11,041 times and the
SDC-claimed titles 6,684 times during 1999. See IPG PFF at 6; SDC PFF
at 1-2.
---------------------------------------------------------------------------
\6\ IPG represents Benny Hinn Ministries, Creflo A. Dollar
Ministries, Eagle Mountain International Church aka Kenneth Copeland
Ministries, and Life Outreach International.
\7\ See supra, n.4.
---------------------------------------------------------------------------
B. Statement of the Case
On January 30, 2008, the Judges commenced a proceeding to determine
the Phase II distribution of 1998 and 1999 royalties deposited by CSOs
for the cable statutory license.\8\ Beginning in July 2008, the Judges
stayed the proceeding pending the outcome of California state court
litigation initiated by IPG regarding the validity and interpretation
of settlement agreements by and between IPG, the Motion Picture
Association of America as representative of certain program suppliers
(MPAA), and the Librarian of Congress.\9\ In a September 2012 filing,
IPG acknowledged that the California proceedings had been resolved in
favor of MPAA. See Opposition of Independent Producers Group to Motion
for Final Distribution of 1998 and 1999 Cable Royalty Funds at 4,
Docket No. 2008-1 CRB CD 98-99 (September 5, 2012). IPG continued to
assert claims to 1999 royalties allocated to the Devotional Claimants
category. In July 2013, the Judges issued an order establishing the
schedule and order of proceedings for the present matter.\10\
---------------------------------------------------------------------------
\8\ See 73 FR 5596 (Jan. 30, 2008). Before the effective date of
the Copyright Royalty and Distribution Reform Act of 2004, Public
Law 108-419, 118 Stat. 2341 (Nov. 30, 2004), the Copyright Office,
with oversight by the Librarian of Congress, managed distribution of
cable retransmission royalties. To resolve controversies regarding
the appropriate distribution of royalties, the Librarian would order
appointment of a Copyright Arbitration Royalty Panel (CARP). In
2001, the Library of Congress initiated Phase I proceedings to
determine distribution of, inter alia, royalties for distant
retransmission by cable in 1999 of broadcast television programming.
In November 2003, all claimants to funds in the 1998 Devotional
Claimants category reached agreement regarding distribution of those
funds and the Register of Copyrights ordered final distribution of
1998 Devotional Claimants royalties by order dated November 19,
2003. By Order dated April 3, 2007, the Register finalized the Phase
I allocation of uncontroverted funds for 1998 and 1999 cable
retransmissions among the claimant categories. After enactment of
the current statute, the Register terminated the CARP proceeding
relating to the 1998-99 funds. See 72 FR 45071 (Aug. 10, 2007). The
Judges have managed all subsequent proceedings relating to the 1998
and 1999 cable royalties fund.
\9\ Order Granting Motions to Stay, Docket No. 2008-1 CRB CD 98-
99 (July 23, 2008). The Judges granted eight continuations of the
original stay order, entering the last continuation order on July
27, 2012. See Eighth Order Continuing Stay of Proceedings (July 27,
2012).
\10\ Order Setting Deadline for Filing Written Direct
Statements, Announcing Discovery Period, and Requiring Settlement
Conference (Jul 25, 2013).
---------------------------------------------------------------------------
On May 5 and 6, 2014, the Judges held a Preliminary Hearing to
adjudicate disputes regarding the validity of claims asserted by each
party. At the conclusion of the Preliminary Hearing, the Judges
dismissed two claims asserted by IPG. See Ruling and Order Regarding
Claims (June 18, 2014). Beginning September 2, 2014, the Judges
presided over three and a half days of hearings at which IPG presented
two witnesses and the SDC presented three live witnesses and designated
testimony of seven witnesses from prior proceedings.\11\ The Judges
admitted 35 paper and electronic exhibits into evidence. On September
23, 2014, the parties filed their proposed findings of fact and
conclusions of law.
---------------------------------------------------------------------------
\11\ Because of the delay of the present proceeding occasioned
by outside litigation, the Judges concluded their determination of
distributions of cable retransmission royalties for the period 2000
to 2003, inclusive, before completing the instant proceeding
regarding 1999 funds. The participants in the 2000-03 proceeding
presented many of the same issues relevant to the present
proceeding; thus, one of the ``prior proceedings'' from which
participants could designate testimony is a proceeding involving
funds deposited after the relevant period at issue in the present
proceeding.
---------------------------------------------------------------------------
III. IPG's Motion in Limine
A. Issues Presented
On August 26, 2014, IPG filed with the Judges a motion in limine
(Motion) to exclude the SDC's Nielsen Household Devotional Viewing
Report sponsored by SDC witness, Alan Whitt.\12\ IPG contends that the
SDC failed to include in its exhibit list foundational data for the
methodology used in the report and failed to produce all foundational
data and electronic files underlying the report. Motion at 1. IPG
requests that the Judges strike any evidence relying on or referring to
the report that the SDC presented.
---------------------------------------------------------------------------
\12\ The SDC, whose witness introduced the report, sometimes
refer to it as the Household Viewing Hours Report or ``HHVH
Report.''
---------------------------------------------------------------------------
The SDC oppose IPG's request, arguing, among other things, that IPG
has failed to present any competent evidence that the purportedly
missing data either were in the SDC's custody, possession, or control,
or were not publicly available. SDC Opposition at 1 (September 2,
2014). The SDC also contend that IPG's motion in limine merely attempts
to revisit issues the Judges resolved in their May 2, 2014, Order
Denying IPG's Motion to Strike Portions of SDC Written Direct Statement
(``May 2, 2014, Order''). The SDC contend that IPG has presented no new
evidence that would justify revisiting that decision. SDC Opposition at
3 & n.1.
Moreover, the SDC contend that IPG's arguments go to the weight
rather than to the admissibility of the proffered report. SDC
Opposition at 2, citing U.S. v. H & R Block, Inc., 831 F.Supp.2d 27, 34
(D.D.C. 2011) (denying motion in limine ``because [defendant's
proffered] survey [was] not so unreliable as to be deemed
inadmissible.'') and Graves v. D.C., 850 F.Supp.2d 6, 13 (D.D.C. 2011)
(``[M]otions in limine are designed to address discrete evidentiary
issues related to trial and are not a vehicle for resolving factual
disputes or testing the sufficiency of the plaintiff's evidence.'').
Finally, the SDC argue that even if the Judges were inclined to
believe that the unavailability of data underlying the proffered report
was relevant in an admissibility determination, this fact would not
warrant a prehearing exclusion of the evidence. According to the SDC,
the facts and data underlying an expert's opinion need not be
admissible for the opinion or reference to be admitted if the facts or
data are of a type reasonably relied upon by experts in the field. SDC
Opposition at 4, citing Rule 703 of the Federal Rules of Evidence. On
this point, the SDC refer to the written direct testimony of the SDC's
witness, John S. Sanders, who, according to the SDC, determined that
``Mr. Whitt's report is sufficiently reliable to render his opinion
concerning the relative market value of the SDC and IPG programs.'' SDC
Opposition at 5.
The Judges heard oral argument on the Motion on September 2, 2014,
and deferred ruling until the end of the proceeding. For the reasons
discussed below, the Judges deny the Motion and admit the proffered
report.
[[Page 13425]]
B. The Judges' May 2, 2014, Discovery Order
The dispute between IPG and the SDC began with a discovery request
from IPG in which it requested from the SDC ``evidentiary support for a
report by the SDC's expert witness, Mr. Whitt, setting forth viewership
levels for Devotional programming.'' See May 2, 2014, Order at 1. In
the motion to compel discovery that gave rise to the Judges' May 2,
2014, Order, IPG sought an order striking Mr. Whitt's report and the
SDC's reliance on that report. According to IPG, the SDC failed to meet
its discovery obligations by failing to provide electronic files or
computer codes that Mr. Whitt purportedly used to (1) merge viewership
data sets compiled by Tribune Media Services and the Nielsen Company
and (2) cull claimed devotional titles from numerous program titles in
the merged data sets (referred to in the May 2, 2014, Order as ``Merger
Information''). Id. at 3.
The Judges determined that the discovery dispute could not be
resolved without an evidentiary hearing, and scheduled one for April 8,
2014. During the hearing, the Judges heard testimony from the SDC's
witnesses, Mr. Whitt and Dr. Erkan Erdem, as well as from Dr. Laura
Robinson, who testified for IPG.
Mr. Whitt testified that he did not have access to the files and
codes he had used that contained the Merger Information because he had
done most of the work in question when he was employed with an
independent company that was a contractor for MPAA. Mr. Whitt completed
his MPAA assignment several years prior to the current proceeding. 04/
08/14 Tr. at 105 (Whitt). Therefore, according to Mr. Whitt, neither
Mr. Whitt nor the SDC could provide the requested information to IPG.
Id. at 121-22. Dr. Robinson testified that, based on discovery that the
SDC provided, she was unable to replicate the results that Mr. Whitt
had reached, although she admitted that she could have merged the
Tribune and Nielsen data sets. Id. at 35-6, 66 (Robinson). Finally, Dr.
Erdem testified that, based on discovery the SDC had provided to IPG,
and certain other publicly available information, Dr. Erdem was able to
closely approximate, although not duplicate, Mr. Whitt's results. Id.
at 162 (Erdem).
The Judges found that nothing in the record allowed them to
conclude that SDC violated its duties under the applicable procedural
rule governing discovery by not producing the Merger Information. See
May 2, 2014, Order at 9. The Judges further concluded that the SDC's
discovery responses were sufficient for IPG to ``test'' the process Mr.
Whitt used in compiling the report. The Judges noted that the purpose
of an earlier discovery order addressing IPG's discovery request was to
allow IPG sufficient discovery to allow it to confirm either that
Mr. Whitt had performed his work correctly . . . or that Mr. Whitt
had performed his work incorrectly or inaccurately. In that latter
case, IPG would be able to: (a) file a Written Rebuttal Statement
contradicting Mr. Whitt's work and/or (b) cross-examine Mr. Whitt at
the hearing on the merits regarding claimed errors or inaccuracies
in his work.
Id. (emphasis in original).
The Judges concluded that they ``would not--and did not--assert
that discovery regarding expert testimony must result in a consensus
between adverse participants as to the correctness of the result (or
the amount) calculated by the expert.'' Id. at 11. Specifically, the
Judges concluded
with the discovery [the SDC provided to IPG, Dr. Robinson] could
test Mr. Whitt's computational process by producing her own merger
of the Tribune Data and the Nielsen Data. However, Dr. Robinson also
testified that her merger and the concomitant results might differ
from (i.e., falsify) rather than replicate Mr. Whitt's results.
Likewise, [Dr. Erdem] produced a merger of the Tribune Data and the
Nielsen Data that was quite proximate to Mr. Whitt's results, albeit
not a complete replication. Thus, it is clear that Mr. Whitt's
computational processes can be tested and subject to meaningful
cross-examination and rebuttal.
Id. Based on this conclusion the Judges denied IPG's motion to strike
portions of the SDC's written direct statement on grounds that the SDC
violated its discovery obligations.
In its discovery motion, IPG also asked the Judges to strike any
reliance on or reference to the distant rating study presented by the
SDC as inadmissible. See [IPG] Motion to Strike Portions of [SDC]
Direct Statement at 10-11 (February 20, 2014). The Judges declined to
consider these issues at that stage of the proceeding reasoning that:
[a]n order regarding these issues would essentially constitute a
premature in limine ruling based on SDC's non-production of the
Merger Information in discovery. Given that SDC introduced new
testimony and new exhibits at the April 8, 2014, discovery hearing,
the Judges decline to rule without a formal motion in limine,
addressing these issues in the context of the new hearing exhibits
and the hearing testimony, should IPG decide to renew these
arguments.
May 2, 2014, Order at 11. IPG filed that motion in limine on August 26,
2014, viz., the Motion at issue here.
C. Substance of IPG's Motion
In the present Motion, IPG asserts that ``Merger Information
existed and was not produced to IPG, including sweeps period data, a
sweeps period algorithm, a file that prepared the Tribune data for
merger, a process to reconcile Nielsen and Tribune data, and another
`quality control process' performed by Mr. Whitt.'' Motion at 2. IPG
further asserts that ``SDC's witness [Dr. Erdem] approximated Mr.
Whitt's results only after utilization of data and information that had
not been produced to IPG, and that the SDC's attempted replication of
the Merger Information occurred months after both the discovery
deadline and the deadline for filing amended direct statements.'' Id.
According to IPG, the ``SDC neither produced the original Merger
Information, nor attempted to replicate it until March 28, 2014, all
the while knowing the evidentiary requirements for the introduction of
the study. . . .'' Id. at 3.
IPG continues:
Alan Whitt asserts that his analysis relied, inter alia, (i) on
a sample of television stations selected by Marsha Kessler [an MPAA
witness in past cable distribution proceedings, including the 2000-
2003 proceeding and the Phase I proceeding for the instant royalty
year], and (ii) household diaries of distant program viewing for
those programs from Nielsen's six ``sweep'' months. [Yet, l]iterally
no information or data regarding the station sampling process
exists, nor information or data that explains the methodological
processes utilized in connection with the produced Nielsen data.
Id. at 3 (internal quotations omitted).
IPG asserts that:
stations selected by Ms. Kessler for inclusion in the 1999 MPAA/
Nielsen study were altogether different than those appearing in data
produced by the SDC. . . . [Therefore,] Mr. Whitt's statement that
the SDC-produced data was derived from a sample of stations selected
by Marsha Kessler is simply inaccurate or, at minimum, without
evidentiary foundation [but] IPG has been denied any ability to
investigate that determination because of the SDC's failure to
produce underlying documents substantiating such assertion.
Id. at 4.
IPG further asserts that, in prior proceedings, Nielsen and the
MPAA have used a wide variety of sampling methodologies and methods of
data collection. IPG contends that with respect to the Nielsen data
produced by the SDC in the current proceeding, however, the SDC
provided none of those methodological details. Consequently, IPG
asserts that it has ``no means of determining the method by which the
stations on which the Whitt
[[Page 13426]]
analysis relies were selected, and no means to determine what Nielsen
data was collected, how it was collected, the limitations on the data,
the scope and meaning of the data, the possible alternatives that were
employed, etc.'' Id. at 5-6. As a result, IPG requests that the Judges
strike any evidence relying on or referring to Mr. Whitt's HHVH report.
Id. at 8.
D. Judges' Analysis and Ruling on the Motion
Much of IPG's Motion rehashes discovery issues that the Judges
addressed fully in the May 2, 2014, Order. The Judges will not revisit
those discovery-related issues. The Judges now consider only whether to
grant or deny IPG's Motion, which requests that the Judges preclude the
SDC from relying on or referring to the HHVH report on grounds of
admissibility.
IPG's arguments for excluding the HHVH report are that the SDC
failed to: (1) Retain or produce to IPG input data from the HHVH
report, (2) produce information relating to the sampling processes that
were followed for the selection of stations included as part of the
Whitt analysis, and (3) produce the methodological processes followed
by Nielsen in the creation of the Nielsen data that were referred to in
the HHVH report. See Motion at 7-8.
At oral argument on the motion, IPG's counsel contended that even
if the SDC did not have the underlying documents that IPG sought, the
SDC was required to create such documents and produce them to IPG. 09/
02/14 Tr. at 14-15. As a preliminary matter, the Judges view this
argument as yet another attempt by IPG to resurrect its complaint that
the SDC failed to meet its discovery obligations. The Judges already
addressed this issue in the May 2, 2014, Order.\13\
---------------------------------------------------------------------------
\13\ Even if the Judges had not addressed the issue in the May
2, 2014, Order, they would nonetheless reject IPG's assertion that
the SDC was obligated to create documents to comply with a discovery
request. The Judges have consistently held that ``[t]he limited
discovery permitted in proceedings before the Judges should permit
the parties to test admissible evidence, but not create an extensive
burden of time and expense.'' Order Granting In Part and Denying In
Part the Motion of SoundExchange to Compel XM Satellite Radio Inc.,
Sirius Satellite Radio Inc., and Music Choice to Produce Surveys and
Supporting Documents, Docket No. 2006-1 CRB DSTRA (May 15, 2007). In
the May 2, 2014, Order, the Judges ruled that the SDC had provided
IPG with sufficient discovery to enable IPG to test the HHVH report.
IPG points to no provision in the CRB rules that requires a party to
create documents in response to a discovery request. The Judges see
no reason in this instance to impose such a requirement by order.
---------------------------------------------------------------------------
IPG also asserts that the SDC's failure to create a document in
response to IPG's discovery requests somehow violated a statutory
provision dealing with written direct statements. At the hearing, IPG's
counsel contended that the SDC ``never put this information or alluded
to it or referenced or incorporated it by reference in an Amended
Written Direct Statement. Therefore, for the record, it does not exist.
It is not before [the Judges]. And as such, the SDC study is hopelessly
missing a piece, and therefore, it should not be heard. It should be
excluded.'' 09/02/14 Tr. at 15 (Att'y Boydston).
The requirement to file written direct statements is codified in
section 803(b)(6)(C) of the Copyright Act. That section circuitously
requires the Judges to issue regulations that require the parties to
file written direct statements and written rebuttal statements by a
date specified by the Judges. 17 U.S.C. 803(b)(6)(C)(i). The statutory
provision does not address the content of written direct statements.
Moreover, the regulation the Judges promulgated under that provision
does not impose the content requirements that IPG suggests.\14\
Therefore, the Judges reject IPG's assertion that the SDC violated the
statutory provisions dealing with the filing of written direct
statements. The HHVH report was properly before the Judges.\15\ On
balance, the Judges find that the SDC's written direct statement was
adequate to satisfy the requirements of the Act and applicable rules.
IPG's complaints about the completeness or persuasiveness of that
testimony go to the weight rather than the admissibility of the
testimony.
---------------------------------------------------------------------------
\14\ The rule states: ``[t]he written direct statement shall
include all testimony, including each witness's background and
qualifications, along with all the exhibits.'' 37 CFR 351.4(b). The
SDC's written direct statement included Mr. Whitt's testimony as
well as that of Mr. Sanders. The SDC included in its rebuttal
statement the testimony of Dr. Erdem. The SDC's written direct
statement may not have been exquisitely complete. Indeed, the SDC's
counsel concedes that Mr. Whitt's written testimony did not describe
a ``quality control'' process that he conducted to eliminate
duplicative entries and to fix errors in program titles. 09/02/14
Tr. at 37 (Att'y MacLean). The SDC contends, however, that Mr.
Whitt's process resulted in the elimination of a handful of program
titles, none of which was claimed by either party in this
proceeding. Id. at 37-8. The Judges find no persuasive evidence in
the record to contradict the SDC's contention, rendering the SDC's
omission harmless. Moreover, the Judges note that the dates for
Nielsen sweeps weeks, used by Dr. Erdem in his analysis to replicate
Mr. Whitt's report, either were produced to IPG or were otherwise
publicly available. See 04/08/14 Tr. at 23-24, 204 (Att'y MacLean).
The SDC satisfied its discovery obligations with respect to this
information.
\15\ IPG raised similar objections in the 2000-03 distribution
proceeding. Docket No. 2008-2 CRB CD 2000-03. In that proceeding,
the Judges excluded Mr. Whitt's testimony, which relied on data
similar to that which the SDC proffer in the current proceeding. The
Judges' decision not to consider Mr. Whitt's testimony in the 2000-
2003 proceeding, however, was based on the SDC's failure to provide Mr.
Whitt's testimony until its rebuttal case, three weeks before the
hearing. In that context, the Judges found the SDC's delay
``deprived IPG of the opportunity to review the work undertaken by
Mr. Whitt.'' 78 FR 64984, 65004 (Oct. 30, 2013).
---------------------------------------------------------------------------
IPG also objects to the purported lack of clarity surrounding the
way in which the television stations analyzed in the HHVH report were
selected. Mr. Whitt stated in his written direct testimony that the
television stations he studied in the report were based on a list of
stations compiled by Ms. Kessler. Ex. SDC-D-001 at 3. IPG, evidently
assuming that the list referred to by Mr. Whitt was the list of
stations that was attached to Ms. Kessler's written direct testimony in
Phase I of this proceeding,\16\ contends that it compared the selection
of stations in the Whitt HHVH report with Ms. Kessler's list and found
that the two do not correspond. IPG states that of Mr. Whitt's 72
stations, only half of them can be found in Ms. Kessler's list. 09/02/
14 Tr. at 16. IPG contends that it does not know where the other 36
stations that Mr. Whitt studied came from. Id.
---------------------------------------------------------------------------
\16\ Ms. Kessler's written direct testimony was included in the
prior testimony designated by the SDC for consideration in this
proceeding under 37 CFR 351.4(b)(2).
---------------------------------------------------------------------------
The SDC reply that the Kessler list that IPG compared with the
Whitt list was not the basis for the HHVH report. The SDC represent
that the Kessler list of stations that Mr. Whitt used for his report
was based on the Nielsen data that Ms. Kessler ordered for the study
that she prepared for the 2000-03 proceeding. 09/02/14 Tr. at 28-30
(Att'y MacLean). Mr. Whitt addressed this issue in his testimony in the
April hearing on IPG's discovery motion. 04/08/14 Tr. at 113-15
(Whitt). That being said, the SDC are unsure how Ms. Kessler determined
what Nielsen data to order. 09/04/14 Tr. at 29 (Att'y MacLean).
Nevertheless, the SDC's witness, Mr. Sanders, testified that the list
upon which the Whitt report was compiled was ``sufficiently
representative for the purpose that it is being put forth.'' Id. at 31.
The SDC further assert, based on an analysis by Dr. Erdem, that the
SDC's Nielsen sample, which was based on the Nielsen information that
was ordered by Ms. Kessler, ``does not have a bias in terms of coverage
of quarter-hours of IPG versus SDC programs. Or, if it does have a
bias, the same bias is in all of the data that IPG is using as well,
whatever bias there is.'' Id. at 32. Finally, the SDC state that they
``had absolutely nothing to do with choosing this [station] sample--it
was chosen years before we ever purchased it from MPAA--there
[[Page 13427]]
was absolutely zero incentive for everybody to intentional [sic] bias
the data in any way.'' Id. at 33.
For purposes of ruling on the Motion, the Judges do not examine the
weight, if any, they might place on the proffered evidence. Rather, the
Judges must examine whether the SDC offered the evidence in a manner
that was consistent with the applicable rules for offering this type of
evidence.
The Judges' procedural rules address evidence in proceedings before
the Judges. Rule 351.10(a) addresses admissibility of evidence. Under
the rule, evidence that is relevant and not unduly repetitious or
privileged is admissible. Proponents must authenticate or identify
written testimony and exhibits for them to be admissible. See 37 CFR
351.10(a). The admissibility requirements of authentication or
identification are satisfied by evidence sufficient to support a
finding that the matter in question is what its proponent claims. Id.
IPG does not contend that the SDC violated any provision of Rule
351.10(a); that is, that the Whitt report is irrelevant, unduly
repetitious, or privileged. Rather, IPG focuses on Rule 351.10(e). That
provision of the rule provides if studies or analyses are offered in
evidence, they must state clearly ``the study plan, the principles and
methods underlying the study, all relevant assumptions, all variables
considered in the analysis, the techniques of data collection, the
techniques of estimation and testing, and the results of the study's
actual estimates and tests.'' 37 CFR 351.10(e). This information must
be presented in a ``format commonly accepted within the relevant field
of expertise implicated by the study.'' Id. Facts and judgments upon
which conclusions are based must be ``stated clearly, together with any
alternative courses of action that were considered.'' Id. The party
offering the study into evidence must retain summaries and tabulations
of input data and the input data themselves. Id.
IPG asserts that by not explaining precisely how the Whitt report
was created, the SDC failed to provide an adequate foundation for the
report. In considering whether there was an adequate foundation for
admitting the Whitt report into evidence, the Judges must consider not
only the exhibit that contains the report but also any written or live
testimony offered to explain how the exhibit was created. In his
written direct statement, Mr. Whitt included the household viewing
report that he had prepared and discussed the sources of the data and a
description of how he prepared the report. In the April 8, 2014,
hearing on IPG's motion to strike portions of the SDC's written direct
statement, Mr. Whitt provided additional details about how he created the
report, including the sources of the data, the processes he followed to
merge Nielsen and Tribune data files, and the ``quality control''
process he used to eliminate erroneous program titles.
IPG's counsel and the Judges had ample opportunity to question Mr.
Whitt on all elements of the report. After noting IPG's objection, the
Judges admitted provisionally Mr. Whitt's written testimony during the
hearing on September 3, 2014. 09/03/14 Tr. at 416. IPG's counsel then
had another opportunity to cross-examine Mr. Whitt on the processes he
used to construct the report. On both occasions, Mr. Whitt was open and
forthright about how he prepared the report, including the manner in
which he used a list of stations based on a set of Nielsen data ordered
by Ms. Kessler for MPAA in a separate proceeding. See, e.g., 09/03/14
Tr. at 422 (Whitt) (``I just accepted whatever stations they sent
me.''). Mr. Whitt made no efforts to gloss over the potential
weaknesses in the preparation of the report. Indeed, the SDC's counsel
correctly identified Mr. Whitt as more akin to a fact witness than an
expert witness. 09/02/14 Tr. at 35 (Whitt).
In the end, the Judges are satisfied that the SDC provided an
adequate foundation for the admission of Mr. Whitt's written direct
statement into the record. That is not to say that there are not issues
with respect to how the HHVH report was created. The SDC concede as
much. See 09/02/14 Tr. at 24 (Att'y MacLean) (``[I]t is not that any of
the specific problems that the parties raised were invalid or that they
shouldn't be raised. . . .''). Not the least of these issues is the
fact that the Whitt report relies on a list of stations selected
according to criteria that were seemingly unknown even to Ms. Kessler
who purportedly selected the stations. These issues go to the weight,
not to the admissibility, of the report. For the foregoing reasons, the
Judges DENY IPG's Motion and admit Exhibit SDC-D-001 (Written Direct
Testimony of Whitt with Exhibits) for all purposes in this proceeding.
IV. Applicable Law and Precedent
Twice each year, CSOs deposit with the Copyright Office royalties
accrued for the retransmission of over-the-air television programming
outside the originating station's local broadcast area. The amount of
fee deposits is statutory. See 17 U.S.C. 111(d)(1). Every July,
copyright owners file claims for the funds on deposit for the preceding
calendar year's retransmissions. On motion of a claimant or sua sponte,
the Judges publish notice of the commencement of proceedings to
distribute those royalty funds.
By convention, claimants and claimants' representatives begin each
proceeding with an allocation process that has come to be called
``Phase I.'' \17\ Traditionally, the claimants divide themselves into
eight Phase I categories based upon the nature of the programs in which
they claim copyright.\18\ If the participants do not agree to an
allocation of deposited royalties among the Phase I categories, they
submit their controversy to the Judges for adjudication. Once the
allocation is decided, the claimants in each category seek
distribution. If the claimants within each category do not agree to the
distribution scheme among themselves, the Judges adjudicate disputes
and make a determination of the appropriate distribution among
claimants within each category. This process has become known as
``Phase II'' of the distribution proceeding.
---------------------------------------------------------------------------
\17\ The Copyright Royalty Tribunal (CRT), a predecessor to the
CRB, began bifurcation of the distribution proceedings to mitigate
what it perceived to be an unwieldy process. See 1979 Cable Royalty
Distribution Determination, 47 FR 9879 (Mar. 8, 1982). Bifurcation
of distribution proceedings is not mandated by statute or
regulation, but is acknowledged in the Judges' current regulations
at 37 CFR 351.1(b)(2).
\18\ The program categories are: Program Suppliers (syndicated
programming and movies); Joint Sports Claimants (live college and
professional team sports); Commercial Television (programs produced
by local commercial TV stations); Public Broadcasting; Devotional
Claimants; and Canadian Claimants. Two additional categories
represent non-TV interests: Music Claimants (copyright owners of
musical works carried on broadcast TV signals); and National Public
Radio (copyright owners of all non-music content broadcast on NPR
stations).
---------------------------------------------------------------------------
A. The Relevant Statutory Language
The Copyright Act (Act) does not mandate (or even suggest) a
formula for royalty distribution.\19\ As the Librarian \20\ has stated:
---------------------------------------------------------------------------
\19\ Section 111(d)(4) of the Act merely provides that, in the
event of a controversy concerning the distribution of royalties,
``the Copyright Royalty Judges shall, pursuant to Chapter 8 of
[title 17], conduct a proceeding to determine the distribution of
royalty fees.''
\20\ The Librarian was responsible for administering the
Copyright Arbitration Royalty Panel (CARP) process for distributing
cable royalties from 1993, when Congress abolished the CRT, a
predecessor adjudicative body, until 2005, when Congress established
the Copyright Royalty Judges program. The Librarian had the
obligation of reviewing CARP decisions and, on recommendation of the
Register, adopting, modifying, or rejecting them.
---------------------------------------------------------------------------
Section 111 does not prescribe the standards or guidelines for
distributing royalties collected from cable operators
[[Page 13428]]
under the statutory license. Instead, Congress decided to let the
Copyright Royalty Tribunal ``consider all pertinent data and
considerations presented by the claimants'' in determining how to
divide the royalties.
Distribution of 1993, 1994, 1995, 1996 and 1997 Cable Royalty Funds,
Order, in Docket No. 2000-2 CARP CD 93-97, 66 FR 66433, 66444 (Dec. 26,
2001) (quoting H.R. Rep. No. 1476, at 97 (1976)) (1993-1997 Librarian
Order).\21\
---------------------------------------------------------------------------
\21\ The 1993-1997 Librarian Order was vacated as moot after the
parties settled their appeals. Distribution of 1993, 1994, 1995,
1996 and 1997 Cable Royalty Funds, Notice of termination of
proceeding, Docket No. 2000-2 CARP CD 93-97, 69 FR 23821 (Apr. 30,
2004). The settlement and vacatur of the 1993-1997 Librarian Order
did not disturb the reasoning articulated therein. Id. at 23822.
---------------------------------------------------------------------------
The Act does require, however, that the Judges act in accordance
with prior determinations and interpretations of the Copyright Royalty
Tribunal, the Librarian, the Register of Copyrights (Register),
Copyright Arbitration Royalty Panels, to the extent that precedent is
consistent with the Register's opinions on questions of copyright law,
and decisions of the Court of Appeals relating to distribution
proceedings. See 17 U.S.C. 803(a)(1).
Determining the proper distribution of cable royalties among
claimants requires a determination of the ``relative marketplace
value'' of the respective claimants' programs. See, e.g., Program
Suppliers v. Librarian of Congress, 409 F.3d 395, 401 (D.C. Cir. 2005);
1993-1997 Librarian Order, 66 FR at 66445. The Judges defined
``relative marketplace value'' in detail in a previous Determination.
See Determination of the Distribution of the 2000-03 Cable Royalty
Funds, Docket No. 2008-2, CRB CD 2000-2003, 78 FR 64984, 64985-6 nn. 8
and 9 (October 30, 2013) (2000-03 Determination). In the present
Determination, the Judges adopt and restate the ``relative market
value'' standard they described in the 2000-03 Determination, and
provide further detail consistent with that standard, including detail
presented through the expert economic testimony in the present
proceeding.
To assess relative marketplace value, the Judges previously have
looked to hypothetical, simulated, or analogous markets, as Congress
has imposed the compulsory license regime in lieu of an unfettered free
market for cable retransmission of broadcast television programs. 2000-
03 Determination, 78 FR at 64986; see also 1993-97 Librarian Order, 66
FR at 66445; 1987 Music Determination, 55 FR at 11993. Consistent with
precedent, in the current proceeding the Judges look to the evidence
presented by the parties, if any, to identify the parameters of a
hypothetical market that would exist but for the compulsory license
regime.\22\
---------------------------------------------------------------------------
\22\ ``Simulations aim at imitating an economically relevant
real or possible system by creating societies of artificial agents
and . . . institutional structure. . . .'' A. Lehtinen and J.
Kuorikoski, Computing the Perfect Model: Why Do Economists Shun
Simulations?, 74 Phil. of Sci. 304, 307 (2007) (emphasis in
original). However, the parties to this proceeding did not proffer
evidence of any simulations. Further, the parties did not provide
evidence or testimony from sellers/licensors and buyers/licensees in
``analogous'' markets, such as perhaps the markets for cable
programming or syndication rights (nor the results of any surveys of
such market participants) that the Judges might use as benchmarks to
establish a distribution methodology in the present proceeding. The
SDC did provide, however, evidence of ratings from the local markets
in which the SDC and IPG programs aired.
---------------------------------------------------------------------------
B. The Economic Standard: ``Relative Market Value''
As explained in the 2000-03 Determination, to construct the
hypothetical market, it is important at the outset to appreciate the
reason for the statutory license and the concomitant distribution
proceedings. Statutory licenses substitute for free market negotiations
because of a perceived intractable ``market failure'' inherent in the
licensing of copyrights--particularly the assumed prohibitively high
``transaction costs'' of negotiating a multitude of bilateral contracts
between potential sellers and buyers. See, e.g., R. Picker, Copyright
as Entry Policy: The Case of Digital Distribution, 47 Antitrust Bull.
423, 464 (2002) (``The modern structure of . . . validating or
conferring rights in copyright holders yet coupling those rights with
statutory licenses has the virtue of mitigating the exercise of
monopoly power and minimizing the transaction costs of
negotiations.''); S. Willard, A New Method of Calculating Copyright
Liability for Cable Rebroadcasting of Distant Television Signals, 94
Yale L.J. 1512, 1519 (1985) (``One important reason for compulsory
licensing . . . was to avoid the `prohibitive' transaction costs of
negotiating rebroadcast consent.''); S. Besen, W. Manning & B.
Mitchell, Copyright Liability for Cable Television: Compulsory
Licensing and the Coase Theorem, 21 J.L. & Econ. 67, 87 (1978)
(``Compulsory licensing . . . has lower negotiating costs than a system
based on full copyright liability. . . .''). Thus, the hypothetical
market that the Judges must construct must be a market that would be
unencumbered by either transaction costs or the restrictions imposed by
the statutory license.
1. ``Relative'' Market Value
The Judges begin, as they did in the 2000-03 Determination, parsing
the phrase ``relative market value'' by first considering the import of
the word ``relative.'' The word ``relative'' denotes that the value of
any retransmitted program is to be determined in relation to the value
of all other programs within the bounds of the respective Phase I
category definitions, and thus can be expressed as a percentage of
total ``market value.''
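    Stated arithmetically, the relative market value of a claimant's programs is that claimant's value divided by the sum of the values of all programs in the category. The following short Python sketch is offered solely as an illustration of that percentage relationship; the program names and dollar figures are hypothetical and are not drawn from the record in this proceeding.

    # Illustrative only: relative market value expressed as a share of the
    # category total. The names and dollar values below are hypothetical.
    def relative_market_value(values):
        total = sum(values.values())
        return {program: value / total for program, value in values.items()}

    hypothetical = {"Program A": 60_000.0, "Program B": 25_000.0, "Program C": 15_000.0}
    print(relative_market_value(hypothetical))
    # {'Program A': 0.6, 'Program B': 0.25, 'Program C': 0.15}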
2. Relative ``Market Value''
In turn, ``market value'' is traditionally stated in decisional and
administrative law more fully as ``fair market value.'' The Supreme
Court has stated the traditional definition of ``fair market value'' as
``the price at which the property would change hands between a willing
buyer and a willing seller, neither being under any compulsion to buy
or sell and both having reasonable knowledge of relevant facts.'' U.S.
v. Cartwright, 411 U.S. 546, 551 (1973). It is necessary to further
define the various terms that comprise the foregoing definition of
relative market value.
a. The Hypothetical ``Willing Sellers'' (the Copyright Owners)
Copyright Owners seek to maximize profit from licensing their
programs for retransmission by CSOs. Copyright Owners' marginal costs
are low and approach zero. Most of the costs incurred in creating the
work are sunk, fixed costs. Even so, Copyright Owners seek to maximize
the revenue they receive from CSOs. Given the minimal marginal costs,
Copyright Owners, as the hypothetical willing sellers, will always have
an incentive to sell at some positive price, but will likely engage in
bargaining whereby a Copyright Owner might threaten to deny the license
unless the CSO offers the Copyright Owner's (undisclosed) reservation
price. See Besen, et al, supra, at 81.
b. The Hypothetical ``Willing Buyers'' (the CSOs)
For CSOs, the economics are less straightforward. CSO revenues are
derived from the sale of cable bundles (commonly described as
``packages'' or ``tiers'') to subscribers, i.e., the ultimate
consumers. In turn, many variables affect the number of consumers that
subscribe to a particular CSO's service, including the retransmitted
broadcasts that the CSO includes as part of its subscription
package.\23\
---------------------------------------------------------------------------
\23\ The compulsory license regime requires CSOs to license a
station's signal in its entirety, 17 U.S.C. 111(d)(1)(B), and to
retransmit the programs (including advertisements) without
alteration. 17 U.S.C. 111(c)(3). Therefore, retransmitting CSOs
cannot sell advertising on retransmitted broadcast channels in the
actual market under the compulsory license regime. However, in the
hypothetical market, where the limiting provisions of the compulsory
license regime would not apply, retransmitting CSOs arguably could
sell local replacement advertising, which would render viewership an
important metric of relative market value. However, this point was
not presented by either of the parties in the present proceeding.
---------------------------------------------------------------------------
[[Page 13429]]
To CSOs, the programs offered by the Copyright Owners are inputs--
factors of production--utilized to create the products that the CSOs
sell to their customers, viz., the various subscription bundles of
cable channels. In a hypothetical program market, CSOs would buy the
rights to retransmit programs as they would purchase any factor of
production, up to the level at which that ``factor price'' equals the
``Marginal Revenue Product'' (MRP) of that program. In simple terms, a
CSO in a competitive factor market would only pay for a license to
retransmit a program if the revenue the CSO could earn on the next
(marginal) sale of the final product were at least equal to that
price.\24\ In practical terms, why would a CSO pay $50,000 to
retransmit a program that the CSO estimates would add only $40,000 to
the CSO's subscriber revenue? See Besen, et al., supra, at 80 (``To the
cable system the value of carrying the signal is equal to the revenue
from the extra subscribers that the programming will attract and any
higher subscriber fees it can charge less the additional costs of
importing the program.'').\25\
---------------------------------------------------------------------------
\24\ A focus on marginal costs and benefits is not only
efficient for the hypothetical buyers and sellers, but also for the
consuming public: ``Optimal program diversity will result if cable
operators and the public they serve pay to copyright owners the
marginal value derived from viewing syndicated programming.''
Willard, supra, at 1518.
\25\ If the CSO, as a program licensee, had some degree of
monopsony power in the factor market, it could pay less than a price
equal to MRP, but still would pay to license programs in a quantity
at which MRP would equal the marginal cost to license an additional
program.
---------------------------------------------------------------------------
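    The factor-demand logic described above can be restated as a short decision sketch. The following Python fragment is purely illustrative; the function name is hypothetical and the dollar figures simply restate the $50,000/$40,000 example in the text, under the rule that a profit-maximizing CSO licenses a program only when its estimated marginal revenue product at least equals the asking price.

    # Illustrative sketch of the MRP licensing rule discussed above.
    # The helper name and figures are hypothetical, not drawn from the record.
    def would_license(asking_price, estimated_mrp):
        # A profit-maximizing CSO pays for a retransmission license only if the
        # program's estimated marginal revenue product covers the price.
        return estimated_mrp >= asking_price

    # The example from the text: a $50,000 license expected to add only
    # $40,000 in subscriber revenue would be declined.
    print(would_license(50_000, 40_000))  # False
    print(would_license(50_000, 65_000))  # True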
c. ``Neither Being Under Any Compulsion to Buy or Sell''
In the actual (i.e., non-hypothetical) market, terrestrial
broadcast stations create the program lineup, which is only available
for purchase by CSOs as a pre-bundled signal. The CSOs cannot
selectively license for retransmission some programs broadcast on the
retransmitted station and decline to license others; rather, the signal
must be purchased in toto. 17 U.S.C. 111(d)(1)(B) (statutory license
royalty computed based on number of ``distant signal equivalents'').
Is this required bundling a form of ``compulsion'' upon CSOs? In
the actual market, they are compelled to take every program pre-bundled
on the retransmitted distant station, despite the fact that the various
pre-bundled programs would each add different monetary value (or zero
value) in the form of new subscriber volume, subscriber retention, or
higher subscription fees. Indeed, some programs on the retransmitted
station may have so few viewers that CSOs--if they had the right--would
decide not to purchase such low viewership programs but for the
requirements of the compulsory license regime.
Further, certain programs may have more substantial viewership, but
that viewership might merely duplicate viewership of another program
that generates the same sub-set of subscribers. To restate the example
offered in the 2000-03 Determination, the viewers of reruns of the
situation comedy ``Bewitched'' may all be the same as the viewers of
reruns of ``I Dream of Jeannie,'' a similar supernatural-themed
situation comedy. However, ``Bewitched'' may have fewer viewers than
``I Dream of Jeannie.''
In the hypothetical market in which the compulsory licensing regime
did not exist, a rational profit-maximizing CSO that had already paid
for a license to retransmit ``I Dream of Jeannie'' would not also pay
for ``Bewitched'' in this hypothetical marketplace, because it fails to
add marginal subscriber revenue for the CSO. Rather, the rational CSO
would seek to license and retransmit a show that marginally increased
subscriber revenue (or volume, if market share was more important than
profit maximization), even if that program had lower total viewership
than ``Bewitched.''
Alternately stated, why should CSOs in the hypothetical market be
compelled to pay for a program based on its higher viewership, even
though it adds less value than another show with lower viewership?
Simply put, the hypothetical, rational profit-maximizing CSOs would not
pay Copyright Owners based solely on levels of viewership. Rather, the
hypothetical CSOs would (1) utilize viewership principally as a tool to
estimate how the addition of any given program might change the CSO's
subscriber revenue, (2) attempt to factor in the economics of various
bundles; and (3) pay for a program license (or eschew purchasing that
license) based on that analysis.
Thus, the Judges consider the hypothetical market to be free of the
compulsion that arises from the pre-bundling that exists in the actual
market.
On the other side of the coin, are the sellers, i.e., the Copyright
Owners, under any ``compulsion'' to sell? In the actual market, one in
which the terrestrial station signal is acquired in a single specific
bundle by a CSO, the answer appears to be yes, there is ``compulsion.''
Copyright Owners cannot carve out their respective programs and seek to
maximize their values to CSOs independent of the prepackaged station
bundles in which they exist.
Of course, in the ``hypothetical market'' that the Judges are
charged with constructing, it would be inappropriate not to acknowledge
the inherent bundling that would occur. That is, the bundling decision
is a ``feature'' rather than a ``bug'' in even a hypothetical market
for distant retransmissions in which the statutory license framework
does not exist. Thus, while Copyright Owners could offer to supply
their respective programs at given prices, the equilibrium market price
at which supply and demand would intersect would reflect the CSOs'
demand schedules, which are based in part upon the fact that the
buyers, i.e., the CSOs, would pay only a price that is equal to (or
less than) the MRP of that program in a bundle to be purchased by
subscribers.
3. The Optimal Economic Approach to Determining ``Relative Market
Value''
In the present proceeding, the Judges considered the general
interrelationship among bundling, subscribership, and viewership, and
their impact on ``relative market value,'' in more detail than in prior
proceedings. Specifically, the Judges inquired as to whether the
parties' experts had considered utilizing a method of valuation known
as the ``Shapley value'' methodology \26\ to determine their respective
allocations.
---------------------------------------------------------------------------
\26\ See L. Shapley, A Value for n-person Games, in H. Kuhn and
A. Tucker, Contributions to the Theory of Games (1953). A definition
and an example of a Shapley valuation are set forth in the text,
infra. For the statistical formula for a Shapley value, see 9/8/14
Tr. at 1075-79 (Erdem); see also SDC Proposed Findings of Fact (PFF)
¶ 64.
¶ 64.
---------------------------------------------------------------------------
Broadly stated, ``the Shapley value gives each player his `average
marginal contribution to the players that precede him,' where averages
are taken with respect to all potential orders of the players.'' U.
Rothblum, Combinatorial Representations of the Shapley Value Based on
Average Relative Payoffs, in The Shapley Value: Essays in Honor of
Lloyd S. Shapley 121 (A. Roth ed. 1988) (hereinafter, ``Roth'')
(quoting Shapley, supra). A Shapley valuation in the
[[Page 13430]]
present context is best understood through the following example: \27\
---------------------------------------------------------------------------
\27\ This example is inspired by a similar example set forth by
Professor Richard Watt, Managing Editor of the Review of Economic
Research on Copyright Issues and a past president of The Society for
Economic Research on Copyright Issues. See R. Watt, Fair Copyright
Remuneration: The Case of Music Radio, 7 Rev. of Econ. Res. on
Copyright Issues 21, 25-26 (2010).
---------------------------------------------------------------------------
Assume there is only one CSO (C), and there are two
program owners (P1 and P2) with programs available for retransmission.
In a hypothetical market, the Shapley model defines the values of
C, P1, and P2 under all of the possible orderings of arrival of the
three entities to negotiations and at each point of arrival.
For C, P1, and P2, there are the following 6 (that is 3 factorial,
or 3!) possible orderings by which each arrives in the market:
(1) C, P1, P2
(2) C, P2, P1
(3) P1, C, P2
(4) P1, P2, C
(5) P2, C, P1
(6) P2, P1, C
Assume the following.
(a) An entity (C, P1, or P2)--alone in the market--generates $0 in
retransmission value regardless of who that player is (because a cable
system without programming or a program without a CSO will not be
viewed and thus has no value);
(b) regardless of the order in which the respective owners of P1 and
P2 arrive in the market to attempt to license their respective
programs, both of their respective programs generate $0 in
retransmission value without a CSO (because programs without a CSO
cannot be retransmitted and therefore provide no value);
(c) if C is present, it generates $6 by retransmitting P1 alone and
$5 by retransmitting P2 alone;
(d) if all three players are present, then the retransmission of P1
and P2 by C generates an assumed synergistic value of $12.
The Shapley value of P1 in each of the six possible
orderings is thus:
$6 in ordering (1) (because P1 increases the value from $0 to $6);
$7 in ordering (2) (because P1 increases the value from $5 to the
synergistic $12);
$0 in ordering (3) (because P1 adds no value when it arrives first to
the market);
$0 in ordering (4) (because P1 adds no value when it arrives first to
the market);
$7 in ordering (5) (because P1 increases the value from $5 to the
synergistic $12); and
$0 in ordering (6) (because P1 does not add value if there is no CSO in
the market).
The Shapley value of P1 is the average value of P1 over all
possible arrival sequences, or
($6 + $7 + $0 + $0 + $7 + $0) / 6 = $20/6, or approximately $3.33.
By a similar calculation, the Shapley value of P2 is $2.83.
(Similarly, the Shapley value of C, the CSO, is $5.83.) The sum of the
values each provides is approximately $12, which equals the synergistic
business value generated when all three entities are present in the
market.
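For illustration only, the brief Python sketch below (no part of the record) mechanically reproduces this six-ordering average. The player labels and dollar figures are the assumed values from the example above, not evidence.

```python
from itertools import permutations

# Assumed coalition values from the example: any single entity alone, or the
# two program owners without a CSO, generates $0; C with P1 generates $6;
# C with P2 generates $5; all three together generate the synergistic $12.
def value(coalition):
    members = frozenset(coalition)
    if {"C", "P1", "P2"} <= members:
        return 12
    if {"C", "P1"} <= members:
        return 6
    if {"C", "P2"} <= members:
        return 5
    return 0

players = ["C", "P1", "P2"]
shapley = {p: 0.0 for p in players}

# Average each player's marginal contribution over all 3! = 6 arrival orders.
for order in permutations(players):
    arrived = []
    for p in order:
        before = value(arrived)
        arrived.append(p)
        shapley[p] += (value(arrived) - before) / 6

print(shapley)  # C: ~5.83, P1: ~3.33, P2: ~2.83
```

Because each ordering allocates the full coalition value among the three players, the three averages necessarily sum to the entire $12 generated when all three entities are present.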
Shapley valuations constitute ``the unique efficient solution''
because they ``valu[e] each player['s] direct marginal contribution to
[a] grand coalition.'' S. Hart and A. Mas-Colell, ``The Potential of
the Shapley Value,'' in Roth, supra, at 127-28. The Shapley value
analysis not only enriches the development of the relative market value
standard, but it also would allow the Judges in this proceeding to
carry out their statutory mandate to distribute the deposited royalties
by comparing the parties' respective valuation methodologies to that
optimal standard, to determine which of their methodologies more
closely reflects the optimal hypothetical market.
To summarize, as in the 2000-03 Determination, the Judges will
apply in this Determination a hypothetical market that contains the
following participants and elements: (1) The hypothetical sellers are
the owners of the copyrighted programs; (2) the hypothetical buyers are
the CSOs that acquire the programs as part of their hypothetical
bundles of programs; and (3) the requirement of an absence of
compulsion dictates that the terrestrial stations' initial bundling of
programs does not affect the marginal profit-maximizing decisions of
the hypothetical buyers and sellers.\28\
---------------------------------------------------------------------------
\28\ The construction of the hypothetical market is of
particular importance in this proceeding. As explained infra, IPG
mistakenly argues that the preexisting bundling of programs on the
retransmitted stations in the actual market renders ratings
irrelevant to a CSO that must purchase and retransmit the actual
bundle in toto. IPG confuses the actual market with the hypothetical
market the Judges are obligated to construct. The actual market is
distorted by the existence of the compulsory statutory license, and
the Judges are required to determine the values of the copyrighted
programs by hypothesizing an unregulated market in which such
statutory compulsion does not exist.
---------------------------------------------------------------------------
V. Description and Analysis of the Parties' Proposals for Distribution
A. The SDC Methodology
1. The Details of the SDC Methodology
The SDC's calculation of relative market value (SDC Methodology) is
based upon the analyses of two expert witnesses who testified on behalf
of the SDC in their direct case and upon certain designated testimony
from prior proceedings.\29\ The first live witness upon whom the SDC
relied was Mr. Whitt, a systems analyst, programmer and database
analyst, who had worked for a company he founded, IT Processing LLC (IT
Processing). 9/3/14 Tr. at 418 (Whitt).\30\ Mr. Whitt had formed IT
Processing to engage in ``massive data projects'' that required
``millions of unique items of data to be accurately and efficiently
entered and
[[Page 13431]]
analyzed.'' Whitt WDT at 2; Ex. SDC-D-001 at 2.
---------------------------------------------------------------------------
\29\ The SDC designated the following testimony from the 1998-99
Phase I Proceeding (Distribution of 1998 and 1999 Cable Royalty
Funds, Docket No. 2001-8 CARP CD 98-99) from the following
witnesses: (a) Marsha Kessler (a retired MPAA vice president,
responsible for retransmission royalties): June 2, 2003 (pp. 6347-
6454); June 3, 2003 (pp. 6456-6613); July 14, 2003 (pp. 9478-9491);
and July 15, 2003 (pp. 9724-9753); (b) Paul Lindstrom (a Nielsen
employee): June 9, 2003 (7175-7445); and (c) Paul Donato (a Nielsen
employee) June 9, 2003 (pp. 7445-7520). From the 2000-2003 Phase II
Proceeding (In the Matter of Distribution of 2000, 2001, 2002, and
2003 Cable Royalty Funds, Docket No. 2008-2 CRB CD 2000-2003), the
SDC designated testimony from the following witnesses: (a) Ms.
Kessler: June 3, 2013 (pp. 101-218); (b) Paul Lindstrom: June 3,
2013 (pp. 280-324); and June 4, 2013 (pp. 368-433); (c) Dr. William
Brown: June 6, 2013 (pp. 1364-1420); (d) Jonda Martin: June 3, 2013
(pp. 219-236); (e) Kelvin Patterson: June 3, 2013 (pp. 237-280); and
(f) Mr. Whitt: June 6, 2013 (pp. 1346-1363).
\30\ The SDC proffered Mr. Whitt's testimony from a prior
hearing in this proceeding (discussed in Part III, supra) conducted
on April 8, 2014, in lieu of eliciting his testimony during the
September hearing. IPG consented to this procedure (subject to its
foundational challenge as set forth in its Motion in Limine
discussed supra) and the Judges incorporated by reference Mr.
Whitt's April 8, 2014, testimony as part of the present record. 9/3/
14 Tr. at 413-15. IPG also cross-examined Mr. Whitt during the
September 2014 hearings, and the SDC then conducted redirect
examination of Mr. Whitt.
---------------------------------------------------------------------------
Mr. Whitt's work on behalf of the SDC was derivative of earlier
work he had undertaken on behalf of MPAA. More particularly, Mr. Whitt
had been engaged by MPAA ``to process large data files consisting of
cable and satellite copyright programming and viewing associated with
claims filed with the Copyright Royalty Arbitration Panels . . . and
[the] Copyright Royalty Board.'' Id. at 3.
According to Mr. Whitt, he was contacted by the SDC in 2006 to
assist in preparing their case in this proceeding. 4/8/14 Tr. at 106
(Whitt). The SDC engaged Mr. Whitt to utilize his prior work and data
from his MPAA assignment to prepare the HHVH Report for 1999, relating
to the retransmission of certain Devotional programming on broadcast
television stations that were distantly retransmitted to other markets.
Whitt WDT at 3 and Ex. 1 thereto; Ex. SDC-D-001 at 3 and Ex. 1 thereto;
4/8/14 Tr. at 106 (Whitt).
Mr. Whitt's 1999 HHVH Report was based on the following three data
sources:
(1) programs on a sample of television stations whose signals were
distantly retransmitted on cable--a sample that Mr. Whitt believed Ms.
Kessler, a former employee of the MPAA, had chosen based on whether the
signals were ``distant'' for cable copyright purposes;
(2) distant program viewing data from Nielsen, presented on a
quarter-hour basis, for programs from Nielsen's six ``sweeps'' months
of diary data (January, February, May, July, October and November)
(Nielsen Data); \31\ and
---------------------------------------------------------------------------
\31\ Nielsen ratings estimate the number of homes tuned to a
program based upon a sample of television households selected from
all television households. The findings within the sample are
``projected'' to national totals. Although there was no evidence or
testimony regarding how Nielsen conducted its data collection for
sweeps weeks in 1999, Mr. Lindstrom described the general process in
his testimony in the 2000-03 proceeding, which the SDC designated in
this proceeding. In that regard, Mr. Lindstrom testified that diary
data is collected in Nielsen's diary markets during November,
February, May, July, and in some cases October and March, which are
also known as the ``sweeps'' ratings periods (Nielsen Diary Data).
(Diaries are paper booklets in which each person in the household
manually records viewing information.) Nielsen mails seven-day
diaries to homes randomly selected by Nielsen to keep a tally of
when each television in the household was on, what it was tuned to,
and who in the household was watching. Over the course of a four-
week sweeps period, Nielsen mails diaries to a new panel of randomly
selected homes each week. At the end of each sweeps period, all of
the viewing data from the individual weeks are aggregated into
Nielsen's database. Each sweeps period yielded a sample of
approximately 100,000--aggregating to 400,000 households over the
course of a year. 2000-03 Determination, 78 FR at 6993; 6/3/13 Tr.
at 290, 296-98, 312 (Lindstrom) (SDC Designated Testimony).
---------------------------------------------------------------------------
(3) program data from Tribune Media Services (``TMS'') (including
station, date, time, title and program category) (TMS Data).
Id. at 3.
Mr. Whitt then matched the Nielsen Data with the TMS Data in order
to merge the Nielsen Data for reported quarter-hour segments with the
titles of the programs and other program information in the TMS Data.
Id. at 4; 4/8/14 Tr. at 108 (Whitt).\32\ In addition, Mr. Whitt
identified what he described as ``character strings'' from program
titles (44 in total) that he determined, in his discretion, were devotional
in nature but had not been captured in the merging of the Nielsen Data
and the TMS Data. Id. at 4-6. Mr. Whitt also used his discretion to
delete certain programs that he concluded were not in fact devotional,
although their titles initially suggested that they were devotional in
nature. 4/8/14 Tr. at 126-28 (Whitt).
---------------------------------------------------------------------------
\32\ More precisely, Mr. Whitt had performed this merger on
behalf of the MPAA. Then, after being retained by the SDC, he
derived his 1999 HHVH Report for Devotional programming by narrowing
that prior work on behalf of the MPAA to isolate the Devotional
programming. 4/8/14 Tr. at 108-10 (Whitt).
---------------------------------------------------------------------------
Mr. Whitt completed his analysis by ``aggregat[ing] by title and
station summing the adjusted household viewing hours from [the] Nielsen
[data].'' Whitt WDT at 6; Ex. SDC-D-001 at 3. Thus, Mr. Whitt was able
to identify the potentially compensable broadcasts of the programs
claimed by SDC and IPG that aired on the sample stations. Whitt WDT at
3; Ex. SDC-D-001 at 3.
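As a rough illustration of the merge, title-screening, and aggregation steps Mr. Whitt describes, the following Python sketch uses hypothetical file names, column names, and ``character strings''; the record describes the steps but not the underlying data layout, and Mr. Whitt's actual 44 search strings are not reproduced here.

```python
import pandas as pd

# Hypothetical inputs; the actual Nielsen and TMS schemas are not in evidence.
nielsen = pd.read_csv("nielsen_quarter_hours.csv")  # station, date, quarter_hour, hh_viewing_hours
tms = pd.read_csv("tms_programs.csv")               # station, date, quarter_hour, title, category

# Step 1: merge quarter-hour viewing records with program titles and categories.
merged = nielsen.merge(tms, on=["station", "date", "quarter_hour"], how="left")

# Step 2: flag Devotional programming by category or by title search strings
# (three placeholder strings shown; Mr. Whitt used 44 strings of his choosing).
strings = ["GOSPEL", "CHAPEL", "WORSHIP"]
is_devotional = (merged["category"] == "Devotional") | (
    merged["title"].str.upper().str.contains("|".join(strings), na=False)
)
devotional = merged[is_devotional]

# Step 3: aggregate household viewing hours by title and station.
hhvh = (
    devotional.groupby(["title", "station"])["hh_viewing_hours"]
    .sum()
    .reset_index()
)
print(hhvh.head())
```

The manual additions and deletions Mr. Whitt described would correspond to editing the search-string list and removing mismatched titles after inspection.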
The SDC also presented John Sanders as an expert ``to make a fair
determination of the relative market values of particular devotional
television programs claimed by the parties'' using Mr. Whitt's report.
Ex. SDC-D-002 at 2. Mr. Sanders previously had ``actively participated
in the appraisal of more than 3,000 communications and media
businesses,'' and his work has focused on, inter alia, ``the television
and cable industries and the appraisal of . . . subscribership-based
assets . . . .'' Id. at 3. In the course of that work, since 1982, Mr.
Sanders has frequently engaged in the valuation of television programs
for both buyers and sellers, and the valuation of cable systems, in
connection with market transactions (as contrasted with valuations as
an expert witness). 9/3/14 Tr. at 461-62 (Sanders). Accordingly, and
without objection, Mr. Sanders was qualified as an expert in the
valuation of media assets, including television programs. 9/3/14 Tr. at
463-64.
Mr. Sanders testified that if he were representing a buyer or a
seller of a license to retransmit a program into a distant market, the
first step in his analysis of value would be to ``measure the audience
that is being generated by the various programs in question . . . .''
9/3/14 Tr. at 476-79 (Sanders). Mr. Sanders testified that the reason
for this initial emphasis on audience viewership is as follows:
[I]n terms of a cable system, the objective is to have categories of
programming that will attract subscribers. But, within those
categories, to have individual program titles that viewers will
actually be interested in watching. And those that show greater
evidence of viewership will obviously attract more subscribers and,
[as a] consequence, would have greater value.
9/3/14 Tr. at 478-79 (Sanders).
Accordingly, Mr. Sanders based his relative valuation estimate
primarily on Mr. Whitt's 1999 HHVH Report. Sanders WDT at 4; Ex. SDC-D-
002 at 4. He relied on that measure of viewing for the following
reasons:
To allocate reasonably the available funds between [the] SDC and
IPG in this proceeding, it is my opinion that audience measurements
relying on surveys conducted by Nielsen Media Research are the best
available tools to allocate shares. . . .
. . .
Within the category of devotional programming, all of the
programs claimed by [the] SDC and IPG appear to be directed
predominantly to a Christian audience, and can therefore be thought
of as homogeneous in terms of the subscriber base to which they are
likely to appeal. Where programs are homogeneous, the most salient
factor to distinguish them in terms of subscribership is the size of
the audience. A religious program with a larger audience is more
likely to attract and retain more subscribers or [sic] the [CSO],
and is therefore of proportionately higher value.
Sanders WDT at 5-6; Ex. SDC-D-002 at 5-6. To ascertain the size of a
program's audience, Mr. Sanders relied upon Nielsen ratings because he
understood such ratings to be ``the currency of the broadcast and cable
industry, and . . . generally regarded as the most reliable available
measure of audience size.'' Id. As Mr. Sanders elaborated in his oral
testimony:
Ultimately, the valuation will be based upon the benefit that it
brings to the holder of the programming. And most commonly, the
measurement of that value is based upon the audience that that
programming is able to generate. . . . Nielsen audience measurement
data . . . is the most ubiquitous and authoritative source of
audience measurement data in the broadcasting and cable fields.
9/3/14 Tr. at 465-66 (Sanders).
[[Page 13432]]
Accordingly, Mr. Sanders added the household viewing hours for the
distantly retransmitted compensable programming for each party. This
calculation totaled 1,237,396 viewing hours for the SDC and 280,063 for
IPG. Sanders WDT at 9; Ex. SDC-D-002 at 9; see also id. at Appendix E.
In percentage terms, SDC-compensable programming accounted for 81.5% of
the devotional viewing of the two parties' programs, and IPG-
compensable programming accounted for 18.5%.
Based on his analysis, Mr. Sanders calculated the viewership (and
distribution) shares of the SDC and IPG programming as follows.
SDC: 81.5%
IPG: 18.5%
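These shares follow directly from the viewing-hour totals; a minimal arithmetic check, using only the figures quoted above:

```python
sdc_hhvh = 1_237_396  # SDC household viewing hours (Sanders WDT at 9)
ipg_hhvh = 280_063    # IPG household viewing hours (Sanders WDT at 9)
total = sdc_hhvh + ipg_hhvh

print(f"SDC: {sdc_hhvh / total:.1%}")  # SDC: 81.5%
print(f"IPG: {ipg_hhvh / total:.1%}")  # IPG: 18.5%
```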
Mr. Sanders was unable to provide any confidence interval for these
allocations, given that the statistical bases for the analysis were not
random in nature. However, Mr. Sanders testified that he was able to
confirm the overall ``reasonableness'' of his analysis by comparing the
results with an analysis of local Nielsen viewing data for the same IPG
and SDC programs in the February 1999 sweeps period. Mr. Sanders
testified that he believed the Nielsen analysis was performed through a
random sampling of viewers and constituted the ``granular'' or
``niche'' type of report that he understood to be necessary in
order to rely with greater certainty on the results of the analysis. 9/
3/14 Tr. at 512 (Sanders).
That analysis revealed the following distribution of viewers:
SDC: 71.3%
IPG: 28.7%
Mr. Sanders also noted that there was a ``correlation coefficient for
the HHVH shares relative to the Nielsen shares [of] approximately 0.75,''
which ``signifies that 75% of the variance between HHVH results for
different programs is connected with the variance between local ratings
for those programs.'' Sanders WDT at 10; Ex. SDC-D-002 at 10. The
Judges understand Mr. Sanders's testimony to mean that the
``connected'' or correlated nature of the two sets of viewership data
demonstrates that each data set is a form of confirmation as to the
reasonableness of the other data set.
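For illustration of the comparison Mr. Sanders describes, a correlation coefficient between program-level HHVH shares and local Nielsen shares can be computed as follows. The share figures in this sketch are placeholders, not the actual record data.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Placeholder per-program shares, for illustration only.
hhvh_shares = [0.30, 0.22, 0.18, 0.12, 0.10, 0.08]
local_nielsen_shares = [0.27, 0.25, 0.15, 0.14, 0.11, 0.08]

r = statistics.correlation(hhvh_shares, local_nielsen_shares)
print(f"correlation coefficient: {r:.2f}")
```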
Indeed, Mr. Sanders testified that, in his expert opinion, this
71.3%:28.7% ratio should be ``characterized as a reasonableness check''
on his analysis. 9/3/14 Tr. at 501. See also 9/3/14 Tr. at 510
(Sanders) (restating his ``reasonableness'' conclusion). Mr. Sanders
emphasized the importance of his ``reasonableness check,'' stating that
the ``body of data'' that led to a 71.3%:28.7% distribution ``is very
relevant and, in my opinion, should not be ignored.'' 9/3/14 Tr. at 503
(Sanders) (emphasis added). In that regard, Mr. Sanders further noted
that, had his primary analysis resulted in a 71.3%:28.7% distribution
and, had his ``reasonableness'' check resulted in an 81.5%:18.5%
distribution, he would have proposed the 71.3%:28.7% distribution. 9/3/
14 Tr. at 509-10 (Sanders).
2. Evaluation of the SDC Methodology
IPG sets forth several criticisms of the SDC Methodology. First,
IPG claims that the SDC Methodology incorrectly assumes that household
viewing constitutes an appropriate measure of relative market value.
Assuming arguendo viewership can be a basis for value, IPG asserts,
second, that the SDC did not provide a sufficient evidentiary
foundation for the Nielsen Data and, therefore, for the 1999 HHVH
Report. Third, again assuming, arguendo, that viewership is probative
of value, IPG argues that the incidence of ``zero viewing'' sample
points in the Nielsen Data utilized to create the 1999 HHVH Report
invalidates the Nielsen Data as a reliable source of viewership
information. Fourth, IPG asserts, again assuming, arguendo, the
probative nature of viewership, that the SDC could have provided better
data to support the SDC Methodology. Fifth, IPG argues that the SDC's
own reasonableness test demonstrates that IPG programming has a
significantly higher value than the 18.5% allocation proposed by the
SDC.
a. Viewership Is an Acceptable ``Second-Best'' Measure of Value, Even
Though It Is Not the Optimal Metric
IPG opposes a relative market value assessment based solely on
viewership because: (1) A CSO primarily benefits from attracting
subscribers rather than viewers; (2) retransmitting a program with more
viewers will not necessarily increase aggregate subscribership for a
CSO; and (3) retransmitting a program with fewer viewers might increase
a CSO's aggregate subscribership. Robinson WRT at 8.
The Judges agree that a relative market value assessment based
solely on viewership is less than optimal. In reaching this conclusion,
the Judges refer to their earlier discussion of the Shapley valuation
approach. In the present context, the Judges believe that the optimal
approach to determining relative market value would have been to
compare the SDC programs with those of IPG using Shapley or Shapley-
approximate valuations. Such an approach was not possible on the record
before the Judges in the current proceeding because of the non-
existence, unavailability, or, from the parties' perspective,
prohibitive development cost of the necessary evidence upon which such
a comparison could be made.
The SDC's expert economic witness, Dr. Erkan Erdem, agreed that, in
theory, a Shapley valuation would be a more precise way to measure
relative value in this proceeding. 9/8/14 Tr. at 1084 (Erdem). However,
as Dr. Erdem noted, there was no evidence in the record (or apparently
otherwise available) by which one could calculate the Shapley values in
this proceeding. 9/8/14 Tr. at 1084-85 (Erdem).\33\ Indeed, no expert
attempted to utilize a Shapley methodology to determine relative market
value of the SDC and IPG programs.
---------------------------------------------------------------------------
\33\ Professor Watt recognized this practical problem. See Watt,
supra note 27, at 27 (``The Shapley model provides a reasonable
working solution for regulators . . . . However, it does suffer from
a particularly pressing problem--that of data availability.'').
---------------------------------------------------------------------------
Dr. Erdem did acknowledge, however, that, as an alternative, a CSO
could utilize the general principles of a Shapley valuation to rank
ordinally the shows available for retransmission in a hypothetical
market, and thus create heuristic Shapley values. 9/8/14 Tr. at 1100-01
(Erdem). Such a ranking by CSOs in the present case could have served
as a basis for benchmarking the ``relative marketplace values.''
However, neither of the parties proffered a witness who had experience
in creating a roster of television programs.
Thus, the Judges have no evidence or testimony by which to
establish the relative marketplace values of the SDC and IPG programs
in the optimal theoretical manner or in a manner that uses ``Shapley-
approximate'' values. This evidentiary constraint places the Judges in
a ``second best'' situation. In that situation, it is not necessarily
optimal to attempt to satisfy the remaining conditions for efficiency,
because doing so may further worsen the already sub-optimal situation. See R.G.
Lipsey and K. Lancaster, The General Theory of Second Best, 24 Rev.
Econ. Stud. 11 (1956). Colloquially stated, the theory of the second
best may generally be defined as ``not letting the perfect be the enemy
of the good.'' When the parties have not proffered evidence or
testimony to permit Shapley-type valuations, it would not be efficient
also to reject valuations based predominantly on viewing data.
[[Page 13433]]
To reject viewing-centric valuations would require the Judges
instead either to adopt a less probative or seriously deficient
methodology,\34\ or figuratively to throw up their hands and refuse to
make any allocation or distribution.\35\ The Judges will not compound
the problem of the absence of the most theoretically probative evidence
by rejecting the SDC's viewer-centric valuations, notwithstanding the
limitations in using those valuations. \36\
---------------------------------------------------------------------------
\34\ The only other methodology presented in this proceeding is
the IPG Methodology. For the reasons set forth infra, the Judges
have concluded that the IPG Methodology is seriously deficient and
far less probative than the SDC Methodology. Thus, even if the
Judges had not analyzed and considered a Shapley-type valuation,
they would have given far more weight to the viewer-centric SDC
Methodology than to the seriously deficient IPG Methodology.
\35\ The Judges had considered whether they could decline to
make any distribution determination in light of the imperfections of
the parties' evidence, and asked counsel for the parties to provide
guidance as to that alternative. Cf. Final Determination, 1993-1997
Cable Proceedings (Phase II), 66 FR 66433, 66454 (Dec. 26, 2001)
(the Librarian accepted the Register's recommendation to reject a
CARP Report distributing royalties because ``the record . . . is
insufficient on which to base a distribution determination.''). Both
counsel in the present proceeding urged the Judges not to render a
determination that declined to distribute the royalties if a
determination that made an allocation could be based upon adequate
evidence and would not be arbitrary or capricious. 9/8/14 Tr. at
1172-75 (counsel for the SDC); 9/8/14 Tr. at 1176-79 (counsel for
IPG). The Judges are confident that this Determination satisfies
those standards.
\36\ The limitations might inure to IPG's benefit. As Dr. Erdem
explained, when there is an overlap in viewership between programs,
a purely viewership-based valuation, such as that proffered by the
SDC, might understate the relative value of programs with higher
viewership (i.e., the SDC programs) and overstate the IPG
distribution percentage compared to a Shapley valuation. 9/8/14 Tr.
at 1082-83 (Erdem). The Judges also note that in the hypothetical
market, several CSOs might be competing for the right to retransmit
programs. Thus, to use a prior example, if a CSO has purchased a
license to retransmit I Dream of Jeannie, rather than Bewitched,
because the former has more viewers and its viewers overlap
significantly with the latter's viewers, a competing retransmitter
might then find the total viewership for Bewitched so valuable
(given that the retransmission rights to I Dream of Jeannie were no
longer available) that Bewitched is that competing retransmitter's
first choice even under a Shapley-type valuation. Therefore, in a
competitive market, absolute viewership would be particularly
probative of program value.
---------------------------------------------------------------------------
The Judges' decision to issue a determination based on the extant
evidence, rather than to reject all evidence because it is less than
optimal, is consistent with D.C. Circuit precedent. Specifically, the
D.C. Circuit has held that, in making distributions under Section 111
of the Copyright Act, mathematical precision is not required. See
National Ass'n of Broadcasters v. Librarian of Congress, 146 F.3d 907,
929, 932 (D.C. Cir. 1998); Nat'l Cable Television Ass'n v. Copyright
Royalty Tribunal, 724 F.2d 176, 187 (D.C. Cir. 1983). Rather, the
Judges may render a determination premised on ``the only'' evidence
presented by the parties, notwithstanding that ``the character of the
evidence presented'' may fall short of more precise evidence that the
parties did not or could not present. See Nat'l Cable Television, 724
F.2d at 187.
Applying a viewership-based model of valuation in deciding
distribution allocations also is consistent with Library precedent.
Specifically, in an analogous context in a Phase I proceeding, the
Librarian held that a measure of ``relative market value'' could be
made by reliance on viewership information when a more optimal
valuation tool was not available. Distribution of 1998 and 1999 Cable
Royalty Funds, Docket No. 2001-8 CARP CD 98-99, 69 FR 3606, 3614
(January 26, 2004) (noting that survey evidence may be superior to
viewing evidence but, in the absence of that superior evidence, viewing
information can properly be relied upon by the factfinder in a
distribution proceeding).
IPG's own witness acknowledges the importance of viewership data
generally in assessing the value of programming. In her oral testimony,
Dr. Robinson conceded that viewership is an important metric in the
determination of relative market value. 9/2/14 Tr. at 175; 9/4/14 Tr.
at 784 (Robinson). Additionally, Dr. Robinson acknowledged that
viewership is important to a CSO in order to retain subscribers, 9/4/14
Tr. at 777-78 (Robinson), confirming the common sense idea that
subscribers would not continue to subscribe if they did not watch the
offered programming.
The Judges are also confident that, generally, Nielsen-derived
viewership data presents a useful measurement of actual viewership.
They base this conclusion on, among other things, the fact that the
television industry relies on Nielsen data for a wide range of business
decisions. The SDC's expert industry witness, Mr. Sanders, testified
that those in the television industry consider viewership data, as
compiled by Nielsen, to be the best and most comprehensive measure of
viewership. 9/3/14 Tr. at 480-81 (Sanders). Mr. Sanders acknowledged
that the Nielsen Data are not perfect, but that their status as the
best and most comprehensive measure of viewership has caused the
television industry to utilize Nielsen data as a ``convention'' for
``economic decision makers.'' Id. IPG did not present any evidence to
rebut either of these points.
If the Judges were to discount the Nielsen Data in this proceeding
simply on the basis that Nielsen data are imperfect, the Judges would
in essence be substituting their own opinion of the Nielsen yardstick
for the collective opinion of the ``economic decision makers'' in the
market. The Judges will not engage in such substitution; it is their
job to develop a hypothetical market by eliminating the impact of the
compulsory licensing regime--but otherwise to hew as closely as is
reasonably appropriate to the conduct, performance, customs and
standards of the actual market.
Despite the Judges' conclusion that viewership is a type of metric
they may consider, the Judges must also determine whether the
particular viewership analysis undertaken by the SDC contains
imperfections, as noted by IPG or otherwise. See, e.g., 1987
Devotional Determination, 55 FR at 5650; 1986 Determination, 54 FR at
16153-54 (noting that viewing measurements might not be perfect and
must be appropriately adjusted if claimants are able to prove that
their programs have not been measured properly or may be significantly
undermeasured). Accordingly, the Judges must analyze the SDC's
particular viewership evidence and address the issues raised by IPG in
that regard.
b. The Evidentiary Foundation for the SDC Methodology
(1) ``Replication'' and ``Testing'' of the SDC's HHVH Report
The SDC's viewership evidence consisted largely of the HHVH Report
presented by SDC's witness, Mr. Whitt. IPG asserts that the SDC did not
provide sufficient underlying data to allow IPG's expert, Dr. Robinson,
to test the accuracy of the SDC's HHVH Report for 1999. 9/4/14 Tr. at
755-56, 765-68 (Robinson). However, the Judges disagree with IPG's
assertion, based upon Dr. Robinson's own testimony. Specifically, Dr.
Robinson testified that she indeed ``merged the underlying data and ran
the search terms for devotional programming [and] reached substantially
the same results [as the SDC] in all material respects.'' Id. at 850-61
(Robinson). In her prior testimony on IPG's Motion to Strike, Dr.
Robinson had presaged her subsequent successful replication of the HHVH
Report by admitting that she was able to merge the Nielsen Data and the
TMS Data, run Mr. Whitt's search terms and test the accuracy of his
results. 4/8/14 Tr. 68-69 (Robinson); Order Denying Motion to Strike at
6. Based on Dr.
[[Page 13434]]
Robinson's testimony, the Judges conclude that the HHVH Report was
replicable and that the results were capable of being tested. As a
result, the Judges conclude that the report should carry at least some
weight in assessing the relative market value of the SDC and IPG
programs.
(2) Issues Regarding the Kessler Sample
IPG also criticizes the HHVH Report because the SDC (1) did not
produce a witness with ``firsthand knowledge of the method or basis for
the station sample selection'' used to create the Kessler sampling of
stations, (2) presented no evidence directly establishing that Ms.
Kessler selected the stations appearing in the Nielsen Data, and (3)
presented ``[n]o information or data regarding the station sampling
process.'' See IPG PFF at 26-29.
There is some validity to IPG's criticisms. The SDC did not call
Ms. Kessler as a witness to explain how she selected her 1999 sample of
stations. Further, Mr. Whitt acknowledged that he had not participated
in the selection of the Kessler Sample of stations, so he had no
knowledge of the method by which those stations were selected. 4/8/14
Tr. at 112 (Whitt). The extent of Mr. Whitt's knowledge in this regard
was limited to his recollection that ``the MPAA conducted a detailed
study of what stations to select[,] . . . and then I was given a list
of those stations[,] and then that's what I used to combine the two
files. . . . So, all the Nielsen stations should have represented the
complete list of the Kessler stations.'' 4/8/14 Tr. at 113 (Whitt); 9/
3/14 Tr. at 444 (Whitt).\37\
---------------------------------------------------------------------------
\37\ Although the SDC provided an example of such a Kessler
Sample to IPG in discovery (from the Phase I 1999 proceeding), the
SDC did not represent that this earlier sample constituted the
sample used to select the stations identified in the Nielsen Data.
See 4/8/14 Tr. at 229 (SDC counsel ``stipulat[ing]'' that ``Ms.
Kessler's list from Phase I is not the list of stations that was
ordered from Nielsen'').
---------------------------------------------------------------------------
Further, the SDC's expert witness, Mr. Sanders, admitted that the
Kessler Sample and, derivatively, the HHVH Report and his own report
are subject to valid criticism because the Kessler Sample--upon which
both reports rely--was created by a non-random selection of stations.
9/3/14 Tr. at 496 (Sanders).\38\
---------------------------------------------------------------------------
\38\ In the 2000-03 proceeding, Ms. Kessler testified that her
sampling was not (and was not intended to be) a random sample. See
6/3/13 Tr. at 122-25 (Kessler).
---------------------------------------------------------------------------
IPG properly takes the SDC to task for relying on only a small
portion (72 of 800, or 9%) of distantly retransmitted stations.\39\ See
9/4/14 Tr. at 626 (Sanders) (confirming that Mr. Whitt's analysis
covered 72 stations). \40\ As IPG noted, in 1986 the CRT found a
study of 18.8% of 622 total stations to be not sufficiently large to be
``perfectly projected to other stations. . . .'' IPG PFF at 50
(emphasis added) (citing 1983 Cable Royalty Distribution Proceeding,
Docket No. CRT 84-1 83CD, 51 FR 12792, 12794 (April 15, 1986)).\41\
---------------------------------------------------------------------------
\39\ All things being equal, the larger the sample size, the
more likely it is that the sample will be representative of the
population the sample purports to measure. Although sampling 100% of
the population is ideal, it is typically not cost effective or
practicable to sample an entire population. The smaller the sample
size, however, the greater the margin of error. See H. Zeisel, . . .
And Then There Were None: The Diminution of the Federal Jury, 38 U.
Chi. L. Rev. 710, 718 (1971).
\40\ IPG also asserts that there is an inconsistency between the
number of stations (123) in the Kessler Sample and the number of
stations (72) in the sample analyzed by Mr. Whitt. See IPG Proposed
Findings of Fact at 28. That claimed inconsistency is a red herring,
however, because the sample that IPG claims may be the ``Kessler
Sample'' was a Phase I sample she had selected--one that the SDC
acknowledged was not the sample from which Mr. Whitt identified
stations with Devotional programming in this Phase II proceeding.
See, e.g., 4/8/14 Tr. at 113-15, 229.
\41\ Dr. Robinson also pointed out that the Kessler Sample's
apparent exclusion of Canadian stations suggests that the sample was
unrepresentative. By comparison, Dr. Robinson's own station
selection contains only a single Canadian station on which programs
claimed in this proceeding were broadcast; that station broadcasted
both an IPG program and an SDC program. 9/8/14 Tr. 1092 (Erdem). The
Judges find no persuasive evidence in the record that the exclusion
of Canadian stations from the HHVH Report materially affects the
results as to either side in this case. Therefore, the Judges
conclude that the probative value of the HHVH Report is not
diminished by the absence of Canadian stations. Accord Distribution
of the 2000-03 Cable Royalty Funds, 78 FR at 64998 (``The Judges
conclude that, while the exclusion of the Canadian stations was an
error, it did not have a significant effect on the relative shares
computed by MPAA'').
---------------------------------------------------------------------------
The Judges acknowledge that the Kessler Sample was non-random.\42\
That being said, the manner in which the sample was chosen will
influence the weight the Judges place on the station sample, and by
extension, on the HHVH Report. For example, the presence of a clear
bias either in favor of or against a particular participant in the
current proceeding would render the report all but useless. Therefore,
for the Judges to give any weight to the SDC Methodology, the Judges
must analyze the origination of and the purposes for creating the
Kessler Sample.
---------------------------------------------------------------------------
\42\ Given this analysis, it is perhaps inaccurate to continue
referring to the SDC's sample of stations as the ``Kessler Sample.''
However, because the parties have identified the sample in this
manner, for ease of reference the Judges have continued with that
short-hand identifier in this Determination.
---------------------------------------------------------------------------
The SDC argues that the Judges can and should rely on the Kessler
Sample notwithstanding the aforementioned defects. Mr. Sanders opined
that the non-random nature of the Kessler sample, and its uncertain
genesis, do not pose a problem because:
The Kessler Sample ``employs viewing results from the most
distantly retransmitted broadcast stations as reported by Form 3 cable
systems.'' \43\
---------------------------------------------------------------------------
\43\ ``Form 3 cable systems'' are cable systems whose semiannual
gross receipts for secondary transmissions total $527,600 or more,
and are thus required to file statements of account on Copyright
Office form SA3. See 37 CFR 201.17(d)(2)(ii).
---------------------------------------------------------------------------
Although the Kessler Sample is non-random, it is ``close
to a census,'' because ``the most important and relevant titles [of]
the principal programs of all SDC- and IPG-represented claimants appear
in the survey.'' (Emphasis added).\44\
---------------------------------------------------------------------------
\44\ When questioned by the Judges, Mr. Sanders acknowledged
that he would have no basis for also asserting that the ``Kessler
Sample'' approximates a ``census'' of all retransmitted stations or
of all broadcasts of IPG and SDC programs. 9/4/14 Tr. at 637-39
(Sanders). Moreover, IPG takes the SDC to task for relying on a
sample of only 123 stations (about 17.5%) of the approximately 700
stations distantly retransmitted by Form 3 cable systems.
---------------------------------------------------------------------------
The Kessler Sample comprises many of the regions
identified by Nielsen as ``Designated Market Areas (DMAs),'' \45\ and
the first 10 stations in the Kessler Sample covered approximately 30-
40% of the population of the country, thereby including some of the
largest stations.
---------------------------------------------------------------------------
\45\ The term ``DMA'' is used by Nielsen to identify an
exclusive geographic area of counties in which the home market
television stations hold a dominance of total hours viewed. See
www.nielsenmedia.com/glossary/terms/D/ (last visited December 3,
2014).
---------------------------------------------------------------------------
There is no evidence ``to suggest that the sample was
chosen to benefit or prejudice either party in this proceeding [and] .
. . it is neutral on that score.''
Sanders WDT at 2; Ex. SDC-D-002 at 7; 9/4/14 Tr. at 627 (Sanders). Mr.
Whitt likewise defended the use of the Kessler Sample, observing that
``it appeared that the stations were national, geographically scattered
around the country[, a]nd they included several large stations, but
also a few small stations.'' 9/3/14 Tr. at 420 (Whitt).
Under cross-examination, however, Mr. Sanders did acknowledge that
many large metropolitan areas were not represented in the Kessler
Sample of stations. He noted the ``possibility'' that there was no
measurable viewing of the SDC and IPG programs in those areas or that
the programs were not retransmitted in those areas. 9/4/14 Tr. at 631-
33 (Sanders). Of course, those speculative ``possibilities'' are
precisely the sort of concerns that a truly random sample would address
objectively. The non-random nature of the Kessler Sample leaves
unanswered the question
[[Page 13435]]
of why those metropolitan areas were not represented.
Mr. Sanders concluded that, on balance, he could nonetheless give
some weight to this non-random selection of stations. 9/3/14 Tr. at
498-500. It is noteworthy that IPG's expert, Dr. Robinson, likewise
acknowledged that even a non-random sample can be representative and
therefore probative of facts concerning an entire population. 9/3/14
Tr. at 234-35 (Robinson). In fact, Dr. Robinson testified that the
results of her own non-random sample were representative of the
population she was measuring (subscriber fees paid to CSOs) because,
``as a practical matter . . . in terms of understanding the population
that we care about, if we have the majority of the data, then at least
we know the truth for the majority of the data. . . .'' 9/2/14 Tr. at
156 (Robinson).
Non-random (a.k.a. ``nonprobability'') sampling, although inferior
to random sampling, can be of some limited use. As explained in a
treatise on the subject:
[N]onprobability samples cannot depend upon the rationale of
probability theory. At least with a probabilistic sample, you know
the odds or probability that you have represented the population
well. You can estimate the confidence intervals for the statistic.
With nonprobability samples, you may or may not represent the
population well . . . . In general, researchers prefer probabilistic
or random sampling methods over nonprobabilistic ones, and consider
them to be more accurate and rigorous. However, in some
circumstances in applied social research there may be circumstances
where it is not feasible, practical or theoretically sensible to do
random sampling.
W. Trochim and J. Donnelly, Research Methods, The Concise Knowledge
Base at 41 (2005) (emphasis added).
In the present case, ``feasibility'' was certainly a constraint
because, as Mr. Sanders explained, it was cost-prohibitive for the SDC
to invest additional money into the development of evidence. The
costliness of undertaking random sampling can render an analysis
unfeasible. As one survey organization has noted, ``costs are important
and must be considered in a practical sense'' and therefore a ``broader
framework'' is needed to assess the results of nonrandom sampling in
terms of ``fitness for purpose.'' Rep. of the Am. Ass'n of Pub. Opinion
Res. Task Force on NonProbability Sampling at 96 (2013).
To summarize, had the HHVH Report been based on a random sample of
stations, it would have been more probative. Nevertheless, the Kessler
Sample was not prepared in anticipation of the current proceeding and
contained no discernible bias either in favor of or against the
programs that are at issue in this proceeding. Cost is a reasonable
factor for the parties to consider in preparing evidence for a
proceeding and, given the relatively modest amount of royalties
involved in the current proceeding, it likely would not have been cost
effective for the SDC to conduct an entirely new study based on a
random sample of stations, even assuming that one could have been
prepared so long after the royalty year at issue. Therefore, the Judges
find that the Kessler Sample is sufficiently robust to allow the Judges
to afford some weight to the SDC Methodology while remaining mindful of
its deficiencies.
(3) Imperfections in the Nielsen Data
Mr. Sanders acknowledged that the particular Nielsen Data utilized
to prepare the 1999 HHVH Report was not as granular as he would have
preferred. Specifically, Mr. Sanders explained that the 1999 HHVH
Report was imperfect because it was based upon a ``very, very thin
slice'' of the broader broadcasting or programming field. 9/3/14 Tr. at
519. When such an extremely narrow ``slice'' of the market is the
subject of the analysis, according to Mr. Sanders, it is preferable to
obtain a ``niche'' Nielsen report that focuses on the narrow market
that is the subject of the study. 9/3/14 Tr. at 514-15 (Sanders). In
this particular case, Mr. Sanders acknowledged therefore that, because
``it is distant signal viewing that is the actual focus of the project,
[this] would be an example where a customized report would be done.''
9/3/14 Tr. at 485 (Sanders) (emphasis added).
Furthermore, the SDC did not disclose the margins of error or the
levels of confidence associated with the data underlying the HHVH
Report. Without this information, the Judges cannot assess the
reliability of any statistical sample. The Judges infer that, had the
SDC possessed such information, or if such information underscored the
reliability of the Nielsen data, the SDC would have produced it.
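For context only, the disclosure described above would ordinarily take the form of a margin of error around each estimated viewing proportion. A minimal sketch of the standard 95% calculation for a simple random sample, using hypothetical inputs rather than anything in the record:

```python
import math

def margin_of_error(p, n, z=1.96):
    # Standard 95% margin of error for a proportion p estimated from a
    # simple random sample of n households (illustrative formula only).
    return z * math.sqrt(p * (1 - p) / n)

p_hat = 0.02  # hypothetical share of sampled households viewing a program
n = 400       # hypothetical number of diary households in the sample
print(f"{p_hat:.1%} +/- {margin_of_error(p_hat, n):.2%}")  # 2.0% +/- 1.37%
```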
Further, in the 2000-03 proceeding, Paul Lindstrom, one of the two
Nielsen witnesses whose prior testimony the SDC designated for
consideration in this proceeding, acknowledged that the samples used by
Nielsen to measure distant retransmissions are relatively small, and
therefore do not measure viewership as accurately as a larger sample
would. Accordingly, Mr. Lindstrom acknowledged that
``[t]he relative error on any given quarter-hour for any given station
. . . would be very high,'' 6/3/13 Tr. at 303 (Lindstrom). Despite
these shortcomings, the SDC relied upon Mr. Whitt's HHVH Report, in
lieu of investing in a ``niche'' Nielsen report, 9/3/14 Tr. at 514
(Sanders),\46\ and without providing information regarding the levels
of confidence and margins of error associated with the HHVH Report upon
which it has relied.
---------------------------------------------------------------------------
\46\ Mr. Sanders had informed the SDC that any attempt to obtain
superior data would have been cost-prohibitive, i.e., subjecting the
SDC to ``hundreds of thousands of dollars of additional costs,'' for
an amount at stake of ``somewhere north of a million dollars,'' and
that the SDC agreed not to invest additional sums to acquire more
data. 9/3/14 Tr. at 469-72 (Sanders). He also speculated that it
might have been impossible to acquire better data, but the
anticipated expense apparently foreclosed any attempts to learn if
superior data could be acquired or developed. Id. In any event, Mr.
Sanders conceded on cross-examination that he never attempted to
contact anyone at MPAA (or apparently anyone else) to determine if
better data could be acquired. 9/3/14 Tr. at 591-92 (Sanders).
---------------------------------------------------------------------------
In an attempt to minimize the impact of the thinness of this slice
of data, Mr. Sanders shifted the focus, distinguishing ``fully
informed'' market participants from ``all-knowing'' participants. In
his opinion, willing sellers and willing buyers in the marketplace for
television program copyright licenses would consider themselves ``fully
informed'' if they had access merely to the information upon which he
relied, even if they lacked the more granular data of a special
``niche'' Nielsen report of distant viewing of the devotional
programming at issue. 9/3/14 Tr. 474-75 (Sanders). As Mr. Sanders
added, ``fully informed'' in the context of the licensing of television
programs
simply means having adequate knowledge of the relevant facts and
circumstances to the issue or the proposed transaction at hand. . .
. I don't think in any engagement I've ever been involved in . . .
we have had all the information we would like to have. Typically, a
valuation exercise is endeavoring to reach a conclusion based upon
the information that is available.
9/3/14 Tr. 474-75 (Sanders).
Additionally, in economic terms, Mr. Sanders's testimony is
consistent with the concept of ``bounded rationality.'' Willing buyers
and willing sellers in any market \47\ are unlikely to have complete
information regarding all of the variables that could contribute to the
setting of a market price. It would be humanly impossible to calculate
all the relevant economic variables, and it would be economically
inefficient to expend the time sufficient to make such calculations
even if they were possible. Thus, economists recognize that willing
buyers and willing sellers are bounded by the ``external constraint[ ]
. . . [of] the cost of searching for information in
[[Page 13436]]
the world . . . [and they] attempt to make optimal choices given the
demands of the world leading to the notion of optimization under
constraints.'' G. Gigerenzer, Is the Mind Irrational or Ecologically
Rational? in F. Parisi & V. Smith, The Law and Economics of Irrational
Behavior at 38 (2005). Thus, ``[t]he focus on the constraints in the
world has allowed economists to equate bounded rationality with
optimization under constraints.'' Id. at 40.
---------------------------------------------------------------------------
\47\ The Judges note that the economic experts for willing
buyers and willing sellers likewise are subject to inevitable
constraints.
---------------------------------------------------------------------------
Finally, IPG leveled a broad criticism of the SDC Methodology,
asserting that it is ``the product of several degrees of projection.''
Robinson AWDT at 7 n.10. That is, the SDC derived its royalty
distribution by analyzing the viewership of a few sampled individual
airings projected over the population of a Nielsen Designated Market
Area during ``sweeps'' weeks, and then projected over the entire year,
for only a relatively small (nonrandom) set of stations projected to
represent all retransmitted stations. Id. The Judges recognize the
validity of this criticism. However, the nature of viewership-type
estimates is to engage in such sampling and extrapolation. Thus, the
SDC Methodology may be compromised, but it is not subject to outright
disqualification.
(4) The Incidence of Zero Viewing
IPG criticizes the SDC Methodology because it is based on what IPG
characterizes as a ``disproportionately large number of `0' entries
[i.e., zero viewing sampling points] in the Nielsen data for distant
viewing.'' IPG PFF at 38. More particularly, IPG notes that the 1999
Nielsen Data include a recorded ``0'' for 72% of all measured
quarter-hours of broadcasts, and a recorded ``0'' for 91.2% of all
quarter-hours of devotional broadcasts. Id.
Zero viewing sampling points represent the quarter-hour sampling
points at which no sample households recorded that they were viewing
that station. See 2000-03 Determination, 78 FR at 64995. IPG criticized
the incidence of zero viewing sampling points in the 2000-03
proceeding, and the Judges addressed the issue in their Determination
in that proceeding.
[T]he Judges agree with Mr. Lindstrom that these ``zero
viewing'' sampling points can be considered important elements of
information, rather than defects in the process. As Mr. Lindstrom
testified, when doing sampling of counts within a population, it is
not unusual for a large number of zeros to be recorded, 6/4/13 Tr.
at 391-93, 410 (Lindstrom), and those ``zero viewing'' sample points
must be aggregated with the non-zero viewing points. 6/3/13 Tr. at
323 (Lindstrom).
. . . .
[A]s Mr. Lindstrom testified, distantly retransmitted stations
typically have very small levels of viewership in a television
market fragmented (even in the 2000-2003 period) among a plethora of
available stations. 6/4/13 Tr. at 393 (Lindstrom). Thus, it would be
expected, not anomalous, for Nielsen to record some zero viewing for
any given quarter-hour period within the diary sampling (sweeps)
period.
Id.\48\
---------------------------------------------------------------------------
\48\ The SDC designated Mr. Lindstrom's testimony in the 2000-03
cable distribution proceeding for consideration in this present
proceeding.
---------------------------------------------------------------------------
In the present proceeding, Mr. Sanders offered the following
practical reasons why zero viewing would be recorded for these
retransmitted programs: (1) There is much less viewing of out-of-market
signals, (2) the lion's share of viewing in any market is going to be
viewing of the local stations, (3) stations within a market tend to
have a long legacy and a history in the market, (4) stations within a
market have preferred dial positions, and (5) local television stations
devote incredible resources to promoting themselves. 9/4/14 Tr. at 681-
83 (Sanders). This testimony was not rebutted by any IPG witness.
Despite these seemingly reasonable and credible explanations of
``zero viewing'' sampling points, the probative force of these ``zero
viewing'' data points, as a general matter, is not without doubt. As
the Judges also noted in the 2000-03 Determination regarding Nielsen
sampling:
The sample size is not sufficient to estimate low levels of
viewership as accurately as a larger sample. Mr. Lindstrom
acknowledged that ``[t]he relative error on any given quarter-hour
for any given station . . . would be very high,'' 6/3/13 Tr. at 303
(Lindstrom).
Furthermore, Mr. Lindstrom acknowledged that he had not produced
the margins of error or the levels of confidence associated with the
Nielsen viewership data, despite the fact that such information
could be produced. 6/3/13 Tr. at 391-93, 410 (Lindstrom). Without
this information, the reliability of any statistical sample cannot
be assessed. (The Judges infer that, had such information
underscored the reliability of the Nielsen data, it would have been
produced by MPAA.)
78 FR at 64995. The Judges note that the evidence in the present
proceeding does not resolve these issues regarding sample size, margins
of error and levels of confidence.
Nonetheless, the Judges concluded in the 2000-03 Determination that
``viewership as measured after the airing of the retransmitted programs
is a reasonable, though imperfect proxy for the viewership-based value
of those programs.'' Id. at 64995. IPG has not provided record evidence
or testimony in this proceeding that would persuade the Judges to
depart from the conclusion reached in the 2000-03 Determination. In
light of the reasonable and credible explanations offered by the SDC
for the ``zero viewing'' sampling points, and the absence of any
persuasive evidence or testimony to the contrary, the Judges again find
and conclude that the incidence of such zero viewing points does not
invalidate a viewership-based valuation study such as that utilized in
the SDC Methodology.
IPG did introduce in this proceeding evidence that it did not
introduce in the 2000-03 proceeding regarding the incidence of ``zero
viewing'' sample points for individual programs (rather than for the
aggregate of quarter-hours). Compare 2000-03 Determination, 78 FR at
64995 (finding that IPG had failed to introduce evidence that the
Nielsen data revealed particular programs with ``zero viewing'') with
Ex. IPG-R-011 (analyzing zero viewing by title). As the Judges noted in
the 2000-03 Determination, the distinction between ``zero viewing''
overall and ``zero viewing'' for individual programs or titles is
important because ``under the hypothetical market construct, royalties
would accrue on a program-by-program basis to individual copyright
owners, not to the distantly retransmitted stations.'' 2000-03
Determination, 78 FR at 64995. However, an analysis of the evidence
upon which IPG relied does not support its assertion that ``zero
viewing'' for individual programs was particularly pervasive among the
SDC or IPG programs, or that the incidences of ``zero-viewing'' that
did occur were disproportionately harmful to IPG.
First, the incidence of ``zero viewing'' for individual,
retransmitted SDC and IPG programs was no more than 15.8%, according to
IPG's own economics expert witness, Dr. Robinson. See Ex. IPG-R-011.
This 15.8% figure represented only three of the 19 programs believed to
be at issue in this proceeding or, alternatively stated, 16 of the 19
programs (84.2%) did not have ``zero viewing'' throughout the
sample.\49\
---------------------------------------------------------------------------
\49\ Before submitting her final recommendation, Dr. Robinson
amended her program count to conform to the Judges' rulings and to
capture data that she (apparently inadvertently) omitted in her
first analysis. See notes 6, 7, supra, and accompanying text; note
57, infra, and accompanying text.
---------------------------------------------------------------------------
Second, of the three programs with ``zero viewing'' throughout the
sample, two were SDC programs (``700 Club Super Sunday'' and ``James
Kennedy''), whereas only one of the three programs
[[Page 13437]]
(``Creflo A. Dollar Jr. Weekly'') was an IPG program. See Ex. IPG-R-
013. Further, the IPG program was retransmitted only three times and
represented less than one-tenth of one percent (.097%) of both the
total quarter-hours and the number of retransmitted broadcasts of IPG
programs at issue in this proceeding. Id. Similarly, the two SDC
retransmitted programs with ``zero viewing'' throughout the sample
represented a de minimis percent of the SDC's total devotional
programming at issue in this proceeding (for ``700 Club Super Sunday,''
four retransmitted broadcasts, representing less than .25% of the total
SDC quarter-hours and programs retransmitted and, for ``James
Kennedy,'' approximately 1% of total SDC quarter-hours and programs
retransmitted). Id. Moreover, the copyrights for all three of the
above-identified programs with supposed zero viewing throughout the
sample were owned by respective claimants who also owned the copyrights
for programs with virtually identical or similar names, viz., ``Creflo
A. Dollar,'' ``700 Club,'' and ``James Kennedy'', none of which had
zero viewing sample points for all retransmitted broadcasts of their
programs. Id. Based on these facts, Dr. Robinson acknowledged at the
hearing that, in her view, she ``would not say that for the IPG and
the SDC titles that we have any that we have 100 percent zero
viewing.'' 9/4/14 Tr. at 827-28 (Robinson) (emphasis added).\50\
---------------------------------------------------------------------------
\50\ IPG attempts to deflect attention from the paucity of the
relevant evidence regarding the programs at issue in this proceeding
by noting a higher incidence of ``zero viewing'' for programs in
other categories, such as Alfred Hitchcock Presents and Today's
Homeowner. Ex. IPG-R-012; IPG PFF at 44. However, data sample points
in other categories of programming are not relevant because they do
not address the issues relating to the Devotional category and,
further, there is no evidence to place such data in an appropriate
context.
---------------------------------------------------------------------------
For all of the foregoing reasons, the Judges find and conclude that
there was not persuasive or sufficient evidence of ``zero viewing'' for
individual SDC and IPG programs to invalidate any reliance on the SDC
Methodology.\51\
---------------------------------------------------------------------------
\51\ IPG notes that the SDC could have improved its analysis to
attempt to attribute value to the distant ``zero viewing'' data
points, as performed by experts in prior proceedings. Although such
improvements might have permitted the Judges to give more weight to
the HHVH Report, the absence of such improvements did not invalidate
the HHVH Report.
---------------------------------------------------------------------------
3. Viewership as an Ex Ante or Ex Post Measure of Value
IPG asserts that viewership and ratings cannot form a measure of
relative market value because the extent of viewership and the ratings
measuring viewership are not available until after the programs have
been retransmitted. Thus, IPG argues, the hypothetical willing buyer
and willing seller could not utilize this viewership data ex ante to
negotiate a license. Galaz AWDT at 9; Ex. IPG-D-001 at 9.
Although IPG's premise is literally correct, it does not preclude
the use of such viewership data to estimate the value of the
hypothetical licenses. As Mr. Sanders testified, this problem can be
overcome--and indeed is overcome in the industry--by the use of a
``make good'' provision in the contracts between program copyright
owners and licensees. That is, program copyright licenses in the
television industry are established based upon an ex ante prediction of
viewership as measured by ratings. If the ex post ratings reveal that
the program's measured viewership was less than predicted and set forth
in the license agreement, the licensor must provide compensatory value
to the licensee. 9/4/14 Tr. at 685-95 (Sanders).\52\ In this manner,
such a rational measure of viewership can also be expressly
incorporated into the bargain in the hypothetical market constructed by
the Judges.\53\
---------------------------------------------------------------------------
\52\ Dr. Robinson was unfamiliar with the industry's use of a
``make good'' provision as a tool to account for viewership levels.
9/3/14 Tr. at 270 (Robinson).
\53\ The Judges anticipated the existence of such ``post-viewing
adjustments'' in their 2013 determination. See 2000-03
Determination, supra, at 64995, n.48 (``Since it is a hypothetical
market we are constructing, it also would not be unreasonable to
hypothesize that the CSO and the Copyright Owner might negotiate a
license that would contain a provision adjusting the value of the
license, post-viewing, to reflect actual viewership. . . . In that
regard, the Judges refer to one of the preconditions for relative
market value--reasonable knowledge of relevant facts. Actual
viewership would be a `relevant fact' that could be applied if post-
viewing adjustments to the license fees were hypothetically utilized
by the bargaining parties.'').
---------------------------------------------------------------------------
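To illustrate the mechanics of the ``make good'' provision described in Mr. Sanders's testimony, the following sketch uses hypothetical figures and a simplified compensation formula; the function name, the rate per ratings point, and the ratings are assumptions of this illustration and are not drawn from the record.

def make_good_value(rate_per_ratings_point, guaranteed_rating, actual_rating):
    # If the ex post rating falls short of the ex ante (guaranteed) rating,
    # the licensor owes the licensee compensatory value for the shortfall.
    shortfall = max(0.0, guaranteed_rating - actual_rating)
    return shortfall * rate_per_ratings_point

# License priced ex ante on a predicted 2.0 rating; the ex post rating is 1.5,
# so the licensor provides value equivalent to the 0.5-point shortfall.
print(make_good_value(rate_per_ratings_point=1000.0,
                      guaranteed_rating=2.0,
                      actual_rating=1.5))  # 500.0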
The Judges also agree with Mr. Sanders that the programs within the
Devotional Claimants category on the surface appear to be more
homogeneous inter se than they are in comparison with programs in
either the Sports Programming or the Program Suppliers' claimant
categories. Sanders WDT at 6. This relative homogeneity suggests that a
rational CSO would not be as concerned with whether different programs
would attract different audience segments (compared with more
heterogeneous programming) and therefore the CSO would rely to a
greater extent on absolute viewership levels.
For these reasons, the record testimony supports the conclusion
that viewership data is a useful metric in determining relative market
value, in the absence of optimal data that would permit a precise or an
estimated Shapley value.\54\ Accordingly, the Judges reject IPG's
argument that household viewing cannot constitute a measure of value in
this proceeding.
---------------------------------------------------------------------------
\54\ Interestingly, Dr. Erdem explained that, as between two
programs with overlapping viewership, the program with higher
viewership would have a greater proportionate Shapley value than the
less viewed program; the difference would be even greater than the
difference between the two programs based strictly on relative
viewership. 9/8/14 Tr. at 1082-83 (Erdem). Given the relative
homogeneity of devotional programming (compared to the apparent
relative heterogeneity between and among other Phase II category
programs), viewership overlaps between and among the SDC and IPG
programs are likely. Therefore, because the SDC programs had higher
overall ratings than IPG programs and because the SDC Methodology is
based solely on ratings, the SDC's percentage distribution (if
accurately measured) could in fact understate the SDC percentage and
overstate the IPG percentage, compared to percentages based on
potential Shapley values. See supra note 36, and accompanying text.
---------------------------------------------------------------------------
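The following sketch illustrates the point attributed to Dr. Erdem in the preceding footnote, using hypothetical viewership figures that are not drawn from the record: when two programs have overlapping audiences, the more-viewed program's Shapley share exceeds its share of raw viewership.

from itertools import permutations

viewers_a, viewers_b, overlap = 100.0, 50.0, 30.0   # assumed, for illustration only

def coalition_value(coalition):
    # Value of a bundle = unique viewers reached by the programs in it.
    if coalition == {"A", "B"}:
        return viewers_a + viewers_b - overlap
    if coalition == {"A"}:
        return viewers_a
    if coalition == {"B"}:
        return viewers_b
    return 0.0

def shapley(player):
    # Average each program's marginal contribution over all orderings.
    total = 0.0
    orders = list(permutations(["A", "B"]))
    for order in orders:
        before = set(order[:order.index(player)])
        total += coalition_value(before | {player}) - coalition_value(before)
    return total / len(orders)

print(shapley("A") / shapley("B"))   # about 2.43 (85 versus 35)
print(viewers_a / viewers_b)         # 2.0 based on raw viewership alone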
IPG notes, though, that even assuming arguendo the SDC's viewership
analysis is probative of value, the SDC's own ``reasonableness'' check
demonstrates a significant disparity between the results derived from
the HHVH Report (81.5%:18.5% in favor of the SDC) and the results from
the ``reasonableness'' check of local viewing for the SDC and IPG
programs at issue in this proceeding (71.3%:28.7% in favor of the SDC).
The Judges agree with IPG that this is an important disparity,
suggesting that IPG may well be entitled to a larger distribution than
indicated by the SDC's HHVH Report. Because of the importance of this
point, the Judges discuss its significance in their analysis set forth
in Part VI, infra, synthesizing and reconciling the parties' positions.
B. The IPG Methodology
1. The Details of the IPG Methodology
IPG proffered its distribution methodology (the IPG Methodology)
through its expert witness, Dr. Laura Robinson, whom the Judges
qualified to testify as an expert in economics, data analysis, and
valuation. 9/2/14 Tr. at 87 (Robinson).\55\ Through her application
[[Page 13438]]
of the IPG Methodology, Dr. Robinson set forth her opinion of the
relative market value of the retransmitted broadcasts of the
compensable copyrighted program titles represented by IPG and the SDC
and estimated the share attributable to both parties. Robinson AWDT at
14, 25; Ex. IPG-D-001 at 14, 25.
---------------------------------------------------------------------------
\55\ IPG initially asked the Judges to qualify Dr. Robinson as a
testifying expert ``regarding the value of the programming issue in
this matter for IPG and for the SDC,'' or, as alternatively stated
by IPG's counsel, as an expert ``valuing the relative value of these
programs to these royalties.'' 9/2/14 Tr. at 73-74, 80. However,
SDC's counsel objected, and the Judges then qualified Dr. Robinson
as an expert in the areas of knowledge listed in the text, supra.
IPG's counsel did not renew his request that Dr. Robinson be
qualified as an expert in the areas set forth in this footnote. Even
if Dr. Robinson had been qualified as an expert in the areas
originally identified by IPG, that would not have made any
difference in the Judges' findings and conclusions in this
determination.
---------------------------------------------------------------------------
Consistent with the conclusions of the Judges in this and other
determinations, Dr. Robinson identified the ``willing sellers'' in the
hypothetical market to be the owners of the copyrights to the programs
subject to retransmission and the ``willing buyers'' to be the CSOs
that would acquire the license to retransmit the program. 9/2/14 Tr. at
92 (Robinson). However, Dr. Robinson defined the hypothetical
marketplace in a manner different from that of the Judges in this
proceeding and in the 2000-03 Determination. Dr. Robinson defined the
hypothetical marketplace as equivalent to the actual marketplace in
which the CSO is required to acquire the retransmitted programs in the
same bundle as created by the station that the CSO retransmits. See,
e.g., 9/4/14 Tr. at 782 (Robinson) (``[I]t is certainly the case that
when a cable system operator is actually making the decision about
whether or not to retransmit a broadcast, that comes within their
decision whether or not to retransmit the station, which is a little
bit at odds with this whole notion of a hypothetical negotiation over
an individual broadcast. . . . They don't have the choice to broadcast
a particular program.'').\56\
---------------------------------------------------------------------------
\56\ In the hypothetical marketplace the terrestrial stations'
initial bundling of programs does not affect the marginal profit-
maximizing decisions of the hypothetical buyers and sellers.
---------------------------------------------------------------------------
Dr. Robinson identified the following ``obtainable data'' that she
claimed to comprise ``various indicia of value of the retransmitted
broadcasts'':
The length of the retransmitted broadcasts.
The time of day of the retransmitted broadcasts.
The fees paid by CSOs to retransmit the stations carrying
the broadcasts.
The number of persons distantly subscribing to the station
broadcasting the IPG-claimed program.
Robinson AWDT at 17; Ex. IPG-D-001 at 17.
Dr. Robinson relied upon four sets of data. First, she utilized
data from the Cable Data Corporation (CDC). This data included
information on more than 2,700 cable systems regarding:
The stations transmitted by each CSO.
The distant retransmission fees paid by each CSO.
The number of distant subscribers to each CSO.
For each station distantly retransmitted by these CSOs, the CDC
data also included:
The number of CSOs retransmitting each station.
The number of distant subscribers to the CSOs
retransmitting the station.
The average number of distant subscribers to the CSOs
retransmitting the station.
The distant retransmission fees paid by the CSOs to
retransmit each station.
The average distant retransmission fees paid by the CSOs
to retransmit each station.
Id. at 21.
Second, Dr. Robinson relied on TMS Data (the same source as that
relied upon by the SDC). The TMS data provided the following
information for the IPG and the SDC programs represented in this
proceeding:
The date and time each broadcast was aired.
The station call sign.
The program length in minutes.
The program type (e.g., Devotional).
The program title.
Id. at 21-22.
Third, Dr. Robinson relied upon the following information from
Nielsen:
Data reporting 1997 viewing, segregable according to time
period of the measured broadcast.
Reports reflecting the long-run stability of day-part
(time period) viewing patterns.
Id. at 22.
Applying this data, Dr. Robinson made several computations and
observations, as summarized in Tables 1 and 2 below:
Table 1--Data on IPG and Non-IPG Claimed Titles 1999
------------------------------------------------------------------------
                                                          IPG        SDC
------------------------------------------------------------------------
Number of distantly retransmitted broadcasts of
 claimed titles......................................  12,017      6,558
Number of hours of distantly retransmitted
 broadcasts of claimed titles........................   6,010      5,856
Number of quarter-hours of distantly retransmitted
 broadcasts of claimed titles........................  24,040     23,423
------------------------------------------------------------------------
Table 2--Relative Market Value
------------------------------------------------------------------------
                                                     IPG            SDC
                                                  (percent)      (percent)
------------------------------------------------------------------------
Hours of claimed distantly retransmitted
 broadcasts....................................       51             49
Time of day of distantly retransmitted
 broadcasts....................................       46             54
Fees Paid by CSOs distantly retransmitting
 devotional broadcasts.........................      >50            <50
Number of distant subscribers to CSOs distantly
 retransmitting devotional broadcasts..........       51             49
------------------------------------------------------------------------
Id. at 26-27.
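For illustration, the volume-based shares in the first row of Table 2 appear to follow from the quarter-hour totals in Table 1; the short calculation below is a sketch, and the rounding to whole percentages is an assumption.

# Quarter-hours of claimed, distantly retransmitted broadcasts (Table 1).
ipg_quarter_hours, sdc_quarter_hours = 24_040, 23_423
ipg_share = 100 * ipg_quarter_hours / (ipg_quarter_hours + sdc_quarter_hours)
print(round(ipg_share), round(100 - ipg_share))  # approximately 51 and 49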
Dr. Robinson stressed repeatedly that the Judges should not
consider the above measures of value individually. Rather, she
testified that the Judges should consider the several approaches as a
whole, with any weakness in one approach offset by the other approaches
that do not suffer from that weakness. See, e.g., 9/3/14 Tr. at 243,
326, 329, 403 (Robinson); 9/4/14 Tr. at 775 (Robinson). Dr. Robinson
also testified that this approach was an important method of analysis
because her multiple valuation methods all tended toward a similar
result--approximately a 50:50 distribution--despite any weaknesses or
limitations in any one method. See 9/2/14 Tr. at 90 (Robinson) (``In
summary, I looked at four different measures of value, of the relative
value. And the IPG versus SDC are roughly equal.''); 9/3/14 Tr. at 245
(Robinson) (``since everything came out roughly equal, all the
indicators pointed to a roughly 50/50 split.''). Based upon these
calculated percentages, Dr. Robinson concluded
[[Page 13439]]
that the proper allocation of royalties should be in a range from
54%:46% favoring the SDC to 51%:49% favoring IPG. Id. at 25.
With regard to the particular factors Dr. Robinson applied, she
noted that her first measurement--of total broadcast time--was
essentially identical for both the IPG and the SDC programs when
measured by quarter-hour segments. 9/2/14 Tr. at 90-91 (Robinson).
Second, with regard to her ``time of day'' analysis, Dr. Robinson
testified that ``certain times of days are associated with different
amounts of viewership [a]nd everything else equal, it would be
reasonable to think that higher viewership might be associated with a
higher value.'' 9/2/14 Tr. at 93 (Robinson). Dr. Robinson concluded
that this time-of-day measurement, like the first measurement (total
broadcast time), revealed a ``roughly similar'' value measurement for
the IPG programs and the SDC programs. 9/2/14 Tr. at 94 (Robinson).
With regard to the third factor--the fees paid by the CSOs to
distantly retransmit the broadcasts--Dr. Robinson found that ``on
average, IPG broadcast quarter hours are shown on stations that are
retransmitted by CSOs who pay relatively more in distant retransmission
fees than do the CSOs who retransmit the stations with the SDC
broadcasts.'' Ex. IPG-D-001 at 31. From this metric, Dr. Robinson
concluded ``the IPG broadcasts have more value than the [SDC]
broadcasts.'' Id. at 32.
Finally, with regard to her fourth factor--the number of
subscribers to the cable systems--Dr. Robinson found that when
considering the average number of subscribers to the cable systems on
which the IPG and the SDC programs are retransmitted, ``the IPG
distantly retransmitted broadcasts are retransmitted by CSOs on
stations with approximately 6% more distant subscribers than [the SDC]
distantly retransmitted broadcasts.'' Id. at 33. Based upon this final
metric, Dr. Robinson opined: ``To the extent the value of the broadcast
relates to the number of distant subscribers to the CSOs retransmitting
the station, this metric indicates that IPG-distantly-retransmitted
broadcasts have more value than [the SDC]-distantly-retransmitted
broadcasts.'' Id. at 34.
Dr. Robinson corrected her analyses before and during the hearing
to reflect changes in the program titles that she could allocate to IPG
and to the SDC. First, she removed from her analyses the several IPG
programs that the Judges had concluded at the preliminary claims
hearing were not properly subject to representation by IPG. 9/2/14 Tr.
at 146 (Robinson). Second, Dr. Robinson added several program titles
that were properly subject to representation by the SDC but had not
been included in her original analyses. 9/2/14 Tr. 181-84 (Robinson).
See also 9/8/14 Tr. at 1016 (Robinson) (confirming that she made these
program inclusions and exclusions in her amended analysis). With these
adjustments, Dr. Robinson modified her conclusions as set forth on
Table 3 below:
Table 3
----------------------------------------------------------------------------------------------------------------
                                                                        IPG        Non-IPG         Total
                                                                    (percent)     (percent)      (percent)
----------------------------------------------------------------------------------------------------------------
Hours of claimed distantly retransmitted broadcasts.............          48            52            100
Time of day of distantly retransmitted broadcasts...............         46            54            100
Fees paid by CSOs distantly retransmitting devotional broadcasts          41            59            100
Number of distant subscribers to CSOs distantly retransmitting
 devotional broadcasts..........................................          52            48            100
----------------------------------------------------------------------------------------------------------------
Ex. IPG-D-013.
Dr. Robinson acknowledged that the data available to her was
incomplete, in that she did not have information regarding all of the
fees, cable systems and stations that retransmitted the programs of IPG
and the SDC. Moreover, she acknowledged that the sample of CSOs and,
derivatively, the sample of stations retransmitted by those CSOs, were
not random samples. Accordingly, Dr. Robinson undertook what she
described as a ``sensitivity analysis'' to adjust for the missing data.
Robinson AWDT at 34-36; Ex. IPG-D-001 at 34-36.
Specifically, Dr. Robinson noted that she did not have data
regarding 29% of the total fees paid by all the CSOs that distantly
retransmit stations. Rather, she had information from CSOs who in the
aggregate had paid only 71% of the total fees paid in 1999 to distantly
retransmit stations. Dr. Robinson acknowledged that she also lacked
full information or a random sampling of CSOs and of stations (in
addition to her lack of full information or a random sampling of the
fees paid by CSOs to distantly retransmit stations). However, Dr.
Robinson did not attempt to adjust her original results to compensate
for the missing information or the fact that the data set was not
random.
Accordingly, in her ``sensitivity analysis,'' Dr. Robinson adjusted
all of her metrics by assuming that she was missing 29% of the data in
all of her valuation data categories (even though only one of her
metrics was calculated based on fees). By this ``sensitivity
analysis,'' Dr. Robinson first calculated how her allocations would
change if all of the assumed missing 29% of fees paid by CSOs to
distantly retransmit stations were allocated (in each of the categories
in Table 3) to IPG and, conversely, how her allocations would change if
all of the assumed missing 29% of such fees instead was allocated (in
each of the categories in Table 3) to the SDC. Id. Dr. Robinson
initially applied this sensitivity analysis to her original allocations
and, subsequently (at the request of the Judges), applied this
sensitivity analysis to her adjusted analyses that took into account
the (1) removal of certain IPG programs that had been eliminated by the
Judges in the preliminary hearing and (2) addition of certain SDC
programs that Dr. Robinson had overlooked in her initial report. Ex.
IPG-R-16 (revised). The application of this ``sensitivity analysis'' to
Dr. Robinson's adjusted analyses \57\ resulted in the proposed
allocations set forth on Table 4 below:
---------------------------------------------------------------------------
\57\ Because Dr. Robinson's adjusted analyses supersede her
original analyses (they admittedly included IPG programs that should
have been excluded and omitted SDC programs that should have been
included), the Judges choose not to clutter this determination with
the details of those now irrelevant calculations.
[[Page 13440]]
Table 4
----------------------------------------------------------------------------------------------------------------
                                                         IPG high     IPG low   Non-IPG high   Non-IPG low
                                                        (percent)   (percent)      (percent)     (percent)
----------------------------------------------------------------------------------------------------------------
Hours of claimed distantly retransmitted broadcasts..         63          34             66            37
Time of day of distantly retransmitted broadcasts....         62          33             67            38
Fees paid by CSOs distantly retransmitting devotional
 broadcasts..........................................         58          29             71            42
Number of distant subscribers to CSOs distantly
 retransmitting devotional broadcasts................         66          37             63            34
----------------------------------------------------------------------------------------------------------------
Ex. IPG-D-014.
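For illustration, the following sketch approximately reconstructs the Table 4 bounds from the Table 3 shares, applying mechanics consistent with the description above and with the Judges' discussion in Part VI.B, infra: each share is scaled to the 71% of 1999 fees represented in the data, and the assumed missing 29 percentage points are then attributed entirely to IPG (the high bound) or entirely to the SDC (the low bound). The rounding convention is an assumption of this sketch, not a finding in the record.

# IPG percentages from Table 3 (Dr. Robinson's adjusted analysis).
base_ipg_shares = {
    "Hours of claimed broadcasts": 48,
    "Time of day of broadcasts": 46,
    "Fees paid by CSOs": 41,
    "Number of distant subscribers": 52,
}
for metric, ipg_share in base_ipg_shares.items():
    ipg_low = round(ipg_share * 0.71)  # missing 29% attributed entirely to the SDC
    ipg_high = ipg_low + 29            # missing 29% attributed entirely to IPG
    print(f"{metric}: IPG {ipg_high}/{ipg_low}, non-IPG {100 - ipg_low}/{100 - ipg_high}")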
2. Evaluation of the IPG Methodology
The SDC have raised the following specific criticisms of the IPG
Methodology. First, the SDC critiqued each of the four purported
measures of value presented by Dr. Robinson. See SDC PFF at ¶¶ 10-13
(regarding volume); ¶¶ 14-17 (regarding time of day); ¶¶ 18-24
(regarding fee generation); and ¶¶ 26-27 (regarding subscribership).
Second, the SDC noted that the sensitivity analysis undertaken by Dr.
Robinson revealed that SDC programming had an eighteen percentage point
higher value than IPG programming. SDC PFF at ¶ 25. Before undertaking
an analysis of the specific elements of the IPG Methodology or the
SDC's critiques thereof, it is important to consider several
overarching defects in the approach undertaken by IPG.
a. General Deficiencies in the IPG Methodology
It bears repeating that a fundamental problem with the IPG
Methodology is that it is based on a decision by Dr. Robinson to
presume the existence of the compulsion arising from the pre-bundled
status of the retransmitted programs as it existed in the actual
compulsory-license market, rather than the compulsion-free hypothetical
fair market consistently applied by the Judges. See, e.g., Tr. 9/4/14
at 781-82 (Robinson) (quoted supra); see also 9/2/14 Tr. at 175-76
(Robinson) (acknowledging that the IPG Methodology does not address the
relationship between value and bundling).
A second problem with the IPG Methodology is that, although it
ostensibly is intended to eschew viewership as a primary measure of
program value, IPG's Methodology implicitly uses indicia of viewership
to measure program value. In particular, IPG's Methodology considers
and values programs based on their indirect contribution to viewership:
The duration of a program serves as an indicium of value (a program of
relatively longer duration would be more valuable because of its
viewership over a longer period), as does the time of day a program is
aired (there are more viewers at some times of day than others), and
the number of subscribers (potential viewers) to CSOs retransmitting
the program. Simply put, IPG's Methodology is not true to its own
critique of valuing programs based on viewership. Thus, the IPG
Methodology fails to address either the initial necessity of
considering absolute viewership or the subsequent necessity of
undertaking a Shapley type of measurement or estimation in order to
create a ``bundle'' of programs.
The Judges also find that Dr. Robinson did not truly undertake her
own independent inquiry and develop her own methodology, because she
worked solely with the data IPG, through Mr. Galaz, provided her. See
9/2/14 Tr. at 110-11 (Robinson); IPG PFF at 11. The type of data that
Mr. Galaz supplied to Dr. Robinson was the same type he utilized in the
2000-03 proceeding, when he presented his own methodology on behalf of
IPG. Mr. Galaz's response to a question from the Judges confirmed this
point:
Q. [I]n constructing the methodology that you relied on in the
2000-2003 proceeding, you used certain data from the CDC, from
Tribune, or whatever it was called at the time, and so forth. Is
that--were those types of data essentially the same as the types of
data that were provided to Dr. Robinson for purposes of this
proceeding?
Mr. Galaz: I would say it was essentially the same.
9/8/14 Tr. at 997 (Galaz).
It is not surprising, therefore, that Dr. Robinson conditioned her
analysis and conclusions by noting that she was only able to express an
opinion as to relative market value ``given the data that are available
in this matter.'' Ex. IPG-D-001 at 20. In fact, Dr. Robinson premised
her analysis on the fact that it was based upon the limited data
available to her. See, e.g., 9/2/14 Tr. at 111 (Robinson) (``I looked
at the data, looked at what I could do with them, and this is what I
could do.'').
Indeed, Mr. Galaz's methodology in the 2000-03 proceeding and Dr.
Robinson's methodology in the present proceeding overlap. Compare 2000-
03 Determination, 78 FR at 64998 (``The weight that IPG accorded to any
given compensable broadcast was the product of (x) a `Station Weight
Factor' [based on subscriber or fee levels], (y) a `Time Period Weight
Factor,' and (z) the duration of the broadcast . . . .'') with Robinson
AWDT at 28; Ex. IPG-D-001 at 28 (``[T]he indicia of the economic value
of the retransmitted broadcasts . . . are: The length of the
retransmitted broadcasts, the time of day of the retransmitted
broadcast, the fees paid by cable system operators to retransmit the
stations carrying the devotional broadcasts, and the number of persons
distantly subscribing [to] the stations broadcasting the devotional
programs.'').
Dr. Robinson clearly was straitjacketed in attempting to devise an
appropriate methodology by the limited data she received from Mr.
Galaz. In this regard, it is important to note that Mr. Galaz is not an
economist, statistician, econometrician or an expert in the field of
valuation of television programs or other media assets, and that he
therefore had no particular expertise that would permit him to select
or approve the use of appropriate data, especially when that selection
dictated the construction of a methodology to establish ``relative
market value'' in a distribution proceeding.\58\ The Judges therefore
[[Page 13441]]
conclude that the overall IPG Methodology carries no more weight than
IPG's methodology did in the 2000-03 proceeding. See 2000-03
Determination, 78 FR at 65002 (while IPG Methodology ``cannot be
applied to establish the basis for an allocation'' it can be used to
adjust ``marginally'' an allocation derived from other evidence).
---------------------------------------------------------------------------
\58\ See 2000-03 Determination, 78 FR at 65000. By contrast, the
SDC's expert witness, Mr. Sanders, was qualified as ``an expert in
the valuation of media assets, including television programs.'' 9/3/
14 Tr. at 463-64, and, in that capacity, he testified that the
broadcast industry relied on Nielsen viewing data as the ``best and
most comprehensive'' basis for valuing programs, 9/3/14 Tr. at 480-
81 (Sanders). Thus, Mr. Sanders was qualified to testify as to the
actual commercial use of a viewership-based valuation methodology.
Mr. Galaz, on the other hand, was not qualified to testify as to the
appropriateness of the data he selected for use in the IPG
Methodology and, it should be noted, neither he nor Dr. Robinson
testified that the factors relied upon in the IPG Methodology had
ever been relied upon commercially. See Tr. 9/3/14 at 348-49
(Robinson).
Not only did Mr. Galaz lack the expertise to approve or select
the type of data necessary to construct a persuasive methodology,
his credibility has been seriously compromised by his prior fraud
and criminal conviction arising from his misrepresentations in prior
distribution proceedings. See 78 FR at 6500 (``Mr. Galaz was
previously convicted and incarcerated for fraud in the context of
copyright royalty proceedings--a fraud that caused financial injury
to MPAA. In connection with that fraud, Mr. Galaz also admittedly
lied in a cable distribution proceeding much like the instant
proceeding. Mr. Galaz's fraud conviction and prior false testimony
compromises his credibility.'') Further, in the present case, the
Judges carefully observed that Mr. Galaz testified that ``what we
gave to Dr. Robinson was everything that we had in our possession
that we thought might affect . . .'' before catching himself and
stating instead ``or would--I should say with which [s]he could
work.'' 9/8/14 Tr. at 996 (Galaz) (emphasis added). In any event,
the Judges recognize that even a party that does not have such a
checkered history has an inherent self-interest in selecting the
types of data for use by its expert that is inconsistent with the
independence of the expert in identifying his or her own categories
of data.
---------------------------------------------------------------------------
Finally, IPG contends that the purpose of the IPG Methodology is to
compensate every claimant, even if there is no evidence of viewership
of the claimant's program. See Galaz AWDT at 8; Ex. IPG-D-001 at 8. The
Judges find no basis for that purpose to guide the methodology. Even if
viewership as a metric for determining royalties theoretically would be
subject to adjustment to establish or estimate a Shapley valuation,
there is certainly no basis to allow for compensation of a program in
the absence of any evidence of viewership.
b. Specific Deficiencies in the IPG Methodology
In addition to the foregoing overarching criticisms of the IPG
Methodology, the Judges note the following more particular deficiencies
in that methodology.
As a preliminary matter, Dr. Robinson acknowledged that IPG's
sample of stations had not been selected in a statistically random
manner. 9/2/14 Tr. at 155 (Robinson). Thus, the sample upon which Dr.
Robinson relied suffered from the same infirmity as the Kessler Sample
relied upon in part by the SDC. Moreover, each prong of the IPG
Methodology raised its own concerns.
(1) Broadcast Hours
Dr. Robinson acknowledged that the number of hours of broadcasts is
not actually a measure of value; rather it is a measure of volume. 9/3/
14 Tr. at 243-51 (Robinson). ``Volume'' fails to capture the key
measure of whether anyone is actually viewing the retransmitted
program. See 9/3/14 Tr. at 247 (Robinson); SDC PFF ¶ 12. Further,
``volume,'' i.e., the number of hours of air time, does not even
reflect how many subscribers have access to the programs. 9/8/14 Tr.
at 1085-86 (Erdem).
(2) Time of Day of Retransmitted Broadcasts
IPG's second measure of value compares the time of day viewership
of IPG and SDC programs. Using 1997 Nielsen sweeps data produced by the
MPAA in a previous proceeding, Dr. Robinson estimates the average
number of total television viewers for each quarter-hour when IPG or
SDC programs were broadcast according to the Tribune Data analyzed by
Dr. Robinson. 9/3/14 Tr. at 254-55 (Robinson).
Dr. Robinson's time-of-day measure does not measure the value of
the individual programs that are retransmitted. The proper measure of
value for such individual programs, when considering ratings, would
hold the time of day constant, and then consider relative ratings
within the fixed time periods. To do otherwise--as Dr. Robinson
acknowledged--absurdly would be to give equal value to the Super Bowl
and any program broadcast at the same time. 9/3/14 Tr. at 264
(Robinson).
Further, Dr. Robinson's analysis does not show, as she asserted,
that the SDC and IPG programs are broadcast at times of day that have
approximately equal viewership. Rather, her time-of-day analysis
pointed to a 54%:46% distribution in favor of the SDC.\59\
---------------------------------------------------------------------------
\59\ When Dr. Robinson adjusted for the proper addition of SDC
programs and deletion of IPG programs, and then applied her
sensitivity analysis, she changed this allocation to 67%:33% in
favor of SDC (not giving IPG any credit for the assumed 29% of the
data it declined to obtain).
---------------------------------------------------------------------------
Finally, IPG utilized 1997 data to estimate the level of viewing
throughout the broadcast day, rather than data that was contemporaneous
with the 1999 royalty distribution period at issue in this proceeding.
9/3/14 Tr. at 229, 255 (Robinson).\60\
---------------------------------------------------------------------------
\60\ Mr. Galaz asserted that information subsequently published
by Nielsen confirmed that ``there had been virtually no change'' in
day-part viewing between 1997 and 1999. 9/8/14 Tr. at 984 (Galaz).
However, IPG presented no evidence to support that assertion.
---------------------------------------------------------------------------
(3) Fees Paid
Dr. Robinson's third metric is derived from an analysis of fees
paid by CSOs per broadcast station. That is, several CSOs might pay
royalty fees to retransmit the same over-the-air station. Dr. Robinson
testified that stations generating relatively greater fees could be
presumed to have higher value programs in their respective station
bundles. 9/3/14 Tr. at 406-07 (Robinson). To measure this factor, Dr.
Robinson combined CDC data on royalty fees the CSOs paid (on a per-
station basis) and TMS data on broadcast hours by station in order to
compare the fees paid for retransmission of stations carrying SDC and
IPG programs. 9/3/14 Tr. at 229, 271 (Robinson).
In Phase I of this proceeding, the Librarian adopted the use of a
fees-paid metric for value, where that measure appeared to be the best
alternative valuation approach. See Distribution of 1998 and 1999 Cable
Royalty Funds, 69 FR 3606, 3609 (January 24, 2004). The use of a fee-
based attempt at valuation is particularly problematic, however, for a
niche area such as devotional programming, which constitutes only a
small fraction of total station broadcasting. See 9/8/14 Tr. at 1087-88
(Erdem). Because of the tenuous nature of this approach to valuation, a
royalty allocation based on a fees-paid metric might serve as, at best,
a ``ceiling'' on a distribution in favor of the party proposing that
approach. See Distribution of the 2004 and 2005 Cable Royalty Funds, 75
FR 57063, 57073 (September 17, 2010). That being said, when Dr.
Robinson adjusted her fees-paid based valuation by applying her
sensitivity analysis, she calculated a value ratio of 71%:29% in favor
of the SDC. As the SDC noted, this appears to be ``a fact that Dr.
Robinson had tried hard to obscure.'' SDC PFF ¶ 25.
(4) Subscribership Levels
Dr. Robinson's final metric measures the average number of distant
subscribers per cable system retransmitting IPG programming versus SDC
programming. 9/3/14 Tr. at 311-12 (Robinson). This metric measures
average subscribers per cable system, without taking into account the
number of cable systems retransmitting a station. Therefore, this
metric is of no assistance in measuring the total number of distant
subscribers even receiving a program, let alone the number of distant
subscribers who watch the program.
As Dr. Erdem demonstrated--and as Dr. Robinson admitted--this
subscribership metric can actually increase when a program is
eliminated, if the program had been retransmitted by a cable system
with lower than average numbers of subscribers. Erdem WRT at 8-9
(Redacted); Ex. SDC-R-001 at 8-9 (Redacted); 9/3/14 Tr. at 331-45
(Robinson). Indeed, this metric actually increased in favor of IPG
after the dismissal of two of IPG's claimants--Feed the Children and
Adventist Media Center. Ex. SDC-R-001 at 7-10; 9/3/14
[[Page 13442]]
Tr. at 329-30 (Robinson). Simply put, when a purported measure of
program value can move inversely to the addition or subtraction of a
claimant, the measure is, at best, of minimal assistance in determining
relative market value.
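The following sketch illustrates, with hypothetical subscriber counts not drawn from the record, how the average-subscribers-per-system metric can move in the direction Dr. Erdem described when a program carried only by a below-average system drops out of the claim.

# Distant subscribers of the cable systems retransmitting a claimant's programs
# (hypothetical figures for illustration only).
systems = {"System A": 10_000, "System B": 9_000, "System C": 500}
avg_before = sum(systems.values()) / len(systems)            # 6,500.0

# Suppose the only program carried on System C is dismissed from the claim:
remaining = {name: subs for name, subs in systems.items() if name != "System C"}
avg_after = sum(remaining.values()) / len(remaining)         # 9,500.0

print(avg_before, avg_after)  # the metric rises even though a program was removed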
Dr. Robinson suggests that the Judges nonetheless should rely on
her opinion as to relative market value because all of her alternative
measures resulted in similar proportionate valuations. 9/2/14 Tr. at
102-03 (Robinson) (``[B]y coming at this with four different metrics .
. . the fact that the estimates all came out quite similarly gives me
some comfort that the numbers are reasonable.''); see also id. at 170
(Robinson) (emphasizing that she was ``looking at all of these factors
in combination''). However, if four measures of value are individually
untenable or of minimal value, they do not necessarily possess a
synergism among them that increases their collective probative value.
VI. Judges' Determination of Distribution
A. The Judges' Distribution of Royalties Is Within the Zone of
Reasonableness
As the foregoing analysis describes, the evidence submitted by the
two parties is problematic. First, the optimal measure or approximation
of relative value in a distribution proceeding--the Shapley valuation
method--was neither applied nor approximated by either party. Second,
the methodologies proposed by both parties have significant
deficiencies.
As between the parties' competing methodologies, however, the
Judges conclude that the approach proffered by the SDC is superior to
that proffered by IPG. The SDC Methodology, consistent with measures of
value in the television industry, relies on viewership to estimate
relative market value. The Judges conclude that in constructing a
hypothetical market to measure the relative market values of distantly
retransmitted programs viewership would be a fundamental metric used to
apply a Shapley valuation model. Therefore, a methodology that uses
viewership as an indicium of program value is reasonable, appropriate,
and consistent with recent precedent in distribution proceedings.
IPG's expert, Dr. Robinson, agreed that viewership is relevant to
the determination of program value. IPG's own methodology uses
viewership as a valuation proxy, although it does so in a much less
direct and transparent way than does the SDC Methodology. Further, the
SDC presented unrebutted testimony that estimating relative market
value based on viewership data alone when considering homogeneous
programming, such as the Devotional Claimants category, might actually
understate the value of the more highly viewed programs vis-à-vis a
Shapley valuation of the same programs. Because the SDC programs
had higher ratings, the Judges conclude that the SDC Methodology,
ceteris paribus, may well tend to understate the SDC share of the
royalties in this proceeding.
By contrast, the IPG Methodology is reliant on data that does not
focus on the property right the Judges must value--the license to
retransmit individual programs in a hypothetical market that is
unaffected by the statutory license. Moreover, the IPG Methodology
fails to value the retransmitted programs in the hypothetical market as
applied by the Judges in this and prior proceedings. Rather, IPG has
assumed tacitly that the valuation of the individual programs has been
compromised by the preexisting bundling of the programs in the actual
market, and therefore all programs must be subject to common
measurements, based on broadcast hours, time of day, subscriber fees,
and subscriber levels. The Judges conclude, as they did in the 2000-03
Determination, that this failure to value programs individually is
erroneous. Accordingly, at best, as stated in the 2000-03
Determination, the IPG Methodology can serve as no more than a ``crude
approximation'' of value that may have some ``marginal'' impact on the
determination of relative market value. See 2000-03 Determination, 78
FR at 65002.
The Judges' preference for the valuation concept of the SDC
Methodology does not mean that the Judges find the SDC's application of
that concept to be free of problems or unimpeachably persuasive in its
own right. The SDC's application of that theoretically acceptable
methodology is of uneven probative value.
The Judges' task in this and every distribution determination is to
establish a distribution that falls within a ``zone of
reasonableness.'' See Asociacion de Compositores y Editores de Musica
Latino Americana v. Copyright Royalty Tribunal, 854 F.2d 10, 12 (2d
Cir. 1988); Christian Broadcasting Network, Inc. v. Copyright Royalty
Tribunal, 720 F.2d 1295, 1304 (D.C. Cir. 1983). Based on the entirety
of the Judges' analysis in this determination, the Judges find that the
SDC's proposed royalty distribution of 81.5%:18.5% in favor of the SDC
can serve only as a guidepost for an upper bound of such a zone of
reasonableness. The Judges decline to adopt the 81.5%:18.5% split as
the distribution in this proceeding, however, because the Judges
conclude that the several defects in the application of the SDC
Methodology render the 81.5%:18.5% split too uncertain. That is, the
defects in the application of the SDC Methodology require the Judges to
examine the record for a basis to establish a distribution that
acknowledges both the merits and the imperfections in the SDC
Methodology.
To that end, the Judges look to the alternative confirmatory
measure of relative market value utilized by Mr. Sanders in his report
and testimony. More particularly, the Judges look to his analysis of
the viewership data for the SDC and IPG programs in the local market,
one that served as an ``analogous'' market by which to estimate the
distribution of royalties in this proceeding. The allocation of
royalties suggested by that confirmatory analysis was a 71.3%:28.7%
distribution in favor of the SDC.
On behalf of the SDC, Mr. Sanders testified that this analogous
body of data ``is potentially very relevant and should not, in my
opinion, be ignored.'' 9/3/14 Tr. at 503 (Sanders) (emphasis added).
The Judges agree. That distribution ratio arises from the Nielsen local
viewership ratings over a three-month period in 1999 and covers all of
the programs represented in this proceeding. Importantly, that approach
does not suffer from the uncertainty created by the selection and use
of the Kessler Sample of stations, nor any of the other serious
potential or actual deficiencies in the application of the SDC
Methodology, as discussed in this determination.
There was no sufficiently probative evidence in the record for the
Judges to establish a lower bound to a zone of reasonableness. That
being said, it is noteworthy that even under IPG's Methodology the
relative market valuations of the SDC and IPG programs would be no more
favorable to IPG than roughly a 50/50 split. Under at least two prongs
of IPG's Methodology, Dr. Robinson acknowledged that an adjusted
allocation would likely be closer to a 67/33 split (based on time of
day of retransmitted broadcasts) or a 71/29 split (based on fees paid)
in SDC's favor.
Further, as IPG correctly argued, the 71.3%:28.7% distribution is
significantly different (to the benefit of IPG) compared with the
uncertain results derived by the SDC Methodology. Given that the
81.5%:18.5% allocation derived by the
[[Page 13443]]
SDC Methodology represents a guidepost to the upper bound of a zone of
reasonableness, the ``very relevant'' (to use Mr. Sanders's
characterization) 71.3%:28.7% distribution has the added virtue of
serving as a rough proxy \61\ for the need to reflect the imperfections
in the application of the SDC Methodology.
---------------------------------------------------------------------------
\61\ As noted supra, the Judges may rely on the evidence
presented by the parties to make a distribution within the zone of
reasonableness, and, in so doing, mathematical precision is not
required. See Nat'l Ass'n of Broadcasters, 140 F.3d at 929; Nat'l
Cable Television Ass'n, 724 F.2d at 182.
---------------------------------------------------------------------------
Accordingly, the Judges find and conclude that a distribution ratio
of 71.3%:28.7% in favor of the SDC lies within the zone of
reasonableness.
B. The Judges' Distribution is Consistent With a Valuation Derived From
an Application of the IPG Methodology
The Judges also note a consensus between this 71.3%:28.7%
distribution and the least deficient of IPG's proposed valuations--the
``fees-paid'' valuation. More particularly, Dr. Robinson made
``sensitivity'' adjustments to all her values to account for the
incompleteness of her data. However, her only adjustment was to
multiply all her alternative value measures by 71% to adjust for the
29% of fees paid that her data set did not include. The Judges find and
conclude that Dr. Robinson could adjust only her fees-paid valuation
approach in this manner because the ``missing 29%'' only pertained to
that data set. In the other categories, Dr. Robinson (to put it
colloquially) was subtracting apples from oranges.
When Dr. Robinson made her adjustment in the fees-paid category
(and properly accounted for all programs), she changed her valuation
and distribution estimate to 71%:29% in favor of the SDC. See Table 4
supra.\62\ Moreover, Dr. Robinson testified that her sensitivity
analysis resulted in values that she would characterize as within an
economic ``zone of reasonableness.'' 9/2/14 Tr. at 158 (Robinson)
(emphasis added).
---------------------------------------------------------------------------
\62\ The fact that Dr. Robinson's adjustment was based on
multiplying her allocations by 71% (to account for the missing 29%)
and that the adjustment led to a recommended distribution to the SDC
of 71% is only coincidental.
---------------------------------------------------------------------------
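For illustration, reconstructing this arithmetic from the fees-paid shares in Table 3 (41%:59%) gives, approximately:

\[
0.71 \times 41\% \approx 29\%, \qquad 0.71 \times 59\% + 29\% \approx 71\%,
\]

which matches the 71%:29% figure noted above up to rounding.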
Thus, not only do the Judges independently find that a 71.3%:28.7%
distribution in favor of the SDC proximately adjusts the distribution
within the zone of reasonableness, there is also a virtual overlap
between what can properly be characterized as the worst case
distribution scenarios that the parties' own experts respectively
acknowledge to be ``very relevant'' and falling within a ``zone of
reasonableness.'' \63\ Accordingly, given that IPG's expert witness
testified explicitly that a 71%:29% distribution in favor of the SDC
was within the ``zone of reasonableness'' and that the SDC's expert
witness testified explicitly that a 71.3%:28.7% distribution in favor
of the SDC was ``reasonable'' and ``should not . . . be ignored,'' such
a distribution is also consonant with the parties' understanding of a
reasonable allocation.
---------------------------------------------------------------------------
\63\ The Judges' acknowledgement that IPG's worst-case scenario
(arising out of its fees-paid approach) overlaps with the SDC's
worst case scenario constitutes the extent to which the Judges
credit the IPG Methodology.
---------------------------------------------------------------------------
C. The Judges' Distribution Is Consistent With the Parties' Economic
Decisions Regarding the Development and Presentation of Evidence
The parties admittedly proffered their respective worst-case
scenarios because each had chosen not to obtain data that are more
precise--because each party deemed the cost of acquiring additional
data to be too high relative to the marginal change in royalties that
might result from such additional data (and perhaps the overall
royalties that remain in dispute in the current proceeding). The
parties' independent yet identical decisions in this regard underscore
the Judges' reliance on the parties' worst-case scenarios in
establishing relative market value. When a party acts, or fails to act,
to cause evidentiary uncertainty as to the quantum of relief, the party
that created the uncertainty cannot benefit from its own decision in
that regard. As one commentary notes:
Factual uncertainty resulting from missing evidence is a salient
feature of every litigated case. Absolute certainty is unattainable.
Judicial decisions thus always involve risk of error. This risk
cannot be totally eliminated. However, it is sought to be minimized
by increasing the amount of probative evidence that needs to be
considered by the triers of fact. Missing evidence should therefore
be perceived as a damaging factor.
A. Porat and A. Stein, Liability for Uncertainty: Making Evidential
Damage Actionable, 18 Cardozo L. Rev. 1891, 1893 (1997) (emphasis
added). Alternatively stated, the SDC and IPG have failed to satisfy
their respective evidentiary burdens to obtain anything above the
minimum values indicated by their evidence, by failing to obtain random
samples, full surveys, the testimony of television programmers, or
other more probative evidence or testimony to support their respective
arguments for a higher percentage distribution.
Although the SDC and IPG each had an incentive to procure and
proffer additional evidence, that incentive existed only if the
additional evidence would have advanced the offering party's net
economic position. As the parties acknowledged at the hearing, the
amount at stake simply did not justify their investment in the
discovery, development, and presentation of additional evidence.\64\
When a party makes the choice to forego the expense of producing more
precise evidence, that party has implicitly acknowledged that the value
of any additional evidence is less than the cost of its procurement. As
Judge Richard Posner has noted: ``The law cannot force the parties to
search more than the case is worth to them merely because the
additional search would confer a social benefit.'' R. Posner, An
Economic Approach to Evidence, 51 Stan. L. Rev. 1477, 1491 (1999).
---------------------------------------------------------------------------
\64\ As noted previously, IPG criticized the SDC Methodology for
failing to utilize better data. That criticism applies equally to
both parties and reflects their respective decisions not to invest
additional resources to obtain more evidence. See supra notes 46-47
and accompanying text.
---------------------------------------------------------------------------
VII. Conclusion
Although there is a virtual overlap between the worst-case
scenarios of both parties, the Judges adopt the SDC's distribution
proposal, in light of the more fundamental deficiencies in the IPG
Methodology. Accordingly, based on the analysis set forth in this
Determination, the Judges conclude that the distribution at issue in
this proceeding shall be:
SDC: 71.3%
IPG: 28.7%
This Final Determination determines the distribution of the cable
royalty funds allocated to the Devotional Claimants category for the
year 1999, including accrued interest. The Register of Copyrights may
review the Judges' final determination for legal error in resolving a
material issue of substantive copyright law. The Librarian shall cause
the Judges' final determination, and any correction thereto by the
Register, to be published in the Federal Register no later than the
conclusion of the Register's 60-day review period.
January 14, 2015.
SO ORDERED.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
David R. Strickler,
United States Copyright Royalty Judge.
Jesse M. Feder,
United States Copyright Royalty Judge.
[[Page 13444]]
Dated: January 14, 2015.
Suzanne M. Barnett,
Chief United States Copyright Royalty Judge.
Approved by:
James H. Billington,
Librarian of Congress.
[FR Doc. 2015-05777 Filed 3-12-15; 8:45 am]
BILLING CODE 1410-72-P