FCC’s Extreme Proposal Threatens the Livelihood of Creators

By Matthew Barblan & Kevin Madigan

Earlier this year, the FCC proposed a new regulatory scheme ostensibly designed to improve the market for pay-TV set-top boxes. Chairman Wheeler claimed that the proposed rules would “tear down the barriers that currently prevent innovators from developing new ways for consumers to access and enjoy their favorite shows and movies on their terms.” But set-top boxes are already on their way out as more and more consumers turn to streaming apps to watch their favorite movies and shows. So what is the FCC up to here? A close look at the proposed rules reveals that this isn’t about set-top boxes at all. Instead, the rules are designed to benefit a handful of companies that want to disseminate pay-TV programs without negotiating with or paying a license to the owners of those programs, undermining the property rights of creators and copyright owners. The creative community is understandably up in arms.

As we explain in comments filed with the FCC, the proposed rules would require pay-TV providers to make copyrighted video content available to third-party companies that have no contractual relationship with either the pay-TV providers or the creators of the video programming. The Commission essentially aims to create a zero-rate compulsory license for these companies. But this zero-rate compulsory license would fundamentally disrupt copyright owners’ ability to pursue the wide variety of business models and licensing arrangements that enable our creative ecosystem to thrive.

A key component of copyright owners’ property interest is the ability to choose to whom they license their works and on what terms. Because their livelihoods depend on the success of their works, copyright owners are particularly well-positioned and incentivized to determine the best way to commercialize them. By conveying copyrighted works to third parties without the consent of copyright owners, the proposed rules trample on the property rights of copyright owners and risk severely damaging our vibrant creative economy.

Adding insult to injury, the proposed rules wouldn’t even require the recipients of this zero-rate compulsory license to abide by the underlying contractual terms between copyright owners and pay-TV providers. Licensing contracts between copyright owners and pay-TV providers often include specific terms detailing the obligations of the provider in distributing the creative works. These terms can include things like channel “neighborhood” assignments, branding requirements, advertising limits, platform restrictions, and the list goes on. While the Commission states that “our goal is to preserve the contractual arrangements” between copyright owners and pay-TV providers, the proposed rules would transfer some, but not all, of the underlying contractual obligations to the third-party recipients of the copyrighted works.

For example, under the Commission’s proposal, third-party recipients of the copyrighted works would not be required to abide by contractual terms about channel placement designed to protect viewer experience and brand value. Similarly, the Commission’s proposal would not require third-party recipients of copyrighted works to abide by contractual terms concerning advertising in the delivery of those works. By allowing third parties to sidestep these terms, the Commission risks reducing the advertising revenue that pay-TV providers can earn from disseminating copyrighted works, thereby reducing the value of the license agreements that copyright owners negotiate with pay-TV providers.

In another thumb-in-the-eye to creators and copyright owners, the Commission’s proposal fails to account for copyright owners who may want to protect their copyrighted works by disseminating them exclusively through proprietary (and not widely licensable) content protection mechanisms. Instead, the Commission proposes to require pay-TV providers “to support at least one content protection system to protect its multichannel video programming that is licensable on reasonable and nondiscriminatory terms by an organization that is not affiliated with [the pay-TV provider].” Thus, the Commission would force copyright owners to risk exposing their property to security threats that may be associated with using widely licensable content protection mechanisms.

Furthermore, nothing in the Commission’s proposal would prevent third parties from delivering the copyrighted works side-by-side with stolen versions of those same works. It is easy to imagine a search function that aggregates copies of creative works from a variety of platforms and displays the search results side-by-side. In fact, anyone who has run an internet search for a movie or TV show has likely seen results that mix links to both legitimate and stolen works.
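To see how easily this could happen, consider a minimal sketch of such an aggregator (all of the source names, titles, and URLs below are invented for illustration; no real service is being described). Nothing in a naive merge of search results distinguishes licensed copies from stolen ones:

```python
# Hypothetical sketch of a cross-platform search that merges results
# from several catalogs with no licensing check. All names and URLs
# are invented for illustration.
def search_all(query: str, sources: dict) -> list:
    """Return (source, title, url) tuples matching the query from every source."""
    results = []
    for source_name, catalog in sources.items():
        for title, url in catalog:
            if query.lower() in title.lower():
                results.append((source_name, title, url))
    return results  # licensed and infringing copies end up side by side

sources = {
    "licensed-service": [("Example Movie", "https://licensed.example/watch/42")],
    "pirate-mirror": [("Example Movie", "https://mirror.example/stream/42")],
}

for source_name, title, url in search_all("example movie", sources):
    print(f"{source_name}: {title} -> {url}")
```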

Copyright owners’ ability to protect their creative works is essential both to preserve the value of their property and to give them the confidence to enter into arrangements with intermediaries (like pay-TV providers) to disseminate their works to a wide variety of audiences. This is especially true in light of the unique security challenges involved in portable, online, and short-term access to copyrighted works. Any reasonable proposal in this space would help copyright owners move forward in the ongoing battle to prevent the rampant theft and illegal dissemination of their works that has accompanied the rise of the internet. Unfortunately, the Commission’s proposal does just the opposite, limiting copyright owners’ ability to protect their property and pushing them backwards in the ongoing struggle against piracy.

Furthermore, it is entirely unclear from where the Commission would draw the legal authority to change the nature of copyright owners’ property rights. The proposed rules simply claim that Section 629 of the Communications Act grants the Commission authority to implement the regulations in order to ensure competition and consumer choice in the navigation device market. In its justification of authority, the Commission repeatedly states that it will broadly interpret ambiguous terms in the Communications Act and that “a broad interpretation is necessary.” But nowhere in its analysis does the Commission cite to language granting it the authority to rewrite copyright law. Even under the broadest of interpretations, it is clear that the Communications Act does not give the Commission the authority to amend the Copyright Act and create a zero-rate compulsory license out of thin air.

By granting artists and creators property rights in the fruits of their labors, copyright supports a diverse and multifaceted ecosystem that enables the development, distribution, and enjoyment of creative works, and that provides significant economic and cultural benefits to our society. But this ecosystem only works if copyright owners are able to safely and freely deploy their property in the marketplace. Unfortunately, the Commission’s proposal fails to respect the property rights of creators and copyright owners, risking severe disruption to the very same creative marketplace the Commission claims to promote.

Google Image Search and the Misappropriation of Copyrighted Images

Cross-posted from the Mister Copyright blog.

Last week, American visual communications and stock photography agency Getty Images filed a formal complaint in support of the European Union’s investigation into Google’s anti-competitive business practices. The Getty complaint accuses Google of using its image search function to appropriate or “scrape” third-party copyrighted works, thereby drawing users away from the original source of the creative works and preserving its search engine dominance.

Specifically, Getty’s complaint focuses on changes made to Google’s image search functionality in 2013 that led to the appealing image galleries we’re familiar with today. Before the change, users were presented with low-resolution thumbnail versions of images and would be rerouted to the original source website to view a larger, sharper version and to find out how they might legally license or get permission to use the work. But with the current Google Image presentation, users are instantly delivered a large, high-resolution image and have no need to access the legitimate source. As Getty says in its complaint, “[b]ecause image consumption is immediate, once an image is displayed in high-resolution, large format, there is little impetus to view the image on the original source site.”

According to a study by Define Media Group, in the first year after the changes to Google Image search, image search referrals to original source websites fell by up to 80%. The report also provides before-and-after screenshots of a Google Image search and points out that before 2013, when a thumbnail was clicked, the source site appeared in the background. Not only does the source site not appear in the new version, but an extra click is required to reach it, adding to the overall disconnect from the original content. Despite Google’s claims to the contrary, the authors of the study conclude that the new image search service is designed to keep users on the Google website.

It’s difficult not to consider Google’s image UI [user interface] change a shameless content grab – one which blatantly hijacks material that has been legitimately licensed by publishers so that Google Image users remain on their site, and are de-incentivized from visiting others.

While Getty’s complaint against Google is based on anticompetitive concerns, it involves the underlying contention that Google Image search enables misappropriation of copyrighted images on a massive scale. Anyone who has run a Google Image search knows that with the click of a mouse, a user is presented with hundreds of images related to their query, and with another simple right click, that user can copy and paste these images as they please. But Google Image search often returns an abundance of copyright-protected images, enabling anyone to copy, display, and disseminate images without considering the underlying copyright and existing licenses. And while using the service may be free, make no mistake that Google is monetizing it through advertisements and the mining of users’ personal data.

When users are able to access and copy these full-screen, high-resolution images from Google Image search, not only do third-party image providers lose traffic to their websites, but the photographers and creators behind the images lose the potential income, attribution, and exposure that would come with users accessing the original source. As Getty Images General Counsel Yoko Miyashita explains, “Getty Images represents over 200,000 photojournalists, content creators and artists around the world who rely on us to protect their ability to be compensated for their work.” When Google Image search obviates the need for a user to access the original creative content, these artists and creators are denied a fair marketplace for their images, and their ability and motivation to create future works is jeopardized.

Shortly after Google changed to the new image search, individual photo publishers and image creators took to a Google Forum to voice their concerns over the effects the service was having on their images and personal web pages. A recurring complaint was that the service made it more difficult to find out information about images and that users now had to go through more steps to reach the original source website. One commenter, identifying herself as a “small time photo publisher,” described Google’s new practice of hotlinking to high-resolution images as a “skim engine” rather than a “search engine.” She lamented that not only was Google giving people access to her content without visiting her site, but her bandwidth usage (i.e. expense) went up due to the hotlinking of her high resolution images.

Google Image supporters argue that creators and image providers should simply use hotlink protection to block Google from displaying their content (the basic technique is sketched below), but Google’s search engine dominance is so absolute that blocking it would further curtail traffic to the original source of the content. Others suggest that image providers stamp their images with watermarks to protect against infringement, but Getty VP Jonathan Lockwood explains that doing so would result in punishment from Google.

They penalise people who try to protect their content. There is then a ‘mismatch penalty’ for the site: you have to show the same one to Google Images that you own. If you don’t, you disappear.
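For readers unfamiliar with the mechanism, hotlink protection typically works by checking the HTTP Referer header on each image request and refusing to serve images to pages hosted on other domains. The following is a minimal sketch in Python, with hypothetical host names; real sites usually implement this in web-server configuration rather than application code. As Lockwood’s comment illustrates, a site that serves different content to Google than to its own visitors risks a “mismatch penalty.”

```python
# Minimal sketch of referer-based hotlink protection. The host names are
# hypothetical; real deployments usually do this in web-server config.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example-photo-site.com", ""}  # "" permits direct requests with no Referer

def hotlink_guard(app):
    """WSGI middleware that blocks image requests referred from other sites."""
    def guarded(environ, start_response):
        path = environ.get("PATH_INFO", "")
        referring_host = urlparse(environ.get("HTTP_REFERER", "")).hostname or ""
        if path.endswith((".jpg", ".jpeg", ".png")) and referring_host not in ALLOWED_HOSTS:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Hotlinking is not permitted."]
        return app(environ, start_response)
    return guarded
```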

The internet has made sharing creative works and gaining exposure as an artist easier than anyone could have imagined before the digital age, but it has also brought challenges in the form of protecting and controlling creative content. These challenges are particularly burdensome for image creators and providers, whose creative works are subject to unauthorized use the moment they are put online. Over the last few years, Google Image search has contributed to this problem by transforming from a service that directed users to creative works into a complete substitute for original, licensed content.

With fewer opportunities for image providers and creators to realize a return from their works, whether in the form of payment, attribution, or exposure, creativity and investment in creators will be stifled. Artists and rightsholders deserve fair compensation and credit for their works, and technology needs to work with image providers rather than against them to ensure that great content continues to be created.

Copyright Policy Should Be Based On Facts, Not Rhetoric

Here’s a brief excerpt of a post by Kevin Madigan & Devlin Hartline that was published on IPWatchdog.

After nearly twenty years with the DMCA, the Copyright Office has launched a new study to examine the impact and effectiveness of this system, and voices on both sides of the debate have filed comments expressing their views. For the most part, frustrated copyright owners report that the DMCA has not successfully stemmed the tide of online infringement, which is completely unsurprising to anyone who spends a few minutes online searching for copyrighted works. Unfortunately, some commentators are also pushing for changes that would make things even more difficult for copyright owners.

To read the rest of this post, please visit IPWatchdog.

Separating Fact from Fiction in the Notice and Takedown Debate

By Kevin Madigan & Devlin Hartline

With the Copyright Office undertaking a new study to evaluate the impact and effectiveness of the Section 512 safe harbor provisions, there’s been much discussion about how well the DMCA’s notice and takedown system is working for copyright owners, service providers, and users. While hearing from a variety of viewpoints can help foster a healthy discussion, it’s important to separate rigorous research efforts from overblown reports that offer incomplete data in support of dubious policy recommendations.

Falling into the latter category is Notice and Takedown in Everyday Practice, a recently released study claiming to take an in-depth look at how well the notice and takedown system operates after nearly twenty years in practice. The study has garnered numerous headlines that repeat its conclusion that nearly 30% of all takedown requests are “questionable” and that echo its suggestions for statutory reforms that invariably disfavor copyright owners. But what the headlines don’t mention is that the study presents only a narrow and misleading assessment of the notice and takedown process, overstates its findings, and fails to adequately support its broad policy recommendations.

The study was presumably released to coincide with the deadline for submitting comments to the Copyright Office on the state of Section 512, and its authors claim to have produced “the broadest empirical analysis of the DMCA notice and takedown” system to date. They make bold pronouncements about how “the notice and takedown system . . . meets the goals it was intended to address” and “continues to provide an efficient method of enforcement in many circumstances.” But the goals identified by the authors are heavily skewed towards service providers and users at the expense of copyright owners, and the authors include no empirical analysis of whether the notice and takedown system is actually effective at combating widespread piracy.

The study reads more like propaganda than robust empiricism. It should be taken for what it is: A policy piece masquerading as an independent study. The authors’ narrow focus on one sliver of the notice and takedown process, with no analysis of the systemic results, leads to conclusions and recommendations that completely ignore the central issue of whether Section 512 fosters an online environment that adequately protects the rights of copyright owners. The authors conveniently ignore this part of the DMCA calculus and instead put forth a series of proposals that would systematically make it harder for copyright owners to protect their property rights.

To its credit, the study acknowledges many of its own limitations. For example, the authors recognize that the “dominance of Google notices in our dataset limits our ability to draw broader conclusions about the notice ecosystem.” Indeed, over 99.992% of the individual requests in the dataset for the takedown study were directed at Google, with 99.8% of that dataset directed at Google Search in particular. Of course, search engines do not host user-generated content—the links Google provides are links that Google itself collects and publishes. There are no third parties to alert about the takedowns since Google is taking down its own content. Likewise, removing links from Google Search does not actually remove the linked-to content from the internet.

The authors correctly admit that “the characteristics of these notices cannot be extrapolated to the entire world of notice sending.” A more thorough quantitative study would include data on sites that host user-generated content, like YouTube and Facebook. As it stands, the study gives us some interesting data on one search engine, but even that data is limited to a sample size of 1,826 requests out of 108 million over a six-month period in mid-2013. And it’s not even clear how these samples were randomized since the authors admittedly created “tranches” to ensure the notices collected were “of great substantive interest,” yet they provide no details about how these tranches were constructed.
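To put that sample in perspective, a quick back-of-the-envelope calculation using only the figures quoted above shows just how thin the coverage is:

```python
# Scale check using the figures quoted above.
total_requests = 108_000_000  # takedown requests in the six-month window
coded_sample = 1_826          # notices the authors actually examined

print(f"Sampling fraction: {coded_sample / total_requests:.6%}")
# Sampling fraction: 0.001691%
```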

Despite explicitly acknowledging that the study’s data is not generalizable, the authors nonetheless rely on it to make numerous policy suggestions that would affect the entire notice and takedown system and that would further stack the deck in favor of infringement and against copyright owners. They even identify some of their suggestions as explicitly reflecting “Public Knowledge’s suggestion,” which is a far cry from a reasoned academic approach. The authors do note that “any changes should take into account the interests of . . . small- and medium-sized copyright holders,” but this is mere lip service. Their proposals would hurt copyright owners of all shapes and sizes.

The authors justify their policy proposals by pointing to the “mistaken and abusive takedown demands” that they allegedly uncover in the study. These so-called “questionable” notices are the supposed proof that the entire notice and takedown system needs fixing. A closer look at these “questionable” notices shows that they’re not nearly so questionable. The authors claim that 4.2% of the notices surveyed (about 77 notices) are “fundamentally flawed because they targeted content that clearly did not match the identified infringed work.” This figure includes obvious mismatches, where the titles aren’t even the same. But it also includes ambiguous notices, such as where the underlying work does not match the title or where the underlying page changes over time.

The bulk of the so-called “questionable” notices comes from notices that raise “questions about compliance with the statutory requirements” (15.4%, about 281 notices) or raise “potential fair use defenses” (7.3%, about 133 notices). As to the statutory requirements issue, the authors argue that these notices make it difficult for Google to locate the material to take down. This claim is severely undercut by the fact that, as they acknowledge in a footnote, Google complies with 97.5% of takedown notices overall. Moreover, the argument wades into the murky waters of whether copyright owners can send service providers a “representative list” of infringing works. Turning to the complaint about potential fair uses, the authors argue that copyright owners are not adequately considering “mashups, remixes, or covers.” But none of these uses are inherently fair, and there’s no reason to think that the notices were sent in bad faith just because someone might be able to make a fair use argument.
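The approximate notice counts cited in the last two paragraphs follow directly from applying the study’s percentages to the 1,826-notice sample, as this quick check confirms:

```python
# Reconstructing the approximate notice counts from the study's percentages.
sample_size = 1_826
categories = [
    ("content did not match the identified work", 0.042),
    ("questions about statutory compliance", 0.154),
    ("potential fair use defenses", 0.073),
]
for description, fraction in categories:
    print(f"{description}: ~{round(sample_size * fraction)} notices")
# Prints roughly 77, 281, and 133 notices, matching the figures above.
```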

The authors claim that their “recommendations for statutory reforms are relatively modest,” but that supposed modesty is absent from their broad list of suggestions. Of course, everything they suggest increases the burdens and liabilities of copyright owners while lowering the burdens and liabilities of users, service providers, and infringers. Having overplayed the data on “questionable” notices, the authors reveal their true biases. And it’s important to keep in mind that they make these broad suggestions that would affect everyone in the notice and takedown system after explicitly acknowledging that their data “cannot be extrapolated to the entire world of notice sending.” Indeed, the study contains no empirical data on sites that host user-generated content, so there’s nothing whatsoever to support any changes for such sites.

The study concludes that the increased use of automated systems to identify infringing works online has resulted in the need for better mechanisms to verify the accuracy of takedown requests, including human review. But the data is limited to small surveys with secret questions and a tiny fraction of notices sent to one search engine. The authors offer no analysis of the potential costs of implementing their recommendations, nor do they consider how those recommendations might affect the ability of copyright owners to police piracy. Furthermore, data presented later in the study suggests that increased human review might have little effect on the accuracy of takedown notices. Not only do the authors fail to address the larger problem of whether the DMCA adequately addresses online piracy, but their suggestions aren’t even likely to address the narrower problem of inaccurate notices that they want to fix.

Worse still, the study almost completely disregards the ability of users to contest mistaken or abusive notices by filing counternotices. This is the solution that’s already built into the DMCA, yet the authors inexplicably dismiss it as ineffective and unused. Apart from providing limited answers from a few unidentified survey respondents, the authors offer no data on the frequency or effectiveness of counternotices. The study repeatedly criticizes the counternotice system as failing to offer “due process protection” to users, but that belief is grounded in the notion that a user who fails to send a counternotice has somehow been denied the chance to do so. Moreover, the criticism implies a constitutional right that is not at issue when two parties interact in the absence of government action. The same holds true for the authors’ repeated—and mistaken—invocation of “freedom of expression.”

More fundamentally, the study ignores the fact that the counternotice system is stacked against copyright owners. A user can simply file a counternotice and have the content in question reposted, and most service providers are willing to repost the content following a counternotice because they’re no longer on the hook should the content turn out to be infringing. The copyright owner, by contrast, then faces the choice of allowing the infringement to continue or filing an expensive lawsuit in federal court. The study makes it sound like users are rendered helpless because counternotices are too onerous, but the reality is that the system leaves copyright owners practically powerless to combat bad faith counternotices.

Pretty much everyone agrees that the notice and takedown system needs a tune-up. The amount of infringing content available online today is immense. This rampant piracy has resulted in an incredible number of takedown notices being sent to service providers by copyright owners each day. Undoubtedly, the notice and takedown system should be updated to address these realities. And to the extent that some are abusing the system, they should be held accountable. But in considering changes to the entire system, we should not be persuaded by biased studies based on limited (and secret) datasets that provide little to no support for their ultimate conclusions and recommendations. Such studies may make for evocative headlines, but they don’t make for good policy.

Acknowledging the Limitations of the FTC’s PAE Study

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint on the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affects a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, and Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but also that there’s “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It is instead a fact-finding mission, the results of which could guide future inquiries. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.
