Numerous ways of fighting piracy have been proposed over the years. One such study was conducted by BAE Systems Detica and PRS for Music and published by Google. The idea is that governments need to work with advertisers and payment processors to stop the funding of alleged “pirate sites”, thereby cutting off their oxygen supply. The question is, would that ever work?
The report suggests that rather than imposing blocks or filters that might threaten the rights of netizens, governments should work together with advertising networks, payment processors and rightsholders to block the revenue streams of pirate websites.
For example, online advertisers could be encouraged to sign up with industry bodies that have a strong code of conduct, while online payment platforms could be asked to refuse to deal with pirates.
The report fails to take into account the fact that many modern peer-to-peer networks ignore websites altogether, allowing direct communication and exchange of content between users through software applications and “magnet” links.
The third paragraph is puzzling, to say the least. If you know your file-sharing history, you’ll know that file-sharing before BitTorrent was almost exclusively a network of users. Napster worked this way, as did the FastTrack network, Gnutella, eDonkey2000, Kademlia, WinMX and Gnutella2. There was a world of file-sharing before BitTorrent. Yes, there were sites that shared hash links such as ED2K links, but many users simply used the built-in search feature, and such sites could be seen as an added bonus to the overall file-sharing experience. BitTorrent was actually unique in that it depended on a website to host a tracker, and you had, at one point, to rely on .torrent files to obtain content. The only other kind of file-sharing I’m aware of that depends on an HTTP-based site would be communities that share links to one-click hosting sites (or, as some like to call them, “cyberlockers”). Beyond that, we start getting into some pretty liberal uses of the term “file-sharing”. Of course, let’s just say this is all semantics for now.
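To illustrate the point about magnet links, here is a minimal sketch of what one actually carries. The hash and names below are made up for illustration: the `xt` parameter holds the content’s info hash, which is enough for peers to locate each other without any .torrent file hosted on a website.

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical magnet link (hash and names invented for this example).
magnet = ("magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a"
          "&dn=example-file&tr=udp%3A%2F%2Ftracker.example.org%3A6969")

# Split the query string into its parameters.
params = parse_qs(urlparse(magnet).query)

# xt ("exact topic") ends with the SHA-1 info hash identifying the content.
info_hash = params["xt"][0].split(":")[-1]

print(info_hash)        # 40 hex characters -- the content identifier
print(params["dn"][0])  # dn: a display name, purely cosmetic
print(params["tr"][0])  # tr: an optional tracker hint
```

Nothing in that link points at a web page hosting the content; the hash itself is the lookup key, which is why revenue-squeezing a website does little against this kind of sharing.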
Let’s just take a look at the actual report which is available on the Google policy website. The blog posting starts with the following:
Until now, most debate about online piracy is driven by emotion, not hard data. In an attempt to fill this gap, we teamed up with the UK copyright collective PRS for Music and asked BAE Systems Detica to take a detailed look. Their report was published today.
The immediate question that comes to mind is, “What kind of sources are you reading anyway?” We don’t speak for other news sites and blogs, but we here at ZeroPaid don’t simply ramp up the debate on pure emotion. We use scientific data and analysis. Is using twenty academic studies as a starting point simply an argument “driven by emotion”? Trust us, those aren’t the only studies we have looked at, and we form our opinions based on fact. I think arms-length independent research is ideal, rather than research funded and/or directed by one side of the debate or the other.
Looking at the content, some of the descriptions of different types of file-sharing sites are somewhat vague. A screenshot of a description of a “P2P Community”:
The first point could be used to describe a huge number of sites. You could use that description and say Google is a P2P Community. After all, it has news, videos, scholarly articles, book samples, source code (Google Code), etc.
The third point, I’d say, is a little better, but in the context of copyright infringement, it could also presume that any content being downloaded is infringing. In that case, I would stress avoiding absolutes here.
Personally, I would have been a bit more satisfied with an example. It doesn’t have to be a website that exists today, but an example of the kind of website this study is looking at would have made things a little clearer – even in a footnote of some sort.
The study also went on to explain the data that was collected:
From the 257 websites, we used 153 websites as the ‘Training’ set and the remaining 104 websites as the ‘Validation’ set. We used the training set of websites to test the optimum number of segments needed to classify the market.
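As a rough sketch of the split the report describes – the site names below are placeholders, and the report doesn’t disclose how the sample was actually drawn beyond the counts – dividing 257 sites into a 153-site training set and a 104-site validation set might look like this:

```python
import random

# Placeholder identifiers standing in for the report's 257 websites.
sites = [f"site-{i}" for i in range(257)]

rng = random.Random(42)   # fixed seed so the split is reproducible
shuffled = sites[:]
rng.shuffle(shuffled)

# Training set for fitting the market segmentation; validation set
# held back to check that the segments generalize.
training = shuffled[:153]
validation = shuffled[153:]

print(len(training), len(validation))  # 153 104
```

The point of holding back a validation set is that segments found in the training sites can be checked against sites the segmentation never saw, rather than just describing the data it was fit on.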
After a quick scan through the study itself, we found that it covers identifying the different sites, the methods by which they generate revenue, and their popularity. Looking back at the executive summary reveals that I’m not missing much in the study itself with that quick description:
The Six Business Models for Copyright Infringement is a segmentation driven investigation of sites that are thought by major rights holders to be significantly facilitating copyright infringement. In this study, we investigate the operation of a sample of these sites to determine their characteristics. Among other things, we investigate how they function, how they are funded, where they are hosted, what kinds of content they offer, and how large their user bases are.
The aim of this study is to provide quantitative data to inform debate around infringement and enforcement. Although a large amount of quantitative and qualitative data has been collected in the past through consumer surveys into why people use these sites, there is insufficient data-driven analysis of the sites that are considered to facilitate copyright infringement.
Of course, we should also point out that contributing data came from the British Phonographic Industry (BPI), which represents major record labels, and the Federation Against Copyright Theft (FACT), a trade organization representing the UK’s big corporations. It’s hard for me to accept these organizations as unbiased sources of information to begin with, given the long and highly questionable history such organizations have of publishing data to further their cause. One example of this is the MPAA using push polls in 2006 to influence opinion rather than collect it. To me, it’s a red flag when a study uses data from sources with a stake in the matter.
Google, in their blog posting, made the following comments about the data collected:
Detica examined hundreds of websites cited by rightsholders as the main online pirates. For each, it analysed – amongst other things – the number of unique visitors, IP addresses, the main sources of funding, and preferred audio-visual formats. The results suggest that the most effective weapon to tackle piracy is to follow the money – to squeeze the pirates’ financing.
The report details intriguing trends. Sites selling unlicensed music tend to have low and declining volumes of users, suggesting that the ease of buying legal copyrighted music is having an impact on piracy. In contrast, sites streaming free live TV account for a third of all sites and are increasing in number the fastest.
How best to combat this danger? Instead of imposing blocks or filters that might damage fundamental freedoms, governments should construct coalitions with reputable advertising networks, payment processors and rightsholders. Together, these coalitions can crack down and squeeze the financing behind online infringement.
The UK Government is moving in this direction. It encourages advertising networks, payment processors, rightsholders, and the police to collaborate and squeeze pirate financing. And the research shows that the UK no longer is an attractive home for pirates.
I found the first paragraph of this snippet amusing since it says “hundreds” of sites were surveyed. Yet, looking at the study reveals that the survey covered 257 sites. Grammatically correct? Sure. A bit misleading? Probably.
What’s strange is that Google went on to say that legitimate online services are taking off and more people are using them, increasing the revenue for these industries. If that’s the case, then why should the government waste its time trying to further clamp down on pirate websites? Realistically speaking, the creation of legitimate services that can easily replace file-sharing networks as a source of content is the best way to fight piracy. If you want to make piracy a thing of the past, make it obsolete. This is the sentiment we got when we analyzed the 20 studies from earlier.
To further throw a wrench into the ideas being put forth here, are these people really serious in believing that cracking down on advertising models and payment distribution is somehow going to thwart piracy? With the advent of Bitcoin, Flattr and who knows how many other services, I think pirates are going to find a way around any financial blockade the government throws at them. The US government tried putting the squeeze on Wikileaks after it embarrassed the government through the publication of diplomatic cables (which did expose corruption), and Wikileaks responded by finding ways around the blockade. If the US government was unsuccessful at putting up a financial blockade on a single website (after all, a few of the financial institutions used by Wikileaks were based in the US), what hope does the UK government have of putting up financial blockades on several websites?
At the end of the day, I don’t see this plan working to crack down on the big piracy operations that are claimed to be targeted. The only sites that stand to run into big issues may be the small operations that don’t really have a plan and legitimate sites that could accidentally get roped into some sort of dragnet operation somewhere along the line. This has been the story for a lot of anti-piracy plans for years and I don’t see this plan being different from the others. Setting up big financial blockades against piracy websites is going to end up being a big waste of time and resources all the while solving nothing.