
Gaming the App Store: Modern-Day Clickfraud?

With over a million apps each on the Apple App Store and on Google Play, marketing an app is tough business for any developer. Amid a multitude of copycat apps offering essentially the same functionality, developers will be lucky to have their offerings included in top and featured lists, whether ranked by downloads, ratings, reviews or revenue. How does one reach the tipping point at which real traction, and organic revenue potential, follows shortly after?

It seems developers are finding ways to game the system, through practices that are shady, or at least grey areas, when it comes to building up an app’s profile. In a recent tweet, Hong Kong-based TapCase developer Simon Pang shares a photo of what appears to be a woman posting ratings and reviews on an array of tablets. Pang writes that “this is how App Store ratings work.”

Barry Mead of Fireproof Games tweets that “‘respected’ major developers use systems like these daily.”

This being Twitter, of course, there is no verification yet of the image’s original source, nor of whether the photographed activity is, indeed, a pay-to-review or pay-to-rate service. It has, however, shed some light on practices that skew app store ratings, popularity, downloads and, eventually, revenues for the developers involved.

The Wall Street Journal’s Lisa Fleisher plans to dig deeper into the issue. But without prejudice to Fleisher’s journalistic work, we can already raise several questions and critiques about how the major app ecosystems are run.

Is it against policy? What is being done?

One might wonder whether Apple is aware of how developers are gaming its app discovery system. It’s reminiscent of how developers jack up their app prices from $0.99 to $999 (the maximum the App Store supports) and then have someone buy a copy of the app. Even though Apple keeps a 30% share of the money, the app gets a boost on the top paid apps list, after which the developer can revert the price to $0.99. In essence, moneyed developers can shell out cash, lose 30% of it to Apple’s cut, and get a potentially more popular (and revenue-generating) application in return.
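To put rough numbers on that scheme, here is a back-of-the-envelope sketch. The $0.99 and $999 price points and the 30% commission come from the description above; the function names and the break-even calculation are purely illustrative assumptions, not anything Apple or any developer has published.

```python
import math

# Illustrative arithmetic for the price-jacking scheme described above.
# The $0.99 / $999 prices and the 30% commission come from the article;
# everything else is an assumption for the sake of the example.

STORE_CUT = 0.30        # platform commission on each sale
BOOSTED_PRICE = 999.00  # temporarily inflated price the developer buys at
NORMAL_PRICE = 0.99     # price after reverting

def net_cost_of_self_purchase(price: float, cut: float = STORE_CUT) -> float:
    """A developer buying their own app only loses the store's commission."""
    return price * cut

def organic_sales_to_break_even(cost: float, price: float = NORMAL_PRICE,
                                cut: float = STORE_CUT) -> int:
    """Organic sales at the normal price needed to earn that cost back."""
    revenue_per_sale = price * (1 - cut)
    return math.ceil(cost / revenue_per_sale)

cost = net_cost_of_self_purchase(BOOSTED_PRICE)   # $299.70 per self-purchase
breakeven = organic_sales_to_break_even(cost)     # 433 sales at $0.99
print(f"Net cost per boosted sale: ${cost:.2f}")
print(f"Organic $0.99 sales to break even: {breakeven}")
```

In other words, each self-purchase at the inflated price costs the developer only about $300, an amount a few hundred organic $0.99 sales would cover, never mind the windfall if the chart boost actually takes hold.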

If pay-to-review farms are mass-producing ratings and reviews for a fee, it’s grossly unfair to developers who rely on organic reviews and ratings from actual users. Ratings affect discoverability and revenue potential, and developers who pay for these boosts can easily recoup the investment once they start getting millions of daily downloads.

Apple’s developer terms of service says this: “If you attempt to cheat the system (for example, by trying to trick the review process, steal data from users, copy another developer’s work, or manipulate the ratings) your Apps will be removed from the store and you will be expelled from the developer program.”

Google Play also has this to say in its developer program policies: “Developers must not attempt to change the placement of any Product in the Store, or manipulate any product ratings or reviews by unauthorized means such as fraudulent installs, paid or fake reviews or ratings, or by offering incentives to rate products.”

But “tricking the review process” and “manipulating the ratings” are quite broad terms. Does the policy specifically ban paying someone to post manual reviews, or only the use of bots? Does it have a geographic dimension? And what does Apple do with apps that employed such practices but later gained legitimate, organic ratings, reviews and downloads?

At least Google’s wording explicitly calls out paid and fake reviews and ratings. The question is whether the app stores are actually taking concrete steps to go after violators.

Gaming and economics

This reminds me of the way people have gamed the big systems before. To wit:

  • In the heyday of pay-per-click advertising, clickfraud was a big deal, yet that did not prevent publishers from hiring third parties to click ads manually to jack up revenue, or sometimes even to kill off competitors’ campaigns and drain their ad budgets.
  • Content farms used to be a popular way to earn millions. These companies paid writers measly cents to churn out rehashed, SEO-friendly articles of questionable quality, then earned from the ad placements. It was a viable business model until Google pulled the plug with a series of search algorithm updates that ended up hurting both content farms and legitimate publications.
  • While trading virtual goods for real money may be against the policies of most games, that has not stopped companies from running “gold farming” operations, something with implications not only for the dynamics of MMORPGs but also for development economics (e.g., buyers tend to be from developed countries, while the players who farm items to sell come from emerging economies).
  • Today, viral sites have a habit of ripping off content from other sources (rarely crediting the original), adding clickbait headlines for the sake of social sharing, and earning from advertisements. Facebook has recently attempted to curtail these practices through algorithm changes, but viral junk remains in our news feeds.

The common denominator among these examples is money. Where there is money to be made, people will tend to find ways to game the system to their economic advantage. Never mind user experience, content quality or business ethics.

Given that “respected” major developers are said to be using systems like these, can we still trust app store ratings and discovery mechanisms?

Feature image credit: Bloomua / Shutterstock.com