Apple Can’t Ban “Rate This App” Dialogs – Marco.org:
We could all rate these apps lower as a form of protest, but it’s unlikely to have a meaningful impact. The App Store is a big place.
We could vote with our feet and delete any app that interrupts us with these, but we won’t. Are you really going to delete Instagram and stop using it? Yeah, exactly.
We’re stuck with these annoying dialogs. All we can really do is avoid using them ourselves and stigmatize them as akin to spam, popup ads, and telemarketing — techniques only used by the greedy, desperate, shameless, and disrespectful.
Sure they can… This is an engineering problem. It’s a classic biggish-data problem.
Attach a “report this as inappropriate” button to the things you want to police. Any time someone reports something, it’s stuffed into the database as a record (the thing reported, a datestamp, and the userID that reported it).
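To make that concrete, here’s a rough sketch (in Python) of what that report record might look like. The field names and types are mine, purely for illustration; this isn’t anyone’s actual schema.

    # A minimal report record for the "report this as inappropriate" button.
    # Field names are illustrative assumptions, not any store's real schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AbuseReport:
        thing_id: str     # the review, app, or other item being reported
        reporter_id: str  # the userID that pressed the report button
        reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        status: str = "pending"   # later becomes "validated" or "rejected"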
Now, hire a small team. Their job, to start, is to evaluate reports and either validate or reject them, working the biggest problems first. You can define “biggest” in various ways, but typically it’s a combination of the number of reports and the velocity of those reports (how quickly they come in; six reports over a month is a lot less urgent than six reports in an hour). If a report is validated, then whatever we decide is appropriate happens (the ‘thing’ is removed from view, the developer gets a yap letter, the developer loses privileges, etc…). If they reject the report, then nothing happens.
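Here’s a rough sketch of that triage scoring, building on the AbuseReport record above; the 24-hour window and the weighting are made-up numbers you’d tune in practice.

    # Triage: rank a thing by total reports plus how fast they're arriving.
    from datetime import datetime, timedelta, timezone

    def triage_score(reports, now=None, window=timedelta(hours=24)):
        now = now or datetime.now(timezone.utc)
        total = len(reports)
        recent = sum(1 for r in reports if now - r.reported_at <= window)
        # Six reports in an hour should outrank six reports over a month,
        # so recent velocity counts for more than raw volume.
        return total + 5 * recent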
Sort of. What you actually want to build here is a reputation system, so even a rejected report leaves a trace. Every time the team validates a report, everyone who made that report gets their reputation value incremented. Every time a report is rejected, everyone who made it gets their reputation value decremented. Over time, you’ll build a data set that tells you how reliably a person’s reports are in sync with the standards of those judging them.
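The bookkeeping is almost trivial. Continuing the sketch above (the +1/-1 step sizes are placeholders, not a recommendation):

    # reporter_id -> reputation score; everyone starts at zero.
    reputation = {}

    def resolve_reports(reports_for_thing, validated):
        """Called when the team validates or rejects the reports on one thing."""
        delta = 1 if validated else -1
        for r in reports_for_thing:
            reputation[r.reporter_id] = reputation.get(r.reporter_id, 0) + delta
            r.status = "validated" if validated else "rejected"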
You can use that data to build automation into the evaluation process. As someone’s reputation value goes up, it becomes a trust metric telling you their reports are valuable and accurate, so those reports get bumped up the evaluation team’s queue. As someone’s reputation value goes down, you de-prioritize their reports, and at some point you simply throw them out: once someone has proven themselves reliably inaccurate, you filter them out of the system entirely (this will, as a side effect, do a good job of neutralizing the trolls who use the abuse-reporting system as an attack vector; that’s something Facebook is amazingly bad at dealing with…).
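As a sketch, the gating might look like this, continuing the same code; the cutoff value is an arbitrary assumption:

    # Drop reports from proven-unreliable reporters, and order the rest so
    # reports from the most trusted reporters get evaluated first.
    IGNORE_BELOW = -10

    def queue_for_review(pending_reports):
        kept = [r for r in pending_reports
                if reputation.get(r.reporter_id, 0) > IGNORE_BELOW]
        return sorted(kept,
                      key=lambda r: reputation.get(r.reporter_id, 0),
                      reverse=True)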
You can take this to the next level. Once someone reaches a certain level of reliability, you can simply trust their reports. There will be a smallish set of reporters whose reports you can assume are correct and act on without intervention by the review team. These reporters become, in effect, an extension of the team.
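Sketched out, that shortcut is just a threshold check; the threshold and the two callbacks are stand-ins for whatever the real system would do:

    AUTO_TRUST_ABOVE = 50   # arbitrary cutoff for the "extended team"

    def handle_report(report, take_action, enqueue_for_humans):
        if reputation.get(report.reporter_id, 0) >= AUTO_TRUST_ABOVE:
            take_action(report)         # trusted reporter: act immediately
        else:
            enqueue_for_humans(report)  # everyone else waits for the team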
Expand this reputation management one level further: anyone who reports a violation that one of these “extended team” reporters also reports gets their reputation bumped as well. You can go the other way, too: anyone who flags the same material that the known “troll team” anti-reputation group flags gets their reputation dropped.
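A rough sketch of that propagation, continuing the same code; the half-point nudges are deliberately smaller than the direct validate/reject adjustments, which is my assumption, not a rule:

    def propagate(reports_for_thing):
        """Nudge unknown reporters toward whichever group they co-report with."""
        reporters = {r.reporter_id for r in reports_for_thing}
        trusted = {x for x in reporters if reputation.get(x, 0) >= AUTO_TRUST_ABOVE}
        trolls = {x for x in reporters if reputation.get(x, 0) <= IGNORE_BELOW}
        for x in reporters - trusted - trolls:
            if trusted:
                reputation[x] = reputation.get(x, 0) + 0.5
            if trolls:
                reputation[x] = reputation.get(x, 0) - 0.5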
What this does, over time, is create a reputation metric for every user reporting violations in the system. The highest-rated users can be trusted implicitly, and their reports are acted upon automatically. Reports from the next group down are prioritized for the evaluation team by the likelihood that they’re valid, based on a combination of the number of reports, the velocity at which those reports come in, and the consensus reputation of those doing the reporting.
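Pulled together, the queue priority is just those three signals combined; the 2x weight on consensus reputation is illustrative only:

    def queue_priority(reports_for_thing, now=None):
        volume_and_velocity = triage_score(reports_for_thing, now=now)
        avg_rep = (sum(reputation.get(r.reporter_id, 0) for r in reports_for_thing)
                   / len(reports_for_thing)) if reports_for_thing else 0.0
        return volume_and_velocity + 2 * avg_rep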
The trolls will tend to report way out of sync with the mainstream of the community. As they get identified, their actions will let you identify the clique they’re working in; over time they’ll trash the reputations within that clique, and all of that data will get minimized or ignored, effectively neutering them.
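One naive way to surface those cliques is to count how often pairs of low-reputation reporters show up on the same things; a real system would do proper graph clustering, but the idea looks something like this:

    from collections import Counter
    from itertools import combinations

    def suspect_cliques(reports_by_thing, min_overlap=3):
        """Pairs of known-bad reporters that keep reporting the same things."""
        pair_counts = Counter()
        for reports in reports_by_thing.values():
            bad = sorted({r.reporter_id for r in reports
                          if reputation.get(r.reporter_id, 0) <= IGNORE_BELOW})
            pair_counts.update(combinations(bad, 2))
        return [pair for pair, n in pair_counts.items() if n >= min_overlap]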
There’s no need for the in-person evaluation team to scale massively; it needs to be big enough to manage the important problems. More importantly, its members need to be able to understand and consistently implement the policies, because what they’re doing isn’t so much policing as identifying the extended team that will do the policing; this system depends on reputations being built over time through consistent application of those policies.
Over time, with a good database and some number crunching, you can create a policing system that is a combination of community self-policing (because if the community isn’t reporting it, it’s not a problem) and administrative oversight (because reports are initially judged by the owners of the system, and reputation is built around how well reporters stay in sync with the administration’s policies). What you end up with is a system where the most trusted users are automatically identified and then used to police the system according to the policy decisions made by the core administration team.
And as a nice side effect, the worst trolls and abusers are neutered, and if you really want to, you can have the system identify them, take them out behind the shed, and Old Yeller them out of the community…
This is a variant of a well-solved problem: email spam (a flavor of this kind of system has been used by Amazon for years to float the best reviews to the top and push the worst out of view; I’m always amazed that companies don’t borrow from them more often). The problem isn’t that it can’t be done; it’s that the companies involved have decided they can get away with minimal effort and avoid responsibility for policing their communities (Facebook, staring at you big time again) instead of digging in and solving the problem. The recent Twitter kerfuffle over their well-intended (but poorly thought out) block policy is another example of how the people running these systems don’t see managing these problems as a priority.
When I was at Palm, the number one issue I heard about from developers was abusive, irrelevant, one-star reviews. This is a big issue because that star rating is, if not the number one deciding factor in buy/not-buy, in the top two or three. A shift in your average rating from 4.8 to 4.5 could kill half your sales.
Unfortunately, nobody at Palm cared or wanted to deal with it. When they shipped the WebOS App store, in fact, there was no interface to view reviews, much less police them. Two years later, when I left, there was still no interface to deal with them (and no plans to build one) other than some hand-built crap I threw together on the fly to give myself some ability to deal with the worst of it. That hand-built crap involved mysql dumps of the production data, perl scripts to implement blacklists, a web site to let me bring up and delete stuff manually, and a script the DBA would run against production to apply the changes. Not exactly real time.
Unfortunately, some variation of that “we don’t care what’s screwing over the developers as long as they get their apps into the store” attitude seems to exist on most of these platforms (and I, for one, don’t miss trying to fight that fight much these days).
This stuff can be fixed; it’s just not a priority. I couldn’t even get the product managers at Palm to look at possible fixes, even though I volunteered to build the damned thing on the side.
These are all communities, and they’re all social systems. That’s something a lot of organizations don’t recognize. As a result, these systems are often designed and built by people who don’t understand (or use) social systems, and so the necessary management and feedback mechanisms aren’t there.
So rule one: don’t let people who don’t grok social systems build social systems.
That seems like a simple one, but honestly, it’s amazing how often it gets ignored.
In any event, it’s not that Apple can’t fix these problems. It’s that they don’t. There are known ways to manage these problems that will scale without throwing huge staffs at them.
They’re just not priorities.