Online reviews are broken; here’s how to fix them

It’s a crime story fit for the digital age. It was recently reported that a number of restaurants in New York City have been targeted by internet scammers threatening to leave one-star negative reviews unless they receive gift certificates. The same threats have been made against restaurants in Chicago and San Francisco, and one vegan restaurant reportedly received as many as eight one-star reviews in the space of a week before being approached with the extortion demand.

It’s surprising that this sort of thing hasn’t emerged before. An overreliance on the “wisdom of the crowd”, whereby many people judge things based on the approval of the rest of the community, makes us vulnerable to this type of fraud. It’s all about numbers. Products and businesses are measured online by the number of stars they get on a five-star scale, influencers are measured by the number of followers, posts are measured by the number of likes or retweets.

The satirical Kardashian Index provides a quantitative measure for academics by comparing the citations of their research papers with their number of Twitter followers. But why are these systems considered useful, and why do we consult them almost blindly?

In an age of information overload, feedback and reputation systems allow for quick decision-making, giving us the feeling (or the illusion) that we are in control because the decision made is perceived to be informed. Another idea at play here is the “attention economy paradigm.” According to this way of thinking, human attention is a scarce commodity and – as with all resources that are limited on this planet – it is of great value. Businesses compete for as high a spot as possible on the first page of Google’s search results in order to capture that attention. And user feedback is one of the many parameters that influence secret search engine ranking algorithms.

The notable success and acceptance of such reputation systems is predicated on the idea that the wisdom of the crowd comes into play. If a large enough sample of the population is asked to estimate something, the average of those estimates should be very close to the true value. Indeed, any personal bias becomes insignificant when a sufficiently large number of opinions is collected. But all systems that come with successful business models are open to abuse and can attract opportunistic and malicious actors, so much so that organized criminal groups can systematically form and exploit such systems. For example, the business opportunities that emerged during the COVID-19 pandemic were instantly matched by an assortment of criminal activities, including shopping scams, misinformation, illegal streaming, and even sexual exploitation of children.
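The statistics behind this are simple to sketch. The toy simulation below (an illustration, not any platform’s actual rating model) shows both halves of the argument: independent, individually biased estimates average out close to the true value, but a small coordinated block of fake one-star ratings, which violates the independence assumption, can drag the average far from it.

```python
import random

def crowd_estimate(true_value, n_raters, bias_sd=1.0, seed=0):
    """Average n_raters noisy, individually biased estimates of a true
    value (a toy wisdom-of-the-crowd model with independent errors)."""
    rng = random.Random(seed)
    estimates = [true_value + rng.gauss(0, bias_sd) for _ in range(n_raters)]
    return sum(estimates) / len(estimates)

# Independent errors cancel: with many raters the average converges.
print(round(crowd_estimate(true_value=4.2, n_raters=5000), 2))

# A coordinated attack breaks the independence assumption. Ten honest
# ratings averaging 4.4 stars, plus eight coordinated one-star fakes:
honest = [4.0, 5.0, 4.0, 5.0, 4.0, 4.0, 5.0, 4.0, 5.0, 4.0]
attacked = honest + [1.0] * 8
print(round(sum(attacked) / len(attacked), 1))  # the mean collapses to 2.9
```

The second print mirrors the vegan restaurant story above: eight fake one-star reviews are enough to roughly halve a small business’s rating.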

There are several reasons and motivations for fake reviews. Competitors may try to flood a rival business with negative reviews in order to harm it. Others may attempt, by creating fake profiles or “bribing” customers with free or discounted products, to generate positive reviews and misrepresent the quality of their products. But extortion via negative review threats is particularly insidious. A flurry of negative reviews on a business’s Google profile not only affects its search engine rankings, but also significantly influences the purchasing decisions of potential customers.

Although these practices have been industrialized by organized groups in India, variations have also been seen in other countries. Amazon recently sued the administrators of more than 10,000 Facebook groups, with memberships exceeding 43,000, who allegedly solicited fake (positive) reviews in exchange for free products.

What can be done?

The abuse of online feedback and reputation systems has reached epidemic proportions, and countering it will require coordination from everyone involved. Google and other review and reputation service providers need to invest more resources in preventing, detecting, and removing fake reviews. Machine learning has made impressive progress in recent years and could help weed out fake content. Stricter rules governing who is allowed to leave a review, permitting participation only under specific conditions, are also needed. We have already seen this with verified-buyer systems, which aim to provide assurance that the reviewer has had a genuine experience with the company.

The presentation of comments, and in particular the star rating system, could also carry more contextual information, for example via additional color coding that communicates the sentiment extracted from the textual comments. Highly emotional comments with little factual or helpful information might then be shown in a different color from those attempting to be unbiased and objective.

Businesses, for their part, should use the platforms’ review-reporting mechanisms responsibly. They should not report negative feedback that is genuine, as doing so damages their relationship with the feedback platform, which will naturally become more suspicious of the business.
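To make the color-coding idea concrete, here is a minimal sketch of how a platform might map a review’s wording to a display color. The word lists and color choices are entirely hypothetical; a real system would use a trained sentiment model rather than this toy lexicon.

```python
# Hypothetical mini-lexicons; a production system would use a trained
# sentiment model, not hand-picked word lists.
EMOTIONAL = {"terrible", "awful", "amazing", "disgusting",
             "worst", "love", "hate", "scam"}
FACTUAL = {"minutes", "price", "menu", "ordered",
           "staff", "portion", "waited"}

def sentiment_color(review_text):
    """Map a review to a display color based on whether its wording is
    mostly emotional or mostly factual (toy heuristic)."""
    words = {w.strip(".,!?").lower() for w in review_text.split()}
    emotional = len(words & EMOTIONAL)
    factual = len(words & FACTUAL)
    if emotional > factual:
        return "red"    # highly emotional, little factual content
    if factual > emotional:
        return "green"  # grounded in concrete, checkable detail
    return "gray"       # mixed or neutral

print(sentiment_color("Worst place ever, absolutely terrible, total scam!"))
print(sentiment_color("Ordered at 7pm, waited 20 minutes, fair price for the portion."))
```

Displayed next to the star rating, such a cue would let readers discount a one-star rant differently from a one-star account of a concrete bad experience.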

Consumers should be more vigilant and better informed, rather than religiously following these rankings. There are many telltale signs of a fake review, including simply checking whether the language is generic. It is also instructive to check whether the reviewer has posted many negative reviews on seemingly unrelated products within a short time. We, the crowd, should be active participants: being fair about our shopping experiences, acknowledging and supporting businesses when they exceed our expectations, but also providing candid negative reviews and recommendations for improvement. Only then will the wisdom of the crowd really serve us. (Katos is Professor of Cybersecurity and Head of BU-CERT, Bournemouth University)
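The telltale signs mentioned above can be expressed as simple rules of thumb. The sketch below encodes two of them: a review that is short and built from stock phrases, and a reviewer who posts a burst of negative reviews across unrelated product categories. The phrase list and thresholds are illustrative assumptions, not tested detection criteria.

```python
from datetime import datetime, timedelta

# Hypothetical stock phrases often seen in templated reviews.
GENERIC_PHRASES = ("great product", "highly recommend",
                   "five stars", "best ever")

def is_generic(text):
    """Flag very short reviews built around stock phrases."""
    t = text.lower()
    return any(p in t for p in GENERIC_PHRASES) and len(t.split()) < 8

def suspicious_reviewer(reviews, window_days=7, threshold=5):
    """Flag reviewers with many negative reviews (<= 2 stars) on
    unrelated product categories within a short window (toy heuristic).
    Each review is a dict with 'stars', 'date', and 'category' keys."""
    negatives = sorted(r["date"] for r in reviews if r["stars"] <= 2)
    categories = {r["category"] for r in reviews if r["stars"] <= 2}
    for start in negatives:
        in_window = [d for d in negatives
                     if timedelta(0) <= d - start <= timedelta(days=window_days)]
        if len(in_window) >= threshold and len(categories) >= 3:
            return True
    return False

print(is_generic("Great product, highly recommend!"))
burst = [{"stars": 1, "date": datetime(2024, 1, d), "category": c}
         for d, c in [(1, "food"), (2, "tools"), (3, "toys"),
                      (4, "books"), (5, "shoes")]]
print(suspicious_reviewer(burst))
```

Neither rule proves fraud on its own; like the visual cues discussed earlier, they are signals a careful reader (or platform) can weigh before trusting a rating.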
