In the past two months, regulators in the U.S. and U.K. have taken aim at the proliferation of fake reviews online, with the Federal Trade Commission (FTC) warning Amazon in May to crack down and the United Kingdom’s Competition and Markets Authority (CMA) opening a formal probe into Amazon and Google last month.
But it’s more than just regulators who are focused on combatting fake reviews. Last month, Amazon said it had eliminated nearly 99 percent of the fake reviews on its site, asking for assistance from social media companies to crack down on the remaining share.
Consumers, 92 percent of whom have recently placed an online order for a product or service, are also paying attention. According to PYMNTS research conducted in collaboration with Sift, more than one-third of consumers listed trust as the most important factor when determining where to shop, whether at a familiar or a new merchant.
Nearly one in five people said product ratings were the most important factor when choosing a new large or small merchant, and 16 percent of respondents said ratings were most important when choosing a familiar retailer.
Jim Radzicki, chief technology officer at digital customer experience firm TELUS International, told PYMNTS that this is why content moderation is so important: It can have “significant consequences on brand reputation, customer loyalty and the overall customer experience, and potentially revenue loss, if left unaddressed.”
And the influx of fake reviews and other toxic content has only gotten worse since the pandemic began. In a survey released by TELUS last week, over half of Americans said they have seen a rise in user-generated content, with 36 percent seeing inaccurate or toxic posts multiple times a day. More than 40 percent of respondents said they disengage from a brand’s community after as little as one exposure to toxic or fake user-generated content, and 45 percent said they lost all trust in the brand.
Amazon, for its part, said it devotes “significant resources” to prevent fake or incentivized reviews from appearing in its marketplace, using machine learning and “expert human investigators to proactively prevent fake reviews.”
In 2020, the company said it stopped more than 200 million suspected fake reviews before they were ever seen by a customer.
Radzicki said for most brands, a tech-assisted moderation approach is key. Artificial intelligence has advanced significantly in recent years, he said, but “it will always need the human guidance and training to be effective.”
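The human-in-the-loop pattern Radzicki describes can be sketched as an automated triage step that escalates ambiguous content to human moderators. This is an illustrative sketch only: the keyword-based scorer, threshold values, and function names below are invented placeholders, not any vendor's actual moderation model, which in practice would be a trained machine-learning classifier.

```python
# Hypothetical signals of a fake or incentivized review; a real system
# would use a trained model, not a keyword list.
SPAM_SIGNALS = ["free gift", "5 stars guaranteed", "dm me"]

def score_review(text: str) -> float:
    """Toy risk score in [0, 1] based on keyword hits (stand-in for an ML model)."""
    text = text.lower()
    hits = sum(signal in text for signal in SPAM_SIGNALS)
    return min(1.0, hits / 2)

def triage(text: str, block_at: float = 0.9, review_at: float = 0.4) -> str:
    """Auto-approve low-risk content, auto-block high-risk content,
    and route the ambiguous middle band to a human moderator."""
    risk = score_review(text)
    if risk >= block_at:
        return "blocked"
    if risk >= review_at:
        return "human_review"
    return "approved"
```

For example, under these made-up thresholds, a benign review like "Great value, arrived on time" is approved automatically, while "dm me for details" falls in the middle band and is escalated to a human, reflecting Radzicki's point that AI handles volume but still "will always need the human guidance."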
Close to 70 percent of TELUS survey respondents said brands need to protect users from toxic content and 78 percent said it’s a brand’s responsibility to provide positive online experiences.
Additionally, Radzicki said companies should be proactive and connect with their customers by responding to online reviews or comments, which can boost a consumer’s perception of a brand.
“Looking for opportunities to reinforce a desired outcome — such as a positive comment or review — has a dramatic effect on the likelihood of making a purchase or repeat purchase from a company,” he said, adding that over half of consumers say they would continue shopping with a brand following a positive interaction and 45 percent were more likely to post additional positive comments and reviews.
The pressure is even higher for brands with the government looking over their shoulder. At the end of June, the CMA opened a formal probe into Amazon and Google to determine whether the companies may have broken consumer law by taking insufficient action to protect shoppers from fake reviews.
The regulator is also concerned that Amazon’s systems have been failing to prevent and deter sellers from manipulating product listings by co-opting positive reviews from other products.
The CMA opened an initial investigation in May 2020 that assessed several platforms’ internal systems and processes for identifying and dealing with fake reviews. The regulator previously took action against Facebook and eBay for not responding well enough to fake and misleading reviews. In response, Facebook removed 188 groups and disabled 24 user accounts; eBay banned 140 users.
Andrea Coscelli, the CMA’s chief executive, said it’s important that tech platforms take responsibility, and the regulator is “ready to take action if we find they are not doing enough.”
“It’s simply not fair if some businesses can fake 5-star reviews to give their products or services the most prominence, while law-abiding businesses lose out,” Coscelli said.