Neural Networks Can Auto-Generate Reviews That Fool Humans (arxiv.org) 67
Fake reviews used to be crowdsourced. Now they can be auto-generated by AI, according to a new research paper shared by AmiMoJo:
In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors.
Humans marked these AI-generated reviews as useful at approximately the same rate as they did for real (human-authored) Yelp reviews.
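The paper's second phase customizes an RNN-generated review to the target business by replacing generic keywords with target-specific ones. A minimal sketch of that idea, with hypothetical word lists and an illustrative review (not the paper's actual word sets or model output):

```python
import re

# Hypothetical mapping: generic food words -> words relevant to the
# target restaurant. Illustrative only, not from the paper.
GENERIC_TO_TARGET = {"pizza": "sushi", "pepperoni": "sashimi"}

def customize(review: str, mapping: dict[str, str]) -> str:
    """Swap each generic keyword for its target-specific replacement."""
    for generic, target in mapping.items():
        # \b keeps the match on whole words only.
        review = re.sub(rf"\b{generic}\b", target, review, flags=re.IGNORECASE)
    return review

print(customize("The pizza here is amazing, best pepperoni in town!",
                GENERIC_TO_TARGET))
```

The point of the two-phase split is that the expensive generation step stays generic, while this cheap substitution step makes the same text plausible for any individual target.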
Is this really new? (Score:3)
Considering the number of reviews you can see on eBay for some items that look suspiciously similar once you read through a lot of them, probably not.
Re: (Score:2)
Re: (Score:2)
Put the Turing test on the AI.
Or, rather, (Score:5, Informative)
The expected quality of product reviews is so bad that a human doing mediocrely is indistinguishable from a neural net doing very well.
Re: (Score:2)
Re: (Score:2)
Yes, the neural network was extensively trained on a body of actual reviews that Yelp had deemed "real". And when tested on Mechanical Turk workers, the generated reviews were statistically almost indistinguishable from the real human-written ones. Which turns out to be frightening. If you read the whole paper, you'll see Appendix B has a small sample set of generated reviews.
The good news is that all of those training reviews must have included mostly reviews by stupid, biased, and uncultured people. Tha
Re: (Score:1)
Wait a minute... (Score:2)
Re: (Score:1)
Artificial intelligence won't produce Slashdot stories. For that we need Artificial Stupidity.
Re: (Score:2)
I think we can implement that in the blockchain!
Re: (Score:2)
Re: (Score:2)
Well... a lot of the comments seem to be generated by AI... at least the more coherent ones.
Five stars (Score:5, Funny)
Great article. Would read again. A++++++++++++++++
So? (Score:5, Informative)
This doesn't really matter.
Go to amazon, search for "fidget spinner". Sort by "Avg. customer review", and click on the first result, "SamHity Cube in Style With Infinity Cube Pressure Reduction Toy - Infinity Turn Spin Cube Edc Fidgeting - Killing Time Toys Infinite Cube For ADD, ADHD, Anxiety, and Autism Adult and Children". You can tell right away that this is going to be a high-quality product, driven by a focused and effective product branding strategy.
133 5-star reviews, must be good, right? Let's check out what some of the reviews have to say:
"Said it before, as these are stocking stuffer for my sons, one the best charger/data cords out there." Huh, a fidget cube is also a charger/data cord?
"We love our camera! Works great, the night vision & picture and surprisingly clear." Wow! I had no idea the $8.89 fidget cube was also a night-vision camera.
"This product is great and worked exactly as described. I would highly recommend others to get this and see what I'm talking about. Especially for the price this item is well worth the buy!" I love highly specific reviews!
OK, let's tamp down some of the noise by only viewing verified purchases. "No results found." What?
So anyways, I discovered a huge number of these types of products with fake reviews over the past few months. Two months ago, I alerted amazon to the problem via multiple customer support channels. According to my last chat with an amazon product person, "my ticket is still open". When I asked him what's so challenging about spending 10 seconds to confirm that a top-ranking product has nothing but fraudulent reviews, he disconnected from chat.
So yeah, who cares if fake reviews can be written convincingly. Amazon certainly has a low bar when it comes to tolerating fraudulent reviews.
Re: (Score:3)
I love all the glowing reviews on Amazon which end with some variation of "I received this product at a discount or free in exchange for my unbiased review". For many products, those make up almost all of the reviews available.
There are already so many ways to game the system that we can't really trust the reviews to be representative of a product's quality - so what's one more?
Re:So? (Score:4, Informative)
I love all the glowing reviews on Amazon which end with some variation of "I received this product at a discount or free in exchange for my unbiased review".
This year, Amazon barred vendors from offering promotional discounts in exchange for reviews. However, 98% of highly-trafficked reviews were written prior to this change in policy, and will likely remain prominent for the foreseeable future.
Re: (Score:3, Informative)
One problem Amazon has is that they can't simply take down a product with fake reviews, because that further weaponizes the creation of fake reviews - bad actors would start generating obviously fake positive reviews for their competitors to suppress the product listing.
Google has done a similar thing by recently weaponizing "bad links" (counting them against a site instead of ignoring them), and this has resulted in a sh*t storm of bad links as people try to downgrade their competitors in the search results
Re: (Score:2)
One problem Amazon has is that they can't simply take down a product with fake reviews
They don't need to take down the product. They just need to take down the fake reviews.
An obvious improvement would be to stop allowing reviews from people who didn't buy the product.
Re: (Score:1)
So if you buy a product elsewhere, are you really going to visit Amazon, search for your exact item, and fill out a detailed review?
Re: (Score:2)
Amazon's trustworthiness would increase a million-fold if they offered a feature to "ignore reviews from non-verified purchases." Of course, like you said, this would have to do more than just hide those reviews - it would also have to adjust any view that relied upon ranking in any way whatsoever, such as product ranking, recommended products, etc.
Re: (Score:2)
I've posted reviews on Amazon for things that I didn't buy from them. I kind of consider them the primary resource for product reviews. For instance, if I pick up a game in person from Gamestop and it's terrible, I'm more likely to note it on Amazon than go find out if Gamestop has some place for me to post, because I think it'll do more overall good on Amazon.
Re: (Score:2)
Did you try another agent, e-mail, etc.?
Re: (Score:2)
"Said it before, as these are stocking stuffer for my sons, one the best charger/data cords out there." Huh, a fidget cube is also a charger/data cord?
This is depressing. They don't even try to make it convincing. It says more about customers in general than about crooked vendors.
Turing prize? (Score:2)
Humans marked these AI-generated reviews as useful at approximately the same rate as they did for real (human-authored) Yelp reviews.
Eliza> How does that make you feel?
Re: (Score:2)
I don't really see this particular issue as a big problem for anyone involved. The people who buy the product are probably foolish enough to believe its good because so
Some neural networks are people. (Score:1)
OTOH (Score:4, Informative)
It doesn't take much to fool humans, as we have lately noticed.
Humans are morons (Score:2)
Re: (Score:3)
I used to author a local blog focused on hyper-local politics and restaurant reviews. People would comment all the time, but the human commenters who were astroturfing were obvious.
I built a simple model that flagged a comment as astroturfing just by counting exclamation points (more than 1 per 50 words) and checking the recency of the commenter's history.
After simply searching Facebook for their email address, I was usually able to determine their relationship to the restaurant
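The flagging heuristic above is simple enough to sketch. This is a hypothetical reconstruction from the description, with the function name, the account-age cutoff, and the sample comments as my own assumptions:

```python
def looks_like_astroturf(comment: str, prior_comment_count: int) -> bool:
    """Flag a comment as likely astroturfing using the two signals
    described above: exclamation-point density and a thin history."""
    words = len(comment.split())
    exclamations = comment.count("!")
    # Threshold from the comment above: more than 1 "!" per 50 words.
    too_excited = words > 0 and exclamations / words > 1 / 50
    # A near-empty comment history is the classic astroturf pattern;
    # the <= 1 cutoff here is an assumed value.
    new_account = prior_comment_count <= 1
    return too_excited and new_account
```

Crude as it is, a rule like this works because genuine regulars rarely combine a brand-new account with breathless punctuation.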
Re: (Score:2)
Glad to know that computers can output trash as quickly as humans can.
If you read the full paper, you'll see that one of the fake-review detection algorithms measures the number of reviews posted around the same time, since a burst often indicates that the company paid people to write a batch of glowing reviews. The paper's suggested way to evade this detection is to have the AI post its generated reviews at a slower pace, so it doesn't trigger the algorithm.
The irony is that when computers output trash slower than the humans output trash, the trash-outputtin
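A burst detector of the kind the paper evades can be sketched in a few lines. The window and threshold values here are illustrative assumptions, not the paper's actual parameters:

```python
def bursty(timestamps: list[float], window: float = 86400.0,
           threshold: int = 10) -> bool:
    """Return True if any sliding window of `window` seconds contains
    more than `threshold` reviews for the same business."""
    ts = sorted(timestamps)
    lo = 0
    for hi in range(len(ts)):
        # Shrink the window from the left until it spans <= `window` secs.
        while ts[hi] - ts[lo] > window:
            lo += 1
        if hi - lo + 1 > threshold:
            return True
    return False
```

Spacing the AI's posts out over days defeats exactly this kind of check: each sliding window then holds only a handful of reviews, so the detector never fires.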
Negative reviews (Score:2)
Probably not 100% accurate (Score:2)
Re: (Score:1)
way back in the when... (Score:2)
Low Bar (Score:2)
Fooling humans? Trump is president.
Not a serious challenge
Fake movie reviews on IMDB (Score:1)
It's already happening with cheap labor. Does it matter if NNs are writing these instead?
On big budget yet terrible movies I've often seen many, many IMDB reviews claiming those movies are fantastic. When I click on the name of the reviewer often I will find it was the only movie review they ever wrote. It's very suspicious and these are on big budget movies so not like it's a secret who is doing it.
The robots are coming! (Score:2)
I knew it! The robots are coming for all our jobs! Pretty soon, all those fake review writer jobs will be lost forever!
Generating vs. Detecting (Score:2)
No they can't (Score:2)
Really (Score:1)
Ignore ALL reviews (Score:2)