What happens when a sommelier recommends a wine you don't think you'll like?
The answer reveals a lot about why wine apps simply don't make sense. But that's nothing compared to why they're (usually) an investment black hole
You're in a shop, or a restaurant, and the sales assistant or sommelier suggests a wine. And you say "that's not really for me, thanks".
What happens next?
Keep that in mind as you read this brief online exchange about someone who recently used a wine matching app:
The first time it suggested a nice Lambrusco for me and the second time it gave up... (I answered truthfully btw, not to intentionally confuse it).
The person behind the app replied...
[our] software works like a human expert would. So if none of the wines in a given store would satisfy the taste preferences identified in your submission, our software will tell you that, just as a human expert would.
Now, I'm genuinely not being catty here. I just want to ask whether the software's response bears any resemblance to the first scenario I described. Was that really "just as a human expert would"?
My reckoning is that a "human expert" would think around the problem. They'd ask whether this was a gift. Or a treat. Or whether you wanted to try something completely different yet surprisingly familiar. Or they'd simply deploy the charm and psychology learned from watching Glengarry Glen Ross too many times to get the sale. ABC. Always Be Closing.
What we actually got is something conveniently captured in meme form:
The claim that this is “just what a human expert would do” relies on what’s known as the ELIZA Effect. It’s a flawed way of understanding the relationship between computers and humans that dates back to 1966 and Joseph Weizenbaum’s ELIZA, one of the very first chatbot programs. You’d think we’d have moved on in almost sixty years… but no. It’s also a sort of anthropomorphism, one that leads to both hype (which is why AI founders use it) and a fallacy (which is why we believe them):
As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust.
As the authors of that paper point out, neither of these is a good thing, and both lead to “negative ethical consequences of the phenomenon in this field.”
Admittedly, “negative ethical consequences” have rarely stopped people in business. Least of all technology founders. And dozens of people have built assorted wine-matching apps with few or no qualms in this department.
But what’s extraordinary is how astonishingly small the market for wine apps actually is. Even if they could act like “a human expert”, it’s mind-boggling that anyone would invest in the vast majority of these platforms. (There are a few limited exceptions, which we’ll come to.)
If you’re a paid subscriber, you’re also going to get some hard data and useful numbers on the UK wine sector. If you’re in another country, the lessons are exactly the same; I just focus on one market for ease. Either way, you’ll get an idea of where else to invest your money.