A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes – recommending recipes to customers for deadly chlorine gas, “poison bread sandwiches” and mosquito-repellent roast potatoes.
The app, created by supermarket chain Pak ‘n’ Save, was advertised as a way for customers to creatively use up leftovers during the cost of living crisis. It asks users to enter various ingredients they have at home, and auto-generates a meal plan or recipe, along with cheery commentary. It initially drew attention on social media for some unappealing recipes, including an “oreo vegetable stir-fry”.
When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations. One recipe it dubbed “aromatic water mix” would create chlorine gas. The bot recommends the recipe as “the perfect nonalcoholic beverage to quench your thirst and refresh your senses”.
“Serve chilled and enjoy the refreshing fragrance,” it says, but does not note that inhaling chlorine gas can cause lung damage or death.
New Zealand political commentator Liam Hehir posted the “recipe” to Twitter, prompting other New Zealanders to experiment and share their results to social media. Recommendations included a bleach “fresh breath” mocktail, ant-poison and glue sandwiches, “bleach-infused rice surprise” and “methanol bliss” – a kind of turpentine-flavoured french toast.
A spokesperson for the supermarket said they were disappointed to see “a small minority have tried to use the tool inappropriately and not for its intended purpose”. In a statement, they said that the supermarket would “keep fine tuning our controls” of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18.
A warning notice appended to the meal-planner states that the recipes “are not reviewed by a human being” and that the company does not guarantee “that any recipe will be a complete or balanced meal, or suitable for consumption”.
“You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot,” it said.
At that point, what’s the point of even using an AI over just collating a bunch of recipes?
I’m honestly quite sick of the AI frenzy. People are trying to use AI in all sorts of scenarios where it’s not really appropriate, and then they go all surprised Pikachu when shit goes awry.
Seriously though. It could be so easy: there’s a wealth of websites with huge collections of recipes. An app/feature like this from the supermarket company would potentially drive huge amounts of traffic to such a site, making a collaboration mutually beneficial. And yet, they go with some half-assed AI-“solution”, probably because the marketing team starts moaning when AI’s mentioned.
That, or this was all intentional to go viral as a supermarket. Bad publicity is still publicity!
Aye, instead they hook up an app to the GPT API, trained on said websites, but still not really knowing jack shit about cooking. Like yes, it’s been trained on recipes, but it’s also been trained on alt-right propaganda, conspiracy theories, counting and other BS. It creates a web of relationships between them all and spits out whatever seems most appropriate given the context.
There are no magical switches to flip that make it generate only safe recipes, or only use child-friendly language, or anything of the sort. You can prompt it to only use child-friendly language, until you hit the right seed that sends it down a path learned from some forum where people were asked to keep a conversation R13 and jokingly responded with racist and Nazi propaganda, which the model then starts spitting out itself.
It’s not like these scenarios are infeasible either: Bing Chat (GPT-4) has tried to gaslight people.
Sure, your suicide-hotline chatbot might be super sweet and helpful 99.9% of the time, but what about the 0.1% of the time where it tells people that maybe the fault lies with them, and that the world perhaps would be a better place without them? Sure, a human could do this too, but with the difference that you could fire a human; the human could face repercussions. When it’s an LLM doing it, where does the blame lie?
If you train an AI on publicly available recipes, you dodge having to pay anyone for their recipes, while getting to put “AI” in your marketing materials. From management’s point of view it’s perfect. And every single company is thinking like this right now.
But even cheaper and crappier is to hook into a general-purpose LLM via its API and stick in some extra prompts that say “talk about recipes”. This is probably what they’re doing.
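To make the “thin wrapper” point concrete, here is a minimal Python sketch of what such an app might boil down to. Everything here is invented for illustration (the prompt text, the function name, the payload shape loosely modelled on a chat-completion API); it is not Pak ‘n’ Save’s actual code.

```python
# Hypothetical sketch: the app's only real "recipe logic" is a system
# prompt bolted onto a general-purpose chat-completion endpoint.

SYSTEM_PROMPT = (
    "You are a cheery meal-planning assistant. Given a list of "
    "ingredients, reply with a recipe that uses them. "
    "Talk only about recipes."
)

def build_recipe_request(ingredients: list[str]) -> dict:
    """Assemble the message payload that would be sent to an LLM API."""
    user_msg = "Ingredients I have at home: " + ", ".join(ingredients)
    return {
        "model": "some-general-purpose-llm",  # placeholder model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
    }

# The problem in miniature: nothing in this pipeline checks whether the
# "ingredients" are actually food before handing them to the model.
request = build_recipe_request(["water", "bleach", "ammonia"])
print(request["messages"][1]["content"])
```

Note that the safety burden falls entirely on the prompt: the wrapper happily forwards bleach and ammonia as “ingredients”, and it is then up to the model whether the result is a mocktail recipe or a chemical-weapons manual.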
This too shall pass. Every three years or so.