An AI support bot for code-editing company Cursor invented a rule that didn't exist. The bot justified an unexpected behavior by citing a new policy, but that policy was never real; the chatbot made it up. Hacker News and Reddit were soon flooded with complaints and cancellation threats. Ars Technica reported the story.
A developer was using the AI-powered code editor Cursor when something strange happened: switching between machines instantly logged them out, breaking a common workflow for programmers who work across multiple devices. When the user contacted Cursor support, an agent named "Sam" told them the logouts were expected behavior under a new policy. But Sam was a bot, and there was no such policy.
This is just another example of AI confabulation (also called "hallucination"), a kind of "creative gap-filling" in which AI models invent plausible-sounding but false information. The potential damage comes from a grey area: instead of acknowledging uncertainty, the models produce plausible, confident answers, effectively creating information from scratch.
Without human oversight, the consequences are immediate and can be costly: frustrated customers, damaged trust, and, in this case, canceled Cursor subscriptions.
A Reddit user, BrokenToasterOven, reported that "logging into Cursor on one machine immediately invalidates the session on any other machine." They emailed Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature."
Other users posted messages describing the same problem. Soon after, several announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. "I literally just cancelled my sub," one user wrote before moderators locked the thread and removed the original post.
Three hours later, a Cursor representative replied on Reddit: "Hey! We have no such policy. You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."
Air Canada already did it
In February 2024, Air Canada was ordered to honor a refund policy invented by its own chatbot. Jake Moffatt had contacted Air Canada's support after his grandmother's death, and the AI chatbot instructed him to book a regular-priced flight and apply for bereavement rates retroactively.
Air Canada denied his refund request, arguing that "the chatbot is a separate legal entity that is responsible for its own actions." The tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.
Lesson learned
Cursor acknowledged the error and took steps to make amends. Company cofounder Michael Truell apologized publicly on Hacker News and said the affected user had been refunded. He explained that the issue stemmed from a backend change meant to improve session security, which unintentionally caused session-invalidation problems for some users. Truell also said that "We use AI-assisted responses as the first filter for email support" and that "Any AI responses used for email support are now clearly labeled as such."
These episodes show the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users is ironic, to say the least.