• hedgehog@ttrpg.network · 10 months ago

    Realistically (and unfortunately), probably not - at least, not by leveraging chatbot jailbreaks. From a legal perspective, if you have the expertise to execute a jailbreak - which would be made clear in the transcripts that would be shared with the court - you also have the understanding of its unreliability that this plaintiff lacked.

    The other issue is the way he was promised the discount - buy the tickets now, file a claim for the discount later. You could potentially demand an upfront discount be honored under false advertising laws, but even then it would need to be a “realistic” discount, as obvious clerical errors are generally (depending on jurisdiction) exempt. No buying a brand new truck for $1, unfortunately.

    If I’m wrong about either of the above, I won’t complain. If you have an agent promising trucks to customers for $1 and you don’t immediately fire that agent, you’re effectively endorsing their promise, right?

    On the other hand, we’ll likely get enough cases like this - where the AI misleads the customer into thinking they can get a post-purchase discount without any suspicious chat prompts from the customer - that many corporations will start to take a less aggressive approach with AI. And until they do, hopefully those cases all work out like this one.