Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings
Action Required: Conduct thorough due diligence on AI vendors before integrating their tools, evaluating safety protocols, risk management strategies, and how they respond to reports of misuse or ethical concerns.
This lawsuit against OpenAI, which alleges that ChatGPT was misused to stalk the plaintiff and that her warnings went unanswered, underscores significant ethical and safety concerns for AI vendors. Financial advisors considering AI adoption should take note: the case highlights the need for robust vendor due diligence focused on risk management, user safety protocols, and the potential liability that can follow from misuse of AI tools.
Read full article at TechCrunch.
Related stories
- This Startup Wants You to Pay Up to Talk With AI Versions of Human Experts
Onix is launching a platform where AI versions of human experts, like health and wellness influencers, provide advice and potentially hawk p…
- Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
This article highlights the privacy risks and limitations of AI models like Meta's Muse Spark, which offered to analyze sensitive health dat…
- CyberAgent moves faster with ChatGPT Enterprise and Codex - OpenAI
CyberAgent is leveraging advanced AI tools, specifically ChatGPT Enterprise and Codex from OpenAI, to enhance its operational speed and effi…