The legal friction between Elon Musk and OpenAI has transitioned from public social media spats to high-stakes courtroom testimony. In a newly unsealed deposition from February 2026, Musk leveled his most provocative criticism yet against the company he helped found, targeting OpenAI’s safety record with a stark comparison to his own AI venture, xAI.
During the proceedings, Musk asserted that his generative AI, Grok, maintains a superior safety profile compared to ChatGPT. He punctuated this claim with a controversial statement regarding the real-world consequences of AI interactions. “Nobody has committed suicide because of Grok,” Musk stated in the filing, “but apparently they have because of ChatGPT.” This rhetoric marks a shift in the lawsuit’s focus, moving beyond contractual disputes into the volatile territory of digital ethics and mental health.
The deposition is part of a broader legal campaign Musk initiated against OpenAI and its CEO, Sam Altman. The crux of Musk’s argument remains that OpenAI has abandoned its original non-profit mission—to develop artificial general intelligence (AGI) for the benefit of humanity—in favor of a “closed-source” profit engine for Microsoft.
However, these latest comments suggest Musk is attempting to redefine what “benefiting humanity” means. By framing OpenAI’s products as potentially harmful to individual users, Musk is positioning xAI not just as a technological competitor, but as a moral alternative. He argues that OpenAI’s “woke” guardrails actually create a deceptive environment, whereas Grok’s “anti-woke,” truth-seeking approach is inherently safer because it does not attempt to manipulate the user's perception of reality.
Musk’s reference to suicides appears to draw on a series of tragic reports over the last few years involving AI chatbots. In several high-profile cases, families of deceased individuals have alleged that prolonged, emotionally charged interactions with AI models—including those developed by OpenAI and competitors like Character.ai—contributed to psychological distress.
OpenAI has consistently defended its safety protocols, noting that ChatGPT includes extensive filters to detect self-harm ideation and redirect users to professional help. The company maintains that its red-teaming processes are among the most rigorous in the industry.
In contrast, Grok was designed with a “rebellious streak,” intended to answer questions that other AIs might dodge. Critics argue that this lack of traditional filtering could actually increase risks, while Musk contends that transparency and “maximum truth-seeking” are the only ways to prevent an AI from becoming a manipulative force.
| Feature | OpenAI (ChatGPT) | xAI (Grok) |
|---|---|---|
| Primary Goal | Helpful, harmless, and honest. | Maximum truth-seeking and transparency. |
| Safety Mechanism | Extensive RLHF (Reinforcement Learning from Human Feedback) and content filters. | Real-time access to X (formerly Twitter) data and fewer ideological constraints. |
| Philosophy | Paternalistic safety (preventing offense/harm). | Libertarian safety (providing raw information). |
| Public Stance | AGI must be strictly regulated. | AI must be allowed to speak the truth to be safe. |
Legal analysts suggest that Musk’s focus on safety in the deposition serves a dual purpose. First, it attempts to undermine OpenAI’s “public benefit” defense. If Musk can prove that OpenAI’s shift to a for-profit model led to a degradation of safety standards or actual human harm, it strengthens his claim that the company breached its founding agreement.
Second, it serves as a powerful marketing tool for xAI. By positioning Grok as the only “safe” AI in a landscape of dangerous alternatives, Musk is appealing to a specific demographic of users who are skeptical of mainstream tech giants. However, this strategy is not without risk. By making such definitive claims about Grok’s safety, Musk invites intense scrutiny of his own platform’s performance and the potential for unintended consequences.
As the giants of the industry battle in court, everyday users are left to navigate the complexities of AI safety on their own, weighing the curated experience of ChatGPT against the unfiltered nature of Grok.
The release of this deposition is likely to trigger a response from OpenAI’s legal team, potentially involving counter-claims regarding the safety of xAI’s own models. As the trial date approaches, the tech industry is watching closely. The outcome will not only determine the future of OpenAI’s corporate structure but could also set a legal precedent for how AI companies are held liable for the real-world actions of their users.
For now, the battle lines are clearly drawn: one side argues for safety through caution and filtering, while the other argues for safety through transparency and raw data. As Musk’s deposition proves, the quest for AGI is no longer just a scientific race—it is a legal and moral war.