Stock Markets May 1, 2026 06:03 AM

Musk Frames OpenAI Suit as Defense of Nonprofit Mission in Extended Oakland Testimony

Over three days of testimony, Elon Musk argued he built OpenAI as a charitable endeavor and clashed with defense counsel over funding, recruitment and AI safety

By Hana Yamamoto

Elon Musk spent more than seven hours on the witness stand across three days in an Oakland trial focused on the future and governance of OpenAI. Musk portrayed the enterprise as one intended to operate as a nonprofit for public benefit, said he supplied critical early funding, talent and introductions, and raised concerns about AI safety. Cross-examination highlighted sharp disputes over recruitment, Microsoft’s investment and the scope of expert testimony on extinction risk.

Key Points

  • Musk characterized OpenAI as intended to operate as a nonprofit charity and testified that he chose not to found a for-profit company.
  • He said he provided initial funding, recruited key talent including Ilya Sutskever, and leveraged relationships with technology executives for computing resources.
  • Musk described Microsoft’s investment in OpenAI as a "bait and switch" and said an offer to buy stock felt like a bribe; expert testimony on extinction risk was curtailed by the judge.

Elon Musk testified in Oakland, California, for over seven hours across three days in a case centered on OpenAI's direction and governance. At the core of Musk’s testimony was a portrayal of OpenAI as an entity he said was intended to benefit humanity rather than individuals, and a contention that its current leaders abandoned that original model.

Musk repeatedly described OpenAI as a charity, saying its founders had intended it to operate without individual profit. He acknowledged that the 2015 blog post announcing OpenAI’s formation did not include the word charity, but told jurors that the organization was "specifically meant to be for a charity that does not benefit any individual person. I could’ve started it as a for-profit and I specifically chose not to."

He also argued that OpenAI’s existence depended heavily on his early contributions. Musk testified he originated the idea, chose the name, recruited key personnel and provided the initial funding. He said he recruited Ilya Sutskever from Google, and that Google’s co-founders Larry Page and Sergey Brin tried to retain Sutskever. Musk told the court that after Sutskever joined, "Larry Page refused to speak to me ever again."

On the computing side, Musk said OpenAI was reliant on connections he could make with executives at large technology companies. He told jurors that his relationships with Microsoft CEO Satya Nadella and Nvidia CEO Jensen Huang were instrumental in OpenAI’s access to computing resources. "The only one who could actually call Satya Nadella and have him pick up was me," Musk testified. "The only reason he’s in this thing is because of me. Those are his words."

Musk recounted conversations about AI safety and attributed a pivotal exchange to Larry Page. Musk said he asked Page, "What if AI wipes out all humans?" and that Page responded it would be acceptable if artificial intelligence survived. Musk said Page called him a "speciesist" for prioritizing humanity. Musk testified that this exchange helped motivate the creation of what he described as an open-source nonprofit alternative to Google.

Jurors were also shown a text message thread from late 2022 in which Musk described Microsoft’s roughly $10 billion investment in OpenAI as a "bait and switch." Musk testified that when he confronted Sam Altman about the deal, Altman acknowledged it felt bad. Altman then offered Musk the chance to buy stock in OpenAI, an offer Musk characterized bluntly: "frankly, it felt like a bribe."

When questioned about his own commercial effort, Musk was asked why he would use OpenAI to train models for his xAI company if he considered OpenAI’s work dangerous. He answered that it is common practice to use other AI systems to validate one’s own models. Asked why xAI itself was not formed as a charity, he said that for-profit entities can still produce socially beneficial outcomes.

Cross-examination by William Savitt, counsel for the OpenAI defendants, was at times combative. Musk objected that Savitt often cut him off; the judge noted that counsel is permitted to interrupt a witness but admonished Savitt on occasion for not allowing Musk to complete his answers. Musk told the court that "few answers are going to be complete especially when you cut me off all the time."

Pre-trial disputes touched on the admissibility and scope of expert testimony regarding the potential for AI to pose existential risks. Musk’s lawyer Steven Molo urged the court to permit questioning of an expert witness on the extinction risk posed by advanced AI, stating that "Extinction risk is a real problem. This is a real risk. We all could die." The judge limited that expert testimony and remarked that it was ironic that Musk, despite citing such risks, is building a company in the same technological area.


Proceedings and courtroom dynamics

Musk’s testimony included recollections of personal conversations and direct quotations he attributed to others, and jurors were shown text messages and other exhibits to support parts of his account. Cross-examination highlighted friction over whether Musk’s statements were complete and whether he was afforded sufficient opportunity to finish his answers. The judge intervened at times to limit questioning and to set boundaries on expert testimony, particularly on the subject of extinction risk.

Throughout his testimony, Musk articulated concerns about AI safety and emphasized the role that human judgment and governance should play in the development of powerful AI systems. He framed the dispute in the lawsuit as one about mission and stewardship, saying that OpenAI’s current leadership had moved away from the nonprofit model he claimed to have intended.



Risks

  • Dispute over organizational mission and governance at major AI research entities, which raises legal and governance uncertainty for the AI sector.
  • Tension around major corporate investments and partnerships, exemplified by Musk’s description of Microsoft’s investment, which could complicate future deal structures in the technology and cloud computing space.
  • Unresolved questions about the admissibility and scope of expert testimony on AI extinction risk, leaving uncertainty about how courts will consider existential-risk arguments in technology litigation.
