Former OpenAI chief technology officer Mira Murati testified Wednesday that company CEO Sam Altman created a culture of chaos and distrust among top executives at the artificial intelligence startup, according to Reuters coverage of an ongoing trial that has thrust the company’s internal dynamics into the public spotlight. The testimony marks the latest revelation in proceedings that have exposed deep fractures within the organization that created ChatGPT and helped spark the generative artificial intelligence boom transforming industries worldwide.
Murati, who joined OpenAI in 2018 and served as CTO through her brief tenure as interim chief executive following Altman’s initial dismissal, described an environment in which Altman routinely made unilateral decisions, changed strategic direction without consultation, and cultivated divisions among board members that ultimately precipitated the November 2023 board crisis. Her detailed account contradicts Altman’s public statements about the circumstances surrounding that crisis and the subsequent board restructuring that ended with his reinstatement.
Internal Conflict at the Heart of the AI Revolution
The trial stems from a lawsuit filed by OpenAI co-founder Elon Musk, who alleges that the company’s contentious transition from nonprofit to quasi-commercial structure breached original founding agreements established to ensure AI development would prioritize safety over commercial interests. Musk, who provided early funding and helped recruit key technical talent during the organization’s formative years, claims that Altman and current board members improperly enriched themselves while straying from OpenAI’s stated mission to develop artificial general intelligence safely and for the benefit of humanity rather than shareholders.
Murati’s testimony provides a detailed insider perspective on decisions that have previously been documented only through incomplete media reports and social media posts. She described weekly leadership meetings characterized by unpredictable agenda changes and Altman’s pattern of presenting fait accompli decisions that left other executives scrambling to understand and respond to major strategic shifts. “Decisions were made in corridors and announced in meetings. There was never a sense that the leadership team was aligned on strategy or direction,” Murati testified. “The pace of decisions outstripped our ability to ensure they were made thoughtfully.”
The former CTO also detailed conflicts over computing resource allocation between competing research teams, safety concerns that were sometimes overruled by commercial priorities, and board governance structures that failed to provide meaningful oversight of management decisions. Her testimony paints a picture of an organization that grew faster than its institutional controls could accommodate.
Key Allegations From the Proceedings
The trial has revealed several previously undisclosed details about OpenAI’s internal operations and governance failures:
- Resource allocation disputes: Murati testified that conflicts over graphics processing unit cluster allocation between competing research teams became so intense that she personally intervened multiple times to prevent key researchers from departing the organization over perceived unfair treatment.
- Safety and commercial tensions: The former CTO described fundamental disagreements about deployment pace between safety-focused researchers who wanted slower rollout of capable systems and product-focused leaders pushing for rapid commercialization.
- Board governance weaknesses: Testimony indicated that board oversight mechanisms were inadequate to monitor Altman’s decisions, with some directors relying on informal updates rather than formal reporting structures designed to ensure accountability.
- Partnership complications: Testimony described internal conflicts over the Microsoft partnership, including disagreements about how intellectual property should be shared and whether commercial obligations compromised research independence.
“The pressure to ship products and demonstrate progress was relentless. At the same time, researchers were raising legitimate concerns about the systems we were building. These two imperatives were in constant tension, and the resolution was almost always in favor of speed over caution,” Murati said in her testimony.
Altman’s Defense and Public Position
Altman has consistently denied allegations of mismanagement, characterizing the November 2023 board crisis as a misunderstanding driven by miscommunication rather than substantive concerns about his leadership or conduct. In numerous public appearances following his reinstatement as CEO, Altman has emphasized OpenAI’s safety record and commitment to responsible AI development while acknowledging that the organization has necessarily evolved from its founding structure to compete effectively.
OpenAI’s legal team argues that the company’s transformation reflects the realities of competing in an expensive and fast-moving technology sector, where traditional nonprofit structures cannot attract the capital needed to keep pace with well-funded technology giants. The company’s landmark partnership with Microsoft, which has invested approximately $13 billion, much of it delivered in the form of cloud computing credits, in exchange for exclusive licensing rights, has provided resources for frontier AI research but also created obligations that some argue conflict with the founding mission’s emphasis on safety over commercialization.
Implications for AI Governance and Accountability
The trial has drawn attention from policymakers, technology ethicists, and governance experts who see the proceedings as an unprecedented window into accountability challenges facing the AI industry broadly. The concentration of decision-making power in the hands of a small number of technology leaders, combined with the technical complexity of AI systems that even experts struggle to fully interpret, creates accountability gaps that current regulatory frameworks were not designed to address.
Several members of Congress have cited the OpenAI proceedings when calling for greater transparency requirements for AI companies, particularly those developing systems approaching or exceeding human-level capabilities in key domains. Proposed legislation would establish board diversity requirements, mandatory safety audit frameworks, whistleblower protection measures, and governance standards specifically for AI organizations.
The technology industry’s response to the revelations has been mixed. Some prominent executives have privately expressed concerns that the testimony could damage the broader AI sector’s public reputation during a critical period of public acceptance and regulatory consideration. Others argue that transparency about internal governance challenges could ultimately strengthen public trust by demonstrating that AI organizations face genuine accountability questions and must earn stakeholder confidence.
Industry analysts note that the trial coincides with intensifying competition from well-resourced rivals including Google DeepMind, Anthropic, Meta AI, xAI, and Chinese laboratories that have closed much of the capability gap with OpenAI’s flagship systems. The internal turmoil at OpenAI comes at a time when external competitive pressures demand clear strategic vision and organizational coherence rather than factional conflict.
The proceedings are expected to continue for several weeks, with additional former OpenAI executives scheduled to provide testimony. The case is being heard in the Delaware Court of Chancery, where the company’s board restructuring is subject to review under the terms of OpenAI’s founding documents. A ruling in Musk’s favor could require restructuring of the company’s governance arrangements, repayment of certain commercial partnership benefits, or injunctive relief affecting the company’s partnership agreements.
