Central Development
On April 30, Elon Musk testified that xAI trained its Grok model using distillations of OpenAI’s models, putting model-to-model reuse at the center of his dispute with OpenAI, according to TechCrunch. The appearance came in a legal fight over whether and how competitors’ systems can be reused in training, as reported by Axios. Musk’s statements have sharpened legal and competitive questions around AI model distillation, TechCrunch noted.
Why It Matters
The testimony brings a contested industry practice—training via outputs or “distillations” of rival models—into a courtroom dispute that could influence future norms on intellectual property and trade-secret boundaries. Musk characterized the use of competitors’ models as standard in the field, according to Wired. He also framed Grok’s development as focused on AI safety, Axios reported. But questions about documentation and governance followed: Musk appeared inconsistent about his knowledge of xAI’s “safety cards,” per Ars Technica.
Perspective
Coverage diverged on emphasis. Wired underscored that labs commonly build on one another’s work, while Ars Technica focused on inconsistencies in Musk’s testimony and a courtroom setback in which he failed to keep xAI’s safety record and Trump-related discussions off the record. Those differences matter: they frame the dispute either as a test of routine practice or as a credibility and compliance challenge for xAI.
What to Watch
- Court rulings on how far discovery can probe xAI’s training pipeline and “safety cards.”
- Whether the court signals clearer boundaries on model distillation and use of competitor outputs.
- Any follow-on disclosures from xAI or OpenAI that detail provenance, safety processes, or licensing.