Suit Alleges Figma Trained AI on Customers' Design Files
A proposed class action filed November 22 in federal court accuses design collaboration platform Figma of using customers' proprietary design files, layer metadata and other content without consent to train its generative AI models. The complaint raises questions about data use, corporate assurances and the valuation gains tied to AI capabilities, matters that could reshape how software platforms handle user creations.

A proposed class action filed in the U.S. District Court for the Northern District of California alleges that Figma used customer design files, layer metadata and other uploaded content to train its generative artificial intelligence models without proper authorization. Plaintiffs contend Figma had assured users that their content would not be used for such purposes, and they say the alleged practice materially contributed to the company's valuation after its 2025 initial public offering.
The complaint, filed November 22, seeks monetary damages and a court injunction to stop Figma from using customer data for model training. It seeks certification as a class action on behalf of users and organizations that stored proprietary designs on the platform. The filing frames the issue as both a privacy and commercial harm, arguing that the unauthorized use of original creative work undercut user control and provided Figma with a competitive advantage in the rapidly evolving AI market.
Figma responded with a public statement denying that it uses customer data to train models without explicit authorization and saying it takes steps to de-identify data. The company emphasized that customer trust is central to its business model and signaled it would contest the allegations in court. The suit is the latest in a wave of legal challenges across the technology sector testing when and how user content may be used to fuel machine learning systems.
The case spotlights the tension between the rapid commercial deployment of generative AI features and long-standing expectations about proprietary content and platform assurances. Design files often contain layered information, embedded assets and metadata that can reveal software architecture, branding elements and confidential product plans. Plaintiffs argue that training models on such material risks exposing intellectual property and diluting the market value of original works while generating intangible benefits for the platform itself.
Legal scholars and industry observers have flagged similar disputes in recent months, as companies that added AI capabilities sought to draw on large data sets to improve performance. Regulators and courts are now being asked to define the boundary between permissible data processing and use that requires explicit opt-in or compensation. For enterprise clients, the outcome could affect contracting practices, with greater demand for explicit clauses governing data use for model development and for technical safeguards such as opt-outs and stronger de-identification.
Beyond contract language, the litigation raises broader ethical and policy questions about consent, transparency and the distribution of value created by AI systems. If courts find that platforms profited from users' creative output without adequate notice or authorization, corporate approaches to data governance and product design may shift, potentially slowing feature rollouts or prompting new certification and audit mechanisms.
The filing sets the stage for litigation to come, with plaintiffs asking the court to halt the challenged practices while the matter is adjudicated. Whatever the outcome, the case adds to an unfolding legal reckoning over how established software services incorporate generative AI and how users can retain control over their digital creations.
