OpenAI wants to transform business. Many of its users just want life hacks

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
New data makes OpenAI look more like a consumer tech company
During its early years, OpenAI looked like it might build a business selling access to its increasingly powerful AI models to Fortune 500 companies. But when ChatGPT launched (almost by surprise) in late 2022, the startup suddenly had a breakout consumer product, one that raced to 100 million users in just a couple of months and was, at the time, the fastest-growing consumer app in history. Overnight, OpenAI became a consumer tech brand and, most importantly, the poster child for generative AI in the minds of everyday users.
Today, ChatGPT has more than 700 million weekly active users worldwide, according to OpenAI. And the way those people use the chatbot suggests the company may be drifting further toward the consumer market. This week, OpenAI released a study of 1.5 million user chat logs between May 2024 and June 2025, revealing that nearly three-quarters (73%) of chats were personal rather than work-related. Just a year earlier, in June 2024, personal and work prompts had been roughly equal. (That data excludes OpenAI’s API customers, who are largely developers and enterprises.)
The report comes at a time when, across industries, many enterprises are growing skeptical about how—and when—AI tools might deliver the efficiencies they were promised, the kind executives can tout on earnings calls. Despite the hype, by most objective accounts, the AI transformation hasn’t yet materialized. An August MIT report, for example, found that 95% of enterprise AI pilot projects have stalled. Meanwhile, talk of an AI bubble continues, with critics raising an eyebrow at bullish startup valuations and tech stock prices.
OpenAI still projects enormous revenue growth—up to $12.7 billion in 2025 and $29.4 billion in 2026—but the company is expected to keep losing billions annually. That’s fueling concerns about sustainability unless its enterprise business begins to generate significantly more revenue. Ultimately, that will depend on factors largely outside OpenAI’s control: macroeconomic conditions, credit markets, infrastructure investment, and the reskilling of the workforce for AI.
OpenAI maintains that it has three core businesses: ChatGPT subscriptions, enterprise access to its models, and long-term research on artificial general intelligence. None are likely to disappear. Still, tech companies are often forced to follow the money, and right now the money points to consumers. If ChatGPT’s massive user base keeps growing while enterprise adoption lags, OpenAI could feel pressure to devote more of its researchers and engineers to consumer features—say, payments—that might entice free users to pay for subscriptions.
Are the legal tides turning in AI’s favor when it comes to data copyright?
The biggest potential roadblock to the AI boom so far has been litigation over training data. The major labs have routinely scraped vast amounts of online content to train their models, operating on the assumption that the practice is protected by the fair use doctrine of the Copyright Act. That assumption is now being tested in lawsuits from publishers and creators, many of which are still moving through the courts. Some key cases, however, have already been decided, and on the core question of whether training on copyrighted data counts as fair use, the momentum appears to favor the AI companies.
The most consequential decision to date came this summer in Bartz v. Anthropic, a case Anthropic has since agreed to settle. Judge William Alsup ruled that Anthropic's use of digitized books as training data qualifies as fair use under the Copyright Act. Crucially, he determined that Anthropic's use was "transformative": the models weren't simply regurgitating the books' content and format, but were instead using the text to learn how to predict the next most likely word in a sequence. That is the basic mechanism by which large language models (LLMs) generate text.
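For readers curious what "predicting the next most likely word" actually looks like, here is a minimal sketch in Python using the small, open-source GPT-2 model via Hugging Face's transformers library. The model choice and prompt are purely illustrative assumptions on our part; this is not the code, or any of the models, at issue in these lawsuits.

```python
# Minimal next-word prediction sketch (illustrative only; GPT-2 is an
# assumption, not one of the models involved in the cases above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "It was the best of times, it was the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every token in its vocabulary
    # at every position in the prompt.
    logits = model(**inputs).logits

# Take the scores at the final position and pick the single most
# likely next token.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # likely " worst", completing the Dickens line
```

Generating a full reply is just this step run in a loop: append the predicted word to the prompt and predict again. It's that mechanism, learning statistical patterns of language rather than copying passages, that Alsup judged to be transformative.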
Judge Vince Chhabria reached a similar conclusion in Kadrey v. Meta (a class action in which Sarah Silverman and two other authors sued for copyright infringement), finding that Meta's use of the books was transformative, a central question in any fair use analysis. But Chhabria also cautioned that transformative use alone may not always be sufficient to secure fair use protection; the effect on a work's market value can also factor in. His ruling suggested some reluctance to set a broad precedent for future AI training cases.
Even so, the combined weight of Bartz and Kadrey seems to be shaping industry behavior. One media executive told me publications are now hesitant to sue AI firms for using their content without permission, fearing an expensive loss. That caution reflects not only the outcomes of those cases but also the relatively modest remedies seen in other recent federal decisions, such as the Google search monopoly case, and a broader judicial mood in the current political climate.
Still, the most significant test case—the New York Times lawsuit against OpenAI and Microsoft—remains unresolved. OpenAI has tried repeatedly to have the case dismissed, without success. In May, a judge ordered the company to preserve millions of chat logs and transcripts that could prove relevant. If the Times prevails, the question of fair use in AI training could be thrown wide open again.
More AI coverage from Fast Company:
AI is bad at data. This startup can fix that
Whitney Houston is going on tour 13 years after her death, thanks to AI
AI scraping is inevitable. Can publishers turn it into revenue?
AI nostalgia is the new comfort food for an anxious internet
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.