
With almost every company using AI nowadays, integration is no longer a differentiator. But some companies are seeing more revenue growth than others, and how they use AI helps them get there. In Forbes Research’s annual AI survey, conducted in August and September, companies with at least 10% annual revenue growth tended to share five common traits in how they used AI. First, these companies tended to have more collaboration across their C-suite in setting an AI strategy. They also tended to choose more effective KPIs and metrics to track their progress on AI-related goals. These companies are leaning more into the analytical and predictive power of AI, using its forecasting capabilities to inform better decision-making. They also use AI to analyze competitors’ activities to better inform corporate strategy. And they have done more with AI to streamline internal processes and improve efficiency.

Given these companies’ success so far with AI, it makes sense that they’re also confident about ROI. More high-growth companies—62%—expect significant or substantial ROI from AI over the next two years. Only 49% of all other companies feel the same.

A unified AI strategy and high-level uses of it are important, but companies need to be sure that they’re ready for everything AI brings with it. David Bray, chair of the Loomis Accelerator at the Stimson Center and DeepTempo advisor, testified before Congress shortly before the government shutdown about his ideas for AI regulation. I talked to him about those ideas and how enterprises can use AI to best preserve safety and privacy. An excerpt from our conversation is later in this newsletter.

We are currently accepting nominations for the Forbes CIO Next 2025 list. We’re looking for innovators who have had significant impact both at their own companies and for other tech leaders as a whole. (And yes, you can nominate yourself.) Nominations are accepted until 5 p.m. ET today.

We’re always looking to improve the stories and format of this newsletter. We’d appreciate your feedback in this brief reader survey.

This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.

Nvidia co-founder and CEO Jensen Huang delivers a keynote speech at Computex 2025 in Taipei.

While many big tech companies are reporting earnings this week, the most notable stock price increase came from one that won’t be sharing its quarterly report for three weeks. AI chipmaker Nvidia saw its value surge above $5 trillion on Wednesday, making it the first company to reach that benchmark. The catalyst was comments President Donald Trump made on his Asia tour, where he said he planned to discuss Nvidia’s top-of-the-line Blackwell chips with Chinese President Xi Jinping. Trump referred to the Blackwell chip, which the U.S. has barred Nvidia from exporting to China because of its advanced capabilities, as “super duper.”

While the prospect of more chip sales to China bolstered Nvidia’s stock, it also set off alarm bells for several policymakers, experts and analysts, who told the New York Times they feared giving China—a key rival in the race for AI technological dominance—access to top-of-the-line U.S. technology. After the meeting, Trump offered few details on his discussions with Xi on AI chips—just that the discussions did take place, and that further talks are between China and Nvidia, with the U.S. as “sort of the arbiter or the referee.”
Trump and U.S. Trade Representative Jamieson Greer both said Blackwell chips were not discussed. Nvidia’s stock has dipped just under 2% in the last day, bringing its value back to $4.9 trillion.

Another Big Tech player—this one set to report earnings after markets close today—hit a valuation landmark this week. On Tuesday, Apple became the third company in history to be worth $4 trillion. The increase in its share price has less to do with trade with China and more to do with new hardware and device launches in early September. In its first two weeks on the market, the new iPhone 17 outsold its predecessor model by 14% in the U.S. and China.

OpenAI created a for-profit entity this week, opening the door for bigger fundraising, more profits and a potential IPO. The for-profit entity, known as OpenAI Group PBC, is worth $500 billion. Longtime investor Microsoft holds a 27% stake in the new company. A 26% stake belongs to the nonprofit portion of the company, the OpenAI Foundation, while the remaining 47% is held by past and present OpenAI employees and other investors. The OpenAI Foundation will control the for-profit company through its board.

This move has long been expected, as reports of OpenAI’s restructuring have been out for more than a year. Its previous status capped the amount of money it could raise, a limiting factor in the expensive AI arms race. Reuters reports that OpenAI is already laying the groundwork for an IPO valuing the company at up to $1 trillion, Forbes’ Rashi Shrivastava writes in our AI newsletter The Prompt.

The new structure renegotiates OpenAI’s partnership with Microsoft, which previously had exclusive access to OpenAI technology for sales and customer use until 2030—unless OpenAI reached artificial general intelligence first. The agreement has changed, giving Microsoft access to technology until 2032 and research until 2030—unless AGI is reached, the New York Times reported. But OpenAI will have its own rights to hardware, including a new device under development by iPhone designer Jony Ive, whose firm OpenAI acquired earlier this year.

ARTIFICIAL INTELLIGENCE

You don’t need to have an actual brain to suffer from brain rot. Researchers from the University of Texas at Austin, Texas A&M University and Purdue University conducted a study to see how well AI models did after training on a steady diet of memes, clickbait, algorithmically generated listicles and provocative social media posts. And, Forbes senior contributor Leslie Katz writes, it made the AI much less effective. The AI bots showed lapses in “reasoning,” factual inconsistencies and an inability to maintain logical coherence in longer contexts.

“The biggest takeaway is that language models mirror the quality of their data more intimately than we thought,” study co-authors Junyuan Hong and Atlas Wang told Katz by email. “When exposed to junk text, models don’t just sound worse, they begin to think worse.” Once an AI model has been trained on low-quality content, the researchers found, it’s highly difficult to improve its reasoning abilities with better data. The researchers wrote that too much brain rot seemed to be “a form of cognitive scarring,” producing AI models that appear “confident, yet confused.”

BITS + BYTES

Using Time, Space And Existing Law To Regulate AI

David Bray, chair of the Loomis Accelerator at the Stimson Center and DeepTempo advisor.
The federal government has been looking at AI governance and regulations for the past several years, and the House Subcommittee on Courts, Intellectual Property, AI and the Internet held a hearing last month to discuss where things should go next. One of the witnesses was David Bray, chair of the Loomis Accelerator at the Stimson Center and DeepTempo advisor, who presented ideas about using active inference to bound AI data by time and space, updating existing laws to address AI, and building smaller, more targeted language models for different domains. I talked to Bray earlier this month about his proposals and what tech leaders at companies should consider. This conversation has been edited for length, clarity and continuity.

You testified before the House of Representatives at a hearing shortly before the government shutdown began, and the House has been out of session all month. What do you think might happen as a result of the hearing once they come back?

Bray: What you’ll probably see once the government opens up again is what resonated as a result of the hearing. One of the things I emphasized was we do need to recognize that there is, unfortunately, competition with China over the direction AI takes. And so whatever we do, we need to respect states’ rights. At the same time, we do need a national strategy. We’ve been here before: when mainframe [computers] first came out in the 1970s, we had the Privacy Act of 1974, which was a federal authority, not a state authority. It didn’t mean that states couldn’t do their own additional addendums to that, but it was initially a federal architecture. I think we’re going to see something similar, which is a framework that is going to be oftentimes context specific.

One of the things that has mystified me as an observer is, ever since generative AI came out, you’ve had different nations trying to create one single AI policy to rule them all. The reality is the risk calculus of AI recommending something for you to buy is completely different than AI making a recommendation to a clinician as to how to go about delivering healthcare. I think it’s going to have to be context specific. And why not upgrade existing laws? If we take that more common-sense approach, businesses should already be familiar with the existing laws and where they work and don’t work, given the impact of the speed and scale of AI. That would be a much more gradual update, hopefully at appropriate speed. I think we do need to have something out within the next year, but it won’t be wholesale: Here’s a new AI policy. Now you’ve got to conform with it.

What do you see as some of the bigger challenges with AI regulation?

In some respects, we’re moving so fast in adoption, it’s a question of how do you make sure people have the choice if they want to use generative AI? I don’t know if we’ve actually seen any laws that say: How do I remove myself from generative AI if I don’t want to be involved? That is the infrastructure question. And then there’s the application question. Are we okay with generative AI using data about us for recommending things for us to buy? Are we okay with it making recommendations to a clinician about how to get our healthcare? And if not, what’s our recourse? There’s going to be this interesting dance between making sure that people have access to it if they want it, but also the ability to opt out. But one of the big challenges of generative AI is that it’s generative, which is a feature, not a bug.
Sometimes, people talk about, ‘[Can] we lose hallucination?’ Yeah, in theory, you could have a completely deterministic model. But even then, we see certain companies saying that it would no longer be creative. It would just give you the same answer every time. I think the trouble with generative AI is any safeguards will always be what I would call post-compute. The model has already done its thing, but then you have a human-written filter that says, ‘That’s not a socially acceptable answer,’ or ‘That’s outside the parameters.’ That’s, as we know, how things sometimes get a little bit squirrely: Somebody asks something of a generative AI system and the filter just wasn’t prepared for it. And the next thing you know, bad things are happening.

I am really interested in other approaches that can be pre-compute. What I mean by pre-compute is you can actually have a filter that is written before the model has even considered anything. And the reason why I care about that is, early this summer, IEEE passed something that was about five years in the making: Spatial Web Protocols. The nice thing about spatial web protocols is you can actually say to the machine: I want to bound the following things within this time and space. When I get to my house, I want the following things to happen, or I don’t want the following things to happen. I want to permit the following things within a 15-mile radius of Newark Airport, but the other things to be restricted. Or I want to allow them after 9 p.m. I think that’s a model that’s necessary for more free societies and free markets like ours.

The other thing I think we’re going to see more and more is what I would call small data models or small language models. Instead of one large language model to be your end-all, be-all for everything, people will build foundation models: the best foundation model for detecting cyber abnormalities, the best foundation model for helping clinicians understand what’s going on with your cardiovascular system. I think the future of AI is not singular monolithic platforms, but more small data models instead.

What would you say to CIOs, CTOs and CISOs that are bringing AI into their enterprise systems and want to use it responsibly, without necessarily having to change a lot of things once there are more regulations?

Make sure it is very clear to your CEO, to your board, to your enterprise that the generative part is a feature, not a bug. And so it means it’s going to generate. It’s sometimes going to generate things you don’t expect. I have talked a lot to CIOs and CISOs that say, ‘We all know that the challenge with cybersecurity is the moment you introduce anything complicated. I can think of nothing more complicated than generative AI, so are we bringing in the ultimate insider threat?’ It’s good that you acknowledge that risk. What you actually need to do is upgrade your enterprise. You need to make the case, because no CISO is going to tell their board we’re not going to do AI. What you say is that if we’re going to upgrade our enterprise for AI, we need to also simultaneously upgrade our cybersecurity posture.

What you really need to do is go to what I call patterns of life. If you have 30 days to look at the enterprise, you can establish what normal looks like. If all of a sudden, your CEO at 11:00 at night on a Friday is attempting to wire $250,000 out, you want to stop that and say, is that really the CEO? And you want to have a face-to-face or out-of-band conversation to verify it, because it could be the CEO, but it could also be a bad actor. It could also be an agent gone wrong. We know that AI agents can be tricked fairly quickly into wiring money, even though they’ve been told explicitly not to. You make the case to your board that, yes, we’re going to adopt AI and we’re going to do so responsibly because we’ve also upgraded our cybersecurity posture. That makes us more ready for nation-state threats, but at the same time also ready if an AI agent goes off the rails.
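Bray’s “patterns of life” idea comes down to learning a per-user baseline of normal activity over a window of time (he suggests 30 days) and holding anything far outside that baseline for out-of-band verification. Below is a minimal, hypothetical Python sketch of that logic; the WireRequest fields, the three-standard-deviation threshold and the hour-of-day check are illustrative assumptions, not a description of DeepTempo’s or any vendor’s actual product.

from dataclasses import dataclass
from datetime import datetime
from statistics import mean, pstdev

@dataclass
class WireRequest:
    user: str
    amount: float
    timestamp: datetime

class PatternOfLife:
    """Per-user baseline built from roughly 30 days of observed wire activity."""

    def __init__(self, history: list[WireRequest]):
        self.amounts = [r.amount for r in history]
        self.active_hours = {r.timestamp.hour for r in history}

    def is_anomalous(self, req: WireRequest) -> bool:
        if len(self.amounts) < 2:
            return True  # too little history to call anything "normal"
        mu, sigma = mean(self.amounts), pstdev(self.amounts)
        # Flag amounts far above the historical norm (three standard deviations)...
        unusual_amount = req.amount > mu + 3 * max(sigma, 1.0)
        # ...or requests made at an hour this user has never been active.
        unusual_hour = req.timestamp.hour not in self.active_hours
        return unusual_amount or unusual_hour

def handle(req: WireRequest, baseline: PatternOfLife) -> str:
    # Out-of-pattern requests are held for out-of-band verification
    # (a phone call or face-to-face check) instead of executing automatically.
    if baseline.is_anomalous(req):
        return "HOLD: verify out of band before releasing funds"
    return "OK: within this user's normal pattern of life"

In Bray’s example, a $250,000 wire attempted at 11 p.m. on a Friday by an account whose history is modest daytime transfers would trip both checks and be routed to a human for verification.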
COMINGS + GOINGS

Home retailer Bed Bath & Beyond named Rick Lockton as its executive vice president and chief digital, product, and technology officer, effective November 3. Lockton joins the company after working as SVP of Digital at Tractor Supply Co., and he’s also worked in leadership at Ashley Furniture and Walmart.

Law firm Herbert Smith Freehills Kramer appointed Ilona Logvinova as its first global chief AI officer, effective November 5. Logvinova joins the firm from Cleary Gottlieb, and she’s also held leadership positions at McKinsey Legal and Mastercard.

Educational software company GoGuardian appointed Vishal Gupta as its chief technology and product officer. Gupta most recently worked as the senior vice president, CIO and CTO at Lexmark International.

STRATEGIES + ADVICE

With so much emphasis on AI development, a lot of attention has been paid to creating systems that are fast. But other qualities are just as important for enterprise AI: systems that are trustworthy and reliable.

Everyone has an inner critic, and sometimes that is the loudest voice in a leader’s ear, but listening to that voice can stop you on the path to success. Here are some tips for responding confidently to your doubts and moving forward with boldness.

Which legacy online company did Italian app developer Bending Spoons buy this week?

B. Geocities

D. Netscape

See if you got the answer right here.

Got a tip? Share confidential information with Forbes.