Rules, practices, and processes: three words that mean just about everything to shareholders.

The concept of corporate governance has a relatively short history in the United States, dating back only to the 1970s. While corporations have always undergone some form of governance, said Dan Byrne, content manager at The Corporate Governance Institute, “nothing compared to modern times’ level of control and oversight.”

“Now, governments, consumers, and corporate culture care a lot about ensuring that companies live by a strict system of laws and practices,” he said. “Looking to the future, it is doubtful that this drive for more accountability will fade.”

Artificial intelligence is now part of corporate governance’s future as companies deploy the new technology in the boardroom.

AI can handle time-consuming processes such as generating reports, analyzing financial data, and drafting communications, according to a Deloitte report. Board members and senior executives can free themselves from mundane administrative work by automating these repetitive functions, “allowing them to concentrate their time and energy on strategic decision-making.”

AI can aid risk management

“By leveraging predictive analytics, AI can assess large datasets to predict potential risks and vulnerabilities, giving boards a forward-looking approach to risk management,” the firm said.

The Directors’ Institute said in an April 1 post that the role of AI in governance extends beyond automation; it is reshaping how companies approach risk management, compliance, and strategic planning.

“Businesses are increasingly relying on artificial intelligence to streamline operations, enhance decision-making, and improve efficiency, leading to the integration of AI tools into the structure and execution of governance,” the Institute said.

Generative AI, in particular, can create content, analyze vast amounts of data, and even simulate decision-making processes, making it a powerful tool for corporate boards.

“With the rise of AI-driven solutions, traditional governance practices are evolving, leading to more data-driven, transparent, and responsive decision-making environments,” the Institute said.

Decision-making support systems powered by AI can simulate different scenarios and recommend optimal courses of action, assisting board members and executives in making data-driven, well-informed decisions.

But what’s wrong with the current arrangement?

“Many boards have been woefully uninformed about the financial, operating, and strategic risk of management decisions, as borne out through repeated examples of corporate meltdowns over the years,” according to a Stanford University report. “Boards have erred in situations of CEO selection, financial reporting, product liability, compensation setting, and reputation management.”

Analysts: AI makes mistakes

“Artificial intelligence has the potential to change this dynamic,” the report said.

Board members are much less likely to be “in the dark” about the operating and governance realities of their companies, the Stanford study said, as technology makes it easier for them to search and synthesize the public and private information made available to them through AI board tools.
AI increases the burden on managers and directors to review, synthesize, and analyze information prior to board meetings, the report said. Both parties can expect to spend substantially more time on meeting preparation because the quantity of available knowledge is substantially greater.

Retired University of Delaware law professor Charles Elson stressed the importance of human judgment in the boardroom.

“AI can provide information, which as a director is helpful, but you still have to evaluate it, and, frankly, most of being a director is asking the right questions at board meetings,” he said. “If directors are shareholders in the company and they respect their fiduciary duties, I think they would use it as a tool as opposed to a substitution for their judgment.”

The Stanford report warned that current AI models generate a substantial number of errors. “AI models come with inherent biases, the quality and availability of data can vary, and competitive intelligence may introduce additional complexities,” the report said. “AI makes computational and mathematical errors.”

In addition, the Stanford study said that AI does not always say “I don’t know” when it doesn’t know the answer to a question; instead, it grabs available data that might not be directly applicable. Boards and managers will need to learn how to fact-check output before relying on it.

The Directors’ Institute said questions about accountability emerge as AI increasingly integrates into governance workflows, especially when AI tools malfunction or yield inaccurate results.

“If an AI system is responsible for a critical decision, such as risk assessment or executive compensation, and it malfunctions, it may lead to erroneous outcomes that could have serious legal and financial consequences for a company,” the group said. “To address this challenge, companies must establish accountability frameworks that clearly define who is responsible when AI tools lead to errors.”