Copyright Fast Company

One of the prevailing narratives around AI is centered on a captivating question: When will we achieve Artificial General Intelligence (AGI)? Our world is transfixed by the idea of a machine that can think, learn, and perform like a human. But this focus on a hypothetical future obscures the most critical strategic choice leaders face today. The problem we are all trying to solve isn’t AGI; it’s how to improve organizational productivity.

The most common answer is to delegate. We’re told to boost productivity by having AI complete tasks for us. This is the “easy button.” This approach, however, is a strategic trap. It overlooks a far more powerful solution: improving productivity by designing AI to make our people demonstrably better at their jobs. This represents a shift in perspective. The goal is not just to get work done, but to enable our workforce to become something more. This distinction—between delegation and augmentation—is the key to building super-intelligent teams.

THE “EASY BUTTON” FALLACY

One of the primary business cases for AI has revolved around automation—expediting human effort with machine speed. As humans, we’re conditioned to seek the “easy button.” We want AI to be a delegate that does tasks for us, such as authoring a first draft, summarizing a report, or transcribing notes from a meeting. This feels productive because we’re saving time on a single task.

But this model can have a hidden, corrosive cost. By delegating the “doing,” we are not making our people better. We run the risk of creating “work slop”—the Harvard Business Review’s term for AI-generated content that appears complete but is substantively hollow. This can look like a plausible-sounding report, summary, or block of code that lacks true human judgment, critical context, or the nuance that comes from experience.

The risk is twofold. First, it doesn’t actually save work; it just shifts the burden.
The person who used the “easy button” has simply handed off unverified work to their teammates, who must now spend their time adding the human insight that was missing. Second, and more insidiously, it causes skill erosion. When you repeatedly delegate the process of thinking, you stop exercising those cognitive muscles. Your own skills atrophy.

This isn’t just theory. The evidence is mounting. University College London’s ethnographic and psychological research on police officers found that delegating report-writing to AI did not improve the officers’ own narrative skills. Findings from Stanford University demonstrated that while AI helped improve senior developer productivity, it reduced the hiring and mentorship of junior talent, effectively breaking the human growth pipeline. When AI simply does the work, the human learns less. Their skills erode. This “easy button” approach optimizes for the task, but fails the human—and ultimately, the organization.

THE PARTNERSHIP MODEL: DESIGNING FOR “GOOD FRICTION”

If the delegate model is a trap, the alternative is the partnership model. This approach asks a different, better question: “How do we design systems with AI that make our people better at their jobs?” The answer lies in designing for “good friction”—thoughtful interactions that keep humans engaged in the cognitive work, using AI to accelerate, verify, and scale their efforts.

Consider the design of AI for police reports. The “easy button” approach would be to have the AI listen to a single source of data and write the incident report from scratch. This saves time on the task but could erode the officer’s critical recall and observational skills. Moreover, an officer’s report is not simply a written record of their memory. It’s also a demonstration of their professional competence, a tool to persuade a jury, a resource for liability protection, and a nuanced record for communicating with community members.
By reducing the report’s purpose to a single function, the AI misses these other, crucial realities. The partnership model, however, introduces “good friction.” It requires the officer to first provide their own observations and outline the incident. Then, the AI engages as a partner. It can compose a full narrative based on the officer’s perspective, but more importantly, it cross-references that perspective with video, 911 transcripts, and other data to help the human verify details and identify discrepancies. The officer’s skills (observation, recall, judgment) are sharpened, not atrophied. The “work slop” is eliminated, and the final report is more accurate.

THE SUPER-INTELLIGENT TEAM

True human super-intelligence is a collective act. The moon landing was not the singular accomplishment of three astronauts in space, but the output of nearly 400,000 NASA engineers, technicians, scientists, programmers, and factory workers. The ultimate goal of technology is not just making one human better but making the team more capable.

This is also where the delegate model can fail most critically. At a team level, “work slop” manifests as a lack of trust. If your teammates can’t be sure whether you did the work or your AI did, they can’t trust your output, and collaboration breaks down. An AI designed for partnership does the opposite. It builds trust by becoming a collaborative tool that shares information faster and enables better, more transparent iteration between people, making the entire team more intelligent.

The relentless pursuit of AGI, while alluring, risks obscuring the profound opportunity before us. We stand at the threshold of a new era, one where AI is a catalyst for amplifying human potential. The true frontier of innovation isn’t about teaching machines to think; it’s about deliberately cultivating human super-intelligence within our teams. The most urgent question is not when AGI will arrive, but whether your team is ready to be super-intelligent.
Mahesh Saptharishi, PhD, is executive vice president and chief technology officer of Motorola Solutions.