
For decades, computer science students have been taught a central skill: using computers to solve problems. In practice, that has meant programming, writing code that tells a machine how to perform a task, like sorting a list of numbers or finding the most effective way to deploy snowplows during a winter storm.

Now, generative artificial intelligence tools like ChatGPT and Claude can write those programs themselves, producing code in much the same way they write essays and legal briefs: by analyzing a vast number of similar texts and assembling new ones that resemble them. A student can ask ChatGPT to write a program that sorts a list of numbers and get a working answer in seconds.

This new capability is transforming how computer scientists get work done. The Microsoft chief executive Satya Nadella has said that up to 30 percent of the company's code is now being written by A.I. Even nonprogrammers suddenly have the ability to create their own software tools.

For some coders, this may feel like an existential threat to their profession. It certainly marks a significant shift for computer scientists and those who educate them. The essential skill is no longer simply writing programs but reading, understanding, critiquing and improving them. The future of computer science education lies in teaching students to master the indispensable skill of supervision.

Why? Because the speed and efficiency of using A.I. to write code are offset by the reality that it often gets things wrong. These tools are designed to produce results that look convincing but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, yet only about one-third trust their accuracy. When asked about their greatest frustration with A.I. tools, two-thirds of respondents answered, "A.I. solutions that are almost right but not quite."

There is still a need for humans to play a role in coding, a supervisory one: programmers who oversee the use of A.I. tools, determine whether A.I.-generated code does what it is supposed to do and make essential repairs when it is defective. (The sketch at the end of this piece shows what such "almost right" code can look like.) But today's computer science education still focuses on coding as the primary activity. And, worryingly, some students are using A.I. tools to finish their assignments without learning or understanding how the code actually works.

Most education doesn't yet emphasize the skills critical for programming supervision, which hinges on understanding the strengths and limitations of A.I. tools. New developers with less than a year's experience can actually be less efficient with A.I. tools than without them. Research suggests that this could be because they lack the critical skills and knowledge needed to evaluate and correct what the A.I. produces. Those skills, such as understanding why code works, catching errors and knowing when to trust a result, can be learned over time, which is why experienced developers often find A.I. tools helpful for filling in details. But new developers need to build that judgment now in order to be ready to enter an A.I.-dominated work force.

Changing what we teach means rethinking how we teach it. Educators are experimenting with ways to help students use A.I. as a learning partner rather than a shortcut. In one study, computer science students who used A.I. to debug faulty programs became better at finding and fixing errors themselves. Another study found that teaching students to give A.I. systems clearer, more complete instructions led to more accurate results.
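
To make the "almost right but not quite" failure mode concrete, here is a minimal, hypothetical Python sketch. The function, its name and its bugs are invented for illustration; they do not come from the survey or from any particular A.I. tool.

    # Hypothetical assistant-generated code: meant to return the k largest
    # numbers in a list, largest first. It looks plausible and often works.
    def top_k(numbers, k):
        numbers.sort(reverse=True)  # Bug: silently reorders the caller's list.
        return numbers[:k]          # Bug: if k exceeds len(numbers), a short
                                    # result is returned with no error raised.

    # What a supervising programmer might write after reviewing it:
    def top_k_reviewed(numbers, k):
        if k > len(numbers):
            raise ValueError("k exceeds the number of elements")
        # sorted() returns a new list, leaving the caller's data intact.
        return sorted(numbers, reverse=True)[:k]

Both versions pass a quick spot check like top_k([3, 1, 2], 2), which is precisely the point: the flaws surface only when someone reads and tests the code rather than merely generating it.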