The Customs and Border Protection agency aims to establish a framework for the “strategic use of artificial intelligence” and outline rules for ensuring safe and secure use of the tech, according to an internal document viewed by Fast Company. The directive, obtained through a public records request, spells out CBP’s internal procedures for sensitive deployments of the technology. Agency officials are banned from using AI for unlawful surveillance, according to the document, which also says that AI cannot be used as a “sole basis” for a law enforcement action, or to target or discriminate against individuals.

The document includes myriad procedures for introducing all sorts of artificial intelligence tools, and indicates that CBP has a detailed approach to deploying AI. Yet those rules also include several workarounds, raising concerns that the technology could still be misused, particularly amid the militarization of the border and an increasingly violent deportation regime, sources tell Fast Company. And then there’s the matter of whether and how the directive is actually enforced.

According to the directive, the agency is required to use AI in a “responsible manner” and maintain a “rigorous review and approval process.” The document spells out various procedures, including steps for sanctioning use of the technology and the agency’s approach to inventorying its AI applications. It also discusses the special approvals needed for deploying “high-risk” AI and how the agency internally handles reports that officials are using the tech for a “prohibited” application.

The document has a warning for CBP staff who work with generative AI, too. “All CBP personnel using AI in the performance of their official duties should review and verify any AI-generated content before it is shared, implemented, or acted upon,” the directive states. “CBP personnel are accountable for the outputs of their work and are responsible for using these tools judiciously, ensuring that accuracy, appropriateness, and context are always considered.”

CBP, which is housed under the Department of Homeland Security, is already exploring or using AI for a range of activities, including screening travelers, translating conversations, assisting with drone navigation, and detecting potential radioactive materials crossing the border. The agency is also using, or interested in using, the technology to locate “items of interest” in video feeds, generate testable synthetic trade data, run automated surveillance towers, and mine the internet for potential threats. AI is even integrated into CBP’s internal fitness app, according to a long list of use cases published online.

The directive, which is titled “U.S. Customs and Border Protection Artificial Intelligence and Reporting” and was assembled by the agency’s AI operations and governance office, sheds light on how CBP says it’s monitoring the use of these tools, both within its own ranks and among its contractors. Fast Company reached out to CBP for comment but did not hear back by publication time.

The full directive appears “fairly reasonable,” a former DHS IT official tells Fast Company, and seems like a straightforward implementation of White House guidance. “It looks like civil servants doing their job and following policy, while clarifying roles in the context of their own organization’s reporting structure,” they say.
An ex-Biden administration official who worked on AI policy says the White House’s Office of Science and Technology Policy pressured parts of DHS, including CBP, to better organize their approach to AI. The directive, they say, shows that CBP, under the Trump administration, seems to be advancing on that front.

But the ex-official still has a host of concerns, including what they call a “flick of the wrist” waiver process for getting around the minimum procedures for high-risk AI applications. The document states that using “high-risk AI” without following these procedures requires written approval from DHS’s chief information officer, the department’s top tech official. The directive also lacks a protocol for explaining what should count as “high-impact” AI, creating another “obvious loophole” for skirting procedures, the person argues. That responsibility is left to another group, called the AI inventory team, which is supposed to factor in guidance from the White House, according to the directive.

The former official also believes applications of AI should be deemed more sensitive when they’re closer to the border, particularly in places where CBP officers might have expanded authority, a concern raised under the Biden administration, the person says.

“These procedures are an empty process, and only a half promise at that. These rules give us lots of red tape and record-keeping requirements, but no substantive protections against biased, error-prone, and destructive AI,” argues Albert Fox Cahn, the founder of the Surveillance Technology Oversight Project (S.T.O.P.) and a fellow at Cambridge University. “In a space where AI errors can literally be a matter of life and death, where machine learning mistakes can mean being locked in a cage or threatened with deportation to a country you’ve never seen, it’s shameful that CBP would enable wholesale deployment of such tech.”

The directive comes as DHS expands its internal use of artificial intelligence. In recent years, the department began several pilots with generative AI, including ChatGPT. It also developed its own chatbot, called DHSChat. Upon taking office, the Trump administration’s DHS banned the use of commercial AI tools like ChatGPT and directed employees to use only internal tools, FedScoop reported earlier this year.

Notably, the directive, signed by CBP Commissioner Rodney Scott, was published just a day before DHS released a new AI strategy for the department and a plan for complying with Trump administration guidance on boosting the use of the technology for all sorts of applications throughout government.

CBP has been using artificial intelligence for more than a decade, but the directive notes that its use of natural language processing, along with other new AI methodologies, has grown.