State Del. Caylin Young said artificial intelligence is potentially as “transformative” for the nation as electricity.
Many members of Maryland’s General Assembly, who will soon consider several proposals to regulate AI, are equally intrigued by the technology’s potential applications in health care, education and other industries.
Members also worry that, like electricity, AI can be dangerous.
When they convene for their next session in January, the lawmakers will consider bills governing the use of AI. Many of the proposals aim to guard against potential hazards such as misuse, invasion of consumer privacy or the spread of misinformation, according to a half-dozen interviews and legislators’ email replies to Baltimore Sun questions.
Other proposals, many still in the works, aim to ensure AI is harnessed for maximum benefit.
“I see AI as transformational,” said Young, a Baltimore Democrat. “I see it like the wheel, like electricity, like the computer and the semiconductor.”
Young’s AI focus is on schools, where he says Maryland educators could “leverage these tools to lesson-plan faster, to grade faster, to be more creative.”
For schools struggling to elevate students’ reading and math skills, he said: “I think AI can help elevate and accelerate students in catching up and getting ahead.”
Young and others introduced a bill in the legislative session ending last April that would require the Maryland Department of Education to evaluate how AI is being used in public schools, and how it could be used in the future. The measure did not pass, but Young said he remains interested in pursuing the topic.
As a relatively new technology, AI is not regulated comprehensively by the federal government. States have passed a patchwork of disparate measures.
In April, the Maryland General Assembly approved creating a workgroup to make recommendations for regulations to protect consumers against abuses when AI is used in employment, housing and other fields.
“There needs to be guardrails on AI — similar to what is happening in the legal community, where AI has spit out inaccurate case law, where lawyers have gotten in trouble using AI-generated memos that have given inaccurate information,” said Republican Del. Jesse Pippy of Frederick County.
The safety risks are more than theoretical. In January, former Pikesville High School principal Eric Eiswert alleged in a lawsuit that his reputation was harmed when an apparently AI-generated audio recording of a racist rant was made to sound like his voice. He lost his job.
In the coming weeks, the American Psychological Association plans to issue a health advisory to help guide people who use AI-generated counseling apps, Lynn Bufka, the APA’s head of practice, said in an interview.
The concern, Bufka said, is that the app users may not understand the limitations of such therapy. “AI absolutely should not be functioning without some sort of human in the loop,” she said.
State Sen. Katie Hester, a Democrat representing Howard and Montgomery counties, expressed concern in the last session about job candidates facing discrimination or other unfair treatment if employers used AI to screen applicants. Her bill to prohibit such screening, except under strict conditions, did not pass. She was unavailable this week to be interviewed on whether she might reintroduce the measure when the legislature convenes in January.
In July, President Donald Trump signed three executive orders collectively intended to pursue AI “global dominance” for the nation. He also eased environmental rules that could have held back the construction of new data centers.
Trump earlier rescinded an order of his predecessor, Democrat Joe Biden, establishing AI guidelines intended to safeguard privacy, scientific research and worker rights.
In July, the U.S. Senate rejected legislative language that would have frozen state AI regulations, rules that Texas Sen. Ted Cruz and other Republicans say would handcuff AI advancement.
Jamil N. Jaffer, a George Mason University assistant law professor, said in an interview that he is wary of AI regulation that could stifle innovation even as the nation competes with China over the technology.
“There are a lot of things we can do before we get to regulation,” said Jaffer, a former associate general counsel to Republican former President George W. Bush. “You should try the carrot first of tax incentives and federal government procurement before the stick of regulation.”
In an interview, Nitin Agarwal, an Arkansas information science professor, also expressed doubts about either “a patchwork of differing state regulations” or a “one-size-fits-all approach at the federal level.”
But Agarwal said Congress should establish a federal framework or set of guiding principles “that promotes safety, transparency and accountability while providing a constant baseline across the nation.” He is a member of a state task force studying AI.
Baltimore Sun reporter Mennatalla Ibrahim contributed to this article.
Have a news tip? Contact Jeff Barker at jebarker@baltsun.com