Earlier this month, Gov. Phil Murphy (D-N.J.) authorized an artificial intelligence task force to research and analyze the impacts of AI and recommend policies to ensure the technology is used ethically.
“It is critical that New Jersey continue to foster an environment for innovation while protecting individual and civil rights, and I am confident that the AI Task Force will further this important mission,” Murphy said.
Jim Samuel, an associate professor of practice and executive director of the Master of Public Informatics program at the Edward J. Bloustein School of Planning and Public Policy, said the task force is a good step forward.
“We are currently at the cusp of entering into the age of AI,” he said. “We have stepped into a new phase, a new technological era, the fourth revolution.”
Samuel said AI can be a powerful tool in amplifying the voices of U.S. citizens and promoting civic engagement. He said AI can help the government rapidly collect thoughts and feedback from individuals when penning legislation.
Piyushimita (Vonu) Thakuriah, a distinguished professor and director of the Rutgers Urban and Civic Informatics Lab at the Bloustein School, researches the intersection of information technology and transportation. She said transportation technology is already widely using AI.
“The field of transportation has been at the frontier of AI,” she said. “Just take autonomous cars. That would not be possible without some of the more advanced AI technology that is around there. Ranging from machine vision algorithms (and) computer vision algorithms that actually try to detect how far the car is away from other cars around it.”
Both experts said they have incorporated AI into their own classrooms, encouraging students to use ChatGPT as a resource.
“In some semesters, (I am) insistent that students use AI to learn AI,” Samuel said. “Last semester, students had to use ChatGPT. While there was a huge commotion and many people were proposing limits, I said no limits. I flipped the limit.”
Despite AI's positive applications, Samuel said the technology is prone to abuse without human intervention. He said AI alone should not be used to craft policy but instead should advise and inform humans.
“Artificial intelligence (does) not understand meaning the way human intelligence does,” Samuel said. “Artificial intelligence may easily compose a more eloquent and elaborate text, but it still does not understand the meaning behind the text.”
Both experts cited privacy protection as a significant concern, as AI technology may have access to phone calls, facial recognition software and other tracking methods. Thakuriah also said AI technologies may produce biased information, particularly regarding minority groups.
“There might be toxicity or offensive content in some of the output that comes out because (the AI model) has been trained on stuff that is toxic and harmful,” she said.
The experts cited other negative impacts of AI, such as job loss and misinformation. Both hope Murphy's task force will propose policy frameworks that address the issues they raised.
Despite supporting the task force, Samuel said he wishes the initiative included more seasoned researchers.
“Researchers like myself and others have been studying artificial intelligence for years,” he said. “Most people have woken up to AI in the 2022 to 2023 time period. We have been doing this for years … At least at this present point in time, the task force does not have any academic thought leaders who are known in the world of artificial intelligence.”
Aside from the recommendations the task force will make, Thakuriah said it is important that AI thought leaders are in an open dialogue with the public.
“We need to have a conversation with broader society about where these kinds of technologies are going to go,” she said.