Society is often slow to appreciate that technological innovations have both positive and negative outcomes. Splitting the atom led to weapons that can destroy the planet, but also provided a source of carbon-free energy and health care advances. Social media apps have connected people and created thousands of jobs. But their features have also led to many individual, group and societal harms.
We must be smarter with the latest technological advancement, artificial intelligence, and its most recent innovation, generative AI. We know these systems commercially as ChatGPT (OpenAI), Bard (Google), Bing Chat (Microsoft) and Midjourney (an image generator), among others. Our societal, legal, political and economic institutions are currently struggling to determine how best to manage the opportunities, challenges, risks and rewards of this new technology.
We must balance the economic interests of innovation developers against sound scientific inquiry and the public's best interests, and we must establish legal guardrails and societal norms to ensure that the many potential harms resulting from these systems' use are mitigated without compromising the benefits they bring.
Faced with this new frontier, state and local governments must carry on. Vendors bombard government officials with AI-driven solutions for every use case. Meanwhile, media outlets and AI developers buffet the public, promoting the remarkable capabilities these systems possess while warning of the threats they pose. Concerns about harm to society are amplified as compelling examples are regularly discovered and demonstrated.
Important questions. How should government agencies and their dedicated public servants respond? First, let's identify the questions facing agencies adopting artificial intelligence or any new technology. How can they:
• Properly use the technology without significantly harming the public?
• Balance the benefits and risks through research and sound policies?
• Manage the hype and pressure to use it coming from vendors, the media, well-meaning staff and sometimes the public itself?
• Understand that technology is developed and deployed on adoption curves? Not everyone has to be first to use something new; most agencies should wait and learn from capable early adopters.
Once the technology and its risks are understood, agencies should engage affected stakeholders for input on its use in their organizations. Public policies on the technology's use should emerge from that discussion, with issues such as privacy, transparency and ethics considered. Officials must examine the costs and weigh the alternatives. Will artificial intelligence solve problems, or potentially create more?
Regardless of whether an AI system has been trained on data from outside or inside the agency, the training data must be verified as valid for that agency's use case. Computational algorithms must perform consistently within the agency's environment, and biases must be discovered and mitigated before implementation. The system and its data need to be tested, retested and validated. When analyzing results, the agency must watch for unintended consequences or potential harm.
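To make this concrete, here is a minimal sketch, in Python with pandas, of the kind of pre-deployment data audit the paragraph above implies. The file name, column names and the four-fifths disparity screen are illustrative assumptions, not a prescribed standard; a real validation effort would go much further.

```python
import pandas as pd

# Illustrative sketch: audit a training dataset before an agency deploys a model.
# The file and column names ("benefits_training.csv", "approved", "applicant_group")
# are hypothetical placeholders for an agency's own data; "approved" is assumed
# to be a 0/1 outcome column.
df = pd.read_csv("benefits_training.csv")

# 1. Basic validity: flag missing values that could silently skew training.
missing = df.isna().mean()
print("Share of missing values per column:")
print(missing[missing > 0])

# 2. Simple disparity check: compare approval rates across applicant groups.
rates = df.groupby("applicant_group")["approved"].mean()
print("\nApproval rate by group:")
print(rates)

# Flag any group whose approval rate falls below 80% of the highest group's
# rate -- a rough screen borrowed from the "four-fifths" rule of thumb,
# not a legal or statistical standard on its own.
threshold = 0.8 * rates.max()
flagged = rates[rates < threshold]
if not flagged.empty:
    print("\nGroups flagged for further bias review:")
    print(flagged)
```

Checks like this are only a starting point: they should be rerun whenever the training data is refreshed, which is what testing, retesting and validating means in practice.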
The must-do list. There is no shortage of risks related to AI use, and our laws and constitutional protections were not developed with those risks in mind. We need updated interpretations, new laws and informed regulations to address the challenges. But as these evolve at the federal and state levels, there are practical things government agencies can do now to take advantage of the technology:
• Understand how applications are built and designed.
• Make sure the training data is valid.
• Determine if an AI solution is the best way to solve a problem; is a human-driven solution supported by less sophisticated technology better?
• Implement ongoing risk management, including sound and transparent policies on how the technology is used.
• Ensure the agency has the resources (time, staff and budget) to manage its use.
• Monitor to ensure the results are as expected and look for unintended consequences.
While the federal government is moving toward policy solutions, state legislators across the country have introduced a range of bills aimed at getting a handle on AI risks in government and in general use. Caution is warranted here, however: fifty different state solutions, many of them impractical or overlapping, can hinder more than help. Organizations representing different levels and types of government agencies should team up to share common approaches and solutions that can be implemented across their sectors. Collaboration yields efficiency and economies of scale.
Immediate goal. Red tape can make it difficult for government agencies to find solutions, so states should be surgical in their approach to policy and regulation. The immediate goal should be thoughtful policies that ensure AI does no harm as we move ahead.
The introduction of AI comes on the heels of society’s recognition of the unanticipated consequences of social media. Generative AI is the newest technology that provides economic benefits for innovators but lacks societal guardrails for users. We need to learn from the past and then walk (develop safeguards) and chew gum (deploy systems carefully) at the same time.
Applications using artificial intelligence aren't going away. In its forms of chatbots, image generators, prediction engines and related applications, AI has already shown it can undermine the shaky grasp many people have on what is fact and what is fake, skew how we understand the world and manipulate political viewpoints with falsehoods. Yet it can also bring us automated vehicles, help manage climate change, improve health outcomes, provide life-changing opportunities for individuals with physical or cognitive limitations and much more.
It is important to get this as close to right as we can the first time. It’s hard to stuff a genie back into the bottle.