
Virtual Roundtable on Making Government AI Ready

November 30, 2020

The public sector in the United States is at the very beginning of a long-term journey to develop and harness Artificial Intelligence tools that can process huge amounts of data in seconds, automating tasks that would take human beings days or longer to perform. At the same time, AI raises concerns about bias, security, transparency, and budget and procurement processes. Fellows Erik Bergrud, James Hendler, and Theresa A. Pardo discuss past, current, and potential efforts to address these challenges.

How should AI be used to improve service delivery?

James Hendler

There are a number of ways that AI could be used, but by and large the most important in the short term is offering help services for citizens. Websites and phone help systems have been enhanced by AI ‘chatbot’ facilities. Governments need to be aware that these are limited in what they can do, but where they can be used, they generally allow the same number of humans to handle a larger number of help or service requests. As time goes on, we will see more AI deployed to help government agencies process their data in ways that let them develop better plans and service delivery approaches. But we are a long way from AI replacing, as opposed to augmenting, government employees and their functions, especially with respect to service delivery.

Theresa A. Pardo

AI should be viewed enthusiastically, but responsibly, as a tool for innovation in service delivery. AI should be used to solve business problems and to increase service quality and efficiency. Some say AI will change the nature of government service delivery; in many places, it is doing so already. It is improving the quality and accessibility of services, making it easier for customers to receive higher-quality services while also making the provision of those services more efficient and effective.

A long-term challenge facing governments at all levels is making the best choices about which technologies to invest in, at what point to make those investments, and for which purpose or purposes. For AI to enable sustainable service delivery transformation, we must have a deep appreciation for how the specific characteristics of a context, and of the particular service being transformed within that context, will interact with the specific characteristics of the AI application envisioned for use there. All uses of AI to improve service delivery must center customers, both in terms of the interests of all customers, i.e., responsiveness, timeliness, transparency, and accuracy of services, and in terms of enabling more individuals to gain access to and receive services specific to them. A vision for AI is that the need for services will be anticipated and services will be pushed to customers in context-specific and culturally appropriate ways. Realizing this potential to increase the quality of service delivery requires attention to increasing the public’s trust in AI and to the ability of government agencies to ensure that AI-based service delivery innovations are trustworthy. Trustworthy AI requires explainability, reproducibility, and a lack of bias.

Decades of experience tell us that in public sector technology innovation, including service delivery innovation, one size does not fit all. Characteristics such as the nature of the services provided by each level of government and the capabilities of both the government itself and the citizenry must be taken into account. Readiness to make good choices that lead to sustainable service delivery improvements varies across state governments, large federal agencies, and the full range of municipalities, from the largest cities to the smallest counties. These differences must be understood and taken into account if sustainable service delivery transformation is to occur.

How do we develop an AI-ready public workforce?

Erik Bergrud

We begin by acknowledging that only a minority of public employees possess a public administration degree. In addition to collaborating with the Network of Schools of Public Policy, Affairs, and Administration (NASPAA) on engaging BPA and MPA students, the Academy could partner with the other national academies to foster a greater understanding of the challenges and opportunities AI brings to Federal, state, and local government agencies. Furthermore, the Volcker Alliance has launched a Government-to-University Initiative (G2U) that builds regional networks of governments and universities. The Alliance’s “network of networks” provides a pipeline for recruiting future government employees, including those with significant STEM education. Finally, developing an AI-ready public workforce could be part of a broader Academy P-20 initiative designed to attract the best and brightest to public service. Digital natives, including those enrolled in elementary school, possess the aptitude and enthusiasm necessary to develop future AI public sector solutions.

James Hendler

This may sound strange coming from someone who has been working in the AI field for over forty years, but it is unclear to me that we need to do very much. The assumption behind this question is that AI will somehow be some magic, powerful force that will change how people do things in major ways. However, while AI will provide significant automation, by and large most people will take advantage of it through applications and systems that come to them in the context of what they already do. AI is not particularly special; it is just another information technology component. It is important to understand its capabilities, but it won’t be as disruptive as people seem to think. Obviously, those who deal with IT and related technologies will have to understand the details of its deployment, but most users will simply find it delivered the way other IT services are: as something that runs on their laptops or other devices, below the level they need to worry about.

How should AI be incorporated into the public administration curriculum?

Theresa A. Pardo

Students in public administration should have the opportunity to learn about AI and other emerging technologies in a variety of ways. Students of public administration must be prepared to understand the significance of technology innovation, AI or otherwise, for the capabilities of governments to create public value. They must also be prepared to appreciate the interdependencies among technology, management, and public policy innovation, and to use that appreciation to make better decisions about risks and rewards.

We do not need public administration students to be developers of AI, but we do need them to understand the “promises and pitfalls” of AI, and we need them to understand the critical role that they will play, as government officials, in making decisions as both stewards and users of AI. They must have course material that prepares them to see the potential of AI, but that also prepares them to be circumspect in ways that ensure that investments lead to sustainable and responsible value creation. They must be exposed to courses that train them to think critically about the potential of technology to increase the quality and accessibility of public services, and about the role that government officials and other societal actors must play in ensuring that technology innovations, including AI, are understood both in terms of their potential to create value and in terms of their potential to undermine or threaten our collective commitment to basic ethical principles.

PA students must be trained to understand the nature and sources of bias in data and in algorithms, and the various roles that government officials can play in ensuring transparency and accountability in the design, implementation, and use of AI. Students must understand the challenges associated with ensuring that accepted ethical principles guide AI design and investment choices and, more importantly, they must understand the responsibilities they will have as government officials in ensuring that the government agencies they work for are responsible stewards and users of AI.

James Hendler

It is important that future public administrators understand the promises and limitations of AI in their decision-making. This does not mean learning how to program AI, or understanding the basic algorithms, but rather understanding what the limitations of AI systems are. Many of us think the greatest danger of AI is people putting too much trust in the systems, rather than understanding that the technology is fallible. I, and others, have written books aimed at teaching this sort of thing, and I’m sure we will continue to do so. Yet this is a fast-changing technology, so keeping the knowledge up to date is difficult; most of these books are out of date before they can become useful texts. Ideally, the solution is in the teaching more than the curriculum per se. We must break down the academic silos that separate the social and technical fields, adding more technical education, and educators, to the public administration teaching ranks and vice versa.

How do we raise awareness of, and resolve, the ethical issues associated with AI?

James Hendler

There are many technologies we have today that raise ethical issues. Medical provision, food distribution, community relations, and so many other functions of government involve making decisions on a regular basis that have ethical dimensions. AI does not change this. There are technical aspects that must be addressed; for example, many of us see the need for the development of an AI-ethics community, similar to the bioethics community that helps us understand and make decisions about advanced medical technologies such as cloning, CRISPR, and so on.

AI is not really special in this way, but it is important that people understand it is limited and that it will need regulation. That is, we will likely eventually need a regulatory regime around AI, much as we have regimes that govern these medical advances, and as we also have in regulating food, transportation, health, and many other sectors. Within such a framework, computer scientists, ethicists, and policy makers will have to work together to understand and deal with the technology in a way that allows innovation while controlling risk. Raising awareness of the potential harms that unregulated AI (for example, facial recognition systems used in law enforcement) can do is the most important starting place. Without policy informed by an understanding of these technologies, we leave the technologists in charge at society’s risk.

Erik Bergrud

In our Academy Election 2020 Project action plan, we identified a range of ethical issues that the Administration and Congress should consider, including “(a) ethical and moral questions; (b) greater public education about the benefits and risks of AI; (c) regulatory frameworks and guidelines; (d) legislation linked to current and future ethical issues; and (e) the proper relationship between technology, society, and public law.” We also suggested that the American Society for Public Administration’s Code of Ethics could provide a starting point for a federal government-wide commitment to ethical principles and standards in AI development and use, leading to the adoption and implementation of AI ethical principles by departments and agencies.

The Academy’s Standing Panel on Technology Leadership, in particular, can play a collaborative role with peer entities in resolving AI ethical issues and eliminating racial bias in AI applications, including facial recognition technologies. Engaging other professional societies inside and outside of public administration, including the National Association of State Chief Information Officers and the Association for Computing Machinery, could lead to the development of essential AI ethics training for Federal, state, and local government employees. The American public has a vested interest in the ethical implementation of AI, and NAPA should build upon existing communications platforms to educate the public and policymakers about the ethical implications inherent in current and forthcoming technologies.

What multi-level governance systems are needed to protect against unintended bias?

James Hendler

This is a great question; it is just unclear to me that it particularly relates to AI technologies. That is, unintended bias is inherent in human-to-human interactions, and AI-based systems are not likely to change this in any important way. AI systems designers must be aware of some of the technical dimensions of this, and the regulatory frameworks needed to address AI’s ethical issues would have to cover it, too. It is unclear that special governance is needed for this particular aspect of AI and its use; instead, this must be part of the larger ethical framework.

How do we ensure that the benefits of AI are available to all groups?

James Hendler

As with any technology, there is no easy answer, and “as with any technology” is the crux of the matter. AI is no different from, for example, the use of computing in education. It is a role of government to make sure that educational resources are available to all groups (unfortunately, this is not always done successfully). AI is just a tool, and how the tool is used will depend on investment and deployment. There’s no magic here. Whatever techniques we deploy to make any kind of government or technical benefit available to all groups are the same techniques we will need to use for AI.

What impact does the Covid-19 pandemic have on this Grand Challenge?

James Hendler

I would hope that the pandemic will help demystify questions like those above. AI has been hugely important and helpful in advancing the medical science and discovery processes used by the health care industry to develop treatments and by the major pharmaceutical companies to create vaccines for the disease. Predictive models of spread are also available to help decision makers understand what could be done to control it. From a public administration perspective, however, the role of AI is not large: the health care treatments must be approved through traditional channels, the vaccines still need to go through trials and safety testing, and the mitigation models can be used or, unfortunately, ignored by those who should pay attention. AI won’t change this; the bottom line is that it is just another technology, and human strengths and weaknesses hold sway over its impact. Covid helps to prove that point, and that is perhaps an important impact. Case studies of the role AI plays, and more importantly the roles it does not, will be a powerful tool for understanding AI and how it could be used to greater benefit if better understood.

ABOUT THE CONTRIBUTORS

Erik Bergrud. Associate Vice President for University Engagement, Park University, Parkville, Missouri. Former positions with the American Society for Public Administration: President, Senior Director of Program and Service Development, Senior Director for e-Organization Development, Director of Information Services, Director of Chapter/Section Relations.

James Hendler. Tetherless World Professor of Computer, Web and Cognitive Sciences, Rensselaer Polytechnic Institute; Director, Rensselaer Institute for Data Exploration and Applications and the RPI-IBM Artificial Intelligence Research Collaboration. Former positions include Program Manager/Chief Scientist (IPA), Information Systems Office, Defense Advanced Research Projects Agency (DARPA); Open Data Advisor (unpaid), New York State Government; Internet Web Expert (unpaid), Data.gov project, IPA to GSA, working with OSTP; Member, Homeland Security Science and Technology Advisory Committee, DHS. Current activities include Board Member, Board on Research Data and Information, National Academies of Sciences, Engineering, and Medicine; Member, Director’s Advisory Committee, National Security Directorate, Pacific Northwest National Laboratory; Fellow, Association for the Advancement of Artificial Intelligence (AAAI), Association for Computing Machinery (ACM), and American Association for the Advancement of Science (AAAS).

Theresa A. Pardo. Director, Center for Technology in Government, University at Albany, State University of New York; Professor, Rockefeller College, University at Albany. Former positions: Deputy Director, Project Director, and Project Coordinator, Center for Technology in Government, University at Albany, State University of New York; Director, Academic Computing Services, Siena College; Assistant Director of Academic Computing, Academic Computing Services, Union College; Coordinator of Academic Computing Services, Office of Computing Services, Union College; Technical Information Analyst, Office of Computing Services, Union College.