A study of the opportunities and risks of AI use in civil service
In recent years, technological developments in artificial intelligence (AI) have accelerated. This technology raises many questions, hopes and fears. The Agency for Public Management has conducted a study on AI use in civil service.
This study was carried out on the initiative of the Agency for Public Management, and it is included in the publication series Om offentlig sektor. According to its instruction, the Agency for Public Management must provide a decision-making basis for the Government's work on developing public administration and promoting a good administrative culture in central government. This study therefore aims to contribute to the discussion regarding the opportunities and risks of AI use in public administration. We have analysed how government agencies use AI as a tool to carry out their commissions, in relation to the fundamental values of government.
Big differences in how agencies use AI
Our study shows a wide variation in the degree to which AI has been adopted by government agencies. The civil service is not uniform, and the technological needs of various agencies differ. Some agencies have chosen to move faster, while others have adopted a more cautious and tentative approach. Among the largest agencies, most either use AI in some fashion or plan to do so.
The way in which agencies use AI also varies. Most often, AI is used to save resources in administrative-type operations. This may include, for example, case-sorting or categorisation tasks. Another application is to streamline communication with the public, e.g., using chatbots. Some agencies use AI to perform risk assessments of cases, in order to detect errors or criminal activity. In a few cases, agencies use AI to support decision-making in the exercise of official authority. Agencies also use AI in other ways in their core activities, but not specifically in the exercise of official authority. This includes a wide range of uses, from the Meteorological and Hydrological Institute's weather forecasts to image analysis by the Police Authority during investigations.
Agencies recognise AI’s potential, but there are risks
Agencies consider AI’s potential for increasing the efficacy and legal security of their operations to be great. They also see opportunities to improve service and accessibility. At the same time, AI use brings several risks related to the ability of agencies to uphold the fundamental values of government. For example, agency decisions may lack legal security if AI technologies are developed using biased or insufficient data. Another problem is that, when AI is used as decision support, decisions may lack transparency for the public. Risks related to privacy also arise.
According to our study, the risks of AI use vary depending on how agencies apply the technology. Agencies generally seem to approach riskier areas with caution. Most AI initiatives occur in internal administration, a low-risk area. However, riskier initiatives, in which agencies use AI in operations impacting individuals, have also been carried out.
Agencies must manage risks and act strategically
Our study shows that despite being aware of these risks, agencies have trouble managing them and adopting an AI strategy. All agencies must endeavour to achieve efficient operations and to make good use of state resources. This requires continuous development of working methods and a constant openness to take advantage of opportunities for efficiency gains.
Therefore, all agencies must consider the use of AI strategically, whether or not they use the technology. This means that agencies must understand and respond to technological developments, as well as analysing whether and how AI, as a tool, can or cannot enable them to better perform their commission. Any risks which AI use may pose to agency operations must also be identified.
Those agencies already using AI must actively ensure that their AI use conforms with the fundamental values of government. For example, AI must be used transparently, and the public must have insight into how decisions are made. Agencies must also document, monitor and evaluate AI initiatives to a greater extent than at present. Not only will this prevent errors, but it will also help in assessing whether initiatives have streamlined operations and increased legal certainty to the extent expected.
The Government has an important role in driving development
In our view, the civil service as a whole remains in a wait-and-see position with regard to technological developments in AI. Agencies seem determined to proceed deliberately with AI. This may indicate an awareness on their part of the risks associated with the technology. However, it may also be due to uncertainty regarding the future conditions for AI use, such as legal uncertainties, access to the right skills and the need for coordination on AI issues.
At the same time, whether agencies wait or fully embrace developments, their choices have consequences, both for the agencies themselves and for the civil service as a whole. If they wait, agencies risk missing the potential benefits of AI, as well as an opportunity to gain important knowledge and experience for the future.
For that reason, we consider that the Government can facilitate agencies by remaining receptive to agency concerns that regulations must be reviewed or revised, so as not to impede the appropriate use of AI. We also see reason for the Government to clarify the governance of the agencies' roles in this area. This includes clarifying which agency or agencies should be responsible for coordinating any joint AI initiatives in the civil service.