Today, beneath the headline-grabbing reports of geopolitical and geoeconomic volatility, a significant and consequential transformation is quietly unfolding in the public sector. This shift is underscored by the change in US Federal AI policy marked by Executive Order 14179 and the subsequent OMB memoranda M-25-21 and M-25-22. The policy decisively pivots from internal, government-driven AI innovation to significant reliance on commercially developed AI, accelerating a subtle yet critical phenomenon: the “algorithmic privatization” of government.
Historically, privatization meant transferring tasks and personnel from public to private hands. Now, government services and functions are increasingly delegated to non-human agents: commercially maintained and operated algorithms, large language models, and soon AI agents and agentic systems. Government leaders will have to adapt. The best practices drawn from a decade’s worth of research on governing privatization, in which public services are largely delivered through private-sector contractors, rest on one fundamental assumption: all the actors involved are human. Today, that assumption no longer holds. The new direction of the US Federal Government opens up a myriad of questions and implications for which we do not currently have answers. For example:
Who does a commercially provided AI agent optimize for in a principal-agent relationship? The contracting agency or the commercial AI supplier? Or does it optimize for its own evolving model?
Can you have a network of AI agents from different AI suppliers in the same service area? Who is responsible for the governance of the AI? The AI supplier or the contracting government agency?
What happens when we need to rebid the AI agent supply relationship? Can an AI agent transfer its context and memory to the new incoming supplier? Or do we risk losing knowledge, or creating new monopolies and rent extraction that drive up the very costs we saved through AI-enabled reductions in force?
The Stakes Are High For AI-Driven Government Services
Technology leaders, both within government agencies and at commercial suppliers, must grasp these stakes. Commercial AI-based offerings built on technologies that are less than two years old promise efficiency and innovation, but they also carry substantial risks of unintended consequences, including maladministration.
Consider just one example from the last five years of a predictive AI solution gone wrong:
Australia’s Robodebt Scheme: A government initiative that employed automated debt-recovery AI falsely claimed money back from welfare recipients, resulting in unlawful repayment collection, a significant political scandal, and immense financial and reputational costs. The resulting Royal Commission and the largest-ever compensation payment by any Australian jurisdiction are now burned into the nation’s psyche and that of its politicians and civil servants.
Incidents like this are the foreseeable outcome when oversight lags technological deployment. Rapid AI adoption heightens the risk of errors, misuse, and exploitation.
Government Tech Leaders Must Closely Manage Third-Party AI Risk
For government technology leaders, the imperative is clear: manage these acquisitions for what they are, third-party outsourcing arrangements that must be risk-managed, regularly rebid, and eventually replaced. As you deliver on these new policy expectations, you must:
Maintain robust internal expertise to oversee and regulate these commercial algorithms effectively.
Require all data captured by any AI solution to remain the property of the government.
Ensure a mechanism exists for the training of, or the transfer of data to, any subsequent solution provider contracted to replace an incumbent AI solution.
Adopt an “Align by Design” approach to ensure your AI systems meet their intended objectives while adhering to your values and policies.
Private Sector Tech Leaders Must Embrace Responsible AI
For suppliers, success demands ethical responsibility beyond technical capability. It means accepting that your AI-enabled privatization is not a permanent grant of fief or title over public service delivery. You must:
Embrace accountability, aligning AI solutions with public values and governance standards.
Proactively address transparency concerns with open, auditable designs.
Collaborate closely with agencies to build trust, ensuring meaningful oversight.
Help the industry drive towards interoperability standards to maintain competition and innovation.
Only responsible leadership on both sides, not merely responsible AI, can mitigate these risks and ensure that AI genuinely enhances public governance rather than hollowing it out.
The cost of failure at this juncture will not be borne by technology titans such as xAI, Meta, Microsoft, AWS, or Google, but inevitably by individual taxpayers: the very people the government is intended to serve.
I would like to thank Brandon Purcell and Fred Giron for helping to challenge my thinking and harden my arguments in what is a difficult time and space in which to address these critical, partisan issues.