For more than three years, an IEEE Standards Association working group has been refining a draft standard for procuring artificial intelligence and automated decision systems, IEEE 3119-2025. It is intended to help procurement teams identify and manage risks in high-risk domains. Such systems are used by government entities involved in education, health, employment, and many other public sector areas. Last year the working group partnered with a European Union agency to evaluate the draft standard's components and to gather information about users' needs and their views on the standard's value.
At the time, the standard included five processes to help users develop their solicitations and to identify, mitigate, and monitor harms commonly associated with high-risk AI systems.
Those processes were problem definition, vendor evaluation, solution evaluation, contract negotiation, and contract monitoring.
The EU agency's feedback led the working group to reconsider the processes and the sequence of several activities. The final draft now includes an additional process: solicitation preparation, which comes right after the problem definition process. The working group believes the added process addresses the challenges organizations experience with preparing AI-specific solicitations, such as the need to add transparent and robust data requirements and to incorporate questions regarding the maturity of vendor AI governance.
The EU agency also emphasized that it's essential to include solicitation preparation, which gives procurement teams additional opportunities to adapt their solicitations with technical requirements and questions regarding responsible AI system choices. Leaving space for adjustments is especially relevant when acquisitions of AI are occurring within emerging and changing regulatory environments.
Gisele Waters
IEEE 3119's place in the standards ecosystem
Currently there are several internationally accepted standards for AI management, AI ethics, and general software acquisition. Those from the IEEE Standards Association and the International Organization for Standardization target AI design, use, and life-cycle management.
Until now, there has been no internationally accepted, consensus-based standard that focuses on the procurement of AI tools and offers operational guidance for responsibly purchasing high-risk AI systems that serve the public interest.
The IEEE 3119 standard addresses that gap. Unlike the AI management standard ISO 42001 and other certifications related to generic AI oversight and risk governance, IEEE's new standard offers a risk-based, operational approach to help government agencies adapt traditional procurement practices.
Governments have an important role to play in the responsible deployment of AI. However, market dynamics and unequal AI expertise between industry and government can be barriers that discourage success.
One of the standard's core goals is to better inform procurement leaders about what they are buying before they make high-risk AI purchases. IEEE 3119 defines high-risk AI systems as those that make or are a substantial factor in making consequential decisions that could have significant impacts on people, groups, or society. The definition is similar to the one used in Colorado's 2024 AI Act, the first U.S. state-level law comprehensively addressing high-risk systems.
The standard's processes, however, do complement ISO 42001 in many ways. The relationship between the two is illustrated below.
International standards, often characterized as soft law, are used to shape AI development and encourage international cooperation regarding its governance.
Hard laws for AI, or legally binding rules and obligations, are a work in progress around the world. In the United States, a patchwork of state legislation governs different aspects of AI, and the approach to national AI regulation is fragmented, with different federal agencies implementing their own guidelines.
Europe has led by passing the European Unionâs AI Act, which began governing AI systems based on their risk levels when it went into effect last year.
But the world lacks regulatory hard laws with an international scope.
The IEEE 3119-2025 standard does align with existing hard laws. Due to its focus on procurement, the standard supports the high-risk provisions outlined in the EU AI Act's Chapter III and Colorado's AI Act. The standard also conforms to the proposed Texas HB 1709 legislation, which would mandate reporting on the use of AI systems by certain business entities and state agencies.
Because most AI systems used in the public sector are procured rather than built in-house, IEEE 3119 applies to commercial AI products and services that donât require substantial modifications or customizations.
The standardâs target audience
The standard is intended for:
- Mid-level procurement professionals and interdisciplinary team members with a moderate level of AI governance and AI system knowledge.
- Public- and private-sector procurement professionals who serve as coordinators or buyers, or have equivalent roles, within their entities.
- Non-procurement managers and supervisors who are either responsible for procurement or oversee staff who provide procurement functions.
- Professionals employed by governing entities involved with public education, utilities, transportation, and other publicly funded services who either work on or manage procurement and are interested in adapting purchasing processes for AI tools.
- AI vendors seeking to understand new transparency and disclosure requirements for their high-risk commercial products and solutions.
Training program in the works
The IEEE Standards Association has partnered with the AI Procurement Lab to offer the IEEE Responsible AI Procurement Training program. The course covers how to apply the standardâs core processes and adapt current practices for the procurement of high-risk AI.
The standard includes over 26 tools and rubrics across the six processes, and the training program explains how to use many of these tools. For example, the training includes instructions on how to conduct a risk-appetite analysis, apply the vendor evaluation scoring guide to analyze AI vendor claims, and create an AI procurement "risk register" tied to identified use-case risks and their potential mitigations. The training session is now available for purchase.
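To make the risk-register idea concrete, here is a minimal sketch of what such a register might look like in code. This structure is purely illustrative: the field names, the 1-to-5 likelihood and impact scales, and the likelihood-times-impact severity score are common risk-matrix conventions assumed for this example, not details taken from IEEE 3119-2025 itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI procurement risk register; all field
# names and scoring conventions here are assumptions for illustration,
# not definitions from the IEEE 3119-2025 standard.

@dataclass
class RiskEntry:
    use_case: str     # procurement use case the risk applies to
    risk: str         # description of the identified risk
    likelihood: int   # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int       # assumed scale: 1 (negligible) to 5 (severe)
    mitigation: str   # planned mitigation tied to this risk

    @property
    def severity(self) -> int:
        # simple likelihood x impact score, a common risk-matrix convention
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def prioritized(self) -> list[RiskEntry]:
        # highest-severity risks first, e.g. for review during contract monitoring
        return sorted(self.entries, key=lambda e: e.severity, reverse=True)

register = RiskRegister()
register.add(RiskEntry("benefits eligibility screening",
                       "biased outcomes for protected groups",
                       likelihood=3, impact=5,
                       mitigation="require vendor bias audit before award"))
register.add(RiskEntry("benefits eligibility screening",
                       "model drift after deployment",
                       likelihood=4, impact=3,
                       mitigation="contractual performance-monitoring clause"))

top = register.prioritized()[0]
print(top.risk)  # prints the highest-severity risk
```

Keeping each risk tied to a specific use case and a named mitigation, as the training describes, is what lets the register feed directly into the contract negotiation and contract monitoring processes.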
It's still early days for AI integration. Decision makers don't yet have much experience in purchasing AI for high-risk domains or in mitigating those risks. The IEEE 3119-2025 standard aims to help agencies build and strengthen their AI risk mitigation muscles.