FHFA Insights
Artificial Intelligence/Machine Learning Supervisory Guidance for Enterprises

Published:
02/10/2022

The use and oversight of artificial intelligence (AI) and machine learning (ML) at financial institutions continue to be of great interest across the government as the development and adoption of AI/ML grow at a rapid pace. AI/ML tools and systems can support a range of functions, such as customer engagement, risk analysis, credit decision-making, fraud detection, and information security. However, the use of AI/ML can also expose institutions to heightened risks, including compliance, financial, operational, and model risks. FHFA's Division of Enterprise Regulation (DER) has been monitoring and examining the use of AI/ML at Fannie Mae and Freddie Mac (the Enterprises) over the past several years and evaluating the associated benefits and risks.

As a result of these efforts, DER issued Advisory Bulletin (AB) 2022-02 on February 10, 2022, which provides AI/ML risk management guidance for the Enterprises and Common Securitization Solutions. It is the first publicly released guidance from a U.S. financial regulator specifically focused on AI/ML risk management.

Recognizing the evolving nature of AI/ML and the Enterprises' varying degrees of use, FHFA's supervisory guidance is principles-based and underscores the need for a risk-based, flexible approach to AI/ML risk management. For example, high-risk AI/ML use cases—such as those that affect the Enterprises' critical business functions, implicate compliance with laws and regulations, or involve highly complex and opaque methods—warrant more robust risk management than AI/ML uses that are low risk or more transparent. Additionally, the Enterprises' risk management should be adaptable enough to accommodate changes in the adoption, development, implementation, and use of AI/ML.

The AB identifies three areas of heightened AI/ML risk for the Enterprises: model risk, data risk, and other operational risks. An example of model risk is "black box risk," a lack of interpretability, explainability, and transparency that is more prevalent in AI/ML models than in non-AI/ML models. Bias in the selection and processing of data is an example of heightened data risk, while the need for sufficient computing power and IT infrastructure illustrates the other operational risks associated with the Enterprises' use of AI/ML. FHFA encourages the Enterprises to leverage their existing risk management and control frameworks and to strengthen areas where the use of AI/ML creates heightened or unique risks.

FHFA also highlights the need for certain foundational components in the Enterprises' governance of AI/ML: a set of core AI/ML ethical principles, an AI/ML taxonomy, and an AI/ML inventory. An AI/ML-specific taxonomy establishes a common vocabulary and understanding of AI/ML terms and capabilities—such as natural language processing and neural networks—that allows the Enterprises to identify and manage AI/ML risks effectively. An inventory captures AI/ML use cases across business lines and functions, giving the Enterprises a comprehensive view of how best to manage AI/ML-associated risks, which can manifest throughout the AI/ML lifecycle. Establishing a set of core ethical principles is essential for the Enterprises to promote consistent governance across the different business activities and functions that interact with AI/ML in ways that could lead to adverse outcomes. Such principles include transparency, accountability, fairness and equity, diversity and inclusion, reliability, and security.

In conjunction with the AB, FHFA's Office of Minority and Women Inclusion (OMWI) issued a Supervisory Letter on February 10, 2022, that details FHFA's expectations for the consideration of diversity and inclusion (D&I) in the Enterprises' use of AI/ML.

The Supervisory Letter states that the Enterprises' use of AI/ML must be designed to embed D&I considerations throughout all uses of AI/ML and to address explicit and implicit biases, ensuring fairness and equity in AI/ML recommendations. The guidance distinguishes fairness and equity from D&I: fairness and equity concern the results of the Enterprises' AI/ML systems and processes and their impacts on underserved communities, while D&I assessments consider how underserved communities, including minority-, women-, and disabled-owned businesses (MWDOBs), might be adversely affected by the Enterprises' use of AI/ML tools or applications.

AI/ML is a developing field, and not all opportunities to advance D&I may yet be apparent, particularly where AI/ML use is in early stages of development. The Supervisory Letter recommends that the Enterprises continually evaluate their AI/ML use to uphold fairness and equity and to champion D&I. It also recommends that the Enterprises develop frameworks to assess the effects of data embedded with historical, social, economic, and cultural biases, and to find paths to include underserved communities, such as MWDOBs, in their use of AI/ML.

FHFA looks forward to the Enterprises' responsible innovation, adoption, and use of AI/ML and other technology to promote a robust housing finance market. FHFA will also continue to engage with other regulators and governmental entities to enhance its understanding of AI/ML and to ensure its guidance and supervisory expectations regarding AI/ML are clear and consistent.

Tagged: Artificial Intelligence (AI); Machine Learning; Risk Management; Supervision; Diversity and Inclusion; Model Risk; Operational Risk; Governance; Bias; Fannie Mae; Freddie Mac; Common Securitization Solutions (CSS); Data Risk; Compliance; Minority-, Women-, and Disabled-Owned Business (MWDOB)

By: Swan Lee
Senior Risk Analyst
Office of Risk and Policy
Division of Enterprise Regulation

Anne Marie Pippin
Supervisory Risk Analyst
Office of Risk and Policy
Division of Enterprise Regulation

Soquel Harding
Principal Policy Analyst
Office of Minority and Women Inclusion