Business Transformation | 12.03.25
AI Use in the Broker-Dealer Industry: BISA 2025 Portfolio Magazine Preview
by: Serina Shores, First Citizens, and Kenneth Cherrier, Fidelity & Guaranty Life Insurance Company
Although AI has recently become a major topic of discussion, it has been quietly integrated into various aspects of our industry for years. Your employees and registered representatives are likely already using these tools, even for simple tasks such as suggested email responses. However, many industry members have questions for regulators about how to manage the use of AI. Regulation in the broker-dealer space is currently limited, and while the Financial Industry Regulatory Authority (FINRA) and the Securities and Exchange Commission (SEC) are watching and assessing AI regulation, in the meantime we can look to insurance industry rules, the National Institute of Standards and Technology (NIST) and other sources for guidance.
In this article, we outline several steps that broker-dealers can take to establish an effective risk management structure for AI. Whether you are new to AI or looking to enhance your current practices, these steps will help support your firm’s use of AI while reasonably addressing the associated risks.
4 Steps to Addressing AI Use Right Now
1. Apply Existing Rules Broadly
Currently, there are no AI-specific regulations in the broker-dealer space, but FINRA and the SEC have been supportive: their rules and regulations are technology-agnostic, and the use of AI has not been prohibited. The first step broker-dealers should take is to ensure that any use of AI is covered by existing policies. Here are a few actions you may take:
- If you do not have one already, create an acceptable use policy that covers allowed and prohibited use of technology by employees and registered representatives. It's crucial to have written procedures and a policy outlining AI deployment, its areas of use and its business role. Developing this policy typically involves structured internal sessions to establish governing rules and procedures.
- Review your risk committee policy and charter to ensure there is governance overseeing the use of AI. A strong internal governance framework should define acceptable AI use, oversight and responsibilities at your organization.
- Establish and maintain an AI portfolio. In alignment with industry best practices, some companies keep an active, centralized inventory of all AI tools used by the company. This portfolio should include details such as each tool’s purpose, users, use cases, testing frequency and status, and any associated risks or controls (see the sketch after this list).
- Update your vendor management policy to incorporate an AI use questionnaire into both the vendor vetting process and ongoing vendor due diligence. AI can be developed internally or through third-party vendors. If you are deploying a vendor's AI, you must have appropriate vendor management oversight. Conduct due diligence to assess the vendor's reputation and responsiveness to AI issues. How often do they review their AI tool for potential deficiencies?
- Review your errors and omissions (E&O) and bonding documentation to ensure any issues resulting from the use of AI are covered.
- Address deactivation: consider the factors that would lead you to discontinue an AI tool and the impact of doing so. Some companies initially deployed AI tools for customer profiling but found them ineffective and switched vendors or tools; be prepared for the same.
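To make the portfolio idea concrete, below is a minimal sketch of how an AI inventory entry could be structured. This is illustrative only: the `AIToolRecord` fields and the example tool are hypothetical, not a prescribed schema, and your compliance and risk teams should define the actual attributes your firm tracks.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record structure for one entry in an AI portfolio.
# Field names are illustrative; adapt them to your firm's policies.
@dataclass
class AIToolRecord:
    name: str                          # tool or vendor product name
    purpose: str                       # business role the tool serves
    owner: str                         # accountable business unit or person
    users: list[str]                   # roles permitted to use the tool
    use_cases: list[str]               # approved use cases
    vendor: str | None = None          # None for internally developed tools
    last_tested: date | None = None    # most recent review or test date
    testing_frequency: str = "quarterly"
    risks: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)

# Example entry for a hypothetical email-drafting assistant.
portfolio = [
    AIToolRecord(
        name="Email Draft Assistant",
        purpose="Suggest responses to routine client emails",
        owner="Operations",
        users=["registered representatives"],
        use_cases=["drafting non-advisory email replies"],
        vendor="Example Vendor, Inc.",
        last_tested=date(2025, 1, 15),
        risks=["inaccurate or misleading draft content"],
        controls=["human review required before sending"],
    ),
]

# A simple governance check: flag any tool with no documented controls.
for tool in portfolio:
    if not tool.controls:
        print(f"Review needed: {tool.name} has no documented controls")
```

Even a lightweight structure like this makes it easy to run periodic checks, such as flagging tools that are overdue for testing or missing documented controls.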
2. Provide Users With Training
Many banks and credit unions are eager to deploy AI because they've heard about and seen other companies using it. However, they often rush the process, relying on vendors without doing their own due diligence and neglecting key elements like policy and training on AI. Some vendors offering AI tools may already have training materials; are you leveraging these? An AI tool should never be deployed without training the end user on its use.
If the AI you are using is internally created, develop training with guidance from your AI policy. All users of AI should be required to take training on the acceptable uses and risks associated with AI in general.
The more educated the user is about the tool and its purpose, the better they can leverage it to its maximum capability. Educated users can identify issues, such as false information or bias. Therefore, training on the tool's purpose, usage and dos and don'ts is essential.
Want to read the full article? BISA's Portfolio Magazine is now available to all, regardless of membership status. Access the full 2025 issue on the BISA website.