Gen AI is amazing and super accessible. It is not just saving time and creating efficiencies but doing "new" things that would be too challenging or too time-consuming for humans. For example, one of our clients ripped through hundreds of thousands of call-center transcripts to see which employees their customers liked best and what those customers were most confused about! No human is going to do that.
Before you even start using AI at your company, you need an AI policy. In fact, you need one even if you are not using AI; otherwise, your employees could be using it on their personal devices for work, with nobody telling them not to.
Here are five AI policy fails we have noticed. Regulators are starting to focus on AI usage, and we have heard of "deficiency letters" being issued even to midsize and smaller financial institutions. One of the first things regulators will ask for is your AI policy. Get ready. (Not to drop an ad in here, but we can help you review your AI policy and think through AI ethics, and we also have a software platform that can fix the issues we outline below.)
Fail #1: Not Addressing the Human Element
Your team is your most valuable resource, so treat it that way. Some people are early AI adopters, but most are not. Many are afraid that AI will take their jobs, and honestly, some of them should be worried. Your AI policy should address the human element directly. Say you will not fire employees and replace them with AI. Commit to transparency about your data, tell stakeholders up front when you are using AI, and make sure both your people and your AI learn from mistakes. The solution is to provide ongoing AI training to everyone at your company.
Fail #2: Allowing Unmonitored AI Access
Remember the massive fines banks paid for not monitoring client communications on text messages, WhatsApp, and similar channels? In September 2022, the SEC and CFTC levied a combined $1.81 billion in fines for off-channel communications. The top offenders, including Morgan Stanley and Goldman Sachs, paid over $200 million each. Giving employees AI access in your policy without monitoring it across every model and platform is a train wreck waiting to happen. The solution is a policy built around an enterprise AI user interface with role-based access that captures a comprehensive record of every prompt, response, uploaded data set, and model version change across all Gen AI models and use cases.
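To make that concrete, here is a minimal sketch of what "record everything" can look like in practice. It wraps a generic model call in an audit step; the function name `call_model` and the log file path are illustrative assumptions, not our platform's actual API, and a real deployment would write to tamper-evident, access-controlled storage rather than a local file.

```python
import json, hashlib, datetime, uuid
from typing import Callable, Optional

AUDIT_LOG = "ai_audit_log.jsonl"  # illustrative path; production systems use tamper-evident storage

def audited_call(user_id: str, role: str, model: str, model_version: str,
                 prompt: str, uploaded_data: Optional[bytes],
                 call_model: Callable[[str, str], str]) -> str:
    """Wrap any Gen AI call so the prompt, response, data, and model version are all recorded."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "role": role,                    # role-based access: a real gateway would deny here if not permitted
        "model": model,
        "model_version": model_version,  # captured so version changes show up in the audit trail
        "prompt": prompt,
        "data_sha256": hashlib.sha256(uploaded_data).hexdigest() if uploaded_data else None,
    }
    response = call_model(model, prompt)  # the actual vendor/SDK call is abstracted away here
    record["response"] = response
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

The point is that no prompt or response ever reaches a model without leaving a record behind, which is exactly the evidence a regulator will ask to see.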
Fail #3: Turning a Blind Eye to “Rogue IT”
AI is so cool and helpful that employees are willing to bend the rules to use it. The latest versions of some of the Gen AI APIs (programmatic interfaces), along with other tempting plug-ins and tools, are hard for anyone even mildly technical to resist. The problems are many, including Fail #2, but mostly we worry about cybersecurity. One-off AI solutions built by a single person have no place in an organization without a cybersecurity review; they should be centralized in IT to keep things safe. Otherwise, you will soon have hundreds of one-off projects with gaping security holes to monitor. The solution is giving developers and quants access to APIs that connect to vetted AI solutions and agents through an AI hub that handles AI GRCC: governance, risk, compliance, and cybersecurity, including 24/7 monitoring.
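Here is a rough sketch of what "go through the hub, not straight to the vendor" might look like from a developer's seat. The hub URL, the allow-list of vetted models, and the response shape are all hypothetical assumptions for illustration; the idea is simply that every programmatic call passes through one monitored, access-controlled endpoint.

```python
import requests  # assumption: the internal hub exposes a simple REST endpoint; names below are illustrative

AI_HUB_URL = "https://ai-hub.internal.example.com/v1/generate"  # hypothetical internal gateway
VETTED_MODELS = {"gpt-4o", "claude-sonnet", "internal-llm"}     # models cleared by the GRCC review

def generate(model: str, prompt: str, api_token: str) -> str:
    """Developers call the hub, never a vendor API directly, so GRCC checks and 24/7 monitoring apply."""
    if model not in VETTED_MODELS:
        raise PermissionError(f"Model '{model}' has not passed the cybersecurity review.")
    resp = requests.post(
        AI_HUB_URL,
        headers={"Authorization": f"Bearer {api_token}"},  # hub enforces role-based access per token
        json={"model": model, "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # response shape is an assumption about the hypothetical hub API
```

A developer still gets a one-line call to a powerful model; the organization gets a single choke point where logging, access control, and security review actually happen.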
Fail #4: Regulation Not Included
If your AI policy does not reflect what your regulator(s) want today, and in the near future, that is a fail. Every regulator imaginable, from the SEC to the FDA, has said some version of "Just because it's AI doesn't mean you don't have to follow the existing rules." If your AI somehow discriminates on gender, race, or ethnicity in hiring, lending, or anything else, you will have a big problem. The solution is regulatory compliance built into your enterprise AI platform.
Fail #5: Overlooking Data Privacy and Protection Standards
There are national (and local) laws governing consumer and client data privacy and protection, whether you are a one-person RIA in the US or a multinational headquartered in Frankfurt. GDPR, for example, requires protection of consumer data. Your AI system must encrypt sensitive information in transit and at rest. You must also be able to detect sensitive information (such as personally identifiable or health information) on the fly, whether it appears on a prompt line or in an ingested data set. The solution is encryption or tokenization of PII and PHI before it reaches an AI model or is stored anywhere.
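As a simple illustration of tokenization on the prompt line, here is a minimal sketch that swaps a few common PII patterns for opaque tokens before anything is sent to a model. The patterns and the in-memory "vault" are toy assumptions; production systems use far broader detectors (including for PHI) and keep the token vault encrypted.

```python
import re, hashlib

# Illustrative patterns only; real PII/PHI detection covers many more formats and data types.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def tokenize_pii(text: str, vault: dict) -> str:
    """Replace detected PII with opaque tokens; the vault maps tokens back to original values."""
    def replace(kind: str, match: re.Match) -> str:
        token = f"<{kind}:{hashlib.sha256(match.group().encode()).hexdigest()[:8]}>"
        vault[token] = match.group()  # in a real system this vault is encrypted and access-controlled
        return token
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: replace(k, m), text)
    return text

vault = {}
safe_prompt = tokenize_pii("Client John's SSN is 123-45-6789, email john@bank.com", vault)
# safe_prompt now carries tokens instead of raw PII and can be sent to the model;
# the vault lets an authorized system map tokens back if the response needs to be re-personalized.
```

The model still gets enough context to be useful, but the raw identifiers never leave your control.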
Honestly, the biggest failure is simply banning AI at your institution, whether it is a community bank, a government agency, or a school. We have ways to solve all of the problems above today. If you do not provide AI, people will just start using it on their personal devices. And that is a disaster waiting to happen.
AIR-GPT was used in the production and editing of this article.
Copyright © 2025 by Artificial Intelligence Risk, Inc. This article may not be redistributed in whole or part without express written permission from the author.