As AI Broadens Its Reach, Regulations Continue to Expand
More organizations in the United States, especially in financial services, are expanding their use of artificial intelligence. But are they compliant?
The EU AI Act Sets a Global Standard for What Is to Come in the United States
I know what you are probably thinking: “Another article about how AI is bad and taking our jobs and is inaccurate.” However, indulge me for a moment. If you know me personally, you know I am one of the biggest technologists AND futurists in the industry. I 💚 AI. There are so many powerful uses for artificial intelligence, machine learning, and other data-centric technologies. As a society, we have access to more data than at any point in history. And most of these technologies aren’t necessarily bad; the problem is that the full technology deployment cycle isn’t being followed.
However, I think the EU AI Act offers some forward-thinking insights that many technology enthusiasts, organizations, and “influencers” don’t take into account.
The EU AI Act requires that employees and users of AI systems be trained on proper usage and on the systems’ limitations. Many users, companies, and organizations are deploying AI without that training. Even the Taxpayer Advocate Service has warned about people using AI improperly.
Hallucinations exist. You have seen the videos and examples. But here is the other truth: people make mistakes as well. The concern isn’t about using AI; it’s about using AI blindly and accepting its results as accurate and infallible. A litigant in Missouri was recently fined by a court, not for using AI, but for using AI and submitting incorrect or falsified content. Now imagine an AI system placed in charge of systems or automation. A requirement of the EU AI Act is to verify content prior to use.
The use of AI can definitely lead to overconfidence in topics not actually understood by the user. I see this in technical industries such as accounting, tax preparation, bookkeeping, payroll, and more, but I also see it in the academic space with students using AI to complete assignments without understanding or grasping the topics. This is scary on both counts, because the deep, technical meaning can get lost or misapplied. Again, the EU AI Act requires user training and limitations on what information can be generated.
Similarly, the use of AI by non-technologists continues to grow, and that is amazing. However, the skill set is not increasing at the same rate. Sure, some education exists (mostly CE, or courses trying to qualify as CE), but much of it runs only one or two hours, and one to two hours is not sufficient to learn the skill set needed to leverage this technology. The EU AI Act requires proper skill-based training on how to use the technologies. Many companies and organizations are pushing out these technologies without investing in deep training and education for their staff (or themselves).
Finally, while this point can be argued, there is not yet a global, general acceptance of AI as a reliable platform. This is not an argument for one side or the other, but recent surveys by MIT and Bain & Company revealed that 87% of executives are pursuing AI as a solution to budgetary, staffing, and productivity concerns; this is coupled, however, with significant concerns over data security, accuracy, ownership of intellectual property, and more. Yes, acceptance of AI is growing, but is it broad enough yet for AI to be deployed universally or as a replacement for other systems? That is not a topic for this article. The EU AI Act requires disclosure when AI is used to generate content or solutions. Are the companies and organizations pushing out technology including these disclosures?
Now, you may be sitting there saying that I am an AI-hater, but nothing could be further from the truth. If you re-read my comments above, there are positives and concerns with each item, most of which are addressed by the EU AI Act. There is so much potential and power, but there is an old saying many of us use quite often: “With great power comes great responsibility.” You can’t have one without the other.
When a new act comes out, it is really important to review it and, especially if it comes from an adjacent industry or region, research where overlaps might exist and analyze what implications it might have for your own industry. As the EU has deployed the EU AI Act, requirements are already being placed on US-based/owned organizations with locations, clients, or (possibly) data within the EU. And because this act was deployed in a technologically similar environment, there is a heightened chance that similar regulations may find their way to the US.
What is the EU Artificial Intelligence Act?
The EU Artificial Intelligence Act (AI Act) is the European Union's comprehensive legislative framework aimed at regulating the use, development, and deployment of artificial intelligence (AI) systems across member states. Introduced by the European Commission in April 2021, the AI Act is designed to mitigate risks associated with AI while ensuring the protection of fundamental rights and values. The Act takes a risk-based approach to regulation, categorizing AI systems based on their potential impact on individuals and society. This regulatory framework has significant implications not only for companies within the EU but also for foreign entities, including financial services firms in the United States, that operate in or target the EU market.
Key Provisions of the AI Act
Risk-Based Classification of AI Systems:
The AI Act classifies AI systems into four categories based on their risk level:
Unacceptable Risk: AI systems that pose a significant threat to safety, livelihoods, or fundamental rights are banned. This includes AI systems used for social scoring by governments, real-time biometric identification in public spaces (with certain exceptions), and AI systems that manipulate human behavior or exploit vulnerabilities.
High Risk: AI systems that have a substantial impact on safety or fundamental rights are subject to strict requirements. These include AI applications used in critical infrastructure, education, employment, law enforcement, and financial services. In the financial sector, AI systems used for credit scoring, anti-money laundering (AML) processes, fraud detection, and automated trading could be classified as high-risk.
Limited Risk: AI systems that present limited risks, such as chatbots and emotion recognition systems, are subject to transparency obligations. Users must be informed when interacting with AI and provided with appropriate information on how the system functions.
Minimal Risk: These AI systems, which pose low or no risk, are largely unregulated under the AI Act.
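To make the tiering concrete, here is a minimal sketch in Python of how a compliance team might inventory its AI systems against the four tiers. The tier summaries, system names, and assignments are illustrative assumptions drawn from the examples above, not an official taxonomy or legal advice; an actual classification requires legal analysis, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers described above, each with a one-line obligation summary."""
    UNACCEPTABLE = "prohibited from the EU market"
    HIGH = "strict obligations: data governance, oversight, conformity assessment"
    LIMITED = "transparency obligations: disclose that the user is interacting with AI"
    MINIMAL = "largely unregulated under the Act"

# Hypothetical inventory mapping internal system names to a tier.
# The assignments mirror the examples in the list above.
AI_SYSTEM_INVENTORY = {
    "social_scoring_engine": RiskTier.UNACCEPTABLE,
    "credit_scoring_model": RiskTier.HIGH,
    "aml_transaction_monitor": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "inbox_spam_filter": RiskTier.MINIMAL,
}

def obligations_for(system_name: str) -> str:
    """Return a one-line summary of a system's tier and obligations."""
    tier = AI_SYSTEM_INVENTORY[system_name]
    return f"{system_name}: {tier.name} -> {tier.value}"

for name in AI_SYSTEM_INVENTORY:
    print(obligations_for(name))
```

The value of even a toy inventory like this is that it forces an organization to enumerate every AI system it runs and decide, explicitly, which obligations attach to each one.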
Obligations for High-Risk AI Systems:
For AI systems categorized as high risk, the AI Act imposes several mandatory requirements to ensure safety, transparency, and accountability:
Data Quality and Governance: High-risk AI systems must be trained on high-quality datasets that are free from biases and inaccuracies. This is crucial for AI applications in financial services, where biased algorithms could lead to discriminatory outcomes in credit scoring or loan approval processes (a minimal bias-check sketch follows this list).
Transparency and Information Disclosure: Developers of high-risk AI systems must provide clear and comprehensive information on how these systems function, their purposes, and their capabilities and limitations. In the financial sector, this could mean disclosing how an AI model evaluates creditworthiness or detects fraudulent activities.
Human Oversight: Effective human oversight mechanisms must be implemented to ensure that high-risk AI systems operate as intended and that there is an option for human intervention to mitigate potential risks. Financial institutions must ensure that employees can override automated decisions, such as in cases of suspected fraud or erroneous credit denials.
Robustness, Accuracy, and Security: AI systems must be designed and developed to ensure robustness, accuracy, and security. This is particularly important for financial services firms, where inaccuracies in AI models can lead to significant financial losses or regulatory breaches.
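Before moving on, here is a minimal illustration of the data-quality point above: a sketch that computes approval-rate disparity across applicant groups in a hypothetical credit-scoring dataset. The group labels, the sample data, and the 0.80 threshold (borrowed from the “four-fifths rule” used in US employment contexts) are all assumptions for demonstration; the AI Act itself does not prescribe a specific fairness metric.

```python
from collections import defaultdict

# Hypothetical loan decisions: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)
ratio = disparity_ratio(rates)
print(f"approval rates: {rates}")
print(f"disparity ratio: {ratio:.2f}")

# Flag for review if the ratio falls below an internal threshold
# (0.80 here is an assumed policy value, not a figure from the AI Act).
if ratio < 0.80:
    print("WARNING: approval-rate disparity exceeds threshold; review model and data.")
```

A check this simple obviously cannot prove a dataset is unbiased, but routine metrics like it are the kind of documented, repeatable governance evidence the Act’s data-quality obligation points toward.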
Conformity Assessment and CE Marking:
High-risk AI systems must undergo a conformity assessment to ensure compliance with the AI Act's requirements before being placed on the EU market. This assessment includes checks on data quality, transparency, human oversight, and robustness. Successful assessments result in a CE (Conformité Européenne) marking, which signifies that the AI system meets EU standards.
The conformity assessment must be conducted periodically, and any significant modifications to the AI system may require re-assessment. For U.S. financial firms, this means that AI models deployed in the EU would need to comply with these standards and may face additional scrutiny when updated or modified.
Penalties for Non-Compliance:
The AI Act outlines significant penalties for non-compliance, including fines of up to €30 million or 6% of a company's global annual turnover, whichever is higher. For large U.S. financial firms, this could translate into substantial financial risks if their AI systems fail to comply with EU regulations.
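The fine structure is simple arithmetic: the applicable cap is the greater of €30 million or 6% of global annual turnover. A quick sketch, using an assumed turnover figure, shows how quickly the percentage prong dominates for a large firm:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Cap as described above: the greater of EUR 30 million
    or 6% of global annual turnover."""
    return max(30_000_000, 0.06 * global_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in global annual turnover:
# 6% of 2B = EUR 120M, which exceeds the EUR 30M floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # -> EUR 120,000,000
```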
Governance and Enforcement Mechanism:
The Act proposes establishing a European Artificial Intelligence Board (EAIB) to provide guidance and coordinate the implementation of the AI Act across EU member states. Each member state will designate national competent authorities responsible for monitoring and enforcement. For U.S. financial firms, this implies the need to engage with multiple regulatory bodies to ensure compliance.
Implications for Financial Services Firms in the United States
The AI Act has substantial implications for U.S.-based financial services firms, particularly those that operate in the EU market or provide AI-driven financial products and services to EU customers.
Extraterritorial Reach and Cross-Border Compliance:
The AI Act has an extraterritorial reach, meaning it applies to any company—regardless of location—that provides AI systems within the EU market or affects EU citizens. For U.S. financial firms, this means that any AI system, such as algorithms used in credit scoring, investment management, or fraud detection, must comply with EU requirements if offered to EU clients.
Operational and Compliance Costs:
Financial services firms in the U.S. will need to make significant investments in compliance to meet the AI Act's standards. This may involve:
Developing or updating AI systems to align with EU requirements.
Conducting regular audits and conformity assessments.
Implementing robust data governance frameworks to ensure data quality and minimize biases.
Establishing dedicated teams or partnering with EU-based experts for continuous monitoring and compliance.
These costs could be substantial, particularly for firms relying heavily on AI-driven decision-making processes.
Impact on AI-Driven Innovation and Strategy:
The strict regulatory environment in the EU could impact the innovation strategies of U.S. financial firms. High-risk AI applications, such as those in automated trading, credit scoring, and fraud detection, may require significant modifications to align with the AI Act. Firms may need to reassess the development and deployment of these applications to ensure compliance.
On the positive side, compliance with the AI Act could enhance a firm's reputation for ethical AI use, providing a competitive advantage in the global market.
Data Privacy and Security Alignment:
The AI Act's requirements for data quality and governance align with other EU regulations, such as the General Data Protection Regulation (GDPR). U.S. financial firms will need to ensure that their AI systems comply with both the AI Act and GDPR, necessitating a comprehensive approach to data privacy and security.
Strategic Partnerships and Collaboration:
To navigate the complexities of the AI Act, U.S. financial firms may need to establish partnerships with EU-based compliance experts, legal advisors, and AI developers. Collaboration with local entities can help in understanding the nuances of the Act, conducting conformity assessments, and maintaining compliance.
Potential for Increased Scrutiny and Auditing:
Given the high stakes involved, financial services firms may face increased scrutiny from EU regulators, especially if their AI systems significantly impact consumers or the financial markets. Firms must be prepared for detailed audits and assessments of their AI systems.
The EU Artificial Intelligence Act represents a pioneering approach to AI regulation, emphasizing safety, transparency, and accountability. For U.S. financial services firms, the Act presents both challenges and opportunities. While compliance will require substantial adjustments in governance, data management, and AI development, it also provides a framework for responsible AI use that could enhance trust and credibility in the global marketplace. Financial firms must proactively invest in compliance and develop strategic partnerships to navigate this evolving regulatory landscape.
Here is your chance to catch both members of the Financial Guardians Leadership Team in one place!
Join Brad and Josh as they co-host a panel with Matthew Metras at the New York State Society of Enrolled Agents in October for the Tax Tech Trailblazers Panel! We would love to see you there!
Financial Guardians has partnered with NATP to provide access to our monthly Guardian Tier membership at a 30% discount.
Active NATP members can access the online discount here.
Financial Guardians is a proud member of InCite, the recently launched online community exclusively for tax professionals, bookkeepers, and accountants.
Join today at www.incite.tax.