Overview of AI Adoption in the South African Financial Sector
The Financial Sector Conduct Authority (FSCA) and the Prudential Authority (PA) recently released a study examining the adoption of artificial intelligence (AI) within the South African financial sector. This article provides a high-level summary of their findings and highlights the importance of both embracing AI and establishing a governance framework for its ethical and effective use.
The Financial Sector Landscape
The South African financial sector comprises 67 banks, 158 registered insurers, 315 lenders, 27 major payment institutions, over 5,000 pension funds, and more than 200 fintech companies. The sector accounts for roughly 20% of the country’s GDP, which underscores its significance and explains why it is often at the forefront of adopting innovative technologies such as AI.
Challenges and Current Adoption of AI
Despite the sector’s readiness for innovation, integrating AI presents significant challenges, largely due to a lack of understanding and concerns about the associated risks. Only 10.6% of the study’s 2,100 respondents currently use AI in their business operations. While caution is prudent, ignoring AI altogether could mean forfeiting competitive advantage.
Benefits of AI Adoption
Organisations that have adopted AI are already experiencing enhanced decision-making, improved customer experience, and more effective risk management. The primary motivation for early adoption is the pursuit of competitive advantage—achieved through increased efficiency from automation, reduced costs, and better outcomes from data-driven decisions.
Current AI Use Cases in the Financial Sector
The number of AI use cases in the financial sector is continually growing. Some prominent examples include:
- Major banks are improving fraud detection through biometric verification and machine learning, establishing client profiles to identify abnormal transactions.
- Insurers are using machine learning to streamline claims management processes.
- Asset managers employ generative AI to develop ‘robo-advisors’ that provide personalised investment advice based on input from customers and financial advisors.
- Regulators are detecting insider trading and market manipulation by using anomaly detection tools.
- Lenders monitor repayment behaviours and suggest interventions where credit defaults are predicted.
- Financial institutions are using AI to strengthen anti-money laundering monitoring and to supplement traditional risk models.
Strategic Importance of AI Adoption
As AI use cases proliferate across the sector, organisations that wish to remain competitive must consider AI adoption. Key benefits include increased operational efficiency, enhanced cybersecurity, better fraud and money laundering prevention, and greater insights for customer service improvement and financial product personalisation.
Risks Associated with AI Integration
Despite its advantages, AI introduces several foreseeable risks, including:
- Processing personal information with AI tools exposes data to breaches, theft, and malicious acts by third parties.
- Generative AI tools can be misused for malicious activities, leading to cybersecurity risks such as convincing phishing attacks, deepfake audio and video, data theft, and identity fraud.
- Use of large language models for research or automated client services can result in inaccurate advice, bias, or incorrect outputs.
- Algorithmic trading models may trigger simultaneous market exits, increasing market volatility.
- Reliance on third-party AI service providers can result in dependency and systemic risks, especially if these providers experience cyber-attacks or operational failures affecting multiple institutions.
- The high cost of developing and maintaining AI technologies may restrict adoption among startups or smaller firms, potentially increasing the dominance of larger companies.
- AI systems optimising pricing may learn to mimic competitors’ strategies, leading to tacit collusion and anti-competitive behaviour.
- Overestimating AI capabilities can result in excessive reliance on flawed outputs.
The Role of Governance in AI Adoption
Effective governance is essential to harness the advantages of AI while mitigating its risks. Although South Africa does not yet have AI-specific regulations, AI use is indirectly governed by existing frameworks, including the Consumer Protection Act, Protection of Personal Information Act (POPIA), FSCA codes of conduct, Joint Standard on Cybersecurity, market conduct regulations, and intellectual property laws.
Additionally, organisations can develop ethical AI frameworks by drawing on international standards, such as the EU AI Act, to inform their governance principles.
Establishing an AI Governance Framework
It is advisable for firms to set up a cross-functional AI ethics and governance committee. This committee should include representatives from legal, compliance, IT, and risk teams, and be responsible for overseeing AI initiatives and implementing an AI use policy.
When designing an AI governance framework, firms should focus on data protection, fair customer treatment, and safeguards against emerging risks. The framework should address both internal AI use and oversight of third-party service providers.
Key Components of an AI Use Policy
Depending on the level of AI adoption, an AI use policy may include the following elements:
- Evaluating AI systems and prioritising organisational use cases, ensuring transparency, auditability, and accountability for risks and system failures before deployment.
- Mandating risk assessments and mitigation strategies for all AI applications, including data protection impact assessments under POPIA and human oversight for high-risk automated decisions.
- Requiring encryption, access controls, and anonymisation for AI-processed data in line with lawful processing requirements.
- Prohibiting the use of confidential client data, personal information, or proprietary models in public or unapproved AI tools to prevent breaches and unauthorised data training.
- Monitoring for unauthorised (‘shadow’) AI usage and conducting regular audits for anomalies or leaks.
- Addressing biases, hallucinations, and inaccuracies through rigorous testing, validation, and careful data input, especially for financial predictions or compliance tasks.
- Integrating AI risks into enterprise-wide frameworks, particularly addressing vulnerabilities such as prompt injection and model inversion in third-party tools.
- Ensuring explainability for AI outputs in regulatory reporting or client communications, with mechanisms for appeals in line with the Financial Advisory and Intermediary Services Act.
- Requiring suppliers, service providers, and contractors to disclose and obtain approval for AI usage, and to adhere to equivalent data protection policies.
- Including AI-related clauses in contracts, conducting due diligence on vendor tools, and reserving rights to audit compliance.
- Updating third-party risk management processes to include annual AI attestations and prohibitions on using firm data for vendor AI training.
- Making AI decision-making processes understandable to users.
- Testing data inputs for accuracy and validating AI system outputs for reliability.
- Considering the potential for job displacement within the organisation as a result of AI adoption.
- Providing mandatory training on AI risks, POPIA compliance, and policy adherence for all staff and suppliers.
- Updating staff policies to include compliance requirements regarding AI.
- Enforcing policies through disciplinary measures, incident reporting, and periodic reviews, with escalation to regulators if breaches affect clients.
Continuous Review and Improvement
Once established, the AI governance framework should be reviewed on an annual basis to ensure alignment with AI developments, changes in regulations, and global best practices.
*The FSCA/PA study referred to in this article is titled ‘Artificial Intelligence in the South African Financial Sector’, published in November 2025 and authored by Nolwazi Hlophe and Lebogang Mabetha.*
About the author: Taryn Blignaut is a senior financial services lawyer with an interest in AI. Contact Taryn on taryn@legalninjas.co.za for assistance with formulating governance frameworks around AI integration.