The moment a South African bank or insurer uses AI to make a high-stakes decision such as granting a home loan, setting a medical premium or flagging a transaction for fraud, it is operating in a regulatory blind spot.
While our regulators, the Financial Sector Conduct Authority (FSCA) and the Prudential Authority (PA), have done commendable work in surveying the landscape, the core assumption that existing laws are sufficient for immediate governance is a regulatory illusion that must be urgently addressed.
The recent joint FSCA and PA report rightly identifies that financial institutions rely heavily on established frameworks, primarily the Protection of Personal Information Act (Popia) and the Financial Intelligence Centre Act (Fica), to manage AI-related risk. The problem is that these laws were never designed to police the unique dangers posed by machine learning models.
Popia focuses on the collection and processing of data, not the decision-making function of the algorithm itself. An AI credit scoring model may comply perfectly with Popia by safeguarding the input data, but if the model is trained on historically biased lending data, its output will systematically — and legally — discriminate against certain demographic groups.
The harm is not in the data leak but in the algorithmic bias that denies fair access to credit. Popia offers little recourse against this specific form of automated prejudice.
Similarly, Fica, which mandates sophisticated anti-money-laundering and counter-terrorism financing controls, is increasingly being implemented through AI systems that identify suspicious transactions. While AI is a powerful tool against financial crime, its “black box” nature creates an immense compliance risk.
If an algorithm flags a seemingly innocent transaction, leading to an account freeze, the bank’s compliance officer may struggle to provide the necessary audit trail and human-intelligible explanation demanded by a court or the regulator. The opaqueness of the model directly undermines the requirement for transparent decision-making.
This regulatory lag could have significant economic consequences. When AI is perceived as an opaque, unfair gatekeeper, public trust in the financial sector erodes. Furthermore, local institutions that adhere strictly to emerging global ethical standards (such as those in the EU’s AI Act or the G20 principles) face a competitive disadvantage against those that exploit the current ambiguity.
Our regulators should not wait for a full, standalone AI Act, which could take years to finalise. The risk is immediate. We need targeted, surgical amendments to our existing framework.
The most critical intervention lies in mandating two new requirements:
- Algorithmic explainability and impact assessments. Insurers and banks must be required to perform mandatory bias and fairness testing on any AI model used for high-stakes customer decisions. Furthermore, decisions to repudiate a claim or deny credit must never rest solely on an algorithmic score: each must be subject to human review, with a non-technical explanation provided to the customer.
- Data integrity and governance. Stronger rules are needed to hold institutions accountable for the quality and representativeness of the data used to train their models. This moves beyond simple privacy and addresses the root cause of algorithmic bias: flawed historical data.
South Africa is a financial hub and a leader in digital adoption. We have the data and the talent to build the most sophisticated AI systems. However, unless we retrofit our existing laws, especially Popia, to address algorithmic decision-making directly, we risk allowing systemic, automated unfairness to become entrenched.
The time for regulatory adaptation is now, using the tools we already have to mandate fairness and transparency in the age of the machine.
• Simelane is a regulatory professional focusing on the intersection of technology and compliance in the South African financial sector.