When the regulator asks "explain your AI" we've already prepared your answer.
Off-the-shelf platforms work until they don't. When your risk profile, your regulatory obligations, or your operational reality outgrows what a product can offer, Foundry builds what doesn't exist yet — and engineers the governance architecture that makes it defensible under the frameworks your regulator is already enforcing.
Getting from "we should use AI" to "this AI survives regulatory examination" is not a product purchase.
Most vendors will sell you a model. What they won't sell you — because it can't be productised — is everything that sits around the model: the governance framework, the data residency architecture, the explainability pipeline, the integration layer into your operational systems, and the audit-ready validation report your regulator will eventually demand.
Those requirements look different across a central bank's AI circular, POCAMLA, POPIA, DPA 2019, and every sector regulator in between. They share one thing: the model is the smallest part of what they are asking for.
That gap — between "deployed" and "defensible" — is what Foundry closes.
Foundry is not an AML vendor.
Foundry is a bespoke AI engineering firm. We build AI for any industry, any use case, to whatever technical and regulatory standard applies — credit scoring, fraud detection, clinical risk, underwriting, portfolio modelling, operations automation, LLM-powered workflows.
The depth of our AML practice is proof of what this engineering looks like when the stakes are highest. It is not a constraint on who should call us.
We evaluate OpenAI, Anthropic, Google Gemini, Meta Llama, Mistral, Cohere, Qwen, and private on-premises deployments for every engagement. The model is chosen by what the use case demands and what the regulatory context permits — not by commercial preference. Where data sovereignty requires it, we build with open-weight models that run entirely within your infrastructure.
Credit Decisioning AI
DPA 2019 · CBK Credit Guidelines · Fair Lending
Automated credit decisions must be explainable to the applicant and auditable by the regulator. Adverse action notices, fairness testing, and model governance documentation are not optional.
Fraud Detection
Model Governance · Bias Testing · FATF Guidance
Fraud models that cannot explain their decisions expose institutions to legal challenge from falsely accused customers and regulatory scrutiny over discriminatory pattern recognition.
Insurance Underwriting AI
IRA Guidelines · POCAMLA · DPA 2019
The IRA is developing guidance on algorithmic underwriting. Institutions deploying AI in pricing or risk classification need governance frameworks that anticipate the examination, not react to it.
Customer Service & Conversational AI
DPA 2019 · POCAMLA S.29 · Consent Management
A chatbot with access to case or transaction data must not inadvertently breach tipping-off provisions or disclose information that triggers a regulatory obligation. Data flows must be mapped and bounded.
KYC & Onboarding Automation
POCAMLA · IRA · Automated Decision Rights
AI that affects a customer's ability to access financial services is subject to human review requirements, explainability obligations, and data minimisation standards under the DPA 2019.
Internal Risk & Treasury Models
Basel III/IV · Board Governance · Audit Committee
Internal models used for capital allocation, liquidity management, or risk appetite decisions require the same validation rigour as customer-facing models — with the board and the prudential regulator as the audience.
Large language models introduce a category of regulatory risk that classical AI frameworks were not designed to handle.
Foundry designs and deploys LLM systems for regulated financial institutions — with the same governance rigour applied to every other AI deployment in a compliance environment.
An LLM-generated SAR narrative that contains fabricated facts is a false regulatory submission. In Kenya, that is a criminal offence under POCAMLA. Output validation is not optional.
Identical prompts produce different outputs across runs. A regulated decision system with no reproducibility cannot be audited. Every LLM output carrying regulatory consequence needs a deterministic audit trail.
Sending customer transaction data to a cloud LLM API is a third-country data transfer under DPA 2019. Without a data processing agreement and adequacy assessment, it may be unlawful.
FATF and central bank AI guidance requires explainability. "The model said so" is not an acceptable answer. LLM outputs in regulated workflows need grounding, citation, and human review architecture.
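The reproducibility point above can be made concrete. Below is a minimal Python sketch, assuming a hypothetical audit-record shape (field names such as `model_id` and `reviewer` are illustrative, not a prescribed schema): pin temperature and seed, capture the full rendered prompt and verbatim output, and hash the whole record so any later alteration is detectable.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class LLMAuditRecord:
    """One record per LLM call that carries regulatory consequence."""
    model_id: str       # exact model and version actually used
    prompt: str         # full rendered prompt, not the template
    temperature: float  # 0.0 for decision-bearing calls
    seed: int           # fixed seed where the runtime supports it
    output: str         # verbatim model output
    reviewer: str       # human who approved the output

    def fingerprint(self) -> str:
        """Deterministic SHA-256 over every field; any change to any
        field produces a different hash, so tampering is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

record = LLMAuditRecord(
    model_id="example-model-v1",
    prompt="Summarise transactions for case 0042.",
    temperature=0.0,
    seed=7,
    output="Three transfers between linked accounts.",
    reviewer="analyst-a",
)
digest = record.fingerprint()  # stored alongside the submission
```

The hash chain does not make the model deterministic; it makes the decision record immutable, which is what an examiner can actually verify.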
What Foundry builds for LLM deployments:
The five service areas below describe the engineering practice in full. Each applies wherever the engagement begins — AML is one expression of them, not the boundary.
If what you've read describes your situation — we should talk. Foundry engagements begin with a scoping conversation, not a sales deck.
Start a conversation →
Five service areas. One integrated infrastructure practice.
Compliant AI Workflow Design
Your AI deployment needs to survive more than go-live. It needs to survive an audit.
Most AI vendors design for production. Foundry designs for the examination room.
We architect AI workflows with explainability, governance, and audit-readiness as structural requirements — not retrofits. When a regulator, a board, or an external auditor asks "show me how this decision was made", your system already has the answer.
- Explainability architecture — LIME/SHAP outputs mapped to regulator-readable language, not raw probability scores
- Model governance documentation: model cards, validation reports, change management logs, and version histories
- AI risk assessments aligned to FATF Guidance on AI (2021), applicable central bank AI circulars, and sector-specific regulatory frameworks
- Pre-deployment regulatory review — testing model output against known failure modes and regulatory expectations before the examiner does
- Continuous monitoring pipelines for model drift, false positive rate creep, and population shift
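As an illustration of the first bullet — mapping raw feature contributions into regulator-readable language — here is a minimal sketch. The reason-code map and feature names are hypothetical, and the contribution values stand in for what a SHAP or LIME explainer would produce on a real model:

```python
# Hypothetical reason-code map; a real deployment derives contributions
# from SHAP or LIME on the production model and maintains this map
# under model governance change control.
REASON_CODES = {
    "debt_to_income": "Debt obligations are high relative to declared income.",
    "account_age_months": "Limited credit history with the institution.",
    "recent_defaults": "Recent missed repayments on existing facilities.",
}

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Translate the features that pushed a decision toward decline
    (negative contributions) into plain-language reasons an applicant
    and a regulator can read, most influential first."""
    negative = {f: v for f, v in contributions.items() if v < 0}
    ranked = sorted(negative, key=negative.get)  # most negative first
    return [REASON_CODES.get(f, f) for f in ranked[:top_n]]

# Contribution signs as they might come from an explainer.
reasons = adverse_action_reasons(
    {"debt_to_income": -0.42, "account_age_months": -0.11, "recent_defaults": 0.05}
)
```

The point of the sketch: the applicant never sees a probability score, and the examiner sees a documented, versioned mapping from model behaviour to stated reasons.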
FATF Guidance on AI — 2021
"Automated systems used in AML/CFT decision-making must be explainable, auditable, and subject to meaningful human oversight."
The FATF guidance was explicit, and it set the direction of travel for every central bank and financial sector regulator that has since issued AI circulars. The CBK's draft AI policy, the FSCA's algorithmic decision-making guidance, and the NCC's AI framework all reflect the same requirement: AI that cannot explain itself will not be permitted in regulated compliance workflows.
Custom Model Design & Calibration
Every model reflects the assumptions baked into the data it was trained on. When your population, your risk environment, and your regulatory context were not part of those assumptions, the model fails — sometimes quietly, always expensively.
Off-the-shelf models carry off-the-shelf failure modes. AML rule libraries calibrated to European banking patterns produce 60–80% false positive rates in African institutions. Credit models built where thin-file customers are the exception create discriminatory outcomes where they are the norm. LLMs deployed without output validation generate confident, well-formatted text that is factually wrong — and in a regulated submission, that is liability.
These are not edge cases. They are predictable consequences of deploying models that were not designed for where you operate.
Foundry designs and calibrates models to your data population, your sector, and the performance standards your regulatory environment requires — whether that model detects financial crime, scores creditworthiness, processes claims, powers an LLM workflow, or surfaces risk intelligence that a board will eventually be asked to sign off on.
- Detection model design: AML typology development, fraud pattern modelling, and anomaly detection calibrated to your transaction network — not imported baselines
- Behavioural baseline modelling against your historical data; false positive rate engineering that reduces operational noise without reducing coverage
- Predictive model design: credit scoring, underwriting, churn, and propensity models with fair lending compliance, thin-file handling, and adverse action explainability
- LLM deployment design: open-weight model selection (Llama, Mistral, Qwen) for on-premises inference, fine-tuning governance, and RAG architecture for document-intensive workflows
- Prompt governance and output validation for any LLM-generated content used in regulated submissions — SAR narratives, audit reports, regulatory correspondence, clinical documentation
- Hallucination mitigation: guardrails, output grounding, and human-in-the-loop review pipelines for any generative output that carries regulatory or operational consequence
- Sensitivity analysis, threshold calibration, and performance tuning across all model types and deployment contexts
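A minimal sketch of the output-grounding idea above: before a generated narrative enters a regulated submission, verify that every monetary figure it contains appears verbatim in the underlying records. The function name and currency list are illustrative, and production validation covers far more than amounts (names, dates, account references):

```python
import re

def ungrounded_figures(narrative: str, source_docs: list[str]) -> list[str]:
    """Return monetary amounts in a generated narrative that do not
    appear verbatim in any source document. A non-empty result blocks
    submission and routes the draft to human review."""
    corpus = " ".join(source_docs)
    figures = re.findall(r"(?:KES|USD|ZAR)\s?[\d,]+(?:\.\d+)?", narrative)
    return [f for f in figures if f not in corpus]

draft = "The customer moved KES 4,500,000 in KES 900,000 tranches."
sources = ["Wire log: five transfers of KES 900,000 each."]
flags = ungrounded_figures(draft, sources)  # the aggregate figure is unsupported
```

Exact-match grounding is deliberately strict: a hallucinated total that the model computed itself is exactly the kind of "confident, well-formatted, factually wrong" output that must fail closed.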
Infrastructure Architecture & Deployment
The choice between on-premises, hybrid, and cloud is not a cost decision. It is a data sovereignty decision.
For a regulated financial institution in Africa, where your data lives determines what you can legally do with it. The Kenya Data Protection Act 2019, POPIA in South Africa, and the NDPR in Nigeria all impose specific requirements on the location and transfer of personal data. Your cloud choice is a compliance choice first.
Foundry designs and deploys infrastructure that satisfies your jurisdiction's data residency obligations without sacrificing the operational performance your compliance team needs to work at speed.
On-Premises
Maximum Sovereignty
All PII, model weights, inference engines, and transaction data remain within your physical infrastructure. No data leaves your perimeter.
Right fit: Required where central bank guidance, data protection law, or board policy prohibits third-country data transfer. Highest compliance confidence. Higher capex.
Examples: Central banks, systemically important institutions, institutions under active regulatory examination.
Hybrid
Balanced
Sensitive PII and real-time inference on-premises. Analytics, dashboards, reporting, and non-sensitive workloads in a regional cloud environment.
Right fit: Most mid-to-large African financial institutions. Compliance where it matters, cloud economics where it doesn't.
Examples: Insurance groups, commercial banks with existing on-prem data centres.
Private Cloud
Dedicated
Dedicated infrastructure within a compliant cloud provider's regional data centre. No multi-tenancy. Near-cloud economics with on-premises-level isolation.
Right fit: For institutions that want cloud deployment economics but cannot share infrastructure with other tenants. Supports AWS Cape Town, Azure South Africa, Safaricom Cloud.
Examples: Pan-African institutions with multi-country operations, insurers, payment processors.
Full Cloud (Regional)
Fast Deploy
Fastest deployment, lowest capex. Appropriate where the relevant data protection authority has approved the provider's regional data centre.
Right fit: Suitable where standard contractual clauses or adequacy decisions are in place, and the regulator has confirmed acceptability.
Examples: Fintech lenders, payment companies, institutions with no on-prem infrastructure.
Core Banking & Ecosystem Integration
The hardest part of any compliance AI deployment is not the model. It is getting clean, real-time data out of a core banking system built in 2003.
Every African institution's compliance infrastructure is only as good as the data pipeline feeding it. Fragmented customer records, batch-only CBS exports, undocumented API schemas, and legacy transaction codes that mean different things in different branches — these are the real integration problems. Foundry has solved them before.
We design integration layers that extract, normalise, and enrich transaction data from whatever core platform you are running, and deliver it into your compliance systems with the latency your monitoring obligations require.
- Temenos T24 / Temenos Transact — direct API and batch extraction
- Oracle Flexcube — FCUBS REST and legacy MDB integration
- Finacle (Infosys) — CIF and transaction data normalisation
- Mambu, i-flex, and bespoke core banking platforms
- Real-time streaming via Kafka where CBS supports event publishing
- ETL pipeline design for institutions with batch-only data extraction
- Customer identity reconciliation across fragmented data sources
- Webhook architecture for alert routing to existing case management tools
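The legacy-code problem described above can be sketched simply: because the same code can mean different things in different branches, normalisation must be keyed by branch and code together, and unmapped codes must fail loudly rather than pass through silently. The codes and branch IDs below are hypothetical:

```python
# Hypothetical mapping: the same legacy code carries different meanings
# at different branches, so the key is the (branch, code) pair.
CODE_MAP = {
    ("BR001", "TX17"): "CASH_DEPOSIT",
    ("BR002", "TX17"): "INTERNAL_TRANSFER",
    ("BR002", "TX42"): "CASH_DEPOSIT",
}

def normalise(branch: str, legacy_code: str) -> str:
    """Map a branch-specific legacy code to a canonical transaction type.
    Unmapped codes raise rather than flow into monitoring unclassified —
    an unclassified transaction is a coverage gap, not a default case."""
    try:
        return CODE_MAP[(branch, legacy_code)]
    except KeyError:
        raise ValueError(f"unmapped code {legacy_code!r} at branch {branch}")
```

The fail-loud behaviour is the design choice that matters: a silent passthrough is how a monitoring system quietly stops seeing a whole class of transactions.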
Data Residency, Sovereignty & Cross-Border Compliance
A deployment that complies with Kenya's DPA 2019 may not comply with POPIA, NDPR, or your central bank's data handling circular.
African financial institutions increasingly operate across multiple jurisdictions with diverging and sometimes contradictory data protection frameworks. A cross-border payment processed in Nairobi, cleared through Johannesburg, and reported to Kampala touches three regulatory regimes simultaneously. Most vendors design for one jurisdiction and hope the others don't notice.
Foundry designs for multi-jurisdiction compliance from architecture — not as a legal opinion added after the system is built.
- Jurisdiction mapping: a written inventory of which data assets are subject to which national framework
- Data residency enforcement: technical controls (encryption key locality, region-locked storage, network policy) that ensure PII remains in-country
- Cross-border transfer safeguards: standard contractual clauses, adequacy decisions, and localisation exemptions documented and maintained
- Regulator-ready data lineage documentation: provenance, transformation, and access logs for every sensitive data element
- FATF cross-border typology coverage: monitoring of 22 high-risk jurisdictions with country-specific transaction patterns
- Multi-jurisdiction regulatory reporting: STR/CTR/SAR formatting for FRC (Kenya), FIC (South Africa), NFIU (Nigeria), and others
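The first two bullets — jurisdiction mapping and residency enforcement — reduce, at the point of transfer, to a policy check that can be enforced in code. A sketch, with a hypothetical policy table and deny-by-default handling of unclassified data:

```python
# Hypothetical jurisdiction map: data class -> jurisdictions where it
# may lawfully be stored. In practice this table is the machine-readable
# twin of the written jurisdiction inventory.
RESIDENCY_POLICY = {
    "customer_pii": {"KE"},               # e.g. PII stays in-country
    "transaction_records": {"KE", "ZA"},  # e.g. SCCs in place for ZA analytics
    "aggregated_metrics": {"KE", "ZA", "NG"},
}

def check_transfer(data_class: str, destination: str) -> bool:
    """Return True only if storing this data class in the destination
    jurisdiction is permitted by the documented residency policy.
    Unknown data classes are denied by default."""
    return destination in RESIDENCY_POLICY.get(data_class, set())
```

The enforcement layer (region-locked buckets, key locality, network policy) sits below this, but the policy table is what makes the controls auditable against the written inventory.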
Six phases. No black box delivery at any stage.
We examine your current state: transaction volumes, existing tech stack, regulatory obligations, pending examination timelines, and the distance between where you are and where you need to be.
We design the full system: data flows, deployment topology, model architecture, integration points, governance framework, and the regulatory documentation structure. You see the blueprint before we write a line of code.
We build. Weekly reviews, staging environments, and a shared documentation workspace throughout. You have visibility at every stage and the ability to redirect before it costs time.
Before deployment, we run the system against a controlled dataset and produce a validation report formatted for regulator review. We model the examination questions before the examiner asks them.
We deploy to your chosen infrastructure, integrate with your existing systems, and run parallel operations alongside your current process until you are confident in the transition.
We do not disappear after go-live. Full operational documentation, staff capability transfer, and a retained support arrangement. Your team owns the system. We are available when the edge cases appear — and they will.
Foundry is not for every institution. The platform usually is.
Foundry engagements are intensive and require significant institutional commitment on both sides. Not every institution is at the point where that depth is warranted — and that is fine. The platform is designed to serve institutions that are.
Foundry is the right conversation if…
- You have an upcoming regulatory examination and your current compliance AI process cannot withstand an explainability question
- Your data population, risk patterns, or operational context — insurance, mobile money, informal sector, thin-file credit — don't fit the baselines built for European or US markets
- You operate across multiple jurisdictions with conflicting data residency requirements
- You are a systemically important institution with board-level accountability for AI decisions
- Your internal IT team cannot maintain a deployed ML system without vendor dependency
- You have been quoted KES 260M or equivalent by an international vendor and need a better answer
- You are building something that has not been built for your market before
The platform is probably the right answer if…
- You need standard AML transaction monitoring deployed quickly against known typologies
- Your regulatory obligations are covered by POCAMLA, FATF 40 Recommendations, and existing regional rule libraries
- You have a small compliance team that needs a tool, not a bespoke infrastructure build
- Your time-to-value requirement is weeks, not months
- You are at an early stage of your AI compliance journey and need to establish a foundation before customising
Selective Engagements
We work with a small number of institutions per cycle. Intentionally.
Infrastructure engineering for regulated institutions requires full attention. We do not outsource it, we do not run it in parallel with ten other clients, and we do not sell you a template and call it bespoke. The earlier we speak, the more we can shape the architecture before constraints are set.
or email us at foundry@novisintel.com