The minds behind explainable AI

We're a specialized team dedicated to making machine learning transparent and understandable for financial institutions across the UK.

Meet Our Specialists

Our team combines deep technical expertise with real-world banking experience. Each member brings a unique perspective to the challenge of making AI systems more interpretable.

Cordelia Blackthorne, Lead ML Architect

Cordelia spent eight years at Barclays developing risk assessment models before joining us in early 2024. She has this uncanny ability to spot patterns in data that others miss — probably why her interpretability frameworks have become industry benchmarks. When she's not debugging neural networks, you'll find her restoring vintage motorcycles in her Brighton garage.

Specialties: Neural Networks, Risk Modeling, SHAP Methods, Regulatory Compliance

Thaddeus Grimsby, Financial AI Specialist

Thaddeus bridges the gap between complex algorithms and practical banking needs. His background in quantitative finance at HSBC means he understands both the technical requirements and regulatory pressures that banks face. He's particularly passionate about making AI decisions auditable — something that keeps compliance officers much happier.

Specialties: Credit Scoring, Fraud Detection, Model Validation, Explainable AI

Our Mission

We believe that artificial intelligence in banking should be transparent, trustworthy, and truly useful. Too many financial institutions struggle with black-box models that make decisions they can't explain to regulators or customers.

  • Transparency in every algorithm we develop
  • Practical solutions that work in real banking environments
  • Regulatory compliance built into the foundation
  • Continuous learning and adaptation to new challenges

How We Work

Our methodology focuses on building interpretable models from the ground up, rather than trying to explain complex black boxes after the fact.

Model Transparency

We design machine learning systems where every decision can be traced and understood. This isn't just about adding explanation layers — it's about choosing architectures that are inherently interpretable while maintaining predictive power.
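
As a loose illustration rather than our production code, the sketch below fits a plain logistic regression on synthetic applicant data: with a linear model, every score traces back to a handful of coefficients instead of a post-hoc explanation layer. The feature names, the synthetic data, and the label rule are assumptions made up for the example.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for applicant data; real features would come from
    # the institution's own data and governance process.
    rng = np.random.default_rng(0)
    X = pd.DataFrame({
        "income": rng.normal(40_000, 12_000, 500),
        "debt_ratio": rng.uniform(0.0, 1.0, 500),
        "missed_payments": rng.poisson(0.5, 500),
    })
    y = (X["debt_ratio"] + 0.3 * X["missed_payments"] > 0.9).astype(int)

    # A linear model keeps every decision traceable to a few coefficients.
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    # Each coefficient is a direct, auditable statement of how a feature
    # moves the predicted risk (on the standardised scale).
    coefs = model.named_steps["logisticregression"].coef_[0]
    for name, weight in zip(X.columns, coefs):
        print(f"{name:>16}: {weight:+.3f}")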

Regulatory Integration

Working with banking regulations isn't an afterthought. We embed compliance requirements directly into our development process, ensuring that audit trails and documentation standards are met from day one.
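
The snippet below is a minimal sketch of that idea, not a regulatory schema: each automated decision is appended to a JSON-lines audit log with its inputs, score, model version and reason codes. All field names and values are illustrative assumptions.

    import json
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        model_version: str
        inputs: dict
        score: float
        decision: str
        reason_codes: list
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
        # Append the decision as one JSON line, giving auditors a simple,
        # chronological trail of every automated outcome.
        with open(path, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(asdict(record)) + "\n")

    # Example usage with hypothetical values.
    log_decision(DecisionRecord(
        model_version="credit-risk-1.4.2",
        inputs={"debt_ratio": 0.72, "missed_payments": 1},
        score=0.81,
        decision="refer_to_underwriter",
        reason_codes=["HIGH_DEBT_RATIO"],
    ))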

Practical Implementation

Our solutions integrate smoothly with existing banking infrastructure. We understand that financial institutions can't replace entire systems overnight, so we build bridges between legacy processes and modern AI capabilities.
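
As a rough sketch of what such a bridge can look like, the adapter below maps a hypothetical fixed-width legacy record onto the feature dictionary a modern scoring service expects, so neither side has to change its own format. The column positions, field names and sample record are invented for the example.

    from typing import Dict, Iterable, List

    import pandas as pd

    def parse_legacy_record(line: str) -> Dict[str, float]:
        # Fixed-width column positions are illustrative assumptions.
        return {
            "income": float(line[0:8]),
            "debt_ratio": float(line[8:13]),
            "missed_payments": float(line[13:15]),
        }

    def score_batch(lines: Iterable[str], model) -> List[float]:
        # Score legacy records without modifying the upstream system;
        # `model` is any estimator exposing scikit-learn's predict_proba.
        features = pd.DataFrame([parse_legacy_record(line) for line in lines])
        return model.predict_proba(features)[:, 1].tolist()

    # Example with a hypothetical legacy line: income, debt ratio, missed payments.
    sample = "00042000 0.72 1"
    print(parse_legacy_record(sample))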

Continuous Monitoring

Model performance and interpretability require ongoing attention. We establish monitoring frameworks that track not just accuracy metrics, but also explanation quality and regulatory alignment over time.
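
The sketch below puts rough numbers on both ideas, assuming per-decision feature attributions (for example SHAP values) are already being stored: plain accuracy against eventual outcomes, plus a simple "explanation drift" signal comparing this period's mean absolute attributions with those recorded at validation. The data and threshold-free printout are made up for illustration.

    import numpy as np

    def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        # Share of automated decisions that matched the eventual outcome.
        return float((y_true == y_pred).mean())

    def explanation_drift(baseline_attr: np.ndarray, current_attr: np.ndarray) -> float:
        # L1 distance between mean absolute attributions per feature.
        # A rising value suggests the model now leans on different features
        # than it did at validation, which is worth a human review.
        base = np.abs(baseline_attr).mean(axis=0)
        curr = np.abs(current_attr).mean(axis=0)
        return float(np.abs(base - curr).sum())

    # Hypothetical monthly check with made-up data.
    rng = np.random.default_rng(1)
    outcomes = rng.integers(0, 2, 800)
    decisions = rng.integers(0, 2, 800)
    baseline = rng.normal(size=(1_000, 3))      # attributions stored at sign-off
    current = rng.normal(size=(800, 3)) * 1.2   # attributions from this month
    print("accuracy:", round(accuracy(outcomes, decisions), 3))
    print("explanation drift:", round(explanation_drift(baseline, current), 3))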