Regulatory requirements change fast: PDPL, GDPR, data localisation, AI ethics rules, the EU AI Act. Beyond compliance, organisations need governance frameworks that enable innovation while managing risk.
For investment firms and portfolio companies, the right governance framework is the difference between a business that attracts capital and one that loses it during due diligence.
NEED HELP BUILDING GOVERNANCE FRAMEWORKS?
Whether you're an investment firm building governance standards across your portfolio or a founder preparing for your first regulatory audit, we provide frameworks that work in the real world - not just on paper.
From AI ethics committees to compliance documentation, every structure we build is designed to satisfy regulators, withstand investor scrutiny, and prevent problems before they become headlines.
Ready to build governance that gives investors confidence? Let's talk.
Data governance has a reputation for being overly conceptual, mostly because organisations struggle to operationalise it into tangible improvements in the quality of their data.
Our CEO Yusra Ahmad talks to Sue Chadwick of Pinsent Masons about data ethics, the RED Foundation Data Ethics Playbook, and where the field is heading.
A discussion paper written on behalf of the RED Foundation on the implications of consent in the context of real estate data usage.
A step-by-step guide to implementing ethical data practices within a real estate context.
A report by the RED Foundation Data Ethics Steering Group that explores the unseen costs of AI, offering a critical perspective on the ethical challenges AI presents.
AI governance is the framework of policies, processes, and accountabilities that determine how AI systems are built, deployed, monitored, and retired within your organisation.
Most organisations think they can add governance later - after they've deployed AI and seen whether it works. This is the most expensive mistake in AI adoption.
Without governance in place before deployment, you have no way of knowing whether your AI systems are performing as intended, creating unintended harms, or exposing you to regulatory liability.
💡 Practical tip: The right time to build governance is before your first AI deployment, not after your first AI incident. Every week without governance is a week of unmanaged risk accumulating silently.
The organisations that struggle to secure board investment in governance are usually making the wrong argument. They present governance as a cost and a constraint. The board hears overhead and slowdown.
The right argument is financial. Governance investment reduces the probability and severity of AI incidents that carry regulatory, reputational, and operational costs that dwarf the cost of prevention.
For investment firms specifically, portfolio companies with mature governance frameworks command higher valuations, pass due diligence faster, and attract better institutional investors.
💡 Practical tip: Don't present governance to your board as a compliance exercise. Present it as risk-adjusted return on investment. Quantify the cost of a single AI incident in your sector and compare it to the cost of prevention. The case makes itself.
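To make that comparison concrete, here is a minimal sketch of the calculation in Python. Every figure in it is an illustrative assumption rather than a benchmark from any sector or dataset; the point is the shape of the argument, not the numbers.

```python
# Risk-adjusted case for governance investment: compare the expected annual
# loss avoided against the annual cost of prevention.
# All figures below are illustrative placeholders - substitute estimates
# from your own sector, incident data, and governance budget.

def expected_annual_loss(incident_probability: float, incident_cost: float) -> float:
    """Annualised loss expectancy: chance of a material incident in a year times its cost."""
    return incident_probability * incident_cost

p_without_governance = 0.20       # hypothetical: 20% chance of a material AI incident per year
p_with_governance = 0.05          # hypothetical: governance reduces likelihood and severity
incident_cost = 2_000_000         # hypothetical: fines, remediation, churn, legal fees
annual_governance_cost = 150_000  # hypothetical: committee time, tooling, documentation, audits

loss_avoided = (
    expected_annual_loss(p_without_governance, incident_cost)
    - expected_annual_loss(p_with_governance, incident_cost)
)

print(f"Expected annual loss avoided: {loss_avoided:,.0f}")
print(f"Annual governance cost:       {annual_governance_cost:,.0f}")
print(f"Net risk-adjusted benefit:    {loss_avoided - annual_governance_cost:,.0f}")
```

With these placeholder figures the expected loss avoided (300,000) is double the governance spend (150,000); the board conversation then becomes about the quality of the estimates, not the principle.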
AI governance needs to sit at leadership level because it involves decisions about risk appetite, resource allocation, and organisational values that only leadership can make. IT can implement governance decisions. Legal can advise on compliance implications. But neither can own the strategic accountability.
The most effective model we see combines a senior executive sponsor (typically CEO, COO, or CDO) with a cross-functional AI ethics committee that includes legal, technical, operational, and, increasingly, customer representation.
💡 Practical tip: If your AI governance sits exclusively in your IT department, that's the first thing to fix. Not because IT can't manage it technically, but because governance decisions are business decisions that require business accountability.
AI ethics has very real financial consequences - and organisations that treat it as a principles exercise find that out the hard way.
Regulatory fines for AI ethics failures can reach 4% of global annual turnover under GDPR alone. But the indirect costs are frequently larger: customer trust erosion, talent loss, investor confidence damage, and media scrutiny that follows an organisation for years.
The organisations that avoid these consequences don't just have ethics policies. They have ethics by design - meaning ethical principles are embedded into AI systems at the architecture level, not added as an afterthought once systems are already in production.
💡 Practical tip: If your AI ethics framework lives in a document rather than in your development and deployment processes, it won't protect you when something goes wrong. Ethics by design means ethics is a technical requirement, not a communications exercise.
AI is both a risk in its own right and an enabler of other risks. Treating it as only one creates dangerous blind spots.
As a risk in its own right, AI systems can fail, produce biased outputs, be manipulated, or behave in unintended ways that cause direct harm to customers, employees, or markets. These risks need to be managed as a distinct AI risk category in your enterprise risk framework.
As an enabler of other risks, AI accelerates and amplifies existing risk categories. It makes fraud faster and more sophisticated. It increases operational dependency and creates new single points of failure. It amplifies data privacy risks by processing more data, faster, at greater scale.
The organisations that manage AI risk most effectively do both - they create a dedicated AI risk category while simultaneously reviewing how AI changes the profile of every existing risk category in their framework.
💡 Practical tip: Take your existing enterprise risk register and ask one question about every entry: "How does AI change this risk?" The answers will tell you exactly where your governance gaps are and where to prioritise first.
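One lightweight way to run that review is sketched below, on the assumption that your register can be exported as simple records; the risk names, fields, and impact notes are hypothetical examples, not a prescribed schema.

```python
# Hypothetical extract of an enterprise risk register, exported as records.
risk_register = [
    {"risk": "Payment fraud", "owner": "Finance", "rating": "Medium"},
    {"risk": "Data privacy breach", "owner": "DPO", "rating": "High"},
    {"risk": "Vendor concentration", "owner": "Procurement", "rating": "Low"},
]

# For every entry, record the answer to one question: how does AI change this risk?
# (Illustrative notes - replace with your own review findings.)
ai_impact_notes = {
    "Payment fraud": "Generative tools make phishing and invoice fraud faster and more convincing.",
    "Data privacy breach": "AI pipelines process more personal data, at greater scale, via new vendors.",
    "Vendor concentration": "No material change identified.",
}

for entry in risk_register:
    entry["ai_impact"] = ai_impact_notes.get(entry["risk"], "Not yet reviewed")

# Entries with a material change, or no review at all, are the governance gaps to prioritise.
gaps = [e for e in risk_register if e["ai_impact"] != "No material change identified."]
for gap in gaps:
    print(f"{gap['risk']} ({gap['owner']}, currently {gap['rating']}): {gap['ai_impact']}")
```

The output is effectively a prioritised gap list: every entry where AI materially changes the risk, or where no one has yet asked the question.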
Yes, small organisations can build effective governance - but only if they build proportionately. The common mistake is either ignoring governance entirely (dismissing it as too much overhead) or trying to implement enterprise-scale frameworks (too complex for a small team to sustain).
The right approach is a minimum viable governance framework - the smallest set of policies, processes, and accountabilities that meaningfully reduces your risk exposure while remaining practical for a small team to actually follow.
This grows with your organisation. What works for a 10-person team won't work at 100 people - but building nothing because you're not yet at 100 people is the most dangerous option of all.
💡 Practical tip: Start with three things. A clear policy on what AI you use and how. A simple process for reviewing AI outputs before they affect customers or decisions. And one person accountable for keeping both current. That's a governance framework. Build from there.