IRIS+ Metric Selection
A measurement-selection pattern that chooses a small, defensible set of GIIN IRIS+ metrics tied to the investment’s theory of change, rather than reporting everything the catalog can count.
Also known as: IRIS metric selection, IRIS+ metrics, Core Metric Set selection.
Understand This First
- Theory of Change — the causal pathway the metrics are meant to test.
- The Five Dimensions of Impact — the impact-claim frame each metric should support.
- Operating Principles for Impact Management — the management-system frame in which selected metrics are monitored and reported.
Context
IRIS+ is the Global Impact Investing Network’s public system for impact investors to measure, manage, and compare impact. It includes a thematic taxonomy, Core Metric Sets, the IRIS catalog of metrics, SDG mappings, and alignment to frameworks including the Five Dimensions of Impact. For a family office, the practical question is not whether IRIS+ exists. The question is which few metrics belong in the investment memo, the manager reporting package, and the family council dashboard.
Metric selection sits after Theory of Change and before reporting. The office has already named the outcome pathway: who is supposed to experience what change, through which intervention, with which investor contribution, and under which assumptions. IRIS+ then gives the team a common metric vocabulary so the office doesn’t invent a private language for every investment.
The pattern applies to fund allocations, direct deals, PRIs, MRIs, recoverable grants, and DAF-funded strategies. It is especially useful when the family office has to compare managers across themes. A climate-credit manager, an affordable-housing PRI, and a workforce lender will never share one perfect outcome metric. They can still report through a disciplined selection process that makes each claim auditable.
Problem
Impact reporting fails in two opposite ways. The first failure is metric sprawl: the office asks for every metric the manager can provide, then receives a dashboard no committee member can govern. The second failure is metric invention: each manager reports its own preferred indicators, often with attractive names and weak definitions, leaving the family unable to compare claims across the portfolio.
Neither failure is trivial. Too many metrics hide the few that matter. Custom-only metrics make manager narratives impossible to compare. Activity counts can crowd out outcome measures. A polished dashboard can tell the family how many people were reached, how many dollars were deployed, or how many assets were financed while saying little about whether the intended change occurred.
The deeper problem is sequence. If the office starts with the IRIS+ catalog, the investment thesis bends toward what is easy to count. If it starts with a theory of change and the Five Dimensions, the catalog becomes a tool in service of judgment rather than a substitute for it.
Forces
- Standardization versus fit. Portfolio comparison needs shared definitions, while each investment still needs metrics that fit its outcome pathway.
- Completeness versus usability. A family council can govern five well-chosen indicators; it can’t govern a 90-line dashboard.
- Output versus outcome. The metrics a manager can report reliably are often activity counts, not the outcome the family actually cares about.
- Data ambition versus data infrastructure. The best metric is useless if the investee or intermediary can’t collect it without corrupting operations.
- Comparability versus contribution. IRIS+ helps compare results, but the office still has to say what its own capital changed.
Solution
Select IRIS+ metrics through a four-step filter: outcome first, dimension second, metric third, data test last.
Start with the theory of change. Pull out the two or three outcomes the investment is actually meant to affect, plus the assumptions most likely to break. For an affordable-housing PRI, that might be rent burden, housing stability, and long-term affordability covenant duration. For a small-business lender, it might be loan access, enterprise survival, household income, and job quality. If an outcome is not in the theory of change, don’t add a metric for it merely because the catalog offers one.
Then map each outcome to the Five Dimensions. *What* names the outcome. *Who* names the affected people, communities, or environmental systems. *How Much* separates scale, depth, and duration. *Contribution* asks whether the enterprise and the investor changed anything relative to the counterfactual. *Risk* names what could make the claimed impact weaker than expected. A metric that doesn’t support one of those dimensions is usually dashboard decoration.
Only then use IRIS+. Search the relevant impact category, SDG, theme, and Core Metric Set. Prefer official IRIS metrics where they fit because they carry definitions, calculation guidance, and comparability value. Use custom metrics only when the theory of change names an outcome IRIS+ doesn’t cover tightly enough, and label those custom metrics as custom rather than smuggling them into an IRIS+ table.
Finally, run the data test. For each proposed metric, ask who collects it, from which system, at what cadence, at what cost, and with what error risk. The office should be able to say whether the data comes from investee operating systems, audited records, customer surveys, third-party datasets, or manual spreadsheets. If the only way to report a metric is a one-off analyst exercise every December, it probably doesn’t belong in the core set.
An IRIS+ metric can be well defined and still be the wrong metric for the claim. Standardization improves comparability; it doesn’t prove additionality, beneficiary experience, data quality, or investor contribution.
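The four-step filter can also be written down as a screening routine so the committee records why each proposed metric was kept or rejected. The sketch below is illustrative only: the record fields, the outcome strings, and the rejection messages are assumptions made for this example, not part of IRIS+ or any GIIN tooling, and a real IRIS metric ID would have to be looked up in the catalog (the placeholder here is simply left empty).

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only. Field names, outcome strings, and messages are
# hypothetical; real IRIS metric IDs must be looked up in the IRIS+ catalog.

FIVE_DIMENSIONS = {"What", "Who", "How Much", "Contribution", "Risk"}

@dataclass
class CandidateMetric:
    name: str
    linked_outcome: Optional[str]   # which theory-of-change outcome it tests
    dimension: Optional[str]        # which of the Five Dimensions it evidences
    iris_id: Optional[str]          # IRIS catalog ID, or None for a custom metric
    data_source: Optional[str]      # operating system, survey, audited records, ...
    reports_per_year: int           # collection cadence
    manual_only: bool               # True if it needs a one-off analyst exercise

def screen(metric: CandidateMetric, toc_outcomes: set[str]) -> tuple[bool, str]:
    """Apply the four-step filter: outcome, dimension, metric, data test."""
    # 1. Outcome first: no theory-of-change link, no metric.
    if metric.linked_outcome not in toc_outcomes:
        return False, "not tied to a theory-of-change outcome"
    # 2. Dimension second: it must evidence one of the Five Dimensions.
    if metric.dimension not in FIVE_DIMENSIONS:
        return False, "does not support any of the Five Dimensions"
    # 3. Metric third: prefer an IRIS metric; label custom metrics as custom.
    label = metric.iris_id or "CUSTOM"
    # 4. Data test last: a named, repeatable source at a sustainable cadence.
    if metric.data_source is None or metric.manual_only:
        return False, "fails the data test (no repeatable source)"
    return True, f"keep ({label}, {metric.reports_per_year}x/year from {metric.data_source})"

# Usage: screen each metric a manager proposes against the named outcomes.
toc = {"affordable childcare capacity", "parent employment retention"}
proposed = CandidateMetric(
    name="Childcare seats created or preserved, by county",
    linked_outcome="affordable childcare capacity",
    dimension="How Much",
    iris_id=None,                  # placeholder: look up the real ID in IRIS+
    data_source="borrower operating reports",
    reports_per_year=2,
    manual_only=False,
)
print(screen(proposed, toc))
```

The order of the checks is the point: a proposed metric never reaches the catalog lookup or the data test unless it is already tied to a named outcome and one of the Five Dimensions.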
How It Plays Out
Consider a $1.2B single-family office with a $160M foundation and a 15% impact sleeve. The office is reviewing a $12M PRI into a community-development lender that finances childcare centers in three counties. The first manager draft offers 37 metrics, including dollars lent, borrowers served, full-time employees, women-owned borrowers, square feet financed, children reached, and jobs supported. The dashboard looks impressive. It isn’t yet governable.
The impact lead rewrites the metric plan from the theory of change. The intended outcome is not “childcare financed” in the abstract. It is more affordable childcare capacity in counties where low-income workers are leaving jobs because care is unavailable within a workable commute. The office’s contribution claim is also specific: a seven-year below-market PRI lets the lender offer ten-year loans to center operators whose bank alternatives are too short or too expensive.
The committee reduces the reporting package to a core set:
| Claim component | Metric choice | Source | Why it stays |
|---|---|---|---|
| Capital deployed | IRIS metric for client organizations financed, plus total loan amount | Lender loan system | Confirms the activity happened and matches the PRI covenant. |
| Childcare capacity | Number of childcare seats created or preserved, segmented by county | Borrower operating reports | Tests whether financing changed actual care capacity in the target geography. |
| Affordability | Share of seats serving households below the office’s income threshold | Borrower enrollment data | Tests the Who dimension; total seats alone would overstate the claim. |
| Worker outcome | Parent or caregiver employment retention six months after enrollment | Lean Data survey plus employer/self-report | Tests the outcome the family cares about, not only the output. |
| Investor contribution | Comparison of PRI loan terms to available senior-credit terms | Deal file and declined term sheets | Tests whether the office’s capital changed tenor or price. |
| Risk | Closure rate, staff vacancy rate, and subsidy-policy exposure | Borrower and county data | Flags the operating risks most likely to weaken the outcome. |
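Once agreed, the core set is easier to govern and revise if it lives as a small versioned record rather than only as a slide. The sketch below is a minimal, assumed representation: the field names, cadence values, and version label are illustrative, and the iris_id placeholders would be filled from the catalog wherever an official IRIS metric is used.

```python
# Minimal sketch of the agreed core set as a versioned, machine-readable record.
# Metric names and sources mirror the table above; "iris_id" values are left as
# placeholders because the real IDs must be looked up in the IRIS+ catalog.
CORE_METRIC_SET = {
    "investment": "Childcare lender PRI",
    "version": "2026-01",
    "metrics": [
        {"claim": "Capital deployed", "name": "Client organizations financed; total loan amount",
         "iris_id": None, "source": "lender loan system", "cadence": "quarterly"},
        {"claim": "Childcare capacity", "name": "Childcare seats created or preserved, by county",
         "iris_id": None, "source": "borrower operating reports", "cadence": "semiannual"},
        {"claim": "Affordability", "name": "Share of seats below household income threshold",
         "iris_id": None, "source": "borrower enrollment data", "cadence": "semiannual"},
        {"claim": "Worker outcome", "name": "Parent employment retention at six months",
         "iris_id": "CUSTOM", "source": "Lean Data-style survey", "cadence": "semiannual"},
        {"claim": "Investor contribution", "name": "PRI terms vs. available senior credit",
         "iris_id": "CUSTOM", "source": "deal file", "cadence": "annual"},
        {"claim": "Risk", "name": "Closure rate; staff vacancy; subsidy-policy exposure",
         "iris_id": None, "source": "borrower and county data", "cadence": "semiannual"},
    ],
}
```

Holding the set in one record makes later revision, when the strategy learns or GIIN updates the catalog, an explicit versioned change rather than quiet dashboard drift.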
The office also rejects several proposed metrics. “Children reached” is too broad unless it is tied to seat duration and affordability. “Jobs supported” may be useful for the lender’s general report, but it is not central to this investment’s theory of change. Square feet financed is an easy number, not a decision-useful outcome. The committee keeps those items out of the family dashboard even if the manager tracks them internally.
The data test changes the plan. Parent employment retention is the strongest outcome metric, but the lender doesn’t collect it. Rather than drop the claim or pretend the data exists, the office funds a $150,000 technical-assistance grant for a lightweight survey run twice a year. The survey is not branded as IRIS+. It is a custom evidence layer attached to an IRIS+-anchored reporting package.
Twelve months later, the report is shorter than the original draft and much more useful. The office financed nine centers, preserved or created 640 seats, and reached the income-threshold mix in two of three counties. Parent employment retention improved in the counties where subsidy-processing delays were low and lagged where centers couldn’t staff classrooms. The family council can see the actual management question: keep the PRI terms, but redirect technical assistance toward staffing and subsidy navigation before expanding into the third county.
The failure case is the office that reports all 37 metrics and calls the result rigor. No one on the committee knows which numbers matter, no one can tell whether the family’s concessionary capital changed anything, and no one learns what to revise when the outcome disappoints. More metrics produced less governance.
Consequences
The benefit is focus. IRIS+ Metric Selection turns a broad impact intention into a reporting package the family office can govern. The investment committee gets comparable definitions. The family council gets a short dashboard tied to the claims it approved. The manager gets fewer but clearer reporting obligations.
The pattern also improves diligence. A manager that can’t map its proposed metrics to a theory of change is probably reporting what it can count rather than what it needs to know. A manager that refuses standardized definitions without a good reason is asking the family to accept a private vocabulary. A manager that uses IRIS+ well but can’t explain data sources, cadence, and error risk still has work to do.
The liabilities are practical. IRIS+ does not cover every local outcome tightly. Some of the most important evidence, especially beneficiary experience, may require surveys, interviews, administrative data, or custom operating metrics. The office also has to keep the selected set current as GIIN updates the catalog and as the strategy learns. A metric set chosen once and never revisited becomes stale governance.
The mature posture is selective, not maximalist. Use IRIS+ where it gives real comparability. Add custom measures where the theory of change requires them. Keep the core dashboard small enough that a committee can argue about the numbers rather than admire the formatting.
Related Patterns
| Relationship | Pattern | Note |
|---|---|---|
| Complements | Lean Data | Lean Data often supplies the field method for collecting beneficiary evidence when IRIS+ needs direct user feedback. |
| Complements | Operating Principles for Impact Management | OPIM supplies the management-system discipline in which selected metrics are assessed, monitored, reported, and verified. |
| Complements | The Five Dimensions of Impact | The Five Dimensions tell the office what kind of impact claim each metric is meant to evidence. |
| Depends on | Theory of Change | IRIS+ metrics should be chosen after the office has named the causal pathway and assumptions the metrics are supposed to test. |
| Protects against | Impact Washing | A small, justified metric set makes weak activity-count claims harder to pass off as evidence of impact. |
Sources
- Global Impact Investing Network, IRIS+ About, current access 2026 — the official description of IRIS+ as a public system with thematic taxonomy, Core Metric Sets, catalog metrics, SDG mapping, Five Dimensions alignment, and framework crosswalks.
- Global Impact Investing Network, IRIS Catalog of Metrics, current access 2026 — the public catalog of IRIS metrics, filters, categories, SDG mappings, dimensions, and versioned metric definitions.
- Kelly McCarthy, Leticia Emme, and Lissa Glasgo, Global Impact Investing Network, IRIS+ Core Metrics Sets: Fundamentals, 2019 — the GIIN guidance on Core Metric Set purpose, key questions, short metric lists, calculation instructions, and decision-use.
- Global Impact Investing Network, The State of Impact Measurement and Management Practice: Second Edition, 2023 — field-level evidence on IMM practice, including the role of IRIS within the IRIS+ system.
This entry describes a structural pattern and is not legal, tax, or investment advice. Consult qualified counsel and tax advisors licensed in your jurisdiction before adopting any structure described here.