⚠️ DRAFT - WORK IN PROGRESS
This site is currently under development and being shared for review purposes only. Content is subject to change.
Welcome to Centering the Public Interest in Integrated Administrative Data Analytics. Integrated Administrative Data Systems (IDS) are among our most powerful tools to guide decision making for social good. By linking records across health, housing, education, and social services, we can begin to understand the complex challenges that accompany poverty and marginalization.
But data is not a neutral mirror of reality; it is a product of human systems. Without a framework that accounts for the insights of those represented in the data, even the most sophisticated analytics can inadvertently perpetuate bias and mask systemic inequality.
This curriculum, supported by the Public Interest Technology University Network (PIT-UN), is designed to bridge the gap between administrative data and lived experience, and between those working on data analytics and those working on the ground to drive positive social change.
Through the five modules below, you will learn to apply the FAIR2 framework, moving beyond technical data standards to Frame data with community knowledge, Articulate this knowledge as assumptions in a causal map, Identify hidden biases, and Report back to the communities represented in the data.
Join us in enhancing the power of linked administrative data with community knowledge to serve the public interest!
An introduction to the Frame-Articulate-Identify-Report framework. Learn how to integrate community knowledge and structural context into the foundation of your data science projects.
This module introduces FAIR2 Data Chats as a method for eliciting the experiential knowledge needed to interpret administrative records and guide data analytics. It also explores how differences in race and gender data across and within administrative systems can be addressed to reduce discrimination bias.
Not everything that matters is measured. Learn to identify Label Bias, where administrative proxies (like shelter use) are used to represent complex human outcomes (like homelessness).
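To make this concrete, here is a minimal simulation sketch (in Python, with entirely hypothetical groups, rates, and variable names, not drawn from any real IDS) of how a proxy label such as a shelter record can undercount true homelessness differently across groups, distorting any comparison built on the proxy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical groups with the SAME true rate of homelessness.
group = rng.integers(0, 2, n)
true_homeless = rng.binomial(1, 0.10, n)

# The administrative proxy only records people who use a shelter,
# and (hypothetically) group 1 is less likely to appear in shelter records.
p_shelter_if_homeless = np.where(group == 1, 0.4, 0.8)
shelter_record = true_homeless * rng.binomial(1, p_shelter_if_homeless)

for g in (0, 1):
    m = group == g
    print(f"group {g}: true homelessness {true_homeless[m].mean():.3f}, "
          f"shelter-record proxy {shelter_record[m].mean():.3f}")
```

Even though the true rates are identical by construction, the proxy label makes one group look roughly half as affected as the other.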
Here we see how administrative data is prone to what is called collider bias, and how ignoring the process by which people come to be represented in the data can create statistical illusions that mask discrimination.
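As a rough illustration, the sketch below (again in Python, with hypothetical variables and parameters) simulates a population in which unmet need is genuinely higher for a marginalized group, but inclusion in the administrative records depends on both group membership and need. Conditioning on being recorded shrinks the observed gap:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical set-up: unmet need is genuinely higher for the marginalized group.
group = rng.binomial(1, 0.3, n)
need = 0.8 * group + rng.normal(size=n)

# Being recorded in the administrative system is the collider:
# both group membership and level of need raise the chance of appearing in the data.
p_recorded = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * group + 1.5 * need)))
recorded = rng.random(n) < p_recorded

gap_population = need[group == 1].mean() - need[group == 0].mean()
gap_in_records = (need[recorded & (group == 1)].mean()
                  - need[recorded & (group == 0)].mean())

print(f"group gap in the full population: {gap_population:.2f}")  # close to 0.8
print(f"group gap among recorded cases:   {gap_in_records:.2f}")  # noticeably smaller
```

The disparity has not gone away; it is hidden by the selection process that determines who appears in the data.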