Inside the Whitepaper
Across 15 U.S. institutions, IR teams of 2 to 5 people are managing anywhere from 5 to 50+ surveys a year while response rates that once sat around 35% have quietly eroded. The problem isn't effort. It's that the ecosystem is still optimised for collection, not for what comes after.
This whitepaper draws on in-depth interviews with 15 IR and assessment professionals to document what "after" actually looks like: the analysis that consumes weeks, the reports that don't drive decisions, and the AI tools that help with the wrong things.
- The hidden costs of manual data analysis workflows across different team sizes
- Why response rates have declined structurally and why technical fixes don't reach the root cause
- How survey fatigue becomes an institution's own coordination failure
- Where analyst time actually goes and why the analysis phase is the real bottleneck
- Why AI is being used at the wrong layer of the workflow
Who This Research Is For
Three audiences will find something different in this whitepaper — and something they've been waiting for someone to say out loud.
IR and assessment practitioners seeking validation, vocabulary, and frameworks for the challenges they navigate on a daily basis — survey overload, analysis bottlenecks, and reports that don't drive decisions.
Vice provosts, chief data officers, and CIOs who commission surveys and want to understand the full picture of what makes survey programmes succeed — and why so many don't.
Teams building survey tools for higher education, who want a ground-level view of where current tools fail, what practitioners actually need from the next generation of survey infrastructure, and where product investment will have the most impact.
Six patterns across every institution
Every institution uses different tools, serves different populations, runs at a different scale. Their frustrations are identical. That uniformity is the finding.
Participation has collapsed across all institution types. Every strategy — incentives, personalised invitations, QR codes — has been tried. None produced lasting improvement. It's a trust problem, not a delivery problem.
Multiple departments send surveys in the same week. Students receive overlapping questions from different offices. No shared calendar, no deduplication layer, no cross-department authority. The problem is structural.
Open-text cleaning, longitudinal data stacking, and report generation consume the majority of analyst time. Current platforms optimise for survey construction. The post-collection phase is largely unautomated.
Data tied to personal accounts is a governance choice. Manual longitudinal stacking is a tool gap. Every time a respondent drops off mid-way, that data disappears. None of this is inevitable.
Every participant uses AI for question drafting and report writing. None have AI embedded in their survey tool. Every AI interaction requires exporting data and switching context. It adds friction rather than reducing it.
Surveys are run, reports are filed, and institutional change rarely results. This erodes respondent trust and suppresses future participation. Closing this loop requires leadership commitment — not better software.
How the Research Was Conducted
Fifteen semi-structured, sixty-minute conversations with IR and assessment professionals across five institution types — liberal arts colleges, community colleges, mid-size regional universities, large research universities, and a higher education consultancy.
Transcripts were reviewed and coded thematically. Findings are organised into universal patterns (present in all or nearly all interviews), emerging signals, and distinctive observations — highly specific findings with structural significance. All participants and institutions are fully anonymised.
Voices from the Field
"For every hour we spend designing a survey, we spend ten on cleaning, coding, and getting it into a format anyone can read."
"The open-text cleaning is the single biggest pain point. Thousands of employer name spellings. All manual. Every single cycle."
"Low response rates are structural now. Most of us have moved from trying to solve it to managing expectations around it."
Five Recommendations
These are not speculative futures. They are informed by the people who live inside survey workflows every day.
Build analysis, cleaning, and reporting directly into the survey workflow. The design phase is not where time is lost — every hour in design costs ten in post-collection.
Assist with question logic, phrasing, and open-text analysis in-platform. Every AI interaction that requires an export is friction, not a feature.
Coordinate timing across departments to cut fatigue and redundant outreach. Survey overload is an emergent property of decentralised authority — it requires a coordination layer.
Meet respondents inside LMS and SSO environments to lift response rates. Surveys built for desktop and distributed by email are structurally misaligned with how students live.
Shared, validated question banks keep longitudinal institutional data intact. Data tied to personal accounts is a governance choice — one that costs institutions their history.
Help us build the future of institutional surveys
We are speaking with IR professionals to understand how data collection really works in higher ed today — the tools, the process, and the gaps. It's a 30-minute remote conversation, and you will receive a $50 gift card as a thank-you.