Let’s Talk About the Questions That Matter Most

When organizations think about evaluation, the first thing they often think about is data collection. But before you build a survey, schedule interviews, or create a dashboard, there’s a foundational step that deserves your attention: deciding what questions your evaluation is meant to answer.

These aren’t your survey questions — they’re the deeper, guiding questions that shape the direction and purpose of your entire evaluation. These are your evaluation questions, and they are the backbone of a strong, intentional learning strategy.

A Grocery List (and Evaluation Questions)

If you’ve worked with me, you’ve probably heard my grocery shopping metaphor — but it’s worth repeating. When you sit down to make your grocery list for the week, you rarely begin with “fish tacos” or “salsa.” You start with the big questions. Those broad categories shape what ends up on your list. Then come the specifics.

That’s how evaluation works, too. Your evaluation questions are the big questions (e.g., “How are our programs supporting long-term stability for youth?”), while your subquestions are the more detailed prompts (e.g., “How do youth describe the impact of our financial literacy workshops?”).

Start big. Stay intentional. Then get into the details.

How to Develop Strong Evaluation Questions (Collaboratively)

Here’s a process I often lead teams through — and one you can facilitate yourself:

Step 1: Set the Table

Decide whether this will be a solo reflection or a team conversation. If it’s a group, bring together people who hold different perspectives — program staff, leadership, fundraisers, even community partners if you’re ready. Set aside 45–90 minutes, depending on your group size and how much discussion you’d like.

Step 2: Surface the Questions

Start by giving each participant space to answer this prompt: “What do you want to know about our work? What do you need to know?” You can phrase this slightly differently depending on the audience — for example, “What do our funders always ask?” or “What do our participants wish we better understood?”

Have people jot their ideas down as individual sticky notes (or in a shared Google Doc or Jamboard). One question per note. Encourage them not to overthink or translate into “evaluation speak.” The goal is breadth, not polish.

Step 3: Group and Cluster

Put all the sticky notes up on a wall (physical or virtual). Then, as a group, begin reading them aloud and grouping similar questions together. You might end up with several natural clusters.
You’ll likely also have outliers — questions that don’t fit a clear group. Don’t discard them. Set them aside. We’ll come back to them.

Step 4: Name the Clusters

Next, step back and ask: What is this group of questions trying to understand? Write one overarching question that represents each cluster. This becomes a draft evaluation question — one that captures the essence of several smaller questions. These are the “what’s for dinner?” questions — not the salsa brand.

Step 5: Connect to a Framework

Now, take a look at your draft questions and ask: What kind of question is this? Most evaluation questions fall into one (or more) of six common categories.
You might have overlap — and that’s okay. Some strong evaluation questions blend more than one type.

Step 6: Prioritize What Matters Most

You probably have more questions than you can realistically answer. That’s expected. Now it’s time to prioritize. Ask yourselves which questions matter most right now, and which you can realistically answer.
Narrow it down to 2–4 core evaluation questions. These are your compass. Every tool you build — a survey, an interview guide, a dashboard — should help answer one or more of these. The smaller, detailed questions? They become your subquestions — guideposts for how you design your tools and interpret your findings.

A Moment of Reflection

Think about the last time you launched a survey or wrote a report. Did you know what you were trying to learn from the start? Or were you hoping the data would speak for itself?

When we aren’t grounded in clear evaluation questions, we risk collecting a lot and learning very little. But when we pause to ask the right questions — and ask them together — we move closer to meaningful, useful, human-centered evaluation.

Want help leading this process? 📩 Reply to this email or reach out to learn more!

With curiosity and care,

Exciting news at Bridgepoint: We gained a team member!

Meet Chayney Beard – Operations & Admin Sidekick

Chayney is a virtual assistant with a passion for bringing order to chaos and turning clunky processes into smooth-running systems. With over a decade of experience in small business environments, Chayney understands firsthand the value of time, clarity, and well-oiled workflows. Her specialties include process optimization, automation, project management, and bookkeeping — services she approaches with precision and care.

Chayney is to Bridgepoint what Bridgepoint is to nonprofits: she turns chaos into clarity, creates solutions that are both approachable and advanced, and does it all collaboratively.

But beyond the spreadsheets and systems, Chayney brings heart to everything she does. Guided by values like loyalty, patience, and fairness, she works with genuine kindness and a strong sense of dedication. She believes in the power of thoughtful support and quiet strength to transform how teams function and flourish. When she’s not optimizing workflows, you’ll find her gaming, geeking out over planners and stickers, or preserving memories through her love of journaling and scrapbooking.

In short: Chayney is the behind-the-scenes powerhouse who helps teams breathe easier and operate better — one task at a time.