The Secret to Getting a Full Picture: Mixed Methods Made Easy


Business or Personal Updates

This month, we’re discussing my favorite topic: how to bring your data collection methods together to tell a nuanced, holistic story about your work and why you do it. I’m writing this one while sitting in blinding sunshine at my favorite coworking space, appreciating the transition to extended daylight and singing birds.

I’m also staying tuned in as the community rapid response gets activated for the fifth time today. For those of you continuing to balance showing up to work, being a good neighbor, and navigating daily life: I see you. Don’t forget to rest, feel joy, and drink water.

At Bridgepoint Evaluation, we have two new projects in contracting that we’re excited to tell you more about! We have two coaching projects onboarding in the next 45 days, and I love these projects: I get to take everything I love about being a professor, customize it to one organizational context and an individual’s learning goals, and be there alongside them as they navigate practical application and adjustments in real time. We also have an Impact Report Abstract project in motion; we just wrapped up our staff-wide workshop, where everyone got to help us build outcome chains and see the widespread impact of their work.

And an exciting new announcement to share: I had the opportunity to be a guest on Tony Martignetti Nonprofit Radio! Give it a listen here.


Now, today’s topic!

Welcome back to our Data Interpretation series.

So far, we’ve explored how to analyze quantitative data with care, and how to make sense of qualitative data without getting lost in transcripts or word clouds. Today, we zoom out. Because the most useful insights rarely come from a single data source. They come from how your data sources talk to each other.

I recently shared a metaphor that’s been sticking with me. Your different data sources are like the parts of a song. Your operational and organizational data act like the verses — they describe what happened, with whom, and how often. Surveys often function as the chorus, repeating the core message in a clear, accessible way. And qualitative data — interviews, focus groups, open-ended responses — serve as the bridge, adding depth, context, and emotional truth.

When those parts are arranged intentionally, the message lands. People recognize your work in their own experience. When they’re stitched together without thought, the story misses its mark.

Mixed-methods work is about being your own production team — weaving sources together so they tell the story of what you did, with what, to what end, and what happened as a result.

Most organizations already have the raw materials to do this well.

Operational and organizational data, like attendance and participation rates, give you a clear picture of reach and implementation. They’re excellent at showing patterns over time and highlighting where engagement drops off or spikes. What they don’t explain is why those patterns exist.

Survey data often fills part of that gap. Pre- and post-surveys help you monitor change, capture perceived impact, and identify consistent themes across participants. They reinforce your core message, but they tend to smooth out nuance and contradiction.

That’s where interviews and focus groups come in. Qualitative data helps explain variation, surface unintended outcomes, and name barriers or successes that numbers alone can’t capture. Internal documents — staff meeting notes, board presentations, reflection memos — add another layer of context, grounding findings in operational reality and decision-making moments.

Each source tells a partial truth. Mixed-methods analysis is about letting those truths inform one another.

The process doesn’t start by combining everything at once. It starts by looking at each data source on its own and asking a few simple questions: What stands out? What surprised me? What feels most important for decision-making or sharing? Where do I still have questions?

From there, you begin to look across sources. If operational data shows a drop in participation, do surveys point to scheduling or access challenges? Do interviews reveal transportation issues or competing responsibilities? Do staff notes reference recent program changes that help explain the pattern?

This is where insight deepens.

For example, imagine participation data shows a sharp drop after the third session. Survey responses suggest the schedule no longer works for many participants. Interviews reveal caregiving conflicts and transportation barriers. Staff meeting notes point to a recent staffing shift that changed session timing. Together, the story isn’t about lack of commitment — it’s about misalignment between program design and participant reality, and it points clearly toward actionable adjustments.
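For readers who keep this kind of data in spreadsheets or scripts, the cross-referencing step above can be sketched in a few lines of code. Everything here is hypothetical — the attendance numbers, the coded survey reasons, and the variable names are invented for illustration, not drawn from any real program:

```python
# A minimal sketch (hypothetical data) of lining up per-session
# attendance with coded survey responses: locate where participation
# drops, then see which reasons cluster around that point.
from collections import Counter

# Operational data: attendance count per session (hypothetical)
attendance = {1: 42, 2: 40, 3: 39, 4: 21, 5: 19}

# Survey data: coded "why I missed a session" responses (hypothetical),
# stored as (session, reason) pairs
survey_reasons = [
    (4, "schedule"), (4, "transport"), (4, "schedule"),
    (5, "schedule"), (5, "caregiving"),
]

# Find the sharpest session-over-session drop in attendance
sessions = sorted(attendance)
drops = {s: attendance[s] - attendance[s - 1] for s in sessions[1:]}
drop_session = min(drops, key=drops.get)

# Tally the survey themes reported from that session onward
themes = Counter(reason for s, reason in survey_reasons if s >= drop_session)

print(f"Sharpest drop at session {drop_session} ({drops[drop_session]} participants)")
print(themes.most_common())
```

The code only surfaces where the drop is and which themes surround it; the interpretive work — deciding whether those themes explain the drop — still belongs to you and your team.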

Or consider a powerful interview quote from a participant who says, “This program helped me feel confident speaking up for the first time.” On its own, it’s a compelling story. But when you see that post-surveys show increased confidence, attendance data shows sustained engagement, and focus group conversations echo similar language, that quote becomes more than anecdote. It becomes a finding.
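If you want a quick gut-check before elevating a quote to a finding, that corroboration logic can also be made explicit. Again, the scores, excerpts, and thresholds below are hypothetical placeholders, not a validated rubric:

```python
# A minimal sketch (hypothetical data and thresholds) of checking whether
# a striking quote is corroborated by other sources: did confidence scores
# rise on average, and does similar language appear elsewhere?

# Pre/post survey confidence on a 1-5 scale, per participant (hypothetical)
pre  = {"p1": 2, "p2": 3, "p3": 2, "p4": 3}
post = {"p1": 4, "p2": 4, "p3": 3, "p4": 5}

# Focus-group excerpts (hypothetical)
excerpts = [
    "I finally feel confident sharing my opinion",
    "the schedule was hard some weeks",
    "speaking up got easier for me",
]

# Average pre-to-post change in confidence
avg_change = sum(post[p] - pre[p] for p in pre) / len(pre)

# Count excerpts that echo the quote's language (hypothetical keywords)
echoes = sum(1 for e in excerpts if "confident" in e or "speaking up" in e)

# The quote is corroborated if scores rose and at least two voices echo it
corroborated = avg_change > 0 and echoes >= 2
print(f"avg confidence change: {avg_change:+.2f}, echoes: {echoes}")
```

The point isn’t the arithmetic — it’s the habit of asking, in a checkable way, whether the rest of your data agrees with the story a single quote tells.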

Mixed-methods analysis isn’t about complexity or volume. It’s about care. You don’t need every data type to do this work well, and more data doesn’t automatically mean better insight. Integration is an interpretive skill, not a technical one.

When done thoughtfully, mixed methods help your data do what it’s meant to do: support decisions, deepen learning, and tell a story that is both credible and human.

If you’re pulling multiple data sources together right now, reply and tell me one question you’re trying to answer — or one recent insight you’re proud of. And if you want support facilitating a mixed-methods sensemaking process that helps your team move from data to action, just hit reply. I’d love to help.

📩 Reply to this email or reach out to learn more!

Bridgepoint Evaluation

Organizations partner with me for a comprehensive measure of their community impact and to demystify the program evaluation process. In my newsletter, I share practical tips for strengthening your evaluation practice.

Read more from Bridgepoint Evaluation

The Art of Prioritizing What (and Who) Matters in Evaluation Here we are—email four in our Evaluation Design series. We’ve got two more design-focused topics coming your way before we shift into the nitty-gritty of data collection: strategies, tools, and what to do when things don’t go as planned. This summer, much like the rest of the year, has been full of contradictions. Just last week, my family and I traveled to South Carolina to visit relatives I hadn’t seen in years. The garden is...

Let’s Talk About the Questions That Matter Most When organizations think about evaluation, the first thing they often think about is data collection. But before you build a survey, schedule interviews, or create a dashboard, there’s a foundational step that deserves your attention: Deciding what questions your evaluation is meant to answer. These aren’t your survey questions — they’re the deeper, guiding questions that shape the direction and purpose of your entire evaluation. These are your...

Bridgepoint Evaluation May 26th Let's stop throwing spaghetti at the evaluation wall. ↓ Hi Reader, I was leading a survey design workshop the other week when someone leaned back in their chair, paused, and said, “I can think of the questions I’m asking… but I have no idea what the bigger question is.” I’ve heard some version of that line more times than I can count. And honestly? It’s one of my favorite moments—because it means we’re moving beyond checking boxes and starting to look at the...