The Search for the Predictive Indicator: A First Look at Results

Maria Peña, Chief Program Officer, LIFT

Just before Thanksgiving, I started working with a man named Mr. Assefa who, despite working full-time, lives several thousand dollars below the federal poverty line of just $27,910 annually for a family of five. Mr. Assefa’s three daughters are among the 16.4 million children in the U.S. who currently live in poverty. That’s 1 in 5 children under the age of 18 – or, worse, 1 in 4 children under the age of five.

When I started working with LIFT staffers to build our Constituent Voice (CV) system a few months ago, I thought often of Members like Mr. Assefa. I thought about their kids. How would this work help us serve them better? As an organization, we had been collecting outcomes data on our Members for years – data like whether they found a job or got into safe housing or obtained much-needed food assistance for themselves and their kids.  What could “soft” data like whether they felt respected at LIFT or more connected to their communities tell us that fourteen years of “hard” outcomes data did not? Were there insights in the subjective data that could help us figure out a more effective way to help them lift themselves out of poverty for good?

As our team at LIFT sat down to discuss, design, and argue about what ultimately became the twenty questions on our Member surveys – the bedrock of our CV system – we kept Members like Mr. Assefa in mind. Behind every survey is the hope that the questions we ask our Members will help us unearth a critical key to their success.

Net Promoter & Constituent Voice

In choosing to implement Constituent Voice at LIFT, we made the decision not just to put participant feedback at the center of our program design and implementation, but also to adopt a proven, widely accepted analytical methodology from the corporate world – the Net Promoter system – that would allow us to one day benchmark our responsiveness to participants’ needs against that of other organizations. As I shared in my last blog post, this approach helps companies to gather feedback, improve and innovate, and most importantly, predict profitability. At LIFT, we’re betting that this system can deliver to us similar insights.

Here’s how we put it into action: at LIFT, our volunteer Advocates meet with program participants – i.e., Members – for an hour or more about once a week to help them make progress on the goals that they define as most important in their lives. Typically, these are goals like finding a job or getting food assistance for their families. At the end of each meeting, we administer short surveys to Members to gauge how they feel about LIFT and how they assess what we call personal or social factors – for example, how confident they feel about their ability to achieve their own goals or how connected they feel to their community. Members take these surveys on iPads right in our LIFT offices and use a slider scale to indicate how strongly they agree with statements like “With LIFT’s help, I’m making progress on my goals” or “LIFT helps me with the goals I think are most important”. Ratings range from 0 (“Completely Disagree”) to 10 (“Strongly Agree”). We use Net Promoter analysis to distinguish between three profiles of respondents:

  • “Promoters” rate LIFT a 9 or 10 on the survey’s 0-10 scale.
  • “Neutrals” give ratings of 7 or 8.
  • Those giving ratings from 0 to 6 are categorized as “Unconvinced”.

To determine the Net Promoter Score (NPS) for a particular question, you just need to calculate the proportion of Unconvinced respondents and subtract that number from the proportion of Promoters. For example, let’s say that for a given question, 20% of respondents are Unconvinced, 10% are Neutrals, and 70% are Promoters. The NPS for that question is 70 - 20 = 50.

Net Promoter scores range in value from -100 (all Unconvinced) to 100 (all Promoters). In the corporate world, where NPS is famously and reliably predictive of company performance, it is not uncommon to have negative NP scores, but the most successful for-profit companies generally have NP scores in the 50s or above. At LIFT, we calculate the NPS for every question in our surveys.
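For the arithmetic-minded, here’s a minimal sketch in Python – purely an illustration, not part of our actual system – of how a score like this can be computed from a list of 0-10 ratings:

    def nps(ratings):
        """NPS = % Promoters (9-10) minus % Unconvinced (0-6); ranges -100 to 100."""
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)
        unconvinced = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - unconvinced) / len(ratings)

    # The example above: 70 Promoters, 10 Neutrals, 20 Unconvinced out of 100.
    sample = [9] * 70 + [7] * 10 + [5] * 20   # hypothetical ratings
    print(nps(sample))  # 50.0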

Our initial results

After redesigning the surveys this fall to better align to LIFT’s theory of change, we launched a full set of surveys on October 15, 2013, each with 4-6 substantive questions. For half the respondents, the surveys were completely anonymous. For the other half of respondents, the surveys were “non-anonymous”, meaning the Advocates entered each respondent’s Member ID into the iPad before handing the survey to the Member. While local LIFTers are never permitted access to the identified data, the non-anonymous surveys allow LIFT’s national evaluation team to tie the subjective feedback responses to the objective economic outcome data (e.g., jobs or housing secured) recorded in our online case management system to test what correlations, if any, exist between the two.
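To make that concrete, here’s a rough sketch of the kind of join and correlation check this enables – with made-up tables and column names (member_id, rating, outcome_achieved), not our actual schema:

    import pandas as pd

    # Hypothetical data: survey responses tagged with Member IDs, plus
    # outcome records pulled from a case management system.
    surveys = pd.DataFrame({
        "member_id": [101, 102, 103],
        "rating": [9, 6, 10],           # 0-10 agreement rating on one question
    })
    outcomes = pd.DataFrame({
        "member_id": [101, 102, 103],
        "outcome_achieved": [1, 0, 1],  # e.g., secured a job or housing
    })

    # Tie subjective feedback to objective outcomes by Member ID, then ask
    # whether the two move together.
    merged = surveys.merge(outcomes, on="member_id")
    print(merged["rating"].corr(merged["outcome_achieved"]))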

With surveys administered after nearly every Member meeting in three of our sites – North Philadelphia, West Philadelphia, and Uptown Chicago – we compiled about 1,500 submitted surveys by Thanksgiving, with about half of those responses tagged with a Member ID. Looking at that first six weeks of data, we were thrilled – and relieved! – to see that our Members thought we were doing a fairly good job of meeting their expectations.

Overall, the vast majority of respondents gave us the highest ratings. Promoters, i.e., those giving us a 9 or 10, represented the lion’s share of respondents, as shown in green below (see Figure 1). Very few gave us a 7 or 8, shown as the Neutrals in blue, and an even smaller proportion gave us a 6 or lower, shown as the Unconvinced in red. The NPS for each question, indicated by the orange line, is strong, ranging between the 60s and 80s for nearly every question.

[Figure 1: share of Promoters, Neutrals, and Unconvinced, with NPS, by survey question]

Many of the results were as expected. People who came back for 4 or more meetings tended to rate us higher (NPS of 77) than those with 3 or fewer meetings (NPS of 72), suggesting perhaps that those who find LIFT less valuable may stop coming to LIFT early on (see Figure 2). Having grouped our twenty questions into categories like relationship quality or service importance, we also saw that we scored consistently high across both Philadelphia and Chicago for every category except social factors (see Figure 3). While we at LIFT believe strongly that sound social foundations, like having a support system or feeling connected to your community, lead to greater economic progress, we knew going into this CV work that our programming has been less focused in this area and would likely result in lower scores.

[Figure 2: NPS by number of meetings attended | Figure 3: NPS by question category across sites]
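For the curious, here’s how a segment cut like the one in Figure 2 could be produced in principle – a hypothetical sketch with invented data and a made-up meetings_attended column, not our actual pipeline:

    import pandas as pd

    def nps(ratings):
        """NPS = % Promoters (9-10) minus % Unconvinced (0-6)."""
        return 100 * ((ratings >= 9).mean() - (ratings <= 6).mean())

    # Hypothetical responses: one row per submitted survey answer.
    responses = pd.DataFrame({
        "rating":            [9, 10, 8, 5, 9, 10, 6, 9],
        "meetings_attended": [5, 4, 2, 1, 6, 4, 2, 3],
    })

    # Split respondents into the two groups discussed above and score each.
    segment = (responses["meetings_attended"] >= 4).map(
        {True: "4+ meetings", False: "3 or fewer"})
    for label, group in responses.groupby(segment):
        print(label, round(nps(group["rating"]), 1))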

Some results surprised us. We anticipated that we might see a “politeness bias” in the results if our Members felt compelled to give us good scores, especially if they were non-anonymous respondents. Interestingly, though, we have yet to see evidence of a politeness bias (see Figure 4) and, in fact, have earned slightly lower scores from Members who know they’re on the record (average NPS: 73) than from those who remain anonymous (average NPS: 78). We have also found that, generally speaking, men (average NPS: 80) tend to rate LIFT higher than women do (average NPS: 75). While not a huge difference on average (see Figure 5), the gender gap is larger on certain questions. For example, women score us significantly lower than men on Question 1 (“LIFT keeps its promises and does what it says it will do”), giving us an average NPS of 70 versus the men’s 83. But women rate us better on Question 2 (“People at LIFT really care about me and want to help me the best they can”), giving us an average NPS of 82 versus the men’s 70. If we understood these results better, what could they tell us about how to better serve all our Members?

[Figure 4: NPS for anonymous vs. non-anonymous respondents | Figure 5: NPS by gender]

What’s next

As anyone who’s done this work can tell you, getting the survey data is the easy part. (Well, not that easy…)  The harder part is figuring out how to efficiently and effectively make sense of the data, within the context of other performance and evaluation information you collect. Even harder is figuring out how to move from data to action, i.e., how to learn from the feedback and actually improve the way you operate, so you can better meet your participants’ needs. Even harder than that is figuring out how to build the systems and processes that let your organization consistently manage and innovate based in part on participant feedback. The challenge is to systematically listen to the people you serve and design solutions around what they tell you they need. At LIFT, we call this a human-centered approach to social change.

Our team is just starting its journey with participant feedback and has already made lots of mistakes along the way (more on that in an upcoming blog). But we’re excited by the insights that our data is starting to reveal and by the possibilities they open up for how we re-think our program.

I’ll share more about the results and, in particular, what we are learning as we triangulate subjective feedback data against our objective economic data. Relationship metrics are a proven predictor of company performance in the corporate world, but will they be similarly predictive of our Members’ economic progress? Of LIFT’s overall impact?