The Demand Solutions Blog

Is Lead Scoring the Right Approach for Multifamily?

by Donald Davidoff | Sep 8, 2015 12:00:00 AM

There seems to be significant and growing attention to the issues and opportunities around lead scoring. This is happening throughout all business verticals, not just multifamily. In the multifamily space, I’ve seen webinars, panels and other educational activities discussing this important topic.

As anyone who knows me would attest, I’m all about data and predictive analytics. However, I’m nervous that we’re succumbing to the “buzz” around lead scoring and not thinking through all of the ramifications.

Here are a few thoughts:

  • Lead analysis is VERY important for understanding the quantity and quality of leads from various channels, which in turn informs decisions about spend. Similarly, lead analysis can show us the comparative performance of leasing associates and teams so that regional and community managers can coach them. There are a number of very interesting new products coming online for this very purpose. A few of my favorites are SlopeJet, ShowPro by AnyoneHome and LeaseHawk; all are worth looking at if you haven’t yet done so. (Full disclosure: I have interests in the first two, and Mike Mueller, who owns LeaseHawk, is a good friend.)
  • Lead scoring, which I define as putting some numerical or qualitative (e.g. color-coded) measure on each lead to indicate how good or bad it is, so that leasing associates treat leads differently (e.g. by calling the “good” ones before the “bad” ones), has some serious challenges:
    • The biggest challenge is that this is almost always “a solution in search of a problem.” Lead scoring would be very valuable whenever leasing associates have more leads than they can handle; separating good from bad would optimize the limited resource of their time. The challenge is that I’ve yet to be involved in a time-and-motion study of leasing agents that showed they have more leads than time. I’ve seen plenty of examples of inefficient pipeline management that makes associates feel overwhelmed, but the solution to that is to improve the lead management system, not to “score” some leads out of the way. If you truly have more leads than your associates have time to manage, then lead scoring could be a great solution.
    • Another challenge is that it’s very hard to get good data to drive a lead scoring model. Some things are so obvious that they don’t need a model. For example, a lead with a closer desired move-in date is more likely to convert soon than one much further out, a lead for a unit type with many exposed units is more valuable today than one for a unit type that is fully occupied, etc. There are other things that are really cool to measure and could be of great value, e.g. number of pages visited, time on site, etc. The challenge is that most leads come via phone or email, not via our own website’s lead generation form, so the really rich data is only available for a small fraction of our leads. It’s therefore really hard (maybe impossible) to build a lead scoring model that is predictive enough to be truly useful.
    • Some may say that the worst case from both of the above is that there’s not as much value as we would hope, but that there’s certainly no harm in creating a lead scoring model. I’m actually afraid there would be harm. I don’t have any multifamily data to prove this, but it reminds me of some studies in the world of education. The studies took students and randomly assigned them into two groups. They then told the teachers that the first group was a “track 1” of “really good” students and the other was a group of “regular” students. Sure enough, the “track 1” group outperformed the “regular” group on testing at the end of the study. The mechanisms may vary, but I fear that labeling one lead as “better” than another will create self-fulfilling prophecies and reduce overall sales effectiveness.
    • I’ve seen some systems try to get around this “anchoring” issue by referring to all leads as being at least “good,” with the others labeled “better” or “best” or some other superlative. I give a bit more credit to our leasing associates than these systems do. They get that “good” means “worse than the others,” and the same anchoring concerns arise.
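To make the “obvious” criteria concrete, here is a minimal sketch of a rule-based scorer using only the two signals the post calls out: desired move-in proximity and exposed units of the lead’s unit type. The class, weights and inventory snapshot are all hypothetical illustrations, not a real product’s model; the point is that these signals need no statistical model at all.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Lead:
    desired_move_in: date
    unit_type: str

# Assumed inventory snapshot: exposed (vacant or on-notice) units per type.
EXPOSED_UNITS = {"1BR": 6, "2BR": 0}

def score_lead(lead: Lead, today: date) -> int:
    """Toy rule-based score: higher is 'hotter'. Weights are illustrative."""
    days_out = (lead.desired_move_in - today).days
    # Closer desired move-in dates are more likely to convert soon.
    urgency = max(0, 60 - days_out)                       # contributes 0..60
    # A lead for a unit type with many exposed units is worth more today.
    exposure = min(EXPOSED_UNITS.get(lead.unit_type, 0), 10) * 4  # 0..40
    return urgency + exposure

today = date(2015, 9, 8)
hot = score_lead(Lead(date(2015, 9, 15), "1BR"), today)   # near-term, exposed type
cold = score_lead(Lead(date(2015, 12, 1), "2BR"), today)  # far out, full type
```

Two if/else rules reproduce the ranking a model would give here, which is the post’s point: the predictive signals that are actually available are too obvious to justify a scoring model.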

Note that I distinguish “lead scoring” from categorizing leads for various nurturing campaigns. Deciding that some leads are appropriate for an email drip campaign or some other automated follow-up, and no longer require leasing associate attention, is perfectly fine, assuming, of course, that we use good criteria for making that decision.

In closing, I don’t want to say that lead scoring is bad in and of itself. In many industries, it’s a highly valuable way to triage a flood of leads. So if we have more leads than we can handle, lead scoring would be useful. But in situations like most of our communities, where we don’t, a really robust “lead analysis” system will give us everything we need to make good vendor decisions and deliver timely, meaningful coaching.
