How to Measure the Impact of Technology on Rent Growth
by Donald Davidoff | Aug 11, 2020
In recent weeks we've been examining some changes to technology adoption through the COVID-dominated months of 2020. Curb-to-couch access control and self-show are obvious examples of innovations that have become a necessity as communities have grappled with social distancing. But as we discussed last week, we still need to understand the numbers when considering technology investments.
Most of the time, proptech investment decisions turn on whether the cost can be justified through rent increases. But rent growth is complicated, particularly when it is being used to justify an investment decision. Correlation is not causation, and given the typically broad range of factors that influence rent growth, it is never easy to isolate the role of any single factor.
The challenge is to find evidence of the impact of the technology on rent growth. That means getting comfortable with estimates of future rent increases and, assuming they justify moving forward with implementation, conducting tests to prove revenue lift.
How to Measure Revenue Lift
The most efficient way to project revenue lift is to review results from a peer or competitor who has conducted appropriate testing. If you don't know of any operators who have conducted formal testing, your technology vendor should know somebody who has. The most reputable vendors have reference customers who have done legitimate testing, so this has the added benefit of being an appropriate step in any source-selection process.
Different operators approach investments in different ways as the attractiveness of the upside and the need to differentiate properties provide a natural impetus for roll-out. Yet for some operators, there is a need for a formal proof point on the benefits. Imagine, for example, a skeptical owner who wants a proof point on rent increases. By far the best way to do that is to run a formal A/B test.
In the case of smart home technology, A/B testing measures the results experienced by a cohort of rental units using the technology against those of a cohort that does not use it. Running an A/B test requires two main decisions:
- What metric(s) will you use as the key result indicators?
- What will define the test and control cohorts?
The simplest metric to use is the change in exposure. We could compare the exposure of a test set of homes against a control set, given a set rent premium on the test homes. However, exposure can change due to notices that have nothing to do with the technology we are testing. For that reason, a much better metric to focus on is "days on market" (DOM).
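For reference, exposure is commonly defined as the share of units that are either vacant and unleased or on notice and not yet pre-leased. Definitions vary by operator, so treat this sketch, including its parameter names, as one illustrative form rather than a standard:

```python
def exposure(vacant_unleased, notice_unleased, total_units):
    """Exposure: fraction of units vacant or on notice and not yet leased.
    Definitions vary by operator; this is one common form."""
    return (vacant_unleased + notice_unleased) / total_units

# e.g. 5 vacant-unleased units + 3 on-notice-unleased units out of 200
rate = exposure(5, 3, 200)
```

The noise problem the paragraph above describes is visible here: `notice_unleased` moves with every notice to vacate, whether or not the technology being tested played any role.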
What Days on Market Reveal
DOM measures the time between the notice to vacate of the departing resident and the application date of the arriving resident. This "time in play" is an excellent proxy for market response to price. If the test homes are leasing at a lower DOM than the control, then we can charge more; if the test homes are leasing more slowly, then the price premium is too high; and of course, if they are statistically equivalent, then the price differential is just right.
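The three-way decision rule above can be sketched in Python. The function names, the sample figures, and the use of Welch's t-statistic with a cut-off of roughly 2.0 as an informal significance test are my own assumptions, not a method prescribed by the article:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def dom_verdict(test_dom, control_dom, threshold=2.0):
    """Compare days-on-market samples and map the result onto the three
    outcomes: a |t| of ~2 roughly corresponds to a 95% two-sided
    significance level for reasonable sample sizes."""
    t = welch_t(test_dom, control_dom)
    if t <= -threshold:
        return "test units lease faster: room to raise the premium"
    if t >= threshold:
        return "test units lease slower: the premium is too high"
    return "statistically equivalent: the premium is about right"

# Hypothetical DOM samples (days) for a smart-home test cohort vs. control:
verdict = dom_verdict([12, 14, 11, 13, 15, 12, 14, 13],
                      [25, 27, 24, 26, 28, 25, 27, 26])
```

With real leasing data you would also want enough turnovers in each cohort for the comparison to mean anything; a handful of leases per cohort will rarely clear any significance bar.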
Since DOM analysis measures market response, it can be applied to anything that potentially impacts prospects' decisions. The impact of technology, renovations or any new amenities can all be understood using this methodology.
For defining test and control cohorts, best practice, where possible, is to implement the change on half of the available units and compare DOM to the other half without the change (and the premium). In a situation where it is imperative to update all homes at the same time, then the next best option is to pick an appropriate sister community.
Once you have your cohorts, you can compare the test community's rent growth against the rent growth of the control community. In choosing a control property, you should opt for one that is geographically as close as possible and starts with somewhat comparable rents and similar availability.
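As an illustration of that matching step, here is a minimal sketch that scores candidate sister communities on distance, rent gap, and availability gap. The field names and weights are hypothetical; a real matching exercise would tune them to the portfolio:

```python
def pick_control(test_prop, candidates):
    """Pick the candidate community that best matches the test property.
    Lower score = closer match. Weights below are illustrative only."""
    def mismatch(c):
        return (c["miles_away"]                                          # proximity
                + abs(c["avg_rent"] - test_prop["avg_rent"]) / 100       # rent gap, per $100
                + abs(c["pct_available"] - test_prop["pct_available"]) * 100)  # availability gap
    return min(candidates, key=mismatch)

# Hypothetical portfolio data:
test_prop = {"avg_rent": 1500, "pct_available": 0.05}
candidates = [
    {"name": "Oakview", "miles_away": 2, "avg_rent": 1480, "pct_available": 0.06},
    {"name": "Riverside", "miles_away": 15, "avg_rent": 1500, "pct_available": 0.05},
]
control = pick_control(test_prop, candidates)
```

In this toy example the nearby, slightly-off-on-rent community beats the distant exact match, reflecting the article's advice to weight geographic proximity heavily.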
You may not be able to get a perfect match, but by testing several communities against pre-selected sister communities, differences are likely to average out across the data. There may be no perfect test, but you can certainly build one good enough to support smart business decisions. And in times like these, when an appetite for innovation meets economic uncertainty, we ignore at our peril the need to make smart technology decisions.