Other revenue management systems tout features showing customers how much more revenue a hotel made using their system over what it would have made without. We just wrapped up board meetings in New York and were asked why we weren’t offering something similar. My answer was simple: I don’t know how we, or anyone for that matter, could accurately provide that number. I’ve heard many experts claim to calculate price elasticity through the years, but I’ve always wondered how.
Plus, I was happy to tell the board we’re doing something even better. We just launched an amazing new feature that lets our users experiment with different pricing strategies. This first-to-market solution allows hotels to maximize revenue on the vast majority of days when hotels aren't expecting a sellout.
The linear programming optimization algorithm driving all revenue management systems works well at finding the optimal price when there’s unconstrained demand, but the algorithm completely fails when the hotel is not pacing to sell out. The math behind the algorithm is, in essence, calculating what one extra unit of inventory would be worth to a customer. So in a 100-room property, the system is figuring out what a 101st room would be worth if one could be built, and then recommending a price based on that. The problem: when only 50 (or 70 or 90) rooms are sold, what is that extra room worth? Zero dollars. The system doesn’t work. It is advocating selling your hotel at the lowest possible price, and this is happening at most hotels in most locations on most days.
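To make the $0 bid price concrete, here is a toy sketch of the marginal-value calculation described above. This is a hypothetical illustration, not GameChanger’s or any vendor’s actual algorithm; the rates and room counts are made up.

```python
def bid_price(forecast_rates, capacity):
    """Toy bid-price calculation: the value of one extra room.

    forecast_rates: the rate each forecast booking request is expected to pay.
    Returns what a hypothetical additional room would be worth.
    """
    requests = sorted(forecast_rates, reverse=True)
    if len(requests) > capacity:
        # Demand exceeds capacity: an extra room would capture the best
        # request currently being turned away.
        return requests[capacity]
    # Capacity is not binding: an extra room would sit empty.
    return 0.0

# A 100-room hotel pacing to sell only 60 rooms: the marginal room is
# worth $0, so a pure bid-price system pushes rates toward the floor.
print(bid_price([150.0] * 60, 100))   # 0.0
print(bid_price([150.0] * 120, 100))  # 150.0
```

The second call shows the only case where the bid price is positive: more forecast requests than rooms, which is exactly the sellout scenario most hotels rarely see.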
The $0 bid price is the greatest fallacy of hotel revenue management systems and the weak methods of working around this problem with historical rates and comparisons to competitors are inadequate. Revenue management systems have been causing hotels to lose money for decades. Until now.
Let me explain. First, about our competition and the experts: My question is always how can they tell how many customers didn’t book a room because the price was too high? Or how many would have paid more if given the chance? There’s no easy way to answer those questions. Most systems don’t use web shopping data. GameChanger, our RMS, factors regrets (when a customer gets a price quote but doesn’t book) and denials (when a room or room type is unavailable to the customer) into its algorithm. That information provides a sense of how price sensitive customers are, but even that doesn’t tell the whole story.
The closest anyone gets to calculating true price elasticity is through A/B testing, which online retailers like Amazon and many others regularly use. They can offer two different prices (A and B) to similar customers at the same time to calculate price elasticity.
It’s not that simple in the hotel industry. We have rate parity agreements, other search and OTA sites constantly crawling hotel websites, and customers who are shopping our product (the hotel) at the same time. We can change prices hourly or even minutes apart, and play with non-refundable deals and other offers, but if two customers log into the same website at the same time, the price had better be the same. Remember the outcry in 2012 when the Wall Street Journal reported how Orbitz was steering Mac users to higher-priced hotels? Not exactly the same situation, but the debate and initial charges centered on how unfair, if not unethical, it would be to charge guests different prices at the same time for the same product.
But all of that doesn’t mean we can’t “experiment” with different rates and strategies, which brings us back to what we’re in the midst of rolling out in our application. We’re giving customers the chance to test different pricing strategies. It’s as simple as checking a box, and a hotel can uncheck it at any point.
Hotels can test higher (or lower) prices in a controlled experiment. It’s not exactly A/B testing, but it’s close. GameChanger can isolate near-identical situations, and if rates are changed at one time but not the other, we can get a truer sense of how price elastic demand is. For example, we might increase price by $20 on a Tuesday 30 days out, when we know pickup should be 10 rooms, and then compare the results to a similar Tuesday previously (or in the future) when we didn’t raise rates.
The experiment allows us to see if the price increase changed demand in that situation. If it did not, our users can feel more confident raising rates in those segments and continue to test more aggressive pricing on days without unconstrained demand.
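A back-of-the-envelope version of that readout, using the 10-room pickup mentioned above plus a hypothetical $180 base rate (GameChanger’s actual matching and forecasting are more involved than this sketch):

```python
# Hypothetical experiment readout for a +$20 rate test on a matched Tuesday.
base_rate = 180.0       # assumed rate before the test (hypothetical)
control_pickup = 10     # rooms picked up on the comparable Tuesday
test_pickup = 9         # rooms picked up with the +$20 rate

pickup_change = (test_pickup - control_pickup) / control_pickup
extra_revenue = test_pickup * 20.0                    # rooms still sold, $20 richer
lost_revenue = (control_pickup - test_pickup) * base_rate

print(pickup_change)                 # -0.1 (pickup fell 10%)
print(extra_revenue - lost_revenue)  # 0.0 -> roughly break-even in this case
```

Even this crude comparison shows the decision rule: if pickup holds (or the extra rate revenue outweighs the lost rooms), the higher price was the right call for that segment.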
At the Revenue Strategy Summit in November, Lee Pillsbury touched on this topic during his keynote address. He said his biggest mistake was believing demand for hotel rooms behaved like demand for airline seats. He brought revenue management to this industry and to Marriott in 1983, following the yield management being done at American Airlines, and his assumption was that if you reduced rates by 10%, demand would increase by at least 10%: that price elasticity was one or greater. He now believes hotel customers aren’t nearly as price sensitive and the industry has been underpricing itself for decades. We think he’s right, and we hope to help our customers prove it.
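Pillsbury’s point has a simple arithmetic consequence: when the magnitude of elasticity is below one, raising price raises revenue. A hypothetical 100-room example, using a linear local approximation of demand response (all numbers invented for illustration):

```python
def revenue_after_increase(price, qty, pct_increase, elasticity):
    """Revenue after a price increase, assuming quantity shifts linearly
    with elasticity (a local approximation; hypothetical numbers)."""
    new_price = price * (1 + pct_increase)
    new_qty = qty * (1 + elasticity * pct_increase)
    return new_price * new_qty

base = 180 * 50  # $9,000 before any change
print(round(revenue_after_increase(180, 50, 0.10, -0.5), 2))  # 9405.0: inelastic, revenue up
print(round(revenue_after_increase(180, 50, 0.10, -1.5), 2))  # 8415.0: elastic, revenue down
```

If hotel demand really is inelastic, as Pillsbury now argues, the first line is the everyday case, and decades of rate cuts have been leaving money on the table.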
We may not be able to tell you exactly how much more money you made with us last month, but we’re confident you made more, and will make even more going forward. Come test that theory with us.