Machine learning can automate the handling of huge troves of data to help companies make and save money. But these systems aren't without pitfalls, as the real estate tech company Redfin learned.
As Redfin began building its own machine-learning capabilities, it ran into a problem: Employees weren't using them. Bridget Frey, the firm’s CTO, said in an interview that there was a key reason for that: At first, Redfin's systems gave the real estate agents who were supposed to use them no way to make modifications.
For example, a Listings Matchmaker feature generated a list of personalized recommendations for homebuyers, based on their interests. In its initial iteration, agents weren’t able to add recommendations they thought would be useful.
“And the agents were saying ‘I’m an expert in this neighborhood, I know that this house is perfect, why are you not letting me add it in?’ And we just realized that our engineers ... were relying too much on the machine learning,” Frey said. “We needed to take what was special about our agents and use that to enhance the algorithm and enhance the experience for our customer.”
Any company that expects its employees to stand behind the product of an automated system needs to let them manually edit the results, Frey said.
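The override pattern Frey describes, in which a human's hand-picked results take priority over an automated ranker's output, can be sketched as a simple merge step. The function and field names below are illustrative assumptions for this article, not Redfin's actual implementation.

```python
# Illustrative sketch of a human-in-the-loop recommender: automated,
# model-scored suggestions are merged with picks an agent adds by hand.
# All names here are hypothetical, not Redfin's actual code.

def merge_recommendations(model_recs, agent_picks, limit=10):
    """Pin agent-added listings first, then fill with model output.

    model_recs  -- list of (listing_id, score) pairs from the ML ranker
    agent_picks -- listing IDs the human agent added manually
    """
    seen = set()
    merged = []
    # Human picks take priority and are never filtered out by the model.
    for listing_id in agent_picks:
        if listing_id not in seen:
            merged.append(listing_id)
            seen.add(listing_id)
    # Fill the remaining slots with the highest-scoring model suggestions.
    for listing_id, _score in sorted(model_recs, key=lambda r: -r[1]):
        if len(merged) >= limit:
            break
        if listing_id not in seen:
            merged.append(listing_id)
            seen.add(listing_id)
    return merged

model_recs = [("MLS-101", 0.92), ("MLS-205", 0.87), ("MLS-330", 0.75)]
agent_picks = ["MLS-777"]  # the house the neighborhood expert knows is perfect
print(merge_recommendations(model_recs, agent_picks, limit=3))
# ['MLS-777', 'MLS-101', 'MLS-205']
```

The design choice here matters: the human's picks are pinned rather than scored, so the model can never silently drop them, which is exactly the behavior the agents were asking for.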
What’s not clear is which set of insights -- the machine's or the agent's -- actually led to better outcomes. In the case of Listings Matchmaker, Frey said users didn’t seem to click on one set of recommendations over the other, but she didn’t know whether the company had data on which recommendations resulted in a purchase.
The more important difference is that agents used the feature more once they were able to recommend particular homes, she said.
Frey’s comments were echoed by Alphabet Executive Chairman and former Google CEO Eric Schmidt during an on-stage interview at the RSA security conference in San Francisco last week. He said humans must remain in the loop of machine learning systems.
“It’s very important with these systems to understand that they are advisory,” Schmidt said. “They help you understand something, but ultimately you want humans to be in charge of these things.”
Another concern is algorithmic bias, or the tendency of automated systems to return results that are skewed because of factors like the data that goes into them and the way they're designed.
Frey said Redfin has been trying to build a diverse team to work on its technology, in part to identify and correct for those biases.
“It requires constant vigilance to fight against the biases that can creep into software. Ultimately, human beings are the ones who write the algorithms and choose the data sets,” Frey said in an email. “So we’ve decided, for example, not to collect and process certain demographic data that could introduce bias into our algorithms. But if the humans powering these algorithms have one inherent perspective, it is likely the algorithm will adopt a similar disposition.”
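Frey's decision not to collect or process certain demographic data amounts to a filtering step before records ever reach a training pipeline. A minimal sketch of that idea follows; the field names and record structure are assumptions for illustration, not Redfin's pipeline.

```python
# Illustrative sketch: stripping sensitive demographic fields from a
# record before it enters a training pipeline. Field names are
# hypothetical, not drawn from Redfin's systems.

SENSITIVE_FIELDS = {"race", "religion", "national_origin", "familial_status"}

def strip_sensitive(record):
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

raw = {"listing_id": "MLS-101", "price": 450000,
       "race": "(not collected)", "familial_status": "(not collected)"}
print(strip_sensitive(raw))
# {'listing_id': 'MLS-101', 'price': 450000}
```

As Frey's "constant vigilance" comment suggests, a step like this is necessary but not sufficient: correlated features that remain in the data can still act as proxies for the fields that were dropped.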