Banks' use of big data to be scrutinized by EU regulators

Algorithms are also being investigated

Financial institutions in Europe may face tougher rules governing their use of big data as a result of a new investigation by financial regulators.

Focusing on the "opportunities and challenges" associated with big data, the new investigation aims to determine whether new regulatory or supervisory measures are needed, according to a joint statement published Monday by the European Securities and Markets Authority, the European Banking Authority and the European Insurance and Occupational Pensions Authority.

In particular, it will focus on financial institutions' use of consumers' personal data for profiling purposes as well as for identifying patterns of consumption to make targeted offers. Such activities "raise questions" about firms' "expected behaviors" and "overarching obligations," the group said.

Also planned for 2016 is continued work by the joint committee on an initiative launched earlier this year that focuses on algorithms. The ongoing goal is to assess "the phenomenon of human interaction between consumers and financial institutions being increasingly replaced by algorithms that provide advice or other forms of recommendations," with a particular focus on risks and benefits and any need for regulation or other actions.

Results of the algorithm-focused analysis will be included in a discussion paper this fall, followed by policy recommendations in 2016.

Earlier this year, the U.S. Federal Trade Commission's Bureau of Consumer Protection established the Office of Technology Research and Investigation to focus on algorithmic transparency and other issues.

"Some of us already feel that our banker is a robot, and it might be a good idea: we want our institutions to operate relatively uniformly under fair and transparent rules," said Christian Sandvig, a professor in the School of Information at the University of Michigan.

Fairness and transparency, however, are far from guaranteed.

"A serious risk of making banking and financial advice a human-algorithmic interaction is that these algorithms may not be robotic in the right way," Sandvig said. "Machine-learning algorithms fed by big data quite often produce unexpected results, and their operation can be opaque to their own designers."

Credit scoring, for example -- now almost universally accepted as a standard part of the financial system -- was originally developed in part as a way to make discriminatory financial decisions more opaque, Sandvig said.

"In an earlier era of banking, to award a mortgage bankers hired private detectives to judge whether someone was likely to be a homosexual or communist," he said. "The credit score hides a secret process behind a number."

It's also possible for financial-recommendation algorithms to provide "a false objectivity," he warned.

"As a society, we need to develop regulatory capacity to systematically assess the consequences of these algorithmic systems," Sandvig said. "This move is a first step on a long road."
