
Master Talks 2: Economics helps AI algorithms make better decisions

Peng Chen, DIGITIMES, Taipei

Credit: Epoch Foundation

AI algorithms play a huge part in today's world, and they continue to be refined to perform various tasks better. For example, Hui Chen, professor of finance at MIT Sloan School, has researched applying economic concepts to machine learning algorithms to make them "smarter."

Chen shared his insight at "Building a Better World," a master series organized by Epoch Foundation and MIT Sloan School of Management.

In a speech titled "Can economics help make smarter AI?" the professor said that his research is trying to answer the following questions: How can we integrate machine learning methods with economics to make decisions more effectively? How can we make our lending algorithms more robust to potential strategic attacks?

Chen said the questions and the research idea grew out of a conversation with a former student, who was trying to start a company to revolutionize agricultural finance with AI and help lenders make better loan decisions.

While algorithms can help detect bad borrowers who are likely to default, Chen said it is important to understand the mistakes machine learning can make, namely "false positive" and "false negative" errors.

In the farm loan example, a false-positive mistake means turning away a good borrower, costing the lender a business opportunity. A false-negative mistake, on the other hand, means accepting a bad borrower whose loan may end up in default, costing the lender its principal. Chen said an economic framework is needed to help decision-makers balance the two types of mistakes.

"This is where my recent research comes in, where we derive an economic loss function framework that teaches the learning algorithm exactly how to pay attention to the different types of mistakes in a systematic manner," he added.

While it is intuitive to teach algorithms to minimize false-negative mistakes, the professor said other factors, such as the base rate, need to be considered. In the farm loan case, the base rate refers to the proportions of good and bad borrowers in the market, and hence the chance of encountering either. If the fraction of bad borrowers is high, the algorithm is more likely to make a false-negative mistake, so lenders should put more effort into minimizing those, Chen said.
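A rough back-of-the-envelope calculation shows why the base rate matters. The per-loan costs and classifier error rates below are hypothetical.

```python
# Hypothetical illustration: how the base rate of bad borrowers shifts the
# expected cost of each mistake for a classifier with fixed error rates.
cost_fn, cost_fp = 1.0, 0.2                # hypothetical per-loan costs
miss_rate, false_alarm_rate = 0.10, 0.10   # hypothetical classifier error rates

for base_rate in (0.05, 0.20, 0.40):       # fraction of bad borrowers
    exp_fn_cost = base_rate * miss_rate * cost_fn
    exp_fp_cost = (1 - base_rate) * false_alarm_rate * cost_fp
    print(f"base rate {base_rate:.2f}: "
          f"expected FN cost {exp_fn_cost:.3f}, expected FP cost {exp_fp_cost:.3f}")
```

With these illustrative numbers, false-positive costs dominate when bad borrowers are rare, but as their share rises the expected cost of false negatives overtakes it, which is the trade-off the base rate captures.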

Compared with the standard way of training learning algorithms, Chen said the economic loss function teaches algorithms to take economic trade-offs into account automatically and delivers better performance.

A computational framework to deal with strategic attacks on AI algorithms

AI algorithms are likely to be manipulated once people understand how they work. Chen elaborated with the agricultural lending example in his speech: if a farmer (borrower) knows that spending a lot of time at a bar after midnight would hurt their credit assessment, they might pay with cash rather than a credit card to eliminate the digital footprint.

Chen said a strategic attack on the algorithm, such as a farmer's behavior change, could mean that information learned from old data is no longer valid in new data. His research, which he said borrows a page from game theory, presents a computational framework that helps compute robust algorithms embedded in "games" between borrowers and lenders.
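One common way to formalize such a game, sketched below purely as an illustration rather than Chen's actual framework, is to alternate between a borrower "best response" that games the mutable features of the current scoring model and a lender step that refits the model on the gamed data. The function names, the effort-cost parameter, and the mutable-feature mask are all hypothetical.

```python
import numpy as np

def borrower_best_response(X, w, mutable, effort_cost=0.5):
    """Hypothetical best response: each borrower nudges the features they can
    cheaply change (e.g. digital-footprint signals) to lower their predicted
    default score, paying a quadratic effort cost."""
    X_gamed = X.copy()
    # Only features marked mutable (a boolean mask) can be gamed; immutable
    # attributes such as income or education stay fixed.
    X_gamed[:, mutable] -= w[mutable] / effort_cost
    return X_gamed

def train_robust_scorer(X, y, mutable, rounds=10, lr=0.1):
    """Sketch of the lender/borrower 'game': alternate between borrowers gaming
    the current score and the lender updating a logistic score on gamed data."""
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        X_gamed = borrower_best_response(X, w, mutable)   # borrowers move
        p = 1.0 / (1.0 + np.exp(-X_gamed @ w))            # predicted default prob.
        w -= lr * X_gamed.T @ (p - y) / len(y)            # lender's gradient step
    return w
```

Because only the mutable coordinates can be shifted, a model trained this way tends to lean more heavily on hard-to-change attributes, which matches the intuition Chen describes next.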

"Our goal here is to try to model these kinds of economic incentives (e.g., to get a better credit assessment) and better anticipate what farmers' behavior would be," the professor said.

With the algorithms' help, lenders could strategically anticipate borrowers' behaviors and rely on attributes that are difficult to change, such as a farmer's income and educational background, according to Chen.

He also said the computational framework could apply to other fields, such as health insurance, where companies need to screen people's health-related behaviors.