Last month, I applied for a loan at my local Federal Credit Union. This is a friendly and personable place, not a multinational financial megalith. I was turned down, not by the loan agent, but by the form-processing software. I appealed to the agent: ‘If I can have a few minutes to talk to you, I think I can convince you that I am actually a very low risk for default.’ He told me not to bother. According to Federal regulations passed after the 2008 mortgage meltdown, individual lenders no longer have any discretion over lending decisions; loan approval follows a standard algorithm, strictly. — JJM
Deep learning represents a fundamentally different way to program computers. Instead of providing logic and instructions, the programmer provides the computer with skills for observation and generalization by example. The program then creates its own logic.
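The idea can be made concrete with a toy sketch (hypothetical, not from the essay): below, one function encodes an explicit rule written by a programmer, while a single perceptron is instead shown labeled examples and derives its own rule, ending up encoded only in learned weights. All names here are illustrative.

```python
# Conventional programming: the logic is written out by the programmer.
def approve_conventional(income, debt):
    return income > 2 * debt  # an explicit, inspectable rule

# Programming by example: a perceptron infers a rule from labeled
# observations; the resulting "logic" lives only in its weights.
def train_perceptron(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# Teach logical AND purely from examples, never stating the rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
preds = [1 if w0 * x0 + w1 * x1 + b > 0 else 0 for (x0, x1), _ in data]
print(preds)  # the learned weights now reproduce AND
```

The learned rule works, but nothing in the weights announces *why* it works; at scale, that opacity is exactly the problem the examples below describe.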
There is a self-driving car that works really well, perhaps better than Google's or Tesla's, but no one understands how it does what it does. It learned by watching human drivers. Should we trust it?
Already, computers are deciding who gets a loan, who gets hired, and who gets parole. These decisions have proved to work very well in practice, perhaps better than any human judge. But are they fair? Are they equitable? One problem is that such decisions cannot be challenged, queried, or held to account.
If algorithms drive cars, they are making life-and-death decisions. It is probable that algorithms are already used to fly military drones, but we, in whose names this killing is perpetrated, are not permitted to know even whether the decisions to take human lives are made by human operators or by silicon brains.