For about a decade I have been involved in various system development efforts involving Artificial Intelligence. They have all been challenging, but in different ways. Today AI is rightfully considered a game changer in many industries and areas of society, but it makes sense to reflect on the challenges I have encountered in order to assess the viability of AI solutions.
10 years of AI
About 10 years ago I designed my first AI solution, or Machine Learning as we typically called it back then. I was working in the retail industry at the time, trying to find the optimal way of targeting individual customers with the right offers at the right time. A lot of thought went into it, and I worked with an awesome university professor (Rune Møller Jensen) to identify and design the best algorithm for our problem. This was challenging but not impossible. This was before TensorFlow or any other comprehensive ML libraries existed. Nevertheless, everything died in protracted discussions about how to implement our design in SQL (which of course is not possible: how do you implement a k-means clustering algorithm in SQL?), since that was the only language known to the BI team responsible for the solution.
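To see why SQL was the wrong tool, consider what k-means actually does: it alternates between assigning points to centroids and recomputing the centroids, looping until things settle. That kind of iteration is natural in a general-purpose language and awkward in declarative SQL. Here is a minimal sketch in Python, using one-dimensional toy data rather than anything resembling our actual retail problem:

```python
import random

def kmeans(points, k, iters=20):
    # Start with k points chosen at random as the initial centroids.
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if its cluster came up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious clusters: the centroids settle near 1.5 and 10.5.
centroids = kmeans([1.0, 2.0, 10.0, 11.0], 2)
```

The assign-then-update loop is the whole algorithm, and it is precisely the part that has no straightforward expression in the SQL dialects of that era.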
Fast forward a few years and I found myself in the financial services industry trying to build models to identify potential financial crime. Financial crime has a pattern, and this time the developers had an adequate language to implement AI and were open to using the newest technologies, such as Spark and Hadoop. We were able to generate quite a few promising ideas and POCs, but everything again petered out. This time the challenge was the regulatory wall, or rather various more or less well-defined regulatory concerns. Again the cultural and organizational forces against the solution were too big to actually produce a viable product (although somehow we did manage to win a big data prize).
Fast forward even further, to today. As the person responsible for designing data services for the City of New York, I find that the forces I encountered earlier in my career are still there, but the tides are turning: more people know about AI and take courses to prepare for how it works. Now I can actually design AI solutions that get implemented without serious internal forces working against them. But the next obstacle is already waiting, and this time it is particular to government and not present in private industry. When you work for a company it is usually straightforward to define what counts as good, that is, something you want more of, like, say, money. In the retail sector, at the end of the day all they cared about was sales. In the financial services sector it was detecting financial crime. In the government sector it is not as straightforward.
What drives government AI adoption?
Sure, local, state, and federal governments will always say that they want to save money. But the force driving everything in government is something else: public perception. Public perception is what gets officials elected, and elected officials define the path and appoint the directors who hire the people who ultimately decide which initiatives get implemented. Public perception is only partially defined by efficiency and monetary results. Other factors interfere with success, such as equity, fairness, and transparency.
Let me give some examples. One project I am working on has to do with benefits eligibility. Initially City Hall wanted to pass legislation that would automatically sign up residents for benefits. However, after intervention by experts this was changed to doing a study first. The problem is that certain benefits interfere with other benefits, and signing you up for something you are eligible for may affect you negatively because you could lose another benefit.
While this is not exactly artificial intelligence, it is still an algorithm that displays the same structural characteristics: the algorithm magically makes your life better. Even if we could make the algorithm compute the maximum value across all available benefits and sign the resident up for the optimal combination, we still would not necessarily please everyone. Since benefits are complex, some combination might give you more in the long term at the cost of the short term. What if the resident prefers something in the short term? What if a family gets evicted and has to live in a shelter because the system failed to detect eligibility due to bad master data?
When I was in the retail industry, the equivalent failure would be a vegetarian getting an offer for steaks. Not optimal, but also not critical as long as we sold ten more steaks elsewhere. In the financial services industry, it would be a minor Mexican drug lord successfully laundering a few hundred thousand dollars. Again, not great, but also not a critical issue. In government, a family being thrown out on the street is a story the media could pick up to show how bad the administration is. Even if homelessness drops 30%, that one story could be the difference between reelection and loss of power.
What does success look like?
So, understanding the reward structures is crucial to understanding what will drive AI adoption. Currently I am not optimistic about using AI in the City. Other recent legislation has mandated algorithmic transparency: the source code for every algorithm that affects decisions concerning citizens must be open to the public. While this makes sense from a public perception perspective, it does not from a technical one. Contrary to popular belief, I don't think AI will catch on in government any time soon. And I think this can be generalized to any sector where the reward function is multi-dimensional, that is, where success cannot be measured by just one metric.
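The multi-dimensional reward problem can be made concrete with a tiny Pareto-dominance check. The two policies and their scores below are invented for illustration; the point is only that when no option beats another on every dimension, picking a "best" one requires choosing weights, which is a political decision, not a technical one:

```python
# Hypothetical scores for two candidate policies on three
# success dimensions -- illustrative numbers only.
scores = {
    "policy_a": {"cost_savings": 0.9, "equity": 0.4, "transparency": 0.6},
    "policy_b": {"cost_savings": 0.5, "equity": 0.8, "transparency": 0.7},
}

def dominates(x, y):
    """True if x is at least as good as y on every dimension
    and strictly better on at least one."""
    return (all(x[d] >= y[d] for d in x)
            and any(x[d] > y[d] for d in x))

a, b = scores["policy_a"], scores["policy_b"]
# Policy A saves more money; policy B scores higher on equity and
# transparency. Neither dominates, so there is no objective winner.
```

In a single-metric business, `dominates` almost always has an answer. In government it routinely does not, and that, I think, is the structural reason AI adoption there will lag.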