AI, Traffic Tickets and Stochastic Liberty

Recently I received a notification from Green Mobility, the electric car-share company I use from time to time. I have decided not to own a car any longer and to experiment with other mobility options; not that I care about the climate, it’s just, well, because. I like these cars a lot. I like their speed and acceleration, and that you can just drop them off and never think about them again. Apparently I enjoyed the speed and acceleration a little too much, since the notification said the police claimed I (allegedly) was speeding on one of my trips. For a very short period I toyed with the “it-wasn’t-me” approach, but quickly decided against it since technology was quite obviously not on my side here. Then I directed my disappointment at the company for charging me an extra fee on top of the ticket, a so-called administration fee, as if complete mobility immunity should have come along with all the other perks of not owning a car. But that was a minor fee anyway. Then I decided to rant at the poor support person, on the grounds that their notification called it a parking ticket and that, according to the photo, I obviously wasn’t parking. Although in my heart I realized this was not going anywhere.

I believe this is a familiar feeling to any of my fellow motorists: the letter in the mail displaying your innocent face at the wheel of your car and a registered speed higher than allowed, along with payment details for the ticket you received for the violation. It is interesting to observe the anger we feel and the unmistakable sense that this is deeply unfair, even though it obviously is not. The fine is often the result of an automated speed camera that doesn’t even take normal working hours or lunch breaks (a first hint of why it feels unfair). A wide suite of mobility products, like GPS systems and Waze, keeps track of these speed cameras in real time, and some people follow and report them with something approaching religious zeal. But what is the problem here? People know, or should know, the speed limit, and they know you will get a ticket if you are caught. The operative part of that sentence seems to be the “if you are caught” part. More about that in a minute.

The Technology Optimization Paradox

Last year I was working with the City of New York to pilot a system that would use artificial intelligence to detect different things in traffic. Like most innovation efforts in a city context, it was not funded beyond the hours we could put into it, so we needed to get people excited and find a sponsor to take the solution further. Different suggestions about what we should focus on came up. One of them was that we should use the system to detect traffic violations and automatically fine car owners based on their license plates.

This is completely feasible; I have received tickets myself based on my license plate, so I gathered that the technology would be a minor issue. We could then roll it out on all of the approximately 3,000 traffic cameras already in the city. Imagine how much revenue that could bring in. It could probably sponsor a couple of new parks or sports grounds, or even a proper basketball team for New York. At the same time it would improve traffic flow, because fewer people would double-park or park in bus lanes. When you look at it, it seems like a clear win-win: we could improve traffic for all New Yorkers, build new parks, and have a team in the NBA playoffs (eventually). We felt pretty confident.
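To get a sense of the scale we were imagining, here is a back-of-envelope sketch. Only the camera count comes from the pitch above; the violation rate and the fine amount are hypothetical placeholders, not real NYC figures.

```python
# Back-of-envelope revenue estimate for automated ticketing.
# Only the camera count (~3,000) comes from the text above; the
# violation rate and fine amount are illustrative assumptions.

CAMERAS = 3_000             # existing NYC traffic cameras (from the pitch)
VIOLATIONS_PER_CAM_DAY = 5  # hypothetical detected violations per camera per day
FINE_USD = 115              # hypothetical flat fine per violation

daily = CAMERAS * VIOLATIONS_PER_CAM_DAY * FINE_USD
print(f"Daily revenue:  ${daily:,}")        # $1,725,000
print(f"Yearly revenue: ${daily * 365:,}")  # $629,625,000
```

Even with modest assumptions, the numbers add up to serious park-and-basketball money, which is exactly why the pitch felt so safe.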

This is where things got complicated. We quickly realized that this was not a pitch that would energize anyone, at least not in a way that was beneficial to the project. Even though people are getting tickets today, and do not seriously suggest that they should not, the idea of OPTIMIZING this function in the city seemed completely off. This is a general phenomenon in technological solutions, which I call the “Technology Optimization Paradox”: optimizing a function that is deemed good and relevant meets resistance beyond a certain optimization threshold. If the function is good and valuable, there should be no logical reason why doing it better should be worse, but this is sometimes how people feel. The paradox is often seen in the area of law enforcement. We don’t want surveillance even though it would greatly help the fight against terrorism. We like the work of the FBI that leads to arrests and the exposure of terrorist plots, but we don’t want to open our phones to pervasive eavesdropping.

Stochastic Liberty

This is where we get back to the “if you are caught” part. Everyone agrees that it is fair to be punished for a crime if you are caught. The emphasis here is on the “if”. When we use technology like AI, we get very close to substituting a “when” for the “if”. That is what we feel is unfair. It is as though we have an intuitive expectation that we should have a fair chance of getting away with something. This is what I call the right to stochastic liberty: the right of the individual to have events remain non-deterministic, especially adverse events. We want the liberty of having a chance to get away with an infringement. This is the issue many people have with AI when it is used for certain types of tasks, specifically tasks that carry an optimization paradox: it takes away the stochastic liberty, the element of chance.
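The gap between “if” and “when” is easy to make concrete with a little probability. Here is a minimal sketch with hypothetical detection rates: under occasional human spot checks you have a realistic chance of never being caught, while AI watching every camera collapses that chance to practically zero.

```python
# How likely are you to get away with it? The probability of never
# being caught over n infractions, for a given per-infraction
# detection probability. The rates below are hypothetical.

def p_never_caught(p_detect: float, n_infractions: int) -> float:
    """Chance of escaping detection on every one of n infractions."""
    return (1.0 - p_detect) ** n_infractions

for p in (0.01, 0.10, 0.99):  # occasional patrol car ... AI on every camera
    print(f"p_detect={p:.2f}: P(never caught in 100 trips) = "
          f"{p_never_caught(p, 100):.6f}")
# 0.01 -> 0.366032 (a real chance), 0.10 -> 0.000027, 0.99 -> ~0
```

The “if” lives in that first line of output; the “when” is the last one.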

Let us look at some other examples. When we do blood work, do we want AI to automatically tell us about all our hereditary diseases, so the doctor can tell us to eat more fiber and stop smoking? No sir, we quietly assert our right to stochastic liberty and the idea that maybe we will be the 1% who live to be 90 fuelled on a diet of sausages, fries and milkshakes, even though half our family died of heart attacks before turning 40. But do we want AI to detect a disease we already suspect we might have? Yes!

Do we want AI to automatically detect when we have put too many deductions on our tax return? No way, we want our stochastic liberty. Somebody in the tax department must sit sweating and justify why regular citizens’ tax returns are being looked through. At most we can accept the occasional spot test (like the rare traffic police officer, who also has to take a break, get lunch and check the latest sports results; that’s fair). But do we want AI to help us find systematic money-laundering and tax-evasion schemes? Hell yeah!

Fear of the AI God

Our fear of AI is that it would become this perfect god that actually enforces all the ideals and ethics we (more or less) agree on. We don’t want our AI to take away our basic human right of stochastic liberty.

This is a lesson you don’t have to explain to the politicians who ultimately run the city and decide what gets funded and what does not. They know that unhappy people who get too many traffic tickets they consider unfair will not vote for them. This is what some AI developers and technocrats do not appreciate when they talk about using AI to make the city a better place. The city is a real place where technology makes real impacts on real people, and the dynamics of a technology solution exceed those of the system in isolation. This is a learning point for all technology innovation involving AI: human preferences and political realities impose limits on an AI solution that are every bit as real as the choice of algorithm, IOPS and CPU usage.
