Part of the proof published by Yining Wang et al. in an article entitled "A Theoretical Analysis of NDCG," published in the Journal of Machine Learning Research. NDCG (normalized discounted cumulative gain) is somehow used in the ranking of artificial intelligence systems.
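For readers curious what NDCG actually measures, here is a minimal sketch of the standard formula in Python. This is my own illustration of the textbook definition, not code from the Wang et al. paper:

```python
from math import log2

def dcg(relevances):
    # Discounted cumulative gain: each item's relevance score is
    # discounted by the log of its (1-based) rank, so results near
    # the top of the list count for more.
    return sum((2 ** rel - 1) / log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    # Normalize by the DCG of the ideal (best-possible) ordering,
    # yielding a score in [0, 1] where 1 means a perfect ranking.
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# Relevance grades of the results a system returned, in the order
# it returned them (3 = highly relevant, 1 = barely relevant).
print(ndcg([1, 2, 3]))  # worst-first ordering: score below 1
print(ndcg([3, 2, 1]))  # ideal ordering: score of exactly 1.0
```

In other words, it is a single number grading how well a ranked list (say, search results) matches the best possible ordering, which is why it turns up in evaluating AI ranking systems.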
The concluding keynote session on the third and last day of ALM Media's LegalWeek, the Experience, "How AI Benefits People and Society and Its Legal Challenges, Opportunities, and Ethics," was quite thought-provoking. The panel was excellent: high-energy and very knowledgeable. But (and it was only because the hour flew by) they did not get around to discussing the ethics of it all.
L to R: Zev Eigen, Andrew Arruda, Natalie Pierce, Brian Kuhn, Ray Thomas
For many of us, this is key, and what makes artificial intelligence scary. Who is going to teach these machines ethics, and, frankly, can ethics really be taught to a machine? We don’t understand equations like the ones above, and we are never going to understand them, so how can we know?
My skepticism comes from my background in evolutionary biology and its lesson that life moves forward on what works today. Apart from (some) humans, life doesn't use long-term planning much, and ethics per se are certainly lacking. Altruism is rare in the animal kingdom and not fully understood even after decades of study, because it doesn't really make sense: why would I sacrifice myself for your benefit unless there is actually something in it for me after all, in which case it isn't really altruism?
Humans have only been able to become truly selfless because our intellect allows us to recognize a Greater Good and to derive some form of satisfaction in serving it. That shy smile of gratitude might work for us, being susceptible to warm fuzzies, but I can already hear the descendants of Watson chuckling in their sleek lairs. Machines are not going to learn altruism.
One need only look at health care in this country to see the problem. It is so expensive that health insurance can only work well if we all pay into it throughout the healthy portions of our lives, because only that will build reserves adequate to pay for the care we are likely to require if we're not lucky enough to drop dead out of the blue. And yet we exploded in rage when paying into it became mandated. I'd love to meet the algorithm asked to approve the $100,000-plus that my 88-year-old father's three-day hospitalization cost. Would the fact that he's a sweet old guy and not a crotchety old man make a difference? You will not find group health insurance on the plains of the Serengeti.
While of course everyone on the panel acknowledged that AI is in its infancy, they pointed with evident enthusiasm to some of the remarkable successes already achieved in real life by the likes of Deep Blue and Watson. The idea that physicians are already able to call upon such systems to treat complex medical conditions is reassuring, until the next-generation supercomputer wakes up one morning and wonders why we are bothering to treat these conditions at all. Uh-oh. But true learning, I would submit, dictates that eventuality.
Fortunately, except in the highest-end laboratories, we seem a ways from there. Who hasn't wondered what Netflix was "thinking" when it recommends movies "because you watched" so-and-so? Are Amazon's you-might-likes, based on what you've looked at, all that on-point? Why do targeted ads know I was researching something to buy, but not that I've already bought it, so they should stop wasting someone's money on continuing to present me with targeted ads? Can enough lines of code be written to tell Flickr, the photo-sharing website that now tries to tag photographs automatically, that red pills in a blister pack are not gourmet tomatoes in a plastic sleeve? And would you really use a self-driving Uber for all of your transportation needs?
Will any software anywhere ever learn to give a job to an older worker with some health issues and a spotty full-time employment record, when clearly the "better" decision would be not to, especially after tapping into his or her various rewards cards to see what they eat, what medications they take, and so on? Let's go back to Uber, perhaps unfairly. Would their algorithm ever learn to care whether it has 1,000 drivers in a city making $10,000 apiece or 10,000 drivers making $1,000 apiece, when more drivers benefit Uber by reducing wait times? Throw in an ability to check local unemployment rates and the median age of new hires in that town, and is it any surprise that their driver-recruitment page features an older, and presumably under-employed, man behind the wheel? Whatever they have running the show is already confident enough to offer drivers a few hundred bucks for new-driver referrals, because it has "learned" that existing drivers will increase their own competition for the price of an NFL ticket. Now that is King of the Jungle behavior!
But another concern emerges. The day before this session, I attended one in which studies were presented showing that the precision and recall of technology-assisted review (TAR), the famous predictive coding this conference devotes considerable attention to, varied significantly depending on the processing used, a result that I, like many people, did not fully understand. While the two are not directly comparable in all situations, you can bet that in X number of years it will be shown that not all AI is equally smart, and yet all of it will be cloaked in that aura of invincibility technology so often confers.
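For context, precision and recall, the two measures those TAR studies reported, are simple ratios over sets of documents. A minimal sketch in Python, with hypothetical document IDs of my own invention:

```python
def precision_recall(retrieved, relevant):
    """Precision: what fraction of the documents the system flagged
    are actually relevant. Recall: what fraction of the truly
    relevant documents the system managed to find."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical review: the system flags 4 documents, 3 of which are
# truly relevant, but 6 documents in the collection were relevant.
p, r = precision_recall({"d1", "d2", "d3", "d4"},
                        {"d1", "d2", "d3", "d5", "d6", "d7"})
print(p, r)  # 0.75 precision, 0.5 recall
```

The unsettling point of those studies is that both numbers can swing substantially for the same document collection depending on upstream processing choices, which is exactly the kind of variability a non-specialist has no way to see.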
The final irony may be that an "honest" machine would "know" that AI is not ready for prime time and prohibit its use for anything beyond picking your next book, movie, or date. But until such honest machines are actually in charge, there is nothing to prevent some humans from deploying AI as if it were up to making life-or-death, or at least quality-of-life, decisions, thereby letting the germ out of the sealed petri dish. And when AI is in charge, Stephen Hawking thinks we might have much bigger problems, and I think Charles Darwin would agree.