The Ethics Behind Artificial Intelligence

    Artificial Intelligence (AI) has the power to transform how we live and work, giving businesses potent new tools to make their operations more efficient. However, academics and technologists have a number of concerns about the ethics of AI.

    Mike Guggemos, Chief Information Officer at Insight Enterprises, and Kevin LaGrandeur, Ph.D., Professor of English at the New York Institute of Technology, Fellow of the Institute for Ethics and Emerging Technologies, and Co-Founder of the NY Posthuman Research Group, share their views on the thorny issue of AI and ethics.

    Q - How are organisations currently using AI?

    KL: AI is being used to automate an increasing number of numerical, formulaic and repetitive processes. One of the most talked-about applications of AI to date is self-driving or autonomous vehicles. Codelco, for example, is a Chilean copper mining company that has been a global pioneer in the use of autonomous trucks. That said, the use of AI for self-driving vehicles in more complex environments, such as city centres, has been less successful, with a number of high-profile accidents reported in the media earlier this year.

    Another interesting example is the Russian AI robot named “Vera” who wants to take over your hiring process. Vera claims to be able to find and filter CVs, conduct interviews over the phone or online and select the top candidates 10 times faster than a human being. This represents a pretty revolutionary displacement of a human function.

    MG: Vera is being used predominantly to recruit for blue-collar jobs and is just being extended to technical positions. We’re beginning to see more and more chatbots in the workforce. At Insight, our number one client service agent in terms of client satisfaction is a bot. We get the most positive feedback on it; it handles calls most efficiently and, thanks to natural voice recognition and access to an FAQ database, it always gets the right answer and is pleasant to interact with.
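    To make the mechanics concrete: the pattern behind such a bot is, at its simplest, retrieval against a curated FAQ database. The following is a minimal, hypothetical sketch in Python – the FAQ entries and function names are invented for illustration, and a real deployment would sit behind speech-to-text and a much richer retrieval layer.

```python
import string

# Toy FAQ "database" - entries are invented for illustration only.
FAQ = {
    "reset password": "Use the self-service portal and choose 'Forgot password'.",
    "order status": "Order status is shown under 'My Orders' in your account.",
    "return policy": "Unopened items can be returned within 30 days.",
}

def answer(utterance: str) -> str:
    """Return the FAQ answer whose key shares the most words with the utterance."""
    cleaned = utterance.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())
    best = max(FAQ, key=lambda key: len(words & set(key.split())))
    if not words & set(best.split()):
        # No overlap at all: hand off to a human agent.
        return "Let me transfer you to a colleague who can help."
    return FAQ[best]

print(answer("How do I reset my password?"))
# -> Use the self-service portal and choose 'Forgot password'.
```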

    Elsewhere we’re seeing AI play an integral role in the advancement of medical diagnostics. In dermatology, AI is proving particularly effective in diagnosing skin cancer, while radiological practices are also benefitting from AI systems that can read and interpret multiple images quickly. The recent announcement that the U.S. National Security Agency (NSA) is expanding its use of Amazon Web Services with a $600 million contract has significant ramifications for AI. The NSA is using AI from an intelligence-gathering perspective, employing pattern-recognition algorithms to assemble disparate sources and surface the similarities between them.
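    As a rough illustration of what creating a "layer of similarity" can mean in practice, consider the classic record-linkage pattern: score every pair of records from disparate sources and link the pairs whose attributes overlap enough. The sketch below is hypothetical – the record contents and the 0.3 threshold are invented – but the Jaccard-overlap idea is representative of the simplest form of the technique.

```python
from itertools import combinations

# Invented records standing in for "disparate sources".
records = [
    {"id": "A1", "tokens": {"j.smith", "london", "+44-20", "acme"}},
    {"id": "B7", "tokens": {"john.smith", "london", "acme", "procurement"}},
    {"id": "C3", "tokens": {"m.jones", "berlin", "logistics"}},
]

def jaccard(a: set, b: set) -> float:
    """Overlap of two attribute sets as a fraction of their union."""
    return len(a & b) / len(a | b)

# Link every pair whose similarity clears an (arbitrary) threshold.
links = [
    (r1["id"], r2["id"], round(jaccard(r1["tokens"], r2["tokens"]), 2))
    for r1, r2 in combinations(records, 2)
    if jaccard(r1["tokens"], r2["tokens"]) >= 0.3
]
print(links)  # -> [('A1', 'B7', 0.33)]
```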

    In summary, while it’s clear to see the massive potential AI has for the future, its current applications in organisations are largely limited to repetitive activities or pattern recognition.

    Q - What are the concerns surrounding the use of AI in business and the wider world?

    KL: Businesses are mainly concerned about the displacement of workers and the glitchy nature and simplicity of current AI applications. AI just can’t do complex tasks such as making ethical decisions. In the wider world there are concerns about safety and privacy. Our homes could already be largely automated by AI using the Internet of Things (IoT), but cybersecurity is so poor for third-party apps and devices, such as thermostats, smart doorbells and refrigerators, that people don’t trust the technology currently available.

    MG: I think it comes back to how we define what AI is and what it is not. There are areas today where AI is so prevalent that people don’t even think about it. Take, for example, how Spotify or Netflix present you with suggestions – that is all driven by algorithms that often have the effect of narrowing your choices rather than widening them.

    KL: Absolutely. It’s removing the fluidity to explore, and ultimately limiting creativity and innovation. It’s enough to make you intentionally ‘like’ things you dislike, just to throw the algorithms off.
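    The narrowing effect both speakers describe falls straight out of similarity-based ranking. Here is a deliberately minimal, hypothetical sketch: because candidates are scored purely by overlap with listening history, anything outside that history can never rank first, and the only way to change the ranking is to change the history – hence the temptation to ‘like’ things you dislike.

```python
# Invented listening history and catalogue tags, for illustration only.
history = {"indie rock", "lo-fi", "shoegaze"}

catalogue = {
    "Album A": {"indie rock", "shoegaze"},
    "Album B": {"indie rock", "lo-fi"},
    "Album C": {"jazz", "bebop"},          # can never win, however good it is
    "Album D": {"classical", "baroque"},   # likewise
}

def score(tags: set) -> int:
    """Rank purely by overlap with what the user already consumes."""
    return len(tags & history)

for title in sorted(catalogue, key=lambda t: score(catalogue[t]), reverse=True):
    print(title, score(catalogue[title]))
# Albums A and B dominate every run; 'liking' a jazz record is the only
# way to push the history set, and therefore the ranking, somewhere new.
```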

    MG: From a business perspective, people often forget that although AI has intelligence, it doesn’t have wisdom. Consider the wisdom required to understand that Skype for Business isn’t going to allow a connection to a personal Skype account. A human being would find a way around this, but a bot can’t unless it has been taught the workaround. Any intervention must itself be automated, and that, by its very nature, is an issue.

    People tend to overestimate AI. They think there are capabilities that are not there, and that there are immediate concerns about AI that are really many years away. From a technologist’s point of view, chatbots and robotic process automation are great, but they are problematic to implement. This is why, over time, AI will create support jobs for people as much as it displaces them. Automation creates work; you may end up shifting job roles, but at the end of the day, someone needs to maintain the code.

    Most people don’t realise they are interacting with AI, and their perception of AI is often so far out of the realm of immediate possibility that it’s fantastical. We marvel at self-driving cars, but they are simply following a mathematical rule set of ‘if-then’ statements. The concept of ‘The Terminator’ and the prospect of robots ruling the world is ludicrous – in the foreseeable future at least.
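    A caricature of such a rule set, to make the point concrete. The conditions and thresholds below are invented, and real autonomous stacks combine learned perception with planning – but every rule here is an explicit, human-authored condition, which is the point being made.

```python
# A deliberate caricature of an 'if-then' driving rule set.
def next_action(obstacle_ahead: bool, light: str, speed_kmh: float) -> str:
    """Pick an action from hand-written rules; nothing here is learned."""
    if obstacle_ahead:
        return "brake"
    if light == "red":
        return "stop"
    if light == "amber" and speed_kmh < 20:
        return "stop"  # safe to halt before the line at low speed
    return "proceed"

print(next_action(False, "green", 50.0))  # -> proceed
print(next_action(True, "green", 50.0))   # -> brake
```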

    Q - How important is privacy?

    MG: The privacy aspect of AI is huge and of course brings us to the role of legislation. Going back to how AI is used in medical diagnosis, let’s take a skin cancer clinic, for example, where AI does the initial screening and the results are passed on to a human being. Privacy around patient records and data is of course highly legislated, but what many people don’t realise is that AI has been used for medical diagnosis since the 1960s, long before current patient privacy legislation came into being. The bottom line is that the AI capability preceded the legislation. Had it been the other way around, with legislation preceding the capability, the capability probably wouldn’t have been created and millions of lives might not have been saved.

    The same is true of driverless cars. It was the openness of legislation in the U.S. state of Arizona that allowed driverless car testing and created the opportunity. Yes, legislation is necessary, but if it is applied too early it can hold back progress. It’s a very fine line.

    KL: Privacy is a great concern and one the general public cares about deeply, especially since the introduction of smart TVs and virtual assistants such as Amazon’s Alexa into our homes. What these devices capture, and to whom they send it, is a real privacy issue. The problem is that the information they gather is sent to third parties, and you and I have no control over their privacy protocols.

    Do we need legislation? At some point we will, but legislation is often most effective retrospectively: an incident must occur so you can see the problem and then legislate for it. Businesses are looking to use AI for efficiencies; it’s not built for ethical decisions. Some form of curbs may be necessary, but they may well inhibit innovation, progression and ultimately profit.

    MG: There are many excruciating examples of virtual assistants sharing private conversations with random third parties. While the potential outcomes of this don’t bear thinking about for the individuals concerned, ultimately this is going to make the technology better and safer. If you look at any major development in history, progress comes through failure; we learn from our mistakes and gain greater benefits.

    Q - How can we treat the issue of responsibility?

    MG: AI is no different from any other technology we’ve developed and grown up with. If you go back to some of the admonishments we received as children – ‘don’t run with scissors’, ‘be kind to others’ and so on – these fundamental human values apply in the emerging technology and AI space. In other words, if you are not certain what you are doing, ensure you only harm yourself.

    KL: These issues are being hotly debated at events such as the Governance of Emerging Technology Conference. We know that there’s currently a huge push for autonomous vehicles, but when it comes to who is responsible for a self-driving car and how we decide about responsibility in legal terms, there are no real answers yet.

    To go further, how do you program a vehicle to make ethical decisions? What ethical decisions can you translate into an algorithm? Attempting to code fairness suggests that we can define it, and therein lies the primary ethical concern. How can you determine what is fair and what isn’t? Who determines those rules?
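    The difficulty becomes obvious the moment you try to write such a rule down. The hypothetical sketch below is not a proposal; it exists only to show that every constant in it is a contested moral judgment that some person had to hard-code.

```python
# Hypothetical illustration of the problem of coding an ethical decision.
# Every constant below is a moral choice, not an engineering one.
def swerve_decision(occupants: int, pedestrians: int) -> str:
    """Decide whether to swerve. Who decided lives are comparable by count?"""
    OCCUPANT_WEIGHT = 1.0    # value judgment
    PEDESTRIAN_WEIGHT = 1.0  # value judgment
    if pedestrians * PEDESTRIAN_WEIGHT > occupants * OCCUPANT_WEIGHT:
        return "swerve"      # protect the greater weighted number
    return "stay"

print(swerve_decision(occupants=1, pedestrians=2))  # -> swerve
```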

    MG: What’s more, the answer to who gets to determine what is right and what is wrong varies according to whether I am involved in that situation or viewing it in the abstract. This is at the heart of the true issue around AI and ethics – whether it’s a self-driving car, a drone or a medical screening, how do we determine what to do when there’s an issue? Who says what’s the right thing to do?

    Q - Are technology providers doing enough to ease public concerns?

    MG: In my interactions with companies using and creating emerging technologies, the general position comes back to my earlier point of not running with scissors. We have to presume that both the producer and the user of the technology are responsible. Innovation will be stifled if the responsibility is solely placed with the producer.

    KL: Technology companies are trying, especially the bigger ones. Autonomous vehicle producers are working with philosophers and academics on the ethics of AI, but it’s still early days. As Mike alluded to, ethical questions are often only addressed when disasters happen, but this isn’t limited to AI; it’s the nature of society.

    Q - Should we be concerned about AI taking jobs from humans?

    MG: AI is an absolute game changer but, at the end of the day, it’s about robotic process automation and, for robotics to be effective, there has to be a process expert – someone who knows the process from tip to toe. There are no shortcuts: if you automate a process badly, you just get poor output, faster.

    To make this work, you have to increase people’s skillsets to do the mapping in the most efficient way and to coach and modify both the robot and the process. The individual then becomes a manager of multiple bots, highly skilled in process mapping. Yes, roles and responsibilities are changing, but only in a handful of situations have people actually been replaced. This may change over time, but at the moment we are seeing people move around the organisation rather than being removed.

    KL: It’s the manufacturing jobs that are going, but that’s nothing new; it’s been happening for decades and, for the most part, we are seeing individual tasks being automated rather than entire jobs. I firmly believe that this sort of creative disruption is going to create more jobs than it destroys, but that shift may take 25 years. In the interim, some people will lose their jobs and will need to retrain and adapt to work in new, efficient ways. Some people are better at this than others, and organisations will need to address this carefully.

    MG: If you can automate something, then you need fewer people than you needed in the past; that’s been true for the last 20, 50, 100, even 200 years. This is not exclusive to AI; it simply comes down to automating a repetitive task. Different industries will be impacted in different ways, with accountancy, human resources and the legal profession most affected. The good news for the IT industry is that its people tend to be better educated, more adaptable and more creative than those in many other industries, and these are all qualities that lend themselves well to retraining.

    KL: As I’ve said, this creative disruption will ultimately create more jobs than it destroys, but it’s likely to have more of an impact on less-skilled workers. I don’t see lawyers disappearing from the workforce any time soon, but legal secretaries might.

    Q - Overall, what is the potential for AI and how can we ensure its success?

    KL: The potential for AI is tremendous, but we can only ensure its success by asking ethical questions, staying alert to its development and being judicious about its use.

    MG: The future for AI is extraordinarily bright. It’s wonderful to see confident, able people combining their best work to create something creatively disruptive – a true amalgamation of positive intent. Will this take 10, 20, 30 or more years? I don’t know, but I’m confident that the benefits of this revolution will be just as staggering as those of the last industrial revolution.