The Ethics Behind Artificial Intelligence

    Artificial Intelligence (AI) has the potential to transform how we live and work, giving businesses powerful new tools to make their operations more efficient. However, academics and technologists have raised serious concerns about the ethics of AI.

    Kevin LaGrandeur, Ph.D., Professor of English at the New York Institute of Technology, Fellow of the Institute for Ethics and Emerging Technologies and Co-Founder of the NY Posthuman Research Group, shares his views on the thorny issue of AI and ethics.

    Q - How are organisations currently using AI?

    KL: AI is being used to automate an increasing number of numerical, formulaic and repetitive processes. One of the most talked-about applications of AI to date is self-driving, or autonomous, vehicles. The Chilean copper mining company Codelco, for example, has been a global pioneer in the use of autonomous trucks. That said, the use of AI for self-driving vehicles in more complex environments, such as city centres, has been less successful, with a number of high-profile accidents reported in the media earlier this year.

    Another interesting example is “Vera”, a Russian AI recruitment robot designed to take over the hiring process. Vera’s makers claim it can find and filter CVs, conduct interviews over the phone or online, and select the top candidates ten times faster than a human being. This represents a pretty revolutionary displacement of a human function.
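
    To make that displacement concrete, here is a minimal, hypothetical sketch of the kind of keyword-based CV screening such a tool might perform. The role profile, keywords, weights and scoring rule are all illustrative assumptions, not Vera’s actual method:

```python
# Hypothetical sketch of automated CV screening; not Vera's actual algorithm.
# The keywords and weights below are invented for illustration.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    cv_text: str


# Assumed role profile: each keyword carries an arbitrary weight.
ROLE_KEYWORDS = {"python": 3.0, "sql": 2.0, "leadership": 1.5, "agile": 1.0}


def score_cv(candidate: Candidate) -> float:
    """Naive scoring: sum the weights of the keywords found in the CV."""
    text = candidate.cv_text.lower()
    return sum(w for kw, w in ROLE_KEYWORDS.items() if kw in text)


def shortlist(candidates: list[Candidate], top_n: int = 3) -> list[Candidate]:
    """Return the highest-scoring candidates; the filtering judgment a
    recruiter once made is now frozen into the weights above."""
    return sorted(candidates, key=score_cv, reverse=True)[:top_n]


if __name__ == "__main__":
    pool = [
        Candidate("A", "Python and SQL developer with agile experience"),
        Candidate("B", "Leadership background in project management"),
    ]
    for c in shortlist(pool):
        print(c.name, score_cv(c))
```

    Even in this toy version, the human judgment has not disappeared; it has been hard-coded into the keyword list and weights, which previews the fairness questions raised below.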

    Q - What are the concerns surrounding the use of AI in business and the wider world?

    KL: Businesses are mainly concerned about the displacement of workers and about the glitchy, simplistic nature of current AI applications; AI simply can’t do complex tasks such as making ethical decisions. In the wider world, the concerns are about safety and privacy. Our homes could already be largely automated by AI using the Internet of Things (IoT), but cybersecurity for third-party apps and devices, such as thermostats, smart doorbells and refrigerators, is so poor that people don’t trust the technology currently available.

    Q - How important is privacy?

    KL: Privacy is a great concern and one the general public cares about deeply, especially since the introduction of smart TVs and virtual assistants such as Amazon’s Alexa into our homes. What these devices capture, and to whom they send it, is a real privacy issue: the information they gather is sent to third parties, and you and I have no control over those parties’ privacy protocols.

    Do we need legislation? At some point we will, but legislation is often most effective retrospectively: an incident must occur before you can see the problem and legislate in response to it. Businesses are looking to use AI for efficiency; it is not built for ethical decisions. Some form of curbs may be necessary, but they may well inhibit innovation, progress and, ultimately, profit.

    Q - How can we treat the issue of responsibility?

    KL: These issues are being hotly debated at events such as the Governance of Emerging Technology Conference. We know that there’s currently a huge push for autonomous vehicles, but when it comes to who is responsible for a self-driving car, and how that responsibility is decided in legal terms, there are no real answers yet.

    To go further, how do you program a vehicle to make ethical decisions? Which ethical decisions can you translate into an algorithm? Attempting to code fairness suggests that we can define it, and therein lies the primary ethical concern. How can you determine what is fair and what isn’t? Who determines those rules?
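
    As a concrete illustration of why this is so hard, here is a minimal, hypothetical sketch of what “coding an ethical decision” into a vehicle might look like. The scenario model, the harm function and, above all, the weighting constant are assumptions invented for this example; choosing each of them is precisely the ethical decision in question:

```python
# Hypothetical sketch: turning an ethical trade-off into an algorithm.
# Every constant below embodies an ethical judgment someone must own.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    occupant_risk: float    # assumed probability of harm to occupants (0-1)
    pedestrian_risk: float  # assumed probability of harm to pedestrians (0-1)


# Is pedestrian harm weighted equally with occupant harm, or more, or less?
# There is no neutral value: the number itself *is* the ethical choice.
PEDESTRIAN_WEIGHT = 1.0


def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    """Pick the option minimising weighted expected harm. Note that even
    minimising a weighted sum is itself a contested ethical stance."""
    def harm(o: Outcome) -> float:
        return o.occupant_risk + PEDESTRIAN_WEIGHT * o.pedestrian_risk
    return min(options, key=harm)


if __name__ == "__main__":
    options = [
        Outcome("brake in lane", occupant_risk=0.2, pedestrian_risk=0.1),
        Outcome("swerve to verge", occupant_risk=0.05, pedestrian_risk=0.3),
    ]
    print(choose_manoeuvre(options).description)
```

    Everything contentious in this sketch lives in PEDESTRIAN_WEIGHT and the harm function: whoever sets those values is, in effect, answering the question of who determines the rules.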

    Q - Are technology providers doing enough to ease public concerns?

    KL: Technology companies are trying, especially the bigger ones. Autonomous vehicle producers are working with philosophers and academics on the ethics of AI, but it’s still early days. As noted above, ethical questions are often only addressed when disasters happen, but this isn’t limited to AI; it’s the nature of society.

    Q - Should we be concerned about AI taking jobs from humans?

    KL: It’s mainly manufacturing jobs that are going, but that’s nothing new; it has been happening for decades, and, for the most part, we are seeing individual tasks being automated rather than entire jobs. I firmly believe that this sort of creative disruption will create more jobs than it destroys, but that shift may take 25 years. In the interim, some people will lose their jobs and will need to retrain and adapt to new, more efficient ways of working. Some people are better at this than others, and organisations will need to manage this transition carefully.

    Q - Overall, what is the potential for AI and how can we ensure its success?

    KL: The potential for AI is tremendous, but we can only ensure its success by asking ethical questions, remaining alert to its development and being judicious in its use.
