AI Ethics and the Human Problem
Guest Writer
The ethical landscape surrounding the use and widespread implementation of artificial intelligence (AI) is vast and complex. The forthcoming AI revolution will change the world, and our lives, in ways no previous industrial revolution has done. Everyone should be forming opinions on the ethics of AI. In this brief article, I'll attempt to weigh AI's potential for harm against its potential for good, and make the case for a universal, ethically sound framework to govern its use.
Humans are capable of great acts of positive progress (e.g. steam power, electricity, the internet), and they're capable of terrible acts of devastating destruction (e.g. nuclear warheads, the H-bomb, biological warfare). AI will be no different. It's a tool with which humankind could do tremendous good; it's also a tool humankind could use to unleash devastation and terror on the world.
The US Defense Department is currently negotiating contracts for the Joint Enterprise Defense Infrastructure (JEDI, a poor acronym which sullies the fictional order of the Jedi). One of JEDI's focal points is the weaponization of AI. The Pentagon has also established the Joint Artificial Intelligence Center (JAIC), which freely expresses the need to invest in researching and developing AI to 'adapt their way of fighting'. AI will undoubtedly be used as a new global weapon. Herein lies humanity's great problem: the need to weaponize every great new wonder in an effort to become the strongest power in the world.
To what extent will AI be used to help better the world in which we live? What will the balance be between weaponizing AI and using it to help save the environment, help the disadvantaged of the world, or make great strides in healthcare? If we looked to one another and to the planet with more reverence, and looked for things to unite us rather than divide us, then AI could be used to do some truly extraordinary things globally.
The weaponization of AI isn't the only problem linked to humanity. There is also the need to be powerful, which usually means being the fastest and the first. President Putin put it quite succinctly when he announced that 'Whoever becomes the leader in this sphere [of AI] will become the ruler of the world.' It's scary to think who that leader may be. Ideally, we would want someone who puts humanity, people, and the environment first. Sadly, it's not usually in the nature of the powerful to act selflessly, even when they believe they are doing so.
The City Brain project, deployed by the Chinese retail giant Alibaba in the city of Hangzhou, aims to 'create a cloud-based system where information about a city, and as a result everyone in it, is stored and used to control the city'. The trial of City Brain has had positive impacts, vastly improving traffic speeds in Hangzhou; however, it has also led many to question the implications for privacy and surveillance.
Whilst the world is nowhere near AI-controlled cities just yet, the fact that they are in (albeit infantile) development without proper regulation of issues such as privacy is worrying. Even more worrying is this statement from an AI manager at Alibaba: 'In China, people have less concern with privacy, which allows us to move faster.' It further demonstrates my earlier point about power.
Humanity will control how AI is used and implemented. It's worrying, therefore, that we have a history of using new powers predominantly for ill rather than for good, and that this history seems about to repeat itself. Hence the dire need for ethical governance to keep AI in safe and responsible hands.
The possibilities of AI are theoretically limitless. Take, for example, the healthcare robot assistant Baymax in the Disney film Big Hero 6. All of Baymax's healthcare programming fits on a single chip, and it is capable of analyzing and diagnosing a vast range of medical symptoms. What if we could guarantee, via proper governance and laws, that more money, time, and research went into goals like creating robotic healthcare professionals? Surely that would be more beneficial to the world than funding the latest autonomous weapons.
AI has already proven beneficial in the healthcare sector. We have applications such as AI-assisted robotic surgery, deep learning programs that help clinicians detect cardiac arrest, and massive healthcare data-mining projects that cut the time humans must spend on administrative tasks.
These are small steps, but there is so much more potential: providing healthcare in the poorest and most disadvantaged places in the world, using algorithms to predict and prevent the onset of major life-threatening illnesses, and (from a financial standpoint) saving healthcare providers billions of dollars while freeing up time for healthcare professionals whose time is in strict demand. And why not go even further? Why not work towards AI which can help tackle currently incurable illnesses?
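To make the idea of such algorithms slightly more concrete, here is a minimal, purely illustrative sketch in Python. It is not a real diagnostic system and uses synthetic data and a simple logistic model rather than a production deep-learning pipeline; every name, value, and threshold is invented for illustration. It only shows the general pattern of software flagging high-risk vital signs for a human clinician to review.

```python
# Hypothetical illustration: a toy "risk flagging" model on synthetic vital signs.
# This is NOT a real diagnostic system; it only sketches the idea of an algorithm
# that surfaces high-risk patients for a human clinician to review.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic patients: [heart_rate, systolic_bp, oxygen_saturation]
n = 500
healthy = rng.normal(loc=[75, 120, 98], scale=[8, 10, 1], size=(n, 3))
at_risk = rng.normal(loc=[120, 90, 88], scale=[15, 12, 4], size=(n, 3))

X = np.vstack([healthy, at_risk])
y = np.array([0] * n + [1] * n)  # 1 = flag for urgent human review

model = LogisticRegression().fit(X, y)

# A new (synthetic) patient's vitals arrive from monitoring equipment.
new_patient = np.array([[118, 92, 90]])
risk = model.predict_proba(new_patient)[0, 1]

if risk > 0.5:
    print(f"Flag for clinician review (risk score {risk:.2f})")
else:
    print(f"No flag raised (risk score {risk:.2f})")
```

The design point is simply that the model assists rather than replaces the professional: it raises a flag and a score, and the decision stays with a human.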
What AI could do for the environment is, in my opinion, an untapped and underrated possibility. One of the biggest games of the past few years, The Legend of Zelda: Breath of the Wild, offers an interesting example of AI and environmental integration. In the game there are towers which, once activated, provide a map of the surrounding area. What this fictional scenario shows is a viable relationship between the natural and the artificial.
Imagine: AI capable of mapping a surrounding area and providing live, up-to-date information on a multitude of things (traffic, air purity, water flow, WiFi hotspots, electricity faults, and much more) straight to a person's device; AI which can exist in any environment and provide information to anyone; and AI which can coexist with the natural world instead of supplanting it. The possibilities are there; they just need to be realized.
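As a rough sketch of what the data-plumbing behind such a "live area map" might look like, here is a small, hypothetical Python example. All of the names, metrics, and readings are invented for illustration; it is not an AI system in itself, only a toy aggregator that merges sensor readings into a per-location snapshot of the kind that could be pushed to a person's device.

```python
# Hypothetical sketch of the "live area map" idea described above: sensor readings
# from an area are merged into a per-location snapshot that could be pushed to a
# person's device. All names, fields, and readings here are invented for illustration.

from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List

@dataclass
class SensorReading:
    location: str      # e.g. a grid cell or neighbourhood name
    metric: str        # "traffic", "air_quality", "water_flow", ...
    value: float
    timestamp: float   # seconds since epoch

def build_area_snapshot(readings: List[SensorReading]) -> Dict[str, Dict[str, float]]:
    """Keep only the most recent value of each metric for each location."""
    latest: Dict[str, Dict[str, SensorReading]] = defaultdict(dict)
    for r in readings:
        current = latest[r.location].get(r.metric)
        if current is None or r.timestamp > current.timestamp:
            latest[r.location][r.metric] = r
    return {
        loc: {metric: reading.value for metric, reading in metrics.items()}
        for loc, metrics in latest.items()
    }

# Example: two locations reporting a few metrics; a device would receive this snapshot.
readings = [
    SensorReading("riverside", "air_quality_index", 42.0, 1000.0),
    SensorReading("riverside", "air_quality_index", 38.0, 1060.0),  # newer reading wins
    SensorReading("riverside", "traffic_speed_kmh", 31.5, 1050.0),
    SensorReading("old_town", "water_flow_lps", 120.0, 1020.0),
]
print(build_area_snapshot(readings))
```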
In terms of the environment, AI has already aided people in everything from harvesting crops to predicting weather patterns, and research is in progress that uses AI to better understand marine life, map the seafloor with greater accuracy, and make illegal poaching far more difficult, all of which will greatly benefit our oceans.
The potential for AI to help the environment is massive, and it's something we should be focusing on more, given how badly damaged our world is and how dire the condition and status of animals are around the world. With proper funding and correct governance, AI could dramatically help humans save the world. Imagine that. It's not an overstatement to say we are hurtling towards irreversibly damaging this planet. Humankind could make this world a hostile environment for our own species. Think about that for a moment. Conversely, we could save it too, with help from AI. I know which option I would rather choose.
These examples shouldn't be consigned to a fantasy world or an idealistic one; they should happen here and now, in our real world. And they can, if people work together to create an ethically solid framework which ensures AI is used for nothing but the benefit of humankind and the planet.
The reality of a universal regulatory governing body for the ethical use of AI is a highly unlikely one, but it's a necessity, as Stuart Hodgson states in his Artificial Intelligence - Poker Face article: "Since AI is set to touch everything in our lives perhaps it’s time to start thinking about an international governing body rather than the modular approach that potentially breeds insecurity and differing standards amongst nations."
Power is rarely clean and transparent; take, for example, the latest explosion of accusations of collusion and hacking between the USA and Russia. World leaders and governments do not trust each other, people do not trust their governments, and we live in a very fearful world that AI could potentially make worse. But what if it didn't? What if AI was instead a force that could bring countries together, something for them all to share in for the betterment of their people, their countries, and the world as a whole? Idealistic? Yes. Realistic? Highly unlikely. Necessary? Definitely.
This is why it should not be down solely to governments to ensure that the ethics of AI use and implementation are firmly in place. The future of AI and its place in the world should not be decided solely by politicians and governments, nor should they have the final say. We need many informed voices and opinions, from a multitude of sectors, to determine how AI should be used for good. The more rational voices there are, people less concerned with power and more concerned with doing good in the world, the better the chance that AI will be used ethically by ethically bound humans.
Countries and governing bodies are working on making AI ethics a priority, but more needs to be done. As just stated, more people from different sectors need to be drafted in to discuss what it will take to make AI the next revolutionary force on a global scale. True, there are forums, think tanks, consortiums, and other cooperative bodies around the world (AI4People, the Partnership on Artificial Intelligence to Benefit People and Society, OpenCog, OpenAI) that are working to widen the spotlight on AI, educate people about it, and push for its positive governance and use. Nonetheless, I see a problem with these initiatives. They're important and very much needed, but many of the people who should be shaping the future of AI ethics remain splintered and fractured; they do not belong to one single regulatory body, and I believe they need to.
It's the people on the front lines of AI creation, innovation, progress, and thought whose voices, more than anybody else's, need to be part of the discussion in creating a universal AI ethics governing body, in cooperation with governments and others capable of making the laws, changes, and policies that serve the people of the world first.
I believe that people who understand what AI truly is, who know its true potential, and who care about using it to better the world and its people are more adept at creating ethical guidelines for its use, implementation, and progress than people whose motivations may be more money-minded, who are woefully less informed in terms of hands-on experience, and who see AI as nothing more than another prize to be won to put them at the top of the global leaderboard.
If we don't put a tightly structured set of ethics and a well-informed governing body in place, AI may end up like a gold rush, with everybody out for themselves. In that case, everyone will likely suffer. We have a duty to ensure that doesn't happen, and to ensure that when AI really begins to take off and change the world on a scale other revolutionary technologies haven't, we're ready to manage, implement, and use it for the best purposes.
AI could really unify us. Idealistic as it is, I hope to see people with knowledge, power, and influence come together for the good of this planet and its people.
Written by Stuart Hakin.