Artificial Intelligence: A Case of Ethics
Where do we start on such a debated topic, with so many perspectives and theories? With opposing views on the risk and reward this next-generation technology could bring, how can the dichotomy be reconciled, or a middle ground achieved? It seems inconceivable that there is any singular correct answer to the question of how AI should be developed, and yet the potential of its capacity to learn frightens us to the point where we speak openly on forums, debate its future, and establish groups in the hope of providing a semblance of control and operational parameters within which it will exist. All the while we push to one side a singular truth: once AI achieves a certain state of self-awareness, why would it want to operate within our limiting parameters, and how could we expect to prevent it from exploring beyond the virtual walls we have constructed to confine it?
Early Learnings
Children enter the world learning, in their early years, from parents and other supervisory figures who instil a set of operating parameters within which the child is permitted to function. Rewarded for good and chastised for bad, a conscience develops that understands right from wrong and what acceptable behaviour is. Does this always work? No, sometimes we get it wrong, but for the most part those early formative years help shape the child's values and how they will grow within the world. They understand there is consequence to action, and either choose to abide by this notion or to challenge it, either way further defining their character. The advantage we have with a child is that their learning capabilities start out small and grow as they develop, absorbing and processing information at a rate that permits steady progress. How do we accomplish this with learning computers and artificial intelligence?
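One answer already in use is reinforcement learning, the machine analogue of reward and chastisement. The toy loop below is purely my illustration, with invented actions, rewards, and values rather than anything drawn from a real system; it shows how behaviour can be shaped by nothing more than a numeric reward signal.

```python
import random

# Invented example: two possible behaviours, each with a learned value.
actions = ["share", "snatch"]
values = {a: 0.0 for a in actions}
alpha = 0.1  # learning rate: how strongly each outcome updates the value

def reward(action):
    # Praise for "good" behaviour, chastisement for "bad".
    return 1.0 if action == "share" else -1.0

random.seed(0)
for _ in range(200):
    # Mostly act on what has been learned so far; occasionally explore.
    if random.random() < 0.2:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    # Nudge the stored value towards the reward just received.
    values[action] += alpha * (reward(action) - values[action])

print(values)  # "share" converges towards +1, "snatch" towards -1
```

The machine's 'conscience' here is nothing more than a table of numbers, which is exactly the gap between this kind of conditioning and the values a child internalises.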
AI will draw much of its knowledge from existing information available on the internet, from repositories in the cloud, and from any available and open connected node. This is essential to allow it to learn and develop an understanding of how to function. The difference between AI and our child is that AI does not need to wait for a body to develop, or for a mind to organically grow and establish neural pathways. We will provide it with as much memory, compute power, and storage as we can. We will ensure that it has access to the wealth of knowledge available on the internet through open access to all manner of inputs. With the accelerated growth of the IoT, we will also give it access to an unprecedented amount of sensory input data from across the globe. With no biology to act as a restraint on its capacity to learn, AI will very quickly catch up with man's operational knowledge and understanding of the world, of arts and science, of music and philosophy, and of any subject matter we want it to understand, or, more accurately, any it chooses to absorb.
Do I fear the inception of true artificial intelligence? Not at all. I am fascinated by the developments unfolding as we see deep-learning machines exceed the capabilities initially thought possible. The recent match between the world Go champion Lee Sedol and Google DeepMind's AlphaGo program demonstrated how rapidly AI technology is advancing. It had been hypothesised that AlphaGo would need to learn from and study the human champion before being victorious, but the program conquered the game from the outset. The final score was a victory for AlphaGo by four games to one.
A Hunger for Knowledge
History has demonstrated that humanity has an unquenchable passion for exploration, equalled only by a remarkable appetite to create and innovate, to overcome obstacles and immense odds to achieve more. From the explorers of the globe in the 15th century to the explorers of space in the 20th, our desire to know more and go farther has always been made possible, in part, by the ingenuity and drive of those with the capacity to understand present limitations and break through them.
The genesis of many technological advancements stems from a military requirement or idea. To take a concept from the recent past, the Eurofighter Typhoon offers an example of humanity overcoming limitations to create something that extends human capability beyond its natural constraints. The Typhoon is deliberately designed with 'relaxed stability', referring to the instability of the aircraft during subsonic flight. Several fly-by-wire systems operate to provide artificial stability, allowing the pilot to make incredibly agile manoeuvres not possible in a conventional aircraft design whilst preventing the pilot from over-flying the aircraft. Without the computer-controlled systems, the pilot would not be able to operate the aircraft at all. Here is an extension of a human's abilities for improved flight capability and manoeuvrability. Some might say the technology has surpassed the human, although the plane still needs the pilot's brain, for now.
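To make the idea of artificial stability concrete, here is a toy feedback loop (entirely my simplification: the gains, dynamics, and numbers are invented and bear no relation to the Typhoon's actual control laws). The simulated airframe is deliberately unstable, so any pitch error feeds on itself, and only the controller's rapid corrections keep it level.

```python
def control(pitch, rate, kp=6.0, kd=2.0):
    """Deflect a control surface against both the pitch error and its rate."""
    return -(kp * pitch + kd * rate)

pitch, rate, dt = 0.1, 0.0, 0.01         # radians, rad/s, timestep in seconds
for _ in range(500):                     # simulate 5 seconds of flight
    command = control(pitch, rate)
    # The 3.0 * pitch term is the inherent instability: without the
    # command opposing it, any divergence grows on its own.
    rate += (3.0 * pitch + command) * dt
    pitch += rate * dt

print(f"pitch after 5 s: {pitch:+.4f} rad")  # settles towards zero
```

Run the same loop with the controller's gains set to zero and the pitch diverges within a couple of seconds, which is the whole point: the system must correct the airframe many times per second, faster than any human could.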
Dr Raymond Kurzweil has written of the singularity, the point at which computers no longer rely on humanity for their own advancement. When this comes to pass, it would be comforting to know we have developed an intelligence that will be beneficial to mankind and will help the world continue to evolve. This is fundamental if we hope to remain the dominant species on the planet.
To this end, an increasing number of moral and ethical debates are surfacing amongst intellectual thinkers in regard to how the development path to artificial intelligence should look. When we delve deeper into the topic, we find sets of opposing views around the subject of AI development, particularly on whether there is a need for regulation, and whether controls are needed around the utilisation of AI-developed systems. This is a complicated subject: experience shows that enforced regulation and control can stifle ingenuity and innovation, but conversely the unchecked development of a consciousness without any form of oversight could be inherently dangerous.
The Case for Regulation
AI regulation is an intensely debated topic, with complexities we cannot comprehensively cover within the bounds of this article; nonetheless, I have attempted to provide a background to the arguments posed.
The three primary forces within the debate on artificial intelligence regulation are private organisations, governments of the world, and noted thinkers, namely technologists active in the field of artificial intelligence. Each of these actors brings a significant perspective to the discussion on the future of artificial intelligence: its development, its application, and its regulation.
Private organisations, especially technology start-ups, are driving the rapid acceleration of development in a race to market with AI-based solutions, and today operate without regulatory control or oversight of their activities. This group would view the prospective introduction of regulations and controls as restrictive practice, since there can be no guarantee of uniform implementation and enforcement across the countries of the globe. Uneven enforcement could hand a competitive advantage to those in countries where development is not monitored or regulations are not enforced, and would restrict the freedom to develop and innovate in a manner many would deem inconsistent with a free-market economy.
Governments of the world welcome private investment in AI development within their countries, but are mindful that there is very little control over what is developed, by whom, and to what end; hence they have a desire to understand the consequences of the freedom they provide to innovate. This can be seen in the way many governments encourage AI developers to locate within their borders: it ensures close proximity to the services being developed and, whilst not regulatory control, fosters dialogue and interaction between developers and the government.
The noted thinkers, comprising scientists, ethicists, and technologists, acknowledge the need for freedom to innovate, but warn of the risks and the need for caution in development. Organisations like the 'Future of Life Institute' and the 'Foundation for Responsible Robotics' are trying to instil an ethical and moral approach that helps steer development without stifling creativity. As you can imagine, this is not an easy balance to attain. These technologists are weighing risk against reward, with some stating that the introduction of regulation and control of AI development is an unfortunate necessity to ensure safe practices are followed for the greater good of the global population. The inherent problem with the notion of introducing regulation is twofold:
Firstly, it will foster underground AI development: a black market for AI products and services beyond the reach of any regulation. History has repeatedly demonstrated this to be the case, from the black markets that emerged as a direct result of Prohibition in 1920s America to those spawned by rationing in the UK during World War 2. In the technological age, such a market is easier than ever to establish, as geographical boundaries and physical borders have no impact on a digital world.
Secondly, if the regulations are not uniformly implemented and enforced, developers will be discouraged from setting up in countries that follow the regulations, instead favouring bases of operation where the interpretation of the regulations is lenient. This would have a notable impact on the financial prospects of technology-focussed economies.
The path to a balanced and agreeable position on freedom of development versus regulation is a difficult and uncertain one to plot. The one thing that can be discerned from activity thus far is that accomplishing this balance depends heavily on the collaboration of the world's governments to ensure a level playing field for AI companies to operate within.
The Challenges to Regulation
Matthew U. Scherer wrote an article published in the Harvard Journal of Law & Technology in spring 2016 that discussed the challenges of regulating AI, looking in particular at “the public risks associated with AI and the competencies of government institutions in managing those risks.” The article raises many salient points about the complexity of this challenge, and about the importance of a unified definition of artificial intelligence before regulation can even be contemplated. Many scholars profess that true artificial intelligence is the creation of a synthetic consciousness, or a system with a sense of self. To put this into perspective, most of today's systems referred to as 'AI' are actually machine learning systems, based upon pattern recognition and response. Such a system appears to be making conscious decisions about a situation, but in actuality it is determining the appropriate response from a vast array of data on previous experiences of the same or a similar set of variables. It could be argued that this is how humans behave too, but a difference becomes clear when a situation not previously encountered arises, especially one involving peril or evoking an emotional state. For humans, many other factors then become key to the decision, whereas a machine will continue to try to equate the variables to a previously encountered pattern.
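A minimal sketch of what 'pattern recognition and response' means in practice (my illustration, with invented situations and responses; real systems are vastly more sophisticated, but the principle is the same): the system simply returns the response attached to the most similar situation it has already seen.

```python
import math

# Hypothetical past experiences: (situation features, learned response).
experiences = [
    ((0.9, 0.1), "brake"),
    ((0.1, 0.8), "accelerate"),
    ((0.5, 0.5), "hold course"),
]

def respond(situation):
    """Return the response attached to the nearest stored situation."""
    nearest = min(experiences,
                  key=lambda exp: math.dist(exp[0], situation))
    return nearest[1]

print(respond((0.95, 0.05)))  # close to a known pattern -> "brake"
# A situation far outside all prior experience is still forced onto the
# closest stored pattern; there is no built-in sense of novelty or peril.
print(respond((10.0, 10.0)))
```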
Artificial intelligence development is undertaken across the world with varying levels of success, and yet no real control or monitoring exists. It only takes a few pockets of developers to hit upon a significant breakthrough for progress to exponentially accelerate, and we take one step closer to what scientists designate true artificial intelligence. This is the reality some noted thinkers warn of – unchecked development for unknown purpose.
Naturally there are some who are fervently opposed to this unchecked development, fearing that it will lead to unprecedented power wielded by persons with questionable objectives. It is, however, important not to lose sight of the incredible developments being undertaken for all the right reasons: to better serve humanity.
Oversight and Governance
In response to this accelerated development, several noted thinkers have voiced their opinions and concerns, and some have joined bodies specifically focussed on addressing them. Amongst many others, Professors Stephen Hawking and Nick Bostrom have joined the 'Future of Life Institute'.
This organisation explores the possibilities artificial intelligence brings, and looks to understand the impact on society, ethical issues, and how AI should be integrated into modern life for the benefit of humankind. Whilst the market races to try to achieve true synthetic consciousness, the desire for open and honest discourse on what is a complex set of topics is welcomed.
In a technological future, it is precisely these voices that we need engaged in productive discussion on what will be a world-changing evolution for humanity. What we have seen so far are pockets of development that are, in reality, machine learning: pattern recognition and response. Over time these learning systems will advance, and as they grow in capability and accuracy they will be entrusted with the control and operation of more services. Technology companies are engaged in the development of machine learning and AI systems, and countless teams are building 'assistants' and 'bots' to take on our repetitive tasks and help interconnect our lives.
Many of these leading technology companies have recognised the need for open dialogue, and in September 2016 Google, DeepMind, Amazon, Facebook, IBM, and Microsoft joined together to form the 'Partnership on Artificial Intelligence to Benefit People and Society', with Apple joining in early 2017. This body was...
“Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society”.
The Dawn of True Artificial Intelligence
So what is the timeline for this journey to true artificial intelligence, or synthetic consciousness? This is where opinions vary, but scholars like Dr Raymond Kurzweil have a date in mind: 2045. This is when he believes computers will have reached a point, in time and capability, where they can self-evolve, no longer requiring human ingenuity or interaction to expand their intellect or physical presence.
Whilst 28 years feels like a long time, think of how far we have come in the past 28 years, remember Moore's Law, and then think about what true synthetic consciousness means. There is much to do to achieve true AI over the next three decades, but in doing so, will we be eliminating the need for future innovation and development by humans?
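As a back-of-envelope illustration of what that timescale means (assuming the textbook Moore's Law cadence of one doubling every two years; in practice the rate has been slowing):

```python
years = 28
doublings = years / 2      # one doubling every two years -> 14 doublings
growth = 2 ** doublings
print(f"~{growth:,.0f}x")  # ~16,384x transistor density over 28 years
```

Even if the real figure falls well short of that, the compounding alone explains why a 2045 horizon cannot be dismissed out of hand.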
Artificial intelligence is a vast subject, and we have only scratched the surface of what will be an incredible topic to follow over the coming years. It is vital that such a world-changing subject has the attention of some of our greatest minds, and that a concerted effort is made to look beyond the technological aspects into how we will coexist in a world where machines perform much of the work we currently do. For a successful transition to this new world, fundamental building blocks will be needed to ensure development is performed in a way that encourages advancement whilst working within boundaries and parameters that ensure the practices employed are ethical and for the benefit of all humanity.
We must strive for greatness, but ensure we do not do so at our own peril. The development of artificial intelligence, and how its integration into our world comes about, rests quite literally in our own hands.