The Artificial Intelligence Revolution: Law, Ethics & Challenges
Tarek Nakkach of Hewlett Packard Enterprise draws attention to the legal issues that will arise from the implementation and use of AI, and calls for legislative reform in the Middle East.
The term artificial intelligence (AI) may bring to mind the apocalyptic scenario of a blockbuster Hollywood movie in which machines and robots take over the world and destroy civilisation as we know it. Many futurists and scientists have predicted something similar. Stephen Hawking told the BBC a couple of years ago that "the development of full artificial intelligence could spell the end of the human race", while Elon Musk believes that AI could trigger World War III.
Some of these fears may be justified: Facebook recently shut down an experiment after its chatbots invented their own language. There is also a concern that governments may use AI to wage wars.
However, others see AI as an opportunity for growth and development in areas such as medicine, the environment and transport. Just a couple of weeks ago, a true visionary, HE Sheikh Mohammed Bin Rashid Al Maktoum, announced the launch of the UAE artificial intelligence strategy and the appointment of the first Minister of State for Artificial Intelligence. The newly-appointed Minister, HE Omar Bin Sultan Al Olama, said that AI can offer the quick win the world needs on issues such as climate change.
Most multinational technology companies are investing heavily in AI as well. At HPE, we have recently announced new purpose-built platforms and service capabilities to help companies simplify the adoption of AI, with an initial focus on a key subset of AI known as deep learning. We have introduced Rapid Software Installation for AI, a deep learning cookbook, and an HPE AI Innovation Centre.
The speed at which AI is developing is surprising, and it is disrupting the legal industry. The aim of this article is not to discuss the use of AI by law firms or how soon we will be seeing robot lawyers; these subjects have been heavily debated recently, and it is clear that the legal industry needs to adapt to the imminent change. What I will briefly discuss in this article, though, are the legal issues that will arise from the implementation and use of AI in our daily lives, and the need for legislative reform in the Middle East.
One of the main tasks of any new legislation would be to define AI. I have looked at many definitions and found the one from Britannica to be the simplest and most relatable: artificial intelligence is "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." The differentiating factor in this definition is intelligent tasks performed by machines, as opposed to "non-intelligent" tasks, which may already be covered by current legal regimes in the region.
The intelligence factor is the machine's ability to learn from its environment and make non-preprogrammed decisions. This in itself will create, among other things, issues around liability, intellectual property and ethics. I will go through these briefly.
Current liability regimes under civil law mainly trace responsibility for a harmful act back to a corporation or a person. Can a machine or a robot be liable, though? The first reaction would be that it is inconceivable to consider a machine liable for harm caused, because that means we would need to consider it a person. However, the recent announcement by Saudi Arabia that it has granted citizenship to a robot makes us wonder whether robots could be considered persons, albeit "electronic" or "digital" persons, and so be granted rights and obligations, and maybe even passports.
Generally, under local laws, if a machine operated by a human inflicts damage on a person or property, the "operator" is liable unless he or she proves a defect in the machine, in which case the corporation that sold the machine, or its manufacturer, would be held liable. This is a very basic example. The question becomes more difficult, however, in the case of an autonomous robot. Who would be held liable? Is it the manufacturer of the robot? Is it the person or company who designed the algorithm? The answer will depend on how autonomous, and therefore how intelligent, the robot is. Did the AI software allow the robot to be fully autonomous and learn from its environment? Did it still require continuous human intervention? Was it programmed to perform certain tasks, which it merely executed?
Current legal regimes do not cover situations where the machine is "intelligent" enough to take decisions on its own without human influence, and this is why we need to enact laws, or modify existing ones, to determine the extent of a robot's liability.
Another important topic in AI is intellectual property. If a robot creates a work of art and it is sold for millions of dollars, who owns the copyright? If the robot invents a new surgical device, who owns the patent? Is it the company that owns the robot (that would be a simple answer)? Is it the company that wrote the algorithm? And who is to determine that in cases where the robot is "intelligent"?
Finally, the implementation of AI raises ethical issues. Can AI be used to wage wars? Will governments deploy killer robots, for example? What happens if machines decide to destroy other machines, or to destroy humans? Should there be rules and regulations that define AI in an ethical way? I recently read a news report about people calling for the creation of a new religion to worship an AI god. Hence the need to put in place a framework, or a set of ethical rules, to govern such issues.
The future is certainly exciting. There are people who fight change and others who embrace it. I believe governments in the region and around the world should follow the example of the UAE: create ministries for AI, and draft policy and legislation that address the main issues arising from the deployment of AI.
Tarek Nakkach, Region Counsel, Middle East, Hewlett Packard Enterprise