Impact of Artificial Intelligence and Business Concerns
Interview with Kay Firth-Butterfield, World Economic Forum
What is the current and potential impact of AI on companies, in terms of R&D, investment, innovation, human resources and productivity?
While there are only a few companies that can truly call themselves AI companies, all companies will need to develop an AI strategy, because AI will become ubiquitous, whether simply as operating systems or embodied in robots ranging from cars to home healthcare to entertainment.
Currently, companies are not investing as much in R&D and AI as they probably should. The causes fall principally into three areas: a lack of understanding of the technology and what it is capable of; the fact that some of the technology is still nascent and not yet generally useful; and uncertainty about where regulation will come from and which developments it will affect. As a result, we can see some forward-looking businesses trying to use AI in places where it really is not necessary, for example facial recognition in air conditioning, while other businesses hold back until they see what the innovation space is and how governments may be supporting it.
Another worry for many companies is that the undirected use of AI could lead to substantial job losses in the short term. Socially conscious companies wish to find ways to minimize that disruption to their workforce and to the political stability of the area. In many areas of the world, job losses that have not been mitigated by retraining have led to geopolitical change, and instability in markets is not generally good for business. Some companies have started using AI in human resources, which makes it important for companies to be aware of the potential problems of using AI in this area and others. These problems fall into four broad categories: bias, transparency, accountability and privacy.
Substantial brand value can be lost if the wrong decisions are made about the use of AI. Therefore, it is important that the various regulatory and other governance mechanisms are thought about now; the pace of change in this technology is such that we cannot wait.
The report identifies various fields where innovation is taking place. In which fields will AI be most disruptive and have the greatest impact?
The fields where AI will be most disruptive and the ones where it will have the greatest impact are not necessarily the same; it depends on whether one is talking about good or bad impact. I will illustrate this with two examples:
(1) The AI-enabled toy – our children will increasingly work with robots. Thus, it could be said that having an AI-enabled toy, or several, from an early age would be very useful preparation for the workforce. Likewise, the benefits of personalized education for all children around the world are huge. However, there are some fundamental issues which need to be addressed first. How will we deal with privacy? If the child’s words and thoughts are to be collected from the moment they start to speak, and their learning styles are being analyzed, how do we protect that data? Who owns the data, and can it be monetized? If the toy is ‘listening’ in the household, it is not just the children’s data which is being collected. If the child has many devices all listening, and other children then bring their devices, the privacy issue grows. The potential for abuse by governments or by those trying to influence, for example, voting is huge, and what is at risk is human autonomy. How do parents choose which curriculum they want their child to learn? Most parents try to decide on a school, but that is not, currently, an option for education by AI. In many countries there are bans on advertising to children, but if an AI-enabled toy says it is cold, does that amount to advertising if the child then asks for a coat for it? What is the impact on our children’s creative play of not having to create stories for their toys but instead having their toys come with a backstory? If your invisible friend talks back, you are not using your imagination. Will this change not only our autonomy but also the way we think? How will children brought up with many AI-enabled friends react to their peers? Will we need one another as much if we have devices which attend to our needs and never take an alternative view to our own? Is the AI-enabled toy the ultimate echo chamber? The World Economic Forum, in partnership with UNICEF, various country regulators, academics, civil society and businesses, is currently undertaking a project seeking to answer these questions and the many more that arise.
(2) Use of AI in cars – in movies such as Star Wars and Star Trek, humans are still ‘driving’ spaceships, and yet the autonomous car will not need to be driven; AI will take away the need for anyone to have that skill. AI will be better able to calculate distance, speed and destination than we humans, and so it makes sense to give up our driving skills to AI. However, this brings with it questions of privacy: the company providing the AI will need to know location data and could route the car past places that have paid for this service to encourage car users to visit their shops. Likewise, there will be data collected about the car user from devices on board: how do we maintain privacy? Germany has started work on this aspect and decreed that data from conversations inside the car should belong to the user and not the car manufacturer or the AI company. Additionally, the use of autonomous vehicles is an area where significant job loss is likely; McKinsey estimated that as many as 25,000 truck drivers a week could be affected. Equally, as individual car ownership declines, with more people hiring a car when needed, there could be a significant impact on car manufacturing and its ancillary services. On the other hand, the positive environmental impacts of autonomous cars are huge, and the reduction in road fatalities is important.
Are you more concerned about risks arising from artificial general intelligence/superintelligence or those arising from narrow AI?
There are many who worry about controlling AGI and superintelligence and are, rightly, working in this area. Additionally, we need to be concerned about narrow AI and the issues of bias, privacy, accountability and transparency. These four problem areas are seen time and again in the use of narrow AI, be it in predictive policing and sentencing or in the ability to obtain jobs and loans. As more AI-enabled objects such as cars and robots come to market, we will see these issues multiplying. This is why governance needs to be thought through at this stage.
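To make the bias concern concrete, here is a minimal sketch in Python of one of the simplest checks a company deploying narrow AI in hiring or lending might run: comparing a model’s positive-decision rates across groups (a demographic parity check). The data, group labels and loan-approval framing are invented for illustration and are not drawn from the interview; real audits use richer metrics and real model outputs.

```python
# Hypothetical demographic parity check: does a model approve one group
# far more often than another? Data below is invented for illustration.
from collections import defaultdict

# (group, model_decision) pairs, e.g. logged outputs of a loan-approval model.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)     # decisions seen per group
approvals = defaultdict(int)  # positive decisions per group
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# Approval rate per group, and the gap between the best- and worst-treated group.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

for g, r in sorted(rates.items()):
    print(f"{g}: approval rate {r:.0%}")
print(f"demographic parity gap: {gap:.0%}")  # here 75% - 25% = 50%
```

Even a crude gap like this can flag the kind of problem the interview describes in sentencing, hiring and lending, though deciding what gap is acceptable, and why it arises, is exactly the governance question the answers above raise.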
How important are regulation and standardization of AI, and why?
There are many forms of governance, and regulation by government is only one of them. Such regulation often lags behind fast-paced technologies such as AI because it takes so long to enact, especially in democratic countries.
At the WEF we know that in some cases regulation is necessary; however, we believe that the best way of succeeding in the governance of AI is to use agile governance measures. These include: the development and use of standards (IEEE and WEF Protocols); the emergence of social norms which constrain or endorse; private incentive schemes; certification; oversight by professional bodies; and industry agreements and policies that organizations apply voluntarily, or by contract, within their relationships with competitors, suppliers, partners and customers. As such, the work of the Global Initiative on Ethical Considerations in Artificial Intelligence (AI) and Autonomous Systems (AS) is of great importance as part of these agile governance initiatives.
We have a number of projects that showcase this approach. One is Unlocking Public Sector AI. We have identified that AI holds the potential to vastly improve government operations and to meet the needs of citizens in new ways, ranging from traffic management to healthcare delivery to processing tax forms. But many public institutions are cautious about harnessing this powerful technology because of concerns over bias, privacy, accountability, transparency and overall complexity. New incidents of negative consequences driven by the use of AI are emerging in areas such as criminal sentencing, law enforcement and even employment opportunities. Governments do not have the luxury of using the inscrutable “black box” algorithms that increasingly characterize the AI deployed by industry. As citizens increasingly demand the same level of service from their governments as they do from innovative private-sector companies, public officials will be required to identify the specific benefits this complex technology can bring while also understanding the negative possibilities of the tools created using AI.
The project convenes stakeholders across sectors to co-design guidelines that will empower governments to confidently and responsibly procure AI as well as guide their own internal development of technology that utilizes AI. A number of governments have committed to pilot these guidelines to test assumptions about their efficacy and impact, iterate the guidelines based on this learning, and share the updated versions publicly to encourage international adoption. Since governments have limited budgets and often struggle to separate hype from substance regarding new AI products, this project will also collect input and provide guidance on the most effective use cases of AI within government, as well as those that are immature, unproven, fraught with uncertainty or risky.
These guidelines will empower governments to responsibly design and deploy AI technology for the benefit of citizens. At the same time, governments’ significant buying power can drive private-sector adoption of these standards, even for products that are sold beyond government. And, as industry debates setting its own standards for these technologies, the government’s moral authority and credibility can help set a baseline for those discussions. These indirect methods of influencing the trajectory of AI provide a softer alternative to regulation, particularly needed in an arena where traditional governance measures are too slow in the face of fast-paced technological change. While companies are generally wary of stricter guidelines for government procurement, this project builds on numerous case studies in which common-sense frameworks have helped governments overcome reluctance to procure complex new technologies and have actually opened new markets for companies.
How can the regulation of AI keep up with technological development and how important is multilateral/international regulation and standardization?
We believe that by working with business, government, civil society, IGOs and academia, the Forum can help co-create governance mechanisms for AI, which means that there will not be a race to the bottom. Many governments around the world are interested in collaborating with the Forum, which has C4IR offices in San Francisco, China, Tokyo and India, with some seven more due to open around the world in 2019. The governance of AI is something which we need to do together so that the benefits of AI can be obtained and its negative effects mitigated.
The WEF is also working on the AI Board Toolkit, a project to help businesses think about the governance of AI.
As AI increasingly becomes an imperative for business models across industries, corporate leaders will be required to identify the specific benefits this complex technology can bring to their businesses, as well as to address the need to design, develop and deploy it responsibly. Striking the right balance will lead to sustainable businesses in the Fourth Industrial Revolution, but failing to design, develop and use AI responsibly can damage brand value and risk customer backlash.
Board members of all companies are responsible for stewarding their companies through the current period of unprecedented technological change and its attendant societal impacts. A practical set of tools can empower Board members to ask the right questions, understand the key trade-offs and meet the needs of diverse stakeholders, as well as to consider and optimize approaches such as appointing a Chief Values Officer, a Chief AI Officer or an AI Ethics Advisory Board.
This AI Board Toolkit will be designed around four pillars: technical, brand, governance and organizational impacts of AI, each providing an ethical lens around creating, marketing and sustaining AI in the long term. The toolkit will also support companies in deciding whether and how to adopt particular approaches and in understanding the power of the technology to advance their business.
The toolkit will be designed by AI experts in collaboration with board members and key stakeholders from diverse companies and industries to ensure it meets the specific needs of corporate leaders and can lead to practical action and concrete impact.