The dark side of AI: Biases and a new geopolitical battle

The letters AI (artificial intelligence) strike fear into the hearts of some with the specter of a robot takeover of jobs, while many others are excited about the possibilities of AI for global business and the betterment of mankind. Yet looming in the background is an inevitable superpower struggle, mostly involving the United States and China.


Today, the majority of the global population communicates via social media or makes transactions online, leaving digital traces that are the “new oil” – data as a strategic asset – used by organizations that claim to improve the quality of our daily lives. To enable a digital transformation, you must first perform a data transformation. Indeed, our world is being dramatically influenced and driven by “big data.” In 2000, about 25 percent of all data was digitized; some 18 years later, 97 percent of all data existed in digital form of one kind or another. In the future, data-rich markets will offer individual choices unconstrained by our inescapable cognitive limitations.

A majority of people in the developed world, and increasingly in emerging markets such as Indonesia, now communicate via social media channels, be it Facebook, WhatsApp, Instagram, Snapchat, Twitter or LinkedIn, to name just a few. All are powered by digital data and algorithms. Of the 262 million Indonesians, about 53 percent are internet users, a penetration rate that lags behind several other ASEAN member states such as Singapore, Malaysia and Thailand. Indonesia is nonetheless the biggest digital economy in Southeast Asia, with more than 100 million smartphones in service. The vast majority of these Indonesian internet users, about 89 percent, chat on social media platforms, while only about 7 percent engage in internet banking. By comparison, 76 percent of Chinese smartphone users and 25 percent of American users make mobile point-of-sale purchases. Astonishingly, about 62 percent of all global mobile financial transactions originate from Chinese customers. It seems that “China has leapfrogged check and cards … and gone straight to mobile,” according to Neil Shah, a research director at Counterpoint Research in Mumbai. There is no dispute that China is ahead of the rest of the world in mobile payments. However, Indonesia, as the biggest ASEAN market, has numbers on its side, too, in comparison with many Western countries, allowing it to make a similar leap in the right context.

It is not hard to argue that the world is moving from a finance capitalist system to a form of data capitalism, facilitated by growing internet traffic (the network effect), massive data sets and the enhanced data-processing capacity, or analytical power, of computers. And artificial intelligence (AI) – basically an advanced information system that feeds huge amounts of data into algorithms to seek patterns and make predictions – will play a contributing role by allowing humans to make better and smarter decisions. We can all agree in principle that intelligence is the ability to deploy novel means to attain a goal. The distinction between specialized intelligence and general intelligence helps clarify the difference between the specialized abilities of today’s learning machines – “weak” and narrow AI – and humans’ more general abilities. Artificial intelligence is the overarching science concerned with intelligent algorithms, whether or not they learn from data. Machine learning is a subfield of AI devoted to algorithms that learn from data. Artificially intelligent machines are “smarter” – faster and better – than humans in terms of certain kinds of specialized knowledge or intelligence. But that knowledge remains confined to a specific domain.

We can distinguish three types of AI that businesses attempt to commercialize: (i) process automation with robotics and cognitive automation, accounting today for about one-half of AI applications; (ii) cognitive insights that allow better predictions, which will be our focus – roughly one-third of today’s AI applications; and (iii) cognitive engagement, as in the natural language processing of chatbots and intelligent assistants.

IBM’s Watson, the supercomputer, may have beaten the human champions at “Jeopardy!” but it cannot play chess at all. A Tesla car may (sort of) drive autonomously, but it cannot autonomously pick up a box at the nearby Carrefour. Artificial intelligence firm DeepMind (now part of Google) has tackled games such as Go and StarCraft and is now turning its attention to solving school-level math problems. The researchers tasked an AI platform with teaching itself to solve arithmetic, algebra and probability problems, among others. Believe it or not, DeepMind’s machine-learning system has not done a very good job so far.

Current AI machine-learning algorithms are, at their core, rather simple and straightforward. Some may even describe these computers as “dumb” but very fast machines on steroids. AI is doing descriptive statistics in a way that is not science and would be almost impossible to turn into science. Most phenomena in the world are nonlinear, and machine learning opens up a vast new world of nonlinear models, shedding light on previously barely understood correlations and realities. Despite the remarkable advances in computing, the hype about artificial general intelligence – a computer with general intelligence that will think like a human and possibly develop consciousness – smacks of science fiction. Deep learning machines and AI cannot answer the “why” question yet. We have no idea which parts of the human brain – if it is the brain at all – are responsible for human consciousness.

It seems that we tend to underestimate the complexity and creativity of the human brain and how amazingly general it is compared to any digital device we have developed so far. What makes us human, characterized by general intelligence, is the scope of the goals we set for ourselves, in contrast to artificial narrow intelligence (ANI), which is unable to set goals for itself and has no self-consciousness. Intelligence tests of AI are built on British researcher Alan Turing’s famous “imitation game” test. A computer passes the Turing test if it can fool a human during a question-and-answer session into believing that it is, in fact, a human being. We are a long way from achieving this feat.

Obviously, we should not underestimate the enormous potential benefits that AI may create in this new era of Industry 4.0. But on the downside, this hyperconnected world will also be managed, and possibly manipulated, by a few multinational quasi-monopolies. This global interconnectivity could also result in huge and dangerous systemic failures. To what extent can we rely on this data to make profound and better choices in our lives? That question makes data governance crucial to guarantee some form of data “trustworthiness.” And to what extent has AI, using digital data to make better and faster decisions and producing sometimes unexpected “discoveries,” become the new geopolitical battlefield between superpowers competing for supremacy in this new field?

Potential benefits and current limitations

The power of our increasingly “datafied” reality, be it biotech, nanotechnology, robotics, cybertechnology or artificial intelligence systems, will have a huge impact on everyday life; it records all our movements, human interactions and financial transactions, all stored in the “cloud.” This new reality within Industry 4.0 will facilitate our lives and possibly enhance their quality. This improved transactional efficiency will likely result in more sustainable solutions. Indonesians benefit enormously from the Tokopedia online platform, and the competition between Indonesia’s Go-Jek and Malaysia’s Grab has allowed people to hail bike or taxi rides via online apps. Just consider the GPS-led systems that guide autonomous cars, or virtual assistants such as Apple’s Siri and Amazon’s Echo. These tools are reflections of the way we think and talk. Companies such as Spotify, with its playlists, and Facebook, with its newsfeed, combine human and computer expertise to create new services and enable people to discover and engage with content and brands in new ways.

Artificial intelligence today functions by brute force, using millions of samples and reinforcement learning over small pieces of experience to approximate a desired function. For instance, AI – as in deep learning machines – will power self-driving cars by helping them to “see” the world around them: recognizing patterns in the camera pixels, figuring out what those patterns correspond to (a stop sign, say), and using that information to make decisions (to stop, for example) that optimize the desired outcome (for instance, delivering the passenger home safely in minimal time). Transportation by self-driving cars, where driving is transformed into a prediction problem, will keep us safer, on average.
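To make that perceive-predict-act framing concrete, here is a minimal Python sketch under illustrative assumptions: the perceive function stands in for a trained vision model (its labels and probabilities are hypothetical and hard-coded), and the decision rule is deliberately trivial. A real autonomous-driving stack is vastly more complex.

```python
from typing import Dict

def perceive(camera_pixels) -> Dict[str, float]:
    """Stand-in for a trained vision model that maps raw pixels to
    class probabilities. The output here is hard-coded for
    illustration; a real system would run a deep network."""
    return {"stop_sign": 0.92, "clear_road": 0.08}

def decide(predictions: Dict[str, float]) -> str:
    """Turn predicted probabilities into an action that optimizes the
    desired outcome (here simply: brake when a stop sign is likely)."""
    if predictions.get("stop_sign", 0.0) > 0.5:
        return "brake"
    return "continue"

frame = None  # placeholder for raw camera input
print(decide(perceive(frame)))  # -> "brake"
```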

Few organizations use AI for personalization better than the online fashion company Stitch Fix or the movie streaming giant Netflix. Computers do not understand why you are watching a particular movie, but they are great at crunching data – tabulating vast databases of subscribers’ movie-watching histories from a ratings matrix to estimate the conditional probabilities of an individual’s movie preferences. This is discovered organically by AI. And Gap and other fashion houses intend to rely more on AI to predict, though not necessarily invent, the next fashion item for customers. These organizations personalize their offers to individual customers by applying a sophisticated algorithm that continuously calculates conditional probabilities – the chance that one thing happens given that some other thing has already happened. Conditional probability is how AI systems express judgments in a way that reflects their partial knowledge. Personalization algorithms run on conditional probabilities, all of which must be estimated from big data sets in which you, as an individual, are the conditioning event. Real problems are framed in terms of conditional probability (if-then logic) in order to solve them.
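As a toy illustration of that idea, the Python sketch below estimates one such conditional probability from a tiny, made-up viewing matrix. The subscribers, titles and simple counting rule are hypothetical assumptions, not any company’s actual recommender.

```python
# Rows are subscribers; 1 means the subscriber watched that title.
history = {
    "alice": {"Drama A": 1, "Thriller B": 1, "Comedy C": 0},
    "bob":   {"Drama A": 1, "Thriller B": 1, "Comedy C": 1},
    "carol": {"Drama A": 0, "Thriller B": 0, "Comedy C": 1},
    "dave":  {"Drama A": 1, "Thriller B": 0, "Comedy C": 0},
}

def conditional_prob(target: str, given: str) -> float:
    """Estimate P(watches target | watched given) by simple counting."""
    watched_given = [u for u in history.values() if u[given] == 1]
    if not watched_given:
        return 0.0
    watched_both = [u for u in watched_given if u[target] == 1]
    return len(watched_both) / len(watched_given)

# The "conditioning event" is what the viewer has already watched.
p = conditional_prob("Thriller B", given="Drama A")
print(f"P(Thriller B | Drama A) = {p:.2f}")  # 2 of 3 watchers -> 0.67
```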

Amazon, for instance, looks for unique patterns in the data it receives from customers that reveal their preferences. Identifying such patterns enables Amazon to statistically deduce customers’ wants and needs without having to ask them directly. The data tells Amazon, approximately, what you and I want. The data does not know why we prefer one thing over another; it just “sees” that we choose one thing over the other, indicating hidden patterns in our preferences. But that is sufficient for Amazon to feed its preference-matching algorithm and search for the products that potential customers are most likely to purchase. Let us not forget that one-third of Amazon’s business comes from its recommendations, as does three-quarters of Netflix’s.

Quite a number of companies are already taking advantage of big data and its predictive power. Aviva, a private global insurance company, can now predict insurance claims based not on a detailed report of the health of its subscribers – who may have given blood and urine samples costing the company $125 per person to analyze – but on credit reports and consumer marketing data that cost only $5 per person on average. This data on the lifestyles of people taking out an insurance policy now functions as a proxy for the health of these customers. Banks can detect credit card fraud by looking for anomalies, and the best way to find them is to crunch all the data – big data – rather than a sample. The card network uses data about past fraudulent (and non-fraudulent) transactions to predict whether a particular recent transaction looks abnormal and therefore possibly fraudulent, preventing actual fraud or future illegal transactions. And let us not forget that computers now execute trades in the financial markets at speeds that were impossible to conceive just a couple of decades ago, bringing trading costs and bid-ask spreads down to levels never before seen.
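A bare-bones Python sketch of the anomaly idea: flag a transaction whose amount sits far outside a card’s own history, using a z-score rule. The threshold and the numbers are illustrative assumptions; real card networks use far richer features and models.

```python
import statistics

# Hypothetical past transaction amounts for one card.
past_amounts = [23.5, 41.0, 18.2, 55.9, 30.4, 27.8, 49.1, 35.0]

mean = statistics.mean(past_amounts)
stdev = statistics.stdev(past_amounts)

def looks_anomalous(amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is far from this card's history."""
    z = abs(amount - mean) / stdev
    return z > threshold

print(looks_anomalous(38.0))    # False: within the cardholder's pattern
print(looks_anomalous(2500.0))  # True: a large deviation worth reviewing
```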

These algorithms, functioning as neural networks, can detect small nonlinearities in very noisy data, beating the linear models that were long prevalent in finance. However, do not expect these powerful algorithms to have a clue how their predictions relate to politics, for instance: these machines do not understand why a particular context does or does not matter to a financial ecosystem. AI has made financial trading systems quite efficient, though once in a blue moon it causes systemic failures that are jaw-dropping and dangerous if not swiftly corrected. And although much actual trading has been handed over to computers, major, unique investment decisions are still made by humans and communicated to clients by human executives.

General Electric and Rolls-Royce have both implemented big data analytics in their commercial jet engine businesses to predict more accurately when to replace expensive parts or when to begin maintenance, allowing those firms to adopt new business models: leasing or renting engine power to airlines instead of merely selling engines. Indeed, computers have become significantly better at image and voice recognition and speech synthesis. Computers can now detect tumors in radiographs before most humans can. Medical diagnosis and personalized medicine will improve substantially. Increasingly intertwined collaboration between humans and machines – that is the future.

Remember that today, AI is at its best within a very specific domain of expertise, assistance or automation. Because of the methods used to train AI platforms, AI effectiveness is directly tied to the clarity of goal specification. What makes AI so powerful is its ability to learn. Normally, we think of labor as the learner and capital as fixed. Now, with AI, we have capital that learns. Companies need to ensure that information flows into decisions, that decisions are followed through to an outcome, and that the learning from that outcome is fed back into the system, as sketched below. Managing this learning loop will be more valuable than ever. Making individuals and groups smarter – making a more “humanized AI” – will work only if the feedback is truthful. In other words, data must be grounded in truth. Manipulative advertising, propaganda and “fake news” destroy the usefulness of social sampling and of data in general.
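Here is a minimal sketch of that decision-outcome-feedback loop, assuming a simple exponential-moving-average update (an illustrative choice, not a prescribed method). Note how untruthful feedback would corrupt the belief just as readily as truthful feedback improves it.

```python
def update(belief: float, outcome: int, alpha: float = 0.3) -> float:
    """Fold an observed outcome (1 = success, 0 = failure) back into
    the current belief that a given action succeeds."""
    return belief + alpha * (outcome - belief)

belief = 0.5  # prior belief before any feedback
for outcome in [1, 1, 0, 1, 1]:  # hypothetical observed results
    belief = update(belief, outcome)

print(f"Belief after feedback: {belief:.2f}")  # rises to about 0.77
# If the outcomes were faked ("fake news"), the loop would learn the
# wrong lesson with exactly the same machinery.
```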

Obviously, data-driven markets offer compelling advantages, and innovation and progress should not be stifled by irrational emotional fears or overly stringent regulations. But the shortcomings and ethical challenges should not be ignored, especially given the concentration of data in a few companies and the possibility of systemic failure. What interests us here is the importance of transparency about information and its algorithms to reduce potential information asymmetry. In other words, can data governance control artificial intelligence? And how does international politics affect data governance that touches our lives beyond national borders?

Can we really trust big data?

Facebook’s algorithm decides what information to show us on the basis of the choices we have already made. This filtering can create a filter bubble or echo chamber, even for initially unbiased people: the filter model picks up small initial differences and exaggerates them until the other side of the argument is lost. And that is without mentioning the spreading of untrue rumors that become fake news, a source of constant entertainment in a kind of post-truth world.
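That amplification dynamic can be illustrated with a toy simulation in Python. The update rule and all parameters here are invented for illustration and bear no relation to any platform’s real ranking algorithm.

```python
import random

random.seed(42)
lean = 0.5        # the feed's estimate that the user prefers side A
true_pref = 0.55  # the user's actual, only slightly tilted preference

for _ in range(200):
    shown_a = random.random() < lean  # over-serve the estimated side
    click_prob = true_pref if shown_a else 1 - true_pref
    if random.random() < click_prob:  # user clicks what matches taste
        lean += 0.05 if shown_a else -0.05
        lean = min(max(lean, 0.0), 1.0)

print(f"Estimated lean after 200 rounds: {lean:.2f}")
# A 0.55 true preference is typically amplified toward 1.0: small
# initial differences are exaggerated until the other side vanishes.
```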

Another example is the popular Tinder application, which uses algorithms to romantically link people. It is an example of what can be described as “amplified biasedness” through machine learning. Tinder is one of the fastest-growing social networking apps on a global scale, with users in 190 countries swiping through 1.6 billion pictures a day and having generated some 20 billion matches to date. Yet we should not ignore how the biases of Tinder’s algorithm reflect our society and how we analyze and perceive other humans. Despite the personal swiping choices we make in finding a romantic partner, this online dating application seems to be reinforcing racial prejudices. Depending on how an algorithm is programmed, on its users’ online behavior and on the data set it is given for the matching process, certain cultural traits will be highlighted, visualized and prioritized while others are left out or rendered invisible. Tinder’s “magic” black box does not reveal how it functions, which means that this kind of algorithm is not value-free: it reflects existing cultural and individual preferences and human biases, a darker shadow not exactly expected from a cold, presumably objective calculating machine. Unfortunately, the system’s algorithm thus reflects a darker side of our culture: embedded biasedness. Reliable academic studies have shown that black women and Asian men are marginalized and possibly discriminated against in online dating environments. The pattern would likely differ in a Chinese version of Tinder, of course, because the prevailing preferences in a particular context will be reflected in the data stream used to “predict” certain choices or decisions.

Specific biases in the criteria and variables used in these algorithms either go unexamined or remain unconscious and invisible to their designers, reinforcing our point that we should be wary of trusting these algorithms blindly. And here we face a paradox: machine-learning AI claims to be neutral and to provide better decision-making options, whereas in reality the underlying criteria and variables of these algorithms – often based on detecting personal preferences through behavioral patterns to generate recommendations – are nothing but a mirror of our societal practices, potentially even reinforcing existing biases. Indeed: biased garbage in, biased garbage out. The game of speedy and more precise prediction is not as objective as the owners of these apps proclaim. Even if we, or those owners, have the best intentions, those intentions too can easily be (socially or personally) biased.

The code of algorithms is not value-neutral – it contains many judgments about who we are, who we should become and how we should live. If we are asked to choose a software solution, will we be subtly influenced to buy from a particular online vendor, and could we be affected by the vendor’s (subconscious) prescriptive norms and values? What if those values (or biases) are less than benevolent? Even if a data set accurately reflects historical facts, that does not mean it is ethical and fair, especially if it can be shown that history itself was not necessarily fair. We should question whether an algorithm is fair and whether AI is doing things that humans believe are ethical. To bring ethics into AI, one needs a human-in-the-loop approach, as in an “open algorithm,” not a black box.

Moreover, we can easily fall into a dictatorship of data, where quantification and data become a new fetish. The quality of the underlying data can be poor or even biased; it can be misanalyzed or used in a misleading manner. Worse, data can fail to capture what it purports to quantify, leading us to attribute a degree of truth to it that it does not deserve. Many thinkers have argued that creative brilliance does not depend on data. The increasing reliance on data may also lead to the risk of a “tyranny of algorithms,” in which unelected data scientists and data experts run the world. The incredible power of AI firms such as Google, Amazon, Facebook, Apple, Microsoft, Baidu, Alibaba, Tencent and some other tech firms cannot be overstated. They currently control the data, and thus they control AI. Can we trust these organizations to do the right thing, always? Not quite.

The internet has made tracking easier, cheaper and more useful. However, the internet and big data also threaten our privacy. In an era of big data, the three core strategies long used to ensure privacy – individual notice and consent, opting out, and anonymization – have lost much of their effectiveness. Indeed, the Cambridge Analytica debacle, in which data from Facebook was used to influence the 2016 US elections and possibly the Brexit vote, shows that through access to personal data, companies and individuals can influence human behavior through personalized messages and advertising in a way never seen before. Artificial intelligence here looks more like advertising intelligence: big corporations have gotten better at collecting consumer data, then filtering and packaging it to sell back to consumers in the form of recommendations.

We believe that individuals, not the application providers, should own and control access to their personal data. And even when individuals “consent” to the use of their personal data, these corporations should account for the proper and transparent use of that data. Trust is the fundamental and necessary yardstick. Moreover, in nondemocratic states, and even in nominally democratic ones, governments know more about their citizens than Orwell imagined in “1984.” And the prospect of AI being used for malicious military purposes obviously remains frightening.

The geopolitical struggle for AI supremacy

If data is the new oil for economies, it is crucial who controls it and how it is regulated. And as everyone understands, the impact of AI automation on jobs could be dramatic in the short and medium term, especially if a number of blue-collar and also white-collar jobs are replaced by fast machine learning devices or robotics. When the deep learning machine AlphaGo (backed by arguably the world’s top technology company, Google) beat the world’s best Go player, Ke Jie of China, in May 2017, it was China’s Sputnik moment. The Chinese government set off a national mobilization for AI innovation. In ancient China, Go represented one of the four art forms any Chinese scholar was expected to master, leading to Zen-like intellectual refinement and wisdom.

The groundbreaking deep learning approach to artificial intelligence has turbocharged the cognitive capabilities of machines. These deep learning-based programs, known as narrow AI – in contrast to artificial general intelligence, which has not been achieved yet – can now do a better job than humans at identifying faces, recognizing speech and issuing loans. So companies, and the countries in which they originate, are eager to master this “new oil.” How these countries, especially the two most advanced in artificial intelligence, the United States and China, choose to compete and cooperate in AI will certainly have a dramatic effect on global economics and geopolitics. From a geopolitical perspective, it is not difficult to argue that mastery of artificial intelligence can be interpreted as part of the Digital Silk Road that Chinese President Xi Jinping has put forward. China obviously wants to strengthen its geopolitical stance through economic and other means. The Belt and Road Initiative (BRI) seeks to link economies, with China playing a dominant role in financing land, sea and digital trade routes. And although China’s BRI is promoted as “soft power” that creates win-win situations, we are less sanguine about the likely outcome, as debt-ridden countries that have received “generous” BRI investments are leaned on when they cannot pay Beijing back. China wants access to harbors to strengthen its military presence, as in Sri Lanka and Pakistan, and is making more subtle inroads in the Greek port city of Piraeus.

A lot of the difficult work in AI development and discovery has been driven by a handful of elite researchers, virtually all clustered in the United States, Canada, Israel, Britain and France. We are now entering the age of AI implementation, where we see real-world applications. In this age of big data, successful AI algorithms need data, computing power and good AI engineers to make a difference. However, once computing power and engineering talent reach a certain threshold, data becomes the decisive factor in determining the overall power and accuracy of an algorithm. Looking at the four critical success factors for AI implementation – abundant data, hungry entrepreneurs, AI scientists and an AI-friendly policy environment – one can assume that China may emerge in the near future as the leading power in AI implementation, ahead of the United States. Researcher and venture capitalist Kai-Fu Lee argues that moving from discovery to implementation neutralizes one of China’s biggest weak points – outside-the-box thinking on research questions – and leverages its most significant strength: “scrappy entrepreneurs with sharp instincts for building robust businesses” that require speed and adaptability. Moreover, China’s alternate digital universe, controlled by Communist Party officials, now creates and captures vast new data about the real world that will prove invaluable in an era of AI implementation. And the fact that China’s government takes a utilitarian view, in contrast to Europe’s rights-based approach, means that its policies (under the control of the Communist Party) will encourage faster adoption of these technologies.

The Communist Party seems to have a strategy to take the lead in this new field: a strong degree of state support and intervention, transfer of both technology and talent, and investment over the long term.

Eager Chinese AI entrepreneurs, meanwhile, will find ways to turn data into commercially viable AI-powered applications. China has about 286 million digital natives, versus 75 million in the United States. This means that direct and indirect competition between American tech firms and Chinese-backed firms will become a fierce struggle for AI supremacy.

Beijing has supported “national champions” with substantial funding, encouraged domestic companies to acquire chip technology through overseas deals, and made long-term bets on supercomputing facilities. Companies such as Baidu and startups such as Cambricon are designing chips specifically for AI algorithms. And above everything sits access to large quantities of data, the crucial driver of any AI system. It is well documented that China’s data protectionism favors Chinese AI companies in accessing data from China’s large domestic market. Consider China’s progress in artificial intelligence – driven by big data from one-fifth of all the humans on the planet, combined with its gladiatorial entrepreneurs, unique internet ecosystem and a proactive government push – and it is not too difficult to imagine a shift in AI supremacy in favor of China. In robotics, European firms, especially German ones, may still play a competitive global role.

The battle between Chinese and American firms will become fiercer by the day. The White House’s political intervention to prevent Tencent and especially Huawei from installing 5G networks on American soil was the first shot across the bow of China’s global aspirations. Huawei, along with Nokia and Ericsson, has become one of the biggest suppliers of the high-tech kit used to build mobile phone networks around the world. Critics dismiss the idea that Huawei would address Western cybersecurity concerns out of commercial self-interest, pointing to a Chinese law that compels private firms to assist the state intelligence service when asked.

Indeed, geopolitics will increasingly be determined by the power struggle in the field of AI and by who comes out on top. Data, the new oil, where China has a number of advantages over the United States, is necessary but not sufficient to gain AI supremacy. The theoretical frameworks for deep learning innovation remain indispensable, and there the United States is still ahead. Although China may currently have the edge in facial recognition algorithms, boosted by its Ministry of Public Security, and in smartphone apps for financial services, Google and other companies in Silicon Valley are still ahead in the AI game. Moreover, China does not yet have the international data needed to reduce biases in apps intended for use beyond its borders. However, it is not too difficult to imagine China gaining an advantage by establishing clusters of world-class AI innovation centers by 2030.

Economic benefits are the primary and immediate driving force behind China’s development of AI, since AI systems could enable it to drastically improve productivity and meet gross domestic product targets. However, China’s adoption of AI technologies could also have implications for its mode of social governance, whereby AI is intended to play an “irreplaceable” supervisory and security role in maintaining social stability. AI will undeniably benefit a broad range of public services, including judicial services, medical care and public security. However, the growing concern over privacy, and the willingness of private companies to participate in China’s various social credit systems, highlights the potential threats.

Finally, military applications of AI could provide a decisive strategic advantage in international security. Another real danger is the possibility of social disorder and political collapse stemming from widespread unemployment and gaping inequality between the AI haves and AI have-nots. Ethical concerns require a robust civil society, as exists in Europe. It is unclear to what extent an open ethical debate can be conducted in China, where civil society as a whole is bound by Communist Party oversight. The growing battle between these economic powers, aggravated by increased animosity between nationalistic and populist political leaders, is making economics even more transactional than before.

Accountability and data governance

The central question of the future will be: “How can humans benefit from collaborating with machines?” It is not about AI versus people. Indeed, AI provides many benefits that could improve human decision-making. However, it is obvious that privacy is under attack from all sides. To what extent should the power of the internet and AI firms be clipped and constrained to secure the privacy of the individual? There is a clear need for ethics and data governance in the use of artificial intelligence algorithms. And we should be crystal clear about which dangerous AI applications should be forbidden or not condoned in any form. It is paramount that we embed ethical behavior in these systems, which implies that we need to reach a consensual agreement on what those ethical behaviors should be. In other words, we need to invent more understandable and accountable systems that can augment human capabilities.

Artificial intelligence is starting to cause a revolution in business, but how should corporate boards respond to the implications of technologies that are not yet fully understood (except by the few top scientists and experts who may have written the algorithms)? We believe that a board and its executives should take an ethical approach to ensure full accountability and responsibility for their activities, including the use of the data fueling AI solutions. And although artificial intelligence is creating enormous commercial opportunities and real solutions on a grand scale, we cannot and should not ignore the daunting level of risk.

Our premise is that data belongs to the person to whom it relates. Admittedly, raw data has an economic value that may be increased, sometimes in unexpected ways, by amalgamation with the data of many others – call it the use of big data. Nonetheless, organizational accountability remains vital, because machine learning may lead AI to deliver decisions with an accuracy and complexity that defy mere human skills. Without proper accountability, conflicts of interest may arise between digital organizations and their customers over the use of private data, undermining trust in those organizations, as Facebook is experiencing. Consequently, corporate boards remain responsible for providing this accountability. Even when board members do not fully understand exactly how AI works, they need to consider the implications and be prepared to address the risks involved in the use of private data (even with the consent of the individual owner).

Moreover, we cannot ignore the possibility that powerful digital corporations may exploit the vulnerability of their customers or employees without their conscious knowledge. For instance, it may be perfectly acceptable to price airline tickets according to overall demand and supply at a specific time of day, but it becomes unacceptable to track a particular passenger’s booking habits in order to raise the price of tickets that person is likely to buy. Airlines that do so and are caught face serious reputational risk. And then there is the major geopolitical battle between the United States and China – and, to a lesser extent, Britain, Canada, Australia, Japan and the European Union – over who will set international AI standards.

The EU has taken the lead in legally securing data privacy, enshrined in Europe’s General Data Protection Regulation, even though it lags behind the United States and China in AI development and applications. The regulation means that when companies use data, they must respect the rights of the original owner. The principles protecting privacy are normally ethical in nature rather than technical. Trust can be destroyed very quickly by a failure or abuse of technology: systems that are overly intrusive or biased, or that set out to exploit vulnerabilities, are likely to cause reputational damage, as Facebook has discovered. CEOs and organizations that consider and respond to the ethical challenges are more likely to be trusted, and those that are trusted are more likely to survive and prosper in the long run. Governance has a key role in building these reflections into business models. If boards succeed in doing so, they will create a framework in which new technology can be used to everybody’s advantage.

As long as we are aware of the potential threats from AI and take all possible measures to reduce those risks, including being alert to the dangers of complete dependency on a technologically digitized world view, we should be able to remain creative and innovate for a better and fairer world. The darker side – the biases of weak AI, privacy violations and overreliance on the data of an AI-driven black box, the immediate new geopolitical battle between the United States and China, and ultimately the possible emergence of a “strong” artificial general intelligence – cannot be ignored. We should be prepared for the dangers of ideas, ideologies and institutions that allow information to feed collective decisions and understanding in ways contrary to an open and less dogmatic perspective on the world. If intelligence is the ability to deploy novel means to attain a goal, then we should allow some competitive forces to drive its evolution.

Artificial intelligence is becoming good at many human jobs, such as diagnosing diseases, translating languages and providing customer service, and it has improved rapidly since 2013. Obviously, there is a fear that AI will ultimately replace human workers throughout the global economy. But that does not need to happen. Never before have digital devices and machine tools (the internet of things is currently estimated to reach 20 billion devices) been so responsive to us. This kind of technology may radically alter how work gets done and who does it. The impact may be even larger when AI technology complements and augments human capabilities rather than replacing them.

The future lies in a beneficial collaboration between human general intelligence and artificial narrow intelligence and deep learning machines. The human advantage lies in the ability to ask metaphysical questions (why?) and to address ethical concerns. Only humans can feel empathy and mindful compassion toward other beings. In addition, our neocortex allows us to think rationally and reasonably, and to make links between unexpected patterns, resulting in innovative and insightful improvements to our tools. And the use of intelligent artificial tools could improve organizations’ products and services, positively affecting our quality of life, as long as the environmental, ethical, social and governance boundaries are embedded within a broader system.

Despite the darker side of AI, we recommend wise leadership, supported by data governance at the company level and proper AI policies and regulations on data at a national or, preferably, supranational level. The aim is to accommodate these innovative new tools – AI and deep learning machines – and to stimulate collaboration between smart humans and intelligent machines to secure “wiser” decisions.



Peter Verhezen is principal of Verhezen & Associates, a governance and risk consultancy, and a visiting professor at Antwerp Management School and Melbourne Business School.
