Potential risks from General-Purpose AI Systems - Part II: Systemic Risks

    Before we continue with systemic risks, let’s look at something I read.

    In his book Scary Smart, Mo Gawdat asks his readers to imagine sitting in the wilderness next to a campfire in 2055, 99 years after the AI story began in 1956. In the imagined scenario, the story of AI has led us to the middle of nowhere, and the question that lingers as you start reading the book is: are we in the wilderness escaping the machines, or enjoying how efficient AI has been at making life (in the wilderness) better? Anyone who has watched an AI apocalypse movie will jump to the idea that we are in the wilderness escaping the machines! But this could be wrong; what if we managed to build AI in a way that made it safe? What if our governance efforts make AI precisely what the world needs to solve its existing challenges?

    Systemic risks

    Systemic risks are risks beyond those directly posed by the capabilities of available general-purpose AI systems. The widespread deployment of these systems carries risks that cut across labor markets, privacy, and the environment. The continuous expansion and advancement of general-purpose AI poses a risk to labor markets because of its capability to automate a wide variety of tasks. These tasks were previously handled by people, and as AI systems keep improving at performing them, they threaten significant disruption to the labor market. However, a different school of thought argues that these job losses could be offset by the creation of new jobs in non-automated sectors.

    A second risk is the global AI research and development divide. Current AI research and development is concentrated in a small set of countries: a few Western countries and China. This deepens the AI divide, which could widen further as dependence on this small set of countries grows, and it is likely to contribute to global inequality. Low- and middle-income countries will feel this divide even more as AI advances, given their limited access to the expensive computing needed to develop general-purpose AI.

    There is also high market concentration: a small number of companies supply the AI systems most people use, which creates a single-point-of-failure risk. Today, only a handful of mainstream AI systems are accessible to most people in the world, so a single bug in any of them could create a worldwide risk. If organizations across sensitive and critical sectors all rely on this small number of general-purpose AI systems, a single vulnerability would affect all of them and cause simultaneous failures or disruptions.

    As AI advances and computing use in general-purpose AI development grows, so does the use of energy, water, and raw materials. Setting up the necessary infrastructure consumes these environmental resources on a large scale, which poses a risk to an environment that is already under strain. There is progress in efficiency techniques that would allow computing to be used better, but these are not yet mainstream or significant enough to offset the resources that computing consumes.

    The conversation about privacy risk has come up many times, not just in the AI sector but in other sectors that handle data. General-purpose AI models are trained on massive data sets, which increases the likelihood of exposing that data in responses or of the data being used maliciously to cause or contribute to a breach of privacy. General-purpose AI can also be used to infer sensitive and private information from large amounts of data. For now this remains a potential risk, since we still don't have enough evidence of widespread privacy violations.

    Recently, AI developers realized that having a model pause for a second or two before responding, to break the request down into simpler, smaller tasks, can produce better results, and voilà, the long chain of thought was born!
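    The decomposition idea can be pictured with a toy sketch. This is not any particular model's actual mechanism (real chain-of-thought happens inside the model's generated text); it is just a small, hypothetical Python analogue showing why answering in explicit subtasks leaves a visible, checkable trace rather than one opaque step.

```python
# Toy analogue of chain-of-thought decomposition (illustrative only).
# Instead of answering "(a + b) * c" in one opaque step, the solver
# breaks the request into two smaller subtasks and records each
# intermediate result, the way a long chain of thought exposes its steps.

def solve_with_steps(a, b, c):
    """Compute (a + b) * c as explicit subtasks with a visible trace."""
    steps = []
    subtotal = a + b                                 # subtask 1: the addition
    steps.append(f"step 1: {a} + {b} = {subtotal}")
    result = subtotal * c                            # subtask 2: the multiplication
    steps.append(f"step 2: {subtotal} * {c} = {result}")
    return result, steps

answer, trace = solve_with_steps(3, 4, 2)
print(answer)          # 14
for line in trace:
    print(line)        # each intermediate step is inspectable
```

    The design point is the trace: because every intermediate result is written out, a mistake can be located at the step where it happened instead of being hidden inside a single final answer.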
    

    Next week, we’ll look at risk management techniques for managing potential general-purpose AI risks.