Potential Risks from General-Purpose AI Systems: Part 1 - Risks

Based on the International AI Safety Report, January 2025

Let’s start with an analogy to set the stage.

For those of us who have watched the TV show Person of Interest, we are aware of the two intelligent surveillance systems in the show. The Machine is the ethical one and is considered the protagonist; it was built by Finch. The second system, Samaritan, was built by Arthur but deployed by Decima Technologies. That’s not what I want to talk about, though. I want to focus on their working philosophies.

The Machine focuses on giving everyone a chance. It does not classify anyone as enemy or friend; it simply flags the subject as a person of interest, and it is up to its agents to investigate and find out where the subject falls in that dichotomy. This makes it the system everyone would prefer watching over them (or maybe just me).

Samaritan, on the contrary, does things differently. It perceives everything as binary: good or evil. Even that classification is not permanent; the good can be deemed evil or irrelevant at any given time, which would require the target’s elimination. Additionally, Samaritan takes in requests, meaning its agents can ask for a person to be found and, within seconds, Samaritan locates the subject. This is not the case, at least at first, with the Machine. The access that allows Samaritan’s agents to choose a target is what makes it dangerous: it can be used for malicious ends.

As soon as I started reading the report, I thought of this show. Many of the functions of these surveillance systems remind me of how AI is being used today. The recently published International AI Safety Report (January 2025) identifies various risks arising from the use of general-purpose AI.

Before we get started on the risks…

What is general-purpose AI?

It is AI that can perform a wide variety of tasks. It can write computer programs, generate custom photorealistic images, and engage in extended open-ended conversations. Most of us have used this type of AI in one way or another.

Companies are rushing to invest in and develop general-purpose AI agents to compete and advance their standing in the market. AI agents are autonomous general-purpose AI systems that can act, plan, and delegate; they are meant to achieve a goal with minimal or no human oversight. This rapid development, however, reduces the time available to evaluate the risks they pose. The result is the "evidence dilemma": policymakers cannot properly weigh the potential risks and benefits of these advancements because there is not yet ample scientific evidence. Most of the risks identified below suffer from this evidence dilemma, since the pace of growth does not leave enough time to study the systems and evaluate the risks.

So, which are these risks, you ask?

Let’s dive in and look at General-purpose AI risks:

According to the report, the risks can be categorized into three groups:

  1. Malicious use risks

  2. Risks from malfunctions, and

  3. Systemic risks

Several harms from general-purpose AI have been well established. Nonconsensual intimate imagery (NCII), child sexual abuse material (CSAM), biased output about people and opinions, reliability issues, and privacy violations are common issues witnessed with the rampant rise of general-purpose AI. Additional risks keep emerging as these AI systems demonstrate more capabilities. Let’s briefly look through the three categories mentioned above.

Malicious use risks

When we talk about malicious use, we mainly refer to malicious users. With the capabilities of general-purpose AI today and in the future, malicious actors can use AI to cause harm to individuals, organizations, and society. Lately, there has been a surge in fake content created to embarrass individuals or portray them falsely, especially celebrities. Cases of President Trump being portrayed as saying things he never said have been on the rise in the media (link). AI-generated content has also been used to harm individuals through nonconsensual deepfakes, and voice impersonation can be used to conduct financial fraud and blackmail.

With the generative power of general-purpose AI, manipulative content can be produced to sway public opinion. A simple prompt can generate content at scale that can be used to manipulate political views. One mitigation is content watermarking, but this can be circumvented with simple tools such as cropping and image manipulation, as the sketch below illustrates.
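To make that fragility concrete, here is a minimal, purely illustrative sketch. It assumes Python with the Pillow library and a toy least-significant-bit scheme (the MESSAGE payload and helper names are mine, not any vendor's actual watermarking method, which is typically more robust); even so, it shows how a simple crop can make a naive pixel-level watermark unrecoverable.

```python
from PIL import Image  # pip install Pillow

MESSAGE = "WM"  # hypothetical watermark payload

def to_bits(text: str) -> list[int]:
    """Turn a string into a flat list of 0/1 bits."""
    return [int(b) for ch in text.encode() for b in f"{ch:08b}"]

def embed(img: Image.Image, bits: list[int]) -> Image.Image:
    """Hide each bit in the least significant bit of one pixel (row by row)."""
    out = img.copy()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        value = out.getpixel((x, y))
        out.putpixel((x, y), (value & ~1) | bit)
    return out

def extract(img: Image.Image, n_bits: int) -> list[int]:
    """Read the least significant bit of the first n_bits pixels."""
    return [img.getpixel((i % img.width, i // img.width)) & 1 for i in range(n_bits)]

if __name__ == "__main__":
    original = Image.new("L", (64, 64), 128)   # plain grayscale stand-in image
    bits = to_bits(MESSAGE)
    marked = embed(original, bits)
    print(extract(marked, len(bits)) == bits)  # True: watermark reads back intact
    cropped = marked.crop((8, 8, 64, 64))      # a simple crop shifts/drops pixels
    print(extract(cropped, len(bits)) == bits) # False: watermark no longer recoverable
```

The point is not the specific scheme: any watermark tied to exact pixel positions or values can be degraded by everyday edits like cropping, resizing, or re-encoding, which is why watermarking alone is a weak defense against manipulated content.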

An even scarier risk is cyber offense. Using general-purpose AI, previously unknown bugs and cybersecurity vulnerabilities can be detected in systems. This is both a benefit and a risk, depending on who is using the AI system: a malicious attacker would exploit the vulnerability, while a defender would focus on patching it.

There is also the potential for biological and chemical attacks enabled by AI. According to the report, recent AI systems have displayed some ability to provide instructions and guidance for reproducing known biological and chemical weapons and to facilitate the design of novel toxic compounds. This is a major risk: malicious individuals who use this capability to attack, or to prepare for an attack, could cause massive loss of life. One way to mitigate this might be to also use AI to hypothesize antidotes to the hypothesized chemical and biological weapons; however, that means the weapons themselves would have to be hypothesized too. Perhaps watch Night Agent season 2 before doing this.

Risks from malfunctions

Even when the users are not malicious actors, AI on its own can malfunction and cause harm. I am not referring to a Terminator-style scenario, but to risks that can arise from the malfunction of general-purpose AI. One of these is reliability issues. People currently use general-purpose AI for various purposes, one of them being consulting it for medical or legal advice. Yes, we have gone from googling our symptoms to asking AI for a diagnosis!

These systems might generate false answers or misleading responses, which most people don’t take the time to verify. Many don’t verify because of limited AI literacy: they don’t know that AI responses can be false and that AI systems can hallucinate. Another reason we sometimes fail to verify is misleading advertising and miscommunication by AI developers.

AI systems can amplify social and political biases, harming the affected groups. This would result in discriminatory outcomes around resource allocation, reinforcement of stereotypes, and neglect of some groups or viewpoints.

See you next week as we explore systemic risks. Check here.

Remember: AI governance does not aim to thwart the development and advancement of AI systems, but to prevent harm resulting from that advancement.