In 2019, a British company was defrauded of US$243,000 when criminals used artificial intelligence (AI) to impersonate the voice of its CEO. In South Africa, impersonation fraud increased by 356% between April 2022 and April 2023, according to the South African Fraud Prevention Service. The organisation’s executive director, Manie van Schalkwyk, said the 2021 INTERPOL report placed South Africa at the forefront of cyber threats in Africa.
Advances in technology make these crimes easier to commit and harder to detect, blurring the line between the real and the fake. The widespread personal use of AI applications such as ChatGPT and like.ai adds to concerns about potential abuse.
While AI is still in its infancy in Africa, it has the potential to have both positive and negative effects on transnational organised crime. AI technologies are helping organised criminal groups commit more sophisticated crimes, at a greater distance, and with less physical risk. Researchers say criminals use AI in the same ways as legitimate companies, for “supply chain management, risk assessment and mitigation, employee screening, social media data mining, various types of analysis and problem solving.”
Organised criminals in Africa use drones for intelligence, surveillance and reconnaissance. Drug cartels in Mexico already use autonomous attack drones controlled by AI, which provide flexibility and coordination during physical attacks on human targets, supply chains or infrastructure.
Satellite imagery can help criminals plan and manage smuggling routes using artificial intelligence systems such as Earth Monitoring, which provide accurate, near-real-time local terrain data. Organized criminals can also attack AI systems to evade detection (such as biometric scans), circumvent security systems at banks, warehouses, airports, ports, and borders, or wreak havoc on private sector, government, and economic infrastructure networks.
AI-powered attacks on confidential personal databases and applications allow criminals to extort victims to generate income or gain political leverage. Deepfake technology can be used to access funds by impersonating account holders, to request access to secure systems, and to manipulate political support through fake videos of public figures or politicians speaking or acting reprehensibly.
Digital policing
However, AI also provides new ways for law enforcement to police crime. It can map movements, identify patterns, and predict, investigate and prevent crime. Predictive policing allows law enforcement to calculate where crime is likely to occur based on AI algorithms. But this comes with harms, including patterns of discriminatory policing.
In Africa, private security companies are often more technologically advanced than police forces, many of which lack basic Internet access, technological resources and capabilities. But private sector digital systems could be linked to police databases or used to track down suspects.
For example, the VumaCam licence plate recognition system in Johannesburg uses more than 2,000 cameras and is connected to the South African Police Service’s national database of suspected or stolen vehicles. Scarface, used by Bidvest Protea Coin in South Africa, applies facial recognition to detect potential suspects in real time. Its data can be used as evidence in criminal cases.
AI can also be used against organised crime in remote areas. EarthRanger uses AI and predictive analytics to collect real-time and historical data from wildlife, ranger patrols, spatial data, and observed threats in protected environmental areas. The technology helped dismantle poaching operations in the Grumeti Game Reserve in Tanzania and encouraged local communities to coexist with protected wildlife in Liwonde National Park, Malawi.
The Africa Regional Data Cube is an AI system that layers 17 years of satellite imagery and Earth observation data for five African countries (Kenya, Senegal, Sierra Leone, Tanzania, and Ghana). It stacks 8,000 scenes across a time series and makes the compressed, geocoded, analysis-ready data accessible via an online user interface. Its data comparing land changes over time could, for example, help identify and track illegal mining operations.
Although AI can help tackle organised crime in Africa, it comes with many limitations — and risks. AI needs an uninterrupted power supply, a stable internet connection, and the capacity to store and process huge amounts of data. This means its deployment across Africa will be uneven, depending on countries’ resources, law enforcement capabilities, and willingness to operate through public-private partnerships.
Because AI systems can be attacked or fail, over-reliance on AI in tackling organized crime is another risk. The technologies can also give law enforcement agencies overwhelming powers, which can violate citizens’ rights to privacy and freedom of assembly and association.
As AI constantly evolves, legal frameworks are always trying to catch up. Private companies and even governments may take advantage of this to circumvent privacy concerns. VumaCam has come under scrutiny for collecting potentially sensitive data on individuals unrelated to crime.
Authoritarian governments can use legitimate AI systems to monitor political opponents or suppress criticism from civil society. Human rights advocates in Zimbabwe are concerned about the government’s implementation of Chinese-developed facial recognition software and the possible ownership and use of this data.
In September 2021, the United Nations High Commissioner for Human Rights, Michelle Bachelet, called for governments to stop using AI that threatens or violates human rights. In March this year, leading AI developers called for giant AI experiments to be paused to allow the development of strict security protocols and oversight mechanisms.
But AI is rapidly evolving in both scope and scale. International bodies, governments, and civil society must keep pace by developing responsible and ethical AI principles. Laws are needed to investigate, prosecute and punish those who use AI for criminal and violent ends. DM
Romy Sigsworth, Research Consultant, Enact, Institute for Security Studies (ISS)
ENACT is funded by the European Union and implemented by the Institute for Security Studies in partnership with INTERPOL and the Global Initiative to Combat Transnational Organized Crime.
This article was first published by ISS Today.