I am Mohammed Alothman, a professional in applied artificial intelligence, and at this turning point in AI breakthroughs I am motivated to reflect on what the future holds.
Predictions about AGI are all over the map: Sam Altman, CEO of OpenAI, has said it could be here by 2027 or 2028, while Elon Musk has suggested it might arrive as early as 2025 or 2026.
The limitations of artificial intelligence are now more apparent than ever, and with these technical breakthroughs so close at hand, we should think carefully about how AI will be abused, both by design and by accident. The harm will not come from the AIs themselves but from what people do with them.
The Reality of AI: Understanding the Limitations of Artificial Intelligence
The AI systems we work with are effective only within specific limits and often require human supervision to avoid mistakes.
Take large language models such as ChatGPT, for example: they are exceptionally good at producing text that reads as though a human wrote it, but when it comes to making accurate decisions, they fail badly.
For this reason, I repeatedly tell my clients that most AI misuse is not the result of wicked intentions but of a lack of proper appreciation of the technology: understanding its strengths and, more importantly, its weaknesses.
The Increasing Threat of AI Misuse
As AI develops, it is not just its limitations that cause problems but what people do with it. The abuse of AI, even when unwitting, is fast becoming a serious concern, and we will have to contend with it as the technology matures. Unbridled or morally inappropriate use of AI will have harmful consequences.
Unintentional AI Misuse: A Dangerous Path
In my line of work, I’ve seen how easy it is for people to fall into the trap of relying on AI without fully understanding the limitations of artificial intelligence. Consider, for instance, lawyers who use AI-generated content in court filings. Soon after ChatGPT was introduced, lawyers began using the tool in their practice to save time, not knowing that it could fabricate false content.
That is a striking example of AI misuse born of ignorance. Lawyers in New York, British Columbia, and Colorado have been sanctioned for presenting false information created by AI, because AI systems can sometimes “make up” information without regard to context or accuracy.
When I talk about AI in legal or medical contexts, I emphasize that these are not areas where we can afford to cut corners. The disadvantages of artificial intelligence are most visible in high-risk fields, and any misuse of AI there could be disastrous. The impact on individuals can be life-altering: a wrong legal decision, or a wrong medical diagnosis based on flawed data.
Intentional AI Misuse: A Greater Threat
The prospect is even more dismal when the intent behind the AI’s use is malicious. An example I point to time and again is deepfakes. Non-consensual deepfake videos and images of celebrities such as Taylor Swift have flooded social media.
These images are created with artificial intelligence applications, some of whose safeguards can be bypassed simply by misspelling a name. This only goes to show how easily artificial intelligence can be turned to malicious ends.
Over time, AI will become so sophisticated that it will be genuinely hard to tell the original from the counterfeit. This leads to what has been called the “liar’s dividend”: because evidence can now plausibly be deceptive, powerful groups and individuals gain the ability to falsely deny genuine evidence by claiming it is fabricated.
We have already seen this with politicians and corporate leaders who invoke deepfakes to discredit reliable information. In my view, this form of AI misuse will only become more prevalent in the years to come, making it harder to trust visual and audio media.
AI in Decision Making: A Risk for Everyone
Perhaps the most unsettling misuse of AI at this stage of the field’s development is its application to life-or-death choices. AI is increasingly used in industries such as health care, finance, and criminal justice to decide who can access resources or opportunities.
The problem arises when the limitations of artificial intelligence translate into errors that disproportionately harm vulnerable groups. For example, the Dutch tax administration used artificial intelligence to detect fraud in childcare benefit applications and ended up falsely accusing thousands of families. Parents were forced to repay tens of thousands of euros they did not owe, and the scandal ultimately led to the resignation of the Dutch Prime Minister and his cabinet.
Given such misuse of AI in fields like these, it is clear to me that AI systems must be carefully monitored and audited so that they do not make harmful decisions based on flawed or biased data. Just because an AI can make a decision does not mean it should be relied upon without supervision.
A Way Forward for Preventing AI Misuse
Unfortunately, there is no easy way out of the problem of AI misuse. Yet as a society we do have ways to reduce the risks of AI technologies. Above all, we must continually educate people and organizations about the scope and limits of artificial intelligence. As I often caution clients, knowing the pitfalls and dangers of AI is the first step toward using it responsibly.
Governments and organizations also need to develop policies that deter dangerous uses of AI, from the creation of deepfakes to its applications in health care (such as diagnosis and treatment) and law. Such legislation should restrict uses of AI that harm people. The challenge will be to draft laws and regulations that remain responsive to the dynamic nature of technological innovation.
And as AI advances, employees need to be trained in how to use these tools. The goal is to build systems that do not allow or facilitate accidental misuse of AI, including legal mistakes like those described above. Organizations also need safeguards against malicious or ethically inappropriate use of artificial intelligence.
Conclusion: Toward Responsible AI
The growing capabilities of AI should be met with cautious optimism; at the same time, the limitations of artificial intelligence and the serious threat of AI misuse must be recognized. As I have discussed, it is not the AI itself that poses the greatest threat but how people apply it, whether through ignorance or malice. The future of AI technology is bright, and it is up to each of us to ensure that we use it responsibly and ethically.
About the Author
Mohammed Alothman is the founder of AI Tech Solutions and an accomplished AI technology pioneer who has designed and commercialized AI technologies.
Mohammed Alothman has consulted for businesses across many sectors on the difficulties of implementing AI in the enterprise. He is a strong advocate for ethical AI design and responsible AI implementation, and his knowledge of how AI technology can be used, and of the limitations of artificial intelligence, has earned him a place as a thought leader in the field.
Mohammed Alothman continues to guide organizations in navigating the rapidly evolving landscape of AI while ensuring that these technologies are used for the greater good.
Read More Articles:
How AI Is Becoming A New Companion: A Discussion With Mohammad S A A Alothman
Mohammad Alothman Discusses How Artificial Intelligence
Mohammad S A A Alothman Talks About AI’s Influence on UK Industries
How AI Is Transforming Road Repair: A Discussion with Mohammad S A A Alothman