Defending Against AI’s Dark Side
Introduction
While artificial intelligence (AI) remains an intriguing frontier, it is, at heart, a technology like any other. If you decide to integrate it into your organisation, it is crucial to be aware of the potential hazards.
Although the benefits of AI are widely acknowledged, with 35% of businesses already adopting it, attention must also turn to preparing for potential pitfalls. This means understanding how AI can be used to deceive, manipulate, or harm organisations, and knowing the tools and tactics available to defend against and minimise those risks.
AI is being used disingenuously
One notable risk associated with AI is the potential for individuals to misrepresent themselves. AI tools can produce polished CVs and dramatically speed up their creation. In a competitive job market, graduates frequently use generative AI tools such as OpenAI's ChatGPT to create cover letters and CVs. While this can help candidates pass initial screenings, businesses may discover discrepancies between the qualifications on paper and the actual person during the interview.
Likewise, financial institutions use online forms and AI to decide whether to issue loans or credit. Because of this automation, companies may never meet applicants face-to-face, leaving the process vulnerable to exploitation.
In a twist on traditional whaling attacks (which target senior executives), fraudsters now use AI-generated deepfakes to impersonate Chief Financial Officers (CFOs) and issue convincing fraudulent requests.
These instances highlight the need for organisations to remain vigilant, build robust screening systems, and provide extensive stakeholder training.
Unethical business practices
AI can deliver significant commercial advantages through more sophisticated online dynamic pricing. With 94% of shoppers comparing prices online, algorithms can leverage this data to offer personalised pricing based on spending habits. The concern is that businesses may use these algorithms for deceptive pricing, gauging each consumer's willingness to pay rather than offering fair prices.
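To make the concern concrete, the sketch below shows, in deliberately simplified form, how a personalised pricing algorithm might work. The function, profile fields, and weightings are hypothetical assumptions for illustration, not a real pricing engine.

```python
from dataclasses import dataclass

# Hypothetical sketch of personalised dynamic pricing (illustrative only).
# The point is that the price is driven by an estimate of what each shopper
# will tolerate, not by cost or value.

@dataclass
class ShopperProfile:
    base_price: float        # list price shown to an anonymous visitor
    avg_order_value: float   # historical spend, a proxy for willingness to pay
    price_checks: int        # how often the shopper compares prices elsewhere

def personalised_price(profile: ShopperProfile) -> float:
    """Nudge the price up for shoppers who historically spend more,
    and down for shoppers who actively comparison-shop."""
    markup = min(profile.avg_order_value / 1000, 0.15)  # uplift capped at 15%
    discount = min(profile.price_checks * 0.01, 0.10)   # discount capped at 10%
    return round(profile.base_price * (1 + markup - discount), 2)

# Two shoppers are quoted different prices for the same item:
print(personalised_price(ShopperProfile(100.0, 900.0, 0)))  # 115.0
print(personalised_price(ShopperProfile(100.0, 100.0, 8)))  # 102.0
```

Because the inputs are behavioural rather than cost-based, identical goods attract different prices, which is precisely the fairness concern.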
This manipulation is not restricted to pricing. Companies may also deploy sophisticated algorithms to forecast and steer consumer behaviour, potentially crossing ethical boundaries by exploiting individual preferences or vulnerabilities.
Insider and third-party risks
Insider threats introduce additional complexities: discontented employees with access to AI algorithms could disrupt operations or jeopardise sensitive data. By feeding confidential information into generative AI systems, employees can expose organisational secrets to possible hacking, creating substantial security concerns for businesses and clients. In early 2023, a global electronics company prohibited employees from using AI after discovering that an employee had disclosed sensitive internal information while using AI for work-related purposes.
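One practical safeguard is to screen prompts before they leave the organisation. The following is a minimal, hypothetical sketch of that idea using simple regular-expression patterns; the pattern names and rules are assumptions for illustration, and production data loss prevention tools are far more sophisticated.

```python
import re

# Hypothetical patterns for data an organisation may not want sent to an
# external generative AI service (illustrative, not exhaustive).
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return whether the prompt looks safe to send, plus any matches found."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

ok, findings = screen_prompt("Summarise this CONFIDENTIAL design document.")
if not ok:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
```

Screening of this kind does not replace policy and training, but it adds a technical checkpoint between employees and external AI services.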
Many companies rely on third-party providers for critical data and services. This partnership, however, can pose risks when the third party's biases and risk tolerance differ from the company's expectations or requirements. Such misalignment can lead to vulnerabilities, including hasty development, lax security measures, and greater exposure to manipulation.
Risk defence
Security rests on three principles: confidentiality, integrity, and availability; every protective measure exists to uphold them. As attack techniques grow more capable of targeting these principles, defences must evolve in step. Companies can reduce their risks through:
- Comprehensive defence strategy
Businesses must ensure that AI systems are thoroughly vetted and monitored, analyse the credibility of third-party contributions, and safeguard against a wide range of potential risks, including those posed by unscrupulous users and compromised algorithms.
- Responsible governance and disclosure
Balanced governance is essential to tackle cybersecurity threats and ethical concerns. A lack of proactive measures can cause significant reputational damage and undermine trust across industries.
- Responsible AI practices
From developers to enterprises, ethical AI practices must be integrated at every stage of the value chain. Examples of practices include human-centred design, data privacy and security, transparency, and accountability.
- Regulatory compliance
Stay informed of evolving AI and cybersecurity regulations, standards, and frameworks, such as ISO/IEC 27001 or the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Demonstrating adherence to these is crucial to averting legal and regulatory risks.
The impact of AI is transformative and already apparent. However, harnessing its full potential requires a determined effort to balance technological advancement with ethical responsibility. By proactively developing robust defences and committing to industry-wide ethical AI practices, businesses and societies can leverage AI while limiting its inherent hazards.
Copyright 2024 BSI. All Rights Reserved.