One challenge of AI is that it impacts other areas of IT. Security is one of the biggest concerns.
Shadow usage – when an employee uses an AI tool that has not been approved by the IT department, potentially exposing data – is one risk that arises from the use of AI in an organisation. Another is ‘indirect prompt injection attacks’ where an instruction is hidden in a data set, a website, or some other information source. Here again, the danger is that it could lead to inadvertent exposure of business information, or direct users onto unsafe or malicious websites.

AI can also produce false or misleading information (so-called ‘hallucinations’), generate skewed preferences or omit important data. This can affect any decision making. If a flawed Generative AI model is used to assess job applicants, for example, that could lead to certain candidates being favoured over others for the wrong reasons.
Securing AI is a recognised challenge and unless the proper precautions have been taken in the first place, the more organisations come to rely on AI tools to speed up processes and save time, the greater the potential risks. According to Cisco's 2024 AI Readiness Index, only 29% of organisations using AI feel fully equipped to detect and prevent unauthorised tampering and Cisco has developed a solution that helps customers to protect AI applications. You can also find an excellent and detailed article by Microsoft on the subject of securing AI here.
Protecting customers from potential security issues when using AI is something we cover as part of the awareness phase in our Destination AI enablement programme.
Destination AI from TD SYNNEX is our comprehensive end-to-end programme designed to help you and your customers get ahead of the AI curve.

Read more of our latest Artificial Intelligence stories