Guidelines are crucial for secure AI system development

The need for accurate data on which to train artificial intelligence (AI) to achieve optimal outcomes, predictions or tasks has never been more critical. Without it, the results could range from worthless output at best to harmful decisions and loss of life at worst, particularly where AI-driven diagnostic decisions are relied upon in the health sector. Robust cybersecurity measures to protect AI systems from malicious manipulation have thus become a necessary requirement to ensure the integrity and reliability of their operations.


In a recent article published in the Cyprus Mail, Michael Ioannou, Chief Information Officer at Elias Neocleous & Co LLC, discusses the recent guidelines for secure AI development established by the UK's National Cyber Security Centre (NCSC) together with the US's Cybersecurity and Infrastructure Security Agency (CISA), which a further 16 countries have agreed to implement. He explores how the guidelines are structured around four key areas of the AI system development life cycle, and touches on the adoption of these guidelines as a priority for all countries, including Cyprus with its burgeoning tech sector.


The full feature can be viewed in the Cyprus Mail here.


For any further queries, please contact Michael Ioannou at Elias Neocleous & Co LLC.