The use of AI and machine learning (ML) is increasing across many industries, including cybersecurity. While these technologies can bring many benefits, helping professionals tackle increasingly complex cybersecurity threats, several problems need to be addressed, including how AI systems are trained and how they can be put to unethical use.

As a result, Kaspersky has set out six ethical principles which it believes the industry should follow, and will be presenting them at the UN Internet Governance Forum in Kyoto, Japan. Here is a summary of them:

Transparency

Users should be able to know how and why security providers' AI systems make the decisions they do. To that end, the systems should be developed to be as interpretable as possible and include the safeguards needed to ensure they produce valid outcomes.
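One common interpretability technique is to expose the per-feature contributions behind a detection score, so an analyst can see what drove a verdict. Here is a minimal sketch of that idea for a simple linear scoring model; the feature names and weights are hypothetical, not drawn from any real product:

```python
def explain_score(weights: dict, features: dict) -> list:
    """For a simple linear detection score, report each feature's
    contribution so an analyst can see why a verdict was reached."""
    contributions = [(name, weights[name] * value) for name, value in features.items()]
    # Sort by magnitude so the most influential features appear first.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    return contributions

# Hypothetical model weights and one sample's features.
weights = {"entropy": 2.0, "num_imports": -0.1, "packed": 3.5}
features = {"entropy": 7.2, "num_imports": 12, "packed": 1}
for name, contribution in explain_score(weights, features):
    print(f"{name}: {contribution:+.2f}")
```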

Safety

AI developers should prioritise resilience and security, reducing the risk that manipulated input data sets push systems into making inappropriate decisions.
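A basic safeguard along these lines is to sanity-check training data against a trusted baseline before a model learns from it. The sketch below illustrates the idea with a simple per-feature range check; the bounds and values are hypothetical:

```python
import numpy as np

def filter_training_batch(features: np.ndarray,
                          lower: np.ndarray,
                          upper: np.ndarray) -> np.ndarray:
    """Drop samples whose features fall outside expected per-feature bounds.

    A crude defence against poisoned or manipulated training data:
    anything outside the ranges seen in trusted historical data is
    excluded rather than silently learned from.
    """
    in_range = np.all((features >= lower) & (features <= upper), axis=1)
    return features[in_range]

# Bounds derived from a trusted baseline data set (hypothetical values).
trusted = np.array([[0.1, 5.0], [0.3, 4.2], [0.2, 6.1]])
lower, upper = trusted.min(axis=0), trusted.max(axis=0)

incoming = np.array([[0.2, 5.5], [9.9, 100.0]])  # second row looks manipulated
print(filter_training_batch(incoming, lower, upper))  # only the in-range sample survives
```

A range check of this kind is only one layer; in practice it would sit alongside provenance checks on data sources and outlier detection over incoming batches.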

Human control

ML systems should be combined with human expertise to monitor their results and performance. These experts should be able to fine-tune the verdicts of automated systems and, where necessary, to adapt and modify the systems themselves to protect against highly sophisticated cyber threats.
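One straightforward way to keep a human in the loop is to apply automated verdicts only above a confidence threshold and escalate everything else to an analyst. A minimal sketch of that routing, with a hypothetical threshold and labels:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    sample_id: str
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # model's confidence in the label, 0.0-1.0

def route_verdict(v: Verdict, threshold: float = 0.9) -> str:
    """Apply the automated verdict only when the model is confident;
    otherwise escalate to a human analyst, who can override or
    fine-tune the result."""
    if v.confidence >= threshold:
        return f"auto: {v.label}"
    return "escalate: human review required"

print(route_verdict(Verdict("f3a9", "malicious", 0.97)))  # auto: malicious
print(route_verdict(Verdict("b210", "malicious", 0.62)))  # escalate: human review required
```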

Privacy

Because AI is trained on large data sets that can include personal information, an ethical approach that respects individual privacy must be taken when using the data. This can include limiting the types of data used, anonymising the data, and integrating data protection measures within organisations.
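A practical step in this direction is to drop direct identifiers entirely and pseudonymise quasi-identifiers with a keyed hash before records enter a training set. A minimal sketch of the idea; the field names, key, and configuration are hypothetical:

```python
import hashlib
import hmac

# Hypothetical configuration: which fields to discard and which to pseudonymise.
SECRET_KEY = b"rotate-me-regularly"   # held outside the training pipeline
DROP_FIELDS = {"name", "email"}       # direct identifiers: never ingested
HASH_FIELDS = {"ip_address"}          # quasi-identifiers: keyed hash only

def pseudonymise(record: dict) -> dict:
    """Strip direct identifiers and replace quasi-identifiers with keyed
    hashes before a record enters a training data set."""
    out = {}
    for key, value in record.items():
        if key in DROP_FIELDS:
            continue  # limit the types of data used: discard outright
        if key in HASH_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]
        else:
            out[key] = value
    return out

raw = {"name": "Jane Doe", "email": "jane@example.com",
       "ip_address": "203.0.113.7", "bytes_sent": 4096}
print(pseudonymise(raw))  # {'ip_address': '<keyed hash>', 'bytes_sent': 4096}
```

Keyed hashing is pseudonymisation rather than full anonymisation: records can still be linked if the key leaks, which is why the key should be stored and rotated outside the training pipeline.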

Developed for cybersecurity

AI in cybersecurity must be used solely for defensive purposes.

Open for dialogue

Collaboration within the industry is needed to overcome the challenges presented by the adoption and use of AI for security, and sharing best practices with all stakeholders should be encouraged.