
Yuliya Shlychkova, Vice President, Government Affairs & Public Policy, Kaspersky
Dmitry Fonarev, Senior Public Affairs Manager, Kaspersky
In 2024, the European Commission introduced the AI Pact, a voluntary pledge designed to help organizations prepare for the implementation of the EU Artificial Intelligence Act (AI Act) before its legal obligations fully take effect. This initiative serves as a bridge between legislation and practice, encouraging early compliance, shared learning, and trust-building around the development and use of artificial intelligence in Europe and beyond.
At its core, the AI Pact invites companies, public authorities, and other organizations that develop or deploy AI systems to embrace key principles reflected in the AI Act, including risk management, transparency, human oversight, and accountability. Signatories are encouraged to begin aligning their internal governance structures, technical processes, and compliance frameworks with forthcoming regulatory requirements – particularly in relation to high-risk AI systems – rather than waiting for formal enforcement deadlines.
Beyond individual commitments, the AI Pact also functions as a collaborative platform. It creates space for dialogue among regulators, industry, civil society, and technical experts, enabling the exchange of best practices, identification of implementation challenges, and the development of practical guidance for responsible AI deployment.
As a global cybersecurity company integrating AI and machine learning (ML) into its products and services, Kaspersky joined the AI Pact in November 2024. The company has been using AI technologies for more than two decades and has established a responsible approach to AI development and deployment, reflected in its internal security policies for both AI operators and employees working with AI services. In addition, Kaspersky has developed ethical principles governing the use of AI systems in cybersecurity. In practice, this means building AI/ML systems that are interpretable to the greatest extent possible, maintaining transparency about how its solutions operate and use AI technologies, implementing safeguards to ensure the reliability of outcomes, and preserving human oversight as a core element of all AI/ML systems.
The decision to sign the AI Pact reflects Kaspersky’s broader commitment to promoting the prudent and responsible use of AI technologies, particularly in the cybersecurity domain, where trust, reliability, and safety are essential. In line with its voluntary pledge to report publicly on its progress after one year, Kaspersky is sharing a number of initiatives undertaken over the past year, many of which go beyond the Pact’s baseline commitments.
Raising awareness inside and outside the company
In alignment with one of the AI Pact’s core commitments – promoting AI literacy – Kaspersky launched an internal online course on AI and neural networks that is mandatory for all employees. The training provides practical knowledge on applying AI to cybersecurity tasks, highlighting both the opportunities and risks associated with the technology, as well as emerging challenges faced by professionals in the field.
The company also extends its training efforts to external stakeholders. In November 2024, Kaspersky delivered a comprehensive training program for educators and parents titled “How to Make AI in Education Work for the Good.” In July 2025, the company organized a summit for academic partners focused on AI and cybersecurity in the context of evolving threats.
Developing practical guidelines
While existing standards largely address foundation model development or high-level risk management, there remains a need for practical instruments that support implementation at the operational level. To address this gap, Kaspersky developed the Guidelines for Secure Development and Deployment of AI Systems, co-authored with leading academic experts and presented at the Internet Governance Forum 2024.
The guidelines are particularly relevant for organizations relying on third-party AI components. They cover key aspects of AI system development and operation, including cybersecurity awareness and training, threat modelling, risk assessment, supply chain and data security, as well as testing and validation processes. Available in six languages, the document provides practical recommendations for developers, administrators, and AI DevOps teams seeking to address technical and operational risks.
Enabling cybersecurity specialists
Capacity building remains a central element of Kaspersky’s efforts to contribute to a safer digital environment. Building on its experience in professional training and its expertise in applying AI technologies to threat detection and privacy protection, the company launched an online course on Large Language Model (LLM) security.
The course is designed primarily for AI and LLM architects, prompt engineers, and AI pentesters. It explores attack techniques specific to LLMs, introduces practical defensive approaches, and explains how structured assessment frameworks can be used to strengthen model security.
Enhancing AI literacy among SMBs
The rapid expansion of AI services has intensified risks for small and medium-sized businesses (SMBs), as cybercriminals increasingly use AI tools to automate phishing campaigns and disguise malicious software as legitimate AI applications. Given that SMBs often lack the financial and organizational resources available to larger enterprises, raising awareness of AI-related risks is critical to reducing exposure to cyber threats.
To support this objective, Kaspersky organized two workshops on cyber hygiene and AI for SMBs in 2025, held in Nigeria and Saudi Arabia. During these sessions, company experts presented the evolving threat landscape, outlined common attack vectors, discussed opportunities associated with AI adoption, and provided practical recommendations for risk mitigation.
Contributing to policymaking initiatives and sharing best practices
In 2025, Kaspersky participated in numerous public consultations conducted by national authorities, international organizations, and multistakeholder platforms. These contributions focused on promoting robust AI safety measures and highlighting the importance of enhanced cooperation among stakeholders. Notably, the company provided feedback on the European Commission’s Digital Omnibus proposal, which aims to simplify elements of the AI Act, emphasizing the need for regulatory clarity and proportionality, particularly in distinguishing between high-risk AI systems and those used for protective or resilience-enhancing purposes.
Kaspersky also contributed to the European Commission’s Repository of AI Literacy Practices, a platform designed to support knowledge exchange among AI providers, operators, and the public. The company is among 40 organizations whose initiatives were selected for inclusion, with particular recognition given to its ethical AI principles, guidelines for secure AI use, and the activities of the Kaspersky AI Technology Research Center and Kaspersky Academy.
***
As AI technologies continue to evolve and become more deeply embedded in digital infrastructure, voluntary initiatives such as the AI Pact demonstrate the value of proactive engagement between industry and regulators. However, responsible AI governance cannot rely solely on compliance frameworks. It requires sustained investment in safety, transparency, technical robustness, and human oversight, as well as continuous dialogue across jurisdictions and sectors. Advancing the safe and trustworthy use of AI will depend on strengthening international multistakeholder cooperation – bringing together governments, industry, academia, and civil society to share expertise, align practices, and collectively address emerging risks while enabling innovation to deliver societal and economic benefits.