Should We Start Certifying Cybersecurity for AI Solutions?
Roland Atoui
Today, machine-learning and deep-learning techniques permeate our daily lives under the banner of AI. AI technology is being advanced to counter sophisticated and destructive cyberattacks. But AI cybersecurity is an emerging field, and experts worry about the new threats that may emerge if vulnerabilities in AI technology are exposed. Without a certifying body regulating AI technology for cybersecurity use, will organizations find themselves more at risk and vulnerable to manipulation? How will certification impact AI cybersecurity?
On April 21, 2021, the European Commission (EC) published a proposal describing the “first-ever legal framework on AI.” Margrethe Vestager, Executive Vice President of the European Commission for A Europe Fit for the Digital Age, describes the landmark rules as a way for the EU to spearhead “the development of new global norms to make sure AI can be trusted.” Commissioner for Internal Market Thierry Breton adds that the new AI regulation “offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security.”
However, the potential for new risks to emerge cannot be ignored. The Commission proposes requirements for strengthening AI systems, particularly those that may be used to bypass or manipulate human behavior. Among the AI systems considered to pose the highest risk if manipulated are those in transportation infrastructure, education platforms, robot-assisted procedures, credit scoring, evidence evaluation, and document authentication.
According to Stefanie Lindstaedt, CEO of the Know-Center, a leading European research center for AI, “The potential of AI in Europe will only be exploited if the trustworthiness of data handling, as well as fair, reliable, and secure algorithms, can be demonstrated.”
Because AI security needs to be strengthened to mitigate risks and maintain accountability, experts are weighing in with recommendations, among them the Centre for European Policy Studies (CEPS) Task Force on AI and Cybersecurity.
To help promote AI as a powerful tool for countering cyberattacks, a few organizations have already invested in developing methodologies and tools that bring trust and value to customers and enable cybersecurity assessments demonstrating that AI solutions are secure and ethical to deploy.
Finally, compliance with standards and regulations is key to enabling trust in AI. As organizations and individuals alike continue to rely on AI more and more, demand for cybersecurity will surely grow. Standards and regulations are a way to build trust between AI and its users.