Artificial Intelligence, Surveillance and Threats to Security: Towards a Decay of Democratic Values?
The prevention of serious crime and other threats to security is one of the fields that, in principle, could greatly benefit from artificial intelligence (AI) and “intelligent” algorithms. For example, AI can enhance surveillance techniques, a key tool for law enforcement and intelligence agencies (even) in “mature” democracies.
Facial recognition technologies, aerial drones carrying out surveillance of individuals, “black boxes” analysing communication metadata and other tools are definitely “appealing” to states that have to deal with “emergencies”, e.g. international terrorism (but also the current pandemic). Nonetheless, they make the already delicate balance between the effort to protect security (or, in the case of the pandemic, public health) and the need to guarantee individual rights even trickier. Indeed, AI may heavily interfere with rights such as privacy, data protection and free speech; at the same time, the risk of bias and discrimination lurks behind algorithms. This scenario may trigger citizens’ distrust towards governments and public institutions, alarmingly contributing to the “decay” of democratic values.
These are some of the major challenges, from a public law standpoint, arising from the growing use of AI to tackle security issues, which our Working Group intends to discuss.