The Computers, Privacy & Data Protection (CPDP) conference offers the cutting edge in legal, regulatory, academic and technological developments in privacy and data protection. CPDP gathers academics, lawyers, practitioners, policy-makers, computer scientists and civil society from all over the world to exchange ideas and discuss the latest trends and emerging issues. Each year, CPDP offers a compelling and diverse line-up of speakers and panels. This unique multidisciplinary formula has made CPDP one of the leading data protection and privacy conferences in Europe and around the world.
The 13th edition, CPDP2020, adopted “Data Protection and Artificial Intelligence” as its overarching theme, paving the way for a timely and thorough discussion of a broad range of ethical, legal and policy issues related to new technologies and data analytics. CPDP2020 offered more than 80 panels addressing current debates in the area of information technology, privacy and data protection, including topics such as AI and healthcare, autonomous vehicles, AI-based sentiment analysis, deepfakes, digital evidence, AI for crime prevention, GDPR compliance for SMEs and many more.
CPDP2020 welcomed over 1,000 attendees from 60 countries. The EVIDENCE2e-CODEX project was represented by Alexandra Tsvetkova, LIBRe Foundation, who disseminated its activities and results to participants and speakers.
Several panels covered topics related to electronic evidence:
This is a very important period for the making of legal regimes for cross-border access to digital evidence. Evidence of crimes used to exist locally; today e-evidence, essential in investigating crimes, is often stored in a different jurisdiction. The US CLOUD Act enables some foreign governments to access content data directly from US-based service providers, following the conclusion of bilateral agreements that partially lift the Stored Communications Act (SCA) blocking statute. The first such agreement was signed recently between the UK and the US. However, the CLOUD Act also raised the question of potential conflicts with the GDPR. The US and the EU have started negotiations in order to conclude an agreement and avoid conflicts of laws. Meanwhile, work on the E-Evidence Regulation is intensifying at the EU level, following the recent publication of the LIBE Committee Draft E-Evidence Report.
What are the main proposals and features of the LIBE Committee Draft E-Evidence Report and what is the way forward for E-Evidence?
When does the GDPR act as a “blocking statute,” to prohibit transfers of personal data in response to requests by non-EU law enforcement agencies?
What is the correct interpretation of Article 48 of the GDPR in this respect, and how does this article relate to the “lawful bases for transfer” under Articles 45 and 46 of the GDPR and to the “derogations” recognised by Article 49?
What are the main features of the US/UK Cloud Act Agreement and what could be its influence for the ongoing US/EU negotiations?
Originality, as a legal requirement, is present in many branches of EU Member States' civil law. At first sight, blockchain technologies, artificial intelligence and big data, due to their technical characteristics, seem incompatible with the originality requirement as it has been developed in the EU Member States' laws. As a result, the procedural admissibility of evidence produced and developed through these technologies has been questioned. Some Member States have taken legislative initiatives to overcome such issues. The panel intends to assess, through the expertise of panellists from different backgrounds, whether such incompatibility materialises in practice and, if so, which legal requirements policymakers should take into account to ensure that evidence produced with these technologies is admissible in court. Amongst others, the panel will consider the following questions:
Is the concept of originality still useful?
How can the integrity of documents be proven?
Do we need more digital legal forensics?
Can evidence produced by AI, Blockchain and other disruptive technologies be defined as original?
A video recording is available here.
AI can make predictions about where, when, and by whom crimes are likely to be committed. AI can also estimate how likely it is that a suspect, defendant or convict will flee or commit further crimes. Against the backdrop that AI supports predictive policing and predictive justice, what should the EU’s legal and policy responses be, in particular after the adoption of the Artificial Intelligence Ethics Guidelines? One approach is to count on the vitality of recently adopted data protection laws, in particular the Law Enforcement Directive (EU) 2016/680. Another approach would be to launch a regulatory reform process, either within or outside the classical data protection realm. This panel will look at the usefulness and reliability of AI for criminal justice and will critically assess the different regulatory avenues the new European Commission might consider.
How does the idea of “trustworthy AI” translate into the area of criminal law?
Should the use of predictive policing systems, or of AI in criminal law cases, be banned on ethical grounds?
Does the new European Commission plan to propose legislation in this area? If yes, what would be the objectives of such new laws? Should the actors leading such a reform be different from the ones that were leading the EU data protection reform?
Is it possible to develop predictive justice and predictive policing, and still respect the requirements of the GDPR and Directive (EU) 2016/680?