The ITSL is working on a number of research projects within the scope of its research areas.
Ongoing research projects:
Comprehensible Algorithms: A Legal Framework for the Use of Artificial Intelligence
Project duration: July 1, 2021 to June 30, 2024
Funding Organisation: Mercator Foundation Switzerland
• Prof. Dr Florent Thouvenin, Professor of Information and Communication Law, Chair of the ITSL Steering Committee and Director of the Digital Society Initiative (DSI) at the University of Zurich. In his research, Florent Thouvenin has been examining the challenges digitalisation poses for the law. More recently, his focus has been on data protection law and the legal treatment of AI systems.
• Prof. Dr Nadja Braun Binder, MBA, Professor of Public Law at the University of Basel. Nadja Braun Binder has been working on legal issues relating to the digitalisation of government and administration for almost 20 years. In recent years, her research has focused on the use of AI in public administration, legal issues of digital democracy and the influence of social media on the formation of political opinion.
• Dr Stephanie Volz, scientific director of the ITSL and lecturer at the Faculty of Law at the University of Zurich. She has been involved in issues related to digitalisation for many years, first in research, then in practice and now again in research. Her work in this area focuses on media, data protection and competition law.
• Dr Franziska Oehmer, scientific director of the ITSL as well as senior assistant and lecturer at the fög – Research Centre Public Sphere and Society at the University of Zurich. Her teaching and research cover platform governance, media literacy and the mediatisation of law.
• Fabienne Graf, academic associate at the ITSL. A doctoral candidate at the University of Lucerne and the Humboldt University of Berlin, she researches technicity in software patent law and related epistemological problems. Her publications cover topics in the law of new technologies, intellectual property law and the philosophy of law.
• Liliane Obrecht, currently a student assistant to Nadja Braun Binder. She co-authored a study on AI in public administration that was commissioned by the Canton of Zurich. She has written her Master's thesis on data protection and digitalisation in the context of the COVID-19 pandemic, and will soon take up doctoral studies at the Law Faculty of the University of Basel.
Regulating Artificial Intelligence (AI) raises diverse and complex questions. Core challenges include the lack of transparency and traceability of decisions made by AI systems, risks to privacy, the danger of discrimination and the risk of manipulation. Further uncertainty concerns liability for damage caused by AI systems.
The use of AI in administration as well as in the area of media and information intermediaries has particular relevance for society: In the context of administration, the state regularly acts in a sovereign capacity towards its citizens, and all citizens are affected by administrative action; sensitive areas such as social insurance are of particular concern. The traditional media and the new information intermediaries (social media, search engines, micro-blogging services, photo and video sharing platforms) have a great influence on the formation of public opinion through the selection and presentation of content. This becomes especially evident in the run-up to votes and elections. The phenomenon proves increasingly problematic because information intermediaries not only influence the formation of public opinion in general, but also control the perception of content at the level of individual citizens and consumers, thus affecting their thinking and actions.
This project's main objective is to create a comprehensive legal framework for the use of AI in Switzerland. To this end, the project develops and applies generally applicable legal provisions as well as selected sector-specific regulation for public administration and the media sector (journalistic media and information intermediaries): norms capable of addressing the central challenges of AI systems and preventing specific disadvantages for citizens. These new legal standards are presented in white papers and will be accessible to relevant parties in policy and administration, as well as to interested members of the public. The project also includes a number of measures promoting the subsequent entry of the novel norms into the legislative process.
Certain challenges can be met not only by enacting new legal norms, but also through adequate interpretation of existing legal norms. In addition to the development of new legal norms, novel interpretations and applications of existing law will be identified and presented in scientific publications. These publications are primarily aimed at expert academics, judges and administrative personnel, but may also be of interest to members of civil society.
The new legal norms and the white papers, together with the scientific publications on the interpretation and application of the existing legal norms, will form a comprehensive legal framework with which the central challenges arising from the use of AI systems can be addressed.
In addition to developing a generally applicable legal framework for AI in Switzerland, the project also analyses two areas in further depth: A first focus lies on the use of AI systems in public administration. Such use is subject to special rules, such as the constitutional prohibition of discrimination and fundamental procedural rights. With regard to AI applications, the right of citizens to be heard, their participation in administrative procedures and the right to a statement of reasons for state decisions raise specific questions. In this context, the prevention of discrimination and the fostering of transparency are particularly important. The legal regulation must adequately address these points without hindering the digitalisation of the administration and the use of AI.
A second focus concerns the media and information intermediaries, as the most important sources of information in democracies. Their particular impact on the formation of opinion and will is especially pronounced in connection with elections and votes. In order to take into account new patterns of media use, the project will not only develop standards for journalistic media such as newspapers, radio or TV news, but also cover information intermediaries. The latter comprise online services that make third-party content available to users, usually in a structured and searchable form. Information intermediaries include social media platforms such as Twitter and Facebook, as well as search engines, micro-blogging services, photo and video sharing platforms. They not only disseminate the content of their users ("user-generated content"), but also convey journalistic media contributions. Traditional media, in turn, are incorporating discussions originating from social media into their reporting. In view of this complex interaction of journalistic media and information intermediaries in opinion-forming, the project will examine whether special provisions for the use of AI systems are (also) required by Swiss law. If such a need is established, the project will examine what these provisions should look like.
A normative framework adequately addressing the numerous issues of AI cannot be established by legal scholars alone. The project therefore follows an interdisciplinary and collaborative approach. Two interdisciplinary workshops involving colleagues from computer science, ethics, sociology, psychology as well as communication and media sciences will also ensure that the development of legal norms is based on the latest scientific knowledge.
Comprehensive solutions that are also suitable for parliamentary implementation can only be developed with the involvement of the relevant stakeholders. They include representatives of civil society, developers of AI systems and their users (companies and authorities). With regard to the focus areas, people from the public administration and from the media and information intermediaries will be consulted. In the form of multi-stakeholder dialogues, these stakeholder groups will be involved in the project from its early stages. This way, their needs and experiences can be appropriately incorporated into the development of the legal standards and white papers.
Data Protection and Research
Driven by breakthroughs in data processing, great progress has been made in the field of data-intensive research in recent years. The data required for this type of research is regularly personal data within the meaning of data protection law. This is all the more true as the concept of personal data has been continuously expanded as a result of technical developments in the practice of judicial and administrative authorities. Innumerable research projects must therefore ensure that the requirements of data protection law are observed.
However, the concepts and approaches of current European and Swiss data protection law were developed long before the emergence of new, data-driven research approaches. Accordingly, these data protection laws lack a structured approach to the challenges of data-driven research, and it is sometimes assumed that the problems can be solved by simple means (anonymisation or publication of results in anonymised form). However, this is hardly the case today. For many researchers, this creates uncertainty as to which research work is permissible under which conditions. Although such questions can usually be answered in individual cases, the legal uncertainty remains considerable (even for specialised lawyers). In particular, it is largely unclear how the interests within the system of data protection law are to be balanced: the public interest in knowledge against the interests of the individuals concerned in the protection of their privacy and/or in "informational self-determination". This balancing of interests is also reflected in a conflict of fundamental rights, namely between the freedom of science (Art. 20 BV) and the protection of personal freedom and privacy (Art. 10 and 13 BV).
The legal uncertainty creates the danger that promising research projects will not be carried out or that delays or high costs will result. This could be largely avoided if the legal situation were sufficiently clear. In addition, there is the currently much-cited danger that Europe will fall behind the USA and China in research and development due to the strict requirements of data protection law.
Against this background, the ITSL is making the topic of "data protection and research" a research focus in the current year (and in part even beyond). The aim is to make significant contributions in Switzerland and beyond to the question of how data protection law can be interpreted and applied to make research largely possible - without jeopardizing the interests of the persons concerned that are worthy of protection.
Governance Mechanisms for Access and Use of Data in Public Health Crises
Joint research project of the Center for Information Technology, Society, and Law (ITSL) at the University of Zurich and the University of Geneva.
Access to data to inform decision making is of utmost importance in a public health crisis. As the current crisis shows, access to and effective use of relevant data are not always possible. The research project aims to identify existing barriers to the access and use of data in public health crises and to develop alternative governance mechanisms to facilitate access to and use of the data needed to meet such crises.
Cultural, legal and infrastructural barriers in particular were identified as the main forms of barriers. To overcome them, substantial progress will be necessary with regard to data literacy, interoperability and novel approaches to data protection law. While the focus of this project is on improving the access and use of data in public health crises, some of the fundamental shifts required also apply in "normal" times and are necessary to enable appropriate reactions to future health crises.
Governance of Disinformation in Digital Public Spheres
Joint Research Project of the Center for Information Technology, Society, and Law (ITSL) and the fög – Research Institute for the Public Sphere and Society at the University of Zurich.
Disinformation has gained importance in the context of digitalization. New communication channels, first and foremost social media and messenger services such as WhatsApp, Telegram or Signal, enable the exchange and dissemination of information to large audiences, including the intentional and unintentional spread of false news. In Switzerland, too, there is currently heightened concern about the problematic effects of disinformation.
Looking at the governance of disinformation, there is currently a patchwork of approaches, rules and instances. So far, the subject has been approached from either a communication and media studies perspective or a legal perspective, but the two have never been linked. The goal of our research project is to show where concrete gaps in regulation and enforcement exist. These findings form the basis for the formulation of governance options with regard to disinformation, which take into account state, organizational and individual measures and may serve as a basis for political and legal measures.
Privacy is a key factor for individual and social well-being. In the digital age, ubiquitous data processing practices by businesses and government agencies and the abundant digital traces we knowingly or unknowingly leave behind affect privacy in various ways with consequences for individuals and society. To ensure that the processing of digital traces ultimately benefits individuals and society, we launch a research project that rethinks privacy with a synergetic combination of four perspectives: Philosophy, communication studies, law, and technology.
This research project is structured in three parts. In the first part – Deconstructing Privacy – the project explores the definitions, ascriptions, perceptions, and concepts of privacy as well as existing mechanisms to protect it. Upon these findings, the second part – Reshaping Privacy – starts out on the presumption that the processing of digital traces can be both beneficial and harmful, and that current regulatory and technical attempts have their limitations to successfully fight the actual harms, thereby curtailing important benefits. We explore three (potential) harms that are particularly important: manipulation, discrimination, and chilling effects. By analyzing these harms we develop a better understanding of how they affect individuals, groups, and society at large. In the third part – Governing Privacy – all disciplines jointly devise governance arrangements that minimize the harms caused by the processing of digital traces while allowing the benefits to come to fruition. Based on a comparison of the governance recommendations for the three harms, we ultimately aim to draw up the foundations for a new governance framework for privacy in the digital age.
Completed research projects:
The ScanVan project team at EPFL has developed a new "spherical" camera and a dedicated vehicle to allow the 3D digitization of Swiss and European cities. While conventional cameras always have a limited field of view, the ScanVan camera is capable of perceiving in all directions simultaneously. Using photogrammetry, the images are used to generate a 3D point cloud of the captured area. Mounted on a small car, the system allows a city to be digitised simply by driving once along each of its streets. The project was funded by the Swiss National Science Foundation as part of the National Research Programme 75 "Big Data" (NRP 75).
The ITSL joined the research project in spring 2020 to analyze the potential privacy issues in the light of Swiss and European data protection law. This input led to the implementation of various measures to ensure privacy by design. The interface was designed to incorporate this aspect intrinsically into its operation. Algorithms were programmed to erase people and vehicles from the captured spherical images. Additionally, an annotation interface makes it easy for anyone to point out privacy issues and request the removal of the respective data.
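The project states that algorithms erase people and vehicles from the captured images, but does not describe the implementation. As a purely illustrative sketch (not the ScanVan code), once a detector has returned bounding boxes for people or vehicles, the corresponding image regions could be made unrecognisable by coarse pixelation, for example:

```python
import numpy as np

def mask_regions(image, boxes, block=8):
    """Return a copy of `image` with each bounding box (y0, x0, y1, x1)
    pixelated, so that people or vehicles inside it are unrecognisable.
    `boxes` is assumed to come from a separate detection step."""
    out = image.astype(np.float64).copy()
    for y0, x0, y1, x1 in boxes:
        region = out[y0:y1, x0:x1]
        # Replace each block-sized patch with its mean value (coarse pixelation).
        for y in range(0, region.shape[0], block):
            for x in range(0, region.shape[1], block):
                patch = region[y:y + block, x:x + block]
                patch[...] = patch.mean()
    return out

# Example: pixelate one hypothetical detection in a dummy 64x64 image.
img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
masked = mask_regions(img, [(16, 16, 32, 32)])
```

In practice such a step would follow an object-detection model and might use inpainting rather than pixelation, so that the erased region blends into the scene, as the project's description of "erasing" people and vehicles suggests.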
More information about this project is available on the ScanVan project website and in the following video:
Technological progress – especially in the field of artificial intelligence (AI) or machine learning – leads to decisions being made automatically in more and more areas of daily life. Because it is ultimately algorithms that produce a result on the basis of certain decision-relevant parameters, the literature uses the terms "automated" and "algorithmic" decisions interchangeably.
Policymakers have become aware of the automation of everyday life and the associated delegation of certain human decision-making processes to machines. Since May 2018, for example, the European General Data Protection Regulation (GDPR) addresses this phenomenon in different provisions and recitals. Although such legal approaches exist, research on how policymakers should regulate issues surrounding automated decision-making is still in its infancy.
In this research project funded by the Hasler Foundation, the ITSL examines the nature and characteristics of automated decision-making on the one hand, and the need for and design of regulation on the other. The combination of these two complementary parts provides answers to the central research question of how to deal with automated decision-making from a regulatory point of view.
These questions were discussed in public on 13 November 2019 at an event hosted by the ITSL. Individual contributions to this event were published in extended form in the Schweizerische Zeitschrift für Wirtschafts- und Finanzmarktrecht in early 2020. From 12 to 14 September 2019, the ITSL held an international expert workshop with scientists in the fields of law, communication studies and computer science. The workshop was sponsored by the Swiss National Science Foundation (SNSF) and its main insights are compiled in a Workshop Report (PDF, 440 KB).
Artificial Intelligence Briefing (Foundation for Technology Assessment)
On behalf of the Foundation for Technology Assessment (TA-Swiss), the ITSL, in conjunction with other researchers from Switzerland and Austria, conducted an interdisciplinary study on the risks and opportunities of Artificial Intelligence (AI). The research group consisted of researchers from the fields of informatics, business administration, economics, educational sciences, communication science, law and ethics. The study's main objective was to enable policy makers to make informed decisions with regard to AI. The interdisciplinary study evaluates the impact AI has on four main areas: the world of work, education, consumption and administration. An emphasis is put on deep learning algorithms; however, other forms of AI are also subject to investigation. The study was presented in April at a press conference under the title «Wenn Algorithmen für uns entscheiden: Chancen und Risiken der künstlichen Intelligenz» ("When algorithms decide for us: opportunities and risks of artificial intelligence") and is available here.
Foundations of the Right to Privacy
Nowadays, information and communication technologies allow mass collection and analysis of personal data. Against this background, the right to privacy is of major scholarly interest. But despite the fact that this right’s origins date back to the Universal Declaration of Human Rights (UDHR) in 1948, its exact scope remains relatively blurry. In this research project, we examine the foundations of the right to privacy and analyse how modern data protection law draws on this right.
Between Solidarity and Personalisation - Big Data in the Insurance Industry
The insurance industry has a genuine interest in Big Data applications. By applying profiling or predictive analytics techniques and by using quantified-self applications, the specific risks of an insured person can be assessed more precisely. At the same time, the moral foundation of any insurance system is solidarity; individual risks should be distributed among all insured persons. These conflicting goals are to some extent paradigmatic of the challenges brought about by digitalization. The ITSL participated in an interdisciplinary project funded by the National Research Programme (NRP 75 – Big Data) which analysed this conflict and brought together researchers from ethics, economics and law. The results of this research may be found here.
The increasing number of data-driven business models as well as the growing importance and value of data have spurred the question whether data belong to someone, and if so, to whom. While the topic has already entered the political sphere, a number of key questions remain unanswered or were, to date, only touched upon briefly. A one-year research project funded by the Hasler Foundation addresses these fundamental questions regarding such a potential exclusivity right on data: How can such a right be justified? What would be its scope and limitations? And how could it be implemented?
These questions were discussed in public on 29 March 2017 at an event hosted by ITSL. From 6 to 8 July 2017, ITSL held an international expert workshop with scientists in the field of law and computer science. The workshop was sponsored by the Swiss National Science Foundation (SNSF) and its main insights are compiled in a Workshop Summary (PDF, 223 KB).