The 4th edition of the Data Science & Law Forum focuses on the ever-changing interplay between emerging technologies, democracy and the rule of law, and continues to debate and explore advances in research and the development of robust, trustworthy and ethically deployed AI.
The Data Science & Law Forum has been providing a space for collective reflection and learning on Responsible AI Governance since 2018. What started as a large annual conference has this year moved to smaller, more frequent events, keeping the conversation going on this timely matter.
Delegates can expect a programme of thought-provoking panels and conversations, ranging from advancing public trust in AI to exploring the unknowns of Large Language Models.
The Forum convenes on Thursday, May 19th, welcoming experts from the research community, civil society, policymaking, and legal practice.
| CEST | Session |
|---|---|
| 13:15 - 15:15 | Closed workshop (invitation only): Regulating the Use of Facial Recognition Technologies in Public Spaces in Europe |
| 14:45 - 15:30 | Start of the event: registration for in-person attendees (general admission) |
| 15:30 - 16:15 | Fireside chat: Tech in Uncertain Times |
| 16:15 - 16:30 | Break |
| 16:30 - 17:45 | Panel session: Operationalizing Responsible AI. Although Responsible AI regulations are unlikely to take effect before 2024, AI systems are already widely deployed across all sectors and segments of society. In the absence of a legislative framework, it is especially important for organizations to be proactive and implement principles of transparency, fairness, accountability, privacy and security in ways that build confidence and public trust in AI. This session will discuss the progress and challenges around developing and operationalizing policy frameworks to advance trust in AI within the public and private sectors. We will explore the gaps that need to be addressed and the lessons that can be drawn from the AI Act. |
| 17:45 - 18:00 | Break |
| 18:00 - 19:15 | Panel session: Deploying Large Language Models Safely and Lawfully. Large Language Models (LLMs) have become much larger and more powerful in recent years, achieving remarkable results across natural language processing (NLP) tasks such as text generation, translation, question answering, coding and more. Once the exclusive realm of research, LLMs are increasingly deployed in real-world applications. These developments pose important questions, which this panel intends to explore. |
| 19:15 | Closing remarks, followed by reception |
This year’s Data Science and Law Forum will take place in a hybrid format, with both virtual and in-person attendance. To ensure the safety of our speakers, attendees and team, while maximizing interaction amongst participants, in-person attendance at this event will be limited.
Demand for in-person registration is likely to be high. If you are interested in attending the conference in person, we recommend that you express interest in a place as soon as you register. Those attending in person will be invited to a drinks and appetisers reception after the final session to enjoy networking and conversation.
For those attending virtually, sessions will be live streamed to our website.
We will continue to monitor ongoing COVID-19 regulations, follow the recommendations regarding masks, social distancing, and sanitation set out by the venue and local authorities, and may revise the capacity limit based on the advice received.
Meet the speakers who'll be joining this event and sharing insights.
Assistant Professor of Private Law at the Dirpolis Institute, Scuola Superiore Sant’Anna, & Adjunct Professor of Private Law at the University of Pisa
Data Science Lead at Accenture BeLux & Responsible AI Lead for Benelux & France
Member of the European Parliament, President of the European Movement International
Associate Professor and Senior Research Fellow at the Oxford Internet Institute, University of Oxford & Defence Science and Technology Fellow at the Alan Turing Institute
Post-doctoral fellow at the Digital Democracies Institute at Simon Fraser University
Member of the European Parliament; Chair of the European Parliament Special Committee on Artificial Intelligence
Ghazi Ahamat is a Team Lead at the Centre for Data Ethics and Innovation (CDEI), an expert team within the UK government that works on enabling responsible innovation in AI and data-driven technologies. Ghazi leads the CDEI's work on AI Assurance and is currently developing an AI assurance roadmap, which sets out the CDEI’s view of the current AI assurance ecosystem in the UK. He was previously a co-author of the CDEI's Review into Bias in Algorithmic Decision Making.
Ghazi previously founded the Victorian Centre for Data Insights, a state government central analytics team in Australia, and as a consultant with BCG he advised governments in Australia and the Middle East on Data Science, Strategy and Transformation. He holds a Master's in Technology Policy (with Distinction) from the University of Cambridge, where he focused on the policy and strategic implications of AI. He also studied Economics and Pure Mathematics at the University of Melbourne.
Andrea Bertolini is Assistant Professor of Private Law at the Dirpolis Institute, Scuola Superiore Sant’Anna, and adjunct professor of private law at the University of Pisa. He is the director and scientific coordinator of the European Centre of Excellence on the Regulation of Robotics and AI (EURA, www.eura.satannapisa.it), funded by the European Commission through the Jean Monnet Action. His published research ranges from private law (contracts, torts, law of obligations) to the regulation of robotics and AI and technoethics, with a comparative and law-and-economics approach and a focus on alternative liability models, product safety regulation and certification, insurance and risk management, human-machine interaction, and user manipulation and deception. Dr. Bertolini regularly advises national, international, and European policymakers, through written studies and in-person hearings, on issues of robotics and AI regulation. He holds a joint degree from the Scuola Superiore Sant'Anna and the University of Pisa, a Ph.D. in private law from the Scuola Superiore Sant’Anna, and an LL.M. from Yale Law School. He is an attorney licensed to practice in Italy and New York.
Theodore Christakis is Professor of International and European Law at University Grenoble Alpes (France), Director of Research for Europe with the Cross-Border Data Forum, Senior Fellow with the Future of Privacy Forum, and a former Distinguished Visiting Fellow at the New York University Cybersecurity Centre. He holds the Chair on the Legal and Regulatory Implications of Artificial Intelligence at the Multidisciplinary Institute on AI (AI-Regulation.com). He has been a member of the French National Digital Council, and he currently serves as a member of the French National Committee on Digital Ethics as well as a member of the International Data Transfers Experts Council of the UK Government. As an international expert, he has advised governments, international organisations, and private companies on issues concerning international and European law, cybersecurity, artificial intelligence, and data protection law. He also has experience working as an external Data Protection Officer (GDPR compliance).
Natasha Crampton leads Microsoft’s Office of Responsible AI as the company’s first Chief Responsible AI Officer. The Office of Responsible AI puts Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach to responsible AI. The Office of Responsible AI also collaborates with stakeholders within and outside the company to shape new laws, norms, and standards to help ensure that the promise of AI technology is realized for the benefit of all.
Prior to this role, Natasha served as lead counsel to the Aether Committee, Microsoft’s advisory committee on responsible AI. Natasha also spent seven years in Microsoft’s Australian and New Zealand subsidiaries helping Microsoft’s highly regulated customers move to the cloud.
Prior to Microsoft, Natasha worked in law firms in Australia and New Zealand, specializing in copyright, privacy, and internet safety and security issues. Natasha graduated from the University of Auckland in New Zealand with a Bachelor of Laws (Honours) and a Bachelor of Commerce majoring in Information Systems.
Adeline Decuyper works at Accenture as a Data Science Manager, where she leads Responsible AI topics across Benelux and France. She has 10 years of experience and has held leading roles in data science projects across different industries. Prior to that, she worked in academia: she holds a PhD from UCLouvain on modelling human behavior using mobile phone data, and before that she studied Engineering in Applied Mathematics at the same university.
Casper oversees Microsoft’s government affairs and public policy work across the European continent. He leads a team of government affairs professionals tasked with strengthening relations with European Union institutions, NATO, European governments, and other key stakeholders, ensuring Microsoft is a constructive partner in supporting policymakers in achieving their goals. Outside of his work for Microsoft, Casper is a member of the European Council on Foreign Relations (ECFR) and serves on the Executive Board of Digital Europe and on the Advisory Boards of Bluetown and Think Tank Europe. A 2009 Marshall Memorial Fellow, Casper was a career diplomat and most recently served as Denmark’s (and the world’s) first Ambassador to the global tech industry. In 2018, he was named among the world’s 100 most influential people in digital government.
Gretchen Krueger leads the Deployment Planning team at OpenAI. Her team focuses on building safety evaluations and interventions, and on assessing the wider social impacts of OpenAI's deployments in areas such as disinformation and economic effects. Gretchen’s research interests span the various societal dimensions of highly general and capable AI systems and the development of technical and non-technical mechanisms to support responsible AI development and deployment. Prior to joining OpenAI, she worked at the AI Now Institute at New York University and for the City of New York.
Daniel is a Senior Policy Analyst at Access Now’s Brussels office. His work focuses on the impact of emerging technologies on digital rights, particularly artificial intelligence (AI), facial recognition, biometrics, and augmented and virtual reality. As a Mozilla Fellow, he developed aimyths.org, a website that gathers resources to tackle myths and misconceptions about AI. He holds a PhD in Philosophy from KU Leuven in Belgium and was previously a member of the Working Group on Philosophy of Technology at KU Leuven.
Mike Linksvayer is Head of Developer Policy at GitHub, leading the company’s efforts to advocate for developers globally, including by helping policymakers understand and leverage open innovation. Previously Mike led open source compliance at GitHub. Mike has worked in the “open” space for two decades, including as a volunteer director of Software Freedom Conservancy and previously as VP and CTO of Creative Commons.
Eva Maydell (Paunova) is a second-term Member of the European Parliament within the European People’s Party (EPP) Group, representing the Citizens for European Development of Bulgaria (GERB). She is a Member of the Committee on Industry, Research and Energy (ITRE) and the Committee on Economic and Monetary Affairs (ECON). She is also Vice-Chair of the Delegation for relations with Japan and a substitute member of the Delegation for relations with the US. Until her election, she was Executive Coordinator of the GERB-EPP Delegation and Senior Policy Advisor in the European Parliament. MEP Maydell’s key interests include innovation & digitalisation, investments, SMEs & entrepreneurship, and education.
Irene Solaiman is an AI safety and policy expert. She is currently building public policy and conducting social impact research at Hugging Face and advising responsible AI initiatives at the OECD and IEEE. Her research includes AI alignment, algorithmic fairness, and combating misuse and malicious use. Her recent speaking engagements include guest lectures at Harvard University and technical talks at large AI labs. Irene formerly built AI policy at Zillow Group. Before that, she led public policy at OpenAI, where she initiated and led bias and social impact research. Notably, her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She also advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center. Outside of work, Irene enjoys her ukulele, making bad puns, and mentoring underrepresented people in tech. She holds a B.A. in International Relations from the University of Maryland and a Master in Public Policy from the Harvard Kennedy School.
Mariarosaria Taddeo is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute, University of Oxford, and Defence Science and Technology Fellow at the Alan Turing Institute. Her work focuses mainly on the ethical analysis of artificial intelligence (AI), ethics of AI for national defence, cybersecurity, cyber conflicts, and ethics of digital innovation. Her area of expertise is digital ethics. Her research has been published in major journals such as Nature, Nature Machine Intelligence, Science, and Science Robotics. Since 2019, Professor Taddeo has led a Dstl (Defence Science and Technology Laboratory, Ministry of Defence UK) funded research project on the Ethics of AI in National Defence. She has received multiple awards, including the 2010 Simon Award for Outstanding Research in Computing and Philosophy and the 2016 World Technology Award for Ethics. In 2018, InspiringFifty named her among the 50 most inspiring Italian women working in technology, and ORBIT listed her among the top 100 women working on the ethics of AI in the world. She is one of the twelve 2020 "Outstanding Rising Talents" named by the Women's Forum for Economy and Society. Since 2016, Taddeo has served as editor-in-chief of Minds & Machines (SpringerNature) and of the Philosophical Studies Series (SpringerNature).
Zeerak Talat is a post-doctoral fellow at the Digital Democracies Institute at Simon Fraser University and co-chairs, with Angelina McMillan-Major and Pedro Ortiz Suarez, the data sourcing working group in the BigScience initiative. Zeerak's research focuses on the foundational limitations and the ethics of machine learning and NLP technologies as viewed through content moderation and social prediction tasks. Talat received a Ph.D. from the University of Sheffield, working on automated content moderation and on how the practice of automating content moderation using machine learning reveals the underlying political economy of machine learning, exposing issues of access, equality, and ethical practice. Talat also founded and runs the Workshop on Online Abuse and Harms, which focuses on the technical and social development of automated content moderation infrastructure. Talat is currently working on critical machine learning and the philosophy of machine learning, aiming to identify the specific underlying causes of why and how machine learning is currently a marginalizing technology.
Dragoș Tudorache is a Member of the European Parliament and Vice-President of the Renew Europe Group. He is the Chair of the Special Committee on Artificial Intelligence in the Digital Age (AIDA) and the LIBE rapporteur on the AI Act, and he sits on the Committee on Civil Liberties, Justice and Home Affairs (LIBE), the Committee on Foreign Affairs (AFET), the Subcommittee on Security and Defence (SEDE), and the European Parliament's Delegation for relations with the United States (D-US).
Dragos began his career in 1997 as a judge in Romania. Between 2000 and 2005, he built and led the legal departments at the Organization for Security and Co-operation in Europe (OSCE) and the UN missions in Kosovo. After working on justice and anticorruption at the European Commission Representation in Romania, supporting the country's EU accession, he joined the Commission as an official and subsequently took on leadership roles in EU institutions, managing a number of units and strategic projects such as the Schengen Information System, the Visa Information System, and the establishment of eu-LISA.
During the European migration crisis, Dragos was entrusted with leading the coordination and strategy unit in DG-Home, the European Commission Directorate-General for Migration and Home Affairs, until he joined the Romanian Government led by Dacian Ciolos. Between 2015 and 2017, he served as Head of the Prime Minister's Chancellery, Minister of Communications and for the Digital Society, and Minister of Interior. He was elected to the European Parliament in 2019. His current interests in the European Parliament include security and defence, artificial intelligence and new technologies, transatlantic issues, the Republic of Moldova, and internal affairs.
David van Weel is NATO’s Assistant Secretary General for Emerging Security Challenges. He is the Secretary General’s primary advisor on emerging security challenges and their implications for the security of the Alliance, and a member of the Secretary General’s senior management team. The Emerging Security Challenges Division, which he directs and manages, aims to provide a coordinated approach by the Alliance to all new and emerging challenges. These include cyber and hybrid threats, terrorism, emerging and disruptive technologies (such as AI and quantum computing), energy security challenges, including those posed by environmental changes, and data policy. The division also runs the Science for Peace and Security Programme, which promotes dialogue and practical cooperation between NATO and partner nations through scientific research, technological innovation and knowledge exchange. The division aims to provide innovative policy solutions for countering and defending the Alliance and Allies against these challenges, and to maintain the innovative and technological advantage of the Alliance in conjunction with partners, industry and other multilateral organisations.

Prior to joining NATO, David van Weel was the Foreign Policy and Defense Advisor to the Prime Minister of The Netherlands (2016-2020). This position followed a long career in The Netherlands Ministry of Defence, where he ended as Director for International Affairs and Operations/Policy Director (2014-2016), after serving as the Chief of Cabinet for the Minister of Defence and the Permanent Secretary (2012-2014) and as the senior policy officer for, among others, operations in Afghanistan and Libya, NATO, nuclear policy and disarmament, special operations, and the preparation of the Defence Budget.

David started his career in the Royal Netherlands Navy, where, upon completion of the Naval Academy (1994-1999), he served on different frigates, spent time in the British Royal Navy as an exchange officer, worked as a Staff Officer for Middle and Eastern European countries in the Defence Staff, and ended as a Primary Warfare Officer and Navigation Officer. David is married to Iris and has two daughters, Felice and Alix.