Program
9:30-10:20 Registration
10:20-10:30 Opening Remarks: Ken Satoh (Director, Center for Juris-Informatics)
10:30-11:30 AI Regulation in the EU: Georg Borges (University of Saarland, Germany)
Presentation Slides
(The copyright of the slides belongs to Prof. Georg Borges, so if you wish to use them, please obtain permission from Prof. Borges.)
Abstract:
Developers of AI models and systems outside the EU may be significantly affected by European Union regulations. This applies not only if they directly target the European market, but also if their AI models or systems are used by other parties to create products or services made available in the EU. The presentation will provide an overview of the challenges posed by the European legal framework for AI models and systems from the perspective of Japanese providers. The focus is on the European AI Act, data protection, liability for damage caused by AI products, and copyright infringements.
Short Bio:
Georg Borges is Professor of Civil Law, Legal Informatics, German and International Business Law and Legal Theory, and managing director of the Institute for Legal Informatics at Saarland University, Germany. From 2004 to 2014, he was Professor of Law at Ruhr-University Bochum. During that time, he also served as a Judge at the State Court of Appeals, Hamm Circuit. Since February 2023, he has been a distinguished visiting professor at the Faculty of Law, University of Johannesburg, and since September 2024 a visiting professor at Keio University, Tokyo.
As an expert on IT law and on law and informatics, Prof. Borges has authored several books and numerous articles in the field and is involved in many projects on IT and legal informatics. His current research focuses on the legal framework for AI and on data protection.
11:30-12:30 Code of Practice for General-Purpose AI in the EU: David Restrepo Amariles (HEC Paris, France)
Presentation Slides
(The copyright of the slides belongs to Prof. David Restrepo Amariles, so if you wish to use them, please obtain permission from Prof. Amariles.)
Abstract:
The EU’s General-Purpose AI Code of Practice is a central soft-law instrument translating the EU AI Act’s high-level obligations into operational expectations for developers and deployers of foundation models. It is intended to structure due diligence across the AI value chain through three core pillars—transparency, copyright-related compliance, and safety and security—thereby shaping standards of care, supervisory practice, and, indirectly, liability and market access as the AI Act moves toward enforcement. We discuss the implications of this Code for Japanese companies engaging with European markets and partners. We focus on where compliance demands are likely to be most salient in practice: documentation and information-sharing across the supply chain, rights-management and content provenance in training and deployment, and elevated governance requirements for advanced models that may be treated as presenting systemic risk.
Short Bio:
Professor David Restrepo Amariles is Associate Professor of AI Innovation in Highly Regulated Markets and the Worldline Chair Professor on the Future of Money at HEC Paris, with cross-appointments at the Hi! Paris Center on Artificial Intelligence and the HEC Paris Law Department. He co-leads the Center on the Future of Money and Digital Assets and the Smart Law Hub, serves as Chair of the Artificial Intelligence Special Interest Group (SIG) of the American Academy of Legal Studies in Business, and is a member of the Royal Academy of Science, Letters and Fine Arts of Belgium.
Professor Restrepo has contributed to technical standard-setting initiatives for artificial intelligence systems through committees at AFNOR, ISO, and CEN–CENELEC, and has provided consultative feedback on the EU AI Act and the Code of Practice. He directs the Transatlantic Dialogue on AI and Regulation. His research has been published in leading academic venues, including Nature Communications, Artificial Intelligence and Law, Lecture Notes in Computer Science, Marketing Letters, and Computer Law & Security Review, and has been featured in outlets such as Forbes, the Financial Times, The Wall Street Journal, American Banker, Les Echos, South Korea’s Maeil Business Newspaper, and L’Echo.
12:30-14:00 Lunch Break
14:00-15:00 AI Regulation in the United States: Merve Hickok (Center for AI and Digital Policy, USA)
Presentation Slides
(The copyright of the slides belongs to Prof. Merve Hickok, so if you wish to use them, please obtain permission from Prof. Hickok.)
Abstract:
Unlike countries with centralized governance structures, the U.S. system distributes authority among the three branches of the federal government and those of the state governments, each approaching AI according to its own priorities and needs. For businesses intending to operate in different states, or directly with the federal government, it is critical to understand this complex mix of policies and requirements. In her presentation, Merve Hickok will explain the differences between the sources of AI policy and current trends in legislation, litigation, and investigations involving AI systems. She will also provide a future outlook for overseas businesses to consider.
Short Bio:
Merve Hickok is the President and Policy Director at the Center for AI and Digital Policy (CAIDP), deeply engaged in global AI policy and regulatory work. She has provided testimony to the U.S. Congress, the Turkish National Assembly, the State of California, and the New York City and Detroit City councils.
In addition to her current role as Invited Advisor to the GPAI Tokyo Expert Support Center, Merve provides AI policy expertise to OECD.AI, UNESCO AI Experts Without Borders, the Council of Europe Committee on AI, the EU AI Office Code of Practice Working Group, and the Hiroshima AI Process Partners Group. She is also the founder of AIethicist.org and a globally renowned, award-winning expert on AI policy, ethics, and governance. Her contributions and perspective have been featured in The New York Times, Washington Post, Guardian, CNN, Forbes, Bloomberg, Wired, Scientific American, The Atlantic, Politico, Protocol, Vox, The Economist, and MIT Technology Review.
15:00-16:00 AI Regulation in Japan: Takashi Nakazaki (Anderson Mori & Tomotsune)
Presentation Slides
(The copyright of the slides belongs to Mr. Takashi Nakazaki, so if you wish to use them, please obtain permission from Mr. Nakazaki.)
Abstract:
As the business use of artificial intelligence expands beyond generative AI to include agentic AI and physical AI systems, the spectrum of risk has become increasingly complex. Organizations now face not only technical and societal risks—such as deepfakes and hallucinations—but also a growing range of legal risks, including intellectual property infringement, data protection compliance, product liability, cybersecurity, and sector-specific regulatory exposure. In this context, robust AI governance has become indispensable.
On the policy front, the Japanese government has been actively developing a regulatory and governance framework to facilitate responsible AI innovation. Following the enactment of the AI Act in 2025 and the formulation of the AI Basic Plan and AI Guidelines, further significant developments are anticipated in 2026. These include proposed amendments to the Act on the Protection of Personal Information (APPI), the development of an AI Principles Code (tentative name), and updates to the AI Business Guidelines.
This section will provide an overview of current and forthcoming AI regulatory trends under Japanese law. It will examine legal and governance challenges arising from business deployment of AI, including in regulated sectors such as finance and healthcare. The discussion will cover a broad range of topics—from data protection and intellectual property to risk allocation, compliance frameworks, and corporate governance—situating them within the broader Japanese legal and policy landscape.
Short Bio:
Takashi Nakazaki is a Partner at Anderson Mori & Tomotsune and a Member of the IAPP Asia Advisory Board. He also serves as an Auditor (Board Member) of the Japan Institute for Health Security, Japan’s national health security institution.
His practice focuses on privacy, data protection, and AI governance. He advises global and domestic clients on complex regulatory issues relating to cross-border data transfers, AI development and deployment, generative AI compliance, cybersecurity, and digital platform regulation. He is particularly experienced in navigating multi-jurisdictional privacy frameworks, including GDPR and emerging AI regulatory regimes, and regularly supports Japanese companies in EU and other overseas compliance matters.
Beyond private practice, Mr. Nakazaki has played an active role in shaping Japan’s AI governance landscape. He served as a member of the working group that drafted Japan’s AI Guidelines for Business and has participated in multiple expert committees organized by the Ministry of Economy, Trade and Industry (METI), the Ministry of Internal Affairs and Communications (MIC), and the Cabinet Office, contributing to national discussions on AI policy and digital regulation.
16:00-17:00 AI Regulation in the UK: Jessica Alder, Simon Deakin (University of Cambridge, UK)
Presentation Slides
(The copyright of the slides belongs to Ms. Jessica Alder and Prof. Simon Deakin, so if you wish to use them, please obtain permission from both Ms. Alder and Prof. Deakin.)
Abstract:
The UK government’s approach to artificial intelligence (AI) has been to promote its use and adoption within the economy and across society, while seeking to address emerging risks and build public trust. It has avoided enacting a ‘horizontal’ or generally applicable regulatory measure along the lines of the EU’s AI Act, choosing instead to retain existing sector-specific regulation. This strategy has the merit of being flexible and avoiding regulatory lock-in at a point when AI technologies are still developing. The downside is continuing uncertainty over the legality of practices which are at the core of the emerging AI business model, including web scraping. Data protection laws have recently been weakened, which puts the UK at odds with other European countries and opens up a potential regulatory gap with the EU. The implications of AI use for consumer safety and employment protection are still emerging, and a government-sponsored AI Bill seems distant. This incremental evolution is likely to continue.
Short Bios:
Jessica Alder is a researcher at the Centre for Business Research, University of Cambridge, working on the legal and economic dimensions of artificial intelligence and digital regulation, with particular interests in market governance, innovation, and comparative regulatory frameworks. Her research builds on an interdisciplinary background in law, economics, and quantitative analysis, including a forthcoming publication based on her dissertation on housing market inequality in Tokyo, alongside experience spanning academic research, policy analysis, and strategy consulting.
Simon Deakin is a professor of law and director of the Centre for Business Research at the University of Cambridge, and a visiting fellow and specially appointed professor at the Hitotsubashi Institute of Advanced Studies, Tokyo. He specialises in labour law, private law and company law, and conducts research in the fields of empirical legal studies and the economics of law. His books include The Law of the Labour Market (with Frank Wilkinson, 2005), Hedge Fund Activism in Japan (with John Buchanan and Dominic Chai, 2011), and Is Law Computable? (with Christopher Markou, 2021).
17:00-17:10 Closing Remarks: Mihoko Sumida (Professor, Institute for Advanced Study, Hitotsubashi University)