LL.M. in Arbitration and Dispute Resolution
Speakers: Adrian Mak and Wilson Lui
Moderator: Shahla Ali, Faculty of Law
Date: 22 December 2025
Time: 3:00pm – 4:00pm
Venue: Room 824, 8/F, Cheng Yu Tung Tower, Centennial Campus, HKU
For registration, please click here.
About the Speakers:
Adrian Mak is a fellow at the Stanford Law AI Initiative, specializing in AI governance and law, data privacy, and international dispute resolution. His work includes co-editing “Privacy and Personal Data Protection Law in Asia” (Hart) and contributing to “The Cambridge Handbook of Private Law and Artificial Intelligence” (Cambridge). As a director of Anselmo Reyes KK (on Academic Leave), he has served as a tribunal secretary and counsel in over 35 international arbitrations. He is appointed as a Panel Arbitrator in the Republic of Uzbekistan and as a Specialist Mediator at the Singapore International Mediation Centre, and has served as a resource person to the Asian Development Bank. He holds a Stanford LL.M. and is admitted to practice in New York and Hong Kong.
Wilson Lui is a Research Fellow at the Centre for Private Law of the University of Hong Kong. He was a part-time Lecturer at the Faculty of Law of the University of Hong Kong from 2022 to 2025. He has published five books and more than 15 book chapters and journal articles. Wilson is currently pursuing his PhD at the University of Melbourne, and holds an MPhil from the University of Oxford and an LLM from the University of Cambridge. He is a Fellow of the Chartered Institute of Arbitrators, the Hong Kong Institute of Arbitrators and Advance HE.
About the workshop:
International arbitration stands at a critical threshold: the shift from passive Generative AI to “Agentic AI”—autonomous systems that can plan, investigate, and execute complex legal workflows. This lecture examines the friction between agentic AI’s autonomy and the legitimacy frameworks of the New York Convention.
Beyond the efficiency of automated drafting, we analyze how agentic workflows introduce novel, existential risks to the arbitral process, including goal hijacking and knowledge ecosystem poisoning. We propose a rigorous three-tier taxonomy (Assisted, Supported, Decided) to map these risks and present a Model Agentic AI Protocol designed to engineer enforceability through architectural and legal safeguards.