Lex Rosetta: Transfer of Predictive Models Across Languages, Jurisdictions, and Legal Domains

Jun 23, 2021

20:30

6th panel - Full paper - 30 minutes


Savelka, Jaromir; Westermann, Hannes; Benyekhlef, Karim; Alexander, Charlotte S.; Grant, Jayla C.; Amariles, David Restrepo; El-Hamdani, Rajaa; Meeus, Sebastien; Troussel, Aurore; Araszkiewicz, Michal; Ashley, Kevin D.; Ashley, Alexandra; Branting, Karl L.; Falduti, Mattia; Grabmair, Matthias; Harasta, Jakub; Novotna, Tereza; Tippett, Elizabeth; Johnson, Shiwanni.

Abstract: In this paper, we examine the use of multilingual sentence embeddings to transfer predictive models for functional segmentation of adjudicatory decisions across jurisdictions, legal systems (common and civil law), languages, and domains (i.e., contexts). Mechanisms for utilizing linguistic resources outside of their original context are very appealing in AI & Law, where differences between legal systems, languages, or traditions often block wider adoption of research outcomes. We analyze the use of Language-Agnostic Sentence Representations in GRU-based sequence labeling models that are transferable across languages. To investigate transfer between different contexts, we developed an annotation scheme for functional segmentation of adjudicatory decisions. We found that models generalize beyond the contexts on which they were trained (e.g., a model trained on administrative decisions from the US can be applied to criminal law decisions from Italy). Further, we found that training the models on multiple contexts increases robustness and improves overall performance when evaluating on previously unseen contexts. Finally, we found that pooling the training data from all the contexts enhances the models’ in-context performance.
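The pipeline the abstract describes (language-agnostic sentence embeddings fed to a GRU sequence labeler that assigns one functional label per sentence) can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the multilingual sentence embeddings, the label set and all dimensions are invented for the example, and the GRU is an untrained hand-rolled cell rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB = 8   # embedding size (illustrative; real multilingual embeddings are much larger)
HID = 6   # GRU hidden size (illustrative)
# Hypothetical functional types; the paper defines its own annotation scheme.
LABELS = ["Heading", "Background", "Analysis", "Outcome"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUTagger:
    """Minimal GRU sequence labeler over per-sentence embeddings."""

    def __init__(self, emb, hid, n_labels, rng):
        s = 0.1
        self.Wz = rng.normal(0, s, (hid, emb + hid))   # update gate weights
        self.Wr = rng.normal(0, s, (hid, emb + hid))   # reset gate weights
        self.Wh = rng.normal(0, s, (hid, emb + hid))   # candidate state weights
        self.Wo = rng.normal(0, s, (n_labels, hid))    # output projection

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                      # update gate
        r = sigmoid(self.Wr @ xh)                      # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

    def tag(self, sentence_embs):
        """Run the GRU over a document and emit one label per sentence."""
        h = np.zeros(self.Wz.shape[0])
        labels = []
        for x in sentence_embs:
            h = self.step(x, h)
            labels.append(LABELS[int(np.argmax(self.Wo @ h))])
        return labels

# A "decision" as a sequence of 5 sentence embeddings (random stand-ins).
doc = rng.normal(size=(5, EMB))
tagger = GRUTagger(EMB, HID, len(LABELS), rng)
predicted = tagger.tag(doc)  # one functional label per sentence
```

Because the embeddings are language-agnostic, the same tagger can in principle consume sentences from any context (jurisdiction, language, domain), which is what makes the cross-context transfer studied in the paper possible.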

Copyright 2021 ICAIL. All rights reserved.