Crowd-sourced annotation platform for legal provisions

Create a crowd-sourced annotation platform for regulations that would allow researchers to annotate different regulations.

In order to turn legal norms into automatically processable regulations, we need a deeper understanding of the elements contained in specific norms: from obligations (e.g., “must do X”), to prohibitions (e.g., “must not do X”), to conditions (e.g., “if X then Y”), to open-textured terms, i.e., terms that require a specific interpretation by the person applying the norm (e.g., “appropriate measures”). A better understanding of what elements norms consist of would help researchers in the field of computational law determine which norms can more easily be encoded. At this stage, researchers in the field examine specific provisions case by case.
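To make these element types concrete, here is a minimal sketch, in Python, of how such norm elements could be represented in an annotation schema. The class names, field names, and example values are our own illustrative assumptions and not part of the challenge.

```python
from dataclasses import dataclass
from enum import Enum


class ElementType(Enum):
    """Kinds of norm elements named in the challenge (illustrative labels)."""
    OBLIGATION = "obligation"              # "must do X"
    PROHIBITION = "prohibition"            # "must not do X"
    CONDITION = "condition"                # "if X then Y"
    OPEN_TEXTURED_TERM = "open_textured"   # e.g. "appropriate measures"


@dataclass
class NormElementAnnotation:
    """One annotated span within a legal provision (all fields are hypothetical)."""
    provision_id: str        # identifier of the provision being annotated
    start: int               # character offset where the span begins
    end: int                 # character offset where the span ends
    element_type: ElementType
    annotator: str           # who produced the annotation


# Toy example: flagging "appropriate measures" as an open-textured term.
example = NormElementAnnotation(
    provision_id="provision_001",
    start=42,
    end=63,
    element_type=ElementType.OPEN_TEXTURED_TERM,
    annotator="annotator_01",
)
print(example.element_type.value)  # -> "open_textured"
```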

In this challenge, we propose to create a crowd-sourced annotation platform for regulations that would allow researchers in the field to post research questions and legal texts, and would let annotators work through the texts and annotate them as the researchers request. The community would benefit from a common platform in which the annotations are kept and made commonly available. Each researcher would be able to extend the annotation scheme and propose new elements to be annotated. As on other platforms, the annotators' results could be voted up or down depending on agreement, and research questions could go through a similar process to indicate whether the community sees a need for them. Such a crowd-sourced platform would also be in line with, and help enable, the goals set by the Swiss government to push for digitally ready legislation: only if we know how norms are structured and what elements they contain can we determine how to make provisions more digitally ready in the future.
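One way the pieces described above (legal texts, research questions, annotations, and up/down votes) could fit together is sketched below. This is only an assumed data model for illustration; every class, field, and function name is hypothetical rather than a specification of the platform.

```python
from dataclasses import dataclass


@dataclass
class LegalText:
    """A legal text uploaded by a researcher for annotation."""
    text_id: str
    body: str


@dataclass
class ResearchQuestion:
    """A research question posted on the platform, with the labels annotators may apply."""
    question_id: str
    description: str
    labels: list[str]   # extensible set of elements to be annotated
    votes: int = 0      # community interest, adjusted by up/down votes


@dataclass
class Annotation:
    """An annotator's contribution: a labelled span in a text, also up/down votable."""
    question_id: str
    text_id: str
    start: int
    end: int
    label: str
    annotator: str
    votes: int = 0


def vote(item, delta: int) -> None:
    """Register an up-vote (+1) or down-vote (-1) on a question or annotation."""
    item.votes += delta


# Toy usage: one question, one annotation, one up-vote.
q = ResearchQuestion("q1", "Which terms are open-textured?", ["open_textured"])
a = Annotation("q1", "t1", 10, 31, "open_textured", "annotator_01")
vote(a, +1)
print(a.votes)  # -> 1
```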

Example: We were looking into the open-textured terms that arise in a given piece of legislation. We were interested in this because measuring open texture can help determine the feasibility of encoding a regulation and where additional legal information is required to properly assess a legal issue or dispute. To look for open-textured terms, we conducted experiments with over 20 annotators whom we asked to state whether a term is open-textured. Doing this in an annotation tool that is openly accessible to everyone would have greatly facilitated the experiment by showing exactly where annotators focused (without having to ask them to mark each bit in an Excel file). As this is a generic problem, not only for open texture but for anyone who needs data reviewed by annotators, such a tool could benefit the legal AI community at large.
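As a rough illustration of how such a tool could aggregate annotator judgments on open texture, the sketch below computes, per term, the share of annotators who marked it as open-textured. The function and the toy data are invented for this example and are not results from the experiment described above.

```python
from collections import defaultdict


def open_texture_agreement(judgments):
    """judgments: iterable of (term, is_open_textured) pairs, one per annotator vote.

    Returns, for each term, the fraction of annotators who marked it as open-textured.
    """
    yes = defaultdict(int)
    total = defaultdict(int)
    for term, is_open in judgments:
        total[term] += 1
        if is_open:
            yes[term] += 1
    return {term: yes[term] / total[term] for term in total}


# Invented toy data: three annotators judging two terms.
toy_judgments = [
    ("appropriate measures", True),
    ("appropriate measures", True),
    ("appropriate measures", False),
    ("within 30 days", False),
    ("within 30 days", False),
    ("within 30 days", False),
]
print(open_texture_agreement(toy_judgments))
# -> {'appropriate measures': 0.666..., 'within 30 days': 0.0}
```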
