ricdomolm committed
Commit 44f81a6
1 Parent(s): 81a5a72

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ We report mean classification accuracy across the 260 legal classification tasks
 
 **What are the Lawma models useful for?** We recommend using the Lawma models only for legal classification tasks that are very similar to those in the Supreme Court and Court of Appeals databases. The main takeaway of our paper is that fine-tuning on specific tasks of interest leads to large improvements in performance. We therefore strongly recommend further fine-tuning on the tasks that you wish to use the model for. Relatively few examples -- i.e., dozens or hundreds -- may already lead to large gains in performance. Since Lawma was fine-tuned on a diverse set of legal classification tasks, fine-tuning Lawma may yield better results compared to fine-tuning more general instruction-tuned models.
 
-**What legal classification tasks do we consider?** We consider almost all of the variables of the [Supreme Court](http://scdb.wustl.edu/data.php) and [Songer Court of Appeals](http://www.songerproject.org/us-courts-of-appeals-databases.html) databases. Our reasons to study these legal classification tasks are both technical and substantive. From a technical machine learning perspective, these tasks provide highly non-trivial classification problems where
+**What legal classification tasks is Lawma fine-tuned on?** We consider almost all of the variables of the [Supreme Court](http://scdb.wustl.edu/data.php) and [Songer Court of Appeals](http://www.songerproject.org/us-courts-of-appeals-databases.html) databases. Our reasons to study these legal classification tasks are both technical and substantive. From a technical machine learning perspective, these tasks provide highly non-trivial classification problems where
 even the best models leave much room for improvement. From a substantive legal perspective, efficient
 solutions to such classification problems have rich and important applications in legal research.
 
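The README's recommendation to further fine-tune on dozens or hundreds of task-specific examples implies preparing supervised records for your own classification task. As a minimal sketch of that preparation step — the prompt template, field names, and labels below are illustrative assumptions, not the exact format used to train Lawma — one might format each case as a prompt/completion pair:

```python
# Sketch: turn a legal classification example into a prompt/completion
# record for supervised fine-tuning. Template and labels are illustrative
# assumptions, not Lawma's actual training format.

def format_example(opinion_text: str, question: str,
                   choices: list[str], answer: str) -> dict:
    """Build one supervised fine-tuning record with lettered choices."""
    # Label choices A., B., C., ... in order.
    lettered = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    prompt = (
        f"{opinion_text}\n\n"
        f"Question: {question}\n"
        f"{lettered}\n"
        f"Answer:"
    )
    letter = chr(65 + choices.index(answer))
    return {"prompt": prompt, "completion": f" {letter}. {answer}"}

record = format_example(
    opinion_text="The district court's judgment is affirmed. ...",
    question="What is the disposition of the case?",
    choices=["affirmed", "reversed", "vacated and remanded"],
    answer="affirmed",
)
print(record["completion"])  # → " A. affirmed"
```

Records in this shape can then be fed to any standard causal-LM fine-tuning loop, with the loss computed on the completion tokens.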