EzraAragon committed
Commit cab20eb
1 Parent(s): 91fb8c3

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED

@@ -9,7 +9,9 @@ tags:
 ---
 # DisorBERT
 
-In [DisorBERT](https://aclanthology.org/2023.acl-long.853/) we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it pay more attention to words related to mental disorders. <img style="float: right;" src="https://cdn-uploads.huggingface.co/production/uploads/64b946226b5ee8c388730ec1/uXCiWXUGrzhh6SE7ymBy_.png" width="150"/>
+<img style="float: left;" src="https://cdn-uploads.huggingface.co/production/uploads/64b946226b5ee8c388730ec1/uXCiWXUGrzhh6SE7ymBy_.png" width="150"/>
+
+In [DisorBERT](https://aclanthology.org/2023.acl-long.853/) we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and, therefore, to help it pay more attention to words related to mental disorders.
 
 We follow the standard fine-tuning of a masked language model from [Huggingface’s NLP Course](https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt).
 
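For context on the fine-tuning pointer in the README, below is a minimal sketch of the standard masked-language-model fine-tuning recipe from Huggingface’s NLP Course that the card cites. It is an illustration, not the authors’ training script: the base checkpoint (`bert-base-uncased`), the corpus file name, and all hyperparameters are assumptions, and the paper’s lexicon-guided masking is approximated here by the stock random-masking collator.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed base checkpoint; the actual starting model for DisorBERT is not stated in this commit.
base_checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(base_checkpoint)

# Hypothetical in-domain corpus (the paper adapts first to social media text, then to mental-health text).
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    # Tokenize raw lines; 128 tokens is an arbitrary illustrative cap.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard random masking at 15%; the paper instead biases masking toward lexicon terms.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="disorbert-mlm",          # illustrative output path
    per_device_train_batch_size=16,      # assumed batch size
    num_train_epochs=3,                  # assumed epoch count
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

To reproduce the guided masking described in the paper, the collator above would need to be swapped for one that raises the masking probability of tokens found in the mental-disorder lexicon.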