julianrisch committed
Commit e30c9cf
1 Parent(s): 6afebd3

Update README.md

Files changed (1)
  1. README.md +55 -40
README.md CHANGED
@@ -5,13 +5,13 @@ datasets:
  license: cc-by-4.0
  ---
 
- # tinyroberta-squad2
 
  ## Overview
  **Language model:** tinyroberta-squad2
  **Language:** English
  **Training data:** The PILE
- **Code:**
  **Infrastructure**: 4x Tesla v100
 
  ## Hyperparameters
@@ -34,56 +34,71 @@ This model has not been distilled for any specific task. If you are interested i
 
  ## Usage
 
  ### In Transformers
  ```python
  from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
 
- model_name = "deepset/tinyroberta-squad2"
 
  model = AutoModelForQuestionAnswering.from_pretrained(model_name)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  ```
 
- ### In FARM
 
- ```python
- from farm.modeling.adaptive_model import AdaptiveModel
- from farm.modeling.tokenization import Tokenizer
- from farm.infer import Inferencer
 
- model_name = "deepset/tinyroberta-squad2"
- model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
- tokenizer = Tokenizer.load(model_name)
- ```
 
- ### In haystack
- For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
- ```python
- reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
- # or
- reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
- ```
 
- ## Authors
- Branden Chan: `branden.chan [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Tanay Soni: `tanay.soni [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
 
- ## About us
- ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
- We bring NLP to the industry via open source!
- Our focus: Industry specific language models & large scale QA systems.
-
- Some of our work:
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- - [FARM](https://github.com/deepset-ai/FARM)
- - [Haystack](https://github.com/deepset-ai/haystack/)
-
- Get in touch:
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
-
- By the way: [we're hiring!](http://www.deepset.ai/jobs)
 
  license: cc-by-4.0
  ---
 
+ # tinyroberta for Extractive QA
 
  ## Overview
  **Language model:** tinyroberta-squad2
  **Language:** English
  **Training data:** The PILE
+ **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
  **Infrastructure**: 4x Tesla v100
 
  ## Hyperparameters
 
  ## Usage
 
+ ### In Haystack
+ Haystack is an AI orchestration framework for building customizable, production-ready LLM applications. You can use this model in Haystack for extractive question answering on documents.
+ To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ # After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+ from haystack import Document
+ from haystack.components.readers import ExtractiveReader
+
+ docs = [
+     Document(content="Python is a popular programming language"),
+     Document(content="python ist eine beliebte Programmiersprache"),
+ ]
+
+ reader = ExtractiveReader(model="deepset/tinyroberta-6l-768d")
+ reader.warm_up()
+
+ question = "What is a popular programming language?"
+ result = reader.run(query=question, documents=docs)
+ # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
+ ```
+ For a complete example of an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
+
  ### In Transformers
  ```python
  from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
 
+ model_name = "deepset/tinyroberta-6l-768d"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
 
+ # b) Load model & tokenizer
  model = AutoModelForQuestionAnswering.from_pretrained(model_name)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  ```
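Under the hood, squad2-style question-answering heads emit one start logit and one end logit per token, and the predicted answer is the highest-scoring valid span (start before end, length capped). A toy illustration of that selection step with hypothetical logits, not real model output:

```python
def best_span(start_logits, end_logits, max_answer_len=15):
    """Return (start, end, score) for the highest-scoring span with start <= end."""
    best = (0, 0, float("-inf"))
    for s, s_logit in enumerate(start_logits):
        # Only consider ends at or after the start, within the length cap
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best

# Hypothetical per-token logits for ["Python", "is", "a", "popular", "programming", "language"];
# both peaks sit on token 0, so the selected answer span is "Python".
start = [5.0, 0.1, 0.2, 0.3, 0.1, 0.0]
end   = [4.0, 0.2, 0.1, 0.3, 0.2, 0.1]
print(best_span(start, end))  # (0, 0, 9.0)
```

Real pipelines add details on top of this (softmax normalization, the no-answer score that squad2 models produce, mapping token indices back to character offsets), but the span search itself is this simple.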
 
+ ## About us
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
+     </div>
+     <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+         <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
+     </div>
+ </div>
 
+ [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
 
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+ - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
 
+ ## Get in touch and join the Haystack community
 
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
 
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
 
+ [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
+
+ By the way: [we're hiring!](http://www.deepset.ai/jobs)