I have the following two research questions:
<img width="581" alt="image" src="https://user-images.githubusercontent.com/56193069/154847506-88c4283d-8a35-4c53-81c1-83c193ecf739.png">
## But why? What is your rationale?
In short, the reason I have the two questions is to **kill two birds with one stone**, where the two birds are ***suggesting better biases*** and ***suggesting novel predictions***, and the stone is ***designing a human-inspired Idiomifier***.
### What do you mean by the first bird, *suggesting better biases*? (SLA → NLP)
Hence, I believe it is sensible to ask the first question:
1. (SLA → NLP) If we have both of the models **decrementally infer** the figurative meaning of idioms from their constituents, will this lead to increased performance on the Eng2Eng Idiomify task? If so, what would be the mathematical interpretation of such human-inspired success?
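For readers unfamiliar with the task: Eng2Eng Idiomify presumably maps literal English phrasing to an idiomatic English equivalent. A minimal sketch of the input–output format (the example pairs and the `idiomify` lookup below are illustrative assumptions for exposition only; the actual Idiomifier is a learned model, not a lookup table):

```python
# Toy sketch of the Eng2Eng Idiomify task format: rewrite a literal
# English phrase into its idiomatic English equivalent.
# The phrase-idiom pairs below are illustrative, not project data.
TOY_PAIRS = {
    "achieve two goals with one action": "kill two birds with one stone",
    "accidentally reveal a secret": "let the cat out of the bag",
}

def idiomify(literal: str) -> str:
    """Return the idiomatic rewrite if known; otherwise echo the input."""
    return TOY_PAIRS.get(literal, literal)

print(idiomify("achieve two goals with one action"))
```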
### What do you mean by the second bird, *suggesting novel predictions*? (NLP → SLA)
<img width="767" alt="image" src="https://user-images.githubusercontent.com/56193069/154848983-5dd69f34-0470-44ef-9e46-57dde8473306.png">
(and also, something like Maeura’s work on finding emergent properties from simulation.)
1. (NLP → SLA) What differences can we observe between L1 & L2 Idiomifiers in how they learn idioms? From this, **can we draw any novel predictions on how L1 & L2 learners learn idioms?**
- What should be the title (for the February workshop)?
  - Catching two birds with one stone: an attempt to catch Applied Linguistics and Natural Language Processing by designing a human-inspired Idiomifier
- your resume?
## Miscellaneous