Deepak Sahu committed
Commit 10b6366 · Parent(s): 2582bfd
readme update approach section
Files changed:
- .gitattributes (+1 −0)
- .resources/approach.png (+3 −0)
- .resources/approach.pptx (+3 −0)
- .resources/preview.png (+3 −0)
- README.md (+34 −6)
.gitattributes
CHANGED
@@ -1 +1,2 @@
 app_cache/* filter=lfs diff=lfs merge=lfs -text
+.resources/* filter=lfs diff=lfs merge=lfs -text
.resources/approach.png
ADDED (binary image, stored with Git LFS)
.resources/approach.pptx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d25d31d5c9e3b6ef70c4cfda9ac6f631676e013696fb23a429656c26ad1e66c
+size 44976
.resources/preview.png
ADDED (binary image, stored with Git LFS)
README.md
CHANGED
@@ -15,27 +15,55 @@ A HyDE based approach for building recommendation engine.
Try it out: https://huggingface.co/spaces/LunaticMaestro/book-recommender



## Table of Content

> All images are my own work; their source PowerPoint is in the `.resources` folder of this repo.

- [Running Inference Locally](#libraries-execution)
- [10,000 feet Approach overview](#approach)
- Pipeline walkthrough in detail

  *Each part of the pipeline has its own script to execute, described in its respective section along with output screenshots.*

  - Training
    - [Step 1: Data Clean](#step-1-data-clean)

## Running Inference Locally

### Libraries

I used Google Colab with the following extra libraries.

```SH
pip install sentence-transformers datasets
```
### Running

#### Local System

```SH
python app.py
```

Access it at http://localhost:7860/.

#### Google Colab

Edit line 93 of `app.py` to `demo.launch(share=True)`, then run the following in a cell.

```SH
!python app.py
```
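Why that one-line edit matters (a sketch under assumptions, not the repo's code): Colab has no browser-reachable localhost, so Gradio needs `share=True` to expose a temporary public URL, while a local run just serves on port 7860. The helper name below is illustrative.

```python
# Illustrative only, not from the repo: choosing Gradio launch arguments.
# On Colab there is no browser-reachable localhost, so the app must create
# a public share link; locally the default server on port 7860 is enough.
def launch_kwargs(on_colab: bool) -> dict:
    """Pick keyword arguments for gradio's demo.launch() per environment."""
    if on_colab:
        return {"share": True}  # public *.gradio.live tunnel
    return {"server_name": "127.0.0.1", "server_port": 7860}

print(launch_kwargs(True))  # → {'share': True}
```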
## Approach

![approach](.resources/approach.png)

References:

- The core idea (HyDE): https://arxiv.org/abs/2212.10496
- https://github.com/aws-samples/content-based-item-recommender
- For future work, a much more complex system: https://github.com/HKUDS/LLMRec
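To make the HyDE idea concrete, here is a minimal toy sketch (all names, the stubbed "LLM", and the keyword "embeddings" are illustrative, not the repo's actual code): instead of embedding the raw query, we generate a hypothetical document for it, embed that, and rank books by cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fake_llm(query):
    """Stand-in for an LLM that writes a hypothetical blurb for the query."""
    return f"A book about {query}."

def fake_embed(text):
    """Stand-in for a sentence-embedding model (e.g. sentence-transformers)."""
    # Toy embedding: counts of a few hand-picked keywords.
    keywords = ["magic", "detective", "space", "book"]
    t = text.lower()
    return [t.count(k) for k in keywords]

# Precomputed "embeddings" for a toy catalogue.
books = {
    "A Wizard's Primer": fake_embed("magic school, young wizard learns magic"),
    "Baker Street Files": fake_embed("a detective solves cases in London"),
}

def recommend(query):
    # HyDE: embed the generated hypothetical document, not the raw query.
    hyde_doc = fake_llm(query)
    q = fake_embed(hyde_doc)
    return max(books, key=lambda title: cosine(q, books[title]))

print(recommend("magic and wizards"))  # → A Wizard's Primer
```

In the real pipeline the stubs would be an actual generator and a sentence-transformers encoder, with book embeddings precomputed during training.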
## Training Steps

**All file paths are set as constants at the beginning of each script, to make the paths easier to reuse during inference; hence they are not passed as CLI arguments.**
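The constants convention described above, sketched with hypothetical names (the real scripts define their own paths):

```python
# Hypothetical illustration of the "paths as constants" convention.
# Each script declares its inputs/outputs up top; inference code can import
# the same constants instead of receiving paths via CLI arguments.
CLEAN_BOOKS_CSV = "data/books_clean.csv"
EMBEDDINGS_NPY = "app_cache/embeddings.npy"

def output_paths() -> list[str]:
    """Return every artifact path this (toy) script produces."""
    return [CLEAN_BOOKS_CSV, EMBEDDINGS_NPY]

print(output_paths())  # → ['data/books_clean.csv', 'app_cache/embeddings.npy']
```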