Deepak Sahu committed on
Commit 10b6366 · 1 Parent(s): 2582bfd

readme update approach section
.gitattributes CHANGED
@@ -1 +1,2 @@
 app_cache/* filter=lfs diff=lfs merge=lfs -text
+.resources/* filter=lfs diff=lfs merge=lfs -text
.resources/approach.png ADDED

Git LFS Details

  • SHA256: 553308cb453cf48db11102ea409d86591cabf5afae4146dec765d3bb2c9bf54f
  • Pointer size: 130 Bytes
  • Size of remote file: 57.2 kB
.resources/approach.pptx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d25d31d5c9e3b6ef70c4cfda9ac6f631676e013696fb23a429656c26ad1e66c
+size 44976
.resources/preview.png ADDED

Git LFS Details

  • SHA256: a3d938839b6f20377cb870e9ed9373a59e1069d349b0e67558008bea7f61d94f
  • Pointer size: 131 Bytes
  • Size of remote file: 125 kB
README.md CHANGED
@@ -15,27 +15,55 @@ A HyDE based approach for building recommendation engine.
 
 Try it out: https://huggingface.co/spaces/LunaticMaestro/book-recommender
 
-![image](https://github.com/user-attachments/assets/b0fbabb6-1218-43c8-ba0b-8b6329502a6c)
+![image](.resources/preview.png)
 
 ## Table of Content
 
+> All images are my actual work; the source PowerPoint for them is in the `.resources` folder of this repo.
+
 - [Running Inference Locally](#libraries-execution)
+- [10,000-feet Approach overview](#approach)
 - Pipeline walkthrough in detail
 
-*For each part of pipeline there is separate script which needs to be executed, mentione in respective section along with output screenshots.*
+*For each part of the pipeline there is a separate script which needs to be executed, mentioned in the respective section along with output screenshots.*
 - Training
   - [Step 1: Data Clean](#step-1-data-clean)
 
-## Local Execution
-
-## Libraries installed separately
-
-I used google colab with following libraries extra.
+## Running Inference Locally
+
+### Libraries
+I used Google Colab with the following extra libraries.
 
 ```SH
 pip install sentence-transformers datasets
 ```
 
+### Running
+
+#### Local System
+
+```SH
+python app.py
+```
+Access at http://localhost:7860/
+
+#### Google Colab
+
+Modify `app.py`: edit line 93 to `demo.launch(share=True)`, then run the following in a cell.
+
+```
+!python app.py
+```
+
+## Approach
+
+![image](.resources/approach.png)
+
+References:
+- This is the core idea: https://arxiv.org/abs/2212.10496
+- https://github.com/aws-samples/content-based-item-recommender
+- For future, a very complex work: https://github.com/HKUDS/LLMRec
+
 ## Training Steps
 
 **ALL file paths are set as CONST at the beginning of each script, to make it easier to use the paths while inferencing; hence not passed as CLI arguments.**
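The HyDE idea (https://arxiv.org/abs/2212.10496) referenced in the Approach section of this commit can be sketched roughly as follows. This is a toy illustration, not the repo's actual pipeline: `generate_hypothetical_blurb` is a hypothetical placeholder for the LLM step, and the bag-of-words `embed`/`cosine` functions stand in for the sentence-transformers encoder so the snippet runs with the standard library alone.

```python
# HyDE sketch: instead of embedding the user's short query directly,
# first generate a hypothetical "answer document" (here: a fake book
# blurb), embed THAT, and retrieve real books whose embeddings are
# closest to it.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (stand-in for a sentence encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_hypothetical_blurb(query: str) -> str:
    """Hypothetical placeholder for the LLM step of HyDE."""
    return f"a book about {query} full of adventure and magic"

# Tiny stand-in catalog of book descriptions.
CATALOG = {
    "The Hobbit": "a fantasy adventure about a hobbit dragons and magic",
    "Clean Code": "a handbook of agile software craftsmanship for programmers",
}

def recommend(query: str) -> str:
    """Embed the hypothetical blurb, return the closest catalog title."""
    hyde_vec = embed(generate_hypothetical_blurb(query))
    return max(CATALOG, key=lambda title: cosine(hyde_vec, embed(CATALOG[title])))

print(recommend("dragons"))  # → The Hobbit
```

The key design point is that the hypothetical blurb lives in the same "document space" as the catalog entries, so query-to-document similarity becomes document-to-document similarity, which dense encoders handle much better.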