Sasan Jafarnejad committed
Commit 1a8eb64 · 2 parents: 1f2d661 55d6099

Merge branch 'aberthe-main-patch-01221' into 'main'

Files changed (5)
  1. .DS_Store +0 -0
  2. README.md +25 -79
  3. car_assistant.ipynb +0 -0
  4. llama2.py +86 -0
  5. stttotts.py +177 -0
.DS_Store ADDED
Binary file (6.15 kB)
 
README.md CHANGED
@@ -1,92 +1,38 @@
- # Talking Car
-
- ## Getting started
-
- To make it easy for you to get started with GitLab, here's a list of recommended next steps.
-
- Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
-
- ## Add your files
-
- - [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- - [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
-
- ```
- cd existing_repo
- git remote add origin https://gitlab.uni.lu/360Lab/talking-car.git
- git branch -M main
- git push -uf origin main
- ```
-
- ## Integrate with your tools
-
- - [ ] [Set up project integrations](https://gitlab.uni.lu/360Lab/talking-car/-/settings/integrations)
-
- ## Collaborate with your team
-
- - [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- - [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- - [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- - [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- - [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
-
- ## Test and Deploy
-
- Use the built-in continuous integration in GitLab.
-
- - [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- - [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing(SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- - [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- - [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- - [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
-
- ***
-
- # Editing this README
-
- When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
-
- ## Suggestions for a good README
- Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
-
- ## Name
- Choose a self-explaining name for your project.

  ## Description
- Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
-
- ## Badges
- On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
-
- ## Visuals
- Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
-
- ## Installation
- Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
-
- ## Usage
- Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
-
- ## Support
- Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
-
- ## Roadmap
- If you have ideas for releases in the future, it is a good idea to list them in the README.
-
- ## Contributing
- State if you are open to contributions and what your requirements are for accepting them.
-
- For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
-
- You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.

  ## Authors and acknowledgment
- Show your appreciation to those who have contributed to the project.
-
- ## License
- For open source projects, say how it is licensed.
-
- ## Project status
- If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
+ # Talking Car
+
+ A speaking assistant designed for in-car use, leveraging the LLaMA 2 model to facilitate vocal interactions between the car and its users. This notebook provides the foundation for a speech-enabled interface that can understand spoken questions and respond verbally, enhancing the driving experience with intelligent assistance.

  ## Description
+
+ This project integrates speech-to-text and text-to-speech functionalities into a car's infotainment system, using the LLaMA 2 model to process and respond to vocal queries from users. It employs Gradio for the user interface and NexusRaven for function calling, and integrates various APIs to fetch real-time information, making it a comprehensive solution for a responsive and interactive car assistant.
+
+ ## Features
+
+ - Speech-to-Text and Text-to-Speech: Enables the car assistant to listen to spoken questions and respond audibly, providing a hands-free experience for drivers and passengers.
+ - Intelligent Function Calling with NexusRaven: Executes commands and retrieves information based on user queries, building on the LLaMA 2 model's capabilities (see the sketch after this list).
+ - Dynamic Model Integration: Incorporates multiple models for language recognition, speech processing, and text generation.
+ - User-Friendly Gradio Interface: An easy-to-use interface for testing and deploying the speaking assistant within the car's infotainment system.
+ - Real-Time Information Retrieval: Integrates with various APIs to provide up-to-date information on weather, routes, points of interest, and more.
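The NexusRaven cells themselves are in the notebook, which this diff does not render, so here is a minimal sketch of the function-calling pattern the list above refers to. The `get_weather` function and the prompt template are illustrative assumptions, not the notebook's actual definitions: NexusRaven-style prompting lists each callable Python function with its signature and docstring, then expects the model to answer with a `Call: ...` line that the host code parses and executes.

```python
# Hypothetical sketch of prompt-based function calling in the NexusRaven style.
# get_weather and the template below are illustrative assumptions.
import inspect

def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    return f"Sunny in {city}"  # placeholder; a real version would call a weather API

def build_prompt(user_query: str) -> str:
    # List the callable function (source includes signature and docstring),
    # then append the user query.
    fn_block = f"Function:\n{inspect.getsource(get_weather)}\n"
    return f"{fn_block}User Query: {user_query}<human_end>"

prompt = build_prompt("What is the weather like in Luxembourg?")
# The model is expected to reply with something like:
#   Call: get_weather(city='Luxembourg')
# which the notebook would then parse and execute.
print(prompt)
```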
 
+ ## Requirements
+
+ - Gradio for creating interactive interfaces
+ - Hugging Face Transformers and additional ML models for speech and language processing
+ - NexusRaven for complex function execution
+
+ All required libraries and packages are installed directly inside the notebook.
+
+ ## Installation
+
+ To set up the speaking assistant in your car's system, follow these steps:
+
+ 1. Run all the cells up to the "Interfaces (text and audio)" section.
+ 2. Choose which of the interfaces to run: audio-to-audio or text-to-text.
+
+ ## Usage
+
+ 1. Model Setup: Begin by loading the models for speech recognition, language processing, and text-to-speech conversion, as detailed in the "Models loads" section.
+ 2. Function Definition: Customize the assistant's responses and capabilities by defining functions in the "Function calling with NexusRaven" section.
+ 3. Interface Configuration: Choose the Gradio interface that suits your in-car system, following the setup instructions in the "Interfaces (text and audio)" section; a minimal sketch is shown below.
+ 4. Activation: Execute one of the interfaces to start the speaking assistant, enabling vocal interactions within the car.

  ## Authors and acknowledgment
+
+ Sasan Jafarnejad
+ Abigail Berthe--Pardo
car_assistant.ipynb ADDED
The diff for this file is too large to render.
 
llama2.py ADDED
@@ -0,0 +1,86 @@
+ # -*- coding: utf-8 -*-
+ """llama2
+
+ Automatically generated by Colaboratory.
+
+ Original file is located at
+ https://colab.research.google.com/drive/15UK6iHd1y0pMMQc-DbZIYhSteoTCUsMH
+ """
+
+ # install dependencies, then restart the runtime
+ !pip install accelerate
+ !pip install bitsandbytes
+ !pip install optimum
+ !pip install auto-gptq
+ !pip install transformers huggingface_hub
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+ from huggingface_hub import notebook_login
+
+ notebook_login()
+
+ mn = 'stabilityai/StableBeluga-7B'
+ #mn = "TheBloke/Llama-2-7b-Chat-GPTQ"
+
+ model = AutoModelForCausalLM.from_pretrained(mn, device_map=0, load_in_8bit=True)
+ #model = AutoModelForCausalLM.from_pretrained(mn, device_map=0, torch_dtype=torch.float16)
+
+ sb_sys = "### System:\nYou are an AI driving assistant in my car that follows instructions extremely well. Help as much as you can.\n\n"
+
+ tokr = AutoTokenizer.from_pretrained(mn)
+
+ def gen(p, maxlen=15, sample=True):
+     # tokenize the prompt, generate on the GPU, and decode back to text
+     toks = tokr(p, return_tensors="pt")
+     res = model.generate(**toks.to("cuda"), max_new_tokens=maxlen, do_sample=sample).to('cpu')
+     return tokr.batch_decode(res)
+
+ # build a prompt in the specific format required by the fine-tuned Stable Beluga model
+ def mk_prompt(user, syst=sb_sys): return f"{syst}### User: {user}\n\n### Assistant:\n"
+
+ complete_answer = ''
+
+ # attempt to get the user's location from the public IP
+ import requests
+
+ response = requests.get("http://ip-api.com/json/")
+ data = response.json()
+ print(data['city'], data['lat'], data['lon'])
+ city = data['city']
+ lat = data['lat']
+ lon = data['lon']
+
+ import re
+ model_answer = ''
+ general_context = f'I am in my car in {city}, latitude {lat}, longitude {lon}, I can move with my car to reach a destination'
+ # complete_answer below is the repr of a list of strings, so newlines appear
+ # as the two characters "\n"; the pattern matches that literal backslash-n
+ pattern = r"Assistant:\\n(.*?)</s>"
+
+ ques = "I hate pizzas"
+
+ ques_ctx = f"""Answer the question with the help of the provided context.
+
+ ## Context
+
+ {general_context} .
+
+ ## Question
+
+ {ques}"""
+
+ complete_answer = str(gen(mk_prompt(ques_ctx), 150))
+
+ match = re.search(pattern, complete_answer, re.DOTALL)
+
+ if match:
+     # extract the assistant's reply
+     model_answer = match.group(1)
+ else:
+     model_answer = "There has been an error with the generated response."
+
+ general_context += model_answer
+ print(model_answer)
+
+ print(complete_answer)
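The repr-plus-regex extraction above is fragile; a simpler sketch, assuming the same Stable Beluga markers (`### Assistant:\n` and `</s>`), operates on the first decoded string from `gen(...)` directly:

```python
# Sketch: extract the reply from a decoded string such as gen(...)[0],
# assuming the Stable Beluga prompt markers used above (Python 3.9+).
def extract_answer(decoded: str) -> str:
    _, _, tail = decoded.partition("### Assistant:\n")
    return tail.removesuffix("</s>").strip()

# example with a string shaped like the model's decoded output
sample = "### User: I hate pizzas\n\n### Assistant:\nNoted, I will avoid pizza stops.</s>"
print(extract_answer(sample))  # -> Noted, I will avoid pizza stops.
```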
stttotts.py ADDED
@@ -0,0 +1,177 @@
+ # -*- coding: utf-8 -*-
+ """sttToTts.ipynb
+
+ Automatically generated by Colaboratory.
+
+ Original file is located at
+ https://colab.research.google.com/drive/15QqRKFSwfhRdnaj5-R1z6xFfeEOOta38
+ """
+
+ # text-to-speech and speech-to-text
+ !pip install TTS
+ !pip install transformers
+
+ # text to speech
+ from TTS.api import TTS
+ tts = TTS("tts_models/multilingual/multi-dataset/your_tts", cs_api_model="TTS.cs_api.CS_API", gpu=True)
+
+ # imports for voice recording in Colab
+ from IPython.display import Javascript
+ from google.colab import output
+ from base64 import b64decode
+
+ # to record sound, found at https://gist.github.com/korakot/c21c3476c024ad6d56d5f48b0bca92be
+ RECORD = """
+ const sleep = time => new Promise(resolve => setTimeout(resolve, time))
+ const b2text = blob => new Promise(resolve => {
+   const reader = new FileReader()
+   reader.onloadend = e => resolve(e.srcElement.result)
+   reader.readAsDataURL(blob)
+ })
+ var record = time => new Promise(async resolve => {
+   stream = await navigator.mediaDevices.getUserMedia({ audio: true })
+   recorder = new MediaRecorder(stream)
+   chunks = []
+   recorder.ondataavailable = e => chunks.push(e.data)
+   recorder.start()
+   await sleep(time)
+   recorder.onstop = async () => {
+     blob = new Blob(chunks)
+     text = await b2text(blob)
+     resolve(text)
+   }
+   recorder.stop()
+ })
+ """
+
+ def record(name, sec):
+     display(Javascript(RECORD))
+     s = output.eval_js('record(%d)' % (sec * 1000))
+     b = b64decode(s.split(',')[1])
+     with open(f'{name}.webm', 'wb') as f:
+         f.write(b)
+     return f'{name}.webm'
+
+ # record the audio that is going to be transcribed
+ record('audio', sec=10)
+
+ # speech-to-text on an audio file at a given path
+ from transformers import WhisperProcessor, WhisperForConditionalGeneration
+ import librosa
+
+ # load model and processor
+ processor = WhisperProcessor.from_pretrained("openai/whisper-small")
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
+ model.config.forced_decoder_ids = None
+
+ # load audio from a specific path; sr=16000 resamples to the rate Whisper expects
+ audio_path = "audio.webm"
+ audio_array, sampling_rate = librosa.load(audio_path, sr=16000)
+
+ # process the audio array (sampling_rate must be passed as a keyword argument)
+ input_features = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt").input_features
+
+ predicted_ids = model.generate(input_features)
+
+ transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
+ print(transcription)
+
+ # record the speaker's voice used for voice cloning in TTS
+ record('speaker', sec=10)
+
+ # workaround for a Colab locale issue, then install inflect
+ import locale
+ locale.getpreferredencoding = lambda: "UTF-8"
+ !pip install inflect
+
+ import re
+ import inflect
+ # convert digits to words (e.g. 1 -> one), because numbers in digit form are otherwise ignored
+ def convert_numbers_to_words(s):
+     p = inflect.engine()
+     # find all sequences of digits in the string
+     numbers = re.findall(r'\d+', s)
+     for number in numbers:
+         # convert each number to words
+         words = p.number_to_words(number)
+         # replace the original number in the string with its word representation
+         s = s.replace(number, words)
+     return s
+
+ # model test 1 for text to speech:
+ # voice cloning, given the path to an audio file containing the reference voice
+ from IPython.display import Audio
+
+ tts.tts_to_file(text=convert_numbers_to_words(str(transcription)),
+                 file_path="output.wav",
+                 speaker_wav='speaker.webm',
+                 language="en",
+                 emotion='angry',
+                 speed=2)
+ audio_path = "output.wav"
+ Audio(audio_path)
+
+ # model test 2 for text to speech: TTS with on-the-fly voice conversion
+ from IPython.display import Audio
+ api = TTS("tts_models/deu/fairseq/vits")
+ api.tts_with_vc_to_file(
+     text="Wie sage ich auf Italienisch, dass ich dich liebe?",
+     speaker_wav="speaker.webm",
+     file_path="output.wav"
+ )
+ audio_path = "output.wav"
+ Audio(audio_path)
+
+ # model test 3 for text to speech
+ from TTS.api import TTS
+ tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
+
+ from IPython.display import Audio
+
+ # generate speech by cloning a voice, using custom settings
+ tts.tts_to_file(text="But for me to rap like a computer it must be in my genes I got a laptop in my back pocket My pen'll go off when I half-cock it Got a fat knot from that rap profit Made a livin' and a killin' off it Ever since Bill Clinton was still in office with Monica Lewinsky feelin' on his nutsack I'm an MC still as honest",
+                 file_path="output.wav",
+                 speaker_wav="Slide 1.m4a",
+                 language="en",
+                 emotion="neutral",
+                 decoder_iterations=35)
+
+ audio_path = "output.wav"
+ Audio(audio_path)
+
+ # init TTS with the target studio speaker
+ OUTPUT_PATH = "output.wav"
+ tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False)
+ # run TTS
+ tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)
+ # run TTS with emotion and speed control
+ tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)
+
+ # model test 4 for text to speech: TTS with on-the-fly voice conversion
+ from IPython.display import Audio
+ from TTS.api import TTS
+ #api = TTS(model_name="tts_models/eng/fairseq/vits").to("cuda")
+ #api.tts_to_file("This is a test.", file_path="output.wav")
+
+ api = TTS("tts_models/deu/fairseq/vits")
+ api.tts_with_vc_to_file(
+     "I am a basic human",
+     speaker_wav="speaker.webm",
+     file_path="output.wav"
+ )
+
+ audio_path = "output.wav"
+ Audio(audio_path)
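Putting these pieces together, the following sketch shows one conversational round trip as the README describes: record a question, transcribe it with Whisper, produce a reply, and speak it back with the cloned voice. It reuses `record`, `processor`, `model`, `convert_numbers_to_words`, and (assuming it still points at one of the voice-cloning models loaded above) `tts` from this file; `reply_for` is a hypothetical placeholder for the LLM step in llama2.py.

```python
# Sketch of one round trip: record -> Whisper -> reply -> TTS.
# Reuses names defined above; reply_for() is a hypothetical placeholder.
import librosa
from IPython.display import Audio

def transcribe(path: str) -> str:
    audio, sr = librosa.load(path, sr=16000)
    features = processor(audio, sampling_rate=sr, return_tensors="pt").input_features
    ids = model.generate(features)
    return processor.batch_decode(ids, skip_special_tokens=True)[0]

def reply_for(text: str) -> str:
    # placeholder; swap in gen(mk_prompt(...)) from llama2.py
    return f"You said: {text}"

def speak(text: str, speaker_wav: str = "speaker.webm") -> str:
    out = "reply.wav"
    tts.tts_to_file(text=convert_numbers_to_words(text), file_path=out,
                    speaker_wav=speaker_wav, language="en")
    return out

question = transcribe(record("question", sec=10))
Audio(speak(reply_for(question)))
```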