rs545837 commited on
Commit
2cf6224
1 Parent(s): 39627fa

Upload folder using huggingface_hub

Browse files
Files changed (3) hide show
  1. output.txt +73 -0
  2. train_list.txt +65 -0
  3. val_list.txt +8 -0
output.txt ADDED
@@ -0,0 +1,73 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ trelis_voice1_0.wav|I'm going to walk you through 10 quick tips for fine tuning. For each of those, I'll point you to one or two trellis videos on YouTube and also point you to the right branch if you're working out of the trellis advanced fine tuning repository. Tip number one is to start with a small model. I recommend starting with something like Lama 3 8B or Phi 3 Mini. And the reason is because fine tuning is about experimentation and you want to be able to try many things quickly. If you start off with Lama 3 8 or 70B, it's going to take you much more time in order to test out what's working and what's not. You can always start small and scale up later.|1
2
+ trelis_voice1_1.wav|The video I recommend here is memorization. This one, I use a relatively small model as I do in many of my fine tuning tutorials, just because it's quicker to learn fast. Tip number two is to use LoRa or QLoRa. I don't recommend starting off with full fine-tuning for a few reasons. First of all, LoRa and QLoRa allow you to start with fewer GPUs or a smaller GPU. That's going to make iteration faster. But for small datasets, the performance might even be better than full fine-tuning because full fine-tuning can tend to overfit. So I'd recommend even if you eventually want to do full fine-tuning, start off with LoRa or QLoRa and try to get it working before you want to spend more on GPU rental and more of your time.|1
3
+ trelis_voice1_2.wav|The video here if you want to pick out the right LoRa parameters is a live stream on how to pick LoRa parameters. And if you're working out of the Trellis repo, you can check out the Unsloth branch for the fastest fine-tuning on a single GPU using LoRa or QLoRa. Tip number three is to create 10 manual test questions. So you want to create 10 question answer pairs and use those to choose which base model is going to perform best. So just by running those on different base models, you can see which one is going to give you the best baseline for starting off your fine tuning. Then after you do any training run, you want to run that manual test.|1
4
+ trelis_voice1_3.wav|and just evaluate whether the model is doing well. This gives you probably a better sense than solely looking at the eval and training loss during the fine-tuning process. This is what I do in this memorization video as well, which you can check out on YouTube, and you'll see in the memorization scripts how I allow you to set up this manual dataset. That's also possible in the unsloth branch and the multi-GPU branch, which I'll get to later. Tip number four is to create data sets manually. Yes, I know this is a bit of work, but I think it's underrated. When you manually curate a data set like I did for the trellis function calling data set, it lets you appreciate exactly which rows of data are needed to get the performance that you need.|1
5
+ trelis_voice1_4.wav|You can, of course, use Python and chat GPT to help automate some of this and generate rows. But I think the manual touch does allow you a better understanding, which will allow you to get performance faster. Here, you can check out the function calling v3 branch and also the unslot and multi-GPU branches of the advanced fine-tuning repo. Tip number five is to start off training with a small number of rows. In fact, I always run training first with just one row of data to check that my training pipeline is working correctly and I don't run out of memory. Then I'll move to training on 100 rows, then 1,000. And I'm checking all the time whether my performance is actually improving or whether just my dataset design is completely off.|1
6
+ trelis_voice1_5.wav|If you do want to automate a little more how you generate synthetic data sets, you can check out this video here on data set preparation with LLMs. Tip number six is always use a validation data set. If you don't have one, you can just split off 10 to 20% of your training data set. You want to be checking your training loss as you progress along the process. Make sure it's not too bumpy and your learning rate is not too high or your batch size or virtual batch size is too small. You also want to check your validation loss, and this should be monotonically decreasing in a smooth way. If it's ever upticking, that means you might be overfitting and you're training for too many epochs, or you may not have enough data.|1
7
+ trelis_voice1_6.wav|I think you're better off to just fit it on one GPU, because when you move to multi GPU, you have data that's moving between them, the training becomes more complicated, it's easier to make mistakes, and it can be slower in some ways. Also, on one GPU, you can use unsloth, which gives you a 2x speed up. So that's quite beneficial if you can just focus on keeping things simple, until you've at least got a training approach that's working well, and you're happy to then spend the time and money to scale up. Something I should mention as well is that you can waste a lot of time with installations and getting stuck in getting set up for fine tuning.|1
8
+ trelis_voice1_7.wav|One way around that is to use an image or a template that sets up your CUDA and PyTorch to a specific version. I've got a one-click template here for RunPod, and you can use that to consistently have the same environment on which to install the final packages you need for fine tuning. Tip number eight is to use weights and biases. This is a tool that allows you to track the losses and the rewards as you move through your training run. You can include this in a script with pip install wandb, then set the environment variable for wandb project to a project name. And this will create a folder basically within which you can have multiple runs of run name.|1
9
+ trelis_voice1_8.wav|And the way you set the run name is in the training arguments by passing in the run name. Here you would set the run name like one epoch and constant scheduler or whatever you want to call it. And you also need to set up report to wand b weights and biases. This is supported in the Onslaught and the multi-GPU branches and also in many of the Jupyter notebooks that are throughout all the branches of the advanced fine-tuning repo. Before I move to tips 8 and 9, I want to comment on scaling up. So I've talked about starting with a low number of rows, starting with LoRa or QLoRa, and starting with a small model. Well, here's the order you want to scale up in.|1
10
+ trelis_voice1_9.wav|Start by increasing the rows of data on a small model, then move QLoRa to LoRa. If you really want to try full fine-tuning, test it out on a small model and see if it really improves performance. Then, as a very last step, you can think about moving to a larger model where it's going to take more time and money to get in that final result. There are two videos of relevance here. If you want to understand the pros and cons of full fine-tuning versus QLORA or LoRa, take a look at this video. And if you want to understand the complexities of doing multi-GPU training, check out multi-GPU fine-tuning. Moving to two last tips.|1
11
+ trelis_voice1_10.wav|Tip number nine is to use unsupervised fine-tuning. This can be useful if you have a large data set. I'm going to say larger than 10,000 rows of data. Here, you'll need to use Python scripts in order to clean up, say, repeated characters or too much new lines. You can also use language models in order to clean up the data set chunk by chunk. The video of relevance here is the Wikipedia video I made, where I first extract data from Wikipedia, clean it, and then use it for fine-tuning. Last of all, my tip number 10 is to do preference fine-tuning. This is where you have a data set with chosen, which are better or preferred responses, and rejected, which are the responses to the same prompts but are of lower quality.|1
12
+ trelis_voice1_11.wav|The preference fine-tuning will move your model to give responses more like your chosen answers and less like your rejected answers, which is useful if you want to do some fine-tuning for tone or style, or if you want to make some corrections where the model's giving a response you don't quite like. Here I recommend the Orpo YouTube video, and there's also a branch by that name in Advanced Fine Tuning. Orpo is also supported in the Unslot branch, where there's a Python Jupyter notebook and also just a Python.py script you can run. And Orpo is supported as an option in the Multi-GPU branch too. So to recap these 10 tips, start with a small model, use LoRa or QLoRa, not full fine-tuning.|1
13
+ trelis_voice1_12.wav|Always create 10 manual test questions or maybe a few more. Remember that manual data sets are probably underrated. You can always get a little bit of help from Python or from chat GPT. Start training on a small number of rows, even just one row to test the pipeline, but then 100, and make sure it's having a good effect before you decide to scale up. Make sure you know that the data type and the data set that you've set up is actually the right one. Number six, always use a validation set. Just split one off from a training set if you don't have one. Number seven, try to just start training on one GPU. Number eight, use weights and biases for tracking.|1
14
+ trelis_voice1_13.wav|And when you're scaling from small to large, increase first the rows, then move to using more VRAM with LoRa instead of QLoRa or full fine tuning instead of LoRa. By the way, there's a factor of four roughly in VRAM difference between each of those. So LoRa is about four times QLoRa and full fine tuning is about four times. LoRa, or even more in some cases. And last of all, increase to a larger model size only at the very end of your training process when you think you have a pipeline that's working well. Then for advanced tips, consider doing unsupervised fine-tuning if you have a large amount of data, only if you have a large amount of data, I'd say.|1
15
+ trelis_voice1_14.wav|And last of all, you can consider preference fine-tuning, in which case I'd recommend using ORPL, which will do supervised fine-tuning and odds ratio preference optimization. at the same time. Now, this approach here I've talked about for language models, but it also works for video and speech or images, multimodal models. So you can check out this video here on multimodal text plus image, where I prepare a data set and bring it through fine tuning. And likewise, for this speech to text model, where I prepare a data set and bring it through fine tuning. There are specific repos for multimodal. That's the vision repository here. And there's a repo for transcription. And this LLMs repo is the advanced fine-tuning repo I've been talking to date in or up until now in this presentation.|1
16
+ trelis_voice1_15.wav|I've laid out here all of the playlists that are relevant depending on what you need. So there are four different sections, four playlists and four repositories that go with them. There's the LLM fine tuning playlist, which is all about fine tuning language models. Then there's a repo for that advanced fine tuning. There's the vision playlist, which is for multimodal models and repo link. There's a video on transcription and a repo link. And then there are many videos on server setup. That's if you want to deploy your own custom model, either on a server that will sleep or start up when you need it to run, that's called serverless, or a server that's always on if you're using something like TGI or VLLM through a service like RunPod or Vast AI.|1
17
+ trelis_voice1_16.wav|And so here is the link for this. I'll note as well that within this repo, there's some scripts that allow you to redact information, personally identifiable information like names, emails, or credit card numbers before you send the data to a third-party LLM. And there are also scripts on function calling inference and speed test too. I'll talk a little more about those just at the end of this video. Last of all, these repos, of which there are four, they're available for purchase individually, but you can also now buy a repo bundle, which will give you lifetime access to all four of these repositories, which includes any future updates made to those repos. You can purchase that all together now as a bundle.|1
18
+ trelis_voice1_17.wav|This very last section of the video is for those who have purchased lifetime access to one of the Trellis repositories, but I'll just put it part of this public video because it will give a sense of what's in these repositories for those of you who might be interested to purchase lifetime membership later. The first repo is the advanced fine-tuning repo, and this is split into branches according to function. They are all listed here roughly in the order that they have been released. Now, a few of the branches that I'll highlight are, first of all, the Wikipedia branch, which is for unsupervised fine-tuning and data cleaning. If you do want to do ORPO, you have the ORPO branch here. And if you want to prepare data, you can do so with the help of a language model.|1
19
+ trelis_voice1_18.wav|This is done in the memorization branch, where you can set up some data generation based on PDF content. And likewise, if you go to the supervised fine tuning branch, there is also a script or multiple scripts for generating Q&A data from a base data set right there. Then there are two important branches here, unsloth and multi-GPU. The unsloth branch allows you to run fine tuning in either a notebook or as a Python script. Whereas the multi-GPU branch allows you to run Python scripts that will deploy multi-GPU training that's fully shared data parallel or distributed data parallel. Now I'll briefly show you each of those two main branches. So here we'll go into the unsloth branch.|1
20
+ trelis_voice1_19.wav|The way that you run training in this Unslot branch is by setting up the configuration in a config file. I've also got a config file that you can use here if you want to do some function calling fine tuning. And once you have your configuration set up, you can run the test.py in order to run a set of test questions that you've manually generated, or you can run questions from validation or test split of a data set. Then when you want to train your model, you simply run train.py, or you can run it step by step in a Python Jupyter notebook. Now, the notebook is recommended if you want to go through the training the first time, you can see step by step what's happening and easily print out things at intermediate points.|1
21
+ trelis_voice1_20.wav|But when you've got your script honed, it can be a lot faster to run a Python script. And that's why I have made this script available, which you just run from the command line and it will go through everything within the training. Just to give you a sense of how you configure the training and test setup, you'll set a model slug. You will then set some parameters, like whether you want to fine tune in 4-bit, what data type you want to use, depending on your GPU. You can then choose a data set, say for function calling, or if you want to memorize some data, like on the rules of TouchRugby. Here, you can set up your testing.|1
22
+ trelis_voice1_21.wav|You can decide to test either from a set of messages that you have prepared manually, or you can use the training, or you can use the validation split of a test set that's on Hugging Face by setting use data set to test equal to true right here. Next, you set up your training and validation splits. Here I've selected a main branch for training, and I've selected the training split. You can fix a max number of rows here. This will save you time if you just want to download and run on, say, 100 rows instead of on a massive dataset. Now, I spoke earlier about generating a validation set. You can either download from a split that's on Hugging Face called test or validation, but you can also generate a validation split from the train split.|1
23
+ trelis_voice1_22.wav|If you just set this to true, it will sequester 20% of the training data to use as validation. Next up is the LoRa configuration. You have all the regular LoRa parameters you'll see here. Check out the live stream video on choosing LoRa parameters if you want to know more. You can set LoRa or LoRa alpha and also rank stabilize LoRa, set that to true or false. Here you've got some Weights and Biases project configurations. You set the project name, and then for each run, you can use a different name here for running in Weights and Biases. You can set up your HuggingFace username. This will be used when pushing models to Hub.|1
24
+ trelis_voice1_23.wav|Now there's a more advanced technique here where you can decide to train on completions only. This means that you will only be considering the loss on the answer portion, not on the prompt or question portion. And this can be useful if your answers are quite short and you don't want the loss on all of the prompts to kind of crowd out or cloud out the information or the signal that's coming from training on the response or the answer. So you set the completions to true here. Sometimes I use this for function calling, fine tuning. And then you need to let the model know where your answer is starting. So in a Lama 3 model, the answer will start after assistant and header ID.|1
25
+ trelis_voice1_24.wav|In a Lama 2 model, it will start after inst. And then I think this is maybe a chat ML format. the answer will start after I am start assistant. So this allows the training loop to check within your prompt. It will check for where this start of the assistance answer is, and then it will only look at the loss after that point. After this, there are some standard parameters like setting the training batch size, the validation batch size, the gradient accumulation, whether you want to run a validation set or not. the number of epochs, the learning rate, an output directory for your training model and results, whether you want to train with BrainFloat 16 or not. You can set your scheduler.|1
26
+ trelis_voice1_25.wav|You can decide whether to save the model at a certain number of steps of training. set your max sequence length, gradient checkpointing, and whether to use re-entrancy, which allows you to speed up the training. Next, you can decide whether you want to use ORPO or not. By default, I've got that set to false. If you're using ORPO, you need a column that's called chosen and one called rejected. and you can set your max prompt length. And then the beta, the beta basically weighs how much of the preference fine-tuning, what's the importance of that loss relative to the standard SFT loss. Remember, ORPL does two things in one. It does SFT and it does preference fine-tuning in one.|1
27
+ trelis_voice1_26.wav|So if you have this at 0.2, it's kind of the importance of the odds ratio is about 0.2 relative to the SFT loss. Last of all, you can push to hub, so you can set a target model name if you want to push to hub. So very quickly, if we take a look at the test script, this will simply load the model. So it will load all of your configurations. It will load the model here, a fast language model using unsloth. It will set up the tokenizer, set up the chat template, load the dataset, either from your manual data that's in the repo or from Hugging Face, and then it will run inference through all of those samples and print the results out to file.|1
28
+ trelis_voice1_27.wav|Just as an example, I can show you within test output, you'll see here a large number of tests that I have run. I'll try to find a recent one. So here is some fine-tuning on TouchRugby, and you'll see there is a prompt, a question, and it'll print out the correct response, and it'll also print out the generated response. And then you can just manually compare whether these answers are good or not. Now, just one other script I'll point out here, which is viewModules. You can just run Python viewModules if you want to see what modules are within the given model. This allows you to pick out which modules you might want to fine tune using LoRa.|1
29
+ trelis_voice1_28.wav|And that's pretty much it for the unsloth branch, which is recommended if you're going to fine tune on one GPU. It does not work if you're fine tuning on multi GPU, which is why I have the multi GPU branch and the multi GPU branch is configured much in a similar way to unsloth, except that it allows you to run in fully sharded data parallel or in distributed data parallel. It has a config file that you can set up. It has the test.py and the train.py file that will allow you to run testing and training. And I'll just briefly show you the config file. So at the start here, you'll see this parameter that's not in the unsloth branch. If you set it to auto, it will just do standard training.|1
30
+ trelis_voice1_29.wav|You can train on multiple GPUs, but it will be pipeline parallel, so not quite as efficient. However, you can set this to DDP for distributed data parallel, or you can set it to FSDP for fully sharded data parallel. Now, when you're doing that, you'll need to configure the multi-GPU setup. That can be done by running config, accelerate config, and you'll see the instructions if you head over to the multi-GPU branch for doing that. So this is the Advanced Fine Tuning repo, and you can find out more at trials.com forward slash advanced dash fine dash tuning. The next repo I'll briefly go through is the Advanced Vision repo. This does much of the same, but for multimodal text plus image models.|1
31
+ trelis_voice1_30.wav|It allows you to prepare your data and push it up to create a Hugging Face dataset. Then you can fine tune Lava, IdaFix and, or IdaFix and Moondream models. You can do multimodal server setup with text generation inference. There's a one-click template for running an IdaFix server, including on a custom model. And last of all, there is a script for fine-tuning multimodal text plus video models. This is basically a variation on text plus image models where you split the video into multiple images. The next repo is the Advanced Inference repo. This allows you to set up a server so that you can hit an endpoint for a custom model. You can do so on RunPod, Vast AI, or using a Llama CPP type server.|1
32
+ trelis_voice1_31.wav|There's also the option to deploy serverlessly using RunPod. This means that you can put a server that will only turn on when it's being queried and will turn off after it's being queried. It's quite useful for batch jobs that are less time sensitive, because it means you're not paying for the server when it's not being used, and it will just turn on when you need it, which is going to save you cost. There are also a number of scripts for making API calls, simple API calls, OpenAI style or TGI style, function calling API calls if you want to test out function calling performance of a model. Then there are speed tests for single queries and multiple queries.|1
33
+ trelis_voice1_32.wav|There's also a folder now on privacy, which allows you to basically hide information, like personal information on credit cards, names, email addresses, before you send it to a third-party API so that you can reduce any data privacy risks. Last of all, there's the advanced transcription repository. This one here allows you to generate data if you want to fine tune a whisper model and then do the fine tuning. And again, much of the 10 tips that I provided earlier are going to apply here for transcription. And that is it for my 10 tips on fine-tuning. If I've left anything out, please let me know below in the comments and I'll get back to you. In the meantime, if you want more information on Trellis resources, including free and paid, try out trellis.com.|1
34
+ trelis_voice1_0.wav|I'm going to walk you through 10 quick tips for fine tuning. For each of those, I'll point you to one or two trellis videos on YouTube and also point you to the right branch if you're working out of the trellis advanced fine tuning repository. Tip number one is to start with a small model. I recommend starting with something like Lama 3 8B or Phi 3 Mini. And the reason is because fine tuning is about experimentation and you want to be able to try many things quickly.|1
35
+ trelis_voice1_1.wav|If you start off with Lama 3 8 or 70B, it's going to take you much more time in order to test out what's working and what's not. You can always start small and scale up later. The video I recommend here is memorization. This one, I use a relatively small model as I do in many of my fine tuning tutorials, just because it's quicker to learn fast. Tip number two is to use LoRa or QLoRa.|1
36
+ trelis_voice1_2.wav|I don't recommend starting off with full fine-tuning for a few reasons. First of all, LoRa and QLoRa allow you to start with fewer GPUs or a smaller GPU. That's going to make iteration faster. But for small datasets, the performance might even be better than full fine-tuning because full fine-tuning can tend to overfit. So I'd recommend even if you eventually want to do full fine-tuning, start off with LoRa or QLoRa and try to get it working before you want to spend more on GPU rental and more of your time.|1
37
+ trelis_voice1_3.wav|The video here if you want to pick out the right LoRa parameters is a live stream on how to pick LoRa parameters. And if you're working out of the Trellis repo, you can check out the Unsloth branch for the fastest fine-tuning on a single GPU using LoRa or QLoRa. Tip number three is to create 10 manual test questions. So you want to create 10 question answer pairs and use those to choose which base model is going to perform best.|1
38
+ trelis_voice1_4.wav|When you manually curate a data set like I did for the trellis function calling data set, it lets you appreciate exactly which rows of data are needed to get the performance that you need. You can, of course, use Python and chat GPT to help automate some of this and generate rows. But I think the manual touch does allow you a better understanding, which will allow you to get performance faster. Here, you can check out the function calling v3 branch and also the unslot and multi-GPU branches of the advanced fine-tuning repo.|1
39
+ trelis_voice1_5.wav|If you do want to automate a little more how you generate synthetic data sets, you can check out this video here on data set preparation with LLMs. Tip number six is always use a validation data set. If you don't have one, you can just split off 10 to 20% of your training data set. You want to be checking your training loss as you progress along the process. Make sure it's not too bumpy and your learning rate is not too high or your batch size or virtual batch size is too small.|1
40
+ trelis_voice1_6.wav|You also want to check your validation loss, and this should be monotonically decreasing in a smooth way. If it's ever upticking, that means you might be overfitting and you're training for too many epochs, or you may not have enough data. Here, I recommend the Trellis repo branches of Unsloth or MultiGPU. They each allow you to split off validation, split from your base training set. This is something you can also do easily using Hugging Face datasets if you check out their documentation.|1
41
+ trelis_voice1_7.wav|I think you're better off to just fit it on one GPU, because when you move to multi GPU, you have data that's moving between them, the training becomes more complicated, it's easier to make mistakes, and it can be slower in some ways. Also, on one GPU, you can use unsloth, which gives you a 2x speed up. So that's quite beneficial if you can just focus on keeping things simple, until you've at least got a training approach that's working well, and you're happy to then spend the time and money to scale up.|1
42
+ trelis_voice1_8.wav|Something I should mention as well is that you can waste a lot of time with installations and getting stuck in getting set up for fine tuning. One way around that is to use an image or a template that sets up your CUDA and PyTorch to a specific version. I've got a one-click template here for RunPod, and you can use that to consistently have the same environment on which to install the final packages you need for fine tuning.|1
43
+ trelis_voice1_9.wav|Tip number eight is to use weights and biases. This is a tool that allows you to track the losses and the rewards as you move through your training run. You can include this in a script with pip install wandb, then set the environment variable for wandb project to a project name. And this will create a folder basically within which you can have multiple runs of run name. And the way you set the run name is in the training arguments by passing in the run name.|1
44
+ trelis_voice1_10.wav|Here you would set the run name like one epoch and constant scheduler or whatever you want to call it. And you also need to set up report to wand b weights and biases. This is supported in the Onslaught and the multi-GPU branches and also in many of the Jupyter notebooks that are throughout all the branches of the advanced fine-tuning repo. Before I move to tips 8 and 9, I want to comment on scaling up. So I've talked about starting with a low number of rows, starting with LoRa or QLoRa, and starting with a small model.|1
45
+ trelis_voice1_11.wav|Well, here's the order you want to scale up in. Start by increasing the rows of data on a small model, then move QLoRa to LoRa. If you really want to try full fine-tuning, test it out on a small model and see if it really improves performance. Then, as a very last step, you can think about moving to a larger model where it's going to take more time and money to get in that final result.|1
46
+ trelis_voice1_12.wav|There are two videos of relevance here. If you want to understand the pros and cons of full fine-tuning versus QLORA or LoRa, take a look at this video. And if you want to understand the complexities of doing multi-GPU training, check out multi-GPU fine-tuning. Moving to two last tips. Tip number nine is to use unsupervised fine-tuning. This can be useful if you have a large data set. I'm going to say larger than 10,000 rows of data.|1
47
+ trelis_voice1_13.wav|This is where you have a data set with chosen, which are better or preferred responses, and rejected, which are the responses to the same prompts but are of lower quality. You might have a set of data like this if you have production data from customers or from a chatbot. You may have some conversational data that you consider of good quality. You may even have corrected or annotated versions of those conversations where you've improved the assistance responses. That's going to be ideal as your chosen dataset.|1
48
+ trelis_voice1_14.wav|The preference fine-tuning will move your model to give responses more like your chosen answers and less like your rejected answers, which is useful if you want to do some fine-tuning for tone or style, or if you want to make some corrections where the model's giving a response you don't quite like. Here I recommend the Orpo YouTube video, and there's also a branch by that name in Advanced Fine Tuning. Orpo is also supported in the Unslot branch, where there's a Python Jupyter notebook and also just a Python.py script you can run.|1
49
+ trelis_voice1_15.wav|And Orpo is supported as an option in the Multi-GPU branch too. So to recap these 10 tips, start with a small model, use LoRa or QLoRa, not full fine-tuning. Always create 10 manual test questions or maybe a few more. Remember that manual data sets are probably underrated. You can always get a little bit of help from Python or from chat GPT. Start training on a small number of rows, even just one row to test the pipeline, but then 100, and make sure it's having a good effect before you decide to scale up.|1
50
+ trelis_voice1_16.wav|Make sure you know that the data type and the data set that you've set up is actually the right one. Number six, always use a validation set. Just split one off from a training set if you don't have one. Number seven, try to just start training on one GPU. Number eight, use weights and biases for tracking. And when you're scaling from small to large, increase first the rows, then move to using more VRAM with LoRa instead of QLoRa or full fine tuning instead of LoRa.|1
51
+ trelis_voice1_17.wav|By the way, there's a factor of four roughly in VRAM difference between each of those. So LoRa is about four times QLoRa and full fine tuning is about four times. LoRa, or even more in some cases. And last of all, increase to a larger model size only at the very end of your training process when you think you have a pipeline that's working well. Then for advanced tips, consider doing unsupervised fine-tuning if you have a large amount of data, only if you have a large amount of data, I'd say.|1
52
+ trelis_voice1_18.wav|And last of all, you can consider preference fine-tuning, in which case I'd recommend using ORPL, which will do supervised fine-tuning and odds ratio preference optimization. at the same time. Now, this approach here I've talked about for language models, but it also works for video and speech or images, multimodal models. So you can check out this video here on multimodal text plus image, where I prepare a data set and bring it through fine tuning. And likewise, for this speech to text model, where I prepare a data set and bring it through fine tuning.|1
53
+ trelis_voice1_19.wav|There are specific repos for multimodal. That's the vision repository here. And there's a repo for transcription. And this LLMs repo is the advanced fine-tuning repo I've been talking to date in or up until now in this presentation. I've laid out here all of the playlists that are relevant depending on what you need. So there are four different sections, four playlists and four repositories that go with them. There's the LLM fine tuning playlist, which is all about fine tuning language models.|1
54
+ trelis_voice1_20.wav|Then there's a repo for that advanced fine tuning. There's the vision playlist, which is for multimodal models and repo link. There's a video on transcription and a repo link. And then there are many videos on server setup. That's if you want to deploy your own custom model, either on a server that will sleep or start up when you need it to run, that's called serverless, or a server that's always on if you're using something like TGI or VLLM through a service like RunPod or Vast AI.|1
55
+ trelis_voice1_21.wav|This very last section of the video is for those who have purchased lifetime access to one of the Trellis repositories, but I'll just put it part of this public video because it will give a sense of what's in these repositories for those of you who might be interested to purchase lifetime membership later. The first repo is the advanced fine-tuning repo, and this is split into branches according to function. They are all listed here roughly in the order that they have been released.|1
56
+ trelis_voice1_22.wav|And likewise, if you go to the supervised fine tuning branch, there is also a script or multiple scripts for generating Q&A data from a base data set right there. Then there are two important branches here, unsloth and multi-GPU. The unsloth branch allows you to run fine tuning in either a notebook or as a Python script. Whereas the multi-GPU branch allows you to run Python scripts that will deploy multi-GPU training that's fully shared data parallel or distributed data parallel.|1
57
+ trelis_voice1_23.wav|Now I'll briefly show you each of those two main branches. So here we'll go into the unsloth branch. The way that you run training in this Unslot branch is by setting up the configuration in a config file. I've also got a config file that you can use here if you want to do some function calling fine tuning. And once you have your configuration set up, you can run the test.py in order to run a set of test questions that you've manually generated, or you can run questions from validation or test split of a data set.|1
58
+ trelis_voice1_24.wav|Then when you want to train your model, you simply run train.py, or you can run it step by step in a Python Jupyter notebook. Now, the notebook is recommended if you want to go through the training the first time, you can see step by step what's happening and easily print out things at intermediate points. But when you've got your script honed, it can be a lot faster to run a Python script. And that's why I have made this script available, which you just run from the command line and it will go through everything within the training.|1
59
+ trelis_voice1_25.wav|You can decide to test either from a set of messages that you have prepared manually, or you can use the training, or you can use the validation split of a test set that's on Hugging Face by setting use data set to test equal to true right here. Next, you set up your training and validation splits. Here I've selected a main branch for training, and I've selected the training split. You can fix a max number of rows here.|1
60
+ trelis_voice1_26.wav|This will save you time if you just want to download and run on, say, 100 rows instead of on a massive dataset. Now, I spoke earlier about generating a validation set. You can either download from a split that's on Hugging Face called test or validation, but you can also generate a validation split from the train split. If you just set this to true, it will sequester 20% of the training data to use as validation. Next up is the LoRa configuration.|1
61
+ trelis_voice1_27.wav|You have all the regular LoRa parameters you'll see here. Check out the live stream video on choosing LoRa parameters if you want to know more. You can set LoRa or LoRa alpha and also rank stabilize LoRa, set that to true or false. Here you've got some Weights and Biases project configurations. You set the project name, and then for each run, you can use a different name here for running in Weights and Biases.|1
62
+ trelis_voice1_28.wav|You can set up your HuggingFace username. This will be used when pushing models to Hub. Now there's a more advanced technique here where you can decide to train on completions only. This means that you will only be considering the loss on the answer portion, not on the prompt or question portion. And this can be useful if your answers are quite short and you don't want the loss on all of the prompts to kind of crowd out or cloud out the information or the signal that's coming from training on the response or the answer.|1
63
+ trelis_voice1_29.wav|So you set the completions to true here. Sometimes I use this for function calling, fine tuning. And then you need to let the model know where your answer is starting. So in a Lama 3 model, the answer will start after assistant and header ID. In a Lama 2 model, it will start after inst. And then I think this is maybe a chat ML format. the answer will start after I am start assistant. So this allows the training loop to check within your prompt.|1
64
+ trelis_voice1_30.wav|It will check for where this start of the assistance answer is, and then it will only look at the loss after that point. After this, there are some standard parameters like setting the training batch size, the validation batch size, the gradient accumulation, whether you want to run a validation set or not. the number of epochs, the learning rate, an output directory for your training model and results, whether you want to train with BrainFloat 16 or not. You can set your scheduler.|1
65
+ trelis_voice1_31.wav|And then the beta, the beta basically weighs how much of the preference fine-tuning, what's the importance of that loss relative to the standard SFT loss. Remember, ORPL does two things in one. It does SFT and it does preference fine-tuning in one. So if you have this at 0.2, it's kind of the importance of the odds ratio is about 0.2 relative to the SFT loss. Last of all, you can push to hub, so you can set a target model name if you want to push to hub.|1
66
+ trelis_voice1_32.wav|So very quickly, if we take a look at the test script, this will simply load the model. So it will load all of your configurations. It will load the model here, a fast language model using unsloth. It will set up the tokenizer, set up the chat template, load the dataset, either from your manual data that's in the repo or from Hugging Face, and then it will run inference through all of those samples and print the results out to file.|1
67
+ trelis_voice1_33.wav|Just as an example, I can show you within test output, you'll see here a large number of tests that I have run. I'll try to find a recent one. So here is some fine-tuning on TouchRugby, and you'll see there is a prompt, a question, and it'll print out the correct response, and it'll also print out the generated response. And then you can just manually compare whether these answers are good or not. Now, just one other script I'll point out here, which is viewModules.|1
68
+ trelis_voice1_34.wav|It does not work if you're fine tuning on multi GPU, which is why I have the multi GPU branch and the multi GPU branch is configured much in a similar way to unsloth, except that it allows you to run in fully sharded data parallel or in distributed data parallel. It has a config file that you can set up. It has the test.py and the train.py file that will allow you to run testing and training. And I'll just briefly show you the config file.|1
69
+ trelis_voice1_35.wav|That can be done by running config, accelerate config, and you'll see the instructions if you head over to the multi-GPU branch for doing that. So this is the Advanced Fine Tuning repo, and you can find out more at trials.com forward slash advanced dash fine dash tuning. The next repo I'll briefly go through is the Advanced Vision repo. This does much of the same, but for multimodal text plus image models. It allows you to prepare your data and push it up to create a Hugging Face dataset.|1
70
+ trelis_voice1_36.wav|Then you can fine tune Lava, IdaFix and, or IdaFix and Moondream models. You can do multimodal server setup with text generation inference. There's a one-click template for running an IdaFix server, including on a custom model. And last of all, there is a script for fine-tuning multimodal text plus video models. This is basically a variation on text plus image models where you split the video into multiple images. The next repo is the Advanced Inference repo.|1
71
+ trelis_voice1_37.wav|It's quite useful for batch jobs that are less time sensitive, because it means you're not paying for the server when it's not being used, and it will just turn on when you need it, which is going to save you cost. There are also a number of scripts for making API calls, simple API calls, OpenAI style or TGI style, function calling API calls if you want to test out function calling performance of a model. Then there are speed tests for single queries and multiple queries.|1
72
+ trelis_voice1_38.wav|So the idea is to use a very fast and relatively small language model to pick out the right snippets and then include those snippets in the context of a more powerful model like, say, GPT-4. There's also a folder now on privacy, which allows you to basically hide information, like personal information on credit cards, names, email addresses, before you send it to a third-party API so that you can reduce any data privacy risks. Last of all, there's the advanced transcription repository.|1
73
+ trelis_voice1_39.wav|This one here allows you to generate data if you want to fine tune a whisper model and then do the fine tuning. And again, much of the 10 tips that I provided earlier are going to apply here for transcription. And that is it for my 10 tips on fine-tuning. If I've left anything out, please let me know below in the comments and I'll get back to you. In the meantime, if you want more information on Trellis resources, including free and paid, try out trellis.com.|1
train_list.txt ADDED
@@ -0,0 +1,65 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ trelis_voice1_0.wav|aɪm ɡˌoʊɪŋ tə wˈɔːk juː θɹuː tˈɛn kwˈɪk tˈɪps fɔːɹ fˈaɪn tˈuːnɪŋ. fɔːɹ ˈiːtʃ əv ðˈoʊz, aɪl pˈɔɪnt juː tə wˈʌn ɔːɹ tˈuː tɹˈɛliz vˈɪdɪoʊz ˌɔn juː tˈuːb ænd ˈɔːlsoʊ pˈɔɪnt juː tə ðə ɹˈaɪt bɹˈæntʃ ɪf jʊɹ wˈɜːkɪŋ ˌaʊɾəv ðə tɹˈɛliz ɐdvˈænst fˈaɪn tˈuːnɪŋ ɹᵻpˈɑːzɪtˌoːɹi. tˈɪp nˈʌmbɚ wˈʌn ɪz tə stˈɑːɹt wɪð ɐ smˈɔːl mˈɑːdəl. aɪ ɹˌɛkəmˈɛnd stˈɑːɹɾɪŋ wɪð sˈʌmθɪŋ lˈaɪk lˈɑːmə θɹˈiː ˈeɪt bˈiː ɔːɹ fˈaɪ θɹˈiː mˈɪni. ænd ðə ɹˈiːzən ɪz bɪkˈʌz fˈaɪn tˈuːnɪŋ ɪz ɐbˌaʊt ɛkspˌɛɹɪməntˈeɪʃən ænd juː wˈɔnt təbi ˈeɪbəl tə tɹˈaɪ mˈɛni θˈɪŋz kwˈɪkli. ɪf juː stˈɑːɹt ˈɔf wɪð lˈɑːmə θɹˈiː ˈeɪt ɔːɹ sˈɛvənti bˈiː, ɪts ɡˌoʊɪŋ tə tˈeɪk juː mˈʌtʃ mˈoːɹ tˈaɪm ɪn ˈɔːɹdɚ tə tˈɛst ˈaʊt wʌts wˈɜːkɪŋ ænd wʌts nˈɑːt. juː kæn ˈɔːlweɪz stˈɑːɹt smˈɔːl ænd skˈeɪl ˌʌp lˈeɪɾɚ. |1
2
+ trelis_voice1_1.wav|ðə vˈɪdɪoʊ aɪ ɹˌɛkəmˈɛnd hˈɪɹ ɪz mˌɛmɚɹᵻzˈeɪʃən. ðˈɪswˌʌn, aɪ jˈuːz ɐ ɹˈɛlətˌɪvli smˈɔːl mˈɑːdəl æz aɪ dˈuː ɪn mˈɛnɪəv maɪ fˈaɪn tˈuːnɪŋ tuːtˈoːɹɪəlz, dʒˈʌst bɪkˈʌz ɪts kwˈɪkɚ tə lˈɜːn fˈæst. tˈɪp nˈʌmbɚ tˈuː ɪz tə jˈuːz lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː. aɪ dˈoʊnt ɹˌɛkəmˈɛnd stˈɑːɹɾɪŋ ˈɔf wɪð fˈʊl fˈaɪntˈuːnɪŋ fɚɹə fjˈuː ɹˈiːzənz. fˈɜːst ʌv ˈɔːl, lˈoʊ ɹˈɑː ænd kjˈuː lˈoʊ ɹˈɑː ɐlˈaʊ juː tə stˈɑːɹt wɪð fjˈuːɚ dʒˌiːpˌiːjˈuː ɔːɹ ɐ smˈɔːlɚ dʒˌiːpˌiːjˈuː. ðæts ɡˌoʊɪŋ tə mˌeɪk ˌɪɾɚɹˈeɪʃən fˈæstɚ. bˌʌt fɔːɹ smˈɔːl dˈeɪɾəsˌɛts, ðə pɚfˈoːɹməns mˌaɪt ˈiːvən biː bˈɛɾɚ ðɐn fˈʊl fˈaɪntˈuːnɪŋ bɪkˈʌz fˈʊl fˈaɪntˈuːnɪŋ kæn tˈɛnd tʊ ˌoʊvɚfˈɪt. sˌoʊ aɪd ɹˌɛkəmˈɛnd ˈiːvən ɪf juː ᵻvˈɛntʃuːəli wˈɔnt tə dˈuː fˈʊl fˈaɪntˈuːnɪŋ, stˈɑːɹt ˈɔf wɪð lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː ænd tɹˈaɪ tə ɡɛt ɪt wˈɜːkɪŋ bᵻfˌoːɹ juː wˈɔnt tə spˈɛnd mˈoːɹ ˌɔn dʒˌiːpˌiːjˈuː ɹˈɛntəl ænd mˈoːɹ ʌv jʊɹ tˈaɪm. |1
3
+ trelis_voice1_2.wav|ðə vˈɪdɪoʊ hˈɪɹ ɪf juː wˈɔnt tə pˈɪk ˈaʊt ðə ɹˈaɪt lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz ɪz ɐ lˈaɪv stɹˈiːm ˌɔn hˌaʊ tə pˈɪk lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz. ænd ɪf jʊɹ wˈɜːkɪŋ ˌaʊɾəv ðə tɹˈɛliz ɹˈiːpoʊ, juː kæn tʃˈɛk ˈaʊt ðɪ ʌnslˈɑːθ bɹˈæntʃ fɚðə fˈæstɪst fˈaɪntˈuːnɪŋ ˌɔn ɐ sˈɪŋɡəl dʒˌiːpˌiːjˈuː jˈuːzɪŋ lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː. tˈɪp nˈʌmbɚ θɹˈiː ɪz tə kɹiːˈeɪt tˈɛn mˈænjuːəl tˈɛst kwˈɛstʃənz. sˌoʊ juː wˈɔnt tə kɹiːˈeɪt tˈɛn kwˈɛstʃən ˈænsɚ pˈɛɹz ænd jˈuːs ðoʊz tə tʃˈuːz wˌɪtʃ bˈeɪs mˈɑːdəl ɪz ɡˌoʊɪŋ tə pɚfˈɔːɹm bˈɛst. sˌoʊ dʒˈʌst baɪ ɹˈʌnɪŋ ðoʊz ˌɔn dˈɪfɹənt bˈeɪs mˈɑːdəlz, juː kæn sˈiː wˌɪtʃ wˈʌn ɪz ɡˌoʊɪŋ tə ɡˈɪv juː ðə bˈɛst bˈeɪslaɪn fɔːɹ stˈɑːɹɾɪŋ ˈɔf jʊɹ fˈaɪn tˈuːnɪŋ. ðˈɛn ˈæftɚ juː dˈuː ˌɛni tɹˈeɪnɪŋ ɹˈʌn, juː wˈɔnt tə ɹˈʌn ðæt mˈænjuːəl tˈɛst. |1
4
+ trelis_voice1_3.wav|ænd dʒˈʌst ɪvˈæljuːˌeɪt wˈɛðɚ ðə mˈɑːdəl ɪz dˌuːɪŋ wˈɛl. ðɪs ɡˈɪvz juː pɹˈɑːbəbli ɐ bˈɛɾɚ sˈɛns ðɐn sˈoʊlli lˈʊkɪŋ æt ðɪ ɪvˈæl ænd tɹˈeɪnɪŋ lˈɔs dˈʊɹɹɪŋ ðə fˈaɪntˈuːnɪŋ pɹˈɑːsɛs. ðɪs ɪz wʌt aɪ dˈuː ɪn ðɪs mˌɛmɚɹᵻzˈeɪʃən vˈɪdɪoʊ æz wˈɛl, wˌɪtʃ juː kæn tʃˈɛk ˈaʊt ˌɔn juː tˈuːb, ænd juːl sˈiː ɪnðə mˌɛmɚɹᵻzˈeɪʃən skɹˈɪpts hˌaʊ aɪ ɐlˈaʊ juː tə sˈɛt ˌʌp ðɪs mˈænjuːəl dˈeɪɾəsˌɛt. ðæts ˈɔːlsoʊ pˈɑːsᵻbəl ɪnðɪ ʌnslˈɑːθ bɹˈæntʃ ænd ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ, wˌɪtʃ aɪl ɡɛt tə lˈeɪɾɚ. tˈɪp nˈʌmbɚ fˈoːɹ ɪz tə kɹiːˈeɪt dˈeɪɾə sˈɛts mˈænjuːəli. jˈɛs, aɪ nˈoʊ ðɪs ɪz ɐ bˈɪt ʌv wˈɜːk, bˌʌt aɪ θˈɪŋk ɪts ˌʌndɚɹˈeɪɾᵻd. wˌɛn juː mˈænjuːəli kjˈʊɹɹeɪt ɐ dˈeɪɾə sˈɛt lˈaɪk aɪ dˈɪd fɚðə tɹˈɛliz fˈʌŋkʃən kˈɔːlɪŋ dˈeɪɾə sˈɛt, ɪt lˈɛts juː ɐpɹˈiːʃɪˌeɪt ɛɡzˈæktli wˌɪtʃ ɹˈoʊz ʌv dˈeɪɾə ɑːɹ nˈiːdᵻd tə ɡɛt ðə pɚfˈoːɹməns ðæt juː nˈiːd. |1
5
+ trelis_voice1_4.wav|juː kˈæn, ʌv kˈoːɹs, jˈuːs pˈaɪθən ænd tʃˈæt dʒˌiːpˌiːtˈiː tə hˈɛlp ˈɔːɾəmˌeɪt sˌʌm ʌv ðɪs ænd dʒˈɛnɚɹˌeɪt ɹˈoʊz. bˌʌt aɪ θˈɪŋk ðə mˈænjuːəl tˈʌtʃ dˈʌz ɐlˈaʊ juː ɐ bˈɛɾɚɹ ˌʌndɚstˈændɪŋ, wˌɪtʃ wɪl ɐlˈaʊ juː tə ɡɛt pɚfˈoːɹməns fˈæstɚ. hˈɪɹ, juː kæn tʃˈɛk ˈaʊt ðə fˈʌŋkʃən kˈɔːlɪŋ vˈiː θɹˈiː bɹˈæntʃ ænd ˈɔːlsoʊ ðɪ ʌnslˈɑːt ænd mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃᵻz ʌvðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ. tˈɪp nˈʌmbɚ fˈaɪv ɪz tə stˈɑːɹt ˈɔf tɹˈeɪnɪŋ wɪð ɐ smˈɔːl nˈʌmbɚɹ ʌv ɹˈoʊz. ɪn fˈækt, aɪ ˈɔːlweɪz ɹˈʌn tɹˈeɪnɪŋ fˈɜːst wɪð dʒˈʌst wˈʌn ɹˈoʊ ʌv dˈeɪɾə tə tʃˈɛk ðæt maɪ tɹˈeɪnɪŋ pˈaɪplaɪn ɪz wˈɜːkɪŋ kɚɹˈɛktli ænd aɪ dˈoʊnt ɹˈʌn ˌaʊɾəv mˈɛmɚɹi. ðˈɛn aɪl mˈuːv tə tɹˈeɪnɪŋ ˌɔn wˈʌnhˈʌndɹɪd ɹˈoʊz, ðˈɛn wˈʌn,zˈiəɹoʊzˈiəɹoʊ zˈiəɹoʊ. ænd aɪm tʃˈɛkɪŋ ˈɔːl ðə tˈaɪm wˈɛðɚ maɪ pɚfˈoːɹməns ɪz ˈæktʃuːəli ɪmpɹˈuːvɪŋ ɔːɹ wˈɛðɚ dʒˈʌst maɪ dˈeɪɾəsˌɛt dɪzˈaɪn ɪz kəmplˈiːtli ˈɔf. |1
6
+ trelis_voice1_5.wav|ɪf juː dˈuː wˈɔnt tʊ ˈɔːɾəmˌeɪt ɐ lˈɪɾəl mˈoːɹ hˌaʊ juː dʒˈɛnɚɹˌeɪt sɪnθˈɛɾɪk dˈeɪɾə sˈɛts, juː kæn tʃˈɛk ˈaʊt ðɪs vˈɪdɪoʊ hˈɪɹ ˌɔn dˈeɪɾə sˈɛt pɹˌɛpɚɹˈeɪʃən wɪð ˌɛlˌɛlˈɛm. tˈɪp nˈʌmbɚ sˈɪks ɪz ˈɔːlweɪz jˈuːs ɐ vˌælɪdˈeɪʃən dˈeɪɾə sˈɛt. ɪf juː dˈoʊnt hˈæv wˌʌn, juː kæn dʒˈʌst splˈɪt ˈɔf tˈɛn tə twˈɛnti pɚsˈɛnt ʌv jʊɹ tɹˈeɪnɪŋ dˈeɪɾə sˈɛt. juː wˈɔnt təbi tʃˈɛkɪŋ jʊɹ tɹˈeɪnɪŋ lˈɔs æz juː pɹəɡɹˈɛs ɐlˈɔŋ ðə pɹˈɑːsɛs. mˌeɪk ʃˈʊɹ ɪts nˌɑːt tˈuː bˈʌmpi ænd jʊɹ lˈɜːnɪŋ ɹˈeɪt ɪz nˌɑːt tˈuː hˈaɪ ɔːɹ jʊɹ bˈætʃ sˈaɪz ɔːɹ vˈɜːtʃuːəl bˈætʃ sˈaɪz ɪz tˈuː smˈɔːl. juː ˈɔːlsoʊ wˈɔnt tə tʃˈɛk jʊɹ vˌælɪdˈeɪʃən lˈɔs, ænd ðɪs ʃˌʊd biː mənətˈɑːnɪkli dˈiːkɹiːsɪŋ ɪn ɐ smˈuːð wˈeɪ. ɪf ɪts ˈɛvɚɹ ˈʌptɪkɪŋ, ðæt mˈiːnz juː mˌaɪt biː ˌoʊvɚfˈɪɾɪŋ ænd jʊɹ tɹˈeɪnɪŋ fɔːɹ tˈuː mɛni ˈɛpɑːkz, ɔːɹ juː mˈeɪ nˌɑːɾɐv ɪnˈʌf dˈeɪɾə. |1
7
+ trelis_voice1_6.wav|aɪ θˈɪŋk jʊɹ bˈɛɾɚɹ ˈɔf tə dʒˈʌst fˈɪt ɪɾ ˌɔn wˈʌn dʒˌiːpˌiːjˈuː, bɪkˈʌz wɛn juː mˈuːv tə mˈʌltaɪ dʒˌiːpˌiːjˈuː, juː hæv dˈeɪɾə ðæts mˈuːvɪŋ bᵻtwˈiːn ðˌɛm, ðə tɹˈeɪnɪŋ bɪkˌʌmz mˈoːɹ kˈɑːmplᵻkˌeɪɾᵻd, ɪts ˈiːziɚ tə mˌeɪk mɪstˈeɪks, ænd ɪt kæn biː slˈoʊɚɹ ɪn sˌʌm wˈeɪz. ˈɔːlsoʊ, ˌɔn wˈʌn dʒˌiːpˌiːjˈuː, juː kæn jˈuːz ʌnslˈɑːθ, wˌɪtʃ ɡˈɪvz juː ɐ tˈuː ˈɛks spˈiːd ˈʌp. sˌoʊ ðæts kwˈaɪt bˌɛnɪfˈɪʃəl ɪf juː kæn dʒˈʌst fˈoʊkəs ˌɔn kˈiːpɪŋ θˈɪŋz sˈɪmpəl, ʌntˈɪl juːv æt lˈiːst ɡɑːt ɐ tɹˈeɪnɪŋ ɐpɹˈoʊtʃ ðæts wˈɜːkɪŋ wˈɛl, ænd jʊɹ hˈæpi tə ðˈɛn spˈɛnd ðə tˈaɪm ænd mˈʌni tə skˈeɪl ˈʌp. sˈʌmθɪŋ aɪ ʃˌʊd mˈɛnʃən æz wˈɛl ɪz ðæt juː kæn wˈeɪst ɐ lˈɑːt ʌv tˈaɪm wɪð ˌɪnstəlˈeɪʃənz ænd ɡˌɛɾɪŋ stˈʌk ɪn ɡˌɛɾɪŋ sˈɛt ˌʌp fɔːɹ fˈaɪn tˈuːnɪŋ. |1
8
+ trelis_voice1_7.wav|wˈʌn wˈeɪ ɚɹˈaʊnd ðæt ɪz tə jˈuːz ɐn ˈɪmɪdʒ ɔːɹ ɐ tˈɛmplət ðæt sˈɛts ˌʌp jʊɹ kjˈuːdə ænd pˈaɪ tˈɔːɹtʃ tʊ ɐ spəsˈɪfɪk vˈɜːʒən. aɪv ɡɑːt ɐ wˈʌŋklˈɪk tˈɛmplət hˈɪɹ fɔːɹ ɹˈʌn pˈɑːd, ænd juː kæn jˈuːz ðæt tə kənsˈɪstəntli hæv ðə sˈeɪm ɛnvˈaɪɹənmənt ˌɔn wˌɪtʃ tʊ ɪnstˈɔːl ðə fˈaɪnəl pˈækɪdʒᵻz juː nˈiːd fɔːɹ fˈaɪn tˈuːnɪŋ. tˈɪp nˈʌmbɚɹ ˈeɪt ɪz tə jˈuːz wˈeɪts ænd bˈaɪəsᵻz. ðɪs ɪz ɐ tˈuːl ðæt ɐlˈaʊz juː tə tɹˈæk ðə lˈɔsᵻz ænd ðə ɹᵻwˈɔːɹdz æz juː mˈuːv θɹuː jʊɹ tɹˈeɪnɪŋ ɹˈʌn. juː kæn ɪŋklˈuːd ðɪs ɪn ɐ skɹˈɪpt wɪð pˈɪp ɪnstˈɔːl wˈændˌiːbiː, ðˈɛn sˈɛt ðɪ ɛnvˈaɪɹənmənt vˈɛɹɪəbəl fɔːɹ wˈændˌiːbiː pɹˈɑːdʒɛkt tʊ ɐ pɹˈɑːdʒɛkt nˈeɪm. ænd ðɪs wɪl kɹiːˈeɪt ɐ fˈoʊldɚ bˈeɪsɪkli wɪðˌɪn wˌɪtʃ juː kæn hæv mˌʌltɪpəl ɹˈʌnz ʌv ɹˈʌn nˈeɪm. |1
9
+ trelis_voice1_8.wav|ænd ðə wˈeɪ juː sˈɛt ðə ɹˈʌn nˈeɪm ɪz ɪnðə tɹˈeɪnɪŋ ˈɑːɹɡjuːmənts baɪ pˈæsɪŋ ɪnðə ɹˈʌn nˈeɪm. hˈɪɹ juː wʊd sˈɛt ðə ɹˈʌn nˈeɪm lˈaɪk wˈʌn ˈɛpɑːk ænd kˈɑːnstənt skˈɛdʒuːlɚ ɔːɹ wʌtˈɛvɚ juː wˈɔnt tə kˈɔːl ɪt. ænd juː ˈɔːlsoʊ nˈiːd tə sˈɛt ˌʌp ɹᵻpˈoːɹt tə wˈɔnd bˈiː wˈeɪts ænd bˈaɪəsᵻz. ðɪs ɪz səpˈoːɹɾᵻd ɪnðɪ ˈɑːnslɔːt ænd ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃᵻz ænd ˈɔːlsoʊ ɪn mˈɛnɪəv ðə dʒˈʌpaɪɾɚ nˈoʊtbʊks ðæt ɑːɹ θɹuːˈaʊt ˈɔːl ðə bɹˈæntʃᵻz ʌvðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ. bᵻfˌoːɹ aɪ mˈuːv tə tˈɪps ˈeɪt ænd nˈaɪn, aɪ wˈɔnt tə kˈɑːmɛnt ˌɔn skˈeɪlɪŋ ˈʌp. sˌoʊ aɪv tˈɔːkt ɐbˌaʊt stˈɑːɹɾɪŋ wɪð ɐ lˈoʊ nˈʌmbɚɹ ʌv ɹˈoʊz, stˈɑːɹɾɪŋ wɪð lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː, ænd stˈɑːɹɾɪŋ wɪð ɐ smˈɔːl mˈɑːdəl. wˈɛl, hˈɪɹz ðɪ ˈɔːɹdɚ juː wˈɔnt tə skˈeɪl ˌʌp ˈɪn. |1
10
+ trelis_voice1_9.wav|stˈɑːɹt baɪ ɪŋkɹˈiːsɪŋ ðə ɹˈoʊz ʌv dˈeɪɾə ˌɔn ɐ smˈɔːl mˈɑːdəl, ðˈɛn mˈuːv kjˈuː lˈoʊ ɹˈɑː tə lˈoʊ ɹˈɑː. ɪf juː ɹˈiəli wˈɔnt tə tɹˈaɪ fˈʊl fˈaɪntˈuːnɪŋ, tˈɛst ɪɾ ˈaʊt ˌɔn ɐ smˈɔːl mˈɑːdəl ænd sˈiː ɪf ɪt ɹˈiəli ɪmpɹˈuːvz pɚfˈoːɹməns. ðˈɛn, æz ɐ vˈɛɹi lˈæst stˈɛp, juː kæn θˈɪŋk ɐbˌaʊt mˈuːvɪŋ tʊ ɐ lˈɑːɹdʒɚ mˈɑːdəl wˌɛɹ ɪts ɡˌoʊɪŋ tə tˈeɪk mˈoːɹ tˈaɪm ænd mˈʌni tə ɡɛt ɪn ðæt fˈaɪnəl ɹɪzˈʌlt. ðɛɹˌɑːɹ tˈuː vˈɪdɪoʊz ʌv ɹˈɛlᵻvəns hˈɪɹ. ɪf juː wˈɔnt tʊ ˌʌndɚstˈænd ðə pɹˈoʊz ænd kˈɑːnz ʌv fˈʊl fˈaɪntˈuːnɪŋ vˈɜːsᵻz kjˈuːlˈoːɹə ɔːɹ lˈoʊ ɹˈɑː, tˈeɪk ɐ lˈʊk æt ðɪs vˈɪdɪoʊ. ænd ɪf juː wˈɔnt tʊ ˌʌndɚstˈænd ðə kəmplˈɛksᵻɾiz ʌv dˌuːɪŋ mˈʌltaɪdʒˌiːpˌiːjˈuː tɹˈeɪnɪŋ, tʃˈɛk ˈaʊt mˈʌltaɪdʒˌiːpˌiːjˈuː fˈaɪntˈuːnɪŋ. mˈuːvɪŋ tə tˈuː lˈæst tˈɪps. |1
11
+ trelis_voice1_10.wav|tˈɪp nˈʌmbɚ nˈaɪn ɪz tə jˈuːz ʌnsˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ. ðɪs kæn biː jˈuːsfəl ɪf juː hæv ɐ lˈɑːɹdʒ dˈeɪɾə sˈɛt. aɪm ɡˌoʊɪŋ tə sˈeɪ lˈɑːɹdʒɚ ðɐn tˈɛn,zˈiəɹoʊzˈiəɹoʊ zˈiəɹoʊ ɹˈoʊz ʌv dˈeɪɾə. hˈɪɹ, juːl nˈiːd tə jˈuːz pˈaɪθən skɹˈɪpts ɪn ˈɔːɹdɚ tə klˈiːn ˈʌp, sˈeɪ, ɹᵻpˈiːɾᵻd kˈæɹɪktɚz ɔːɹ tˈuː mʌtʃ nˈuː lˈaɪnz. juː kæn ˈɔːlsoʊ jˈuːs lˈæŋɡwɪdʒ mˈɑːdəlz ɪn ˈɔːɹdɚ tə klˈiːn ˌʌp ðə dˈeɪɾə sˈɛt tʃˈʌŋk baɪ tʃˈʌŋk. ðə vˈɪdɪoʊ ʌv ɹˈɛlᵻvəns hˈɪɹ ɪz ðə wˌɪkipˈiːdiə vˈɪdɪoʊ aɪ mˈeɪd, wˌɛɹ aɪ fˈɜːst ˈɛkstɹækt dˈeɪɾə fɹʌm wˌɪkipˈiːdiə, klˈiːn ɪt, ænd ðˈɛn jˈuːz ɪt fɔːɹ fˈaɪntˈuːnɪŋ. lˈæst ʌv ˈɔːl, maɪ tˈɪp nˈʌmbɚ tˈɛn ɪz tə dˈuː pɹˈɛfɹəns fˈaɪntˈuːnɪŋ. ðɪs ɪz wˌɛɹ juː hæv ɐ dˈeɪɾə sˈɛt wɪð tʃˈoʊzən, wˌɪtʃ ɑːɹ bˈɛɾɚ ɔːɹ pɹɪfˈɜːd ɹᵻspˈɑːnsᵻz, ænd ɹᵻdʒˈɛktᵻd, wˌɪtʃ ɑːɹ ðə ɹᵻspˈɑːnsᵻz tə ðə sˈeɪm pɹˈɑːmpts bˌʌt ɑːɹ ʌv lˈoʊɚ kwˈɔlᵻɾi. |1
12
+ trelis_voice1_11.wav|ðə pɹˈɛfɹəns fˈaɪntˈuːnɪŋ wɪl mˈuːv jʊɹ mˈɑːdəl tə ɡˈɪv ɹᵻspˈɑːnsᵻz mˈoːɹ lˈaɪk jʊɹ tʃˈoʊzən ˈænsɚz ænd lˈɛs lˈaɪk jʊɹ ɹᵻdʒˈɛktᵻd ˈænsɚz, wˌɪtʃ ɪz jˈuːsfəl ɪf juː wˈɔnt tə dˈuː sˌʌm fˈaɪntˈuːnɪŋ fɔːɹ tˈoʊn ɔːɹ stˈaɪl, ɔːɹ ɪf juː wˈɔnt tə mˌeɪk sˌʌm kɚɹˈɛkʃənz wˌɛɹ ðə mˈɑːdəlz ɡˈɪvɪŋ ɐ ɹᵻspˈɑːns juː dˈoʊnt kwˈaɪt lˈaɪk. hˈɪɹ aɪ ɹˌɛkəmˈɛnd ðɪ ˈɔːɹpoʊ juː tˈuːb vˈɪdɪoʊ, ænd ðɛɹz ˈɔːlsoʊ ɐ bɹˈæntʃ baɪ ðæt nˈeɪm ɪn ɐdvˈænst fˈaɪn tˈuːnɪŋ. ˈɔːɹpoʊ ɪz ˈɔːlsoʊ səpˈoːɹɾᵻd ɪnðɪ ʌnslˈɑːt bɹˈæntʃ, wˌɛɹ ðɛɹz ɐ pˈaɪθən dʒˈʌpaɪɾɚ nˈoʊtbʊk ænd ˈɔːlsoʊ dʒˈʌst ɐ pˈaɪθən.pˈaɪ skɹˈɪpt juː kæn ɹˈʌn. ænd ˈɔːɹpoʊ ɪz səpˈoːɹɾᵻd æz ɐn ˈɑːpʃən ɪnðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ tˈuː. sˌoʊ tə ɹᵻkˈæp ðiːz tˈɛn tˈɪps, stˈɑːɹt wɪð ɐ smˈɔːl mˈɑːdəl, jˈuːs lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː, nˌɑːt fˈʊl fˈaɪntˈuːnɪŋ. |1
13
+ trelis_voice1_12.wav|ˈɔːlweɪz kɹiːˈeɪt tˈɛn mˈænjuːəl tˈɛst kwˈɛstʃənz ɔːɹ mˈeɪbiː ɐ fjˈuːmˌoːɹ. ɹᵻmˈɛmbɚ ðæt mˈænjuːəl dˈeɪɾə sˈɛts ɑːɹ pɹˈɑːbəbli ˌʌndɚɹˈeɪɾᵻd. juː kæn ˈɔːlweɪz ɡɛt ɐ lˈɪɾəl bˈɪt ʌv hˈɛlp fɹʌm pˈaɪθən ɔːɹ fɹʌm tʃˈæt dʒˌiːpˌiːtˈiː. stˈɑːɹt tɹˈeɪnɪŋ ˌɔn ɐ smˈɔːl nˈʌmbɚɹ ʌv ɹˈoʊz, ˈiːvən dʒˈʌst wˈʌn ɹˈoʊ tə tˈɛst ðə pˈaɪplaɪn, bˌʌt ðˈɛn wˈʌnhˈʌndɹɪd, ænd mˌeɪk ʃˈʊɹ ɪts hˌævɪŋ ɐ ɡˈʊd ɪfˈɛkt bᵻfˌoːɹ juː dᵻsˈaɪd tə skˈeɪl ˈʌp. mˌeɪk ʃˈʊɹ juː nˈoʊ ðætðə dˈeɪɾə tˈaɪp ænd ðə dˈeɪɾə sˈɛt ðæt juːv sˈɛt ˌʌp ɪz ˈæktʃuːəli ðə ɹˈaɪt wˌʌn. nˈʌmbɚ sˈɪks, ˈɔːlweɪz jˈuːs ɐ vˌælɪdˈeɪʃən sˈɛt. dʒˈʌst splˈɪt wˈʌn ˈɔf fɹʌm ɐ tɹˈeɪnɪŋ sˈɛt ɪf juː dˈoʊnt hˈæv wˌʌn. nˈʌmbɚ sˈɛvən, tɹˈaɪ tə dʒˈʌst stˈɑːɹt tɹˈeɪnɪŋ ˌɔn wˈʌn dʒˌiːpˌiːjˈuː. nˈʌmbɚɹ ˈeɪt, jˈuːs wˈeɪts ænd bˈaɪəsᵻz fɔːɹ tɹˈækɪŋ. |1
14
+ trelis_voice1_13.wav|ænd wɛn jʊɹ skˈeɪlɪŋ fɹʌm smˈɔːl tə lˈɑːɹdʒ, ˈɪŋkɹiːs fˈɜːst ðə ɹˈoʊz, ðˈɛn mˈuːv tə jˈuːzɪŋ mˈoːɹ vɹˈæm wɪð lˈoʊ ɹˈɑː ɪnstˈɛd ʌv kjˈuː lˈoʊ ɹˈɑː ɔːɹ fˈʊl fˈaɪn tˈuːnɪŋ ɪnstˈɛd ʌv lˈoʊ ɹˈɑː. baɪ ðə wˈeɪ, ðɛɹz ɐ fˈæktɚɹ ʌv fˈoːɹ ɹˈʌfli ɪn vɹˈæm dˈɪfɹəns bᵻtwˌiːn ˈiːtʃ əv ðˈoʊz. sˌoʊ lˈoʊ ɹˈɑː ɪz ɐbˌaʊt fˈoːɹ tˈaɪmz kjˈuː lˈoʊ ɹˈɑː ænd fˈʊl fˈaɪn tˈuːnɪŋ ɪz ɐbˌaʊt fˈoːɹ tˈaɪmz. lˈoʊ ɹˈɑː, ɔːɹ ˈiːvən mˈoːɹ ɪn sˌʌm kˈeɪsᵻz. ænd lˈæst ʌv ˈɔːl, ˈɪŋkɹiːs tʊ ɐ lˈɑːɹdʒɚ mˈɑːdəl sˈaɪz ˈoʊnli æt ðə vˈɛɹi ˈɛnd ʌv jʊɹ tɹˈeɪnɪŋ pɹˈɑːsɛs wɛn juː θˈɪŋk juː hæv ɐ pˈaɪplaɪn ðæts wˈɜːkɪŋ wˈɛl. ðˈɛn fɔːɹ ɐdvˈænst tˈɪps, kənsˈɪdɚ dˌuːɪŋ ʌnsˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ ɪf juː hæv ɐ lˈɑːɹdʒ ɐmˈaʊnt ʌv dˈeɪɾə, ˈoʊnli ɪf juː hæv ɐ lˈɑːɹdʒ ɐmˈaʊnt ʌv dˈeɪɾə, aɪd sˈeɪ. |1
15
+ trelis_voice1_14.wav|ænd lˈæst ʌv ˈɔːl, juː kæn kənsˈɪdɚ pɹˈɛfɹəns fˈaɪntˈuːnɪŋ, ɪnwˌɪtʃ kˈeɪs aɪd ɹˌɛkəmˈɛnd jˈuːzɪŋ ˈɔːɹpəl, wˌɪtʃ wɪl dˈuː sˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ ænd ˈɑːdz ɹˈeɪʃɪˌoʊ pɹˈɛfɹəns ˌɑːptɪmᵻzˈeɪʃən. æt ðə sˈeɪm tˈaɪm. nˈaʊ, ðɪs ɐpɹˈoʊtʃ hˈɪɹ aɪv tˈɔːkt ɐbˌaʊt fɔːɹ lˈæŋɡwɪdʒ mˈɑːdəlz, bˌʌt ɪɾ ˈɔːlsoʊ wˈɜːks fɔːɹ vˈɪdɪoʊ ænd spˈiːtʃ ɔːɹ ˈɪmɪdʒᵻz, mˌʌltɪmˈoʊdəl mˈɑːdəlz. sˌoʊ juː kæn tʃˈɛk ˈaʊt ðɪs vˈɪdɪoʊ hˈɪɹ ˌɔn mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs ˈɪmɪdʒ, wˌɛɹ aɪ pɹɪpˈɛɹ ɐ dˈeɪɾə sˈɛt ænd bɹˈɪŋ ɪt θɹuː fˈaɪn tˈuːnɪŋ. ænd lˈaɪkwaɪz, fɔːɹ ðɪs spˈiːtʃ tə tˈɛkst mˈɑːdəl, wˌɛɹ aɪ pɹɪpˈɛɹ ɐ dˈeɪɾə sˈɛt ænd bɹˈɪŋ ɪt θɹuː fˈaɪn tˈuːnɪŋ. ðɛɹˌɑːɹ spəsˈɪfɪk ɹˈiːpoʊz fɔːɹ mˌʌltɪmˈoʊdəl. ðæts ðə vˈɪʒən ɹᵻpˈɑːzɪtˌoːɹi hˈɪɹ. ænd ðɛɹz ɐ ɹˈiːpoʊ fɔːɹ tɹænskɹˈɪpʃən. ænd ðɪs ˌɛlˌɛlˈɛm ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ aɪv bˌɪn tˈɔːkɪŋ tə dˈeɪt ɪn ɔːɹ ˌʌp ʌntˈɪl nˈaʊ ɪn ðɪs pɹˌɛzəntˈeɪʃən. |1
16
+ trelis_voice1_15.wav|aɪv lˈeɪd ˈaʊt hˈɪɹ ˈɔːl ʌvðə plˈeɪlɪsts ðæt ɑːɹ ɹˈɛlᵻvənt dᵻpˈɛndɪŋ ˌɔn wʌt juː nˈiːd. sˌoʊ ðɛɹˌɑːɹ fˈoːɹ dˈɪfɹənt sˈɛkʃənz, fˈoːɹ plˈeɪlɪsts ænd fˈoːɹ ɹᵻpˈɑːzɪtˌoːɹiz ðæt ɡˈoʊ wɪð ðˌɛm. ðɛɹz ðɪ ˌɛlˌɛlˈɛm fˈaɪn tˈuːnɪŋ plˈeɪlɪst, wˌɪtʃ ɪz ˈɔːl ɐbˌaʊt fˈaɪn tˈuːnɪŋ lˈæŋɡwɪdʒ mˈɑːdəlz. ðˈɛn ðɛɹz ɐ ɹˈiːpoʊ fɔːɹ ðæt ɐdvˈænst fˈaɪn tˈuːnɪŋ. ðɛɹz ðə vˈɪʒən plˈeɪlɪst, wˌɪtʃ ɪz fɔːɹ mˌʌltɪmˈoʊdəl mˈɑːdəlz ænd ɹˈiːpoʊ lˈɪŋk. ðɛɹz ɐ vˈɪdɪoʊ ˌɔn tɹænskɹˈɪpʃən ænd ɐ ɹˈiːpoʊ lˈɪŋk. ænd ðˈɛn ðɛɹˌɑːɹ mˈɛni vˈɪdɪoʊz ˌɔn sˈɜːvɚ sˈɛɾʌp. ðæts ɪf juː wˈɔnt tə dᵻplˈɔɪ jʊɹ ˈoʊn kˈʌstəm mˈɑːdəl, ˈiːðɚɹ ˌɔn ɐ sˈɜːvɚ ðæt wɪl slˈiːp ɔːɹ stˈɑːɹt ˌʌp wɛn juː nˈiːd ɪt tə ɹˈʌn, ðæts kˈɔːld sˈɜːvɚləs, ɔːɹ ɐ sˈɜːvɚ ðæts ˈɔːlweɪz ˌɔn ɪf jʊɹ jˈuːzɪŋ sˈʌmθɪŋ lˈaɪk tˌiːdʒˌiːˈaɪ ɔːɹ vˌiːˌɛlˌɛlˈɛm θɹuː ɐ sˈɜːvɪs lˈaɪk ɹˈʌn pˈɑːd ɔːɹ vˈæst ˌeɪˈaɪ. |1
+ trelis_voice1_16.wav|ænd sˌoʊ hˈɪɹ ɪz ðə lˈɪŋk fɔːɹ ðˈɪs. aɪl nˈoʊt æz wˈɛl ðæt wɪðˌɪn ðɪs ɹˈiːpoʊ, ðɛɹz sˌʌm skɹˈɪpts ðæt ɐlˈaʊ juː tə ɹᵻdˈækt ˌɪnfɚmˈeɪʃən, pˈɜːsənəli aɪdˈɛntɪfˌaɪəbəl ˌɪnfɚmˈeɪʃən lˈaɪk nˈeɪmz, ˈiːmeɪlz, ɔːɹ kɹˈɛdɪt kˈɑːɹd nˈʌmbɚz bᵻfˌoːɹ juː sˈɛnd ðə dˈeɪɾə tʊ ɐ θˈɜːdpˈɑːɹɾi ˌɛlˌɛlˈɛm. ænd ðɛɹˌɑːɹ ˈɔːlsoʊ skɹˈɪpts ˌɔn fˈʌŋkʃən kˈɔːlɪŋ ˈɪnfɚɹəns ænd spˈiːd tˈɛst tˈuː. aɪl tˈɔːk ɐ lˈɪɾəl mˈoːɹ ɐbˌaʊt ðoʊz dʒˈʌst æt ðɪ ˈɛnd ʌv ðɪs vˈɪdɪoʊ. lˈæst ʌv ˈɔːl, ðiːz ɹˈiːpoʊz, ʌvwˈɪtʃ ðɛɹˌɑːɹ fˈoːɹ, ðeɪɚɹ ɐvˈeɪləbəl fɔːɹ pˈɜːtʃɪs ˌɪndᵻvˈɪdʒuːəli, bˌʌt juː kæn ˈɔːlsoʊ nˈaʊ bˈaɪ ɐ ɹˈiːpoʊ bˈʌndəl, wˌɪtʃ wɪl ɡˈɪv juː lˈaɪftaɪm ˈæksɛs tʊ ˈɔːl fˈoːɹ ʌv ðiːz ɹᵻpˈɑːzɪtˌoːɹiz, wˌɪtʃ ɪŋklˈuːdz ˌɛni fjˈuːtʃɚɹ ˈʌpdeɪts mˌeɪd tə ðoʊz ɹˈiːpoʊz. juː kæn pˈɜːtʃɪs ðæt ˈɔːl təɡˌɛðɚ nˈaʊ æz ɐ bˈʌndəl. |1
+ trelis_voice1_17.wav|ðɪs vˈɛɹi lˈæst sˈɛkʃən ʌvðə vˈɪdɪoʊ ɪz fɔːɹ ðoʊz hˌuː hæv pˈɜːtʃɪst lˈaɪftaɪm ˈæksɛs tə wˈʌn ʌvðə tɹˈɛliz ɹᵻpˈɑːzɪtˌoːɹiz, bˌʌt aɪl dʒˈʌst pˌʊt ɪt pˈɑːɹt ʌv ðɪs pˈʌblɪk vˈɪdɪoʊ bɪkˈʌz ɪt wɪl ɡˈɪv ɐ sˈɛns ʌv wʌts ɪn ðiːz ɹᵻpˈɑːzɪtˌoːɹiz fɔːɹ ðoʊz ʌv juː hˌuː mˌaɪt biː ˈɪntɹɛstᵻd tə pˈɜːtʃɪs lˈaɪftaɪm mˈɛmbɚʃˌɪp lˈeɪɾɚ. ðə fˈɜːst ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ, ænd ðɪs ɪz splˈɪt ˌɪntʊ bɹˈæntʃᵻz ɐkˈoːɹdɪŋ tə fˈʌŋkʃən. ðeɪ ɑːɹ ˈɔːl lˈɪstᵻd hˈɪɹ ɹˈʌfli ɪnðɪ ˈɔːɹdɚ ðæt ðeɪ hɐvbɪn ɹᵻlˈiːst. nˈaʊ, ɐ fjˈuː ʌvðə bɹˈæntʃᵻz ðæt aɪl hˈaɪlaɪt ɑːɹ, fˈɜːst ʌv ˈɔːl, ðə wˌɪkipˈiːdiə bɹˈæntʃ, wˌɪtʃ ɪz fɔːɹ ʌnsˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ ænd dˈeɪɾə klˈiːnɪŋ. ɪf juː dˈuː wˈɔnt tə dˈuː ˈɔːɹpoʊ, juː hæv ðɪ ˈɔːɹpoʊ bɹˈæntʃ hˈɪɹ. ænd ɪf juː wˈɔnt tə pɹɪpˈɛɹ dˈeɪɾə, juː kæn dˈuː sˌoʊ wɪððə hˈɛlp əvə lˈæŋɡwɪdʒ mˈɑːdəl. |1
+ trelis_voice1_18.wav|ðɪs ɪz dˈʌn ɪnðə mˌɛmɚɹᵻzˈeɪʃən bɹˈæntʃ, wˌɛɹ juː kæn sˈɛt ˌʌp sˌʌm dˈeɪɾə dʒˌɛnɚɹˈeɪʃən bˈeɪst ˌɔn pˌiːdˌiːˈɛf kˈɑːntɛnt. ænd lˈaɪkwaɪz, ɪf juː ɡˌoʊ tə ðə sˈuːpɚvˌaɪzd fˈaɪn tˈuːnɪŋ bɹˈæntʃ, ðɛɹ ɪz ˈɔːlsoʊ ɐ skɹˈɪpt ɔːɹ mˌʌltɪpəl skɹˈɪpts fɔːɹ dʒˈɛnɚɹˌeɪɾɪŋ kjˈuː ænd ɐ dˈeɪɾə fɹʌm ɐ bˈeɪs dˈeɪɾə sˈɛt ɹˈaɪt ðˈɛɹ. ðˈɛn ðɛɹˌɑːɹ tˈuː ɪmpˈoːɹtənt bɹˈæntʃᵻz hˈɪɹ, ʌnslˈɑːθ ænd mˈʌltaɪdʒˌiːpˌiːjˈuː. ðɪ ʌnslˈɑːθ bɹˈæntʃ ɐlˈaʊz juː tə ɹˈʌn fˈaɪn tˈuːnɪŋ ɪn ˈiːðɚɹ ɐ nˈoʊtbʊk ɔːɹ æz ɐ pˈaɪθən skɹˈɪpt. wˈɛɹæz ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ ɐlˈaʊz juː tə ɹˈʌn pˈaɪθən skɹˈɪpts ðæt wɪl dᵻplˈɔɪ mˈʌltaɪdʒˌiːpˌiːjˈuː tɹˈeɪnɪŋ ðæts fˈʊli ʃˈɛɹd dˈeɪɾə pˈæɹəlˌɛl ɔːɹ dˈɪstɹɪbjˌuːɾᵻd dˈeɪɾə pˈæɹəlˌɛl. nˈaʊ aɪl bɹˈiːfli ʃˈoʊ juː ˈiːtʃ əv ðoʊz tˈuː mˈeɪn bɹˈæntʃᵻz. sˌoʊ hˈɪɹ wiːl ɡˌoʊ ˌɪntʊ ðɪ ʌnslˈɑːθ bɹˈæntʃ. |1
+ trelis_voice1_19.wav|ðə wˈeɪ ðæt juː ɹˈʌn tɹˈeɪnɪŋ ɪn ðɪs ʌnslˈɑːt bɹˈæntʃ ɪz baɪ sˈɛɾɪŋ ˌʌp ðə kənfˌɪɡjɚɹˈeɪʃən ɪn ɐ kənfˈɪɡ fˈaɪl. aɪv ˈɔːlsoʊ ɡɑːt ɐ kənfˈɪɡ fˈaɪl ðæt juː kæn jˈuːz hˈɪɹ ɪf juː wˈɔnt tə dˈuː sˌʌm fˈʌŋkʃən kˈɔːlɪŋ fˈaɪn tˈuːnɪŋ. ænd wˈʌns juː hæv jʊɹ kənfˌɪɡjɚɹˈeɪʃən sˈɛt ˈʌp, juː kæn ɹˈʌn ðə tˈɛst.pˈaɪ ɪn ˈɔːɹdɚ tə ɹˈʌn ɐ sˈɛt ʌv tˈɛst kwˈɛstʃənz ðæt juːv mˈænjuːəli dʒˈɛnɚɹˌeɪɾᵻd, ɔːɹ juː kæn ɹˈʌn kwˈɛstʃənz fɹʌm vˌælɪdˈeɪʃən ɔːɹ tˈɛst splˈɪt əvə dˈeɪɾə sˈɛt. ðˈɛn wɛn juː wˈɔnt tə tɹˈeɪn jʊɹ mˈɑːdəl, juː sˈɪmpli ɹˈʌn tɹˈeɪn.pˈaɪ, ɔːɹ juː kæn ɹˈʌn ɪt stˈɛp baɪ stˈɛp ɪn ɐ pˈaɪθən dʒˈʌpaɪɾɚ nˈoʊtbʊk. nˈaʊ, ðə nˈoʊtbʊk ɪz ɹˌɛkəmˈɛndᵻd ɪf juː wˈɔnt tə ɡˌoʊ θɹuː ðə tɹˈeɪnɪŋ ðə fˈɜːst tˈaɪm, juː kæn sˈiː stˈɛp baɪ stˈɛp wʌts hˈæpənɪŋ ænd ˈiːzili pɹˈɪnt ˈaʊt θˈɪŋz æɾ ˌɪntɚmˈiːdiət pˈɔɪnts. |1
+ trelis_voice1_20.wav|bˌʌt wɛn juːv ɡɑːt jʊɹ skɹˈɪpt hˈoʊnd, ɪt kæn biː ɐ lˈɑːt fˈæstɚ tə ɹˈʌn ɐ pˈaɪθən skɹˈɪpt. ænd ðæts wˌaɪ aɪ hæv mˌeɪd ðɪs skɹˈɪpt ɐvˈeɪləbəl, wˌɪtʃ juː dʒˈʌst ɹˈʌn fɹʌmðə kəmˈænd lˈaɪn ænd ɪt wɪl ɡˌoʊ θɹuː ˈɛvɹɪθˌɪŋ wɪðˌɪn ðə tɹˈeɪnɪŋ. dʒˈʌst tə ɡˈɪv juː ɐ sˈɛns ʌv hˌaʊ juː kənfˈɪɡjɚ ðə tɹˈeɪnɪŋ ænd tˈɛst sˈɛɾʌp, juːl sˈɛt ɐ mˈɑːdəl slˈʌɡ. juː wɪl ðˈɛn sˈɛt sˌʌm pɚɹˈæmɪɾɚz, lˈaɪk wˈɛðɚ juː wˈɔnt tə fˈaɪn tˈuːn ɪn fˈoːɹbˈɪt, wˌʌt dˈeɪɾə tˈaɪp juː wˈɔnt tə jˈuːz, dᵻpˈɛndɪŋ ˌɔn jʊɹ dʒˌiːpˌiːjˈuː. juː kæn ðˈɛn tʃˈuːz ɐ dˈeɪɾə sˈɛt, sˈeɪ fɔːɹ fˈʌŋkʃən kˈɔːlɪŋ, ɔːɹ ɪf juː wˈɔnt tə mˈɛmɚɹˌaɪz sˌʌm dˈeɪɾə, lˈaɪk ɔnðə ɹˈuːlz ʌv tˈʌtʃ ɹˈʌɡbi. hˈɪɹ, juː kæn sˈɛt ˌʌp jʊɹ tˈɛstɪŋ. |1
+ trelis_voice1_21.wav|juː kæn dᵻsˈaɪd tə tˈɛst ˈiːðɚ fɹʌm ɐ sˈɛt ʌv mˈɛsɪdʒᵻz ðæt juː hæv pɹɪpˈɛɹd mˈænjuːəli, ɔːɹ juː kæn jˈuːz ðə tɹˈeɪnɪŋ, ɔːɹ juː kæn jˈuːz ðə vˌælɪdˈeɪʃən splˈɪt əvə tˈɛst sˈɛt ðæts ˌɔn hˈʌɡɪŋ fˈeɪs baɪ sˈɛɾɪŋ jˈuːs dˈeɪɾə sˈɛt tə tˈɛst ˈiːkwəl tə tɹˈuː ɹˈaɪt hˈɪɹ. nˈɛkst, juː sˈɛt ˌʌp jʊɹ tɹˈeɪnɪŋ ænd vˌælɪdˈeɪʃən splˈɪts. hˈɪɹ aɪv sᵻlˈɛktᵻd ɐ mˈeɪn bɹˈæntʃ fɔːɹ tɹˈeɪnɪŋ, ænd aɪv sᵻlˈɛktᵻd ðə tɹˈeɪnɪŋ splˈɪt. juː kæn fˈɪks ɐ mˈæks nˈʌmbɚɹ ʌv ɹˈoʊz hˈɪɹ. ðɪs wɪl sˈeɪv juː tˈaɪm ɪf juː dʒˈʌst wˈɔnt tə dˈaʊnloʊd ænd ɹˈʌn ˈɔn, sˈeɪ, wˈʌnhˈʌndɹɪd ɹˈoʊz ɪnstˈɛd ʌv ˌɔn ɐ mˈæsɪv dˈeɪɾəsˌɛt. nˈaʊ, aɪ spˈoʊk ˈɜːlɪɚɹ ɐbˌaʊt dʒˈɛnɚɹˌeɪɾɪŋ ɐ vˌælɪdˈeɪʃən sˈɛt. juː kæn ˈiːðɚ dˈaʊnloʊd fɹʌm ɐ splˈɪt ðæts ˌɔn hˈʌɡɪŋ fˈeɪs kˈɔːld tˈɛst ɔːɹ vˌælɪdˈeɪʃən, bˌʌt juː kæn ˈɔːlsoʊ dʒˈɛnɚɹˌeɪt ɐ vˌælɪdˈeɪʃən splˈɪt fɹʌmðə tɹˈeɪn splˈɪt. |1
+ trelis_voice1_22.wav|ɪf juː dʒˈʌst sˈɛt ðɪs tə tɹˈuː, ɪt wɪl siːkwˈɛstɚ twˈɛnti pɚsˈɛnt ʌvðə tɹˈeɪnɪŋ dˈeɪɾə tə jˈuːz æz vˌælɪdˈeɪʃən. nˈɛkst ˌʌp ɪz ðə lˈoʊ ɹˈɑː kənfˌɪɡjɚɹˈeɪʃən. juː hæv ˈɔːl ðə ɹˈɛɡjʊlɚ lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz juːl sˈiː hˈɪɹ. tʃˈɛk ˈaʊt ðə lˈaɪv stɹˈiːm vˈɪdɪoʊ ˌɔn tʃˈuːzɪŋ lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz ɪf juː wˈɔnt tə nˈoʊ mˈoːɹ. juː kæn sˈɛt lˈoʊ ɹˈɑː ɔːɹ lˈoʊ ɹˈɑː ˈælfə ænd ˈɔːlsoʊ ɹˈæŋk stˈeɪbɪlˌaɪz lˈoʊ ɹˈɑː, sˈɛt ðæt tə tɹˈuː ɔːɹ fˈɔls. hˈɪɹ juːv ɡɑːt sˌʌm wˈeɪts ænd bˈaɪəsᵻz pɹˈɑːdʒɛkt kənfˌɪɡjɚɹˈeɪʃənz. juː sˈɛt ðə pɹˈɑːdʒɛkt nˈeɪm, ænd ðˈɛn fɔːɹ ˈiːtʃ ɹˈʌn, juː kæn jˈuːz ɐ dˈɪfɹənt nˈeɪm hˈɪɹ fɔːɹ ɹˈʌnɪŋ ɪn wˈeɪts ænd bˈaɪəsᵻz. juː kæn sˈɛt ˌʌp jʊɹ hˈʌɡɪŋ fˈeɪs jˈuːzɚnˌeɪm. ðɪs wɪl biː jˈuːzd wɛn pˈʊʃɪŋ mˈɑːdəlz tə hˈʌb. |1
+ trelis_voice1_23.wav|nˈaʊ ðɛɹz ɐ mˈoːɹ ɐdvˈænst tɛknˈiːk hˈɪɹ wˌɛɹ juː kæn dᵻsˈaɪd tə tɹˈeɪn ˌɔn kəmplˈiːʃənz ˈoʊnli. ðɪs mˈiːnz ðæt juː wɪl ˈoʊnli biː kənsˈɪdɚɹɪŋ ðə lˈɔs ɔnðɪ ˈænsɚ pˈoːɹʃən, nˌɑːt ɔnðə pɹˈɑːmpt ɔːɹ kwˈɛstʃən pˈoːɹʃən. ænd ðɪs kæn biː jˈuːsfəl ɪf jʊɹ ˈænsɚz ɑːɹ kwˈaɪt ʃˈɔːɹt ænd juː dˈoʊnt wˈɔnt ðə lˈɔs ˌɔn ˈɔːl ʌvðə pɹˈɑːmpts tə kˈaɪnd ʌv kɹˈaʊd ˈaʊt ɔːɹ klˈaʊd ˈaʊt ðɪ ˌɪnfɚmˈeɪʃən ɔːɹ ðə sˈɪɡnəl ðæts kˈʌmɪŋ fɹʌm tɹˈeɪnɪŋ ɔnðə ɹᵻspˈɑːns ɔːɹ ðɪ ˈænsɚ. sˌoʊ juː sˈɛt ðə kəmplˈiːʃənz tə tɹˈuː hˈɪɹ. sˈʌmtaɪmz aɪ jˈuːz ðɪs fɔːɹ fˈʌŋkʃən kˈɔːlɪŋ, fˈaɪn tˈuːnɪŋ. ænd ðˈɛn juː nˈiːd tə lˈɛt ðə mˈɑːdəl nˈoʊ wˌɛɹ jʊɹ ˈænsɚɹ ɪz stˈɑːɹɾɪŋ. sˌoʊ ɪn ɐ lˈɑːmə θɹˈiː mˈɑːdəl, ðɪ ˈænsɚ wɪl stˈɑːɹt ˈæftɚɹ ɐsˈɪstənt ænd hˈɛdɚɹ aɪdˈiː. |1
+ trelis_voice1_24.wav|ɪn ɐ lˈɑːmə tˈuː mˈɑːdəl, ɪt wɪl stˈɑːɹt ˈæftɚɹ ˈɪnst. ænd ðˈɛn aɪ θˈɪŋk ðɪs ɪz mˈeɪbiː ɐ tʃˈæt ˌɛmˈɛl fˈɔːɹmæt. ðɪ ˈænsɚ wɪl stˈɑːɹt ˈæftɚɹ aɪɐm stˈɑːɹt ɐsˈɪstənt. sˌoʊ ðɪs ɐlˈaʊz ðə tɹˈeɪnɪŋ lˈuːp tə tʃˈɛk wɪðˌɪn jʊɹ pɹˈɑːmpt. ɪt wɪl tʃˈɛk fɔːɹ wˌɛɹ ðɪs stˈɑːɹt ʌvðɪ ɐsˈɪstəns ˈænsɚɹ ɪz, ænd ðˈɛn ɪt wɪl ˈoʊnli lˈʊk æt ðə lˈɔs ˈæftɚ ðæt pˈɔɪnt. ˈæftɚ ðˈɪs, ðɛɹˌɑːɹ sˌʌm stˈændɚd pɚɹˈæmɪɾɚz lˈaɪk sˈɛɾɪŋ ðə tɹˈeɪnɪŋ bˈætʃ sˈaɪz, ðə vˌælɪdˈeɪʃən bˈætʃ sˈaɪz, ðə ɡɹˈeɪdiənt ɐkjˌuːmjʊlˈeɪʃən, wˈɛðɚ juː wˈɔnt tə ɹˈʌn ɐ vˌælɪdˈeɪʃən sˈɛt ɔːɹ nˈɑːt. ðə nˈʌmbɚɹ ʌv ˈɛpɑːkz, ðə lˈɜːnɪŋ ɹˈeɪt, ɐn ˈaʊtpʊt dᵻɹˈɛktɚɹi fɔːɹ jʊɹ tɹˈeɪnɪŋ mˈɑːdəl ænd ɹɪzˈʌlts, wˈɛðɚ juː wˈɔnt tə tɹˈeɪn wɪð bɹˈeɪn flˈoʊt sˈɪkstiːn ɔːɹ nˈɑːt. juː kæn sˈɛt jʊɹ skˈɛdʒuːlɚ. |1
+ trelis_voice1_25.wav|juː kæn dᵻsˈaɪd wˈɛðɚ tə sˈeɪv ðə mˈɑːdəl æɾə sˈɜːʔn̩ nˈʌmbɚɹ ʌv stˈɛps ʌv tɹˈeɪnɪŋ. sˈɛt jʊɹ mˈæks sˈiːkwəns lˈɛŋθ, ɡɹˈeɪdiənt tʃˈɛkpɔɪntɪŋ, ænd wˈɛðɚ tə jˈuːz ɹˌiːˈɛntɹənsi, wˌɪtʃ ɐlˈaʊz juː tə spˈiːd ˌʌp ðə tɹˈeɪnɪŋ. nˈɛkst, juː kæn dᵻsˈaɪd wˈɛðɚ juː wˈɔnt tə jˈuːz ˈɔːɹpoʊ ɔːɹ nˈɑːt. baɪ dᵻfˈɔlt, aɪv ɡɑːt ðæt sˈɛt tə fˈɔls. ɪf jʊɹ jˈuːzɪŋ ˈɔːɹpoʊ, juː nˈiːd ɐ kˈɑːlʌm ðæts kˈɔːld tʃˈoʊzən ænd wˈʌn kˈɔːld ɹᵻdʒˈɛktᵻd. ænd juː kæn sˈɛt jʊɹ mˈæks pɹˈɑːmpt lˈɛŋθ. ænd ðˈɛn ðə bˈeɪɾə, ðə bˈeɪɾə bˈeɪsɪkli wˈeɪz hˌaʊ mˈʌtʃ ʌvðə pɹˈɛfɹəns fˈaɪntˈuːnɪŋ, wˌʌts ðɪ ɪmpˈoːɹtəns ʌv ðæt lˈɔs ɹˈɛlətˌɪv tə ðə stˈændɚd ˌɛsˌɛftˈiː lˈɔs. ɹᵻmˈɛmbɚ, ˈɔːɹpəl dˈʌz tˈuː θˈɪŋz ɪn wˌʌn. ɪt dˈʌz ˌɛsˌɛftˈiː ænd ɪt dˈʌz pɹˈɛfɹəns fˈaɪntˈuːnɪŋ ɪn wˌʌn. |1
+ trelis_voice1_26.wav|sˌoʊ ɪf juː hæv ðɪs æt zˈiəɹoʊ.tˈuː, ɪts kˈaɪnd ʌvðɪ ɪmpˈoːɹtəns ʌvðɪ ˈɑːdz ɹˈeɪʃɪˌoʊ ɪz ɐbˌaʊt zˈiəɹoʊ.tˈuː ɹˈɛlətˌɪv tə ðɪ ˌɛsˌɛftˈiː lˈɔs. lˈæst ʌv ˈɔːl, juː kæn pˈʊʃ tə hˈʌb, sˌoʊ juː kæn sˈɛt ɐ tˈɑːɹɡɪt mˈɑːdəl nˈeɪm ɪf juː wˈɔnt tə pˈʊʃ tə hˈʌb. sˌoʊ vˈɛɹi kwˈɪkli, ɪf wiː tˈeɪk ɐ lˈʊk æt ðə tˈɛst skɹˈɪpt, ðɪs wɪl sˈɪmpli lˈoʊd ðə mˈɑːdəl. sˌoʊ ɪt wɪl lˈoʊd ˈɔːl ʌv jʊɹ kənfˌɪɡjɚɹˈeɪʃənz. ɪt wɪl lˈoʊd ðə mˈɑːdəl hˈɪɹ, ɐ fˈæst lˈæŋɡwɪdʒ mˈɑːdəl jˈuːzɪŋ ʌnslˈɑːθ. ɪt wɪl sˈɛt ˌʌp ðə tˈoʊkənˌaɪzɚ, sˈɛt ˌʌp ðə tʃˈæt tˈɛmplət, lˈoʊd ðə dˈeɪɾəsˌɛt, ˈiːðɚ fɹʌm jʊɹ mˈænjuːəl dˈeɪɾə ðæts ɪnðə ɹˈiːpoʊ ɔːɹ fɹʌm hˈʌɡɪŋ fˈeɪs, ænd ðˈɛn ɪt wɪl ɹˈʌn ˈɪnfɚɹəns θɹuː ˈɔːl ʌv ðoʊz sˈæmpəlz ænd pɹˈɪnt ðə ɹɪzˈʌlts ˈaʊt tə fˈaɪl. |1
+ trelis_voice1_27.wav|dʒˈʌst æz ɐn ɛɡzˈæmpəl, aɪ kæn ʃˈoʊ juː wɪðˌɪn tˈɛst ˈaʊtpʊt, juːl sˈiː hˈɪɹ ɐ lˈɑːɹdʒ nˈʌmbɚɹ ʌv tˈɛsts ðæt aɪ hæv ɹˈʌn. aɪl tɹˈaɪ tə fˈaɪnd ɐ ɹˈiːsənt wˌʌn. sˌoʊ hˈɪɹ ɪz sˌʌm fˈaɪntˈuːnɪŋ ˌɔn tˈʌtʃ ɹˈʌɡbi, ænd juːl sˈiː ðɛɹ ɪz ɐ pɹˈɑːmpt, ɐ kwˈɛstʃən, ænd ˌɪɾəl pɹˈɪnt ˈaʊt ðə kɚɹˈɛkt ɹᵻspˈɑːns, ænd ˌɪɾəl ˈɔːlsoʊ pɹˈɪnt ˈaʊt ðə dʒˈɛnɚɹˌeɪɾᵻd ɹᵻspˈɑːns. ænd ðˈɛn juː kæn dʒˈʌst mˈænjuːəli kəmpˈɛɹ wˈɛðɚ ðiːz ˈænsɚz ɑːɹ ɡˈʊd ɔːɹ nˈɑːt. nˈaʊ, dʒˈʌst wˈʌn ˈʌðɚ skɹˈɪpt aɪl pˈɔɪnt ˈaʊt hˈɪɹ, wˌɪtʃ ɪz vjˈuː mˈɑːdʒuːlz. juː kæn dʒˈʌst ɹˈʌn pˈaɪθən vjˈuː mˈɑːdʒuːlz ɪf juː wˈɔnt tə sˈiː wʌt mˈɑːdʒuːlz ɑːɹ wɪðˌɪn ðə ɡˈɪvən mˈɑːdəl. ðɪs ɐlˈaʊz juː tə pˈɪk ˈaʊt wˌɪtʃ mˈɑːdʒuːlz juː mˌaɪt wˈɔnt tə fˈaɪn tˈuːn jˈuːzɪŋ lˈoʊ ɹˈɑː. |1
+ trelis_voice1_28.wav|ænd ðæts pɹˈɪɾi mˈʌtʃ ɪt fɚðɪ ʌnslˈɑːθ bɹˈæntʃ, wˌɪtʃ ɪz ɹˌɛkəmˈɛndᵻd ɪf jʊɹ ɡˌoʊɪŋ tə fˈaɪn tˈuːn ˌɔn wˈʌn dʒˌiːpˌiːjˈuː. ɪt dʌznˌɑːt wˈɜːk ɪf jʊɹ fˈaɪn tˈuːnɪŋ ˌɔn mˈʌltaɪ dʒˌiːpˌiːjˈuː, wˌɪtʃ ɪz wˌaɪ aɪ hæv ðə mˈʌltaɪ dʒˌiːpˌiːjˈuː bɹˈæntʃ ænd ðə mˈʌltaɪ dʒˌiːpˌiːjˈuː bɹˈæntʃ ɪz kənfˈɪɡɚd mˈʌtʃ ɪn ɐ sˈɪmɪlɚ wˈeɪ tʊ ʌnslˈɑːθ, ɛksˈɛpt ðˌɐɾɪt ɐlˈaʊz juː tə ɹˈʌn ɪn fˈʊli ʃˈɑːɹdᵻd dˈeɪɾə pˈæɹəlˌɛl ɔːɹ ɪn dˈɪstɹɪbjˌuːɾᵻd dˈeɪɾə pˈæɹəlˌɛl. ɪt hɐz ɐ kənfˈɪɡ fˈaɪl ðæt juː kæn sˈɛt ˈʌp. ɪt hɐz ðə tˈɛst.pˈaɪ ænd ðə tɹˈeɪn.pˈaɪ fˈaɪl ðæt wɪl ɐlˈaʊ juː tə ɹˈʌn tˈɛstɪŋ ænd tɹˈeɪnɪŋ. ænd aɪl dʒˈʌst bɹˈiːfli ʃˈoʊ juː ðə kənfˈɪɡ fˈaɪl. sˌoʊ æt ðə stˈɑːɹt hˈɪɹ, juːl sˈiː ðɪs pɚɹˈæmɪɾɚ ðæts nˌɑːt ɪnðɪ ʌnslˈɑːθ bɹˈæntʃ. ɪf juː sˈɛt ɪt tʊ ˈɔːɾoʊ, ɪt wɪl dʒˈʌst dˈuː stˈændɚd tɹˈeɪnɪŋ. |1
+ trelis_voice1_29.wav|juː kæn tɹˈeɪn ˌɔn mˌʌltɪpəl dʒˌiːpˌiːjˈuː, bˌʌt ɪt wɪl biː pˈaɪplaɪn pˈæɹəlˌɛl, sˌoʊ nˌɑːt kwˈaɪt æz ɪfˈɪʃənt. haʊˈɛvɚ, juː kæn sˈɛt ðɪs tə dˌiːdˌiːpˈiː fɔːɹ dˈɪstɹɪbjˌuːɾᵻd dˈeɪɾə pˈæɹəlˌɛl, ɔːɹ juː kæn sˈɛt ɪt tʊ ˌɛfˌɛsdˌiːpˈiː fɔːɹ fˈʊli ʃˈɑːɹdᵻd dˈeɪɾə pˈæɹəlˌɛl. nˈaʊ, wˌɛn jʊɹ dˌuːɪŋ ðˈæt, juːl nˈiːd tə kənfˈɪɡjɚ ðə mˈʌltaɪdʒˌiːpˌiːjˈuː sˈɛɾʌp. ðæt kæn biː dˈʌn baɪ ɹˈʌnɪŋ kənfˈɪɡ, ɐksˈɛlɚɹˌeɪt kənfˈɪɡ, ænd juːl sˈiː ðɪ ɪnstɹˈʌkʃənz ɪf juː hˈɛd ˌoʊvɚ tə ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ fɔːɹ dˌuːɪŋ ðˈæt. sˌoʊ ðɪs ɪz ðɪ ɐdvˈænst fˈaɪn tˈuːnɪŋ ɹˈiːpoʊ, ænd juː kæn fˈaɪnd ˈaʊt mˈoːɹ æt tɹˈaɪəlz.kˈɑːm fˈɔːɹwɚd slˈæʃ ɐdvˈænst dˈæʃ fˈaɪn dˈæʃ tˈuːnɪŋ. ðə nˈɛkst ɹˈiːpoʊ aɪl bɹˈiːfli ɡˌoʊ θɹuː ɪz ðɪ ɐdvˈænst vˈɪʒən ɹˈiːpoʊ. ðɪs dˈʌz mˈʌtʃ ʌvðə sˈeɪm, bˌʌt fɔːɹ mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs ˈɪmɪdʒ mˈɑːdəlz. |1
+ trelis_voice1_30.wav|ɪɾ ɐlˈaʊz juː tə pɹɪpˈɛɹ jʊɹ dˈeɪɾə ænd pˈʊʃ ɪɾ ˌʌp tə kɹiːˈeɪt ɐ hˈʌɡɪŋ fˈeɪs dˈeɪɾəsˌɛt. ðˈɛn juː kæn fˈaɪn tˈuːn lˈɑːvə, ˈaɪdə fˈɪks ænd, ɔːɹ ˈaɪdə fˈɪks ænd mˈuːndɹiːm mˈɑːdəlz. juː kæn dˈuː mˌʌltɪmˈoʊdəl sˈɜːvɚ sˈɛɾʌp wɪð tˈɛkst dʒˌɛnɚɹˈeɪʃən ˈɪnfɚɹəns. ðɛɹz ɐ wˈʌŋklˈɪk tˈɛmplət fɔːɹ ɹˈʌnɪŋ ɐn ˈaɪdə fˈɪks sˈɜːvɚ, ɪŋklˈuːdɪŋ ˌɔn ɐ kˈʌstəm mˈɑːdəl. ænd lˈæst ʌv ˈɔːl, ðɛɹ ɪz ɐ skɹˈɪpt fɔːɹ fˈaɪntˈuːnɪŋ mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs vˈɪdɪoʊ mˈɑːdəlz. ðɪs ɪz bˈeɪsɪkli ɐ vˌɛɹɪˈeɪʃən ˌɔn tˈɛkst plˈʌs ˈɪmɪdʒ mˈɑːdəlz wˌɛɹ juː splˈɪt ðə vˈɪdɪoʊ ˌɪntʊ mˌʌltɪpəl ˈɪmɪdʒᵻz. ðə nˈɛkst ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst ˈɪnfɚɹəns ɹˈiːpoʊ. ðɪs ɐlˈaʊz juː tə sˈɛt ˌʌp ɐ sˈɜːvɚ sˌoʊ ðæt juː kæn hˈɪt ɐn ɛndpˈɔɪnt fɚɹə kˈʌstəm mˈɑːdəl. juː kæn dˈuː sˌoʊ ˌɔn ɹˈʌn pˈɑːd, vˈæst ˌeɪˈaɪ, ɔːɹ jˈuːzɪŋ ɐ lˈɑːmə sˌiːpˌiːpˈiː tˈaɪp sˈɜːvɚ. |1
+ trelis_voice1_31.wav|ðɛɹz ˈɔːlsoʊ ðɪ ˈɑːpʃən tə dᵻplˈɔɪ sˈɜːvɚləsli jˈuːzɪŋ ɹˈʌn pˈɑːd. ðɪs mˈiːnz ðæt juː kæn pˌʊt ɐ sˈɜːvɚ ðæt wɪl ˈoʊnli tˈɜːn ˌɔn wɛn ɪts bˌiːɪŋ kwˈiəɹɪd ænd wɪl tˈɜːn ˈɔf ˈæftɚɹ ɪts bˌiːɪŋ kwˈiəɹɪd. ɪts kwˈaɪt jˈuːsfəl fɔːɹ bˈætʃ dʒˈɑːbz ðæt ɑːɹ lˈɛs tˈaɪm sˈɛnsᵻtˌɪv, bɪkˈʌz ɪt mˈiːnz jʊɹ nˌɑːt pˈeɪɪŋ fɚðə sˈɜːvɚ wɛn ɪts nˌɑːt bˌiːɪŋ jˈuːzd, ænd ɪt wɪl dʒˈʌst tˈɜːn ˌɔn wɛn juː nˈiːd ɪt, wˌɪtʃ ɪz ɡˌoʊɪŋ tə sˈeɪv juː kˈɔst. ðɛɹˌɑːɹ ˈɔːlsoʊ ɐ nˈʌmbɚɹ ʌv skɹˈɪpts fɔːɹ mˌeɪkɪŋ ˌeɪpˌiːˈaɪ kˈɔːlz, sˈɪmpəl ˌeɪpˌiːˈaɪ kˈɔːlz, ˈoʊpən ˌeɪˈaɪ stˈaɪl ɔːɹ tˌiːdʒˌiːˈaɪ stˈaɪl, fˈʌŋkʃən kˈɔːlɪŋ ˌeɪpˌiːˈaɪ kˈɔːlz ɪf juː wˈɔnt tə tˈɛst ˈaʊt fˈʌŋkʃən kˈɔːlɪŋ pɚfˈoːɹməns əvə mˈɑːdəl. ðˈɛn ðɛɹˌɑːɹ spˈiːd tˈɛsts fɔːɹ sˈɪŋɡəl kwˈiəɹɪz ænd mˌʌltɪpəl kwˈiəɹɪz. |1
+ trelis_voice1_32.wav|ðɛɹz ˈɔːlsoʊ ɐ fˈoʊldɚ nˈaʊ ˌɔn pɹˈaɪvəsi, wˌɪtʃ ɐlˈaʊz juː tə bˈeɪsɪkli hˈaɪd ˌɪnfɚmˈeɪʃən, lˈaɪk pˈɜːsənəl ˌɪnfɚmˈeɪʃən ˌɔn kɹˈɛdɪt kˈɑːɹdz, nˈeɪmz, ˈiːmeɪl ɐdɹˈɛsᵻz, bᵻfˌoːɹ juː sˈɛnd ɪt tʊ ɐ θˈɜːdpˈɑːɹɾi ˌeɪpˌiːˈaɪ sˌoʊ ðæt juː kæn ɹᵻdˈuːs ˌɛni dˈeɪɾə pɹˈaɪvəsi ɹˈɪsks. lˈæst ʌv ˈɔːl, ðɛɹz ðɪ ɐdvˈænst tɹænskɹˈɪpʃən ɹᵻpˈɑːzɪtˌoːɹi. ðˈɪswˌʌn hˈɪɹ ɐlˈaʊz juː tə dʒˈɛnɚɹˌeɪt dˈeɪɾə ɪf juː wˈɔnt tə fˈaɪn tˈuːn ɐ wˈɪspɚ mˈɑːdəl ænd ðˈɛn dˈuː ðə fˈaɪn tˈuːnɪŋ. ænd ɐɡˈɛn, mˈʌtʃ ʌvðə tˈɛn tˈɪps ðæt aɪ pɹəvˈaɪdᵻd ˈɜːlɪɚɹ ɑːɹ ɡˌoʊɪŋ tʊ ɐplˈaɪ hˈɪɹ fɔːɹ tɹænskɹˈɪpʃən. ænd ðæt ɪz ɪt fɔːɹ maɪ tˈɛn tˈɪps ˌɔn fˈaɪntˈuːnɪŋ. ɪf aɪv lˈɛft ˈɛnɪθˌɪŋ ˈaʊt, plˈiːz lˈɛt mˌiː nˈoʊ bᵻlˌoʊ ɪnðə kˈɑːmɛnts ænd aɪl ɡɛt bˈæk tə juː. ɪnðə mˈiːntaɪm, ɪf juː wˈɔnt mˈoːɹ ˌɪnfɚmˈeɪʃən ˌɔn tɹˈɛliz ɹᵻsˈoːɹsᵻz, ɪŋklˈuːdɪŋ fɹˈiː ænd pˈeɪd, tɹˈaɪ ˈaʊt tɹˈɛliz.kˈɑːm. |1
+ trelis_voice1_0.wav|aɪm ɡˌoʊɪŋ tə wˈɔːk juː θɹuː tˈɛn kwˈɪk tˈɪps fɔːɹ fˈaɪn tˈuːnɪŋ. fɔːɹ ˈiːtʃ əv ðˈoʊz, aɪl pˈɔɪnt juː tə wˈʌn ɔːɹ tˈuː tɹˈɛliz vˈɪdɪoʊz ˌɔn juː tˈuːb ænd ˈɔːlsoʊ pˈɔɪnt juː tə ðə ɹˈaɪt bɹˈæntʃ ɪf jʊɹ wˈɜːkɪŋ ˌaʊɾəv ðə tɹˈɛliz ɐdvˈænst fˈaɪn tˈuːnɪŋ ɹᵻpˈɑːzɪtˌoːɹi. tˈɪp nˈʌmbɚ wˈʌn ɪz tə stˈɑːɹt wɪð ɐ smˈɔːl mˈɑːdəl. aɪ ɹˌɛkəmˈɛnd stˈɑːɹɾɪŋ wɪð sˈʌmθɪŋ lˈaɪk lˈɑːmə θɹˈiː ˈeɪt bˈiː ɔːɹ fˈaɪ θɹˈiː mˈɪni. ænd ðə ɹˈiːzən ɪz bɪkˈʌz fˈaɪn tˈuːnɪŋ ɪz ɐbˌaʊt ɛkspˌɛɹɪməntˈeɪʃən ænd juː wˈɔnt təbi ˈeɪbəl tə tɹˈaɪ mˈɛni θˈɪŋz kwˈɪkli. |1
+ trelis_voice1_1.wav|ɪf juː stˈɑːɹt ˈɔf wɪð lˈɑːmə θɹˈiː ˈeɪt ɔːɹ sˈɛvənti bˈiː, ɪts ɡˌoʊɪŋ tə tˈeɪk juː mˈʌtʃ mˈoːɹ tˈaɪm ɪn ˈɔːɹdɚ tə tˈɛst ˈaʊt wʌts wˈɜːkɪŋ ænd wʌts nˈɑːt. juː kæn ˈɔːlweɪz stˈɑːɹt smˈɔːl ænd skˈeɪl ˌʌp lˈeɪɾɚ. ðə vˈɪdɪoʊ aɪ ɹˌɛkəmˈɛnd hˈɪɹ ɪz mˌɛmɚɹᵻzˈeɪʃən. ðˈɪswˌʌn, aɪ jˈuːz ɐ ɹˈɛlətˌɪvli smˈɔːl mˈɑːdəl æz aɪ dˈuː ɪn mˈɛnɪəv maɪ fˈaɪn tˈuːnɪŋ tuːtˈoːɹɪəlz, dʒˈʌst bɪkˈʌz ɪts kwˈɪkɚ tə lˈɜːn fˈæst. tˈɪp nˈʌmbɚ tˈuː ɪz tə jˈuːz lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː. |1
+ trelis_voice1_2.wav|aɪ dˈoʊnt ɹˌɛkəmˈɛnd stˈɑːɹɾɪŋ ˈɔf wɪð fˈʊl fˈaɪntˈuːnɪŋ fɚɹə fjˈuː ɹˈiːzənz. fˈɜːst ʌv ˈɔːl, lˈoʊ ɹˈɑː ænd kjˈuː lˈoʊ ɹˈɑː ɐlˈaʊ juː tə stˈɑːɹt wɪð fjˈuːɚ dʒˌiːpˌiːjˈuː ɔːɹ ɐ smˈɔːlɚ dʒˌiːpˌiːjˈuː. ðæts ɡˌoʊɪŋ tə mˌeɪk ˌɪɾɚɹˈeɪʃən fˈæstɚ. bˌʌt fɔːɹ smˈɔːl dˈeɪɾəsˌɛts, ðə pɚfˈoːɹməns mˌaɪt ˈiːvən biː bˈɛɾɚ ðɐn fˈʊl fˈaɪntˈuːnɪŋ bɪkˈʌz fˈʊl fˈaɪntˈuːnɪŋ kæn tˈɛnd tʊ ˌoʊvɚfˈɪt. sˌoʊ aɪd ɹˌɛkəmˈɛnd ˈiːvən ɪf juː ᵻvˈɛntʃuːəli wˈɔnt tə dˈuː fˈʊl fˈaɪntˈuːnɪŋ, stˈɑːɹt ˈɔf wɪð lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː ænd tɹˈaɪ tə ɡɛt ɪt wˈɜːkɪŋ bᵻfˌoːɹ juː wˈɔnt tə spˈɛnd mˈoːɹ ˌɔn dʒˌiːpˌiːjˈuː ɹˈɛntəl ænd mˈoːɹ ʌv jʊɹ tˈaɪm. |1
+ trelis_voice1_3.wav|ðə vˈɪdɪoʊ hˈɪɹ ɪf juː wˈɔnt tə pˈɪk ˈaʊt ðə ɹˈaɪt lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz ɪz ɐ lˈaɪv stɹˈiːm ˌɔn hˌaʊ tə pˈɪk lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz. ænd ɪf jʊɹ wˈɜːkɪŋ ˌaʊɾəv ðə tɹˈɛliz ɹˈiːpoʊ, juː kæn tʃˈɛk ˈaʊt ðɪ ʌnslˈɑːθ bɹˈæntʃ fɚðə fˈæstɪst fˈaɪntˈuːnɪŋ ˌɔn ɐ sˈɪŋɡəl dʒˌiːpˌiːjˈuː jˈuːzɪŋ lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː. tˈɪp nˈʌmbɚ θɹˈiː ɪz tə kɹiːˈeɪt tˈɛn mˈænjuːəl tˈɛst kwˈɛstʃənz. sˌoʊ juː wˈɔnt tə kɹiːˈeɪt tˈɛn kwˈɛstʃən ˈænsɚ pˈɛɹz ænd jˈuːs ðoʊz tə tʃˈuːz wˌɪtʃ bˈeɪs mˈɑːdəl ɪz ɡˌoʊɪŋ tə pɚfˈɔːɹm bˈɛst. |1
+ trelis_voice1_4.wav|wˌɛn juː mˈænjuːəli kjˈʊɹɹeɪt ɐ dˈeɪɾə sˈɛt lˈaɪk aɪ dˈɪd fɚðə tɹˈɛliz fˈʌŋkʃən kˈɔːlɪŋ dˈeɪɾə sˈɛt, ɪt lˈɛts juː ɐpɹˈiːʃɪˌeɪt ɛɡzˈæktli wˌɪtʃ ɹˈoʊz ʌv dˈeɪɾə ɑːɹ nˈiːdᵻd tə ɡɛt ðə pɚfˈoːɹməns ðæt juː nˈiːd. juː kˈæn, ʌv kˈoːɹs, jˈuːs pˈaɪθən ænd tʃˈæt dʒˌiːpˌiːtˈiː tə hˈɛlp ˈɔːɾəmˌeɪt sˌʌm ʌv ðɪs ænd dʒˈɛnɚɹˌeɪt ɹˈoʊz. bˌʌt aɪ θˈɪŋk ðə mˈænjuːəl tˈʌtʃ dˈʌz ɐlˈaʊ juː ɐ bˈɛɾɚɹ ˌʌndɚstˈændɪŋ, wˌɪtʃ wɪl ɐlˈaʊ juː tə ɡɛt pɚfˈoːɹməns fˈæstɚ. hˈɪɹ, juː kæn tʃˈɛk ˈaʊt ðə fˈʌŋkʃən kˈɔːlɪŋ vˈiː θɹˈiː bɹˈæntʃ ænd ˈɔːlsoʊ ðɪ ʌnslˈɑːt ænd mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃᵻz ʌvðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ. |1
+ trelis_voice1_5.wav|ɪf juː dˈuː wˈɔnt tʊ ˈɔːɾəmˌeɪt ɐ lˈɪɾəl mˈoːɹ hˌaʊ juː dʒˈɛnɚɹˌeɪt sɪnθˈɛɾɪk dˈeɪɾə sˈɛts, juː kæn tʃˈɛk ˈaʊt ðɪs vˈɪdɪoʊ hˈɪɹ ˌɔn dˈeɪɾə sˈɛt pɹˌɛpɚɹˈeɪʃən wɪð ˌɛlˌɛlˈɛm. tˈɪp nˈʌmbɚ sˈɪks ɪz ˈɔːlweɪz jˈuːs ɐ vˌælɪdˈeɪʃən dˈeɪɾə sˈɛt. ɪf juː dˈoʊnt hˈæv wˌʌn, juː kæn dʒˈʌst splˈɪt ˈɔf tˈɛn tə twˈɛnti pɚsˈɛnt ʌv jʊɹ tɹˈeɪnɪŋ dˈeɪɾə sˈɛt. juː wˈɔnt təbi tʃˈɛkɪŋ jʊɹ tɹˈeɪnɪŋ lˈɔs æz juː pɹəɡɹˈɛs ɐlˈɔŋ ðə pɹˈɑːsɛs. mˌeɪk ʃˈʊɹ ɪts nˌɑːt tˈuː bˈʌmpi ænd jʊɹ lˈɜːnɪŋ ɹˈeɪt ɪz nˌɑːt tˈuː hˈaɪ ɔːɹ jʊɹ bˈætʃ sˈaɪz ɔːɹ vˈɜːtʃuːəl bˈætʃ sˈaɪz ɪz tˈuː smˈɔːl. |1
+ trelis_voice1_6.wav|juː ˈɔːlsoʊ wˈɔnt tə tʃˈɛk jʊɹ vˌælɪdˈeɪʃən lˈɔs, ænd ðɪs ʃˌʊd biː mənətˈɑːnɪkli dˈiːkɹiːsɪŋ ɪn ɐ smˈuːð wˈeɪ. ɪf ɪts ˈɛvɚɹ ˈʌptɪkɪŋ, ðæt mˈiːnz juː mˌaɪt biː ˌoʊvɚfˈɪɾɪŋ ænd jʊɹ tɹˈeɪnɪŋ fɔːɹ tˈuː mɛni ˈɛpɑːkz, ɔːɹ juː mˈeɪ nˌɑːɾɐv ɪnˈʌf dˈeɪɾə. hˈɪɹ, aɪ ɹˌɛkəmˈɛnd ðə tɹˈɛliz ɹˈiːpoʊ bɹˈæntʃᵻz ʌv ʌnslˈɑːθ ɔːɹ mˈʌltaɪ dʒˌiːpˌiːjˈuː. ðeɪ ˈiːtʃ ɐlˈaʊ juː tə splˈɪt ˈɔf vˌælɪdˈeɪʃən, splˈɪt fɹʌm jʊɹ bˈeɪs tɹˈeɪnɪŋ sˈɛt. ðɪs ɪz sˈʌmθɪŋ juː kæn ˈɔːlsoʊ dˈuː ˈiːzili jˈuːzɪŋ hˈʌɡɪŋ fˈeɪs dˈeɪɾəsˌɛts ɪf juː tʃˈɛk ˈaʊt ðɛɹ dˌɑːkjuːməntˈeɪʃən. |1
+ trelis_voice1_7.wav|aɪ θˈɪŋk jʊɹ bˈɛɾɚɹ ˈɔf tə dʒˈʌst fˈɪt ɪɾ ˌɔn wˈʌn dʒˌiːpˌiːjˈuː, bɪkˈʌz wɛn juː mˈuːv tə mˈʌltaɪ dʒˌiːpˌiːjˈuː, juː hæv dˈeɪɾə ðæts mˈuːvɪŋ bᵻtwˈiːn ðˌɛm, ðə tɹˈeɪnɪŋ bɪkˌʌmz mˈoːɹ kˈɑːmplᵻkˌeɪɾᵻd, ɪts ˈiːziɚ tə mˌeɪk mɪstˈeɪks, ænd ɪt kæn biː slˈoʊɚɹ ɪn sˌʌm wˈeɪz. ˈɔːlsoʊ, ˌɔn wˈʌn dʒˌiːpˌiːjˈuː, juː kæn jˈuːz ʌnslˈɑːθ, wˌɪtʃ ɡˈɪvz juː ɐ tˈuː ˈɛks spˈiːd ˈʌp. sˌoʊ ðæts kwˈaɪt bˌɛnɪfˈɪʃəl ɪf juː kæn dʒˈʌst fˈoʊkəs ˌɔn kˈiːpɪŋ θˈɪŋz sˈɪmpəl, ʌntˈɪl juːv æt lˈiːst ɡɑːt ɐ tɹˈeɪnɪŋ ɐpɹˈoʊtʃ ðæts wˈɜːkɪŋ wˈɛl, ænd jʊɹ hˈæpi tə ðˈɛn spˈɛnd ðə tˈaɪm ænd mˈʌni tə skˈeɪl ˈʌp. |1
+ trelis_voice1_8.wav|sˈʌmθɪŋ aɪ ʃˌʊd mˈɛnʃən æz wˈɛl ɪz ðæt juː kæn wˈeɪst ɐ lˈɑːt ʌv tˈaɪm wɪð ˌɪnstəlˈeɪʃənz ænd ɡˌɛɾɪŋ stˈʌk ɪn ɡˌɛɾɪŋ sˈɛt ˌʌp fɔːɹ fˈaɪn tˈuːnɪŋ. wˈʌn wˈeɪ ɚɹˈaʊnd ðæt ɪz tə jˈuːz ɐn ˈɪmɪdʒ ɔːɹ ɐ tˈɛmplət ðæt sˈɛts ˌʌp jʊɹ kjˈuːdə ænd pˈaɪ tˈɔːɹtʃ tʊ ɐ spəsˈɪfɪk vˈɜːʒən. aɪv ɡɑːt ɐ wˈʌŋklˈɪk tˈɛmplət hˈɪɹ fɔːɹ ɹˈʌn pˈɑːd, ænd juː kæn jˈuːz ðæt tə kənsˈɪstəntli hæv ðə sˈeɪm ɛnvˈaɪɹənmənt ˌɔn wˌɪtʃ tʊ ɪnstˈɔːl ðə fˈaɪnəl pˈækɪdʒᵻz juː nˈiːd fɔːɹ fˈaɪn tˈuːnɪŋ. |1
+ trelis_voice1_9.wav|tˈɪp nˈʌmbɚɹ ˈeɪt ɪz tə jˈuːz wˈeɪts ænd bˈaɪəsᵻz. ðɪs ɪz ɐ tˈuːl ðæt ɐlˈaʊz juː tə tɹˈæk ðə lˈɔsᵻz ænd ðə ɹᵻwˈɔːɹdz æz juː mˈuːv θɹuː jʊɹ tɹˈeɪnɪŋ ɹˈʌn. juː kæn ɪŋklˈuːd ðɪs ɪn ɐ skɹˈɪpt wɪð pˈɪp ɪnstˈɔːl wˈændˌiːbiː, ðˈɛn sˈɛt ðɪ ɛnvˈaɪɹənmənt vˈɛɹɪəbəl fɔːɹ wˈændˌiːbiː pɹˈɑːdʒɛkt tʊ ɐ pɹˈɑːdʒɛkt nˈeɪm. ænd ðɪs wɪl kɹiːˈeɪt ɐ fˈoʊldɚ bˈeɪsɪkli wɪðˌɪn wˌɪtʃ juː kæn hæv mˌʌltɪpəl ɹˈʌnz ʌv ɹˈʌn nˈeɪm. ænd ðə wˈeɪ juː sˈɛt ðə ɹˈʌn nˈeɪm ɪz ɪnðə tɹˈeɪnɪŋ ˈɑːɹɡjuːmənts baɪ pˈæsɪŋ ɪnðə ɹˈʌn nˈeɪm. |1
+ trelis_voice1_10.wav|hˈɪɹ juː wʊd sˈɛt ðə ɹˈʌn nˈeɪm lˈaɪk wˈʌn ˈɛpɑːk ænd kˈɑːnstənt skˈɛdʒuːlɚ ɔːɹ wʌtˈɛvɚ juː wˈɔnt tə kˈɔːl ɪt. ænd juː ˈɔːlsoʊ nˈiːd tə sˈɛt ˌʌp ɹᵻpˈoːɹt tə wˈɔnd bˈiː wˈeɪts ænd bˈaɪəsᵻz. ðɪs ɪz səpˈoːɹɾᵻd ɪnðɪ ˈɑːnslɔːt ænd ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃᵻz ænd ˈɔːlsoʊ ɪn mˈɛnɪəv ðə dʒˈʌpaɪɾɚ nˈoʊtbʊks ðæt ɑːɹ θɹuːˈaʊt ˈɔːl ðə bɹˈæntʃᵻz ʌvðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ. bᵻfˌoːɹ aɪ mˈuːv tə tˈɪps ˈeɪt ænd nˈaɪn, aɪ wˈɔnt tə kˈɑːmɛnt ˌɔn skˈeɪlɪŋ ˈʌp. sˌoʊ aɪv tˈɔːkt ɐbˌaʊt stˈɑːɹɾɪŋ wɪð ɐ lˈoʊ nˈʌmbɚɹ ʌv ɹˈoʊz, stˈɑːɹɾɪŋ wɪð lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː, ænd stˈɑːɹɾɪŋ wɪð ɐ smˈɔːl mˈɑːdəl. |1
+ trelis_voice1_11.wav|wˈɛl, hˈɪɹz ðɪ ˈɔːɹdɚ juː wˈɔnt tə skˈeɪl ˌʌp ˈɪn. stˈɑːɹt baɪ ɪŋkɹˈiːsɪŋ ðə ɹˈoʊz ʌv dˈeɪɾə ˌɔn ɐ smˈɔːl mˈɑːdəl, ðˈɛn mˈuːv kjˈuː lˈoʊ ɹˈɑː tə lˈoʊ ɹˈɑː. ɪf juː ɹˈiəli wˈɔnt tə tɹˈaɪ fˈʊl fˈaɪntˈuːnɪŋ, tˈɛst ɪɾ ˈaʊt ˌɔn ɐ smˈɔːl mˈɑːdəl ænd sˈiː ɪf ɪt ɹˈiəli ɪmpɹˈuːvz pɚfˈoːɹməns. ðˈɛn, æz ɐ vˈɛɹi lˈæst stˈɛp, juː kæn θˈɪŋk ɐbˌaʊt mˈuːvɪŋ tʊ ɐ lˈɑːɹdʒɚ mˈɑːdəl wˌɛɹ ɪts ɡˌoʊɪŋ tə tˈeɪk mˈoːɹ tˈaɪm ænd mˈʌni tə ɡɛt ɪn ðæt fˈaɪnəl ɹɪzˈʌlt. |1
+ trelis_voice1_12.wav|ðɛɹˌɑːɹ tˈuː vˈɪdɪoʊz ʌv ɹˈɛlᵻvəns hˈɪɹ. ɪf juː wˈɔnt tʊ ˌʌndɚstˈænd ðə pɹˈoʊz ænd kˈɑːnz ʌv fˈʊl fˈaɪntˈuːnɪŋ vˈɜːsᵻz kjˈuːlˈoːɹə ɔːɹ lˈoʊ ɹˈɑː, tˈeɪk ɐ lˈʊk æt ðɪs vˈɪdɪoʊ. ænd ɪf juː wˈɔnt tʊ ˌʌndɚstˈænd ðə kəmplˈɛksᵻɾiz ʌv dˌuːɪŋ mˈʌltaɪdʒˌiːpˌiːjˈuː tɹˈeɪnɪŋ, tʃˈɛk ˈaʊt mˈʌltaɪdʒˌiːpˌiːjˈuː fˈaɪntˈuːnɪŋ. mˈuːvɪŋ tə tˈuː lˈæst tˈɪps. tˈɪp nˈʌmbɚ nˈaɪn ɪz tə jˈuːz ʌnsˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ. ðɪs kæn biː jˈuːsfəl ɪf juː hæv ɐ lˈɑːɹdʒ dˈeɪɾə sˈɛt. aɪm ɡˌoʊɪŋ tə sˈeɪ lˈɑːɹdʒɚ ðɐn tˈɛn,zˈiəɹoʊzˈiəɹoʊ zˈiəɹoʊ ɹˈoʊz ʌv dˈeɪɾə. |1
+ trelis_voice1_13.wav|ðɪs ɪz wˌɛɹ juː hæv ɐ dˈeɪɾə sˈɛt wɪð tʃˈoʊzən, wˌɪtʃ ɑːɹ bˈɛɾɚ ɔːɹ pɹɪfˈɜːd ɹᵻspˈɑːnsᵻz, ænd ɹᵻdʒˈɛktᵻd, wˌɪtʃ ɑːɹ ðə ɹᵻspˈɑːnsᵻz tə ðə sˈeɪm pɹˈɑːmpts bˌʌt ɑːɹ ʌv lˈoʊɚ kwˈɔlᵻɾi. juː mˌaɪthɐv ɐ sˈɛt ʌv dˈeɪɾə lˈaɪk ðɪs ɪf juː hæv pɹədˈʌkʃən dˈeɪɾə fɹʌm kˈʌstəmɚz ɔːɹ fɹʌm ɐ tʃˈætbɑːt. juː mˌeɪhɐv sˌʌm kɑːnvɚsˈeɪʃənəl dˈeɪɾə ðæt juː kənsˈɪdɚɹ ʌv ɡˈʊd kwˈɔlᵻɾi. juː mˈeɪ ˈiːvən hæv kɚɹˈɛktᵻd ɔːɹ ˈænətˌeɪɾᵻd vˈɜːʒənz ʌv ðoʊz kɑːnvɚsˈeɪʃənz wˌɛɹ juːv ɪmpɹˈuːvd ðɪ ɐsˈɪstəns ɹᵻspˈɑːnsᵻz. ðæts ɡˌoʊɪŋ təbi aɪdˈiəl æz jʊɹ tʃˈoʊzən dˈeɪɾəsˌɛt. |1
+ trelis_voice1_14.wav|ðə pɹˈɛfɹəns fˈaɪntˈuːnɪŋ wɪl mˈuːv jʊɹ mˈɑːdəl tə ɡˈɪv ɹᵻspˈɑːnsᵻz mˈoːɹ lˈaɪk jʊɹ tʃˈoʊzən ˈænsɚz ænd lˈɛs lˈaɪk jʊɹ ɹᵻdʒˈɛktᵻd ˈænsɚz, wˌɪtʃ ɪz jˈuːsfəl ɪf juː wˈɔnt tə dˈuː sˌʌm fˈaɪntˈuːnɪŋ fɔːɹ tˈoʊn ɔːɹ stˈaɪl, ɔːɹ ɪf juː wˈɔnt tə mˌeɪk sˌʌm kɚɹˈɛkʃənz wˌɛɹ ðə mˈɑːdəlz ɡˈɪvɪŋ ɐ ɹᵻspˈɑːns juː dˈoʊnt kwˈaɪt lˈaɪk. hˈɪɹ aɪ ɹˌɛkəmˈɛnd ðɪ ˈɔːɹpoʊ juː tˈuːb vˈɪdɪoʊ, ænd ðɛɹz ˈɔːlsoʊ ɐ bɹˈæntʃ baɪ ðæt nˈeɪm ɪn ɐdvˈænst fˈaɪn tˈuːnɪŋ. ˈɔːɹpoʊ ɪz ˈɔːlsoʊ səpˈoːɹɾᵻd ɪnðɪ ʌnslˈɑːt bɹˈæntʃ, wˌɛɹ ðɛɹz ɐ pˈaɪθən dʒˈʌpaɪɾɚ nˈoʊtbʊk ænd ˈɔːlsoʊ dʒˈʌst ɐ pˈaɪθən.pˈaɪ skɹˈɪpt juː kæn ɹˈʌn. |1
+ trelis_voice1_15.wav|ænd ˈɔːɹpoʊ ɪz səpˈoːɹɾᵻd æz ɐn ˈɑːpʃən ɪnðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ tˈuː. sˌoʊ tə ɹᵻkˈæp ðiːz tˈɛn tˈɪps, stˈɑːɹt wɪð ɐ smˈɔːl mˈɑːdəl, jˈuːs lˈoʊ ɹˈɑː ɔːɹ kjˈuː lˈoʊ ɹˈɑː, nˌɑːt fˈʊl fˈaɪntˈuːnɪŋ. ˈɔːlweɪz kɹiːˈeɪt tˈɛn mˈænjuːəl tˈɛst kwˈɛstʃənz ɔːɹ mˈeɪbiː ɐ fjˈuːmˌoːɹ. ɹᵻmˈɛmbɚ ðæt mˈænjuːəl dˈeɪɾə sˈɛts ɑːɹ pɹˈɑːbəbli ˌʌndɚɹˈeɪɾᵻd. juː kæn ˈɔːlweɪz ɡɛt ɐ lˈɪɾəl bˈɪt ʌv hˈɛlp fɹʌm pˈaɪθən ɔːɹ fɹʌm tʃˈæt dʒˌiːpˌiːtˈiː. stˈɑːɹt tɹˈeɪnɪŋ ˌɔn ɐ smˈɔːl nˈʌmbɚɹ ʌv ɹˈoʊz, ˈiːvən dʒˈʌst wˈʌn ɹˈoʊ tə tˈɛst ðə pˈaɪplaɪn, bˌʌt ðˈɛn wˈʌnhˈʌndɹɪd, ænd mˌeɪk ʃˈʊɹ ɪts hˌævɪŋ ɐ ɡˈʊd ɪfˈɛkt bᵻfˌoːɹ juː dᵻsˈaɪd tə skˈeɪl ˈʌp. |1
+ trelis_voice1_16.wav|mˌeɪk ʃˈʊɹ juː nˈoʊ ðætðə dˈeɪɾə tˈaɪp ænd ðə dˈeɪɾə sˈɛt ðæt juːv sˈɛt ˌʌp ɪz ˈæktʃuːəli ðə ɹˈaɪt wˌʌn. nˈʌmbɚ sˈɪks, ˈɔːlweɪz jˈuːs ɐ vˌælɪdˈeɪʃən sˈɛt. dʒˈʌst splˈɪt wˈʌn ˈɔf fɹʌm ɐ tɹˈeɪnɪŋ sˈɛt ɪf juː dˈoʊnt hˈæv wˌʌn. nˈʌmbɚ sˈɛvən, tɹˈaɪ tə dʒˈʌst stˈɑːɹt tɹˈeɪnɪŋ ˌɔn wˈʌn dʒˌiːpˌiːjˈuː. nˈʌmbɚɹ ˈeɪt, jˈuːs wˈeɪts ænd bˈaɪəsᵻz fɔːɹ tɹˈækɪŋ. ænd wɛn jʊɹ skˈeɪlɪŋ fɹʌm smˈɔːl tə lˈɑːɹdʒ, ˈɪŋkɹiːs fˈɜːst ðə ɹˈoʊz, ðˈɛn mˈuːv tə jˈuːzɪŋ mˈoːɹ vɹˈæm wɪð lˈoʊ ɹˈɑː ɪnstˈɛd ʌv kjˈuː lˈoʊ ɹˈɑː ɔːɹ fˈʊl fˈaɪn tˈuːnɪŋ ɪnstˈɛd ʌv lˈoʊ ɹˈɑː. |1
+ trelis_voice1_17.wav|baɪ ðə wˈeɪ, ðɛɹz ɐ fˈæktɚɹ ʌv fˈoːɹ ɹˈʌfli ɪn vɹˈæm dˈɪfɹəns bᵻtwˌiːn ˈiːtʃ əv ðˈoʊz. sˌoʊ lˈoʊ ɹˈɑː ɪz ɐbˌaʊt fˈoːɹ tˈaɪmz kjˈuː lˈoʊ ɹˈɑː ænd fˈʊl fˈaɪn tˈuːnɪŋ ɪz ɐbˌaʊt fˈoːɹ tˈaɪmz. lˈoʊ ɹˈɑː, ɔːɹ ˈiːvən mˈoːɹ ɪn sˌʌm kˈeɪsᵻz. ænd lˈæst ʌv ˈɔːl, ˈɪŋkɹiːs tʊ ɐ lˈɑːɹdʒɚ mˈɑːdəl sˈaɪz ˈoʊnli æt ðə vˈɛɹi ˈɛnd ʌv jʊɹ tɹˈeɪnɪŋ pɹˈɑːsɛs wɛn juː θˈɪŋk juː hæv ɐ pˈaɪplaɪn ðæts wˈɜːkɪŋ wˈɛl. ðˈɛn fɔːɹ ɐdvˈænst tˈɪps, kənsˈɪdɚ dˌuːɪŋ ʌnsˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ ɪf juː hæv ɐ lˈɑːɹdʒ ɐmˈaʊnt ʌv dˈeɪɾə, ˈoʊnli ɪf juː hæv ɐ lˈɑːɹdʒ ɐmˈaʊnt ʌv dˈeɪɾə, aɪd sˈeɪ. |1
+ trelis_voice1_18.wav|ænd lˈæst ʌv ˈɔːl, juː kæn kənsˈɪdɚ pɹˈɛfɹəns fˈaɪntˈuːnɪŋ, ɪnwˌɪtʃ kˈeɪs aɪd ɹˌɛkəmˈɛnd jˈuːzɪŋ ˈɔːɹpəl, wˌɪtʃ wɪl dˈuː sˈuːpɚvˌaɪzd fˈaɪntˈuːnɪŋ ænd ˈɑːdz ɹˈeɪʃɪˌoʊ pɹˈɛfɹəns ˌɑːptɪmᵻzˈeɪʃən. æt ðə sˈeɪm tˈaɪm. nˈaʊ, ðɪs ɐpɹˈoʊtʃ hˈɪɹ aɪv tˈɔːkt ɐbˌaʊt fɔːɹ lˈæŋɡwɪdʒ mˈɑːdəlz, bˌʌt ɪɾ ˈɔːlsoʊ wˈɜːks fɔːɹ vˈɪdɪoʊ ænd spˈiːtʃ ɔːɹ ˈɪmɪdʒᵻz, mˌʌltɪmˈoʊdəl mˈɑːdəlz. sˌoʊ juː kæn tʃˈɛk ˈaʊt ðɪs vˈɪdɪoʊ hˈɪɹ ˌɔn mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs ˈɪmɪdʒ, wˌɛɹ aɪ pɹɪpˈɛɹ ɐ dˈeɪɾə sˈɛt ænd bɹˈɪŋ ɪt θɹuː fˈaɪn tˈuːnɪŋ. ænd lˈaɪkwaɪz, fɔːɹ ðɪs spˈiːtʃ tə tˈɛkst mˈɑːdəl, wˌɛɹ aɪ pɹɪpˈɛɹ ɐ dˈeɪɾə sˈɛt ænd bɹˈɪŋ ɪt θɹuː fˈaɪn tˈuːnɪŋ. |1
+ trelis_voice1_19.wav|ðɛɹˌɑːɹ spəsˈɪfɪk ɹˈiːpoʊz fɔːɹ mˌʌltɪmˈoʊdəl. ðæts ðə vˈɪʒən ɹᵻpˈɑːzɪtˌoːɹi hˈɪɹ. ænd ðɛɹz ɐ ɹˈiːpoʊ fɔːɹ tɹænskɹˈɪpʃən. ænd ðɪs ˌɛlˌɛlˈɛm ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ aɪv bˌɪn tˈɔːkɪŋ tə dˈeɪt ɪn ɔːɹ ˌʌp ʌntˈɪl nˈaʊ ɪn ðɪs pɹˌɛzəntˈeɪʃən. aɪv lˈeɪd ˈaʊt hˈɪɹ ˈɔːl ʌvðə plˈeɪlɪsts ðæt ɑːɹ ɹˈɛlᵻvənt dᵻpˈɛndɪŋ ˌɔn wʌt juː nˈiːd. sˌoʊ ðɛɹˌɑːɹ fˈoːɹ dˈɪfɹənt sˈɛkʃənz, fˈoːɹ plˈeɪlɪsts ænd fˈoːɹ ɹᵻpˈɑːzɪtˌoːɹiz ðæt ɡˈoʊ wɪð ðˌɛm. ðɛɹz ðɪ ˌɛlˌɛlˈɛm fˈaɪn tˈuːnɪŋ plˈeɪlɪst, wˌɪtʃ ɪz ˈɔːl ɐbˌaʊt fˈaɪn tˈuːnɪŋ lˈæŋɡwɪdʒ mˈɑːdəlz. |1
+ trelis_voice1_20.wav|ðˈɛn ðɛɹz ɐ ɹˈiːpoʊ fɔːɹ ðæt ɐdvˈænst fˈaɪn tˈuːnɪŋ. ðɛɹz ðə vˈɪʒən plˈeɪlɪst, wˌɪtʃ ɪz fɔːɹ mˌʌltɪmˈoʊdəl mˈɑːdəlz ænd ɹˈiːpoʊ lˈɪŋk. ðɛɹz ɐ vˈɪdɪoʊ ˌɔn tɹænskɹˈɪpʃən ænd ɐ ɹˈiːpoʊ lˈɪŋk. ænd ðˈɛn ðɛɹˌɑːɹ mˈɛni vˈɪdɪoʊz ˌɔn sˈɜːvɚ sˈɛɾʌp. ðæts ɪf juː wˈɔnt tə dᵻplˈɔɪ jʊɹ ˈoʊn kˈʌstəm mˈɑːdəl, ˈiːðɚɹ ˌɔn ɐ sˈɜːvɚ ðæt wɪl slˈiːp ɔːɹ stˈɑːɹt ˌʌp wɛn juː nˈiːd ɪt tə ɹˈʌn, ðæts kˈɔːld sˈɜːvɚləs, ɔːɹ ɐ sˈɜːvɚ ðæts ˈɔːlweɪz ˌɔn ɪf jʊɹ jˈuːzɪŋ sˈʌmθɪŋ lˈaɪk tˌiːdʒˌiːˈaɪ ɔːɹ vˌiːˌɛlˌɛlˈɛm θɹuː ɐ sˈɜːvɪs lˈaɪk ɹˈʌn pˈɑːd ɔːɹ vˈæst ˌeɪˈaɪ. |1
+ trelis_voice1_21.wav|ðɪs vˈɛɹi lˈæst sˈɛkʃən ʌvðə vˈɪdɪoʊ ɪz fɔːɹ ðoʊz hˌuː hæv pˈɜːtʃɪst lˈaɪftaɪm ˈæksɛs tə wˈʌn ʌvðə tɹˈɛliz ɹᵻpˈɑːzɪtˌoːɹiz, bˌʌt aɪl dʒˈʌst pˌʊt ɪt pˈɑːɹt ʌv ðɪs pˈʌblɪk vˈɪdɪoʊ bɪkˈʌz ɪt wɪl ɡˈɪv ɐ sˈɛns ʌv wʌts ɪn ðiːz ɹᵻpˈɑːzɪtˌoːɹiz fɔːɹ ðoʊz ʌv juː hˌuː mˌaɪt biː ˈɪntɹɛstᵻd tə pˈɜːtʃɪs lˈaɪftaɪm mˈɛmbɚʃˌɪp lˈeɪɾɚ. ðə fˈɜːst ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst fˈaɪntˈuːnɪŋ ɹˈiːpoʊ, ænd ðɪs ɪz splˈɪt ˌɪntʊ bɹˈæntʃᵻz ɐkˈoːɹdɪŋ tə fˈʌŋkʃən. ðeɪ ɑːɹ ˈɔːl lˈɪstᵻd hˈɪɹ ɹˈʌfli ɪnðɪ ˈɔːɹdɚ ðæt ðeɪ hɐvbɪn ɹᵻlˈiːst. |1
+ trelis_voice1_22.wav|ænd lˈaɪkwaɪz, ɪf juː ɡˌoʊ tə ðə sˈuːpɚvˌaɪzd fˈaɪn tˈuːnɪŋ bɹˈæntʃ, ðɛɹ ɪz ˈɔːlsoʊ ɐ skɹˈɪpt ɔːɹ mˌʌltɪpəl skɹˈɪpts fɔːɹ dʒˈɛnɚɹˌeɪɾɪŋ kjˈuː ænd ɐ dˈeɪɾə fɹʌm ɐ bˈeɪs dˈeɪɾə sˈɛt ɹˈaɪt ðˈɛɹ. ðˈɛn ðɛɹˌɑːɹ tˈuː ɪmpˈoːɹtənt bɹˈæntʃᵻz hˈɪɹ, ʌnslˈɑːθ ænd mˈʌltaɪdʒˌiːpˌiːjˈuː. ðɪ ʌnslˈɑːθ bɹˈæntʃ ɐlˈaʊz juː tə ɹˈʌn fˈaɪn tˈuːnɪŋ ɪn ˈiːðɚɹ ɐ nˈoʊtbʊk ɔːɹ æz ɐ pˈaɪθən skɹˈɪpt. wˈɛɹæz ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ ɐlˈaʊz juː tə ɹˈʌn pˈaɪθən skɹˈɪpts ðæt wɪl dᵻplˈɔɪ mˈʌltaɪdʒˌiːpˌiːjˈuː tɹˈeɪnɪŋ ðæts fˈʊli ʃˈɛɹd dˈeɪɾə pˈæɹəlˌɛl ɔːɹ dˈɪstɹɪbjˌuːɾᵻd dˈeɪɾə pˈæɹəlˌɛl. |1
+ trelis_voice1_23.wav|nˈaʊ aɪl bɹˈiːfli ʃˈoʊ juː ˈiːtʃ əv ðoʊz tˈuː mˈeɪn bɹˈæntʃᵻz. sˌoʊ hˈɪɹ wiːl ɡˌoʊ ˌɪntʊ ðɪ ʌnslˈɑːθ bɹˈæntʃ. ðə wˈeɪ ðæt juː ɹˈʌn tɹˈeɪnɪŋ ɪn ðɪs ʌnslˈɑːt bɹˈæntʃ ɪz baɪ sˈɛɾɪŋ ˌʌp ðə kənfˌɪɡjɚɹˈeɪʃən ɪn ɐ kənfˈɪɡ fˈaɪl. aɪv ˈɔːlsoʊ ɡɑːt ɐ kənfˈɪɡ fˈaɪl ðæt juː kæn jˈuːz hˈɪɹ ɪf juː wˈɔnt tə dˈuː sˌʌm fˈʌŋkʃən kˈɔːlɪŋ fˈaɪn tˈuːnɪŋ. ænd wˈʌns juː hæv jʊɹ kənfˌɪɡjɚɹˈeɪʃən sˈɛt ˈʌp, juː kæn ɹˈʌn ðə tˈɛst.pˈaɪ ɪn ˈɔːɹdɚ tə ɹˈʌn ɐ sˈɛt ʌv tˈɛst kwˈɛstʃənz ðæt juːv mˈænjuːəli dʒˈɛnɚɹˌeɪɾᵻd, ɔːɹ juː kæn ɹˈʌn kwˈɛstʃənz fɹʌm vˌælɪdˈeɪʃən ɔːɹ tˈɛst splˈɪt əvə dˈeɪɾə sˈɛt. |1
+ trelis_voice1_24.wav|ðˈɛn wɛn juː wˈɔnt tə tɹˈeɪn jʊɹ mˈɑːdəl, juː sˈɪmpli ɹˈʌn tɹˈeɪn.pˈaɪ, ɔːɹ juː kæn ɹˈʌn ɪt stˈɛp baɪ stˈɛp ɪn ɐ pˈaɪθən dʒˈʌpaɪɾɚ nˈoʊtbʊk. nˈaʊ, ðə nˈoʊtbʊk ɪz ɹˌɛkəmˈɛndᵻd ɪf juː wˈɔnt tə ɡˌoʊ θɹuː ðə tɹˈeɪnɪŋ ðə fˈɜːst tˈaɪm, juː kæn sˈiː stˈɛp baɪ stˈɛp wʌts hˈæpənɪŋ ænd ˈiːzili pɹˈɪnt ˈaʊt θˈɪŋz æɾ ˌɪntɚmˈiːdiət pˈɔɪnts. bˌʌt wɛn juːv ɡɑːt jʊɹ skɹˈɪpt hˈoʊnd, ɪt kæn biː ɐ lˈɑːt fˈæstɚ tə ɹˈʌn ɐ pˈaɪθən skɹˈɪpt. ænd ðæts wˌaɪ aɪ hæv mˌeɪd ðɪs skɹˈɪpt ɐvˈeɪləbəl, wˌɪtʃ juː dʒˈʌst ɹˈʌn fɹʌmðə kəmˈænd lˈaɪn ænd ɪt wɪl ɡˌoʊ θɹuː ˈɛvɹɪθˌɪŋ wɪðˌɪn ðə tɹˈeɪnɪŋ. |1
+ trelis_voice1_25.wav|juː kæn dᵻsˈaɪd tə tˈɛst ˈiːðɚ fɹʌm ɐ sˈɛt ʌv mˈɛsɪdʒᵻz ðæt juː hæv pɹɪpˈɛɹd mˈænjuːəli, ɔːɹ juː kæn jˈuːz ðə tɹˈeɪnɪŋ, ɔːɹ juː kæn jˈuːz ðə vˌælɪdˈeɪʃən splˈɪt əvə tˈɛst sˈɛt ðæts ˌɔn hˈʌɡɪŋ fˈeɪs baɪ sˈɛɾɪŋ jˈuːs dˈeɪɾə sˈɛt tə tˈɛst ˈiːkwəl tə tɹˈuː ɹˈaɪt hˈɪɹ. nˈɛkst, juː sˈɛt ˌʌp jʊɹ tɹˈeɪnɪŋ ænd vˌælɪdˈeɪʃən splˈɪts. hˈɪɹ aɪv sᵻlˈɛktᵻd ɐ mˈeɪn bɹˈæntʃ fɔːɹ tɹˈeɪnɪŋ, ænd aɪv sᵻlˈɛktᵻd ðə tɹˈeɪnɪŋ splˈɪt. juː kæn fˈɪks ɐ mˈæks nˈʌmbɚɹ ʌv ɹˈoʊz hˈɪɹ. |1
+ trelis_voice1_26.wav|ðɪs wɪl sˈeɪv juː tˈaɪm ɪf juː dʒˈʌst wˈɔnt tə dˈaʊnloʊd ænd ɹˈʌn ˈɔn, sˈeɪ, wˈʌnhˈʌndɹɪd ɹˈoʊz ɪnstˈɛd ʌv ˌɔn ɐ mˈæsɪv dˈeɪɾəsˌɛt. nˈaʊ, aɪ spˈoʊk ˈɜːlɪɚɹ ɐbˌaʊt dʒˈɛnɚɹˌeɪɾɪŋ ɐ vˌælɪdˈeɪʃən sˈɛt. juː kæn ˈiːðɚ dˈaʊnloʊd fɹʌm ɐ splˈɪt ðæts ˌɔn hˈʌɡɪŋ fˈeɪs kˈɔːld tˈɛst ɔːɹ vˌælɪdˈeɪʃən, bˌʌt juː kæn ˈɔːlsoʊ dʒˈɛnɚɹˌeɪt ɐ vˌælɪdˈeɪʃən splˈɪt fɹʌmðə tɹˈeɪn splˈɪt. ɪf juː dʒˈʌst sˈɛt ðɪs tə tɹˈuː, ɪt wɪl siːkwˈɛstɚ twˈɛnti pɚsˈɛnt ʌvðə tɹˈeɪnɪŋ dˈeɪɾə tə jˈuːz æz vˌælɪdˈeɪʃən. nˈɛkst ˌʌp ɪz ðə lˈoʊ ɹˈɑː kənfˌɪɡjɚɹˈeɪʃən. |1
+ trelis_voice1_27.wav|juː hæv ˈɔːl ðə ɹˈɛɡjʊlɚ lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz juːl sˈiː hˈɪɹ. tʃˈɛk ˈaʊt ðə lˈaɪv stɹˈiːm vˈɪdɪoʊ ˌɔn tʃˈuːzɪŋ lˈoʊ ɹˈɑː pɚɹˈæmɪɾɚz ɪf juː wˈɔnt tə nˈoʊ mˈoːɹ. juː kæn sˈɛt lˈoʊ ɹˈɑː ɔːɹ lˈoʊ ɹˈɑː ˈælfə ænd ˈɔːlsoʊ ɹˈæŋk stˈeɪbɪlˌaɪz lˈoʊ ɹˈɑː, sˈɛt ðæt tə tɹˈuː ɔːɹ fˈɔls. hˈɪɹ juːv ɡɑːt sˌʌm wˈeɪts ænd bˈaɪəsᵻz pɹˈɑːdʒɛkt kənfˌɪɡjɚɹˈeɪʃənz. juː sˈɛt ðə pɹˈɑːdʒɛkt nˈeɪm, ænd ðˈɛn fɔːɹ ˈiːtʃ ɹˈʌn, juː kæn jˈuːz ɐ dˈɪfɹənt nˈeɪm hˈɪɹ fɔːɹ ɹˈʌnɪŋ ɪn wˈeɪts ænd bˈaɪəsᵻz. |1
+ trelis_voice1_28.wav|juː kæn sˈɛt ˌʌp jʊɹ hˈʌɡɪŋ fˈeɪs jˈuːzɚnˌeɪm. ðɪs wɪl biː jˈuːzd wɛn pˈʊʃɪŋ mˈɑːdəlz tə hˈʌb. nˈaʊ ðɛɹz ɐ mˈoːɹ ɐdvˈænst tɛknˈiːk hˈɪɹ wˌɛɹ juː kæn dᵻsˈaɪd tə tɹˈeɪn ˌɔn kəmplˈiːʃənz ˈoʊnli. ðɪs mˈiːnz ðæt juː wɪl ˈoʊnli biː kənsˈɪdɚɹɪŋ ðə lˈɔs ɔnðɪ ˈænsɚ pˈoːɹʃən, nˌɑːt ɔnðə pɹˈɑːmpt ɔːɹ kwˈɛstʃən pˈoːɹʃən. ænd ðɪs kæn biː jˈuːsfəl ɪf jʊɹ ˈænsɚz ɑːɹ kwˈaɪt ʃˈɔːɹt ænd juː dˈoʊnt wˈɔnt ðə lˈɔs ˌɔn ˈɔːl ʌvðə pɹˈɑːmpts tə kˈaɪnd ʌv kɹˈaʊd ˈaʊt ɔːɹ klˈaʊd ˈaʊt ðɪ ˌɪnfɚmˈeɪʃən ɔːɹ ðə sˈɪɡnəl ðæts kˈʌmɪŋ fɹʌm tɹˈeɪnɪŋ ɔnðə ɹᵻspˈɑːns ɔːɹ ðɪ ˈænsɚ. |1
+ trelis_voice1_29.wav|sˌoʊ juː sˈɛt ðə kəmplˈiːʃənz tə tɹˈuː hˈɪɹ. sˈʌmtaɪmz aɪ jˈuːz ðɪs fɔːɹ fˈʌŋkʃən kˈɔːlɪŋ, fˈaɪn tˈuːnɪŋ. ænd ðˈɛn juː nˈiːd tə lˈɛt ðə mˈɑːdəl nˈoʊ wˌɛɹ jʊɹ ˈænsɚɹ ɪz stˈɑːɹɾɪŋ. sˌoʊ ɪn ɐ lˈɑːmə θɹˈiː mˈɑːdəl, ðɪ ˈænsɚ wɪl stˈɑːɹt ˈæftɚɹ ɐsˈɪstənt ænd hˈɛdɚɹ aɪdˈiː. ɪn ɐ lˈɑːmə tˈuː mˈɑːdəl, ɪt wɪl stˈɑːɹt ˈæftɚɹ ˈɪnst. ænd ðˈɛn aɪ θˈɪŋk ðɪs ɪz mˈeɪbiː ɐ tʃˈæt ˌɛmˈɛl fˈɔːɹmæt. ðɪ ˈænsɚ wɪl stˈɑːɹt ˈæftɚɹ aɪɐm stˈɑːɹt ɐsˈɪstənt. sˌoʊ ðɪs ɐlˈaʊz ðə tɹˈeɪnɪŋ lˈuːp tə tʃˈɛk wɪðˌɪn jʊɹ pɹˈɑːmpt. |1
+ trelis_voice1_30.wav|ɪt wɪl tʃˈɛk fɔːɹ wˌɛɹ ðɪs stˈɑːɹt ʌvðɪ ɐsˈɪstəns ˈænsɚɹ ɪz, ænd ðˈɛn ɪt wɪl ˈoʊnli lˈʊk æt ðə lˈɔs ˈæftɚ ðæt pˈɔɪnt. ˈæftɚ ðˈɪs, ðɛɹˌɑːɹ sˌʌm stˈændɚd pɚɹˈæmɪɾɚz lˈaɪk sˈɛɾɪŋ ðə tɹˈeɪnɪŋ bˈætʃ sˈaɪz, ðə vˌælɪdˈeɪʃən bˈætʃ sˈaɪz, ðə ɡɹˈeɪdiənt ɐkjˌuːmjʊlˈeɪʃən, wˈɛðɚ juː wˈɔnt tə ɹˈʌn ɐ vˌælɪdˈeɪʃən sˈɛt ɔːɹ nˈɑːt. ðə nˈʌmbɚɹ ʌv ˈɛpɑːkz, ðə lˈɜːnɪŋ ɹˈeɪt, ɐn ˈaʊtpʊt dᵻɹˈɛktɚɹi fɔːɹ jʊɹ tɹˈeɪnɪŋ mˈɑːdəl ænd ɹɪzˈʌlts, wˈɛðɚ juː wˈɔnt tə tɹˈeɪn wɪð bɹˈeɪn flˈoʊt sˈɪkstiːn ɔːɹ nˈɑːt. juː kæn sˈɛt jʊɹ skˈɛdʒuːlɚ. |1
+ trelis_voice1_31.wav|ænd ðˈɛn ðə bˈeɪɾə, ðə bˈeɪɾə bˈeɪsɪkli wˈeɪz hˌaʊ mˈʌtʃ ʌvðə pɹˈɛfɹəns fˈaɪntˈuːnɪŋ, wˌʌts ðɪ ɪmpˈoːɹtəns ʌv ðæt lˈɔs ɹˈɛlətˌɪv tə ðə stˈændɚd ˌɛsˌɛftˈiː lˈɔs. ɹᵻmˈɛmbɚ, ˈɔːɹpəl dˈʌz tˈuː θˈɪŋz ɪn wˌʌn. ɪt dˈʌz ˌɛsˌɛftˈiː ænd ɪt dˈʌz pɹˈɛfɹəns fˈaɪntˈuːnɪŋ ɪn wˌʌn. sˌoʊ ɪf juː hæv ðɪs æt zˈiəɹoʊ.tˈuː, ɪts kˈaɪnd ʌvðɪ ɪmpˈoːɹtəns ʌvðɪ ˈɑːdz ɹˈeɪʃɪˌoʊ ɪz ɐbˌaʊt zˈiəɹoʊ.tˈuː ɹˈɛlətˌɪv tə ðɪ ˌɛsˌɛftˈiː lˈɔs. lˈæst ʌv ˈɔːl, juː kæn pˈʊʃ tə hˈʌb, sˌoʊ juː kæn sˈɛt ɐ tˈɑːɹɡɪt mˈɑːdəl nˈeɪm ɪf juː wˈɔnt tə pˈʊʃ tə hˈʌb. |1
val_list.txt ADDED
@@ -0,0 +1,8 @@
+ trelis_voice1_32.wav|sˌoʊ vˈɛɹi kwˈɪkli, ɪf wiː tˈeɪk ɐ lˈʊk æt ðə tˈɛst skɹˈɪpt, ðɪs wɪl sˈɪmpli lˈoʊd ðə mˈɑːdəl. sˌoʊ ɪt wɪl lˈoʊd ˈɔːl ʌv jʊɹ kənfˌɪɡjɚɹˈeɪʃənz. ɪt wɪl lˈoʊd ðə mˈɑːdəl hˈɪɹ, ɐ fˈæst lˈæŋɡwɪdʒ mˈɑːdəl jˈuːzɪŋ ʌnslˈɑːθ. ɪt wɪl sˈɛt ˌʌp ðə tˈoʊkənˌaɪzɚ, sˈɛt ˌʌp ðə tʃˈæt tˈɛmplət, lˈoʊd ðə dˈeɪɾəsˌɛt, ˈiːðɚ fɹʌm jʊɹ mˈænjuːəl dˈeɪɾə ðæts ɪnðə ɹˈiːpoʊ ɔːɹ fɹʌm hˈʌɡɪŋ fˈeɪs, ænd ðˈɛn ɪt wɪl ɹˈʌn ˈɪnfɚɹəns θɹuː ˈɔːl ʌv ðoʊz sˈæmpəlz ænd pɹˈɪnt ðə ɹɪzˈʌlts ˈaʊt tə fˈaɪl. |1
+ trelis_voice1_33.wav|dʒˈʌst æz ɐn ɛɡzˈæmpəl, aɪ kæn ʃˈoʊ juː wɪðˌɪn tˈɛst ˈaʊtpʊt, juːl sˈiː hˈɪɹ ɐ lˈɑːɹdʒ nˈʌmbɚɹ ʌv tˈɛsts ðæt aɪ hæv ɹˈʌn. aɪl tɹˈaɪ tə fˈaɪnd ɐ ɹˈiːsənt wˌʌn. sˌoʊ hˈɪɹ ɪz sˌʌm fˈaɪntˈuːnɪŋ ˌɔn tˈʌtʃ ɹˈʌɡbi, ænd juːl sˈiː ðɛɹ ɪz ɐ pɹˈɑːmpt, ɐ kwˈɛstʃən, ænd ˌɪɾəl pɹˈɪnt ˈaʊt ðə kɚɹˈɛkt ɹᵻspˈɑːns, ænd ˌɪɾəl ˈɔːlsoʊ pɹˈɪnt ˈaʊt ðə dʒˈɛnɚɹˌeɪɾᵻd ɹᵻspˈɑːns. ænd ðˈɛn juː kæn dʒˈʌst mˈænjuːəli kəmpˈɛɹ wˈɛðɚ ðiːz ˈænsɚz ɑːɹ ɡˈʊd ɔːɹ nˈɑːt. nˈaʊ, dʒˈʌst wˈʌn ˈʌðɚ skɹˈɪpt aɪl pˈɔɪnt ˈaʊt hˈɪɹ, wˌɪtʃ ɪz vjˈuː mˈɑːdʒuːlz. |1
+ trelis_voice1_34.wav|ɪt dʌznˌɑːt wˈɜːk ɪf jʊɹ fˈaɪn tˈuːnɪŋ ˌɔn mˈʌltaɪ dʒˌiːpˌiːjˈuː, wˌɪtʃ ɪz wˌaɪ aɪ hæv ðə mˈʌltaɪ dʒˌiːpˌiːjˈuː bɹˈæntʃ ænd ðə mˈʌltaɪ dʒˌiːpˌiːjˈuː bɹˈæntʃ ɪz kənfˈɪɡɚd mˈʌtʃ ɪn ɐ sˈɪmɪlɚ wˈeɪ tʊ ʌnslˈɑːθ, ɛksˈɛpt ðˌɐɾɪt ɐlˈaʊz juː tə ɹˈʌn ɪn fˈʊli ʃˈɑːɹdᵻd dˈeɪɾə pˈæɹəlˌɛl ɔːɹ ɪn dˈɪstɹɪbjˌuːɾᵻd dˈeɪɾə pˈæɹəlˌɛl. ɪt hɐz ɐ kənfˈɪɡ fˈaɪl ðæt juː kæn sˈɛt ˈʌp. ɪt hɐz ðə tˈɛst.pˈaɪ ænd ðə tɹˈeɪn.pˈaɪ fˈaɪl ðæt wɪl ɐlˈaʊ juː tə ɹˈʌn tˈɛstɪŋ ænd tɹˈeɪnɪŋ. ænd aɪl dʒˈʌst bɹˈiːfli ʃˈoʊ juː ðə kənfˈɪɡ fˈaɪl. |1
+ trelis_voice1_35.wav|ðæt kæn biː dˈʌn baɪ ɹˈʌnɪŋ kənfˈɪɡ, ɐksˈɛlɚɹˌeɪt kənfˈɪɡ, ænd juːl sˈiː ðɪ ɪnstɹˈʌkʃənz ɪf juː hˈɛd ˌoʊvɚ tə ðə mˈʌltaɪdʒˌiːpˌiːjˈuː bɹˈæntʃ fɔːɹ dˌuːɪŋ ðˈæt. sˌoʊ ðɪs ɪz ðɪ ɐdvˈænst fˈaɪn tˈuːnɪŋ ɹˈiːpoʊ, ænd juː kæn fˈaɪnd ˈaʊt mˈoːɹ æt tɹˈaɪəlz.kˈɑːm fˈɔːɹwɚd slˈæʃ ɐdvˈænst dˈæʃ fˈaɪn dˈæʃ tˈuːnɪŋ. ðə nˈɛkst ɹˈiːpoʊ aɪl bɹˈiːfli ɡˌoʊ θɹuː ɪz ðɪ ɐdvˈænst vˈɪʒən ɹˈiːpoʊ. ðɪs dˈʌz mˈʌtʃ ʌvðə sˈeɪm, bˌʌt fɔːɹ mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs ˈɪmɪdʒ mˈɑːdəlz. ɪɾ ɐlˈaʊz juː tə pɹɪpˈɛɹ jʊɹ dˈeɪɾə ænd pˈʊʃ ɪɾ ˌʌp tə kɹiːˈeɪt ɐ hˈʌɡɪŋ fˈeɪs dˈeɪɾəsˌɛt. |1
+ trelis_voice1_36.wav|ðˈɛn juː kæn fˈaɪn tˈuːn lˈɑːvə, ˈaɪdə fˈɪks ænd, ɔːɹ ˈaɪdə fˈɪks ænd mˈuːndɹiːm mˈɑːdəlz. juː kæn dˈuː mˌʌltɪmˈoʊdəl sˈɜːvɚ sˈɛɾʌp wɪð tˈɛkst dʒˌɛnɚɹˈeɪʃən ˈɪnfɚɹəns. ðɛɹz ɐ wˈʌŋklˈɪk tˈɛmplət fɔːɹ ɹˈʌnɪŋ ɐn ˈaɪdə fˈɪks sˈɜːvɚ, ɪŋklˈuːdɪŋ ˌɔn ɐ kˈʌstəm mˈɑːdəl. ænd lˈæst ʌv ˈɔːl, ðɛɹ ɪz ɐ skɹˈɪpt fɔːɹ fˈaɪntˈuːnɪŋ mˌʌltɪmˈoʊdəl tˈɛkst plˈʌs vˈɪdɪoʊ mˈɑːdəlz. ðɪs ɪz bˈeɪsɪkli ɐ vˌɛɹɪˈeɪʃən ˌɔn tˈɛkst plˈʌs ˈɪmɪdʒ mˈɑːdəlz wˌɛɹ juː splˈɪt ðə vˈɪdɪoʊ ˌɪntʊ mˌʌltɪpəl ˈɪmɪdʒᵻz. ðə nˈɛkst ɹˈiːpoʊ ɪz ðɪ ɐdvˈænst ˈɪnfɚɹəns ɹˈiːpoʊ. |1
+ trelis_voice1_37.wav|ɪts kwˈaɪt jˈuːsfəl fɔːɹ bˈætʃ dʒˈɑːbz ðæt ɑːɹ lˈɛs tˈaɪm sˈɛnsᵻtˌɪv, bɪkˈʌz ɪt mˈiːnz jʊɹ nˌɑːt pˈeɪɪŋ fɚðə sˈɜːvɚ wɛn ɪts nˌɑːt bˌiːɪŋ jˈuːzd, ænd ɪt wɪl dʒˈʌst tˈɜːn ˌɔn wɛn juː nˈiːd ɪt, wˌɪtʃ ɪz ɡˌoʊɪŋ tə sˈeɪv juː kˈɔst. ðɛɹˌɑːɹ ˈɔːlsoʊ ɐ nˈʌmbɚɹ ʌv skɹˈɪpts fɔːɹ mˌeɪkɪŋ ˌeɪpˌiːˈaɪ kˈɔːlz, sˈɪmpəl ˌeɪpˌiːˈaɪ kˈɔːlz, ˈoʊpən ˌeɪˈaɪ stˈaɪl ɔːɹ tˌiːdʒˌiːˈaɪ stˈaɪl, fˈʌŋkʃən kˈɔːlɪŋ ˌeɪpˌiːˈaɪ kˈɔːlz ɪf juː wˈɔnt tə tˈɛst ˈaʊt fˈʌŋkʃən kˈɔːlɪŋ pɚfˈoːɹməns əvə mˈɑːdəl. ðˈɛn ðɛɹˌɑːɹ spˈiːd tˈɛsts fɔːɹ sˈɪŋɡəl kwˈiəɹɪz ænd mˌʌltɪpəl kwˈiəɹɪz. |1
+ trelis_voice1_38.wav|sˌoʊ ðɪ aɪdˈiə ɪz tə jˈuːz ɐ vˈɛɹi fˈæst ænd ɹˈɛlətˌɪvli smˈɔːl lˈæŋɡwɪdʒ mˈɑːdəl tə pˈɪk ˈaʊt ðə ɹˈaɪt snˈɪpɪts ænd ðˈɛn ɪŋklˈuːd ðoʊz snˈɪpɪts ɪnðə kˈɑːntɛkst əvə mˈoːɹ pˈaʊɚfəl mˈɑːdəl lˈaɪk, sˈeɪ, dʒˌiːpˌiːtˈiː fˈoːɹ. ðɛɹz ˈɔːlsoʊ ɐ fˈoʊldɚ nˈaʊ ˌɔn pɹˈaɪvəsi, wˌɪtʃ ɐlˈaʊz juː tə bˈeɪsɪkli hˈaɪd ˌɪnfɚmˈeɪʃən, lˈaɪk pˈɜːsənəl ˌɪnfɚmˈeɪʃən ˌɔn kɹˈɛdɪt kˈɑːɹdz, nˈeɪmz, ˈiːmeɪl ɐdɹˈɛsᵻz, bᵻfˌoːɹ juː sˈɛnd ɪt tʊ ɐ θˈɜːdpˈɑːɹɾi ˌeɪpˌiːˈaɪ sˌoʊ ðæt juː kæn ɹᵻdˈuːs ˌɛni dˈeɪɾə pɹˈaɪvəsi ɹˈɪsks. lˈæst ʌv ˈɔːl, ðɛɹz ðɪ ɐdvˈænst tɹænskɹˈɪpʃən ɹᵻpˈɑːzɪtˌoːɹi. |1
+ trelis_voice1_39.wav|ðˈɪswˌʌn hˈɪɹ ɐlˈaʊz juː tə dʒˈɛnɚɹˌeɪt dˈeɪɾə ɪf juː wˈɔnt tə fˈaɪn tˈuːn ɐ wˈɪspɚ mˈɑːdəl ænd ðˈɛn dˈuː ðə fˈaɪn tˈuːnɪŋ. ænd ɐɡˈɛn, mˈʌtʃ ʌvðə tˈɛn tˈɪps ðæt aɪ pɹəvˈaɪdᵻd ˈɜːlɪɚɹ ɑːɹ ɡˌoʊɪŋ tʊ ɐplˈaɪ hˈɪɹ fɔːɹ tɹænskɹˈɪpʃən. ænd ðæt ɪz ɪt fɔːɹ maɪ tˈɛn tˈɪps ˌɔn fˈaɪntˈuːnɪŋ. ɪf aɪv lˈɛft ˈɛnɪθˌɪŋ ˈaʊt, plˈiːz lˈɛt mˌiː nˈoʊ bᵻlˌoʊ ɪnðə kˈɑːmɛnts ænd aɪl ɡɛt bˈæk tə juː. ɪnðə mˈiːntaɪm, ɪf juː wˈɔnt mˈoːɹ ˌɪnfɚmˈeɪʃən ˌɔn tɹˈɛliz ɹᵻsˈoːɹsᵻz, ɪŋklˈuːdɪŋ fɹˈiː ænd pˈeɪd, tɹˈaɪ ˈaʊt tɹˈɛliz.kˈɑːm. |1