Columns: topic_shift (bool, 2 classes); utterance (string, 1–7.9k characters); session_id (string, 7–14 characters)
false
Yeah. Uh, maybe I should write it on the board. So, there's four rounds of training. Um, I g I g I guess you could say iterations. The first one is three, then seven, seven, and seven. And what these numbers refer to is the number of times that the, uh, HMM re-estimation is run. It's this program called H E
QMSum_120
false
But in HTK, what's the difference between, uh, a an inner loop and an outer loop in these iterations?
QMSum_120
false
OK. So what happens is, um, at each one of these points, you increase the number of Gaussians in the model.
QMSum_120
false
Yeah. Oh, right! This was the mix-up stuff.
QMSum_120
false
Yeah. The mix-up.
QMSum_120
false
That's right.
QMSum_120
false
Right.
QMSum_120
false
I remember now.
QMSum_120
false
And so, in the final one here, you end up with, uh for all of the the digit words, you end up with, uh, three mixtures per state,
QMSum_120
false
Yeah.
QMSum_120
false
eh, in the final thing. So I had done some experiments where I was I I want to play with the number of mixtures.
QMSum_120
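(A minimal sketch of the round structure being described, assuming the standard HTK tools HERest for re-estimation and HHEd for the mixture "mix-up" between rounds; the file names, config, and directory layout are hypothetical, and only the 3, 7, 7, 7 iteration counts and the idea of adding Gaussians between rounds come from the discussion.)

import os
import subprocess

# Hypothetical file names and layout; only the 3/7/7/7 schedule and the
# mix-up between rounds come from the discussion above.
ROUNDS = [3, 7, 7, 7]                  # HERest re-estimation passes per round
SCP, MLF, LIST = "train.scp", "words.mlf", "models.list"
hmm_dir = "hmm0"                       # directory holding the current hmmdefs

for r, n_iters in enumerate(ROUNDS):
    # Run the HMM re-estimation (HERest) n_iters times.
    for i in range(n_iters):
        new_dir = f"hmm{r}_{i}"
        os.makedirs(new_dir, exist_ok=True)
        subprocess.run(["HERest", "-C", "config", "-I", MLF, "-S", SCP,
                        "-H", os.path.join(hmm_dir, "hmmdefs"),
                        "-M", new_dir, LIST], check=True)
        hmm_dir = new_dir
    # Between rounds, increase the Gaussians per state ("mix-up") with a
    # hypothetical HHEd edit script such as:  MU 2 {*.state[2-4].mix}
    if r < len(ROUNDS) - 1:
        mix_dir = f"hmm{r}_mixup"
        os.makedirs(mix_dir, exist_ok=True)
        subprocess.run(["HHEd", "-H", os.path.join(hmm_dir, "hmmdefs"),
                        "-M", mix_dir, f"mixup{r}.hed", LIST], check=True)
        hmm_dir = mix_dir

(Shortening the schedule, e.g. to [3, 2, 2, 5], or appending a fifth round with a further mix-up, would only change the ROUNDS list in a setup like this.)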
false
Mm-hmm.
QMSum_120
false
But, um, uh, I wanted to first test to see if we actually need to do this many iterations early on.
QMSum_120
false
Uh, one, two,
QMSum_120
false
Mm-hmm.
QMSum_120
false
And so, um, I I ran a couple of experiments where I reduced that to l to be three, two, two, uh, five, I think, and I got almost the exact same results.
QMSum_120
false
Mm-hmm.
QMSum_120
false
And but it runs much much faster. So, um, I I think m it only took something like, uh, three or four hours to do the full training,
QMSum_120
false
As opposed to?
QMSum_120
false
Good.
QMSum_120
false
as opposed to wh what, sixteen hours or something like that? I mean, it takes you have to do an overnight basically, the way it is set up now.
QMSum_120
false
Yeah. It depends.
QMSum_120
false
Mm-hmm.
QMSum_120
false
Mm-hmm.
QMSum_120
false
So, uh, even if we don't do anything else, doing something like this could allow us to turn experiments around a lot faster.
QMSum_120
false
And then when you have your final thing, do a full one, so it's
QMSum_120
false
And when you have your final thing, we go back to this.
QMSum_120
false
Yeah.
QMSum_120
false
So, um, and it's a real simple change to make. I mean, it's like one little text file you edit and change those numbers, and you don't do anything else.
QMSum_120
false
Oh, this is a
QMSum_120
false
Mm-hmm.
QMSum_120
false
And then you just run.
QMSum_120
false
OK.
QMSum_120
false
So it's a very simple change to make and it doesn't seem to hurt all that much.
QMSum_120
false
So you you run with three, two, two, five? That's a
QMSum_120
false
So I Uh, I I have to look to see what the exact numbers were.
QMSum_120
false
Yeah.
QMSum_120
false
I I thought it was, like, three, two, two, five,
QMSum_120
false
Mm-hmm.
QMSum_120
false
but I I'll I'll double check. It was over a week ago that I did it,
QMSum_120
false
OK. Mm-hmm.
QMSum_120
false
so I can't remember exactly.
QMSum_120
false
Oh.
QMSum_120
false
But, uh
QMSum_120
false
Mm-hmm.
QMSum_120
false
um, but it's so much faster. I it makes a big difference.
QMSum_120
false
Hmm.
QMSum_120
false
So we could do a lot more experiments and throw a lot more stuff in there.
QMSum_120
false
Yeah.
QMSum_120
false
That's great.
QMSum_120
false
Um. Oh, the other thing that I did was, um, I compiled the HTK stuff for the Linux boxes. So we have this big thing that we got from IBM, which is a five-processor machine. Really fast, but it's running Linux. So, you can now run your experiments on that machine and you can run five at a time and it runs, uh, as fast as, you know, uh, five different machines.
QMSum_120
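(One way to use a five-processor machine like that is simply to launch several independent trainings at once; a minimal sketch, assuming each experiment is wrapped in its own hypothetical shell script.)

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-experiment driver scripts; with five processors, up to
# five independent trainings can run side by side.
experiments = ["exp_a.sh", "exp_b.sh", "exp_c.sh", "exp_d.sh", "exp_e.sh"]

def run(script):
    # Each experiment is an external process, so threads are enough here.
    return subprocess.run(["sh", script], check=True).returncode

with ThreadPoolExecutor(max_workers=5) as pool:
    codes = list(pool.map(run, experiments))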
false
Mm-hmm.
QMSum_120
false
Mm-hmm.
QMSum_120
false
So, um, I've forgotten now what the name of that machine is but I can I can send email around about it.
QMSum_120
false
Yeah.
QMSum_120
false
And so we've got it now HTK's compiled for both the Linux and for, um, the Sparcs. Um, you have to make you have to make sure that in your dot CSHRC, um, it detects whether you're running on the Linux or a a Sparc and points to the right executables. Uh, and you may not have had that in your dot CSHRC before, if you were always just running the Sparc. So, um,
QMSum_120
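(The dot CSHRC check itself would be written in csh; the selection logic it performs looks roughly like this Python sketch, with hypothetical install paths. platform.system() reports "Linux" on the Linux boxes and "SunOS" on the Sparcs.)

import os
import platform

# Hypothetical install locations for the two HTK builds.
HTK_BIN = {
    "Linux": "/usr/local/htk/linux/bin",
    "SunOS": "/usr/local/htk/sparc/bin",
}

bin_dir = HTK_BIN.get(platform.system())
if bin_dir is None:
    raise RuntimeError("No HTK build for this platform")
# Point PATH at the matching executables, as the dot CSHRC would.
os.environ["PATH"] = bin_dir + os.pathsep + os.environ.get("PATH", "")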
false
Mm-hmm.
QMSum_120
false
uh, I can I can tell you exactly what you need to do to get all of that to work. But it'll it really increases what we can run on.
QMSum_120
false
Hmm. Cool.
QMSum_120
false
So, together with the fact that we've got these faster Linux boxes and that it takes less time to do these, um, we should be able to crank through a lot more experiments.
QMSum_120
false
Mm-hmm.
QMSum_120
false
So.
QMSum_120
false
Hmm.
QMSum_120
false
So after I did that, then what I wanted to do was try increasing the number of mixtures, just to see, um see how how that affects performance.
QMSum_120
false
Yeah.
QMSum_120
false
So.
QMSum_120
false
Yeah. In fact, you could do something like keep exactly the same procedure and then add a fifth thing onto it
QMSum_120
false
Mm-hmm.
QMSum_120
false
that had more.
QMSum_120
false
Exactly.
QMSum_120
false
Yeah.
QMSum_120
false
Right. Right.
QMSum_120
false
So at at the middle o where the arrows are showing, that's you're adding one more mixture per state,
QMSum_120
false
Uh-huh. Uh,
QMSum_120
false
or?
QMSum_120
false
let's see, uh. It goes from this uh, try to go it backwards this at this point it's two mixtures per state. So this just adds one. Except that, uh, actually for the silence model, it's six mixtures per state.
QMSum_120
false
Mm-hmm.
QMSum_120
false
Uh, so it goes to two.
QMSum_120
false
OK.
QMSum_120
false
Um. And I think what happens here is
QMSum_120
false
Might be between, uh, shared, uh shared variances or something,
QMSum_120
false
Yeah. I think that's what it is.
QMSum_120
false
or
QMSum_120
false
Uh, yeah. It's, uh Shoot. I I I can't remember now what happens at that first one. Uh, I have to look it up and see.
QMSum_120
false
Oh, OK.
QMSum_120
false
Um, there because they start off with, uh, an initial model which is just this global model, and then they split it to the individuals. And so, it may be that that's what's happening here. I I I have to look it up and see. I I don't exactly remember.
QMSum_120
false
OK.
QMSum_120
false
OK.
QMSum_120
false
So. That's it.
QMSum_120
false
Alright. So what else?
QMSum_120
true
Um. Yeah. There was a conference call this Tuesday. Um. I don't know yet the what happened Tuesday, but the points that they were supposed to discuss is still, uh, things like the weights, uh
QMSum_120
false
Oh, this is a conference call for, uh, uh, Aurora participant sort of thing.
QMSum_120
false
For
QMSum_120
false
Yeah. Yeah.
QMSum_120
false
I see.
QMSum_120
false
Mmm.
QMSum_120
false
Do you know who was who was since we weren't in on it, uh, do you know who was in from OGI? Was was was Hynek involved or was it Sunil
QMSum_120
false
I have no idea.
QMSum_120
false
or?
QMSum_120
false
Mmm, I just
QMSum_120