topic_shift
bool
2 classes
utterance
stringlengths
1
7.9k
session_id
stringlengths
7
14
false
So And , uh , it it doesn't appear that there 's strong evidence that even though things were somewhat tuned on those three or four languages , that that going to a different language really hurt you. And the noises were not exactly the same. Right ? Because it was taken from a different , uh I mean they were different drives.
QMSum_86
false
Different cars. Yeah.
QMSum_86
false
I mean , it was it was actual different cars and so on.
QMSum_86
false
Yeah.
QMSum_86
false
So. Um , it 's somewhat tuned. It 's tuned more than , you know , a a a a
QMSum_86
false
Mm - hmm.
QMSum_86
false
You 'd really like to have something that needed no particular noise at all , maybe just some white noise or something like that a at most.
QMSum_86
false
Mm - hmm.
QMSum_86
false
But that 's not really what this contest is. So. Um , I guess it 's OK.
QMSum_86
false
Mm - hmm.
QMSum_86
false
That 's something I 'd like to understand before we actually use something from it ,
QMSum_86
false
I think it 's
QMSum_86
false
because it would
QMSum_86
false
it 's probably something that , mmm , the you know , the , uh , experiment designers didn't really think about , because I think most people aren't doing trained systems , or , you know , uh , systems that are like ours , where you actually use the data to build models. I mean , they just doing signal - processing.
QMSum_86
false
Yeah.
QMSum_86
false
Well , it 's true ,
QMSum_86
false
So.
QMSum_86
false
except that , uh , that 's what we used in Aurora one , and then they designed the things for Aurora - two knowing that we were doing that.
QMSum_86
false
Yeah. That 's true.
QMSum_86
false
Um.
QMSum_86
false
And they didn't forbid us right ? to build models on the data ?
QMSum_86
false
No. But , I think I think that it it it probably would be the case that if , say , we trained on Italian , uh , data and then , uh , we tested on Danish data and it did terribly , uh , that that it would look bad. And I think someone would notice and would say " Well , look. This is not generalizing. " I would hope tha I would hope they would.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Um. But , uh , it 's true. You know , maybe there 's parameters that other people have used you know , th that they have tuned in some way for other things. So it 's it 's , uh We should we should Maybe that 's maybe a topic Especially if you talk with him when I 'm not here , that 's a topic you should discuss with Hynek
QMSum_86
false
Mm - hmm.
QMSum_86
false
to , you know , double check it 's OK.
QMSum_86
false
Do we know anything about the speakers for each of the , uh , training utterances ?
QMSum_86
false
What do you mean ? We we
QMSum_86
false
Do you have speaker information ?
QMSum_86
false
Social security number
QMSum_86
false
That would be good.
QMSum_86
false
Like , we have male , female ,
QMSum_86
false
Hmm.
QMSum_86
false
Bank PIN.
QMSum_86
false
at least.
QMSum_86
false
Just male f female ?
QMSum_86
false
Mmm.
QMSum_86
false
What kind of information do you mean ?
QMSum_86
false
Well , I was thinking about things like , you know , gender , uh you know , gender - specific nets and , uh , vocal tract length normalization.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Things like that. I d I don't I didn't know what information we have about the speakers that we could try to take advantage of.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Hmm. Uh. Right. I mean , again , i if you had the whole system you were optimizing , that would be easy to see. But if you 're supposedly just using a fixed back - end and you 're just coming up with a feature vector , w w I 'm not sure I mean , having the two nets Suppose you detected that it was male , it was female you come up with different
QMSum_86
false
Well , you could put them both in as separate streams or something. Uh.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Maybe.
QMSum_86
true
I don't know. I was just wondering if there was other information we could exploit.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Hmm. Yeah , it 's an interesting thought. Maybe having something along the I mean , you can't really do vocal tract normalization. But something that had some of that effect
QMSum_86
false
Yeah.
QMSum_86
false
being applied to the data in some way.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Um.
QMSum_86
false
Do you have something simple in mind for I mean , vocal tract length normalization ?
QMSum_86
false
Uh no. I hadn't I hadn't thought it was thought too much about it , really. It just something that popped into my head just now. And so I I I mean , you could maybe use the ideas a similar idea to what they do in vocal tract length normalization. You know , you have some sort of a , uh , general speech model , you know , maybe just a mixture of Gaussians that you evaluate every utterance against , and then you see where each , you know , utterance like , the likelihood of each utterance. You divide the the range of the likelihoods up into discrete bins and then each bin 's got some knob uh , setting.
QMSum_86
false
Yeah. But just listen to yourself. I mean , that uh really doesn't sound like a real - time thing with less than two hundred milliseconds , uh , latency that and where you 're not adjusting the statistical engine at all.
QMSum_86
false
Yeah. Yeah.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Yeah. That 's true.
QMSum_86
false
You know , that just
QMSum_86
false
Right.
QMSum_86
false
Hmm.
QMSum_86
false
I mean Yeah.
QMSum_86
false
Could be expensive.
QMSum_86
false
No. Well not just expensive. I I I don't see how you could possibly do it. You can't look at the whole utterance and do anything. You know , you can only Right ?
QMSum_86
false
Oh ,
QMSum_86
false
Each frame comes in and it 's gotta go out the other end.
QMSum_86
false
right.
QMSum_86
false
So , uh
QMSum_86
false
Right. So whatever it was , it would have to be uh sort of on a per frame basis.
QMSum_86
false
Yeah.
QMSum_86
false
Mm - hmm.
QMSum_86
false
Yeah. I mean , you can do , um Fairly quickly you can do male female f male female stuff.
QMSum_86
false
Yeah. Yeah.
QMSum_86
false
But as far as , I mean Like I thought BBN did a thing with , uh , uh , vocal tract normalization a ways back. Maybe other people did too. With with , uh , uh , l trying to identify third formant average third formant using that as an indicator of
QMSum_86
false
I don't know.
QMSum_86
false
So. You know , third formant I if you imagine that to first order what happens with , uh , changing vocal tract is that , uh , the formants get moved out by some proportion
QMSum_86
false
Mm - hmm.
QMSum_86
false
So , if you had a first formant that was one hundred hertz before , if the fifty if the vocal tract is fifty percent shorter , then it would be out at seven fifty hertz , and so on. So , that 's a move of two hundred fifty hertz. Whereas the third formant which might have started off at twenty - five hundred hertz , you know , might be out to thirty - seven fifty , you know so it 's at So , although , you frequently get less distinct higher formants , it 's still third formant 's kind of a reasonable compromise , and
QMSum_86
false
Mm - hmm.
QMSum_86
false
So , I think , eh , if I recall correctly , they did something like that. And and
QMSum_86
false
Hmm.
QMSum_86
false
But Um , that doesn't work for just having one frame or something.
QMSum_86
false
Yeah.
QMSum_86
false
Mm - hmm.
QMSum_86
false
You know ? That 's more like looking at third formant over over a turn or something like that ,
QMSum_86
false
Mm - hmm.
QMSum_86
false
and
QMSum_86
false
Right.
QMSum_86
false
Um. So. But on the other hand , male female is a is a is a much simpler categorization than figuring out a a factor to , uh , squish or expand the the spectrum.
QMSum_86
false
Mm - hmm.
QMSum_86
false
So , um. Y you could imagine that I mean , just like we 're saying voiced - unvoiced is good to know uh , male female is good to know also. Um.
QMSum_86
false
Mm - hmm.
QMSum_86
false
But , you 'd have to figure out a way to to to , uh , incorporate it on the fly. Uh , I mean , I guess , as you say , one thing you could do is simply , uh , have the the male and female output vectors you know , tr nets trained only on males and n trained only on females or or , uh , you know. But Um. I don't know if that would really help , because you already have males and females and it 's mm - hmm putting into one net. So is it ?
QMSum_86
false
Is it balanced , um , in terms of gender the data ?
QMSum_86
false
Mmm.
QMSum_86
false
Do you know ?
QMSum_86
false
Almost , yeah.
QMSum_86
false
Hmm.
QMSum_86
false
Mm - hmm.
QMSum_86