topic_shift (bool, 2 classes) | utterance (string, 1–7.9k chars) | session_id (string, 7–14 chars) |
---|---|---|
false | Do something. | QMSum_87 |
false | But his his goal was always to proceed from there to then allow broad category change also. | QMSum_87 |
false | Uh - huh. But , eh do do you think that if you consider all the frames to apply the the , eh the BIC criterion to detect the the the different acoustic change , eh between speaker , without , uh with , uh silence or with overlapping , uh , I think like like , eh eh a general , eh eh way of process the the acoustic change. | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | In a first step , I mean. | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | An - and then , eh eh without considering the you you you , um you can consider the energy like a another parameter in the in the feature vector , eh. | QMSum_87 |
false | Right. Absolutely. | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | This this is the idea. And if , if you do that , eh eh , with a BIC uh criterion for example , or with another kind of , eh of distance in a first step , and then you , eh you get the , eh the hypothesis to the this change acoustic , eh to po process | QMSum_87 |
false | Right. | QMSum_87 |
false | Because , eh eh , probably you you can find the the eh a small gap of silence between speaker with eh eh a ga mmm , small duration Less than , eh two hundred milliseconds for example | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | and apply another another algorithm , another approach like , eh eh detector of ene , eh detector of bass - tone energy to to consider that , eh that , eh zone. of s a small silence between speaker , or another algorithm to to process , eh the the segment between marks eh founded by the the the BIC criterion and applied for for each frame. | QMSum_87 |
false | Mm - hmm. Mm - hmm. | QMSum_87 |
false | I think is , eh nnn , it will be a an an a more general approach the if we compare with use , eh a neural net or another , eh speech recognizer with a broad class or or narrow class , because , in my opinion eh it 's in my opinion , eh if you if you change the condition of the speech , I mean , if you adjust to your algorithm with a mixed speech file and to , eh to , eh adapt the neural net , eh used by Javier with a mixed file. | QMSum_87 |
false | Mm - hmm. Mm - hmm. | QMSum_87 |
false | uh With a m mixed file , | QMSum_87 |
false | With the what file ? | QMSum_87 |
false | " Mixed ". | QMSum_87 |
false | with a the mix , mix. | QMSum_87 |
false | " Mixed. " | QMSum_87 |
false | " Mixed ? " | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | Sorry. And and then you you , eh you try to to apply that , eh , eh , eh , speech recognizer to that signal , to the PDA , eh speech file , I I think you will have problems , because the the the the condition you you will need t t I I suppose that you will need to to to retrain it. | QMSum_87 |
false | Well , I I | QMSum_87 |
false | Oh , absolutely. This is this is not what I was suggesting to do. | QMSum_87 |
false | u Look , I I think this is a One once It 's a I used to work , like , on voiced on voice silence detection , you know , and this is this kind of thing. | QMSum_87 |
false | Really ? Yeah. | QMSum_87 |
false | Um If you have somebody who has some experience with this sort of thing , and they work on it for a couple months , they can come up with something that gets most of the cases fairly easily. Then you say , " OK , I don't just wanna get most of the cases I want it to be really accurate. " Then it gets really hard no matter what you do. So , the p the problem is is that if you say , " Well I I have these other data over here , that I learn things from , either explicit training of neural nets or of Gaussian mixture models or whatever. " | QMSum_87 |
false | Yeah. | QMSum_87 |
false | Uh Suppose you don't use any of those things. You say you have looked for acoustic change. Well , what does that mean ? That that means you set some thresholds somewhere or something , | QMSum_87 |
false | Yeah. | QMSum_87 |
false | right ? and and so where do you get your thresholds from ? | QMSum_87 |
false | Yeah. | QMSum_87 |
false | From something that you looked at. So you always have this problem , you 're going to new data um H how are you going to adapt whatever you can very quickly learn about the new data ? Uh , if it 's gonna be different from old data that you have ? And I think that 's a problem with this. | QMSum_87 |
false | Well , also what I 'm doing right now is not intended to be an acoustic change detector for far - field mikes. What I 'm doing is trying to use the close - talking mike and just use Can - and just generate candidate and just try to get a first pass at something that sort of works. | QMSum_87 |
false | Yeah ! | QMSum_87 |
false | You have candidates. | QMSum_87 |
false | Actually actually actually | QMSum_87 |
false | the candidate. | QMSum_87 |
false | to make marking easier. Yeah. | QMSum_87 |
false | Or | QMSum_87 |
false | and I haven't spent a lot of time on it and I 'm not intending to spend a lot of time on it. | QMSum_87 |
false | OK. I um , I , unfortunately , have to run , | QMSum_87 |
false | So. | QMSum_87 |
false | but , um I can imagine uh building a um model of speaker change detection that takes into account both the far - field and the uh actually , not just the close - talking mike for that speaker , but actually for all of th for all of the speakers. | QMSum_87 |
false | Yep. Everyone else. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | um If you model the the effect that me speaking has on your microphone and everybody else 's microphone , as well as on that , and you build , um basically I think you 'd you would build a an HMM that has as a state space all of the possible speaker combinations | QMSum_87 |
false | All the Yep. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | and , um you can control | QMSum_87 |
false | It 's a little big. | QMSum_87 |
false | It 's not that big actually , um | QMSum_87 |
false | Two to the N. Two to the number of people in the meeting. | QMSum_87 |
false | But Actually , Andreas may maybe maybe just something simpler but but along the lines of what you 're saying , | QMSum_87 |
false | Anyway. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | I was just realizing , I used to know this guy who used to build , uh um , mike mixers automatic mike mixers where , you know , t in order to able to turn up the gain , you know , uh as much as you can , you you you lower the gain on on the mikes of people who aren't talking , | QMSum_87 |
false | Mmm. | QMSum_87 |
false | Yeah Yeah. | QMSum_87 |
false | Mmm. Mm - hmm. | QMSum_87 |
false | right ? And then he had some sort of reasonable way of doing that , | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | but uh , what if you were just looking at very simple measures like energy measures but you don't just compare it to some threshold overall but you compare it to the energy in the other microphones. | QMSum_87 |
false | I was thinking about doing that originally to find out who 's the loudest , and that person is certainly talking. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | But I also wanted to find threshold uh , excuse me , mol overlap. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | So , not just just the loudest. | QMSum_87 |
false | But , eh | QMSum_87 |
false | Mm - hmm. | QMSum_87 |
false | I I Sorry. I I have found that when when I I analyzed the the speech files from the , eh mike , eh from the eh close eh microphone , eh I found zones with a a different level of energy. | QMSum_87 |
false | Sorry , I have to go. | QMSum_87 |
false | OK. Could you fill that out anyway ? Just , put your name in. Are y you want me to do it ? I 'll do it. | QMSum_87 |
false | But he 's not gonna even read that. Oh. | QMSum_87 |
false | I know. | QMSum_87 |
false | including overlap zone. including. because , eh eh depend on the position of the of the microph of the each speaker to , eh , to get more o or less energy i in the mixed sign in the signal. and then , if you consider energy to to detect overlapping in in , uh , and you process the the in the the the speech file from the the the mixed signals. The mixed signals , eh. I I think it 's it 's difficult , um only to en with energy to to consider that in that zone We have eh , eh , overlapping zone Eh , if you process only the the energy of the , of each frame. | QMSum_87 |
false | Well , it 's probably harder , but I I think what I was s nnn noting just when he when Andreas raised that , was that there 's other information to be gained from looking at all of the microphones and you may not need to look at very sophisticated things , | QMSum_87 |
false | Yeah. | QMSum_87 |
false | because if there 's if most of the overlaps you know , this doesn't cover , say , three , but if most of the overlaps , say , are two , if the distribution looks like there 's a couple high ones and and the rest of them are low , | QMSum_87 |
false | Yeah. Yeah. Yeah. Yeah. | QMSum_87 |
false | And everyone else is low , yeah. | QMSum_87 |
false | you know , what I mean , | QMSum_87 |
false | Yeah. | QMSum_87 |
false | there 's some information there about their distribution even with very simple measures. | QMSum_87 |
false | Yeah. Yeah. | QMSum_87 |
false | Uh , by the way , I had an idea with while I was watching Chuck nodding at a lot of these things , is that we can all wear little bells on our heads , so that then you 'd know that | QMSum_87 |
false | Yeah. | QMSum_87 |
false | Ding , ding , ding , ding. | QMSum_87 |
false | Yeah. | QMSum_87 |
false | " Ding ". That 's cute ! | QMSum_87 |
false | I think that 'd be really interesting too , with blindfolds. Then | QMSum_87 |
false | Nodding with blindfolds , | QMSum_87 |
false | Yeah. The question is , like whether | QMSum_87 |
false | " what are you nodding about ? " | QMSum_87 |
false | Well , trying with and with and without , yeah. | QMSum_87 |
false | " Sorry , I 'm just I 'm just going to sleep. " | QMSum_87 |
false | But then there 's just one @ @ , like. | QMSum_87 |
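Several of the utterances above sketch a simple cross-channel idea for overlap detection: rather than comparing each close-talking mike's energy to a fixed threshold, compare it to the energy on the other microphones in the same frame, and call "overlap" when two or more channels are simultaneously loud. The sketch below is a minimal illustration of that idea only; the function names, frame length, and relative threshold are all assumptions, not part of the dataset or the original discussion.

```python
# Illustrative sketch of the cross-channel energy comparison discussed above:
# per-frame energy on each close-talking channel, "active" = within some
# fraction of the loudest channel, "overlap" = >= 2 channels active at once.
# All names and thresholds here are assumptions for illustration.

def frame_energies(channel, frame_len=160):
    """Mean squared amplitude per non-overlapping frame."""
    return [
        sum(s * s for s in channel[i:i + frame_len]) / frame_len
        for i in range(0, len(channel) - frame_len + 1, frame_len)
    ]

def active_speakers(channels, rel_threshold=0.5, frame_len=160):
    """For each frame, list the channels whose energy is within
    rel_threshold of the loudest channel's energy in that frame."""
    energies = [frame_energies(ch, frame_len) for ch in channels]
    n_frames = min(len(e) for e in energies)
    labels = []
    for t in range(n_frames):
        frame = [e[t] for e in energies]
        loudest = max(frame)
        labels.append([i for i, e in enumerate(frame)
                       if loudest > 0 and e >= rel_threshold * loudest])
    return labels

def overlap_frames(labels):
    """Indices of frames where two or more channels are active at once."""
    return [t for t, act in enumerate(labels) if len(act) >= 2]
```

As noted in the discussion, this only distinguishes "one loud channel" from "several loud channels"; small silence gaps between speakers (e.g. under 200 ms) and three-way overlaps would need a further pass, such as the BIC-based change detection mentioned above.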