topic_shift: bool (2 classes)
utterance: string (1 to 7.9k characters)
session_id: string (7 to 14 characters)
false
Yeah , but But how does the expert but how does the expert system know how who which one to declare the winner , if it doesn't know the question it is , and how that question should be answered ?
QMSum_134
false
Based on the k what the question was , so what the discourse , the ontology , the situation and the user model gave us , we came up with these values for these decisions.
QMSum_134
false
Yeah I know. But how do we weight what we get out ? As , which one i Which ones are important ? So my i So , if we were to it with a Bayes - net , we 'd have to have a node for every question that we knew how to deal with , that would take all of the inputs and weight them appropriately for that question.
QMSum_134
false
Mm - hmm.
QMSum_134
false
Does that make sense ? Yay , nay ?
QMSum_134
false
Um , I mean , are you saying that , what happens if you try to scale this up to the situation , or are we just dealing with arbitrary language ?
QMSum_134
false
We
QMSum_134
false
Is that your point ?
QMSum_134
false
Well , no. I I guess my question is , Is the reason that we can make a node f or OK. So , lemme see if I 'm confused. Are we going to make a node for every question ? Does that make sense ?
QMSum_134
false
For every question ?
QMSum_134
false
Or not.
QMSum_134
false
Like
QMSum_134
false
Every construction.
QMSum_134
false
Hmm. I don't Not necessarily , I would think. I mean , it 's not based on constructions , it 's based on things like , uh , there 's gonna be a node for Go - there or not , and there 's gonna be a node for Enter , View , Approach.
QMSum_134
false
Wel W OK. So , someone asked a question.
QMSum_134
false
Yeah.
QMSum_134
false
How do we decide how to answer it ?
QMSum_134
false
Well , look at look Face yourself with this pr question. You get this You 'll have y This is what you get. And now you have to make a decision. What do we think ? What does this tell us ? And not knowing what was asked , and what happened , and whether the person was a tourist or a local , because all of these factors have presumably already gone into making these posterior probabilities. What what we need is a just a mechanism that says , " Aha ! There is "
QMSum_134
false
Yeah. I just don't think a " winner - take - all " type of thing is the
QMSum_134
false
I mean , in general , like , we won't just have those three , right ? We 'll have , uh , like , many , many nodes. So we have to , like So that it 's no longer possible to just look at the nodes themselves and figure out what the person is trying to say.
QMSum_134
false
Yep. Because there are interdependencies , right ? The uh Uh , no. So if if for example , the Go - there posterior possibility is so high , um , uh , w if it 's if it has reached reached a certain height , then all of this becomes irrelevant. So. If even if if the function or the history or something is scoring pretty good on the true node , true value
QMSum_134
false
Wel I don't know about that , cuz that would suggest that I mean
QMSum_134
false
He wants to go there and know something about it ?
QMSum_134
false
Do they have to be mutual Yeah. Do they have to be mutually exclusive ?
QMSum_134
false
I think to some extent they are. Or maybe they 're not.
QMSum_134
false
Cuz I , uh The way you describe what they meant , they weren't mutu uh , they didn't seem mutually exclusive to me.
QMSum_134
false
Well , if he doesn't want to go there , even if the Enter posterior proba So.
QMSum_134
false
Wel
QMSum_134
false
Go - there is No. Enter is High , and Info - on is High.
QMSum_134
false
Well , yeah , just out of the other three , though , that you had in the
QMSum_134
false
Hmm ?
QMSum_134
false
those three nodes. The - d They didn't seem like they were mutually exclusive.
QMSum_134
false
No , there 's No. But It 's through the
QMSum_134
false
So th s so , yeah , but some So , some things would drop out , and some things would still be important.
QMSum_134
false
Mm - hmm.
QMSum_134
false
But I guess what 's confusing me is , if we have a Bayes - net to deal w another Bayes - net to deal with this stuff ,
QMSum_134
false
Mm - hmm.
QMSum_134
false
you know , uh , is the only reason OK , so , I guess , if we have a Ba - another Bayes - net to deal with this stuff , the only r reason we can design it is cuz we know what each question is asking ?
QMSum_134
false
Yeah. I think that 's true.
QMSum_134
false
And then , so , the only reason way we would know what question he 's asking is based upon Oh , so if Let 's say I had a construction parser , and I plug this in , I would know what each construction the communicative intent of the construction was
QMSum_134
false
Mm - hmm.
QMSum_134
false
and so then I would know how to weight the nodes appropriately , in response. So no matter what they said , if I could map it onto a Where - Is construction , I could say , " ah !
QMSum_134
false
Ge Mm - hmm.
QMSum_134
false
well the the intent , here , was Where - Is " ,
QMSum_134
false
OK , right.
QMSum_134
false
and I could look at those.
QMSum_134
false
Yeah. Yes , I mean. Sure. You do need to know I mean , to have that kind of information.
QMSum_134
false
Hmm. Yeah , I 'm also agreeing that a simple pru Take the ones where we have a clear winner. Forget about the ones where it 's all sort of middle ground. Prune those out and just hand over the ones where we have a winner. Yeah , because that would be the easiest way. We just compose as an output an XML mes message that says. " Go there now. " " Enter historical information. " And not care whether that 's consistent with anything. Right ? But in this case if we say , " definitely he doesn't want to go there. He just wants to know where it is. " or let 's call this this " Look - At - H " He wants to know something about the history of. So he said , " Tell me something about the history of that. " Now , the e But for some reason the Endpoint - Approach gets a really high score , too. We can't expect this to be sort of at O point three , three , three , O point , three , three , three , O point , three , three , three. Right ? Somebody needs to zap that. You know ? Or know There needs to be some knowledge that
QMSum_134
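The "clear winner" pruning described in the utterance above (keep only decisions where one value dominates, drop the 0.333/0.333/0.333 middle ground) can be sketched in a few lines. All node names, values, and the margin threshold below are invented for illustration; this is not the actual system's code.

```python
# Hypothetical sketch of the "clear winner" pruning the speakers discuss:
# keep a decision node only if its top posterior beats the runner-up by
# some margin, so near-uniform cases (e.g. 0.333/0.333/0.333) are dropped.

def prune_unclear(decisions, margin=0.2):
    """decisions maps a node name to {value: posterior probability}.
    Returns {node: winning_value} only for nodes with a clear winner."""
    winners = {}
    for node, dist in decisions.items():
        ranked = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)
        best, second = ranked[0], ranked[1]
        if best[1] - second[1] >= margin:
            winners[node] = best[0]
    return winners

posteriors = {
    "Go-there": {"yes": 0.85, "no": 0.15},
    "EVA": {"Enter": 0.34, "View": 0.33, "Approach": 0.33},  # no clear winner
}
print(prune_unclear(posteriors))  # only Go-there survives the pruning
```

Only the pruned-down winners would then be composed into the output (the "Go there now" style XML message mentioned above); the ambiguous nodes are simply withheld.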
false
We Yeah , but , the Bayes - net that would merge I just realized that I had my hand in between my mouth and my micr er , my and my microphone. So then , the Bayes - net that would merge there , that would make the decision between Go - there , Info - on , and Location , would have a node to tell you which one of those three you wanted , and based upon that node , then you would look at the other stuff.
QMSum_134
false
Yep. Yep.
QMSum_134
false
I mean , it i Does that make sense ?
QMSum_134
false
Yep. It 's sort of one of those , that 's It 's more like a decision tree , if if you want. You first look o at the lowball ones ,
QMSum_134
false
Yeah , i
QMSum_134
false
and then
QMSum_134
false
Yeah , I didn't intend to say that every possible OK. There was a confusion there , k I didn't intend to say every possible thing should go into the Bayes - net , because some of the things aren't relevant in the Bayes - net for a specific question. Like the Endpoint is not necessarily relevant in the Bayes - net for Where - Is until after you 've decided whether you wanna go there or not.
QMSum_134
false
Mm - hmm.
QMSum_134
false
Right.
QMSum_134
false
Show us the way , Bhaskara.
QMSum_134
false
I guess the other thing is that um , yeah. I mean , when you 're asked a specific question and you don't even Like , if you 're asked a Where - Is question , you may not even look like , ask for the posterior probability of the , uh , EVA node , right ? Cuz , that 's what I mean , in the Bayes - net you always ask for the posterior probability of a specific node. So , I mean , you may not even bother to compute things you don't need.
QMSum_134
false
Um. Aren't we always computing all ?
QMSum_134
false
No. You can compute , uh , the posterior probability of one subset of the nodes , given some other nodes , but totally ignore some other nodes , also. Basically , things you ignore get marginalized over.
QMSum_134
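The point made above, that nodes you do not query simply get marginalized over, can be shown with a toy joint distribution. The node names and numbers below are invented, and a real system would ask Hugin or JavaBayes for the posterior rather than summing a joint table by hand; this is only a sketch of what marginalization means.

```python
# Toy illustration of marginalization: querying one node while ignoring
# another just sums the joint probability over the ignored node's values.
# Node names and probabilities are made up for this example.

def query(joint, variables, target):
    """Marginal distribution of `target` from a joint table
    {assignment_tuple: probability}, summing out every other variable."""
    idx = variables.index(target)
    marginal = {}
    for assignment, p in joint.items():
        val = assignment[idx]
        marginal[val] = marginal.get(val, 0.0) + p
    return marginal

variables = ["Go-there", "EVA"]
joint = {  # a normalized joint over the two nodes
    (True, "Enter"): 0.30, (True, "View"): 0.15, (True, "Approach"): 0.15,
    (False, "Enter"): 0.10, (False, "View"): 0.20, (False, "Approach"): 0.10,
}
print(query(joint, variables, "Go-there"))  # EVA is summed (marginalized) out
```

Asking only for `Go-there` never requires committing to a value for `EVA`, which is exactly why an unqueried node costs nothing at decision time.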
false
Yeah , but that 's that 's just shifting the problem. Then you would have to make a decision ,
QMSum_134
false
Yeah. So you have to make
QMSum_134
false
" OK , if it 's a Where - Is question , which decision nodes do I query ? "
QMSum_134
false
Yeah. Yes. But I would think that 's what you want to do.
QMSum_134
false
That 's un
QMSum_134
false
Right ?
QMSum_134
false
Mmm.
QMSum_134
false
Well , eventually , you still have to pick out which ones you look at.
QMSum_134
false
Yeah.
QMSum_134
false
So it 's pretty much the same problem ,
QMSum_134
false
Yeah it 's it 's it 's apples and oranges.
QMSum_134
false
isn't it ?
QMSum_134
false
Nuh ? I mean , maybe it does make a difference in terms of performance , computational time.
QMSum_134
false
Mm - hmm.
QMSum_134
false
So either you always have it compute all the posterior possibilities for all the values for all nodes , and then prune the ones you think that are irrelevant ,
QMSum_134
false
Mmm.
QMSum_134
false
or you just make a p @ @ a priori estimate of what you think might be relevant and query those.
QMSum_134
false
Yeah.
QMSum_134
false
So basically , you 'd have a decision tree query , Go - there. If k if that 's false , query this one. If that 's true , query that one. And just basically do a binary search through the ?
QMSum_134
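The query order proposed above (query Go-there first; if true look at the endpoint, if false look at the information nodes) amounts to a small decision tree over which posteriors to request. The sketch below fakes the inference call with a lookup table; in the real setup `posterior` would be a query against the Bayes net, and the node names are taken loosely from the discussion.

```python
# Sketch of the decision-tree query order described above: ask for
# Go-there first, then query only the nodes that are still relevant.
# `posterior` stands in for a real inference call (e.g. via Hugin);
# here it just reads precomputed posteriors from a table.

def posterior(table, node):
    return table[node]

def decide(table):
    """Query Go-there first, then branch to the relevant follow-up node."""
    if posterior(table, "Go-there") > 0.5:
        # Wants to go there: the endpoint (Enter/View/Approach) matters.
        eva = posterior(table, "EVA")
        return ("go", max(eva, key=eva.get))
    # Doesn't want to go: location vs. history information is what's left.
    info = posterior(table, "Info-on")
    return ("info", max(info, key=info.get))

table = {
    "Go-there": 0.9,
    "EVA": {"Enter": 0.7, "View": 0.2, "Approach": 0.1},
    "Info-on": {"Location": 0.6, "History": 0.4},
}
print(decide(table))  # ('go', 'Enter')
```

Note that when `Go-there` comes back false, the EVA node is never queried at all, which matches the earlier point that irrelevant nodes need not be computed.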
false
I don't know if it would necessarily be that , uh , complicated. But , uh I mean , it w
QMSum_134
false
Well , in the case of Go - there , it would be. In the case Cuz if you needed an If y If Go - there was true , you 'd wanna know what endpoint was. And if it was false , you 'd wanna d look at either Lo - Income Info - on or History.
QMSum_134
false
Yeah. That 's true , I guess. Yeah , so , in a way you would have that.
QMSum_134
false
Also , I 'm somewhat boggled by that Hugin software.
QMSum_134
false
OK , why 's that ?
QMSum_134
false
I can't figure out how to get the probabilities into it. Like , I 'd look at
QMSum_134
false
Mm - hmm.
QMSum_134
true
It 's somewha It 's boggling me.
QMSum_134
false
OK. Alright. Well , hopefully it 's fixable.
QMSum_134
false
Ju
QMSum_134
false
It 's there 's a
QMSum_134
false
Oh yeah , yeah. I d I just think I haven't figured out what the terms in Hugin mean , versus what Java Bayes terms are.
QMSum_134
false
OK.
QMSum_134
false
Um , by the way , are Do we know whether Jerry and Nancy are coming ?
QMSum_134
false
So we can figure this out.
QMSum_134
false
Or ?
QMSum_134
false
They should come when they 're done their stuff , basically , whenever that is. So.
QMSum_134
false
What d what do they need to do left ?
QMSum_134
false
Um , I guess , Jerry needs to enter marks , but I don't know if he 's gonna do that now or later. But , uh , if he 's gonna enter marks , it 's gonna take him awhile , I guess , and he won't be here.
QMSum_134
false
And what 's Nancy doing ?
QMSum_134