SEKI-Working-Paper ISSN
1860-5931
```
UNIVERSITÄT DES SAARLANDES
FACHRICHTUNG INFORMATIK
D66123 SAARBRÜCKEN
GERMANY
WWW: http://www.ags.uni-sb.de/
```
<image>
|
|
## This Seki-Working-Paper Was Internally Reviewed By:
Jörg Siekmann FR Informatik, Universität des Saarlandes, D–66123 Saarbrücken, Germany E-mail: [email protected] WWW: http://www-ags.dfki.uni-sb.de
## Editor Of Seki Series:
Claus-Peter Wirth Brandenburger Str. 42, D–65582 Diez, Germany E-mail: [email protected] WWW: http://www.ags.uni-sb.de/~cp |
|
```
UNIVERSITÄT DES SAARLANDES
FACHRICHTUNG INFORMATIK
D–66123 SAARBRÜCKEN
GERMANY
WWW: http://www.ags.uni-sb.de/
```
# SEKI-Working-Paper ISSN 1860-5931
<image>
|
|
## This Seki-Working-Paper Was Internally Reviewed By:
Jörg Siekmann FR Informatik, Universität des Saarlandes, D–66123 Saarbrücken, Germany E-mail: **[email protected]** WWW: **http://www-ags.dfki.uni-sb.de**
## Editor Of Seki Series:
Claus-Peter Wirth Brandenburger Str. 42, D–65582 Diez, Germany E-mail: **[email protected]** WWW: http://www.ags.uni-sb.de/~cp |
|
<image>
A smaller version of the above problem is designed with 4 stages, [5,4,4,3] states in each stage, and hence 240 scenarios in total. The CP model has 15 decision variables and 10 constraints for one scenario, and 1,038 decision variables and 692 constraints for 240 scenarios. Fig. 3 shows that the Dupacova et al. algorithm requires approximately one third of the scenarios before the first decision is made correctly. It is interesting to see that the general performance of scenario reduction algorithms deteriorates in the 4-stage case. This is mainly because the longer the planning horizon, the better the chance of recovery from an early mistake.
<image>
|
|
<image>
<image>
<image>
|
|
[25] N. Shazeer, M. Littman, and G. Keim. (1999). Solving Crossword Puzzles as Probabilistic Constraint Satisfaction. In *Proceedings of the 16th National Conference on Artificial Intelligence*. American Association for Artificial Intelligence.
[26] S. A. Tarim and B. G. Kingsman. (2004). The stochastic dynamic production/inventory lot–
sizing problem with service–level constraints. *International Journal of Production Economics*, 88:105–119.
[27] T. Walsh. (2002). Stochastic Constraint Programming. In *Proceedings of ECAI-2002*.
[28] N. Yorke-Smith and C. Gervet. (2003). Certainty closure: A framework for reliable constraint reasoning with uncertainty. In F. Rossi, editor, *Proceedings of 9th International Conference on Principles and Practice of Constraint Programming (CP2003)*. Springer, Berlin. |
|
[Régin, 1996] J.C. Régin. Generalized arc consistency for global cardinality constraints. In *Proc. of AAAI'96*, pp 209–215. 1996.
[Schiex *et al.*, 1995] T. Schiex, H. Fargier, and G. Verfaillie. Valued constraint satisfaction problems: hard and easy problems. In *Proc. of IJCAI'95*, pp 631–637. 1995.
[Smith *et al.*, 1995] B.M. Smith, S.C. Brailsford, P.M. Hubbard, and H.P. Williams. The progressive party problem. Research Report 95.8, Uni. of Leeds. 1995.
[Van Hentenryck *et al.*, 1999] P. van Hentenryck, L. Michel, L. Perron, and J.C. Régin. Constraint programming in OPL. In *Proc. of PPDP'99*, pp 98–116. Springer, 1999. |
|
<image>
Figure 3: A tag cloud, showing 50 tags, with stemming applied and very frequent words ignored. Word frequencies are also shown. Produced by TagCrowd, www.tagcrowd.com, using television program CSI 101. |
|
<image>
|
|
operations. Constraints at this level are operational constraints. Many domain constraints are not explicitly represented at this level. The relationship between the ontology and the underlying data sources is represented in the following figure:
<image>
Besides learning ontologies from existing data sets, we can also reuse existing ontologies available from the Web [SP07]. The first step is to get the candidates by using ontology search tools like OntoSearch [T+04],
|
|
<image>
|
|
|
|
|
|
"+
2"2
2
**"
"2
-** ""
D
""
+2
2 \#64: = &(
"
2
2
"2 4
"
- 9 4'" )
M ) 1+ D **=BB?** - 9 72
@ \#
21+ , **=BBL**
- 9 $
D
4'" J **=BBF** - 9
" \#
21+ 4
+ **=BBL**
- 9 4'2
+ **=BBL**
- 9
M
" *
@
2
1 **=BBF**
- 9 \#"
2
"
2
! 4 , **=BBL**
- ! !+
G ) **=BB<**
62
""
"
'" 2
'"
"
8"+ "
-
" 2
2
""2
'"
4 2
"
## /,4 **%.#0**
6
"
""
" 2
2
8"
****
(
" ""
22
"
" ""
+9)
:;<=
+ 2 " " **9)
@** **2**
2
+ 2
2
2
22
-
"
2
"
" 42
2
- +
2
"
+2 N
N 62
2
0
2
" +2 2
"+
" 2
2 +
2
62 2
+
2
+2
2
22 "2 2
3
6
2
2 2
+
2
+ \# 66 \# > 2 6
" "
"
"" |
|
We first performed the simulation of the pendulum controlled by the P(I)D law. The error, rate and control signals are then pictured below:
<image>
$$\begin{array}{l l}{{\mathrm{if}}}&{{\omega>0}}\\ {{\mathrm{if}}}&{{\omega<0}}\end{array}$$
<image>
<image>
If u, θ, ω are positive (resp. negative), then U, Ξ, Ω are given a PST (resp. NGT) mnesor. We recall that multiplying two mnesors means choosing the more constrained of both (PST × NGT λ = PST and |
|
NGT × PST λ = NGT if λ f(0)). If one is positive but the other one is negative, the multiplication returns zero (PST × NGT = 0). So whenever theta and rate are of different signs, the control is zero.
The mnesor control U is calculated as follows: U = Ξ × Ω.
Then it is converted back to real values (see below).
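Purely as an illustrative toy (not the paper's formal definitions), the sign rule described above can be sketched as follows; we assume here that a mnesor is represented by a sign tag (PST/NGT) plus a magnitude λ, and we read "more constrained" as the smaller magnitude:

```
/** Toy model of a mnesor: a sign (+1 = PST, -1 = NGT, 0 = zero) and a magnitude lambda. */
record ToyMnesor(int sign, double lambda) {

    static final ToyMnesor ZERO = new ToyMnesor(0, 0.0);

    /**
     * Product rule from the text: if the operands have different signs the result is zero
     * (so the control vanishes when theta and rate disagree in sign); if they have the same
     * sign, the more constrained operand is kept (assumed here to be the smaller magnitude).
     */
    ToyMnesor times(ToyMnesor other) {
        if (sign == 0 || other.sign == 0 || sign != other.sign) {
            return ZERO;
        }
        return lambda <= other.lambda ? this : other;
    }
}
```

In this toy reading, the control of the text would be `U = xi.times(omega)`, with `xi` and `omega` the mnesors obtained from the error and the rate, before conversion back to a real value.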
<image>
$$\begin{array}{l}{\mathrm{If~}U=PST\,\lambda_{u}}\\ {\mathrm{If~}U=NGT\,\lambda_{u}}\end{array}$$
FIGURE 6. The U-to-u converter.
The error, rate and control signals are pictured below:
<image>
CONCLUSION
One can notice the similarity to fuzzy control. But here the theory is completely axiomatized. Fuzzification (what we call real-to-mnesor conversion) seems more natural, and defuzzification is much simpler.
REFERENCES |
|
performance profiles. In UAI, pages 460–467, 2008.
[9] Stuart Russell and Eric Wefald. Do the right thing: studies in limited rationality. MIT Press, Cambridge, MA, USA, 1991.
[10] Stuart J. Russell and Devika Subramanian. Provably bounded-optimal agents. *Journal of Artificial Intelligence Research*, 2:575–609, 1995.
[11] Joannès Vermorel and Mehryar Mohri. Multi-armed bandit algorithms and empirical evaluation. In *ECML*, pages 437–448, 2005.
[12] Shlomo Zilberstein. *Operational Rationality* through Compilation of Anytime Algorithms. PhD
thesis, University of California at Berkeley, Berkeley, CA, USA, 1993.
[13] Shlomo Zilberstein. Resource-bounded sensing and planning in autonomous systems. *Autonomous Robots*, pages 31–48, 1996. |
|
<image>
## Theorem 1 (Guo And Janicki [2]).
Let M = (X, *earlier than*_M, *not later than*_M, *nonsimultaneous*_M) be a gso-structure, i.e., M *satisfies all axioms from* (2.9) to (2.14). Let Ω be the set of all stratified orders ✁ *on X satisfying the following stratified order extension conditions:*
1. *nonsimultaneous*_M ⊆ ✁⇆, and
2. *not later than*_M ⊆ ✁⌢.
|
|
2. The not later than relation is represented as the following directed graph G2:
<image>
3. The nonsimultaneous relation is represented by the following (undirected) graph G3 (because nonsimultaneous is symmetric).
<image>
|
|
This Knowledge Technology factsheet was written by Dr. York Sure of Institute AIFB, University of Karlsruhe.
<image> Last updated 05/11/03 Dr. York Sure is an assistant professor at the Institute of Applied Computer Science and Formal Description Methods (AIFB) at the University of Karlsruhe. York co-organized several national and international conferences/workshops and co-authored over 35 papers as articles, collections and book chapters in the areas of Knowledge Management, Ontologies and the Semantic Web. He is currently working on the integration of Human Language Technologies, Data & Text Mining and Ontology & Metadata Technologies to enhance semantically enabled knowledge technologies. You can reach him at: [email protected] Further information: http://www.aifb.uni-karlsruhe.de/WBS/ysu KTweb Knowledge Technology Fact Sheet - Semantic Web: Page 5 www.ktweb.org November 2003 |
|
[Samulowitz and Memisevic 2007] Samulowitz, H., and Memisevic, R. 2007. Learning to Solve QBF. *In proc.*
of 22nd Conference on Artificial Intelligence, AAAI.
[Xu et al. 2007] Xu, L.; Hutter, F.; Hoos, H.; and Leyton-Brown, K. 2007. SATzilla-07: The Design and Analysis of an Algorithm Portfolio for SAT. *Lecture Notes in Computer Science* 4741:712.
[Xu, Hoos, and Leyton-Brown 2007] Xu, L.; Hoos, H. H.; and Leyton-Brown, K. 2007. Hierarchical hardness models for SAT. In *Principles and Practice of Constraint Programming (CP-07)*, volume 4741 of Lecture Notes in Computer Science, 696–711. Springer-Verlag. |
|
# Convergence Of Expected Utility For Universal AI
PETER DE BLANC
DEPARTMENT OF MATHEMATICS
TEMPLE UNIVERSITY
arXiv:0907.5598v2 [cs.AI] 2 Dec 2009 |
|
We could use a smaller hypothesis space; perhaps not all computable environments should be considered. The simplest approach may be to use a bounded utility function. Then convergence is guaranteed.
References
[1] Hutter, M., Universal Algorithmic Intelligence: A mathematical top-down approach, Artificial General Intelligence (2007), Springer
[2] Wikipedia, Kleene's Recursion Theorem (http://en.wikipedia.org/wiki/Kleene's_recursion_theorem)
[3] Solomonoff, R., A Formal Theory of Inductive Inference, Information and Control, Part I: Vol 7, No. 1, pp. 1-22, March 1964 |
|
<image>
<image>
## 4. Results:
Analysis of the first situation starts by setting the number of close-open iterations and the maximum number of rules equal to 10 and 4 in SONFIS-R, respectively. The |
|
error measure criterion in SONFIS is the Root Mean Square Error (RMSE), given below:
<image>
$$RMSE=\sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(t_{i}-t_{i}^{*}\right)^{2}}$$
where $t_i$ is the output of SONFIS and $t_i^{*}$ is the real answer; m is the number of test data (test objects). In the rest of the paper, let m = 19 and the number of training data = 150. Figure 3 shows the results of the aforesaid system. The indicated positions in Figures 3a and 3b mark the minimum RMSE over the iterations.
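As a concrete illustration (not code from the paper), the RMSE above can be computed as follows; the array-based representation and the class and method names are our own assumptions:

```
/** Illustrative sketch: RMSE over m test objects, RMSE = sqrt((1/m) * sum_i (t_i - t*_i)^2). */
public final class ErrorMeasures {

    public static double rmse(double[] modelOutput, double[] realAnswer) {
        if (modelOutput.length == 0 || modelOutput.length != realAnswer.length) {
            throw new IllegalArgumentException("need equally sized, non-empty arrays");
        }
        double sumSquared = 0.0;
        for (int i = 0; i < modelOutput.length; i++) {
            double diff = modelOutput[i] - realAnswer[i]; // t_i - t*_i for test object i
            sumSquared += diff * diff;
        }
        return Math.sqrt(sumSquared / modelOutput.length); // m = number of test objects (19 above)
    }
}
```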
It is worth noting that with this balancing criterion, we may lose the general dominant distribution on the data space. The performance of the obtained fuzzy rules on the test data is portrayed in Figure 4(a). In employing the second algorithm (Figure 3), we use, in this case, only exact rules, i.e., one decision class on the right-hand side of an if-then rule. Figure 6 depicts the performance of SORSTR over 7 random selections of the SOM structure. The applied error
<image>
|
|
<image>
<image>
In this case, the strength factor is adapted with EM in a linear form. It must be noted that for unrecognizable objects in the test data (elicited by rules) a fixed value such as 4 is ascribed. So, for the measure part, 1 is attributed when an object is not identified. This is the main reason for the swing of EM in reduced data set 6 (Figure 6-b). Clearly, on data set 5 SORST attains the lowest error (15 neurons in SOM). The total number of rules extracted from the training data set (reduced data) was 33. |
|
# Technical Report
<image>
Assessing the Impact of Informedness on a Consultant's Profit Eugen Staab and Martin Caminada Faculty of Sciences, Technology and Communication University of Luxembourg 6, rue R. Coudenhove-Kalergi 1359 Luxembourg Luxembourg ISBN 978-2-87971-027-3 September 4, 2009 |
|
3.2.1 Using R1
- Information rate ∆Narg = 2:
<image>
|
|
<image>
|
|
<image>
- Information rate ∆Narg = 4: |
|
<image>
|
|
<image>
- Information rate ∆Narg = 4: |
|
<image>
|
|
© *2009 Eugen Staab, Martin Caminada.*
ISBN 978-2-87971-027-3 |
|
<image>
## 3 Experiments
## 3.1 Conditions Of Experiments
Before dealing with the results and observations of the experiments, we present the context of these experiments. For the first experiment, we worked with the development team of betapolitique. Betapolitique is a web blog of political news. The duration of the experiment was three months (February to May 2007), during the French presidential elections. No help or explanations were provided to users. No objective or subject of debate was defined. Users of betapolitique are anonymous to each other and were not forced to participate. They could read their arguments on the right of the article (figure 6) and use a treemap (developed by the company Pikko, www.pikkosoftware.com) to see which article was the most annotated (figure 7).
<image>
|
|
<image>
<image>
For the second experiment, we collaborated with the organization committee of the ECAP congress. The web site of ECAP was designed to read and discuss the texts of the speakers of the congress. The experiment began a few days before the congress and ended at the end of the congress (but the website still exists). A tutorial is accessible. Participants of the congress were unknown to each other. Their arguments were displayed on the right of the text, and the corresponding selections were highlighted in the text (figure 8). The goal of the debate was to enhance the quality of presentations by taking into account remarks from others.
<image>
|
|
<image>
<image>
<image>
<image>
<image>
<image>
<image>
<image>
|
|
<image>
<image>
<image>
|
|
<image>
idef2(c) = 0.7
<image>
idef3(r, c) = 0.3 idef3(su, c) = ? idef3(b, c) = ? idef3(r, m) = ? idef3(su, m) = ? idef3(b, m) = 0.2 idef(p, c) = ? idef(sh,c) = 0.8 |
|
<image>
|
|
<image>
|
|
8. Meinolf Sellmann. The theory of Grammar constraints. In Proc. of the 12th Int. Conf. on the |
|
Tranouez P., Bertelle C. and Olivier D. 2005, "Changing levels of description in a fluid flow simulation", EPNADS 2005 |
|
<image>
<image>
such that the new tree induces a new path from s to t. This reduction is described in [6]. The action rep(tr, eo, ei) is called a basic move. Figure 1 gives an example of a basic move.
It is possible to consider more complex moves by applying a set of independent basic moves. Two basic moves are independent if the execution of the first one does not affect the second one, and vice versa. The execution order of these basic moves does not affect the final result. Figure 2 gives an example of a complex move.
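To make the notion concrete, here is a hedged sketch with our own names (not the LS(Graph & Tree) API): a basic move is modeled as the pair of inserted and removed edges, and edge-disjointness is used only as a simple proxy check; the framework's actual independence test may be stronger.

```
/** Illustrative only: rep(tr, eo, ei) inserts edge eo and removes edge ei in the tree tr. */
record BasicMove(int insertedEdge, int removedEdge) {

    /** Proxy check: two moves that touch no common edge are candidates for independent execution. */
    boolean edgeDisjointWith(BasicMove other) {
        return insertedEdge != other.insertedEdge && insertedEdge != other.removedEdge
            && removedEdge != other.insertedEdge && removedEdge != other.removedEdge;
    }
}
```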
## 4 Comet **Implementation**
We extend the LS(Graph & Tree) framework by implementing some GraphInvariants, *GraphConstraint*s and *GraphObjective*s (see [5] for more detail) for modeling and solving some COP problems.
GraphInvariant is a concept representing objects which maintain some properties of a dynamic graph.¹
¹ A dynamic graph is a graph that can be changed, e.g., by the removal or insertion of vertices or edges. |
|
<image>
<image>
|
|
<image>
<image>
|
|
<image>
|
|
[19] D. Applegate, W. Cook (1991): A computational study of the job-shop scheduling problem. ORSA Journal on Computing 3, pp. 149-156.
[20] R. Qu, F. He (2008): A hybrid constraint programming approach for nurse rostering problems. Applications and Innovations in Intelligent Systems XVI , pp. 211-224.
[21] Willem-Jan van Hoeve, G. Pesant, L.M. Rousseau (2006): *On global warming: Flow-based soft global constraints*. *Journal of Heuristics*, pp. 347-373. |
|
version in *Journal of Logic Programming* 37(1–3):293–316, 1998. Based on the unpublished manuscript Constraint Processing in cc(FD), 1991. |
|
<image>
<image>
|
|
<image>
Figure 2. Akaike Information Criterion (AIC) convergence vs. PSO iterations.
<image>
Figure 2 illustrates that the PSO algorithm rapidly converged close to the ultimate minimum error within the first 80 algorithm iterations. The global best model in this simulation started off as model m2, but after 3 iterations it changed to model m1 and remained unchanged for the rest of the simulation. The model order, based on the minimum of the objective function for these PSO settings, was m1, m6, m2, m7, m8, m3, m4 and then m5.
5.1.2 Simulation number 2
This simulation is the same as simulation 1 except that the objective function has been changed to SSE. Figure 4 shows the convergence plot of the SSE objective function over the first 100 algorithm iterations. Figure 5 illustrates the convergence behaviour of the global best model over 100 iterations.
<image>
|
|
## A Lemma 5
The following lemma is a straightforward version of Hölder's inequality.

Lemma 5 *Let* q, r > 1 *with* 1/q + 1/r = 1. *Then, the following result similar to Hölder's inequality holds:*

$$|\mathbf{w}\cdot\mathbf{\Phi}(x)|\leq\Big(\sum_{k=1}^{p}\|\mathbf{w}_{k}\|^{q}\Big)^{1/q}\Big(\sum_{k=1}^{p}\|\mathbf{\Phi}_{k}(x)\|^{r}\Big)^{1/r}.\tag{1}$$

Proof: Let $\Psi_q(\mathbf{w})=\big(\sum_{k=1}^{p}\|\mathbf{w}_k\|^{q}\big)^{1/q}$ and $\Psi_r(\mathbf{\Phi}(x))=\big(\sum_{k=1}^{p}\|\mathbf{\Phi}_k(x)\|^{r}\big)^{1/r}$; then

$$\begin{aligned}
\frac{|\mathbf{w}\cdot\mathbf{\Phi}(x)|}{\Psi_q(\mathbf{w})\,\Psi_r(\mathbf{\Phi}(x))}
&=\Big|\sum_{k=1}^{p}\frac{\mathbf{w}_k}{\Psi_q(\mathbf{w})}\cdot\frac{\mathbf{\Phi}_k(x)}{\Psi_r(\mathbf{\Phi}(x))}\Big|\\
&\leq\sum_{k=1}^{p}\Big|\frac{\mathbf{w}_k}{\Psi_q(\mathbf{w})}\cdot\frac{\mathbf{\Phi}_k(x)}{\Psi_r(\mathbf{\Phi}(x))}\Big|\\
&\leq\sum_{k=1}^{p}\frac{\|\mathbf{w}_k\|}{\Psi_q(\mathbf{w})}\cdot\frac{\|\mathbf{\Phi}_k(x)\|}{\Psi_r(\mathbf{\Phi}(x))}\qquad\text{(Cauchy--Schwarz)}\\
&\leq\sum_{k=1}^{p}\Big(\frac{1}{q}\,\frac{\|\mathbf{w}_k\|^{q}}{\Psi_q(\mathbf{w})^{q}}+\frac{1}{r}\,\frac{\|\mathbf{\Phi}_k(x)\|^{r}}{\Psi_r(\mathbf{\Phi}(x))^{r}}\Big)\qquad\text{(Young's inequality)}\\
&=\frac{1}{q}+\frac{1}{r}=1.
\end{aligned}$$
|
|
[18] M. Pelillo, K. Siddiqi, and S.W. Zucker. Matching hierarchical structures using association graphs. *IEEE Transactions on Pattern Analysis and Machine Intelligence*,
21(11):1105–1120, 1999.
[19] J.W. Raymond, E.J. Gardiner, and P. Willett. RASCAL: Calculation of graph similarity using maximum common edge subgraphs. *Computer Journal*, 45(6):631–
644, 2002.
[20] K. Schädler and F. Wysotzki. Comparing structures using a Hopfield-style neural network. *Applied Intelligence*, 11:15–30, 1999.
[21] R.C. Wilson and E.R. Hancock. Structural matching by discrete relaxation. *IEEE*
Transactions on Pattern Analysis and Machine Intelligence, 19(6):634–648, 1997. |
|
<image>
<image>
|
|
<image>
|
|
<image>
<image>
Figure 19: Primitive classifier for parallel lines.
classifier clpar(dp1*, pt*) is simple:
pos −→ R |
|
28. A. Schenker, M. Last, H. Bunke, and A. Kandel, "Clustering of web documents using a graph model", *Web Document Analysis: Challenges and Opportunities*, p.
1–16, 2003.
29. A. Schenker, M. Last, H. Bunke, and A. Kandel, Graph-Theoretic Techniques for Web Content Mining, World Scientific Publishing, 2005.
30. S. Theodoridis and K. Koutroumbas, *Pattern Recognition*, Elsevier, 2009.
31. A. Torsello and E.R. Hancock, "Learning shape-classes using a mixture of tree-unions", *IEEE Transactions on Pattern Analysis and Machine Intelligence*,
28(6):954-967, 2006.
32. S. Umeyama, "An eigendecomposition approach to weighted graph matching problems", *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 10(5):695–
703, 1988.
33. M. Van Wyk, M. Durrani, and B. Van Wyk, "A RKHS interpolator-based graph matching algorithm", *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 24(7):988–995, 2002. |
|
3. c: the connectivity degree, i.e. the number of afferent synapses of the neuron.

The N bipolar neurons are linked by connections, named synapses, each bearing a weight ∈ {-1, 0, +1}.² At time t, the i-th node computes incoming signals from afferent neurons and, at time t + 1, produces a signal, i.e. fires, according to the following update law:

$$\sigma_{i}^{t+1}=\operatorname{sgn}\left(\sum_{j=1}^{N}{\frac{w_{ij}\,\sigma_{i}^{t}}{c_{i}}}-b_{i}\right)\qquad\qquad(1)$$

where sgn(x) returns the sign of the real number x, $w_{ij}$ is the incoming weighted synapse of the i-th neuron from the j-th one, and $c_{i}=\sum_{j}|w_{ij}|$. This formalization makes the adopted formal neuron similar to that of McCulloch & Pitts [9].
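As an illustration only (not code from the paper), one synchronous application of the update law (1) could look like the sketch below; class and parameter names are ours, and the afferent signal entering the sum is taken to be that of the j-th neuron, which is how we read the surrounding text:

```
/** Illustrative sketch of update law (1): sigma_i(t+1) = sgn( sum_j w_ij * sigma_j(t) / c_i - b_i ). */
public final class BipolarStep {

    /** w[i][j] in {-1,0,+1} is the synapse from neuron j to neuron i; sigma holds bipolar states. */
    public static int[] step(int[][] w, int[] sigma, double[] b) {
        int n = sigma.length;
        int[] next = new int[n];
        for (int i = 0; i < n; i++) {
            double c = 0.0;                               // c_i = sum_j |w_ij|, the connectivity degree
            for (int j = 0; j < n; j++) c += Math.abs(w[i][j]);
            if (c == 0) { next[i] = sigma[i]; continue; } // no afferent synapses: keep state (our choice)
            double field = -b[i];
            for (int j = 0; j < n; j++) field += w[i][j] * sigma[j] / c;
            next[i] = field >= 0 ? 1 : -1;                // bipolar sgn; the tie at 0 maps to +1 here
        }
        return next;
    }
}
```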
Dynamics: search of an asymptotic configuration. By generating an arbitrary $\vec{\sigma}$ and $\vec{b}$, and a connection structure W with entries $w_{ij}$ uniformly distributed in {-1, 0, +1}, we are able to define the start-
<image>
|
|
passage among basins. By decreasing T, only moves that decrease the error E begin to be accepted, see Eq. (4), causing a more exhaustive exploration of the small-E region of the basin up to the reaching of the global minimum. The dependence of E on n and N shown in Fig. 3b,c can be easily related to the task difficulty, typically correlated with the number of instances of the learning problem and the number of involved functional areas.
<image>
Figure 4 shows the passage from an initial unconstrained dynamics to an optimized one, ruled by the learning of a scheme. This transition also corresponds to a passage from a $\zeta^{t_0}$ having one of the possible τ and l (Fig. 4a), to a $\zeta^{t_{meas}}$ having τ and l respectively equal to zero and one (Fig. 4b).
<image>
|
|
<image>
|
|
<image>
<image>
<image>
<image>
<image>
|
|
<image>
<image>
Figure 10. (d): Real example using RCSES step 4.
<image>
Figure 10. (g): Real example using RCSES step 7.
<image>
<image>
<image>
Figure 10. (e): Real example using RCSES step 5.
Figure 10. (h): Real example using RCSES step 8.
<image>
|
|
Fig. 4 shows the 3D outcome for M = 751 iterations and
<image>
<image>
Fig. 5 shows the linear approximation for G with respect to iterations 748 to 751.
<image>
It can be seen that as the iterations are increased, the values of the objective function decrease. The percentage error is minimum at iteration M = 748; however, after that it increases until it peaks at M = 750; thereafter, the percentage error decreases again to a level lower than that at M = 748. This shows that the maximum number of iterations that can be used for similar cases in the future can be limited to M = 750. Figs. 6, 7 and 8 show the linear approximation for the decision variables x1, x2 and x3 with respect to the number of iterations. It can be observed that x1, x2 and x3 decrease as the iterations are increased from M = 748 to M = 751.
Table II presents results that involve α1, α2 and α3 with M = 748 for G, x1, x2 and x3. Other results for M = 749 to 751 are given in the appendix. |
|
4. Prosser, P.: Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence 9(3) (1993) 268–299
5. Bessière, C., Debruyne, R.: Theoretical analysis of singleton arc consistency and its extensions. Artificial Intelligence 172(1) (2008) 29–41
6. Freuder, E.C., Elfe, C.D.: Neighborhood inverse consistency preprocessing. In: AAAI 1996. 202–208
7. Kotthoff, L.: Constraint solvers: An empirical evaluation of design decisions. CIRCA preprint (2009) http://www-circa.mcs.st-and.ac.uk/Preprints/solver-design.pdf.
8. Rendl, A., Gent, I.P., Miguel, I.: Tailoring solver-independent constraint models: A case study with Essence' and Minion. In: SARA 2007. 184–199
9. Gent, I.P., Jefferson, C., Miguel, I.: MINION: A fast scalable constraint solver. In: ECAI 2006. 98–102 |
|
# Table Of Contents
## Hamiltonian Mechanics
Hamiltonian Mechanics unter besonderer Berücksichtigung der höheren Lehranstalten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Ivar Ekeland, Roger Temam, Jeffrey Dean, David Grove, Craig Chambers, Kim B. Bruce, Elisa Bertino
| Ivar Ekeland, Roger Temam, Jeffrey Dean, David Grove, Craig Chambers, Kim B. Bruce, Elisa Bertino | |
|-----------------------------------------------------------------------------------------------------|----|
| Hamiltonian Mechanics2 | 7 |
| Ivar Ekeland and Roger Temam | |
| Author Index | 13 |
| Subject Index | 17 |
|
|
<image>
<image>
Fig. 4. The social golfers problem expressed in s-COMMA.
<image>
|
|
System.out.println();
aFuser.fuse(aFuser1, aFuser2, myReferee);
System.out.println(" aFuser - myReferee ");
System.out.println("Conflict Z = " + aFuser.conflict() + " %");
System.out.println(aFuser.state(printMode));
System.out.println();

    }

}
|
|
add(currentPair);
update_notification();
return (B) this;
    }
}
|
|
 */
B fuse(B left, B right, RefereeFunctionDefault<Prop> theRefereeFunction);

/**
 * Compute the combination (fusion) of basic belief assignments within array
 * <i>bbaIn</i> by means of the referee function <i>theRefereeFunction</i> and
 * store the result within <i>this</i>.
 *
 */
B fuse(ArrayList<B> bbaIn, RefereeFunctionDefault<Prop> theRefereeFunction);

/**
 * Return the conflict of the last combination.
 *
 */
double conflict();

}
|
|
 *
<image>
<image>
<image>
 */
@Override
<image>
}
<image>
    finalProposition.zero();
    int i;
    for (i = 0; i < size(-1); i++) {
        tmpProposition.atomic(i);
        finalProposition.or(finalProposition, tmpProposition);
    }
}

}
|
|
 * defined with the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link Lattice#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalClosedhyperpowerset instance() { return new finalClosedhyperpowerset(); }
}
|
|
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link Lattice#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalFreeboolean instance() { return new finalFreeboolean(); }
}
|
|
 * defined with the non generic (typically final) sub-classes</u>.
<image>
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link Lattice#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalOpenhyperpowerset instance() { return new finalOpenhyperpowerset(); }
}
|
|
 * classes but <u>is necessary</u> for some methods. <u>It has to be defined with
<image>
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeFuserRTS_Closedhyperpowerset instance() { return new finalRefereeFuserRTS_Closedhyperpowerset(); }

}
|
|
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeFuserRTS_Freeboolean instance() { return new finalRefereeFuserRTS_Freeboolean(); }
}
|
|
 * classes but <u>is necessary</u> for some methods. <u>It has to be defined with
<image>
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeFuserRTS_Openhyperpowerset instance() { return new finalRefereeFuserRTS_Openhyperpowerset(); }
}
|
|
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeFuserRTS_Powerset instance() { return new finalRefereeFuserRTS_Powerset(); }
}
|
|
 * classes but <u>is necessary</u> for some methods. <u>It has to be defined with
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
<image>

}
|
|
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeSampler_Freeboolean instance() { return new finalRefereeSampler_Freeboolean(); }
}
|
|
 * classes but <u>is necessary</u> for some methods. <u>It has to be defined with
 * the non generic (typically final) sub-classes</u>.
 * <br><br>
 * For a given non generic sub-class <i>myNonGenericSubclass</i>, a typical
 * definition of {@link BasicBeliefAssignment#instance()} is as follows:
 * <br><br>
 * <font color="#004488"><code>
 * {@code @Override}
 * <br>
 * public myNonGenericSubclass instance() { return new myNonGenericSubclass(); }
 * </code></font>
 *
 */
@Override
public finalRefereeSampler_Openhyperpowerset instance() { return new finalRefereeSampler_Openhyperpowerset(); }

}
|
|
 * The complement and cocomplement are the same for {@link ArrayBoolean}
 * <BR><BR>
 * <b>Documentation inherited from {@link ComplementedLattice}:</b><BR>
 * {@inheritDoc}
 *
 */
public L cocomplement() {
    return complement();
}

}
|
|
 *
 */
@Override
public int size(int newSize) { // if newSize is possible, then change size to newSize
    if ((newSize >= 0) && (newSize <= sizeMax)) {
        sizeFrame = newSize;
        sizeSet = (1 << sizeFrame);
        size_mem_1 = (sizeSet - 1) / 64;
        highest_long_one = sizeSet - 64 * size_mem_1;
        highest_long_one = (((1L << (highest_long_one - 1)) - 1) << 1) + 1;
        _memory = new long[size_mem_1 + 1];
    }
    return sizeFrame;
}

@Override
public L size(L input) {
    size(input.size(-1));
    return (L) this;
}

}
|
|
public L cocomplement() {
    if (theZero == null) {
        theZero = instanceNsize().zero();
        theOne = instanceNsize().one();
    }
    if (compareTo(theOne) == 0) zero();
    else one();
    return (L) this;
}

}
|
|
$$\frac{x_{5A}}{0.4}=\frac{y_{5B}}{0.7}=\frac{z_{5A\cup B}}{0.5}=\frac{0.14}{1.6}=\frac{1.4}{16},\qquad x_{5A}\cong 0.035000,\quad y_{5B}\cong 0.061250,\quad z_{5A\cup B}\cong 0.043750$$

$$\frac{x_{6A}}{0.1}=\frac{y_{6B}}{0.1}=\frac{z_{6A\cup B}}{0.5}=\frac{0.005}{0.7}=\frac{0.05}{7},\qquad x_{6A}\cong 0.000714,\quad y_{6B}\cong 0.000714,\quad z_{6A\cup B}\cong 0.003572$$

$$\frac{x_{7A}}{0.1}=\frac{y_{7B}}{(0.1)(0.1)}=\frac{(0.1)(0.1)(0.1)}{0.1+(0.1)(0.1)}=\frac{0.001}{0.11},\qquad x_{7A}\cong 0.000909,\quad y_{7B}\cong 0.000091$$

$$\frac{x_{8A}}{0.4}=\frac{y_{8B}}{(0.7)(0.1)}=\frac{(0.4)(0.7)(0.1)}{0.4+(0.7)(0.1)}=\frac{0.028}{0.47}=\frac{2.8}{47},\qquad x_{8A}\cong 0.023830,\quad y_{8B}\cong 0.004170$$

$$x_{9A}=x_{8A}\cong 0.023830,\qquad y_{9B}=y_{8B}\cong 0.004170$$

$$\frac{x_{10A}}{(0.1)(0.4)}=\frac{y_{10B}}{0.1}=\frac{(0.1)(0.4)(0.1)}{(0.1)(0.4)+0.1}=\frac{0.004}{0.14}=\frac{0.4}{14}=\frac{0.2}{7},\qquad x_{10A}\cong 0.001143,\quad y_{10B}\cong 0.002857$$

$$x_{11A}=x_{10A}\cong 0.001143,\qquad y_{11B}=y_{10B}\cong 0.002857$$
|
|
<image>
|
|
<image>
|
|
[Régin 1994] Régin, J.-C. 1994. A filtering algorithm for constraints of difference in CSPs. In *12th Nat. Conf. on AI*, 362–367.
[Stergiou & Walsh 1999] Stergiou, K., and Walsh, T. 1999. The difference all-difference makes. In *Proc. of 16th IJCAI*, 414–419. |
|
<image>
|
|
## An Approach To Visualize The Course Of Solving Of A Research Task In Humans
Vladimir L. Gavrikov a,*, Rem G. Khlebopros b
a V.P. Astafiev Krasnoyarsk Pedagogical University, 89 A. Lebedeva St., 660049, Krasnoyarsk, Russia
b Research Center of Extreme States of Organism at Krasnoyarsk Scientific Center of RAS, Akademgorodok 50, 660036, Russia
Suggested running head: COMPUTER-BASED RESEARCH TASK
*Corresponding author. Post OB 8745, Akademgorodok, Krasnoyarsk, 660036, Russia, tel. +7 391 249 8402, mobile +7 913 042 4304, fax +7 391 211 07 29, E-mail address: [email protected] |
|
<image>
Fig. 7. Phase portraits of the work with the RWR program for solvers.
Because of the large number of clicks, the data for the participant "K" were smoothed by a sliding mean with a window width of 5. The time flow is marked by arrows. For the participants "K" and "B", the beginning phase is shown as a solid line and the closing phase as a dashed line.
The phase portraits for non-solvers are shown in Fig. 8. As can be seen from the data, the trajectories of non-solvers are also spiral-like cycles. However, while working with the RWR program, the participants "Ch" and "G" either did not move towards the solution or moved too slowly. The estimates of the mean increments of error numbers give an idea of the difference between solvers and non-solvers. The mean increments for the participants "K", "M", and "B" amount to -0.27, -0.56, and -0.69, respectively, i.e., they are negative, which means that |
|
$\dot{X} = f(X)$ is shown as the dashed line. The arrows show the spontaneous dynamics of the system, with the point "0" being a stable and the point "1" an unstable steady state.
A description of the research task that was suggested to volunteer participants is given below in the Methods section.
## Methods
A specially developed computer program, Right-Wrong Responder (RWR), was used that generated and visualized the cue material for the participants. The cue material presented geometrical figures: circles, squares, and triangles. Each of the figures had three grades of gray color: light, medium, and dark. They also had three grades of size: small, medium, and large. Thus, the whole variety of figures consisted of 27 figure variants. Each set shown to the participants contained nine figures. Every particular set of cues was generated by the program. A view of a realization of the program's work is shown in Fig. 2.
<image>
|
|
| «Ch» | 16,7 | 71 | no |
|--------|--------|------|------|
| «G» | 14,4 | 219 | no |
## Results And Discussion
Error frequencies. Figures 3 and 4 show the participants' clicks against time for solvers and non-solvers, respectively. These data indicate the speed of clicking in the course of work with the program. For comparison, every figure contains a straight line whose slope characterizes the average clicking speed over the overall time of work.
<image>
Fig. 3. Sequence number of clicks plotted against time when the clicks were done by solvers. The straight lines characterize the average speed of clicking. |
|
[Turing, 1986] A. M. Turing. Lecture to the London Mathematical Society on 20th February 1947. In B. E. Carpenter and R. W. Doran, editors, *A. M. Turing's ACE report of 1946 and other papers*. MIT Press, Cambridge, 1986. |
|
<image>
|
|
<image>
|
|
<image>
|
|
<image>
online fora (e-mail discussions, blog and twitter posts, wikis, etc.) with direct rendering of the results. In short, the third advantage is that a web service provides a mechanism for online communication of verification results.
The fourth (and probably greatest) advantage of server-based verification is the raw verification speed. A dedicated server usually runs on reasonably new hardware with enough memory, etc. For example, even for the relatively short CARD_1 Mizar article mentioned above, full verification on a recent notebook |
|
After the th answer has been received, an assistant carries out the following instructions:
1. Save the answer in memory and put = 0.
2. For = 1, … , , start up
(
) for cycles of calculation.
3. If has provided not less than + − 1 answers, and these answers are equal to the received answers, put = 1.
4. Report the value of to .
Supervisor treats the messages δ and δℜ of and ℜ according to the following table.
<image>
If some communicable TM ℭ has passed the test, then symbol 1 appears in the sequence of messages from the assistant that processes the answers of , and then for some the first + - 1 answers of are equal to the answers of
. As a result, {ℭ ⊳ ℑ, } ≤
∞ +−1
=1. □ |
|
Algorithm 2: Rescaling
Input: Training records Φ. Output: Clusters C.

initialize C to empty
forall (φ, F) ∈ Φ do
    if ∃ c ∈ C such that F ⊆ c or F ⊇ c then
        forall p ∈ F \ c do
            add p to c with wc(p) := ε
        end
        if wc(φ) ≥ 1 then
            increment wc(φ)
        else
            wc(φ) := 1
        end
    else
        initialize c to empty
        add c to C
        forall p ∈ F do
            add p to c with wc(p) := ε
        end
        wc(φ) := 1
    end
end
while ∃ c, d ∈ C such that c ∩ d ≠ ∅ do
    sum_ratios := 0
    forall p ∈ c ∩ d do
        sum_ratios += wc(p) / wd(p)
    end
    scale := sum_ratios / |c ∩ d|
    forall p ∈ d \ c do
        add p to c with wc(p) := wd(p) · scale
    end
    remove d from C
end
return C
|
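A runnable sketch of Algorithm 2, under our own representation choices (none of which come from the source): a training record is a value φ together with its feature set F, a cluster is a map from points to weights, and ε is a small positive constant whose value is not specified above.

```
import java.util.*;

/** Illustrative implementation of Algorithm 2 (Rescaling); representation choices are ours. */
final class Rescaling {

    record TrainingRecord(String phi, Set<String> features) {}

    static final double EPSILON = 1e-6; // the initial weight epsilon; the concrete value is an assumption

    static List<Map<String, Double>> cluster(List<TrainingRecord> records) {
        List<Map<String, Double>> clusters = new ArrayList<>();
        for (TrainingRecord rec : records) {
            Map<String, Double> c = findComparable(clusters, rec.features());
            if (c != null) {
                for (String p : rec.features()) c.putIfAbsent(p, EPSILON);          // p in F \ c gets weight eps
                c.merge(rec.phi(), 1.0, (w, unused) -> w >= 1 ? w + 1 : 1.0);       // increment or set w_c(phi)
            } else {
                c = new HashMap<>();
                for (String p : rec.features()) c.put(p, EPSILON);
                c.put(rec.phi(), 1.0);
                clusters.add(c);
            }
        }
        // Merge pairs of overlapping clusters, rescaling the weights of the absorbed cluster d.
        boolean merged = true;
        while (merged) {
            merged = false;
            search:
            for (int i = 0; i < clusters.size(); i++) {
                for (int j = 0; j < clusters.size(); j++) {
                    if (i == j) continue;
                    Map<String, Double> c = clusters.get(i), d = clusters.get(j);
                    Set<String> common = new HashSet<>(c.keySet());
                    common.retainAll(d.keySet());
                    if (common.isEmpty()) continue;
                    double sumRatios = 0.0;
                    for (String p : common) sumRatios += c.get(p) / d.get(p);
                    double scale = sumRatios / common.size();                       // average ratio over c ∩ d
                    for (String p : d.keySet()) c.putIfAbsent(p, d.get(p) * scale); // add points of d \ c
                    clusters.remove(j);
                    merged = true;
                    break search;
                }
            }
        }
        return clusters;
    }

    /** Returns a cluster whose point set contains F or is contained in F, if any (F ⊆ c or F ⊇ c). */
    private static Map<String, Double> findComparable(List<Map<String, Double>> clusters, Set<String> f) {
        for (Map<String, Double> c : clusters) {
            if (c.keySet().containsAll(f) || f.containsAll(c.keySet())) return c;
        }
        return null;
    }
}
```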