| text (string, lengths 301–426) | source (3 classes) | __index_level_0__ (int64, 0–404k) |
---|---|---|
symmetric Laplacian. Physical Interpretation of Laplacian Eigenfunctions
[Figure residue: panels (a)–(c) plotting the Laplacian eigenfunctions φ0, φ1, φ2, φ3 over the index range 0–100, with color bars running from Min to Max.] | ieee_xplore | 7,340 |
Rule classification organizes the rules according to their semantic interpretation. Three basic classes of rules are defined:
• Traffic flow rules involve source and destination addresses.
• Provided services rules consist of destination port (i.e., the service) and destination address (i.e., the service | ieee_xplore | 7,816 |
In practical evaluation, the rule base instead of a single rule is used to test the performance of the IDS. Ten thousand runs of GP are executed and the average results are reported. The average value of FAR is 0.41% and the average value of P_D is 0.5714. The ROC shows P_D close to 100% when the FAR is | ieee_xplore | 7,966 |
The study used the 1999 DARPA intrusion detection data set. To make the data set more realistic, the subset chosen consisted of 1% to 1.5% attacks and 98.5% to 99% normal traffic. The results from the Enhanced SVMs had 87.74% accuracy, a 10.20% FP rate, and a 27.27% FN rate. Those results were sub- | ieee_xplore | 8,052 |
relations. Facts observed in the KG are stored as a collection of triples $\mathbb{D}^{+} = \{(h, r, t)\}$. Each triple is composed of a head entity $h \in \mathcal{E}$, a tail entity $t \in \mathcal{E}$, and a relation $r \in \mathcal{R}$ between them, e.g., (AlfredHitchcock, DirectorOf, Psycho). Here, $\mathcal{E}$ denotes the set of entities, and $\mathcal{R}$ the set of relations. | ieee_xplore | 8,239 |
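A minimal sketch of how such triples might be stored and queried in Python; the entity and relation names come from the example above, and the set-of-tuples layout is an illustrative assumption, not the paper's implementation:

```python
# Knowledge-graph facts as a set of (head, relation, tail) triples.
triples = {
    ("AlfredHitchcock", "DirectorOf", "Psycho"),
    ("AlfredHitchcock", "Gender", "Male"),
}

# Derive the entity set E and relation set R from the stored facts.
entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}

# Simple lookup: all tails connected to a head by a given relation.
def tails(head, relation):
    return [t for h, r, t in triples if h == head and r == relation]

print(tails("AlfredHitchcock", "DirectorOf"))  # ['Psycho']
```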
former uses the same sparse projection matrix $M_r(\theta_r)$ for each relation $r$, i.e., $h_\perp = M_r(\theta_r)h$, $t_\perp = M_r(\theta_r)t$. The latter introduces two separate sparse projection matrices $M^1_r(\theta^1_r)$ and $M^2_r(\theta^2_r)$ for that relation, one to project head entities, and the other tail entities, i.e., $h_\perp = M^1_r(\theta^1_r)h$, $t_\perp = M^2_r(\theta^2_r)t$. | ieee_xplore | 8,268 |
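A small NumPy sketch of the two projection schemes described above (a shared matrix versus separate head/tail matrices); the embedding dimensionality and the way sparsity is imposed here are illustrative assumptions:

```python
import numpy as np

d = 4                    # embedding dimension (assumed for illustration)
h = np.random.randn(d)   # head entity embedding
t = np.random.randn(d)   # tail entity embedding

# Shared scheme: one sparse projection matrix per relation.
M_r = np.diag([1.0, 1.0, 0.0, 0.0])      # toy sparse matrix
h_shared, t_shared = M_r @ h, M_r @ t

# Separate scheme: distinct matrices for head and tail entities.
M1_r = np.diag([1.0, 0.0, 1.0, 0.0])
M2_r = np.diag([0.0, 1.0, 0.0, 1.0])
h_sep, t_sep = M1_r @ h, M2_r @ t
```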
such that
$a^{(\ell)} = M^{(\ell)} z^{(\ell-1)} + b^{(\ell)}$, $\ell = 1, \ldots, L$;
$z^{(\ell)} = \mathrm{ReLU}(a^{(\ell)})$, $\ell = 1, \ldots, L$;
where $M^{(\ell)}$ and $b^{(\ell)}$ represent the weight matrix and bias for the $\ell$th layer, respectively. After the feedforward process, the score is given by matching the output of the last hidden layer and the embedding of the tail entity, i.e., | ieee_xplore | 8,327 |
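A minimal NumPy sketch of the feedforward pass defined above; the excerpt truncates before the matching function, so the inner product used as the score here is only an assumption:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp_score(z0, weights, biases, tail_embedding):
    """Run z through L layers a = M z + b, z = ReLU(a), then match to the tail."""
    z = z0
    for M, b in zip(weights, biases):
        z = relu(M @ z + b)
    # Assumed matching function: inner product of last hidden output and tail embedding.
    return float(z @ tail_embedding)
```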
position of the projection matrices associated with all sub-categories of $c_i$. Two types of composition operations are used, i.e.,
addition: $M_{c_i} = b_1 M_{c_i^{(1)}} + \cdots + b_\ell M_{c_i^{(\ell)}}$;
multiplication: $M_{c_i} = M_{c_i^{(1)}} \circ \cdots \circ M_{c_i^{(\ell)}}$.
Here $c_i^{(1)}, \ldots, c_i^{(\ell)}$ are sub-categories of $c_i$ in the hierarchy; | ieee_xplore | 8,402 |
attributes of entities (e.g., (AlfredHitchcock, Gender, Male)), but most KG embedding techniques do not explicitly distinguish between relations and attributes. Take the tensor factorization model RESCAL as an example. In this model, each KG relation is encoded as a slice of the tensor, no matter | ieee_xplore | 8,466 |
scheme which can multicast from the
source to all the sinks. To simplify notation, we adopt the convention that
for. At time, information
transactions occur in the following order:
T1. sends to, T2. sends to, , and, T3. sends to
T4. sends to
T5. sends to
T6. sends to
T7. sends to
T8. sends to
T9. decodes | ieee_xplore | 9,387 |
ARC Linkage International Grant LX045446 and the ARC Discovery Project Grant DP0453089. F. Scarselli, M. Gori, and G. Monfardini are with the Faculty of Information Engineering, University of Siena, Siena 53100, Italy (e-mail: franco@dii.unisi.it; marco@dii.unisi.it; monfardini@dii.unisi.it). | ieee_xplore | 9,455 |
that identifies the revenue/penalty region at the cloud server $j$, $D^{\mathrm{cloud}}_j \le D_j$. We consider the tradeoff between the power consumption and computation delay in the cloud computing subsystem. That is, we have the SP2
$\min_{y_j, f_j, n_j, \sigma_j} \sum_{j \in \mathcal{M}} \sigma_j n_j (A_j f_j^{p} + B_j)$
s.t. $\sum_{j \in \mathcal{M}} y_j = Y$, $D^{\mathrm{cloud}}_j \le D_j \ \forall j \in \mathcal{M}$, and (4)–(7). | ieee_xplore | 9,942 |
Definition 2: Objective function $F(f, n, \sigma)$ and constraint functions $G(y)$, $H(y, f, n, \sigma)$:
$F(f, n, \sigma) \triangleq \sum_{j \in \mathcal{M}} \sigma_j n_j (A_j f_j^{p} + B_j)$
$G(y) \triangleq \sum_{j \in \mathcal{M}} y_j - Y$
$H(y, f, n, \sigma) \triangleq [h_1, \ldots, h_j, \ldots, h_M]^{\mathsf{T}}$
$h_j \triangleq \sigma_j \left[ \frac{C(n_j, y_j K / f_j)}{n_j f_j / K - y_j} + \frac{K}{f_j} \right] - D_j$.
Thus, MINLP SP2 is
$\min_{y \in \mathcal{Y},\, f \in \mathcal{F},\, n \in \mathcal{N},\, \sigma \in \Sigma} F(f, n, \sigma)$
s.t. $G(y) = 0$ | ieee_xplore | 9,976 |
3 Solve MP_k by, e.g., branch and bound;
4 if feasible solution then
5   Obtain solution (n^k, σ^k, LB_k);
6 else if unbounded solution then
7   Choose arbitrary n^k ∈ N and σ^k ∈ Σ;
8   Set LB_k ← −∞;
9 end if
10 Solve SP(n^k, σ^k) by, e.g., dual decomposition;
11 if feasible solution then
12   Obtain solution (y^k, f^k) | ieee_xplore | 9,978 |
and Lagrangian multiplier (λ^k, μ^k);
13   Set UB_k ← min{UB_{k−1}, F(f^k, n^k, σ^k)};
14   if |UB_k − LB_k| ≤ ε then /* Converged */
15     return (y^k, f^k, n^k, σ^k);
16   else /* Add feasible constraint */
17     Set I_{k+1} ← I_k ∪ {k}, J_{k+1} ← J_k;
18   end if
19 else if infeasible solution then
20   Solve SPF(n^k, σ^k) by, e.g., dual decomposition; | ieee_xplore | 9,979 |
21   Obtain solution (y^k, f^k) and Lagrangian multiplier (λ^k, μ^k);
22   Set UB_k ← UB_{k−1};
     /* Add infeasible constraint */
23   Set I_{k+1} ← I_k, J_{k+1} ← J_k ∪ {k};
24 end if
25 Set k ← k + 1;
26 end while
Definition 4: SP(n^k, σ^k)
$\min_{y \in \mathcal{Y},\, f \in \mathcal{F}} F(f, n^k, \sigma^k)$ s.t. $G(y) = 0$, $H(y, f, n^k, \sigma^k) \le 0$.
Definition 5: SP feasibility-check SPF(n^k, σ^k)
min | ieee_xplore | 9,980 |
For generality, we define the cost matrix to be the $n \times n$ matrix
$C \triangleq \begin{bmatrix} d_{11}C_{11} & \ldots & d_{1n}C_{1n} \\ \vdots & \ddots & \vdots \\ d_{n1}C_{n1} & \cdots & d_{nn}C_{nn} \end{bmatrix}$.
An assignment is a set of $n$ entry positions in the cost matrix, no two of which lie in the same row or column. The sum of the $n$ entries of an assignment is its cost. An assign- | ieee_xplore | 9,987 |
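A brief illustration of finding a minimum-cost assignment over such a cost matrix; SciPy's Hungarian-algorithm routine is used here only as one convenient solver, not as the method of the excerpt, and the 3x3 values are made up:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy 3x3 cost matrix C; entry C[i, j] is the cost of assigning row i to column j.
C = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(C)   # one entry per row, no shared columns
print(list(zip(rows, cols)))            # chosen entry positions
print(C[rows, cols].sum())              # cost of the assignment
```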
mations are made, mAP would remain at 59% (Section 3.2).
2.3. Training
Supervised pre-training. We discriminatively pre-trained the CNN on a large auxiliary dataset (ILSVRC 2012) with image-level annotations (i.e., no bounding box labels). Pre-training was performed using the open source Caffe CNN | ieee_xplore | 10,053 |
1.0 1.0 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9
1.0 0.9 0.9 0.8 0.8 0.8 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.6 0.6
1.0 0.8 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.6 0.6
1.0 0.9 0.8 0.8 0.8 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 | ieee_xplore | 10,083 |
[Figure residue: plots titled "R-CNN FT fc7: sensitivity and impact" and "R-CNN FT fc7 BB: sensitivity and impact" over the object characteristics occ, trn, size, asp, view, part (y-axis 0–0.8); the paired per-characteristic values are not reconstructed here.] | ieee_xplore | 10,123 |
The advantage of neural on-line learning rules is that the inputs can be used in the algorithm at once, thus enabling faster adaptation in a nonstationary environment. A resulting tradeoff, however, is that the convergence is slow and depends on a good choice of the learning rate sequence, i.e., the step | ieee_xplore | 10,427 |
the good statistical properties (e.g., robustness) of the new contrast functions, and the good algorithmic properties of the fixed-point algorithm, a very appealing method for ICA was obtained. Simulations as well as applications on real-life data | ieee_xplore | 10,494 |
[Table residue: running page header (p. 186, IRE Transactions) followed by paired numeric columns X and Y and the start of Table I; the column structure is not recoverable from the extraction.] | ieee_xplore | 11,015 |
novel system designs, i.e., designs of relay weights and allocation of transmit power, that meet the following objectives: 1) maximize the achievable secrecy rate subject to a total transmit power constraint, or 2) minimize the total transmit power subject to a secrecy rate constraint. We should note | ieee_xplore | 11,081 |
a common receiver (i.e., multiple access) in the presence of an eavesdropper is considered, and the optimal transmit power allocation policy is chosen to maximize the secrecy sum-rate. A user that is prevented from transmitting based on the obtained power allocation can help increase the secrecy rate | ieee_xplore | 11,097 |
[Table residue: Table III (Gaussian, N = 8) and Table IV (Gaussian, N = 16) listing quantizer parameters in columns a and 4a; the numeric values are garbled in the extraction.] | ieee_xplore | 11,357 |
starts from the lowest level P2 and gradually approaches P5 as shown in Figure 1(b). From P2 to P5, the spatial size is gradually down-sampled with factor 2. We use {N2, N3, N4, N5} to denote newly generated feature maps corresponding to {P2, P3, P4, P5}. Note that N2 is simply P2, without any processing. | ieee_xplore | 11,943 |
[Figure 3 residue: bar plot "FEATURE DISTRIBUTION" over LEVEL 1–LEVEL 4, y-axis 0–0.5, legend level 1–level 4.]
Figure 3. Ratio of features pooled from different feature levels with adaptive feature pooling. Each line represents a set of proposals that should be assigned to the same feature level in FPN, | ieee_xplore | 11,951 |
Mask R-CNN [21] + FPN [35] 39.8 62.3 43.4 22.1 43.2 51.2 ResNeXt-101
PANet / PANet [ms-train] 41.2 / 42.5 60.4 / 62.3 44.4 / 46.4 22.7 / 26.3 44.0 / 47.0 54.6 / 52.3 ResNet-50
PANet / PANet [ms-train] 45.0 / 47.4 65.0 / 67.2 48.6 / 51.8 25.4 / 30.1 48.6 / 51.7 59.1 / 60.0 ResNeXt-101 | ieee_xplore | 11,991 |
✓✓✓ 35.7 / 37.1 / 38.9 57.3 38.0 18.6 / 24.2 / 25.3 39.4 / 42.5 / 43.6 51.7 / 47.1 / 49.9
✓✓✓✓ 36.4 / 38.0 / 39.9 57.8 39.2 19.3 / 23.3 / 26.2 39.7 / 42.9 / 44.3 52.6 / 49.4 / 51.3
✓✓✓ ✓ 36.3 / 37.9 / 39.6 58.0 38.9 19.0 / 25.4 / 26.4 40.1 / 43.1 / 44.9 52.4 / 48.6 / 50.5 | ieee_xplore | 11,994 |
Settings AP AP50 AP75 APbb APbb_50 APbb_75
baseline 35.7 57.3 38 37.1 58.9 40.0
fu.fc1fc2 35.7 57.2 38.2 37.3 59.1 40.1
fc1fu.fc2 36.3 58.0 38.9 37.9 60.0 40.7
MAX 36.3 58.0 38.9 37.9 60.0 40.7
SUM 36.2 58.0 38.8 38.0 59.8 40.7
Table 4. Ablation study on adaptive feature pooling on val-2017 in | ieee_xplore | 12,011 |
terms of mask AP and box AP (APbb) of the independently trained object detector.
Settings AP AP50 AP75 APS APM APL
baseline 36.9 58.5 39.7 19.6 40.7 53.2
conv2 37.5 59.3 40.1 20.7 41.2 54.1
conv3 37.6 59.1 40.6 20.3 41.3 53.8
conv4 37.2 58.9 40.0 19.0 41.2 52.8
PROD 36.9 58.6 39.7 20.2 40.8 52.2 | ieee_xplore | 12,012 |
bles 6 and 7, compared with last year's champion, we achieve 9.1% absolute and 24% relative improvement on instance segmentation, while for object detection, 9.4% absolute and 23% relative improvement is yielded.
APbb APbb_50 APbb_75 APbb_S APbb_M APbb_L
Champion 2015 [23] 37.4 59.0 40.2 18.3 41.7 52.9 | ieee_xplore | 12,019 |
adopted. The common testing tricks [23, 33, 10, 15, 39, 62], such as multi-scale testing, horizontal flip testing, mask voting and box voting, are used too. For multi-scale testing, we set the longer edge to 1,400 and let the other range from 600 to 1,200 with step 200. Only 4 scales are used. Second, | ieee_xplore | 12,021 |
Mask R-CNN [fine-only] [21] 31.5 26.2 49.9 30.5 23.7 46.9 22.8 32.2 18.6 19.1 16.0
SegNet - 29.5 55.6 29.9 23.4 43.4 29.8 41.0 33.3 18.7 16.7
Mask R-CNN [COCO] [21] 36.4 32.0 58.1 34.8 27.0 49.1 30.1 40.9 30.9 24.1 18.7
PANet [fine-only] 36.5 31.8 57.1 36.8 30.4 54.8 27.0 36.3 25.5 22.6 20.8 | ieee_xplore | 12,031 |
category. Searching on Google with "Yan Mo Nobel Prize" resulted in 1,050,000 web pointers on the Internet (as of 3 January 2013). "For all praises as well as criticisms," said Mo recently, "I am grateful." What types of praises and criticisms has Mo actually received over his 31-year writing career? | ieee_xplore | 12,678 |
computers with a high-performance computing platform, with a data mining task being deployed by running some parallel programming tools, such as MapReduce or Enterprise Control Language (ECL), on a large number of computing nodes (i.e., clusters). The role of the software component is to make sure that | ieee_xplore | 12,750 |
note that our approach is equally applicable to cases where information releases refer to different kinds of respondents (e.g., business establishments). In the following, we therefore use the terms individual and respondent interchangeably. Since removal of explicit identifiers is the first step to | ieee_xplore | 12,958 |
$A_1, \ldots, A_n$, a set of attributes $\{A_i, \ldots, A_j\} \subseteq \{A_1, \ldots, A_n\}$, and a tuple $t \in T$, $t[A_i, \ldots, A_j]$ denotes the sequence of the values of $A_i, \ldots, A_j$ in $t$; $T[A_i, \ldots, A_j]$ denotes the projection, maintaining duplicate tuples, of attributes $A_i, \ldots, A_j$ in $T$. Also, $|T|$ denotes $T$'s cardinality, that is, the number of tuples in $T$. | ieee_xplore | 12,969 |
We can now introduce the definition of k-anonymity for a table as follows:
Definition 2.2 (k-anonymity). Let $T(A_1, \ldots, A_n)$ be a table and $QI$ be a quasi-identifier associated with it. $T$ is said to satisfy k-anonymity with respect to $QI$ iff each sequence of values in $T[QI]$ appears with at least $k$ occurrences in $T[QI]$. | ieee_xplore | 12,987 |
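A small Python sketch of checking Definition 2.2 on a toy table; the list-of-dicts layout and the quasi-identifier chosen are illustrative assumptions, not part of the original text:

```python
from collections import Counter

def satisfies_k_anonymity(table, qi, k):
    """True iff every combination of quasi-identifier values occurs at least k times."""
    counts = Counter(tuple(row[a] for a in qi) for row in table)
    return all(c >= k for c in counts.values())

table = [
    {"ZIP": "941**", "Age": "30-39", "Disease": "flu"},
    {"ZIP": "941**", "Age": "30-39", "Disease": "cold"},
    {"ZIP": "947**", "Age": "20-29", "Disease": "flu"},
]
print(satisfies_k_anonymity(table, qi=("ZIP", "Age"), k=2))  # False: one QI group has 1 row
```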
all domains in a generalization hierarchy. In the following, $\mathrm{dom}(A_i, T)$ denotes the domain of attribute $A_i$ in table $T$. We start by introducing the definition of generalized table as follows:
Definition 3.1 (Generalized Table). Let $T_i(A_1, \ldots, A_n)$ and $T_j(A_1, \ldots, A_n)$ be two tables defined on the same set of | ieee_xplore | 13,022 |
thus collapsing all tuples in $T$ to the same list of values, provides k-anonymity at the price of a strong generalization of the data. Such extreme generalization is not needed if a more specific table (i.e., containing more specific values) exists which satisfies k-anonymity. This concept is captured | ieee_xplore | 13,030 |
$T_j$ is said to be a k-minimal generalization of a table $T_i$ iff:
1. $T_j$ satisfies k-anonymity enforcing minimal required suppression (Definitions 2.2 and 4.2).
2. $|T_i| - |T_j| \le \mathrm{MaxSup}$.
3. $\forall T_z : T_i \preceq T_z$ and $T_z$ satisfies Conditions 1 and 2 $\Rightarrow \neg(DV_{i,z} < DV_{i,j})$.
Intuitively, generalization $T_j$ is k-minimal iff it satisfies k- | ieee_xplore | 13,084 |
search all the strategies. This process is clearly much too costly, given the high number of strategies that should be followed. The number of different strategies for a domain tuple $DT = \langle D_1, \ldots, D_n \rangle$ is $\frac{(h_1 + \cdots + h_n)!}{h_1! \cdots h_n!}$, where each $h_i$ is the length of the path from $D_i$ to the top domain in $DGH_{D_i}$. | ieee_xplore | 13,108 |
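As a quick check of that count, a few lines of Python computing the multinomial coefficient for hypothetical path lengths; the values h = (2, 1, 3) are made up for illustration:

```python
from math import factorial

def num_strategies(h):
    """Number of distinct generalization strategies for path lengths h_1, ..., h_n."""
    total = factorial(sum(h))
    for hi in h:
        total //= factorial(hi)
    return total

print(num_strategies((2, 1, 3)))  # 60 interleavings of the three generalization paths
```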
[DP-recursion table residue; the legible entries are:] Symmetric: $g(i, j) = \min[\, g(i-1, j-1) + 2d(i, j),\ g(i-1, j-2) + 2d(i, j-1) + d(i, j),\ g(i-2, j-1) + 2d(i-1, j) + d(i, j) \,]$; further rows list the asymmetric/symmetric variants $g(i-1, j-2) + d(i, j-1) + d(i, j)$ and $g(i-2, j-1) + d(i-1, j) + d(i, j)$, and the slope-constrained forms $g(i-2, j-3) + 2d(i-1, j-2) + 2d(i, j-1) + d(i, j)$ and $g(i-3, j-2) + 2d(i-2, j-1) + 2d(i-1, j) + d(i, j)$. | ieee_xplore | 13,244 |
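For orientation, a compact Python sketch of a symmetric DTW accumulation of the kind tabulated above; it uses the basic unconstrained recursion (diagonal steps weighted by 2d(i, j)) rather than the slope-constrained variants listed in the table, which is an intentional simplification, and the sample sequences are made up:

```python
import numpy as np

def dtw(x, y, d=lambda a, b: abs(a - b)):
    """Symmetric DTW cost: g(i,j) = min(g(i-1,j)+d, g(i-1,j-1)+2d, g(i,j-1)+d)."""
    n, m = len(x), len(y)
    g = np.full((n + 1, m + 1), np.inf)
    g[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dij = d(x[i - 1], y[j - 1])
            g[i, j] = min(g[i - 1, j] + dij,
                          g[i - 1, j - 1] + 2 * dij,
                          g[i, j - 1] + dij)
    return g[n, m] / (n + m)   # normalized by the symmetric path weight

print(dtw([0, 1, 2, 3], [0, 1, 1, 2, 3]))
```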
(IoU) ratio is less than 0.3 to any ground-truth faces;
2) positives: IoU above 0.65 to a ground truth face;
3) part faces: IoU between 0.4 and 0.65 to a ground truth face; and
4) landmark faces: faces labeled with five landmarks' positions.
There is an unclear gap between part faces and negatives, and | ieee_xplore | 13,349 |
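The thresholds above are computed over bounding boxes; a short Python helper illustrating the standard intersection-over-union computation (the (x1, y1, x2, y2) box format and the example boxes are assumptions of this sketch):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a candidate window vs. a ground-truth face.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39, in the gap between negatives (<0.3) and part faces (0.4-0.65)
```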
Traces; Generalization and Function Approximation; Planning and Learning; Dimensions of Reinforcement Learning; and Case Studies. Talking Nets: An Oral History of Neural Networks, James A. Anderson and Edward Rosenfeld, Eds. Cambridge, MA: MIT Press, 1998, 433 pp., soft cover, $39.95. ISBN 0-262-01167-0.) | ieee_xplore | 13,376 |
in Section IV. Section V includes information related to the major shortcomings of IDS datasets, the problem formulation, and statistical measures. Section VI includes a description of the datasets. Sections VII and VIII include experimental | ieee_xplore | 13,426 |
attain considerable performance. It was found that the layer containing 1,024 units had shown the highest attack detection rate. When we increased the number of hidden units from 1,024 to 2,048, the performance in attack detection rate deteriorated. Hence, we decided to use 1,024 units for | ieee_xplore | 13,626 |
optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature syn- | ieee_xplore | 13,993 |
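A brief Python sketch of the approximate implementation mentioned above, smoothing with a Gaussian and computing the gradient magnitude whose maxima mark candidate edges; scipy.ndimage is used purely for convenience and the σ value is an arbitrary choice:

```python
import numpy as np
from scipy import ndimage

def edge_strength(image, sigma=2.0):
    """Gradient magnitude of a Gaussian-smoothed image (candidate edge map)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy)

# Candidate edges are then the points where this magnitude is a local maximum
# along the gradient direction (non-maximum suppression, not shown here).
```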
to add a third criterion to circumvent the possibility of multiple responses to a single edge. Using numerical optimization, we derive optimal operators for ridge and roof edges. We will then specialize the criteria for step edges and give a parametric closed form for the solution. In the process we will discover that there is an uncertainty prin- | ieee_xplore | 14,005 |
Let $H_n(x)$ be the response of the filter to noise only, and $H_G(x)$ be its response to the edge, and suppose there is a local maximum in the total response at the point $x = x_0$. Then we have
$H_n'(x_0) + H_G'(x_0) = 0$. (4)
The Taylor expansion of $H_G'(x_0)$ about the origin gives
$H_G'(x_0) = H_G'(0) + H_G''(0)x_0 + O(x_0^2)$. (5)
By assumption $H_G'(0) = 0$, i.e., the response of the fil- | ieee_xplore | 14,023 |
scales with the operator width. That is, we first define an operator $f_w$, which is the result of stretching $f$ by a factor of $w$, $f_w(x) = f(x/w)$. Then after substituting into (12) we find that the intermaximum spacing for $f_w$ is $x_{\max}(f_w) = w\,x_{\max}(f)$. Therefore, if a function $f$ satisfies the multiple response constraint (13) for fixed $k$, then the function $f_w$ | ieee_xplore | 14,042 |
have over the full range. Also, this enables the value of $f'(0)$ to be set as a boundary condition, rather than expressed as an integral of $f''$. If the integral to be minimized shares the same limits as the constraint integrals, it is possible to exploit the isoperimetric constraint condition (see [6, p. 216]). When this condition is fulfilled, | ieee_xplore | 14,074 |
function $f$ at the origin. Since $f(x)$ is antisymmetric, we can extend the above definition to the range $[-W, W]$ using $f(-x) = -f(x)$. The four boundary conditions enable us to solve for the quantities $a_1$ through $a_4$ in terms of the unknown constants $\alpha$, $\omega$, $c$, and $s$. The boundary conditions may be rewritten
$a_2 + a_4 + c = 0$
$a_1 e^{\alpha} \sin\omega + a_2 e^{\alpha} \cos\omega + a_3 e^{-\alpha} \sin\omega$ | ieee_xplore | 14,093 |
sional space of functions to a nonlinear optimization in three variables $\alpha$, $\omega$, and $\beta$ (not surprisingly, the combined criterion does not depend on $c$). Unfortunately the resulting criterion, which must still satisfy the multiple response constraint, is probably too complex to be solved analytically, and numerical methods must be used to pro- | ieee_xplore | 14,098 |
where $r$ is as close as possible to 1. The performance indexes and parameter values for several filters are given in Fig. 4. The $a_i$ coefficients for all these filters can be found from (37), by fixing $c$ to, say, $c = 1$. Unfortunately, the largest value of $r$ that could be obtained using the constrained numerical optimization was about 0.576 for filter | ieee_xplore | 14,107 |
number 6 in the table. In our implementation, we have [table residue from Fig. 4: filter parameters and performance measures for filters 1–6; the column headers and numeric values are garbled in the extraction] | ieee_xplore | 14,108 |
[further table residue from Fig. 4]
Fig. 4. Filter parameters and performance measures for the filters illustrated in Fig. 5.
approximated this filter using the first derivative of a Gaussian as described in the next section. The first derivative of Gaussian operator, or even filter 6 itself, should not be taken as the final word in edge | ieee_xplore | 14,109 |
detection filters, even with respect to the criteria we have used. If we are willing to tolerate a slight reduction in multiple response performance $r$, we can obtain significant improvements in the other two criteria. For example, filters 4 and 5 both have a significantly better $\Sigma\Lambda$ product than filter 6, and only slightly lower $r$. From Fig. 5 we | ieee_xplore | 14,110 |
[Figure residue (IEEE Trans. Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, November 1986, p. 688): plots of the derived edge-detection filters with annotated values such as 1.3141194, 1.1515157, and 0.6200538; axis ticks and remaining labels are not recoverable.] | ieee_xplore | 14,116 |
(a) (b)
Fig. 7. (a) Parts image, 576 by 454 pixels. (b) Image thresholded at T1. (c) Image thresholded at 2T1. (d) Image thresholded with hysteresis using both the thresholds in (b) and (c).
threshold along the length of the contour. Suppose we | ieee_xplore | 14,139 |
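A compact Python sketch of hysteresis thresholding as described by the caption: points above the high threshold seed contours, and connected points above the low threshold are kept. The use of scipy.ndimage connected-component labeling is a convenience of this sketch, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(strength, t_low, t_high):
    """Keep weak edge pixels only if they are connected to a strong edge pixel."""
    weak = strength >= t_low
    strong = strength >= t_high
    labels, n = ndimage.label(weak)                  # connected regions of weak pixels
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True           # regions containing a strong pixel
    keep[0] = False                                  # background label stays off
    return keep[labels]
```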
function aligned normal to the edge direction with a projection function parallel to the edge direction. A substantial savings in computational effort is possible if the projection function is a Gaussian with the same σ as the (first derivative of the) Gaussian used as the detection function. | ieee_xplore | 14,148 |
If the window function is abruptly truncated, e.g., if it is rectangular, the filtered image will not be smooth because of the very high bandwidth of this window. This effect is related to the Gibbs phenomenon in Fourier theory, which occurs when a signal is transformed over a finite window. When nonmaximum suppression is applied to this rough | ieee_xplore | 14,161 |
resolution, i.e., there is less possibility of interference from neighboring edges. That argument is also very relevant in the present context, as to date there has been no consideration of the possibility of more than one edge in a given operator support. Interestingly, Rosenfeld and Thurston proposed exactly the opposite criterion in | ieee_xplore | 14,177 |
synthesis is applied we find that redundant responses of the larger operator are eliminated, leading to a sharp edge map. By contrast, in Fig. 9 the edges marked by the two operators are essentially independent, and direct superposition of the edges gives a useful edge map. When we apply feature synthesis to these sets of edges we find that most | ieee_xplore | 14,188 |
as large a projection function as possible. There are practical limitations on this however; in particular, edges in an image are of limited extent, and few are perfectly linear. However, most edges continue for some distance, in fact much further than the 3 or 4 pixel supports of most edge operators. Even curved edges can be approximated by lin- | ieee_xplore | 14,198 |
sample the output of nonelongated masks with the same direction. This output is sampled at regular intervals in a line parallel to the edge direction. If the samples are close together (less than 2σ apart), the resulting mask is essentially flat over most of its range in the edge direction and falls smoothly off to zero at its ends. Two cross sections | ieee_xplore | 14,201 |
We have described a procedure for the design of edge detectors for arbitrary edge profiles. The design was based on the specification of detection and localization criteria in a mathematical form. It was necessary to augment the original two criteria with a multiple response measure in order to fully capture the intuition of good detection. A | ieee_xplore | 14,215 |
noise in the image, as determined by a noise estimation scheme. This detector made use of several operator widths to cope with varying image signal-to-noise ratios, and operator outputs were combined using a method called feature synthesis, where the responses of the smaller operators were used to predict the large operator responses. If | ieee_xplore | 14,220 |
in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorized copying directly. Military communications systems make increasing use of | ieee_xplore | 14,308 |
bridge area (c. 1550). At that time, watermarks were mainly used to identify the mill producing the paper, a means of guaranteeing quality. (Courtesy of Dr. E. Leedham-Green, Cambridge University Archives. Reproduction technique: beta radiography.)
reactive inks) and secondary features whose presence may | ieee_xplore | 14,462 |
(a) (b)
(c) (d)
Fig. 7. When applied to images, the distortions introduced by StirMark are almost unnoticeable. "Lena" (a) before and (b) after StirMark with default parameters. (c), (d) For comparison, the same distortions have been applied to a grid.
available since November 1997.3 It applies a minor unno- | ieee_xplore | 14,495 |
let $V$ denote the set of buses and $E$ the set of transmission lines; then an undirected graph $(V, E)$ can represent a power system. For a subset of branches $A \subset E$, let $g(A)$ denote the set of meters on $A$'s branches and adjacent buses. In the graph $(V, E \setminus A)$, let $h(A)$ denote the number of interconnected mod- | ieee_xplore | 14,831 |
$\sum_{j=1}^{n} G_{lj}(s_j - d_j) \le f^{\max}_l \quad \forall l \in \mathcal{L} \quad (\mu^{\min}_l, \mu^{\max}_l)$
$s^{\min}_j \le s_j \le s^{\max}_j \quad \forall j \in \mathcal{N} \quad (\nu^{\min}_j, \nu^{\max}_j)$ (28)
where $s_j$ is the power generation at bus $j$, $c_j$ is the corresponding generation cost, $d_j$ is the forecasted load at bus $j$, $G_{lj}$ is the shift factor (with respect to the reference bus) from bus $j$ to branch $l$, and $f^{\min}_l$ and $f^{\max}_l$ | ieee_xplore | 14,841 |
are the power flow limits for transmission line $l$, $s^{\min}_j$ and $s^{\max}_j$ are the lower and upper bounds of the power generation at bus $j$, and $s = [s_1, s_2, \ldots, s_n]^{\mathsf{T}}$. The objective function is to minimize the aggregated generation cost, and the constraints are the supply-demand balance constraint, transmis- | ieee_xplore | 14,842 |
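A small sketch of this economic-dispatch problem as a linear program with made-up 3-bus data; the linear costs, shift-factor values, and two-sided flow limits are assumptions of this illustration, not the formulation of the excerpt:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([20.0, 30.0, 25.0])          # generation cost per unit at each bus
d = np.array([50.0, 40.0, 60.0])          # forecasted loads
G = np.array([[0.5, -0.3, 0.1],           # shift factors: branches x buses (toy values)
              [0.2,  0.4, -0.6]])
f_max = np.array([40.0, 40.0])            # branch flow limits
s_min, s_max = np.zeros(3), np.array([100.0, 80.0, 90.0])

# min c^T s  s.t.  sum(s) = sum(d),  G(s - d) <= f_max,  -G(s - d) <= f_max,  bounds on s
res = linprog(c,
              A_ub=np.vstack([G, -G]),
              b_ub=np.concatenate([f_max + G @ d, f_max - G @ d]),
              A_eq=np.ones((1, 3)), b_eq=[d.sum()],
              bounds=list(zip(s_min, s_max)))
print(res.x, res.fun)
```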
satisfied. Second, if we define two sets $L_1 \triangleq \{l : G_{lj_1} > G_{lj_2}\}$ and $L_2 \triangleq \{l : G_{lj_2} > G_{lj_1}\}$, then to let
$\mathrm{LMP}^{\mathrm{EP}}_{j_1} - \mathrm{LMP}^{\mathrm{EP}}_{j_2} = (G_{j_1} - G_{j_2})^{\mathsf{T}} (\hat{\mu}^{\min} - \hat{\mu}^{\max}) = \sum_{l \in L_1} (G_{lj_1} - G_{lj_2}) (\hat{\mu}^{\min}_l - \hat{\mu}^{\max}_l) + \sum_{l \in L_2} (G_{lj_2} - G_{lj_1}) (\hat{\mu}^{\max}_l - \hat{\mu}^{\min}_l) > 0$ (37)
heuristically, one sufficient condition is $\hat{f}_l < f^{\max}_l$ (i.e., $\hat{\mu}^{\max}_l = 0$) for $\forall l \in L_1$ and $\hat{f}_l > f^{\min}$ | ieee_xplore | 14,867 |
to problems (i.e., how best can we relax the black-box nature of the algorithms and have them exploit some knowledge concerning the optimization problem)? In particular, while serious optimization practitioners almost always perform such matching, it is usually on a heuristic basis; can such matching | ieee_xplore | 15,122 |
with smaller coefficients. Thus a random hidden layer generates weakly correlated hidden layer features, which allow for a solution with a small norm and good generalization performance. A formal description of an ELM follows. Consider a set of $N$ distinct training samples $(x_i, t_i)$, $i \in \llbracket 1, N \rrbracket$, with | ieee_xplore | 15,445 |
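A minimal NumPy sketch of the ELM training scheme the passage begins to formalize: a random hidden layer followed by a least-squares fit of the output weights. The layer size, tanh activation, and use of a pseudo-inverse are assumptions of this sketch:

```python
import numpy as np

def elm_fit(X, T, n_hidden=100, rng=np.random.default_rng(0)):
    """Train an ELM: random input weights W, b; output weights beta by least squares."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, n_hidden))  # random projection
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                      # hidden layer features
    beta = np.linalg.pinv(H) @ T                                # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```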
which correspond to the data samples of a class $j$. Alternatively, the correlation matrices can be computed for each class separately: $\Omega_h^1, \Omega_t^1, \ldots, \Omega_h^c, \Omega_t^c$. Then the weights are applied during the summation of the correlation matrices $\Omega_h$ and $\Omega_t$:
$\Omega_h = \alpha_1 \Omega_h^1 + \cdots + \alpha_c \Omega_h^c$, (16)
$\Omega_t = \alpha_1 \Omega_t^1 + \cdots + \alpha_c \Omega_t^c$. (17) | ieee_xplore | 15,500 |
inputs, the optimal value of $s$ differs from 1. The standard deviation $s$ is scaled by $\sqrt{d}$, generating weights as $W = \mathcal{N}(0, s/\sqrt{d})$ (see Figure 6). In the following experiments, ELM is used with automatically generated weights from $W = \mathcal{N}(0, s/\sqrt{d})$. The input data is normalized to zero mean and unit variance. Biases are | ieee_xplore | 15,590 |
and the minimax theorem," Numerische Mathematik, vol. 5, pp. 371-379, 1963. R. T. Rockafellar, "Duality and stability in extremum problems involving convex functions," Pacific J. Math., vol. 21, pp. 167-187, 1967. P. Wolfe, "A duality theorem for nonlinear programming," Q. Appl. Math., vol. 19, pp. 239-244, 1961. R. T. Rockafellar, "Non | ieee_xplore | 15,640 |
normal space," Proc. IEEE Systems Science and Cybernetics Conf. (Boston, Mass., October 11-13, 1967). J. M. Danskin, "The theory of max-min with applications," J. SIAM, vol. 14, pp. 641-665, July 1966. [13] W. Fenchel, "Convex cones, sets, and functions," mimeographed notes, Princeton University, Princeton, N.J., September | ieee_xplore | 15,643 |
plant control," ISA Trans., vol. 5, pp. 175-183, April 1966. C. B. Brosilow and L. S. Lasdon, "A two level optimization technique for recycle processes," 1965 Proc. AICHE Symp. on Application of Mathematical Models in Chemical Engineering Research, Design, and Production (London, England). L. S. Lasdon, "Duality and decomposition | ieee_xplore | 15,649 |
in mathematical programming," Systems Research Center, Case Institute of Technology, Cleveland, Ohio, Rept. SRC 119-C-67-52, 1967. A. V. Fiacco and G. P. McCormick, Sequential Unconstrained Minimization Techniques for Nonlinear Programming. New York: Wiley, 1968. [12] R. Fox and L. Schmit, "Advances in the integrated approach | ieee_xplore | 15,650 |
PETER E. HART, MEMBER, IEEE, NILS J. NILSSON, MEMBER, IEEE, AND BERTRAM RAPHAEL
Abstract: Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no | ieee_xplore | 15,653 |
Manuscript received November 24, 1967. The authors are with the Artificial Intelligence Group of the Applied Physics Laboratory, Stanford Research Institute, Menlo Park, Calif.
mechanical theorem-proving and problem-solving. These problems have usually been approached in one of two ways, which we shall call the mathematical approach and | ieee_xplore | 15,656 |
successor operator $\Gamma$, defined on $\{n_i\}$, whose value for each $n_i$ is a set of pairs $\{(n_j, c_{ij})\}$. In other words, applying $\Gamma$ to node $n_i$ yields all the successors $n_j$ of $n_i$ and the costs $c_{ij}$ associated with the arcs from $n_i$ to the various $n_j$. Application of $\Gamma$ to the source node $s$, to its successors, and so forth as long as new nodes can be generated, results in an | ieee_xplore | 15,668 |
A path from $n_1$ to $n_k$ is an ordered set of nodes $(n_1, n_2, \ldots, n_k)$ with each $n_{i+1}$ a successor of $n_i$. There exists a path from $n_i$ to $n_j$ if and only if $n_j$ is accessible from $n_i$. Every path has a cost which is obtained by adding the individual costs of each arc, $c_{i,i+1}$, in the path. An optimal path from $n_i$ to $n_j$ is a path having the smallest cost over the set of all | ieee_xplore | 15,670 |
optimal path, it will sometimes fail to find such a path and thus not be admissible. An efficient algorithm obviously needs some way to evaluate available nodes to determine which one should be expanded next. Suppose some evaluation function $f(n)$ could be calculated for any node $n$. | ieee_xplore | 15,680 |
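A compact Python sketch of a best-first search driven by such an evaluation function; here f(n) = g(n) + h(n) (cost so far plus a heuristic estimate), which matches the A* formulation developed later in the paper. The graph encoding as a dict of successor lists and the toy costs are assumptions of this sketch:

```python
import heapq

def a_star(start, goal, successors, h):
    """successors(n) -> iterable of (n', cost); h(n) -> heuristic estimate to the goal."""
    g = {start: 0.0}
    frontier = [(h(start), start)]          # ordered by f(n) = g(n) + h(n)
    closed = set()
    while frontier:
        f, n = heapq.heappop(frontier)
        if n == goal:
            return g[n]
        if n in closed:
            continue
        closed.add(n)
        for n2, c in successors(n):
            if g[n] + c < g.get(n2, float("inf")):
                g[n2] = g[n] + c
                heapq.heappush(frontier, (g[n2] + h(n2), n2))
    return None

graph = {"s": [("a", 1), ("b", 4)], "a": [("goal", 5)], "b": [("goal", 1)], "goal": []}
print(a_star("s", "goal", lambda n: graph[n], h=lambda n: 0))  # 5.0 via s -> b -> goal
```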
A simple example will illustrate that this estimate is easy to calculate as the algorithm proceeds. Consider the subgraph shown in Fig. 1. It consists of a start node $s$ and three other nodes, $n_1$, $n_2$, and $n_3$. The arcs are shown with arrowheads and costs. Let us trace how algorithm A* proceeded in generating this subgraph. Starting with $s$, we | ieee_xplore | 15,688 |
which completes the proof. We can now prove our first theorem.
Theorem 1: If $\hat{h}(n) \le h(n)$ for all $n$, then A* is admissible.
Proof: We prove this theorem by assuming the contrary, namely that A* does not terminate by finding an optimal path to a preferred goal node of $s$. There are three cases to consider: either the algorithm terminates at a nongoal node, | ieee_xplore | 15,698 |
procedures for computing the estimates $\hat{h}$ always lead to values that satisfy (5). We shall call this assumption the consistency assumption. Note that the estimate $\hat{h}(n) = 0$ for all $n$ trivially satisfies the consistency assumption. Intuitively, the consistency assumption will generally be satisfied by a computation rule for $\hat{h}$ that uniformly uses | ieee_xplore | 15,723 |
suppose that node $n$ is closed by A*. Then $\hat{g}(n) = g(n)$.
Proof: Consider the subgraph $G_s$ just before closing $n$, and suppose the contrary, i.e., suppose $\hat{g}(n) > g(n)$. Now there exists some optimal path $P$ from $s$ to $n$. Since $\hat{g}(n) > g(n)$, A* did not find $P$. By Lemma 1, there exists an open node $n'$ on $P$ with $\hat{g}(n') = g(n')$. If $n' = n$, we have proved the lemma. Otherwise, | ieee_xplore | 15,727 |
contradicting the fact that A* selected $n$ for expansion when $n'$ was available, and thus proving the lemma. The next lemma states that $\hat{f}$ is monotonically nondecreasing on the sequence of nodes closed by A*.
Lemma 3: Let $(n_1, n_2, \ldots, n_r)$ be the sequence of nodes closed by A*. Then, if the consistency assumption is satisfied, $p \le q$ implies $\hat{f}(n_p) \le \hat{f}(n_q)$. | ieee_xplore | 15,730 |
$\hat{f}(n) < \hat{f}(t) = f(t)$
(Strict inequality occurs because no ties are allowed.) There exists some graph $G_{n,\theta} \in \Theta_n$ for which $\hat{h}(n) = h(n)$, by the definition of $h$. Now by Lemma 2, $\hat{g}(n) = g(n)$. Then on the graph $G_{n,\theta}$, $\hat{f}(n) = f(n)$. Since A is no more informed than A*, A could not rule out the existence of $G_{n,\theta}$; but A did not expand $n$ before termination and is, | ieee_xplore | 15,737 |
therefore, not admissible, contrary to our assumption and completing the proof. Upon defining $N(A, G_s)$ to be the total number of nodes in $G_s$ expanded by the algorithm A, the following simple corollary is immediate.
Corollary: Under the premises of Theorem 2, $N(A^*, G_s) \le N(A, G_s)$, with equality if and only if A expands the identical set of nodes as A*. | ieee_xplore | 15,738 |
choose $n'$ instead of $n$. By repeating the above argument, we obtain for some $i$ an $A^* \in a^*$ that expands only nodes that are also expanded by A, completing the proof of the theorem.
Corollary 1: Suppose the premises of the theorem are satisfied. Then for any graph $G_\delta$, there exists an $A^* \in a^*$ such that $N(A^*, G_\delta) \le N(A, G_\delta)$, with equality if and only if A expands the | ieee_xplore | 15,747 |
It is beyond the scope of the discussion to consider how to define a successor operator $\Gamma$ or assign costs $c_{ij}$ so that the resulting graph realistically reflects the nature of a specific problem domain.²
B. The Heuristic Power of the Estimate $\hat{h}$
The algorithm A* is actually a family of algorithms; the | ieee_xplore | 15,760 |
³ Except for possible critical ties, as discussed in Corollary 2 of Theorem 3.
ample, choose $\hat{h}(n) = \max(x, y)$. Since $\max(x, y) \le \sqrt{x^2 + y^2}$, the algorithm is still admissible. Since we are not using "all" our knowledge of the problem domain, a few extra nodes may be expanded, but total computational effort may be reduced; again, each "extra" node must also | ieee_xplore | 15,767 |