text | source | __index_level_0__ |
---|---|---|
Neural networks. Neurons in an artificial neural network are usually arranged into layers, with information passing from the first layer (the input layer) through one or more intermediate layers (hidden layers) to the final layer (the output layer). The "signal" input to each neuron is a number, specifically a | wikipedia | 87,237 |
Neural networks. Linear combination of the outputs of the connected neurons in the previous layer. The signal each neuron outputs is calculated from this number, according to its activation function. The behavior of the network depends on the strengths (or weights) of the connections between neurons. A network | wikipedia | 87,238 |
Neural networks. A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural | wikipedia | 87,240 |
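These rows describe the standard feed-forward computation: each neuron receives a linear combination of the previous layer's outputs and passes it through its activation function. A minimal sketch of that computation (layer sizes, random weights, and the sigmoid activation are illustrative assumptions, not taken from the rows above):

```python
import numpy as np

def sigmoid(z):
    # Squashing activation applied elementwise to each neuron's input signal.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Propagate an input vector through a list of (W, b) layers.

    Each neuron's input is a linear combination of the previous layer's
    outputs (W @ a + b); its output is the activation of that number.
    """
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

# Illustrative 3-4-2 network (input, hidden, output) with random weights.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
print(forward(np.array([0.5, -1.0, 2.0]), layers))
```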
3.3 Analysis
To compare the quantitative performance of the different decoder variants, we use three commonly used performance measures: global accuracy (G), which measures the percentage of pixels correctly classified in the dataset; class average accuracy (C), which is the mean of the predictive accuracy | ieee_xplore | 128 |
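Both measures can be read off a confusion matrix; a minimal sketch of how G and C might be computed (the NumPy formulation is my own, not the paper's code):

```python
import numpy as np

def global_and_class_accuracy(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    correct = np.diag(conf)
    g = correct.sum() / conf.sum()           # global accuracy (G)
    per_class = correct / conf.sum(axis=1)   # accuracy within each true class
    c = per_class.mean()                     # class average accuracy (C)
    return g, c

conf = np.array([[50, 2], [10, 38]])
print(global_and_class_accuracy(conf))  # (0.88, ~0.877)
```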
FCN 81.97 54.38 46.59 22.86 82.71 56.22 47.95 24.76 83.27 59.56 49.83 27.99 200K
FCN (learnt deconv) 83.21 56.05 48.68 27.40 83.71 59.64 50.80 31.01 83.14 64.21 51.96 33.18 160K
DeconvNet 85.26 46.40 39.69 27.36 85.19 54.08 43.74 29.33 89.58 70.24 59.77 52.23 260K | ieee_xplore | 228 |
previous section is sharp for the Protocol Model, by exhibiting a scenario where it is achieved. This scenario is also feasible for the Physical Model. Theorem 3.1: There is a placement of nodes and an assignment of traffic patterns such that the network can achieve bit-meters per second under the Protocol Model, | ieee_xplore | 528 |
and bit-meters per second under the Physical Model, both whenever is a multiple of
Proof: Consider the Protocol Model. Define
Recall that the domain is a disk of unit area, i.e., of radius $1/\sqrt{\pi}$ in the plane. With the center of the disk located at the origin, place transmitters at locations
and
where is even. Also place receivers at
and | ieee_xplore | 529 |
where is odd. Each transmitter can transmit to its nearest receiver, which is at a distance away, without interference from any other transmitter–receiver pair. It can be verified that there are at least transmitter–receiver pairs all located within the domain. (This is done by noting that for a tessellation of the | ieee_xplore | 530 |
Lemma 4.4 for the Physical Model. Theorem 4.1:
(i) For Random Networks on in the Protocol Model, there is a deterministic constant not depending on , , or , such that bits per second is feasible whp.
(ii) For Random Networks on in the Physical Model, there are deterministic constants and not depending on , , , , or , such that | ieee_xplore | 623 |
Scene Train Test Extent (m) (Uses RGB-D) Nearest Neighbour PoseNet Dense PoseNet
King’s College 1220 343 140 x 40m N/A 3.34m, 2.96° 1.92m, 2.70° 1.66m, 2.43°
Street 3015 2923 500 x 100m N/A 1.95m, 4.51° 3.67m, 3.25° 2.96m, 3.00°
Old Hospital 895 182 50 x 40m N/A 5.38m, 4.51° 2.31m, 2.69° 2.62m, 2.45° | ieee_xplore | 778 |
Shop Façade 231 103 35 x 25m N/A 2.10m, 5.20° 1.46m, 4.04° 1.41m, 3.59°
St Mary’s Church 1487 530 80 x 60m N/A 4.48m, 5.65° 2.65m, 4.24° 2.45m, 3.98°
Chess 4000 2000 3x2x1m 0.03m, 0.66° 0.41m, 5.60° 0.32m, 4.06° 0.32m, 3.30°
Fire 2000 2000 2.5x1x1m 0.05m, 1.50° 0.54m, 7.77° 0.47m, 7.33° 0.47m, 7.02° | ieee_xplore | 779 |
Red Kitchen 7000 5000 4x3x1.5m 0.04m, 0.76° 0.58m, 5.65° 0.59m, 4.32° 0.58m, 4.17°
Stairs 2000 1000 2.5x2x1.5m 0.32m, 1.32° 0.56m, 7.71° 0.47m, 6.93° 0.48m, 6.54°
Figure 6: Dataset details and results. We show median performance for PoseNet on all scenes, evaluated on a single 224x224 center crop | ieee_xplore | 781 |
The value of corr(U, V) falls in a definite closed interval [−1, 1]. A value close to either −1 or 1 indicates a strong relationship between the two variables. A value close to 0 indicates a weak relationship between them. Algorithm 2 shows our proposed algorithm based on LCC, and this algorithm is | ieee_xplore | 928 |
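For reference, a minimal sketch of computing such a linear correlation coefficient between two feature vectors (this is a generic Pearson correlation, not the paper's Algorithm 2):

```python
import numpy as np

def lcc(u, v):
    """Linear (Pearson) correlation coefficient; always in [-1, 1]."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    du, dv = u - u.mean(), v - v.mean()
    return (du @ dv) / np.sqrt((du @ du) * (dv @ dv))

print(lcc([1, 2, 3, 4], [2, 4, 6, 8]))    # 1.0: strong (perfect) relationship
print(lcc([1, 2, 3, 4], [1, -1, -1, 1]))  # 0.0: weak relationship
```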
(a) Feature ranking results on the KDD Cup 99 dataset
Algorithm # Features Feature ranking
FMIFS 19 f5, f23, f6, f3, f36, f12, f24, f37, f2, f32, f9, f31, f29, f26, f17, f33, f35, f39, f34
MIFS (β = 0.3) 25 f5, f23, f6, f9, f32, f18, f19, f15, f17, f16, f14, f7, f20, f11, f21, f13, f8, f22, f29, f31, f41, f1, f26, f10, f37 | ieee_xplore | 957 |
MIFS (β = 1) 25 f5, f7, f17, f32, f18, f20, f9, f15, f14, f21, f16, f8, f22, f19, f13, f11, f29, f1, f41, f31, f10, f27, f26, f12, f28
FLCFS 17 f23, f29, f12, f24, f3, f36, f32, f2, f8, f31, f25, f1, f11, f39, f10, f4, f19
(b) Feature ranking results on the NSL-KDD dataset
Algorithm # Features Feature ranking | ieee_xplore | 958 |
FMIFS 18 f5, f30, f6, f3, f4, f29, f12, f33, f26, f37, f39, f34, f25, f38, f23, f35, f36, f28
MIFS (β = 0.3) 23 f5, f3, f26, f9, f18, f22, f20, f21, f14, f8, f11, f12, f7, f17, f16, f19, f1, f15, f41, f32, f13, f28, f36
MIFS (β = 1) 28 f5, f22, f9, f26, f18, f20, f14, f21, f16, f8, f11, f1, f17, f7, f12, f19, f15, f40, f32, f13, f10, f28, f31, f27, f2, f36, f23, f3 | ieee_xplore | 959 |
LSSVM-IDS + FMIFS 99.46 0.13 99.79 98.76 0.28 99.91 99.64 0.13 99.77
LSSVM-IDS + MIFS (β = 0.3) 99.38 0.23 99.70 95.96 0.53 97.96 98.59 0.16 99.32
LSSVM-IDS + MIFS (β = 1) 89.26 0.34 97.63 93.26 0.47 96.75 98.10 0.58 99.12
LSSVM-IDS + FLCFS 98.47 0.61 98.41 92.29 0.41 96.45 98.07 0.82 98.99 | ieee_xplore | 1,013 |
LSSVM-IDS + All features 99.16 0.97 99.19 91.12 0.38 95.96 94.29 0.33 97.42
Fig. 2. Building and testing times of LSSVM-IDS using all features and LSSVM-IDS combined with FMIFS, respectively, on three datasets.
TABLE 3: Feature Ranking Results for the Four Types of Attacks on the KDD Cup 99 Dataset | ieee_xplore | 1,014 |
Class # Features Feature ranking
DoS 12 f23, f5, f3, f6, f32, f24, f12, f2, f37, f36, f8, f31
Probe 19 f5, f27, f3, f35, f40, f37, f33, f17, f41, f30, f34, f28, f22, f4, f24, f25, f19, f32, f29
U2R 23 f37, f17, f8, f18, f16, f1, f4, f15, f7, f22, f20, f21, f31, f19, f12, f13, f14, f6, f32, f29, f3, f40, f2 | ieee_xplore | 1,015 |
DR FPR Train (s) Test (s) DR FPR Train (s) Test (s)
1 96.01 0.84 0.152 0.246 79.65 4.54 1.823 7.76
2 97.01 0.64 0.296 0.396 84.72 4.03 3.463 10.363
3 97.13 0.64 0.505 0.656 85.58 3.92 5.26 15.443
4 97.18 0.64 1.140 1.343 86.08 3.80 9.662 19.532
5 97.26 0.60 1.475 1.773 86.81 3.54 11.302 22.735 | ieee_xplore | 1,030 |
R2L LSSVM-IDS + FMIFS 90.08 1.06 0.44
PLSSVM + MMIFS 98.70 54
Overall LSSVM-IDS + FMIFS 97.33 6.51 3.85
PLSSVM + MMIFS 93.50 21.4 9.20
TABLE 8: Detection Rate (%) for Different Algorithm Performances on the Test Dataset with Corrected Labels of KDD Cup 99 Dataset (n/a means no available results) | ieee_xplore | 1,034 |
state transition probability function, and $\rho_i : X \times U \times X \to \mathbb{R}$, $i = 1, \ldots, n$ are the reward functions of the agents. In the multiagent case, the state transitions are the result of the joint action of all the agents, $u_k = [u_{1,k}^T, \ldots, u_{n,k}^T]^T$, $u_k \in U$, $u_{i,k} \in U_i$ ($T$ denotes vector transpose). Consequently, the | ieee_xplore | 1,537 |
available, the task would reduce to a Markov decision process, the action space of which would be the joint action space of the SG. In this case, the goal could be achieved by learning the optimal joint-action values with Q-learning:

$$Q_{k+1}(x_k, u_k) = Q_k(x_k, u_k) + \alpha\left[r_{k+1} + \gamma \max_{u'} Q_k(x_{k+1}, u') - Q_k(x_k, u_k)\right] \quad (7)$$ | ieee_xplore | 1,613 |
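A minimal tabular sketch of the update in (7), treating the joint action u as a single index (the state/action space sizes, step size α, and discount γ are illustrative assumptions):

```python
import numpy as np

n_states, n_joint_actions = 5, 4   # joint action space of all agents
alpha, gamma = 0.1, 0.95           # step size and discount factor
Q = np.zeros((n_states, n_joint_actions))

def q_update(x, u, r, x_next):
    """One application of Eq. (7): TD update on the joint-action value."""
    td_target = r + gamma * Q[x_next].max()  # r_{k+1} + gamma * max_u' Q(x_{k+1}, u')
    Q[x, u] += alpha * (td_target - Q[x, u])

q_update(x=0, u=2, r=1.0, x_next=3)
print(Q[0, 2])  # 0.1
```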
$$\bar{h}_i^*(x) = \arg\max_{u_i} \max_{u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_n} Q^*(x, u). \quad (8)$$

Fig. 3. (Left) Two mobile agents approaching an obstacle need to coordinate their action selection. (Right) The common Q-values of the agents for the state depicted to the left. However, the greedy action selection mechanism breaks ties | ieee_xplore | 1,615 |
$Q_1(x, u_1, u_2) + Q_2(x, u_1, u_3) + Q_3(x, u_3, u_4)$. The decomposition might be different for different states. Typically (like in this example), the local Q-functions have smaller dimensions than the global Q-function. Maximization of the joint Q-value is done by solving simpler, local maximizations in terms of | ieee_xplore | 1,630 |
principle to compute strategies and values for the stage games, and a temporal-difference rule similar to Q-learning to propagate the values across state-action pairs. The algorithm is given here for agent 1:

$$h_{1,k}(x_k, \cdot) = \arg m_1(Q_k, x_k) \quad (13)$$
$$Q_{k+1}(x_k, u_{1,k}, u_{2,k}) = Q_k(x_k, u_{1,k}, u_{2,k}) + \alpha[r_{k+1} + \gamma\, m_1(Q_k, x_{k+1})$$ | ieee_xplore | 1,662 |
inherently compositional: secure subsystems combine to form
a larger secure system as long as the external type signatures
of the subsystems agree. The recent development of seman-
tics-based security models (i.e., models that formalize security in terms of program behavior) has provided powerful reasoning | ieee_xplore | 1,919 |
mitted on the basis of a static analysis of process authority and relationships between principals. Security labels have additional structure that describes the entities capable of performing declassification. This model supports the labeling of computations performed on behalf of mutually distrusting principals. | ieee_xplore | 2,106 |
13: d. Calculate gradients ∇w^(l)_k and ∇b^(l)_k for the weights and biases, respectively, of each layer
14:    Gradients are calculated in the following sequence:
15:    i. convolution layer
16:    ii. pooling layer
17:    iii. fully connected layer
18: e. Update weights
19:    w^(l)_ji ← w^(l)_ji + Δw^(l)_ji
20: f. Update bias
21:    b^(l) | ieee_xplore | 2,283 |
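Steps (e) and (f) amount to a plain gradient step once the per-layer gradients are available; a minimal sketch, assuming the usual convention Δw = −η∇w with an illustrative learning rate η:

```python
import numpy as np

def sgd_step(weights, biases, grad_w, grad_b, lr=0.01):
    """Apply w <- w + dw and b <- b + db with dw = -lr * gradient, per layer."""
    for l in range(len(weights)):
        weights[l] += -lr * grad_w[l]   # step (e): update weights
        biases[l]  += -lr * grad_b[l]   # step (f): update biases
    return weights, biases

# Illustrative single layer with constant gradients.
w, b = [np.ones((2, 2))], [np.zeros(2)]
gw, gb = [np.full((2, 2), 0.5)], [np.full(2, 0.5)]
print(sgd_step(w, b, gw, gb)[0][0])  # weights moved against the gradient
```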
e.g., ears, eyes, etc., and the third layer can go further up the complexity order by learning facial shapes of various persons. Even though each layer might learn or detect a defined feature, the sequence is not always designed for it, especially in unsupervised learning. These feature extrac- | ieee_xplore | 2,325 |
get the best results for specific problems. The training algorithms can be fine-tuned at different levels by incorporating heuristics, e.g., for hyperparameter optimization. The time to train a deep learning network model is a major factor in gauging the performance of an algorithm or network. Instead | ieee_xplore | 2,526 |
into the well-known training algorithms and architectures. We highlighted their shortcomings, e.g., getting stuck in local minima, overfitting, and long training times for large problem sets. We examined several state-of-the-art ways to overcome these challenges with different optimization methods. | ieee_xplore | 2,531 |
Y. Bengio is with the Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, Québec H3C 3J7 Canada. Publisher Item Identifier S 0018-9219(98)07863-3. NN Neural network. OCR Optical character recognition. PCA Principal component analysis. RBF Radial basis function. | ieee_xplore | 2,547 |
1. Introduction
Action recognition methods based on skeleton data have been widely investigated and have attracted considerable attention due to their strong adaptability to dynamic circumstances and complicated backgrounds [31, 8, 6, 27, 22, 29, 33, 19, 20, 21, 14, 13, 23, 18, 17, 32, 30, 34]. Conventional | ieee_xplore | 3,565 |
is divided into a training set (40,320 videos) and a validation set (16,560 videos), where the actors in the two subsets are different. 2) Cross-view (X-View): the training set in this benchmark contains 37,920 videos that are captured by cameras 2 and 3, and the validation set contains 18,960 videos | ieee_xplore | 3,670 |
can be adopted as the loss function to learn the trainable parameters $\Theta$ in DnCNN. Here $\{(y_i, x_i)\}_{i=1}^N$ represents $N$ noisy-clean training image (patch) pairs. Fig. 1 illustrates the architecture of the proposed DnCNN for learning $R(y)$. In the following, we explain the architecture of DnCNN and the | ieee_xplore | 3,793 |
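A minimal sketch of such an averaged squared-error loss over the N noisy-clean pairs, assuming (as in residual learning) that the network output R(y) is trained to match the noise y − x:

```python
import numpy as np

def residual_loss(R, pairs):
    """Averaged squared error between predicted residual R(y) and true noise y - x.

    pairs: list of (y_noisy, x_clean) image arrays; R: callable y -> residual.
    """
    total = 0.0
    for y, x in pairs:
        total += 0.5 * np.sum((R(y) - (y - x)) ** 2)
    return total / len(pairs)

# Illustrative check with an oracle residual predictor on synthetic data.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
y = x + 0.1 * rng.normal(size=(8, 8))
print(residual_loss(lambda y_: y_ - x, [(y, x)]))  # 0.0 for the oracle predictor
```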
many candidate patches to find the most likely location. | ieee_xplore | 5,533 |
The cyclic shifts of each base sample $x_i$ can be expressed in a circulant matrix $X_i$. Then, replacing the data matrix $X' = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \end{bmatrix}$ in Eq. (3) results in

$$w = \left( \sum_i X_i^H X_i + \lambda I \right)^{-1} \sum_j X_j^H y, \quad (60)$$

by direct application of the rule for products of block matrices. Factoring the bracketed expression, | ieee_xplore | 5,727 |
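Setting aside the circulant structure that makes this fast in the Fourier domain, Eq. (60) is ordinary ridge regression over stacked blocks; a direct, illustrative solve (shapes and data here are made up):

```python
import numpy as np

def ridge_blocks(X_blocks, y, lam=1e-2):
    """w = (sum_i X_i^H X_i + lam*I)^(-1) sum_j X_j^H y, as in Eq. (60)."""
    d = X_blocks[0].shape[1]
    gram = sum(X.conj().T @ X for X in X_blocks) + lam * np.eye(d)
    rhs = sum(X.conj().T @ y for X in X_blocks)
    return np.linalg.solve(gram, rhs)

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(6, 3)), rng.normal(size=(6, 3))
y = rng.normal(size=6)
print(ridge_blocks([X1, X2], y))
```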
ABSTRACT Unlike previous studies on the Metaverse based on Second Life, the current Metaverse | ieee_xplore | 5,791 |
other hand, another type of object is imaginary animals (e.g., unicorns, dragons) and anthropomorphic objects (e.g., talking chairs) that do not exist. 4) SOUND AND SPEECH SYNTHESIS
Sound synthesis is a field that gives the user a sense of immersion, but research on it is limited compared to vision. | ieee_xplore | 5,910 |
much different from what it is now, although a translucent display window tablet is used. 6) DISCUSSION AND OPEN CHALLENGES
Ready Player One shows negative aspects of the Metaverse (e.g., surrogate exams, taste cheating, and mirroring). The problem of over-addiction is explained in the appearance of | ieee_xplore | 6,100 |
a connection relationship (e.g., Metaverse access, messenger) must be maintained continuously on a relatively low-spec mobile device that can always be accessed. Using an episodic memory that effectively manages the user's log allows the user to feel the comfort and advantage of accessing the Metaverse | ieee_xplore | 6,361 |
dimensions three and four are provided. A reader who is only interested in code construction and the application of space–time block codes may choose to read Section V-B, Definition 5.4.1, Definition 5.5.2, the proof of Theorem 5.5.2, Corollary 5.5.1, the remark after Corollary 5.5.1, and Section V-F. | ieee_xplore | 6,509 |
during the experiment. The random noise obeys the Gaussian distribution $N(0, \sigma^2)$, where $\sigma \in \{0.2, 0.4, 0.8, 1, 2\}$, and the Poisson distribution $P(\lambda)$, where $\lambda \in \{1, 2, 4, 8, 16\}$. Then, we normalize the values of the noise matrices to lie between 0 and 1. Using different evaluation metrics, the results | ieee_xplore | 6,725 |
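A minimal sketch of this noise setup; min-max scaling is one plausible reading of the normalization step, and the matrix shape and seed are illustrative:

```python
import numpy as np

def minmax01(a):
    # Rescale an array linearly so its values lie in [0, 1].
    return (a - a.min()) / (a.max() - a.min())

rng = np.random.default_rng(0)
shape = (100, 100)
gaussian = {s: minmax01(rng.normal(0.0, s, shape)) for s in (0.2, 0.4, 0.8, 1, 2)}
poisson = {l: minmax01(rng.poisson(l, shape).astype(float)) for l in (1, 2, 4, 8, 16)}
print(gaussian[0.2].min(), gaussian[0.2].max())  # 0.0 1.0
```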
is itself a linear system whose complexity decreases as the number of output quantities available increases. The observer may be incorporated in the control of a system which does not have its state vector available for measurement. The observer supplies the state vector, but at the expense of adding poles to the over-all system. I. INTRODUCTION | ieee_xplore | 7,111 |
Suppose that such a transformation did exist; i.e., suppose that for all t, z(t) = T y(t). The two systems are governed by

$$\dot{y} = Ay, \quad (1)$$
$$\dot{z} = Bz + Cy, \quad (2)$$

Fig. 1 - A simple observer. Fig. 2 - Observation of a first-order system. So, if the initial condition on z(0) is chosen as az(0) = y(0), then for all t > 0, az(t) = y(t); but using the relation z = Ty, | ieee_xplore | 7,129 |
Assume, now, that the plant or system, S₁, that is to be observed is governed by

$$\dot{y} = Ay + Dx, \quad (11)$$

where x is an input vector. As before, an observer for this system will be driven by the state vector y. In addition, it is natural to expect that the observer must also be driven by the input vector x. Consider the system S₂ governed by

$$\dot{z} = Bz + Cy + Gx. \quad (12)$$ | ieee_xplore | 7,132 |
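A minimal simulation sketch of the structure in (11) and (12): with C = TA − BT and G = TD, the error e = z − Ty obeys ė = Be and decays whenever B is stable. The matrices below are illustrative assumptions, not from the paper:

```python
import numpy as np

# Plant S1: y' = A y + D x; observer S2: z' = B z + C y + G x (Eqs. (11)-(12)).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
D = np.array([[0.0], [1.0]])
T = np.array([[1.0, 0.5], [0.0, 1.0]])
B = np.array([[-4.0, 0.0], [0.0, -5.0]])  # stable observer dynamics
C = T @ A - B @ T                         # matching condition for the error system
G = T @ D

dt = 0.001
y = np.array([1.0, -1.0])   # plant state (not directly used by the controller)
z = np.zeros(2)             # observer state
for k in range(5000):       # simple forward-Euler integration
    x = np.array([np.sin(0.01 * k)])      # arbitrary known input signal
    y_dot = A @ y + D @ x
    z_dot = B @ z + C @ y + G @ x
    y, z = y + dt * y_dot, z + dt * z_dot

print(np.linalg.norm(z - T @ y))          # near zero: z has converged to T y
```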
Proof: As a first attempt let S₁ drive the nth-order system given by (22).
Fig. 4 - Reduction of the dynamic order.
It is appropriate at this point to review the definitions of controllability and observability for linear time-invariant systems. A discussion of the physical inter- | ieee_xplore | 7,151 |
controllable. Let T be the unique solution of TA − BT = ba′. Then T is invertible. With this Theorem it is easy to derive a result concerning the dynamic order of an observer for a single output system. Theorem 3: Let S₁: ẏ = Ay, v = a′y be an nth-order completely observable system. Let u₁, u₂, …, uₙ be a set of distinct complex constants distinct from the | ieee_xplore | 7,156 |
x = c′y the resulting closed-loop system will have λ₁, λ₂, …, λₙ as its eigenvalues. Proof: First assume that each uᵢ is distinct from the eigenvalues of A. Let B be a matrix in Jordan form which has the uᵢ as its eigenvalues and has only one Jordan block associated with each distinct eigenvalue [3]. Let c₁ be any vector such that (B, c₁′) is completely | ieee_xplore | 7,179 |
server, S₂, may be built for S₁ using only n − m dynamic elements. (As illustrated by the proof, the eigenvalues of the observer are essentially arbitrary.) Proof: Let the m outputs be given by a₁′y, a₂′y, …, aₘ′y. Then since S₁ is completely observable the collection of vectors

$$(A')^i a_j, \quad i = 0, 1, 2, \ldots, n-1; \quad j = 1, 2, \ldots, m$$

spans n dimensions. | ieee_xplore | 7,195 |
Lemma 1 guarantees that each bᵢ is not zero. It will be shown that the vectors (A′ − uᵢI)⁻¹a, i = 1, 2, …, p, generate the same space as the vectors (A′)ᵏa, k = 1, 2, …, n. Assume that we can find αᵢ's such that

$$\sum_{i=1}^{p} \alpha_i (A' - u_i I)^{-1} a = 0. \quad (42)$$

This can be rewritten as

$$P(A') \prod_{i=1}^{p} (A' - u_i I)^{-1} a = 0 \quad (43)$$

where P is a polynomial of degree less than p. But since | ieee_xplore | 7,197 |