id (string, 1–5 chars) | document_id (string, 1–5 chars) | text_1 (string, 78–2.56k chars) | text_2 (string, 95–23.3k chars) | text_1_name (1 class) | text_2_name (1 class)
---|---|---|---|---|---|
30301 | 30300 | In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem. | An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of the signal-to-noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles, which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density. Non-Gaussian statistical signal processing is important when signals and/or noise deviate from the ideal Gaussian model. Stable distributions are among the most important non-Gaussian models. They share defining characteristics with the Gaussian distribution, such as the stability property and central limit theorems, and in fact include the Gaussian distribution as a limiting case. To help engineers better understand stable models and develop methodologies for their applications in signal processing, a tutorial review of the basic characteristics of stable distributions and stable signal processing is presented. The emphasis is on the differences and similarities between stable signal processing methods based on fractional lower-order moments and Gaussian signal processing methods based on second-order moments.
This paper addresses non-Gaussian statistical modeling of interference as a superposition of a large number of small effects from terminals/scatterers distributed in the plane/volume according to a Poisson point process. This problem is relevant to multiple access communication systems without power control and radar. Assuming that the signal strength is attenuated over distance r as 1/r^m, we show that the interference/clutter could be modeled as a spherically symmetric α-stable noise. A novel approach to stable noise modeling is introduced based on the LePage series representation. This establishes grounds to investigate practical constraints in the system model adopted, such as the finite number of interferers and nonhomogeneous Poisson fields of interferers. In addition, the formulas derived allow us to predict noise statistics in environments with lognormal shadowing and Rayleigh fading. The results obtained are useful for the prediction of noise statistics in a wide range of environments with deterministic and stochastic power propagation laws. Computer simulations are provided to demonstrate the efficiency of the α-stable noise model in multiuser communication systems. The analysis presented will be important in the performance evaluation of complex communication systems and in the design of efficient interference suppression techniques. In an electronically controlled sewing machine, new stitch position coordinate information is retrieved from a solid state memory and transferred to the sewing machine servo system one bit per some proportional clock period, at either a fast stepping rate or a slow stepping rate depending upon the speed of the sewing machine. By use of suitable logic gates, a low speed condition will initiate a slow stepping rate of some fraction of a predetermined clock period, whereas a high speed condition will initiate a fast stepping rate of once every clock period. The new data from the solid state memory is fed to a comparator wherein it is compared to the immediately prior stitch position coordinate information transferred to the servo system and retained in an up/down counter. The comparator determines whether the new data requires an up or a down count of the counter. The signal from the up/down counter is transferred to a pulse width modulator, the output of which is fed to a digital-to-analog converter for the appropriate bight or feed servo system of the sewing machine. In effect, the pulse width modulated signal transferred to the servo system is a ramp signal of steep or shallow slope dependent on whether the sewing machine is operating above or below a predetermined speed. We present a methodology for determining the outage probability of wideband ad hoc networks with random wireless channels. Assuming that the nodes are Poisson distributed and subject to a required SINR constraint, we develop a simple framework that gives upper and lower bounds on the outage probability. These bounds are important in that they can be manipulated to obtain bounds on the transmission capacity, i.e., the maximum permissible spatial density of transmissions ensuring an acceptably low outage probability. In this paper, we derive the outage probability of wireless ad hoc networks under path loss and shadowing, which are the dominant large-scale effects in wideband ad hoc networks. The analytical framework is rooted in stochastic geometry, employing marked point processes, void probabilities, Palm measure, and Campbell’s Theorem.
| Abstract of query paper | Cite abstracts |
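The homogeneous-PPP, Rayleigh-fading baseline that the cluster-process results above are compared against can be probed numerically. Below is a minimal Monte-Carlo sketch, not taken from any of the cited papers; the density `lam`, path-loss exponent `alpha`, link distance `r`, and SIR threshold `theta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, r, theta = 0.05, 4.0, 1.0, 1.0  # interferer density, path loss, TX-RX distance, SIR threshold
R, trials = 60.0, 5000                      # observation-disc radius, Monte Carlo runs

hits = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R ** 2)   # Poisson number of interferers in the disc
    d = R * np.sqrt(rng.uniform(size=n))    # radii of points uniform in a disc of radius R
    I = np.sum(rng.exponential(size=n) * d ** -alpha)  # Rayleigh fading -> exponential power gains
    S = rng.exponential() * r ** -alpha     # desired signal, also Rayleigh-faded
    hits += S / I >= theta                  # interference-limited: thermal noise ignored
print(f"empirical success probability: {hits / trials:.3f}")
```

For the Poisson cluster process of the query paper one would instead draw parent points and scatter daughter points around them before summing the same faded path-loss terms.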
30302 | 30301 | In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem. | An Aloha-type access control mechanism for large mobile, multihop, wireless networks is defined and analyzed. This access scheme is designed for the multihop context, where it is important to find a compromise between the spatial density of communications and the range of each transmission. More precisely, the analysis aims at optimizing the product of the number of simultaneously successful transmissions per unit of space (spatial reuse) by the average range of each transmission. The optimization is obtained via an averaging over all Poisson configurations for the location of interfering mobiles, where an exact evaluation of signal over noise ratio is possible. The main mathematical tools stem from stochastic geometry and are spatial versions of the so-called additive and max shot noise processes. The resulting medium access control (MAC) protocol exhibits some interesting properties. First, it can be implemented in a decentralized way provided some local geographic information is available to the mobiles. In addition, its transport capacity is proportional to the square root of the density of mobiles which is the upper bound of Gupta and Kumar. Finally, this protocol is self-adapting to the node density and it does not require prior knowledge of this density. We derive outage expressions and throughput bounds for wireless networks subject to different sources of nondeterminism. The degree of uncertainty is characterized by the location of the network in the uncertainty cube whose three axes represent the three main sources of uncertainty in interference-limited networks: the node distribution, the channel gains, and the channel access. The range for the coordinates is [0,1], where 0 indicates complete determinism, and 1 a maximum degree of randomness (nodes distributed in a Poisson point process, fading with fading figure 1, and ALOHA channel access, respectively) | Abstract of query paper | Cite abstracts |
30303 | 30302 | In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem. | In this paper, upper and lower bounds on the transmission capacity of spread-spectrum (SS) wireless ad hoc networks are derived. We define transmission capacity as the product of the maximum density of successful transmissions multiplied by their data rate, given an outage constraint. Assuming that the nodes are randomly distributed in space according to a Poisson point process, we derive upper and lower bounds for frequency hopping (FH-CDMA) and direct sequence (DS-CDMA) SS networks, which incorporate traditional modulation types (no spreading) as a special case. These bounds cleanly summarize how ad hoc network capacity is affected by the outage probability, spreading factor, transmission power, target signal-to-noise ratio (SNR), and other system parameters. Using these bounds, it can be shown that FH-CDMA obtains a higher transmission capacity than DS-CDMA on the order of M^(1−2/α), where M is the spreading factor and α > 2 is the path loss exponent. A tangential contribution is an (apparently) novel technique for obtaining tight bounds on tail probabilities of additive functionals of homogeneous Poisson point processes. | Abstract of query paper | Cite abstracts |
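For context on these outage-constrained capacity results: in the interference-limited homogeneous-PPP model with Rayleigh fading, the success probability of a link of length r at SIR threshold θ has a standard closed form. This is a widely known baseline, stated here as a reference point rather than a formula taken from the abstracts above:

```latex
p_s \;=\; \Pr[\mathrm{SIR} \ge \theta]
    \;=\; \exp\!\left(-\lambda \pi r^{2}\, \theta^{2/\alpha}\, C(\alpha)\right),
\qquad
C(\alpha) \;=\; \frac{2\pi/\alpha}{\sin(2\pi/\alpha)}, \quad \alpha > 2.
```

Solving p_s ≥ 1 − ε for λ yields the maximum density of transmitters meeting the outage constraint ε, which is the transmission-capacity quantity that the bounds above sandwich.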
30304 | 30303 | The k-forest problem is a common generalization of both the k-MST and the dense- @math -subgraph problems. Formally, given a metric space on @math vertices @math , with @math demand pairs @math and a 'target' @math , the goal is to find a minimum cost subgraph that connects at least @math demand pairs. In this paper, we give an @math -approximation algorithm for @math -forest, improving on the previous best ratio of @math by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an @math point metric space with @math objects each with its own source and destination, and a vehicle capable of carrying at most @math objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an @math -approximation algorithm for the @math -forest problem implies an @math -approximation algorithm for Dial-a-Ride. Using our results for @math -forest, we get an @math -approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an @math -approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity @math is large, we give a slight improvement on their results. | An instance of the k-Steiner forest problem consists of an undirected graph G = (V,E), the edges of which are associated with non-negative costs, and a collection D = {(s_i, t_i) : 1 ≤ i ≤ d} of distinct pairs of vertices, interchangeably referred to as demands. We say that a forest F ⊆ G connects a demand (s_i, t_i) when it contains an s_i-t_i path. Given a requirement parameter k ≤ |D|, the goal is to find a minimum cost forest that connects at least k demands in D. This problem has recently been studied by Hajiaghayi and Jain [SODA'06], whose main contribution in this context was to relate the inapproximability of k-Steiner forest to that of the dense k-subgraph problem. However, Hajiaghayi and Jain did not provide any algorithmic result for the respective settings, and posed this objective as an important direction for future research. In this paper, we present the first non-trivial approximation algorithm for the k-Steiner forest problem, which is based on a novel extension of the Lagrangian relaxation technique. Specifically, our algorithm constructs a feasible forest whose cost is within a factor of O(min{n^(2/3), √d · log d}) of optimal, where n is the number of vertices in the input graph and d is the number of demands. In this paper we study the prize-collecting version of the Generalized Steiner Tree problem. To the best of our knowledge, there is no general combinatorial technique in approximation algorithms developed to study the prize-collecting versions of various problems. These problems are studied on a case-by-case basis by [5] by applying an LP-rounding technique which is not a combinatorial approach. The main contribution of this paper is to introduce a general combinatorial approach towards solving these problems through novel primal-dual schema (without any need to solve an LP). We fuse the primal-dual schema with Farkas lemma to obtain a combinatorial 3-approximation algorithm for the Prize-Collecting Generalized Steiner Tree problem.
Our work also inspires a combinatorial algorithm [19] for solving a special case of Kelly's problem [22] of pricing edges. We also consider the k-forest problem, a generalization of k-MST and k-Steiner tree, and we show that, unlike these problems, for which there are constant-factor approximation algorithms, the k-forest problem is much harder to approximate. In particular, obtaining an approximation factor better than O(n^(1/6−ε)) for k-forest requires substantially new ideas, including improving the approximation factor O(n^(1/3−ε)) for the notorious densest k-subgraph problem. We note that k-forest and the prize-collecting version of Generalized Steiner Tree are closely related to each other, since the latter is the Lagrangian relaxation of the former. | Abstract of query paper | Cite abstracts |
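The closing remark, that the prize-collecting version is the Lagrangian relaxation of k-forest, can be spelled out. With c(F) the cost of a forest F and cov(F) the number of demands it connects, k-forest is min{c(F) : cov(F) ≥ k}; dualizing the coverage constraint with a multiplier π ≥ 0 gives

```latex
\min_{F}\; c(F) + \pi\bigl(k - \mathrm{cov}(F)\bigr)
\;=\; \pi\,(k - d) \;+\; \min_{F}\Bigl(c(F) + \pi \cdot \#\{\text{demands not connected by } F\}\Bigr),
```

which, up to the constant π(k − d), is exactly the prize-collecting objective with a uniform penalty π per unconnected demand (here d = |D| is the total number of demands).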
30305 | 30304 | The k-forest problem is a common generalization of both the k-MST and the dense- @math -subgraph problems. Formally, given a metric space on @math vertices @math , with @math demand pairs @math and a 'target' @math , the goal is to find a minimum cost subgraph that connects at least @math demand pairs. In this paper, we give an @math -approximation algorithm for @math -forest, improving on the previous best ratio of @math by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an @math point metric space with @math objects each with its own source and destination, and a vehicle capable of carrying at most @math objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an @math -approximation algorithm for the @math -forest problem implies an @math -approximation algorithm for Dial-a-Ride. Using our results for @math -forest, we get an @math -approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an @math -approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity @math is large, we give a slight improvement on their results. | We present a polynomial-time 2-approximation algorithm for the problem of finding the minimum tree that spans at least k vertices. Our result also leads to a 2-approximation algorithm for finding the minimum tour that visits k vertices and to a 3-approximation algorithm for the problem of finding the maximum number of vertices that can be spanned by a tree of length at most a given bound. In this paper we study the prize-collecting version of the Generalized Steiner Tree problem. To the best of our knowledge, there is no general combinatorial technique in approximation algorithms developed to study the prize-collecting versions of various problems. These problems are studied on a case-by-case basis by [5] by applying an LP-rounding technique which is not a combinatorial approach. The main contribution of this paper is to introduce a general combinatorial approach towards solving these problems through novel primal-dual schema (without any need to solve an LP). We fuse the primal-dual schema with Farkas lemma to obtain a combinatorial 3-approximation algorithm for the Prize-Collecting Generalized Steiner Tree problem. Our work also inspires a combinatorial algorithm [19] for solving a special case of Kelly's problem [22] of pricing edges. We also consider the k-forest problem, a generalization of k-MST and k-Steiner tree, and we show that, unlike these problems, for which there are constant-factor approximation algorithms, the k-forest problem is much harder to approximate. In particular, obtaining an approximation factor better than O(n^(1/6−ε)) for k-forest requires substantially new ideas, including improving the approximation factor O(n^(1/3−ε)) for the notorious densest k-subgraph problem. We note that k-forest and the prize-collecting version of Generalized Steiner Tree are closely related to each other, since the latter is the Lagrangian relaxation of the former. We consider the k-traveling repairmen problem, a generalization of the minimum latency problem to multiple repairmen.
We give a polynomial-time 8.497α-approximation algorithm for this generalization, where α denotes the best achievable approximation factor for the problem of finding the least-cost rooted tree spanning i vertices of a metric. For the latter problem, a (2 + ε)-approximation is known. Our results can be compared with the best-known approximation algorithm using similar techniques for the case k = 1, which is 3.59α. Moreover, recent work of [2003] shows how to remove the factor of α, thus improving all of these results by that factor. We are aware of no previous work on the approximability of the present problem. In addition, we give a simple proof of the 3.59α-approximation result that can be more easily extended to the case of multiple repairmen, and may be of independent interest. | Abstract of query paper | Cite abstracts |
30306 | 30305 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | Game theory provides a wealth of tools that can be applied to the design and operation of communications systems. In this article, we provide a brief introduction to game theory. We then present applications of game theory to problems in random access and power control. In the case of random access, we examine the behavior of selfish users in a simplified Aloha system; surprisingly, rational selfish users do not implement the "always transmit" strategy that one might expect. In the case of power control, we show that game theoretic techniques can yield an optimal operating point without the intervention of an external controller. With cellular phones mass-market consumer items, the next frontier is mobile multimedia communications. This situation raises the question of how to perform power control for information sources other than voice. To explore this issue, we use the concepts and mathematics of microeconomics and game theory. In this context, the quality of service of a telephone call is referred to as the "utility" and the distributed power control problem for a CDMA telephone is a "noncooperative game." The power control algorithm corresponds to a strategy that has a locally optimum operating point referred to as a "Nash equilibrium." The telephone power control algorithm is also "Pareto efficient," in the terminology of game theory. When we apply the same approach to power control in wireless data transmissions, we find that the corresponding strategy, while locally optimum, is not Pareto efficient. Relative to the telephone algorithm, there are other algorithms that produce higher utility for at least one terminal, without decreasing the utility for any other terminal. This article presents one such algorithm. The algorithm includes a price function proportional to transmitter power. When terminals adjust their power levels to maximize the net utility (utility-price), they arrive at lower power levels and higher utility than they achieve when they individually strive to maximize utility. | Abstract of query paper | Cite abstracts |
30307 | 30306 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | A major challenge in the operation of wireless communications systems is the efficient use of radio resources. One important component of radio resource management is power control, which has been studied extensively in the context of voice communications. With the increasing demand for wireless data services, it is necessary to establish power control algorithms for information sources other than voice. We present a power control solution for wireless data in the analytical setting of a game theoretic framework. In this context, the quality of service (QoS) a wireless terminal receives is referred to as the utility and distributed power control is a noncooperative power control game where users maximize their utility. The outcome of the game results in a Nash (1951) equilibrium that is inefficient. We introduce pricing of transmit powers in order to obtain Pareto improvement of the noncooperative power control game, i.e., to obtain improvements in user utilities relative to the case with no pricing. Specifically, we consider a pricing function that is a linear function of the transmit power. The simplicity of the pricing function allows a distributed implementation where the price can be broadcast by the base station to all the terminals. We see that pricing is especially helpful in a heavily loaded system. Recent publications recognize that decentralized algorithms useful in wireless data applications can be obtained via microeconomics and game theory. In these studies, each agent maximizes, under appropriate rules and constraints, a quality-of-service (QoS) index. A key solution is a "Nash equilibrium"; i.e., an allocation from which no agent is better off by unilaterally "deviating". The actual maximization may be made by software which may not be directly "controllable" by a human user. The model and, especially, the chosen QoS index should be as general as possible, so that the derived results be applicable to a wide variety of channel conditions, modulation schemes, and other physical-layer characteristics. Likewise, the chosen index should exhibit predictable and reliable technical behavior, without exacting a high complexity cost. This note describes a model, and particularly, a QoS index which can accommodate a wide variety of physical layer situations. 
The proposed index is shown to exhibit solid technical behavior, be physically significant, intuitively appealing, and applicable to a wide variety of physical layer situations. A game in which terminals carrying multi-rate traffic seek to maximize this index is analyzed, and closed-form equilibrium conditions and power levels are derived "from first principles". All terminals want the same signal-to-interference ratio (SIR), but some cannot reach the necessary power level. At equilibrium, a number of terminals transmit full power, and others achieve the same optimal SIR. A basic rationale to search for these equilibria is provided. | Abstract of query paper | Cite abstracts |
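The equilibrium structure described just above, where every terminal aims at a common target SIR and those that cannot afford it saturate at full power, is what a capped best-response iteration converges to. A small sketch with illustrative gains and parameters (not values from the cited papers):

```python
import numpy as np

h = np.array([2.0, 0.5, 0.2, 0.1])       # uplink channel gains (illustrative)
gamma_t, sigma2, p_max = 1.0, 0.1, 1.0   # target SIR, noise power, power cap
p = np.zeros_like(h)

for _ in range(100):                      # best-response updates (a standard interference function)
    for i in range(len(p)):
        interf = sigma2 + h @ p - h[i] * p[i]        # interference-plus-noise seen by user i
        p[i] = min(p_max, gamma_t * interf / h[i])   # least power reaching gamma_t, capped

sir = h * p / (sigma2 + h @ p - h * p)
print("powers:", p.round(3))              # only the strongest user stays below p_max
print("SIRs:  ", sir.round(2))            # it meets gamma_t; the rest fall short at full power
```

Adding a linear price on transmit power, as in the cited papers, shifts this fixed point toward lower powers and higher utilities.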
30308 | 30307 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | A major challenge in the operation of wireless communications systems is the efficient use of radio resources. One important component of radio resource management is power control, which has been studied extensively in the context of voice communications. With the increasing demand for wireless data services, it is necessary to establish power control algorithms for information sources other than voice. We present a power control solution for wireless data in the analytical setting of a game theoretic framework. In this context, the quality of service (QoS) a wireless terminal receives is referred to as the utility and distributed power control is a noncooperative power control game where users maximize their utility. The outcome of the game results in a Nash (1951) equilibrium that is inefficient. We introduce pricing of transmit powers in order to obtain Pareto improvement of the noncooperative power control game, i.e., to obtain improvements in user utilities relative to the case with no pricing. Specifically, we consider a pricing function that is a linear function of the transmit power. The simplicity of the pricing function allows a distributed implementation where the price can be broadcast by the base station to all the terminals. We see that pricing is especially helpful in a heavily loaded system. | Abstract of query paper | Cite abstracts |
30309 | 30308 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its "best" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented. It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also to a multicarrier system in which each user maximizes its utility over each carrier independently Recent publications recognize that decentralized algorithms useful in wireless data applications can be obtained via microeconomics and game theory. In these studies, each agent maximizes, under appropriate rules and constraints, a quality-of-service (QoS) index. A key solution is a "Nash equilibrium"; i.e., an allocation from which no agent is better off by unilaterally "deviating". The actual maximization may be made by software which may not be directly "controllable" by a human user. 
The model and, especially, the chosen QoS index should be as general as possible, so that the derived results are applicable to a wide variety of channel conditions, modulation schemes, and other physical-layer characteristics. Likewise, the chosen index should exhibit predictable and reliable technical behavior, without exacting a high complexity cost. This note describes a model, and particularly, a QoS index which can accommodate a wide variety of physical layer situations. The proposed index is shown to exhibit solid technical behavior, be physically significant, intuitively appealing, and applicable to a wide variety of physical layer situations. A game in which terminals carrying multi-rate traffic seek to maximize this index is analyzed, and closed-form equilibrium conditions and power levels are derived "from first principles". All terminals want the same signal-to-interference ratio (SIR), but some cannot reach the necessary power level. At equilibrium, a number of terminals transmit full power, and others achieve the same optimal SIR. A basic rationale to search for these equilibria is provided. In this paper, the cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency. The uplink of a direct-sequence code-division multiple-access data network is considered, and a noncooperative game is proposed in which users in the network are allowed to choose their uplink receivers as well as their transmit powers to maximize their own utilities. The utility function measures the number of reliable bits transmitted by the user per joule of energy consumed. Focusing on linear receivers, the Nash equilibrium for the proposed game is derived. It is shown that the equilibrium is one where the powers are signal-to-interference-plus-noise ratio-balanced with the minimum mean-square error (MMSE) detector as the receiver. In addition, this framework is used to study power-control games for the matched filter, the decorrelator, and the MMSE detector; and the receivers' performance is compared in terms of the utilities achieved at equilibrium (in bits/joule). The optimal cooperative solution is also discussed and compared with the noncooperative approach. Extensions of the results to the case of multiple receive antennas are also presented. In addition, an admission-control scheme based on maximizing the total utility in the network is proposed. | Abstract of query paper | Cite abstracts |
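The multicarrier result above, that a user maximizes bits-per-joule by transmitting only on its "best" carrier, the one needing the least power to hit the target SINR, reduces to a one-line selection once the per-carrier requirements are computed. A toy sketch (the gains and interference levels are made-up placeholders):

```python
import numpy as np

gains = np.array([0.3, 1.2, 0.7])     # one user's channel gains on three carriers (hypothetical)
interf = np.array([0.5, 0.9, 0.4])    # interference-plus-noise at the receiver output per carrier
gamma_t = 2.0                         # target output SINR

p_needed = gamma_t * interf / gains   # transmit power required on each carrier
best = int(np.argmin(p_needed))       # "best" carrier = least required power
print(f"required powers: {p_needed.round(2)}, transmit only on carrier {best}")
```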
30310 | 30309 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | A stronger result on the limiting distribution of the eigenvalues of random Hermitian matrices of the form A + XTX*, originally studied in Marcenko and Pastur, is presented. Here, X (N × n), T (n × n), and A (N × N) are independent, with X containing i.i.d. entries having finite second moments, T is diagonal with real (diagonal) entries, A is Hermitian, and n/N → c > 0 as N → ∞. Under additional assumptions on the eigenvalues of A and T, almost sure convergence of the empirical distribution function of the eigenvalues of A + XTX* is proven with the aid of Stieltjes transforms, taking a more direct approach than previous methods. A game-theoretic model for studying power control in multicarrier code-division multiple-access systems is proposed. Power control is modeled as a noncooperative game in which each user decides how much power to transmit over each carrier to maximize its own utility. The utility function considered here measures the number of reliable bits transmitted over all the carriers per joule of energy consumed and is particularly suitable for networks where energy efficiency is important. The multidimensional nature of users' strategies and the nonquasi-concavity of the utility function make the multicarrier problem much more challenging than the single-carrier or throughput-based-utility case. It is shown that, for all linear receivers including the matched filter, the decorrelator, and the minimum-mean-square-error detector, a user's utility is maximized when the user transmits only on its "best" carrier. This is the carrier that requires the least amount of power to achieve a particular target signal-to-interference-plus-noise ratio at the output of the receiver. The existence and uniqueness of Nash equilibrium for the proposed power control game are studied. In particular, conditions are given that must be satisfied by the channel gains for a Nash equilibrium to exist, and the distribution of the users among the carriers at equilibrium is characterized. In addition, an iterative and distributed algorithm for reaching the equilibrium (when it exists) is presented.
It is shown that the proposed approach results in significant improvements in the total utility achieved at equilibrium compared with a single-carrier system and also with a multicarrier system in which each user maximizes its utility over each carrier independently. In this contribution, the performance of an uplink CDMA system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). We consider the realistic case of frequency selective channels. This scenario illustrates the case of decentralized schemes and aims at reducing the downlink signaling overhead. Various receivers are considered, namely the Matched filter, the MMSE filter and the optimum filter. The goal of this paper is to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large. To that end we combine two asymptotic methodologies. The first is asymptotic random matrix theory which allows us to obtain explicit expressions for the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games along with the Wardrop equilibrium concept which allows us to compute good approximations of the Nash equilibrium as the number of mobiles grows. In this paper, the cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency. The uplink of a direct-sequence code-division multiple-access data network is considered, and a noncooperative game is proposed in which users in the network are allowed to choose their uplink receivers as well as their transmit powers to maximize their own utilities. The utility function measures the number of reliable bits transmitted by the user per joule of energy consumed. Focusing on linear receivers, the Nash equilibrium for the proposed game is derived. It is shown that the equilibrium is one where the powers are signal-to-interference-plus-noise ratio-balanced with the minimum mean-square error (MMSE) detector as the receiver. In addition, this framework is used to study power-control games for the matched filter, the decorrelator, and the MMSE detector; and the receivers' performance is compared in terms of the utilities achieved at equilibrium (in bits/joule). The optimal cooperative solution is also discussed and compared with the noncooperative approach. Extensions of the results to the case of multiple receive antennas are also presented. In addition, an admission-control scheme based on maximizing the total utility in the network is proposed. | Abstract of query paper | Cite abstracts |
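The random-matrix step used above rests on almost-sure convergence of empirical spectra, of which the Marcenko–Pastur law (the T = I, A = 0 special case of the cited result) is the prototype. That convergence is easy to check empirically; a short sketch (requires numpy and matplotlib; the sizes are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

N, c = 1000, 0.5                          # matrix size and aspect ratio c = n/N
n = int(c * N)
X = np.random.default_rng(1).standard_normal((N, n)) / np.sqrt(N)
eig = np.linalg.eigvalsh(X.T @ X)         # spectrum of (1/N) Y^T Y with Y i.i.d. unit variance

lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
x = np.linspace(lo + 1e-6, hi - 1e-6, 400)
mp = np.sqrt((hi - x) * (x - lo)) / (2 * np.pi * c * x)   # Marcenko-Pastur density for ratio c

plt.hist(eig, bins=40, density=True, alpha=0.5)
plt.plot(x, mp)
plt.show()                                # the histogram hugs the curve already at N = 1000
```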
30311 | 30310 | In this contribution, the performance of a multiuser system is analyzed in the context of frequency selective fading channels. Using game theoretic tools, a useful framework is provided in order to determine the optimal power allocation when users know only their own channel (while perfect channel state information is assumed at the base station). This scenario illustrates the case of decentralized schemes, where limited information on the network is available at the terminal. Various receivers are considered, namely the matched filter, the MMSE filter and the optimum filter. The goal of this paper is to extend previous work, and to derive simple expressions for the non-cooperative Nash equilibrium as the number of mobiles becomes large and the spreading length increases. To that end two asymptotic methodologies are combined. The first is asymptotic random matrix theory which allows us to obtain explicit expressions of the impact of all other mobiles on any given tagged mobile. The second is the theory of non-atomic games which computes good approximations of the Nash equilibrium as the number of mobiles grows. | In this paper, the cross-layer design problem of joint multiuser detection and power control is studied, using a game-theoretic approach that focuses on energy efficiency. The uplink of a direct-sequence code-division multiple-access data network is considered, and a noncooperative game is proposed in which users in the network are allowed to choose their uplink receivers as well as their transmit powers to maximize their own utilities. The utility function measures the number of reliable bits transmitted by the user per joule of energy consumed. Focusing on linear receivers, the Nash equilibrium for the proposed game is derived. It is shown that the equilibrium is one where the powers are signal-to-interference-plus-noise ratio-balanced with the minimum mean-square error (MMSE) detector as the receiver. In addition, this framework is used to study power-control games for the matched filter, the decorrelator, and the MMSE detector; and the receivers' performance is compared in terms of the utilities achieved at equilibrium (in bits/joule). The optimal cooperative solution is also discussed and compared with the noncooperative approach. Extensions of the results to the case of multiple receive antennas are also presented. In addition, an admission-control scheme based on maximizing the total utility in the network is proposed. | Abstract of query paper | Cite abstracts |
30312 | 30311 | We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists. | We consider a matching market, in which the aim is to maintain a popular matching between a set of applicants and a set of posts, where each applicant has a preference list that ranks some subset of acceptable posts. The setting is dynamic: applicants and posts can enter and leave the market, and applicants can also change their preferences arbitrarily. After any change, the current matching may no longer be popular, in which case, we are required to update it. However, our model demands that we can switch from one matching to another only if there is consensus among the applicants to agree to the switch. Hence, we need to update via a voting path, which is a sequence of matchings, each more popular than its predecessor, that ends in a popular matching. In this paper, we show that, as long as some popular matching exists, there is a 2-step voting path from any given matching to some popular matching. Furthermore, given any popular matching, we show how to find a shortest-length such voting path in linear time. We consider the problem of finding a popular matching in the Capacitated House Allocation problem (CHA). An instance of CHA involves a set of agents and a set of houses. Each agent has a preference list in which a subset of houses are ranked in strict order, and each house may be matched to a number of agents that must not exceed its capacity. A matching M is popular if there is no other matching M' such that the number of agents who prefer their allocation in M' to that in M exceeds the number of agents who prefer their allocation in M to that in M'. Here, we give an O(√C·n1 + m) algorithm to determine if an instance of CHA admits a popular matching, and if so, to find a largest such matching, where C is the total capacity of the houses, n1 is the number of agents and m is the total length of the agents' preference lists. For the case where preference lists may contain ties, we give an O((√C + n1)·m) algorithm for the analogous problem. We consider matching markets where a centralized authority must find a matching between the agents on one side of the market, and the items on the other side. Such settings occur, for example, in mail-based DVD rental services such as NetFlix or in some job markets. The objective is to find a popular matching, or a matching that is preferred by a majority of the agents to any other matching. This concept was first defined and studied in prior work. The main drawback of this concept is that popular matchings sometimes do not exist. We partially address this issue in this paper, by proving that in a probabilistic setting where preference lists are drawn at random and the number of items is more than the number of agents by a small multiplicative factor, popular matchings almost surely exist. More precisely, we prove that there is a threshold α ≈ 1.42 such that if the number of items divided by the number of agents exceeds this threshold, then a solution almost always exists. Our proof uses a known characterization result and a number of tools from the theory of random graphs and phase transitions. | Abstract of query paper | Cite abstracts |
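Popularity is a pairwise-vote condition, so on toy instances it can be verified by brute force: M is popular iff no other matching beats it in a weighted head-to-head vote. A self-contained sketch (the instance and weights are invented for illustration; the cited algorithms avoid this exponential enumeration):

```python
prefs = {"a1": ["p1", "p2"], "a2": ["p1"], "a3": ["p2", "p1"]}  # ranked job lists (hypothetical)
weight = {"a1": 2.0, "a2": 1.0, "a3": 1.0}                      # applicant weights

def rank(M, a):
    j = M.get(a)                       # unmatched applicants rank below any listed job
    return prefs[a].index(j) if j in prefs[a] else len(prefs[a])

def margin(M, Mp):
    """Weight preferring Mp over M, minus weight preferring M over Mp."""
    return sum(w * ((rank(M, a) > rank(Mp, a)) - (rank(M, a) < rank(Mp, a)))
               for a, w in weight.items())

def all_matchings():
    apps, out = list(prefs), []
    def rec(i, M, used):
        if i == len(apps):
            out.append(dict(M)); return
        a = apps[i]
        rec(i + 1, M, used)            # leave applicant a unmatched
        for j in prefs[a]:
            if j not in used:
                M[a] = j; rec(i + 1, M, used | {j}); del M[a]
    rec(0, {}, set())
    return out

cands = all_matchings()
popular = [M for M in cands if all(margin(M, Mp) <= 0 for Mp in cands)]
print(popular)                          # -> [{'a1': 'p1', 'a3': 'p2'}]
```

Note the role of the weights: a1's weight of 2 means any matching that moves a1 off its first choice loses more weighted votes than a2 and a3 can jointly supply.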
30313 | 30312 | We present a deterministic exploration mechanism for sponsored search auctions, which enables the auctioneer to learn the relevance scores of advertisers, and allows advertisers to estimate the true value of clicks generated at the auction site. This exploratory mechanism deviates only minimally from the mechanism currently used by Google and Yahoo! in the sense that it retains the same pricing rule, a similar ranking scheme, and a similar mathematical structure of payoffs. In particular, the estimations of the relevance scores and true values are achieved by providing a chance to lower-ranked advertisers to obtain better slots. This allows the search engine to potentially test a new pool of advertisers, and correspondingly, enables new advertisers to estimate the value of clicks/leads generated via the auction. Both these quantities are unknown a priori, and their knowledge is necessary for the auction to operate efficiently. We show that such an exploration policy can be incorporated without any significant loss in revenue for the auctioneer. We compare the revenue of the new mechanism to that of the standard mechanism at their corresponding symmetric Nash equilibria and compute the cost of uncertainty, which is defined as the relative loss in expected revenue per impression. We also bound the loss in efficiency, as well as in user experience, due to exploration, under the same solution concept (i.e., SNE). Thus the proposed exploration mechanism learns the relevance scores while incorporating the incentive constraints from the advertisers, who are selfish and are trying to maximize their own profits, and therefore the exploration is essentially achieved via mechanism design. We also discuss variations of the new mechanism such as truthful implementations. | We introduce an exploration scheme aimed at learning advertiser click-through rates in sponsored search auctions with minimal effect on advertiser incentives. The scheme preserves both the current ranking and pricing policies of the search engine and only introduces one set of parameters which control the rate of exploration. These parameters can be set so as to allow enough exploration to learn advertiser click-through rates over time, but also eliminate incentives for advertisers to alter their currently submitted bids. When advertisers have much more information than the search engine, we show that although this goal is not achievable, incentives to deviate can be made arbitrarily small by appropriately setting the exploration rate. Given that advertisers do not alter their bids, we bound revenue loss due to exploration. | Abstract of query paper | Cite abstracts |
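A cartoon of the exploration idea in this row: keep the rank-by-(bid × estimated relevance) rule, but with small probability promote a lower-ranked advertiser into the slot so that its click-through rate accumulates impressions to be estimated from. Everything below (the names, the single slot, and the promotion rule) is a hypothetical sketch, not the paper's mechanism:

```python
import random

random.seed(0)
true_ctr = {"A": 0.10, "B": 0.06, "C": 0.05}  # unknown to the auctioneer
bids = {"A": 1.0, "B": 1.2, "C": 0.8}
clicks = {a: 0 for a in bids}
views = {a: 0 for a in bids}
eps = 0.1                                      # exploration rate (a design knob)

for _ in range(20000):
    est = {a: clicks[a] / views[a] if views[a] else 1.0 for a in bids}  # optimistic start
    ranked = sorted(bids, key=lambda a: bids[a] * est[a], reverse=True)
    if random.random() < eps:                  # exploration: lift a random loser into the slot
        i = random.randrange(1, len(ranked))
        ranked[0], ranked[i] = ranked[i], ranked[0]
    shown = ranked[0]                          # one slot, for simplicity
    views[shown] += 1
    clicks[shown] += random.random() < true_ctr[shown]

print({a: round(clicks[a] / max(views[a], 1), 3) for a in bids})  # estimates near true CTRs
```

The trade-off the papers quantify is visible here: larger `eps` learns faster but shows worse-ranked ads more often, costing revenue and user experience.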
30314 | 30313 | We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol. | Let (X_k, Y_k)_{k=1}^∞ be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set X. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a sequence {X̂_k}, where X̂_k ∈ X̂, the reproduction alphabet. The average distortion level is (1/n) Σ_{k=1}^{n} E[D(X_k, X̂_k)], where D(x, x̂) ≥ 0, x ∈ X, x̂ ∈ X̂, is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information {Y_k}. In this paper we determine the quantity R*(d), defined as the infimum of rates R such that (with ε > 0 arbitrarily small and with suitably large n) communication is possible in the above setting at an average distortion level (as defined above) not exceeding d + ε. The main result is that R*(d) = inf [I(X;Z) − I(Y;Z)], where the infimum is with respect to all auxiliary random variables Z (which take values in a finite set Z) that satisfy: i) Y, Z conditionally independent given X; ii) there exists a function f: Y × Z → X̂, such that E[D(X, f(Y,Z))] ≤ d. Let R_{X|Y}(d) be the rate-distortion function which results when the encoder as well as the decoder has access to the side information {Y_k}. In nearly all cases it is shown that when d > 0 then R*(d) > R_{X|Y}(d), so that knowledge of the side information at the encoder permits transmission of the {X_k} at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of {X_k}, i.e., d = ε for any ε > 0, knowledge of the side information at the encoder does not allow a reduction of the transmission rate. We propose the use of punctured turbo codes for compression of correlated binary sources. Compression is achieved because of puncturing. The resulting performance is close to the theoretical limit provided by the Slepian-Wolf (1973) theorem. No information about the correlation between sources is required in the encoding process.
The proposed source decoder utilizes iterative schemes, and performs well even when the correlation between the sources is not known in the decoder, since it can be estimated jointly with the iterative decoding process. | Abstract of query paper | Cite abstracts |
30315 | 30314 | We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol. | X and Y are random variables. Person P_x knows X, Person P_y knows Y, and both know the joint probability distribution of the pair (X,Y). Using a predetermined protocol, they communicate over a binary error-free channel in order for P_y to learn X. P_x may or may not learn Y. It is determined how many information bits must be transmitted (by both persons) on the average. The results show that, when the arithmetic average number of bits is considered, there is no asymptotic advantage to P_x knowing Y in advance and four messages are asymptotically optimum. By contrast, for the worst-case number of bits, communication can be significantly reduced if P_x knows Y in advance, and it is not known whether a constant number of messages is asymptotically optimum. This paper proposes a practical coding scheme for the Slepian-Wolf problem of separate encoding of correlated sources. Finite-state machine (FSM) encoders, concatenated in parallel, are used at the transmit side and an iterative turbo decoder is applied at the receiver. Simulation results of system performance are presented for binary sources with different amounts of correlation. Obtained results show that the proposed technique outperforms by far both an equivalent uncoded system and a system coded with traditional (non-concatenated) FSM coding. We address the problem of compressing correlated distributed sources, i.e., correlated sources which are not co-located or which cannot cooperate to directly exploit their correlation. We consider the related problem of compressing a source which is correlated with another source that is available only at the decoder. This problem has been studied in the information theory literature under the name of the Slepian-Wolf (1973) source coding problem for the lossless coding case, and as "rate-distortion with side information" for the lossy coding case. We provide a constructive practical framework based on algebraic trellis codes dubbed DIstributed Source Coding Using Syndromes (DISCUS), which is applicable in a variety of settings. Simulation results are presented for source coding of independent and identically distributed (i.i.d.)
Gaussian sources with side information available at the decoder in the form of a noisy version of the source to be coded. Our results reveal the promise of this approach: using trellis-based quantization and coset construction, the performance of the proposed approach is 2-5 dB from the Wyner-Ziv (1976) bound. In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies. It relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is the distributed source coding (DSC), which refers to the compression of the multiple correlated sensor outputs that does not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article has presented an intensive discussion on two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. The Slepian and Wolf coding have theoretically shown that separate encoding is as efficient as joint coding for lossless compression in channel coding. We propose the use of punctured turbo codes for compression of correlated binary sources. Compression is achieved because of puncturing. The resulting performance is close to the theoretical limit provided by the Slepian-Wolf (1973) theorem. No information about the correlation between sources is required in the encoding process. The proposed source decoder utilizes iterative schemes, and performs well even when the correlation between the sources is not known in the decoder, since it can be estimated jointly with the iterative decoding process. | Abstract of query paper | Cite abstracts |
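The DISCUS idea above (transmit only a syndrome and let side information at the decoder pick out the right coset member) can be illustrated with a toy linear code in place of the paper's trellis constructions. A minimal sketch, assuming a (7,4) Hamming code and a correlation model where the side information differs from the source block in at most one bit:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code over GF(2).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(v):
    return H.dot(v) % 2

def encode(x):
    # Sensor side: transmit a 3-bit syndrome instead of the 7-bit block.
    return syndrome(x)

def decode(s, y):
    # Side information y is assumed to differ from x in at most one bit.
    # The syndrome difference identifies the coset leader (error pattern).
    diff = (syndrome(y) + s) % 2
    if not diff.any():
        return y.copy()
    for e in np.eye(7, dtype=np.uint8):
        if np.array_equal(syndrome(e), diff):
            return (y + e) % 2
    raise ValueError("correlation model violated: more than one bit differs")

x = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
y = x.copy(); y[4] ^= 1                          # correlated side information
assert np.array_equal(decode(encode(x), y), x)   # 3 bits sent, 7 recovered
```

The compression rate (3 bits for 7) and the tolerable correlation noise are tied to the code's error-correcting capability; tuning that trade-off is what the trellis-coded DISCUS constructions are for.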
30316 | 30315 | We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering-based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol. | X and Y are random variables. Person P_x knows X, Person P_y knows Y, and both know the joint probability distribution of the pair (X,Y). Using a predetermined protocol, they communicate over a binary error-free channel in order for P_y to learn X. P_x may or may not learn Y. It is determined how many information bits must be transmitted (by both persons) on the average. The results show that, when the arithmetic average number of bits is considered, there is no asymptotic advantage to P_x knowing Y in advance, and four messages are asymptotically optimum. By contrast, for the worst-case number of bits, communication can be significantly reduced if P_x knows Y in advance, and it is not known whether a constant number of messages is asymptotically optimum. We propose a novel approach to reducing energy consumption in sensor networks using a distributed adaptive signal processing framework and an efficient algorithm. While the topic of energy-aware routing to alleviate energy consumption in sensor networks has received attention recently [C. Toh, IEEE Commun. Mag. June (2001) 138; R. Shah, J. Rabaey, Proc. IEEE WCNC, March 2002], in this paper, we propose an orthogonal approach to complement previous methods. Specifically, we propose a distributed way of continuously exploiting existing correlations in sensor data based on adaptive signal processing and distributed source coding principles. Our approach enables sensor nodes to blindly compress their readings with respect to one another without the need for explicit and energy-expensive inter-sensor communication to effect this compression. Furthermore, the distributed algorithm used by each sensor node is extremely low in complexity and easy to implement (i.e., one modulo operation), while an adaptive filtering framework is used at the data gathering unit to continuously learn the relevant correlation structures in the sensor data. Applying the algorithm to testbed data resulted in energy savings of 10–65% for a multitude of sensor modalities. | Abstract of query paper | Cite abstracts |
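The "one modulo operation" compression in the last abstract has a compact numeric core. The sketch below is a hedged illustration of that principle only; the paper's adaptive filter at the gathering unit is replaced here by a simple previous-value predictor, an assumption made for brevity:

```python
def compress(x, M):
    # Sensor side: blind compression with a single modulo operation.
    return x % M

def reconstruct(r, prediction, M):
    # Gathering side: recovers x exactly whenever |x - prediction| < M/2.
    return prediction + ((r - prediction + M // 2) % M - M // 2)

# Slowly varying readings: sending x % 16 costs 4 bits per sample.
readings = [1000, 1003, 999, 1004, 1001, 1006]
M, pred, recovered = 16, 1000, []
for x in readings:
    pred = reconstruct(compress(x, M), pred, M)  # previous-value predictor
    recovered.append(pred)
print(recovered == readings)   # True: exact recovery, no inter-sensor talk
```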
30317 | 30316 | We are concerned with the problem of maximizing the worst-case lifetime of a data-gathering wireless sensor network consisting of a set of sensor nodes directly communicating with a base-station. We propose to solve this problem by modeling sensor node and base-station communication as the interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). We provide practical and scalable interactive communication protocols for data gathering in sensor networks and demonstrate their efficiency compared to traditional approaches. In this paper, we first develop a formalism to address the problem of worst-case interactive communication between a set of multiple correlated informants and a recipient. We realize that there can be different objectives to achieve in such a communication scenario and compute the optimal number of messages and bits exchanged to realize these objectives. Then, we propose to adapt these results in the context of single-hop data-gathering sensor networks. Finally, based on this proposed formalism, we propose a clustering-based communication protocol for large sensor networks and demonstrate its superiority over a traditional clustering protocol. | The reduction in communication achievable by interaction is investigated. The model assumes two communicators: an informant having a random variable X, and a recipient having a possibly dependent random variable Y. Both communicators want the recipient to learn X with no probability of error, whereas the informant may or may not learn Y. To that end, they alternate in transmitting messages comprising finite sequences of bits. Messages are transmitted over an error-free channel and are determined by an agreed-upon, deterministic protocol for (X,Y) (i.e., a protocol for transmitting X to a person who knows Y). A two-message protocol is described, and its worst-case performance is investigated. We are concerned with maximizing the lifetime of a data-gathering wireless sensor network consisting of a set of nodes directly communicating with a base-station. We model this scenario as the m-message interactive communication between multiple correlated informants (sensor nodes) and a recipient (base-station). With this framework, we show that m-message interactive communication can indeed enhance network lifetime. Both worst-case and average-case performances are considered. | Abstract of query paper | Cite abstracts |
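The worst-case gap that interaction tries to close can be made concrete by computing ambiguity sets for a toy joint distribution. This is only an illustrative calculation under an assumed support, not a protocol: it compares the bits the informant needs with no knowledge of Y against the hypothetical case where Y is known in advance.

```python
import math
from collections import defaultdict

def worst_case_bits(support):
    """support: set of (x, y) pairs with positive probability."""
    xs = {x for x, _ in support}
    by_y = defaultdict(set)
    for x, y in support:
        by_y[y].add(x)
    naive = math.ceil(math.log2(len(xs)))        # informant ignores Y
    informed = max(math.ceil(math.log2(len(s)))  # informant told Y first
                   for s in by_y.values())
    return naive, informed

# Toy correlation: y is x with the lowest of its 3 bits erased.
support = {(x, x >> 1) for x in range(8)}
print(worst_case_bits(support))   # (3, 1): knowing Y would save 2 bits
```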
30318 | 30317 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | A data warehouse is a repository (database) that integrates information extracted from various remote sources, with the purpose of efficiently supporting decision support queries. The information stored at the warehouse is in the form of database tables, referred to as materialized views, derived from the data in the sources. In order to keep a materialized view consistent with the data at sources, the view needs to be incrementally maintained. The two important issues that arise in the design of a data warehouse are selection of views to materialize and incremental maintenance of materialized views. This doctoral thesis looks at these two design issues and presents comprehensive solutions to both problems. Selection of views to materialize. We develop a theoretical framework for the general problem of selection of views in a data warehouse. Given a set of queries to be supported, the view selection problem is to select a set of views to materialize minimizing the query response time given some resource constraint. For different resource constraints and settings, we have designed approximation algorithms that provably return a set of views having a query benefit within a constant factor of the optimal. Incremental maintenance of general view expressions. Traditional maintenance algorithms maintain view expressions in response to changes at the base relations by computing and propagating insertions and deletions through intermediate subexpressions. In this thesis, we have developed a change-table technique, that computes and propagates “change-tables” through subexpressions, for incremental maintenance of general view expressions involving aggregate and outerjoin operators. We show that the presented change-table technique outperforms the previously proposed techniques by orders of magnitude. Dwarf is a highly compressed structure for computing, storing, and querying data cubes. Dwarf identifies prefix and suffix structural redundancies and factors them out by coalescing their store. Prefix redundancy is high on dense areas of cubes but suffix redundancy is significantly higher for sparse areas. Putting the two together fuses the exponential sizes of high dimensional full cubes into a dramatically condensed data structure. The elimination of suffix redundancy has an equally dramatic reduction in the computation of the cube because recomputation of the redundant suffixes is avoided. This effect is multiplied in the presence of correlation amongst attributes in the cube. 
A petabyte 25-dimensional cube was shrunk this way to a 2.3GB Dwarf Cube, in less than 20 minutes, a 1:400000 storage reduction ratio. Still, Dwarf provides 100% precision on cube queries and is a self-sufficient structure which requires no access to the fact table. What makes Dwarf practical is the automatic discovery, in a single pass over the fact table, of the prefix and suffix redundancies without user involvement or knowledge of the value distributions. This paper describes the Dwarf structure and the Dwarf cube construction algorithm. Further optimizations are then introduced for improving clustering and query performance. Experiments with the current implementation include comparisons on detailed measurements with real and synthetic datasets against previously published techniques. The comparisons show that Dwarfs by far outperform these techniques on all counts: storage space, creation time, query response time, and updates of cubes. | Abstract of query paper | Cite abstracts |
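Dwarf's suffix coalescing, i.e., storing structurally identical sub-cubes only once, can be sketched with a hash-consed trie. This is a stripped-down illustration under strong simplifying assumptions (no measures and no aggregate cells, both of which real Dwarf cubes also handle):

```python
LEAF = object()

def build(rows, depth, pool, stats):
    """Build one trie node over sorted tuples; identical sub-tries are
    interned in pool, so each distinct suffix is stored only once."""
    if depth == len(rows[0]):
        return LEAF
    children, i = {}, 0
    while i < len(rows):
        j = i
        while j < len(rows) and rows[j][depth] == rows[i][depth]:
            j += 1
        children[rows[i][depth]] = build(rows[i:j], depth + 1, pool, stats)
        i = j
    key = tuple(sorted((v, id(c)) for v, c in children.items()))
    stats["logical"] += 1
    if key not in pool:
        pool[key] = children
        stats["stored"] += 1
    return pool[key]

# Both stores share the same (product, month) suffix structure.
rows = sorted([("s1", "p1", "jan"), ("s1", "p2", "feb"),
               ("s2", "p1", "jan"), ("s2", "p2", "feb")])
pool, stats = {}, {"logical": 0, "stored": 0}
build(rows, 0, pool, stats)
print(stats)   # {'logical': 7, 'stored': 4}: suffixes coalesced
```

On sparse, correlated cubes this logical-to-stored ratio grows quickly, which is where the dramatic reductions reported in the abstract come from.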
30319 | 30318 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | A data warehouse stores materialized views of aggregate data derived from a fact table in order to minimize the query response time. One of the most important decisions in designing the data warehouse is the selection of materialized views. This paper presents an algorithm which provides appropriate views to be materialized, with the goal of minimizing the query response time and maintenance cost. We use a data cube lattice, frequency of queries and updates on views, and view size to select views to be materialized using greedy algorithms. In spite of its simplicity, our algorithm selects views which give us better performance than the views selected by existing algorithms. In this paper, we describe the design of a data warehousing system for an engineering company 'R'. A cost model was developed for this system to enable the evaluation of the total costs and benefits involved in selecting each materialized view. Using the cost analysis methodology for evaluation, an adapted greedy algorithm has been implemented for the selection of materialized views. The algorithm and cost model were applied to a set of real-life database items extracted from company 'R'. By selecting the most cost-effective set of materialized summary views, the total of the maintenance, storage and query costs of the system is optimized, thereby resulting in an efficient data warehousing system. A data warehouse uses multiple materialized views to efficiently process a given set of queries. These views are accessed by read-only queries and need to be maintained after updates to base tables. Due to the space constraint and maintenance cost constraint, the materialization of all views is not possible. Therefore, a subset of views needs to be selected to be materialized. The problem is NP-hard; therefore, exhaustive search is infeasible. In this paper, we design a View Relevance Driven Selection (VRDS) algorithm based on view relevance to select views. We take into consideration the query processing cost and the view maintenance cost. Our experimental results show that our heuristic reduces the total processing cost, which is the sum of query processing cost and view maintenance cost. Finally, we compare our results against a popular greedy algorithm. A data warehouse stores materialized views of data from one or more sources, with the purpose of efficiently implementing decision-support or OLAP queries. One of the most important decisions in designing a data warehouse is the selection of materialized views to be maintained at the warehouse.
The goal is to select an appropriate set of views that minimizes total query response time and the cost of maintaining the selected views, given a limited amount of resource, e.g., materialization time, storage space, etc. In this work, we have developed a theoretical framework for the general problem of selection of views in a data warehouse. We present polynomial-time heuristics for a selection of views to optimize total query response time under a disk-space constraint, for some important special cases of the general data warehouse scenario, viz.: 1) an AND view graph, where each query view has a unique evaluation, e.g., when a multiple-query optimizer can be used to generate a global evaluation plan for the queries, and 2) an OR view graph, in which any view can be computed from any one of its related views, e.g., data cubes. We present proofs showing that the algorithms are guaranteed to provide a solution that is fairly close to (within a constant factor ratio of) the optimal solution. We extend our heuristic to the general AND-OR view graphs. Finally, we address in detail the view-selection problem under the maintenance cost constraint and present provably competitive heuristics. Pre-computation and materialization of views with aggregate functions is a common technique in Data Warehouses. Due to the complex structure of the warehouse and the different profiles of the users who submit queries, there is a need for tools that will automate the selection and management of the materialized data. In this paper we present DynaMat, a system that dynamically materializes information at multiple levels of granularity in order to match the demand (workload) but also takes into account the maintenance restrictions for the warehouse, such as down time to update the views and space availability. DynaMat unifies the view selection and the view maintenance problems under a single framework using a novel “goodness” measure for the materialized views. DynaMat constantly monitors incoming queries and materializes the best set of views subject to the space constraints. During updates, DynaMat reconciles the current materialized view selection and refreshes the most beneficial subset of it within a given maintenance window. We compare DynaMat against a system that is given all queries in advance and the pre-computed optimal static view selection. The comparison is made based on a new metric, the Detailed Cost Savings Ratio introduced for quantifying the benefits of view materialization against incoming queries. These experiments show that DynaMat's dynamic view selection outperforms the optimal static view selection and thus, any sub-optimal static algorithm that has appeared in the literature. The goal of on-line analytical processing (OLAP) is to quickly answer queries from large amounts of data residing in a data warehouse. Materialized view selection is an optimization problem encountered in OLAP systems. Published work on the problem of materialized view selection presents solutions scalable in the number of possible views. However, the number of possible views is exponential relative to the number of database dimensions. A truly scalable solution must be polynomial time relative to the number of dimensions. We present such a solution, our Polynomial Greedy Algorithm. Complexity analysis proves scalability, and a performance study verifies the result. Empirical evidence demonstrates benefits close to existing algorithms.
We conclude that the Polynomial Greedy Algorithm functions effectively where existing algorithms fail dramatically. A multidimensional database is a data repository that supports the efficient execution of complex business decision queries. Query response can be significantly improved by storing an appropriate set of materialized views. These views are selected from the multidimensional lattice whose elements represent the solution space of the problem. Several techniques have been proposed in the past to perform the selection of materialized views for databases with a reduced number of dimensions. When the number and complexity of dimensions increase, the proposed techniques do not scale well. The technique we are proposing reduces the solution space by considering only the relevant elements of the multidimensional lattice. An additional statistical analysis allows a further reduction of the solution space. | Abstract of query paper | Cite abstracts |
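Most of the greedy selectors in these abstracts descend from the benefit-per-view greedy over the cube lattice. The sketch below is a minimal version of that style of algorithm; the sizes and the answers relation are invented toy inputs, not data from any of the cited systems:

```python
def greedy_select(sizes, answers, k, top):
    """Pick up to k extra views. answers[w] = views answerable from w;
    a query on v costs the size of the smallest materialized view covering v."""
    materialized = {top}
    def cost(v):
        return min(sizes[m] for m in materialized if v in answers[m])
    for _ in range(k):
        def benefit(w):
            return sum(max(0, cost(v) - sizes[w]) for v in answers[w])
        candidates = [w for w in sizes if w not in materialized]
        if not candidates:
            break
        best = max(candidates, key=benefit)
        if benefit(best) <= 0:
            break
        materialized.add(best)
    return materialized

# Tiny lattice on dimensions (A, B); the apex view "AB" is always kept.
sizes = {"AB": 100, "A": 20, "B": 30, "none": 1}
answers = {"AB": {"AB", "A", "B", "none"}, "A": {"A", "none"},
           "B": {"B", "none"}, "none": {"none"}}
print(greedy_select(sizes, answers, k=2, top="AB"))  # {'AB', 'A', 'B'}
```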
30320 | 30319 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | This article presents a method for adaptively representing multidimensional data cubes using wavelet view elements in order to more efficiently support data analysis and querying involving aggregations. The proposed method decomposes the data cubes into an indexed hierarchy of wavelet view elements. The view elements differ from traditional data cube cells in that they correspond to partial and residual aggregations of the data cube. The view elements provide highly granular building blocks for synthesizing the aggregated and range-aggregated views of the data cubes. We propose a strategy for selectively materializing alternative sets of view elements based on the patterns of access of views. We present a fast and optimal algorithm for selecting a non-expansive set of wavelet view elements that minimizes the average processing cost for supporting a population of queries of data cube views. We also present a greedy algorithm for allowing the selective materialization of a redundant set of view element sets which, for measured increases in storage capacity, further reduces processing costs. Experiments and analytic results show that the wavelet view element framework performs better in terms of lower processing and storage cost than previous methods that materialize and store redundant views for online analytical processing (OLAP). | Abstract of query paper | Cite abstracts |
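As a hedged, one-dimensional toy of the wavelet view element idea (the paper's indexed hierarchy, selection strategy, and multidimensional setting are all omitted), the Haar transform below rewrites a measure array as averages plus detail coefficients; dropping the fine details still answers aggregate queries, only approximately:

```python
import numpy as np

def haar_decompose(v):
    """Split v (length a power of two) into a global average plus
    per-level detail coefficients (coarsest level first)."""
    levels, v = [], v.astype(float)
    while v.size > 1:
        levels.append((v[0::2] - v[1::2]) / 2)
        v = (v[0::2] + v[1::2]) / 2
    return v, levels[::-1]

def haar_reconstruct(avg, levels):
    v = avg
    for det in levels:
        out = np.empty(2 * v.size)
        out[0::2], out[1::2] = v + det, v - det
        v = out
    return v

data = np.array([4, 6, 5, 5, 9, 11, 10, 10])
avg, levels = haar_decompose(data)
levels[-1][:] = 0          # drop the finest "residual" view elements
approx = haar_reconstruct(avg, levels)
print(data.sum(), approx.sum())   # 60 60: aggregates survive exactly
```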
30321 | 30320 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | View materialization and indexing are the most effective techniques adopted in data warehouses to improve query performance. Since both materialization and indexing algorithms are driven by a constraint on the disk space made available for each, the designer would greatly benefit from being enabled to determine a priori which fractions of the global space available must be devoted to views and indexes, respectively, in order to optimally tune performances. In this paper we first present a comparative evaluation of the benefit (saving per disk page) brought by view materialization and indexing for a single query expressed on a star scheme. Then, we face the problem of determining an effective trade-off between the two space fractions for the core workload of the warehouse. Some experimental results are reported, which prove that the estimated trade-off is satisfactorily near to the optimal one. Materialized views can provide massive improvements in query processing time, especially for aggregation queries over large tables. To realize this potential, the query optimizer must know how and when to exploit materialized views. This paper presents a fast and scalable algorithm for determining whether part or all of a query can be computed from materialized views and describes how it can be incorporated in transformation-based optimizers. The current version handles views composed of selections, joins and a final group-by. Optimization remains fully cost based, that is, a single “best” rewrite is not selected by heuristic rules but multiple rewrites are generated and the optimizer chooses the best alternative in the normal way. Experimental results based on an implementation in Microsoft SQL Server show outstanding performance and scalability. Optimization time increases slowly with the number of views but remains low even up to a thousand. Recently, multi-query optimization techniques have been considered beneficial in the view selection setting. The main interest of such techniques lies in detecting common subexpressions between the different queries of the workload. This feature can be exploited for sharing updates and storage space. However, due to this reuse, a query change may entail an important reorganization of the multi-query graph. In this paper, we present an approach that is based on multi-query optimization for view selection and that attempts to reduce the drawbacks resulting from these techniques. Finally, we present a performance study using workloads consisting of queries over the schema of the TPC-H benchmark.
This study shows that our view selection provides significant benefits over the other approaches. | Abstract of query paper | Cite abstracts |
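Common-subexpression detection, the engine of the multi-query approach above, can be approximated very naively by intersecting query descriptions. A sketch under the strong assumption that a query is just a set of (relation, predicate) atoms; real multi-query optimizers work on operator DAGs instead:

```python
from itertools import combinations

def shared_subexpressions(queries, min_queries=2):
    """queries: name -> set of (relation, predicate) atoms. Returns
    intersections shared by at least min_queries queries."""
    candidates = {}
    for (n1, q1), (n2, q2) in combinations(queries.items(), 2):
        common = frozenset(q1 & q2)
        if common:
            candidates.setdefault(common, set()).update({n1, n2})
    return {c: qs for c, qs in candidates.items() if len(qs) >= min_queries}

queries = {
    "q1": {("sales", "year=2024"), ("product", "cat='tv'")},
    "q2": {("sales", "year=2024"), ("store", "region='eu'")},
    "q3": {("sales", "year=2024"), ("product", "cat='tv'")},
}
for expr, users in shared_subexpressions(queries).items():
    print(sorted(users), sorted(expr))   # candidate views worth sharing
```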
30322 | 30321 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000. | Abstract of query paper | Cite abstracts |
30323 | 30322 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | This paper describes the concepts used in the implementation of DBDSGN, an experimental physical design tool for relational databases developed at the IBM San Jose Research Laboratory. Given a workload for System R (consisting of a set of SQL statements and their execution frequencies), DBDSGN suggests physical configurations for efficient performance. Each configuration consists of a set of indices and an ordering for each table. Workload statements are evaluated only for atomic configurations of indices, which have only one index per table. Costs for any configuration can be obtained from those of the atomic configurations. DBDSGN uses information supplied by the System R optimizer both to determine which columns might be worth indexing and to obtain estimates of the cost of executing statements in different configurations. The tool finds efficient solutions to the index-selection problem; if we assume the cost estimates supplied by the optimizer are the actual execution costs, it finds the optimal solution. Optionally, heuristics can be used to reduce execution time. The approach taken by DBDSGN in solving the index-selection problem for multiple-table statements significantly reduces the complexity of the problem. DBDSGN's principles were used in the Relational Design Tool (RDT), an IBM product based on DBDSGN, which performs design for SQL/DS, a relational system based on System R. System R actually uses DBDSGN's suggested solutions as the tool expects because cost estimates and other necessary information can be obtained from System R using a new SQL statement, the EXPLAIN statement. This illustrates how a system can export a model of its internal assumptions and behavior so that other systems (such as tools) can share this model. This paper introduces the concept of letting an RDBMS optimizer optimize its own environment. In our project, we have used the DB2 optimizer to tackle the index selection problem, a variation of the knapsack problem. This paper discusses our implementation of index recommendation, the user interface, and provides measurements on the quality of the recommended indexes. Data warehouses collect copies of information from remote sources into a single database. Since the remote data is cached at the warehouse, it appears as local relations to the users of the warehouse. To improve query response time, the warehouse administrator will often materialize views defined on the local relations to support common or complicated queries.
Unfortunately, the requirement to keep the views consistent with the local relations creates additional overhead when the remote sources change. The warehouse is often kept only loosely consistent with the sources: it is periodically refreshed with changes sent from the source. When this happens, the warehouse is taken off-line until the local relations and materialized views can be updated. Clearly, the users would prefer as little down time as possible. Often the down time can be reduced by adding carefully selected materialized views or indexes to the physical schema. This paper studies how to select the sets of supporting views and of indexes to materialize to minimize the down time. We call this the view index selection (VIS) problem. We present an A* search based solution to the problem as well as rules of thumb. We also perform additional experiments to understand the space-time tradeoff as it applies to data warehouses. On-line analytical processing (OLAP) is a recent and important application of database systems. Typically, OLAP data is presented as a multidimensional "data cube." OLAP queries are complex and can take many hours or even days to run, if executed directly on the raw data. The most common method of reducing execution time is to precompute some of the queries into summary tables (subcubes of the data cube) and then to build indexes on these summary tables. In most commercial OLAP systems today, the summary tables that are to be precomputed are picked first, followed by the selection of the appropriate indexes on them. A trial-and-error approach is used to divide the space available between the summary tables and the indexes. This two-step process can perform very poorly. Since both summary tables and indexes consume the same resource-space-their selection should be done together for the most efficient use of space. The authors give algorithms that automate the selection of summary tables and indexes. In particular, they present a family of algorithms of increasing time complexities, and prove strong performance bounds for them. The algorithms with higher complexities have better performance bounds. However, the increase in the performance bound is diminishing, and they show that an algorithm of moderate complexity can perform fairly close to the optimal. Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000. This paper considers the problem of minimizing the response time for a given database workload by a proper choice of indexes. This problem is NP-hard and known in the literature as the Index Selection Problem (ISP). We propose a genetic algorithm (GA) for solving the ISP. 
Computational results of the GA on standard ISP instances are compared to a branch-and-cut method and its initialisation heuristics and two state-of-the-art MIP solvers: CPLEX and OSL. These results indicate good performance, reliability and efficiency of the proposed approach. DINNER is a knowledge-based tool that assists the administrator of a relational database in the selection of index configurations. Given a set of tables, their statistical properties, and a set of queries on these tables, DINNER recommends an index configuration that includes for each table a primary index and a set of secondary indexes. Although it was proved that the problem is NP-hard, DINNER is capable of handling a practically useful number of queries (10 queries, half of which are join queries). The database can perform a query with several possible access paths, according to the available indexes. DINNER builds for each query a graph that represents the set of possible solutions (indexes) to be used by the query. For each solution it finds the access path that the database is most likely to choose to perform the query, and estimates the time it takes to perform the access path. In order to find the possible indexes, the access paths that use these indexes, and the time cost of the access paths, DINNER uses knowledge that was elicited from several sources (database administrator (DBA), literature), and represented using several AI representation methods (frames, rules, and logic). After finding the possible indexes, DINNER uses several heuristics to eliminate bad solutions. Then, solutions for all queries are generated. These solutions are searched to find the best one, which is the recommendation given by DINNER. The validity of DINNER's recommendations was demonstrated in three ways: examination of a detailed example, a demonstration of the changes in the recommendations as the input is changed, and the evaluation of an expert DBA. We present a novel approach for a tool that assists the database administrator in designing an index configuration for a relational database system. A new methodology for collecting usage statistics at run time is developed which lets the optimizer estimate query execution costs for alternative index configurations. Defining the workload specification required by existing index design tools may be very complex for a large integrated database system. Our tool automatically derives the workload statistics. These statistics are then used to efficiently compute an index configuration. Execution of a prototype of the tool against a sample database demonstrates that the proposed index configuration is reasonably close to the optimum for test query sets. We study the index selection problem: Given a workload consisting of SQL statements on a database, and a user-specified storage constraint, recommend a set of indexes that have the maximum benefit for the given workload. We present a formal statement for this problem and show that it is computationally "hard" to solve or even approximate it. We develop a new algorithm for the problem which is based on treating the problem as a knapsack problem. The novelty of our approach lies in an LP (linear programming) based method that assigns benefits to individual indexes. For a slightly modified algorithm that does more work, we prove that we can give instance-specific guarantees about the quality of our solution. We conduct an extensive experimental evaluation of this new heuristic and compare it with previous solutions.
Our results demonstrate that our solution is more scalable while achieving comparable quality. | Abstract of query paper | Cite abstracts |
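Once per-index benefits have been assigned, as the LP-based method in the last abstract does, selection under a storage budget reduces to the classic 0/1 knapsack. A sketch with the standard dynamic program; the numbers are invented, and the additive-benefit assumption is exactly what real index interactions violate:

```python
def select_indexes(candidates, budget):
    """candidates: (name, size_in_pages, benefit) triples; budget in pages.
    Classic 0/1 knapsack DP over the storage budget."""
    best = [(0.0, frozenset())] * (budget + 1)
    for name, size, benefit in candidates:
        for b in range(budget, size - 1, -1):   # reverse: each index used once
            value = best[b - size][0] + benefit
            if value > best[b][0]:
                best[b] = (value, best[b - size][1] | {name})
    return best[budget]

cands = [("idx_a", 40, 90.0), ("idx_b", 60, 100.0), ("idx_c", 50, 80.0)]
print(select_indexes(cands, budget=100))   # (190.0, {'idx_a', 'idx_b'})
```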
30324 | 30323 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | In this paper we describe novel techniques that make it possible to build an industrial-strength tool for automating the choice of indexes in the physical design of a SQL database. The tool takes as input a workload of SQL queries, and suggests a set of suitable indexes. We ensure that the indexes chosen are effective in reducing the cost of the workload by keeping the index selection tool and the query optimizer "in step". The number of index sets that must be evaluated to find the optimal configuration is very large. We reduce the complexity of this problem using three techniques. First, we remove a large number of spurious indexes from consideration by taking into account both query syntax and cost information. Second, we introduce optimizations that make it possible to cheaply evaluate the “goodness” of an index set. Third, we describe an iterative approach to handle the complexity arising from multicolumn indexes. The tool has been implemented on Microsoft SQL Server 7.0. We performed extensive experiments over a range of workloads, including TPC-D. The results indicate that the tool is efficient and its choices are close to optimal. An important problem in the physical design of databases is the selection of secondary indices. In general, this problem cannot be solved in an optimal way due to the complexity of the selection process. Often use is made of heuristics such as the well-known ADD and DROP algorithms. In this paper it will be shown that frequently used cost functions can be classified as super- or submodular functions. For these functions several mathematical properties have been derived which reduce the complexity of the index selection problem. These properties will be used to develop a tool for physical database design and also give a mathematical foundation for the success of the before-mentioned ADD and DROP algorithms. This paper introduces the concept of letting an RDBMS optimizer optimize its own environment. In our project, we have used the DB2 optimizer to tackle the index selection problem, a variation of the knapsack problem. This paper discusses our implementation of index recommendation, the user interface, and provides measurements on the quality of the recommended indexes. We present a novel approach for a tool that assists the database administrator in designing an index configuration for a relational database system. A new methodology for collecting usage statistics at run time is developed which lets the optimizer estimate query execution costs for alternative index configurations.
Defining the workload specification required by existing index design tools may be very complex for a large integrated database system. Our tool automatically derives the workload statistics. These statistics are then used to efficiently compute an index configuration. Execution of a prototype of the tool against a sample database demonstrates that the proposed index configuration is reasonably close to the optimum for test query sets. | Abstract of query paper | Cite abstracts |
30325 | 30324 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | In this paper we describe novel techniques that make it possible to build an industrial-strength tool for automating the choice of indexes in the physical design of a SQL database. The tool takes as input a workload of SQL queries, and suggests a set of suitable indexes. We ensure that the indexes chosen are effective in reducing the cost of the workload by keeping the index selection tool and the query optimizer "in step". The number of index sets that must be evaluated to find the optimal configuration is very large. We reduce the complexity of this problem using three techniques. First, we remove a large number of spurious indexes from consideration by taking into account both query syntax and cost information. Second, we introduce optimizations that make it possible to cheaply evaluate the “goodness” of an index set. Third, we describe an iterative approach to handle the complexity arising from multicolumn indexes. The tool has been implemented on Microsoft SQL Server 7.0. We performed extensive experiments over a range of workloads, including TPC-D. The results indicate that the tool is efficient and its choices are close to optimal. An important problem in the physical design of databases is the selection of secondary indices. In general, this problem cannot be solved in an optimal way due to the complexity of the selection process. Often use is made of heuristics such as the well-known ADD and DROP algorithms. In this paper it will be shown that frequently used cost functions can be classified as super- or submodular functions. For these functions several mathematical properties have been derived which reduce the complexity of the index selection problem. These properties will be used to develop a tool for physical database design and also give a mathematical foundation for the success of the before-mentioned ADD and DROP algorithms. We present a novel approach for a tool that assists the database administrator in designing an index configuration for a relational database system. A new methodology for collecting usage statistics at run time is developed which lets the optimizer estimate query execution costs for alternative index configurations. Defining the workload specification required by existing index design tools may be very complex for a large integrated database system. Our tool automatically derives the workload statistics. These statistics are then used to efficiently compute an index configuration. 
Execution of a prototype of the tool against a sample database demonstrates that the proposed index configuration is reasonably close to the optimum for test query sets. Developing a tool to support the physical design of relational databases cannot be done without considering the problem of index selection. Generally, the problem is split into a primary and a secondary index selection problem, and the selection is done per table. Whereas much attention has been paid to the selection of secondary indices, relatively little is known about the selection of a primary index and the relation between the two. These are exactly the topics of this paper. | Abstract of query paper | Cite abstracts |
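The ADD heuristic named in these abstracts is short enough to state as code. In this hedged sketch, workload_cost stands for a caller-supplied what-if estimate (a hypothetical stand-in for optimizer-based costing such as EXPLAIN output); the toy cost model below assumes independent per-index savings:

```python
def add_heuristic(candidates, workload_cost, max_indexes):
    """Greedy ADD: repeatedly add the index with the largest estimated
    cost reduction; stop when nothing helps or the limit is reached."""
    config = set()
    current = workload_cost(config)
    while len(config) < max_indexes:
        scored = [(workload_cost(config | {i}), i)
                  for i in candidates if i not in config]
        if not scored:
            break
        best_cost, best = min(scored)
        if best_cost >= current:
            break
        config.add(best)
        current = best_cost
    return config, current

savings = {"ix_date": 30, "ix_cust": 25, "ix_prod": 5}
cost = lambda cfg: 100 - sum(savings[i] for i in cfg)
print(add_heuristic(set(savings), cost, max_indexes=2))
# ({'ix_date', 'ix_cust'}, 45)
```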
30326 | 30325 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | Index selection for relational databases is an important issue which has been researched quite extensively [1–5]. In the literature, in index selection algorithms for relational databases, at most one index is considered as a candidate for each attribute of a relation. However, it is possible that more than one type of index with different storage space requirements may be present as candidates for an attribute. Also, it may not be possible to eliminate locally all but one of the candidate indexes for an attribute due to different benefits and storage space requirements associated with the candidates. Thus, the algorithms available in the literature for optimal index selection may not be used when there are multiple candidates for each attribute and there is a need for a global optimization algorithm in which at most one index can be selected from a set of candidate indexes for an attribute. The problem of index selection in the presence of multiple candidate indexes for each attribute (which we call the multiple choice index selection problem) has not been addressed in the literature. In this paper, we present the multiple choice index selection problem, show that it is NP-hard, and present an algorithm which gives an approximately optimal solution within a user-specified error bound in a logarithmic time order. This paper introduces the concept of letting an RDBMS optimizer optimize its own environment. In our project, we have used the DB2 optimizer to tackle the index selection problem, a variation of the knapsack problem. This paper discusses our implementation of index recommendation, the user interface, and provides measurements on the quality of the recommended indexes. A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near-optimal solution to the index selection problem in polynomial time.
The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments. This paper considers the problem of minimizing the response time for a given database workload by a proper choice of indexes. This problem is NP-hard and known in the literature as the Index Selection Problem (ISP). We propose a genetic algorithm (GA) for solving the ISP. Computational results of the GA on standard ISP instances are compared to a branch-and-cut method and its initialisation heuristics and two state-of-the-art MIP solvers: CPLEX and OSL. These results indicate good performance, reliability and efficiency of the proposed approach. DINNER is a knowledge-based tool that assists the administrator of a relational database in the selection of index configurations. Given a set of tables, their statistical properties, and a set of queries on these tables, DINNER recommends an index configuration that includes for each table a primary index and a set of secondary indexes. Although it was proved that the problem is NP-hard, DINNER is capable of handling a practically useful number of queries (10 queries, half of which are join queries). The database can perform a query with several possible access paths, according to the available indexes. DINNER builds for each query a graph that represents the set of possible solutions (indexes) to be used by the query. For each solution it finds the access path that the database is most likely to choose to perform the query, and estimates the time it takes to perform the access path. In order to find the possible indexes, the access paths that use these indexes, and the time cost of the access paths, DINNER uses knowledge that was elicited from several sources (database administrator (DBA), literature), and represented using several AI representation methods (frames, rules, and logic). After finding the possible indexes, DINNER uses several heuristics to eliminate bad solutions. Then, solutions for all queries are generated. These solutions are searched to find the best one, which is the recommendation given by DINNER. The validity of DINNER's recommendations was demonstrated in three ways: examination of a detailed example, a demonstration of the changes in the recommendations as the input is changed, and the evaluation of an expert DBA. | Abstract of query paper | Cite abstracts |
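A minimal genetic algorithm for the index selection problem, in the spirit of the GA abstract above. The population size, operators, and penalty-style fitness here are illustrative assumptions, not the paper's configuration:

```python
import random

def ga_index_selection(n, fitness, pop_size=40, generations=200,
                       p_mut=0.05, seed=0):
    """Bitstring = candidate index subset; tournament selection,
    uniform crossover, bit-flip mutation. fitness is maximized."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = [g1 if rng.random() < 0.5 else g2
                     for g1, g2 in zip(p1, p2)]
            nxt.append([(not g) if rng.random() < p_mut else g
                        for g in child])
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: additive gains minus a penalty for exceeding the budget.
sizes, gains, budget = [40, 60, 50, 30], [90, 100, 80, 20], 100
def fitness(bits):
    size = sum(s for s, b in zip(sizes, bits) if b)
    return sum(g for g, b in zip(gains, bits) if b) - 10 * max(0, size - budget)
print(ga_index_selection(4, fitness))   # typically [True, True, False, False]
```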
30327 | 30326 | Materialized views and indexes are physical structures for accelerating data access that are casually used in data warehouses. However, these data structures generate some maintenance overhead. They also share the same storage space. Most existing studies about materialized view and index selection consider these structures separately. In this paper, we adopt the opposite stance and couple materialized view and index selection to take view-index interactions into account and achieve efficient storage space sharing. Candidate materialized views and indexes are selected through a data mining process. We also exploit cost models that evaluate the respective benefit of indexing and view materialization, and help select a relevant configuration of indexes and materialized views among the candidates. Experimental results show that our strategy performs better than an independent selection of materialized views and indexes. | View materialization and indexing are the most effective techniques adopted in data warehouses to improve query performance. Since both materialization and indexing algorithms are driven by a constraint on the disk space made available for each, the designer would greatly benefit from being enabled to determine a priori which fractions of the global space available must be devoted to views and indexes, respectively, in order to optimally tune performances. In this paper we first present a comparative evaluation of the benefit (saving per disk page) brought by view materialization and indexing for a single query expressed on a star scheme. Then, we face the problem of determining an effective trade-off between the two space fractions for the core workload of the warehouse. Some experimental results are reported, which prove that the estimated trade-off is satisfactorily near to the optimal one. | Abstract of query paper | Cite abstracts |
30328 | 30327 | Collaborative work on unstructured or semi-structured documents, such as in literature corpora or source code, often involves agreed upon templates containing metadata. These templates are not consistent across users and over time. Rule-based parsing of these templates is expensive to maintain and tends to fail as new documents are added. Statistical techniques based on frequent occurrences have the potential to identify automatically a large fraction of the templates, thus reducing the burden on the programmers. We investigate the case of the Project Gutenberg corpus, where most documents are in ASCII format with preambles and epilogues that are often copied and pasted or manually typed. We show that a statistical approach can solve most cases though some documents require knowledge of English. We also survey various technical solutions that make our approach applicable to large data sets. | Discovering contrasts between collections of data is an important task in data mining. In this paper, we introduce a new type of contrast pattern, called a minimal distinguishing subsequence (MDS). An MDS is a minimal subsequence that occurs frequently in one class of sequences and infrequently in sequences of another class. It is a natural way of representing strong and succinct contrast information between two sequential datasets and can be useful in applications such as protein comparison, document comparison and building sequential classification models. Mining MDS patterns is a challenging task and is significantly different from mining contrasts between relational transactional data. One particularly important type of constraint that can be integrated into the mining process is the maximum gap constraint. We present an efficient algorithm called ConSGapMiner, to mine all MDSs according to a maximum gap constraint. It employs highly efficient bitset and Boolean operations, for powerful gap based pruning within a prefix growth framework. A performance evaluation with both sparse and dense datasets, demonstrates the scalability of ConSGapMiner and shows its ability to mine patterns from high dimensional datasets at low supports. | Abstract of query paper | Cite abstracts |
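The gap-constrained occurrence test at the heart of MDS mining is compact enough to sketch. The brute-force illustration below follows the pattern definition only; ConSGapMiner's bitset machinery and the minimality check are deliberately omitted, so it returns distinguishing patterns that are not necessarily minimal:

```python
def occurs(seq, pat, maxgap):
    """Does pat occur in seq as a subsequence with each gap <= maxgap?"""
    def search(i, start):
        if i == len(pat):
            return True
        # First symbol may match anywhere; later ones within maxgap.
        end = len(seq) if i == 0 else min(len(seq), start + maxgap + 1)
        for j in range(start, end):
            if seq[j] == pat[i] and search(i + 1, j + 1):
                return True
        return False
    return search(0, 0)

def support(db, pat, maxgap):
    return sum(occurs(s, pat, maxgap) for s in db) / len(db)

def distinguishing(pos, neg, alphabet, maxgap, min_pos, max_neg, max_len=3):
    """Prefix-growth enumeration of patterns frequent in pos, rare in neg."""
    out, frontier = [], [()]
    while frontier:
        nxt = []
        for p in frontier:
            for a in alphabet:
                q = p + (a,)
                if support(pos, q, maxgap) < min_pos:
                    continue          # prefix support is anti-monotone: prune
                if support(neg, q, maxgap) <= max_neg:
                    out.append(q)     # contrast found; stop growing this branch
                elif len(q) < max_len:
                    nxt.append(q)
        frontier = nxt
    return out

pos = ["acgta", "agata", "atgca"]
neg = ["ccgcc", "ggcgg"]
print(distinguishing(pos, neg, "acgt", maxgap=1, min_pos=1.0, max_neg=0.0))
```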
30329 | 30328 | Wireless sensor networks are often used for environmental monitoring applications. In this context, sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction. | We address the problem of deterministic oversampling of bandlimited sensor fields in a distributed communication-constrained processing environment, where it is desired for a central intelligent unit to reconstruct the sensor field to maximum pointwise accuracy. We show, using a dither-based sampling scheme, that it is possible to accomplish this using minimal inter-sensor communication with the aid of a multitude of low-precision sensors. Furthermore, we show the feasibility of having a flexible trade-off between the average oversampling rate and the Analog-to-Digital (A/D) quantization precision per sensor sample with respect to achieving exponential accuracy in the number of bits per Nyquist-period, thereby exposing a key underpinning "conservation of bits" principle. That is, we can distribute the bit budget per Nyquist-period along the amplitude-axis (precision of the A/D converter) and space (or time or space-time) using oversampling in an almost arbitrary discrete-valued manner, while retaining the same reconstruction error decay profile. Interestingly, this oversampling is possible in a highly localized communication setting, with only nearest-neighbor communication, making it very attractive for dense sensor networks operating under stringent inter-node communication constraints. Finally, we show how our scheme incorporates security as a by-product due to the presence of an underlying dither signal which can be used as a natural encryption device. The choice of the dither function enhances the security of the network. Wireless sensor networks provide an attractive approach to spatially monitoring environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental trade-off: using higher densities of sensors provides more measurements, higher resolution and better accuracy, but requires more communications and processing. This paper proposes a new approach, called "back-casting," which can significantly reduce communications and energy consumption while maintaining high accuracy. Back-casting operates by first having a small subset of the wireless sensors communicate their information to a fusion center. This provides an initial estimate of the environment being sensed, and guides the allocation of additional network resources. Specifically, the fusion center back-casts information based on the initial estimate to the network at large, selectively activating additional sensor nodes in order to achieve a target error level. The key idea is that the initial estimate can detect correlations in the environment, indicating that many sensors may not need to be activated by the fusion center.
Thus, adaptive sampling can save energy compared to dense, non-adaptive sampling. This method is theoretically analyzed in the context of field estimation and it is shown that the energy savings can be quite significant compared to conventional approaches. For example, when sensing a piecewise smooth field with an array of 100 × 100 sensors, adaptive sampling can reduce the energy consumption by roughly a factor of 10 while providing the same accuracy achievable if all sensors were activated. The purpose of this paper is to develop methods that can reconstruct a bandlimited discrete-time signal from an irregular set of samples at unknown locations. We define a solution to the problem using first a geometric and then an algebraic point of view. We find the locations of the irregular set of samples by treating the problem as a combinatorial optimization problem. We employ an exhaustive method and two descent methods: the random search and cyclic coordinate methods. The numerical simulations were made on three types of irregular sets of locations: random sets; sets with jitter around a uniform set; and periodic nonuniform sets. Furthermore, for the periodic nonuniform set of locations, we develop a fast scheme that reduces the computational complexity of the problem by exploiting the periodic nonuniform structure of the sample locations in the DFT. | Abstract of query paper | Cite abstracts
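In the noiseless bandlimited case, the irregular-sampling problem in the rows above reduces to solving a linear system whose rows are complex exponentials evaluated at the (irregular) sample locations; reconstruction succeeds when that matrix is well conditioned, which is exactly the regime the random-matrix analysis targets. A minimal numpy sketch with made-up sizes:

```python
# Sketch: least-squares reconstruction of a bandlimited signal from
# irregular samples at known locations (illustrative sizes, not the
# cited papers' algorithms).
import numpy as np

n, bandwidth, m = 64, 5, 30          # signal length, one-sided bandwidth, samples
rng = np.random.default_rng(0)

# Random bandlimited signal: only low-pass Fourier coefficients are nonzero.
freqs = np.arange(-bandwidth, bandwidth + 1)
coeffs = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
t = np.sort(rng.choice(n, size=m, replace=False))    # irregular sample locations

# Sampling matrix: complex exponentials evaluated at the sample times.
A = np.exp(2j * np.pi * np.outer(t, freqs) / n)
y = A @ coeffs                                       # observed samples

# Least squares recovers the field exactly here; in general success hinges
# on A being well conditioned, i.e. on m being large enough relative to
# the field bandwidth.
est, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(est, coeffs))                      # True
```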
30330 | 30329 | Wireless sensor networks are often used for environmental monitoring applications. In this context, sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory, and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we give the basis to analytically determine the probability of successful field reconstruction. | This paper describes a distributed, linear-time algorithm for localizing sensor network nodes in the presence of range measurement noise and demonstrates the algorithm on a physical network. We introduce the probabilistic notion of robust quadrilaterals as a way to avoid flip ambiguities that otherwise corrupt localization computations. We formulate the localization problem as a two-dimensional graph realization problem: given a planar graph with approximately known edge lengths, recover the Euclidean position of each vertex up to a global rotation and translation. This formulation is applicable to the localization of sensor networks in which each node can estimate the distance to each of its neighbors, but no absolute position reference such as GPS or fixed anchor nodes is available. We implemented the algorithm on a physical sensor network and empirically assessed its accuracy and performance. Also, in simulation, we demonstrate that the algorithm scales to large networks and handles real-world deployment geometries. Finally, we show how the algorithm supports localization of mobile nodes. Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions. | Abstract of query paper | Cite abstracts
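The sequential Monte Carlo localization scheme above is, at its core, a particle filter: position hypotheses are propagated by a motion model and filtered against seed announcements heard within radio range. Below is a bare-bones sketch; the motion model, radio range, and all parameters are illustrative simplifications, not the cited protocol.

```python
# Bare-bones sequential Monte Carlo localization step (illustrative only):
# particles encode position hypotheses, motion spreads them, and seed
# announcements within radio range r filter out inconsistent ones.
import random, math

def mcl_step(particles, vmax, seeds_heard, r):
    # Prediction: each particle moves up to vmax in a random direction.
    moved = []
    for (x, y) in particles:
        d, a = random.uniform(0, vmax), random.uniform(0, 2 * math.pi)
        moved.append((x + d * math.cos(a), y + d * math.sin(a)))
    # Filtering: keep particles within range r of every seed we heard.
    ok = [p for p in moved
          if all(math.dist(p, s) <= r for s in seeds_heard)]
    return ok or moved   # fall back if every particle was filtered out

particles = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
particles = mcl_step(particles, vmax=5.0, seeds_heard=[(50.0, 50.0)], r=25.0)
print(len(particles))
```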
30331 | 30330 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | In this work, we study the caching performance of Content Centric Networking (CCN), with special emphasis on the size of individual CCN router caches. Specifically, we consider several graph-related centrality metrics (e.g., betweenness, closeness, stress, graph, eccentricity and degree centralities) to allocate content store space heterogeneously across the CCN network, and contrast the performance to that of a homogeneous allocation. Content-Centric Networking (CCN) is a novel network paradigm, which aims at moving from the traditional host-to-host network model to a client-to-content one. This shift brings many benefits but also poses a big challenge for the hardware technologies used to implement CCN nodes. For example, the Pending Interest Table (PIT), one critical component in a CCN node, is involved in every CCN message forwarding process, and needs to be updated every time a packet comes in. The implementation of this table requires a memory that is fast enough for short access times and large enough for storing the pending Interests. Unfortunately, today's memory technologies cannot meet these requirements with the current hash-based architecture. After highlighting this limitation, we present a distributed PIT architecture which is based on the Bloom filter structure. Our evaluations validate that this solution can reduce the PIT size and support higher packet arrival rates, and it allows the component to be implemented in today's fast memory, such as SRAM. Therefore this proposal can improve content fetching performance and the quality of content delivered in CCN networks. The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution.
The primary use of the Internet is content distribution -- the delivery of web pages, audio, and video to client applications -- yet the Internet was never architected for scalable content delivery. The result has been a proliferation of proprietary protocols and ad hoc mechanisms to meet growing content demand. In this paper, we describe a content routing design based on name-based routing as part of an explicit Internet content layer. We claim that this content routing is a natural extension of current Internet directory and routing systems, allows efficient content location, and can be implemented to scale with the Internet. Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls. In content-oriented networking, content files are typically cached in network nodes, and hence how to cache content files is crucial for efficient content delivery and cache storage utilization. In this paper, we propose a content caching scheme, WAVE, in which the number of chunks to be cached is adjusted based on the popularity of the content. In WAVE, an upstream node recommends the number of chunks to be cached at its downstream node, which is exponentially increased as the request count increases. Simulation results reveal that the average hop count of content delivery of WAVE is lower than that of other schemes, and the inter-ISP traffic volume of WAVE is the second lowest (CDN is the lowest). Also, WAVE achieves a higher cache hit ratio and less frequent cache replacements than other on-demand caching strategies. The delivery of video content is expected to gain huge momentum, fueled by the popularity of user-generated clips, growth of VoD libraries, and wide-spread deployment of IPTV services with features such as CatchUp/Pause Live TV and NPVR capabilities. The "time-shifted" nature of these personalized applications defies the broadcast paradigm underlying conventional TV networks, and increases the overall bandwidth demands by orders of magnitude. Caching strategies provide an effective mechanism for mitigating these massive bandwidth requirements by replicating the most popular content closer to the network edge, rather than storing it in a central site. The reduction in the traffic load lessens the required transport capacity and capital expense, and alleviates performance bottlenecks. In the present paper, we develop light-weight cooperative cache management algorithms aimed at maximizing the traffic volume served from cache and minimizing the bandwidth cost. As a canonical scenario, we focus on a cluster of distributed caches, either connected directly or via a parent node, and formulate the content placement problem as a linear program in order to benchmark the globally optimal performance. Under certain symmetry assumptions, the optimal solution of the linear program is shown to have a rather simple structure.
Besides being interesting in its own right, the optimal structure offers valuable guidance for the design of low-complexity cache management and replacement algorithms. We establish that the performance of the proposed algorithms is guaranteed to be within a constant factor of the globally optimal performance, with far more benign worst-case ratios than in prior work, even in asymmetric scenarios. Numerical experiments for typical popularity distributions reveal that the actual performance is far better than the worst-case conditions indicate. Content Centric Networking (CCN) is a promising architecture for the diffusion of popular content over the Internet. While CCN system design is sound, gathering a reliable estimate of its performance in the current Internet is challenging, due to the large scale and to the lack of agreement in some critical elements of the evaluation scenario. In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and, in turn, make more efficient utilisation of available cache resources along a delivery path. Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path. We compare our algorithm against universal caching and against schemes proposed in the past for Web-Caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache evictions by an order of magnitude in comparison to universal caching. | Abstract of query paper | Cite abstracts
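The caching rows above share one mechanism: each router independently decides, per passing chunk, whether to store it, with the decision probability shaped so that redundant copies along the path are reduced. The sketch below is schematic only; the weighting is invented and is not the exact ProbCache (or WAVE) rule.

```python
# Schematic probabilistic in-network caching decision (invented weighting,
# not the published ProbCache formula): cache a passing chunk with a
# probability that grows toward the consumer and scales with our share
# of the path's caching capacity.
import random

def maybe_cache(cache, chunk, hops_from_server, path_len, cache_slots):
    p = (hops_from_server / path_len) * min(1.0, cache_slots / 100.0)
    if random.random() < p:
        cache.add(chunk)

cache = set()
for hop in range(1, 6):                       # chunk traverses a 5-hop path
    maybe_cache(cache, "video/seg42", hop, 5, cache_slots=100)
print(cache)
```

The design point this illustrates is that no coordination messages are needed: each node computes its own probability from information carried with the packet, yet the expected number of copies per path stays bounded.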
30332 | 30331 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | Instrumented environments, such as modern building automation systems (BAS), are becoming commonplace and are increasingly interconnected with (and sometimes by) enterprise networks and the Internet. Regardless of the underlying communication platform, secure control of devices in such environments is a challenging task. The current trend is to move from proprietary communication media and protocols to IP over Ethernet. While the move to IP represents progress, new and different Internet architectures might be better suited for instrumented environments. In this paper, we consider security of instrumented environments in the context of Content-Centric Networking (CCN). In particular, we focus on building automation over Named-Data Networking (NDN), a prominent instance of CCN. After identifying security requirements in a specific BAS sub-domain (lighting control), we construct a concrete NDN-based security architecture, analyze its properties and report on preliminary implementation and experimental results. In doing so, we secure a communication paradigm well outside its claimed forte of content distribution, and at the same time provide a viable (secure and efficient) communication platform for a class of instrumented environments exemplified by lighting control. Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls. Several projects propose an information-centric approach to the network of the future. Such an approach makes efficient content distribution possible by making information retrieval host-independent and integrating storage for caching information into the network. Requests for particular content can, thus, be satisfied by any host or server holding a copy.
The current security model based on host authentication is not applicable in this context. Basic security functionality must instead be attached directly to the data and its naming scheme. We present a naming scheme for content and other objects that enables verification of data integrity as well as owner authentication and identification. The naming scheme is designed for flexibility and extensibility, e.g., to integrate other security properties like access control. At the same time, the naming scheme offers persistent IDs even though the content, the content owner and/or the owner's organizational structure, or the location change. The requirements for the naming scheme and an analysis showing how the proposed scheme fulfills them are presented. Experience with prototyping the naming scheme is also discussed. The naming scheme builds the foundation for a secure information-centric network infrastructure that can also solve some of the main security problems of today's Internet. With the growing realization that current Internet protocols are reaching the limits of their senescence, several on-going research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) - a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs of many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN's resilience to DoS attacks has not been analyzed to date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking. | Abstract of query paper | Cite abstracts
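The naming scheme above binds integrity verification to the name itself. A minimal sketch of that idea follows, with an invented name layout: embed the SHA-256 digest of the content (in practice, of the owner's key or a signed manifest) as a name component, so any node can check a retrieved object without trusting its source.

```python
# Minimal self-certifying name sketch (invented layout): the last name
# component is the SHA-256 digest of the content, so integrity can be
# verified from the name alone, wherever the copy came from.
import hashlib

def make_name(prefix, data):
    return f"{prefix}/{hashlib.sha256(data).hexdigest()}"

def verify(name, data):
    # The final name component must equal the hash of the received bytes.
    return name.rsplit("/", 1)[1] == hashlib.sha256(data).hexdigest()

name = make_name("/example/papers/doc.pdf", b"...content bytes...")
print(verify(name, b"...content bytes..."))   # True
print(verify(name, b"tampered"))              # False
```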
30333 | 30332 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this trade-off, along with open issues to be looked at by the research community. Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls. | Abstract of query paper | Cite abstracts
30334 | 30333 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this trade-off, along with open issues to be looked at by the research community. Proxy caching servers are widely deployed in today's Internet. While cooperation among proxy caches can significantly improve a network's resilience to denial-of-service (DoS) attacks, lack of cooperation can transform such servers into viable DoS targets. In this paper, we investigate a class of pollution attacks that aim to degrade a proxy's caching capabilities, either by ruining the cache file locality, or by inducing false file locality. Using simulations, we propose pollution attacks and evaluate their effects both in Web and peer-to-peer (p2p) scenarios, and reveal dramatic variability in resilience to pollution among several cache replacement policies. We develop efficient methods to detect both false-locality and locality-disruption attacks, as well as a combination of the two. To achieve high scalability for a large number of client requests without sacrificing the detection accuracy, we leverage streaming computation techniques, i.e., Bloom filters and probabilistic counting. Evaluation results from large-scale simulations show that these mechanisms are effective and efficient in detecting and mitigating such attacks. Furthermore, a Squid-based implementation demonstrates that our protection mechanism forces the attacker to launch extremely large distributed attacks in order to succeed. | Abstract of query paper | Cite abstracts
30335 | 30334 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | Named Data Networking architectures have been proposed to improve various shortcomings of the current Internet architecture. A key part of these proposals is the capability of caching arbitrary content in arbitrary network locations. While caching has the potential to improve network performance, the data stored in caches can be seen as transient traces of past communication that attackers can exploit to compromise the users' privacy. With this editorial note, we aim to raise awareness of privacy attacks as an intrinsic and relevant issue in Named Data Networking architectures. Countermeasures against privacy attacks are subject to a trade-off between performance and privacy. We discuss several approaches to countermeasures representing different incarnations of this trade-off, along with open issues to be looked at by the research community. Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls. | Abstract of query paper | Cite abstracts
30336 | 30335 | As the Internet struggles to cope with scalability, mobility, and security issues, new network architectures are being proposed to better accommodate the needs of modern systems and applications. In particular, Content-Oriented Networking (CON) has emerged as a promising next-generation Internet architecture: it sets out to decouple content from hosts, at the network layer, by naming data rather than hosts. CON comes with a potential for a wide range of benefits, including reduced congestion and improved delivery speed by means of content caching, simpler configuration of network devices, and security at the data level. However, it remains an interesting open question whether or not, and to what extent, this emerging networking paradigm bears new privacy challenges. In this paper, we provide a systematic privacy analysis of CON and the common building blocks among its various architectural instances in order to highlight emerging privacy threats, and analyze a few potential countermeasures. Finally, we present a comparison between CON and today's Internet in the context of a few privacy concepts, such as anonymity, censoring, traceability, and confidentiality. | Content-centric networking — also known as information-centric networking (ICN) — shifts emphasis from hosts and interfaces (as in today’s Internet) to data. Named data becomes addressable and routable, while locations that currently store that data become irrelevant to applications. Named Data Networking (NDN) is a large collaborative research effort that exemplifies the content-centric approach to networking. NDN has some innate privacy-friendly features, such as lack of source and destination addresses on packets. However, as discussed in this paper, the NDN architecture prompts some privacy concerns mainly stemming from the semantic richness of names. We examine privacy-relevant characteristics of NDN and present an initial attempt to achieve communication privacy. Specifically, we design an NDN add-on tool, called ANDaNA, that borrows a number of features from Tor. As we demonstrate via experiments, it provides comparable anonymity with lower relative overhead. Network use has evolved to be dominated by content distribution and retrieval, while networking technology still speaks only of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We implemented our architecture's basic features and demonstrate resilience and performance with secure file downloads and VoIP calls. The recent literature has hailed the benefits of content-oriented network architectures. However, such designs pose a threat to privacy by revealing a user's content requests. In this paper, we study how to ameliorate privacy in such designs. We present an approach that does not require any special infrastructure or shared secrets between the publishers and consumers of content. In lieu of any informational asymmetry, the approach leverages computational asymmetry by forcing the adversary to perform sizable computations to reconstruct each request.
This approach does not provide ideal privacy, but makes it hard for an adversary to effectively monitor the content requests of a large number of users. | Abstract of query paper | Cite abstracts |
30337 | 30336 | A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S. | In this paper we propose a way to construct an analytic space over a non-archimedean field, starting with a real manifold with an affine structure which has integral monodromy. Our construction is motivated by the junction of the Homological Mirror conjecture and the geometric Strominger-Yau-Zaslow conjecture. In particular, we glue from “flat pieces” an analytic K3 surface. As a byproduct of our approach we obtain an action of an arithmetic subgroup of the group SO(1, 18) by piecewise-linear transformations on the two-dimensional sphere S^2 equipped with a naturally defined singular affine structure. Contents: Introduction; Piecewise R_S-linear spaces; R-colored polysimplicial sets; R-colored polysimplicial sets of length l; The skeleton of a nondegenerate pluri-stable formal scheme; A colored polysimplicial set associated with a nondegenerate poly-stable fibration; p-Adic analytic and piecewise linear spaces; Strong local contractibility of smooth analytic spaces; Cohomology with coefficients in the sheaf of constant functions. | Abstract of query paper | Cite abstracts
30338 | 30337 | A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S. | An algorithm is given for computing the mixed Hodge structure (more precisely, the Hodge–Deligne numbers) for the cohomology of complete intersections in toric varieties in terms of the Newton polyhedra specifying the complete intersection. In some particular cases the algorithm leads to explicit formulas. We consider mirror symmetry for (essentially arbitrary) hypersurfaces in (possibly noncompact) toric varieties from the perspective of the Strominger-Yau-Zaslow (SYZ) conjecture. Given a hypersurface @math in a toric variety @math we construct a Landau-Ginzburg model which is SYZ mirror to the blowup of @math along @math , under a positivity assumption. This construction also yields SYZ mirrors to affine conic bundles, as well as a Landau-Ginzburg model which can be naturally viewed as a mirror to @math . The main applications concern affine hypersurfaces of general type, for which our results provide a geometric basis for various mirror symmetry statements that appear in the recent literature. We also obtain analogous results for complete intersections. We construct from a real affine manifold with singularities (a tropical manifold) a degeneration of Calabi-Yau manifolds. This solves a fundamental problem in mirror symmetry. Furthermore, a striking feature of our approach is that it yields an explicit and canonical order-by-order description of the degeneration via families of tropical trees. This gives complete control of the B-model side of mirror symmetry in terms of tropical geometry. For example, we expect that our deformation parameter is a canonical coordinate, and expect period calculations to be expressible in terms of tropical curves. We anticipate this will lead to a proof of mirror symmetry via tropical methods. We give a spectral sequence to compute the logarithmic Hodge groups on a hypersurface-type toric log Calabi-Yau space X, compute its E1 term explicitly in terms of tropical degeneration data and Jacobian rings, and prove its degeneration at E2 under mild assumptions. We prove the base change of the affine Hodge groups and deduce it for the logarithmic Hodge groups in low dimensions. As an application, we prove a mirror symmetry duality in dimensions two and four involving the ordinary Hodge numbers, the stringy Hodge numbers and the affine Hodge numbers. The goal of this paper is to propose a theory of mirror symmetry for varieties of general type. Using Landau-Ginzburg mirrors as motivation, we describe the mirror of a hypersurface of general type (and more generally varieties of non-negative Kodaira dimension) as the critical locus of the zero fibre of a certain Landau-Ginzburg potential. The critical locus carries a perverse sheaf of vanishing cycles. Our main result shows that one obtains the interchange of Hodge numbers expected in mirror symmetry. This exchange is between the Hodge numbers of the hypersurface and certain Hodge numbers defined using a mixed Hodge structure on the hypercohomology of the perverse sheaf.
This exchange can be anticipated from an analysis of Hochschild homology of the relevant categories arising in homological mirror symmetry in this case; we also conjecture that a similar, but different, exchange of dimensions arises from Hochschild cohomology, relating the cohomology of sheaves of polyvector fields on the hypersurface to the cohomology of the critical locus. | Abstract of query paper | Cite abstracts |
30339 | 30338 | A smooth affine hypersurface Z of complex dimension n is homotopy equivalent to an n-dimensional cell complex. Given a defining polynomial f for Z as well as a regular triangulation of its Newton polytope, we provide a purely combinatorial construction of a compact topological space S as a union of components of real dimension n, and prove that S embeds into Z as a deformation retract. In particular, Z is homotopy equivalent to S. | It is well-known that a Riemann surface can be decomposed into the so-called pairs-of-pants. Each pair-of-pants is diffeomorphic to a Riemann sphere minus 3 points. We show that a smooth complex projective hypersurface of arbitrary dimension admits a similar decomposition. The n-dimensional pair-of-pants is diffeomorphic to CP^n minus n+2 hyperplanes. Alternatively, these decompositions can be treated as certain fibrations on the hypersurfaces. We show that there exists a singular fibration on the hypersurface with an n-dimensional polyhedral complex as its base and a real n-torus as its fiber. The base accommodates the geometric genus of a hypersurface V. Its homotopy type is a wedge of h^{n,0}(V) spheres S^n. Given a smooth toric variety X and an ample line bundle O(1), we construct a sequence of Lagrangian submanifolds of (C^*)^n with boundary on a level set of the Landau-Ginzburg mirror of X. The corresponding Floer homology groups form a graded algebra under the cup product which is canonically isomorphic to the homogeneous coordinate ring of X. | Abstract of query paper | Cite abstracts
30340 | 30339 | Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large number of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler, which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinions, etc. | With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hastings random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the "ground-truth" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook. Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges.
We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original graph and of the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, the best performing methods are the ones based on random walks and "forest fire"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph. | Abstract of query paper | Cite abstracts
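The MHRW sampler used in the rows above can be stated in a few lines: propose a uniformly random neighbor and accept with probability min(1, deg(u)/deg(v)), which reweights the walk's stationary distribution from degree-proportional to uniform. A compact sketch on a toy graph (graph and parameters are illustrative):

```python
# Metropolis-Hastings random walk (MHRW) for uniform node sampling:
# accepting a proposed neighbor v with probability min(1, deg(u)/deg(v))
# makes the stationary distribution uniform over nodes.
import random

def mhrw(neighbors, start, steps):
    u, samples = start, []
    for _ in range(steps):
        v = random.choice(neighbors[u])
        if random.random() < min(1.0, len(neighbors[u]) / len(neighbors[v])):
            u = v                      # accept the move
        samples.append(u)              # rejected proposals repeat the node
    return samples                     # in practice, discard a burn-in prefix

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(mhrw(graph, start=0, steps=10))
```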
30341 | 30340 | Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large number of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler, which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinions, etc. | Social networks provide interesting algorithmic properties that can be used to bootstrap the security of distributed systems. For example, it is widely believed that social networks are fast mixing, and many recently proposed designs of such systems make crucial use of this property. However, whether real-world social networks are really fast mixing has not been verified before, and this could potentially affect the performance of such systems based on the fast mixing property. To address this problem, we measure the mixing time of several social graphs, the time that it takes a random walk on the graph to approach the stationary distribution of that graph, using two techniques. First, we use the second largest eigenvalue modulus, which bounds the mixing time. Second, we sample initial distributions and compute the random walk length required to achieve probability distributions close to the stationary distribution. Our findings show that the mixing time of social graphs is much larger than anticipated and used in the literature, which implies that either the current security systems based on fast mixing have weaker utility guarantees, or they have to be less efficient, with weaker security guarantees, in order to compensate for the slower mixing. We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest eigenvalue modulus (SLEM) of the transition probability matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the SLEM, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, @math ) using standard numerical methods for SDPs.
Larger problems can be solved by exploiting various types of symmetry and structure in the problem, and far larger problems (say, 100,000 edges) can be solved using a subgradient method we describe. We compare the fastest mixing Markov chain to those obtained using two commonly used heuristics: the maximum-degree method, and the Metropolis-Hastings algorithm. For many of the examples considered, the fastest mixing Markov chain is substantially faster than those obtained using these heuristic methods. We derive the Lagrange dual of the fastest mixing Markov chain problem, which gives a sophisticated method for obtaining (arbitrarily good) bounds on the optimal mixing rate, as well as the optimality conditions. Finally, we describe various extensions of the method, including a solution of the problem of finding the fastest mixing reversible Markov chain, on a fixed graph, with a given equilibrium distribution. | Abstract of query paper | Cite abstracts
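Once a transition matrix is fixed, the SLEM measurement described above is a single eigenvalue computation. The sketch below evaluates the SLEM of a lazy random walk on a 4-cycle (the laziness avoids periodicity, which would otherwise pin the SLEM at 1); it illustrates the quantity being measured and optimized, not the SDP itself.

```python
# Measuring the mixing rate via the second largest eigenvalue modulus
# (SLEM) of a transition matrix; smaller SLEM means faster mixing.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)        # adjacency of a 4-cycle

# Lazy random walk: stay put with probability 1/2, else move uniformly.
P = 0.5 * np.eye(4) + 0.5 * (A / A.sum(axis=1, keepdims=True))

eigs = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
slem = eigs[1]            # largest modulus after the trivial eigenvalue 1
print(slem)               # 0.5 for this chain
```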
30342 | 30341 | Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large number of queries often required (i.e., a long "mixing time") for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by "rewiring" the social network on-the-fly. Specifically, we develop Modified TOpology (MTO)-Sampler, which, by using only information exposed by the restrictive web interface, constructs a "virtual" overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinions, etc. | The small world phenomenon, which consistently occurs in numerous existing networks, refers to two similar but different properties — small average distance and the clustering effect. We consider a hybrid graph model that incorporates both properties by combining a global graph and a local graph. The global graph is modeled by a random graph with a power law degree distribution, while the local graph has specified local connectivity. We will prove that the hybrid graph has average distance and diameter close to that of random graphs with the same degree distribution (under certain mild conditions). We also give a simple decomposition algorithm which, for any given (real) graph, identifies the global edges and extracts the local graph (which is uniquely determined depending only on the local connectivity). We can then apply our theoretical results for analyzing real graphs, provided the parameters of the hybrid model can be appropriately chosen. Access to realistic, complex graph datasets is critical to research on social networking systems and applications. Simulations on graph data provide critical evaluation of new systems and applications ranging from community detection to spam filtering and social web search. Due to the high time and resource costs of gathering real graph datasets through direct measurements, researchers are anonymizing and sharing a small number of valuable datasets with the community. However, performing experiments using shared real datasets faces three key disadvantages: concerns that graphs can be de-anonymized to reveal private information, the increasing cost of distributing large datasets, and the fact that a small number of available social graphs limits the statistical confidence in the results. The use of measurement-calibrated graph models is an attractive alternative to sharing datasets. Researchers can "fit" a graph model to a real social graph, extract a set of model parameters, and use them to generate multiple synthetic graphs statistically similar to the original graph. While numerous graph models have been proposed, it is unclear if they can produce synthetic graphs that accurately match the properties of the original graphs.
In this paper, we explore the feasibility of measurement-calibrated synthetic graphs using six popular graph models and a variety of real social graphs gathered from the Facebook social network ranging from 30,000 to 3 million edges. We find that two models consistently produce synthetic graphs with common graph metric values similar to those of the original graphs. However, only one produces high fidelity results in our application-level benchmarks. While this shows that graph models can produce realistic synthetic graphs, it also highlights the fact that current graph metrics remain incomplete, and some applications expose graph properties that do not map to existing metrics. | Abstract of query paper | Cite abstracts |
30343 | 30342 | We consider the problem of recovering two unknown vectors, @math and @math , of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math . Although the observed convolution is nonlinear in both @math and @math , it is linear in the rank-1 matrix formed by their outer product @math . This observation allows us to recast the deconvolution problem as a low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math . That is, we show that if @math is drawn from a random subspace of dimension @math , and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math , then the receiver can recover both the channel response and the message when @math , to within constant and log factors. | In this paper, we address compressed sensing of a low-rank matrix, posing the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition providing an analogy between parsimonious representations of a sparse vector and a low-rank matrix, and extending efficient greedy algorithms from the vector to the matrix case. In particular, we propose an efficient and guaranteed algorithm named atomic decomposition for minimum rank approximation (ADMiRA) that extends Needell and Tropp's compressive sampling matching pursuit (CoSaMP) algorithm from the sparse vector to the low-rank matrix case. The performance guarantee is given in terms of the rank-restricted isometry property (R-RIP) and bounds both the number of iterations and the error in the approximate solution for the general case of noisy measurements and approximately low-rank solution. With a sparse measurement operator as in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. Numerical experiments for the matrix completion problem show that, although the R-RIP is not satisfied in this case, ADMiRA is a competitive algorithm for matrix completion. Let M be an n^α × n matrix of rank r, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm, which we call OptSpace, that reconstructs M from |E| = O(rn) observed entries with relative root mean square error RMSE ≤ C(α) (nr/|E|)^{1/2} with probability larger than 1 - 1/n^3. Further, if r = O(1) and M is sufficiently unstructured, then OptSpace reconstructs it exactly from |E| = O(n log n) entries with probability larger than 1 - 1/n^3. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm.
The complexity of our algorithm is O(|E| r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices. This paper deals with the trace regression model where n entries or linear combinations of entries of an unknown m1 × m2 matrix A0 corrupted by noise are observed. We propose a new nuclear norm penalized estimator of A0 and establish a general sharp oracle inequality for this estimator for arbitrary values of n, m1, m2 under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form, and we prove that it satisfies oracle inequalities with faster rates of convergence than in previous works. They are valid, in particular, in the high-dimensional setting m1 m2 ≫ n. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix A0, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of A0 with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by A0 and the aim is to find the best trace regression model approximating the data. | Abstract of query paper | Cite abstracts
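As a hedged illustration of the lifting idea in the query abstract (bilinear measurements recast as linear measurements of a rank-1 outer product, with rank relaxed to the nuclear norm), the following toy sketch uses cvxpy with random dense measurement operators; the paper's operators are structured Fourier-domain ones, so the sizes and operators here are assumptions for illustration only:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
K, N, L = 4, 4, 60             # subspace dimensions and number of measurements (toy sizes)

h = rng.standard_normal(K)      # unknown coefficient vectors
m = rng.standard_normal(N)
X_true = np.outer(h, m)         # the rank-1 "lifted" unknown

# Random linear measurement operators A_l: y_l = <A_l, X> = trace(A_l^T X).
A = rng.standard_normal((L, K, N))
y = np.array([np.sum(A[l] * X_true) for l in range(L)])

# Convex relaxation: minimize the nuclear norm subject to the measurements.
X = cp.Variable((K, N))
constraints = [cp.sum(cp.multiply(A[l], X)) == y[l] for l in range(L)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

Once the rank-1 matrix is recovered, the two vectors can be read off (up to scale) from its leading singular vectors.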
30344 | 30343 | We consider the problem of recovering two unknown vectors, @math and @math, of length @math from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension @math and the other with dimension @math. Although the observed convolution is nonlinear in both @math and @math, it is linear in the rank-1 matrix formed by their outer product @math. This observation allows us to recast the deconvolution problem as a low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that for "generic" signals, the program can deconvolve @math and @math exactly when the maximum of @math and @math is almost on the order of @math. That is, we show that if @math is drawn from a random subspace of dimension @math, and @math is a vector in a subspace of dimension @math whose basis vectors are "spread out" in the frequency domain, then nuclear norm minimization recovers @math without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length @math which we code using a random @math coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length @math, then the receiver can recover both the channel response and the message when @math, to within constant and log factors. | The recently developed compressive sensing (CS) framework enables the design of sub-Nyquist analog-to-digital converters. Several architectures have been proposed for the acquisition of sparse signals in large swaths of bandwidth. In this paper we consider a more flexible multi-channel signal model consisting of several discontiguous channels where the occupancy of the combined bandwidth of the channels is sparse. We introduce a new compressive acquisition architecture, the compressive multiplexer (CMUX), to sample such signals. We demonstrate that our architecture is CS-feasible and suggest a simple implementation with numerous practical advantages. Many applications in cellular systems and sensor networks involve a random subset of a large number of users asynchronously reporting activity to a base station. This paper examines the problem of multiuser detection (MUD) in random access channels for such applications. Traditional orthogonal signaling ignores the random nature of user activity in this problem and limits the total number of users to be on the order of the number of signal space dimensions. Contention-based schemes, on the other hand, suffer from delays caused by colliding transmissions and the hidden node problem. In contrast, this paper presents a novel pairing of an asynchronous non-orthogonal code-division random access scheme with a convex optimization-based MUD algorithm that overcomes the issues associated with orthogonal signaling and contention-based methods. Two key distinguishing features of the proposed MUD algorithm are that it does not require knowledge of the delay or channel state information of every user and it has polynomial-time computational complexity. The main analytical contribution of this paper is the relationship between the performance of the proposed MUD algorithm in the presence of arbitrary or random delays and two simple metrics of the set of user codewords.
The study of these metrics is then focused on two specific sets of codewords for asynchronous random access: random binary codewords and specially constructed algebraic codewords. The ensuing analysis confirms that the proposed scheme, together with either of these two codeword sets, significantly outperforms orthogonal signaling-based random access in terms of the total number of users in the system. | Abstract of query paper | Cite abstracts
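A simplified sketch of the sparse-recovery core of such MUD schemes, under strong assumptions that the paper does not make (synchronous users, known codeword matrix): user activity is a sparse vector recovered by l1-regularized least squares, solved here with plain ISTA rather than the paper's convex program:

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n_users, code_len, n_active = 200, 60, 5   # many users, few simultaneously active

A = rng.choice([-1.0, 1.0], size=(code_len, n_users)) / np.sqrt(code_len)  # binary codewords
x_true = np.zeros(n_users)
active = rng.choice(n_users, n_active, replace=False)
x_true[active] = rng.standard_normal(n_active)

y = A @ x_true + 0.01 * rng.standard_normal(code_len)
x_hat = ista(A, y, lam=0.02)

print("true active users:", sorted(active))
print("detected:", sorted(np.flatnonzero(np.abs(x_hat) > 0.1)))
```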
30345 | 30344 | We introduce a new family of erasure codes, called group decodable codes (GDC), for distributed storage systems. Given a set of design parameters (α, β, k, t), where k is the number of information symbols, each codeword of an (α, β, k, t)-group decodable code is a t-tuple of strings, called buckets, such that each bucket is a string of β symbols that is a codeword of a [β, α] MDS code (which is encoded from α information symbols). Such codes have the following two properties: (P1) Locally repairable: Each code symbol has locality (α, β − α + 1). (P2) Group decodable: From each bucket we can decode α information symbols. We establish an upper bound on the minimum distance of (α, β, k, t)-group decodable codes for any given set of parameters (α, β, k, t); we also prove that the bound is achievable when the coding field F has size |F| > (n−1 choose k−1). | A locally recoverable code (LRC code) is a code over a finite alphabet such that every symbol in the encoding is a function of a small number of other symbols that form a recovering set. Bounds on the rate and distance of such codes have been extensively studied in the literature. In this paper we derive upper bounds on the rate and distance of codes in which every symbol has t ≥ 1 disjoint recovering sets. In distributed storage systems, erasure codes with locality r are preferred because a coordinate can be locally repaired by accessing at most r other coordinates, which in turn greatly reduces the disk I/O complexity for small r. However, the local repair may not be performed when some of the r coordinates are also erased. To overcome this problem, we propose the (r, δ)_c-locality providing δ − 1 nonoverlapping local repair groups of size no more than r for a coordinate. Consequently, the repair locality r can tolerate δ − 1 erasures in total. We derive an upper bound on the minimum distance for any linear [n, k] code with information (r, δ)_c-locality. Then, we prove existence of the codes that attain this bound when n ≥ k(r(δ − 1) + 1). Although the locality (r, δ) defined previously provides the same level of locality and local repair tolerance as our definition, codes with (r, δ)_c-locality attaining the bound are proved to have more advantage in the minimum distance. In particular, we construct a class of codes with all-symbol (r, δ)_c-locality where the gain in minimum distance is Ω(√r) and the information rate is close to 1. | Abstract of query paper | Cite abstracts
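To illustrate the bucket structure in the query abstract, here is a toy [β, α] MDS bucket over a prime field built from a Vandermonde generator matrix; the field, the generator, and the symbol values are my choices for illustration, not the paper's construction. It shows the local-repair property: any α surviving symbols of a bucket suffice to repair an erased one.

```python
# Toy [beta, alpha] MDS bucket over GF(p) via a Vandermonde generator matrix.
p = 257                      # prime field size (assumed; any prime > beta works)
alpha, beta = 3, 6

def vandermonde(rows, cols):
    # Row i evaluates the powers of point (i + 1); distinct points make any
    # alpha rows invertible, which is exactly the MDS property.
    return [[pow(i + 1, j, p) for j in range(cols)] for i in range(rows)]

def solve_mod(M, b):
    """Gauss-Jordan elimination mod p: solve M x = b for x."""
    n = len(M)
    M = [row[:] + [v] for row, v in zip(M, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[c])]
    return [row[-1] for row in M]

G = vandermonde(beta, alpha)                 # beta x alpha generator matrix
info = [17, 42, 99]                          # alpha information symbols
bucket = [sum(G[i][j] * info[j] for j in range(alpha)) % p for i in range(beta)]

# Repair symbol 0 from ANY alpha surviving symbols (here symbols 2, 3, 5):
survivors = [2, 3, 5]
decoded_info = solve_mod([G[i] for i in survivors], [bucket[i] for i in survivors])
repaired = sum(G[0][j] * decoded_info[j] for j in range(alpha)) % p
assert repaired == bucket[0] and decoded_info == info
print("repaired symbol:", repaired, "decoded info:", decoded_info)
```

The same decode step also demonstrates property (P2): the α information symbols of the bucket fall out of any α of its β symbols.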
30346 | 30345 | This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred to as "image co-segmentation", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rates on the Fashionista and CCP datasets, respectively, which are superior to those of state-of-the-art methods. | In this paper we demonstrate an effective method for parsing clothing in fashion photographs, an extremely challenging problem due to the large number of possible garment items, variations in configuration, garment appearance, layering, and occlusion. In addition, we provide a large novel dataset and tools for labeling garment items, to enable future research on clothing estimation. Finally, we present intriguing initial results on using clothing estimates to improve pose identification, and demonstrate a prototype application for pose-independent visual garment retrieval. Clothing is one of the most informative cues of human appearance. In this paper, we propose a novel multi-person clothing segmentation algorithm for highly occluded images. The key idea is combining blocking models to address the person-wise occlusions. In contrast to the traditional layered model that tries to solve the full layer ranking problem, the proposed blocking model partitions the problem into a series of pair-wise ones and then determines the local blocking relationship based on individual and contextual information. Thus, it is capable of dealing with cases with a large number of people. Additionally, we propose a layout model formulated as a Markov network which incorporates the blocking relationship to pursue an approximately optimal clothing layout for groups of people. Experiments conducted on a group-image dataset show the effectiveness of our algorithm. We present a method for segmenting the parts of multiple instances of a known object category exhibiting large variations in projected shape and colour. The method builds on an existing MRF formulation incorporating a prior shape model and colour distributions for the constituent parts. We propose a novel shape model consisting of a deformable spatial prior probability for the part-label at each pixel. We also make a simple extension to the MRF formulation to deal simultaneously with multiple objects within a global optimisation. Finally, we evaluate the method for the task of segmenting individual items of clothing in images depicting groups of people, and demonstrate improved performance against the state of the art for this task. We describe a simple model for parsing pedestrians based on shape.
Our model assembles candidate parts from an oversegmentation of the image and matches them to a library of exemplars. Our matching uses a hierarchical decomposition into a variable number of parts and computes scores on partial matchings in order to prune the search space of candidate segments. Simple constraints enforce a consistent layout of parts. Because our model is shape-based, it generalizes well. We use exemplars from a controlled dataset of poses but achieve good test performance on unconstrained images of pedestrians in street scenes. We demonstrate results of parsing detections returned from a standard scanning-window pedestrian detector and use the resulting parse to perform viewpoint prediction and detection re-scoring. Cloth modeling and recognition is an important and challenging problem in both vision and graphics tasks, such as dressed human recognition and tracking, human sketch and portrait. In this paper, we present a context-sensitive grammar in an And-Or graph representation which will produce a large set of composite graphical templates to account for the wide variabilities of cloth configurations, such as T-shirts, jackets, etc. In a supervised learning phase, we ask an artist to draw sketches on a set of dressed people, and we decompose the sketches into categories of cloth and body components: collars, shoulders, cuffs, hands, pants, shoes, etc. Each component has a number of distinct sub-templates (sub-graphs). These sub-templates serve as leaf nodes in a big And-Or graph where an And-node represents a decomposition of the graph into sub-configurations with Markov relations for context and constraints (soft or hard), and an Or-node is a switch for choosing one out of a set of alternative And-nodes (sub-configurations) - similar to a node in a stochastic context-free grammar (SCFG). This representation integrates the SCFG for structural variability and the Markov (graphical) model for context. An algorithm which integrates the bottom-up proposals and the top-down information is proposed to infer the composite cloth template from the image. | Abstract of query paper | Cite abstracts
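A deliberately simplified sketch of the region co-labeling phase described in the query abstract: unary costs plus a Potts smoothness term over a toy region-adjacency graph. Iterated conditional modes (ICM) is used here as a lightweight stand-in for the Graph Cuts solver the paper uses, and all potentials and the graph are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_labels = 12, 4

# Unary potentials: cost of assigning each label to each region (toy values).
unary = rng.random((n_regions, n_labels))

# Pairwise Potts smoothness over a toy region-adjacency graph.
edges = [(i, i + 1) for i in range(n_regions - 1)] + [(0, 5), (3, 9)]
smooth = 0.4

def energy(labels):
    e = unary[np.arange(n_regions), labels].sum()
    e += smooth * sum(labels[a] != labels[b] for a, b in edges)
    return e

# Iterated conditional modes: greedily relabel one region at a time.
labels = unary.argmin(axis=1)
for _ in range(10):
    changed = False
    for r in range(n_regions):
        costs = unary[r].copy()
        for a, b in edges:
            if r in (a, b):
                other = labels[b if r == a else a]
                costs += smooth * (np.arange(n_labels) != other)
        best = costs.argmin()
        if best != labels[r]:
            labels[r], changed = best, True
    if not changed:
        break

print("labels:", labels, "energy:", round(energy(labels), 3))
```

Graph cuts (e.g. alpha-expansion) optimizes the same kind of energy with much stronger guarantees; ICM only finds a local minimum but keeps the sketch self-contained.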
30347 | 30346 | This paper aims at developing an integrated system of clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred to as "image co-segmentation", iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e. "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rates on the Fashionista and CCP datasets, respectively, which are superior to those of state-of-the-art methods. | This paper presents a framework for semantically segmenting a target image without tags by searching for references in an image database, where all the images are unsegmented but annotated with tags. We jointly segment the target image and its references by optimizing both semantic consistencies within individual images and correspondences between the target image and each of its references. In our framework, we first retrieve two types of references with a semantic-driven scheme: i) the compatible references, which share similar global appearance with the target image; and ii) the competitive references, which have distinct appearance from the target image but tags similar to those of one of the compatible references. The two types of references have complementary information for assisting the segmentation of the target image. Then we construct a novel graphical representation, in which the vertices are superpixels extracted from the target image and its references. The segmentation problem is posed as labeling all the vertices with the semantic tags obtained from the references. The method is able to label images without pixel-level annotation and classifier training, and it outperforms state-of-the-art approaches on the MSRC-21 database. This paper presents a new algorithm for the automatic recognition of object classes from images (categorization). Compact and yet discriminative appearance-based object class models are automatically learned from a set of training images. The method is simple and extremely fast, making it suitable for many applications such as semantic image retrieval, Web search, and interactive image editing. It classifies a region according to the proportions of different visual words (clusters in feature space). The specific visual words and the typical proportions in each object are learned from a segmented training set. The main contribution of this paper is twofold: i) an optimally compact visual dictionary is learned by pair-wise merging of visual words from an initially large dictionary. The final visual words are described by GMMs. ii) A novel statistical measure of discrimination is proposed which is optimized by each merge operation.
High classification accuracy is demonstrated for nine object classes on photographs of real objects viewed under general lighting conditions, poses and viewpoints. The set of test images used for validation comprises: i) photographs acquired by us, ii) images from the Web and iii) images from the recently released Pascal dataset. The proposed algorithm performs well on both texture-rich objects (e.g. grass, sky, trees) and structure-rich ones (e.g. cars, bikes, planes). This paper presents a framework of layered graph matching for integrating graph partition and matching. The objective is to find an unknown number of corresponding graph structures in two images. We extract discriminative local primitives from both images and construct a candidacy graph whose vertices are matching candidates (i.e., a pair of primitives) and whose edges are either negative for mutual exclusion or positive for mutual consistence. Then we pose layered graph matching as a multicoloring problem on the candidacy graph and solve it using a composite cluster sampling algorithm. This algorithm assigns some vertices into a number of colors, each being a matched layer, and turns off all the remaining candidates. The algorithm iterates two steps: 1) sampling the positive and negative edges probabilistically to form a composite cluster, which consists of a few mutually conflicting connected components (CCPs) in different colors, and 2) assigning new colors to these CCPs with consistence and exclusion relations maintained, where the assignments are accepted by the Markov Chain Monte Carlo (MCMC) mechanism to preserve detailed balance. This framework demonstrates state-of-the-art performance on several applications, such as multi-object matching with large motion, shape matching and retrieval, and object localization in cluttered backgrounds. In this work, we investigate how to automatically reassign the manually annotated labels at the image level to those contextually derived semantic regions. First, we propose a bi-layer sparse coding formulation for uncovering how an image or semantic region can be robustly reconstructed from the over-segmented image patches of an image set. We then harness it for the automatic label-to-region assignment of the entire image set. The solution to bi-layer sparse coding is achieved by convex l1-norm minimization. The underlying philosophy of bi-layer sparse coding is that an image or semantic region can be sparsely reconstructed via the atomic image patches belonging to the images with common labels, while the robustness in label propagation requires that these selected atomic patches come from very few images. Each layer of sparse coding produces the image label assignment to those selected atomic patches and merged candidate regions based on the shared image labels. The results from all bi-layer sparse codings over all candidate regions are then fused to obtain the entire label-to-region assignments. Besides, the presented bi-layer sparse coding framework can be naturally applied to perform image annotation on new test images. Extensive experiments on three public image datasets clearly demonstrate the effectiveness of our proposed framework in both label-to-region assignment and image annotation tasks. We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition.
However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both into a common probabilistic framework. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with a novel MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method significantly outperforms previously published methods while needing one order of magnitude fewer training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion. While there has been a lot of recent work on object recognition and image understanding, the focus has been on carefully establishing mathematical models for images, scenes, and objects. In this paper, we propose a novel, nonparametric approach for object recognition and scene parsing using a new technology we name label transfer. For an input image, our system first retrieves its nearest neighbors from a large database containing fully annotated images. Then, the system establishes dense correspondences between the input image and each of the nearest neighbors using the dense SIFT flow algorithm [28], which aligns two images based on local image structures. Finally, based on the dense scene correspondences obtained from SIFT flow, our system warps the existing annotations and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on challenging databases. Compared to existing object recognition approaches that require training classifiers or appearance models for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval and alignment procedure. | Abstract of query paper | Cite abstracts
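A bare-bones sketch of the retrieval-and-transfer idea shared by the label-transfer approaches above: nearest annotated neighbors in feature space vote for a query region's label. The synthetic features and distance weighting stand in for SIFT-flow correspondences and the MRF integration of the actual systems:

```python
import numpy as np

rng = np.random.default_rng(2)
n_db, n_feat, n_labels = 500, 32, 7

db_feats = rng.standard_normal((n_db, n_feat))       # features of annotated regions
db_labels = rng.integers(0, n_labels, n_db)          # their semantic labels

def transfer_label(query_feat, k=15):
    """Label a query region by distance-weighted voting among k nearest neighbors."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    nn = np.argsort(d)[:k]
    votes = np.zeros(n_labels)
    for i in nn:
        votes[db_labels[i]] += np.exp(-d[i])         # closer neighbors vote harder
    return votes.argmax()

query = rng.standard_normal(n_feat)
print("transferred label:", transfer_label(query))
```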
30348 | 30347 | This article investigates a data-driven approach for semantic scene understanding, without pixelwise annotation or classifier training. The proposed framework parses a target image in two steps: first, retrieving its exemplars (that is, references) from an image database, where all images are unsegmented but annotated with tags; second, recovering its pixel labels by propagating semantics from the references. The authors present a novel framework making the two steps mutually conditional and bootstrapped under the probabilistic Expectation-Maximization (EM) formulation. In the first step, the system selects the references by jointly matching the appearances as well as the semantics (that is, the assigned labels) with the target. They process the second step via a combinatorial graphical representation, in which the vertices are superpixels extracted from the target and its selected references. Then they derive the potentials of assigning labels to one vertex of the target, which depend upon the graph edges that connect the vertex to its spatial neighbors of the target and to similar vertices of the references. The proposed framework can be applied naturally to perform image annotation on new test images. In the experiments, the authors validated their approach on two public databases, and demonstrated superior performance over the state-of-the-art methods in both semantic segmentation and image annotation tasks. | We propose semantic texton forests, efficient and powerful new low-level features. These are ensembles of decision trees that act directly on image pixels, and therefore do not need the expensive computation of filter-bank responses or local descriptors. They are extremely fast to both train and test, especially compared with k-means clustering and nearest-neighbor assignment of feature descriptors. The nodes in the trees provide (i) an implicit hierarchical clustering into semantic textons, and (ii) an explicit local classification estimate. Our second contribution, the bag of semantic textons, combines a histogram of semantic textons over an image region with a region prior category distribution. The bag of semantic textons is computed over the whole image for categorization, and over local rectangular regions for segmentation. Including both histogram and region prior allows our segmentation algorithm to exploit both textural and semantic context. Our third contribution is an image-level prior for segmentation that emphasizes those categories that the automatic categorization believes to be present. We evaluate on two datasets including the very challenging VOC 2007 segmentation dataset. Our results significantly advance the state of the art in segmentation accuracy, and furthermore, our use of efficient decision forests gives at least a five-fold increase in execution speed. This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection are achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field.
Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow). In this paper, we investigate a novel reconfigurable part-based model, namely the And-Or graph model, to recognize object shapes in images. Our proposed model consists of four layers: leaf-nodes at the bottom are local classifiers for detecting contour fragments; or-nodes above the leaf-nodes function as the switches to activate their child leaf-nodes, making the model reconfigurable during inference; and-nodes in a higher layer capture holistic shape deformations; one root-node on the top, which is also an or-node, activates one of its child and-nodes to deal with large global variations (e.g. different poses and views). We propose a novel structural optimization algorithm to discriminatively train the And-Or model from weakly annotated data. This algorithm iteratively determines the model structures (e.g. the nodes and their layouts) along with the parameter learning. On several challenging datasets, our model demonstrates its effectiveness in performing robust shape-based object detection against background clutter and outperforms other state-of-the-art approaches. We also release a new shape database with annotations, which includes more than @math challenging shape instances, for recognition and detection. | Abstract of query paper | Cite abstracts
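As a rough analogue of decision forests acting directly on pixel values (not the semantic texton forest algorithm itself, which grows specialized trees with spatial offset tests), this sketch trains scikit-learn's generic random forest on raw patch values; the data and the labeling rule are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in data: 5x5 raw pixel patches with per-patch class labels.
n_train, patch = 2000, 5
X = rng.random((n_train, patch * patch))
y = (X.mean(axis=1) > 0.5).astype(int)       # a trivially learnable "texture" rule

# Forest of shallow trees acting directly on pixel values, no filter banks.
forest = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=0)
forest.fit(X, y)

X_test = rng.random((200, patch * patch))
print("accuracy:", forest.score(X_test, (X_test.mean(axis=1) > 0.5).astype(int)))
```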
30349 | 30348 | This article investigates a data-driven approach for semantic scene understanding, without pixelwise annotation or classifier training. The proposed framework parses a target image in two steps: first, retrieving its exemplars (that is, references) from an image database, where all images are unsegmented but annotated with tags; second, recovering its pixel labels by propagating semantics from the references. The authors present a novel framework making the two steps mutually conditional and bootstrapped under the probabilistic Expectation-Maximization (EM) formulation. In the first step, the system selects the references by jointly matching the appearances as well as the semantics (that is, the assigned labels) with the target. They process the second step via a combinatorial graphical representation, in which the vertices are superpixels extracted from the target and its selected references. Then they derive the potentials of assigning labels to one vertex of the target, which depend upon the graph edges that connect the vertex to its spatial neighbors of the target and to similar vertices of the references. The proposed framework can be applied naturally to perform image annotation on new test images. In the experiments, the authors validated their approach on two public databases, and demonstrated superior performance over the state-of-the-art methods in both semantic segmentation and image annotation tasks. | We address the problem of learning object class models and object segmentations from unannotated images. We introduce LOCUS (learning object classes with unsupervised segmentation), which uses a generative probabilistic model to combine bottom-up cues of color and edge with top-down cues of shape and pose. A key aspect of this model is that the object appearance is allowed to vary from image to image, allowing for significant within-class variation. By iteratively updating the belief in the object's position, size, segmentation and pose, LOCUS avoids making hard decisions about any of these quantities and so allows for each to be refined at any stage. We show that LOCUS successfully learns an object class model from unlabeled images, whilst also giving segmentation accuracies that rival existing supervised methods. Finally, we demonstrate simultaneous recognition and segmentation in novel images using the learned models for a number of object classes, as well as unsupervised object discovery and tracking in video. In this paper we propose a novel nonparametric approach for object recognition and scene parsing using dense scene alignment. Given an input image, we retrieve its best matches from a large database with annotated images using our modified, coarse-to-fine SIFT flow algorithm that aligns the structures within two images. Based on the dense scene correspondence obtained from the SIFT flow, our system warps the existing annotations, and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on a challenging database. Compared to existing object recognition approaches that require training for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval and alignment procedure. We propose a novel method for weakly supervised semantic segmentation. Training images are labeled only by the classes they contain, not by their location in the image.
On test images, by contrast, the method predicts a class label for every pixel. Our main innovation is a multi-image model (MIM) - a graphical model for recovering the pixel labels of the training images. The model connects superpixels from all training images in a data-driven fashion, based on their appearance similarity. For generalizing to new test images we integrate them into MIM using a learned multiple kernel metric, instead of learning conventional classifiers on the recovered pixel labels. We also introduce an “objectness” potential that helps separate objects (e.g. car, dog, human) from background classes (e.g. grass, sky, road). In experiments on the MSRC 21 dataset and the LabelMe subset of [18], our technique outperforms previous weakly supervised methods and achieves accuracy comparable with fully supervised methods. We propose a novel approach to semantic segmentation using weakly supervised labels. In traditional fully supervised methods, superpixel labels are available for training; however, it is not easy to obtain enough labeled superpixels to learn a satisfying model for semantic segmentation. By contrast, only image-level labels are necessary in weakly supervised methods, which makes them more practical in real applications. In this paper we develop a new way of evaluating classification models for semantic segmentation given weakly supervised labels. For a certain category, provided the classification model parameter, we first learn the basis superpixels by sparse reconstruction, and then evaluate the parameters by measuring the reconstruction errors among negative and positive superpixels. Based on Gaussian Mixture Models, we use the Iterative Merging Update (IMU) algorithm to obtain the best parameters for the classification models. Experimental results on two real-world datasets show that the proposed approach outperforms the existing weakly supervised methods, and it also competes with state-of-the-art fully supervised methods. | Abstract of query paper | Cite abstracts
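A toy EM-style sketch of the weakly supervised setting common to these papers: superpixel labels are latent and constrained to each image's tag set; the E-step assigns labels under current class appearance models, and the M-step refits them. A nearest-mean appearance model is assumed purely for illustration and is not any specific paper's system:

```python
import numpy as np

rng = np.random.default_rng(4)
n_imgs, sp_per_img, n_feat, n_classes = 20, 30, 8, 5

feats = rng.standard_normal((n_imgs, sp_per_img, n_feat))       # superpixel features
tags = [sorted(rng.choice(n_classes, 2, replace=False)) for _ in range(n_imgs)]

means = rng.standard_normal((n_classes, n_feat))                # class appearance models

for _ in range(10):
    # E-step: each superpixel takes the nearest class mean among its image's tags.
    assign = np.zeros((n_imgs, sp_per_img), dtype=int)
    for i in range(n_imgs):
        d = np.linalg.norm(feats[i][:, None, :] - means[tags[i]][None, :, :], axis=2)
        assign[i] = np.array(tags[i])[d.argmin(axis=1)]
    # M-step: refit each class mean from the superpixels currently assigned to it.
    for c in range(n_classes):
        mask = assign == c
        if mask.any():
            means[c] = feats[mask].mean(axis=0)

print("assignments for image 0:", assign[0])
```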
30350 | 30349 | This paper investigates how to extract objects-of-interest without relying on hand-crafted features and sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back-propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches). | When dealing with objects with complex structures, saliency detection confronts a critical problem - namely that detection accuracy could be adversely affected if salient foreground or background in an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced in a hierarchical model. Different from varying patch sizes or downsizing images, our scale-based region handling finds saliency values optimally in a tree model. Our approach improves saliency detection on many images that cannot be handled well traditionally. A new dataset is also constructed. Effective segmentation prior to recognition has been shown to improve recognition performance. However, most segmentation algorithms adopt methods which are not explicitly linked to the goal of object recognition. Here we solve a related but slightly different problem in order to assist object recognition more directly - the extraction of a foreground mask, which identifies the locations of objects in the image. We propose a novel foreground/background segmentation algorithm that attempts to segment the interesting objects from the rest of the image, while maximizing an objective function which is tightly related to object recognition. We do this in a manner which requires no class-specific knowledge of object categories, using a probabilistic formulation which is derived from manually segmented images. The model includes a geometric prior and an appearance prior, whose parameters are learnt on the fly from images that are similar to the query image. We use graph-cut based energy minimization to enforce spatial coherence on the model's output. The method is tested on the challenging VOC09 and VOC10 segmentation datasets, achieving excellent results in providing a foreground mask. We also provide comparisons to the recent segmentation method of [7].
We present a novel technique for figure-ground segmentation, where the goal is to separate all foreground objects in a test image from the background. We decompose the test image and all images in a supervised training set into overlapping windows likely to cover foreground objects. The key idea is to transfer segmentation masks from training windows that are visually similar to windows in the test image. These transferred masks are then used to derive the unary potentials of a binary, pairwise energy function defined over the pixels of the test image, which is minimized with standard graph-cuts. This results in a fully automatic segmentation scheme, as opposed to interactive techniques based on similar energy functions. Using windows as support regions for transfer efficiently exploits the training data, as the test image does not need to be globally similar to a training image for the method to work. This makes it possible to compose novel scenes using local parts of training images. Our approach obtains very competitive results on three datasets (PASCAL VOC 2010 segmentation challenge, Weizmann horses, Graz-02). There have been some recent efforts to build visual knowledge bases from Internet images. But most of these approaches have focused on bounding box representation of objects. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea behind our approach is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom-up cues to produce clean object segmentations. Our experimental results indicate state-of-the-art performance on the difficult dataset introduced by [29]. We have integrated our algorithm in NEIL for enriching its knowledge base [5]. As of 14th April 2014, NEIL has automatically generated approximately 500K segmentations using web data. Detecting visually salient regions in images is one of the fundamental problems in computer vision. We propose a novel method to decompose an image into large-scale, perceptually homogeneous elements for efficient salient region detection, using a soft image abstraction representation. By considering both appearance similarity and spatial distribution of image pixels, the proposed representation abstracts out unnecessary image details, allowing the assignment of comparable saliency values across similar regions, and producing perceptually accurate salient region detection. We evaluate our salient region detection approach on the largest publicly available dataset with pixel-accurate annotations. The experimental results show that the proposed method outperforms 18 alternate methods, reducing the mean absolute error by 25.2% compared to the previous best result, while being computationally more efficient. Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast-based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores.
The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high-quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy Internet images, where the saliency regions are ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information. Saliency estimation has become a valuable tool in image processing. Yet, existing approaches exhibit considerable variation in methodology, and it is often difficult to attribute improvements in result quality to specific algorithm properties. In this paper we reconsider some of the design choices of previous methods and propose a conceptually clear and intuitive algorithm for contrast-based saliency estimation. Our algorithm consists of four basic steps. First, our method decomposes a given image into compact, perceptually homogeneous elements that abstract unnecessary detail. Based on this abstraction we compute two measures of contrast that rate the uniqueness and the spatial distribution of these elements. From the element contrast we then derive a saliency measure that produces a pixel-accurate saliency map which uniformly covers the objects of interest and consistently separates fore- and background. We show that the complete contrast and saliency estimation can be formulated in a unified way using high-dimensional Gaussian filters. This contributes to the conceptual simplicity of our method and lends itself to a highly efficient implementation with linear complexity. In a detailed experimental evaluation we analyze the contribution of each individual feature and show that our method outperforms all state-of-the-art approaches. | Abstract of query paper | Cite abstracts
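A minimal sketch of contrast-based saliency in the spirit of the element/region formulations above: fixed-size blocks are scored by their color contrast to all other blocks, downweighted by spatial distance. The block size, weighting, and stand-in image are arbitrary illustrative choices, not any paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)
H, W, B = 64, 64, 8                        # image size and block size
img = rng.random((H, W, 3))                # stand-in RGB image

# Mean color and center position of each block ("element").
gh, gw = H // B, W // B
colors = img.reshape(gh, B, gw, B, 3).mean(axis=(1, 3)).reshape(-1, 3)
ys, xs = np.meshgrid(np.arange(gh), np.arange(gw), indexing="ij")
pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

# Saliency = sum of color contrasts to all other elements, weighted by proximity.
dc = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)
dp = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
w = np.exp(-dp / 4.0)
saliency = (w * dc).sum(axis=1)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)

print("saliency grid:\n", saliency.reshape(gh, gw).round(2))
```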
30351 | 30350 | This paper investigates how to extract objects-of-interest without relying on hand-crafted features and sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks jointly via back-propagation, with the latent variables fixed. Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches). | Deep Neural Networks (DNNs) have recently shown outstanding performance on image classification tasks [14]. In this paper we go one step further and address the problem of object detection using DNNs, that is, not only classifying but also precisely localizing objects of various classes. We present a simple and yet powerful formulation of object detection as a regression problem to object bounding box masks. We define a multi-scale inference procedure which is able to produce high-resolution object detections at a low cost by a few network applications. State-of-the-art performance of the approach is shown on Pascal VOC. We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat. Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance.
In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations. Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding. In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm lies in generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images. From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning 3D scene layout from noisy labels, providing a relative improvement of 7% compared to the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and provides a relative improvement of 8% compared to the baseline. Finally, the improvement is even bigger when acquired and current information from a single image are combined. Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset. Feedforward multilayer networks trained by supervised learning have recently demonstrated state-of-the-art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a 'wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks.
For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuitry from 3D electron microscopy data show that these "Deep And Wide Multiscale Recursive" (DAWMR) networks lead to new levels of image labeling performance. The highest-performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels (@math) to make a prediction at each image location. We present an associated open-source software package that enables the simple and flexible creation of DAWMR networks. | Abstract of query paper | Cite abstracts
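To make the detection-as-regression formulation concrete, here is a tiny PyTorch sketch that regresses a single [x, y, w, h] box from an image with a smooth-L1 loss; the architecture, sizes, and random data are illustrative stand-ins, nothing like the actual networks in the abstracts above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoxRegressor(nn.Module):
    """Tiny convnet that regresses a single [x, y, w, h] box from an image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BoxRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on random stand-in data (batch of images + ground-truth boxes).
imgs = torch.rand(8, 3, 64, 64)
boxes = torch.rand(8, 4)
loss = F.smooth_l1_loss(model(imgs), boxes)
opt.zero_grad()
loss.backward()
opt.step()
print("loss:", loss.item())
```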
30352 | 30351 | In this paper, we propose a novel model for high-dimensional data, called the Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a linear orthogonal projection and a finite mixture model under a unified generative modeling framework. The HOPE model itself can be learned unsupervised from unlabelled data based on maximum likelihood estimation, as well as discriminatively from labelled data. More interestingly, we have shown that the proposed HOPE models are closely related to neural networks (NNs) in the sense that each hidden layer can be reformulated as a HOPE model. As a result, the HOPE framework can be used as a novel tool to probe why and how NNs work and, more importantly, to learn NNs in either supervised or unsupervised ways. In this work, we have investigated the HOPE framework to learn NNs for several standard tasks, including image recognition on MNIST and speech recognition on TIMIT. Experimental results have shown that the HOPE framework yields significant performance gains over the current state-of-the-art methods in various types of NN learning problems, including unsupervised feature learning and supervised or semi-supervised learning. | We present the theory for heteroscedastic discriminant analysis (HDA), a model-based generalization of linear discriminant analysis (LDA) derived in the maximum-likelihood framework to handle heteroscedastic (unequal-variance) classifier models. We show how to estimate the heteroscedastic Gaussian model parameters jointly with the dimensionality-reducing transform, using the EM algorithm. In doing so, we alleviate the need for an a priori ad hoc class assignment. We apply the theoretical results to the problem of speech recognition and observe word-error reduction in systems that employed both diagonal and full-covariance heteroscedastic Gaussian models tested on the TI-DIGITS database. | Abstract of query paper | Cite abstracts
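HOPE couples an orthogonal projection with a finite mixture model and trains them jointly under maximum likelihood. The following sketch only chains the two ingredients separately (PCA, then a GMM, via scikit-learn) on synthetic data, so it illustrates the model structure but deliberately not the paper's joint learning:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# Stand-in high-dimensional data with low-dimensional cluster structure.
centers = rng.standard_normal((3, 5)) * 4.0
latent = centers[rng.integers(0, 3, 500)] + rng.standard_normal((500, 5))
X = latent @ rng.standard_normal((5, 50))        # embed into 50 dimensions

# Stage 1: orthogonal projection to a low-dimensional subspace.
proj = PCA(n_components=5).fit(X)
Z = proj.transform(X)

# Stage 2: finite mixture model in the projected space.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(Z)
print("avg log-likelihood:", gmm.score(Z))
print("mixture weights:", gmm.weights_.round(3))
```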
30353 | 30352 | The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites, and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm can solve it unless P = NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost analogous sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time). | Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories? These seemingly different problems share common structure: outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. We present a general methodology for near-optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of "submodularity". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near-optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems, including a model of a water distribution network from the EPA, and real blog data. The obtained sensor placements are provably near-optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions. | Abstract of query paper | Cite abstracts
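For reference, a compact sketch of the greedy baseline these papers compare against: spread under the independent cascade model is estimated by Monte Carlo simulation and seeds are added by largest marginal gain. The toy random graph, activation probability, and small simulation counts are assumptions for illustration:

```python
import random

random.seed(0)

# Toy directed graph: node -> list of neighbors; uniform activation probability.
n, p = 100, 0.05
graph = {u: random.sample(range(n), 5) for u in range(n)}

def simulate_ic(seeds):
    """One independent-cascade run; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def spread(seeds, runs=200):
    return sum(simulate_ic(seeds) for _ in range(runs)) / runs

def greedy(k):
    seeds = []
    for _ in range(k):
        best = max((u for u in graph if u not in seeds),
                   key=lambda u: spread(seeds + [u]))
        seeds.append(best)
    return seeds

seeds = greedy(3)
print("seeds:", seeds, "estimated spread:", spread(seeds))
```

Degree-based heuristics like those in the query abstract replace the expensive inner spread estimation with cheap structural scores, which is where their large running-time advantage comes from.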
30354 | 30353 | The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites, and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm can solve it unless P = NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost analogous sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time). | Given a social network G and a constant @math, the influence maximization problem asks for k nodes in G that (directly and indirectly) influence the largest number of nodes under a pre-defined diffusion model. This problem finds important applications in viral marketing, and has been extensively studied in the literature. Existing algorithms for influence maximization, however, either trade approximation guarantees for practical efficiency, or vice versa. In particular, among the algorithms that achieve constant factor approximations under the prominent independent cascade (IC) model or linear threshold (LT) model, none can handle a million-node graph without incurring prohibitive overheads. This paper presents TIM, an algorithm that aims to bridge the theory and practice in influence maximization. On the theory side, we show that TIM runs in O((k + ℓ)(n + m) log n / ε^2) expected time and returns a (1 − 1/e − ε)-approximate solution with at least 1 − n^−ℓ probability. The time complexity of TIM is near-optimal under the IC model, as it is only a log n factor larger than the Ω(m + n) lower bound established in previous work (for fixed k, ℓ, and ε). Moreover, TIM supports the triggering model, which is a general diffusion model that includes both IC and LT as special cases. On the practice side, TIM incorporates novel heuristics that significantly improve its empirical efficiency without compromising its asymptotic performance. We experimentally evaluate TIM with the largest datasets ever tested in the literature, and show that it outperforms the state-of-the-art solutions (with approximation guarantees) by up to four orders of magnitude in terms of running time. In particular, when k = 50, ε = 0.2, and ℓ = 1, TIM requires less than one hour on a commodity machine to process a network with 41.6 million nodes and 1.4 billion edges. This demonstrates that influence maximization algorithms can be made practical while still offering strong theoretical guarantees. Influence maximization is the problem of finding a small subset of nodes (seed nodes) in a social network that could maximize the spread of influence. In this paper, we study efficient influence maximization from two complementary directions. One is to improve the original greedy algorithm of [5] and its improvement [7] to further reduce its running time, and the second is to propose new degree discount heuristics that improve influence spread. We evaluate our algorithms by experiments on two large academic collaboration graphs obtained from the online archival database arXiv.org.
Our experimental results show that (a) our improved greedy algorithm achieves a better running time compared with the improvement of [7], with matching influence spread, (b) our degree discount heuristics achieve much better influence spread than classic degree and centrality-based heuristics, and when tuned for a specific influence cascade model, they achieve almost matching influence spread with the greedy algorithm, and more importantly (c) the degree discount heuristics run only in milliseconds while even the improved greedy algorithms run in hours in our experimental graphs with a few tens of thousands of nodes. Based on our results, we believe that fine-tuned heuristics may provide truly scalable solutions to the influence maximization problem with satisfying influence spread and blazingly fast running time. Therefore, contrary to what is implied by the conclusion of [5] that traditional heuristics are outperformed by the greedy approximation algorithm, our results shed new light on the research of heuristic algorithms. We live in a world of social networks. Our everyday choices are often influenced by social interactions. Word of mouth, meme diffusion on the Internet, and viral marketing are all examples of how social networks can affect our behaviour. In many practical applications, it is of great interest to determine which nodes have the highest influence over the network, i.e., which set of nodes will, indirectly, reach the largest audience when propagating information. These nodes might be, for instance, the target for early adopters of a product, the most influential endorsers in political elections, or the most important investors in financial operations, just to name a few examples. Here, we tackle the NP-hard problem of influence maximization on social networks by means of a Genetic Algorithm. We show that, by using simple genetic operators, it is possible to find in feasible runtime high-influence solutions that are comparable to, and occasionally better than, the solutions found by a number of known heuristics (one of which was previously proven to have the best possible approximation guarantee, in polynomial time, of the optimal solution). The advantages of Genetic Algorithms show, however, in them not requiring any assumptions about the graph underlying the network, and in them obtaining more diverse sets of feasible solutions than current heuristics. Nowadays, in the world of limited attention, techniques that maximize the spread of social influence are more than welcome. Companies try to maximize their profits on sales by providing customers with free samples, believing in the power of word-of-mouth marketing; governments and non-governmental organizations often want to introduce positive changes in society by appropriately selecting individuals; or election candidates want to spend the least budget yet still win the election. In this work we propose the use of an evolutionary algorithm as a means of selecting seeds in social networks. By framing the problem as a genetic algorithm challenge, we show that it is possible to outperform the well-known greedy algorithm in the problem of influence maximization for the linear threshold model in both quality (up to 16% better) and efficiency (up to 35 times faster). We implemented both algorithms using a GPGPU approach, showing that the evolutionary algorithm can also benefit from GPU acceleration, making it efficient and scaling better than the greedy algorithm.
As the experiments conducted using three real-world datasets reveal, the evolutionary approach proposed in this paper outperforms the greedy algorithm in terms of the outcome, and it also scales much better than the greedy algorithm as the network size increases. The only drawback of the GPGPU approach so far is the maximum size of the network that can be processed - it is limited by the memory of the GPU card. We believe that by showing the superiority of the evolutionary approach over the greedy algorithm, we will motivate the scientific community to look for an idea to overcome this limitation of the GPU approach - we also suggest one of the possible paths to explore. Since the proposed approach is based only on topological features of the network, not on the attributes of nodes, its applications are broader than ones that are dataset-specific. Given a social network G and a positive integer k, the influence maximization problem asks for k nodes (in G) whose adoptions of a certain idea or product can trigger the largest expected number of follow-up adoptions by the remaining nodes. This problem has been extensively studied in the literature, and the state-of-the-art technique runs in O((k + ℓ)(n + m) log n / ε^2) expected time and returns a (1 - 1/e - ε)-approximate solution with at least 1 - 1/n^ℓ probability. This paper presents an influence maximization algorithm that provides the same worst-case guarantees as the state of the art, but offers significantly improved empirical efficiency. The core of our algorithm is a set of estimation techniques based on martingales, a classic statistical tool. Those techniques not only provide accurate results with small computation overheads, but also enable our algorithm to support a larger class of information diffusion models than existing methods do. We experimentally evaluate our algorithm against the states of the art under several popular diffusion models, using real social networks with up to 1.4 billion edges. Our experimental results show that the proposed algorithm consistently outperforms the states of the art in terms of computation efficiency, and is often orders of magnitude faster. Diffusion is a fundamental graph process, underpinning such phenomena as epidemic disease contagion and the spread of innovation by word-of-mouth. We address the algorithmic problem of finding a set of k initial seed nodes in a network so that the expected size of the resulting cascade is maximized, under the standard independent cascade model of network diffusion. Runtime is a primary consideration for this problem due to the massive size of the relevant input networks. We provide a fast algorithm for the influence maximization problem, obtaining the near-optimal approximation factor of (1 - 1/e - ε), for any ε > 0, in time O((m + n) ε^(-3) log n). Our algorithm is runtime-optimal (up to a logarithmic factor) and substantially improves upon the previously best-known algorithms, which run in time Ω(mnk · poly(ε^(-1))). Furthermore, our algorithm can be modified to allow early termination: if it is terminated after O(β(m + n) log n) steps for some β < 1 (which can depend on n), then it returns a solution with approximation factor O(β). Finally, we show that this runtime is optimal (up to logarithmic factors) for any β and fixed seed size k. Influence maximization is a hard combinatorial optimization problem.
It requires the identification of an optimum set of k network vertices that triggers the activation of a maximum total number of remaining network nodes with respect to a chosen propagation model. The problem is appealing because it is provably hard and has a number of practical applications in domains such as data mining and social network analysis. Although there are many exact and heuristic algorithms for influence maximization, it has been tackled by metaheuristic and evolutionary methods as well. This paper presents and evaluates a new evolutionary method for influence maximization that employs a recent genetic algorithm for fixed-length subset selection. The algorithm is extended with a guiding concept that prevents the selection of infeasible vertices, reduces the search space, and effectively improves the evolutionary procedure. | Abstract of query paper | Cite abstracts
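Two ingredients recur throughout these rows: estimating spread under the independent cascade model by Monte Carlo simulation, and cheap degree-based seed heuristics. The illustrative Python sketch below (not the authors' code) shows both; the propagation probability p and the number of runs are arbitrary choices.

    import random
    import networkx as nx

    def ic_spread(G, seeds, p=0.01, runs=200):
        # Expected spread under independent cascade: each newly active node
        # tries once to activate each inactive neighbour with probability p.
        total = 0
        for _ in range(runs):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                new = [w for v in frontier for w in G.neighbors(v)
                       if w not in active and random.random() < p]
                active.update(new)
                frontier = new
            total += len(active)
        return total / runs

    def degree_discount(G, k, p=0.01):
        # Degree-discount heuristic of the row above: neighbours already
        # chosen as seeds discount a vertex's effective degree.
        d = {v: G.degree(v) for v in G}
        dd, t, seeds = dict(d), {v: 0 for v in G}, []
        for _ in range(k):
            u = max((v for v in G if v not in seeds), key=dd.get)
            seeds.append(u)
            for w in G.neighbors(u):
                if w not in seeds:
                    t[w] += 1
                    dd[w] = d[w] - 2 * t[w] - (d[w] - t[w]) * t[w] * p
        return seeds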
30355 | 30354 | The problem of influence maximization is to select the most influential individuals in a social network. With the popularity of social network sites, and the development of viral marketing, the importance of the problem has increased. The influence maximization problem is NP-hard, and therefore no polynomial-time algorithm exists for it unless P=NP. Many heuristics have been proposed to find a reasonably good solution in a shorter time. In this paper, we propose two heuristic algorithms to find good solutions. The heuristics are based on two ideas: (1) vertices of high degree have more influence in the network, and (2) nearby vertices influence almost analogous sets of vertices. We evaluate our algorithms on several well-known data sets and show that our heuristics achieve better results (up to @math in influence spread) for this problem in a shorter time (up to @math improvement in the running time). | Online social networks today are an effective medium to share and disperse tons of information, especially for advertising and marketing. However, with limited budgets, commercial companies make great efforts to determine a set of source persons who can widely diffuse information about their products, implying that more benefits will be received. In this paper, we propose an algorithm, called the community centrality-based greedy algorithm, for the problem of finding top-k influencers in social networks. The algorithm is composed of four main processes. First, a social network is partitioned into communities using the Markov clustering algorithm. Second, nodes with the highest centrality values are extracted from each community. Third, some communities are combined; and last, top-k influencers are determined from the set of highest-centrality nodes based on the independent cascade model. We conduct experiments on the publicly available Higgs Twitter dataset. Experimental results show that the proposed algorithm executes much faster than the state-of-the-art greedy one, while still achieving nearly the same influence spread. With the proliferation of mobile devices and wireless technologies, mobile social network systems are increasingly available. A mobile social network plays an essential role in the spread of information and influence in the form of "word-of-mouth". It is a fundamental issue to find a subset of influential individuals in a mobile social network such that targeting them initially (e.g., to adopt a new product) will maximize the spread of the influence (further adoptions of the new product). The problem of finding the most influential nodes is unfortunately NP-hard. It has been shown that a Greedy algorithm with provable approximation guarantees can give a good approximation; however, it is computationally expensive, if not prohibitive, to run the greedy algorithm on a large mobile social network. In this paper, a divide-and-conquer strategy with a parallel computing mechanism is adopted. We first propose an algorithm called the Community-based Greedy algorithm for mining top-K influential nodes. It encompasses two components: dividing the large-scale mobile social network into several communities by taking into account information diffusion, and selecting communities to find influential nodes by dynamic programming. Then, to further improve the performance, we parallelize the influence propagation based on communities and consider the influence propagation across communities.
Also, we give a precision analysis to show the approximation guarantees of our models. Experiments on real large-scale mobile social networks show that the proposed methods are much faster than previous algorithms while maintaining high accuracy. Given a social graph, the problem of influence maximization is to determine a set of nodes that maximizes the spread of influence. While some recent research has studied the problem of influence maximization, these works are generally too time-consuming for practical use in a large-scale social network. In this article, we develop a new framework, community-based influence maximization (CIM), to tackle the influence maximization problem with an emphasis on the time efficiency issue. Our proposed framework, CIM, comprises three phases: (i) community detection, (ii) candidate generation, and (iii) seed selection. Specifically, phase (i) discovers the community structure of the network; phase (ii) uses the information of communities to narrow down the possible seed candidates; and phase (iii) finalizes the seed nodes from the candidate set. By exploiting the properties of the community structures, we are able to avoid overlapped information and thus efficiently select the seeds to maximize the information spread. The experimental results on both synthetic and real datasets show that the proposed CIM algorithm significantly outperforms the state-of-the-art algorithms in terms of efficiency and scalability, with almost no compromise of effectiveness. | Abstract of query paper | Cite abstracts
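A compressed sketch of the community-based pipeline shared by these papers: detect communities, keep only each community's highest-centrality vertices as candidates, then pick the final seeds from that much smaller pool. In this illustration greedy modularity maximization stands in for Markov clustering and degree stands in for centrality; both substitutions are ours, not the papers'.

    import networkx as nx
    from networkx.algorithms import community

    def community_candidates(G, k, per_community=3):
        comms = community.greedy_modularity_communities(G)
        candidates = []
        for c in comms:
            # highest-degree vertices of each community become candidates
            candidates.extend(sorted(c, key=G.degree, reverse=True)[:per_community])
        # the final k seeds could instead be chosen by simulating spread
        # (e.g. with an ic_spread estimate) over this reduced candidate set
        return sorted(candidates, key=G.degree, reverse=True)[:k]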
30356 | 30355 | We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial. | We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks . In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer. 
We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions. Under these conditions, it is also possible to construct networks that provide a geometric order of approximation for analytic target functions. The permissible activation functions include the squashing function (1 + e^(-x))^(-1) as well as a variety of radial basis functions. Our proofs are constructive. The weights and thresholds of our networks are chosen independently of the target function; we give explicit formulas for the coefficients as simple, continuous, linear functionals of the target function. We introduce a definition of nonlinear n-widths and then determine the n-widths of the unit ball of the Sobolev space W_p^r in L_q. We prove that in the sense of these widths the manifold of splines of fixed degree with n free knots is optimal for approximating functions in these Sobolev spaces. | Abstract of query paper | Cite abstracts
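As a worked illustration of the regularization-network form described above, one Gaussian basis function per data point with coefficients from a regularized linear solve, here is a small numpy sketch; the bandwidth gamma and the regularization lam are illustrative choices, not values from the papers.

    import numpy as np

    def rbf_network(X, y, lam=1e-3, gamma=1.0):
        # Fit f(x) = sum_i c_i exp(-gamma ||x - x_i||^2); the coefficients
        # solve the regularized linear system (K + lam I) c = y.
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        c = np.linalg.solve(np.exp(-gamma * sq) + lam * np.eye(len(X)), y)
        def f(Z):
            sqz = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * sqz) @ c
        return f

    X = np.random.randn(40, 2)
    y = np.sin(2 * X[:, 0])
    print(rbf_network(X, y)(X[:3]), y[:3])   # near-interpolation on train points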
30357 | 30356 | We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial. | We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximations (beyond simply computing an integral), with results in L2- and L∞-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses. We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least quadratic in the number of observations n, i.e., O(n^2). Low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to O(p^2 n), where p is the rank of the approximation. The practicality of such methods thus depends on the required rank p.
In this paper, we show that in the context of kernel ridge regression, for approximations based on a random subset of columns of the original kernel matrix, the rank p may be chosen to be linear in the degrees of freedom associated with the problem, a quantity which is classically used in the statistical analysis of such methods, and is often seen as the implicit number of parameters of non-parametric estimators. This result enables simple algorithms that have sub-quadratic running time complexity, but provably exhibit the same predictive performance as existing algorithms, for any given problem instance, and not only for worst-case situations. We study the generalization properties of ridge regression with random features in the statistical learning framework. We show for the first time that @math learning bounds can be achieved with only @math random features rather than @math as suggested by previous results. Further, we prove faster learning rates and show that they might require more random features, unless they are sampled according to a possibly problem dependent distribution. Our results shed light on the statistical computational trade-offs in large scale kernelized learning, showing the potential effectiveness of random features in reducing the computational complexity while keeping optimal generalization properties. 1 Theory.- 2 RKHS and Stochastic Processes.- 3 Nonparametric Curve Estimation.- 4 Measures and Random Measures.- 5 Miscellaneous Applications.- 6 Computational Aspects.- 7 A Collection of Examples.- to Sobolev spaces.- A.1 Schwartz-distributions or generalized functions.- A.1.1 Spaces and their topology.- A.1.2 Weak-derivative or derivative in the sense of distributions.- A.1.3 Facts about Fourier transforms.- A.2 Sobolev spaces.- A.2.1 Absolute continuity of functions of one variable.- A.2.2 Sobolev space with non-negative integer exponent.- A.2.3 Sobolev space with real exponent.- A.2.4 Periodic Sobolev space.- A.3 Beppo-Levi spaces. One approach to improving the running time of kernel-based methods is to build a small sketch of the kernel matrix and use it in lieu of the full matrix in the machine learning task of interest. Here, we describe a version of this approach that comes with running time guarantees as well as improved guarantees on its statistical performance. By extending the notion of statistical leverage scores to the setting of kernel ridge regression, we are able to identify a sampling distribution that reduces the size of the sketch (i.e., the required number of columns to be sampled) to the effective dimensionality of the problem. This latter quantity is often much smaller than previous bounds that depend on the maximal degrees of freedom. We give empirical evidence supporting this fact. Our second contribution is to present a fast algorithm to quickly compute coarse approximations to these scores in time linear in the number of samples. More precisely, the running time of the algorithm is O(np^2) with p only depending on the trace of the kernel matrix and the regularization parameter. This is obtained via a variant of squared length sampling that we adapt to the kernel setting. Lastly, we discuss how this new notion of the leverage of a data point captures a fine notion of the difficulty of the learning problem. | Abstract of query paper | Cite abstracts
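A minimal numpy sketch of the column-sampling scheme these abstracts analyze: kernel ridge regression restricted to a random subset of p columns of the kernel matrix, so that only a p x p system is solved. The penalized normal equations below follow one common Nyström formulation; uniform sampling is used where the leverage-score papers would sample non-uniformly, and the jitter term is our addition for numerical stability.

    import numpy as np

    def nystrom_krr(K, y, p, lam=1e-2, rng=np.random.default_rng(0)):
        n = len(K)
        idx = rng.choice(n, size=p, replace=False)   # p sampled columns
        Knp, Kpp = K[:, idx], K[np.ix_(idx, idx)]
        # minimize ||y - Knp a||^2 + lam * n * a^T Kpp a
        A = Knp.T @ Knp + lam * n * Kpp + 1e-10 * np.eye(p)
        a = np.linalg.solve(A, Knp.T @ y)
        return idx, a      # prediction at x: k(x, X[idx]) @ a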
30358 | 30357 | We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial. | We show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels, for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximations (beyond simply computing an integral), with results in L2- and L∞-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses. We introduce a definition of nonlinear n-widths and then determine the n-widths of the unit ball of the Sobolev space W_p^r in L_q. We prove that in the sense of these widths the manifold of splines of fixed degree with n free knots is optimal for approximating functions in these Sobolev spaces. | Abstract of query paper | Cite abstracts
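The RF model in the query abstract is exactly this recipe: freeze N random first-layer weights and train only the second layer, which reduces to a ridge-regression solve over the random feature matrix. A minimal numpy sketch (our illustration, with arbitrary scalings):

    import numpy as np

    def random_feature_ridge(X, y, N=500, lam=1e-3, rng=np.random.default_rng(0)):
        d = X.shape[1]
        W = rng.normal(size=(N, d)) / np.sqrt(d)   # frozen random first layer
        Phi = np.maximum(X @ W.T, 0.0)             # n x N ReLU feature matrix
        a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
        return lambda Z: np.maximum(Z @ W.T, 0.0) @ a

As N grows with the dimension fixed, this predictor approaches kernel ridge regression with the expected kernel of the random features, which is the universal-approximation regime the query abstract contrasts with the large-d, large-N regime.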
30359 | 30358 | We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial. | A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions. Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. 
As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss at a linear convergence rate, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet). We show that learning methods interpolating the training data can achieve optimal rates for the problems of nonparametric regression and prediction with square loss. One of the mysteries in the success of neural networks is that randomly initialized first-order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an @math hidden node shallow neural network with ReLU activation and @math training data, we show that as long as @math is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first-order methods. We study the problem of training deep neural networks with Rectified Linear Unit (ReLU) activation function using gradient descent and stochastic gradient descent. In particular, we study the binary classification problem and show that for a broad family of loss functions, with proper random weight initialization, both gradient descent and stochastic gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under mild assumptions on the training data. The key idea of our proof is that Gaussian random initialization followed by (stochastic) gradient descent produces a sequence of iterates that stay inside a small perturbation region centered around the initial weights, in which the empirical loss function of deep ReLU networks enjoys nice local curvature properties that ensure the global convergence of (stochastic) gradient descent. Our theoretical results shed light on understanding the optimization for deep learning, and pave the way for studying the optimization dynamics of training modern deep neural networks. Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm.
We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result. Generalization performance of classifiers in deep learning has recently become a subject of intense study. Heavily over-parametrized deep models tend to fit training data exactly. Despite overfitting, they perform well on test data, a phenomenon not yet fully understood. The first point of our paper is that strong performance of overfitted classifiers is not a unique feature of deep learning. Using real-world and synthetic datasets, we establish that kernel classifiers trained to have zero classification error (overfitting) or even zero regression error (interpolation) perform very well on test data. We proceed to prove lower bounds on the norm of overfitted solutions for smooth kernels, showing that they increase nearly exponentially with the data size. Since most generalization bounds depend polynomially on the norm of the solution, this result implies that they diverge as data increases. Furthermore, the existing bounds do not apply to interpolated classifiers. We also show experimentally that (non-smooth) Laplacian kernels easily fit random labels using a version of SGD, a finding that parallels results reported for ReLU neural networks. In contrast, fitting noisy data requires many more epochs for smooth Gaussian kernels. The observation that the performance of overfitted Laplacian and Gaussian classifiers on the test is quite similar, suggests that generalization is tied to the properties of the kernel function rather than the optimization process. We see that some key phenomena of deep learning are manifested similarly in kernel methods in the overfitted regime. We argue that progress on understanding deep learning will be difficult, until more analytically tractable "shallow" kernel methods are better understood. The experimental and theoretical results presented in this paper indicate a need for new theoretical ideas for understanding classical kernel methods. | Abstract of query paper | Cite abstracts |
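The NT (linearized) model underlying these convergence results can be written down directly for a two-layer ReLU network: the features are the network's parameter-gradients at its random initialization, and training the linearization is again linear regression. A small numpy sketch of those features (our illustration; the 1/sqrt(N) scaling follows the usual NTK convention):

    import numpy as np

    def nt_features(X, W, a):
        # For f(x) = (1/sqrt(N)) * sum_i a_i * relu(w_i . x), the gradient
        # w.r.t. w_i at initialization is (1/sqrt(N)) * a_i * 1[w_i . x > 0] * x.
        N = W.shape[0]
        act = (X @ W.T > 0).astype(float)                   # n x N indicators
        feats = (act * a)[:, :, None] * X[:, None, :]       # n x N x d blocks
        return feats.reshape(len(X), -1) / np.sqrt(N)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    W, a = rng.normal(size=(100, 10)), rng.choice([-1.0, 1.0], size=100)
    Phi = nt_features(X, W, a)    # then fit y by (ridge) regression on Phi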
30360 | 30359 | We consider the problem of learning an unknown function @math on the @math -dimensional sphere with respect to the square loss, given i.i.d. samples @math where @math is a feature vector uniformly distributed on the sphere and @math . We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: (RF) The random feature model of Rahimi-Recht; (NT) The neural tangent kernel model of Jacot-Gabriel-Hongler. Both these approaches can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and hence enjoy universal approximation properties when the number of neurons @math diverges, for a fixed dimension @math . We prove that, if both @math and @math are large, the behavior of these models is instead remarkably simpler. If @math , then RF performs no better than linear regression with respect to the raw features @math , and NT performs no better than linear regression with respect to degree-one and two monomials in the @math . More generally, if @math then RF fits at most a degree- @math polynomial in the raw features, and NT fits at most a degree- @math polynomial. | Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number @math of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of @math . We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as @math . Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train a neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math . We consider learning two-layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions.
Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that—in a suitable scaling limit—SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for “averaging out” some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD. Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis, and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called "propagation of chaos". | Abstract of query paper | Cite abstracts |
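A toy simulation of the dynamics these papers analyze: a two-layer network with mean-field 1/N scaling trained by SGD, so that the neurons behave as interacting particles whose empirical distribution follows the limiting PDE. This is an illustrative sketch under our own choices of step size and initialization, not the papers' experiments; the learning rate is on the particle-level (N-accelerated) time scale.

    import numpy as np

    def meanfield_sgd(X, y, N=1000, steps=20000, lr=0.2, rng=np.random.default_rng(0)):
        # f(x) = (1/N) * sum_i a_i * relu(w_i . x)
        W = rng.normal(size=(N, X.shape[1]))
        a = rng.normal(size=N)
        for _ in range(steps):
            j = rng.integers(len(X))
            x, target = X[j], y[j]
            pre = W @ x
            h = np.maximum(pre, 0.0)
            err = a @ h / N - target
            # particle-scaled gradient steps (N times the raw parameter gradient)
            W -= lr * err * np.outer(a * (pre > 0), x)
            a -= lr * err * h
        return W, a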
30361 | 30360 | The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks. | To implement a program functionality, developers can reuse previously written code snippets by searching through a large-scale codebase. Over the years, many code search tools have been proposed to help developers. The existing approaches often treat source code as textual documents and utilize information retrieval models to retrieve relevant code snippets that match a given query. These approaches mainly rely on the textual similarity between source code and natural language query. They lack a deep understanding of the semantics of queries and source code. In this paper, we propose a novel deep neural network named CODEnn (Code-Description Embedding Neural Network). Instead of matching text similarity, CODEnn jointly embeds code snippets and natural language descriptions into a high-dimensional vector space, in such a way that a code snippet and its corresponding description have similar vectors. Using the unified vector representation, code snippets related to a natural language query can be retrieved according to their vectors. Semantically related words can also be recognized and irrelevant noisy keywords in queries can be handled. As a proof-of-concept application, we implement a code search tool named DeepCS using the proposed CODEnn model. We empirically evaluate DeepCS on a large-scale codebase collected from GitHub. The experimental results show that our approach can effectively retrieve relevant code snippets and outperforms previous techniques. | Abstract of query paper | Cite abstracts
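Once code and descriptions live in one vector space, search is nearest-neighbour retrieval by cosine similarity. A minimal Python sketch of that final step (the embedding models themselves are assumed trained elsewhere):

    import numpy as np

    def search(code_vecs, query_vec, top_k=5):
        # rank snippets by cosine similarity between their embeddings
        # and the embedded natural-language query
        C = code_vecs / np.linalg.norm(code_vecs, axis=1, keepdims=True)
        q = query_vec / np.linalg.norm(query_vec)
        scores = C @ q
        order = np.argsort(-scores)[:top_k]
        return order, scores[order]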
30362 | 30361 | The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks. | Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93% of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers. | Abstract of query paper | Cite abstracts
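The learning-based direction above, like the Siamese model of the query paper, can be caricatured in a few lines of PyTorch: two weight-shared encoders map token sequences to vectors, and a pair is scored by embedding similarity. Vocabulary and dimensions are illustrative, and this is a sketch of the general recipe rather than either paper's architecture.

    import torch
    import torch.nn as nn

    class SiameseEncoder(nn.Module):
        def __init__(self, vocab=5000, dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)

        def encode(self, tokens):            # tokens: (batch, seq_len) int ids
            _, (h, _) = self.rnn(self.emb(tokens))
            return h[-1]                     # final hidden state as embedding

        def forward(self, a, b):             # clone score in [-1, 1]
            return nn.functional.cosine_similarity(self.encode(a), self.encode(b))

Training would push embeddings of clone pairs together and non-clones apart, for example with a contrastive or margin loss.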
30363 | 30362 | The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks. | Deep neural networks have made significant breakthroughs in many fields of artificial intelligence. However, it has not been applied in the field of programming language processing. In this paper, we propose the tree-based convolutional neural network (TBCNN) to model programming languages, which contain rich and explicit tree structural information. In our model, program vector representations are learned by the "coding" pretraining criterion based on abstract syntax trees (ASTs); the convolutional layer explicitly captures neighboring features on the tree; with the "binary continuous tree" and "3-way pooling," our model can deal with ASTs of different shapes and sizes.We evaluate the program vector representations empirically, showing that the coding criterion successfully captures underlying features of AST nodes, and that program vector representations significantly speed up supervised learning. We also compare TBCNN to baseline methods; our model achieves better accuracy in the task of program classification. To our best knowledge, this paper is the first to analyze programs with deep neural networks; we extend the scope of deep learning to the field of programming language processing. The experimental results validate its feasibility; they also show a promising future of this new research area. | Abstract of query paper | Cite abstracts |
30364 | 30363 | The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks. | We present a neural model for representing snippets of code as continuous distributed vectors ("code embeddings"). The main idea is to represent a code snippet as a single fixed-length code vector, which can be used to predict semantic properties of the snippet. To this end, code is first decomposed into a collection of paths in its abstract syntax tree. Then, the network learns the atomic representation of each path while simultaneously learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 12M methods. We show that code vectors trained on this dataset can predict method names from files that were unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. A comparison of our approach to previous techniques over the same dataset shows an improvement of more than 75%, making it the first to successfully predict method names based on a large, cross-project corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at http://code2vec.org. The code, data and trained models are available at https://github.com/tech-srl/code2vec. Source code clones are categorized into four types of increasing difficulty of detection, ranging from purely textual (Type-1) to purely semantic (Type-4). Most clone detectors reported in the literature work well up to Type-3, which accounts for syntactic differences. In between Type-3 and Type-4, however, there lies a spectrum of clones that, although still exhibiting some syntactic similarities, are extremely hard to detect – the Twilight Zone. Most clone detectors reported in the literature fail to operate in this zone. We present Oreo, a novel approach to source code clone detection that not only detects Type-1 to Type-3 clones accurately, but is also capable of detecting harder-to-detect clones in the Twilight Zone. Oreo is built using a combination of machine learning, information retrieval, and software metrics. We evaluate the recall of Oreo on BigCloneBench, and perform manual evaluation for precision. Oreo has both high recall and precision. More importantly, it pushes the boundary in detection of clones with moderate to weak syntactic similarity in a scalable manner. Defects are common in software systems and can potentially cause various problems to software users. Different methods have been developed to quickly predict the most likely locations of defects in large code bases. Most of them focus on designing features (e.g. complexity metrics) that correlate with potentially defective code.
Those approaches, however, do not sufficiently capture the syntax and different levels of semantics of source code, an important capability for building accurate prediction models. In this paper, we develop a novel prediction model which is capable of automatically learning features for representing source code and using them for defect prediction. Our prediction system is built upon a powerful deep learning model, the tree-structured Long Short-Term Memory network, which directly matches the Abstract Syntax Tree representation of source code. An evaluation on two datasets, one from open-source projects contributed by Samsung and the other from the public PROMISE repository, demonstrates the effectiveness of our approach for both within-project and cross-project predictions. | Abstract of query paper | Cite abstracts
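The path-based decomposition that code2vec starts from is easy to reproduce for Python with the standard ast module. The simplified sketch below collects root-to-leaf node-type paths rather than the paper's leaf-to-leaf path contexts, so it illustrates the idea rather than reimplementing it.

    import ast

    def leaf_paths(tree):
        paths = []
        def walk(node, prefix):
            prefix = prefix + [type(node).__name__]
            children = list(ast.iter_child_nodes(node))
            if not children:
                paths.append(prefix)          # reached a leaf of the AST
            for c in children:
                walk(c, prefix)
        walk(tree, [])
        return paths

    src = "def add(a, b):\n    return a + b\n"
    for p in leaf_paths(ast.parse(src))[:3]:
        print("^".join(p))                    # e.g. Module^FunctionDef^arguments^arg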
30365 | 30364 | The abundance of open-source code, coupled with the success of recent advances in deep learning for natural language processing, has given rise to a promising new application of machine learning to source code. In this work, we explore the use of a Siamese recurrent neural network model on Python source code to create vectors which capture the semantics of code. We evaluate the quality of embeddings by identifying which problem from a programming competition the code solves. Our model significantly outperforms a bag-of-tokens embedding, providing promising results for improving code embeddings that can be used in future software engineering tasks. | During software maintenance, code comments help developers comprehend programs and reduce additional time spent on reading and navigating source code. Unfortunately, these comments are often mismatched, missing or outdated in software projects. Developers have to infer the functionality from the source code. This paper proposes a new approach named DeepCom to automatically generate code comments for Java methods. The generated comments aim to help developers understand the functionality of Java methods. DeepCom applies Natural Language Processing (NLP) techniques to learn from a large code corpus and generates comments from learned features. We use a deep neural network that analyzes structural information of Java methods for better comment generation. We conduct experiments on a large-scale Java corpus built from 9,714 open-source projects from GitHub. We evaluate the experimental results using a machine translation metric. Experimental results demonstrate that our method DeepCom outperforms the state-of-the-art by a substantial margin. | Abstract of query paper | Cite abstracts
30366 | 30365 | As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize GIF's unique properties, and this potentially limits the recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability. | Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations - the first computational evaluation of its kind for content-based prediction on animated GIFs to our knowledge. In addition, we advocate a conceptual paradigm in emotion prediction that shows that delineating distinct types of emotion is important and that it is useful to be concrete about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetics, semantic and face features. We also formulate a multi-task regression problem to evaluate whether viewer perceived emotion prediction can benefit from jointly learning across emotion classes compared to disjoint, independent learning. Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored. Existing GIF datasets with emotion labels are too small for training contemporary machine learning models, so we propose a semi-automatic method to collect emotional animated GIFs from the Internet with the least amount of human labor. The method trains weak emotion recognizers on labeled data, and uses them to sort a large quantity of unlabeled GIFs. We found that by exploiting the clustered structure of emotions, the number of GIFs a labeler needs to check can be greatly reduced. Using the proposed method, a dataset called GIFGIF+ with 23,544 GIFs over 17 emotions was created, which provides a promising platform for affective computing research. Animated GIFs are widely used on the Internet to express emotions, but their automatic analysis is largely unexplored. To help with the search and recommendation of GIFs, we aim to predict their emotions perceived by humans based on their contents. Since previous solutions to this problem only utilize image-based features and lose all the motion information, we propose to use 3D convolutional neural networks (CNNs) to extract spatiotemporal features from GIFs.
We evaluate our methodology on a crowd-sourcing platform called GIFGIF with more than 6,000 animated GIFs, and achieve better accuracy than any previous approach in predicting crowd-sourced intensity scores of 17 emotions. We also find that our trained model can be used to distinguish and cluster emotions in terms of valence and risk perception. Animated GIFs have been around since 1987 and have recently gained popularity on social networking sites. Tumblr, a large social networking and microblogging platform, is a popular venue for sharing animated GIFs. Tumblr users follow blogs, generating a feed of posts, and choose to "like" or "reblog" favored posts. In this paper, we use these actions as signals to analyze the engagement of over 3.9 million posts, and conclude that animated GIFs are significantly more engaging than other kinds of media. We follow this finding with a deeper visual analysis of nearly 100k animated GIFs and pair our results with interviews with 13 Tumblr users to find out what makes animated GIFs engaging. We found that the animation, lack of sound, immediacy of consumption, low bandwidth and minimal time demands, storytelling capabilities, and utility for expressing emotions were significant factors in making GIFs the most engaging content on Tumblr. We also found that engaging GIFs contained faces and had higher motion energy, uniformity, resolution and frame rate. Our findings connect to media theories and have implications for the design of effective content dashboards, video summarization tools and ranking algorithms to enhance engagement. We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping videos with no sound, a combination of image and video that readily captures our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user-generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as GIFs. We train our model to learn what visual content is often selected for GIFs by using over 100K user-generated GIFs and their corresponding video sources. We effectively deal with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns that are frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we show the advantage of our approach over several state-of-the-art methods. | Abstract of query paper | Cite abstracts
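One way to read the hierarchical segment temporal module in the KAVAN record above: frame-level features are split into contiguous segments, a segment-level LSTM summarizes each segment, and a global LSTM consumes the segment summaries to produce the GIF representation. The sketch below is an interpretation under assumed dimensions and segment counts, not the authors' released code.

```python
import torch
import torch.nn as nn

class HierarchicalSegmentLSTM(nn.Module):
    """Two-level temporal model: per-segment LSTMs feed a global LSTM."""
    def __init__(self, feat_dim=512, seg_hidden=256,
                 glob_hidden=256, num_segments=4):
        super().__init__()
        self.num_segments = num_segments
        self.segment_lstm = nn.LSTM(feat_dim, seg_hidden, batch_first=True)
        self.global_lstm = nn.LSTM(seg_hidden, glob_hidden, batch_first=True)

    def forward(self, frame_feats):               # (batch, T, feat_dim)
        # Assumes T >= num_segments; trailing frames beyond an even
        # split are dropped for simplicity.
        b, t, d = frame_feats.shape
        seg_len = t // self.num_segments
        summaries = []
        for s in range(self.num_segments):
            chunk = frame_feats[:, s * seg_len:(s + 1) * seg_len]
            _, (h, _) = self.segment_lstm(chunk)  # h: (1, batch, seg_hidden)
            summaries.append(h.squeeze(0))
        seg_seq = torch.stack(summaries, dim=1)   # (batch, segments, hidden)
        _, (hg, _) = self.global_lstm(seg_seq)
        return hg.squeeze(0)                      # global GIF representation
```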
30367 | 30366 | As an intuitive way of expressing emotion, animated Graphical Interchange Format (GIF) images have been widely used on social media. Most previous studies on automated GIF emotion recognition fail to effectively utilize the unique properties of GIFs, and this potentially limits recognition performance. In this study, we demonstrate the importance of human-related information in GIFs and conduct human-centered GIF emotion recognition with a proposed Keypoint Attended Visual Attention Network (KAVAN). The framework consists of a facial attention module and a hierarchical segment temporal module. The facial attention module exploits the strong relationship between GIF contents and human characters, and extracts frame-level visual features with a focus on human faces. The Hierarchical Segment LSTM (HS-LSTM) module is then proposed to better learn global GIF representations. Our proposed framework outperforms the state-of-the-art on the MIT GIFGIF dataset. Furthermore, the facial attention module provides reliable facial region mask predictions, which improves the model's interpretability. | Over the last decade, automatic emotion recognition has become well established. The gold-standard target is typically calculated from multiple annotations by different raters. All related efforts assume that the emotional state of a human subject can be identified by a 'hard' category or a unique value. This assumption attempts to smooth over the subjectivity of human observers judging patterns such as the emotional states of others. However, as the number of annotators cannot be infinite, uncertainty remains in the emotion target even when it is calculated from several human annotators. The common procedure of using this same emotion target in the learning process thus inevitably introduces noise in the form of an uncertain learning target. In this light, we propose a 'soft' prediction framework to provide a more human-like and comprehensive prediction of emotion. In our novel framework, we provide an additional target to indicate the uncertainty of human perception based on the inter-rater disagreement level, in contrast to the traditional framework, which merely produces a single prediction (category or value). To exploit the dependency between the emotional state and the newly introduced perception uncertainty, we implement a multi-task learning strategy. To evaluate the feasibility and effectiveness of the proposed soft prediction framework, we perform extensive experiments on a time- and value-continuous spontaneous audiovisual emotion database, including late-fusion results. We show that the soft prediction framework with multi-task learning of the emotional state and its perception uncertainty significantly outperforms the individual tasks in both the arousal and valence dimensions. Existing image emotion recognition work has mainly classified images into one dominant emotion category, or regressed average dimension values, under the assumption that the emotions perceived by different viewers largely agree with each other. However, due to the influence of various personal and situational factors, such as cultural background and social interactions, different viewers may react quite differently to the same image. In this paper, we propose to formulate the image emotion recognition task as a probability distribution learning problem.
Motivated by the fact that image emotions can be conveyed through different visual features, such as aesthetics and semantics, we present a novel framework that fuses multi-modal features to tackle this problem. In detail, a weighted multi-modal conditional probability neural network (WMMCPNN) is designed as the learning model to associate the visual features with emotion probabilities. By jointly exploring the complementarity and learning the optimal combination coefficients of the different modality features, WMMCPNN can effectively utilize the representation ability of each uni-modal feature. We conduct extensive experiments on three publicly available benchmarks and the results demonstrate that the proposed method significantly outperforms state-of-the-art approaches for emotion distribution prediction. In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features, from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations, without taking all factors into consideration. The proposed MldrNet combines deep representations of different levels, i.e. image semantics, image aesthetics, and low-level visual features, to effectively classify the emotion types of different kinds of images, such as abstract paintings and web images. Extensive experiments on both Internet images and abstract paintings demonstrate that the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features, with at least a 6% improvement in overall classification accuracy. Previous works on image emotion analysis mainly focused on predicting the dominant emotion category or the average dimension values of an image for affective image classification and regression. However, this is often insufficient in various real-world applications, as the emotions an image evokes in viewers are highly subjective and vary across viewers. In this paper, we propose to predict the continuous probability distribution of image emotions represented in the dimensional valence-arousal space. We carried out large-scale statistical analysis on the constructed Image-Emotion-Social-Net dataset, on which we observed that the emotion distribution can be well modeled by a Gaussian mixture model. This model is estimated by an expectation-maximization algorithm with specified initializations. Then, we extract commonly used emotion features at different levels for each image. Finally, we formalize the emotion distribution prediction task as a shared sparse regression (SSR) problem and extend it to multitask settings, named multitask shared sparse regression (MTSSR), to explore the latent information between different prediction tasks. SSR and MTSSR are optimized by iteratively reweighted least squares. Experiments are conducted on the Image-Emotion-Social-Net dataset with comparisons to three alternative baselines. The quantitative results demonstrate the superiority of the proposed method. Emotions can be evoked in humans by images. Most previous works on image emotion analysis mainly used elements-of-art-based low-level visual features.
However, these features are vulnerable and not invariant to different arrangements of elements. In this paper, we investigate the concept of principles-of-art and its influence on image emotions. Principles-of-art-based emotion features (PAEF) are extracted to classify and score image emotions in order to understand the relationship between artistic principles and emotions. PAEF are the unified combination of representation features derived from different principles, including balance, emphasis, harmony, variety, gradation, and movement. Experiments on the International Affective Picture System (IAPS), a set of artistic photography and a set of peer-rated abstract paintings demonstrate the superiority of PAEF for affective image classification and regression (about a 5% improvement in classification accuracy and a 0.2 decrease in mean squared error), as compared to the state-of-the-art approaches. We then utilize PAEF to analyze the emotions of master paintings, with promising results. Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations - the first computational evaluation of its kind for content-based prediction on animated GIFs to our knowledge. In addition, we advocate a conceptual paradigm in emotion prediction which holds that delineating distinct types of emotion is important and that it is useful to be concrete about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetic, semantic and face features. We also formulate a multi-task regression problem to evaluate whether viewer-perceived emotion prediction can benefit from jointly learning across emotion classes compared to disjoint, independent learning. Psychological research has confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people's emotional reactions towards images. To this end, different kinds of hand-tuned features have been proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer-vision-related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using state-of-the-art methods, including CNNs. | Abstract of query paper | Cite abstracts
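Several of the records above cast emotion recognition as learning a label distribution rather than a single category. A minimal sketch of that shared training signal follows: the predicted distribution is matched to the empirical distribution of annotator votes via a KL-divergence loss. The 17-class setup mirrors the GIFGIF-style label set; the feature dimension and the bare linear head are placeholders for whatever backbone produces the features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EMOTIONS = 17  # e.g. the GIFGIF label set

# Placeholder prediction head; a real system would put this on top of a
# visual feature extractor (CNN, LSTM, etc.).
head = nn.Linear(512, NUM_EMOTIONS)

def distribution_loss(features, annotator_counts):
    """KL divergence between predicted and empirical emotion distributions.
    `annotator_counts`: float tensor (batch, 17) of raw rater vote counts."""
    target = annotator_counts / annotator_counts.sum(dim=1, keepdim=True)
    log_pred = F.log_softmax(head(features), dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```

Normalizing vote counts into a target distribution is what distinguishes this from ordinary cross-entropy on a single "dominant" label: disagreement among raters is preserved as a soft target rather than discarded.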
30368 | 30367 | Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts, in the form of MR pulse sequences, in a single scan provides valuable insights to physicians, as well as enabling automated systems that perform downstream analysis. However, many issues, such as prohibitive scan time, image corruption, differing acquisition protocols, or allergies to certain contrast materials, may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems, since the complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets, each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively. | Magnetic Resonance Angiography (MRA) has become an essential MR contrast for imaging and evaluating vascular anatomy and related diseases. MRA acquisitions are typically ordered for vascular interventions, yet in many scenarios MRA sequences are absent from patient scans. This motivates the need for a technique that generates the missing MRA from the existing MR multi-contrast images, which could be a valuable tool in retrospective subject evaluations and imaging studies. We present a generative adversarial network (GAN) based technique to generate MRA from T1- and T2-weighted MRI images, for the first time to our knowledge. To better model the representation of vessels, which MRA inherently highlights, we design a loss term dedicated to the faithful reproduction of vascularities. To that end, we incorporate steerable filter responses of the generated and reference images as a loss term. Extending the well-established generator-discriminator architecture based on the recent PatchGAN model with the addition of the steerable filter loss, the proposed steerable GAN (sGAN) method is evaluated on the large public database IXI. Experimental results show that the sGAN outperforms the baseline GAN method in terms of an overlap score with similar PSNR values, while leading to improved visual perceptual quality. We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder.
We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that, by incorporating information from segmentation masks, the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single-input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non-skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis. Fluid Attenuated Inversion Recovery (FLAIR) is a commonly acquired pulse sequence for multiple sclerosis (MS) patients. MS white matter lesions appear hyperintense in FLAIR images and have excellent contrast with the surrounding tissue. Hence, FLAIR images are commonly used in automated lesion segmentation algorithms to easily and quickly delineate the lesions. This expedites lesion load computation and correlation with disease progression. Unfortunately, for numerous reasons, the acquired FLAIR images can be of poor quality and suffer from various artifacts. In the most extreme cases the data is absent, which poses a problem when consistently processing a large data set. We propose to fill in this gap by reconstructing a FLAIR image, given the corresponding T1-weighted, T2-weighted, and PD-weighted images of the same subject, using random forest regression. We show that the images we produce are similar to true high-quality FLAIR images and also provide a good surrogate for tissue segmentation. Accurate synthesis of a full 3D MR image containing tumours from the available MRI (e.g. to replace an image that is currently unavailable or corrupted) would provide a clinician, as well as downstream inference methods, with important complementary information for disease analysis. In this paper, we present an end-to-end 3D convolutional neural network that takes a set of acquired MR image sequences (e.g. T1, T2, T1ce) as input and concurrently performs (1) regression of the missing full-resolution 3D MRI (e.g. FLAIR) and (2) segmentation of the tumour into subtypes (e.g. enhancement, core). The hypothesis is that this focuses the network on performing accurate synthesis in the area of the tumour. Experiments on the BraTS 2015 and 2017 datasets [1] show that: (1) the proposed method gives better performance than state-of-the-art methods in terms of established global evaluation metrics (e.g. PSNR); (2) replacing real MR volumes with the synthesized MRI does not lead to significant degradation in tumour and sub-structure segmentation accuracy. The system further provides uncertainty estimates, based on Monte Carlo (MC) dropout [11], for the synthesized volume at each voxel, permitting quantification of the system’s confidence in the output at each location. | Abstract of query paper | Cite abstracts
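The FLAIR-reconstruction record above describes a voxelwise regression from T1/T2/PD to FLAIR intensities. A minimal sketch under simplifying assumptions follows; the paper uses richer local-context features per voxel, whereas here each voxel is represented only by its three co-registered intensities, and the forest size is a placeholder.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def synthesize_flair(t1, t2, pd, flair=None, model=None):
    """Voxelwise FLAIR synthesis from co-registered T1/T2/PD volumes.
    Pass `flair` to train and get a model back; pass a trained `model`
    to synthesize a FLAIR volume of the same shape as the inputs."""
    # One row per voxel, one column per input sequence intensity.
    X = np.stack([t1.ravel(), t2.ravel(), pd.ravel()], axis=1)
    if flair is not None:                        # training mode
        model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
        model.fit(X, flair.ravel())
        return model
    return model.predict(X).reshape(t1.shape)    # synthesis mode

# Usage sketch: rf = synthesize_flair(t1, t2, pd, flair=flair_train)
#               flair_hat = synthesize_flair(t1_new, t2_new, pd_new, model=rf)
```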
30369 | 30368 | Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts, in the form of MR pulse sequences, in a single scan provides valuable insights to physicians, as well as enabling automated systems that perform downstream analysis. However, many issues, such as prohibitive scan time, image corruption, differing acquisition protocols, or allergies to certain contrast materials, may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems, since the complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences, implicitly infers which sequences are missing, and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets, each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively. | In a research context, image acquisition will often involve a pre-defined static protocol and the data will be of high quality. If we are to build applications that work in hospitals without significant operational changes in care delivery, algorithms should be designed to cope with the available data in the best possible way. In a clinical environment, imaging protocols are highly flexible, with MRI sequences commonly missing appropriate sequence labeling (e.g. T1, T2, FLAIR). To this end, we introduce PIMMS, a Permutation Invariant Multi-Modal Segmentation technique that can perform inference over sets of MRI scans without using modality labels. We present results showing that our convolutional neural network can, in some settings, outperform a baseline model that utilizes modality labels, and achieves comparable performance otherwise. We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches. | Abstract of query paper | Cite abstracts
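Both cited abstracts in the record above rest on the same trick: encode each available modality separately into a common latent space where the mean is well defined, average the available embeddings, and decode. The toy sketch below uses single convolution layers as stand-ins for the deep encoders and decoders of the actual systems, and fuses only the mean (HeMIS-style systems also fuse the variance across modality embeddings).

```python
import torch
import torch.nn as nn

class LatentFusionSegmenter(nn.Module):
    """Each modality gets its own encoder into a shared latent space;
    available embeddings are averaged, so any non-empty subset of
    modalities works at inference time."""
    def __init__(self, num_modalities=4, latent_dim=64, num_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Conv2d(1, latent_dim, kernel_size=3, padding=1)
            for _ in range(num_modalities)])
        self.decoder = nn.Conv2d(latent_dim, num_classes, kernel_size=1)

    def forward(self, images, available):
        # images: list of (batch, 1, H, W) tensors or None entries;
        # available: list of bools marking which modalities are present.
        latents = [enc(img) for enc, img, ok in
                   zip(self.encoders, images, available) if ok]
        fused = torch.stack(latents, dim=0).mean(dim=0)  # mean in latent space
        return self.decoder(fused)

# Usage sketch with T1 and FLAIR present, T2 and T1ce missing:
#   logits = model([t1, None, flair, None], [True, False, True, False])
```

Because the fusion is a simple average, no combinatorial set of imputation models is needed: removing a modality just removes one term from the mean.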